\begin{document}
\baselineskip=15pt
\title{Relative Gorenstein dimensions over triangular matrix rings}
\author{Driss Bennis$^{1,a}$ \hskip 2cm Rachid El Maaouy$^{1,b}$ \\ \\ J. R. Garc\'{\i}a Rozas$^{2,c}$ \hskip 1.5cm Luis Oyonarte$^{2,d}$}
\date{}
\maketitle
\begin{center}
\small{1: CeReMaR Research Center, Faculty of Sciences, B.P. 1014, Mohammed V University in Rabat, Rabat, Morocco.
\noindent a: [email protected]; driss$\[email protected]
\noindent b: rachid$\[email protected]; [email protected]
2: Departamento de Matem\'{a}ticas, Universidad de Almer\'{i}a, 04071 Almer\'{i}a, Spain.
\noindent c: [email protected]
\noindent d: [email protected]}
\end{center}
\begin{abstract}
Let $A$ and $B$ be rings, $U$ a $(B,A)$-bimodule and $T=\begin{pmatrix} A&0\\U&B \end{pmatrix}$ the triangular matrix ring. In this paper, several notions in relative Gorenstein homological algebra over a triangular matrix ring are investigated.
We first study how to construct w-tilting (tilting, semidualizing) over $T$ using the corresponding ones over $A$ and $B$. We show that when $U$ is relative (weakly) compatible we are able to describe the structure of $G_C$-projective modules over $T$. As an application, we study when a morphism in $T$-Mod has a special $G_CP(T)$-precover and when the class $G_CP(T)$ is a special precovering class. In addition, we study the relative global dimension of $T$. In some cases, we show that it can be computed from the relative global dimensions of $A$ and $B$. We end the paper with a counterexample to a result that characterizes when a $T$-module has a finite projective dimension.
\end{abstract}
{\scriptsize 2020 Mathematics Subject Classification. Primary: 16D90, 18G25}
{\scriptsize Keywords: Triangular matrix ring, weakly Wakamatsu tilting modules, relative Gorenstein dimensions.}
\section{Introduction} \label{Sec:1}
Semidualizing modules were independently studied (under different names) by Foxby \cite{F}, Golod \cite{Go}, and Vasconcelos \cite{V} over a commutative Noetherian ring. Golod used these modules to study the $G_C$-dimension of finitely generated modules. Motivated (in part) by Enochs and Jenda's extensions of the classical G-dimension given in \cite{EJ}, Holm and J{\o}rgensen extended in \cite{HJ} this notion to arbitrary modules. After that, several generalizations of semidualizing modules and of the $G_C$-dimension have been made by several authors (\cite{Wh},\cite{LHX},\cite{ATY}).
As the authors mentioned in \cite{BGO1}, in order to study Gorenstein projective modules and dimensions relative to a semidualizing $(R,S)$-bimodule $C$, the condition ${\rm End}_S(C)\cong R$ seems to be too restrictive and in some cases unnecessary. So the authors introduced weakly Wakamatsu tilting modules as a weaker notion of semidualizing modules, which makes the theory of relative Gorenstein homological algebra wider and less restrictive while remaining consistent. Weakly Wakamatsu tilting modules have been the subject of many publications, which show how important these modules could become in developing the theory of relative (Gorenstein) homological algebra (\cite{BGO1},\cite{BGO2},\cite{BDGO}).
Let $A$ and $B$ be rings and $U$ be a $(B,A)$-bimodule. The ring $T=\begin{pmatrix}
A&0\\U&B
\end{pmatrix}$ is known as the formal triangular matrix ring with usual matrix addition
and multiplication. Such rings play an important role in the representation theory of algebras. The modules over such rings can be described in a very concrete fashion. So, formal triangular matrix rings and modules over them have proved to be a rich source of examples and counterexamples. Some important Gorenstein notions over formal triangular matrix rings have been studied by many authors (see \cite{Z, EIT, ZLW}). For example, Zhang \cite{Z} introduced compatible bimodules and explicitly described the Gorenstein projective modules over triangular matrix Artin algebra. Enochs, Izurdiaga, and Torrecillas \cite{EIT} characterized when a left module over a triangular matrix ring is Gorenstein projective or Gorenstein injective under the "Gorenstein regular" condition. Under the same condition, Zhu, Liu, and Wang \cite{ZLW} investigated Gorenstein homological dimensions of modules over triangular matrix rings. Mao \cite{M2} studied Gorenstein flat modules over $T$ (without the "Gorenstein regular" condition) and gave an estimate of the weak global Gorenstein dimension of $T$.
The main objective of the present paper is to study relative Gorenstein homological notions (w-tilting, relative Gorenstein projective modules, relative Gorenstein projective dimensions and relative global projective dimension) over triangular matrix rings.
This article is organized as follows:
In Section 2, we give some preliminary results.
In Section 3, we study how to construct w-tilting (tilting, semidualizing) over $T$ using w-tilting (tilting, semidualizing) over $A$ and $B$ under the condition that $U$ is relative (weakly) compatible. We introduce (weakly) $C$-compatible $(B,A)$-bimodules for a $T$-module $C$ (Definition \ref{C-compatible}). Given two w-tilting modules $_AC_1$ and $_BC_2$, we prove in Proposition \ref{when p preserves w-tilting} that $C=\begin{pmatrix}
C_1\\(U\otimes_AC_1)\oplus C_2
\end{pmatrix}$ is a w-tilting $T$-module when $U$ is weakly $C$-compatible.
In Section 4, we first describe relative Gorenstein projective modules over $T$. Let $C=\begin{pmatrix}
C_1\\(U\otimes_AC_1)\oplus C_2
\end{pmatrix}$ be a $T$-module. We prove in Theorem \ref{structure of $G_C$-projective}, that if $U$ is $C$-compatible then a $T$-module $M=\begin{pmatrix}
M_1\\M_2
\end{pmatrix}_{\varphi^M}$ is $G_C$-projective if and only if $M_1$ is a $G_{C_1}$-projective $A$-module, ${\rm Coker}\,\varphi^M$ is a $G_{C_2}$-projective $B$-module and $\varphi^M:U\otimes_A M_1\to M_2$ is injective. As an application, we prove the converse of Proposition \ref{when p preserves w-tilting} and refine, in the relative setting (Proposition \ref{when T is relative CM-free}), a result on when $T$ is left (strongly) CM-free due to Enochs, Izurdiaga, and Torrecillas \cite{EIT}.
Also, when $C$ is w-tilting, we characterize when a $T$-morphism is a special precover (see Proposition \ref{when a module has precover}). Then in Theorem \ref{the class of G_C-proj is a special precovering}, we prove that the class of $G_C$-projective $T$-modules is special precovering if and only if so are the classes of $G_{C_1}$-projective $A$-modules and $G_{C_2}$-projective $B$-modules.
Finally, in Section 5, we give an estimate of the $G_C$-projective dimension of a left $T$-module and of the left $G_C$-projective global dimension of $T$. First, it is proven that, given a $T$-module $M=\begin{pmatrix}
M_1\\M_2
\end{pmatrix}_{\varphi^M}$, if $C=\textbf{p}(C_1,C_2):=\begin{pmatrix}
C_1\\(U\otimes_AC_1)\oplus C_2
\end{pmatrix}$ is w-tilting, $U$ is $C$-compatible and $$SG_{C_2}-PD(B):=\sup\{{\rm G_{C_2}\!-\!pd}(U\otimes_A G)\;|\; G\in G_{C_1}P(A)\}<\infty,$$ then
$$\max\{{\rm G_{C_1}\!-\!pd}(M_1),({\rm G_{C_2}\!-\!pd}(M_2))-(SG_{C_2}-PD(B))\}$$
$$\leq {\rm G_C\!-\!pd}(M)\leq$$
$$\max\{({\rm G_{C_1}\!-\!pd}(M_1))+(SG_{C_2}-PD(B))+1,{\rm G_{C_2}\!-\!pd}(M_2)\}.$$
As an application, we prove that, if $C=\textbf{p}(C_1,C_2)$ is w-tilting and $U$ is $C$-compatible then
$$\max\{G_{C_1}-PD(A),G_{C_2}-PD(B)\}$$
$$\leq G_C-PD(T)\leq $$
$$\max\{G_{C_1}-PD(A)+SG_{C_2}-PD(B)+1,G_{C_2}-PD(B)\}.$$
Some cases in which this estimate becomes an exact formula are also given.
The authors in \cite{AS} establish a relationship between the projective dimension of
modules over $T$ and modules over $A$ and $B$. Given an integer $n\geq 0$ and $M=\begin{pmatrix}
M_1\\M_2
\end{pmatrix}_{\varphi^M}$ a $T$-module, they proved that ${\rm pd}_T(M)\leq n $ if and only if ${\rm pd}_A(M_1)\leq n$, ${\rm pd}_B(\overline{M}_2)\leq n$ and the map related to the $n$-th syzygy of $M$ is injective. We end the paper by giving a counterexample to this result (Example \ref{counter example}).
\section{Preliminaries}
Throughout this paper, all rings will be associative (not necessarily commutative)
with identity, and all modules will be, unless otherwise specified, unitary left modules.
For a ring $R$, we use ${\rm Proj}(R)$ (resp., ${\rm Inj}(R)$) to denote the class of all projective (resp.,
injective) $R$-modules. The category of all left $R$-modules will be denoted by $R$-Mod. For an $R$-module $C$, we use ${\rm Add}_R(C)$ to denote the class of all $R$-modules
which are isomorphic to direct summands of direct sums of copies of $C$, and
${\rm Prod}_R(C)$ will denote the class of all $R$-modules which are isomorphic to direct
summands of direct products of copies of $C$.
Given a class of modules $\mathcal{F}$ (which will always be considered closed under isomorphisms), an $\mathcal{F}$-precover of $M\in R$-Mod is a morphism $\varphi:F\to M$ ($F\in \mathcal{F}$) such that ${\rm Hom}_R(F',\varphi)$ is surjective for every $F'\in\mathcal{F}$. If, in addition, any solution of the equation ${\rm Hom}_R(F,\varphi)(g)=\varphi$ is an automorphism of $F$, then $\varphi$ is said to be an $\mathcal{F}$-cover. The $\mathcal{F}$-precover $\varphi$ is said to be special if it is surjective and ${\rm Ext}^1(F,\ker \varphi)=0$ for every $F\in \mathcal{F}$. The class $\mathcal{F}$ is said to be special (pre)covering if every module has a special $\mathcal{F}$-(pre)cover.
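For instance, any surjection $\pi:P\to M$ with $P$ projective is a special ${\rm Proj}(R)$-precover of $M$, since ${\rm Ext}^1_R(Q,\ker \pi)=0$ for every projective $Q$; in particular, ${\rm Proj}(R)$ is a special precovering class.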
Given the class $\mathcal{F}$, the class of all modules $N$ such that ${\rm Ext}^i_R(F,N)=0 $ for every $ F\in \mathcal{F}$ will be denoted by $\mathcal{F}^{\perp_i}$ (similarly, $^{\perp_i}\mathcal{F}=\{ N;\ {\rm Ext}^i_R(N,F)=0\ \forall F\in \mathcal{F}\}$). The right and left orthogonal classes $\mathcal{F}^{\perp}$ and $^{\perp}\mathcal{F}$ are defined as follows:
$$\mathcal{F}^{\perp}=\bigcap_{i\geq 1} \mathcal{F}^{\perp_i}\text{ and } {}^{\perp}\mathcal{F}=\bigcap_{i\geq 1}\; {}^{\perp_i}\mathcal{F}.$$
It is immediate to see that if $C$ is any module then ${\rm Add}_R(C)^{\perp}=\{C\}^{\perp}$ and ${}^{\perp}{\rm Prod}_R(C)={}^{\perp}\{C\}$.
Given a class of $R$-modules $\mathcal{F}$, an exact sequence of $R$-modules
$$\cdots\to X^1\to X^0\to X_0\to X_1\to\cdots$$
is called ${\rm Hom}_R(-,\mathcal{F})$-exact (resp., ${\rm Hom}_R(\mathcal{F},-)$-exact) if the functor ${\rm Hom}_R(-,F)$ (resp., ${\rm Hom}_R(F,-)$) leaves the sequence exact whenever $F\in \mathcal{F}$. If $\mathcal{F}=\{F\}$, we simply say ${\rm Hom}_R(-,F)$-exact. Similarly, we can define $(\mathcal{F}\otimes_R-)$-exact sequences when $\mathcal{F}$ is a class of right $R$-modules.
We now recall some concepts needed throughout the paper.
\begin{defn}
\begin{enumerate}
\item (\cite[Definition 2.1]{HW}) A semidualizing bimodule is an $(R,S)$-bimodule $C$ satisfying the following
properties:
\begin{enumerate}
\item $_RC$ and $C_S$ both admit a degreewise finite projective resolution in the corresponding module categories ($R$-Mod and Mod-$S$).
\item ${\rm Ext}_R^{\geq 1}(C,C)={\rm Ext}_S^{\geq 1}(C,C)=0.$
\item The natural homothety maps $R\stackrel{_R\gamma}{\rightarrow}{\rm Hom}_S(C,C)$ and $S \stackrel{\gamma_S}{\rightarrow} {\rm Hom}_R(C,C)$ both are ring isomorphisms.
\end{enumerate}
\item (\cite[Section 3]{W}) A Wakamatsu tilting module, simply tilting, is an $R$-module $C$ satisfying the following
properties:
\begin{enumerate}
\item $_RC$ admits a degreewise finite projective resolution.
\item ${\rm Ext}^{\geq 1}_R(C,C)=0.$
\item There exists a ${\rm Hom}_R(-,C)$-exact exact sequence of $R$-modules
$$ \mathbf{X}=\ 0 \rightarrow R \rightarrow C^0 \rightarrow C^1 \rightarrow \cdots $$
where $C^i\in {\rm add}_R(C)$ for every $i\in \N$.
\end{enumerate}
\end{enumerate}
\end{defn}
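For example, $R$ itself, viewed as an $(R,R)$-bimodule, is a semidualizing bimodule; more generally, a dualizing module over a commutative Noetherian local Cohen-Macaulay ring is semidualizing.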
It was proved in \cite[Corollary 3.2]{W}, that an $(R,S)$-bimodule $C$ is semidualizing if and only if $_RC$ is tilting with $S={\rm End}_R(C)$. So the following notion, which will be crucial in this paper, generalizes both concepts.
\begin{defn}[\cite{BGO1}, Definition 2.1] An $R$-module $C$ is weakly Wakamatsu tilting ($w$-tilting for short) if it has the following two properties:
\begin{enumerate}
\item ${\rm Ext}_R^{i\geq 1}(C,C^{(I)})=0$ for every set $I.$
\item There exists a ${\rm Hom}_R (-,{\rm Add}_R(C))$-exact exact sequence of $R$-modules
$$ \mathbf{X}=\ 0 \rightarrow R \rightarrow A^0 \rightarrow A^1 \rightarrow \cdots $$
where $A^i\in {\rm Add}_R(C)$ for every $i\in \N$.
\end{enumerate}
If $C$ satisfies $1.$ but perhaps not $2.$ then $C$ will be said to be $\Sigma$-self-orthogonal.
\end{defn}
\begin{defn}[\cite{BGO1}, Definition 2.2]
Given any $C\in R$-Mod, an $R$-module $M$ is said to be G$_C$-projective if there exists a ${\rm Hom}_R (-,{\rm Add}_R(C))$-exact exact sequence of $R$-modules
$$ \mathbf{X}=\ \cdots \rightarrow P_1 \rightarrow P_0 \rightarrow A^0 \rightarrow A^1 \rightarrow \cdots $$
where the $P_i$'s are all projective, $A^i\in {\rm Add}_R(C)$ for every $i\in \N$, and $M \cong {\rm Im}(P_0 \rightarrow A^0)$.
We use $G_CP(R)$ to denote the class of all G$_C$-projective $R$-modules.
\end{defn}
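For instance, taking $C={}_RR$, we have ${\rm Add}_R(C)={\rm Proj}(R)$, so the $G_R$-projective modules are exactly the Gorenstein projective modules of Enochs and Jenda \cite{EJ}; moreover, $R$ is trivially w-tilting (take $A^0=R$ and $A^i=0$ for $i\geq 1$).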
It is immediate from the definitions that w-tilting modules can be characterized as follows.
\begin{lem}\label{charac of w-tilitng R-mod}
An $R$-module $C$ is w-tilting if and only if both $C$ and $R$ are $G_C$-projective modules.
\end{lem}
Now we recall some facts about triangular matrix rings. Let $A$ and $B$ be rings and $U$ a $(B,A)$-bimodule. We shall denote by $T=\begin{pmatrix}
A & 0\\
U & B
\end{pmatrix}$ the generalized triangular matrix ring.
By \cite[Theorem 1.5]{G}, the category $T$-Mod of left $T$-modules is equivalent to the category $_T\Omega$ whose objects are triples $M=\begin{pmatrix}
M_1 \\
M_2
\end{pmatrix}_{\varphi^M}$, where $M_1\in A$-Mod, $M_2\in B$-Mod and $\varphi^M : U \otimes_A M_1\rightarrow M_2$ is a $B$-morphism, and whose morphisms from $\begin{pmatrix}
M_1 \\
M_2
\end{pmatrix}_{\varphi^M}$ to $\begin{pmatrix}
N_1 \\
N_2
\end{pmatrix}_{\varphi^N}$ are pairs
$\begin{pmatrix}
f_1 \\
f_2
\end{pmatrix}$
such that $f_1\in {\rm Hom}_A(M_1,N_1)$, $f_2\in
{\rm Hom}_B(M_2,N_2)$ satisfying that the following diagram is commutative.
$$ \xymatrix{
U\otimes_AM_1 \ar[r]^-{\varphi^M} \ar[d]_{1_U\otimes f_1} & M_2\ar[d] ^{f_2}
\\
U\otimes_AN_1\ar[r]^-{\varphi^N} & N_2
\\
}$$
Since we have the natural isomorphism $${\rm Hom}_B(U\otimes_A M_1,M_2)\cong {\rm Hom}_A(M_1,{\rm Hom}_B(U,M_2)),$$there is an alternative way of defining $T$-modules and $T$-homomorphisms in terms of maps $\widetilde{\varphi^M}:M_1\to {\rm Hom}_B(U,M_2)$ given by $\widetilde{\varphi^M}(x)(u)=\varphi^M(u\otimes x)$ for each $u\in U$ and $x\in M_1.$
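Concretely, under this identification the left $T$-module structure on a triple $M=\begin{pmatrix}
M_1 \\
M_2
\end{pmatrix}_{\varphi^M}$ is given by
$$\begin{pmatrix}
a & 0\\
u & b
\end{pmatrix}\begin{pmatrix}
m_1 \\
m_2
\end{pmatrix}=\begin{pmatrix}
a m_1 \\
\varphi^M(u\otimes m_1)+b m_2
\end{pmatrix},$$
for all $a\in A$, $b\in B$, $u\in U$, $m_1\in M_1$ and $m_2\in M_2$.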
Analogously, the category Mod-$T$ of right $T$-modules is equivalent to the category $\Omega_T$ whose objects are triples $M=\begin{pmatrix}
M_1 , M_2
\end{pmatrix}_{\phi^M}$, where $M_1\in$ Mod-$A$, $M_2\in$ Mod-$B$ and $\phi^M : M_2\otimes_B U\rightarrow M_1$ is an $A$-morphism, and whose morphisms from $\begin{pmatrix}
M_1 , M_2
\end{pmatrix}_{\phi^M}$ to $\begin{pmatrix}
N_1 ,
N_2
\end{pmatrix}_{\phi^N}$ are pairs
$\begin{pmatrix}
f_1 ,
f_2
\end{pmatrix}$
such that $f_1\in {\rm Hom}_A(M_1,N_1)$, $f_2\in
{\rm Hom}_B(M_2,N_2)$ satisfying that the following diagram
$$ \xymatrix{
M_2\otimes_B U \ar[r]^-{\phi^M} \ar[d]_{f_2\otimes 1_U} & M_1\ar[d] ^{f_1}
\\
N_2\otimes_B U\ar[r]^-{\phi^N} & N_1
\\
}$$ is commutative.
In the rest of the paper we shall identify $T$-Mod (resp. Mod-$T$) with $_T\Omega$ (resp. $\Omega_T$). Consequently, through the paper, a left (resp. right) $T$-module will be a triple $M=\begin{pmatrix}
M_1 \\
M_2
\end{pmatrix}_{\varphi^M}$ (resp. $M=\begin{pmatrix}
M_1 , M_2
\end{pmatrix}_{\phi^M}$) and, whenever there is no possible confusion, we shall omit the morphisms $\varphi^M$ and $\phi^M$. For example, $_TT$ is identified with $\begin{pmatrix}
A\\U\oplus B
\end{pmatrix}$ and $T_T$ is identified with $\begin{pmatrix}
A\oplus U, B
\end{pmatrix}$.
A sequence of left T-modules $0\rightarrow\begin{pmatrix}
M_1\\M_2
\end{pmatrix}\rightarrow\begin{pmatrix}
M'_1\\M'_2
\end{pmatrix}\rightarrow\begin{pmatrix}
M''_1\\M''_2
\end{pmatrix}\rightarrow 0$ is exact if and only if both sequences $0\rightarrow
M_1\rightarrow
M'_1\rightarrow
M''_1\rightarrow 0$ and $0\rightarrow
M_2\rightarrow
M'_2\rightarrow
M''_2\rightarrow 0$ are exact.
Throughout this paper, $T=\begin{pmatrix}
A & 0\\
U & B
\end{pmatrix}$ will be a generalized triangular matrix ring. Given a $T$-module $M=\begin{pmatrix}
M_1 \\
M_2
\end{pmatrix}_{\varphi^M}$, the $B$-module ${\rm Coker}\,\varphi^M$ will be denoted as $\overline{M}_2$ and the $A$-module ${\rm Ker}\, \widetilde{\varphi^M}$ as $\underline{M_1}$. A $T$-module $N=\begin{pmatrix}
N_1\\N_2
\end{pmatrix}_{\varphi^N}$ is a submodule of $M$ if $N_1$ is a submodule of $M_1$, $N_2$ is a submodule of $M_2$ and $\varphi^M|_{U\otimes_A N_1}=\varphi^N$.
As an interesting and special case of triangular matrix rings, we recall that the $T_2$-extension of a ring $R$ is given by
$$T(R)=\begin{pmatrix}
R&0\\ R&R
\end{pmatrix}$$ and the modules over $T(R)$ are triples $M=\begin{pmatrix}
M_1\\M_2
\end{pmatrix}_{\varphi^M}$ where $M_1$ and $M_2$ are $R$-modules and $\varphi^M:M_1\to M_2$ is an $R$-homomorphism.
There are some pairs of adjoint functors $(\textbf{p},\textbf{q})$, $(\textbf{q},\textbf{h})$ and $(\textbf{s},\textbf{r})$ between the category $T$-Mod and the product category $A$-Mod $\times$ $B$-Mod which are defined as follows:
\begin{enumerate}
\item $\textbf{p}$\;:\;$A$-Mod $\times$ $B$-Mod$\rightarrow T$-Mod is defined as follows: for each object $(M_1,M_2)$
of $A$-Mod$\times B$-Mod, let $\textbf{p}(M_1,M_2)=\begin{pmatrix}
M_1 \\
(U\otimes_A M_1)\oplus M_2
\end{pmatrix}$ (the structure map being the canonical injection $U\otimes_A M_1\to (U\otimes_A M_1)\oplus M_2$) and
for any morphism $(f_1,f_2)$ in $A$-Mod$\times B$-Mod, let $\textbf{p}(f_1,f_2)=\begin{pmatrix}
f_1 \\
(1_U\otimes_A f_1)\oplus f_2
\end{pmatrix}$.
\item $\textbf{q}$\;:\;$T$-Mod$\rightarrow A$-Mod $\times B$-Mod is defined, for each left $T$-module $\begin{pmatrix}
M_1 \\
M_2
\end{pmatrix}$ as $\textbf{q}(\begin{pmatrix}
M_1 \\
M_2
\end{pmatrix})
=(M_1,M_2)$, and for each morphism $\begin{pmatrix}
f_1 \\
f_2
\end{pmatrix}$ in $T$-Mod as $\textbf{q}(\begin{pmatrix}
f_1 \\
f_2
\end{pmatrix})
=(f_1,f_2)$.
\item $\textbf{h}$\;:\;$A$-Mod $\times$ $B$-Mod$\rightarrow T$-Mod is defined as follows: for each object $(M_1,M_2)$
of $A$-Mod$\times B$-Mod, let $\textbf{h}(M_1,M_2)=\begin{pmatrix}
M_1 \oplus {\rm Hom}_B(U,M_2)\\
M_2
\end{pmatrix}$ (the structure map corresponding, under adjunction, to the canonical projection $M_1 \oplus {\rm Hom}_B(U,M_2)\to {\rm Hom}_B(U,M_2)$) and
for any morphism $(f_1,f_2)$ in $A$-Mod$\times B$-Mod, let $\textbf{h}(f_1,f_2)=\begin{pmatrix}
f_1 \oplus {\rm Hom}_B(U,f_2)\\
f_2
\end{pmatrix}$.
\item $\textbf{r}$\;:\;$A$-Mod $\times$ $B$-Mod$\rightarrow T$-Mod is defined as follows: for each object $(M_1,M_2)$
of $A$-Mod$\times B$-Mod, let $\textbf{r}(M_1,M_2)=\begin{pmatrix}
M_1 \\
M_2
\end{pmatrix}$ with the zero map and
for any morphism $(f_1,f_2)$ in $A$-Mod$\times B$-Mod, let $\textbf{r}(f_1,f_2)=\begin{pmatrix}
f_1 \\
f_2
\end{pmatrix}$.
\item $\textbf{s}$\;:\;$T$-Mod$\rightarrow A$-Mod $\times B$-Mod is defined, for each left $T$-module $\begin{pmatrix}
M_1 \\
M_2
\end{pmatrix}$ as $\textbf{s}(\begin{pmatrix}
M_1 \\
M_2
\end{pmatrix})
=(M_1,\overline{M}_2)$, and for each morphism $\begin{pmatrix}
f_1 \\
f_2
\end{pmatrix}$ in $T$-Mod as $\textbf{s}(\begin{pmatrix}
f_1 \\
f_2
\end{pmatrix})
=(f_1,\overline{f}_2)$, where $\overline{f}_2$ is the induced map.
\end{enumerate}
It is easy to see that \textbf{q} is exact. Consequently, since $\textbf{p}$ is left adjoint and $\textbf{h}$ is right adjoint to $\textbf{q}$, the functor $\textbf{p}$ preserves projective objects and $\textbf{h}$ preserves injective objects. Note that the pairs of adjoint functors $(\textbf{p},\textbf{q})$ and $(\textbf{q},\textbf{h})$ are defined in \cite{EIT}. In general, the three pairs of adjoint functors defined above can be found in \cite{FGR}.
For future reference, we list these adjointness isomorphisms:
$${\rm Hom}_T(\begin{pmatrix}
M_1 \\
(U\otimes_A M_1)\oplus M_2
\end{pmatrix},N)\cong {\rm Hom}_A(M_1,N_1)\oplus {\rm Hom}_B(M_2,N_2).$$
$${\rm Hom}_T(N,\begin{pmatrix}
M_1 \\
M_2
\end{pmatrix}_0)\cong {\rm Hom}_A(N_1,M_1)\oplus {\rm Hom}_B(\overline{N}_2,M_2).$$
$${\rm Hom}_T(M,\begin{pmatrix}
N_1 \oplus {\rm Hom}_B(U,N_2)\\
N_2
\end{pmatrix})\cong {\rm Hom}_A(M_1,N_1)\oplus {\rm Hom}_B(M_2,N_2).$$
Now we recall the characterizations of projective, injective and finitely generated $T$-modules.
\begin{lem}\label{projectives-injectives over T} Let $M=\begin{pmatrix}
M_1\\M_2
\end{pmatrix}_{\varphi^M}$ be a $T$-module.
\item[(1)](\cite[Theorem 3.1]{HV}) $M$ is projective if and only if $M_1$ is projective in $A$-Mod, $\overline{M}_2={\rm Coker}\,\varphi^M$ is projective in $B$-Mod and $\varphi^M$ is injective.
\item[(2)](\cite[Proposition 5.1]{HV2} ) $M$ is injective if and only if $\underline{M_1}={\rm Ker} \widetilde{\varphi^M}$ is injective in $A$-Mod, $M_2$ is injective in $B$-Mod and $\widetilde{\varphi^M}$ is surjective.
\item[(3)] (\cite{GW}) $M$ is finitely generated if and only if $M_1$ and $\overline{M}_2$ are finitely generated.
\end{lem}
The following Lemma improves \cite[Lemma 3.2]{M}.
\begin{lem}\label{Ext} Let $M=\begin{pmatrix}
M_1\\M_2
\end{pmatrix}_{\varphi^M}$ and $N=\begin{pmatrix}
N_1\\N_2
\end{pmatrix}_{\varphi^N}$ be two $T$-modules and let $n\geq 1$ be an integer. Then we have the following natural isomorphisms:
\begin{enumerate}
\item If ${\rm Tor}_{1\leq i\leq n}^A(U,M_1)=0$, then ${\rm Ext}^n_T(\begin{pmatrix}
M_1\\ U\otimes_AM_1
\end{pmatrix},N)\cong {\rm Ext}^n_A(M_1,N_1).$
\item ${\rm Ext}^n_T(\begin{pmatrix}
0\\M_2
\end{pmatrix},N) \cong {\rm Ext}^n_B(M_2,N_2).$
\item ${\rm Ext}^n_T(M,\begin{pmatrix}
N_1\\0
\end{pmatrix})\cong {\rm Ext}^n_A(M_1,N_1).$
\item If ${\rm Ext}_B^{1\leq i\leq n}(U,N_2)=0$, then ${\rm Ext}^n_T(M,\begin{pmatrix}
{\rm Hom}_B(U,N_2)\\N_2
\end{pmatrix})\cong {\rm Ext}^n _B(M_2,N_2).$
\end{enumerate}
\end{lem}
{\parindent0pt {\bf Proof.\ }} We prove only $1.$, since $2.$ is similar and $3.$ and $4.$ are dual. Assume that ${\rm Tor}_{1\leq i\leq n}^A(U,M_1)=0$ and consider an exact sequence of $A$-modules
$$0\to K_1\to P_1\to M_1\to 0$$
where $P_1$ is projective. So, there exists an exact sequence of $T$-modules
$$0\to \begin{pmatrix}
K_1\\ U\otimes_AK_1
\end{pmatrix}\to \begin{pmatrix}
P_1\\ U\otimes_AP_1
\end{pmatrix}\to \begin{pmatrix}
M_1\\ U\otimes_AM_1
\end{pmatrix}\to 0$$
where $\begin{pmatrix}
P_1\\ U\otimes_AP_1
\end{pmatrix}$ is projective by Lemma \ref{projectives-injectives over T}.
Let $n=1$. By applying the functor ${\rm Hom}_T(-,N)$ to the above short exact sequence and since $\begin{pmatrix}
P_1\\ U\otimes_AP_1
\end{pmatrix}$ and $P_1$ are projectives, we get a commutative diagram with exact rows
$$\xymatrix{
{\rm Hom}_T({\begin{pmatrix}
P_1\\ U\otimes_AP_1
\end{pmatrix}},N) \ar[d]^{\cong}\ar[r] &{\rm Hom}_T({\begin{pmatrix}
K_1\\ U\otimes_AK_1
\end{pmatrix}},N) \ar[d]^{\cong} \ar@{>>}[r] &{\rm Ext}^1_T({\begin{pmatrix}
M_1\\ U\otimes_AM_1
\end{pmatrix}},N) \ar[d] \\
{\rm Hom}_A(P_1,N_1) \ar[r] &{\rm Hom}_A(K_1,N_1) \ar@{>>}[r] &{\rm Ext}^1_A(M_1,N_1)
}$$
where the first two vertical maps are the natural isomorphisms given by adjointness and the two right-hand horizontal maps are epimorphisms. Thus, the induced map $${\rm Ext}^1_T({\begin{pmatrix}
M_1\\ U\otimes_AM_1
\end{pmatrix}},N)\to {\rm Ext}^1_A(M_1,N_1)$$ is an isomorphism such that the above diagram is commutative.
Assume that $n>1$. Using the long exact sequence, we get a commutative diagram with exact rows
$$\xymatrix{
0 \ar[r] &{\rm Ext}^{n-1}_T({\begin{pmatrix}
K_1\\ U\otimes_AK_1
\end{pmatrix}},N) \ar[d]^{\cong}_{\sigma} \ar[r]_{\cong}^{f} &{\rm Ext}^n_T({\begin{pmatrix}
M_1\\ U\otimes_AM_1
\end{pmatrix}},N) \ar@{-->}[d]\ar[r] &0 \\
0 \ar[r] &{\rm Ext}^{n-1}_A(K_1,N_1) \ar[r]^{g}_{\cong} &{\rm Ext}^n_A(M_1,N_1) \ar[r] &0
}$$
where $\sigma$ is a natural isomorphism by induction, since ${\rm Tor}_k^A(U,K_1)=0$ for every $k\in\{1,\cdots,n-1\}$ because of the exactness of the following sequence
$$0 = {\rm Tor}_{k+1}^A(U,M_1)\to {\rm Tor}_k^A(U,K_1)\to {\rm Tor}_k^A(U,P_1)= 0.$$
Thus, the composite map $$g\sigma f^{-1}:{\rm Ext}^n_T({\begin{pmatrix}
M_1\\ U\otimes_AM_1
\end{pmatrix}},N)\to {\rm Ext}^n_A(M_1,N_1)$$ is a natural isomorphism, as desired.
\cqfd
Since $T$ can be viewed as a trivial extension (see \cite{FGR} and \cite{BBG} for more details), the following lemma can be easily deduced from \cite[Theorems 3.1 and 3.4]{BBG}. For the convenience of the reader, we give a proof.
\begin{lem}\label{Add-Prod}Let $X=\begin{pmatrix}
X_1\\X_2
\end{pmatrix}_{\varphi^X}$ be a $T$-module and $(C_1,C_2)\in A$-Mod $\times\; B$-Mod.
\begin{enumerate}
\item $X\in {\rm Add} _T(\textbf{p}(C_1,C_2))$ if and only if \begin{itemize}
\item[(i)] $X\cong \textbf{p}(X_1,\overline{X}_2)$,
\item[(ii)] $X_1\in {\rm Add}_A(C_1)$ and $\overline{X}_2 \in {\rm Add}_B(C_2).$
\end{itemize}
In this case, $\varphi^X$ is injective.
\item $X\in {\rm Prod}_T(\textbf{h}(C_1,C_2))$ if and only if \begin{itemize}
\item[(i)] $X\cong \textbf{h}(\underline{X_1},X_2)$,
\item[(ii)] $\underline{X_1}\in {\rm Prod}_A(C_1)$ and $X_2 \in {\rm Prod}_B(C_2).$
\end{itemize}
In this case, $\widetilde{\varphi^X}$ is surjective.
\end{enumerate}
\end{lem}
{\parindent0pt {\bf Proof.\ }} We only need to prove (1), since (2) is dual.
For the "if" part. If $X_1\in {\rm Add}_A(C_1)$ and $\overline{X}_2 \in {\rm Add}_B(C_2)$, then $X_1\oplus Y_1=C^{(I_1)}$ and $\overline{X}_2 \oplus Y_2=C_2^{(I_2)}$ for some $(Y_1,Y_2)\in A$-Mod$\times B$-Mod and some sets $I_1$ and $I_2$. Without loss of generality, we may assume that $I=I_1=I_2.$ Then
\begin{eqnarray*}
X\oplus \textbf{p}(Y_1,Y_2)&\cong & \textbf{p}(X_1,\overline{X}_2)\oplus \textbf{p}(Y_1,Y_2) \\
&=& \begin{pmatrix}
X_1\\ ( U\otimes_A X_1)\oplus \overline{X}_2
\end{pmatrix}\oplus \begin{pmatrix}
Y_1\\ ( U\otimes_A Y_1)\oplus Y_2
\end{pmatrix}\\
&\cong & \begin{pmatrix}
C_1^{(I)}\\ ( U\otimes_A C_1^{(I)})\oplus C^{(I)}_2
\end{pmatrix}\\
&\cong & \begin{pmatrix}
C_1\\ ( U\otimes_A C_1)\oplus C_2
\end{pmatrix}^{(I)}\\
&=& \textbf{p}(C_1,C_2)^{(I)}.
\end{eqnarray*}
Hence $X\in {\rm Add}_T(\textbf{p}(C_1,C_2)).$
Conversely, let $X\in {\rm Add} _T(\textbf{p}(C_1,C_2))$ and $Y=\begin{pmatrix}
Y_1\\Y_2
\end{pmatrix}_{\varphi^Y}$ be a $T$-module such that $X\oplus Y=\textbf{p}(C_1,C_2)^{(I)}$ for some set $I$. Then $\varphi^X$ is injective, as $X$ is a submodule of $C:=\textbf{p}(C_1,C_2)^{(I)}$ and $\varphi^C$ is injective. Consider now the split exact sequence $$0\to Y\stackrel{\begin{pmatrix}
\lambda_1\\\lambda_2
\end{pmatrix}}{\longrightarrow} C \stackrel{\begin{pmatrix}
p_1\\p_2
\end{pmatrix}}{\longrightarrow} X\to 0 $$
which induces
the following commutative
diagram with exact rows and columns
$$ \xymatrix{
& & & & & \\
0\ar[r] & U\otimes_A Y_1\ar@{^{(}->}[d]_{\varphi^{Y}} \ar[r]^{1_U\otimes\lambda_1} & U\otimes_A C_1^{(I)}\ar@{^{(}->}[d]_{\varphi^{C}} \ar[r]^{1_U\otimes p_1}& U\otimes_A X_1\ar@{^{(}->}[d]_{\varphi^{X}} \ar[r] & 0 \\
0\ar[r] &Y_2\ar[d]_{\overline{\varphi^{Y}}} \ar[r]^{\lambda_2\hspace{0.9cm}} & U\otimes_A C_1^{(I)} \oplus C_2^{(I)}\ar[d]_{\overline{\varphi^{C}}} \ar[r]^{\hspace{0.9cm}p_2} & X_2 \ar[d]_{\overline{\varphi^{X}}} \ar[r] & 0\\
0\ar[r] & \overline{Y_2}\ar[r]^{\overline{\lambda_2}} \ar[d] & C_2^{(I)} \ar[d] \ar[r]^{\overline{p_2}}& \overline{X_2}\ar[r] \ar[d] & 0\\
&0 &0 &0 & } $$
where $\overline{\varphi^{Y}}$, $\overline{\varphi^{C}}$ and $\overline{\varphi^{X}}$ are the canonical projections.
Clearly, $p_1:C_1^{(I)}\to X_1 $ and $\overline{p_2}:C_2^{(I)}\to \overline{X}_2 $ are split epimorphisms. Then $X_1\in {\rm Add}_A(C_1)$ and $\overline{X}_2 \in {\rm Add}_B(C_2).$ It remains to prove that $X\cong \textbf{p}(X_1,\overline{X}_2)$. For this, it suffices to prove that the short exact sequence
$$0\to U\otimes_A X_1\stackrel{\varphi^X}{\to} X_2\stackrel{\overline{\varphi^X}}{\to} \overline{X}_2 \to 0$$
splits. Let $r_2$ be a section of $\overline{p_2}$. If $i:C_2^{(I)}\to (U\otimes_A C_1^{(I)})\oplus C_2^{(I)} $ denotes the canonical injection, then
$\overline{\varphi^X}p_2ir_2=\overline{p_2}\overline{\varphi^C}ir_2=\overline{p_2}r_2=1_{\overline{X_2}}$ and the proof is finished.
\cqfd
\begin{rem}\begin{enumerate}
\item Since the class of projective modules over $T$ is nothing but the class ${\rm Add}_T(T)$, when we take $C_1=A$ and $C_2=B$ in Lemma \ref{Add-Prod}, we recover the characterization of projective and injective $T$-modules.
\item Let $(C_1,C_2)\in A$-Mod $\times\; B$-Mod. By Lemma \ref{Add-Prod}, every module in ${\rm Add}_T(\textbf{p}(C_1,C_2))$ has the form $\textbf{p}(X_1,X_2)$ for some $X_1\in{\rm Add}_A(C_1)$ and $X_2\in{\rm Add}_B(C_2)$.
\end{enumerate}
\end{rem}
\section{w-Tilting modules }
In this section, we study when the functor $\textbf{p}$ preserves w-tilting modules.
It is well known that the functor $\textbf{p}$ preserves projective modules. However,
the functor $\textbf{p}$ does not preserve w-tilting modules in general, as the following example shows.
\begin{exmp}\label{example 1---2} Let $Q$ be the quiver
$$e_1\stackrel{\alpha}{\longrightarrow} e_2,$$
and let $R=kQ$ be the path algebra over an algebraically closed field $k$. Put $P_1=Re_1$, $P_2=Re_2$, $I_1={\rm Hom}_k(e_1R,k)$ and $I_2={\rm Hom}_k(e_2R,k)$, and set
$$C_1=P_1\oplus P_2(=R)\;\;\;\text{and} \;\;\;C_2=I_1\oplus I_2.$$
Note that $C_1$ and $C_2$ are projective and injective $R$-modules, respectively. By \cite[Example 2.3]{ATY}, $C_1$ and $C_2$ are semidualizing $(R,R)$-bimodules and hence $w$-tilting $R$-modules. Now, consider the triangular matrix ring $$T(R)=\begin{pmatrix}
R& 0\\ R&R
\end{pmatrix}.$$ We claim that $\textbf{p}(C_1,C_2)$ is not a w-tilting $T(R)$-module. Note that $I_1$ is not projective. Since $R$ is left hereditary by \cite[Proposition 1.4]{ARS}, ${\rm pd}_R(I_1)=1$. Hence ${\rm Ext}^1_R(I_1,R)\neq 0.$ Using Lemma $\ref{Ext}$, we get that ${\rm Ext}_{T(R)}^1(\textbf{p}(C_1,C_2),\textbf{p}(C_1,C_2))\cong {\rm Ext}_R^1(C_1,C_1)
\oplus {\rm Ext}_R^1(C_2,C_1)\oplus{\rm Ext}_R^1(C_2,C_2)
\cong {\rm Ext}_R^1(I_1,R) \neq 0 $. Thus, $\textbf{p}(C_1,C_2)$ is not a w-tilting $T(R)$-module.
\end{exmp}
Motivated by the definition of compatible bimodules in \cite[Definition 1.1]{Z}, we introduce the following definition which will be crucial throughout the rest of this paper.
\begin{defn}\label{C-compatible}
Let $(C_1,C_2)\in A$-Mod $\times\; B$-Mod and $C=\textbf{p}(C_1,C_2)$. The bimodule $_BU_A$ is said to be $C$-compatible if the following two conditions hold:
\begin{enumerate}
\item[(a)]The complex $U\otimes_A \textbf{X}_1$ is exact for every exact sequence in $A$-Mod
$$\textbf{X}_1:\;\cdots\to P^1_1\to P^0_1\to C^0_1\to C^1_1\to \cdots$$
where the $P^i_1$'s are all projective and $C^i_1\in {\rm Add}_A(C_1)$ $\forall i$.
\item[(b)]The complex ${\rm Hom}_B(\textbf{X}_2,U\otimes_A {\rm Add}_A(C_1))$ is exact for every\\ ${\rm Hom}_B(-,{\rm Add}_B(C_2))$-exact exact sequence in $B$-Mod
$$\textbf{X}_2:\;\cdots\to P^1_2\to P^0_2\to C^0_2\to C^1_2\to \cdots$$
where the $P^i_2$'s are all projective and $C^i_2\in {\rm Add}_B(C_2)$ $\forall i$.
\end{enumerate}
Moreover, $U$ is called weakly $C$-compatible if it satisfies $(b)$ and the following condition
\begin{enumerate}
\item[(a')]The complex $U\otimes_A \textbf{X}_1$ is exact for every ${\rm Hom}_A(-,{\rm Add}_A(C_1))$-exact exact sequence in $A$-Mod
$$\textbf{X}_1:\;\cdots\to P^1_1\to P^0_1\to C^0_1\to C^1_1\to \cdots$$
where the $P^i_1$'s are all projective and $C^i_1\in {\rm Add}_A(C_1)$ $\forall i$.
\end{enumerate}
When $C=\;_TT=\textbf{p}(A,B)$, the bimodule $U$ will be called simply (weakly) compatible.
\end{defn}
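For instance, if $U=0$ (so that $T\cong A\times B$), then both conditions hold trivially and $U$ is $C$-compatible, hence weakly $C$-compatible, for every $T$-module of the form $C=\textbf{p}(C_1,C_2)$.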
\begin{rem}
\begin{enumerate}
\item It is clear from the definition that every $C$-compatible bimodule is weakly $C$-compatible.
\item The $(B,A)$-bimodule $U$ is weakly compatible if and only if the functor $U\otimes_A-:A$-Mod $\to B$-Mod is weakly compatible (see \cite{HZ}).
\item If $A$ and $B$ are Artin algebras, then since $_TT=\begin{pmatrix}
A\\U\oplus B
\end{pmatrix}=\textbf{p}(A,B)$, it is easy to see that $_TT$-compatible bimodules are
nothing but compatible $(B,A)$-bimodules as defined in \cite{Z}.
\end{enumerate}
\end{rem}
The following can be applied to produce examples of (weakly) $C$-compatible bimodules later on.
\begin{lem}Let $C=\textbf{p}(C_1,C_2)$ be a $T$-module.
\begin{enumerate}
\item Assume that ${\rm Tor}_1^A(U,C_1)=0$. If ${\rm fd}_A(U)<\infty$, then $U$ satisfies $(a)$.
\item Assume that ${\rm Ext}^1_B(C_2,U\otimes_AC^{(I)}_1)=0$ for every set $I$. If ${\rm id}_B(U\otimes_AC_1)<\infty$, then $U$ satisfies $(b)$.
\item If $U\otimes_A C_1\in{\rm Add}_B(C_2)$, then $U$ satisfies $(b)$.
\end{enumerate}
\end{lem}
{\parindent0pt {\bf Proof.\ }} (3) is clear. We only prove (1), as (2) is similar. Consider an exact sequence of $A$-modules
$$\textbf{X}_1:\;\cdots\to P^1_1\to P^0_1\to C^0_1\to C^1_1\to \cdots$$
where the $P^i_1$'s are all projective and $C^i_1\in {\rm Add}_A(C_1)$ $\forall i$. We use induction on ${\rm fd}_AU$. If ${\rm fd}_AU=0,$ then the result is trivial. Now suppose that ${\rm fd}_AU=n\geq 1$. Then, there exists an exact sequence of right $A$-modules
$$0\to L\to F\to U\to 0$$
where ${\rm fd}_AL=n-1$ and $F$ is flat. Applying the functor $-\otimes_A\textbf{X}_1$ to the above short exact sequence, we get the commutative diagram with exact rows
$$ \xymatrix{
\vdots\ar[d] &\vdots\ar[d] &\vdots\ar[d] &\vdots\ar[d] & \\
0 \ar[d]\ar[r] & L\otimes_A P^0_1\ar[d] \ar[r] & F\otimes_A P^0_1\ar[d] \ar[r]& U\otimes_A P^0_1\ar[d] \ar[r] & 0 \\
{\rm Tor}^A_1(U,C^0_1)\ar[d]\ar[r]& L\otimes_A C^0_1 \ar[d] \ar[r] & F\otimes_A C^0_1\ar[d] \ar[r] & U\otimes_A C^0_1 \ar[d] \ar[r] & 0\\
{\rm Tor}^A_1(U,C^1_1) \ar[d]\ar[r]& L\otimes_A C^1_1\ar[r] \ar[d] & F\otimes_A C^1_1 \ar[d] \ar[r]& U\otimes_A C^1_1\ar[r] \ar[d] & 0\\
\vdots &\vdots &\vdots &\vdots } $$
Since ${\rm Tor}_1^A(U,C_1)=0$, the above diagram induces an exact sequence of complexes
$$0\to L\otimes_A\textbf{X}_1\to F\otimes_A\textbf{X}_1\to U\otimes_A\textbf{X}_1\to 0.$$
By induction hypothesis, the complexes $L\otimes_A\textbf{X}_1$ and $F\otimes_A\textbf{X}_1$ are exact. Thus $U\otimes_A\textbf{X}_1$ is exact as well. \cqfd
Given a $T$-module $C=\textbf{p}(C_1,C_2)$, we have simple characterizations of conditions $(a')$ and $(b)$ if $C_1$ and $C_2$ are w-tilting.
\begin{prop}\label{charaterization of condition a and b }
Let $C=\textbf{p}(C_1,C_2)$ be a $T$-module.
\begin{enumerate}
\item If $C_1$ is w-tilting, then the following assertions are equivalent:
\begin{enumerate}
\item[(i)] $U$ satisfies $(a')$.
\item[(ii)] ${\rm Tor}_1^A(U,G_1)=0$, $\forall G_1\in G_{C_1}P(A)$.
\item[(iii)] ${\rm Tor}_{i\geq 1}^A(U,G_1)=0$, $\forall G_1\in G_{C_1}P(A)$.
\end{enumerate}In this case, ${\rm Tor}_{i\geq 1}^A(U,C_1)=0.$
\item If $C_2$ is w-tilting, then the following assertions are equivalent:
\begin{enumerate}
\item[(i)] $U$ satisfies $(b)$.
\item[(ii)] ${\rm Ext}^1_B(G_2,U\otimes_AX_1)=0$, $\forall G_2\in G_{C_2}P(B)$, $\forall X_1\in {\rm Add}_A(C_1)$.
\item[(iii)] ${\rm Ext}^{i\geq 1}_B(G_2,U\otimes_AX_1)=0$, $\forall G_2\in G_{C_2}P(B)$, $\forall X_1\in {\rm Add}_A(C_1)$.
\end{enumerate}In this case, ${\rm Ext}^{i\geq 1}_B(C_2,U\otimes_AX_1)=0$, $\forall X_1\in {\rm Add}_A(C_1).$
\end{enumerate}
\end{prop}
{\parindent0pt {\bf Proof.\ }} We only prove (1), since (2) is similar.
$(i)\Rightarrow (iii)$ Let $G_1\in G_{C_1}P(A)$. There exists a ${\rm Hom}_A(-,{\rm Add}_A(C_1))$-exact exact sequence in $A$-Mod
$$\textbf{X}_1:\;\cdots\to P^1_1\to P^0_1\to C^0_1\to C^1_1\to \cdots$$
where the $P^i_1$'s are all projective, $G_1\cong {\rm Im}(P^0_1\to C^0_1)$ and $C^i_1\in {\rm Add}_A(C_1)$ $\forall i$. By condition $(a')$, $U\otimes_A\textbf{X}_1$ is exact, which means in particular that ${\rm Tor}^A_{i\geq 1}(U,G_1)=0.$
$(iii)\Rightarrow (ii)$ Clear.
$(ii)\Rightarrow (i)$ Follows by \cite[Corollary 2.13]{BGO1}.
Finally, to prove that ${\rm Tor}_{i\geq 1}^A(U,C_1)=0$, note that $C_1\in G_{C_1}P(A)$ by \cite[Theorem 2.12]{BGO1} and apply $(iii)$.
\cqfd
In the following proposition, we study when $\textbf{p}$ preserves w-tilting (tilting) modules.
\begin{prop}\label{when p preserves w-tilting}
Let $C=\textbf{p}(C_1,C_2)$ be a $T$-module and assume that $U$ is weakly $C$-compatible. If $C_1$ and $C_2$ are w-tilting (tilting), then $\textbf{p}(C_1,C_2)$ is w-tilting (tilting).
\end{prop}
{\parindent0pt {\bf Proof.\ }} By Lemma \ref{projectives-injectives over T}, the functor $\textbf{p}$ preserves finitely generated modules, so we only need to prove the statement for w-tilting. Assume that $C_1$ and $C_2$ are w-tilting and let $I$ be a set. Then ${\rm Ext}_A^{i\geq 1}(C_1,C_1^{(I)})=0$ and ${\rm Ext}_B^{i\geq 1}(C_2,C_2^{(I)})=0$. By the proposition above, we have
${\rm Ext}_B^{i\geq 1}(C_2,U\otimes_AC_1^{(I)})=0$ and ${\rm Tor}^A_{i\geq 1}(U,C_1)=0$. Using Lemma \ref{Ext}, for every $n \geq 1$ we get that
\begin{eqnarray*}
{\rm Ext}_T^n(C,C^{(I)})&=&{\rm Ext}_T^n(\textbf{p}(C_1,C_2),\textbf{p}(C_1,C_2)^{(I)})\\
&\cong& {\rm Ext}_A^n(C_1,C_1^{(I)})\oplus{\rm Ext}_B^n(C_2,U\otimes_AC_1^{(I)})\oplus{\rm Ext}_B^n(C_2,C_2^{(I)})\\
&=&0.
\end{eqnarray*}
Moreover, there exist exact sequences
$$\textbf{X}_1:\;0\to A\to C_1^0\to C_1^1\to\cdots$$
and
$$\textbf{X}_2:\;0\to B\to C_2^0\to C_2^1\to\cdots$$
which are ${\rm Hom}_A(-,{\rm Add}_A(C_1))$-exact and ${\rm Hom}_B(-,{\rm Add}_B(C_2))$-exact, respectively, and such that $C_1^i\in {\rm Add}_A(C_1)$ and $C_2^i\in {\rm Add}_B(C_2)$ for every $i\in\N.$
Since $U$ is weakly $C$-compatible, the complex $U\otimes_A\textbf{X}_1$ is exact. So we construct in $T$-Mod the exact sequence
$$\textbf{p}(\textbf{X}_1,\textbf{X}_2):\;0\to T\to \textbf{p}(C_1^0,C_2^0)\to \textbf{p}(C_1^1,C_2^1)\to\cdots$$
where $\textbf{p}(C_1^i,C_2^i)=\begin{pmatrix}
C_1^i\\(U\otimes_A C_1^i)\oplus C_2^i
\end{pmatrix}\in {\rm Add}_T(\textbf{p}(C_1,C_2))$, $\forall i\in\N,$ by Lemma \ref{Add-Prod}(1).
Let $X\in {\rm Add}_T(\textbf{p}(C_1,C_2))$. As a consequence of Lemma \ref{Add-Prod}(1), $X=\textbf{p}(X_1,X_2)$ where $X_1\in {\rm Add}_A(C_1)$ and $X_2\in {\rm Add}_B(C_2).$
Using the adjointness $(\textbf{p},\textbf{q})$, we get an isomorphism of complexes $${\rm Hom}_T(\textbf{p}(\textbf{X}_1,\textbf{X}_2),X)\cong {\rm Hom}_A(\textbf{X}_1,X_1)\oplus {\rm Hom}_B(\textbf{X}_2,U\otimes_A X_1)\oplus {\rm Hom}_B(\textbf{X}_2,X_2).$$ But ${\rm Hom}_A(\textbf{X}_1,X_1)$ and ${\rm Hom}_B(\textbf{X}_2,X_2)$ are exact and the complex ${\rm Hom}_B(\textbf{X}_2,U\otimes_A X_1)$ is also exact since $U$ is weakly $C$-compatible. Then, ${\rm Hom}_T(\textbf{p}(\textbf{X}_1,\textbf{X}_2),X)$ is exact as well and the proof is finished. \cqfd
Now, we illustrate Proposition \ref{when p preserves w-tilting} with two applications.
\begin{cor} Let $C=\textbf{p}(C_1,C_2)$ be a $T$-module, let $A'$ and $B'$ be two rings such that $_A(C_1)_{A'}$ and $_B(C_2)_{B'}$ are bimodules, and assume that $U$ is weakly $C$-compatible. If $_A(C_1)_{A'}$ and $_B(C_2)_{B'}$ are semidualizing bimodules, then $\textbf{p}(C_1,C_2)$ is a semidualizing $(T,{\rm End}_T(C))$-bimodule.
\end{cor}
{\parindent0pt {\bf Proof.\ }} Follows by Proposition \ref{when p preserves w-tilting} and \cite[Corollary 3.2]{W}. \cqfd
\begin{cor} \label{application 2} Let $R$ and $S$ be rings, $\theta:R \rightarrow S$ be a ring homomorphism with $S_R$ flat, and $T=T(\theta):=\begin{pmatrix}
R&0\\S&S
\end{pmatrix}$. Let $C_1$ be an $R$-module such that $S\otimes_RC_1\in{\rm Add}_R(C_1)$ (for instance, if $R$ is commutative or $R=S$). If $_RC_1$ is w-tilting, then
\begin{enumerate}
\item $S\otimes_RC_1$ is a w-tilting $S$-module.
\item $C=\begin{pmatrix}
C_1\\ (S\otimes_RC_1)\oplus (S\otimes_RC_1)
\end{pmatrix}$ is a w-tilting $T(\theta)$-module.
\end{enumerate}
\end{cor}
{\parindent0pt {\bf Proof.\ }} 1. Let $C_2=S\otimes_RC_1$ and note that $C=\textbf{p}(C_1,C_2)$ and that $_SS_R$ is $C$-compatible. So, by Proposition \ref{when p preserves w-tilting}, we only need to prove that $C_2$ is a w-tilting $S$-module.
Since $_RC_1$ is w-tilting, there exist ${\rm Hom}_R(-,{\rm Add}_R(C_1))$-exact exact sequences
$$\textbf{P}:\cdots\to P_1\to P_0\to C_1\to 0 $$
and
$$\textbf{X}:0\to R\to C^0\to C^1\to\cdots $$
with each $_RP_i$ projective and $_RC^i\in{\rm Add}_R(C_1)$. Since $S_R$ is flat, we get exact sequences
$$S\otimes_R\textbf{P}:\cdots\to S\otimes_RP_1\to S\otimes_RP_0\to S\otimes_RC_1\to 0 $$
and
$$S\otimes_R\textbf{X}:0\to S\to S\otimes_RC^0\to S\otimes_RC^1\to\cdots $$
with each $S\otimes_RP_i$ a projective $S$-module and $S\otimes_RC^i\in{\rm Add}_S(C_2)$.
We prove now that $S\otimes_R\textbf{P}$ and $S\otimes_R\textbf{X}$ are ${\rm Hom}_S(-,{\rm Add}_S(C_2))$-exact. Let $I$ be a set. Then, the complex
${\rm Hom}_S(S\otimes_R\textbf{P},S\otimes_RC_1^{(I)})\cong {\rm Hom}_R(\textbf{P},{\rm Hom}_S(S,S\otimes_RC_1^{(I)}))\cong {\rm Hom}_R(\textbf{P},S\otimes_RC_1^{(I)})$ is exact since $S\otimes_RC_1^{(I)}\in {\rm Add}_R(C_1)$. Similarly, $S\otimes_R\textbf{X}$ is ${\rm Hom}_S(-,{\rm Add}_S(C_2))$-exact.
2. This assertion follows from Proposition \ref{when p preserves w-tilting}. We only need to note that $S$ is weakly $C$-compatible since $S_R$ is flat and $S\otimes_R C_1\in{\rm Add}_S(C_2)$. \cqfd
We end this section with an example of a w-tilting module that is neither projective nor injective.
\begin{exmp} Take $R$ and $C_2$ as in Example \ref{example 1---2}. So, by Corollary \ref{application 2}, $C=\begin{pmatrix} C_2\\ C_2\oplus C_2\end{pmatrix}$ is a w-tilting $T(R)$-module. By Lemma \ref{projectives-injectives over T}, $C$ is not projective since $C_2$ is not and it is not injective since the map $\widetilde{\varphi^C}:C_2\to C_2\oplus C_2$ is not surjective.
Moreover, by \cite[Proposition 2.6]{ARS}, $gl.dim (T(R))=gl.dim(R)+1\leq 2$. So, if $0\to T(R)\to E^0\to E^1\to E^2\to 0$ is an injective resolution of $T(R)$, then $C^1= E^0\oplus E^1\oplus E^2$ is a w-tilting $T(R)$-module. Note that $T(R)$ has at least three w-tilting modules, $C^1$, $C^2=T(R)$ and $C^3=C$.
\end{exmp}
\section{Relative Gorenstein projective modules}
In this section, we describe $G_C$-projective modules over $T$. Then we use this description to study when the class of $G_C$-projective $T$-modules is a special precovering class.
Clearly, the functor $\textbf{p}$ preserves projective modules. So we start by studying when the functor $\textbf{p}$ also preserves relative Gorenstein projective modules. But first, we need the following lemma.
\begin{lem}\label{when p preseves particular G_C-projective modules} Let $C=\textbf{p}(C_1,C_2)$ be a $T$-module and $U$ be weakly $C$-compatible.
\begin{enumerate}
\item If $M_1\in G_{C_1}P(A)$, then $\begin{pmatrix}M_1\\
U\otimes_AM_1
\end{pmatrix}\in G_{C}P(T).$
\item If $M_2\in G_{C_2}P(B)$, then
$\begin{pmatrix}
0\\
M_2
\end{pmatrix}\in G_{C}P(T).$
\end{enumerate}
\end{lem}
{\parindent0pt {\bf Proof.\ }} $1.$ Suppose that $M_1\in G_{C_1}P(A)$. There exists a ${\rm Hom}_A(-,{\rm Add}_A(C_1))$-exact exact sequence
$$\textbf{X}_1:\;\cdots\to P^1_1\to P_1^0\to C_1^0\to C^1_1\to \cdots$$
where the $P^i_1$'s are all projective, $C^i_1\in {\rm Add}_A(C_1)$ $\forall i$ and $M_1\cong {\rm Im}(P_1^0\to C_1^0)$. Using the fact that $U$ is weakly $C$-compatible, we get that the complex $U\otimes_A\textbf{X}_1$ is exact in $B$-Mod, which implies that the complex $\textbf{p}(\textbf{X}_1,0):$
$$
\cdots\to\begin{pmatrix}
P^1_1\\U\otimes_A P^1_1
\end{pmatrix}\to\begin{pmatrix}
P^0_1\\U\otimes_A P^0_1
\end{pmatrix}\to\begin{pmatrix}
C^0_1\\U\otimes_A C^0_1
\end{pmatrix}\to\begin{pmatrix}
C^1_1\\U\otimes_A C^1_1
\end{pmatrix}\to\cdots $$
is exact with $\begin{pmatrix}
M_1\\U\otimes_A M_1
\end{pmatrix}\cong {\rm Im}(\begin{pmatrix}
P^0_1\\U\otimes_A P^0_1
\end{pmatrix}\to\begin{pmatrix}
C^0_1\\U\otimes_A C^0_1
\end{pmatrix})$. Clearly, $\textbf{p}(P_1^i,0)=\begin{pmatrix}
P^i_1\\U\otimes_A P^i_1
\end{pmatrix}\in {\rm Proj}(T)$ and $\textbf{p}(C_1^i,0)=\begin{pmatrix}
C^i_1\\U\otimes_A C^i_1
\end{pmatrix}\in {\rm Add}_T(C)$ $\forall i\in \N$ by Lemma \ref{projectives-injectives over T}(1) and Lemma \ref{Add-Prod}(1). If $X\in {\rm Add}_T(C)$, then $X_1\in {\rm Add}_A(C_1)$ by Lemma \ref{Add-Prod}(1) and using the adjointness, we get that the complex \\
${\rm Hom}_T(\textbf{p}(\textbf{X}_1,0),X)\cong {\rm Hom}_A(\textbf{X}_1,X_1)$ is exact. Hence $\begin{pmatrix}
M_1\\U\otimes_A M_1
\end{pmatrix}$ is $G_C$-projective.
$2.$ Suppose that $M_2$ is $G_{C_2}$-projective. There exists a ${\rm Hom}_B(-,{\rm Add}_B(C_2))$-exact exact sequence
$$\textbf{X}_2:\;\cdots\to P^1_2\to P_2^0\to C_2^0\to C^1_2\to \cdots$$
where the $P^i_2$'s are all projective, $C^i_2\in {\rm Add}_B(C_2)$ $\forall i$ and $M_2\cong {\rm Im}(P_2^0\to C_2^0).$ Clearly the complex
$$\textbf{p}(0,\textbf{X}_2):
\cdots\to\begin{pmatrix}
0\\ P^1_2
\end{pmatrix}\to\begin{pmatrix}
0\\ P^0_2
\end{pmatrix}\to\begin{pmatrix}
0\\ C^0_2
\end{pmatrix}\to\begin{pmatrix}
0\\ C^1_2
\end{pmatrix}\to\cdots $$
is exact with $\begin{pmatrix}
0\\ M_2
\end{pmatrix}\cong {\rm Im}(\begin{pmatrix}
0\\ P^0_2
\end{pmatrix}\to\begin{pmatrix}
0\\ C^0_2
\end{pmatrix})$, $\textbf{p}(0,P_2^i)=\begin{pmatrix}
0\\ P^i_2
\end{pmatrix}\in {\rm Proj}(T)$ and $\textbf{p}(0,C_2^i)=\begin{pmatrix}
0\\ C^i_2
\end{pmatrix}\in {\rm Add}_T(C)$ $\forall i,$ by Lemma \ref{projectives-injectives over T}(1) and Lemma \ref{Add-Prod}(1). Let $X\in {\rm Add}_T(C)$. Then, by Lemma \ref{Add-Prod}(1), $X=\textbf{p}(X_1,X_2)$ where $X_1\in {\rm Add}_A(C_1)$ and $X_2\in {\rm Add}_B(C_2)$. Using adjointness, we get that
$${\rm Hom}_T(\textbf{p}(0,\textbf{X}_2),X)\cong {\rm Hom}_B(\textbf{X}_2,U\otimes_AX_1)\oplus{\rm Hom}_B(\textbf{X}_2,X_2)$$
The complex ${\rm Hom}_B(\textbf{X}_2,X_2)$ is exact and since $U$ is weakly $C$-compatible, the complex ${\rm Hom}_B(\textbf{X}_2,U\otimes_AX_1)$ is also exact. This means that ${\rm Hom}_T(\textbf{p}(0,\textbf{X}_2),X)$ is exact as well and $\begin{pmatrix}
0\\M_2
\end{pmatrix}$ is $G_C$-projective.
\cqfd
\begin{prop}\label{when p preserves G_C-projectives } Let $C=\textbf{p}(C_1,C_2)$ be a $T$-module. If $U$ is weakly $C$-compatible, then the functor $\textbf{p}$ sends $G_{(C_1,C_2)}$-projectives to $G_C$-projectives. The converse holds provided that $C_1$ and $C_2$ are w-tilting.
In particular, $\textbf{p}$ preserves Gorenstein projective modules if and only if $U$ is weakly compatible.
\end{prop}
{\parindent0pt {\bf Proof.\ }} Note that
$$\textbf{p}(M_1,M_2)=\begin{pmatrix}
M_1\\U\otimes_A M_1
\end{pmatrix}\oplus \begin{pmatrix}
0\\M_2
\end{pmatrix}.$$ So this direction follows from Lemma \ref{when p preseves particular G_C-projective modules} and \cite[Proposition 2.5]{BGO1}.
Conversely, assume that $C_1$ and $C_2$ are w-tilting. By Proposition \ref{charaterization of condition a and b }, it suffices to prove that ${\rm Tor}_1^A(U,G_{C_1}P(A))=0={\rm Ext}_B^1(G_{C_2}P(B),U\otimes_A{\rm Add}_A(C_1))$.
Let $G_1\in G_{C_1}P(A)$. By \cite[Corollary 2.13]{BGO1}, there exists a ${\rm Hom}_A(-,{\rm Add}_A(C_1))$-exact exact sequence $0\to L_1\stackrel{\imath}{\to} P_1\to G_1\to 0$, where $_AP_1$ is projective and $L_1$ is $G_{C_1}$-projective. Note that $A,C_1\in G_{C_1}P(A)$ and $B,C_2\in G_{C_2}P(B)$ by Lemma \ref{charac of w-tilitng R-mod}. Then $_TT=\textbf{p}(A,B)$ and $C=\textbf{p}(C_1,C_2)$ are $G_C$-projective, which implies, by Lemma \ref{charac of w-tilitng R-mod}, that $C$ is w-tilting. Moreover $\begin{pmatrix}
L_1\\U\otimes_AL_1
\end{pmatrix}=\textbf{p}(L_1,0)$ is also $G_C$-projective and by \cite[Corollary 2.13]{BGO1} there exists a short exact sequence $$0\to \begin{pmatrix}
L_1\\U\otimes_AL_1
\end{pmatrix}\to \begin{pmatrix}
X_1\\X_2
\end{pmatrix}_{\varphi^X}\to \begin{pmatrix}
H_1\\H_2
\end{pmatrix}_{\varphi^H}\to 0 $$
where $X=\begin{pmatrix}
X_1\\X_2
\end{pmatrix}_{\varphi^X}\in{\rm Add}_T(C)$ and $H= \begin{pmatrix}
H_1\\H_2
\end{pmatrix}_{\varphi^H}$ is $G_C$-projective.
Since $X_1\in {\rm Add}_A(C_1)$, we have the following commutative diagram with exact rows:
$$ \xymatrix{
0 \ar[r] & L_1 \ar[r]^{\imath} \ar@{=}[d] & P_1 \ar[r] \ar[d] & G_1\ar[d] \ar[r] & 0
\\
0 \ar[r] & L_1\ar[r] & X_1\ar[r] & H_1 \ar[r] &0
\\
}$$
So if we apply the functor $U\otimes_A-$ to the above diagram, we get the following commutative diagram with exact rows:
$$ \xymatrix{
& U\otimes_AL_1 \ar[r]^{1_U\otimes\imath} \ar@{=}[d] & U\otimes_AP_1 \ar[r] \ar[d] & U\otimes_AG_1\ar[d] \ar[r] & 0
\\
& U\otimes_AL_1 \ar[r] \ar@{=}[d] & U\otimes_AX_1\ar[r] \ar[d] & U\otimes_AH_1 \ar[r] \ar[d] &0
\\
0 \ar[r] & U\otimes_AL_1 \ar[r] & X_2\ar[r] & H_2 \ar[r] &0 \\
}$$
The commutativity of this diagram implies that the map $1_U\otimes\imath$ is injective, and since $P_1$ is projective, ${\rm Tor}_1^A(U,G_1)=0$.
Now let $G_2\in G_{C_2}P(B)$ and $Y_1\in{\rm Add}_A(C_1)$. By hypothesis, $\begin{pmatrix}
0\\ G_2
\end{pmatrix}=\textbf{p}(0,G_2)$ is $G_C$-projective and by Lemma \ref{Add-Prod}, $\begin{pmatrix}
Y_1\\ U\otimes_A Y_1
\end{pmatrix}=\textbf{p}(Y_1,0)\in{\rm Add}_T(C)$. Hence
${\rm Ext}_B^1(G_2,U\otimes_A Y_1)={\rm Ext}_T^1(\begin{pmatrix}
0\\ G_2
\end{pmatrix},\begin{pmatrix}
Y_1\\ U\otimes_A Y_1
\end{pmatrix})=0$ by Lemma \ref{Ext} and \cite[Proposition 2.4]{BGO1}.
\cqfd
\begin{thm}\label{structure of $G_C$-projective} Let $M=\begin{pmatrix}
M_1\\M_2
\end{pmatrix}_{\varphi^M}$ and $C=\textbf{p}(C_1,C_2)$ be two $T$-modules. If $U$ is $C$-compatible, then the following assertions are equivalent:
\begin{enumerate}
\item $M$ is $G_C$-projective.
\item \begin{itemize}
\item[(i)] $\varphi^M$ is injective.
\item[(ii)] $M_1$ is $G_{C_1}$-projective and $\overline{M}_2:={\rm Coker}\,{\varphi^M}$ is $G_{C_2}$-projective.
\end{itemize}
\end{enumerate}
In this case, if $C_2$ is $\Sigma$-self-orthogonal, then $U\otimes_A M_1$ is $G_{C_2}$-projective if and only if $M_2$ is $G_{C_2}$-projective.
\end{thm}
{\parindent0pt {\bf Proof.\ }} $2. \Rightarrow 1.$ Since $\varphi^M$ is injective, there exists an exact sequence in $T$-Mod
$$0\to \begin{pmatrix}
M_1\\U\otimes_A M_1
\end{pmatrix}\to M\to \begin{pmatrix}
0\\\overline{M}_2
\end{pmatrix}\to 0$$
Note that $\begin{pmatrix}
M_1\\U\otimes_A M_1
\end{pmatrix}$ and $\begin{pmatrix}
0\\\overline{M}_2\end{pmatrix}$ are $G_C$-projective $T$-modules by Lemma \ref{when p preseves particular G_C-projective modules}. So, $M$ is $G_C$-projective by \cite[Proposition 2.5]{BGO1}.
$1.\Rightarrow 2.$ There exists a ${\rm Hom}_T(-,{\rm Add}_T(C))$-exact exact sequence in $T$-Mod $$\textbf{X}=\cdots\to \begin{pmatrix}
P_1^1\\P_2^1
\end{pmatrix}_{\varphi^{P^1}}\to \begin{pmatrix}
P_1^0\\P_2^0
\end{pmatrix}_{\varphi^{P^0}}\to \begin{pmatrix}
C_1^0\\C_2^0
\end{pmatrix}_{\varphi^{C^0}}\to \begin{pmatrix}
C_1^1\\C_2^1
\end{pmatrix}_{\varphi^{C^1}}\to \cdots$$
where $C^i=\begin{pmatrix}
C_1^i\\C_2^i
\end{pmatrix}_{\varphi^{C^i}}\in {\rm Add}_T(C)$, $P^i=\begin{pmatrix}
P_1^i\\P_2^i
\end{pmatrix}_{\varphi^{P^i}}\in {\rm Proj}(T)$ $\forall i\in\N$, and such that $M \cong {\rm Im}(P^0\to C^0).$ Then, we get the exact sequence
$$\textbf{X}_1=\cdots\to
P_1^1\to
P_1^0\to
C_1^0\to
C_1^1\to \cdots$$
where $C^i_1\in {\rm Add}_A(C_1)$, $P^i_1\in {\rm Proj}(A)$ $\forall i\in\N$ by Lemma \ref{Add-Prod}(1) and Lemma \ref{projectives-injectives over T}(1), and such that $M_1\cong {\rm Im}(P^0_1\to C^0_1).$ Since $U$ is $C$-compatible, the complex $U\otimes_A\textbf{X}_1$ is exact with $U\otimes_A M_1\cong {\rm Im}(U\otimes_AP^0_1\to U\otimes_AC^0_1).$ If $\iota_1:M_1\to C^0_1$ and $\iota_2:M_2\to C^0_2$ are the inclusions, then $1_U\otimes \iota_1$ is injective and the following diagram commutes:
$$ \xymatrix{
&U\otimes_AM_1\ar[r]^{1_U\otimes \iota_1} \ar[d]_{\varphi^{M}} &U\otimes_AC^0_1\ar[d]_{\varphi^{C^0}}\\
& M_2 \ar[r]^{\iota_2} &C^0_2
} $$
By Lemma \ref{Add-Prod}(1), $\varphi^{C^0}$ is injective, so $\varphi^M$ is also injective.
For every $i \in\N$, $\varphi^{P^i}$ and $\varphi^{C^i}$ are injective by Lemma \ref{projectives-injectives over T} and Lemma \ref{Add-Prod}(1). Then the following diagram with exact
columns
$$ \xymatrix{
&0\ar[d] &0\ar[d] &0\ar[d] &0\ar[d] & & \\
\cdots \ar[r]& U\otimes_AP_1^1\ar[r] \ar[d]_{\varphi^{P^1}} & U\otimes_AP_1^0\ar[d]_{\varphi^{P^0}} \ar[r] & U\otimes_A C_1^0\ar[d]_{\varphi^{C^0}} \ar[r]& U\otimes_A C_1^1\ar[d]_{\varphi^{C^1}} \ar[r] & \cdots \\
\cdots \ar[r]& P_2^1\ar[r] \ar[d]& P_2^0\ar[d] \ar[r] & C_2^0\ar[d] \ar[r] & C^1_2 \ar[d] \ar[r] & \cdots\\
\cdots\ar[r] & \overline{P^1_2} \ar[r] \ar[d]& \overline{P^0_2}\ar[r] \ar[d] & \overline{C^0_2} \ar[d] \ar[r]& \overline{C_2^1}\ar[r] \ar[d] & \cdots\\
&0 &0 &0 &0 & } $$
is commutative. Since the first row and the second row are exact, we get the exact sequence of $B$-modules
$$\overline{\textbf{X}}_2:\;\cdots\to \overline{P^1_2} \to \overline{P^0_2}\to \overline{C^0_2} \to \overline{C_2^1}\to \cdots$$
where $\overline{P^i_2}\in {\rm Proj}(B)$, $\overline{C^i_2}\in {\rm Add}_B(C_2)$ by Lemma \ref{projectives-injectives over T} and Lemma \ref{Add-Prod}(1), and such that $\overline{M}_2={\rm Im}(\overline{P^0_2}\to \overline{C^0_2})$. It remains to see that $\textbf{X}_1$ and $\overline{\textbf{X}}_2$ are ${\rm Hom}_A(-,{\rm Add}_A(C_1))$-exact and ${\rm Hom}_B(-,{\rm Add}_B(C_2))$-exact, respectively. Let $X_1\in {\rm Add}_A(C_1)$ and $X_2\in {\rm Add}_B(C_2)$. Then $\textbf{p}(X_1,0)=\begin{pmatrix}
X_1\\U\otimes_A X_1
\end{pmatrix}\in {\rm Add}_T(C)$ and $\textbf{p}(0,X_2)=\begin{pmatrix}
0\\X_2
\end{pmatrix}\in {\rm Add}_T(C)$ by Lemma \ref{Add-Prod}(1). So, by using adjointness, we get that ${\rm Hom}_B(\overline{\textbf{X}}_2,X_2)\cong {\rm Hom}_T(\textbf{X},\begin{pmatrix}
0\\X_2
\end{pmatrix})$ is exact. Using adjointness again we get that
$${\rm Hom}_T(\textbf{X},\begin{pmatrix}
0\\U\otimes_A X_1
\end{pmatrix})\cong {\rm Hom}_B(\overline{\textbf{X}}_2,U\otimes_A X_1)$$
and $${\rm Hom}_T(\textbf{X},\begin{pmatrix}
X_1\\0
\end{pmatrix})\cong {\rm Hom}_A(\textbf{X}_1, X_1).$$
Note that $C^i\cong \textbf{p}(C^i_1,\overline{C^i_2})$ by Lemma \ref{Add-Prod}(1). Hence ${\rm Ext}_T^1(C^i,\begin{pmatrix}
0\\U\otimes_A X_1
\end{pmatrix})\cong {\rm Ext}_B^1(\overline{C^i_2},U\otimes_A X_1)=0$ by Lemma \ref{Ext}. So, if we apply the functor ${\rm Hom}_T(\textbf{X},-)$ to the sequence
$$0\to \begin{pmatrix}
0\\U\otimes_A X_1
\end{pmatrix}\to \begin{pmatrix}
X_1\\U\otimes_A X_1
\end{pmatrix} \to \begin{pmatrix}
X_1\\0
\end{pmatrix}\to 0,$$
we get the following exact sequence of complexes
$$0\to{\rm Hom}_B(\overline{\textbf{X}}_2,U\otimes_A X_1) \to {\rm Hom}_T(\textbf{X},\begin{pmatrix}
X_1\\U\otimes_A X_1
\end{pmatrix}) \to {\rm Hom}_A(\textbf{X}_1, X_1)\to 0.$$
Since $U$ is $C$-compatible, it follows that ${\rm Hom}_B(\overline{\textbf{X}}_2,U\otimes_A X_1)$ is exact, and since $\begin{pmatrix}
X_1\\U\otimes_A X_1
\end{pmatrix}=\textbf{p}(X_1,0)\in{\rm Add}_T(C)$ and $\textbf{X}$ is ${\rm Hom}_T(-,{\rm Add}_T(C))$-exact, the complex ${\rm Hom}_T(\textbf{X},\begin{pmatrix}
X_1\\U\otimes_A X_1
\end{pmatrix})$ is also exact. Thus ${\rm Hom}_A(\textbf{X}_1, X_1)$ is exact and the proof is finished.\cqfd
The following consequence of the above theorem gives the converse of Proposition \ref{when p preserves w-tilting}.
\begin{cor} \label{cns for p(C_1,C_2) to be w-tilting}Let $C=\textbf{p}(C_1,C_2)$ and assume that $U$ is $C$-compatible. Then $C$ is w-tilting if and only if $C_1$ and $C_2$ are w-tilting.
\end{cor}
{\parindent0pt {\bf Proof.\ }} An easy application of Lemma \ref{charac of w-tilitng R-mod} and Theorem \ref{structure of $G_C$-projective} to the $T$-modules $C=\begin{pmatrix}
C_1\\(U\otimes_A C_1)\oplus C_2
\end{pmatrix}$ and $_TT=\begin{pmatrix}
A\\U\oplus B
\end{pmatrix}$.
\cqfd
One would like to know if every w-tilting $T$-module has the form $\textbf{p}(C_1,C_2)$ where $C_1$ and $C_2$ are w-tilting. The following example gives a negative answer to this question.
\begin{exmp}Let $R$ be a quasi-Frobenius ring and $T(R)=\begin{pmatrix}
R&0\\ R&R
\end{pmatrix}$. Consider the exact sequence of $T(R)$-modules
$$0\to T(R)\to \begin{pmatrix}
R\oplus R\\ R\oplus R
\end{pmatrix}\to \begin{pmatrix}
R\\ 0
\end{pmatrix}\to 0.$$
By Lemma \ref{projectives-injectives over T}, $I^0=\begin{pmatrix}
R\oplus R\\ R\oplus R
\end{pmatrix}$ and $I^1=\begin{pmatrix}
R\\ 0
\end{pmatrix}$ are both injective $T(R)$-modules. Note that $T(R)$ is Noetherian (\cite[Proposition 1.7]{GW}), so we can see that $C:=I^0\oplus I^1$ is a w-tilting $T(R)$-module. However, by Lemma \ref{Add-Prod}, $C$ does not have the form $\textbf{p}(C_1,C_2)$ with $C_1$ and $C_2$ w-tilting, since $I^1\in{\rm Add}_{T(R)}(C)$ and $\varphi^{I^1}$ is not injective.
\end{exmp}
As an immediate consequence of Theorem \ref{structure of $G_C$-projective}, we have the following.
\begin{cor} Let $R$ be a ring and $T(R)=\begin{pmatrix}
R&0\\ R&R
\end{pmatrix}$. If $M=\begin{pmatrix}
M_1\\M_2
\end{pmatrix}_{\varphi^M}$ and $C=\textbf{p}(C_1,C_1)$ are two $T(R)$-modules with $C_1$ $\Sigma$-self-orthogonal, then the following assertions are equivalent:
\begin{enumerate}
\item $M$ is a $G_C$-projective $T(R)$-module.
\item $M_1$ and $\overline{M}_2$ are $G_{C_1}$-projective $R$-modules and $\varphi^M$ is injective.
\item $M_1$ and $M_2$ are $G_{C_1}$-projective $R$-modules and $\varphi^M$ is injective.
\end{enumerate}
\end{cor}
An Artin algebra $\Lambda$ is called Cohen-Macaulay free, or simply CM-free, if every finitely generated Gorenstein projective module is projective. The authors in \cite{EIT} extended this definition to arbitrary rings and called a ring strongly CM-free if every Gorenstein projective module is projective.
Now, we introduce a relative notion of these rings and characterize when $T$ is such a ring.
\begin{defn} Let $R$ be a ring. Given an $R$-module $C$, $R$ is called CM-free (relative to $C$) if $G_CP(R)\cap R\text{-mod}={\rm add}_R(C)$ and it is called strongly CM-free (relative to $C$) if $G_CP(R)={\rm Add}_R(C)$.
\end{defn}
\begin{rem} Let $R$ be a ring and $C$ a $\Sigma$-self-orthogonal $R$-module. Then ${\rm Add}_R(C)\subseteq G_CP(R)$ and ${\rm add}_R(C)\subseteq G_CP(R)\cap R\text{-mod}$ by \cite[Propositions 2.5, 2.6 and Corollary 2.10]{BGO1}. Hence $R$ is CM-free (relative to $C$) if and only if every finitely generated $G_C$-projective module is in ${\rm add}_R(C)$, and it is strongly CM-free (relative to $C$) if and only if every $G_C$-projective module is in ${\rm Add}_R(C).$
\end{rem}
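For instance, any ring $R$ of finite left global dimension is strongly CM-free relative to $C={}_RR$: in this case $G_CP(R)$ coincides with the class of Gorenstein projective $R$-modules, and a Gorenstein projective module of finite projective dimension is projective.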
Using the above results, we refine and extend \cite[Theorem 4.1]{EIT} to our setting. Note that the condition that $B$ be left Gorenstein regular is not needed.
\begin{prop}\label{when T is relative CM-free} Let $_AC_1$ and $_BC_2$ be $\Sigma$-self-orthogonal, and $C=\textbf{p}(C_1,C_2)$. Assume that $U$ is weakly $C$-compatible and consider the following assertions:
\begin{enumerate}
\item $T$ is (strongly) CM-free relative to $C$.
\item $A$ and $B$ are (strongly) CM-free relative to $C_1$ and $C_2$, respectively.
\end{enumerate}
Then $1.\Rightarrow 2.$ If $U$ is $C$-compatible, then $1.\Leftrightarrow 2.$
\end{prop}
{\parindent0pt {\bf Proof.\ }} We only prove the result for relative strongly CM-free, since the case of relative CM-free is similar.
$1.\Rightarrow 2.$ By the remark above, we only need to prove that $G_{C_1}P(A)\subseteq {\rm Add}_A({C_1})$ and $G_{C_2}P(B)\subseteq {\rm Add}_B({C_2})$. Let $M_1$ be a $G_{C_1}$-projective $A$-module and $_BM_2$ a $G_{C_2}$-projective $B$-module. By the assumption and Proposition \ref{when p preserves G_C-projectives }, $\textbf{p}(M_1,M_2)\in G_CP(T)={\rm Add}_T(C)$. Hence $M_1\in{\rm Add}_A(C_1)$ and $M_2\in{\rm Add}_B(C_2)$ by Lemma \ref{Add-Prod}.
$2.\Rightarrow 1.$ Assume $U$ is $C$-compatible. Clearly, $C$ is $\Sigma$-self-orthogonal, so by the remark above, we only need to prove that $G_CP(T)\subseteq {\rm Add}_T(C)$. Let $M=\begin{pmatrix}
M_1\\M_2
\end{pmatrix}_{\varphi^M}$ be a $G_C$-projective $T$-module. By the assumption and Theorem \ref{structure of $G_C$-projective}, $M_1\in G_{C_1}P(A)={\rm Add}_A(C_1)$, $\overline{M}_2\in G_{C_2}P(B)={\rm Add}_B(C_2)$ and the map $\varphi^M$ is injective. By the assumption, we can easily see that ${\rm Ext}_B^{i\geq 1}(\overline{M}_2,U\otimes_A M_1)=0$. So the short exact sequence $0\to U\otimes_A M_1\stackrel{\varphi^M }{\to}M_2\to \overline{M}_2\to 0$ splits. Hence $M\cong \textbf{p}(M_1,\overline{M}_2)\in{\rm Add}_T(C)$ by Lemma \ref{Add-Prod}.
\cqfd
Our aim now is to study special $G_CP(T)$-precovers in $T$-Mod. We start with the following result.
\begin{prop} \label{when a module has precover} Let $C=\textbf{p}(C_1,C_2)$ be w-tilting, $U$ $C$-compatible, $M=\begin{pmatrix}
M_1\\M_2
\end{pmatrix}_{\varphi^M}$ and $G=\begin{pmatrix}
G_1\\G_2
\end {pmatrix}_{\varphi^G}$ two $T$-modules with $G$ $G_C$-projective. Then $$f=\begin{pmatrix}
f_1\\f_2
\end{pmatrix}:G\longrightarrow M$$ is a special $G_CP(T)$-precover if and only if
\begin{enumerate}
\item[(i)] $G_1\stackrel{f_1}{\to} M_1$ is a special $G_{C_1}P(A)$-precover.
\item [(ii)] $G_2\stackrel{f_2}{\to} M_2$ is surjective and its kernel lies
in $ G_{C_2}P(B)^{\perp_1}$.
\end{enumerate}
In this case, if $G_2\in G_{C_2}P(B)$, then $G_2\stackrel{f_2}{\to} M_2$ is a special $G_{C_2}P(B)$-precover.
\end{prop}
{\parindent0pt {\bf Proof.\ }} First of all, let $K={\rm Ker} f=\begin{pmatrix}
K_1\\K_2
\end{pmatrix}_{\varphi^K}$ and note that, since $C_1$ is w-tilting, ${\rm Tor}_1^A(U,H_1)=0$ for every $H_1\in G_{C_1}P(A)$ by Proposition \ref{charaterization of condition a and b }(1).
$\Rightarrow$ Since $f$ is surjective, so are $f_1$ and $f_2$. Let $H_1\in G_{C_1}P(A)$ and $H_2\in G_{C_2}P(B)$. Then $\begin{pmatrix}
H_1\\U\otimes_AH_1
\end{pmatrix},\begin{pmatrix}
0\\H_2
\end{pmatrix}\in G_CP(T)$ by Theorem \ref{structure of $G_C$-projective}. Using Lemma \ref{Ext} and the fact that $K$ lies in $G_CP(T)^{\perp_1}$, we get that $${\rm Ext}^1_A(H_1,K_1)\cong {\rm Ext}^1_T(\begin{pmatrix}
H_1\\U\otimes_AH_1
\end{pmatrix},K)=0$$ and $${\rm Ext}^1_B(H_2,K_2)\cong {\rm Ext}^1_T(\begin{pmatrix}
0\\H_2
\end{pmatrix},K)=0.$$ It remains to see that $G_1\in G_{C_1}P(A)$, which is true by Theorem \ref{structure of $G_C$-projective}, since $G$ is $G_C$-projective.
$\Leftarrow $ The morphism $f$ is surjective since $f_1$ and $f_2$ are. So we only need to prove that $K$ lies in $G_CP(T)^{\perp_1}$. Let $H\in G_CP(T)$. By Theorem \ref{structure of $G_C$-projective}, we have the short exact sequence of $T$-modules
$$0\to\begin{pmatrix}
H_1\\ U\otimes_A H_1
\end{pmatrix}\to H\to \begin{pmatrix}
0\\\overline{H}_2
\end{pmatrix}\to 0$$
where $H_1$ is $G_{C_1}$-projective and $\overline{H}_2$ is $G_{C_2}$-projective. So by hypothesis and Lemma \ref{Ext}, we get that
${\rm Ext}^1_T(\begin{pmatrix}
H_1\\U\otimes_AH_1
\end{pmatrix},K)\cong {\rm Ext}^1_A(H_1,K_1)=0$ and ${\rm Ext}^1_T(\begin{pmatrix}
0\\\overline{H}_2
\end{pmatrix},K)\cong {\rm Ext}^1_B(\overline{H}_2,K_2) =0$. Then, the exactness of this sequence
$${\rm Ext}^1_T(\begin{pmatrix}
H_1\\U\otimes_AH_1
\end{pmatrix},K)\to {\rm Ext}^1_T(H,K)\to {\rm Ext}^1_T(\begin{pmatrix}
0\\\overline{H}_2
\end{pmatrix},K)$$
implies that ${\rm Ext}^1_T(H,K)=0.$\cqfd
\begin{thm}\label{the class of G_C-proj is a special precovering} Let $C=\textbf{p}(C_1,C_2)$ be w-tilting and $U$ $C$-compatible. Then the class $G_CP(T)$ is special precovering in $T$-Mod if and only if the classes $G_{C_1}P(A)$ and $G_{C_2}P(B)$ are special precovering in $A$-Mod and $B$-Mod, respectively.
\end{thm}
{\parindent0pt {\bf Proof.\ }} $\Rightarrow$ Let $M_1$ be an $A$-module and $\begin{pmatrix}
G_1\\G_2
\end{pmatrix}_{\varphi^G}\rightarrow \begin{pmatrix}
M_1\\0
\end{pmatrix}$ be a special $G_CP(T)$-precover in $T$-Mod. Then by Proposition \ref{when a module has precover}, $G_1\to M_1$ is a special $G_{C_1}P(A)$-precover in $A$-Mod.
Let $M_2$ be a $B$-module and $\begin{pmatrix}
0\\ f_2
\end{pmatrix}:\begin{pmatrix}
G_1\\G_2
\end{pmatrix}_{\varphi^G}\rightarrow \begin{pmatrix}
0\\M_2
\end{pmatrix}$ be a special $G_CP(T)$-precover in $T$-Mod. By Proposition \ref{when a module has precover}, $G_1\to 0$ is a special $G_{C_1}P(A)$-precover. Then
${\rm Ext}^1_A(G_{C_1}P(A),G_1)=0$. On the other hand, by \cite[Proposition 2.8]{BGO1}, there exists an exact sequence of $A$-modules
$$0\to G_1\to X_1\to H_1\to 0$$
where $X_1\in {\rm Add}_A(C_1)$ and $H_1$ is $G_{C_1}$-projective. But this sequence splits, since ${\rm Ext}^1_A(H_1,G_1)=0$, which implies that $G_1\in {\rm Add}_A(C_1)$. Let $K=\begin{pmatrix}
K_1\\K_2
\end{pmatrix}_{\varphi^K}$ be the kernel of $\begin{pmatrix}
0\\ f_2
\end{pmatrix}$. Note that $K_1=G_1$. So, there exists a commutative diagram
$$ \xymatrix{
& & & & & \\
0\ar[r] & U\otimes_A G_1\ar[d]_{\varphi^{K}} \ar@{=}[r] & U\otimes_A G_1\ar[d]_{\varphi^G} \ar[r]& 0 \ar[d] \ar[r] & 0 \\
0\ar[r]& K_2\ar[d] \ar[r] & G_2\ar[d] \ar[r] & M_2 \ar@{=}[d] \ar[r] & 0\\
& \overline{K}_2\ar[r] \ar[d] & \overline{G}_2 \ar[d] \ar[r]& M_2\ar[r] \ar[d] & 0\\
&0 &0 &0 & } $$
Using the snake lemma, there exists an exact sequence of $B$-modules
$$0\to \overline{K}_2\to \overline{G}_2 \to M_2 \to 0 $$
where $\overline{G}_2$ is $G_{C_2}$-projective by Theorem \ref{structure of $G_C$-projective}. It remains to see that $\overline{K}_2$ lies in $G_{C_2}P(B)^{\perp_1}$. Let $H_2\in G_{C_2}P(B)$. Then
${\rm Ext}_B^1(H_2,K_2)=0$ by Proposition \ref{when a module has precover} and ${\rm Ext}_B^{i\geq 1}(H_2,U\otimes_A G_1)=0$ by Proposition \ref{charaterization of condition a and b }(2). From the above diagram, $\varphi^K$ is injective. So, if we apply the functor ${\rm Hom}_B(H_2,-)$ to the short exact sequence
$$0\to U\otimes_A G_1\to K_2\to \overline{K}_2\to 0,$$
we get an exact sequence $${\rm Ext}_B^1(H_2,K_2)\to {\rm Ext}_B^1(H_2,\overline{K}_2)\to {\rm Ext}_B^2(H_2,U\otimes_A G_1)$$
which implies that ${\rm Ext}_B^1(H_2,\overline{K}_2)=0$.
$\Leftarrow$ Note that the functor $U\otimes_A-:A$-Mod$\to B$-Mod is $G_{C_1}P(A)$-exact since ${\rm Tor}_1^A(U,G_{C_1}P(A))=0$ by Proposition \ref{charaterization of condition a and b }. So this direction follows from \cite[Theorem 1.1]{HZ}, since $G_CP(T)=\{M=\begin{pmatrix}
M_1\\M_2
\end{pmatrix}_{\varphi^M}\in T\text{-Mod}|M_1\in G_{C_1}P(A), \;\overline{M_2}\in G_{C_2}P(B) \text{ and $\varphi^M$ is injective }\}$ by Theorem \ref{structure of $G_C$-projective}. \cqfd
\begin{cor} Let $R$ be a ring, $T(R)=\begin{pmatrix}
R&0\\ R&R
\end{pmatrix}$ and $C=\textbf{p}(C_1,C_1)$ a w-tilting $T(R)$-module. Then $G_CP(T(R))$ is a special precovering class if and only if $G_{C_1}P(R)$ is a special precovering class.
\end{cor}
\section{Relative global Gorenstein dimension}
In this section, we investigate the $G_C$-projective dimension of $T$-modules and the left $G_C$-projective global dimension of $T$.
Let $R$ be a ring. Recall (\cite{BGO1}) that a module $M$ is said to have $G_C$-projective dimension less than or equal to $n$, ${\rm G_C\!-\!pd}(M)\leq n$, if there is an exact sequence $$0\to G_n\to\cdots\to G_0\to M\to 0$$
with $G_i\in G_CP(R)$ for every $i\in \{0,\cdots,n\}$. If $n$ is the least nonnegative integer for
which such a sequence exists then ${\rm G_C\!-\!pd}(M)=n$, and if there is no such $n$ then ${\rm G_C\!-\!pd}(M)=\infty$.
The left $G_C$-projective global dimension of $R$ is defined as:
$$G_C-PD(R)=sup\{{\rm G_C\!-\!pd}(M) \;|\; \text{$M$ is an $R$-module}\}$$
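Note that for $C={}_RR$ the class $G_CP(R)$ is the class of Gorenstein projective $R$-modules, so ${\rm G_C\!-\!pd}(M)$ and $G_C-PD(R)$ recover the Gorenstein projective dimension and the left global Gorenstein dimension recalled at the end of this section.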
\begin{lem} \label{dimension of special modules} Let $C=\textbf{p}(C_1,C_2)$ be w-tilting, $U$ $C$-compatible, $M_1$ an $A$-module and $M_2$ a $B$-module.
\begin{enumerate}
\item $\rm G_{C_2}\!-\!pd(M_2)={\rm G_C\!-\!pd}(\begin{pmatrix}
0\\M_2
\end{pmatrix}).$
\item $\rm G_{C_1}\!-\!pd(M_1)\leq{\rm G_C\!-\!pd}(\begin{pmatrix}
M_1\\U\otimes_A M_1
\end{pmatrix}),$ and the equality holds if ${\rm Tor}^A_{i\geq 1}(U,M_1)=0.$
\end{enumerate}
\end{lem}
{\parindent0pt {\bf Proof.\ }} 1. Let $n\in \N$ and consider an exact sequence of $B$-modules
$$0\to K_2^n\to G_2^{n-1}\to\cdots\to G_2^0\to M_2 \to 0$$
where each $G_2^i$ is $G_{C_2}$-projective. Thus, there exists an exact sequence of $T$-modules
$$0\to \begin{pmatrix}
0\\K_2^n
\end{pmatrix}\to \begin{pmatrix}
0\\G_2^{n-1}
\end{pmatrix}\to\cdots\to \begin{pmatrix}
0\\G_2^0
\end{pmatrix}\to \begin{pmatrix}
0\\M_2
\end{pmatrix}\to 0$$
where each $\begin{pmatrix}
0\\G_2^i
\end{pmatrix}$ is $G_C$-projective by Theorem \ref{structure of $G_C$-projective}. Again, by Theorem \ref{structure of $G_C$-projective}, $\begin{pmatrix}
0\\K_2^n
\end{pmatrix}$ is $G_C$-projective if and only if $K_2^n$ is $G_{C_2}$-projective, which means that ${\rm G_C\!-\!pd}(\begin{pmatrix}
0\\M_2
\end{pmatrix})\leq n$ if and only if $\rm G_{C_2}\!-\!pd(M_2)\leq n$ by \cite[Theorem 3.8]{BGO1}. Hence ${\rm G_C\!-\!pd}(\begin{pmatrix}
0\\M_2
\end{pmatrix})=\rm G_{C_2}\!-\!pd(M_2).$
2. We may assume that $n={\rm G_C\!-\!pd}(\begin{pmatrix}
M_1\\U\otimes_A M_1
\end{pmatrix})<\infty$. By definition, there exists an exact sequence of $T$-modules
$$0\to G^n\to G^{n-1}\to\cdots\to G^0\to \begin{pmatrix}
M_1\\U\otimes_A M_1
\end{pmatrix}\to 0$$
where each $G^i=\begin{pmatrix}
G_1^i\\G_2^i
\end{pmatrix}_{\varphi^{G^i}}$ is $G_C$-projective. Thus, there exists an exact sequence of $A$-modules
$$0\to G_1^n\to G_1^{n-1}\to\cdots\to G_1^0\to M_1\to 0$$
where each $G_1^i$ is $G_{C_1}$-projective by Theorem \ref{structure of $G_C$-projective}. So, $\rm G_{C_1}\!-\!pd(M_1)\leq n$.
Conversely, we prove that ${\rm G_C\!-\!pd}(\begin{pmatrix}
M_1\\U\otimes_A M_1
\end{pmatrix})\leq \rm G_{C_1}\!-\!pd(M_1)$. We may assume that $m:=\rm G_{C_1}\!-\!pd(M_1)<\infty.$ The hypothesis means that if
$$\textbf{X}_1 \;: \;0\to K_1^m\to P_1^{m-1}\to\cdots\to P_1^0\to M_1\to 0$$
is an exact sequence of $A$-modules where each $P_1^i$ is projective, then the complex $U\otimes_A\textbf{X}_1$ is exact. Since $C_1$ is w-tilting, each $P_1^i$ is $G_{C_1}$-projective by \cite[Proposition 2.11]{BGO1} and then $K_1^m$ is $G_{C_1}$-projective by \cite[Theorem 3.8]{BGO1}. Thus, there exists an exact sequence of $T$-modules
$$0\to \begin{pmatrix}
K_1^m\\ U\otimes_AK_1^m
\end{pmatrix}\to\begin{pmatrix}
P_1^{m-1}\\ U\otimes_AP_1^{m-1}
\end{pmatrix}\to\cdots \to\begin{pmatrix}
P_1^0\\ U\otimes_AP_1^0
\end{pmatrix}\to\begin{pmatrix}
M_1\\ U\otimes_AM_1
\end{pmatrix}\to 0$$
where $\begin{pmatrix}
K_1^m\\ U\otimes_AK_1^m
\end{pmatrix}$ and all $\begin{pmatrix}
P_1^i\\ U\otimes_AP_1^i
\end{pmatrix}$ are $G_C$-projectives by Theorem \ref{structure of $G_C$-projective}. Therefore, ${\rm G_C\!-\!pd}(\begin{pmatrix}
M_1\\U\otimes_A M_1
\end{pmatrix})\leq m=\rm G_{C_1}\!-\!pd(M_1)$.
\cqfd
Given a $T$-module $C=\textbf{p}(C_1,C_2)$, we introduce a strong notion of the $G_{C_2}$-projective global dimension of $B$, which will be crucial when we estimate the $G_C$-projective dimension of a $T$-module and the left global $G_C$-projective dimension of $T$. Set $$SG_{C_2}-PD(B)=sup\{\rm G_{C_2}\!-\!pd_B(U\otimes_A G)\;|\; G\in G_{C_1}P(A)\}.$$
\begin{rem}
\begin{enumerate}
\item Clearly, $ SG_{C_2}-PD(B)\leq G_{C_2}-PD(B)$.
\item Note that ${\rm pd}_B(U)=sup\{{\rm pd}_B(U\otimes_A P)\;|\; _AP \text{ is projective }\; \}$ (a short justification is given after this remark). Therefore, in the classical case, this strong global dimension of $B$ is nothing but the projective dimension of $_BU$.
\end{enumerate}
\end{rem}
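The equality ${\rm pd}_B(U)=sup\{{\rm pd}_B(U\otimes_A P)\;|\; _AP \text{ is projective}\}$ used in the remark above can be seen as follows: any projective $_AP$ is a direct summand of a free module $A^{(I)}$, so $U\otimes_A P$ is a direct summand of $U\otimes_A A^{(I)}\cong U^{(I)}$ and hence ${\rm pd}_B(U\otimes_A P)\leq {\rm pd}_B(U)$; the choice $P=A$ gives the reverse inequality.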
\begin{thm}\label{Estimation of dimension of a T-module}
Let $C=\textbf{p}(C_1,C_2)$ be w-tilting, $U$ $C$-compatible, $M=\begin{pmatrix}
M_1\\M_2
\end{pmatrix}_{\varphi^M}$ a $T$-module, and assume $SG_{C_2}-PD(B)<\infty$. Then
$$max\{\rm G_{C_1}\!-\!pd_A(M_1),(\rm G_{C_2}\!-\!pd_B(M_2))-(SG_{C_2}-PD(B))\}$$
$$\leq {\rm G_C\!-\!pd}(M)\leq$$
$$ max\{(\rm G_{C_1}\!-\!pd_A(M_1))+(SG_{C_2}-PD(B))+1,\rm G_{C_2}\!-\!pd_B(M_2)\}$$
\end{thm}
{\parindent0pt {\bf Proof.\ }} First of all, note that $C_1$ and $C_2$ are w-tilting by Corollary \ref{cns for p(C_1,C_2) to be w-tilting} and let $k:=SG_{C_2}-PD(B).$
Let us first prove that $$max\{\rm G_{C_1}\!-\!pd(M_1),\rm G_{C_2}\!-\!pd(M_2)-k\}\leq {\rm G_C\!-\!pd}(M).$$ We may assume that $n:={\rm G_C\!-\!pd}(M)<\infty.$ Then, there exists an exact sequence of $T$-modules
$$0\to G^n\to G^{n-1}\to\cdots\to G^0\to M\to 0$$
where each $G^i=\begin{pmatrix}
G_1^i\\G_2^i
\end{pmatrix}_{\varphi^{G^i}}$ is $G_C$-projective.
Thus, there exists an exact sequence of $A$-modules
$$0\to G_1^n\to G_1^{n-1}\to\cdots\to G_1^0\to M_1\to 0$$
where each $G_1^i$ is $G_{C_1}$-projective by Theorem \ref{structure of $G_C$-projective}. So, $\rm G_{C_1}\!-\!pd(M_1)\leq n$. By Theorem \ref{structure of $G_C$-projective}, for each $i$, there exists an exact sequence of $B$-modules
$$0\to U\otimes_AG_1^i\to G_2^i\to \overline{G_2^i}\to 0$$
where $\overline{G_2^i}$ is $G_{C_2}$-projective. Then $\rm G_{C_2}\!-\!pd(G^i_2)=\rm G_{C_2}\!-\!pd(U\otimes_AG_1^i)\leq k$ by \cite[Proposition 3.11]{BGO1}. So, using the exact sequence of $B$-modules
$$0\to G_2^n\to G_2^{n-1}\to\cdots\to G_2^0\to M_2\to 0$$ and \cite[Proposition 3.11(4)]{BGO1}, we get that
$\rm G_{C_2}\!-\!pd(M_2)\leq n+k.$
Next we prove that $${\rm G_C\!-\!pd}(M)\leq max\{\rm G_{C_1}\!-\!pd(M_1)+k+1,\rm G_{C_2}\!-\!pd(M_2)\}$$
We may assume that $$m:=max\{\rm G_{C_1}\!-\!pd(M_1)+k+1,\rm G_{C_2}\!-\!pd(M_2)\}< \infty.$$ Then
$n_1:=\rm G_{C_1}\!-\!pd(M_1)<\infty$ and $ n_2:=\rm G_{C_2}\!-\!pd(M_2)<\infty$. Since $\rm G_{C_1}\!-\!pd(M_1)=n_1\leq m-k-1$, there exists an exact sequence of $A$-modules
$$0\to G_1^{m-k-1}\to \cdots \to G_1^{n_2-k}\to\cdots\stackrel{f_1^1}{\to} G_1^0\stackrel{f_1^0}{\to} M_1\to 0$$
where each $G_1^i$ is $G_{C_1}$-projective. Since $C_2$ is w-tilting, there exists an exact sequence of $B$-modules $G_2^0\stackrel{g_2^0}{\to} M_2\to 0$ where $G_2^0$ is $G_{C_2}$-projective by \cite[Corollary 2.14]{BGO1}. Let $K_1^i={\rm Ker} f_1^i$ and define the map $f^0_2:U\otimes_A G_1^0\oplus G_2^0\to M_2$ to be $(\varphi^M(1_U\otimes f^0_1))\oplus g_2^0$. Then, we get an exact sequence of $T$-modules
$$0\to \begin{pmatrix}
K^1_1\\ K_2^1
\end{pmatrix}_{\varphi^{K^1}}\to\begin{pmatrix}
G^0_1\\(U\otimes_A G_1^0)\oplus G_2^0
\end{pmatrix}\stackrel{\begin{pmatrix}
f_1^0\\f_2^0
\end{pmatrix}}{\to} M\to 0.$$
Similarly, there exists an exact sequence of $B$-modules $G_2^1\stackrel{g_2^1}{\to} K_2^1\to 0$ where $G_2^1$ is $G_{C_2}$-projective and then,
we get an exact sequence of $T$-modules
$$0\to \begin{pmatrix}
K^2_1\\ K_2^2
\end{pmatrix}_{\varphi^{K^2}}\to\begin{pmatrix}
G^1_1\\(U\otimes_A G_1^1)\oplus G_2^1
\end{pmatrix}\to \begin{pmatrix}
K^1_1\\ K_2^1
\end{pmatrix}_{\varphi^{K^1}}\to 0.$$
Repeating this process, we get an exact sequence of $T$-modules
$$0\to \begin{pmatrix}
0\\K^{m-k}_2
\end{pmatrix}\to \begin{pmatrix}
G^{m-k-1}_1\\(U\otimes_A G_1^{m-k-1})\oplus G_2^{m-k-1}
\end{pmatrix}
\stackrel{\begin{pmatrix}
f_1^{m-k-1}\\f_2^{m-k-1}
\end{pmatrix}}{\longrightarrow}$$
$$ \cdots\to\begin{pmatrix}
G^1_1\\(U\otimes_A G_1^1)\oplus G_2^1
\end{pmatrix}\stackrel{\begin{pmatrix}
f_1^1\\f_2^1
\end{pmatrix}}{\longrightarrow} \begin{pmatrix}
G^0_1\\(U\otimes_A G_1^0)\oplus G_2^0
\end{pmatrix}\stackrel{\begin{pmatrix}
f_1^0\\f_2^0
\end{pmatrix}}{\longrightarrow} M\to 0$$
Note that $\rm G_{C_2}\!-\!pd((U\otimes_AG_1^i)\oplus G_2^i)=\rm G_{C_2}\!-\!pd( U\otimes_AG_1^i)\leq k$, for every $i\in\{0,\cdots,m-k-1\}$. So, by \cite[Proposition 3.11(2)]{BGO1} and the exact sequence
$$0\to K^{m-k}_2
\to (U\otimes_A G_1^{m-k-1})\oplus G_2^{m-k-1}
\stackrel{f_2^{m-k-1}
}{\longrightarrow}\cdots\to (U\otimes_A G_1^0)\oplus G_2^0
\stackrel{f_2^0
}{\to} M_2\to 0$$
we get that $\rm G_{C_2}\!-\!pd(K_2^{m-k})\leq k$. This means that there exists an exact sequence of $B$-modules
$$0\to G_2^m\to \cdots\to G_2^{m-k+1}\to G_2^{m-k}\to K_2^{m-k}\to 0.$$
Thus, there exists an exact sequence of $T$-modules
$$0\to \begin{pmatrix}
0\\G_2^m
\end{pmatrix}\to \cdots \to \begin{pmatrix}
0\\G_2^{m-k+1}
\end{pmatrix}\to$$
$$ \begin{pmatrix}
0\\G_2^{m-k}
\end{pmatrix}\to \begin{pmatrix}
G^{m-k-1}_1\\(U\otimes_A G_1^{m-k-1})\oplus G_2^{m-k-1}
\end{pmatrix}
\stackrel{\begin{pmatrix}
f_1^{m-k-1}\\f_2^{m-k-1}
\end{pmatrix}}{\longrightarrow}$$
$$\cdots \to \begin{pmatrix}
G^1_1\\(U\otimes_A G_1^1)\oplus G_2^1
\end{pmatrix}\stackrel{\begin{pmatrix}
f_1^1\\f_2^1
\end{pmatrix}}{\longrightarrow}\begin{pmatrix}
G^0_1\\(U\otimes_A G_1^0)\oplus G_2^0
\end{pmatrix}\stackrel{\begin{pmatrix}
f_1^0\\f_2^0
\end{pmatrix}}{\longrightarrow} M\to 0$$
By Theorem \ref{structure of $G_C$-projective}, all $\begin{pmatrix}
G^i_1\\(U\otimes_A G_1^i)\oplus G_2^i
\end{pmatrix}$ and all $\begin{pmatrix}
0\\G_2^j
\end{pmatrix}$ are $G_C$-projectives. Thus, ${\rm G_C\!-\!pd}(M)\leq m$. \cqfd
The following consequence of Theorem \ref{Estimation of dimension of a T-module} extends \cite[Proposition 2.8(1)]{EIT} and \cite[Theorem 2.7(1)]{ZLW} to the relative setting.
\begin{cor}Let $C=\textbf{p}(C_1,C_2)$ be w-tilting, $U$ $C$-compatible and $M=\begin{pmatrix}
M_1\\M_2
\end{pmatrix}_{\varphi^M}$ a $T$-module. If $SG_{C_2}-PD(B)<\infty,$
then ${\rm G_C\!-\!pd}(M)<\infty$ if and only if $\rm G_{C_1}\!-\!pd(M_1)<\infty$ and $\rm G_{C_2}\!-\!pd(M_2)<\infty.$
\end{cor}
The following theorem gives an estimate of the left $G_C$-projective global dimension of $T$.
\begin{thm}\label{Estimation} Let $C=\textbf{p}(C_1,C_2)$ be w-tilting and $U$ $C$-compatible.
Then
$$max\{G_{C_1}-PD(A),G_{C_2}-PD(B)\}$$
$$\leq G_C-PD(T)\leq $$
$$ max\{G_{C_1}-PD(A)+SG_{C_2}-PD(B)+1,G_{C_2}-PD(B)\}.$$
\end{thm}
{\parindent0pt {\bf Proof.\ }} We prove first that $max\{G_{C_1}-PD(A),G_{C_2}-PD(B)\}\leq G_C-PD(T)$. We may assume that $n:=G_C-PD(T)<\infty.$ Let $M_1$ be an $A$-module and $M_2$ be a $B$-module. Since ${\rm G_C\!-\!pd}(\begin{pmatrix}
M_1\\U\otimes_A M_1
\end{pmatrix})\leq n$ and ${\rm G_C\!-\!pd}(\begin{pmatrix}
0\\M_2
\end{pmatrix})\leq n$, $\rm G_{C_1}\!-\!pd(M_1)\leq n$ and $\rm G_{C_2}\!-\!pd(M_2)\leq n$ by Lemma \ref{dimension of special modules}. Thus $G_{C_1}-PD(A)\leq n$ and
$G_{C_2}-PD(B)\leq n$.
Next we prove that $$G_C-PD(T)\leq max\{G_{C_1}-PD(A)+1+ SG_{C_2}-PD(B),G_{C_2}-PD(B)\}.$$ We may assume that $$m:=max\{G_{C_1}-PD(A)+1+ SG_{C_2}-PD(B),G_{C_2}-PD(B)\}< \infty.$$ Then
$n_1:=G_{C_1}-PD(A)<\infty$ and $k:=SG_{C_2}-PD(B)\leq n_2:=G_{C_2}-PD(B)<\infty$.
Let $M=\begin{pmatrix}
M_1\\M_2
\end{pmatrix}_{\varphi^M}$ be a $T$-module. By Theorem \ref{Estimation of dimension of a T-module}, $${\rm G_C\!-\!pd}(M)\leq
max\{n_1+k+1,n_2\}\leq m.$$
\cqfd
\begin{cor}\label{when rel gldim of T is finite} Let $C=\textbf{p}(C_1,C_2)$ be w-tilting and $U$ $C$-compatible. Then \\
$ G_C-PD(T)<\infty $ if and only if $G_{C_1}-PD(A)<\infty$ and $G_{C_2}-PD(B)<\infty$.
\end{cor}
Recall that a ring $R$ is called left Gorenstein regular if the category $R$-Mod is Gorenstein (\cite[Definition 2.1]{EIT} and \cite[Definition 2.18]{EEG}).
We know by \cite[Theorem 1.1]{BM}, that the following equality holds:
$$sup\{{\rm Gpd}_R(M)\;|\; \text{ $M\in R$-Mod}\}=sup\{{\rm Gid}_R(M)\;|\; \text{$M\in R$-Mod}\}.$$
This common value is called the left global Gorenstein dimension of $R$, denoted by $l.{\rm Ggldim}(R)$. As a consequence of \cite[Theorem 2.28]{EEG}, a ring $R$ is left Gorenstein regular if and only if the global Gorenstein dimension of $R$ is finite.
We shall say that a ring $R$ is left $n$-Gorenstein regular if $n=l.{\rm Ggldim}(R)<\infty.$
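For instance, the left $0$-Gorenstein regular rings are exactly the quasi-Frobenius rings, while $\mathbb{Z}$ is left $1$-Gorenstein regular.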
Enochs, Izurdiaga and Torrecillas characterized in \cite[Theorem 3.1]{EIT} when $T$ is left Gorenstein regular under the conditions that $_BU$ has finite projective dimension and $U_A$ has finite flat dimension. As a direct consequence of Corollary \ref{when rel gldim of T is finite}, we refine this result.
\begin{cor} Assume that $U$ is compatible. Then
$T$ is left Gorenstein regular if and only if so are $A$ and $B$.
\end{cor}
There are some cases when the estimate in Theorem \ref{Estimation} becomes an exact formula which computes left $G_C$-projective global dimension of $T$.
Recall that an injective cogenerator $E$ in $R$-Mod is said to be strong if any $R$-module embeds in a direct sum of copies of $E$.
\begin{cor} Let $C=\textbf{p}(C_1,C_2)$ be w-tilting and $U$ $C$-compatible.
\begin{enumerate}
\item If $U=0$ then
$$ G_C-PD(T)= max\{G_{C_1}-PD(A),G_{C_2}-PD(B)\}$$
\item If $A$ is left noetherian and $_AC_1$ is a strong injective cogenerator, then
$$G_C-PD(T)=\begin{cases}
G_{C_2}-PD(B) & \text{if $U=0$ }\\
max\{SG_{C_2}-PD(B)+1,G_{C_2}-PD(B)\} & \text{if $U\neq 0$}
\end{cases}$$
\end{enumerate}
\end{cor}
{\parindent0pt {\bf Proof.\ }} 1. Using arguments similar to those in the proofs of Theorems \ref{Estimation of dimension of a T-module} and \ref{Estimation}, we can prove this statement. We only need to notice that if $U=0$, then a $T$-module $M=\begin{pmatrix}
M_1\\M_2
\end{pmatrix}_{\varphi^M}$ is $G_C$-projective if and only if $M_1$ is $G_{C_1}$-projective and $M_2$ is $G_{C_2}$-projective (since $\varphi^M$ is always injective and $M_2=\overline{M}_2$) by Theorem \ref{structure of $G_C$-projective}.
2. Note first that $G_{C_1}-PD(A)=0$ by \cite[Corollary 2.3]{BGO2}. Then the case $U=0$ follows by 1. Assume that $U\neq 0$. Note that by Theorem \ref{structure of $G_C$-projective}, $ \begin{pmatrix}
A\\0
\end{pmatrix}$ is not $G_C$-projective since $U\neq 0$. Hence $G_C-PD(T)\geq {\rm G_C\!-\!pd}_T( \begin{pmatrix}
A\\0
\end{pmatrix})\geq 1$.
By Theorem \ref{Estimation}, we have the inequality $$G_{C_2}-PD(B)\leq G_C-PD(T)\leq
max\{SG_{C_2}-PD(B)+1,G_{C_2}-PD(B)\}.$$
So, the case $SG_{C_2}-PD(B)+1\leq G_{C_2}-PD(B)$ is clear and we only need to prove the result when $SG_{C_2}-PD(B)+1> n:=G_{C_2}-PD(B)$. Since $\rm G_{C_2}\!-\!pd(U\otimes_A G)\leq G_{C_2}-PD(B)=n$ for every $G\in G_{C_1}P(A)$, we have $SG_{C_2}-PD(B)\leq n$, and hence $SG_{C_2}-PD(B)=n$. Let $G_1$ be a $G_{C_1}$-projective $A$-module with $\rm G_{C_2}\!-\!pd(U\otimes_A G_1)=n$ and consider the following short exact sequence
$$0\to \begin{pmatrix}
0\\U\otimes_A G_1
\end{pmatrix}\to \begin{pmatrix}
G_1\\U\otimes_A G_1
\end{pmatrix}\to \begin{pmatrix}
G_1\\0
\end{pmatrix}\to 0.$$
By Theorem \ref{structure of $G_C$-projective}, $ \begin{pmatrix}
G_1\\U\otimes_A G_1
\end{pmatrix}$ is $G_C$-projective and by Lemma \ref{dimension of special modules} $${\rm G_C\!-\!pd}( \begin{pmatrix}
0\\U\otimes_A G_1
\end{pmatrix})=\rm G_{C_2}\!-\!pd(U\otimes_A G_1)=n.$$ Thus, by \cite[Proposition 3.11(4)
]{BGO1}
$${\rm G_C\!-\!pd}( \begin{pmatrix}
G_1\\0
\end{pmatrix})={\rm G_C\!-\!pd}( \begin{pmatrix}
0\\U\otimes_A G_1
\end{pmatrix})+1=n+1=SG_{C_2}-PD(B)+1.$$
This shows that $G_C-PD(T)=SG_{C_2}-PD(B)+1$ and the proof is finished. \cqfd
\begin{cor} Let $R$ be a ring, $T(R)=\begin{pmatrix}
R&0\\ R&R
\end{pmatrix}$ and $C=\textbf{p}(C_1,C_1)$ where $C_1$ is w-tilting. Then
$$ G_{C}-PD(T(R))
=G_{C_1}-PD(R)+1.$$
\end{cor}
{\parindent0pt {\bf Proof.\ }}
Note first that $C$ is a w-tilting $T(R)$-module, $R$ is $C$-compatible and $SG_{C_1}-PD(R)=0$. So, by Theorem \ref{Estimation},
$$G_{C_1}-PD(R)\leq G_{C}-PD(T(R))
\leq G_{C_1}-PD(R)+1.$$
The case $G_{C_1}-PD(R)=\infty$ is clear. Assume that $n:=G_{C_1}-PD(R)<\infty.$
There exists an $R$-module $M$ with $\rm G_{C_1}\!-\!pd(M)=n$ and ${\rm Ext}_R^n(M,X)\neq 0$ for some $X\in {\rm Add}_R(C_1)$ by \cite[Theorem 3.8]{BGO1}. If we apply the functor ${\rm Hom}_{T(R)}(-,\begin{pmatrix}
0\\X
\end{pmatrix})$ to the exact sequence of $T(R)$-modules
$$0\to \begin{pmatrix}
0\\M
\end{pmatrix}\to \begin{pmatrix}
M\\M
\end{pmatrix}_{1_M}\to \begin{pmatrix}
M\\0
\end{pmatrix}\to 0$$
we get an exact sequence
$$\cdots \to{\rm Ext}^n_{T(R)}(\begin{pmatrix}
M\\M
\end{pmatrix},\begin{pmatrix}
0\\X
\end{pmatrix})\to {\rm Ext}^n_{T(R)}(\begin{pmatrix}
0\\M
\end{pmatrix},\begin{pmatrix}
0\\X
\end{pmatrix})\to $$$$ {\rm Ext}^{n+1}_{T(R)}(\begin{pmatrix}
M\\0
\end{pmatrix},\begin{pmatrix}
0\\X
\end{pmatrix}) \to {\rm Ext}^{n+1}_{T(R)}(\begin{pmatrix}
M\\M
\end{pmatrix},\begin{pmatrix}
0\\X
\end{pmatrix})\to\cdots $$
By Lemma \ref{Ext}, ${\rm Ext}^{i\geq 1}_{T(R)}(\begin{pmatrix}
M\\M
\end{pmatrix},\begin{pmatrix}
0\\X
\end{pmatrix})\cong {\rm Ext}_R^{i\geq 1}(M,0)=0$. Again by Lemma \ref{Ext} and the above exact sequence, $${\rm Ext}^{n+1}_{T(R)}(\begin{pmatrix}
M\\0
\end{pmatrix},\begin{pmatrix}
0\\X
\end{pmatrix})\cong {\rm Ext}^n_{T(R)}(\begin{pmatrix}
0\\M
\end{pmatrix},\begin{pmatrix}
0\\X
\end{pmatrix})\cong {\rm Ext}_R^n(M,X)\neq 0.$$
Since $\begin{pmatrix}
0\\X
\end{pmatrix}\in {\rm Add}_{T(R)}(C)$ by Lemma \ref{Add-Prod}(1), it follows that $n<{\rm G_C\!-\!pd}(\begin{pmatrix}
M\\0
\end{pmatrix})$ by \cite[Theorem 3.8]{BGO1}. But ${\rm G_C\!-\!pd}(\begin{pmatrix}
M\\0
\end{pmatrix})\leq G_C-PD(T(R))
\leq n+1$. Thus ${\rm G_C\!-\!pd}(\begin{pmatrix}
M\\0
\end{pmatrix})=n+1$, which means that $G_C-PD(T(R))
= n+1.$ \cqfd
\begin{cor} Let $R$ be a ring, $T(R)=\begin{pmatrix}
R&0\\ R&R
\end{pmatrix}$ and $n\geq 0$ an integer. Then $T(R)$ is left $(n+1)$-Gorenstein regular if and only if $R$ is left $n$-Gorenstein regular.
\end{cor}
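For example, if $R$ is quasi-Frobenius, hence left $0$-Gorenstein regular, then $T(R)$ is left $1$-Gorenstein regular.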
The authors in \cite{AS} establish a relationship between the projective dimension of
modules over $T$ and modules over $A$ and $B$. Given an integer $n\geq 0$ and $M=\begin{pmatrix}
M_1\\M_2
\end{pmatrix}_{\varphi^M}$ a $T$-module, they proved that ${\rm pd}_T(M)\leq n $ if and only if ${\rm pd}_A(M_1)\leq n$, ${\rm pd}_B(\overline{M}_2)\leq n$ and the map related to the $n$-th syzygy of $M$ is injective. The following example shows that this is not true in general.
\begin{exmp}\label{counter example} Let $R$ be a left hereditary ring which is not semisimple and let $T(R)=\begin{pmatrix}
R&0\\ R&R
\end{pmatrix}$. Then $lD(T(R))=lD(R)+1=2$ by \cite[Corollary 3.4(3)]{M}. This means that there exists a $T(R)$-module $M=\begin{pmatrix}
M_1\\M_2
\end{pmatrix}_{\varphi^M}$ with ${\rm pd}_{T(R)}(M)=2$. If $K^1=\begin{pmatrix}
K_1^1\\K_2^1
\end{pmatrix}_{\varphi^{K^1}}$ is the first syzygy of $M$, then there exists an exact sequence of $T(R)$-modules $$0\to K^1\to P\to M\to 0$$
where $P=\begin{pmatrix}
P_1\\ P_2
\end{pmatrix}_{\varphi^P}$ is projective. Then we get the following commutative diagram
$$ \xymatrix{
& &0\ar[d] & & & \\
0\ar[r]& K^1_1\ar[d]_{\varphi^{K^1}} \ar[r] & P_1\ar[d]_{\varphi^P} \ar[r]& M_1\ar[d]_{\varphi^M} \ar[r] & 0 \\
0\ar[r]& K^1_2\ar[d] \ar[r] & P_2\ar[d] \ar[r] & M_2 \ar[d] \ar[r] & 0\\
& \overline{K^1_2}\ar[r] \ar[d] & \overline{P}_2 \ar[d] \ar[r]& \overline{M}_2\ar[r] \ar[d] & 0\\
&0 &0 &0 & } $$
By the Snake Lemma, $\varphi^{K^1}$ is injective. On the other hand, since $lD(R)=1$, we have ${\rm pd}_R(M_1)\leq 1$ and ${\rm pd}_R(\overline{M}_2)\leq 1$. But ${\rm pd}_{T(R)}(M)=2>1$. \cqfd
\end{exmp}
\noindent {\bf Acknowledgement.}
The third and fourth authors were partially supported by Ministerio de
Econom\'{\i}a y Competitividad, grant reference 2017MTM2017-86987-P, and Junta de
Andaluc\'{\i}a, grant reference
P20-00770. The authors would like to thank Professor Javad Asadollahi for the discussion on Example \ref{counter example}.
\end{document}
\begin{document}
\title{Unique Continuation on Convex Domains}
\author{Sean McCurdy\\}
\maketitle
\begin{abstract}
In this paper, we adapt powerful tools from geometric analysis to get \textit{quantitative} estimates on the quantitative strata of the generalized critical set of harmonic functions which vanish continuously on an open subset of the boundary of a convex domain. These estimates represent a significant improvement upon existing results for boundary unique continuation in the convex case.
\end{abstract}
\tableofcontents
\section{Introduction}
Unique continuation is a fundamental property for functions which solve the Laplace and related linear equations. A closely related problem is that of boundary unique continuation--- given a domain, $\Omega \subset \mathbb{R}^n$, and a function, $u$, which is harmonic in $\Omega$ and vanishes continuously on $V \subset \partial \Omega$, how large can the set $\{Q \in V: |\nabla u| = 0\}$ be if $u \not \equiv 0$? Boundary unique continuation is closely tied to the Cauchy problem and questions of well-posedness and stability of solutions to boundary value problems (see, for instance, \cite{Tataru02} and \cite{AlessandriniRondiRossetVessella09}). In this paper, we address questions in boundary unique continuation for harmonic functions. We follow the approach of Garofalo and Lin \cite{GarofaloLin87} insofar as we make essential use of the Almgren frequency function. However, we introduce to this field recent techniques from geometric measure theory and geometric analysis developed by Naber and Valtorta \cite{NaberValtorta17-1}. These tools allow us to obtain very fine results not just on the size and structure of the set $\{Q \in V \subset \partial \Omega : |\nabla u| =0 \}$, but also on how the strata of the critical set, $\{ p \in \Omega : |\nabla u| = 0\}$, approach $V \subset \partial \Omega.$
For dimensions $n \ge 3$, Bourgain and Wolff \cite{BourgainWolff90} have constructed an example of a function, $u: \mathbb{R}^{n}_+ \rightarrow \mathbb{R}$, which is harmonic in $\mathbb{R}^n_+$, $C^1$ up to the boundary $\mathbb{R}^{n-1} \subset \mathbb{R}^n$, and for which both $u$ and $\nabla u$ vanish on a set of positive surface measure. This result has been generalized by Wang \cite{Wang95}
to $C^{1, \alpha}$ domains, $\Omega \subset \mathbb{R}^n,$ for $n \ge 3$. However, the sets of positive measure for which these functions vanish are \textit{not} open.
In general, the following question posed by Lin in \cite{LinFH91} is still open.
\begin{question}\label{open question}
Let $n \ge 2$ and $\Omega \subset \mathbb{R}^n$ be an open, connected Lipschitz domain. If $u$ is a harmonic function which vanishes continuously on a relatively open set $V \subset \partial \Omega$, does
$$
\mathcal{H}^{n-1}(\{x \in V: |\nabla u| = 0\}) > 0
$$
imply that $u$ is the zero function?
\end{question}
If $u$ is non-negative, the techniques of PDEs on non-tangentially accessible (NTA) domains give a comparison principle \cite{Dahlberg77} which allows us to say that the norm of the normal derivative is point-wise comparable to the density of the harmonic measure with respect to the surface measure, $d\sigma$. Additionally, for Lipschitz domains it is well-known that the harmonic measure is mutually absolutely continuous with respect to $d\sigma$. These two facts then imply that if the normal derivative vanishes on a set of positive (surface) measure, then $u$ must be identically $0$.
The challenge is for harmonic functions, $u$, which change sign. For such functions, the aforementioned techniques fail completely because we cannot apply the Harnack principle. Authors have therefore approached this problem by asking for additional regularity. In \cite{LinFH91}, Lin proves that for $C^{1, 1}$ domains, $\Omega \subset \mathbb{R}^n$, for $n \ge 2$, if $u$ is a non-constant harmonic function which vanishes on an open set $V \cap \partial \Omega$, then $dim_{\mathcal{H}}(\{x \in V \cap \partial \Omega: |\nabla u| = 0\}) \le n-2.$ Similar results were later shown by Adolfsson and Escauriaza \cite{AdolfssonEscauriaza97} for domains with locally $C^{1, \alpha}$ boundary and for $C^{1}$ Dini boundaries $\partial \Omega$. Similarly, Kukavica and Nystr\"om showed that $\mathcal{H}^{n-1}(\{x \in V: |\nabla u| = 0\}) > 0$ implies that $u \equiv 0$ if $\partial \Omega$ is $C^{1}$ Dini \cite{KukavicaNystrom98}.
Making different geometric assumptions on the boundary, $\partial \Omega$, Adolfsson, Escauriaza, and Kenig showed that for a convex domain, $\Omega \subset \mathbb{R}^n$, if $u$ is a harmonic function in $\Omega$ which vanishes continuously on a relatively open set $V \subset \partial \Omega$, then if $\{x \in V \cap \partial \Omega: |\nabla u| = 0\}$ has positive surface measure, $u$ must be a constant function \cite{AdolfssonEscauriazaKenig95}.
The method of attack pursued in \cite{LinFH91}, \cite{AdolfssonEscauriazaKenig95}, \cite{KukavicaNystrom98}, and \cite{AdolfssonEscauriaza97} was centered on showing that the harmonic function is ``doubling" on the boundary in the following sense. If $\Omega \subset \mathbb{R}^n$, then there exists an absolute constant $M<\infty$ such that for all $B_{2r}(Q) \cap \partial \Omega \subset V,$
\begin{align*}
\int_{B_{2r}(Q) \cap \Omega}u^2 dx \le M \int_{B_r(Q)\cap \Omega} u^2 dx.
\end{align*}
This doubling property allows them to show that the normal derivative is an $A_2$ Muckenhoupt weight with respect to surface measure, a kind of quantified version of mutual absolute continuity. It is well known that if $u$ vanishes in a surface ball $\Delta_r(Q)$ and the normal derivative of $u$ is an $A_2$-weight with respect to surface measure, then either $\{Q' \in \Delta_r(Q) : | \nabla u| = 0 \}$ has measure zero or $\{Q' \in \Delta_r(Q) : |\nabla u| > 0 \}$ has measure zero. The improvement from measure to dimension bounds in \cite{AdolfssonEscauriaza97} and \cite{LinFH91} comes from applying an additional Federer dimension-reduction type argument.
In this paper, we restrict our investigation to convex domains, $\Omega$, and introduce to this context powerful new tools developed in \cite{NaberValtorta17-1}. These tools can be viewed as a quantitative refinement of Federer dimension-reduction as used by Almgren in his stratification of singularities result (\cite{Almgren84}, Corollary $2.27$). In particular, the techniques of \cite{NaberValtorta17-1} allow for very refined estimates on the size and structure of the strata of both $\{Q \in V \subset \partial \Omega : |\nabla u| =0 \}$ and $\{ p \in \Omega : |\nabla u| = 0\}$ as they approach $V \subset \partial \Omega.$
Throughout this paper, the term \textit{singular set} will refer to the subset of the boundary,
$$
sing(\partial \Omega) \subset \partial \Omega,
$$
for which all geometric blow-ups are not flat. We note that for convex domains all blow-ups are unique. For flat points, $Q \in \partial \Omega$, we shall denote the normal to $\partial \Omega$ at $Q$ by $\vec \eta_{Q}$. Because we wish to treat both interior points $p \in \Omega$ and boundary points, $Q \in V \subset \partial \Omega$ all at once, we make the following definition. We shall use the term \textit{generalized critical set} of $u$ to refer to,
\begin{equation}
C(u) = \{p \in \Omega: |\nabla u| = 0\} \cup \{Q \in \partial \Omega : |\nabla u| = 0\}
\end{equation}
where for $Q \in \partial \Omega$ we shall abuse notation by writing $|\nabla u| = 0$ to mean that blow-ups (properly normalized) are neither linear nor one-sided linear (e.g., $x_n^+$). We justify this in Section \ref{Appendix D: Blow-ups}, where we shall show that both the singular set of $\partial \Omega \cap V$ and the flat points $Q \in V$ for which $\lim_{h \rightarrow 0^+}\frac{u(Q + h \vec \eta_Q) - u(Q)}{h} = 0$ are contained in $\{Q \in \partial \Omega : |\nabla u| = 0\}$.
We investigate the following questions.
\begin{questions}\label{q:2}
How do the strata of the generalized critical set, $C(u) \cap V$, sit in the boundary $V \cap \partial \Omega$? Are the strata of $\{Q \in V \subset \partial \Omega : |\nabla u| = 0\}$ rectifiable? Do the strata of $\{p \in \Omega: |\nabla u| = 0\}$ oscillate wildly and become ``thick" as they approach the boundary, $V \cap \partial \Omega$?
\end{questions}
Our gauge of how a set ``sits in space" or how ``thick" it is will be estimates on the volume of tubular neighborhoods and upper Minkowski dimension. We shall use the convention that for any $A \subset \mathbb{R}^n$, $B_r(A) = \{x \in \mathbb{R}^n : d(A, x) < r \}$. Recall that we can define upper Minkowski $s$-content by
\begin{align}\label{upper minkowski content}
\mathcal{M}^{*s}(A) = \limsup_{r \rightarrow 0}\frac{Vol(B_r(A))}{(2r)^{n-s}}
\end{align}
and upper Minkowski dimension as
$$\overline \dim_{\mathcal{M}}(A) = \inf\{s: \mathcal{M}^{*s}(A) = 0 \} = \sup \{s: \mathcal{M}^{*s}(A) > 0 \}.$$
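For example, if $A = V\cap B_1(0)$ for a $k$-dimensional linear subspace $V \subset \mathbb{R}^n$, then $Vol(B_r(A)) \simeq c(n,k)\, r^{n-k}$ for small $r$, so $\mathcal{M}^{*s}(A) = 0$ for $s > k$, $\mathcal{M}^{*k}(A) \in (0, \infty)$, and $\overline \dim_{\mathcal{M}}(A) = k$, as one expects.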
The strata we shall be investigating are a modification of the quantitative strata introduced by Cheeger and Naber in \cite{CheegerNaber13} for studying the regularity of stationary harmonic maps and minimal currents. The standard stratification of the singular or critical set is by symmetry. For $0 \le k \le n-1$, we define the $k^{th}$ critical stratum as follows.
\begin{align*}
S^k = \{x \in \overline \Omega: \text{blow-ups are at most $k$-symmetric} \}
\end{align*}
where by $k$-symmetric, we mean that the function or set is translation invariant along a $k$-plane. The clever idea of Cheeger and Naber is to further stratify $S^k$ into $\mathcal{S}^k_{\epsilon, r_0}(u)$ where $\epsilon$ quantifies how the rescalings of $u$ avoid higher symmetries at all scales above $r_0$. See Subsection \ref{rescaling and symmetry} for rigorous definitions.
The main contribution of this paper is the following theorem, stated roughly.
\begin{thm}\label{T: main theorem 1.1}
For $n \ge 2$, let $u: \mathbb{R}^n \rightarrow \mathbb{R}$ be harmonic inside a convex domain, $\Omega \subset \mathbb{R}^n$ with $diam(\Omega) \ge 3$ and vanish continuously on $B_{2}(0) \cap \partial \Omega$. If $u$ is not identically zero in $\Omega$, and
\begin{align*}
N(0, 2, u) = \frac{\int_{B_2(0)}|\nabla u|^2dx}{\int_{\partial B_2(0) \cap \Omega} (u)^2 d\sigma} \le \Lambda
\end{align*}
then for all $0 \le k \le n-2$, all $\epsilon > 0$, and all $r > r_0 > 0$,
\begin{equation}
Vol(B_r(\mathcal{S}^k_{\epsilon, r_0}(u) \cap B_{\frac{1}{16}}(0))) \le C(n, \Lambda, \epsilon)r^{n-k}
\end{equation}
\end{thm}
We include a number of corollaries to this theorem (see Section \ref{S:main results} for full statements of all results), but in particular, we are able to show the following.
\begin{cor}\label{C: corollary 1.2}
For $u$, $\Omega$ as in Theorem \ref{T: main theorem 1.1}, if $u$ is non-constant in $\Omega$, then
\begin{equation}
\mathcal{M}^{*, n-2}\left(\left(\{Q \in \partial \Omega : |\nabla u| = 0\} \setminus sing(\partial \Omega) \right) \cap B_{\frac{1}{16}}(0)\right) \le C(n, \Lambda) < \infty.
\end{equation}
and $ \{Q \in \partial \Omega : |\nabla u| = 0\}$ is countably $(n-2)$-rectifiable.
\end{cor}
The author would like to thank Tatiana Toro, whose advice, patience, and support can only be described as \textit{sine qua non}.
\section{Definitions and Preliminaries}\label{S:defs}
We shall denote the $C^{0, \gamma}(B_1(0))$-norm by,
$$
||u||_{C^{0, \gamma}(B_1(0))} = ||u||_{C^0(B_1(0))} + \sup_{x, y \in B_1(0)} \frac{|u(x) - u(y)|}{|x-y|^{\gamma}}.$$
Throughout this paper, we shall not keep close track of constants. The constant $C$ will be ubiquitous and represent different constants even within the same string of inequalities. A constant, $C(n, \Lambda)$, will only depend upon $n$ and $\Lambda$, but each instantiation may represent a distinct constant.
\subsection{A class of domains}
In this paper, we begin by investigating convex domains. To that end, we normalize and define the following class of domains.
\begin{definition} \label{domain def}
Let
$\mathcal{D}(n)$ be the collection of domains, $\Omega \subset \mathbb{R}^n$, which satisfy the following conditions:
\begin{enumerate}
\item $0 \in \partial \Omega$.
\item $\Omega \cap B_2(0)$ is convex.
\item $\Omega \cap (B_2(0))^c \not = \emptyset$.
\end{enumerate}
\end{definition}
\subsection{Almgren Frequency and a class of functions}
One of the key tools of this paper will be an Almgren frequency function. Introduced by Almgren in \cite{Almgren79}, Almgren frequency-type functions have been well-studied.
\begin{definition} \label{N def} Let $r >0$, $\Omega \subset \mathbb{R}^n$, and $p \in \overline \Omega$. For any function $u: \mathbb{R}^n \rightarrow \mathbb{R}$, such that $u \in C(B_{2r}(p)) \cap W^{1, 2}(B_{2r}(p))$ we define the following quantities:
$$H_{\Omega}(p, r, u) = \int_{\partial B_r(p) \cap \overline{\Omega}} |u-u(p)|^2 d\sigma$$
$$D_{\Omega}(p, r, u) = \int_{B_r(p)} |\nabla u|^2dx$$
$$
N_{\Omega}(p, r, u) = r \frac{D_{\Omega}(p, r, u)}{H_{\Omega}(p, r, u)}.
$$
\end{definition}
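For example, if $P$ is a harmonic polynomial which is homogeneous of degree $d \ge 1$ about $p$ (so that $P(p) = 0$ and we may take $\Omega = \mathbb{R}^n$), then integration by parts gives $D_{\mathbb{R}^n}(p, r, P) = \int_{\partial B_r(p)} P \,(\nabla P \cdot \vec \eta)\, d\sigma = \frac{d}{r}H_{\mathbb{R}^n}(p, r, P)$, since $\nabla P \cdot \vec \eta = \frac{d}{r}P$ on $\partial B_r(p)$; hence $N_{\mathbb{R}^n}(p, r, P) = d$ for every $r > 0$.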
\begin{rmk} \label{R: rescalings}
This normalized version of the Almgren frequency function is invariant in the following senses. Let $a, b, c \in \mathbb{R}$ with $a, b \not = 0$. If $w(x) = au(bx + p) + c$ and $T_{p,b}\Omega = \frac{1}{b}(\Omega - p)$ then
$$
N_{\Omega}(p,r, u) = N_{T_{p, b}\Omega}(0, b^{-1}r, w)
$$
\end{rmk}
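Indeed, for $b > 0$ the change of variables $y = bx + p$ gives $D_{T_{p,b}\Omega}(0, s, w) = a^2 b^{2-n} D_{\Omega}(p, bs, u)$ and $H_{T_{p,b}\Omega}(0, s, w) = a^2 b^{1-n} H_{\Omega}(p, bs, u)$, so that $N_{T_{p,b}\Omega}(0, s, w) = N_{\Omega}(p, bs, u)$; taking $s = b^{-1}r$ yields the stated invariance.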
\begin{definition} \label{A def}
Let
$\mathcal{A}(n, \Lambda)$ be the set of functions, $u: \mathbb{R}^n \rightarrow \mathbb{R}$, which have the following properties:
\begin{enumerate}
\item $u: \mathbb{R}^n \rightarrow \mathbb{R}$ is harmonic in a convex domain, $\Omega \in \mathcal{D}(n)$.
\item $u \in C(\overline{B_2(0)})$ and $u = 0$ on $\Omega^c \cap B_2(0)$.
\item $N_{\Omega}(0, 2, u) \le \Lambda$.
\end{enumerate}
\end{definition}
This last assumption, that $N_{\Omega}(0, 2, u) \le \Lambda$, will give us the geometric non-degeneracy we need for the domains $\Omega \in \mathcal{D}(n)$ which we will consider. We now note some of the elementary properties of $H_{\Omega}(p, r, u), D_{\Omega}(p, r, u), N_{\Omega}(p, r, u),$ and their derivatives.
\begin{lem} \label{L: avg log H derivative}
Let $u \in \mathcal{A}(n, \Lambda)$, $p \in \overline{\Omega} \cap B_1(0)$, and $0< r< 1$. Then
\begin{equation}\label{H derivative}
\frac{d}{dr}H_{\Omega}(p, r, u) = \frac{n-1}{r}H_{\Omega}(p, r, u) + 2D_{\Omega}(p, r, u) + 2\int_{B_r(p)}(u - u(p))\Delta u
\end{equation}
\begin{align} \nonumber \label{D derivative}
\frac{d}{dr}D_{\Omega}(p, r, u) = & \frac{n-2}{r}D_{\Omega}(p, r, u) + 2\int_{\partial B_r(p)}(\nabla u \cdot \vec \eta)^2d\sigma\\
& + \int_{\partial \Omega \cap B_r(p)}(Q-p) \cdot \vec \eta (\nabla u \cdot \vec \eta)^2 d\sigma(Q)
\end{align}
\begin{align}
\frac{d}{dr}\ln(\frac{1}{r^{n-1}}H_{\Omega}(p, r, u)) & = \frac{2}{r}N_{\Omega}(p, r, u) + 2\frac{\int_{B_r(p)}(u - u(p))\Delta u}{H_{\Omega}(p, r, u)}\\
\frac{d}{dr}\ln(H_{\Omega}(p, r, u)) & = \frac{n-1}{r} + \frac{2}{r}N_{\Omega}(p, r, u) + 2 \frac{\int_{B_r(p)}(u - u(p))\Delta u}{H_{\Omega}(p, r, u)}
\end{align}
\end{lem}
\begin{rmk}
In the interior setting, for $\overline{B_r(p)} \subset \Omega$, these identities follow from straightforward computation. (\ref{H derivative}) follows from the change of variables, $y \rightarrow rx + p$, and the divergence theorem. (\ref{D derivative}) relies upon the Rellich-Necas Identity,
\begin{equation}
div(X|\nabla u|^2) = 2div((X \cdot \nabla u)\nabla u) + (n-2)|\nabla u|^2,
\end{equation}
the divergence theorem, and the fact that $u$ vanishes on the boundary. The last two equations follow immediately from (\ref{H derivative}). Without exception, the standard interior computations go through identically for radii for which $B_r(p) \cap \partial \Omega \not = \emptyset$.
\end{rmk}
Because we wish to prove the almost-monotonicity for $N_{\Omega}(p, r, u)$ as a function of $r$ for radii such that $B_{r}(p) \cap \partial \Omega \not = \emptyset$, we need to investigate $\frac{d}{dr}N_{\Omega}(p, r, u).$ The following lemma records a useful identity which follows from the previous lemma by straightforward computation.
\begin{lem}\label{N derivative calculation}
For $u \in \mathcal{A}(n, \Lambda)$ and $p \in \overline{\Omega} \cap B_1(0)$ and all $0< r< 1$, $\frac{d}{dr}N_{\Omega}(p, r, u)$ may be decomposed into four terms,
\begin{align}
\frac{d}{dr}N_{\Omega}(p, r, u) = & N_1'(r) + N_2'(r) + N_3'(r) + N_4'(r)
\end{align}
where
\begin{align*}
N_1'(r) = & \frac{1}{H_{\Omega}(p, r, u)^2} 2r[ H_{\Omega}(p, r, u) \int_{\partial B_r(p) \cap \overline{\Omega}}(\nabla u \cdot \vec \eta)^2d\sigma - (\int_{\partial B_r(p) \cap \overline{\Omega}}(u - u(p))(\nabla u \cdot \vec \eta)d\sigma)^2]\\
N_2'(r) = & \frac{1}{H_{\Omega}(p, r, u)} r \int_{\partial \Omega \cap B_r(p)}(Q-p) \cdot \vec \eta (\nabla u \cdot \vec \eta)^2 d\sigma(Q) \\
N_3'(r) = & 2N_{\Omega}(p, r, u) \frac{1}{H_{\Omega}(p, r, u)} \int_{B_r(p)}(u - u(p))\Delta u\\
N_4'(r) = & \frac{2r}{H_{\Omega}(p, r, u)^2} (\int_{ B_r(p)}(u - u(p))\Delta u)^2
\end{align*}
\end{lem}
In Section \ref{S: beta numbers}, we will need to consider a different form of $N_1'(r)$. To that end, we include the following useful identity.
\begin{lem}\label{N_1 redefine}
Let $u \in \mathcal{A}(n, \Lambda)$, $p \in \overline{\Omega} \cap B_1(0)$, and $0< r< 1$. Then
\begin{align*}
N_1'(r) = & \frac{2r[ H_{\Omega}(p, r, u) \int_{\partial B_r(p)}(\nabla u \cdot \vec \eta)^2d\sigma - (\int_{\partial B_r(p)}(u - u(p))(\nabla u \cdot \vec \eta)d\sigma)^2]}{H_{\Omega}(p, r, u)^2}\\
= & \frac{2}{r H_{\Omega}(p, r, u)} ( \int_{\partial B_r(p) \cap \overline{\Omega}} |\nabla u \cdot (y-p) - \lambda(p, r, u) (u(y)-u(p))|^2d\sigma(y))
\end{align*}
where
\begin{align*}
\lambda(p, r, u) = \frac{\int_{\partial B_{r}(p)\cap \overline{\Omega}} (u(y) - u(p))\nabla u \cdot (y-p) d\sigma(y)}{H_{\Omega}(p, r, u)}.
\end{align*}
\end{lem}
\begin{proof}
Recall that by the Cauchy-Schwarz inequality, we have that for $\lambda = \frac{\langle u, v \rangle}{||v||^2}$
$$
||v||^2\, ||u-\lambda v||^2 = ||u||^2\, ||v||^2 - |\langle u, v \rangle|^2.
$$
Choosing $u = \nabla u \cdot (y-p)$ and $v = u- u(p)$, we have
\begin{align*}
N_1'(r) = & H_{\Omega}(p, r, u)^{-1}2r( \int_{\partial B_r(p)\cap \overline{\Omega}} |(u)_{\nu} - \frac{1}{r}\lambda(p, r, u) (u-u(p))|^2d\sigma) \\
= & \frac{2}{r H_{\Omega}(p, r, u)} ( \int_{\partial B_r(p)\cap \overline{\Omega}} |\nabla u \cdot (y-p) - \lambda(p, r, u) (u(y)-u(p))|^2d\sigma(y))
\end{align*}
\end{proof}
\begin{rmk}
For want of something better, we shall call $\lambda(p, r, u)$ the \textbf{frequency coefficient} of $u$ at scale $r$ and location $p$. If $u$ is a harmonic polynomial which is homogeneous about the point, $p$, then $\lambda(p, r, u) = N_{\mathbb{R}^n}(p, r, u)$.
\end{rmk}
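Indeed, if $u$ is harmonic and homogeneous of degree $d \ge 1$ about $p$, then $u(p) = 0$ and Euler's identity gives $\nabla u(y) \cdot (y - p) = d\,(u(y) - u(p))$ on $\partial B_r(p)$, so that $\lambda(p, r, u) = d$, which is consistent with $N_{\mathbb{R}^n}(p, r, u) = d$.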
In the expansion of $\frac{d}{dr}N_{\Omega}(p, r, u)$, the term $\int_{B_r(p)}(u - u(p))\Delta u$ shows up frequently. We interpret $\Delta u$ as a measure. We make this rigorous in the following lemma.
\begin{lem}\label{harmonic measure}
Let $u \in \mathcal{A}(n, \Lambda)$. Then $\Delta u$ is a measure supported on $\partial \Omega$. More precisely, for $p \in \overline{\Omega} \cap B_1(0)$ and all $0< r< 1$,
\begin{align*}
\int_{B_r(p)}(u - u(p))\Delta u = -u(p) \int_{\partial \Omega \cap B_r(p)} \nabla u \cdot \vec \eta d\sigma.
\end{align*}
\end{lem}
We defer the proof to Appendix A.
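In particular, if $u(p) = 0$ (for instance, if $p \in \partial \Omega$), then $\int_{B_r(p)}(u - u(p))\Delta u = 0$, so the terms $N_3'(r)$ and $N_4'(r)$ in Lemma \ref{N derivative calculation} vanish.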
\subsection{Rescaling procedures and Symmetry}\label{rescaling and symmetry}
Because the main goal in this paper is to get estimates on the fine-scale structure of the critical set of certain functions, $u$, rescalings will be a major tool. We shall use rescalings which are adapted to the quantitative stratification methods introduced by Cheeger and Naber in \cite{CheegerNaber13} for studying the regularity of stationary harmonic maps and minimal currents.
\begin{definition} \label{L2 rescaling def}
Let $u \in C(\mathbb{R}^n)$. We define the rescaled function, $T_{x, r}u$ of $u$ at a point $x \in B_{1-r}(0)$ at scale $0<r<1$ by
$$T_{x, r}u(y) = \frac{u(x +ry) - u(x)}{(\int_{\partial B_1(0)\cap \overline{\Omega}} (u(x + ry) -u(x))^2dy)^{1/2}}. $$
We denote the limit as $r \rightarrow 0$ by $$T_xu(y) = \lim_{r \rightarrow 0}T_{x, r}u(y)$$.
\end{definition}
Note that the denominator simply normalizes the blow-up. In the case that the denominator is zero, we define $T_{x, r}u= \infty$.
\begin{definition} \label{domain rescaling def}
Let $\Omega \in \mathcal{D}(n).$ We shall break with established convention and denote the rescalings of $\Omega$ as follows. Let $T_{p, r}\Omega = \frac{\Omega - p}{r}$ and $T_{p, r}\partial \Omega = \frac{\partial \Omega - p}{r}.$
\end{definition}
\begin{rmk}
Note that if $N_{\Omega}(p, r, u) = \Lambda$, then $N_{T_{p, r}\Omega}(0, 1, T_{p, r}u) = D_{T_{p, r}\Omega}(0, 1, T_{p, r}u) = \Lambda$.
\end{rmk}
The geometry we wish to capture with the blow-ups $T_xu$ is encoded in their translational symmetries. We now define the class of pseudo-tangent profiles, the potential subsequential limits of sequences $T_{x_i, r_i} u$.
\begin{definition}\label{symmetric def}
Let $u \in C(\mathbb{R}^n)$. We say $u$ is $0$-symmetric if $u$ satisfies one of the following conditions.
\begin{enumerate}
\item $u$ is a homogeneous harmonic polynomial.
\item $u$ is a continuous function which is homogeneous and harmonic in a convex cone, $\Omega \in \mathcal{D}(n)$, and which vanishes in $\Omega^{c}$.
\end{enumerate}
We will say that $u$ is $k$-symmetric if $u$ is $0$-symmetric and there exists a $k$-dimensional subspace, V, such that $u(x+y) =u(x)$ for all $x \in \mathbb{R}^n$ and all $y \in V$.
\end{definition}
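For instance, a nonzero linear function on $\mathbb{R}^n$ is $(n-1)$-symmetric, the harmonic polynomial $x_1x_2$ is $0$-symmetric but not $1$-symmetric, and $x_n^+$ (which is harmonic in the half-space $\{x_n > 0\}$ and vanishes on its complement) is $(n-1)$-symmetric in the sense of condition $(2)$.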
\textit{Cf.} Lemma \ref{(0, 0)-symmetric functions} for a related discussion. We now define the quantitative version of symmetry which describes how close to being $k$-symmetric a function is in a ball, $B_r(x) \subset \mathbb{R}^n$.
\begin{definition} \label{quant symmetric def}
For any $u \in \mathcal{A}(n, \Lambda)$ with associated domain $\Omega$, $u$ will be called $(k, \epsilon, r, p)$-symmetric if there exists a $k$-symmetric function, $P$, such that
\begin{itemize}
\item[1.] $\int_{\partial B_1(0)} |P|^2 = 1$
\item[2.] $\int_{B_1(0) \cap \overline{T_{p, r}\Omega}}|T_{p, r}u - P|^2 < \epsilon.$
\end{itemize}
Sometimes, we shall refer to a function $u$ as being $(k, \epsilon)$-symmetric in the ball $B_r(p)$ to mean $u$ is $(k, \epsilon, r, p)$-symmetric.
\end{definition}
\begin{definition}\label{quant strat def}
Let $u \in \mathcal{A}(n, \Lambda)$ with $\Omega \in \mathcal{D}(n)$ its associated domain. We denote the $(k, \epsilon, r)$-critical stratum of $u$ by $\mathcal{S}^k_{\epsilon, r}(u)$, and we define it by
\begin{equation}\nonumber
\mathcal{S}^k_{\epsilon, r}(u) = \{x \in \overline{\Omega}: u \text{ is not } (k+1, \epsilon, s, x) \text{-symmetric for all } s \ge r\}
\end{equation}
We shall also use the notation $\mathcal{S}^k_{\epsilon}(u)$ for $\mathcal{S}^k_{\epsilon, 0}(u).$
\end{definition}
It is immediate from the definitions that $\mathcal{S}^k_{\epsilon, r}(u) \subset \mathcal{S}^{k'}_{\epsilon', r'}(u)$ if $k \le k', \epsilon' \le \epsilon, r \le r'$. This in turn implies that we can recover the qualitative stratification by
$$\mathcal{S}^k(u) = \{x \in \mathbb{R}^n : T_xu \text{ is not } (k+1)\text{-symmetric}\} = \cup_{\eta} \cap_{r} \mathcal{S}^k_{\eta, r}(u). $$
\begin{rmk}
Definition \ref{quant symmetric def} leads to a few interesting quirks. Let $u \in \mathcal{A}(n, \Lambda)$ with associated domain $\Omega$, and let $p \in \overline{\Omega} \cap B_1(0)$ and $0< r< 1$. It is possible for $u$ to be $(0, 0, r, p)$-symmetric and yet for $T_{p, r}u$ to \textit{not} be a $0$-symmetric function: since we only measure the $L^2(B_1(0) \cap \overline{T_{p, r}\Omega})$ distance, $T_{p, r}u$ may merely agree on that set with the restriction of a homogeneous harmonic function to a convex domain. However, because $u$ must vanish continuously on $\partial \Omega \cap B_2(0)$, if $u$ is $(0, 0, r, p)$-symmetric, then $T_{p, r}\partial \Omega$ must be part of a connected component of a level set of $T_{p, r}u$, while still being the boundary of a convex domain in $B_{1}(0)$. This is very restrictive. The next lemma makes this statement rigorous.
\end{rmk}
\begin{lem}\label{(0, 0)-symmetric functions}
Let $u \in \mathcal{A}(n, \Lambda)$ with associated domain $\Omega$. Let $p \in \overline{\Omega} \cap B_1(0)$ and $0< r< 1$, and suppose that $u$ is $(0, 0, r, p)$-symmetric. If $p \in \partial \Omega$,
\begin{enumerate}
\item $T_{p, r}u$ is a homogeneous harmonic function.
\item $T_{p, r} \Omega$ is a convex cone.
\item $T_{p, r}u$ is $0$-symmetric.
\end{enumerate}
If $p \in \Omega$,
\begin{enumerate}
\item $u$ is $(n-1, 0, r, p)$-symmetric, i.e., $T_{p, r}u$ is a linear function restricted to a super level set.
\item $T_{p, r}\partial \Omega$ is contained in an affine hyperplane.
\item $T_{p, r}u$ is not $0$-symmetric.
\end{enumerate}
\end{lem}
\begin{proof}
Note that if $p \in \partial \Omega$, $(2)$ and $(3)$ follow immediately from $(1)$. Furthermore, $(1)$ follows from $T_{p, r}u$ being equivalent to a $0$-symmetric function, $P$, in $L^2(B_1(0) \cap \overline{T_{p, r}\Omega})$ and vanishing in $T_{p, r} \Omega^c$. Both options for $P$ in Definition \ref{symmetric def} give $(1).$
Now, suppose that $p \in \Omega$. As before, $(2)$ and $(3)$ follow immediately from $(1)$. Let $d = dist(0, T_{p, r} \partial \Omega) > 0$. Since $T_{0, \frac{d}{2}}T_{p, r}u$ must be $(0, 0)$-symmetric in $B_1(0),$ $T_{0, \frac{d}{2}}T_{p, r}u$ is a homogeneous harmonic polynomial in $B_1(0).$ By unique continuation, then, $T_{p, r}u$ is a homogeneous harmonic polynomial restricted to $T_{p, r}\Omega.$ Since $u$ is assumed to continuously vanish on $\partial \Omega \cap B_2(0)$, $T_{p, r}\partial \Omega$ must be contained in a connected component of a level set of a homogeneous harmonic polynomial as well. Furthermore, since $d = dist(0, T_{p, r} \partial \Omega) > 0$, this level set must be a non-zero level set of the homogeneous harmonic function (by Euler's identity, a homogeneous polynomial has no critical points on a non-zero level set). Therefore, we may find a neighborhood, $U$, in which this level set is smooth. Let $\tilde U$ be the cone of $U$ over the origin.
Let $x_0 \in U$ and let $V$ be the tangent plane to the graph of $u$ at the point $(x_0, u(x_0)).$ Note that $V= (x_0, u(x_0)) + span\{v_1, ... , v_n\}$ where $\{v_1, ..., v_{n-1}\}$ is a basis for a tangent plane to the level set $\{u(x) = u(x_0) : x \in U\}$ and $v_n$ is the tangent vector to the graph of the homogeneous function, $f(\lambda) = u(\lambda x_0)$. Let $L_{x_0}: \mathbb{R}^n \rightarrow \mathbb{R}$ be the affine linear function whose graph is $V$. Consider the graph of $u - L_{x_0}.$ Since $u$ is homogeneous of degree $\le 1$ and $\{u(x) = u(x_0) : x \in U\}$ is the boundary of a convex domain, $\mathbb{R}^n \times \{0\} = \{(x_1, ..., x_n, 0)\}$ is a supporting hyperplane for $graph(u - L_{x_0}).$
It is a standard observation that in this situation, the scalar mean curvature at $x_0$ is the Laplacian of $u - L_{x_0}$ at $x_0$. It is then a standard fact of undergraduate geometry that the second fundamental form is semi-definite (see \cite{Thorpe79}, Chapter 13, Theorem 1). This follows from the fact that the normal curvatures are the directional second derivatives. Therefore, if $u$ is homogeneous of order $>1$, then the mean curvature is positive, which implies that $u-L_{x_0}$ is not harmonic. By this same logic, if we can find an $x_0$ where $\{u(x) = u(x_0) : x \in U\}$ is strictly convex, then we also have that the mean curvature is positive at this point and that $u - L_{x_0}$ is not harmonic. Therefore, $u$ is homogeneous of order $1$ and the level sets are flat, so $u$ is a linear function restricted to a super level set.
\end{proof}
\begin{rmk}
Up to scaling and rotation, the only $(n-1, 0, r, p)$-symmetric functions are linear functions restricted to super level sets, as in Lemma \ref{(0, 0)-symmetric functions}.
\end{rmk}
\subsection{The Jones Beta-numbers and the Discrete Reifenberg Theorem}
One of the important tools in the second half of this paper will be the Jones $\beta$-numbers.
\begin{definition}\label{beta def}
For $\mu$ a Borel measure, we define $\beta_{\mu}^k (p,r)^2$ as follows.
\begin{equation*}
\beta_{\mu}^k (p, r)^2 = \inf_{L^k}\frac{1}{r^k}\int_{B_r(p)} \frac{dist(x, L)^2}{r^2}d\mu(x)
\end{equation*}
where the infimum is taken over all affine $k$-planes.
\end{definition}
Taking the \textit{infimum} here, as opposed to the minimum, is merely a convention: when $\mu(B_r(p)) > 0$, a minimizing plane may be taken to intersect $\overline{B_r(p)}$, and such planes form a compact family, so the infimum is attained. Let $V_{\mu}^k(p, r)$ denote a $k$-plane which attains the infimum in the definition of $\beta_{\mu}^k(p,r)^2$. Note that this $k$-plane is not \textit{a priori} unique.
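To illustrate the definition on a toy example (the measure below is purely illustrative and plays no role in the rest of the paper), take $n = 2$, $k = 1$, $0 < a < r$, and $\mu = \delta_{(a, 0)} + \delta_{(-a, 0)} + \delta_{(0, a)} + \delta_{(0, -a)}$. If all of the mass lay on a single line, the infimum would be $0$. Here the centroid is the origin and the second-moment matrix is $\sum_i x_i x_i^T = 2a^2\, \mathrm{Id}$, so every affine line $L$ satisfies $\sum_i dist(x_i, L)^2 \ge 2a^2$, with equality precisely when $L$ passes through the origin. Hence
\begin{equation*}
\beta^1_{\mu}(0, r)^2 = \frac{1}{r}\cdot\frac{2a^2}{r^2} = \frac{2a^2}{r^3},
\end{equation*}
which quantifies the failure of $\mu$ to concentrate near a single line at scale $r$, and also shows that the minimizing plane $V^1_{\mu}(0, r)$ need not be unique.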
We now come to one of the most important tools in this paper. The following theorem of Naber and Valtorta is a powerful tool which links the sum of the $\beta_{\mu}^k (p, r)^2$ over all points and scales to packing estimates.
\begin{thm}\label{discrete reif}(Discrete Reifenberg, \cite{NaberValtorta17-1})
There exists a $\delta_0 = \delta_0(n) > 0$ with the following property. Let $\{ B_{\tau_i}(x_i)\}_i$ be a collection of disjoint balls with $\tau_i \le 1$ for all $i=1, 2, \dots$, and define the measure $$\mu = \sum_i \tau_i^k \delta_{x_i}.$$
Suppose that for some $\delta \le \delta_0$ the following holds: for any $x \in B_2$ and any scale $l \in \{0, 1, 2, ...\}$, if $B_{r_l}(x) \subset B_2(0)$ and $\mu(B_{r_l}(x)) \ge \epsilon_k r_l^k$, then
\begin{equation*}
\sum_{i \ge l} \int_{B_{2r_{l}}(x)} \beta_{\mu}^k (z, 16r_i)^2 d\mu(z) < r_{l}^k \delta^2.
\end{equation*}
Then $$\mu (B_1(0)) = \sum_{i \text{ s.t. } x_i \in B_1(0)} \tau_i^k \le C_{DR}(n).$$
\end{thm}
\begin{thm}(Rectifiable Reifenberg, \cite{NaberValtorta17-1})\label{rect reif}
There exists a $\delta_0 = \delta_0(n) > 0$ such that the following holds. Let $\Sigma \subset B_2(0) \subset \mathbb{R}^n$ be an $\mathcal{H}^k$-measurable subset such that, for some $\delta \le \delta_0$ and all $B_{r_l}(x) \subset B_2(0),$
\begin{equation}
\sum_{i \ge l} \int_{B_{2r_{l}}(x)} \beta_{\Sigma}^k (z, 16r_i)^2 \, d\mathcal{H}^k|_{\Sigma}(z) < r_{l}^k \delta^2.
\end{equation}
Then $\Sigma \cap B_1(0)$ is countably $k$-rectifiable and there exists a constant, $C_{RR}< \infty$, such that for each ball $B_r(x) \subset B_1(0)$ with $x \in \Sigma,$
$$\mathcal{H}^k|_{\Sigma}(B_r(x)) \le C_{RR} r^k.$$
\end{thm}
\section{Main Results} \label{S:main results}
The main results of this work are \textit{fine-scale, quantitative} estimates on the strata of the generalized critical set of $u \in \mathcal{A}(n, \Lambda).$
\begin{thm}\label{T: main theorem 1}
Let $u \in \mathcal{A}(n, \Lambda)$. Then for any $r_0> 0$ and all $r> r_0$,
\begin{equation}
Vol\left(B_r(\mathcal{S}^k_{\epsilon, r_0}(u) \cap B_{\frac{1}{16}}(0))\right) \le C(n, \Lambda, k, \epsilon)r^{n-k}
\end{equation}
\end{thm}
As an immediate corollary, we have the following results.
\begin{cor}\label{main corollary}
Let $u \in \mathcal{A}(n, \Lambda)$, then for all $0 \le k \le n-2$
\begin{equation}
\mathcal{M}^{*, k}(\mathcal{S}^{k}_{\epsilon}(u) \cap B_{\frac{1}{16}}(0)) \le C(n, \Lambda, k, \epsilon)
\end{equation}
\begin{equation}
\dim_{\mathcal{H}}(\mathcal{S}^{k}(u) \cap B_{\frac{1}{16}}(0)) \le k
\end{equation}
\end{cor}
\begin{proof} The first statement is immediate from Theorem \ref{T: main theorem 1} and the definition of upper Minkowski content. This also implies that $\overline{\dim_{\mathcal{M}}}(\mathcal{S}^{k}_{\epsilon}(u) \cap B_{\frac{1}{16}}(0)) \le k$. To see the second statement, we need only remember that $\mathcal{S}^{k} = \cup_{0 < \epsilon} \mathcal{S}^{k}_{\epsilon}(u)$ (a countable union, taking $\epsilon = 1/j$), that $\dim_{\mathcal{H}}(A) \le \overline{\dim_{\mathcal{M}}}(A)$ for all bounded $A \subset \mathbb{R}^n$, and that $\dim_{\mathcal{H}}(\cup_{i}A_i) = \sup_{i}\dim_{\mathcal{H}}(A_i).$
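To spell out the first implication: since $\mathcal{S}^{k}_{\epsilon}(u) \subset \mathcal{S}^{k}_{\epsilon, r_0}(u)$ for every $r_0 > 0$, Theorem \ref{T: main theorem 1} gives $Vol(B_r(\mathcal{S}^{k}_{\epsilon}(u) \cap B_{\frac{1}{16}}(0))) \le C(n, \Lambda, k, \epsilon)r^{n-k}$ for every $r > 0$, and hence, up to the normalization convention for the upper Minkowski content,
\begin{equation*}
\mathcal{M}^{*, k}(\mathcal{S}^{k}_{\epsilon}(u) \cap B_{\frac{1}{16}}(0)) = \limsup_{r \rightarrow 0^+} \frac{Vol(B_r(\mathcal{S}^{k}_{\epsilon}(u) \cap B_{\frac{1}{16}}(0)))}{\omega_{n-k}\, r^{n-k}} \le \frac{C(n, \Lambda, k, \epsilon)}{\omega_{n-k}}.
\end{equation*}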
\end{proof}
Without too much difficulty, we also have the following corollary.
\begin{cor}\label{rectifiability}
Let $u \in \mathcal{A}(n, \Lambda)$, then for all $0 \le k \le n-2$ the set $\mathcal{S}^k(u)$ is countably $k$-rectifiable.
\end{cor}
We defer the proof to Section \ref{S: rectifiability}. It was proved in \cite{Alberti94} that the geometric singular sets $\mathcal{S}^k \cap \partial \Omega$ are contained in countably many $C^2$ $k$-dimensional submanifolds. The new information that this corollary provides is that $\left(\{x \in \partial \Omega: |\nabla u|= 0\} \setminus sing(\partial \Omega)\right) \cap \mathcal{S}^k(u)$ is countably $k$-rectifiable.
Due to an $\epsilon$-regularity result, Lemma \ref{e-reg containment}, we also have the following corollary.
\begin{cor}\label{e-reg}
Let $u \in \mathcal{A}(n, \Lambda)$, then
\begin{align*}
\mathcal{M}^{*,n-2}\left(\left(\{Q \in \partial \Omega: |\nabla u| = 0\} \setminus sing(\partial \Omega)\right) \cap B_{\frac{1}{16}}(0)\right) \le C(n, \Lambda).
\end{align*}
\end{cor}
Since it was shown in \cite{Alberti94} that the singular set of a convex function can be prescribed to be any $C^2$ $(n-2)$-rectifiable set, the geometric singular set of a convex body can, in particular, have infinite $\mathcal{H}^{n-2}$ measure. Therefore, there is no hope of obtaining an $\epsilon$-regularity result which extends to all of $sing(\partial \Omega)$ for arbitrary convex $\Omega.$ In light of these considerations, Theorem \ref{T: main theorem 1} and its corollaries are sharp in the following sense: they agree with the best known interior estimates on the critical strata of harmonic functions \cite{NaberValtorta17-2} and with the known boundary estimates on the geometric singular set on the boundary \cite{Alberti94}.
The structure of this paper is roughly in four parts. In Sections 4-8, we develop some results about the class of functions $\mathcal{A}(n, \Lambda)$ and the Almgren frequency function. These include the macroscopic almost monotonicity of the Almgren frequency and its uniform boundedness for interior and boundary points, as well as the proper compactness results for $\mathcal{A}(n, \Lambda).$ The key point of these sections is to obtain \textit{uniform} estimates on the frequency function at all points, both interior and on the boundary, at all scales $0< r < c$. In particular, we must deal with points $p \in \Omega$ and scales such that $\partial B_r(p) \cap \Omega^c \not = \emptyset$.
The main technical innovations are contained in Sections 9-10. In Section 9, we develop the necessary framework to overcome the problems of almost-monotonicity of the Almgren frequency for the purposes of quantitative rigidity. In this, convexity plays an important role, \textit{Cf.} Lemma \ref{(0, 0)-symmetric functions}. In Section 10, we obtain the necessary decay to modify the machinery of \cite{NaberValtorta17-1} so that we may obtain the necessary uniform estimates. Again, the problem is extending these techniques to points $p \in \Omega$ and scales such that $\partial B_r(p) \cap \Omega^c \not = \emptyset$.
In Sections 11-15, we follow the proof technique of \cite{NaberValtorta17-1} with a few notable changes. First, we must connect the Jones $\beta$-numbers to the drop in the Almgren frequency. Second, we must obtain the proper packing estimates. The results of Section 9 are crucial in obtaining the desired control on the Jones $\beta$-numbers in Section 12. The results of Section 10 are also essential in obtaining the necessary packing estimates in Section 13.
Sections 16 and 17 are devoted to proving Corollary \ref{rectifiability} and Corollary \ref{e-reg}, respectively. Several Appendices are also included for completeness.
\section{The Almgren frequency function on the Boundary}\label{S:monotonicity}
In this section, we develop crucial properties of the Almgren frequency function for points, $Q \in \partial \Omega \cap B_{\frac{1}{4}}(0).$ The main result in this section is Lemma \ref{N bound lem 1}, which states that there is a constant such that for all $Q \in B_{\frac{1}{4}}(0) \cap \partial \Omega$ and all $r \in (0, \frac{1}{2})$, $N_{\Omega}(Q, r, u) \le C(n, \Lambda)$. This lemma follows from the monotonicity of $N_{\Omega}(Q, r, u).$
\begin{lem}\label{N monotonicity 1}
Let $u \in \mathcal{A}(n, \Lambda)$ and $Q \in \partial \Omega \cap B_1(0)$, then $N_{\Omega}(Q, r, u)$ is monotonically non-decreasing in $0< r < 1$.
\end{lem}
\begin{proof}
Recall Lemma \ref{N derivative calculation}. Note that $N_1'(r)$ is non-negative by the Cauchy-Schwarz inequality. Furthermore, $N_2'(r)$ is non-negative because $\partial \Omega$ is a convex surface and therefore $(Q- p) \cdot \vec \eta \ge 0$ for all $p \in \Omega \cap B_2(0)$ and all $Q \in \partial \Omega \cap B_2(0).$ Observe that $N_3'(r) = N_4'(r) = 0$ because $u(Q) = 0$. Therefore, $\frac{d}{dr}N_{\Omega}(Q, r, u)$ is non-negative.
\end{proof}
As corollaries to monotonicity, we have the following standard results.
\begin{lem}\label{N constant gives homogeneity}
Let $u \in \mathcal{A}(n, \Lambda)$ and $Q \in \partial \Omega \cap B_1(0)$. If for some two radii $0< s < S < 1$,
\begin{align*}
N_{\Omega}(Q, s, u) = N_{\Omega}(Q, S, u)
\end{align*}
then $N_{\Omega}(Q, r, u)$ is constant for all $r$, $u$ is a homogeneous function of degree $N_{\Omega}(Q, s, u)$ about $Q$, and $\partial \Omega \cap B_2(0)$ is a convex cone with vertex $Q$.
\end{lem}
\begin{proof}
Without loss of generality, we assume $Q = 0$. Since $N_{\Omega}(Q, r, u)$ is monotonic, the assumption $N_{\Omega}(Q, s, u) = N_{\Omega}(Q, S, u)$ implies that $N_{\Omega}(Q, r, u)$ is a constant for all $s< r< S$. Furthermore, using the notation in Lemma \ref{N derivative calculation},
$N'_1(r) = N'_2(r) = 0$ for all $s<r<S.$ Thus, we have that for all $y \in \partial B_r(Q)$
\begin{align*}
\nabla u \cdot (y-Q) =& \partial_r u r \\
=& \frac{ \int_{\partial B_r(Q)} u \partial_r u rd\sigma(y)}{H_{\Omega}(Q, r, u)} u
\end{align*}
By the divergence theorem, we have,
\begin{align*}
\partial_r u r = & \frac{ r D_{\Omega}(Q, r, u)}{H_{\Omega}(Q, r, u)} u\\
= & N_{\Omega}(Q, r, u) u.
\end{align*}
Since $N_{\Omega}(Q, r, u)$ is a constant for all $s< r< S$, this becomes a separable ODE whose solution is $u(r, \theta) = r^{N_{\Omega}(Q, S, u)}\,\tilde{u}(\theta)$ for some function $\tilde{u}$ of the angular variable. Since $\Omega \cap (B_S(Q) \setminus B_s(Q))$ is open, unique continuation implies that $u$ is a homogeneous function of degree $N_{\Omega}(Q, S, u)$ in $\Omega.$ Thus, $N_{\Omega}(Q, r, u)$ is a constant for all radii $0< r$. Since $u$ vanishes continuously on $\partial \Omega \cap B_2(0),$ the last claim follows from the homogeneity of $u$.
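In detail, writing $N := N_{\Omega}(Q, S, u)$ and arguing along a fixed ray $\theta \in \partial B_1(0)$ on which $u > 0$ (the case $u < 0$ is identical after replacing $u$ by $-u$), the relation $\partial_r u \, r = N u$ separates as
\begin{equation*}
\frac{\partial_r u}{u} = \frac{N}{r} \quad\Longrightarrow\quad \ln u(r_2, \theta) - \ln u(r_1, \theta) = N \ln\left(\frac{r_2}{r_1}\right) \quad\Longrightarrow\quad u(r_2, \theta) = \left(\frac{r_2}{r_1}\right)^{N} u(r_1, \theta)
\end{equation*}
for $s < r_1 < r_2 < S$, which is precisely homogeneity of degree $N$ on the annulus.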
\end{proof}
\begin{lem} \label{L: H doubling-ish 1}($H_{\Omega}(Q, r, u)$ is Doubling)
Let $u \in \mathcal{A}(n, \Lambda)$ with $Q \in B_{1}(0) \cap \partial \Omega$. For any $0 < s < S \le 1$,
\begin{equation}
H_{\Omega}(Q, S, u) \le (\frac{S}{s})^{(n-1) + 2(N_{\Omega}(Q, S, u))} H_{\Omega}(Q, s, u).
\end{equation}
\end{lem}
\begin{proof} First, recall
$$H_{\Omega}'(Q, r, u) = \frac{n-1}{r}\int_{\partial B_r(Q)}|u-u(Q)|^2 + 2 \int_{B_r(Q)} |\nabla u|^2$$
Next, we consider the following identity:
\begin{align*}
\ln(\frac{H_{\Omega}(Q, S, u)}{H_{\Omega}(Q, s, u)}) & = \ln(H_{\Omega}(Q, S,u)) - \ln(H_{\Omega}(Q, s, u))\\
& = \int_s^S \frac{H_{\Omega}'(Q, r, u)}{H_{\Omega}(Q, r, u)}dr\\
& = \int_s^S \frac{n-1}{r} + \frac{2}{r}N_{\Omega}(Q, r, u)\, dr
\end{align*}
We bound $N_{\Omega}(Q, r, u)$ by $N_{\Omega}(Q, S, u)$ using Lemma \ref{N monotonicity 1}. Plugging in this bound, we have that for $r \in [s, S]$,
\begin{align*}
\ln(\frac{H_{\Omega}(Q, S, u)}{H_{\Omega}(Q, s, u)}) & \le [(n-1) + 2N_{\Omega}(Q, S, u)] \ln(r)|^{S}_s \\
\end{align*}
Evaluating and exponentiating gives the desired result.
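In detail, evaluating the antiderivative and then exponentiating gives
\begin{align*}
\ln\left(\frac{H_{\Omega}(Q, S, u)}{H_{\Omega}(Q, s, u)}\right) &\le \left[(n-1) + 2N_{\Omega}(Q, S, u)\right]\ln\left(\frac{S}{s}\right), \\
H_{\Omega}(Q, S, u) &\le \left(\frac{S}{s}\right)^{(n-1) + 2N_{\Omega}(Q, S, u)} H_{\Omega}(Q, s, u).
\end{align*}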
\end{proof}
\begin{rmk}\label{R: H doubling-ish}
Because $N_{\Omega}(Q, r, u)$ is monotonic for $Q \in B_{1}(0) \cap \partial \Omega$, we can also extract the identity,
\begin{align*}
\ln(\frac{H_{\Omega}(Q, S, u)}{H_{\Omega}(Q, s, u)}) & \ge [(n-1) + 2N_{\Omega}(Q, s, u)] \ln(r)|^{S}_s \\
\end{align*}
which leads to
\begin{equation}
H_{\Omega}(Q, s, u) \le (\frac{s}{S})^{(n-1) + 2N_{\Omega}(Q, s, u)} H_{\Omega}(Q, S, u).
\end{equation}
If $S = 1$ and we apply this to the rescaling $T_{Q, r}u$, then, since $H_{T_{Q, r}\Omega}(0, 1, T_{Q, r}u) = 1$ by the normalization, we have that for all $0< s < 1,$
\begin{equation}
H_{T_{Q, r}\Omega}(0, s, T_{Q, r}u) \le s^{(n-1) + 2N_{T_{Q, r}\Omega}(0, s, T_{Q, r}u)}.
\end{equation}
\end{rmk}
\begin{lem}\label{L: avg log H derivative 2}
Let $Q \in B_{1}(0) \cap \partial \Omega$, then
$\frac{d}{dr}\ln(\frac{1}{r^{n-1}}H_{\Omega}(Q, r, u)) = \frac{2}{r}N_{\Omega}(Q, r, u)$
\end{lem}
\begin{proof}
We see in (\ref{L: avg log H derivative}) that, since $u(Q) = 0,$ the second term vanishes.
\end{proof}
We are now ready for the main result of this section.
\begin{lem}\label{N bound lem 1}
Let $u \in \mathcal{A}(n, \Lambda)$, as above. There is a constant, $C(n, \Lambda)$ such that for all $Q \in B_{\frac{1}{4}}(0) \cap \partial \Omega$ and all $r \in (0, \frac{1}{2})$,
\begin{equation}
N_{\Omega}(Q, r, u) \le C(n, \Lambda).
\end{equation}
\end{lem}
\begin{proof} Recall that $0 \in \partial \Omega$ and that the Almgren frequency function is invariant under rescalings. Therefore, we normalize our function $u$ by the rescaling $v = T_{0,1}u$.
Therefore, applying Lemma \ref{L: H doubling-ish 1} to $Q = 0$, letting $r = cR$, and integrating both sides with respect to $R$ from $0$ to $S$, we have that for any $c \in (0, 1)$,
\begin{align*}
\int_{B_{S}(0)}|v|^2 &\le \int_0^{S} (\frac{1}{c})^{(n-1) + 2N_{\Omega}(0, R, v)} \int_{\partial B_{cR}(0)}|v|^2dSdR\\
& \le (\frac{1}{c})^{(n-1) + 2N_{\Omega}(0, S, v)} \int_0^{S} \int_{\partial B_{cR}(0)}|v|^2dSdR.
\end{align*}
Thus, we have that for any such $c \in (0,1)$ and any $0< S \le 1$,
\begin{align*}
\fint_{B_S(0)}|v|^2dV \le (\frac{1}{c})^{2N_{\Omega}(0, S, v)}\fint_{B_{cS}(0)} |v|^2dV.
\end{align*}
Let $S = 1$ and $c= \frac{1}{16}$. We have that,
\begin{equation}\label{N bound base ineq 1}
\fint_{B_1(0)}|v|^2 \le (16)^{2N_{\Omega}(0, 1, v)} \fint_{B_{\frac{1}{16}}(0)} |v|^2.
\end{equation}
Thus, for any $Q \in B_{\frac{1}{4}}(0)\cap \partial \Omega$ by inclusion, we have the following:
\begin{align*}
\int_{B_{1}(0)}|v|^2 &\ge \int_{B_{3/4}(Q)}|v - v(Q)|^2\\
\end{align*}
and
\begin{align*}
\int_{B_{\frac{1}{16}}(0)} |v|^2 & \le \int_{B_{\frac{9}{16}}(Q)}|v-v(Q)|^2.
\end{align*}
Therefore, substituting these bounds into (\ref{N bound base ineq 1}),
\begin{equation}\label{N bound base ineq 2}
\fint_{B_{\frac{3}{4}}(Q)}|v-v(Q)|^2 \le (16)^{2N_{\Omega}(0, 1, v)}(\fint_{B_{\frac{9}{16}}(Q)} |v-v(Q)|^2).
\end{equation}
Now, we wish to bound $\fint_{B_{\frac{3}{4}}(Q)}|v- v(Q)|^2$ from below and $\fint_{B_{\frac{9}{16}}(Q)} |v- v(Q)|^2$ from above. We rely upon Lemma \ref{H derivative}, which states that for $v(Q) = 0$, $\frac{d}{dr} \int_{\partial B_r(Q)} (v-v(Q))^2 \ge 0$. Thus, for all $Q \in B_{\frac{1}{4}}(0) \cap \partial \Omega$, we may bound $\int_{B_{\frac{3}{4}}(Q)} |v- v(Q)|^2$ from below as follows:
\begin{align*}
\int_{B_{\frac{3}{4}}(Q)}|v- v(Q)|^2 & \ge \int_{B_{\frac{3}{4}}(Q) \setminus B_{\frac{5}{8}}(Q)} |v- v(Q)|^2 \\
& = \int_{\frac{5}{8}}^{\frac{3}{4}} \int_{\partial B_r(Q)} |v- v(Q)|^2 dS dr \\
& \ge \int_{\frac{5}{8}}^{\frac{3}{4}} \int_{\partial B_{\frac{5}{8}}(Q)} |v- v(Q)|^2 dSdr \\
& \ge c \int_{\partial B_{\frac{5}{8}}(Q)} |v- v(Q)|^2 dS
\end{align*}
To get the upper bound, we use the same trick.
\begin{align*}
\int_{B_{\frac{9}{16}}(Q)} |v- v(Q)|^2 & = \int_0^{\frac{9}{16}} \int_{\partial B_s(Q)}|v- v(Q)|^2 dS ds\\
& \le \int_0^{\frac{9}{16}} \int_{\partial B_{\frac{9}{16}}(Q)}|v- v(Q)|^2 dS ds\\
& \le c\int_{\partial B_{\frac{9}{16}} (Q)} |v- v(Q)|^2.
\end{align*}
Putting it all together, we plug our above bounds into (\ref{N bound base ineq 2}) and obtain the following for all $Q \in B_{\frac{1}{4}}(0) \cap \partial \Omega$,
$$\fint_{\partial B_{\frac{5}{8}}(Q)} |v - v(Q)|^2 dS \le c(n) (16)^{2N_{\Omega}(0, 1, u)}(\fint_{\partial B_{\frac{9}{16}} (Q)} |v - v(Q)|^2).$$
Though somewhat messier, we restate the above in the following form for convenience later.
\begin{align} \label{ugly 1}
\frac{\fint_{\partial B_{\frac{5}{8}}(Q)} |u-u(Q)|^2 dS}{\fint_{\partial B_{\frac{9}{16}}(Q)}|u-u(Q)|^2 dS} \le & C(n) (16)^{2N_{\Omega}(0, 1, v)}.
\end{align}
Now, we change tack slightly and, recalling Lemma \ref{L: avg log H derivative 2} and the monotonicity of $N_{\Omega}(Q, r, v)$, we see that,
\begin{align*}
\ln(\fint_{\partial B_{\frac{5}{8}}(Q)} v^2) - \ln(\fint_{\partial B_{\frac{9}{16}}(Q)} v^2) & = \int_{\frac{9}{16}}^{\frac{5}{8}} \frac{d}{ds}\ln(\frac{1}{s^{n-1}}H_{\Omega}(Q, s, v)) ds\\
& = \int_{\frac{9}{16}}^{\frac{5}{8}} \frac{2}{s}N_{\Omega}(Q, s, v)ds\\
& \ge 2[N_{\Omega}(Q, \frac{9}{16}, v)](\ln(\frac{5}{8}) - \ln(\frac{9}{16})) \\
& \ge 2c[N_{\Omega}(Q, \frac{9}{16}, v)] .
\end{align*}
Thus, if we recall (\ref{ugly 1}), above, we see that
\begin{align*}
2c[N_{\Omega}(Q, \frac{9}{16}, v)] \le & \ln(\frac{\fint_{\partial B_{\frac{5}{8}}(Q)} |u - u(Q)|^2}{\fint_{\partial B_{\frac{9}{16}}(Q)} |u- u(Q)|^2})\\
\le & \ln[C(n) (16)^{2[N_{\Omega}(0, 1, v)]}]\\
= & 2N_{\Omega}(0, 1, v)\ln(16) + C(n)\\
\le & 2\Lambda \ln(16) + C(n).
\end{align*}
Now, Lemma \ref{N monotonicity 1} gives that for $\frac{9}{16} > s > 0$,
\begin{equation*}
N_{\Omega}(Q, \frac{9}{16}, v) \ge N_{\Omega}(Q, s, v).
\end{equation*}
Thus, recalling that $N_{\Omega}(Q, r, v) = N_{\Omega}(Q, r, u),$ we have the desired claim.
\end{proof}
\subsection{Corollaries to Lemma \ref{N bound lem 1}}
\begin{lem}(Uniform H\"older continuity)\label{L: unif Holder bound 1}
Let $u \in \mathcal{A}(n, \Lambda)$, $Q \in B_{\frac{1}{4}}(0) \cap \partial \Omega$, and $r \in (0,1/2]$. Then,
\begin{equation}
|| T_{Q, r}u ||_{C^{0, \gamma}(B_1(0))} \le C(n, \Lambda)
\end{equation}
\end{lem}
We defer the proof of this statement to the Appendix A. The techniques are standard.
\begin{cor}\label{C: geometric non-degeneracy}
Let $u \in \mathcal{A}(n, \Lambda)$ and $\Omega \in \mathcal{D}(n)$ its associated convex domain. There exists a constant, $0<c = c(\Lambda, n)$, such that $\partial B_1(0) \cap \Omega$ is a relatively open convex surface with
$$\mathcal{H}^{n-1}(\partial B_1(0) \cap \Omega) > c.$$
\end{cor}
\begin{proof} That $\partial B_1(0) \cap \Omega$ is relatively open and relatively convex is immediate from the definition of $\Omega.$ To see that $\mathcal{H}^{n-1}(\partial B_1(0) \cap \Omega) > c,$ we observe that since $\max_{B_1(0)}|T_{0, 1}u(x)| \le C(n, \Lambda)$ and $H_{\Omega}(0, 1, T_{0, 1}u) = 1$, we have that
\begin{align*}
H_{\Omega}(0, 1, T_{0, 1}u) & \le \mathcal{H}^{n-1}(\partial B_1(0) \cap \Omega) C^2
\end{align*}
Therefore, $\mathcal{H}^{n-1}(\partial B_1(0) \cap \Omega) \ge C^{-2} = c.$
\end{proof}
\begin{cor}\label{sub-linear bound}
For all $u \in \mathcal{A}(n, \Lambda)$, $Q_0 \in \partial \Omega \cap B_{1/4}(0),$ and $0 < r \le \frac{1}{4}$, the following estimate holds. Let $Q \in T_{Q_0, r}\partial \Omega \cap B_{\frac{1}{2}}(0)$ and let $L_Q$ be a supporting hyperplane to $T_{Q_0, r}\partial \Omega$ at $Q$. Then for all $p \in \overline{T_{Q_0, r}\Omega} \cap B_{\frac{1}{4}}(Q),$
\begin{align*}
|T_{Q_0, r}u(p)| \le C(n, \Lambda) dist(p, L_Q).
\end{align*}
\end{cor}
\begin{proof}
Let $\mathbb{H}_Q$ be the half-space with boundary $L_Q$ which contains $T_{Q_0, r}\Omega.$ Consider the function, $\phi$, which solves the Dirichlet problem,
\begin{align*}
\Delta \phi = & 0 \qquad \qquad \text{ in } \mathbb{H}_Q \cap B_{\frac{1}{2}}(Q),\\
\phi = & \begin{cases}
C(n, \Lambda) & \text{on } \partial B_{\frac{1}{2}}(Q) \cap T_{Q_0, r}\Omega\\
0 & \text{on } \partial(B_{\frac{1}{2}}(Q) \cap \mathbb{H}_Q) \setminus (\partial B_{\frac{1}{2}}(Q) \cap T_{Q_0, r}\Omega).
\end{cases}
\end{align*}
Here we choose $C(n, \Lambda)$ to be the constant from Lemma \ref{L: unif Holder bound 1}, for which we have $\sup_{\partial B_1(0)} |T_{Q_0, r}u| \le C(n, \Lambda)$. By the maximum principle, then, $T_{Q_0, r}u \le \phi$ in $T_{Q_0, r}\Omega \cap B_{\frac{1}{2}}(Q)$. We now argue that $\phi$ is comparable to a linear function in $B_{\frac{1}{4}}(Q) \cap \mathbb{H}_Q$.
Let $L$ be the affine linear function with $\{L = 0 \} = L_Q$ such that
$$\max_{\partial B_{1/2}(Q)}L= \max_{\partial B_{1/2}(Q)}\phi = C(n, \Lambda).
$$
By \cite{JerisonKenig82} Theorem 5.1., there is a constant, $C$, such that
we have that for all $x \in B_{\frac{1}{4}}(Q) \cap \mathbb{H}_Q$,
\begin{align*}
\phi(x) \le C L(x)
\end{align*}
where $C$ depends only upon the geometry of $B_{\frac{1}{4}}(Q) \cap \mathbb{H}_Q.$ Since this geometry is always a half-ball, this constant is uniform. Therefore, we have that for all $x \in B_{\frac{1}{4}}(Q) \cap \mathbb{H}_Q$,
\begin{align*}
\phi(x) \le C L(x) \le C 2 C(n, \Lambda) dist(x, L_Q)
\end{align*}
Thus, for $p \in \overline{T_{Q_0, r}\Omega} \cap B_{\frac{1}{4}}(Q)$, we have
\begin{align*}
T_{Q_0, r}u(p) \le \phi(p) \le C(n, \Lambda) dist(p, L_Q)
\end{align*}
Applying this argument to $\pm T_{Q_0, r}u$, we obtain the desired estimate.
\end{proof}
\begin{rmk} \label{normal derivative bound}
Note that as a corollary to Corollary \ref{sub-linear bound}, for all $u \in \mathcal{A}(n, \Lambda)$, $Q \in B_{1/4}(0) \cap \partial \Omega$, and $0 < r \le \frac{1}{4}$,
$$|\nabla T_{Q, r}u \cdot \vec \eta| \le C(n, \Lambda)$$
on $\partial T_{Q, r}\Omega \cap B_{\frac{1}{2}}(0).$
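Indeed, let $x \in \partial T_{Q, r}\Omega \cap B_{\frac{1}{2}}(0)$ be a point at which the outward unit normal $\vec \eta$ and the normal derivative of $T_{Q, r}u$ exist, and let $L_x$ be a supporting hyperplane to $T_{Q, r}\partial \Omega$ at $x$. Since $T_{Q, r}u(x) = 0$ and $dist(x - t \vec \eta, L_x) \le t$, Corollary \ref{sub-linear bound} gives
\begin{equation*}
|\nabla T_{Q, r}u(x) \cdot \vec \eta| = \lim_{t \rightarrow 0^+} \frac{|T_{Q, r}u(x - t \vec \eta)|}{t} \le \limsup_{t \rightarrow 0^+} \frac{C(n, \Lambda)\, dist(x - t \vec \eta, L_x)}{t} \le C(n, \Lambda).
\end{equation*}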
\end{rmk}
\begin{lem}\label{compactness 1}
Let $u_i \in \mathcal{A}(n, \Lambda),$ $Q_i \in \partial \Omega_i \cap B_{1/4}(0)$, and $0<r_i \le \frac{1}{2}.$ Then there exists a subsequence such that,
\begin{enumerate}
\item $T_{Q_i, r_i}u_i \rightarrow u_{\infty}$ in $C^{0, \gamma}(\overline{B_1(0)}).$
\item $\partial T_{Q_i, r_i}\Omega_i \cap B_{1}(0) \rightarrow \partial \Omega_{\infty} \cap B_{1}(0)$ in the Hausdorff metric on compact subsets.
\item $u_{\infty}$ is harmonic in $\Omega_{\infty}.$
\end{enumerate}
\end{lem}
\begin{proof}
By definition, $T_{Q_i, r_i}u_i(0) = 0$. Therefore, Lemma \ref{L: unif Holder bound 1} implies the first convergence result by Arzela-Ascoli for H\"older continuous functions. This uniform modulus of continuity also implies that $\{T_{Q_i, r_i}u_{i} = 0\} \cap B_1(0)$ converges in the Hausdorff metric to $\{u_{\infty} = 0\} \cap B_1(0)$. The boundaries are a special case.
Since $D_{T_{Q_i, r_i}\Omega_i}(0, 1, T_{Q_i, r_i}u_i) \le C(n, \Lambda)$, in any neighborhood away from $\partial \Omega_{\infty},$ the $T_{Q_i, r_i}u_i$ are harmonic functions with uniformly bounded $W^{1, 2}(B_1(0))$ norm. Therefore, we have $C^{\infty}$ convergence and $u_{\infty}$ is harmonic in $B_1(0) \setminus B_{\epsilon}(\partial \Omega_{\infty})$ for any $0 < \epsilon$.
\end{proof}
\begin{lem}\label{H lower bound}
For any $u \in \mathcal{A}(n, \Lambda)$, $Q \in \partial \Omega \cap B_\frac{1}{4}(0)$, and $0< r< \frac{1}{4},$ there is a constant $0< c(n, \Lambda)$ such that for all $y \in B_{1/2}(0) \cap \overline{T_{Q, 2r}\Omega},$
\begin{align*}
c(n , \Lambda) < H_{T_{Q, 2r}\Omega}(y, \frac{1}{4}, T_{Q, 2r}u) < C(n, \Lambda).
\end{align*}
\end{lem}
\begin{proof}
Note that the upper bound follows directly from Lemma \ref{L: unif Holder bound 1}. To show the lower bound, we argue by compactness. Suppose that there is a sequence of functions, $u_i \in \mathcal{A}(n, \Lambda)$, points $Q_i \in \partial \Omega_i \cap B_{\frac{1}{4}}(0)$ and radii $0< r_i< \frac{1}{4}$ such that there exist points $y_i \in B_{1/2}(0) \cap \overline{T_{Q_i, 2r_i}\Omega_i}$ for which
\begin{align*}
H_{T_{Q_i, 2r_i}\Omega_i}(y_i, \frac{1}{4}, T_{Q_i, 2r_i}u_i) \le 2^{-i}
\end{align*}
Letting $i \rightarrow \infty$, by Lemma \ref{compactness 1}, there exists a subsequence $T_{Q_j, 2r_j}u_j$ which converges to a H\"older continuous function, $u_{\infty}$, which is harmonic in a non-degenerate convex domain, $\Omega_{\infty}$, and which vanishes on $\partial \Omega_{\infty} \cap B_{8}(0)$. Similarly, we may take subsequences such that $y_i \rightarrow y_{\infty}$. Note that H\"older convergence implies $H_{\Omega_{\infty}}(0, 1, u_{\infty}) = 1$.
Since we have that $H_{\Omega_{\infty}}(y_{\infty}, \frac{1}{4}, u_{\infty}) = 0$, it must be that $u_{\infty} = u_{\infty}(y_{\infty})$ on $\partial B_{1/4}(y_{\infty}) \cap \Omega_{\infty}.$ If $\partial B_{1/4}(y_{\infty}) \subset \Omega_{\infty},$ then $u_{\infty} \equiv u_{\infty}(y_{\infty})$ in $\Omega_{\infty}.$ This contradicts $u_{\infty}(0) = 0$ and $H_{\Omega_{\infty}}(0, 1, u_{\infty}) = 1$. If $\partial B_{1/4}(y_{\infty})$ intersects $\partial \Omega_{\infty}$, then $u_{\infty}(y_{\infty}) = 0$, since $u_{\infty}$ must vanish continuously on $\partial \Omega_{\infty}.$ However, this forces $u_{\infty} \equiv 0$, which contradicts $H_{\Omega_{\infty}}(0, 1, u_{\infty}) = 1$.
\end{proof}
\section{Macroscopic Almost-monotonicity}
In this section and all subsequent sections, we work towards uniform estimates for points in $\overline{\Omega} \cap B_{1/4}$ at all radii below some threshold. In this section, we obtain almost monotonicity of the Almgren frequency for interior points $p \in \Omega \cap B_{1/4}$ at radii $0 < r \le \frac{1}{2}.$ See Lemma \ref{N neg term bound 2}.
\begin{lem}\label{N_3 bound}
Let $u \in \mathcal{A}(n, \Lambda)$, $p \in B_{\frac{1}{4}}(0) \cap \overline{\Omega}$, and $r \le \frac{1}{8}.$ Then
\begin{align*}
|N'_3(r)| & = |2 N_{\Omega}(p, r, u) \frac{\int_{B_r(p)}(u - u(p))\Delta u}{H_{\Omega}(p, r, u)}|\\
& \le 2 N_{\Omega}(p, r, u) C(n, \Lambda)\\
\end{align*}
\end{lem}
\begin{proof}
Let $Q \in \partial \Omega$ be a nearest point to $p$, i.e., $|Q - p| = dist(p , \partial \Omega)$ (note $Q \in B_{\frac{1}{2}}(0)$ since $0 \in \partial \Omega$), and let $y = T_{Q, 4r}p$.
\begin{align*}
|\frac{\int_{B_r(p)}(u - u(p))\Delta u}{H_{\Omega}(p, r, u)} | & = | \frac{\int_{T_{Q, 4r}\partial \Omega \cap B_{1/4}(y)}(T_{Q, 4r}u - T_{Q, 4r}u(y))\nabla T_{Q, 4r}u \cdot \vec \eta d\sigma}{H_{T_{Q, 4r}\Omega}(y, \frac{1}{4}, T_{Q, 4r}u)}| \\
& \le \frac{\int_{T_{Q, 4r}\partial \Omega \cap B_{1/4}(y)}| T_{Q, 4r}u(y)| |\frac{1}{r}\nabla T_{Q, 4r}u \cdot \vec \eta| d\sigma}{H_{T_{Q, 4r} \Omega}(y, \frac{1}{4}, T_{Q, 4r}u)} \\
& \le C(n, \Lambda) \frac{\int_{T_{Q, 4r}\partial \Omega \cap B_{1/4}(y)} |\nabla T_{Q, 4r}u \cdot \vec \eta| d\sigma}{H_{T_{Q, 4r}\Omega}(y, \frac{1}{4}, T_{Q, 4r}u)} \\
& \le C(n, \Lambda)^2 \omega_{n-1} \frac{1}{H_{T_{Q, 4r}\Omega}(y, \frac{1}{4}, T_{Q, 4r}u)} \\
\end{align*}
Here we have bounded $|T_{Q, 4r}u(y)| \le C(n, \Lambda)$ using Lemma \ref{L: unif Holder bound 1}, and $|\nabla T_{Q, 4r}u \cdot \vec \eta| \le C(n, \Lambda)$ by Remark \ref{normal derivative bound}. Lemma \ref{H lower bound} then gives the desired result.
\end{proof}
With Lemma \ref{N_3 bound}, we are able to restate Lemma \ref{N derivative calculation} and establish almost-monotonicity of the Almgren frequency function for interior points.
\begin{lem}\label{N neg term bound 2}
For $u \in \mathcal{A}(n, \Lambda)$, $p \in \overline{\Omega} \cap B_{\frac{1}{4}}(0)$, and radii $0< r < R< \frac{1}{8}$,
\begin{align}
e^{2C(n, \Lambda)(R-r)} N_{\Omega}(p, R, u) - N_{\Omega}(p, r, u) & \ge \int_{r}^R N_1'(s)ds\\
N_{\Omega}(p, R, u) - e^{2C(n, \Lambda)(r- R)}N_{\Omega}(p, r, u) & \ge e^{2C(n, \Lambda)(r-R)} \int_{r}^R N_1'(s)ds
\end{align}
where
\begin{align*}
N_1'(s) & = \frac{2}{s H_{\Omega}(p, s, u)} ( \int_{\partial B_s(p)} |\nabla u \cdot (y-p) - \lambda(p, s, u) (u(y)-u(p))|^2d\sigma(y)) \ge 0
\end{align*}
and
$$
\lambda(p, s, u) = \frac{\int_{\partial B_{s}(p)} (u(y) - u(p))\nabla u \cdot (y-p) d\sigma(y)}{H_{\Omega}(p, s, u)}.
$$
\end{lem}
\begin{proof} Combining Lemma \ref{N derivative calculation} with the bound of Lemma \ref{N_3 bound}, we have
\begin{align*} \nonumber
\frac{d}{dr}N_{\Omega}(p, r, u) \ge & N_1'(r) -2C(n, \Lambda) N_{\Omega}(p, r, u)\\
\frac{d}{dr}(e^{2C(n, \Lambda)r} N_{\Omega}(p, r, u)) & \ge e^{2C(n, \Lambda)r}N_1'(r)\\
e^{2C(n, \Lambda)R} N_{\Omega}(p, R, u) - e^{2C(n, \Lambda)r} N_{\Omega}(p, r, u) & \ge \int_{r}^Re^{2C(n, \Lambda)s}N_1'(s)ds\\
& \ge e^{2C(n, \Lambda)r}\int_{r}^R N_1'(s)ds\\
\end{align*}
\end{proof}
\section{Uniform boundedness of the Almgren frequency}\label{S: N bound}
The main result of this section is Lemma \ref{N bound lem}, which says that for all $p \in B_{\frac{1}{16}}(0) \cap \overline{\Omega}$ and all $r \le \frac{1}{16}$, $N_{\Omega}(p, r, u) \le C(n, \Lambda).$ Before we prove this result, we need a few technical lemmata. The first is the doubling of $H_{\Omega}(p,r, u).$
\begin{lem} \label{L: H doubling-ish}($H_{\Omega}(p, r, u)$ is Doubling)
Let $u \in \mathcal{A}(n, \Lambda)$ with $p \in B_{\frac{1}{4}}(0) \cap \overline{\Omega}$. For any $0 < s < S \le \frac{1}{8}$,
\begin{align*}
H_{\Omega}(p, S, u) & \le (\frac{S}{s})^{(n-1) + 2e^{2C(n, \Lambda)(S-s)}N_{\Omega}(p, S, u)}e^{2C(n, \Lambda)(S-s)} H_{\Omega}(p, s, u)\\
H_{\Omega}(p, S, u) & \ge (\frac{S}{s})^{(n-1) + 2e^{2C(n, \Lambda)(s-S)}N_{\Omega}(p, s, u)}e^{-2C(n, \Lambda)(S-s)} H_{\Omega}(p, s, u).
\end{align*}
\end{lem}
\begin{proof} First, recall
\begin{align*}
\frac{d}{dr}H_{\Omega}(p, r, u) = & \frac{n-1}{r}H_{\Omega}(p, r, u) + 2D_{\Omega}(p, r, u) + 2\int_{B_r(p)}(u - u(p))\Delta u
\end{align*}
Next, we consider the following identity:
\begin{align*}
\ln(\frac{H_{\Omega}(p, S, u)}{H_{\Omega}(p, s, u)}) & = \ln(H_{\Omega}(p, S,u)) - \ln(H_{\Omega}(p, s, u))\\
& = \int_s^S \frac{H_{\Omega}'(p, r, u)}{H_{\Omega}(p, r, u)}dr\\
& = \int_s^S \frac{n-1}{r} + \frac{2}{r}N_{\Omega}(p, r, u) + 2 \frac{\int_{B_r(p)}(u - u(p))\Delta u}{H_{\Omega}(p, r, u)}\, dr
\end{align*}
Using the argument from Lemma \ref{N_3 bound} to bound the last term, and Lemma \ref{N neg term bound 2} to bound $N$, we have the following,
\begin{align*}
\ln(\frac{H_{\Omega}(p, S, u)}{H_{\Omega}(p, s, u)}) & \le \int_s^S \frac{n-1}{r} + \frac{2}{r}e^{2C(n, \Lambda)(S-s)}N_{\Omega}(p, S, u) + 2C(n, \Lambda)dr
\end{align*}
\begin{align*}
\ln(\frac{H_{\Omega}(p, S, u)}{H_{\Omega}(p, s, u)}) & \ge \int_s^S \frac{n-1}{r} + \frac{2}{r}e^{2C(n, \Lambda)(s-S)}N_{\Omega}(p, s, u) - 2C(n, \Lambda)dr.
\end{align*}
Evaluating and exponentiating gives the desired result.
\end{proof}
Now, we need two compactness results.
\begin{lem}\label{H comp 1}
For $u \in \mathcal{A}(n, \Lambda)$, $p \in \overline{\Omega} \cap B_{1/4}(0),$ there is a constant, $C(n, \Lambda),$ such that,
\begin{align*}
\int_{B_{\frac{3}{4}}(p) \cap \Omega}|T_{0, 1}u - T_{0, 1}u(p)|^2 \le C(n, \Lambda) \int_{B_{\frac{3}{4}}(p) \cap \Omega}|T_{0, 1}u|^2.
\end{align*}
\end{lem}
\begin{proof}
First, we note that by Lemma \ref{L: unif Holder bound 1},
\begin{align*}
\int_{B_{\frac{3}{4}}(p) \cap \Omega}|T_{0, 1}u - T_{0, 1}u(p)|^2 \le & \int_{B_{\frac{3}{4}}(p) \cap \Omega}4C(n, \Lambda)^2\\
\le & C(n, \Lambda).
\end{align*}
Thus, we argue that there is a constant, $0<c(n, \Lambda)$, such that
\begin{align*}
c(n, \Lambda) \le \int_{B_{\frac{3}{4}}(p) \cap \Omega}|T_{0, 1}u|^2.
\end{align*}
We argue by limit compactness. Suppose that there is a sequence of functions, $u_i \in \mathcal{A}(n, \Lambda)$, $p_i \in \overline{\Omega_i \cap B_{\frac{1}{4}}(0)},$ such that,
\begin{align*}
\int_{B_{\frac{3}{4}}(p_i) \cap \Omega_i}|T_{0, 1}u_i|^2 \le 2^{-i}.
\end{align*}
We may take a subsequence such that $T_{0, 1}u_i \rightarrow u_{\infty}$ and $\Omega_{i} \rightarrow \Omega_{\infty}$ in the senses of Lemma \ref{compactness 1}. We may also assume that $p_i \rightarrow p_{\infty} \in \overline{\Omega_{\infty} \cap B_{\frac{1}{4}}(0)}$. Since $T_{0, 1}u_i \rightarrow u_{\infty}$ in $C^0(B_1(0)),$ we have that,
\begin{align*}
\int_{B_{\frac{3}{4}}(p_{\infty}) \cap \Omega_{\infty}}|u_{\infty}|^2 = 0.
\end{align*}
However, by Corollary \ref{C: geometric non-degeneracy} applied to $u_{\infty},$ we have that $\Omega_{\infty}$ is a non-degenerate domain, so $B_{\frac{3}{4}}(p_{\infty}) \cap \Omega_{\infty}$ is a non-empty open set. Furthermore, by Lemma \ref{compactness 1}, $u_{\infty}$ is harmonic in $\Omega_{\infty}.$ Therefore, by unique continuation, $u_{\infty} = 0$ in $\Omega_{\infty}.$ However, this contradicts,
\begin{align*}
\lim_{i \rightarrow \infty} H_{\Omega_{i}}(0, 1, T_{0, 1}u_i) = H_{\Omega_{\infty}}(0, 1, u_{\infty}) = 1.
\end{align*}
\end{proof}
\begin{lem}\label{H comp 2}
For $u \in \mathcal{A}(n, \Lambda)$, $p \in \overline{\Omega} \cap B_{1/4}(0),$ there is a constant, $C(n, \Lambda),$ such that
\begin{align*}
\int_{B_{\frac{9}{16}}(p) \cap \Omega}|T_{0, 1}u|^2 \le C(n, \Lambda) \int_{B_{\frac{9}{16}}(p) \cap \Omega}|T_{0, 1}u - T_{0, 1}u(p)|^2
\end{align*}
\end{lem}
\begin{proof}
First, we note that,
\begin{align*}
\int_{B_{\frac{9}{16}}(p) \cap \Omega}|T_{0, 1}u|^2 \le & \int_{B_{\frac{9}{16}}(p) \cap \Omega}|T_{0, 1}u - T_{0, 1}u(p) + T_{0, 1}u(p)|^2\\
\le & 2\int_{B_{\frac{9}{16}}(p) \cap \Omega}|T_{0, 1}u - T_{0, 1}u(p)|^2 + 2 \int_{B_{\frac{9}{16}}(p) \cap \Omega}|T_{0, 1}u(p)|^2.
\end{align*}
Since by Lemma \ref{L: unif Holder bound 1},
\begin{align*}
\int_{B_{\frac{9}{16}}(p) \cap \Omega}|T_{0, 1}u(p)|^2 \le & C(n, \Lambda)^2,
\end{align*}
we reduce to arguing that there is a constant, $0 < c(n, \Lambda)$, such that,
\begin{align*}
c(n, \Lambda) \le \int_{B_{\frac{9}{16}}(p) \cap \Omega}|T_{0, 1}u - T_{0, 1}u(p)|^2.
\end{align*}
We argue by limit compactness. Suppose that there is a sequence of functions, $u_i \in \mathcal{A}(n, \Lambda)$, $p_i \in \overline{\Omega_i \cap B_{\frac{1}{4}}(0)},$ such that
\begin{align*}
\int_{B_{\frac{9}{16}}(p_i) \cap \Omega_i}|T_{0, 1}u_i - T_{0, 1}u_i(p_i)|^2 \le 2^{-i}
\end{align*}
We may take a subsequence such that $T_{0, 1}u_i \rightarrow u_{\infty}$ and $\Omega_{i} \rightarrow \Omega_{\infty}$ in the senses of Lemma \ref{compactness 1}. We may also assume that $p_i \rightarrow p_{\infty} \in \overline{\Omega_{\infty} \cap B_{\frac{1}{4}}(0)}$. Since $T_{0, 1}u_i \rightarrow u_{\infty}$ in $C^0(B_1(0)),$ we have that
\begin{align*}
\int_{B_{\frac{9}{16}}(p_{\infty}) \cap \Omega_{\infty}}|u_{\infty} - u_{\infty}(p_{\infty})|^2 = 0.
\end{align*}
However, by Corollary \ref{C: geometric non-degeneracy} applied to $u_{\infty},$ we have that $\Omega_{\infty}$ is a non-degenerate domain. Furthermore, by Lemma \ref{compactness 1}, $u_{\infty}$ is harmonic in $\Omega_{\infty}.$ Therefore, by unique continuation, $u_{\infty} = u_{\infty}(p_{\infty})$ in $\Omega_{\infty}.$ However, this contradicts that both
\begin{align*}
\lim_{i \rightarrow \infty} H_{\Omega_{i}}(0, 1, T_{0, 1}u_i) = H_{\Omega_{\infty}}(0, 1, u_{\infty}) = 1
\end{align*}
and $u_{\infty}$ must vanish continuously on $\partial \Omega_{\infty}$.
\end{proof}
\begin{lem}\label{N bound lem}
Let $u \in \mathcal{A}(n, \Lambda)$, as above. There is a constant, $C(n, \Lambda)$, such that for all $p \in B_{\frac{1}{16}}(0) \cap \overline{\Omega}$ and all $0< r \le \frac{1}{16},$
\begin{equation}
N_{\Omega}(p, r, u) \le C(n, \Lambda).
\end{equation}
\end{lem}
\begin{proof} Recall that $0 \in \partial \Omega$ and that the Almgren frequency function is invariant under rescalings. Therefore, we normalize our function $u$ by the rescaling $v = T_{0,\frac{1}{8}}u$.
Therefore, applying Lemma \ref{L: H doubling-ish} to $p = 0$, letting $r = cR$, and integrating both sides with respect to $R$ from $0$ to $S \le 1$, we have that for any $c \in (0, 1)$
\begin{align*}
\int_{B_{S}(0)}|v|^2 &\le \int_0^{S} (\frac{1}{c})^{(n-1) + 2e^{2C(n, \Lambda)(R-cR)}N_{T_{0, \frac{1}{8}}\Omega}(0, R, v)}e^{2C(n, \Lambda)(R-cR)} \int_{\partial B_{cR}(0)}|v|^2dSdR\\
& \le (\frac{1}{c})^{(n-1) + 2e^{2C(n, \Lambda)(S)}[e^{2C(n, \Lambda)(S)} N_{T_{0, \frac{1}{8}}\Omega}(0, S, v)]}e^{2C(n, \Lambda)(S)} \int_0^{S} \int_{\partial B_{cR}(0)}|v|^2dSdR.
\end{align*}
Thus, we have that for any such $c \in (0,1)$ and any $0< S \le 1$
\begin{align*}
\fint_{B_S(0)}|v|^2dV \le (\frac{1}{c})^{2e^{4C(n, \Lambda)(S)}N_{T_{0, \frac{1}{8}}\Omega}(0, S, v)}e^{2C(n, \Lambda)(S)}\fint_{B_{cS}(0)} |v|^2dV.
\end{align*}
Let $S = 1$ and $c= \frac{1}{16}$. We have that,
\begin{equation}\label{N bound base ineq}
\fint_{B_1(0)}|v|^2 \le (16)^{2e^{4C(n, \Lambda)}N_{T_{0, \frac{1}{8}}\Omega}(0, 1, v)}e^{2C(n, \Lambda)} \fint_{B_{\frac{1}{16}}(0)} |v|^2,
\end{equation}
Thus, for any $p \in B_{\frac{1}{4}}(0) \cap \overline{T_{0, \frac{1}{8}}\Omega}$ by inclusion, we have the following:
\begin{align*}
\int_{B_{1}(0)}|v|^2 &\ge \int_{B_{3/4}(p)}|v|^2\\
\end{align*}
and
\begin{align*}
\int_{B_{\frac{1}{16}}(0)} |v|^2 & \le \int_{B_{\frac{9}{16}}(p)}|v|^2
\end{align*}
By Lemmata \ref{H comp 1} and \ref{H comp 2} applied to (\ref{N bound base ineq}), we have
\begin{equation}\label{N bound ineq 2}
\fint_{B_{\frac{3}{4}}(p)}|v-v(p)|^2 \le C(n, \Lambda) (16)^{2e^{4C(n, \Lambda)}N_{T_{0, \frac{1}{8}}\Omega}(0, 1, v)}e^{2C(n, \Lambda)}(\fint_{B_{\frac{9}{16}}(p)} |v-v(p)|^2).
\end{equation}
Now, we wish to bound $\fint_{B_{\frac{3}{4}}(p)}|v- v(p)|^2$ from below and $\fint_{B_{\frac{9}{16}}(p)} |v- v(p)|^2$ from above. We rely upon Lemma \ref{L: H doubling-ish}. Thus, for all $p \in B_{\frac{1}{4}}(0) \cap T_{0, \frac{1}{8}}\Omega$, we may bound $\int_{B_{\frac{3}{4}}(p)} |v- v(p)|^2$ from below as follows:
\begin{align*}
\int_{B_{\frac{3}{4}}(p)}|v- v(p)|^2 & \ge \int_{B_{\frac{3}{4}}(p) \setminus B_{\frac{5}{8}}(p)} |v- v(p)|^2 \\
& = \int_{\frac{5}{8}}^{\frac{3}{4}} H_{T_{0, \frac{1}{8}}\Omega}(p, r, v) dr \\
& \ge \int_{\frac{5}{8}}^{\frac{3}{4}} (\frac{r}{\frac{5}{8}})^{(n-1) + 2e^{2C(n, \Lambda)(\frac{5}{8}-r)}N_{T_{0, \frac{1}{8}}\Omega}(p, \frac{5}{8}, v)}e^{-2C(n, \Lambda)(r-\frac{5}{8})} H_{T_{0, \frac{1}{8}}\Omega}(p, \frac{5}{8}, v) dr \\
& \ge e^{-2C(n, \Lambda)} \int_{\frac{5}{8}}^{\frac{3}{4}} H_{T_{0, \frac{1}{8}}\Omega}(p, \frac{5}{8}, v) dr \\
& \ge c \int_{\partial B_{\frac{5}{8}}(p)} |v- v(p)|^2 dS
\end{align*}
where we have used the fact that $N_{\Omega}(p, r, u) \ge 0$.
To get the upper bound we want, we use the same trick.
\begin{align*}
\int_{B_{\frac{9}{16}}(p)} |v- v(p)|^2 & = \int_0^{\frac{9}{16}} H_{T_{0, \frac{1}{8}}\Omega}(p, r, v) dr\\
& \le \int_0^{\frac{9}{16}} (\frac{r}{\frac{9}{16}})^{(n-1) + 2e^{2C(n, \Lambda)(r-\frac{9}{16})}N_{T_{0, \frac{1}{8}}\Omega}(p, r, v)}e^{2C(n, \Lambda)(\frac{9}{16}-r)}H_{T_{0, \frac{1}{8}}\Omega}(p, \frac{9}{16}, v) dr\\
& \le e^{2C(n, \Lambda)}\int_0^{\frac{9}{16}} H_{T_{0, \frac{1}{8}}\Omega}(p, \frac{9}{16}, v) dr\\
& \le c\int_{\partial B_{\frac{9}{16}} (p)} |v- v(p)|^2.
\end{align*}
where we have used the fact that $N_{\Omega}(p, r, u) \ge 0$.
Putting it all together, we plug our above bounds into (\ref{N bound ineq 2}) and obtain the following for all $p \in B_{\frac{1}{4}}(0) \cap \overline{T_{0, \frac{1}{8}}\Omega}$,
$$\fint_{\partial B_{\frac{5}{8}}(p)} |v - v(p)|^2 dS \le C(n, \Lambda) (16)^{2e^{4C(n, \Lambda)}N_{T_{0, \frac{1}{8}}\Omega}(0, 1, v)}e^{2C(n, \Lambda)}(\fint_{\partial B_{\frac{9}{16}} (p)} |v - v(p)|^2).$$
Though somewhat messier, we restate the above in the following form for convenience later.
\begin{align} \label{ugly}
\frac{\fint_{\partial B_{\frac{5}{8}}(p)} |v-v(p)|^2 dS}{\fint_{\partial B_{\frac{9}{16}}(p)}|v-v(p)|^2 dS} \le & C(n, \Lambda) (16)^{2e^{4C(n, \Lambda)}N_{T_{0, \frac{1}{8}}\Omega}(0, 1, v)}e^{2C(n, \Lambda)}.
\end{align}
Now, we change tack slightly and, recalling Lemma \ref{L: avg log H derivative} and the almost monotonicity of $N_{T_{0, \frac{1}{8}}\Omega}(p, r, v)$,
\begin{align*}
\ln(\frac{\fint_{\partial B_{\frac{5}{8}}(p) \cap T_{0, \frac{1}{8}}\Omega} |v -v(p)|^2}{\fint_{\partial B_{\frac{9}{16}}(p) \cap T_{0, \frac{1}{8}}\Omega} |v - v(p)|^2}) & = \int_{\frac{9}{16}}^{\frac{5}{8}} \frac{d}{ds}\ln(\frac{1}{s^{n-1}}H_{T_{0, \frac{1}{8}}\Omega}(p, s, v)) ds\\
& = \int_{\frac{9}{16}}^{\frac{5}{8}} \frac{2}{s}N_{T_{0, \frac{1}{8}}\Omega}(p, s, v) + 2\frac{\int_{B_s(p)}(v - v(p))\Delta v}{H_{T_{0, \frac{1}{8}}\Omega}(p, s, v)}\,ds\\
& \ge \int_{\frac{9}{16}}^{\frac{5}{8}} \frac{2}{s}N_{\Omega}(p, s, v) - 2C(n, \Lambda)\,ds\\
& \ge \int^{\frac{5}{8}}_{\frac{9}{16}} \frac{2}{s}e^{-2C(n, \Lambda)(\frac{5}{8} - \frac{1}{2})}N_{\Omega}(p, \frac{1}{2}, v)\,ds - C(n, \Lambda) \\
& \ge C(n, \Lambda) N_{\Omega}(p, \frac{1}{2}, v) - C(n, \Lambda)
\end{align*}
Thus, if we recall Equation \ref{ugly}, above, we see that
\begin{align*}
C(n, \Lambda)[N_{\Omega}(p, \frac{1}{2}, v)] - C(n, \Lambda) \le & \ln(\frac{\fint_{\partial B_{\frac{5}{8}}(p)} |u - u(p)|^2}{\fint_{\partial B_{\frac{9}{16}}(p)} |u- u(p)|^2})\\
\le & \ln[C(n, \Lambda) (16)^{2e^{4C(n, \Lambda)}N_{T_{0, \frac{1}{8}}\Omega}(0, 1, v)}e^{2C(n, \Lambda)}]\\
= & C(n, \Lambda)2e^{4C(n, \Lambda)}N_{T_{0, \frac{1}{8}}\Omega}(0, 1, v) +C(n, \Lambda)\\
N_{\Omega}(p, \frac{1}{2}, v) \le & C(n, \Lambda)2e^{4C(n, \Lambda)}N_{T_{0, \frac{1}{8}}\Omega}(0, 1, v) + C(n, \Lambda)\\
N_{\Omega}(p, \frac{1}{2}, v) \le & C(n, \Lambda)2e^{4C(n, \Lambda)} \Lambda + C(n, \Lambda).
\end{align*}
In the last inequality, we use the fact that $N_{T_{0, \frac{1}{8}}\Omega}(0, 1, v) = N_{\Omega}(0, \frac{1}{8}, u) \le N_{\Omega}(0, 1, u) \le \Lambda$, since $N_{\Omega}(0, r, u)$ is monotonically increasing in $r$ by Lemma \ref{N monotonicity 1}.
Now, Lemma \ref{N neg term bound 2}, gives that for $1/2 > s > 0$,
\begin{equation*}
e^{C(n, \Lambda)} N_{T_{0, \frac{1}{8}}\Omega}(p, 1/2, v) \ge N_{T_{0, \frac{1}{8}}\Omega}(p, s, v).
\end{equation*}
Thus, $N_{\Omega}(p, s, v) \le e^{C(n, \Lambda)}[C(n, \Lambda)2e^{4C(n, \Lambda)} \Lambda + C(n, \Lambda)] = C_1(n, \Lambda).$ This proves the lemma.
\end{proof}
Now that we know that $N_{\Omega}(p, s, u) \le C_1(n, \Lambda)$ for all $p \in \overline{B_{\frac{1}{16}}(0) \cap \Omega}$ and all radii $0 < s \le \frac{1}{16}$, we may improve our almost monotonicity estimates. For example, returning to Lemma \ref{N neg term bound 2}, we may improve our estimate to the following.
\begin{lem}\label{N neg term bound 3}
For $u \in \mathcal{A}(n, \Lambda)$ and $p \in \overline{\Omega \cap B_{\frac{1}{16}}(0)}$, for all radii $0 < r < R \le \frac{1}{16}$,
\begin{align}
N_{\Omega}(p, R, u) - N_{\Omega}(p, r, u) + 2C_1(n, \Lambda)(R -r)& \ge \int_{r}^R N_1'(s)ds
\end{align}
where
\begin{align*}
N_1'(r) & = \frac{2}{r H_{\Omega}(p, r, u)} ( \int_{\partial B_r(p)} |\nabla u \cdot (y-p) - \lambda(p, r, u) (u(y)-u(p))|^2d\sigma(y)) \ge 0
\end{align*}
and
$$
\lambda(p, r, u) = \frac{\int_{\partial B_{r}(p)} (u(y) - u(p))\nabla u \cdot (y-p) d\sigma(y)}{H_{\Omega}(p, r, u)}.
$$
\end{lem}
\begin{proof}
Plugging the bounds from Lemma \ref{N bound lem} into the proof of Lemma \ref{N neg term bound 2},
\begin{align*} \nonumber
\frac{d}{dr}N_{\Omega}(p, r, u) \ge & N'_1(r) -2C_1(n, \Lambda)\\
\frac{d}{dr}N_{\Omega}(p, r, u) + 2C_1(n, \Lambda) & \ge N_1'(r)\\
N_{\Omega}(p, R, u) - N_{\Omega}(p, r, u) + 2C_1(n, \Lambda)(R -r) & \ge \int_{r}^R N_1'(s)ds.
\end{align*}
\end{proof}
We also have an improved version of Lemma \ref{L: H doubling-ish}.
\begin{lem} \label{L: H doubling-ish 2}($H_{\Omega}(p, r, u)$ is Doubling)
Let $u \in \mathcal{A}(n, \Lambda)$ with $p \in \overline{B_{\frac{1}{16}}(0) \cap \Omega}$. For any $0 < s < S \le \frac{1}{16}$,
\begin{align*}
H_{\Omega}(p, S, u) & \le (\frac{S}{s})^{(n-1) + 2(N_{\Omega}(p, S, u) +2C_1(n, \Lambda)(S-s))} H_{\Omega}(p, s, u)\\
H_{\Omega}(p, S, u) & \ge (\frac{S}{s})^{(n-1) + 2(N_{\Omega}(p, s, u) - 2C_1(n, \Lambda)(S-s))} H_{\Omega}(p, s, u).
\end{align*}
\end{lem}
\begin{proof} First, recall
\begin{align*}
\frac{d}{dr}H_{\Omega}(p, r, u) = & \frac{n-1}{r}H_{\Omega}(p, r, u) + 2D_{\Omega}(p, r, u) + 2\int_{B_r(p)}(u - u(p))\Delta u
\end{align*}
Next, we consider the following identity:
\begin{align*}
\ln(\frac{H_{\Omega}(p, S, u)}{H_{\Omega}(p, s, u)}) & = \ln(H_{\Omega}(p, S,u)) - \ln(H_{\Omega}(p, s, u))\\
& = \int_s^S \frac{H_{\Omega}'(p, r, u)}{H_{\Omega}(p, r, u)}dr\\
& = \int_s^S \frac{n-1}{r} + \frac{2}{r}N_{\Omega}(p, r, u) + 2 \frac{\int_{B_r(p)}(u - u(p))\Delta u}{H_{\Omega}(p, r, u)}\, dr
\end{align*}
Using the argument in the proof of Lemma \ref{N_3 bound} to bound the last term and Lemma \ref{N neg term bound 3} we have the following,
\begin{align*}
\ln(\frac{H_{\Omega}(p, S, u)}{H_{\Omega}(p, s, u)}) & \le \int_s^S \frac{n-1}{r} + \frac{2}{r}(N_{\Omega}(p, S, u) + 2C_1(n, \Lambda)(S - r)) + 2C(n, \Lambda)dr
\end{align*}
\begin{align*}
\ln(\frac{H_{\Omega}(p, S, u)}{H_{\Omega}(p, s, u)}) & \ge \int_s^S \frac{n-1}{r} + \frac{2}{r}(N_{\Omega}(p, s, u) - 2C_1(n, \Lambda)(r - s)) - 2C(n, \Lambda)dr
\end{align*}
Evaluating and exponentiating gives the desired result.
\end{proof}
\section{Compactness}\label{S: compactness}
The uniform bounds on the Almgren frequency function allow us to prove strong compactness results for the class $\mathcal{A}(n, \Lambda)$. After proving a uniform Lipschitz bound and strong compactness in $W^{1, 2},$ we include some important corollaries.
\begin{lem}\label{Lipschitz bound}
For $u \in \mathcal{A}(n, \Lambda),$ for all $p \in \overline{\Omega \cap B_{\frac{1}{32}}(0)}$ and all radii, $0< r \le \frac{1}{128}$, $T_{p, r}u \in Lip(B_1(0))$ with uniform Lipschitz constant $Lip(T_{p, r}u) \le C(n, \Lambda)$.
\end{lem}
\begin{proof}
Since $T_{p, r}u$ is continuous and constant outside of $T_{p, r} \Omega$, we reduce to bounding $\nabla T_{p, r}u$ at interior points $y \in T_{p, r}\Omega \cap B_1(0).$ Note that by our definition of the rescalings and Lemma \ref{L: H doubling-ish 2},
\begin{align*}
|\nabla T_{p, r}u(y)| = & \frac{1}{4}\left(\frac{\frac{1}{(4r)^{n-1}}H_{\Omega}(p, 4r, u)}{\frac{1}{r^{n-1}}H_{\Omega}(p, r, u)}\right)^{\frac{1}{2}} |\nabla T_{p, 4r}u(y')|\\
\le & \frac{1}{4} (4)^{(C_1(n, \Lambda) +2C_1(n, \Lambda)(3r))} |\nabla T_{p, 4r}u(y')|\\
\le & C(n, \Lambda) |\nabla T_{p, 4r}u(y')|\\
\end{align*}
where $y' = \frac{1}{4}y$.
Note that $y' \in B_{\frac{1}{4}}(0) \cap T_{p, 4r}\Omega$. Let $\delta = dist(y', T_{p, 4r}\partial \Omega).$ Therefore, $\nabla T_{p, 4r}u(y') = \fint_{B_{\delta}(y')} \nabla T_{p, 4r}u$. Recall that $|\nabla u|$ is subharmonic.
\begin{align*}
|\nabla T_{p, 4r}u(y')| & \le \fint_{B_{\delta}(y')}|\nabla T_{p, 4r} u|\\
& \le (\fint_{B_{\delta}(y')}|\nabla T_{p, 4r} u|^2)^{\frac{1}{2}}\\
& \le (C_1(n, \Lambda)\delta^{-2}(\fint_{\partial B_{\delta}(y')}(T_{p, 4r}u - T_{p, 4r}u(y'))^2 ) )^{\frac{1}{2}}.
\end{align*}
Now, let $Q \in T_{p, 4r} \partial \Omega$ be a point such that $\delta = |y' - Q|$ and let $y' = y'' + Q.$ Now, we translate the domain by $Q.$
\begin{align*}
\fint_{\partial B_{\delta}(y')}(T_{p, 4r}u - T_{p, 4r}u(y'))^2d\sigma = \fint_{\partial B_{\delta}(y'')}(T_{p, 4r}u(x + Q) - T_{p, 4r}u(y''+ Q))^2d\sigma.
\end{align*}
Note that $T_{p, 4r}u(x + Q) \in \mathcal{A}(n, C_1(n, \Lambda)).$ Now, by Corollary \ref{sub-linear bound} applied to $T_{p, 4r}u(x + Q) \in \mathcal{A}(n, C_1(n, \Lambda))$ with $Q_0 = 0$ we bound
\begin{align*}
\fint_{\partial B_{\delta}(y')}(T_{p, 4r}u - T_{p, 4r}u(y'))^2d\sigma & = \fint_{\partial B_{\delta}(y'')}(T_{p, 4r}u(x + Q) - T_{p, 4r}u(y''+ Q))^2d\sigma(x)\\
& \le \fint_{\partial B_{\delta}(y'')}(4C(n, C_1(n, \Lambda))\delta)^2d\sigma \\
& = (4C(n, C_1(n, \Lambda))\delta)^2 \\
\end{align*}
Thus, we have that,
\begin{align*}
|\nabla T_{p, r}u(y)| \le & C(n, \Lambda) |\nabla T_{p, 4r}u(y')|\\
& \le C(n, \Lambda)(C_1(n, \Lambda)\delta^{-2}(\fint_{\partial B_{\delta}(y')}(T_{p, 4r}u - T_{p, 4r}u(y'))^2d\sigma ) )^{\frac{1}{2}}\\
& \le C(n, \Lambda) C_1(n, \Lambda)^{\frac{1}{2}}\frac{1}{\delta}(4C(n, C_1(n, \Lambda))\delta)\\
& \le C(n, \Lambda)C_1(n, \Lambda)^{\frac{1}{2}}(4C(n, C_1(n, \Lambda)))\\
& \le C(n, \Lambda)\\
\end{align*}
\end{proof}
\begin{lem}(Compactness)\label{compactness}
Let $u_i \in \mathcal{A}(n, \Lambda)$, $p_i \in \overline \Omega_i \cap B_{\frac{1}{32}}(0)$, and $r_i \in (0,\frac{1}{128}]$. Then, there exists a subsequence and a function, $u_{\infty} \in W^{1, 2}_{loc}(\mathbb{R}^n)$, such that $T_{p_j, r_j}u_j$ converges to $u_{\infty}$ in the following senses.
\begin{enumerate}
\item $T_{p_i, r_i}u_i \rightarrow u_{\infty}$ in $C^{0,1}(B_1(0))$
\item If $T_{p_i, r_i}u_i|_{T_{p_i, r_i}\partial \Omega_i} = a_i$, then $a_i \rightarrow a$ and $T_{p_i, r_i} \partial \Omega_i \cap B_1(0) \rightarrow \{ u_{\infty} = a \} \cap B_1(0)$ in the Hausdorff metric on compact subsets.
\item $T_{p_i, r_i}u_i \rightarrow u_{\infty}$ in $L^2(B_1(0))$
\item $\nabla T_{p_i, r_i}u_i \rightharpoonup \nabla u_{\infty}$ in $L^2(B_1(0), \mathbb{R}^n)$
\end{enumerate}
\end{lem}
\begin{proof} To see $(1)$, we observe that $T_{p_i, r_i}u_i(0) = 0$ and $\{T_{p_i, r_i}u_i\}$ are uniformly Lipschitz. Therefore, by Arzela-Ascoli, there exists a subsequence which converges in $C^{0,1}(B_1(0))$. To see that the $T_{p_i, r_i} \partial \Omega_i \cap B_1(0)\rightarrow \{ u_{\infty} = a \} \cap B_1(0)$, we note that since $T_{p_i, r_i}u_i \rightarrow u_{\infty}$ in $C^{0, 1}(B_1(0))$, if $x_i \rightarrow x$, then
$$
\lim_{i \rightarrow \infty} T_{p_i, r_i}u_i(x_i) = u_{\infty}(x).
$$
Thus, if there were an $\epsilon > 0$ and a subsequence of points $x_i \in T_{p_i, r_i} \partial \Omega_i$ with $x_i \not \in B_{\epsilon}(\{u_{\infty}= a\})$, then any limit point, $x$, of the $x_i$ would satisfy $a = \lim_{i \rightarrow \infty} T_{p_i, r_i}u_i(x_i) = u_{\infty}(x)$ while $x \notin B_{\epsilon/2}(\{u_{\infty}= a\})$, a contradiction. Similarly, if there exists an $x \in \{u_{\infty}= a\} \cap B_1(0)$ such that $x \not \in B_{\epsilon}(\{T_{p_i, r_i}u_i= a_i\})$ for infinitely many $i$, this contradicts $(1).$
Since $C^{0, 1}(B_1(0)) \subset L^2(B_1(0))$, this also proves $(3)$.
$(4)$ follows from weak compactness of bounded sequences in $L^2$. By our choice of rescaling, we have that $N_{T_{p_i, r_i}\Omega_i}(0, 1, T_{p_i, r_i}u_i) = \int_{B_1(0)}|\nabla T_{p_i, r_i}u_i|^2dx.$ Therefore, Lemma \ref{N bound lem} gives that the $\nabla T_{p_i, r_i}u_i$ are uniformly bounded in $L^2(B_1(0), \mathbb{R}^n),$ and weak compactness gives the desired convergence.
\end{proof}
\begin{cor}(Limit functions are harmonic in the limit domain)\label{C: limit harmonic}
Let the sequence of functions $T_{p_j, r_j}u$ converge to the function $u_{\infty}$ in the senses of Lemma \ref{compactness}. Then, $u_{\infty}$ is harmonic in $\Omega_{\infty}$.
\end{cor}
\begin{proof} Recall that the boundaries $T_{p_j, r_j}\partial \Omega_j \rightarrow \partial \Omega_{\infty}$ in the Hausdorff distance on compact subsets. Therefore, for any $0 < \epsilon$, for $j$ large enough, every $T_{p_j, r_j}u_j$ will be harmonic in the region $B_1(0) \setminus B_{\epsilon}(\partial \Omega_{\infty}).$ By $C^{0, \gamma}(B_1(0))$ convergence of harmonic functions, $u_{\infty}$ is therefore harmonic in $B_1(0) \setminus B_{\epsilon}(\partial \Omega_{\infty}).$ Letting $\epsilon \rightarrow 0$ gives the desired statement.
\end{proof}
We also have the following immediate corollary.
\begin{cor}(Geometric Non-degeneracy)\label{geometric non-degeneracy}
Let $u \in \mathcal{A}(n, \Lambda)$, $p \in B_{\frac{1}{32}}(0) \cap \overline{\Omega}$, and $0<r \le \frac{1}{128}$. There exists a constant, $0< \alpha(n, \Lambda)$ such that the components of $\partial B_1(0) \cap T_{p, r}\Omega$ are relatively convex, relatively open sets which satisfy
$$\mathcal{H}^{n-1}(\partial B_1(0) \cap T_{p, r}\Omega) > \alpha.$$
\end{cor}
\begin{proof} That the components of $\partial B_1(0) \cap T_{p, r}\Omega$ are relatively open and relatively convex is immediate from the definition of $T_{p, r}\Omega.$ To see that $\mathcal{H}^{n-1}(\partial B_1(0) \cap T_{p, r}\Omega) > \alpha,$ observe that since $\max_{B_1(0)}|T_{p, r}u(x)| \le C(n, \Lambda)$, and that $H_{T_{p, r}\Omega}(0, 1, T_{p, r}u) = 1$, we have that
\begin{align*}
H_{T_{p, r}\Omega}(0, 1, T_{p, r}u) & \le \mathcal{H}^{n-1}(\partial B_1(0) \cap T_{p, r}\Omega) C(n, \Lambda)^2
\end{align*}
Therefore, $\mathcal{H}^{n-1}(\partial B_1(0) \cap T_{p, r}\Omega) \ge C(n, \Lambda)^{-2} =: \alpha.$
\end{proof}
Note that by convexity, this implies that $T_{p, r}\Omega$ satisfies an interior cone condition \textit{uniformly} in admissible $p, r,$ and $u$, depending only upon the ambient dimension and $\Lambda$, our bound on the Almgren frequency.
\begin{lem}(Strong convergence) \label{strong convergence}
Let $u_i \in \mathcal{A}(n, \Lambda)$, $p_i \in \overline \Omega_i \cap B_{\frac{1}{32}}(0)$, and $r_i \in (0, \frac{1}{128}]$. Then, there exists a subsequence and a function, $u_{\infty} \in W^{1, 2}_{loc}(\mathbb{R}^n)$, such that $T_{p_i, r_i}u_i \rightarrow u_{\infty}$ in $W^{1,2}(B_1(0)).$
\end{lem}
\begin{proof}
Invoking Lemma \ref{compactness}, the only thing to show is that $\nabla T_{p_j, r_j}u_j \rightarrow \nabla u_{\infty}$.
Recall that by our choice of subsequence, $\partial \Omega_j$ have a convergent subsequence such that $T_{p_i, r_i}\partial \Omega_i \rightarrow \partial \Omega_{\infty}$ locally in the Hausdorff metric. Furthermore, by Corollary \ref{geometric non-degeneracy}, the limit domain is non-degenerate.
By Lemma \ref{Lipschitz bound} and the fact that, for all $\Omega \in \mathcal{D}(n)$, $\partial \Omega$ is locally the graph of a Lipschitz function (so that $\overline{\dim}_\mathcal{M}(\partial \Omega \cap B_1(0)) = n-1$), together with continuity of measures, for all $\epsilon > 0$ we can find a $\tau(\Lambda, n, \epsilon)$, independent of $T_{p_i, r_i}u_i$, such that
\begin{align*}
\int_{B_1(0) \cap B_{\tau}(\partial \Omega_{\infty})} |\nabla T_{p_i, r_i}u|^2dx \le \epsilon.
\end{align*}
Therefore,
\begin{align*}
\limsup_{j \rightarrow \infty} D_{T_{p_j, r_j}\Omega_j}(0, 1, T_{p_j, r_j}u_j) = & \limsup_{j \rightarrow \infty} \int_{B_1(0)}|\nabla T_{p_j, r_j}u_j|^2dx\\
\le & \limsup_{j \rightarrow \infty} \int_{B_1(0) \setminus B_{\tau}(\partial \Omega_{\infty})} |\nabla T_{p_j, r_j}u_j|^2dx + \epsilon \\
\le & D_{\Omega_{\infty}}(0, 1, u_{\infty}) + \epsilon,
\end{align*}
where the final inequality follows from the locally smooth convergence of the (eventually harmonic) functions $T_{p_j, r_j}u_j$ in the region $B_1(0) \setminus B_{\tau}( \partial \Omega_{\infty})$, which holds because $T_{p_j, r_j}\partial \Omega_j \rightarrow \partial \Omega_{\infty}$ in the Hausdorff metric. Since $\epsilon > 0$ was arbitrary, we have that $\limsup_{j \rightarrow \infty}D_{T_{p_j, r_j}\Omega_j}(0, 1, T_{p_j, r_j}u_j) \le D_{\Omega_{\infty}}(0, 1, u_{\infty})$. The reverse inequality follows from weak lower semi-continuity of the Dirichlet energy. Thus, $\lim_{j \rightarrow \infty}D_{T_{p_j, r_j}\Omega_j}(0, 1, T_{p_j, r_j}u_j) = D_{\Omega_{\infty}}(0, 1, u_{\infty})$, which, together with weak convergence, implies strong convergence.
\end{proof}
\subsection{Corollaries to Compactness}
\begin{cor}\label{N continuity}
For $u_j \in \mathcal{A}(n, \Lambda)$, $p_j \in B_{\frac{1}{16}}(0) \cap \overline \Omega_j$, and $r_j \in (0, \frac{1}{256}]$, there exists a subsequence and a limit function $u_{\infty}$ such that $N_{T_{p_j, r_j}\Omega_j} (0, 1, T_{p_j, r_j}u_j) \rightarrow N_{\Omega_{\infty}}(0, 1, u_{\infty})$.
\end{cor}
\begin{proof}
The continuous convergence of $T_{p_j, 2r_j}u_j$ in $B_{\frac{1}{2}}(0)$ and the strong convergence of $\nabla T_{p_j, 2r_j}u_j$ in $B_{\frac{1}{2}}(0)$ give the desired convergence of $H_{T_{p_j, 2r_j}\Omega_j} (0, \frac{1}{2}, T_{p_j, 2r_j}u_j)$ and $D_{T_{p_j, 2r_j}\Omega_j} (0, \frac{1}{2}, T_{p_j, 2r_j}u_j)$, respectively. Since
\begin{align*}
H_{T_{p_j, 2r_j}\Omega_j} (0, \frac{1}{2}, T_{p_j, 2r_j}u_j) = \left(\frac{H_{\Omega_j}(p_j, \frac{1}{2}r_j, u_j)}{H_{\Omega_j}(p_j, r_j, u_j)}\right)^{\frac{1}{2}}\\
D_{T_{p_j, 2r_j}\Omega_j} (0, \frac{1}{2}, T_{p_j, 2r_j}u_j) = D_{T_{p_j, r_j}\Omega_j} (0, 1, T_{p_j, r_j}u_j) \left(\frac{H_{\Omega_j}(p_j, \frac{1}{2}r_j, u_j)}{H_{\Omega_j}(p_j, r_j, u_j)}\right)^{\frac{1}{2}},
\end{align*}
it suffices to show that there exist constants $0< c(n, \Lambda) < C(n, \Lambda) < \infty$ such that for all $u_j \in \mathcal{A}(n, \Lambda)$, $p_j \in B_{\frac{1}{16}}(0) \cap \overline \Omega_j$, and $r_j \in (0, \frac{1}{256}]$,
\begin{align*}
c < H_{T_{p_j, r_j}\Omega_j}(0, \frac{1}{2}, T_{p_j, r_j}u_j) < C.
\end{align*}
The upper bound follows immediately from Lemma \ref{Lipschitz bound}. We argue the lower bound by compactness. Suppose there were a sequence such that $H_{T_{p_j, r_j}\Omega_j}(0, \frac{1}{2}, T_{p_j, r_j}u_j) \le 2^{-j}.$ By Lemma \ref{compactness}, we may extract a subsequence which converges to a function $u_{\infty}$ in $C^{0, \gamma}(B_1(0)).$ By Corollary \ref{C: limit harmonic}, $u_{\infty}$ is harmonic in the limit domain, $\Omega_{\infty}$. But, by the continuous convergence, $H_{\Omega_{\infty}}(0, \frac{1}{2}, u_{\infty}) = 0$. Thus, $u_{\infty} \equiv 0$ on $\partial B_{\frac{1}{2}}(0)$ and, by unique continuation, $u_{\infty} \equiv 0$ in $\Omega_{\infty}.$
On the other hand, the functions $T_{p_j, r_j}u_j$ are uniformly Lipschitz with $H_{T_{p_j, r_j}\Omega_j}(0, 1, T_{p_j, r_j}u_j) = 1$ for all $j = 1, 2, ...$. Let $x_j \in \partial B_1(0) \cap T_{p_j, r_j}\Omega_j$ be a point such that $|T_{p_j, r_j}u_j(x_j)| \ge \frac{1}{\omega_{n-1}}$, and let $x_{\infty}$ be a limit point of the sequence $x_j$. Since the boundaries $T_{p_j, r_j}\partial \Omega_j \rightarrow \partial \Omega_{\infty}$ in the Hausdorff metric and the functions $T_{p_j, r_j}u_j$ are uniformly Lipschitz, there must be a point $q \in \partial B_1(0) \cap \Omega_{\infty}$ near $x_{\infty}$ and a ball $B_{\delta}(q) \subset \subset \Omega_{\infty}$ on which $|T_{p_j, r_j}u_j| \ge \frac{1}{2\omega_{n-1}}$ for all sufficiently large indices $j$. By continuous convergence, then, $u_{\infty}$ cannot be identically zero. This is a contradiction. Thus, the desired constant exists.
\end{proof}
\begin{cor}\label{N non-degenerate almost monotonicity}
Let $u \in \mathcal{A}(n, \Lambda)$ with $p_0 \in B_{\frac{1}{32}}(0) \cap \overline{\Omega}$ and $0< r_0 \le \frac{1}{256}$. Then, for all $p \in B_1(0) \cap \overline{T_{p_0, r_0}\Omega}$ and all $0 \le s < S \le 1$,
\begin{eqnarray}\nonumber
\frac{2}{C} \int_{A_{s, S}(p)} \frac{|\nabla T_{p_0, r_0}u(y) \cdot (y-p) - \lambda(p, |y-p|, T_{p_0, r_0}u) (T_{p_0, r_0}u(y) -T_{p_0, r_0}u(p))|^2}{|y-p|^{n+2}}dy \\ \le N_{T_{p_0, r_0}\Omega}(p, S, T_{p_0, r_0}u) - N_{T_{p_0, r_0}\Omega}(p, s, T_{p_0, r_0}u) + 2C_1(n, \Lambda)(S -s)
\end{eqnarray}
where $C = C(n, \Lambda) = Lip(T_{p_0, r_0}u).$
\end{cor}
\begin{proof}
From Lemma \ref{Lipschitz bound} we have that $H_{T_{p_0, r_0}\Omega}(p, r, T_{p_0, r_0}u) \le C(n, \Lambda)r^{n+1}$. Therefore, recalling our expansion for $N_1'(r)$ in Lemma \ref{N neg term bound 3},
\begin{align*}
& \int^S_{s} \frac{2}{r H_{T_{p_0, r_0}\Omega}(p, r, T_{p_0, r_0}u)} \left( \int_{\partial B_r(p)} |\nabla T_{p_0, r_0} u \cdot (y-p) - \lambda(p, r, T_{p_0, r_0}u) (T_{p_0, r_0}u(y)-T_{p_0, r_0}u(p))|^2d\sigma(y) \right) dr \\
& \qquad \ge \int^S_{s} \frac{2}{C} \int_{\partial B_r(p)} \frac{|\nabla T_{p_0, r_0}u \cdot (y-p) - \lambda(p, |y-p|, T_{p_0, r_0}u) (T_{p_0, r_0}u(y)-T_{p_0, r_0}u(p))|^2}{|y-p|^{n+2}}d\sigma(y)dr\\
& \qquad = \frac{2}{C} \int_{A_{s, S}(p)} \frac{|\nabla T_{p_0, r_0}u \cdot (y-p) - \lambda(p, |y-p|, T_{p_0, r_0}u) (T_{p_0, r_0}u(y)-T_{p_0, r_0}u(p))|^2}{|y-p|^{n+2}}dy
\end{align*}
where $\lambda(p, r, T_{p_0, r_0}u) = \frac{\int_{\partial B_{r}(p)} (T_{p_0, r_0}u(y) - T_{p_0, r_0}u(p))\nabla T_{p_0, r_0}u \cdot (y-p) d\sigma(y)}{H_{T_{p_0, r_0}\Omega}(p, r, T_{p_0, r_0}u)}$. Recalling Lemma \ref{N neg term bound 3} completes the proof.
\end{proof}
\section{Frequency coefficient behaviour}
Now, we investigate the behavior of the frequency coefficient, $\lambda(p, r, u)$. In order to connect the drop in the Almgren frequency to the Jones $\beta$-numbers at points $p \in \Omega \cap B_{\frac{1}{32}}(0)$ and scales $r$ such that $\partial B_r(p) \cap \Omega^c \not = \emptyset$, we must first obtain uniform bounds on the frequency coefficient (Lemma \ref{fudge II}). This requires a compactness result, Lemma \ref{lambda convergence}.
\begin{rmk}\label{lambda invariances}
Note that the quantity,
\begin{align*}
\lambda(p, r, u) = \frac{\int_{\partial B_{r}(p)} (u(y) - u(p))\nabla u \cdot (y-p) d\sigma(y)}{H_{\Omega}(p, r, u)},
\end{align*}
is invariant in the following sense. For any real numbers $a \not= 0$, $R > 0$, and $b \in \mathbb{R}$, and any vector $p' \in \mathbb{R}^n$, we have $\lambda(p, r, u) = \lambda(\frac{p-p'}{R}, \frac{r}{R}, v)$, where $v(x) = au(Rx + p') + b.$
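Indeed (a sketch, under the convention, consistent with its use elsewhere, that $H_{\Omega}(p, r, u) = \int_{\partial B_{r}(p)} (u(y) - u(p))^2 d\sigma(y)$): since $\nabla v(x) = aR\, \nabla u(Rx + p')$ and $v(\frac{p - p'}{R}) = au(p) + b$, the substitution $y = Rx + p'$, for which $d\sigma(x) = R^{-(n-1)}d\sigma(y)$, gives
\begin{align*}
\int_{\partial B_{\frac{r}{R}}(\frac{p-p'}{R})} \left(v(x) - v(\tfrac{p-p'}{R})\right)\nabla v(x)\cdot \left(x - \tfrac{p-p'}{R}\right)d\sigma(x) & = a^2 R^{-(n-1)}\int_{\partial B_r(p)}(u(y)-u(p))\,\nabla u(y)\cdot(y-p)\, d\sigma(y),\\
H\left(\tfrac{p-p'}{R}, \tfrac{r}{R}, v\right) & = a^2 R^{-(n-1)}\, H_{\Omega}(p, r, u).
\end{align*}
Both the numerator and the denominator of $\lambda$ scale by the same factor $a^2R^{-(n-1)}$, so the ratio is unchanged; the same computation applies with the domains of integration intersected with $\Omega$ and its rescaling.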
\end{rmk}
\begin{lem}\label{fudge}
Let $u \in \mathcal{A}(n, \Lambda)$, $p_0 \in B_{\frac{1}{32}}(0) \cap \overline{\Omega}$ and $0< r_0< \frac{1}{256}$. Suppose that $T_{p_0, r_0}u$ is $(0, 0)$-symmetric in $B_8(0)$ and satisfies $N_{T_{p_0, r_0}\Omega}(0, 2, T_{p_0, r_0}u) \le C(n, \Lambda).$ Then, there exists a constant $C(n, \Lambda)$ such that for all $y \in \overline{T_{p_0, r_0}\partial \Omega} \cap B_1(0)$ and all $r \in [2, 7],$
\begin{align*}
|\lambda(y, r, T_{p_0, r_0}u)| \le C(n, \Lambda).
\end{align*}
\end{lem}
\begin{proof}
By Remark \ref{lambda invariances} and the homogeneity of $T_{p_0, r_0}u$, we may reduce by dilation to the case $B_r(y) \subset B_{\frac{1}{4}}(0)$. Furthermore, by Remark \ref{lambda invariances} we may reduce to considering $\lambda(0, 1, T_{y, r}T_{p_0, r_0}u)$. By Lemma \ref{Lipschitz bound} applied to $T_{y, r}T_{p_0, r_0}u$, we see that $T_{y, r}T_{p_0, r_0}u$ is Lipschitz with Lipschitz constant bounded by $C(n, \Lambda).$ Since $H(0, 1, T_{y, r}T_{p_0, r_0}u) = 1$ by the normalization of the rescaling, it follows that $|\lambda(0, 1, T_{y, r}T_{p_0, r_0}u)| \le C(n, \Lambda)$.
\end{proof}
\begin{lem} \label{unif minkowski fudge balls}
Let $u \in \mathcal{A}(n, \Lambda)$, $p_0 \in B_{\frac{1}{32}}(0) \cap \overline{\Omega}$ and $0< r_0< \frac{1}{256}$. Suppose that $T_{p_0, r_0}u$ is $(0, 0)$-symmetric in $B_8(0)$ and satisfies $N_{T_{p_0, r_0}\Omega}(0, 2, T_{p_0, r_0}u) \le C(n, \Lambda).$ Then, for all $y \in \overline{T_{p_0, r_0}\partial \Omega} \cap B_1(0)$ and all $r \in [2, 7],$
\begin{align*}
\mathcal{H}^{n-1}(\partial B_r(y) \cap B_{\rho}(T_{p_0, r_0}\partial \Omega)) \le C(\rho, \Lambda)
\end{align*}
and $C(\rho, \Lambda) \rightarrow 0$ as $\rho \rightarrow 0$.
\end{lem}
We defer this proof to the Appendix.
In order to prove a quantitative rigidity result for Lemma \ref{fudge} we need the following result.
\begin{lem}\label{lambda convergence}
Let $v_i \in \mathcal{A}(n, \Lambda)$, $x_i \in B_{1/4}(0)$, $0< r_i \le 1/32$, and $y_i \in \overline{B_{r_i}(x_i)} \cap \overline{\Omega_i}$. Let $\rho_i \in [2r_i, 7r_i]$. If $v_i$ is $(0, 2^{-i})$-symmetric in $B_{8r_i}(x_i)$, then we may extract a subsequence such that,
\begin{align*}
T_{x_i, 8r_i}v_i \rightarrow v_{\infty} \qquad T_{x_i, 8r_i}y_i \rightarrow y \qquad \frac{\rho_i}{r_i} \rightarrow \rho,
\end{align*}
and
\begin{align*}
\lambda(y_i, \rho_i, v_i) = \lambda(T_{x_i, 8r_i}y_i, \frac{\rho_i}{r_i}, T_{x_i, 8r_i}v_i) \rightarrow \lambda(y, \rho, v_{\infty}).
\end{align*}
\end{lem}
We note that all but the last convergence result are already established by Lemma \ref{compactness} and compactness, respectively. Furthermore, by the modes of convergence of Lemma \ref{compactness}, $H(T_{x_i, 8r_i}y_i, \frac{\rho_i}{r_i}, T_{x_i, 8r_i}v_i) \rightarrow H(y, \rho, v_{\infty})$. Therefore, we only consider the numerator, $$\int_{\partial B_{\frac{\rho_i}{r_i}}(T_{x_i, 8r_i}y_i)} (T_{x_i, 8r_i}v_i(z) - T_{x_i, 8r_i}v_i(T_{x_i, 8r_i}y_i))\nabla T_{x_i, 8r_i}v_i(z) \cdot (z-T_{x_i, 8r_i}y_i) d\sigma(z).$$ We begin with an auxiliary remark.
\begin{rmk}\label{mink bounds haus}
For any $E \subset \mathbb{R}^n$, we shall use the notation,
\begin{align*}
P(E, \epsilon) & = \max \{k : \text{ there are disjoint balls } B_{\epsilon}(x_i), i=1, ..., k, x_i \in E\}.
\end{align*}
Note that if $\{B_{\epsilon}(x_i)\}$ is a maximal disjoint collection of balls with centers $x_i \in E$, then $B_{\epsilon}(E) \subset \bigcup_{i=1}^{P(E, \epsilon)}B_{3\epsilon}(x_i)$ and $Vol\left(\bigcup_{i=1}^{P(E, \epsilon)}B_{\epsilon}(x_i)\right) \le Vol (B_{\epsilon}(E))$.
Therefore,
\begin{align*}
P(E, \epsilon) \omega_{n} \epsilon^n \le Vol(B_{\epsilon}(E)),
\end{align*}
and therefore, if $\mathcal{M}^{*, n-1}(E) = 0$, then $\limsup_{\epsilon \rightarrow 0} \frac{P(E, \epsilon) \omega_{n} \epsilon^n}{\epsilon} = 0$, as well. See \cite{Mattila95} Chapter 5 for further details.
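In particular (a sketch, combining the covering above with the elementary fact that spheres are Ahlfors $(n-1)$-regular, so that $\mathcal{H}^{n-1}(\partial B_s(y) \cap B_{3\epsilon}(x)) \le C(n)\epsilon^{n-1}$ for every $s>0$ and every $x, y \in \mathbb{R}^n$), for any sphere $\partial B_s(y)$ we have
\begin{align*}
\mathcal{H}^{n-1}\left(\partial B_s(y)\cap B_{\epsilon}(E)\right) \le \sum_{i=1}^{P(E,\epsilon)}\mathcal{H}^{n-1}\left(\partial B_s(y)\cap B_{3\epsilon}(x_i)\right) \le C(n)\, P(E,\epsilon)\,\epsilon^{n-1}.
\end{align*}
Thus, if $\mathcal{M}^{*, n-1}(E) = 0$, then the left-hand side tends to $0$ as $\epsilon \rightarrow 0$, uniformly in $s$ and $y$.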
\end{rmk}
\subsection{Proof of Lemma \ref{lambda convergence}}
\begin{proof}
We now argue that, under the assumptions of Lemma \ref{lambda convergence}, there exists a subsequence such that $\lim_{i \rightarrow \infty} \lambda(y'_i, \tilde r_i, T_{x_i, 8r_i}v_i) = \lambda(y_{\infty}, \tilde r, v_{\infty}),$ where, for brevity, we write $y'_i = T_{x_i, 8r_i}y_i$, $\tilde r_i = \frac{\rho_i}{r_i}$, $y_{\infty} = y$, and $\tilde r = \rho$. Note that Lemma \ref{compactness} gives that $H(y'_i, \tilde r_i, T_{x_i, 8r_i}v_i) \rightarrow H(y_{\infty}, \tilde r, v_{\infty})$. Therefore, we only consider the numerator.
Using Lemma \ref{unif minkowski fudge balls}, we argue as in Lemma \ref{strong convergence}. That is, for admissible locations and scales, $\mathcal{H}^{n-1}(B_{r}(\{ v_{\infty} = 0\}) \cap \partial B_{r'}(y)) \rightarrow 0$ as $r \rightarrow 0$. Thus, for any $\theta>0$ we can find an $r(\theta) > 0$ such that for all $0< r< r(\theta)$,
$$\mathcal{H}^{n-1}(B_{r}( \{ v_{\infty} = 0\} ) \cap \partial B_{r'}(y)) \le \theta.$$
Now, recall that the $T_{x_i, 8r_i}v_i$ are harmonic away from $T_{x_i, 8r_i}\partial \Omega^{\pm}_i$; therefore, the $W^{1, 2}$ convergence in $B_R(0) \setminus B_r(\{v_{\infty} = 0 \})$ upgrades to $C^\infty$ convergence in $B_R(0) \setminus B_r(\partial \Omega^{\pm}_{\infty})$. Recall also that the $T_{x_i, 8r_i}v_i$ are uniformly Lipschitz in $B_8(0)$ with Lipschitz constant $C(\alpha, M_0, \Gamma)$ by Lemma \ref{Lipschitz bound}. We estimate:
\begin{align*}
&\limsup_{i \rightarrow \infty} \int_{\partial B_{\frac{\rho_i}{r_i}}(T_{x_i, 8r_i}y_i)} (T_{x_i, 8r_i}v_i(z) - T_{x_i, 8r_i}v_i(y))\nabla T_{x_i, 8r_i}v_i(z) \cdot (z-y) d\sigma(z)\\
& \le \limsup_{i \rightarrow \infty} \int_{\partial B_{\frac{\rho_i}{r_i}}(T_{x_i, 8r_i}y_i) \cap B_{r(\theta)}(\partial \Omega^{\pm}_{\infty})} (T_{x_i, 8r_i}v_i(z) - T_{x_i, 8r_i}v_i(y))\nabla T_{x_i, 8r_i}v_i(z) \cdot (z-y) d\sigma(z)\\
& + \limsup_{i \rightarrow \infty} \int_{\partial B_{\frac{\rho_i}{r_i}}(T_{x_i, 8r_i}y_i) \setminus B_{r(\theta)}(\partial \Omega^{\pm}_{\infty})} (T_{x_i, 8r_i}v_i(z) - T_{x_i, 8r_i}v_i(y))\nabla T_{x_i, 8r_i}v_i(z) \cdot (z-y) d\sigma(z)\\
& \le \limsup_{i \rightarrow \infty} \int_{\partial B_{\frac{\rho_i}{r_i}}(T_{x_i, 8r_i}y_i) \cap B_{r(\theta)}(\partial \Omega^{\pm}_{\infty})} C(\alpha, M_0, \Gamma) d\sigma(z)\\
& + \limsup_{i \rightarrow \infty} \int_{\partial B_{\frac{\rho_i}{r_i}}(T_{x_i, 8r_i}y_i) \setminus B_{r(\theta)}(\partial \Omega^{\pm}_{\infty})} (T_{x_i, 8r_i}v_i(z) - T_{x_i, 8r_i}v_i(y))\nabla T_{x_i, 8r_i}v_{i}(z) \cdot (z-y) d\sigma(z)\\
& \le C(\alpha, M_0, \Gamma)\theta + \int_{\partial B_{\tilde r}(y_{\infty}) \setminus B_{r(\theta)}(\partial \Omega^{\pm}_{\infty})} (v_{\infty}(z) - v_{\infty}(y))\nabla v_{\infty}(z) \cdot (z-y) d\sigma(z)\\
\end{align*}
Furthermore, by the same reasoning,
$$
\int_{\partial B_{\tilde r}(y_{\infty}) \cap B_{r(\theta)}(\partial \Omega^{\pm}_{\infty})} (v_{\infty}(z) - v_{\infty}(y))\nabla v_{\infty}(z) \cdot (z-y) d\sigma(z) \ge -C(\alpha, M_0, \Gamma)\theta,
$$
and so we have,
\begin{align*}
\int_{\partial B_{\tilde r}(y_{\infty}) \setminus B_{r(\theta)}(\partial \Omega^{\pm}_{\infty})} (v_{\infty}(z) - v_{\infty}(y))\nabla v_{\infty}(z) \cdot (z-y) d\sigma(z) \\
\le C(\alpha, M_0, \Gamma) \theta + \int_{\partial B_{\tilde r}(y_{\infty})} (v_{\infty}(z) - v_{\infty}(y))\nabla v_{\infty}(z) \cdot (z-y) d\sigma(z),
\end{align*}
Therefore,
\begin{align*}
\limsup_{i \rightarrow \infty} \int_{\partial B_{\frac{\rho_i}{r_i}}(T_{x_i, 8r_i}y_i) } (T_{x_i, 8r_i}v_i(z) - T_{x_i, 8r_i}v_i(y))\nabla T_{x_i, 8r_i}v_i(z) \cdot (z-y) d\sigma(z)\\
\le 2C(\alpha, M_0, \Gamma)\theta + \int_{\partial B_{\tilde r}(y_{\infty})} (v_{\infty}(z) - v_{\infty}(y))\nabla v_{\infty}(z) \cdot (z-y) d\sigma(z)\\
\end{align*}
Letting $\theta \rightarrow 0$,
\begin{align*}
\limsup_{i \rightarrow \infty} \int_{\partial B_{\frac{\rho_i}{r_i}}(T_{x_i, 8r_i}y_i)} (T_{x_i, 8r_i}v_i(z) - T_{x_i, 8r_i}v_i(y))\nabla T_{x_i, 8r_i}v_i(z) \cdot (z-y) d\sigma(z)\\
\le \int_{\partial B_{\tilde r}(y_{\infty})} (v_{\infty}(z) - v_{\infty}(y))\nabla v_{\infty}(z) \cdot (z-y) d\sigma(z).
\end{align*}
The same argument, using the lower bound $-C(\alpha, M_0, \Gamma)\theta$ on the near-boundary contribution, gives the corresponding lower bound,
\begin{align*}
\liminf_{i \rightarrow \infty} \int_{\partial B_{\frac{\rho_i}{r_i}}(T_{x_i, 8r_i}y_i)} (T_{x_i, 8r_i}v_i(z) - T_{x_i, 8r_i}v_i(y))\nabla T_{x_i, 8r_i}v_i(z) \cdot (z-y) d\sigma(z)\\
\ge \int_{\partial B_{\tilde r}(y_{\infty})} (v_{\infty}(z) - v_{\infty}(y))\nabla v_{\infty}(z) \cdot (z-y) d\sigma(z).
\end{align*}
Altogether, then, we have that
\begin{align*}
\lim_{i \rightarrow \infty} \int_{\partial B_{\frac{\rho_i}{r_i}}(T_{x_i, 8r_i}y_i)} (T_{x_i, 8r_i}v_i(z) - T_{x_i, 8r_i}v_i(y))\nabla T_{x_i, 8r_i}v_i(z) \cdot (z-y) d\sigma(z) \\
= \int_{\partial B_{\tilde r}(y_{\infty})} (v_{\infty}(z) - v_{\infty}(y))\nabla v_{\infty}(z) \cdot (z-y) d\sigma(z)\\
\end{align*}
Thus, $\lim_{i \rightarrow \infty} \lambda(y'_i, \tilde r_i, T_{x_i, 8r_i}v_i) = \lambda(y_{\infty}, \tilde r, v_{\infty}).$
\end{proof}
\begin{lem}\label{fudge II}
Let $u \in \mathcal{A}(n, \Lambda)$ and $p \in B_{\frac{1}{16}}(0)$. There exists a constant, $0< \delta(n, \Lambda)$, such that for any $0< r\le \frac{1}{256}$, if $u$ is $(0, \delta)$-symmetric in $B_{8r}(p)$, then for all $y \in B_r(p)$ and every $\rho \in [2r, 7r]$,
\begin{align*}
|\lambda(y, \rho, u)|\le 2C,
\end{align*}
where $C = C(n, \Lambda)$ is the constant from Lemma \ref{fudge}, applied with the frequency bound $C(n, \Lambda)$ from Lemma \ref{N bound lem}.
\end{lem}
\begin{proof}
We argue by contradiction. Suppose that there is a sequence of $u_i \in \mathcal{A}(n, \Lambda)$, points $p_i \in B_{\frac{1}{16}}(0)$, and radii $0< r_i \le \frac{1}{256}$, such that $u_i$ is $(0, 2^{-i})$-symmetric in $B_{8r_i}(p_i)$, but for which there exist points $y_i \in B_{r_i}(p_i)$ and radii $r_i' \in [2r_i, 7r_i]$ for which
\begin{align*}
|\lambda(y_i, r_i', u_i)|\ge 2C
\end{align*}
We rescale to $T_{p_i, 8r_i}u_i$. By Lemmata \ref{compactness} and \ref{strong convergence}, we can extract a subsequence which converges to a limit function, $u_{\infty}$, which is $0$-symmetric. By Lemma \ref{N bound lem}, $u_{\infty}$ satisfies $N_{\Omega_{\infty}}(0, 1, u_{\infty}) \le C(n, \Lambda)$. Therefore, by Lemma \ref{fudge} there is a constant, $C$, such that for all $y \in B_{\frac{1}{8}}(0)$ and all $r \in [\frac{1}{8}, 1]$
\begin{align*}
|\lambda(y, r, u_{\infty})| \le C.
\end{align*}
Note that by assumption, for each $i= 1, 2, ...$ there exists a point, $y_i' \in B_{\frac{1}{8}}(0)$ and a radius, $\tilde r_i \in [\frac{2}{8}, \frac{7}{8}]$ such that,
\begin{align*}
|\lambda(y'_i, \tilde r_i, T_{p_i, 8r_i}u_i) |\ge 2C.
\end{align*}
Note that because $\overline{B_{\frac{1}{8}}(0)} \times [\frac{2}{8}, \frac{7}{8}]$ is compact, we may assume that $y_i' \rightarrow y_{\infty}$ and $\tilde r_i \rightarrow \tilde r.$ In order to obtain a contradiction, we now argue that there exists a subsequence such that $\lim_{j \rightarrow \infty} \lambda(y'_j, \tilde r_j, T_{p_j, 8r_j}u_j) = \lambda(y_{\infty}, \tilde r, u_{\infty}).$ However, this is exactly Lemma \ref{lambda convergence}. Therefore, we have the desired contradiction.
\end{proof}
\section{Quantitative Rigidity}\label{S: quant rigidity}
In the interior setting, $N_{\Omega}(p, r, u)$ is monotonic in $r$, and it is constant only if $u$ is a harmonic polynomial which is homogeneous about the point $p$. A quantitative rigidity result states that if $N_{\Omega}(p, r, u)$ is \textit{almost} constant ($N_{\Omega}(p, 1, u) - N_{\Omega}(p, r, u) \le \delta(\epsilon) $), then $u$ is \textit{almost} a homogeneous harmonic polynomial ($||T_{p, 1}u - P||_{L^2(B_1(0))} \le \sqrt{\epsilon}$). This result relies upon the monotonicity of $N_{\Omega}(p, r, u)$. Indeed, if $N_{\Omega}(p, r, u)$ is not monotonic, then $N_{\Omega}(p, 1, u) = N_{\Omega}(p, r, u)$ does not imply that the Almgren frequency is constant on $[r, 1]$.
Therefore, in order to obtain the necessary quantitative rigidity, we use the following quantity, $E(p, r, u).$
\begin{definition}
For any $u \in \mathcal{A}(n, \Lambda)$, point, $p \in \Omega,$ and radius, $0< r$, we define,
\begin{align*}
E(p, r, u) = \sup_{\tau \in [0, r]}N(p, \tau, u).
\end{align*}
By continuity, this \textit{supremum} is a \textit{maximum}.
\end{definition}
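We record two elementary consequences of this definition: $E(p, r, u) \ge N(p, r, u)$, and $E(p, \cdot, u)$ is nondecreasing, since for $0 < r_1 \le r_2$,
\begin{align*}
E(p, r_1, u) = \sup_{\tau \in [0, r_1]}N(p, \tau, u) \le \sup_{\tau \in [0, r_2]}N(p, \tau, u) = E(p, r_2, u).
\end{align*}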
We now establish the limit case, followed shortly by the quantitative rigidity result.
\begin{lem}\label{S: N(0)=E }
Let $u \in \mathcal{A}(n, \Lambda).$ Let $p \in \overline{\Omega} \cap B_{\frac{1}{16}}(0)$ and $0< r \le \frac{1}{256}.$ If $y \in \overline{T_{p, r}\Omega} \cap B_1(0)$ and,
\begin{align*}
N_{T_{p, r}\Omega}(y, 0, T_{p, r}u) = \sup_{\tau \in [0, 2]} N_{T_{p, r}\Omega}(y, \tau, T_{p, r}u),
\end{align*}
then if $y \in T_{p, r}\Omega$, $T_{y, r}u$ is $(n-1, 0)$-symmetric, while if $y \in \partial (T_{p, r}\Omega)$, then $T_{y, r}u$ is $0$-symmetric.
\end{lem}
\begin{proof}
If $y \in T_{p, r}\Omega$, then there is a radius $\delta>0$ such that $B_{\delta}(y) \subset T_{p, r}\Omega$. For $0 <\tau <\delta$, $N_{T_{p, r}\Omega}(y, \tau, T_{p, r}u)$ is an increasing function of $\tau$, which, under the assumptions, must be constant. Therefore, $T_{p, r}u$ is a homogeneous harmonic function about $y$. By the proof of Lemma \ref{(0, 0)-symmetric functions}, then, $T_{y, r}u$ is $(n-1, 0)$-symmetric. If $y \in \partial (T_{p, r}\Omega)$, then the proof of Lemma \ref{(0, 0)-symmetric functions} shows that $T_{y, r}u$ is $0$-symmetric.
\end{proof}
\begin{lem} (Quantitative Rigidity)\label{quant rigidity}
Let $u \in \mathcal{A}(n, \Lambda)$, as above. Let $p \in B_{\frac{1}{16}}(0) \cap \overline \Omega$ and $0< r \le \frac{1}{512}$. For every $\delta > 0$, there is a $\gamma_0= \gamma_0(n, \Lambda, \delta) > 0$ such that for any $0 < \gamma \le \gamma_0$, if
$$
|E(0, 2, T_{p, r}u) - N_{T_{p, r}\Omega}(0, \gamma, T_{p, r}u)| \le \gamma
$$
then $T_{p, r}u$ is $(0, \delta, 1, 0)$-symmetric.
\end{lem}
\begin{proof} We argue by contradiction. Assume that there exists a $\delta >0$ such that there is a sequence of functions, $u_i \in \mathcal{A}(n, \Lambda)$, and points, $p_i \in B_{\frac{1}{16}}(0) \cap \overline \Omega_i$, radii $0< r_i < \frac{1}{512}$, such that,
$$
|E(0, 2, T_{p_i, r_i}u_i) - N_{T_{p_i, r_i}\Omega_i}(0, 2^{-i}, T_{p_i, r_i}u_i)| \le 2^{-i}
$$
but that no $T_{p_i, r_i}u_i$ is $(0, \delta, 1, 0)$-symmetric.
By Lemma \ref{strong convergence}, we have that there exists a subsequence such that $T_{p_j, r_j}u_j$ converges strongly in $W^{1, 2}(B_1(0))$ to a function, $u_{\infty}$. By Corollary \ref{C: limit harmonic}, we know that $u_{\infty}$ is harmonic in a convex domain $\Omega_{\infty}$. Recalling both the statement and the proof of Corollary \ref{N continuity} applied to $T_{p_j, 2r_j}u_j$, we have that for any $0 \le r \le 2$, $\lim_{j \rightarrow \infty} N_{T_{p_j, r_j}\Omega_j}(0, r, T_{p_j, r_j}u_j) = N_{\Omega_{\infty}}(0, r, u_{\infty})$. Therefore, we have that,
$$
E(0, 2, u_{\infty}) - N_{\Omega_{\infty}}(0, 0, u_{\infty}) = 0
$$
By Lemma \ref{S: N(0)=E }, this implies that $u_{\infty}$ is $(0, 0)$-symmetric. This contradicts our assumption that no $T_{p_i, r_i}u_i$ is $(0, \delta, 1, 0)$-symmetric.
\end{proof}
\section{Decay}
In order to connect the drop across scales in the Almgren frequency function to the Jones $\beta$-numbers at points $p \in \Omega \cap B_{\frac{1}{16}}(0)$ at macroscopic scales for which $\partial B_{r}(p) \cap \Omega^c \not = \emptyset$, we must obtain proper decay of the function $u.$ Broadly speaking, this section proves that if $u$ is very close to a $0$-symmetric function, but far from all $(n-1)$-symmetric functions (which are $1$-homogeneous), then it must decay faster than a $1$-homogeneous function. This extra decay is crucial to obtaining uniform macroscopic estimates.
\begin{lem}\label{wiggle room}
Let $u \in \mathcal{A}(n, \Lambda)$ and $0\le k \le n-2$, $p \in B_{\frac{1}{16}}(0) \cap \overline{\Omega}$, and $0< r < \frac{1}{512}$. For any $0< \epsilon,$ there are constants, $0< m=m(\epsilon, \Lambda)$ and $0< \delta_0(n, \Lambda)$, such that if $u$ is not $(k+1, \epsilon, 8r, p)$-symmetric but $u$ is $(0, \delta, 8r, p)$-symmetric for any $0< \delta \le \delta_0$, then for all $\rho \in [r, 8r]$,
\begin{align*}
N_{\Omega}(p, \rho, u) > 1+ m.
\end{align*}
\end{lem}
\begin{proof}
We argue by contradiction. Suppose that for a given $\epsilon > 0$, no such $m > 0$ and $\delta_0 > 0$ exist. That is, suppose there is a sequence of $u_i, p_i,$ and $r_i,$ such that $u_i$ is $(0, 2^{-i}, 8r_i, p_i)$-symmetric and each $u_i$ is not $(k+1, \epsilon, 8r_i, p_i)$-symmetric, but $N_{\Omega_i}(p_i, \rho_i, u_i) \le 1 + 2^{-i}$ for some $\rho_i \in [r_i, 8r_i]$.
We rescale. The functions $T_{p_i, 8r_i}u_i$ converge in the senses of Lemma \ref{compactness} and Lemma \ref{strong convergence} to a function $u_{\infty},$ which is not $(k+1, \epsilon, 1, 0)$-symmetric. Similarly, we may assume that $\frac{\rho_i}{8r_i} \rightarrow \rho \in [1/8, 1].$ By the aforementioned convergence lemmata, we have that $u_{\infty}$ is $(0, 0, 1, 0)$-symmetric, not $(k+1, \epsilon, 1, 0)$-symmetric, and, by Lemma \ref{N continuity}, we have that $N_{\Omega_{\infty}}(0, \rho, u_{\infty}) \le 1.$
Note that there are two cases for $\lim_{i}\Omega_{i} = \Omega_{\infty}$: either $\Omega_{\infty} = \mathbb{R}^n$, or $\Omega_{\infty}$ is a convex domain with boundary. If $\Omega_{\infty}$ is a convex domain with boundary, then Lemma \ref{(0, 0)-symmetric functions} implies that $u_{\infty}$ is $(n-1, 0)$-symmetric. This contradicts the assumption that $u_{\infty}$ is not $(k+1, \epsilon, 1, 0)$-symmetric.
If $\Omega_{\infty} = \mathbb{R}^n$, then classical results imply that $u_{\infty}$ is a homogeneous harmonic polynomial. Since $u_{\infty}$ is not $(k+1, \epsilon, 1, 0)$-symmetric, it cannot be $(n-1)$-symmetric, and therefore cannot be linear. Thus, $N(0, 0, u_{\infty}) \ge 2$ and $N(0, \tau, u_{\infty})$ is constant in $\tau$. This contradicts the assumption that $N_{\Omega_{\infty}}(0, \rho, u_{\infty}) \le 1.$ Therefore, such constants as desired must exist.
\end{proof}
\begin{lem}\label{H non-linear growth}
Let $u \in \mathcal{A}(n, \Lambda)$, $0 \le k \le n-2$, $p \in B_{\frac{1}{16}}(0)$, and $0< r \le \frac{1}{512}.$ Let $0<\epsilon$ be fixed. There is a $0<\delta(\epsilon, \Lambda)$ such that if $u$ is $(0, \delta)$-symmetric, but not $(k+1, \epsilon)$-symmetric, in $B_{8r}(p)$, then there exists a constant $C= C(n, \Lambda, \epsilon)$ such that for all $\rho \in [r, 8r],$
\begin{equation}
\fint_{\partial B_{\rho}(p)}(T_{0, 1}u(x) - T_{0, 1}u(p))^2 d\sigma(x) \le C \rho^{2(1+\frac{m}{2})}
\end{equation}
\end{lem}
\begin{proof}
Let $\delta \le \delta_0$, where $\delta_0$ is as in Lemma \ref{wiggle room}. Therefore, we have that $N(p, \rho, u) > 1+ m$. Plugging this into Lemma \ref{L: H doubling-ish 2}, we have that, for any $S > \rho$,
\begin{align*}
H_{\Omega}(p, S, u) & \ge (\frac{S}{\rho})^{(n-1) + 2(N_{\Omega}(p, \rho, u) - 2C_1(n, \Lambda)(S-\rho))} H_{\Omega}(p, \rho, u)\\
& \ge (\frac{S}{\rho})^{(n-1) + 2(1 + m - 2C_1(n, \Lambda)S)} H_{\Omega}(p, \rho, u)\\
\end{align*}
Let $0<r_0(n, \Lambda, \epsilon)$ be small enough that $m - 2C_1(n, \Lambda)r_0 \ge \frac{m}{2}.$ Thus, if $r_0 \ge 8r \ge \rho \ge r$ we have,
\begin{align*}
C(Lip(T_{0, 1}u))r_0^2 & \ge \frac{1}{r_0^{n-1}}H(p, r_0, T_{0, 1}u)\\
\ge & \left(\frac{r_0}{\rho}\right)^{2(N(p, \rho, T_{0, 1}u) - Cr_0)}\frac{1}{\rho^{n-1}}H(p, \rho, T_{0, 1}u)\\
\ge & \left(\frac{r_0}{\rho}\right)^{2(1 + \frac{m}{2})}\frac{1}{\rho^{n-1}}H(p, \rho, T_{0, 1}u)\\
\end{align*}
Or,
\begin{align*}
C(n, \Lambda, \epsilon)\left(\rho \right)^{2(1 + \frac{m}{2})}& \ge \frac{1}{\rho^{n-1}}H(p, \rho, T_{0, 1}u).\\
\end{align*}
If, on the other hand, $\rho>r_0,$ we let $C = \frac{16C(n, \Lambda)^2}{r_0^{2(1+\frac{m}{2})}}$, where $C(n, \Lambda)$ is the uniform Lipschitz constant. Taking the maximum of these two constants gives the desired result.
\end{proof}
\begin{lem}\label{2nd order upper bound, approx}
Let $u \in \mathcal{A}(n, \Lambda)$ and $0 \le k \le n-2$. Let $p \in B_{\frac{1}{16}}(0) \cap \overline{\Omega},$ $0< r\le \frac{1}{512}$. Let $0< \epsilon$ be fixed. There is a $0< \delta_0(n, \Lambda, \epsilon)$ such that if $u$ is $(0, \delta)$-symmetric for $0<\delta \le \delta_0$ in $B_{8r}(p),$ but not $(k+1, \epsilon, 8r, p)$-symmetric, then there is a constant, $C= C(n, \Lambda, \epsilon)$ such that for all $\rho \in [r, 8r]$ and all $y \in B_{\rho}(p)$,
\begin{equation*}
|T_{0, 1}u(y)-T_{0, 1}u(p)| \le C \rho^{1+ m/2}.
\end{equation*}
\end{lem}
\begin{proof}
We argue by contradiction. Suppose that there exists a sequence of $u_i \in \mathcal{A}(n, \Lambda),$ with points $p_i \in B_{\frac{1}{16}}(0) \cap \overline{\Omega_i},$ and radii $0< r_i \le \frac{1}{512}$, such that $u_i$ is $(0, 2^{-i})$-symmetric in $B_{8r_i}(p_i)$ and not $(k+1, \epsilon, 8r_i, p_i)$-symmetric, but for which there exist $\rho_i \in [r_i, 8r_i]$ and $y_i \in \partial B_{\rho_i}(p_i)$ such that,
\begin{equation*}
|T_{0, 1}u_i(y_i)-T_{0, 1}u_i(p_i)| \ge i \rho_i^{1+ m/2}.
\end{equation*}
We consider $T_{p_i, r_i}u_i$. By Lemma \ref{compactness} and Lemma \ref{strong convergence}, we may extract a subsequence such that $T_{p_i, r_i}u_i \rightarrow u_{\infty}$ strongly in $W^{1, 2}_{loc}(\mathbb{R}^n)$ and in $C_{loc}(\mathbb{R}^n).$ Since the functions $T_{p_i, r_i}u_i$ are locally uniformly Lipschitz, the function $u_{\infty}$ is locally Lipschitz with the same Lipschitz constants. Further, $u_{\infty} = T_{0, 1}u_{\infty}$ is $0$-symmetric in $B_{8}(0)$ and is not $(k+1, \epsilon, 8, 0)$-symmetric. However, by assumption,
\begin{equation*}
\left(\int_{\partial B_{1}(0)} (T_{0, 1}u_i(xr_i + p_i) -T_{0,1}u_i(p_i))^2d\sigma(x)\right)^{\frac{1}{2}} |T_{p_i, r_i}u_i(\frac{y_i -p_i}{r_i})| \ge i \rho_i^{1+ m/2} .
\end{equation*}
Now, by Lemma \ref{H non-linear growth} for $i$ sufficiently large, we can bound
\begin{align*}
\left(\int_{\partial B_{1}(0)} (T_{0, 1}u_i(xr_i + p_i) -T_{0,1}u_i(p_i))^2d\sigma(x)\right)^{\frac{1}{2}} \le C \rho_i^{1 + m/2}.
\end{align*}
Therefore,
\begin{equation*}
|T_{p_i, r_i}u_i(\frac{y_i -p_i}{r_i})| \ge i .
\end{equation*}
Since $\frac{y_i -p_i}{r_i} \in \partial B_{\rho_i /r_i}(0)$ and $\rho_i /r_i \in [1, 8]$, this contradicts the fact that the $T_{p_i, r_i}u_i$ are locally uniformly Lipschitz. Thus, such a constant must exist.
\end{proof}
\section{Cone-splitting}
In Sections $11- 15$, we follow the proof techniques of Naber and Valtorta \cite{NaberValtorta17-1}. If we were restricting our focus to $\partial \Omega,$ their methods would go through almost verbatim. However, because we are interested in how the critical set approaches $\partial \Omega$, macroscopic almost monotonicity forces us to make several important modifications in Sections $12$ and $13$. We include Sections $14$ and $15$ for completeness.
In this section, we obtain the necessary geometric control on the set where the Almgren frequency has small drop. The prototypical example of a result like this is the following proposition. See \cite{HanLin_nodalsets} Theorem 4.1.3 for the proof of similar results.
\begin{prop}\label{P: classic cone splitting}
Let $P: \mathbb{R}^n \rightarrow \mathbb{R}$ be a $0$-symmetric function. Let $k \le n-2$. If $P$ is symmetric with respect to some $k$-dimensional subspace $V$ and $P$ is homogeneous with respect to some point $x \not\in V$, then $P$ is $(k + 1)$-symmetric with respect to $span\{x, V\}$.
\end{prop}
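To illustrate the statement with a toy example (not needed in the sequel): in $\mathbb{R}^3$, take $P(x) = x_3$, $V = span\{e_1\}$ (so $k = 1 = n-2$), and $x = e_2 \not\in V$. Then $P$ is $0$-symmetric, invariant in the direction $e_1$, and homogeneous about $e_2$, since
\begin{align*}
P\left(e_2 + t(y - e_2)\right) = t\, y_3 = t\, P(y) \qquad \text{for all } t > 0 \text{ and } y \in \mathbb{R}^3,
\end{align*}
and, as the proposition predicts, $P$ is $2$-symmetric with respect to $span\{e_2, V\} = span\{e_1, e_2\}$.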
We now prove a similar result for our almost-symmetric functions $u \in \mathcal{A}(n, \Lambda)$.
\begin{lem}(Geometric Control)\label{geometric control lem}
Let $\eta', \rho>0$ be fixed. Let $u \in \mathcal{A}(n, \Lambda)$, $p \in \overline \Omega \cap B_{\frac{1}{16}}(0)$, and $0 < r \le \frac{1}{512}.$ There exist an $0<\eta_0(n, \Lambda, \eta', \rho, \epsilon) \ll \rho$ and a $\beta(n, \Lambda, \eta', \rho, \epsilon) <1$ such that if $\eta \le \eta_0$ and
\begin{enumerate}
\item $E= \sup_{x \in B_1(0) \cap \overline{T_{p, r}\Omega}} E(x, 2, T_{p, r}u) \in [0, C(n, \Lambda)]$
\item There exist points $\{y_0, y_1, ... , y_k \} \subset B_1(0) \cap \overline{T_{p, r}\Omega}$ satisfying $y_i \not \in B_{\rho}(\langle y_0, ..., y_{i-1}, y_{i+1}, ... , y_k \rangle)$ and
$$
N_{T_{p, r}\Omega}(y_i, \eta \rho, T_{p, r}u) \ge E-\eta
$$
for all $i = 0, 1, ..., k$
\end{enumerate}
then, if we denote $\langle y_0, ..., y_{k}\rangle = L$, for all $q \in B_{\beta}(L) \cap B_1(0) \cap \overline{T_{p, r}\Omega}$
$$
N_{T_{p, r}\Omega}(q, \eta \rho, T_{p, r}u) \ge E - \eta'
$$
and
$$
S^k_{\epsilon, \eta_0}(T_{p, r}u) \cap B_1(0) \subset B_{\beta}(L) \cap B_1(0) \cap \overline{T_{p, r}\Omega}.
$$
\end{lem}
\begin{proof} We argue by contradiction. Suppose that the first claim fails. That is, fix the constants $\rho, \eta' >0$. Let $u_i \in \mathcal{A}(n, \Lambda)$, $p_i \in \overline \Omega_i \cap B_{\frac{1}{16}}(0)$, and $0 < r_i \le \frac{1}{512}.$ Note that by Lemma \ref{N bound lem}, we have that $E_i= \sup_{x \in B_1(0) \cap \overline{T_{p_i, r_i}\Omega_i}} E(x, 2, T_{p_i, r_i}u_i)\in [0, C(n, \Lambda)]$. Suppose that for each $i = 1, 2, ...$ we can find points $\{y_{i, j} \}_{j}$ satisfying (2), above, with $\eta <2^{-i}$ and a sequence $\beta_i \le 2^{-i}$ such that for each $i$, there exists a point $x_i \in B_{\beta_i}(L_i) \cap B_1(0) \cap \overline{T_{p_i, r_i}\Omega_{i}}$ for which $N_{T_{p_i, r_i}\Omega_i}(x_i, \eta \rho, T_{p_i, r_i}u_i) < E_i - \eta'$.
By Lemma \ref{compactness} and Lemma \ref{strong convergence} there exists a subsequence $u_j$ such that $T_{p_j, r_j}u_j$ converges to a limit function $u_{\infty}$. Further, we may assume that
$$E_j \rightarrow E, \quad y_{j, l} \rightarrow y_l \ (l = 0, \dots, k), \quad L_j \rightarrow L, \quad x_j \rightarrow x_{\infty} \in L \cap \overline{B_1(0)}.$$ Note that Corollary \ref{N continuity} gives that
$$
\sup_{p \in B_1(0) \cap \overline{\Omega_{\infty}} }N_{\Omega_{\infty}}(p, 2, u_{\infty}) \le E \qquad N_{\Omega_{\infty}}(x_{\infty}, 0, u_{\infty}) < E - \eta'
$$
and
$$
N_{\Omega_{\infty}}(y_l, 0, u_{\infty}) \ge E
$$
for all $l= 0, 1, ..., k$. Note that by Corollary \ref{C: limit harmonic}, $u_{\infty}$ is harmonic in $\Omega_{\infty}$. Note that by Lemma \ref{S: N(0)=E }, $u_{\infty}$ is homogeneous about the points $y_0, ..., y_k$. Proposition \ref{P: classic cone splitting} implies that $u_{\infty}$ is translation invariant along $L$ in $B_{1 +\delta}(0) \subset \bigcap_l B_2(y_l)$, for some $\delta >0$ depending upon the placement of the $y_l$. Since $x_{\infty} \in L$, this implies that $N_{\Omega_{\infty}}(x_{\infty}, 0, u_{\infty}) = E$. This contradicts $N_{\Omega_{\infty}}(x_{\infty}, 0, u_{\infty}) < E - \eta'$.
Now assume that the second claim fails. That is, fix $\beta >0$ and assume that there is a sequence of $u_i \in \mathcal{A}(n, \Lambda)$ with $\sup_{p \in B_1(0)} N_{\Omega_i}(p, 2, u_i) = E_i \in [0, C(n, \Lambda)]$, points $\{y_{i, j} \}_{j}$ satisfying (2), above, and a sequence of $\eta_i \rightarrow 0$ such that for each $i$ there exists a point $x_i \in S^k_{\epsilon, \eta_i}(u_i) \cap B_1(0) \setminus B_{\beta}(L_i).$
Again, we extract a convergent subsequence, as above. The function $u_{\infty}$ will be harmonic in $\Omega_{\infty}$ and $k$-symmetric in $B_{1+ \delta}(0)$, as above, and $x_i \rightarrow x \in \overline{B_1(0)} \setminus B_{\beta}(L).$ Note that by our definition of $S^k_{\epsilon, \eta_i}(u_i)$ and Lemma \ref{compactness}, $x \in S^k_{\epsilon/2}(u_{\infty}).$
Since $u_{\infty}$ is $k$-symmetric in $B_{1+ \delta}(0)$, every blow-up at a point in $B_1(0)$ will be $(k+1)$-symmetric. Thus, there must exist a radius $r$ for which $u_{\infty}$ is $(k+1, \epsilon/2, r, x)$-symmetric. This contradicts the assumption that $x \in S^k_{\epsilon/2}(u_{\infty}).$
\end{proof}
The above lemma gives us more than just good geometric control of the quantitative strata under the hypotheses. It gives a dichotomy: either we can find $(k+1)$ well-separated points, $y_{i,j}$, with very small drop in frequency, or we cannot. In the former case, the Almgren frequency has small drop on all of $S^k_{\epsilon, \eta}(u)$ and we also get good geometric control. In the latter case, even though we have no geometric control on $S^k_{\epsilon, \eta}(u)$, the set on which the Almgren frequency has small drop is close to a $(k-1)$-plane, which gives very good packing control on that part. We make this formal in the following corollary.
\begin{cor}\label{key dichotomy}
Let $\gamma, \rho, \eta' \in (0, 1)$. For all $u \in \mathcal{A}(n, \Lambda)$, $p \in \overline \Omega \cap B_{\frac{1}{16}}(0)$, and $0 < r \le \frac{1}{512},$ let $E = \sup_{x \in B_r(p) \cap \Omega}N_{\Omega}(x, 2r, u).$ There is an $\eta_0 \ll \rho$ so that the following holds: if $\eta \le \eta_0$, then one of the following possibilities must occur:
\begin{itemize}
\item[1.] $N_{\Omega}(x, \eta \rho r, u) \ge E - \eta'$ for all $x \in S^k_{\epsilon, \eta}(u) \cap B_r(p),$ and there is a $k$-dimensional affine plane, $L = L^k$, such that
$$
S^k_{\epsilon, \eta_0}(u) \cap B_r(p) \subset B_{\beta r}(L).
$$
\item[2.] There exists a $(k-1)-$dimensional affine plane, $L^{k-1}$, such that $$\{x \in \overline{\Omega}: N_{\Omega}(x, 2\eta r, u) \ge E-\eta \} \cap B_r(p) \subset B_{\rho r}(L^{k-1}).$$
\end{itemize}
\end{cor}
\begin{rmk}
In the latter case of the dichotomy, we know that all points of $B_r(p) \cap \overline{\Omega}$ outside $B_{\rho r}(L^{k-1})$ must have $N_{\Omega}(x, 2\eta r, u) < E-\eta.$ Since $N_{\Omega}(x, r, u)$ is almost monotonic and uniformly bounded, this can happen only finitely many times for each $p$.
\end{rmk}
\section{The Beta numbers}\label{S: beta numbers}
In this section, we relate the Jones $\beta$-numbers to the drop in Almgren frequency. Again, the challenge is to obtain uniform macroscopic estimates. See \cite{mccurdy18-1} for similar results for a free-boundary problem for harmonic measure. The main result of this section is the following lemma.
\begin{lem}\label{beta bound lem}
There exists a constant, $\delta_0 = \delta_0(n, \Lambda, \epsilon)>0$ such that for any $0< \delta \le \delta_0$, if $u \in \mathcal{A}(n, \Lambda)$, then for any $p \in B_{\frac{1}{16}}(0)$ and $0<r \le \frac{1}{512}$ if $u$ is $(0, \delta, 8r, p)-$symmetric, but not $(k+1, \epsilon, 8r, p)-$symmetric, then for any finite Borel measure, $\mu$,
\begin{align}\nonumber
\beta_{\mu, 2}^k(p, r)^2 \le & \frac{C(n, \Lambda, \epsilon)}{r^{k}}\int_{B_r(p)} \left( N(y, 8r, u) - N(y, r, u) + C_1(n, \Lambda)r \right) d\mu(y)\\
& + C(n, \Lambda, \epsilon) \frac{\mu(B_r(p))}{r^{k}}r^{m}
\end{align}
where $0<m$ is the constant defined in Lemma \ref{wiggle room}.
\end{lem}
We begin by noting that for any finite Borel measure, $\mu$, and any $B_r(p)$, we can define the $\mu$ center of mass, $X = \fint_{B_r(p)}xd\mu(x)$, and the (normalized) covariance matrix of the mass distribution in $B_r(p)$,
$$
\Sigma = \fint_{B_r(p)}(y - X)(y-X)^{\perp}d\mu(y).
$$
With this matrix, we may naturally define a symmetric, non-negative bilinear form,
$$
Q(v, w) = v^{\perp} \Sigma w = \fint_{B_r(p)}(v\cdot (y-X))(w \cdot (y - X))d\mu(y).
$$
Let $\vec v_1, ... ,\vec v_n$ be an orthonormal eigenbasis of $\Sigma$ and $\lambda_1 \ge ... \ge \lambda_n \ge 0$ the associated eigenvalues. These objects enjoy the following relationships,
$$
V_{\mu, 2}^k(p, r)= X + span\{\vec v_1, ..., \vec v_k \} , \quad \beta_{\mu, 2}^k(x, r)^2 = \frac{\mu(B_r(p))}{r^k}(\lambda_{k+1} + ... + \lambda_n).
$$
See \cite{Hochman15} Section 4.2.
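For the reader's convenience, here is the computation behind the second identity (a sketch, using the normalized form of $Q$ above). Writing $L = X + span\{\vec v_1, ..., \vec v_k \}$ and expanding $y - X$ in the orthonormal eigenbasis,
\begin{align*}
\int_{B_r(p)} dist(y, L)^2 d\mu(y) & = \int_{B_r(p)} \sum_{i = k+1}^{n}\left(\vec v_i \cdot (y - X)\right)^2 d\mu(y)\\
& = \mu(B_r(p)) \sum_{i = k+1}^{n} Q(\vec v_i, \vec v_i) = \mu(B_r(p))\left(\lambda_{k+1} + ... + \lambda_n\right),
\end{align*}
which is precisely the quantity appearing, after division by $r^{k}$, in the displayed formula for $\beta_{\mu, 2}^k(p, r)^2$.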
\begin{lem}\label{beta bound lem, part 1}
Let $u \in \mathcal{A}(n, \Lambda)$. Let $\mu$ be a finite Borel measure and $Q, \lambda_i, \vec v_i$ defined as above. For any $i$, any $z$ for which $\nabla T_{0, 1}u(z)$ is defined, and any scalar $c \in \mathbb{R}$,
\begin{align}\nonumber \label{quadratic form bound}
& \lambda_i\frac{1}{r^{n+2}}\int_{A_{3r, 4r}(p)}(\vec v_i \cdot \nabla T_{0, 1}u(z))^2dz \\
& \qquad \qquad \qquad \le 5^n \fint_{B_r(p)} \int_{A_{2r, 7r}(y)} \frac{|c(T_{0, 1}u(z)- T_{0,1}u(p)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2}{|z-y|^{n+2}}dz d\mu(y).
\end{align}
\end{lem}
\begin{proof}
Observe that by the definition of center of mass, $$\fint_{B_{r}(p)}\vec w \cdot (y-X)d\mu(y) = 0$$ for any $\vec w \in \mathbb{R}^n.$ Therefore,
\begin{align*}
\lambda_i(\vec v_i \cdot \nabla T_{0, 1}u(z)) = &~ Q(\vec v_i, \nabla T_{0, 1}u(z))\\
=& \fint_{B_r(p)}(\vec v_i \cdot (y-X))(\nabla T_{0, 1}u(z) \cdot (y - X))d\mu(y) \\
=& \fint_{B_r(p)}(\vec v_i \cdot (y-X))(\nabla T_{0, 1}u(z) \cdot (y - X))d\mu(y)\\
&+ \fint_{B_{r}(p)} c(T_{0, 1}u(z)- T_{0,1}u(p))(\vec v_i \cdot (y-X))d\mu(y)\\
= & \fint_{B_r(p)}(\vec v_i \cdot (y-X))(c(T_{0, 1}u(z)- T_{0,1}u(p)) - \nabla T_{0, 1}u(z) \cdot (X - z + z -y))d\mu(y) \\
= & \fint_{B_r(p)}(\vec v_i \cdot (y-X))(c(T_{0, 1}u(z)- T_{0,1}u(p)) - \nabla T_{0, 1}u(z) \cdot (z - y))d\mu(y) \\
\le & \lambda_i^{\frac{1}{2}} \left(\fint_{B_r(p)} |c(T_{0, 1}u(z)- T_{0,1}u(p)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2d\mu(y)\right)^{\frac{1}{2}},
\end{align*}
where the last step is Cauchy--Schwarz together with $\fint_{B_r(p)}(\vec v_i \cdot (y-X))^2d\mu(y) = Q(\vec v_i, \vec v_i) = \lambda_i$.
Squaring, dividing by $\lambda_i$ (the claim being trivial when $\lambda_i = 0$), and recalling that $A_{r, R}(p) = B_{R}(p) \setminus B_{r}(p),$ we calculate,
\begin{align*}
\lambda_i\frac{1}{r^{n+2}}\int_{A_{3r, 4r}(p)}(\vec v_i \cdot \nabla T_{0, 1}u(z))^2dz \le \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \\
\qquad \frac{1}{r^{n+2}} \int_{A_{3r, 4r}(p)} \fint_{B_r(p)} |c(T_{0, 1}u(z)- T_{0,1}u(p)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2d\mu(y)dz\\
\le 5^n \fint_{B_r(p)} \int_{A_{3r, 4r}(p)} \frac{|c(T_{0, 1}u(z)- T_{0,1}u(p)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2}{|z-y|^{n+2}}dz d\mu(y)\\
\le 5^n \fint_{B_r(p)} \int_{A_{2r, 7r}(y)} \frac{|c(T_{0, 1}u(z)- T_{0,1}u(p)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2}{|z-y|^{n+2}}dz d\mu(y)\\
\end{align*}
\end{proof}
\begin{lem}\label{fudge III}
Let $u \in \mathcal{A}(n, \Lambda)$ and $0 \le k \le n-2$. Let $p \in B_{\frac{1}{16}}(0),$ $0< r \le \frac{1}{512}$. Let $0< \epsilon$ be fixed. There is a $0< \delta_0(n, \Lambda, \epsilon)$ such that if $u$ is $(0, \delta)$-symmetric in $B_{8r}(p)$ for any $0<\delta \le \delta_0$ but not $(k+1, \epsilon, 8r, p)$-symmetric, then for any $y \in B_r(p),$
\begin{align*}
\int_{A_{2r, 7r}(y)} \frac{|\lambda(p, 7r, T_{0, 1}u)(T_{0, 1}u(z)- T_{0,1}u(p)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2}{|z-y|^{n+2}}dz \le \qquad \qquad \qquad & \\
4 \int_{A_{2r, 7r}(y)} \frac{|\lambda(y, |z-y|, T_{0, 1}u)(T_{0, 1}u(z)- T_{0,1}u(y)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2}{|z-y|^{n+2}}dz + Cr^{m}
\end{align*}
where $C = C(n, \Lambda).$
\end{lem}
\begin{proof}
First, we observe that,
\begin{align*}
\lambda(p, 7r, T_{0, 1}u) = & \lambda(y, |z-y|, T_{0, 1}u) + [\lambda(p, 7r, T_{0, 1}u) - \lambda(p, |z-y|, T_{0, 1}u)] \\
& + [\lambda(p, |z-y|, T_{0, 1}u) - \lambda(y, |z-y|, T_{0, 1}u)].
\end{align*}
Making $\delta_0$ small as in Lemma \ref{fudge II}, we have that, $$|\lambda(p, 7r, T_{0, 1}u) - \lambda(y, |z-y|, T_{0, 1}u)| \le 4C.$$
We now bound the integral. First, we replace the constant $\lambda(p, 7r, T_{0, 1}u)$ by $\lambda(y, |z-y|, T_{0, 1}u)$. Note that,
\begin{align*}
&|\lambda(p, 7r, T_{0, 1}u)(T_{0, 1}u(z)- T_{0,1}u(p)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2\\
&\quad \le (|(\lambda(p, 7r, T_{0, 1}u) - \lambda(y, |z-y|, T_{0, 1}u))(T_{0, 1}u(z)- T_{0,1}u(p))| \\
&\qquad +|(\lambda(y, |z-y|, T_{0, 1}u))(T_{0, 1}u(z)- T_{0,1}u(p)) - \nabla T_{0, 1}u(z) \cdot (z - y)|)^2 \\
&\quad \le 2|(\lambda(p, 7r, T_{0, 1}u) - \lambda(y, |z-y|, T_{0, 1}u))(T_{0, 1}u(z)- T_{0,1}u(p))|^2 \\
&\qquad +2|(\lambda(y, |z-y|, T_{0, 1}u))(T_{0, 1}u(z)- T_{0,1}u(p)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2.
\end{align*}
Therefore, we estimate,
\begin{align*}
& \int_{A_{2r, 7r}(y)} \frac{|\lambda(p, 7r, T_{0, 1}u)(T_{0, 1}u(z)- T_{0,1}u(p)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2}{|z-y|^{n+2}}dz \\
& \le \int_{A_{2r, 7r}(y)} \frac{2|(\lambda(p, 7r, T_{0, 1}u) - \lambda(y, |z-y|, T_{0, 1}u))(T_{0, 1}u(z)- T_{0,1}u(p))|^2}{|z-y|^{n+2}}dz\\
& \quad + \int_{A_{2r, 7r}(y)} \frac{2|\lambda(y, |z-y|, T_{0, 1}u)(T_{0, 1}u(z)- T_{0,1}u(p)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2}{|z-y|^{n+2}}dz\\
& \le \int_{A_{r, 8r}(p)} \frac{2|4C(T_{0, 1}u(z)- T_{0,1}u(p))|^2}{|z-y|^{n+2}}dz\\
& \quad + \int_{A_{2r, 7r}(y)} \frac{2|\lambda(y, |z-y|, T_{0, 1}u)(T_{0, 1}u(z)- T_{0,1}u(p)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2}{|z-y|^{n+2}}dz
\end{align*}
Now, by Lemma \ref{2nd order upper bound, approx}, for $0<\delta$ small enough, we have that for every $\rho \in [r, 8r]$ and all $z \in B_{\rho}(p)$,
$|T_{0, 1}u(z) - T_{0,1}u(p)| \le Cr^{1 + \frac{m}{2}}.$ Therefore,
\begin{align*}
\int_{A_{r, 8r}(p)} \frac{2|4C(T_{0, 1}u(z)- T_{0,1}u(p))|^2}{|z-y|^{n+2}}dz & \le \int_{A_{r, 8r}(p)} \frac{32C^2(r^{1 + \frac{m}{2}})^2}{|z-y|^{n+2}}dz \\
&\le \int_{A_{r, 8r}(p)} \frac{32C^2 r^{m}}{|z-y|^{n}}dz \\
&\le 7\omega_{n-1} 32C^2 r^{m}
\end{align*}
Secondly, we change from $(T_{0, 1}u(z)- T_{0,1}u(p))$ to $(T_{0, 1}u(z)- T_{0,1}u(y)).$ Note that,
\begin{align*}
&|\lambda(y, |z-y|, T_{0, 1}u)(T_{0, 1}u(z)- T_{0,1}u(p)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2\\
& \quad = |\lambda(y, |z-y|, T_{0, 1}u)(T_{0, 1}u(z)- T_{0, 1}u(y)+ T_{0, 1}u(y) - T_{0,1}u(p)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2\\
& \quad \le 2|\lambda(y, |z-y|, T_{0, 1}u)(T_{0, 1}u(y)- T_{0,1}u(p))|^2\\
& \qquad + 2|\lambda(y, |z-y|, T_{0, 1}u)(T_{0, 1}u(z)- T_{0,1}u(y)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2
\end{align*}
Therefore, we estimate,
\begin{align*}
& \int_{A_{2r, 7r}(y)} \frac{2|\lambda(y, |z-y|, T_{0, 1}u)(T_{0, 1}u(z)- T_{0,1}u(p)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2}{|z-y|^{n+2}}dz\\
& \le \int_{A_{2r, 7r}(y)} \frac{4|\lambda(y, |z-y|, T_{0, 1}u)(T_{0, 1}u(y)- T_{0,1}u(p))|^2}{|z-y|^{n+2}}dz \\
& \quad + \int_{A_{2r, 7r}(y)} \frac{4|\lambda(y, |z-y|, T_{0, 1}u)(T_{0, 1}u(z)- T_{0,1}u(y)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2}{|z-y|^{n+2}}dz\\
& \le \int_{A_{r, 8r}(p)} \frac{4|2C(T_{0, 1}u(y)- T_{0,1}u(p))|^2}{|z-y|^{n+2}}dz \\
& \quad + \int_{A_{r, 8r}(y)} \frac{4|\lambda(y, |z-y|, T_{0, 1}u)(T_{0, 1}u(z)- T_{0,1}u(y)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2}{|z-y|^{n+2}}dz
\end{align*}
Now, we must upper bound $\int_{A_{r, 8r}(p)} \frac{4|2C(T_{0, 1}u(y)- T_{0,1}u(p))|^2}{|z-y|^{n+2}}dz$. We do so by bounding $|T_{0, 1}u(y)- T_{0,1}u(p)|^2.$ By Lemma \ref{2nd order upper bound, approx}, for $0<\delta$ small enough, we have that for every $\rho \in [r, 8r]$ and all $z \in B_{\rho}(p),$
\begin{align*}
|(T_{0, 1}u(z) - T_{0,1}u(p))| \le Cr^{1 + \frac{m}{2}}.
\end{align*}
If $y \in \Omega^+$, then the Maximum Principle applied to the subharmonic function $|T_{0, 1}u(\cdot) - T_{0,1}u(p)|$ in $B_{r}(p) \cap \Omega$ implies that $|T_{0, 1}u(y) - T_{0,1}u(p)| \le Cr^{1 + \frac{m}{2}}.$ Therefore,
\begin{align*}
& \int_{A_{r, 8r}(p)} \frac{4|2C(T_{0, 1}u(y)- T_{0,1}u(p))|^2}{|z-y|^{n+2}}dz \\
&\le \int_{A_{r, 8r}(p)} \frac{16C^2(r^{1 + \frac{m}{2}})^2}{|z-y|^{n+2}}dz \\
&\le \int_{A_{r, 8r}(p)} \frac{16C^2 r^{m}}{|z-y|^{n}}dz \\
&\le 7\omega_{n-1} 16C^2 r^{m}
\end{align*}
Combining the above estimates gives the claimed bound.
\end{proof}
\begin{lem}\label{vector thing}
Let $u \in \mathcal{A}(n, \Lambda)$ and $0 \le k \le n-2$. Let $p \in B_{\frac{1}{16}}(0) \cap \overline{\Omega},$ $0< r \le \frac{1}{512}$, and let $0< \epsilon$ be fixed. There exist constants $\delta_0 = \delta_0(n, \Lambda, \epsilon)>0$ and $C(n, \Lambda, \epsilon)$ such that if $u$ is $(0, \delta, 8r, p)$-symmetric for some $0 < \delta \le \delta_0$, but not $(k+1, \epsilon, 8r, p)$-symmetric, then for any orthonormal vectors, $\vec v_1, ... , \vec v_{k+1},$
\begin{align*}
\frac{1}{C} \le \frac{1}{r^{n+2}} \int_{A_{3r, 4r}(p)} \sum_{i = 1}^{k+1}(\vec v_i \cdot DT_{0, 1}u(z))^2dz.
\end{align*}
\end{lem}
\begin{proof}
We argue by contradiction. Assume that there is a sequence of functions $u_i \in \mathcal{A}(n, \Lambda)$, points $p_i \in B_{\frac{1}{16}}(0) \cap \overline{\Omega_i},$ and radii $0< r_i \le \frac{1}{512}$ such that $u_i$ is $(0, 2^{-i}, 8r_i, p_i)$-symmetric, but not $(k+1, \epsilon, 8r_i, p_i)$-symmetric, and such that for each $i$ there exists an orthonormal collection of vectors, $\{\vec v_{ij} \}_{j=1}^{k+1}$, such that, rescaling $\tilde u_{i}(x) = u_i(r_ix +p_i)$,
$$
\int_{A_{3, 4}(0)} \sum_{j = 1}^{k+1}(\vec v_{ij} \cdot D\tilde u_{i}(z))^2dz \le 2^{-i}
$$
Again, we use Lemma \ref{better compactness lem} to extract a subsequence, still denoted $\tilde u_j$, which converges to a harmonic function, $u_{\infty}$. Similarly, $\{\vec v_{ij} \}$ converges to an orthonormal collection $\{\vec v_l\}_{l=1}^{k+1}$. Given the assumptions above, $u_{\infty}$ is also $0$-symmetric in $B_8(0)$ and $\nabla u_{\infty} \cdot \vec v_l =0$ for all $l = 1, ..., k+1.$ Thus, $u_{\infty}$ is $(k+1, 0)$-symmetric in $B_8(0)$. But this is our contradiction, since the $\tilde u_j$ were supposed to stay away from $(k+1)$-symmetric functions in $L^2(B_1(0))$.
\end{proof}
\subsection{The proof of Lemma \ref{beta bound lem}}
\begin{proof}
By Lemma \ref{vector thing} and properties of the $\beta$-numbers, we have, for $\{\vec v_i \}$ the orthonormal eigenbasis and $\lambda_i$ the associated eigenvalues of the quadratic form $Q$ above,
\begin{align*}
\beta_{\mu, 2}^k(p, r)^2 & \le \frac{\mu(B_r(p))}{r^{k}}n\lambda_{k+1}\\
& \le \frac{\mu(B_r(p))}{r^{k}}n C(n, \Lambda, \epsilon) \sum_{i=1}^{k+1} \frac{\lambda_{k+1}}{r^{n+2}} \int_{A_{3r, 4r}(p)}(\vec v_i \cdot DT_{0, 1}u(z))^2dz. \\
& \le \frac{\mu(B_r(p))}{r^{k}}n C(n, \Lambda, \epsilon) \sum_{i=1}^{k+1} \frac{\lambda_i}{r^{n+2}} \int_{A_{3r, 4r}(p)}(\vec v_i \cdot DT_{0, 1}u(z))^2dz. \\
\end{align*}
We now bound $\frac{\lambda_i}{r^{n+2}} \int_{A_{3r, 4r}(p)}(\vec v_i \cdot DT_{0, 1}u(z))^2dz$ using Lemma \ref{beta bound lem, part 1}, applied with $c = \lambda(p, 7r, T_{0, 1}u)$. By Equation \eqref{quadratic form bound},
\begin{align*}
& \lambda_i\frac{1}{r^{n+2}}\int_{A_{3r, 4r}(p)}(\vec v_i \cdot \nabla T_{0, 1}u(z))^2dz \le \\
& ~ 5^n \fint_{B_r(p)} \int_{A_{2r, 7r}(y)} \frac{|\lambda(p, 7r, T_{0, 1}u)(T_{0, 1}u(z)- T_{0,1}u(p)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2}{|z-y|^{n+2}}dz d\mu(y).
\end{align*}
By Lemma \ref{fudge II} and Lemma \ref{fudge III}, we have that for $0<\delta$ sufficiently small and all $y \in B_r(p)$ we can bound,
\begin{align*}
& \int_{A_{2r, 7r}(y)} \frac{|\lambda(p, 7r, T_{0, 1}u)(T_{0, 1}u(z)- T_{0,1}u(p)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2}{|z-y|^{n+2}}dz\\
&\le 4 \int_{A_{r, 8r}(y)} \frac{|\lambda(y, |z-y|, T_{0, 1}u)(T_{0, 1}u(z)- T_{0,1}u(y)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2}{|z-y|^{n+2}}dz + Cr^{m}
\end{align*}
Therefore, collecting constants we have that for $\delta$ sufficiently small,
\begin{align*}
& \beta_{\mu, 2}^k(p, r)^2 \le \frac{\mu(B_r(p))}{r^{k}}n C(n, \Lambda, \epsilon)(k+1) 5^n \times \\
& \left( \fint_{B_r(p)} 4 \int_{A_{r, 8r}(y)} \frac{|\lambda(y, |z-y|, T_{0, 1}u)(T_{0, 1}u(z)- T_{0,1}u(y)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2}{|z-y|^{n+2}}dz + Cr^{m} d\mu(y) \right)\\
& \le \frac{C(n, \Lambda, \epsilon)}{r^{k}} \times \\
& \left( \int_{B_r(p)} \int_{A_{r, 8r}(y)} \frac{|\lambda(y, |z-y|, T_{0, 1}u)(T_{0, 1}u(z)- T_{0,1}u(y)) - \nabla T_{0, 1}u(z) \cdot (z - y)|^2}{|z-y|^{n+2}}dz + Cr^{m} d\mu(y) \right)\\
\end{align*}
Now, using Lemma \ref{N non-degenerate almost monotonicity}, we have,
\begin{align*}
& \beta_{\mu, 2}^k(p, r)^2 \\
& \le \frac{C(n, \Lambda, \epsilon)}{r^{k}}\left( \int_{B_r(p)}\frac{C(Lip)}{2}\left( N_{\Omega}(y, 8r, T_{0, 1}u) - N_{\Omega}(y, r, T_{0, 1}u) + C_1(n, \Lambda) r\right) + Cr^{m} d\mu(y) \right)\\
& \le \frac{C(n, \Lambda, \epsilon)}{r^{k}}\int_{B_r(p)} \left(N_{\Omega}(y, 8r, T_{0, 1}u) - N_{\Omega}(y, r, T_{0, 1}u) + C_1(n, \Lambda)r\right) d\mu(y) \\
& \qquad + C(n, \Lambda, \epsilon)\frac{\mu(B_r(p))}{r^{k}}r^{m}
\end{align*}
\end{proof}
\section{Packing Estimates}\label{S:Packing}
Now that we have linked the behavior of $N_{\Omega}(p, r, u)$ to the $\beta$-numbers, we are ready to prove the crucial packing lemma. We note that the error terms generated by the triangle inequality,
$$
C(n, \Lambda, \epsilon)\frac{\mu(B_r(p))}{r^{k}}r^{m},
$$
force us to make modifications to the approach of \cite{NaberValtorta17-1}. In particular, we can only apply their Discrete Reifenberg Theorem at very small scales.
\begin{lem}\label{packing lem}
Let $u \in \mathcal{A}(n, \Lambda)$, $p_0 \in B_{\frac{1}{16}}(0) \cap \overline{\Omega}$, and $0< r_0 \le \frac{1}{512}$. Let
$$
\sup_{x \in B_1(0) \cap \overline{T_{p_0, r_0}\Omega}} E(x, 2, T_{p_0, r_0}u) = E \le C(n, \Lambda).
$$
There is an $\eta_1(n, \Lambda, \epsilon)>0$ such that the following holds for any $0 < \eta \le \eta_1$ and any $0< r< 1$: if $\{B_{2r_{p}}(p)\}$ is a collection of disjoint balls satisfying
\begin{equation}
N_{T_{p_0, r_0}\Omega}(p, \eta r_{p}, T_{p_0, r_0}u) \ge E - \eta , \qquad p \in S^k_{\epsilon, r}(T_{p_0, r_0}u), \qquad r \le r_p \le 1
\end{equation}
then,
\begin{equation}
\sum_{p}r_{p}^k \le C_2(n, \Lambda, \epsilon).
\end{equation}
\end{lem}
\begin{proof}
Choose $\delta(n, \Lambda, \epsilon)$ as in Lemma \ref{beta bound lem}, and $\gamma(n, \Lambda, \delta)$ as in Lemma \ref{quant rigidity}. For brevity, in this proof we write $v = T_{p_0, r_0}u$ and $N(y, s, v) = N_{T_{p_0, r_0}\Omega}(y, s, T_{p_0, r_0}u)$. Let
$$\eta \le \eta_1 = \frac{1}{4} \min\{\delta, \gamma\}.$$
We will employ the convention that $r_i = 2^{-i}$. For each $i \in \mathbb{N}$, define the truncated measure,
\begin{equation*}
\mu_i = \sum_{r_p \le r_i}r_p^k\delta_p.
\end{equation*}
We will write $\beta^k_i(x, r)=\beta_{\mu_i, 2}^k(x, r).$ Observe that $\beta_i$ enjoy the following properties. First, because the balls are disjoint, for all $j \ge i$
\begin{equation*}
\beta_i(x,r_j) = \begin{cases}
\beta_j(x, r_j) & x\in supp(\mu_j)\\
0 & \text{ otherwise.}
\end{cases}
\end{equation*}
Furthermore, for any $r< r_i \le 2^{-4},$
\begin{align*}
N(p, 8r_{i}, T_{p_0, r_0}u) - N(p, r_{i}, T_{p_0, r_0}u) & \le E - N(p, r_{i}, T_{p_0, r_0}u) + N(p, \eta r_{p}, T_{p_0, r_0}u) - N(p, \eta r_{p}, T_{p_0, r_0}u)\\
& \le E - N(p, \eta r_{p}, T_{p_0, r_0}u) + |N(p, r_{i}, T_{p_0, r_0}u) - N(p, \eta r_{p}, T_{p_0, r_0}u)|\\
& \le \eta + \max\{\eta, 2C_1(n, \Lambda)r_{i}\}.
\end{align*}
It is evident that the same argument shows that $N(p, 16r_{i}, T_{p_0, r_0}u) - N(p, r_{i}, T_{p_0, r_0}u) \le \eta + \max\{\eta, 2C_1(n, \Lambda)r_{i}\}$. Thus, for all $r_i \le \rho = \frac{\eta_1}{4C_1(n, \Lambda)},$ by our choice of $\eta \le \eta_1$, Lemma \ref{quant rigidity}, and Lemma \ref{beta bound lem}, we have,
\begin{align*}
\beta_{i}^k(p, r_{i})^2 \le & \frac{C(n, \Lambda, \epsilon)}{r_i^k} \int_{B_{r_i}(p)} \left( N_{T_{p_0, r_0}\Omega}(y, 8r_{i}, T_{p_0, r_0}u) - N_{T_{p_0, r_0}\Omega}(y, r_{i}, T_{p_0, r_0}u) + 2C_1(n, \Lambda)r_{i} \right) d\mu_i(y)\\
& \qquad + C(n, \Lambda, \epsilon)\frac{\mu(B_{r_i}(p))}{r_i^{k}}r_i^{m}.
\end{align*}
The claim of the lemma is that $\mu_0(B_1(0)) \le C(n, \Lambda, \epsilon)$. We prove the claim inductively. That is, we shall argue that for $r_i \le \rho \ll 2^{-4}$ and all $x \in B_1(0),$
\begin{equation*}
\mu_i(B_{r_i}(x)) \le C_{DR}(n)r_i^k
\end{equation*}
Observe that since $r_p \ge r> 0$, for $r_i < r$, the claim is trivially satisfied because $\mu_i = 0$. Assume, then, that the inductive hypothesis holds for all $j \ge i+1$. Let $x \in B_1(0).$ Observe that we can get a coarse bound,
\begin{equation*}
\mu_j(B_{4r_j}(x)) \le \Gamma(n)r_j^k, \quad \forall j \ge i-2, \quad \forall x \in B_1(0).
\end{equation*}
by observing that $\mu_j(B_{4r_j}(x)) = \mu_{j+2}(B_{4r_j}(x)) + \sum r^k_p$ where the sum is taken over all $p \in B_{4r_j}(x)$ with $r_{j+2} < r_p \le r_j$. Since the balls $B_{r_p}(p)$ are disjoint, there is a dimensional constant, $c(n),$ which bounds the number of such points, so $\Gamma(n) = c(n)C_{DR}$.
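For concreteness, one admissible accounting of the dimensional constant is the following (a sketch; the precise value of $c(n)$ is immaterial). The balls $B_{2r_p}(p)$ with $p \in B_{4r_j}(x)$ and $r_{j+2} < r_p \le r_j$ are pairwise disjoint, have radii $2r_p > 2r_{j+2} = \frac{r_j}{2}$, and are contained in $B_{6r_j}(x)$, so a volume comparison gives
\begin{align*}
\#\{p \in B_{4r_j}(x) : r_{j+2} < r_p \le r_j\} \le \frac{Vol(B_{6r_j}(x))}{\omega_n \left(\frac{r_j}{2}\right)^n} = 12^n, \qquad \text{and hence} \qquad \sum_{p} r_p^k \le 12^n r_j^k.
\end{align*}
The remaining term, $\mu_{j+2}(B_{4r_j}(x))$, is handled by covering $B_{4r_j}(x)$ by boundedly many balls of radius $r_{j+2}$ and applying the inductive hypothesis to each.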
Now, we calculate,
\begin{align*}
\sum_{r_j < 2r_i} \int_{B_{2r_i}(x)} \beta_i(z, r_j )^2d\mu_i(z) = & \sum_{r_j < 2r_i} \int_{B_{2r_i}(x)} \beta_j(z, r_j )^2d\mu_j(z)\\
\le & C \sum_{r_j < 2r_i} \frac{1}{r_j^k} \int_{B_{2r_i}(x)} \int_{B_{r_j}(z)} \left( N(y, 8r_j, v) - N(y, r_j, v) + 2C_1(n, \Lambda)r_j \right) d\mu_j(y)d\mu_j(z)\\
& + C \sum_{r_j < 2r_i} \int_{B_{2r_i}(x)} \frac{\mu_j(B_{r_j}(z))}{r_j^{k}}r_j^{m}d\mu_j(z)\\
\le & C \sum_{r_j < 2r_i} \int_{B_{2r_i + r_j}(x)} \frac{\mu_j(B_{r_j}(y))}{r_j^k} \left( N(y, 8r_j, v) - N(y, r_j, v) + 2C_1(n, \Lambda)r_j \right) d\mu_j(y)\\
& + C \sum_{r_j < 2r_i} \int_{B_{2r_i}(x)} \frac{\mu_j(B_{r_j}(z))}{r_j^{k}}r_j^{m}d\mu_j(z)\\
\le & C\Gamma(n) \int_{B_{4r_i}(x)} \sum_{r_j < 2r_i} \left( N(y, 8r_j, v) - N(y, r_j, v) + 2C_1(n, \Lambda)r_j \right) d\mu_i(y)\\
& + C \Gamma(n) \sum_{r_j < 2r_i} \mu_i(B_{4r_i}(x)) r_j^{m}\\
\le & C \Gamma(n) \sum_{p \in B_{4r_i}(x) \cap supp(\mu_i)} r_p^k\left( N(p, 16r_i, v) - N(p, r_p, v) + 2C_1(n, \Lambda)\sum_{j \ge i} r_j \right) + C \Gamma(n)^2 \left( \sum_{r_j < 2r_i} r_j^{m} \right) r_i^k\\
\le & C \Gamma(n) \sum_{p \in B_{4r_i}(x) \cap supp(\mu_i)} r_p^k\left( \eta + \eta_1 + 4C_1(n, \Lambda) r_i\right) + C \Gamma(n)^2 \left( \sum_{r_j < 2r_i} r_j^{m} \right) r_i^k\\
\le & C \Gamma(n)\, \mu_i(B_{4r_i}(x)) \left( \eta + \eta_1 + \eta_1\right) + C \Gamma(n)^2 \left( \sum_{r_j < 2r_i} r_j^{m} \right) r_i^k\\
\le & C \Gamma(n)^2 r_i^k \left(3 \eta_1\right) + C\Gamma(n)^2 \left( \sum_{r_j < 2r_i} r_j^{m} \right) r_i^k.
\end{align*}
Thus, for $\eta \le \eta_1(n, \Lambda, \epsilon)$ sufficiently small depending only upon $n, \Lambda, \epsilon$,
$$C \Gamma(n)^2 (3 \eta_1) \le \frac{1}{2}\delta_{DR}.$$
For $r_i \le \rho= \frac{\eta_1}{4C_1(n, \Lambda)},$ depending only upon $n, \Lambda, \epsilon$, we can ensure,
$$
C\Gamma(n)^2 \left( \sum_{r_j < 2r_i} r_j^{m} \right) \le \frac{1}{2}\delta_{DR}.$$
Thus, $\mu_i$ satisfies the hypotheses of the Discrete Reifenberg Theorem,
\begin{equation*}
\sum_{r_j < 2r_i} \int_{B_{2r_i}(x)} \beta_i(z, r_j )^2d\mu_i(z) \le \delta_{DR}r_i^k.
\end{equation*}
The Discrete Reifenberg Theorem therefore implies that $\mu_i(B_{r_i}(x)) \le C_{DR}r_i^k$.
Thus, by induction, the claim holds for all $r_i \le \rho(n, \Lambda, \epsilon)= \eta_1 \frac{1}{4C_1(n, \Lambda)}.$ A packing argument proves $\mu_0(B_1(0)) \le C(n, \Lambda, \epsilon)$.
\end{proof}
\section{Tree Construction}\label{S:trees}
In this section, we detail two procedures for constructing inductively refined covering schemes. We will use these covering schemes in the next section to generate the actual cover which proves Theorem \ref{T: main theorem 1}. First, we fix our constants.
\subsection{Fixing Constants and a Definition.}
Let $u_1 \in \mathcal{A}(n, \Lambda)$ and $p \in B_{\frac{1}{16}}(0) \cap \overline{\Omega}$. We define $u = T_{p, \frac{1}{512}}u_1$. Thus, by Lemma \ref{N bound lem},
\begin{align*}
\sup_{x \in B_1(0) \cap \overline{T_{p, \frac{1}{512}} \Omega}}E(x, 2, u) \le C(n, \Lambda) = E.
\end{align*}
We fix the scale of the covering we wish to construct as $R \in (0, 1].$ Let $0< \epsilon$ be given.
We will let $\rho$ denote the inductive scale by which we will refine our cover. For convenience, we will use the convention $r_i = \rho^{i}.$ Let $\rho < \frac{1}{10}$ be such that
$$
2c_1(n)c_2(n)\rho < 1/2,
$$
where $c_1(n)$ and $c_2(n)$ are dimensional constants which will be given in the following lemmata. Let $\delta_0(n, \Lambda, \epsilon)$ be as in Lemma \ref{beta bound lem} and $\gamma_0(n, \Lambda, \delta_0)$ as in Lemma \ref{quant rigidity}. Now, we also let $\eta_1(n, \Lambda, \epsilon)$ be as in Lemma \ref{packing lem} and
$$
\eta' = \min\{\gamma_0, \eta_1/20\}.
$$
We then let $\eta = \eta_0(n, \Lambda, \eta', \rho, \epsilon)$ be as in Corollary \ref{key dichotomy}.
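To keep the dependencies straight, we note that the constants just fixed are chosen in the following order, each depending only on the quantities appearing before it:
$$
\rho = \rho(n), \qquad \delta_0 = \delta_0(n, \Lambda, \epsilon), \qquad \gamma_0 = \gamma_0(n, \Lambda, \delta_0), \qquad \eta_1 = \eta_1(n, \Lambda, \epsilon),
$$
$$
\eta' = \min\{\gamma_0, \eta_1/20\}, \qquad \eta = \eta_0(n, \Lambda, \eta', \rho, \epsilon).
$$
In particular, $\rho$ and $\eta$ ultimately depend only upon $n$, $\Lambda$, and $\epsilon$, while the covering scale $R \in (0, 1]$ remains a free parameter.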
The sorting principle for our covering comes from the dichotomy in Corollary \ref{key dichotomy}. To formalize this, we make the following definition.
\begin{definition}
For $p \in B_2(0)$ and $R< r< 2$, the ball $B_r(p)$ will be called ``good" if
$$
N_{\Omega}(q, \gamma \rho r, u) \ge E - \eta' \qquad \text{for all} \quad q \in \mathcal{S}^k_{\epsilon, \eta R}(u) \cap B_r(p).
$$
We will say that $B_r(p)$ is ``bad" if it is not good.
\end{definition}
\begin{rmk}
By Corollary \ref{key dichotomy}, with $E+ \eta/2$ in place of $E$ (which is admissible by monotonicity), in any bad ball $B_r(p)$ there exists a $(k-1)$-dimensional affine plane, $L^{k-1}$, such that
$$
\{q \in B_r(p) : N_{\Omega}(q, \gamma \rho r, u) \ge E - \eta/2 \} \subset B_{\rho r}(L^{k-1}).
$$
\end{rmk}
\subsection{Good trees}
Let $x \in B_1(0)$ and $B_{r_A}(x)$ be a good ball for $A \ge 0$. We will detail the inductive construction of a good tree based at $B_{r_A}(x)$. The induction will build a successively refined covering of $B_{r_A}(x) \cap \mathcal{S}^k_{\epsilon, \eta R}(u)$. We will terminate the process with a cover which consists of a collection of bad balls with packing estimates and a collection of stop balls whose radii are comparable to $R$. We shall use the notation $\mathcal{G}_i$ for the collection of centers of good balls of scale $r_i$, and $\mathcal{B}_i$ for the collection of centers of bad balls of scale $r_i$.
Because $B_{r_A}(x)$ is a good ball, at scale $i = A$ we set $\mathcal{G}_A = \{x\}$ and $\mathcal{B}_A = \emptyset$.
Now the inductive step. Suppose that we have constructed our collections of good and bad balls down to scale $r_{j-1}$, for some $j-1 \ge A$. Let $\{z \}$ be a maximal $\frac{2}{5}r_j$-net in
$$
B_{r_A}(x) \cap \mathcal{S}^k_{\epsilon, \eta R}(u) \cap B_{r_{j-1}}(\mathcal{G}_{j-1})\setminus \cup_{i=A}^{j-1}B_{r_i}(\mathcal{B}_i).
$$
We then sort these points into $\mathcal{G}_j$ and $\mathcal{B}_j$ depending on whether $B_{r_j}(z)$ is a good ball or a bad ball. If $r_j > R$, we proceed inductively. If $r_j \le R$, then we stop the procedure. In this case, we let $\mathcal{S} = \mathcal{G}_j \cup \mathcal{B}_j$ and we call this the collection of ``stop" balls.
Some notation: the covering at which we arrive at the end of this process shall be called the ``good tree at $B_{r_A}(x)$." We shall follow \cite{EdelenEngelstein17} and denote this $\mathcal{T}_{\mathcal{G}} = \mathcal{T}_{\mathcal{G}}(B_{r_A}(x))$.
We shall call the collection of ``bad" ball centers, $\cup_{i}\mathcal{B}_i$, the ``leaves of the tree" and denote this collection by $\mathcal{F}(\mathcal{T}_{\mathcal{G}}).$ We shall denote the collection of ``stop" ball centers by $\mathcal{S}(\mathcal{T}_{\mathcal{G}}) = \mathcal{S}$.
For $b \in \mathcal{F}(\mathcal{T}_{\mathcal{G}})$ we let $r_b = r_i$ for the $i$ such that $b \in \mathcal{B}_i$. Similarly, if $s \in \mathcal{S}(\mathcal{T}_{\mathcal{G}})$, we let $r_s = r_j$ for the terminal $j$.
\begin{thm} \label{good trees}
A good tree, $\mathcal{T}_{\mathcal{G}}(B_{r_A}(x))$, enjoys the following properties:
\begin{itemize}
\item[(A)] Tree-leaf packing:
$$
\sum_{b \in \mathcal{F}(\mathcal{T}_{\mathcal{G}})} r_b^k \le c_1(n) r^k_A
$$
\item[(B)] Stop ball packing
$$
\sum_{s \in \mathcal{S}(\mathcal{T}_{\mathcal{G}})} r_s^k \le c(n) r^k_A
$$
\item[(C)] Covering control
$$
\mathcal{S}^k_{\epsilon, \eta R}(u) \cap B_{r_A}(x) \subset \cup_{s \in \mathcal{S}(\mathcal{T}_{\mathcal{G}})} B_{r_s}(s) \cup \cup_{b \in \mathcal{F}(\mathcal{T}_{\mathcal{G}})} B_{r_b}(b)
$$
\item[(D)] Size control: for any $s \in \mathcal{S}(\mathcal{T}_{\mathcal{G}})$, $\rho R \le r_s \le R$.
\end{itemize}
\end{thm}
\begin{proof} First, observe that by construction,
$$
\{B_{\frac{r_b}{5}}(b) : b \in \mathcal{F}(\mathcal{T}_{\mathcal{G}}) \} \cup \{B_{\frac{r_s}{5}}(s) : s \in \mathcal{S}(\mathcal{T}_{\mathcal{G}}) \}
$$
is pairwise disjoint and centered in the set $\mathcal{S}^k_{\epsilon, \eta R}(u).$
Next, all bad balls and stop balls are centered in a good ball of the previous scale. By our definition of good balls, then, we have for all $i$
$$
N_{\Omega}(b, \gamma r_i, u) = N_{\Omega}(b, \gamma \rho r_{i-1}, u) \ge E - \eta' \quad \forall b \in \mathcal{B}_i
$$
and
$$
N_{\Omega}(s, \gamma r_s, u) \ge E - \eta' \quad \forall s \in \mathcal{S}(\mathcal{T}_{\mathcal{G}}) .
$$
Since by monotonicity we have that $\sup_{p \in B_{r_A}(x)} N_{\Omega}(p, 2r_A, u) \le E + \eta'$, we can apply Lemma \ref{packing lem} to $B_{r_A}(x)$ and get the packing estimates (A) and (B).
Covering control follows from our choice of a maximal $\frac{2}{5}r_i$-net at each scale $i$. If $i$ were the first scale at which some point $y \in \mathcal{S}^k_{\epsilon, \eta R}(u) \cap B_{r_A}(x)$ was not contained in our inductively refined cover, then $y$ would violate the maximality of the net.
The last condition, (D), holds because we stop only at the first scale $j$ for which $r_j \le R$. Since the radii decrease by the factor $\rho$ at each step, (D) follows.
\end{proof}
\subsection{Bad trees}
Let $B_{r_A}(x)$ be a bad ball. Note that for every bad ball, there is a $(k-1)-$dimensional affine plane, $L^{k-1}$, associated to it which satisfies the properties elaborated in Corollary \ref{key dichotomy}. Our construction of bad trees will differ in several respects from our construction of good trees. The idea is still to define an inductively refined cover at decreasing scales of $B_{r_A}(x) \cap \mathcal{S}^k_{\epsilon, \eta R}(u)$. We shall again sort balls at each step into ``good," ``bad," and ``stop" balls. But these balls will play slightly different roles and the ``stop" balls will have different radii.
We shall reuse the notation $\mathcal{G}_i$ to denote the collection of centers of good balls of scale $r_i$, $\mathcal{B}_i$ to denote the collection of centers of bad balls of scale $r_i$, and $\mathcal{S}_i$ to denote the collection of centers of stop balls of scale $r_i$.
At scale $i = A$, we set $\mathcal{B}_A = \{x\}$, since $B_{r_A}(x)$ is a bad ball, and set $\mathcal{S}_A= \mathcal{G}_A = \emptyset$. Suppose, now, that we have constructed good, bad, and stop balls down to scale $r_{i-1}$, for some $i-1 \ge A$. If $r_i > R$, then define $\mathcal{S}_i$ to be a maximal $\frac{2}{5}\eta r_{i-1}$-net in
$$
B_{r_A}(x) \cap \mathcal{S}^k_{\epsilon, \eta R}(u) \cap \bigcup_{b \in \mathcal{B}_{i-1}} \left( B_{r_{i-1}}(b) \setminus B_{2\rho r_{i-1}}(L^{k-1}_b) \right).
$$
Note that $\eta \ll \rho$, so $\eta r_{i-1} < r_i$.
We then let $\{z\}$ be a maximal $\frac{2}{5}r_{i}$-net in
$$
B_{r_A}(x) \cap \mathcal{S}^k_{\epsilon, \eta R}(u) \cap \bigcup_{b \in \mathcal{B}_{i-1}} \left( B_{r_{i-1}}(b) \cap B_{2\rho r_{i-1}}(L^{k-1}_b) \right).
$$
We then sort $\{z\}$ into the disjoint union $\mathcal{G}_i \cup \mathcal{B}_i$ depending on whether $B_{r_i}(z)$ is a good ball or a bad ball.
If $r_i \le R$, then we terminate the process by defining $\mathcal{G}_i = \mathcal{B}_i = \emptyset$ and letting $\mathcal{S}_i$ be a maximal $\frac{2}{5}\eta r_{i-1}$-net in
$$
B_{r_A}(x) \cap \mathcal{S}^k_{\epsilon, \eta R}(u) \cap B_{r_i}(\mathcal{B}_{i-1}).
$$
Some notation: the covering at which we arrive at the end of this process shall be called the ``bad tree at $B_{r_A}(x)$." We shall follow \cite{EdelenEngelstein17} and denote this $\mathcal{T}_{\mathcal{B}} = \mathcal{T}_{\mathcal{B}}(B_{r_A}(x))$.
We shall call the collection of ``good" ball centers, $\cup_{i}\mathcal{G}_i$, the ``leaves of the tree" and denote this collection by $\mathcal{F}(\mathcal{T}_{\mathcal{B}}).$ We shall denote the collection of ``stop" ball centers by $\mathcal{S}(\mathcal{T}_{\mathcal{B}}) = \cup_i \mathcal{S}_i$.
As before, we shall use the convention that for $g \in \mathcal{F}(\mathcal{T}_{\mathcal{B}})$ we let $r_g = r_i$ for $i$ such that $g \in \mathcal{G}_i$. However, note that now, if $s \in \mathcal{S}_i \subset \mathcal{S}(\mathcal{T}_{\mathcal{B}})$, we let $r_s =\eta r_{i-1}$.
\begin{thm} \label{bad trees}
A bad tree, $\mathcal{T}_{\mathcal{B}}(B_{r_A}(x))$, enjoys the following properties:
\begin{itemize}
\item[(A)] Tree-leaf packing:
$$
\sum_{g \in \mathcal{F}(\mathcal{T}_{\mathcal{B}})} r_g^k \le 2c_2(n)\rho r^k_A
$$
\item[(B)] Stop ball packing
$$
\sum_{s \in \mathcal{S}(\mathcal{T}_{\mathcal{B}})} r_s^k \le c(n, \eta) r^k_A
$$
\item[(C)] Covering control
$$
\mathcal{S}^k_{\epsilon, \eta R}(u) \cap B_{r_A}(x) \subset \cup_{s \in \mathcal{S}(\mathcal{T}_{\mathcal{B}})} B_{r_s}(s) \cup \cup_{g \in \mathcal{F}(\mathcal{T}_{\mathcal{B}})} B_{r_g}(g)
$$
\item[(D)] Size control: for any $s \in \mathcal{S}(\mathcal{T}_{\mathcal{B}})$, at least one of the following holds:
$$\eta R \le r_s \le R \quad \text{ or }\quad \sup_{p \in B_{2r_s}(s)} N_{\Omega}(p, 2r_s, u) \le E- \eta/2.$$
\end{itemize}
\end{thm}
\begin{proof}
Conclusion (C) follows identically as in the good tree theorem. Next, we consider the packing estimates. Let $r_i > R$. Then, by construction, for any $b \in \mathcal{B}_{i-1}$, we have that
$$
(\mathcal{G}_i \cup \mathcal{B}_i) \cap B_{r_{i-1}}(b) \subset B_{2\rho r_{i-1}}(L_b^{k-1}).
$$
Thus, since the points of $\mathcal{G}_i \cup \mathcal{B}_i$ are $\frac{2}{5}r_i$-separated, we calculate
$$
|(\mathcal{G}_i \cup \mathcal{B}_i) \cap B_{r_{i-1}}(b)| \le \omega_{k-1}\omega_{n-k+1}(3\rho)^{n-k+1} \frac{1}{\omega_n (\rho/5)^n} \le c_2(n) \rho^{1-k}.
$$
We can push this estimate up the scales as follows,
\begin{eqnarray*}
|\mathcal{G}_i \cup \mathcal{B}_i| r_i^k &\le &c_2(n) \rho\,|\mathcal{B}_{i-1}| r_{i-1}^k\\
& \le & c_2(n) \rho\,|\mathcal{B}_{i-1} \cup \mathcal{G}_{i-1}| r_{i-1}^k\\
& \vdots & \\
&\le & (c_2 \rho)^{i-A}r_A^k.
\end{eqnarray*}
Summing over all $i \ge A+1$, then, we have that
\begin{equation}\nonumber
\sum_{i = A+1}^{\infty} |\mathcal{B}_{i} \cup \mathcal{G}_{i}|r_i^k \le \sum_{i = A+1}^{\infty} (c_2 \rho)^{i-A}r_A^k.
\end{equation}
Since we chose $c_2\rho \le 1/2$, the sum converges and $\sum_{i = A+1}^{\infty} |\mathcal{B}_{i} \cup \mathcal{G}_{i}|r_i^k \le 2c_2 \rho r_A^k $. Since the leaves of the tree are exactly the good ball centers, $\mathcal{F}(\mathcal{T}_{\mathcal{B}}) = \cup_{i > A}\mathcal{G}_i$, this proves (A).
To see (B), we observe that for any given scale $i \ge A+1$, the collection of stop balls, $\{B_{\eta r_{i-1}}(s) \}_{s \in \mathcal{S}_i}$, forms a Vitali collection centered in $B_{r_{i-1}}(\mathcal{B}_{i-1})$. Thus, we have that
$$
|\mathcal{S}_i | \le \frac{10^n}{\eta^n} |\mathcal{B}_{i-1}|.
$$
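For instance, this count follows from a crude volume comparison: since the centers in $\mathcal{S}_i$ are $\frac{2}{5}\eta r_{i-1}$-separated, the balls $\{B_{\frac{1}{5}\eta r_{i-1}}(s)\}_{s \in \mathcal{S}_i}$ are pairwise disjoint and contained in $B_{2 r_{i-1}}(\mathcal{B}_{i-1})$, so that
$$
|\mathcal{S}_i|\,\omega_n \Big(\frac{\eta r_{i-1}}{5}\Big)^n \;\le\; \mathcal{H}^n\big(B_{2r_{i-1}}(\mathcal{B}_{i-1})\big) \;\le\; |\mathcal{B}_{i-1}|\,\omega_n\,(2r_{i-1})^n,
$$
and hence $|\mathcal{S}_i| \le \frac{10^n}{\eta^n}|\mathcal{B}_{i-1}|$.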
Since by construction there are no stop balls at the initial scale, $A$, we compute that
$$
\sum_{i=A+1}^{\infty}|\mathcal{S}_i |(\eta r_{i-1})^k \le 10^n \eta^{k-n} \sum_{i=A}^{\infty} |\mathcal{B}_i |r^k_i \le c(n, \eta)r^k_A.
$$
This is (B).
We now argue (D). For $s \in \mathcal{S}_i$ where $r_i > R$, by construction $s \in B_{r_{i-1}}(b) \setminus B_{2\rho r_{i-1}}(L^{k-1}_b)$ for some $b \in \mathcal{B}_{i-1}$. By Corollary \ref{key dichotomy}, the construction, and our choice of $\eta \le \frac{\rho}{2}$, we have that
\begin{equation}\nonumber
\sup_{p \in B_{2r_s}(s)}N_{\Omega}(p, 2r_s, u)
\le \sup_{p \in B_{2\eta r_{i-1}}(s)}N_{\Omega}(p, 2\eta r_{i-1},u) \le E - \eta/2.
\end{equation}
On the other hand, if $r_i \le R,$ then $r_{i-1} > R$. Thus,
\begin{equation}\nonumber
R \ge \rho r_{i-1} \ge \eta r_{i-1} = r_s \ge \eta R.
\end{equation}
This proves (D).
\end{proof}
\section{The Covering}\label{S:covering}
We continue to assume that $u_1 \in \mathcal{A}(n, \Lambda)$ and $p \in B_{\frac{1}{16}}(0) \cap \overline{\Omega}$, and we define $u = T_{p, \frac{1}{512}}u_1$. We now wish to build the covering of $\mathcal{S}^k_{\epsilon, \eta R}(u) \cap B_1(0) \cap \overline{T_{p, \frac{1}{512}}\Omega}$ using the tree constructions, above. The idea is that $B_1(0)$ is either a good ball or a bad ball. Therefore, we can construct a tree with $B_1(0)$ as the root. Then in each of the leaves, we construct either good trees or bad trees, depending upon the type of the leaves. Since in each construction, we decrease the size of the leaves by a factor of $\rho<1/10$, we can continue alternating tree types until the process terminates in finite time.
Explicitly, we let $\mathcal{F}_0 = \{0\}$ and let $B_1(0)$ be the only leaf. We set $\mathcal{S}_0 = \emptyset$. Now, assume that we have defined the leaves and stop balls up to stage $i-1$.
By hypothesis, the leaves in $\mathcal{F}_{i-1}$ are either all good balls or all bad balls. If they are good, we define for each $f \in \mathcal{F}_{i-1}$ the good tree $\mathcal{T}_{\mathcal{G}}(B_{r_f}(f)).$ We then set,
\begin{equation*}
\mathcal{F}_i = \cup_{f \in \mathcal{F}_{i-1}} \mathcal{F}(\mathcal{T}_{\mathcal{G}}(B_{r_f}(f)))
\end{equation*}
and
\begin{equation*}
\mathcal{S}_i = \mathcal{S}_{i-1} \cup \cup_{f \in \mathcal{F}_{i-1}} \mathcal{S}(\mathcal{T}_{\mathcal{G}}(B_{r_f}(f)))
\end{equation*}
Since all the leaves of good trees are bad balls, all the leaves of $\mathcal{F}_i$ are bad.
If, on the other hand, leaves of $\mathcal{F}_{i-1}$ are bad, then for each $f \in \mathcal{F}_{i-1},$ we construct a bad tree, $\mathcal{T}_{\mathcal{B}}(B_{r_f}(f))$. In this case, we set
\begin{equation*}
\mathcal{F}_i = \cup_{f \in \mathcal{F}_{i-1}} \mathcal{F}(\mathcal{T}_{\mathcal{B}}(B_{r_f}(f)))
\end{equation*}
and
\begin{equation*}
\mathcal{S}_i = \mathcal{S}_{i-1} \cup \cup_{f \in \mathcal{F}_{i-1}} \mathcal{S}(\mathcal{T}_{\mathcal{B}}(B_{r_f}(f)))
\end{equation*}
Since all the leaves of bad trees are good balls, all the leaves of $\mathcal{F}_i$ are good.
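For readers who find an algorithmic summary useful, the alternating construction just described can be sketched schematically as follows. This is only an illustration of the control flow: the predicate \texttt{is\_good}, the candidate set (standing in for $\mathcal{S}^k_{\epsilon, \eta R}(u)$), and the net spacings are placeholders for the notions defined in Section \ref{S:trees}, and the special treatment of the neighborhood of $L^{k-1}_b$ inside a bad tree is omitted.
\begin{verbatim}
# Schematic sketch only; all geometric predicates are placeholders.

def maximal_net(points, spacing):
    """Greedily extract a maximal `spacing`-separated subset of `points`."""
    net = []
    for p in points:
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) > spacing ** 2
               for q in net):
            net.append(p)
    return net

def build_tree(center, scale, is_good, candidates, rho, R):
    """One tree rooted at B_scale(center): refine balls of the same type as
    the root, record balls of the opposite type as leaves, and record balls
    whose radius has dropped to (or below) the final scale R as stop balls."""
    root_type = is_good(center, scale)
    frontier, leaves, stops = [(center, scale)], [], []
    while frontier:
        next_frontier = []
        for (c, r) in frontier:
            inside = [p for p in candidates
                      if sum((a - b) ** 2 for a, b in zip(p, c)) <= r ** 2]
            for p in maximal_net(inside, 0.4 * rho * r):
                child = rho * r
                if child <= R:
                    stops.append((p, child))            # stop balls
                elif is_good(p, child) == root_type:
                    next_frontier.append((p, child))    # keep refining
                else:
                    leaves.append((p, child))           # leaves of the tree
        frontier = next_frontier
    return leaves, stops
\end{verbatim}
Alternating calls to \texttt{build\_tree} on the returned leaves, starting from $B_1(0)$ and collecting the stop balls produced at every stage, mirrors the construction of the families $\mathcal{F}_i$ and $\mathcal{S}_i$ above.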
This construction gives the following estimates.
\begin{lem}
For the construction described above, there is an $N \in \mathbb{N}$ such that $\mathcal{F}_N = \emptyset$ with the following properties:
\begin{itemize}
\item[(A)] Leaf packing:
$$
\sum_{i = 0}^{N-1} \sum_{f \in \mathcal{F}_i} r_f^k \le c(n)
$$
\item[(B)] Stop ball packing
$$
\sum_{s \in \mathcal{S}_N} r_s^k \le c(n, \epsilon, \Lambda)
$$
\item[(C)] Covering control
$$
\mathcal{S}^k_{\epsilon, \eta R}(u) \cap B_1(0) \subset \cup_{s \in \mathcal{S}_N} B_{r_s}(s)
$$
\item[(D)] Size control: for any $s \in \mathcal{S}_N$, at least one of the following holds:
$$\eta R \le r_s \le R \quad \text{ or }\quad \sup_{p \in B_{2r_s}(s)} N_{\Omega}(p, 2r_s, u) \le E- \eta/2.$$
\end{itemize}
\end{lem}
\begin{proof}
By construction, each leaf of a good or bad tree has radius at most $\rho$ times the radius of its root, so every $f \in \mathcal{F}_i$ satisfies $r_f \le \rho^{i}$. Thus, there is an $i$ sufficiently large so that $r_i <R$, after which no further leaves are produced. Thus, $N$ is finite.
To see (A), we use the previous theorems. That is, if the leaves, $\mathcal{F}_i$, are good, then they are the leaves of bad trees rooted in $\mathcal{F}_{i-1}$. Thus, we calculate by Theorem \ref{bad trees},
\begin{equation*}
\sum_{f \in \mathcal{F}_i} r_f^k \le 2c_2(n)\rho \sum_{f' \in \mathcal{F}_{i-1}} r_{f'}^k
\end{equation*}
On the other hand, if the leaves, $\mathcal{F}_i$, are bad, then they are the leaves of good trees rooted in $\mathcal{F}_{i-1}$. Thus, we calculate by Theorem \ref{good trees},
\begin{equation*}
\sum_{f \in \mathcal{F}_i} r_f^k \le c_1(n) \sum_{f' \in \mathcal{F}_{i-1}} r_{f'}^k
\end{equation*}
Concatenating the estimates, since we alternate between good and bad leaves, we have,
\begin{equation*}
\sum_{f \in \mathcal{F}_i} r_f^k \le c(n)(2c_1(n)c_2(n)\rho)^{i/2}
\end{equation*}
By our choice of $\rho$, then, $\sum_{f \in \mathcal{F}_i} r_f^k \le c(n)2^{-i/2}$. The estimate (A) follows immediately.
We now turn our attention to (B). Each stop ball, $s \in \mathcal{S}_N$, is a stop ball coming from a good or a bad tree rooted in one of the leaves of a bad tree or good tree. We have the estimates from Theorems \ref{good trees} and \ref{bad trees}, which give packing bounds for both leaves and stop balls. Combining these, we get
\begin{align*}
\sum_{s\in \mathcal{S}_N} r_s^k & = \sum_{i = 0}^N \sum_{s \in \mathcal{S}_i} r_s^k\\
& \le \sum_{i = 0}^{N-1} \sum_{f \in \mathcal{F}_i}c(n, \eta) r_f^k\\
& \le C(n, \eta)
\end{align*}
Recalling the dependencies of $\eta$ gives the desired result.
(C) follows inductively from the analogous covering control in Theorems \ref{good trees} and \ref{bad trees} applied to each tree constructed. (D) is immediate from these theorems, as well.
\end{proof}
\begin{cor} \label{covering cor}
Let $u \in \mathcal{A}(n, \Lambda)$ and $0< R \le 1$. Let $p \in B_{\frac{1}{16}}(0) \cap \overline{\Omega}$ and $0<r \le \frac{1}{512}.$ There is an $\eta(n, \Lambda, \epsilon)>0$ and a collection of balls, $\{B_{r_x}(x) \}_{x \in \mathcal{U}}$, with centers $x \in \mathcal{S}^k_{\epsilon, \eta R}(u) \cap B_r(p)$ and radii $R \le r_x \le \frac{1}{10}r$, which satisfies the following properties:
\begin{itemize}
\item[(A)] Packing:
$$
\sum_{x \in \mathcal{U}} r_x^k \le c(n, \Lambda, \epsilon)
$$
\item[(B)] Covering control
$$
\mathcal{S}^k_{\epsilon, \eta R}(u) \cap B_{r}(p) \subset \cup_{x \in \mathcal{U}} B_{r_x}(x)
$$
\item[(C)] Energy drop: For every $x \in \mathcal{U}$, either
$$r_x = R \qquad \text{ or } \quad \sup_{y \in B_{2r_x}(x)} N_{\Omega}(y, 2r_x, u) \le C_1(n, \Lambda)- \eta(n, \Lambda, \epsilon)/2.$$
\end{itemize}
\end{cor}
This follows immediately from the previous lemma by taking $\eta \le \frac{1}{2C_1}\eta_1$, setting $\mathcal{U} = \mathcal{S}_N$, and setting $r_x = \max\{R, r_s \}$.
\begin{thm}\label{other theorem}
Let $u \in \mathcal{A}(n, \Lambda).$ For all $\epsilon>0$ there exists an $\eta(n, \Lambda, \epsilon)>0$ such that for all $0< r< 1$ and $k= 1, 2, ..., n-1$ we can find a collection of balls, $\{B_r(x_i)\}_i$ with the following properties:
\begin{enumerate}
\item $\mathcal{S}^k_{\epsilon, r}(u) \cap B_{\frac{1}{512}}(0) \subset \cup_i B_r(x_i).$
\item$|\{x_i\}_i| \le c(n, \Lambda, \epsilon) r^{-k}$
\end{enumerate}
\end{thm}
\begin{proof} By the assumptions of the theorem and Lemma \ref{N bound lem}, we know that $E \le C_1(n, \Lambda)$.
Ensuring that $c(n, \Lambda, \epsilon)$ is sufficiently large, we may reduce to arguing for $r < \eta$. We now use Corollary \ref{covering cor}, to build the covering $\mathcal{U}_1$. If every $r_x = R$, then the packing and covering estimates give the claim directly, since
\begin{equation*}
R^{k-n} Vol(B_R(\mathcal{S}^k_{\epsilon, \eta R}(u) \cap B_1(0))) \le \omega_n R^{k-n} \sum_{\mathcal{U}_1}(2R)^n = \omega_n 2^n \sum_{\mathcal{U}_1}r_x^k \le c(n, \Lambda, \epsilon)
\end{equation*}
If there exists an $r_x \not = R$, we use Corollary \ref{covering cor} to build a finite sequence of refined covers, $\mathcal{U}_1, \mathcal{U}_2, \mathcal{U}_3 , \ldots $ such that for each $i$, the covering satisfies the following properties:
\begin{itemize}
\item[$(A_i)$] Packing:
$$
\sum_{x \in \mathcal{U}_i} r_x^k \le c(n, \Lambda, \epsilon)( 1 + \sum_{x \in \mathcal{U}_{i-1}} r_x^k )
$$
\item[$(B_i)$] Covering control
$$
\mathcal{S}^k_{\epsilon, \eta R}(u) \cap B_1(0) \subset \cup_{x \in \mathcal{U}_i} B_{r_x}(x)
$$
\item[$(C_i)$] Energy drop: For every $x \in \mathcal{U}_i$, either
$$r_x = R \qquad \text{ or } \quad \sup_{p \in B_{2r_x}(x)} N_{\Omega}(p, 2r_x, u) \le E- i\eta/2.$$
\item[$(D_i)$] radius control:
$$
\sup_{x \in \mathcal{U}_i} r_x \le 10^{-i}
$$
\end{itemize}
If we can construct such a sequence of covers, then this process will terminate in finite time, \textit{independent of} $R$. To see this claim, recall that $N_{\Omega}(p, r, u)\ge 0$ for all $p \in B_{\frac{1}{4}}(0)$ and all $0<r$. Therefore, once $i > C_1(n, \Lambda)\frac{2}{\eta}$, $r_x = R$ for all $x \in \mathcal{U}_i$. In this case, we will have the claim with a bound of the form,
\begin{equation*}
R^{k-n} Vol(B_R(\mathcal{S}^k_{\epsilon, \eta R}(u) \cap B_\frac{1}{512}(0))) \le c(n, \Lambda, \epsilon)^{C(n, \Lambda)\frac{2}{\eta}}.
\end{equation*}
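To make the shape of this bound explicit, one can unroll the recursion in $(A_i)$ (a rough estimate; we abbreviate $c = c(n, \Lambda, \epsilon) \ge 2$ and $S_i = \sum_{x \in \mathcal{U}_i} r_x^k$, so that $S_1 \le c$):
$$
S_i \;\le\; c(1 + S_{i-1}) \;\le\; c + c^2 + \cdots + c^{\,i} \;\le\; i\, c^{\,i} \;\le\; c^{\,2i},
$$
so that, once the process terminates after at most $i \le C_1(n, \Lambda)\frac{2}{\eta} + 1$ stages, the total packing measure is bounded by a constant of the claimed exponential form.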
Thus, we reduce to inductively constructing the required covers. Suppose we have already constructed $\mathcal{U}_{i-1}$ as desired. For each $x \in \mathcal{U}_{i-1}$ with $r_x > R$, we apply Corollary \ref{covering cor} to $B_{r_x}(x)$ to obtain a new collection of balls, $\mathcal{U}_{i, x}$. From the assumption that $r_x \le 1/10$, it is clear that $u$ satisfies the hypotheses of Corollary \ref{covering cor} in $B_{r_x}(x).$
To check packing control, we have that
\begin{equation*}
\sum_{y \in \mathcal{U}_{i, x}} r_y^k \le c(n, \Lambda, \epsilon) r_x^k
\end{equation*}
Covering control follows immediately from the statement of Corollary \ref{covering cor}. Similarly,
from hypothesis $(C_{i-1}),$ we have that $\sup_{p \in B_{2r_x}(x)} N_{\Omega}(p, 2r_x, u) \le E- (i-1)\eta/2.$ Thus, the statement of Corollary \ref{covering cor} at scale $B_{r_x}(x)$ gives that $\sup_{p \in B_{2r_y}(y)} N_{\Omega}(p, 2r_y, u) \le E- i\eta/2$ for all $y \in \mathcal{U}_{i, x}$ with $r_y > R$.
Radius control follows immediately from the fact that $\sup_{y \in \mathcal{U}_{i, x}} r_y \le r_x /10 \le 10^{-i}.$
Thus, if we let
\begin{equation*}
\mathcal{U}_i = \{x \in \mathcal{U}_{i-1} | r_x = R \} \cup \bigcup_{\substack{x \in \mathcal{U}_{i-1}\\
r_x > R}} \mathcal{U}_{i, x}
\end{equation*}
then $\mathcal{U}_i$ satisfies the inductive claim. This completes the proof.
\end{proof}
\begin{rmk}
To obtain the statement claimed in Theorem \ref{T: main theorem 1} and its corollaries, we repeat Theorem \ref{other theorem} within $c(n)$ balls, $\{B_{\frac{1}{512}}(p)\}$, which cover $B_{\frac{1}{16}}(0) \cap \overline{\Omega}.$
\end{rmk}
\section{Rectifiability}\label{S: rectifiability}
In this section, we prove Corollary \ref{rectifiability}. Recall that $\mathcal{S}^k = \cup_{0<\epsilon}\mathcal{S}^k_{\epsilon}$ and that $\mathcal{S}^k_{\epsilon} \subset \mathcal{S}^k_{\epsilon'}$ if $\epsilon' < \epsilon.$ Therefore, we shall show that $\mathcal{S}^k_{\epsilon}$ is countably $k$-rectifiable for every $0<\epsilon$. In fact, we will further simplify our problem by considering the sets
$$
U_v = \mathcal{S}^k_{\epsilon} \cap \{x \in \overline{\Omega} : N_{\Omega}(x, 0, u) \ge v \}.
$$
\begin{proof}
Note that the calculation in Lemma \ref{packing lem}, applied to $U_v \cap B_{r}(p)$, shows that for $0< \eta(n, \Lambda, \epsilon)$ sufficiently small, $U_v \cap B_{r}(p)$ satisfies the hypotheses of the Rectifiable Reifenberg Theorem (Theorem \ref{rect reif}). The Rectifiable Reifenberg Theorem therefore implies that $U_v \cap B_r(p)$ is countably $k$-rectifiable.
Now, by a Vitali covering argument, we can cover $U_{v} \setminus U_{v+ \eta}$ by countably many balls, centered on $U_{v} \setminus U_{v+ \eta}$, such that the hypotheses of the Rectifiable Reifenberg Theorem hold. Thus, $U_{v} \setminus U_{v+ \eta}$ is countably $k$-rectifiable. Since $v\in \mathbb{R}$ was arbitrary and rectifiability is stable under countable unions, $\mathcal{S}^k_{\epsilon}= \bigcup_{i \in \mathbb{N}} \left( U_{i \eta} \setminus U_{(i+1)\eta} \right)$ and $\mathcal{S}^k$ are both countably $k$-rectifiable.
\end{proof}
\section{$\epsilon$-regularity for flat points, $Q \in B_{\frac{1}{16}}(0) \cap \partial \Omega$}
\begin{lem}\label{e-reg continuity}
For all $\delta > 0$ there exists an $\epsilon > 0$ such that if $u \in \mathcal{A}(n, \Lambda)$, $Q \in B_{\frac{1}{16}}(0) \cap \partial \Omega$, $0< r \le \frac{1}{512}$, and $u$ is $(n-1, \epsilon)$-symmetric in $B_r(Q),$ then
\begin{align*}
|N_{\Omega}(Q, r, u) - 1| < \delta
\end{align*}
\end{lem}
\begin{proof}
We argue by contradiction. Suppose that there were a sequence of functions, $u_i$, each $(n-1, 2^{-i})$-symmetric in $B_{r_i}(p_i)$, but satisfying
\begin{align*}
|N_{\Omega_i}(p_i, r_i, u_i) - 1| \ge \delta
\end{align*}
By Lemma \ref{compactness} and Lemma \ref{strong convergence} we may extract a subsequence such that $T_{p_i, r_i}\partial \Omega_i \rightarrow \partial \Omega_{\infty}$ in the Hausdorff metric and $T_{p_i, r_i}u_i \rightarrow u_{\infty}$ in $W^{1, 2}(B_2(0))$. As noted earlier, this implies that $T_{p_i, r_i}u_i \rightarrow u_{\infty}$ in $C^{\infty}(B_1(0) \setminus B_{\sigma}(\partial \Omega_{\infty}))$ for every $\sigma > 0$.
Note that $u_{\infty}$ is $(n-1)$-symmetric, and therefore $N_{\Omega_{\infty}}(0, 1, u_{\infty}) = 1$. By Corollary \ref{N continuity}, we have that for $i$ sufficiently large, $|N_{\Omega_i}(p_i, r_i, u_i) - 1| < \delta$, which contradicts our assumption.
\end{proof}
\begin{lem}\label{e-reg containment}
There exists an $\epsilon > 0$ such that,
\begin{align*}
\left(\{Q \in \partial \Omega: |\nabla u| = 0\} \setminus sing(\partial \Omega)\right) \cap B_{\frac{1}{16}}(0) \subset \mathcal{S}^{n-2}_{\epsilon}(u).
\end{align*}
\end{lem}
\begin{proof}
Note that $N_{\Omega}(Q, r, u)$ is monotonically non-decreasing in $r$ for all $Q \in \partial \Omega.$ First, we claim that if $Q \in \{Q \in \partial \Omega: |\frac{\partial u}{\partial \eta}| = 0 \}$ then $N_{\Omega}(Q, 0, u) \ge 2$. This follows from the fact that the normal to $\partial \Omega$ exists at $Q$, and so $T_{Q, r_i}\Omega \rightarrow \mathbb{H}_Q$ for any $r_i \rightarrow 0$, where $\mathbb{H}_Q$ is the unique supporting half-space for $\Omega$ at $Q$. Therefore, we can reflect the blow-up $T_{Q, 0}u$ across $\partial \mathbb{H}_Q$ to obtain an entire homogeneous harmonic function. It is a standard exercise to show that this must be a homogeneous harmonic polynomial, $P$; see Appendix D: Blow-ups. Since by assumption $|\nabla P| = 0$ at $0$, it cannot be linear, and so $N_{\mathbb{R}^n}(0, r, P) = N_{\Omega}(Q, 0, u) \ge 2.$
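(As a model illustration of this mechanism, not taken from the argument itself: with $\Omega = \{x_n > 0\}$, $Q = 0$, and $u(x) = x_1 x_n$, the function $u$ is harmonic, vanishes on $\partial \Omega$, and satisfies $\nabla u(0) = 0$; its odd reflection across $\{x_n = 0\}$ is the homogeneous harmonic polynomial $P(x) = x_1 x_n$ of degree $2$, for which
$$
N_{\mathbb{R}^n}(0, r, P) \equiv 2 \qquad \text{for all } r > 0,
$$
consistent with the claim that $N_{\Omega}(Q, 0, u) \ge 2$.)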
Therefore, monotonicity implies that $N_{\Omega}(Q, r, u) \ge 2$ for all $0< r $. Thus, for $\epsilon= \epsilon(\frac{1}{2})>0$ the constant guaranteed by Lemma \ref{e-reg continuity} applied with $\delta = \frac{1}{2}$, if there existed a scale, $0 < r$, such that $u$ is $(n-1, \epsilon)$-symmetric in $B_r(Q)$, then $N_{\Omega}(Q, r, u) = N_{T_{Q, r}\Omega}(0, 1, T_{Q, r}u) < 2$. Since this cannot happen, we have the desired containment.
\end{proof}
\section{Appendix A: H\"older Continuity}\label{S: Holder continuity}
In this section, we provide a proof of Lemma \ref{L: unif Holder bound 1}. First, some standard results.
\begin{definition}
A bounded domain, $\Omega \subset \mathbb{R}^n$, is said to be of class $S$ if there exist numbers $0 < c_0 \le 1$ and $0< r_0$ such that for all $Q \in \partial \Omega$ and all $0< r\le r_0$,
\begin{align*}
\mathcal{H}^{n}(B_r(Q) \cap \Omega^c) \ge c_0 \mathcal{H}^n(B_{r}(Q)).
\end{align*}
\end{definition}
\begin{lem}(Bounding the Supremum, \cite{Kenig94} Lemma 1.1.22)\label{sup bound}
Let $\Omega$ be a domain of class $S$. Let $Q \in \partial \Omega \cap B_1(0)$ and $0 < r \le \frac{1}{2}$. Let $u$ be a function which is harmonic in $\Omega$ such that $u \in C(\overline{B_{2r}(Q) \cap \Omega})$ and $u \equiv 0$ on $B_{2r}(Q) \cap \partial \Omega$. There exists a $c(n)$ such that
\begin{align*}
\max_{B_r(Q) \cap \Omega}|u| \le c(n)\left( \fint_{B_{2r}(Q) \cap \Omega} u^2 dx \right)^{\frac{1}{2}}.
\end{align*}
\end{lem}
\begin{lem}(H\"older continuity up to the Boundary, \cite{Kenig94} Corollary 1.1.24)\label{holder boundary}
Let $\Omega$ be a domain of class $S$. Let $Q \in \partial \Omega \cap B_1(0)$ and $0 < r \le \frac{1}{2}$. Let $u$ be a function which is harmonic in $\Omega$, $u \in C(\overline{B_{2r}(Q) \cap \Omega})$, $u \equiv 0$ on $B_{2r}(Q) \cap \partial \Omega$, and $u \ge 0$. There exists a $c(n)$ and an exponent $0 < \alpha (n) \le 1$ such that for any $p \in B_r(Q) \cap \Omega$,
\begin{align*}
u(p) \le c(n) \left(\frac{|p - Q|}{r}\right)^{\alpha}\left( \sup \{ u(y) : y \in B_{2r}(Q) \} \right).
\end{align*}
\end{lem}
\begin{thm}(Oscillation in the Interior, \cite{HeinonenKilpelainenMartio} Theorem 6.6)\label{T: interior osc}
Suppose that $u$ is harmonic in $\tilde \Omega$. If $0 < r< R< \infty$ are such that $B_r(x_0) \subset B_R(x_0) \subset \tilde \Omega$, then
\begin{align*}
osc(u, B_r(x_0)) \le 2^{\alpha}\left(\frac{r}{R}\right)^{\alpha}osc(u, B_R(x_0)),
\end{align*}
where $\alpha = \alpha(n) \in (0, 1]$ only depends on $n$.
\end{thm}
\begin{thm}(H\"older Continuity in $B_2(0)$, \cite{HeinonenKilpelainenMartio} Theorem 6.44) \label{interior holder}
Suppose that $\Omega_1$ is of class $S$ with constant $c_0>0$ and $0< r_0 \le 1.$ Let $h \in C^0(\overline{\Omega_1})$ be a harmonic function in $\Omega_1$. If there are constants $M \ge 0$ and $0 < \alpha \le 1$ such that
\begin{align*}
|h(x) - h(y)| \le M |x -y|^{\alpha}
\end{align*}
for all $x, y \in \partial \Omega_1,$ then
\begin{align*}
|h(x) - h(y)| \le M_1 |x -y|^{\gamma}
\end{align*}
for all $x, y \in \overline{\Omega_1}$. Moreover, $\gamma = \gamma(n, \alpha, c_0)>0$ and one can choose $M_1 = 80 Mr_0^{-2}\max\{1, \mathrm{diam}(\Omega_1)^2\}$.
\end{thm}
\subsection{Proof of Lemma \ref{L: unif Holder bound 1}}
Let $u \in \mathcal{A}(n, \Lambda)$ with associated domain $\Omega \in \mathcal{D}(n)$, and let $Q_0 \in \partial \Omega \cap B_1(0)$ and $0 < r_0 \le \frac{1}{2}$. First, we claim that $(\fint_{B_2(0) \cap \Omega}(T_{Q_0, r_0}u)^2 dx)^{1/2} \le C(n, \Lambda).$ By the Poincar\'e inequality,
\begin{align*}
\fint_{B_2(0)} |T_{Q_0, r_0}u - Avg_{B_2(0)}(T_{Q_0, r_0}u)|^2 dx & \le C(n) |B_2(0)|^\frac{2}{n}(\fint_{B_2(0)} |\nabla T_{Q_0, r_0}u|^2dx)\\
& \le C(n)C(\Lambda, n)
\end{align*}
Furthermore, since $T_{Q_0, r_0}u \equiv 0$ on $(T_{Q_0, r_0}\Omega)^c$ and $\mathcal{H}^{n}((T_{Q_0, r_0}\Omega)^c \cap B_2(0)) \ge \frac{1}{10}\mathcal{H}^n(B_2(0)),$ we have that $$|Avg_{B_2(0)}(T_{Q_0, r_0}u)|^2 \frac{1}{10}\mathcal{H}^n(B_2(0)) \le C(n)C(\Lambda, n).$$ Thus, $|\fint_{B_2(0)}(T_{Q_0, r_0}u)| \le C'(n) \sqrt{C(n , \Lambda)}.$
Next, we claim that there exists a $c(n)$ and an exponent $0 < \alpha (n) \le 1$ such that, for $Q \in T_{Q_0, r_0}\partial \Omega \cap B_1(0)$ and $p \in T_{Q_0, r_0}\Omega \cap B_{\frac{1}{2}}(Q)$,
\begin{align*}
|T_{Q_0,r_0}u(p)| \le C(n, \Lambda) |p - Q|^{\alpha}.
\end{align*}
For any $T_{Q_0, r_0}u$ which changes sign in $T_{Q_0, r_0}\Omega \cap B_{1}(Q),$ we decompose $T_{Q_0, r_0}u = T_{Q_0, r_0}u^+ - T_{Q_0, r_0}u^-.$ Note that both $T_{Q_0, r_0}u^{\pm}$ are subharmonic. Let $h_{\pm}$ be the harmonic extension of $T_{Q_0, r_0}u^{\pm}$ to $B_{1}(Q) \cap T_{Q_0, r_0}\Omega$. Note that $B_{1}(Q) \cap T_{Q_0, r_0}\Omega$ is convex, and so is of class $S$. Then, by Lemma \ref{holder boundary} and the maximum principle,
\begin{align*}
h_{\pm}(p) \le c(n) |p - Q|^{\alpha}\left( \sup \{ h_{\pm}(y) : y \in \partial (B_{1}(Q) \cap T_{Q_0, r_0}\Omega) \} \right).
\end{align*}
By subharmonicity, $T_{Q_0, r_0}u^{\pm} \le h_{\pm}$. By construction, $h_{\pm} = T_{Q_0, r_0}u^{\pm}$ on $\partial (B_{1}(Q) \cap T_{Q_0, r_0}\Omega)$ and by our first claim and Lemma \ref{sup bound},
\begin{align*}
\sup \{ h_{\pm}(y) : y \in \partial (B_{1}(Q) \cap T_{Q_0, r_0}\Omega)\} \le C(n, \Lambda).
\end{align*}
Note that this gives uniform control on the oscillation in $T_{Q_0, r_0}\Omega \cap B_2(0)$. This uniform control together with Theorem \ref{T: interior osc} implies that $T_{Q_0, r_0}u$ is locally H\"older on $\partial B_1(0) \cap T_{Q_0, r_0}\Omega$.
Now, we claim that for all $x, y \in \partial(T_{Q_0, r_0}\Omega \cap B_1(0))$,
\begin{align*}
|T_{Q_0, r_0}u(x) - T_{Q_0, r_0}u(y)| \le C(n, \Lambda) |x -y|^{\alpha}
\end{align*}
We argue by cases. Suppose that $|x-y| < \max\{ dist(x, T_{Q_0, r_0}\partial \Omega), dist(y, T_{Q_0, r_0}\partial \Omega)\}$. Then, there is a ball, $B_r(z) \subset T_{Q_0, r_0}\Omega$ with $|x-y| < r \le 2 |x-y|$ which contains both $x$ and $y$. By Theorem \ref{T: interior osc} and the preceding paragraph, then we have the desired statement.
Suppose that $|x-y| \ge \max\{ dist(x, T_{Q_0, r_0}\partial \Omega), dist(y, T_{Q_0, r_0}\partial \Omega)\}$. Let $x_0, y_0 \in T_{Q_0, r_0}\partial \Omega$ be points such that $|x-x_0| = dist(x, T_{Q_0, r_0}\partial \Omega)$ and $|y-y_0| = dist(y, T_{Q_0, r_0}\partial \Omega)$. Then,
\begin{align*}
|T_{Q_0, r_0}u(x) - T_{Q_0, r_0}u(y)| & \le |T_{Q_0, r_0}u(x) - T_{Q_0, r_0}u(x_0)| + |T_{Q_0, r_0}u(y) - T_{Q_0, r_0}u(y_0)|\\
& \le C(n, \Lambda) 2^{\alpha}|x- x_0|^{\alpha} + C(n, \Lambda) 2^{\alpha}|y- y_0|^{\alpha}\\
& \le C(n, \Lambda) 2^{\alpha +1} (\max\{ dist(x, T_{Q_0, r_0}\partial \Omega), dist(y, T_{Q_0, r_0}\partial \Omega)\})^{\alpha}\\
& \le C(n, \Lambda) |x -y|^{\alpha}\\
\end{align*}
This proves the claim. To obtain uniform interior H\"older continuity on the interior of $T_{Q_0, r_0}\Omega \cap B_1(0)$, we invoke Theorem \ref{interior holder} with $\Omega_1 = T_{Q_0, r_0}\Omega \cap B_1(0)$.
\section{Appendix B: The Divergence Theorem}
\begin{lem}\label{harmonic measure}
Let $u \in \mathcal{A}(n, \Lambda)$. Then $\Delta u$ is a measure supported on $\partial \Omega$. More precisely, for all $p \in B_1(0)$ and $\mathcal{H}^1$-almost every radius $0 < r \le 1$,
\begin{align*}
\int_{B_r(p)}(u - u(p))\Delta u = -u(p) \int_{\partial \Omega \cap B_r(p)} \nabla u \cdot \vec \eta d\sigma
\end{align*}
\end{lem}
\begin{proof}
Let $\phi$ be a standard, smooth mollifier satisfying $\int \phi dx = 1$ and $supp(\phi) \subset B_1(0).$ Let $\phi_{\epsilon}(x) = \epsilon^{-n} \phi(\frac{x}{\epsilon})$, and let $u_{\epsilon} = u \star \phi_{\epsilon}$.
Now, since $\Omega \cap B_2(0)$ is convex, we have that $\partial \Omega \cap B_2(0)$ is locally a Lipschitz graph. Since $B_r(p) \cap \partial \Omega$ is pre-compact, we reduce to showing the identity on a (possibly smaller) ball such that $\partial \Omega$ can be written as a Lipschitz graph, $\partial \Omega \cap B_r(p) = \Gamma = \{(x, g(x)) \in \mathbb{R}^n: x \in B_{r'}(p') \subset \mathbb{R}^{n-1}\}$. Let $\Gamma_{\epsilon} = \{(x, g(x) + \epsilon) \in \mathbb{R}^n: x \in B_{r'}(p') \subset \mathbb{R}^{n-1}\}$ and $\Gamma_{\epsilon}^+ = \{ (x, y): y \ge g(x)+ \epsilon\ , (x, y) \in \Omega, \text{ and } x \in B_{r'}(p') \subset \mathbb{R}^{n-1}\}$. We let $Lip(g)$ be the Lipschitz constant of the function $g.$
Since $u_{\epsilon}$ is smooth, $\Delta u_{\epsilon}$ is a smooth function. Since $u$ is harmonic in $\Gamma_{\epsilon}^+$, $\Delta u_{\epsilon} = 0$ in this region. Therefore, we directly calculate via the divergence theorem.
\begin{align*}
\int_{B_r(p)} \Delta u_{\epsilon} = & \int_{(\Gamma_{-\epsilon}^+ \setminus \Gamma_{C \epsilon}^+) \cap B_r(p)}\Delta u_{\epsilon}\\
= & \int_{\Gamma_{C\epsilon} \cap B_r(p)}\nabla u_{\epsilon} \cdot \vec \eta d\sigma + \int_{\Gamma_{-\epsilon} \cap B_r(p)}\nabla u_{\epsilon} \cdot \vec \eta d\sigma\\
& + \int_{(\Gamma_{-\epsilon}^+ \setminus \Gamma_{C\epsilon}^+) \cap \partial B_r(p)}\nabla u_{\epsilon} \cdot \vec \eta d\sigma,\\
\end{align*}
where we are using the notation $\vec \eta$ for the outward unit normal. The constant, $1 \le C$, depends only upon $Lip(g)$, and is chosen so that for all $x \in B_{r'}(p') \subset \mathbb{R}^{n-1}$, $B_{\epsilon}((x, g(x) + C\epsilon)) \subset \Gamma_{\epsilon}^+ \subset \Omega.$ Since $u = 0$ in $\Omega^c$, we may reduce our calculation to the following.
\begin{align*}
\int_{B_r(p)} \Delta u_{\epsilon} = & \int_{\Gamma_{C\epsilon} \cap B_r(p)}\nabla u_{\epsilon} \cdot \vec \eta d\sigma\\
& + \int_{\Gamma_{-\epsilon}^+ \setminus \Gamma_{C\epsilon}^+ \cap \partial B_r(p)}\nabla u_{\epsilon} \cdot \vec \eta d\sigma\\
\end{align*}
Now, we let $\epsilon \rightarrow 0$. Since $\partial \Omega$ is locally Lipschitz, $\Gamma_{\epsilon} \cap B_r(p) \rightarrow \Gamma \cap B_r(p) = \partial \Omega \cap B_r(p)$ in the Hausdorff metric and by construction $\vec \eta_{\Gamma_{\epsilon}}((x, g(x) + \epsilon)) = \vec \eta_{\partial \Omega}((x, g(x)))$ for all $x \in B_{r'}(p')$ for which the normal exists. Furthermore, the constant $C$ has been chosen such that $\nabla u_{\epsilon} = \nabla u$ on $\Gamma_{C \epsilon}.$ Therefore, the only thing that remains to show is that,
\begin{align*}
\lim_{\epsilon \rightarrow 0} \int_{\Gamma_{-\epsilon}^+ \setminus \Gamma_{C\epsilon}^+ \cap \partial B_r(p)}\nabla u_{\epsilon} \cdot \vec \eta d\sigma = 0.
\end{align*}
However, since $T_{0, 1}u$ is uniformly Lipschitz, $u$ is Lipschitz, though we have sacrificed uniformity across the class $\mathcal{A}(n, \Lambda).$ Thus, $|\nabla u_{\epsilon}|$ is bounded. On the other hand, because $\Gamma$ is a Lipschitz graph, $\mathcal{H}^{n-1}(\partial B_{r}(p) \cap (\Gamma_{-\epsilon}^+ \setminus \Gamma_{C \epsilon}^+)) \rightarrow 0$ as $\epsilon \rightarrow 0$ for every $p$ and $\mathcal{H}^{1}$-almost every $r$. Thus, choosing such $r$ in our finite cover of $\partial \Omega \cap B_2(0)$ gives the desired result.
\end{proof}
\section{Appendix C: The proof of Lemma \ref{unif minkowski fudge balls}}
\begin{lem}\label{normal not too bad}
Let $y \in B_1(0)$ and $r \in [2, 7].$ Then, there exists a constant, $0< c < 1,$ such that if $x \in \partial B_r(y)$ and $\vec \eta_{y, r} (x)$ is the unit outward normal to $\partial B_r(y)$ at $x$, then
\begin{align*}
\frac{x}{|x|} \cdot \vec \eta_{y, r}(x) \ge 1-c.
\end{align*}
\end{lem}
\begin{proof}
Consider the function, $f: \left(\overline{B_{9}(0)} \setminus B_1(0)\right) \times \overline{B_1(0)} \cap \{(x, y) : |x -y| \ge 2\} \rightarrow \mathbb{R}$, given by
\begin{align*}
f(x, y) = |x| - \langle \frac{x}{|x|} , y \rangle.
\end{align*}
Note that the domain of $f$ is the intersection of two compact sets and is therefore compact. Furthermore, $f$ is continuous in $x$ and $y$. We argue that $f \not = 0$. Because $|x| \ge 1$ and $|y| \le 1$, the only way that $ |x| = \langle \frac{x}{|x|} , y \rangle$ is if $|x| = |y| = 1$ and $y = x$. However, $y = x$ is not in the domain in question. Since $f$ is a continuous function on a compact domain which never vanishes, there exists a constant $\tilde c > 0$ such that $f(x, y) \ge \tilde c$. Now we note that $\frac{x}{|x|} \cdot \vec \eta_{y, r}(x) = \langle \frac{x}{|x|}, \frac{x - y}{|x - y|} \rangle = \frac{1}{|x-y|} f(x, y) \ge \frac{1}{10} \tilde c = 1 -c$.
\end{proof}
\begin{lem}\label{phi map}
Let $y \in B_1(0)$ and $r \in [2, 7].$ For any cone, $C$, over the origin and any $x \in \partial B_r(y)$ there exists a $0<\rho_0\ll 1$ and a bi-Lipschitz map, $\phi: B_{\rho_0}(x) \rightarrow \mathbb{R}^n$, with the following properties:
\begin{enumerate}
\item $\phi(\partial B_r(y) \cap B_{\rho_0}(x)) \subset \mathbb{R}^{n-1} \times \{0\}.$
\item There exists an $0< \rho_1 \le \rho_0$ such that
$$
\phi(C \cap \partial B_{r}(y) \cap B_{\frac{\rho_0}{2}}(x)) \times (-\rho_1, \rho_1) \subset \phi(C \cap B_{\rho_0}(x)).
$$
\end{enumerate}
\end{lem}
\begin{proof}
Let $\psi:\mathbb{R}^n \setminus \{0\} \rightarrow \mathbb{S}^{n-1} \times \mathbb{R}$ be the map which changes Cartesian coordinates to polar coordinates. Since we are restricting this map to $B_{\rho_0}(x)$, which is away from the origin, $\psi |_{B_{\rho_0}(x)}$ is a diffeomorphism. Now, by Lemma \ref{normal not too bad} the set $\psi(\partial B_r(y) \cap B_{\rho_0}(x)) = \mathrm{graph}_{\mathbb{S}^{n-1}}(g)$ for some Lipschitz function, $g$, with Lipschitz coefficient bounded by $1- c$.
Now, we claim that $\phi = (Id_{\theta}, Id_r - g) \circ \psi$ is the desired map. We reduce to showing that $|\nabla(Id_r - g)| \not = 0.$ Since $g$ is Lipschitz, for all $z \in \partial B_r(y) \cap B_{\rho_0}(x)$ and all $v \in T_z\partial B_r(y)$ with $|v| = 1$, the directional derivatives satisfy $\partial_v g = v \cdot \frac{z}{|z|} \le c < 1.$ Thus, $\phi$ is bi-Lipschitz. That $\phi$ satisfies $(1)$ is immediate from the construction.
To see $(2)$, we note that $\overline{C \cap \partial B_r(y) \cap B_{\frac{\rho_0}{2}}(x)}$ is compact, and therefore,
$$
\rho_1 := \min_{z \in \phi(\overline{C \cap \partial B_r(y) \cap B_{\frac{\rho_0}{2}}(x)})} \mathrm{dist}\left(z, \phi\left(\partial B_{\frac{3}{4}\rho_0}(x)\right)\right)
$$
exists and is positive.
\end{proof}
\begin{lem}\label{upper mink}
Let $u \in \mathcal{A}(n, \Lambda)$, $p_0 \in B_{\frac{1}{32}}(0) \cap \overline{\Omega}$ and $0< r_0< \frac{1}{512}$. Suppose that $T_{p_0, r_0}u$ is $(0, 0)$-symmetric in $B_8(0)$ and satisfies $N_{T_{p_0, r_0}\Omega}(0, 2, T_{p_0, r_0}u) \le C(n, \Lambda).$ Then, for all $y \in \overline{T_{p_0, r_0}\partial \Omega} \cap B_1(0)$ and all $r \in [2, 7],$
\begin{align*}
\mathcal{M}^{*, n-1}(\partial B_r(y) \cap T_{p_0, r_0}\partial \Omega) = 0.
\end{align*}
\end{lem}
\begin{proof}
By Lemma \ref{(0, 0)-symmetric functions}, there are only two possibilities: $T_{p_0, r_0}\partial \Omega$ is either a convex cone or an affine hyperplane. In the latter case, $T_{p_0, r_0}\partial \Omega \cap \partial B_{r}(y)$ is the boundary of a spherical cap. This is a smooth $(n-2)$-submanifold. Therefore, the claim holds. In the former case, we argue by contradiction. Suppose that there is a sphere $\partial B_r(y)$ and a $T_{p_0, r_0}\partial \Omega$ for which $\mathcal{M}^{*, n-1}(\partial B_r(y) \cap T_{p_0, r_0}\partial \Omega) = c > 0$. By the finite additivity of upper Minkowski content, there exist a point, $x \in \partial B_r(y)$, and a ball, $B_{\rho_0}(x)$, for which,
\begin{align*}
\mathcal{M}^{*, n-1}(\partial B_r(y) \cap T_{p_0, r_0}\partial \Omega \cap B_{\rho_0}(x)) > 0.
\end{align*}
Using the map, $\phi$, as in Lemma \ref{phi map}, there is a $0< \rho_1$ such that
\begin{align*}
\mathcal{M}^{*, n-1}(\partial B_r(y) \cap T_{p_0, r_0}\partial \Omega \cap B_{\frac{\rho_0}{2}}(x)) > 0
\end{align*}
and
\begin{align*}
\phi(\partial B_r(y) \cap T_{p_0, r_0}\partial \Omega \cap B_{\frac{\rho_0}{2}}(x)) \times (-\rho_1, \rho_1) \subset \phi(T_{p_0, r_0}\partial \Omega \cap B_{\rho_0}(x)).
\end{align*}
Thus, since $\phi$ is bi-Lipschitz, $T_{p_0, r_0}\partial \Omega$ must have $\overline{dim}_{\mathcal{M}}(T_{p_0, r_0}\partial \Omega) \ge n$. However, this contradicts our assumption that $T_{p_0, r_0}\partial \Omega$ was a convex cone. That is, $T_{p_0, r_0}\partial \Omega$ is locally the graph of a Lipschitz function and therefore must have $\overline{dim}_{\mathcal{M}}(T_{p_0, r_0}\partial \Omega) = n-1$.
\end{proof}
\begin{lem}\label{nbhd to balls}
Let $u \in \mathcal{A}(n, \Lambda)$, $p_0 \in B_{\frac{1}{32}}(0) \cap \partial \Omega$ and $0< r_0< \frac{1}{512}$. Suppose that $T_{p_0, r_0}u$ is $(0, 0)$-symmetric in $B_8(0)$ and satisfies $N_{T_{p_0, r_0}\Omega}(0, 2, T_{p_0, r_0}u) \le C(n, \Lambda).$ Then, for all $y \in \overline{T_{p_0, r_0}\partial \Omega} \cap B_1(0)$, all $r \in [2, 7],$ and every $0< \rho \ll 1$ small enough,
\begin{align*}
\partial B_r(y) \cap B_{\rho}(T_{p_0, r_0}\partial \Omega) \subset \partial B_r(y) \cap B_{(1 + C)\rho}(\partial B_r(y) \cap T_{p_0, r_0}\partial \Omega)
\end{align*}
where $C$ only depends upon the constant $c$ in Lemma \ref{normal not too bad}.
\end{lem}
\begin{proof}
Let $X \in \partial B_r(y) \cap B_{\rho}(T_{p_0, r_0}\partial \Omega)$. Then, there is a point, $S \in T_{p_0, r_0}\partial \Omega$ such that $X \in B_{\rho}(S)$. Note that because $B_{\rho}(X)$ is convex and $T_{p_0, r_0}\partial \Omega$ is a cone over $\{ 0\}$, we may take $S \in B_r(y)$. Since $cS \in T_{p_0, r_0}\partial \Omega$ for all scalars $0< c$, there exists a $cS \in \partial B_r(y) \cap T_{p_0, r_0}\partial \Omega$.
We now argue that for all $0< \rho$ sufficiently small, there is a constant, $C$, such that $|S - cS| \le C \rho$, where $C$ only depends upon the constant $c$ in Lemma \ref{normal not too bad}. If such a constant exists for all $\rho$ sufficiently small, then $X \in \partial B_r(y) \cap B_{(1 + C)\rho}(cS),$ which is the desired result.
First, note that by convexity, $|S -cS| \le |S - T|$ where $T = c'S \in T_X(\partial B_r(y)) + X.$ See Figure \ref{fig 1}, below, for an illustration. Therefore, we reduce to estimating $|S -T|$. Next, note that if $\frac{X}{|X|} \cdot \vec \eta_{y, r}(X) \ge 1-c$ as in Lemma \ref{normal not too bad}, then,
$$\max _{v \in T_X(\partial B_r(y)), \, |v| = 1} \{v \cdot \frac{X}{|X|}\} \le c < 1,$$
where $c$ is the same constant as in Lemma \ref{normal not too bad}.
Now, let $\theta(\rho') = \max \{ |\frac{X}{|X|} - \frac{z}{|z|}| : z \in B_{\rho'}(X)\}$. Note that by containment, $\theta(\rho') \rightarrow 0$ monotonically as $\rho' \rightarrow 0$. This quantity gives an upper bound on the ``visual radius" of $B_{\rho'}(X)$ since $|X| >1$. Therefore, there is a $\rho'(c)$ with $\theta(\rho'(c)) \le \frac{1-c}{2}$, such that for all $\rho \le \rho'(c)$, all $s \in B_{\rho}(X)$, and all $v \in T_X(\partial B_r(y))$ with $|v| = 1$,
\begin{align*}
\frac{s}{|s|} \cdot v & \le \frac{X}{|X|} \cdot v + \Big|\frac{s}{|s|} - \frac{X}{|X|}\Big|\\
& \le c + \theta(\rho')\\
& \le \frac{1 - c}{2} + c < 1.
\end{align*}
Therefore, in the triangle $\Delta (S, X, T)$, we have that,
$$|S - X| \le \rho,$$
and
$$\angle \left( \overline{X T}, \overline{ST} \right) \ge \cos^{-1} \left(\frac{1 - c}{2} + c\right).$$
We use the Law of Sines,
\begin{align*}
\frac{|S - T|}{\sin(\angle \left(\overline{XT}, \overline{XS}\right))} & = \frac{|X - S|}{\sin(\angle \left(\overline{XT}, \overline{ST}\right))}\\
& \le \frac{1}{\sqrt{1-(\frac{1 - c}{2} + c)^2}} \rho.
\end{align*}
Since $\sin\left(\angle \left(\overline{XT}, \overline{XS}\right)\right) \le 1$, we conclude that for all $\rho \le \rho'(c),$
\begin{align*}
|S - T| & \le \frac{1}{\sqrt{1-(\frac{1 - c}{2} + c)^2}}\, \rho \le C(c) \rho.
\end{align*}
\end{proof}
\begin{figure}
\caption{An illustration of the geometry in Lemma \ref{nbhd to balls}.}
\label{fig 1}
\end{figure}
\begin{lem}\label{ball containment fudge}
Let $u \in \mathcal{A}(n, \Lambda)$, $p_0 \in B_{\frac{1}{32}}(0) \cap \partial \Omega$ and $0< r_0< \frac{1}{512}$. Suppose that $T_{p_0, r_0}u$ is $(0, 0)$-symmetric in $B_8(0)$ and satisfies $N_{T_{p_0, r_0}\Omega}(0, 2, T_{p_0, r_0}u) \le C(n, \Lambda).$ Then, for all $y \in \overline{T_{p_0, r_0}\partial \Omega} \cap B_1(0)$, all $r \in [2, 7],$ and every $0< \rho \ll 1$ small enough, there is an $\epsilon(\rho)> 0$ such that if $|y - y'| < \epsilon$ and $|r - r'| < \epsilon$, then
\begin{align*}
\mathcal{H}^{n-1}(\partial B_{r'}(y') \cap B_{\rho}(T_{p_0, r_0}\partial \Omega)) \le C(n) \mathcal{H}^{n-1}(\partial B_{r}(y) \cap B_{\rho}(T_{p_0, r_0}\partial \Omega)).
\end{align*}
\end{lem}
\begin{proof}
Note that $\partial B_{r}(y) \cap B_{\rho}(T_{p_0, r_0}\partial \Omega)$ is relatively open in $\partial B_{r}(y)$. There is a finite collection of balls, $\{B_{2\rho}(x_i)\}_{i \in I},$ with centers, $x_i \in \partial B_{r}(y) \cap T_{p_0, r_0}\partial \Omega,$ such that the collection $\{B_{\rho}(x_i)\}_{i \in I}$ is pairwise disjoint, and for all $0< \rho$ small enough,
\begin{align}\label{fudge balls cover}
\partial B_{r}(y) \cap B_{2\rho}(T_{p_0, r_0}\partial \Omega) \subset \bigcup_{i \in I} B_{(1 + C)2\rho}(x_i)
\end{align}
by Lemma \ref{nbhd to balls}.
Furthermore, by taking $0< \rho \ll 1$ small enough, and $0<\epsilon$ small enough with respect to $\rho$, we may assume that
\begin{align*}
\mathcal{H}^{n-1}(\partial B_{r'}(y') \cap B_{(1 + C)2\rho}(x_i)) & \le C((1 + C)2\rho)^{n-1}\\
\sum_{i \in I} (\rho)^{n-1} & \le C(n) \mathcal{H}^{n-1}\left(\partial B_r(y) \cap B_{\rho}(T_{p_0, r_0}\partial \Omega)\right).
\end{align*}
Note that if $|y - y'| < \epsilon$ and $|r - r'| < \epsilon$, then $dist_{\mathcal{H}}(\partial B_{r'}(y'), \partial B_r(y)) \le 2\epsilon.$ Therefore, $dist_{\mathcal{H}}(T_{p_0, r_0}\partial \Omega \cap \partial B_{r'}(y'), T_{p_0, r_0}\partial \Omega \cap \partial B_r(y)) \le 2 \epsilon$. Therefore, by taking $\epsilon \le \frac{1}{2}\rho$, we have,
\begin{align*}
\partial B_{r'}(y') \cap B_{\rho}(T_{p_0, r_0}\partial \Omega) \subset \partial B_{r}(y) \cap B_{2\rho}(T_{p_0, r_0}\partial \Omega) \subset \bigcup_{i \in I} B_{(1 + C)2\rho}(x_i).
\end{align*}
Therefore,
\begin{align*}
\mathcal{H}^{n-1}(\partial B_{r'}(y') \cap B_{\rho}(T_{p_0, r_0}\partial \Omega)) & \le \mathcal{H}^{n-1}(\partial B_{r'}(y') \cap \bigcup_{i \in I} B_{(1+ C)2\rho}(x_i))\\
& \le C ((1+C)2)^{n-1}\sum_{i\in I} \rho^{n-1} \\
& \le C(n) \mathcal{H}^{n-1}(\partial B_{r}(y) \cap B_{\rho}(T_{p_0, r_0}\partial \Omega)).
\end{align*}
\end{proof}
\begin{lem}\label{expansion}
For any $A \subset \mathbb{R}^n$, $y \in \mathbb{R}^n$, and $0< r,$ if $0 < r_1 < r_2,$ then
$$
\mathcal{H}^{n-1}(\partial B_{r}(y) \cap B_{r_1}(A)) \le \mathcal{H}^{n-1}(\partial B_{r}(y) \cap B_{r_2}(A))
$$
\end{lem}
This is simply a result of containment and the monotonicity of measures.
\begin{lem}\label{weak convergence}
Let $U$ be an open set. Let $y_i \in \partial B_1(0)$ and $r_i \in [2, 7]$ be such that $y_i \rightarrow y$ and $r_i \rightarrow r.$ Then, $\lim \mathcal{H}^{n-1}(\partial B_{r_i}(y_i) \cap U) = \mathcal{H}^{n-1}(\partial B_r(y) \cap U).$
\end{lem}
This is by weak convergence of the Radon measures $\mu_i = \mathcal{H}^{n-1}\restr \partial B_{r_i}(y_i) \cap U.$
\subsection{Proof of Lemma \ref{unif minkowski fudge balls}}
\begin{proof}
We argue by cases. By Lemma \ref{(0, 0)-symmetric functions}, there are only two possibilities: $T_{p_0, r_0}\partial \Omega$ is either a convex cone or an affine hyperplane. In the latter case, $T_{p_0, r_0}\partial \Omega \cap \partial B_{r}(y)$ is the boundary of a spherical cap. For this case, suppose that there were a sequence $u_i \in \mathcal{A}(n, \Lambda)$, points $p_i \in B_{\frac{1}{32}}(0) \cap \partial \Omega_i$, and scales $0< r_i< \frac{1}{512}$ such that each $T_{p_i, r_i}u_i$ is $(n-1, 0)$-symmetric in $B_8(0)$, together with centers $y_i \in \overline{T_{p_i, r_i} \Omega_i} \cap B_1(0)$ and radii $s_i \in [2, 7]$ for which $\mathcal{H}^{n-1}(\partial B_{s_i}(y_i) \cap B_{2^{-i}}(T_{p_i, r_i}\partial \Omega_i)) \ge c > 0.$
Let $y, s$ be a subsequential limit center and radius, respectively. Since the set of hyperplanes which intersect $\partial B_{s}(y)$ is compact, we may take a further subsequence such that $T_{p_i, r_i}\partial \Omega_i$ converges to a hyperplane $\partial \Omega_{\infty}$ which intersects $\partial B_{s}(y)$. Note that $\partial \Omega_{\infty} \cap \partial B_{s}(y)$ is either a point or the boundary of a spherical cap. Therefore, it satisfies $\mathcal{M}^{*, n-1}(\partial \Omega_{\infty} \cap \partial B_{s}(y)) = 0$. Therefore, there is an $\epsilon>0$ such that $\mathcal{H}^{n-1}(B_{\epsilon}(\partial \Omega_{\infty}) \cap \partial B_{s}(y)) \le \frac{c}{2}.$ By the nature of the convergence of $T_{p_i, r_i}\partial \Omega_i$ in the Hausdorff metric on compact subsets and Lemma \ref{weak convergence}, then for large enough $i$,
\begin{align*}
\partial B_{s_i}(y_i) \cap B_{2^{-i}}(T_{p_i, r_i}\partial \Omega_i) \subset \partial B_{s_i}(y_i) \cap B_{\epsilon}(\partial \Omega_{\infty}).
\end{align*}
Therefore,
\begin{align*}
\mathcal{H}^{n-1}( \partial B_{s_i}(y_i) \cap B_{2^{-i}}(T_{p_i, r_i}\partial \Omega_i) ) & \le \mathcal{H}^{n-1}( \partial B_{s_i}(y_i) \cap B_{\epsilon}(\partial \Omega_{\infty}))\\
& \rightarrow \mathcal{H}^{n-1}(\partial B_{s}(y) \cap B_{\epsilon}(\partial \Omega_{\infty}))\\
& \le \frac{c}{2}.
\end{align*}
This contradicts our assumption.
Now, suppose instead that there were a sequence $u_i \in \mathcal{A}(n, \Lambda)$, points $p_i \in B_{\frac{1}{32}}(0) \cap \partial \Omega_i$, and scales $0< r_i< \frac{1}{512}$ such that each $T_{p_i, r_i}u_i$ is $(0, 0)$-symmetric in $B_8(0)$, together with centers $y_i \in \overline{T_{p_i, r_i} \Omega_i} \cap B_1(0)$ and radii $s_i \in [2, 7]$ for which $\mathcal{H}^{n-1}(\partial B_{s_i}(y_i) \cap B_{2^{-i}}(T_{p_i, r_i}\partial \Omega_i)) \ge c > 0.$ Let $y_0$, $s_0$ be a subsequential limit point and radius, respectively. By Lemma \ref{compactness}, we may extract a subsequence such that $T_{p_i, r_i}u_i \rightarrow u_{\infty}$ and $T_{p_i, r_i}\partial \Omega_i \cap B_{10}(0) \rightarrow \partial \Omega_{\infty} \cap B_{10}(0)$ locally in the Hausdorff metric on compact subsets.
That $T_{p_i, r_i}\partial \Omega_i \cap B_{10}(0) \rightarrow \partial \Omega_{\infty} \cap B_{10}(0)$ locally in the Hausdorff metric on compact subsets implies that for any fixed $0< \tilde r$, there is an $i(\tilde r)$ such that $B_{2^{-i}}(T_{p_i, r_i}\partial \Omega_i) \cap B_{10}(0) \subset B_{\tilde r}(\partial \Omega_{\infty}) \cap B_{10}(0)$ for all $i \ge i(\tilde r)$. Therefore,
\begin{align*}
\lim_{i \rightarrow \infty} \mathcal{H}^{n-1}(\partial B_{s_i}(y_i) \cap B_{2^{-i}}(T_{p_i, r_i}\partial \Omega_i)) \le \lim_{i \rightarrow \infty} \mathcal{H}^{n-1}(\partial B_{s_i}(y_i) \cap B_{\tilde r}(\partial \Omega_{\infty})).
\end{align*}
Additionally, for any fixed $0< \tilde r \ll 1$, once $|y_i - y_0| < \epsilon(\tilde r)$ and $|s_i - s_0| < \epsilon(\tilde r)$ as in Lemma \ref{ball containment fudge}, we have that
\begin{align*}
\lim_{i \rightarrow \infty}\mathcal{H}^{n-1}(\partial B_{s_i}(y_i) \cap B_{\tilde r}(\partial \Omega_{\infty})) \le C' \mathcal{H}^{n-1}(\partial B_{s_0}(y_0) \cap B_{\tilde r}(\partial \Omega_{\infty} \cap \partial B_{s_0}(y_0))).
\end{align*}
Now, let $\tilde r$ be sufficiently small so that for any $x_i \in \partial B_{s_0}(y_0)$, we can bound $\mathcal{H}^{n-1}(\partial B_{s_0}(y_0) \cap B_{2\tilde r}(x_i)) \le 10 (2\tilde r)^{n-1}.$ Then, for $\{B_{\tilde r} (x_i)\}$ a maximal disjoint collection of balls with $x_i \in \partial \Omega_{\infty} \cap \partial B_{s_0}(y_0)$, we have
\begin{align*}
\mathcal{H}^{n-1}(\partial B_{s_0}(y_0) \cap B_{\tilde r}(\partial \Omega_{\infty} \cap \partial B_{s_0}(y_0))) & \le \mathcal{H}^{n-1}(\partial B_{s_0}(y_0) \cap \bigcup_i B_{2 \tilde r}(x_i))\\
& \le P(\partial \Omega_{\infty} \cap \partial B_{s_0}(y_0), 2\tilde r) \mathcal{H}^{n-1}(\partial B_{s_0}(y_0) \cap B_{2 \tilde r}(x_i))\\
& \le 10 \cdot 2^{n-1} P(\partial \Omega_{\infty} \cap \partial B_{s_0}(y_0), \tilde r) (\tilde r)^{n-1}.
\end{align*}
Now, by Remark \ref{mink bounds haus} and Lemma \ref{upper mink} we have that $$\limsup_{\tilde r \rightarrow 0} P(\partial \Omega_{\infty}\cap \partial B_{s_0}(y_0), \tilde r) ( \tilde r)^{n-1} = 0.$$ Thus, we can choose $0< \tilde r$ sufficiently small so that
\begin{align*}
\lim_{i \rightarrow \infty} \mathcal{H}^{n-1}(\partial B_{s_i}(y_i) \cap B_{2^{-i}}(T_{p_i, r_i}\partial \Omega_i)) & \le \lim_{i \rightarrow \infty} \mathcal{H}^{n-1}(\partial B_{s_i}(y_i) \cap B_{\tilde r}(\partial \Omega_{\infty}))\\
& \le C' \mathcal{H}^{n-1}(\partial B_{s_0}(y_0) \cap B_{\tilde r}(\partial \Omega_{\infty} \cap \partial B_{s_0}(y_0)))\\
& < c.
\end{align*}
This contradicts the assumption that $\mathcal{H}^{n-1}(\partial B_{s_i}(y_i) \cap B_{2^{-i}}(T_{p_i, r_i}\partial \Omega_i)) \ge c > 0$ for all $i$. Therefore, we have the desired result.
\end{proof}
\section{Appendix D: Blow-ups}\label{Appendix D: Blow-ups}
Blow-ups fit into two different regimes: interior and boundary points. Interior results are classical, and we summarize them, below.
\begin{lem}(Interior Blow-ups) \label{interior blow-ups}
Let $u \in \mathcal{A}(n, \Lambda)$. For any $p \in \Omega \cap B_{\frac{1}{16}}(0)$ and any sequence $r_j \rightarrow 0$, the sequence of functions $T_{p, r_j}u$ has a subsequence which converges to a function $u_{p, \infty}$ in the following senses:
\begin{enumerate}
\item $T_{p, r_i}u \rightarrow u_{p, \infty}$ in $C^{0,1}_{loc}(\mathbb{R}^n)$
\item $T_{p, r_i}u \rightarrow u_{p, \infty}$ in $W^{1,2}_{loc}(\mathbb{R}^n)$
\item $u_{p, \infty}$ is a homogeneous harmonic polynomial.
\end{enumerate}
\end{lem}
The purpose of this Appendix is to justify the claim that both $sing(\partial \Omega) \cap V$ and $\{Q \in V : Q \text{ is a flat point and } \lim_{h \rightarrow 0^+}\frac{u(Q + h \vec \eta_Q) - u(Q)}{h} = 0 \}$ are contained in $\{Q \in \partial \Omega : |\nabla u| = 0\}$. To this end, we prove some elementary properties of blow-ups at the boundary.
\begin{lem}(Boundary Blow-ups) \label{blow-ups I}
Let $u \in \mathcal{A}(n, \Lambda)$. For any $Q \in \partial \Omega \cap B_{\frac{1}{16}}(0)$ and any sequence $r_j \rightarrow 0$, the sequence of functions $T_{Q, r_j}u$ has a subsequence which converges to a function $u_{Q, \infty}$ in the following senses:
\begin{enumerate}
\item $T_{Q, r_i}u \rightarrow u_{Q, \infty}$ in $C^{0, 1}_{loc}(\mathbb{R}^n)$
\item $T_{Q, r_i}u \rightarrow u_{Q, \infty}$ in $W^{1,2}_{loc}(\mathbb{R}^n)$
\item $\partial T_{Q, r_i}\Omega \rightarrow \partial \Omega_{Q, \infty}$ locally uniformly in the sense of compact sets in the Hausdorff distance. Moreover, $\partial \Omega_{Q, \infty}$ is a cone.
\item $u_{Q, \infty}$ is harmonic in $\Omega_{Q, \infty}$.
\end{enumerate}
\end{lem}
\begin{proof} Let $1< R< \infty$ and consider $N_{T_{Q, r_i}\Omega}(0, R, T_{Q, r_i}u).$ By invariance,
$$N_{T_{Q, r_i}\Omega}(0, R, T_{Q, r_i}u) = N_{T_{Q, Rr_i}\Omega}(0, 1, T_{Q, R r_i}u).$$
Since $r_i \rightarrow 0$, for sufficiently large $i$, $Rr_i \le \frac{1}{4}.$ Hence, Lemma \ref{N bound lem} gives that $$N_{T_{Q, r_i}\Omega}(0, R, T_{Q, r_i}u) \le C(n, \Lambda).$$ Hence, we may apply Lemma \ref{compactness} and Lemma \ref{strong convergence} to the sequence $T_{Q, r_i}u$ on $B_R(0)$. These results give the existence of a subsequence which converges to a limit function, $u_{Q, \infty}$, in the above senses.
To see that $\partial \Omega_{Q, \infty}$ is a cone, let $\epsilon, R>0$ be given. Recall that the set of subgradients (supporting hyperplanes) of the convex set $\Omega$ at $Q$, $S_Q(\Omega) \subset G(n, n-1)$, is a closed set. For any two $(n-1)$-planes $V, W \in G(n, n-1)$, let $\pi(V, W)$ be the unique two-dimensional subspace spanned by their normals. Recall that $dist(V \cap \partial B_1(0), W \cap \partial B_1(0))$ realizes its maximum at points in $\pi(V, W)$.
Let $V \in G(n, n-1)$ be such that $dist_{G}(V, S_Q(\Omega)) = \frac{\epsilon}{R}$. Since $S_Q(\Omega)$ is closed, there is a $W \in S_Q(\Omega)$ which realizes this distance. We use $\pi(V, W)$ to reduce to a two-dimensional picture. Denote the point $(\pi(V, W) + Q) \cap (V + Q) \cap \partial \Omega$ by $x_V$. Note that $x_V$ might be the point at infinity for some $V$.
The collection $\{x_V \}$ is bounded away from $Q$. Assume not. Then there is a sequence of $(n-1)$-planes, $V_i$, such that $dist_{G}(V_i, S_Q(\Omega)) = \frac{\epsilon}{R}$ and $|x_{V_i} - Q| \rightarrow 0$. Since the Grassmannian is compact, there is a limiting plane, $V_\infty$, with $dist_{G}(V_\infty, S_Q(\Omega)) = \frac{\epsilon}{R}$. But $V_\infty \in S_Q(\Omega)$, since $\mathcal{H}^{n-1}(V_\infty \cap \Omega) = 0$; this is a contradiction. For $r_j < \inf_V |x_{V} - Q|$, $dist_{\mathcal{H}}(T_{Q, r_j}\partial \Omega \cap B_R(0), \cup_{V' \in S_Q(\Omega)} V' \cap B_R(0)) \le \epsilon.$ Thus, $T_{Q, r_j}\partial \Omega$ converges to a convex cone in the Hausdorff metric on compact subsets. Note that this argument also gives that $\partial \Omega_{Q, \infty}$ is a cone carved out by $S_Q(\Omega)$.
Lemma \ref{compactness 1} gives that $u_{Q, \infty}$ is harmonic in $\Omega_{Q, \infty}$.
\end{proof}
\begin{lem}\label{L: blow-ups homogeneous}
Let $u \in \mathcal{A}(n, \Lambda)$, $Q \in \partial \Omega \cap B_{\frac{1}{16}}(0)$ and $r_j \rightarrow 0$. Let the sequence of functions $T_{Q, r_j}u$ converge to the function $u_{Q, \infty}$ in the senses of Lemma \ref{blow-ups I}. Then, $N_{\Omega_{Q, \infty}}(0, r, u_{Q, \infty})$ is constant in $r$ and $u_{Q, \infty}$ is homogeneous.
\end{lem}
\begin{proof} We recall that $N_{\Omega}(Q, r, u)$ is monotonically increasing in $r$ and that $N_{\Omega}(Q, r, u) = N_{T_{Q, r}\Omega}(0, 1, T_{Q, r}u)$. Furthermore, by the nature of convergence in Corollary \ref{N continuity}, we have that
$$N_{\Omega}(Q, 0, u) = \lim_{i \rightarrow \infty} N_{T_{Q, r_i}\Omega}(0, 1, T_{Q, r_i}u) = N_{\Omega_{Q, \infty}}(0, 1, u_{Q, \infty}).$$
For any fixed radius, $s$, let $\{r_{j, k}\}$ and $\{r_{j, l}\}$ be subsequences of $\{r_j\}$ such that $r_{j, k} < sr_j < r_{j, l}$. By monotonicity, then, we have that
\begin{align*}
N_{\Omega_{Q, \infty}}(0, s, u_{Q, \infty}) & = \lim_{i \rightarrow \infty} N_{T_{Q, r_i}\Omega}(0, s, T_{Q, r_i}u)\\
& \le \lim_{i \rightarrow \infty}N_{\Omega}(Q, r_{j, l}, u)\\
& \le N_{\Omega}(Q, 0, u)
\end{align*}
Similarly,
\begin{align*}
N_{\Omega_{Q, \infty}}(0, s, u_{Q, \infty}) & = \lim_{i \rightarrow \infty} N_{T_{Q, r_i}\Omega}(0, s, T_{Q, r_i}u)\\
& \ge \lim_{i \rightarrow \infty}N_{\Omega}(Q, r_{j, k}, u)\\
& \ge N_{\Omega}(Q, 0, u)
\end{align*}
Hence $N_{\Omega_{Q, \infty}}(0, s, u_{Q, \infty}) = N_{\Omega}(Q, 0, u)$ for every $s > 0$, so the frequency of $u_{Q, \infty}$ is constant. From this constancy of the Almgren frequency, Lemma \ref{N constant gives homogeneity} implies that $u_{Q, \infty}$ is homogeneous.
\end{proof}
\begin{lem}\label{L: avg growth bound}
Let $u_{Q, \infty}$ be a blow-up, as above. For all $0< r < \infty$,
\begin{equation}
\int_{B_r(0)}|u_{Q, \infty} - Avg_{B_r(0)}(u_{Q, \infty})|^2 \le C(n, \Lambda) r^{n + 2N_{\Omega}(Q, 0, u)}
\end{equation}
and
\begin{equation}
\int_{B_r(0)}|u_{Q, \infty}|^2 \le C(n, \Lambda) r^{n + 2N_{\Omega}(Q, 0, u)}
\end{equation}
\end{lem}
\begin{proof}
By the Poincar\'e inequality, we have that for any $0< r <\infty$,
\begin{align*}
\int_{B_r(0)} |u_{Q, \infty} - Avg_{B_r(0)}(u_{Q, \infty})|^2dx & \le c(n) r^2 \int_{B_r(0)}|\nabla u_{Q, \infty}|^2
\end{align*}
Since $N_{\Omega}(Q, 0, u) = N_{\Omega_{Q, \infty}}(0, r, u_{Q, \infty})$ for all $0< r< \infty$ and by Lemma \ref{N bound lem} $N_{\Omega}(Q, 0, u) \le C(n, \Lambda)$, we have that
\begin{align*}
\int_{B_r(0)} |u_{Q, \infty} - Avg_{B_r(0)}(u_{Q, \infty})|^2dx & \le c(n) r^2 C(n, \Lambda) \frac{1}{r} H_{\Omega_{Q, \infty}}(0, r, u_{Q, \infty})\\
\end{align*}
As we have seen, convexity of $\Omega_{Q, \infty}$ implies that $|Avg_{B_r(0)}(u_{Q, \infty})|^2 \le \frac{2}{\omega_n r^n} c(n) r^2 C(n, \Lambda) \frac{1}{r} H_{\Omega_{Q, \infty}}(0, r, u_{Q, \infty})$. Thus, we also have that
\begin{align*}
\int_{B_r(0)} |u_{Q, \infty}|^2dx & \le 2\int_{B_r(0)} |u_{Q, \infty} - Avg_{B_r(0)}(u_{Q, \infty})|^2dx + 2\int_{B_r(0)} |Avg_{B_r(0)}(u_{Q, \infty})|^2dx \\
& \le c(n) r^2 C(n, \Lambda) \frac{1}{r} H_{\Omega_{Q, \infty}}(0, r, u_{Q, \infty})\\
\end{align*}
Now, since $H_{\Omega_{Q, \infty}}(0, 1, u_{Q, \infty}) = 1$, we have by Remark \ref{R: H doubling-ish}, for $r < 1$, and Lemma \ref{L: H doubling-ish}, for $r \ge 1$,
\begin{align*}
c(n) r^2 C(n, \Lambda) \frac{1}{r} H_{\Omega_{Q, \infty}}(0, r, u_{Q, \infty}) & \le c(n) r^2 C(n, \Lambda) \frac{1}{r} r^{n-1+2N_{\Omega}(Q, 0, u)}\\
& \le c(n) C(n, \Lambda) r^{n+2N_{\Omega}(Q, 0, u)}
\end{align*}
\end{proof}
\begin{cor}\label{C: hom degree N}
Let $u_{Q, \infty}$ be a blow-up, as above. $u_{Q, \infty}$ is homogeneous of degree $N_{\Omega}(Q, 0, u)$.
\end{cor}
\begin{proof} Let $u_{Q, \infty}$ be homogeneous of degree $k_0$. We calculate by rescaling:
\begin{align*}
\int_{B_r(0)} |u_{Q, \infty}|^2dx & = r^n \int_{B_1(0)} |u_{Q, \infty}(rx)|^2dx\\
& = r^n \int_{B_1(0)} |r^{k_0} u_{Q, \infty}(x)|^2dx\\
& = r^{n+ 2 k_0}\int_{B_1(0)} |u_{Q, \infty}(x)|^2dx.
\end{align*}
Therefore, by Lemma \ref{L: avg growth bound}, $k_0 = N_{\Omega}(Q, 0, u).$
\end{proof}
We now turn to the main results of this section. We show that $N_{\Omega}(Q, 0, u) \ge 1.$ To do so, we must break into cases, depending on whether $Q \in \partial \Omega$ is a flat point or not. Recall that $Q \in \partial \Omega$ is a flat point if $\partial \Omega_{Q, \infty}$ is flat.
\begin{lem}\label{L: N flat points}
Let $Q \in \partial \Omega \cap B_{\frac{1}{4}}(0)$ be a flat point. Then, $N_{\Omega}(Q, 0, u) \ge 1$ and any blow-up, $u_{Q, \infty}$, is the restriction of a homogeneous harmonic polynomial to $\Omega_{Q, \infty}$.
\end{lem}
\begin{proof} Suppose that $\partial \Omega_{Q, \infty}$ is flat. Note that while the blow-ups, $u_{Q, \infty}$, are not \textit{a priori} unique, the geometric blow-up, $\partial \Omega_{Q, \infty}$, is unique by convexity. By rotation, assume that $\partial \Omega_{Q, \infty} = \mathbb{R}^{n-1} \times \{ 0\}$. By continuity, $u_{Q, \infty}$ vanishes on $\partial \Omega_{Q, \infty}.$
We define a new function, $v_{Q, \infty},$ by odd reflection across $\partial \Omega_{Q, \infty}$. That is, writing $\mathbb{R}^n = \mathbb{R}^{n-1}\times \mathbb{R}$ and $\mathbb{R}^n_{+} = \{(x', x) : x > 0\}$, we let
\begin{align*}
v_{Q, \infty}(x', x) = \begin{cases}
u_{Q, \infty}(x', x) & (x', x) \in \overline{\mathbb{R}^n_+}\\
-u_{Q, \infty}(x', -x) & (x', x) \not \in \overline{\mathbb{R}^n_+}.
\end{cases}
\end{align*}
Note that $v_{Q, \infty}$ is harmonic in $\mathbb{H}^n_{\pm}$. Furthermore, since we have taken an odd reflection, $v_{Q, \infty}$ satisfies the mean value property on $\partial \Omega_{Q, \infty}$. Therefore, $v_{Q, \infty}$ is harmonic in $\mathbb{R}^n$.
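To spell out the mean value property claim: for $y \in \partial \Omega_{Q, \infty}$ and any $r > 0$, the reflection $(x', x) \mapsto (x', -x)$ maps $B_r(y)$ onto itself and changes the sign of $v_{Q, \infty}$, so
$$\frac{1}{|B_r(y)|}\int_{B_r(y)} v_{Q, \infty}\, dx = 0 = v_{Q, \infty}(y),$$
where the last equality holds because $u_{Q, \infty}$, and hence $v_{Q, \infty}$, vanishes on $\partial \Omega_{Q, \infty}$.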
To see that an entire homogeneous harmonic function must be a homogeneous harmonic polynomial, we use the Poisson kernel. Let $x_0 \in \partial B_1(0)$ be a point which satisfies $c = |v_{Q, \infty}(x_0)| = \max_{\partial B_1(0)} |v_{Q, \infty}|.$ For concision, we shall denote the degree of homogeneity by $d$. By the Poisson formula, for any $z \in B_R(0)$,
\begin{align*}
v_{Q, \infty}(z) = & \frac{R^2 - |z|^2}{c_n R}\int_{\partial B_R(0)} v_{Q, \infty}(\zeta) \frac{1}{|z - \zeta|^{n+1}} d\zeta
\end{align*}
Taking $k$ derivatives, we have
\begin{align*}
\partial_{\alpha_1}\partial_{\alpha_2} ... \partial_{\alpha_k}v_{Q, \infty}(z) = & \frac{R^2 - |z|^2}{c_n R}\int_{\partial B_R(0)} v_{Q, \infty}(\zeta) \partial_{\alpha_1}\partial_{\alpha_2} ... \partial_{\alpha_k}\frac{1}{|z - \zeta|^{n+1}} d\zeta\\
& \pm \sum_{j=1}^k \frac{1}{c_nR} \partial_{\alpha_j}|z|^2 \int_{\partial B_R(0)} v_{Q, \infty}(\zeta) \partial_{\alpha_1}... \hat \partial_{\alpha_j} ... \partial_{\alpha_k}\frac{1}{|z- \zeta|^{n+1}} d\zeta\\
& \pm \sum_{ i \not = j}^{k} \frac{1}{c_nR} \partial_{\alpha_i} \partial_{\alpha_j}|z|^2 \int_{\partial B_R(0)} v_{Q, \infty}(\zeta) \partial_{\alpha_1}... \hat \partial_{\alpha_i} ... \hat \partial_{\alpha_j} ... \partial_{\alpha_k}\frac{1}{|z - \zeta|^{n+1}} d\zeta
\end{align*}
Thus, for $z \in B_{\frac{1}{2} R}(0)$,
\begin{align*}
|\partial_{\alpha_1} ... \partial_{\alpha_k} v_{Q, \infty}(z)| \le & CR\int_{\partial B_R(0)} |v_{Q, \infty}(\zeta)| \frac{1}{|z - \zeta|^{n+1 + k}} d\zeta\\
& + C\int_{\partial B_R(0)} |v_{Q, \infty}(\zeta)| \frac{1}{|z - \zeta|^{n+ k}} d\zeta\\
& + C\frac{1}{R} \int_{\partial B_R(0)} |v_{Q, \infty}(\zeta)| \frac{1}{|z - \zeta|^{n+ k - 1}} d\zeta\\
\le & CR R^d|v_{Q, \infty}(x_0)| \int_{\partial B_R(0)} \frac{1}{|z - \zeta|^{n+1 + k}} d\zeta\\
& + CR^d|v_{Q, \infty}(x_0)| \int_{\partial B_R(0)} \frac{1}{|z - \zeta|^{n+ k}} d\zeta\\
& + C\frac{1}{R} R^d|v_{Q, \infty}(x_0)| \int_{\partial B_R(0)} \frac{1}{|z - \zeta|^{n+ k - 1}} d\zeta\\
\le & CR R^d|v_{Q, \infty}(x_0)| R^n \frac{1}{|\frac{1}{2}R|^{n+1 + k}}\\
& + CR^d|v_{Q, \infty}(x_0)| R^n \frac{1}{|\frac{1}{2}R|^{n+ k}}\\
& + C\frac{1}{R} R^d|v_{Q, \infty}(x_0)| R^n \frac{1}{|\frac{1}{2}R|^{n+k-1}}\\
\le & C R^{d-k} |v_{Q, \infty}(x_0)|.
\end{align*}
Thus, for $k > d$, letting $R \rightarrow \infty$, we see that all derivatives of order $k$ must be zero. Thus, $v_{Q, \infty}$ is a homogeneous harmonic polynomial, and $u_{Q, \infty}$ is the restriction of $v_{Q, \infty}$ to $\Omega_{Q, \infty}$.
Since there is no non-trivial homogeneous harmonic polynomial of degree less than $1$, the other claim follows.
\end{proof}
Note that Lemma \ref{L: N flat points} implies that if $Q \in \partial \Omega \cap B_{\frac{1}{4}}(0)$ and $Q$ is a flat point such that,
\begin{align*}
\lim_{h \rightarrow 0^+}\frac{u(Q + h\vec \eta) - u(Q)}{h} = 0,
\end{align*}
then $u_{Q, \infty}$ is the restriction of a homogeneous harmonic polynomial of order $\ge 2.$ Thus, $Q \in \mathcal{S}^{n-2}(u).$
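As a model example of this behavior (an illustration only, not needed in the sequel): with $\partial \Omega_{Q, \infty} = \mathbb{R}^{n-1} \times \{0\}$, the polynomial
$$p(x', x) = x_1\, x$$
is harmonic, homogeneous of degree $2$, vanishes on $\partial \Omega_{Q, \infty}$, and has normal derivative $\partial_x p(x', 0) = x_1$ vanishing at the origin, in accordance with the statement above.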
\begin{lem}\label{trough lem}
Let $\Omega$ be a convex cone over the origin. If $\Omega$ is not an open half-space, then there exists a convex cone $\mathcal{C} = \mathbb{R}^{n-2} \times \mathcal{C}'$ such that $\Omega \subset \mathcal{C}$ and $\mathcal{C}'$ is a convex cone in $\mathbb{R}^2$ which is not a half-plane.
\end{lem}
\begin{proof}
If $\Omega$ is a convex cone, consider the set of supporting hyperplanes at the origin, $\{L_{\alpha}\}_{\alpha}$. If $|\{L_{\alpha}\}_{\alpha}| = 1,$ then $\Omega$ is an open half-space. If $|\{L_{\alpha}\}_{\alpha}| > 1$, pick two distinct supporting hyperplanes; $\Omega$ is contained in the intersection of the corresponding half-spaces. After a rotation, this intersection has the form $\mathbb{R}^{n-2} \times \mathcal{C}'$, where $\mathcal{C}'$ is a convex cone in $\mathbb{R}^2$ which is not a half-plane.
\end{proof}
\begin{lem}\label{L: N non-flat points}
Let $Q \in \partial \Omega \cap B_1(0)$ be not a flat point. Then, $N_{\Omega}(Q, 0, u) \ge 1$.
\end{lem}
\begin{proof} By assumption, $\partial \Omega_{Q, \infty}$ is a non-flat cone. Therefore, by Lemma \ref{trough lem}, we may contain $\Omega_{Q, \infty}$ in a cone $\mathcal{C} = \mathcal{C}' \times \mathbb{R}^{n-2}$, where $\mathcal{C}'$ is a strictly convex (i.e. non-flat) cone in $\mathbb{R}^{2}.$ Let us write $\partial \mathcal{C}'$ as the graph of a 1-homogeneous convex function, $f:\mathbb{R} \rightarrow \mathbb{R}$.
By the strict convexity of $\mathcal{C}'$ (rotating so that $f(x) > 0$ for $x \neq 0$), the level set $\{x \in \mathbb{R}: f(x)= 1\}$ consists of two points $x_1 < 0 < x_2$. Let $R$ be the rectangle in $\mathbb{R}^2$ with corners $(x_1, 0)$, $(x_2, 0)$, $(x_1, f(x_1))$, and $(x_2, f(x_2)).$ Let $\mathcal{R} \subset \mathbb{R}^{n}$ be the box $R \times [-1, 1]^{n-2}$.
Now, we let our comparison function, $\phi$, be the solution to the following Dirichlet problem,
$$
\begin{cases}
\Delta \phi = 0 & \text{ in } \mathcal{R}\\
\phi = 0 & \text{ on } \overline{(x_1, 0)(x_2, 0)} \times [-1, 1]^{n-2}\\
\phi = c & \text{ on } \partial \mathcal{R} \setminus \left( \overline{(x_1, 0)(x_2, 0)} \times [-1, 1]^{n-2}\right)
\end{cases}
$$
where $c> 0$ is a constant chosen such that $|u_{Q, \infty}| < c$ on $\partial \mathcal{R}$. This is possible by our uniform bounds on the Lipschitz norm. Note that $\phi \ge 0$ in $\Omega_{Q, \infty} \cap \mathcal{R}$ and that $u_{Q, \infty} = 0$ on $\partial \Omega_{Q, \infty}$. Thus, by the maximum principle, $\phi \ge u_{Q, \infty}$ in $\Omega_{Q, \infty} \cap \mathcal{R}$.
Now, we claim that $\phi \rightarrow 0$ as $x$ approaches $\{0\}$ at a rate comparable to linear decay. To see this, we consider the blow-up of $\phi$ at $\{ 0\} \in \partial \mathcal{R}$. Since $\{0\}$ is a flat point of $\partial \mathcal{R}$, $\phi_{0, \infty} \ge 0$ in $\mathbb{H}^n_+$, and the zero set of $\phi_{0, \infty}$ is $(n-1)$-symmetric. The only homogeneous harmonic function which satisfies these properties is the linear one.
By \cite{JerisonKenig82} Theorem 5.1., there is a neighborhood of $\{0\}$ and constants, $c_1, c_2$ such that if $\delta(x) = dist(x, \overline{(x_1, 0)(x_2, 0)} \times [-1, 1]^{n-2})$, then
\begin{align*}
c_1 \le \frac{\phi(x)}{\delta(x)} \le c_2
\end{align*}
If $N_{\Omega}(Q, 0, u) < 1$, then $u_{Q, \infty}$ would be homogeneous of degree $N_{\Omega}(Q, 0, u) < 1$ and could not be bounded by a linear function. However, $\phi \ge u_{Q, \infty},$ so $N_{\Omega}(Q, 0, u)\ge 1.$
\end{proof}
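A two-dimensional example illustrates why non-flat points force a frequency strictly larger than $1$, consistent with the remark below: on the planar sector $\mathcal{C}'_\theta = \{(r\cos\varphi, r\sin\varphi): 0 < \varphi < \theta\}$ of opening $\theta < \pi$, the function
$$w(r, \varphi) = r^{\pi/\theta}\sin\left(\tfrac{\pi \varphi}{\theta}\right)$$
is positive and harmonic in $\mathcal{C}'_\theta$, vanishes on $\partial \mathcal{C}'_\theta$, and is homogeneous of degree $\pi/\theta > 1$.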
It follows from Lemma \ref{(0, 0)-symmetric functions} that if $N_{\Omega}(Q, 0, u) = 1$, then $Q$ must be a flat point. Therefore, for non-flat points, $Q \in \partial \Omega$, $N_{\Omega}(Q, 0, u) > 1.$ Thus, $sing(\partial \Omega) \cap B_{\frac{1}{4}}(0) \subset \mathcal{S}^{n-2}(u).$
\end{document}
\begin{document}
\title{FINITE VOLUME SCHEMES FOR DIFFUSION EQUATIONS: INTRODUCTION TO AND REVIEW
OF MODERN METHODS\footnote{Preprint of an article published in
Math. Models Methods Appl. Sci. (M3AS) 24 (2014), no. 8, 1575-1619 (special issue on Recent Techniques for PDE Discretizations on Polyhedral Meshes). DOI:10.1142/S0218202514400041 \copyright{} World Scientific Publishing Company
http://www.worldscientific.com/worldscinet/m3as }}
\author{JEROME DRONIOU}
\address{School of Mathematical Sciences,
Monash University\\
Victoria 3800, Australia.\\
[email protected]}
\maketitle
\begin{abstract} We present Finite Volume methods for diffusion equations on generic meshes
that have received substantial coverage in the last decade or so.
After introducing the main ideas and construction principles of the methods,
we review some literature results, focusing on two important properties
of schemes (discrete versions of well-known properties of the continuous
equation): coercivity and minimum-maximum principles. Coercivity ensures the
stability of the method as well as its convergence under assumptions
compatible with real-world applications, whereas minimum-maximum principles
are crucial in case of strong anisotropy to obtain physically meaningful
approximate solutions.
\end{abstract}
\keywords{review, elliptic equation, finite volume schemes, multi-point flux approximation,
hybrid mimetic mixed methods, discrete duality finite volume schemes, coercivity,
convergence analysis, monotony, minimum and maximum principles.}
\ccode{AMS Subject Classification: 65N06, 65N08, 65N12, 65N15, 65N30}
\section{Introduction}\label{sec:intro}
Diffusion processes are ubiquitous in physics of flows, such as heat propagation
or flows in porous media encountered in reservoir engineering. A simple
form of diffusion equation is
\begin{equation}\label{base}
\ba -{\rm div}(\Lambda(x)\nabla{\overline{u}}(x))=f(x)\,,&\quad x\in\Omega,\\
{\overline{u}}(x)=\bu_b\,,&\quad x\in\partial\Omega,
\end{array}
\end{equation}
where $\Omega$ is the domain of study, $f$ describes the volumic sources or sinks,
$\Lambda$ encodes the diffusion properties of the medium,
$\bu_b$ is the fixed boundary condition and
${\overline{u}}$ is the unknown of interest (pressure, saturation, etc.).
Although very simplified with respect to real-world models, Equation \eqref{base} already
contains some of the main issues that have to be dealt with when designing
and analysing numerical methods for diffusion processes.
The assumptions on the data are:
\begin{eqnarray}
\label{hyp-omega}
&&\Omega\mbox{ is a bounded connected polygonal open subset of $\mathbb R^d$, $d\ge 1$,}\\
\label{hyp-fudir}
&&f\in L^2(\Omega)\,,\quad \bu_b\in H^{1/2}(\partial\Omega)\,,\\
\label{hyp-lambda}
&&
\ba
\Lambda:\Omega\to \mathbb R^{d\times d}\mbox{ is symmetric-valued, essentially bounded and coercive}\\
\mbox{(i.e. $\exists \lambda_-,\lambda_+>0$ such that, for a.e. $x\in\Omega$ and all
$\xi\in \mathbb R^d$,}\\
\lambda_- |\xi|^2\le \Lambda(x)\xi\cdot\xi\le \lambda_+|\xi|^2)
\end{array}
\end{eqnarray}
($\cdot$ and $|\cdot|$ are the Euclidean dot product and norm on $\mathbb R^d$).
No other regularity properties are assumed on $\Lambda$, $f$ or $\bu_b$, and
the proper mathematical formulation of \eqref{base} is therefore,
denoting by $\gamma:H^1(\Omega)\to H^{1/2}(\partial\Omega)$ the trace operator:
\begin{equation}\label{basew}
\ba
{\overline{u}}\in \{v\in H^1(\Omega)\;:\;\gamma(v)=\bu_b\},\\
\displaystyle\forall \varphi\in H^1_0(\Omega)\,,\quad \int_\Omega \Lambda(x)\nabla{\overline{u}}(x)\cdot
\nabla\varphi(x){\rm d} x = \int_\Omega f(x)\varphi(x){\rm d} x.
\end{array}
\end{equation}
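Let us note in passing (a standard fact, recorded here because the uniqueness of ${\overline{u}}$ is used in the convergence arguments below) that, by \eqref{hyp-lambda} and the Cauchy--Schwarz inequality, the bilinear form $a(v,w)=\int_\Omega \Lambda(x)\nabla v(x)\cdot\nabla w(x){\rm d} x$ satisfies
$$
a(v,v)\ge \lambda_- \int_\Omega|\nabla v|^2\quad\mbox{and}\quad
|a(v,w)|\le \lambda_+ \left(\int_\Omega|\nabla v|^2\right)^{1/2}\left(\int_\Omega|\nabla w|^2\right)^{1/2}
$$
for all $v,w\in H^1_0(\Omega)$, so that, after lifting the boundary datum $\bu_b$, the Lax--Milgram theorem ensures that \eqref{basew} has one and only one solution.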
Amongst the numerous families of numerical methods for diffusion equations
(Finite Difference, Finite Element, Discontinuous Galerkin...),
Finite Volume (FV) schemes are methods of choice for a number of engineering
applications in which the conservation of various extensive
quantities is important. Local conservativity
of the fluxes is in particular essential to handle the hyperbolicity and
strong coupling which occur in models of miscible
or immiscible flows in porous media.
The purpose of this work is to present a few
modern FV methods for \eqref{base} and to review some of the
mathematical results established for these methods. Although FV
methods can be applied on a number of fluid models, our discussion
will be made with models of porous media flows in mind.
In this case, \eqref{base} corresponds to a steady single-phase
single-component Darcy problem with no gravitational effects,
${\overline{u}}$ is the pressure and $\Lambda$ is the permeability field\cite{DIP13-2}.
The paper is organised as follows. In the rest of this section,
we detail the basics behind the construction of FV methods
and we point out two important properties of Equation \eqref{base}
(coercivity and minimum-maximum principle) which are also desirable for discretisations thereof.
Coercivity, in particular, is at the core of techniques which allow
one to carry out convergence proofs without assuming
non-physical regularities on the data or the solution.
Sec. \ref{sec:FV2} presents the most classical FV method for \eqref{base},
based on a 2-point flux approximation, and highlights its coercivity
and minimum-maximum principle properties as well as its main flaw: it is hardly
applicable on meshes encountered in practical applications.
Secs. \ref{sec:MPFA}, \ref{sec:HMM} and \ref{sec:DDFV} then present
three families of FV schemes applicable on generic meshes: Multi Point
Flux Approximation methods (O-, L- and G-methods), Hybrid Mimetic Mixed methods
(including Hybrid Finite Volume methods, Mimetic Finite Difference schemes and
Mixed Finite Volume methods) and Discrete Duality Finite Volume methods.
In each of these sections, we first present the construction of the method,
focusing on its principles rather than on the details of the
computations, and we then review the literature results on their
coercivity (and convergence) and minimum-maximum principle properties. These sections
are also completed by short conclusions summarising the strengths and weaknesses
of each method. In Sec. \ref{sec:LMP}, we consider some FV
schemes specifically designed to satisfy minimum-maximum principles
on any mesh. Sec. \ref{sec:concl} concludes the paper.
\subsection{What is a Finite Volume scheme?}\label{sec:whatis}
Good question... not easy to answer given the number of methods
presented in the literature as ``Finite Volume'' schemes.
Nevertheless, some basic ideas remain which should be shared by any
method called ``Finite Volume''.
The physical principle that leads to \eqref{base} is the balance of
some extensive quantity $Q$ (heat, component mass, etc.):
given a domain $\omega$, the variation
of $Q$ inside $\omega$ comes from the creation of $Q$ in $\omega$
and the transfer of $Q$ through $\partial\omega$. In a stationary
context, there is no variation of $Q$ and the volumic creation
inside $\omega$ must therefore balance out the quantity of $Q$ which
leaves $\omega$ through $\partial\omega$.
Under modelling assumptions, the creation of $Q$ inside $\omega$
has a volumetric density function $f$ and the flow of $Q$ outside $\omega$
has a surfacic density $-\Lambda(x)\nabla{\overline{u}}(x)\cdot\mathbf{n}_{\omega}(x)$
(Darcy's or Fourier's law), where $\mathbf{n}_\omega$ is the outer unit normal to $\partial\omega$
and $\Lambda(x)$ is a symmetric positive definite matrix --- heat conductivity matrix in the case
of the heat equation, permeability matrix in reservoir engineering.
The mass balance of $Q$ then reads
\begin{equation}\label{equation-cons2}
\int_{\partial\omega} -\Lambda(x)\nabla{\overline{u}}(x)\cdot\mathbf{n}_\omega(x){\rm d} S(x)=\int_\omega f(x){\rm d} x.
\end{equation}
Using Stokes' formula on the left-hand side,
taking $\omega$ a ball around $x\in\Omega$, dividing by the measure of $\omega$
and letting its radius tend to $0$ leads to \eqref{base}. This is the ``infinitesimal'' control volume technique to
derive the diffusion equation.
If, on the other hand, we consider a ``finite'' control volume approach in which
$\omega=K$ is a (small but not infinitesimal) polygonal
open set, then \eqref{equation-cons2} becomes
\begin{equation}\label{balance-flux}
\sum_{\sigma\mbox{ \scriptsize edge of }K} \overline{F}_{K,\sigma}=\int_K f(x){\rm d} x
\end{equation}
where $\overline{F}_{K,\sigma}=\int_\sigma -\Lambda(x)\nabla {\overline{u}}(x)\cdot \mathbf{n}_K(x){\rm d} S(x)$
is the flux of ${\overline{u}}$ through $\sigma$.
It can also be noticed that, if $\sigma$ is an edge
between two polygons $K$ and $L$, then
\begin{equation}\label{cons-flux}
\overline{F}_{K,\sigma}+\overline{F}_{L,\sigma}=0.
\end{equation}
\begin{remark} Another way to get \eqref{balance-flux} is
to integrate \eqref{base} on $K$. This is how FV
methods are usually presented in textbooks, but it is
important to realise that \eqref{balance-flux} directly comes from
physical principles (without even writing \eqref{base}).
This explains why FV methods are particularly
attractive in many engineering contexts.
\end{remark}
The balance \eqref{balance-flux} and conservativity \eqref{cons-flux} of
the fluxes are the two main elements on which FV methods are built.
Let $(\mathcal M,\mathcal E,\mathcal P)$ be a mesh
of $\Omega$ as given by Definition \ref{def:mesh} below.
All FV methods we consider here have at least cell unknowns $(u_K)_{K\in\mathcal M}$,
that play the role of approximate values of $({\overline{u}}(\mathbi{x}_K))_{K\in\mathcal M}$.
Such cell unknowns are often desirable in applications, for
coupling issues and because the medium properties (permeability, etc.) are
usually constant in each cell. Some FV methods also use
additional unknowns, e.g. approximate values of ${\overline{u}}$ on the edges.
The principle of FV schemes is to compute, using all these unknowns,
consistent approximations $F_{K,\sigma}$ of $\overline{F}_{K,\sigma}$
and to write discrete versions of \eqref{balance-flux} and \eqref{cons-flux}:
\begin{equation}\label{bf}
\mbox{for any }K\in \mathcal M\,:\;\sum_{\sigma\in\mathcal E_{K}} F_{K,\sigma}=\int_K f(x){\rm d} x,
\end{equation}
\begin{equation}\label{cf}
\mbox{for any edge $\sigma$ between two distinct $K,L\in\mathcal M$}\,:\;
F_{K,\sigma}+F_{L,\sigma}=0.
\end{equation}
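To make the preceding discussion concrete, here is a minimal illustrative sketch (in Python, not taken from the references reviewed below) of how \eqref{bf} and \eqref{cf} translate into the assembly of a linear system once each numerical flux $F_{K,\sigma}$ is expressed as a linear combination of cell unknowns; the data structures (\texttt{edges}, \texttt{flux\_coeffs}, \texttt{source}) are placeholders of our own, and homogeneous Dirichlet data are assumed.
\begin{verbatim}
import numpy as np

def assemble_fv(n_cells, edges, flux_coeffs, source):
    # edges[e]      : pair (K, L); L is None for a boundary edge
    # flux_coeffs[e]: dict {M: t}, coefficients of F_{K,e} = sum_M t*u_M, seen from K
    # source[K]     : integral of f over cell K
    A = np.zeros((n_cells, n_cells))
    b = np.array(source, dtype=float)
    for e, (K, L) in enumerate(edges):
        for M, t in flux_coeffs[e].items():
            A[K, M] += t          # flux balance (bf) in cell K
            if L is not None:
                A[L, M] -= t      # conservativity (cf): F_{L,e} = -F_{K,e}
    return A, b                   # solve A u = b for the cell unknowns
\end{verbatim}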
\begin{definition}[Mesh]\label{def:mesh} A mesh of $\Omega$ is $(\mathcal M,\mathcal E,\mathcal P)$
where:
\begin{itemize}
\item $\mathcal M$ is a finite family of non-empty open disjoint polygons
(the ``control volumes'' or ``cells'')
such that $\overline{\Omega}=\cup_{K\in\mathcal M}\overline{K}$,
\item $\mathcal E$ is a finite family of
non-empty disjoint planar subsets of $\Omega$ (the ``edges'') with positive
$(d-1)$-dimensional measure. We assume that
for each control volume $K$ there exists $\mathcal E_K\subset \mathcal E$
such that $\partial K=\cup_{\sigma\in\mathcal E_K}\overline{\sigma}$.
We also assume that each edge $\sigma\in\mathcal E$ belongs to exactly
one or two sets $(\mathcal E_K)_{K\in\mathcal M}$.
\item $\mathcal P$ is a family of points $(\mathbi{x}_K)_{K\in\mathcal M}$ such that,
for each $K$, $\mathbi{x}_K\in {K}$.
\end{itemize}
We denote by $|K|$ the $d$-dimensional measure of $K\in\mathcal M$,
by $|\sigma|$ the $(d-1)$-dimensional measure of $\sigma\in\mathcal E$
and by $\mathbf{n}_{K,\sigma}$ the unit normal to $\sigma\in \mathcal E_K$
outward $K$. We also partition $\mathcal E$ into the interior edges $\mathcal E_{\rm int}$
(those included in $\Omega$) and the exterior edges $\mathcal E_{\rm ext}$
(those included in $\partial\Omega$).
The size of the mesh is $h_\mathcal M=\max_{K\in\mathcal M}{\rm diam}(K)$.
We also take $\Lambda_K$ a value of $\Lambda$ in $K$ (e.g. $\frac{1}{|K|}\int_K \Lambda$
or $\Lambda(\mathbi{x}_K)$ -- in reservoir applications, $\Lambda$ is constant in each
cell $K$).
\end{definition}
\begin{figure}[h!]
\begin{center}
\input{fig-mesh.pdf_t}
\caption{\label{fig:mesh}A mesh of $\Omega$.}
\end{center}
\end{figure}
\begin{remark} Although we use a 2D vocabulary (polygon, edges...), most of
what we present here is valid in any space dimension.
\end{remark}
\subsection{Convergence analysis and coercivity}\label{sec:analysis}
In reservoir applications, the data (and thus the solution) are not smooth. It is
for example natural for the permeability $\Lambda$ to be discontinuous from
one geological layer to another. Convergence analysis of numerical methods
for such problems should
take into account these practical constraints and should therefore not rely
on non-physical regularity assumptions on the data or solution.
Being able to carry out a convergence analysis under very weak regularity assumptions
on the data or the
solution is also essential for more complex models
(Navier-Stokes equations, multi-phase flows, etc.).
Assuming to simplify that $\bu_b=0$ (in which case ${\overline{u}}\in H^1_0(\Omega)$),
an efficient path to prove the convergence of FV methods for \eqref{base} is to follow
these steps:
\begin{itemize}
\item[\textsf{(C1)}] Establish \emph{a priori} energy estimates on the solution to the scheme,
in a mesh- and scheme-dependent discrete norm which mimics the $H^1_0$ norm,
\item[\textsf{(C2)}] Prove a discrete Rellich compactness result, i.e. that,
as the mesh size tends to $0$, sequences of approximate solutions bounded in these
discrete norms have subsequences which converge(\footnote{In a sense
depending on the method, but which includes at least some form
of strong convergence in $L^2(\Omega)$ and often some form of
weak convergence of discrete gradients.}) to some function ${\overline{u}}\in H^1_0(\Omega)$,
\item[\textsf{(C3)}] Prove that any such limit ${\overline{u}}$ of approximate solutions
satisfies \eqref{basew}.
\end{itemize}
Because the solution to \eqref{basew} is unique, Steps \textsf{(C1)}---\textsf{(C3)}
show the convergence of the scheme in the sense that the whole sequence of approximate
solutions converges to the solution of \eqref{basew}. Moreover, for linear schemes,
Step \textsf{(C1)} ensures the existence and uniqueness of a solution to the scheme.
Following this path does not require any regularity property on $\Lambda$, $f$ or ${\overline{u}}$
besides those in \eqref{hyp-omega}---\eqref{hyp-lambda} and \eqref{basew}.
Ensuring that \emph{a priori} energy estimates can be obtained
in a proper ``discrete $H^1_0$ norm'' however requires some
assumptions on the scheme.
Consider the continuous equation \eqref{base},
multiply it by ${\overline{u}}$ and integrate by parts (or, equivalently, take $\varphi=
{\overline{u}}$ in \eqref{basew}). Then
\begin{equation}\label{ipp}
\lambda_- |{\overline{u}}|_{H^1_0}^2\le \int_\Omega \Lambda(x)\nabla{\overline{u}}(x)\cdot\nabla{\overline{u}}(x){\rm d} x
=\int_\Omega f(x){\overline{u}}(x){\rm d} x\le ||f||_{L^2}||{\overline{u}}||_{L^2}
\end{equation}
and the Poincar\'e inequality $||{\overline{u}}||_{L^2(\Omega)}\le {\rm diam}(\Omega)|{\overline{u}}|_{H^1_0(\Omega)}$
gives an estimate on $|{\overline{u}}|_{H^1_0(\Omega)}:=||\,|\nabla{\overline{u}}|\,||_{L^2(\Omega)}$.
The key element here is the coercivity of $\Lambda$
(which is equivalent to the coercivity of the bilinear form in \eqref{basew}).
Discrete $H^1_0$ estimates on the solution to a FV scheme are usually obtained
by mimicking this process at the discrete level: multiply the scheme by the unknown,
perform discrete integration by parts (or, equivalently, take the unknown
as test function in a variational formulation of the scheme)
and conclude by establishing a discrete Poincar\'e inequality
(see e.g. Sec. \ref{sec:FV2-coer}). This process does not work
for all schemes but, when it does, it shows how to find the
discrete $H^1_0$ norm associated with the scheme and mesh.
For schemes using only cell unknowns, for example, multiplying \eqref{bf} by
$u_K$, summing on the cells and using \eqref{cf}, we see that a
discrete version of \eqref{ipp} can be obtained if there exists
a discrete $H^1_0$ norm $||.||_{1,{\rm disc}}$ satisfying the
Poincar\'e's inequality
\[
\left(\sum_{K\in\mathcal M} |K|u_K^2\right)^{1/2}\le C||u||_{1,{\rm disc}}
\]
and the estimate
\begin{equation}\label{coer-bd}
||u||_{1,{\rm disc}}^2\le C\sum_{\sigma\in\mathcal E} F_{K,\sigma}(u_K-u_L),
\end{equation}
for some $C$ not depending on $u$ or the mesh
(in the previous sum, $K,L$ are the cells on each side of $\sigma\in\mathcal E_{\rm int}$
and $u_L=0$ if $\sigma\in\mathcal E_{\rm ext}\cap\mathcal E_K$).
Obtaining such discrete $H^1_0$ estimates is not only the first step in
proving the convergence of the scheme, but it is also crucial to ensure
its numerical stability. Schemes for which such energy estimates can be established
are called \emph{coercive}. If a linear scheme is coercive and has a symmetric
matrix, then it has a symmetric positive definite matrix and
very efficient algorithms (Cholesky decomposition, conjugate gradient, etc.) can be used to compute
its solution. Note however that the mere symmetry and positive-definiteness
of the matrix are not enough to ensure the coercivity of the scheme, as
this positive-definiteness must be uniform with respect to
the mesh and must hold for a discrete $H^1_0$ norm satisfying
\textsf{(C2)}.
\begin{remark}[Consistency of Finite Volume methods]
In FV methods, the numerical fluxes $F_{K,\sigma}$ are consistent
approximations of the exact fluxes $\overline{F}_{K,\sigma}$:
if $F^\star_{K,\sigma}$ are the numerical fluxes computed by replacing
the unknowns by the exact values of ${\overline{u}}$ and if all
data are smooth, then
\begin{equation}\label{flux-cons}
F^\star_{K,\sigma}=\overline{F}_{K,\sigma}+\mathcal O(|\sigma|{\rm diam}(K))
\end{equation}
(note that $\overline{F}_{K,\sigma}=\mathcal O(|\sigma|)$).
It is however often said that FV methods do not provide consistent approximations
of the operator $-{\rm div}(\Lambda\nabla {\overline{u}})$ ``in the Finite Difference sense''
(see Ref. \refcite{EGH00}, Chapter 2).
We can indeed check that, in general,
\begin{equation}\label{noncons-fd}
\sum_{\sigma\in\mathcal E_K} F^\star_{K,\sigma}=
\int_K -{\rm div}(\Lambda\nabla{\overline{u}})+\mathcal O(|K|)
\end{equation}
(note that $\int_K -{\rm div}(\Lambda\nabla {\overline{u}})=\mathcal O(|K|)$).
In fact, as often in mathematical analysis, everything is relative to
topology.
Relation \eqref{noncons-fd} shows a non-consistency in $L^\infty$ or $L^2$
norm, but thanks to the flux consistency \eqref{flux-cons} and the conservativity
of fluxes, we can prove that, for any $\varphi\in H^1_0(\Omega)$,
\[
\sum_{K\in\mathcal M}\sum_{\sigma\in\mathcal E_K} F^\star_{K,\sigma}\varphi_K
=-\int_\Omega {\rm div}(\Lambda\nabla{\overline{u}})(x)\varphi(x){\rm d} x
+\mathcal O(h_{\mathcal M}||(\varphi_K)_{K\in \mathcal M}||_{1,{\rm disc}}),
\]
where $\varphi_K=\frac{1}{|K|}\int_K\varphi(x){\rm d} x$ and
$||\cdot||_{1,{\rm disc}}$ is the discrete $H^1_0$ norm of Sec. \ref{sec:FV2-coer}.
Hence, $\sum_{\sigma\in\mathcal E_K} F^\star_{K,\sigma}$ is a consistent
approximation of $\int_K -{\rm div}(\Lambda\nabla {\overline{u}})$ in some discrete dual $H^1_0$ norm
and, because of this, establishing discrete $H^1_0$ estimates on approximate
solutions is also crucial to pass to the limit in Step \textsf{(C3)}.
\end{remark}
\begin{remark}[Linearly exact scheme]\label{rem-linex}
The consistency relation \eqref{flux-cons} is strongly related with
the fact that the scheme is \emph{linearly exact}, meaning that
if the exact solution ${\overline{u}}$ to \eqref{base} is piecewise linear
on the mesh then its interpolation is the solution to the scheme
(i.e. the scheme exactly reproduces piecewise linear solutions).
In this case, observed numerical orders of convergence(\footnote{Here and everywhere else in this paper, error estimates and orders of convergence are in some form of $L^2$ norm depending on the
scheme.}) are usually 2 for ${\overline{u}}$ and $1$ for its gradient
(at least for smooth solutions and linear schemes).
\end{remark}
\subsection{Maximum and minimum principles, or monotony}\label{sec:monotony}
A remarkable property of diffusion equations such as \eqref{base} is their
maximum and minimum principles, see Ref. \refcite{HOP27} or Chapter I in Ref. \refcite{MIR70}. In its strong form (also called the local
minimum principle), the minimum
principle states that, should $f$ be non-negative, the solution ${\overline{u}}$ to \eqref{base}
cannot have a local minimum inside $\Omegamega$ unless it is constant. This prevents in particular
the solution from presenting oscillating behaviours.
This local minimum principle implies the following weaker (global) form
\begin{equation}\label{minpple}
\mbox{if $f\ge 0$ and $\bu_b\ge 0$ then ${\overline{u}}\ge 0$},
\end{equation}
as well as the (global) minimum-maximum principle (obtained
by applying \eqref{minpple} to ${\overline{u}}-(\inf_{\partial\Omega}\bu_b)$ and $(\sup_{\partial \Omega} \bu_b)-{\overline{u}}$):
\begin{equation}\label{minmaxpple}
\mbox{if $f=0$ then $\inf_{\partial\Omega} \bu_b \le {\overline{u}}\le \sup_{\partial\Omega}\bu_b$.}
\end{equation}
Assume that $U=((u_i)_{i\in I},(u_z)_{z\in B})$ is a vector gathering
the unknowns $(u_i)_{i\in I}$ of the scheme and the discretised
boundary conditions $(u_z)_{z\in B}$, computed from $\bu_b$.
If the scheme is written $S(U)=R$, where $R$ is
a vector constructed from $f$, the discrete desirable versions of \eqref{minpple}
and \eqref{minmaxpple} are
\begin{equation}\label{disc-minpple}
\mbox{if $S(U)=R \ge 0$ and $u_z\ge 0$ for all $z\in B$ then $u_i\ge 0$
for all $i\in I$}
\end{equation}
(where $R\ge 0$ means that all components of $R$ are non-negative)
and
\begin{equation}\label{disc-minmaxpple}
\mbox{if $S(U)=0$ then $\inf_{z\in B} u_z\le u_i
\le \sup_{z\in B}u_z$ for all $i\in I$}.
\end{equation}
For linear schemes (i.e. $S$ is a linear function) that are
exact on constant functions (i.e. $S(\mathbf{1})=0$, where
$\mathbf{1}$ is the vector with all components equal to $1$),
the discrete minimum principle \eqref{disc-minpple} implies
the discrete minimum-maximum principle \eqref{disc-minmaxpple}
(if $S(U)=0$, apply \eqref{disc-minpple} to $V=(\max_{z\in B}u_z) \mathbf{1} - U$
and $V=U-(\min_{z\in B}u_z)\mathbf{1}$, which both satisfy
$V_b\ge 0$ for all $b\in B$ and $S(V)=0$ by linearity of $S$).
As we shall see in Sec. \ref{sec:LMP}, non-linear schemes may satisfy
\eqref{disc-minpple} without satisfying
\eqref{disc-minmaxpple}.
The usual way in the literature to prove that a linear
scheme satisfies \eqref{disc-minpple} is to show that its
matrix $A=(a_{ij})_{ij}$ is diagonally dominant by columns
(i.e. $a_{ii}>0$ for all $i$, $a_{ij}\le 0$ for all $i\not= j$ and
$a_{kk}\ge \sum_{i\not=k}|a_{ik}|$ for all $k$ with strict inequality
for at least one $k$) and has a connected graph.
Under these assumptions, it is easy to see that
$A$ is invertible and that $A^{-1}$ only has non-negative coefficients
($A$ is thus an $M$-matrix), see Chapter 6 in Ref. \refcite{ABR79}.
Provided that the scheme is written
$S(U)=A(u_i)_{i\in I} - C(u_z)_{z\in B}=R$ where $C$ is a matrix
with non-negative coefficients, we then obtain $(u_i)_{i\in I}=A^{-1}(R+C(u_z)_{z\in B})
\ge 0$ whenever $R\ge 0$ and $u_z\ge 0$ for all $z\in B$.
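For readers who wish to test this criterion numerically, the following sketch (in Python, for illustration only; the graph-connectedness assumption is not checked) verifies the diagonal dominance by columns described above on a given matrix.
\begin{verbatim}
import numpy as np

def column_diagonally_dominant(A, tol=1e-12):
    # a_ii > 0, a_ij <= 0 for i != j, and a_kk >= sum_{i != k} |a_ik| for every
    # column k, with strict inequality for at least one k (graph connectedness
    # is assumed and not checked here).
    A = np.asarray(A, dtype=float)
    d = np.diag(A)
    off = A - np.diag(d)
    if not (d > tol).all() or not (off <= tol).all():
        return False
    col_sums = np.abs(off).sum(axis=0)
    return bool((d >= col_sums - tol).all() and (d > col_sums + tol).any())
\end{verbatim}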
Satisfying a discrete minimum-maximum principle is particularly important
in complex models such as multi-phase flows in reservoir engineering.
Schemes that do not satisfy this principle may give rise
to spurious oscillations which may lead to gas-oil numerical
instabilities.
Linear schemes for \eqref{base} satisfying \eqref{disc-minpple}
are also called \emph{monotone}, as they preserve the order
of boundary conditions (for non-negative right-hand sides) or of
initial conditions (when applied to transient equations).
\section{TPFA scheme}\label{sec:FV2}
Let us assume that the medium is isotropic, i.e. $\Lambda(x)=\lambda(x){\rm Id}$
for some scalar function $\lambda$. We also assume the following orthogonality conditions
on the mesh:
\begin{equation}\label{cond-orth}
\ba
\forall \sigma\mbox{ edge between two control volumes $K,L\in\mathcal M$}\,,\;
(\mathbi{x}_K\mathbi{x}_L)\bot \sigma\,,\\
\forall \sigma\in\mathcal E_{\rm ext}\cap\mathcal E_K\,,\;
\mbox{ the half-line $\mathbi{x}_K+[0,\infty)\mathbf{n}_{K,\sigma}$
intersects $\sigma$}.
\end{array}
\end{equation}
In Fig. \ref{fig:mesh}, for example, this assumption is satisfied by
the edge $\sigma$ between $K$ and $L$ but not by the
edge between $K$ and $M$.
Letting $\{\mathbi{x}_\sigma\}=(\mathbi{x}_K\mathbi{x}_L)\cap \sigma$ (or $\{\mathbi{x}_\sigma\}=(\mathbi{x}_K+[0,\infty)\mathbf{n}_{K,\sigma})
\cap \sigma$ if $\sigma\in\mathcal E_{\rm ext}$), consistent approximations
of the fluxes for small $h_{\mathcal M}$ are
\begin{eqnarray}
\label{disc-flux-int}
\mbox{if $\sigma\in\mathcal E_K\cap\mathcal E_L$}&:&
F_{K,\sigma}= \lambda_K|\sigma|\frac{u_K-u_\sigma}{{\rm dist}(\mathbi{x}_K,\mathbi{x}_\sigma)}
\mbox{ and } F_{L,\sigma}= \lambda_L|\sigma|\frac{u_L-u_\sigma}{{\rm dist}(\mathbi{x}_L,\mathbi{x}_\sigma)},\\
\label{disc-flux-ext}
\mbox{if $\sigma\in\mathcal E_{\rm ext}\cap \mathcal E_K$}
&:&F_{K,\sigma}=\lambda_K|\sigma|\frac{u_K-u_\sigma}{{\rm dist}(\mathbi{x}_K,\sigma)},
\end{eqnarray}
where ${\rm dist}(a,b)=|a-b|$, $\lambda_K$ is the value of $\lambda$ on $K$
and $u_\sigma$ approximates ${\overline{u}}(\mathbi{x}_\sigma)$.
If $\sigma\in\mathcal E_{\rm ext}$, $u_\sigma$ is fixed by $\bu_b$(\footnote{Several
choices are possible. If $\bu_b$ is smooth enough, then one can take
$u_\sigma=\bu_b(\mathbi{x}_\sigma)$.
Otherwise, $u_\sigma$ can be chosen as the average of $\bu_b$ on $\sigma$.}).
If $\sigma\in\mathcal E_K\cap\mathcal E_L$, the
additional unknown $u_\sigma$ is eliminated by imposing the conservativity
\eqref{cf} of fluxes and we get (see Ref. \refcite{EGH00}, Chapter 3):
\begin{equation}\label{2pt-fluxes-h}
F_{K,\sigma}=\tau_\sigma (u_K-u_L) \quad\mbox{ with }
\tau_\sigma
=\frac{|\sigma|}{{\rm dist}(\mathbi{x}_K,\mathbi{x}_L)}\frac{\lambda_K \lambda_L {\rm dist}(\mathbi{x}_K,\mathbi{x}_L)}{\lambda_K{\rm dist}(\mathbi{x}_L,\mathbi{x}_\sigma)
+\lambda_L {\rm dist}(\mathbi{x}_K,\mathbi{x}_\sigma)}.
\end{equation}
The balance equation \eqref{bf} of the discrete fluxes \eqref{disc-flux-ext}-\eqref{2pt-fluxes-h}
then gives an FV scheme for \eqref{base} when $\Lambda=\lambda{\rm Id}$,
called the Two Point Flux Approximation Finite Volume scheme (TPFA for short)
since each flux is computed using only the 2 unknowns on each side of the
edge.
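As a quick illustration of the consistency \eqref{flux-cons} (a direct verification in the simplest situation, namely a constant $\lambda$, an interior edge and the orthogonality condition \eqref{cond-orth}): if ${\overline{u}}$ is linear then $\mathbi{x}_L-\mathbi{x}_K={\rm dist}(\mathbi{x}_K,\mathbi{x}_L)\,\mathbf{n}_{K,\sigma}$ and \eqref{2pt-fluxes-h} gives
$$
\tau_\sigma\big({\overline{u}}(\mathbi{x}_K)-{\overline{u}}(\mathbi{x}_L)\big)
=\frac{\lambda|\sigma|}{{\rm dist}(\mathbi{x}_K,\mathbi{x}_L)}\,\nabla{\overline{u}}\cdot(\mathbi{x}_K-\mathbi{x}_L)
=-\lambda|\sigma|\,\nabla{\overline{u}}\cdot\mathbf{n}_{K,\sigma}=\overline{F}_{K,\sigma},
$$
so the two-point flux is exact on linear solutions, in line with Remark \ref{rem-linex}.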
\begin{remark} As ${\rm dist}(\mathbi{x}_K,\mathbi{x}_\sigma)+{\rm dist}(\mathbi{x}_L,\mathbi{x}_\sigma)={\rm dist}(\mathbi{x}_K,\mathbi{x}_L)$,
the transmissibility $\tau_\sigma$ involves a harmonic average
of the values of $\Lambda$ in the cells on each side of $\sigma$. This harmonic
average is well-known, in FV methods, to give a much more accurate solution
than other averages.
\end{remark}
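In code, the transmissibility and flux \eqref{2pt-fluxes-h} are one line each. The following sketch (in Python, for illustration only; the argument names are ours, with $d_K={\rm dist}(\mathbi{x}_K,\mathbi{x}_\sigma)$ and $d_L={\rm dist}(\mathbi{x}_L,\mathbi{x}_\sigma)$) mirrors the formula, including the harmonic averaging just discussed.
\begin{verbatim}
def tpfa_transmissibility(edge_measure, lam_K, lam_L, d_K, d_L):
    # tau_sigma of (2pt-fluxes-h): |sigma|/dist(xK,xL) times the distance-weighted
    # harmonic average of lam_K and lam_L, with dist(xK,xL) = d_K + d_L.
    return (edge_measure / (d_K + d_L)) * \
           (lam_K * lam_L * (d_K + d_L)) / (lam_K * d_L + lam_L * d_K)

def tpfa_flux(tau_sigma, u_K, u_L):
    # Two-point flux F_{K,sigma} = tau_sigma (u_K - u_L).
    return tau_sigma * (u_K - u_L)
\end{verbatim}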
\begin{remark}\label{rem:orth} If $\Lambda$ is an anisotropic full tensor,
the same construction can be made (see Chapter 3 in Ref. \refcite{EGH00}) provided that
the orthogonality condition \eqref{cond-orth} is replaced with \eqref{cond-orth2},
in which $D_{K,\sigma}$ is the straight line going through $\mathbi{x}_K$ and orthogonal to $\sigma$
for the scalar product induced by $\Lambda_K^{-1}$:
\begin{equation}\label{cond-orth2}
\ba
\forall \sigma\mbox{ between two control volumes $K,L\in\mathcal M$}\,,\;
D_{K,\sigma}\cap \sigma=D_{L,\sigma}\cap\sigma\not=\emptyset\,,\\
\forall \sigma\in\mathcal E_{\rm ext}\cap\mathcal E_K\,,\;
D_{K,\sigma}\cap\sigma\not=\emptyset.
\end{array}
\end{equation}
\end{remark}
\subsection{Coercivity}\label{sec:FV2-coer}
Assume that $\bu_b=0$ and thus that $u_\sigma=0$ for all $\sigma\in\mathcal E_{\rm ext}$.
Multiplying the balance equation \eqref{bf} by $u_K$,
summing on $K\in\mathcal M$ and gathering by edges (=discrete integration
by parts), we obtain, thanks to \eqref{2pt-fluxes-h},
\begin{equation}\label{2pt-estimate}
||u||_{1,{\rm disc}}^2:=\sum_{\sigma\in\mathcal E_{\rm int}}\tau_\sigma(u_K-u_L)^2+
\sum_{\sigma\in\mathcal E_{\rm ext}}\tau_\sigma u_K^2
=\int_\Omega f(x)u(x){\rm d} x
\end{equation}
where $u$ is the piecewise constant function equal to $u_K$ on $K$
and, in the sums, $K$ and $L$ are the control volumes
on each side of $\sigma\in\mathcal E_{\rm int}$
(we let $\tau_\sigma=\lambda_K\frac{|\sigma|}{{\rm dist}(\mathbi{x}_K,\sigma)}$
whenever $\sigma\in\mathcal E_{\rm ext}\cap\mathcal E_K$).
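The ``gathering by edges'' used to obtain \eqref{2pt-estimate} is the elementary summation-by-parts identity, valid for any fluxes satisfying the conservativity \eqref{cf}:
$$
\sum_{K\in\mathcal M} u_K\sum_{\sigma\in\mathcal E_K}F_{K,\sigma}
=\sum_{\sigma\in\mathcal E_{\rm int}}F_{K,\sigma}(u_K-u_L)
+\sum_{\sigma\in\mathcal E_{\rm ext}}F_{K,\sigma}u_K,
$$
with the same convention as above for the cells $K,L$ on each side of an interior edge.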
The left-hand side of \eqref{2pt-estimate} defines a discrete
$H^1_0$ norm $||u||_{1,{\rm disc}}$ for which one can
establish the discrete Poincar\'e inequality
$||u||_{L^2(\Omega)}\le {\rm diam}(\Omega)||u||_{1,{\rm disc}}$
and a discrete compactness result as in Step \textsf{(C2)}
of Sec. \ref{sec:analysis}, see Chapter 3 in Ref. \refcite{EGH00}.
The TPFA scheme is thus coercive (with a symmetric
matrix) and its convergence can be proved under the sole
assumptions \eqref{hyp-omega}--\eqref{hyp-lambda}. Of course,
error estimates can also be obtained if the data are more regular\cite{HER95}.
\subsection{Monotony}
Injecting \eqref{disc-flux-ext}-\eqref{2pt-fluxes-h} in the balance equation
\eqref{bf} we obtain, with the same conventions as in \eqref{2pt-estimate},
for all $K\in\mathcal M$,
\begin{equation}\label{2pt-mpple}
\sum_{\sigma\in\mathcal E_{\rm int}}\tau_\sigma(u_K-u_L)
+\sum_{\sigma\in\mathcal E_{\rm ext}}\tau_\sigma u_K
=\displaystyle \int_K f(x){\rm d} x
+\sum_{\sigma\in\mathcal E_{\rm ext}}\tau_\sigma u_\sigma.
\end{equation}
{}From this expression we can see that the scheme's function (see Section \ref{sec:monotony})
can be written $S(U)=A(u_K)_{K\in \mathcal M}-C(u_\sigma)_{\sigma\in\mathcal E_{\rm ext}}$,
with $A$ diagonally dominant, symmetric and graph-connected, and all coefficients of $C$
non-negative. Sec. \ref{sec:monotony} then shows that the TPFA scheme is monotone.
\begin{remark}\label{rem:monvf2}
Monotony of the TPFA scheme is in fact easy to prove from \eqref{2pt-mpple}.
If $f\ge 0$, $u_\sigma\ge 0$ for all $\sigma\in\mathcal E_{\rm ext}$
and $u_K=\min_{M\in\mathcal M}u_M<0$ then
the left-hand side of \eqref{2pt-mpple} is a
non-negative sum of non-positive terms. Hence all terms are equal to $0$
and $u_K=u_L$ for all neighbours $L$ of $K$. The minimal value $u_K$
thus propagates to all neighbours and, ultimately, to the whole connected domain.
Using \eqref{2pt-mpple} for one boundary cell
then contradicts the negativity of this minimal value.
In fact, this reasoning applied to $A^T$ gives a proof that the diagonal dominance
by columns of $A$ and its graph connectedness entail the non-negativity of all coefficients
of $A^{-1}$. It also shows that schemes with such matrices
satisfy in fact a discrete version of the strong minimum principle:
if $f\ge 0$, the solution to the scheme cannot have any interior minimum
unless it is constant.
\end{remark}
\subsection{The perfect scheme?}
The TPFA scheme is a cell-centred scheme (only involving cell unknowns), very
cheap to implement and with a small stencil:
5 on 2D quadrilateral meshes and 7 on 3D hexahedral meshes. Its matrix is
therefore very sparse and its solution easy to compute.
For these reasons, it has been adopted in many engineering software packages,
but it is not the perfect scheme...
Meshes available in field applications
may be quite distorted and may have cells presenting various complex geometries,
especially in basin simulation where alignment with
geological layers and erosion may lead to hexahedra with
collapsed faces. The orthogonality properties
\eqref{cond-orth} or \eqref{cond-orth2} are impossible to satisfy
on these meshes and, should they fail for too many edges, the solution given
by the TPFA scheme will be totally incorrect\cite{FAI92,AAV02,EIG05}.
Other FV methods therefore had to be designed, providing consistent fluxes
for general meshes and tensors.
\section{MPFA methods}\label{sec:MPFA}
Consistent approximations of the fluxes $\overline{F}_{K,\sigma}$
on general meshes require the usage of more approximate values of ${\overline{u}}$
(in cells, on edges or at vertices) than the two at $\mathbi{x}_K$
and $\mathbi{x}_L$ on each side of $\sigma$.
One easy way to get such values is to interpolate them from cell unknowns.
This is the path chosen in Ref. \refcite{FAI92} which introduces,
for each edge, additional cell values located at points satisfying the
orthogonality condition \eqref{cond-orth2} for the considered edge,
and then computes these values by convex combinations of existing cell unknowns.
However, this scheme's construction and stability can only be
ensured for grids not too distorted and tensors not too anisotropic.
Another idea is not to try and get back the orthogonality condition
\eqref{cond-orth2}, but to use the additional values to compute
approximate gradients, which in turn give approximate fluxes $F_{K,\sigma}$.
However, the computation of the
additional values must be done in a clever way, especially when $\Lambda$
is discontinuous, to ensure that the flux conservativity \eqref{cf} is satisfied.
The Multi-Point Flux Approximation (MPFA) schemes are based on such a
construction. Introduced in the mid- to late 90's\cite{AAV96,AAV98-I,AAV98-II,EDW98,EDW94},
these methods assume that the solution is piecewise linear in some sub-cells around each
vertex, introduce additional edge unknowns and express the linear variation
of the solution to compute gradients and thus fluxes in these sub-cells.
The edge unknowns are then eliminated (interpolated using cell unknowns) by writing
\emph{continuity equations} for the solution and
\emph{conservativity equations} for its fluxes. The final numerical fluxes
are consistent, conservative and expressed only in terms of cell unknowns.
\subsection{O-method}
Several MPFA methods have been devised over the years and their main variation is
on the choice of the local continuity and conservativity equations.
Amongst those methods, the O-method
(presented in Refs. \refcite{AAV02,AAV98-I} for particular polygonal
meshes) has received one of the largest coverage in literature
on MPFA methods.
\begin{figure}[h!]
\begin{center}
\input{fig-mpfa.pdf_t}
\caption{\label{fig:mpfa}Control volumes ($K,L,\ldots$) and
interaction region (enclosed in dotted line) for the MPFA O-method.
$\nu_{K,\tau}=$ normal
vector to $(\mathbi{x}_K\overline{\mathbi{x}}_\tau)$ with length ${\rm dist}(\mathbi{x}_K,\overline{\mathbi{x}}_\tau)$ ($\tau=\sigma,\sigma'$).
}
\end{center}
\end{figure}
Let us first consider the 2D case. For each edge $\sigma$, we fix a point $\overline{\mathbi{x}}_\sigma$
on $\sigma$. Several choices are possible\cite{AAV98-I,EDW98} but we only consider
here the case where $\overline{\mathbi{x}}_\sigma$ is the midpoint of $\sigma$. Then for each
vertex $\mathbi{v}$ of the mesh, an \emph{interaction region} is built by joining the
cell points $\mathbi{x}_K$ around $\mathbi{v}$ and the midpoints $\overline{\mathbi{x}}_\sigma$ of the edges containing $\mathbi{v}$
(see Fig. \ref{fig:mpfa}). This interaction region is made of one sub-cell $K_\mathbi{v}$ per
cell $K$ and the solution ${\overline{u}}$ is approximated by
a function that is linear inside each sub-cell around $\mathbi{v}$ (\footnote{This
linear approximation is natural if the mesh size is small enough since,
usually, $\Lambda$ and $f$ are assumed to be constant or smooth in $K$,
so that ${\overline{u}}$ is expected to be smooth inside $K$.}).
At this stage, \emph{continuity of this piecewise linear approximation is assumed at
each edge midpoint $\overline{\mathbi{x}}_\sigma$} around $\mathbi{v}$. We can therefore talk about
the value $u_\sigma$ of this function at $\overline{\mathbi{x}}_\sigma$, and its
constant gradient $\nabla_{K_\mathbi{v}}u$ on $K_\mathbi{v}$ satisfies
\begin{equation}\label{lin-grad}
\nabla_{K_\mathbi{v}}u\cdot (\mathbi{x}_K-\overline{\mathbi{x}}_\tau)=u_K-u_\tau\quad \mbox{($\tau=\sigma$ or $\sigma'$)}.
\end{equation}
Assuming that the vectors $\vect{\mathbi{x}_K\overline{\mathbi{x}}_\sigma}$ and $\vect{\mathbi{x}_K\overline{\mathbi{x}}_{\sigma'}}$ are
linearly independent, these two projections of $\nabla_{K_\mathbi{v}}u$ on these
vectors provide\cite{AAV98-I} the whole gradient $\nabla_{K_\mathbi{v}}u$:
\begin{equation}\label{eq-grad}
\nabla_{K_\mathbi{v}}u = -\frac{1}{2T}\left((u_\sigma-u_K)\nu_{K,\sigma'}+(u_{\sigma'}-u_K)\nu_{K,\sigma}\right)
\end{equation}
where $T$ is the area of triangle $(\mathbi{x}_K\overline{\mathbi{x}}_\sigma\overline{\mathbi{x}}_{\sigma'})$ and
$\nu_{K,\tau}$ ($\tau=\sigma$ or $\sigma'$) is the normal to $(\mathbi{x}_K\overline{\mathbi{x}}_\tau)$,
pointing outward this triangle and having length ${\rm dist}(\mathbi{x}_K,\overline{\mathbi{x}}_\tau)$.
Sub-fluxes across the half-edges $[\mathbi{v}\overline{\mathbi{x}}_\tau]$ around $\mathbi{v}$ are then computed
using these gradients, and therefore depend
on the cell unknowns $u_K,u_L,\ldots$ and the edge unknowns $u_\sigma,u_{\sigma'},\ldots$
around $\mathbi{v}$. For example, the sub-flux from $K$ through
$[\mathbi{v}\overline{\mathbi{x}}_\tau]$ is
\begin{equation}\label{eq-flux}
F_{K,\tau,\mathbi{v}}=-{\rm dist}(\mathbi{v},\overline{\mathbi{x}}_\tau)\Lambda_K\nabla_{K_\mathbi{v}}u\cdot\mathbf{n}_{K,\tau}
\quad (\tau=\sigma\mbox{ or }\sigma').
\end{equation}
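For concreteness, the two steps \eqref{lin-grad}--\eqref{eq-flux} can be coded directly; the sketch below (in Python, for illustration only, with argument names of our own; it assumes numpy arrays and that the two directions $\mathbi{x}_K-\overline{\mathbi{x}}_\sigma$ and $\mathbi{x}_K-\overline{\mathbi{x}}_{\sigma'}$ are linearly independent) recovers the sub-cell gradient from its two projections and evaluates the corresponding sub-flux.
\begin{verbatim}
import numpy as np

def subcell_gradient(x_K, x_s, x_sp, u_K, u_s, u_sp):
    # Solve (lin-grad): grad . (x_K - x_tau) = u_K - u_tau for tau = sigma, sigma'
    # (2D case; the two directions are assumed linearly independent).
    M = np.array([x_K - x_s, x_K - x_sp])
    rhs = np.array([u_K - u_s, u_K - u_sp])
    return np.linalg.solve(M, rhs)

def subflux(grad, Lambda_K, n_K_tau, dist_v_xtau):
    # Sub-flux (eq-flux): F = -dist(v, x_tau) * (Lambda_K grad) . n_{K,tau}.
    return -dist_v_xtau * np.dot(Lambda_K @ grad, n_K_tau)
\end{verbatim}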
The next step is to eliminate the edge unknowns involved in these sub-fluxes.
This is done by imposing the conservativity of the fluxes around $\mathbi{v}$:
\begin{equation}\label{elim-O}
\ba
\mbox{For any edge $\tau$ containing $\mathbi{v}$, if $R,S$ are the
cells on each side of $\tau$,}\\
\displaystyle F_{R,\tau,\mathbi{v}}+F_{S,\tau,\mathbi{v}}=0.
\end{array}
\end{equation}
Note that if $\tau$ is an edge on $\partial\Omegamega$, $u_\tau$
is not eliminated but fixed by the value of $\bu_b$
(Neumann boundary conditions are also easily handled,
either by imposing the value of $F_{K,\tau,\mathbi{v}}$ whenever $\tau$ is a boundary edge
or by using -- which is equivalent -- ghosts cells outside $\Omegamega$\cite{AAV98-I,AAV02}).
{}From the construction \eqref{eq-grad}-\eqref{eq-flux} of the sub-fluxes,
\eqref{elim-O} gives a linear square system on the edge unknowns
$u_\sigma,u_{\sigma'},\ldots$ around $\mathbi{v}$ which is, in general, invertible and gives an
expression of these edge unknowns in terms of the cell unknowns $u_K,u_L,\ldots$ around $\mathbi{v}$.
Plugged into \eqref{eq-grad}-\eqref{eq-flux}, these expressions of the edge unknowns
give formulas for the sub-flux $F_{K,\sigma,\mathbi{v}}$ using only the cell unknowns
$u_K,u_L,\ldots$ around $\mathbi{v}$. The same procedure performed
from the other vertex $\mathbi{v}'$ of $\sigma$ gives a second sub-flux $F_{K,\sigma,\mathbi{v}'}$.
The global flux through $\sigma$, that is
$F_{K,\sigma}=F_{K,\sigma,\mathbi{v}}+F_{K,\sigma,\mathbi{v}'}$,
is therefore a function of all the unknowns $u_K,u_L,\ldots$
in all the cells around $\mathbi{v}$ and $\mathbi{v}'$.
By construction, $(F_{K,\sigma})_{K\in\mathcal M\,,\;\sigma\in\mathcal E_K}$
naturally satisfy the conservativity
equation \eqref{cf} and the O-scheme is thus obtained by only imposing
the balance equation \eqref{bf}.
\begin{remark}[Two edge unknowns per edge] The elimination of the edge unknowns
is performed locally around each vertex $\mathbi{v}$ and the continuity at the edge midpoints is
only enforced when eliminating the edge unknowns around $\mathbi{v}$.
The edge unknown $u_\sigma$ at $\overline{\mathbi{x}}_\sigma$ when viewed from vertex $\mathbi{v}$ therefore may
be different from the edge
unknown at $\overline{\mathbi{x}}_\sigma$ viewed from the other vertex $\mathbi{v}'$ of $\sigma$.
This may look strange, as there is no particular reason for
${\overline{u}}$ to have different values at $\overline{\mathbi{x}}_\sigma$, but this comes from the
construction of the MPFA method which cannot assume that the
linear variations of ${\overline{u}}$ in $K_\mathbi{v}$ and in $K_{\mathbi{v}'}$ have the same
value at $\overline{\mathbi{x}}_\sigma$ (otherwise, some flux conservativity equations could
not be satisfied).
\end{remark}
The generalisation of this construction to 3D polyhedral cells is pretty straightforward\cite{AAV06}
if we assume that
\begin{equation}\label{assum-3D}
\mbox{for each cell $K$ and each vertex $\mathbi{v}$ of $K$,
exactly 3 faces of $K$ meet at $\mathbi{v}$.}
\end{equation}
In this case, the sub-cell
$K_\mathbi{v}$ is the hexahedron obtained by joining $\mathbi{v}$, $\mathbi{x}_K$, the midpoints of edges
of $K$ having $\mathbi{v}$ as vertex and the three centres of gravity $\overline{\mathbi{x}}_\sigma$,
$\overline{\mathbi{x}}_{\sigma'}$ and $\overline{\mathbi{x}}_{\sigma''}$ of the faces of $K$
meeting at $\mathbi{v}$. Three temporary unknowns $u_\sigma$, $u_{\sigma'}$ and $u_{\sigma''}$
are introduced at the centres of gravity of the faces and,
assuming that the vectors $\vect{\mathbi{x}_K\overline{\mathbi{x}}_\sigma}$, $\vect{\mathbi{x}_K\overline{\mathbi{x}}_{\sigma'}}$ and
$\vect{\mathbi{x}_K\overline{\mathbi{x}}_{\sigma''}}$ are linearly independent, the three equations
\eqref{lin-grad} for $\tau=\sigma$, $\sigma'$ and $\sigma''$ can be solved
for $\nabla_{K_\mathbi{v}}u$, which is thus computed in terms of $u_K,u_\sigma,u_{\sigma'}$
and $u_{\sigma''}$.
The rest of the construction follows as in 2D, the edge
unknowns being eliminated thanks to the sub-fluxes conservativity.
\begin{remark}
This procedure even allows for non-planar faces (which often occurs
in hexahedral meshes in 3D, as the four vertices of a given face may
not be on the same plane), provided that the vectors $\mathbf{n}_{K,\sigma}$
are defined as the mean value on $\sigma$ of the pointwise normal vector to the face\cite{AAV02,AAV06}.
\end{remark}
Construction of an MPFA O-method on 3D meshes is much less obvious
when \eqref{assum-3D} does not hold. In this case,
for some vertices $\mathbi{v}$ the system \eqref{lin-grad} has 4 or more equations and,
since (in general) the gradient $\nabla_{K_\mathbi{v}}u$ is entirely determined by $u_K$ and
only 3 face unknowns, the other face unknowns will be fixed
by those 3 face unknowns. No degrees of freedom then remain to impose
the conservativity of the corresponding sub-fluxes.
Ref. \refcite{AGE10} however introduces a scheme on general polygonal
or polyhedral meshes (without assuming \eqref{assum-3D}),
which coincides with the MPFA O-method in 2D and in 3D when
\eqref{assum-3D} holds. This reference also presents a new formulation of the O-method,
based on a discrete form of the variational formulation \eqref{basew}
rather than on a flux balance \eqref{bf}.
\begin{remark}
Explicit formulas for the fluxes in terms of the cell unknowns can
be obtained\cite{AAV02} in the case of parallelogram or parallelepiped meshes
and $\Lambda$ constant. In other cases, System \eqref{elim-O} has to be numerically solved.
\end{remark}
\begin{remark}
For non-conforming meshes such as the ones appearing in reservoirs with
faults, this MPFA O-method leads to unacceptable fluxes and must
therefore be modified\cite{AAV01},
by introducing two linear approximations of ${\overline{u}}$ in some sub-cells $K_\mathbi{v}$.
\end{remark}
\subsection{L- and G-methods}\label{sec:LG}
As already mentioned, many choices are available to compute consistent
conservative fluxes from piecewise linear approximations of ${\overline{u}}$ around each vertex.
Another well-studied MPFA method is the L-method,
introduced in Ref. \refcite{AAV08} for quadrilateral meshes.
The major differences of the L-method with respect to the
O-method are: (i) no edge unknowns need to be introduced, as
the gradients themselves are the additional unknowns to eliminate,
(ii) the continuity and sub-flux conservativity equations are written
only on 2 edges, (iii) the continuity of the piecewise linear approximation
is imposed on whole edges (not only at edge midpoints),
and (iv) the gradients and piecewise linear approximation constructed on sub-cells $K_\mathbi{v}$, $L_\mathbi{v}$,
$\ldots$, depend on the edge $\sigma$ through which we want to compute the flux
and are thus not common to all sub-fluxes around $\mathbi{v}$.
Still using the notations in Fig. \ref{fig:mpfa},
let us consider the sub-flux $F_{K,\sigma,\mathbi{v}}$ and let us introduce
$\nabla^\sigma_{M_\mathbi{v}}u$, $\nabla^\sigma_{K_\mathbi{v}}u$ and $\nabla^\sigma_{L_\mathbi{v}}u$,
the three constant gradients of a piecewise linear approximation of ${\overline{u}}$
on $M_\mathbi{v}\cup K_\mathbi{v}\cup L_\mathbi{v}$. As mentioned above, these gradients will only
be used to compute $F_{K,\sigma,\mathbi{v}}$ and other gradients would be used
if we were to compute $F_{K,\sigma',\mathbi{v}}$ for example (hence the
superscript $\sigma$ in $\nabla^\sigma_{M_\mathbi{v}}u$, $\nabla^\sigma_{K_\mathbi{v}}u$, $\nabla^\sigma_{L_\mathbi{v}}u$).
In the L-method, full continuity is imposed for this approximation:
\begin{equation}\label{full-cont}
\ba
\displaystyle \forall x\in [\mathbi{v}\overline{\mathbi{x}}_{\sigma'}]\,:\;u_K+(\nabla^\sigma_{K_\mathbi{v}}u)\cdot(x-\mathbi{x}_K)=
u_M+(\nabla^\sigma_{M_\mathbi{v}}u)\cdot(x-\mathbi{x}_M)\\
\displaystyle \forall x\in [\mathbi{v}\overline{\mathbi{x}}s]\,:\;u_K+(\nabla^\sigma_{K_\mathbi{v}}u)\cdot(x-\mathbi{x}_K)
=u_L+(\nabla^\sigma_{L_\mathbi{v}}u)\cdot(x-\mathbi{x}_L)
\end{array}
\end{equation}
These equations can be equivalently written only at $\mathbi{v}$, $\overline{\mathbi{x}}_{\sigma'}$
and $\mathbi{v}$, $\overline{\mathbi{x}}s$ respectively, and they provide 4 conditions on the 6 degrees of freedom of
the 3 gradients. The sub-flux conservativities give the remaining 2 equations
\begin{equation}\label{cons-L}
\ba
\displaystyle \Lambda_K \nabla^\sigma_{K_\mathbi{v}}u\cdot\mathbf{n}_{K,\sigma'}
+\Lambda_M \nabla^\sigma_{M_\mathbi{v}}u\cdot\mathbf{n}_{M,\sigma'}=0\\
\displaystyle \Lambda_K \nabla^\sigma_{K_\mathbi{v}}u\cdot\mathbf{n}_{K,\sigma}
+\Lambda_L \nabla^\sigma_{L_\mathbi{v}}u\cdot\mathbf{n}_{L,\sigma}=0.
\end{array}
\end{equation}
System \eqref{full-cont}-\eqref{cons-L} is therefore square
and invertible in general (otherwise, work\-arounds can be
designed\cite{AGE10-2}). The local gradients can then be
expressed in terms of the cell unknowns $u_M$, $u_K$ and $u_L$,
and so can the sub-flux $F_{K,\sigma,\mathbi{v}}=-{\rm dist}(\mathbi{v},\overline{\mathbi{x}}s)\Lambda_K \nabla^\sigma_{K_\mathbi{v}}u\cdot
\mathbf{n}_{K,\sigma}$.
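For readers who prefer to see this elimination spelled out, here is a minimal NumPy sketch of the local computation for an interior edge (the names are illustrative: \texttt{xbar\_s} and \texttt{xbar\_sp} stand for $\overline{\mathbi{x}}s$ and $\overline{\mathbi{x}}_{\sigma'}$, \texttt{n\_K\_s} for $\mathbf{n}_{K,\sigma}$, etc.). It assembles the four continuity equations of \eqref{full-cont}, written at the endpoints of the two segments, together with the two conservativity equations \eqref{cons-L}, and solves for the three gradients.
\begin{verbatim}
import numpy as np

def l_method_subflux(xM, xK, xL, v, xbar_sp, xbar_s,
                     uM, uK, uL, LamM, LamK, LamL,
                     n_M_sp, n_K_sp, n_K_s, n_L_s):
    # Unknowns: the three gradients (gM, gK, gL), i.e. 6 scalars in 2D.
    A = np.zeros((6, 6)); b = np.zeros(6)
    # Continuity of the K/M approximations at v and at xbar_sp.
    for i, p in enumerate([v, xbar_sp]):
        A[i, 0:2] = -(p - xM); A[i, 2:4] = p - xK; b[i] = uM - uK
    # Continuity of the K/L approximations at v and at xbar_s.
    for i, p in enumerate([v, xbar_s]):
        A[2+i, 2:4] = p - xK; A[2+i, 4:6] = -(p - xL); b[2+i] = uL - uK
    # Conservativity of the sub-fluxes through sigma' and sigma.
    A[4, 0:2] = LamM @ n_M_sp; A[4, 2:4] = LamK @ n_K_sp
    A[5, 2:4] = LamK @ n_K_s;  A[5, 4:6] = LamL @ n_L_s
    gM, gK, gL = np.split(np.linalg.solve(A, b), 3)
    # Sub-flux through [v, xbar_s], computed with the gradient in K_v.
    return -np.linalg.norm(xbar_s - v) * (LamK @ gK) @ n_K_s
\end{verbatim}
In the degenerate configurations mentioned above, the local matrix may be singular and the workarounds of Ref. \refcite{AGE10-2} must then be used.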
\begin{remark}
If $\sigma$ or $\sigma'$ is a boundary edge, then the corresponding
right-hand side in \eqref{full-cont} is fixed by the value of $\bu_b$ and
the corresponding conservativity equation in \eqref{cons-L} is removed.
System \eqref{full-cont}-\eqref{cons-L} remains square, of size 4
(if only one edge is a boundary edge) or 2 (if both $\sigma$ and $\sigma'$ are boundary edges).
\end{remark}
This is however but one choice that can be made to compute the flux through $\sigma$.
Another natural choice would be to use the edges $\sigma$ and $\sigma''$ instead of $\sigma$
and $\sigma'$ in \eqref{full-cont}-\eqref{cons-L}. This would give another
sub-flux $F_{K,\sigma,\mathbi{v}}$ in terms of $u_K$, $u_L$, $u_N$.
In the L-method, the choice between using $\sigma,\sigma'$ or $\sigma,\sigma''$
is made according to a criterion\cite{AAV08} involving transmissibility signs and ensuring
that each cell unknown $u_M,u_K,u_L$ or $u_K,u_L,u_N$ contributes with the most
physically-relevant sign to the sub-flux through $[\mathbi{v}\overline{\mathbi{x}}s]$.
Full formulas can be obtained\cite{AAV08} in the case of homogeneous
media and grids made of parallelograms and, in the case
of moderate skewness of the diffusion tensor and the grid, the chosen
criterion indeed leads to the correct signs.
\begin{remark}The L-method does not suffer from the same issues
(and does not need modification) as the original
MPFA O-method on meshes with faults\cite{AAV08}.\end{remark}
A generalisation of the L-method, the G-method, has been proposed in Ref. \refcite{AGE10-2}.
Its principles are the same (full continuity of ${\overline{u}}$ and conservativity of the
fluxes on some edges), but the above selection criterion is not applied and
the global fluxes through $\sigma$ are built as convex combinations of all possible
sub-fluxes through this edge. These combinations are chosen according to
some local index, designed to improve the coercivity properties of the
scheme.
\begin{remark} Contrary to the O-scheme, construction of
the L- and G-scheme on general 3D polyhedral meshes is straightforward\cite{AGE10-2}.
Indeed, no face unknown is introduced and there is always, whatever the number of faces
that meet at a given vertex, enough degrees of freedom (one local constant gradient
per face which contains the vertex) to impose the local conservativity of sub-fluxes.
\end{remark}
\begin{remark}
The MPFA U-method\cite{AAV98-I} is based on principles a bit similar to the
L-method, computing the flux through $[\mathbi{v}\overline{\mathbi{x}}s]$ by mixing
midpoint continuity \eqref{lin-grad} (at $\overline{\mathbi{x}}s$) and the full continuity
on $[\mathbi{v}\overline{\mathbi{x}}_{\sigma'}]$ and $[\mathbi{v}\overline{\mathbi{x}}_{\sigma''}]$ (as in \eqref{full-cont}).
The local gradients also depend on the edge $\sigma$ through which we compute
the sub-flux.
\end{remark}
\subsection{Coercivity and convergence of MPFA methods}
MPFA methods are linearly exact, and therefore consistent in the sense \eqref{flux-cons},
but they are not coercive in general. Using reference elements (or curvilinear coordinates)
such as in Finite Element methods, constructions of symmetric definite positive MPFA O-methods have been
proposed on quadrilateral (hexahedral in 3D) meshes in Refs. \refcite{AAV02,AAV07,EDW08-II}
and on general 2D polygonal meshes in Ref. \refcite{FRI08}.
However, these methods turn out to be numerically less stable than the
MPFA O-method presented above\cite{AAV07} (constructed in physical space).
Convergence of these reference element-based O-methods sometimes even seems to
be lost in the presence of anisotropy or on perturbed meshes,
whereas the O-method constructed in physical space still converges\cite{AAV06,AAV07,KLA06}. A reason for this
loss of convergence, in view of Sec. \ref{sec:analysis},
is probably the following\cite{AAV07}: when constructing the method on a reference
mesh, the coercivity properties of the scheme matrix depend on the
mesh regularity (via the Piola mapping) and may
degenerate for strongly perturbed meshes as the mesh size tends to $0$,
thus preventing one from establishing energy estimates in a proper discrete $H^1_0$ norm
for which the compactness result of Step \textsf{(C2)} in Sec. \ref{sec:analysis}
would hold.
It has been proved that the physical O-method is coercive (and
gives a symmetric definite positive matrix) on meshes made of parallelograms (parallelepiped in 3D)
with $(\mathbi{x}_K)_{K\in\mathcal M}$ the centres of gravity of the cells\cite{AAV06,AGE10}.
This is also true for meshes made of triangles (tetrahedra in 3D),
provided that the unknown $u_\sigma$ used to construct
the piecewise linear approximation of ${\overline{u}}$ in $K_\mathbi{v}$ is
not located at $\overline{\mathbi{x}}s$ but closer to $\mathbi{v}$ (see Refs. \refcite{AGE10,LPO05}).
Except in those particular instances,
proofs of convergence of MPFA methods are always done by \emph{assuming}
some coercivity property.
Ref. \refcite{KLA06} compares the MPFA O-method on 2D quadrilateral
meshes to a non-symmetric Mixed Finite Element method (using a particular quadrature
rule) and obtains, under a global coercivity assumption on the system matrix,
$\mathcal O(h_{\mathcal M})$ error estimates for the approximate
solution and fluxes, under the assumptions $\Lambda\in
C^1(\overline{\Omega})$ and ${\overline{u}}\in H^2(\Omega)$.
In a recent study\cite{KLA12}, the MPFA O-method is compared on 2D or
3D polyhedral meshes satisfying \eqref{assum-3D} to
a non-symmetric Mimetic Finite Difference method (see Sec. \ref{sec:HMM}).
Under local coercivity assumptions, $\mathcal O(h_{\mathcal M}^{\alpha})$
error estimates are obtained when $\Lambda\in C^1(\overline{\Omega})$
and ${\overline{u}}\in H^{1+\alpha}(\Omega)$ ($\alpha> 1/2$ in 3D).
The regularity assumptions on $\Lambda$ and ${\overline{u}}$ required to establish
these error estimates are not compatible
with usual field applications (see Sec. \ref{sec:analysis}).
It is however possible to perform the full convergence analysis of
the MPFA O- and L-method without assuming any non-physical smoothness
on the data, by following the path sketched in Sec. \ref{sec:analysis}.
This is done in Ref. \refcite{AGE10} for the MPFA O-method and in Ref. \refcite{AGE10-2}
for the MPFA L- and G-method. In these references, the convergence of MPFA
methods on generic grids, in 2D or 3D (without assuming \eqref{assum-3D}),
is proved by only assuming \eqref{hyp-omega}---\eqref{hyp-lambda} and some
local coercivity conditions which can be checked in numerical experiments.
The numerical study of the convergence of MPFA methods has also been performed in
a number of articles\cite{EIG05,AAV06,PAL06}. As
expected, the numerical orders of convergence of the O-method are usually $\mathcal O(h_{\mathcal M}^2)$
for ${\overline{u}}$ and $\mathcal O(h_{\mathcal M})$ for the fluxes, provided that ${\overline{u}}\in H^2$.
If ${\overline{u}}\in H^{1+\alpha}$ with $\alpha\ge 0$, the orders of convergence
seem to be\cite{AAV06} $\min(2,2\alpha)$ for ${\overline{u}}$ and $\min(1,\alpha)$ for
its fluxes ($\min(2,\alpha)$ for the fluxes in case of smooth meshes).
It has nonetheless been noticed\cite{AGE10} that, for anisotropy
ratios (the largest eigenvalue of $\Lambda$ divided by the smallest eigenvalue
of $\Lambda$) of order $1000$ or more, the MPFA O-method no longer seems to converge on distorted grids,
due to its loss of coercivity.
L- and G-methods have similar numerical behaviours, but they
seem more stable than the O-method in presence of
strong anisotropy or on irregular meshes used in basin simulation\cite{AAV08,AGE10-2}.
\subsection{Maximum principle for MPFA methods}
When the mesh satisfies the orthogonality condition \eqref{cond-orth2},
MPFA methods are identical to the TPFA scheme and are therefore monotone.
As mentioned, however, such orthogonality conditions are too restrictive
in practice.
For some particular meshes, such as polygonal meshes whose cells are
the union of triangles satisfying the Delaunay condition
(the interaction regions are then triangles),
the O-method is monotone if $\Lambda$
is constant. In the general case, conditions can be found\cite{EIG02} on the
triangle angles and the diffusion tensor to ensure that the
O-method gives rise to an M-matrix, and these conditions can be used
to modify the positions of the mesh vertices in order to try and get
an M-matrix. However, for large anisotropy ratios, such a modification may fail.
In most cases, the L-method displays better monotony properties
than the O-method. The sufficient conditions of Ref. \refcite{NOR07}
(see below) are satisfied by the L-method on a larger class of meshes and tensors
than for the O-method and, even in cases where monotony is violated,
the L-method seems to present far fewer oscillations than the O-method\cite{AAV08}.
One way to mitigate the problem of large anisotropy in the O-method,
which leads to non-monotony and inaccuracies, is to apply a stretching\cite{AAV98-II}
of the physical space to reduce the anisotropy ratio of $\Lambda$. This
stretching does not seem necessary for regular hexagonal meshes
but mandatory for triangular meshes when the anisotropy ratio is
larger than 10.
The inaccuracy of the O-method in case of strong anisotropy can also be
reduced by using a variant of the MPFA O-method introduced (in 2D)
separately in Ref. \refcite{CHE08} under the name ``Enriched MPFA O-method'' (EMPFA)
and in Ref. \refcite{EDW08} under the name ``Full pressure support scheme'' (FPS).
This method relaxes the constraints on edge and cell unknowns by adding
vertex unknowns, which gives enough degrees of freedom to
assume the full continuity of the approximation of ${\overline{u}}$ on the sub-edges (not only at midpoints).
This approximation is taken either piecewise linear (on the triangles $\overline{\mathbi{x}}s \mathbi{v} \mathbi{x}_K$,
$\mathbi{x}_K \mathbi{v} \overline{\mathbi{x}}_{\sigma'}$, etc.) or piecewise bilinear (on the subcells
$\mathbi{v} \overline{\mathbi{x}}_{\sigma'} \mathbi{x}_K\overline{\mathbi{x}}s$, $\mathbi{v} \overline{\mathbi{x}}s \mathbi{x}_L \overline{\mathbi{x}}_{\sigma''}$, etc.) and
the new vertex unknown at $\mathbi{v}$ is eliminated by integrating \eqref{base} on a small
domain around $\mathbi{v}$. The monotony (using M-matrix conditions introduced in Ref. \refcite{EDW98})
and coercivity of the bilinear variant are analysed for quadrangular meshes
in Ref. \refcite{EDW08} and for triangular meshes in Ref. \refcite{FRI11}.
However, even if the EMPFA/FPS method improves the monotony properties
of the O-method in a number of numerical tests,
it remains unstable (non coercive) in case of strong anisotropy\cite{TRU09}.
According to Refs. \refcite{EDW08,FRI11}, these improved monotony properties
stem from imposing the continuity of the approximation
on whole sub-edges, which prevents the EMPFA/FPS method from displaying
decoupling properties of the O-method shown to be the cause of spurious oscillations.
As mentioned above, the L-method also imposes continuity along full edges and
presents improved monotony characteristics with respect to the O-method
(its extension to 3D meshes moreover appears to be more straightforward than the extension
of the cell-centred EMPFA/FPS method). However, to the best of our knowledge,
numerical or theoretical comparisons of the EMPFA/FPS and L methods
still remain to be done.
A series of interesting results deserves to be mentioned here on the issue
of the monotony of generic 9-point schemes on quadrilateral grids
(which contain the MPFA methods). Sufficient conditions\cite{NOR05,NOR07}
for the monotony of such schemes can be obtained if $\Lambda$
is constant, which provide guidance to generate
meshes on which MPFA methods are monotone, and also show that
7-point methods (such as the L-method) enjoy better monotony
properties in general\cite{AAV08}. These results also prove\cite{KEI09}
that no linear 9-point scheme on generic quadrilateral meshes, which is
exact on linear solutions, can be monotone for any $\Lambda$ (this
has already been noticed, under another form, in Ref. \refcite{KER81}).
\subsection{To summarise: MPFA methods}
The main strengths of MPFA methods are their cell-centred nature
and the local computation of the fluxes
(only cell unknowns close to an edge are used in the computation of the flux
across this edge), which lead to acceptable stencils: 9 points on 2D quadrilateral meshes,
27 points on 3D hexahedral meshes.
A (small) disadvantage is the necessity to solve local
systems to eliminate the edge/gradient unknowns; these systems may prove non-invertible
in some cases and therefore require a local modification of the
method\cite{AGE10-2,VOH06}. This however seems to happen relatively
rarely and most numerical tests presented in the literature run without this issue.
A more undesirable characteristic of MPFA methods is their \emph{conditional}
coercivity and monotony. Despite numerous works on the topic, it is not
always obvious to establish \emph{a priori} the range of coercivity or
monotony of an MPFA method on a generic mesh or with a generic diffusion
tensor. As a consequence, unforeseen instabilities and loss of convergence may
occur.
The question therefore remains to find a FV method which would be
unconditionally coercive and monotone on any type of mesh...
\section{HMM methods}\label{sec:HMM}
Hybrid Mimetic Mixed (HMM) methods are made up of three families of
methods, separately developed in the last ten years or so:
the Hybrid Finite Volume method\cite{EYM10} (HFV),
the Mimetic Finite Difference method\cite{BRE05-I,BRE05-II} (MFD) and the
Mixed Finite Volume method\cite{DRO06} (MFV). It has recently been understood\cite{DRO10}
that all these methods are in fact identical and, therefore, that any analysis
made for one also applies to the other two.
In HMM methods, the main unknowns are cell unknowns $(u_K)_{K\in\mathcal M}$
and edge unknowns $(u_\sigma)_{\sigma\in\mathcal E}$ (approximations of
$({\overline{u}}(\overline{\mathbi{x}}s))_{\sigma\in\mathcal E}$ where, as in Sec. \ref{sec:MPFA},
$\overline{\mathbi{x}}s$ is the centre of gravity of $\sigma$). Of
the three families gathered in HMM methods, MFV methods are the ones
with the most classical FV presentation, involving imposed balance and conservativity
equations \eqref{bf}-\eqref{cf}. Contrary to MPFA methods, edge unknowns are not
eliminated and the computation of the fluxes is made through
local inner products, thus ensuring the coercivity of the scheme.
For given fluxes $F_K=(F_{K,\sigma})_{\sigma\in\mathcal E_K}$ on $\partial K$, we introduce
the vector
\begin{equation}\label{defvmfv}
\nabla^{\rm hmm}_K(F_K)=-\frac{1}{|K|}\Lambda_K^{-1}\sum_{\sigma\in\mathcal E_K}F_{K,\sigma}(\overline{\mathbi{x}}s-\mathbi{x}_K).
\end{equation}
Stokes' formula shows that if ${\overline{u}}$ is linear in $K$
and $F_{K,\sigma}=-|\sigma|\Lambda_K\nabla{\overline{u}}_{|K}\cdot \mathbf{n}_{K,\sigma}$, then
$\nabla^{\rm hmm}_K(F_K)=\nabla{\overline{u}}_{|K}$. Hence, $\nabla^{\rm hmm}_K(F_K)$ can be considered as a consistent approximation
of $\nabla{\overline{u}}$ on $K$.
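For the reader's convenience, let us record the short computation behind this consistency property. Stokes' formula applied to the coordinate functions, combined with $\sum_{\sigma\in\mathcal E_K}|\sigma|\mathbf{n}_{K,\sigma}=0$, gives the geometric identity
\[
\sum_{\sigma\in\mathcal E_K}|\sigma|\,(\overline{\mathbi{x}}s-\mathbi{x}_K)\,\mathbf{n}_{K,\sigma}^T=|K|\,{\rm Id},
\]
so that, if $F_{K,\sigma}=-|\sigma|\Lambda_K\nabla{\overline{u}}_{|K}\cdot \mathbf{n}_{K,\sigma}$ for all $\sigma\in\mathcal E_K$,
\[
\nabla^{\rm hmm}_K(F_K)=\frac{1}{|K|}\Lambda_K^{-1}
\Big(\sum_{\sigma\in\mathcal E_K}|\sigma|(\overline{\mathbi{x}}s-\mathbi{x}_K)\mathbf{n}_{K,\sigma}^T\Big)
\Lambda_K\nabla{\overline{u}}_{|K}=\nabla{\overline{u}}_{|K}.
\]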
Letting
\begin{equation}
T_K(F_K)=(T_{K,\sigma}(F_K))_{\sigma\in\mathcal E_K}\mbox{ with }
T_{K,\sigma}(F_K)=\frac{1}{|\sigma|}F_{K,\sigma}+\Lambda_K \nabla^{\rm hmm}_K(F_K)\cdot\mathbf{n}_{K,\sigma},
\label{defpenmfv}
\end{equation}
the following local inner product is defined
\begin{equation}
[F_K,G_K]_K=|K|\nabla^{\rm hmm}_K(F_K)\cdot\Lambda_K\nabla^{\rm hmm}_K(G_K)+
T_K(G_K)^T\mathbb{B}_K T_{K}(F_K)
\label{defpslocalmfv}\end{equation}
(where $\mathbb{B}_K$ is a symmetric definite positive matrix)
and the relation between the fluxes and the cell and edge unknowns is
\begin{equation}
\forall G_K=(G_{K,\sigma})_{\sigma\in\mathcal E_K}\in \mathbb R^{\mathcal E_K}\,:\;
[F_K,G_K]_K=\sum_{\sigma\in\mathcal E_K}(u_K-u_\sigma)G_{K,\sigma}.
\label{lienpFmfv}\end{equation}
An MFV scheme is defined by \eqref{bf}-\eqref{cf}-\eqref{defpenmfv}-\eqref{defpslocalmfv}-\eqref{lienpFmfv}
for some choices of $(\mathbb{B}_K)_{K\in\mathcal M}$,
with Dirichlet boundary conditions enforced by imposing the value of
$u_\sigma$ if $\sigma\in\mathcal E_{\rm ext}$.
Neumann boundary conditions are as easily considered\cite{CHA07} by imposing
the value of $F_{K,\sigma}$ for all $\sigma\in\mathcal E_{\rm ext}$.
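In practice, the local inner product and the flux computation reduce to small dense linear algebra in each cell. The following NumPy sketch (illustrative names; \texttt{face\_centroids} are the points $\overline{\mathbi{x}}s$ and \texttt{normals} the vectors $\mathbf{n}_{K,\sigma}$) builds the matrix $\mathbb{M}_K$ of $[\cdot,\cdot]_K$ from \eqref{defvmfv}-\eqref{defpenmfv}-\eqref{defpslocalmfv}, and recovers the fluxes of a cell from its cell and edge unknowns through \eqref{lienpFmfv}.
\begin{verbatim}
import numpy as np

def hmm_local_matrix(xK, face_centroids, face_areas, normals, volume, LamK, BK):
    d, nf = len(xK), len(face_areas)
    # Linear map F_K -> grad(F_K):  grad = G @ F_K.
    G = np.zeros((d, nf))
    for i in range(nf):
        G[:, i] = -(face_centroids[i] - xK) / volume
    G = np.linalg.inv(LamK) @ G
    # Linear map F_K -> T_K(F_K):  T_K(F_K) = T @ F_K.
    T = np.diag(1.0 / np.asarray(face_areas, dtype=float))
    for i in range(nf):
        T[i, :] += normals[i] @ (LamK @ G)
    # [F_K, G_K]_K = G_K^T M_K F_K.
    return volume * G.T @ LamK @ G + T.T @ BK @ T

def local_fluxes(MK, uK, u_faces):
    # Relation between fluxes and unknowns: M_K F_K = (u_K - u_sigma)_sigma.
    return np.linalg.solve(MK, uK - np.asarray(u_faces))
\end{verbatim}
The global MFV system is then obtained by writing the balance \eqref{bf} and the conservativity \eqref{cf} with these fluxes.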
\begin{remark} For a given edge $\sigma\in\mathcal E_K$, using $G_K(\sigma)=
(\delta_{\sigma,\sigma'})_{\sigma'\in\mathcal E_K}$ ($\delta_{\sigma,\sigma'}$ being
Kronecker's symbol), we can see\cite{DRO10} that
\begin{equation}
u_\sigma-u_K=\nabla^{\rm hmm}_K(F_K)\cdot (\overline{\mathbi{x}}s-\mathbi{x}_K)-T_K(G_K(\sigma))^T\mathbb{B}_KT_K(F_K).
\label{p_increments}\end{equation}
Given that $T_K$ vanishes on exact fluxes of linear functions
and that $\nabla^{\rm hmm}_K(F_K)\approx \nabla{\overline{u}}_{|K}$, \eqref{p_increments} shows that
\eqref{lienpFmfv} is a Taylor expansion
with second order remainder.
\end{remark}
MFD methods are constructed starting from \eqref{lienpFmfv} and looking for
inner products $[\cdot,\cdot]_K$ which satisfy the following consistency condition
(discrete Stokes' formula): for every affine function $q$ and all
$G_K=(G_{K,\sigma})_{\sigma\in\mathcal E_K}\in\mathbb R^{\mathcal E_K}$,
\begin{equation}\label{consmfd}
[(\Lambda\nabla q)^I,G_K]_K+\int_K q(x)({\rm div}_K G_K)\,{\rm d} x =
\sum_{\sigma\in\mathcal E_K}\frac{1}{|\sigma|}G_{K,\sigma}
\int_{\sigma}q(x){\rm d} S(x),
\end{equation}
where $((\Lambda\nabla q)^I)_{K,\sigma}=|\sigma|\Lambda_K\nabla q_{|K}\cdot\mathbf{n}_{K,\sigma}$
and ${\rm div}_K G_K=\frac{1}{|K|} \sum_{\sigma\in\mathcal E_K}G_{K,\sigma}$
is the natural discrete divergence of the discrete vector field $G_K$.
{}From the consistency condition \eqref{consmfd}, an algebraic decomposition of
the matrix of $[\cdot,\cdot]_K$(\footnote{i.e. the matrix $\mathbb{M}_K$
such that $[F_K,G_K]_K=G_K^T\mathbb{M}_KF_K$.}) can be obtained\cite{BRE05-II} and used
to prove\cite{DRO10} that any inner product satisfying \eqref{consmfd} has the form
\eqref{defpslocalmfv} for some symmetric positive definite $\mathbb{B}_K$.
Relation \eqref{lienpFmfv} can be inverted to express the fluxes in terms
of the cell and edge unknowns and eliminate them. By doing so, we obtain\cite{DRO10}
the HFV scheme. To write down this formulation of the HMM
methods, we introduce for any given vector $u=((u_K)_{K\in\mathcal M},
(u_\sigma)_{\sigma\in\mathcal E})$ the following discrete gradient in $K$:
\begin{equation}
\nabla_K u=\frac{1}{|K|}\sum_{\sigma\in\mathcal E_K}|\sigma|(u_\sigma-u_K)\mathbf{n}_{K,\sigma}.
\label{defgradhfv}\end{equation}
Stokes' formula shows that this gradient is exact if the vector $u$ interpolates
a linear function at $(\mathbi{x}_K)_{K\in\mathcal M}$, $(\overline{\mathbi{x}}s)_{\sigma\in\mathcal E}$
(it can also be seen\cite{DRO10} that if $u$ and
$F_K$ are related by \eqref{lienpFmfv} then $\nabla_K u=\nabla^{\rm hmm}_K(F_K)$).
The function
\begin{equation}
S_K(u)=(S_{K,\sigma}(u))_{\sigma\in\mathcal E_K}\mbox{ with }
S_{K,\sigma}(u)=u_\sigma-u_K-\nabla_K u\cdot(\overline{\mathbi{x}}s-\mathbi{x}_K)
\label{defSk}\end{equation}
is therefore a first order Taylor expansion, which vanishes
on interpolants of linear functions. The formulation of the HFV method
is then: find $u=((u_K)_{K\in\mathcal M},(u_\sigma)_{\sigma\in\mathcal E})$
(where $u_\sigma$ is fixed by $\bu_b$ if $\sigma\in\mathcal E_{\rm ext}$)
such that, for any vector $v=((v_K)_{K\in\mathcal M},(v_\sigma)_{\sigma\in\mathcal E})$ with
$v_\sigma=0$ if $\sigma\in\mathcal E_{\rm ext}$,
\begin{equation}
\sum_{K\in\mathcal M}|K|\Lambda_K \nabla_K u\cdot\nabla_K v +\sum_{K\in\mathcal M} S_K(v)^T
\widetilde{\mathbb{B}}_K S_{K}(u) =\sum_{K\in\mathcal M}v_K\int_K f,
\label{defhfv2}
\end{equation}
where $(\widetilde{\mathbb{B}}_K)_{K\in\mathcal M}$ are symmetric positive
definite matrices (which depend on the matrices $(\mathbb{B}_K)_{K\in\mathcal M}$
in \eqref{lienpFmfv}). This formulation is clearly a discretisation of the
weak formulation \eqref{basew} of \eqref{base}.
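The local computations are equally simple in this hybrid form; with the same caveats on naming as above, the contribution of one cell to the left-hand side of \eqref{defhfv2} can be sketched as follows.
\begin{verbatim}
import numpy as np

def hfv_local_form(xK, face_centroids, face_areas, normals, volume,
                   LamK, BtildeK, uK, u_faces, vK, v_faces):
    def grad(uc, uf):            # discrete gradient of (defgradhfv)
        return sum(face_areas[i] * (uf[i] - uc) * normals[i]
                   for i in range(len(face_areas))) / volume
    def stab(uc, uf, g):         # stabilisation residuals of (defSk)
        return np.array([uf[i] - uc - g @ (face_centroids[i] - xK)
                         for i in range(len(face_areas))])
    gu, gv = grad(uK, u_faces), grad(vK, v_faces)
    Su, Sv = stab(uK, u_faces, gu), stab(vK, v_faces, gv)
    return volume * (LamK @ gu) @ gv + Sv @ (BtildeK @ Su)
\end{verbatim}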
\begin{remark} The original MFV, MFD and HFV methods are slightly less general than
the ones presented here. The original MFV method writes \eqref{p_increments} with
a different (stronger) stabilisation, the original MFD method only considers
the case where $\mathbi{x}_K$ is the centre of gravity of $K$, and the original HFV
method is only written using diagonal matrices $\widetilde{\mathbb{B}}_K$.
Most of the analysis developed for each of these three methods however extends to the
general HMM method.
\end{remark}
\subsection{Coercivity and convergence of HMM methods}
HMM methods are built on inner products and are therefore unconditionally
coercive (under natural and not very restrictive assumptions on the mesh
regularity). As a consequence and since they are linearly exact,
they enjoy nice stability and convergence properties.
The path of convergence described in Sec. \ref{sec:analysis} has been
successfully applied to HMM methods in Refs. \refcite{DRO06,EYM10}.
Assuming that $\bu_b=0$ and taking $v=u$ in the discrete variational formulation \eqref{defhfv2}
gives a natural discrete $H^1_0$ norm (the square root of the
left-hand side of the equation), for which one can establish
a Poincar\'e inequality and a discrete Rellich theorem.
The convergence of HMM schemes therefore holds even if $\Lambda$ is discontinuous
and ${\overline{u}}$ only belongs to $H^1$. For
simplicial meshes, the stabilisation term in \eqref{defpslocalmfv} can
be removed\cite{DRO06} (i.e. $\mathbb{B}_K=0$) without
losing the coercivity, although numerical results are
then slightly less accurate.
Nevertheless, numerical tests\cite{BRE05-II,EYM10} indicate that the choice of
$\mathbb{B}_K$ usually plays little role in the accuracy of the scheme,
provided that this matrix is scaled according to some measure of the eigenvalues of
$\Lambda_K$ (e.g. the trace of this tensor) and that its coercivity properties
incorporate geometric information such as face sizes\cite{DRO10} in case of very distorted
meshes\cite{LIP13-p}. Let us however notice that, in some cases,
$\mathbb{B}_K$ can be selected to ensure the monotony of the
HMM method (see Sec. \ref{sec:HMM-monotone}).
This analysis of HMM methods has been extended
to convection-diffusion equations\cite{BEI11}, with various discretisations of the convection
term (centred, upwind, mimetic-based\cite{CAN09}).
General forms of ``automated upwinding'' of the convection, scaled by the local diffusion strength,
are studied in Ref. \refcite{BEI11} and shown to be accurate in all regimes
(diffusion- or convection-dominated).
Numerical experiments also show that much better results are obtained,
in case of strong anisotropy and heterogeneity in a convection-dominated regime, if
the upwinding is made with edge unknowns
rather than cell unknowns (see also Ref. \refcite{DRO10-II} for
the Navier-Stokes equations). This is probably general to many
methods involving edge unknowns, but this would need to be
theoretically and numerically investigated in a more thorough way.
As HMM methods are based on full gradient reconstructions $\nabla^{\rm hmm}_K(F_K)$
or $\nabla_K u$, they are particularly well-suited to non-linear equations
and have been adapted to a number of meaningful models such as
fully non-linear equations of the
Leray-Lions type\cite{DRO06-II} (appearing in particular in models of
non-newtonian fluids), miscible flows in porous media\cite{CHA07}
or the Navier-Stokes equations\cite{DRO09}.
Since the technique in Sec. \ref{sec:analysis} neither
relies on the linearity of the equation nor on the regularity of the solution,
complete convergence analyses of HMM methods for these models are
successfully carried out in these references (along with benchmarking),
under assumptions compatible with applications.
A cell-centred modification (the SUCCES scheme)
of the HMM method, eliminating the edge unknowns by
computing them as convex combinations of cell unknowns,
has been proposed and analysed in Ref. \refcite{EYM10}
for \eqref{base} and in Ref. \refcite{EYM09} for non-linear elliptic equations.
This modification ends up with fewer unknowns than the HMM method (only cell unknowns) and
is still unconditionally coercive, but it has a larger stencil than
MPFA methods and it displays less accurate numerical results on
grids provoking numerical locking or if $\Lambda$ is discontinuous\cite{EYM08}
(in this latter case, accuracy issues can be mitigated by retaining edge unknowns at
the discontinuities, giving rise to the SUSHI scheme).
When $(\mathbi{x}_K)_{K\in\mathcal M}$ are the centres of gravity of the cells,
HMM methods are the original (edge-based) MFD methods and
all results on these methods apply to HMM methods, for example:
convergence rates for smooth data
and super-convergence of $u$ if a proper lifting of the numerical
fluxes exists\cite{BRE05-I,BRE07}, \emph{a posteriori} estimators
usable for mesh refinement\cite{BEI08,BEI08-II},
higher order methods designed to recover optimal orders of convergence
on the fluxes\cite{GYR08,BEI08-I,BEI09}, or extension to non-planar
faces\cite{BRE06,BRE07,LIP06}.
We will not delve into more details here and we refer to Ref. \refcite{LIP13}
for a comprehensive review of MFD methods. One
open issue however seems interesting to mention regarding the extensions of MFD methods
which introduce additional flux unknowns (higher order methods or methods for
non-planar faces). These methods are based on the construction of
local scalar products satisfying a generalisation of the consistency relation \eqref{consmfd}
on the expanded flux space.
Algebraic decompositions of these scalar product matrices are known\cite{BEI08-I,BRE07},
but the question remains open to find expressions of these products
purely based on geometrical quantities, such as in \eqref{defpslocalmfv}.
This would in particular eliminate the need to solve local algebraic problems to construct them.
\begin{remark}[Mixing MPFA and HMM ideas]\label{rem:mix}
In Refs. \refcite{AGE09,EYM12}, the sub-cells flux continuity of the MPFA methods
is combined with the gradient and stabilisation \eqref{defgradhfv}-\eqref{defSk}
of HMM methods (on the same sub-cells, by introducing half-edge unknowns) to construct an unconditionally coercive and convergent
scheme. If the mesh and diffusion tensors are not too skewed,
the sub-cells can be defined using particular harmonic edge points (instead of
$\overline{\mathbi{x}}s$), where the solution can be interpolated using only the two
neighbouring cell values. In this case, the half-edge unknowns can
be eliminated vertex by vertex, as in the O-method, and a 9-point stencil cell-centred
scheme is recovered on quadrilateral meshes.
Another mixing of MPFA and HMM ideas can be found in the method presented in Ref. \refcite{LIP09}.
This method uses, as the MPFA O-method,
additional face unknowns (as many on $\sigma$ as the number of vertices of $\sigma$)
but constructs local ``scalar products'' in each sub-cell around
a given vertex, trying to satisfy the local consistency conditions
\eqref{consmfd}. Except on simplicial meshes, construction of
such consistent coercive scalar products is not theoretically proved,
but when they exist their block structure around each vertex allows one,
as in the O-method, to eliminate the face unknowns
and obtain a coercive method with the same stencil as the O-method.
\end{remark}
\begin{remark}[Mixing HMM, MPFA and dG ideas]
Ref. \refcite{DIP12} proposes a scheme which mixes
HMM, MPFA and dG ideas. This method consists in
constructing a finite-dimensional subspace $V_h$ of piecewise
affine functions, whose gradient in each cell is
given by \eqref{defgradhfv} in which the edge unknowns are
computed from cell unknowns using the elimination technique of the MPFA L-method.
This space $V_h$ is then used in a Finite-Element like discretisation of
\eqref{basew} with a bilinear form including jump penalisations
as in dG methods.
If the edge unknowns are not eliminated then numerical fluxes can
be found\cite{DIP13} such that this scheme satisfies the balance and conservativity
equations \eqref{bf}-\eqref{cf}.
\end{remark}
\subsection{Maximum principle for HMM methods}\label{sec:HMM-monotone}
HMM methods are usually not monotone, even on parallelogram meshes and for
constant $\Lambda$. In simple cases, one can obtain
necessary and/or sufficient conditions on the diffusion tensor and the mesh for the existence
(i.e. a choice of $\mathbb{B}_K$) of a monotone HMM method\cite{LIP11,LIP11-II}.
The idea is to hybridise the method (i.e. eliminate the cell unknowns, see
Sec. \ref{sec:HMM-summary}) and to analyse if the corresponding matrix
is an M-matrix and if the corresponding right-hand side is non-negative
whenever $f\ge 0$.
For simplicial meshes, a necessary and sufficient condition of monotony of
any HMM method is that $\Lambda_K\mathbf{n}_{K,\sigma}\cdot\mathbf{n}_{K,\sigma'}<0$
for all $K\in\mathcal M$ and all $\sigma\not=\sigma'\in\mathcal E_K$
(if $\Lambda$ is isotropic, this comes down to imposing that all angles
of the simplicial meshes are less than $\pi/2$). \emph{Necessary} monotony conditions
can be written for meshes made of parallelograms or parallelepipeds, which turn
out to be identical to the conditions in 2D for
9-point cell-centred schemes\cite{NOR07}. These conditions give insights on how to
construct, using the algebraic point of view of MFD methods,
the matrices of the local scalar products $[\cdot,\cdot]_K$ in \eqref{defpslocalmfv},
but remain to be translated into \emph{geometric} constructions
of proper $\mathbb{B}_K$ matrices. Although similar conditions can also
be written for other types of meshes, such as locally refined rectangular
meshes\cite{LIP11}, a more thorough analysis remains to be done to find necessary
and/or sufficient monotony conditions for HMM methods on generic meshes.
Ref. \refcite{LIP11-II} suggests, in the absence of such an analysis, to use a heuristic
based on constructing $\mathbb{B}_K$ by solving local optimisation problems
which penalise the scalar products $[\cdot,\cdot]_K$ whose matrix is not
an M-matrix.
\subsection{Coercivity vs. Monotony vs. Accuracy}
If a scheme's matrix has negative eigenvalues,
any negative mode will be amplified when the scheme is applied
to a transient equation, thus provoking the explosion of the solution.
Fig. \ref{fig:Gexplose} illustrates this phenomenon when a (non-coercive) G-scheme
and a time-implicit discretisation (involving 150 time steps)
are applied with $\Omega=(0,1)^2$ and final time $T=0.1$ to
$\partial_t {\overline{u}} -{\rm div}(\Lambda\nabla{\overline{u}})=0$ with $\bu_b=0$ and
\[
\Lambda(x,y)=\frac{1}{x^2+y^2}\left(\begin{array}{c@{\quad}c} 10^{-3}x^2+y^2 & (10^{-3}-1)xy\\
(10^{-3}-1)xy & x^2+10^{-3}y^2\end{array}\right)\,,\;\;
u(0,\cdot)=\left\{\begin{array}{ll} 1&\mbox{ on $(\frac{1}{4},\frac{3}{4})^2$},\\
0&\mbox{ elsewhere.}\end{array}\right.
\]
The coercivity of a scheme does not only ensure that it converges as the mesh is refined,
but also that it does not explode in transient cases as shown for the HMM method in Fig. \ref{fig:Gexplose}
(the HMM solution is quite close to the expected solution in this test case).
\begin{figure}[h!]
\begin{center}
\begin{tabular}{ccc}
\includegraphics[width=0.28\linewidth]{instationnaire-grid-pattern.pdf}&
\includegraphics[width=0.28\linewidth]{instationnaire-G-solution2.pdf}&
\includegraphics[width=0.28\linewidth]{instationnaire-HMM-solution2.pdf}\\
Mesh pattern&G-scheme solution,&HMM solution,\\
(mesh=$10\times 10$ reproduction&$\min u=-9\times 10^{240}$&$\min u=-7.9\times 10^{-3}$\\
of this pattern)&
$\max u=7\times 10^{240}$&$\max u=0.52$
\end{tabular}
\caption{\label{fig:Gexplose}Explosion of a non-coercive method applied to a
transient problem.}
\end{center}
\end{figure}
The convergence ensured by the coercivity of a method however does not
mean that it is always accurate (only that it becomes accurate
as the mesh size tends to $0$). For instance, the unconditionally coercive
HMM method may display very bad numerical behaviour in presence
of strong misalignment between the grid directions and the
principal directions of diffusion.
In Fig. \ref{fig:hmmG}, we present the numerical
solutions produced by an HMM method and the G-scheme
for the constant diagonal tensor $\Lambda={\rm diag}(10^4,1)$
and the exact solution ${\overline{u}}(x,y)=x(1-x)y(1-y)$.
The strong oscillations displayed by the HMM
method in this example are probably due to its lack of monotony
properties and to its non-local computations of
the numerical fluxes ($F_{K,\sigma}$ is expressed in terms of \emph{all}
the edge unknowns around $K$, not just unknowns around $\sigma$).
Although it can be checked that the G-scheme is \emph{not} coercive
(and therefore not monotone) on this test case, its
local computation of the fluxes prevents its solution from presenting
spurious oscillations, and therefore seems to improve its
``apparent'' monotony properties.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{ccc}
\includegraphics[width=0.28\linewidth]{forte_ani-20x20-grid.pdf}&
\includegraphics[width=0.28\linewidth]{forte_ani-20x20-hmm.pdf}&
\includegraphics[width=0.28\linewidth]{forte_ani-20x20-gscheme.pdf}\\
Mesh&HMM&G-scheme
\end{tabular}
\caption{\label{fig:hmmG}Numerical test with strong anisotropy ratio.
The G-scheme is not coercive
in this test case.}
\end{center}
\end{figure}
\subsection{To summarise: HMM methods}\label{sec:HMM-summary}
The strength of HMM methods is their unconditional coercivity,
on any mesh and for any diffusion tensor. This is achieved
at the cost of a larger number of unknowns (cell and edge
unknowns) than in MPFA methods, but hybridisation techniques can be applied as in Mixed Finite Element
methods to locally eliminate the cell unknowns and retain only the edge
unknowns. This unconditional coercivity ensures the robustness of
HMM methods (no explosion for transient equations) and provides the means
for full convergence analyses for a vast range of different complex models, involving
non-linearities and non-smooth data and solutions.
HMM methods are however not always monotone and, despite the
large freedom in their construction (through the choice
of the matrices $\mathbb{B}_K$), the analysis of their monotony range
is to date very limited. Another weakness is their relative non-local
computation of the fluxes, as $F_{K,\sigma}$ depends on all
edge unknowns around $K$. Because of this, they may present inaccurate results
on coarse meshes in the presence of strong anisotropy -- although their
unconditional coercivity ensures that, as the mesh is refined, the approximate
solution converges to the exact solution.
The question still remains to find a FV method which would be
unconditionally coercive and monotone on any type of mesh...
\section{DDFV methods}\label{sec:DDFV}
Discrete Duality Finite Volume (DDFV) methods have been introduced
around the early 2000's\cite{HER98,HER00,HER03}, but have been mostly
studied after 2005\cite{DOM05,AND07,BOY08-II}.
The basic idea of DDFV methods in 2D is a bit similar to
MPFA methods and also draws some inspiration from Ref. \refcite{COU99}.
The initial remark is that the two values $u_K$ and $u_L$ around $\sigma$
only give an approximation of the local gradient in the direction
$(\mathbi{x}_K\mathbi{x}_L)$ and are therefore insufficient to obtain an
expression of the whole gradient around $\sigma$ (when the orthogonality condition
\eqref{cond-orth2} does not hold,
the whole gradient is required to compute an approximate flux
${F}_{K,\sigma}$).
So, as in MPFA methods, DDFV methods introduce new unknowns to get
an approximation of the gradient in another direction than
$(\mathbi{x}_K\mathbi{x}_L)$. Using these approximate projections
of the gradient on two independent directions, an approximation of the whole gradient
can be reconstructed in a similar way as \eqref{lin-grad}
defines the gradient \eqref{eq-grad} in MPFA methods.
\begin{figure}[h!]
\begin{center}
\input{fig-ddfv.pdf_t}
\caption{\label{fig:ddfv}DDFV primal meshes (continuous lines: $K$, $L$, $M$),
dual meshes (dashed lines: $P_\mathbi{v}$) and diamonds (filled: $D$).
$\mathbf{n}_{K,\sigma}$ and $\mathbf{n}_{\mathbi{v},\tau}=$ unit normals
to $\sigma=[\mathbi{v},\mathbi{v}']$ and $\tau=[\mathbi{x}_K,\mathbi{x}_L]$.}
\end{center}
\end{figure}
The additional unknowns of DDFV methods are located at the vertices of the mesh
(we denote by $\mathcal V$ the set of vertices and we refer to
Fig. \ref{fig:ddfv} for notations).
{}From the cell $(u_K)_{K\in\mathcal M}$ and
vertex $(u_\mathbi{v})_{\mathbi{v}\in\mathcal V}$ unknowns and since $\vect{\mathbi{v}\mathbi{v}'}$ and
$\vect{\mathbi{x}_K\mathbi{x}_L}$ are linearly independent,
a constant approximate gradient $\nabla_D u$ can be computed
on the diamond $D:={\rm co}(\sigma\cup\{\mathbi{x}_K\})
\cup{\rm co}(\sigma\cup\{\mathbi{x}_L\})$(\footnote{``${\rm co}$'' denotes the convex hull.
Note that the diamond $D$ may be non-convex (this is
the case for the diamond around $\sigma'$ in Fig. \ref{fig:ddfv}).})
by imposing $\nabla_D u\cdot (\mathbi{x}_K-\mathbi{x}_L)=u_K-u_L$ and
$\nabla_D u\cdot (\mathbi{v}-\mathbi{v}')=u_\mathbi{v}-u_{\mathbi{v}'}$, which leads to\cite{DOM05,AND07}
\begin{equation}\label{ddfv-eqgrad}
\begin{array}{lcl}
\displaystyle\nabla_D u &=&\displaystyle \frac{1}{\sin(\widehat{\sigma\tau})}
\left(\frac{u_L-u_K}{{\rm d}(\mathbi{x}_K,\mathbi{x}_L)}\mathbf{n}_{K,\sigma}+\frac{u_{\mathbi{v}'}-u_\mathbi{v}}{{\rm d}(\mathbi{v},\mathbi{v}')}\mathbf{n}_{\mathbi{v},\tau}\right)\\
&=&\displaystyle \frac{1}{2|D|}\left((u_L-u_K){\rm d}(\mathbi{v},\mathbi{v}')\mathbf{n}_{K,\sigma}+(u_{\mathbi{v}'}-u_\mathbi{v}){\rm d}(\mathbi{x}_K,\mathbi{x}_L)\mathbf{n}_{\mathbi{v},\tau}\right),
\end{array}
\end{equation}
where $\widehat{\sigma\tau}$ is the angle between the straight
lines $(\mathbi{x}_K\mathbi{x}_L)$ and $(\mathbi{v}\mathbi{v}')$ and $|D|$ is the area of $D$.
One can then compute an approximate flux
through $\sigma$:
\begin{equation}\label{ddfv-flux1}
F_{K,\sigma}=-|\sigma|\Lambda_D \nabla_D u\cdot\mathbf{n}_{K,\sigma},
\end{equation}
where $\Lambda_D$ is the mean value of $\Lambda$ on $D$. The balance
equations on each cell \eqref{bf} then give as many equations as
the number of cell unknowns. To close the system, it remains to find as many
equations as the number of vertex unknowns, which is simply done by
writing the balance equation on new cells (``dual cells'') constructed around vertices.
A natural choice\cite{DOM05,AND07,BOY08-II,HER00} for the dual
cell around $\mathbi{v}$ is the polygon $P_\mathbi{v}$ which has all
the cell points $\mathbi{x}_K,\mathbi{x}_L,\ldots$ around $\mathbi{v}$ as vertices (in dotted
lines in Fig. \ref{fig:ddfv}). The flux through the edge $\tau=[\mathbi{x}_K,\mathbi{x}_L]$
of $P_\mathbi{v}$ can be computed using the gradient on $D$:
\begin{equation}\label{ddfv-flux2}
F_{\mathbi{v},\tau}=-|\tau|\Lambda_D \nabla_D u\cdot\mathbf{n}_{\mathbi{v},\tau}
\end{equation}
and the balance of these fluxes around a vertex $\mathbi{v}$ reads
\begin{equation}\label{ddfv-bf2}
\sum_{\tau\in \mathcal E_{P_\mathbi{v}}}F_{\mathbi{v},\tau}=\int_{P_\mathbi{v}} f(x){\rm d} x,
\end{equation}
where $\mathcal E_{P_\mathbi{v}}$ is the set of all edges of $P_\mathbi{v}$.
These balance equations around each vertex complete the set
of equations which define the DDFV method, that is
\eqref{bf}-\eqref{ddfv-eqgrad}-\eqref{ddfv-flux1}-\eqref{ddfv-flux2}-\eqref{ddfv-bf2}.
Note that the flux conservativity across primal edges $\sigma$ and dual edges $\tau$
is naturally satisfied by \eqref{ddfv-flux1} and \eqref{ddfv-flux2}.
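As a minimal illustration (2D, with illustrative names: \texttt{vp} stands for $\mathbi{v}'$ and \texttt{n\_v\_tau} for $\mathbf{n}_{\mathbi{v},\tau}$), the diamond gradient and both fluxes can be computed as follows; solving the $2\times 2$ system directly is equivalent to using the explicit formula \eqref{ddfv-eqgrad}.
\begin{verbatim}
import numpy as np

def ddfv_diamond_fluxes(xK, xL, v, vp, uK, uL, uv, uvp,
                        LamD, n_K_sigma, n_v_tau, len_sigma, len_tau):
    # grad_D is defined by  grad.(xK - xL) = uK - uL  and  grad.(v - v') = uv - uv'.
    A = np.array([xK - xL, v - vp])
    b = np.array([uK - uL, uv - uvp])
    grad_D = np.linalg.solve(A, b)
    F_K_sigma = -len_sigma * (LamD @ grad_D) @ n_K_sigma   # flux through sigma
    F_v_tau   = -len_tau   * (LamD @ grad_D) @ n_v_tau     # flux through tau
    return grad_D, F_K_sigma, F_v_tau
\end{verbatim}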
\begin{remark}
Dirichlet or Neumann boundary conditions are handled seamlessly. The diamond
around a boundary edge $\sigma\in\mathcal E_K\cap\mathcal E_{\rm ext}$
is only made of the triangle ${\rm co}(\sigma\cup\{\mathbi{x}_K\})$, and the gradient
on $D$ is constructed by replacing $\mathbi{x}_L$ with a point $\mathbi{x}_\sigma\in\sigma$
(which is also used to define the dual cell around $\mathbi{v}$) and $u_L$
with some unknown $u_\sigma$.
Dirichlet boundary conditions then fix $(u_\sigma)_{\sigma\in\mathcal E_{\rm ext}}$ and
$(u_\mathbi{v})_{\mathbi{v}\in \mathcal V\cap\partial\Omega}$ using the values of $\bu_b$, and
\eqref{ddfv-bf2} is not written for boundary vertices\cite{AND07,DOM05}.
Neumann boundary conditions simply impose the value of $F_{K,\sigma}$,
and \eqref{ddfv-bf2} is written for all vertices\cite{DOM05}.
\end{remark}
The preceding construction is valid if all dual cells $(P_\mathbi{v})_{\mathbi{v}\in\mathcal V}$
have disjoint interiors and, therefore, form a partition of $\Omega$. It may happen for
peculiar meshes that the preceding construction of $P_\mathbi{v}$ leads to overlapping
dual cells. In this case, the scheme must be modified and a possible choice\cite{HER03} is
to take for $P_\mathbi{v}$ the interaction region around $\mathbi{v}$ from the MPFA
methods (see Fig. \ref{fig:mpfa}).
If $\Lambda$ is discontinuous across $\sigma$, the usage of
its mean value on $D$ in \eqref{ddfv-flux1} and \eqref{ddfv-flux2} may lead to a loss of accuracy.
In this case, and still assuming that $\Lambda$ is
constant on each (primal) cell $K\in\mathcal M$, the DDFV scheme can
be modified\cite{HER03} by introducing an unknown $u_\sigma$
at the point $\{\mathbi{x}_\sigma\}=\sigma\cap (\mathbi{x}_K\mathbi{x}_L)$ (or $\overline{\mathbi{x}}s$ if $D$ is not convex
and $P_\mathbi{v}$ is the same interaction region as in MPFA methods),
using it to compute constant gradients in each half-diamond
$D\cap K$ and $D\cap L$ and then eliminating it thanks to the flux conservativity \eqref{cf}
through primal edges. Since there is no jump of $\Lambda$
through $\tau=[\mathbi{x}_K,\mathbi{x}_L]$, the conservativity through this
dual edge is ensured as the sub-fluxes through $[\mathbi{x}_K,\mathbi{x}_\sigma]$ and $[\mathbi{x}_\sigma,\mathbi{x}_L]$
use the same values of $\Lambda$ on each side of $\tau$ (respectively $\Lambda_K$
and $\Lambda_L$) and the same gradient on each half diamond.
If $\Lambda$ is also discontinuous across dual edges (which is not standard
in reservoir engineering), a further
modification of the DDFV method has been proposed in Ref. \refcite{BOY08-II}.
This ``m-DDFV'' method uses local gradients which are
constant in quarters of diamonds. Four new
unknowns need to be introduced in each diamond, and are then eliminated by imposing (as in MPFA
methods) flux conservativity equations through the diamond diagonals.
Although this presentation of DDFV methods clearly shows that they are
based on FV principle (flux conservativity and balance), it does not
explain the name ``Discrete Duality Finite Volume''. DDFV methods
can be re-cast using discrete gradient and divergence operators, in such
a way that the Green-Stokes duality formula holds at the discrete
level\cite{DOM05,AND07,BOY08-II}. The gradient operator, already
defined, takes cell and vertex values (assumed to represent
piecewise constant functions in primal and dual cells) and constructs a piecewise
constant gradient on the diamonds. The divergence operator
takes a piecewise constant vector field $(\xi_D)_D$ on diamonds and defines
its divergence as piecewise constant functions on primal and dual cells
by writing the flux balances \eqref{bf} and \eqref{ddfv-bf2}
with $F_{K,\sigma}=|\sigma|\xi_D\cdot\mathbf{n}_{K,\sigma}$ and $F_{\mathbi{v},\tau}=|\tau|\xi_D
\cdot\mathbf{n}_{\mathbi{v},\tau}$. Under this form, DDFV methods are based on similar
principles as MFD methods, which aim at satisfying the discrete
Green-Stokes formula \eqref{consmfd}. They are
different methods
but DDFV methods can be re-cast in a framework similar to MFD methods\cite{COU10}.
Generalisation of DDFV methods to 3D is
based on similar ideas as in the 2D case, but requires quite heavy notations to be
properly defined. Two essentially different 3D generalisations exist:
methods using Cell and Vertex unknowns (hence dubbed CeVe-DDFV)
and methods relying on Cell, Vertex, Faces and Edges unknowns
(called CeVeFE-DDFV).
Refs. \refcite{HER09,COU09,AND10} design CeVe-DDFV methods by
reconstructing a piecewise constant gradient from
its projection on $\vect{\mathbi{x}_K\mathbi{x}_L}$ computed using $u_K$ and $u_L$,
and its projection on the plane generated by $\sigma$
computed using the values on the vertices of $\sigma$.
Linearly exact formulas can be found for this projected gradient\cite{AND12}
but the discrete Poincar\'e inequality (crucial to Step \textsf{(C1)}
in Sec. \ref{sec:analysis}) only seems provable
when all faces $\sigma$ are triangles(\footnote{Or on cartesian grids\cite{AND13}.})
and the CeVe-DDFV method is therefore not coercive on generic meshes.
Refs. \refcite{COU11,COU11-II} propose a CeVeFE-DDFV method
with a local gradient computed from its projection on $\vect{\mathbi{x}_K\mathbi{x}_L}$ and
$\vect{\mathbi{v}\mathbi{v}'}$ (as in 2D) and on a third face-edge direction.
A third mesh is built around each face and edge centres to
obtain additional balance equations for the new face and edge unknowns.
This CeVeFE-DDFV method is coercive on any mesh, but at the cost
of additional unknowns with respect to the CeVe-DDFV method.
\subsection{Coercivity and convergence of DDFV methods}
Because DDFV methods are based on discrete gradient and divergence
operators which reproduce, as MFD methods, the Green-Stokes formula,
discrete $H^1_0$ estimates can be obtained by mimicking the continuous
integration by parts \eqref{ipp}, provided that the discrete
Poincar\'e inequality holds. This is the case in 2D,
for the CeVeFE-DDFV 3D method or for the CeVe-DDFV 3D method on
meshes with triangular faces. In these cases,
DDFV methods are coercive and, being linearly
exact, they enjoy the corresponding stability and convergence properties.
The technique outlined in Sec. \ref{sec:analysis} has been applied\cite{AND07}
to prove the convergence, without additional
regularity assumption on the data or the solution, of
the 2D DDFV method using the mean values $\Lambda_D$
as in \eqref{ddfv-flux1}-\eqref{ddfv-flux2} (Ref. \refcite{AND07}
provides in fact a convergence analysis for a non-linear equation,
which contains \eqref{base} as a particular case). An
$\mathcal O(h_{\mathcal M})$ error estimates for $u$ and the discrete gradient are
also established if $\Lambda$ is Lipschitz-continuous and
${\overline{u}}\in H^2$ (this estimate was known\cite{DOM05} for $\Lambda={\rm Id}$).
Concerning the m-DDFV method\cite{HER03,BOY08-II},
an $\mathcal O(h_{\mathcal M})$ error estimate for ${\overline{u}}$ and its
gradient has been proved in Ref. \refcite{BOY08-II} (also for a non-linear
version of \eqref{base}), provided that ${\overline{u}}$ is $H^2$ on each half- or quarter-diamond.
This regularity assumption does not seem always satisfied
(in particular if $\Omegamega$ or some cells around discontinuities
of $\Lambda$ are not convex), but the path described in
Sec. \ref{sec:analysis} could also be applied to the m-DDFV method.
An $\mathcal O(h_{\mathcal M})$ error estimate on ${\overline{u}}$ has
been obtained in Ref. \refcite{COU11} for the 3D CeVeFE-DDFV method,
under the assumptions that $\Lambda$ is Lipschitz-continuous and
that ${\overline{u}}\in H^2(\Omegamega)$. We can however notice
that this CeVeFE-DDFV method (as well as the 2D DDFV scheme)
is a Gradient Scheme\cite{EYM12} and, therefore, that
its convergence without regularity assumptions, for \eqref{base} as well as
non-linear and non-local equations,
follows from the general convergence analysis of Gradient Schemes\cite{EYM12,DRO12}.
Like HMM methods, DDFV methods have been adapted to more complex models
than \eqref{base}: non-linear elliptic equations\cite{AND07,BOY08-II,COU11},
stationary and transient convection-diffusion equations\cite{COU10,HER12},
the cardiac bidomain model\cite{AND11}, div-curl problems\cite{DEL07,HERM08},
degenerate hyperbolic-parabolic problems\cite{AND10} (with assumptions on the mesh,
see Sec. \ref{sec:ddfv-monotone}), the linear Stokes equations with
varying viscosity\cite{KRE11,KRE12}, semiconductor models\cite{CHA09}
and the Peaceman model\cite{CHA13}.
The convergence analysis of DDFV methods is carried out (sometimes under regularity
assumptions) for all these models except the last two.
Analysis tools for the 3D CeVe-DDFV method are presented in
Refs. \refcite{AND13,AND12} and used to study its convergence
for transient non-linear equations or systems.
\subsection{Maximum principle for DDFV methods}\label{sec:ddfv-monotone}
On meshes satisfying the orthogonality conditions \eqref{cond-orth} or
\eqref{cond-orth2}, DDFV methods for \eqref{base} are identical to
two TPFA schemes\cite{DOM05} (one on each primal and dual mesh),
and are therefore monotone. This monotony
under orthogonality conditions on the mesh is used in
Ref. \refcite{AND10} to study DDFV discretisations of degenerate hyperbolic-parabolic
equations, and in particular to establish discrete entropy inequalities
on approximate solutions. Study of the
monotony of DDFV methods on generic meshes however remains to be done.
\subsection{To summarise: DDFV methods}
As with HMM methods, the main strength of DDFV methods is their unconditional
coercivity (with some caveats for 3D methods, see above),
which ensures their robustness and allows one to adapt them and
analyse their convergence for a number of models. Another very practical
property for the analysis of DDFV methods is their
discrete duality property (existence of discrete gradient and divergence operators satisfying
Stokes' formula), which is also shared by HMM methods.
An advantage of DDFV methods over HMM methods is perhaps their
more local computation of the fluxes
($F_{K,\sigma}$ is expressed in terms of unknowns localised around the
edge $\sigma$, whereas in HMM methods this flux requires all
edge unknowns around $K$).
A relative weakness of DDFV methods is their intricacy in 3D.
The heavy and numerous notations required for the definitions of 3D
DDFV methods probably make them difficult for non-specialists to adopt
and complicate their analysis. In particular, establishing the discrete
duality formula is far from obvious. Once past these complicated
notations, however, the implementation of 3D DDFV methods is not particularly difficult.
The lack of monotony studies for DDFV methods is also
a gap in the literature, which would probably need to be filled
to get a better understanding on the possible applicability of these methods
to multi-phase flow models.
And so our quest for an unconditionally coercive and monotone FV
method on any mesh continues...
\section{Monotone and Minimum-Maximum preserving (MMP) methods}\label{sec:LMP}
Previously cited results\cite{NOR05,NOR07,KEI09,KER81} show that
no \emph{linear} 9-point scheme on quadrangular meshes, exact for linear functions (i.e.
of formal order 2), can be monotone on any distorted mesh or for any
diffusion tensor. Some constraints must be relaxed...
One choice is to allow for larger stencils (see Ref. \refcite{LEP09-II} for a Finite Difference scheme).
For Finite Volume methods, the most common choice appears to be a relaxation
of the \emph{linearity} of the scheme and the construction of \emph{non-linear}
``monotone'' approximations of the linear equation \eqref{base}.
The obvious trade-off is that computing the solution to the scheme
will be more complex, requiring Picard or Newton iterations,
which may create computational issues (such as the choice of stopping
criteria). Also, the monotony, conservativity and/or consistency may
only be achieved for the genuine solution to the non-linear scheme,
not at each iteration of these algorithms\cite{LEP09}.
Contrary to MPFA, HMM or DDFV methods, schemes presenting
discrete minimum-maximum principles do not form a well defined family of methods but
are rather schemes constructed using similar ideas and trying to achieve
the discrete minimum principle \eqref{disc-minpple} or
the discrete minimum-maximum principle \eqref{disc-minmaxpple}.
As we are considering non-linear schemes, these two principles
are not equivalent and we should make sure that we clearly distinguish
between them. Schemes satisfying \eqref{disc-minpple} will be called \emph{monotone},
as a commonly used but somewhat misguided extension of the vocabulary
used for linear schemes(\footnote{Indeed, ``monotone'' non-linear methods
do not necessarily preserve the ordering of boundary conditions, or of initial conditions for time-dependent problems.
They merely provide solutions which remain non-negative when the
boundary/initial conditions are non-negative.}), whereas schemes which
satisfy \eqref{disc-minmaxpple} will be called \emph{minimum-maximum
preserving} (MMP) schemes.
A widespread idea to obtain a monotone or MMP scheme
is to compute two \emph{linear} fluxes $F^1_{K,\sigma}$
and $F^2_{K,\sigma}$ for each interior edge and to define $F_{K,\sigma}$
as a convex combination of
$F^1_{K,\sigma}$ and $F^2_{K,\sigma}$ with coefficients depending upon the unknown $u$:
\begin{equation}\label{combconv}
\begin{array}{l}
\displaystyle F_{K,\sigma}=\mu^1_{K,\sigma}(u)F^1_{K,\sigma} + \mu^2_{K,\sigma}(u)F^2_{K,\sigma}\\[0.5em]
\displaystyle \mbox{with
$\mu^1_{K,\sigma}(u)\ge 0$, $\mu^2_{K,\sigma}(u)\ge 0$ and $\mu^1_{K,\sigma}(u)+
\mu^2_{K,\sigma}(u)=1$}.
\end{array}
\end{equation}
The methods we consider here are cell-centred, but
the definition of $F^1_{K,\sigma}$ and $F^2_{K,\sigma}$ may require
to introduce additional unknowns (e.g. vertex, edge or other unknowns).
These unknowns are then eliminated, classically by expressing them as convex combinations
of cell unknowns.
The coefficients $\mu^1_{K,\sigma}(u)$ and $\mu^2_{K,\sigma}(u)$ are chosen to
eliminate the ``bad'' parts of $F^1_{K,\sigma}$ and $F^2_{K,\sigma}$,
responsible for the possible loss of monotonicity.
\subsection{Non-linear ``2pt-fluxes'': monotone schemes}\label{sec:nl2pt}
The TPFA scheme is monotone thanks to its ``2pt-flux'' structure. This suggests
trying to build monotone methods on generic meshes by computing
$F_{K,\sigma}$ with a ``2pt'' formula, involving apparently only $u_K$ and $u_L$
but with coefficients depending on all
cell unknowns and boundary values $U=((u_M)_{M\in\mathcal M},(u_\sigma)_{\sigma\in\mathcal E_{\rm ext}})$
(same notation as in Sec. \ref{sec:monotony}). Indeed, assume that $F_{K,\sigma}$
is written
\begin{equation}\label{nonlin-2pt}
F_{K,\sigma}=\alpha_{K,L}(U) u_K - \beta_{K,L}(U) u_L
\quad\mbox{with $\alpha_{K,L}(U)> 0$ and $\beta_{K,L}(U)> 0$}
\end{equation}
(where $L$ is the cell on the other side of $\sigma\in\mathcal E_K\cap
\mathcal E_{\rm int}$ and $L=\sigma$ whenever $\sigma\in\mathcal E_K\cap \mathcal E_{\rm ext}$).
Then the conservativity relation \eqref{cf} imposes,
assuming that it must be satisfied for any value of $U$,
\begin{equation}\label{nonlin-cf}
\alpha_{K,L}(U)=\beta_{L,K}(U)\mbox{ for any neighbour cells $K$ and $L$}.
\end{equation}
The scheme \eqref{bf} can then be recast as
\begin{equation}\label{nl-recast}
A(U)(u_K)_{K\in\mathcal M}=(B_K(U))_{K\in\mathcal M},
\end{equation}
where $B_K(U)=\int_K f(x){\rm d} x + \sum_{\sigma\in\mathcal E_{\rm ext}\cap
\mathcal E_K}\beta_{K,\sigma}(U)u_\sigma$
and the matrix $A(U)$ has
(i) diagonal coefficients $A_{K,K}(U)=\sum_{M}\alpha_{K,M}(U)>0$
(the sum being on all $M$ neighbour cells or edges of $K$),
(ii) extra-diagonal coefficients $A_{K,L}(U)=-\beta_{K,L}(U)<0$ if
$K$, $L$ are neighbour cells, $A_{K,L}(U)=0$ otherwise,
and (iii) is diagonally dominant by column (strictly for columns
$L$ such that $\mathcal E_{\rm ext}\cap\mathcal E_L\not=\emptyset$)
thanks to \eqref{nonlin-cf}. The graph of $A(U)$ is also connected and
(cf. Sec. \ref{sec:monotony}) $A(U)^{-1}$ therefore
has non-negative coefficients, which means that
the scheme \eqref{nl-recast} satisfies \eqref{disc-minpple}.
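As a purely illustrative sketch (not taken from the cited references), the assembly of the recast system \eqref{nl-recast} from the coefficients of \eqref{nonlin-2pt} can be written in a few lines of Python, the functions \texttt{alpha} and \texttt{beta} being assumed to be provided by the particular scheme, and the cell/boundary-edge indexing being purely notional:
\begin{verbatim}
import numpy as np

def assemble(n_cells, neighbours, bnd_edges, alpha, beta, f, u_bnd, U):
    # Builds the matrix A(U) and right-hand side (B_K(U)) of the recast
    # non-linear scheme A(U) u = B(U); alpha(K, Z, U) and beta(K, Z, U)
    # return the positive coefficients of the "2pt" flux through the
    # edge between cell K and the cell or boundary edge Z.
    A = np.zeros((n_cells, n_cells))
    B = np.array(f, dtype=float)          # f[K] = integral of f over cell K
    for K in range(n_cells):
        for L in neighbours[K]:           # interior edges
            A[K, K] += alpha(K, L, U)     # diagonal contribution, > 0
            A[K, L] -= beta(K, L, U)      # extra-diagonal contribution, < 0
        for s in bnd_edges[K]:            # boundary edges
            A[K, K] += alpha(K, s, U)
            B[K] += beta(K, s, U) * u_bnd[s]
    return A, B
\end{verbatim}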
\subsubsection{Triangular meshes}
A first idea\cite{LEP05} to achieve \eqref{nonlin-2pt} via \eqref{combconv}
on 2D triangular meshes is to compute, for each interior edge $\sigma$
and each $i=1,2$, a constant gradient $\nabla_i u$ on
the triangle $T_i=\mathbi{v}_i \mathbi{x}_K\mathbi{x}_L$ (see notations in Fig. \ref{fig:lmp1})
by using the unknown values $(u_{\mathbi{v}_i},u_K,u_L)$ at the vertices of this triangle.
These gradients are given by \eqref{eq-grad}
with $\mathbi{x}_K$ replaced by $\mathbi{v}_i$ and $\overline{\mathbi{x}}_\sigma,\overline{\mathbi{x}}_{\sigma'}$ replaced
by $\mathbi{x}_K,\mathbi{x}_L$ and, assuming $\Lambda={\rm Id}$,
the linear conservative fluxes $F^i_{K,\sigma}$ ($i=1,2$) are then\cite{LEP05,LIP07}
\begin{equation}\label{choixFi}
F^i_{K,\sigma}:=-|\sigma| \nabla_i u\cdot\mathbf{n}_{K,\sigma}=
\frac{|\sigma|}{2|T_i|}\left(u_K \nu_i^L + u_L\nu_i^K - u_{\mathbi{v}_i}(\nu_i^K +\nu_i^L)\right)\cdot
\mathbf{n}_{K,\sigma}
\end{equation}
where $|T_i|$ is the area of triangle $T_i$.
\begin{figure}[h!]
\begin{center}
\input{fig-lmp1.pdf_t}
\caption{\label{fig:lmp1}Construction of a monotone scheme on triangular
meshes. The vectors $\nu_i^{K/L}$ have the length of the
segment to which they are orthogonal.}
\end{center}
\end{figure}
The convex combination \eqref{combconv} is then designed to
eliminate, in $F_{K,\sigma}$,
the term
\[
-\frac{|\sigma|}{2}\left(
\frac{\mu^1_\sigma(u)(\nu_1^K+\nu_1^L)\cdot\mathbf{n}_{K,\sigma}}{|T_1|}u_{\mathbi{v}_1}+
\frac{\mu^2_\sigma(u)(\nu_2^K+\nu_2^L)\cdot\mathbf{n}_{K,\sigma}}{|T_2|}u_{\mathbi{v}_2}\right)
\]
involving $u_{\mathbi{v}_1},u_{\mathbi{v}_2}$ and which prevents this flux from having
the ``2-pt structure'' \eqref{nonlin-2pt}.
As $\nu_1^K+\nu_1^L+\nu_2^K+\nu_2^L=0$, valid choices of the coefficients
are
\begin{equation}\label{choixmu}
\mu^1_\sigma(u)=\frac{u_{\mathbi{v}_2}/|T_2|}{u_{\mathbi{v}_1}/|T_1|+u_{\mathbi{v}_2}/|T_2|}
\mbox{ and }
\mu^2_\sigma(u)=\frac{u_{\mathbi{v}_1}/|T_1|}{u_{\mathbi{v}_1}/|T_1|+u_{\mathbi{v}_2}/|T_2|},
\end{equation}
provided that $u_{\mathbi{v}_1}$ and $u_{\mathbi{v}_2}$ are
both non-negative and not simultaneously equal to $0$ (in this
last case, we can still take $\mu^1_\sigma=\mu^2_\sigma=\frac{1}{2}$).
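As an illustration only, and assuming the vertex values $u_{\mathbi{v}_i}$ and the triangle areas $|T_i|$ are available, the choice \eqref{choixmu} and its degenerate case translate directly into the following Python sketch:
\begin{verbatim}
def mu_coefficients(u_v1, u_v2, T1_area, T2_area):
    # Coefficients of the convex combination eliminating the vertex
    # contributions, for non-negative vertex values u_v1, u_v2;
    # if both vanish, the choice mu1 = mu2 = 1/2 is used.
    d = u_v1 / T1_area + u_v2 / T2_area
    if d == 0.0:
        return 0.5, 0.5
    return (u_v2 / T2_area) / d, (u_v1 / T1_area) / d
\end{verbatim}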
Computing these vertex values by convex combinations of the
cell unknowns ensures that they are non-negative whenever
all cell unknowns are non-negative. Two combinations are
suggested in Ref. \refcite{LIP07}, but neither of them takes
into account the possible non-smoothness of ${\overline{u}}$
around discontinuities of $\Lambda$ and the resulting schemes therefore
suffer from a loss of consistency around these discontinuities
(see Remark \ref{rem:convcomb}).
With the choices \eqref{choixFi}-\eqref{choixmu}, it can be
proved that, \emph{provided that $(\mathbi{x}_K)_{K\in\mathcal M}$
are at the intersections of the bisectors of the triangles $K\in\mathcal M$}
(this is where the restriction on the
mesh, i.e. that it is made of triangles, comes into play),
$F_{K,\sigma}$ given by \eqref{combconv} indeed has the ``2pt structure'' \eqref{nonlin-2pt}
with positive coefficients.
\begin{remark}
This construction of fluxes only makes sense if all $u_K$
are non-negative, and the scheme's matrix
$A(U)$ in \eqref{nl-recast} is therefore well defined only for non-negative
cell unknowns. This is not a practical issue as the non-linear
system \eqref{nl-recast} is often solved by iterating an
algorithm of the form $A(U^n)(u^{n+1}_K)_{K\in\mathcal M}=
(B_K(U^n))_{K\in\mathcal M}$ with all components of $B_K(U^n)$
non-negative if all components of $U^n$ are non-negative. By the properties of $A(U)$,
all $u^n_K$ found in these iterations are non-negative and
$A(U^n)$ is therefore well defined.
\end{remark}
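This iterative procedure can be sketched as follows (an illustrative Python fragment, not taken from the cited references; \texttt{assemble} is assumed to return the pair $(A(U),(B_K(U))_K)$ for a given vector of unknowns, and the stopping criterion is only an example):
\begin{verbatim}
import numpy as np

def picard(assemble, U0, tol=1e-8, maxit=100):
    # Fixed-point (Picard) iterations for the non-linear scheme:
    # solve A(U^n) u^{n+1} = B(U^n) until the iterates stagnate.
    U = np.array(U0, dtype=float)      # non-negative initial guess
    for _ in range(maxit):
        A, B = assemble(U)             # matrix and right-hand side at U^n
        U_new = np.linalg.solve(A, B)  # non-negative whenever B >= 0,
                                       # by the properties of A(U)
        if np.linalg.norm(U_new - U, ord=np.inf) <= tol:
            return U_new
        U = U_new
    return U
\end{verbatim}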
The modification of this method\cite{LIP07} for heterogeneous anisotropic tensors $\Lambda$
consists in taking $\mathbi{x}_K$ at the intersection of
the bisectors for the $\Lambda_K$-metric of triangle $K$
and in introducing an additional unknown $u_\sigma$ at the
edge midpoint $\overline{\mathbi{x}}_\sigma$. Four fluxes $F^{i,M}_{K,\sigma}$ are then computed using
gradients in the triangles $\mathbi{v}_i \mathbi{x}_M \overline{\mathbi{x}}_\sigma$
($i=1,2$, $M=K,L$) and the flux continuities $F^{i,K}_{K,\sigma}=
F^{i,L}_{K,\sigma}$ are written to eliminate the unknown
$u_\sigma$ and to obtain two fluxes $F^i_{K,\sigma}$,
which are then used in \eqref{combconv}. New coefficients $\mu^i_\sigma(u)$
are found which eliminate the $u_{\mathbi{v}_i}$ terms
and, thanks to the initial choice of $\mathbi{x}_K$,
$F_{K,\sigma}$ has the structure \eqref{nonlin-2pt}.
This method has been extended to 3D tetrahedral meshes in Ref. \refcite{KAP07}
(using convex combinations of three linear fluxes instead of two)
and to general 2D polygonal meshes in Ref. \refcite{LIP07}, albeit in this last case
at the expense of a loss of consistency of the method, especially for
strongly anisotropic tensors.
These non-linear 2pt-flux methods are not coercive in general
and no convergence proof is provided in the literature. However,
numerical tests with smooth data show a generic order of convergence of 2 for the
solution and 1 for its gradient. Some numerical simulations\cite{LIP07}
also confirm that the solution does not satisfy the full discrete
minimum-maximum principle \eqref{disc-minmaxpple} in general:
the approximate solution for $f=0$ may present values beyond
the maximum of the boundary values, and even internal oscillations.
\subsubsection{Polygonal meshes}\label{sec:2ptpoly}
Ref. \refcite{YUA08} presents the construction of consistent 2pt-fluxes \eqref{nonlin-2pt}
on polygonal meshes using similar ideas to Refs. \refcite{LEP05,LIP07}.
The starting point is, for $\sigma\in\mathcal E_K$, to select two vertices $\mathbi{v}_1,\mathbi{v}_2$ of $K$ such that
$\Lambda_K\mathbf{n}_{K,\sigma}$ is in the positive cone generated by $\vect{\mathbi{x}_K\mathbi{v}_1}$
and $\vect{\mathbi{x}_K\mathbi{v}_2}$ (cf Fig. \ref{fig:lmp2}).
\begin{figure}[h!]
\begin{center}
\input{fig-lmp2.pdf_t}
\caption{\label{fig:lmp2}Construction of a monotone scheme on polygonal
meshes.}
\end{center}
\end{figure}
The flux through $\sigma$ outside $K$ can then be approximated by
a positive combination of $-\nabla{\overline{u}}\cdot \vect{\mathbi{x}_K \mathbi{v}_i} \approx {\rm dist}(\mathbi{x}_K,\mathbi{v}_i)
(u_K-u_{\mathbi{v}_i})$ ($i=1,2$)
and this gives a first numerical flux
$F^1_{K,\sigma}=a_{K,\sigma}(u_K-u_{\mathbi{v}_1})+b_{K,\sigma}(u_K-u_{\mathbi{v}_2})$,
with non-negative coefficients $a_{K,\sigma}$ and $b_{K,\sigma}$. The same
construction from cell $L$ gives a numerical flux outside $K$ (i.e. inside
$L$)
$F^2_{K,\sigma}=-a_{L,\sigma}(u_L-u_{\mathbi{v}_3})-b_{L,\sigma}(u_L-u_{\mathbi{v}_4})$
with $\mathbi{v}_3,\mathbi{v}_4$ vertices of $L$ and $a_{L,\sigma},b_{L,\sigma}$ non-negative.
The total flux is then obtained as in Refs. \refcite{LEP05,LIP07} by
a convex combination \eqref{combconv} designed to eliminate the coefficients
of $u_{\mathbi{v}_i}$ and to provide the conservativity of the global flux:
\begin{eqnarray*}
\mu^1_{K,\sigma}(u)&=&\frac{a_{L,\sigma}u_{\mathbi{v}_3}+b_{L,\sigma}u_{\mathbi{v}_4}}
{a_{K,\sigma}u_{\mathbi{v}_1}+b_{K,\sigma}u_{\mathbi{v}_2}+a_{L,\sigma}u_{\mathbi{v}_3}+b_{L,\sigma}u_{\mathbi{v}_4}}
\,,\\
\mu^2_{K,\sigma}(u)&=&
\frac{a_{K,\sigma}u_{\mathbi{v}_1}+b_{K,\sigma}u_{\mathbi{v}_2}}
{a_{K,\sigma}u_{\mathbi{v}_1}+b_{K,\sigma}u_{\mathbi{v}_2}+a_{L,\sigma}u_{\mathbi{v}_3}+b_{L,\sigma}u_{\mathbi{v}_4}}.
\end{eqnarray*}
The resulting flux \eqref{combconv} is well defined provided that all $u_{\mathbi{v}_i}$ are
non-negative (if they are all equal to $0$, we take $\mu^i_{K,\sigma}(u)=1/2$) and
has the ``2pt-structure'' \eqref{nonlin-2pt}.
The vertex values $u_{\mathbi{v}_i}$ are computed using convex combinations of
cell values as in Ref. \refcite{LIP07} or, in case of discontinuity of
$\Lambda$, by writing down the flux conservativity and the continuity of tangential
gradients at the vertices. This last method however sometimes fails to provide
non-negative vertex values $u_{\mathbi{v}_i}$ from non-negative cell values, in which
case a simple convex combination must be used.
As for the methods constructed in Refs. \refcite{LEP05,LIP07}, no proof of
convergence is provided in Ref. \refcite{YUA08} but numerical experiments
show convergence, with rates 2 for ${\overline{u}}$ and 1 for the fluxes
for smooth data. However, for strongly anisotropic $\Lambda$, the rate of
convergence for ${\overline{u}}$ seems reduced, at least at available mesh sizes.
This method has been applied to advection-diffusion equations\cite{WAN12} (for
a constant $\Lambda$), using the same kind of discretisation of the advection
term as in Ref. \refcite{LIP10}, i.e. a higher order method with slope limiters.
A variant can be constructed\cite{SHE12} using edge unknowns $u_\sigma$ (instead
of vertices unknowns) and eliminating them as in the MPFA O-method.
This process may however produce negative $u_\sigma$'s from non-negative $u_K$'s
and, when this happens, $u_\sigma$ must be computed using a simple
convex combination of $u_K$'s.
Although the number of iterations required to compute the solution
is reduced in Ref. \refcite{SHE12} with respect to Ref. \refcite{YUA08},
it seems much higher than for the methods in Refs. \refcite{DRO11,LEP09}
(see Sec. \ref{sec:minmax}), for which the number of
iterations appears to remain bounded independently
of the mesh size.
The ideas of Ref. \refcite{YUA08} have also been used in Refs. \refcite{LIP09-II,LIP10},
but by expressing $\Lambda_K\mathbf{n}_{K,\sigma}$ as a positive combination of
$\vect{\mathbi{x}_K \mathbi{x}_{L_i}}$, for some cell or edges $L_1,L_2$, instead
of $\vect{\mathbi{x}_K \mathbi{v}_i}$ for some vertices $\mathbi{v}_1,\mathbi{v}_2$. This choice
does not require interpolating new vertex or edge unknowns,
which is an advantage since such interpolations may lead to inaccuracies if not well chosen\cite{LIP07}.
However, when $\Lambda$ is discontinuous
across an edge, the cell centres on each side must be moved according to the heterogeneity
of $\Lambda$ (in such a way that \eqref{cond-orth2} holds for this edge).
As a consequence, the method is applicable only if each cell has
at most one edge across which $\Lambda$ is discontinuous, which
restricts the number and positions of diffusion jumps.
This method has been extended to general 3D polyhedral meshes in
Refs. \refcite{DAN09,NIK10}.
\subsection{Non-linear multi-point fluxes: MMP schemes}
\label{sec:minmax}
As mentioned above, methods based on the form \eqref{nonlin-2pt} are monotone
but do not satisfy the discrete minimum-maximum principle, mostly because
they do not ensure that $\sum_L \alpha_{K,L}(U)\ge \sum_L \beta_{K,L}(U)$.
It is however possible to construct, on generic 3D meshes, non-linear
MMP schemes provided the fluxes are computed
using a multi-point formula. More precisely, if
\begin{equation}\label{nonlin-mpfluxes}
F_{K,\sigma}=\sum_{Z\in V(K)}\tau_{K,Z}(U)(u_K-u_Z)
\end{equation}
with $V(K)$ a set of cells or edges and $\tau_{K,Z}(U)\ge 0$ ($>0$ whenever
$Z$ is a cell or edge around $K$), then a straightforward
adaptation of the proof in Remark \ref{rem:monvf2} shows that
the resulting scheme satisfies the discrete minimum-maximum principle
\eqref{disc-minmaxpple} (this proof, as mentioned in Remark \ref{rem:monvf2},
demonstrates in fact that the scheme is non-oscillating).
The key element is that \eqref{nonlin-mpfluxes}
ensures that, whenever all cell values are equal, the fluxes
are equal to $0$ or have a sign opposite to the sign of $\bu_b$
(this is not certain with \eqref{nonlin-2pt}).
A first scheme in this direction is proposed in Ref. \refcite{BER05},
for isotropic diffusion and under restrictive assumptions on the mesh
(made of simplices), such that there exist cell points $(\mathbi{x}_K)_{K\in\mathcal M}$
satisfying the orthogonality condition \eqref{cond-orth2}.
For such equations and meshes, the TPFA method can be applied
but the interest of the method in Ref. \refcite{BER05} resides in the
fact that it produces order 2 approximations \emph{of the cell averages of
${\overline{u}}$} (the TPFA method would produce order 2 approximations of $({\overline{u}}(\mathbi{x}_K))_{K\in\mathcal M}$,
where $(\mathbi{x}_K)_{K\in\mathcal M}$ are not at cell barycentres).
Nonetheless, the particular convex combinations ideas of Ref. \refcite{BER05}
have been used to construct MMP schemes on triangular
meshes\cite{LEP08,LEP09}, construction then generalised to
generic 2D or 3D meshes in Ref. \refcite{DRO11}.
\begin{figure}[h!]
\begin{center}
\input{fig-lmp3.pdf_t}
\caption{\label{fig:lmp3}Construction of an MMP scheme.
$\mathbi{x}_{\sigma,1}$ is at the intersection of $\mathbi{x}_K+[0,\infty)\Lambda_K\mathbf{n}_{K,\sigma}$
and of the line/plane containing $\sigma$. $M_2$ is on the half-line
$\mathbi{x}_{\sigma,1}+[0,\infty)\Lambda_L\mathbf{n}_{K,\sigma}$.}
\end{center}
\end{figure}
With the notations in Fig. \ref{fig:lmp3}, the scheme in Ref. \refcite{DRO11}
starts from the two consistent fluxes outside $K$:
\[
\widetilde{F}^1_{K,\sigma}=|\Lambda_K\mathbf{n}_{K,\sigma}|\frac{u_K-u_{\sigma,1}}{{\rm d}(\mathbi{x}_K,\mathbi{x}_{\sigma,1})}\,,\quad \widetilde{F}^2_{K,\sigma}=|\Lambda_L\mathbf{n}_{K,\sigma}|\frac{u_{\sigma,1}-u_{M_2}}{{\rm d}(\mathbi{x}_{\sigma,1},M_2)}
\]
where $u_{M_2}$ and $u_{\sigma,1}$ are values at $M_2$ and $\mathbi{x}_{\sigma,1}$ respectively.
Writing the conservativity
of these fluxes allows us to eliminate $u_{\sigma,1}$ and to get a
linear conservative flux $F^1_{K,\sigma}=a^1_{K,\sigma}(u_K-u_{M_2})$ with $a^1_{K,\sigma}\ge 0$.
Expressing $u_{M_2}$ as a convex combination of cell unknowns, in such a way
that $u_L$ appears with a non-zero coefficient in this combination (this is
always possible), we then get
\begin{equation}\label{mmp1}
F^1_{K,\sigma}=\alpha^1_{K,\sigma}(u_K-u_L)+G^1_{K,\sigma}\mbox{ with }G^1_{K,\sigma}=
\sum_{M}\beta^1_{K,M}(u_K-u_M)
\end{equation}
with $\alpha^1_{K,\sigma}>0$ and $\beta^1_{K,M}\ge 0$.
The same construction from cell $L$ gives a flux outside $K$ (i.e. inside $L$)
\begin{equation}\label{mmp2}
F^2_{K,\sigma}=\alpha^2_{L,\sigma}(u_K-u_L)+G^2_{L,\sigma}\mbox{ with }G^2_{L,\sigma}=
\sum_{M}\beta^2_{L,M}(u_M-u_L).
\end{equation}
Following Ref. \refcite{BER05},
a convex combination \eqref{combconv} of these two fluxes is then chosen in order to
eliminate the ``bad'' terms with respect to \eqref{nonlin-mpfluxes}, i.e.
$G^2_{L,\sigma}$:
\begin{equation}\label{choixcombconv}
\mu^1_{K,\sigma}(u)=\frac{|G^2_{L,\sigma}|}{|G^1_{K,\sigma}|+|G^2_{L,\sigma}|}
\,,\quad \mu^2_{K,\sigma}(u)=\frac{|G^1_{K,\sigma}|}{|G^1_{K,\sigma}|+|G^2_{L,\sigma}|}
\end{equation}
(once again, these coefficients are chosen equal to $1/2$ if their denominator
vanishes). By studying separate cases depending on the sign of $G^1_{K,\sigma}G^2_{L,\sigma}$,
we can then see that
$F_{K,\sigma}$ defined by \eqref{combconv}, \eqref{mmp1},
\eqref{mmp2} and \eqref{choixcombconv} always satisfies \eqref{nonlin-mpfluxes},
whatever the values (positive or negative) of the cell unknowns.
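For illustration only, and taking equal coefficients $\alpha^1_{K,\sigma}=\alpha^2_{L,\sigma}=\alpha$ as in the remark below, the combination \eqref{combconv}--\eqref{choixcombconv} can be sketched as:
\begin{verbatim}
def mmp_flux(alpha, uK, uL, G1, G2):
    # Convex combination of the two linear fluxes F1 = alpha*(uK-uL) + G1
    # and F2 = alpha*(uK-uL) + G2, with weights proportional to |G2| and
    # |G1| respectively (both set to 1/2 if G1 = G2 = 0), so that the
    # remaining terms preserve the multi-point structure.
    d = abs(G1) + abs(G2)
    mu1, mu2 = (0.5, 0.5) if d == 0.0 else (abs(G2) / d, abs(G1) / d)
    return mu1 * (alpha * (uK - uL) + G1) + mu2 * (alpha * (uK - uL) + G2)
\end{verbatim}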
\begin{remark}
More freedom is possible on the decompositions in \eqref{mmp1} and \eqref{mmp2},
provided that the global non-linear flux $F_{K,\sigma}$ is
continuous with respect to $u$.
This is ensured\cite{DRO11} if we take $\alpha^1_{K,\sigma}=\alpha^2_{L,\sigma}$
(always possible, upon moving part of the term $u_K-u_L$
in \eqref{mmp1} and \eqref{mmp2} into $G^1_{K,\sigma}$ and $G^2_{L,\sigma}$).
\end{remark}
This method is not necessarily coercive. However, under some coercivity
assumptions (which seem satisfied in numerical tests), a rigorous proof of
convergence is given in Ref. \refcite{DRO11} without regularity assumptions
on the data, drawing on the fact that
the global flux is a convex combination of linear fluxes and adapting the
analysis technique developed in Ref. \refcite{AGE10-2}. This is,
to the best of our knowledge, the first proof of convergence of an MMP scheme.
Numerical results show a general order 2 convergence for $u$
and, of course, the absence of spurious oscillations in the solution.
\begin{remark}[Choice of convex combination for $u_{M_i}$]\label{rem:convcomb}
In case of jumps of $\Lambda$, numerical tests\cite{DRO11} show that if
$u_{M_i}$ is computed from cell unknowns on both sides of a discontinuity of $\Lambda$ then the order
of the scheme can be reduced (and the number of Picard iterations
to compute the approximate solution increases significantly).
In many applications, it is however always possible to choose $M_i$ such that
$u_{M_i}$ can be computed using cell unknowns all in a same zone of smoothness
of $\Lambda$.
\end{remark}
The ideas developed for ``2pt non-linear fluxes'' (see Section \ref{sec:nl2pt})
have also been combined with the convex combination
\eqref{choixcombconv} used in Refs. \refcite{BER05,DRO11,LEP05}
to produce minimum-maximum preserving schemes on 2D polygonal meshes.
In Ref. \refcite{SHE11}, the ideas of Ref. \refcite{YUA08} (replacing vertex unknowns by
edge unknowns) are used to build an MMP method, in which
edge unknowns are interpolated from cell unknowns
by writing a particular flux conservativity which takes into account
the possible jumps of $\Lambda$.
Under an assumption which slightly limits the mesh's skewness and the tensor's
anisotropy, Ref. \refcite{LIP12} draws on the core idea of Ref. \refcite{LIP09-II}
(expressing $\Lambda_K\mathbf{n}_{K,\sigma}$ as a positive combination of $\vect{\mathbi{x}_K\mathbi{x}_{L_i}}$ for some
cell or edges $L_i$) to produce an MMP scheme
on 2D polygonal meshes. Using cell unknowns rather than interpolating new vertex
or edge unknowns ensures that the stencil of the linear systems
solved at each Picard iteration is as small as the stencil of the TPFA
method (with the trade-off that the fluxes are only conservative
at the limit of these non-linear iterations).
Contrary to Ref. \refcite{LIP09-II}, the method in
Ref. \refcite{LIP12} also does not move cell centres on each side
of an edge across which $\Lambda$ is discontinuous,
but rather makes use of the harmonic interpolation introduced
in Ref. \refcite{AGE09} (see Remark \ref{rem:mix}) to compute the flux through
these edges. The usage of this harmonic interpolation however leads
to a reduced accuracy if the mesh or the tensor are too skewed.
\subsection{MMP schemes by non-linear corrections of linear schemes}
None of the monotone or MMP methods presented in the previous sections
is unconditionally coercive. It turns out that the most efficient
way to construct MMP \emph{and coercive} methods
is not to design a whole new method, but to take existing linear
coercive methods and to devise a non-linear modification of them,
which preserves their coercivity while adding the discrete minimum-maximum
principle.
Let us consider a cell-centred linear scheme \eqref{bf}-\eqref{cf} which is coercive
(it satisfies in particular \eqref{coer-bd}). Assume that, for this scheme,
\[
A_K(u):=\sum_{\sigma\in\mathcal E_K}F_{K,\sigma}=
\sum_{Z\in V(K)}a_{K,Z}(u_K-u_Z)
\]
for some possibly negative $a_{K,Z}$ and $V(K)$ a set of cells or boundary edges
such that, for two cells $(K,Z)$, $Z\in V(K)$ if and only if $K\in V(Z)$.
The scheme is thus written: for all $K\in\mathcal M$, $A_K(u)=\int_K f(x){\rm d} x$.
Then a coercive MMP scheme
can be obtained\cite{LEP10,CAN13} by writing $S_K(u)=\int_K f(x){\rm d} x$ for all $K\in\mathcal M$,
where
\[
\begin{array}{l}
\displaystyle S_K(u)=A_K(u)+\sum_{Z\in V(K)}\beta_{K,Z}(u)(u_K-u_Z)\\
\displaystyle \mbox{with }\quad
\beta_{K,Z}(u)\ge \frac{|A_K(u)|}{\sum_{Y\in V(K)}|u_K-u_Y|}
\end{array}
\]
(``$\ge$'' is replaced with ``$>$'' if $Z$ is a neighbouring cell or
edge of $K$, and if $\sum_{Y\in V(K)}|u_K-u_Y|=0$ then we only need $\beta_{K,Z}(u)\ge 0$;
this condition on $\beta_{K,Z}$ is only an example, see Ref. \refcite{CAN13}).
If $\beta_{K,Z}(u)=\beta_{Z,K}(u)$ for any cells $K,Z$,
then the modified scheme is indeed a FV method:
non-linear conservative fluxes $F_{K,\sigma}'(u)$ can be found such that
$S_K(u)=\sum_{\sigma\in\mathcal E_K}F_{K,\sigma}'(u)$.
It is obvious from the symmetry of $\beta_{K,Z}(u)$ that the corrected
scheme retains the coercivity property \eqref{coer-bd} of the original scheme.
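As a purely illustrative sketch of one admissible (non-symmetrised) choice of correction, the corrected residual can be computed as follows:
\begin{verbatim}
def corrected_residual(A_K, u, V_K, K):
    # Adds sum_Z beta_K(u)*(u[K]-u[Z]) to the linear residual A_K = A_K(u),
    # with beta_K(u) = |A_K| / sum_Y |u[K]-u[Y]| (0 if that sum vanishes).
    # A strict inequality for neighbours can be enforced by a factor > 1,
    # and symmetrisation beta_{K,Z} = beta_{Z,K} is needed to recover
    # conservative fluxes.
    s = sum(abs(u[K] - u[Z]) for Z in V_K)
    beta = 0.0 if s == 0.0 else abs(A_K) / s
    return A_K + sum(beta * (u[K] - u[Z]) for Z in V_K)
\end{verbatim}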
It can also be proved that, if the original scheme is consistent in the
sense of FV methods, then the modified scheme converges as the mesh size tends
to $0$, under assumptions on the approximate solutions which are not formally proved
but hold well in numerical tests.
These numerical tests show astonishing improvements
of the $L^2$ error when using the non-linear correction
(sometimes\cite{LEP13-p} by a factor 10,000 in case of an anisotropy ratio of $10^6$).
This correction however appears to degrade the order of convergence to
1 and is therefore outperformed by the original order 2 linear
scheme on very fine meshes (sometimes at a size which is nevertheless beyond
computational capacities). The reason for this reduction of
convergence rate is not well understood, but it is worth
mentioning that, even for linear FV schemes, the convergence order 2
on ${\overline{u}}$ is mostly only \emph{observed} in numerical tests and
not proved in general.
The consequence is that non-linear corrections should only
be applied for coarse meshes and strongly anisotropic diffusion tensors
for which the original scheme provides physically unacceptable solutions.
This correction technique has been adapted in Ref. \refcite{LEP12} to methods
involving cell and edge unknowns.
\section{Conclusion}\label{sec:concl}
We have presented a review of some recent FV methods for diffusion
equations, focusing on the capacity of the methods to
be applicable on generic meshes and to reproduce two important
properties of the continuous equation: coercivity, which
ensures the stability of the scheme and allows one to carry out
convergence proofs under realistic assumptions, and minimum and maximum principles,
which ensure physically acceptable solutions in case of strong anisotropy.
This review is of course partial and much more could be written
on FV methods for \eqref{base}, for example about the comparison
of their respective numerical behaviours -- see e.g. the two comprehensive benchmarks
of Refs. \refcite{EYM12-2,HER08}. Other methods or topics of interest regarding the discretisation
of \eqref{base} are worth mentioning:
\begin{itemize}
\item vertex-centred MPFA O-methods\cite{EDW02,EDW10,EDW11,PAL12},
\item Finite Volume Element methods\cite{CAI91,CAI91-2,EWI02},
based on Finite Element spaces with vertex unknowns
and flux balances on dual meshes around vertices,
\item studies of relationships between FV and Finite Element methods, or mixing
of ideas between different families of methods\cite{VOH06,VOH13,YOU04,WHE06},
\item Gradient Schemes\cite{DRO12,EYM12,EYM11-2,EYM13-2,EYM11},
a generic framework (including HMM methods and some MPFA and DDFV schemes, as well as non-FV
methods) for the convergence analysis of numerical methods on numerous models,
\item the recent review of Ref. \refcite{DIP13-2} on numerical methods in geosciences.
\end{itemize}
The overall conclusion of this review is that currently there is
no miraculous method which provides an excellent
solution in all circumstances. The various numerical
methods available for \eqref{base} should be considered as a kit of clever
techniques which can be adapted and re-used in particular situations.
The ideas behind the methods are as important as the methods
themselves.
Let us close this study with an open question. For the TPFA scheme,
the flux balance \eqref{bf} can be written
\begin{equation}\label{peculiar}
\sum_{L\in\mathcal M}\tau_{K,L}(u_K-u_L)=\int_K f(x){\rm d} x,
\end{equation}
with $\tau_{K,L}=\tau_{L,K}$ non-negative and such that the method
is coercive.
This structure allows one, by using non-linear functions of the solution
as test functions, to prove \emph{a priori} estimates and analyse the convergence
of the TPFA scheme for non-coercive convection-diffusion equations\cite{DRO02,DRO03-2,CHA11},
hyperbolic-parabolic equations\cite{AND10,EYM02},
equations with Radon measures\cite{GAL99,DRO03} (used to model
wells in reservoirs), or chemotaxis problems\cite{FIL06}.
To date, it is not known how to design a method that can be written
\eqref{peculiar} for any mesh and tensor (as separately noticed in Ref. \refcite{EYM13-3}),
or how to adapt the aforementioned \emph{a priori} estimate techniques
to schemes not having this structure...
\section*{Acknowledgment}
The author would like to thank the following colleagues, whose comments
helped improve this paper: D. Di Pietro, M.G. Edwards,
R. Eymard, T. Gallou\"et, F. Hermeline, K. Lipnikov, M. Shashkov, D. Svyatskiy
and Yu. Vassilevski.
Special thanks to B. Andreianov, R. Herbin, C. Le Potier and G. Manzini for their thorough reading
and feedback.
\end{document}
\begin{document}
\title{An extension to the Wiener space of the arbitrary functions principle}
\selectlanguage{francais}
\selectlanguage{english}
\begin{abstract}
The arbitrary functions principle says that the fractional part of $nX$ converges stably to an independent random variable uniformly distributed on the unit interval, as soon as the random variable $X$ possesses a density or a characteristic function vanishing at infinity. We prove a similar property for random variables defined on the Wiener space when the stochastic measure $dB_s$ is crumpled on itself.
{\it To cite this article: N. Bouleau, C. R.
Acad. Sci. Paris, Ser. I 343, (2006), 329-332.}
\vskip 0.5\baselineskip
\selectlanguage{francais}
\noindent{\bf R\'esum\'e}
\vskip 0.5\baselineskip
\noindent
Le principe des fonctions arbitraires dit que la partie fractionnaire de $nX$ converge stablement vers une variable al\'eatoire ind\'ependante uniform\'ement r\'epartie sur $[0,1]$ d\`es que $X$ a une densit\'e ou seulement une fonction caract\'eristique tendant vers z\'ero \`a l'infini. Nous \'etablissons une propri\'et\'e analogue pour des variables al\'eatoires d\'efinies sur l'espace du mouvement brownien par repliement de la mesure stochastique $dB_s$ sur elle-m\^eme.
{\it Pour citer cet article~: N. Bouleau, C. R.
Acad. Sci. Paris, Ser. I 343, (2006), 329-332.}
\end{abstract}
\selectlanguage{english}
\section{Introduction}
Let us denote by $\{x\}$ the fractional part of the real number $x$ and by $\stackrel{d}{\Longrightarrow}$ the weak convergence of random variables. Let $(X,Y)$ be a pair of random variables with values in $\mathbb{R}\times\mathbb{R}^r$; we refer to the following property or its extensions as the arbitrary functions principle:
\begin{equation}(\{nX\},Y)\quad\stackrel{d}{\Longrightarrow}\quad (U,Y)\end{equation}
where $U$ is uniformly distributed on $[0,1]$ independent of $Y$.
This property is satisfied when $X$ has a density or more generally a characteristic function vanishing at infinity (cf. [5] Chap. VIII \S92 and \S93, [2], [4]). It yields an approximation property of $X$ by the random variable $X_n= X-\frac{1}{n}\{nX\}=\frac{[nX]}{n}$ where $[x]$ denotes the integer part of $x$:\\
\noindent{\bf Proposition 1.} {\it Let $X$ be a real random variable with a density and $Y$ a random variable with values in $\mathbb{R}^r$. Let $X_n=\frac{[nX]}{n}$.
a) For all $\varphi\in\mathcal{C}^1\bigcap{\mbox{\rm Lip}}(\mathbb{R})$ and every integrable random variable $Z$,
$$(n(\varphi(X_n)-\varphi(X)), Y)\quad\stackrel{d}{\Longrightarrow}\quad(-U\varphi^\prime(X),Y)$$
$$n^2\mathbb{E}[(\varphi(X_n)-\varphi(X))^2Z]\quad\rightarrow\quad\frac{1}{3}\mathbb{E}[\varphi^{\prime 2}(X)Z]$$
where $U$ is uniformly distributed on $[0,1]$ independent of $(X,Y)$.
\indent b) $\forall\psi\in L^1([0,1])$
$$(\psi(n(X_n-X)),Y)\quad\stackrel{d}{\Longrightarrow}\quad(\psi(-U),Y)$$
under any probability measure $\tilde{\mathbb{P}}\ll\mathbb{P}$.}
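As a purely illustrative numerical check, not part of the argument of this note, the second limit of a) with $Z=1$ can be observed by a short Monte Carlo simulation (the Gaussian law of $X$ and the test function $\varphi=\sin$ are arbitrary choices):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, N = 200, 10**5
X = rng.standard_normal(N)            # X with a density (here Gaussian)
Xn = np.floor(n * X) / n              # X_n = [nX]/n
lhs = n**2 * np.mean((np.sin(Xn) - np.sin(X))**2)   # phi = sin, Z = 1
rhs = np.mean(np.cos(X)**2) / 3       # (1/3) E[phi'(X)^2]
print(lhs, rhs)                       # the two values should be close
\end{verbatim}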
We extend such results to random variables defined on the Wiener space.
\section{Periodic isometries.}
Let $(B_t)$ be a standard $d$-dimensional Brownian motion and let $m$ be the Wiener measure, law of $B$. Let $t\mapsto M_t$ be a bounded deterministic measurable map, periodic with unit period, into the space of orthogonal $d\times d$-matrices such that $\int_0^1M_s ds=0$ (e.g. a rotation in $\mathbb{R}^d$ of angle $2\pi t$). The transform $B_t\mapsto\int_0^tM_sdB_s$ defines an isometric endomorphism $T_M$ of $L^p(m)$, $1\leq p\leq \infty$. Let $M_n(s)=M(ns)$ and $T_n=T_{M_n}$. The transpose of a matrix $N$ is denoted $N^\ast$.\\
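For instance, for $d=2$ one may take the rotation of angle $2\pi t$ already mentioned:
\[
M_t=\begin{pmatrix}\cos 2\pi t & -\sin 2\pi t\\ \sin 2\pi t & \cos 2\pi t\end{pmatrix},
\qquad M_tM_t^\ast={\rm Id},\qquad \int_0^1 M_s\,ds=0,
\]
and $T_M$ then sends $B$ to the process $\int_0^\cdot M_s\,dB_s$, which is again a standard two-dimensional Brownian motion.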
\noindent{\bf Proposition 2.} {\it Let $X\in L^1(m)$. Let $\tilde{m}$ be a probability measure absolutely continuous w.r.t. $m$. Under $\tilde{m}$ we have
$$(T_n(X),B)\quad\stackrel{d}{\Longrightarrow}\quad(X(w),B).$$
The weak convergence acts on $\mathbb{R}\times\mathcal{C}([0,1])$ and $X(w)$ denotes a random variable with the same law as $X$ had under $m$, obtained as a function of a Brownian motion $W$ independent of $B$.}\\
\noindent{\bf Proof.} a) If $X=\exp\{i\int_0^1\xi.dB+\frac{1}{2}\int_0^1|\xi|^2ds\}$ for some element $\xi\in L^2([0,1],\mathbb{R}^d)$, we have
$T_n(X)=\exp\{i\int_0^1\xi^\ast_sM_n(s)dB_s+\frac{1}{2}\int_0^1|\xi|^2ds\}.$
Putting $Z^n_t=\int_0^t\xi^\ast_sM_n(s)dB_s$ gives
$\langle Z^n,Z^n\rangle_t=\int_0^t\xi^\ast_sM_n(s)M_n^\ast(s)\xi_sds=\int_0^t|\xi|^2(s)ds$ which is a continuous function. Now by Proposition 1,
$\int_0^t\xi^\ast_sM_n(s)ds\rightarrow\int_0^t\xi^\ast_sds\int_0^1M(s)ds=0,$
which implies by Ascoli's theorem that $\sup_t|\int_0^t\xi^\ast_sM_n(s)ds|\rightarrow0.$ The argument of H. Rootz\'en [6] applies and yields
$(\int_0^.\xi^\ast M_ndB,B)\stackrel{d}{\Longrightarrow}(\int_0^.\xi.dW,B)$ giving the result in this case by continuity of the exponential function.
b) When $X\in L^1(m)$, we approximate $X$ by $X_k$, a linear combination of exponentials of the preceding type, and consider the characteristic functions. The inequality $$|\mathbb{E}[e^{iuT_n(X)}e^{i\int h.dB}]-\mathbb{E}[e^{iuT_n(X_k)}e^{i\int h.dB}]|
\leq |u|\,\mathbb{E}|T_n(X)-T_n(X_k)|=|u|\;\|X-X_k\|_{L^1}$$
gives the result.
c) This extends to the case $\tilde{m}\ll m$ by the properties of stable convergence.
$\diamond$
\section{Approximation of the Ornstein-Uhlenbeck structure.}
From now on, we assume for simplicity that $(B)$ is one-dimensional. Let $\theta$ be a periodic real function with unit period such that $\int_0^1\theta(s)ds=0$ and $\int_0^1\theta^2(s)ds=1$. We consider the transform $R_n$ of the space $L^2_\mathbb{C}(m)$ defined by its action on the Wiener chaos:
If $X=\int_{s_1<\cdots<s_k}\hat{f}(s_1,\ldots,s_k)dB_{s_1}\ldots dB_{s_k}$ for $\hat{f}\in L^2_{sym}([0,1]^k,\mathbb{C})$,
$$R_n(X)=\int_{s_1<\cdots<s_k}\hat{f}(s_1,\ldots,s_k)e^{i\frac{1}{n}\theta(ns_1)}dB_{s_1}\ldots e^{i\frac{1}{n}\theta(ns_k)}dB_{s_k}.$$
$R_n$ is an isometry from $L^2_\mathbb{C}(m)$ into itself. From
$n(e^{\frac{i}{n}\sum_{p=1}^k\theta(ns_p)}-1)=i\sum_{p=1}^k\theta(ns_p)\int_0^1 e^{\alpha \frac{i}{n}\sum_p\theta(ns_p)}d\alpha$ it follows that if $X$ belongs to the $k$-th chaos
$$\|n(R_n(X)-X)\|^2_{L^2}\leq k^2\|X\|^2_{L^2}\|\theta\|^2_\infty.$$ In other words, denoting $A$ the Ornstein-Uhlenbeck operator, $X\in \mathcal{D}(A)$ implies
$$\|n(R_n(X)-X)\|_{L^2}\leq 2\|AX\|_{L^2}\|\theta\|_\infty$$ and this leads to\\
\noindent{\bf Proposition 3.} {\it If $X\in\mathcal{D}(A)$
$$(-in(R_n(X)-X),B)\quad\stackrel{d}{\Longrightarrow}\quad(X^\#(\omega,w),B)$$
where $W$ is a Brownian motion independent of $B$ and $X^\#=\int_0^1D_sX\,dW_s$.}\\
\noindent{\bf Proof.} If $X$ belongs to the $k$-th chaos, expanding the exponential by its Taylor series gives
$$n(R_n(X)-X) =i\int_{s_1<\cdots<s_k}\hat{f}(s_1,\ldots,s_k)\sum_{p=1}^k\theta(ns_p)dB_{s_1}\ldots dB_{s_k}+Q_n$$
with $\|Q_n\|^2\leq \frac{1}{4n}k^2\|\theta\|^2_\infty\|X\|^2$.
Then using that $\int_{s_1<\cdots<s_p<\cdots<s_k}h(s_1,\ldots,s_k)\theta(ns_p)dB_{s_1}\ldots dB_{s_p}\ldots dB_{s_k}$
\noindent converges stably to $\int_{s_1<\cdots<s_p<\cdots<s_k}h(s_1,\ldots,s_k)dB_{s_1}\ldots dW_{s_p}\ldots dB_{s_k}$ one gets
$$-in(R_n(X)-X)\quad\stackrel{s}{\Longrightarrow}\quad
\begin{array}{l}
\int_{t<s_2<\cdots<s_k}\hat{f}(t,s_2,\ldots,s_k)dW_tdB_{s_2}\ldots dB_{s_k}\\
+\int_{s_1<t<\cdots<s_k}\hat{f}(s_1,t,\ldots,s_k)dB_{s_1}dW_t\ldots dB_{s_k}\\
+\cdots\\
+\int_{s_1<\cdots<s_{k-1}<t}\hat{f}(s_1,\ldots,s_{k-1},t)dB_{s_1}\ldots dB_{s_{k-1}}dW_t
\end{array}
$$
which equals $\int D_s(X)dW_s=X^\#$.
The general case is obtained by approximating $X$ by $X_k$ in the $\mathbb{D}^{2,2}$ norm; the same argument with characteristic functions as in the proof of Proposition 2 gives the result.
$\diamond$
By the properties of stable convergence, the weak convergence of prop. 3 also holds under $\tilde{m}\ll m$. By similar computations we obtain\\
\noindent{\bf Proposition 4.} {\it $\forall X\in\mathcal{D}(A)$
$$n^2\mathbb{E}[|R_n(X)-X|^2]\rightarrow2\mathcal{E}[X]$$
where $\mathcal{E}$ is the Dirichlet form associated with the Ornstein-Uhlenbeck operator.}\\
Following the same lines, it is possible to show that the theoretical $\overline{A}$ and practical $\underline{A}$ bias operators (cf. [1]), given on the algebra $\mathcal{L}\{e^{\int\xi dB}\;;\;\xi\in\mathcal{C}^1\}$ by
$$\begin{array}{c}
n^2\mathbb{E}[(R_n(X)-X)Y]=<\overline{A}X,Y>_{L^2(m)}\\
n^2\mathbb{E}[(X-R_n(X))R_n(Y)]=<\underline{A}X,Y>_{L^2(m)}
\end{array}
$$ are well defined and equal to $A$.\\
\noindent{\it Comment.} The preceding properties are very similar to the results concerning the weak asymptotic error for the resolution of SDEs by the Euler scheme, which also involve an ``extra''-Brownian motion (cf. [3]).
Nevertheless these results do not use the arbitrary functions principle because a convergence like
$(n\int_0^.(s-\frac{[ns]}{n})dB_s,B)\stackrel{d}{\Longrightarrow}(\frac{1}{\sqrt{12}}W+\frac{1}{2}B,B)$
is hidden by a dominating phenomenon
$(\sqrt{n}\int_0^.(B_s-B_{\frac{[ns]}{n}})dB_s,B)$ $\stackrel{d}{\Longrightarrow}(\frac{1}{\sqrt{2}}
\tilde{W},B)$
due to the fact that when a sequence of variables in the second (or higher order) chaos converges stably to a Gaussian variable, this variable appears to be independent of the first chaos and therefore of $B$.
The arbitrary functions principle is slightly different: it is a crumpling of the random orthogonal measure $dB_s$ on itself. This operates even on the first chaos. Concerning the solution of SDEs by the Euler scheme, it is in force for SDEs of the form
$$\left\{
\begin{array}{l}
X^1_t=x^1_0+\int_0^tf^{11}(X^2_s)dB_s+\int_0^tf^{12}(X^1_s,X^2_s)ds\\
X^2_t=x^2_0+\int_0^tf^{22}(X^1_s,X^2_s)ds
\end{array}\right.
$$
where $X^1$ takes values in $\mathbb{R}^{k_1}$, $X^2$ in $\mathbb{R}^{k_2}$, $B$ in $\mathbb{R}^d$, and the $f^{ij}$ are matrices of suitable dimensions. Such equations are encountered in the description of mechanical systems under noisy solicitations when the noise depends only on the position of the system and on time. In such equations, integration by parts reduces the stochastic integrals to ordinary integrals, and it may be shown that, when solved by the Euler scheme, they present a weak asymptotic error of order $\frac{1}{n}$ instead of the usual $\frac{1}{\sqrt{n}}$.
\end{document}
\begin{document}
\title{Descriptional Complexity of Three-Nonterminal Scattered Context Grammars: An Improvement}
\def\titlerunning{Three-Nonterminal Scattered Context Grammars: An Improvement}
\author{Tom{\' a}{\v s} Masopust \qquad Alexander Meduna
\institute{Faculty of Information Technology -- Brno University of Technology\\
Bo\v{z}et\v{e}chova 2 -- Brno 61266 -- Czech Republic}
\email{\{masopust,meduna\}@fit.vutbr.cz}
}
\def\authorrunning{T.~Masopust, A.~Meduna}
\maketitle
\begin{abstract}
Recently, it has been shown that every recursively enumerable language can be generated by a scattered context grammar with no more than three nonterminals. However, in that construction, the maximal number of nonterminals simultaneously rewritten during a derivation step depends on many factors, such as the cardinality of the alphabet of the generated language and the structure of the generated language itself. This paper improves the result by showing that the maximal number of nonterminals simultaneously rewritten during any derivation step can be limited by a small constant regardless of other factors.
\end{abstract}
\section{Introduction}
Scattered context grammars, introduced by Greibach and Hopcroft in~\cite{GreHop}, are partially parallel rewriting devices based on context-free productions, where in each derivation step, a finite number of nonterminal symbols of the current sentential form is simultaneously rewritten. As scattered context grammars were originally defined without erasing productions, it is no surprise that they generate only context sensitive languages. On the other hand, however, the question of whether every context sensitive language can be generated by a (nonerasing) scattered context grammar is an interesting, longstanding open problem. Note that the natural generalization of these grammars allowing erasing productions makes them computationally complete (see \cite{meduna:eatcs}). For some conditions when a scattered context grammar can be transformed to an equivalent nonerasing scattered context grammar, the reader is referred to \cite{techetAI}. In what follows, we implicitly consider scattered context grammars with erasing productions.
Although many interesting results have been achieved in the area of the descriptional complexity of scattered context grammars during the last few decades, the main motivation to re-open this investigation area comes from an interesting, recently started research project on building parsers and compilers of programming languages making use of advantages of scattered context grammars (see, for instance, papers \cite{kolar,rychnov} for more information on the advantages and problems arising from this approach).
To give an insight into the descriptional complexity of scattered context grammars (including erasing productions), note that it is proved in \cite{meduna2} that one\mbox{-}nonterminal scattered context grammars are not powerful enough to generate all context sensitive languages; specifically, it is demonstrated that they are not able to generate the language $\{a^{2^{2^n}} : n\ge 0\}$ (which is scattered context, see Lemma \ref{lem1} below). In addition, although they are not able to generate all these languages, it is an open problem (because of the erasing productions) whether they can generate a language which is not context sensitive. On the other hand, it is proved in \cite{Meduna00b} that three nonterminals are sufficient for scattered context grammars to characterize the family of recursively enumerable languages. In that proof, however, the maximal number of nonterminal symbols simultaneously rewritten during any derivation step depends on the alphabet of the generated language and on the structure of the generated language itself.
Later, in \cite{vaszil}, Vaszil gave another construction limiting the maximal number of nonterminals simultaneously rewritten during one derivation step. However, this improvement comes at the price of increasing the number of nonterminals. Although Vaszil's construction has been improved since then (in the sense of the number of nonterminals, see \cite{masopustTCS} for an overview of the latest results), the number of three nonterminals has not been achieved.
This paper presents a construction improving the descriptional complexity of scattered context grammars with three nonterminals by limiting the maximal number of nonterminals simultaneously rewritten during any derivation step regardless of any other factors. This result is achieved by combining the approaches of both previously mentioned papers. Specifically, this paper proves that every recursively enumerable language is generated by a three-nonterminal scattered context grammar, where no more than nine symbols are simultaneously rewritten during any derivation step. This is a significant improvement in comparison with the result of \cite{Meduna00b}, where more than $2n+4$ symbols have to be simultaneously rewritten during almost all derivation steps of any successful derivation, for some $n$ strictly greater than the number of terminal symbols of the generated language plus two. To be more precise, $n$ strongly depends not only on the terminal alphabet of the generated language, but also on the structure of the generated language itself.
Finally, note that, analogously to \cite{Meduna00b}, we do not give a constant limit on the number of non-context-free productions, which is limited by fixed constants in \cite{vaszil} and \cite{masopustTCS}. To find such a limit is an interesting challenge for future research, as well as to find out whether the number of nonterminals can be reduced to two. See also the overview of known results and open problems in the conclusion.
\section{Preliminaries and definitions}
We assume that the reader is familiar with formal language theory (see \cite{salomaa}).
For an alphabet (finite nonempty set) $V$, $V^*$ represents the free monoid generated
by~$V$ with the unit denoted by $\lambda$. Set $V^+ = V^*-\{\lambda\}$. For $w \in V^*$ and $a\in V$, let $|w|$, $|w|_a$, and $w^R$ denote the length of $w$, the number of occurrences of $a$ in $w$, and the mirror image of $w$, respectively.
A {\em scattered context grammar\/} is a quadruple $G=(N,T,P,S)$, where $N$ is the alphabet of nonterminals, $T$ is the alphabet of terminals such that $N\cap T=\emptyset$, $S\in N$ is the start symbol, and $P$ is a finite set of productions of the form $(A_1,A_2,\dots,A_n)\to (x_1,x_2,\dots,x_n)$, for some $n\geq 1$, where $A_i\in N$ and $x_i \in (N\cup T)^*$, for all $i=1,\dots,n$. If $n\ge 2$, then the production is said to be {\em non\mbox{-}context-free}; otherwise, it is context-free. In addition, if for each $i=1,\dots,n$, $x_i\neq\lambda$, then the production is said to be {\em nonerasing}; $G$ is {\em nonerasing\/} if all its productions are nonerasing.
For $u,v\in (N\cup T)^*$, $u\Rightarrow v$ in $G$ provided that
\begin{itemize}
\item $u=u_1A_1u_2A_2u_3\dots u_nA_nu_{n+1}$,
\item $v=u_1x_1u_2x_2u_3\dots u_nx_nu_{n+1}$, and
\item $(A_1,A_2,\dots,A_n)\to (x_1,x_2,\dots,x_n)\in P$,
\end{itemize}
where $u_i \in (N\cup T)^*$, for all $i=1,\dots,n+1$. The language generated by $G$ is
defined as
$$L(G)=\{w \in T^* : S \Rightarrow^* w\},$$
where $\Rightarrow^*$ denotes the reflexive and
transitive closure of the relation $\Rightarrow$. A language $L$ is said to be
a (nonerasing) {\em scattered context language} if there is a (nonerasing) scattered
context grammar~$G$ such that $L=L(G)$.
\section{Main results}
First, we give a simple example of a nonerasing scattered context grammar generating a non\mbox{-}context-free language. Then, we present a nonerasing scattered context grammar generating the nontrivial context sensitive language $\{a^{l^{k^n}} : n\ge 0\}$, for any $k,l\ge 2$. Thus, for $k=l=2$, we have a scattered context grammar generating the language mentioned in the introduction. Note that independently of $k$ and $l$, the grammar has only twelve nonterminals and fourteen productions, ten of which are non-context-free.
\begin{example}
Let $G=(\{S,A,B,C\},\{a,b,c\},P,S)$ be a scattered context grammar with $P$ containing the following productions
\begin{itemize}
\item $(S)\to(ABC)$
\item $(A,B,C)\to(aA,bB,cC)$
\item $(A,B,C)\to(a,b,c)$
\end{itemize}
Then, it is not hard to see that the language generated by $G$ is
\[L(G)=\{a^nb^nc^n : n\ge 1\}.\tag*{$\diamond$}\]
\end{example}
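For instance, the string $a^3b^3c^3$ is obtained in the grammar of the example above by the derivation
\[
S \Rightarrow ABC \Rightarrow aAbBcC \Rightarrow aaAbbBccC \Rightarrow aaabbbccc\,,
\]
applying the second production twice between the first and the last one.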
\begin{lemma}\label{lem1}
For any $k,l\ge 2$, the language $\{a^{l^{k^n}} : n\ge 0\}$ is a nonerasing scattered context language.
\end{lemma}
\begin{proof}
Let $G=(\{S,A,A',A'',B,C,X,X_2,X_3,Y,Z,Z'\},\{a\},P,S)$ be a nonerasing scattered context grammar with $P$ containing the following productions:
\begin{enumerate}
\item\label{b1} $(S) \to (a^l)$,
\item\label{b2} $(S) \to \big(a^{l^k}\big)$,
\item\label{b3} $(S) \to \Big(a^{l^{k^2}}\Big)$,
\item\label{b4} $(S) \to (A''A^{l-1}X_2B^{k^2-3}A'C^{k^2-1}XY)$,
\item[*] first stage
\item\label{b5} $(A',C,X,Y) \to (B^{k-1},A',X,C^{k}Y)$,
\item\label{b6} $(A',X,Y) \to (B^{k-1},A',C^{k-1}XY)$,
\item\label{b7} $(A',X,Y) \to (Z,Z,Y)$,
\item\label{b8} $(Z,C,Z,Y) \to (Z,B^{k-1},Z,Y)$,
\item\label{b9} $(Z,Z,Y) \to (B,B^{k-1},X_3)$,
\item[*] second stage
\item\label{b10} $(A'',A,X_2,X_3) \to (a^{l-1},A'',X_2A^{l},X_3)$,
\item\label{b11} $(A'',X_2,B,X_3) \to (a^{l-1},A'',A^{l-1}X_2,X_3)$,
\item\label{b12} $(A'',X_2,X_3) \to (Z',Z',X_3)$,
\item\label{b13} $(Z',A,Z',X_3) \to (Z',a^{l-1},Z',X_3)$,
\item\label{b14} $(Z',Z',X_3) \to (a,a^{l-1},a^{l-1})$.
\end{enumerate}
Then, all the possible successful derivations of $G$ are summarized in the following (strings in the square brackets are regular expressions describing the productions applied during the derivations).
\[\begin{array}{rllcl}
S & \Rightarrow & a^l & \qquad & [\textrm{(\ref{b1})}]\\
S & \Rightarrow & a^{l^k} & \qquad & [\textrm{(\ref{b2})}]\\
S & \Rightarrow & a^{l^{k^2}} & \qquad & [\textrm{(\ref{b3})}]\\
S & \Rightarrow & A''A^{l-1}X_2B^{k^2-3}A'C^{k^2-1}XY & & [\textrm{(\ref{b4})}]\\
& \Rightarrow^* & A''A^{l-1}X_2B^{k^n-2}X_3 & & [\textrm{((\ref{b5})$^+$(\ref{b6}))$^*$(\ref{b7})(\ref{b8})$^+$(\ref{b9})}]\\
& \Rightarrow^* & a^{l^{k^n-1}-l}A''A^{l^{k^n-1}-1}X_2X_3 & & [\textrm{((\ref{b10})$^+$(\ref{b11}))$^*$}]\\
& \Rightarrow^* & a^{l^{k^n}} & & [\textrm{(\ref{b12})(\ref{b13})$^+$(\ref{b14})}]\,.
\end{array}\]
The first three cases are clear. In the last case, $l$ symbols $A$ (including~$A''$) are generated in the first derivation step. Then, the derivation can be divided into two parts: in the first part, only productions from the first stage are applied (because there is no $X_3$ in the sentential form) generating $k^n$ auxiliary symbols ($B$'s, $X_2$, and $X_3$). Then, in the second part, only productions from the second stage are applied (because there is no $Y$ in the sentential form) generating $l^{k^n}$ symbols $a$. More precisely, we prove that all sentential forms of a successful derivation containing $X_3$, i.\,e. of the second part, are of the form \[a^{l^{m-1}-l}A''A^{l^{m-1}-1}X_2B^{k^n-m}X_3\,,\] for all $m=2,3,\dots,k^n$ and $n\ge 3$. Clearly, for $m=2$, the sentential form is $A''A^{l-1}X_2B^{k^n-2}X_3$. For $m=k^n$, we have $a^{l^{k^n-1}-l}A''A^{l^{k^n-1}-1}X_2X_3$ and it is not hard to prove that
\begin{eqnarray*}
a^{l^{k^n-1}-l}A''A^{l^{k^n-1}-1}X_2X_3 & \Rightarrow^* & a^{l^{k^n-1}-l}aa^{(l-1)(l^{k^n-1}-1)}a^{l-1}a^{l-1} = a^{l^{k^n}}\,.
\end{eqnarray*} Thus, assume that $2\le m < k^n$. Then,
\begin{eqnarray*}
& & a^{l^{m-1}-l}A''A^{l^{m-1}-1}X_2B^{k^n-m}X_3\\
& \Rightarrow^* & a^{l^{m-1}-l}a^{(l-1)(l^{m-1}-1)}A''X_2A^{l(l^{m-1}-1)}B^{k^n-m}X_3\quad [\textrm{(\ref{b10})}^*]\\
&&= a^{l^m-2l+1}A''X_2A^{l^m-l}B^{k^n-m}X_3\\
& \Rightarrow & a^{l^m-2l+1}a^{l-1}A''A^{l^m-l}A^{l-1}X_2B^{k^n-m-1}X_3\quad [\textrm{(\ref{b11})}]\\
&&= a^{l^m-l}A''A^{l^m-1}X_2B^{k^n-(m+1)}X_3\,.
\end{eqnarray*}
For a complete proof of the correctness of this construction, the reader is referred to \cite{masopust:mono}.
\end{proof}
Now, we prove the main result of this paper.
\begin{theorem}
Every recursively enumerable language is generated by a scattered context grammar with three nonterminals, where no more than nine nonterminals are simultaneously rewritten during one derivation step.
\end{theorem}
\begin{proof}
Let $L$ be a recursively enumerable language. Then, by Geffert~\cite{Geffert:1988},
there is a grammar\linebreak \hbox{$G'=(\{S',A,B,C,D\},T,P'\cup\{AB\to\lambda,CD\to\lambda\},$ $S')$},
where $P'$ contains only context-free productions of the following three forms:
$S'\to uS'a$, $S'\to uS'v$, $S'\to \lambda$, for $u\in\{A,C\}^*$, $v\in\{B,D\}^*$,
and $a\in T$. In addition, Geffert proved that any successful derivation of $G'$ is
divided into two parts: the first part is of the form
\[S' \Rightarrow^* w_1S'w_2w \Rightarrow w_1w_2w\,,\]
generated only by context-free productions from $P'$, where $w_1\in\{A,C\}^*$,
$w_2\in\{B,D\}^*$, and $w\in T^*$, and the second part is of the form
\[w_1w_2w\Rightarrow^* w\,,\]
generated only by productions $AB\to\lambda$ and $CD\to\lambda$.
Let $G=(\{S,A,B\},T,P,S)$ be a scattered context grammar with $P$ constructed as follows:
\begin{enumerate}
\item\label{genS} $(S) \to (SBBASABBSA)$,
\item\label{a2} $(S,S,S) \to (S,h(u)Sh(a),S)$ \qquad if $S' \to uS'a \in P'$,
\item\label{a3} $(S,S,S) \to (S,h(u)Sh(v),S)$ \qquad if $S' \to uS'v \in P'$,
\item\label{a4} $(S,A,B,B,S,B,B,A,S) \to (\lambda,\lambda,\lambda,S,S,S,\lambda,\lambda,\lambda)$,
\item\label{a5} $(S,B,A,B,S,B,A,B,S) \to (\lambda,\lambda,\lambda,S,S,S,\lambda,\lambda,\lambda)$,
\item\label{a6} $(S,B,B,A,S,A,B,B,S) \to (\lambda,\lambda,\lambda,SBBA,S,S,\lambda,\lambda,\lambda)$,
\item\label{a7} $(S,B,B,A,S,A,B,B,S) \to (\lambda,\lambda,\lambda,S,S,S,\lambda,\lambda,\lambda)$,
\item\label{remS} $(S,S,S,A) \to (\lambda,\lambda,\lambda,\lambda)$,
\end{enumerate}
where $h$ is a homomorphism from $(\{A,B,C,D\}\cup T)^*$ to $(\{A,B\}\cup T)^*$ defined as $h(A)=ABB$, $h(B)=BBA$, $h(C)=h(D)=BAB$, and $h(a)=AaBB$, for all $a\in T$.
To prove that $L(G')\subseteq L(G)$, consider a successful derivation of $w\in T^*$
in~$G'$. Such a derivation is of the form described above, where the second part of the derivation is according to a sequence $p_1p_2\dots p_r$ of productions $AB\to\lambda$ and $CD\to\lambda$, for some $r\ge 0$. Then, in $G$, the derivation of $w$ can be simulated by applications of the corresponding productions constructed above as follows:
\begin{eqnarray*}
S & \Rightarrow & SBBASABBSA \quad [\textrm{(\ref{genS})}]\\
& \Rightarrow^* & SBBAh(w_1)Sh(w_2w)ABBSA \quad [\textrm{(\ref{a2})$^*$(\ref{a3})$^*$}]\\
& \Rightarrow^* & Sh(w_1)Sh(w_2)SwA \quad [\textrm{(\ref{a6})$^*$(\ref{a7})}]\\
& \Rightarrow^* & SSSwA \quad [q_r\dots q_2q_1]\\
& \Rightarrow & w \quad [\textrm{(\ref{remS})}]\,,
\end{eqnarray*}
where, for each $1\le i\le r$,
$$q_i=\begin{cases}
(S,A,B,B,S,B,B,A,S)\to (\lambda,\lambda,\lambda,S,S,S,\lambda,\lambda,\lambda), & \text{ if $p_i=AB\to\lambda$}, \\
(S,B,A,B,S,B,A,B,S)\to (\lambda,\lambda,\lambda,S,S,S,\lambda,\lambda,\lambda), & \text{ otherwise}.
\end{cases}$$
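For instance, if $w_1=A$ and $w_2=B$ (so that $r=1$ and $p_1=AB\to\lambda$), the sentential form $Sh(w_1)Sh(w_2)SwA$ reached above is $S\,ABB\,S\,BBA\,S\,wA$, and the single erasing step of $G'$ is simulated by one application of production~(\ref{a4}):
\[
S\,ABB\,S\,BBA\,S\,wA \;\Rightarrow\; SSSwA\,,
\]
the nine rewritten nonterminals being the three $S$'s together with the scattered occurrences $A,B,B$ and $B,B,A$ surrounding the middle~$S$.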
On the other hand, to prove that $L(G)\subseteq L(G')$, we demonstrate that $G'$ generates any $x\in L(G)$.
First, we prove that each of the productions (\ref{genS}) and (\ref{remS}) is applied exactly once in each successful derivation of $G$. To prove this, let $S\Rightarrow^* x$ be a derivation of a string $x\in(\{S,A,B\}\cup T)^*$. Let $i$ be the number of applications of production~(\ref{genS}), $j$ be the number of applications of production (\ref{remS}), and $2k$ be the number of $B$'s in $x$. Then, it is not hard to see that
\begin{itemize}
\item $|x|_B = 2k$,
\item $|x|_A = k + i - j$,
\item $|x|_S = 1 + 2i - 3j$.
\end{itemize}
Thus, for $x\in T^*$, we have that $2k=0$ and $i=j$. In addition, $1+2i-3i=0$ implies that $i=1$, which means that each of the productions (\ref{genS}) and (\ref{remS}) is applied exactly once in each successful derivation of $G$---production (\ref{genS}) as the first production (it is the only production applicable to the start symbol alone, since every other production rewrites at least three occurrences of $S$) and production (\ref{remS}) as the last production of the derivation (after its application no $S$ remains, and every production rewrites at least one $S$). We have shown that every successful derivation of $G$ is of the form \[S\Rightarrow SBBASABBSA \Rightarrow^* w_1Sw_2Sw_3Sw_4A \Rightarrow w_1w_2w_3w_4\,,\] for some terminal strings $w_1,w_2,w_3,w_4\in T^*$.
Furthermore, there is no production that can change the position of the middle
symbol $S$. Therefore, with respect to productions of $G$, we have
that
$w_1, w_2\in \{A,B\}^*$, which along with $w_1,w_2\in T^*$ implies that $w_1=w_2=\lambda$. Thus, the previously shown successful derivation is of the form \[S\Rightarrow SBBASABBSA \Rightarrow^* SSw_3Sw_4A \Rightarrow w_3w_4\,.\] Analogously, it can be seen that $w_3\in\{BAB,BBA,AaBB : a\in T\}^*$. Therefore, for the same reason as above, $w_3=\lambda$ and every successful derivation of $G$ is of the form
\begin{eqnarray}
S \Rightarrow SBBASABBSA \Rightarrow^* SSSwA \Rightarrow w\,,
\end{eqnarray}
for some $w\in T^*$.
Consider any inner sentential form of a successful derivation of $G$. Such a sentential form is a string of the form \[u_1Su_2Su_3Su_4A\,,\] for some $u_i\in (\{A,B\}\cup T)^*$, $1\le i\le 4$. However, it is not hard to see that $u_1=\lambda$ and $u_4\in T^*$; otherwise, if there is a nonterminal symbol appearing in the string $u_1u_4$, then, according to the form of productions, none of these symbols can be removed and, therefore, the derivation cannot be successful. Thus, every inner sentential form of any successful derivation of $G$ is of the form
\begin{eqnarray}\label{sf}
S\bar{u}S\bar{v}S\bar{w}A\,,
\end{eqnarray}
where $\bar{u}\in(BBA+\lambda)\{ABB,BAB\}^*$, $\bar{v}\in\{BAB,BBA,AaBB : a\in T\}^*$, and $\bar{w}\in T^*$. Now, we prove that \[\bar{v}\in\{BBA,BAB\}^*\{AaBB : a\in T\}^*(ABB+\lambda)\,.\] In other words, we prove that any applications of productions (\ref{a6}) and (\ref{a7}) precede the first application of any of productions (\ref{a4}) and (\ref{a5}).
Thus, consider the beginning of a successful derivation of the form \[S\Rightarrow SBBASABBSA \Rightarrow^* SBBAuSvABBSwA\,,\] where none of productions (\ref{a6}) and (\ref{a7}) has been applied, and the first application of one of these productions follows. Note that during this derivation, only productions (\ref{genS}) to (\ref{a3}) have been applied because the application of production (\ref{a4}) or (\ref{a5}) skips some nonterminal symbols and, therefore, leads to an incorrect sentential form (see the correct form (\ref{sf}) above). Clearly, $w=\lambda\in T^*$ (it is stated here explicitly for the purposes of the induction below).
If production (\ref{a6}) follows, the derivation proceeds
\begin{eqnarray}
SBBAuSvABBSwA & \Rightarrow & SBBAuSvSwA\,,
\end{eqnarray}
and if production (\ref{a7}) follows, the derivation proceeds
\begin{eqnarray}
SBBAuSvABBSwA & \Rightarrow & SuSvSwA\,.
\end{eqnarray}
In addition, $w\in T^*$ and, according to the form of productions (\ref{genS})
to (\ref{a3}),
$u\in\{ABB,BAB\}^*$ and $v\in\{BBA,BAB,AaBB : a\in T\}^*$.
Now, productions (\ref{a2}) and (\ref{a3}) can be applied. Let
\begin{eqnarray}\label{5}
SBBAuSvSwA & \Rightarrow^* & SBBAuu_1Sv_1vSwA \quad [\textrm{((\ref{a2})+(\ref{a3}))}^*]
\end{eqnarray}
and
\begin{eqnarray}\label{6}
SuSvSwA & \Rightarrow^* & Suu_1Sv_1vSwA \quad [\textrm{((\ref{a2})+(\ref{a3}))}^*]
\end{eqnarray}
be the longest parts of the derivation by productions (\ref{a2}) and (\ref{a3}), i.\,e., the application of one of productions (\ref{a4}) to (\ref{remS}) follows.
${\bf I.}$ In the first case, derivation $(\ref{5})$, each of productions (\ref{a4}), (\ref{a5}), and (\ref{remS}) leads to an incorrect sentential form. Thus, either production (\ref{a6}) or (\ref{a7}) has to be applied. In both cases, however, $v_1v$ has to be of the form $v'AaBB$, for some $a\in T$, i.\,e.,
\begin{eqnarray}
SBBAuu_1Sv'AaBBSwA & \Rightarrow & SBBAu'Sv'SawA \quad [\textrm{(\ref{a6})}]
\end{eqnarray}
and the derivation proceeds as in $(\ref{5})$ or
\begin{eqnarray}
SBBAuu_1Sv'AaBBSwA & \Rightarrow & Su'Sv'SawA \quad [\textrm{(\ref{a7})}]
\end{eqnarray}
and the derivation proceeds as in $(\ref{6})$ because
$$u'=uu_1\in\{ABB,BAB\}^* \mbox{ and }v'\in\{BBA,BAB,AaBB : a\in T\}^*.$$
By induction,
\begin{eqnarray}
SBBAu'Sv'SawA & \Rightarrow^* & Su''Sv''Sw''awA \quad [\textrm{((\ref{a2})+(\ref{a3})+(\ref{a6}))$^*$(\ref{a7})}]\,,
\end{eqnarray}
for some $u''\in\{ABB,BAB\}^*$, $v''\in\{BBA,BAB,AaBB : a\in T\}^*$, and
$w''aw\in T^*$.
${\bf II.}$ In the second case, derivation $(\ref{6})$, each of productions (\ref{a6}) and (\ref{a7}) leads to an incorrect sentential form, and if production (\ref{remS}) is applied, it finishes the derivation, which, as shown above, implies that \hbox{$uu_1=v_1v=\lambda$}. Thus, assume instead that either production (\ref{a4}) or production (\ref{a5}) is applied. Then, in
the former case,\linebreak
$uu_1=ABBu'$ and $v_1v=v'BBA$, and, in the latter case, $uu_1=BABu'$ and $v_1v=v'BAB$, i.\,e.,
\begin{eqnarray}\label{9}
SABBu'Sv'BBASwA & \Rightarrow & Su'Sv'SwA \quad [\textrm{(\ref{a4})}]
\end{eqnarray}
and the derivation proceeds as in $(\ref{6})$ or
\begin{eqnarray}\label{10}
SBABu'Sv'BABSwA & \Rightarrow & Su'Sv'SwA \quad [\textrm{(\ref{a5})}]
\end{eqnarray}
and the derivation also proceeds as in $(\ref{6})$ because
$$u'\in\{ABB,BAB\}^* \mbox{ and } v'\in\{BBA,BAB,AaBB : a\in T\}^*.$$
Notice that the application of a production
constructed in (\ref{a2}) would ultimately lead to an incorrect sentential
form because the derivation would reach one of the following two forms
$$SABBxSyAaBBSzA \ \mbox{ or }\ SBABxSyAaBBSzA,$$
and each of productions (\ref{a4}) and~(\ref{a5}) would move
either $A$ in front of the first $S$, or at least one $B$ behind the last~$S$.
By induction, this implies that the successful derivation proceeds as
\begin{eqnarray}
Su'Sv'SwA & \Rightarrow^* & SSSwA \Rightarrow w \quad [\textrm{((\ref{a3})+(\ref{a4})+(\ref{a5}))$^*$(\ref{remS})}]\,.
\end{eqnarray}
Thus, we have proved that the following sequence of productions \[\textrm{((\ref{a4})+(\ref{a5}))((\ref{a2})+(\ref{a3}))$^*$((\ref{a6})+(\ref{a7}))}\] cannot be applied in any successful derivation of $G$. Therefore, all applications of productions (\ref{a6})
and~(\ref{a7}) precede any application of productions (\ref{a4}) and (\ref{a5}), which means that \[\bar{v}\in\{BBA,BAB\}^*\{AaBB : a\in T\}^*(ABB+\lambda)\,.\]
Finally, by skipping all productions (\ref{a4}) and (\ref{a5}) in the considered successful derivation $S\Rightarrow^* w$, we have
\begin{eqnarray*}
S & \Rightarrow & SBBASABBSA \quad [\textrm{(\ref{genS})}]\\
& \Rightarrow^* & SuSvSwA \quad [\textrm{((\ref{a2})+(\ref{a3})+(\ref{a6}))$^*$(\ref{a7})(\ref{a3})$^*$}]\\
& \Rightarrow & uvw \quad [\textrm{(\ref{remS})}]\,,
\end{eqnarray*}
where $u\in\{ABB,BAB\}^*$, $v\in\{BBA,BAB\}^*$, $u=v^R$ (see~${\bf II}$), and $w\in T^*$. It is not hard to see that by applications of the corresponding productions constructed in (\ref{a2}) and (\ref{a3}), ignoring productions (\ref{a6}) and (\ref{a7}), and applying $S'\to\lambda$ immediately after the last application of productions constructed in~(\ref{a3}), we have that $S'\Rightarrow^* w_1w_2w$ in $G'$, where $w_1\in\{A,C\}^*$ and $w_2\in\{B,D\}^*$ are such that $h(w_1)=u$ and $h(w_2)=v$. As $u=v^R$, we have that $w_1w_2w\Rightarrow^* w$ by productions $AB\to\lambda$ and $CD\to\lambda$, which completes the proof.
\end{proof}
\section{Conclusion}
This section summarizes the results and open problems concerning the descriptional complexity of scattered context grammars known so far.
{\bf One-nonterminal scattered context grammars:}
It is proved in \cite{meduna2} that scattered context grammars with only one nonterminal (including erasing productions) are not able to generate all context sensitive languages. However, because of the erasing productions, it is an open problem whether they can generate a language which is not context sensitive.
{\bf Two-nonterminal scattered context grammars:}
As far as the authors know, there is no published study concerning the generative power of scattered context grammars with two nonterminals.
{\bf Three-nonterminal scattered context grammars:}
In this paper, we have shown that scattered context grammars with three nonterminals, where no more than nine nonterminals are simultaneously rewritten during any derivation step, characterize the family of recursively enumerable languages. However, no other descriptional complexity measures, such as the number of non-context-free productions, are limited in this paper.
Note that Greibach and Hopcroft \cite{GreHop} have shown that every scattered context grammar can be transformed to an equivalent scattered context grammar where no more than two nonterminals are simultaneously rewritten during any derivation step. This transformation, however, introduces many new nonterminals and, therefore, does not improve our result. Thus, it is an open problem whether the maximal number of nonterminals simultaneously rewritten during any derivation step can be reduced to two in case of scattered context grammars with three nonterminals.
Finally, it is also an open problem whether the number of non-context-free productions can be limited.
{\bf Four-nonterminal scattered context grammars:}
It is proved in \cite{masopustTCS} that every recursively enumerable language can be generated by a scattered context grammar with four nonterminals and three non-context-free productions, where no more than six nonterminals are simultaneously rewritten during any derivation step. In comparison with the result of this paper, that result improves the maximal number of simultaneously rewritten symbols and limits the number of non-context-free productions. On the other hand, however, it requires more nonterminals.
{\bf Five-nonterminal scattered context grammars:}
It is proved in \cite{vaszil} that every recursively enumerable language can be generated by a scattered context grammar with five nonterminals and two non-context-free productions, where no more than four nonterminals are simultaneously rewritten during any derivation step. Note that this is the best known bound on the number of non-context-free productions. It is an interesting open problem whether this bound can also be achieved in case of scattered context grammars with three nonterminals.
{\bf Scattered context grammars with one non-context-free production:}
In comparison with the previous result, it is a natural question to ask what is the generative power of scattered context grammars with only one non-context-free production. However, as far as the authors know, this is another very interesting open problem.
{\bf Nonerasing scattered context grammars:}
So far, we have only considered scattered context grammars with erasing productions. However, the most interesting open problem in this investigation area is the question of what is the generative power of nonerasing scattered context grammars. It is not hard to see that they can generate only context sensitive languages. However, it is not known whether nonerasing scattered context grammars are powerful enough to characterize the family of context sensitive languages.
Finally, from the descriptional complexity point of view, it is an interesting challenge for future research to find out whether some results similar to those proved for scattered context grammars with erasing productions can also be achieved in case of nonerasing scattered context grammars.
\paragraph{Acknowledgements}
Both authors have been supported by the Czech Ministry of Education under the research plan no. MSM~0021630528. The second author has also been supported by the Czech Grant Agency project no. 201/07/0005.
\end{document}
\begin{document}
\title[Gap Labelling for Lee--Yang Zeros]{Gap Labels for Zeros of the Partition Function of the 1D Ising Model via the Schwartzman Homomorphism}
\author[D.\ Damanik]{David Damanik}
\address{Department of Mathematics, Rice University, Houston, TX~77005, USA}
\email{[email protected]}
\author[M.\ Embree]{Mark Embree}
\address{Department of Mathematics, Virginia Tech, Blacksburg, VA 24061, USA}
\email{[email protected]}
\author[J.\ Fillman]{Jake Fillman}
\address{Department of Mathematics, Texas State University, San Marcos, TX 78666, USA}
\email{[email protected]}
\dedicatory{Dedicated to the memory of Uwe Grimm}
\maketitle
\begin{abstract}
Inspired by the 1995 paper of Baake--Grimm--Pisani, we aim to explain the empirical observation that the distribution of Lee--Yang zeros corresponding to a one-dimensional Ising model appears to follow the gap labelling theorem. This follows by combining two main ingredients: first, the relation between the transfer matrix formalism for the 1D Ising model and an ostensibly unrelated matrix formalism generating the Szeg\H{o} recursion for orthogonal polynomials on the unit circle, and second, the gap labelling theorem for CMV matrices.
\end{abstract}
\setcounter{tocdepth}{1}\tableofcontents
\hypersetup{
linkcolor={black!30!blue},
citecolor={red},
urlcolor={black!30!blue}
}
\section{Introduction}
\subsection{Inspiration}
Since their discovery, gap labelling theorems have been a useful tool in the analysis of operators. In an abstract formulation, a gap labelling theorem says that if an operator family is generated by an ergodic process by continuously sampling along orbits of the process, then there is a countable subgroup of ${\mathbb{R}}$ that describes the distribution of eigenvalues of the operators. Moreover, this group depends on the ergodic process, but does not depend on the continuous function by which the operators are generated. The $K$-theory formulation of the gap labelling theorem, due to Bellissard and coworkers, realizes this subgroup as the range of a normalized trace on a suitable $C^*$ algebra \cite{Bel1992b, Bel2003}. This approach is further elucidated for substitution models in \cite{BaaGriJos1993, BelBovGhe1992}. The Johnson formulation realizes the gap labels in terms of the range of the Schwartzman asymptotic cycle for one-dimensional differential and finite difference operators \cite{Johnson1986JDE, JohnMos1982CMP}. For additional details and applications, see also \cite{DFGap} and references therein.
In some situations, distributions of zeros or eigenvalues appear to obey a law similar to the one predicted by a gap labelling theorem, even if no operators seem to be present. In 1995, Baake--Grimm--Pisani observed that the one-dimensional ferromagnetic Ising model appears to be just such a model when the magnetic couplings are chosen according to the Fibonacci substitution sequence \cite{BaaGriPis1995JSP}. The purpose of this note is to explain how certain unitary operators (hence gap labelling theorems) enter the picture and to contextualize the observations of \cite{BaaGriPis1995JSP}.
In short, the zeros of a Lee--Yang partition function can be identified with zeros of the trace of a $2 \times 2$ matrix propagator for the Szeg\H{o} recursion for orthogonal polynomials generated by a measure on the unit circle \cite{DamMunYes2013JSP, YessenThesis}. In turn, these zeros can be shown to be eigenvalues of unitary operators derived from such orthogonal polynomials. On the other hand, there exists a gap labelling theory for such unitary operators, which is a result of Geronimo--Johnson \cite{GeronimoJohnson1996JDE}. The rest of the note will spell this out in more detail, and we conclude with a gallery of examples.
\subsection{Ferromagnetic Ising Models on the Line}
Let us begin by defining objects associated with an Ising model on the line. This is not meant to be an exhaustive overview; we just collect the objects and results that we need to exhibit our main points. For additional background and history, we direct the reader to \cite{Baxter1989, Brush1967RMP} and references therein.
To specify a one-dimensional ferromagnetic Ising model, choose a sequence of magnetic couplings $\{J_n\}_{n=1}^\infty$ with $J_n>0$ for all $n$.
For each $N \in {\mathbb{N}}$, denote $\Lambda_N = \{\pm 1\}^N = \{\pm\}^N$. Both versions of $\Lambda_N$ are convenient in certain formulas, so we freely pass between the two representations. On the lattice $\{1,2,\ldots,N\}$, the nearest neighbor Ising model with constant field $H$ is specified by the energy functional
\begin{equation}
E(\sigma) := -\frac{1}{k_B \tau} \sum_{n=1}^N \left( J_n \sigma_n\sigma_{n+1} + H\sigma_n\right), \quad \sigma= (\sigma_1,\ldots,\sigma_N) \in \Lambda_N,
\end{equation}
where $H$ denotes the magnetic field, $\tau>0$ is the temperature, $k_B>0$ is the Boltzmann constant, and $\sigma$ satisfies the periodic boundary condition
\begin{equation} \label{eq:sigmaperbc}
\sigma_{N+1} = \sigma_1.
\end{equation}
For convenience of notation, we introduce $p_n=J_n/(k_B \tau)$ and $q = H/(k_B\tau)$ so that
\begin{equation}
E(\sigma) = E(\sigma,q) := - \sum_{n=1}^N \left( p_n \sigma_n\sigma_{n+1} + q\sigma_n\right), \quad \sigma \in \Lambda_N.
\end{equation}
In physical applications, one is often interested in the \emph{Gibbs state} in which $\mathbb{P}(\sigma)$, the probability of the configuration $\sigma$, is proportional to $\exp(-E(\sigma))$, since this is the probability distribution on $\Lambda_N$ that maximizes the entropy $-\sum_\sigma \mathbb{P}(\sigma) \log \mathbb{P}(\sigma)$ among all distributions with the same expected energy. Naturally, the corresponding normalization constant, known as the \emph{partition function}, plays an important role. More precisely, the partition function is defined by
\begin{equation}
Z_N(q):= \sum_{\sigma \in \Lambda_N} e^{-E(\sigma,q)}, \quad N \in {\mathbb{N}}.
\end{equation}
Introduce the variables
\begin{equation} \label{eq:wbetadef}
\zeta = e^{q}, \quad \beta_n = e^{p_n}
\end{equation}
so that $Z_N$ can be viewed as a function of the variable $\zeta$:
\begin{equation} \label{eq:calZNdef}
{\mathcal{Z}}_N(\zeta) = \sum_{\sigma \in \Lambda_N} \prod_{n=1}^N \beta_n^{\sigma_n\sigma_{n+1}} \zeta^{\sigma_n}.
\end{equation}
Due to the Lee--Yang theorem \cite{LeeYang1952PR}, zeros of ${\mathcal{Z}}_N$ lie on the unit circle
\[ \partial {\mathbb{D}} := \{\zeta \in {\mathbb{C}} : |\zeta|=1\}.\]
Later on, we will see that the zeros of ${\mathcal{Z}}_N$ are the eigenvalues of a suitable unitary operator, which gives another way to see that they lie on $\partial {\mathbb{D}}$ (compare Propositions~\ref{prop:DNzerosAreEigs} and \ref{prop:IsingzerosToDN}).
\subsection{The Ergodic Setting}
The examples that we will study in the present work are generated by sampling along orbits of ergodic topological dynamical systems. Let us make this more precise.
Suppose $(\Omega,T,\mu)$ is an ergodic topological dynamical system (we will review definitions and results from ergodic theory in Section~\ref{sec:background}), denote ${\mathbb{R}}_+ = \{x \in {\mathbb{R}} : x>0\}$, and consider $g \in C(\Omega,{\mathbb{R}}_+)$. For each $\omega \in \Omega$, one obtains a realization of a ferromagnetic Ising model by taking $p_n = p_n(\omega) = g(T^n\omega)$. We denote the dependence on $\omega$ by writing, for example,
\begin{equation}
{\mathcal{Z}}_N(\zeta,\omega) = \sum_{\sigma \in \Lambda_N} \prod_{n=1}^N e^{g(T^n\omega)\sigma_n\sigma_{n+1}} \zeta^{\sigma_n}
\end{equation}
for the partition function. Let us say that $\zeta \in \partial {\mathbb{D}}$ is in a \emph{spectral gap} of the Ising model if for some $\varepsilon>0$, a.e.\ $\omega$, and sufficiently large $N$ there are no zeros of ${\mathcal{Z}}_N(\cdot,\omega)$ in an $\varepsilon$-neighborhood of $\zeta$.
\begin{theorem} \label{t:isingGL}
Let $(\Omega,T,\mu)$ denote an ergodic topological dynamical system such that $\Omega = \supp \mu$. There is a countable group ${\mathfrak{A}} = {\mathfrak{A}}(\Omega,T,\mu)$ such that the following statement holds true.
Given $g \in C(\Omega,{\mathbb{R}}_+)$, let ${\mathcal{Z}}_N(\zeta,\omega)$ denote the associated partition functions. If $\zeta_1,\zeta_2 \in \partial {\mathbb{D}}$ both lie in a spectral gap of the associated Ising model, then
\begin{equation} \label{eq:isingGL}
\lim_{N\to\infty} \frac{1}{N} \#\{\zeta \in [\zeta_1,\zeta_2] : {\mathcal{Z}}_N(\zeta,\omega)=0\} \in {\mathfrak{A}}(\Omega,T,\mu), \quad \text{a.e.\ } \omega \in \Omega,
\end{equation}
where $[\zeta_1, \zeta_2]$ denotes the closed arc from $\zeta_1$ to $\zeta_2$ in the counterclockwise direction.
\end{theorem}
\begin{remark}
Let us make some comments.
\begin{enumerate}
\item[(a)] The group ${\mathfrak{A}}(\Omega,T,\mu)$ may be computed explicitly in many cases of interest. Since it arises from the application of a homomorphism studied by Schwartzman \cite{Schwartzman1957Annals}, it is sometimes called the \emph{Schwartzman group} of $(\Omega,T,\mu)$. We will describe ${\mathfrak{A}}(\Omega,T,\mu)$ more precisely in Section~\ref{sec:background}. In Section~\ref{sec:gallery}, we will look at some specific examples in which ${\mathfrak{A}}$ can be computed.
\item[(b)] In the case in which $(\Omega,T,\mu)$ is the strictly ergodic subshift generated by the Fibonacci substitution, Baake--Grimm--Pisani observed the conclusion of Theorem~\ref{t:isingGL} empirically \cite{BaaGriPis1995JSP}. We will discuss this further in Section~\ref{sec:gallery}.
\item[(c)] Theorem~\ref{t:isingGL} follows by combining some theorems and observations from a few different papers that came about since the publication of \cite{BaaGriPis1995JSP}. \end{enumerate}
\end{remark}
Uwe Grimm was an exceptionally generous and encouraging colleague who enjoyed finding surprising connections between ostensibly different mathematical problems. We hope that Uwe would have appreciated how recent developments in mathematical physics shed new light on his earlier observations.
The rest of the paper is laid out as follows. In Section~\ref{sec:background}, we discuss some background about dynamical systems and CMV matrices. Section~\ref{sec:LY2Sz} explains how the partition function may be related to a polynomial derived from a suitable CMV matrix, and then Section~\ref{sec:Sz2Schwartman} explains how to prove Theorem~\ref{t:isingGL}. We conclude with a discussion of specific classes of examples in Section~\ref{sec:gallery} as well as some relevant plots.
\section{Background} \label{sec:background}
Let us begin by reviewing some relevant background. Since Theorem~\ref{t:isingGL} is proved by connecting ideas from dynamical systems and CMV matrices to the Ising model, we introduce the relevant notions from topological dynamics, the general theory of CMV matrices, and the theory of CMV matrices with dynamically defined coefficients.
\subsection{Odds and Ends from Dynamical Systems} \label{ssec:topdyn}
Let us briefly review some terminology and relevant results from dynamical systems. For further reading, one may consult textbook treatments such as Brin--Stuck \cite{BrinStuck2015Book}, Katok--Hasselblatt \cite{KatokHassel1995Book}, and Walters \cite{Walters1982:ErgTh}.
\begin{definition}
By a \emph{topological dynamical system}, we mean an ordered pair $(\Omega,T)$ in which $\Omega$ is a compact metric space and $T:\Omega \to \Omega$ is a homeomorphism. A Borel probability measure $\mu$ on $\Omega$ is called $T$\emph{-invariant} if $\mu(T^{-1}B) = \mu(B)$ for each Borel set $B \subseteq \Omega$. A $T$-invariant Borel probability measure $\mu$ is called \emph{ergodic} (with respect to $T$) if $\mu(E) \in \{0,1\}$ whenever $T^{-1}E=E$. In this case, we say that the triple $(\Omega,T,\mu)$ is an ergodic topological dynamical system.
\end{definition}
\begin{definition}Suppose $(\Omega,T)$ denotes a topological dynamical system. Given a continuous map $A:\Omega \to {\mathrm{GL}}(2,{\mathbb{C}})$, the associated \emph{cocycle} is the skew product
\begin{equation}
(T,A):\Omega \times {\mathbb{C}}^2 \to \Omega \times {\mathbb{C}}^2, \quad (\omega,v)\mapsto (T\omega,A(\omega)v).
\end{equation}
The iterates of $A$ are then defined by $(T,A)^n = (T^n,A^n)$, which the reader can check implies
\begin{equation} \label{eq:cocycleIteratesDef}
A^n(\omega) =
\begin{cases}
A(T^{n-1}\omega) \cdots A(T\omega)A(\omega), & n \geq 1;\\
I, & n =0;\\
[A^{-n}(T^n\omega)]^{-1}, & n \leq -1.
\end{cases}
\end{equation}
\end{definition}
\begin{definition}
Consider a continuous cocycle $(T,A)$ over a topological dynamical system $(\Omega, T)$.
\begin{enumerate}
\item[(a)] We say that $(T,A)$ is \emph{uniformly hyperbolic} if there exist constants $c,\lambda > 0$ such that
\begin{equation}
\|A^n(\omega)\| \geq ce^{\lambda |n|}, \quad \forall \omega \in \Omega, \ n \in {\mathbb{Z}}.
\end{equation}
\item[(b)] We say that $(T,A)$ enjoys an \emph{exponential dichotomy} if there exist continuous maps $\mathsf{E}^-,\mathsf{E}^+:\Omega \to {\mathbb{C}}{\mathbb{P}}^1$ such that
\begin{equation}
A(\omega) \mathsf{E}^\pm(\omega) = \mathsf{E}^\pm(T\omega),
\end{equation}
and constants $C,\lambda >0$ such that
\begin{equation}
\|A^n(\omega)v_+\| , \ \|A^{-n}(\omega)v_-\| \leq Ce^{-\lambda n}, \quad \forall n \in {\mathbb{N}}, \ \omega \in \Omega,
\end{equation}
for all unit vectors $v_\pm \in \mathsf{E}^\pm(\omega)$.
\end{enumerate}
\end{definition}
If $|\det A(\omega)| = 1$ for every $\omega \in \Omega$, then (a) and (b) are equivalent. See \cite{DFLY2016DCDS} for proofs.
\subsection*{The Schwartzman Homomorphism}
As before, let $(\Omega,T,\mu)$ denote an ergodic topological dynamical system. To define the Schwartzman homomorphism and the associated groups, one needs a flow, that is, a continuous-time dynamical system. The most natural way to produce a continuous-time dynamical system that interpolates a discrete-time system such as $(\Omega,T)$ is to form the \emph{suspension}. To be more specific, the \emph{suspension} of $(\Omega,T,\mu)$, denoted $(X,\tau,\nu)$, is defined as follows. The space $X$ is given by
\begin{equation}
X = \Omega \times {\mathbb{R}}/ \! \sim, \quad \text{ where } (\omega,t) \sim (\omega',t') \iff t-t' \in {\mathbb{Z}} \text{ and } T^{t-t'}\omega = \omega'. \end{equation}
We write $[\omega,t]$ for the class of $(\omega,t)$ in $X$. The flow on $X$, denoted by $\tau$, is the natural projection of the translation action of ${\mathbb{R}}$, that is,
\begin{equation}
\tau^s([\omega,t]) = [\omega,t+s], \quad [\omega,t] \in X.
\end{equation}
Finally, $\nu$ is the natural measure on $X$ given by
\begin{equation}
\int_X f\, d\nu = \int_\Omega \int_0^1 f([\omega,t]) \, d\mu(\omega) \, dt.
\end{equation}
Recall that $\phi_0,\phi_1 \in C(X,{\mathbb{T}})$ are called \emph{homotopic}, denoted $\phi_0 \sim \phi_1$, if there exists a continuous $F:X \times [0,1] \to {\mathbb{T}}$ such that $F(\cdot,j) = \phi_j$ for $j=0,1$. Let $C^\sharp(X,{\mathbb{T}}) = C(X,{\mathbb{T}})/\!\sim$ denote the set of homotopy classes of continuous maps $X \to {\mathbb{T}}$. Given $\phi \in C(X,{\mathbb{T}})$, $x \in X$, one can lift the map $\phi_x:t \mapsto \phi(\tau^t x)$ to $\psi_x : {\mathbb{R}} \to {\mathbb{R}}$. From \cite{Schwartzman1957Annals}, there exists a real number ${\mathrm{rot}}(\phi) \in {\mathbb{R}}$ such that
\[{\mathrm{rot}}(\phi) = \lim_{t \to \infty} \frac{\psi_x(t)}{t}, \quad \nu\text{-a.e.\ } x \in X.\]
The induced map ${\mathfrak{F}}_\nu:C^\sharp(X,{\mathbb{T}}) \to {\mathbb{R}}$ given by
\[ {\mathfrak{F}}_\nu([\phi]) = {\mathrm{rot}}(\phi) \]
is called the \emph{Schwartzman homomorphism}. When working with linear cocycles over a dynamical system, it is often convenient to work with maps into the projective line ${\mathbb{R}}{\mathbb{P}}^1$ instead of ${\mathbb{T}}$. For such maps, one can define ${\mathfrak{F}}_\nu$ by identifying ${\mathbb{R}}{\mathbb{P}}^1$ with ${\mathbb{T}}$ via the map ${\mathbb{T}} \ni \theta \mapsto \mathrm{span}\{(\cos\pi\theta,\sin\pi\theta)^\top\} \in {\mathbb{R}}{\mathbb{P}}^1$. Using this identification, if $\Lambda \in C(X,{\mathbb{R}}{\mathbb{P}}^1)$, one has
\begin{equation} \label{eq:schwartzmanFromDeltaArg}
{\mathfrak{F}}_\nu([\Lambda])
= \lim_{T\to \infty} \frac{1}{\pi T} \Delta_{\rm arg}^{[0,T]} \Lambda(\tau^t x), \quad \nu\text{-a.e.\ } x \in X,
\end{equation}
where $\Delta_{\rm arg}^I$ denotes the net change in the argument on the interval $I$. Since we have chosen to define the Schwartzman homomorphism and group by considering maps into ${\mathbb{T}} = {\mathbb{R}} / {\mathbb{Z}}$ instead of ${\mathbb{R}}{\mathbb{P}}^1$, notice the factor of $\pi$ that appears in \eqref{eq:schwartzmanFromDeltaArg}.
\begin{definition}
With notation as above, the \emph{Schwartzman group} associated with $(\Omega,T,\mu)$, denoted ${\mathfrak{A}}(\Omega,T,\mu)$, is the range of the Schwartzman homomorphism, that is,
\begin{equation}
{\mathfrak{A}}(\Omega,T,\mu) = {\mathfrak{F}}_\nu(C^\sharp(X,{\mathbb{T}})).
\end{equation}
\end{definition}
It is known and not hard to check that ${\mathfrak{A}}(\Omega,T,\mu)$ is a countable subgroup of ${\mathbb{R}}$ that contains ${\mathbb{Z}}$. Indeed, one can check that $C^\sharp(X,{\mathbb{T}})$ has at most countably many elements and that the class of the map $[\omega,t]\mapsto t \ \mathrm{mod} \ {\mathbb{Z}}$ is mapped to $1$ by ${\mathfrak{F}}_\nu$. The reader may see \cite{DFGap, ESO1} for details and further discussion. In Section~\ref{sec:gallery}, we will discuss some specific examples and identify their Schwartzman groups (without proofs, which can also be found in \cite{DFGap}).
\subsection{CMV Matrices}
Let us briefly review some aspects of the general theory of CMV matrices and Floquet theory for periodic CMV matrices. We refer the reader to the monographs \cite{Simon2005:OPUC1, Simon2005:OPUC2} for additional details and proofs.
Given a sequence $\{\alpha_n\}_{n \in {\mathbb{Z}}}$ with $\alpha_n \in {\mathbb{D}}$ for every $n$, the associated \emph{extended CMV matrix} ${\mathcal{E}} = {\mathcal{E}}_\alpha$ is given by
\begin{equation} \label{def:extcmv}
{\mathcal{E}}
=
\begin{bmatrix}
\ddots & \ddots & \ddots & \ddots &&&& \\
& \overline{\alpha_0}\rho_{-1} & \boxed{-\overline{\alpha_0}\alpha_{-1}} & \overline{\alpha_1}\rho_0 & \rho_1\rho_0 &&& \\
& {\rho_0\rho_{-1}} & -{\rho_0}\alpha_{-1} & {-\overline{\alpha_1}\alpha_0} & -\rho_1 \alpha_0 &&& \\
&& & \overline{\alpha_2}\rho_1 & -\overline{\alpha_2}\alpha_1 & \overline{\alpha_3} \rho_2 & \rho_3\rho_2 & \\
&& & {\rho_2\rho_1} & -{\rho_2}\alpha_1 & -\overline{\alpha_3}\alpha_2 & -\rho_3\alpha_2 & \\
&& && \ddots & \ddots & \ddots & \ddots &
\end{bmatrix},
\end{equation}
where $\rho_n = \sqrt{1-|\alpha_n|^2}$ and we use a box to denote the matrix element $\langle \delta_0,{\mathcal{E}}_\alpha\delta_0\rangle$. It is well known that ${\mathcal{E}}$ enjoys a factorization ${\mathcal{E}} = {\mathcal{L}}{\mathcal{M}}$, where ${\mathcal{L}}$ and ${\mathcal{M}}$ are block diagonal with $2 \times 2$ blocks. Namely,
\begin{align}
{\mathcal{L}} & = \bigoplus \Theta(\alpha_{2n}) \\
{\mathcal{M}} & = \bigoplus \Theta(\alpha_{2n+1}),
\end{align}
where in both cases $\Theta(\alpha_j)$ acts on $\ell^2(\{j, j+1\})$ and $\Theta$ is given by
\begin{equation} \Theta(\alpha)
= \begin{bmatrix} \overline{\alpha} & \sqrt{1-|\alpha|^2}\ \\ \sqrt{1-|\alpha|^2} & -\alpha \end{bmatrix}. \end{equation}
If $\alpha$ is periodic of period $N$ and $N$ is even, then we consider the Floquet matrices ${\mathcal{F}}_N(\theta)$ given by restricting to $[0,N-1]$ with the boundary condition $u_{n+N}=e^{i\theta}u_n$.
One can check that the ${\mathcal{L}}{\mathcal{M}}$ factorization induces a corresponding factorization of the Floquet operators. That is, with
\begin{align}
{\mathcal{L}}_N(\theta) & = \begin{bmatrix} \Theta(\alpha_0) \\ & \Theta(\alpha_2) \\ && \ddots \\ &&& \Theta(\alpha_{N-2}) \end{bmatrix} \\
{\mathcal{M}}_N(\theta) & = \begin{bmatrix} -\alpha_{N-1} &&&& e^{-i\theta}\rho_{N-1} \\ & \Theta(\alpha_1) \\ && \ddots \\ &&& \Theta(\alpha_{N-3}) \\ e^{i\theta}\rho_{N-1} &&&& \overline{\alpha}_{N-1} \end{bmatrix},
\end{align}
we have
\begin{equation}
{\mathcal{F}}_N(\theta) = {\mathcal{L}}_N(\theta){\mathcal{M}}_N(\theta).
\end{equation}
Since we are interested in computations, let us write out the exact form of ${\mathcal{F}}_N(\theta)$ for relevant ranges of $N \in 2 {\mathbb{N}}$.
For $N=2$,
\begin{align}
\nonumber
{\mathcal{F}}_N(\theta)
& = {\mathcal{L}}_N(\theta) {\mathcal{M}}_N(\theta) \\
\nonumber
& = \begin{bmatrix} \overline{\alpha_0} & \rho_0 \\ {\rho_0} & -\alpha_0 \end{bmatrix}\begin{bmatrix} -\alpha_1 & e^{-i\theta}\rho_1 \\ e^{i\theta} \rho_1 & \overline{\alpha}_1 \end{bmatrix} \\
\label{eq:floqCMVper2}
& = \begin{bmatrix} -\alpha_1\overline{\alpha_0}+e^{i\theta}\rho_1\rho_0
& e^{-i\theta}\rho_1\overline{\alpha_0} + \overline{\alpha_1}\rho_0 \\
-\alpha_1 \rho_0 - e^{i\theta}\rho_1\alpha_0
& e^{-i\theta}\rho_1 \rho_0 -\overline{\alpha_1}\alpha_0 \end{bmatrix}.
\end{align}
Similarly, for $N=4$, we have
\begin{equation}
\label{eq:floqCMVper4}
{\mathcal{F}}_N(\theta)
= \begin{bmatrix}
-\overline{\alpha_0}\alpha_{3}
& \overline{\alpha_1}\rho_0 & \rho_1\rho_0
& {e^{-i\theta}\overline{\alpha_0}\rho_{3}}\\
-{\rho_0}\alpha_{3}
& {-\overline{\alpha_1}\alpha_0}
& -\rho_1 \alpha_0
& {e^{-i\theta}{\rho_0\rho_{3}}}
\\
e^{i\theta} \rho_{3}\rho_{2}
& \overline{\alpha_{2}}\rho_{1} & -\overline{\alpha_{2}}\alpha_{1} & \overline{\alpha_{3}} \rho_{2} \\
-e^{i\theta} \rho_{3}\alpha_{2}
& {\rho_{2}\rho_{1}} & -{\rho_{2}}\alpha_{1} & -\overline{\alpha_{3}}\alpha_{2}
\end{bmatrix}.
\end{equation}
In general, for $N \geq 6$, these have the form
\begin{equation}\label{eq:floqCMVperGeq6}
\tiny
{\mathcal{F}}_N(\theta)
=
\begin{bmatrix}
-\overline{\alpha_0}\alpha_{N-1}
& \overline{\alpha_1}\rho_0 & \rho_1\rho_0
&&& && {e^{-i\theta}\overline{\alpha_0}\rho_{N-1}}\\
-{\rho_0}\alpha_{N-1}
& {-\overline{\alpha_1}\alpha_0}
& -\rho_1 \alpha_0
&&&&& {e^{-i\theta}{\rho_0\rho_{N-1}}}
\\
& \overline{\alpha_2}\rho_1 & -\overline{\alpha_2}\alpha_1 & \overline{\alpha_3} \rho_2 & \rho_3\rho_2 & \\
& {\rho_2\rho_1} & -{\rho_2}\alpha_1 & -\overline{\alpha_3}\alpha_2 & -\rho_3\alpha_2 & \\ \\
&& \ddots & \ddots & \ddots & \ddots \\ \\
&&& \overline{\alpha_{N-4}}\rho_{N-5} & -\overline{\alpha_{N-4}}\alpha_{N-5} & \overline{\alpha_{N-3}} \rho_{N-4} & \rho_{N-3}\rho_{N-4} & \\
&&& {\rho_{N-4}\rho_{N-5}} & -{\rho_{N-4}}\alpha_{N-5} & -\overline{\alpha_{N-3}}\alpha_{N-4} & -\rho_{N-3}\alpha_{N-4} & \\
e^{i\theta} \rho_{N-1}\rho_{N-2} &&&&& \overline{\alpha_{N-2}}\rho_{N-3} & -\overline{\alpha_{N-2}}\alpha_{N-3} & \overline{\alpha_{N-1}} \rho_{N-2} \\
-e^{i\theta} \rho_{N-1}\alpha_{N-2} &&&&& {\rho_{N-2}\rho_{N-3}} & -{\rho_{N-2}}\alpha_{N-3} & -\overline{\alpha_{N-1}}\alpha_{N-2}
\end{bmatrix}.
\end{equation}
Let us also define the Szeg\H{o} transfer matrices. For $z \in {\mathbb{C}}$ and $\alpha \in {\mathbb{D}}$, one defines
\begin{equation}\label{eq:Salphaz:def}
S(\alpha,z) = \frac{1}{\sqrt{1-|\alpha|^2}} \begin{bmatrix} z & -\overline\alpha \\ -\alpha z & 1\end{bmatrix}.
\end{equation}
For our numerical calculations, we want to note the following fact, which relates zeros of the trace of a product of Szeg\H{o} matrices to eigenvalues of a suitable Floquet cutoff and follows from the general theory of periodic CMV matrices. For completeness, we include the short proof.
\begin{prop} \label{prop:DNzerosAreEigs}
Suppose $\{\alpha_n\}_{n \in {\mathbb{Z}}}$ is $N$-periodic, let $\Delta_N$ denote the associated discriminant given by
\begin{equation} \label{eq:normalizedDNz:def}
\Delta_N(z) = {\mathrm{Tr}}(z^{-N/2}S(\alpha_N,z)S(\alpha_{N-1},z) \cdots S(\alpha_1,z)),
\end{equation}
and consider the Floquet matrices as in \eqref{eq:floqCMVper2}, \eqref{eq:floqCMVper4}, and \eqref{eq:floqCMVperGeq6}.
\begin{enumerate}
\item[{\rm(a)}] If $N$ is even, then $z$ is a zero of $\Delta_N$ if and only if $z$ is an eigenvalue of ${\mathcal{F}}_N(\pi/2)$.
\item[{\rm(b)}] If $N$ is odd, then $z$ is a zero of $\Delta_N$ if and only if $z$ is an eigenvalue of ${\mathcal{F}}_{2N}(\pi)$.
\end{enumerate}
\end{prop}
\begin{proof}
Suppose $N$ is even.
Setting $\theta = \pi/2$ in \cite[Eq.~(11.2.17)]{Simon2005:OPUC2} (notice that $\beta = e^{i\theta}$ in Simon's notation) gives
\begin{equation} \label{eq:fromdetFNtoTraceDeltaN}
\det(z-{\mathcal{F}}_N(\pi/2)) = z^{N/2}\left[\prod_{j=0}^{N-1} \rho_j \right] \Delta_N(z).
\end{equation}
In view of \eqref{eq:normalizedDNz:def}, this implies that the zeros of $\Delta_N(z)$ and $\det(z-{\mathcal{F}}_N(\pi/2))$ coincide, proving (a). The proof of (b) follows in a similar fashion by using \cite[Eq.~(11.2.17)]{Simon2005:OPUC2} with $\theta = \pi$ to get
\begin{equation}
\det(z-{\mathcal{F}}_{2N}(\pi)) = z^{N}\left[\prod_{j=0}^{2N-1} \rho_j \right] (\Delta_{2N}(z)+2)
\end{equation}
together with the identity
\[A^2 = {\mathrm{Tr}}(A)A-I\]
for $A \in {\mathrm{SL}}(2,{\mathbb{C}})$, which implies $\Delta_{2N}(z) = [\Delta_N(z)]^2 - 2$.
\end{proof}
We mention this connection for two reasons. First, the results that one brings together to connect the Ising partition function to gap labels naturally relate to the two sides of \eqref{eq:fromdetFNtoTraceDeltaN}. More specifically, the gap labelling theorem that we will formulate in Theorem~\ref{t:CMVgl} concerns the density of states measure, which is related to normalized eigenvalue counting measures associated with cutoff operators in a natural way, and hence connects to the left hand side of \eqref{eq:fromdetFNtoTraceDeltaN}, whereas Theorem~\ref{prop:IsingzerosToDN} gives a connection between $\Delta_N(z)$, which appears on the right hand side of \eqref{eq:fromdetFNtoTraceDeltaN}, and the partition function of an associated Ising model. Secondly, finding roots of polynomials can be numerically delicate (depending on the basis in which the polynomials are expressed, the magnitude of the coefficients, and the algorithm for finding roots), whereas computing eigenvalues of unitary matrices is robust. Indeed, a common way to compute roots of polynomials expressed in the monomial basis is to find eigenvalues of the associated companion matrix, which can yield poor results~\cite{TT94}.
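To make the numerical remark concrete, the following short Python sketch (ours, assuming NumPy; all function names are illustrative) assembles ${\mathcal{F}}_N(\theta)={\mathcal{L}}_N(\theta){\mathcal{M}}_N(\theta)$ for even $N$ and checks Proposition~\ref{prop:DNzerosAreEigs}(a): the eigenvalues of ${\mathcal{F}}_N(\pi/2)$ lie on $\partial{\mathbb{D}}$ and are zeros of $\Delta_N$.
\begin{verbatim}
# Minimal numerical sketch (assuming NumPy): build F_N(theta) = L_N(theta) M_N(theta)
# for an N-periodic Verblunsky sequence with N even, then check that the
# eigenvalues of F_N(pi/2) are zeros of the discriminant Delta_N.
import numpy as np

def theta_block(a):
    rho = np.sqrt(1 - abs(a) ** 2)
    return np.array([[np.conj(a), rho], [rho, -a]])

def floquet_cmv(alphas, theta):
    N = len(alphas)                            # N must be even
    L = np.zeros((N, N), dtype=complex)
    M = np.zeros((N, N), dtype=complex)
    for j in range(0, N, 2):                   # Theta(alpha_0), Theta(alpha_2), ...
        L[j:j+2, j:j+2] = theta_block(alphas[j])
    for j in range(1, N - 2, 2):               # Theta(alpha_1), ..., Theta(alpha_{N-3})
        M[j:j+2, j:j+2] = theta_block(alphas[j])
    a = alphas[N - 1]
    rho = np.sqrt(1 - abs(a) ** 2)
    M[0, 0], M[0, N - 1] = -a, np.exp(-1j * theta) * rho
    M[N - 1, 0], M[N - 1, N - 1] = np.exp(1j * theta) * rho, np.conj(a)
    return L @ M

def discriminant(alphas, z):
    # Delta_N(z) = z^{-N/2} Tr[ S(alpha_N, z) ... S(alpha_1, z) ], with alpha_N = alpha_0
    N, A = len(alphas), np.eye(2, dtype=complex)
    for n in range(1, N + 1):
        a = alphas[n % N]
        S = np.array([[z, -np.conj(a)], [-a * z, 1]]) / np.sqrt(1 - abs(a) ** 2)
        A = S @ A
    return z ** (-N / 2) * np.trace(A)

alphas = [0.3, -0.1 + 0.2j, 0.5, 0.2j]         # any period-4 coefficients in the unit disk
for z in np.linalg.eigvals(floquet_cmv(alphas, np.pi / 2)):
    print(abs(z), abs(discriminant(alphas, z)))   # |z| is about 1, Delta_N(z) is about 0
\end{verbatim}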
\subsection{Ergodic CMV Matrices}
\begin{definition}
Let $(\Omega,T,\mu)$ denote an ergodic topological dynamical system as in Section~\ref{ssec:topdyn}, that is, $\Omega$ is a compact metric space, $T:\Omega \to \Omega$ is a homeomorphism, and $\mu$ is a $T$-ergodic probability measure. Given a continuous function $f:\Omega \to {\mathbb{D}}$, the associated \emph{ergodic family of CMV matrices} is $\{{\mathcal{E}}(\omega)\}_{\omega \in \Omega}$, where ${\mathcal{E}}(\omega)$ is defined by the coefficients
\begin{equation} \label{eq:ergCMVAlphaNDef}
\alpha_n(\omega)= f(T^n\omega), \quad \omega \in \Omega, \ n \in {\mathbb{Z}}.
\end{equation}
\end{definition}
\begin{definition}
For each $N \in {\mathbb{N}}$, $\omega \in \Omega$, we define the measure $\kappa_{\omega,N}$ to be the normalized eigenvalue counting measure of ${\mathcal{E}}(\omega)\chi_{[0,N-1]}$, that is,
\begin{equation}
\int f \, d\kappa_{\omega,N} = \frac{1}{N} {\mathrm{Tr}}\, f({\mathcal{E}}(\omega)\chi_{[0,N-1]}).
\end{equation}
We also define the \emph{density of states (DOS) measure} $\kappa$ by
\begin{equation}
\int_{\partial {\mathbb{D}}}g \, d\kappa = \int_\Omega \langle \delta_0, g({\mathcal{E}}(\omega))\delta_0 \rangle \, d\mu(\omega),
\end{equation}
and note that $\kappa_{\omega,N} \to \kappa$ weakly as $N \to \infty$ for $\mu$-a.e.\ $\omega \in \Omega$ by arguments using ergodicity~\cite[Theorem~10.5.21]{Simon2005:OPUC2}.
\end{definition}
Let us see that one can recover the DOS from the zeros of the discriminants associated with periodic operators defined by the ergodic family. Given ${\mathcal{E}}(\omega)$ as above, define
\[ D_{N}(z,\omega) = {\mathrm{Tr}}\left[ S(\alpha_{N}(\omega),z) \cdots S(\alpha_1(\omega),z)\right]. \]
It is known that $D_{N}(\cdot, \omega)$ has $N$ distinct zeros $\xi_1(\omega),\ldots, \xi_{N}(\omega)$ that lie on $\partial {\mathbb{D}}$ \cite{Simon2005:OPUC2}. We denote by $\nu_{\omega,N}$ the normalized zero-counting measure, that is,
\begin{equation}
\int_{\partial {\mathbb{D}}} f \, d\nu_{\omega,N} =
\frac{1}{N} \sum_{n=1}^{N} f(\xi_n(\omega)).
\end{equation}
\begin{prop} \label{prop:DOSviaFloquet}
With notation as above, one has $\nu_{\omega,N} \to \kappa$ weakly for a.e.\ $\omega \in \Omega$.
\end{prop}
\begin{proof}
From \cite[Theorem~10.5.21]{Simon2005:OPUC2}, we know $\kappa_{\omega,N} \to \kappa$, so it suffices to show
that $\nu_{\omega,N}$ has the same weak limit as $\kappa_{\omega,N}$. Using Proposition~\ref{prop:DNzerosAreEigs}, we see that $\nu_{\omega,N}$ is the normalized eigenvalue counting measure of a suitable Floquet cutoff of ${\mathcal{E}}(\omega)$, so the desired conclusion holds by a direct calculation.
\end{proof}
\section{From Ising, Lee, and Yang to Cantero, Moral, and Vel\'{a}zquez} \label{sec:LY2Sz}
Let us explain how the relationship between Lee--Yang zeros and discriminants of CMV matrices arises. The first observation is that one can characterize the partition function ${\mathcal{Z}}_N(\zeta)$ as the trace of a suitable matrix product. This matrix formalism is well-known to experts, but we include a detailed discussion for ease of reading. To define the aforementioned matrix product, write
\begin{equation} \label{eq:isingTMdef}
M(\beta,\zeta) =
\begin{bmatrix} \beta\zeta & 1/\beta \\[2mm] 1/\beta & \beta/\zeta \end{bmatrix}
\end{equation}
for $\beta,\zeta \in {\mathbb{C}} \setminus\{0\}$.
\begin{prop} \label{prop:isingTMs}
Let $p_n>0$ be given for $1 \le n \le N$, define $\beta_n$ as in \eqref{eq:wbetadef}, and let ${\mathcal{Z}}_N$ denote the associated partition function. One has
\begin{equation}
{\mathcal{Z}}_N(\zeta) = {\mathrm{Tr}}[M(\beta_N,\zeta)M(\beta_{N-1}, \zeta) \cdots M(\beta_1, \zeta)]
\end{equation}
for all $\zeta \neq 0$, where $M$ is given by \eqref{eq:isingTMdef}.
\end{prop}
\begin{proof}
The proof of this result can be found in most standard textbooks on solvable models in statistical mechanics, e.g., \cite{Baxter1989}. We reproduce the proof of \cite{Baxter1989} here to keep the paper more self-contained.
Write the entries of a $2 \times 2$ matrix as
\begin{equation} \label{eq:Asigmasigma'def}
A = \begin{bmatrix} A_{+,+} & A_{+,-} \\ A_{-,+} & A_{-,-} \end{bmatrix}.
\end{equation}
In particular, combining \eqref{eq:isingTMdef} and \eqref{eq:Asigmasigma'def} gives
\begin{equation}
M(\beta, \zeta)_{\sigma,\sigma'} = \beta^{\sigma\sigma'} \zeta^{(\sigma+\sigma')/2}.
\end{equation}
Consequently, using \eqref{eq:calZNdef} and \eqref{eq:sigmaperbc}, we have
\begin{align*}
{\mathcal{Z}}_N(\zeta)
& = \sum_{\sigma \in \Lambda_N} \prod_{n=1}^N \beta_n^{\sigma_n\sigma_{n+1}} \zeta^{\sigma_n} \\
& = \sum_{\sigma \in \Lambda_N} \prod_{n=1}^N \beta_n^{\sigma_n\sigma_{n+1}} \zeta^{(\sigma_n+\sigma_{n+1})/2} \\
& = \sum_{\sigma \in \Lambda_N} \prod_{n=1}^N M(\beta_n,\zeta)_{\sigma_n,\sigma_{n+1}}.
\end{align*}
Now split the sum and use the periodic boundary condition again to get
\begin{align*}
{\mathcal{Z}}_N(\zeta)
& = \sum_{\sigma \in \Lambda_N} \prod_{n=1}^N M(\beta_n, \zeta)_{\sigma_n,\sigma_{n+1}} \\
& = \sum_{\sigma_1 \in \Lambda_1} \sum_{(\sigma_2,\ldots,\sigma_N) \in \Lambda_{N-1}} \prod_{n=1}^N M(\beta_n, \zeta)_{\sigma_n,\sigma_{n+1}} \\
& = \sum_{\sigma_1 \in \Lambda_1} [M(\beta_1,\zeta) \cdots M(\beta_N,\zeta)]_{\sigma_1,\sigma_1} \\
& = {\mathrm{Tr}}[M(\beta_1, \zeta) \cdots M(\beta_N, \zeta)].
\end{align*}
The result follows by noting that $\beta_n>0$, so $M$ is symmetric and thus one can reverse the order of the factors by taking the transpose.
\end{proof}
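As a quick sanity check of this identity, one may compare the brute-force spin sum with the transfer-matrix trace for a short chain. The following sketch (ours, assuming NumPy; the coupling values are arbitrary) does exactly that.
\begin{verbatim}
# Illustrative check (assuming NumPy): brute-force partition sum versus the
# transfer-matrix trace, for a short periodic chain.
import itertools
import numpy as np

def M(beta, zeta):
    return np.array([[beta * zeta, 1 / beta], [1 / beta, beta / zeta]])

def Z_bruteforce(betas, zeta):
    N, total = len(betas), 0.0
    for sigma in itertools.product([1, -1], repeat=N):
        term = 1.0
        for n in range(N):                     # periodic boundary condition
            term *= betas[n] ** (sigma[n] * sigma[(n + 1) % N]) * zeta ** sigma[n]
        total += term
    return total

def Z_transfer(betas, zeta):
    A = np.eye(2)
    for b in betas:                            # builds M(beta_N) ... M(beta_1)
        A = M(b, zeta) @ A
    return np.trace(A)

betas = [np.exp(p) for p in (2 / 3, 1 / 100, 2 / 3, 1 / 100)]
print(Z_bruteforce(betas, 1.7), Z_transfer(betas, 1.7))   # the two values agree
\end{verbatim}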
With Proposition~\ref{prop:isingTMs} proved, let us now connect back to CMV matrices, by way of the Szeg\H{o} transfer matrices introduced in \eqref{eq:Salphaz:def}.
\begin{prop} \label{prop:IsingzerosToDN}
With notation as in Proposition~\ref{prop:isingTMs}, the zeros of ${\mathcal{Z}}_N(\zeta)$ are the same as the zeros of $D_N(\zeta^2)$, where
\begin{equation}
D_N(z) := {\mathrm{Tr}}\left[S(\beta_N^{-2},z) \cdots S(\beta_1^{-2},z)\right].
\end{equation}
Equivalently, the zeros of ${\mathcal{Z}}_N(\zeta)$ coincide with those of $\widetilde D_N(\zeta)$, where
\begin{equation}
\widetilde D_N(z) := {\mathrm{Tr}}\left[S(\beta_N^{-2},z)S(0,z)S(\beta_{N-1}^{-2},z)S(0,z)\cdots S(\beta_1^{-2},z) S(0,z) \right].
\end{equation}
\end{prop}
\begin{proof}
For $\zeta \neq 0$ and $\beta>1$, notice that
\begin{align*}
\begin{bmatrix} -1 & 0 \\ 0 & \zeta\end{bmatrix}
M(\beta,\zeta)
\begin{bmatrix} -1 & 0 \\ 0 & 1/\zeta\end{bmatrix}
& = \begin{bmatrix} -1 & 0 \\ 0 & \zeta\end{bmatrix}
\begin{bmatrix} \beta\zeta & \frac{1}{\beta} \\[2mm] \frac{1}{\beta} & \frac{\beta}{\zeta}\end{bmatrix}
\begin{bmatrix} -1 & 0 \\ 0 & 1/\zeta\end{bmatrix} \\
& = \begin{bmatrix} \beta\zeta & -\frac{1}{\beta\zeta} \\[2mm] -\frac{\zeta}{\beta} & \frac{\beta}{\zeta}\end{bmatrix} \\
& = \frac{\beta}{\zeta} \begin{bmatrix} \zeta^2 & -\frac{1}{\beta^2} \\[2mm] -\frac{\zeta^2}{\beta^2} & 1\end{bmatrix} \\
& = \frac{\beta}{\zeta}\sqrt{1-\beta^{-4}}\,S(\beta^{-2},\zeta^2).
\end{align*}
Thus, $S(\beta_N^{-2}, \zeta^2) \cdots S(\beta_1^{-2}, \zeta^2)$ is similar to a nonzero multiple of $M(\beta_N,\zeta)\cdots M(\beta_1,\zeta)$ and the first claim follows from Proposition~\ref{prop:isingTMs}. The second claim follows from noting that
\[S(\alpha,z) S(0,z)= S(\alpha,z^2)\]
for any $\alpha \in {\mathbb{D}}$, $z \in {\mathbb{C}}$.
\end{proof}
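\begin{remark}
Keeping track of the scalar factors in the preceding computation gives a slightly more explicit form of the first claim: since conjugation by $\begin{bmatrix} -1 & 0 \\ 0 & \zeta \end{bmatrix}$ does not change the trace, one has
\[
{\mathcal{Z}}_N(\zeta) = \left[\prod_{n=1}^N \frac{\beta_n}{\zeta}\sqrt{1-\beta_n^{-4}}\,\right] D_N(\zeta^2), \quad \zeta \neq 0,
\]
and the prefactor never vanishes because $\beta_n>1$ for every $n$. In particular, the coincidence of zeros is immediate from this identity.
\end{remark}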
\section{From Szeg\H{o} to Schwartzman \`a la Geronimo and Johnson} \label{sec:Sz2Schwartman}
In the previous section, we saw how to relate the partition functions of a ferromagnetic Ising model to discriminants associated with a family of periodic CMV matrices. Next, we want to explain how to relate zeros of the discriminant to the rotation number of the Szeg\H{o} cocycle and hence to the range of the Schwartzman homomorphism. Throughout this section we fix an ergodic topological dynamical system $(\Omega,T,\mu)$, a continuous $f:\Omega \to {\mathbb{D}}$, and let $\{{\mathcal{E}}(\omega)\}$ denote the associated family of CMV matrices defined by \eqref{eq:ergCMVAlphaNDef}. We assume $\supp\mu = \Omega$.
By general arguments, there exists a compact set $\Sigma \subseteq \partial {\mathbb{D}}$ such that $\sigma({\mathcal{E}}(\omega))=\Sigma$ for $\mu$-a.e.\ $\omega \in \Omega$. Moreover, $\sigma({\mathcal{E}}(\omega))=\Sigma$ for any $\omega$ with a dense $T$-orbit.
Given this setup, we define
\[A_z(\omega) = z^{-1}S(f(T\omega),z)S(f(\omega),z), \quad \omega \in \Omega, \ z \in {\mathbb{C}} \setminus\{0\},\]
where $S$ is given by \eqref{eq:Salphaz:def}. We can then characterize the almost-sure spectrum $\Sigma$ as the complement of the set where $(T^2,A_z)$ is uniformly hyperbolic.
\begin{theorem}[Johnson's Theorem for CMV matrices] \label{t:cmv:johnson}
Assume $(\Omega,T,\mu)$ is an ergodic topological dynamical system such that $\supp \mu = \Omega$, $f \in C(\Omega,{\mathbb{D}})$, and let $\Sigma$ denote the almost-sure spectrum associated with the family $\{{\mathcal{E}}(\omega)\}_{\omega \in \Omega}$. We have
\begin{equation}
\partial {\mathbb{D}} \setminus \Sigma = {\calU\calH}:=\{z \in \partial {\mathbb{D}} : (T^2,A_z) \text{ is uniformly hyperbolic}\}
\end{equation}
\end{theorem}
See \cite{DFLY2016DCDS} for additional details and a proof. One major application of Theorem~\ref{t:cmv:johnson} is the gap labelling theorem for ergodic CMV matrices, which we formulate presently.
\begin{theorem} \label{t:CMVgl}
Let $(\Omega,T,\mu)$ denote an ergodic topological dynamical system such that $\Omega = \supp \mu$, $f \in C(\Omega,{\mathbb{D}})$, and $\{{\mathcal{E}}(\omega)\}_{\omega \in \Omega}$ the associated ergodic family of extended CMV matrices. Let $\kappa$ and $\Sigma$ denote the density of states measure and almost-sure spectrum associated with this family. For any $z_1,z_2 \in \partial {\mathbb{D}} \setminus \Sigma$, one has
\begin{equation}
\kappa([z_1,z_2]) \in {\mathfrak{A}}(\Omega,T,\mu),
\end{equation}
where $[z_1,z_2]$ denotes the closed arc from $z_1$ to $z_2$ in the counterclockwise direction.
\end{theorem}
\begin{proof}
This result is a consequence of \cite{GeronimoJohnson1996JDE} and \cite{Simon2005:OPUC1}. Let $\rho$ denote the rotation number associated with the family $\{{\mathcal{E}}(\omega)\}$, as defined in \cite[Section~4]{GeronimoJohnson1996JDE}; notice that this comes with a factor $\frac{1}{2}$ (cf.~\cite[Eq.~(4.9)]{GeronimoJohnson1996JDE}). On the one hand, \cite[Theorem~5.6]{GeronimoJohnson1996JDE} asserts that $\rho$ takes values in ($2\pi$ times) the Schwartzman group in the gaps of $\Sigma$ (that is, on connected components of $\partial {\mathbb{D}} \setminus \Sigma$). Notice that the Schwartzman group in \cite{GeronimoJohnson1996JDE} differs from ours by a factor of $2\pi$; compare the first displayed equation on \cite[p.~171]{GeronimoJohnson1996JDE}. On the other hand, $\rho$ is related to the Lyapunov exponent via \cite[Theorem~4.7]{GeronimoJohnson1996JDE}. Namely, there is an analytic function $w(z)$ such that the boundary values of ${\mathrm{Re}}\, w$ give the Lyapunov exponent and the boundary values of ${\mathrm{Im}}\, w$ give $\rho$. By combining this with \cite[Theorems~10.5.8 and~10.5.21]{Simon2005:OPUC2}, we are done.
\end{proof}
The main result follows by combining all of these pieces.
\begin{proof}[Proof of Theorem~\ref{t:isingGL}]
This follows by combining Proposition~\ref{prop:DOSviaFloquet}, Proposition~\ref{prop:IsingzerosToDN} and Theorem~\ref{t:CMVgl}. More specifically, Propositions~\ref{prop:DOSviaFloquet} and \ref{prop:IsingzerosToDN} show that the left-hand side of \eqref{eq:isingGL} is $\kappa([z_1,z_2])$ and Theorem~\ref{t:CMVgl} shows that this quantity belongs to ${\mathfrak{A}}$.
\end{proof}
\section{A Gallery of Lee--Yang and CMV Zeros} \label{sec:gallery}
Let us conclude with a discussion of several examples, including some plots of the distributions of the relevant zeros. By combining Propositions~\ref{prop:IsingzerosToDN} and~\ref{prop:DNzerosAreEigs}, the zeros of ${\mathcal{Z}}_N$ may be computed by finding eigenvalues of ${\mathcal{F}}_{N}(\pi/2)$, where ${\mathcal{F}}_{N}$ denotes the Floquet matrices associated with a suitable extended CMV matrix, which is the approach employed in the numerics below.
Before embarking on these computations, we note one potential opportunity for acceleration of the
eigenvalue calculation for large-scale problems. The corner entries in~\eqref{eq:floqCMVperGeq6}
cause the matrices ${\mathcal{F}}_N(\theta)$ to have full bandwidth.
In the case of Jacobi matrices, a simple reordering described in~\cite{PEF2015IEOT} results
in Hermitian matrices of bandwidth~5, allowing for efficient numerical eigenvalue computations.
An analogous reordering is possible here, though the CMV structure makes this reordering
a bit more intricate; the result is a matrix having bandwidth~9.
Such structure is less clearly exploitable in non-Hermitian eigenvalue computations,
but QR-related algorithms for CMV matrices (see~\cite{BE91,VW12}) could
potentially be adapted and extended to this case.
We consider a CMV matrix of even period $N$, and define a permutation $p:\{1,\ldots,N\} \to \{1,\ldots,N\}$ by
\begin{equation}
p(j) =
\begin{cases}
2j-1, & \mbox{$j$ odd and $1 \le j < N/2$;} \\
2N+1-2j, & \mbox{$j$ odd and $N/2 < j < N$;}\\
2j, & \mbox{$j$ even and $1 < j \le N/2$;} \\
2N+2-2j, & \mbox{$j$ even and $N/2 < j \leq N$.}
\end{cases}
\end{equation}
Let $P$ denote the associated permutation matrix, which is given by $Pe_j = e_{p(j)}$. Equivalently,
\begin{equation}
P_{i,j} = \delta_{i,p(j)}
\end{equation}
As the following proposition shows, the bandwidth of the reordered matrix $P{\mathcal{F}}_N(\theta)P^*$ is at most $9$.
\begin{prop} \label{prop:bandwidth}
Suppose $N \geq 2$ is even. The bandwidth of $\widetilde{{\mathcal{F}}}_N(\theta) := P{\mathcal{F}}_N(\theta)P^*$ is at most $9$. More precisely,
\begin{equation}
\langle e_j, \widetilde {\mathcal{F}}_N(\theta)e_k \rangle = 0
\end{equation}
whenever $|j-k|>4$.
\end{prop}
\begin{proof}
If $N=2$ or $4$, the claim holds vacuously, so assume $N \geq 6$.
Write $d_N(j,k) = \min\{|j-k|,N-|j-k|\}$, and notice that $\langle e_j, {\mathcal{F}}_N(\theta)e_k\rangle=0$ if $d_N(j,k)>2$. Since
\begin{equation}
\langle e_j,\widetilde{{\mathcal{F}}}_N(\theta)e_k \rangle =
\langle e_{p^{-1}(j)}, {\mathcal{F}}_N(\theta)e_{p^{-1}(k)} \rangle,
\end{equation}
it suffices to demonstrate
\begin{equation}\label{eq:bandwidthgoal}
|j-k|>4 \implies d_N(p^{-1}(j),p^{-1}(k))>2.
\end{equation}
Equivalently, we may show
\begin{equation}\label{eq:bandwidthgoal2}
d_N(\ell,m)\leq 2 \implies |p(\ell) - p(m)| \leq 4.
\end{equation}
It is straightforward (albeit a little tedious) to verify \eqref{eq:bandwidthgoal2} from the definition of $p$. Indeed, if $\ell = m+1$, we have (using $N+1=1$)
\[p(m+1) - p(m) = \begin{cases}
2(m+1)-(2m-1) = 3, & \mbox{$m$ odd and $m<N/2$;} \\
2(m+1)-1-2m = 1, & \mbox{$m$ even and $m<N/2$;} \\
2N+2-2(m+1) - (2N+1-2m) = -1, & \mbox{$m$ odd and $N/2<m<N$;} \\
2N+1-2(m+1) - (2N+2-2m) = -3, & \mbox{$m$ even and $N/2<m<N$;} \\
1-2 = -1, & \mbox{$m=N$.} \end{cases}\]
The computations are similar, but slightly more laborious, for $p(m+2)-p(m)$.
\end{proof}
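The key implication \eqref{eq:bandwidthgoal2} can also be confirmed numerically. The following small sketch (ours, plain Python) does so for $N=24$, the case shown in the figures below.
\begin{verbatim}
# Self-contained check (plain Python) of the implication behind Proposition
# prop:bandwidth for N = 24: whenever d_N(l, m) <= 2, the reordered indices
# satisfy |p(l) - p(m)| <= 4.
def p(j, N):
    if j % 2 == 1:
        return 2 * j - 1 if j < N // 2 else 2 * N + 1 - 2 * j
    return 2 * j if j <= N // 2 else 2 * N + 2 - 2 * j

N = 24
d = lambda l, m: min(abs(l - m), N - abs(l - m))
print(sorted(p(j, N) for j in range(1, N + 1)) == list(range(1, N + 1)))   # p is a bijection
print(max(abs(p(l, N) - p(m, N))
          for l in range(1, N + 1)
          for m in range(1, N + 1) if d(l, m) <= 2))                        # prints 4
\end{verbatim}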
For inspiration and context, here is a picture of the permutation when $N=24$.
\begin{center}
\includegraphics[scale=.5]{IMAGES/cmv_graph_24}
\qquad
\includegraphics[scale=.5]{IMAGES/cmv_graph_24_ro}
\end{center}
The graph on the left shows the adjacency relations for ${\mathcal{F}}_N(\theta)$, having an edge from $i$ to $j$ if $i \neq j$ and $[{\mathcal{F}}_N(\theta)]_{ij} \neq 0$ (for a generic CMV matrix). The graph on the right shows the same scheme for $\widetilde{{\mathcal{F}}}_N(\theta)$. From this perspective, the conclusion of Proposition~\ref{prop:bandwidth} is easy to check visually: one simply verifies that the indices of connected nodes can differ by no more than four in the adjacency graph for $\widetilde{{\mathcal{F}}}_N(\theta)$.
The figures below show the corresponding nonzero pattern of the matrix ${\mathcal{F}}_N(\theta)$ (left) and its reordered version $\widetilde{{\mathcal{F}}}_N(\theta)$ (right).
\begin{center}
\includegraphics[width=2in]{IMAGES/cmv_spy_24}
\qquad\qquad
\includegraphics[width=2in]{IMAGES/cmv_spy_24_ro}
\end{center}
Let us conclude with some plots of zeros and numerical approximations of the density of zeros. We begin with the Fibonacci case, which supplied the original motivation for the present work.
\begin{example}[Fibonacci]
Consider an alphabet ${\mathcal{A}} = \{{\mathsf{a}},{\mathsf{b}}\}$ with two letters, and let ${\mathcal{A}}^*$ denote the free monoid over ${\mathcal{A}}$ (that is, the set of finite words written with the letters in ${\mathcal{A}}$). The \emph{Fibonacci substitution} is defined by $S({\mathsf{a}}) = {\mathsf{a}}{\mathsf{b}}$ and $S({\mathsf{b}}) = {\mathsf{a}}$, and extended to ${\mathcal{A}}^*$ by concatenation. Thus, beginning with the seed $u_0={\mathsf{a}}$, one forms the sequence $u_k = S^k(u_0)$:
\begin{align*}
u_1 & = {\mathsf{a}}{\mathsf{b}} \\
u_2 & = {\mathsf{a}}{\mathsf{b}}{\mathsf{a}} \\
u_3 & = {\mathsf{a}}{\mathsf{b}}{\mathsf{a}}{\mathsf{a}}{\mathsf{b}} \\
u_4 & = {\mathsf{a}}{\mathsf{b}}{\mathsf{a}}{\mathsf{a}}{\mathsf{b}}{\mathsf{a}}{\mathsf{b}}{\mathsf{a}},
\end{align*}
and so on. As one can see, the initial letters stabilize once they appear, so one can define the word $u_\infty = \lim_{k \to \infty} u_k = {\mathsf{a}}{\mathsf{b}}{\mathsf{a}}{\mathsf{a}}{\mathsf{b}}{\mathsf{a}}{\mathsf{b}}{\mathsf{a}}\ldots$, where the limit may be understood in the sense of the product topology on ${\mathcal{A}}^{\mathbb{N}}$ (and in which ${\mathcal{A}}$ has the discrete topology).
One can specify a ferromagnetic Ising model by choosing $p_{\mathsf{a}}, p_{\mathsf{b}}>0$ and defining the sequence of normalized magnetic couplings via
\begin{equation}
p_n = \begin{cases} p_{\mathsf{a}}, & u_\infty(n)={\mathsf{a}};\\ p_{\mathsf{b}}, & u_\infty(n)={\mathsf{b}}. \end{cases}
\end{equation}
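For concreteness, the substitution sequence and the resulting couplings are easy to generate; the following sketch (ours, plain Python) produces $u_{10}$ together with the corresponding couplings for the parameter values used in the plots below.
\begin{verbatim}
# Illustrative sketch (plain Python): iterate the Fibonacci substitution
# a -> ab, b -> a and read off the couplings p_n.
def fibonacci_word(k, seed="a"):
    w = seed
    for _ in range(k):
        w = "".join("ab" if c == "a" else "a" for c in w)
    return w

p_a, p_b = 2 / 3, 1 / 100                # the values used in the plots below
u = fibonacci_word(10)                   # u_10, of length F_12 = 144
p = [p_a if c == "a" else p_b for c in u]
print(u[:13], len(u))                    # abaababaabaab 144
\end{verbatim}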
There are several equivalent ways to imbed this example into an ergodic context. The \emph{Fibonacci subshift}, $\Omega_{\rm F}\subseteq {\mathcal{A}}^{\mathbb{Z}}$, is the set of all sequences whose local structure coincides with that of $u_\infty$. More precisely, if $v =v_1\cdots v_\ell \in {\mathcal{A}}^*$ and $u$ is a finite word or infinite sequence, we write $v\triangleleft u$ if for some $j$, $v = u(j)u(j+1) \cdots u(j+\ell-1)$ (and we say $v$ is a \emph{subword} of $u$). One then defines
\begin{equation}
\Omega_{\rm F} = \{ \omega = (\omega_n)_{n\in{\mathbb{Z}}} : \forall \ell \in {\mathbb{N}}, \ n \in {\mathbb{Z}}, \omega_n \cdots \omega_{n+\ell-1} \triangleleft u_\infty\}.
\end{equation}
One can check that $\Omega_{\rm F}$ is a compact subset of ${\mathcal{A}}^{\mathbb{Z}}$ that is invariant under the action of the shift $[T\omega]_n = \omega_{n+1}$. It is furthermore known that $(\Omega_{\rm F},T)$ enjoys a unique $T$-invariant measure $\mu$ satisfying $\supp\mu = \Omega_{\rm F}$.
For this system, the set of labels can be computed explicitly:
\begin{equation}
{\mathfrak{A}}(\Omega_{\rm F}, T, \mu) = {\mathbb{Z}} + \alpha {\mathbb{Z}} = \{n+m\alpha : n,m \in {\mathbb{Z}}\},
\end{equation}
where $\alpha = (\sqrt{5}-1)/2$ denotes the inverse of the golden mean; see, e.g., \cite{BelBovGhe1992, DFGap} for details.
Let us show some plots for this model. Following~\cite{BaaGriPis1995JSP}, we take $p_{\mathsf{a}} = 2/3$ and $p_{\mathsf{b}} = 1/100$. First, we show the zeros of the partition function for the Ising model corresponding to $n=10$ and $n=17$ iterations of the Fibonacci substitution.
(The $n=10$ plot replicates the analogous plot in~\cite[Fig.~1]{BaaGriPis1995JSP}.)
\begin{center}
\includegraphics[scale=.45]{IMAGES/bgp_zeros10}
\qquad
\includegraphics[scale=.45]{IMAGES/bgp_zeros17}
\end{center}
Next, let us inspect the corresponding integrated density of states (IDS), plotted as a function of $\theta = -i\log z/(2\pi)$, that is, we compute $\kappa([1,e^{2\pi i \theta}])$ for $\theta \in [0,1]$.
(The IDS for $n=10$ is shown in~\cite[Fig.~2]{BaaGriPis1995JSP}.)
The fractal nature of the distribution of the zeros becomes more apparent from this perspective, and indeed can be seen to be a consequence of the Cantor structure of the spectrum of the CMV operator having coefficients generated by the Fibonacci sequence \cite{DamMunYes2013JSP}. It is known that the density of states measure assigns no weight to gaps of the spectrum, so each flat portion in the graph of the IDS corresponds to a gap in the spectrum. The height of the graph of the IDS in the gap then corresponds to the gap label.
\begin{center}
\includegraphics[scale=.45]{IMAGES/bgp_ids10}
\begin{picture}(0,0)
\put(-184.75,72){\includegraphics[scale=0.18]{IMAGES/bgp_ids_zoom10}} \end{picture}
\qquad
\includegraphics[scale=.45]{IMAGES/bgp_ids17}
\begin{picture}(0,0)
\put(-184.75,71.75){\includegraphics[scale=0.18]{IMAGES/bgp_ids_zoom17}}
\end{picture}
\end{center}
To get another perspective on the structure of gaps, let us look at the distribution of gap lengths. The histograms below show the distribution of the lengths of the gaps between successive zeros.
\begin{center}
\includegraphics[scale=.45]{IMAGES/bgp_gaps_10}
\quad
\includegraphics[scale=.45]{IMAGES/bgp_gaps_17}
\end{center}
\end{example}
\begin{example}[General Subshifts]
The previous example is a special case of a general type of dynamical system, called a subshift.
To formulate the general setting, consider a finite set ${\mathcal{A}}$ (the \emph{alphabet}) with the discrete topology, and ${\mathbb{X}} = {\mathcal{A}}^{\mathbb{Z}}$ with the product topology. This topology makes ${\mathbb{X}}$ a compact metrizable space with, e.g.,
\begin{equation}
d(\omega,\omega') = 2^{-\min\{|n|: \omega_n \neq \omega'_n\}}, \quad \omega \neq \omega'
\end{equation}
an example of a metric giving the topology on ${\mathbb{X}}$.\ \ The \emph{shift} on ${\mathbb{X}}$ is given by $[Tx]_n = x_{n+1}$ for $x \in {\mathbb{X}}$.\ \ A \emph{subshift} is any $T$-invariant compact subset of ${\mathbb{X}}$.\ \ If $\Omega \subseteq {\mathbb{X}}$ is a subshift, one abuses notation and writes $T$ for the restriction of the shift to $\Omega$.\ \ If $\mu$ is a $T$-ergodic measure on $\Omega$, it is known that
\begin{equation}
{\mathfrak{A}}(\Omega,T,\mu) = \set{\int f \, d\mu : f \in C(\Omega,{\mathbb{Z}})}.
\end{equation}
Equivalently, ${\mathfrak{A}}(\Omega,T,\mu)$ can be characterized by measures of cylinder sets. More precisely, given a word $u = u_1 \cdots u_n \in {\mathcal{A}}^*$, the associated cylinder set is
\[\Xi_u = \{\omega \in \Omega : \omega_j = u_j, \ \forall 1 \le j \le n\}.\]
Then ${\mathfrak{A}}(\Omega,T,\mu)$ is precisely the group generated by $\{\mu(\Xi_u) : u \in {\mathcal{A}}^*\}$.
One particularly interesting class of subshifts is supplied by so-called subshifts of finite type. Given a matrix $M \in {\mathbb{R}}^{{\mathcal{A}} \times {\mathcal{A}}}$ such that $M_{{\mathsf{a}},{\mathsf{b}}} \in \{0,1\}$ for every ${\mathsf{a}}, {\mathsf{b}} \in{\mathcal{A}}$, the associated \emph{subshift of finite type} is given by
\begin{equation}
\Omega_M = \{\omega \in {\mathcal{A}}^{\mathbb{Z}} : M_{\omega_n,\omega_{n+1}}=1 \ \forall n \in {\mathbb{Z}}\}.
\end{equation}
One also assumes that $M$ is \emph{primitive} in the sense that for some $n \in {\mathbb{N}}$, $(M^n)_{{\mathsf{a}},{\mathsf{b}}}>0$ for all ${\mathsf{a}},{\mathsf{b}} \in {\mathcal{A}}$.
There is a plethora of invariant measures on $\Omega_M$. Suppose $P \in {\mathbb{R}}^{{\mathcal{A}} \times {\mathcal{A}}}$ is such that every entry of $P$ is nonnegative, $P_{{\mathsf{a}}, {\mathsf{b}}} >0 \iff M_{{\mathsf{a}},{\mathsf{b}}} = 1$, and
\[\sum_{{\mathsf{b}} \in {\mathcal{A}}} P_{{\mathsf{a}},{\mathsf{b}}}=1, \quad \forall {\mathsf{a}} \in {\mathcal{A}}.\]
By the primitivity assumption on $M$ and the Perron--Frobenius theorem, there is a unique invariant probability vector for $P$, that is, a vector $(p_{\mathsf{a}})_{{\mathsf{a}} \in {\mathcal{A}}}$ with nonnegative entries summing to one such that
\begin{equation}
\sum_{{\mathsf{a}} \in {\mathcal{A}}}p_{\mathsf{a}} P_{{\mathsf{a}},{\mathsf{b}}} = p_{\mathsf{b}}, \quad \forall {\mathsf{b}} \in {\mathcal{A}}.
\end{equation}
The induced Markov measure $\mu$ on $\Omega_M$ is determined by its values on cylinder sets: for $u = u_1 \cdots u_n$,
\begin{equation} \label{eq:muXiu}
\mu(\Xi_u) = p_{u_1}\prod_{j=1}^{n-1}P_{u_j,u_{j+1}}.
\end{equation}
In view of the previous discussion, one can use this to compute ${\mathfrak{A}}(\Omega_M,T,\mu)$ in terms of the entries of $P$ and $p$; namely, ${\mathfrak{A}}(\Omega_M,T,\mu)$ is the ${\mathbb{Z}}$-module generated by the numbers in \eqref{eq:muXiu}.
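To illustrate the last two paragraphs, the following short Python sketch computes a stationary probability vector and the cylinder measures \eqref{eq:muXiu}; the $2\times 2$ matrix is a hypothetical toy example (the golden-mean shift, in which the word ${\mathsf{b}}{\mathsf{b}}$ is forbidden), not one used elsewhere in this paper.
\begin{verbatim}
import numpy as np
from itertools import product

def stationary_vector(P):
    # left Perron--Frobenius eigenvector of the row-stochastic matrix P,
    # normalized to a probability vector
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

def cylinder_measure(u, p, P):
    # u is a word, given as a tuple of letter indices
    m = p[u[0]]
    for a, b in zip(u, u[1:]):
        m *= P[a, b]
    return m

# toy example: golden-mean shift on two letters (the word bb is forbidden)
P = np.array([[0.5, 0.5],
              [1.0, 0.0]])
p = stationary_vector(P)                 # (2/3, 1/3)
generators = sorted({cylinder_measure(u, p, P)
                     for n in range(1, 4)
                     for u in product(range(2), repeat=n)
                     if cylinder_measure(u, p, P) > 0})
\end{verbatim}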
\end{example}
\begin{example}[Cat Map]
We begin with the base space $\Omega = {\mathbb{T}}^2 := {\mathbb{R}}^2/{\mathbb{Z}}^2$. The \emph{cat map} is the transformation $T = T_{\rm cat}:\Omega \to \Omega$ given by
\begin{equation}
T_{\rm cat}(x,y) = (2x+y,x+y), \quad (x,y) \in {\mathbb{T}}^2.
\end{equation}
This example is known to have many invariant measures. One can check that $\mu = \mathrm{Leb}$, the normalized Lebesgue measure on ${\mathbb{T}}^2$, is $T_{\rm cat}$-ergodic. It was shown in \cite{DFGap} that
\begin{equation}
{\mathfrak{A}}(\Omega,T_{\rm cat}, \mu) = {\mathbb{Z}}.
\end{equation}
For the sampling function, we take $f(x,y) = 1/2 + \cos(2\pi y)/3$. The corresponding Verblunsky coefficients are
\begin{equation}
\alpha_n(x,y)= f(T^n(x,y)), \quad (x,y) \in {\mathbb{T}}^2.
\end{equation}
In view of the relationship we have discussed previously, this corresponds to an Ising model with couplings
\[p_n = p_n(x,y) = - \frac{1}{2}\log f(T^n(x,y)). \]
By induction, one can check that
\[T_{\rm cat}^n(x,y) = (F_{2n+1}x+F_{2n}y, F_{2n}x+F_{2n-1}y),\]
where $F_n$ denotes the $n$th Fibonacci number, normalized by $F_0=0$, $F_1=1$, and $F_{n+1}= F_n + F_{n-1}$ for $n \geq 1$.
For the cat map, we produce plots similar to those from previous examples.
We note that the numerical calculations in the illustrations that follow require
more care than might first be apparent. The rapid growth of the entries in $T_{\rm cat}^n(x,y)$
means that, in standard double-precision floating point arithmetic~\cite{Ove01},
the argument $y$ in $\cos(2\pi y)$ in the calculation of $\alpha_n(x,y)$ lacks
sufficient precision for the Verblunsky coefficients to be computed accurately.
Indeed, double precision calculations produce $\alpha_n(x,y)$ with errors of $O(1)$ when $n\ge 40$,
rendering subsequent numerically computed eigenvalue statistics essentially meaningless.
To avoid this pitfall, we compute these coefficients using high-precision arithmetic in Mathematica,
then render them in full double-precision accuracy for the subsequent eigenvalue calculation.
(We use MATLAB's standard dense nonsymmetric eigensolver {\tt eig} to compute the
eigenvalues of the unitary matrix ${\mathcal{F}}_N(\pi/2)$.)
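The following Python sketch (using the mpmath library; it mirrors, but is not identical to, the Mathematica computation just described) shows the high-precision step.
\begin{verbatim}
from mpmath import mp, mpf, cos, pi, sqrt, frac

def cat_verblunsky(n_max, digits=200):
    # alpha_n = f(T_cat^n(x, y)) with f(x, y) = 1/2 + cos(2 pi y)/3, computed with
    # `digits` decimal digits so that the rapidly growing Fibonacci coefficients
    # do not destroy the accuracy of the fractional part.
    mp.dps = digits
    x, y = 1 / sqrt(2), 1 / sqrt(3)     # starting point used for the plots below
    a, b = 1, 1                          # (F_1, F_2), kept as exact integers
    alphas = []
    for n in range(1, n_max + 1):
        # second coordinate of T_cat^n(x, y): F_{2n} x + F_{2n-1} y (mod 1)
        y_n = frac(b * x + a * y)
        alphas.append(float(mpf(1) / 2 + cos(2 * pi * y_n) / 3))
        a, b = a + b, a + 2 * b          # advance to (F_{2n+1}, F_{2n+2})
    return alphas
\end{verbatim}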
We start with the zeros, generated with $x=1/\sqrt{2}$ and $y = 1/\sqrt{3}$.
In the plots below, the red arc $[e^{-i \phi}, e^{i \phi}]$
for $\phi = 2 \sin^{-1}(1/6)$ denotes an inner bound on the spectral gap proved in \cite{DFLY2015IMRN}.
\begin{center}
\includegraphics[scale=.45]{IMAGES/cat_zeros_newg_400}
\quad
\includegraphics[scale=.45]{IMAGES/cat_zeros_newg_3200}
\end{center}
As the reader can see, the zeros densely fill the circle, which is expected given results obtained rigorously from the gap labelling theorem. Below, we show the IDS as a function of $\theta = -i \log z / 2\pi$ as before.
\begin{center}
\includegraphics[scale=.45]{IMAGES/cat_ids_newg_400}
\quad
\includegraphics[scale=.45]{IMAGES/cat_ids_newg_3200}
\end{center}
As one can see, the graph comports with the general picture of an absence of gaps, since it appears (as it must) that the IDS is everywhere increasing on a suitable arc.
Finally, we conclude with the distribution of gap lengths.
\begin{center}
\includegraphics[scale=.45]{IMAGES/cat_gaps_newg_400}
\quad
\includegraphics[scale=.45]{IMAGES/cat_gaps_newg_3200}
\end{center}
Here, one observes something rather curious: the lengths of the gaps seem to be more uniform than one might expect. Concretely, from existing work on Schr\"odinger operators \cite{BourgainSchlag2000CMP}, one would expect CMV matrices with coefficients generated by the cat map to exhibit Anderson localization, that is, pure point spectrum with exponentially decaying eigenfunctions. Then, based on the same reasoning that one pursues for Schr\"odinger operators, one would expect the distribution of eigenvalues to exhibit less repulsion.
Let us note that the cat map has a natural Markov partition and hence is a factor of a subshift of finite type in a natural way. Concretely, taking ${\mathcal{A}} = \{1,2,3,4,5\}$ and
\begin{equation}
M_{\rm cat} = \begin{bmatrix} 1 & 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 \end{bmatrix},
\end{equation}
there is a continuous factor map $\Phi:\Omega_M \to {\mathbb{T}}^2$ such that $\Phi \circ T = T_{\rm cat}\circ \Phi$, where $T$ denotes the shift on $\Omega_M$.
See Brin--Stuck~\cite[pp.~135--137]{BrinStuck2015Book} or Katok--Hasselblatt~\cite[Section~20]{KatokHassel1995Book} for more discussion and details.
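As a quick sanity check (illustration only), one can verify numerically that $M_{\rm cat}$ is primitive, so the associated subshift of finite type indeed carries fully supported Markov measures as in the previous example.
\begin{verbatim}
import numpy as np

M_cat = np.array([[1, 0, 1, 1, 0],
                  [1, 0, 1, 1, 0],
                  [1, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1],
                  [0, 1, 0, 0, 1]])
# M_cat^2 already has strictly positive entries, so M_cat is primitive
print(np.all(np.linalg.matrix_power(M_cat, 2) > 0))   # True
\end{verbatim}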
What is notable about this example is the disparity between the label sets. For the subshift $\Omega_M$ with invariant measure $\mu$, the label set is a dense subgroup of ${\mathbb{R}}$. However, the label set for the cat map itself is ${\mathbb{Z}}$, which leads to the conclusion that the almost-sure spectrum associated with a cat map model is connected.
\end{example}
\begin{example}[Skew Shift]
The base space is ${\mathbb{T}}^2 = {\mathbb{R}}^2/{\mathbb{Z}}^2$ and the transformation $T = T_{\rm ss}$ is given by
\begin{equation}
T_{\rm ss}(x,y) = (x+\gamma,x+y)
\end{equation}
where $\gamma \in {\mathbb{R}}$ is a fixed irrational number. For this example, the label set can be shown to be
\begin{equation} {\mathbb{Z}} + \gamma {\mathbb{Z}} = \{n+m\gamma: n,m \in {\mathbb{Z}}\}, \end{equation}
which is a dense subgroup of ${\mathbb{R}}$.
For the sampling function, we take $f(x,y) = 1/2+ \cos(2\pi y)/3$ as before. The Verblunsky coefficients are
\begin{equation}
\alpha_n(x,y)= f(T^n(x,y)), \quad (x,y) \in {\mathbb{T}}^2.
\end{equation}
As before, one can use induction to write $T_{\rm ss}^n$ explicitly for $n \in {\mathbb{N}}$ as
\begin{equation}
T_{\rm ss}^n(x,y)= \left(x+n\gamma, y+nx+\frac{n(n-1)}{2}\gamma\right).
\end{equation}
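Indeed, the induction step is immediate from the definition:
\[
T_{\rm ss}\Big(x+n\gamma,\; y+nx+\tfrac{n(n-1)}{2}\gamma\Big)
= \Big(x+(n+1)\gamma,\; y+(n+1)x+\tfrac{n(n+1)}{2}\gamma\Big),
\]
since the new second coordinate is the sum of the two old coordinates and $\tfrac{n(n-1)}{2}\gamma + n\gamma = \tfrac{n(n+1)}{2}\gamma$.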
Taking $(x,y) = (\gamma/2,0)$ for the starting point leads to
\begin{equation}
\alpha_n(\gamma/2,0) = 1/2+ \cos(n^2 \pi \gamma)/3, \quad n \in {\mathbb{N}}.
\end{equation}
As in the case of the cat map, we show the zeros for truncations of order $N=400$ and $N=3200$ for the case $\gamma = 1/\sqrt{2}$, together with a red arc showing an inner bound on the main spectral gap about $z=1$ (indeed the same inner bound as in the cat map example). As before, we also show the integrated density of states and a histogram showing the distribution of gap lengths.
\begin{center}
\includegraphics[scale=.45]{IMAGES/skew_zeros_newg_400}
\quad
\includegraphics[scale=.45]{IMAGES/skew_zeros_newg_3200}
\end{center}
\begin{center}
\includegraphics[scale=.45]{IMAGES/skew_ids_newg_400}
\quad
\includegraphics[scale=.45]{IMAGES/skew_ids_newg_3200}
\end{center}
\begin{center}
\includegraphics[scale=.45]{IMAGES/skew_gaps_newg_400}
\quad
\includegraphics[scale=.45]{IMAGES/skew_gaps_newg_3200}
\end{center}
While the application to Ising necessitates choosing strictly positive Verblunsky coefficients, it is also of interest to look at sign-indefinite models from the CMV perspective. A notably interesting example is given by using the sampling function $f(x,y) = \lambda \cos(2\pi y)$ for some $0<\lambda<1$. The corresponding figures appear below.
(Both skew shift examples give a numerically computed gap of width zero for $N=3200$, not shown on the histograms.)
\begin{center}
\includegraphics[scale=.45]{IMAGES/skew_zeros_400}
\quad
\includegraphics[scale=.45]{IMAGES/skew_zeros_3200}
\end{center}
\begin{center}
\includegraphics[scale=.45]{IMAGES/skew_ids_400}
\quad
\includegraphics[scale=.45]{IMAGES/skew_ids_3200}
\end{center}
\begin{center}
\includegraphics[scale=.45]{IMAGES/skew_gaps_400}
\quad
\includegraphics[scale=.45]{IMAGES/skew_gaps_3200}
\end{center}
\end{example}
\begin{example}[Unitary Almost-Mathieu Operator]
We conclude with the unitary almost-Mathieu operator, which was investigated in \cite{CFO, FOZ}. The coefficients are given by choosing $\gamma$ irrational and constants $0<\lambda_1,\lambda_2<1$, and defining
\begin{equation}
\alpha_{2n}(x) = \sqrt{1-\lambda_2^2}, \quad \alpha_{2n-1}(x) = \lambda_1\cos(2\pi(n\gamma+x))
\end{equation}
for $x \in {\mathbb{T}}$.
The plots below use $\lambda_1 = 9/10$ and $\lambda_2 = \gamma = 1/\sqrt{2}$.
\begin{center}
\includegraphics[scale=.45]{IMAGES/uamo_zeros_400}
\quad
\includegraphics[scale=.45]{IMAGES/uamo_zeros_3200}
\end{center}
\begin{center}
\includegraphics[scale=.45]{IMAGES/uamo_ids_400}
\begin{picture}(0,0)
\put(-188,72){\includegraphics[scale=0.18]{IMAGES/uamo_ids_zoom_400}}
\end{picture}
\qquad
\includegraphics[scale=.45]{IMAGES/uamo_ids_3200}
\begin{picture}(0,0)
\put(-188,71.75){\includegraphics[scale=0.18]{IMAGES/uamo_ids_zoom_3200}}
\end{picture}
\end{center}
\begin{center}
\includegraphics[scale=.45]{IMAGES/uamo_gaps_400}
\quad
\includegraphics[scale=.45]{IMAGES/uamo_gaps_3200}
\end{center}
\end{example}
These plots suggest many interesting problems. We hope this work inspires some readers to study these questions in more detail, with the goal of proving some rigorous results. For instance, it would be very interesting to confirm that the spectra of the skew-shift models proposed above are connected subsets of the circle.
\end{document}
\begin{document}
\title{Slopes of eigencurves over boundary disks}
\author{Daqing Wan}
\address{Daqing Wan,
University of California at Irvine,
Department of Mathematics, 340 Rowland Hall, Irvine, CA 92697, U.S.A.}
\email{[email protected]}
\author{Liang Xiao}
\address{Liang Xiao,
University of Connecticut, Department of Mathematics, 196 Auditorium Road, Unit 3009, Storrs, CT 06269--3009, U.S.A.}
\email{[email protected]}
\author{Jun Zhang}
\address{Jun Zhang, School of Mathematical Sciences, Capital Normal University, Beijing 100048, P.R. China.}
\email{[email protected]}
\date{\today}
\begin{abstract}
Let $p$ be a prime number.
We study the slopes of $U_p$-eigenvalues on the subspace of modular forms that can be transferred to a definite quaternion algebra.
We give a sharp lower bound of the corresponding Newton polygon.
The computation takes place over a definite quaternion algebra via the Jacquet--Langlands correspondence; it generalizes a prior work of Daniel Jacobs \cite{jacobs}, who treated the case $p=3$ with a particular level.
In the case when the modular forms have a finite character of conductor highly divisible by $p$, we improve the lower bound to show that the slopes of $U_p$-eigenvalues grow roughly like arithmetic progressions as the weight $k$ increases. This provides the first strong evidence, valid for arbitrary $p$ and general level, for Buzzard--Kilford's conjecture on the behavior of the eigencurve near the boundary of the weight space.
We also give an exact formula for a fraction of the slope sequence.
\end{abstract}
\subjclass[2010]{11F33 (primary), 11F85 (secondary).}
\keywords{Eigencurves, slope of $U_p$-operators, quaternionic automorphic forms, overconvergent modular forms, Gouv\^ea--Mazur Conjecture, Gouv\^ea's conjecture on slopes}
\maketitle
\setcounter{tocdepth}{1}
\tableofcontents
\section{Introduction}
Let $p$ be a fixed prime number which we assume to be odd for simplicity in this introduction.
For $N$ a positive integer (the ``tame level") coprime to $p$, $k+1 \geq 2$ an integer (the ``weight")\footnote{For the subject we study, writing $k+1$ for the weight will simplify the presentation.}, $m$ a positive integer, and $\psi$ a character of $(\mathbb{Z}/p^m\mathbb{Z})^\times$, we use $S_{k+1}(\Gamma_0(p^mN); \psi)$ to denote the space of modular cuspforms of weight $k+1$, level $p^mN$, and nebentypus character $\psi$ over some finite extension $E$ of $\mathbb{Q}_p$.
This space comes equipped with the action of Hecke operators, most importantly the action of the $U_p$-operator.
It is a central question in the theory of $p$-adic modular forms to understand the distributions of the ``slopes", namely, the $p$-adic valuations of the eigenvalues of $U_p$ acting on $S_{k+1}(\Gamma_0(p^mN); \psi)$, as the weight $k+1$ varies.
\emph{All $p$-adic valuations or norms in this paper are normalized so that $p$ has valuation $1$ and norm $p^{-1}$.}
One of the most interesting expectations concerns the case when the nebentypus character $\psi$ has exact conductor $p^m$ for $m \geq 2$, i.e. $\psi$ does not factor through a character on $(\mathbb{Z}/p^{m-1}\mathbb{Z})^\times$.
Let $\omega: (\mathbb{Z}/p\mathbb{Z})^\times \to \mathbb{Z}_p^\times$ denote the Teichm\"uller character.
The following question was asked by Coleman and Mazur \cite{coleman-mazur} and later elaborated by Buzzard and Kilford \cite{buzzard-kilford}.
\begin{conjecture}
\label{Conj:weak-buzzard-kilford}
Fix an integer $N$ coprime to $p$ and a character $\psi_0$ of $(\mathbb{Z}/p\mathbb{Z})^\times$ such that $\psi_0(-1) =-1$.
Then
there exists a non-decreasing sequence of rational numbers $a_1, a_2, \dots$ tending to infinity such that
\begin{itemize}
\item
for any integers $m\geq 2$, $k+1 \geq 2$, and any character $\psi$ of $(\mathbb{Z}/p^m\mathbb{Z})^\times$ of exact conductor $p^m$ such that $\psi|_{(\mathbb{Z}/p\mathbb{Z})^\times}\cdot \omega^k = \psi_0$, the slopes of $U_p$ acting on $S_{k+1}(\Gamma_0(p^mN); \psi)$
are given by the first few terms of the sequence
\[
a_1/p^m,\ a_2 / p^m,\ \dots,
\]
consisting of all terms strictly less than $k$ together with, possibly, some terms equal to $k$.
\end{itemize}
Moreover, the sequence $a_1, a_2, \dots$ is a union of finitely many arithmetic progressions.
\end{conjecture}
There have been many direct computations supporting this Conjecture in special cases, first by Buzzard and Kilford \cite{buzzard-kilford} (extending the work of Emerton \cite{emerton}) in the case when $p=2$ and $N=1$\footnote{We earlier excluded the case of $p=2$ for simplicity of presentation; but a slight modification allows us to include this case, as we will do for the rest of the paper.}, then in many similar particular cases with small primes $p$ and small levels; see
\cite{roe, kilford, kilford-mcmurdy, jacobs}.
Nonetheless, this Conjecture was never recorded in the literature, for lack of theoretical or heuristic evidence.
The goal of this paper is to provide positive evidence in the general case.
\subsection{The geometry of the eigencurve}
Before proceeding, we explain the meaning of Conjecture~\ref{Conj:weak-buzzard-kilford} in terms of the geometry of the eigencurve.
Eigencurves were introduced by Coleman and Mazur \cite{coleman-mazur} to $p$-adically interpolate modular eigenforms of different weights.
Here the notion of weights is generalized to mean a continuous character of $\mathbb{Z}_p^\times$; for example, $x \mapsto x^k \psi(x)$ corresponds to the case of classical weight ${k+1}$ with nebentypus character $\psi$.
In the loosest terms, the eigencurve
is a rigid analytic closed subscheme of the product of the weight space and $\mathbb{G}_m$, defined as the Zariski closure of the set of pairs $(x^k\psi(x), a_p(f))$ for each eigenform $f$ of weight ${k+1}$ and nebentypus character $\psi$ with $U_p$-eigenvalue $a_p(f)$.
In particular, its fiber over the point $x^k\psi(x)$ of the weight space parametrizes the $U_p$-eigenvalues on the space of modular forms $S_{k+1}(\Gamma_0(p^mN); \psi)$ and the overconvergent ones.
The eigencurve plays a crucial role and has many applications in modern $p$-adic number theory; to name one: Kisin's proof of the Fontaine--Mazur conjecture \cite{kisin}.
Despite the many arithmetic applications, the geometry of the eigencurve was poorly understood for a long time.
For example, the properness of the eigencurve was not known until the very recent work of Diao and Liu \cite{diao-liu}.
Conjecture~\ref{Conj:weak-buzzard-kilford} and this paper focus on another intriguing property: the behavior of the eigencurve near the boundary of the weight space.
The striking computation of Buzzard and Kilford \cite{buzzard-kilford} mentioned above shows that, when $p=2$ and $N=1$, the Coleman-Mazur eigencurve, when restricted over the boundary annulus of the weight space, is an infinite disjoint union of copies of this annulus.
This is a family and a much stronger version of Conjecture~\ref{Conj:weak-buzzard-kilford}; see Conjecture~\ref{Conj:CM-conj} for the precise expectation.
Generalizing this result would have many number-theoretic applications. For example, in \cite{pottharst-xiao}, the second author and Pottharst reduced the parity conjecture for Selmer ranks of modular forms to this precise statement.
\subsection{Main result of this paper}
\label{SS:main result}
For the sake of presentation, we assume that there exists a prime number $\ell$ such that $\ell|| N$.
We only consider the subspace of modular forms which are $\ell$-new, denoted by a superscript $\ell\textrm{-}\mathrm{new}$, e.g. $S_{k+1}(\Gamma_0(p^mN); \psi)^{\ell \textrm{-}\mathrm{new}}$.
This is the subspace of modular forms that can be identified, via the Jacquet--Langlands correspondence, with the automorphic forms on a definite quaternion algebra $D$ which ramifies at $\ell$ and $\infty$.
The following lower bound of the Newton polygon of the $U_p$-action on $S_{k+1}(\Gamma_0(p^mN); \psi)^{\ell \textrm{-}\mathrm{new}}$ might be known among some experts.\footnote{We think that Buzzard probably has an unpublished note on certain version of this theorem; see \cite{buzzard-slope}.}
\begin{thm}
\label{T:theorem A}
Assume that the conductor of $\psi$ is exactly $p^m$. (By our later convention, this will include the case when $\psi$ is trivial and $m=1$.)
Let $t$ denote $\dim S_2(\Gamma_0(p^mN); \psi)^{\ell \textrm{-}\mathrm{new}}$ so that $\dim S_{k+1}(\Gamma_0(p^mN); \psi)^{\ell \textrm{-}\mathrm{new}} = kt$.
Then the Newton polygon of the $U_p$-action on $S_{k+1}(\Gamma_0(p^mN); \psi)^{\ell \textrm{-}\mathrm{new}}$ lies above the polygon with vertices
\[
(0,0), (t,0), (2t, t), \dots, (nt, \tfrac{n(n-1)}{2}t), \dots
\]
\end{thm}
The complete proof is given in Theorem~\ref{T:weak Hodge polygon}. Note that the lower bound is independent of $k$, and thus uniform in $k$.
A similar uniform quadratic lower bound of Newton polygon was obtained by the first named author in \cite{wan} using a variant of Dwork's trace formula.
Our lower bound here is very sharp: the distance between the end point of the Newton polygon of $U_p$ acting on $S_{k+1}(\Gamma_0(p^mN); \psi)^{\ell \textrm{-}\mathrm{new}}$ and our lower bound is \emph{linear} in $k$.
When the character $\psi$ is trivial, Theorem~\ref{T:theorem A} gives some theoretic evidence of a conjecture of Gouv\^ea on the distributions of slopes. But the method presented here is not enough to prove this conjecture of Gouv\^ea. We refer to Remarks~\ref{R:improve Hodge bound classical forms} and \ref{R:heuristic gouvea} for related discussions.
We also point out that Theorem~\ref{T:theorem A} may suggest a very effective way to compute the eigencurve using definite quaternion algebras; the statement implies that the computation converges very well, comparable to the prevailing method of modular symbols.
The proof of Theorem~\ref{T:theorem A} (and the proof of the subsequent theorems in this paper) uses Jacquet--Langlands correspondence to transfer all information into automorphic forms for a definite quaternion algebra.
The advantage of working with definite quaternion algebra is its simpler geometry compared to the modular curves.
The theory of overconvergent automorphic forms on a definite quaternion algebra d'apr\`es Buzzard \cite{buzzard} comes equipped with a nice integral basis.
Our computation essentially reproduces Jacobs' thesis \cite{jacobs}, except that we take a more theoretical, as opposed to computational, approach.
The real improvement over Jacobs' work is that, when the conductor $p^m$ of $\psi$ is large (e.g. $m \geq 4$), we can improve the lower bound above so that it \emph{agrees} with the Newton polygon (in the overconvergent setting) at infinitely many points which form an arithmetic progression. This gives the following
\begin{thm}
\label{T:theorem B}
Keep the notation as in Theorem~\ref{T:theorem A} and assume that $m \geq 4$.
Let $a_0(k) \leq a_1(k) \leq \dots \leq a_{kt-1}(k)$ denote the slopes of the $U_p$-action on $S_{k+1}(\Gamma_0(p^mN); \psi)^{\ell \textrm{-}\mathrm{new}}$, in non-decreasing order (with multiplicity).
Then we have
\[
\lfloor \tfrac nt\rfloor \leq a_n(k)\leq \lfloor \tfrac nt\rfloor +1.
\]
\end{thm}
This is proved in Theorem~\ref{T:sharp Hodge bound}. Note that the inequality of the slopes does not depend on the weight $k+1$.
In fact, we prove a family version of such inequality which gives rise to a decomposition (Theorem~\ref{T:improved main theorem}) of the eigencurve over the disks $\mathcal{W}(x\psi, p^{-1})$ of radius $p^{-1}$ centered around the character $x\psi$, just as in Buzzard--Kilford \cite{buzzard-kilford}. Unfortunately, we cannot extend this result to the entire weight annulus of radius $p^{-1/p^{m-2}(p-1)}$ which contains $x\psi$.\footnote{In recent joint work of the first two authors and Ruochuan Liu \cite{liu-wan-xiao}, we extend this result to the entire boundary of the weight space, through using a different basis for the overconvergent automorphic forms. Many ideas of \cite{liu-wan-xiao} are taken from this paper.}
The main idea of the proof consists of two major inputs: (1) We show that there is a natural isomorphism
\begin{equation}
\label{E:overconvergent = classical}
S^{D, \dagger}(U; \kappa) \cong \widehat\bigoplus_{n=0}^\infty S_2^D(U; \psi \omega^{-2n}) \otimes (\omega^n \circ \det),
\end{equation}
such that the $U_p$-action on the left hand side is ``approximately" the action of $\bigoplus_{n\geq 0} (p^n \cdot U_p)$ on the right hand side.
Here the letter $U$ is the corresponding level structure which looks like $\Gamma_0(p^m)$ at $p$;
$S^{D, \dagger}(U; \kappa)$
stands for the space of overconvergent automorphic forms over a definite quaternion algebra $D$ with weight character $\kappa$ living in $\mathcal{W}(x\psi, p^{-1})$;
the right hand side is the completed direct sum of \emph{classical} automorphic forms over $D$ of \emph{weight $2$} with characters $\psi \omega^{-2n}$, \emph{twisted by the character $\omega^n \circ \det$}.
It thus follows that the $U_p$-slopes on $S^{D, \dagger}(U; \kappa)$ are approximately determined by the $U_p$-slopes on these spaces of classical forms of weight $2$.
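Schematically, if the approximation in \eqref{E:overconvergent = classical} were an exact equality of $U_p$-modules, the multiset of $U_p$-slopes on $S^{D, \dagger}(U; \kappa)$ would simply be the union over $n \geq 0$ of
\[
\big\{\, n + \alpha \;:\; \alpha \textrm{ a } U_p\textrm{-slope on } S_2^D(U; \psi \omega^{-2n}) \,\big\},
\]
since multiplying $U_p$ by $p^n$ shifts every slope by $n$; the substance of the argument is to control the error terms well enough for this heuristic to persist.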
(2) To carry out the approximation in \eqref{E:overconvergent = classical}, it is important to show that the slopes of the \emph{Hodge polygon} of the $U_p$-action on each $S_2^D(U; \psi\omega^{-2n})$ are between $0$ and $1$. Here the Hodge polygon
of the matrix for the $U_p$-action
refers to the convex hull of points given by the minimal $p$-adic valuation of the minors of the matrix.
To prove this key result, we make use of (in the definite quaternion situation) the Atkin--Lehner map (see \ref{S:Atkin-Lehner})
\[
\mathrm{AL}_{\psi_m}: S_2^D(U; \psi_m) \longrightarrow S_2^D(U; \psi_m^{-1}) \otimes (\psi_m \circ \det)
\]
and the fact that $U_p\circ\mathrm{AL}_{\psi_m}\circ U_p(\varphi) = p \cdot \mathrm{AL}_{\psi_m}\circ S_p(\varphi)$, where $S_p$ is the unramified central character action at $p$.
In fact, we also need a certain deformed version of this map in order to improve the result from the open disks of radius $p^{-1}$ to the closed disks of the same radius. This small improvement is essential to Theorem~\ref{T:theorem B}.
We refer to Section~\ref{Section:improve Hodge polygon} for details.
We also point out that the condition $m\geq 4$ is currently an unfortunate technical condition.
See Remark~\ref{R:m=3} for the discussion in the case when $m=3$.
A consequence of the proof of Theorem~\ref{T:theorem B} is that we can in fact show that some of the slopes indeed form arithmetic progressions.
\begin{thm}
\label{T:theorem C}
Keep the notation as in Theorem~\ref{T:theorem B}.
Fix $r \in \{0,1, \dots, \frac{p-3}{2}\}$.
Let $\NP_r(i)$ and $\HP_r(i)$ denote the Newton polygon and Hodge polygon functions for the $U_p$-action on $S_2(\Gamma_0(p^mN); \psi \omega^{-2r})^{\ell \textrm{-}\mathrm{new}}$.
Suppose that $(s_0, \NP_r(s_0))$ is a vertex of the Newton polygon $\NP_r$ and suppose that
\[
\NP_r(s) < \HP_r(s-1) + 1 \textrm{ for all }s =1, \dots, s_0.
\]
Then for any $s = 0,1, \dots, s_0$, the following subsequence
\[
a_{s+rt}(k),\ a_{s+rt+\frac{p-1}{2}t}(k),\ \dots, \ a_{s+ rt+ i \frac{p-1}2t}(k),\ \dots
\]
is independent of the positive integer $k$ whenever $k \equiv 2r+1 \bmod{p-1}$ (and whenever it makes sense) and it forms an arithmetic progression with common difference $\frac{p-1}{2}$.
\end{thm}
This is proved in Corollary~\ref{C:precise NP computation}.
\emph{Note that the common difference for the arithmetic progression is $\frac{p-1}{2}$ but not $1$}.
This is due to the periodic appearance of the powers of Teichm\"uller characters in \eqref{E:overconvergent = classical}.
In fact, this (larger) common difference agrees with the computation of Kilford \cite{kilford} and Kilford--McMurdy \cite{kilford-mcmurdy} in the case $m=2$, where the common difference is $2$ when $p=5$ and is $\frac 32$ (which can be further separated into two arithmetic progressions with common difference $3$) when $p=7$.
The power of Theorem~\ref{T:theorem C}
is limited by how close the Hodge polygon is to the Newton polygon.
In particular, as $N$ and $m$ get bigger, the gap between the Newton and Hodge polygons will be inevitably widened, and hence $s_0$ is relatively small compared to $t$.
One remedy we propose is to ``decompose" the space of (overconvergent) modular forms according to the associated residual Galois (pseudo-)representations.\footnote{Galois pseudo-representations are equivalent to semisimple Galois representations. Since we are really using the tame Hecke eigenvalues, we prefer to use the concept of pseudo-representations.}
\begin{thm}
\label{T:theorem D}
Let $\bar \rho_1, \dots, \bar \rho_d$ be the residual Galois pseudo-representations appearing as the pseudo-representations attached to the eigenforms in $S_2^D(U; \psi \omega^{-2r})$ for some $r =0,1, \dots, \frac{p-3}{2}$.
Then we have a natural decomposition of (overconvergent) automorphic forms:
\[
S^{D, \dagger}(U; \kappa) = \bigoplus_{j=1}^d S^{D, \dagger}(U; \kappa)_{\bar \rho_j} \quad \textrm{and}\quad
S_{k+1}^{D}(U; \psi) = \bigoplus_{j=1}^d S_{k+1}^{D}(U; \psi)_{\bar \rho_j}
\]
for all weights $k+1$.
Moreover, Theorem~\ref{T:theorem C} holds for each individual $S_2^D(U, \psi \omega^{-2r})_{\bar \rho_j}$.
\end{thm}
This is proved in Theorem~\ref{T:main theorem each residual}. The idea behind this theorem is that the isomorphism \eqref{E:overconvergent = classical} is also approximately equivariant for the tame Hecke actions. One can certainly decompose the right hand side of \eqref{E:overconvergent = classical} according to the reductions of the associated Galois (pseudo-)representations; the isomorphism \eqref{E:overconvergent = classical} allows us, to some extent, to transfer the decomposition to the space of overconvergent automorphic forms. The error terms can be killed by taking the limit of repeated $p$-powers of the approximate projectors on the space of overconvergent automorphic forms.
We believe that the decomposition by Galois pseudo-representations has its own interest; for example, it gives a natural decomposition of the eigencurve according to the residual Galois pseudo-representations.
Our decomposition is given in a reasonably explicit way on the Banach space of overconvergent automorphic forms and we have a good ``model" of each factor.
So the decomposition of the eigencurve over disks of radius $p^{-1}$ centered around $x \psi(x)$ applies to the piece corresponding to each Galois pseudo-representation.
\subsection{Relation with later works}
Recently, the first two authors and R. Liu \cite{liu-wan-xiao} proved many cases of Conjecture~\ref{Conj:CM-conj} of Coleman--Mazur and Buzzard--Kilford. The method is very similar to that of this paper, but makes use of a different basis for automorphic forms.
\subsection{Structure of the paper}
\label{S:structure of paper}
We first briefly recall the construction of eigencurves in Section~\ref{Section:CM eigencurve} as well as the conjecture of Coleman--Mazur and Buzzard--Kilford.
Section~\ref{Section:automorphic forms} sets up basic notations for classical and overconvergent automorphic forms for a definite quaternion algebra.
Section~\ref{Section:computation of Up} gives the most fundamental computation of the infinite matrix for the $U_p$-action on the space of overconvergent automorphic forms.
In particular, Theorem~\ref{T:theorem A} is proved here.
The theoretical computation is complemented by a concrete example which we present in Section~\ref{Section:explicit example}; this was previously studied by Jacobs \cite{jacobs}, whose treatment relies heavily on computer computation, but it is made much more accessible here as a by-hand computation. We hope this explicit example can inspire the readers to seek new ideas.
After this, we study the Atkin--Lehner involution in Section~\ref{Section:improve Hodge polygon} and prove Theorems~\ref{T:theorem B} and \ref{T:theorem C} at the end of the section.
Section~\ref{Section:separation residue} is devoted to separating the eigencurve according to residual Galois pseudo-representations.
Theorem~\ref{T:theorem D} is proved at the end of Section~\ref{Section:separation residue}.
\subsection*{Acknowledgments}
We are grateful to Frank Calegari, Matthew Emerton, Chan-Ho Kim, and Xinyi Yuan for many useful discussions. We thank the anonymous referee for carefully reading the paper and for suggestive comments.
We thank Chris Davis and Hui June Zhu for their interests.
We thank {\tt Sage notebook} and {\tt lmfdb.org} for providing numerical input in the course of this research.
The first author is partially supported by a Simons Fellowship.
The second author is partially supported by Simons Collaboration Grant \#278433, NSF Grant DMS--1502147, and a CORCL research grant from the University of California, Irvine.
The third author is supported by the Beijing outstanding talent training program (\#2014000020124G140); he would also like to thank the University of California, Irvine for its hospitality during his visit.
\subsection*{Unconventional use of notations}
We list a few unconventional uses of notation.
\begin{itemize}
\item
The conductor of a trivial character of $\mathbb{Z}_p^\times$ is $p$ as opposed to $1$.
\item
We use $k+1$, as opposed to $k$, for the weight of modular forms.
Related to this, the right action appearing in the definition of automorphic forms on definite quaternion algebra uses a slightly different normalization; see \eqref{E:right action}.
\item
Although the Hecke actions seem to come from certain right actions on the Tate algebras, we still view them as left actions. Therefore, we exclusively work with column vectors. We will try to clarify this in the context (e.g. Proposition~\ref{P:generating series}).
\item
All row and column indices of a matrix start with $0$ as opposed to $1$; this will be extremely useful when considering infinite matrices later.
\end{itemize}
\section{Coleman--Mazur eigencurves}
\label{Section:CM eigencurve}
\subsection{Weight space}
We fix a prime number $p$.
We write $\Gamma=\mathbb{Z}_p^\times$ as $\Delta \times \Gamma_0$, where $\Gamma_0 = (1+2p\mathbb{Z}_p)^\times\cong \mathbb{Z}_p$ (identified via the map $x \mapsto \frac 1{2p}\log(x) = \frac 1{2p}\big( (x-1) - \frac{(x-1)^2}2+ \cdots \big)$) and $\Delta = (\mathbb{Z}_p/2p\mathbb{Z}_p)^\times$ is isomorphic to $\mathbb{Z}/(p-1)\mathbb{Z}$ if $p\geq 3$, and $\mathbb{Z}/2\mathbb{Z}$ if $p=2$.
We choose the topological generator $\gamma_0$ of $ \Gamma_0$ to be the element $\exp(2p) \in \Gamma_0 \subseteq \mathbb{Z}_p^\times$.
We use $\Lambda = \mathbb{Z}_p\llbracket \Gamma\rrbracket$ and $\Lambda_0
= \mathbb{Z}_p\llbracket\Gamma_0\rrbracket$ to denote the Iwasawa algebras.
In particular, we have $\Lambda \cong \Lambda _0 \otimes_{\mathbb{Z}_p} \mathbb{Z}_p[\Delta]$. For an element $\gamma \in \Gamma$, we use $[\gamma]$ to denote its image in the Iwasawa algebra $\Lambda$.
The chosen $\gamma_0$ defines an isomorphism $\mathbb{Z}_p\llbracket T\rrbracket \simeq \Lambda _0 $ given by $T \mapsto [\gamma_0] -1$.
The weight space is defined to be $\mathcal{W}: = \Max(\Lambda[\frac 1p])$, the rigid analytic space associated to the formal scheme $\Spf(\Lambda)$; it is a disjoint union of $\#\Delta$ copies of the open unit disk.
The natural projection
\[
\mathcal{W} \cong \Max(\Lambda_0 \otimes_{\mathbb{Z}_p} \mathbb{Q}_p[\Delta]) \to \Max(\Lambda_0[\tfrac1p]) \simeq \Max(\mathbb{Z}_p\llbracket T\rrbracket[\tfrac 1p])
\]
gives each point on $\mathcal{W}$ a \emph{$T$-coordinate}.
The weight space $\mathcal{W}$ may be viewed as the universal space for continuous characters of $\Gamma$. More precisely, a continuous character $\mathbf{k}appa:\Gamma \to \mathcal{O}_{\mathbb{C}_p}^\times$ gives rise to a continuous homomorphism $\mathbf{k}appa: \Lambda = \mathbb{Z}_p\llbracket \Gamma\rrbracket \to \mathcal{O}_{\mathbb{C}_p}$ and hence defines a point, still denoted by $\mathbf{k}appa$, on the weight space $\mathcal{W}$.
The $T$-coordinate of the point $\mathbf{k}appa$ is
$
T_\mathbf{k}appa = \mathbf{k}appa(\gamma_0) -1.
$
We point out that, the $T$-coordinate of a point of $\mathcal{W}$ depends on the choice of the topological generator $\gamma_0$, but its $p$-adic valuation does not.
\begin{example}
\label{Ex: characters on weight space}
For $k \in \mathbb{Z}$, the character $x^k: \Gamma \to \mathbb{Z}_p^\times$ sending $a$ to $a^k$ has $T$-coordinate $T_{x^k} = \exp(2kp) - 1$. We observe that $|\exp(2kp)-1| = p^{-v_p(2kp)}$; in other words, these types of points are very close to the centers of the weight disks.
Let $\psi_m: \Gamma \to (\mathbb{Z}_p / p^{m} \mathbb{Z}_p)^\times \to \mathcal{O}_{\mathbb{C}_p}^\times$ denote a finite continuous character which does not factor through $(\mathbb{Z}_p / p^{m'} \mathbb{Z}_p)^\times$ for any smaller \emph{positive} integer $m' < m$ ($m \geq 2$ if $p=2$). We say that $\psi_m$ has \emph{conductor $p^m$}, ignoring the prime-to-$p$ part of the conductor.
In particular, a trivial character has conductor $p$ (or $4$ if $p=2$); so $m=1$ (or $m=2$ if $p=2$).
When $m \geq 2$ and $p>2$, $\psi_m(\gamma_0)$ is a primitive $p^{m-1}$-th root of unity $\zeta_{p^{m-1}}$. Thus, the point $x^k \psi_m$ has $T$-coordinate $\zeta_{p^{m-1}}\exp(2pk) -1$, which has norm $ p^{-1/p^{m-2}(p-1)}$ (independent of $k$).
So these points tend towards the boundary of the weight disks as $m$ increases; \emph{but stay in the same ``rim" as $k$ varies, and accumulate as $k$ becomes more congruent modulo powers of $p$.}
We call characters $x^k\psi_m$ with $k \geq 1$ \emph{classical characters}. (Our weight will always be $k+1$ from now on.)
We use $\omega: \Delta \to \mathbb{Z}_p^\times$ to denote the Teichm\"uller character. We use $\langle\cdot \rangle: \Gamma\to \mathbb{Z}_p^\times $ to denote the character $x\omega^{-1}$.
\end{example}
\subsection{Coleman--Mazur eigencurve}
\label{S:CM eigencurve}
Instead of working with the usual eigencurves, we shall work with the so-called ``spectral curves"; the main Conjecture~\ref{Conj:CM-conj} is, for a large part, equivalent for these two curves.
We first recall the definition of spectral curves; for details, we refer to \cite[Section~2]{buzzard}.
Suppose that we are given an affinoid algebra $A$\footnote{Typically, $\Max(A)$ is an affinoid subdomain of $\mathcal{W}$.} over $\mathbb{Q}_p$ and a Banach $A$-module $S$ which satisfies Buzzard's property (Pr) (see \cite[after Lemma 2.10]{buzzard}), that is, a Banach $A$-module isomorphic to a direct summand of a Banach $A$-module $P$ which admits a countable orthonormal basis $(e_i)_{i \in \mathbb{N}}$.
Moreover, suppose that we are given a \emph{nuclear} operator $U_p$ on $S$, that is, the uniform limit of a sequence of continuous $A$-linear operators on $S$ whose images are finite $A$-modules.
Then we can extend the action of $U_p$ to the ambient space $P$ by taking the zero action on other direct summands of $P$.
Write $U_p$ as an infinite matrix $M$ with respect to the basis $(e_i)$.
Then the \emph{characteristic power series} of $U_p$ acting on $S$
\[
\Char(U_p; S): = \det (I - XM) = 1+ c_1 X+ c_2 X^2 + \cdots \in A\llbracket X \rrbracket
\]
converges and is independent of the choices of the ambient space $P$ and its basis $(e_i)$.
Moreover, we have $\lim_{n\to \infty} |c_n| r^n =0$ for any $r \in \mathbb{R}^+$.
Consequently, it makes sense to talk about the zero locus of the characteristic power series $\Char(U_p;S)$ in $\Max(A) \times \mathbb{G}_{m, \mathrm{rig}}$, where $X$ is the coordinate of the second factor.
We denote this zero locus by $\Spc :=\Spc(U_p;S)$; it is called the \emph{spectral variety} associated to the Banach module $S$ and the $U_p$-operator.
The natural projection $\mathrm{wt}: \Spc\to \Max(A)$ is called the \emph{weight map}; the map $a_p: \Spc \to \mathbb{G}_{m,\mathrm{rig}}\xrightarrow{x \to x^{-1}}\mathbb{G}_{m, \mathrm{rig}}$ given by the composite of the other natural projection with an inverse map is called the \emph{slope map}.
\[
\xymatrix{
\Spc \ar[d]^{\mathrm{wt}} \ar[r]^-{a_p} &\mathbb{G}_{m,\mathrm{rig}}\\ \Max(A).
}
\]
The weight map is known to be locally finite.
For each closed point $z \in \Spc$, we use $|\mathrm{wt}(z)|$ to denote the absolute value of the $T$-coordinate of $z$ and $|a_p(z)|$ to denote the absolute value of the corresponding point with respect to the natural coordinate on $\mathbb{G}_{m, \mathrm{rig}}$.
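Purely to illustrate the definition on a finite truncation (this sketch plays no role in the arguments of this paper, and the matrix is a toy example), one can compute $\det(I - XM)$ and the $p$-adic valuations of its coefficients, i.e. the data underlying the Newton polygon, as follows.
\begin{verbatim}
import sympy as sp

def char_series_valuations(M, p):
    # det(I - X*M) for a finite matrix M over Q, together with the p-adic
    # valuations of its coefficients 1, c_1, ..., c_n
    n = M.rows
    X = sp.symbols('X')
    charpoly = sp.expand((sp.eye(n) - X * M).det())
    def val(c):
        r = sp.Rational(c)
        if r == 0:
            return sp.oo
        v, num, den = 0, abs(r.p), r.q
        while num % p == 0:
            num //= p; v += 1
        while den % p == 0:
            den //= p; v -= 1
        return v
    return [val(charpoly.coeff(X, i)) for i in range(n + 1)]

# toy example with p = 3
print(char_series_valuations(sp.Matrix([[1, 3], [9, 3]]), 3))   # [0, 0, 1]
\end{verbatim}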
In the case of elliptic modular forms (with level $\Gamma_0(p)$), Coleman and Mazur \cite{coleman-mazur} constructed, for each affinoid subdomain $A$ of the weight space $\mathcal{W}$, a Banach module $M$ consisting of overconvergent cuspidal modular forms of weight in $A$ and of a fixed convergence radius; it carries a natural action of the $U_p$-operator.
This construction was subsequently generalized by Buzzard \cite{buzzard} to allow arbitrary tame level on the modular curve.
Using the construction of the previous paragraph, one can define the spectral curve over $\Max(A)$, which patches together over $\mathcal{W}$ as the subdomain $\Max(A)$ varies.
We do not recall the precise definition here, but refer to \cite{buzzard} for details. However, we shall later encounter a slightly different situation working with definite quaternion algebras. Detailed construction of the corresponding Banach module will be given then.
\subsection{The eigencurve near the boundary of the weight space}
Recall that weight space $\mathcal{W}$ has a natural coordinate $T$.
For $r <1$, we use $\mathcal{W}^{\geq r}$ to denote the sub-annulus of $\mathcal{W}$ where $r \leq |T| <1$, called the \emph{rim} of the weight space (after Mazur).
We are mostly interested in the situation when $r\to 1^-$.
As computed in Example~\ref{Ex: characters on weight space}, all powers $x^k$ of the cyclotomic character are \emph{not} in the rim of the weight space as soon as $r > p^{-1}$.
We put $\Spc^{\geq r}: = \mathrm{wt}^{-1}(\mathcal{W}^{\geq r}).$
The following question was asked by Coleman and Mazur \cite{coleman-mazur}, and later elaborated by Buzzard and Kilford \cite{buzzard-kilford}.\footnote{In the recent preprint of the first two authors and R. Liu \cite{liu-wan-xiao}, we proved many cases of this conjecture.}
\begin{conjecture}
\label{Conj:CM-conj}
When $r$ is sufficiently close to $1$, the following statements hold.
\begin{enumerate}
\item
The space $\Spc^{\geq r}$ is a disjoint union of (countably infinitely many) connected components $X_1, X_2, \dots$ such that the weight map $\mathrm{wt}: X_n \to \mathcal{W}^{\geq r}$ is finite and flat for each $n$.
\item
There exist nonnegative rational numbers $\lambda_1, \lambda_2, \dots \in \mathbb{Q}$ in non-decreasing order and tending to infinity such that, for each $n$ and each point $z \in X_n$, we have
\[
|a_p(z)| = |\mathrm{wt}(z)|^{(p-1)\lambda_n}.
\]
\item
The sequence $\lambda_1, \lambda_2, \dots$ is a disjoint union of finitely many arithmetic progressions, counted with multiplicity (at least when the indices are large enough).
\end{enumerate}
\end{conjecture}
Clearly Conjecture~\ref{Conj:CM-conj} implies Conjecture~\ref{Conj:weak-buzzard-kilford} by specializing to classical weights using Coleman's classicality result \cite{coleman1,coleman2}.
\begin{remark}
\label{R:remark after the conjecture}
Let us give a few pieces of evidence and remarks on Conjecture~\ref{Conj:CM-conj} (as well as Conjecture~\ref{Conj:weak-buzzard-kilford}).
\begin{enumerate}
\item
The novelty of our formulation lies in emphasizing statement (3) of Conjecture~\ref{Conj:CM-conj} as part of the general picture.
In fact, the aim of this paper is to give strong evidence to support this expectation; see in particular, Corollary~\ref{C:precise NP computation} and Theorem~\ref{T:main theorem each residual}(3).
\item Similar properties near the center of the weight space are expected to be false; we refer to \cite{buzzard-calegari2, buzzard-slope, clay, loeffler} for more discussions.\footnote{Very recently, Bergdall and Pollack \cite{bergdall-pollack} gave an interesting conjecture regarding the slopes of the eigencurve at the center of the weight space.} But see also Remark~\ref{R:heuristic gouvea}.
\item
One can reformulate this conjecture for eigencurves instead of spectral curves; the two statements would be essentially equivalent.
\item When $p=2,3$ and the modular curve is taken to be $X_0(p)$, Conjecture~\ref{Conj:CM-conj} is proved using direct computations by Buzzard--Kilford \cite{buzzard-kilford} and Roe \cite{roe}, extending the thesis of Emerton \cite{emerton}.
\item For $p=5,7$, the weaker version Conjecture~\ref{Conj:weak-buzzard-kilford} was verified in some cases by Kilford and McMurdy \cite{kilford, kilford-mcmurdy}.
\item In an analogous situation where the eigencurve associated to the Artin--Schreier--Witt tower of curves is considered, the analogue of Conjecture~\ref{Conj:CM-conj}, in fact over the entire weight space,\footnote{The fact that the analogous statements hold over the entire weight space means that the situation is largely simplified; the method will probably not translate directly to the Coleman--Mazur eigencurve case.} is proved by Davis and the first two authors \cite{davis-wan-xiao}.
Our argument in Section~\ref{Section:improve Hodge polygon} shares some similarities with this approach and is in part inspired by it.
\end{enumerate}
\end{remark}
\begin{remark}
We give our most optimistic expectation of the numerics in Conjecture~\ref{Conj:CM-conj}.
Suppose $ p \geq 3$ for simplicity. First, we expect Conjecture~\ref{Conj:CM-conj} to hold for $r = p^{-1/(p-1)}$ (i.e. the radius for finite characters of conductor $p^2$).\footnote{It is possible that Conjecture~\ref{Conj:CM-conj} holds for even smaller $r$, e.g. $r < p^{-1}$; but we do not have strong evidence either supporting or against this.}
Moreover, we hope to make a guess about the sequence $\lambda_1, \lambda_2, \dots $ in Conjecture~\ref{Conj:CM-conj}.
Assume that the tame level structure is \emph{neat}.
Fix a connected component of the weight disk and fix a finite character $\psi_2$ of conductor $p^2$ so that the character $x \psi_2$ lies in that weight disk.
For $i=0, \dots, \frac{p-3}2$,
consider the action of $U_p$ on the space of cusp forms $S_2(p^2; \psi_2 \omega^{-2i})$ whose tame level is as given and the level at $p$ is $\Gamma_0(p^2)$ with nebentypus character $\psi_2\omega^{-2i}$.
The dimension of such space is denoted by $t$ (which does not depend on $i$). Let $\alpha_1^{(i)}, \dots, \alpha_t^{(i)}$ denote the $p$-adic valuations of the corresponding $U_p$-eigenvalues, counted with multiplicity.
Let $d$ denote the number of cusps of the modular curve with only the tame level, or equivalently the dimension of the weight $2$ Eisenstein series for the tame level.
Then the sequence $\lambda_1,\lambda_2, \dots$ is expected to be the union (rearranged into the non-decreasing order) of exactly the following list of numbers:
\begin{itemize}
\item
the
numbers $1, 2, 3, \dots$ with multiplicity $d$, and
\item
for $i=0, \dots, \frac{p-1}{2}$ and $r = 1, \dots, t$, the numbers
\[
\alpha_r^{(i)} + i,\ \alpha_r^{(i)}+i + \tfrac{p-1}{2},\ \alpha_r^{(i)}+i + (p-1),\ \dots.
\]
\end{itemize}
The former part should be considered as ``contributions from the Eisenstein series" although the overconvergent modular forms are cuspidal; and the latter part is the ``contributions from the cuspidal part", which is a union of arithmetic progressions with common difference $\frac{p-1}{2}$. (The number $\frac{p-1}2$ comes from the cyclic repetition of powers of the Teichm\"uller character.)
Our guess is motivated by the main theorems of this paper and some computations of Kilford and McMurdy \cite{kilford, kilford-mcmurdy}.
\end{remark}
\section{Automorphic forms for a definite quaternion algebra}
\label{Section:automorphic forms}
One of the major technical difficulties, among others, is the poor understanding of the geometry of the modular curves, in explicit coordinates.
To bypass this difficulty, we consider the eigencurve for a definite quaternion algebra; then a $p$-adic family version of Jacquet--Langlands correspondence \cite{chenevier} allows us to recover a big part of Conjecture~\ref{Conj:CM-conj}, from the corresponding statements for the quaternion algebra.
We now recall the definition of the quaternionic eigencurves following \cite{buzzard2,buzzard}.
\subsection{Setup}
\label{S:setup for D}
Let $\mathbb{A}_f$ denote the finite adeles of $\mathbb{Q}$ and $\mathbb{A}_f^{(p)}$ its prime-to-$p$ components.
Let $D$ be a definite quaternion algebra over $\mathbb{Q}$ which splits at $p$; in other words,
$D \otimes_\mathbb{Q} \mathbb{R}$ is isomorphic to the Hamiltonian quaternion and
$D \otimes_{\mathbb{Q}} \mathbb{Q}_p \simeq \mathrm{M}_2(\mathbb{Q}_p)$.
Put $D_f : = D \otimes_\mathbb{Q} \mathbb{A}_f$.
Let $\mathcal{S}$ be a finite set of primes including $p$ and all primes at which $D$ ramifies.
For each prime $l \neq p$, we fix an open compact subgroup $U_l$ of $(D \otimes_\mathbb{Q} \mathbb{Q}_l)^\times$.
For $l \notin \mathcal{S}$, we fix an isomorphism $D \otimes_\mathbb{Q} \mathbb{Q}_l \simeq \mathrm{M}_2(\mathbb{Q}_l)$ and require that $U_l \simeq \GL_2(\mathbb{Z}_l)$ under this identification.
We fix a positive integer $m \in \mathbb{N}$ and consider the Iwahori subgroup
\[
U_0(p^m) =
\begin{pmatrix}
\mathbb{Z}_p^\times & \mathbb{Z}_p \\ p^m \mathbb{Z}_p & \mathbb{Z}_p^\times
\end{pmatrix} \subset \GL_2(\mathbb{Q}_p) \simeq (D \otimes_\mathbb{Q} \mathbb{Q}_p)^\times.
\]
We will later need the monoid
\[
\Sigma_0(p^m): =\Big\{\gamma =\big(
\begin{smallmatrix}
a&b\\c&d
\end{smallmatrix} \big) \in \mathrm{M}_2(\mathbb{Z}_p) \; \Big| \;
p^m|c, \ p\nmid d,\, \det(\gamma) \neq 0 \Big\}.
\]
Finally, we write $U = \prod_{l \neq p} U_l \times U_0(p^m)$ for the product, as an open compact subgroup of $D_f^\times$.
We occasionally use $U_1$ to denote $\prod_{l \neq p} U_l \times \big(\begin{smallmatrix} \mathbb{Z}_p^\times & \mathbb{Z}_p\\ p^m\mathbb{Z}_p & 1+p^m \mathbb{Z}_p \end{smallmatrix}\big)$.
We further assume that $U$ is taken sufficiently small so that
(see \cite[Section~4]{buzzard2})
\begin{equation}
\label{E:buzzard condition}
\textrm{for any }x \in D_f^\times, \textrm{ we have }x^{-1} D^\times x \cap U = \{1\}.
\end{equation}
We fix a finite extension $E$ of $\mathbb{Q}_p$ as the coefficient field, which we will enlarge as needed in the argument.
Let $\mathcal{O}$ denote the valuation ring of $E$ and $\varpi$ a uniformizer. Write $\mathbb{F} = \mathcal{O} / (\varpi)$ for the residue field.
Let $v(\cdot)$ denote the valuation on $E$ normalized so that $v(p) = 1$.
We write $\mathcal{A}^\circ := \mathcal{O} \langle z \rangle$ and $\mathcal{A} = \mathcal{A}^\circ[\frac1p]$ for the Tate algebras.
Put $r_m = p^{-1/p^{m-1}(p-1)}$ if $p>2$ and $r_m = 2^{-1/2^{m-2}}$ if $p=2$.
Let $\mathcal{W}^{< r_m}$ denote the open disks of $\mathcal{W}$ where the $T$-coordinate has absolute value $< r_m$.
Let $\Max(A)$ be an affinoid space over $\mathcal{W}^{< r_m}$. (Typical examples of $\Max(A)$ we consider are either a subdomain or a point.) Let $\kappa: \Gamma \to A^\times$ denote the universal character.
Then $\kappa$ extends to a continuous character
\begin{align}
\label{E:extend kappa A}
\kappa: (\mathbb{Z}_p + p^m \mathcal{A}^\circ)^\times = \mathbb{Z}_p^\times \cdot (1+p^m&\mathcal{A}^\circ)^\times \longrightarrow (A \widehat \otimes \mathcal{A}^\circ)^\times
\\
\nonumber
a\cdot x &\longmapsto \kappa(a) \cdot \kappa(\exp(2p))^{(\log x)/{2p}}.
\end{align}
One checks easily that the condition $|\kappa(\exp(2p))-1|< r_m$ ensures the convergence and the independence of the factorization $a\cdot x$. See e.g. \cite[Section~2.1]{pilloni} for a more optimal convergence condition.
\subsection{Overconvergent automorphic forms}
\label{S:quaternionic forms}
Consider the right action of $\Sigma_0(p^m)$ on $A\widehat\otimes\mathcal{A}$ given by
\begin{equation}
\label{E:right action}
\textrm{for } \gamma = \big(
\begin{smallmatrix}
a&b\\c&d
\end{smallmatrix} \big) \in \Sigma_0(p^m) \textrm{ and }h(z) \in A \widehat \otimes \mathcal{A}, \quad
(h||_{\kappa}\gamma)(z): = \frac{\kappa(cz+d)}{cz+d} h\big( \frac{az+b}{cz+d}\big).\footnote{Our weight normalization is different from \cite{buzzard2, buzzard, jacobs} and most of the literature by using $cz+d$ in the denominator as opposed to $(cz+d)^2$; we will see a small benefit of our choice later in Proposition~\ref{P:generating series}.}
\end{equation}
Note that it is crucial that $p^m |c$ and $d \in \mathbb{Z}_p^\times$ so that $\kappa(cz+d)$ and $(cz+d)^{-1}$ make sense.
We define
the space of \emph{overconvergent automorphic forms of weight $\kappa$ and level $U$} to be
\[
S^{D,\dagger}(U; \kappa): = \Big
\{
\varphi: D^\times_f \to A \widehat \otimes \mathcal{A}\;\Big|\;
\varphi(\delta gu) = \varphi(g)||_{\kappa}u_p, \textrm{ for any }\delta \in D^\times, g \in D_f^\times, u \in U
\Big\},
\]
where $u_p$ is the $p$-component of $u$.
\begin{example}
\label{Ex:classical weight}
When $\kappa = x^k\psi_{m'}: \Gamma \to \mathbb{Q}_p(\zeta_{p^{m'-1}})^\times$ is the continuous character considered in Example~\ref{Ex: characters on weight space},
we can take the definition above for $A = E \supset \mathbb{Q}_p(\zeta_{p^{m'-1}})$ corresponding to the point $\kappa$ on $\mathcal{W}$, which lies
in $\mathcal{W}^{< r_m}$ if $m' \leq m$.
In this case, the right action is given by
\begin{equation}
\label{E:chi action}
(h||_\kappa\gamma)(z)= (cz+d)^{k-1} \psi_{m'}(d) h\big(\frac{az+b}{cz+d} \big).
\end{equation}
The space $S^{D,\dagger}(U; \kappa) = S^{D,\dagger}_{k+1}(U;\psi_{m'})$ is the space of \emph{overconvergent automorphic forms of weight $k+1$, nebentypus character $\psi_{m'}$, and level $U$}.
Moreover, when $k \geq 1$ is a positive integer, we observe that the subspace $L_{k-1}$ of $\mathcal{A}$ consisting of polynomials in $z$ with degree $\leq k-1$ is stable under the action \eqref{E:chi action}; so we can define the space of \emph{classical automorphic forms of weight $k+1$, character $\psi_{m'}$, and level $U$} to be the subspace $S^D_{k+1}(U; \psi_{m'})$ of $S^{D ,\dagger}_{k+1}(U; \psi_{m'})$ consisting of functions $\varphi$ with values in $L_{k-1}$.
In particular, when $k=1$,
\begin{align}
\label{E:S2 classical}
S_2^D(U; \psi_m): = \big\{ \varphi: D^\times_f \to E \, \big| \, \varphi(\delta gu) = \psi_m(d) & \varphi(g) \textrm{ for any }\delta \in D^\times, g \in D^\times_f,
\\
\nonumber
&\textrm{and } u\in U \textrm{ with }u_p = \big(\begin{smallmatrix}
a&b\\c&d
\end{smallmatrix} \big) \big\}.\footnote{By Jacquet--Langlands, constant function on $D^\times \backslash D^\times_f/U$ corresponds to weight two modular forms.}
\end{align}
We occasionally write $S_2^D(U; \psi_m; \mathcal{O})$ for the subspace of functions that take values in $\mathcal{O}$ (as opposed to $E$).
\end{example}
\subsection{Hecke actions}
\label{S:Hecke action}
The space $S^{D,\dagger}(U; \kappa)$ carries actions of Hecke operators, which preserve the subspace of classical automorphic forms $S^D_{k+1}(U; \psi_m)$ when $\kappa = x^k\psi_m$ is given as in Example~\ref{Ex:classical weight}.
Let $l$ be a prime not in $\mathcal{S}$; then $U_l \simeq \GL_2(\mathbb{Z}_l)$.
We write $U_l \big( \begin{smallmatrix}
l&0\\0&1
\end{smallmatrix}\big) U_l = \coprod_{i=0}^{l} U_l w_i$, with $w_i = \big( \begin{smallmatrix}
l&0\\ i&1
\end{smallmatrix}\big)$ for $i =0, \dots, l-1$ and $w_l = \big( \begin{smallmatrix}
1&0\\0&l
\end{smallmatrix}\big)$, viewed as elements in $\GL_2(\mathbb{Q}_l) \simeq D \otimes_\mathbb{Q} \mathbb{Q}_l$.
We define the action of the operator $T_l$ on $S^{D,\dagger}(U; \kappa)$ by
\[
T_l(\varphi) = \sum_{i =0}^l \varphi|_{\kappa}w_i, \quad \textrm{with }(\varphi|_{\kappa}w_i)(g): = \varphi(gw_i^{-1}).\footnote{This looks slightly different from \eqref{E:defn of U_p} below because $||_\kappa w_i$ is trivial as $w_i$ is not in the $p$-component.}
\]
Similarly, we write (note $m \geq 1$)
\begin{equation}
\label{E:Up operator cosets}
U_0(p^m)\big(\begin{smallmatrix}
p&0\\0&1
\end{smallmatrix}\big) U_0(p^m) = \coprod_{i=0}^{p-1}
U_0(p^m) v_i, \quad \textrm{with } v_i = \big(\begin{smallmatrix}
p&0\\ip^{m} &1
\end{smallmatrix}\big).
\end{equation}
Then the action of the operator $U_p$ on $S^{D,\dagger}(U; \kappa)$ is defined to be
\begin{equation}
\label{E:defn of U_p}
U_p(\varphi) = \sum_{i =0}^{p-1} \varphi|_{\kappa}v_i, \quad \textrm{with }(\varphi|_{\kappa}v_i)(g): = \varphi(gv_i^{-1})||_{\kappa}v_i.
\end{equation}
We point out that the definitions of the $U_p$- and $T_l$-operators do not depend on the choices of the double coset representatives $w_i$ and $v_i$; our particular choices, however, will ease the computation.
These $U_p$- and $T_l$-operators are viewed as acting on the space on the left (although the expression seems to suggest a right action); they are pairwise commutative.
\begin{notation}
If an (overconvergent) automorphic form $\varphi$ is a (generalized) eigenvector for the $U_p$-operator, we call the $p$-adic valuation of its (generalized) $U_p$-eigenvalue the \emph{$U_p$-slope} or simply the \emph{slope} of $\varphi$.
By \emph{$U_p$-slopes} on a space of (overconvergent) automorphic forms, we mean the set of slopes of all generalized $U_p$-eigenforms in this space, counted with multiplicity.
\end{notation}
\subsection{Classicality of automorphic forms}
The relation between the classical and the overconvergent automorphic forms in weight $k+1 \geq 2$ can be summarized by the following exact sequence
\[
0 \to S_{k+1}^D(U; \psi_m) \to S_{k+1}^{D, \dagger}(U; \psi_m) \xrightarrow{(\frac{d}{dz})^k} S_{1-k}^{D, \dagger}(U; \psi_m) \to 0,
\]
where the first map is the natural embedding and the second map is given by
\[
\big( \big(\frac{d}{dz} \big)^k (\varphi) \big) (g) : =
\big(\frac{d}{dz} \big)^k \big( \varphi(g)\big).
\]
One checks that $ (\frac{d}{dz} )^k \circ U_p= p^k \cdot U_p \circ (\frac{d}{dz} )^k$ (see \cite[\S 7]{buzzard2}).
As a corollary, all $U_p$-eigenforms of $S_{k+1}^{D, \dagger}(U; \psi_m)$ with slope strictly less than $k$ are classical.
It is also well known that the $U_p$-slopes on $S_{k+1}^D(U; \psi_m)$ are always less than or equal to $k$ by the admissibility of the associated Galois representation.
It follows that the $U_p$-slopes on $S_{k+1}^D(U; \psi_m)$ are exactly the smallest $\dim S_{k+1}^D(U; \psi_m)$\footnote{This number can be expressed in a simple way as in Corollary~\ref{C:dimension formula}.} numbers (counted with multiplicity) in the set of $U_p$-slopes on $S_{k+1}^{D, \dagger}(U; \psi_m)$.
\subsection{Jacquet--Langlands correspondence}
\label{S:jacquet langlands}
We recall a very special case of the classical Jacquet--Langlands correspondence, which was used in the introduction. Let $N$ be a positive integer coprime to $p$.
Assume that there exists a prime number $\ell$ such that $\ell ||N$.
Let $D_{\ell\infty}$ denote the definite quaternion algebra over $\mathbb{Q}$ which ramifies exactly at $\ell$ and $\infty$.
If we take the level structure so that $\mathcal{S}$ is the set of prime factors of $p\ell N$, $U_\ell$ is the maximal open compact subgroup of $(D_{\ell\infty}\otimes\mathbb{Q}_\ell)^\times$, and $U_q = \big( \begin{smallmatrix}
\mathbb{Z}_q^\times & \mathbb{Z}_q \\ N\mathbb{Z}_q & \mathbb{Z}_q^\times
\end{smallmatrix} \big) \subset \GL_2(\mathbb{Q}_q) \simeq (D_{\ell\infty} \otimes \mathbb{Q}_q)^\times$ for a prime $q |N$ but $q \neq \ell , p$, then the Jacquet--Langlands correspondence says that there exists an isomorphism, compatible with the $U_p$-operator and all $T_q$-operators for $q \nmid Np$:
\begin{equation}
\label{E:Jaquet-Langlands classical}
S_{k+1}(\Gamma_0(p^mN); \psi_m)^{\ell\textrm{-}\mathrm{new}} \cong S_{k+1}^{D_{\ell\infty}}(U; \psi_m)
\end{equation}
for all weights $k+1 \geq 2$.
This allows us to translate our results about automorphic forms on definite quaternion algebras to results about modular forms.
One can certainly make variants of this, but we do not discuss them further.
\subsection{Eigencurve for $D$}
It is clear that $S^{D, \dagger}(U;\kappa)$ satisfies Buzzard's property (Pr) (see \ref{S:CM eigencurve}), by an argument similar to \cite[\S 10]{buzzard} or by imitating Lemma~\ref{L:explicit space of automorphic forms}.
The action of the $U_p$-operator on $S^{D, \dagger}(U; \kappa)$ is nuclear by
\cite[Lemma~12.2]{buzzard}.
So the construction in Subsection~\ref{S:CM eigencurve} applies with $S = S^{D, \dagger}(U; \kappa)$ to give a spectral curve over $\Max(A)$.
The construction is clearly functorial in $A$ and hence defines a spectral curve $\Spc_D$ over $\mathcal{W}^{<r_m}$.
As explained in \cite[Section~13]{buzzard}, the construction for different $m$ also glues over small weight disks and hence gives rise to a spectral curve $\Spc_D$ over the entire weight space $\mathcal{W}$.
The
Jacquet--Langlands correspondence above can be made into $p$-adic families.
By \cite{chenevier}, there is a closed immersion $\Spc_D^\mathrm{red}\hookrightarrow \Spc^\mathrm{red}$, where the superscript means to take the reduced subscheme structure.\footnote{Rigorously speaking, \cite{chenevier} proves the result for eigencurves; but the spectral curves, when taking the reduced scheme structure, are exactly the images of the eigencurves after forgetting the tame Hecke actions.}
Therefore, it is natural to expect that Conjecture~\ref{Conj:CM-conj} holds for $\Spc_D$ in place of $\Spc$.
Conversely, knowing Conjecture~\ref{Conj:CM-conj} for $\Spc_D$, it is quite possible to infer a lot of information regarding $\Spc$ via the comparison \cite{chenevier}.
\section{Explicit computation of the $U_p$-operators}
\label{Section:computation of Up}
We now make a first attempt to prove certain weak versions of Conjectures~\ref{Conj:weak-buzzard-kilford} and \ref{Conj:CM-conj}, ending with a proof of Theorem~\ref{T:theorem A}.
To the best of our knowledge, the only known approach to any form of these conjectures
is via ``brute force" computation, that is, to compute directly the characteristic power series of the $U_p$-operator to the extent that one can determine its slopes.
Our approach is derived from a computation made by Jacobs \cite{jacobs} of the infinite matrix for $U_p$ in terms of concrete numbers.
The novelty of our improvement is to make the computation ``brute force but formal", as opposed to working with explicit numbers.
We include his example in the next section with some simplifications. It serves as a toy model of the computation presented in this section.
\begin{notation}
\label{N:representatives}
We decompose $D^\times_f$ into (a disjoint union of) double cosets $\coprod_{i=0}^{t-1} D^\times \gamma_i U$, for some elements $\gamma_0,\gamma_1, \dots, \gamma_{t-1} \in D^\times_f$.
By our smallness hypothesis on $U$ in Subsection~\ref{S:setup for D}, the natural map $D^\times \times U \to D^\times \gamma_i U$ for each $i$ sending $(\delta,u)$ to $\delta\gamma_iu$ is bijective. We say that the double coset decomposition above is \emph{honest}.
Since the norm map $\mathrm{Nm}: D^\times \to \mathbb{Q}^\times_{>0}$ is surjective, we may modify the representatives $\gamma_i$ so that $\mathrm{Nm}(\gamma_i) \in \widehat \mathbb{Z}^\times$. Moreover, since $\mathrm{Nm}(U_0(p^m)) = \mathbb{Z}_p^\times$, we can further modify the $p$-component of each $\gamma_i$ so that its norm is $1$. Finally, using the fact that $(D^\times)^{\mathrm{Nm}=1} $ is dense in $(D\otimes_\mathbb{Q} \mathbb{Q}_p)^{\times, \mathrm{Nm}=1}$, we may assume that the $p$-component of each $\gamma_i$ is trivial, still keeping the property that $\mathrm{Nm}(\gamma_i) \in \widehat \mathbb{Z}^\times$.
\end{notation}
Let $\Max(A)$ be an affinoid space over $\mathcal{W}^{ < r_m}$ and let $\kappa: \Gamma \to A^\times$ be the universal character.
\begin{lemma}
\label{L:explicit space of automorphic forms}
We have an $A$-linear isomorphism of Banach spaces
\[
\xymatrix@R=0pt{
S^{D,\dagger}(U; \kappa) \ar[r]^\cong &
\oplus_{i=0}^{t-1}A \widehat \otimes \mathcal{A}\\
\varphi \ar@{|->}[r] & \big(\varphi(\gamma_i) \big)_{i = 0, \dots, t-1}.
}
\]
\end{lemma}
\begin{proof}
This is clear, as the function $\varphi$ is uniquely determined by its values at the chosen representatives $\gamma_i$.
There is no further restriction on the value of $\varphi(\gamma_i)$ because the double coset decomposition in Notation~\ref{N:representatives} is honest.
\end{proof}
\begin{corollary}
\label{C:dimension formula}
We have $\dim S_{k+1}^D(U; \psi_m) = kt$, for the number $t$ in Notation~\ref{N:representatives}.
\end{corollary}
\begin{proposition}
\label{P:explicit Up}
In terms of the explicit description of the space of overconvergent automorphic forms, the $U_p$- and $T_l$- (for $l \notin \mathcal{S}$) operators can be described by the following commutative diagram.
\[
\xymatrix@C=80pt{
S^{D,\dagger}(U; \mathbf{k}appa) \ar[r]^{\varphi \mapsto (\varphi(\gamma_i))} \ar[d]_{\begin{tiny}\begin{split}\varphi \mapsto U_p\varphi\\ \varphi \mapsto T_l \varphi\end{split}\end{tiny}} &
\oplus_{i=0}^{t-1} A \widehat \otimes\mathcal{A} \ar[d]^{\begin{tiny}\begin{split}\textrm{Map of}\\ \textrm{interest} \end{split}\end{tiny}\quad}_{\begin{tiny}\begin{split}\mathfrak{U}_p\\ \mathfrak{T}_l\end{split}\end{tiny}}
\\
S^{D,\dagger}(U; \mathbf{k}appa) \ar[r]^{\varphi \mapsto (\varphi(\gamma_i))} &
\oplus_{i=0}^{t-1} A \widehat \otimes \mathcal{A}.
}
\]
Here the right vertical arrow $\mathfrak{U}_p$ (resp. $\mathfrak{T}_l$) is given by a matrix with the following description.
\begin{itemize}
\item[(1)] The entries of $\mathfrak{U}_p$ (resp. $\mathfrak{T}_l$) are sums of operators of the form $||_\kappa \delta_p$, where $\delta_p$ is the $p$-component of a \emph{global} element $\delta \in D^\times$ \emph{of norm $p$ (resp. norm $l$)}.
\item[(2)] There are exactly $p$ (resp. $l+1$) such operators appearing in each row and each column of $\mathfrak{U}_p$ (resp. $\mathfrak{T}_l$).
\item[(3)] We have
$
\delta_p \in \big(\begin{smallmatrix} p\mathbb{Z}_p& \mathbb{Z}_p\\p^{m}\mathbb{Z}_p&\mathbb{Z}_p^\times \end{smallmatrix}\big)
$ (resp. $
\delta_p \in U_0(p^m) = \big( \begin{smallmatrix} \mathbb{Z}_p^\times& \mathbb{Z}_p\\p^{m}\mathbb{Z}_p&\mathbb{Z}_p^\times \end{smallmatrix} \big)
$).
\end{itemize}
\end{proposition}
\begin{proof}
We only prove this for the $U_p$-operator and the proof for the $T_l$-operator ($l \notin \mathcal{S}$) is similar. For each $\gamma_i$, we have
\[
(U_p \varphi)(\gamma_i) = \sum_{j=0}^{p-1}
\varphi(\gamma_i v_j^{-1})||_\kappa v_j.
\]
Now we can write each $\gamma_iv_j^{-1}$ \emph{uniquely} as $\delta_{i,j}^{-1} \gamma_{\lambda_{i,j}} u_{i,j}$ for $\delta_{i,j} \in D^\times$, $\lambda_{i,j} \in \{0, \dots, t-1\}$, and $u_{i,j} \in U$.
Then we have
\[
(U_p \varphi)(\gamma_i) = \sum_{j=0}^{p-1}
\varphi(\delta_{i,j}^{-1} \gamma_{\lambda_{i,j}} u_{i,j})||_\kappa v_j = \sum_{j=0}^{p-1}
\varphi( \gamma_{\lambda_{i,j}})||_\kappa (u_{i,j,p}v_j),
\]
where $u_{i,j,p}$ is the $p$-component of $u_{i,j}$.
We substitute $u_{i,j}v_j = \gamma_{\lambda_{i,j}}^{-1} \delta_{i,j} \gamma_i$ back in and note that both $\gamma_i$ and $\gamma_{\lambda_{i,j}}$ have trivial $p$-components by our choice in Notation~\ref{N:representatives}. We obtain
\[
(U_p\varphi)(\gamma_i) =
\sum_{j=0}^{p-1} \varphi(\gamma_{\lambda_{i,j}})||_\kappa \delta_{i,j,p},
\]
where $\delta_{i,j,p}$ is the $p$-component of the \emph{global element} $\delta_{i,j} \in D^\times$.
We now check the description of each $\delta_{i,j}$:
\[
\delta_{i,j} = \gamma_{\lambda_{i,j}} u_{i,j} v_j \gamma_i^{-1} \in \gamma_{\lambda_{i,j}} U\big(\begin{smallmatrix} p&0\\0&1 \end{smallmatrix}\big)U \gamma_i^{-1}.\]
From this, we see that the $p$-component of $\delta_{i,j}$ lies in $\big(\begin{smallmatrix} p\mathbb{Z}_p& \mathbb{Z}_p\\p^{m}\mathbb{Z}_p&\mathbb{Z}_p^\times \end{smallmatrix}\big)$.
Moreover, the norm of
$\gamma_{\lambda_{i,j}} U\big(\begin{smallmatrix} p&0\\0&1 \end{smallmatrix}\big) U \gamma_i^{-1}$ lands in $p \widehat \mathbb{Z}^\times$, because our choice of the representatives satisfies $\mathrm{Nm}(\gamma_{i}) \in \widehat \mathbb{Z}^\times$ by Notation~\ref{N:representatives}.
Therefore, $\mathrm{Nm}(\delta_{i,j}) \in \mathbb{Q}^\times_{>0} \cap p\widehat \mathbb{Z}^\times = \{p\}$. This concludes the proof of the proposition.
\end{proof}
\subsection{Infinite matrices and generating functions}
For an infinite matrix (where the row and column indices start with $0$ as opposed to $1$)
\begin{equation}
\label{E:infinite matrix}
M = \begin{pmatrix}
m_{0,0} & m_{0,1} & m_{0,2} &\cdots\\
m_{1,0} & m_{1,1} & m_{1,2} &\cdots\\
m_{2,0} & m_{2,1} & m_{2,2} &\cdots\\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}
\end{equation}
with coefficients in an affinoid $E$-algebra $A$, we consider the following formal power series:
\[
H_M(x,y) = \sum_{i,j \in \mathbb{Z}_{\geq 0}} m_{i,j} x^i y^j \in A\llbracket x,y\rrbracket.
\]
It is called the \emph{generating series} of the matrix $M$.
When $M$ is the matrix for an operator $T$ acting on the Tate algebra $A \widehat \otimes \mathcal{A} = A\langle z\rangle$ over $A$ with respect to the basis $1, z, z^2, \dots$, we call $H_M(x,y)$ the \emph{generating series} of $T$.
For $u \in E$, we write $\Diag(u)$ for the infinite diagonal matrix with diagonal elements $1, u, u^2, \dots$.
Then we have
\[
H_{\Diag(u)M\Diag(v)} (x,y)= H_M(u x,v y).
\]
For $t \in \mathbb{N}$, we write $\Diag(u; t)$ for the infinite diagonal matrix with diagonal elements $1, \dots, 1, u, \dots, u, u^2, \dots$, where each number appears repeatedly $t$ times.
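As a toy illustration (not needed for the argument), the following Python sketch computes the generating series of a finite truncation of a matrix and checks the displayed identity $H_{\Diag(u)M\Diag(v)}(x,y) = H_M(ux, vy)$ coefficient by coefficient.
\begin{verbatim}
from fractions import Fraction

def generating_series(M):
    # coefficients of H_M(x, y): {(i, j): m_ij} for the (truncated) matrix M
    return {(i, j): M[i][j] for i in range(len(M))
            for j in range(len(M[0])) if M[i][j]}

def diag_sandwich(M, u, v):
    # truncation of Diag(u) * M * Diag(v)
    return [[u**i * M[i][j] * v**j for j in range(len(M[0]))]
            for i in range(len(M))]

M = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # a 3x3 truncation of an infinite matrix
u, v = Fraction(1, 3), Fraction(5)
lhs = generating_series(diag_sandwich(M, u, v))
rhs = {(i, j): c * u**i * v**j for (i, j), c in generating_series(M).items()}
assert lhs == rhs   # H_{Diag(u) M Diag(v)}(x, y) = H_M(u x, v y)
\end{verbatim}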
The following key calculation is due to Jacobs \cite[Proposition~2.6]{jacobs}.
\begin{proposition}
\label{P:generating series}
Let $\kappa: \Gamma \to A^\times$ be the universal character for an affinoid space $\Max(A)$ over $\mathcal{W}^{< r_m}$. Let $\big(\begin{smallmatrix}a&b\\c&d\end{smallmatrix} \big)$ be a matrix in $\Sigma_0(p^m)$.
The generating series of the operator $||_{\kappa}\big(\begin{smallmatrix}a&b\\c&d\end{smallmatrix} \big)$ acting on $A \widehat \otimes \mathcal{A}$ (with respect to the basis $1, z, z^2, \dots$) is given by
\[
\frac{\kappa(c x+d)}{c x+d-a xy-b y}.
\footnote{Compared to the convention in \cite{jacobs}, we lose an extra factor of $cx+d$ in the denominator due to our normalization \eqref{E:right action}. There is no real improvement in our formula except that it looks shorter.}
\]
Here we point out that, although the operator $||_{\kappa}\big(\begin{smallmatrix}a&b\\c&d\end{smallmatrix} \big)$, when viewed as the action of the monoid $\Sigma_0(p^m)$, is a right action, we only use one particular operator and will not discuss compositions; so we still use the column vector convention (pretending it is a left operator).
\end{proposition}
\begin{proof}
This is straightforward.
By definition,
\begin{align*}
H_{||_{\kappa}\big(\begin{smallmatrix}a&b\\c&d\end{smallmatrix} \big)}(x,y) &=
\sum_{i \in \mathbb{Z}_{\geq 0}}
y^i \frac{\kappa(c x+d)}{c x+d}\cdot \big(\frac{a x+b}{c x+d}\big)^i\\
&=
\frac{\kappa(c x+d)}{c x+d}\cdot \frac1{1-y \cdot \frac{a x+b}{c x+d}}
=\frac{\kappa(c x+d)}{c x+d-a xy-b y}.\qedhere
\end{align*}
\end{proof}
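As a sanity check of Proposition~\ref{P:generating series} (not part of the argument), the following sympy sketch compares truncations of the two sides for the toy classical weight $\kappa(t) = t^k$ and small integer entries (ignoring the congruence conditions, which do not affect the algebra). Recall from the proof that the coefficient of $y^j$ in the generating series is the image of $z^j$ under $||_\kappa\gamma$, written in the variable $x$.
\begin{verbatim}
import sympy as sp

x, y, z = sp.symbols('x y z')
a, b, c, d, k = 2, 1, 9, 5, 2     # toy entries; classical weight kappa(t) = t^k
N = 5                             # truncation order in both x and y

# generating series predicted by the proposition, truncated to degrees < N
H = (c*x + d)**k / ((c*x + d) - a*x*y - b*y)
H_trunc = sp.expand(sp.series(sp.series(H, x, 0, N).removeO(), y, 0, N).removeO())

# direct computation: the coefficient of y^j is the image of z^j, expanded in x
direct = 0
for j in range(N):
    image = (c*z + d)**(k - 1) * ((a*z + b) / (c*z + d))**j
    direct += sp.series(image, z, 0, N).removeO().subs(z, x) * y**j

assert sp.expand(H_trunc - sp.expand(direct)) == 0
\end{verbatim}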
Combining Proposition~\ref{P:generating series} with Proposition~\ref{P:explicit Up}, we can give a good description of the infinite matrices for $\mathfrak{U}_p$ and $\mathfrak{T}_l$ (for $l \notin \mathcal{S}$).
\subsection{Hodge polygon and Newton polygon of a matrix}
\label{S:Hodge v.s. Newton}
Before proceeding, we remind the readers of some basic facts about $p$-adic analysis. We will use them freely later without referencing back here. Let $M \in \mathrm{M}_n(E)$ be an $n\times n$-matrix.
\begin{enumerate}
\item
The \emph{Newton polygon} of $M$ is the convex polygon starting at $(0,0)$ whose slopes are exactly the $p$-adic valuations of the eigenvalues of $M$, counted with multiplicity.
\item
The \emph{Hodge polygon} of $M$ is the convex hull of the vertices
\[
\big( i,\ \textrm{the minimal $p$-adic valuation of the determinants of all $i\times i$-minors} \big).
\]
\item
The Hodge polygon is invariant when conjugating $M$ by elements in $\GL_n(\mathcal{O})$; the Newton polygon is invariant when conjugating $M$ by elements in $\GL_n(E)$.
\item
If the slopes of the Hodge polygon of $M$ are $a_1\leq \dots\leq a_n$, then there exist matrices $A, B \in \GL_n(\mathcal{O})$ such that $AMB$ is a diagonal matrix whose diagonal elements have valuation exactly $a_1, \dots, a_n$.
Conversely, if such $A$ and $B$ exist, the Hodge polygon of $M$ has the described slopes.
\item
If the slopes of the Hodge polygon of $M$ are $a_1\leq \dots\leq a_n$, then there exists a matrix $A \in \GL_n(\mathcal{O})$ such that the valuations of all entries in the $i$-th row of $AMA^{-1}$ are at least $ a_i$ for all $i$.
Conversely, when such $A$ exists, the Hodge polygon of $M$ lies above the polygon with slopes $a_1, \dots, a_n$.
\item
Suppose that $\mathcal{O}^{\oplus n}$ can be written as $V \oplus V'$ for two $M$-stable $\mathcal{O}$-submodules $V$ and $V'$.
Then the set of Newton slopes for $M$ is the union of the sets of Newton slopes of $M$ acting on $V$ and $V'$, counted with multiplicity. The same holds for Hodge slopes.
\item
It is always true that the Newton polygon lies above the Hodge polygon. This also holds for an infinite matrix associated to a nuclear operator. (A toy computational illustration of (1), (2), and (7) follows this list.)
\end{enumerate}
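For concreteness, here is a small Python sketch (assuming an integer matrix with nonzero determinant) that computes the points defining the Hodge polygon and the Newton slopes via the characteristic polynomial; it is only meant to illustrate facts (1), (2), and (7) on a toy example.
\begin{verbatim}
from itertools import combinations
from fractions import Fraction

def vp(n, p):
    # p-adic valuation of a nonzero integer
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def det(M):
    # determinant by Laplace expansion along the first row (fine for small matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def minor(M, rows, cols):
    return [[M[r][c] for c in cols] for r in rows]

def hodge_points(M, p):
    # points (i, minimal valuation of an i x i minor); the Hodge polygon is their
    # lower convex hull, as in (2); assumes det(M) != 0
    n = len(M)
    pts = [(0, 0)]
    for i in range(1, n + 1):
        vals = [vp(d, p) for rows in combinations(range(n), i)
                for cols in combinations(range(n), i)
                for d in [det(minor(M, rows, cols))] if d != 0]
        pts.append((i, min(vals)))
    return pts

def newton_slopes(M, p):
    # det(tI - M) = t^n + c_1 t^{n-1} + ... + c_n with c_i = (-1)^i (sum of
    # principal i x i minors); the Newton slopes, as in (1), are the slopes of the
    # lower convex hull of the points (i, vp(c_i)), starting from (0, 0)
    n = len(M)
    pts = [(0, 0)]
    for i in range(1, n + 1):
        c = (-1)**i * sum(det(minor(M, rows, rows)) for rows in combinations(range(n), i))
        if c != 0:
            pts.append((i, vp(c, p)))
    slopes, (i0, v0) = [], pts[0]
    while i0 < pts[-1][0]:
        later = [q for q in pts if q[0] > i0]
        j, vj = min(later, key=lambda q: Fraction(q[1] - v0, q[0] - i0))
        slopes += [Fraction(vj - v0, j - i0)] * (j - i0)
        i0, v0 = j, vj
    return slopes

p, M = 3, [[3, 1], [9, 6]]
print(hodge_points(M, p))    # [(0, 0), (1, 0), (2, 2)]: Hodge slopes 0, 2
print(newton_slopes(M, p))   # [1, 1]: the Newton polygon lies above the Hodge polygon
\end{verbatim}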
We now prove Theorem~\ref{T:theorem A} from the introduction (through the Jacquet--Langlands correspondence \eqref{E:Jaquet-Langlands classical}):
\begin{theorem}
\label{T:weak Hodge polygon}
Let $\psi_m$ be a finite character of $\mathbb{Z}_p^\times$ of conductor $p^m$ with the same $m$ that defines the level structure $U$.\footnote{Again, we allow $\psi$ to be trivial, in which case $m=1$ if $p>2$ and $m=2$ if $p=2$.}
Recall that $\dim S_2^D(U; \psi_m)=t$.
Then the Newton polygon for the slopes of $U_p$ acting on $S^{D,\dagger}(U; x\langle x\rangle^k\psi_m)$ for $k \in \mathbb{Z}_p$ lies above the polygon with vertices
\begin{equation}
\label{E:trivial hodge bound}
(0,0), (t,0), (2t, t), \dots, (nt, \tfrac{n(n-1)}2 t), \dots.
\end{equation}
\end{theorem}
\begin{proof}
By Lemma~\ref{L:explicit space of automorphic forms} and Proposition~\ref{P:explicit Up} (in our case $A = E = \mathbb{Q}_p(\zeta_{p^{m-1}})$), it suffices to understand the matrix for the operator $\mathfrak{U}_p$.
We first give $\oplus_{i=0}^{t-1} \mathcal{A}$ a basis:
\[
1_0, z_0, z_0^2, \dots, 1_1, z_1, z^2_1, \dots, 1_{t-1}, z_{t-1}, \dots,
\]
where the subscripts indicate which copy of $\mathcal{A}$ the element comes from. Then the matrix for $\mathfrak{U}_p$ is a $t\times t$-block matrix such that each block is an infinite matrix.
By Proposition~\ref{P:generating series}, the generating series of each block is the sum of power series of the form
\[
\frac{d\psi_m(d) \langle d\rangle ^k(1+ \frac cd x)^{k+1}}{cx+d-axy-by}, \quad \textrm{with } p|a\textrm{ and }p^m|c \textrm{ by Proposition~\ref{P:explicit Up}}.
\]
When $k \in \mathbb{Z}_p$, the expression above lands in $\mathcal{O}\llbracket p^mx, pxy,y\rrbracket \subseteq \mathcal{O}\llbracket px, y\rrbracket$. In particular, the $i$th row of the corresponding infinite matrix is divisible by $p^i$.
We can then rewrite the matrix of $\mathfrak{U}_p$ under the following basis of $\oplus_{i=0}^{t-1} \mathcal{A}$:
\[
1_0,1_1, \dots, 1_{t-1}, z_0,\dots, z_{t-1}, z_0^2, \dots.
\]
Then the matrix of $\mathfrak{U}_p$ becomes an infinite block matrix, where each block is $t\times t$.
Moreover, the discussion above implies that the $i$th block row is entirely divisible by $p^i$.
In other words, the Hodge polygon of this matrix lies above the polygon with vertices given by \eqref{E:trivial hodge bound}.
So the Newton polygon of $\mathfrak{U}_p$ also lies above it.
\end{proof}
\begin{remark}
\label{R:improve Hodge bound classical forms}
We discuss how one can improve the lower bound of the Newton polygon of the $U_p$-action on the classical automorphic forms $S_{k+1}^D(U; \psi)$ when $\psi$ has conductor $p$ and $m=1$ (or $4$ and $m=2$). (The case when $m \geq 2$ for $p >2$ and $m \geq 3$ for $p=2$ will be studied at length in Section~\ref{Section:improve Hodge polygon}.)
Note that this includes the case when $\psi$ is trivial.
For simplicity, we assume that the condition \eqref{E:buzzard condition} holds for $U$ replaced by $\prod_{l \neq p} U_l \times \GL_2(\mathbb{Z}_p)$. In particular, $(p+1)|t$ if $p>2$ and $6|t$ if $p =2$.
\begin{enumerate}
\item
When $\psi$ is non-trivial of conductor $p$ or $4$,
we know that the $U_p$-slopes on $S^D_{k+1}(U; \psi )$ are exactly given by $k$ minus the $U_p$-slopes on $S^D_{k+1}(U; \psi^{-1} )$, by Atkin--Lehner theory (see Proposition~\ref{P:Up pair to p} for the proof in the case $k=1$; the general case is similar).
Thus, applying Theorem~\ref{T:weak Hodge polygon} to $S^D_{k+1}(U; \psi) $ and $ S^D_{k+1}(U; \psi^{-1})$ and using the fact above, we can improve the lower bound in Theorem~\ref{T:weak Hodge polygon} of the Newton polygon for the $U_p$-action on the \emph{direct sum} $S^D_{k+1}(U; \psi) \oplus S^D_{k+1}(U; \psi^{-1})$, which must lie above the polygon with slopes
\begin{itemize}
\item (if $k$ is even) $0,\, 1, \, \dots \,, \frac k2 -1, \, \frac k2+1, \, \frac k2+2, \, \dots, \, k$, each with multiplicity $2t$;
\item (if $k$ is odd)
$0,\, 1, \, \dots \,, \frac {k-1}2, \, \frac {k+1}2, \, \frac {k+3}2, \, \dots, \, k$, each with multiplicity $2t$, except the slopes $\frac {k-1}2$ and $\frac {k+1}2$, which each have multiplicity $t$.
\end{itemize}
We do not know how to improve the bound on the $U_p$-slopes of each individual $S_{k+1}^D(U; \psi)$.
\item
When $\psi$ is the trivial character, $S^D_{k+1}(U; \textrm{triv})$ is the direct sum of the $p$-old part $S^D_{k+1}(U; \textrm{triv})^{p\textrm{-old}}$ and the $p$-new part $S^D_{k+1}(U; \textrm{triv})^{p\textrm{-new}}$.
Given our earlier hypothesis on $U$, we have
\[
\dim S^D_{k+1}(U; \textrm{triv})^{p\textrm{-old}} = \tfrac{2}{p+1}kt, \quad \textrm{and} \quad \dim S^D_{k+1}(U; \textrm{triv})^{p\textrm{-new}} = \tfrac{p-1}{p+1}kt.
\]
The eigenvalues of $U_p$-action on $S^D_{k+1}(U; \textrm{triv})^{p\textrm{-new}}$ all have valuation $(k-1)/2$;
whereas the eigenvalues of $U_p$-action on $S^D_{k+1}(U; \textrm{triv})^{p\textrm{-old}}$ can be paired so that the product of each pair has valuation $k$, according to the property of $p$-stabilization.
Thus the total $U_p$-slope on $S_{k+1}^D(U; \mathrm{triv})$ is
\begin{equation}
\label{E:total slope when char is triv}
\tfrac{1}{p+1}kt \cdot k + \tfrac{p-1}{p+1}kt \cdot \tfrac {k-1}2 = \tfrac{pk-p+k+1}{2(p+1)} kt.
\end{equation}
We claim that the Newton polygon for the $U_p$-action on $S^D_{k+1}(U; \textrm{triv})$ lies above the polygon with slopes
\begin{itemize}
\item
$0, 1, \dots, [\frac{k}{p+1}]-1$, each with multiplicity $t$,
\item
$[\frac{k}{p+1}]$ with multiplicity $\frac{kt}{p+1} - [\frac{k}{p+1}] t$,
\item
$\frac{k-1}2$ with multiplicity $\frac{(p-1)kt}{p+1}$,
\item
$k - [\frac{k}{p+1}]$ with multiplicity $\frac{kt}{p+1} - [\frac{k}{p+1}] t$, and
\item
$k - [\frac{k}{p+1}]+1, k- [\frac{k}{p+1}]+2, \dots, k$, each with multiplicity $t$.
\end{itemize}
Indeed, the lower bound over the interval $[0, \frac{kt}{p+1}]$ follows from Theorem~\ref{T:weak Hodge polygon}.
The lower bound over the interval $[\frac{pkt}{p+1}, kt]$ follows from the above lower bound together with the property of $p$-stabilization (as we know the total $U_p$-slopes as computed in \eqref{E:total slope when char is triv}).
The fact that the dimension of the $p$-new forms is $\frac{p-1}{p+1}kt$, together with the $p$-stabilization property, implies that the $n$th slope of $S_{k+1}^D(U;\mathrm{triv})$ with $n \in (\frac{kt}{p+1}, \frac{pkt}{p+1}]$ is greater than or equal to $\frac{k-1}2$. So the Newton polygon of the $U_p$-action on $S_{k+1}^D(U; \mathrm{triv})$, over the interval $[\frac{kt}{p+1}, \frac{pkt}{p+1}]$, lies above the segment with slope $\frac{k-1}{2}$ starting at the known lower bound at the point $x = \frac{kt}{p+1}$.
This proves the claim.
\end{enumerate}
We point out that the bounds in both cases share the same end point with the actual Newton polygon of the $U_p$-action.
Moreover, the distance of this end point and the vertex $(kt, \frac{k(k-1)}{2}t)$ given by Theorem~\ref{T:weak Hodge polygon} is linear in $k$. So Theorem~\ref{T:weak Hodge polygon} is already a quite sharp bound in this sense.
\end{remark}
\begin{remark}
\label{R:heuristic gouvea}
Keep the setup as in Remark~\ref{R:improve Hodge bound classical forms}(2) and consider the case of trivial character now.
Gouv\^ea \cite{gouvea} has computed many numerical examples\footnote{Rigorously speaking, Gouv\^ea \cite{gouvea} worked with actual modular forms, but we expect the analogue of his conjecture applies in this case.} to support his expectation of the distribution of $U_p$-slopes on $S^D_{k+1}(U; \textrm{triv})$.
If one uses $a_1(k)\leq \dots \leq a_{kt/(p+1)}(k)$ to denote the lesser slopes on the space of $p$-old forms,
Gouv\^ea conjectured that the distribution given by the numbers
\[
a_1(k) / k,\, a_2(k)/k,\, \dots, \,a_{kt/(p+1)}(k)/k, \textrm{ as } k \to \infty,
\]
converges to a uniform distribution on $[0,\frac{1}{p+1}]$.
In view of the discussion above, this conjecture can be reinterpreted as: the Newton polygon of $U_p$-action on $S^D_{k+1}(U; \mathrm{triv})$ ``stays close" to the lower bound given in Remark~\ref{R:improve Hodge bound classical forms}.
At least, the polygon lower bound provides an inequality for the distribution conjectured by Gouv\^ea.
\end{remark}
\section{An example of explicit computation}
\label{Section:explicit example}
In this section, we give an example of by-hand computation of the $U_p$-slopes for a particular definite quaternion algebra, a prime number $p$, and a level structure.
This case was considered earlier by Jacobs \cite{jacobs}, a former student of Buzzard, in his thesis.
Unfortunately, Jacobs relied heavily on the computer, which makes the computation inaccessible to people who are interested in checking for patterns.
We reproduce a variant of this computation to serve as a key toy model of our various proofs.
We hope that this hands-on computation can inspire the readers to develop this technique further.
\subsection{The quaternion algebra}
In this section, we consider the quaternion algebra $D$ which ramifies exactly at $2$ and $\infty$. Explicitly, it is
\[
D = \mathbb{Q}\langle \mathbf{i},\mathbf{j}\rangle / (\mathbf{i}\mathbf{j} = -\mathbf{j}\mathbf{i}, \mathbf{i}^2 = \mathbf{j}^2 = -1).
\]
Here we use angled bracket to signify the non-commutativity of the algebra.
It is conventional to put $\mathbf{k} = \mathbf{i}\mathbf{j}$.
The maximal order of $D$ is given by
\[
\mathcal{O}_D = \mathbb{Z} \big\langle\, \mathbf{i},\, \mathbf{j},\, \tfrac12 (1+\mathbf{i}+\mathbf{j}+\mathbf{k}) \big\rangle.
\]
The unit group consists of $24$ elements; they are
\[
\mathcal{O}_D^\times = \big\{ \pm\! 1, \pm \mathbf{i}, \pm \mathbf{j}, \pm \mathbf{k}, \tfrac12(\pm 1 \pm \mathbf{i} \pm \mathbf{j} \pm \mathbf{k})\; \big\}.
\]
\subsection{Level structure}
Our distinguished prime $p$ is $3$.
Put $D_f = D\otimes \mathbb{A}_f$.
For each $l \neq 2$, we identify $D \otimes \mathbb{Q}_l$ with $\mathrm{M}_2(\mathbb{Q}_l)$.
For $l = 2$, we use $D^\times(\mathbb{Z}_2)$ to denote the maximal compact subgroup of $ (D \otimes \mathbb{Q}_2)^\times$.
We consider the following open compact subgroup of $D^\times_f$:
\begin{equation}
\label{E:level structure example}
U = D^\times(\mathbb{Z}_2) \times \prod_{l \neq 2,3}\GL_2(\mathbb{Z}_l) \times \begin{pmatrix}
\mathbb{Z}_3^\times & \mathbb{Z}_3\\
9 \mathbb{Z}_3 & 1+ 3\mathbb{Z}_3
\end{pmatrix}.\footnote{Our choice of the level structure is slightly different from \cite{jacobs}, who uses the $\Gamma_1(9)$-level structure. Here $\Gamma_1(9)$ is defined in the same way as \eqref{E:level structure example} but with the lower right entry of the last factor replaced by $1+9\mathbb{Z}_3$. As a result, Jacobs had to go through an additional factorization to get the same answer.}
\end{equation}
We point out that for our choice of $p=3$, this corresponds to $m = 2$ in Theorem~\ref{T:theorem B}; so it is not literally covered by it.
\begin{notation}
Let $\nu_3$ denote the square root of $-2$ that is congruent to $1$ modulo $3$. We have a $3$-adic expansion
\[
\nu_3 = 1+ 3+2 \cdot 3^2 + 2 \cdot 3^5 + 3^7 + \cdots.
\]
We choose the isomorphism between $D \otimes \mathbb{Q}_3$ and $\mathrm{M}_2(\mathbb{Q}_3)$ so that
\[
1 \leftrightarrow \begin{pmatrix}
1 &0 \\ 0 & 1
\end{pmatrix}, \quad \mathbf{i} \leftrightarrow
\begin{pmatrix}
\nu_3 & 1\\ 1& -\nu_3
\end{pmatrix}, \quad
\mathbf{j} \leftrightarrow \begin{pmatrix}
0 & -1 \\ 1 & 0
\end{pmatrix}, \textrm{ and } \mathbf{k} \leftrightarrow \begin{pmatrix}
1 & -\nu_3 \\ -\nu_3 &- 1
\end{pmatrix}
.
\]
\end{notation}
\begin{lemma}
\label{L:honest decomposition V(9)}
The following natural map is bijective.\footnote{In \cite{jacobs}, $D^\times_f$ is written as the disjoint union of three double cosets, which in fact corresponds to the double coset decomposition of $U$ over $\Gamma_1(9)$.}
\[
\xymatrix@R=0pt{
D^ \times \times U \ar[r] & D_f^\times\\
(\delta, u) \ar@{|->}[r] & \delta u.
}
\]
\end{lemma}
\begin{proof}
This is of course coincidental for our choices of $D$, $p$ and $U$. We first observe that $D_f^\times = D ^\times \cdot U_\mathrm{max}$ (see \cite[Lemma~1.22]{jacobs}), where $U_\mathrm{max}$ is a maximal open compact subgroup of $(D\otimes \mathbb{A}_f)^\times$, defined using the same equation as in \eqref{E:level structure example} except that the factor at $3$ is replaced by $\GL_2(\mathbb{Z}_3)$. Taking into account the duplication, we have
\[
D_f^\times = D ^\times \times_{\mathcal{O}_D^\times} U_\mathrm{max}.
\]
So it suffices to check that the image of $\mathcal{O}_D^\times$ in $\GL_2(\mathbb{Z}_3)$ forms a complete set of coset representatives of $ U_\mathrm{max} / U$.
This can be checked easily by hand. (See the proof of \cite[Theorem~2.1]{jacobs} for the list of residues of $\mathcal{O}_D^\times$ when taking modulo $9$.)
\end{proof}
\begin{corollary}
Let $\psi$ be a continuous character of $\mathbb{Z}_3^\times$ of conductor $9$ such that $\psi(-1)=1$ and let $\kappa = x \langle x\rangle ^w\psi$ with $w\in \mathcal{O}_{\mathbb{C}_3}$ be a character considered in Example~\ref{Ex: characters on weight space}.
Then evaluation at $1$ induces an isomorphism $S^{D,\dagger}(U; \kappa) \cong \mathcal{A}$.
\end{corollary}
\begin{lemma}\label{lemma1}
For the case considered in this section, the map $\mathfrak{U}_3$ in Proposition~\ref{P:explicit Up} is given by
$\mathfrak{U}_3 = ||_\kappa \delta_1 + ||_\kappa \delta_2 + ||_\kappa \delta_3$, where
\[
\delta_1 =
-1 + \mathbf{i} - \mathbf{j}, \ \delta_2= \tfrac12(1+\mathbf{i}+3\mathbf{j}+\mathbf{k}),\ \textrm{and }\delta_3= \tfrac12(1-3\mathbf{i}-\mathbf{j}-\mathbf{k}).
\]
The images of $\delta_1, \delta_2,\delta_3$ in $\GL_2(\mathbb{Z}_3)$ are given by
\[
\begin{pmatrix}
\nu_3-1 & 2 \\ 0 & -1- \nu_3
\end{pmatrix}, \quad
\begin{pmatrix}
1+\frac{\nu_3}2 & -1-\frac{\nu_3}{2} \\ 2-\frac{\nu_3}{2} & -\frac{\nu_3}2
\end{pmatrix}, \quad \textrm{and }
\begin{pmatrix}
-\frac{3\nu_3}{2} & -1+\frac{\nu_3}{2} \\ -2+\frac{\nu_3}{2} & 1+ \frac{3\nu_3}2
\end{pmatrix}.
\footnote{One compares these matrices with the ones appearing after \cite[Lemma~2.5]{jacobs}. Jacobs has a different normalization, which could be removed if one wishes. Also, we think his matrices involving $v_1^{-1}$ are not correct; this error is however fixed on the next page of {\it loc.\ cit.}.}
\]
Modulo $9$, they are
\[
\begin{pmatrix}
3 & 2 \\ 0 & 4
\end{pmatrix}, \quad\begin{pmatrix}
3 & 6 \\ 0 & 7
\end{pmatrix}, \quad \textrm{and }\begin{pmatrix}
3 & 1 \\ 0 & 7
\end{pmatrix}.
\]
\end{lemma}
\begin{proof}
We follow the computation in Proposition~\ref{P:explicit Up}.
We need to compute
\[
U_3(\varphi)(1) = \sum_{j=1}^3 \varphi(v_j^{-1})||_\kappa v_j, \quad \textrm{for }v_j = \big(
\begin{smallmatrix}
3&0\\ 9j &1
\end{smallmatrix}
\big)
\]
By Lemma~\ref{L:honest decomposition V(9)}, we can write each $v_j^{-1}$ uniquely as $\delta_j^{-1} u_j$ for $\delta_j \in D^\times$ and $u_j \in U$.
Then
\[
\varphi(v_j^{-1})||_\kappa v_j = \varphi(1)||_\kappa (u_{j,3}v_j) = \varphi(1)||_\kappa \delta_{j,3},
\]
where $u_{j,3}$ and $\delta_{j,3}$ denote the $3$-components of $u_j$ and $\delta_j$, respectively.
On the other hand, we have
\[
\delta_j = u_jv_j \in D^\times \cap U v_j \subseteq D^\times\cap U\big( \begin{smallmatrix}
3 & 0 \\ 0 & 1
\end{smallmatrix}\big) U = D^\times \cap U_1(3) \big( \begin{smallmatrix}
3 & 0 \\ 0 & 1
\end{smallmatrix}\big),
\]
where $U_1(3)$ is defined as $U$ in \eqref{E:level structure example} except the last factor is replaced by $\big(
\begin{smallmatrix}
\mathbb{Z}_3^\times & \mathbb{Z}_3 \\ 3 \mathbb{Z}_3 & 1+ 3\mathbb{Z}_3
\end{smallmatrix}
\big)$.
If we put $\delta_j = \delta'_j (1 - \mathbf{i} + \mathbf{j})$, then we have
\begin{align}
\nonumber
\delta'_j &\in D^\times \cap U_1(3)\big(\begin{smallmatrix}
3 & 0 \\ 0 & 1
\end{smallmatrix}\big)(1-\mathbf{i} +\mathbf{j})^{-1} = D^\times \cap U_1(3) \big(\begin{smallmatrix}
1+ \nu_3 & 2 \\ 0 & (1-\nu_3)/3
\end{smallmatrix}\big)\\
\nonumber
&=D^\times \cap U_1(3) \big(\begin{smallmatrix}
5 & 2 \\ 0 & 2
\end{smallmatrix}\big) = \big\{
-1, \ \tfrac12(1+\mathbf{i}+\mathbf{j}-\mathbf{k}),\ \tfrac12(1-\mathbf{i}-\mathbf{j}+\mathbf{k})
\big\}
\end{align}
The last equality follows from looking at the list of $\mathcal{O}_D^\times$ modulo $3$. (In the notation of Jacobs' thesis \cite{jacobs}, this set is $\{-1, u_5, -u_8\}$.)
It is then clear that each $\delta_j$ is one of the elements listed above, right-multiplied by $1-\mathbf{i}+\mathbf{j}$. The rest of the lemma is straightforward.
\end{proof}
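The matrices modulo $9$ in Lemma~\ref{lemma1} can also be checked by the following minimal Python sketch, which only uses $\nu_3 \equiv 4 \pmod 9$ (indeed $4^2 \equiv -2 \pmod 9$) and $2^{-1} \equiv 5 \pmod 9$.
\begin{verbatim}
nu, half = 4, 5                       # nu_3 mod 9 and the inverse of 2 mod 9
one = [[1, 0], [0, 1]]
i_  = [[nu, 1], [1, -nu]]
j_  = [[0, -1], [1, 0]]
k_  = [[1, -nu], [-nu, -1]]

def comb(c1, ci, cj, ck):
    # the 2x2 matrix of c1*1 + ci*i + cj*j + ck*k, reduced modulo 9
    mats, coeffs = [one, i_, j_, k_], [c1, ci, cj, ck]
    return [[sum(c * m[r][s] for c, m in zip(coeffs, mats)) % 9 for s in range(2)]
            for r in range(2)]

delta1 = comb(-1, 1, -1, 0)                   # -1 + i - j
delta2 = comb(half, half, 3 * half, half)     # (1 + i + 3j + k)/2
delta3 = comb(half, -3 * half, -half, -half)  # (1 - 3i - j - k)/2

assert delta1 == [[3, 2], [0, 4]]
assert delta2 == [[3, 6], [0, 7]]
assert delta3 == [[3, 1], [0, 7]]
\end{verbatim}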
The main theorem of Jacobs' thesis \cite{jacobs} is the following.
\begin{theorem}[Jacobs]
Let $\psi$ be a character of $\mathbb{Z}_3^\times$ of conductor $9$ such that $\psi(-1)=1$.
We consider the characters $\kappa = x\langle x\rangle ^w\psi$ ($w \in \mathcal{O}_{\mathbb{C}_3}$) as in Example~\ref{Ex:classical weight}.
The slopes of the $U_3$-operator acting on $S^{D,\dagger}(U; \kappa)$ are $\frac{1}{2},1+\frac{1}{2},2+\frac{1}{2},3+\frac{1}{2},\dots$.
\end{theorem}
\begin{proof}
Put $\xi = \psi(4)$; it is a primitive third root of unity. Then $\psi(7) = \xi^2$.
Put $\pi = \xi-1$ so that $v(\pi) = \frac 12$.
Let $H_{\mathfrak{U}_3}(x,y)$ denote the generating series of the Hecke operator acting on $S^{D,\dagger}(U; \kappa) \cong \mathcal{A}$.
By Lemma~\ref{lemma1}, the map $\mathfrak{U}_3$ is given as $
\mathfrak{U}_3 = ||_\kappa \delta_1 + ||_\kappa \delta_2 + ||_\kappa \delta_3$
for the elements $\delta_1, \delta_2, \delta_3$ given therein.
By Lemma~\ref{lemma1}, we have
\begin{align*}
H_{\mathfrak{U}_3}(\frac{1}{3\pi} x,\pi y) &\equiv
\frac{\psi(4)4^w}{4-3 \frac{1}{3\pi} x \cdot \pi y -2\pi y}+ \frac{\psi(7)7^w}{7-3 \frac{1}{3\pi}x \cdot \pi y-6 \pi y}
+ \frac{\psi(7)7^w}{7-3 \frac{1}{3\pi} x \cdot\pi y-\pi y}
\\
& \equiv \frac{\xi}{4-xy-2\pi y}+ \frac{\xi^2}{7-xy-6\pi y}+ \frac{\xi^2}{7-xy-\pi y}
\\
&\equiv
\frac{1+\pi}{1-xy+\pi y}+ \frac{(1+\pi)^2}{1-xy}+ \frac{(1+\pi)^2}{1-xy-\pi y} \pmod 3
\end{align*}
It is now straightforward to check that this is congruent to $\dfrac{2\pi}{1-xy}$ modulo $3$.
In other words, the matrix $\Diag(\frac{1}{3\pi}) \cdot \mathfrak{U}_3 \cdot \Diag(\pi)$ is congruent modulo $3$ to $2 \pi \cdot I_\infty$, where $I_\infty$ is the infinite identity matrix.
It follows easily from this that the slopes of the $U_3$-operator acting on $S^{D,\dagger}(U; \kappa)$ are $\frac{1}{2},1+\frac{1}{2},2+\frac{1}{2},3+\frac{1}{2},\dots$.
\end{proof}
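The congruence asserted in the last step of the proof can be verified symbolically. The following sympy sketch clears the denominators (which are units modulo $3$, having constant term $1$) and reduces modulo $3$ and $\xi^2+\xi+1$; it is only a sanity check and not part of the argument.
\begin{verbatim}
import sympy as sp

x, y, xi = sp.symbols('x y xi')    # xi plays the role of the cube root of unity psi(4)
pi = xi - 1
u = 1 - x*y

# clear the denominators u + pi*y, u, u - pi*y and compare the two sides
lhs = (1 + pi)*u*(u - pi*y) + (1 + pi)**2*(u + pi*y)*(u - pi*y) \
      + (1 + pi)**2*u*(u + pi*y)
rhs = 2*pi*(u + pi*y)*(u - pi*y)

diff = sp.rem(sp.expand(lhs - rhs), xi**2 + xi + 1, xi)   # impose xi^2 + xi + 1 = 0
assert all(c % 3 == 0 for c in sp.Poly(diff, x, y, xi).coeffs())
\end{verbatim}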
\section{Improving the lower bound}
\label{Section:improve Hodge polygon}
The key to obtain a strong result on $U_p$-slopes is to improve the lower bound in Theorem~\ref{T:weak Hodge polygon} so that it agrees with the Newton polygon for sufficiently many points.
\begin{hypothesis}
\label{H:psi conductor pm}
In this and the next section, we retain the notation from Section~\ref{Section:computation of Up} to work with a general definite quaternion algebra $D$ (which splits at $p$).
We fix an integer $m \geq 4$.
By writing $\psi_m$, we always mean a finite continuous character of $\mathbb{Z}_p^\times$ of conductor $p^m$. Let $E$ be a finite extension of $\mathbb{Q}_p(\zeta_{p^{m-1}})$.
The level structure at $p$ is always taken to be $U_0(p^m)$ with the same number $m$. Some of the results (perhaps after modification) may hold for smaller $m$; see Remark~\ref{R:m=3}.
We assume that $\psi(-1) = 1$.
\end{hypothesis}
\subsection{Facts about classical automorphic forms}
\label{S:classical automorphic forms}
To avoid future confusion, we must clarify how twisting an automorphic representation $\pi$ (of weight $2$) by a central character $\eta: (\mathbb{Z}_p/p^m\mathbb{Z}_p)^\times \to E^\times$ works, in an explicit way.
We consider the following space of classical automorphic forms
\begin{align}
\label{E:S2 classical twist}
S_2^D(U; \psi_m; \eta): = \big\{ \varphi: D^\times_f \to E \, \big| \, \varphi(\delta gu&) = \eta(ad) \psi_m(d) \varphi(g) \textrm{ for any }\delta \in D^\times,\\
\nonumber
& g \in D^\times_f, \textrm{and } u\in U \textrm{ with }u_p = \big(\begin{smallmatrix}
a&b\\c&d
\end{smallmatrix} \big) \big\}.
\end{align}
It carries an action of the Hecke operators $T_l$ (for $l\notin \mathcal{S}$) and $U_p$ defined just as in Subsection~\ref{S:Hecke action}, except that the $T_l$-action is multiplied by $\eta(l)$ and the $U_p$-action is unchanged.
Recall that (after making a finite extension of the coefficient field $E$), we have a decomposition of automorphic representations under the actions of all Hecke operators $T_l$ ($l \notin \mathcal{S}$):
\begin{equation}
\label{E:spectral decomposition over C}
S_2^D(U; \psi_m; \eta) = \bigoplus_\pi V(\pi),
\end{equation}
where the sum is taken over all automorphic representations $\pi$ of $\GL_2(\mathbb{A}^\infty)$ of weight $2$. We say that $\pi$ \emph{appears} in $S_2^D(U; \psi_m ; \eta)$ if the corresponding space $V(\pi) \neq 0$.
We may view $\eta$ as a Hecke character of $\mathbb{A}^\times$ via the identification
\[
\mathbb{Q}^\times \Big\backslash \mathbb{A}^\times \Big/ \Big( \mathbb{R}^\times_{>0}(1+p^m\mathbb{Z}_p)^\times\prod_{l \neq p} \mathbb{Z}_l^\times\Big) \cong (\mathbb{Z}_p/p^m\mathbb{Z}_p)^\times.
\]
Write $\pi \otimes (\eta \circ \det)$ for the tensor product of the automorphic representations of $\GL_2(\mathbb{A})$; it has central character $\omega_\pi \eta^2$, where $\omega_\pi$ is the central character of $\pi$.
For each $l \notin \mathcal{S}$, the $T_l$-eigenvalue on the spherical vector at $l$ for $\pi \otimes (\eta \circ \det)$ is $\eta(l)$ times that for $\pi$.
It is clear from this construction that $\pi$ appears in $S_2^D(U; \psi_m)$ if and only if $\pi \otimes (\eta \circ \det)$ appears in $S_2^D(U; \psi_m; \eta)$.
In fact we have a canonical isomorphism of modules of Hecke operators $T_l$ for $l \notin \mathcal{S}$ and $U_p$
\begin{equation}
\label{E:untwist}
S_2^D(U; \psi_m) \otimes (\eta \circ \det) \cong S_2^D(U; \psi_m; \eta),
\end{equation}
where $T_l$ for $l \notin \mathcal{S}$ acts on the factor $(\eta \circ \det)$ by multiplication by $\eta(l)$ and $U_p$ acts trivially.
We must point out that
$S_2^D(U; \psi_m; \eta)$ is \emph{genuinely different} from $S_2^D(U; \psi_m \eta^2)$ (not even up to twists).
\begin{lemma}
\label{L:only new forms}
Assume Hypothesis~\ref{H:psi conductor pm}.
Then each Hecke eigenform in $S_2^D(U; \psi_m; \omega^r)$ is $p$-new, and the action of $U_p$ on each of $V(\pi)$ in \eqref{E:spectral decomposition over C} is just the scalar multiplication by some $a_p(\pi) \in E$.
Moreover, $v(a_p(\pi)) \in [0,1]$.
\end{lemma}
\begin{proof}
The isomorphism~(\ref{E:untwist}) allows us to assume $r=0$ in \eqref{E:spectral decomposition over C}.
The condition on $\psi_m$ and the level structure ensures that the $p$-component $\pi_p$ of $\pi$ is forced to be a principal series, and that it has only a one-dimensional space of fixed vectors under the action of the group $U_1(p^m) = \big(\begin{smallmatrix} \mathbb{Z}_p^\times& \mathbb{Z}_p\\p^{m}\mathbb{Z}_p&1+p^m\mathbb{Z}_p \end{smallmatrix}\big)$.
So $U_p$ acts on $V(\pi)$ in the same way as $U_p$ acts on this one-dimensional space of fixed vectors, by multiplication by some $a_p(\pi)\in E$.
The norm bound on $v(a_p(\pi))$ follows from the admissibility at $p$ of the Galois representation attached to $\pi$.
\end{proof}
\subsection{Atkin--Lehner involution}
\label{S:Atkin-Lehner}
Recall that weight $2$ automorphic forms are simply functions on $D_f^\times$.
Similar to the case of classical modular forms, we have the following Atkin--Lehner involution map
\begin{equation}
\label{E:Atkin-Lehner}
\xymatrix@R=0pt{
\mathrm{AL}_{\psi_m}: S_2^D(U; \psi_m) \ar[r] & S_2^D(U; \psi_m^{-1}; \psi_m)
\\
\varphi \ar@{|->}[r] & \varphi\big(\bullet \Matrix 01{p^m}0\big).
}
\end{equation}
One checks that the condition
\[
\varphi(gu) = \psi_m(d) \varphi(g) \quad \textrm{ for } u\in U \textrm{ with } u_p = \Matrix abcd
\]
is equivalent to
\[
\varphi\big(gu \Matrix 01{p^m}0 \big) = \psi_m(a) \varphi\big(g\Matrix 01{p^m}0 \big) \quad \textrm{ for } u\in U \textrm{ with } u_p = \Matrix abcd.
\]
This justifies the source and the target of $\mathrm{AL}_{\psi_m}$. It is clear that $\mathrm{AL}_{\psi_m}$ is an isomorphism, preserving the obvious $\mathcal{O}$-lattice, given by evaluation at the fixed coset representatives $\gamma_i$'s.
\begin{proposition}
\label{P:Up pair to p}
Keep the notation as above. For $\varphi \in S_2^D(U; \psi_m)$, we have
\begin{equation}
\label{E:Up pair to p}
U_p \circ \mathrm{AL}_{\psi_m} \circ U_p (\varphi) =p \cdot \mathrm{AL}_{\psi_m} \circ S_p(\varphi),
\end{equation}
where $S_p$ is the automorphism $\varphi \mapsto \varphi( \bullet \big(\begin{smallmatrix}p^{-1}&0\\0&p^{-1}\end{smallmatrix}\big)\,)$ of $S_2^D(U, \psi_m)$ given by shifting the variable by an idele at $p$.
\end{proposition}
\begin{proof}
Recall the coset decomposition \eqref{E:Up operator cosets} with coset representatives $v_i = \Matrix p0{ip^m}1$ for $i=0,\dots, p-1$ to define the $U_p$-operator. We have $v_i^{-1} = \Matrix{1/p} 0{-ip^{m-1}} 1$.
Note that the action of $||_{v_i}$ on $E$ is trivial.
We exhibit a direct computation:
\begin{align}\nonumber
U_p \circ \mathrm{AL}_{\psi_m} \circ U_p &(\varphi)(g) = \sum_{i=0}^{p-1} \sum_{j=0}^{p-1} \varphi\big( g v_i^{-1} \Matrix 01{p^m}0 v_j^{-1} \big)\\ \nonumber
&= \sum_{i,j=0}^{p-1} \varphi \Big( g \Matrix{1/p} 0{-ip^{m-1}} 1 \Matrix 01{p^m}0 \Matrix{1/p} 0{-jp^{m-1}} 1 \Big)\\
\label{E:AL calculation}
&= \sum_{i,j=0}^{p-1} \varphi \Big( g\Matrix 01{p^m}0 \Matrix 1{-i/p}0{1/p} \Matrix{1/p} 0{-jp^{m-1}} 1 \Big).
\end{align}
One can verify the following equality:
\[
\MATRIX 1{-i/p}0{1/p} \MATRIX{1/p} 0{-jp^{m-1}} 1 = \MATRIX {1/p} 0 {-jp^{m-2}}{1/p} \MATRIX{1+ijp^{m-1}}{-i}{ij^2p^{2m-2}}{1-ijp^{m-1}}.
\]
Thus, we have
\[
\varphi \Big( g\Matrix 01{p^m}0 \Matrix 1{-i/p}0{1/p} \Matrix{1/p} 0{-jp^{m-1}} 1 \Big) = \psi_m(1-ijp^{m-1}) \cdot \varphi \Big( g \Matrix 01{p^m}0 \Matrix {1/p} 0 {-jp^{m-2}}{1/p} \Big).
\]
Since $\psi_m$ has conductor exactly $p^m$, as we sum \eqref{E:AL calculation} over $i$ (for a fixed $j$), all terms cancel to zero unless $j=0$, in which case the corresponding terms give exactly $p$ copies of
\[
\varphi\big(g \Matrix 01{p^m}0 \Matrix{1/p}00{1/p} \big).
\]
From this, we deduce immediately that
\[
U_p\circ \mathrm{AL}_{\psi_m} \circ U_p(\varphi) = p \cdot \mathrm{AL}_{\psi_m} \circ S_p(\varphi).\qedhere
\]
\end{proof}
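The $2\times 2$ matrix identity displayed in the proof can be checked symbolically. In the sketch below (a sanity check only) we write $q$ for $p^{m-1}$, so that $p^{m-2} = q/p$ and $p^{2m-2} = q^2$, and treat $i$, $j$ as free symbols.
\begin{verbatim}
import sympy as sp

p, q, i, j = sp.symbols('p q i j')     # q stands for p^(m-1)

lhs = sp.Matrix([[1, -i/p], [0, 1/p]]) * sp.Matrix([[1/p, 0], [-j*q, 1]])
rhs = sp.Matrix([[1/p, 0], [-j*q/p, 1/p]]) * sp.Matrix([[1 + i*j*q, -i],
                                                        [i*j**2*q**2, 1 - i*j*q]])
assert sp.simplify(lhs - rhs) == sp.zeros(2, 2)
\end{verbatim}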
\begin{remark}
A similar Atkin--Lehner map exists for the space of automorphic forms of higher weights $k+1$, except that it does not preserve the natural $\mathcal{O}$-lattices. We do not need this generalization in this paper.
\end{remark}
\begin{notation}
We identify the space of weight two classical automorphic forms $S_2^D(U; \psi_m)$ with $\oplus_{i=0}^{t-1} E$ by evaluating at $\gamma_0, \dots, \gamma_{t-1}$.
We use $\mathfrak{U}_p^\mathrm{cl}(\psi_m)$ and $\mathfrak{T}_l^\mathrm{cl}(\psi_m)$ to denote the matrices for the Hecke actions of $U_p$ and $T_l$
(for $l \notin \mathcal{S}$) under the standard basis.
Let $\alpha_0(\psi_m)\leq \dots \leq \alpha_{t-1}(\psi_m)$ denote the slopes of the Hodge polygon of $\mathfrak{U}_p^\mathrm{cl}(\psi_m)$, in non-decreasing order. For simplicity, we assume that $E$ contains all powers $p^{\alpha_i(\psi_m)}$.
\end{notation}
\begin{corollary}
\label{C:Hodge polygon less eq 1}
The numbers $\alpha_i(\psi_m)$ belong to $ [0,1]$.
There exists a basis $e_0(\psi_m), \dots, e_{t-1}(\psi_m)$ of $S_2^D(U; \psi_m, \mathcal{O}) \cong \oplus_{i=0}^{t-1} \mathcal{O}$ such that the matrix of $U_p$-action is given by a matrix $\mathfrak{U}_p^\mathrm{cl, \mathbf{e}}(\psi_m)$ whose $i$th row is divisible by $p^{\alpha_i(\psi_m)}$.
\end{corollary}
\begin{proof}
The existence of the basis $e_0(\psi_m), \dots, e_{t-1}(\psi_m)$ follows from Subsection~\ref{S:Hodge v.s. Newton}(5). We shall prove the first statement now.
It is clear that $\mathfrak{U}_p^\mathrm{cl}(\psi_m)$ has entries in the integral ring $\mathcal{O}$.
By Proposition~\ref{P:Up pair to p},
we have
\[
\big(\mathrm{AL}_{\psi_m}^{-1}\circ U_p \circ \mathrm{AL}_{\psi_m}\big) \circ U_p = p\cdot S_p.
\]
Writing this in terms of matrices, we have
\begin{equation}
\label{E:UpUp=pA}
M \cdot
\mathfrak{U}_p^\mathrm{cl}(\psi_m) = pA,
\end{equation}
where $A \in \GL_t(\mathcal{O})$ is the matrix for the action of the central character $S_p$, and $M \in \mathrm{M}_t(\mathcal{O})$ corresponds to the action of $U_p$ on $S_2^D(U; \psi_m^{-1}; \psi_m)$. (Note that $\mathrm{AL}_{\psi_m}$ is an isomorphism preserving the natural $\mathcal{O}$-lattices.)
By Subsection~\ref{S:Hodge v.s. Newton}(4), we can write $\mathfrak{U}_p^\mathrm{cl}(\psi_m) = BDC$ with $B, C \in \GL_t(\mathcal{O})$ and $D$ diagonal, so that the valuations of the diagonal entries of $D$ are exactly $\alpha_0(\psi_m), \dots, \alpha_{t-1}(\psi_m)$.
So \eqref{E:UpUp=pA} can be rewritten as
\[
M = AC^{-1}(pD^{-1}) B^{-1},
\]
where $AC^{-1}, B \in \GL_t(\mathcal{O})$.
By Subsection~\ref{S:Hodge v.s. Newton}(4), this means that the slopes of the Hodge polygon of $M$ are given by $1-\alpha_i(\psi_m)$. Since $M$ and $\mathfrak{U}_p^\mathrm{cl}(\psi_m)$ both have integral entries, both $\alpha_i(\psi_m)$ and $1-\alpha_i(\psi_m)$ are non-negative. Thus $\alpha_i(\psi_m) \in [0,1]$.
\end{proof}
\subsection{A variant of the Atkin--Lehner map}
\label{S:variant of duality pairing}
For a purely technical reason, we need a generalization of the Atkin--Lehner map \eqref{E:Atkin-Lehner} of automorphic forms whose weights are infinitesimal deformations of weight $2$.
Let $w$ be an indeterminate.
Consider the ring $\mathcal{O}[w]/(p^2w)$ consisting of ``polynomials" $f(w) = a_0+ a_1w + \cdots$, where the coefficients $a_i$ with $i \geq 1$ are only meaningful modulo $p^2 \mathcal{O}$.
Note that we have a character
\[
\xymatrix@R=0pt{
\psi_{m,w}: \ \big({\begin{smallmatrix} \mathbb{Z}_p^\times& \mathbb{Z}_p\\ p^{m}\mathbb{Z}_p&\mathbb{Z}_p^\times \end{smallmatrix}}\big)
\ar[rr] && \big( \mathcal{O}[w]/(p^2w) \big)^\times
\\
\big({\begin{smallmatrix} a&b\\ c&d \end{smallmatrix}}\big)
\ar@{|->}[rr] && \psi_m(d) \langle d\rangle ^w,
}
\]
where $\langle d \rangle: = d \omega^{-1}(d)$ is defined as before and $\langle d\rangle^w = 1+ (\langle d \rangle -1)w \in \mathcal{O}[w]/(p^2w)$. Note that it is important here to consider torsion coefficients; otherwise, $\psi_{m,w}$ is not a homomorphism of groups.
We point out that the image of $\psi_{m,w}$ in fact lands in the \emph{subring}
\begin{equation}
\label{E:torsion image}
\mathcal{O} + p \mathcal{O} /p^2\mathcal{O} \cdot w
\end{equation}
of $\mathcal{O}[w]/(p^2w)$.
We think of $\psi_{m,w}$ as certain deformation of the character $\psi_m$.
We introduce the following deformed version of classical automorphic forms:
\begin{equation}
\label{E:S2 deformation}
S_2^D(U; \psi_{m,w}): = \Big\{ \varphi: D^\times_f \to \mathcal{O}[w]/(p^2w) \, \Big| \,
\begin{array}{c}
\varphi(\delta gu) = \psi_m(d)\langle d\rangle^w \varphi(g) \textrm{ for any }
\\
\delta \in D^\times, g \in D^\times_f,
\textrm{and } u\in U \textrm{ with }u_p = \big(\begin{smallmatrix}
a&b\\c&d
\end{smallmatrix} \big)
\end{array}
\Big\}.
\end{equation}
This is a finite free $\mathcal{O}[w]/(p^2w)$-module which carries linear actions of $T_l$ (for $l \notin \mathcal{S}$) and $U_p$ in the natural way.
Abstractly, we can identify $S_2^D(U; \psi_{m, w})$ with $S_2^D(U; \psi_m; \mathcal{O}) \otimes_\mathcal{O} \mathcal{O}[w]/(p^2w)$ by identifying the evaluations at $\gamma_i$'s.
Then the elements $e_0(\psi_m), \dots, e_{t-1}(\psi_m)$ in Corollary~\ref{C:Hodge polygon less eq 1} give rise to a basis of $S_2^D(U; \psi_{m, w})$ over $\mathcal{O}[w]/(p^2w)$.
Let $\mathfrak{U}_p^{\mathrm{cl}, \mathbf{e}}(\psi_{m, w}) \in \mathrm{M}_t \big(\mathcal{O}[w]/(p^2w) \big)$ denote the matrix for the $U_p$-action on $S_2^D(U; \psi_{m,w})$ with respect to this basis.
Since $\psi_{m,w}$ takes values in the \emph{subring} \eqref{E:torsion image}, all entries of $\mathfrak{U}_p^{\mathrm{cl}, \mathbf{e}}(\psi_{m, w})$ belong to this subring $\mathcal{O}+p\mathcal{O}/p^2\mathcal{O} \cdot w$.
It follows that the $i$th row of $\mathfrak{U}_p^{\mathrm{cl}, \mathbf{e}}(\psi_{m, w})$ belongs to $p^{\alpha_i(\psi_m)}\mathcal{O}[w] / p^2w \mathcal{O}[w] \subseteq \mathcal{O}[w]/(p^2w)$. This is true because all $\alpha_i(\psi_m) \in [0,1]$; so the coefficient on the variable $w$ belongs to $p\mathcal{O} / p^2\mathcal{O} \subseteq p^{\alpha_i(\psi_m)}\mathcal{O} / p^2\mathcal{O}$.
The following technical proposition will be important for us later.
\begin{proposition}
\label{P:determine of Up deformed}
Let $\overline \mathfrak{U}_p^{\mathrm{cl}, \mathbf{e}}(\psi_{m,w}) \in \mathrm{M}_t(\mathbb{F}[w])$ denote the reduction modulo $\varpi$ of the matrix given by dividing the $i$th row of $\mathfrak{U}_p^{\mathrm{cl}, \mathbf{e}}(\psi_{m,w})$ by $p^{\alpha_i(\psi_m)}$.\footnote{While the division by $p^{\alpha_i(\psi_m)}$ in the ring $\mathcal{O}/p^2\mathcal{O}[w]$ is not well-defined, the reduction modulo $\varpi$ of the quotient is a well-defined element in $\mathbb{F}[w]$.}
Then
\[
\det \big( \overline \mathfrak{U}_p^{\mathrm{cl}, \mathbf{e}}(\psi_{m,w}) \big) \in \mathbb{F}^\times,
\]
i.e., it is invertible as a matrix in $\mathrm{M}_t(\mathbb{F}[w])$.
\end{proposition}
\begin{proof}
We write $S_2^D(U; \psi_{m,w}^{-1}; \psi_{m,w})$ for the space defined similar to \eqref{E:S2 deformation} except that $\psi_m(d) \langle d \rangle ^w$ is replaced by $\psi_m(a) \langle a \rangle ^w$.
Similar to the case of classical automorphic forms, we define an Atkin--Lehner map
\[
\xymatrix@R=0pt{
\mathrm{AL}_{\psi_{m,w}}: S_2^D(U; \psi_{m,w}) \ar[r] & S_2^D(U; \psi^{-1}_{m,w}; \psi_{m,w})
\\
\varphi \ar@{|->}[r] & \varphi\big(\bullet \Matrix 01{p^m}0\big).
}
\]
One can similarly check that $\mathrm{AL}_{\psi_{m,w}}$ is a well-defined isomorphism.
Since the proof of Proposition~\ref{P:Up pair to p} is mostly tautological and $\psi_{m,w}(1+ap^{m-1}) = \psi_m(1+ap^{m-1})$ for $a \in \mathbb{Z}$,
its proof may be transported verbatim to our setup to show
\begin{equation}
\label{E:Up pair to p deformed}
(\mathrm{AL}_{\psi_{m,w}}^{-1} \circ U_p \circ \mathrm{AL}_{\psi_{m,w}}) \circ U_p (\varphi) =p \cdot S_p(\varphi).
\end{equation}
Let $B \in \GL_t(\mathcal{O})$ denote the change of basis matrix from the basis given by evaluation at $\gamma_i$'s to the basis $e_0(\psi_m), \dots, e_{t-1}(\psi_m)$.
Then \eqref{E:Up pair to p deformed} gives
\[
M_w B^{-1} \mathfrak{U}_p^{\mathrm{cl}, \mathbf{e}}(\psi_{m,w})B = p A_w,
\]
where $A_w \in \GL_t(\mathcal{O}[w]/(p^2w)) $ is the matrix for the action of $S_p$ on $S_2^D(U; \psi_{m,w})$, and $M_w \in \mathrm{M}_t(\mathcal{O}[w]/(p^2w))$ is the matrix for the action of $\mathrm{AL}_{\psi_{m,w}}^{-1} \circ U_p \circ \mathrm{AL}_{\psi_{m,w}}$.
It follows that
\begin{equation}
\label{E:Up pair to p deformed matrix version}
BA_w^{-1}M_w B^{-1} \mathfrak{U}_p^{\mathrm{cl}, \mathbf{e}}(\psi_{m,w}) =pI \qquad \textrm{as matrices in }\mathrm{M}_t(\mathcal{O}[w]/(p^2w)).
\end{equation}
We claim that all entries in the $i$th column of the matrix $N = BA_w^{-1}M_w B^{-1}$ belong to $p^{1-\alpha_i(\psi_m)}\mathcal{O}[w] / p^2w\mathcal{O}[w] \subseteq \mathcal{O}[w]/(p^2w)$.
For a matrix $L \in \mathrm{M}_t(\mathcal{O}[w]/(p^2w))$, we write $L|_{w=0}$ for the matrix whose entries are the images of the corresponding entries of $L$ under the map $\mathcal{O}[w]/(p^2w) \to \mathcal{O}[w]/(w) \cong \mathcal{O}$.
We first show that the $i$th column of $N|_{w=0}$
belongs to $p^{1-\alpha_i(\psi_m)}\mathcal{O}$.
For this, we write $\mathfrak{U}_p^{\mathrm{cl}, \mathbf{e}}(\psi_{m,w})|_{w=0} = \mathfrak{U}_p^{\mathrm{cl}, \mathbf{e}}(\psi_m)= \Diag\{p^{\alpha_0(\psi_m)}, \dots, p^{\alpha_{t-1}(\psi_m)}\}\cdot C$ for some matrix $C \in \mathrm{M}_t(\mathcal{O})$.
Since $\alpha_0(\psi_m), \dots, \alpha_{t-1}(\psi_m)$ are the slopes of Hodge polygon of the matrix $\mathfrak{U}_p^{\mathrm{cl}, \mathbf{e}}(\psi_{m})$, the determinant of the matrix $C$ belongs to $\mathcal{O}^\times$. Thus, $C \mathbf{i}n \GL_t(\mathcal{O})$.
Using this and \eqref{E:Up pair to p deformed matrix version}, we deduce
\[
N|_{w=0} \Diag\{p^{\alpha_0(\psi_m)}, \dots, p^{\alpha_{t-1}(\psi_m)}\}\cdot C = pI,
\]
and hence
\[
N|_{w=0} = C^{-1} \cdot \Diag\{p^{1-\alpha_0(\psi_m)}, \dots, p^{1-\alpha_{t-1}(\psi_m)}\}.
\]
So the $i$th column of $N|_{w=0}$ belongs to $p^{1-\alpha_i(\psi_m)} \mathcal{O}$.
Now, we observe that the matrix $N$ is a product of matrices with entries in the subring $\mathcal{O} + p\mathcal{O}/p^2\mathcal{O} \cdot w$, so the entries of the $i$th column of $N$ belong to $p^{1-\alpha_i(\psi_m)}\mathcal{O}[w] / p^2w\mathcal{O}[w]$, as the coefficients on $w$ automatically belong to $p\mathcal{O} \subseteq p^{1-\alpha_i(\psi_m)}\mathcal{O}$.
We use $\overline N \in \mathrm{M}_t(\mathbb{F}[w])$ to denote the reduction modulo $\varpi$ of the matrix given by dividing the $i$th column of $N$ by $p^{1-\alpha_i(\psi_m)}$.
It then follows that
\[
\overline N \cdot
\overline \mathfrak{U}_p^{\mathrm{cl}, \mathbf{e}}(\psi_{m,w}) = I \qquad \textrm{ as matrices in }\mathrm{M}_t(\mathbb{F}[w]).
\]
So $ \overline \mathfrak{U}_p^{\mathrm{cl}, \mathbf{e}}(\psi_{m,w}) \mathbf{i}n \GL_t(\mathbb{F}[w])$ and its determinant belongs to $\mathbb{F}^\times$.
\end{proof}
\begin{notation}
\label{N:character kappa}
Let $\psi_m$ be as in Hypothesis~\ref{H:psi conductor pm}.
We use $\mathcal{W}(x\psi_m; p^{-1})$ to denote the closed disk of radius $p^{-1}$ ($\frac 14$ in case $p=2$)\footnote{We apologize for the confusing notation when $p=2$.} centered at $x\psi_m$ in the weight space.
This disk corresponds to all characters of the form $x\psi_m \langle \cdot \rangle^w$ for $w \in\mathcal{O}_{\mathbb{C}_p}$, in particular, including classical characters $x^k\psi_m\omega^{1-k}$ for $k \geq 1$.
We take $A^\circ$ to be the Tate algebra $\mathcal{O}\langle w\rangle$ and $A $ to be $ E\langle w\rangle$.
We identify $\Max(A) = \Max(E\langle w\rangle)$ with the disk $\mathcal{W}(x \psi_m; p^{-1})$ so that the universal character
$\kappa: \Gamma \to E\langle w\rangle^\times$ is given by
\[
\mathbf{k}appa(a): = a \psi_m(a)\langle a\rangle^{w}.
\]
Here the expression $\langle a\rangle^w$ is understood as $(1+2pb)^w = \sum_{n\geq 0} (2pb)^n \binom wn \mathbf{i}n 1+2pw\mathbb{Z}_p\langle w\rangle$, if $\langle a\rangle = 1+2pb $.
\end{notation}
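The last containment above is a routine estimate which we record for convenience: since $\binom wn \in \frac{1}{n!}\,w\,\mathbb{Z}_p[w]$ for $n \geq 1$ and $v_p(n!) \leq \frac{n-1}{p-1}$, we have
\[
v_p\Big(\frac{(2p)^n}{n!}\Big) \;\geq\; n\, v_p(2p) - \frac{n-1}{p-1} \;\geq\; v_p(2p) \qquad \textrm{for all } n \geq 1,
\]
so each term $(2pb)^n\binom wn$ lies in $2pw\,\mathbb{Z}_p[w]$; moreover these valuations tend to infinity with $n$, so the series converges in $\mathbb{Z}_p\langle w\rangle$.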
\subsection{A variant of the space of overconvergent automorphic forms}
For a technical reason, it is more convenient to consider a variant of the space of overconvergent automorphic forms, with coefficients in $\mathcal{B} : = A\langle p z\rangle = E\langle w, pz \rangle \subset A\widehat \otimes \mathcal{A}$.
Recall that the right action $||_\kappa \gamma$ of $\gamma = \big(
\begin{smallmatrix}
a&b\\c&d
\end{smallmatrix} \big) \in \Sigma_0(p^m)$ on $A \widehat \otimes\mathcal{A}$ is given by
\begin{equation}
\label{E:right action by gamma}
(h||_{\kappa}\gamma)(z): = \frac{\kappa(cz+d)}{cz+d} h\big( \frac{az+b}{cz+d}\big) = \psi_m(d)\langle d\rangle^w \big(1+\frac cdz\big)^{w}h\big( \frac{az+b}{cz+d}\big)
\textrm{ for }h(z) \in \mathcal{A}.
\end{equation}
Since $p \nmid d$ and $ p^m |c$, the expansion of $(1+\frac cdz)^w$ lands in $\mathcal{O}\langle w, p^{m-1} z\rangle \subset \mathcal{B}$.\footnote{This follows from the standard estimate $(1+x)^w = 1+\sum_{n\geq 1} \binom wn x^n \in \mathcal{O}\langle w, p^{-1}x\rangle$ (note that the binomial coefficients are not integral for a free variable $w$). We will use this estimate freely later in the paper.}
So \eqref{E:right action by gamma} can be applied to an element $h(z) \in \mathcal{B}$ and gives rise to a right action of $\Sigma_0(p^m)$ on $\mathcal{B}$.
Therefore, we can define the space of overconvergent automorphic forms with coefficients in $\mathcal{B}$ (instead of $A \widehat \otimes \mathcal{A}$):
\[
S^{D,\dagger}_\mathcal{B}(U; \kappa): = \Big
\{
\varphi: D^\times_f \to\mathcal{B}\;\Big|\;
\varphi(\delta gu) = \varphi(g)||_{\kappa}u_p, \textrm{ for any }\delta \in D^\times, g \in D_f^\times, u \in U
\Big\};
\]
it is a subspace of $S^{D,\dagger}(U; \kappa)$ (with coefficients in $A \widehat \otimes \mathcal{A}$).
Explicitly, we have an isomorphism of Banach spaces $S^{D,\dagger}_\mathcal{B}(U; \kappa) \xrightarrow{\cong} \oplus_{i=0}^{t-1} \mathcal{B}$ given by $\varphi\mapsto (\varphi(\gamma_i))_{i=0, \dots, t-1}$.
\begin{notation}
\label{N:matrix for Tl and Up}
We use $\mathfrak{U}_p^A(\kappa)$ and $\mathfrak{T}_l^A(\kappa)$ (for $l \notin \mathcal{S}$) to denote the infinite matrices in Proposition~\ref{P:explicit Up} for the operators $\mathfrak{U}_p$ and $\mathfrak{T}_l$ acting on $\oplus_{i=0}^{t-1} A\widehat \otimes \mathcal{A}$, with respect to the orthonormal basis $1_0, \dots, 1_{t-1}, z_0, \dots, z_{t-1}, z^2_0, \dots$. Here the subscripts indicate which copy of $\mathcal{A}$ the element comes from.
We use $\mathfrak{U}_p^\mathcal{B}(\kappa)$ and $\mathfrak{T}_l^\mathcal{B}(\kappa)$ (for $l \notin \mathcal{S}$) to denote the infinite matrices for the operators $\mathfrak{U}_p$ and $\mathfrak{T}_l$ acting on $S^{D,\dagger}_\mathcal{B}(U; \kappa) = \oplus_{i=0}^{t-1} \mathcal{B}$, with respect to the orthonormal basis $1_0, \dots, 1_{t-1}, pz_0, \dots, pz_{t-1}, p^2z^2_0, \dots$.
It is clear from the definition that
\[
\Diag(p^{-1}; t) \mathfrak{U}_p^A(\kappa) \Diag(p; t) = \mathfrak{U}_p^\mathcal{B}(\kappa) , \quad \textrm{and} \quad \Diag(p^{-1}; t) \mathfrak{T}_l^A(\kappa) \Diag(p; t) = \mathfrak{T}_l^\mathcal{B}(\kappa) .
\]
In particular, $\Char \big(\mathfrak{U}_p^A(\kappa) , S^{D,\dagger}(U; \kappa)\big) = \Char\big(\mathfrak{U}_p^\mathcal{B}(\kappa) , S^{D,\dagger}_\mathcal{B}(U; \kappa) \big).$ So to understand the $U_p$-slopes on $S^{D, \dagger}(U;\kappa)$, it suffices to look at the $U_p$-slopes on $S^{D,\dagger}_\mathcal{B}(U;\kappa)$.
\end{notation}
The following lemma gives a key congruence relation between the action of a matrix in $\Sigma_0(p^m)$ on the space of overconvergent automorphic forms and on the space of \emph{classical automorphic forms}.
\begin{lemma}
\label{L:congruence}
Let $\big(\begin{smallmatrix}a&b\\c&d\end{smallmatrix} \big)$ be a matrix in $\Sigma_0(p^m)$ with $v(a) = 0$ or $1$.
Then the matrix for $||_\kappa\big(\begin{smallmatrix}a&b\\c&d\end{smallmatrix} \big)$ acting on $\mathcal{B}$ (with respect to the basis $1, pz, p^2z^2, \dots$) belongs to
\[
\begin{pmatrix}
\psi_m(d)\langle d\rangle^w & pA^\circ & p^2 A^\circ & p^3A^\circ &\cdots
\\
p^3A^\circ & \frac ad\psi_m(d)\langle d\rangle^w + p^2aA^\circ & paA^\circ &p^2aA^\circ & \cdots
\\
p^4 A^\circ &p^3aA^\circ & (\frac ad)^2\psi_m(d)\langle d\rangle^w + p^2a^2A^\circ& pa^2A^\circ & \cdots
\\
p^5 A^\circ &p^4 aA^\circ &p^3a^2A^\circ & (\frac ad)^3 \psi_m(d)\langle d\rangle^w+ p^2a^3A^\circ& \cdots
\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}
\]
where the $(i,j)$-entry of the matrix is
\begin{itemize}
\item
$(\frac ad)^i \psi_m(d)\langle d\rangle^w+ p^2a^iA^\circ$ if $i=j>0$,
\item
$p^{i-j+2}a^jA^\circ$ if $i>j$, and
\item
$p^{j-i} a^i A^\circ$ if $i<j$.
\end{itemize}
\end{lemma}
\begin{proof}
Note that $(1+ p^{m-1} z)^w \in 1+ p^3zw\mathcal{O}\langle p z, w\rangle$ since $m \geq 4$.\footnote{It is important that the $zw$ coefficient has valuation \emph{strictly} bigger than $2$. The case $m=3$ fails exactly at this point. See Remark~\ref{R:m=3}.} So Proposition~\ref{P:generating series} implies that (note that $ p^m|c,p\nmid d$)
\[
H_{||_\kappa(\begin{smallmatrix}a&b\\c&d\end{smallmatrix} )}(p^{-1}x,py) = \frac{d\psi_m(d)\langle d \rangle^w(1+p^{-1}\frac cd x)^w}{p^{-1} c x + d - a xy - p by} \in \mathcal{O} \langle w, py, axy, p^{i+2} x^i; i \in \mathbb{N}\rangle.
\]
Translating this congruence into the language of matrices and noting that the dominant coefficients on the terms $x^iy^i$ come from the expansion of $\frac{d\psi_m(d) \langle d \rangle^w}{d - a xy}$ proves the Lemma.
\end{proof}
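As an illustration of the shape of the matrix in Lemma~\ref{L:congruence}, its first column (the coefficients of the image of the basis vector $1$) can be read off directly from \eqref{E:right action by gamma}: for $\gamma = \big(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\big)$,
\[
(1\,||_{\kappa}\gamma)(z) \;=\; \psi_m(d)\langle d\rangle^w\big(1+\tfrac cd z\big)^w \;=\; \psi_m(d)\langle d\rangle^w \;+\; \sum_{n\geq 1} \binom wn \big(\tfrac cd\big)^n p^{-n}\,\psi_m(d)\langle d\rangle^w\cdot (p^nz^n),
\]
and since $p^m \mid c$ with $m \geq 4$, the coefficient of $p^nz^n$ has valuation at least $n(m-1) - v_p(n!) \geq 2n+1 \geq n+2$, in agreement with the entries $\psi_m(d)\langle d\rangle^w, p^3A^\circ, p^4A^\circ, \dots$ of the first column.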
Lemma~\ref{L:congruence} implies that the actions of $U_p$ and $T_l$ for $l \notin \mathcal{S}$ on $S^{D, \dagger}_{\mathcal{B}}(U; \kappa)$ are ``very close'' to their actions on the completed direct sum
\[
\widehat \bigoplus_{n \geq 0} S_2^D(U; \psi_m\omega^{-2n}; \omega^n).
\]
More precisely, we have the following.
\begin{proposition}
\label{P:Tl Up overconvergent equiv classical}
(1)
For $l \notin \mathcal{S}$, we consider the infinite block diagonal matrix
\[
\mathfrak{T}_l^{\mathrm{cl}, \infty} : = \Diag\big\{\mathfrak{T}_l^\mathrm{cl}(\psi_m), \ l \cdot \mathfrak{T}_l^\mathrm{cl}(\psi_m\omega^{-2}), \ l^2 \cdot \mathfrak{T}_l^\mathrm{cl}(\psi_m\omega^{-4}), \ \dots \big\}.
\]
Then the difference $\mathfrak{T}_l^\mathcal{B}(\kappa) - \mathfrak{T}_l^{\mathrm{cl}, \infty}$ lies in the error space
\begin{equation}
\label{E:error matrix}
\mathbf{Err}: =
\begin{pmatrix}
p\mathrm{M}_t(A^\circ) & p\mathrm{M}_t(A^\circ) & p^2 \mathrm{M}_t(A^\circ) & p^3\mathrm{M}_t(A^\circ) &\cdots
\\
p^3\mathrm{M}_t(A^\circ) & p\mathrm{M}_t(A^\circ) & p\mathrm{M}_t(A^\circ) &p^2 \mathrm{M}_t(A^\circ) & \cdots
\\
p^4 \mathrm{M}_t(A^\circ) &p^3\mathrm{M}_t(A^\circ) & p \mathrm{M}_t(A^\circ)& p\mathrm{M}_t(A^\circ) & \cdots\\
p^5\mathrm{M}_t(A^\circ) &p^4\mathrm{M}_t(A^\circ) &p^3\mathrm{M}_t(A^\circ) & p\mathrm{M}_t(A^\circ)& \cdots
\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}
\end{equation}
where the $(i,j)$-block entry of the matrix is
\begin{itemize}
\item
$ p\mathrm{M}_t(A^\circ)$ if $i=j$,
\item
$p^{i-j+2}\mathrm{M}_t(A^\circ)$ if $i > j$, and
\item
$p^{j-i}\mathrm{M}_t(A^\circ)$ if $i < j$.
\end{itemize}
(2) Similarly, we consider the infinite block diagonal matrix
\[
\mathfrak{U}_p^{\mathrm{cl}, \infty}: = \Diag\big( \mathfrak{U}_p^\mathrm{cl}(\psi_m),\ p \cdot \mathfrak{U}_p^\mathrm{cl}(\psi_m \omega^{-2}),\ p^2\cdot \mathfrak{U}_p^\mathrm{cl}(\psi_m \omega^{-4}),\ \dots \big).
\]
Then the difference $\mathfrak{U}_p^\mathcal{B}(\kappa) - \mathfrak{U}_p^{\mathrm{cl}, \infty}$ lies in the $p$-error space
\begin{equation}
\label{E:Errp}
\mathbf{Err}_p: =
\begin{pmatrix}
p\mathrm{M}_t(A^\circ) & p\mathrm{M}_t(A^\circ) & p^2 \mathrm{M}_t(A^\circ) & p^3\mathrm{M}_t(A^\circ) &\cdots
\\
p^3\mathrm{M}_t(A^\circ) & p^2\mathrm{M}_t(A^\circ) & p^2\mathrm{M}_t(A^\circ) &p^3 \mathrm{M}_t(A^\circ) & \cdots
\\
p^4 \mathrm{M}_t(A^\circ) &p^4\mathrm{M}_t(A^\circ) & p^3 \mathrm{M}_t(A^\circ)& p^3\mathrm{M}_t(A^\circ) & \cdots
\\
p^5\mathrm{M}_t(A^\circ) &p^5\mathrm{M}_t(A^\circ) &p^5\mathrm{M}_t(A^\circ) & p^4\mathrm{M}_t(A^\circ)& \cdots
\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix},
\end{equation}
where the $(i,j)$-block entry is
\begin{itemize}
\item
$p^{i+1}\mathrm{M}_t(A^\circ)$ if $i =j$,
\item
$p^{i+2}\mathrm{M}_t(A^\circ)$ if $i>j$, and
\item
$p^j\mathrm{M}_t(A^\circ)$ if $i<j$.
\end{itemize}
Moreover, the $(i,i)$-block entry of $\mathfrak{U}_p^\mathcal{B}(\kappa)$ is congruent to the matrix $p^i\cdot \mathfrak{U}_p^\mathrm{cl}(\psi_{m,w-2i} \omega^{-2i})$ modulo $p^{i+2} \mathrm{M}_t(A^\circ)$.
\end{proposition}
\begin{proof}
Note that the global elements $\delta$ appearing in the matrix of $\mathfrak{U}_p$ or $\mathfrak{T}_l$ for $l \notin \mathcal{S}$ in Proposition~\ref{P:explicit Up} are the same for classical or overconvergent automorphic forms for all characters.
So to prove (1) and (2), it suffices to estimate the difference between the actions of each relevant $ \delta_p$ on $S_\mathcal{B}^{D, \dagger}(U; \kappa)$ and on the completed direct sum $\widehat \bigoplus_{n \geq 0} S_2^D(U; \psi_m\omega^{-2n}; \omega^n)$. (Note that $l^{r} \cdot \mathfrak{T}_l^\mathrm{cl}(\psi_m \omega^{-2r})$ is congruent modulo $p$ to the action of $T_l$ on the space of classical automorphic forms $S_2^D(U; \psi_m\omega^{-2r}; \omega^r)$.)
For $l \notin\mathcal{S}$, Proposition~\ref{P:explicit Up} implies that, for every $\delta_p =\big(\begin{smallmatrix}a&b\\c&d \end{smallmatrix}\big)$ appearing in the expression of $\mathfrak{T}_l^\mathcal{B}(\kappa)$, we have $a, d \in \mathbb{Z}_p^\times$, $b, c \in \mathbb{Z}_p$, and $ad-bc=l$; so we have $a d \equiv l \pmod {p^m}$.
By Lemma~\ref{L:congruence}, the matrix of $||_\kappa \delta_p$ is congruent, modulo the expression \eqref{E:error matrix} with $t=1$, to the infinite diagonal matrix with diagonal entries
\[
\psi_m(d)\langle d\rangle^w,\ \psi_m(d)\tfrac ad\langle d\rangle^w,\
\psi_m(d)(\tfrac ad)^2\langle d\rangle^w,\ \dots
\]
which is the same as
\[
\psi_m(d),\ l\tfrac{\psi_m(d)}{d^2},\ l^2
\tfrac{\psi_m(d)}{d^4},\ \dots
\]
modulo $p$; it is further the same as
\[\psi_m(d),\ l \psi_m(d) \omega^{-2}(d),\ l^2 \psi_m(d) \omega^{-4}(d),\ \dots
\]
modulo $p$.
This is the same as the contribution of $\delta_p$ to the matrix
\[
\mathfrak{T}_l^{\mathrm{cl}, \infty} = \bigoplus_{r \geq 0} l^r \cdot \mathfrak{T}_l^\mathrm{cl}(\psi_m\omega^{-2r}).
\]
This concludes the proof of (1).
(2) can be checked similarly:
for each $\delta_p =\big(\begin{smallmatrix}a&b\\c&d \end{smallmatrix}\big)$ appearing in the expression of $\mathfrak{U}_p^\mathcal{B}(\kappa)$, we have $a \in p\cdot \mathbb{Z}_p^\times$, $ d \in \mathbb{Z}_p^\times$, $b, c \in \mathbb{Z}_p$, and $a d \equiv p \pmod {p^m}$.
Using Lemma~\ref{L:congruence} as well as the congruence $\frac ad \equiv \frac{p}{d^2} = p \omega^{-2}(d)\langle d\rangle^{-2} \pmod {p^3}$, we conclude (2) in the same way as above.
\end{proof}
We now proceed to prove Theorem~\ref{T:theorem B}.
\begin{notation}
Put $q = 1$ if $p=2$ and $q=\frac{p-1}{2}$ if $p >2$.
We write the characteristic series of $U_p$ acting on $S_\mathcal{B}^{D, \dagger}(U; \kappa)$ as
\[
\Char\big(\mathfrak{U}_p^\mathcal{B}(\kappa), S_\mathcal{B}^{D, \dagger}(U; \kappa)\big) = 1+ c_1(w) X + c_2(w) X^2 + \cdots \in 1+ \mathcal{O}\langle w\rangle \llbracket X\rrbracket.
\]
\end{notation}
\begin{theorem}
\label{T:sharp Hodge bound}
Assume $m \geq 4$ as before.
We have the following results regarding the Newton polygon.
\begin{enumerate}
\item
For any $w_0 \in \mathcal{W}(x\psi_m; p^{-1})$, the Newton polygon of the power series $1+ c_1(w_0)X + \cdots $ lies above the polygon starting at $(0,0)$ with slopes given by
\begin{equation}
\label{E:slopes of improved Hodge polygon}
\bigcup_{n=0}^\infty \bigcup_{r=0}^ {q-1}\big\{
\alpha_0(\psi_m\omega^{-2r})+qn+r, \dots, \alpha_{t-1}(\psi_m\omega^{-2r})+qn+r\big\}.
\end{equation}
\item
For each $n \in \mathbb{N}$, let $\lambda_n$ denote the sum of the $n$ smallest numbers in \eqref{E:slopes of improved Hodge polygon}.
Then
\[
c_{kt}(w) \in p^{\lambda_{kt}} \cdot \mathcal{O}\langle w\rangle^\times, \quad \textrm{for every }k \in \mathbb{N}.
\]
\item
For any $w_0 \in \mathcal{W}(x\psi_m; p^{-1})$, the Newton polygon of the power series $1+ c_1(w_0)X + \cdots $ passes through the point $(kt, \lambda_{kt})$ for each $k \in \mathbb{N}$ (which lies on the Hodge polygon in (1)).
In particular, the $n$th slope of this Newton polygon belongs to $\big[ \lfloor \frac nt \rfloor, \lfloor \frac nt \rfloor +1 \big]$.
\end{enumerate}
\end{theorem}
\begin{proof}
(1) Recall from Proposition~\ref{P:Tl Up overconvergent equiv classical}(2) that the matrix for $U_p$ satisfies
\[
\mathfrak{U}_p^\mathcal{B}(\kappa) - \mathfrak{U}_p^{\mathrm{cl}, \infty} \in \mathbf{Err}_p.
\]
We now use the basis \[
e_0(\psi_m), \dots, e_{t-1}(\psi_m), pe_0(\psi_m\omega^{-2})z, \dots, pe_{t-1}(\psi_m \omega^{-2})z, p^2e_0(\psi_m\omega^{-4}) z^2, \dots
\]
As a result, we need to conjugate both $\mathfrak{U}_p^\mathcal{B}(\kappa)$ and $\mathfrak{U}_p^{\mathrm{cl}, \infty}$ by an infinite block diagonal matrix whose block entries are $t\times t$-matrices in $\mathrm{M}_t(\mathcal{O})$. Thus, the action of $U_p$ is given by a new matrix $\mathfrak{U}_p^{\mathcal{B}, \mathbf{e}}$ which is congruent to
\[
\Diag\big( \mathfrak{U}_p^{\mathrm{cl}, \mathbf{e}}(\psi_m),\ p\cdot \mathfrak{U}_p^{\mathrm{cl}, \mathbf{e}}(\psi_m \omega^{-2}),\ p^2\cdot \mathfrak{U}_p^{\mathrm{cl}, \mathbf{e}}(\psi_m \omega^{-4}),\ \dots \big)
\]
modulo $\mathbf{Err}_p$.
In particular, for $i=0, \dots, t-1$, the $((qn+r)t+i)$th row of $\mathfrak{U}_p^{\mathcal{B}, \mathbf{e}}$ is entirely divisible by $p^{\alpha_i(\psi_m \omega^{-2r}) + qn+r}$.
Therefore the Hodge polygon of $\mathfrak{U}_p^{\mathcal{B}, \mathbf{e}}$ lies above the Hodge polygon with slopes given by \eqref{E:slopes of improved Hodge polygon}. This improves the result of Theorem~\ref{T:weak Hodge polygon} (when $m \geq 4$).
(2) By the proof of (1), we know that $c_{kt}(w) \in p^{\lambda_{kt}} \cdot \mathcal{O}\langle w\rangle$ for each $k \in \mathbb{N}$.
It suffices to show that the reduction of $p^{-\lambda_{kt}} c_{kt}(w)$ modulo $\varpi$ lies in $\mathbb{F}^\times \subset \mathbb{F}[w]$.
Note that, if we think of $\mathfrak{U}_p^{\mathcal{B}, \mathbf{e}}$ as an infinite block matrix with $t \times t$-matrices as entries, then its $(i,j)$-block for $i>j$ is entirely divisible by $p^{i+2}$. So it will not contribute to the reduction of $p^{-\lambda_{kt}} c_{kt}(w)$ modulo $\varpi$.
In other words, if $M_n$ denotes the $t\times t$-matrix appearing as the $(n,n)$-block entry of $\mathfrak{U}_p^{\mathcal{B}, \mathbf{e}}$, then
\[
p^{-\lambda_{kt}} c_{kt}(w)\ \equiv \ p^{-\lambda_{kt}} \prod_{n =0}^{k-1} \det (M_n) \pmod \varpi.
\]
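Here we use the equality
\[
\lambda_{kt} \;=\; \sum_{n=0}^{k-1}\Big( nt + \sum_{i=0}^{t-1}\alpha_i(\psi_m\omega^{-2n})\Big) \;=\; \sum_{n=0}^{k-1} v\Big(\det\big(p^n\,\mathfrak{U}_p^{\mathrm{cl}, \mathbf{e}}(\psi_m\omega^{-2n})\big)\Big),
\]
which follows from the definition of $\lambda_{kt}$: every slope $\alpha_i(\psi_m\omega^{-2r})$ lies in $[0,1]$, so the $kt$ smallest numbers in \eqref{E:slopes of improved Hodge polygon} are (up to ties of equal value) exactly those with $qn+r \leq k-1$, and the sum of the Hodge slopes of a square matrix equals the valuation of its determinant.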
Using the congruence relation discussed in (1) and Proposition~\ref{P:Tl Up overconvergent equiv classical}(2), we see that the diagonal $t\times t$-matrices are exactly given by
\[
\mathfrak{U}_p^{\mathrm{cl} ,\mathbf{e}}(\psi_{m,w}) \textrm{ modulo }p^2, \quad p \cdot \big( \mathfrak{U}_p^{\mathrm{cl} ,\mathbf{e}}(\psi_{m,w-2}\omega^{-2}) \textrm{ modulo }p^2\big), \quad p^2 \cdot \big( \mathfrak{U}_p^{\mathrm{cl} ,\mathbf{e}}(\psi_{m,w-4}\omega^{-4}) \textrm{ modulo }p^2\big), \dots
\]
Consequently, the reduction of $p^{-\lambda_{kt}} c_{kt}(w)$ modulo $\varpi$ is the same as the product
\[
\prod_{n=0}^{k-1} \det \big( \overline \mathfrak{U}_p^{\mathrm{cl}, \mathbf{e}}(\psi_{m,w-2n} \omega^{-2n})\big) \textrm{ mod } \varpi.
\]
By Proposition~\ref{P:determine of Up deformed}, each factor belongs to $\mathbb{F}^\times$, and hence so does the product. (2) follows from this.
(3) Since (2) implies that the Newton polygon agrees with the Hodge polygon at points $(kt, \lambda_{kt})$ for all $k \geq 0$, the Newton polygon of the power series $1+ c_1(w_0) X+ \cdots $ is confined between the Hodge polygon of (1) and the polygon with vertices $(kt, \lambda_{kt})$. (3) is immediate from this.
\end{proof}
Theorem~\ref{T:theorem B} is a corollary of Theorem~\ref{T:sharp Hodge bound} using the Jacquet--Langlands correspondence~\eqref{E:Jaquet-Langlands classical}.
\begin{remark}
\label{R:m=3}
Assume $p>2$.
When $m=3$, Theorem~\ref{T:sharp Hodge bound}(1) still holds.
But the argument in (2) fails in that, for example, there might be $p^2w$ terms in the $(1,0)$-block entry of the matrix $\mathfrak{U}_p^{\mathrm{cl}, \mathbf{e}}$. A priori, they may have a nontrivial contribution to the reduction of $p^{-\lambda_{kt}}c_{kt}(w)$ modulo $\varpi$.
So we can only conclude that the reduction is a unit in $\mathbb{F}\llbracket w \rrbracket$ but not necessarily a unit in $\mathbb{F}\langle w \rangle$.
The slope estimate would then only work over some \emph{open} disk of radius $p^{-1}$.
Nonetheless, we still expect our theorem to continue to hold as long as $m \geq 2$.
It would be interesting to know how to extend our argument to the case $m=2,3$.
\end{remark}
\begin{corollary}
\label{C:precise NP computation}
Assume $m \geq 4$ as before. Let $\HP(\psi_m)$ (resp. $\NP(\psi_m)$) denote the Hodge polygon (resp. Newton polygon) of the $U_p$-action on $S_2^D(U; \psi_m)$; we write $\HP(\psi_m)(i)$ (resp. $\NP(\psi_m)(i)$) for the $y$-coordinate of the polygon when the $x$-coordinate is $i$.
Fix $r=0, \dots, q-1$.
Suppose that $(s_0, \NP(\psi_m\omega^{-2r})(s_0))$ is a \emph{vertex} of the Newton polygon $\NP(\psi_m\omega^{-2r})$ and suppose that
\begin{equation}
\label{E:NP<HP+1}
\NP(\psi_m\omega^{-2r})(s) < \HP(\psi_m\omega^{-2r})(s-1) + 1
\footnote{Note that the Newton polygon is evaluated at $s$ and the Hodge polygon is evaluated at $s-1$.} \textrm{ for all } s = 1, \dots, s_0.
\end{equation}
Then for any $s=0, \dots, s_0$, any $n \in \mathbb{Z}_{\geq 0}$, and any $w_0 \in \mathcal{W}(x\psi_m; p^{-1})$, the $( qnt+ rt +s)$th slope of the power series $1+ c_1(w_0)X + \cdots$ is the $s$th $U_p$-slope on $S_2^D(U; \psi_m \omega^{-2r})$ plus $ qn + r $.
\end{corollary}
\begin{proof}
As in the proof of Theorem~\ref{T:sharp Hodge bound}(2),
$c_{qnt+ rt +s}(w_0)$ is divisible by
$
p^{\lambda_{qnt+ rt +s}}.
$
The approximation in the proof of Theorem~\ref{T:sharp Hodge bound}(1) also implies that, modulo $p^{\lambda_{qnt+ rt +s-1}} \cdot p$, this number is equal to
\[
\Big(
\prod_{a = 0}^{nq+r-1} p^a\det \mathfrak{U}_p^{\mathrm{cl}, \mathbf{e}}(\psi_m \omega^{-2a}) \Big) \cdot p^{(nq+r)s} \cdot \ \big( \textrm{coefficient of $X^s$ in } \Char(U_p; S_2^D(\psi_m \omega^{-2r}))\, \big).
\]
So under the hypothesis \eqref{E:NP<HP+1}, this implies that, for each $s$,
\begin{itemize}
\item
if $s$ is a vertex of $\NP(\psi_m\omega^{-2r})$, the valuation of $c_{qnt+rt+s}(w_0)$ is determined by the Newton Polygon of the classical forms, i.e.
\[
v(c_{qnt+ rt +s}(w_0)) = v(c_{qnt+rt}(w_0)) + \NP(\psi_m\omega^{-2r})(s) + (qn+r)s,
\]
\item
if $s$ is not a vertex of $\NP(\psi_m\omega^{-2r})$, the valuation of $c_{qnt+rt+s}(w_0)$ is either determined by the Newton Polygon of the classical forms or greater than or equal to $\lambda_{qnt+rt+s-1}+1$; in either case, we have
\[
v(c_{qnt+ rt +s}(w_0)) \geq v(c_{qnt+rt}(w_0)) + \NP(\psi_m\omega^{-2r})(s) + (qn+r)s.
\]
\end{itemize}
Since $(s_0, \NP(\psi_m\omega^{-2r})(s_0))$ is a vertex, the $(qnt+ rt +s)$th slope, for $s=0, \dots, s_0$, of the power series $ 1+ c_1(w_0)X+ \cdots$ agrees with the $s$th slope of $\Char(U_p; S_2^D(\psi_m \omega^{-2r}))$ plus the normalizing factor $qn+r$.
\end{proof}
\begin{remark}
We emphasize that the sequence given by $s$th $U_p$-slope on $S^D_2(U; \psi_m \omega^{-2r})$ plus $qn+r$, as $n$ increases, is an arithmetic progression with common difference $q$ (\emph{but not $1$}). This is due to the periodic appearance of the powers of the Teichm\"uller character.
This agrees with the computation of Kilford and McMurdy \cite{kilford, kilford-mcmurdy} in some special cases (with $m=2$), where the common difference is $2$ when $p=5$, and is $\frac 32$ (which can be further broken up into two arithmetic progressions with common difference $3$) when $p=7$.
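Concretely, writing (only for this remark) $\beta_{s,r}$ for the $s$th $U_p$-slope on $S_2^D(U; \psi_m\omega^{-2r})$, the slopes produced by Corollary~\ref{C:precise NP computation} for a fixed $s$ (whenever its hypothesis holds for the pair $(r,s)$) form the union
\[
\bigcup_{r=0}^{q-1}\,\big\{\, \beta_{s,r}+r,\ \beta_{s,r}+q+r,\ \beta_{s,r}+2q+r,\ \dots \big\},
\]
that is, $q$ interleaved arithmetic progressions, each with common difference $q$.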
\end{remark}
\begin{example}
\label{Ex:explicit slopes rare examples}
We provide an example to better understand the strength of \eqref{E:NP<HP+1}.
Consider the explicit example in Section~\ref{Section:explicit example} with $D = \mathbb{Q}\langle \mathbf{i}, \mathbf{j}\rangle$ and $p=3$. We first consider the $m=3$ case where we
take $U$ to be
\begin{equation}
\label{E:p=3,m=3}
U= D^\times(\mathbb{Z}_2) \times \prod_{l \neq 2,3}\GL_2(\mathbb{Z}_l) \times \begin{pmatrix}
\mathbb{Z}_3^\times & \mathbb{Z}_3\\
27 \mathbb{Z}_3 & 1+ 3\mathbb{Z}_3
\end{pmatrix}
\end{equation}
and $\psi_3$ to be a character of $\mathbb{Z}_3^\times$ of conductor $27$.
Then $S_2^D(U; \psi_3)$ is $3$-dimensional, and the action of $U_3$ on a basis is given by
\begin{equation}
\label{E:M3}
\begin{pmatrix}
\zeta_9 &\zeta_9^2& \zeta_9^8\\
\zeta_9^4 &\zeta_9^2& \zeta_9^5\\
\zeta_9^7&\zeta_9^2&\zeta_9^2
\end{pmatrix}.
\end{equation}
Its Newton polygon has slopes $\frac 16, \frac 12$, and $\frac 56$ and the Hodge polygon has slopes $0, \frac 12$, and $1$.
For the case $m=4$, we take $U$ to be as in \eqref{E:p=3,m=3} except the number $27$ is replaced by $81$. We take the character $\psi_4$ to have conductor $81$.
Then $S_2^D(U; \psi_4)$ is $9$-dimensional, and the action of $U_3$ on a basis is given by
\begin{equation}
\label{E:M4}
\begin{pmatrix}
\zeta^{19} &0&0&0&\zeta^2&\zeta^{17}&0&0&0\\
0&0&0&\zeta^{13}&0&0&0&\zeta^{20}&\zeta^{23}\\
0&\zeta^{11}&\zeta^2&0&0&0&\zeta^7&0&0\\
\zeta&0&0&0&\zeta^2&\zeta^8&0&0&0\\
0&0&0&\zeta^{22}&0&0&0&\zeta^{20}&\zeta^{14}\\
0&\zeta^{11}&\zeta^{20}&0&0&0&\zeta^{16}&0&0\\
\zeta^{10}&0&0&0&\zeta^2&\zeta^{26}&0&0&0\\
0&0&0&\zeta^4&0&0&0&\zeta^{20}&\zeta^5\\
0&\zeta^{11}&\zeta^{11}&0&0&0&\zeta^{25}&0&0
\end{pmatrix},
\end{equation}
where $\zeta$ is a primitive $27$th root of unity. The Newton polygon of this matrix has slopes $\frac{1}{18}, \frac{1}{6}, \frac{5}{18}, \dots, \frac{17}{18}$, and the Hodge polygon has slopes $0, 0, 0, \frac 12, \frac 12, \frac 12, 1, 1, 1$.
In this case, the number $s_0$ in Corollary~\ref{C:precise NP computation} can be taken to be $6$; so we can determine about ``two thirds" of all slopes using Corollary~\ref{C:precise NP computation}.
\end{example}
We now return to the general case.
\begin{theorem}
\label{T:improved main theorem}
Assume $m \geq 4$ as before.
Let $\mathrm{ord}(\psi_m\omega^{-2n})$ denote the dimension of the ordinary part of $S_2^D(U; \psi_m \omega^{-2n})$, or equivalently, the multiplicity of slope $0$ in $\NP(\psi_m \omega^{-2n})$.
Then the spectral variety $\Spc_D \times_\mathcal{W} \mathcal{W}(x\psi_m; p^{-1})$ is a disjoint union of subvarieties
\[
X_0,\ X_{(0,1]},\ X_{(1,2]}, \ X_{(2,3]},\ \dots
\]
such that each subvariety is finite and flat over $\mathcal{W}(x\psi_m; p^{-1})$, and for any closed point $x \in X_{(n,n+1]}$ (resp. $x \in X_0$), we have $v(a_p(x)) \in (n,n+1]$ (resp. $v(a_p(x))=0$).
Moreover, the degree of $X_{(n,n+1]}$ over $\mathcal{W}(x\psi_m; p^{-1})$ is exactly
\[
t + \mathrm{ord}(\psi_m \omega^{-2n-2}) - \mathrm{ord}(\psi_m \omega^{-2n}).
\]
In particular, this number depends only on $n \textrm{ mod } q$.
\end{theorem}
\begin{proof}
It suffices to show that, for a fixed $n \in \mathbb{Z}_{\geq 0}$ and any $w_0 \in \mathcal{W}(x\psi_m; p^{-1})$, the number of slopes of $1+ c_1(w_0) X + \cdots $ less than or equal to $n$ is \emph{independent of $w_0$} and is equal to $nt + \mathrm{ord}(\psi_m\omega^{-2n})$.
If so, the subspace
\[
X_{[0,n]} = \big\{ (x, w_0) \in \Spc_D \times_\mathcal{W} \mathcal{W}(x\psi_m; p^{-1})\,|\, v(a_p(x)) \leq n \big\}
\]
is finite and flat of degree $nt + \mathrm{ord}(\psi_m\omega^{-2n})$ over $\mathcal{W}(x\psi_m; p^{-1})$, and it would follow that $X_{[0,n]}$ is both open (by definition) and closed (by finiteness) in $\Spc_D \times_\mathcal{W} \mathcal{W}(x\psi_m; p^{-1})$, and hence a union of connected components. The theorem would then follow from this.
To estimate the number of slopes less than or equal to $n$, we use the Hodge polygon lower bound in Theorem~\ref{T:sharp Hodge bound}. It then suffices to prove that
\begin{equation}
\label{E:ordinary condition}
\begin{split}
&v(c_{nt + \mathrm{ord}(\psi_m \omega^{-2n})}(w_0)) = \lambda_{nt} + n \cdot \mathrm{ord}(\psi_m \omega^{-2n}), \textrm{ and }\\
&\qquad v(c_{nt + s}(w_0)) > \lambda_{nt} + ns \textrm{ for }s >\mathrm{ord}(\psi_m \omega^{-2n}).
\end{split}
\end{equation}
We again go back to the slope estimate in the proof of Theorem~\ref{T:sharp Hodge bound} (like in the proof of Corollary~\ref{C:precise NP computation}). It is easy to deduce that $c_{nt+s}(w_0)$ for $s \geq \mathrm{ord}(\psi_m \omega^{-2n})$ is congruent to
\[
\big(
\prod_{i = 0}^{n-1} \det \mathfrak{U}_p^{\mathrm{cl}, \mathbf{e}}(\psi_m \omega^{-2i}) \big) \cdot p^{ns} \cdot \ \big( \textrm{coefficient of $X^s$ in } \Char(U_p; S_2^D(\psi_m \omega^{-2n}))\, \big)
\]
modulo $p^{\lambda_{nt} + ns+1}$.
The valuation inequalities \eqref{E:ordinary condition} follow from this congruence relation.
\end{proof}
\begin{remark}
We certainly expect that $X_{(i,i+1]}$ is the disjoint union $X_{(i,i+1)} \coprod X_{i+1}$ (with the obvious meaning); but we do not know how to prove this because, a priori, the error terms from $w$ might present an obstruction.
\end{remark}
\begin{remark}
\label{R:global decomposition under condition}
Using Corollary~\ref{C:precise NP computation} and the argument above, we can show that, when there is a vertex $(s_0, \NP(\psi_m \omega^{-2r})(s_0))$ of the Newton polygon $\NP(\psi_m \omega^{-2r})$ as in Corollary~\ref{C:precise NP computation}, we can get a further decomposition of $X_{(qn+r, qn+r+1]}$ separating off those points whose $a_p$-slopes are among the first $s_0$ $U_p$-slopes on $S_2^D(U; \psi_m \omega^{-2r})$ plus $qn+r$.
\end{remark}
\section{Techniques for separation by residual pseudo-representations}
\label{Section:separation residue}
We motivate this section by pointing out that the power of Corollary~\ref{C:precise NP computation} is largely determined by how close the Newton polygon is to the Hodge polygon. The application of this result will be increasingly limited when the level subgroup $U$ gets smaller.
A natural idea to loosen the condition \eqref{E:NP<HP+1} is to separate the space of automorphic forms using the \emph{tame} Hecke algebras.
In fact, we will show that one can obtain a natural direct sum decomposition of the space of overconvergent automorphic forms according to the attached residual Galois pseudo-representations. Furthermore, we can reproduce the main theorems of the previous section for each direct summand.
We also emphasize that this decomposition should have its own interest.
We keep the notation as in the previous section. In particular, we assume Hypothesis~\ref{H:psi conductor pm}: $m \geq 4$.
\subsection{Pseudo-representations}
Let $G_{\mathbb{Q},\mathcal{S}}$ denote the Galois group of the maximal extension of $\mathbb{Q}$ unramified outside $\mathcal{S}$ (see Subsection~\ref{S:setup for D} for $\mathcal{S}$).
Let $R$ be a (topological) ring.
A ($2$-dimensional) \emph{pseudo-representation} is a (continuous) map $\rho: G_{\mathbb{Q}, \mathcal{S}} \to R$ such that, for $g_1, g_2, g_3 \in G_{\mathbb{Q},\mathcal{S}}$, we have $\rho(1) = 2$, $\rho(g_1g_2) = \rho(g_2g_1)$, and
\[
\rho(g_1) \rho(g_2) \rho(g_3) + \rho(g_1g_2g_3) + \rho(g_1g_3g_2) = \rho(g_1) \rho(g_2g_3)+ \rho(g_2) \rho(g_1g_3) + \rho(g_3) \rho(g_1g_2).
\]
Let $\rho: G_{\mathbb{Q}, \mathcal{S}} \to \mathcal{O}$ be a pseudo-representation.
\begin{itemize}
\item
If $\chi: G_{\mathbb{Q}, \mathcal{S}} \to \mathcal{O}^\times$ is a continuous character, then $(\rho\otimes \chi)(g): = \rho(g) \chi(g)$ is a pseudo-representation.
\item
We use $\bar \rho: G_{\mathbb{Q},\mathcal{S}} \to \mathbb{F}$ to denote the reduction $\bar \rho(g) : = \rho (g) \textrm{ mod }\varpi$; it is called the \emph{residual pseudo-representation} associated to $\rho$.
\item
The (residual) pseudo-representation is uniquely determined by its values on the geometric Frobenius elements, that is, by $\rho(\mathrm{Frob}_l)$ for $l \notin \mathcal{S}$.
\end{itemize}
It is known that to each automorphic representation $\pi$ appearing in $S_2^D(U; \psi_m)$,
there exists a pseudo-representation $\rho_\pi: G_{\mathbb{Q}, \mathcal{S}} \to \mathcal{O}$ such that $\rho_\pi(\mathrm{Frob}_l) = a_l(\pi)$ for all $l \notin \mathcal{S}$.
We say that a residual pseudo-representation $\bar \rho: G_{\mathbb{Q}, \mathcal{S}} \to \mathbb{F}$ \emph{appears} in a space of automorphic forms $S_2^D(U; \psi_m)$ if there is an automorphic representation $\pi$ appearing in $S_2^D(U; \psi_m)$ such that the reduction of the associated pseudo-representation is $\bar \rho$.
The goal of this section is to decompose the space $S_\mathcal{B}^{D, \dagger}(U; \kappa)$ according to the residual pseudo-representations appearing in the space of \emph{weight two} classical automorphic forms.\footnote{It should not be too surprising that we only need weight two modular forms, as it was already observed by Serre \cite{serre} that all modular residual pseudo-representations appear in weight two.}
The key is to use the tame Hecke action to break up the space $S_\mathcal{B}^{D, \dagger}(U; \kappa)$.
We start with the decomposition over the space of classical automorphic forms.
\begin{notation}
We use $\mathscr{B}(U; \psi_m)$ to denote all residual pseudo-representations $\bar \rho$ that appear in $S^\mathrm{cl}: = \bigoplus_{r=0}^{q-1}
S_2^D(U; \psi_m \omega^{-2r}; \omega^r)$.
For each pair of distinct residual pseudo-representations $\bar \rho, \bar \rho' \in \mathscr{B}(U; \psi_m)$, we pick a prime $l_{\bar \rho, \bar \rho'} \notin \mathcal{S}$ such that $\bar \rho(\mathrm{Frob}_{l_{\bar \rho, \bar \rho'}}) \neq \bar \rho'(\mathrm{Frob}_{l_{\bar \rho, \bar \rho'}})$. We fix a lift $\tilde a_{l_{\bar \rho, \bar \rho'}}(\bar \rho) \in \mathcal{O}$ of $\bar \rho(\mathrm{Frob}_{l_{\bar \rho, \bar \rho'}})$ and a lift $\tilde a_{l_{\bar \rho, \bar \rho'}}(\bar \rho') \in \mathcal{O}$ of $\bar \rho'(\mathrm{Frob}_{l_{\bar \rho, \bar \rho'}})$.
For $\bar \rho \in \mathscr{B} (U; \psi_m)$, consider the following tame Hecke operator
\[
P_{\bar \rho} : = \prod_{\bar \rho' \neq \bar \rho}
\big( T_{l_{\bar \rho, \bar\rho'}} - \tilde a_{l_{\bar \rho, \bar \rho'}}(\bar \rho')\big) \big/ \big( \tilde a_{l_{\bar \rho, \bar \rho'}}(\bar \rho) - \tilde a_{l_{\bar \rho, \bar \rho'}}(\bar \rho')\big).
\]
Note that $P_{\bar \rho}$ defines an endomorphism of the integral model $
S_2^D(U; \psi_m \omega^{-2r}; \omega^r; \mathcal{O})$ for each $r$.
The operator $P_{\bar \rho}$ depends on the choice of the lifts $\tilde a_{l_{\bar \rho, \bar \rho'}}(\bar \rho)$ and $\tilde a_{l_{\bar \rho, \bar \rho'}}(\bar \rho')$'s.
\end{notation}
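For instance, if $\mathscr{B}(U; \psi_m) = \{\bar\rho, \bar\rho'\}$ consists of exactly two residual pseudo-representations, then the product defining $P_{\bar\rho}$ has a single factor,
\[
P_{\bar \rho} \;=\; \frac{T_{l} - \tilde a_{l}(\bar \rho')}{\tilde a_{l}(\bar \rho) - \tilde a_{l}(\bar \rho')} \qquad \textrm{with } l = l_{\bar\rho, \bar\rho'},
\]
and on each $V(\pi)$, where $T_l$ acts by the eigenvalue $a_l(\pi)$, it acts by a scalar congruent to $1$ or $0$ modulo $\varpi$ according to whether $\bar\rho_\pi = \bar\rho$ or $\bar\rho_\pi = \bar\rho'$.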
\begin{lemma}
\label{L:decomposition classical forms}
Fix $r \in\{0, \dots, q-1\}$.
Let $P_{\bar \rho,r} $ denote the action of $P_{\bar \rho}$ on the space of classical automorphic forms $
S_2^D(U; \psi_m \omega^{-2r}; \omega^r; \mathcal{O})$.
Then $P_{\bar \rho, r}^2 \equiv P_{\bar \rho, r} \pmod {\varpi}$.
The limit
\[
\widetilde P_{\bar \rho, r}^\mathrm{cl}: = \lim_{n \to \infty} (P_{\bar \rho, r})^{p^n}
\]
exists and it is the projection to the direct sum $V(\bar\rho)_r$ of subspaces $V(\pi)$ over all automorphic representations $\pi$ appearing in $S_2^D(U; \psi_m \omega^{-2r}; \omega^r)$ for which the associated pseudo-representation reduces to $\bar \rho$.
In particular, we have
\[
\big(\widetilde P_{\bar \rho, r} \big)^2 = \widetilde P_{\bar \rho, r}, \quad
\widetilde P_{\bar \rho, r} \widetilde P_{\bar \rho', r} = 0 \textrm{ if }\bar \rho \neq \bar \rho',\ \textrm{ and }
\sum_{\bar \rho \in \mathscr{B}(U; \psi_m)} \widetilde P_{\bar \rho, r} = \mathrm{id}.
\]
Moreover, the definition of $\widetilde P_{\bar \rho, r}$ is independent of the choice of the lifts $\tilde a_{l_{\bar \rho, \bar \rho'}}(\bar \rho)$ and $\tilde a_{l_{\bar \rho, \bar \rho'}}(\bar \rho')$'s; and it defines a direct sum decomposition of the integral model
\[
S_2^D(U; \psi_m \omega^{-2r}; \omega^r; \mathcal{O}) \cong \bigoplus_{\bar\rho \in \mathscr{B}(U; \psi_m)} V(\bar \rho; \mathcal{O})_r.
\]
\end{lemma}
\begin{proof}
Note that, $P_{\bar \rho, r}$ acts on each $V(\pi)$ by some element in $(\varpi)$ if $\bar \rho_\pi \neq \bar \rho $, and by some $1$-unit if $\bar \rho_\pi = \bar \rho$. The Lemma follows from this immediately.
\end{proof}
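To illustrate the limit in a minimal (hypothetical) case: if $P$ acts on a two-dimensional space by the diagonal matrix $\Diag(u, \varpi a)$ with $u \in 1+\varpi\mathcal{O}$ and $a \in \mathcal{O}$, then
\[
\lim_{n\to\infty} P^{p^n} \;=\; \lim_{n\to\infty} \Diag\big(u^{p^n}, (\varpi a)^{p^n}\big) \;=\; \Diag(1, 0),
\]
the projection onto the summand on which $P$ acts by a one-unit; the limits $\widetilde P_{\bar\rho, r}^{\mathrm{cl}}$ above arise in the same way after decomposing into the subspaces $V(\pi)$.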
The upshot is that one can extend the decomposition above to the case of overconvergent automorphic forms.
\subsection{Some infinite matrices}
For each $r$, we identify $S^D_2(U; \psi_m \omega^{-2r}; \omega^r)$ with $\oplus_{i=0}^{t-1} E$ by evaluating the automorphic forms at $\gamma_0,\gamma_1, \dots, \gamma_{t-1}$.
This way, the operators $P_{\bar \rho,r}$ and $\widetilde P_{\bar \rho,r}$ are represented by two $t\times t$-matrices $\mathfrak{P}_{\bar \rho,r}^\mathrm{cl}, \widetilde \mathfrak{P}_{\bar \rho,r}^\mathrm{cl} \in \mathrm{M}_{t}(\mathcal{O})$.
We use $\mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty}$ (resp. $\widetilde \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty}$) to denote the infinite block diagonal matrix
whose diagonal block-entries are $\mathfrak{P}_{\bar \rho,0}^\mathrm{cl}, \mathfrak{P}_{\bar \rho,1}^\mathrm{cl}, \dots$ (resp. $\widetilde \mathfrak{P}_{\bar \rho,0}^\mathrm{cl}, \widetilde \mathfrak{P}_{\bar \rho,1}^\mathrm{cl}, \dots$).
Note that $P_{\bar \rho}$ only involves Hecke operators, so it also acts on the space of overconvergent automorphic forms $S^{D,\dagger}_{ \mathcal{B}}(U;\kappa)$, where $\kappa$ is the continuous character of $\mathbb{Z}_p^\times$ with values in $(A^\circ)^\times$ as defined in Notation~\ref{N:character kappa}.
Let $\mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa)$ denote the matrix for $P_{\bar \rho}$ under the basis given by $1_0, \dots, 1_{t-1}, pz_0, \dots, pz_{t-1}, p^2 z^2_0, \dots$ as in Notation~\ref{N:matrix for Tl and Up}.
By Proposition~\ref{P:Tl Up overconvergent equiv classical}(1), we have that
\begin{equation}
\label{E:congruence Q with P}
\mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa)
\equiv \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} \textrm{ modulo the error space }\mathbf{Err} \textrm{ in }\eqref{E:error matrix}.
\end{equation}
The next Proposition says that we can improve the infinite matrix $\mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa)$ into a projection, as we did above, so that we can split off the subspace of overconvergent automorphic forms
corresponding to the residual Galois pseudo-representation $\bar \rho$.
\begin{proposition}
\label{P:decomposition}
Keep the notation as above.
\begin{enumerate}
\item We have $(\widetilde \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty})^2 = \widetilde\mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty}$, $\widetilde\mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} \widetilde\mathfrak{P}_{{\bar \rho}'}^{\mathrm{cl}, \infty} = 0$ if ${\bar \rho} \neq {\bar \rho}'$, and $\sum_{{\bar \rho} \in \mathscr{B}(U; \psi_m)} \widetilde\mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} = I_\infty$, where $I_\infty: = \Diag(1)$ denotes the infinite identity matrix.
\item
The limit
\[
\widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa): = \lim_{n \to \infty} (\mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa))^{p^n}
\]
exists. Moreover, we have
\[
\big(\widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa)\big)^2 = \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa), \
\widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) \widetilde \mathfrak{P}_{{\bar \rho}'}^\mathcal{B}(\kappa) = 0 \textrm{ for }{\bar \rho} \neq {\bar \rho}', \textrm{ and }
\sum_{{\bar \rho} \in \mathscr{B}(U; \psi_m)} \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) = I_\infty.
\]
\item
We have a decomposition of Banach $A$-modules respecting the $U_p$-action:
\[
S^{D,\dagger}_{\mathcal{B}}(U; \kappa) = \bigoplus_{{\bar \rho} \in \mathscr{B}(U; \psi_m)}\widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) S^{D,\dagger}_{\mathcal{B}}(U; \kappa).
\]
Consequently, we have a product formula for the characteristic power series
\[
\Char(U_p; S^{D,\dagger}_{\mathcal{B}}(U; \kappa)) = \prod_{{\bar \rho} \in \mathscr{B}(U; \psi_m)} \Char \big(U_p; \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) S^{D,\dagger}_{\mathcal{B}}(U; \kappa) \big).
\]
\item
We have the following congruence relation: for every $\bar \rho\in \mathscr{B}(U; \psi_m)$, the difference of the infinite matrices $ \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) - \widetilde \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty}$ belongs to the space $\mathbf{Err}$ in \eqref{E:error matrix}.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) follows from the corresponding properties of $\widetilde\mathfrak{P}_{\bar \rho,r}^\mathrm{cl}$ in Lemma~\ref{L:decomposition classical forms}.
For (2), we observe that $\mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) \equiv \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} \pmod \varpi$ by Proposition~\ref{P:Tl Up overconvergent equiv classical}(1). So by Lemma~\ref{L:decomposition classical forms},
\begin{equation}
\label{E:P idempotent mod pi}
\big(\mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa)\big)^2
\equiv
\mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) \pmod \varpi.
\end{equation}
An easy induction proves that $(\mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa))^{p^{i+1}} \equiv (\mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa))^{p^i} \pmod{\varpi^i}$. So the limit $\widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa): = \lim_{i \to \infty} (\mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa))^{p^i}$ exists.
The property $\big(\widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa)\big)^2 = \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa)$ also follows from \eqref{E:P idempotent mod pi}.
Now for two pseudo-representations $\bar \rho \neq \bar \rho'$ in $\mathscr{B}(U; \psi_m)$,
we have
\[
\mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) \mathfrak{P}_{{\bar \rho}'}^\mathcal{B}(\kappa) \equiv \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} \mathfrak{P}_{{\bar \rho}'}^{\mathrm{cl}, \infty} \equiv 0 \pmod \varpi.
\]
It then follows that $\widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) \widetilde \mathfrak{P}_{{\bar \rho}'}^\mathcal{B}(\kappa) = 0$ (note that it is important to know that $\mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa)$ commutes with $ \mathfrak{P}_{{\bar \rho}'}^\mathcal{B}(\kappa)$ because both operators can be expressed in terms of Hecke operators).
Similarly, if we start with
\[
\sum_{{\bar \rho} \in \mathscr{B}(U; \psi_m)} \mathfrak{P}_{\bar \rho}^\mathcal{B} (\kappa)\equiv \sum_{{\bar \rho} \in \mathscr{B}(U; \psi_m)} \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} \equiv I_\infty \pmod \varpi,
\]
raising it to the $p^i$th power implies that
\[
I_\infty \equiv
\sum_{{\bar \rho} \in \mathscr{B}(U; \psi_m)} \big(\mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa)\big)^{p^i} \pmod {\varpi^i}.
\]
Here we used the fact that $\mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) \mathfrak{P}_{{\bar \rho}'}^\mathcal{B}(\kappa) \equiv 0 \pmod \varpi$ for ${\bar \rho} \neq {\bar \rho}'$ and once again the crucial commutativity of the $\mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa)$'s.
Taking the limit as $i \to \infty$ shows that $\sum_{{\bar \rho} \in \mathscr{B}(U; \psi_m)} \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) =I_\infty$.
(3) follows from (2) and the fact that $U_p$ commutes with each $\widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa)$, as this operator is a limit of polynomials in tame Hecke operators.
We now check (4).
First recall some basic properties of the error space $\mathbf{Err}$ defined in \eqref{E:error matrix}.
For $M_1, M_2 \in \mathbf{Err}$, it is easy to see that $M_1M_2 \in \mathbf{Err}$ and $ \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} M_1, M_1 \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} \in \mathbf{Err}$.
Thus
\begin{equation}
\label{E:gothQ minus overline gothQ}
(\mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa))^{p^n} - (\mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty})^{p^n} =
\big( \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} + (\mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) - \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty})\big)^{p^n} - ( \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty})^{p^n} \in \mathbf{Err}
\end{equation}
because $\mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) - \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} \in \mathbf{Err}$ by \eqref{E:congruence Q with P}. Taking the limit as $n \to \infty$ proves (4).
\end{proof}
\begin{caution}
It is important to point out that, in \eqref{E:gothQ minus overline gothQ}, since $\mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa)$ and $ \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty}$ do not commute with each other, we \emph{cannot} use the binomial expansion formula to improve the congruence \eqref{E:gothQ minus overline gothQ}. Hence the limit $\widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa)$ is \emph{not} a block diagonal matrix.
So Proposition~\ref{P:decomposition}(4) is the best congruence we could hope for.
\end{caution}
\begin{remark}
\label{R:pseudo-decomposition formal}
We should point out that decomposing a Banach Hecke module according to residual Galois pseudo-representations $\bar \rho$ is a rather formal process and can be done in much greater generality.
However, it is often difficult to control the factor corresponding to each $\bar \rho$.
The advantage of our situation is that we can give a good ``model" of the factor corresponding to each $\bar \rho$.
\end{remark}
\subsection{${\bar \rho}$-part of classical automorphic forms}
Recall from Lemma~\ref{L:decomposition classical forms} that the space of classical automorphic forms $S_2^D(U; \psi_m\omega^{-2r}; \omega^r; \mathcal{O})$ for each $r$ is written as the direct sum $\bigoplus_{\bar \rho \in \mathscr{B}(U; \psi_m)} V(\bar \rho; \mathcal{O})_r$.
We put $V(\bar \rho)_r = V(\bar \rho; \mathcal{O})_r[\frac 1p]$ and $d_{\bar \rho, r}: = \dim V(\bar \rho)_r$.
Note that the operator $U_p$
acts on each $V(\bar \rho, \mathcal{O})_r$.
By Corollary~\ref{C:Hodge polygon less eq 1} (and Subsection~\ref{S:Hodge v.s. Newton}(6)), the Hodge slopes $\alpha_0(\bar\rho)_r\leq \cdots \leq \alpha_{ d_{\bar \rho, r}-1}(\bar \rho)_r$ of the $U_p$-action on each $V(\bar \rho, \mathcal{O})_r$ belong to $[0,1]$. The same holds for the Newton slopes.
We pick a basis $e_0(\bar \rho)_r, \dots, e_{d_{\bar \rho, r}-1}(\bar \rho)_r$ of $V(\bar \rho, \mathcal{O})_r$ such that, the corresponding matrix $\mathfrak{U}_p^{\mathrm{cl}, \bar \rho, r}$ of the $U_p$-action has $i$th row divisible by $p^{\alpha_i(\bar \rho)_r}$.
Providing $S_2^D(U; \psi_m \omega^{-2r}; \omega^r)$ with the natural basis of evaluation at $\gamma_0, \dots, \gamma_{t-1}$, and each $V(\bar \rho)_r$ with the basis above,
we write $\mathfrak{C}_{\bar \rho ,r}$ and $\mathfrak{D}_{\bar \rho, r}$ for the matrices for the natural inclusion and the natural projection $\widetilde \mathfrak{P}_{\bar \rho, r}^{\mathrm{cl}}$:
\[
\xymatrix@C=30pt{
V(\bar \rho)_r \ar[r]^-{ \mathfrak{C}_{\bar \rho,r}} & S_2^D(U; \psi_m \omega^{-2r}; \omega^r)
\ar[r]^-{\mathfrak{D}_{\bar \rho, r}} & V(\bar \rho)_r.
}
\]
So $\mathfrak{C}_{\bar \rho, r}$ is a $t \times d_{\bar \rho, r}$-matrix and $\mathfrak{D}_{\bar \rho, r}$ is a $d_{\bar \rho, r} \times t$-matrix
such that
$ \mathfrak{C}_{\bar \rho, r} \mathfrak{D}_{\bar \rho, r} = \widetilde \mathfrak{P}_{\bar \rho, r}^\mathrm{cl}$ and $\mathfrak{D}_{\bar \rho, r} \mathfrak{C}_{\bar \rho, r} = I_{d_{\bar \rho, r}}$.
We point out that the number $d_{\bar \rho, r}$ and the matrices $\mathfrak{U}_p^{\mathrm{cl}, \bar \rho, r}$, $\mathfrak{C}_{\bar \rho, r}$, $\mathfrak{D}_{\bar \rho, r}$, and $\widetilde\mathfrak{P}^\mathrm{cl}_{\bar \rho, r}$ depend only on $r$ modulo $q$, as opposed to on $r$ itself.
\subsection{A model for the $\bar\rho$-part of overconvergent automorphic forms}
Proposition~\ref{P:decomposition} allows us to reduce the study of the $U_p$-action on $ S^{D,\dagger}_{\mathcal{B}}(U; \kappa)$ to the $U_p$-action on each subspace $\widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) S^{D,\dagger}_{\mathcal{B}}(U; \kappa)$, which we call the \emph{${\bar \rho}$-part of $S^{D,\dagger}_{\mathcal{B}}(U; \kappa)$}.
As pointed out in Remark~\ref{R:pseudo-decomposition formal}, this space is too abstract to study directly. We need to give it a ``model'': $V({\bar \rho})^\infty_A$.
We set
\[
S^{\mathrm{cl}, \infty, \circ}_A: = \widehat \bigoplus_{r \geq 0} S^D_2(U; \psi_m \omega^{-2r}; \omega^r; \mathcal{O}) \otimes_{\mathcal{O}} A^\circ, \textrm{ and } S^{\mathrm{cl}, \infty}_A : = S^{\mathrm{cl}, \infty, \circ}_A \otimes_{\mathcal{O}} E.
\]
We define the $U_p$-action on this space to be $\bigoplus_{r \geq 0} p^r \cdot U_p$.
Let $\widetilde \mathfrak{U}_p^{\mathrm{cl}, \infty}$ denote the matrix for this action with respect to the standard basis given by evaluation at $\gamma_0, \dots, \gamma_{t-1}$ of each of the summands. This matrix is the infinite block diagonal matrix whose diagonal components are $p^r \cdot \mathfrak{U}_p^{\mathrm{cl}}(\psi_m\omega^{-2r})$.
We put
\[
V(\bar\rho)^{\infty, \circ}_A: = \widehat \bigoplus_{r \geq 0} V(\bar\rho; \mathcal{O})_r \otimes_{\mathcal{O}} A^\circ, \textrm{ and }V(\bar\rho)^{\infty}_A = V(\bar\rho)^{\infty, \circ}_A \otimes_{\mathcal{O}} E.
\]
We define the $U_p$-action on this space to be $\bigoplus_{r \geq 0} p^r \cdot U_p$. The corresponding matrix with respect to the chosen basis on each $V(\bar \rho; \mathcal{O})_r$ is an infinite block diagonal matrix $\mathfrak{U}_p^{\bar \rho, \infty}$ whose diagonal components are $p^r \cdot \mathfrak{U}_p^{\mathrm{cl}, \bar \rho, r}$.
We write
\[
\mathfrak{C}_{\bar \rho}^\infty: = \widehat \oplus_{r \geq 0} \mathfrak{C}_{\bar \rho, r}: V(\bar \rho)_A^\infty \to S_A^{\mathrm{cl}, \infty} \quad \textrm{ and }\quad \mathfrak{D}_{\bar \rho}^\infty: = \widehat \oplus_{r \geq 0} \mathfrak{D}_{\bar \rho, r}: S_A^{\mathrm{cl}, \infty} \to V(\bar \rho)_A^\infty
\]
for the natural inclusion and projection, respectively. So we have $\mathfrak{D}_{\bar \rho}^\infty\mathfrak{C}_{\bar \rho}^\infty = I_\infty$, and $\widetilde \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} = \mathfrak{C}_{\bar \rho}^\infty \mathfrak{D}_{\bar \rho}^\infty$ is the infinite block diagonal matrix composed of the $\widetilde \mathfrak{P}_{\bar \rho,r}^\mathrm{cl}$.
At the infinite level, we consider the following identification
\begin{equation}
\label{E:total space}
S^{D,\dagger}_{\mathcal{B}}(U; \kappa)=
\bigoplus_{i=0}^{t-1} E\langle w, pz\rangle =
\bigoplus_{i=0}^{t-1} \widehat \bigoplus_{n\geq 0} E\langle w\rangle (pz)^{n} \cong S_A^{\mathrm{cl}, \infty},
\end{equation}
where the first and the last equality are given by evaluation at the elements $\gamma_0,\gamma_1, \dots, \gamma_{t-1}$. This isomorphism does not respect the actions of the Hecke operators literally but we will show later that it approximately does.
\begin{proposition}
\label{P:identification of V(pi) and PS}
The following two natural morphisms are isomorphisms
\[
\xymatrix@C=40pt{
\varphi_{\bar \rho}:\ \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) S^{D,\dagger}_{\mathcal{B}}(U; \kappa) \subseteq
S^{D,\dagger}_{\mathcal{B}}(U; \kappa)\ar[r]^-{\eqref{E:total space}}_-\cong & S_A^{\mathrm{cl}, \infty}
\ar[r]^-{\mathfrak{D}_{\bar \rho}^{ \infty}} &V({\bar \rho})^\infty_A;
}
\]
\[
\xymatrix@C=20pt{
\psi_{\bar \rho}:\ V({\bar \rho})^\infty_A \ar[r]^-{\mathfrak{C}_{\bar \rho}^\infty} & S_A^{\mathrm{cl}, \infty} \ar[rr]^-{\eqref{E:total space}^{-1}}_-\cong & &
S^{D,\dagger}_{\mathcal{B}}(U; \kappa)
\ar[r]^-{\widetilde \mathfrak{P}_{\bar \rho}^{\mathcal{B}}(\kappa)} &\widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) S^{D,\dagger}_{\mathcal{B}}(U; \kappa) .
}
\]
Moreover, $\psi_{\bar \rho}^{-1} =(1+ \boldsymbol \epsilon) \circ \varphi_{\bar \rho}$ for some endomorphism $\boldsymbol \epsilon: V({\bar \rho})^\infty_A \to V({\bar \rho})^\infty_A$ which, under the basis $\{ e_j (\bar \rho)_r\, |\, j = 0, \dots, d_{\bar \rho, r}-1 \textrm{ and }r \geq 0\}$, is an infinite matrix in
\[
\mathbf{Err}_{\bar \rho}: = \begin{pmatrix}
p\mathrm{M}_{d_{\bar \rho, 0}}(A^\circ) & p\mathrm{M}_{d_{\bar \rho, 0}\times d_{\bar \rho, 1}}(A^\circ) & p^2 \mathrm{M}_{d_{\bar \rho, 0}\times d_{\bar \rho, 2}}(A^\circ) & p^3\mathrm{M}_{d_{\bar \rho, 0}\times d_{\bar \rho, 3}}(A^\circ) &\cdots
\\
p^3\mathrm{M}_{d_{\bar \rho, 1}\times d_{\bar \rho, 0}}(A^\circ) & p\mathrm{M}_{d_{\bar \rho, 1}}(A^\circ) & p\mathrm{M}_{d_{\bar \rho, 1}\times d_{\bar \rho, 2}}(A^\circ) &p^2 \mathrm{M}_{d_{\bar \rho, 1}\times d_{\bar \rho, 3}}(A^\circ) & \cdots
\\
p^4 \mathrm{M}_{d_{\bar \rho, 2}\times d_{\bar \rho, 0}}(A^\circ) &p^3\mathrm{M}_{d_{\bar \rho, 2}\times d_{\bar \rho, 1}}(A^\circ) & p \mathrm{M}_{d_{\bar \rho, 2}\times d_{\bar \rho, 2}}(A^\circ)& p\mathrm{M}_{d_{\bar \rho, 2}\times d_{\bar \rho, 3}}(A^\circ) & \cdots\\
p^5\mathrm{M}_{d_{\bar \rho, 3}\times d_{\bar \rho, 0}}(A^\circ) &p^4\mathrm{M}_{d_{\bar \rho, 3}\times d_{\bar \rho, 1}}(A^\circ) &p^3\mathrm{M}_{d_{\bar \rho, 3}\times d_{\bar \rho, 2}}(A^\circ) & p\mathrm{M}_{d_{\bar \rho, 3}}(A^\circ)& \cdots
\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix},
\]
where the $(i,j)$-block entry is
\begin{itemize}
\item
$ p\mathrm{M}_{d_{\bar \rho,i}}(A^\circ)$ if $i=j$,
\item
$p^{i-j+2}\mathrm{M}_{d_{\bar \rho, i}\times d_{\bar \rho, j}}(A^\circ)$ if $i > j$, and
\item
$p^{j-i}\mathrm{M}_{d_{\bar \rho, i}\times d_{\bar \rho, j}}(A^\circ)$ if $i < j$.
\end{itemize}
\end{proposition}
\begin{proof}
We first take the composition
\begin{align}
\label{E:varphi psi almost id}
\varphi_{\bar \rho} \circ \psi_{\bar \rho} - I_\infty &= \mathfrak{D}_{\bar \rho}^\infty \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) \mathfrak{C}_{\bar \rho}^\infty - I_\infty
\\
\nonumber&
= \mathfrak{D}_{\bar \rho}^\infty \widetilde \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} \mathfrak{C}_{\bar \rho}^\infty - I_\infty + \mathfrak{D}_{\bar \rho}^\infty \big(\widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) - \widetilde \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} \big) \mathfrak{C}_{\bar \rho}^\infty.
\end{align}
Note that $\mathfrak{D}_{\bar \rho}^\infty \widetilde \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} \mathfrak{C}_{\bar \rho}^\infty - I_\infty = \mathfrak{D}_{\bar \rho}^\infty \mathfrak{C}_{\bar \rho}^\infty \mathfrak{D}_{\bar \rho}^\infty \mathfrak{C}_{\bar \rho}^\infty - I_\infty = 0$ and
\[
\mathfrak{D}_{\bar \rho}^\infty \big(\widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) - \widetilde \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} \big) \mathfrak{C}_{\bar \rho}^\infty \in \mathfrak{D}_{\bar \rho}^\infty \cdot \mathbf{Err} \cdot \mathfrak{C}_{\bar \rho}^\infty \subseteq \mathbf{Err}_{\bar \rho},
\]
where the last inclusion uses the fact that $\mathfrak{C}_{\bar \rho}^\infty$ and $\mathfrak{D}_{\bar \rho}^\infty$ are block diagonal matrices (though not with square blocks).
Since all matrices in $I_\infty+ \mathbf{Err}_{\bar \rho}$ are invertible, $\varphi_{\bar \rho} \circ \psi_{\bar \rho}$ is an isomorphism.
Thus it suffices to prove that $\psi_{\bar \rho}$ is surjective.
For this, we only need to show the surjectivity of $\psi_{\bar \rho} \circ \mathfrak{D}_{\bar \rho}^\infty$. Note that
\[
\psi_{\bar \rho} \circ \mathfrak{D}_{\bar \rho}^\infty = \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa)\mathfrak{C}_{\bar \rho}^\infty \mathfrak{D}_{\bar \rho}^\infty = \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) \widetilde \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} = \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) \big( I_\infty + (\widetilde \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} - \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa)) \big).
\]
By Proposition~\ref{P:decomposition}(4), the operator $I_\infty + (\widetilde \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} - \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa)) \in I_\infty+\mathbf{Err}$ is an isomorphism. Then the surjectivity of $\psi_{\bar \rho} \circ \mathfrak{D}_{\bar \rho}^\infty $ follows from the surjectivity of $\widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa)$ onto $\widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) S^{D,\dagger}_{\mathcal{B}}(U; \kappa)$.
This then concludes the proof of both $\varphi_{\bar \rho}$ and $\psi_{\bar \rho}$ being isomorphisms.
Finally, we observe that
\eqref{E:varphi psi almost id} implies that
\[
\psi_{\bar \rho}^{-1} = \Big( I_\infty + \mathfrak{D}_{\bar \rho}^\infty \big(\widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) - \widetilde \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} \big) \mathfrak{C}_{\bar \rho}^\infty\Big)^{-1} \circ \varphi_{\bar \rho} = (I_\infty+ \boldsymbol \epsilon) \circ \varphi_{\bar \rho}
\]
for the infinite matrix $\boldsymbol \epsilon = \mathfrak{D}_{\bar \rho}^\infty \big(\widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) - \widetilde \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} \big) \mathfrak{C}_{\bar \rho}^\infty \in \mathbf{Err}_{\bar \rho}$.
\end{proof}
\begin{notation}
Fix a residual pseudo-representation $\bar \rho \in \mathscr{B}(U; \psi_m)$.
Let $\HP_{\bar \rho, r}$ (resp. $\NP_{\bar \rho, r}$) denote the Hodge polygon (resp. Newton polygon) of the matrix $\mathfrak{U}_p^{\mathrm{cl}, \bar \rho, r}$. Let $\HP_{\bar \rho, r}(i)$ (resp. $\NP_{\bar \rho, r}(i)$) denote the $y$-coordinate of the polygon when the $x$-coordinate is $i$.
Let $\alpha_0(\bar\rho)_r\leq \cdots \leq \alpha_{ d_{\bar \rho, r}-1}(\bar \rho)_r$ denote the slopes of $\HP_{\bar \rho, r}$ in non-decreasing order.
Let $\mathrm{ord}_{\bar \rho, r}$ denote the multiplicity of the slope $0$ in $\NP_{\bar \rho, r}$.
Once again, we point out that the polygons $\HP_{\bar \rho, r}$ and $\NP_{\bar \rho, r}$ and hence the slopes $\alpha_0(\bar \rho)_r, \dots, \alpha_{d_{\bar \rho, r}-1}(\bar \rho)_r$ depend only on $r$ modulo $q$, as opposed to $r$.
Write the characteristic power series of $U_p$ on $ \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) S^{D,\dagger}_{\mathcal{B}}(U; \kappa)$ as
\[
\Char \big(U_p, \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) S^{D,\dagger}_{\mathcal{B}}(U; \kappa)\big) = 1 + c_{\bar \rho, 1}(w) X + c_{\bar \rho, 2}(w) X^2 + \cdots \in 1+ \mathcal{O}\langle w \rangle \llbracket X \rrbracket.
\]
Its zero locus in $\mathcal{W}(x\psi_m; p^{-1}) \times \mathbb{G}_{m, \mathrm{rig}}$ is the spectral curve $\Spc_{\bar \rho}$ (over the weight disk $\mathcal{W}(x\psi_m; p^{-1})$).
We have
\[
\Spc \times_\mathcal{W} \mathcal{W}(x\psi_m; p^{-1}) = \bigcup_{\bar \rho \in \mathscr{B}(U; \psi_m)} \Spc_{\bar \rho}.
\]
\end{notation}
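For readers who wish to experiment with these polygons, the following short Python sketch (our own illustration, with hypothetical valuation data; it is not used anywhere in the arguments) computes a Newton polygon as the lower convex hull of the points $(n, v_n)$, where $v_n$ plays the role of the $p$-adic valuation of the $n$th coefficient, and lists its slopes in non-decreasing order.
\begin{verbatim}
# Illustrative sketch: Newton polygon from a (hypothetical) list of valuations.
def newton_polygon(vals):
    """Vertices of the lower convex hull of the points (n, vals[n]), n = 0, 1, ..."""
    hull = []
    for x, y in enumerate(vals):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the last vertex if it lies on or above the segment (x1,y1)--(x,y)
            if (y2 - y1) * (x - x1) >= (y - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append((x, y))
    return hull

def slopes(hull):
    """Successive slopes of the polygon; the x-gaps give their multiplicities."""
    return [(y2 - y1) / (x2 - x1) for (x1, y1), (x2, y2) in zip(hull, hull[1:])]

vals = [0, 1, 1, 3, 4, 7]                # hypothetical valuations of c_0, ..., c_5
print(newton_polygon(vals))              # [(0, 0), (2, 1), (4, 4), (5, 7)]
print(slopes(newton_polygon(vals)))      # [0.5, 1.5, 3.0]
\end{verbatim}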
\begin{theorem}
\label{T:main theorem each residual}
Assume $m \geq 4$ as before.
Theorem~\ref{T:sharp Hodge bound}, Corollary~\ref{C:precise NP computation}, and Theorem~\ref{T:improved main theorem} hold for each $\bar \rho \mathbf{i}n \mathscr{B}(U;\psi_m)$, in the following sense.
\begin{enumerate}
\item
For any $w_0 \in \mathcal{W}(x\psi_m, p^{-1})$, the Newton polygon of the power series $1+ c_{\bar \rho, 1}(w_0)X + \cdots $ lies above the polygon starting at $(0,0)$ with slopes given by
\begin{equation}
\label{E:slopes of improved Hodge polygon bar rho}
\bigcup_{r=0}^\infty \big\{
\alpha_0(\bar \rho)_r+r,\alpha_1(\bar \rho)_r+r, \dots, \alpha_{d_{\bar\rho,r}-1}(\bar \rho)_r+r\big\}.
\end{equation}
\item
For each $n \in \mathbb{N}$, let $\lambda_{\bar \rho, n}$ denote the sum of the $n$ smallest numbers in \eqref{E:slopes of improved Hodge polygon bar rho}.
Then
\[
c_{\bar \rho, n}(w) \in p^{\lambda_{\bar \rho, n}} \cdot \mathcal{O}\langle w\rangle^\times, \quad \textrm{for all }n \textrm{ of the form }n=n_{\bar \rho, k}=\sum_{r=0}^k d_{\bar\rho, r}.
\]
In particular, for any $w_0 \in \mathcal{W}(x\psi_m; p^{-1})$, the Newton polygon of the power series $1+ c_{\bar \rho, 1}(w_0)X + \cdots $ passes through the point $(n, \lambda_{\bar \rho, n})$ for $n = n_{\bar \rho, k}$.
\item
Fix $r=0, \dots, q-1$.
Suppose that $(s_0, \NP_{\bar \rho, r}(s_0))$ is a vertex of the Newton polygon $\NP_{\bar \rho, r}$ and suppose that
\begin{equation}
\label{E:NP<HP+1 bar rho}
\NP_{\bar \rho, r}(s) < \HP_{\bar \rho, r}(s-1) + 1
\textrm{ for all } s = 1, \dots, s_0.
\end{equation}
Then for any $s=0, \dots, s_0$, any $n \in \mathbb{Z}_{\geq 0}$, and any $w_0 \in \mathcal{W}(x\psi_m; p^{-1})$, the $(n_{\bar \rho, qn+r}+s)$th slope of the power series $1+ c_{\bar \rho, 1}(w_0)X + \cdots$ is the $s$th $U_p$-slope on $V(\bar \rho)_r$ plus $qn+r$.
\item
The spectral variety $\Spc_{\bar \rho}$ is a disjoint union of subvarieties
\[
X_{\bar \rho,0},\ X_{\bar \rho,(0,1]},\ X_{\bar \rho,(1,2]}, \ X_{\bar \rho,(2,3]},\ \dots
\]
such that each subvariety is finite and flat over $\mathcal{W}(x\psi_m; p^{-1})$, and for any closed point $x \in X_{\bar \rho,?}$, we have $v(a_p(x)) \in ?$.
Moreover, the degree of $X_{\bar \rho,(r,r+1]}$ over $\mathcal{W}(x\psi_m; p^{-1})$ is exactly
\[
d_{\bar\rho, r+1} + \mathrm{ord}_{\bar \rho, r+1} - \mathrm{ord}_{\bar \rho, r}.
\]
\item
Keep the notation and hypotheses as in (3) and (4).
For all $n\in \mathbb{Z}_{\geq 0}$ and for every number $\beta>0$ appearing among the first $s_0$ $U_p$-slopes on $V(\bar \rho)_r$, the closed points $x \in X_{ \bar \rho, (qn+r, qn+r+1]}$ for which $v(a_p(x)) = \beta + qn+r$ form a connected component of $X_{\bar \rho, (qn+r, qn+r+1]}$. It is finite and flat over $\mathcal{W}(x\psi_m; p^{-1})$ of degree equal to the multiplicity of $\beta$ in the set of $U_p$-slopes of $V(\bar \rho)_r$.
\end{enumerate}
\end{theorem}
\begin{proof}
By Proposition~\ref{P:identification of V(pi) and PS}, both $\varphi_{\bar \rho}$ and $\psi_{\bar \rho}$ are isomorphisms of Banach spaces.
So we have
\[
\Char\big(U_p; \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) S^{D,\dagger}_{\mathcal{B}}(U; \kappa) \big) = \Char\big( (\psi_{\bar \rho})^{-1} \circ \mathfrak{U}_p^\mathcal{B}\circ \psi_{\bar \rho}; V({\bar \rho})_A^\infty\big).
\]
Recall from Proposition~\ref{P:Tl Up overconvergent equiv classical}(2) that the infinite block diagonal matrix $\mathfrak{U}_p^{\mathrm{cl}, \infty} = \Diag \{\mathfrak{U}_p^\mathrm{cl}(\psi_m),\, p\cdot \mathfrak{U}_p^\mathrm{cl}(\psi_m \omega^{-2}),\, p^2\cdot \mathfrak{U}_p^\mathrm{cl}(\psi_m \omega^{-4}),\, \dots\}$ satisfies
\[
\mathfrak{U}_p^\mathcal{B}(\kappa) - \mathfrak{U}_p^{\mathrm{cl}, \infty} \in \textrm{ the error space }\mathbf{Err}_p \textrm{ in \eqref{E:Errp}}.
\]
\]
We introduce the following error space
\[
\mathbf{Err}_{\bar \rho, p}: =
\begin{pmatrix}
p\mathrm{M}_{d_{\bar \rho, 0}}(A^\circ) & p\mathrm{M}_{d_{\bar \rho, 0}\times d_{\bar \rho, 1}}(A^\circ) & p^2 \mathrm{M}_{d_{\bar \rho, 0}\times d_{\bar \rho, 2}}(A^\circ) & p^3\mathrm{M}_{d_{\bar \rho, 0}\times d_{\bar \rho, 3}}(A^\circ) &\cdots
\\
p^3\mathrm{M}_{d_{\bar \rho, 1}\times d_{\bar \rho, 0}}(A^\circ) & p^2 \mathrm{M}_{d_{\bar \rho, 1}}(A^\circ) & p^2\mathrm{M}_{d_{\bar \rho, 1}\times d_{\bar \rho, 2}}(A^\circ) &p^3 \mathrm{M}_{d_{\bar \rho, 1}\times d_{\bar \rho, 3}}(A^\circ) & \cdots
\\
p^4 \mathrm{M}_{d_{\bar \rho, 2}\times d_{\bar \rho, 0}}(A^\circ) &p^4 \mathrm{M}_{d_{\bar \rho, 2}\times d_{\bar \rho, 1}}(A^\circ) & p^3 \mathrm{M}_{d_{\bar \rho, 2}\times d_{\bar \rho, 2}}(A^\circ)& p^3\mathrm{M}_{d_{\bar \rho, 2}\times d_{\bar \rho, 3}}(A^\circ) & \cdots\\
p^5\mathrm{M}_{d_{\bar \rho, 3}\times d_{\bar \rho, 0}}(A^\circ) &p^5\mathrm{M}_{d_{\bar \rho, 3}\times d_{\bar \rho, 1}}(A^\circ) &p^5\mathrm{M}_{d_{\bar \rho, 3}\times d_{\bar \rho, 2}}(A^\circ) & p^4\mathrm{M}_{d_{\bar \rho, 3}}(A^\circ)& \cdots
\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix},
\]
where the $(i,j)$-block entry is
\begin{itemize}
\item
$ p^{i+1}\mathrm{M}_{d_{\bar \rho,i}}(A^\circ)$ if $i=j$,
\item
$p^{i+2}\mathrm{M}_{d_{\bar \rho, i}\times d_{\bar \rho, j}}(A^\circ)$ if $i > j$, and
\item
$p^{j}\mathrm{M}_{d_{\bar \rho, i}\times d_{\bar \rho, j}}(A^\circ)$ if $i < j$.
\end{itemize}
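As a quick sanity check of this block pattern (again an illustration of ours, not used in the proof), the following Python snippet encodes the three cases just listed and reproduces the exponents displayed in the matrix above.
\begin{verbatim}
# Illustrative sketch: exponent e(i, j) such that the (i, j)-block of Err_{rho, p}
# is p^{e(i,j)} M_{d_i x d_j}(A^o), following the three cases above (0-indexed).
def block_valuation(i, j):
    if i == j:
        return i + 1      # diagonal blocks: p^{i+1}
    if i > j:
        return i + 2      # blocks below the diagonal: p^{i+2}
    return j              # blocks above the diagonal: p^{j}

for i in range(4):        # top-left 4 x 4 corner of the exponent pattern
    print([block_valuation(i, j) for j in range(4)])
# expected rows: [1, 1, 2, 3], [3, 2, 2, 3], [4, 4, 3, 3], [5, 5, 5, 4]
\end{verbatim}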
Rewrite the composite $(\psi_{\bar \rho})^{-1} \circ \mathfrak{U}_p^\mathcal{B} \circ \psi_{\bar{\rho}}$ as
\begin{align}
\nonumber
(\psi_{\bar \rho})^{-1}& \circ \mathfrak{U}_p^\mathcal{B} \circ \psi_{\bar \rho}
=
(\mathbf{id} + \boldsymbol \epsilon)
\mathfrak{D}_{\bar \rho}^\infty \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) \mathfrak{U}_p^\mathcal{B} \mathfrak{C}_{\bar \rho}^\infty
\\
\nonumber
&= \mathfrak{D}_{\bar \rho}^\infty \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa)
\mathfrak{U}_p^\mathcal{B}\mathfrak{C}_{\bar \rho}^\infty
+
\boldsymbol \epsilon
\mathfrak{D}_{\bar \rho}^\infty \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) \mathfrak{U}_p^\mathcal{B} \mathfrak{C}_{\bar \rho}^\infty \\
\label{E:last estimate}
&=\mathfrak{D}_{\bar \rho}^\infty \widetilde \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} \mathfrak{U}_p^{\mathrm{cl}, \infty} \mathfrak{C}_{\bar \rho}^\infty
+
\mathfrak{D}_{\bar \rho}^\infty ( \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) \mathfrak{U}_p^\mathcal{B} - \widetilde \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} \mathfrak{U}_p^{\mathrm{cl}, \infty}) \mathfrak{C}_{\bar \rho}^\infty
+\boldsymbol \epsilon \mathfrak{D}_{\bar \rho}^\infty\widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B} (\kappa) \mathfrak{U}_p^\mathcal{B} \mathfrak{C}_{\bar \rho}^\infty .
\end{align}
Here the second equality in the first line follows from the commutativity of $\widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\mathbf{k}appa)$ and $ \mathfrak{U}_p^\mathcal{B}$ as they are (limits of) Hecke operators.
It suffices to understand each of the terms.
\begin{enumerate}
\item[(i)]
The first term $\mathfrak{D}_{\bar \rho}^\infty\widetilde \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} \mathfrak{U}_p^{\mathrm{cl}, \infty} \mathfrak{C}_{\bar \rho}^\infty$ of \eqref{E:last estimate} exactly gives the action of $U_p$ on the space of classical automorphic forms.
\item[(ii)]
By Proposition~\ref{P:Tl Up overconvergent equiv classical}, we easily deduce that
\begin{align*}
\widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B} (\kappa)
\mathfrak{U}_p^\mathcal{B}& - \widetilde \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} \mathfrak{U}_p^{\mathrm{cl}, \infty} = \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) (\mathfrak{U}_p^\mathcal{B} - \mathfrak{U}_p^{\mathrm{cl}, \infty})
+ ( \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) - \widetilde \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty})\mathfrak{U}_p^{\mathrm{cl}, \infty}
\\
& \in (\widetilde \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} + \mathbf{Err})\cdot \mathbf{Err}_p + \mathbf{Err} \cdot \mathfrak{U}_p^{\mathrm{cl}, \infty} \ \subseteq \mathbf{Err}_p;
\end{align*}
so the middle term of \eqref{E:last estimate}
\[
\mathfrak{D}_{\bar \rho}^\infty ( \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) \mathfrak{U}_p^\mathcal{B} - \widetilde \mathfrak{P}_{\bar \rho}^{\mathrm{cl}, \infty} \mathfrak{U}_p^{\mathrm{cl}, \infty}) \mathfrak{C}_{\bar \rho}^\infty \in \mathfrak{D}_{\bar \rho}^\infty\cdot \mathbf{Err}_p \cdot \mathfrak{C}_{\bar \rho}^\infty \subseteq \mathbf{Err}_{{\bar \rho}, p}.
\]
\item[(iii)]
We write
\[\boldsymbol \epsilon
\mathfrak{D}_{\bar \rho}^\infty \widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) \mathfrak{U}_p^\mathcal{B} \mathfrak{C}_{\bar \rho}^\infty =\boldsymbol \epsilon \mathfrak{D}_{\bar \rho}^\infty (\widetilde \mathfrak{P}_{\bar \rho}^\mathcal{B}(\kappa) \mathfrak{U}_p^\mathcal{B} -\mathfrak{P}_{\bar \rho}^{\mathrm{cl},\infty} \mathfrak{U}_p^{\mathrm{cl}, \infty}) \mathfrak{C}_{\bar \rho}^\infty +\boldsymbol \epsilon \mathfrak{D}_{\bar \rho}^\infty\mathfrak{P}_{\bar \rho}^{\mathrm{cl},\infty} \mathfrak{U}_p^{\mathrm{cl}, \infty} \mathfrak{C}_{\bar \rho}^\infty.
\]
The second term belongs to $\mathbf{Err}_{{\bar \rho}, p}$ because
$\boldsymbol \epsilon \in \mathbf{Err}_{\bar \rho}$ by Proposition~\ref{P:identification of V(pi) and PS}.
For the first term, we use the argument in (ii) to see that it belongs to
\[ \boldsymbol \epsilon \cdot
\mathfrak{D}_{\bar \rho}^\infty\cdot \mathbf{Err}_p \cdot\mathfrak{C}_{\bar \rho}^\infty\ \subseteq \mathbf{Err}_{{\bar \rho}, p}.
\]
\end{enumerate}
Combining the computation above, we see that $(\psi_{\bar \rho})^{-1} \circ \mathfrak{U}_p^\mathcal{B}\circ \psi_{\bar \rho}$ belongs to
\begin{equation}
\label{E:Up on bar rho}
\begin{pmatrix}
\mathfrak{U}_p^{\mathrm{cl}, \bar\rho, 0} &&&\\
& p\cdot \mathfrak{U}_p^{\mathrm{cl}, \bar\rho, 1} && \\
&&p^2\cdot \mathfrak{U}_p^{\mathrm{cl}, \bar\rho, 2} & \\
& & &\ddots
\end{pmatrix} + \mathbf{Err}_{\bar \rho, p}.
\end{equation}
At this point, (1)--(4) of the Theorem can be proved in the same way as they were proved in Theorem~\ref{T:sharp Hodge bound}, Corollary~\ref{C:precise NP computation}, and Theorem~\ref{T:improved main theorem}, with the modifications indicated below.
(1) already follows from the estimate \eqref{E:Up on bar rho} because each $\mathfrak{U}_p^{\mathrm{cl}, \bar \rho, r}$ is already written in the form adapted to its Hodge polygon.
For (2), we need to consider the action of $P_{\bar \rho}$ on the space $S_2^D(U; \psi_{m,w}\omega^{-2r})$ (see \eqref{E:S2 deformation} for the definition).
Let $\widetilde P_{\bar \rho}$ denote the limit $\lim_{n \to \infty} (P_{\bar \rho})^{p^n}$.
By the same argument as in Proposition~\ref{P:decomposition}, we have $\widetilde P_{\bar\rho}^2 = \widetilde P_{\bar \rho}$, $ \widetilde P_{\bar \rho} \widetilde P_{\bar \rho'} = 0$ for $\bar \rho \neq \bar \rho'$, and $\sum_{ \bar\rho \in \mathscr{B}(U; \psi_m)} \widetilde P_{\bar \rho} = \mathrm{id}$.
We use $V(\bar \rho, w)_r$ to denote the image $ \widetilde P_{\bar \rho} S_2^D(U; \psi_{m,w}\omega^{-2r})$, which is isomorphic to $V(\bar\rho)_r \otimes_\mathcal{O} \mathcal{O}/p^2\mathcal{O}[w]$, as an $\mathcal{O}$-module.
Let $\mathfrak{U}_p^{\mathrm{cl}, \bar \rho, w, r}$ denote the matrix for the $U_p$-action on $V(\bar \rho, w)_r$ with respect to the basis $e_0(\bar\rho)_r, \dots, e_{d_{\bar\rho, r}-1}(\bar \rho)_r$; its $i$th row is divisible by $p^{\alpha_i(\bar \rho)_r}$, and all coefficients of $w$ belong to $p\mathcal{O}/ p^2\mathcal{O}$.
We use
$\overline \mathfrak{U}_p^{\mathrm{cl}, \bar\rho,w, r}$ to denote the matrix given by dividing the $i$th row of $\mathfrak{U}_p^{\mathrm{cl}, \bar \rho,w, r}$ by $p^{\alpha_i(\bar \rho)_r}$.
As argued in the proof of Theorem~\ref{T:sharp Hodge bound}(2), it suffices to prove that $\det \overline \mathfrak{U}_p^{\mathrm{cl}, \bar\rho,w, r}$ belongs to $\mathbb{F}^\times \subseteq \mathbb{F}[w]$ for each $r$.
However, this follows from the fact that the product
\[
\prod_{\bar\rho \in \mathscr{B}(U; \psi_m)}
\det \overline \mathfrak{U}_p^{\mathrm{cl}, \bar\rho,w, r} = \det \overline \mathfrak{U}_p^{\mathrm{cl} ,\mathbf{e}}(\psi_{m,w}\omega^{-2r}) \in \mathbb{F}^\times.
\]
(3) and (4) follow from the arguments in Corollary~\ref{C:precise NP computation} and Theorem~\ref{T:improved main theorem} with no essential changes.
(5) follows from (3) immediately.
\end{proof}
\end{document}
\begin{document}
\title{\bf Non-self adjoint Sturm-Liouville problem with spectral and physical parameters in boundary conditions}
\author{Rodrigo Meneses Pacheco\\
\\
Escuela de Ingenier\'ia Civil, \\
Facultad de Ingenier\'ia, Universidad de Valpara\'iso\\
Avda.
Errazuriz 1834, Valpara\'iso, Chile\\
\href{mailto:[email protected]}{[email protected]}\\
\\
Oscar Orellana\\
\\
Departamento de Matem\'aticas\\
Universidad T\'ecnica Federico Santa Mar\'ia\\
Avenida Espa\~na 1680, Valpara\'iso, Chile\\
\href{mailto:[email protected]}{[email protected]}
}
\maketitle
\begin{abstract}
We present a complete description on the spectrum and eigenfunctions of the following two point boundary value problem
\begin{equation}
\label{pabstract}
\left\lbrace\begin{array}{rcl}
\displaystyle{(p(x)f')'-(q(x)- \lambda r(x))f}&=&0\qquad 0<x<L \\
f'(0)&=&(\alpha_{1} \lambda +\alpha_{2})f(0)\\
f'(L)&=&(\beta_{1}\lambda -\beta_{2})f(L)
\end{array}\right.
\end{equation}
where $\lambda$ is the spectral parameter and $\alpha_{i},\ \beta_{i}$ are physical parameters.
Our study focuses mainly on the case $\alpha_{1}>0$ and $\beta_{1}<0$, where neither self-adjoint operator theorems on Hilbert spaces nor Sturm's comparison results can be used directly.
We describe the spectrum and the oscillatory results of the eigenfunctions from a geometrical approach, using a function related to the Pr\"ufer angle.
The proofs of the asymptotic results of the eigenvalues and separation theorem of the eigenfunctions are developed through classical second order differential equation tools.
Finally, the results on the spectrum of \eqref{pabstract} are used for the study of the linear instability of a simple model for the fingering phenomenon on the flooding oil recovery process.
\end{abstract}
\emph{Keywords:} Non-self adjoint operator, Theorems of Sturm, Pr\"ufer transformation, Hele-Shaw cell.
\section{Introduction}
It is well known that results on boundary value problems have a direct and relevant impact on many models in applied mathematics, as can be seen from the vast literature on the subject \cite{chandrasekhar1961hydrodynamic,amrein2005sturm,drazinhydrodynamic}.
On many occasions, the boundary conditions are described as functions of the spectral parameter and other physical parameters.
Problems of this type are found in mechanic models \cite{chandrasekhar1961hydrodynamic,rayleigh1896theory,timoshenko1974vibration,budak1988collection}.
Elastic models of rods and strings can usually be described through a self-adjoint Sturm-Liouville problem (SLP) \cite{langer1947fourier}, while the spectral problems arising in hydrodynamics are usually non-self adjoint \cite{chandrasekhar1954characteristic,mennicken2003non}.
An extensive classical bibliography for various physical applications can be found in \cite{amara1999sturm,fulton1977two,walter1973regular}.
In this article we study the following SLP with spectral parameter in the boundary conditions
\begin{equation}
\label{problema}
\left\lbrace\begin{array}{rcl}
\displaystyle{(p(x)f')'-(q(x)- \lambda r(x))f}&=&0\qquad 0<x<L \\
f'(0)&=&(\alpha_{1} \lambda +\alpha_{2})f(0)\\
f'(L)&=&(\beta_{1}\lambda -\beta_{2})f(L)
\end{array}\right.
\end{equation}
where $\lambda$ is a spectral parameter, and $\alpha_{i}$ and $\beta_{i}$ can be considered as physical parameters.
Our interest in the study of problem \eqref{problema} arises from the linear stability analysis of a three-layer Hele-Shaw cell model for the study of a secondary oil recovery process.
This problem is presented in \cite{gorell1983theory} and has been studied in several articles on the subject, see \cite{carasso1998optimal} and references therein.
The model problem is presented in section \ref{applications}, where problem \eqref{problema} corresponds to a synthesis of the physical laws considered.
In the stability model, $\alpha_{i}$ and $\beta_{k}$ are defined as functions of the wave numbers of the perturbative wave, therefore the signs of those parameters are related to different wave number ranges.
Regarding SLPs with the spectral parameter in the boundary conditions, the way in which the spectral parameter relates to the physical parameters introduces different types of complications in the tools used for the analysis.
Concerning the tools for the analysis of \eqref{problema}, the signs of the parameters $\alpha_{1}$ and $\beta_{1}$ play an essential role.
For the case where $\alpha_{1}<0$ and $\beta_{1}>0$, the description of the spectrum, asymptotic results, oscillatory results on the eigenfunctions and eigenfunction expansion results have been widely developed in several articles (see \cite{churchill1942expansions,cohen1966integral}) and can be learned from classical texts, like Ince \cite{ince1962ordinary} and Reid \cite{reid1971ordinary}, among others.
In this case, other than Sturm's comparison results, it's also possible to use results on selfadjoint operators in Hilbert spaces \cite{fulton1977two}.
The identification and characterization of the spectrum of \eqref{problema} for the case where $\alpha_{1}\beta_{1}\geq0$ is not as straightforward, since the geometrical approach through the transformation of Pr\"ufer presents new difficulties.
In \cite[Theorem 2.1]{binding2004transformation} the authors describe in detail the behavior of the spectrum of \eqref{problema} with $\alpha_{1}=0$ and $\beta_{1}<0$ via multiple Crum-Darboux transformations, obtaining an associated 'almost' isospectral regular SLP.
For the case where $\alpha_{1}=\alpha_{2}=\beta_{2}=0$, with $\beta_{1}<0$, the authors of \cite{amara1999sturm} address the study of the spectrum (Theorem 2) and asymptotic results for the eigenvalues and eigenfunctions (Theorem 1) using self-adjoint operators in the Pontryagin space $\Pi_{1}$.
We note that in the general case, where $\alpha_{1}\beta_{1}\geq0$, it's possible to apply Sturm's oscillatory results for one of the boundary conditions, which eases the geometric analyses commonly used for the study of the spectrum.
General information on SLP with spectral parameters can be found in \cite{behrndt2006sturm, binding2004transformation, binding2003hierarchy, binding2006sturm, binding1994sturm, churchill1942expansions, ince1962ordinary, reid1971ordinary}.
The main objective of this work is to obtain results for \eqref{problema} when $\alpha_{1}>0$ and $\beta_{1}<0$, namely on the descriptions and behaviors of the eigenvalues and eigenfunctions, similar to existing results on regular SLP.
In our analysis, we mainly use classical tools on second order ordinary differential equations (ODE), adapting several characteristic results on regular SLP, for example theorems of Sturm (see Hartman \cite{hartman1964ordinary} 11.3).
For the description of the spectrum and the oscillatory results of the eigenfunctions, we used a geometrical approach through a function related to the Pr\"ufer angle.
For the results of separation, we make an analysis of the monotone behavior of the zeros of the solution functions, obtaining this result through the implicit function theorem.
Concerning the functions $p(x),\ q(x),\ r(x)$, we assume them to be positive and sufficiently regular in $[0,L]$, so that classical results on the regularity of solutions of second order ODEs with respect to the model parameters can be used (see Peano's theorem in Hartman \cite{hartman1964ordinary}).
When studying the model in Section \ref{applications}, we also assume that the function $p(x)$ is strictly increasing and that constants $\alpha_{2}$ and $\beta_{2}$ are positive.
Our main result reads as follows:
\begin{te}
\label{principal}
Under the conditions $\alpha_{1}>0$ and $\beta_{1}<0$, the spectrum of \eqref{problema} consists of an unbounded sequence of real eigenvalues
\begin{equation}
\label{orden}
\lambda_{-1}<\lambda_{-0}<0<\lambda_{0}<\lambda_{1}<\lambda_{2}<\dots\nearrow \infty
\end{equation}
and the corresponding eigenfunction $f(x;\lambda_{l})$ has exactly $\vert l\vert$ zeros in $]0,L[$.
\end{te}
We prove this using classical geometric tools related with the Pr\"ufer angle.
The technique consists in considering $f(x;\lambda)$ a solution of the ODE in \eqref{problema}, satisfying the boundary condition at $x=0$. The eigenvalues of \eqref{problema} are studied from the equation
\begin{equation}
\label{laecuacion}
\frac{f'(L;\lambda)}{\lambda f(L;\lambda)}=\beta_{1}-\frac{\beta_{2}}{\lambda},
\end{equation}
for those $\lambda$ where the expression is well defined.
In the course of the proof, the monotonicity of the function in \eqref{laecuacion} is a fundamental tool.
For this, it is necessary to characterize the branches where the function $h_{1}(\lambda)=\frac{f'(L;\lambda)}{\lambda f(L;\lambda)}$ is well defined.
Knowing the behavior of function $h_{1}(\lambda)$, the solutions of equation \eqref{laecuacion} are studied as the graph intersection of functions $h_{1}(\lambda)$ and $h_{2}(\lambda)=\beta_{1}-\frac{\beta_{2}}{\lambda}$.
When considering the case $\alpha_{1}>0$ and $\beta_{1}<0$ we cannot apply Sturm's first comparison theorem directly, and therefore we cannot use Sturm's separation theorem as a corollary.
The monotonicity of $h_{1}(\lambda)$ on each branch where it is defined is obtained from the analysis of an ODE satisfied by $h_{1}(\lambda)$.
From the consideration $\alpha_{1}>0$ and $\beta_{1}<0$, we can't use Sturm's separation theorem.
Next, we present our second main result, which addresses this problem.
\begin{te}[Separation Theorem]
\label{principal2}
For $0<\underline{\lambda}<\overline{\lambda}$, consider $\underline{f}(x)=f(x;\underline{\lambda})$
and $\overline{f}(x)=f(x;\overline{\lambda})$.
Let $0<x_{1}<x_{2}<L$ be two consecutive zeros of $\underline{f}$.
Then $\overline{f}$ has at least one zero in $[x_{1},x_{2}]$.
\end{te}
The proof uses the behavior of the zeros of the function $f(x;\lambda)$.
For this, we use the implicit function theorem and, therefore, the regularity of the coefficients of the ODE in \eqref{problema}, which yields regularity with respect to the parameter $\lambda$.
Using Theorem \ref{principal2} and the oscillatory results of Sturm, we get the following results on the eigenfunctions directly.
\begin{coro}
\label{separacion}
Let $f_{n}(x)=f(x;\lambda_{n})$ and $f_{n+1}(x)=f(x;\lambda_{n+1})$ with $0<\lambda_{n}<\lambda_{n+1}$ eigenvalues of \eqref{problema}.
Then between two consecutive zeros of $f_{n+1}$ there is exactly one zero of $f_{n}$.
\end{coro}
We note that for the case where $\lambda<0$ we have a non-oscillatory ODE, for which other tools, like the maximum principle, are used in its analysis.
All the results presented are studied from a geometrical approach, specifically, studying the behavior of the function $h_{1}(\lambda)$.
Finally, to describe the asymptotic behavior of the eigenvalues, we present the following result:
\begin{te}
\label{asintotico}
For $\lambda_{n}$ in \eqref{orden} we have $\sqrt{\lambda_{n}}=n\pi/L +O(n^{-1})$ as $n\to\infty$.
\end{te}
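As a numerical sanity check of this asymptotic formula (an illustration of ours, not part of the proof), one can take the constant-coefficient toy case $p=r=1$, $q=0$, $L=1$, where $f(x;\lambda)$ is explicit, and compare $\sqrt{\lambda_{n}}$ with $n\pi/L$. The parameter values in the Python sketch below are hypothetical, and this toy case ignores the positivity of $q$ and the monotonicity of $p$ assumed elsewhere in the paper; it only illustrates the growth of the eigenvalues.
\begin{verbatim}
# Illustrative sketch with hypothetical data: p = r = 1, q = 0, L = 1,
# alpha_1 = 1, alpha_2 = 1, beta_1 = -1, beta_2 = 1  (alpha_1 > 0, beta_1 < 0).
import numpy as np
from scipy.optimize import brentq

a1, a2, b1, b2, L = 1.0, 1.0, -1.0, 1.0, 1.0

def F(lam):
    """F(lam) = f'(L;lam) - (b1*lam - b2) f(L;lam), where
    f(x;lam) = cos(s x) + ((a1*lam + a2)/s) sin(s x), s = sqrt(lam), lam > 0."""
    s = np.sqrt(lam)
    f = np.cos(s * L) + (a1 * lam + a2) / s * np.sin(s * L)
    fp = -s * np.sin(s * L) + (a1 * lam + a2) * np.cos(s * L)
    return fp - (b1 * lam - b2) * f

grid = np.linspace(0.1, 1000.0, 20000)     # scan for sign changes, then refine
vals = [F(t) for t in grid]
roots = [brentq(F, grid[k], grid[k + 1])
         for k in range(len(grid) - 1) if vals[k] * vals[k + 1] < 0]
for n, lam in enumerate(roots):
    print(n, np.sqrt(lam), n * np.pi / L)  # sqrt(lambda_n) approaches n*pi/L
\end{verbatim}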
The proof of this theorem relies on the existence of an eigenvalue $\lambda_{0}$ such that $f(x;\lambda_{0}) > 0$ in $[0,L]$.
For the proof we use Crum-Darboux and Liouville transformations, obtaining an associated regular SLP whose spectrum can be related to that of our problem.
The ideas and terminology were taken from the article \cite{binding2004transformation} and some of its references.
The present work is organized as follows. Section \ref{pre1}, on preliminaries, presents a list of antecedents used for our main results.
In its subsections we display the elementary tools of our analysis.
Such tools correspond (mostly) to adaptations of classical results on regular SLPs.
We decided to present a sequence of lemmas in order to ease the reading of the main proofs.
Of the results presented in Section \ref{pre1}, we highlight the behavior of the function $h_{1}(\lambda)$ defined in \eqref{funciones} and the behavior of the zeros of the functions $f(x;\lambda)$, solutions for the ODE in \eqref{problema} satisfying the boundary condition for $x=0$.
In Section \ref{demoprincipales1} we develop the proofs of our main results.
In Section \ref{demoprincipales2} we use a regular SLP to characterize the asymptotic behavior of the eigenvalues of problem \eqref{problema}.
The results of this work are applied to a hydrodynamics model in Section \ref{applications}.
The problem considered in this section corresponds to a non-regular SLP associated with the study of a linear stability problem in a secondary oil recovery process.
The hypotheses on the coefficient functions and parameters in \eqref{problema} correspond to considerations on the studied model.
In this section we develop numeric computations on a particular case of \eqref{problema}.
\section{Preliminaries.}\label{pre1}
In this section we present the main ideas and some results used in the proofs of our theorems.
The aim is to point out some of the technical difficulties in obtaining results for the case where $\alpha_{1}>0$ and $\beta_{1}<0$.
In order to ease the reading of this section, we indicate some particular cases of our main results separately, specifically the existence of the main eigenvalue $\lambda_{0}>0$ satisfying $f(x;\lambda_{0})>0$ in $]0,L[$.
We also note that the existence of such eigenvalue is fundamental for the construction of the proofs for the asymptotic representation of the eigenvalues, presented in Theorem \ref{asintotico}.
We also present results on the behavior of the zeros of the solutions of the ODE in \eqref{problema} satisfying the first boundary condition (see problem \eqref{main2}).
\subsection{Geometrical framework. Shooting technique and an auxiliary non-regular SLP}
We now present the approach used for the description of the spectrum of \eqref{problema} and the oscillatory results for the eigenfunctions.
Under conditions of regularity of the ODE coefficients, there are two functions $f_{1}(x;\lambda)$ and $f_{2}(x;\lambda)$ of class $\mathcal{C}^{2}$ which are linearly independent solutions of the ODE in (\ref{pabstract}).
Thus, the general solution of the ODE in (\ref{pabstract}) can be represented as follows
$$ f(x;\lambda)=C_{1}f_{1}(x;\lambda) + C_{2}f_{2}(x;\lambda).$$
Now, applying the first boundary condition of problem (\ref{pabstract}) we get that:
$$ C_{1}f_{1}'(0;\lambda)+C_{2}f_{2}'(0;\lambda)=(\alpha_{1}\lambda + \alpha_{2})( C_{1}f_{1}(0;\lambda)+C_{2}f_{2}(0;\lambda) ) $$
and therefore,
$$ C_{2}\left( f_{2}'(0;\lambda)- (\alpha_{1}\lambda + \alpha_{2}) f_{2}(0;\lambda) \right) = C_{1}( (\alpha_{1}\lambda + \alpha_{2}) f_{1}(0;\lambda)-f'_{1}(0;\lambda) ) .
$$
Thus, considering
$$ C_{2}=C_{1}\left( \frac{(\alpha_{1}\lambda + \alpha_{2}) f_{1}(0;\lambda)-f'_{1}(0;\lambda) }{f_{2}'(0;\lambda)- (\alpha_{1}\lambda + \alpha_{2}) f_{2}(0;\lambda)} \right) $$
we obtain the following representation:
$$ f(x;\lambda)= C_{1}\left( f_{1}(x;\lambda)-\left[ \frac{ f'_{1}(0;\lambda)-(\alpha_{1}\lambda + \alpha_{2}) f_{1}(0;\lambda) }{f_{2}'(0;\lambda)- (\alpha_{1}\lambda + \alpha_{2}) f_{2}(0;\lambda)} \right]f_{2}(x;\lambda)\right) $$
where $C_{1}\neq 0$ is an arbitrary constant.
We note that the expression $f'(x;\lambda)/f(x;\lambda)$ is completely independent of the value $C_{1}$.
On the other hand, from the regularity of the coefficient functions $p(x)$, $q(x)$ and $r(x)$, for each $\lambda\in\mathbb{C}$, the unique solution of problem \eqref{subpro} (defined below) $f(x;\lambda)$ and its derivative $f'(x;\lambda)$ are functions of class $\mathcal{C}^{1}$ on $]0,L[\times \mathbb{C}$ (see Peano's theorem in \cite{hartman1964ordinary} 5.3).
Thus, for each continuity point of $\phi(x,\lambda)=p(x)f'(x;\lambda)/f(x;\lambda)$ we get that $\phi(x,\lambda)$ is of class $\mathcal{C}^{1}$.
Now, we define the following functions
\begin{equation}
\label{funciones}
h_{1}(\lambda)=\frac{f'(L;\lambda)}{\lambda f(L;\lambda)},\qquad h_{2}(\lambda)=\beta_{1}-\frac{\beta_{2}}{\lambda}.
\end{equation}
We remark that the regularity of the function is independent of the constant $C_{1}$.
Without loss of generality, we can consider a $C_{1}$ such that $f(0;\lambda)=1$.
Under that consideration, if $f(x;\lambda)$ denotes the solution of the ODE in \eqref{problema} satisfying the boundary condition at $x=0$, we get that it is the unique regular solution of the following initial value problem
\begin{equation}
\label{subpro}
\left\lbrace \begin{array}{rcl}
(p(x)f')'-(q(x)-\lambda r(x))f &=&0\qquad 0<x<L\\
f(0) &=&1\\
f'(0)&=&\alpha_{1}\lambda +\alpha_{2}
\end{array}
\right.
\end{equation}
Reciprocally, the solution of \eqref{subpro} satisfies the ODE and the boundary condition at $x=0$ in our problem \eqref{problema}.
We note that the function $h_{1}(\lambda)$ is well defined as long as $f(L;\lambda)\neq 0$.
Therefore, in order to obtain the domain of function $h_{1}(\lambda)$ we consider the following auxiliary boundary value problem.
\begin{equation}
\label{aux1}
\left\lbrace\begin{array}{rcl}
\displaystyle{(p(x)y')'-(q(x)- \lambda r(x) )y}&=&0\qquad 0<x<L \\
y'(0)&=&(\alpha_{1}\lambda +\alpha_{2})y(0)\\
y(L)&=&0
\end{array}\right.
\end{equation}
Unlike in our problem, in \eqref{aux1} one boundary condition does not involve the spectral parameter, which eases (in part) the calculations, since some of the results on regular SLPs are directly applicable, e.g. Sturm's comparison theorems.
For this problem, we have the following result on the spectrum.
\begin{te}
\label{secundario}Under the condition $\alpha_{1}>0$, the spectrum of \eqref{aux1} is an ordered set
$$\eta_{-0}<0<\eta_{0}<\eta_{1}<\eta_{2}<\dots\nearrow \infty$$
with the property that the eigenfunction $y(x;\eta_{l})$ has exactly $\vert l\vert$ zeros in $]0,L[$.
\end{te}
The proof is presented in \cite{binding2004transformation}. In Appendix \ref{demoaux} we present an alternative proof.
Using this result, we note that the function $h_{1}(\lambda)$ is well defined if $\lambda\neq 0$ and $\lambda\neq \eta_{n}$ for every eigenvalue $\eta_{n}$ of \eqref{aux1}.
Now, the analysis of the monotone behavior of function $h_1(\lambda)$ is developed on the branches
\begin{equation}
\label{ramas}
\mathcal{B}_{-1}=]-\infty,\eta_{-0}[;\quad\mathcal{B}_{-0}=]\eta_{-0},0[;\quad \mathcal{B}_{0}= ]0,\eta_{0}[;\quad \dots \quad \mathcal{B}_{n}= ]\eta_{n-1},\eta_{n}[
\end{equation}
In this way, on each branch $\mathcal{B}_{n}$ we study the solutions of equation \eqref{laecuacion} through an analysis of the intersection of the graphs of $h_{1}(\lambda)$ and $h_{2}(\lambda)$.
Knowing that $\beta_{2}>0$, we get that $h_{2}(\lambda)$ is increasing on each of the branches $]-\infty,0[$ and $]0,\infty[$.
The monotonicity of $h_{1}(\lambda)$ is obtained through the analysis of an ODE in terms of $h_{1}(\lambda)$ and the coefficient functions $p(x)$, $q(x)$ and $r(x)$.
For this point, the regularity of the functions $f(L;\lambda)$ and $f'(L;\lambda)$ with respect to $\lambda$ is fundamental.
In the following subsection we cover the subject of regularity.
\begin{rem}
For the rest of this article we will denote by $f(x;\lambda)$ a regular function satisfying the following equations
\begin{equation}
\label{main2}
\left\lbrace\begin{array}{rcl}
\displaystyle{(p(x)f')'-(q(x)- \lambda r(x))f}&=&0\qquad 0<x<L \\
f'(0)&=&(\alpha_{1} \lambda +\alpha_{2})f(0)
\end{array}\right.
\end{equation}
Moreover, for all results relying on the regularity of ODEs, we consider $f(x;\lambda)$ to be the solution of \eqref{main2} such that $f(0;\lambda)=1$.
As we have already noted, this consideration does not yield a loss of generality, since it does not modify the behavior of the function $h_{1}(\lambda)$ in \eqref{funciones}.
\end{rem}
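Purely as an illustration of this shooting framework (ours, not used in any proof), the following Python sketch evaluates $h_{1}(\lambda)$ by integrating the normalized problem \eqref{main2} with $f(0;\lambda)=1$ and then locates approximate solutions of $h_{1}(\lambda)=h_{2}(\lambda)$ on a $\lambda$-window. The coefficients, parameter values and scanning window are hypothetical choices, made only so that the standing assumptions $p'>0$, $q,r>0$, $\alpha_{1},\alpha_{2},\beta_{2}>0$ and $\beta_{1}<0$ hold.
\begin{verbatim}
# Illustrative sketch with hypothetical coefficients and parameters.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

L = 1.0
p = lambda x: 1.0 + x                  # positive and increasing
q = lambda x: 1.0                      # q > 0
r = lambda x: 1.0                      # r > 0
a1, a2, b1, b2 = 1.0, 1.0, -1.0, 1.0   # alpha_1 > 0, beta_1 < 0

def endpoint(lam):
    """(f(L;lam), f'(L;lam)) for the solution of (p f')' - (q - lam r) f = 0
    with f(0) = 1 and f'(0) = a1*lam + a2."""
    def rhs(x, y):                     # y = (f, p*f')
        return [y[1] / p(x), (q(x) - lam * r(x)) * y[0]]
    sol = solve_ivp(rhs, (0.0, L), [1.0, p(0.0) * (a1 * lam + a2)],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1], sol.y[1, -1] / p(L)

def h1(lam):
    fL, fpL = endpoint(lam)
    return fpL / (lam * fL)

g = lambda lam: h1(lam) - (b1 - b2 / lam)        # h1 - h2

# Scan for sign changes of h1 - h2 and refine with brentq; sign changes caused by
# poles of h1 (the eigenvalues eta_n of the auxiliary problem) are discarded by
# checking the residual at the computed point.
grid = np.linspace(0.05, 20.0, 400)
vals = [g(t) for t in grid]
candidates = [brentq(g, grid[k], grid[k + 1])
              for k in range(len(grid) - 1) if vals[k] * vals[k + 1] < 0]
print([lam for lam in candidates if abs(g(lam)) < 1e-6])
\end{verbatim}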
\subsection{On the existence of $\lambda_{0}$ and $\lambda_{-0}$}\label{valorespropios}
In this part we prove the existence of eigenvalues $\lambda_{-0}<0<\lambda_{0}$ such that the respective eigenfunctions don't change sign in the interval $]0,L[$.
In the proof of this existence result, we present the main idea to prove the monotonicity of $h_{1}(\lambda)$ in \eqref{funciones}.
\begin{lemma}
\label{primervalorproio}
There exist two eigenvalues $\lambda_{-0}$ and $\lambda_{0}$ of \eqref{problema} such that $\eta_{-0}<\lambda_{-0}<0<\lambda_{0}<\eta_{0}$ and the respective eigenfunctions are positive in $]0,L[$.
\end{lemma}
In the proof of this Lemma we obtain an ODE in terms of $h_{1}(\lambda)$.
Analyzing the sign of $\frac{d}{d\lambda}h_{1}$ we get that $h_{1}(\lambda)$ is monotone decreasing for $\lambda\in]0,\eta_{0}[$.
The proof for the existence of $\lambda_{-0}$ is similar.
The argument presented in this part will be used for our proof of the existence of a sequence of eigenvalues in \eqref{orden}.
For the general case, additional technical arguments are required.
\noindent{\bf Proof:} We begin by considering
\begin{equation}\label{va}
\varphi(x,\lambda)=p(x)f'(x;\lambda)/f(x;\lambda),
\end{equation}
with $f(x;\lambda)$ solution of \eqref{main2}.
From Peano's theorem (see Hartman \cite{hartman1964ordinary} 5.3) we know that $\varphi(x,\lambda)$ is of class $\mathcal{C}^{1}$ in $]0,L[\times\mathbb{C}$.
It follows directly that
\begin{equation*}
\begin{array}{rcl}
\displaystyle{\frac{\partial \varphi }{\partial x}}&=&\displaystyle{\frac{(pf')'}{f}-p(x)\frac{(f')^{2}}{f^{2}}}\\
&=& \displaystyleplaystyle{-\lambda r(x)+q(x)-\frac{1}{p(x)}\varphi^{2}}.
\end{array}
\end{equation*}
Now, taking the partial derivative with respect to $\lambdambda$ and interchanging the order of the derivatives, we get
\begin{equation*}
\displaystyle{\frac{\partial }{\partial x}\frac{\partial \varphi }{\partial \lambda}=-r(x)-2\frac{\varphi}{p(x)}\frac{\partial \varphi }{\partial \lambda}}.
\end{equation*}
Taking $\displaystyle{\psi(x,\lambda)=\frac{\partial}{\partial\lambda}\varphi}$, we rewrite the above equation as follows
$$ \displaystyle{ \frac{\partial \psi}{\partial x} +2\frac{f'(x;\lambda)}{f(x;\lambda)}\psi =-r(x) } $$
and, therefore we get
\begin{equation}
\frac{\partial}{\partial x}\left(f^{2}\psi\right)=-rf^{2}.
\end{equation}
Thus, integrating over $[0,L]$ we obtain the following relation
\begin{equation}
\label{rel1}
f^{2}(L;\lambda)\psi(L;\lambda)=-\int_{0}^{L} f^{2}(x;\lambda) r(x)dx+f^{2}(0;\lambda)\psi(0;\lambda)
\end{equation}
On the other hand, from the ODE in \eqref{main2} we know that
\begin{equation}
\label{integral}
\begin{array}{rcl}
\displaystyle{-\int_{0}^{L} r(s)f^{2}(s;\lambda)ds }&=&\displaystyle{\frac{1}{\lambda}\left\lbrace \int_{0}^{L} f(pf')'ds-\int_{0}^{L} q f^{2}ds \right\rbrace }\\
&=&\displaystyle{\frac{1}{\lambda}\left\lbrace f(L;\lambda)p(L)f'(L;\lambda)-f(0;\lambda)p(0)f'(0;\lambda)-\right.}\\
& & \displaystyle{\left.-\int_{0}^{L} p(f')^{2}ds-\int_{0}^{L} q f^{2}ds \right\rbrace }\\
&=&\displaystyle{\frac{1}{\lambda}\left\lbrace f(L;\lambda)^{2}\varphi(L,\lambda)-p(0)f^{2}(0;\lambda)(\alpha_{1}\lambda+\alpha_{2})-\right.}\\
& &\displaystyle{-\left.\int_{0}^{L}\left[ p(f')^{2}+ q f^{2}\right]ds \right\rbrace }\\
\end{array}
\end{equation}
Using the expression for the boundary condition at $x=0$ it follows that $\psi(0,\lambda)=p(0)\alpha_{1}$ and, therefore, we get
$$ f^{2}(L;\lambda)\psi(L;\lambda)-\frac{1}{\lambda} f^{2}(L;\lambda) \varphi(L,\lambda)= -\frac{p(0)\alpha_{2}}{\lambda}-\displaystyle{\frac{1}{\lambda}\left\lbrace \int_{0}^{L}\left[ p(f')^{2}+ q f^{2}\right]ds \right\rbrace } $$
Finally, knowing that $\psi =\frac{\partial \varphi}{\partial \lambda}$, from the above equation we obtain
\begin{equation}
\label{ODE}
\frac{d}{d\lambda}\left(\frac{1}{\lambda}\varphi(L,\lambda)\right)=-\frac{1}{(\lambda f(L;\lambda))^{2}}\left\lbrace p(0)\alpha_{2}+\int_{0}^{L}\left[ p(f')^{2}+ q f^{2}\right]ds \right\rbrace
\end{equation}
Hence, the function $\frac{1}{\lambda}\varphi(L,\lambda)=\frac{p(L)}{\lambda}\frac{f'(L;\lambda)}{f(L;\lambda)}$ is decreasing for $\lambda>0$ while $f(L;\lambda)>0$; since $p(L)>0$, the same holds for $h_{1}(\lambda)$.
The next step is to study equation \eqref{laecuacion} through the intersection of the graphs of $h_{1}(\lambda)$ and $h_{2}(\lambda)=\beta_{1}-\frac{\beta_{2}}{\lambda}$.
Knowing that $\alpha_{1}$ and $\alpha_{2}$ are positive, from the consideration $f(0;\lambda)=1$ we get that $f'(0;\lambda)>0$ for each $\lambda\in\mathcal{B}_{0}=]0,\eta_{0}[$.
On the other hand, as $\lambda>0$, from the maximum principle for second order linear ODEs, we get $f'(L;\lambda)<0$.
Therefore, from the definition of $h_{1}(\lambda)$ and of $\eta_{0}$, the principal eigenvalue of the auxiliary non-regular SLP \eqref{aux1}, we obtain
$$\lim_{\lambda\searrow 0}h_{1}(\lambda)=\infty;\qquad \lim_{\lambda\nearrow\eta_{0}}h_{1}(\lambda)=-\infty$$
Thus, $h_{1}:\mathcal{B}_{0}\to\mathbb{R}$ is a surjective monotonic decreasing function.
Finally, as $\beta_{2}>0$, we get that $h_{2}(\lambda)$ in \eqref{funciones} is an increasing function on $\mathcal{B}_{0}$, and therefore there exists exactly one intersection of the graphs of $h_{1}(\lambda)$ and $h_{2}(\lambda)$ in $\mathcal{B}_{0}$.
If we denote by $\lambda_{0}$ the unique solution of \eqref{laecuacion} in $\mathcal{B}_{0}$, from the definition of $h_{2}(\lambda)$ and knowing that $f(x;\lambda_{0})$ satisfies \eqref{main2}, we get that $\lambda_{0}$ is an eigenvalue of \eqref{problema} with $f(x;\lambda_{0})>0$ in $]0,L[$.
Using a similar argument we obtain the existence of $\lambda_{-0}\in]\eta_{-0},0[$, an eigenvalue of \eqref{problema} such that $f(x;\lambda_{-0})>0$ in $]0,L[$.
$\square$
\begin{rem}
We use a similar argument for the description of the other eigenvalues in \eqref{orden}.
For the general case we need other technical considerations, but our proof is still centered on the sign of $\frac{d}{d\lambda}h_{1}(\lambda)$.
To this end, in the next subsection we present some results on the behavior of the zeros of $f(x;\lambda)$ (the solution of \eqref{main2}).
\end{rem}
\subsection{Regular SLP associated with \eqref{problema}}
In this subsection we obtain a regular SLP whose spectrum is related to the spectrum of our problem \eqref{problema} by means of a Crum-Darboux type transformation. We use the ideas and terminology given in \cite{binding2004transformation}.
By Lemma \ref{primervalorproio}, let $\lambda_{0}>0$ be an eigenvalue of \eqref{problema} such that $y_{0}(x)=f(x;\lambda_{0})>0$.
Consider the following Crum-Darboux type transformation
\begin{equation}
\label{transformacion}
g(x)=p(x)f'(x;\lambda)-p(x)f(x;\lambda)\frac{y_{0}'(x)}{y_{0}(x)}
\end{equation}
with $y_{0}(x)=f(x;\lambda_{0})$, where $f(x;\lambda)$ is a solution of \eqref{main2}.\\
Therefore, $g(x)$ in (\ref{transformacion}) satisfies the following regular boundary value problem
\begin{equation}
\label{problem2}
(P')\left\lbrace\begin{array}{rcl}
\displaystyle{(\tilde{p}(x)g')'-(\tilde{q}(x)-\lambda\tilde{r}(x))g} &=&0\qquad0<x<L \\
g'(0)&=&-\tilde{\alpha} g(0)\\
g'(L) &=&-\tilde{\beta}g(L)
\end{array}\right.
\end{equation}
where
$$\tilde{p}(x)=\frac{1}{r(x)};\quad \tilde{q}(x)=\frac{\lambda_{0}}{p(x)}-\left( -\frac{1}{r}\frac{y_{0}'}{y_{0}} \right)'+\frac{1}{r}\left(\frac{y'_{0}}{y_{0}}\right)^2;\quad \tilde{r}(x)=\frac{1}{p(x)}$$
and
\begin{equation}
\label{condicionesregular}
\tilde{\alpha}=\frac{r(0)}{\alpha_{1}} +\frac{y_{0}'(0)}{y_{0}(0)};\qquad \tilde{\beta}=\frac{r(L)}{\beta_{1}} +\frac{y_{0}'(L)}{y_{0}(L)}
\end{equation}
The deduction of \eqref{problem2} is presented in Appendix \ref{regularslp}.
We can derive directly the following result on the spectrum of \eqref{problema}:
\begin{coro}
\label{soloreales}
The spectrum of \eqref{problema} is a countable real set.
\end{coro}
\noindent{\bf Proof:} The proof is obtained directly, by contradiction, considering the regular SLP \eqref{problem2}.
$\square$
\subsection{Monotonic behaviors of the zeros of $f(x;\lambda)$.}
In this part, we study the behavior of the zeros of $f(x;\lambda)$, a solution of the initial value problem \eqref{main2}.
Through a collection of lemmas, we present some fundamental tools for our analysis on the behavior of the function $h_{1}(\lambda)$ defined in \eqref{funciones}.
We note that in the case where $\alpha_{1}<0$ and $\beta_{1}>0$, using Sturm's comparison theorems we have a decreasing behavior of $h_{1}(\lambda)$ for $\lambda>0$.
In the case where $\alpha_{1}\beta_{1}\geq 0$, these results can be adapted using majorant and minorant Sturm problems (see Hartman \cite{hartman1964ordinary} 11.3). In Appendix \ref{otroespectro} we present the analysis for the case $\alpha_{1}\beta_{1}\geq 0$.
In the following, we consider $z_{j}(\lambda)$ to denote the $j-$th zero of $f(x;\lambda)$, i.e., $f(x;\lambda)$ has exactly $j-1$ zeros in $]0,z_{j}(\lambda)[$, also satisfying $f(z_{j}(\lambda);\lambda)=0$.
We describe $z_{j}(\lambda)$ as a regular function of the variable $\lambda$ by means of the implicit function theorem.
For the development of our argument, we begin by noting the regularity of $f(x;\lambda)$ with respect to the parameter $\lambda$.
Since $r(x)$ and $q(x)$ are positive, through oscillatory results for second order ODEs, we know that there exists a $\lambda^{*}>0$ such that $ f(x;\lambda^{*})=0 $ has at least one root in $]0,L[$.
Considering $\lambda^{*}$ fixed, we denote by $x^{*}$ the first zero of $f(x;\lambda^{*})$ in $]0,L[$, i.e., we get that $f^{2}(x;\lambda^{*})>0$ in $]0,x^{*}[$ and $f(x^{*};\lambda^{*})=0$.
On the other hand, if we assume that $f'(x^{*};\lambda^{*})=0$, considering the initial value problem
\begin{equation}
\label{cauchy}
(p(x)f')'-(q(x)-\lambda^{*} r(x))f=0,\qquad f(x^{*})=f'(x^{*})=0
\end{equation}
we have that $f(x;\lambda^{*})=0$, obtaining a contradiction.
Thus $f'(x^{*};\lambda^{*})\neq 0$.
Now, consider $\phi:[0,L]\times \mathbb{R}\to\mathbb{R}$ defined by $\phi(x,\lambda)=f(x;\lambda)$.
From the regularity of $p(x),\ q(x)$ and $r(x)$, using Peano's theorem (see Hartman \cite{hartman1964ordinary} 5.3), there is a neighborhood $\Omega^{*}$ of $(x^{*},\lambda^{*})$ such that $\phi\in\mathcal{C}^{1}(\Omega^{*})$.
Moreover, as $f'(x^{*};\lambda^{*})\neq 0$, from the definition of $\phi(x,\lambda)$ we also have that $\displaystyle{ \frac{\partial }{\partial x}\phi(x^{*},\lambda^{*})\neq 0 }$.
Therefore, from the implicit function theorem (IFT), we know that there exist $I_{1}$ and $I_{2}$, neighborhoods of $\lambda^{*}$ and $x^{*}$ respectively, and a unique $\mathcal{C}^{1}$ function $ g:I_{1}\to I_{2}$, such that
$$ \phi(g(\lambda);\lambda)=0,\qquad \textrm{for each}\ \lambda\in I_{1}.$$
Moreover, from the IFT it follows that
$$ \frac{dg}{d\lambda}=-\displaystyle{\frac{\frac{\partial }{\partial \lambda}\phi(g(\lambda),\lambda) }{\frac{\partial }{\partial x}\phi(g(\lambda),\lambda)} } $$
Thus, as $\phi(x,\lambda)=f(x;\lambda)$, given some $\lambda\in\mathbb{C}$, such that $f(x;\lambda)=0$ has at least one root in $]0,L[$, there exists some neighborhood $I_{1}$ of $\lambda$ such that the first root of $f(x;\lambda)=0$ can be defined as a regular function $z_{1}:I_{1}\to \mathbb{R}$ satisfying
\begin{equation}
\label{signo2}
\displaystyle{ \frac{d z_{1}}{d\lambda}= -\displaystyle{\frac{\frac{\partial }{\partial \lambda}f(z_{1}(\lambda);\lambda) }{f'(z_{1}(\lambda);\lambda)} } }.
\end{equation}
This argument can be repeated for each zero denoted by $z_{j}(\lambda)$.
The aim is to obtain the sign of the derivatives of $z_{j}(\lambda)$.
We note that, through the oscillatory results, it's possible to prove that if $l\geq j$, and $\lambda \in\mathcal{B}_{l}$, then $z_{j}(\lambda)$ is well defined and its domain is given by $]\eta_{j},\infty[$ with $\eta_{j}$ eigenvalue of \eqref{aux1}.
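As a quick numerical illustration of this monotone behavior (ours, with the same hypothetical toy coefficients as in the earlier sketch; it is not used in any proof), one can track the first zero $z_{1}(\lambda)$ of $f(x;\lambda)$ and watch it move to the left as $\lambda$ grows, in agreement with the lemmas proved below.
\begin{verbatim}
# Illustrative sketch: first zero z_1(lambda) for hypothetical toy coefficients.
import numpy as np
from scipy.integrate import solve_ivp

L, a1, a2 = 1.0, 1.0, 1.0
p = lambda x: 1.0 + x
q = lambda x: 1.0
r = lambda x: 1.0

def first_zero(lam):
    """First zero of f(.;lam) in ]0,L[, or None if there is no sign change."""
    def rhs(x, y):                      # y = (f, p*f')
        return [y[1] / p(x), (q(x) - lam * r(x)) * y[0]]
    xs = np.linspace(0.0, L, 2001)
    sol = solve_ivp(rhs, (0.0, L), [1.0, p(0.0) * (a1 * lam + a2)],
                    t_eval=xs, rtol=1e-10, atol=1e-12)
    f = sol.y[0]
    idx = np.where(f[:-1] * f[1:] < 0)[0]
    if idx.size == 0:
        return None
    k = idx[0]                          # refine by linear interpolation
    return xs[k] - f[k] * (xs[k + 1] - xs[k]) / (f[k + 1] - f[k])

for lam in (60.0, 80.0, 100.0):
    print(lam, first_zero(lam))         # z_1(lambda) decreases as lambda grows
\end{verbatim}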
We begin by proving that for each $\lambda<\eta_{-0}$ the function $f(x;\lambda)$ has exactly one zero in $]0,L[$.
\begin{lemma}
\label{numerodezeronegativos}
For each $\lambda\in]-\infty,\eta_{-0}[$ the solution of (\ref{main2}) has exactly one zero in $]0,L[$.
\end{lemma}
\noindent{\bf Proof:} For $\lambda<\eta_{-0}$ we have that $q(x)-\lambda r(x)>0$ and, by oscillation results, $f(x;\lambda)$ is non-oscillatory in $]0,L[$.
Thus, $f(x;\lambda)$ has at most one zero in $]0,L[$.
Now, we use Sturm's first comparison theorem with a Sturm majorant of (\ref{main2}).
For $a=\min\lbrace q(x): 0\leq x\leq L \rbrace$; $b=\min\lbrace r(x): 0\leq x\leq L \rbrace$ and $c=\min\lbrace p(x): 0\leq x\leq L \rbrace$, consider
\begin{equation}
\label{majo2}
\left\lbrace\begin{array}{rcl}
\displaystyle{(c\tilde{f}')'-(a- \lambda b)\tilde{f}}&=&0\qquad 0<x<L \\
\tilde{f}'(0)&=&(\alpha_{1} \lambda +\alpha_{2})\tilde{f}(0)
\end{array}\right.
\end{equation}
Directly, from the definition of $a,\ b$ and $c$, we have that \eqref{majo2} is a majorant problem at \eqref{main2}.
Under our consideration that $p'(x)>0$, we have that $c=p(0)$.
On the other hand, taking $\overline{c}=\sqrt{(a-b\lambda)/c}$, we get that
\begin{equation}
\label{somajorant}
\tilde{f}(x;\lambda)=\cosh(\overline{c}(\lambda)x)+\frac{(\alpha_{1}\lambda+\alpha_{2})}{\overline{c}(\lambda)}\sinh(\overline{c}(\lambda)x)
\end{equation}
defines a solution of (\ref{majo2}).
Since $\tanh(\overline{c}(\lambda)L)\to 1$ and $-\overline{c}(\lambda)/(\alpha_{1}\lambda+\alpha_{2})=O((-\lambda)^{-1/2})$ as $\lambda\to-\infty$, there exists some $\lambda^{*}$ such that the equation $\tilde{f}(x;\lambda^{*})=0$ has one root in $]0,L[$.
As (\ref{main2}) and (\ref{majo2}) have the same boundary condition at $x=0$, we have
\begin{equation}
\label{secondsturm1}
\frac{c \tilde{f}'(0;\lambda)}{\tilde{f}(0;\lambda)}\geq \frac{p(0)f'(0;\lambda)}{f(0;\lambda)}.
\end{equation}
Finally, using the Sturm's first comparison theorem, we have that $f(x;\lambda)$ has exactly one zero in $]0,L[$.
$\square$
\begin{rem}
Through the analysis of the behaviors of $z_{j}(\lambda)$, we develop our arguments to conclude that $h_{1}(\lambda)$ is a decreasing function on each of its branches $\mathcal{B}_{n}$ defined in \eqref{ramas}.
We remark that, when $\lambda\in]\eta_{-0},\eta_{0}[$ we have $f(x;\lambda)>0$ in $]0,L[$.
\end{rem}
The analysis in the cases $\lambda<\eta_{-0}$ and $\lambda> \eta_{1}$ will be studied separately in the following two Lemmas.
We begin with $\lambda<\eta_{-0}$ as follows:
\begin{lemma}
\label{zeros2}
For $\underline{\lambda}<\overline{\lambda}<\eta_{-0}$, we get $z_{1}(\underline{\lambda})<z_{1}(\overline{\lambda})$.
\end{lemma}
\noindent{\bf Proof:} The existence of $z_{1}(\lambda)$ for $\lambda<\eta_{-0}$ was proved in Lemma \ref{numerodezeronegativos}.
Let $\displaystyle{ \varphi(x,\lambda)=p(x)\frac{f'(x;\lambda)}{f(x;\lambda)} }$ be a regular function in $]0,z_{1}(\lambda)[$.
Now, taking
$$\psi(x;\lambda)=\displaystyle{\frac{\partial \varphi}{\partial\lambda}}=p(x)\frac{ \displaystyle{\frac{\partial f'}{\partial \lambda} f - f' \frac{\partial f}{\partial \lambda} } }{ f^{2} (x;\lambda) },$$
we get
\begin{equation}
\label{limite}
\displaystyle{\lim_{x\nearrow z_{1}(\lambda)}f^{2}(x;\lambda)\psi(x;\lambda)=-p(z_{1}(\lambda))f'(z_{1}(\lambda);\lambda)\frac{\partial}{\partial\lambda}f(x;\lambda)\left\vert_{x=z_{1}(\lambda)}\right.
}
\end{equation}
In the following steps we apply this limit procedure implicitly.
From the ODE in \eqref{problema} we know that
\begin{equation}
\label{eq reducida 2}
\frac{\partial}{\partial x}\left( f^{2}(x;\lambda) \psi(x,\lambda) \right)=-r(x)f^{2}(x;\lambda).
\end{equation}
On the other hand, from $f(0;\lambda)=1$ and $f'(0;\lambda)=\alpha_{1}\lambda+\alpha_{2}$, we get that $\varphi(0,\lambda)=p(0)(\alpha_{1}\lambda +\alpha_{2})$, and therefore
$$\psi(0,\lambda)=\displaystyle{\frac{\partial\varphi(0;\lambda)}{\partial\lambda} =p(0)\frac{\partial}{\partial\lambda}(\alpha_{1}\lambda+\alpha_{2}) =p(0)\alpha_{1}}$$
Thus, using similar argument as in the limit presented in (\ref{limite}), integrating the left hand side of the equation (\ref{eq reducida 2}) between $0$ and $z_{1}(\lambda)$, we obtain
\begin{equation}
\label{izq}
\begin{array}{rcl}
\displaystyle{\int_{0}^{z_{1}(\lambda)}\frac{\partial}{\partial x}\left( f^{2}(x;\lambda) \psi(x,\lambda) \right)dx }&=& \displaystyle{-p(z_{1}(\lambda))f'(z_{1}(\lambda);\lambda) \frac{\partial}{\partial\lambda} f(z_{1}(\lambda);\lambda)-f^{2}(0;\lambda)\psi(0,\lambda)}\\
&=&\displaystyle{-p(z_{1}(\lambda))f'(z_{1}(\lambda);\lambda) \frac{\partial}{\partial\lambda} f(z_{1}(\lambda);\lambda)-p(0)\alpha_{1} }.
\end{array}
\end{equation}
Now, integrating the right side of (\ref{eq reducida 2}) between $0$ and $z_{1}(\lambda)$ we obtain
\begin{equation}
\label{der}
\begin{array}{rcl}
\displaystyle{-\int_{0}^{z_{1}(\lambda)}r(x)(f(x;\lambda))^{2}dx}&=& \displaystyle{ \frac{1}{\lambda} \int_{0}^{z_{1}(\lambda)} f(pf')'dx - \frac{1}{\lambda} \int_{0}^{z_{1}(\lambda)} qf^{2}dx}\\
&=& \displaystyle{\frac{1}{\lambda} \left((fpf')(z_{1}(\lambda))-(fpf')(0) - \int_{0}^{z_{1}(\lambda)} (p(f')^{2}+qf^{2})dx \right)}\\
&=&\displaystyle{-\frac{p(0)}{\lambda}(\alpha_{1}\lambda +\alpha_{2}) -\frac{1}{\lambda}\int_{0}^{z_{1}(\lambda)} (p(f')^{2}+qf^{2})dx }
\end{array}
\end{equation}
Thus, if we integrate (\ref{eq reducida 2}) between $0$ and $z_{1}(\lambda)$, from (\ref{izq}) and (\ref{der}) we obtain
$$ \displaystyle{-p(z_{1}(\lambda))f'(z_{1}(\lambda);\lambda) \frac{\partial}{\partial\lambda} f(z_{1}(\lambda);\lambda)-p(0)\alpha_{1}}=\displaystyle{-\frac{p(0)}{\lambda}(\alpha_{1}\lambda +\alpha_{2}) -\frac{1}{\lambda}\int_{0}^{z_{1}(\lambda)} (p(f')^{2}+qf^{2})dx } $$
and therefore
\begin{equation}
\label{casi}
\displaystyle{p(z_{1}(\lambda))f'(z_{1}(\lambda);\lambda) \frac{\partial}{\partial\lambda} f(z_{1}(\lambda);\lambda )}=\displaystyle{\frac{1}{\lambda}\left(p(0)\alpha_{2} + \int_{0}^{z_{1}(\lambda)} (p(f')^{2}+qf^{2})dx \right) }
\end{equation}
On the other hand, from the IFT we get \eqref{signo2}, i.e.$\displaystyle{\frac{d z_{1}(\lambda)}{d\lambda} =-\frac{ \frac{\partial}{\partial\lambda} f(z_{1}(\lambda);\lambda )}{f'(z_{1}(\lambda);\lambda)} }$ and therefore the sign of $\frac{dz_{1}}{d\lambda}$ can be obtained from (\ref{casi}).
Finally, as $\lambda<\eta_{-0}<0$ we get $\displaystyle{\frac{d z_{1}(\lambda)}{d\lambda}>0 }$.
$\square$
Now, we analyze the behavior of $z_{n}(\lambda)$ for $\lambda>\eta_{0}$.
\begin{lemma}
\label{zeros1}
Let $\lambda>\eta_{0}$ and let $z_{n}(\lambda)$ be the $n$-th zero of $f(x;\lambda)$.
Then $\frac{d z_{n}}{d\lambda}<0.$
\end{lemma}
\noindent{\bf Proof:} Assume that $f(x;\lambda)$ has at least $n$ zeros in $]0,L[$, denoted by $z_{1}(\lambda)<z_{2}(\lambda)<\dots<z_{n}(\lambda)$.
For $z_{1}(\lambda)$, using \eqref{casi} we get that $\frac{d z_{1}}{d\lambda}<0$.
Now, integrating (\ref{eq reducida 2}) between $z_{1}(\lambda)$ and $z_{2}(\lambda)$, we get
\begin{equation}
\label{paso1}
-p(z_{2})f'(z_{2};\lambda)\frac{\partial}{\partial\lambda}f(z_{2};\lambda)+p(z_{1})f'(z_{1};\lambda)\frac{\partial}{\partial\lambda}f(z_{1};\lambda)=-\displaystyle{\int_{z_{1}}^{z_{2}} r f^{2}dx}.
\end{equation}
Using the implicit function theorem, from \eqref{paso1} we obtain
\begin{equation}
\label{paso2}
p(z_{2})(f'(z_{2};\lambda))^{2}\frac{dz_{2}}{d\lambda}=p(z_{1})(f'(z_{1};\lambda))^{2}\frac{dz_{1}}{d\lambda} -\displaystyle{\int_{z_{1}}^{z_{2}} r f^{2}dx}
\end{equation}
Knowing that $\frac{dz_{1}}{d\lambda}<0$, from \eqref{paso2} we get $\frac{dz_{2}}{d\lambda}<0$.
Following similar steps we obtain the proof for any $z_{j}(\lambda)$ with $j=3,\dots,n$.
$\square$
\subsection{Behavior of $h_{1}(\lambda)$ for $\lambda\in]-\infty,\eta_{-0}[$. Asymptotic representation.}\label{asym}
Using the asymptotic representation for $h_{1}(\lambda)$ presented in this subsection, we obtain the existence of $\lambda_{-1}<0$, an eigenvalue of \eqref{problema} such that the corresponding eigenfunction changes sign exactly once in $]0,L[$ (see Lemma \ref{numerodezeronegativos}).
We continue working with $f(x;\lambda)$, solution of \eqref{main2}.
In the next result we consider $h_{1}(\lambda)$ defined in \eqref{funciones} and $\eta_{-0}<0$ the eigenvalue of \eqref{aux1} such that the respective eigenfunction satisfies $y(x;\eta_{-0})>0$ in $]0,L[$.
\begin{lemma}
\label{sobrelafuncion}
The function $h_{1}:]-\infty,\eta_{-0}[\to]-\infty,0[$ is a monotonically decreasing bijection, and its asymptotic representation is given by $h_{1}(\lambda)=O((-\lambda)^{-1/2})$ as $\lambda\to-\infty$.
\end{lemma}
\begin{rem}
Directly from the facts stated in the previous lemma, we note that in the case $\beta_{1}>0$ the graphs of the functions $h_{1}(\lambda)$ and $h_{2}(\lambda)$ do not intersect in $]-\infty,\eta_{-0}[$, and therefore $\lambda_{-1}$ does not exist in that case.
\end{rem}
\noindent{\bf Proof:} We begin by obtaining the sign of $h_{1}(\lambda)$.
Since $q(x)-\lambda r(x)>0$ when $\lambda\leq 0$, and knowing that $\lambda<\eta_{-0}<-\frac{\alpha_{2}}{\alpha_{1}}$, from the maximum principle for second order linear ODEs (see \cite{protter1984maximum}, Chapter 1) we have that $f'(L;\lambda)\cdot f(L;\lambda)>0$.
Thus, $h_{1}(\lambda)<0$ for $\lambda<\eta_{-0}$.
Moreover, we have $\lim_{\lambda\nearrow \eta_{-0}}h_{1}(\lambda)=-\infty$.
For the asymptotic representation as $\lambda\to-\infty$, we use the notion of Sturm majorant and Sturm minorant problems for \eqref{main2}.
We begin considering the Sturm majorant at \eqref{main2} given in \eqref{majo2}.
Knowing that $\tilde{f}(x;\lambda)$ in \eqref{somajorant} has one zero for $\lambda<\eta_{-0}$, from the condition at $x=0$ and Lemma \ref{numerodezeronegativos}, through Sturm's second comparison theorem we get
\begin{equation}
\label{upperbound}
c\frac{\tilde{f}'(L;\lambda)}{ \tilde{f}(L;\lambda)}>p(L)\frac{f'(L;\lambda)}{f(L;\lambda)}
\end{equation}
To obtain a lower bound for the asymptotic behavior of $h_{1}(\lambda)$, we use \eqref{upperbound} together with the fact that $\tilde{f}'(L;\lambda)/\tilde{f}(L;\lambda)\sim \overline{c}(\lambda)\coth(\overline{c}(\lambda)L)$ as $\lambda\to-\infty$.\\
Thus, knowing that $\coth(\overline{c}(\lambda)L)\sim 1$ as $\lambda\to -\infty$, from \eqref{upperbound} we obtain
\begin{equation}
\label{cota1}
-\sqrt{b\cdot c}(-\lambda)^{-1/2}<h_{1}(\lambda),\qquad\textrm{as}\ \lambda\to-\infty
\end{equation}
Now, we use a Sturm minorant problem for \eqref{main2}, and following a similar argument we obtain an upper bound.
Consider $\underline{c}(\lambda)=\sqrt{ (A-B\lambda)/C }$, with $A=\max\lbrace q(x): 0\leq x\leq L \rbrace$; $B=\max\lbrace r(x): 0\leq x\leq L \rbrace$ and $C=\max\lbrace p(x): 0\leq x\leq L \rbrace$.
Using these upper bounds we define the following Sturm minorant problem:
\begin{equation*}
\label{mino}
\left\lbrace\begin{array}{rcl}
\displaystyle{y''-(\underline{c}(\lambda))^{2}y}&=&0\qquad0<x<L \\
y(L)&=&0
\end{array}\right.
\end{equation*}
Following a similar argument as in the proof of the bound given in (\ref{cota1}) we obtain
\begin{equation}
\label{cota2}
h_{1}(\lambda)>C\frac{\underline{c}(\lambda)}{\lambda}\coth(-\underline{c}(\lambda)L).
\end{equation}
Hence, since $\underline{c}(\lambda)$ and $\overline{c}(\lambda) $ have the same asymptotic behavior as $\lambda\to-\infty$, namely $\sqrt{-\lambda}$, using the bounds in (\ref{cota1}) and (\ref{cota2}) we obtain that $h_{1}(\lambda)=O((-\lambda)^{-1/2})$ as $\lambda\to-\infty$.
Finally, as $h_{1}(\lambda)$ is a decreasing function, we get that $h_{1}:]-\infty,\eta_{-0}[\to]-\infty,0[$ is a surjective function.
$\square$
\section{Proofs of Theorems \ref{principal} and \ref{principal2}.}\label{demoprincipales1}
In this section we develop the proofs of Theorems \ref{principal} and \ref{principal2} through the study of function $h_{1}(\lambda)$.
The collection of eigenvalues in \eqref{orden} is obtained through the analysis of the intersections of the graphs of the functions $h_{1}(\lambda)$ and $h_{2}(\lambda)$.
The monotone behavior of the function $h_{1}(\lambda)$ is fundamental for this part.
The main idea of the proof was presented in the proof of Lemma \ref{primervalorproio}.
We use Lemmas \ref{zeros2} and \ref{zeros1} for the oscillatory results of the eigenfunctions in Theorem \ref{principal} and for the separation Theorem \ref{principal2}.
We begin by describing the eigenvalues of \eqref{orden} and continue working with $f(x;\lambda)$, solution of \eqref{main2}.
\noindent{\bf Proof of Theorem \ref{principal}:} From Corollary \ref{soloreales} we know that the spectrum is a subset of the real line.
Then, through graph intersection analysis of functions $h_{1}(\lambda)$ and $h_{2}(\lambda)$ we obtain the result.
Consider $\lambda\in\mathcal{B}_{n}=]\eta_{n-1},\eta_{n}[$, with $n=1,2,\dots$.
From the boundary condition at $x=0$ of \eqref{aux1}, Sturm's first comparison theorem gives that $f(x;\lambda)$ has at least $n$ zeros in $]0,L[$.
Let $z_{l}(\lambda)$ be the $l$-th zero of $f(x;\lambda)$, as in Lemma \ref{zeros1}, with $l\geq n$, and assume that $f(x;\lambda)$ does not change sign in $]z_{l}(\lambda),L[$.\\
Similar to the proofs of Lemmas \ref{primervalorproio} and \ref{zeros2}, we have
\begin{equation}
\label{1}
f^{2}(L;\lambda)\frac{\partial}{\partial\lambda}\varphi(L,\lambda)=-p(z_{l}(\lambda))f'(z_{l}(\lambda);\lambda)\frac{\partial}{\partial\lambda}f(z_{l}(\lambda);\lambda)-\int_{z_{l}(\lambda)}^{L}k^{2}p'f^{2}d\tau,
\end{equation}
with $\varphi(L,\lambda)=p(L)f'(L;\lambda)/f(L;\lambda)$.
As $f(z_{l}(\lambda);\lambda)=0$, using a limiting argument similar to that in \eqref{limite}, we obtain
\begin{equation*}
\begin{array}{rcr}
f^{2}(L;\lambda)\left( \frac{d}{d\lambda}\varphi(L,\lambda) -\frac{1}{\lambda}\varphi(L,\lambda) \right)&=&-p(z_{l}(\lambda))f'(z_{l}(\lambda);\lambda)\frac{\partial}{\partial\lambda}f(z_{l}(\lambda);\lambda) -\\
\\
& &\displaystyle{-\frac{1}{\lambda}\int_{z_{l}(\lambda)}^{L}p(f'^{2}+k^{2}f^{2})d\tau}
\end{array}
\end{equation*}
From the definition of $\varphi(L;\lambda)$, the above identity can be written as follows
\begin{equation}
\label{sigder}
\begin{array}{rcr}
\displaystyle{\frac{d}{d\lambda}\left( \frac{\varphi(L,\lambda)}{\lambda} \right)}&=&\displaystyle{-\frac{1}{f^{2}(L;\lambda)}\left\lbrace \frac{p(z_{l}(\lambda))f'(z_{l}(\lambda);\lambda)\frac{\partial}{\partial\lambda}f(z_{l}(\lambda);\lambda)}{\lambda}+\right.}\\
& & \displaystyle{\left.+\frac{1}{\lambda^{2}}\int_{z_{l}(\lambda)}^{L}p(f'^{2}+k^{2}f^{2})d\tau\right\rbrace}.
\end{array}
\end{equation}
On the other hand, from the implicit function theorem (differentiating the identity $f(z_{l}(\lambda);\lambda)=0$ with respect to $\lambda$), we have that
$$ \frac{d z_{l}(\lambda)}{d\lambda}=-\displaystyle{ \frac{\frac{\partial}{\partial\lambda}f(z_{l}(\lambda);\lambda)}{f'(z_{l}(\lambda);\lambda)} } $$
Using Lemma \ref{zeros1}, we know that $\frac{d z_{l}}{d\lambda}<0$; hence, from \eqref{sigder}, we obtain that $h_{1}(\lambda)$ in (\ref{funciones}) is decreasing on each branch $\mathcal{B}_{n}=]\eta_{n-1},\eta_{n}[$, with $n=1,2,\dots$.
Moreover, as $\lim_{\lambda\searrow\eta_{n-1}}h_{1}(\lambda)=\infty$ and $\lim_{\lambda\nearrow\eta_{n}}h_{1}(\lambda)=-\infty$, we obtain that $h_{1}:\mathcal{B}_{n}\to\mathbb{R}$ is a surjective function.
As $h_{2}(\lambda)$ is an increasing function, in each $\mathcal{B}_{n}$ there exists exactly one intersection of the graphs, denoted by $\lambda_{n}$.
Knowing that $f(x;\lambda)$ is a solution of \eqref{main2}, from the definition of $h_{2}(\lambda)$ we get that $\lambda_{n}$ is a real eigenvalue of \eqref{problema}.\\
Now, we prove the oscillatory results for the eigenfunctions by contradiction.
Assume that $l>n$; then there exist zeros $z_{n}(\lambda)<z_{l}(\lambda)$ of $f(x;\lambda)$.
Taking limits,
$$ \displaystyle{L=\lim_{\lambda\nearrow \eta_{n}}z_{n}(\lambda)< \lim_{\lambda\nearrow \eta_{n}}z_{l}(\lambda)< L,}$$
we reach a contradiction.
Therefore, the eigenfunction $f(x;\lambda_{n})$ changes sign exactly $n$ times in $]0,L[$.
Hence, we have the following collection of eigenvalues $\lambda_{1}<\lambda_{2}<\lambda_{3}<\dots$.
The existence of $\eta_{-0}<\lambda_{-0}<0<\lambda_{0}<\eta_{0}$ was established in Lemma \ref{primervalorproio}.
To finish our proof, we show the existence of $\lambda_{-1}<\eta_{-0}$.
From Lemma \ref{sobrelafuncion} we know that $\lim_{\lambda\to-\infty}h_{1}(\lambda)=0$.
Knowing that $\lim_{\lambda\to-\infty}h_{2}(\lambda)=\beta_{1}<0$, the monotonicity of the functions $h_{1}(\lambda)$ and $h_{2}(\lambda)$ gives the existence and uniqueness of $\lambda_{-1}$, solution of \eqref{laecuacion}.
For the oscillatory result on $f(x;\lambda_{-1})$ we use Lemma \ref{numerodezeronegativos}.
Finally, from the existence of $\lambda_{0}>0$ such that $f(x;\lambda_{0})>0$ in $]0,L[$, we have the result in Corollary \ref{soloreales}. Hence, all eigenvalues of \eqref{problema} are obtained as the graph intersections of $h_{1}(\lambda)$ and $h_{2}(\lambda)$ with $\lambda\in\mathbb{R}$.
$\square$
\noindent{\bf Proof of Theorem \ref{principal2}:} Consider $\mathcal{B}_{l}$ in (\ref{ramas}) with $l\geq 1$.
Given $\lambda\in\mathcal{B}_{l}$, from Theorem \ref{principal} we get that $f(x;\lambda)$ has exactly $l$ zeros in $]0,L[$.
Now, consider $\lambda_{*}<\lambda^{*}$, both in $\mathcal{B}_{l}$.
From Lemma \ref{zeros1} we know that $z_{l}(\lambda^{*})<z_{l}(\lambda_{*})$.
Since the ODE in (\ref{pabstract}) is linear, we can assume that $\overline{f}(x)=f(x;\lambda^{*})$ and $\underline{f}(x)=f(x;\lambda_{*})$ are positive functions in $]z_{l}(\lambda^{*}),L]$ and $]z_{l}(\lambda_{*}),L]$ respectively.
For notational simplicity we write $z_{l-1}=z_{l-1}(\lambda_{*})$ and $z_{l}=z_{l}(\lambda_{*})$.
Using Green's formula in $[z_{l-1},z_{l}]$, we obtain the following identity:
\begin{equation}
\label{signo}
p(z_{l})\overline{f}(z_{l})\underline{f}'(z_{l})-p(z_{l-1})\overline{f}(z_{l-1})\underline{f}'(z_{l-1})
=\displaystyle{(\lambda^{*}-\lambda_{*})\int_{z_{l-1}}^{z_{l}} r(\tau)\underline{f}(\tau)\overline{f}(\tau)d\tau}
\end{equation}
Knowing that $\underline{f}(x)>0$ in $]z_{l},L[$ and $\underline{f}(z_{l})=0$, it holds that $\underline{f}'(z_{l})>0$.
Similarly, since $z_{l-1}<z_{l}$ are two consecutive zeros of $\underline{f}$, we have $\underline{f}<0$ in $]z_{l-1},z_{l}[$ and $\underline{f}'(z_{l-1})<0$.
On the other hand, we know that $z_{l}(\lambda^{*})<z_{l}$, and therefore $\overline{f}(z_{l})>0$.
Assuming that $z_{l}(\lambda^{*})<z_{l-1}$, we would have $\overline{f}(x)>0$ in $]z_{l-1},z_{l}[$, and therefore $(\lambda^{*}-\lambda_{*})\int_{z_{l-1}}^{z_{l}} r(\tau)\underline{f}(\tau)\overline{f}(\tau)d\tau<0 $, a contradiction with \eqref{signo}.
Therefore, it holds that $z_{l-1}<z_{l}(\lambda^{*})<z_{l}$.
Following the same ideas, the proof of the general case is obtained.
$\square$
\noindent{\bf Proof of Corollary \ref{separacion}:} Using Theorem \ref{principal} we have that $f_{n}(x)$ has exactly $n$ zeros in $]0,L[$, which we denote by $z_{1}^{n}<z_{2}^{n}<\dots<z_{n}^{n}$.
Similarly, $f_{n+1}$ has exactly $n+1$ zeros, $z_{1}^{n+1}<z_{2}^{n+1}<\dots<z_{n+1}^{n+1}$.
Using Lemma \ref{zeros1} we have that $z_{1}^{n+1}<z_{1}^{n}$.
Now, using Theorem \ref{principal2} it holds that $z_{1}^{n}<z_{2}^{n+1}<z_{2}^{n}$.
Hence $z_{1}^{n+1}<z_{1}^{n}<z_{2}^{n+1}$.
Proceeding similarly, the proof of the general case is obtained.
$\square$
\section{Proof of Theorem \ref{asintotico}.
Asymptotic result for the spectrum}\label{demoprincipales2}
Now, we apply the Liouville transformation to obtain a normalized regular SLP.
Thus, the asymptotic behavior can be obtained through classical results for second order linear ODEs.
For
$$ P(x)=(\tilde{r}\tilde{p})^{1/2};\qquad G(x)=(\tilde{p}\tilde{r})^{-1/4} $$
consider the following change of variable and transformation
\begin{equation}
\label{liouville}
t=\int_{0}^{x}P(u)du;\qquad g(x)=G(x)z(t)\qquad\textrm{(Liouville transformation)}.
\end{equation}
For
$$ R(t)=\left[\tilde{p}^{1/4}\tilde{r}^{-3/4}\frac{d}{dx}\tilde{p}\right]\frac{d}{dx}(\tilde{p}\tilde{r})^{-1/4} $$
and $z(t)$ in \eqref{liouville} we obtain the following second order ODE
\begin{equation}
\label{normal}
\ddot{z}(t)+\left[Q_{1}(t)+R(t) \right]z(t)=0,\qquad (0<t<t_{1}),
\end{equation}
with $Q_{1}(t)=\lambda-\frac{\tilde{q}(x)}{\tilde{r}(x)}$ and $t_{1}$, the value of $t$ at $x=L$.
On the other hand, from \eqref{liouville} we get
\begin{equation}
\label{boundary}
\begin{array}{rcl}
G(0)\tilde{p}(0)\dot{z}(0)&=&-\left[ \tilde{\alpha}_{1}-G'(0) \right]\frac{1}{G(0)}z(0)\\
G(L)\tilde{p}(L)\dot{z}(t_{1})&=&-\left[ \tilde{\beta}_{1}-G'(L) \right]\frac{1}{G(L)}z(t_{1})
\end{array}
\end{equation}
Finally, as $G(0)\tilde{p}(0)$ and $G(L)\tilde{p}(L)$ are positive constants, using the classical results for regular SLPs given in \cite{eastham1970theory}, Theorem 5.5.1, the results of Theorem \ref{asintotico} on the asymptotic behavior of the eigenvalues of \eqref{problema} follow.
$\square$
\section{Applications. Studies on stability in three-layer Hele-Shaw flows}
\label{applications}
Using the results presented in the previous sections, in this part we consider a linear stability problem for the interfaces of a three-layer Hele-Shaw flow. This problem is a model to study a secondary oil recovery process and was presented in \cite{gorell1983theory}.
This class of problems arises when oil is displaced by water through a porous medium, producing the ``fingering'' phenomenon.
Only for the sake of completeness, we give an introduction to the derivation of the model.
See \cite{carasso1998optimal} for more details on the derivation and the physical considerations.
In \cite{gorell1983theory} the authors S. Gorell and G. M. Homsy consider a displacement process by a less viscous fluid containing a solute, and present a policy under which the Saffman-Taylor-Chouke instability can be minimized.
As the concentration of the solute is not constant, and since it is possible to relate the concentration to the viscosity, the porous medium can be considered saturated by three immiscible fluids: water, polymer-solute and oil.
The equations which govern the flow through a porous medium are
\begin{equation}
\label{sys}
\nabla\cdot\overrightarrow{V}=0,\qquad \nabla P=-\mu \overrightarrow{V},\qquad \frac{\partial \mu}{\partial t}+\overrightarrow{V}\cdot \nabla \mu=0,
\end{equation}
i.e.,
conservation of mass, Darcy's law and the advection of viscosity, under the assumption that adsorption, dispersion and diffusion are neglected.
This system admits the following steady displacement solution
\begin{equation}
\label{basicsol}
u=U;\quad v=0;\quad \mu=\mu_{0}(x_{1}-Ut);\quad P=-U\int_{x_{0}}^{x_{1}}\mu_{0}(x'-Ut)dx'=P_{0}
\end{equation}
For the stability analysis of this solution, linear perturbations are considered, obtaining
\begin{equation}
\label{sysli}
\nabla\cdot\overrightarrow{V'}=0,\qquad \frac{\partial P'}{\partial x}=-\mu' U-\mu_{0}u',\qquad \frac{\partial P'}{\partial y}=-\mu_{0}v' ,\qquad \frac{\partial \mu'}{\partial t}+u'\frac{d\mu_{0}}{dx}=0,
\end{equation}
where the coordinate $x_{1}$ has been transformed into the moving reference frame $x=x_{1}-Ut$, and $u',\ v',\ P'$ and $\mu'$ denote the components of the linear perturbation.
In what follows we consider that the lines $x=0$ and $x=L$ represent the interface between the fluids.
Since (\ref{sysli}) is linear, the perturbations can be represented by its Fourier integral.
Considering a typical wave component of the form (\emph{normal mode})
\begin{equation}
\label{ansatz}
(u',v',p',\mu')=(f(x),\tau(x),m(x),n(x))\cdot e^{iky+\sigma t},
\end{equation}
where $k\in\mathbb{R}$ denotes the wave number and $\sigma\in\mathbb{C}$ denotes the growth rate of the perturbation of our basic configuration, and assuming that they are piecewise smooth functions, the ansatz (\ref{ansatz}) is consistent with (\ref{sysli}) provided
\begin{equation}
\label{compatible}
\tau(x)=ik^{-1}f'(x),\quad m(x)=-k^{-2} \mu_{0}(x)f'(x),\quad n(x)=-\sigma^{-1}\mu'_{0}(x)f(x),
\end{equation}
with $f(x)$ satisfying the following ODE
$$(\mu_{0}(x) f')'-k^{2}\mu_{0}(x) f =-\frac{k^{2}U\mu'_{0}(x)}{\sigma}f,\qquad \textrm{when}\ x\neq0,\ x\neq L\ \textrm{and}\ \sigma\neq 0.$$
Here $(\,)'$ denotes the derivative with respect to $x$.
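The reduction above can be verified symbolically. The following sketch (our own illustration; the symbol names are ours and not taken from \cite{gorell1983theory}) checks that the ansatz \eqref{ansatz}, together with the relations \eqref{compatible}, satisfies the linearized system \eqref{sysli} exactly when $f$ satisfies the ODE above:
\begin{verbatim}
import sympy as sp

x, y, t = sp.symbols('x y t')
k, sigma, U = sp.symbols('k sigma U', nonzero=True)
f, mu0 = sp.Function('f')(x), sp.Function('mu0')(x)

E  = sp.exp(sp.I*k*y + sigma*t)          # normal-mode factor e^{iky + sigma t}
u  = f*E                                 # u'
v  = (sp.I*f.diff(x)/k)*E                # v'  = tau(x) e^{...}
P  = (-mu0*f.diff(x)/k**2)*E             # P'  = m(x)   e^{...}
mu = (-mu0.diff(x)*f/sigma)*E            # mu' = n(x)   e^{...}

print(sp.simplify(u.diff(x) + v.diff(y)))            # continuity:  0
print(sp.simplify(P.diff(y) + mu0*v))                # y-momentum:  0
print(sp.simplify(mu.diff(t) + u*mu0.diff(x)))       # advection:   0
ode = (mu0*f.diff(x)).diff(x) - k**2*mu0*f + k**2*U*mu0.diff(x)*f/sigma
print(sp.simplify(-(k**2/E)*(P.diff(x) + mu*U + mu0*u) - ode))   # x-momentum <=> ODE: 0
\end{verbatim}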
Considering the following notations
\begin{equation}
\label{coeff}
\begin{array}{ccl}
\alpha_{1}(k)&=&k^{2}\left(\frac{S}{U}k^{2}+\mu_{1}-\mu_{0}(0^{+}) \right)\\
\alpha_{2}(k)&=&\mu_{1}k\\
\beta_{1}(k) &=&k^{2}\left(\mu_{2}-\mu_{0}(L^{-})-\frac{T}{U}k^{2} \right)\\
\beta_{2}(k) &=&\mu_{2}k
\end{array}
\end{equation}
where $\mu_{1}$ and $\mu_{2}$ denote the viscosities of water and oil, respectively, $S$ is the interfacial tension between water and polymer and $T$ that between polymer and oil, the dynamic and kinematic conditions at material points on the interfaces can be written as follows
$$\mu_{0}(0)f'(0)=\left(\alpha_{1}(k)\frac{U}{\sigma} +\alpha_{2}(k)\right)f(0);\qquad \mu_{0}(L)f'(L)=\left(\beta_{1}(k)\frac{U}{\sigma} -\beta_{2}(k)\right)f(L)$$
See Section 2 in \cite{gorell1983theory} for details on the above approximations.
Finally, considering
\begin{equation}
\label{growth}
\lambda=\frac{U}{\sigma},
\end{equation}
as a spectral parameter, we obtain the following boundary value problem with the spectral parameter in the boundary conditions
\begin{equation}
\label{problemintro}
(P)\left\lbrace\begin{array}{rcl}
\displaystyle{(\mu_{0}(x)f')'-(k^{2}\mu_{0}(x)-\lambda k^{2}\mu'_{0}(x) )f}&=&0\qquad 0<x<L \\
\mu_{0}(0)f'(0)&=&(\alpha_{1}(k)\lambda +\alpha_{2}(k))f(0)\\
\mu_{0}(L)f'(L)&=&(\beta_{1}(k)\lambda -\beta_{2}(k))f(L)
\end{array}\right.
\end{equation}
We remark that $\mu_{0}(x)$ is the viscosity of the polymer-solute, and therefore the condition $\mu_{0}'(x)>0$ is neither restrictive nor unrealistic.
Therefore, for problem \eqref{problemintro} we can use the results presented in this article.
This way, we can recognize the following behavior for the spectrum of (\ref{problemintro}):
\begin{coro}
\label{principalaplicacion}
The spectrum of (\ref{problemintro}) can be ordered according to the following cases:
\begin{itemize}
\item[i)] If $\alpha_{1}(k)<0$ and $\beta_{1}(k)>0$, then
$$\quad 0<\lambda_{0}<\lambda_{1}<\lambda_{2}<\dots$$
\item[ii)] If $\alpha_{1}(k)\beta_{1}(k)\geq0$, then
$$\quad \lambda_{-0}<0<\lambda_{0}<\lambda_{1}<\lambda_{2}<\dots$$
\item[iii)] If $\alpha_{1}(k)>0$ and $\beta_{1}(k)<0$, then
$$\quad\lambda_{-1}<\lambda_{-0}<0<\lambda_{0}<\lambda_{1}<\lambda_{2}<\dots$$
\end{itemize}
Moreover, the eigenfunction $f_{l}(x)$ has exactly $\vert l\vert$ zeros in $]0,L[$.
\end{coro}
Case i) is proved using the classical results presented in \cite{ince1962ordinary}; case ii) is treated in Appendix \ref{otroespectro}; finally, case iii) corresponds to the work developed in this paper.
\begin{rem}
For the identification of the cases presented in Corollary \ref{principalaplicacion}, from the definition of the physical parameters in \eqref{coeff} we consider
\begin{equation}
\label{rangos}
\begin{array}{rcl}
\underline{k}^{2}&=&\min\left\lbrace \frac{U}{S}(\mu_{0}(0^{+})-\mu_{1}),\ \frac{U}{T}(\mu_{2}-\mu_{0}(L^{-})) \right\rbrace\\
\\
\overline{k}^{2} &=&\max\left\lbrace \frac{U}{S}(\mu_{0}(0^{+})-\mu_{1}),\ \frac{U}{T}(\mu_{2}-\mu_{0}(L^{-})) \right\rbrace
\end{array}
\end{equation}
Thus, case i) occurs for $k<\underline{k}$, case ii) for $k\in[\underline{k},\overline{k}]$, while case iii) occurs for $k>\overline{k}$.
\end{rem}
Concerning the amplitude of the perturbative waves, from \eqref{growth}, when $k<\underline{k}$ we have that all growth rates are positive.
This might lead us to think that the most unstable case is for $k<\underline{k}$.
We remark that the degree of instability is set by the magnitude $\sigma_{0}=\frac{U}{\lambda_{0}}$.
In subsection \ref{subsec_numerico}, numerical computations are presented to provide an approximate understanding of the behavior of $\sigma_{0}$ when changing the wave number.
The aim of the numerical computations is to attain information on the dependency of the spectrum on the physical parameters.
\subsection{Numerical experiments for a linear middle viscosity profile}\label{subsec_numerico}
In this part we present numerical approximations of the eigenvalues for the particular case of a linear profile for the viscosity of the intermediate fluid; specifically, in \eqref{problemintro} we take $\mu_{0}(x)=ax+b$, where
$$ a=\frac{(\mu_{2}-\mu_{1})-(J_{1}+J_{2}) }{L};\qquad b=\mu_{1}+J_{1}, $$
so that $\mu_{0}(0^{+})=\mu_{1}+J_{1}$ and $\mu_{0}(L^{-})=\mu_{2}-J_{2}$.
Here, $J_{1}$ and $J_{2}$ denote the viscosity jumps at the interfaces $x=0$ and $x=L$, respectively.
The results are presented in Tables \ref{cuadro1} and \ref{cuadro2}.
The numerical approximations are obtained using the nonlinear equation \eqref{laecuacion}.
The values for the physical parameters are the following:
$$S=1;\ T=1;\ L=0.1;\ \mu_{1}=1;\ \mu_{2}=2;\ J_{1}=0.1;\ J_{2}=0.1$$
In the experiments developed we studied the spectrum of \eqref{problemintro} for the cases $U=1$ (Table \ref{cuadro1}) and $U=10$ (Table \ref{cuadro2}).
The values of the spectral parameter coefficients on the boundary conditions are the following:
\begin{equation}
\label{condicionesdeborde}
\begin{array}{lcll}
\alpha_{1}(k)=k^{2}\left[k^{2}-0.1 \right];& & \beta_{1}(k)=k^{2}\left[ 0.1-k^{2} \right]&\textrm{(case $U=1$)}\\
\\
\alpha_{1}(k)=0.1 k^{2}\left[k^{2}-1 \right];& & \beta_{1}(k)=0.1 k^{2}\left[ 1-k^{2} \right]&\textrm{(case $U=10$)}
\end{array}
\end{equation}
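As an independent cross-check of the tabulated values (a sketch of ours, not the computation used for the tables), the eigenvalues can also be located as the zeros of the characteristic function $\mu_{0}(L)f'(L;\lambda)-(\beta_{1}\lambda-\beta_{2})f(L;\lambda)$, where $f(\cdot;\lambda)$ is obtained by direct numerical integration from $x=0$ with $f(0)=\mu_{0}(0)$ and $f'(0)=\alpha_{1}\lambda+\alpha_{2}$, which enforces the boundary condition at $x=0$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

S, T, L, mu1, mu2, J1, J2 = 1.0, 1.0, 0.1, 1.0, 2.0, 0.1, 0.1
a = ((mu2 - mu1) - (J1 + J2)) / L          # slope of mu0
b = mu1 + J1                               # mu0(0+)
mu0 = lambda x: a*x + b

def charfun(lam, k, U):
    """mu0(L) f'(L;lam) - (beta1*lam - beta2) f(L;lam); zero exactly at eigenvalues."""
    alpha1 = k**2*(S/U*k**2 + mu1 - mu0(0.0)); alpha2 = mu1*k
    beta1  = k**2*(mu2 - mu0(L) - T/U*k**2);   beta2  = mu2*k
    rhs = lambda x, w: [w[1], ((k**2*mu0(x) - lam*k**2*a)*w[0] - a*w[1])/mu0(x)]
    sol = solve_ivp(rhs, (0.0, L), [mu0(0.0), alpha1*lam + alpha2],
                    rtol=1e-10, atol=1e-12)
    return mu0(L)*sol.y[1, -1] - (beta1*lam - beta2)*sol.y[0, -1]

U, k = 1.0, 1.0
grid = np.linspace(-10.0, 200.0, 2001)
vals = np.array([charfun(l, k, U) for l in grid])
roots = [brentq(charfun, grid[i], grid[i+1], args=(k, U))
         for i in range(len(grid)-1) if vals[i]*vals[i+1] < 0]
print(roots)   # should approximately reproduce the k = 1 row of Table 1
\end{verbatim}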
For the linear profile case, the solution of the ODE in \eqref{problemintro} can be represented through the Kummer and Tricomi confluent hypergeometric functions:
\begin{equation}
\label{hyper}
\left\lbrace \begin{array}{ccl}
\Phi(\alpha,1;z)&=& 1+\displaystyle{\sum_{l=1}^{\infty}\frac{(\alpha)_{l}}{(l!)^{2}}z^{l}}\\
\\
\Psi(\alpha,1;z)&=&\displaystyle{ \frac{1}{\Gamma(\alpha)}\left( \Phi(\alpha,1;z)\ln(z)+\sum_{l=0}^{\infty}\frac{(\alpha)_{l}}{(l!)^{2}}\frac{\Gamma'(\alpha+l)}{\Gamma(\alpha+l)}z^{l} \right) }
\end{array}\right.
\end{equation}
Here $(\alpha)_{l}$ denotes the Pochhammer symbol and $\Gamma$ denotes the Euler gamma function.
The fact that $h_{1}(\lambda)$ can be expressed in terms of these special functions follows directly from the substitution
\begin{equation}
\label{sus}
z=\frac{2k}{a}(ax+b);\qquad f(x)=e^{-z/2}g(z)
\end{equation}
Now, replacing in the ODE \eqref{problemintro}, the following Kummer equation is obtained
\begin{equation}
\label{kummer}
z\frac{d^{2}g}{dz^{2}}+(1-z)\frac{dg}{dz}-\left( \frac{1}{2}- \frac{k}{2}\lambda \right)g=0,
\end{equation}
and therefore, the solution can be defined as follows
\begin{equation}
\label{gen}
g(z;\lambda)=C_{1}\Phi((1-\lambda k)/2,1;z)+C_{2}\Psi((1-\lambda k)/2,1;z)
\end{equation}
with $\Phi$ and $\Psi$ in \eqref{hyper} and $\alpha=(1-\lambda k)/2$.
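The reduction to the Kummer equation can be checked symbolically. Writing everything in the variable $z$ (so that $\mu_{0}=az/(2k)$ and $d/dx=2k\,d/dz$), the following sketch verifies that the substitution \eqref{sus} transforms the ODE in \eqref{problemintro} into \eqref{kummer}:
\begin{verbatim}
import sympy as sp

z, k, a = sp.symbols('z k a', positive=True)
lam = sp.symbols('lambda')
g = sp.Function('g')

mu0 = a*z/(2*k)                   # mu0(x) = a x + b written through z = (2k/a)(a x + b)
f = sp.exp(-z/2)*g(z)
D = lambda w: 2*k*sp.diff(w, z)   # d/dx = 2k d/dz

ode = D(mu0*D(f)) - (k**2*mu0 - lam*k**2*a)*f   # the ODE in (P); note mu0'(x) = a
kummer = z*g(z).diff(z, 2) + (1 - z)*g(z).diff(z) \
         - (sp.Rational(1, 2) - k*lam/2)*g(z)
print(sp.simplify(ode/(2*a*k*sp.exp(-z/2)) - kummer))   # expected output: 0
\end{verbatim}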
\begin{table}[!hbt]
\begin{center}
\begin{tabular}{| c | c | c | c | c| c |c | c |}
\hline
$k$ & $\alpha_{1}(k)$& $\beta_{1}(k)$ &$\lambda_{-1}$ & $\lambda_{-0}$ &$\lambda_{0}$ & $\lambda_{1}$ & $\lambda_{2}$ \\
\hline
\hline
1 & 0.9 & -0.9 & -6.46019 & -3.27535 & 14.4968 & 68.1856 & 158.906 \\
\hline
2 & 15.6 &-15.6 & -0.51684 & -0.29893 & 6.11938 & 19.6737 & 42.3671 \\
\hline
3 & 80.1 & -80.1 & -0.143895 & -0.084287 & 3.81799 & 9.86499 & 19.9527 \\
\hline
4 & 254.4 & -254.4 & -0.0601436 & -0.0347156 & 2.96087 & 6.3777 & 12.0525 \\
\hline
5 & 622.5 &-622.5 & -0.0307741 & -0.0175412 & 2.55349 & 4.75521 & 8.38692 \\
\hline
6 & 1292.4 & -1292.4 & -0.017825 & -0.010068 & 2.32709 & 3.87244 & 6.39387 \\
\hline
7 & 2396.1 & -2396.1 & -0.0112375 & -0.00630497 & 2.18659 & 3.34028 & 5.19194 \\
\hline
8 & 4089.6 & -4089.6 &-0.007536 & -0.0042069 & 2.09175 & 2.99538 & 4.4122 \\
\hline
9 & 6552.9 & -6552.9 & -0.0052976 & -0.00294567 & 2.0233 & 2.75939 & 3.87815 \\
\hline
\hline
\end{tabular}
\caption{\label{cuadro1} Approximations considering $U=1$.
Here $S/U=1$ and $T/U=1$.
In our numerical experiment we use: Software \emph{Mathematica 9.0}; Library functions: Hypergeometric1F1[a,b,z]; HypergeometricU[a,b,z] for the Kummer and Tricomi confluent hypergeometric functions respectively.}
\end{center}
\end{table}
\begin{table}[!hbt]
\begin{center}
\begin{tabular}{| c | c | c | c | c| c |c | c |}
\hline
$k$ & $\alpha_{1}(k)$& $\beta_{1}(k)$ &$\lambda_{-1}$ & $\lambda_{-0}$ &$\lambda_{0}$ & $\lambda_{1}$ & $\lambda_{2}$ \\
\hline
\hline
1 & 0 & 0 & - & - & 4.96968 &26.7236 & 81.7087 \\
\hline
2 & 1.2 & -1.2 & -10.5131 & -6.14168 & 4.38068 & 16.2001 & 38.3316 \\
\hline
3 & 7.2 & -7.2 & -1.84609 & -1.06909 & 3.51446 & 9.30549 & 19.3001 \\
\hline
4 & 24 & -24 & -0.677423 & -0.38866 & 2.88683 & 6.22874 & 11.8694 \\
\hline
5 & 60 & -60 & -0.329265 & -0.187139 & 2.5302 & 4.70374 & 8.31946 \\
\hline
6 & 126 & -126 & -0.186085 & -0.104951 & 2.31824 & 3.85155 & 6.36463 \\
\hline
7 & 235.2 & -235.2 & -0.115748 & -0.0648894 & 2.18267 & 3.33682 & 5.17781 \\
\hline
8 & 403.2 & -403.2 & -0.076998 & -0.049627 & 2.08979 & 2.99072 & 4.40481 \\
\hline
9 & 648 & -648 & -0.0538465 & -0.0299318 &2.02221 & 2.75694 & 3.87404 \\
\hline
\hline
\end{tabular}
\caption{\label{cuadro2} Approximations considering $U=10$.
Here $S/U=0.1$ and $T/U=0.1$.
The approximations for the values of $\lambda_{l}$ are obtained as in Table \ref{cuadro1}.}
\end{center}
\end{table}
In the results listed in Tables \ref{cuadro1} and \ref{cuadro2} we note a monotone behavior of the eigenvalues as the wave number increases.
We believe that this fact is related to the asymptotic behavior of the eigenvalues as functions of the wave number $k$:
since $\beta_{1}(k)\sim -\frac{T}{U}k^{4}$ and $\beta_{2}(k)=\mu_{2}k$, we have $h_{2}(\lambda,k)\sim -\frac{T}{U}k^{4}-\frac{\mu_{2}k}{\lambda}$.
This way, the intersections of the graphs of $h_{1}(\lambda,k)$ and $h_{2}(\lambda,k)$ tend to $\eta_{n}(k)$, the eigenvalues of the auxiliary problem \eqref{aux1}.
\section{Conclusions and comments}\label{sec_com}
Using classical tools for second order ODEs and elementary tools of mathematical analysis, we have obtained a set of results characterizing the eigenvalues and eigenfunctions of an SLP with the spectral parameter in both boundary conditions.
To reach our main results, we developed a list of lemmas that correspond to elementary technical adaptations of results for regular SLPs, e.g. Sturm's comparison theorems and the separation theorem, among others; see Section \ref{pre1}.
Concerning the techniques used for the description of the spectrum presented in other articles on the subject (see \cite{amara1999sturm,binding1994sturm} and their references), in our analysis we have considered a variant for the study of the characteristic equation, see equation \eqref{laecuacion}.
This variant corresponds to the definition of the function $h_{1}(\lambda)$ in \eqref{funciones}.
The reason for this consideration is in the construction of the associated ODE through which we obtained the sign of the derivative and, therefore, the monotone behavior of $h_{1}(\lambda)$, see Equation \eqref{ODE} and Equation \eqref{sigder}.
This behavior of $h_{1}(\lambda)$ allowed us to order the real eigenvalues of \eqref{problema} bounding them as follows:
\begin{equation}
\label{orden2}
\lambda_{-1}<\eta_{-0}<\lambda_{-0}<0<\lambda_{0}<\eta_{0}<\lambda_{1}<\eta_{1}<\lambda_{2}<\eta_{2}<\dots
\end{equation}
where $\eta_{l}$ are the eigenvalues of an auxiliary SLP given in \eqref{aux1}.
About this auxiliary problem, we note that the fixed boundary condition in $x=0$ allows us to use directly some classical tools, e.g. Sturm's comparison criteria.
In order to show that the spectrum of our problem \eqref{problema} is a subset of the real line, we use a Crum-Darboux type transformation, see Corollary \ref{soloreales}.
Also, through a Liouville transformation we have shown that the main problem \eqref{problema} has an associated regular SLP, and we have obtained information on the spectrum through results on regular SLPs, see Theorem \ref{asintotico}.
For the oscillatory results on the eigenfunctions, we have developed auxiliary results using the implicit function theorem.
From these results, we obtained a description of the behavior of the zeros of the functions that satisfy problem \eqref{main2}, see Lemmas \ref{zeros1} and \ref{zeros2}.
By means of these lemmas and Sturm's oscillation results we proved the oscillatory behavior of the eigenfunctions indicated in Theorem \ref{principal}, the separation Theorem \ref{principal2} (for the solutions of \eqref{main2}) and, as a direct consequence, Corollary \ref{separacion}, which corresponds to the separation result for the eigenfunctions.
The results on \eqref{problema} have been applied to a three-layer Hele-Shaw flow model for the study of the hydrodynamic stability of planar interfaces, see Section \ref{applications}.
This model was presented in \cite{gorell1983theory}, where the authors developed a theory which is able to describe the optimal policy which, if followed, minimizes the effects of the Saffman-Taylor-Chouke instability.
We comment that the initial motivation for our study of problem \eqref{problema} was to understand the non-regular SLP \eqref{problemintro}; therefore, the hypotheses on the coefficient functions of the ODE and on the signs of the coefficients in the boundary conditions are dictated by that problem, see Corollary \ref{principalaplicacion}.
Concerning the hydrodynamic model \eqref{problemintro}, articles on the subject usually treat the case $k<\underline{k}$, see \cite{carasso1998optimal} and references therein.
With the cases described in Corollary \ref{principalaplicacion} we have obtained a complete description of the behavior of the spectrum with respect to the wave number $k$.
For an (introductory) analysis of the behavior of the growth rate of the perturbative waves, in Section \ref{subsec_numerico} we consider a particular case for $\mu_{0}(x)$.
By numerical computations, we verified the statements of our main results, synthesized in Corollary \ref{principalaplicacion}.
The list of approximations presented in Tables \ref{cuadro1} and \ref{cuadro2} is backed up using the oscillatory results on the eigenfunctions.
Otherwise we would have no certainty that the order of the eigenvalues is correct.
In general, the instability and its description are related to the viscosity difference between the phases, see \cite{chuoke1959instability}, \cite{saffman1958penetration}.
In the numerical results we noted a high degree of dependence of the spectrum of \eqref{problemintro} on the physical parameters $L$, $S$, $T$, $U$, $\mu_{1}$ and $\mu_{2}$.
A more detailed analysis of these parameters requires tools other than the ones we have used in this work; we intend to develop such tools in future work.
Concerning the stability problem in a secondary oil recovery process, given the complexity of the phenomenon, we believe that the consideration of a linear profile for $\mu_{0}(x)$ answers the elementary needs, and that more general profiles $\mu_{0}(x)$ may lack practical interest unless explicit or numerical solutions allow a deeper analysis of the behavior of $\sigma_{n}(k)$ with respect to the physical parameters mentioned.
To understand the sensitivity of the response of the system to perturbations, a detailed analysis of the behavior of $\sigma_{0}=\sigma_{0}(k)$ must be developed.
We believe that by means of elementary analysis tools (e.g. the implicit function theorem) it is possible to obtain results describing $\sigma_{0}(k)$; we expect to develop such tools in a future article.
Concerning the geometrical approach to the spectrum and its relation with the oscillatory results on the eigenfunctions, we believe that it has reach for more general problems, and for problems that have been treated with other classes of tools.
In this direction, we put our tools to the test on the constant middle viscosity profile considered in Appendix \ref{susec_neutral}.
That problem was addressed in \cite{daripa2008studies}, where the existence of a particular class of waves, called neutral waves, is stated.
In Appendix \ref{susec_neutral} we prove that such waves do not exist, and we explore the question of whether the curves $\sigma_{n}(k)$ can intersect.
Such a phenomenon would violate the oscillation results for the eigenfunctions, see Table \ref{espectros1}.
Finally, we believe that our tools can be adapted to higher order non-selfadjoint SLPs, like the ones emerging in problems of hydrodynamics, see \cite{chandrasekhar1954characteristic, chandrasekhar1961hydrodynamic}.
\section*{\bf Acknowledgments}
The work of the second author (Oscar Orellana) was supported in part by Fondo Nacional de Desarrollo Cient\'ifico y Tecnol\'ogico (FONDECYT) under grant 1141260 and by Universidad T\'ecnica Federico Santa Mar\'ia, Valpara\'iso, Chile.
The statements made herein are solely the responsibility of the authors.
\begin{appendices}
\label{apendice}
\section{On the spectrum for a constant middle viscosity profile}\label{susec_neutral}
The oscillatory results on the eigenfunctions can be used to determine whether the spectrum of an eigenvalue problem has been completely described. In this direction, the aim of this part is to use the geometrical approach and the oscillatory results on the eigenvalues and eigenfunctions to fully understand the spectrum of problem \eqref{problemintro} for the particular case $\mu_{0}(x)=\mu$, a constant function on $[0,L]$.\\
This model was studied in \cite{daripa2008studies} to obtain, among other results, an upper bound for the growth rate in a simple unstable model of multi-layer Hele-Shaw flows.
For this particular case, \eqref{problemintro} is rewritten as follows:
\begin{equation}
\label{modelconstant}
\left\lbrace\begin{array}{rcl}
f''- k^{2}f&=&0\qquad 0<x<L \\
p(0)f'(0)&=&(\alpha_{1}(k)\lambda +\alpha_{2}(k))f(0)\\
p(L)f'(L)&=&(\beta_{1}(k)\lambda -\beta_{2}(k))f(L)
\end{array}\right.
\end{equation}
Since the ODE is non oscillatory, there are no eigenvalues $\lambda_{l}$ with $l\geq 2$.
This fact becomes clear by considering
\begin{equation}
\label{solmodelo}
f(x;\lambda)=\mu\cosh(kx)+\frac{(\alpha_{1}(k)\lambda+\alpha_{2}(k))}{k}\sinh(kx),
\end{equation}
the solution of the ODE in (\ref{modelconstant}), satisfying the boundary condition in $x=0$.
Thus
\begin{equation}
\label{fun1}
h_{1}(\lambda)=\frac{\mu k}{\lambda}\frac{k\mu\sinh(kL)+(\alpha_{1}(k)\lambda+\alpha_{2}(k))\cosh(kL)}{k\mu\cosh(kL)+(\alpha_{1}(k)\lambda+\alpha_{2}(k))\sinh(kL)}.
\end{equation}
In \cite{daripa2008studies} the author presents the dispersion relation between the parameters $\lambda$ and $k$ and develops a stability analysis.
We focus mainly on the postulated existence of neutral waves (see \cite{daripa2008studies}, Section III.A).
Using the functions in \eqref{funciones}, the dispersion relation can be obtained, following the geometric approach, through the algebraic equation
$$h_{1}(\lambda)-h_{2}(\lambda)=0.$$
On the other hand, we note that
\begin{equation*}
\lim_{\lambda\to\infty}h_{1}(\lambda)=\lim_{\lambda\to-\infty}h_{1}(\lambda)=0.
\end{equation*}
Using the branches of $h_{1}(\lambda)$, in this part we present a complete characterization of the spectrum of the model problem (\ref{modelconstant}) and clarify some comments presented in \cite{daripa2008studies}.
Following arguments similar to the proofs in the sections above, the next step is to determine the spectrum of the auxiliary problem.
To this end, when $\alpha_{1}(k)\neq 0$, the equation $f(L;\lambda)=0$ is equivalent to
\begin{equation}
\label{valores}
\lambda=-\frac{1}{\alpha_{1}(k)}\left( \frac{\mu k}{\tanh(kL)}+\alpha_{2}(k)\right).
\end{equation}
As the bracketed term is positive, the sign of $\lambda$ is determined by the sign of the boundary coefficient $\alpha_{1}(k)$.
Thus,
\begin{itemize}
\item If $\alpha_{1}(k)>0$ then $\lambda=\eta_{-0}$.
\item If $\alpha_{1}(k)<0$ then $\lambda=\eta_{0}$.
\end{itemize}
From the form of $f(x;\lambda)$ in (\ref{solmodelo}), for the constant viscosity profile $p(x)=\mu$ the spectrum of the (sub) auxiliary problem
\begin{equation*}
\left\lbrace\begin{array}{rcl}
f''- k^{2}f&=&0\qquad 0<x<L \\
f(0)&=&0\\
f(L)&=&0
\end{array}\right.
\end{equation*}
consists of only one element; this eigenvalue is determined by (\ref{valores}), and therefore $h_{1}(\lambda)$ has only three branches.\\
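The following small numerical sketch (with purely illustrative, non-physical parameter values) shows the resulting structure in the case $\alpha_{1}(k)>0$, $\beta_{1}(k)<0$ of the last row of Table \ref{espectros1}; it assumes that the eigenvalues are the solutions of $h_{1}(\lambda)=h_{2}(\lambda)$, or equivalently the zeros of $\mu f'(L;\lambda)-(\beta_{1}\lambda-\beta_{2})f(L;\lambda)$, with $f$ given by (\ref{solmodelo}):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

mu, k, L = 1.5, 2.0, 0.1
alpha1, alpha2, beta1, beta2 = 1.0, 2.0, -1.0, 4.0   # case alpha1 > 0, beta1 < 0

def F(lam):       # mu f'(L;lam) - (beta1 lam - beta2) f(L;lam), from (solmodelo)
    A = alpha1*lam + alpha2
    f  = mu*np.cosh(k*L) + A/k*np.sinh(k*L)
    fp = mu*k*np.sinh(k*L) + A*np.cosh(k*L)
    return mu*fp - (beta1*lam - beta2)*f

eta = -(mu*k/np.tanh(k*L) + alpha2)/alpha1           # the pole of h1, cf. (valores)
grid = np.linspace(eta - 50.0, 50.0, 4001)
vals = np.array([F(l) for l in grid])
roots = [brentq(F, grid[i], grid[i+1]) for i in range(len(grid)-1)
         if vals[i]*vals[i+1] < 0]
print(eta, roots)  # expected: two negative eigenvalues, lambda_{-1} < lambda_{-0} < 0
\end{verbatim}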
In Table \ref{espectros1} we present the spectrum of (\ref{modelconstant}) in the cases $\alpha_{1}(k)\neq 0$.
\begin{table}[!hbt]
\begin{center}
\begin{tabular}{ c c c c c c }
\hline
\hline
$\alpha_{1}(k)$ & $\beta_{1}(k)$ & $\lambda_{-1}$ & $\lambda_{-0}$ & $\lambda_{0}$ & $\lambda_{1}$ \\
\hline
\hline
negative & positive & not & not & exist & exist
\\
negative &0 & not & not & exist & not
\\
negative &negative & not & exist & exist & not
\\
positive & positive & not & exist & exist & not
\\
positive &0 & not & exist & not & not
\\
positive &negative & exist & exist & not & not
\\
\hline
\hline
\end{tabular}
\caption{\label{espectros1} Spectrum of (\ref{modelconstant}) for $\alpha_{1}(k)\neq 0$.}
\end{center}
\end{table}
We remark that the case $\alpha_{1}(k)=0$ is a critical one.
In this case, $f(x;\lambda)$ in (\ref{solmodelo}) and $f'(x;\lambda)$ are both positive functions and $h_{1}(\lambda)$ has only two branches, $\mathcal{B}_{-0}=]-\infty,0[$ and $\mathcal{B}_{0}=]0,\infty[$.
Thus, when $\alpha_{1}(k)=0 $ we get
\begin{itemize}
\item[] If $\beta_{1}(k)>0$ then $\Lambda=\lbrace \lambda_{0} \rbrace$
\item[] If $\beta_{1}(k)=0$ then $\Lambda=\emptyset$
\item[] If $\beta_{1}(k)<0$ then $\Lambda=\lbrace \lambda_{-0} \rbrace$
\end{itemize}
Finally, from Table \ref{espectros1} we note that the neutral waves postulated in \cite{daripa2008studies} do not exist.
\section{Derivation of the regular SLP \eqref{problem2}}\label{regularslp}
Through the following steps we obtain a regular SLP associated with $g(x)$.
Here we use the ideas given in \cite{binding2004transformation}.
For the sake of completeness, we present the steps for the deduction of \eqref{problem2}.
From \eqref{transformacion}, we have that
\begin{equation}\label{derivada}
\begin{array}{rcl}
\displaystyle{\frac{dg}{dx}}&=&\displaystyle{(pf')'-pf'\frac{y_{0}'}{y_{0}}-f(py'_{0}/y_{0})'}\\
&=& \displaystyle{(q-\lambda r)f-f' p\frac{y_{0}'}{y_{0}}-f\left\lbrace \frac{(py'_{0})'}{y_{0}}-p\left(\frac{y'_{0}}{y_{0}}\right)^{2} \right\rbrace} \\
&=& \displaystyle{(q-\lambda r)f- f' p\frac{y_{0}'}{y_{0}}-f\left\lbrace (q-\lambda_{0}r)- p\left(\frac{y'_{0}}{y_{0}}\right)^{2} \right\rbrace}\\
&=& \displaystyle{(\lambda_{0}-\lambda)r f-\frac{y'_{0}}{y_{0}}\left\lbrace p f'-pf\frac{y'_{0}}{y_{0}} \right\rbrace}\\
&=& \displaystyle{ (\lambda_{0}-\lambda)r f-\frac{y'_{0}}{y_{0}}g}.
\end{array}
\end{equation}
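The identity \eqref{derivada} only uses the ODEs satisfied by $f$ and $y_{0}$, so it can be verified symbolically by substituting the second derivatives that those ODEs imply; a minimal sketch:
\begin{verbatim}
import sympy as sp

x, lam, lam0 = sp.symbols('x lambda lambda_0')
p, q, r, f, y0 = (sp.Function(s)(x) for s in ('p', 'q', 'r', 'f', 'y0'))

fxx  = ((q - lam*r)*f   - p.diff(x)*f.diff(x))  / p   # from (p f')'  = (q - lam  r) f
y0xx = ((q - lam0*r)*y0 - p.diff(x)*y0.diff(x)) / p   # from (p y0')' = (q - lam0 r) y0

g  = p*f.diff(x) - p*f*y0.diff(x)/y0
dg = g.diff(x).subs({f.diff(x, 2): fxx, y0.diff(x, 2): y0xx})
print(sp.simplify(dg - ((lam0 - lam)*r*f - y0.diff(x)/y0*g)))   # expected output: 0
\end{verbatim}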
Dividing by $r$ and taking the derivative, it follows
\begin{equation}\label{derivada2}
\begin{array}{rcl}
\displaystyle{\frac{d}{dx}\left(\frac{1}{r}\frac{dg}{dx}\right)}&=&\displaystyle{(\lambda_{0}-\lambda)f'-\left(\frac{1}{r}\frac{y'_{0}}{y_{0}}\right)' g-
\frac{1}{r}\frac{y'_{0}}{y_{0}}\frac{dg}{dx}}\\
&=&\displaystyle{ (\lambda_{0}-\lambda)f'-\left(\frac{1}{r}\frac{y'_{0}}{y_{0}}\right)' g-
\frac{1}{r}\frac{y'_{0}}{y_{0}}\left\lbrace (\lambda_{0}-\lambda) r f-\frac{y'_{0}}{y_{0}}g \right\rbrace }\\
&=&\displaystyle{\left\lbrace (\lambda_{0}-\lambda)\left(f' - \frac{y'_{0}}{y_{0}}f\right)\right\rbrace +\left\lbrace -\left(\frac{1}{r}\frac{y'_{0}}{y_{0}}\right)' +\frac{1}{r}\left(\frac{y'_{0}}{y_{0}}\right)^{2}
\right\rbrace}g\\
&=&\displaystyle{\left\lbrace (\lambda_{0}-\lambda)\frac{p(x)}{p(x)}\left(f' - \frac{y'_{0}}{y_{0}}f\right)\right\rbrace +\left\lbrace -\left(\frac{1}{r}\frac{y'_{0}}{y_{0}}\right)' +\frac{1}{r}\left(\frac{y'_{0}}{y_{0}}\right)^{2}
\right\rbrace}g\\
&=&\displaystyle{\left\lbrace \frac{1}{p(x)}(\lambda_{0}-\lambda)\left(p(x)f' - \frac{y'_{0}}{y_{0}}p(x)f\right)\right\rbrace +\left\lbrace -\left(\frac{1}{r}\frac{y'_{0}}{y_{0}}\right)' +\frac{1}{r}\left(\frac{y'_{0}}{y_{0}}\right)^{2}
\right\rbrace}g\\
&=&\displaystyle{\left\lbrace \frac{\lambda_{0}}{p(x)} -\left(\frac{1}{r}\frac{y'_{0}}{y_{0}}\right)' +\frac{1}{r}\left(\frac{y'_{0}}{y_{0}}\right)^{2}-\frac{1}{p(x)}\lambda
\right\rbrace}g
\end{array}
\end{equation}
Now, we obtain the boundary condition for $g(x)$.
At $x=0$ we have
\begin{equation}\label{condicion1}
\begin{array}{rcl}
g'(0)&=&\displaystyle{(\lambda_{0}-\lambda)r(0)f(0)-\frac{y'_{0}(0)}{y_{0}(0)}g(0)}\\
&=&\displaystyle{r(0)\left\lbrace \lambda_{0}f(0)-\lambda f(0;\lambda) \right\rbrace-\frac{y'_{0}(0)}{y_{0}(0)}g(0)}\\
&=&\displaystyle{r(0)\left\lbrace \lambda_{0}f(0)-\frac{\alpha_{1}\lambda f(0)}{\alpha_{1}} \right\rbrace-\frac{y'_{0}(0)}{y_{0}(0)}g(0)}\\
&=&\displaystyle{\frac{r(0)}{\alpha_{1}}\left\lbrace \alpha_{1}\lambda_{0}f(0)-(\alpha_{1}\lambda f(0)) \right\rbrace-\frac{y'_{0}(0)}{y_{0}(0)}g(0)}\\
&=&\displaystyle{\frac{r(0)}{\alpha_{1}}\left\lbrace \lambda_{0}\alpha_{1}f(0)-(p(0)f'(0)-\alpha_{2}f(0) ) \right\rbrace-
\frac{y'_{0}(0)}{y_{0}(0)}g(0)}\\
&=&\displaystyle{\frac{r(0)}{\alpha_{1}}\left\lbrace \frac{(\lambda_{0}\alpha_{1}y_{0}(0))}{y_{0}(0)}
f(0)-(p(0)f'(0)-\alpha_{2}f(0) ) \right\rbrace- \frac{y'_{0}(0)}{y_{0}(0)}g(0)}\\
&=&\displaystyle{\frac{r(0)}{\alpha_{1}}\left\lbrace \frac{p(0)y'_{0}(0)-\alpha_{2}y_{0}(0)}{y_{0}(0)}
f(0)-(p(0)f'(0)-\alpha_{2}f(0) ) \right\rbrace- \frac{y'_{0}(0)}{y_{0}(0)}g(0)}\\
&=&\displaystyle{\frac{r(0)}{\alpha_{1}}\left\lbrace \frac{p(0)y'_{0}(0)}{y_{0}(0)}f(0)-\alpha_{2}f(0)-
(p(0)f'(0)-\alpha_{2}f(0) ) \right\rbrace- \frac{y'_{0}(0)}{y_{0}(0)}g(0)}\\
&=&\displaystyle{\frac{r(0)}{\alpha_{1}}\left\lbrace \frac{p(0)y'_{0}(0)}{y_{0}(0)}f(0)-
p(0)f'(0) \right\rbrace- \frac{y'_{0}(0)}{y_{0}(0)}g(0)}\\
&=&\displaystyle{\frac{r(0)}{\alpha_{1}}\left\lbrace -\left(-\frac{p(0)y'_{0}(0)}{y_{0}(0)}f(0)+
p(0)f'(0)\right) \right\rbrace- \frac{y'_{0}(0)}{y_{0}(0)}g(0)}\\
&=&\displaystyle{-\frac{r(0)}{\alpha_{1}}g(0)- \frac{y'_{0}(0)}{y_{0}(0)}g(0)}\\
&=&\displaystyle{-\left\lbrace\frac{r(0)}{\alpha_{1}}+ \frac{y'_{0}(0)}{y_{0}(0)}\right\rbrace g(0)}.
\end{array}
\end{equation}
Similarly, at $x=L$ we get
\begin{equation}
\label{condicion2}
g'(L)=-\left\lbrace \frac{r(L)}{\beta_{1}}+\frac{y'_{0}(L)}{y_{0}(L)} \right\rbrace g(L)
\end{equation}
\section{Proof of Theorem \ref{secundario}}
\label{demoaux}
We remark that \eqref{aux1} has one fixed boundary condition, and therefore some elementary results are directly applicable.
In order to obtain a direct analogy with \eqref{problema}, we consider the following transformations
\begin{equation}
\label{cambio}
z=L-x;\qquad \tilde{y}(z)= y(x),
\end{equation}
obtaining the following equivalent non-regular SLP
\begin{equation}
\label{aux1eq}
\left\lbrace\begin{array}{rcl}
\displaystyle{(\tilde{p}(z)\tilde{y}')'-(\tilde{q}(z)- \lambda \tilde{r}(z) )\tilde{y}}&=&0\qquad 0<z<L \\
\tilde{y}(0)&=&0\\
\tilde{ y}'(L)&=&(-\alpha_{1}\lambda -\alpha_{2})\tilde{y}(L)\\
\end{array}\right.
\end{equation}
Under the condition that $\alpha_{2}>0$, we have that $\tilde{h}_{2}(\lambda)=-\alpha_{1}-\frac{\alpha_{2}}{\lambda}$ is strictly increasing on each branch $]-\infty,0[$ and $]0,\infty[$.
Similar to our analysis for the study of the eigenvalues of \eqref{problema}, for $\tilde{y}(z;\lambda)$, solution of the ODE in \eqref{aux1eq} satisfying the boundary condition $\tilde{y}(0)=0$, we consider the function
\begin{equation}
\label{funcion2}
\tilde{h}_{1}(\lambda)=\tilde{p}(L)\frac{\tilde{y}'(L;\lambda)}{\lambda\tilde{y}(L;\lambda)}
\end{equation}
defined for all $\lambda\neq 0$ such that $\tilde{y}(L;\lambda)\neq 0$. This way, we consider the following auxiliary regular SLP for \eqref{aux1eq}
\begin{equation}
\label{aux2eq}
\left\lbrace\begin{array}{rcl}
\displaystyle{(\tilde{p}(z)\tilde{y}')'-(\tilde{q}(z)- \lambda \tilde{r}(z) )\tilde{y}}&=&0\qquad 0<z<L \\
\tilde{y}(0)&=&0\\
\tilde{ y}(L)&=&0\\
\end{array}\right.
\end{equation}
Using classical results (see Theorem II in Ince \cite{ince1962ordinary}, Section 10.6), we know that the spectrum of \eqref{aux2eq} can be ordered as follows:
$$ \tilde{\eta}_{0}<\tilde{\eta}_{1}<\tilde{\eta}_{2}<\dots \nearrow \infty,$$
where the corresponding eigenfunction $\tilde{y}_{n}=\tilde{y}(z;\tilde{\eta}_{n})$ has exactly $n$ zeros in $]0,L[$. Moreover, concerning the boundary conditions in \eqref{aux1eq}, for $\tilde{y}_{0}$ we know there exists a certain $z^{*}$ at which the function $\tilde{y}_{0}$ attains a maximum (if $\tilde{y}_{0}<0$ we take the minimum and reach the same conclusion). This way:
$$\tilde{p}(z^{*})\tilde{y}''_{0}(z^{*})=(\tilde{q}(z^{*})-\tilde{\eta}_{0}\tilde{r}(z^{*}))\tilde{y}_{0}(z^{*})=0$$
Knowing that the coefficients are positive, we get that $\tilde{\eta}_{0}>0$. Then, we have the following branches for the definition of the function $\tilde{h}_{1}(\lambda)$:
$$ \tilde{B}_{-0}=]-\infty,0[;\ \tilde{B}_{0}=]0,\tilde{\eta}_{0}[;\ \tilde{B}_{1}=]\tilde{\eta}_{0},\tilde{\eta}_{1}[;\dots $$
Knowing $\tilde{y}(0;\lambda)=0$, through Sturm's second comparison theorem, for each $\tilde{B}_{n}$ ($n=0,1,2,\dots$) we get that $\tilde{h}_{1}(\lambda)$ is a decreasing function where
$$ \lim_{\lambda\nearrow \tilde{\eta}_{n}}\tilde{h}_{1}(\lambda)=-\infty;\quad \lim_{\lambda\searrow \tilde{\eta}_{n}}\tilde{h}_{1}(\lambda)=\infty $$
Thus, for each $\tilde{B}_{n}$ ($n=0,1,2,\dots$) there exists a unique $\eta_{n}$, solution of $\tilde{h}_{1}(\lambda)=\tilde{h}_{2}(\lambda)$. As $\tilde{y}(z;\lambda)$ in \eqref{funcion2} satisfies the ODE in \eqref{aux2eq} and the first boundary condition at $z=0$, from the definition of $\tilde{h}_{2}(\lambda)$ we get that $\eta_{n}$ is an eigenvalue of \eqref{aux1}. For the oscillatory results we use Sturm's first comparison theorem. Finally, using minorant and majorant Sturm problems, similarly to the proof of Lemma \ref{sobrelafuncion}, we obtain the existence of $\eta_{-0}$, and the corresponding oscillatory result is obtained using the maximum principle. $\square$
\section{Spectrum for the case $\alpha_{1}\beta_{1}\geq 0$}\lambdabel{otroespectro}
In this part, we consider $\alpha_{1}$ and $\beta_{1}$ in \eqref{problema}, such that $\alpha_{1}\beta_{1}\geq 0$. We obtain that the spectrum in this case can be ordered as follows
\begin{equation}
\label{espectros2}
\lambda_{-0}<0<\lambda_{0}<\lambda_{1}<\dots\nearrow \infty
\end{equation}
We note that in this case, we can use Sturm's comparison theorem.
Without loss of generality we may consider $\alpha_{1}\leq 0$, otherwise, we may introduce a change of coordinates, like in \eqref{cambio}.
Under the consideration that $\alpha_{1}\leq 0$, the spectrum of the auxiliary problem \eqref{aux1} is ordered as follows:
$$ 0<\eta_{0}<\eta_{1}<\eta_{2}<\dots\nearrow \infty $$
Let $f(x;\lambda)$ be the solution of the initial value problem \eqref{main2} and let $h_{1}(\lambda)$ be defined as in \eqref{funciones}. Since $\alpha_{1}<0$, for $\underline{\lambda}<\overline{\lambda}$, both in $\mathcal{B}_{n}$ (with $n=1,2,\dots$), Sturm's first comparison theorem gives that $f(x;\underline{\lambda})$ and $f(x;\overline{\lambda})$ have exactly $n$ zeros in $]0,L[$. Now, from Sturm's second comparison theorem we obtain the monotonic behavior of $h_{1}(\lambda)$. The existence of $\lambda_{0}<\lambda_{1}<\lambda_{2}<\dots\nearrow\infty$ is obtained in a way similar to the proofs already presented. For the proof of the existence of $\lambda_{-0}$, arguments similar to the ones presented in Lemma \ref{sobrelafuncion} can be used.
\section{Relation between the spectra}\label{regularSLP}
In this appendix, we show that the spectra of problems \eqref{problema} and \eqref{problem2} are almost directly related.
We will denote the eigenvalues of \eqref{problem2} as follows
\begin{equation}
\label{espectroslpregular}
\tilde{\lambda}_{0}<\tilde{\lambda}_{1}<\tilde{\lambda}_{2}<\dots\nearrow \infty,
\end{equation}
with their respective eigenfunctions, $g_{m}(x)$.
We define
\begin{equation}
\label{funpropia}
w_{m}(x)=y_{0}(x)\int_{0}^{x}\frac{1}{y_{0}p}g_{m}ds +Cy_{0}(x),
\end{equation}
where $C$ is a constant to be determined. We have that
\begin{equation}
\label{relacion}
g_{m}(x)=p(x)w'_{m}-pw_{m}\frac{y'_{0}}{y_{0}}
\end{equation}
On the other hand, from the definition, it holds that
$$ p\frac{d w_{m}}{dx}=py'_{0}\left\lbrace C+ \int_{0}^{x}\frac{1}{y_{0}p}g_{m}ds \right\rbrace +g_{m}.$$
Differentiating the previous equation we have
\begin{equation}
\label{step0}
\begin{array}{rcl}
\displaystyle{\frac{d}{dx}(pw_{m}')}&=&\displaystyle{(q-\lambda_{0}r)y_{0}\left\lbrace C+ \int_{0}^{x}\frac{1}{y_{0}p}g_{m}ds \right\rbrace +\frac{y_{0}'}{y_{0}}g_{m}+\frac{d g_{m}}{dx}}\\
&=&\displaystyle{(q-\lambda_{0}r) w_{m}+\frac{y_{0}'}{y_{0}}g_{m}+\frac{d g_{m}}{dx}}
\end{array}
\end{equation}
On the other hand, integrating the ODE \eqref{problem2} between $0$ and $x$, we get
\begin{equation}
\label{step1}
\begin{array}{rcl}
\frac{1}{r(x)}\left\lbrace \frac{y_{0}'}{y_{0}}g_{m}+g'_{m} \right\rbrace&=& \frac{1}{r(0)}\left\lbrace \frac{y_{0}'(0)}{y_{0}(0)}g_{m}(0)+g'_{m}(0) \right\rbrace +\displaystyle{(\lambda_{0}-\tilde{\lambda}_{m})\int_{0}^{x}\frac{g_{m}}{p} ds+ }\\
&& +\displaystyle{\int_{0}^{x}\frac{1}{r}\frac{y'_{0}}{y_{0}}\left\lbrace \frac{y_{0}'}{y_{0}}g_{m}+g'_{m} \right\rbrace ds}
\end{array}
\end{equation}
Now, using \eqref{relacion} we get
\begin{equation}
\label{step2}
\begin{array}{rcl}
\displaystyle{ \int_{0}^{x}\frac{g_{m}}{p} ds }&=&\displaystyle{ \int_{0}^{x}w'_{m}-w_{m} \frac{y'_{0}}{y_{0}}ds }\\
&=&\displaystyle{ w_{m}(x)-w_{m}(0)- \int_{0}^{x}w_{m} \frac{y'_{0}}{y_{0}}ds }
\end{array}
\end{equation}
Taking
\begin{equation}
\label{lafuncion}
F(x)=-\lambda_{0}rw_{m}+\frac{y_{0}'}{y_{0}}g_{m}+g'_{m},
\end{equation}
when we substitute \eqref{step2} in \eqref{step1}, we get
\begin{equation}
\label{step3}
F(x)=r(x)\left\lbrace \frac{1}{r(0)}F(0)+\tilde{\lambda}_{m} \right\rbrace+r(x)\left\lbrace \int_{0}^{x} \frac{y_{0}'}{y_{0}r}(F(s)+\tilde{\lambda}_{m}w_{m}(s))ds\right\rbrace-\tilde{\lambda}_{m}rw_{m}
\end{equation}
We take $w_{m}(0)$ (and therefore, some $C$ in \eqref{funpropia}) such that
\begin{equation}
\label{condicion0}
(\lambda_{0}-\tilde{\lambda}_{m})w_{m}(0)=\frac{1}{r(0)}g'_{m}(0)+\frac{1}{r(0)}\frac{y'_{0}(0)}{y_{0}(0)}g_{m}(0)
\end{equation}
Now, from \eqref{step0} and \eqref{step3} we have
\begin{equation}
\label{step4}
\displaystyle{\frac{d}{dx}(pw_{m}')}-(q-r\tilde{\lambda}_{m})w_{m}=r(x)\left\lbrace \int_{0}^{x}\frac{y'_{0}}{r y_{0}} (F(s)+\tilde{\lambda}_{m} w_{m}(s))ds \right\rbrace
\end{equation}
Taking
\begin{equation}
\label{step5}
\tilde{F}(x)=\displaystyle{\frac{d}{dx}(pw_{m}')}-(q-r\tilde{\lambda}_{m})w_{m}
\end{equation}
and the definitions in \eqref{lafuncion}, from \eqref{step4} we have
$$ \tilde{F}(x)=r(x)\displaystyle{ \int_{0}^{x}\frac{y'_{0}}{p y_{0}} \tilde{F} ds}$$
Using the consideration \eqref{condicion0} we have that $\tilde{F}(0)=0$. Now we have the following initial value problem
$$ \frac{d\tilde{F}}{dx}=\left\lbrace \frac{r'(x)}{r(x)}+\frac{y'_{0}(x)}{y_{0}(x)} \right\rbrace\tilde{F}(x);\qquad \tilde{F}(0)=0, $$
and therefore, it holds that $\tilde{F}=0$. Then we have
\begin{equation}
\label{step6}
\displaystyle{\frac{d}{dx}(pw_{m}')}-(q-r\tilde{\lambda}_{m})w_{m}=0
\end{equation}
Finally, knowing that $p(0)\frac{y'_{0}(0)}{y_{0}(0)}=\lambda_{0}\alpha_{1}+\alpha_{2}$ and $p(L)\frac{y'_{0}(L)}{y_{0}(L)}=\beta_{1}\lambda_{0}-\beta_{2}$, from \eqref{condicionesregular} we have that
\begin{equation}
\label{step7}
p(0)w'_{m}(0)=(\alpha_{1}\tilde{\lambda}_{m}+\alpha_{2})w_{m}(0);\qquad p(L)w'_{m}(L)=(\beta_{1}\tilde{\lambda}_{m}-\beta_{2})w_{m}(L)
\end{equation}
This way, considering \eqref{step6} and \eqref{step7}, we have that $w_{m}(x)$ in \eqref{funpropia} is an eigenfunction of \eqref{problema}, with eigenvalue $\tilde{\lambda}_{m}$.
\end{appendices}
\end{document}
\begin{document}
\begin{abstract} We investigate under which conditions the space of idempotent measures is an absolute retract and the idempotent barycenter map is soft.
\end{abstract}
\title{Idempotent measures: absolute retracts and soft maps}
\section{Introduction}
The notion of idempotent (Maslov) measure finds important applications in different
parts of mathematics, mathematical physics and economics (see the survey article
\cite{Litv} and the bibliography therein). Topological and categorical properties of the functor of idempotent measures were studied in \cite{Zar}. Although idempotent measures are not additive and corresponding functionals are not linear, there are some parallels between topological properties of the functor of probability measures and the functor of idempotent measures (see for example \cite{Zar} and \cite{Radul}) which are based on existence of natural equiconnectedness structure on both functors.
However, some differences appear when the problem of the openness of the barycentre map is studied.
The problem of the openness of the barycentre map of probability measures was investigated in \cite{Fed}, \cite{Fed1}, \cite{Eif}, \cite{OBr} and \cite{Pap}. In particular, it is proved in \cite{OBr} that the barycentre map for a compact convex set in a locally convex space is open iff the map $(x, y)\mapsto 1/2 (x + y)$ is open.
Zarichnyj defined in \cite{Zar} the idempotent barycentre map for idempotent measures and asked if the analogous characterization is true. A negative answer to this question was given in \cite{Radul1}.
We investigate the problem of when the space of idempotent measures is an absolute retract (AR for short). It is shown in \cite{Zar} that the space of idempotent measures $I([0,1]^\tau)$ on the Tychonov cube $[0,1]^\tau$ is not an absolute retract for any $\tau>\omega_1$. It follows from the results of \cite{Radul} that the space of idempotent measures $IX$ is an absolute retract for each openly generated compactum $X$ of weight $\le\omega_1$. We will show in this paper that the space of idempotent measures $IX$ is an absolute retract iff $X$ is an openly generated compactum of weight $\le\omega_1$. Let us remark that this is an idempotent analogue of the Ditor-Haydon Theorem for probability measures \cite{DH}.
The problem of the softness of the barycentre map of probability measures was investigated in \cite{Fed1}, \cite{Radul2} and \cite{Radul3}. Fedorchuk proved in \cite{Fed1} that each product of $\omega_1$ barycentrically open convex metrizable compacta (i.e. convex metrizable compacta for which the barycentre map is open) is barycentrically soft, and asked two questions: whether each barycentrically open convex compactum of weight $\le\omega_1$ is barycentrically soft, and whether there exists a barycentrically soft convex compactum of weight $\ge\omega_2$. The first question was answered in the negative in \cite{Radul2}, where it is shown that barycentric softness of the space of probability measures $PX$ implies metrizability of the compactum $X$. The second question was answered in the negative in \cite{Radul3}.
In this paper we discuss analogous problems for the space of idempotent measures and idempotent barycenter map.
\section{Idempotent measures: preliminaries}
In the sequel, all maps will be assumed to be continuous. Let $X$ be a compact Hausdorff space. We shall denote by $C(X)$ the
Banach space of continuous functions on $X$ endowed with the sup-norm. For any $c\in\mathbb R$ we shall denote by $c_X$ the
constant function on $X$ taking the value $c$.
Let $\mathbb R_{\max}=\mathbb R\cup\{-\infty\}$ be the metric space endowed with the metric $\varrho$ defined by $\varrho(x, y) = |e^x-e^y|$.
Following the notation of idempotent mathematics (see e.g., \cite{MS}) we use the
notations $\oplus$ and $\odot$ in $\mathbb R$ as alternatives for $\max$ and $+$ respectively. The convention $-\infty\odot x=-\infty$ allows us to extend $\odot$ and $\oplus$ over $\mathbb R_{\max}$.
Max-Plus convex sets were introduced in \cite{Z}.
Let $\tau$ be a cardinal number. Given $x, y \in \mathbb R^\tau$ and $\lambda\in\mathbb R_{\max}$, we denote by $y\oplus x$ the coordinatewise
maximum of x and y and by $\lambda\odot x$ the vector obtained from $x$ by adding $\lambda$ to each of its coordinates. A subset $A$ in $\mathbb R^\tau$ is said to be Max-Plus convex if $\alpha\odot a\oplus b\in A$ for all $a, b\in A$ and $\alpha\in\mathbb R_{\max}$ with $\alpha\le 0$. It is easy to check that $A$ is Max-Plus convex iff $\oplus_{i=1}^n\lambda_i\odot\delta_{x_i}\in A$ for all $x_1,\dots, x_n\in A$ and $\lambda_1,\dots,\lambda_n\in\mathbb R_{\max}$ such that $\oplus_{i=1}^n\lambda_i=0$. In the following by Max-Plus convex compactum we mean a Max-Plus convex compact subset of $\mathbb R^\tau$.
We denote by $\odot:\mathbb R\times C(X)\to C(X)$ the map acting by $(\lambda,\varphi)\mapsto \lambda_X+\varphi$, and by $\oplus:C(X)\times C(X)\to C(X)$ the map acting by $(\psi,\varphi)\mapsto \max\{\psi,\varphi\}$.
\begin{df}\cite{Zar} A functional $\mu: C(X) \to \mathbb R$ is called an idempotent measure (a Maslov measure) if
\begin{enumerate}
\item $\mu(1_X)=1$;
\item $\mu(\lambda\odot\varphi)=\lambda\odot\mu(\varphi)$ for each $\lambda\in\mathbb R$ and $\varphi\in C(X)$;
\item $\mu(\psi\oplus\varphi)=\mu(\psi)\oplus\mu(\varphi)$ for each $\psi$, $\varphi\in C(X)$.
\end{enumerate}
\end{df}
Let $IX$ denote the set of all idempotent measures on a compactum $X$. We consider
$IX$ as a subspace of $\mathbb R^{C(X)}$. It is shown in \cite{Zar} that $IX$ is a compact Max-Plus subset of $\mathbb R^{C(X)}$. The construction $I$ is functorial, which means that for each continuous map $f:X\to Y$ we can consider a continuous map $If:IX\to IY$ defined by $If(\mu)(\psi)=\mu(\psi\circ f)$ for $\mu\in IX$ and $\psi\in C(Y)$. It is proved in \cite{Zar} that the functor $I$ preserves topological embeddings. For an embedding $i:A\to X$ we shall identify the space $I(A)$ and the subspace $I(i)(I(A))\subset I(X)$.
By $\delta_{x}$ we denote the Dirac measure supported by the point $x\in X$. We can consider a map $\delta X:X\to IX$ defined as $\delta X(x)=\delta_{x}$, $x\in X$. The map $\delta X$ is continuous, moreover it is an embedding \cite{Zar}. It is also shown in \cite{Zar} that the set $$I_\omega X=\{\oplus_{i=1}^n\lambda_i\odot\delta_{x_i}\mid\lambda_i\in\mathbb R_{\max},\ i\in\{1,\dots,n\},\ \oplus_{i=1}^n\lambda_i=0,\ x_i\in X,\ n\in\mathbb N\},$$ (i.e., the set of idempotent probability measures of finite support) is dense in $IX$.
Let $A\subset \mathbb R^T$ be a compact max-plus convex subset. For each $t\in T$ we put $f_t=\mathrm{pr}_t|_A:A\to \mathbb R$ where $\mathrm{pr}_t:\mathbb R^T\to\mathbb R$ is the natural projection. Given $\mu\in A$, the point $\beta_A(\mu)\in\mathbb R^T$ is defined by the conditions $\mathrm{pr}_t(\beta_A(\mu))=\mu(f_t)$ for each $t\in T$. It is shown in \cite{Zar} that $\beta_A(\mu)\in A$ for each $\mu\in I(A)$ and the map $\beta_A : I(A)\to A$ is continuous.
The map $\beta_A$ is called the idempotent barycenter map.
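For measures of finite support the idempotent barycenter is very explicit: if $\mu=\oplus_{i=1}^{n}\lambda_{i}\odot\delta_{x_{i}}$ with $\oplus_{i=1}^{n}\lambda_{i}=0$, then $\mathrm{pr}_{t}(\beta_{A}(\mu))=\mu(f_{t})=\max_{i}(\lambda_{i}+(x_{i})_{t})$, i.e. $\beta_{A}(\mu)=\oplus_{i}\lambda_{i}\odot x_{i}$. A small numerical illustration (ours, not taken from \cite{Zar}):
\begin{verbatim}
import numpy as np

xs  = np.array([[0.0, -1.0], [-2.0, 0.0], [-1.0, -3.0]])   # points x_i in R^2
lam = np.array([0.0, -0.5, -1.0])                          # weights with max lam_i = 0

barycenter = (lam[:, None] + xs).max(axis=0)   # coordinatewise max of lam_i + x_i
print(barycenter)                              # here: [ 0.  -0.5]
\end{verbatim}
In particular $\beta_{A}(\delta_{x})=x$.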
For a function $\varphi\in C(X)$ by $\tilde\varphi\in C(IX)$ we denote the function defined by the formula $\tilde\varphi(\nu)=\nu(\varphi)$ for $\nu\in IX$. Diagonal product $(\tilde\varphi)_{\varphi\in C(X)}$ embeds $IX$ into $\mathbb R^{C(X)}$ as a Max-Plus convex subset. It is easy to see that the map $\beta_{IX}$ satisfies the equality $\beta_{IX}(\mathcal M)(\varphi)=\mathcal M(\tilde\varphi)$ for any $\mathcal M\in I^2X=I(IX)$ and $\varphi\in C(X)$. Particularly we have $\beta_{IX}\circ I(\delta X)=\mathrm{id}_{IX}$ for each compactum $X$.
A map $f:X\to Y$ between Max-Plus convex compacta $X$ and $Y$ is called Max-Plus affine if for each $a, b\in X$ and $\alpha\in[-\infty,0]$ we have $f(\alpha\odot a\oplus b)=\alpha\odot f(a)\oplus f(b)$. It is easy to check that the diagram
$$\begin{CD}
IX @>If>> IY \\
@VV\beta_XV @VV\beta_YV \\
X @>f>> Y
\end{CD}
$$
is commutative provided $f$ is Max-Plus affine. It is also easy to check that the map $\beta_X$ is Max-Plus affine for each Max-Plus convex compactum $X$, and that the map $If$ is Max-Plus affine for each continuous map $f:X\to Y$ between compacta $X$ and $Y$.
The notion of density for an idempotent measure was introduced in \cite{A}. Let $\mu\in IX$. Then we can define a function $d_\mu:X\to [-\infty,0]$ by the formula $d_\mu(x)=\inf\{\mu(\varphi)|\varphi\in C(X)$ such that $\varphi\le 0$ and $\varphi(x)=0\}$, $x\in X$. The function $d_\mu$ is upper semicontinuous and is called the density of $\mu$. Conversely, each upper semicontinuous function $f:X\to [-\infty,0]$ with $\max f = 0$ determines an idempotent measure $\nu_f$
by the formula $\nu_f(\varphi) = \max\{f(x)\odot\varphi(x) | x \in X\}$, for $\varphi\in C(X)$.
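For instance, for $X=[0,1]$ and the continuous function $f(x)=-x$ (so that $\max f=f(0)=0$) we obtain the idempotent measure $\nu_f(\varphi)=\max_{x\in[0,1]}(\varphi(x)-x)$, $\varphi\in C([0,1])$, whose density is $d_{\nu_f}=f$. In particular, for a finitely supported measure $\mu=\oplus_{i=1}^n\lambda_i\odot\delta_{x_i}$ with pairwise distinct $x_i$ we have $d_\mu(x_i)=\lambda_i$ and $d_\mu(x)=-\infty$ for $x\notin\{x_1,\dots,x_n\}$.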
Let $A$ be a closed subset of a compactum $X$. It is easy to check that $\nu\in IA$ iff $\{x\in X|d_\nu(x)>-\infty\}\subset A$.
\begin{lemma}\label{supp} Let $A$ be a closed subset of a compactum $X$. Then $\beta_{IX}^{-1}(IA)\subset I^2A$.
\end{lemma}
\begin{proof} Suppose the contrary. Then there exists $\mathcal M\in \beta_{IX}^{-1}(IA)\setminus I^2A$. Hence there exists $\mu\in IX$ such that $d_\mathcal M(\mu)=s>-\infty$ and $d_\mu(x)=t>-\infty$ for some point $x\in X\setminus A$.
Choose a function $\varphi\in C(X)$ such that $\varphi(x)=1-s-t$ and $\varphi(A)\subset\{0\}$. Then we have $\mathcal M(\tilde\varphi)\ge\tilde\varphi(\mu)+s=\mu(\varphi)+s\ge\varphi(x)+t+s=1$. On the other hand $\beta_{IX}(\mathcal M)\in IA$ implies $\beta_{IX}(\mathcal M)(\varphi)=0$. But $\mathcal M(\tilde\varphi)=\beta_{IX}(\mathcal M)(\varphi)$ and we obtain a contradiction.
\end{proof}
\begin{lemma}\label{mapdenc} Let $f:X\to Y$ be a continuous map, $\nu\in IX$. Then $d_{If(\nu)}(y)=\max\{d_\nu(x)|x\in f^{-1}(y)\}$ for each $y\in Y$.
\end{lemma}
\begin{proof} Let $d:Y\to [-\infty,0]$ be a function defined by the formula $d(y)=\max\{d_\nu(x)|x\in f^{-1}(y)\}$ for $y\in Y$. It is easy to see that the function $d$ is upper semicontinuous with $\max d = 0$. Let $\mu$ be an idempotent measure generated by $d$. Then we have
$$\mu(\varphi)=\max\{d(y)+\varphi(y)|y\in Y\}=\max\{\varphi(y)+\max\{ d_\nu(x)|x\in f^{-1}(y)\}|y\in Y\}=$$
$$=\max\{\varphi\circ f(x)+ d_\nu(x)|x\in X\}=\nu(\varphi\circ f)=If(\nu)(\varphi)$$
for each $\varphi\in C(Y)$. Hence $\mu=If(\nu)$.
\end{proof}
\begin{lemma}\label{supp+} Let $f:X\to Y$ be a continuous map, $A$ and $B$ are disjoint closed subsets of $Y$ and $\mu\in IX$ such that $If(\mu)=s\odot\nu\oplus\pi$ where $\nu\in IA$ and $\pi\in IB$. Then there exist $\nu'\in I(f^{-1}(A))$ and $\pi'\in I(f^{-1}(B))$ such that $\mu=s\odot\nu'\oplus\pi'$.
\end{lemma}
\begin{proof} Consider the density $d_\mu$ of $\mu$. We have that $\max\{d_\mu(x)|x\in f^{-1}(A)\}=s$ and $\max\{d_\mu(x)|x\in f^{-1}(B)\}=0$ by Lemma \ref{mapdenc}. Consider the functions $d_1,d_2:X\to [-\infty,0]$ defined by the formulas
$$d_1(x)=\begin{cases}
d_\mu(x)-s,&x\in f^{-1}(A),\\
-\infty,&x\notin f^{-1}(A)\end{cases}
$$
and
$$d_2(x)=\begin{cases}
d_\mu(x),&x\in f^{-1}(B),\\
-\infty,&x\notin f^{-1}(B)\end{cases}
$$
and the idempotent measures $\nu'$ and $\pi'$ generated by the functions $d_1$ and $d_2$, respectively. Then $\nu'$ and $\pi'$ are the measures we are looking for.
\end{proof}
\section{Idempotent measures and absolute retracts}
By $w(X)$ we denote the weight of the space $X$ and by $\chi(X)$ the character of the space $X$.
We will need some notations and facts from the theory of
non-metrizable compacta. See \cite{Shchep} for more details.
Let $\tau$ be an infinite cardinal number. A partially ordered set $\mathcal A$ is
called $\tau$-{\it complete}, if every subset of cardinality $\le\tau$ has
a least upper bound in $\mathcal A$. An inverse system consisting of compacta and
surjective bonding maps over a $\tau$-complete indexing set is called
$\tau$-complete. A continuous $\tau$-complete system consisting of compacta
of weight $\le\tau$ is called a $\tau$-{\it system}.
As usual, by $\omega$ we denote the countable cardinal number, by $\omega_1$ we denote the first uncountable cardinal number and so on.
A compactum $X$ is called openly generated if $X$ can be represented as the
limit of an $\omega$-system with open bonding maps. We have $w(X)=\chi(X)$ for each openly generated compactum $X$ (see for example Lemma 4 from \cite{Radul4}). A compactum $X$ is called absolute extensor in the class of 0-dimensional compacta (shortly AE(0)) if for any 0-dimensional compactum $Z$, any closed subspace $A$ of $Z$ and a continuous map $\varphi:A\to X$ there
exists a continuous map $\Phi:Z\to X$ such that $\Phi|A=\varphi$. Evidently, each absolute retract is AE(0). Let us also remark that each AE(0) compactum is openly generated and that these classes coincide for compacta of weight $\le\omega_1$.
By $D$ we denote the two-point set with discrete topology.
\begin{lemma}\label{Domega} The compactum $I(D^\tau)$ is not an absolute retract for each $\tau\ge\omega_2$.
\end{lemma}
\begin{proof} Suppose the contrary: there exists $\tau\ge\omega_2$ such that $I(D^\tau)$ is an absolute retract.
Choose a continuous onto map $f:D^\tau\to [0,1]^\tau$ and a continuous map $s:[0,1]^\tau\to I(D^\tau)$ such that $If\circ s=\delta [0,1]^\tau$. The existence of such maps follows from Theorem 2.1 of \cite{Radul}.
Then we have $If\circ\beta_{I(D^\tau)}\circ Is=\beta_{I([0,1]^\tau)}\circ I^2f\circ Is=\beta_{I([0,1]^\tau)}\circ I(\delta [0,1]^\tau)=\mathrm{id}_{I([0,1]^\tau)}$. Hence the map $If:I(D^\tau)\to I([0,1]^\tau)$ is a retraction and the compactum $I([0,1]^\tau)$ is an absolute retract. We obtain a contradiction with the above mentioned result of Zarichnyi.
\end{proof}
\begin{theorem}\label{Gen} The compactum $IX$ is an absolute retract iff $X$ is an openly generated compactum of weight $\le\omega_1$.
\end{theorem}
\begin{proof} The sufficiency follows from Corollary 3.5 \cite{Radul} and the fact that the functor of idempotent measures preserves weight of infinite compacta, open maps and preimages \cite{Zar}.
Let us prove the necessity. Consider any compactum $X$ such that the compactum $IX$ is an absolute retract. Since the functor $I$ is normal \cite{Zar}, the compactum $X$ is AE(0) (\cite{Shchep}, Corollary 4.2). Let us show that $w(X)\le\omega_1$. Suppose, on the contrary, that $w(X)>\omega_1$. Then by Theorem 5.6 and Proposition 6.3 from \cite{Hayd}, there exists an embedding $s:D^{\omega_2}\to X$. It follows from results of \cite{BR} that there exists a continuous map $f:X\to I(D^{\omega_2})$ such that $Is(f(x))=\delta_x$ for each $x\in s(D^{\omega_2})$. Since the map $Is$ is an embedding, we have $f\circ s=\delta D^{\omega_2}$.
Define a map $u:C(D^{\omega_2})\to C(X)$ by the formula $u(\varphi)(x)=f(x)(\varphi)$ for $\varphi\in C(D^{\omega_2})$ and $x\in X$. It is easy to check that $u$ is well-defined, continuous and preserves operations $\odot$, $\oplus$ and constant functions. The equality $f\circ s=\delta D^{\omega_2}$ implies $u(\varphi)\circ s=\varphi$.
Define a map $\phi:IX\to I(D^{\omega_2})$ by the formula $\phi(\nu)(\varphi)=\nu(u(\varphi))$ for $\varphi\in C(D^{\omega_2})$ and $\nu\in IX$. Since $u$ preserves the operations $\odot$, $\oplus$ and constant functions, $\phi(\nu)\in I(D^{\omega_2})$ for each $\nu\in IX$. It is easy to check that $\phi$ is continuous.
Finally, for each $\varphi\in C(D^{\omega_2})$ and $\nu\in I(D^{\omega_2})$ we have $(\phi\circ Is)(\nu)(\varphi)=Is(\nu)(u(\varphi))=\nu(u(\varphi)\circ s)=\nu(\varphi)$. Hence the map $\phi$ is a retraction and $I(D^{\omega_2})$ is an absolute retract. We obtain a contradiction with Lemma \ref{Domega}.
\end{proof}
\section{On the softness of the idempotent barycenter map}
A map $f:X\to Y$ is said to be (0-)soft if for any (0-dimensional) paracompact space $Z$, any closed subspace $A$ of $Z$ and
maps $\Phi:A\to X$ and $\Psi:Z\to Y$ with $\Psi|A=f\circ\Phi$ there exists a
map $G:Z\to X$ such that $G|A=\Phi$ and $\Psi=f\circ G$. This notion was introduced by E.~Shchepin \cite{Shchep1}. Let us remark that each 0-soft map is open and that 0-softness is equivalent to openness for maps between metrizable compacta.
Let $$\begin{CD}
X_1 @>p>> X_2 \\
@VVf_1V @VVf_2V \\
Y_1 @>q>> Y_2
\end{CD}$$
be a commutative diagram. The map $\chi:X_1\to
X_2\times{}_{Y_2}Y_1= \{(x,y)\in X_2\times Y_1\mid f_2(x)=q(y)\}$
defined by $\chi(x)=(p(x),f_1(x))$ is called a characteristic map
of this diagram. The diagram is called open (0-soft, soft) if the map $\chi$ is open (0-soft, soft).
The following theorem from \cite{Zar3} gives a
characterization of 0-soft maps:
\begin{theoremA}\cite{Zar3} A map $f:X\to Y$ is
0-soft if and only if there exist $\omega$-systems $S_X$ and $S_Y$ with the
limits $X$ and $Y$ respectively and a morphism $\{f_\alpha\}:S_X\to S_Y$ with
the limit $f$ such that 1) $f_\alpha$ is 0-soft for every $\alpha$; 2) every
limit square diagram is 0-soft.
\end{theoremA}
\begin{theorem}\label{metrsoft} Let $X\subset\mathbb R^\omega$ be a compact Max-Plus convex subset and $f : X \to Y$ be
an open map onto a compact metrizable space with Max-Plus convex
preimages. Then the map $f$ is soft.
\end{theorem}
\begin{proof} The theorem can be proved using the same arguments as in \cite{Zar2}, where the statement of the theorem was proved for finite-dimensional $X$.
\end{proof}
A Max-Plus convex compactum $K$ is said to be
I-barycentrically soft (open) if the idempotent barycenter map $\beta_K$ is soft (open). It is easy to see that the idempotent barycenter map has Max-Plus convex preimages. Hence we obtain the following corollary.
\begin{corollary}\label{metr} Each metrizable I-barycentrically open compactum is I-barycentrically soft.
\end{corollary}
Now we investigate non-metrizable compacta. It was proved in \cite{Zar} that the functor $I$ preserves open maps, i.e. the openness of a map $f:X\to Y$ implies the openness of the map $If:IX\to IY$. We will need the converse statement.
\begin{lemma}\label{refl} Let $f:X\to Y$ be a continuous map such that the map $If:IX\to IY$ is open. Then $f$ is open.
\end{lemma}
\begin{proof} Suppose the contrary. Then there exist a point $x\in X$, a neighborhood $U$ of $x$ and a net $(y_\alpha)_{\alpha\in A}$ converging to $f(x)$ such that $f^{-1}(y_\alpha)\cap U=\emptyset$ for each $\alpha\in A$. Then we have that the net $(\delta_{y_\alpha})_{\alpha\in A}$ converges to $\delta_{f(x)}=If(\delta_x)$. Take a function $\psi\in C(X)$ such that $\psi(x)=1$ and $\psi(X\setminus U)\subset\{0\}$. We have $\delta_x(\psi)=1$. Since the functor $I$ preserves preimages \cite{Zar}, we have $(If)^{-1}(\delta_{y_\alpha})\subset I(X\setminus U)$ for each $\alpha\in A$. Hence $\nu(\psi)=0$ for each $\nu\in(If)^{-1}(\delta_{y_\alpha})$ and $\alpha\in A$. We obtain a contradiction with the openness of the map $If$.
\end{proof}
\begin{corollary}\label{OG} The compactum $IX$ is openly generated if and only if $X$ is openly generated.
\end{corollary}
\begin{lemma}\label{OD} Let $f:X\to Y$ be a Max-Plus affine surjective map between Max-Plus convex compacta $X$ and $Y$ such that the diagram
$$\begin{CD}
IX @>If>> IY \\
@VV\beta_XV @VV\beta_YV \\
X @>f>> Y
\end{CD}$$
is open. Then $f$ is open.
\end{lemma}
\begin{proof} Suppose the contrary. Then there exist a point $x\in X$, a neighborhood $U$ of $x$ and a net $(y_\alpha)_{\alpha\in\mathcal A}$ converging to $f(x)=y$ such that $f^{-1}(y_\alpha)\cap U=\emptyset$ for each $\alpha\in\mathcal A$.
(By $\exp X$ we denote the hyperspace of $X$, i.e., the set of nonempty closed subsets of $X$ endowed with the Vietoris topology.) Passing to a subnet if necessary, we can assume that $f^{-1}(y_\alpha)$ converges to some $A\in\exp X$. Since $f$ is a closed map, $A$ is a subset of $f^{-1}(y)$. Evidently, $x\notin A$.
Choose a point $x_1\in A$. There exists $s\in(-\infty,0)$ such that $s\odot x_1\oplus x \notin A$. Consider any open set $V\supset A$ such that $s\odot x_1\oplus x \notin \mathrm{Cl}\, V$. Passing again to a subnet, we can assume that $f^{-1}(y_\alpha)\subset V$ for every $\alpha\in\mathcal A$. Consider $\delta_{s\odot x_1\oplus x} \in IX$. Then $\chi(\delta_{s\odot x_1\oplus x})=(s\odot x_1\oplus x;\delta_{y})$.
For each $\alpha\in\mathcal A$ choose a point $x_\alpha\in f^{-1}(y_\alpha)$ such that the net $(x_\alpha)$ converges to $x_1$. We have that the net $s\odot x_\alpha\oplus x$ converges to $s\odot x_1\oplus x$ in $X$ and the net $s\odot \delta_{y_\alpha}\oplus \delta_y$ converges to $\delta_y$ in $IY$. Moreover, $(s\odot x_\alpha\oplus x;s\odot \delta_{y_\alpha}\oplus \delta_y)\in X\times_Y IY$.
Choose a function $\varphi\in C(X)$ such that $\varphi(\mathrm{Cl}\, V)\subset\{-s+1\}$ and $\varphi(s\odot x_1\oplus x)=0$. Consider the neighborhood $O=\{\nu\in IX\mid \nu(\varphi)<\frac12\}$ of $\delta_{s\odot x_1\oplus x}$. Let $\mu\in (If)^{-1}(s\odot \delta_{y_\alpha}\oplus \delta_y)$. By Lemma \ref{supp+} we have $\mu=s\odot\eta\oplus \nu$ where $\eta\in I(f^{-1}(y_\alpha))$ and $\nu\in I(f^{-1}(y))$. Hence $\mu(\varphi)\ge 1>\frac12$ and $\mu\notin O$.
We obtain a contradiction with the openness of the characteristic map $\chi$.
\end{proof}
\begin{theorem}\label{softOG} Let $K$ be a Max-Plus convex compactum such that the map $\beta_K$ is 0-soft. Then $K$ is openly generated.
\end{theorem}
\begin{proof} Present $K$ as a limit of an
$\omega$-system $S_K=\{K_\alpha,p_\alpha,\mathcal A\}$ where $K_\alpha$ are Max-Plus convex metrizable
compacta and bonding maps $p_\alpha$ are Max-Plus affine for every $\alpha\in\mathcal A$. If
the map $\beta_K:I(K)\to K$ is 0-soft, then, using the spectral theorem of
E.V.~Shchepin \cite{Shchep} and Theorem A, we obtain that there exists a
closed cofinal subset $\mathcal B\subset\mathcal A$ such that for each $\alpha\in\mathcal B$ the
diagram
$$\begin{CD}
I(K) @>I(p_\alpha)>> I(K_\alpha) \\
@VV\beta_KV @VV\beta_{K_\alpha}V \\
K @>p_\alpha>> K_\alpha
\end{CD}$$
is 0-soft and therefore open. It follows from Lemma \ref{OD} that the map $p_\alpha$ is open
for each $\alpha\in\mathcal B$. But since $K=\lim\{K_\alpha,p_\alpha,\mathcal B\}$, the
compactum $K$ is openly generated. The theorem is proved.
\end{proof}
\begin{lemma}\label{noniz} Let a compactum $X$ be the limit space of an
$\omega$-system $S_X=\{X_\alpha,p_\alpha,\mathcal A\}$ and let $x\in X$ have uncountable character. Then there exists $\alpha\in \mathcal A$ such that the point $p_\alpha(x)$ is non-isolated in $X_\alpha$.
\end{lemma}
\begin{proof} Take any $\alpha_1\in\mathcal A$. Since $X_{\alpha_1}$ is metrizable and $x$ has uncountable character, the set $p_{\alpha_1}^{-1}(p_{\alpha_1}(x))$ contains more than one point. There exists $\alpha_2\in \mathcal A$ such that $(p^{\alpha_2}_{\alpha_1})^{-1}(p_{\alpha_1}(x))$ contains more than one point. Inductively we can find a sequence $\alpha_1\le\alpha_2\le\dots\le\alpha_i\le\dots$ such that $(p^{\alpha_{i+1}}_{\alpha_i})^{-1}(p_{\alpha_i}(x))$ contains more than one point for each $i\in\mathbb N$. Since the system $S_X$ is $\omega$-complete, there exists $\alpha=\sup_{i\in\mathbb N}\alpha_i$. Since the system $S_X$ is continuous, $X_\alpha=\lim\{X_{\alpha_i},p^\alpha_{\alpha_i},\mathbb N\}$; every basic neighborhood of $p_\alpha(x)$ contains the fiber $(p^\alpha_{\alpha_i})^{-1}(p_{\alpha_i}(x))$ for some $i$, and each of these fibers contains more than one point, hence the point $p_\alpha(x)$ is non-isolated in $X_\alpha$.
\end{proof}
\begin{theorem}\label{softmetr} Let $X$ be a compactum such that the map $\beta_{IX}$ is 0-soft. Then $X$ is metrizable.
\end{theorem}
\begin{proof} Theorem \ref{softOG} implies that $IX$ is openly generated. Then $X$ is openly generated by Corollary \ref{OG}.
Suppose that $X$ is non-metrizable. Present $X$ as a limit of an
$\omega$-system $S_X=\{X_\alpha,p_\alpha,\mathcal A\}$ with open surjective maps $p_\alpha$. If
the map $\beta_{IX}:I^2(X)\to IX$ is 0-soft, then, using the spectral theorem of
E.V.~Shchepin \cite{Shchep} and Theorem A, we obtain that there exists a
closed cofinal subset $\mathcal B\subset\mathcal A$ such that for each $\alpha\in\mathcal B$ the
diagram
$$\begin{CD}
I^2(X) @>I^2(p_\alpha)>> I^2(X_\alpha) \\
@VV\beta_{IX}V @VV\beta_{I(X_\alpha)}V \\
IX @>I(p_\alpha)>> I(X_\alpha)
\end{CD}$$
is 0-soft and therefore open.
Since $X$ is non-metrizable, there exists a point $x\in X$ with uncountable character. By Lemma \ref{noniz}, there exists $\alpha\in\mathcal B$ such that the point $y=p_\alpha(x)$ is non-isolated. Since the point $x$ has uncountable character and $X_\alpha$ is metrizable, the set $p_\alpha^{-1}(y)$ is not a one-point set. Then there exists $\beta\in\mathcal A$ such that $\beta\ge\alpha$ and the set $(p^\beta_\alpha)^{-1}(y)$ is not a one-point set. The characteristic map $\chi:I^2(X_\beta)\to I(X_\beta)\times_{I(X_\alpha)}I^2(X_\alpha)$ of the diagram
$$\begin{CD}
I^2(X_\beta) @>I^2(p^\beta_\alpha)>> I^2(X_\alpha) \\
@VV\beta_{I(X_\beta)}V @VV\beta_{I(X_\alpha)}V \\
I(X_\beta) @>I(p^\beta_\alpha)>> I(X_\alpha)
\end{CD}$$
is open, being a left divisor of the open map $(I(p_\beta)\times\mathrm{id}_{I^2(X_\alpha)}|_{IX\times_{I(X_\alpha)}I^2(X_\alpha)})\circ \chi'$, where $\chi'$ is the characteristic map of the diagram
$$\begin{CD}
I^2(X) @>I^2(p_\alpha)>> I^2(X_\alpha) \\
@VV\beta_{IX}V @VV\beta_{I(X_\alpha)}V \\
IX @>I(p_\alpha)>> I(X_\alpha).
\end{CD}$$
Choose two distinct points $x_1$ and $x_2\in(p^\beta_\alpha)^{-1}(y)$. Consider $\delta_{\delta_{x_1}}\oplus\delta_{\delta_{x_2}}\in I^2(X_\beta)$. Then we have $\chi(\delta_{\delta_{x_1}}\oplus\delta_{\delta_{x_2}})=(\delta_{x_1}\oplus\delta_{x_2};\delta_{\delta_{y}})$.
Choose any sequence $(y_i)$ converging to $y$ such that $y_i\neq y$ for each $i\in \mathbb N$. Since the map $p^\beta_\alpha$ is open, there exists a sequence $(x_i)$ converging to $x_2$ such that $p^\beta_\alpha(x_i)=y_i$. Then the sequence $\delta_{x_1}\oplus\delta_{x_2}\oplus\delta_{x_i}$ converges to $\delta_{x_1}\oplus\delta_{x_2}$ and the sequence $\delta_{\delta_y\oplus\delta_{y_i}}$ converges to $\delta_{\delta_{y}}$. Moreover, $I(p^\beta_\alpha)(\delta_{x_1}\oplus\delta_{x_2}\oplus\delta_{x_i})=\delta_y\oplus\delta_{y_i}
=\beta_{I(X_\alpha)}(\delta_{\delta_y\oplus\delta_{y_i}})$, hence we have $(\delta_{x_1}\oplus\delta_{x_2}\oplus\delta_{x_i},\delta_{\delta_y\oplus\delta_{y_i}})\in
I(X_\beta)\times_{I(X_\alpha)}I^2(X_\alpha)$ for each $i\in \mathbb N$.
Consider any $\mathcal M_i\in\chi^{-1}(\delta_{x_1}\oplus\delta_{x_2}\oplus\delta_{x_i},\delta_{\delta_y\oplus\delta_{y_i}})$. Since $\mathcal M_i\in (\beta_{I(X_\beta)})^{-1}(\delta_{x_1}\oplus\delta_{x_2}\oplus\delta_{x_i})$, we obtain $\mathcal M_i\in I^2(\{x_1,x_2,x_i\})$ by Lemma \ref{supp}. Since $\mathcal M_i\in (I^2(p^\beta_\alpha))^{-1}(\delta_{\delta_y\oplus\delta_{y_i}})$ and the functor $I$ preserves preimages, we have by Lemma \ref{supp+} that $\mathcal M_i\in IS$, where $S=\{\nu\in I(\{x_1,x_2,x_i\})\mid\nu=s\odot \delta_{x_1}\oplus t\odot\delta_{x_2}\oplus\delta_{x_i}$ with $t,s\in [-\infty,0]$ and $s\oplus t=0\}$.
Choose a function $\varphi\in C(X_\beta)$ such that $\varphi(x_1)=0$ and $\varphi(x_2)>1$. We can assume that $\varphi(x_i)>1$ for each $i\in \mathbb N$. Consider the open sets $O_1=\{\nu\in I(X_\beta)\mid\nu(\varphi)<\frac{1}{3}\}$ and $O_2=\{\nu\in I(X_\beta)\mid\nu(\varphi)>\frac{2}{3}\}$. Then we have $\delta_{x_1}\in O_1$, $S\subset O_2$ and $\mathrm{Cl}\, O_1\cap \mathrm{Cl}\, O_2=\emptyset$. Choose a function $\psi\in C(I(X_\beta))$ such that $\psi(\mathrm{Cl}\, O_1)\subset\{1\}$ and $\psi(\mathrm{Cl}\, O_2)\subset\{0\}$. Then we have $(\delta_{\delta_{x_1}}\oplus\delta_{\delta_{x_2}})(\psi)=1$ and $\mathcal M_i(\psi)=0$ for each $i\in \mathbb N$. We obtain a contradiction with the openness of $\chi$. The theorem is proved.
\end{proof}
Let us remark that an analogous theorem for probability measures was proved in \cite{Radul2}. Fedorchuk proved in \cite{Fed1} that each product of $\omega_1$ barycentrically open convex metrizable compacta is barycentrically soft. The following theorem demonstrates that the situation is different in the case of idempotent probability measures.
\begin{theorem}\label{softpr} The map $\beta_{[0,1]^{\omega_1}}$ is not 0-soft.
\end{theorem}
\begin{proof} Suppose the contrary. Then using the same arguments as in the proof of Theorem \ref{softmetr} we obtain that the diagram
$$\begin{CD}
I([0,1]^{\omega}\times[0,1]^{\omega}) @>I(p)>> I([0,1]^{\omega}) \\
@VV\beta_{[0,1]^{\omega}\times[0,1]^{\omega}}V @VV\beta_{[0,1]^{\omega}}V \\
[0,1]^{\omega}\times[0,1]^{\omega} @>p>> [0,1]^{\omega}
\end{CD}$$
is open (by $p:[0,1]^{\omega}\times[0,1]^{\omega}\to[0,1]^{\omega}$ we denote the natural projection to the second coordinate). As before by $\chi$ we denote the characteristic map.
For $t\in [0,1]$ we put $\overline{t}=(t,0,0,\dots)\in [0,1]^{\omega}$. Consider $\delta_{(\overline{0};\overline{0})}\oplus\delta_{(\overline{1};\overline{1})}\in I([0,1]^{\omega}\times[0,1]^{\omega})$. Then we have $\chi(\delta_{(\overline{0};\overline{0})}\oplus\delta_{(\overline{1};\overline{1})})=
((\overline{1};\overline{1});\delta_{\overline{0}}\oplus\delta_{\overline{1}})$.
The sequence $(\overline{1};\overline{1-\frac{1}{i}})$ converges to $(\overline{1};\overline{1})$ and the sequence $\delta_{\overline{0}}\oplus(-\frac{1}{i})\odot\delta_{\overline{1}}$ converges to $\delta_{\overline{0}}\oplus\delta_{\overline{1}}$. Moreover, $((\overline{1};\overline{1-\frac{1}{i}}),\delta_{\overline{0}}\oplus(-\frac{1}{i})\odot\delta_{\overline{1}})\in
([0,1]^{\omega}\times[0,1]^{\omega})\times_{[0,1]^{\omega}}I([0,1]^{\omega})$ for each $i\in \mathbb N$.
Consider any $\pi_i\in\chi^{-1}((\overline{1};\overline{1-\frac{1}{i}}),
\delta_{\overline{0}}\oplus(-\frac{1}{i})\odot\delta_{\overline{1}})$. Since $\pi_i\in (Ip)^{-1}(\delta_{\overline{0}}\oplus(-\frac{1}{i})\odot\delta_{\overline{1}})$, we have by Lemma \ref{supp+} that $\pi_i=\nu_i\oplus(-\frac{1}{i})\odot\mu_i$ where $\nu_i\in I([0,1]^{\omega}\times\{\overline{0}\})$ and $\mu_i\in I([0,1]^{\omega}\times\{\overline{1}\})$ for each $i\in \mathbb N$. Since $\pi_i\in (\beta_{[0,1]^{\omega}\times[0,1]^{\omega}})^{-1}(\overline{1};\overline{1-\frac{1}{i}})$, we have $d_{\nu_i}(\overline{1};\overline{0})=0$ for each $i\in \mathbb N$, where $d_{\nu_i}$ is the density of $\nu_i$.
Choose a function $\varphi\in C([0,1]^{\omega}\times[0,1]^{\omega})$ such that $\varphi(\overline{1};\overline{0})=1$ and $\varphi(\overline{0};\overline{0})=\varphi(\overline{1};\overline{1})=0$. Then we have $(\delta_{(\overline{0};\overline{0})}\oplus\delta_{(\overline{1};\overline{1})})(\varphi)=0$ and $\pi_i(\varphi)\ge\nu_i(\varphi)\ge 1$ for each $i\in \mathbb N$. We obtain a contradiction with the openness of $\chi$. The theorem is proved.
\end{proof}
\begin{question} Does there exist a non-metrizable I-barycentrically soft compactum?
\end{question}
\begin{thebibliography}{}
\bibitem{A} M. Akian, {\em Densities of idempotent measures and large deviations}, Trans. Amer. Math. Soc. {\bf 351} (1999), no. 11, 4515--4543.
\bibitem{BR} T. Banakh, T. Radul, {\em F-Dugundji spaces, F-Milutin spaces and absolute F-valued retracts}, Topology Appl. {\bf 179} (2015), 34--50.
\bibitem{DH} S. Ditor, R. Haydon, {\em On absolute retracts, $P(S)$ and complemented subspaces of $C(D^{\omega_1})$}, Studia Math. {\bf 56} (1976), 243--251.
\bibitem{Eif} L. Q. Eifler, {\em Openness of convex averaging}, Glasnik Mat. Ser. III {\bf 32} (1977), no. 1, 67--72.
\bibitem{Fed} V. V. Fedorchuk, {\em On a barycentric map of probability measures}, Vestn. Mosk. Univ. Ser. I (1992), no. 1, 42--47.
\bibitem{Fed1} V. V. Fedorchuk, {\em On barycentrically open bicompacta}, Siberian Mathematical Journal {\bf 33} (1992), 1135--1139.
\bibitem{Hayd} R. Haydon, {\em Embedding $D^\tau$ in Dugundji spaces, with an application to linear topological classification of spaces of continuous functions}, Studia Math. {\bf 56} (1976), 31--44.
\bibitem{Litv} G. L. Litvinov, {\em The Maslov dequantization, idempotent and tropical mathematics: a very brief introduction}, Idempotent Mathematics and Mathematical Physics, 1--17, Contemp. Math. 377, Amer. Math. Soc., Providence, RI, 2005.
\bibitem{MS} V. P. Maslov, S. N. Samborskii, {\em Idempotent Analysis}, Adv. Soviet Math., vol. 13, Amer. Math. Soc., Providence, 1992.
\bibitem{OBr} R. C. O'Brien, {\em On the openness of the barycentre map}, Math. Ann. {\bf 223} (1976), 207--212.
\bibitem{Pap} S. Papadopoulou, {\em On the geometry of stable compact convex sets}, Math. Ann. {\bf 229} (1977), 193--200.
\bibitem{Radul} T. Radul, {\em Absolute retracts and equiconnected monads}, Topology Appl. {\bf 202} (2016), 1--6.
\bibitem{Radul1} T. Radul, {\em On the openness of the idempotent barycenter map}, Topology Appl. (submitted).
\bibitem{Radul2} T. Radul, {\em On the barycentric map of probability measures}, Vestn. Mosk. Univ. Ser. I (1994), no. 1, 3--6.
\bibitem{Radul3} T. Radul, {\em On barycentrically soft compacta}, Fund. Math. {\bf 148} (1995), 27--33.
\bibitem{Radul4} T. Radul, {\em Topology of the space of ordering-preserving functionals}, Bull. Pol. Acad. Sci. {\bf 47} (1999), 53--60.
\bibitem{Shchep} E. V. Shchepin, {\em Functors and uncountable powers of compacta}, Uspekhi Mat. Nauk {\bf 36} (1981), 3--62.
\bibitem{Shchep1} E. V. Shchepin, {\em Topology of limit spaces of uncountable inverse spectra}, Russian Mathematical Surveys {\bf 31} (1976), 155--191.
\bibitem{Zar} M. Zarichnyi, {\em Spaces and mappings of idempotent measures}, Izv. Ross. Akad. Nauk Ser. Mat. {\bf 74} (2010), 45--64.
\bibitem{Zar2} M. Zarichnyi, {\em Michael selection theorem for max-plus compact convex sets}, Topology Proceedings {\bf 31} (2007), 677--681.
\bibitem{Zar3} M. Zarichnyi, {\em Absolute extensors and the geometry of multiplication of monads in the category of compacta}, Mat. Sbornik {\bf 182} (1991), 1261--1288.
\bibitem{Z} K. Zimmermann, {\em A general separation theorem in extremal algebras}, Ekon.-Mat. Obz. {\bf 13} (1977), 179--201.
\end{thebibliography}
\end{document}
\begin{document}
\title{Global Complex Roots and Poles Finding Algorithm Based on Phase Analysis for Propagation and Radiation Problems}
\author{Piotr Kowalczyk
\thanks{This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.}
\thanks{This work was supported under funding for Statutory Activities for the Faculty of Electronics, Telecommunication and Informatics,
Gdansk University of Technology.}
\thanks{Piotr Kowalczyk is with Gdansk University of Technology, Faculty of Electronics, Telecommunications and Informatics, Narutowicza 11/12, 80233 Gdansk, Poland (e-mail: [email protected], [email protected]). }
}
\maketitle
\begin{abstract}
A flexible and effective algorithm for complex roots and poles finding is presented. A wide class of analytic functions can be analyzed, and any arbitrarily shaped search region can be considered. The method is very simple and intuitive. It is based on sampling a function at the nodes of a regular mesh, and on the analysis of the function phase. As a result, a set of candidate regions is created and then the roots/poles are verified using a discretized Cauchy's argument principle. The accuracy of the results can be improved by the application of a self-adaptive mesh. The effectiveness of the presented technique is supported by numerical tests involving different types of structures, where electromagnetic waves are guided and radiated. The results are verified, and the computational efficiency of the method is examined.
\end{abstract}
\begin{IEEEkeywords}
Complex roots finding algorithm, complex modes, propagation, radiation
\end{IEEEkeywords}
\section{Introduction}
\label{sec:introduction}
\IEEEPARstart{M}{any} propagation and radiation problems are formulated in a complex domain. One of the most common parameters representing electromagnetic wave propagation (guiding, radiation, or losses) is, in general, a complex number. Similarly, resonant frequencies of resonators or radiators are complex values. In many cases the evaluation of these parameters boils down to finding a complex root of a more or less complicated function. Even in the case of a simple technique, such as mode matching or field matching \cite{warecka18}, the roots cannot be found analytically. For more sophisticated structures and methods the function can be expressed in numerical form (spectral domain approach, discrete methods, nonlinear matrix eigenvalue problems) \cite{lech14,lech18}. Therefore, effective and efficient root finding algorithms have become a necessary tool in the analysis of electromagnetic waves.
Root finding is one of the oldest and most common numerical problems. For a real function of a real variable the problem can be solved using many different techniques. Moreover, the results can be verified simply by checking the sign changes of the function at the ends of any sufficiently small interval containing a single root. However, even in this case, finding all the roots in a fixed region can be difficult. Finding the roots of a complex valued function of a complex variable is more complicated. Although there are many complex root finding techniques, they usually consider only a special class of functions or a restricted region of analysis.
The most common schemes, such as Newton's method \cite{Abramowitz72} or Muller's method \cite{Press92}, are useful if an initial value of the root is already known. The same applies to algorithms tracking the root of a function with an extra parameter \cite{Michalski11,Kowalczyk17AP}. Global root finding algorithms are very efficient for simple polynomial functions \cite{Pinkert76,Schonhage82}, so a number of procedures based on polynomial approximation have been proposed \cite{Long98,Changying10,KRAVANJA2000}. However, the zeros of the considered function may bear little or no relation to those of its polynomial approximation \cite{Delves67}. An extreme example is $f(z)=\exp(z)$, which has no roots, whereas any of its finite polynomial approximations, e.g. $f(z)\approx \sum_{n=0}^{N} z^n/n!$, has $N$ zeros. Moreover, the roots can be extremely sensitive to perturbations in the coefficients of a higher order polynomial \cite{Wilkinson1994}. Therefore, the results obtained from such an approximation should be further verified. Furthermore, the accuracy of the obtained zeros cannot be simply determined and controlled, so they can only be used as starting values for an extra iterative process. However, the results of such iterative techniques can be unreliable, especially if many roots/poles are located in a small region. In such cases, there is no guarantee that the process will converge to the specific initially evaluated root. Therefore, some of the roots can be missed (if their initial values are not sufficiently close to the roots). The control of such processes can be a fairly difficult task.
Moreover, the polynomial approximation is ineffective for functions containing singularities and branch cuts in the analyzed region (the same limitation applies to some mesh based methods \cite{Wan11,Meylan2003}). Instead, a rational approximation can be applied. An interesting brief review of the polynomial and rational approximations used for roots/poles determination (based on function samples at roots of unity in a disk) can be found in \cite{Austin2014}. The approach involving rational approximation seems to be much more flexible and accurate; however, it is also more fragile. The improvement often comes at the expense of generating spurious pole-zero pairs (Froissart doublets). In some cases the problem can be reduced by proper regularization \cite{Gonnet2011}, but the roots/poles obtained from such an approximation should still be verified and their accuracy cannot be simply controlled.
The previously mentioned methods can be very efficient, especially for simple functions; however, in many practical applications, difficult and complex routines are implemented (e.g., those based on a genetic algorithm \cite{Tian09,Ariyaratne16} or on knowledge of the function singularities \cite{Chen2017}). Recently, two global algorithms have been proposed \cite{Kowalczyk15,ZOUROS2018}, which are general and flexible. Although they are quite effective, their efficiency and reliability can be significantly improved.
In this article, a simple global complex roots and poles finding algorithm is presented. The technique can be applied to a very wide class of analytic functions (including those containing singularities or even branch cuts). An arbitrarily shaped search region can be considered, so an extra numerical error (corresponding to scaling or mapping of the function) can be avoided.
In the first step, the function is sampled using a regular triangular mesh. The idea of domain triangulation for finding the zeros of a function is not new. Its origins are rooted in multidimensional bisection \cite{Eiger84} and it is also used in \cite{Kowalczyk15}. However, in the presented technique the function phase at the nodes is analyzed, rather than a simple sign change. From this analysis ``candidate edges'' are detected. Next, all the triangles attached to the candidate edges are surrounded by closed contours determining the ``candidate regions''. For these contours a discretized form of Cauchy's Argument Principle (CAP) is applied, in order to verify the existence of roots or poles in the candidate regions. The Discretized Cauchy's Argument Principle (DCAP) does not require the derivative of the function or integration over the contour, as presented in \cite{Kravanja1999,Henrici88} and \cite{GILLAN2006}. In the proposed approach a minimal number of function samples is utilized for DCAP (sometimes only four) and the contour shape is determined by the mesh geometry.
To improve the accuracy of the results any local (iterative) root finding scheme can be applied. However, as previously indicated, such methods may be unreliable and some of the roots can be missed if the initial value is not sufficiently precise. In the presented approach, a simple self-adaptive mesh refinement (inside the previously determined candidate regions) is applied. This approach has slightly worse convergence than three-point iterative algorithms (e.g. Muller's technique), but the results are much more reliable: if the root/pole is located inside the candidate region, it cannot be lost in the subsequent iterations.
The proposed technique consists of two stages: the preliminary estimation and the self-adaptive mesh refinement. The latter stage can be skipped, if the required accuracy is obtained in the former stage.
In order to support the validity of the presented technique, several numerical tests involving different types of functions are performed. The results are verified using other global techniques \cite{Kowalczyk15,Austin2014,ZOUROS2018}, and the computational effectiveness and efficiency of the methods are compared. It is shown that the proposed algorithm can be up to three orders of magnitude faster and requires a significantly smaller number of function evaluations.
The examples presented in this paper are focused on microwave and optical applications, however the algorithm is not limited to computational electrodynamics. Similar problems occur in acoustics \cite{Jensen2011}, control theory \cite{Popov62}, quantum mechanics \cite{Fernandez2001} and many other fields.
\section{Algorithm}
Let us denote the considered analytic function by $f(z)$ and the search region by
$\Omega\subset \mathbb{C}$. The aim is to find all the zeros and poles of the function in this region.
The proposed algorithm can be divided into two separate stages: preliminary
estimation and final refinement. In the preliminary estimation process the roots and poles are initially determined by sampling the function at the nodes of a triangular regular mesh and by using DCAP. In the second stage a self-adaptive mesh refinement is applied to obtain the required accuracy.
\subsection{Preliminary Estimation}
To increase the readability and clarity of the description of this stage, it is divided into five steps.
\subsubsection{Mesh}
In the first step, region $\Omega$ is covered with a regular triangular mesh (e.g. using Delaunay triangulation) of $N$ nodes and $P$ edges. A honeycomb arrangement (equilateral triangles) of the nodes $z_n\in \Omega$ results in the highest efficiency of the algorithm. However, any other configuration is also possible, provided the longest edge length is smaller than the assumed resolution $\Delta r$ (this length is discussed in more detail in section \ref{sec:limits}).
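A minimal sketch of such an initial sampling is given below; it is a hypothetical Python/NumPy/SciPy fragment (the original implementation is written in MATLAB) and assumes, purely for illustration, a rectangular search region.
\begin{verbatim}
import numpy as np
from scipy.spatial import Delaunay

def initial_mesh(x_min, x_max, y_min, y_max, dr):
    # Nodes arranged in a "honeycomb" pattern: rows spaced by
    # dr*sqrt(3)/2, every second row shifted by dr/2, so that the
    # Delaunay triangulation consists of nearly equilateral
    # triangles with edge length close to dr.
    rows, y, shift = [], y_min, False
    while y <= y_max + 1e-12:
        x = np.arange(x_min + (0.5 * dr if shift else 0.0),
                      x_max + 1e-12, dr)
        rows.append(np.column_stack([x, np.full_like(x, y)]))
        y += dr * np.sqrt(3.0) / 2.0
        shift = not shift
    nodes = np.vstack(rows)                # N x 2 array (Re, Im)
    tri = Delaunay(nodes)
    z = nodes[:, 0] + 1j * nodes[:, 1]     # nodes as complex numbers
    return z, tri.simplices                # triangles as node-index triples
\end{verbatim}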
\subsubsection{Function Evaluation}
In this step, the function is evaluated at all the nodes $f_n=f(z_n)$ (this part of the algorithm can be simply parallelized, which can significantly improve the efficiency of the process for large problems). In this method the function value is not as important as the quadrant in which it lies, and only the quadrant
\begin{equation}
\label{eqn:quadrant}
Q_n=
\left\{
\begin{array}{ll}
1,&0\le \arg f(z_n)<\pi/2\\
2,&\pi/2\le \arg f(z_n)<\pi\\
3,&\pi \le \arg f(z_n)<3\pi/2\\
4,&3\pi/2 \le \arg f(z_n)<2\pi
\end{array}
\right.
\end{equation}
associated with the node will be taken into account in the subsequent part of the algorithm.
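As an illustration only, the quadrant indices $Q_n$ of (\ref{eqn:quadrant}) can be obtained directly from the sampled values; the following hypothetical Python fragment assumes that the samples are stored in a NumPy array.
\begin{verbatim}
import numpy as np

def quadrants(f_values):
    # Map each complex sample to its quadrant index 1..4: the phase
    # is reduced to [0, 2*pi) and divided into four equal sectors.
    # Only this index is needed in the subsequent steps.
    phase = np.angle(f_values) % (2.0 * np.pi)
    return np.floor(phase / (np.pi / 2.0)).astype(int) + 1
\end{verbatim}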
\subsubsection{Candidate Edges}
Next, the phase change along each of the edges is analyzed. For this purpose, an extra parameter representing the quadrant difference along the edge can be introduced
\begin{equation}
\label{eqn:Ep}
\Delta Q_p=Q_{n_{p2}}-Q_{n_{p1}},\qquad \Delta Q_p\in\{-2,-1,0,1,2\}
\end{equation}
where $n_{p1}$ and $n_{p2}$ are nodes attached to edge $p$.
The main idea of this stage is based on the simple fact that any root or pole is located at a point where the regions described by the four different quadrants meet, as shown in Figure~\ref{fig:E1} (to clarify this idea a phase portrait of the function is placed in the background \cite{Wegert2012}). Since any triangulation of four nodes located in four different quadrants requires at least one edge with $|\Delta Q_p|= 2$, all such edges should be considered as a potential vicinity of a root or a pole. All such candidate edges are collected in a single set
$\mathcal{E}=\{p: |\Delta Q_p|= 2\}$.
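A corresponding sketch (hypothetical Python; the edges are assumed to be given as pairs of node indices extracted from the triangulation) is shown below.
\begin{verbatim}
def quadrant_differences(Q, edges):
    # Delta Q_p = Q_{n_p2} - Q_{n_p1} wrapped to {-2,...,2}; a raw
    # difference of +-3 corresponds to a single-quadrant step in the
    # opposite direction.
    dQ = []
    for n1, n2 in edges:
        d = Q[n2] - Q[n1]
        if d > 2:
            d -= 4
        elif d < -2:
            d += 4
        dQ.append(d)
    return dQ

def candidate_edges(Q, edges):
    # Edges with |Delta Q_p| = 2 form the set E of candidate edges
    # (the phase change along them is ambiguous).
    dQ = quadrant_differences(Q, edges)
    return [p for p in range(len(edges)) if abs(dQ[p]) == 2]
\end{verbatim}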
\begin{figure}
\caption{The preliminary estimation algorithm applied for function $f(z)=(z-1)(z-i)^2(z+1)^3/(z+i)$. The numbers (colors): $1$ (red), $2$ (yellow), $3$ (green) and $4$ (blue) represent the quadrants in which the function values lie. The candidate edges are marked by thick black lines. The black dotted lines represent the boundaries of the candidate regions.}
\label{fig:E1}
\end{figure}
\subsubsection{Candidate Regions}
All the triangles attached to the candidate edges $\mathcal{E}$ can also be collected in a single set of candidate triangles. From all the edges attached to these candidate triangles it is easy to find those that occur only once, and to collect them in a set $\mathcal{C}$, representing the boundary of the candidate regions (the inside edges are attached to two candidate triangles). The boundary of the candidate region must be constructed from edges with $|\Delta Q_p|<2$ only, as explained in paragraph \ref{VDCAP} (see condition (\ref{eqn:cond})).
Then, the set $\mathcal{C}$ can be divided into subsets $\mathcal{C}^{(k)}$, where $\mathcal{C}^{(k)}$ forms a closed contour surrounding the $k$-th candidate region (in Figure \ref{fig:E1} there are four candidate regions). From the implementation point of view, such an operation is very simple, as sketched below. Starting from any edge from the set $\mathcal{C}$, one can construct the boundary of the region by finding the next edge connected to the previous one. If there is no connected edge left in the set, then the last edge closes the contour and the construction of the next candidate region can be started.
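The following hypothetical Python fragment illustrates this step; the candidate edges are assumed to be stored as two-element frozensets of node indices.
\begin{verbatim}
from collections import Counter

def region_boundaries(triangles, candidate_edge_set):
    # 1) collect the triangles attached to a candidate edge,
    # 2) keep the edges of these triangles that occur exactly once
    #    (the boundary set C of the candidate regions),
    # 3) chain the boundary edges into closed contours C^(k).
    # The sketch assumes that every boundary chain closes, i.e. no
    # candidate edge lies on the boundary of the search region.
    cand_tri = []
    for a, b, c in triangles:
        tri_edges = [frozenset((a, b)), frozenset((b, c)), frozenset((c, a))]
        if any(e in candidate_edge_set for e in tri_edges):
            cand_tri.append(tri_edges)
    count = Counter(e for tri in cand_tri for e in tri)
    boundary = {e for e, k in count.items() if k == 1}

    contours = []
    while boundary:
        first, node = tuple(boundary.pop())
        contour = [first, node]
        while node != first:
            nxt = next(e for e in boundary if node in e)
            boundary.remove(nxt)
            node = next(iter(nxt - {node}))
            contour.append(node)
        contours.append(contour)   # closed: contour[0] == contour[-1]
    return contours
\end{verbatim}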
\subsubsection{Verification with Discretized Cauchy's Argument Principle}\label{VDCAP}
In a complex domain, to confirm the existence of a root or a pole in a fixed region, CAP is usually applied \cite{Brown2009}. According to this principle, the integral
\begin{equation}
\label{eqn:cap} q = \frac{1}{2 \pi i}\oint\limits_{C} \frac{f'(z)}{f(z)} dz
\end{equation}
represents the sum of all zeros counted with their multiplicities, minus the sum of all poles
counted with their multiplicities. If the region contains only a single candidate point, the parameter $q$ can be: a positive integer (root of order $q$), a negative integer (pole of order $-q$) or zero (regular point).
In practice, integral (\ref{eqn:cap}) represents a total change in the argument of the function $f(z)$ over a closed contour $C$ and there is no need to calculate this integral directly. The parameter $q$ can be evaluated from DCAP - by sampling the function along the contour $C$ \cite{Kravanja1999,Henrici88}
\begin{equation}
\label{eqn:dcap} q = \frac{1}{2\pi} \sum_{p=1}^P \textrm{arg} \frac{f(z_{p+1})}{f(z_{p})}.
\end{equation}
The points $z_{1},z_{2},...,z_{P}$ (and $z_{P+1}=z_{1}$) are obtained from discretization of the contour $C$ and the increase of the argument of $f(z)$ along the segment $C_p$ ($C=\bigcup_{p=1}^P C_p$) from $z_{p}$ to $z_{p+1}$ satisfies the condition
\begin{equation}
\label{eqn:cond} \left| [ \textrm{arg} f(z) ]_{z\in(z_{p},z_{p+1})} \right| \le \pi.
\end{equation}
As it is shown in \cite{Henrici88,Ying1988}, the condition (\ref{eqn:cond}) may not be easy to verify. However, in the presented approach the verification contour $C$ is defined by the boundary of the analyzed candidate region $\mathcal{C}^{(k)}$. For all the edges in $\mathcal{C}^{(k)}$ the phase change is $|\Delta Q_p|\le 1$ and the condition (\ref{eqn:cond}) is fulfilled.
An example of DCAP (single root in $z^{(1)}=1$, double root in $z^{(2)}=i$, triple root in $z^{(3)}=-1$ and singularity in $z^{(4)}=-i$) is presented in Figure~\ref{fig:E1}. For each of the four candidate regions, the function argument varies along the contour taking the values from the four quadrants. Since the quadrant difference along a single edge is $|\Delta Q_p|\le 1$, the discretization of the boundary is sufficient to evaluate the total phase change over the region boundary. By summing all the increases in the quadrants along the contour in the counterclockwise direction, one obtains the values $4$, $8$, $12$ and $-4$ for regions containing $z^{(1)}$, $z^{(2)}$, $z^{(3)}$ and $z^{(4)}$, respectively. Since the increase in the quadrant numbers along the edge represents the changes in the function argument of $\pi/2$, the parameter $q$ is equal to $1$, $2$ , $3$ and $-1$, respectively (single root, double root, triple root and singularity):
\begin{equation}
\label{eqn:cap_simp} q = \frac{1}{4}\sum_{p=1}^{P} \Delta Q_p.
\end{equation}
In general, at least $P=4q$ nodes are required to verify a single root or a pole of the $q$-th order.
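A sketch of this verification step (hypothetical Python, operating on the quadrant indices of the boundary nodes listed in counterclockwise order) could read as follows.
\begin{verbatim}
def dcap(Q_contour):
    # Quadrant-based form of the discretized Cauchy's Argument
    # Principle: sum the wrapped quadrant increments along the closed
    # contour and divide by 4.  Q_contour lists the quadrant indices
    # of the boundary nodes counterclockwise (first node not repeated).
    total, P = 0, len(Q_contour)
    for p in range(P):
        d = Q_contour[(p + 1) % P] - Q_contour[p]
        if d > 2:
            d -= 4
        elif d < -2:
            d += 4
        total += d
    return total // 4   # q > 0: root of order q, q < 0: pole, q = 0: regular
\end{verbatim}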
It is worth noting that the change in quadrants along the candidate edges ($|\Delta Q_p|=2$) is ambiguous; it is impossible to determine whether the phase increases or decreases by two quadrants (condition (\ref{eqn:cond}) is not satisfied).
In some cases, the regions cannot be unambiguously determined because the boundary of the candidate region cannot be closed (the candidate edge is located at the boundary of the domain $\Omega$). To solve this problem, the domain $\Omega$ should be extended or a denser initial mesh should be used.
\subsection{Mesh Refinement}
In order to improve the accuracy of the root location, a self-adaptive mesh is applied. This approach prevents an improper convergence of the algorithm: none of the initially found roots or poles can be missed, even if they are not located exactly inside the candidate region. In other techniques (such as Newton's or Muller's) a process that started with a given initial point can converge to a different root/pole, especially if the roots/poles are located in the immediate vicinity of each other.
In order to illustrate the main idea of the proposed approach, a simple example of the process is presented in Figure \ref{fig:E2} (again, a phase portrait of the function is placed in the background). In the first step, new extra nodes are added to the mesh in the centers of the edges in the candidate regions. Then, using Delaunay triangulation, a new mesh is obtained. Next, the function values are evaluated at these new points and the new configuration is analyzed exactly as in the preliminary estimation - new candidate regions are determined for a locally denser mesh. Obviously, the area of the new candidate region is smaller, which improves the accuracy of the result. The process may be repeated any number of times, until a fixed accuracy $\delta$ is reached.
In subsequent repetitions, the refinement of the mesh can lead to ill-conditioned geometry (``skinny triangles''). To avoid this problem an additional zone surrounding the region should be considered (white dotted line in Figure \ref{fig:E2}). If a triangle in the extra zone is ``skinny'' (e.g. the ratio of the longest triangle edge to the shortest edge is greater than $3$), a new extra node is added in its center (see Figures \ref{fig:E2}(c) and \ref{fig:E2}(d)).
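A single refinement step can be sketched as follows (hypothetical Python; the skinny-triangle handling in the surrounding zone is omitted for brevity).
\begin{verbatim}
import numpy as np
from scipy.spatial import Delaunay

def refine(z_nodes, region_edges):
    # Add the midpoint of every edge belonging to a candidate region
    # (region_edges: pairs of node indices) and rebuild the
    # triangulation; the new function values have to be computed
    # only at the added nodes.
    mid = np.array([(z_nodes[n1] + z_nodes[n2]) / 2.0
                    for n1, n2 in region_edges])
    z_new = np.concatenate([z_nodes, mid])
    pts = np.column_stack([z_new.real, z_new.imag])
    return z_new, Delaunay(pts).simplices
\end{verbatim}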
\begin{figure}
\caption{A simple example of the mesh refinement process. Figures (a)-(d) represent four consecutive iterations. The candidate edges are marked by thick black lines. The black dotted lines represent the boundary of the candidate regions and the white dotted line represents a boundary of the extra zone.}
\label{fig:E2}
\end{figure}
In this stage of the algorithm, the refinement can be performed only for roots or poles (if there is no need to find all the characteristic points). However, it is possible (and quite efficient) to start the refinement process without verification of the candidate regions in the preliminary estimation. The verification can be performed after the refinement, and the results may be more accurate (e.g., two different roots could be verified as a double root in the preliminary estimation, but they may be separated in the refinement process).
\subsection{Effectiveness and Limitations of the Algorithm}\label{sec:limits}
An application of the regular mesh gives a very clear guarantee of correctness:
if the discretization of the function is proper, then none of the roots/poles can be missed. The proper discretization means that for all edges the phase change does not exceed three quadrants. Hence, the algorithm can be applied to any analytic function and in any domain, if the initial mesh step $\Delta r$ is sufficiently small.
However, if the self-adaptive process is involved, this condition on the initial mesh discretization is sufficient but not necessary. In practice, $\Delta r$ can be even greater than the distance between the roots/poles of the function (as shown in section \ref{sec:num}). Unfortunately, just as for the other established methods (e.g., those based on interpolations \cite{Austin2014} or discrete techniques \cite{Kowalczyk15,Wan11,Meylan2003}), there is no clear recipe for an a priori estimation of the initial sampling for an arbitrary function. In practice, it can be chosen by the user experimentally, via sequential iterations. An initial verification can be performed using the idea of DCAP for the whole boundary of the region $\Omega$. However, this still does not guarantee that all the roots are found (for instance, if the result of DCAP is equal to zero, then the region may be free of roots and poles or it may contain an equal number of roots and poles).
To reduce the risk of missing roots/poles, CAP can be extended to higher moments $m$ \cite{Lamparillo75}:
\begin{equation}
\label{eqn:cap_himo} \frac{1}{2 \pi i}\oint\limits_{C} \frac{f'(z)}{f(z)} z^m dz =\sum_{k\in\{roots\}}\left( z^{(k)}\right)^m - \sum_{k\in\{poles\}}\left( z^{(k)}\right)^m.
\end{equation}
The moment $m=1$ eliminates the problem for a single root-pole pair \cite{Zieniutycz83} and each higher moment can further reduce the risk. Such an approach can be especially useful if an analytical expression of the function is known.
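A direct numerical check of (\ref{eqn:cap_himo}) over a circular contour can be sketched as follows (hypothetical Python; the derivative is approximated by central differences along the densely sampled contour, which is adequate only when the function is analytic and nonzero on the contour).
\begin{verbatim}
import numpy as np

def cap_moment(f, z0, r, m=0, P=4096):
    # Approximate (1/(2*pi*i)) * integral of f'(z)/f(z) * z^m dz over
    # the circle |z - z0| = r with the trapezoidal rule; for m = 0 the
    # result approaches the integer q of the argument principle.
    t = 2.0 * np.pi * np.arange(P) / P
    z = z0 + r * np.exp(1j * t)
    fz = f(z)
    dz = np.roll(z, -1) - np.roll(z, 1)      # z_{p+1} - z_{p-1}
    df = np.roll(fz, -1) - np.roll(fz, 1)
    integrand = (df / dz) / fz * z**m
    return np.sum(integrand * dz / 2.0) / (2.0j * np.pi)
\end{verbatim}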
\section{Numerical Tests}\label{sec:num}
The algorithm was implemented in the MATLAB environment, and all the tests were performed using an Intel(R) Core i7-2600K CPU 3.40-GHz, 16-GB RAM computer.
\subsection{Complex Modes}\label{sec:cmpx}
As the first example, a complex wave propagation problem in a
circular waveguide of radius $b$, coaxially loaded with a dielectric cylinder of radius $a$ is considered \cite{Mrozowski97,Kowalczyk17AP}. To ensure continuity of the fields at the boundary of the dielectric and metal, the following determinant function must
be equal to zero:
\begin{equation}
\label{eqn:cmpx} f(z)= \left|
\begin{array}{cccccc}
-J_1 & 0 & J_2 & Y_2& 0 &0 \\
0 &J_1& 0 & 0 &-J_2&-Y_2\\
-\frac{z m J_1}{a\kappa_1^2}& -\frac{i\eta_0 J'_1}{\kappa_1}& \frac{z m J_2}{a\kappa_2^2}& \frac{z m Y_2}{a\kappa_2^2}&\frac{i\eta_0 J'_2}{\kappa_2}&\frac{i\eta_0 Y'_2}{\kappa_2}\\
-\frac{i\varepsilon_r J'_1}{\kappa_1\eta_0}& -\frac{z m J_1}{a\kappa_1^2}&
\frac{i J'_2}{\kappa_2\eta_0}& \frac{i Y'_2}{\kappa_2\eta_0}& \frac{z m J_2}{a\kappa_2^2}&
\frac{z m Y_2}{a\kappa_2^2}\\
0& 0& J_3& Y_3& 0&0\\
0& 0& \frac{z m J_3}{b\kappa_2}& \frac{ z m Y_3}{b\kappa_2}& i\eta_0J'_3&
i\eta_0 Y'_3\\
\end{array} \right|,
\end{equation}
where $z$ represents a normalized propagation coefficient. $J_1=J_m(k_0\kappa_1 a)$, $Y_1=Y_m(k_0\kappa_1 a)$, $J_2=J_m(k_0\kappa_2 a)$, $Y_2=Y_m(k_0\kappa_2 a)$, $J_3=J_m(k_0\kappa_2 b)$ and $Y_3=Y_m(k_0\kappa_2 b)$ are Bessel and Neumann functions (primes denote derivatives). The coefficients are $\kappa_1=\sqrt{z^2+\varepsilon_r}$, $\kappa_2=\sqrt{z^2+1}$, $k_0=2\pi f / c$ and $\eta_0=120\pi$ $\Omega$. The tests are performed for parameters
$a = 6.35$ mm, $b = 10$ mm, $\varepsilon_r=10$, $m=1$ and $f=5$ GHz.
In order to compare the efficiency of the proposed algorithm with the other established methods, the considered region is a unit disk $\Omega=\{\bar{z}\in\mathbb{C}:|\bar{z}|<1\}$ and the scaling factor $10$ is applied ($z=10\bar{z}$).
The initial mesh, evenly covering region $\Omega$ with $N=271$ nodes, is sufficient to find all roots and poles of the function in the considered region. Such a discretization corresponds to the mesh resolution $\Delta r=0.15$ and, obviously, any higher resolution leads to the same results, namely twelve single roots:\\
$\bar{z}^{(1)}=-0.096642302459942 - 0.062923397455697i$,\\
$\bar{z}^{(2)}=-0.096642302459942 + 0.062923397455697i$,\\
$\bar{z}^{(3)}=0.096642302459942 - 0.062923397455697i$,\\
$\bar{z}^{(4)}=0.096642302459942 + 0.062923397455696i$,\\
$\bar{z}^{(5)}=-0.444429043110023 + 0.000000000000000i$,\\
$\bar{z}^{(6)}=0.444429043110023 - 0.000000000000000i$,\\
$\bar{z}^{(7)}=-0.703772250217811 + 0.000000000000000i$,\\
$\bar{z}^{(8)}=0.703772250217811 - 0.000000000000000i$,\\
$\bar{z}^{(9)}=-0.775021522202022 + 0.000000000000000i$,\\
$\bar{z}^{(10)}=0.775021522202023 - 0.000000000000000i$,\\
$\bar{z}^{(11)}=-0.856115203911565 + 0.000000000000000i$,\\
$\bar{z}^{(12)}=0.856115203911564 - 0.000000000000000i$\\
and two second order poles:\\
$\bar{z}^{(13)}=0.000000000000000 + 0.100000000000000i$,\\
$\bar{z}^{(14)}=0.000000000000000 - 0.100000000000000i$.\\
The parameters and results of the analysis for various accuracies $\delta$ are collected in Table \ref{tab:cmpx} and in Figure \ref{fig:cmpx}.
\begin{table}
\caption{Analysis of problem (\ref{eqn:cmpx}) - parameters and results of the proposed algorithm \label{tab:cmpx}}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
accuracy & CPU time & no. of nodes & no. of iterations\\
\hline
$\delta = {1e-3}$ & $0.46$ s & $1603$ & $11$ \\
\hline
$\delta = {1e-6}$ & $0.72$ s & $2759$ & $20$ \\
\hline
$\delta = {1e-9}$ & $1.05$ s & $3867$ & $30$ \\
\hline
$\delta = {1e-12}$ & $1.37$ s & $5013$ & $40$ \\
\hline
$\delta = {1e-15}$ & $1.78$ s & $6167$ & $51$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\caption{Self-adaptive meshes obtained for problem (\ref{eqn:cmpx}).}
\label{fig:cmpx}
\end{figure}
The similar discrete algorithm of \cite{Kowalczyk15} requires $721$ samples (corresponding to $\Delta r=0.06$) for the initial mesh, and this number increases to $2252$ nodes for the verification and for the accuracy improvement to the level $1e-3$. Moreover, the computational time of algorithm \cite{Kowalczyk15} is two orders of magnitude longer and equals $113$ s. This large discrepancy between the algorithms arises from fundamental differences in the data processing. The older algorithm \cite{Kowalczyk15} is much more complex. It requires approximation of the real and imaginary parts of the function separately. Next, the curves representing the zeros of the real and imaginary parts of the function are constructed, and then all crossings of these curves are estimated. Finally, DCAP (with extra nodes) is applied over the circles surrounding the crossings. Moreover, if DCAP is applied over an artificial circle of radius $\Delta r$ surrounding the candidate point, this value $\Delta r$ must be sufficiently small to separate all the roots, which requires a much denser initial discretization. In the approach presented in this paper, the processing is significantly simpler, which results in faster calculations and lower memory requirements.
Additionally, the effectiveness of the rational approximation method presented in \cite{Austin2014}, which is based on the ``ratdisk'' algorithm \cite{Gonnet2011}, is tested. The parameters and results of the analysis for various numbers of samples $N$ and various orders of the numerator ($m$) and denominator ($n$) polynomials are collected in Table \ref{tab:cmpx_rd}. If the number of samples $N$ is sufficiently high, then all the roots can be found. However, despite the regularization \cite{Gonnet2011} used in the analysis, there are numerous spurious roots and poles among the proper results. Moreover, the accuracy of the results cannot be simply controlled: for a higher number of samples $N$ and higher orders $m$ and $n$ the accuracy increases, but for each root/pole this value can be different (the maximum $Err_{max}$ and minimum $Err_{min}$ absolute errors are collected in Table \ref{tab:cmpx_rd}). So, in practice, these results can be used as an efficient preliminary estimation, and extra post-processing may be required.
\begin{table}
\caption{Analysis of problem (\ref{eqn:cmpx}) - parameters and results of rational approximation \cite{Austin2014} \label{tab:cmpx_rd}}
\begin{center}
\begin{tabular}{|p{0.6cm}|p{1.8cm}|p{1.8cm}|p{1.8cm}|}
\hline
& $N=50$ & $N=500$ & $N=5000$\\
\hline
$m=25$ & $Err_{max}=1e-1$ & $Err_{max}=1e-1$ & $Err_{max}=1e-1$ \\
$n=25$ & $Err_{min}=2e-4$ & $Err_{min}=7e-4$ & $Err_{min}=7e-4$ \\
& $t_{CPU}=0.09s$ & $t_{CPU}=0.18s$ & $t_{CPU}=0.81s$ \\
& $2$ missing roots & $2$ missing roots & $2$ missing roots \\
& $1$ spurious pole & & \\
\hline
$m=250$ & $m+n>N$ & $Err_{max}=4e-3$ & $Err_{max}=2e-3$ \\
$n=250$ & & $Err_{min}=8e-9$ & $Err_{min}=6e-9$ \\
& & $t_{CPU}=0.25s$ & $t_{CPU}=1.32s$ \\
& & $7$ spurious roots& $4$ spurious roots \\
& & $7$ spurious poles& $6$ spurious poles \\
\hline
$m=2500$ & $m+n>N$ & $m+n>N$ & $Err_{max}=2e-3$ \\
$n=2500$ & & & $Err_{min}=5e-9$ \\
& & & $t_{CPU}=23.08s$ \\
& & & $3$ spurious roots \\
& & & $5$ spurious poles \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Lossy Multilayered Waveguide}\label{sec:multi}
As the second example, a multilayered guiding structure is considered \cite{Anemogiannis92,ZOUROS2018}. Such structures are widely used in microwave applications, and their analysis boils down to satisfying specific boundary conditions, which requires finding the zeros of the following determinant function:
\begin{equation}
\label{eqn:multi}
f(z)= \left|
\begin{array}{cc}
1 & -\cos(k_0\kappa_1 d_1)-\gamma_C \sin(k_0 \kappa_1 d_1)/\kappa_1 \\
i\gamma_S & -i \kappa_1\sin(k_0\kappa_1 d_1)+i\gamma_C \cos(k_0 \kappa_1 d_1) \\
\end{array} \right|,
\end{equation}
where $z$ represents a normalized propagation coefficient, $k_0=2\pi /\lambda_0$, $\kappa_1=\sqrt{n_1^2-z^2}$, $\gamma_S=\sqrt{z^2-n_S^2}$ and $\gamma_C=\sqrt{z^2-n_C^2}$. The typical set of material parameters is $n_1=1.5835$, $n_S=0.065-4i$ and $n_C=1$. The analysis is performed for thickness $d_1=1.81$ $\mu$m and frequency corresponds to wavelength $\lambda_0=0.6328$ $\mu$m.
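A direct transcription of (\ref{eqn:multi}) with the quoted parameters is given below as a hypothetical Python/NumPy sketch (the original implementation is written in MATLAB); the principal branch of the complex square root is used.
\begin{verbatim}
import numpy as np

def f_multilayer(z, n1=1.5835, nS=0.065 - 4j, nC=1.0,
                 d1=1.81e-6, lambda0=0.6328e-6):
    # Determinant function of the lossy multilayered waveguide;
    # z is the normalized propagation coefficient.
    z = np.asarray(z, dtype=complex)
    k0 = 2.0 * np.pi / lambda0
    kappa1 = np.sqrt(n1**2 - z**2)
    gammaS = np.sqrt(z**2 - nS**2)
    gammaC = np.sqrt(z**2 - nC**2)
    a11 = 1.0
    a12 = -np.cos(k0 * kappa1 * d1) - gammaC * np.sin(k0 * kappa1 * d1) / kappa1
    a21 = 1j * gammaS
    a22 = -1j * kappa1 * np.sin(k0 * kappa1 * d1) \
          + 1j * gammaC * np.cos(k0 * kappa1 * d1)
    return a11 * a22 - a12 * a21          # 2 x 2 determinant

# example: the function value near the first root reported below
print(abs(f_multilayer(1.574863 - 0.000003j)))
\end{verbatim}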
The assumed region is the same as the one proposed in \cite{ZOUROS2018}: $\Omega=\{z\in\mathbb{C}:1<\textrm{Re}(z)<2.5 \wedge -1<\textrm{Im}(z)<1\}$. The results and parameters of the analysis for various accuracies $\delta$ are shown in Table \ref{tab:multi} and in Figure \ref{fig:multi}.
In this case, the initial mesh, evenly covering region $\Omega$ with $N=27$ nodes (which corresponds to $\Delta r=0.5$), is sufficient to find all roots of the function (\ref{eqn:multi}) in this region, namely seven single roots:\\
$z^{(1)}=1.574863045752781 - 0.000002974623699i$,\\
$z^{(2)}=1.548692243882210 - 0.000012101013332i$,\\
$z^{(3)}=1.504169866404311 - 0.000028029436583i$,\\
$z^{(4)}=1.439795544245059 - 0.000052001665381i$,\\
$z^{(5)}=1.353140429182476 - 0.000086139194522i$,\\
$z^{(6)}=1.240454471356097 - 0.000133822149870i$,\\
$z^{(7)}=1.096752543407689 - 0.000197146879192i$.
Again, the efficiency is compared with the discrete algorithm \cite{Kowalczyk15}, which requires $10927$ samples ($\Delta r=0.02$) for the initial mesh; this number increases to $11067$ nodes after the verification and the accuracy improvement to the level $1e-3$. Also, the computational time of the algorithm \cite{Kowalczyk15} is about two orders of magnitude longer and equals $62.33$ s. The most recently published algorithm \cite{ZOUROS2018} requires even more function calls ($156803$ in this case), which results in a significantly longer analysis. The same applies to the algorithm \cite{Anemogiannis92}, where a similarly large number of function samples is required.
\begin{table}
\caption{Analysis of problem (\ref{eqn:multi}) - parameters and results of the proposed algorithm \label{tab:multi} }
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
accuracy & CPU time & no. of nodes & no. of iterations\\
\hline
$\delta = {1e-3}$ & $0.33$ s & $1623$ & $10$ \\
\hline
$\delta = {1e-6}$ & $0.44$ s & $2066$ & $21$ \\
\hline
$\delta = {1e-9}$ & $0.58$ s & $2472$ & $31$ \\
\hline
$\delta = {1e-12}$ & $0.71$ s & $2900$ & $41$ \\
\hline
$\delta = {1e-15}$ & $0.87$ s & $3322$ & $51$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\caption{Self-adaptive meshes obtained for problem (\ref{eqn:multi}).}
\label{fig:multi}
\end{figure}
\subsection{Graphene Transmission Line}\label{sec:gtl}
As the last example, a simple graphene transmission line is considered. The guide consists of a thin graphene layer deposited on a silicon substrate \cite{Gomez2013,Kowalczyk17AP}. In this case, the normalized propagation coefficient $z$ for TM modes can be found from the following equation
\begin{eqnarray}
\label{eqn:gtl}
&& f(z)=
\frac{\varepsilon_{r1}}{\eta_0 \sqrt{\varepsilon_{r1}+z^2}}+
\frac{\varepsilon_{r2}}{\eta_0 \sqrt{\varepsilon_{r2}+z^2}}\nonumber\\
&&\qquad +\left[
\sigma_{lo}-z^2k_{0}^{2}(\alpha_{sd}+\beta_{sd})
\right],
\end{eqnarray}
where $k_0=2\pi f/c$ and $\eta_0$ is the wave impedance of vacuum. The graphene parameters depend on the frequency as follows:
\begin{equation}
\sigma_{lo}=\frac{-iq_e^2k_B T}{\pi \hbar^2 (2\pi f-i\tau^{-1})}
\ln\left[
2\left(1+\cosh \left( \frac{\mu_c}{k_B T} \right) \right)
\right],
\end{equation}
\begin{equation}
\alpha_{sd}=\frac{-3 v_F^2 \sigma_{lo} }{4 (2\pi f-i\tau^{-1})^2}, \qquad \beta_{sd}=\frac{\alpha_{sd}}{3},
\end{equation}
where $q_e$ is the electron charge, $k_B$ is Boltzmann's constant, $T=300$ K, $\tau=0.135$ ps,
$\mu_c=0.05q_e$, and $v_F=10^6$ m/s. The tests are performed for the frequency $f=1$ THz, $\varepsilon_{r1}=1$ and $\varepsilon_{r2}=11.9$.
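For orientation, a direct evaluation of (\ref{eqn:gtl}) with the parameters above can be sketched in Python as follows; only the principal branch of the complex square root is used, so the pointwise product over the four Riemann sheets described in the next paragraph is not reproduced, and the physical constants are taken from \texttt{scipy.constants}.
\begin{verbatim}
import numpy as np
from scipy.constants import e as q_e, k as k_B, hbar, c

T, tau, v_F = 300.0, 0.135e-12, 1e6
mu_c = 0.05 * q_e             # 0.05 eV expressed in joules
freq = 1e12
eps_r1, eps_r2 = 1.0, 11.9
eta0 = 376.730313668          # wave impedance of vacuum, ohms
k0 = 2 * np.pi * freq / c
omega = 2 * np.pi * freq

sigma_lo = (-1j * q_e**2 * k_B * T / (np.pi * hbar**2 * (omega - 1j / tau))
            * np.log(2 * (1 + np.cosh(mu_c / (k_B * T)))))
alpha_sd = -3 * v_F**2 * sigma_lo / (4 * (omega - 1j / tau)**2)
beta_sd = alpha_sd / 3

def f_graphene(z):
    # TM dispersion function defined above (principal sheet only).
    z = complex(z)
    return (eps_r1 / (eta0 * np.sqrt(eps_r1 + z**2 + 0j))
            + eps_r2 / (eta0 * np.sqrt(eps_r2 + z**2 + 0j))
            + sigma_lo - z**2 * k0**2 * (alpha_sd + beta_sd))
\end{verbatim}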
The region $\Omega=\{z\in\mathbb{C}:-100<\textrm{Re}(z)<400 \wedge -100<\textrm{Im}(z)<400\}$ is considered. Since the function (\ref{eqn:gtl}) has four Riemann sheets, their pointwise product is analyzed \cite{Kowalczyk17} in order to avoid a separate investigation of each sheet.
The results and parameters of the analysis for various accuracies $\delta$ are presented in Table \ref{tab:gtl} and in Figure \ref{fig:gtl}. $N=973$ function samples ($\Delta r=18$) are sufficient to determine all roots and poles of the function in $\Omega$, namely eight single roots:\\
$z^{(1)}= -32.1019622516073 - 27.4308619360125i$,\\
$z^{(2)}= 32.1019622516073 + 27.4308619360128i$,\\
$z^{(3)}=-38.1777253144799 - 32.5295210455987i $,\\
$z^{(4)}= 38.1777253144797 - 32.5295210455987i $,\\
$z^{(5)}= 332.7448889298402 + 282.2430799544401i$,\\
$z^{(6)}= 336.2202873389791 + 285.1910910139915i$,\\
$z^{(7)}= 368.4394672155518 + 312.5220780593669i $,\\
$z^{(8)}= 371.0075708341529 + 314.7004076766967i$,\\
and two second order poles:\\
$z^{(9)}=0.000000000000184 - 3.449637662132114i$,\\
$z^{(10)}=-0.000000000000158 + 3.449637662131965i$.
\begin{table}
\caption{Analysis of problem (\ref{eqn:gtl}) - parameters and results \label{tab:gtl} }
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
accuracy & CPU time & no. of nodes & no. of iterations\\
\hline
$\delta = {1e-3}$ & $0.39$ s & $2342$ & $16$ \\
\hline
$\delta = {1e-6}$ & $0.54$ s & $3121$ & $26$ \\
\hline
$\delta = {1e-9}$ & $0.75$ s & $4084$ & $36$ \\
\hline
$\delta = {1e-12}$ & $0.99$ s & $4983$ & $46$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\caption{Self-adaptive meshes obtained for problem (\ref{eqn:gtl}).}
\label{fig:gtl}
\end{figure}
For comparison, the algorithm \cite{Kowalczyk15} requires $32689$ samples (corresponding to $\Delta r=3$) for the initial mesh, and this number increases to $33256$ nodes after the verification and the accuracy improvement to the level $1e-3$. In this case, the computational time is as much as three orders of magnitude longer and equals $487$ s.
\section{Conclusions}
A new algorithm for finding complex roots and poles is presented. A wide class of complex functions can be analyzed in an arbitrarily shaped search region. The effectiveness of the proposed technique is demonstrated by several numerical tests. Moreover, the efficiency of the presented method is confirmed by comparing the analysis parameters with those of alternative, recently published techniques. It is shown that the proposed algorithm can be up to three orders of magnitude faster and requires a significantly smaller number of function evaluations.
\appendices
\section{Source code}
The source code of the GRPF (Global complex Roots and Poles Finding algorithm based on phase analysis) can be found at: (if the paper is accepted, the code will be available at https://github.com/), and it is licensed under the MIT License.
\begin{IEEEbiographynophoto}{Piotr Kowalczyk}
was born in Wejherowo, Poland, in 1977. He received the M.S. degree in applied physics and mathematics and Ph.D. degree in electrical engineering from the Gdansk University of Technology, Gdansk, Poland, in 2001 and 2008, respectively.
He is currently with Microwave and Antenna Engineering at the Gdansk University of Technology, Gdansk, Poland. His research is focused on the scattering and propagation of electromagnetic waves, as well as on algorithms and numerical methods.
\end{IEEEbiographynophoto}
\end{document}
\begin{document}
\maketitle
\begin{abstract}
There are several approaches to study occurrences of consecutive
patterns in permutations such as the inclusion-exclusion method, the
tree representations of permutations, the spectral approach and
others. We propose yet another approach to study occurrences of
consecutive patterns in permutations. The approach is based on
considering the graph of patterns overlaps, which is a certain
subgraph of the de Bruijn graph.
While applying our approach, the notion of a uniquely $k$-determined
permutation appears. We give two criteria for a permutation to be
uniquely $k$-determined: one in terms of the distance between two
consecutive elements in a permutation, and the other one in terms of
directed hamiltonian paths in certain graphs called
path-schemes. Moreover, we describe a finite set of prohibitions
that gives the set of uniquely $k$-determined permutations. Those
prohibitions make applying the transfer matrix method possible for
determining the number of uniquely $k$-determined permutations.
\end{abstract}
\section{Introduction}
A {\em pattern} $\tau$ is a permutation on $\{1,2,\ldots,k\}$. An
occurrence of a {\em consecutive pattern} $\tau$ in a permutation
$\pi=\pi_1\pi_2\ldots\pi_n$ is a word
$\pi_i\pi_{i+1}\ldots\pi_{i+k-1}$ that is {\em order-isomorphic} to
$\tau$. For example, the permutation 253164 contains two occurrences
of the pattern 132, namely 253 and 164. In this paper we deal only
with consecutive patterns, which allows us to omit the word
``consecutive'' when defining a pattern, in order to shorten the notation.
There are several approaches in the literature to study the {\em
distribution} and, in particular, {\em avoidance}, of consecutive
patterns in permutations. For example, direct combinatorial
considerations are used in~\cite{Kit0}; the {\em method of
inclusion-exclusion} is used in~\cite{GoulJack,Kit1}; the {\em tree
representations of permutations} are used in~\cite{ElizNoy}; the
{\em spectral theory of integral operators} on $L^{2}([0,1]^{k})$ is
used in~\cite{EhrKitPer}. In this paper we introduce yet another
approach to study occurrences of consecutive patterns in
permutations. The approach is based on considering the {\em graph of
patterns overlaps} defined below, which is a subgraph of the {\em de
Bruijn graph} studied broadly in the literature mainly in connection
with combinatorics on words and graph theory.
Suppose we are interested in the number of occurrences of a pattern
$\tau$ of length $k$ in a permutation $\pi$ of length $n$. To find
this number, we scan $\pi$ from left to right with a ``window'' of
length $k$, that is, we consider
$P_i=\pi_i\pi_{i+1}\ldots\pi_{i+k-1}$ for $i=1,2,\ldots,n-k+1$: if
we meet an occurrence of $\tau$, we register it. Each $P_i$ forms a
pattern of length $k$, and the procedure of scanning $\pi$ gives us
a path in the {\em graph $\mathcal{P}_k$ of patterns overlaps of
order $k$} defined as follows (graphs of patterns/permutations
overlaps appear in \cite{BurKit,Chung,Hur}). The nodes of
$\mathcal{P}_k$ are all $k!$ $k$-permutations, and there is an arc
from a node $a_1a_2\ldots a_k$ to a node $b_1b_2\ldots b_k$ if and
only if $a_2a_3\ldots a_k$ and $b_1b_2\ldots b_{k-1}$ form the same
pattern. Thus, for any $n$-permutation there is a path in
$\mathcal{P}_k$ of length $n-k+1$ corresponding to it. For example,
if $k=3$ then to the permutation $13542$ there corresponds the path
$123\rightarrow 132\rightarrow 321$ in $\mathcal{P}_3$.
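To make this correspondence concrete, the following small Python sketch (an illustration only) computes, for a given permutation and window length $k$, the sequence of patterns read off by the sliding window, i.e. the path in $\mathcal{P}_k$.
\begin{verbatim}
def window_patterns(pi, k):
    # The path in P_k corresponding to pi: the pattern of each
    # window pi_i ... pi_{i+k-1}, written as a tuple.
    path = []
    for i in range(len(pi) - k + 1):
        window = pi[i:i + k]
        ranks = sorted(window)
        path.append(tuple(ranks.index(v) + 1 for v in window))
    return path

# The example from the text: 13542 with k = 3 gives 123 -> 132 -> 321.
print(window_patterns((1, 3, 5, 4, 2), 3))
# [(1, 2, 3), (1, 3, 2), (3, 2, 1)]
\end{verbatim}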
Our approach to study the distribution of a consecutive pattern
$\tau$ of length $k$ among $n$-permutations is to take
$\mathcal{P}_k$ and to consider all paths of length $n-k+1$ passing
through the node $\tau$ exactly $\ell$ times, where
$\ell=0,1,\ldots, n-k+1$. Then we could count the permutations
corresponding to the paths. Similarly, for the ``avoidance
problems'' that attracted much attention in the literature, we
proceed as follows: given a set of patterns of length $k$ to avoid,
we remove the corresponding nodes with the corresponding arcs from
$\mathcal{P}_k$, consider all the paths of certain length in the
graph obtained, and then count the permutations of interest.
However, a complication with the approach is that a permutation does
not need to be reconstructible uniquely from the path corresponding
to it. For example, the permutation $13542$ above has the same path
in $\mathcal{P}_3$ corresponding to it as the permutations $23541$
and $12543$. Thus, different paths in $\mathcal{P}_k$ may have
different contributions to the number of permutations with required
properties; in particular, some of the paths in $\mathcal{P}_k$ give
exactly one permutation corresponding to them. We call such
permutations {\em uniquely $k$-determined}. Study of such
permutations is the main concern of the paper, and it should be
considered as the first step in understanding how to use our
approach to the problems described. Also, in our considerations we
assume that all the nodes in $\mathcal{P}_k$ are allowed while
dealing with uniquely $k$-determined permutations, that is, we do
not prohibit any pattern.
The paper is organized as follows. In Section~\ref{sec2} we study
the set of uniquely $k$-determined permutations. In particular, we
give two criteria for a permutation to be uniquely $k$-determined:
one in terms of the distance between two {\em consecutive elements}
in a permutation, and the other one in terms of directed hamiltonian
paths in certain graphs called {\em path-schemes}. We use the
second criterion to establish (rough) upper and lower bounds for the
number of uniquely $k$-determined permutations. Moreover, given an
integer $k$, we describe a finite set of prohibitions that
determines the set of uniquely $k$-determined permutations. Those
prohibitions make applying the {\em transfer matrix
method}~\cite[Thm. 4.7.2]{Stan} possible for determining the number
of uniquely $k$-determined permutations and we discuss this in
Subsection~\ref{proh}. As a corollary of using the method, we get
that the generating function for the number of uniquely
$k$-determined permutations is rational. Besides, we show that there
are no {\em crucial permutations} in the set of uniquely
$k$-determined permutations. (Crucial objects, in the sense defined
below, are natural to study in infinite sets of objects defined by
prohibitions; for instance, see~\cite{EvdKit} for some results in
this direction related to words.) We consider in more detail the
case $k=3$ in Subsection~\ref{k=3}. Finally, in Section~\ref{sec3},
we state several open problems for further research.
\section{Uniquely $k$-determined permutations}\label{sec2}
\subsection{Distance between consecutive elements; a criterion on unique $k$-determinability}
Suppose $\pi=\pi_1\pi_2\ldots\pi_n$ is a permutation and $i<j$. The
{\em distance} $d_{\pi}(\pi_i,\pi_j)=d_{\pi}(\pi_j,\pi_i)$ between
the elements $\pi_i$ and $\pi_j$ is $j-i$. For example,
$d_{253164}(3,6)=d_{253164}(6,3)=2$.
\begin{theorem}\label{cr01}{\rm[}First criterion on unique $k$-determinability{\rm]}
An $n$-permutation $\pi$ is uniquely $k$-determined if and only if
for each $1\leq x< n$, the distance $d_{\pi}(x,x+1)\leq k-1$.
\end{theorem}
\begin{proof} Suppose that for an $n$-permutation $\pi$ we have $d_{\pi}(x,x+1)\geq k$ for some $1\leq x < n$. This
means that $x$ and $x+1$ will never be inside a ``window'' of length
$k$ while scanning consecutive elements of $\pi$. Thus, these
elements are incomparable in $\pi$ in the sense that switching $x$
and $x+1$ in $\pi$ will lead to another permutation $\pi'$ having
the same path in $\mathcal{P}_k$ as $\pi$ has. So, $\pi$ is not
uniquely $k$-determined.
On the other hand, if for each $1\leq x< n$, the distance
$d_{\pi}(x,x+1)\leq k-1$, then the positions of the elements
$1,2,\ldots, n$ are uniquely determined (first we note that the
position of 1 is uniquely determined, then we determine the position
of 2 which is a 1's neighbor in a ``window'' of length $k$, then the
position of 3, etc.) leading to the fact that $\pi$ is uniquely
$k$-determined.
\end{proof}
The following corollary to Theorem~\ref{cr01} is straightforward.
\begin{cor}\label{cr02} An $n$-permutation $\pi$ is not uniquely $k$-determined
if and only if there exists $x$, $1\leq x< n$, such that
$d_{\pi}(x,x+1)\geq k$. \end{cor}
So, to determine if a given $n$-permutation is uniquely
$k$-determined, all we need to do is to check the distance for $n-1$
pairs of numbers: $(1,2)$, $(2,3)$,..., $(n-1,n)$. Also, the
language of uniquely $k$-determined permutations is {\em factorial}
in the sense that if $\pi_1\pi_2\ldots\pi_n$ is uniquely
$k$-determined, then so is the pattern of $\pi_i\pi_{i+1}\ldots
\pi_j$ for any $i\leq j$ (this is a simple corollary to
Theorem~\ref{cr01}).
Coming back to the permutation $13542$ above and using
Corollary~\ref{cr02}, we see why this permutation is not uniquely
$3$-determined ($k=3$): the distance $d_{13542}(2,3)=3=k$.
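The check described above is easy to mechanize; the following Python sketch (an illustration only) tests the first criterion directly by comparing the positions of each pair of consecutive values.
\begin{verbatim}
def is_uniquely_k_determined(pi, k):
    # First criterion: pi is uniquely k-determined iff
    # d_pi(x, x+1) <= k-1 for every 1 <= x < n.
    pos = {value: index for index, value in enumerate(pi)}
    n = len(pi)
    return all(abs(pos[x + 1] - pos[x]) <= k - 1 for x in range(1, n))

# 13542 is not uniquely 3-determined (d(2,3) = 3), but, like any
# 5-permutation, it is uniquely 5-determined.
assert not is_uniquely_k_determined((1, 3, 5, 4, 2), 3)
assert is_uniquely_k_determined((1, 3, 5, 4, 2), 5)
\end{verbatim}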
\subsection{Directed hamiltonian paths in path-schemes; another criterion on unique
$k$-determinability}\label{dirpaths}
Let $V=\{1,2,\ldots, n\}$ and $M$ be a subset of $V$. A {\em
path-scheme} $P(n,M)$ is a graph $G=(V,E)$, where the edge set $E$
is $\{(x,y)\ |\ |x-y| \in M \}$. See Figure~\ref{il} for an example
of a path-scheme.
\begin{figure}
\caption{The path-scheme $P(6,\{2,4\})$.}
\label{il}
\end{figure}
Path-schemes appeared in the literature, for example, in connection
with counting independent sets (see~\cite{Kit2}). However, we will
be interested in path-schemes having $M=\{1,2,\ldots, k-1\}$ for
some $k$ (the number of independent sets for such $M$ in case of $n$
nodes is given by the $(n+k)$-th $k$-{\em generalized Fibonacci
number}). Let $\mathcal{G}_{k,n}=P(n,\{1,2,\ldots, k-1\})$, where
$k\leq n$. Clearly, $\mathcal{G}_{k,n}$ is a subgraph of
$\mathcal{G}_{n,n}$.
Any permutation $\pi=\pi_1\pi_2\ldots\pi_n$ determines uniquely a
directed hamiltonian path in $\mathcal{G}_{n,n}$ starting with
$\pi_1$, then going to $\pi_2$, then to $\pi_3$ and so on. The
reverse is also true: given a directed hamiltonian path in
$\mathcal{G}_{n,n}$ we can easily construct the permutation
corresponding to it.
\begin{theorem}\label{cr03}{\rm[}Second criterion on unique $k$-determinability{\rm]}
Let $\Phi$ be a map that sends a uniquely $k$-determined
$n$-permutation $\pi$ to the directed hamiltonian path in
$\mathcal{G}_{n,n}$ corresponding to $\pi^{-1}$. $\Phi$ is a
bijection between the set of all uniquely $k$-determined
$n$-permutations and the set of all directed hamiltonian paths in
$\mathcal{G}_{k,n}$.
\end{theorem}
\begin{proof} Let $\pi$ be a uniquely $k$-determined $n$-permutation.
We claim that the directed hamiltonian path in $\mathcal{G}_{n,n}$
corresponding to $\pi^{-1}$ is actually a directed hamiltonian path
in $\mathcal{G}_{k,n}$. Indeed, suppose the elements $x$ and $x+1$,
$1\leq x <n$, are located in $\pi$ in positions $i$ and $j$
respectively. According to Theorem~\ref{cr01}, $|j-i|\leq k-1$. Now,
$ij$ is a factor in $\pi^{-1}$, and the directed hamiltonian path
corresponding to $\pi^{-1}$ contains the arc from $i$ to $j$, which
is an arc in $\mathcal{G}_{k,n}$. Obviously, $\Phi$ is injective.
Also, it is easy to see how to find the inverse to $\Phi$ mapping a
directed hamiltonian path in $\mathcal{G}_{k,n}$ to a permutation
that, due to Theorem~\ref{cr01}, is uniquely $k$-determined.
\end{proof}
Theorem~\ref{cr03} suggests a quick check of whether an
$n$-permutation $\pi$ is uniquely $k$-determined or not: one simply
needs to consider the $n-1$ differences of adjacent elements in
$\pi^{-1}$ and check whether at least one of those differences
exceeds~$k-1$. Moreover, one can find the number of uniquely
$k$-determined $n$-permutations by listing all $n$-permutations and checking for
each of them the differences of consecutive elements of the inverse in the manner
described above. Using this approach, one can run a computer program
to get the number of uniquely $k$-determined $n$-permutations for
initial values of $k$ and $n$, which we record in Table~\ref{data}.
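A brute-force count along these lines can be sketched in Python as follows (illustration only; it checks the differences of consecutive entries of $\pi^{-1}$ and is feasible only for small $n$).
\begin{verbatim}
from itertools import permutations

def count_uniquely_k_determined(n, k):
    count = 0
    for pi in permutations(range(1, n + 1)):
        inv = [0] * n
        for index, value in enumerate(pi):
            inv[value - 1] = index + 1      # inverse permutation, 1-based
        if all(abs(inv[i + 1] - inv[i]) <= k - 1 for i in range(n - 1)):
            count += 1
    return count

# Should reproduce the k = 3 row of the table below for small n:
# [count_uniquely_k_determined(n, 3) for n in range(1, 8)]
#   -> [1, 2, 6, 12, 20, 34, 56]
\end{verbatim}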
\begin{table}[ht]
\begin{center}
\begin{tabular}{|l|l|}
\hline
$k=2$ & $1,\ 2,\ 2,\ 2,\ 2,\ 2,\ 2,\ 2,\
2,\ldots$\\
\hline
$k=3$ & $1,\ 2,\ 6,\ 12,\ 20,\ 34,\ 56,\ 88,\
136,\ldots$\\
\hline
$k=4$ & $1,\ 2,\ 6,\ 24,\ 72,\ 180,\ 428,\ 1042,\
2512,\ldots$ \\
\hline
$k=5$ & $1,\ 2,\ 6,\ 24,\ 120,\ 480,\ 1632,\ 5124,\
15860,\ldots$\\
\hline
$k=6$ & $1,\ 2,\ 6,\ 24,\ 120,\ 720,\ 3600,\ 15600,\
61872,\ldots$\\
\hline
$k=7$ & $1,\ 2,\ 6,\ 24,\ 120,\ 720,\ 5040,\ 30240,\
159840,\ldots$\\
\hline
$k=8$ & $1,\ 2,\ 6,\ 24,\ 120,\ 720,\ 5040,\ 40320,\
282240,\ldots$\\
\hline
\end{tabular}
\caption{The initial values for the number of uniquely $k$-determined
$n$-permutations.} \label{data}
\end{center}
\end{table}
It is remarkable that the sequence corresponding to the case $k=3$
in Table~\ref{data} appears in~\cite[A003274]{Sloane}, where we
learn that the inverses to the uniquely 3-determined permutations
are called the {\em key permutations} and they appear
in~\cite{Page}. Another sequence appearing in Table~\ref{data} is
\cite[A003274]{Sloane}: 0, 2, 12, 72, 480, 3600, .... In our case,
this is the number of uniquely $n$-determined $(n+1)$-permutations,
$n\geq 1$; in \cite{Sloane}, this is the number of
$(n+1)$-permutations that have 2 predetermined elements non-adjacent
(e.g., for $n=2$, the permutations with say 1 and 2 non-adjacent are
132 and 231). It is clear that both of the last objects are counted
by $n!(n-1)$. Indeed, to create a uniquely $n$-determined
$(n+1)$-permutation, we take any permutation (there are $n!$
choices) and extend it to the right by one element making sure that
the extension is not adjacent to the leftmost element of the
permutation (there are $n-1$ possibilities; here we use
Theorem~\ref{cr01}). On the other hand, to create a ``good''
permutation appearing in \cite{Sloane}, we take any of $n!$
permutations, and insert one of the predetermined elements into any
position not adjacent to the other predetermined element (there are
$(n-1)$ choices). A bijection between the sets of permutations above
is given by the following: Suppose $a$ and $b$ are the predetermined
elements in $\pi=\pi_1\ldots\pi_n$, and $\pi_i=a$ and $\pi_j=b$. We
build the permutation $\pi'$ corresponding to $\pi$ by setting
$\pi'_1=i$, $\pi'_n=j$, and $\pi_2'\ldots\pi_{n-1}'$ is obtained
from $\pi$ by first removing $a$ and $b$, and then, in what is left,
by replacing $i$ by $a$ and $j$ by $b$. For example, assuming that 2
and 4 are the determined elements, to
$13\underline{4}5\underline{2}6$ there corresponds
$\underline{5}1426\underline{3}$ which is a uniquely 5-determined
6-permutation.
Another application of Theorem~\ref{cr03} is finding lower and upper
bounds for the number $A_{k,n}$ of uniquely $k$-determined
$n$-permutations.
\begin{theorem}\label{number} We have $2((k-1)!)^{\lfloor n/k\rfloor}<A_{k,n}<2(2(k-1))^n$. \end{theorem}
\begin{proof}
According to Theorem~\ref{cr03}, we can estimate the number of
directed hamiltonian paths in $\mathcal{G}_{k,n}$ to get the
desired. This number is two times the number of (non-directed)
hamiltonian paths in $\mathcal{G}_{k,n}$, which is bounded from
above by $(2(k-1))^n$, since $2(k-1)$ is the maximum degree of
$\mathcal{G}_{k,n}$ (for $n\geq 2k-1$). So, $A_{k,n}<2(2(k-1))^n$.
To see that $A_{k,n}>2((k-1)!)^{\lfloor n/k\rfloor}$, consider
hamiltonian paths starting at node 1 and {\em not} going to any of
the nodes $i$, $i\geq k+1$ unless a path goes through {\em all} the
nodes $1,2,\ldots,k$. Going through all the first $k$ nodes can be
arranged in $(k-1)!$ different ways. After covering the first $k$
nodes we send the path under consideration to node $k+1$, which can
be done since we deal with $\mathcal{G}_{k,n}$. Then the path covers
{\em all}, but not any other, of the $k-1$ nodes $k+2,k+3,\ldots,
2k$ (this can be done in $(k-1)!$ ways) and comes to node $2k+1$,
etc. That is, we subdivide the nodes of $\mathcal{G}_{k,n}$ into
groups of $k$ nodes and go through all the nodes of a group before
proceeding with the nodes of the group to the right of it. The
number of such paths can be estimated from below by
$((k-1)!)^{\lfloor n/k\rfloor}$. Clearly, we get the desired result
after multiplying the last formula by 2 (any hamiltonian path can be
oriented in two ways).
\end{proof}
\subsection{Prohibitions giving unique
$k$-determinability}\label{proh}
The set of uniquely $k$-determined $n$-permutations can be described
by the language of prohibited patterns $\mathcal{L}_{k,n}$ as
follows. Using Theorem~\ref{cr01}, we can describe the set of
uniquely $k$-determined $n$-permutations by prohibiting patterns of
the forms $xX(x+1)$ and $(x+1)Xx$, where $X$ is a permutation on
$\{1,2,\ldots,|X|+2\}-\{x,x+1\}$ ($|X|$ is the number of elements in
$X$), the length of $X$ is at least $k-1$, and $1\leq x<n$. We
collect all such patterns in the set $\mathcal{L}_{k,n}$; also, let
$\mathcal{L}_k=\cup_{n\geq 0}\mathcal{L}_{k,n}$.
A prohibited pattern $X=aYb$ from $\mathcal{L}_k$, where $a$ and $b$
are some consecutive elements and $Y$ is a (possibly empty) word, is
called {\em irreducible} if the patterns of $Yb$ and $aY$ are not
prohibited, in other words, if the patterns of $Yb$ and $aY$ are
uniquely $k$-determined permutations. Without loss of generality,
we can assume that $\mathcal{L}_k$ consists only of irreducible
prohibited patterns.
\begin{theorem}\label{irr} Suppose $k$ is fixed. The number of (irreducible) prohibitions in $\mathcal{L}_k$ is
finite. Moreover, the longest prohibited patterns in $\mathcal{L}_k$
are of length $2k-1$.\end{theorem}
\begin{proof} Suppose that a pattern $P=xX(x+1)$ of length $2k$ or larger belongs to $\mathcal{L}_k$ (the case
$P=(x+1)Xx$ can be considered in the same way). Then $X$ obviously
contains either $x-1$ or $x+2$, separated from either $x$ or $x+1$ by at
least $k-1$ elements. In either case we clearly get a prohibited
pattern $P'=yY(y+1)$ or $P'=(y+1)Yy$ which is a proper factor of
$P$, contradicting the irreducibility of $P$.
\end{proof}
Theorem~\ref{irr} allows us to use the transfer matrix method to
find the number of uniquely $k$-determined permutations. Indeed, we
can consider the graph $\mathcal{P}_{2k-1}(\mathcal{L}_k)$, which is
the graph $\mathcal{P}_{2k-1}$ of patterns overlaps without nodes
containing prohibited patterns as factors. Then the number $A_{k,n}$
of uniquely $k$-determined $n$-permutations is equal to the number of
paths of length $n-2k+1$ in the graph, which can be found using the
transfer matrix method~\cite[Thm. 4.7.2]{Stan}\footnote{In fact, one
can use a smaller graph, namely $\mathcal{P}_{2k-2}(\mathcal{L}_k)$,
in which we mark arcs by corresponding permutations of length
$2k-1$; then we remove arcs containing prohibitions and use the
transfer matrix method. In this case, to an $n$-permutation there
corresponds a path of length $n-2k+2$. See Figure~\ref{il01} for
such a graph in the case $k=3$.}. In particular, the method makes
the following statement true.
\begin{theorem} The generating function $A_k(x)=\sum_{n\geq 0}A_{k,n}x^n$ for the number of uniquely $k$-determined
permutations is rational. \end{theorem}
A permutation is called {\em crucial} with respect to a given set of
prohibitions, if it does not contain any prohibitions, but adjoining
any element to the right of it leads to a permutation containing a
prohibition. In our case, an $n$-permutation is crucial if it is
uniquely $k$-determined, but adjoining any element to the right of
it, and thus creating an $(n+1)$-permutation, leads to a {\em
non}-uniquely $k$-determined permutation\footnote{As it is mentioned
in the introduction, crucial {\em words} are studied, for example,
in~\cite{EvdKit}. We define crucial permutations with respect to a
set of prohibited patterns in a similar way. However, as
Theorem~\ref{nonexist} shows, there are no crucial permutations with
respect to $\mathcal{L}_k$.}. If such a $\pi$ exists, then the path
in $\mathcal{P}_{2k-1}(\mathcal{L}_k)$ corresponding to $\pi$ ends
up in a sink. However, the following theorem shows that there are no
crucial permutations with respect to the set of prohibitions
$\mathcal{L}_k$, thus any path in
$\mathcal{P}_{2k-1}(\mathcal{L}_k)$ can always be continued.
\begin{theorem}\label{nonexist} There do not exist crucial permutations with respect to $\mathcal{L}_k$. \end{theorem}
\begin{proof} If $k=2$ then only the monotone permutations are uniquely $k$-determined, and we can always
extend a decreasing permutation to the right by a new least element,
and an increasing permutation by a new largest element.
Suppose $k\geq 3$ and let $Xx$ be an $n$-permutation avoiding
$\mathcal{L}_k$, that is, $Xx$ is uniquely $k$-determined. If $x=1$
then $Xx$ can be extended to the right by 1 without creating a
prohibition; if $x=n$ then $Xx$ can be extended to the right by
$n+1$ without creating a prohibition. Otherwise, due to
Theorem~\ref{cr01}, both $x-1$ and $x+1$ must be among the $k$
leftmost elements of $Xx$. In particular, at least one of them, say
$y$, is among the $k-1$ leftmost elements of $Xx$. If $y=x-1$, we
extend $Xx$ by $x$ (the ``old'' $x$ becomes $(x+1)$); if $y=x+1$, we
extend $Xx$ by $x+1$ (the ``old'' $x+1$ becomes $(x+2)$). In either
of the cases considered above, Theorem~\ref{cr01} guarantees that no
prohibitions will be created. So, $Xx$ can be extended to the right
to form a uniquely $k$-determined $(n+1)$-permutation, and thus $Xx$
is not a crucial $n$-permutation.
\end{proof}
\subsection{The case $k=3$}\label{k=3}
In this subsection we take a closer look at the graph
$\mathcal{P}_{4}(\mathcal{L}_3)$ whose paths give all uniquely
$3$-determined permutations (we read marked arcs of a path to form
the permutation corresponding to it). It turns out that
$\mathcal{P}_{4}(\mathcal{L}_3)$ has a nice structure (see
Figure~\ref{il01}).
Suppose $w'$ denotes the {\em complement} to an $n$-permutation
$w=w_1w_2\cdots w_n$. That is, $w'_i=n-w_i+1$ for $1\leq i\leq n$.
$\mathcal{P}_{4}(\mathcal{L}_3)$ has the following 12 nodes (those
are all uniquely $3$-determined $4$-permutations):\\
\begin{center}
\begin{tabular}{ll} $a=1234$ & $a'=4321$ \\ $b=1324$ & $b'=4231$ \\ $c=1243$ & $c'=4312$ \\ $d=3421$ & $d'=2134$ \\
$e=1423$ & $e'=4132$ \\ $f=3241$ & $f'=2314$ \end{tabular}
\end{center}
In Figure~\ref{il01} we draw 20 arcs corresponding to the 20
uniquely $3$-determined $5$-permutations. Notice that
$\mathcal{P}_{4}(\mathcal{L}_3)$ is not strongly connected: for
example, there is no directed path from $c$ to $f$.
\setlength{\unitlength}{3mm}
\begin{figure}
\caption{Graph $\mathcal{P}_{4}(\mathcal{L}_3)$.}
\label{il01}
\end{figure}
To find the generating function $A_3(x)=\sum_{n\geq 0}A_{3,n}x^n$
for the number of uniquely 3-determined permutations one can build a
$12\times 12$ matrix corresponding to $\mathcal{P}_{4}(\mathcal{L}_3)$ and
proceed with the transfer matrix method. However, we do not do
that since, as mentioned in Subsection~\ref{dirpaths}, the
generating function for these numbers is
known~\cite[A003274]{Sloane}:
$$A_3(x)=\frac{1-2x+2x^2+x^3-x^5+x^6}{(1-x-x^3)(1-x)^2}.$$
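As a quick sanity check (not part of the original argument), one can expand this rational function and compare its coefficients with a direct count as sketched above; a minimal Python computation using exact rational arithmetic:
\begin{verbatim}
from fractions import Fraction

def series_coefficients(num, den, terms):
    # Taylor coefficients of num(x)/den(x); coefficient lists are
    # given in increasing degree, and den[0] must be nonzero.
    coeffs = []
    for n in range(terms):
        c = Fraction(num[n]) if n < len(num) else Fraction(0)
        c -= sum(Fraction(den[j]) * coeffs[n - j]
                 for j in range(1, min(n, len(den) - 1) + 1))
        coeffs.append(c / den[0])
    return coeffs

numerator = [1, -2, 2, 1, 0, -1, 1]
denominator = [1, -3, 3, -2, 2, -1]   # (1 - x - x^3)(1 - x)^2 expanded
print([int(c) for c in series_coefficients(numerator, denominator, 10)])
# 1, 1, 2, 6, 12, 20, 34, 56, 88, 136 for n = 0, 1, 2, ...
\end{verbatim}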
\section{Open problems}\label{sec3}
It is clear that any $n$-permutation is uniquely $n$-determined,
whereas for $n\geq 2$ no $n$-permutation is uniquely $1$-determined.
Moreover, for any $n\geq 2$ there are exactly two uniquely
$2$-determined permutations, namely the monotone permutations. For a
permutation $\pi$, we define its {\em index} $IR(\pi)$ {\em of
reconstructibility} to be the minimal integer $k$ such that $\pi$ is
uniquely $k$-determined.
\begin{problem} Describe the distribution of $IR(\pi)$ among all $n$-permutations. \end{problem}
\begin{problem} Study the set of uniquely $k$-determined permutations in the case when a set of
nodes is removed from $\mathcal{P}_k$, that is, when some of
patterns of length $k$ are prohibited. \end{problem}
An $n$-permutation $\pi$ is $m$-$k$-{\em determined}, $m,k\geq 1$,
if there are exactly $m$ (different) $n$-permutations having the
same path in $\mathcal{P}_k$ as $\pi$ has. In particular, the
uniquely $k$-determined permutations correspond to the case $m=1$.
\begin{problem}\label{pr03} Find the number of $m$-$k$-determined $n$-permutations. \end{problem}
Problem~\ref{pr03} is directly related to finding the number of {\em
linear extensions} of a {\em poset}. Indeed, to any path $w$ in
$\mathcal{P}_k$ there naturally corresponds a poset $\mathcal{W}$.
In particular, any factor of length $k$ in $w$ consists of
comparable to each other elements in $\mathcal{W}$. For example, if
$k=3$ and $w=134265$ then $\mathcal{W}$ is the poset in
Figure~\ref{il03}.
\begin{figure}
\caption{The poset associated with the path $w=134265$ in $\mathcal{P}_3$.}
\label{il03}
\end{figure}
If all the elements are comparable to each other in $w$, then
$\mathcal{W}$ is a linear order and $w$ gives a uniquely
$k$-determined permutation. If $\mathcal{W}$ contains exactly one
pair of incomparable elements, then $w$ gives (two) 2-$k$-determined
permutations. In the example in Figure~\ref{il03}, there are 4 pairs
of incomparable elements, (1,2), (1,5), (3,5), and (4,5), and this
poset can be extended to a linear order in 7 different ways giving
(seven) 7-3-determined permutations.
\begin{problem} Which posets on $n$ elements appear while considering paths (of length $n-k+1$) in
$\mathcal{P}_k$? Give a classification of the posets (different from
the classification by the number of pairs of incomparable elements).
\end{problem}
\begin{problem} How many linear extensions can a poset (associated to a path in
$\mathcal{P}_k$) on $n$ elements with $t$ pairs of incomparable
elements have? \end{problem}
\begin{problem} Describe the structure of $\mathcal{L}_k$
(see Subsection~\ref{proh} for definitions) that consists of
irreducible prohibitions. Is there a nice way to generate
$\mathcal{L}_k$? How many elements does $\mathcal{L}_k$
have?\end{problem}
\end{document}
\begin{document}
\setlength{\abovedisplayskip}{4pt}
\setlength{\belowdisplayskip}{4pt}
\parindent=0pt
\title{Irreducible constituents of minimal degree in supercharacters of the finite unitriangular groups}
\author{Richard Dipper, Qiong Guo\\ \\Institut f\"{u}r Algebra und Zahlentheorie\\ Universit\"{a}t Stuttgart, 70569 Stuttgart, Germany
\\ \scriptsize{E-mail: [email protected], [email protected]}
\setcounter{footnote}{-1}\footnote{\emph{Date:} December 12th, 2013.}
\setcounter{footnote}{-1}\footnote{\emph{2010 Mathematics Subject Classification.} Primary 20C15, 20D15. Secondary 20C33, 20D20.}
\setcounter{footnote}{-1}\footnote{\emph{Key words and phrases.} Unitriangular group, supercharacter, irreducible character.}}
\date{}
\begin{abstract}
Let $q$ be a prime power and $U$ the group of lower unitriangular $n\times n$-matrices over the field with $q$ elements, for some natural number $n$. We give a lower bound for the degrees of the irreducible constituents of the Andr\'{e}-Yan supercharacters and classify the supercharacters having constituents whose degrees assume this lower bound. Moreover we show that the number of distinct irreducible characters of $U$ meeting this condition is a polynomial in $(q-1)$ with nonnegative integral coefficients, and we exhibit monomial sources for these characters.
\end{abstract}
\section{Introduction}
Let $p$ be a prime, $q$ a power of $p$, $\mathbb F_q$ the finite field with $q$ elements and $U=U_n(q)$ ($n\in \mathbb N$) the group of lower unitriangular $(n\times n)$-matrices with entries in $\mathbb F_q$. Thus $U$ is a $p$-Sylow subgroup of the full general linear group $GL_n(q)$. It is known that determining the conjugacy classes of $U$ for all $n$ and $q$ is a wild problem. Even finding their number as a function $C(n,q)$ of $q$ and $n$, and hence the number of distinct irreducible complex characters, is still an open problem. A longstanding conjecture attributed to G. Higman \cite{higman} states that $C(n,q)$ should be a polynomial in $q$ with integral coefficients depending only on $n$, not on $q$. G. Lehrer refined this by conjecturing that the number of pairwise distinct irreducible characters of $U$ of a fixed degree $q^c, c\in \mathbb Z_{\geqslant 0}$, should be a polynomial in $q$ with integral coefficients \cite{lehrer}, which more recently was refined once more by Isaacs, who conjectured that these polynomials should actually be polynomials in $(q-1)$ with nonnegative integral coefficients \cite{Isaacs2}. This, of course, uses the fact that the degrees of the complex irreducible characters of the $p$-group $U$ are powers of $q$, not just of $p$, by a result of Huppert \cite{huppert}, which is actually true for $\mathbb F_q$-algebra groups by a theorem of Isaacs \cite{Isaacsq}.
The Andr\'{e}-Yan supercharacter theory \cite{andre1}, \cite{yan} provides an approximation to the problem of classifying the irreducible complex characters of $U$. A supercharacter theory for a finite group $G$ consists of a set partition of the collection of conjugacy classes, the unions of the parts called superclasses, and a set of pairwise orthogonal complex characters, called supercharacters, such that every irreducible complex character of $G$ occurs as a constituent of precisely one supercharacter. Moreover, superclasses and supercharacters are in 1-1 correspondence and supercharacters are constant on superclasses.
\smallskip
An $\mathbb F_q$-algebra group $G$ is of the form $G=\{1+x\,|\, x\in J(A)\}$ for some finite dimensional $\mathbb F_q$-algebra $A$ with Jacobson radical $J(A)$. Taking for $A$ the $\mathbb F_q$-algebra of lower triangular matrices, $V=J(A)$ is the nilpotent $\mathbb F_q$-algebra of strictly lower triangular matrices and $U=\{1+x\,|\,x\in J(A)\}$ is indeed an $\mathbb F_q$-algebra group. Moreover $U$ acts on $V=J(A)$ by left and right multiplication and hence on the set of linear complex characters $\hat V$ of the additive group $(V, +)$. The map $f: U\rightarrow V: u\mapsto u-1\in V$ induces a 1-cocycle $\alpha: \hat V\times U\longrightarrow \mathbb C ^*: (\chi, u)\mapsto \chi(u^{-1}-1)$ providing a right monomial action of $U$ on $\mathbb C \hat V$ with monomial basis $\hat V$. There is a similar left hand side construction for a monomial action of $U$ on $\hat V$ from the left. With this action from both sides $\mathbb C \hat V$ becomes a $U$-$U$-bimodule which is isomorphic to the regular bi-representation $_{\mathbb C U} \mathbb C U _{\mathbb C U}$. Each biorbit of $\hat V$ under the action of $U$ is a union of right orbits. It turns out that all right orbits in a biorbit induce isomorphic right modules, and any two right orbits contained in different biorbits afford orthogonal characters. The different (and hence orthogonal) characters afforded by all the right orbits are the Andr\'{e}-Yan supercharacters.
These can be described combinatorially. We show that there is, for each right orbit $\mathcal O$ of $\hat V$ under the right action of $U$, a lower bound for the degrees of the irreducible constituents occurring in the representation of $\mathbb C U$ on $\mathbb C \mathcal O$. We say the irreducible $\mathbb C U$-module $S$ has minimal dimension, if $\dim_\mathbb C S$ assumes this lower bound in the right orbit module of which it is an irreducible constituent.
\smallskip
Inspecting the endomorphism rings of right $U$-orbit modules we obtain a completely combinatorial necessary and sufficient condition for supercharacters to have irreducible characters of minimal degree as constituents. This is our first main result.
Moreover we show that those have multiplicity one in their supercharacters and that there are $q^c$, $c\in \mathbb N$, many irreducible constituents of minimal degree in such supercharacters, where $c\in \mathbb N$ is determined combinatorially. As a consequence we obtain that the number of distinct irreducible characters of $U$ of minimal degree in their supercharacters (of degree $q^d$, $d\in \mathbb Z_{\geqslant 0}$ fixed) is a polynomial in $(q-1)$ with nonnegative integral coefficients.
\smallskip
By a theorem of Halasi \cite{halasi}, every irreducible character $\mu$ of $U$ is induced from a linear character $\lambda: H\rightarrow \mathbb C ^*=\mathbb C\setminus \{0\}$ of some $\mathbb F_q$-algebra subgroup $H$ of $U$. We call the pair $(H, \lambda)$ a monomial source of $\mu\in \Irr(U)$. Our second main result determines monomial sources of irreducible characters of $U$ of minimal degree in their supercharacters (\ref{6.11}). These are in fact linear characters of certain pattern subgroups of $U$, and hence irreducible characters of minimal degree in their supercharacters are well-induced in the sense of Evseev \cite{Evseev}.
\smallskip
In her doctoral thesis \cite{guo} the second author determined the irreducible $U$-constituents of the permutation module of $GL_n(q)$ on the cosets of a maximal parabolic subgroup of $GL_n(q)$. In a forthcoming paper we shall show that these irreducible $\mathbb C U$-modules are precisely the irreducible constituents of minimal dimension in a certain combinatorially determined subclass of $U$-orbit modules having irreducible constituents of minimal degree.
We now fix some notation which is used throughout this paper.
We identify the set
$\Phi=\{(i,j)\,|\,1\leqslant i, j \leqslant n, i\not=j\}$ with
the standard root system of $G$ where
$\Phi^+=\{(i,j)\in \Phi \,|\,i>j\}$, $\,\Phi^-=\{(i,j)\in \Phi
\,|\,i<j\}$ are the positive respectively negative roots with
respect to the basis $\Delta=\{(i+1,i)\in
\Phi^+\,|\,1\leqslant i \leqslant n-1\}$ of $\Phi$.
A subset $J$ of $\Phi$ is {\bf closed} if $(i,j),(j,k)\in J, (i,k)\in \Phi$ implies
$(i,k)\in J.$
For $1\leqslant i, j \leqslant n$ let $\epsilon_{ij}$ be the
$n\times n$-matrix $g=(g_{ij})$ over $\mathbb F_q$, with $g_{ij}=1$ and
$g_{kl}=0$ for all $1\leqslant k,l \leqslant n$ with
$(k,l)\not=(i,j).$ Thus $\{\epsilon_{ij}\,|\, 1\leqslant i, j
\leqslant n \}$ is the natural basis
of the $\mathbb F_q$-algebra $M_n(\mathbb F_q)$ of $n\times n$-matrices with
entries in $\mathbb F_q.$
For $1\leqslant i, j \leqslant n,$\, $i\not=j$ and $\alpha\in
\mathbb F_q$, let $x_{ij}(\alpha)=E_n+\alpha \epsilon_{ij},$ where $E_n$
is the $n\times n$-identity matrix. Then
$X_{ij}=\{x_{ij}(\alpha)\,|\,\alpha\in \mathbb F_q\}$ is the {\bf root
subgroup} of $G$ associated with
the root $(i,j)\in \Phi,$ and is isomorphic to the additive
group $(\mathbb F_q,+)$ of the underlying field $\mathbb F_q$, hence is in
particular abelian. Moreover $U=\langle
x_{ij}(\alpha)\,|\,1\leqslant j<i\leqslant n, \, \alpha\in
\mathbb F_q\rangle$ is the unitriangular subgroup of $G=GL_n(q)$
consisting of all lower triangular matrices with ones on the
diagonal. It is well known that for a closed subset
$J$ of $\Phi^+$, the set
$
U_J=\{u\in U\,|\,u_{ij}=0,\,\forall\, (i,j)\notin J\}$ is the
subgroup of $U$ generated by $X_{kl},\, (k,l)\in J$ and if we choose any linear ordering on $J$ then
$U_J=\{\prod_{(i,j)\in
J}x_{ij}(\alpha_{ij})\,|\,\alpha_{ij}\in \mathbb F_q\}$, where the
products are given in the fixed linear ordering.
\section{$U$-Supercharacters}
In \cite{yan} Yan constructed a basis of $\mathbb C U$, called the Fourier basis, such that $U$ acts monomially (from both the left and the right) on it. We shall give here a brief overview of Yan's construction, setting it up in a notation more suitable for our work here than the original one used by Yan. We shall use an approach introduced by Markus Jedlitschky in his thesis \cite{markus} which produces Yan's Fourier basis. We begin with a very general setting.
For the moment let $G$ be an arbitrary group and $(V,+)$ be an abelian group on which $G$ acts as a group of automorphisms, the action being denoted by right multiplication. Let $K$ be a field. Then $G$ acts on $K^V$, the set of functions from $V$ to $K$, by
\begin{equation}\label{2.1}
(\tau\ldotp g)(A)=\tau(A g^{-1}) \text{ for } \tau\in K^V, g\in G, A\in V.
\end{equation}
In particular, the subset $\hat V=\mathcal Hom(V, K^*)$ of $K^V$ consisting of the linear characters of $V$ in $K$ is $G$-invariant, since for $\chi\in \hat V$, $A, B\in V$ and $g\in G$ we have
\begin{eqnarray*}
(\chi\ldotp g)(A+B)&=&\chi((A+B)g^{-1})=\chi(Ag^{-1}+Bg^{-1})\\
&=& \chi(Ag^{-1})\chi(Bg^{-1})=(\chi\ldotp g)(A)\, (\chi\ldotp g)(B),
\end{eqnarray*}
proving that $\chi\ldotp g$ is again a linear character of $V$. Suppose now that $f: G\longrightarrow V$ is a (right sided) 1-cocycle, i.e. we have
\begin{equation}\label{cocycle}
f(xg)=f(x)g+f(g) \quad\forall\, x,g\in G.
\end{equation}
We define $\alpha: \hat V \times G \longrightarrow K^*: (\chi, g)\mapsto \chi(f(g^{-1}))$ for $\chi\in \hat V, g\in G$; then the following holds:
\begin{Theorem}[Jedlitschky \cite{markus}]\label{jed}
Let $K\hat V$ be the $K$-vector space with basis $\hat V$. Then $K \hat V$ becomes a $KG$-module with monomial basis $\hat V$, where the new action of $G$ on $\hat V$, denoted by ``$*$'', is given as
$$
\chi * g=\alpha(\chi,g)\, \chi\ldotp g=\chi(f(g^{-1}))\,\chi \ldotp g
$$
for $\chi\in \hat V, g\in G$.
\end{Theorem}
The fact that $f$ satisfies (\ref{cocycle}) ensures that the $*$-action on $K \hat V$ is indeed compatible with the multiplication in $G$, that is, we have
$$
(\chi* g)* h=\chi *(gh) \quad \forall\, \chi\in \hat V,\ g, h\in G.
$$
For simplicity, we replace the $*$ in \ref{jed} by the standard notation and write $\chi g$ instead of $\chi* g$. Obviously, if $G$ acts on $V$ from the left as a group of automorphisms, replacing $f$ by a left 1-cocycle $\tilde f: G\longrightarrow V$ satisfying $\tilde f(gx)=g\tilde f(x)+\tilde f(g)$, we can define analogously the left $KG$-module $K\hat V$ with monomial basis $\hat V$. Moreover, if $G$ acts on $V$ from both sides such that $(gA)h=g(Ah)$ for all $g, h\in G$ and $A\in V$, and if $f$ is both a right and a left 1-cocycle, then $K\hat V$ becomes a $KG$-bimodule with monomial action of $G$ on the basis $\hat V$ from both sides. $K\hat V$ decomposes into a direct sum of $KG$-modules (right, left, or two-sided) corresponding to the orbits of $G$ on $\hat V$ under the dot permutation action ``$\ldotp$'' in (\ref{2.1}).
We apply this to the special case where $G=U=U_n(q)$ is the group of lower unitriangular $n\times n$-matrices with entries in the field $\mathbb F_q$, $q$ a prime power. Let $V$ be the set of nilpotent strictly lower triangular $n\times n$-matrices over $\mathbb F_q$, thus $V=\{u-1\,|\,u\in U\}$. Then $V$ is the Lie algebra of $U$ and in particular an abelian group under addition of matrices. $U$ acts on $V$ by right and left multiplication as a group of automorphisms, and both actions commute by the associativity of matrix multiplication. Henceforth we take $K=\mathbb C$ and choose once and for all a non-trivial character $\theta:(\mathbb F_q,+)\longrightarrow \mathbb C^{*}$. Moreover, $V^*=\mathcal Hom(V, \mathbb F_q)$ has a basis given by the coordinate functions $\xi_{ij}: V\longrightarrow \mathbb F_q: A\mapsto A_{ij}\in \mathbb F_q$, where $A_{ij}$ denotes the entry of the matrix $A$ at position $(i,j)$, $1\leqslant i,j \leqslant n$. For a matrix $B\in V$, we define the linear $\mathbb C$-character $\chi_{_B}\in \hat V=\mathcal Hom((V,+),\mathbb C^*)$ to be
$$
\chi_{_B}=\theta\circ \bigl(\sum_{i,j} B_{ij}\xi_{ij}\bigr).
$$
Then $\hat V=\{\chi_{_B}\,|\,B\in V\}$.
Now consider the map $f: U\longrightarrow V: u\mapsto u-1$. It is easily seen that $f$ is a two sided 1-cocycle from $U$ to $V$, thus we may apply Theorem \ref{jed} to turn the $\mathbb C$-space $\mathbb C \hat V$ into a $\mathbb C U$-bimodule, where $U$ acts monomially on both sides on the basis $\hat V$ of $\mathbb C\hat V$.
Indeed the $\mathbb C U$-bimodule $\mathbb C \hat V$ is isomorphic to the regular $\mathbb C U$-bimodule $_{ \mathbb C U}\mathbb C U_{ \mathbb C U}$. This can be shown using the fact that $f: U\longrightarrow V: u\mapsto u-1$ is a bijection. Moreover, since $V$ is a finite group, the $\mathbb C$-vector space $\mathbb C^V$ of functions from $V$ to $\mathbb C$ is isomorphic to the group algebra $\mathbb C V$ as a $\mathbb C V$-module. Here the action of $V$ on $\mathbb C^V$ from the right is given by (\ref{2.1}), and similarly from the left by setting
$$
(A \ldotp \tau)(B)=\tau(-A+B) \text{ for } \tau\in \mathbb C^V,\ A, B\in V.
$$
(Recall that addition in the abelian group $V$ turns into multiplication in the group algebra $\mathbb C V$.) The isomorphism $\mathbb C^V\longrightarrow \mathbb C V$ is then given by evaluation of $\tau\in \mathbb C^V$:
$$
\tau\mapsto \sum_{A\in V} \tau(A) A.
$$
In particular, for $A\in V$, the linear character $\chi_{_A}\in \hat V \leqslant \mathbb C^V$ is mapped to
\begin{equation}
\chi_{_A }\mapsto \sum_{B\in V}\chi_{_A}(B) B =|V|e_{_{-A}}\in \mathbb C V,
\end{equation}
where $e_{_{-A}}$ is the primitive idempotent in $\mathbb C V$ associated with $-A\in V$. Thus \ref{jed} yields an action of $\mathbb C U$ on $\mathbb C V$ such that $U$ acts monomially on the basis $\{e_{_A}\,|\,A\in V\}$ of $\mathbb C V$.
Next we describe the action of $U$ on $\hat V$. Recall that for $A\in V, u\in U$, the linear character $\chi_{_A}\ldotp u \in \hat V$ is given by
\begin{equation}
( \chi_{_A} \ldotp u ) (B)=\chi_{_A}(Bu^{-1})\quad\forall\, B\in V.
\end{equation}
Since $\chi_{_A}\ldotp u\in \hat V$, there must be a $C\in V$ such that $\chi_{_A}\ldotp u=\chi_{_C}$. In order to describe $C$ for given $A$ and $u$ we let \,$\bar{}\,: M_{n\times n}(\mathbb F_q)\longrightarrow V: A \mapsto \overline{A}\in V$ be the natural projection. Recall that $\epsilon_{ij}\in M_{n\times n}(\mathbb F_q)$ denotes the $(i,j)$-th matrix unit. For $A\in M_{n\times n}(\mathbb F_q)$, we denote by $A_{ij}\in \mathbb F_q$ the entry of $A$ at position $(i,j)$, for $1\leqslant i, j\leqslant n$. Thus $A=\sum_{1\leqslant i, j\leqslant n}A_{ij}\epsilon_{ij}$ and $\overline A =\sum_{1\leqslant j<i\leqslant n}A_{ij}\epsilon_{ij}$. For $u\in U$ let $u^t$ denote the transposed matrix (an upper unitriangular matrix). Then we have the following:
\begin{Lemma}[Yan]\label{2.6}
Let $A\in V$ and $u\in U$. Then $\chi_{_A} \ldotp u=\chi_{_C}$, where $C=\overline{Au^{-t}}$, setting $u^{-t}=(u^{-1})^t$.
\end{Lemma}
By general theory the idempotent $e_{_A}$ in $KV$ affording the linear character $\chi_{_A}$ of $V$ is given as
$$e_{_A}=\frac{1}{|V|}\sum_{B\in V}\overline{\chi_{_A}(B)}B,$$
where $\bar{}\,: \mathbb C\rightarrow \mathbb C: z\mapsto \bar{z}$ denotes complex conjugation.
We write $[A]=e_{_{A}}$ and illustrate this idempotent by a triangle, omitting the superfluous zeros in the upper half of the matrix $A\in V$. For instance
\begin{center}
\begin{picture}(150, 50)
\put(0,20){$e_{_A}=$}
\put(30,0){\line(1,0){50}}
\put(30,0){\line(0,1){50}}
\put(30,50){\line(1,-1){50}}
\put(32,33){0}
\put(32,18){$\alpha$}
\put(32,3){$\beta$}
\put(47,18){$0$}
\put(47,3){$\gamma$}
\put(62,3){$0$}
\put(82,0){,}
\put(100,20) {$\alpha, \beta, \gamma\in \mathbb F_q$}
\end{picture}
\end{center}
denotes the idempotent $e_{_A}=[A]\in KV$ affording the linear character $\chi_{_A}\in \hat V$ with $$A=
\begin{pmatrix}
0 & 0 & 0\\
\alpha & 0&0 \\
\beta & \gamma &0
\end{pmatrix}\in V.$$
For $A\in V, u\in U$ we denote the matrix $\overline{Au^{-t}}\in V$ by $A\ldotp u$. This indeed defines a permutation action of $u$ on $V$. Moreover, using Theorem \ref{jed} we derive a monomial action of $u\in U$ on the idempotent basis $\{[A]\,|\,A\in V\}$ of $KV$, interpreting again linear characters of $V$ as elements of the group algebra $KV$:
\begin{Cor}\label{2.7}
Let $A\in V$ and $u\in U$. Then
$$[A]\, u=\chi_{_{A\ldotp u}}(u-1)\,[A\ldotp u].$$
\end{Cor}
For $1\leqslant j < i \leqslant n$ and $\alpha\in \mathbb F_q$ recall that $x_{ij}(\alpha)=1+\alpha \epsilon_{ij}\in U$ and that the root subgroup $X_{ij}=\{x_{ij}(\alpha)\,|\,\alpha\in \mathbb F_q\}$ is an abelian subgroup of $U$ isomorphic to $(\mathbb F_q,+)$. Moreover $x_{ij}(\alpha)$ acts on any matrix $A\in M_{n\times n}(\mathbb F_q)$ by the elementary column operation adding $\alpha$ times column $i$ to column $j$ of $A$. Now $x_{ij}(\alpha)^{-t}=x_{ji}(-\alpha)$, hence $A\ldotp x_{ij}(\alpha)$ is obtained from $A$ by adding $-\alpha$ times column $j$ to column $i$ (from left to right) and setting the nonzero entries of the resulting matrix at positions on or to the right of the diagonal to zero. We call this maneuver a ``truncated column operation'' (from left to right). Since every element of $U$ can be written uniquely as a product $u=\prod_{1\leqslant j < i \leqslant n}x_{ij}(\alpha_{ij})$ with $\alpha_{ij}\in \mathbb F_q$, where the product is taken in an arbitrary but fixed linear order of the indices $(i,j)$, the (permutation) action of $u\in U$ on $[A]$ for $A\in V$ can be described by the corresponding sequence of truncated column operations. Moreover
\begin{equation}\label{2.8}
\chi_{_{A\ldotp x_{ij}(\alpha)}}(x_{ij}(\alpha)-1)= \chi_{_{A\ldotp x_{ij}(\alpha)}}(\alpha \epsilon_{ij})=\theta(\alpha A_{ij}),
\end{equation}
since column $j$ coincides in $A$ and $A\ldotp x_{ij}(\alpha)$.
\begin{Cor}\label{2.9}
Let $1\leqslant j < i \leqslant n$, $\alpha\in \mathbb F_q$ and $A\in V$. Then
$$[A]\,x_{ij}(\alpha)=\theta(\alpha A_{ij})\,[A\ldotp x_{ij}(\alpha)].$$
\end{Cor}
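For instance, let $n=3$ and let $A\in V$ have entries $A_{21}=a$, $A_{31}=b$, $A_{32}=c$. For $x_{21}(\alpha)$ the truncated column operation adds $-\alpha$ times column $1$ to column $2$ and discards the resulting entry on the diagonal, so Corollary \ref{2.9} gives
$$
[A]\,x_{21}(\alpha)=\theta(\alpha a)\,\left[\begin{pmatrix} 0&0&0\\ a&0&0\\ b&c-\alpha b&0\end{pmatrix}\right].
$$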
Similarly, the left action of $u$ on the idempotent basis $\{[A]\,|\,A\in V\}$ of $KV$ can be described by sequences of truncated row operations from bottom to top, the coefficient in $\mathbb C$ being obtained similarly.
\begin{Remark}\label{2.10}
The basis $\{\chi_{_A}\,|\,A\in V\}$ of $\mathbb C \hat V$, mapped back into $\mathbb C U\cong \mathbb C^U$ by extending $f^{-1}: V \longrightarrow U: A\mapsto A+1\in U$ by linearity, is Yan's Fourier basis of $\mathbb C U$, dual to the basis $\{f^{-1}[A]\,|\,A\in V\}$ of $\mathbb C U$.
\end{Remark}
We set $\mathcal E=\{[A]\,|\,A\in V\}$. The orbits of $U$ acting on $\mathcal E$ can now be described combinatorially using \ref{2.9} and its left-handed analogue.
\begin{Defn}\label{2.11}
Let $1\leqslant j < i \leqslant n$. The arm $h_{ij}^a$ centred at $(i,j)$ consists of all positions $(i,k)\in \Phi^+$ strictly to the right of $(i,j)$, thus $h_{ij}^a=\{(i,k)\,|\,j<k<i\}$, and the hook leg $h_{ij}^l$ is the set of positions $(l,j)\in \Phi^+$ strictly above $(i,j)$, thus $h_{ij}^l=\{(l,j)\,|\,j<l<i\}$. Finally the hook $h_{ij}$ centred at $(i,j)$ is defined to be $h_{ij}=h_{ij}^a\cup h_{ij}^l\cup \{(i,j)\}$. This may be illustrated as
\begin{center}
\begin{picture}(120,130)
\put(0,25){\line(1,0){100}}
\put(0,25){\line(0,1){100}}
\put(0,125){\line(1,-1){100}}
\put(30,45){\circle*{4}}
\put(80,45){\circle*{4}}
\put(30,95){\circle*{4}}
\multiput(30,45)(7,0){7}{\line(1,0){4}}
\multiput(30,45)(0,7){7}{\line(0,1){5}}
\put(88,43){\footnotesize$i$}
\put(28,103){\footnotesize$j$}
\put(22,35){\footnotesize$(i,j)$}
\put(31,65){\line(1,1){30}}
\put(65,95){\footnotesize$h_{ij}^l$}
\put(61,40){\line(-1,-1){30}}
\put(26,0){\footnotesize$h_{ij}^a$}
\end{picture}
\end{center}
\end{Defn}
Let $[A]\in \mathcal E$, and let $\mathcal O$ be the $U$-$U$-biorbit containing $[A]$. If $A$ is the zero matrix, then $\mathcal O=\{[A]\}$. Otherwise let $A_{ij}\not=0$ be the lowest non zero entry in the first non zero column $j$ of $A$ from the left. Acting by truncated row and column operations, i.e. by elements of $U$ of the form $x_{ik}(\alpha_k)$ from the left and $x_{lj}(\beta_l)$ from the right, for suitable $\alpha_k, \beta_{l}\in \mathbb F_q$, $l, k=j+1,\ldots,i-1$, we find $[B]\in \mathcal O$ such that $B_{ij}=A_{ij}$ and $B_{kl}=0$ for all $(k,l)\in h_{ij}^a\cup h_{ij}^l$. Choosing $0\not=B_{st}$ to be the lowest non zero value in $B$ in the first non zero column $t$ strictly to the right of column $j$ (so $j<t<s\leqslant n-1$) and proceeding in the same way, we obtain an idempotent $[C]\in \mathcal O$ with $A_{ij}=B_{ij}=C_{ij}$, $C_{st}=B_{st}$, such that all entries on the hook arms $h_{ij}^a$, $h_{st}^a$ of $C$ are zero. Proceeding in this way we finally find an idempotent $[D]$ in $\mathcal O$ such that in $D$ there is at most one non zero entry in each row and in each column. Such idempotents are called {\bf verge idempotents} and we have shown that each $U$-$U$-biorbit $\mathcal O$ of $\mathcal E$ contains a verge idempotent. One sees easily that each biorbit contains at most one verge idempotent, and we have shown that there is a bijection between verges and $U$-$U$-biorbits on $\mathcal E$. Obviously the idempotent $[A]\in \mathcal E$ for the zero matrix $A=0$ affords the trivial representation of $V$.
\begin{Defn}\label{2.12}
A (right) {\bf template} is an idempotent $[A]\in \mathbb C V$, $A\in V$, such that the entries to the right of the lowest non zero entry in each column of $A$ are zero. So if $A_{ij}\not=0$, $1\leqslant j<i \leqslant n$, but $A_{kj}=0$ for $i+1\leqslant k \leqslant n$, then all entries on $h_{ij}^a$ are zero.
A {\bf main condition} of a template $[A]$ is a position $(i,j)$ such that $A_{ij}$ is the lowest non zero entry in column $j$ of $A$. Note that the main conditions of a template are in different rows by construction. We denote the set of main conditions of the template $[A]$ by $\main[A]$ and call $[B]\in \mathbb C V$ with $ B=\sum_{(i,j)\in \main[A]} A_{ij}\epsilon_{ij}$ the {\bf verge} of the template $[A]$, denoted by $\verge[A]=[B]$. Note that by construction $\verge[A]$, $[A]$ a template, is indeed a {\bf verge idempotent}, that is, $\verge[A]$ has at most one non zero entry in each row and each column.
We may illustrate this as follows:
\end{Defn}
\begin{center}
\begin{picture}(190,145)
\put(80,0) {\line(1,0){140}}
\put(80,0){\line(0,1){140}}
\put(80,140){\line(1,-1){140}}
\put(0,60){template $[A]=$}
\put(100,30) {\line(1,0){90}}
\put(100,30){\line(0,1){90}}
\put(140,40) {\line(1,0){40}}
\put(140,40){\line(0,1){40}}
\put(110,60) {\line(1,0){50}}
\put(110,60){\line(0,1){50}}
\put(120,10) {\line(1,0){90}}
\put(120,10){\line(0,1){90}}
\put(100,30){\circle*{3}}
\put(140,40){\circle*{3}}
\put(110,60){\circle*{3}}
\put(120,10){\circle*{3}}
\put(90,25){$z$}
\put(132,33){$z$}
\put(105,52){$z$}
\put(110,5){$z$}
\put(116,27){$\times$}
\put(116,57){$\times$}
\put(136,57){$\times$}
\put(122,80) {\line(1,1){30}}
\put(154,110) {hook}
\end{picture}
\end{center}
The positions denoted by $z$ are main conditions and carry non zero entries of $A$. All other possible non zero entries lie in columns containing a main condition, strictly above it, and not on hook intersections, that is not to the right of a main condition; hook intersections are indicated by $\times$. The positions strictly above main conditions and not to the right of main conditions are called {\bf supplementary conditions} and their set is denoted by $\mathrm{suppl}(\mathfrak p)$, where $\mathfrak p=\main[A]$. Thus $A_{st}\not=0$ implies $(s,t)\in \mathfrak p$ or $(s,t)\in \mathrm{suppl}(\mathfrak p)$.
\begin{Theorem}[Yan]\label{2.13}
Each right $U$-orbit $\mathcal O$ contains precisely one template $[A]$. Moreover if $\mathcal O$, $\mathcal O'$ are right $U$-orbits on $\mathcal E$ with templates $[A]$ respectively $[A']$, $A, A'\in V$, then $\mathbb C \mathcal O\cong \mathbb C \mathcal O'$ if and only if $\verge[A]=\verge[A']$. If $\verge[A]\not=\verge[A']$, then $\mathbb C \mathcal O$ and $\mathbb C \mathcal O'$ afford orthogonal characters, that is they have no irreducible constituent in common.
\end{Theorem}
In particular $[A] \mathbb C U\cong \verge[A] \mathbb C U$ for any template $[A]$ in $\mathcal E$. An isomorphism is obtained by a sequence of truncated row operations applied to $\verge[A]$ yielding the template $[A]$. The characters afforded by the right $U$-orbits $\mathcal O$ of $\mathcal E$ are supercharacters in Yan's supercharacter theory for $U$. By \ref{2.13}, for any idempotent $[B]\in \mathcal E$ we can find a unique verge idempotent $[A]\in \mathcal E$ such that $[B]\mathbb C U\cong [A]\mathbb C U$, so it suffices to investigate the supercharacters arising from orbits generated by verge idempotents, and we can extend definition \ref{2.12} by setting $\main[B]=\main [A]$ and $\verge[B]=\verge[A]$.
\section{Hom-spaces and irreducibles of minimal dimension}
Most of the material presented in this section is known or can easily be derived from the existing literature (e.g. \cite{super} or \cite{yan}). However, since we employ a representation theoretic approach using Hom-spaces between orbit modules explicitly, we provide proofs as well.
For $[A]\in \mathcal E, A\in V$, we denote the right- respectively left $U$-orbit in $\mathcal E$ containing $[A]$ by $\mathcal O_A^r$ and $\mathcal O_A^l$ respectively. Recall that $\mathbb C \mathcal E$ is isomorphic to $\mathbb C U$ as $U$-$U$-bimodule (see Remark \ref{2.10}), an isomorphism being given by the inverse $f^{-1}$ of the (bijective) 1-cocycle $f: U\rightarrow V.$ More precisely
\[
f^{-1}[A]=\frac{1}{|V|}\sum_{B\in V} \overline{\chi_{_A}(B)}(B+1)\in \mathbb C U
\]
extends by linearity to a $U$-$U$-bimodule isomorphism. In particular $\mathbb C \mathcal O^r_A$ and $\mathbb C \mathcal O^l_A$ are isomorphic under $f^{-1}$ to the right respectively the left ideal of $\mathbb C U$ generated by $f^{-1}[A]$.
\begin{Prop}\label{3.1}
Let $[A], [B]\in\mathcal E$. Then $\mathrm{Hom}_{\mathbb C U}(\mathbb C \mathcal O^r_A, \mathbb C\mathcal O^r_B)$ has $\mathbb C$-basis $f^{-1}(\mathcal O^l_A\cap \mathcal O_B^r)$, where $x\in f^{-1}(\mathcal O^l_A\cap \mathcal O_B^r)$ acts on $\mathbb C \mathcal O^r_A$ by left multiplication.
\end{Prop}
\begin{proof}
$\mathbb C U$ is a self injective algebra hence, for any $x, y\in \mathbb C U$, every homomorphism from $x\mathbb C U$ to $y \mathbb C U$ is obtained by left multiplication by some element of $\mathbb C U$. As an easy consequence we have
\begin{eqnarray*}
\mathrm{Hom}_{\mathbb C U} ([A] \mathbb C U, [B] \mathbb C U) \cong f^{-1}(\mathbb C U[A]\cap [B] \mathbb C U)\cong \mathbb C (\mathcal O^l_A\cap \mathcal O^r_B)\end{eqnarray*} as $\mathbb C$-vector spaces, since $\mathcal O^l_A, \mathcal O^r_B \subseteq \mathcal E$. We conclude that $f^{-1}(\mathcal O^l_A\cap \mathcal O^r_B)$ gives a $\mathbb C$-basis of $\mathrm{Hom}_{\mathbb C U}(\mathbb C \mathcal O^r_A, \mathbb C \mathcal O_B^r)$.
\end{proof}
\begin{Defn}\label{3.2}
Let $M, N$ be $\mathbb C U$-modules. We say that $M$ and $N$ are {\bf disjoint}, if they have no irreducible constituents in common, that is if the characters afforded by $M$ and $N$ are orthogonal.
\end{Defn}
Proposition \ref{3.1} enables us to give a proof of part of Yan's theorem \ref{2.13}.
\begin{Cor}\label{3.3}
Let $\mathcal O_1, \mathcal O_2\subseteq \mathcal E$ be right $U$-orbits. Then either $\mathbb C \mathcal O_1\cong \mathbb C \mathcal O_2$ or $\mathbb C \mathcal O_1$ and $\mathbb C \mathcal O_2$ are disjoint.
\end{Cor}
\begin{proof}
Let $\mathrm{Hom}_{\mathbb C U}(\mathbb C \mathcal O_1, \mathbb C \mathcal O_2)\not=(0)$. Let $[A]\in \mathcal O_1$ and let $\tilde {\mathcal O}_1$ be the left $U$-orbit in $\mathcal E$ generated by $[A]$. By \ref{3.1} we find $[B]\in \tilde {\mathcal O}_1\cap \mathcal O_2$, thus there exists $u\in U$ such that $u[A]=\kappa[B]$ with $[B]\in \mathcal O_2, \kappa\in \mathbb C^*$. Thus $u [A] \mathbb C U=[B]\mathbb C U=\mathbb C \mathcal O_2$ and left multiplication by $u$ maps $\mathbb C \mathcal O_1$ surjectively onto $\mathbb C \mathcal O_2$. But left multiplication by $u$ is invertible and therefore $\mathbb C \mathcal O_1\cong \mathbb C \mathcal O_2$, as desired.
\end{proof}
\begin{Defn}\label{3.4}
Let $[A]\in \mathcal E$. The {\bf projective stabilizer} $\Pstab_U[A]$ of $[A]$ in $U$ is defined to be
\[
\Pstab_U[A]=\{u\in U\,|\, [A] u=\lambda_u [A], \, \lambda_u \in \mathbb C ^*\}.
\]
Thus $\Pstab_U[A]$ acts on $\mathbb C [A]$ by the linear character $\tau: u\mapsto \lambda_u\in \mathbb C^*$. By general theory we have
\[
[A]\mathbb C U\cong \Ind^U_{\Pstab_U [A]} \mathbb C_\tau,
\]
where $\mathbb C_\tau$ denotes the one dimensional $\mathbb C \Pstab_U[A]$-module affording $\tau$. The left projective stabilizer $\Pstab_U^{\ell}[A]$ is defined analogously.
\end{Defn}
We illustrate the action of root subgroups $X_{kl}$ ($1 \leqslant l <k \leqslant n$) on idempotents as follows
\begin{equation}
\begin{picture}(100,100)
\put(0, 40){$[A]=$}
\put(40,0) {\line(1,0){90}}
\put(40,0) {\line(0,1){90}}
\put(40,90) {\line(1,-1){90}}
\put(60,20) {\line(1,0){50}}
\put(60,20) {\line(0,1){50}}
\put(60,20){\circle*{2}}
\put(50,12){\tiny$(i,j)$}
\put(75,12){\tiny$(i,l)$}
\put(80,20){\circle*{2}}
\put(60,50){\circle*{2}}
\put(115,18){\footnotesize$i$}
\multiput(80,20)(0,6.2){5}{\line(0,1){5}}
\multiput(60,50)(5,0){4}{\line(1,0){4.5}}
\put(84,49){\footnotesize$l$}
\put(62,72){\footnotesize$j$}
\put(43,49){\tiny$(l,j)$}
\end{picture}
\end{equation}
Suppose $[A]\in \mathcal E$, $A_{ij}\not=0$ ($1\leqslant j< i \leqslant n$). Let $j<l<i$. Then left operation on $[A]$ by $x_{il}(\alpha), \alpha\in \mathbb F_q$, adds $-\alpha$ times row $i$ to row $l$, hence changes entry $(l,j)$ of $A$ to $A_{lj}-\alpha A_{ij}$, while all entries not below the diagonal are set back to zero by truncation. Similarly right operation on $[A]$ by $x_{lj}(\beta), \beta\in \mathbb F_q$, adds $-\beta$ times column $j$ to column $l$ and truncates the resulting matrix, hence changes entry $A_{il}$ to $A_{il}-\beta A_{ij}$. By \ref{2.9} the resulting idempotents have to be multiplied by the scalar $\theta(\alpha A_{il})$ in the first and $\theta(\beta A_{lj})$ in the second case to complete the monomial action of \ref{2.9}. Using this we obtain the following result immediately.
\begin{Theorem}\label{3.6}
Let $[A]\in \mathcal E$ be a right template with $\main[A]=\mathfrak p_{_A}=\{(i_1,j_1),\ldots,(i_k,j_k)\}\subseteq \Phi^+$. Then $\Pstab_U[A]$ is a pattern subgroup $U_{\mathcal R}$ with
\[
{\mathcal R}=\{(r,s)\in \Phi^+\,|\,s\notin \{j_1,\ldots,j_k\}\}\cup\{(r,j_\nu)\,|\,\nu=1,\ldots,k,\ i_\nu\leqslant r \leqslant n\}.
\]
Thus $\mathfrak p_{_A}\subseteq {\mathcal R}$ and ${\mathcal R}^\circ ={\mathcal R}\setminus \mathfrak p_{_A}$ is closed. Moreover $U_{{\mathcal R}^\circ }$ acts trivially on $[A]$, $U_{{\mathcal R}^\circ }\trianglelefteq U_{\mathcal R}$ and $U_{\mathcal R}/U_{{\mathcal R}^\circ }\cong X_{i_1 j_1}\times \cdots\times X_{i_k j_k}$
acting on $[A]$ by the linear character $\theta_A=\theta_1\times \theta_2\times \cdots \times\theta_k$, where
$\theta_{\nu}: X_{i_\nu j_\nu}\rightarrow \mathbb C^*$ sends $x_{i_\nu j_\nu}(\alpha)$ to $ \theta(A_{i_\nu j_\nu}\cdot \alpha)\in \mathbb C^*$ for $\alpha\in \mathbb F_q,\, \nu=1,\ldots,k$.
\end{Theorem}
Thus ${\mathcal R}$ consists of all positions in $\Phi^+$ in zero columns of $A$ together with all positions on and below the positions in $\mathfrak p_{_A}$.
\smallskip
Analogously $[A]\in \mathcal E$ is a {\bf left template}, if the nonzero entries of $A$ lie only in rows containing a main condition and, besides the main conditions themselves, are to the right of main conditions and not on hook intersections, that is not on the hook leg of another main condition. Define
\[
{\mathcal L} =\{(i,j)\in \Phi^+\, | \, i\notin\{i_1,\ldots,i_k\}\}\cup \{(i_\nu,l)\,|\, \nu=1,\ldots,k \text{ and } 1\leqslant l\leqslant j_\nu\}.
\]
Thus $\mathcal L$ consists of all positions in zero rows of $[A]$ together with all positions to the left of main conditions, including the main conditions. Set ${\mathcal L} ^\circ={\mathcal L}\setminus \mathfrak p_{_A}$, then $U_{\mathcal L}=\Pstab_U^l [A]$, $U_{{\mathcal L} ^\circ}$ is normal in $U_{\mathcal L}$ and acts trivially on $[A]$ from the left, and $U_{\mathcal L}/U_{{\mathcal L} ^\circ}=U_{\mathcal R}/U_{{\mathcal R} ^\circ}=X_{i_1 j_1}\times \cdots\times X_{i_k j_k}$ acting on $[A]$ from the left by the linear character $\theta_A$ again.
We describe now orbit modules in more detail. In view of theorem \ref{2.13} we may concentrate on verge orbits, that is orbits generated by some verge idempotent $[A]\in \mathcal E$. Let $\mathfrak p=\mathfrak p_{_A}=\main[A]$. Thus $A=\sum_{(i,j)\in \mathfrak p} A_{ij}e_{ij}$. Let $\mathfrak p=\{(i_1,j_1),\ldots,(i_k,j_k)\}\subseteq \Phi^+$. Denote the hook centered at $(i_\nu, j_\nu) $ by $h_\nu, \nu=1,\ldots,k $. Set $a_\nu=i_\nu-j_\nu-1$, $a=a_1+\cdots+a_k$, and let $b$ be the number of hook intersections of the hooks $h_\nu$, $\nu=1,\ldots,k$. Let $\mathcal T_A$ be the set of templates $[C]$ with $\verge[C]=[A]$. Thus $[C]\in \mathcal T_A$ if and only if $C_{i_\nu j_\nu}=A_{i_\nu j_\nu}, \nu=1,\ldots,k$, and the only other nonzero entries in $C$ are on the hook legs $h_\nu ^l$ different from the hook intersections.
We illustrate this in the following example:
\begin{Example}\label{3.7}
Suppose $[A]\in \mathcal E$ is a verge idempotent and let $\mathfrak p=\{(i,j), (r,s)\}=\main[A]$ with $1\leqslant j<s<i<r\leqslant n$. Then $h_{ij}\cap h_{rs}=\{(i,s)\}$:
\begin{center}
\begin{picture}(140,160)
\put(0,0){\line(1,0){160}}
\put(0,0){\line(0,1){160}}
\put(0,160){\line(1,-1){160}}
\put(30,50){\line(1,0){80}}
\put(30,50){\line(0,1){80}}
\put(50,20){\line(0,1){90}}
\put(50,20){\line(1,0){90}}
\put(17,39){$(i,j)$}
\put(52,39){$(i,s)$}
\put(40,10){$(r,s)$}
\put(30,130){\circle*{3}}
\put(32,135){$j$}
\put(50,110){\circle*{3}}
\put(52,113){$s$}
\put(110,50){\circle*{3}}
\put(115,51){$i$}
\put(140,20){\circle*{3}}
\put(145,21){$r$}
\multiput(110,50)(0,-6.4){5}{\line(0,-1){4}}
\put(100,10){$(r,i)$}
\put(106,17){$\times$}
\multiput(30,110)(6.8,0){3}{\line(1,0){4}}
\put(5,105){$(s,j)$}
\put(26,107.2){$\times$}
\end{picture}
\end{center}
Then $[C]\in \mathcal T_A$ if and only if $A_{ij}=C_{ij}$, $A_{rs}=C_{rs}$ and $C_{is}=0$, where all other nonzero entries of $C$ are on positions in columns $j$ and $s$ above $(i,j)$ and $(r,s)$ respectively. Inspecting the action of root subgroups $X_{rt}$ on $[A]$ from the left, we observe that for $\alpha\in \mathbb F_q$, using the left hand analog of corollary \ref{2.7}:
\[x_{rt}(\alpha) [A]=\theta(\alpha A_{rt})[B],\]
where $B\in V$ is obtained from $A$ by adding $-\alpha$ times row $r$ to row $t$ in $A$ and setting in the resulting matrix all entries above and on the main diagonal back to zero. Since the only non zero entry in row $r$ of $A$ is $A_{rs}$, we have $\theta(\alpha A_{rt})=\theta(0)=1$ for $t\neq s$. Moreover, if $t\leqslant s$, we obtain by truncation $[B]=[A]$. For $s<t<r$, $B$ differs from $A$ only at position $(t,s)$, indeed $B_{ts}=-\alpha A_{rs}\neq 0$, since $A_{ts}=0$. Sequences of truncated row operations by the action of root subgroups $X_{rt}, 1\leqslant t<r$, on row $r$ will just place arbitrary entries on the hook leg $h_{rs}^l$, leaving the other entries of $A$ unchanged. Similarly left action by sequences of elements of root subgroups $X_{it}$ fills the positions of $h_{ij}^\ell$ with arbitrary values of $\mathbb F_q$, leaving the other entries of $A$ unchanged. All other root subgroups $X_{ab}$ not in row $r$ or row $i$ (i.e. $a\neq r, a\neq i$) belong to the projective left stabilizer of $[A]$ in $U$. We conclude that $|\mathcal O^l_A|=q^a$ with $a=(i-j-1)+(r-s-1)=|h_{ij}^\ell|+|h_{rs}^\ell|$. Similarly for the right action of $U$ on $[A]$ we see that $X_{ab}\in \Pstab_U [A]$ for $s\neq b\neq j$. The right action of $X_{ts}, s<t<r$, fills position $(r,t)$ on $h_{rs}^a$ with arbitrarily chosen values of $\mathbb F_q$. Replacing $(r,s)$ by $(i,j)$, the root subgroups $X_{tj}$, $j<t<i$, fill the positions on $h_{ij}^a$ in row $i$ of $A$ with arbitrary elements of $\mathbb F_q$. We conclude again, since $|h_{ij}^a|=|h_{ij}^\ell|=i-j-1$, $|h_{rs}^a|=|h_{rs}^\ell|=r-s-1$, that $|\mathcal O^r_A|=q^a$.
Finally acting by $X_{ri}$ from the left as well as acting by $X_{sj}$ from the right on $[A]$ will fill position $(i,s)$ and leave all other entries of $A$ unchanged. Thus $\mathcal O_A^l\cap \mathcal O_A^r$ is precisely the set of idempotents $[B]\in \mathcal E$ with $A_{rs}=B_{rs}$, $A_{ij}=B_{ij}$, $B_{is}\in \mathbb F_q$, all other entries of $B$ being zero. We conclude $|\mathcal O_A^l\cap \mathcal O_A^r|=q=q^1$, that is $b=1$ and hence $|\mathcal T_A|=q^{a-b}=q^{a-1}$.
\end{Example}
\begin{Theorem}\label{3.8}
Let $[A]\in \mathcal E$ be a verge with main conditions $\mathfrak p=\mathfrak p_{_A}$. Keeping the notation introduced above we have
\begin{enumerate}
\item [1)] $\mathcal O_A^r$ consists of all idempotents arising by filling all positions in $h_\nu^a$ ($\nu=1, \ldots, k$) with arbitrary elements of $\mathbb F_q$. Similarly $\mathcal O_A^l$ consists of all idempotents $[B]\in \mathcal E$, where $B$ coincides at main conditions with $A$ and where the hook legs $h_\nu^\ell$ are filled by arbitrary elements from $\mathbb F_q$.
\item [2)] $|\mathcal O_A^l|=|\mathcal O_A^r|=q^a$.
\item [3)] $|\mathcal T_A|=q^{a-b}$.
\item [4)] For every $[B]\in \mathcal T_A$ there exists $u\in U$ such that $u[A]=[B]$. In particular $\mathbb C \mathcal O^r_B\cong \mathbb C \mathcal O_A^r$.
\end{enumerate}
\end{Theorem}
\begin{proof}
Everything follows at once by inspecting example \ref{3.7}.
\end{proof}
Note that part 4) of \ref{3.8} reproves part of Yan's theorem \ref{2.13}. Moreover, using \ref{3.1}, one easily derives a complete proof of \ref{2.13}, proving that $\mathbb C \mathcal O^r_A\cong \mathbb C \mathcal O_B^r$ for a template $[B]\in \mathcal E$ if and only if $[B]\in \mathcal T_A$. Of course there is an analogous statement, using left templates, for left orbits. Since $\mathbb C \mathcal E$ is isomorphic to $\mathbb C U$ as $U$-$U$-bimodule we conclude that the biorbit module $\mathbb C U[A] \mathbb C U$ generated by a verge $[A]\in \mathcal E$ is the direct sum of Wedderburn components of $\mathbb C U$. In particular, if $S$ is an irreducible submodule of $\mathbb C \mathcal O_A^r$, then $S$ occurs in $\mathbb C U_{\mathbb C U}$ with multiplicity at least $|\mathcal T_A|=q^{a-b}$, and hence $\dim_\mathbb C S\geqslant q^{a-b}$, where $a$ and $b$ are defined as above. We have shown:
\begin{Theorem}\label{3.10}
Let $[A]\in \mathcal E$ be a verge, and let $S$ be an irreducible constituent of $\mathbb C \mathcal O_A^r$. Then $\dim_\mathbb C S \geqslant q ^{a-b}$. If $\dim_\mathbb C S=q^{a-b+m}$ for some $m\in \mathbb N$, then the multiplicity of $S$ as constituent of $\mathbb C\mathcal O_A^r$ is given as $q^m$.
\end{Theorem}
\begin{proof}
We already proved that $q^{a-b}$ is a lower bound for the dimension of $S$. Let $\dim_\mathbb C S=q^{a-b+m}$ and let $s$ be the multiplicity of $S$ as irreducible constituent in $\mathbb C \mathcal O_A^r$. Then $s$ is the multiplicity of $S$ in $\mathbb C \mathcal O_B^r$ for all $[B]\in \mathcal T_A$ and so $s\cdot q^{a-b}$ is the multiplicity of $S$ as constituent in the regular $U$-module $\mathbb C U_{\mathbb C U}$, and hence its dimension as well. So $s\cdot q^{a-b}=q^{a-b+m}$, proving $s=q^m$.
\end{proof}
In the setting of theorem \ref{3.10} we obtain an upper bound for the dimension of $S$ as well, observing that $q^m \dim_\mathbb C S$ has to be less than or equal to the size $q^a$ of $\mathcal O^r_A$. So $q^m q^{a-b+m}\leqslant q^a$, that is $2m\leqslant b$.
\begin{Cor}\label{3.10n}
Let $[A] \in \mathcal E$ be a verge and let $S$ be an irreducible constituent of $\mathbb C \mathcal O_A^r$. Let $a,b$ be defined as above. Then
\[q^{a-b}\leqslant \dim_\mathbb C S \leqslant q^{a+c}\]
where $c$ is the largest integer less than or equal to $-\frac{b}{2}$.
\end{Cor}
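For instance, for a verge $[A]$ with $n=4$ and $\main[A]=\{(3,1),(4,2)\}$, the two hooks meet only in $(3,2)$, so $a=(3-1-1)+(4-2-1)=2$ and $b=1$, hence $c=-1$. The bounds give $q\leqslant \dim_\mathbb C S\leqslant q$, so every irreducible constituent $S$ of $\mathbb C\mathcal O_A^r$ has dimension exactly $q=q^{a-b}$ and, by \ref{3.10}, occurs with multiplicity $q^0=1$; since $\dim_\mathbb C \mathbb C\mathcal O_A^r=q^2$, the orbit module is the direct sum of $q$ pairwise non isomorphic irreducible modules of dimension $q$.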
We say $S\leqslant \mathbb C \mathcal O_A^r$ has {\bf minimal dimension} if and only if $\dim_\mathbb C (S)=q^{a-b}$. By standard arguments on endomorphism rings of semisimple modules we conclude:
\begin{Cor}\label{3.11}
Let $[A]\in \mathcal E$ be a verge, $S\leqslant \mathbb C \mathcal O_A^r$ be irreducible and let $\epsilon_{_S}: \mathbb C \mathcal O_A^r \rightarrow S \hookrightarrow \mathbb C \mathcal O_A^r $ be the canonical projection. Thus $\epsilon_{_S}\in E=\mathrm{End}_{\mathbb C U}(\mathbb C \mathcal O_A^r)$. Then the following holds:
\begin{enumerate}
\item [1)] $S=\mathbb C \mathcal O_A^r$, that is $\mathbb C \mathcal O_A^r$ is irreducible, if and only if the hooks centered at the main conditions do not intersect (Yan).
\item [2)] If $\dim_\mathbb C S=q^{a-b+m}$, $a, b,m\in \mathbb N$ as above, then $\epsilon_{_S} E$ is an irreducible $E$-module of dimension $q^m$.
\item [3)] $S$ has minimal dimension if and only if $\epsilon_{_S}E=\mathbb C \epsilon_{_S}$ is a one dimensional $E$-module.
\end{enumerate}
\end{Cor}
\section{Endomorphism rings}\label{4}
Throughout this section let $[A]\in\mathcal E$ be a verge with main conditions $\mathfrak p=\mathfrak p_{_A}=\{(i_1,j_1), \ldots,(i_k,j_k)\}$ $\subseteq \Phi^+$. Recall that $\mathcal R^\circ \subseteq \Phi^+$ denotes the set of all positions in $\Phi^+$ in zero columns of $[A]$, together with all positions strictly below the main conditions in columns $j_1, \ldots,j_k$ of $[A]$, and set again $\mathcal R=\mathcal R^\circ\cup \mathfrak p$. Similarly we have $\mathcal L^\circ\subseteq \mathcal L \subseteq \Phi^+$ for the left action of $U$ (compare \ref{3.6}). We define
\begin{equation}\label{4.1}
\begin{matrix}
Y^r&=&\{u\in U\,|\, u[A]\in \mathbb C \mathcal O_A^r\}\\
Y^l&=&\{u\in U\,|\, [A]u\in \mathbb C \mathcal O_A^l\}
\end{matrix}
\end{equation}
Obviously $Y^r, Y^l$ are subgroups of $U$ and by \ref{3.1} $E^r=\mathrm{End}_{\mathbb C U} \mathbb C \mathcal O_A^r$, $E^l=\mathrm{End}_{\mathbb C U} \mathbb C \mathcal O_A^l$ are epimorphic images of $\mathbb C Y^r$ and $\mathbb C Y^l$ respectively. More precisely, since every endomorphism $h\in E^r$ is completely determined by its action on the generator $[A]$ of $\mathbb C \mathcal O_A^r$, we have
\begin{Lemma}\label{4.2}
The left annihilator $\mathfrak a_r=\{x\in \mathbb C Y^r\,|\, x[A]=0\}$ is an ideal in $\mathbb C Y^r$ such that $E^r=\mathbb C Y^r/\mathfrak a_r$. Similarly $E^l=\mathbb C Y^l/\mathfrak a_l$ for the right annihilator $\mathfrak a_l$ of $[A]$ in $\mathbb C Y^l$.
\end{Lemma}
Note that the indices ``$r$'' and ``$l$'' indicate here which action is centralized, even if the algebras $E^r$ and $E^l$ act from the left and the right respectively.
In \ref{3.1} we exhibited a $\mathbb C$-basis of $E^r$, $E^l$ labeled by $\mathcal O_A^l\cap \mathcal O_A^r$. More precisely, for any $[B]\in \mathcal O_A^l\cap \mathcal O_A^r$ we may choose $u_{_B}\in U$ such that $u_{_B}[A]\in \mathbb C [B]$. Then $\{u_{_B}\,|\,[B]\in \mathcal O_A^l\cap \mathcal O_A^r\}\subseteq Y^r$ is modulo $\mathfrak a_r$ a basis of $E^r$.
\begin{Lemma}\label{4.3}
$[B]\in \mathcal O_A^l\cap \mathcal O_A^r$ if and only if $B$ coincides with $A$ at all positions except the hook intersections.
\end{Lemma}
\begin{proof}
By \ref{3.8} $\mathcal O_A^l$ consists of all idempotents $[B]\in \mathcal E$ which differ from $[A]$ only in positions on the hook legs $h_{ij}^\ell$ with $(i,j)\in \mathfrak p$, and $\mathcal O_A^r$ consists of idempotents $[C]\in \mathcal E$ differing from $[A]$ only on the hook arms $h_{ij}^a$ with $(i,j)\in \mathfrak p$. This implies the lemma immediately.
\end{proof}
Let $(i,j), (r,s)\in \mathfrak p$, $1\leqslant j<s <i<r \leqslant n$. Thus $h_{ij}, h_{rs}$ intersect precisely in position $(i,s)$ (compare \ref{3.7}). Moreover acting by $X_{ri}$ from the left on $[A]$ produces all idempotents $[B]\in \mathcal O_A^l\cap \mathcal O_A^r$ which differ from $[A]$ only in position $(i,s)$, the hook intersection. Note that $(r,i)\in h_{rs}^a$. For dealing with more than one hook intersection and keeping track of all the positions involved, we inspect the following example:
\begin{Example}\label{4.4}
Let $1\leqslant j<s<b<i<r<a\leqslant n$ with $\{(i,j), (r,s), (a,b)\}\subseteq \mathfrak p_{_A}$.
\begin{center}
\begin{picture}(130,200)
\put(-40,0){\line(1,0){200}}
\put(-40,0){\line(0,1){200}}
\put(-10,80){\line(1,0){90}}
\put(-10,80){\line(0,1){90}}
\put(-40,200){\line(1,-1){200}}
\put(20,50){\line(1,0){90}}
\put(20,50){\line(0,1){90}}
\put(50,20){\line(0,1){90}}
\put(50,20){\line(1,0){90}}
\put(10,39){\footnotesize $(r,s)$}
\put(52,39){\footnotesize $(r,b)$}
\put(82,39){\footnotesize $(r,i)$}
\put(38,10){\footnotesize $(a,b)$}
\put(20,140){\circle*{3}}
\put(20,50){\circle*{3}}
\put(-10,80){\circle*{3}}
\put(50,20){\circle*{3}}
\put(22,145){$s$}
\put(-10,170){\circle*{3}}
\put(-8,175){$j$}
\put(50,110){\circle*{3}}
\put(52,113){$b$}
\put(80,80){\circle*{3}}
\put(82,83){$i$}
\put(110,50){\circle*{3}}
\put(115,51){$r$}
\put(140,20){\circle*{3}}
\put(145,21){$a$}
\multiput(110,50)(0,-6.4){5}{\line(0,-1){4}}
\multiput(80,80)(0,-7){9}{\line(0,-1){4}}
\put(100,10){\footnotesize $(a,r)$}
\put(70,10){\footnotesize $(a,i)$}
\put(106.1,17.2){$\times$}
\put(76.1,17.2){$\times$}
\put(76.1,47.2){$\times$}
\put(-18,69){\footnotesize $(i,j)$}
\put(22,69){\footnotesize $(i,s)$}
\put(52,69){\footnotesize $(i,b)$}
\multiput(50,110)(-6.2,0){10}{\line(-1,0){4}}
\multiput(20,140)(-6.2,0){5}{\line(-1,0){4}}
\put(22,99){\footnotesize $(b,s)$}
\put(-33,107){\footnotesize $(b,j)$}
\put(-33,137){\footnotesize $(s,j)$}
\put(16,107){$\times$}
\put(-14,107){$\times$}
\put(-14,137){$\times$}
\end{picture}
\end{center}
We list the positions of hook intersections and the root subgroups changing the values there by left and right action in the following table:
\begin{center}
\begin{tabular}{|c|c|c|c|}\hline
\begin{tabular}{c}hook intersections\\ to be changed \end{tabular} & \begin{tabular}{c}$h_{ij}\cap h_{rs}$\\ $=(i,s)$ \end{tabular} & \begin{tabular}{c} $h_{ij}\cap h_{ab}$\\ $=(i,b)$ \end{tabular} & \begin{tabular}{c}$h_{rs}\cap h_{ab}$ \\ $=(r,b)$\end{tabular}\\ \hline
\begin{tabular}{c}by left action =\\ truncated row operations\end{tabular}& $X_{ri}$ & $X_{ai}$ & $X_{ar}$\\ \hline
\begin{tabular}{c}by right action =\\ truncated column operations\end{tabular}& $X_{sj}$ & $X_{bj}$ & $X_{bs}$\\\hline
\end{tabular}
\end{center}
\end{Example}
So let $[B]\in \mathcal O_A^l\cap \mathcal O_A^r$ with $B$ differing from $A$ only in these three hook intersections. We may act first by $X_{ri}$ to obtain $B_{is}$ at position $(i,s)$. Then left action by $X_{ai}$ and $X_{ar}$ will change only positions $(i,b)$ and $(r,b)$, thus we can insert $B_{ib}$ at position $(i,b)$ and $B_{rb}$ at position $(r,b)$ without changing the entry at position $(i,s)$. Similarly we use $X_{bs}$ on the right first, then $X_{sj}$ and $X_{bj}$ acting from the right on $[A]$, to obtain $[B]$.
It is easy to check that $\mathcal R \cup \{(s,j),(b,j),(b,s)\}$ and $\mathcal L \cup \{(r,i),(a,i),(a,r)\}$ are closed subsets of $\Phi^+$.
In general, we define $\hat{\mathcal L}$ to be $\mathcal L$ together with all positions $(r,i)\in \Phi^+$ such that there exist $1\leqslant j, s \leqslant n$ with $(i,j), (r,s) \in \mathfrak p_{_A}$ and $1\leqslant j<s<i<r\leqslant n$. Then $\hat{\mathcal L}$ consists of $\mathcal L$ and all positions on hook arms such that the corresponding root subgroups, acting from the left, change in $[A]$ only the value at a hook intersection. Thus obviously $|\hat{\mathcal L}|-|\mathcal L|=|\hat{\mathcal L} \setminus \mathcal L|=b=$ number of hook intersections.
\begin{Theorem}\label{4.5}
$\hat{\mathcal L}$ is a closed subset of $\Phi^+$ such that $U_{\hat{\mathcal L} }=\{u\in U\,|\,u[A]\in \mathbb C \mathcal O_A^r\}=Y^r$. Moreover $U_{{\mathcal L}^\circ}$ and $U_{\mathcal L}$ are normal subgroups of $U_{\hat{\mathcal L} }$, and $U_{\mathcal L}/U_{{\mathcal L}^\circ }=X_{i_1 j_1}\times \cdots\times X_{i_k j_k}$ is contained in the center of $U_{\hat{\mathcal L}}/U_{{\mathcal L}^\circ }$ and acts on the verge idempotent $[A]$ from the left by the linear character $\theta_A$ defined in \ref{3.6}. If $\epsilon_{_A}\in \mathbb C U_{\mathcal L}$ denotes the idempotent affording the linear character $\theta_A$ with trivial $U_{\mathcal L^\circ}$-action, the endomorphism ring $E^r=\mathrm{End}_{\mathbb C U} (\mathbb C \mathcal O_A^r)$ is isomorphic to the right ideal $\epsilon_{_A} \mathbb C U_{\hat{\mathcal L}}$ of $\mathbb C U_{\hat{\mathcal L}}$.
\end{Theorem}
\begin{proof}
First we show that $(a,r), (r,i)\in \hat{\mathcal L}$ implies $(a,i)\in \hat{\mathcal L}$, proving that $\hat{\mathcal L}$ is a closed subset of $\Phi^+$. Since $\mathcal L$ is closed, we may assume that $(a,r)$ or $(r,i)$ is not contained in $\mathcal L$.
Suppose $(a,r)\notin \mathcal L$, thus we find $(a,b), (r,s)\in \mathfrak p$, $1\leqslant s<b<r<a\leqslant n$, such that $h_{ab}\cap h_{rs}=\{(r,b)\}$ and $X_{ar}$ acting from the left on $[A]$ changes only the entry at the hook intersection $(r,b)$ in $[A]$.
\begin{center}
\begin{picture}(140,160)
\put(0,0){\line(1,0){160}}
\put(0,0){\line(0,1){160}}
\put(0,160){\line(1,-1){160}}
\put(30,50){\line(1,0){80}}
\put(30,50){\line(0,1){80}}
\put(50,55){\line(1,1){60}}
\put(70,20){\line(0,1){70}}
\put(70,20){\line(1,0){70}}
\put(17,39){$(r,s)$}
\put(72,39){$(r,b)$}
\put(60,10){$(a,b)$}
\put(100,10){$(a,r)$}
\put(30,130){\circle*{3}}
\put(32,135){$s$}
\put(70,90){\circle*{3}}
\put(72,93){$b$}
\put(110,50){\circle*{3}}
\put(115,51){$r$}
\put(140,20){\circle*{3}}
\put(145,21){$a$}
\put(115,115){$(r,i)\in\text{row } r$}
\multiput(110,50)(0,-6.4){5}{\line(0,-1){4}}
\end{picture}
\end{center}
Now $(r,i)\in \hat{\mathcal L}$ is a position in row $r$, so $i<r$.
Suppose first $(r,i)\in \mathcal L$. Then $(r,i)$ is not to the right of $(r,s)\in \mathfrak p\subseteq \mathcal L$, that is $i\leqslant s$. Since $r<a$ and $s<b$ we have $i<b$ and hence the position $(a,i)$ in row $a$ is strictly to the left of $(a,b)\in \mathfrak p$. We conclude that $(a,i)\in \mathcal L^\circ \subseteq \mathcal L\subseteq \hat{\mathcal L}$. Since the commutator subgroup $[X_{ar}, X_{ri}]$ equals $X_{ai}$, this shows too that $X_{ar}$ normalizes $U_{\mathcal L^\circ}$ and $U_{\mathcal L}$ and that it commutes with $X_{rs}$ modulo $U_{\mathcal L^\circ}$.
Now suppose $(r,i)\notin \mathcal L$ as well. So $(r,i)$ is in row $r$ strictly to the right of $(r,s)\in\mathfrak p$, that is $s<i$. If $i\leqslant b$, then $(a,i)$ is in row $a$ not to the right of $(a,b)\in \mathfrak p$, hence is contained in $\mathcal L$ and we are done, since $\mathcal L\subseteq \hat{\mathcal L}$. Suppose $b<i$. Since $(r,i)\notin \mathcal L$, but $(r,i)\in \hat{\mathcal L}$, there must be in addition a main condition $(i,j)$ in row $i$ with $j<s$ such that $X_{ri}$ acting from the left on $[A]$ changes only the value at the hook intersection $h_{ij}\cap h_{rs}=\{(i,s)\}$ in $[A]$. We have (compare \ref{4.4}):
$1\leqslant j<s<b<i<r<a\leqslant n$, combining all inequalities above, and $(i,j),(r,s),(a,b)\in \mathfrak p$. In particular $h_{ij}\cap h_{ab}=\{(i,b)\}$. Moreover $X_{ai}$ acts on $[A]$ from the left changing in $[A]$ only the value $A_{ib}=0$ at position $(i,b)$, hence $(a,i)\in \hat{\mathcal L}$, as desired.
A similar argument shows that if $(r,i)\notin \mathcal L$ and $(a,r)\in \mathcal L$, then $(a,i)\in \mathcal L^\circ$, proving that $\hat{\mathcal L}$ is closed, and that the commutator subgroup $[U_{\hat{\mathcal L}}, U_{\mathcal L}]$ is contained in $U_{{\mathcal L}^\circ}$. This implies that $U_{{\mathcal L}^\circ}, U_{\mathcal L}$ are normal subgroups of $ U_{\hat{\mathcal L}}$ and $U_{\mathcal L}/U_{{\mathcal L}^\circ }$ is contained in the center $Z(U_{\hat{\mathcal L}}/U_{{\mathcal L}^\circ})$ of $U_{\hat{\mathcal L}}/U_{{\mathcal L}^\circ}$.
Obviously $U_{\hat{\mathcal L}}\subseteq Y^r$ by \ref{3.1}. Since $U_{\mathcal L^\circ}$ acts trivially and $U_{\mathcal L}/U_{\mathcal L^\circ}$ by the linear character $\theta_A$ from the left on $[A]$, $\dim_{\mathbb C} (\epsilon_{_A} \mathbb C U_{\hat{\mathcal L}})=q^b=\dim_{\mathbb C} E^r$, finishing the proof of the theorem.
\end{proof}
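To see theorem \ref{4.5} at work, take again $n=4$ and a verge $[A]$ with $\main[A]=\{(3,1),(4,2)\}$. Here $\mathcal L^\circ=\{(2,1),(4,1)\}$, $\mathcal L=\mathcal L^\circ\cup\{(3,1),(4,2)\}$ and $\hat{\mathcal L}=\mathcal L\cup\{(4,3)\}$, the single added position corresponding to the single hook intersection $(3,2)$, so $b=1$. Since $X_{43}$ commutes with $U_{\mathcal L}$ modulo $U_{\mathcal L^\circ}$, one checks that
\[
E^r=\epsilon_{_A}\,\mathbb C U_{\hat{\mathcal L}}\cong \mathbb C X_{43},
\]
a commutative algebra of dimension $q^b=q$, in accordance with $\dim_\mathbb C E^r=|\mathcal O_A^l\cap\mathcal O_A^r|=q$.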
Obviously there is a left sided version of \ref{4.5}. Recall that $\mathcal R^\circ$ (for a given verge $[A]\in \mathcal E$) consists of all positions $(i,j)\in \Phi^+$ in zero columns, and in columns $j$ such that position $(i,j)$ is strictly below some main condition $(r,j)\in \mathfrak p_{_A}$, that is $r< i\leqslant n$. $\mathcal R$ was defined to be $\mathcal R^\circ \cup \mathfrak p$. Now $\hat{\mathcal R}$ is $\mathcal R$ together with all positions $(a,b)\in \Phi^+$ satisfying the following: $(a,b)\in h_{ij}^\ell$ for some $(i,j)\in \mathfrak p_{_A}$ (so $b=j$) and $X_{ab}$ acts on $[A]$ from the right by changing the value on precisely one hook intersection.
\begin{Theorem}\label{4.5l}
$\hat{\mathcal R}\subseteq \Phi^+$ is closed, $U_{\mathcal R}, U_{\mathcal R^\circ}\trianglelefteq U_{\hat{\mathcal R}}$ and $U_{\mathcal R}/U_{\mathcal R^\circ}$ is central in $U_{\hat{\mathcal R}}/U_{\mathcal R^\circ}$. Right multiplication on $[A]$ by elements of $\mathbb C U_{\hat{\mathcal R}}$ induces endomorphisms of the left orbit module $\mathbb C \mathcal O_A^l$. This endomorphism ring $E^l=\mathrm{End}_{\mathbb C U} \mathbb C \mathcal O_A^l$ is isomorphic to the left ideal $\mathbb C U\!_{\hat{\mathcal R}}\,\epsilon_{_A}^\perp$ of the group algebra $\mathbb C U\!_{\hat{\mathcal R}}$, where $\epsilon_{_A}^\perp$ is the idempotent of $\mathbb C U_{\mathcal R}$ affording the trivial $U_{\mathcal R^\circ}$-action and affording the linear character $\theta_A$ of \ref{3.6} on $U_{\mathcal R}/U_{\mathcal R^\circ}$.
\end{Theorem}
\begin{Remark}\label{4.6}
Since $\epsilon_{_A}$ and $\epsilon_{_A}^\perp$ are central idempotents in $\mathbb C U_{\hat{\mathcal L}}$ and $\mathbb C U_{\hat{\mathcal R}}$ respectively, we see that
$\epsilon_{_A} \mathbb C U_{\hat{\mathcal L}} =\mathbb C U_{\hat{\mathcal L}} \epsilon_{_A}$ and $\epsilon_{_A}^\perp \mathbb C U_{\hat{\mathcal R}} =\mathbb C U_{\hat{\mathcal R}} \epsilon_{_A}^\perp$ are ideals of $\mathbb C U_{\hat{\mathcal L}}$ and $\mathbb C U_{\hat{\mathcal R}}$ respectively.
\end{Remark}
The next lemma shows how we can shift the left action of $U_{\hat{\mathcal L}}$ on $[A]$ to the right, to obtain a right action by $U_{\hat{\mathcal R}}$. Let $(i,j), (r,s)\in \mathfrak p$ with $1\leqslant j<s<i<r\leqslant n$ as in example \ref{3.7}. Thus $h_{ij}\cap h_{rs}=\{(i,s)\}$ and $X_{ri}$ acting from the left and $X_{sj}$ acting from the right on $[A]$ will change only the value at the hook intersection $(i,s)$ in $[A]$. Note that $(r,i)\in \hat{\mathcal L}\setminus \mathcal L$ and $(s,j)\in \hat{\mathcal R}\setminus \mathcal R$. In this situation we have:
\begin{Lemma}\label{4.7}
Let $A_{rs}=\tau$, $A_{ij}=\rho$, so $0\not=\rho, \tau\in \mathbb F_q$. Set $\sigma=\tau/\rho\in \mathbb F_q$. Then
\[
x_{ri}(\alpha)[A]=[A]x_{sj}(\alpha \sigma) \text{ for all } \alpha\in \mathbb F_q.
\]
\end{Lemma}
\begin{proof}
We calculate, using the left hand sided version of \ref{2.9}:
\[
x_{ri}(\alpha)[A]=\theta(\alpha A_{ri})[B]=[B]\in \mathcal O_A^l \cap \mathcal O_A^r,
\]
using $A_{ri}=0$, where $B$ differs from $A$ only at position $(i,s)$, which is $B_{is}=-\alpha \tau\in \mathbb F_q$. Again by \ref{2.9}:
\[
[A]x_{sj}(\alpha\sigma)=\theta(\alpha\sigma A_{sj})[C]=[C], \text{ since } A_{sj}=0,
\]
where $C$ differs from $A$ only at position $(i,s)$ with $C_{is}=-\alpha \sigma\rho=-\alpha \cdot \frac{\tau}{\rho}\cdot \rho=-\alpha\tau=B_{is}$. So $B=C$, proving the lemma.
\end{proof}
For $(a,b)\in \Phi^+, \beta\in \mathbb F_q$, we define the linear character $\theta_{ab}^\beta: X_{ab}\rightarrow \mathbb C^*$ by $\theta_{ab}^\beta (x_{ab}(\alpha))=\theta(\alpha \beta)$. Observe that $\{\theta_{ab}^\beta\,|\, \beta\in \mathbb F_q\}$ is the complete set of distinct linear characters of the root subgroup $X_{ab}\leqslant U$. For $\beta\in \mathbb F_q$ we then define $f_{ab}^\beta=\frac{1}{q}\sum_{\alpha\in \mathbb F_q} \theta_{ab}^\beta (x_{ab}(-\alpha)) x_{ab}(\alpha)\in \mathbb C X_{ab}$. Thus $f_{ab}^\beta$ is the primitive idempotent of $\mathbb C X_{ab}$ affording $\theta_{ab}^\beta$. Keeping this notation and the notation of \ref{4.7} we have:
\begin{Cor}\label{4.8}
Let $[A]$, $X_{ri}$, $X_{sj}$ be as above. Then
\[
f_{ri}^\beta [A]=[A] f_{sj}^{\beta \sigma^{-1}} \text{ for all } \beta \in \mathbb F_q.
\]
\end{Cor}
\begin{proof}
For $\beta\in \mathbb F_q$ we calculate:
\begin{eqnarray*}
f_{ri}^\beta [A]&=& \frac{1}{q}\sum_{\alpha\in \mathbb F_q} \theta_{ri}^\beta(x_{ri}(-\alpha)) x_{ri}(\alpha) [A]\\
&=& \frac{1}{q}\sum_{\alpha\in \mathbb F_q} \theta(-\beta\alpha)[A]x_{sj}(\alpha \sigma)\\
&=& [A] \cdot \frac{1}{q}\sum_{\alpha\in \mathbb F_q} \theta(-\beta \sigma^{-1}\alpha)x_{sj}(\alpha)\\
&=& [A]f_{sj}^{\beta \sigma^{-1}}, \text{ as desired.}
\end{eqnarray*}
\end{proof}
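We note in passing that a direct computation with the orthogonality relations for the linear characters of $X_{ab}\cong (\mathbb F_q,+)$ gives, for $\beta,\gamma\in\mathbb F_q$,
\[
x_{ab}(\gamma)\, f_{ab}^\beta=\theta(\gamma\beta)\, f_{ab}^\beta
\qquad\text{and}\qquad
f_{ab}^\beta f_{ab}^\gamma=\delta_{\beta\gamma}\, f_{ab}^\beta,
\]
so the $f_{ab}^\beta$, $\beta\in\mathbb F_q$, are precisely the $q$ pairwise orthogonal primitive idempotents of $\mathbb C X_{ab}$, and $X_{ab}$ acts on $f_{ab}^\beta$ by the linear character $\theta_{ab}^\beta$.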
Recall from \ref{4.5l} that $\Pstab_U^r[A]=U_{\mathcal R}$, where $\mathcal R$ consists of all positions in zero columns and those positions in columns containing a main condition which are on or below the main condition. In the setting of \ref{4.7} let $J=\mathcal R\cup \{(s,j)\}$.
\begin{Lemma}\label{4.9}
$J=\mathcal R\cup \{(s,j)\}$ is a closed subset of $\Phi^+$.
\end{Lemma}
\begin{proof}
Let $(l,s)\in \mathcal R$. Then $(l,s)$ has to be a position not above the main condition $(r,s)$ and hence $l\geqslant r$. Recall that we have $1\leqslant j<s<i<r\leqslant n$ by our setting \ref{4.7}, and hence $l>i$ and $(l,j)\in \mathcal R^\circ \subseteq \mathcal R\subseteq J$. Now suppose $(j,t)\in \mathcal R$. We have to show that $(s,t)\in J$. If $t\notin \{j_1, \ldots,j_k\}$, $\mathfrak p=\{(i_1, j_1),\ldots, (i_k,j_k)\}$ as above, column $t$ is a zero column in $[A]$ and hence $(s,t)\in \mathcal R^\circ \subseteq \mathcal R\subseteq J$. So let $t=j_\nu$ for some $1\leqslant \nu \leqslant k$. Then $(i_\nu, j_\nu)\in \mathfrak p$ and we have $j\geqslant i_\nu$, since $(j,t)\in \mathcal R$. Now $s>j$, thus $s>i_\nu$ and hence again $(s,t)=(s, j_\nu)\in \mathcal R^\circ \subseteq \mathcal R\subseteq J$.
\end{proof}
In fact we have shown that $[X_{sj}, U_{\mathcal R}]\subseteq U_{\mathcal R^\circ}$. Recall that $\mathcal R=\mathcal R^\circ \cup \mathfrak p$ by \ref{4.5l} and $U_{\mathcal R^\circ}\trianglelefteq U_{\mathcal R}$. Thus we have:
\begin{Cor}\label{4.10}
$U_{\mathcal R^\circ}\trianglelefteq U_J$. Moreover $U_J/U_{\mathcal R^\circ}\cong X_{i_1 j_1}\times \cdots \times X_{i_k j_k} \times X_{sj}$.
\end{Cor}
Since $U_{\mathcal R^\circ}$ acts trivially on $[A]$ from the right by \ref{3.6}, the linear character $\theta_A$ of \ref{3.6}, henceforth denoted by $\theta_A^r$, by which $U_{\mathcal R}$ acts from the right on $[A]$, can be extended to $U_J$ by any linear character $\theta_{sj}^\beta$ of $X_{sj}$, $\beta\in\mathbb F_q$.
We set $\Gamma=\{(a, j_\nu)\in \Phi^+\,|\,\nu=1, \ldots, k,\ j_\nu<a<i_\nu\}$, so $\Phi^+=\mathcal R\,\dot \cup\, \Gamma$. We fix a linear ordering on $\Gamma$ such that $(s,j)\in \Gamma$ comes first. Note that by Chevalley's commutator formula
\[\big\{\prod\nolimits_{(a,b)\in \Gamma} x_{ab}(\alpha_{ab})\,|\,\alpha_{ab}\in \mathbb F_q,\,\forall\, (a,b)\in \Gamma\big\}\]
is a complete set of right coset representatives of $U_{\mathcal R}$ in $U$ and is in bijection with the idempotents $[B]\in \mathcal O_A^r$, where the products are taken in the given order of $\Gamma$.
\begin{Theorem}\label{4.11}
In the setting of \ref{4.6}, keeping the notation introduced above, let $\beta\in \mathbb F_q$. Then
\begin{enumerate}
\item [1)] $[A] f_{sj}^\beta u =\theta_A(u)[A] f_{sj}^\beta$ for all $u\in U_{\mathcal R}$.
\item [2)] $[A]f_{sj}^\beta x_{sj}(\alpha)=\theta(\beta \alpha)[A]f_{sj}^\beta$. Thus $U_J$ acts on $[A]f_{sj}^\beta$ by the linear character $\tilde \theta=\theta_A \times \theta_{sj}^\beta$ with trivial $U_{\mathcal R^\circ}$-action.
\item [3)] $[A]f_{sj}^\beta \mathbb C U$ is isomorphic to $\Ind_{U_J}^U \mathbb C_{\tilde \theta}$, where $\mathbb C_{\tilde \theta}$ is the one dimensional $\mathbb C U_J$-module affording the linear character $\tilde \theta$ of $U_J$.
The set $\{[A]f_{sj}^\beta u\,|\, u\in C_{\Gamma'} \}$ is a basis of $[A] f_{sj}^\beta \mathbb C U$, where $\Gamma'=\Gamma\setminus \{(s,j)\}$ and
$
C_{\Gamma'}=\big\{\prod\nolimits_{(a,b)\in \Gamma'} x_{ab}(\alpha_{ab})\,|\,\alpha_{ab}\in \mathbb F_q,\,\forall\, (a,b)\in \Gamma'\big\}.
$
In particular $\dim_\mathbb C [A]f_{sj}^\beta \mathbb C U=q^{a-1}=\frac{1}{q}\dim_\mathbb C \mathbb C \mathcal O_A^r$, where $a=|\Gamma|$.
\end{enumerate}
\end{Theorem}
We remark that theorem \ref{4.11} can easily be extended to idempotents $f_{sj}^\beta$ for more positions $(s,j)\in \hat{\mathcal R}\setminus \mathcal R$, as long as the corresponding root subgroups commute modulo $U_{\mathcal R^\circ }$. In the next section we shall use the technique of shifting the action of the endomorphism ring $E^r$ from the left to the right, as in \ref{4.7}, to classify all verges $[A]\in \mathcal E$ such that $\mathbb C \mathcal O_A^r$ contains irreducible constituents of minimal dimension, and to describe those.
\section{The irreducibles of minimal dimension}
Throughout this section let $[A]\in \mathcal E$ be a verge with main conditions $\main[A]=\mathfrak p=\mathfrak p_{_A}=\{(i_1, j_1),\ldots,(i_k,j_k)\}\subseteq \Phi^+$. Let $h_\nu=h_{i_\nu j_\nu}, \nu=1,\ldots,k$, and recall from section \ref{4} that $\mathcal L^\circ$ consists of all positions in zero rows together with the positions in rows $i_1,\ldots, i_k$ not to the right of $(i_\nu, j_\nu), \nu=1,\ldots,k$, that $\mathcal L=\mathcal L^\circ \cup \mathfrak p$, and that $\hat{\mathcal L}$ arises by adding to $\mathcal L$ all positions $(r,i)\in \Phi^+$ such that there exist $(i,j), (r,s)\in \mathfrak p$ with $1\leqslant j<s<i<r\leqslant n$. Then $h_{ij}\cap h_{rs}=\{(i,s)\}\subseteq \Phi^+$ and left action by $X_{ri}$ on $[A]$ fills position $(i,s)$ in $[A]$ and leaves $[A]$ unchanged otherwise. The corresponding sets for the right action are denoted by $\mathcal R^\circ\subseteq \mathcal R^\circ \cup \mathfrak p=\mathcal R\subseteq \hat{\mathcal R}$, replacing rows by columns and $(r,i)\in \hat{\mathcal L}$ by $(s,j)\in \hat{\mathcal R}$.
\begin{Defn}\label{5.1}
Let $1\leqslant \nu, \mu\leqslant k$. We say $(i_\nu, j_\nu)$ and $(i_\mu, j_\mu)$ are {\bf connected} (in $\mathfrak p$), if the following holds:
\begin{enumerate}
\item [1)] $j_\nu=i_\mu$ or $j_\mu=i_\nu$ (so $\nu\not=\mu$ and $h_\nu, h_\mu$ meet at the diagonal).
\item [2)] There exists a hook $h_\rho$, $1\leqslant \rho \leqslant k$, such that $h_\rho$ intersects both $h_\nu$ and $ h_\mu$, each in precisely one position in $\Phi^+$.
\end{enumerate}
If there are no pairs of connected main conditions in $\mathfrak p$, we say $\mathfrak p$ is {\bf hook disconnected}.
\end{Defn}
We illustrate this with main conditions $(i,j), (j,m), (r,s)\in \mathfrak p$, where $1\leqslant s<j<r<i\leqslant n$, hence $h_{ij}\cap h_{rs}=\{(r,j)\}\not=\emptyset$, and $1\leqslant m<s<j<r\leqslant n$, so $h_{jm}\cap h_{rs}=\{(j,s)\}\neq \emptyset$. Putting both inequalities together, we obtain $1\leqslant m<s<j<r<i\leqslant n$.
\begin{equation}\label{5.2}
\begin{picture}(120,105)
\put(-40,-60){\line(1,0){160}}
\put(-40,-60){\line(0,1){160}}
\put(-10,10){\line(1,0){60}}
\put(-10,10){\line(0,1){60}}
\put(-40,100){\line(1,-1){160}}
\put(20,-20){\line(1,0){60}}
\put(20,-20){\line(0,1){60}}
\put(50,-40){\line(0,1){50}}
\put(50,-40){\line(1,0){50}}
\put(10,-31){\footnotesize $(r,s)$}
\put(52,-31){\footnotesize $(r,j)$}
\put(38,-50){\footnotesize $(i,j)$}
\put(20,40){\circle*{3}}
\put(20,-20){\circle*{3}}
\put(50,-40){\circle*{3}}
\put(22,45){$s$}
\put(-10,70){\circle*{3}}
\put(-8,75){$m$}
\put(50,10){\circle*{3}}
\put(52,13){$j$}
\put(80,-20){\circle*{3}}
\put(85,-19){$r$}
\put(100,-40){\circle*{3}}
\put(105,-39){$i$}
\put(70,-50){\footnotesize $(i,r)$}
\put(76.1,-42.8){$\times$}
\put(46.1,-22.8){$\times$}
\put(22,-1){\footnotesize $(j,s)$}
\put(-25,0){\footnotesize $(j,m)$}
\put(-37,37){\footnotesize $(s,m)$}
\put(16,7){$\times$}
\put(-14,37){$\times$}
\end{picture}
\end{equation}
\vskip80pt
We see from (\ref{5.2}) that $(i,r), (r,j)\in \hat{\mathcal L}\setminus \mathcal L$, but the sum $(i,j)$ of these roots in $\Phi^+$ is contained in $\mathfrak p$. Thus the commutator group $[X_{ir}, X_{rj}]=X_{ij}$ acts by the nontrivial character $\theta_{ij}^\tau$ from the left (and right) on $[A]$ with $0\neq \tau=A_{ij}\in \mathbb F_q$. Similarly $X_{jm}=[X_{js}, X_{sm}]$ acts by $\theta_{jm}^\rho$ on $[A]$ with $\rho=A_{jm}$.
\begin{Theorem}\label{5.3}
Suppose $1\leqslant m<s<j<r<i\leqslant n$ such that $(i,j), (j,m), (r,s)\in \mathfrak p$. Then $\mathbb C \mathcal O_A^r$ does not contain irreducible constituents of minimal dimension.
\end{Theorem}
\begin{proof}
Obviously the set $\Delta=\{(i,r), (r,j), (i,j)\}$ is closed in $\Phi^+$, and $(i,r), (r,j)\in \hat{\mathcal L}\setminus \mathcal L$, whereas $(i,j)\in \mathfrak p\subseteq \mathcal L$ (compare (\ref{5.2})). Now $U_{\Delta}\leqslant U_{\hat{\mathcal L}}$ is isomorphic to the unitriangular group $U_3(q)$. Since $X_{ij}=[X_{ir}, X_{rj}]$, it acts trivially on every one dimensional $\mathbb C U_\Delta$-module.
By \ref{3.11} $\mathbb C \mathcal O_A^r$ contains irreducible constituents of minimal dimension if and only if $E^r=\mathrm{End}_{\mathbb C U} (\mathbb C \mathcal O_A^r)$ has one dimensional representations. By \ref{4.5} $E^r=\epsilon_{_A} \mathbb C U_{\hat{\mathcal L}}$, where $\epsilon_{_A}\in \mathbb C U_{\hat{\mathcal L}}$ is the central idempotent (see \ref{4.6}) on which $U_{\mathcal L^\circ}$ acts trivially and $U_{\mathcal L}/U_{\mathcal L^\circ}$ by the linear character $\theta_A$ defined in \ref{3.6}. In particular $X_{ij}$ acts on $\epsilon _{_A}$ by the nontrivial character $\theta_{ij}^{A_{ij}}$, since $(i,j)\in \mathfrak p$, $A_{ij}\neq 0$. Let $e\in E^r$ be an idempotent affording a one dimensional representation of $E^r$. Since $\epsilon_{_A}$ is the identity of $E^r$, we have $\epsilon_{_A} e=e$. We obtain $e=x_{ij}(\alpha)e=x_{ij}(\alpha) \epsilon_{_A} e=\theta(A_{ij}\alpha) \epsilon_{_A}e=\theta(A_{ij} \alpha) e$ for all $\alpha\in \mathbb F_q$, and hence $\theta(A_{ij}\alpha)=1$ for all $\alpha\in \mathbb F_q$, a contradiction. Thus $E^r$ has no one dimensional representations and the theorem is proved.
\end{proof}
We now turn to the hook disconnected case.
\begin{Prop}\label{5.4}
Suppose $\mathfrak p$ is hook disconnected and let $[A]\in \mathcal E$ be a verge with $\main[A]=\mathfrak p_{_A}=\mathfrak p$. Set $ \hat{\mathcal L} ^{-} =\hat{\mathcal L}\setminus \mathfrak p$, $ \hat{\mathcal R} ^{-} =\hat{\mathcal R}\setminus \mathfrak p$. Then $ \hat{\mathcal L} ^{-} , \hat{\mathcal R} ^{-} $ are closed subsets of $\Phi^+$ and
$$U_{\hat{\mathcal L}}/ U_{\mathcal L^\circ}\cong U_{\hat{\mathcal L}}/U_{\mathcal L}\times U_{\mathcal L}/U_{\mathcal L^\circ},\quad
U_{\hat{\mathcal R}}/ U_{\mathcal R^\circ}\cong U_{\hat{\mathcal R}}/U_{\mathcal R}\times U_{\mathcal R}/U_{\mathcal R^\circ}.$$
\end{Prop}
\begin{proof}
Suppose $(a,b), (b,c)\in \hat{\mathcal L}^{-}$. So $1\leqslant c<b<a\leqslant n$. If $(a,b), (b,c)\in \mathcal L^\circ$, then $(a,c)\in \mathcal L^\circ\subseteq \hat{\mathcal L} ^{-} $, since $\mathcal L^\circ$ is closed in $\Phi^+$. Suppose $(a,b)\in \hat{\mathcal L} ^{-} \setminus\mathcal L^\circ $, but $(b,c)\in \mathcal L^\circ$. Since $U_{\mathcal L^\circ}\trianglelefteq U_{\hat{\mathcal L}}$ by \ref{4.5}, we have $x^{-1}y^{-1} x =z \in U_{\mathcal L^\circ}$ for $x\in X_{ab}, y\in X_{bc}\subseteq U_{\mathcal L^\circ}$, and hence $X_{ac}=[X_{ab}, X_{bc}]$ contains $x^{-1} y^{-1}x y=zy\in U_{\mathcal L^\circ}$, proving $(a,c)\in \mathcal L^\circ \subseteq \hat{\mathcal L}^{-}$. If $(a,b)\in \mathcal L^\circ, (b,c)\notin \mathcal L^\circ$, a similar argument shows $(a,c)\in \mathcal L^\circ \subseteq \hat{\mathcal L} ^{-} $.
Thus suppose $(a,b), (b,c)\notin \mathcal L^\circ$, but both are contained in $ \hat{\mathcal L} ^{-} =\hat{\mathcal L}\setminus \mathfrak p$. Since $\hat{\mathcal L}$ is closed by \ref{4.5}, $(a,c)\in \hat{\mathcal L}$. Thus we have to show $(a,c)\notin\mathfrak p$. Suppose $(a,c)\in \mathfrak p$. We illustrate the situation:
\begin{equation}\label{5.5}
\begin{picture}(120,110)
\put(-40,-100){\line(1,0){200}}
\put(-40,-100){\line(0,1){200}}
\put(-10,10){\line(1,0){60}}
\put(-10,10){\line(0,1){60}}
\put(-40,100){\line(1,-1){200}}
\put(20,-50){\line(1,0){90}}
\put(20,-50){\line(0,1){90}}
\put(50,-80){\line(0,1){90}}
\put(50,-80){\line(1,0){90}}
\put(10,-61){\footnotesize $(b,d)$}
\put(52,-61){\footnotesize $(b,c)$}
\put(38,-90){\footnotesize $(a,c)$}
\put(20,40){\circle*{3}}
\put(20,-50){\circle*{3}}
\put(50,-80){\circle*{3}}
\put(22,45){$d$}
\put(-10,70){\circle*{3}}
\put(-8,75){$r$}
\put(50,10){\circle*{3}}
\put(52,13){$c$}
\put(110,-50){\circle*{3}}
\put(115,-49){$b$}
\put(140,-80){\circle*{3}}
\put(145,-79){$a$}
\put(100,-90){\footnotesize $(a,b)$}
\put(106.1,-82.8){$\times$}
\put(-25,0){\footnotesize $(c,r)$}
\put(21,0){\footnotesize $(c,d)$}
\end{picture}
\end{equation}
\vskip100pt
Since $(b,c)\notin \mathfrak p$, but $(b,c)\in \hat{\mathcal L} ^{-} \setminus \mathcal L^\circ$, there must be a main condition strictly to the left of $(b,c)$ in row $b$ and a main condition in row $c$, i.e. we find $1\leqslant r<d <c$ with $(b,d), (c,r)\in \mathfrak p$. So $(a,c), (c,r), (b,d)\in \mathfrak p$ and $h_{ac}, h_{cr}$ intersect $h_{bd}$ at $(b,c)$ and $(c,d)$ respectively. Thus $\mathfrak p$ is not hook disconnected, a contradiction, hence $(a,c)\in \hat{\mathcal L}\setminus\mathfrak p=\hat{\mathcal L}^{-}$.
Thus $ \hat{\mathcal L} ^{-} =\hat{\mathcal L}\setminus \mathfrak p$ is closed. We have
$\mathcal L ^\circ \subseteq \hat{\mathcal L} ^{-} \subseteq \hat{\mathcal L}= \hat{\mathcal L} ^{-} \dot \cup \mathfrak p$, and $U_{\mathcal L^\circ}\trianglelefteq U_{\hat{\mathcal L}}$ by \ref{4.5}. Moreover $U_{ \hat{\mathcal L} ^{-} }/U_{\mathcal L^\circ }\leqslant U_{\hat{\mathcal L}}/U_{\mathcal L^\circ }$ and $U_{ \mathcal L}/U_{\mathcal L^\circ }\cong$ {\huge $\times$}$\!_{(a,b)\in \mathfrak p}\, X_{ab}$ is a central subgroup of $U_{\hat{\mathcal L}}/U_{\mathcal L^\circ }$ by \ref{4.5}. Now $U_{ \hat{\mathcal L} ^{-} }/U_{\mathcal L^\circ }\cap U_{ \mathcal L}/U_{\mathcal L^\circ }=(1)$, and obviously $(U_{ \hat{\mathcal L} ^{-} }/U_{\mathcal L^\circ })\cdot (U_{ \mathcal L}/U_{\mathcal L^\circ })=U_{\hat{\mathcal L}}/U_{\mathcal L^\circ }$. Since
$U_{ \mathcal L}/U_{\mathcal L^\circ }$ is central in $U_{\hat{\mathcal L}}/U_{\mathcal L^\circ }$, we obtain $U_{\hat{\mathcal L}}/ U_{\mathcal L^\circ}\cong U_{ \hat{\mathcal L} }/U_{\mathcal L}\times U_{\mathcal L}/U_{\mathcal L^\circ}$, as desired. An analogous argument shows the claim for $\mathcal R^\circ \trianglelefteq \hat{\mathcal R}^{-} \subseteq \hat{\mathcal R} \subseteq \Phi^+$.
\end{proof}
We remark in passing that $U_{\hat{\mathcal L}}/U_{\mathcal L}\cong U_{ \hat{\mathcal L} ^{-} }/U_{\mathcal L^\circ }$ by the second isomorphism theorem, since $\mathcal L\cap \hat{\mathcal L} ^{-} =\mathcal L^\circ$ and $\mathcal L\cup \hat{\mathcal L} ^{-} =\hat{\mathcal L}$ imply $U_{\mathcal L ^\circ }=U_{\mathcal L}\cap U_{ \hat{\mathcal L} ^{-} }$ and $U_{ \hat{\mathcal L} ^{-} }U_{\mathcal L}=U_{\hat{\mathcal L}}$. Obviously $\hat{\mathcal L}\setminus \mathcal L= \hat{\mathcal L} ^{-} \setminus \mathcal L^\circ$ and $\hat{\mathcal R}\setminus \mathcal R= \hat{\mathcal R} ^{-} \setminus \mathcal R^\circ.$
\begin{Cor}\label{5.6}
Suppose $\mathfrak p\subseteq \Phi^+$ is a hook disconnected set of main conditions and let $[A]\in \mathcal E$ be a verge with $\mathfrak p_{_A}=\mathfrak p$. Then $E^r=\mathrm{End}_{\mathbb C U} (\mathbb C \mathcal O_A^r)$ is isomorphic to the group algebra of $U_{ \hat{\mathcal L} ^{-} }/U_{\mathcal L^\circ }$.
\end{Cor}
\begin{proof}
By \ref{5.4} $U_{\hat{\mathcal L}}/U_{\mathcal L^\circ}\cong U_{\hat{\mathcal L}^{-}}/U_{\mathcal L^\circ}\times U_{\mathcal L}/U_{\mathcal L^\circ}$. Since $U_{\mathcal L}/U_{\mathcal L^\circ}\cong X_{i_1 j_1}\times \cdots \times X_{i_k j_k}$, $\mathfrak p=\{(i_1, j_1), \ldots, (i_k, j_k)\}\subseteq \Phi^+$, the algebras of the form $(\mathbb C U_{\hat{\mathcal L}})\epsilon$ are all isomorphic for any primitive idempotent $\epsilon$ in $\mathbb C U_{\mathcal L}$ affording a one dimensional representation with trivial $U_{\mathcal L^\circ}$-action. Choosing $\epsilon$ to be the trivial idempotent $\frac{1}{|U_{\mathcal L}|}\sum_{x\in U_{\mathcal L}} x \in \mathbb C U_{\mathcal L}$ yields $E^r=(\mathbb C U_{\hat{\mathcal L}})\epsilon_{_A}\cong \mathbb C (U_{\hat{\mathcal L}}/U_{\mathcal L})$, where $\epsilon_{_A}\in \mathbb C U_{\mathcal L}$ is defined as in \ref{4.5}.
\end{proof}
This result has some interesting consequences, which we shall list below. To simplify notation, in the situation of \ref{5.4} we identify $U_{\hat{\mathcal L}^{-}}/U_{\mathcal L^\circ}$ and $U_{\hat{\mathcal L}}/U_{\mathcal L}$ and denote this group by $H=H_\mathfrak p$, so $E^r\cong \mathbb C H$.
We define $I\subseteq \hat{\mathcal L}^{-}$ to consist of $\mathcal L^\circ$ together with all positions $(i,b)\in \hat{\mathcal L}^{-}$ such that there exist $(i,a), (a,b)\in \hat{\mathcal L}^{-}\setminus \mathcal L^\circ$. Then $I$ is closed in $\Phi^+$ and $U_I/U_{\mathcal L^\circ}$ is the commutator subgroup $H'=[H, H]$ of $H=U_{\hat{\mathcal L}^{-}}/U_{\mathcal L^\circ}$. Moreover $H/H'$ is isomorphic to the direct product of the root subgroups $X_{ab}$ with $(a,b)\in \hat{\mathcal L}^{-}$, $(a,b)\notin I$. Let $c=|\hat{\mathcal L}^{-}\setminus I|=|\hat{\mathcal L}^{-}|-|I|$; then $H/H'$ is abelian of order $q^c$ and hence $H$ has precisely $q^c$ many non isomorphic one dimensional representations. Keeping this notation, we have shown:
\begin{Theorem}\label{5.7}
Let $[A]\in \mathcal E$ be a verge with $\main[A]=\mathfrak p$. Then $\mathbb C \mathcal O_A^r$ has irreducible constituents of minimal dimension $q^{a-b}$, where $q^a=|\mathcal O^r_A|$ and $b$ is the number of hook intersections of hooks centered at positions in $\mathfrak p$, if and only if $\mathfrak p$ is hook disconnected. In this case we have
\begin{enumerate}
\item[1)] $\mathbb C \mathcal O_A^r$ has precisely $q^c$ many non isomorphic irreducible constituents of minimal dimension $q^{a-b}$, with $c=|\hat{\mathcal L}^{-}\setminus I|$ as above, each occurring with multiplicity one in $\mathbb C \mathcal O_A^r$.
\item[2)] $\mathrm{End}_{\mathbb C U}(\mathbb C \mathcal O_A^r)$ depends only on $\mathfrak p=\main[A]$, not on the particular values of $A$ at positions $(i,j)\in \mathfrak p$. Thus varying the nonzero values at positions $(i,j)\in \mathfrak p$ through $\mathbb F_q^*$ one obtains $(q-1)^k q^c$ many non isomorphic irreducible $\mathbb C U$-modules which occur in their right orbits with minimal dimension.
\end{enumerate}
\end{Theorem}
Recall the conjectures of Higman \cite{higman}, Lehrer \cite{lehrer} and Isaacs \cite{Isaacs2} from the introduction. Observe that $q=(q-1)^1+(q-1)^0$, hence $(q-1)^k q^c$ is indeed a polynomial in $(q-1)$ with nonnegative integral coefficients. Summing over all sets $\mathfrak p\subseteq \Phi^+$ of main conditions, such that $\mathfrak p$ is hook disconnected, we obtain:
\begin{Theorem}\label{5.8}
There exists a polynomial $d_n(t)\in \mathbb Z[t]$ with nonnegative coefficients such that $d_n(q-1)$ is the number of distinct irreducible characters of $U$ which occur with minimal degree in their supercharacters.
\end{Theorem}
Obviously the Lehrer variant of that statement holds as well.
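Explicitly, in the notation of \ref{5.7} one may take
$$d_n(t)=\sum_{\mathfrak p}t^{k(\mathfrak p)}\,(t+1)^{c(\mathfrak p)}
=\sum_{\mathfrak p}\sum_{i=0}^{c(\mathfrak p)}\binom{c(\mathfrak p)}{i}\,t^{k(\mathfrak p)+i},$$
where the sum runs over all hook disconnected sets $\mathfrak p\subseteq \Phi^+$ of main conditions, $k(\mathfrak p)=|\mathfrak p|$ and $c(\mathfrak p)$ is the exponent $c$ attached to $\mathfrak p$ as above; indeed $q^{c}=\bigl((q-1)+1\bigr)^{c}$ expands into powers of $q-1$ with binomial, hence nonnegative integral, coefficients.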
\section{Monomial sources of irreducibles of minimal dimension}
In this section we shall determine monomial sources of the irreducible $\mathbb C U$-modules of minimal dimension. Here a {\bf monomial source} for an irreducible $\mathbb C G$-module $S$, where $G$ is a finite group, means a subgroup $H$ of $G$ together with some one dimensional $\mathbb C H$-module $\mathbb C _\lambda$ such that $S=\Ind_H^G \mathbb C_\lambda$. The index ``$\lambda$'' denotes the linear character of $H$ by which $H$ acts on $\mathbb C _\lambda$. Thus $G$ acts monomially on the cosets of $H$ in $G$. A finite group $G$ is called a {\bf monomial group} ($M$-group for short), if every irreducible $\mathbb C G$-module is monomial, that is, has a monomial source. It is well known (see e.g. Isaacs' book \cite{Isaacsbook}) that every supersolvable and hence every nilpotent group is an $M$-group. Thus in particular finite $p$-groups and hence the unitriangular groups $U_n(q)$ are $M$-groups. Halasi has shown in \cite{halasi} that every irreducible character of $U$ is induced from a linear character of
some $\mathbb F_q$-algebra subgroup of $U$. In \ref{6.11} we shall prove that for
irreducibles of minimal dimension, the $\mathbb F_q$-algebra subgroup can be
chosen to be a pattern subgroup.
We continue with the setting of the previous section. Thus let $\mathfrak p\subseteq \Phi^+$ be a subset of $\Phi^+$ of main conditions and let $[A]\in \mathcal E$ be a verge with main conditions $\mathfrak p_{_A}=\mathfrak p$. If not stated otherwise, we assume now that $\mathfrak p$ is hook disconnected. Recall from the previous sections the definition of the closed subsets $\mathcal L^\circ \subseteq \mathcal L=\mathcal L^\circ \cup \mathfrak p\subseteq \hat{\mathcal L}$, $\mathcal R^\circ \subseteq \mathcal R=\mathcal R^\circ \cup \mathfrak p\subseteq \hat{\mathcal R}$ (\ref{4.5} and \ref{4.5l}) and of $\hat{\mathcal L}^{-}=\hat{\mathcal L}\setminus \mathfrak p$, $\hat{\mathcal R}^{-}=\hat{\mathcal R}\setminus \mathfrak p$, and observe that $|\hat{\mathcal L}^{-}\setminus \mathcal L^\circ|=|\hat{\mathcal R}^{-}\setminus \mathcal R^\circ|=b$ is the number of hook intersections in $\Phi^+$ of hooks centered at main conditions in $\mathfrak p$. Indeed \ref{3.7} provides a bijection:
\begin{equation}\label{6.1}
\perp:\quad \hat{\mathcal L}^{-}\setminus \mathcal L^\circ \rightarrow \hat{\mathcal R}^{-}\setminus \mathcal R^\circ : (r,i)\mapsto (r,i)^\perp =(s,j)
\end{equation}
for $(i,j), (r,s)\in \mathfrak p$ with $1\leqslant j<s<i<r\leqslant n$.
In \ref{4.5l} we showed that $E^r=\mathrm{End}_{\mathbb C U}(\mathbb C \mathcal O_A^r)$ is given as the ideal $\epsilon_{_A} \mathbb C U_{\hat{\mathcal L}}$, where $\epsilon_{_A}$ is the central primitive idempotent in $\mathbb C U_{\mathcal L}$ affording the trivial character on $U_{\mathcal L^\circ}$ and the linear character $\theta_{A}$, defined in \ref{3.6}, on $U_{\mathcal L}/U_{\mathcal L^\circ}\cong$ {\huge $\times$}$\!_{(a,b)\in \mathfrak p}\, X_{ab}$.
For $(r,i)\in \hat{\mathcal L}^{-}\setminus \mathcal L^\circ$, $(r,i)^\perp =(s,j)\in \hat{\mathcal R}^{-}\setminus \mathcal R^\circ$, the map:
\begin{equation}\label{6.2}
\Upsilon_{ri}: \, X_{ri}\rightarrow X_{sj}: x_{ri}(\alpha)\mapsto x_{sj}(\alpha \sigma)
\end{equation}
with $\sigma=A_{rs}/A_{ij}\in \mathbb F_q^*$ is obviously an isomorphism of abelian groups. In \ref{4.7} we proved that
\begin{equation}\label{6.3}
x_{ri}(\alpha)[A]=[A]x_{sj}(\alpha \sigma)=[A]\Upsilon_{ri}(x_{ri}(\alpha)).
\end{equation}
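For instance, that each $\Upsilon_{ri}$ in (\ref{6.2}) is a homomorphism is immediate from the usual parametrization $x_{ri}(\alpha)=1+\alpha e_{ri}$ of the root subgroups:
$$\Upsilon_{ri}\bigl(x_{ri}(\alpha)x_{ri}(\beta)\bigr)=\Upsilon_{ri}\bigl(x_{ri}(\alpha+\beta)\bigr)=x_{sj}\bigl((\alpha+\beta)\sigma\bigr)=x_{sj}(\alpha\sigma)\,x_{sj}(\beta\sigma)=\Upsilon_{ri}\bigl(x_{ri}(\alpha)\bigr)\Upsilon_{ri}\bigl(x_{ri}(\beta)\bigr).$$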
Fix a linear ordering on $\hat{\mathcal L}^{-}\setminus \mathcal L^\circ$ and take all occurring products $\prod_{(a,b)\in \hat{\mathcal L}\setminus \mathcal L} x_{ab}(\alpha_{ab})$, $\alpha_{ab}\in \mathbb F_q$ for $(a,b)\in \hat{\mathcal L}\setminus \mathcal L$, in that ordering. Transfer this ordering to $\hat{\mathcal R}^{-}\setminus \mathcal R^\circ$ by $\perp$. Note that $\epsilon_{_A} \mathbb C U_{\hat{\mathcal L}}$ and $\epsilon_{_A}^\perp \mathbb C U_{\hat{\mathcal R}}$ have bases
\begin{equation}\label{6.4}
\big\{\epsilon_{_A}\cdot\prod\nolimits_{(a,b)\in \hat{\mathcal L}^{-}\setminus \mathcal L^\circ}x_{ab}(\alpha_{ab})\,|\, \alpha_{ab}\in \mathbb F_q \text{ for all } (a,b)\in \hat{\mathcal L}^{-}\setminus \mathcal L^\circ \big\}
\end{equation}
respectively
\begin{equation}\label{6.5}
\big\{\epsilon_{_A}^\perp\cdot\prod\nolimits_{(a,b)\in \hat{\mathcal R}^{-}\setminus \mathcal R^\circ}x_{ab}(\alpha_{ab})\,|\, \alpha_{ab}\in \mathbb F_q \text{ for all } (a,b)\in \hat{\mathcal R}^{-}\setminus \mathcal R^\circ \big\}.
\end{equation}
Thus the map
\begin{equation}\label{6.6}
\Upsilon: \epsilon_{_A} \mathbb C U_{\hat{\mathcal L}}\rightarrow\epsilon_{_A}^\perp \mathbb C U_{\hat{\mathcal R}}
\end{equation}
which sends $\epsilon_{_A}\cdot\prod\nolimits_{(a,b)\in \hat{\mathcal L}^{-}\setminus \mathcal L^\circ}x_{ab}(\alpha_{ab})$ to $\epsilon_{_A}^\perp\cdot\prod\nolimits_{(a,b)\in \hat{\mathcal L}^{-}\setminus \mathcal L^\circ}\Upsilon_{ab}(x_{ab}(\alpha_{ab}))$
in $\epsilon_{_A}^\perp \mathbb C U_{\hat{\mathcal R}}$ is a bijection. We have
\begin{Prop}\label{6.7}
Let $\mathfrak p$ be hook disconnected. Then $\Upsilon: \epsilon_{_A} \mathbb C U_{\hat{\mathcal L}}\rightarrow\epsilon_{_A}^\perp \mathbb C U_{\hat{\mathcal R}}$ is a $\mathbb C$-algebra isomorphism from $E^r=\mathrm{End}_{\mathbb C U}(\mathbb C \mathcal O_A^r)$ into the subalgebra $\epsilon_A^\perp \mathbb C U_{\hat{\mathcal R}}$ of $\mathbb C U$, such that for all $x\in \epsilon_{_A} \mathbb C U_{\hat{\mathcal L}}$ we have:
\[x[A]=[A]\Upsilon(x).\]
\end{Prop}
\begin{proof}
Note that $\epsilon_{_A}$ and $\epsilon_{_A}^\perp$ are central in $\mathbb C U_{\hat{\mathcal L}}$ and $\mathbb C U_{\hat{\mathcal R}}$ respectively by \ref{4.6}. Moreover the restriction $\Upsilon_{ab}$ of $\Upsilon$ to $X_{ab}$ for $(a,b)\in \hat{\mathcal L}^{-}\setminus \mathcal L^\circ$ is a group homomorphism by (\ref{6.2}). Thus by Chevalley's commutator formula (\cite{carter}, 5.2.2), it suffices to check that $\Upsilon$ preserves commutator relations modulo $U_{\mathcal L^\circ}$. More precisely, using \ref{5.4},
we have to show that $\Upsilon$ respects the commutator relation of the form:
\begin{equation}\label{6.8}
[x_{ar}(\alpha), x_{ri}(\beta)]=x_{ai}(\alpha\beta) \text{ modulo } (1-\epsilon_{_A}) \mathbb C U_{\hat{\mathcal L}}, \,\forall\, \alpha, \beta\in \mathbb F_q
\end{equation}
for $(a,r), (r,i)\in \hat{\mathcal L}^{-}$. So let $1\leqslant s<b<r<a\leqslant n$, $1\leqslant j<s<i<r\leqslant n$ such that $(a,b),(r,s),(i,j)\in \mathfrak p$ and $\{(r,b)\}=h_{rs}\cap h_{ab}$, $\{(i,s)\}=h_{ij}\cap h_{rs}$ (compare example \ref{4.4}). Then $(a,r)^\perp=(b,s)$ and $(r,i)^\perp=(s,j)$.
We distinguish two cases:
{\bf Case 1:} Suppose $i<b$. Then $(a,i)$ is to the left of the main condition $(a,b)$ in row $a$ and hence contained in $\mathcal L^\circ$. So $X_{ai}\subseteq U_{\mathcal L^\circ}$, and $x_{ai}(\alpha\beta)=(1)\mod (1-\epsilon_{_A})\mathbb C U_{\hat{\mathcal L}}$, since $x_{ai}(\alpha \beta)\epsilon_{_A}=\epsilon_{_A}x_{ai}(\alpha\beta)=\epsilon_{_A}$. Now, since $i<b$, the position $(b,j)$ in column $j$ is below $(i,j)$, and hence $X_{bj}\subseteq U_{\mathcal R^\circ}$. We conclude
\[
[\Upsilon(X_{ar}), \Upsilon(X_{ri})]=[X_{bs}, X_{sj}]=X_{bj}=(1) \mod (1-\epsilon_{_A}^\perp) \mathbb C U_{\hat{\mathcal R}}
\]
proving that $\Upsilon$ preserves commutators in this case.
\smallskip
Note that the case $b=i$ cannot occur: we would then have $(a,i)=(a,b)\in \mathfrak p$, whereas, since $\mathfrak p$ is hook disconnected, $\hat{\mathcal L}^{-}$ is closed and hence $(a,i)\in \hat{\mathcal L}^{-}=\hat{\mathcal L}\setminus \mathfrak p$, a contradiction.
\smallskip
{\bf Case 2:} $i>b$. Thus $(a,i)$ is in row $a$ strictly to the right of $(a,b)\in \mathfrak p$, and hence is not contained in $\mathcal L=\mathcal L^\circ \cup \mathfrak p$. Moreover $X_{ai}$ acting from the left on $[A]$ will change position $(i,b)$, which is the hook intersection of the hooks $h_{ab}$ and $h_{ij}$. On the other hand $X_{bj}$ acting from the right on $[A]$ changes only position $(i,b)$ in $[A]$, and hence $(a,i)^\perp =(b,j)\in \hat{\mathcal R}\setminus \mathcal R$. By (\ref{6.2}) $\Upsilon (x_{ai}(\gamma))=x_{bj}(\gamma \frac{A_{ab}}{A_{ij}})$ for all $\gamma\in\mathbb F_q$. Similarly,
$\Upsilon(x_{ar}(\alpha))=x_{bs}(\alpha \frac{A_{ab}}{A_{rs}})$ and $\Upsilon(x_{ri}(\beta))=x_{sj}(\beta \frac{A_{rs}}{A_{ij}})$
and hence
\begin{eqnarray*}
\Upsilon[x_{ar}(\alpha), x_{ri}(\beta)]&=& \Upsilon(x_{ai}(\alpha\beta))=x_{bj}(\alpha\beta \frac{A_{ab}}{A_{ij}})\\&=& x_{bj}(\alpha \frac{A_{ab}}{A_{rs}}\beta \frac{A_{rs}}{A_{ij}})=[x_{bs}(\alpha \frac{A_{ab}}{A_{rs}}), x_{sj}(\beta \frac{A_{rs}}{A_{ij}})]\\
&=& [\Upsilon(x_{ar}(\alpha)), \Upsilon(x_{ri}(\beta))]
\end{eqnarray*}
as desired.
\end{proof}
\begin{Remark}\label{6.9}
For hook connected main conditions $\mathfrak p$, it can be shown that a similar $\mathbb C$-algebra isomorphism $\Upsilon: \epsilon_{_A} \mathbb C U_{\hat{\mathcal L}}\rightarrow\epsilon_{_A}^\perp \mathbb C U_{\hat{\mathcal R}}$ exists. It acts on the root subgroups $X_{ab}$, $(a,b)\in \hat{\mathcal L}\setminus\mathcal L^\circ$, in the same fashion. However, the case $b=i$ in the proof of proposition \ref{6.7} can occur, which, for sequences of hook connected conditions, forces a non trivial action on the root subgroups $X_{ij}$ with $(i,j)\in \mathfrak p$ as well, permuting those, in order to take care of the case that a commutator subgroup meets root subgroups at main conditions. To keep the character $\theta_A$ stable one needs to adjust the entry $\alpha$ of $x_{ij}(\alpha)$, $(i,j)\in \mathfrak p$, by factors derived from quotients of entries at main conditions. This extension to the general case, however, will not be needed in this paper.
\end{Remark}
Let $\mathfrak p\subseteq \Phi^+$ be again hook disconnected. Recall from \ref{5.4} that the endomorphism rings $E^r, E^l$ of $\mathbb C \mathcal O_A^r$ and $\mathbb C \mathcal O_A^l$ of \ref{4.5} and \ref{4.5l} respectively can be identified with the group algebras $\mathbb C H$ and $\mathbb C H^\perp$, setting $H=H_\mathfrak p=U_{\hat{\mathcal L}^{-}}/U_{\mathcal L^\circ}\cong U_{\hat{\mathcal L}}/U_{\mathcal L}$ and $H^\perp=U_{\hat{\mathcal R}^{-}}/U_{\mathcal R^\circ}\cong U_{\hat{\mathcal R}}/U_{\mathcal R}$, where again $\hat{\mathcal L}^{-}=\hat{\mathcal L}\setminus \mathfrak p$, $\hat{\mathcal R}^{-}=\hat{\mathcal R}\setminus \mathfrak p$. Obviously the algebra isomorphism $\Upsilon$ of \ref{6.7} induces a group isomorphism $H\rightarrow H^\perp$. For $x\in \mathbb C H$ we denote now $\Upsilon (x)\in \mathbb C H^\perp$ by $x^\perp$.\\
If $S$ is an irreducible $\mathbb C H^\perp$-module, we can extend the $H^\perp$-action on $S$ by \ref{5.4} to $U_{\hat{\mathcal R}}/U_{\mathcal R^\circ}=H^\perp\times U_{\mathcal R}/U_{\mathcal R^\circ}$ by $\theta_A: U_{\mathcal R}/U_{\mathcal R^\circ}=${\huge $\times$}$_{(a,b)\in\mathfrak p} X_{ab}\rightarrow \mathbb C^*$ defined in \ref{3.6}, and then lift the resulting $U_{\hat{\mathcal R}}/U_{\mathcal R^\circ}$-module to $U_{\hat{\mathcal R}}$ by letting $U_{\mathcal R^\circ}$ act trivially on it. The corresponding $\mathbb C U_{\hat{\mathcal R}}$-module is now denoted by $\hat S_A=\hat S$. Let $\Irr(\mathbb C H^\perp)$ be a complete set of non isomorphic irreducible $\mathbb C H^\perp$-modules.
Recall that $\dim_{\mathbb C}\mathbb C \mathcal O_A^r=q^a$, where $a=|\Phi^+\setminus \mathcal R|$, since $U_{\mathcal R}=\Pstab[A]$ by \ref{3.6}. Since $|\hat{\mathcal R}^{-}\setminus \mathcal R^\circ|=|\hat{\mathcal R}\setminus \mathcal R|=b$ is the number of hook intersections in $\Phi^+$ of hooks centered in $\mathfrak p$, we have $|\Phi^+\setminus\hat{\mathcal R}|=a-b$ and hence $[U:U_{\hat{\mathcal R}}]=q^{a-b}$. Similarly $[U:U_{\hat{\mathcal L}}]=q^{a-b}$.
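Here we use that a pattern subgroup $U_{\mathcal S}$ has order $q^{|\mathcal S|}$ for any closed $\mathcal S\subseteq \Phi^+$; explicitly,
$$[U:U_{\hat{\mathcal R}}]=\frac{q^{|\Phi^+|}}{q^{|\hat{\mathcal R}|}}=q^{|\Phi^+\setminus\hat{\mathcal R}|}=q^{|\Phi^+\setminus\mathcal R|-|\hat{\mathcal R}\setminus\mathcal R|}=q^{a-b},$$
and likewise for $U_{\hat{\mathcal L}}$.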
\begin{Remark}
In general $\Upsilon$ defined in \ref{6.9} shifts the action on the verge idempotents from left to right. As a consequence all irreducible constituents of $\mathbb C \mathcal O_A^r$ are induced from precisely the irreducible modules of $U_{\hat{\mathcal R}}$ in the sum of Wedderburn components attached to the central idempotent $\epsilon^\perp_A$. This result has been observed as well by Tung Le in \cite{le} but was shown there by different methods. In the special case of hook disconnected main conditions it follows immediately from proposition \ref{6.7}:
\end{Remark}
\begin{Cor}\label{6.10}
Let $[A]\in\mathcal E$ be a verge such that $\main[A]=\mathfrak p\subseteq \Phi^+$ is hook disconnected. Let $\mathcal R^\circ\subseteq \mathcal R\subseteq \hat{\mathcal R}\subseteq \Phi^+$ be defined as above and $\hat{\mathcal R}^{-}=\hat{\mathcal R}\setminus \mathfrak p$, $H^\perp=U_{\hat{\mathcal R}^{-}}/U_{\mathcal R^\circ}\cong U_{\hat{\mathcal R}}/U_{\mathcal R}$. If $S$ is an irreducible $\mathbb C H^\perp$-module, the lift $\hat S$ of $S$ to $U_{\hat{\mathcal R}}$ is defined as above. Then we have:
\begin{enumerate}
\item[1)] $\hat S\cong [A]S$ and $[A]S\mathbb C U$ is an irreducible constituent of $\mathbb C \mathcal O_A^r$ of dimension $q^{a-b+m}$, setting $q^m=\dim_\mathbb C S$.
\item[2)] $[A] S \mathbb C U\cong \Ind_{U_{\hat{\mathcal R}}}^U [A] \hat S\cong \Ind_{U_{\hat{\mathcal R}}}^U \hat S$.
\item[3)] $\mathbb C \mathcal O_A^r=\sum_{S\in \Irr(\mathbb C H^\perp)} (\dim_\mathbb C S)\, [A] S \mathbb C U=\bigoplus_{S\in \Irr(\mathbb C H^\perp)}(\dim_{\mathbb C} S)\, \Ind_{U_{\hat{\mathcal R}}}^U \hat S$
is the decomposition of $\mathbb C \mathcal O_A^r$ into a direct sum of irreducible $\mathbb C U$-modules.
\end{enumerate}
\end{Cor}
\begin{proof}
Since $U_{\mathcal R^\circ}$ acts trivially on $[A]$ and $U_{\mathcal R}=\Pstab_U^r[A]$, we conclude that $[A]\mathbb C U_{\hat{\mathcal R}^{-}}=[A]\mathbb C H^\perp$ is isomorphic to the regular representation $\mathbb C H^\perp_{\mathbb C H^\perp}$ of $\mathbb C H^\perp$. Since $U_{\mathcal R}/U_{\mathcal R^\circ}$ acts on $[A]$ by the linear character $\theta_A$ we conclude $[A]S\cong [A]\hat S\cong \hat S$. The preimage $\Upsilon^{-1}(S)$ of $S$ under $\Upsilon: \mathbb C H\rightarrow \mathbb C H^\perp$ is an irreducible $\mathbb C H$-module generated by some primitive idempotent $\epsilon_{_S}\in \mathbb C H$, say. Thus $\epsilon_{_S} \mathbb C H[A]=[A] S$ and hence $[A]S \mathbb C U=\epsilon_{_S} [A]\mathbb C U=\epsilon_{_S} \mathbb C \mathcal O_A^r$ is an irreducible constituent of $\mathbb C \mathcal O_A^r$. Now by Isaacs' theorem (\cite{Isaacsq}, Theorem A) $\dim_\mathbb C S$ is a power of $q$, say $q^m$. Thus $\dim_\mathbb C \epsilon_{_S}\mathbb C H=q^m$ too, and $\dim_\mathbb C \epsilon_{_S} \mathbb C \mathcal O_A^r=\dim_\mathbb C [A] S\mathbb C U=q^{a-b+m}$, since $\epsilon_{_S} \mathbb C H$ has multiplicity $q^m$ in $\mathbb C H_{\mathbb C H}$, which may be considered as the endomorphism algebra of $\mathbb C \mathcal O_A^r$. Now 1) follows from \ref{3.10} and \ref{3.10n}. Thus multiplying up the $\mathbb C U_{\hat{\mathcal R}}$-submodule $[A]S=[A]\hat S$ of $\mathbb C \mathcal O_A^r$ to $\mathbb C U$ produces an increase in dimension by a factor $q^{a-b}$, which incidentally is the index of $U_{\hat{\mathcal R}}$ in $U$. This implies 2) and hence 3) holds as well.
\end{proof}
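As a quick consistency check of 3), the dimensions add up: since $|H^\perp|=q^b$,
$$\sum_{S\in \Irr(\mathbb C H^\perp)} (\dim_\mathbb C S)\cdot q^{a-b}\dim_\mathbb C S
= q^{a-b}\sum_{S\in \Irr(\mathbb C H^\perp)} (\dim_\mathbb C S)^2
= q^{a-b}\,|H^\perp| = q^{a},$$
which is the dimension of $\mathbb C \mathcal O_A^r$.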
In \ref{6.10} let the irreducible $\mathbb C H^\perp$-module $S$ be one dimensional and hence afford a linear character $\lambda_S: H^\perp\rightarrow \mathbb C^*$, which extends to a linear character $\lambda_S\cdot \theta_A: U_{\hat{\mathcal R}}/U_{\mathcal R^\circ}\rightarrow \mathbb C^*$ and hence to a linear character $\hat \lambda_S: U_{\hat{\mathcal R}}\rightarrow \mathbb C^*$. Then we are precisely in the situation of theorem \ref{5.7} and hence have constructed the monomial sources of the irreducible $\mathbb C U$-modules which have minimal dimension in their verge orbit module:
\begin{Cor}\label{6.11}
Let $\mathcal S\leqslant \mathbb C \mathcal O_A^r$ be an irreducible constituent of minimal dimension. Then $[A]\in \mathcal E$ has hook disconnected main conditions $\mathfrak p\subseteq \Phi^+$.
Moreover there exists a linear character $\lambda: U_{\hat{\mathcal R}}\rightarrow \mathbb C^*$ having $U_{\mathcal R^\circ}$ in its kernel, such that the restriction of $\lambda$ to $U_{\mathcal R}$ is $\theta_A$ and $(U_{\hat{\mathcal R}}, \lambda)$ is a monomial source of $\mathcal S$.
\end{Cor}
This corollary shows in particular that irreducible characters of $U$ of minimal degree in their supercharacters are induced from linear characters of pattern subgroups. Recall that by \cite{halasi} every irreducible character of $U$ is induced from a linear character of some $\mathbb F_q$-algebra subgroup of $U$. However, as shown by Evseev in \cite{Evseev}, not every irreducible character of $U$ needs to be induced from a linear character of some pattern subgroup of $U$. Those which are are called ``well-induced'' by Evseev, and our result shows that irreducible characters of minimal degree in their supercharacters are well-induced.
\end{document}
\begin{document}
\begin{abstract}
Let $M$ be an open $n$-manifold of nonnegative Ricci curvature and let $p\in M$. We show that if $(M,p)$ has escape rate less than some positive constant $\epsilon(n)$, that is, minimal representing geodesic loops of $\pi_1(M,p)$ escape from any bounded balls at a small linear rate with respect to their lengths, then $\pi_1(M,p)$ is virtually abelian. This generalizes the author's previous work \cite{Pan_escape}, where the zero escape rate is considered.
\end{abstract}
\subjclass[2010]{53C23,53C20,57S30}
\maketitle
We study the structure of the fundamental group of open manifolds with nonnegative Ricci curvature. In comparison to sectional curvature, we recall that it follows from the soul theorem that the fundamental group of any open manifold with nonnegative sectional curvature is virtually abelian, that is, it contains an abelian subgroup of finite index \cite{CG_soul}. Regarding Ricci curvature, Wei constructed open manifolds with positive Ricci curvature and fundamental groups that are torsion-free nilpotent \cite{Wei}. Later, Wilking proved that any finitely generated virtually nilpotent group can be realized as the fundamental group of some open manifold of positive Ricci curvature \cite{Wilk}. Conversely, any finitely generated subgroup of $\pi_1(M)$ has polynomial growth \cite{Mil}, so by Gromov's work \cite{Gro_poly}, it has a nilpotent subgroup of finite index (also see \cite{KW}).
The author discovered that the virtual abelianness/nilpotency of $\pi_1(M)$ is related to where the representing geodesic loops of $\pi_1(M,p)$ are positioned in $M$ \cite{Pan_escape}. For any element $\gamma\in \pi_1(M,p)$, we can choose a geodesic loop based at $p$ representing $\gamma$ of minimal length, denoted by $c_\gamma$. It was known before that the Cheeger-Gromoll splitting theorem \cite{CG_split} implies that if all representing geodesic loops are contained in a bounded set, then $\pi_1(M)$ is virtually abelian. However, it is prevalent for representing geodesic loops to escape from any bounded balls in nonnegative Ricci curvature: if $M$ has positive Ricci curvature and an infinite fundamental group, then this escape phenomenon always occurs \cite{SW}. The escape rate $E(M,p)$ introduced in \cite{Pan_escape} measures how fast the representing geodesic loops of $\pi_1(M,p)$ escape from any bounded balls by comparing the size of $c_\gamma$ to its length:
$$E(M,p):=\limsup_{|\gamma|\to\infty} \dfrac{d_H(p,c_\gamma)}{|\gamma|},$$
where $|\gamma|$ is the length of $c_\gamma$ and $d_H$ is the Hausdorff distance. For a doubly warped product $M=[0,\infty)\times_f S^{p-1}\times_h S^1$, $E(M,p)$ is determined by the decaying rate of the warping function $h(r)$ (see \cite[Appendix B]{Pan_escape}). As $h(r)$ decreases, a representing geodesic loop would take advantage of the thin end to shorten its length, while this also increases its size. Hence the faster $h(r)$ decays, the larger escape rate $(M,p)$ has. As the main result of \cite{Pan_escape}, we proved that if $E(M,p)=0$, then $\pi_1(M)$ is virtually abelian.
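Since $c_\gamma$ is a loop based at $p$, the quantity $d_H(p,c_\gamma)$ is simply the size of the loop, namely
$$d_H(p,c_\gamma)=\max_{t} d\bigl(p,c_\gamma(t)\bigr)=\min\{r>0 : c_\gamma\subseteq \overline{B_r(p)}\},$$
so $E(M,p)\le\epsilon$ says that, asymptotically, $c_\gamma$ stays within a ball around $p$ of radius about $\epsilon|\gamma|$.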
In this paper, we further generalize the above mentioned result by proving a universal escape rate gap.
\begin{thm1}
Given $n$, there is a positive constant $\epsilon(n)$ such that for any open $n$-manifold $(M,p)$ of $\mathrm{Ric}\ge 0$, if $E(M,p)\le \epsilon(n)$, then $\pi_1(M,p)$ is virtually abelian.
\end{thm1}
In other words, if $\pi_1(M)$ contains a free nilpotent non-abelian subgroup, then $E(M,p)>\epsilon(n)$, that is, there is a sequence of elements $\gamma_i\in \pi_1(M,p)$ such that $d_H(p,c_{\gamma_i})>\epsilon(n)|\gamma_i|$. The converse of Theorem A is not true in general (see \cite[Appendix B]{Pan_escape}).
The proof of Theorem A involves the study of geometry of $(\widetilde{M},\tilde{p},\pi_1(M,p))$ at infinity, where $(\widetilde{M},\tilde{p})$ is the Riemannian universal cover of $(M,p)$ and $\pi_1(M,p)$ acts on $\widetilde{M}$ as isometries. For any sequence $r_i\to\infty$, we can pass to a subsequence and obtain the following pointed equivariant Gromov-Hausdorff convergence \cite{FY}:
$$(r_i^{-1}\widetilde{M},\tilde{p},\pi_1(M,p))\overset{GH}\longrightarrow (Y,y,G).$$
The limit $(Y,y,G)$ is called an equivariant asymptotic cone of $(\widetilde{M},\pi_1(M,p))$. To illustrate the approach to Theorem A, we first recall the strategy for the zero escape rate case \cite{Pan_escape}, which roughly goes as follows:
$E(M,p)=0;$\\
$\Rightarrow$ $Gy$ is geodesic in $Y$ for any equivariant asymptotic cone $(Y,y,G)$;\\
$\Rightarrow$ $Gy$ is a metric product $\mathbb{R}^k\times Z$, where $Z$ is compact, for any $(Y,y,G)$;\\
$\Rightarrow$ $Gy$ is a standard Euclidean space for any $(Y,y,G)$;\\
$\Rightarrow$ Any nilpotent subgroup $N$ of $\Gamma$ acts as almost translations on $\widetilde{M}$ at large scale;\\
$\Rightarrow$ $\Gamma$ is virtually abelian.
To study the case $E(M,p)\le\epsilon$, we quantify the approach above. As the first step, we will introduce the concept of $\delta$-geodesic, which measures how close a subset is to being geodesic (Definition \ref{def_delta_geo}), and show that $Gy$ is $\delta_\epsilon$-geodesic, where $\delta_\epsilon\to 0$ as $\epsilon\to 0$ (Proposition \ref{limit_delta_geodesic}).
Regarding the second and third steps above, we use pointed Gromov-Hausdorff closeness to quantify. One may expect that $Gy$ is $\Phi(\epsilon|n)$-close to $\mathbb{R}^k\times Z$, where $Z$ is compact, or $\mathbb{R}^k$ in the pointed Gromov-Hausdorff sense, where $\Phi(\epsilon|n)$ is an unspecified function depending on $\epsilon$ and $n$ with $\lim_{\epsilon\to 0}\Phi(\epsilon|n)=0$. However, this statement has a clear obstruction in the second step: pointed Gromov-Hausdorff closeness cannot distinguish a non-compact space from a compact one with a large diameter. To overcome this, for any equivariant asymptotic cone $(Y,y,G)$, we shall consider an associated family of spaces $\{(sY,y,G)\}_{s>0}$, where $(sY,y,G)$ means scaling $(Y,y,G)$ by $s$. We shall apply the Cheeger-Colding quantitative splitting theorem \cite{CC1} to $(Y,y,G)$ only when almost splitting holds for all $(sY,y,G)$. With this idea, we show that for any $(Y,y,G)$, either $Gy$ in $(sY,y,G)$ is close to $\mathbb{R}^k$ for all $s>0$, or there is some $s>0$ such that $Gy$ in $(sY,y,G)$ is close to a product $\mathbb{R}^k\times Z_s$ with the diameter of $Z_s$ being around $1$ (see Proposition \ref{split_all_scales}). Next, we further rule out the compact factor $Z_s$; more precisely, we prove the following (also see Definition \ref{def_GH_subset} and Proposition \ref{converse}):
\begin{thm}\label{almost_eu_orbit}
Let $(M,p)$ be an open $n$-manifold with $\mathrm{Ric}\ge 0$ and $E(M,p)\le\epsilon$. Then there is an integer $k$ such that for any $(Y,y,G)\in\Omega(\widetilde{M},\Gamma)$, we have
$$d_{GH}((Y,y,Gy),(\mathbb{R}^k\times X,(0,x),\mathbb{R}^k\times \{x\}))\le \Phi(\epsilon|n),$$
where $(X,x)$ is a length space that depends on $(Y,y)$.
\end{thm}
The proof of Theorem \ref{almost_eu_orbit} relies on a critical rescaling argument, which is effective for proving uniformity among all equivariant asymptotic cones. This type of argument was first introduced by the author in \cite{Pan_eu} and was also applied in \cite{Pan_al_stable,Pan_escape}.
We organize the paper as follows. We start with preliminaries in Section \ref{sec_pre}. In Section \ref{sec_geo}, we introduce the notion of {$\delta$-geodesic} and show that the limit orbit $Gy$ is always $\delta$-geodesic. In Section \ref{sec_split}, we study the quantitative splitting behavior of $Gy$ in the associated family $\{(sY,y,G)\}_{s>0}$. In Section \ref{sec_eu}, we prove Theorem \ref{almost_eu_orbit} and Theorem A.
\tableofcontents
\section{Preliminaries}\label{sec_pre}
\noindent\textbf{1.1 Almost splitting}
Cheeger and Colding proved a quantitative splitting result for manifolds with almost nonnegative Ricci curvature \cite{CC1}. Here we need a version for Ricci limit spaces, which follows directly from the result on manifolds. We denote by $\mathcal{M}(n,0)$ the set of all Ricci limit spaces coming from some sequence of complete Riemannian $n$-manifolds $(M_i,p_i)$ of $\mathrm{Ric}\ge 0$. Given $y_-,y_+$ in a space $Y\in\mathcal{M}(n,0)$, recall that the excess function is
$$e_{y_+,y_-}(z)=d(z,y_+)+d(z,y_-)-d(y_+,y_-).$$
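By the triangle inequality, $e_{y_+,y_-}\ge 0$ everywhere, and, since $Y$ is a geodesic space, $e_{y_+,y_-}(z)=0$ exactly when $z$ lies on a minimal geodesic from $y_-$ to $y_+$; thus a small excess at $y$ says that $y$ is nearly on such a geodesic.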
\begin{thm}\label{thm_CC_split}\cite{CC1}
Let $(Y,y)\in\mathcal{M}(n,0)$. Let $y_-,y_+\in Y$ with $d(y_\pm,y)=L\gg R$ and $e_{y_-,y_+}(y)\le \delta$. Then there exists a length space $(X,x)$ such that
$$d_{GH}(B_R(y),B_R(0,x))\le \Phi(\delta,L^{-1}|n,R),$$
where $(0,x)$ is a point in the metric product $\mathbb{R}\times X$.
\end{thm}
Here $\Phi(\delta_1,...,\delta_k|c_1,...,c_l)$ means a nonnegative function depending on $\delta_1,...,\delta_k$ and $c_1,...,c_l$ such that
$$\lim\limits_{\delta_1,...,\delta_k\to 0} \Phi(\delta_1,...,\delta_k|c_1,...,c_l)=0.$$
We briefly recall how Theorem \ref{thm_CC_split} is proved for manifolds since we need some elements from this proof later. Let $(M,y)$ be a complete Riemannian manifold of $\mathrm{Ric}\ge0$. Given $y_-,y_+\in M$ with the assumptions in Theorem \ref{thm_CC_split}, the partial Busemann function $b$ is defined as $$b(z)=d(z,y_+)-d(y,y_+).$$
Let $h$ be the solution to the Dirichlet problem
$$\begin{cases} \Delta h=0 \quad & \text{on } B_{20R}(y),\\
h=b \quad & \text{on } \partial B_{20R}(y). \end{cases}$$
Among other properties, $h$ satisfies the following estimates.
\begin{prop}\cite{CC1}\label{split_estimates}
Let $z\in B_R(y)$ and let $w\in h^{-1}(h(y))$ be a closest point from $z$ to $h^{-1}(h(y))$. Then the following hold:\\
(1) ($C^0$-estimate) $|h(z)-b(z)|\le \Phi(\delta,L^{-1}|n,R)$;\\
(2) (Almost parallel) $d(z,w)=|h(z)-h(y)|\pm \Phi(\delta,L^{-1}|n,R)$;\\
(3) (Almost Pythagorean) $|d(y,w)^2+d(z,w)^2-d(z,y)^2|\le \Phi(\delta,L^{-1}|n,R)$.
\end{prop}
Then the map
$$F:B_R(y)\to \mathbb{R}\times h^{-1}(h(y)),\quad z\mapsto (h(z)-h(y),w),$$
where $w\in h^{-1}(h(y))$ is a closest point from $z$ to $h^{-1}(h(y))$, is a $\Phi(\delta,L^{-1}|n,R)$-approximation between $B_R(y)$ and $B_R(0,x)$ \cite{CC1}.\\
\noindent\textbf{1.2 Asymptotic geometry}
Let $(M,p)$ be an open manifold of $\mathrm{Ric}\ge 0$ and let $r_i\to\infty$. Then there is a subsequence converging in the pointed Gromov-Hausdorff topology:
$$(r_i^{-1}M,p)\overset{GH}\longrightarrow (Z,z).$$
The limit space $(Z,z)$ is called an \textit{asymptotic cone} of $M$. $(Z,z)$ in general depends on the scaling sequence $r_i$, so $M$ may not have a unique asymptotic cone.
Let $(\widetilde{M},\tilde{p})$ be the Riemannian universal cover of $(M,p)$ and let $\Gamma$ be the fundamental group $\pi_1(M,p)$, which acts on $\widetilde{M}$ isometrically. For a sequence $r_i\to \infty$, we can consider a convergent subsequence in the pointed equivariant Gromov-Hausdorff topology \cite{FY}:
$$(r_i^{-1}\widetilde{M},\tilde{p},\Gamma)\overset{GH}\longrightarrow (Y,y,G),$$
where $G$ is a closed subgroup of the isometry group of $Y$. We call $(Y,y,G)$ an \textit{equivariant asymptotic cone} of $(\widetilde{M},\Gamma)$. Note that $(Y,y)\in\mathcal{M}(n,0)$, so Theorem \ref{thm_CC_split} can be applied. According to \cite{CC2,CN}, the limit group $G$ is a Lie group. In particular, $G/G_0$ is discrete, where $G_0$ is the identity component subgroup of $G$. Hence there is a positive distance between different components of the orbit $Gy$.
Let $\Omega(\widetilde{M},\Gamma)$ be the set of all equivariant asymptotic cones of $(\widetilde{M},\Gamma)$. The result below is well-known (see \cite[Proposition 2.1]{Pan_eu} for a proof).
\begin{prop}\label{cpt_cnt}
Let $(M,p)$ be an open $n$-manifold with $\mathrm{Ric}\ge 0$. Then the set $\Omega(\widetilde{M},\Gamma)$ is compact and connected in the pointed equivariant Gromov-Hausdorff topology.
\end{prop}
\noindent\textbf{1.3 Gromov-Hausdorff convergence of closed subsets}
To study the quantitative splitting behavior of the limit orbit $Gy$ in an equivariant asymptotic cone $(Y,y,G)$, we need a natural notion to measure the closeness between closed subsets in two nearby pointed spaces.
\begin{defn}\label{def_GH_subset}
For $i=1,2$, let $(X_i,x_i)$ be a complete pointed length space and let $A_i$ be a closed subset of $X_i$ with $x_i\in A_i$. Let $\delta>0$. We say that a map $\phi:X_1\to X_2$ is a $\delta$-approximation from $(X_1,x_1,A_1)$ to $(X_2,x_2,A_2)$ if\\
(1) $d(x_2,\phi(x_1))\le\delta$,\\
(2) $|d(z,z')-d(\phi(z),\phi(z'))|\le\delta$ for all $z,z'\in B_{1/\delta}(x_1)$,\\
(3) $B_{1/\delta}(x_2)\subseteq B_{\delta}\left(\phi(B_{1/\delta}(x_1))\right)$,\\
(4) $B_{1/\delta}(x_2)\cap A_2 \subseteq B_{\delta}\left(\phi(B_{1/\delta}(x_1)\cap A_1)\right)$.
We say that
$$d_{GH}((X_1,x_1,A_1),(X_2,x_2,A_2))\le\delta,$$
if there are $\delta$-approximation maps
$$\phi:(X_1,x_1,A_1)\to (X_2,x_2,A_2), \quad \psi:(X_2,x_2,A_2)\to (X_1,x_1,A_1).$$
\end{defn}
Note that the first three conditions in Definition \ref{def_GH_subset} say that $\phi:X_1\to X_2$ is a $\delta$-approximation from $(X_1,x_1)$ to $(X_2,x_2)$ in the pointed Gromov-Hausdorff sense. Together with the last condition, the closure of ${\phi(A_1)}$ and $A_2$, as (pointed) closed subsets of $X_2$, are $\delta$-close in the pointed Hausdorff sense.
\begin{rem}
In practice, we only need to check one side of the approximations in Definition \ref{def_GH_subset}. Given a $\delta$-approximation $\phi:(X_1,x_1,A_1)\to (X_2,x_2,A_2)$, one can construct a $3\delta$-approximation $\psi:(X_2,x_2,A_2)\to (X_1,x_1,A_1)$: for any $z_2\in B_{1/\delta}(x_2)$, by (3) there is $z_1\in B_{1/\delta}(x_1)$ such that $d(\phi(z_1),z_2)\le\delta$, then assign $\psi(z_2)$ as $z_1$.
\end{rem}
\begin{rem}
Though $d_{GH}$ as in Definition \ref{def_GH_subset} may not satisfy the triangle inequality, a weaker estimate always holds:
\begin{align*}
&\ d_{GH}((X_1,x_1,A_1),(X_3,x_3,A_3))\\
\le&\ 2d_{GH}((X_1,x_1,A_1),(X_2,x_2,A_2))+2d_{GH}((X_2,x_2,A_2),(X_3,x_3,A_3)).
\end{align*}
\end{rem}
\begin{rem}\label{rem_subset_precpt}
For a sequence $(X_i,x_i)$ converging to $(X,x)$ with a sequence of closed subsets $A_i\subseteq X_i$ containing $x_i$, we can always extract a subsequence so that $$(X_{i(j)},x_{i(j)},A_{i(j)})\overset{GH}\longrightarrow (X,x,A)$$
in the sense of Definition \ref{def_GH_subset}, where $A$ is some closed subset of $X$ containing $x$. To see this precompactness result, one can consider the closure of $\phi_i(A_i)$ in $X$, where $\phi_i:X_i\to X$ are $\delta_i$-approximation maps with $\delta_i\to 0$; then the sequence $\{(\overline{\phi_i(A_i)},\phi_i(x_i))\}$ subconverges in the pointed Hausdorff sense to some limit $(A,x)$.
\end{rem}
\begin{rem}
For a pointed equivariant Gromov-Hausdorff convergent sequence
$$(X_i,x_i,G_i)\overset{GH}\longrightarrow (X,x,G),$$
by definition we have corresponding convergence of orbits $G_ix_i$ to $Gx$ in the sense of Definition \ref{def_GH_subset}:
$$(X_i,x_i,G_ix_i)\overset{GH}\longrightarrow (X,x,Gx).$$
\end{rem}
\section{Almost geodesic limit orbits}\label{sec_geo}
Throughout this section and beyond, we assume that $(M,p)$ is an open $n$-manifold with nonnegative Ricci curvature and an infinite fundamental group $\Gamma=\pi_1(M,p)$; we also always assume that $E(M,p)\le\epsilon$ unless otherwise noted, where $\epsilon>0$ is a small number that will be determined through the text. To avoid ambiguities, the symbol $\epsilon$ will be used exclusively for this purpose. $\delta_\epsilon$ means a constant only depending on $\epsilon$ with $\lim_{\epsilon\to 0} \delta_\epsilon=0$.
In this section, we prove that in any $(Y,y,G)\in\Omega(\widetilde{M},\Gamma)$, the orbit $Gy$ is $\delta_\epsilon$-geodesic (Proposition \ref{limit_delta_geodesic}). We also prove some properties for $\delta_\epsilon$-geodesic subsets for later use.
For a closed subset $N$ in a length space $X$, we say that $N$ is \textit{geodesic} in $X$ if the extrinsic and intrinsic metrics on $N$ agree. To quantify this, we introduce the notion of \textit{$\delta$-geodesic} subsets.
\begin{defn}\label{def_delta_geo}
Let $\delta>0$. Let $X$ be a length space and let $N$ be a closed subset of $X$. We say that $N$ is \textit{$\delta$-geodesic in $X$}, if for any two points $a,b\in N$, there is a chain of points $z_0=a,...,z_j,...,z_k=b$ in $N$ such that
$$\sum_{j=1}^{k} d(z_{j-1},z_{j})\le (1+\delta)\cdot d(a,b)\quad \text{and}\quad d(z_{j-1},z_{j})\le\delta\cdot d(a,b) \text{ for all } j=1,...,k.$$
\end{defn}
Note that being $\delta$-geodesic is scaling invariant.
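For instance, assuming that the intrinsic metric on $N$ is the induced length metric, a geodesic subset is $\delta$-geodesic for every $\delta>0$: given $a,b\in N$, pick a path in $N$ from $a$ to $b$ of length at most $(1+\delta)d(a,b)$ and choose points $z_0=a,\dots,z_k=b$ along it at arc-length spacing at most $\delta\cdot d(a,b)$; then
$$\sum_{j=1}^{k} d(z_{j-1},z_j)\le \mathrm{length}\le (1+\delta)\,d(a,b),\qquad d(z_{j-1},z_j)\le\delta\cdot d(a,b).$$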
Since $\Gamma$ acts freely on $\widetilde{M}$, we can assign a natural metric $\rho$ on $\Gamma$ by
$$\rho(\gamma,\gamma')=d(\gamma\tilde{p},\gamma'\tilde{p}).$$
\begin{lem}\label{pre_limit_delta_geodesic}
For any $\gamma\in\Gamma$ with $\rho(e,\gamma)$ sufficiently large, there are $\gamma_1,...,\gamma_k\in\Gamma$ such that the following holds:\\
(1) $\prod_{j=1}^k \gamma_j=\gamma$,\\
(2) $\sum_{j=1}^k \rho(e,\gamma_j)\le (1+\delta_\epsilon)\cdot \rho(e,\gamma)$,\\
(3) $\rho(e,\gamma_j)\le \delta_\epsilon \cdot \rho(e,\gamma)$ for all $j=1,...,k$,\\
where $\delta_\epsilon\to 0$ as $\epsilon\to 0$.
\end{lem}
\begin{proof}
The proof is similar to that of \cite[Proposition 2.2]{Pan_escape}.
We put $\Gamma(R)=\{\gamma\in \Gamma \,|\, d(\tilde{p},\gamma\tilde{p})\le R \}$ and $$D(R)=\max_{\gamma\in \Gamma(R)}d_H(p,c_\gamma).$$ It follows from $E(M,p)\le\epsilon$ that
$$\limsup_{R\to\infty} \dfrac{D(R)}{R}\le\epsilon.$$
For $\eta>0$, which we will determine later, we define $s(\eta,R)=2(\eta^{-1}+1)\cdot D(R)$.
Let $\gamma\in\Gamma$ with $R=\rho(e,\gamma)$ and let $c$ be a representing geodesic loop of $\gamma$. It is clear that $c$ is contained in $\overline{B_{D(R)}}(p)$. Let $\tilde{c}$ be the lift of $c$ starting at $\tilde{p}$; we can assume that $\tilde{c}:[0,R]\to\widetilde{M}$ is of unit speed. Following the same argument as in the proof of Proposition 2.2 in \cite{Pan_escape}, we can choose a series of points $\{\tilde{c}(t_j)\}_{j=1}^k$ such that $t_0=0$, $t_k=R$ and
$$t_j-t_{j-1}=2\eta^{-1}D(R) \text{ for all }j=1,...,k-1,\quad t_k-t_{k-1}\le 2\eta^{-1}D(R).$$
We also choose $\beta_0=e$, $\beta_k=\gamma$, and $\beta_j\in\Gamma$ such that $d(\beta_j\tilde{p},\tilde{c}(t_j))\le D(R)$ for $j=1,...,k-1$. Then $\{\gamma_j=\beta^{-1}_{j-1}\beta_j\}_{j=1}^k$ satisfies $\prod_{j=1}^k \gamma_j=\gamma$,
$$\sum_{j=1}^k \rho(e,\gamma_j)\le (1+\eta)\rho(e,\gamma),$$
and
$$\rho(e,\gamma_j)\le s(\eta,R)\le 4(\eta^{-1}+1)\epsilon\cdot \rho(e,\gamma)$$
when $R$ is sufficiently large. Setting $\eta=\sqrt{\epsilon}$ and $\delta_\epsilon=4(\sqrt{\epsilon}+\epsilon)$, we complete the proof.
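Indeed, for $R$ large we have $D(R)\le 2\epsilon R$, so with $\eta=\sqrt{\epsilon}$,
$$s(\eta,R)=2(\eta^{-1}+1)D(R)\le 4(\epsilon^{-1/2}+1)\epsilon R=4(\sqrt{\epsilon}+\epsilon)R=\delta_\epsilon\cdot \rho(e,\gamma),$$
and also $1+\eta\le 1+\delta_\epsilon$, so both (2) and (3) hold with this $\delta_\epsilon$.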
\end{proof}
\begin{prop}\label{limit_delta_geodesic}
For any $(Y,y,G)\in \Omega(\widetilde{M},\Gamma)$, the orbit $Gy$ is $\delta_\epsilon$-geodesic in $Y$, where $\delta_\epsilon\to 0$ as $\epsilon\to 0$.
\end{prop}
\begin{proof}
Let $r_i\to\infty$ such that
$$(r_i^{-1}\widetilde{M},\tilde{p},\Gamma)\overset{GH}\longrightarrow (Y,y,G).$$
Let $g\in G$ with $gy\not= y$ and let $\gamma_i\in \Gamma$ be a sequence converging to $g$ associated to the above convergence. We put $R_i=d(\gamma_i\tilde{p},\tilde{p})\to\infty$. It follows from Lemma \ref{pre_limit_delta_geodesic} that for any $i$ large, we can find a word $\prod_{j=1}^{k_i} \gamma_{i,j}=\gamma_i$ with
$$\sum_{j=1}^{k_i} \rho(e,\gamma_{i,j})\le (1+\delta_\epsilon)\cdot R_i,\quad \rho(e,\gamma_{i,j})\le \delta_\epsilon\cdot R_i.$$
We group successive portions of the word $\prod_{j=1}^{k_i} \gamma_{i,j}$ into a new word $\prod_{j=1}^{K_i} g_{i,j}=\gamma_i$ such that
$$\delta_\epsilon R_i\le\rho(e,g_{i,j})\le 2\delta_\epsilon R_i.$$
It is clear that
$$\sum_{j=1}^{K_i}\rho(e,g_{i,j})\le (1+\delta_\epsilon)\cdot R_i,\quad K_i\le (1+\delta_\epsilon)/\delta_\epsilon.$$
Passing to a subsequence, we assume that all $K_i$ are the same and each sequence $\{g_{i,j}\}_i$ converges to $g_j\in G$ associated to $(r_i^{-1}\widetilde{M},\tilde{p},\Gamma)\overset{GH}\longrightarrow (Y,y,G)$. Then
$$d_Y(g_jy,y)=\lim\limits_{i\to\infty} r_i^{-1}\rho(e,g_{i,j})\le \lim\limits_{i\to\infty} r_i^{-1}\cdot 2\delta_\epsilon R_i=2\delta_\epsilon\cdot d_Y(gy,y),$$
$$\sum_{j=1}^K d_Y(g_jy,y)=\lim\limits_{i\to\infty} \sum_{j=1}^K r_i^{-1}\rho(e,g_{i,j})\le\lim\limits_{i\to\infty} r_i^{-1}(1+\delta_\epsilon)R_i=(1+\delta_\epsilon)d_Y(gy,y).$$
This shows that the limit orbit $Gy$ is $2\delta_\epsilon$-geodesic in $Y$.
\end{proof}
\begin{cor}\label{cor_cnt_orbit}
Given that $\epsilon>0$ is sufficiently small, the orbit $Gy$ is connected for any $(Y,y,G)\in\Omega(\widetilde{M},\Gamma)$.
\end{cor}
\begin{proof}
We choose $\epsilon>0$ small so that $\delta_\epsilon\le 0.1$, where $\delta_\epsilon$ is the constant in Proposition \ref{limit_delta_geodesic}.
We argue by contradiction. Suppose that the orbit $Gy$ is not connected for some $(Y,y,G)\in\Omega(\widetilde{M},\Gamma)$. Let $C$ be the connected component of $Gy$ containing $y$. We choose an orbit point $hy\in Gy-C$ such that
$$d(hy,y)=\min_{z\in Gy-C}d(z,y)=\min_{z\in Gy-C} d(z,C).$$
By Proposition \ref{limit_delta_geodesic}, there is a chain of orbit points $z_0=y,...,z_j,...,z_k=hy$ such that $d(z_{j-1},z_j)\le 0.1\cdot d(hy,y)$.
We claim that $z_1\in C$. Otherwise, we would have $$0<d(C,z_1)\le d(z_0,z_1)\le 0.1\cdot d(hy,y),$$
which cannot happen due to our choice of $hy$.
Inductively, $z_j\in C$ for all $j$; in particular $hy=z_k\in C$, a contradiction.
\end{proof}
For the rest of this section, we prove some lemmas on convergence of $\delta$-geodesic subsets, which will be used in the next section.
\begin{lem}\label{to_geodesic}
Let $(Y_i,y_i)$ be a sequence of complete pointed length spaces and $N_i$ be a closed subset of $Y_i$ containing $y_i$ for each $i$. Suppose that
$$(Y_i,y_i,N_i)\overset{GH}\longrightarrow (Y,y,N)$$
and $N_i$ is $\delta_i$-geodesic in $Y_i$, where $\delta_i\to 0$. Then $N$ is geodesic in $Y$.
\end{lem}
\begin{proof}
Let $a,b\in N$. From the convergence, we can choose points $a_i$ and $b_i$ in $N_i$ converging to $a$ and $b$ respectively. Because each $N_i$ is $\delta_i$-geodesic in $Y_i$, there is a series of points $z_{i,0}=a_i,...,z_{i,j},...,z_{i,k(i)}=b_i$ in $N_i$ such that
$$\sum_{j=1}^{k(i)} d(z_{i,j-1},z_{i,j})\le (1+\delta_i) d(a_i,b_i),\quad d(z_{i,j-1},z_{i,j})\le\delta_id(a_i,b_i)\to 0.$$
For any $\eta>0$, we choose a series of points $\{w_{i,j}\}_{j=0}^{K(i)}$ from $\{z_{i,j}\}_{j=0}^{k(i)}$ such that $w_{i,0}=z_{i,0}$, $w_{i,K(i)}=z_{i,k(i)}$, and
$$\eta/2\le d(w_{i,j-1},w_{i,j})\le \eta$$
for all $j=1,...,K(i)$. Note that
$$\sum_{j=1}^{K(i)} d(w_{i,j-1},w_{i,j})\le (1+\delta_i)d(a_i,b_i),\quad K(i)\le (1+\delta_i)d(a_i,b_i)/(\eta/2)\to 2d(a,b)/\eta.$$
Passing to a subsequence, we assume that all $K(i)$ are the same $K$ and each sequence $\{w_{i,j}\}_i$ converges to $w_j\in N$. Then $\{w_j\}_{j=1}^K\subseteq N$ satisfies
$$\sum_{j=1}^{K}d(w_{j-1},w_{j})\le d(a,b),\quad d(w_{j-1},w_{j})\le\eta \text{ for all } j=1,...,K.$$
This shows that $N$ is geodesic in $Y$.
\end{proof}
The following result in \cite[Lemma 3.1]{Pan_escape} characterizes any closed and geodesic subset in a metric product $\mathbb{R}^k\times X$ that contains a slice of $\mathbb{R}^k$.
\begin{lem}\label{geo_in_product}
Let $X$ be a locally compact length metric space. Let $N$ be a closed subset in the product metric space $\mathbb{R}^k\times X$, where $\mathbb{R}^k$ is endowed with the standard Euclidean metric. Suppose that $N$ is geodesic in $\mathbb{R}^k\times X$ and $N$ contains a slice $\mathbb{R}^k\times \{x\}$ for some $x\in X$. Then $N$ equals $\mathbb{R}^k\times Z$ as a subset of $\mathbb{R}^k\times X$, where
$$Z=N \cap (\{0\}\times X);$$
in particular, $N$ is a metric product.
\end{lem}
We prove a quantitative version of Lemma \ref{geo_in_product} in terms of $\delta$-geodesic subsets and Gromov-Hausdorff closeness.
\begin{lem}\label{subset_almost_product}
Let $(X,x)$ be a pointed locally compact length space and let $(Y,y)\in\mathcal{M}(n,0)$. Let $N$ be a closed and $\delta$-geodesic subset in $Y$ with $y\in N$. Suppose that
$N$ has a closed subset $S$ such that
$$d_{GH}((Y,y,S),(\mathbb{R}^k\times X,(0,x),\mathbb{R}^k\times\{x\}))\le\delta.$$
Then there is a length space $(X',x')$, which is possibly different from $(X,x)$, and a closed geodesic subset $Z'\subseteq X'$ such that
$$d_{GH}((Y,y,N),(\mathbb{R}^k\times X',(0,x'),\mathbb{R}^k\times Z'))\le \Phi(\delta|n).$$
\end{lem}
\begin{proof}
Let $(Y_i,y_i)$ be any sequence of spaces in $\mathcal{M}(n,0)$ satisfying\\
(1) $Y_i$ has a closed and $\delta_i$-geodesic subset $N_i$ containing $y_i$,\\
(2) $N_i$ has a closed subset $S_i$ with
$$d_{GH}((Y_i,y_i,S_i),(\mathbb{R}^k\times X_i,(0,x_i),\mathbb{R}^k\times\{x_i\}))\le\delta_i\to 0.$$
Passing to a subsequence, the sequences $(Y_i,y_i,S_i)$ and $(\mathbb{R}^k\times X_i,(0,x_i),\mathbb{R}^k\times\{x_i\})$ converge to the same limit space $$(Y_\infty,y_\infty,S_\infty)=(\mathbb{R}^k\times X_\infty,(0,x_\infty),\mathbb{R}^k\times\{x_\infty\}).$$
Due to Remark \ref{rem_subset_precpt}, we can also assume that
$$(Y_i,y_i,N_i)\overset{GH}\longrightarrow (Y_\infty,y_\infty,N_\infty).$$
By Lemma \ref{to_geodesic}, $N_\infty$ is geodesic in $Y_\infty=\mathbb{R}^k\times X_\infty$; moreover, $N_\infty$ contains a slice $\mathbb{R}^k\times \{x_\infty\}$. It follows from Lemma \ref{geo_in_product} that $N_\infty$, as a subset of $\mathbb{R}^k\times X_\infty$, is a product $\mathbb{R}^k\times Z_\infty$, where $Z_\infty=N_\infty\cap (\{0\}\times X_\infty)$. Finally, note that $Z_\infty$ is geodesic in $X_\infty$ and
$$d_{GH}((Y_i,y_i,N_i),(\mathbb{R}^k\times X_\infty,(0,x_\infty),\mathbb{R}^k\times Z_\infty))\to 0.$$
\end{proof}
\section{Almost splitting of limit orbits under all scales}\label{sec_split}
We study the quantitative splitting of the orbit $Gy$ for any $(Y,y,G)\in\Omega(\widetilde{M},\Gamma)$. Again, we assume that $E(M,p)\le\epsilon$ unless otherwise noted.
\begin{lem}\label{non_cpt_orbit}
Let $(M,p)$ be an open $n$-manifold with $\mathrm{Ric}\ge0$, an infinite fundamental group, and $E(M,p)<1/2$. Then the orbit $Gy$ is non-compact for any $(Y,y,G)\in\Omega(\widetilde{M},\Gamma)$.
\end{lem}
\begin{proof}
Suppose that $(Y,y,G)\in\Omega(\widetilde{M},\Gamma)$ has a compact orbit $Gy$. Let $r_i\to\infty$ such that
$$(r_i^{-1}\widetilde{M},\tilde{p},\Gamma)\overset{GH}\longrightarrow (Y,y,G).$$
Let $D>0$ such that $Gy\subseteq B_D(y)$.
Since $\Gamma$ is infinite, we can find a sequence $\gamma_i\in \Gamma$ such that $d(\gamma_i\tilde{p},\tilde{p})\ge 10r_iD$.
Because $E(M,p)<1/2$, by \cite[Lemma 2.1]{Pan_escape}, $\Gamma$ is finitely generated. Let $S$ be a finite generating set and let $R=\max_{\gamma\in S} d(\gamma\tilde{p},\tilde{p})$. For each $\gamma_i$, we write
$\gamma_i=\prod_{k=1}^{N_i} g_{i,k}$, where $g_{i,k}\in S$. Let $h_{i,j}=\prod_{k=1}^j g_{i,k}$ for each $j$. Then $\{h_{i,j}\tilde{p}\}_{j=1}^{N_i}$ is a series of orbit points starting from $\tilde{p}$ and ending at $\gamma_i\tilde{p}$ such that
$$d(h_{i,j}\tilde{p},h_{i,j+1}\tilde{p})\le R$$
for each $j$. In particular, for each $i$ we can find some $h_{i,j(i)}\tilde{p}$ such that
$$2r_iD-R\le d(h_{i,j(i)}\tilde{p},\tilde{p})\le 2r_iD+R.$$
Then
$$(r_i^{-1}\widetilde{M},\tilde{p},\Gamma,h_{i,j(i)})\overset{GH}\longrightarrow (Y,y,G,h)$$
with $h\in G$ and $d(hy,y)=2D$. This contradicts the hypothesis that $Gy$ is contained in $B_D(y)$.
\end{proof}
With Lemma \ref{non_cpt_orbit}, we first show that the asymptotic cone $Y$ of $\widetilde{M}$ almost splits off a line when $E(M,p)\le\epsilon$.
\begin{lem}\label{noncpt_space_split}
Let $(Y,y,G)\in\Omega(\widetilde{M},\Gamma)$. Then there is a length space $X$ such that
$$d_{GH}((Y,y),(\mathbb{R}\times X, (0,x)))\le \Phi(\epsilon|n).$$
\end{lem}
\begin{proof}
For any small $\eta>0$, we will determine an $\epsilon>0$ so that $$d_{GH}((Y,y),(\mathbb{R}\times X, (0,x)))\le \eta$$
when $E(M,p)\le \epsilon$.
Since $Gy$ is non-compact from Lemma \ref{non_cpt_orbit}, we can choose a point $gy$ so that $d(y,gy)=L\gg R$, where $L$ and $R$ will be determined later. We know that $Gy$ is $\delta_\epsilon$-geodesic by Proposition \ref{limit_delta_geodesic}. This provides a series of points $z_0=y,...,z_j,...,z_k=gy$ in $Gy$ such that
$$\sum_{j=1}^{k} d(z_{j-1},z_{j})\le (1+\delta_\epsilon)L,\quad d(z_{j-1},z_{j})\le \delta_\epsilon L \text{ for all }j.$$
We choose a point $w\in \{z_j\}_j$ that is roughly in the middle between $y$ and $gy$; more precisely, we choose $w$ such that
$$(1/2-\delta_\epsilon)L\le d(y,w)\le (1/2+\delta_\epsilon)L.$$
We write this $w=hy$ for some $h\in G$. Then for $y_-=h^{-1}y$ and $y_+=h^{-1}gy$, we have
$$e_{y_-,y_+}(y)\le 2\delta_\epsilon L.$$
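Indeed, $d(y,y_-)=d(hy,y)=d(w,y)$ and $d(y,y_+)=d(hy,gy)=d(w,gy)$, while $d(y_-,y_+)=d(y,gy)=L$; summing the chain on the two sides of $w$ gives $d(w,y)+d(w,gy)\le\sum_{j} d(z_{j-1},z_j)\le(1+\delta_\epsilon)L$, hence
$$e_{y_-,y_+}(y)=d(w,y)+d(w,gy)-L\le\delta_\epsilon L\le 2\delta_\epsilon L.$$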
By quantitative splitting Theorem \ref{thm_CC_split}, we see that
$$d_{GH}(B_R(y),B_R(0,x))\le \Phi(\delta_\epsilon L,L^{-1}|n,R),$$
where $(0,x)\in \mathbb{R}\times X$ for some length space $X$.
Now we set $R=2\eta^{-1}$, $L=1/\sqrt{\delta_\epsilon}$, and $\epsilon>0$ sufficiently small so that
$$\Phi(\delta_\epsilon L,L^{-1}|n,R)=\Phi(\epsilon|n,2\eta^{-1})\le\eta.$$
With this $\epsilon$, we have
$$d_{GH}((Y,y),(\mathbb{R}\times X,(0,x)))\le\eta.$$
\end{proof}
\begin{lem}\label{orbit_close_to_segment}
In the context of Theorem \ref{thm_CC_split}, let $S=\{z_j\}_{j=0}^k$ be a series of points in $Y$ such that\\
(1) $z_0=y_-$, $z_k=y_+$, and $y\in S$;\\
(2) $\sum_{j=1}^{k} d(z_{j-1},z_{j})\le (1+\delta)L$;\\
(3) $d(z_{j-1},z_{j})\le \delta L$ for all $j=1,...,k$.\\
Then
$$d_{GH}((B_R(y),y,S\cap B_R(y)),(B_R (0,x),(0,x),[-R,R]\times \{x\}))\le \Phi(L^{-1},\delta L|n,R).$$
\end{lem}
\begin{proof}
It suffices to prove the statement for manifolds.
Let $$F:B_R(y)\to \mathbb{R}\times h^{-1}(h(y)),\quad z\mapsto (h(z)-h(y),w)$$ be the Gromov-Hausdorff approximation mentioned in Section \ref{sec_pre}. We first show that
$$d(w_j,y)\le \Phi(\delta,L^{-1}|n,R)$$
for any point $z_j\in B_R(y)\cap S$, where $w_j\in h^{-1}(h(y))$ is a closest point from $z_j$ to $h^{-1}(h(y))$. Since $y\in S$, we write $y=z_J$ for some $J=1,...,k$. We consider the case that $j<J$; the case $j>J$ would be similar. By Proposition \ref{split_estimates}(1,2) and condition (2), we have
\begin{align*}
d(z_j,w_j) &= h(z_j)-h(y)\pm \Phi(\delta,L^{-1}|n,R)\\
&= b(z_j)-b(y)\pm \Phi(\delta,L^{-1}|n,R)\\
&= d(y_+,z_j)-d(y_+,y)\pm \Phi(\delta,L^{-1}|n,R)\\
&= \sum_{i=j}^{k-1}d(z_i,z_{i+1})-\sum_{i=J}^{k-1}d(z_i,z_{i+1})\pm \Phi(\delta,L^{-1}|n,R)\\
&= \sum_{i=j}^{J-1} d(z_i,z_{i+1})\pm \Phi(\delta,L^{-1}|n,R)\\
&= d(z_j,y)\pm \Phi(\delta,L^{-1}|n,R).
\end{align*}
Together with Proposition \ref{split_estimates}(3), we see that
$$d(w_j,y)\le \Phi(\delta,L^{-1}|n,R).$$
Let $(t,x)\in [-R,R]\times \{x\}$. By condition (3), we can choose $z_j\in B_R(y)\cap S$ such that
\begin{align*}
h(z_j)-h(y)&= \pm d(z_j,y)\pm \Phi(\delta,L^{-1}|n,R)\\
&\in[t-\delta L-\Phi(\delta,L^{-1}|n,R),t+\delta L+\Phi(\delta,L^{-1}|n,R)].
\end{align*}
Then this $z_j$ satisfies that
\begin{align*}
d(F(z_j),(t,x))^2&=|h(z_j)-h(y)-t|^2+d(w_j,y)^2\\
&\le \Phi(L^{-1},\delta L|n,R)^2+\Phi(\delta,L^{-1}|n,R)^2.
\end{align*}
This shows that
$$[-R,R]\times \{x\}\subseteq B_{\Phi(L^{-1},\delta L|n,R)}\left(F(B_R(y)\cap S)\right)$$
and the result follows.
\end{proof}
With Lemmas \ref{subset_almost_product} and \ref{orbit_close_to_segment}, the set $\{z_j\}\subseteq Gy$ that we used in Lemma \ref{noncpt_space_split} shows the almost splitting of $Gy$.
\begin{prop}\label{noncpt_orbit_split}
Let $(Y,y,G)\in\Omega(\widetilde{M},\Gamma)$. Then there are a length space $X$ and a closed geodesic subset $Z\subseteq X$ such that
$$d_{GH}((Y,y,Gy),(\mathbb{R}\times X, (0,x),\mathbb{R}\times Z))\le \Phi(\epsilon|n).$$
\end{prop}
\begin{proof}
Let $\eta>0$. We continue to use the notations in the proof of Lemma \ref{noncpt_space_split}, from which we see that the points $y_-$ and $y_+$ with $$e_{y_-,y_+}(y)\le2\delta_\epsilon L=2\sqrt{\delta_\epsilon}$$ provide a $\Phi(\epsilon|n,2\eta^{-1})$-approximation between $(Y,y)$ and $(\mathbb{R}\times X,(0,x))$. Moreover, we also have a series of points $S=\{z'_j=h^{-1}z_j\}_{j=0}^{k}\subseteq Gy$ with\\
(1) $z'_0=y_-$, $z'_k=y_+$, and $z'_j=y$ for some $j$;\\
(2) $\sum_{j=1}^{k} d(z'_{j-1},z'_{j})\le (1+\delta_\epsilon)L$;\\
(3) $d(z'_{j-1},z'_{j})\le \delta_\epsilon L$ for all $j=1,...,k$.\\
It follows from Lemma \ref{orbit_close_to_segment} that
$$d_{GH}((B_R(y),y,S\cap B_R(y)),(B_R (0,x),(0,x),[-R,R]\times \{x\}))\le \Phi(\epsilon|n,R).$$
Recall that $R$ is chosen as $2\eta^{-1}$, then for $\epsilon$ small, we have
$$d_{GH}((Y,y,S),(\mathbb{R}\times X,(0,x),\mathbb{R}\times\{x\}))\le \Phi(\epsilon|n,2\eta^{-1})\le\eta.$$
Then the result follows from Lemma \ref{subset_almost_product}.
\end{proof}
When $E(M,p)=0$, \cite[Lemma 3.2]{Pan_escape} shows that the orbit $Gy$ is a metric product $\mathbb{R}^k\times Z$, where $Z$ is compact. With Proposition \ref{noncpt_orbit_split}, where we showed that $Gy$ is $\Phi(\epsilon|n)$-close to a metric product $\mathbb{R}\times Z$, we wish to continue the splitting process if $Z$ is non-compact.
As mentioned in the introduction, the main issue here is that (pointed) Gromov-Hausdorff closeness in general cannot measure whether a subset is compact or not. In the context of
$$d_{GH}((Y,y,Gy),(\mathbb{R}\times X, (0,x),\mathbb{R}\times Z))\le \Phi(\epsilon|n),$$
if $\mathrm{diam}(Z)\gg \Phi(\epsilon|n)^{-1}$, then whether $Z$ is compact or not does not make a difference.
To overcome this, for each $(Y,y,G)\in\Omega(\widetilde{M},\Gamma)$, we shall consider a corresponding family of spaces $\{(sY,y,G)\}_{s>0}\subseteq \Omega(\widetilde{M},\Gamma)$.
Our main goal for the rest of this section is the following result.
\begin{prop}\label{split_all_scales}
Let $(Y,y,G)\in\Omega(\widetilde{M},\Gamma)$. Then there is an integer $k$ such that for all $s>0$,
$$d_{GH}((sY,y,Gy),(\mathbb{R}^k\times X_s,(0,x_s),\mathbb{R}^k\times Z_s))\le \Phi(\epsilon|n),$$
where $(X_s,x_s)$ is a length space depending on $sY$, and $Z_s$ is a closed geodesic subset of $X_s$. Moreover, one of the following holds:\\
(1) $Z_s$ is a single point for all $s>0$;\\
(2) $\mathrm{diam}(Z_s)\in[0.9,1.1]$ for some $s>0$.
\end{prop}
In the next section, we will further rule out case (2) above.
\begin{lem}\label{large_diam_split}
Let $(Y,y)\in\mathcal{M}(n,0)$ and let $G$ be a closed subgroup of $\mathrm{Isom}(Y)$. Suppose that\\
(1) $Gy$ is $\delta$-geodesic in $Y$,\\
(2) $d_{GH}((Y,y,Gy),(\mathbb{R}^k\times X,(0,x),\mathbb{R}^k\times Z))\le\delta,$ where $X$ is a length space and $Z$ is a closed subset in $X$.\\
(3) $\mathrm{diam}(Z)\ge L$.\\
Then
$$d_{GH}((Y,y,Gy),(\mathbb{R}^{k+1}\times X',(0,x'),\mathbb{R}^{k+1}\times Z'))\le \Phi(\delta,L^{-1}|n)$$
for some pointed length space $(X',x')$ and some closed geodesic subset $Z'\subseteq X'$.
\end{lem}
\begin{proof}
Suppose that there is a contradicting sequence $\{(Y_i,y_i,G_i)\}_i$ with\\
(1) $G_iy_i$ is $\delta_i$-geodesic in $Y_i$, where $\delta_i\to 0$;\\
(2) $d_{GH}((Y_i,y_i,G_i y_i),(\mathbb{R}^k\times X_i,(0,x_i),\mathbb{R}^k\times Z_i))\le\delta_i$;\\
(3) $\mathrm{diam}(Z_i)\to\infty$.\\
Passing to a subsequence,
$$(Y_i,y_i,G_i)\overset{GH}\longrightarrow (Y_\infty,y_\infty,G_\infty)$$
such that
$$(Y_\infty,y_\infty,G_\infty y_\infty)=(\mathbb{R}^k\times X_\infty,(0,x_\infty),\mathbb{R}^k\times Z_\infty),$$
where $X_\infty$ is a length space. By Lemma \ref{to_geodesic} and hypotheses (1) and (3), $Z_\infty$ is a non-compact closed geodesic subset of $X_\infty$. We show that $\mathbb{R}^k\times X_\infty$ indeed splits off an $\mathbb{R}^{k+1}$-factor; it suffices to produce a line in the $X_\infty$-factor. For each $j$, let $z_j\in \{0\}\times Z_\infty$ be such that $d(z_j,y_\infty)\ge j$ and let $\gamma_j$ be a minimal geodesic from $y_\infty$ to $z_j$ that lies in $\{0\}\times Z_\infty\subseteq G_\infty y_\infty$. We write the midpoint of $\gamma_j$ as $h_j\cdot y_\infty$ for some $h_j\in G_\infty$. Then $\{h_j^{-1}\gamma_j\}$ is a sequence of minimal geodesics with midpoint $y_\infty$ and length $\ge j\to\infty$, and it sub-converges to a line in $Y_\infty=\mathbb{R}^k\times X_\infty$. Note that each $h_j$ maps $\{0\}\times X_\infty$ to itself; thus this limit line lies in the $X_\infty$-factor. It follows from the Cheeger-Colding splitting theorem that $\mathbb{R}^k\times X_\infty$ is isometric to $\mathbb{R}^{k+1}\times X'$.
It remains to show that the orbit $G_\infty y_\infty=\mathbb{R}^k\times Z_\infty$ is indeed $\mathbb{R}^{k+1}\times Z'$ for some $Z'\subseteq X'$. Since the minimal geodesics $h_j^{-1}\gamma_j$ lie in $\{0\}\times Z_\infty$, their limit line is contained in $\{0\}\times Z_\infty$ as well. Combined with the fact that $\mathbb{R}^k\times Z_\infty$ is geodesic in $\mathbb{R}^k\times X_\infty=\mathbb{R}^{k+1}\times X'$, we see that $\mathbb{R}^{k}\times Z_\infty$ contains an $\mathbb{R}^{k+1}$-slice. Applying Lemma \ref{geo_in_product}, we obtain the desired conclusion.
\end{proof}
By Proposition \ref{noncpt_orbit_split}, for any $s>0$ and any $(Y,y,G)\in\Omega(\widetilde{M},\Gamma)$, we have
$$d_{GH}((sY,y,Gy),(\mathbb{R}\times X_s, (0,x_s),\mathbb{R}\times Z_s))\le \Phi(\epsilon|n)$$
for some length space $X_s$ and a closed geodesic subset $Z_s\subseteq X_s$. If $Z_s$ has large or infinite diameter for all $s>0$, then we shall apply the above Lemma \ref{large_diam_split} to split off an $\mathbb{R}^2$-factor for all $(sY,y,Gy)$.
\begin{lem}\label{small_large_cpt}
Let $(Y,y)\in\mathcal{M}(n,0)$ and let $G$ be a closed subgroup of $\mathrm{Isom}(Y)$. Let $\delta\in (0,10^{-4})$. Suppose that for all $s>0$, we have
$$d_{GH}((sY,y,Gy),(\mathbb{R}^k\times X_s,(0,x_s),\mathbb{R}^k\times Z_s))\le\delta,$$
where $(X_s,x_s)$ is a length space and $Z_s$ is a closed geodesic subset of $X_s$.
Then for $L=\delta^{-1/2}$, one of the following holds:\\
(1) $\mathrm{diam}(Z_s)>L$ for all $s>0$;\\
(2) $\mathrm{diam}(Z_s)<L^{-1}$ for all $s>0$;\\
(3) there is $s>0$ such that $\mathrm{diam}(Z_s)\in [0.9,1.1]$, given that $\delta$ is sufficiently small.
\end{lem}
\begin{proof}
Suppose that $(Y,y,G)$ does not belong to cases (1) and (2) listed in the statement. This means that there exists $s>0$ such that $(sY,y,G)$ satisfies $d=\mathrm{diam}(Z_s)\in[L^{-1},L].$
We claim that $(d^{-1}sY,y,G)$ has $\mathrm{diam}(Z_{d^{-1}s})\in [0.9,1.1]$. Let $$F:sY\to \mathbb{R}^k\times X_s$$ be a $\delta$-approximation between $(sY,y,Gy)$ and $(\mathbb{R}^k\times X_s,(0,x_s),\mathbb{R}^k\times Z_s)$. We consider its scaling $$d^{-1}F:d^{-1}sY\to \mathbb{R}^k\times (d^{-1}X_s).$$ When $d^{-1}\ge 1$, $d^{-1}F$ is a $d^{-1}\delta$-approximation between $(d^{-1}sY,y,Gy)$ and $(\mathbb{R}^k\times (d^{-1}X_s),(0,x_s),\mathbb{R}^k\times (d^{-1}Z_s))$; when $d^{-1}\le 1$, $d^{-1}F$ is a $d\delta$-approximation between the above two spaces. Since $d\in[L^{-1},L]= [\delta^{1/2},\delta^{-1/2}]$, we see that $d^{-1}F$ shows
$$d_{GH}((d^{-1}sY,y,Gy),(\mathbb{R}^k\times (d^{-1}X_s),(0,x_s),\mathbb{R}^k\times (d^{-1}Z_s)))\le \delta^{1/2},$$
where $d^{-1}Z_s$ has diameter $1$. On the other hand, we have assumption
$$d_{GH}((d^{-1}sY,y,Gy),(\mathbb{R}^k\times X_{d^{-1}s}, (0,x_{d^{-1}s}),\mathbb{R}^k\times Z_{d^{-1}s}))\le \delta.$$
Therefore,
$$\mathrm{diam}(Z_{d^{-1}s})=\mathrm{diam}(d^{-1}Z_s)\pm \Phi(\delta)=1\pm \Phi(\delta).$$
\end{proof}
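For clarity, we record the elementary bound behind the case distinction in the proof above: since $d\in[\delta^{1/2},\delta^{-1/2}]$,
$$\max\{d,d^{-1}\}\cdot\delta\ \le\ \delta^{-1/2}\cdot\delta\ =\ \delta^{1/2},$$
so in both cases the rescaled map $d^{-1}F$ is a $\delta^{1/2}$-approximation.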
\begin{proof}[Proof of Proposition \ref{split_all_scales}]
To avoid confusion, we write $\Phi_0$ for the estimate in Lemma \ref{large_diam_split} and $\Phi_1$ for the one in Proposition \ref{noncpt_orbit_split}, respectively.
By Proposition \ref{noncpt_orbit_split}, $(Y,y,G)$ satisfies the conditions in Lemma \ref{small_large_cpt} with $k=1$ and $\delta=\Phi_1(\epsilon|n)$. If $(Y,y,G)$ belongs to case (3) of Lemma \ref{small_large_cpt}, then we are done. For case (2), we have
$$d_{GH}((sY,y,Gy),(\mathbb{R}\times X_s,(0,x_s),\mathbb{R}\times \{x_s\}))\le\Phi'_1(\epsilon|n)$$
with a slightly increased $\Phi'_1(\epsilon|n)$. Case (1) is where we shall continue the splitting process. It follows from Lemma \ref{large_diam_split} that for all $s>0$,
$$d_{GH}((sY,y,Gy),(\mathbb{R}^2\times X'_s,(0,x'_s),\mathbb{R}^2\times Z'_s))\le\Phi_0(\Phi_1,\Phi_1^{1/2}|n)=:\Phi_2(\epsilon|n).$$
Applying the above procedure repeatedly with Lemmas \ref{large_diam_split} and \ref{small_large_cpt}, we arrive at the desired result.
\end{proof}
\section{Limit orbits as almost Euclidean spaces}\label{sec_eu}
In this section, we first rule out case (2) in Proposition \ref{split_all_scales}; this shows that the orbits $Gy$ are almost Euclidean for all $(Y,y,G)\in\Omega(\widetilde{M},\Gamma)$ (Theorem \ref{almost_eu_orbit}). Then we prove Theorem A.
The proof of Theorem \ref{almost_eu_orbit} uses a critical rescaling argument, which is effective in proving uniformity among all spaces in $\Omega(\widetilde{M},\Gamma)$ (see \cite{Pan_eu,Pan_al_stable,Pan_escape}).
We introduce a notation for convenience.
\begin{defn}\label{def_approx}
Let $(Y,y,G)\in\Omega(\widetilde{M},\Gamma)$. In the context of Proposition \ref{split_all_scales}, we call $k$ the \textit{approximate Euclidean dimension} of $Gy$, written as $\mathrm{EuDim}_A(Gy)$; we also call $\mathrm{diam}(Z_1)$ an \textit{approximate width} of $Gy$, written as $\mathrm{Wid}_A(Gy)$.
\end{defn}
For a fixed space $(Y,y,G)\in\Omega(\widetilde{M},\Gamma)$, note that $\mathrm{EuDim}_A(Gy)$ is uniquely determined, while $\mathrm{Wid}_A(Gy)$ is only determined up to an error of $\Phi(\epsilon|n)$.
\begin{proof}[Proof of Theorem \ref{almost_eu_orbit}]
We show that for any $(Y,y,G)\in\Omega(\widetilde{M},\Gamma)$, there is an integer $k$ such that
$$d_{GH}((Y,y,Gy),(\mathbb{R}^k\times X,(0,x),\mathbb{R}^k\times \{x\}))\le \Phi(\epsilon|n).$$
If we can prove this, then by Proposition \ref{cpt_cnt}, $k$ must be uniform among all spaces in $\Omega(\widetilde{M},\Gamma)$.
There are two possibilities for $(Y,y,G)$, as listed in Proposition \ref{split_all_scales}. It suffices to rule out case (2), that is, that there is $s>0$ such that
$$d_{GH}((sY,y,Gy),(\mathbb{R}^k\times X_s,(0,x_s),\mathbb{R}^k\times Z_s))\le\Phi(\epsilon|n)$$
with $\mathrm{diam}(Z_s)\in [0.9,1.1]$. We also assume that $(Y,y,G)$ has the smallest $k=\mathrm{EuDim}_A(Gy)$ so that case (2) occurs; in other words, if a space $(W,w,H)\in\Omega(\widetilde{M},\Gamma)$ also belongs to case (2), then $\mathrm{EuDim}_A(Hw)\ge k$. Besides $(sY,y,G)$, we shall also consider its scaling $(10sY,y,G)\in \Omega(\widetilde{M},\Gamma)$. By Proposition \ref{split_all_scales} and the proof of Lemma \ref{small_large_cpt}, we have
$$d_{GH}((10sY,y,Gy),(\mathbb{R}^k\times X_{10s},(0,x_{10s}),\mathbb{R}^k\times Z_{10s}))\le\Phi(\epsilon|n)$$
with $\mathrm{diam}(Z_{10s})=10\mathrm{diam}(Z_s)\pm\Phi(\epsilon|n)\in [8,12]$.
For convenience, we now write
$$(Y_1,y_1,G_1):=(10sY,y,G),\quad (Y_2,y_2,G_2):=(sY,y,G);$$
correspondingly, we also write
\begin{align*}
(\mathbb{R}^k\times V_1,(0,v_1),\mathbb{R}^k\times U_1)&:=(\mathbb{R}^k\times X_{10s},(0,x_{10s}),\mathbb{R}^k\times Z_{10s}),\\
(\mathbb{R}^k\times V_2,(0,v_2),\mathbb{R}^k\times U_2)&:=(\mathbb{R}^k\times X_{s},(0,x_{s}),\mathbb{R}^k\times Z_{s}).
\end{align*}
Note that we have
$$\mathrm{diam}(U_1)\in [8,12],\quad \mathrm{diam}(U_2)\in [0.9,1.1].$$
Since both $(Y_1,y_1,G_1)$ and $(Y_2,y_2,G_2)$ are equivariant asymptotic cones of $(\widetilde{M},\Gamma)$, there are sequences $r_i,s_i\to\infty$ such that
$$(r_i^{-1}\widetilde{M},\tilde{p},\Gamma)\overset{GH}\longrightarrow (Y_1,y_1,G_1),\quad (s_i^{-1}\widetilde{M},\tilde{p},\Gamma)\overset{GH}\longrightarrow (Y_2,y_2,G_2).$$
After passing to a subsequence, we assume that
$$t_i:=s_i^{-1}/r_i^{-1}\to\infty.$$
Setting $(N_i,q_i,\Gamma_i)$ as $(r_i^{-1}\widetilde{M},\tilde{p},\Gamma)$, we have
$$(N_i,q_i,\Gamma_i)\overset{GH}\longrightarrow (Y_1,y_1,G_1),\quad (t_iN_i,q_i,\Gamma_i)\overset{GH}\longrightarrow (Y_2,y_2,G_2).$$
Next we choose an intermediate scaling sequence $l_i$ as follows. For each $i$, let
\begin{align*}
L_i=\{1\le l\le t_i\ |\ &d_{GH}((lN_i,q_i,\Gamma_i),(W,w,H))\le 10^{-3} \text{ for some space}\\
& (W,w,H)\in\Omega(\widetilde{M},\Gamma) \text{ such that $\mathrm{EuDim}_A(Hw)<k$, or}\\
& \mathrm{EuDim}_A(Hw)=k \text{ with } \mathrm{Wid}_A(Hw)\le 2 \}.
\end{align*}
Since $(Y_2,y_2,G_2)$ satisfies $\mathrm{EuDim}_A(G_2y_2)=k$ and $\mathrm{Wid}_A(G_2y_2)\le 1.1$, it is clear that $t_i\in L_i$ for all $i$ large. We choose $l_i\in L_i$ such that $\inf L_i\le l_i \le \inf L_i+1/i$.
\textbf{Claim 1:} $\liminf l_i>\Phi(\epsilon|n)^{-1/2}$, where $\Phi(\epsilon|n)$ is the constant in Proposition \ref{split_all_scales}. We argue by contradiction. Suppose that $l_i\to l\in [1,\Phi(\epsilon|n)^{-1/2}]$ for a subsequence. Then
$$(l_iN_i,q_i,\Gamma_i)\overset{GH}\longrightarrow (lY_1,y_1,G_1).$$
For each $i$, since $l_i\in L_i$, there is $(W_i,w_i,H_i)\in\Omega(\widetilde{M},\Gamma)$ such that
$$d_{GH}((l_iN_i,q_i,\Gamma_i),(W_i,w_i,H_i))\le 10^{-3};$$
moreover,
$$d_{GH}((W_i,w_i,H_i),(\mathbb{R}^{m_i}\times X_i,(0,x_i),\mathbb{R}^{m_i}\times Z_i))\le \Phi(\epsilon|n)$$
with $m_i\le k$, and $\mathrm{diam}(Z_i)\le 2$ if $m_i=k$.
Recall that
$$d_{GH}((Y_1,y_1,G_1\cdot y_1),(\mathbb{R}^k\times V_1,(0,v_1),\mathbb{R}^k\times U_1))\le\Phi(\epsilon|n);$$
let $F$ be a $\Phi(\epsilon|n)$-approximation between them. Then its scaling $lF$ shows that
$$d_{GH}((lY_1,y_1,G_1\cdot y_1),(\mathbb{R}^k\times lV_1,(0,v_1),\mathbb{R}^k\times lU_1))\le l\cdot \Phi(\epsilon|n)\le \Phi(\epsilon|n)^{1/2}.$$
Combining the above together, we see that
$$d_{GH}((\mathbb{R}^{m_i}\times X_i,(0,x_i),\mathbb{R}^{m_i}\times Z_i),(\mathbb{R}^k\times lV_1,(0,v_1),\mathbb{R}^k\times lU_1))\le 10^{-2}+\Phi'(\epsilon|n)$$
for $i$ large. If $m_i<k$, then by our choice of $k$, $\mathrm{diam}(Z_i)=0$ and thus the above estimate cannot hold. If $m_i=k$, then $\mathrm{diam}(Z_i)\le 2$ and $\mathrm{diam}(lU_1)\ge 8$ also lead to a contradiction. This proves Claim 1.
Next we consider the convergence under the critical rescaling $l_i$:
$$(l_iN_i,q_i,\Gamma_i)\overset{GH}\longrightarrow (Y',y',G')\in \Omega(\widetilde{M},\Gamma).$$
\textbf{Claim 2:} $\mathrm{EuDim}_A(G'y')\le k$; $\mathrm{Wid}_A(G'y')\le 3$ when $\mathrm{EuDim}_A(G'y')=k$. By Proposition \ref{split_all_scales}, for each $s>0$, we have
$$d_{GH}((sY',y',G'y'),(\mathbb{R}^{m'}\times X'_{s},(0,x'_s),\mathbb{R}^{m'}\times Z'_{s}))\le\Phi(\epsilon|n),$$
where $m'=\mathrm{EuDim}_A(G'y').$ As in the proof of Claim 1, for each $i$ we can find $(W_i,w_i,H_i)$ that is $10^{-3}$-close to $(l_iN_i,q_i,\Gamma_i)$, and $(\mathbb{R}^{m_i}\times X_i,(0,x_i),\mathbb{R}^{m_i}\times Z_i)$ that is $\Phi(\epsilon|n)$-close to $(W_i,w_i,H_i)$. This shows that
$$d_{GH}((\mathbb{R}^{m_i}\times X_i,(0,x_i),\mathbb{R}^{m_i}\times Z_i),(\mathbb{R}^{m'}\times X'_{1},(0,x'_1),\mathbb{R}^{m'}\times Z'_{1}))\le 10^{-2}+\Phi'(\epsilon|n)$$
for all $i$ large. Recall that either $m_i<k$ with $\mathrm{diam}(Z_i)=0$, or $m_i=k$ with $\mathrm{diam}(Z_i)\le 2$. We conclude that $m'<k$ or $m'=k$ with $\mathrm{diam}(Z'_1)\le 3$. This proves Claim 2.
To reach a final contradiction, we consider
$$(10^{-1}l_iN_i,q_i,\Gamma_i)\overset{GH}\longrightarrow (10^{-1}Y',y',G').$$
Note that the scaled-down limit $(10^{-1}Y',y',G')$ satisfies $\mathrm{EuDim}_A(10^{-1}G'y')<k$ with $\mathrm{Wid}_A(10^{-1}G'y')=0$, or $\mathrm{EuDim}_A(10^{-1}G'y')=k$ with $\mathrm{Wid}_A(10^{-1}G'y')\le 1$. In other words, $(10^{-1}Y',y',G')$ satisfies the condition described in the definition of $L_i$. Since $\liminf l_i>\Phi(\epsilon|n)^{-1/2}$ as shown in Claim 1, we have $10^{-1}l_i\in [1,t_i]$ for all $i$ large. It follows that $10^{-1}l_i\in L_i$ for all $i$ large, which contradicts our choice of $l_i\in L_i$ as $\inf L_i\le l_i\le \inf L_i+1/i$. This completes the proof.
\end{proof}
We show that the converse of Theorem \ref{almost_eu_orbit} also holds.
\begin{prop}\label{converse}
Let $(M,p)$ be an open manifold of $\mathrm{Ric}\ge 0$. Suppose that there is an integer $k$ such that for any $(Y,y,G)\in\Omega(\widetilde{M},\Gamma)$, we have
$$d_{GH}((Y,y,Gy),(\mathbb{R}^k\times X,(0,x),\mathbb{R}^k\times \{x\}))\le \eta,$$
where $(X,x)$ is a length space that depends on $(Y,y)$. Then $E(M,p)\le \Phi(\eta|n)$.
\end{prop}
We prove a lemma below before proving Proposition \ref{converse}.
\begin{lem}\label{geo_almost_prod}
Let $(Y,y)\in\mathcal{M}(n,0)$ and let $N$ be a closed subset of $Y$ containing $y$. Suppose that
$$d_{GH}((Y,y,N),(\mathbb{R}^k\times X,(0,x),\mathbb{R}^k\times \{x\}))\le \eta$$
for some length space $(X,x)$. Then for any point $p\in N$ with $d(p,y)\le 10$ and any minimal geodesic $\sigma$ from $y$ to $p$, $\sigma$ must be contained in the $\Phi(\eta|n)$-neighborhood of $N$.
\end{lem}
\begin{proof}
We argue by contradiction. Suppose that there are $\delta>0$ and a sequence of spaces $(Y_i,y_i,N_i)$ with
$$d_{GH}((Y_i,y_i,N_i),(\mathbb{R}^{k_i}\times X_i,(0,x_i),\mathbb{R}^{k_i}\times \{x_i\}))\to 0$$
but for some minimal geodesic $\sigma_i$ from $y_i$ to some point $p_i\in N_i$ with $d(y_i,p_i)\le 10$, $\sigma_i$ is not contained in the $\delta$-neighborhood of $N_i$. After passing to a subsequence, we have convergence
$$(Y_i,y_i,N_i)\overset{GH}\longrightarrow (Y_\infty,y_\infty,N_\infty)=(\mathbb{R}^k\times X_\infty,(0,x_\infty),\mathbb{R}^k\times \{x_\infty\}).$$
For this subsequence, we can also assume that $p_i\to p_\infty \in \mathbb{R}^k\times \{x_\infty\}$ and $\sigma_i$ converges to a minimal geodesic $\sigma_\infty$ from $(0,x_\infty)$ to $p_\infty$. By hypothesis, $\sigma_\infty$ is not contained in the $\delta/2$-neighborhood of $N_\infty$. On the other hand, as a segment in the metric product, $\sigma_\infty$ must be contained in $\mathbb{R}^k\times \{x_\infty\}=N_\infty$. A contradiction.
\end{proof}
\begin{proof}[Proof of Proposition \ref{converse}]
We write $\epsilon=E(M,p)$. Let $\gamma_i\in \pi_1(M,p)$ be a sequence with $r_i=d(\gamma_i\tilde{p},\tilde{p})\to\infty$ and $c_i$ be a sequence of representing geodesic loops based at $p$ such that
$$\epsilon_i:=\dfrac{d_H(p,c_i)}{r_i}\to \epsilon.$$
Let $\sigma_i$ be the lift of $c_i$ in $\widetilde{M}$ starting from $\tilde{p}$. Then by our choice $\sigma_i$ is not contained in $\pi^{-1}(B_{\epsilon_ir_i/2}(p))$, where $\pi:(\widetilde{M},\tilde{p})\to (M,p)$ is the covering map. For a convergent subsequence
\begin{center}
$\begin{CD}
(r^{-1}_i\widetilde{M},\tilde{p},\Gamma,\gamma_i) @>GH>>
(Y,y,G,g)\\
@VV\pi V @VV\pi V\\
(r^{-1}_iM,p) @>GH>> (Z=Y/G,z),
\end{CD}$
\end{center}
it is clear that $d(y,gy)=1$. We can also assume that $\sigma_i$ converges to a minimal geodesic $\sigma$ from $y$ to $gy$. We also know that $\sigma$ is not contained in $\pi^{-1}(B_{\epsilon/3}(z))$. On the other hand, by assumption and Lemma \ref{geo_almost_prod}, $\sigma$ should be contained in a $\Phi(\eta|n)$-neighborhood of $Gy$, that is, $\pi^{-1}(B_{\Phi(\eta|n)}(z))$. This shows that $\epsilon\le 3\Phi(\eta|n)$.
\end{proof}
Combining this with the previous results, we also obtain the converse of Proposition \ref{limit_delta_geodesic}.
\begin{cor}\label{cor_converse}
Let $(M,p)$ be an open manifold of $\mathrm{Ric}\ge 0$ with an infinite fundamental group. If the orbit $Gy$ is $\eta$-geodesic for all $(Y,y,G)\in\Omega(\widetilde{M},\Gamma)$, then $E(M,p)\le \Phi(\eta|n)$.
\end{cor}
\begin{proof}
From the proof of Proposition \ref{split_all_scales} and Theorem \ref{almost_eu_orbit}, we see that $Gy$ being $\eta$-geodesic for all $(Y,y,G)\in\Omega(\widetilde{M},\Gamma)$ implies that
$$d_{GH}((Y,y,Gy),(\mathbb{R}^k\times X,(0,x),\mathbb{R}^k\times \{x\}))\le \Phi(\eta|n),$$
where $(X,x)$ is a length space depending on $(Y,y)$. Together with Proposition \ref{converse}, the result follows.
\end{proof}
\begin{cor}
Let $(M,p)$ be an open manifold of $\mathrm{Ric}\ge 0$ with an infinite fundamental group. If $E(M,p)\le \epsilon$, then $E(M,q)\le \Phi(\epsilon|n)$ for all $q\in M$.
\end{cor}
\begin{proof}
We write $\Gamma_p=\pi_1(M,p)$ and $\Gamma_q=\pi_1(M,q)$ for convenience. Note that
$$\Gamma_p\cdot \tilde{p}=\pi^{-1}(p)=\Gamma_q\cdot \tilde{p},$$
where $\pi:\widetilde{M}\to M$ is the covering map. Let $r_i\to\infty$ be any sequence. With respect to the convergence
$$(r_i^{-1}\widetilde{M},\tilde{p},\Gamma_p)\overset{GH}\longrightarrow (Y,y,G),$$
$\Gamma_p\cdot \tilde{p}=\Gamma_q\cdot \tilde{p}$ converges to $Gy$. By Theorem \ref{almost_eu_orbit}, there is an integer $k$ and a length metric space $(X,x)$ such that
$$d_{GH}((Y,y,Gy),(\mathbb{R}^k\times X,(0,x),\mathbb{R}^k\times \{x\}))\le \Phi(\epsilon|n).$$
Under the scaling $r_i^{-1}$, $\Gamma_q\cdot \tilde{q}$ and $\Gamma_q\cdot \tilde{p}$ converge to the same limit. As a result, the conditions in Proposition \ref{converse} hold for $(M,q)$ with $\eta=\Phi(\epsilon|n)$. Applying Proposition \ref{converse}, we conclude that $E(M,q)\le \Phi'(\epsilon|n)$.
\end{proof}
We move on to prove Theorem A. We first show that any transitive nilpotent group action on an almost Euclidean orbit is by almost translations, which is a quantitative version of \cite[Lemma 3.10]{Pan_escape}.
\begin{lem}\label{almost_trans}
Let $(Y,y)\in \mathcal{M}(n,0)$ and $G$ be a closed subgroup of $\mathrm{Isom}(Y)$. Suppose that\\
(1) $d_{GH}((Y,y,Gy),(\mathbb{R}^k\times X,(0,x),\mathbb{R}^k\times\{x\}))\le \delta$ for some length space $(X,x)$,\\
(2) $G$ has a closed nilpotent subgroup $H$ of nilpotency length $\le n$ that acts transitively on $Gy$.\\
Then
$$d(h^2y,y)\ge (2-\Phi(\delta|n))\cdot d(hy,y)$$
for all $h\in H$ with $d(hy,y)\le 10$.
\end{lem}
\begin{proof}
Suppose that $(Y_i,y_i,G_i)$ is a sequence of spaces such that for each $i$,\\
(1) $d_{GH}((Y_i,y_i,G_i y_i),(\mathbb{R}^{k_i}\times X_i,(0,x_i),\mathbb{R}^{k_i}\times\{x_i\}))\le \delta_i\to 0$ for some length space $X_i$,\\
(2) $G_i$ has a closed nilpotent subgroup $H_i$ of nilpotency length $\le n$ acting transitively on $G_i\cdot y_i$.\\
Since $k_i\le n$, without loss of generality, we can assume that all $k_i$ are the same, denoted by $k$. Then passing to a subsequence, we obtain convergence
$$(Y_i,y_i,G_i,H_i)\overset{GH}\longrightarrow (Y_\infty,y_\infty,G_\infty,H_\infty).$$
It follows from (1) that $$(Y_\infty,y_\infty,G_\infty y_\infty)=(\mathbb{R}^k\times X_\infty,(0,x_\infty),\mathbb{R}^k\times\{x_\infty\}).$$
Passing (2) to the limit, we see that $H_\infty$ is a closed nilpotent group acting transitively on $G_\infty \cdot y_\infty=\mathbb{R}^k\times\{x_\infty\}$. By \cite[Lemma 3.10]{Pan_escape}, $H_\infty$ acts as translations on $G_\infty\cdot y_\infty$. In particular,
$$d(h_\infty^2 y_\infty,y_\infty)=2\cdot d(h_\infty y_\infty,y_\infty)$$
holds for all $h_\infty\in H_\infty$. This immediately implies the desired result.
\end{proof}
Theorem \ref{almost_eu_orbit} and Lemma \ref{almost_trans} restrict the $\Gamma$-action on $\widetilde{M}$ at large scales when $E(M,p)\le\epsilon$.
\begin{lem}\label{almost_trans_large}
Given $n$, there is $\epsilon(n)>0$ such that the following holds.
Let $(M,p)$ be an open manifold of $\mathrm{Ric}\ge 0$ and $E(M,p)\le\epsilon(n)$. Let $N$ be a nilpotent subgroup of $\Gamma$ of finite index and nilpotency length $\le n$. Then there is $R>0$, depending on $M$, such that
$$|\gamma^2|\ge 1.9\cdot |\gamma|$$
for all $\gamma\in N$ with $|\gamma|\ge R$.
\end{lem}
\begin{proof}
We argue by contradiction. Suppose that there is a sequence $\gamma_i\in N$ with $r_i=d(\gamma_i\tilde{p},\tilde{p})\to \infty$ and
$$d(\gamma_i^2\tilde{p},\tilde{p})< 1.9\cdot d(\gamma_i\tilde{p},\tilde{p}).$$
We consider the convergence
$$(r_i^{-1}\widetilde{M},\tilde{p},\Gamma,N,\gamma_i)\overset{GH}\longrightarrow(Y,y,G,H,h).$$
$H$ is a closed nilpotent subgroup of $G$ with finite index and nilpotency length $\le n$. By Corollary \ref{cor_cnt_orbit}, the orbit $Gy$ is connected. Hence $H$ acts transitively on $Gy$. Then it follows from Theorem \ref{almost_eu_orbit} and Lemma \ref{almost_trans} that
$$d(h^2y,y)\ge (2-\Phi(\epsilon|n))\cdot d(hy,y).$$
When $\epsilon$ is so small that $\Phi(\epsilon|n)\le 0.01$, we see that
$$d(\gamma_i^2\tilde{p},\tilde{p})\ge 1.95 \cdot d(\gamma_i\tilde{p},\tilde{p})$$
for all $i$ large, which is a contradiction to our assumption.
\end{proof}
\begin{proof}[Proof of Theorem A]
By \cite[Lemma 2.1]{Pan_escape}, $\Gamma$ is finitely generated. Then $\Gamma$ has a nilpotent subgroup $N$ of finite index with nilpotency length at most $n$ \cite{Mil,Gro_poly,KW}. It follows from the same argument in \cite[Theorem A]{Pan_escape} (also see \cite[Section 4]{Pan_al_stable}) that the conclusion in Lemma \ref{almost_trans_large} implies that $N$ has an abelian subgroup of finite index.
\end{proof}
\end{document}
\begin{document}
\title{Entanglement-assisted atomic clock beyond the projection noise
limit}
\author{Anne Louchet-Chauvet, J\"urgen Appel, Jelmer J.
Renema, Daniel Oblak, Niels Kj\ae rgaard\footnote{Present address: Danish Fundamental Metrology, Matematiktorvet 307, 2800 Kgs-Lyngby, Denmark}, Eugene S. Polzik}
\address{QUANTOP, Niels Bohr Institute, University of Copenhagen,
Blegdamsvej 17, 2100 København Ø, Denmark}
\ead{[email protected]}
\begin{abstract}
We use a quantum non-demolition measurement to generate a spin
squeezed state and to create entanglement in a cloud of $10^5$ cold
cesium atoms, and for the first time operate an atomic clock
improved by spin squeezing beyond the projection noise limit in a
proof-of-principle experiment. For a clock-interrogation time of
$\unit[10]{\mu s}$ the experiments show an improvement of
$\unit[1.1]{dB}$ in the signal-to-noise ratio, compared to the
atomic projection noise limit.
\end{abstract}
\section{Introduction}
Atomic projection noise, originating from the Heisenberg uncertainty
principle, is a fundamental limit to the precision of spectroscopic
measurements, when dealing with ensembles of independent atoms. This
limit has been approached, for example in atomic
clocks~\cite{santarelli1999,wilpers2002,Ludlow:2008_SrLatticeClock}.
Theoretical studies have shown that introducing quantum correlations
between the atoms can help overcome this limit and reach even better
precision~\cite{wineland1992,huelga1997,giovannetti04:_quant_enhan_measur,andre2004,meiser2008}.
Spin squeezing in a system of two ions has been shown to improve the
precision of Ramsey spectroscopy for frequency measurements
\cite{meyer01:_ion_spin_squeezing}. Furthermore, squeezed atomic
ensembles improve the sensitivity of
magnetometers~\cite{wasilewski:_magnetometry,koschorreck-2009:_magnetometry}.
In a previous publication~\cite{appel2009}, we have reported the
generation of quantum noise squeezing on the cesium clock transition
via quantum non-demolition (QND) measurements. By proposing an
entanglement-assisted Ramsey (EAR) method including a QND
measurement, we showed how this squeezing could help improve the
precision of atomic clocks. In this work, we describe the spin
squeezing experiment in more detail and implement the complete
EAR clock sequence. For the first time we demonstrate an
atomic microwave clock improved by spin squeezing. Decoherence
effects are measured and included in the analysis. The clock reported
here does not reach record precision for technical reasons; however,
the demonstrated approach is applicable to state-of-the-art
clocks, as indicated in~\cite{lodewyck:Nondestructivemeasurement}.
\section{Generation of a conditionally squeezed atomic state}
\subsection{Coherent and squeezed spin states}
An ensemble of $N_A$ identical 2-level atoms can be described as an
ensemble of pseudo-spin-$1/2$ particles. We define the collective
pseudo-spin vector $\hat{J}$ as the sum of all individual spins.
Traditionally, its $z$-component $J_z$ is defined by the population
difference $\Delta N$, such that: $J_z=\frac 1 2
(N_\uparrow-N_\downarrow)=\Delta N/2$. A coherent spin state (CSS) is
a product state (i.e. atoms are uncorrelated) where the spins of $N_A$ atoms are aligned in the same
direction, for example such that $J_x=N_A/2$, i.e. $\ket{\mathrm{CSS}} =
\bigotimes_{i=1}^{N_A} \frac{1}{\sqrt{2}}
\left(\ket{\uparrow}_i+\ket{\downarrow}_i\right)$. Then, the other
projections of $\hat{J}$ minimize the Heisenberg uncertainty relation:
$\var(J_z)\cdot \var(J_y) \geq \left<J_x\right>^2/4$ and
$\var(J_z)=\var(J_y)=N_A/4$. These quantum fluctuations, referred to as
\emph{CSS projection noise}, pose a fundamental limit to the precision
of the $J_z$ measurement~\cite{itano1993}. It is possible to reduce
the fluctuations of one of the spin components - for example $J_z$ -
to below the projection noise limit by introducing quantum
correlations between different atoms within the atomic ensemble. In
this case, the fluctuations on the conjugate observable - here $J_y$ -
increase according to the Heisenberg uncertainty relation. Such a
state is referred to as a spin squeezed state (SSS). Whether the
atoms exhibit non-classical correlations is determined by the criterion
\begin{equation}
\var(J_z)<\frac{\left<J\right>^2}{N_A}
\qquad
\Leftrightarrow
\qquad
\xi = \frac{\var(J_z)}{\left<J\right>^2}N_A < 1
\label{eq:wineland},
\end{equation}
where $\xi$ is called the squeezing parameter. Under this condition
(even for a general mixed state) the atoms are entangled whereby the signal-to-projection-noise ratio in spectroscopy and
metrology experiments is improved by a factor of $1/\xi$ in variance, or $1/\sqrt{\xi}$ in standard deviation~\cite{wineland1992}.
Equation~(\ref{eq:wineland}) will be referred to as the Wineland
criterion throughout this paper.
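As a consistency check that follows directly from the definitions above, a coherent spin state saturates the Wineland criterion:
\begin{equation*}
\xi_\mathrm{CSS}=\frac{\var(J_z)}{\left<J\right>^2}N_A=\frac{N_A/4}{(N_A/2)^2}\,N_A=1,
\end{equation*}
so any measured $\xi<1$ indeed reflects noise below the CSS projection noise.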
Spin squeezing can be produced, for example, by atomic
interactions~\cite{sorensenduan2001:bec_entanglement,esteve2008:bec_spinsqueezing},
by mapping the properties of squeezed light onto an atomic
ensemble~\cite{kuzmich97:_spin_squeez_ensem_atoms_illum_squeez_light,hald99:_spin_squeez_by_light,
appel2008:quantum_memory_squeezed,honda:storage_squeezed}, or by
non-destructive measurements on the atoms~\cite{grangier91:QND,
kuzmich98:_atomicQND,kuzmich00:_spinsqueez_continuous,
chaudhury06:_contin_nondem_measur_cs_clock_trans_pseud,SchleierSmith2008,
takano09:_spin_squeez_cold_atomic_ensem,julsgaard01:_entanglement}.
We follow the latter approach by performing a weak, non-destructive
measurement of the $J_z$ spin component. Any later measurement on
$J_z$ on the same ensemble will be partly correlated to the first
measurement outcome. Therefore, the outcome of a subsequent
$J_z$-measurement can be predicted to a precision better than the
CSS-projection noise. In other words, if $\phi_1$ and $\phi_2$ are
the outcomes of the first and second measurements, respectively, the
conditional variance $\var(\phi_2-\zeta \phi_1)$ is reduced to below
the variance of a single measurement $\var(\phi_1)=\var(\phi_2)$,
where $\zeta$ is the correlation strength
$\zeta\equiv\cov(\phi_1,\phi_2)/\var(\phi_1)$. If the QND measurement
does not reduce the length of the pseudo-spin vector $\langle J
\rangle$ too much, Eq.~(\ref{eq:wineland}) implies that the reduction
of the variance results in a metrologically relevant SSS.
\subsection{Preparation of the coherent spin state} \label{atomsprep}
The experimental sequence for the preparation of the coherent spin state
and the QND measurements is shown in Fig.~\ref{fig:seqsqueezing}.
Cesium atoms are first loaded from a background cesium vapor into a
standard magneto-optical trap (MOT) on the $D_2$-line, and are then
transferred into an elongated far off-resonant trap (FORT). The FORT is generated
by a Versadisk laser with a wavelength of \unit[1032]{nm} and a power
of $\unit[2.3]{W}$, which is focused to a $\unit[20]{\mu m}$ radius spot to
confine an elongated atomic sample.
After the loading of the FORT,
the MOT is switched off and a bias magnetic field is applied, defining
a quantization axis orthogonal to the trapping beam. The
$6S_{1/2}\ket{F=3, m_F=0}$ and $6S_{1/2}\ket{F=4, m_F=0}$ ground
levels are referred to as the \emph{clock levels}. We denote them as
$\ket{\downarrow}$ and $\ket{\uparrow}$, respectively. The cesium
atoms are then prepared in the clock level $\ket{\downarrow}$ by optical
pumping. Atoms remaining in states other than $\ket{\downarrow}$ due
to imperfect optical pumping are subsequently pushed out of the
trap, as described in~\cite{appel2009}. A resonant microwave pulse ($\pi/2$-pulse) is
used to put the atoms into $\ket{\mathrm{CSS}}$. We then perform successive QND
measurements of the atomic population difference $\Delta N$ by detecting the
state-dependent phase shift of probe light pulses with a Mach-Zehnder
interferometer as described in detail in section 2.3. Later on, we
optically pump the atoms into $F=4$ to measure the atom number
$N_A$. We recycle the remaining atoms for three subsequent
experiments, preparing them into a CSS, performing successive QND
measurements and finally measuring the atom number. After these four
experiments, all the atoms are blown away with laser light and we perform three series
of QND measurements with the empty interferometer to obtain a zero
phase shift reference measurement. This sequence is repeated several
thousand times with a cycle time of $\approx \unit[5]{s}$.
\begin{figure}
\caption{Experimental pulse sequence for the preparation of a coherent spin state and quantum non-demolition measurements.}
\label{fig:seqsqueezing}
\end{figure}
To ensure that the microwave does not address hyperfine transitions
other than the clock transition, the bias magnetic field $B$ is set so
that the Zeeman splitting $\Delta E_Z$ between adjacent magnetic
sublevels ($\unit[350]{kHz/G}$ to first order) coincides with one of the
zeroes of the microwave pulses' frequency spectrum: $\Delta
E_Z=\ell/\tau_{\pi/2}$, where $\tau_{\pi/2}$ is the microwave $\pi/2$
pulse duration, and $\ell$ is an integer. Therefore, for typical
durations of $\tau_{\pi/2}=\unit[7]{\mu s}$, we set the bias magnetic field to
$\unit[1.22]{G}$.
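As a numerical illustration (the integer $\ell$ is not specified above; $\ell=3$ is assumed here), the quoted values are consistent:
\begin{equation*}
\Delta E_Z=\frac{\ell}{\tau_{\pi/2}}=\frac{3}{\unit[7]{\mu s}}\approx\unit[429]{kHz},\qquad
B\approx\frac{\unit[429]{kHz}}{\unit[350]{kHz/G}}\approx\unit[1.22]{G}.
\end{equation*}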
\subsection{Dispersive population measurement}
\begin{figure}
\caption{Mach-Zehnder interferometer. The atoms are placed in one
arm of a Mach-Zehnder interferometer. The dipole trapping beam is
overlapped with the probe arm to form an elongated atomic cloud
along the propagation direction of the probe beam. Two probe beams
of identical linear polarization enter the interferometer via the
same input port and acquire phase shifts proportional to the
number of atoms in the clock states $N_\uparrow$ and
$N_\downarrow$, respectively. The bias magnetic field (B) is
aligned to the polarization of the probe beams.}
\label{fig:mzi}
\end{figure}
In order to measure an atomic squeezed state, we require a measurement
sensitivity that is sufficient to reveal the atomic projection noise
limit. The population in each clock level is measured via the phase
shift imprinted on a dual-color beam propagating through the atomic
cloud~\cite{saffman09:_spinsqueez_multicolor}. The dipole trap is
overlapped with one arm of a Mach-Zehnder interferometer (see
Fig.~\ref{fig:mzi}). A beam $P_\downarrow$ of one color off-resonantly
probes the $\ket{F=3}\rightarrow\ket{F'=2}$ transition, whereas a
second beam $P_\uparrow$ off-resonantly probes the
$\ket{F=4}\rightarrow\ket{F'=5}$ transition (see
Fig.~\ref{fig:levels}). Each color experiences a phase shift
proportional to the number of atoms in the ground state of the probed
transition: $\phi_\downarrow=\chi_\downarrow N_\downarrow$ and
$\phi_\uparrow=\chi_\uparrow
N_\uparrow$~\cite{windpassinger08:_nondes_probin_rabi_oscil_cesium}.
We carefully choose the probe detunings to ensure that the coupling
constants are equal: $\chi_\uparrow=\chi_\downarrow$. The two
probe beams emerge from one single-mode polarization maintaining
fiber, and their intensities are stabilized to be equal
$n_\uparrow=n_\downarrow$ to within $0.1\%$.
\begin{figure}
\caption{Cesium $D_2$-line level diagram}
\label{fig:levels}
\end{figure}
For each individual probe, the photocurrent difference between the
detectors at the two output ports of the interferometer reads as:
\begin{equation}
\Delta n_\uparrow=n_\uparrow \beta \cos(k_\uparrow \Delta L+\chi_\uparrow
N_\uparrow) \textrm{ and } \Delta n_\downarrow=n_\downarrow \beta \cos(k_\downarrow
\Delta L+\chi_\downarrow N_\downarrow),
\end{equation}
where $k_\uparrow,k_\downarrow$ are the wave vector lengths for
$P_\uparrow,P_\downarrow$, respectively, $\Delta L$ is the
interferometer path length difference and $\beta=\sqrt{13}$ is the ratio of the
field amplitudes in the reference- and probe-arm of the interferometer.
In the absence of atoms $N_\uparrow=N_\downarrow=0$; therefore the
smallest interferometer path length difference which leads to opposite
phase for the signals $\Delta n_\uparrow$ and $\Delta n_\downarrow$
is: $\Delta L_0 = \frac{\pi}{k_\uparrow - k_\downarrow}$. Since the
wavelengths of the two probe lasers are so close (to one part in
$40\,000$), one can assume that the fringes of each color are of
opposite phase in the neighborhood of several wavelengths around
$\Delta L_0$. The interferometer path length
difference
\bibnote{In \cite{appel2009} we used opposite input ports for the two probe colors and operated the interferometer in its white-light position $\Delta L=0$ to minimize sensitivity to differential probe frequency noise. The method described here allows us to feed light of both probe colors into the interferometer through one common single mode fiber. This eliminates a possible spatial mode mismatch, which leads to spatially inhomogeneous differential AC-Stark shifts across the atomic sample. The differential probe frequency is controlled tightly using an optical phase lock \cite{appel09:_pll}.}
is set to the value closest to
$\Delta L_0$ that also satisfies: $\Delta n_\uparrow=\Delta
n_\downarrow=0$. This path length difference therefore varies with the
expected atom number. To account for technical imperfections in the
balancing of the probe powers and frequencies we define the average
effective coupling strength $n \chi \equiv (n_\uparrow \chi_\uparrow +
n_\downarrow \chi_\downarrow) /2 $ and the balancing error $n \Delta
\chi \equiv (n_\uparrow \chi_\uparrow - n_\downarrow \chi_\downarrow)
/2$ as well as the average intensity $n=(n_\uparrow+n_\downarrow)/2$.
This way, the total photocurrent difference
$\Delta n=\Delta n_\uparrow+ \Delta n_\downarrow$ reads as:
\begin{equation} \Delta n=n_\uparrow \beta \sin (\chi_\uparrow N_\uparrow) -n_\downarrow \beta \sin (\chi_\downarrow
N_\downarrow) \simeq n \beta (\chi\, \Delta N + \Delta \chi \, N_A).
\end{equation}
We define the phase measurement outcome as:
\begin{equation} \phi \equiv \frac{\Delta n}{\beta n} =\frac{\delta n}{\beta n} + \chi\,
\Delta N + \Delta \chi \, N_A,
\end{equation}
where $\delta n$ denotes the total shot noise contribution from both
colors. The phase $\phi$ provides a measurement of $\Delta N$ with
added shot noise and classical noise. For $N_A$ atoms in a CSS, we
have $\var(\Delta N)=N_A$. The projection noise increases with the
atom number and using $\langle \Delta \chi \rangle =0, \langle \Delta
N \rangle =0$ we obtain:
\begin{equation} \var(\phi)=\frac{\beta^2 +1}{2 \beta^2} \, \frac{1}{n} + \chi^2 N_A + \var(\Delta \chi) \, {N_A}^2.
\label{eq:varphi}
\end{equation}
After each experiment we use the same dual-color probe beam to
determine the total atom number $N_A$. To that end we first optically
pump all atoms into $\ket{F=4,m_F=-1,0,+1}$. The phase
measurement outcome reads as $\phi=n\bar{\chi} N_A$, where
$\bar{\chi}$ is the effective coupling constant for the probe
$P_{\uparrow}$, when the atoms are distributed among different
magnetic sublevels. The similarity of the Clebsch-Gordan coefficients
for the $\ket{F=4,m_F}\rightarrow\ket{F'=5,m_F}$ transitions (with low
$|m_F|$) ensures that $\bar{\chi}\approx \chi$ which is confirmed by
experiments with a precision of better than $\unit[5]{\%}$.
\subsection{Projection noise measurements}
The two probe beams are generated by two extended-cavity diode lasers, which are phase-locked in order to minimize their relative frequency
noise~\cite{appel09:_pll}. Their detunings are given in
Fig.~\ref{fig:levels}.
\begin{figure}
\caption{Projection noise and spin squeezing. The first QND
measurement is performed with a $10~\mu$s bichromatic pulse
containing $6\cdot10^6$ photons in total. The second measurement
is comprised of two such pulses. Blue stars: variance of the
second measurement $\var(\phi_2)$, where the data are sorted by
atom number and grouped into 10 bins. Solid blue line: quadratic
fit to $\var(\phi_2)$. Green area: atomic projection noise of the
CSS. Red diamonds: conditional variance $\var(\phi_2-\zeta
\phi_1)$. Red line: reduced noise as predicted by the fits to the
noise data. The error bars correspond to the statistical
uncertainty given by the number of measurements. This data is
obtained by acquiring 1200 MOT loading cycles, i.e. 4800
experiments. Inset: metrologically relevant spin squeezing $\xi$
(as defined in Eq.~\ref{eq:wineland}).}
\label{fig:squeezing}
\end{figure}
In Fig.~\ref{fig:squeezing}, we analyze the variance of the atomic population
difference measurement as a function of the atom number.
As the data acquisition proceeds over several hours, it is a major
challenge to keep experimental parameters such as $\tau_{\pi/2}$
constant to much better than $1/\sqrt{N_A}$ so that the mean values of
the atomic population difference measurements would not drift by more
than their quantum projection noise. To eliminate the influence of
such slow drifts, we subtract the outcomes of measurements
performed on independent atomic ensembles recorded in successive MOT
cycles from each other. We then calculate the variances using this
differential data.
The correlated and uncorrelated parts of the noise variance are fitted
with second order polynomials. According to Eq. \ref{eq:varphi}, we
interpret the linear part of this fit as the CSS projection noise
contribution. We observe a negligible quadratic part which means that
classical noise sources like laser intensity and frequency
fluctuations, which cause noise in the effective coupling constant are
small. Achieving this linear noise scaling is a significant
experimental challenge (see \cite[Suppl.]{appel2009}), since the
effect on $\Delta N$ of various sources of classical noise must be
kept well below the level of $1/\sqrt{N_A}\simeq 3\cdot 10^{-3}$
between independent measurements, i.e. over a duration of
$\simeq\unit[5]{s}$.
\subsection{Conditional noise reduction}
Both measurements $\phi_1$ and $\phi_2$ are normally
distributed around zero with variances that have contributions from the
shot noise and the atomic projection noise:
\begin{equation} \var(\phi_1)=\frac{1}{n_1} + \chi^2 N_A, \
\var(\phi_2)=\frac{1}{n_2} + \chi^2 N_A.
\end{equation}
Since the two measurements are performed on the same atomic sample,
they are correlated: $\cov(\phi_1,\phi_2) = \chi^2 N_A$. The
conditional variance $\var(\phi_2-\zeta \phi_1)$ is minimal when
$\zeta =\frac{\cov(\phi_1,\phi_2)} {\var(\phi_1)}$:
\begin{equation} \var(\phi_2-\zeta \phi_1)=\frac{1}{n_2} +
\frac{1}{1+\kappa^2} \chi^2 N_A.
\end{equation}
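For completeness, this conditional variance follows from minimizing over $\zeta$; writing $\kappa^2\equiv n_1\chi^2 N_A$ (an identification consistent with the expressions above, though not spelled out explicitly there),
\begin{equation*}
\var(\phi_2-\zeta\phi_1)=\var(\phi_2)-\frac{\cov(\phi_1,\phi_2)^2}{\var(\phi_1)}
=\frac{1}{n_2}+\chi^2 N_A-\frac{(\chi^2 N_A)^2}{1/n_1+\chi^2 N_A}
=\frac{1}{n_2}+\frac{\chi^2 N_A}{1+\kappa^2}.
\end{equation*}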
We measure $\kappa^2=1.6$ for $N_A=1.2\cdot10^5$ atoms and we obtain a
reduction of the projection noise by a factor of $\frac{1}{1+\kappa^2}$, corresponding to $\unit[-4]{dB}$,
compared to the CSS projection noise, as indicated by the blue arrow
in Fig.~\ref{fig:squeezing}.
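As a quick numerical check, $1/(1+\kappa^2)=1/2.6\approx0.385$ and $10\log_{10}(0.385)\approx\unit[-4.1]{dB}$, consistent with the value quoted above.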
\subsection{Decoherence}
\label{sec:decoherence}
Dispersive coupling is inevitably accompanied by spontaneous photon
scattering, inducing atomic population redistribution among the ground
magnetic sublevels, as well as a reduction of coherence between the
clock levels. Due to the selection rules, the population
redistribution predominantly occurs within the Zeeman structure of the
hyperfine levels~\cite{saffman09:_spinsqueez_multicolor}: Atoms that
scatter a photon from the $P_\downarrow$ probe almost certainly end up
in the $\ket{F=3,m_F=-1,0,+1}$ sublevels again and atoms that
scatter a photon from the $P_\uparrow$ probe predominantly end up in
the $\ket{F=4,m_F=-1,0,+1}$ states (see Fig.~\ref{fig:levels}). More
importantly, the similarity of the Clebsch-Gordan coefficients that
describe coupling of the probe light to low $|m_F|$-sublevel states
ensures that the optical phase shift is almost unaffected by such a
population redistribution. This makes our dual-color measurement an
almost ideal QND-measurement as spontaneous scattering events
effectively do not change the outcome of a $J_z$ measurement, i.e. no
extra projection noise due to redistribution into the opposite hyperfine ground
states is added. On the other hand, the spontaneous photon scattering
still leads to a shortening of the mean collective spin vector $ \langle J
\rangle \rightarrow (1-\eta) \langle J \rangle $.
The photon-number dependence of this mechanism can be modeled as
$\eta=1-e^{\alpha n}$, where $n$ is the total number of photons in the
probe pulse. We measure the parameter $\alpha$ in a separate
experiment, by comparing the Ramsey fringe amplitudes with and without
a bichromatic QND pulse between the two microwave $\pi/2$-pulses, as
shown in Fig.~\ref{fig:decoherence}. We obtain a value of
$\alpha=-2.39\cdot 10^{-8}$ for this particular atomic cloud geometry.
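The quoted numbers are mutually consistent: with the $5.9\cdot10^6$ photons of the QND pulse in Fig.~\ref{fig:decoherence},
\begin{equation*}
\eta=1-e^{\alpha n}=1-\exp\!\left(-2.39\cdot10^{-8}\times5.9\cdot10^{6}\right)\approx1-e^{-0.141}\approx13\%,
\end{equation*}
in agreement with the measured $13.1\%$ reduction of the fringe height.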
\begin{figure}
\caption{Decoherence measurement via Ramsey fringe reduction. (a)
Blue points: full Ramsey fringe. Red points: reduced fringe when a
QND-pulse containing $5.9\cdot 10^6$ photons is inserted in the
Ramsey sequence. In this experiment, the fringe height is reduced
by $13.1\%$, leading to a decoherence parameter
$\alpha=-2.39\times10^{-8}$.}
\label{fig:decoherence}
\end{figure}
Each probe color induces an inhomogeneous ac Stark shift on the atomic
levels, causing additional dephasing and decoherence. Nevertheless,
in our two-color probing scheme, the Stark shift on level
$\ket{\uparrow}$ caused by probe $P_{\uparrow}$ is compensated by an
identical Stark shift on level $\ket{\downarrow}$ caused by probe
$P_{\downarrow}$, provided the probe frequencies are set so that
$\chi_\uparrow=\chi_\downarrow$, and the probe powers are equal.
Hence, the two Ramsey fringes depicted on Fig.~\ref{fig:decoherence}
(with and without a light pulse) are in phase to better than
$1^\circ$.
\subsection{Squeezing and entanglement}
A QND measurement is characterized by the decoherence $\eta$ it
induces, and by the $\kappa^2$-coefficient that describes the
measurement strength. In the absence of classical noise, for the
dual-color QND the squeezing parameter as defined in
Eq.~\ref{eq:wineland} can be written as:
\begin{equation} \xi_\text{lin}=\frac{1}{(1-\eta)^2\, (1+\kappa^2)}.
\label{eq:etaeqn}
\end{equation}
Therefore, for a fixed detuning of the laser frequencies, the choice of the number of photons used in a QND
measurement is the result of a trade-off between the amount of
decoherence induced by the photons and the amount of information that
the $\phi_1$-measurement yields~\cite{windpassinger09:_squeez}.
Here, we use $n_1/2=3\cdot 10^6$ photons per probe color, inducing a
shortening of $\eta = 14\%$ of the collective spin $J$. From Eq.
\ref{eq:etaeqn} we expect an improvement of the
signal-to-projection-noise ratio by $1/\xi_\text{lin}=2.8$~dB compared
to a CSS. From the data presented in Fig.~\ref{fig:squeezing} we
measure $1/\xi=2.7$~dB, as indicated with a red arrow. This value
agrees well with the theory and with the results reported
in~\cite{appel2009}.
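Inserting the quoted numbers ($\eta=0.14$ and the measurement strength $\kappa^2=1.6$ obtained above) indeed gives
\begin{equation*}
1/\xi_\text{lin}=(1-0.14)^2\,(1+1.6)\approx0.74\times2.6\approx1.9,
\end{equation*}
i.e. about $\unit[2.8]{dB}$.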
The detuning of the probe light with respect to the atomic transitions
(see Fig.~\ref{fig:levels}) as well as the duration of the probe pulses
($\unit[10]{\mu s}$) have been chosen to minimize the influence of
classical noise sources such as electronic noise in the detector and
relative frequency- and intensity-noise of the two probe colors.
Since the probe pulses redistribute population only within the
hyperfine manifold (cf. sec. \ref{sec:decoherence}), the light shot
noise contribution in the second measurement variance can be reduced
by increasing the number of photons $n_2$ in the second measurement
(cf. Eq. \ref{eq:etaeqn}). In our experiment, the second measurement
is comprised of $\unit[10]{\mu s}$ long bichromatic pulses containing
a total of $n_2$ photons, separated by $10~\mu$s. As shown in the
inset of Fig.~\ref{fig:squeezing}, $\xi$ exponentially approaches
unity as more pulses are combined to form the second measurement. We
attribute this decay to the atomic motion within the dipole trap
during the second measurement: the atoms move in the transverse
profile of the probe beam, and are probed with a position dependent
weight corresponding to the local probe light intensity. Movement during the time
interval between the first and second probe pulse therefore induces a decay of
the correlations between the two measurements. We fit
the experimental data with:
\begin{equation} \xi(t_2)=1-B e^{-t_2/\tau_\mathrm{decay}},
\end{equation}
where $t_2$ is the total duration of the second measurement, and
obtain $\tau_\mathrm{decay}=\unit[670]{\mu s}$, which is of the same order of
magnitude as half the radial trap oscillation
period~\cite{oblak08:_echo_gauss}.
\section{Entanglement-assisted atomic clock}
\subsection{Entanglement-assisted Ramsey sequence}
We use the spin squeezing technique described in the previous section
to improve the precision of a Ramsey clock. The modified Ramsey
sequence is shown in Fig.~\ref{fig:clockramseysq}. As in a traditional
Ramsey sequence, all atoms are prepared in $\ket{\downarrow}$
initially. A near-resonant $\pi/2$-pulse with a phase of
$\vartheta_0=90^\circ$ and detuning $\Delta$ drives them into a CSS with macroscopic $J_x =
\frac{N_A}{2}$ (Fig.~\ref{fig:clockramseysq}.a). This state is then
squeezed along the $z$-direction by performing a weak QND measurement
of the pseudo-spin component $J_z$ (Fig.~\ref{fig:clockramseysq}.b).
This population-squeezed state is converted into a phase-squeezed
state by a second $\pi/2$-pulse with phase $\vartheta_1=0^\circ$ that
rotates the state around the $x$-axis. At this point, the Ramsey
interrogation time starts (Fig~\ref{fig:clockramseysq}.c.1). We
let the atoms evolve freely for a time $T$, during which they acquire
a phase proportional to the microwave detuning $\varphi=\Delta\, T$
(Fig~\ref{fig:clockramseysq}.c.2). To stop the clock, a third
$\pi/2$-pulse with phase $180^\circ$ is applied, converting the
accumulated atomic phase shift $\varphi$ into a population difference
(Fig~\ref{fig:clockramseysq}.c.3). Finally, we measure the $J_z$-spin
component a second time and from the two measurement outcomes we
compute the conditional variance $\var{(\phi_2 - \zeta \phi_1)}$.
\begin{figure}
\caption{Ramsey sequence including squeezing in the Bloch sphere
representation. The sequence starts with all atoms in
$\ket{\downarrow}$.}
\label{fig:clockramseysq}
\end{figure}
The first QND measurement of $J_z$ shown in
Fig.\ref{fig:clockramseysq}.b induces decoherence that reduces the
pseudo-spin vector length, and hence the Ramsey fringe height, so
that:
\begin{equation}
\langle J_z\rangle=(1-\eta)\frac{N_A}{2}\sin\varphi,
\label{eq:fringe}
\end{equation}
in contrast to the Ramsey fringe height in the absence of a QND
measurement: $\langle J_z\rangle = \frac{N_A}{2} \sin\varphi$.
Therefore, a QND measurement reduces the clock phase-sensitivity
(given by the Ramsey signal slope $d\phi/d\varphi$) by a factor
$1-\eta$. Tailoring the QND measurement strength induces squeezing,
and thus improves the signal-to-projection-noise ratio by $1/\xi$ in
variance as given in Eq.~\ref{eq:wineland}.
We emphasize that atoms that have undergone spontaneous emissions into
the $m_F\neq 0$ states do not take part in the clock rotations: Due to
the magnetic bias field the microwave radiation only couples the
$\ket{F=3,m_F=0}$ to the $\ket{F=4,m_F=0}$ state. These atoms are
however part of the entangled state as they carry some part of the
information of the QND measurement outcome: Only when these atoms are
measured together with the $m_F=0$ atoms in the second measurement
is there no additional partition noise due to spontaneous emission.
It is therefore important that in the absence of a phase shift
($\varphi=0$) the rotation operator corresponding to our microwave
clock pulse sequence commute with $J_z$, i.e. the population
difference $\Delta N$ is not changed.
\subsection{Low phase noise microwave source}
A projection noise limited atomic clock with $N_A=10^5$ atoms can
resolve atomic phase fluctuations as small as $\delta \varphi =
1/\sqrt{N_A} = \unit[3]{mrad}$. This poses strict requirements on the
phase noise of our microwave oscillator: during the interrogation time
its phase has to be much more stable than $1/\sqrt{N_A}$ so that our
clock performance is not limited by the oscillator. For an oscillator
with a white phase noise spectrum and for a clock interrogation time
of $T=\unit[10]{\mu s}$ this translates into a required relative phase
noise power density lower than $\unit[-100]{dBc/Hz}$ over a
\unit[100]{kHz} sideband next to the \unit[9.192]{GHz} carrier. This
is more than one order of magnitude smaller than the phase noise of
the Agilent HP8341B microwave synthesizer used in~\cite{appel2009}.
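A rough order-of-magnitude estimate, ignoring numerical factors of order unity and the distinction between single- and double-sideband noise, reproduces this requirement: demanding that a white phase-noise density $S_\varphi$ integrated over the bandwidth $B\approx1/T=\unit[100]{kHz}$ stay below the atomic phase variance $1/N_A=10^{-5}\,\mathrm{rad}^2$ gives
\begin{equation*}
S_\varphi\lesssim\frac{1/N_A}{B}=\frac{10^{-5}\,\mathrm{rad^2}}{\unit[100]{kHz}}=10^{-10}\,\mathrm{rad^2/Hz}\approx\unit[-100]{dB},
\end{equation*}
consistent with the $\unit[-100]{dBc/Hz}$ quoted above.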
A key step in the implementation of the clock sequence therefore was
the construction of a low-noise synthesizer chain: a \unit[9]{GHz}
dielectric resonator oscillator (DRO-9.000-FR, Poseidon Scientific
Instruments) with a phase noise of
$\unit[-132]{dBc/Hz}@\unit[100]{kHz}$ is phase locked to an
oven-controlled \unit[500]{MHz} quartz oscillator~(OCXO) (MV87, Morion
Inc.) within a bandwidth of \unit[15]{kHz}. The OCXO itself is slowly
$(\approx\unit[10]{Hz})$ locked to a GPS reference. A direct digital
synthesis (DDS) board (Analog Devices AD9910) is clocked from the
frequency doubled OCXO and produces a \unit[192]{MHz} signal which is
mixed onto the DRO output; a microwave cavity resonator with a
\unit[50]{MHz} width selects the upper sideband at \unit[9.192]{GHz}.
By using the DDS to shape the microwave pulses we control the
microwave pulse duration with a precise timing in \unit[4]{ns} steps
and control the microwave phase digitally with a $ {\unit[2 \pi\cdot 2^{-16}]{rad}} $
resolution, allowing for complex and precise pulse sequences.
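In particular, the digital phase step $2\pi\cdot2^{-16}\,\mathrm{rad}\approx\unit[0.1]{mrad}$ is far below the $\approx\unit[3]{mrad}$ projection-noise-limited phase resolution mentioned above, so the phase granularity of the synthesizer is not expected to limit the clock.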
\subsection{Ramsey fringe decay}
We first operate our clock with the sequence described in
Fig.~\ref{fig:clockramseysq}, leaving out the first QND measurement.
The phase $\vartheta_2$ of the last microwave pulse is varied between
$0^\circ$ and $360^\circ$. The microwave frequency is set to resonance
(to within $10$~Hz), so we expect $\langle J_z \rangle=\frac{N_A}{2}
\cos \vartheta_2$. In Fig.~\ref{fig:fringedecay}, we plot the
distribution of $J_z$-measurements normalized to $\langle J
\rangle=N_A/2$, for different interrogation times $T$ between
$\unit[10]{\mu s}$ and $\unit[350]{\mu s}$. The differential ac Stark
shift induced by the Gaussian dipole trap potential causes spatially
inhomogeneous dephasing during the whole interrogation time $T$. This
results in a decay of the Ramsey fringe contrast $h(T)$ (and therefore a decay of
the clock phase sensitivity) as $T$ increases, as shown in
Fig.~\ref{fig:fringedecay}.
\begin{figure}
\caption{(a) Ramsey fringe for
interrogation times ranging from $\unit[10]{\mu s}$ to $\unit[350]{\mu s}$.}
\label{fig:fringedecay}
\end{figure}
\subsection{Ramsey sequence with squeezing}
We implement the full EAR sequence described in
Fig.~\ref{fig:clockramseysq}, and keep the last microwave pulse phase
$\vartheta_2=180^\circ$. The noise contributions to the second QND
measurement are shown in Fig.~\ref{fig:clock_sq_10mus}. With such a
short interrogation time ($10~\mu$s), only a little classical noise is
added to the second measurement, compared to the first measurement
($\var (\phi_1)\approx\var (\phi_2)$). We attribute this classical
noise to microwave frequency noise and dipole trap intensity
fluctuations.
Using the first weak measurement $\phi_1$ to predict the outcome of
the second measurement $\phi_2$, we observe a metrologically
relevant noise reduction of $\xi=\unit[-1.1]{dB}$ for
$9\cdot10^4$~atoms. This gives us the signal-to-projection-noise ratio
improvement that we gained by running our clock with an entangled
atomic ensemble, compared to a standard clock operating with
unentangled atoms (see Fig.~\ref{fig:clock_sq_10mus}). The
experimentally observed squeezing is lower than the expected value
$\xi_\text{lin}=[ (1-\eta)^2(1+\kappa^2) ]^{-1}=-2.2~$dB because of
the extra classical noise in the second measurement $\phi_2$.
Note that the obtained squeezing is different from the one shown in
Fig.~\ref{fig:squeezing}. Apart from the fact that the atom number
used in the clock experiment is lower, the reduction of the spin
squeezing can be explained by two experimental effects: firstly, the
atomic motion in the trap results in a decay of the correlations
$\cov(\phi_1,\phi_2)$ during the longer time interval between the
two QND measurements $\phi_1,\phi_2$ in the clock sequence. Secondly,
in the Ramsey sequence, phase- and frequency fluctuations between
atoms and the microwave oscillator affect the $\phi_2$ measurement to
first order (cf. Eq.~\ref{eq:fringe}), whereas they only appear in
second order in the simple squeezing sequence shown in
Fig.\ref{fig:seqsqueezing}.
The first QND measurement $\phi_1$ also produces backaction
antisqueezing on the conjugate spin variable $J_x$. Classical
fluctuations in relative intensities of the two probe
colors lead to a fluctuating differential AC-Stark shift between the clock
levels. This adds additional noise to $J_x$, so that the produced SSS
is about \unit[10]{dB} more noisy in the (anti-squeezed) $J_x$
quadrature than a minimum uncertainty state. However, this excess
noise does not affect the clock precision since we only perform $J_z$
measurements.
\begin{figure}
\caption{Noise contributions to the second measurement in the clock
sequence, with an integration time of $T=10~\mu$s. The first
measurement contains $7.1\cdot 10^6$ photons and induces a Bloch
vector shortening of $13.5\%$.}
\label{fig:clock_sq_10mus}
\end{figure}
\subsection{Frequency noise measurements}
The Ramsey sequence makes it possible to use an atomic ensemble either
as a clock, where the frequency of an oscillator is locked to the
transition frequency between the clock states
$\nu=(E_\uparrow-E_\downarrow)/h$, or as a sensor to measure
perturbations of the energy difference between these levels. In our
experiment, the atomic ensemble is sensitive to fluctuations of the
frequency difference $\Delta$ between the cesium clock transition
and the microwave oscillator.
The atomic transition frequency of $\unit[9\, 192\, 630\, 500]{Hz}$ as
measured in our experiments has a systematic offset from the SI value
of $\unit[9\, 192\, 631\, 770]{Hz}$ mainly for two reasons: firstly,
the bias field of $\approx \unit[1]{G}$ causes a quadratic Zeeman
shift of $\unit[+427]{Hz/G^2}$ on the clock levels. Secondly, the
trapping light produces a differential ac~Stark shift of the order of
$\unit[-1700]{Hz}$ averaged over the atoms within the probe beam.
Noise in the phase evolution between the atoms and the microwave
oscillator leads to added classical noise in the $\phi_2$ measurement,
which scales as ${N_A}^2$.
The frequency noise components that an atomic clock can detect are
determined by the clock's cycle time $t_c$, and the interrogation time
$T$. Uncorrelated frequency fluctuations from cycle to cycle result
in additional noise in the $\phi_2$-measurement that scales as $T^2$
in variance. Phase noise between atoms and the microwave oscillator
during the interrogation time $T$, however, can follow a different
scaling behaviour. Since we subtract the outcomes of successive MOT
loading-cycles, our clock is not sensitive to any frequency noise
slower than $t_c=\unit[5]{s}$.
\subsection{Influence of classical noise on the clock performance}
The quantum noise limited frequency sensitivity of an atomic clock
improves with increasing the interrogation time. However, achieving a
higher frequency sensitivity makes our experiment more susceptible to
classical noise. In order to evaluate the limitations of our
proof-of-principle experiment, we vary the interrogation time $T$ from
$\unit[10]{\mu s}$ to $\unit[310]{\mu s}$ in steps of $20~\mu$s, while
keeping the atom number approximately constant at $N_A=9\cdot10^4$.
From the optical phase shift measurements $\phi_1$ and $\phi_2$, we
can infer the atomic phase evolution by normalizing to the Ramsey
fringe amplitude and define:
\begin{equation}
\label{eq:tilde}
\tilde \varphi_2 = \frac{\phi_2}{A(T)} \qquad
\tilde \varphi_{21} = \frac{\phi_2-\zeta \phi_1}{A(T)},
\end{equation}
where $A(T) = \chi\, (1-\eta)\, h(T)\, N_A$ and $h(T)$ is the Ramsey
fringe contrast shown in Fig.~\ref{fig:fringedecay}.
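A minimal numerical sketch of this normalization, assuming the raw phase-shift outcomes and the calibration constants entering $A(T)$ are available, could read:
\begin{verbatim}
import numpy as np

def normalized_phases(phi1, phi2, zeta, chi, eta, h_T, N_A):
    """Normalize the optical phase shifts phi1, phi2 to atomic phase units.

    Implements tilde(varphi)_2 = phi2 / A(T) and
    tilde(varphi)_21 = (phi2 - zeta * phi1) / A(T),
    with A(T) = chi * (1 - eta) * h(T) * N_A.
    """
    A_T = chi * (1.0 - eta) * h_T * N_A
    phi1, phi2 = np.asarray(phi1), np.asarray(phi2)
    return phi2 / A_T, (phi2 - zeta * phi1) / A_T
\end{verbatim}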
\begin{figure}
\caption{Contributions to the atomic phase noise as a function of
the interrogation time $T$. Blue points: $\var(\tilde\varphi_2)$.
Red points: $\var(\tilde\varphi_{21})$. Solid green line: quadratic fit to the classical noise contribution.}
\label{fig:phasenoise}
\end{figure}
In Figure~\ref{fig:phasenoise}, we plot the measured atomic phase
noise variance $\var(\tilde \varphi_2)$ and the conditionally reduced
noise of the atomic phase $\var(\tilde \varphi_{21})$ as a function of
the interrogation time $T$. Only for the first data point
(corresponding to $T=\unit[10]{\mu s}$) do we observe squeezing, i.e. the
measured noise in $\tilde \varphi_{21}$ is lower than that of a
traditionally operated atomic clock with an identical
signal-to-projection-noise ratio (i.e. with an atom number of
$N_A/\xi_\text{lin}$).
We observe a quadratic increase in the classical noise with $T$. This
suggests that the detuning is roughly constant over the interrogation
time, but varies from cycle to cycle. We attribute these fluctuations
to both intensity drifts in the dipole trap and variations of the
geometry of the atomic cloud. From the fit (solid green line) we infer
$\sqrt{\var(\Delta)} = \unit[7.5]{Hz}$ per cycle.
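Under the assumption that the dominant classical contribution is a detuning $\Delta$ that is constant within one cycle but fluctuates between cycles, adding a phase $2\pi\Delta T$, such a value can be extracted from a quadratic fit; a minimal sketch of this procedure (not the exact analysis behind the figure) is:
\begin{verbatim}
import numpy as np

def detuning_noise_from_fit(T, var_phi):
    """Fit var(phase) = (2*pi*T)^2 * var(Delta) + floor, return sqrt(var(Delta)).

    T       : interrogation times in seconds
    var_phi : measured variance of tilde(varphi)_21 in rad^2
    """
    slope, floor = np.polyfit(np.asarray(T) ** 2, np.asarray(var_phi), 1)
    var_delta = slope / (2.0 * np.pi) ** 2   # detuning variance in Hz^2
    return np.sqrt(var_delta), floor
\end{verbatim}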
\section{Conclusion}
In summary, we have demonstrated the first entanglement-assisted
Ramsey clock, following the proposal of~\cite{appel2009}. We have
implemented a modified Ramsey sequence in which the spin state is
squeezed by means of a quantum non-demolition measurement. Squeezing
results in a metrologically relevant reduction of the noise variance by
$\unit[-1.1]{dB}$ with $9\cdot10^4$ atoms, compared to a traditional
atomic clock at the projection noise limit with the same number of
atoms. The present experimental developments have been made possible
by the introduction of a low phase noise microwave source.
The dual-color QND method implemented in this work is readily
applicable to optical clocks as well. The idea of using quantum
non-demolition measurements to generate entanglement and to improve
the precision of a clock has drawn the attention of the
ultra-precise-frequency-standards
community~\cite{meiser2008,lodewyck:Nondestructivemeasurement}.
Besides, non-destructive measurements have been shown to drastically
improve the duty cycle in a clock experiment, thereby reducing the
Dick effect~\cite{lodewyck:Nondestructivemeasurement}.
We have shown that for the present proof-of-principle experiment, the
improvement of the clock precision due to squeezing does not extend to
interrogation times beyond $\unit[10]{\mu s}$. This is due to the
inhomogeneous broadening on the hyperfine transition induced by the
dipole trap, and the fact that large interrogation times make the
clock more sensitive to light shifts induced by cloud-geometry- and
intensity fluctuations in the dipole trap. These issues could be
circumvented by turning to systems for which there exists a magic
wavelength, such as
strontium~\cite{katori03:Sr_magic_wavelength,ye08:_quant_state_ligt_insensitive_traps}
or ytterbium~\cite{Barber08:Yb_magic_wavelength}. For long
interrogation times, the clock performance is also limited by the
atomic motion, which can be counteracted by introducing a transverse
optical lattice.
The figure of merit for the QND-induced spin squeezing is the resonant
optical depth of the atomic ensemble. In the present experiment the
optical depth was limited to a value $\approx 10$. Implementation of QND-based
spin squeezing in state-of-the-art clocks will require solutions where
high optical depth can be combined with low collisional broadening,
such as, for example, optical lattices placed in a low finesse optical
cavity.
\section*{References}
\end{document}
|
\begin{document}
\title{Real-time frequency estimation of a qubit without single-shot-readout}
\author{Inbar Zohar$^1$}
\email{[email protected]}
\author{Ben Haylock$^2$}
\author{Yoav Romach$^3$}
\author{Muhammad Junaid Arshad$^2$}
\author{Nir Halay$^3$}
\author{Niv Drucker$^3$}
\author{Rainer St\"ohr$^4$}
\author{Andrej Denisenko$^4$}
\author{Yonatan Cohen$^3$}
\author{Cristian Bonato$^2$}
\author{Amit Finkler$^1$}
\affiliation{$^1$Department of Chemical and Biological Physics, Weizmann Institute of Science, Rehovot 7610001, Israel}
\affiliation{$^2$Institute of Photonics and Quantum Sciences, SUPA, Heriot-Watt University, Edinburgh EH14 4AS, United Kingdom}
\affiliation{$^3$Quantum Machines, Tel-Aviv 6744332, Israel}
\affiliation{$^4$Third Institute of Physics, University of Stuttgart, Stuttgart 70569, Germany}
\date{\today}
\begin{abstract}
Quantum sensors can potentially achieve the Heisenberg limit of sensitivity over a large dynamic range using quantum algorithms. The adaptive phase estimation algorithm (PEA) is one example that was proven to achieve such high sensitivities with single-shot readout (SSR) sensors. However, using the adaptive PEA on a non-SSR sensor is not trivial due to the low contrast nature of the measurement. The standard approach to account for the averaged nature of the measurement in this PEA algorithm is to use a method based on `majority voting'. Although it is easy to implement, this method is more prone to mistakes due to noise in the measurement. To reduce these mistakes, a binomial distribution technique from a batch selection was recently shown theoretically to be superior, as all ranges of outcomes from an averaged measurement are considered. Here we apply, for the first time, real-time non-adaptive PEA on a non-SSR sensor with the binomial distribution approach. We compare the mean square error of the binomial distribution method to the majority-voting approach using the nitrogen-vacancy center in diamond at ambient conditions as a non-SSR sensor. Our results suggest that the binomial distribution approach achieves better accuracy with the same sensing times. To further shorten the sensing time, we propose an adaptive algorithm that controls the readout phase and, therefore, the measurement basis set. We show by numerical simulation that adding the adaptive protocol can further improve the accuracy in a future real-time experiment.
\end{abstract}
\maketitle
\section{Introduction}
Quantum sensing is a promising technology with many possible applications in fields such as renewable energy~\cite{Crawford2021}, condensed matter physics \cite{Sar2015, Gross2017, Dovzhenko2018, Jenkins2019}, biology \cite{Shi2015, Lovchinsky2016, Barry2016}, and chemistry \cite{SchaeferNolte2014, Finkler2021}. Different quantum systems are studied as quantum sensors \cite{Degen2017}, and depending on the systems' interactions with the environment it can be used to sense different physical quantities such as magnetic fields \cite{Jenkins2019}, electric fields \cite{Barry2016}, temperature \cite{Neumann2013}, strain \cite{Trusheim2016}, or pressure \cite{Ho2021}. One of the advantages of these sensors is the possibility of achieving high sensitivity while overcoming the standard quantum limit (SQL) and reaching the Heisenberg limit (HL) \cite{Higgins2007}.
Recent studies have pushed the sensitivity to the HL using entanglement \cite{Bollinger1996}, or quantum algorithms \cite{Vorobyov2021}. One algorithm, widely studied, is the phase estimation algorithm (PEA), suggested by Kitaev \cite{Kitaev1995}. This algorithm aims to estimate a phase $\phi$ that a quantum sensor accumulates due to its interaction with an unknown parameter in the environment, characterized by a frequency $f$. The sensor accumulates the phase at $K+1$ different sensing times ($\tau$) that grow exponentially, $\tau=2^k \tau_0$, where $k$ is an index going from $0$ to $K$. The shortest sensing time $\tau_0$ limits the dynamic range (DR) of the sensor to
\begin{equation}
\mathrm{DR} = [-\frac{1}{2\tau_0},\frac{1}{2\tau_0}]
\label{eq:DR}
\end{equation}
The longest sensing time is bounded from above by the dephasing time, $T_2^*$, of the sensor: $2^K \tau_0<T_2^*$ \cite{Said2011}.
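For concreteness, the set of sensing times and the resulting dynamic range can be generated as in the following sketch; the numbers in the example correspond to the $\tau_0=100$\,ns and $K=4$ used later in the experiment:
\begin{verbatim}
def sensing_times_and_range(tau0, K):
    """Exponentially growing sensing times tau_k = 2^k * tau0 (k = 0..K)
    and the dynamic range [-1/(2 tau0), +1/(2 tau0)]."""
    taus = [2 ** k * tau0 for k in range(K + 1)]
    return taus, (-0.5 / tau0, 0.5 / tau0)

# tau0 = 100 ns, K = 4: sensing times up to 1.6 us and a range of +/- 5 MHz
taus, dynamic_range = sensing_times_and_range(100e-9, 4)
\end{verbatim}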
The full algorithm is based on a quantum system with multiple quantum bits that carry out the phase estimation simultaneously using the quantum Fourier transform \cite{Vorobyov2021}. Such multi-qubit systems are still challenging and sometimes not available for every sensing environment. Moreover, a single qubit sensor such as a spin-1/2 offers the ultimate spatial resolution, and any additional gain from entangling it with additional spins is canceled by the increase in sensor size. In these cases, therefore, a single-qubit system combined with an adaptive PEA, making use of quantum-classical interfaces \cite{Okamoto2012,Danilin2018}, can be of benefit. Here we experimentally demonstrate, in real time, a non-adaptive PEA scheme in non-ideal but very realistic sensing conditions, and show numerically the advantage of moving this method to an adaptive one.
\begin{figure}
\caption{(a) Graphical illustration of the adaptive phase estimation algorithm comprising four steps: (1) A pulse sequence suitable for the estimation of the unknown parameter, given the nature of the interaction between the sensor and the parameter. This pulse sequence will be applied with exponentially growing sensing times. The state of the sensor is measured after every sensing time. (2) Calculating the probability function for the state of the sensor given the unknown parameter. (3) Using Bayes' Theorem to update the probability function for the parameter. (4) Calculating the optimal variables for extracting maximal information from the next iteration. After $M_k$ iterations for each sensing time, the final distribution will be the estimate of the unknown parameter. (b-c) Schematic illustration of the measurement outcome of a single-shot (b) or averaged (c) sensor.}
\label{fig1}
\end{figure}
\subsection{Adaptive phase estimation algorithm}
The general scheme for applying adaptive PEA (Fig.\,\ref{fig1}a) consists of a cyclic process of four steps. The first is a pulse sequence applied on the sensor depending on the target frequency, $f$, in question and its interaction with the sensor as expressed in the Hamiltonian, $H(f)$. This pulse sequence uses the same exponentially growing sensing times as in the quantum PEA, only in sequential order, from the shortest to the longest, and not simultaneously, similar to Kitaev's iterative PEA \cite{Kitaev2002}. After each pulse sequence with one sensing time, the sensor is measured, and the outcome ($u$) can be one of the two states of the sensor, zero or one. This outcome is used in the second step to update the probability function, $P_m(u|f)$, of measuring the sensor state, $\ket{0}$ or $\ket{1}$, given the interaction with the unknown parameter of frequency $f$. The nature of the sensor's interaction with the target parameter in the pulse sequence is encoded in the probability function.
The critical step of the algorithm is in step \circled{3}, where one applies a Bayesian update to estimate the unknown parameter \cite{Bonato2015, Valeri2020, Gebhart2022}
\begin{equation}
P_\mathrm{posterior} (f|u)\propto P_m (u|f) P_\mathrm{prior} (f|u)
\label{eq:Bayesian}
\end{equation}
where $P(f|u)$ is the probability function of the target parameter given the measurement outcome $u$; the subscript \texttt{posterior} denotes the new probability after each Bayesian update and \texttt{prior} the old one from the last update. $P_m(u|f)$ is the probability function of the measurement outcome $u$ given the target parameter, where the subscript $m$ denotes a single outcome. Since the adaptive PEA applies the sensing scheme with different sensing times sequentially, each measurement holds less information about the phase than the quantum PEA. The penalty in the full scheme is that each sensing time is measured multiple times by changing one of the sensing variables, as is illustrated in step \circled{4}. The number of iterations $M_k=G+(K-k)F$ for each sensing time grows as the sensing time gets shorter, where $G$ and $F$ are optimized parameters, and $k$ is the index of the sensing time \cite{Higgins2009}. The adaptive character of the scheme is established in step \circled{4}. In this step, the optimal variables for gaining maximal information are calculated based on the last probability function and then transferred to the pulse sequence of the next iteration.
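A minimal sketch of a single Bayesian update on a discretized frequency grid, together with the iteration count $M_k$, could look as follows; the grid and the likelihood model are assumptions of the sketch, not the exact real-time implementation:
\begin{verbatim}
import numpy as np

def bayesian_update(prior, likelihood):
    """One Bayesian update: posterior(f) ~ likelihood(u|f) * prior(f).

    prior      : array P_prior(f) over the discretized frequency grid
    likelihood : array P_m(u|f) on the same grid, evaluated for the
                 outcome u that was actually measured
    """
    posterior = np.asarray(prior) * np.asarray(likelihood)
    return posterior / posterior.sum()

def iterations_per_sensing_time(k, K, G, F):
    """Number of repetitions M_k = G + (K - k) * F for sensing time index k."""
    return G + (K - k) * F
\end{verbatim}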
Adaptive PEA has been studied extensively. Theoretical works suggested controlling the sensing phase or the sensing time \cite{Cappellaro2012} to enhance sensitivity. Others used numerical simulations \cite{Wiebe2016, Scerri2020}, and several did experimental studies with different sensors \cite{Santagati2019, Bonato2015} to prove the feasibility and benefits of this protocol. All of these studies were performed with a single-shot readout (SSR) sensor, where the state of the sensor can be measured after one measurement with high fidelity (Fig.\,\ref{fig1}b). Nevertheless, in some cases, non-SSR sensors are the only possible sensing approach, for instance, for imaging nanoscale biological samples with high spatial resolution and in ambient conditions. These sensors are characterized by the high ratio of classical noise added in the measurement, for example, low photon collection efficiency in optically read-out systems, compared to the quantum projection noise of the system \cite{Degen2017}. This causes the histogram of the measurement outcomes to mix `0' and `1'. Therefore, assigning the measurement outcome to one state of the sensor with high fidelity, i.e., in one shot, is impossible (Fig.\,\ref{fig1}c).
For a non-SSR sensor, the pulse-sequence and the measurement should be applied for many repetitions to assess the sensor state, still with a non-negligible error. This situation requires adjusting the probability function $P(u|f)$ used in the Bayesian update to the averaged measurement result. The most common and simple solution is to use a threshold that is calculated based on the probability to measure a positive outcome from the sensor at each of the states, which can be a collection of photons for an optically measurable sensor (See Appendix \ref{apx:visibility}). In this method the measurement is repeated for $R$ times and the number of positive outcomes $r$ is assigned to a state of the sensor, $u$, based on the calculated threshold; we call this method ``majority voting''. This approach results in a binary outcome from a large batch of size $R$ repetitions of the measurement. This method's benefit is the possibility of using the probability function and Bayesian update as in the SSR sensor scheme \cite{Joas2021}. However, it disregards most of the possible outcomes from the $R$ repetitions by using only a binary span of results. Therefore, it is also more prone to noise, where a noisy measurement can be assigned to the wrong binary option \cite{Dinani2019}.
Since a non-SSR measurement entails repeating the measurement $R$ times to improve the readout fidelity, we consider the number of positive outcomes, $r$, out of the full $R$ batch. This probability distribution, then, is binomial,
\begin{equation}
P(f|r)=\left(
\begin{array}{c}
{R}\\{r}
\end{array}\right)
P_d(1|f)^r[1-P_d(1|f)]^{R-r}
\label{eq:binomial}
\end{equation}
where $P_d(1|f)$ is the probability of detecting a positive outcome given the sensor state was '1' calculated for the full range of the unknown parameter $f$. The subscript $d$ denotes the detection of the positive outcome, and $r$ is the number of positive outcomes of the measurement \cite{Dinani2019}. In this case, the information about the phase accumulated due to the external target parameter is encoded in $P_d$. This method considers the full range of possible outcomes for the averaged measurement. Therefore, a noisy measurement will not result in a mistake but in an error within the range of the noise of the measurement, and therefore leads to a more sensitive estimation \cite{Dinani2019}. So far, the binomial distribution approach and the enhancement in accuracy it offers have not been demonstrated experimentally.
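The practical difference between the two likelihood constructions can be sketched as follows; here \texttt{p\_detect} stands for the detection-probability model $P_d(1|f)$ evaluated on the frequency grid (specified for our Ramsey sequence in the next section), and the threshold used for majority voting is an illustrative parameter:
\begin{verbatim}
import numpy as np
from scipy.stats import binom

def binomial_likelihood(r, R, p_detect):
    """P(r|f): probability of r positive outcomes in R repetitions,
    with p_detect = P_d(1|f) given as an array over the frequency grid."""
    return binom.pmf(r, R, p_detect)

def majority_vote_likelihood(r, R, threshold, p_u1, p_u0):
    """Majority voting: collapse the batch to a binary outcome u via a
    threshold, then use the single-shot likelihoods P_m(u=1|f), P_m(u=0|f)."""
    return p_u1 if r / R > threshold else p_u0
\end{verbatim}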
\subsection{DC magnetometry with non-SSR sensor}
In this work, we use the nitrogen-vacancy (NV) center in diamond, a widely used non-SSR sensor at ambient conditions \cite{Maze2008, Balasubramanian2008}. The NV center is a spin-1 system with a zero field energy splitting between $m_s=0$ and $m_s=\pm 1$ spin states of $2.87$\,GHz (implicitly $\hbar$ is taken to be equal to 1). It is sensitive to DC magnetic fields due to the Zeeman effect $H(B)=\gamma_e \textbf{B}\cdot \textbf{S}$, where $\gamma_e$ is the electron gyromagnetic ratio and $\textbf{S}$ is the spin operator. When the magnetic field is aligned with the $z$ axis of the spin, the Hamiltonian of the system is simplified to
\begin{equation}
H_\mathrm{NV}(B)=DS_z^2+\gamma _eB_0S_z.
\label{eq:Hamiltonian}
\end{equation}
This interaction results in an energy splitting between the two ($m_s=\pm 1$) degenerate spin levels of $\Delta \omega = 2\gamma_e B$. Under these conditions and in most instances, each of the two single quantum transitions of the NV center can be practically considered as a two-level spin system (Fig.\,\ref{fig2}a).
The first step in the PEA scheme is applying the pulse-sequence sensitive to the target field (Fig.\,\ref{fig2}c). For a DC magnetic field, we use Ramsey interferometry (Ref.\,\onlinecite{Childress2006}, Fig.\,\ref{fig2}b). The evolution of the spin in such a pulse-sequence can be simplified when considered in the rotating frame. After initialization to $\ket{0}$, to prepare the sensor, a $\frac{\pi}{2}$ microwave pulse resonant with the eigenenergy of the sensor $\Delta \omega$ is applied, placing the sensor in a superposition state of the two eigenstates $\ket{\psi(t=0) }= 2^{-1/2}\left(\ket{0} + \ket{1}\right)$. When the sensor interacts with a small external magnetic field $\Delta B$, it will accumulate a relative phase between the two eigenstates in the rotating frame, which is proportional to the external magnetic field $\ket{\psi (t)}=2^{-1/2}\left(\ket{0} +e^{-i\gamma_e\Delta Bt}\ket{1}\right)=2^{-1/2}\left(\ket{0} +e^{-i2\pi f_{\Delta B}t}\ket{1}\right)$, where $f_{\Delta B}$ can also be considered as a small frequency detuning from the resonance frequency of the sensor (as illustrated in Fig.\,\ref{fig2}a), and therefore, throughout the manuscript we use the two terms interchangeably. When applying another $\frac{\pi}{2}$ microwave pulse, we project the spin to the eigenstates of $\sigma_z$. If this pulse is rotated by an angle $\phi$ from the preparation pulse, we will project the sensor to a rotated spin basis $\sigma_xe^{-i\phi}$, where $\phi$ is the phase we change in the fourth step of the scheme in Fig.\,\ref{fig2}c. This pulse sequence can estimate external magnetic fields that are within the dynamic range of the measurement (Eq.\,\ref{eq:DR}).
\begin{figure*}
\caption{(a) Illustration of the energy levels of the NV center under a static magnetic field with a small detuning. The arrows indicate the microwave field resonant with the $\ket{0}\rightarrow\ket{-1}$ transition. (b) Ramsey pulse sequence used for DC magnetometry. (c) The four steps of the phase estimation scheme as implemented in the experiment.}
\label{fig2}
\end{figure*}
The second step of the PEA (Fig.\,\ref{fig2}c) is to calculate the probability function according to the prior-measured state of the sensor $u=\ket{0} / \ket{1}$. This probability function is based on the Ramsey interferometry model for sensing a small external magnetic field $\Delta B$ (Fig.\,\ref{fig2}b),
\begin{equation}
P_m(u|\Delta B)=\frac{1}{2}\left[1+(-1)^u e^{-(\sfrac{t}{T_2^*})^2}\cos(2\pi f_{\Delta B}t-\phi)\right],
\label{eq:majority_P}
\end{equation}
where $T_2^*$ is the dephasing time of the sensor. This probability function can be used for an SSR sensor, like the NV center at cryogenic conditions \cite{Robledo2011}, or with the majority voting approach for the non-SSR sensor, like the NV center at ambient conditions \cite{Joas2021}.
However, for the binomial approach we want to use Eq.\,\ref{eq:binomial}, which accounts for all possible outcomes $r$ of the repeated measurement. This probability function depends on the probability of detecting a positive outcome given the external target parameter $P_d(1|f)$. As shown in the theoretical derivation from Ref.\,\onlinecite{Dinani2019}, this probability for sensing an external DC magnetic field is
\begin{equation}
P_d(1|f)=\alpha \left[
1-Ve^{-(\sfrac{t}{T_2^*})^2}\cos(2\pi f_{\Delta B}t -\phi)
\right]
\label{eq:binomial_gauss}
\end{equation}
where $\alpha$ is the sensor's threshold, and $V$ is the visibility of the sensor (See Appendix \ref{apx:visibility}).
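A direct transcription of Eq.\,\ref{eq:binomial_gauss}, with the sensing time, readout phase, threshold and visibility passed as parameters, reads:
\begin{verbatim}
import numpy as np

def p_detect(f, t, phi, alpha, V, T2_star):
    """Detection probability P_d(1|f) on a frequency grid f (Hz),
    for sensing time t, readout phase phi, threshold alpha,
    visibility V and dephasing time T2_star."""
    envelope = np.exp(-(t / T2_star) ** 2)
    return alpha * (1.0 - V * envelope
                    * np.cos(2.0 * np.pi * np.asarray(f) * t - phi))
\end{verbatim}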
\section{Real-time Bayesian update comparison}
We report for the first time on the advantage of the binomial approach over the majority voting in a real-time experiment. The experiment was done on an ambient-conditions setup, using a QM OPX to conduct real-time calculations (See Appendix \ref{apx:Set-up}). We collected data from a single NV center with a dephasing time of $T_2^*=3.5$\,$\upmu$s (See Appendix \ref{apx:Sample}). We used five sensing times ($K=4$) with $\tau_0=100$\,ns in the Ramsey measurement pulse sequence (First step in Fig.\,\ref{fig2}c). The probability function estimating the external magnetic field was constructed with a resolution (binning) of $25$\,kHz. For each external magnetic field, we applied the scheme twice, once with the majority voting probability function (Eq.\,\ref{eq:majority_P}) in the second step and once with the binomial distribution probability function (Eq.\,\ref{eq:binomial}). After the Bayesian update (third step in Fig.\,\ref{fig2}c), we change the phase of the second $\frac{\pi}{2}$ pulse linearly between zero and $\pi$ in a non-adaptive manner following a predetermined measurement sequence \cite{Higgins2009} (fourth step in Fig.\,\ref{fig2}c).
Figure\,\ref{fig3}a presents the iteration number of a measurement of a random magnetic field using the approach described above (Bayesian, non-adaptive). The probability function starts as a uniform distribution. The first iterations apply the shortest sensing time, which guides the probability function to a rough estimation of the frequency. As the iterations advance, the sensing time gets longer, and the estimated frequency gets focused and narrower to a more precise estimate. The frequency at the peak of the probability function in the last iteration is the final estimation for this measurement.
We applied the two approaches for a non-SSR sensor in a non-adaptive scheme on 500 randomly chosen magnetic fields $f_{\Delta B}$ in the range [-2, 2]\,MHz. For more information about the choice of the range, see Appendix \ref{apx:detunings}. We applied the external magnetic field as an off-resonance microwave tone relative to the $\ket{0}\rightarrow\ket{-1}$ transition of the NV, $\omega_{-1}$, at an applied magnetic field of 551\,Gauss (close to the NV's excited-state level anti-crossing), corresponding to $\omega_{-1} = 2\pi\times$1.3322 GHz. All 500 magnetic fields were measured with seven different repetition numbers $R=(100, 250, 500, 750, 1000, 2500, 5000)$. For each detuning we measured the two approaches in a random order, with the repetition numbers also applied in randomized order. After each detuning we refocused the frequency and position of the confocal setup.
To compare the two sensing methods, we calculated the mean square error (MSE):
$$\mathrm{MSE} = \sqrt{V_B}=\sqrt{\left< \left(\tilde{f}_B-f_B\right)^2\right>}$$ based on the estimated frequency $\tilde{f}_B$ calculated from $P(f|r)$ after every iteration. Our results show a reduction of the MSE with the same measurement time when using the binomial distribution approach. The best MSE achieved was $\approx 0.12\,\mathrm{MHz}$
for $R=2500$ with a total sensing time of $T=1.07$\,s when using the binomial distribution method. The majority voting method reached an MSE of $\approx 0.28\,\mathrm{MHz}$ within the same time, more than twice as high. The lowest possible MSE is limited by the decoherence time of the sensor, $\mathrm{MSE}\geq \sqrt{\frac{1}{T_2^*}}$.
We note that the MSE for larger $R$ does not improve by much, and we attribute this to the slight improvement of the contrast (See Appendix \ref{apx:contrast}). The superiority of the binomial distribution approach is evident also for shorter sensing times, starting from $R=250$ with $T=0.34$ s (see Fig.\,\ref{fig:contrast} in Appendix \ref{apx:contrast}). To see the improvement in MSE, we plot it as a function of the iteration number for the $R = 2500$ case in Fig.\,\ref{fig3}b. The MSE improves as the iterations progress, since the gain in estimation precision outweighs the additional total sensing time required for it. We see good agreement between the experiment and a simulation based on the experimental parameters used in the experiment. The small discrepancy between experiment and simulation can be explained by a small uncertainty in the detection probabilities for the 0 and 1 states, i.e., $P_d(1|m_0)$ and $P_d(1|m_1)$. The MSEs calculated for each number of repetitions, $R$, are plotted as a function of the contrast, averaged over all 500 frequencies with the same number of repetitions in Fig.\,\ref{fig:contrast} in Appendix \ref{apx:contrast}.
\begin{figure}
\caption{(a) The probability function of a single magnetic field detuning as it is updated with the iterations of the scheme. The detuning here was 1.895\,MHz. (b) Error (square root of variance) as a function of total measurement time (number of iterations) with binomial distribution approach (blue) and majority voting approach (orange) for a repetition number $R=2500$. The data presented here is for 500 random chosen detunings. The vertical lines (different values of $k$) represent the move to the next $\tau$ in the algorithm.}
\label{fig3}
\end{figure}
\section{Adaptive Bayesian update with binomial distribution}
\label{adaptive}
\begin{figure}
\caption{(a) Simulated experiment with the calculated phases for the different methods: non-adaptive (blue) and our adaptive protocol (orange). After reaching the end of the settings determined by the PEA scheme, measurement times $\tau = T_2^*$, and phases are chosen at random (non-adaptive) or via adaptive algorithm. (b) Comparison between three different phase calculations for a single experiment with detuning of $f_{\Delta B}$: non-adaptive (blue), adaptive (orange circles) and adaptive-optimized (green squares).}
\label{fig4}
\end{figure}
As shown with an SSR sensor, using an adaptive scheme where the measurement variables are optimized based on the updated probability function further improves the sensitivity of the method \cite{Bonato2015}. In DC magnetometry we look for the optimal readout phase $\phi$. While one can optimize quantities such as the information gain, this is typically quite complex and adds significant computational overhead. Simpler adaptive rules can be obtained through the Cram\'er-Rao lower bound (CRLB), which represents the minimum reachable variance for any (unbiased)
estimator of $f_{\Delta B}$. As the CRLB of $f_{\Delta B}$ is inversely proportional to the Fisher information $\mathcal{I}$, one can target the maximization of $\mathcal{I}$ to improve the estimate of $f_{\Delta B}$. To find this phase we calculate the Fisher information of the probability function as written in Eq.\,\ref{eq:Information}a and maximize it with respect to the phase $\phi$,
\begin{subequations}
\begin{equation}
{\cal I}(f_{\Delta B})=E\left[\left(\frac{\partial}{\partial f_{\Delta B}}\log\left(P(r|f_{\Delta B})\right)\right)^2\right],
\end{equation}
\begin{equation}
\frac{\partial}{\partial\phi}{\cal I}(f_{\Delta B})=0.
\end{equation}
\label{eq:Information}
\end{subequations}
where $E$ is the expectation value.
By solving this optimization problem for the phase and taking the solution that yields the maximum, we find the optimal phase,
\begin{equation}
\phi_\mathrm{opt}=2\pi E\left[f_{\Delta B}\right]t-\cos^{-1}\left(\frac{-B}{A}\right),
\label{eq:phi_opt}
\end{equation}
where $A=\frac{r^2}{R^2}+\left(1-2\frac{r}{R}\right)\alpha$ and $B=\left(1-2\frac{r}{R}\right)\alpha V$ (See Appendix \ref{apx:Adaptive}). Evaluating this expression with the number of positive results and the expectation value obtained at each iteration yields the readout phase for the next iteration.
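A minimal implementation of Eq.\,\ref{eq:phi_opt}, with the expectation value of the frequency taken from the current normalized posterior on the grid, could read:
\begin{verbatim}
import numpy as np

def optimal_phase(posterior, f_grid, tau, r, R, alpha, V):
    """Readout phase for the next iteration, following the expression above."""
    f_expect = float(np.sum(np.asarray(f_grid) * np.asarray(posterior)))
    A = (r / R) ** 2 + (1.0 - 2.0 * r / R) * alpha
    B = (1.0 - 2.0 * r / R) * alpha * V
    # clip in case noise pushes -B/A slightly outside the arccos domain
    return 2.0 * np.pi * f_expect * tau - np.arccos(np.clip(-B / A, -1.0, 1.0))
\end{verbatim}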
To evaluate the benefit of the adaptive scheme compared to the non-adaptive one, we simulated the experiments based on the dephasing time ($T_2^*$) and threshold ($\alpha$) (see Appendix \ref{apx:Sample}) of the NV used in the real-time experiment. Simulations were performed by numerically reproducing the experiment, randomly generating a simulated photon number $r$ from a binomial distribution as in Eq.\,\ref{eq:binomial_gauss}, using experimental parameters ($R=10^5$, $G=3$, $F=2$), and are presented in Fig.\,\ref{fig4}a. We observe two different regimes. In the first one we increase $\tau$ exponentially; this is the `high dynamic range' regime. Once we reach $T_2^*$, we measure from that iteration onward at $\tau = T_2^*$, at the SQL, hence the clear change in slope at iteration number (approximately) 30. In both cases (non-adaptive and adaptive), the probability function was calculated based on the binomial distribution approach. In the simulation of the adaptive scheme, the phase of the Ramsey readout pulse in the next iteration was calculated based on the probability function (step 4 in Fig.\,\ref{fig2}c) and Eq.\,\ref{eq:phi_opt}, whereas in the non-adaptive scheme the phase was linearly ramped between zero and $\pi$ (see Fig.\,\ref{fig4}b).
The phases calculated for the adaptive simulated experiment show convergence of the phase in a small number of iterations, smaller than the $M_k$ iterations determined theoretically for each sensing time (Fig.\,\ref{fig4}b, orange circles). This convergence raises the possibility of reducing the number of iterations for each sensing time by moving on to the next sensing time once the phase remains steady for three iterations within an error of $\frac{0.1}{\pi}$ (Fig.\,\ref{fig4}b, green squares). We denote this method as adaptive-optimized. It has the potential to reduce the total measurement time significantly, which will also improve the sensitivity.
The MSE calculated from simulated results of the two methods, non-adaptive and adaptive, is plotted in Fig.\,\ref{fig4}a as a function of increasing iterations, where each iteration consists of a new value for the phase. Both methods in the simulation used the binomial distribution approach for the probability function to calculate the final estimation, as this approach proved to be more sensitive also in the real-time experiment.
\section{Conclusions}
We performed a real-time Bayesian update with an NV center, a non-SSR sensor at ambient conditions. We compared the MSE of the sensor between two calculation methods - majority voting and binomial distribution, and showed that the latter approach has better sensitivity than the former.
We showed by simulation that an adaptive scheme can further improve the MSE, and suggested using it also to reduce the total number of iterations, and therefore the total sensing time, offering a further improvement in sensitivity. Our simulations suggest that these schemes can achieve a sensitivity four times better than the non-adaptive approach.
This work demonstrates how one can use non-SSR sensors as practical tools in adaptive PEA, and serves as a proof of concept for a specific non-SSR sensor, the NV center in diamond. Nevertheless, it could also be implemented in other sensing systems, as the approach is general. This method can also be used in other sensing schemes, such as ac magnetometry using dynamical decoupling \cite{Staudacher2013} for solid state spin sensors.
\paragraph*{Acknowledgements.} We thank B.\,Nadler for invigorating discussions. C.\,B.\, and A.\,F.\, are jointly supported by the ``Making Connections'' Weizmann-UK program. C.\,B.\, is supported by the Engineering and Physical Sciences Research Council (EP/S000550/1 and EP/V053779/1). A.\,F.\, acknowledges financial support from the Israel Science Foundation (ISF Grant No.\,963/19). A.\,F.\, is the incumbent of the Elaine Blond Career Development Chair in Perpetuity and acknowledges the historic generosity of the Harold Perlman Family, research grants from the Abramson Family Center for Young Scientists and the Willner Family Leadership Institute for the Weizmann Institute of Science.
\section{Appendices}
\renewcommand\thefigure{A\arabic{figure}}
\subsection{Sample}
The NV layer was created by a 10\,keV nitrogen ion ($^{15}\mathrm{N}^+$) implantation with a flux of 80 ions per $\upmu\mathrm{m}^2$ in an electronic grade (e6) CVD diamond, subsequently annealed in vacuum at a temperature of 950\,$^\circ\mathrm{C}$ for two hours. A nanopillar structure was then etched in the diamond for enhanced photon collection efficiency \cite{Momenzadeh2015}. All measurements were performed on a single NV center. The dephasing time of the NV center is $T_2^* = 3.5\,\upmu$s, measured with a standard Ramsey (FID) sequence on resonance. The Rabi contrast of the NV center was about 30\% with a count rate of 80 kcounts per second.
\label{apx:Sample}
\subsection{Experimental setup}
The NV center was measured on a custom-built confocal microscope (with a 520 nm laser diode for excitation, a dichroic mirror for separating excitation and fluorescence, a band-pass filter for fluorescence counting, and two avalanche photodiodes in a Hanbury-Brown and Twiss configuration). We used a QM OPX to orchestrate all pulse sequence generation, photon readout, real-time Bayesian estimation and adaptive phase calculation. A local oscillator from a Windfreak SynthNV-Pro was mixed (Marki MLIQ-0218L) with two low-frequency (150\,MHz) 90$^\circ$ phase-shifted sine signals from the OPX to produce a single-sideband modulated RF, amplified by an EliteRF (M.02006G424550) broadband amplifier.
As opposed to prior works with NVs \cite{Santagati2019, Joas2021, McMichael2021}, here the measurements and Bayesian update are performed by an FPGA-based computer in real-time (QM OPX). Together with on-the-fly pulse sequence generation, each Bayesian Update (in the non-adaptive case) takes only 0.4\,ms to complete (for a probability distribution function of length 400, or 1\,$\upmu\mathrm{s}$ per frequency bin), with a small overhead of $<1\,\upmu\mathrm{s}$ for the optimal phase calculation (in the adaptive case, Eq.\,\ref{eq:phi_opt}).
The OPX QUA code is available on \href{https://github.com/hellbourne/be-paper}{github}.
Measurements for Fig.\,\ref{fig3}b were taken on a similar setup, previously described by Arshad et al. \cite{Arshad2022}, using a laser-written NV center with $T_2^* = 5.5\,\upmu$s. A single photon count rate of $\approx 50$\,kcps was equivalent to $P_d(1|m_1) \approx 0.011$ and $P_d(1|m_0) \approx 0.016$.
\label{apx:Set-up}
\onecolumngrid
\subsection{Adaptive phase calculation}
Taking Eq.\,\ref{eq:binomial} and the Ramsey model (Eq.\,\ref{eq:binomial_gauss}), we derive in this appendix the optimal phase in the adaptive case. First, define the model as $L(f_B, \theta) = \alpha\left[1+V\cos\left(2\pi f_B\tau - \theta\right)\right]$ and calculate its derivative
$$
L'(f_B, \theta) = \frac{\partial}{\partial f_B} L(f_B, \theta) = -2\alpha V \pi \tau \sin(2\pi f_B\tau-\theta).
$$
Next, we use the binomial probability distribution to write down the mean and variance,
\begin{align*}
\mu_r & = E\left[r | f_B\right] = R\cdot L(f_B, \theta)\\
\sigma_r^2 & = E\left[\left(r-\mu_r\right)^2 | f_B\right] = R\cdot L(f_B, \theta)\left[1-L(f_B,\theta)\right].
\end{align*}
We can approximate $L(f_B,\theta) \equiv L \simeq \frac{r}{R} + \Delta$ if $\Delta \ll 1$, and then get an expression for the variance in leading orders of $\frac{r}{R}$:
\begin{align*}
\sigma^2_r & = R\cdot L(f_B, \theta)\left[1-L(f_B,\theta)\right] = RL-RL^2 = R\left[\left(\frac{r}{R} + \Delta\right) - \left(\frac{r}{R} + \Delta\right)^2\right] = \\
& = R\left[ \left(\frac{r}{R} + \Delta\right) - \left(\frac{r^2}{R^2} + 2\Delta\frac{r}{R} + \Delta^2\right) \right] \approx R\left[\left(\frac{r}{R} + \Delta\right) - \frac{r^2}{R^2} - 2\Delta\frac{r}{R}\right] = \\
& = R\left[ L-\frac{r^2}{R^2} - 2\left(L-\frac{r}{R}\right)\frac{r}{R} \right] = R\left[ L + \frac{r^2}{R^2} - 2\frac{r}{R}L \right] = R\left[L\left(1-2\frac{r}{R}\right) + \frac{r^2}{R^2}\right]
\end{align*}
Now we define the logarithm of the model (likelihood) function:
\begin{align*}
K(r, f_B) & \equiv \log P(r | f_B) = \log \left(
\begin{array}{c}
{R}\\{r}
\end{array}\right) + r\log L(f_B,\theta) + (R-r)\log\left(1-L(f_B,\theta)\right)
\end{align*}
and so
\begin{align*}
\frac{\partial}{\partial f_B}K(r, f_B) & = \frac{rL'}{L} - \frac{(R-r)L'}{1-L} = \left[ \frac{r}{L} - \frac{R-r}{1-L} \right]L' = \left[ \frac{r-RL}{L(1-L)}L' \right] = \frac{R}{\sigma_r^2}(r-\mu_r)L'(f_B, \theta).
\end{align*}
As we wrote in Sec.\,\ref{adaptive}, the Fisher information can now be explicitly calculated,
\begin{align*}
I(f_B) & = E\left[ \left(\frac{\partial}{\partial f_B}K(r,f_B)\right)^2\right] = E\left[(r-\mu_r)^2\right]\frac{R^2}{\sigma_r^4}\left(L'(f_B,\theta)\right)^2 = \frac{R^2}{\sigma_r^2}\left(L'(f_B,\theta)\right)^2 \\
& \approx \frac{R\left(L'(f_B,\theta)\right)^2}{\frac{r^2}{R^2} + \left(1-2\frac{r}{R}\right)L(f_B,\theta)} = 4R\alpha^2 V^2\pi^2\tau^2\frac{\sin^2\left(2\pi f_B\tau - \theta\right)}{\frac{r^2}{R^2} + \left(1-2\frac{r}{R}\right)\alpha\left[1+V\cos\left(2\pi f_B\tau - \theta\right)\right]}
\end{align*}
The last term can be written in a more compact form by denoting
\begin{align*}
A & = \frac{r^2}{R^2} + \left(1-2\frac{r}{R}\right)\alpha \\
B & = \left(1-2\frac{r}{R}\right)\alpha V \\
C & = 4R\alpha^2V^2\pi^2\tau^2,
\end{align*}
such that,
$$
I(f_B) = C\frac{\sin^2(2\pi f_B\tau-\theta)}{A+B\cos(2\pi f_B\tau-\theta)}
$$
We maximize the Fisher information (or minimize the Cram\'er-Rao bound):
$$
\frac{\partial}{\partial \theta} I(f_B) = 0
$$
with two solutions. The first one is a minimum with $\theta = 2\pi f_B \tau$ and the second solution is $ A\cos(2\pi f_B\tau - \theta) + B = 0$. Using $\hat{f}_B = E[f_B]$, gives:
$$
\theta_\mathrm{opt} = 2\pi \hat{f}_B \tau - \cos^{-1}\left( \frac{-B}{A} \right),
$$
\label{apx:Adaptive}
\twocolumngrid
\subsection{Contrast}
As defined previously \cite{Bonato2015}, the contrast, $C$, for $R$ repetitions scales as
$$
C = \left[ 1+ \frac{2(\alpha_0 + \alpha_1)}{(\alpha_0 - \alpha_1)^2R}\right]^{-1/2}
$$
where $\alpha_0$ is the number of photons per shot when the NV is in the $m_s=0$ state and $\alpha_1$ is the number of photons per shot when the NV is in the $m_s = 1$ state. In Fig.\,\ref{fig:contrast} we plot the MSE as a function of the contrast, $C$:
\begin{figure}
\caption{MSE as function of contrast. For typical values of $\alpha_0$ and $\alpha_1$ from our experiment and our choice of different $R$ in the range of $[100,5000]$, this translates to a contrast range from nearly zero to 0.9.}
\label{fig:contrast}
\end{figure}
The data presented in this figure is from a different dataset than that plotted in Fig.\,\ref{fig3}b, hence the difference in the MSE. Nevertheless, the point regarding the contrast is still valid.
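For orientation, the contrast predicted by the expression above can be evaluated directly for the repetition numbers used in the main text; the photon numbers per shot in this sketch are placeholders of the right order of magnitude, not the exact experimental values:
\begin{verbatim}
import numpy as np

def contrast(R, alpha0, alpha1):
    """Readout contrast C(R) for R repetitions, as defined above."""
    return (1.0 + 2.0 * (alpha0 + alpha1)
            / ((alpha0 - alpha1) ** 2 * R)) ** -0.5

R_values = np.array([100, 250, 500, 750, 1000, 2500, 5000])
C_values = contrast(R_values, alpha0=0.034, alpha1=0.025)  # roughly 0.25 ... 0.9
\end{verbatim}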
\label{apx:contrast}
\subsection{Visibility}
In Eq.\,\ref{eq:binomial_gauss} we introduced two parameters: the threshold, $\alpha$ and the visibility, $V$. Following Ref.\,\onlinecite{Dinani2019} and for self-consistency, we define them here as
\begin{align*}
\alpha & = \frac{1}{2}\left[ P_d(1|m_0) + P_d(1|m_1)\right] \\
V & = \frac{P_d(1|m_0) - P_d(1|m_1)}{P_d(1|m_0) + P_d(1|m_1)}e^{(\tau/T_2^*)^2},
\end{align*}
where $P_d(1|m_i)$, for $i=0,1$, is the probability of a detector click for the spin in the state $\ket{i}$. Typical values for our setup were $P_d(1|m_1) = 0.0251$ and $P_d(1|m_0) = 0.03419$.
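With the typical detection probabilities quoted above, the threshold and the bare visibility evaluate as in the following sketch; the default $T_2^*$ is the value quoted in Appendix \ref{apx:Sample}:
\begin{verbatim}
import numpy as np

def threshold_and_visibility(p1, p0, tau=0.0, T2_star=3.5e-6):
    """alpha and V as defined above, with p1 = P_d(1|m_1), p0 = P_d(1|m_0)."""
    alpha = 0.5 * (p0 + p1)
    V = (p0 - p1) / (p0 + p1) * np.exp((tau / T2_star) ** 2)
    return alpha, V

# Typical values from the text
alpha, V = threshold_and_visibility(p1=0.0251, p0=0.03419)  # alpha ~ 0.030, V ~ 0.15
\end{verbatim}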
\label{apx:visibility}
\subsection{Choice of range for random detunings}
As we explained in the main text, the dynamic range of the sensor is bounded to the range $\left[ -\tfrac{1}{2\tau_0}, \tfrac{1}{2\tau_0}\right]$, which is $[-5, 5]$\,MHz in our case. While we saw no discernible change in the conventional Ramsey curve for detunings larger than 2\,MHz, the level of noise increased dramatically for the Bayesian update. We therefore limited the range to $[-2,2]\,\mathrm{MHz}$ when randomly selecting the 500 detunings used in our dataset. The increase in noise for larger detunings is currently being investigated, but we do not think it affects the results we presented in the main text.
\label{apx:detunings}
\end{document}
|
\begin{document}
\title{Characterizations of $\Gamma $-AG$^{\ast \ast }$-groupoids by their $
\Gamma $-ideals}
\subjclass[2000]{20M10 and 20N99}
\keywords{Gamma-AG-groupoids, Gamma ideals, regular Gamma AG-groupoids}
\author{}
\maketitle
\begin{center}
\textbf{Madad Khan, Naveed Ahmad and Inayatur Rehman}
COMSATS Institute of Information Technology
Abbottabad Pakistan
[email protected], [email protected]
Department of Mathematics, Quaid-i-Azam University
Islamabad, Pakistan
s\[email protected]
\end{center}
\textbf{Abstract.} In this paper we have discussed $\Gamma $-left, $\Gamma $
-right, $\Gamma $-bi-, $\Gamma $-quasi-, $\Gamma $-interior and $\Gamma $
-ideals in $\Gamma $-AG$^{\ast \ast }$-groupoids and regular $\Gamma $-AG$
^{\ast \ast }$-groupoids. Moreover we have proved that the set of $\Gamma $
-ideals in a regular $\Gamma $-AG$^{\ast \ast }$-groupoid form a semilattice
structure. Also we have characterized a regular $\Gamma $-AG$^{\ast \ast }$
-groupoid in terms of left ideals.
\section{Introduction}
Kazim and Naseeruddin \cite{Kaz} have introduced the concept of an
LA-semigroup. This structure is the generalization of a commutative
semigroup. It is closely related with a commutative semigroup and
commutative groups because if an LA-semigroup contains right identity then
it becomes a commutative semigroup and if a new binary operation is defined
on a commutative group which gives an LA-semigroup \cite{Mushtaq and Yusuf}.
The connection of the class of LA-semigroups with the class of vector spaces
over finite fields and fields has been given as: Let $W$ be a sub-space of a
vector space $V$ over a field $F$ of cardinal $2r$ such that $r>1$. Many
authors have generalized some useful results of semigroup theory.
In 1981, the notion of $\Gamma $-semigroups was introduced by M. K. Sen \cite
{gemma1} and \cite{gamma2}.
T. Shah and I. Rehman \cite{irehman} defined $\Gamma $-AG-groupoids
analogous to $\Gamma $-semigroups and then they introduce the notion of $
\Gamma $-ideals and $\Gamma $-bi-ideals in $\Gamma $-AG-groupoids. It is
easy to see that $\Gamma $-ideals and $\Gamma $-bi-ideals in $\Gamma $
-AG-groupoids are in fact a generalization of ideals and bi-ideals in
AG-groupoids (for a suitable choice of $\Gamma $).
In this paper we define $\Gamma $-quasi-ideals and $\Gamma $-interior ideals
in $\Gamma $-AG$^{\ast \ast }$-groupoids and generalize some results. Also
we have proved that $\Gamma $-AG-groupoids with left identity and
AG-groupoids with left identity coincide.
Let $G$ and $\Gamma $ be two non-empty sets. $G$ is said to be a $\Gamma $
-AG-groupoid if there exist a mapping $G\times \Gamma \times G\rightarrow G$
, written $\left( a\text{, }\gamma \text{, }b\right) $ as $a\gamma b$, such
that $G$ satisfies the identity $\left( a\gamma b\right) \delta c=\left(
c\gamma b\right) \delta a$, for all $a$, $b$, $c\in G$ and $\gamma $, $
\delta \in \Gamma $ \cite{irehman}.
\begin{definition}
An element $e\in S$ is called a left identity of a $\Gamma $-AG-groupoid $S$ if $e\gamma a=a$ for all $a\in S$ and $\gamma \in \Gamma $.
\end{definition}
\begin{lemma}
If a $\Gamma $-AG-groupoid contains a left identity, then it becomes an
AG-groupoid with left identity.
\end{lemma}
\begin{proof}
Let $G$ be a $\Gamma $-AG-groupoid and $e$ be the left identity of $G$ and
let $a$, $b\in G$ and $\alpha $, $\beta \in \Gamma $ therefore we have
\begin{equation*}
a\alpha b=a\alpha (e\beta b)=e\alpha (a\beta b)=a\beta b\text{.}
\end{equation*}
Hence a $\Gamma $-AG-groupoid with a left identity becomes an AG-groupoid
with a left identity.
\end{proof}
\begin{remark}
From Lemma 1, it is easy to see that all the results given in \cite{irehman}
and \cite{Rehman2} for a $\Gamma $-AG-groupoid with left identity are
identical to the results given in \cite{Madad1} and \cite{Madad2}.
\end{remark}
\begin{definition}
A $\Gamma $-AG-groupoid is called a $\Gamma $-AG$^{\ast \ast }$-groupoid if
it satisfies the following law
\begin{equation*}
a\alpha (b\beta c)=b\alpha (a\beta c)\text{, for all }a,b,c\in S\text{ and }
\alpha ,\beta \in \Gamma \text{.}
\end{equation*}
\end{definition}
The following results and definitions, from Definition \ref{1} to Lemma \ref{idempotent}, have been taken from \cite{irehman}.
\begin{definition}
\label{1}Let $G$ be a $\Gamma $-AG-groupoid, a non-empty subset $S$ of $G$
is called sub $\Gamma $-AG-groupoid if $a\gamma b\in S$ for all $a$, $b\in S$
and $\gamma \in \Gamma $; equivalently, $S$ is called a sub $\Gamma $-AG-groupoid if $S\Gamma S\subseteq S$.
\end{definition}
\begin{definition}
A subset $I$ of a $\Gamma $-AG-groupoid $G$ is called a left (right) $\Gamma $
-ideal of $G$ if $G\Gamma I\subseteq I\left( I\Gamma G\subseteq I\right) $
and $I$ is called $\Gamma $-ideal of $G$ if it is both left and right $
\Gamma $-ideal.
\end{definition}
\begin{definition}
An element $a$ of a $\Gamma $-AG-groupoid $G$ is called regular if there
exist $x\in G$ and $\beta $, $\gamma \in \Gamma $ such that $a=\left( a\beta
x\right) \gamma a$. $G$ is called regular $\Gamma $-AG-groupoid if all
elements of $G$ are regular.
\end{definition}
\begin{definition}
A sub $\Gamma $-AG-groupoid $B$ of a $\Gamma $-AG-groupoid $G$ is called $
\Gamma $-bi-ideal of $G$ if $\left( B\Gamma G\right) \Gamma B\subseteq B$.
\end{definition}
\begin{definition}
Let $G$ and $\Gamma $ be any non-empty sets. If there exists a mapping $
G\times \Gamma \times G\rightarrow G,$ written $\left( x\text{, }\gamma
\text{, }y\right) $ as $x\gamma y$, $G$ is called a $\Gamma $-medial if it
satisfies $\left( x\alpha y\right) \beta \left( l\gamma m\right) =\left(
x\alpha l\right) \beta \left( y\gamma m\right) $, and called $\Gamma $
-paramedial if it satisfies $\left( x\alpha y\right) \beta \left( l\gamma
m\right) =\left( m\alpha l\right) \beta \left( y\gamma x\right) $ for all $x$
, $y$, $l$, $m\in G$ and $\alpha $, $\beta $, $\gamma \in \Gamma $.
\end{definition}
\begin{lemma}
\label{ab=ba}If $A$ and $B$ are any $\Gamma $-ideals of a regular $\Gamma $
-AG-groupoid $G$ then $A\Gamma B=B\Gamma A$.
\end{lemma}
\begin{definition}
A $\Gamma $-ideal $P$ of a $\Gamma $-AG-groupoid $G$ is called $\Gamma $
-prime$\left( \Gamma \text{-semiprime}\right) $ if for any $\Gamma $-ideals $
A$ and $B$, $A\Gamma B\subseteq P\left( A\Gamma A\subseteq P\right) $
implies either $A\subseteq P$ or $B\subseteq P\left( A\subseteq P\right) $.
\end{definition}
\begin{lemma}
\label{idempotent}Any $\Gamma $-ideal $A$ of a regular $\Gamma $-AG-groupoid
is a $\Gamma $-idempotent that is $A\Gamma A=A$.
\end{lemma}
It is important to note that every $\Gamma $-AG-groupoid $G$ is $\Gamma $
-medial and every $\Gamma $-AG$^{\ast \ast }$-groupoid $G$ is $\Gamma $
-paramedial because for any $x$, $y$, $l$, $m\in G$ and $\alpha $, $\beta $,
$\gamma \in \Gamma $, we have
\begin{equation*}
\left( x\alpha y\right) \beta \left( l\gamma m\right) =\left( \left( l\gamma
m\right) \alpha y\right) \beta x=\left( \left( y\gamma m\right) \alpha
l\right) \beta x=\left( x\alpha l\right) \beta \left( y\gamma m\right) \text{
.}
\end{equation*}
We call this the $\Gamma $-medial law.
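For completeness, the $\Gamma $-paramedial law for a $\Gamma $-AG$^{\ast \ast }$-groupoid can be obtained from the defining $\Gamma $-AG$^{\ast \ast }$ law, the left invertive law and the $\Gamma $-medial law above; a short derivation sketch runs as follows:
\begin{eqnarray*}
\left( x\alpha y\right) \beta \left( l\gamma m\right) &=&l\beta \left(
\left( x\alpha y\right) \gamma m\right) =l\beta \left( \left( m\alpha
y\right) \gamma x\right) \\
&=&\left( m\alpha y\right) \beta \left( l\gamma x\right) =\left( m\alpha
l\right) \beta \left( y\gamma x\right) \text{.}
\end{eqnarray*}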
\begin{theorem}
If $L$ and $R$ are left and right $\Gamma $-ideals of a $\Gamma $-AG$^{\ast
\ast }$-groupoid $G$ then $L\cup L\Gamma G$ and $R\cup G\Gamma R$ are $
\Gamma $-ideals of $G$.
\end{theorem}
\begin{proof}
Let $L$ be a left $\Gamma $-ideal of $G$ then we have
\begin{eqnarray*}
\left( L\cup L\Gamma G\right) \Gamma G &=&\left( L\Gamma G\right) \cup
\left( L\Gamma G\right) \Gamma G=\left( L\Gamma G\right) \cup \left( G\Gamma G\right)
\Gamma L \\
&\subseteq &L\Gamma G\cup \left( G\Gamma L\right) \subseteq L\Gamma G\cup
L=L\cup L\Gamma G\text{ and} \\
G\Gamma \left( L\cup L\Gamma G\right) &=&G\Gamma L\cup G\Gamma \left(
L\Gamma G\right) \subseteq L\cup L\Gamma \left( G\Gamma G\right) =L\cup
L\Gamma G.
\end{eqnarray*}
Again let $R$ be a right $\Gamma $-ideal of $G$ then we have
\begin{eqnarray*}
\left( R\cup G\Gamma R\right) \Gamma G &=&R\Gamma G\cup \left( G\Gamma
R\right) \Gamma G\subseteq R\cup \left( G\Gamma R\right) \Gamma \left(
G\Gamma G\right) \\
&=&R\cup \left( G\Gamma G\right) \Gamma \left( R\Gamma G\right) \subseteq
R\cup G\Gamma R\text{, and } \\
G\Gamma \left( R\cup G\Gamma R\right) &=&G\Gamma R\cup G\Gamma \left(
G\Gamma R\right) =G\Gamma R\cup \left( G\Gamma G\right) \Gamma \left(
G\Gamma R\right) \\
&=&G\Gamma R\cup \left( R\Gamma G\right) \Gamma \left( G\Gamma G\right)
\subseteq G\Gamma R\cup R\Gamma G \\
&\subseteq &G\Gamma R\cup R=R\cup G\Gamma R\text{.}
\end{eqnarray*}
\end{proof}
\begin{lemma}
\label{right}A right identity in a $\Gamma $-AG-groupoid $G$ becomes an identity
of $G$, and hence $G$ becomes a commutative $\Gamma $-semigroup.
\end{lemma}
\begin{proof}
Let $e$ be the right identity of $G$, $g\in G$, and $\alpha $, $\beta \in
\Gamma $; then
\begin{equation*}
e\alpha g=\left( e\beta e\right) \alpha g=\left( g\beta e\right) \alpha
e=g\alpha e=g.
\end{equation*}
Again for $a$, $b$, $c\in G$ and $\alpha $, $\beta $, $\gamma \in \Gamma $ we have
\begin{equation*}
a\gamma b=\left( e\alpha a\right) \gamma b=\left( e\alpha a\right) \gamma
(e\alpha b)=(b\alpha e)\gamma \left( a\alpha e\right) =b\gamma a\text{.}
\end{equation*}
Now
\begin{eqnarray*}
\left( a\alpha b\right) \beta c &=&\left( a\alpha b\right) \beta \left(
e\alpha c\right) =\left( a\alpha e\right) \beta \left( b\alpha c\right)
=e\alpha \left( \left( a\alpha e\right) \beta \left( b\alpha c\right) \right)
\\
&=&\left( a\alpha e\right) \alpha \left( e\beta \left( b\alpha c\right)
\right) =a\alpha \left( e\beta \left( b\alpha c\right) \right) =a\alpha
\left( b\beta \left( e\alpha c\right) \right) \\
&=&a\alpha \left( b\beta c\right) \text{.}
\end{eqnarray*}
\end{proof}
\begin{definition}
A sub $\Gamma $-AG-groupoid $Q$ of a $\Gamma $-AG-groupoid $G$ is called a $\Gamma $-quasi-ideal of $G$ if $G\Gamma Q\cap Q\Gamma G\subseteq Q$.
\end{definition}
\begin{definition}
A sub $\Gamma $-AG-groupoid $I$ of a $\Gamma $-AG-groupoid $G$ is called a $
\Gamma $-interior ideal of $G$ if $\left( G\Gamma I\right) \Gamma G\subseteq
I$.
\end{definition}
\begin{lemma}
Every one-sided (left or right) $\Gamma $-ideal of a $\Gamma $-AG-groupoid $G$ is a $\Gamma $-quasi-ideal of $G$.
\end{lemma}
\begin{proof}
Let $L$ be a left $\Gamma $-ideal of $G$ then we have
\begin{equation*}
L\Gamma G\cap G\Gamma L\subseteq G\Gamma L\subseteq L\text{.}
\end{equation*}
This implies that $L$ is a $\Gamma $-quasi-ideal of $G$. Similarly, if $R$ is a right $\Gamma $-ideal of $G$, then it is a $\Gamma $-quasi-ideal of $G$.
\end{proof}
\begin{lemma}
\label{RLB}Every right $\Gamma $-ideal and left $\Gamma $-ideal\ of a $
\Gamma $-AG-groupoid $G$ is a $\Gamma $-bi-ideal of $G$.
\end{lemma}
\begin{proof}
Let $R$ be a right $\Gamma $-ideal of $G$ then we have
\begin{equation*}
\left( R\Gamma G\right) \Gamma R\subseteq R\Gamma R\subseteq R\Gamma
G\subseteq R\text{.}
\end{equation*}
Again let $L$ be a left $\Gamma $-ideal of $G$ then we have
\begin{equation*}
\left( L\Gamma G\right) \Gamma L\subseteq \left( G\Gamma G\right) \Gamma
L\subseteq G\Gamma L\subseteq L\text{.}
\end{equation*}
\end{proof}
\begin{corollary}
Every $\Gamma $-ideal of a $\Gamma $-AG-groupoid $G$ is a $\Gamma $-bi-ideal
of $G$.
\end{corollary}
\begin{proof}
It follows from lemma \ref{RLB}.
\end{proof}
\begin{lemma}
If $B_{1}$ and $B_{2}$ are $\Gamma $-bi-ideals of a $\Gamma $-AG$^{\ast \ast
}$-groupoid $G$, then $B_{1}\Gamma B_{2}$ is also a $\Gamma $-bi-ideal of $G$.
\end{lemma}
\begin{proof}
Let $B_{1}$ and $B_{2}$ be $\Gamma $-bi-ideals of $G$ then we have
\begin{eqnarray*}
\left( \left( B_{1}\Gamma B_{2}\right) \Gamma G\right) \Gamma \left(
B_{1}\Gamma B_{2}\right) &=&\left( \left( B_{1}\Gamma B_{2}\right) \Gamma
\left( G\Gamma G\right) \right) \Gamma \left( B_{1}\Gamma B_{2}\right) \\
&=&\left( \left( B_{1}\Gamma G\right) \Gamma \left( B_{2}\Gamma G\right)
\right) \Gamma \left( B_{1}\Gamma B_{2}\right) \\
&=&\left( \left( B_{1}\Gamma G\right) \Gamma B_{1}\right) \Gamma \left(
\left( B_{2}\Gamma G\right) \Gamma B_{2}\right) \\
&\subseteq &B_{1}\Gamma B_{2}.
\end{eqnarray*}
\end{proof}
\begin{lemma}
Every $\Gamma $-idempotent quasi-ideal of a $\Gamma $-AG-groupoid $G$ is a $
\Gamma $-bi-ideal of $G$.
\end{lemma}
\begin{proof}
Let $Q$ be a $\Gamma $-idempotent quasi-ideal of $G$. Now
\begin{eqnarray*}
\left( Q\Gamma G\right) \Gamma Q &\subseteq &\left( G\Gamma G\right) \Gamma
Q\subseteq G\Gamma Q\text{, and} \\
\left( Q\Gamma G\right) \Gamma Q &=&\left( Q\Gamma G\right) \Gamma \left(
Q\Gamma Q\right) =\left( Q\Gamma Q\right) \Gamma \left( G\Gamma Q\right)
=Q\Gamma \left( G\Gamma Q\right) \\
&\subseteq &Q\Gamma \left( G\Gamma G\right) \subseteq Q\Gamma G\text{, which
implies that } \\
\left( Q\Gamma G\right) \Gamma Q &\subseteq &G\Gamma Q\cap Q\Gamma
G\subseteq Q\text{.}
\end{eqnarray*}
\end{proof}
\begin{lemma}
\label{ideal interior}Every $\Gamma $-ideal of a $\Gamma $-AG-groupoid $G$
is a $\Gamma $-interior ideal of $G$.
\end{lemma}
\begin{proof}
Let $I$ be a $\Gamma $-ideal of $G$ then we have
\begin{equation*}
\left( G\Gamma I\right) \Gamma G\subseteq I\Gamma G\subseteq I.
\end{equation*}
\end{proof}
\begin{lemma}
A subset $I$ of a $\Gamma $-AG$^{\ast \ast }$-groupoid $G$ is a $\Gamma $
-interior ideal if and only if it is right $\Gamma $-ideal.
\end{lemma}
\begin{proof}
Let $I$ be a right $\Gamma $-ideal of $G$; then it becomes a left $\Gamma $-ideal, hence a $\Gamma $-ideal, and by Lemma \ref{ideal interior} it is a $\Gamma $-interior ideal.
Conversely assume that $I$ is a $\Gamma $-interior ideal of $G$. Using $
\Gamma $-paramedial law, we have
\begin{eqnarray*}
I\Gamma G &=&I\Gamma \left( G\Gamma G\right) =G\Gamma \left( I\Gamma
G\right) =\left( G\Gamma G\right) \Gamma \left( I\Gamma G\right) \\
&=&\left( G\Gamma I\right) \Gamma \left( G\Gamma G\right) \subseteq \left(
G\Gamma I\right) \Gamma G\subseteq I.
\end{eqnarray*}
This shows that $I$ is a right $\Gamma $-ideal of $G$.
\end{proof}
\begin{example}
Let $G=\left\{ 1,2,3,4,5\right\} $ be the AG-groupoid with left identity $4$ whose binary operation ``$\cdot $'' is given in the following Cayley table.
\begin{equation*}
\begin{tabular}{l|lllll}
$\cdot $ & $1$ & $2$ & $3$ & $4$ & $5$ \\ \hline
$1$ & $4$ & $5$ & $1$ & $2$ & $3$ \\
$2$ & $3$ & $4$ & $5$ & $1$ & $2$ \\
$3$ & $2$ & $3$ & $4$ & $5$ & $1$ \\
$4$ & $1$ & $2$ & $3$ & $4$ & $5$ \\
$5$ & $5$ & $1$ & $2$ & $3$ & $4$
\end{tabular}
\end{equation*}
\end{example}
It is easy to observe that $G$ is a simple AG-groupoid, that is, $G$ has no proper left or right ideals. Now let $\Gamma =\{\alpha ,\beta ,\gamma \}$ be defined as
\begin{equation*}
\begin{tabular}{l|lllll}
$\alpha $ & $1$ & $2$ & $3$ & $4$ & $5$ \\ \hline
$1$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
$2$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
$3$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
$4$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
$5$ & $1$ & $1$ & $1$ & $1$ & $1$
\end{tabular}
\text{ \ \ \ \ \ \ \ \ }
\begin{tabular}{l|lllll}
$\beta $ & $1$ & $2$ & $3$ & $4$ & $5$ \\ \hline
$1$ & $2$ & $2$ & $2$ & $2$ & $2$ \\
$2$ & $2$ & $2$ & $2$ & $2$ & $2$ \\
$3$ & $2$ & $2$ & $2$ & $2$ & $2$ \\
$4$ & $2$ & $2$ & $2$ & $2$ & $2$ \\
$5$ & $2$ & $2$ & $2$ & $2$ & $2$
\end{tabular}
\text{ \ \ \ \ \ \ \ \ }
\begin{tabular}{l|lllll}
$\gamma $ & $1$ & $2$ & $3$ & $4$ & $5$ \\ \hline
$1$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
$2$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
$3$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
$4$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
$5$ & $1$ & $1$ & $1$ & $3$ & $3$
\end{tabular}
\end{equation*}
It is easy to prove that $G$ is a $\Gamma $-AG-groupoid, since $\left( a\pi b\right) \psi c=\left( c\pi b\right) \psi a$ for all $a$, $b$, $c\in G$ and $\pi $, $\psi \in \Gamma $, and $G$ is non-associative because $\left( 1\alpha 2\right) \beta 3\neq 1\alpha \left( 2\beta 3\right) $. This $\Gamma $-AG-groupoid does not contain a left identity, because $4\alpha 5\neq 5$, $4\beta 5\neq 5$ and $4\gamma 5\neq 5$; thus an AG-groupoid with left identity does not necessarily give rise to a $\Gamma $-AG-groupoid with left identity. Clearly $A=\{1,2,3\}$ is a $\Gamma $-ideal of $G$, while $B=\{1,2,4\}$ is a right $\Gamma $-ideal but not a left $\Gamma $-ideal. Both $A$ and $B$ are $\Gamma $-bi-ideals of $G$, and $C=\{1,2,3,4\}$ is a $\Gamma $-interior ideal of $G$.
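To illustrate these claims concretely, the non-associativity and the failure of $B$ to be a left $\Gamma $-ideal can be read off the tables directly:
\begin{equation*}
\left( 1\alpha 2\right) \beta 3=1\beta 3=2\neq 1=1\alpha 2=1\alpha \left( 2\beta 3\right) \text{, \ \ and \ \ }5\gamma 4=3\notin B\text{,}
\end{equation*}
so that $G\Gamma B\nsubseteq B$, while one checks in the same way that $B\Gamma G\subseteq B$ and $G\Gamma A\cup A\Gamma G\subseteq A$.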
\begin{lemma}
For a regular $\Gamma $-AG-groupoid $G$, we have $A\Gamma G=A$ and $G\Gamma B=B$ for every right $\Gamma $-ideal $A$ and every left $\Gamma $-ideal $B$ of $G$.
\end{lemma}
\begin{proof}
Let $A$ be a right $\Gamma $-ideal of $G$; then $A\Gamma G\subseteq A$. Let $a\in A$. Since $G$ is regular, there exist $x\in G$ and $\alpha $, $\gamma \in \Gamma $ such that
\begin{equation*}
a=\left( a\alpha x\right) \gamma a\in \left( A\Gamma G\right) \Gamma
A\subseteq \left( A\Gamma G\right) \Gamma G\subseteq A\Gamma G\text{.}
\end{equation*}
Hence $A\subseteq A\Gamma G$, and therefore $A\Gamma G=A$. Now let $B$ be a left $\Gamma $-ideal of $G$; then $G\Gamma B\subseteq B$. Let $b\in B$. Since $G$ is regular, there exist $t\in G$ and $\pi $, $\sigma \in \Gamma $ such that
\begin{equation*}
b=\left( b\pi t\right) \sigma b\in \left( B\Gamma G\right) \Gamma B\subseteq
\left( G\Gamma G\right) \Gamma B\subseteq G\Gamma B\text{.}
\end{equation*}
Hence $B\subseteq G\Gamma B$, and therefore $G\Gamma B=B$.
\end{proof}
\begin{lemma}
If $G$ is a $\Gamma $-AG$^{\ast \ast }$-groupoid then $g\Gamma G$ and $
G\Gamma g$ are $\Gamma $-bi-ideals for all $g\in G$.
\end{lemma}
\begin{proof}
Using the definition of $\Gamma $-AG$^{\ast \ast }$-groupoid we have
\begin{eqnarray*}
\left( \left( g\Gamma G\right) \Gamma G\right) \Gamma \left( g\Gamma
G\right) &=&\left( \left( G\Gamma G\right) \Gamma g\right) \Gamma \left(
g\Gamma G\right) \subseteq \left( G\Gamma g\right) \Gamma \left( g\Gamma
G\right) \\
&=&g\Gamma \left( \left( G\Gamma g\right) \Gamma G\right) \subseteq g\Gamma
\left( \left( G\Gamma G\right) \Gamma G\right) \subseteq g\Gamma \left(
G\Gamma G\right) \\
&\subseteq &g\Gamma G\text{.}
\end{eqnarray*}
Again using $\Gamma $-paramedial law we have
\begin{eqnarray*}
\left( \left( G\Gamma g\right) \Gamma G\right) \Gamma \left( G\Gamma
g\right) &=&\left( \left( \left( G\Gamma g\right) \Gamma g\right) \Gamma
G\right) \Gamma G=\left( \left( \left( g\Gamma g\right) \Gamma G\right)
\Gamma G\right) \Gamma G \\
&=&\left( \left( G\Gamma G\right) \Gamma G\right) \Gamma \left( g\Gamma
g\right) \subseteq \left( G\Gamma G\right) \Gamma \left( g\Gamma g\right) \\
&=&\left( g\Gamma g\right) \Gamma \left( G\Gamma G\right) \subseteq \left(
g\Gamma g\right) \Gamma G=\left( G\Gamma g\right) \Gamma g \\
&\subseteq &\left( G\Gamma G\right) \Gamma g\subseteq G\Gamma g.
\end{eqnarray*}
\end{proof}
\begin{corollary}
If $G$ is a regular $\Gamma $-AG$^{\ast \ast }$-groupoid then $a\Gamma G$ is
a $\Gamma $-bi-ideal in $G$, for all $a\in G$.
\end{corollary}
\begin{proof}
Let $G$ be a regular $\Gamma $-AG-groupoid then for every $a\in G$ there
exist $x\in G$ and $\alpha $, $\beta \in \Gamma $ such that $a=\left( \left(
a\alpha x\right) \beta a\right) $ therefore we have
\begin{eqnarray*}
\left( \left( a\Gamma G\right) \Gamma G\right) \Gamma \left( a\Gamma
G\right) &=&\left( \left( \left( \left( a\alpha x\right) \beta a\right)
\Gamma G\right) \Gamma G\right) \Gamma \left( a\Gamma G\right) \\
&=&\left( \left( G\Gamma G\right) \Gamma \left( \left( a\alpha x\right)
\beta a\right) \right) \Gamma \left( a\Gamma G\right) \\
&\subseteq &\left( G\Gamma \left( \left( a\alpha x\right) \beta a\right)
\right) \Gamma \left( a\Gamma G\right) =\left( \left( a\alpha x\right)
\Gamma \left( G\beta a\right) \right) \Gamma \left( a\Gamma G\right) \\
&\subseteq &\left( \left( a\alpha x\right) \Gamma \left( G\beta G\right)
\right) \Gamma \left( G\Gamma G\right) \subseteq \left( \left( a\alpha
x\right) \Gamma G\right) \Gamma G \\
&=&\left( G\Gamma G\right) \Gamma \left( a\alpha x\right) \subseteq G\Gamma
\left( a\alpha x\right) =a\Gamma \left( G\alpha x\right) \subseteq a\Gamma
\left( G\Gamma G\right) \\
&\subseteq &a\Gamma G\text{.}
\end{eqnarray*}
\end{proof}
\begin{lemma}
For a $\Gamma $-bi-ideal $B$ in a regular $\Gamma $-AG-groupoid $G$, $\left(
B\Gamma G\right) \Gamma B=B$.
\end{lemma}
\begin{proof}
Let $B$ be a $\Gamma $-bi-ideal of $G$; then $\left( B\Gamma G\right) \Gamma B\subseteq B$. Let $x\in B$. Since $G$ is a regular $\Gamma $-AG-groupoid, there exist $a\in G$ and $\alpha $, $\beta \in \Gamma $ such that
\begin{equation*}
x=\left( x\alpha a\right) \beta x\in \left( B\Gamma G\right) \Gamma B\text{.}
\end{equation*}
This implies that $B\subseteq \left( B\Gamma G\right) \Gamma B$, and hence $\left( B\Gamma G\right) \Gamma B=B$.
\end{proof}
\begin{lemma}
If $G$ is a regular $\Gamma $-AG-groupoid, then $G\Gamma G=G$.
\end{lemma}
\begin{proof}
Clearly $G\Gamma G\subseteq G$. Let $x\in G$. Since $G$ is a regular $\Gamma $-AG-groupoid, there exist $a\in G$ and $\alpha $, $\beta \in \Gamma $ such that
\begin{equation*}
x=\left( x\alpha a\right) \beta x\in \left( G\Gamma G\right) \Gamma G\subseteq G\Gamma G\text{.}
\end{equation*}
This implies that $G\subseteq G\Gamma G$.
\end{proof}
\begin{lemma}
A subset $I$ of a regular $\Gamma $-AG$^{\ast \ast }$-groupoid $G$ is a left
$\Gamma $-ideal if and only if it is a right $\Gamma $-ideal of $G.$
\end{lemma}
\begin{proof}
Let $I$ be a left $\Gamma $-ideal of $G$; then $G\Gamma I\subseteq I$. Let $i\gamma g\in I\Gamma G$ with $g\in G$, $i\in I$ and $\gamma \in \Gamma $. Since $G$ is a regular $\Gamma $-AG-groupoid, there exist $x$, $y\in G$ and $\alpha $, $\beta $, $\delta $, $\pi \in \Gamma $ such that
\begin{eqnarray*}
i\gamma g &=&\left( \left( i\alpha x\right) \beta i\right) \gamma \left(
\left( g\delta y\right) \pi g\right) =\left( \left( i\alpha x\right) \beta
\left( g\delta y\right) \right) \gamma \left( i\pi g\right) \\
&=&\left( \left( \left( \left( i\alpha x\right) \beta i\right) \alpha
x\right) \beta \left( g\delta y\right) \right) \gamma \left( i\pi g\right)
=\left( \left( y\alpha g\right) \beta \left( \left( i\beta \left( i\alpha
x\right) \right) \delta x\right) \right) \gamma \left( i\pi g\right) \\
&=&\left( i\beta \left( \left( \left( y\alpha g\right) \beta \left( i\alpha
x\right) \right) \delta x\right) \right) \gamma \left( i\pi g\right) =\left(
\left( i\pi g\right) \beta \left( \left( \left( y\alpha g\right) \beta
\left( i\alpha x\right) \right) \delta x\right) \right) \gamma i \\
&\in &\left( G\Gamma I\right) \subseteq I\text{.}
\end{eqnarray*}
Conversely, let $I$ be a right $\Gamma $-ideal of $G$ and let $g\gamma i\in G\Gamma I$ with $g\in G$, $i\in I$ and $\gamma \in \Gamma $. Since $G$ is regular, there exist $x\in G$ and $\alpha $, $\beta \in \Gamma $ such that
\begin{equation*}
g\gamma i=\left( \left( g\alpha x\right) \beta g\right) \gamma i=\left(
i\beta g\right) \gamma \left( g\alpha x\right) \in \left( I\Gamma G\right)
\Gamma G\subseteq I\Gamma G\subseteq I\text{.}
\end{equation*}
Hence $G\Gamma I\subseteq I$, that is, $I$ is a left $\Gamma $-ideal of $G$.
\end{proof}
\begin{theorem}
For a $\Gamma $-AG$^{\ast \ast }$-groupoid $G$, the following statements are equivalent.

$\left( i\right) $ $G$ is a regular $\Gamma $-AG-groupoid.

$\left( ii\right) $ Every left $\Gamma $-ideal of $G$ is $\Gamma $-idempotent.
\end{theorem}
\begin{proof}
$\left( i\right) \Rightarrow \left( ii\right) $ Let $G$ be a regular $\Gamma $-AG-groupoid. Then, by Lemma \ref{idempotent}, every $\Gamma $-ideal of $G$ is $\Gamma $-idempotent.

$\left( ii\right) \Rightarrow \left( i\right) $ Assume that every left $\Gamma $-ideal of the $\Gamma $-AG$^{\ast \ast }$-groupoid $G$ is $\Gamma $-idempotent. Since $G\Gamma a$ is a left $\Gamma $-ideal of $G$ for all $a\in G$ \cite{irehman}, it is $\Gamma $-idempotent. Using the $\Gamma $-paramedial law, Lemma \ref{a(bc)} and the $\Gamma $-medial law, together with $a\in G\Gamma a$, we obtain
\begin{eqnarray*}
a &\in &\left( G\Gamma a\right) \Gamma \left( G\Gamma a\right) =\left(
\left( G\Gamma a\right) \Gamma a\right) \Gamma G=\left( \left( a\Gamma
a\right) \Gamma G\right) \Gamma G \\
&=&\left( \left( a\Gamma a\right) \Gamma \left( G\Gamma G\right) \right)
\Gamma G=\left( \left( G\Gamma G\right) \Gamma \left( a\Gamma a\right)
\right) \Gamma G \\
&=&\left( a\Gamma \left( \left( G\Gamma G\right) \Gamma a\right) \right)
\Gamma G=\left( G\Gamma \left( \left( G\Gamma G\right) \Gamma a\right)
\right) \Gamma a \\
&=&\left( G\Gamma \left( G\Gamma a\right) \right) \Gamma a=\left( G\Gamma
\left( \left( G\Gamma a\right) \Gamma \left( G\Gamma a\right) \right)
\right) \Gamma a \\
&=&\left( G\Gamma \left( \left( a\Gamma G\right) \Gamma \left( a\Gamma
G\right) \right) \right) \Gamma a=\left( \left( G\Gamma G\right) \Gamma
\left( \left( a\Gamma G\right) \Gamma \left( a\Gamma G\right) \right)
\right) \Gamma a \\
&=&\left( \left( G\Gamma \left( a\Gamma G\right) \right) \Gamma \left(
G\Gamma \left( a\Gamma G\right) \right) \right) \Gamma a=\left( \left(
\left( a\Gamma G\right) \Gamma G\right) \Gamma \left( \left( a\Gamma
G\right) \Gamma G\right) \right) \Gamma a \\
&=&\left( \left( \left( \left( a\Gamma G\right) \Gamma G\right) \Gamma
G\right) \Gamma \left( a\Gamma G\right) \right) \Gamma a=\left( a\Gamma
\left( \left( \left( \left( a\Gamma G\right) \Gamma G\right) \Gamma G\right)
\Gamma G\right) \right) \Gamma a \\
&\subseteq &\left( a\Gamma G\right) \Gamma a.
\end{eqnarray*}
This shows that $G$ is a regular $\Gamma $-AG-groupoid.
\end{proof}
\begin{lemma}
Any $\Gamma $-ideal of a regular $\Gamma $-AG-groupoid $G$ is $\Gamma $
-semiprime.
\end{lemma}
\begin{proof}
It is an easy consequence of lemma \ref{idempotent}.
\end{proof}
\begin{theorem}
The set of all $\Gamma $-ideals of a regular $\Gamma $-AG-groupoid $G$ forms a semilattice under the operation $\circ $ given by $A\circ B=A\Gamma B$, for all $\Gamma $-ideals $A$ and $B$ of $G$.
\end{theorem}
\begin{proof}
Let $A$ and $B$ be any $\Gamma $-ideals of $G$. Then, using $G\Gamma G=G$ and the $\Gamma $-medial law, we have
\begin{eqnarray*}
\left( A\Gamma B\right) \Gamma G &=&\left( A\Gamma B\right) \Gamma \left(
G\Gamma G\right) =\left( A\Gamma G\right) \Gamma \left( B\Gamma G\right)
\subseteq A\Gamma B.\text{ And } \\
G\Gamma \left( A\Gamma B\right) &=&\left( G\Gamma G\right) \Gamma \left(
A\Gamma B\right) =\left( G\Gamma A\right) \Gamma \left( G\Gamma B\right)
\subseteq A\Gamma B\text{.}
\end{eqnarray*}
Also, by Lemma \ref{ab=ba} we have $A\Gamma B=B\Gamma A$, which implies that, for any $\Gamma $-ideal $C$ of $G$,
\begin{equation*}
\left( A\Gamma B\right) \Gamma C=C\Gamma \left( A\Gamma B\right) =A\Gamma
\left( C\Gamma B\right) =A\Gamma \left( B\Gamma C\right) \text{.}
\end{equation*}
Finally, by Lemma \ref{idempotent}, $A\Gamma A=A$. Hence $\circ $ is a commutative, associative and idempotent operation on the set of $\Gamma $-ideals of $G$, which proves the claim.
\end{proof}
\end{document}
\begin{document}
\title{Local well-posedness of the 1d compressible Navier–Stokes system with rough data}
\begin{abstract}
This paper presents a new approach to the local well-posedness of the 1d compressible Navier–Stokes systems with rough initial data. Our approach is based on establishing some smoothing and Lipschitz-type estimates for the 1d parabolic equation with piecewise continuous coefficients.
\end{abstract}
\section{Introduction}
In this paper, we study the compressible Navier--Stokes equations in Lagrangian coordinates, which can be written as (see \cite{Smoller})
\begin{equation}\label{eqcompre}
\left\{
\begin{aligned}
& v_t-u_x=0,\\
& u_t+p_x=\left(\frac{\mu u_x}{v}\right)_x,\\
& (e+\frac{1}{2}u^2)_t+({p}u)_x=\left(\frac{\kappa}{v}\theta_x+\frac{\mu}{v}uu_x\right)_x.
\end{aligned}
\right.
\end{equation}
Here $v$ denotes the specific volume, $u$ the velocity, $p$ the pressure, $e$ the specific internal energy, and $\theta $ the absolute temperature, while $\mu, \kappa>0$ are the viscosity and heat conductivity coefficients. The above equations describe the conservation of mass, momentum and energy, respectively.
We will consider two systems. The first one is the so-called $p$-system (see \cite{Smoller}), which is a general model of isentropic gas dynamics (the case $p(v)=Av^{-\gamma}$). In the isentropic case the temperature is held constant, hence energy must be added to the system, and the conservation of energy equation is absent. The $p$-system can be written as
\begin{align}\label{inns}
\left\{
\begin{aligned}
& v_t-u_x=0,\\
& u_t+(p(v))_x=(\frac{\mu u_x}{v})_x.
\end{aligned}
\right.
\end{align}
In this paper we consider a more general system which only requires $p\in W^{2,\infty}$.
Another system studied in this paper is \eqref{eqcompre} for the polytropic ideal gas which has the constitutive relations
$$
p(v,\theta)=\frac{K\theta}{v},\ \ \ e=\mathbf{c}\theta,
$$
where $K$ and the heat capacity $\mathbf{c}$ are both positive constants. The system can be written as
\begin{equation}\label{cpns}
\left\{
\begin{aligned}
& v_t-u_x=0,\\
& u_t+(p(v,\theta))_x=\left(\frac{\mu u_x}{v}\right)_x,\\
& \theta_t+\frac{p(v,\theta)}{\mathbf{c}}u_x-\frac{\mu}{\mathbf{c}v}(u_x)^2=\left(\frac{\kappa}{\mathbf{c}v}\theta_x\right)_x.
\end{aligned}
\right.
\end{equation}
Let us give a review of classical results. The study of the compressible Navier--Stokes equations in gas dynamics started with Nash \cite{Nash1958}, who considered general elliptic and parabolic equations. Then Itaya \cite{Itaya71,Itaya75} solved the compressible Navier--Stokes equations for initial data in H\"{o}lder spaces. Kazhikhov \cite{Ka77} established a priori estimates and proved the existence of weak and classical solutions. For the equations of the 3-d ideal gas, Nishida and Matsumura \cite{MN1980} applied the energy method to obtain the existence of a global solution with data in H\"{o}lder spaces. Results on the large-time behavior of global solutions were provided by Kawashima \cite{Kawa87}. Using the energy method, Hoff \cite{Hoff92} proved the existence of a global solution for discontinuous data. The results in the books of Lions \cite{Lions} and Feireisl \cite{Fe} are applicable to larger classes of initial data.
Recently, Liu and Yu \cite{LiuCAPM} studied the system \eqref{inns} with $BV\cap L^1$ data, under the assumption $p'(v)<0$. Based on exact analysis of the associated linear equations and their Green’s functions, they initiated a new method, which converts the differential equations into integral equations, and proved that for initial data
$v_0,u_0$ satisfying
\begin{align*}
\|v_0-1\|_{L^1},\|v_0\|_{BV},\|u_0\|_{L^1},\|u_0\|_{BV}<\delta\ll 1,
\end{align*}
there exist $t_\sharp>0$ and $C_\sharp>0$ such that
the system \eqref{inns} admits a unique weak solution $(v,u)$ for $t\in(0,t_\sharp)$, and
\begin{align*}
\|v(t,\cdot)-1\|_{L^1}+\|v(t,\cdot)\|_{BV}+\|\sqrt{t}u_x(t,\cdot)\|_{L^\infty}\leq 2C_\sharp\delta,\ \ \ \forall t\in(0,t_\sharp).
\end{align*}
Moreover, they also established global result for polytropic gases $p(v)=Av^{-\gamma}$ with $1\leq \gamma<e$.
Their result was extended to the full compressible Navier--Stokes system \eqref{cpns} by Wang, Yu, and Zhang \cite{Wang}.
In this paper, we are interested in the local well-posedness of \eqref{inns} and \eqref{cpns} with rough data, which allows $v_0$ to have a finite jump. The idea to study the elliptic problem with jump coefficients comes from \cite{N1,N2}. More precisely, without loss of generality, let $x=0$ be the jump point. Suppose there exists $\varepsilon>0$ such that
\begin{align}\label{jump}
|v_0(x)-v_0(y)|\leq \delta,\ \ \ \text{if} \ |x-y|\leq \varepsilon \ \text{and}\ xy>0,
\end{align}
for some $\delta>0$ that will be fixed later.
Throughout the paper, we fix two constants $0<\alpha<\gamma\ll1$. Define the norms
\begin{align*}
& \|f\|_{X_T^{\sigma,p}(E)}=\sup_{0<s<T}s^{\sigma}\|f(s)\|_{L^p(E)}+\sup_{0<s<t<T}s^{\sigma+\alpha}\frac{\|f(t)-f(s)\|_{L^p(E)}}{(t-s)^\alpha},\\
& \|f\|_{Y_T(E)}=\sup_{0<s<T}\|f(s)\|_{L^\infty(E)}+\sup_{0<s<t<T}s^{\alpha}\frac{\|f(t)-f(s)\|_{L^\infty(E)}}{(t-s)^\alpha},\\
&\|f\|_{L^p_{T}(E)}=\left(\int_0^T \int_E |f(t,x)|^p dxdt \right)^\frac{1}{p},\ \ \ \ \ \ \ \text{for any measurable set}\ E\subseteq\mathbb{R},
\end{align*}
and we omit the domain if $E=\mathbb{R}$. When several norms are written together, we mean their sum. Moreover, we write $A\lesssim B$ if there exists a universal constant $C$ such that $A\leq CB$.
We will solve the systems \eqref{inns} and \eqref{cpns} by a fixed point argument in Banach spaces equipped with the norms above. We note that the norms $\|\cdot\|_{X_T^{\sigma,p}}$ and $\|\cdot\|_{Y_T}$ contain H\"{o}lder-type seminorms in time.
For the system \eqref{inns}, we impose the following conditions on the initial data
\begin{align}\label{lowbd}
v_0\in L^\infty,\ \inf_xv_0\geq\lambda_0>0
,\ u_0=\partial_x\bar{u}_0\ \text{with}\ \bar{u}_0\in \dot{C}^{2\gamma}.
\end{align}
Simplified versions of our main results are stated in the following theorems.
\begin{theorem}\label{maininns} Assume $p\in W^{2,\infty}(\mathbb{R})$.
There exists $\delta_0>0$ such that if the initial data $(v_0,u_0)$ satisfies the conditions (\ref{jump}) and (\ref{lowbd}) with $\delta=\delta_0$, then the system \eqref{inns} admits a unique local solution $(v,u)$ in $[0,T]$ satisfying
\begin{align*}
\inf_{t\in[0,T]}\inf_xv(t,x)\geq \frac{\lambda_0}{2},\ \ \ \|v\|_{Y_T}\leq 2\|v_0\|_{L^\infty},\ \ \ \ \|\partial_xu\|_{X_T^{1-\gamma,\infty}}\leq M,
\end{align*}
for some $T,M>0$.
\end{theorem}
For the full N--S system \eqref{cpns}, we work in an $L^p$ setting and define the following norm and the corresponding Banach space
\begin{align*}
\|(w,\vartheta)\|_{Z_T}:=&\sum_{\star\in\{L^2_{T},X_T^{\frac{1}{2},2},X_T^{\frac{3}{4},\infty}\}}\|\partial_x w\|_{\star} +\sum_{\star\in\{L^2_{T},X_T^{\frac{1}{2},2}\}}\|\vartheta\|_{\star}+\sum_{\star\in\{L^\frac{6}{5}_{T},X_T^{\frac{5}{6},\frac{6}{5}}\}}\|\partial_x \vartheta\|_{\star}.
\end{align*}
The above norms have the same scaling in the parabolic setting. The norms $\|\partial_x w\|_{L^2_{T}}$, $\|\vartheta\|_{L^2_{T}}$ and $\|\partial_x \vartheta\|_{L^\frac{6}{5}_{T}}$ are imposed in view of the parabolic structure of the equations for $(u,\theta)$. The other norms $\|\partial_x w\|_{X_T^{\frac{1}{2},2}}, \|\partial_x w\|_{X_T^{\frac{3}{4},\infty}}, \|\vartheta\|_{X_T^{\frac{1}{2},2}}, \|\partial_x \vartheta\|_{X_T^{\frac{5}{6},\frac{6}{5}}}$ are needed to obtain local estimates of $v$.
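To make the scaling claim precise, consider the parabolic rescaling $w_\lambda(t,x)=w(\lambda^{2}t,\lambda x)$ of a generic function $w$. A direct change of variables gives
\begin{equation*}
\|\partial_x w_\lambda\|_{L^2_{T}}=\lambda^{-\frac{1}{2}}\|\partial_x w\|_{L^2_{\lambda^{2}T}},\ \ \ \ \|\partial_x w_\lambda\|_{X_T^{\frac{1}{2},2}}=\lambda^{-\frac{1}{2}}\|\partial_x w\|_{X_{\lambda^{2}T}^{\frac{1}{2},2}},\ \ \ \ \|\partial_x w_\lambda\|_{X_T^{\frac{3}{4},\infty}}=\lambda^{-\frac{1}{2}}\|\partial_x w\|_{X_{\lambda^{2}T}^{\frac{3}{4},\infty}},
\end{equation*}
so the three norms of $\partial_x w$ pick up the same factor $\lambda^{-\frac{1}{2}}$; likewise, the norms of $\vartheta$ and $\partial_x\vartheta$ appearing in $\|\cdot\|_{Z_T}$ all pick up the common factor $\lambda^{-\frac{3}{2}}$.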
We assume the initial data satisfies that
\begin{align}\label{idcns}
\inf_xv_0(x)\geq \lambda_0>0,\ \ \ \|v_0\|_{L^\infty}<\infty,\ \ \|u_0\|_{L^2}<\infty,\ \ \|\theta_0\|_{\dot W^{-\frac{2}{3},\frac{6}{5}}}<\infty.
\end{align}
The following result is the local well-posedness of system \eqref{cpns}.
\begin{theorem}\label{maincp}
There exists $\delta_0>0$ such that if the initial data $(v_0,u_0,\theta_0)$ satisfies \eqref{jump} and \eqref{idcns} with $\delta=\delta_0$, then the system \eqref{cpns} has a unique local solution $(v,u,\theta)$ in $[0,T]$ satisfying
\begin{align*}
\inf_{t\in[0,T]}\inf_xv(t,x)\geq \frac{\lambda_0}{2},\ \ \ \|v\|_{Y_T}\leq 2\|v_0\|_{L^\infty},\ \ \ \ \|(u,\theta)\|_{ Z_T}\leq B,
\end{align*}
for some $T, B>0$.
\end{theorem}
Note that we do not need $u_0,\theta_0$ to decay at infinity. Hence the condition \eqref{idcns} can be relaxed to local $L^p$ norms (see Theorem \ref{thmloc}):
\begin{align}\label{loccon}
\inf_xv_0(x)\geq \lambda_0>0,\ \ \ \|v_0\|_{L^\infty}<\infty,\ \ \sup_{z\in\mathbb{R}}\|u_0\chi_z\|_{L^2}<\infty,\ \ \sup_{z\in\mathbb{R}}\|\theta_0\chi_z\|_{\dot W^{-\frac{2}{3},\frac{6}{5}}}<\infty.
\end{align}
Here $\chi_z$ is a smooth cutoff function satisfying $\mathbf{1}_{[z-1,z+1]}\leq \chi_z\leq \mathbf{1}_{[z-2,z+2]}$, where $\mathbf{1}$ is the indicator function.
Our method is based on the analysis of the heat equation with a jump coefficient $\phi(x)$ satisfying \eqref{conphi}:
\begin{equation*}
\begin{aligned}
&\partial_t f(t,x) -\partial_x(\phi(x)\partial_xf(t,x))=\partial_x F(t,x)+R(t,x),\\
&f(0,x)=f_0(x).
\end{aligned}
\end{equation*}
We convert the differential equations into integral equations and give two different formulas for the solution. One formula, \eqref{soforjum}, is suitable for estimates near the jump points, while the other, \eqref{sofor1}, behaves well far from the jump points. Our main results rely on Lemma \ref{lemma}.
We make some remarks about the choice of initial data. For the isentropic system \eqref{inns}, we require $u_0\in \dot B^{-1+2\gamma}_{\infty,\infty}$. Generally, in the Schauder theory of parabolic/elliptic equations, controlling the Lipschitz norm of the solution requires the coefficients to be at least Dini continuous (see \cite{Jin}). Due to the roughness of the coefficient, the space $\dot B^{-1+2\gamma}_{\infty,\infty}$ is optimal in the sense that we can take $\gamma>0$ arbitrarily close to $0$. For the full N--S system \eqref{cpns}, we require $u_0\in L^2$ and $\theta_0\in \dot W^{-\frac{2}{3},\frac{6}{5}}$. Taking a glance at the equation for $\theta$, we need $(u_x)^2, \theta u_x\in L^1_{T}$ to make $\theta$ well-defined. By the classical theory of the heat equation, $u_x\in L^2_{T}$ requires $u_0\in L^2$, and $\theta\in L^2_{T}$ requires $\theta_0\in \dot H^{-1}$. Taking the force terms into consideration, we finally choose $\theta_0\in\dot W^{-\frac{2}{3},\frac{6}{5}}\subset \dot H^{-1}$.
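The last inclusion is consistent with the one-dimensional homogeneous Sobolev embedding $\dot W^{s_1,p_1}(\mathbb{R})\hookrightarrow \dot W^{s_2,p_2}(\mathbb{R})$, valid for $1<p_1\leq p_2<\infty$ and $s_1-\frac{1}{p_1}=s_2-\frac{1}{p_2}$: here
\begin{equation*}
-\frac{2}{3}-\frac{5}{6}=-\frac{3}{2}=-1-\frac{1}{2},
\end{equation*}
so $\dot W^{-\frac{2}{3},\frac{6}{5}}$ and $\dot H^{-1}=\dot W^{-1,2}$ share the same scaling, and $\theta_0\in\dot W^{-\frac{2}{3},\frac{6}{5}}$ indeed implies $\theta_0\in \dot H^{-1}$.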
The rest of this paper is organized as follows. In Section \ref{secpre} we introduce some basic properties of heat equations and of the heat kernel. We establish the main Lemma \ref{lemma} in Section \ref{secmain}. In Section 4 we apply Lemma \ref{lemma} to obtain the local well-posedness of system \eqref{inns} and prove Theorem \ref{maininns}. Section 5 is devoted to the local well-posedness of system \eqref{cpns} and the proof of Theorem \ref{maincp}.
\section{Preliminaries}\label{secpre}
We introduce the following estimates for the standard heat kernel $\mathbf{K}(t,x)=(4\pi t)^{-\frac{1}{2}}e^{-\frac{x^2}{4t}}$ in $\mathbb{R}$ which satisfies
\begin{align*}
&\partial_t \mathbf{K}(t,x)-\partial_{xx}\mathbf{K}(t,x)=0,\\
&\lim_{t\to 0^+}\mathbf{K}(t,x)=\mathbf{Dirac}(x).
\end{align*}
The following estimates for heat kernel will be used frequently in our proof.
\begin{lemma}\label{lemheat}
There holds
\begin{align*}
&|\partial_t^j\partial_x^m \mathbf{K}(t,x)|\lesssim \frac{1}{(t^\frac{1}{2}+|x|)^{1+2j+m}},\\
&\left(\int_{0}^{\infty}|\partial_x^m\mathbf{K}(t,x)|^pdt\right)^{\frac{1}{p}}\lesssim |x|^{\frac{2}{p}-1-m},\ \ \ \ \left(\int_{\mathbb{R}}|\partial_x^m\mathbf{K}(t,x)|^pdx\right)^{\frac{1}{p}}\lesssim t^{\frac{1}{2}(\frac{1}{p}-1-m)},\\
&\left(\int_{\mathbb{R}}|\partial_t^j\partial_x^m \mathbf{K}(t,x)|^p|x|^{\sigma p} dx\right)^\frac{1}{p}\lesssim \frac{1}{t^{j+{\frac{1}{2}(m-\frac{1}{p}+1-\sigma)}}},\\
&\left(\int_{\mathbb{R}}|\partial_t^j \partial_x^m\mathbf{K}(t+a,x)-\partial_t^j\partial_x^m \mathbf{K}(t,x)|^p|x|^{\sigma p} dx\right)^\frac{1}{p}\lesssim \frac{1}{t^{j+{\frac{1}{2}(m-\frac{1}{p}+1-\sigma)}}}\min\{1,\frac{a}{t}\},
\end{align*}
for any $m,j=0,1,2$, any $\sigma,a,t\geq 0$ and $p\in[1,+\infty]$.
\end{lemma}
It is easy to check the above estimates using the definition of the heat kernel and the fact that $b^me^{-b}\lesssim_m 1$, $\forall b>0,m\in\mathbb{N}^+$; we only verify one representative case and omit the remaining details.
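For instance, for $j=0$ and $m=1$ the pointwise bound can be verified as follows: writing $b=\frac{x^{2}}{4t}$, so that $t^{\frac{1}{2}}+|x|=t^{\frac{1}{2}}(1+2\sqrt{b})$, one computes
\begin{equation*}
|\partial_x \mathbf{K}(t,x)|=\frac{|x|}{2t}(4\pi t)^{-\frac{1}{2}}e^{-\frac{x^{2}}{4t}}=\frac{\sqrt{b}\,e^{-b}}{\sqrt{4\pi}\,t}=\frac{\sqrt{b}\,(1+2\sqrt{b})^{2}e^{-b}}{\sqrt{4\pi}\,\big(t^{\frac{1}{2}}+|x|\big)^{2}}\lesssim \frac{1}{\big(t^{\frac{1}{2}}+|x|\big)^{2}},
\end{equation*}
since $\sqrt{b}\,(1+2\sqrt{b})^{2}e^{-b}\lesssim 1$; the remaining pointwise cases and the integrated bounds follow in the same manner.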
\begin{lemma}\label{lemsob}
Let $h\in \dot W^{\sigma,p}(\mathbb{R})$ for some $\sigma\in(0,1), p\in[1,\infty)$. Define $\mathbf{h}(x)=h(x)\mathbf{1}_{x\geq 0}+h(-x)\mathbf{1}_{x<0}$ and $\bar{ \mathbf{h}}(x)=(h(x)-h(-x))\mathbf{1}_{x\leq 0}$. Then we have
\begin{align*}
\int_{\mathbb{R}} \int_0^\infty |\mathbf{h}(x)-\mathbf{h}(x-z)|^p\frac{dxdz}{|z|^{1+\sigma p}}+ \int_{\mathbb{R}} \int_0^\infty |\bar{ \mathbf{h}}(x)-\bar{ \mathbf{h}}(x-z)|^p\frac{dxdz}{|z|^{1+\sigma p}}\lesssim \|h\|_{\dot W^{\sigma,p}}^p.
\end{align*}
\end{lemma}
\begin{proof}
By definition, one can check that
\begin{align*}
\frac{|\mathbf{h}(x)-\mathbf{h}(x-z)|^p}{|z|^{1+\sigma p}}&=\mathbf{1}_{x\geq z} \frac{|{h}(x)-{h}(x-z)|^p}{|z|^{1+\sigma p}}+\mathbf{1}_{x< z} \frac{|{h}(x)-{h}(z-x)|^p}{|z|^{1+\sigma p}}\\
&\leq \frac{|{h}(x)-{h}(x-z)|^p}{|z|^{1+\sigma p}}+ \frac{|{h}(x)-{h}(z-x)|^p}{|2x-z|^{1+\sigma p}},\\
\frac{|\bar {\mathbf{h}}(x)-\bar {\mathbf{h}}(x-z)|^p}{|z|^{1+\sigma p}}&=\mathbf{1}_{x< z} \frac{|{h}(x-z)-{h}(z-x)|^p}{|z|^{1+\sigma p}}\\
&\leq \frac{|{h}(x)-{h}(x-z)|^p}{|z|^{1+\sigma p}}+ \frac{|{h}(x)-{h}(z-x)|^p}{|2x-z|^{1+\sigma p}}.
\end{align*}
The result is standard in view of the fact that $$\|h\|_{\dot W^{\sigma,p}}^p= \iint_{\mathbb{R}^2} \frac{|h(x)-h(y)|^p}{|x-y|^{1+\sigma p}}dxdy.
$$
This completes the proof.
\end{proof}
We introduce the following Schauder-type lemma; the idea can be found in \cite{Peskin}.
\begin{lemma}\label{mainlem}
Let $\mathbf{K}$ be the heat kernel, and let
\begin{align*}
g(t,x)=\int_0^t\int_{\mathbb{R}}\partial_t \mathbf{K}(t-\tau,x-y) f(\tau,y)dyd\tau.
\end{align*}
Then we have
\begin{align}\label{re}
\|g\|_{X_T^{\sigma,p}}\lesssim \|f\|_{X_T^{\sigma,p}},\ \ \ \forall T>0,\ \sigma\in(0,1-\alpha),\ p\in[1,\infty].
\end{align}
\end{lemma}
\begin{proof}
One has \begin{align*}
g(t,x)&=\int_0^t\int_{\mathbb{R}}\partial_t \mathbf{K}(t-\tau,x-y) (f(\tau,y)-f(t,y))dyd\tau+\int_0^t\int_{\mathbb{R}}\partial_t \mathbf{K}(t-\tau,x-y) f(t,y)dyd\tau\\
&:=g_1(t,x)+g_2(t,x).
\end{align*}
By Lemma \ref{lemheat} we get
\begin{align*}
\|g_1(t)\|_{L^p}\lesssim \int_0^t \|\partial_t \mathbf{K}(t-\tau)\|_{L^1}\|f(\tau)-f(t)\|_{L^p}d\tau\lesssim \int_0^t (t-\tau)^{\alpha-1}\tau^{-\sigma-\alpha}d\tau\|f\|_{X_T^{\sigma,p}}\lesssim t^{-\sigma}\|f\|_{X_T^{\sigma,p}}.
\end{align*}
Moreover, observe that $\int_0^t \partial_t \mathbf{K}(t-\tau,x-y) d\tau=\mathbf{K}(t,x-y)-\mathbf{Dirac}(x-y)$, hence $$g_2(t,x)=\int_{\mathbb{R}} \mathbf{K}(t,x-y)f(t,y)dy-f(t,x).$$
Then
\begin{align*}
\|g_2(t)\|_{L^p}\lesssim (1+\|\mathbf{K}(t)\|_{L^1})\|f(t)\|_{L^p}\lesssim \|f(t)\|_{L^p}\lesssim t^{-\sigma}\|f\|_{X_T^{\sigma,p}}.
\end{align*}
Thus, we obtain that
\begin{align}\label{par1}
\sup_{s\in[0,T]}s^{\sigma}\|g(s)\|_{L^p}\lesssim \|f\|_{X_T^{\sigma,p}}.
\end{align}
In the following we will denote $a=t-s>0$ for convenience, and we denote $\delta_\beta f(s,x)=f(s+\beta,x)-f(s,x)$. We write
\begin{align*}
g(t)- g(s)=&\int_0^s\delta_a\partial_t \mathbf{K}(s-\tau)\ast f(\tau)d\tau+\int_s^t\partial_t \mathbf{K}(t-\tau)\ast f(\tau)d\tau\\
=&\int_0^s\delta_a\partial_t \mathbf{K}(s-\tau)\ast (f(\tau)-f(s))d\tau+\int_s^t\partial_t \mathbf{K}(t-\tau)\ast (f(\tau)-f(t))d\tau\\
&+\big(\int_0^s\delta_a\partial_t \mathbf{K}(s-\tau)\ast f(s)d\tau+\int_s^t\partial_t \mathbf{K}(t-\tau)\ast f(t)d\tau\big)\\
:=&I_1+I_2+I_3.
\end{align*}
Applying Lemma \ref{lemheat}, we have
\begin{align*}
\|I_1\|_{L^p_x}&\lesssim \int_0^s \min\{1,\frac{a}{s-\tau}\}\frac{1}{(s-\tau)^{1-\alpha}\tau^{\sigma+\alpha}} d\tau \|f\|_{X_T^{\sigma,p}}\\
&\lesssim a^\alpha s^{-\sigma-\alpha}\|f\|_{X_T^{\sigma,p}},
\end{align*}
where we use the fact that
\begin{align}
&\int_0^s \min\{1,\frac{a}{s-\tau}\}\frac{1}{(s-\tau)^{1-\alpha}\tau^{\sigma+\alpha}} d\tau\nonumber\\
&\lesssim a^\alpha\int_0^\frac{s}{2}\frac{1}{(s-\tau)\tau^{\sigma+\alpha}}d\tau+s^{-\sigma-\alpha}\int_\frac{s}{2}^s\min\{1,\frac{a}{s-\tau}\}\frac{1}{(s-\tau)^{1-\alpha}}d\tau\lesssim a^\alpha s^{-\sigma-\alpha}.\label{intt}
\end{align}
For $I_2$, we have
\begin{align*}
\|I_2\|_{L^p}\lesssim \int_s^t \frac{1}{(t-\tau)^{1-\alpha}}\frac{1}{\tau^{\sigma+\alpha}}d\tau \|f\|_{X_T^{\sigma,p}}\lesssim a^\alpha s^{-\sigma-\alpha}\|f\|_{X_T^{\sigma,p}}.
\end{align*}
For $I_3$, we have
\begin{align*}
\|I_3\|_{L^p}\lesssim &\|f(s)\ast \mathbf{K}(t-s)-f(s)\ast \mathbf{K}(t)-f(s)+f(s)\ast \mathbf{K}(s)+f(t)-f(t)\ast \mathbf{K}(t-s)\|_{L^p}\\
\lesssim &\|f(t)-f(s)\|_{L^p}+\|f(s)\ast(\mathbf{K}(t)- \mathbf{K}(s))\|_{L^p}
\lesssim s^{-\sigma-\alpha}(t-s)^\alpha\|f\|_{X_T^{\sigma,p}},
\end{align*}
where we applied Lemma \ref{lemheat} to get $\|\mathbf{K}(t)- \mathbf{K}(s)\|_{L^1}\lesssim \frac{(t-s)^\alpha}{s^\alpha}$ in the last inequality.
Then we obtain that
\begin{align}
\label{par2}
\sup_{0<s<t<T}s^{\sigma+\alpha}\frac{\|g(t)-g(s)\|_{L^p}}{(t-s)^\alpha}\lesssim \|f\|_{X_T^{\sigma,p}}.
\end{align}
By \eqref{par1} and \eqref{par2}, we get \eqref{re}. The proof is complete.
\end{proof}
\begin{remark}\label{rem2.2}
The result in Lemma \ref{mainlem} is still true for $g(t,x)=\int_0^t\int_{\mathbb{R}}\partial_t\mathbf{G}(t-\tau,x-y,x)f(\tau,y)dyd\tau$, where the kernel $\mathbf{G}(t,z,x)$ satisfies
\begin{align*}
&\sup_z|\partial_t\mathbf{G}(t,z,x)|\lesssim |\partial_t\mathbf{K}(\zeta t,x)|,\\
&\sup_z|\partial_t\mathbf{G}(t,z,x)-\partial_t\mathbf{G}(s,z,x)|\lesssim |\partial_t\mathbf{K}(\zeta s,x)-\partial_t\mathbf{K}(\zeta t,x)|,
\end{align*}
for a universal constant $\zeta>0$.
\end{remark}
\begin{lemma}\label{lemlow}
Let $0\leq\beta< 1-\alpha$, $\tilde{\sigma}<1$, $1\leq q\leq p\leq\infty$, $1+\frac{1}{p}=\frac{1}{r}+\frac{1}{q}$ and $1+\sigma-\tilde{\sigma}-\beta\geq 0$. Suppose that $G(t,y,x)$ satisfies \begin{align}\label{conG}
\sup_x\|G(t,\cdot,x)\|_{L^r}\lesssim t^{-\beta},\ \ \ \sup_x\|G(t,\cdot,x)-G(s,\cdot,x)\|_{L^r}\lesssim s^{-\beta}\min\{1,\frac{t-s}{s}\},
\end{align}
for any $0<s<t<T$.
Then for $g(t,x)=\int_0^t\int G(t-\tau,x-y,x) f(\tau,y)dyd\tau$, we have
$$
\|g\|_{X_T^{\sigma,p}}\lesssim T^{1+\sigma-\tilde{\sigma}-\beta }\|f\|_{X_T^{\tilde{\sigma},q}}.
$$
\end{lemma}
\begin{proof}
By Young's inequality,
\begin{align*}
\|g(t)\|_{L^p}&\lesssim \int_0^t \sup_x\|G(t-\tau,\cdot,x)\|_{L^r}\|f(\tau)\|_{L^q}d\tau\\
&\lesssim \int_0^t \frac{1}{(t-\tau)^{\beta}}\frac{1}{\tau^{\tilde{\sigma}}}d\tau \|f\|_{X_T^{\tilde{\sigma},q}}\lesssim t^{1-\beta-\tilde{\sigma}} \|f\|_{X_T^{\tilde{\sigma},q}}.
\end{align*}
Hence
\begin{align}\label{ggg}
\sup_{t\in[0,T]}t^{\sigma}\|g(t)\|_{L^p}\lesssim T^{1-\beta+\sigma-\tilde{\sigma} }\|f\|_{X_T^{\tilde{\sigma},q}}.
\end{align}
Moreover, for any $0<s<t<T$, denote $a=t-s$,
\begin{align*}
g(t)- g(s)=&\int_0^s\int_{\mathbb{R}}\delta_aG(s-\tau,x-y,x) f(\tau,y)dyd\tau+\int_s^t\int_{\mathbb{R}} G(t-\tau,x-y,x) f(\tau,y)dyd\tau.
\end{align*}
By \eqref{conG} and Young's inequality, one has
\begin{align*}
\| g(t)- g(s)\|_{L^p}&\lesssim \|f\|_{X_T^{\tilde{\sigma},q}}\int_0^s\frac{\min\{1,\frac{a}{s-\tau}\}}{(s-\tau)^{\beta}}\frac{1}{\tau^{\tilde{\sigma}}}d\tau+\|f\|_{X_T^{\tilde{\sigma},q}}\int_s^t\frac{1}{(t-\tau)^{\beta}}\frac{1}{\tau^{\tilde{\sigma}}}d\tau\\
&\overset{\eqref{intt}}\lesssim a^\alpha s^{-\sigma-\alpha} T^{1-\beta+\sigma-\tilde{\sigma} } \|f\|_{X_T^{\tilde{\sigma},q}}.
\end{align*}
Combining this with \eqref{ggg}, we obtain the result.
\end{proof}
\begin{lemma}\label{rem}
Suppose
$$
g(t,x)=\int_0^t\int_{\mathbb{R}}\mathbf{K}(t-\tau,x-y)R(\tau,y)dyd\tau.
$$
Let $l=0,1$. Then for any $\frac{1}{1+2\alpha}<p<\frac{1}{2\alpha}$ and $\sigma\geq \frac{1+l}{2}-\frac{1}{2p}$, there holds
$$
\|\partial_x^lg\|_{X_T^{\sigma,p}}\lesssim T^{\sigma-\frac{1+l}{2}+\frac{1}{2p}}(\|R\|_{L_{T}^1}+\|R\|_{X_T^{1,1}}).
$$
\end{lemma}
\begin{proof}
Note that
\begin{align*}
\partial_x^lg(t,x)=\int_0^t\int_{\mathbb{R}}\partial_x^l\mathbf{K}(t-\tau,x-y)R(\tau,y)dyd\tau.
\end{align*}
By Young's inequality and Lemma \ref{lemheat},
\begin{align*}
\|\partial_x^lg(t)\|_{L^p}\lesssim \int_0^t \|\partial_x^l \mathbf{K}(t-\tau)\|_{L^p}\|R(\tau)\|_{L^1}d\tau\lesssim \int_0^t (t-\tau)^{\frac{1}{2p}-\frac{1+l}{2}}\|R(\tau)\|_{L^1}d\tau.
\end{align*}
We divide the integral into two parts. When $\tau\in[0,{t/2}]$, we have $(t-\tau)^{\frac{1}{2p}-\frac{1+l}{2}}\sim t^{\frac{1}{2p}-\frac{1+l}{2}}$. And when $\tau\in[{t/2},t]$, one has $\|R(\tau)\|_{L^1}\lesssim t^{-1}\|R\|_{X_T^{1,1}}$. This yields that
\begin{align}\label{in}
\|\partial_x^lg(t)\|_{L^p}\lesssim t^{\frac{1}{2p}-\frac{1+l}{2}}(\|R\|_{L^1_{T}}+\|R\|_{X_T^{1,1}}).
\end{align}
It remains to check the H\"{o}lder norms. Let $0<s<t<T$ and $a=t-s$, one has
\begin{align*}
&\partial_x^l g(t,x)-\partial_x^lg(s,x)\\
&=\int_0^s\int_{\mathbb{R}}\delta_a\partial_x^l\mathbf{K}(s-\tau,x-y)R(\tau,y)dyd\tau+\int_s^t\int_{\mathbb{R}} \partial_x^l\mathbf{K}(t-\tau,x-y)R(\tau,y)dyd\tau:=\mathcal{R}_1+\mathcal{R}_2.
\end{align*}
By Young's inequality and Lemma \ref{lemheat},
\begin{align*}
\|\mathcal{R}_1\|_{L^p}\lesssim \int_0^s\|\delta_a \partial_x^l\mathbf{K}(s-\tau)\|_{L^p}\|R(\tau)\|_{L^1}d\tau\lesssim \int_0^s(s-\tau)^{\frac{1}{2p}-\frac{1+l}{2}}\min\{1,\frac{a}{s-\tau}\}\|R(\tau)\|_{L^1}d\tau.
\end{align*}
We discuss $\tau\in[0,s/2]$ and $\tau\in[s/2,s]$ separately and get that
\begin{align*}
\|\mathcal{R}_1\|_{L^p}\lesssim a^\alpha s^{\frac{1}{2p}-\frac{1+l}{2}-\alpha}\int_0^{s/2}\|R(\tau)\|_{L^1}d\tau+s^{-1}\|R\|_{L^1_{T}}\int_{s/2}^s (s-\tau)^{\frac{1}{2p}-\frac{1+l}{2}} \min\{1,\frac{a}{s-\tau}\}d\tau.
\end{align*}
By a change of variable, we can write
\begin{align*}
&\int_{s/2}^s (s-\tau)^{\frac{1}{2p}-\frac{1+l}{2}} \min\{1,\frac{a}{s-\tau}\}d\tau=\int_0^{s/2}\tau^{\frac{1}{2p}-\frac{1+l}{2}} \min\{1,\frac{a}{\tau}\}d\tau\\
&\lesssim \mathbf{1}_{a\leq s/2}\left(\int_0^a\tau^{\frac{1}{2p}-\frac{1+l}{2}} d\tau+a\int_a^{s/2}\tau^{\frac{1}{2p}-\frac{1+l}{2}-1} d\tau\right)+\mathbf{1}_{a\geq s/2}\int_0^s\tau^{\frac{1}{2p}-\frac{1+l}{2}} d\tau\\
&\lesssim \mathbf{1}_{a\leq s/2} a^{\frac{1}{2p}+\frac{1-l}{2}}+\mathbf{1}_{a\geq s/2}s^{\frac{1}{2p}+\frac{1-l}{2}}\lesssim a^\alpha s^{\frac{1}{2p}+\frac{1-l}{2}-\alpha},
\end{align*}
provided $\frac{1}{2p}+\frac{1-l}{2}-\alpha\geq0$. Hence
\begin{align}\label{r1}
\|\mathcal{R}_1\|_{L^p}\lesssim a^\alpha s^{\frac{1}{2p}-\frac{1+l}{2}-\alpha}(\|R\|_{L^1_{T}}+\|R\|_{X_T^{1,1}}).
\end{align}
Then we deal with $\mathcal{R}_2$. By Young's inequality,
$$
\|\mathcal{R}_2\|_{L^p}\lesssim\int_s^t \| \partial_x^l\mathbf{K}(t-\tau)\|_{L^p}\|R(\tau)\|_{L^1}d\tau\lesssim \int_s^t (t-\tau)^{\frac{1}{2p}-\frac{1+l}{2}}\|R(\tau)\|_{L^1}d\tau.
$$
We consider two cases. If $s<t/2$, then one has $a\sim t$. Hence $$
\int_s^t (t-\tau)^{\frac{1}{2p}-\frac{1+l}{2}}\|R(\tau)\|_{L^1}d\tau\lesssim t^{\frac{1}{2p}-\frac{1+l}{2}}(\|R\|_{L^1_{T}}+\|R\|_{X_T^{1,1}})
\lesssim a ^{\alpha}s^{\frac{1}{2p}-\frac{1+l}{2}-\alpha}(\|R\|_{L^1_{T}}+\|R\|_{X_T^{1,1}}),
$$
provided $\frac{1}{2p}-\frac{1+l}{2}-\alpha<0$.\\
If $s\geq t/2$, we can write
\begin{align*}
\int_s^t (t-\tau)^{\frac{1}{2p}-\frac{1+l}{2}}\|R(\tau)\|_{L^1}d\tau&\lesssim \int_s^t (t-\tau)^{\frac{1}{2p}-\frac{1+l}{2}}d\tau t^{-1}\|R\|_{X_T^{1,1}}\lesssim a^{\frac{1}{2p}+\frac{1-l}{2}}t^{-1}\|R\|_{X_T^{1,1}}\\
&\lesssim a ^{\alpha}s^{\frac{1}{2p}-\frac{1+l}{2}-\alpha}\|R\|_{X_T^{1,1}}.
\end{align*}
We conclude that
$$
\|\mathcal{R}_2\|_{L^p}\lesssim a ^{\alpha}s^{\frac{1}{2p}-\frac{1+l}{2}-\alpha}(\|R\|_{L^1_{T}}+\|R\|_{X_T^{1,1}}).
$$
Combining this with \eqref{r1} yields that
\begin{align}\label{in2}
\|\partial_x^l g(t)-\partial_x^lg(s)\|_{L^p}\lesssim (t-s) ^{\alpha}s^{\frac{1}{2p}-\frac{1+l}{2}-\alpha}(\|R\|_{L^1_{T}}+\|R\|_{X_T^{1,1}}).
\end{align}
Then we complete the proof in view of \eqref{in} and \eqref{in2}.
\end{proof}
\begin{remark}
The result in Lemma \ref{rem} is still true for $g(t,x)=\int_0^t\int \mathbf{G}(t-\tau,x,x-y)R(\tau,y)dyd\tau$, where $\mathbf{G}(t,x,y)$ satisfies that
\begin{align*}
&\sup_{z}\|\partial^l_2\mathbf{G}(t,z,\cdot)\|_{L^p}\lesssim t^{-\frac{1+l}{2}+\frac{1}{2p}},\\
&\sup_{z}\|\partial^l_2\mathbf{G}(t,z,\cdot)-\partial^l_2\mathbf{G}(s,z,\cdot)\|_{L^p}\lesssim s^{-\frac{1+l}{2}+\frac{1}{2p}}\min\{1,\frac{t-s}{s}\}, \ \text{for}\ l=0,1,\ p\in[1,\infty].
\end{align*}
\end{remark}
\section{Main lemma and its proof}\label{secmain}
We consider the systems \eqref{inns} and \eqref{cpns} with jump initial data $v_0$. Since the equation for $v$ is hyperbolic, $v(t)$ will have jumps at some points for any $t>0$. To deal with $u$ in \eqref{inns} and $(u,\theta)$ in \eqref{cpns}, we need to consider the following parabolic equation with a jump coefficient.
\begin{equation}\label{eqpara}
\begin{aligned}
&\partial_t f(t,x) -\partial_x(\phi(x)\partial_xf(t,x))=\partial_x F(t,x)+R(t,x),\\
&f(0,x)=f_0(x).
\end{aligned}
\end{equation}
Here the coefficient function satisfies $0<c_0\leq \phi(x)\leq c_0^{-1}, \forall x\in \mathbb{R}$, and there exist $\varepsilon>0$, $\{a_n\}_{n=1}^N\subset\mathbb{R}$ satisfying
\begin{equation}\label{conphi}
\begin{aligned}
&\min_{i\neq j}|a_i-a_j|\geq 10\varepsilon, \ \ \ \ \ \|\phi'\|_{L^\infty(\mathbb{R}\backslash\mathbb{I}_\varepsilon)}\leq C_\phi,\\
& \phi(x)= \begin{cases}
c_n^+,\ \ x\in [a_n,a_n+\varepsilon],\\
c_n^-,\ \ x\in [a_n-\varepsilon,a_n),
\end{cases} \ \ n=1,\cdots,N,
\end{aligned}
\end{equation}
for some $\{c_n^\pm\}_{n=1}^N\subset\mathbb{R}^+$. Here we denote $\mathbb{I}_\varepsilon=\cup_{n=1}^N[a_n-\varepsilon,a_n+\varepsilon]$.
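A minimal admissible example, with a single jump at the origin ($N=1$, $a_1=0$), is the piecewise constant coefficient
\begin{equation*}
\phi(x)=c_1^{-}\,\mathbf{1}_{x<0}+c_1^{+}\,\mathbf{1}_{x\geq 0},\ \ \ \ c_1^{\pm}\in[c_0,c_0^{-1}],
\end{equation*}
which satisfies \eqref{conphi} for every $\varepsilon>0$ with $C_\phi=0$; up to its value at the jump point, this is exactly the comparison coefficient $\bar\phi$ used for the estimates near the jump points below.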
\begin{lemma}\label{lemma}Let $f $ be a solution to \eqref{eqpara} with initial data $f_0=\partial_x\bar f_0$. There exists $T^*=T^*(c_0,C_\phi,\varepsilon)>0$ such that for any $0<T<T^*$,
\begin{align*}
\sum_{\star\in\{L^2_{T},X_T^{{1}/{2},2}\}}\|f\|_{\star}+\sum_{\star\in\{L^\frac{6}{5}_{T},X_T^{\frac{5}{6},\frac{6}{5}}\}} \|\partial_x f\|_{\star}\lesssim \|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}+ \sum_{\star\in\{L^\frac{6}{5}_{T},X_T^{\frac{5}{6},\frac{6}{5}}\}}\| F\|_{\star}+T^{\frac{1}{4}}\sum_{\star\in\{{L^1_{T}},{X_T^{1,1}}\}}\|R\|_{\star}.
\end{align*}
Moreover, if $R=0$, we have
\begin{align*}
&\quad\quad\quad\quad\quad\|\partial_x f\|_{X_T^{1-\gamma,\infty}}\lesssim \|\bar f_0\|_{\dot C^{2\gamma}}+\|F\|_{X_T^{1-\gamma,\infty}},\\
&\|\partial_xf\|_{L^2_{T}}+\|\partial_xf\|_{X_T^{\frac{1}{2},2}}+\|\partial_x f\|_{X_T^{\frac{3}{4},\infty}}\lesssim \|f_0\|_{L^2}+\|F\|_{L^2_{T}}+\|F\|_{X_T^{\frac{1}{2},2}}+\|F\|_{X_T^{\frac{3}{4},\infty}}.
\end{align*}
\end{lemma}
\begin{proof}
Applying Lemmas \ref{lemXTjum}--\ref{lemLPint}, we obtain that there exists $T_0=T_0(C_\phi,\varepsilon)>0$ such that, for any $0<T\leq T_0$,
\begin{align*}
&\sum_{\star\in\{L^2_{T},X_T^{\frac{1}{2},2}\}}\|f\|_{\star}+\sum_{\star\in\{L^\frac{6}{5}_{T},X_T^{\frac{5}{6},\frac{6}{5}}\}} \|\partial_x f\|_{\star}\\
&\ \leq \mathbf{C} \|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}+ \mathbf{C}\sum_{\star\in\{L^\frac{6}{5}_{T},X_T^{\frac{5}{6},\frac{6}{5}}\}} \| F\|_{\star}+\mathbf{C}T^{\frac{1}{4}}\sum_{\star\in\{{L^1_{T}},{X_T^{1,1}}\}}\|R\|_{\star}
+\frac{1}{5}(\|f\|_{L^2_{T}}+\sum_{\star\in\{L^\frac{6}{5}_{T},X_T^{\frac{5}{6},\frac{6}{5}}\}} \|\partial_x f\|_{\star}),
\end{align*}
and, if $R=0$,
\begin{align*}
&\| \partial_x f\|_{X_T^{1-\gamma,\infty}}\leq \mathbf{C}(\|\bar f_0\|_{\dot C^{2\gamma}}+\|F\|_{X_T^{1-\gamma,\infty}}+T^\frac{1}{3}\|\partial_x f\|_{X_T^{1-\gamma,\infty}}),\\
&\|f\|_{L^6_{T}}+\sum_{\star\in\{L^2_{T},X_T^{\frac{1}{2},2},X_T^{\frac{3}{4},\infty}\}}\|\partial_xf\|_{\star}\leq \mathbf{C}\|f_0\|_{L^2}+\mathbf{C}\sum_{\star\in\{L^2_{T},X_T^{\frac{1}{2},2},X_T^{\frac{3}{4},\infty}\}}\|F\|_{\star}\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad+\frac{1}{10}(\|f\|_{L^6_{T}}+\sum_{\star\in\{L^2_{T},X_T^{\frac{1}{2},2},X_T^{\frac{3}{4},\infty}\}}\|\partial_x f\|_{\star}).
\end{align*}
where the constant $\mathbf{C}>0$ depends only on $c_0$, $C_\phi$ and $\varepsilon$. The stated estimates follow by taking $T^*=\min \{\frac{1}{(\mathbf{C}+1)^{10}},T_0\}$. This completes the proof.
\end{proof}
We first introduce some elementary lemmas that will be used frequently in our proof.
\begin{lemma}\label{lemXTl}
Let $g(t,x)=\int_{\mathbb{R}} \partial_x\mathbf{K}(t,x-y) \bar g_0(y) dy$. Then for any $T>0$,
\begin{equation*}
\begin{aligned}
&\|\partial_x g\|_{X_T^{1-\gamma,\infty}}\lesssim \|\bar g_0\|_{\dot C^{2\gamma}}, \\
&\|\partial_x g\|_{X_T^{\frac{1}{2},2}}+\|\partial_x g\|_{X_T^{\frac{3}{4},\infty}}\lesssim \|\partial_x \bar g_0\|_{L^2},\\
&\|g\|_{X_T^{\frac{1}{2},2}}\lesssim \|\bar g_0\|_{L^2},\ \ \ \ \|\partial_x g\|_{X_T^{\frac{5}{6},\frac{6}{5}}}\lesssim \|\bar g_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}.
\end{aligned}
\end{equation*}
\end{lemma}
\begin{proof}
Observe that $\int \partial_x^2\mathbf{K}(t,x)dx=0$. Hence
\begin{align*}
\partial_x g(t,x)=\int_{\mathbb{R}} \partial_x^2\mathbf{K}(t,x-y) (\bar g_0(y)-\bar g_0(x)) dy=-\int_{\mathbb{R}} \partial_x^2\mathbf{K}(t,z) \Delta_z\bar g_0(x) dz,
\end{align*}
where we denote $\Delta_z\bar g_0(x)=\bar g_0(x)-\bar g_0(x-z)$.
Note that $|\Delta_z\bar g_0(x)|\leq |z|^{2\gamma}\|\bar g_0\|_{\dot C^{2\gamma}}$. Hence, by Lemma \ref{lemheat}, we have
\begin{align*}
&\| \partial_x g(t)\|_{L^\infty}\lesssim \int_{\mathbb{R}}|\partial_x^2\mathbf{K}(t,x)||x|^{2\gamma}dx\|\bar g_0\|_{\dot C^{2\gamma}}\lesssim t^{\gamma-1}\|\bar g_0\|_{\dot C^{2\gamma}}.
\end{align*}
Moreover, for any $0<s<t<T$,
\begin{align*}
\|\partial_x g(t)-\partial_x g(s)\|_{L^\infty}&\lesssim \int_{\mathbb{R}} |\partial_{x}^2\mathbf{K}(t,x)-\partial_{x}^2\mathbf{K}(s,x)||x|^{2\gamma}dx \|\bar g_0\|_{\dot C^{2\gamma}}\\
&\lesssim s^{\gamma-1}\min\{1,\frac{t-s}{s}\}\|\bar g_0\|_{\dot C^{2\gamma}}\lesssim(t-s)^\alpha s^{\gamma-1-\alpha}\|\bar g_0\|_{\dot C^{2\gamma}}.
\end{align*}
Hence
\begin{equation}\label{1111}
\|\partial_x g\|_{X_T^{1-\gamma,\infty}}\lesssim \|\bar g_0\|_{\dot C^{2\gamma}}.
\end{equation}
By Lemma \ref{lemheat} and Young's inequality, we have
\begin{equation}\label{ini}
\begin{aligned}
&t^\frac{1}{2}\|\partial_x g(t)\|_{L^2}+t^\frac{3}{4}\|\partial_x g\|_{L^\infty}\lesssim (t^\frac{1}{2}\|\partial_x \mathbf{K}(t)\|_{L^1}+t^\frac{3}{4}\|\partial_x \mathbf{K}(t)\|_{L^2})\|\partial_x \bar g_0\|_{L^2}\lesssim \|\partial_x \bar g_0\|_{L^2},\\
&t^\frac{1}{2}\| g(t)\|_{L^2}\lesssim t^\frac{1}{2}\|\partial_x \mathbf{K}(t)\|_{L^1}\|\bar g_0\|_{L^2}\lesssim \|\bar g_0\|_{L^2},
\end{aligned}
\end{equation}and
\begin{equation}\label{ini2}
\begin{aligned}
&s^{\frac{1}{2}+\alpha}\|\partial_x g(t)-\partial_x g(s)\|_{L^2}+s^{\frac{3}{4}+\alpha}\|\partial_x g(t)-\partial_x g(s)\|_{L^\infty}\\
&\quad\quad\lesssim (s^{\frac{1}{2}+\alpha}\|\partial_x \mathbf{K}(t)-\partial_x \mathbf{K}(s)\|_{L^1}+s^{\frac{3}{4}+\alpha}\|\partial_x \mathbf{K}(t)-\partial_x \mathbf{K}(s)\|_{L^2})\|\partial_x \bar g_0\|_{L^2}\\
&\quad\quad\lesssim (t-s)^\alpha\|\partial_x \bar g_0\|_{L^2}.
\end{aligned}
\end{equation}
Similarly, it is easy to check that
\begin{equation}\label{ini3}
s^{\frac{1}{2}+\alpha}\|g(t)-g(s)\|_{L^2}\lesssim (t-s)^\alpha\|\bar g_0\|_{L^2}.
\end{equation}
By \eqref{1111}-\eqref{ini3}, one has
\begin{align}\label{re1}
\|\partial_x g\|_{X_T^{1-\gamma,\infty}}\lesssim \|\bar g_0\|_{\dot C^{2\gamma}},\ \ \ \ \ \|\partial_x g\|_{X_T^{\frac{1}{2},2}}+\|\partial_x g\|_{X_T^{\frac{3}{4},\infty}}\lesssim \|\partial_x \bar g_0\|_{L^2},\ \ \ \ \|g\|_{X_T^{\frac{1}{2},2}}\lesssim \|\bar g_0\|_{L^2}.
\end{align}
It remains to estimate $\|\partial_x g \|_{X_T^{\frac{5}{6},\frac{6}{5}}}$. By H\"{o}lder's inequality and Lemma \ref{lemheat},
\begin{align*}
\|\partial_x g(t)\|_{L^\frac{6}{5}}&\lesssim \int_{\mathbb{R}}|\partial_x^2\mathbf{K}(t,z)|\|\Delta_z\bar g_0\|_{L^\frac{6}{5}}dz\lesssim \left(\int_{\mathbb{R}} |\partial_x^2\mathbf{K}(t,z)|^6|z|^7dz\right)^\frac{1}{6}\left(\int_{\mathbb{R}} \frac{\|\Delta_z\bar g_0\|_{L^{6/5}}^{6/5}}{|z|^{7/5}}dz\right)^{5/6}\\
&\lesssim t^{-\frac{5}{6}}\|\bar g_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}.
\end{align*}
Moreover, for any $0<s<t<T$,
\begin{align*}
\|\partial_x g(t)-\partial_x g(s)\|_{L^\frac{6}{5}}&\lesssim \int_{\mathbb{R}}|\partial_x^2\mathbf{K}(t,z)-\partial_x^2\mathbf{K}(s,z)|\|\Delta_z\bar g_0\|_{L^\frac{6}{5}}dz\\
&\lesssim \left(\int_{\mathbb{R}} |\partial_x^2\mathbf{K}(t,z)-\partial_x^2\mathbf{K}(s,z)|^6|z|^7dz\right)^\frac{1}{6}\left(\int_{\mathbb{R}} \frac{\|\Delta_z\bar g_0\|_{L^{6/5}}^{6/5}}{|z|^{7/5}}dz\right)^{5/6}\\
&\lesssim s^{-\frac{5}{6}-\alpha}(t-s)^\alpha\|\bar g_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}.
\end{align*}
Then we get
$$
\|\partial_x g\|_{X_T^{\frac{5}{6},\frac{6}{5}}}\lesssim \|\bar g_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}.
$$
Combining this with \eqref{re1}, we complete the proof.
\end{proof}
\begin{remark}\label{remlem3.2}
The result in Lemma \ref{lemXTl} is still true for $g(t,x)=\int_{\mathbb{R}}\partial_y\mathbf{G}(t,x-y,x)\bar{g}_0(y)dy$, where $\mathbf{G}(t,x,z)$ satisfies that $\int \partial_x^2 \mathbf{G}(t,x,z)dx=0$, and
\begin{align*}
&\max_{l=0,1}\sup_z|\partial_x^m\partial_z^l\mathbf{G}(t,z,x)|\lesssim |\partial_x^m\mathbf{K}(\zeta t,x)|,\\
&\max_{l=0,1}\sup_z|\partial_x^m\partial_z^l\mathbf{G}(t,z,x)-\partial_x^m\partial_z^l\mathbf{G}(s,z,x)|\lesssim |\partial_x^m\mathbf{K}(\zeta s,x)-\partial_x^m\mathbf{K}(\zeta t,x)|,\ \ \ \ m=1,2,
\end{align*}
for a universal constant $\zeta>0$.
\end{remark}
The following lemma is a consequence of Hardy's inequality.
\begin{lemma}\label{lemhardy}
For any non-negative measurable function $h:\mathbb{R}\to \mathbb{R}^+$, there holds for any $p\in(1,\infty)$,
\begin{align*}
\int_0^\infty\left(\int_0^\infty h(x)\frac{dx}{x+y}\right)^pdy\lesssim \int_0^\infty h(x)^pdx.
\end{align*}
\end{lemma}
\begin{proof}
Since $\frac{1}{x+y}\leq \min\{\frac{1}{x},\frac{1}{y}\}$ for $x,y>0$, by Hardy's inequality \cite{Hardy},
\begin{align*}
\int_0^\infty\left(\int_0^\infty h(x)\frac{dx}{x+y}\right)^pdy\leq& \int_0^\infty\left(\int_0^y h(x){dx}\right)^p\frac{dy}{y^p}+\int_0^\infty\left(\int_y^\infty h(x)\frac{dx}{x}\right)^p{dy}\\
\leq &\left(\frac{p^p}{(p-1)^p}+p^p\right)\int_0^\infty h(x)^p dx.
\end{align*}
This completes the proof.
\end{proof}
The following lemma helps us to control boundary terms in \eqref{formubd3}.
\begin{lemma}\label{lemupg}
Let $r\in[1,+\infty), p,q\in(1,+\infty)$ be such that $\frac{1}{p}+1=\frac{1}{q}+\frac{1}{r}$. Suppose that $\tilde G(t,x)$ satisfies
\begin{align*}
\int_0^\infty |\tilde G(t,x)|^r dt \lesssim \frac{1}{|x|}.
\end{align*}
Then for $$
g(t,x)=\mathbf{1}_{x\geq 0}\int_0^t \int_0^{+\infty} \tilde G(t-\tau,x+y) f(\tau,y)dyd\tau,
$$
there holds
$$
\|g\|_{L^p_{T}}\lesssim \|f\|_{L^q_{T}}.
$$
\end{lemma}
\begin{proof}
By Young's inequality,
\begin{align*}
\|g\|_{L^p_{T}}&\lesssim \left\|\int_0^\infty\left( \int_0^T |\tilde G(t,x+y)|^r dt\right)^\frac{1}{r}\|f(y)\|_{L^q_t}dy\right\|_{L^p_x(\mathbb{R}^+)}\\
&\lesssim \left\|\int_0^\infty \|f(y)\|_{L^q_t}\frac{dy}{|x+y|^{\frac{1}{r}}}\right\|_{L^p_x(\mathbb{R}^+)}.
\end{align*}
When $r>1$, by the weak Young inequality we obtain that
\begin{align*}
\left\|\int_0^\infty \|f(y)\|_{L^q_{t}}\frac{dy}{|x+y|^{\frac{1}{r}}}\right\|_{L^p_x(\mathbb{R}^+)}\lesssim \|f\|_{L^q_{T}} \||x |^{-\frac{1}{r}}\|_{L^r_w}\lesssim \|f\|_{L^q_{T}}.
\end{align*}
When $r=1$, we obtain the result by Lemma \ref{lemhardy}.
This completes the proof.
\end{proof}
\subsection{Estimates near jump points}
We first study estimates near jump points. Let $\mathbb{I}_{\varepsilon}^n=[a_n-\varepsilon,a_n+\varepsilon]$. Without loss of generality, we assume $a_n=0$. Then we can write equation \eqref{eqpara} as
\begin{equation}\label{jueq}
\begin{aligned}
&\partial_t f -\partial_x(\bar \phi f_x)=\partial_x \tilde F+R ,\\
&f(0,x)=f_0(x),
\end{aligned}
\end{equation}
where we denote $\tilde F=F+(\phi-\bar \phi)f_x$ with $\bar \phi=\mathbf{1}_{x\leq 0}c_n^-+\mathbf{1}_{x>0}c_n^+$. For simplicity, we omit the subscript $n$ in our proof and denote $c_\pm=c^\pm_n$.
By Lemma \ref{lemformula}, the solution of \eqref{jueq} has the formula $f=f^+\mathbf{1}_{x\geq 0}+f^-\mathbf{1}_{x< 0}$, where
\begin{align}\label{soforjum}
f^\pm (t,x)=&f_L^\pm+f_N^\pm+f_R^\pm.
\end{align}
Here $f_L^\pm=f_1^\pm+f_{1,B}^\pm$, $f_N^\pm=f_2^\pm+f_{2,B}^\pm$, $f_R^\pm=f_3^\pm+f_{3,B}^\pm$, with
\begin{align*}
&f_1^\pm(t,x)=\int_{\mathbb{R}^\pm}( \mathbf{K}(c_\pm t,x-y)-\mathbf{K}(c_\pm t,x+y))f_0(y) dy,\\
&f_2^\pm(t,x)=\int_0^t \int_{\mathbb{R}^\pm}( \mathbf{K}(c_\pm (t-\tau),x-y)-\mathbf{K}(c_\pm (t-\tau),x+y)) \partial_x\tilde F(\tau,y) dyd\tau,\\
&f_3^\pm(t,x)=\int_0^t \int_{\mathbb{R}^\pm}( \mathbf{K}(c_\pm (t-\tau),x-y)-\mathbf{K}(c_\pm (t-\tau),x+y)) R(\tau,y) dyd\tau,\\
& f_{m,B}^\pm=\frac{-2\sqrt{c_\pm}}{\sqrt{c_+}+\sqrt{c_-}}\int_0^t\mathbf{K}(c_\pm(t-\tau),x)(c_+\partial_xf_m^+(\tau,0)-c_-\partial_xf_m^-(\tau,0))d\tau,\ \ m=1,2,3.
\end{align*}
Here the subscript $B$ in $f_{m,B}$ indicates that these are boundary terms. Denote $\beta_m(\tau,y)=c_+f_m^+(\tau,y)+c_-f_m^-(\tau,-y)$; then
\begin{align*}
f_{m,B}^+&=C_+ \int_0^t\int_0^\infty\partial_y\left(\mathbf{K}(c_+(t-\tau),x+ y)\partial_x\beta_m(\tau,y)\right)dyd\tau\\
&=-C_+\int_0^t\int_0^\infty\partial_x^2\mathbf{K}(c_+(t-\tau),x+ y)\beta_m(\tau,y)dyd\tau\\
&\quad\quad+C_+\int_0^t\int_0^\infty \mathbf{K}(c_+(t-\tau),x+ y)\partial_x^2\beta_m(\tau,y)dyd\tau,
\end{align*}
where we denote $C_\pm=\frac{-2\sqrt{c_\pm}}{\sqrt{c_+}+\sqrt{c_-}}$.
Note that
\begin{align*}
&\partial_x ^2\beta_1(\tau,y)=\partial_t f_1^+(\tau,y)+\partial_t f_1^-(\tau,-y),\\
&\partial_x ^2\beta_2(\tau,y)=\partial_t f_2^+(\tau,y)-\partial_x \tilde F(\tau,y)+\partial_t f_2^-(\tau,-y)-\partial_x \tilde F(\tau,-y),\\
&\partial_x ^2\beta_3(\tau,y)=\partial_t f_3^+(\tau,y)-R (\tau,y)+\partial_t f_3^-(\tau,-y)-R (\tau,-y).
\end{align*}
Hence,
\begin{equation}\label{formubd3}
\begin{aligned}
f_{1,B}^+(t,x)=&C_+(c_+-c_-)\int_0^t \int_0^\infty\partial_x^2\mathbf{K}(c_+(t-\tau),x+ y)f_1^-(\tau,-y)dyd\tau\\
&-C_+\int_0^\infty \mathbf{K}(c_+t,x+y)(f_0(y)+f_0(-y))dy,\\
f_{2,B}^+(t,x)=&C_+(c_+-c_-)\int_0^t \int_0^\infty\partial_x^2\mathbf{K}(c_+(t-\tau),x+ y)f_2^-(\tau,-y)dyd\tau\\
&-C_+\int_0^t \int_0^\infty \mathbf{K}(c_+(t-\tau),x+y)((\partial_x\tilde F)(\tau,y)+(\partial_x\tilde F)(\tau,-y))dyd\tau,\\
f_{3,B}^+(t,x)=&C_+(c_+-c_-)\int_0^t \int_0^\infty\partial_x^2\mathbf{K}(c_+(t-\tau),x+ y)f_3^-(\tau,-y)dyd\tau\\
&-C_+\int_0^t \int_0^\infty \mathbf{K}(c_+(t-\tau),x+y)(R(\tau,y)+R(\tau,-y))dyd\tau.
\end{aligned}
\end{equation}
\begin{remark}
Note that we can recover the standard heat kernel from our formula when $c_+=c_-=c$. More precisely, we have
\begin{align*}
&f_L(t,x)=\int_{\mathbb{R}} \mathbf{K}(ct,x-y) f_0(y) dy, \ \ f_N(t,x)=\int_0^t \int_{\mathbb{R}} \mathbf{K}(c(t-\tau),x-y) \partial_x F(\tau,y) dyd\tau,\\
&f_R(t,x)=\int_0^t \int_{\mathbb{R}} \mathbf{K}(c(t-\tau),x-y) R(\tau,y) dyd\tau.
\end{align*}
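For the linear part this can be checked directly: when $c_+=c_-=c$ we have $C_\pm=-1$ and $c_+-c_-=0$, so \eqref{formubd3} reduces to $f_{1,B}^{+}(t,x)=\int_0^\infty \mathbf{K}(ct,x+y)\big(f_0(y)+f_0(-y)\big)dy$, and hence, for $x\geq 0$,
\begin{align*}
f_L^{+}(t,x)&=\int_0^\infty\big(\mathbf{K}(ct,x-y)-\mathbf{K}(ct,x+y)\big)f_0(y)dy+\int_0^\infty \mathbf{K}(ct,x+y)\big(f_0(y)+f_0(-y)\big)dy\\
&=\int_0^\infty \mathbf{K}(ct,x-y)f_0(y)dy+\int_{-\infty}^{0} \mathbf{K}(ct,x-y)f_0(y)dy=\int_{\mathbb{R}} \mathbf{K}(ct,x-y)f_0(y)dy,
\end{align*}
which is the formula for $f_L$ above; the case $x<0$ and the terms $f_N$, $f_R$ are verified in the same way.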
\end{remark}
The following lemma will help us to analyze the regularity of the solution near the jump points.
\begin{lemma}\label{lemXTjum}
Let $f$ be a solution to \eqref{eqpara} with initial data $f_0=\partial_x\bar f_0$. Then there exists $T>0$ such that
\begin{equation*}
\|f\|_{X_T^{\frac{1}{2},2}(\mathbb{I}_{\varepsilon})}+\|\partial_x f\|_{X_T^{\frac{5}{6},\frac{6}{5}}(\mathbb{I}_{\varepsilon})}\lesssim \|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}+\|F\|_{X_T^{\frac{5}{6},\frac{6}{5}}}+T^\frac{1}{3}\|f_x\|_{X_T^{\frac{5}{6},\frac{6}{5}}}+T^\frac{1}{4}(\|R\|_{L^1_{T}}+\|R\|_{X_T^{1,1}}).
\end{equation*}
Moreover, if $R=0$,
\begin{align*}
&\|\partial_x f\|_{X_T^{1-\gamma,\infty}(\mathbb{I}_{\varepsilon})}\lesssim \|\bar f_0\|_{\dot C^{2\gamma}}+\|F\|_{X_T^{1-\gamma,\infty}}+ T^{\frac{1}{3}}\|\partial_x f\|_{X_T^{1-\gamma,\infty}},\\
&\|\partial_x f\|_{X_T^{\frac{1}{2},2}(\mathbb{I}_{\varepsilon})}+\|\partial_x f\|_{X_T^{\frac{3}{4},\infty}(\mathbb{I}_{\varepsilon})}\lesssim \|\partial_x\bar f_0\|_{L^2}+\|F\|_{X_T^{\frac{3}{4},\infty}}+ T^{\frac{1}{3}}(\|\partial_x f\|_{X_T^{\frac{1}{2},2}}+\|\partial_x f\|_{X_T^{\frac{3}{4},\infty}}).
\end{align*}
\end{lemma}
\begin{proof}Recall that $\mathbb{I}_\varepsilon=\cup_{n=1}^N\mathbb{I}_\varepsilon^n$. It suffices to consider one interval $\mathbb{I}^n_\varepsilon=[a_n-\varepsilon,a_n+\varepsilon]$. Without loss of generality, we assume $a_n=0$. The solution has formula \eqref{soforjum}.
Using integration by parts, one has
\begin{align*}
f_1^+(t,x)&=\int_{\mathbb{R}^+}( \partial_x \mathbf{K}(c_+ t,x-y)+\partial_x \mathbf{K}(c_+ t,x+y))\bar f_0(y) dy\\
&=\int_{\mathbb{R}}\partial_x \mathbf{K}(c_+ t,x-y)\mathbf{f}_0(y) dy,
\end{align*}
where $\mathbf{f}_0$ is the even extension of $\bar f_0\mathbf{1}_{y\geq 0}$:
$$
\mathbf{f}_0(y)=\bar f_0(y)\mathbf{1}_{y\geq 0}+\bar f_0(-y)\mathbf{1}_{y<0}.
$$
It is easy to check that
$$
\|\mathbf{f}_0\|_{\dot C^{2\gamma}}\leq \|\bar f_0\|_{\dot C^{2\gamma}}.
$$
Applying Lemma \ref{lemXTl} and Lemma \ref{lemsob}, we get
\begin{equation}\label{f1XT}
\begin{aligned}
& \|\partial_x f_1^+\|_{X_T^{1-\gamma,\infty}(\mathbb{R}^+)}\lesssim \|\mathbf{f}_0\|_{\dot C^{2\gamma}}\lesssim \|\bar f_0\|_{\dot C^{2\gamma}},\\
&\|\partial_x f_1^+\|_{X_T^{\frac{1}{2},2}(\mathbb{R}^+)}+\|\partial_x f_1^+\|_{X_T^{\frac{3}{4},\infty}(\mathbb{R}^+)}\lesssim \|\partial_x \bar f_0\|_{L^2},\ \ \ \ \ \ \ \|f_1^+\|_{X_T^{\frac{1}{2},2}(\mathbb{R}^+)}\lesssim \|\bar f_0\|_{L^2},\\
&\|\partial_x f_1^+\|_{X_T^{\frac{5}{6},\frac{6}{5}}(\mathbb{R}^+)}\lesssim \left(\int_{\mathbb{R}}\int_0^\infty|\mathbf{f}_0(x)-\mathbf{f}_0(x-z)|^\frac{6}{5} \frac{dxdz}{|z|^{7/5}}\right)^{5/6}\lesssim \|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}.
\end{aligned}
\end{equation}
Recall the formula of the boundary term $f_{1,B}^+$ in \eqref{formubd3}; using integration by parts, one has
\begin{equation*}
\begin{aligned}
f_{1,B}^+=&C_+(c_+-c_-)\int_0^t \int_0^\infty\partial_x^2\mathbf{K}(c_+(t-\tau),x+ y)f_1^-(\tau,-y)dyd\tau\\
&-C_+\int_0^\infty \partial_x\mathbf{K}(c_+t,x+y)(\bar f_0(y)-\bar f_0(-y))dy,\\
:=&I_{1}+I_{2}.
\end{aligned}
\end{equation*}
Applying Lemma \ref{mainlem}, we obtain that for $(\sigma,p)\in\{(1-\gamma,\infty),(\frac{1}{2},2),(\frac{3}{4},\infty),(\frac{5}{6},\frac{6}{5})\}$,
$$\|\partial_x^lI_{1}\|_{X_T^{\sigma,p}}\lesssim \|\partial_x^l f_1\|_{X_T^{\sigma,p}},\ \ l=0,1.$$
Moreover, observe that
\begin{align*}
I_{2}=-C_+\int_{\mathbb{R}} \partial_x \mathbf{K}(c_+t,x-y)\bar{\mathbf{f}}_0(y)dy,
\end{align*}
where $\bar{\mathbf{f}}_0(y)=(\bar f_0(-y)-\bar f_0(y))\mathbf{1}_{y\leq 0}$.
Following the proof of Lemma \ref{lemXTl}, we obtain
\begin{equation*}
\begin{aligned}
&\|\partial_xI_{2}\|_{X_T^{1-\gamma,\infty}}\lesssim \|\bar{\mathbf{f}}_0\|_{\dot C^{2\gamma}}\lesssim \|\bar f_0\|_{\dot C^{2\gamma}},\\
&\|\partial_xI_{2}\|_{X_T^{\frac{1}{2},2}}+\|\partial_xI_{2}\|_{X_T^{\frac{3}{4},\infty}}\lesssim \|\partial_x (\bar f_0(-y)-\bar f_0(y))\mathbf{1}_{y\leq 0}\|_{L^2}\lesssim \|\partial_x \bar f_0\|_{L^2},\\
&\|I_{2}\|_{X_T^{\frac{1}{2},2}}\lesssim \|\bar f_0\|_{L^2}\lesssim \|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}},
\end{aligned}
\end{equation*}
and
\begin{align*}
&\|\partial_xI_{2}\|_{X_T^{\frac{5}{6},\frac{6}{5}}(\mathbb{R}^+)}\lesssim \left(\int_{\mathbb{R}}\int_{\mathbb{R}^+}|\bar{\mathbf{f}}_0(x)-\bar{\mathbf{f}}_0(x-z)|^\frac{6}{5} \frac{dxdz}{|z|^{7/5}}\right)^{5/6}\lesssim \|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}},
\end{align*}
where we applied Lemma \ref{lemsob} in the last inequality.
Similar arguments hold for $\partial_x f_{1,B}^-$. Hence
\begin{align*}
& \|\partial_x f_{1,B}\|_{X_T^{1-\gamma,\infty}}\lesssim \|\partial_x f_1\|_{X_T^{1-\gamma,\infty}}+\|\bar f_0\|_{\dot C^{2\gamma}},\\ &\|\partial_xf_{1,B}\|_{X_T^{\frac{1}{2},2}}+\|\partial_xf_{1,B}\|_{X_T^{\frac{3}{4},\infty}}\lesssim \|\partial_xf_{1}\|_{X_T^{\frac{1}{2},2}}+\|\partial_xf_{1}\|_{X_T^{\frac{3}{4},\infty}}+\|\partial_x \bar f_0\|_{L^2},\\
& \| f_{1,B}\|_{X_T^{\frac{1}{2},2}}+\|\partial_x f_{1,B}\|_{X_T^{\frac{5}{6},\frac{6}{5}}}\lesssim \|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}+ \| f_{1}\|_{X_T^{\frac{1}{2},2}}+\|\partial_x f_{1}\|_{X_T^{\frac{5}{6},\frac{6}{5}}}.
\end{align*}
Combining this with \eqref{f1XT}, we obtain that
\begin{equation}
\begin{aligned}\label{fLXT}
&\|\partial_x f_L\|_{X_T^{1-\gamma,\infty}}\lesssim \|\bar f_0\|_{\dot C^{2\gamma}},\ \ \ \ \ \ \ \|\partial_x f_L\|_{X_T^{\frac{1}{2},2}}+\|\partial_x f_L\|_{X_T^{\frac{3}{4},\infty}}\lesssim \|\partial_x \bar f_0\|_{L^2},\\
&\| f_L\|_{X_T^{\frac{1}{2},2}}+\|\partial_x f_L\|_{X_T^{\frac{5}{6},\frac{6}{5}}}\lesssim \|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}.
\end{aligned}
\end{equation}
Then we estimate the nonlinear term $f_N=f_2+f_{2,B}$. We first estimate $f_2$.
By integration by parts,
\begin{equation}\label{f22pa}
\begin{aligned}
f_2^\pm(t,x)=&\int_0^t \int_{\mathbb{R}^\pm}(\partial_x \mathbf{K}(c_\pm (t-\tau),x-y)+\partial_x\mathbf{K}(c_\pm (t-\tau),x+y)) F(\tau,y) dyd\tau\\
&+\int_0^t \int_{\mathbb{R}^\pm}(\partial_x \mathbf{K}(c_\pm (t-\tau),x-y)+\partial_x\mathbf{K}(c_\pm (t-\tau),x+y)) (\phi-\bar\phi)(y)f_x(\tau,y) dyd\tau\\
=&f_{2,1}^\pm(t,x)+f_{2,2}^\pm(t,x).
\end{aligned}
\end{equation}
By Lemma \ref{mainlem}, we obtain
\begin{align}\label{f21}
\|\partial_x f_{2,1}\|_{X_T^{\sigma,p}}\lesssim \| F\|_{X_T^{\sigma,p}},\ \ \ \ \|\partial_x f_{2,2}\|_{X_T^{\sigma,p}}\lesssim \|\partial_x f\|_{X_T^{\sigma,p}}.
\end{align}
And Lemma \ref{lemlow} implies that
\begin{align}\label{ccc}
\|f_{2,1}\|_{X_T^{\frac{1}{2},2}}\lesssim \|F\|_{X_T^{\frac{5}{6},\frac{6}{5}}}.
\end{align}
Observe that $\phi(y)-\bar\phi(y)=0$ for $y\in [-\varepsilon,\varepsilon]$. Moreover,
\begin{align}\label{phiaa}
|\phi(y)-\bar\phi(y)|=|\phi(y)-\phi(x)|\lesssim C_\phi|x-y|,
\end{align}
for $x\in (0,\varepsilon), y\in(0,4\varepsilon)$ or $x\in (-\varepsilon,0), y\in(-4\varepsilon,0)$. This implies a better estimate for $ f_{2,2}$ when $x\in(-\varepsilon,\varepsilon)$. For simplicity, we only consider $f_{2,2}^+(t,x)$ with $x\in(0,\varepsilon)$; the term $ f_{2,2}^-(t,x)$ with $x\in(-\varepsilon,0)$ can be treated similarly.
We can write
\begin{align*}
\partial_xf_{2,2}^+(t,x)=\int_0^t \int_{\mathbb{R}^+} G(t-\tau,x,y)f_x(\tau,y)dyd\tau,
\end{align*}
where
$$
G(t,x,y)=(\partial_x^2 \mathbf{K}(c_+ t,x-y)+\partial_x^2\mathbf{K}(c_+ t,x+y)) (\phi-\bar\phi)(y)\mathbf{1}_{x,y\in\mathbb{R}^+}.
$$
For $x\in (0,\varepsilon)$, $0<s<t<T$,
\begin{align*}
& \int_{\mathbb{R}} |G(t,x,y)|dy\lesssim C_\phi\int _{|y|\leq 4\varepsilon}|\partial_x^2\mathbf{K} (c_+t,y)||y|dy+ \int_{|y|\geq 4\varepsilon} |\partial_x^2\mathbf{K} (c_+t,y)|dy,\\
& \int_{\mathbb{R}} |G(t,x,y)-G(s,x,y)|dy\lesssim C_\phi\int _{|y|\leq 4\varepsilon}|\partial_x^2\mathbf{K} (c_+t,y)-\partial_x^2\mathbf{K} (c_+s,y)||y|dy\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad+ \int_{|y|\geq 4\varepsilon} |\partial_x^2\mathbf{K} (c_+t,y)-\partial_x^2\mathbf{K} (c_+s,y)|dy.
\end{align*}
By Lemma \ref{lemheat} we obtain that
\begin{align*}
& \int _{|y|\leq 4\varepsilon}|\partial_x^2\mathbf{K} (c_+t,y)||y|dy\lesssim t^{-\frac{1}{2}},\ \ \ \int_{|y|\geq 4\varepsilon} |\partial_x^2\mathbf{K} (c_+t,y)|dy\lesssim \int_{|y|\geq 4\varepsilon}\frac{1}{(|y|+t^\frac{1}{2})^3}dy\lesssim \varepsilon^{-1}t^{-\frac{1}{2}},\\
&\int _{|y|\leq 4\varepsilon}|\partial_x^2\mathbf{K} (c_+t,y)-\partial_x^2\mathbf{K} (c_+s,y)||y|dy\lesssim t^{-\frac{1}{2}}\min\{1,\frac{t-s}{s}\},\\
&\int_{|y|\geq 4\varepsilon} |\partial_x^2\mathbf{K} (c_+t,y)-\partial_x^2\mathbf{K} (c_+s,y)|dy\lesssim \varepsilon^{-1}t^{-\frac{1}{2}}\min\{1,\frac{t-s}{s}\}.
\end{align*} Hence
\begin{equation*}
\begin{aligned}
& \sup_{x\in(0,\varepsilon)}\int_{\mathbb{R}} |G(t,x,y)|dy\lesssim(C_\phi+\varepsilon^{-1}) t^{-\frac{1}{2}},\\
& \sup_{x\in(0,\varepsilon)}\int_{\mathbb{R}} |G(t,x,y)-G(s,x,y)|dy\lesssim(C_\phi+\varepsilon^{-1}) t^{-\frac{1}{2}}\min\{1,\frac{t-s}{s}\}.
\end{aligned}
\end{equation*}
Then Lemma \ref{lemlow} implies that
\begin{align*}
& \|\partial_xf_{2,2}\|_{X_T^{\sigma,p}([-\varepsilon,\varepsilon])}\lesssim (C_\phi +\varepsilon^{-1}) T^\frac{1}{2}\|f_x\|_{X_T^{\sigma,p}},\\
& \|f_{2,2}\|_{X_T^{\frac{1}{2},2}([-\varepsilon,\varepsilon])}\lesssim (C_\phi +\varepsilon^{-1}) T^\frac{1}{2}\|f_x\|_{X_T^{\frac{5}{6},\frac{6}{5}}}.
\end{align*}
Combining this with \eqref{f21} and \eqref{ccc} yields
\begin{equation}
\begin{aligned}\label{f2XT}
&\|\partial_x f_2\|_{X_T^{\sigma,p}([-\varepsilon,\varepsilon])}\lesssim \| F\|_{X_T^{\sigma,p}}+ (C_\phi+\varepsilon^{-1}) T^\frac{1}{2}\|f_x\|_{X_T^{\sigma,p}},\\
&\|f_2\|_{X_T^{\frac{1}{2},2}([-\varepsilon,\varepsilon])}\lesssim \|F\|_{X_T^{\frac{5}{6},\frac{6}{5}}}+ (C_\phi +\varepsilon^{-1}) T^\frac{1}{2}\|f_x\|_{X_T^{\frac{5}{6},\frac{6}{5}}}.
\end{aligned}
\end{equation}
For boundary terms, we can write
\begin{equation}\label{forf2b}
\begin{aligned}
\partial_x^lf_{2,B}^+=& C_+(c_--c_+)\int_0^t\int_0^\infty\partial_x^2\mathbf{K}(c_+(t-\tau),x+ y)(\partial_x^lf_2^-)(\tau,-y)\mathbf{1}_{[0,\varepsilon]}(y)dyd\tau\\
&+C_+(c_--c_+)\int_0^t\int_0^\infty\partial_x^2\mathbf{K}(c_+(t-\tau),x+ y)(\partial_x^lf_2^-)(\tau,-y)\mathbf{1}_{(\varepsilon,\infty)}(y)dyd\tau\\
&+C_+\int_0^t\int_0^\infty \partial_x^{1+l}\mathbf{K}(c_+(t-\tau),x+ y)(\tilde F(\tau,y)-\tilde F(\tau,-y))dyd\tau\\
:=&II_{1,l}+II_{2,l}+II_{3,l}.
\end{aligned}
\end{equation}
Applying Lemma \ref{mainlem}, we get
\begin{align}\label{II1}
\|II_{1,l}\|_{X_T^{\sigma,p}}\lesssim \|\partial_x^l f_2\|_{X_T^{\sigma,p}([-\varepsilon,\varepsilon])}.
\end{align}
One can check from Lemma \ref{lemheat} that
\begin{align}\label{tutu}
\sup_{x\in\mathbb{R}^+}\int_0^\infty |\partial_x^2 \mathbf{K}(c_+t,x+y)| \mathbf{1}_{[\varepsilon,\infty)}(y)dy \lesssim \int _\varepsilon^\infty \frac{1}{(t^\frac{1}{2}+y)^3}dy\lesssim \varepsilon^{-1} t^{-\frac{1}{2}}.
\end{align}
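For completeness, the last step in \eqref{tutu} follows from the elementary computation
\begin{align*}
\int_\varepsilon^\infty \frac{dy}{(t^{\frac{1}{2}}+y)^3}=\frac{1}{2(t^{\frac{1}{2}}+\varepsilon)^2}\leq \frac{1}{2\varepsilon t^{\frac{1}{2}}},
\end{align*}
using $(a+b)^2\geq 2ab$; the same computation was used above for the tail integral over $\{|y|\geq 4\varepsilon\}$.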
Then Lemma \ref{lemlow} implies that
\begin{equation}
\begin{aligned}\label{II2}
&\|II_{2,1}\|_{X_T^{\sigma,p}}\lesssim \varepsilon^{-1} T^\frac{1}{2}\|\partial_x f_{2}\|_{X_T^{\sigma,p}}\overset{\eqref{f21}}\lesssim \varepsilon^{-1} T^\frac{1}{2}(\|F\|_{X_T^{\sigma,p}}+\|\partial_x f\|_{X_T^{\sigma,p}}),\\
&\|II_{2,0}\|_{X_T^{\frac{1}{2},2}}\lesssim\varepsilon^{-1} T^\frac{1}{2}\| f_{2}\|_{X_T^{\frac{1}{2},2}}\overset{\eqref{f21}}\lesssim\varepsilon^{-1} T^\frac{1}{2}(\|F\|_{X_T^{\frac{5}{6},\frac{6}{5}}}+\|\partial_x f\|_{X_T^{\frac{5}{6},\frac{6}{5}}}).
\end{aligned}
\end{equation}
Finally, following the estimate for $f_2$, we can prove that
\begin{equation} \label{II3}
\begin{aligned}
&\|II_{3,0}\|_{X_T^{\frac{1}{2},2}([0,\varepsilon])}\lesssim \| F\|_{X_T^{\frac{5}{6},\frac{6}{5}}}+ (C_\phi+\varepsilon^{-1}) T^\frac{1}{2}\|f_x\|_{X_T^{\frac{5}{6},\frac{6}{5}}},\\
& \|II_{3,1}\|_{X_T^{\sigma,p}([0,\varepsilon])}\lesssim \| F\|_{X_T^{\sigma,p}}+ (C_\phi+\varepsilon^{-1}) T^\frac{1}{2}\|f_x\|_{X_T^{\sigma,p}}.
\end{aligned}
\end{equation}
One can estimate $f_{2,B}^-$ similarly.
By \eqref{II1}, \eqref{II2} and \eqref{II3}, we obtain that
\begin{align*}
& \|\partial_x f_{2,B}\|_{X_T^{\sigma,p}([-\varepsilon,\varepsilon])}\lesssim \|\partial_x f_2\|_{X_T^{\sigma,p}([-\varepsilon,\varepsilon])}+ (1+\varepsilon^{-1} T^\frac{1}{2})\| F\|_{X_T^{\sigma,p}}+ (C_\phi+\varepsilon^{-1}) T^\frac{1}{2}\|\partial_xf\|_{X_T^{\sigma,p}},\\
& \| f_{2,B}\|_{X_T^{\frac{1}{2},2}([-\varepsilon,\varepsilon])}\lesssim \|\partial_x f_2\|_{X_T^{\frac{5}{6},\frac{6}{5}}([-\varepsilon,\varepsilon])}+ (1+\varepsilon^{-1} T^\frac{1}{2})\| F\|_{X_T^{\frac{5}{6},\frac{6}{5}}}+ (C_\phi+\varepsilon^{-1}) T^\frac{1}{2}\|\partial_xf\|_{X_T^{\frac{5}{6},\frac{6}{5}}}.
\end{align*}
Combining this with \eqref{f2XT}, we obtain that
\begin{equation}
\begin{aligned}\label{fNXT}
&\|\partial_x f_N\|_{X_T^{\sigma,p}([-\varepsilon,\varepsilon])}\lesssim (1+\varepsilon^{-1} T^\frac{1}{2}) \| F\|_{X_T^{\sigma,p}}+ (C_\phi+\varepsilon^{-1}) T^\frac{1}{2}\|f_x\|_{X_T^{\sigma,p}},\\
&\| f_N\|_{X_T^{\frac{1}{2},2}([-\varepsilon,\varepsilon])}\lesssim (1+\varepsilon^{-1} T^\frac{1}{2}) \| F\|_{X_T^{\frac{5}{6},\frac{6}{5}}}+ (C_\phi+\varepsilon^{-1}) T^\frac{1}{2}\|f_x\|_{X_T^{\frac{5}{6},\frac{6}{5}}}.
\end{aligned}
\end{equation}
Finally, we estimate $f_R$.
For $f_3$, we can write
\begin{align*}
f_3^\pm(t,x)=\int_0^t \int_{\mathbb{R}} \mathbf{K}(c_\pm (t-\tau),x-y) R^\pm(\tau,y)dyd\tau,
\end{align*}
where $R^\pm$ are odd extensions of $R$:
$$
R^+(t,x)=R(t,x)\mathbf{1}_{x\geq 0}-R(t,-x)\mathbf{1}_{x< 0},\ \ \ \ R^-(t,x)=R(t,x)\mathbf{1}_{x< 0}-R(t,-x)\mathbf{1}_{x\geq 0}.
$$
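As a consistency check, this whole-line representation agrees with the image-kernel form on the half line: substituting $y\mapsto -y$ on $\{y<0\}$ gives
\begin{align*}
\int_{\mathbb{R}} \mathbf{K}(c_+(t-\tau),x-y)R^+(\tau,y)dy=\int_0^\infty\big(\mathbf{K}(c_+(t-\tau),x-y)-\mathbf{K}(c_+(t-\tau),x+y)\big)R(\tau,y)dy,
\end{align*}
and similarly for $f_3^-$.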
From Lemma \ref{rem}, we obtain
\begin{align*}
&\|f_3\|_{X_T^{\frac{1}{2},2}}+\|\partial_x f_3\|_{X_T^{\frac{5}{6},\frac{6}{5}}}\lesssim T^\frac{1}{4}(\|R\|_{L^1_{T}}+\|R\|_{X_T^{1,1}}).
\end{align*}
The boundary terms $f^\pm_{3,B}$ can be estimated by arguments similar to those above. We conclude that
\begin{align}\label{fRXT}
\| f_R\|_{X_T^{\frac{1}{2},2}}+\|\partial_x f_R\|_{X_T^{\frac{5}{6},\frac{6}{5}}}\lesssim T^\frac{1}{4} (\|R\|_{L^1_{T}}+\|R\|_{X_T^{1,1}}).
\end{align}
We conclude from \eqref{fLXT}, \eqref{fNXT} and \eqref{fRXT} that
\begin{align*}
\| f\|_{X_T^{\frac{1}{2},2}([-\varepsilon,\varepsilon])}+\|\partial_x f\|_{X_T^{\frac{5}{6},\frac{6}{5}}([-\varepsilon,\varepsilon])}\lesssim &\|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}+(1+\varepsilon^{-1} T^\frac{1}{2}) \| F\|_{X_T^{\frac{5}{6},\frac{6}{5}}}+ (C_\phi+\varepsilon^{-1}) T^\frac{1}{2}\|f_x\|_{X_T^{\frac{5}{6},\frac{6}{5}}}\\
&\ \ +T^\frac{1}{4} (\|R\|_{L^1_{T}}+\|R\|_{X_T^{1,1}}),
\end{align*}
and, when $R=0$,
\begin{align*}
&\|\partial_x f\|_{X_T^{1-\gamma,\infty}([-\varepsilon,\varepsilon])}\lesssim \|\bar f_0\|_{\dot C^{2\gamma}}+(1+\varepsilon^{-1} T^\frac{1}{2}) \|F\|_{X_T^{1-\gamma,\infty}}+( C_\phi+\varepsilon^{-1} ) T^{\frac{1}{2}}\|\partial_x f\|_{X_T^{1-\gamma,\infty}},\\
&\|\partial_x f\|_{X_T^{\frac{1}{2},2}([-\varepsilon,\varepsilon])}+\|\partial_x f\|_{X_T^{\frac{3}{4},\infty}([-\varepsilon,\varepsilon])}\lesssim \|\partial_x \bar f_0\|_{L^2}+(1+\varepsilon^{-1} T^\frac{1}{2}) (\| F\|_{X_T^{\frac{1}{2},2}}+\| F\|_{X_T^{\frac{3}{4},\infty}})\\&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad+ (C_\phi+\varepsilon^{-1}) T^\frac{1}{2}(\|f_x\|_{X_T^{\frac{1}{2},2}}+\|f_x\|_{X_T^{\frac{3}{4},\infty}}).
\end{align*}
Fixing $T=\frac{1}{(1+C_\phi+\varepsilon^{-1})^{10}}$, we obtain the stated results. This completes the proof.
\end{proof}
\\
The following lemma gives estimates of $L^p$ norms near jump points.
\begin{lemma}\label{lemLPjum}
Suppose $f_0=\partial_x\bar f_0$. There exists $T>0$ such that
\begin{align*}
\|f\|_{L^2_{T}(\mathbb{I}_\varepsilon)}+ \|\partial_x f\|_{L^\frac{6}{5}_{T}(\mathbb{I}_\varepsilon)}\lesssim \|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}+ \| F\|_{L^\frac{6}{5}_{T}}+T^\frac{1}{3}\|\partial_x f\|_{L^\frac{6}{5}_{T}}+T^{\frac{1}{4}}\|R\|_{L^1_{T}}.
\end{align*}
Moreover, if $R=0$, there holds
\begin{align*}
\|f\|_{L^6_{T}(\mathbb{I}_\varepsilon)}+ \|\partial_xf\|_{L^2_{T}(\mathbb{I}_\varepsilon)}\lesssim \|f_0\|_{L^2}+\|F\|_{L^2_{T}}+T^\frac{1}{3}\|\partial_x f\|_{L^2_{T}}.
\end{align*}
\end{lemma}
\begin{proof} ~\\
\textit{1. Estimate $f_L$.}\\
For simplicity, we only consider $f_1^+\mathbf{1}_{x\geq 0}$, since $f_1^-\mathbf{1}_{x< 0}$ can be treated similarly. Integrating by parts, we obtain
\begin{align*}
f_1^+(t,x)
&=\int _{\mathbb{R}}\partial_x\mathbf{K}(c_+ t,x-y)\mathbf{f}_0(y)dy,\ \ \ \text{where}\ \mathbf{f}_0(y)=\bar f_0(y)\mathbf{1}_{y\geq 0}+\bar f_0(-y)\mathbf{1}_{y< 0}.
\end{align*}
By Parseval's identity (see \eqref{paraL2}), we have for any $T>0$,
\begin{equation}\label{L21}
\begin{aligned}
&\|f_1^+\mathbf{1}_{x\geq 0}\|_{L^2_{T}}\sim \|\mathbf{f}_0\|_{L^2}\lesssim \|\bar f_0\|_{L^2}\lesssim \|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}},\\ &\|\partial_x f_1^+\mathbf{1}_{x\geq 0}\|_{L^2_{T}}\sim \|\partial_x\mathbf{f}_0\|_{L^2}\lesssim \|f_0\|_{L^2}.
\end{aligned}
\end{equation}
Then following \eqref{aplinearLp} in Lemma \ref{lemap}, we obtain from Lemma \ref{lemsob} that
\begin{equation}\label{L22}
\begin{aligned}
& \| \mathbf{1}_{x\geq 0}\partial_x f_1^+\|_{L^{6/5}_{T}} ^{6/5}\lesssim \int_{\mathbb{R}} \frac{\| \mathbf{ f}_0(\cdot)-\mathbf{ f}_0(\cdot-z)\|_{L^{6/5}(\mathbb{R}^+)}^{6/5}}{|z|^{\frac{7}{5}}}dz\lesssim \|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}^{6/5},\\
&\| \mathbf{1}_{x\geq 0} f_1^+\|_{L^6_{T}} ^6\lesssim \int_{\mathbb{R}} \frac{\| \mathbf{ f}_0(\cdot)- \mathbf{ f}_0(\cdot-z)\|_{L^6(\mathbb{R}^+)}^6}{|z|^{5}}dz\lesssim \|\bar f_0\|_{\dot W^{\frac{2}{3},6}}^6\lesssim \|\bar f_0\|_{\dot W^{1,2}}^6=\|f_0\|_{L^2}^6.
\end{aligned}
\end{equation}
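In the last line of \eqref{L22} we also used the one-dimensional embedding $\dot W^{1,2}(\mathbb{R})\hookrightarrow \dot W^{\frac{2}{3},6}(\mathbb{R})$, which holds because both pairs of exponents share the same scaling index $s-\frac{1}{p}$:
\begin{align*}
1-\frac{1}{2}=\frac{2}{3}-\frac{1}{6}=\frac{1}{2}.
\end{align*}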
By \eqref{L21} and \eqref{L22} we obtain that
\begin{align}\label{f1}
\|f_1\|_{L^2_{T}}+\|\partial_x f_1\|_{L^\frac{6}{5}_{T}}\lesssim \|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}},\ \ \ \ \ \ \|f_1\|_{L^6_{T}}+\|\partial_x f_1\|_{L^2_{T}}\lesssim \|f_0\|_{L^2}.
\end{align}
Then we estimate $f_{1,B}$. Recall the formula
\begin{align*}
f_{1,B}^+(t,x)=&C_+(c_+-c_-)\int_0^t \int_0^\infty\partial_x^2\mathbf{K}(c_+(t-\tau),x+ y)f_1^-(\tau,-y)dyd\tau\\
&\quad\quad\quad-C_+\int \partial_x\mathbf{K}(c_+t,x-y)\bar{\mathbf{f}}_0(y)dy\\
:=&I_{1}(t,x)+I_{2}(t,x).
\end{align*}
From Lemma \ref{lemheat} one has $\left(\int_0^\infty|\partial_x\mathbf{K}(c_+\tau,x+y)|^2d\tau\right)^\frac{1}{2}\lesssim \frac{1}{x+y}$. Applying Lemma \ref{lemupg} yields that
\begin{align}\label{A1}
\|\partial_x^lI_1\|_{L^p_{T}}\lesssim \|\partial_x^lf_1\|_{L^p_{T}},\ \ \ l=0,1,\ p>1.
\end{align}
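For completeness, the square-function bound quoted before \eqref{A1} follows from the pointwise estimate $|\partial_x\mathbf{K}(t,z)|\lesssim (t^{\frac{1}{2}}+|z|)^{-2}$ (the first-order analogue of the kernel bounds from Lemma \ref{lemheat} used elsewhere, valid for the heat kernel): with $z=x+y>0$,
\begin{align*}
\int_0^\infty|\partial_x\mathbf{K}(c_+\tau,z)|^2d\tau\lesssim \int_0^\infty\frac{d\tau}{(\tau^{\frac{1}{2}}+z)^4}\leq \int_0^\infty\frac{d\tau}{(\tau+z^2)^2}=\frac{1}{z^2}.
\end{align*}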
Moreover, recall that $\bar{\mathbf{f}}_0(y)=(\bar f_0(-y)-\bar f_0(y))\mathbf{1}_{y\leq 0}$. We can write
\begin{align}\nonumber
\|I_2\|_{L^2_{T}}^2&\lesssim \int_0^\infty\left(\int_0^\infty \|\partial_x\mathbf{K}(c_+\tau,x+y)\|_{L^2_t}(\bar f_0(y)-\bar f_0(-y))dy\right)^2dx\\
&\lesssim \int_0^\infty\left(\int_0^\infty(\bar f_0(y)-\bar f_0(-y))\frac{dy}{x+y}\right)^2dx\lesssim \|\bar f_0\|_{L^2}^2,\label{A2}
\end{align}
where we apply Lemma \ref{lemhardy} in the last inequality. Similarly, we obtain
\begin{align}\label{DA2}
\|\partial_x I_2\|_{L^2_{T}}\lesssim \|f_0\|_{L^2}.
\end{align}
It remains to estimate $\|\partial_x I_2\|_{L^\frac{6}{5}_{T}}$ and $\|I_2\|_{L^6_{T}}$. Following \eqref{aplinearLp} in Lemma \ref{lemap}, one gets
\begin{align*}
\|\partial_x^lI_2\|_{L^p_{T}(\mathbb{R}^+)}^p\lesssim
\int_{\mathbb{R}}\int_0^\infty \frac{| \bar {\mathbf{f}}_0(x)-\bar {\mathbf{f}}_0(x-z)|^p}{|z|^{(1+l)p-1}}dxdz.
\end{align*}
Applying Lemma \ref{lemsob}, we obtain that
\begin{align*}
\|\partial_xI_2\|_{L^\frac{6}{5}_{T}}\lesssim \|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}, \ \ \ \ \|I_2\|_{L^6_{T}}\lesssim \|f_0\|_{L^2}.
\end{align*}
A similar argument holds for $f_{1,B}^-$. Combining this with \eqref{A1}, \eqref{A2} and \eqref{DA2}, we obtain that
\begin{align}\label{bdterm}
\|f_{1,B}\|_{L^2_{T}}+\|\partial_x f_{1,B}\|_{L^\frac{6}{5}_{T}}\lesssim \|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}},\ \ \ \ \ \|f_{1,B}\|_{L^6_{T}}+\|\partial_x f_{1,B}\|_{L^2_{T}}\lesssim \|f_0\|_{L^2}.
\end{align}
By \eqref{f1} and \eqref{bdterm} we obtain that
\begin{equation}\label{jumLpL}
\begin{aligned}
\|f_L\|_{L^2_{T}}+\|\partial_x f_L\|_{L^\frac{6}{5}_{T}}\lesssim \|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}},\ \ \ \ \ \ \ \| f_L\|_{L^6_{T}}+\|\partial_x f_L\|_{L^2_{T}}\lesssim \|f_0\|_{L^2}.
\end{aligned}
\end{equation}
\textit{2. Estimate $f_N$.}\\
We first estimate $f_2^\pm=f_{2,1}^\pm+f_{2,2}^\pm$ as defined in \eqref{f22pa}. For simplicity, we only consider $\partial_x^lf_2^+$; the other part $\partial_x^lf_2^-$ can be treated in the same way.
Note that
\begin{align*}
f_{2,1}^+(t,x)&=\int_0^t \int_{\mathbb{R}^+}( \partial_x\mathbf{K}(c_+ (t-\tau),x-y)+\partial_x\mathbf{K}(c_+ (t-\tau),x+y))F(\tau,y) dyd\tau\\
&=\int_0^t \int_{\mathbb{R}} \partial_x\mathbf{K}(c_+(t-\tau),x-y)( F(\tau,y)\mathbf{1}_{y\geq 0}+ F(\tau,-y)\mathbf{1}_{y< 0}) dyd\tau.
\end{align*}
Following the proof of Lemma \ref{lemap}, we obtain that
\begin{align}\label{f21lp}
\|\partial_x^lf_{2,1}\|_{L^p_{T}}\lesssim \| F\|_{L^\frac{3p}{3+(1-l)p}_{T}}, \ \text{for}\ l=0,1,\ \ p>1.
\end{align}
Then we consider $f_{2,2}$. Recall that
\begin{align*}
\partial_x^lf_{2,2}^+(t,x)=\int_0^t \int_0^\infty \partial_x^lG(t-\tau,x,y)f_x(\tau,y)dyd\tau,\ \ \ l=0,1,
\end{align*}
where
$$
G(t,x,y)=(\partial_x \mathbf{K}(c_+ t,x-y)+\partial_x\mathbf{K}(c_+ t,x+y)) (\phi-\bar\phi)(y).
$$
Then Lemma \ref{lemap} yields that
\begin{align}\label{rrr}
\|\partial_x^lf_{2,2}\|_{L^p_{T}}\lesssim \| f_x\|_{L^\frac{3p}{3+(1-l)p}_{T}}, \ \text{for}\ l=0,1,\ \ p>1.
\end{align}
By \eqref{phiaa} and Lemma \ref{lemheat} we can check that
\begin{align*}
\sup_{x\in[0,\varepsilon]}\left(\int_0^T\int_0^\infty|\partial_x^l G(t,x,y)|^\frac{3}{2+l}dydt\right)^\frac{2+l}{3}\lesssim ( C_\phi +\varepsilon^{-1})T^\frac{1}{2}.
\end{align*}
Applying Young's inequality we obtain that
\begin{align*}
\|\partial_x^l f_{2,2}\|_{L^p_{T}( [0,\varepsilon])}&\lesssim \sup_{x\in[0,\varepsilon]}\left(\int_0^T\int_0^\infty|\partial_x^l G(t,x,y)|^\frac{3}{2+l}dydt\right)^\frac{2+l}{3}\|f_x\|_{L^\frac{3p}{3+(1-l)p}_{T}}\\
&\lesssim( C_\phi +\varepsilon^{-1})T^\frac{1}{2} \|f_x\|_{L^\frac{3p}{3+(1-l)p}_{T}}.
\end{align*}
Similarly, we have
\begin{align*}
\|\partial_x^l f_{2,2}\|_{L^p_{T}(\mathbb{I}_{\varepsilon})}\lesssim ( C_\phi +\varepsilon^{-1}) T^\frac{1}{2} \|f_x\|_{L^\frac{3p}{3+(1-l)p}_{T}}.
\end{align*}
Combining this with \eqref{f21lp}, we obtain that
\begin{align}\label{f2lp}
\|\partial_x^l f_{2}\|_{L^p_{T}(\mathbb{I}_{\varepsilon})}\lesssim\| F\|_{L^\frac{3p}{3+(1-l)p}_{T}}+( C_\phi +\varepsilon^{-1}) T^\frac{1}{2} \|f_x\|_{L^\frac{3p}{3+(1-l)p}_{T}}.
\end{align}
Then we consider the boundary term $f_{2,B}$. Recall the formula $\partial_x^l f_{2,B}^+=II_{1,l}+II_{2,l}+II_{3,l}$ in \eqref{forf2b}.
We apply Lemma \ref{lemupg} to obtain that
$$\|II_{1,l}\|_{L^p_{T}}\lesssim \|\partial_x^l f_2\|_{L^p_{T}([0,\varepsilon])}.$$
For ${II}_{2,l}$, the kernel is not singular since $x+y\geq \varepsilon$ there. Applying Young's inequality, we obtain
\begin{align*}
\|II_{2,l}\|_{L^p_{T}}\lesssim \|\partial_x^l f_2\|_{L^p_{T}}\int_0^t \int_\varepsilon^\infty |\partial_t \mathbf{K}(c_+\tau,y)|dyd\tau\overset{\eqref{tutu}}\lesssim \varepsilon^{-1}T^{\frac{1}{2}}\|\partial_x^l f_2\|_{L^p_{T}}.
\end{align*}
One can estimate $II_{3,l}$ in the same way as $\partial_x^lf_{2}$. We conclude that
\begin{align*}
&\|\partial_x^l f_{2,B}^+\|_{L^p_{T}([0,\varepsilon])}\\
& \lesssim \|\partial_x^l f_2\|_{L^p_{T}([0,\varepsilon])}+\varepsilon^{-1}T^{\frac{1}{2}} \|\partial_x^lf_2\|_{L^p_{T}}+ \| F\|_{L^\frac{3p}{3+(1-l)p}_{T}}+( C_\phi +\varepsilon^{-1}) T^\frac{1}{2} \|f_x\|_{L^\frac{3p}{3+(1-l)p}_{T}}\\
&\overset{\eqref{rrr}} \lesssim\|\partial_x^l f_2\|_{L^p_{T}([0,\varepsilon])}+(1+\varepsilon^{-1}T^\frac{1}{2})\| F\|_{L^\frac{3p}{3+(1-l)p}_{T}}+( C_\phi +\varepsilon^{-1}) T^\frac{1}{2} \|f_x\|_{L^\frac{3p}{3+(1-l)p}_{T}}.
\end{align*}
Combining this with \eqref{f2lp}, we get
\begin{align}\label{jumLpN}
\|\partial_x^l f_N\|_{L^p_{T}(\mathbb{I}_{\varepsilon})}\lesssim (1+\varepsilon^{-1}T^\frac{1}{2})\| F\|_{L^\frac{3p}{3+(1-l)p}_{T}}+( C_\phi +\varepsilon^{-1}) T^\frac{1}{2} \|f_x\|_{L^\frac{3p}{3+(1-l)p}_{T}}.
\end{align}
\textit{3. Estimate $f_{R}$.}\\
Recall the formula
\begin{align*}
\partial_x^lf_3^\pm(t,x)=\int_0^t \int_{\mathbb{R}^\pm}( \partial_x^l\mathbf{K}(c_\pm (t-\tau),x-y)-\partial_x^l\mathbf{K}(c_\pm (t-\tau),x+y)) R(\tau,y) dyd\tau,
\end{align*}
and
\begin{align*}
\partial_x^lf_{3,B}^+(t,x)=&C_+(c_+-c_-)\int_0^t \int_0^\infty\partial_x^2\mathbf{K}(c_+(t-\tau),x+ y)\partial_x^lf_3^-(\tau,-y)dyd\tau\\
&-C_+\int_0^t \int_0^\infty \partial_x^l\mathbf{K}(c_+(t-\tau),x+y)(R(\tau,y)+R(\tau,-y))dyd\tau.
\end{align*}
It follows from Young's inequality and Lemma \ref{lemupg} that
\begin{align*}
\|f_R\|_{L^2_{T}}+\|\partial_xf_R\|_{L^\frac{6}{5}_{T}}\lesssim \| \mathbf{K}\|_{L^2_{T}}\|R\|_{L^1_{T}}+\|\partial_x \mathbf{K}\|_{L^\frac{6}{5}_{T}}\|R\|_{L^1_{T}}\lesssim T^\frac{1}{4}\|R\|_{L^1_{T}}.
\end{align*}
Combining this with \eqref{jumLpL} and \eqref{jumLpN}, we obtain the result by taking $T=\frac{1}{(10+C_\phi+\varepsilon^{-1})^{10}}$.
This completes the proof.
\end{proof}
\subsection{Interior estimates}
In this part we estimate the solution $f$ on the domain $\mathbb{R}\backslash\mathbb{I}_\varepsilon$. The solution to \eqref{eqpara} satisfies (see Lemma \ref{forcont})
\begin{align}
f(t,x)=&\mathbf{H} (t,\cdot,x)\ast f_0(\cdot)+\int_0^t\partial_{1}\mathbf{H} (t-\tau,\cdot,x)\ast F(\tau,\cdot)d\tau+\int_0^t\mathbf{H} (t-\tau,\cdot,x)\ast R(\tau,\cdot)d\tau\nonumber\\
&-\int_0^t\int \partial_1\mathbf{H} (t-\tau,x-y,x)\left({ \phi(x)}-{ \phi(y)}\right)\partial_x f(\tau,y)dyd\tau\nonumber\\
:=&f_{L}(t,x)+f_N(t,x)+f_R(t,x)+f_E(t,x),\label{sofor1}
\end{align}
where we denote $\mathbf{H} (t,x,y)=\mathbf{K}({ \phi(y)}{t},x)$, $\partial_1\mathbf{H}(t,x,y)=\partial_x\mathbf{H}(t,x,y)$ and $\partial_2\mathbf{H}(t,x,y)=\partial_y\mathbf{H}(t,x,y)$. We approximate the kernel of equation \eqref{eqpara} by $\mathbf{H}(t,x,y)$ and $f_E$ is the error term.
\begin{lemma} \label{lemXTinte} Let $f$ satisfy \eqref{sofor1} with data $f_0=\partial_x \bar f_0$.
There exists $T>0$ such that
\begin{align*}
\| f\|_{X_T^{\frac{1}{2},2}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}+\| \partial_x f\|_{X_T^{\frac{5}{6},\frac{6}{5}}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}\lesssim \|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}+\|F\|_{X_T^{\frac{5}{6},\frac{6}{5}}}+T^{\frac{1}{4}}(\|R\|_{X_T^{1,1}}+\|R\|_{L_{T}^1})+T^\frac{1}{3}\|\partial_x f\|_{X_T^{\frac{5}{6},\frac{6}{5}}}.
\end{align*}
Moreover, if $R=0$, then
\begin{align*}
& \|\partial_xf\|_{X_T^{\frac{1}{2},2}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}+\|\partial_xf\|_{X_T^{\frac{3}{4},\infty}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}\lesssim \| f_0\|_{L^2}+\|F\|_{X_T^{\frac{1}{2},2}}+\|F\|_{X_T^{\frac{3}{4},\infty}}+T^{\frac{1}{3}}(\|\partial_xf\|_{X_T^{\frac{1}{2},2}}+\|\partial_xf\|_{X_T^{\frac{3}{4},\infty}}),\\
& \|\partial_xf\|_{X_T^{1-\gamma,\infty}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}\lesssim \|\bar{f}_0\|_{\dot{C}^{2\gamma}}+\|F\|_{X_T^{1-\gamma,\infty}}+T^{\frac{1}{3}}\|\partial_xf\|_{X_T^{1-\gamma,\infty}}.
\end{align*}
\end{lemma}
\begin{proof}
We first estimate
\begin{align*}
f_L(t,x)=\int_{\mathbb{R}} \mathbf{H}(t,x-y,x)f_0(y) dy=\int_{\mathbb{R}} \partial_1 \mathbf{H}(t,x-y,x)\bar f_0(y) dy.
\end{align*}
One has
\begin{align*}
\partial_x f_L(t,x)=\int_{\mathbb{R}} (\partial_1^2 \mathbf{H}(t,x-y,x)+ \partial_1\partial_2\mathbf{H}(t,x-y,x))\bar f_0(y) dy.
\end{align*}
Note that $\mathbf{H}(t,x,y)$ is a time rescaling of heat kernel. We can write
\begin{align*}
\partial_1^2 \mathbf{H}(t,z,x)+ \partial_1\partial_2\mathbf{H}(t,z,x)=\partial_x^2 \mathbf{K}({\phi(x)t},z)+t\phi'(x)\partial_t \partial_x\mathbf{K}(\phi(x)t,z):=\bar G(t,z,x).
\end{align*}
By Lemma \ref{lemheat}, it is easy to check that $\bar G(t,z,x)$ satisfies the conditions in Remark \ref{remlem3.2}.
Then Lemma \ref{lemXTl} and Remark \ref{remlem3.2} imply that
\begin{equation}\label{I1XT}
\begin{aligned}
&\| f_L\|_{X_T^{\frac{1}{2},2}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}+ \| \partial_x f_L\|_{X_T^{\frac{5}{6},\frac{6}{5}}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}\lesssim \|\bar f_0\|_{\dot W^{\frac{1}{3}, \frac{6}{5}}},\ \ \ \ \ \ \ \| \partial_x f_L\|_{X_T^{1-\gamma,\infty}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}\lesssim \|\bar{f}_0\|_{\dot{C}^{2\gamma}},
\\
& \| \partial_x f_L\|_{X_T^{\frac{1}{2},2}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}+\| \partial_x f_L\|_{X_T^{\frac{3}{4},\infty}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}\lesssim \|\partial_x\bar f_0\|_{L^2}.
\end{aligned}
\end{equation}
Then we estimate the nonlinear term
\begin{equation*}
f_N(t,x)=
\int_0^t\int_{\mathbb{R}} \partial_{1}\mathbf{H} (t-\tau,x-y,x) F(\tau,y)dyd\tau.
\end{equation*}
We have
\begin{align*}
\partial_x f_N(t,x)=\int_0^t \int_{\mathbb{R}} \bar G (t-\tau,x-y,x)F(\tau,y)dyd\tau.
\end{align*}
By Remark \ref{rem2.2} we have
\begin{align}\label{fNXT1}
\left\| \partial_x f_N\right\|_{X_T^{\sigma,p}}\lesssim \|F\|_{X_T^{\sigma,p}},\ \ \ \forall \ \sigma\in(0,1-\alpha), \ p\in[1,\infty].
\end{align}
Note that
\begin{align*}
\sup_{z\in\mathbb{R}\backslash\mathbb{I}_{\varepsilon}} \|\partial_1\mathbf{H}(t,\cdot,z)\|_{L^{\frac{3}{2}}}\lesssim t^{-\frac{2}{3}}.
\end{align*}
Then Lemma \ref{lemlow} implies that
$$
\left\|\int_0^t \int_{\mathbb{R}} \partial_1\mathbf{H} (t-\tau,x-y,x)F(\tau,y)dyd\tau\right\|_{X_T^{\frac{1}{2},2}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}\lesssim \|F\|_{X_T^{\frac{5}{6},\frac{6}{5}}}.
$$
Combining this with \eqref{fNXT1}, we obtain that
\begin{equation}
\begin{aligned}\label{DfNXT}
& \|f_N\|_{X_T^{\frac{1}{2},2}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}+\|\partial_x f_N\|_{X_T^{\frac{5}{6},\frac{6}{5}}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}\lesssim\|F\|_{X_T^{\frac{5}{6},\frac{6}{5}}},
\\
&\|\partial_xf_N\|_{X_T^{\frac{1}{2},2}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}+\|\partial_xf_N\|_{X_T^{\frac{3}{4},\infty}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}\lesssim \|F\|_{X_T^{\frac{1}{2},2}}+\|F\|_{X_T^{\frac{3}{4},\infty}},\\
& \|\partial_xf_N\|_{X_T^{1-\gamma,\infty}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}\lesssim \|F\|_{X_T^{1-\gamma,\infty}}.
\end{aligned}
\end{equation}
Then we estimate $$f_R=\int_0^t\int_{\mathbb{R}}\mathbf{H} (t-\tau,x-y,x) R(\tau,y)dyd\tau.$$
By Lemma \ref{rem}, we obtain that
\begin{align}\label{DfRXT}
\|f_R\|_{X_T^{\frac{1}{2},2}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}+\|\partial_x f_R\|_{X_T^{\frac{5}{6},\frac{6}{5}}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}\lesssim T^{\frac{1}{4}}(\|R\|_{X_T^{1,1}}+\|R\|_{L_{T}^1}).
\end{align}
Finally, we estimate $f_E$. One has
\begin{align*}
f_E(t,x)=&-\int_0^t\int_{\mathbb{R}} G_1(t-\tau,x,y)\partial_x f(\tau,y)dyd\tau,
\end{align*}
where
\begin{align*}
G_1(t-\tau,x,y)=\partial_1\mathbf{H} (t-\tau,x-y,x)\left({ \phi(x)}-{ \phi(y)}\right).
\end{align*}
Note that
$$
|\phi(x)-\phi(y)|\lesssim C_\phi|x-y|,\ \ \ \forall x\in \mathbb{R}\backslash\mathbb{I}_{\varepsilon},\ |y-x|\leq \varepsilon.
$$
Hence for any $0<s<t<T$,
\begin{align*}
& \sup_{x\in\mathbb{R}\backslash\mathbb{I}_{\varepsilon} }\int_{\mathbb{R}} |\partial_x G_1(t,x,y)|dy\lesssim(\varepsilon^{-1}+C_\phi) t^{-\frac{1}{2}},\\
&\sup_{x\in\mathbb{R}\backslash\mathbb{I}_{\varepsilon} }\int_{\mathbb{R}} |\partial_x G_1(t,x,y)-\partial_x G_1(s,x,y)|dy\lesssim(\varepsilon^{-1}+C_\phi) s^{-\frac{1}{2}}\min\{1,\frac{t-s}{s}\},\\
&\sup_{x\in\mathbb{R}\backslash\mathbb{I}_{\varepsilon} }\| G_1(t,x,\cdot)\|_{L^\frac{3}{2}}\lesssim C_{\phi}t^{-\frac{1}{6}},\\
&\sup_{x\in\mathbb{R}\backslash\mathbb{I}_{\varepsilon} }\| G_1(t,x,\cdot)- G_1(s,x,\cdot)\|_{L^\frac{3}{2}}\lesssim C_{\phi}t^{-\frac{1}{6}}\min\{1,\frac{t-s}{s}\}.
\end{align*}
Combining this with Lemma \ref{lemlow}, we obtain that for $(\sigma,p)\in\{(1-\gamma,\infty),(\frac{1}{2},2),(\frac{3}{4},\infty),(\frac{5}{6},\frac{6}{5})\}$,
\begin{align}\label{DfEXT}
\|\partial_x f_{E}\|_{X_T^{\sigma,p}}\lesssim T^\frac{1}{2}(C_\phi+\varepsilon^{-1})\|\partial_x f\|_{X_T^{\sigma,p}},
\end{align}
and
\begin{align}\label{DfEXTf}
\|f_{E}\|_{X_T^{\frac{1}{2},2}}\lesssim T^{\frac{1}{2}}C_\phi\|\partial_x f\|_{X_T^{\frac{5}{6},\frac{6}{5}}}.
\end{align}
Take $T=\frac{1}{(1+C_\phi+\varepsilon^{-1})^{10}}$. Then the result follows from \eqref{I1XT}, \eqref{DfNXT}, \eqref{DfRXT}, \eqref{DfEXT} and \eqref{DfEXTf}.
\end{proof}
\begin{lemma}\label{lemLPint}
Suppose $f_0=\partial_x\bar f_0$. There exists $T>0$ such that
\begin{align*}
\|f\|_{L^2_{T}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}+ \|\partial_x f\|_{L^\frac{6}{5}_{T}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}\leq& C_1(\|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}+ \| F\|_{L^\frac{6}{5}_{T}}+T^{\frac{1}{4}}\|R\|_{L^1_{T}})\\
&+\frac{1}{10}(\|f\|_{L^2_{T}}+\|\partial_x f\|_{L^\frac{6}{5}_{T}}).
\end{align*}
Moreover, if $R=0$, there holds
$$
\|f\|_{L^6_{T}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}+\|\partial_xf\|_{L^2_{T}(\mathbb{R}\backslash\mathbb{I}_{\varepsilon})}\leq C_1 (\|f_0\|_{L^2}+\|F\|_{L^2_{T}})+\frac{1}{10}(\|f\|_{L^6_{T}}+\|\partial_x f\|_{L^2_{T}}).
$$
\end{lemma}
\begin{proof}
There exists a sequence $\{z_n\}_{n\in\mathbb{Z}}\subset\mathbb{R}\backslash\mathbb{I}_\varepsilon$ such that $\sum_n\chi_n^\ell(x)=1,\ \forall x\in \mathbb{R}\backslash\mathbb{I}_\varepsilon$ and $\operatorname{supp}(\chi_j^\ell)\cap \operatorname{supp}(\chi_k^\ell)=\emptyset$ if $|j-k|\geq 2$, where $\chi_n^\ell:\mathbb{R}\to [0,1]$ is a smooth cutoff function satisfying $\mathbf{1}_{[z_n-\ell/2,z_n+\ell/2]}\leq \chi_n^\ell\leq \mathbf{1}_{[z_n-\ell,z_n+\ell]}$ with $0<\ell<\varepsilon$.
We rewrite the equation \eqref{eqpara} as
\begin{align*}
& \partial_t (f\chi_n^\ell)-\phi(z_n)\partial_x ^2(f\chi_n^\ell)=\partial_x F_1+R_1,\\
& (f\chi_n^\ell)_{t=0}=\partial_x(\bar f_0\chi_n^\ell)-\bar f_0\partial_x\chi_n^\ell,
\end{align*}
where
\begin{align*}
F_1=F\chi_n^\ell+(\phi(x)-\phi(z_n))\partial_x f\chi_n^\ell+\phi(z_n)f\partial_x \chi_n^\ell,\ \ \ \ R_1=\phi(x)\partial_x f\partial_x \chi_n^\ell
+R\chi_n^\ell.
\end{align*}
Applying Lemma \ref{lemap}, we obtain that
\begin{align*}
&\|f\chi_n^\ell\|_{L^2_{T}}+\|\partial_x(f\chi_n^\ell)\|_{L^\frac{6}{5}_{T}}\lesssim \|\bar f_0\chi_n^\ell\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}+T^\frac{1}{3}\|\bar f_0\partial_x\chi_n^\ell\|_{L^\frac{6}{5}}+ \|F_1\|_{L^\frac{6}{5}_{T}}+T^\frac{1}{4}\|R_1\|_{L^1_{T}},
\end{align*}
and
\begin{align*}
&\|f\chi_n^\ell\|_{L^6_{T}}+\|\partial_x(f\chi_n^\ell)\|_{L^2_{T}}\lesssim \|f_0\chi_n^\ell\|_{L^2}+\|F_1\|_{L^2_{T}}+T^\frac{1}{4}\ell^{-1}\|R_1\|_{L^2_{T}},\ \ \ \ \text{if}\ R=0.
\end{align*}
Note that
\begin{align*}
\|F_1\|_{L^\frac{6}{5}_{T}}&\lesssim \|F\chi_n^\ell\|_{L^\frac{6}{5}_{T}}+\|\phi(x)-\phi(z_n)\|_{L^\infty (B_\ell)}\|\partial_x f\|_{L^\frac{6}{5}_{T}(B_{\ell})}+\|f\|_{L^2_{T}(B_\ell)}\|\partial_x \chi_n^\ell\|_{L^3_{T}}\\
&\lesssim \|F\|_{L^\frac{6}{5}_{T}(B_{\ell})}+\ell C_\phi\|\partial_x f\|_{L^\frac{6}{5}_{T}(B_{\ell})}+\ell^{-\frac{2}{3}}T^\frac{1}{3}\|f\|_{L^2_{T}(B_{\ell})},\\
\|R_1\|_{L^1_{T}}&\lesssim \|\partial_x f\|_{L^\frac{6}{5}_{T}(B_{\ell})}\|\partial_x \chi_n^\ell\|_{L^4_{T}}+\|R\|_{L^1_{T}(B_\ell)}\\
&\lesssim T^{\frac{1}{4}}\ell^{-\frac{3}{4}}\|\partial_x f\|_{L^\frac{6}{5}_{T}(B_{\ell})}+\|R\|_{L^1_{T}(B_\ell)},
\end{align*}
where we denote $B_\ell=[z_n-\ell,z_n+\ell]$.
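Here the factors involving the cutoffs are computed as follows: choosing $\chi_n^\ell$ so that $|\partial_x\chi_n^\ell|\lesssim \ell^{-1}$ (as we may), and noting that $\partial_x\chi_n^\ell$ is supported in a set of length $\lesssim \ell$,
\begin{align*}
\|\partial_x\chi_n^\ell\|_{L^q_{T}}\lesssim \big(T\cdot \ell\cdot \ell^{-q}\big)^{\frac{1}{q}}=T^{\frac{1}{q}}\ell^{\frac{1-q}{q}},
\end{align*}
which gives $T^{\frac{1}{3}}\ell^{-\frac{2}{3}}$ for $q=3$ and $T^{\frac{1}{4}}\ell^{-\frac{3}{4}}$ for $q=4$, as used above.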
Moreover, we can check that
\begin{align*}
\sum_n\|\bar f_0\chi_n^\ell\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}\lesssim \|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}+\ell^{-1}\|\bar f_0\|_{L^2}\lesssim (1+\ell^{-1})\|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}.
\end{align*}
Summing over $n$, we obtain that
\begin{align*}
\|f\|_{L^2_{T}(\mathbb{R}\backslash\mathbb{I}_\varepsilon)}+\|\partial_xf\|_{L^\frac{6}{5}_{T}(\mathbb{R}\backslash\mathbb{I}_\varepsilon)}\leq&\tilde C_1(1+\ell^{-1})\|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}+\tilde C_1\|F\|_{L^\frac{6}{5}_{T}}+\tilde C_1T^\frac{1}{4}\|R\|_{L^1_{T}}\\
&+\tilde C_1(\ell C_\phi+T^\frac{1}{3}\ell^{-\frac{3}{4}})(\|\partial_x f\|_{L^\frac{6}{5}_{T}}+\|f\|_{L^2_{T}}) .
\end{align*}
Similarly, when $R=0$, we have
\begin{align*}
\|f\|_{L^6_{T}(\mathbb{R}\backslash\mathbb{I}_\varepsilon)}+\|\partial_xf\|_{L^2_{T}(\mathbb{R}\backslash\mathbb{I}_\varepsilon)}\leq&\tilde C_1(1+\ell^{-1})\|f_0\|_{L^2}+\tilde C_1\|F\|_{L^2_{T}}\\
&+\tilde C_1(\ell C_\phi+\ell^{-1}T^\frac{1}{4})(\|\partial_x f\|_{L^2_{T}}+\|f\|_{L^6_{T}}).
\end{align*}
Then we obtain the result by taking $\ell ={(10+\tilde C_1+C_\phi)^{-10}}$ and $T=\ell^{10}$.
This completes the proof.
\end{proof}
\begin{remark}
We have two ways to prove Lemma \ref{lemLPint}. One is to approximate the kernel (see \eqref{sofor1}), and the other is to cut off in space. We choose the latter for simplicity. It is equivalent to approximating the kernel piecewise, namely by $\sum_n \mathbf{H}(t,x,z_n)\chi_n^\ell(x)$.
\end{remark}
\section{Well-posedness for the isentropic Navier–Stokes equations}\label{jum}
In this section, we consider the Cauchy problem of \eqref{inns} with initial data
$
(v(0,x),u(0,x))=(v_0,u_0)
$ satisfying
\begin{align}\label{inicon}
v_0\in L^\infty,\ \ \inf_x v_0\geq \lambda_0>0,\ \ u_0=\partial_x\bar u_0\ \text{with}\ \bar u_0\in\dot C^{2\gamma}.
\end{align}
The initial data $v_0$ is allowed to have finitely many jump discontinuities. More precisely, we assume there exists a sequence $\{a_n\}_{n=1}^N$ satisfying
\begin{align}\label{conccc}
&\min _{j\neq k} |a_j-a_k|\geq \sigma >0,\ \ \ \
\|v_0-b_{\varepsilon,\eta}\|_{L^\infty}\leq \varepsilon_0,
\end{align}
for some constants $0<\varepsilon,\eta\ll \sigma$. Here
\begin{align}\label{defb}
& b_{\varepsilon,\eta}(x)=\sum _{n=1}^{N+1}\left((v_0\ast\rho_\eta)\mathbf{1}_{(a_{n-1}+\varepsilon,a_{n}-\varepsilon)}+(v_0\ast\rho_\eta)(a_n+\varepsilon)\mathbf{1}_{[a_n,a_n+\varepsilon]}+(v_0\ast \rho_\eta)(a_n-\varepsilon)\mathbf{1}_{[a_n-\varepsilon,a_n]}\right),
\end{align}
where we take $a_{0}=-\infty$ and $a_{N+1}=+\infty$.\\
The main result is the following
\begin{thm}\label{thmisen}
There exists $\varepsilon_0>0$ such that if the initial data $(v_0,u_0)$ satisfies \eqref{inicon} and \eqref{conccc}, then the system \eqref{inns} admits a unique local solution $(v,u)$ in $[0,T]$ satisfying
\begin{align*}
\inf_{t\in[0,T]}\inf_xv(t,x)\geq \frac{\lambda_0}{2},\ \ \ \|v\|_{Y_T}\leq 2\|v_0\|_{L^\infty},\ \ \ \ \|\partial_xu\|_{X_T^{1-\gamma,\infty}}\leq M,
\end{align*}
for some $T,M>0$.
\end{thm}
Clearly, Theorem \ref{maininns} follows as a consequence of Theorem \ref{thmisen}. \\
Define the space
\begin{align*}
\mathcal{E}_{T,M}:=\{w:w(0,x)=u_0(x),\|\partial_x w\|_{X_T^{1-\gamma,\infty}}\leq M\}.
\end{align*}
Given $w\in\mathcal{E}_{T,M}$, we define a map $\mathcal{T}w=u$, where $u$ is a solution to the equation
\begin{align}\label{eqjum}
\partial_tu-\mu(\frac{\partial_x u}{b_{\varepsilon,\eta}})_x&=-(p(v))_x+\mu((\frac{1}{v}-\frac{1}{b_{\varepsilon,\eta}})\partial_x w)_x=:\tilde F(v,w)_x,
\end{align}
where $v$ is defined by
\begin{align}\label{defv}
v(t,x)=v_0(x)+\int_0^t \partial_x w(s,x)ds.
\end{align}
The following is a key lemma to prove Theorem \ref{thmisen}.
\begin{lemma}\label{lemrejum}
For any initial data $$v_0\in L^\infty,\ \ \inf_x v_0\geq \lambda_0>0,\ \ u_0=\partial_x\bar u_0\ \text{with}\ \bar u_0\in\dot C^{2\gamma},$$ there exists $\varepsilon_0>0$ such that if \eqref{conccc} holds, then
\begin{align*}
& \|\partial_x(\mathcal{T}w)\|_{X_T^{1-\gamma,\infty}}\leq M,\ \ \forall w\in \mathcal{E}_{T,M},\\
&\|\partial_x(\mathcal{T}w_1-\mathcal{T}w_2)\|_{X_T^{1-\gamma,\infty}}\leq \frac{1}{2}\|\partial_x(w_1-w_2)\|_{X_T^{1-\gamma,\infty}}, \ \ \forall w_1,w_2\in \mathcal{E}_{T,M},
\end{align*}
for some $T,M>0$ depending on $\varepsilon_0,\lambda_0, \|v_0\|_{L^\infty}$ and $\|\bar u_0\|_{\dot C^{2\gamma}}$.
\end{lemma}
\begin{proof}
We first show the estimates of $v$. By definition \eqref{defv}, it is easy to check that
\begin{align*}
&\sup_{s\in[0,T]}\|v-b_{\varepsilon,\eta}\|_{L^\infty}\lesssim \|v_0-b_{\varepsilon,\eta}\|_{L^\infty}+\int_0^T\|w_{x}(\tau)\|_{L^\infty}d\tau\lesssim \|v_0-b_{\varepsilon,\eta}\|_{L^\infty}+T^{\gamma}\|\partial_x w\|_{X_T^{1-\gamma,\infty}},\\
&\sup_{0<s<t<T}s^\alpha\frac{\|v(t)-v(s)\|_{L^\infty}}{(t-s)^\alpha}\lesssim \sup_{0<s<t<T}s^\alpha\frac{\int_s^t\|\partial_x w\|_{L^\infty}d\tau}{(t-s)^\alpha}\\
&\quad\ \quad\quad\quad\quad\quad\quad\quad \ \ \ \ \ \ \ \ \ \ \ \lesssim \|\partial_x w\|_{X_T^{1-\gamma,\infty}}\sup_{s<t<T}s^\alpha \frac{t^\gamma-s^\gamma}{(t-s)^\alpha}\lesssim T^\gamma\|\partial_x w\|_{X_T^{1-\gamma,\infty}}.
\end{align*}
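The last bound is elementary: for any $\alpha,\gamma\in(0,1)$ and $0<s<t<T$,
\begin{align*}
s^\alpha\frac{t^\gamma-s^\gamma}{(t-s)^\alpha}\leq
\begin{cases}
\gamma\, s^{\alpha+\gamma-1}(t-s)^{1-\alpha}\leq \gamma\, s^{\gamma}\leq \gamma\, T^{\gamma}, & t\leq 2s,\\
t^\gamma-s^\gamma\leq T^{\gamma}, & t\geq 2s,
\end{cases}
\end{align*}
using $t^\gamma-s^\gamma\leq \gamma s^{\gamma-1}(t-s)$ in the first case and $(t-s)^\alpha\geq s^\alpha$ in the second.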
Hence,
\begin{align*}
\|v-b_{\varepsilon,\eta}\|_{Y_T}\leq \|v_0-b_{\varepsilon,\eta}\|_{L^\infty}+T^{\gamma}\|\partial_x w\|_{X_T^{1-\gamma,\infty}}\overset{\eqref{conccc}}\leq \varepsilon_0+T^{\gamma}M.
\end{align*}
Moreover, we have
\begin{align*}
\inf_{s\in[0,T]} \inf_x v(s,x)\geq \inf_{x}v_0(x)-\sup_{s\in[0,T]}\|v(s)-v_0\|_{L^\infty}\geq \lambda_0-T^\gamma\|\partial_x w\|_{X_T^{1-\gamma,\infty}}\geq \lambda_0-T^\gamma M.
\end{align*}
We can take $T$ small enough such that $T^\gamma M \leq \frac{1}{10}\min\{\varepsilon_0,\lambda_0\}$; then
\begin{align}\label{estofv}
\|v-b_{\varepsilon,\eta}\|_{Y_T}\leq 2\varepsilon_0,\ \ \ \ \ \inf_{s\in[0,T]} \inf_x v(s,x)\geq \frac{\lambda_0}{2}.
\end{align}
Applying Lemma \ref{lemma} to \eqref{eqjum}, there exists $T_1>0$ such that
\begin{align}\label{haha}
\|\partial_x (\mathcal{T}w)\|_{X_T^{1-\gamma,\infty}}\lesssim \|\bar u_0\|_{\dot C^{2\gamma}}+\|\tilde F(v,w)\|_{X_T^{1-\gamma,\infty}}, \ \ \ \forall 0<T<T_1.
\end{align}
We have
\begin{align*}
\|\tilde F(v,w)\|_{X_T^{1-\gamma,\infty}}\lesssim \|p(v)\|_{X_T^{1-\gamma,\infty}}+\left\|(\frac{1}{v}-\frac{1}{b_{\varepsilon,\eta}})\partial_x w\right\|_{X_T^{1-\gamma,\infty}}.
\end{align*}
Note that $p\in W^{2,\infty}$, hence
\begin{align*}
&\sup_{t\in[0,T]}t^{1-\gamma}\|p(v)(t)\|_{L^\infty}\lesssim T^{1-\gamma},\\
&\frac{\|p(v)(t)-p(v)(s)\|_{L^\infty}}{(t-s)^\alpha}\lesssim \frac{\|v(t)-v(s)\|_{L^\infty}}{(t-s)^\alpha}\lesssim s^{-\alpha}\|v-b_{\varepsilon,\eta}\|_{Y_T},\ \ \forall 0<s<t<T.
\end{align*}
Hence,
\begin{align}\label{PXT}
\|p(v)\|_{X_T^{1-\gamma,\infty}}\lesssim T^{1-\gamma}(1+\|v-b_{\varepsilon,\eta}\|_{Y_T}).
\end{align}
Denote $q(t,x)=(\frac{1}{v(t,x)}-\frac{1}{b_{\varepsilon,\eta}(x)})\partial_x w(t,x)$. We have
\begin{align}\label{qinf}
\left\|q(t)\right\|_{L^\infty}\lesssim \|v(t)-b_{\varepsilon,\eta}\|_{L^\infty}\|\partial_x w(t)\|_{L^\infty}\lesssim t^{\gamma-1}\|v-b_{\varepsilon,\eta}\|_{Y_T}\|\partial_x w\|_{X_T^{1-\gamma,\infty}}.
\end{align}
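Here and below we use the elementary bound $|\frac{1}{v}-\frac{1}{b_{\varepsilon,\eta}}|\lesssim |v-b_{\varepsilon,\eta}|$, which relies on the lower bounds $v\geq \frac{\lambda_0}{2}$ from \eqref{estofv} and $b_{\varepsilon,\eta}\geq \lambda_0$ (the latter holds since $b_{\varepsilon,\eta}$ in \eqref{defb} is built from averages of $v_0\geq \lambda_0$ against $\rho_\eta$, assumed to be a nonnegative unit-mass mollifier):
\begin{align*}
\Big|\frac{1}{v}-\frac{1}{b_{\varepsilon,\eta}}\Big|=\frac{|v-b_{\varepsilon,\eta}|}{v\, b_{\varepsilon,\eta}}\leq \frac{2}{\lambda_0^2}|v-b_{\varepsilon,\eta}|.
\end{align*}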
Moreover, for $0<s<t<T$, we have
\begin{align*}
q(t,x)-q(s,x)=(\frac{1}{v(t)}-\frac{1}{v(s)})\partial_x w(t)+(\frac{1}{v(s)}-\frac{1}{b_{\varepsilon,\eta}})(\partial_x w(t)-\partial_x w(s)).
\end{align*}
Note that
\begin{align*}
& \left\|(\frac{1}{v(t)}-\frac{1}{v(s)})\partial_x w(t)\right\|_{L^\infty}\lesssim \|v(t)-v(s)\|_{L^\infty}\|\partial_x w(t)\|_{L^\infty}\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\ \lesssim (t-s)^{\alpha} s^{\gamma-1-\alpha}\|v-b_{\varepsilon,\eta}\|_{Y_T}\|\partial_x w\|_{X_T^{1-\gamma,\infty}},\\
& \left\|(\frac{1}{v(s)}-\frac{1}{b_{\varepsilon,\eta}})(\partial_x w(t)-\partial_x w(s))\right\|_{L^\infty}\lesssim \|v(s)-b_{\varepsilon,\eta}\|_{L^\infty}\|\partial_x w(t)-\partial_x w(s)\|_{L^\infty}\\
&\quad\quad\ \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\lesssim (t-s)^\alpha s^{\gamma-1-\alpha}\|v-b_{\varepsilon,\eta}\|_{Y_T}\|\partial_x w\|_{X_T^{1-\gamma,\infty}}.
\end{align*}
Then one gets
\begin{align}\label{qdif}
\sup_{0<s<t<T} s^{1-\gamma+\alpha}\frac{\left\| q(t)-q(s)\right\|_{L^\infty}}{(t-s)^\alpha}\lesssim \|v-b_{\varepsilon,\eta}\|_{Y_T}\|\partial_x w\|_{X_T^{1-\gamma,\infty}}.
\end{align}
Hence we get from \eqref{qinf} and \eqref{qdif} that
\begin{align*}
\|q\|_{X_T^{1-\gamma,\infty}}\lesssim \|v-b_{\varepsilon,\eta}\|_{Y_T}\|\partial_x w\|_{X_T^{1-\gamma,\infty}}.
\end{align*}
Combining this with \eqref{PXT}, we obtain that
\begin{align}\label{tilF}
\|\tilde F(v,w)\|_{X_T^{1-\gamma,\infty}}&\leq C_1 T^{1-\gamma}(1+\|v-b_{\varepsilon,\eta}\|_{Y_T})+ C_1\|v-b_{\varepsilon,\eta}\|_{Y_T}\|\partial_x w\|_{X_T^{1-\gamma,\infty}}.
\end{align}
We conclude from \eqref{estofv}, \eqref{haha} and \eqref{tilF} that
\begin{align*}
\|\partial_x (\mathcal{T}w)\|_{X_T^{1-\gamma,\infty}}&\leq C_1 \|\bar u_0\|_{\dot C^{2\gamma}}+C_1 T^{1-\gamma}(1+2\varepsilon_0)+2C_1 \varepsilon_0M, \ \ \ \ \ \forall w\in \mathcal{E}_{T,M}.
\end{align*}
Then we obtain $\|\partial_x (\mathcal{T}w)\|_{X_T^{1-\gamma,\infty}}\leq M$ by taking $M=2C_1 \|\bar u_0\|_{\dot C^{2\gamma}}+1$, $\varepsilon_0<\frac{1}{(1+C_1)^{10}}$ and $T<\min\{\frac{1}{(1+C_1)^{10}},T_1\}$. This implies that $\mathcal{T}$ maps $\mathcal{E}_{T,M}$ to itself.
\\
It remains to prove that $\mathcal{T}$ is a contraction. Consider $w_1,w_2\in \mathcal{E}_{T,M}$ and let $v_m(t,x)=v_0(x)+\int_0^t \partial_x w_m(s,x)ds$, $m=1,2$. Then it is easy to check that both $v_1$ and $v_2$ satisfy \eqref{estofv}, and
\begin{align}\label{difv}
\|v_1-v_2\|_{Y_T}\lesssim T^\gamma \|\partial_x (w_1-w_2)\|_{X_T^{1-\gamma,\infty}}.
\end{align}
Thanks to Lemma \ref{lemma}, there exists $T_2>0$ such that
\begin{align}\label{contra}
\|\partial_x(\mathcal{T}w_1- \mathcal{T}w_2)\|_{X_T^{1-\gamma,\infty}}\lesssim \|\tilde F(v_1,w_1)-\tilde F(v_2,w_2)\|_{X_T^{1-\gamma,\infty}},\ \ \forall 0<T<T_2.
\end{align}
Note that
\begin{align*}
\tilde F(v_1,w_1)-\tilde F(v_2,w_2)=p(v_2)-p(v_1)+\mu \left(\left(\frac{1}{v_1}-\frac{1}{b_{\varepsilon,\eta}}\right)\partial_xw_1- \left(\frac{1}{v_2}-\frac{1}{b_{\varepsilon,\eta}}\right)\partial_xw_2\right).
\end{align*}
We first estimate $p(v_2)-p(v_1)$. Note that $p\in W^{2,\infty}$, hence
\begin{align*}
&\|p(v_2(t))-p(v_1(t))\|_{L^\infty}\lesssim \|v_1-v_2\|_{L^\infty}\lesssim \|v_1-v_2\|_{Y_T},
\end{align*}
and \begin{align*}
&\|p(v_2(t))-p(v_1(t))-(p(v_2(s))-p(v_1(s)))\|_{L^\infty}\\
&\lesssim\|(v_2-v_1)(t)-(v_2-v_1)(s)\|_{L^\infty}+ \|v_2(t)-v_1(t)\|_{L^\infty}(\|v_1(t)-v_1(s)\|_{L^\infty}+\|v_2(t)-v_2(s)\|_{L^\infty})\\
&\lesssim (t-s)^\alpha s^{-\alpha}\|v_1-v_2\|_{Y_T}(1+\|v_1-b_{\varepsilon,\eta}\|_{Y_T}+\|v_2-b_{\varepsilon,\eta}\|_{Y_T}).
\end{align*}
Hence
\begin{align}
\|p(v_2)-p(v_1)\|_{X_T^{1-\gamma,\infty}}&\lesssim T^{1-\gamma} \|v_1-v_2\|_{Y_T}(1+\|v_1-b_{\varepsilon,\eta}\|_{Y_T}+\|v_2-b_{\varepsilon,\eta}\|_{Y_T})\nonumber\\
&\overset{\eqref{difv}}\lesssim T\|\partial_x (w_1-w_2)\|_{X_T^{1-\gamma,\infty}}.\label{P}
\end{align}
Then we estimate $\mathcal{Q}=\left(\frac{1}{v_1}-\frac{1}{b_{\varepsilon,\eta}}\right)\partial_xw_1- \left(\frac{1}{v_2}-\frac{1}{b_{\varepsilon,\eta}}\right)\partial_xw_2$. We have
\begin{align*}
\mathcal{Q}=\left(\frac{1}{v_1}-\frac{1}{v_2}\right)\partial_xw_1+\left(\frac{1}{v_2}-\frac{1}{b_{\varepsilon,\eta}}\right)\partial_x(w_1-w_2).
\end{align*}
For any $0<t<T$, one has
\begin{align}\label{111}
\|\mathcal{Q}(t)\|_{L^\infty}\lesssim \|(v_1-v_2)(t)\|_{L^\infty}\|\partial_x w_1(t)\|_{L^\infty}\lesssim t^{\gamma-1}\|v_1-v_2\|_{Y_T}\|\partial_x w_1\|_{X_T^{1-\gamma,\infty}}.
\end{align}
For any $0<s<t<T$,
\begin{align}
& \|\mathcal{Q}(t)-\mathcal{Q}(s)\|_{L^\infty}\nonumber\\
&\lesssim\|(v_1-v_2)(t)-(v_1-v_2)(s)\|_{L^\infty}\|\partial_x w_1(t)\|_{L^\infty}\nonumber \\
&\ \ \ \ +\|(v_1-v_2)(t)\|_{L^\infty}(\|v_1(t)-v_1(s)\|_{L^\infty}+\|v_2(t)-v_2(s)\|_{L^\infty})\|\partial_x w_1(t)\|_{L^\infty}\nonumber \\
&\ \ \ \ +\|(v_1-v_2)(s)\|_{L^\infty}\|\partial_x (w_1(t)-w_1(s))\|_{L^\infty}+\|v_2(t)-v_2(s)\|_{L^\infty}\|\partial_x (w_1-w_2)(t)\|_{L^\infty}\nonumber \\
&\ \ \ \ +\|v_2(s)-b_{\varepsilon,\eta}\|_{L^\infty} \|\partial_x (w_1-w_2)(t)-\partial_x (w_1-w_2)(s)\|_{L^\infty}\nonumber \\
&\lesssim (t-s)^\alpha s^{\gamma-1-\alpha}\|v_1-v_2\|_{Y_T}\|\partial_x w_1\|_{X_T^{1-\gamma,\infty}}(1+\|v_1-b_{\varepsilon,\eta}\|_{Y_T}+\|v_2-b_{\varepsilon,\eta}\|_{Y_T})\nonumber \\
&\quad\quad\quad\quad+(t-s)^\alpha s^{\gamma-1-\alpha}\|v_2-b_{\varepsilon,\eta}\|_{Y_T}\|\partial_x(w_1-w_2)\|_{X_T^{1-\gamma,\infty}}.\label{222}
\end{align}
Combining \eqref{difv}, \eqref{111}, \eqref{222} with the fact that $\|v_1-b_{\varepsilon,\eta}\|_{Y_T}+\|v_2-b_{\varepsilon,\eta}\|_{Y_T}\lesssim \varepsilon_0$ and $\|\partial_x w_1\|_{X_T^{1-\gamma,\infty}}\leq M$, we obtain
\begin{align}\label{Q}
\|\mathcal{Q}\|_{X_T^{1-\gamma,\infty}}\lesssim M T^\gamma \|\partial_x (w_1-w_2)\|_{X_T^{1-\gamma,\infty}}.
\end{align}
We conclude from \eqref{contra}, \eqref{P} and \eqref{Q} that
$$
\|\partial_x(\mathcal{T}w_1- \mathcal{T}w_2)\|_{X_T^{1-\gamma,\infty}}\leq C_2( M T^\gamma+T)\|\partial_x(w_1-w_2)\|_{X_T^{1-\gamma,\infty}}\leq \frac{1}{2}\|\partial_x(w_1-w_2)\|_{X_T^{1-\gamma,\infty}}.
$$
Here we fix $T=\min\left\{\frac{1}{(1+C_1+C_2)^{10}},T_1,T_2\right\}$. This completes the proof.
\end{proof}
\section{Well-posedness for the full Navier–Stokes system }
In this section, we consider the Cauchy problem of \eqref{cpns} with initial data
$(v,u,\theta)(0,x)=(v_0,u_0,\theta_0)(x)$ satisfying
\begin{align}\label{conini2}
\inf_xv_0(x)\geq \lambda_0>0,\ \ \ \|v_0\|_{L^\infty}<\infty,\ \ \|u_0\|_{L^2}<\infty,\ \ \|\theta_0\|_{\dot W^{-\frac{2}{3},\frac{6}{5}}}<\infty.
\end{align}
Moreover, we suppose that there exist $0<\varepsilon,\eta\ll 1 $ such that $v_0$ satisfies the condition \eqref{conccc} for some $\varepsilon_0>0$ that will be fixed later in our proof.
\begin{thm}\label{thmfull}
There exists $\varepsilon_0>0$ such that if initial data $(v_0,u_0,\theta_0)$ satisfies \eqref{conini2} and \eqref{conccc}, then the system \eqref{cpns} admits a unique local solution in $[0,T]$ satisfying
\begin{align*}
\inf_{t\in[0,T]}\inf_xv(t,x)\geq \frac{\lambda_0}{2},\ \ \ \|v\|_{Y_T}\leq 2\|v_0\|_{L^\infty},\ \ \ \ \|(u,\theta)\|_{ Z_T}\leq B,
\end{align*}
for some $T,B>0.$
\end{thm}
Clearly, Theorem \ref{maincp} follows as a consequence of Theorem \ref{thmfull}. \\
Define the space
\begin{align*}
\mathcal{X}_{T,B}=\{(w,\vartheta):(w,\vartheta)|_{t=0}=(u_0,\theta_0), \|(w,\vartheta)\|_{Z_T}\leq B\}.
\end{align*}
Given $(w,\vartheta)\in\mathcal{X}_{T,B}$, we define a map $\mathcal{M}(w,\vartheta)=(u,\theta)$, where $u,\theta$ solve the equations
\begin{align*}
& u_t-\mu(\frac{ u_x}{b_{\varepsilon,\eta}})_x+(p(v,\theta))_x=\mu((\frac{1}{v}-\frac{1}{b_{\varepsilon,\eta}})\partial_x w)_x,\ \ \ \
u(0,x)=u_0(x),
\\
& \theta_{t}-\frac{\kappa}{\mathbf{c}}\left(\frac{\theta_x}{b_{\varepsilon,\eta}}\right)_x=-\frac{p(v,\vartheta)}{\mathbf{c}} w_{x}+\frac{\mu}{\mathbf{c} v}\left(w_{x}\right)^{2}+\frac{\kappa}{\mathbf{c}}((\frac{1}{v}-\frac{1}{b_{\varepsilon,\eta}})\partial_x \vartheta)_x,\ \ \ \theta(0,x)=\theta_0(x),
\end{align*}
where $v$ is fixed by
\begin{align*}
v(t,x)=v_0(x)+\int_0^t \partial_x w(s,x)ds,
\end{align*}
and the function $b_{\varepsilon,\eta}$ is defined in \eqref{defb}.\\
The following is the key lemma to prove Theorem \ref{thmfull}.
\begin{lemma}\label{lemre2}
There exists $\varepsilon_0>0$ such that if initial data $(v_0,u_0,\theta_0)$ satisfies \eqref{conini2} and \eqref{conccc}, then we have
\begin{align*}
& \|\mathcal{M}(w,\vartheta)\|_{Z_T}\leq B,\ \ \forall (w,\vartheta)\in \mathcal{X}_{T,B},\\
&\|\mathcal{M}(w_1,\vartheta_1)-\mathcal{M}(w_2,\vartheta_2)\|_{Z_T}\leq \frac{1}{2}\|(w_1-w_2,\vartheta_1-\vartheta_2)\|_{Z_T}, \ \ \forall (w_k,\vartheta_k)\in \mathcal{X}_{T,B},\ k=1,2,
\end{align*}
for some $T,B>0$.
\end{lemma}
\begin{proof}~\\
1. \textit{Estimate $v$.}\\
We have
\begin{align*}
&\|v(t)-b_{\varepsilon,\eta}\|_{L^\infty}\lesssim \|v_0-b_{\varepsilon,\eta}\|_{L^\infty}+\int_0^t \|w_x(\tau)\|_{L^\infty}d\tau,\\
& \inf_{t\in[0,T]}\inf_x v(t,x)\geq \inf_{x}v_0(x)-\int_0^t \|w_x(\tau)\|_{L^\infty}d\tau.
\end{align*}
By H\"{o}lder's inequality,
\begin{align*}
\int_0^T \|w_x(\tau)\|_{L^\infty}d\tau \leq C T^\frac{1}{4}\|w_x\|_{X_T^{\frac{3}{4},\infty}}\leq C T^\frac{1}{4}B.
\end{align*}
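This can also be seen directly: recalling that the $X_T^{\frac{3}{4},\infty}$ norm controls the weighted sup $\sup_{0<\tau<T}\tau^{\frac{3}{4}}\|w_x(\tau)\|_{L^\infty}$, we have
\begin{align*}
\int_0^T \|w_x(\tau)\|_{L^\infty}d\tau\leq \|w_x\|_{X_T^{\frac{3}{4},\infty}}\int_0^T \tau^{-\frac{3}{4}}d\tau=4\,T^{\frac{1}{4}}\|w_x\|_{X_T^{\frac{3}{4},\infty}}.
\end{align*}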
We take $T$ small enough such that $CT^\frac{1}{4}B\leq \frac{1}{100}\min\{\lambda_0,\varepsilon_0\}$; then
\begin{align}\label{es2v}
\|v-b_{\varepsilon,\eta}\|_{Y_T}\leq 2\varepsilon_0,\ \ \ \ \ \inf_{s\in[0,T]} \inf_x v(s,x)\geq \frac{\lambda_0}{2}.
\end{align}
2. \textit{Estimate $\theta$.}\\
Recall the equation for $\theta$:
\begin{align*}
\theta_{t}-\frac{\kappa}{\mathbf{c}}\left(\frac{\theta_x}{b_{\varepsilon,\eta}}\right)_x=-\frac{p(v,\vartheta)}{\mathbf{c}} w_{x}+\frac{\mu}{\mathbf{c} v}\left(w_{x}\right)^{2}+\frac{\kappa}{\mathbf{c}}((\frac{1}{v}-\frac{1}{b_{\varepsilon,\eta}})\partial_x \vartheta)_x:=R(v,w,\vartheta)+\partial_x \tilde F(v,\vartheta).
\end{align*}
Applying Lemma \ref{lemma} to the equation above for $\theta$, we obtain
\begin{align*}
&\sum_{\star\in\{L^2_{T},X_T^{\frac{1}{2},2}\}}\|\theta\|_{\star}+ \sum_{\star\in\{L^\frac{6}{5}_{T},X_T^{\frac{5}{6},\frac{6}{5}}\}}\|\theta_x\|_{\star}\lesssim \|\bar \theta_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}+\sum_{\star\in\{L_{T}^\frac{6}{5},X_T^{\frac{5}{6},\frac{6}{5}}\}}\|\tilde F\|_{\star}+T^\frac{1}{4}\sum_{\star\in\{L^1_{T},X_T^{1,1}\}}\|R\|_{\star}.
\end{align*}
It is easy to check that
\begin{align*}
&\sum_{\star\in\{L_{T}^\frac{6}{5},X_T^{\frac{5}{6},\frac{6}{5}}\}}\|\tilde F\|_{\star}\lesssim \|v-b_{\varepsilon,\eta}\|_{L^\infty_{T}}(\|\partial_x\vartheta\|_{L_{T}^\frac{6}{5}}+\|\partial_x\vartheta\|_{X_T^{\frac{5}{6},\frac{6}{5}}})\lesssim \varepsilon_0B,\\
& \sum_{\star\in\{L^1_{T},X_T^{1,1}\}}\|R\|_{\star}\lesssim \|w_x\|_{L^2_T}^2+\|\vartheta\|_{L^2_T}^2+\|w_x\|_{X_T^{\frac{1}{2},2}}^2+\|\vartheta\|_{X_T^{\frac{1}{2},2}}^2\lesssim B^2.
\end{align*}
Hence we obtain
\begin{align}
&\|\theta\|_{L^2_{T}}+ \|\theta_x\|_{L^\frac{6}{5}_{T}}+\|\theta\|_{X_T^{\frac{1}{2},2}}+\|\theta_x\|_{X_T^{\frac{5}{6},\frac{6}{5}}}\nonumber\\
&\quad\quad\quad\ \ \ \ \leq \tilde C_0\|\bar \theta_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}+\tilde C_0\varepsilon_0B+\tilde C_0T^\frac{1}{4}B^2\leq \frac{B}{3},\label{retheta}
\end{align}
provided $ \|\bar \theta_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}\leq \frac{B}{\tilde C_0}$, $\varepsilon_0\leq \frac{1}{10(1+\tilde C_0)^{10}}$ and $T\leq \frac{1}{(1+\tilde C_0+B)^{10}}$.\\
3. \textit{Estimate $u$.}\\
Recall the equation
\begin{align*}
u_t-\mu\left(\frac{u_x}{b_{\varepsilon,\eta}}\right)_x=-(p(v,\theta))_x+\mu((\frac{1}{v}-\frac{1}{b_{\varepsilon,\eta}}) w_x)_x=:\partial_xF(v,w,\theta).
\end{align*}
Applying Lemma \ref{lemma} with the equation above for $u$, we obtain that
\begin{align*}
\|u_x\|_{L^2_{T}}+ \|u_x\|_{X_T^{\frac{1}{2},2}}+\|u_x\|_{X_T^{\frac{3}{4},\infty}}\lesssim \|u_0\|_{L^2}+\|F\|_{L^2_{T}}+\|F\|_{X_T^{\frac{1}{2},2}}+\|F\|_{X_T^{\frac{3}{4},\infty}}.
\end{align*}
It is easy to check that
\begin{align*}
&\|F\|_{L^2_{T}}+\|F\|_{X_T^{\frac{1}{2},2}}\lesssim \sum_{\star\in\{L^2_{T},X_T^{\frac{1}{2},2}\}}\|\theta\|_{\star}+\|v-b_{\varepsilon,\eta}\|_{L^\infty}\sum_{\star\in\{L^2_{T},X_T^{\frac{1}{2},2}\}}\|w_x\|_{\star},\\
&\|F\|_{X_T^{\frac{3}{4},\infty}}\lesssim \|\theta\|_{X_T^{\frac{3}{4},\infty}}+\|v-b_{\varepsilon,\eta}\|_{L^\infty}\|w_x\|_{X_T^{\frac{3}{4},\infty}}.
\end{align*}
By the interpolation inequality, we have $\|\theta\|_{X_T^{\frac{3}{4},\infty}}\lesssim \|\theta\|_{X_T^{\frac{1}{2},2}}+\|\theta_x\|_{X_T^{\frac{5}{6},\frac{6}{5}}}$.
Hence
\begin{align*}
&\sum_{\star\in\{L^2_{T},X_T^{\frac{1}{2},2},X_T^{\frac{3}{4},\infty}\}}\|u_x\|_{\star}\\
&\ \ \leq \tilde C_1 (\|u_0\|_{L^2}+\|\theta\|_{L^2_{T}}+\|\theta\|_{X_T^{\frac{1}{2},2}}+\|\theta_x\|_{X_T^{\frac{5}{6},\frac{6}{5}}}+\|v-b_{\varepsilon,\eta}\|_{L^\infty}(\|w_x\|_{L^2_{T}}+\|w_x\|_{X_T^{\frac{1}{2},2}}))\\
&\ \ \overset{\eqref{es2v}, \eqref{retheta}}\leq \tilde C_1(\|u_0\|_{L^2}+\tilde C_0\|\bar \theta_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}+\tilde C_0\varepsilon_0B+\tilde C_0T^\frac{1}{4}B^2+\varepsilon_0 B)\\
&\ \ \ \ \ \leq \frac{B}{3},
\end{align*}
provided $B\geq (10+\tilde C_0+\tilde C_1+\tilde C_2)(\|u_0\|_{L^2}+\|\bar \theta_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}})$, $\varepsilon_0\leq \frac{1}{100(1+\tilde C_0+\tilde C_1+\tilde C_2)^{10}}$ and $T\leq {\varepsilon_0^{10}}B^{-10}$.
Combining this with \eqref{retheta}, we obtain
\begin{align*}
\|(u,\theta)\|_{Z_T}\leq B.
\end{align*}
This implies that $\mathcal{M}$ maps $\mathcal{X}_{T,B}$ to itself. It remains to prove that $\mathcal{M}$ is a contraction. Consider $(w_1,\vartheta_1), (w_2,\vartheta_2)\in \mathcal{X}_{T,B}$ and let $v_m(t,x)=v_0(x)+\int_0^t \partial_x w_m(s,x)ds$, $m=1,2$. Then we have
\begin{equation}\label{v1v2}
\begin{aligned}
&\|v_m-b_{\varepsilon,\eta}\|_{L^\infty_{T}}\leq 2\varepsilon_0,\ \ \ \ \ \inf_{s\in[0,T]} \inf_x v_m(s,x)\geq \frac{\lambda_0}{2},\\
&\|v_1-v_2\|_{L^\infty_{T}}\leq \int_0^T \|\partial_x(w_1-w_2)(s)\|_{L^\infty}ds\lesssim T^\frac{1}{4} \|\partial_x (w_1-w_2)\|_{X_T^{\frac{3}{4},\infty}}.
\end{aligned}
\end{equation}
Denote $(u_m,\theta_m)=\mathcal{M}(w_m,\vartheta_m), m=1,2$. We have
\begin{align*}
& \partial_t (u_1-u_2)-\mu\left(\frac{\partial_x (u_1-u_2)}{b_{\varepsilon,\eta}}\right)_x=\partial_x(F(v_1,w_1,\theta_1)- F(v_2,w_2,\theta_2)),\\
& \partial_{t}(\theta_1-\theta_2)-\frac{\kappa}{\mathbf{c}}\left(\frac{\partial_x(\theta_1-\theta_2)}{b_{\varepsilon,\eta}}\right)_x=R(v_1,w_1,\vartheta_1)-R(v_2,w_2,\vartheta_2)+\partial_x(\tilde F(v_1,\vartheta_1)-\tilde F(v_2,\vartheta_2)).
\end{align*}
Applying Lemma \ref{lemma} with $(f,\phi,C_\phi)=(\theta_1-\theta_2,b_{\varepsilon,\eta}^{-1},\eta^{-1}\|v_0\|_{L^\infty})$, we get
\begin{align}
&\sum_{\star\in\{L^2_{T},X_T^{\frac{1}{2},2}\}}\|\theta_1-\theta_2\|_{\star}+ \sum_{\star\in\{L^\frac{6}{5}_{T},X_T^{\frac{5}{6},\frac{6}{5}}\}}\|\partial_x(\theta_1-\theta_2)\|_{\star}\nonumber\\
&\lesssim \sum_{\star\in\{L_{T}^\frac{6}{5},X_T^{\frac{5}{6},\frac{6}{5}}\}}\|\tilde F(v_1,\vartheta_1)-\tilde F(v_2,\vartheta_2)\|_{\star}+T^\frac{1}{4}\sum_{\star\in\{L^1_{T},X_T^{1,1}\}}\|R(v_1,w_1,\vartheta_1)-R(v_2,w_2,\vartheta_2)\|_{\star} .\label{diffthet}
\end{align}
Then \eqref{v1v2} yields
\begin{align}
&\sum_{\star\in\{L_{T}^\frac{6}{5},X_T^{\frac{5}{6},\frac{6}{5}}\}}\|(\tilde F(v_1,\vartheta_1)-\tilde F(v_2,\vartheta_2))\|_{\star}\nonumber\\
&\lesssim \|{v_1}-{v_2}\|_{L^\infty}\sum_{\star\in\{L_{T}^\frac{6}{5},X_T^{\frac{5}{6},\frac{6}{5}}\}}\left\|\partial_x \vartheta_1\right\|_{\star} +\|v_1-b_{\varepsilon,\eta}\|_{L^\infty}\sum_{\star\in\{L_T^\frac{6}{5},X_T^{\frac{5}{6},\frac{6}{5}}\}}\left\|\partial_x (\vartheta_1-\vartheta_2)\right\|_{\star}\nonumber\\
&\lesssim (T^\frac{1}{5}B+\varepsilon_0)\|(w_1,\vartheta_1)-(w_2,\vartheta_2)\|_{Z_T}.\label{diffF}
\end{align}
Moreover, we have
\begin{align*}
|R(v_1,w_1,\vartheta_1)-R(v_2,w_2,\vartheta_2)|&\lesssim |\vartheta_1-\vartheta_2||\partial_xw_1|+|\vartheta_2||\partial_x(w_1-w_2)|+|\vartheta_2||\partial_xw_2||v_1-v_2|\\
&\quad\quad+|\partial_x(w_1-w_2)|(|\partial_xw_1|+|\partial_x w_2|)+||\partial_xw_2|^2|v_1-v_2|.
\end{align*}
By H\"{o}lder's inequality, it is easy to check that
\begin{align}
&\|R(v_1,w_1,\vartheta_1)-R(v_2,w_2,\vartheta_2)\|_{L^1_{T}}\nonumber\\
&\lesssim \|\vartheta_1-\vartheta_2\|_{L^{2}_{T}}\|\partial_x w_1\|_{L^{2}_{T}}+(\|\vartheta_2\|_{L^{2}_{T}}+\|\partial_x w_1\|_{L^{2}_{T}}+\|\partial_x w_2\|_{L^{2}_{T}})\|\partial_x (w_1-w_2)\|_{L^{2}_{T}}\nonumber\\
&\quad\quad\quad+(\|\vartheta_2\|_{L^{2}_{T}}^2+ \|\partial_x w_2\|_{L^{2}_{T}}^2)\|v_1-v_2\|_{L^\infty_{T}}\nonumber\\
&\lesssim (B+B^2T^\frac{1}{5})\|(w_1,\vartheta_1)-(w_2,\vartheta_2)\|_{Z_T}.\label{diffR}
\end{align}
Similarly, one gets
\begin{align*}
& \sum_{\star\in\{L^1_{T},X_T^{1,1}\}}\|R(v_1,w_1,\vartheta_1)-R(v_2,w_2,\vartheta_2)\|_{\star}\lesssim (B+B^2T^\frac{1}{5})\|(w_1,\vartheta_1)-(w_2,\vartheta_2)\|_{Z_T}.
\end{align*}
Combining this with \eqref{diffthet}, \eqref{diffF} and \eqref{diffR}, we obtain
\begin{align}
&\sum_{\star\in\{L^2_{T},X_T^{\frac{1}{2},2}\}}\|\theta_1-\theta_2\|_{\star}+ \sum_{\star\in\{L^\frac{6}{5}_{T},X_T^{\frac{5}{6},\frac{6}{5}}\}}\|\partial_x(\theta_1-\theta_2)\|_{\star}\nonumber\\
&\leq \tilde C_3 (T^\frac{1}{5}B+\varepsilon_0+T^\frac{1}{4}(B+B^2T^\frac{1}{5}))\|(w_1,\vartheta_1)-(w_2,\vartheta_2)\|_{Z_T}\nonumber\\
&\leq 2\tilde C_3\varepsilon_0\|(w_1,\vartheta_1)-(w_2,\vartheta_2)\|_{Z_T}.\label{diffrethe}
\end{align}
Then we estimate $u_1-u_2$. By Lemma \ref{lemma}, we get
\begin{align*}
&\sum_{\star\in\{L^2_{T},\ X_T^{\frac{1}{2},2},\ X_T^{\frac{3}{4},\infty}\}}\|\partial_x (u_1-u_2)\|_{\star}\lesssim \sum_{\star\in\{L^2_{T},\ X_T^{\frac{1}{2},2},\ X_T^{\frac{3}{4},\infty}\}}\|F(v_1,w_1,\theta_1)- F(v_2,w_2,\theta_2)\|_{\star}.
\end{align*}
Note that
\begin{align*}
|F(v_1,w_1,\theta_1)- F(v_2,w_2,\theta_2)|\lesssim |\theta_1-\theta_2|+|\theta_2||v_1-v_2|+|v_1-b_{\varepsilon,\eta}||\partial_x(w_1-w_2)|+|v_1-v_2||\partial_x w_2|.
\end{align*}
Using the interpolation inequality $\|f\|_{X_T^{\frac{3}{4},\infty}}\lesssim \|f\|_{X_T^{\frac{1}{2},2}}+\|\partial_x f\|_{ X_T^{\frac{5}{6},\frac{6}{5}}}$ (as in the estimate of $F$ above), we get
\begin{align*}
& \sum_{\star\in\{L^2_{T},\ X_T^{\frac{1}{2},2},\ X_T^{\frac{3}{4},\infty}\}}\|F(v_1,w_1,\theta_1)- F(v_2,w_2,\theta_2)\|_{\star}\\
&\ \ \ \lesssim \sum_{\star\in\{L^2_{T},\ X_T^{\frac{1}{2},2}\}}\|\theta_1-\theta_2\|_{\star}+\|\partial_x(\theta_1-\theta_2)\|_{ X_T^{\frac{5}{6},\frac{6}{5}}}+\|v_1-b_{\varepsilon,\eta}\|_{L^\infty}\sum_{\star\in\{L^2_{T},\ X_T^{\frac{1}{2},2},\ X_T^{\frac{3}{4},\infty}\}}\|\partial_x(w_1-w_2)\|_{\star}\\
&\ \ \ \ \ \ +\Big(\sum_{\star\in\{L^2_{T},\ X_T^{\frac{1}{2},2}\}}\|\theta_2\|_{\star}+\|\partial_x\theta_2\|_{ X_T^{\frac{5}{6},\frac{6}{5}}}+\sum_{\star\in\{L^2_{T},\ X_T^{\frac{1}{2},2},\ X_T^{\frac{3}{4},\infty}\}}\|\partial_x w_2\|_{\star}\Big)\|v_1-v_2\|_{L^\infty}\\
&\ \ \ \lesssim(\tilde C_3\varepsilon_0+BT^\frac{1}{5})\|(w_1,\vartheta_1)-(w_2,\vartheta_2)\|_{Z_T}.
\end{align*}
Then we get
\begin{align*}
&\sum_{\star\in\{L^2_{T},\ X_T^{\frac{1}{2},2},\ X_T^{\frac{3}{4},\infty}\}}\|\partial_x (u_1-u_2)\|_{\star}\\
&\quad\quad\quad\leq \tilde C_4 (\tilde C_3\varepsilon_0+BT^\frac{1}{5})\|(w_1,\vartheta_1)-(w_2,\vartheta_2)\|_{Z_T}\leq \frac{1}{5}\|(w_1,\vartheta_1)-(w_2,\vartheta_2)\|_{Z_T},
\end{align*}
by taking $B= (10+\sum_{m=1}^4\tilde C_m)(\|u_0\|_{L^2}+\|\bar \theta_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}})$, $\varepsilon_0= \frac{1}{100(1+\sum_{m=1}^4\tilde C_m)^{10}}$ and $T= {\varepsilon_0^{10}}B^{-10}$. Combining this with \eqref{diffrethe} yields
\begin{align*}
\|\mathcal{M}(w_1,\vartheta_1)-\mathcal{M}(w_2,\vartheta_2)\|_{Z_T}\leq \frac{1}{2}\|(w_1-w_2,\vartheta_1-\vartheta_2)\|_{Z_T}.
\end{align*}
This completes the proof.
\end{proof}
\\
In the rest of this section, we give a local version of Theorem \ref{thmfull}.
Define the local norms
\begin{align*}
\|h\|_{\tilde L^p}:=\sup_{z}\|h\|_{L^p([z,z+1])},\ \ \
\|h\|_{\tilde W ^{s,p} }:=\sup_{z}\|h\chi_z\|_{\dot W ^{s,p}},\ \ \ \|f\|_{\tilde X^{\sigma,p}_T}=\sup_z\|f\chi_z\|_{X^{\sigma,p}_T},
\end{align*}
for $h:\mathbb{R}\to \mathbb{R}$, $f:\mathbb{R}^+\times\mathbb{R}\to\mathbb{R}$. Here $\chi_z$ is a smooth cutoff function satisfying $\mathbf{1}_{[z-1,z+1]}\leq \chi_z\leq \mathbf{1}_{[z-2,z+2]}$.
Moreover, we denote $\|f\|_{\tilde L^p_{T}}=\|f\|_{L^p_t\tilde L^p_x}$.
We introduce the following lemma.
\begin{lemma}\label{lemlocal}
Let $$
g(t,x)=\int_0^t \int_{|y|\geq \rho} \partial_t K(t-\tau,y)F(\tau,x-y)dyd\tau,
$$
for some $\rho\in(0,1)$. Then for any $0<T<1$, $p\geq 1$, $\sigma\in(0,1-\alpha)$,
\begin{align*}
\|g\|_{\tilde L^p_{T}}\lesssim_\rho \|F\|_{\tilde L^p_{T}},\ \ \ \ \|g\|_{\tilde X^{\sigma,p}_T}\lesssim _\rho \|F\|_{\tilde X^{\sigma,p}_T}.
\end{align*}
\end{lemma}
\begin{proof}
By Lemma \ref{lemheat} and H\"{o}lder's inequality,
\begin{align*}
|g(t,x)|\lesssim &\int_0^t \int_{|y|\geq \rho} \frac{1}{((t-\tau)^\frac{1}{2}+|y|)^3}|F(\tau,x-y)|dyd\tau
\\
\lesssim &\sum_{n=0}^\infty\int_0^t \int_{|y|\in[\rho+n,\rho+n+1]}\frac{1}{((t-\tau)^\frac{1}{2}+|y|)^3}|F(\tau,x-y)|dyd\tau\\
\lesssim& \sum_{n=0}^\infty \frac{1}{(\rho+n)^3}\min\{\|F\|_{\tilde L^p_{T}},\|F\|_{\tilde X_T^{\sigma,p}}\}\lesssim \rho^{-3}\min\{\|F\|_{\tilde L^p_{T}},\|F\|_{\tilde X_T^{\sigma,p}}\},
\end{align*}
for any $ x\in\mathbb{R}, \ t\in(0,T)$.
Hence for any $z\in\mathbb{R}$,
\begin{align*}
\|g\|_{L^p_{T}([z,z+1])}+\|g\|_{X_T^{\sigma,p}([z,z+1])}\lesssim \|g\|_{L^\infty_{T}([z,z+1])}\lesssim \min\{\|F\|_{\tilde L^p_{T}},\|F\|_{\tilde X_T^{\sigma,p}}\}.
\end{align*}Similarly, one can check that $\sup_{0<s<t<T}s^{\sigma+\alpha}\frac{\|g(t)-g(s)\|_{\tilde L^p}}{(t-s)^\alpha}\lesssim \|F\|_{\tilde X_T^{\sigma,p}}$.
This completes the proof.
\end{proof}
The following is a local version of Lemma \ref{lemma}.
\begin{lemma}
Let $f$ be a solution to \eqref{eqpara} with initial data $f_0=\partial_x \bar f_0$. There exists $T>0$ such that
$$
\sum_{\star\in\{\tilde L^2_{T},\tilde X_T^{\frac{1}{2},2}\}}\|f\|_{\star}+ \sum_{\star\in\{\tilde L^\frac{6}{5}_{T},\tilde X_T^{\frac{5}{6},\frac{6}{5}}\}}\|\partial_x f\|_{\star}\lesssim \|\bar f_0\|_{\tilde W^{\frac{1}{3},\frac{6}{5}}}+ \sum_{\star\in\{\tilde L^\frac{6}{5}_{T},\tilde X_T^{\frac{5}{6},\frac{6}{5}}\}}\| F\|_{\star}+T^{\frac{1}{4}}\sum_{\star\in\{{\tilde L^1_{T}},{\tilde X_T^{1,1}}\}}\|R\|_{\star}.
$$
Moreover, if $R=0$, we have
$$\sum_{\star\in\{\tilde L^2_{T},\tilde X_T^{\frac{1}{2},2},\tilde X_T^{\frac{3}{4},\infty}\}}\|\partial_xf\|_{\star}\lesssim \|f_0\|_{\tilde L^2}+\sum_{\star\in\{\tilde L^2_{T},\tilde X_T^{\frac{1}{2},2},\tilde X_T^{\frac{3}{4},\infty}\}}\|F\|_{\star}.
$$
\end{lemma}
\begin{proof} Denote $\mathbb{I}_{\varepsilon}=\cup_{n=1}^\infty[a_n-\varepsilon,a_n+\varepsilon]$.
Fix $r>100$ such that $\mathbb{I}_{\varepsilon}\subset B_{r/10}=[-r/10,r/10]$.
We consider a compact interval $\mathbb{K}$, which is either $B_{r/2}$ or any $[z-\frac{1}{2},z+\frac{1}{2}]\subset \mathbb{R}\backslash B_{r/2}$.
We divide the solution into two parts $f=f_{1}+f_{2}$, such that
\begin{align*}
\begin{cases}
\partial_t f_1-\partial_x(\phi(x)\partial_x f_1)=\partial_x (F\chi)+R\chi,\\
f_1(0,x)=\partial_x (\bar f_0(x)\chi(x)),
\end{cases} \ \ \ \
\begin{cases}
\partial_t f_2-\partial_x(\phi(x)\partial_x f_2)=\partial_x (F(1-\chi))+R(1-\chi),\\
f_2(0,x)=\partial_x (\bar f_0(x)(1-\chi)(x)).
\end{cases}
\end{align*}
Here $\chi$ is a smooth cutoff function satisfying $$\begin{cases}
&\mathbf{1}_{B_r}\leq \chi\leq \mathbf{1}_{B_{2r}},\ \ \text{if}\ \mathbb{K}=B_{r/2},\\
&\mathbf{1}_{[z-1,z+1]}\leq \chi\leq \mathbf{1}_{[z-2,z+2]},\ \ \text{if}\ \mathbb{K}=[z-\frac{1}{2},z+\frac{1}{2}]\subset \mathbb{R}\backslash B_{r/2}.
\end{cases}$$ We first consider the main term $f_1$. Applying Lemma \ref{lemma}, we obtain that
\begin{align*}
\sum_{\star\in\{ L^2_{T},X_T^{\frac{1}{2},2}\}}\|f_1\|_{\star}+ \sum_{\star\in\{ L^\frac{6}{5}_{T},X_T^{\frac{5}{6},\frac{6}{5}}\}}\|\partial_x f_1\|_{\star}\lesssim \|\bar f_0\chi\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}+ \sum_{\star\in\{ L^\frac{6}{5}_{T},X_T^{\frac{5}{6},\frac{6}{5}}\}}\| F\chi\|_{\star}+T^{\frac{1}{4}}\sum_{\star\in\{ L^1_{T},X_T^{1,1}\}}\|R\chi\|_{\star}.
\end{align*}
Note that $\chi$ is compactly supported. Hence $\|g\chi\|_{L^p}\lesssim \|g\|_{\tilde L^p}$. Then we obtain
\begin{align}\label{f1local} & \
\sum_{\star\in\{ L^2_{T},X_T^{\frac{1}{2},2}\}}\|f_1\|_{\star}+ \sum_{\star\in\{ L^\frac{6}{5}_{T},X_T^{\frac{5}{6},\frac{6}{5}}\}}\|\partial_x f_1\|_{\star}\lesssim \|\bar f_0\|_{\tilde W^{\frac{1}{3},\frac{6}{5}}}+ \sum_{\star\in\{ \tilde L^\frac{6}{5}_{T},\tilde X_T^{\frac{5}{6},\frac{6}{5}}\}}\| F\|_{\star}+T^{\frac{1}{4}}\sum_{\star\in\{\tilde L^1_{T},\tilde X_T^{1,1}\}}\|R\|_{\star}.
\end{align}
Similarly, when $R=0$ we have
\begin{align*}
\sum_{\star\in\{\tilde L^2_{T},\tilde X_T^{\frac{1}{2},2},\tilde X_T^{\frac{3}{4},\infty}\}}\|\partial_xf_1\|_{\star}\lesssim \|f_0\|_{\tilde L^2}+\sum_{\star\in\{\tilde L^2_{T},\tilde X_T^{\frac{1}{2},2},\tilde X_T^{\frac{3}{4},\infty}\}}\|F\|_{\star}.
\end{align*}
Then we estimate the perturbation part $f_2$. Note that for any $y\in\operatorname{supp}(1-\chi)$ and any $x\in \mathbb{K}$, there holds $|x-y|\geq \frac{1}{2}$, which removes the singularity of the kernel. The good decay property of the kernel helps us to control local norms (see Lemma \ref{lemlocal}). We consider $\partial_x f_{2,N}=\int_0^t \int \partial_1^2\mathbf{H} (t-\tau,x-y,x)F(\tau,y)(1-\chi(y))dyd\tau$ as an example. Other terms can be done similarly.
Applying Lemma \ref{lemlocal} we obtain that
\begin{align*}
\sum_{\star\in\{\tilde L^p_{T}(\mathbb{K}),\tilde X_T^{\sigma,p}\}}\|\partial_x f_{2,N}\|_{\star}
\lesssim \sum_{\star\in\{\tilde L^p_{T},\tilde X_T^{\sigma,p}\}}\|F\|_{\star},\ \ \ p\geq 1.
\end{align*}
Similarly, we obtain that
\begin{align*}
\sum_{\star\in\{\tilde{L}^2_{T},\tilde{X}_T^{\frac{1}{2},2}\}}\|f_2\|_{ \star}+\sum_{\star\in\{ \tilde{L}^\frac{6}{5}_{T},\tilde{X}_T^{\frac{5}{6},\frac{6}{5}}\}} \|\partial_x f_2\|_{\star}\lesssim \|\bar f_0\|_{\tilde W^{\frac{1}{3},\frac{6}{5}}}+\sum_{\star\in\{\tilde L^\frac{6}{5}_{T},\tilde X_T^{\frac{5}{6},\frac{6}{5}}\}} \| F\|_{\star}+T^{\frac{1}{4}}\sum_{\star\in\{\tilde L^1_{T},\tilde X_T^{1,1}\}}\|R\|_{\star},
\end{align*}
and
\begin{align*}
\sum_{\star\in\{\tilde L^2_{T},\tilde X_T^{\frac{1}{2},2},\tilde X_T^{\frac{3}{4},\infty}\}}\|\partial_xf_2\|_{\star}\lesssim \|f_0\|_{\tilde L^2}+\sum_{\star\in\{\tilde L^2_{T},\tilde X_T^{\frac{1}{2},2},\tilde X_T^{\frac{3}{4},\infty}\}}\|F\|_{\star},\ \ \ \text{if}\ R=0.
\end{align*}
Combining this with \eqref{f1local}, we obtain the result.
\end{proof}
\\
Denote
\begin{align*} \|(w,\vartheta)\|_{\tilde Z_T}:=\sum_{\star\in\{\tilde{L}_T^2,\tilde{X}_T^{\frac{1}{2},2},\tilde{X}_T^{\frac{3}{4},\infty}\}}\|\partial_x w\|_{\star}+\sum_{\star\in\{\tilde{L}_T^2,\tilde{X}_T^{\frac{1}{2},2}\}}\|\vartheta\|_{\star}+\sum_{\star\in\{\tilde{L}_T^{\frac{6}{5}},\tilde{X}_T^{\frac{5}{6},\frac{6}{5}}\}}\|\partial_x \vartheta\|_{\star}.
\end{align*}
Following the proof of Theorem \ref{thmfull}, we obtain the following result.
\begin{thm}\label{thmloc}There exists $\varepsilon_0>0$ such that if initial data $(v_0,u_0,\theta_0)$ satisfies \eqref{loccon} and \eqref{conccc}, then the system \eqref{cpns} admits a unique local solution $(v,u,\theta)$ in $[0,T]$ satisfying
\begin{align*}
\inf_{t\in[0,T]}\inf_xv(t,x)\geq \frac{\lambda_0}{2},\ \ \ \|v\|_{Y_T}\leq 2\|v_0\|_{L^\infty},\ \ \ \ \|(u,\theta)\|_{\tilde Z_T}\leq B,
\end{align*}
for some $T,B>0$.
\end{thm}
\section{Appendix}
Consider the heat equation
\begin{align}\label{eqparacons}
\partial_t f-\partial_x^2 f=\partial_x F+R,\ \ \ \ \ \ \ f(0,x)=\partial_x \bar f_0(x)+\tilde f_0(x).
\end{align}
We introduce the following lemma.
\begin{lemma}\label{lemap}
Let $f$ be a solution to \eqref{eqparacons}, for any $T\in(0,1)$, there holds
\begin{align*}
& \|f\|_{L^2_{T}}+\|\partial_x f\|_{L^\frac{6}{5}_{T}}\lesssim \|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}+T^\frac{1}{3}\|\tilde f_0\|_{L^\frac{6}{5}}+\|F\|_{L^\frac{6}{5}_{T}}+T^\frac{1}{4}\|R\|_{L^1_{T}},\\
& \|f\|_{L^6_{T}}+\|\partial_x f\|_{L^2_{T}}\lesssim\|\partial_x \bar f_0\|_{L^2}+\|\tilde f_0\|_{L^2}+\|F\|_{L^2_{T}}+T^\frac{1}{2}\|R\|_{L^2_{T}}.
\end{align*}
\end{lemma}
\begin{proof}
The solution is given by the formula
\begin{align*}
f(t,x)=&\int_{\mathbb{R}} \partial_x\mathbf{K} (t,x-y) \bar f_0(y) dy+\int_{\mathbb{R}} \mathbf{K} (t,x-y) \tilde f_0(y) dy\\
&+\int_0^t \int_{\mathbb{R}} \partial_x \mathbf{K} (t-\tau,x-y)F(\tau,y) dyd\tau+ \int_0^t \int_{\mathbb{R}} \mathbf{K} (t-\tau,x-y)R(\tau,y) dyd\tau\\
:=&f_{L1}(t,x)+f_{L2}(t,x)+f_N(t,x)+f_R(t,x).
\end{align*}
We first estimate $f_{L1}$. By Parseval’s identity and the Sobolev inequality, it is easy to check that
\begin{equation}\label{paraL2}
\begin{aligned}
& \|f_{L1} \|_{L^2_{\infty}}^2=\int_0^\infty \int_{\mathbb{R}} |\xi|^2|e^{-|\xi|^2\tau}\mathcal{F}(\bar f_0)(\xi)|^2d\xi d\tau=\frac{1}{2}\int_{\mathbb{R}}| \mathcal{F}(\bar f_0)(\xi)|^2d\xi=\frac{1}{2}\|\bar f_0\|_{L^2}^2\lesssim \|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}^2,\\
& \|\partial_x f_{L1} \|_{L^2_{\infty}}^2=\int_0^\infty \int_{\mathbb{R}} |\xi|^4|e^{-|\xi|^2\tau}\mathcal{F}(\bar f_0)(\xi)|^2d\xi d\tau=\frac{1}{2}\int_{\mathbb{R}}|\xi|^2| \mathcal{F}(\bar f_0)(\xi)|^2d\xi=\frac{1}{2}\|\partial_x\bar f_0\|_{L^2}^2.
\end{aligned}
\end{equation}
Let $(l,p)=(1,\frac{6}{5})\ \text{or}\ (0,6)$.
Note that
\begin{align*}
\partial_x^lf_{L1}(t,x)=\int_{\mathbb{R}} \partial_x^{1+l}\mathbf{K} (t,x-y) (\bar f_0(y)-\bar f_0(x)) dy.
\end{align*}
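Indeed, one may subtract the constant $\bar f_0(x)$ from $\bar f_0(y)$ inside the integral because $\int_{\mathbb{R}}\partial_x^{1+l}\mathbf{K}(t,z)\,dz=0$ for every $t>0$ and $l\geq 0$, the integrand being an exact derivative of a function that vanishes at infinity.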
Set $K_0(t,x)=|\partial_x^{1+l}\mathbf{K}(t,x)|$. Denote $\Delta_z g(x)=g(x)-g(x-z)$. Then
by a change of variable,
\begin{align*}
|\partial_x^lf_{L1}(t,x)|\lesssim \int_{\mathbb{R}} K_0(t,z) |\Delta_z\bar f_0(x)|dz.
\end{align*}
Then we have
\begin{align*}
\| \partial_x^lf_{L1}\|_{L^p_{T}}^p&=\int _0^\infty \int_{\mathbb{R}} \left(\int_{\mathbb{R}} K_0(t,z) |\Delta_z\bar f_0(x)|dz\right)^pdxdt\\
&\lesssim \int _0^\infty \left(\int_{\mathbb{R}} K_0(t,z) \|\Delta_z \bar f_0\|_{L^p_x}dz\right)^pdt\\
&\lesssim \int_0^\infty\left(\int_{\mathbb{R}} K_0(t,z)^\frac{p}{p-1}(1+|z|^2t^{-1})^\frac{\gamma p}{p-1}dz\right)^{p-1}\left(\int_{\mathbb{R}} \frac{\|\Delta_z\bar f_0\|_{L^p}^p}{(1+|z|^2t^{-1})^{\gamma p}}dz\right)dt,
\end{align*}
where we applied Minkowski's inequality in the first inequality, and H\"{o}lder's inequality in the second inequality.
It is easy to check that
\begin{align*}
\left(\int_{\mathbb{R}} K_0(t,z)^\frac{p}{p-1}(1+|z|^2t^{-1})^\frac{\gamma p}{p-1}dz\right)^{p-1}\lesssim t^{\frac{-(l+1)p-1}{2}},
\end{align*}
where we fix $\gamma=\frac{1+l}{2}$.
Hence,
\begin{align}
\| \partial_x ^lf_{L1}\|_{L^p_{T}} ^p&\lesssim \int_0^\infty t^{\frac{-(l+1)p-1}{2}}\left(\int_{\mathbb{R}} \frac{\|\Delta_z \bar f_0\|_{L^p}^p}{(1+|z|^2t^{-1})^{\gamma p}}dz\right)dt\nonumber\\&\lesssim \int_{\mathbb{R}} \frac{\|\Delta_z \bar f_0\|_{L^p}^p}{|z|^{(1+l)p-1}}dz=c \|\bar f_0\|_{\dot W^{\frac{p-2}{p}+l,p}}^p. \label{aplinearLp}
\end{align}
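In the last identity we used the difference (Gagliardo) characterization of the homogeneous Sobolev--Slobodeckij seminorm: for $s\in(0,1)$,
$$
\|g\|_{\dot W^{s,p}}^p=\int_{\mathbb{R}}\frac{\|\Delta_z g\|_{L^p}^p}{|z|^{1+sp}}\,dz,
$$
and here $s=\frac{p-2}{p}+l$, so that $1+sp=(1+l)p-1$, which matches the exponent above. We take this characterization as the definition of $\dot W^{s,p}$ for non-integer $s$; only the equivalence of norms is used in what follows.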
Hence we obtain that
\begin{equation}\label{e222}
\begin{aligned}
\| f_{L1}\|_{L^6_{T}}&\lesssim \|\bar f_0\|_{\dot W^{\frac{2}{3},6}}\lesssim \|\partial_x \bar f_0\|_{L^2},\\
\| \partial_xf_{L1}\|_{L^\frac{6}{5}_{\infty}}&\lesssim \|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}}.
\end{aligned}
\end{equation}
Combining \eqref{paraL2} and \eqref{e222}, one has
\begin{equation}\label{fL1}
\begin{aligned}
&\|f_{L1}\|_{L^2_{T}}+\|\partial_x f_{L1}\|_{L^\frac{6}{5}_{T}}\lesssim \|\bar f_0\|_{\dot W^{\frac{1}{3},\frac{6}{5}}},\\
& \|f_{L1}\|_{L^6_{T}}+\|\partial_x f_{L1}\|_{L^2_{T}}\lesssim \|\partial_x \bar f_0\|_{L^2}.
\end{aligned}
\end{equation}
Then we estimate $f_{L2}$. By Parseval’s identity, we obtain that
\begin{align*}
& \|\partial_x f_{L2}\|_{L^2_{T}}^2\lesssim \int_0^\infty \int_{\mathbb{R}} |\xi|^2|e^{-|\xi|^2\tau}\mathcal{F}(\tilde f_0)(\xi)|^2d\xi d\tau=\frac{1}{2}\int_{\mathbb{R}}| \mathcal{F}(\tilde f_0)(\xi)|^2d\xi=\frac{1}{2}\|\tilde f_0\|_{L^2}^2.
\end{align*}
Moreover, by H\"{o}lder's inequality and Young's inequality,
\begin{align*}
&\|f_{L2}\|_{L^{2}_{T}}\lesssim \|\mathbf{K}\|_{L^2_{t,T}L^\frac{3}{2}_x}\|\tilde f_0\|_{L^\frac{6}{5}}\lesssim T^\frac{1}{3}\|\tilde f_0\|_{L^\frac{6}{5}},\\
&\|f_{L2}\|_{L^{6}_{T}}\lesssim\left\|\int_{\mathbb{R}} \|\mathbf{K}(x-y)\|_{L^6_{t,T}}|\tilde f_0(y)|dy\right\|_{L^6_x}\lesssim \left\|\int_{\mathbb{R}} |\tilde f_0(y)|\frac{dy}{|x-y|^\frac{2}{3}}\right\|_{L^6_x}\lesssim \|\tilde f_0\|_{L^2},\\
&\|\partial_x f_{L2}\|_{L^\frac{6}{5}_{T}}\lesssim\|\partial_x \mathbf{K}\|_{L^\frac{6}{5}_{t,T}L^1_x}\|\tilde f_0\|_{L^\frac{6}{5}}\lesssim T^\frac{1}{3}\|\tilde f_0\|_{L^\frac{6}{5}}.
\end{align*}
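For the reader's convenience, the kernel norms used above follow from the Gaussian form of the heat kernel; assuming the standard normalization $\mathbf{K}(t,x)=(4\pi t)^{-\frac12}e^{-\frac{x^2}{4t}}$, one has $\|\mathbf{K}(t,\cdot)\|_{L^q_x}\simeq t^{-\frac12(1-\frac1q)}$ and $\|\partial_x\mathbf{K}(t,\cdot)\|_{L^1_x}\simeq t^{-\frac12}$, hence
$$
\|\mathbf{K}\|_{L^2_{t,T}L^\frac{3}{2}_x}\simeq\Big(\int_0^T t^{-\frac13}dt\Big)^{\frac12}\simeq T^{\frac13},\ \ \ \ \|\partial_x \mathbf{K}\|_{L^\frac{6}{5}_{t,T}L^1_x}\simeq\Big(\int_0^T t^{-\frac35}dt\Big)^{\frac56}\simeq T^{\frac13}.
$$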
Hence we obtain that
\begin{equation}\label{fL2}
\begin{aligned}
&\|f_{L2}\|_{L^2_{T}}+\|\partial_x f_{L2}\|_{L^\frac{6}{5}_{T}}\lesssim T^\frac{1}{3}\|\tilde f_0\|_{L^\frac{6}{5}},\\ &\|f_{L2}\|_{L^6_{T}}+\|\partial_x f_{L2}\|_{L^2_{T}}\lesssim \|\tilde f_0\|_{L^2}.
\end{aligned}
\end{equation}
Then we estimate the forced term $f_N$. We can write
$$
\partial_t f_N-\partial_x^2 f_N =\partial_x F_0,\ \ \ \text{in}\ \mathbb{R}\times \mathbb{R}, \ \ \ \ \text{where}\ \ F_0=F\mathbf{1}_{t>0}.
$$
We take Fourier transform in $(t,x)$ and obtain that
\begin{align*}
f_N=\mathcal{F}^{-1}_{t,x}\left(\frac{i\xi\,\mathcal{F}_{t,x} ( F_0)(\tau,\xi)}{i\tau+\xi^2}\right),
\end{align*}
where we denote by $\mathcal{F}_{t,x}$ the Fourier transform with respect to $(t,x)$, and by $\mathcal{F}^{-1}_{t,x}$ its inverse. It follows from the parabolic Calderon-Zygmund theory (see \cite{Stein}) that
$$
\left\|\partial_x f_N\right\|_{L^p_{T}}\lesssim \|F_0\|_{L^p_{T}}\lesssim \|F\|_{L^p_{T}},\ \ \ \ p\in(1,\infty).
$$
On the other hand, we have
$
\left|\partial_x\mathbf{K}( t,x)\right|\lesssim {(t^\frac{1}{2}+|x|)^{-2}}.
$
Then by Young's inequality,
\begin{align*}
&\left\| \int_0^t \int_{\mathbb{R}}\partial_x\mathbf{K}( t-\tau,x-y)F(\tau,y) dyd\tau\right\|_{L^p_{T}}\\
&\lesssim \left\|\int_{\mathbb{R}} \|\partial_x\mathbf{K}(\cdot ,x-y)\|_{L^\frac{3}{2}_{t,T}}\| F(y)\|_{L^\frac{3p}{3+p}_{t,T}}dy\right\|_{L^p_x}\lesssim \left\|\int_{\mathbb{R}}\| F(y)\|_{L^\frac{3p}{3+p}_{t,T}}\frac{dy}{|x-y|^\frac{2}{3}}\right\|_{L^p_x}\\
&\lesssim \| F\|_{L^\frac{3p}{3+p}_{T}}.
\end{align*}
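In the last two steps we used $\|\partial_x\mathbf{K}(\cdot,x)\|_{L^\frac{3}{2}_t(0,\infty)}\lesssim |x|^{-\frac23}$, which follows from the pointwise bound above together with $\int_0^\infty (t^\frac12+|x|)^{-3}dt\simeq |x|^{-1}$, and then the convolution with $|x-y|^{-\frac23}$ is handled by the Hardy-Littlewood-Sobolev inequality, which for the exponents used here maps $L^\frac{3p}{3+p}_x$ into $L^p_x$.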
Hence we obtain that for $l=0,1$,
\begin{align}\label{N}
\|\partial_x^l f_N\|_{L^p_{T}}=\|\int_0^t \int_{\mathbb{R}}\partial_x^{1+l}\mathbf{K}( t-\tau,x-y)F(\tau,y) dyd\tau\|_{L^p_{T}}\lesssim \| F\|_{L^\frac{3p}{3+(1-l)p}_{T}}.
\end{align}
Finally, we estimate $f_R$. By H\"{o}lder's inequality we have
$$
\|\partial_x^l f_R\|_{L^p_{T}}\lesssim \|\partial_x ^l \mathbf{K}\|_{L^p_{T}}\|R\|_{L^1_{T}}\lesssim T^\frac{1}{4}\|R\|_{L^1_{T}},\ \ (l,p)\in\{(0,2),(1,6/5)\},$$
and $$ \|\partial_x^l f_R\|_{L^p_{T}}\lesssim \|\partial_x ^l \mathbf{K}\|_{L^\frac{2p}{2+p}_{T}}\|R\|_{L^1_{T}}\lesssim T^\frac{1}{2} \|R\|_{L^2_{T}},\ \ (l,p)\in\{(0,6),(1,2)\}.
$$
Combining this with \eqref{fL1}, \eqref{fL2} and \eqref{N}, we finish the proof.
\end{proof}
\\
Consider the parabolic equation with jump coefficients,
\begin{equation}\label{jueqle}
\begin{aligned}
&\partial_t f^\pm - c_\pm f_{xx}^\pm=F ,\ \ \ \text{in}\ (t,x)\in(0,T)\times \mathbb{R}^\pm,\\
& f^+(t,0)=f^-(t,0),\quad\quad \ \ c_+\partial_x f^+(t,0)-c_-\partial_x f^-(t,0)=0,\ \ t\in(0,T),\\
&f(0,x)=f_0(x),\ \ \ \ x\in\mathbb{R}.
\end{aligned}
\end{equation}
\begin{lemma}\label{lemformula}
Solution to \eqref{jueqle} has formula
$f=f^+\mathbf{1}_{x\geq 0}+f^-\mathbf{1}_{x< 0}$ , where
$$ f^\pm (t,x)=f_M^\pm(t,x)+f_B^\pm(t,x).
$$
with
\begin{align}
f_M^\pm(t,x)=&\int_{\mathbb{R}^\pm}( \mathbf{K}(c_\pm t,x-y)-\mathbf{K}(c_\pm t,x+y))f_0(y) dy\nonumber\\
&+\int_0^t \int_{\mathbb{R}^\pm}( \mathbf{K}(c_\pm (t-\tau),x-y)-\mathbf{K}(c_\pm (t-\tau),x+y)) F(\tau,y) dyd\tau,\label{forM}\\
f_{B}^\pm(t,x)=&\frac{-2\sqrt{c_\pm}}{\sqrt{c_+}+\sqrt{c_-}}\int_0^t\mathbf{K}(c_\pm(t-\tau),x)(c_+\partial_xf_M^+(\tau,0)-c_-\partial_xf_M^-(\tau,0))d\tau.\label{forB}
\end{align}
\end{lemma}
\begin{proof}
By superposition principle, we have $ f^\pm (t,x)=f_M^\pm(t,x)+f_B^\pm(t,x)$, where
$f_M^\pm$ solves the following equation on half space
\begin{align*}
\begin{cases}
&\partial_t f_M^+-c_+\partial_{x}^2f_M^+=F,\ \ \ \text{in}\ (0,T)\times \mathbb{R}^+,\\
&f_M^+(0,x)=f_0(x),\ \ \ x\in\mathbb{R}^+,\\
&f_M^+(t,0)=0,\ \ \ t\in(0,T).
\end{cases}
\begin{cases}
&\partial_t f_M^--c_-\partial_{x}^2f_M^-=F,\ \ \ \text{in}\ (0,T)\times \mathbb{R}^-,\\
&f_M^-(0,x)=f_0(x),\ \ \ x\in\mathbb{R}^-,\\
&f_M^-(t,0)=0,\ \ \ t\in(0,T).
\end{cases}
\end{align*}
And $f_B^\pm$ solves the system
\begin{equation}\label{eqB}
\begin{aligned}
&\partial_t f_B^\pm-c_\pm\partial_{x}^2f_B^\pm=0,\ \ \ \text{in}\ (0,T)\times \mathbb{R}^\pm,\\
&f_B^+(0,x)=0\ \text{in}\ \mathbb{R}^+,\ \ f_B^-(0,x)=0\ \text{in}\ \mathbb{R}^-,\\
&f_B^+(t,0)=f_B^-(t,0),\ \ \ t\in(0,T),\\
& c_+\partial_x f_B^+(t,0)-c_-\partial_x f_B^-(t,0)=-(c_+\partial_x f_M^+(t,0)-c_-\partial_x f_M^-(t,0)),\ \ \ t\in(0,T).
\end{aligned}
\end{equation}
Indeed, integrating by parts we have
\begin{align*}
&\int_0^t \int_{\mathbb{R}^+}(\mathbf{K}(c_+(t-\tau),x-y)-\mathbf{K}(c_+(t-\tau),x+y))F(\tau,y) dyd\tau\\
&=\int_0^t \int_{\mathbb{R}^+}(\mathbf{K}(c_+(t-\tau),x-y)-\mathbf{K}(c_+(t-\tau),x+y))(\partial_t f_M^+-c_+\partial_x ^2 f_M^+)(\tau,y) dyd\tau\\
&=f_M^+(t,x)-\int_{\mathbb{R}^+}(\mathbf{K}(c_+t,x-y)-\mathbf{K}(c_+t,x+y))f_0(y)dy\\
&\quad\quad+\int_0^t \int_{\mathbb{R}^+}\partial_t(\mathbf{K}(c_+(t-\tau),x-y)-\mathbf{K}(c_+(t-\tau),x+y))f_M^+(\tau,y)dyd\tau\\
&\quad\quad-c_+\int_0^t \int_{\mathbb{R}^+}\partial_x^2(\mathbf{K}(c_+(t-\tau),x-y)-\mathbf{K}(c_+(t-\tau),x+y))f_M^+(\tau,y)dyd\tau\\
&=f_M^+(t,x)-\int_{\mathbb{R}^+}(\mathbf{K}(c_+t,x-y)-\mathbf{K}(c_+t,x+y))f_0(y)dy,
\end{align*}
where we use the fact that
$$
\partial_t \mathbf{K}(c_+t,x)-c_+\partial_x^2\mathbf{K}(c_+t,x)=0,\ \ \forall \ t>0,x\in\mathbb{R}.
$$
We also get a similar identity for $f^-_M$. Hence we obtain formula \eqref{forM}. \\
It remains to check that formula \eqref{forB} satisfies equation \eqref{eqB}. For simplicity, we only check the Neumann boundary condition
\begin{align}\label{neubdhh}
c_+\partial_x f_B^+(t,0)-c_-\partial_x f_B^-(t,0)=-(c_+\partial_x f_M^+(t,0)-c_-\partial_x f_M^-(t,0)):=h(t),\ \ \ t\in(0,T).
\end{align}
The other conditions are trivial.
Note that
\begin{equation}\label{neubd}
\begin{aligned}
c_+\partial_x f_B^+(t,0)-c_-\partial_x f_B^-(t,0)=&\frac{2c_+^\frac{3}{2}}{\sqrt{c_+}+\sqrt{c_-}}\lim_{x\to 0^+}\int_0^t\partial_x\mathbf{K}(c_+(t-\tau),x)h(\tau)d\tau\\
&-\frac{2c_-^\frac{3}{2}}{\sqrt{c_+}+\sqrt{c_-}}\lim_{x\to 0^-}\int_0^t\partial_x\mathbf{K}(c_-(t-\tau),x)h(\tau)d\tau.
\end{aligned}
\end{equation}
Note that the heat kernel has scaling $\mathbf{K}(t,x)=t^{-\frac{1}{2}}\mathbf{K}(1,\frac{x}{\sqrt{t}})$.
\begin{align*}
\int_0^t \partial_x \mathbf{K}(c_+(t-\tau),x)h(\tau)d\tau&=-\int_0^t \frac{x}{2c_+\tau}\mathbf{K}(c_+\tau,x)h(t-\tau)d\tau\\
&=-\int_0^t \frac{x}{2(c_+\tau)^\frac{3}{2}}\mathbf{K}(1,\frac{x}{\sqrt{c_+\tau}})h(t-\tau)d\tau.
\end{align*}
Letting $\tilde \tau=\frac{x}{\sqrt{c_+\tau}}$, we obtain that
$$
\int_0^t \partial_x \mathbf{K}(c_+(t-\tau),x)h(\tau)d\tau=c_+^{-1}\int_{\frac{x}{\sqrt{c_+t}}}^\infty\mathbf{K}(1,\tilde \tau)h(t-\frac{x^2}{c_+\tilde\tau^2})d\tilde \tau.
$$
We can write
\begin{align*}
\int_{\tilde t}^\infty\mathbf{K}(1,\tilde \tau)h(t-\frac{x^2}{c_+\tilde\tau^2})d\tilde \tau=\left(\int_{\tilde t^{1/2}}^\infty+\int_{\tilde t}^{\tilde t^\frac{1}{2}}\right)\mathbf{K}(1,\tilde \tau)h(t-\frac{x^2}{c_+\tilde\tau^2})d\tilde \tau,
\end{align*}
where we denote $\tilde t=\frac{x}{\sqrt{c_+t}}$.
We have $\tilde t\to 0$ when $x\to 0$, and the integral on $[\tilde t,\tilde t^\frac{1}{2}]$ tends to 0. Hence,
$$
\lim_{x\to 0^+}\int_0^t\partial_x\mathbf{K}(c_+(t-\tau),x)h(\tau)d\tau=c_+^{-1}\int_0^\infty \mathbf{K}(1,\tilde \tau)d\tilde \tau h(t)=\frac{1}{2c_+}h(t).
$$
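Here we used that, with the standard normalization of the heat kernel assumed throughout (so that $\int_{\mathbb{R}}\mathbf{K}(1,y)dy=1$ and $\mathbf{K}(1,\cdot)$ is even), there holds $\int_0^\infty \mathbf{K}(1,\tilde \tau)d\tilde \tau=\frac{1}{2}$.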
Similarly, one has
\begin{align}\label{neubd2}
\lim_{x\to 0^-}\int_0^t\partial_x\mathbf{K}(c_-(t-\tau),x)h(\tau)d\tau=-\frac{1}{2c_-}h(t).
\end{align}
Then we obtain \eqref{neubdhh} from \eqref{neubd}-\eqref{neubd2}.
This completes the proof.
\end{proof}
\\
Consider the following parabolic equation with variable coefficient,
\begin{equation}\label{conticoeff}
\begin{aligned}
&\partial_t f-\partial_x (\phi(x)f_x)=F,\quad\quad \ \ (t,x)\in (0,T)\times \mathbb{R},\\
& f(0,x)=f_0(x).
\end{aligned}
\end{equation}
\begin{lemma}\label{forcont}
The solution $f$ to \eqref{conticoeff} satisfies
\begin{align*}
f(t,x)=&\int_{\mathbb{R}}\mathbf{H} (t,x-y,x)f_0(y)dy+\int_0^t\int_{\mathbb{R}} \mathbf{H} (t-\tau,x-y,x)F(\tau,y)dyd\tau\\
&-\int_0^t \int_{\mathbb{R}} \partial_1 \mathbf{H}(t-\tau,x-y,x)(\phi(x)-\phi(y)) f_x(\tau,y)dyd\tau,
\end{align*}
where we denote $\mathbf{H} (t,x,y)=\mathbf{K}({ \phi(y)}{t},x)$, $\partial_1\mathbf{H}(t,x,y)=\partial_x\mathbf{H}(t,x,y)$.
\end{lemma}
\begin{proof}
Integrating by parts, we obtain
\begin{align*}
&\int_0^t\int_{\mathbb{R}} \mathbf{H} (t-\tau,x-y,x)F(\tau,y)dyd\tau\\
&= \int_0^t\int_{\mathbb{R}} \mathbf{H} (t-\tau,x-y,x)(\partial_t f-\partial_x(\phi f_x))(\tau,y)dyd\tau\\
&=f(t,x)-\int_{\mathbb{R}} \mathbf{H}(t,x-y,x)f_0(y) dy +\int_0^t \int_{\mathbb{R}} \partial_t \mathbf{H} (t-\tau,x-y,x) f(\tau,y)dyd\tau\\
&\ \ \ +\int_0^t \int_{\mathbb{R}} \partial_1 \mathbf{H}(t-\tau,x-y,x)(\phi(x)-\phi(y)) f_x(\tau,y)dyd\tau\\
&\ \ \ -\phi(x)\int_0^t \int_{\mathbb{R}} \partial_1^2 \mathbf{H} (t-\tau,x-y,x) f(\tau,y)dyd\tau.
\end{align*}
It is easy to check that
$$
\partial_t \mathbf{H} (t-\tau,x-y,x)-\phi(x)\partial_1^2 \mathbf{H} (t-\tau,x-y,x)=0.
$$
Then we obtain the result.
\end{proof}
\\
\textbf{Acknowledgements:}
Q.H.N. is supported by the Academy of Mathematics and Systems Science, Chinese Academy of Sciences startup fund, and the National Natural Science Foundation of China (No. 12050410257 and No. 12288201) and the National Key R$\&$D Program of China under grant 2021YFA1000800.
\end{document}
\begin{document}
\theoremstyle{plain}
\newtheorem{thm}{Theorem}[section]
\newtheorem{lemma}[thm]{Lemma}
\newtheorem{prop}[thm]{Proposition}
\newtheorem{cor}[thm]{Corollary}
\newtheorem{open}[thm]{Open Problem}
\theoremstyle{definition}
\newtheorem{defn}{Definition}
\newtheorem{asmp}{Assumption}
\newtheorem{notn}{Notation}
\newtheorem{prb}{Problem}
\theoremstyle{remark}
\newtheorem{rmk}{Remark}
\newtheorem{exm}{Example}
\newtheorem{clm}{Claim}
\title[TCI for Reflected Diffusions]{A Note on Transportation Cost Inequalities\\ for Diffusions with Reflections}
\author{Soumik Pal}
\address{Department of Mathematics, University of Washington, Seattle}
\email{[email protected]}
\author{Andrey Sarantsev}
\address{Department of Mathematics and Statistics, University of Nevada, Reno}
\email{[email protected]}
\date{\today}
\keywords{Reflected Brownian motion, Wasserstein distance, relative entropy, transportation cost-information inequality, concentration of measure, competing Brownian particles}
\subjclass[2010]{82C22, 60H10, 60J60, 60K35, 91G10}
\thanks{S.P. was supported by NSF grant DMS 1612483 and A.S. was supported in part by the NSF grant DMS 1409434.}
\begin{abstract}
We prove that reflected Brownian motion with normal reflections in a convex domain satisfies a dimension free Talagrand type transportation cost-information inequality. The result is generalized to other reflected diffusion processes with suitable drift and diffusion coefficients. We apply this to get such an inequality for interacting Brownian particles with rank-based drift and diffusion coefficients such as the infinite Atlas model. This is an improvement over earlier dimension-dependent results.
\end{abstract}
\maketitle
\thispagestyle{empty}
\section{Introduction and Main Results}
Consider a metric space $(E, \rho)$. Fix $p \in [1, \infty)$. For any pair of Borel probability measures $\mathbb{P}$ and $\mathbb{Q}$ on $E$, define the Wasserstein distance of order $p$ as
$$
W_p(\mathbb{P}, \mathbb{Q}) := \inf\limits_{\pi\in \Pi}\left[\iint\rho^p(x, y)\mathrm{d}\pi(x, y)\right]^{1/p},
$$
where the $\inf$ is taken over the set $\Pi$ of all {\it couplings} of $\mathbb{P}$ and $\mathbb{Q}$ (i.e., measures on $E\times E$ with marginal distributions $\mathbb{P}$ and $\mathbb{Q}$). Here and throughout, when we write $\mu(f)$, where $\mu$ is a probability measure and $f$ is a $\mu$-integrable function, we mean the expectation of $f$ with respect to $\mu$. The {\it relative entropy} $\mathcal H(\mathbb{Q}\mid\mathbb{P})$ of $\mathbb{Q}$ with respect to $\mathbb{P}$ is defined as
$$
\mathcal H(\mathbb{Q}\mid\mathbb{P}) :=
\mathbb{Q}\left[\log\frac{\mathrm{d}\mathbb{Q}}{\mathrm{d}\mathbb{P}}\right] = \mathbb{P}\left[\frac{\mathrm{d}\mathbb{Q}}{\mathrm{d}\mathbb{P}}\log\left(\frac{\mathrm{d}\mathbb{Q}}{\mathrm{d}\mathbb{P}}\right)\right],\quad \mbox{if}\; \mathbb{Q} \ll \mathbb{P},
$$
and $\mathcal H(\mathbb{Q}\mid\mathbb{P}) = +\infty$ otherwise.
\begin{defn} A Borel probability measure $\mathbb{P}$ satisfies the {\it transportation-cost information (TCI) inequality of order} $p$ with constant $C > 0$ (we write: $\mathbb{P} \in T_p(C)$) if for every Borel probability measure $\mathbb{Q}$ on $E$ we have:
\begin{equation}
\label{eq:TCI}
W_p(\mathbb{P}, \mathbb{Q}) \le \sqrt{2C\mathcal H(\mathbb{Q}\mid\mathbb{P})}.
\end{equation}
\end{defn}
The TCI is one of a large family of inequalities linking transportation cost, relative entropy, and Fisher information. It is impossible to do justice to the enormous literature and its many uses. We refer the reader to an excellent survey by Gozlan and
L\'eonard \cite{Gozlansurvey} and the recent book \cite{NewBook} by Boucheron et al. Talagrand studied concentration for product spaces in \cite{Talagrand3, Talagrand5}. His idea is that a function of many variables which is Lipschitz in any variable but does not depend much on any single variable is close to a constant. A particularly useful application of TCI inequalities, such as above (for $p\ge 1$), is to prove Talagrand type Gaussian concentration. See the original article by Talagrand \cite{Talagrand5}, as well as Marton's derivation by using TCI inequality in \cite{Marton}. See also \cite{Bobkov1, C3, Otto} on relation between TCI and log-Sobolev inequalities.
In \cite{Talagrand5}, Talagrand proved that a standard Gaussian measure on $\mathbb{R}^d$ satisfies $T_2(C)$ with $C = 1$. Afterwards, TCI inequalities were established for discrete-time Markov chains, \cite{Marton1, Paulin, Samson}; for discrete-time stationary processes, \cite{Marton2}; for stochastic ordinary differential equations driven by Brownian motion \cite{LogConcave, Djellout, Pal, Ustunel} and by more general noise \cite{MoroccoFBM, Saussereau, Riedel}; for stochastic partial differential equations, \cite{Morocco, Davar, RDE}, and for neutral stochastic equations (which depend on past history) \cite{Neutral, Delay, MoroccoFBM}. Applications include model selection in statistics \cite{Model}, risk theory \cite{Lacker}, order statistics \cite{Order}, information theory \cite{Bobkov3, Sason}, and randomized algorithms \cite{Algorithm}.
In this paper, $\mathbb{P}$ and $\mathbb{Q}$ will represent laws of reflecting diffusion processes seen as probability measures on the set of continuous paths equipped with the uniform norm. Specifically we prove TCI inequalities for a certain class of interacting Brownian particle systems, called {\it competing Brownian particles}, with each particle moving as a Brownian motion on the real line with drift and diffusion coefficients dependent on the current rank of this particle relative to other particles. These systems were constructed in \cite{BFK} as a model for financial markets; see also \cite{CP2010, FernholzBook}. Our inequalities are {\it dimension-free:} that is, the constant $C$ is independent of the number of particles. This allows us to extend the inequality to infinite competing particle systems such as the infinite Atlas model \cite{CDSS, DemboTsai, KolliShkol, PalPitman}. This is an improvement over the dimension-dependent inequalities in papers \cite{Pal, PalShkolnikov} where applications of such inequalities can be found. See also \cite{IPS2013, JM2008} on Poincar\'e inequalities for competing Brownian particles, and \cite{4people} on large deviations for these particle systems.
The result for competing particles is a particular case of a general TCI inequality for normally reflected diffusion processes in convex domains. Reflected diffusions are defined as continuous-time stochastic processes in a certain domain $D \subseteq \mathbb{R}^d$. As long as such process is in the interior, it behaves as a solution of a stochastic differential equation (SDE). As it hits the boundary, it is reflected back inside the domain. The simplest case is a {\it reflected Brownian motion}, which behaves as a Brownian motion inside the domain.
Dimension-free TCI inequalities are remarkable. Most known examples are in the case of product measures, which utilize the tensorization property of the entropy and the cost. Our examples are far from product measures since they involve motion of particles interacting with one another. Hence, dimension-free TCI inequalities in this context seem interesting. The proof, however, does not require much beyond existing machinery. Our novel contribution is essentially a single observation made in \eqref{eq:nonincreasing-1}.
\subsection{Notation} We denote by $a\cdot b = a_1b_1 + \ldots + a_Nb_N$ the dot product of vectors $a, b \in \mathbb{R}^N$. The Euclidean norm of a vector $a$ is denoted by $\norm{a} := \left[a\cdot a\right]^{1/2}$. The matrix norm of a matrix $A$ is defined as $\norm{A} := \max_{\norm{x} = 1}\norm{Ax}$. We let $\mathbb{R}_+ := [0, \infty)$. We denote the space $C\left([0, T], \mathbb{R}^N\right)$ of continuous functions $[0, T] \to \mathbb{R}^N$ with the sup-norm $\norm{x} := \sup_{0 \le t \le T}\norm{x(t)}$. We prove TCI inequality where the Wasserstein-$2$ transportation cost is measured in this norm.
\subsection{TCI inequalities for competing Brownian particles} Fix an integer $N \ge 2$. For any vector $x = (x_1, \ldots, x_N) \in \mathbb{R}^N$, there exists a unique {\it ranking permutation}: a one-to-one mapping $\mathbf{p}_x : \{1, \ldots, N\} \to \{1, \ldots, N\}$, with the following properties:
\begin{enumerate}[(a)]
\item $x_{\mathbf{p}_x(i)} \le x_{\mathbf{p}_x(j)}$ for $1 \le i < j \le N$;
\label{defn:ranks}
\item if $x_{\mathbf{p}_x(i)} = x_{\mathbf{p}_x(j)}$ for $1 \le i < j \le N$, then $\mathbf{p}_x(i) < \mathbf{p}_x(j)$.
\label{defn:ranks2}
\end{enumerate}
That is, $\mathbf{p}_x$ arranges the coordinates of $x$ in increasing order, with ties broken by the increasing order of the index (or, \textit{{name}}) of the coordinates that are tied.
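For instance, for $N=3$ and $x=(0.7,\,-1,\,0.7)$ we have $\mathbf{p}_x(1)=2$, $\mathbf{p}_x(2)=1$, $\mathbf{p}_x(3)=3$: the tie between the first and third coordinates is broken in favor of the smaller name.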
Take a filtered probability space $(\Omega, \mathcal F, (\mathcal F_t)_{t \ge 0}, \mathbb{P})$, with the filtration satisfying the usual conditions and supporting an $N$-dimensional Brownian motion $W = (W_1, \ldots, W_N)$. Fix constants $g_1, \ldots, g_N \in \mathbb{R}$ and $\sigma_1, \ldots, \sigma_N > 0$.
\begin{defn}
Consider a continuous adapted process $X(t) = (X_1(t), \ldots, X_N(t)),\ t \ge 0$. Let $\mathbf{p}_t = \mathbf{p}_{X(t)}$. We say that $\mathbf{p}_t^{-1}(i)$ is the {\it rank} of particle $i$ at time $t$, and $\mathbf{p}_t(k)$ is the {\it name} of the $k$th ranked particle at time $t$. Then the following system of SDE:
\begin{equation}
\label{eq:mainSDE}
\mathrm{d} X_i(t) = \sum\limits_{k=1}^N1(\mathbf{p}_t(k) = i)\left(g_k\mathrm{d} t + \sigma_k\mathrm{d} W_i(t)\right),
\end{equation}
for $i = 1, \ldots, N$, defines a finite system of $N$ competing Brownian particles with drift coefficients $g_1, \ldots, g_N$ and diffusion coefficients $\sigma_1^2, \ldots, \sigma_N^2$. Let $Y_k(t) := X_{(k)}(t) := X_{\mathbf{p}_t(k)}(t)$ be the position of the $k$th ranked particle, and let $Z_k(t) := Y_{k+1}(t) - Y_k(t)$ be the gap between the $k$th and $(k+1)$st ranked particles. The local time $L_{(k, k+1)} = (L_{(k, k+1)}(t), t \ge 0)$ of collision between $k$th and $k+1$st ranked particles is defined as the local time of the continuous semimartingale $Z_k$ at zero. The process
$L(t) = \left(L_{(1, 2)}(t), \ldots, L_{(N-1, N)}(t)\right)$ is called the vector of local times.
\label{defn:CBP}
\end{defn}
From \cite{Bass1987}, this system exists in the weak sense and is unique in law. Strong existence and pathwise uniqueness are proved under the following assumptions, \cite[Theorem 2]{IKS2013}, \cite[Theorem 1.4]{MyOwn3}.
\begin{equation}
\label{eq:convex-diffusion}
\sigma^2_n \ge \frac12\left(\sigma^2_{n-1} + \sigma^2_{n+1}\right),\, \mbox{if}\;
1 < n < N.
\end{equation}
Similar infinite systems can be defined for $N = \infty$; then we assume that the vector $X(t) = (X_i(t))_{i \ge 1}$ is {\it rankable}; that is, for every $t\ge 0$ there exists a unique permutation $\mathbf{p}_{X(t)}$ of $\mathbb{N} := \{1, 2, \ldots\}$ which satisfies conditions (\ref{defn:ranks}) and (\ref{defn:ranks2}). They were introduced in \cite{PalPitman}. See \cite[Theorem 3.1]{MyOwn6} for weak existence and uniqueness in law under an assumption on initial conditions,
\begin{equation}
\label{eq:initial-asmp}
\lim\limits_{n \to \infty}X_n(0) \to \infty\quad \mbox{a.s., and}\quad \sum\limits_{n=1}^{\infty}e^{-\alpha X_n^2(0)} < \infty\quad \mbox{for all}\quad \alpha > 0,
\end{equation}
and the following assumptions on drift and diffusion coefficients:
\begin{equation}
\label{eq:coeff-asmp}
g_{n} = g_{n_0}\quad \mbox{and}\quad \sigma_n = \sigma_{n_0}\quad \mbox{for all}\quad n \ge n_0.
\end{equation}
See \cite[Theorems 1, 2]{IKS2013} and \cite[Theorem 5.1, Remark 8]{MyOwn6} for strong existence and pathwise uniqueness: We need~\eqref{eq:convex-diffusion} in addition to~\eqref{eq:initial-asmp} and~\eqref{eq:coeff-asmp}. {\it Two-sided infinite systems}, indexed by $i \in \mathbb{Z}$, were introduced in \cite{MyOwn11}. The proofs and the results from this paper carry over to that set-up as well.
\begin{thm} (a) For an $N \in \mathbb{N} \cup \{\infty\}$ assume that the drift and diffusion coefficients satisfy the following conditions: $g_1 \ge g_2 \ge \ldots$, and $\sigma_1 = \sigma_2 = \ldots = 1$. For the case of an infinite system, assume in addition~\eqref{eq:initial-asmp} and~\eqref{eq:coeff-asmp}. Then for every finite $k \le N$, the distribution of $X = (X_1, \ldots, X_k)$ on $C([0, T], \mathbb{R}^k)$ satisfies $T_2(C)$ with $C = T$.
\label{thm:CBP}
(b) Assume weak existence and uniqueness in law. For an $N \in \mathbb{N}\cup\{\infty\}$, and a finite $k \le N$, the vector of ranked particles $Y = (Y_1, \ldots, Y_k)$ satisfies $T_2(C)$ on $C([0, T], \mathbb{R}^k)$ with $C = T\sup\limits_{m \ge 1}\sigma_m^2$.
\end{thm}
Theorem~\ref{thm:CBP} (a) follows from results from \cite{LogConcave}. The more non-trivial Theorem~\ref{thm:CBP} (b) is based on Theorem~\ref{thm:main} below, which is the main result of this paper. This is a general result that says that normally reflected Brownian motion in a convex domain satisfies a dimension-free TCI inequality as described below. It turns out that the vector of ranked particles $Y$ is a particular case of such normally reflected Brownian motion in a wedge $\{y = (y_1, \ldots, y_N)\mid y_1 \le \ldots \le y_N\}$.
Fix $d \ge 2$, the dimension. In this article a {\it domain} in $\mathbb{R}^d$ is the closure of an open connected subset. We consider only convex domains. Following \cite{Tanaka1979}, we do not impose any additional smoothness conditions on such a domain. For every $x \in \partial D$, we say that a unit vector $y \in \mathbb{R}^d$ is an {\it inward unit normal vector} at point $x$, if
\begin{equation}
\label{eq:inward-normal}
z \in D\quad \mbox{implies}\quad (z - x)\cdot y \ge 0.
\end{equation}
The set of such inward unit normal vectors at $x$ is denoted as $\mathcal N(x)$. The most elementary example of this is a $C^1$ domain $D$; that is, with boundary $\partial D$ which can be locally (after a rotation) parametrized as a graph of a $C^1$ function. Then there exists a unique inward unit normal vector $\mathbf{n}(x)$ at every point $x \in \partial D$, and $\mathcal N(x) = \{\mathbf{n}(x)\}$.
A more complicated example is a {\it convex piecewise smooth domain}. Fix $m \ge 1$, the number of faces. Take $m$ domains $D_1, \ldots, D_m$ in $\mathbb{R}^d$ which are $C^1$ and convex. Let $D = \bigcap_{i=1}^mD_i$. Assume $D \ne \cap_{j \ne i}D_j$ for every $i = 1, \ldots, m$; that is, each one of $m$ smooth domains is essential. Assume also that for each $i = 1, \ldots, m$, $F_i := \partial D\cap\partial D_i$ is a manifold of codimension $1$ and has nonempty relative interior. Then $D$ is called a {\it convex piecewise smooth domain} with $m$ {\it faces} $F_1, \ldots, F_m$, and $\partial D = \cup_{i=1}^mF_i$. For every $x \in F_i$, define the {\it inward unit normal vector} $\mathbf{n}_i(x)$ to $\partial D_i$ at this point $x$, pointing inside $D_i$. For a point $x \in \partial D$ on the boundary, if $I(x) = \{i = 1, \ldots, m\mid x \in F_i\}$, then
$$
\mathcal N(x) = \Bigl\{\sum\limits_{i \in I}\alpha_i\mathbf{n}_i(x)\mid \alpha_i \ge 0,\quad i \in I;\quad \sum\limits_{i \in I}\alpha_i^2 = 1\Bigr\}.
$$
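For example, the wedge $D=\{y\in\mathbb{R}^N\mid y_1\le\ldots\le y_N\}$ appearing in the proof of Theorem~\ref{thm:CBP} below is a convex piecewise smooth domain with $m=N-1$ faces $F_k=\partial D\cap\{y_k=y_{k+1}\}$, and the inward unit normal vector on $F_k$ is $\mathbf{n}_k=\frac{1}{\sqrt{2}}\left(e_{k+1}-e_k\right)$, where $e_1,\ldots,e_N$ denotes the standard basis of $\mathbb{R}^N$.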
\begin{defn} For a vector field $g : \mathbb{R}_+\times D \to \mathbb{R}^d$ and a $z_0 \in D$, consider the following equation:
\begin{equation}
\label{eq:SDE-main}
Z(t) = z_0 + W(t) + \int_0^tg(s, Z(s))\,\mathrm{d} s + \int_0^t\mathbf{n}(s)\,\mathrm{d} \ell(s),\ \ t \ge 0.
\end{equation}
Here, $Z : \mathbb{R}_+ \to D$ is a continuous adapted process, $W$ is a $d$-dimensional Brownian motion with zero drift vector and constant, symmetric, positive definite $d\times d$ covariance matrix $A$, starting from the origin. For every $t \ge 0$, $\mathbf{n}(t) \in \mathbb{R}^d$ is a unit vector, and $\mathbf{n} : \mathbb{R}_+ \to \mathbb{R}^d$ is a measurable function. The function $\ell : \mathbb{R}_+ \to \mathbb{R}$ is continuous, nondecreasing, and can increase only when $Z(t) \in \partial D$; for such $t$, $\mathbf{n}(t) \in \mathcal N(Z(t))$. The solution $Z$ of \eqref{eq:SDE-main} is called a {\it reflected diffusion} in $D$, with {\it drift} $g$ and (constant) {\it diffusion matrix} $A$, starting from $z_0$. If $g$ is a constant (does not depend on $x$ and $t$), then we call $Z$ a {\it reflected Brownian motion} (RBM) in $D$ with {\it drift} $g$ and {\it diffusion matrix} $A$.
\end{defn}
\begin{asmp} For an integrable function $F : [0, T] \to \mathbb{R}$, we have:
\begin{equation}
\label{eq:monotone-F}
\left(g(t, x) - g(t, y)\right)\cdot(x - y) \le \norm{x-y}^2F(t),\ t \in [0, T],\ x, y \in D,
\end{equation}
\label{asmp:main}
\end{asmp}
\begin{rmk} Under Assumption~\ref{asmp:main}, the equation~\eqref{eq:SDE-main} has a pathwise unique strong solution on time horizon $[0, T]$. This is proved similarly to \cite[Theorem 4.1]{Tanaka1979}.
\label{rmk:strong-existence}
\end{rmk}
We start by proving that, under Assumption \ref{asmp:main}, the reflected diffusion satisfies a dimension-free $T_2(C)$. Our proof follows existing ideas in \cite{Pal, LogConcave} for non-reflected diffusions with one notable observation to handle the reflection. Consider an SDE in $\mathbb{R}^d$ without reflection $\mathrm{d} X(t) = \mathrm{d} W(t) + g(t, X(t))\,\mathrm{d} t$, for some drift vector field $g$ defined on $[0, T]\times \mathbb{R}^d$, which satisfies {\it contraction condition:}
\begin{equation}
\label{eq:contraction}
(g(t, x) - g(t, y)) \cdot (x - y) \le 0,\quad \mbox{for all}\quad x, y \in \mathbb{R}^d,\, t \in [0, T].
\end{equation}
It is shown in \cite{LogConcave} that under condition~\eqref{eq:contraction}, the distribution of $X$ in the space $C([0, T], \mathbb{R}^d)$ satisfies $T_2(C)$ with $C = T$. Our main observation in this article is that for a reflected diffusion in a convex domain $D$, the reflection term $\mathbf{n}(t)\,\mathrm{d}\ell(t)$ plays the role of such drift.
For the next result, take a convex domain $D$, fix time horizon $T > 0$, and let $\mathbb{P}$ denote the law of the reflected diffusion in $C([0, T], \mathbb{R}^d)$ with drift vector field $g$, starting from $x \in D$.
\begin{thm} Under Assumption~\ref{asmp:main}, $\mathbb{P} \in T_2(C)$, with the constant $C$ given by
\begin{equation}
\label{eq:constant-minus}
C := \norm{A}\sup\limits_{0 \le t \le T}\int_0^t\exp\left(2\int_s^tF(u)\,\mathrm{d} u\right)\,\mathrm{d} s.
\end{equation}
\label{thm:main}
\end{thm}
If $F(t) = \gamma$ is a constant, then we can calculate
$$
C=
\begin{cases}
\norm{A}\frac{e^{2\gamma T} - 1}{2\gamma},\quad \gamma \neq 0;\\
\norm{A}T, \quad \gamma=0.
\end{cases}
$$
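Indeed, for $F\equiv\gamma$ we have $\int_0^t\exp\left(2\int_s^tF(u)\,\mathrm{d} u\right)\mathrm{d} s=\int_0^te^{2\gamma(t-s)}\,\mathrm{d} s=\frac{e^{2\gamma t}-1}{2\gamma}$ (and $=t$ when $\gamma=0$), which is nondecreasing in $t$, so the supremum in~\eqref{eq:constant-minus} is attained at $t=T$.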
This gives us the following corollary.
\begin{cor} The law of an RBM in a convex domain $D$ with constant drift and constant diffusion matrix $A$ satisfies $T_2(C)$ on $C([0, T], \mathbb{R}^d)$ with $C = T\norm{A}$.
\label{cor:RBM}
\end{cor}
\section{Proofs}
\begin{proof}[Proof of Theorem~\ref{thm:main}]
The established method for proving TCI inequalities for diffusions uses the Girsanov theorem. We explain the main idea behind this line of argument. More details can be found in \cite{Djellout, Pal, Ustunel}. We assume for simplicity that $A = I_d$. Later in the proof, we shall explain what to do for general $A$. Take a filtered probability space $(\Omega, \mathcal F, (\mathcal F_t)_{0 \le t \le T}, \mathbf R)$, with the filtration satisfying the usual conditions and generated by a $d$-dimensional Brownian motion $W = (W(t),\, 0 \le t \le T)$. By Assumption~\ref{asmp:main}, on this space we can construct a solution $X$ to the equation~\eqref{eq:SDE-main} driven by the Brownian motion $W$. We view $X$ as a random element of the space $C([0, T], \mathbb{R}^d)$ with law $\mathbb{P}$. On $C([0, T], \mathbb{R}^d)$, take any probability measure $\mathbb{Q} \ll \mathbb{P}$. Let $\overline{\mathbf R} \ll \mathbf R$ be another probability measure on the space $(\Omega, \mathcal F)$, defined through its Radon-Nikodym derivative:
\begin{equation}
\label{eq:fundamental-change}
\frac{\mathrm{d}\overline{\mathbf R}}{\mathrm{d}\mathbf R} := \frac{\mathrm{d}\mathbb{Q}}{\mathrm{d}\mathbb{P}}(X).
\end{equation}
The next lemma is taken from \cite[Proof of Theorem 5.6]{Djellout}. See also related papers \cite{Feyel-Ustunel, Ustunel}.
\begin{lemma} There exists an $\left(\mathcal F_t\right)$-adapted process $\gamma = (\gamma_t,\ 0 \le t \le T)$ such that, $\overline{\mathbf R}$-almost surely $\int_0^T\norm{\gamma_t}^2\mathrm{d} t < \infty$, and the following process is a standard $d$-dimensional $\left((\mathcal F_t), \overline{\mathbf R}\right)$-Brownian motion:
\[
B(t) = W(t) - \int_0^t\gamma_s\mathrm{d} s,\quad 0 \le t \le T.
\]
Moreover,
\begin{equation}
\label{eq:entropy-derivation}
\mathcal H(\overline{\mathbf R}\mid\mathbf R) = \frac{1}{2} \overline{\mathbf{R}}\left[ \int_0^T\norm{\gamma_t}^2\mathrm{d} t\right].
\end{equation}
\label{lemma:change-of-measure}
\end{lemma}
From~\eqref{eq:fundamental-change}, the relative entropy of $\mathbb{Q}$ with respect to $\mathbb{P}$ is given by the same formula~\eqref{eq:entropy-derivation}:
\begin{equation}
\label{eq:entropy}
\mathcal H(\mathbb{Q}\mid\mathbb{P}) = \mathcal H(\overline{\mathbf R}\mid\mathbf R) = \frac{1}{2} \overline{\mathbf{R}}\left[ \int_0^T\norm{\gamma_t}^2\mathrm{d} t\right].
\end{equation}
The law of $X$ on the probability space
\begin{equation}
\label{eq:new-probability-space}
\left(\Omega, \mathcal F, (\mathcal F_t)_{0 \le t \le T}, \overline{\mathbf R}\right)
\end{equation}
is $\mathbb{Q}$ instead of $\mathbb{P}$. Let $X'$ be the solution of~\eqref{eq:SDE-main} on the space~\eqref{eq:new-probability-space} with Brownian motion $B$ instead of $W$. This solution exists and is unique by Remark~\ref{rmk:strong-existence}. Then $X'$ has law $\mathbb{P}$. Hence, on that probability space~\eqref{eq:new-probability-space} we now have two processes $(X,X')$ such that $X'\sim \mathbb{P}$ and $X \sim \mathbb{Q}$, together with continuous adapted nondecreasing processes $\ell = (\ell(t),\, 0 \le t \le T)$ and $\ell' = (\ell'(t),\ 0 \le t \le T)$, starting from $0$, such that $\ell$ can increase only when $X \in \partial D$; similarly for $\ell'$, and
\begin{eqnarray}
X(t) &=& x + \int_0^tg(s, X(s))\,\mathrm{d} s + B(t) + \int_0^t\gamma_s\,\mathrm{d} s + \int_0^t\mathbf{n}(s)\,\mathrm{d}\ell(s),\label{eq:X}\\
X'(t) &=& x + \int_0^tg(s, X'(s))\,\mathrm{d} s + B(t) + \int_0^t\mathbf{n}'(s)\,\mathrm{d}\ell'(s). \label{eq:X'}
\end{eqnarray}
From~\eqref{eq:X} and~\eqref{eq:X'}, we get:
\begin{align}
\label{eq:X-X-diff}
\begin{split}
X(t) - X'(t) = \int_0^t\left[g(s, X(s)) - g(s, X'(s)) + \gamma_s\right]\mathrm{d} s +\int_0^t\left[\mathbf{n}(s)\,\mathrm{d}\ell(s) - \mathbf{n}'(s)\,\mathrm{d}\ell'(s)\right].
\end{split}
\end{align}
We claim
\begin{equation}
\label{eq:goal}
\overline{\mathbf{R}}\left[\norm{X-X'}^2\right] \le C\cdot\overline{\mathbf{R}}\Bigl[\int_0^T\norm{\gamma_t}^2\,\mathrm{d} t\Bigr] = 2C\mathcal H(\mathbb{Q}\mid\mathbb{P}).
\end{equation}
Since $(X',X)$ is a coupling of $(\mathbb{P}, \mathbb{Q})$,~\eqref{eq:goal} gives an upper bound on the $W_2$-distance, and hence $T_2(C)$. For general (constant) diffusion $A$, let $A^{1/2}$ refer to its positive definite square root. Write $A^{1/2}\gamma_s$ instead of $\gamma_s$ in~\eqref{eq:X},~\eqref{eq:X-X-diff}, and subsequent places; then observe that $\norm{A^{1/2}\gamma_s}^2 \le \norm{A}\norm{\gamma_s}^2$.
To prove~\eqref{eq:goal}, define
\begin{equation}
\label{eq:norm-Y}
Y(t) := \norm{X(t) - X'(t)}\,;\quad \mbox{then}\quad Y^2(t) = (X(t) - X'(t))\cdot(X(t) - X'(t)).
\end{equation}
Since $X-X'$ is continuous and of finite variation, the same can be said of $Y$. Thus we can apply the classic chain rule ({\it not} It\^o's formula) to the process $Y^2(\cdot)$:
\begin{equation}
\label{eq:square}
\mathrm{d} Y^2(t) = 2(X(t) - X'(t))\cdot\mathrm{d}(X(t) - X'(t)) = 2Y(t)\,\mathrm{d} Y(t).
\end{equation}
Combining~\eqref{eq:X-X-diff} and~\eqref{eq:square}, we get:
\begin{align}
\label{eq:Ito-phi}
\begin{split}
\mathrm{d} Y^2(t) & = 2(X(t) - X'(t))\cdot \left[g(t, X(t)) - g(t, X'(t))\right]\,\mathrm{d} t + 2(X(t) - X'(t))\cdot \gamma_t\,\mathrm{d} t \\ & + 2\mathbf{n}(t)\cdot(X(t) - X'(t))\,\mathrm{d}\ell(t) - 2\mathbf{n}'(t)\cdot(X(t) - X'(t))\,\mathrm{d}\ell'(t).
\end{split}
\end{align}
The next remark is on differentials of continuous functions with bounded variation.
\begin{rmk}
For two continuous functions $f_1, f_2 : [0, T] \to \mathbb R$ of bounded variation, we write $\mathrm{d} f_1(t) \le \mathrm{d} f_2(t)$ for all $t$ in a subinterval $I \subseteq [0, T]$, if $f_1(t) - f_1(s) \le f_2(t) - f_2(s)$ for all $s, t \in I,\, s < t$. This is equivalent to the following condition: for the signed measures $\mu_1$ and $\mu_2$ on $[0, T]$ defined by $\mu_i[0, t] = f_i(t) - f_i(0),\, t \in [0, T],\, i = 1, 2$, the measure $\mu_2-\mu_1$ is nonnegative on $[0,T]$; that is, $\mu_1(B) \le \mu_2(B)$ for any Borel set $B \subseteq [0, T]$.
For continuous functions $f_1, f_2 : [0, T] \to \mathbb R$ of bounded variation, and for a continuous function $g : [0, T] \to [0, \infty)$, if $\mathrm{d}f_1(t) \le \mathrm{d}f_2(t)$, then $\mathrm{d}F_1(t) \le \mathrm{d}F_2(t)$, where $F_1, F_2$ are defined as follows:
$$
F_i(t) = \int_0^tg(s)\,\mathrm{d}f_i(s),\ i = 1, 2.
$$
We can write this as $g(t)\,\mathrm{d}f_1(t) \le g(t)\,\mathrm{d}f_2(t)$.
\label{rmk:rules}
\end{rmk}
Now comes the crucial observation:
\begin{equation}
\label{eq:nonincreasing-1}
\mathbf{n}(t)\cdot(X(t) - X'(t))\,\mathrm{d}\ell(t) \le 0,\quad \text{for all}\; t \ge 0.
\end{equation}
Indeed, $\ell(t)$ can grow only when $X(t) \in \partial D$, and in this case $X'(t) \in D$, and therefore $\mathbf{n}(t)\cdot(X(t) - X'(t)) \le 0$ from~\eqref{eq:inward-normal}. Combine this with $\mathrm{d}\ell(t) \ge 0$ and get~\eqref{eq:nonincreasing-1}. Similarly,
$\mathbf{n}'(t)\cdot(X(t) - X'(t))\,\mathrm{d}\ell'(t) \ge 0$.
Also from~\eqref{eq:monotone-F}, we get that
\begin{equation}
\label{eq:monotone}
(g(t, X(t)) - g(t, X'(t)))\cdot(X(t) - X'(t)) \le F(t)\norm{X(t) - X'(t)}^2 = F(t)Y^2(t).
\end{equation}
Thus, from~\eqref{eq:Ito-phi}, we get:
\begin{equation}
\label{eq:basic-inequality}
\mathrm{d} Y^2(t) \le \left(2F(t)\,Y^2(t) + 2 (X(t) - X'(t))\cdot\gamma_t\right)\mathrm{d} t \le \left(2F(t)Y^2(t) + 2Y(t)\norm{\gamma_t}\right)\mathrm{d} t.
\end{equation}
Using~\eqref{eq:square}, we rewrite~\eqref{eq:basic-inequality} as
\begin{equation}
\label{eq:point-of-choice}
2Y(t)\,\mathrm{d} Y(t) \le 2Y(t)\left(F(t)Y(t) + \norm{\gamma_t}\right)\,\mathrm{d} t.
\end{equation}
Now, we claim that for $t \in [0, T]$,
\begin{equation}
\label{eq:ineq-Y}
Y(t) \le \int_0^t\norm{\gamma_s}\exp\left(\int_s^tF(u)\mathrm{d} u\right)\,\mathrm{d} s.
\end{equation}
For every $t \in [0, T]$, either $Y(t) = 0$, and then~\eqref{eq:ineq-Y} is immediate, or $Y(t) > 0$. In this second case, we prove~\eqref{eq:ineq-Y} as follows. Since the function $Y$ is continuous, the set $I := \{t \in (0, T)\mid Y(t) > 0\}$ is open, therefore is a countable union of disjoint open intervals. On each such interval $(\alpha_1, \alpha_2)$, $Y(t)>0$. According to Remark~\ref{rmk:rules}, we can multiply~\eqref{eq:point-of-choice} by $Y^{-1}(t) > 0$:
\begin{equation}\label{eq:ptchoice2}
\,\mathrm{d} Y(t) \le F(t)Y(t)\,\mathrm{d} t + \norm{\gamma_t}\,\mathrm{d} t.
\end{equation}
We can rewrite \eqref{eq:ptchoice2} as
$
\mathrm{d} Y(t) - F(t)Y(t)\,\mathrm{d} t \le \norm{\gamma_t}\,\mathrm{d} t.
$
Multiplying by an integrating factor, we get:
$$
\mathrm{d} \left(Y(t)\exp\left(-\int_{\alpha_1}^tF(s)\mathrm{d} s\right)\right) \le \norm{\gamma_t}\exp\left(-\int_{\alpha_1}^tF(s)\mathrm{d} s\right)\,\mathrm{d} t.
$$
Integrating with respect to $t$ over $[\alpha_1, t]$ and using the fact that $Y(\alpha_1) = 0$, we get:
$$
Y(t)\exp\left(-\int_{\alpha_1}^tF(s)\mathrm{d} s\right) \le \int_{\alpha_1}^t\norm{\gamma_s}\exp\left(-\int_{\alpha_1}^sF(u)\mathrm{d} u\right)\,\mathrm{d} s.
$$
Thus, for $t \in (\alpha_1, \alpha_2)$,
\begin{equation}
\begin{split}
Y(t)&\exp\left(-\int_0^tF(s)\mathrm{d} s\right) = \exp\left(-\int_0^{\alpha_1}F(s)\mathrm{d} s\right) Y(t)\exp\left(-\int_{\alpha_1}^tF(s)\mathrm{d} s\right)\\
& \le \int_{\alpha_1}^t\norm{\gamma_s}\exp\left(-\int_0^sF(u)\mathrm{d} u\right)\mathrm{d} s \le \int_0^t\norm{\gamma_s}\exp\left(-\int_0^sF(u)\mathrm{d} u\right)\,\mathrm{d} s.
\end{split}
\end{equation}
By rearranging the integrating factor, this proves inequality~\eqref{eq:ineq-Y}. Hence, by the Cauchy-Schwarz inequality:
$$
Y^2(t) \le \int_0^t\norm{\gamma_s}^2\mathrm{d} s \int_0^t\exp\left(2\int_s^tF(u)\mathrm{d} u\right)\mathrm{d} s.
$$
Finally, \eqref{eq:goal} follows by taking $\sup$ over $t \in [0, T]$, and applying expectation $\overline{\mathbf{R}}$ and~\eqref{eq:entropy}.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:RBM}] Follows from Remark~\ref{rmk:strong-existence} and by taking $F\equiv 0$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:CBP}]
\textbf{Proof of (a) for finite systems.} We apply \cite[Proposition 2.11]{LogConcave}. It suffices to show that the drift in the equation~\eqref{eq:mainSDE}:
$$
g(x) = (g_1(x), \ldots, g_N(x))\ \mbox{with}\ \ g_i(x) = \sum\limits_{k=1}^N1(\mathbf{p}_x(k) = i)g_k \equiv g_{\mathbf{p}^{-1}_x(i)}
$$
satisfies the contraction condition in~\eqref{eq:contraction}. Rewrite the dot product in~\eqref{eq:contraction} as
\begin{equation}
\label{eq:exposition}
\left(g(x) - g(y)\right)\cdot(x-y) = \sum\limits_{i=1}^Ng_{\mathbf{p}^{-1}_x(i)}x_i - \sum\limits_{i=1}^Ng_{\mathbf{p}^{-1}_y(i)}x_i - \sum\limits_{i=1}^Ng_{\mathbf{p}^{-1}_x(i)}y_i + \sum\limits_{i=1}^Ng_{\mathbf{p}^{-1}_y(i)}y_i.
\end{equation}
The fact that
\begin{equation}
\label{eq:ineq-exposition}
\sum\limits_{i=1}^Ng_{\mathbf{p}^{-1}_x(i)}x_i \le \sum\limits_{i=1}^Ng_{\mathbf{p}^{-1}_y(i)}x_i,\quad\mbox{or, equivalently,}\quad \sum\limits_{k=1}^Ng_kx_{\mathbf{p}_x(k)} \le \sum\limits_{k=1}^Ng_kx_{\mathbf{p}_y(k)},
\end{equation}
follows from \cite[Lemma 1.4]{JM2008} applied to $a(i) = -g_i,\ b(i) = x_i,\ i = 1, \ldots, N;\ \tau = \mathbf{p}_y\mathbf{p}_x^{-1}$. Similarly,
$$
\sum\limits_{i=1}^Ng_{\mathbf{p}^{-1}_y(i)}y_i \le \sum\limits_{i=1}^Ng_{\mathbf{p}^{-1}_x(i)}y_i.
$$
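Both inequalities can also be seen as instances of the classical rearrangement inequality: since $g_1\ge g_2\ge\ldots\ge g_N$ while $x_{\mathbf{p}_x(1)}\le\ldots\le x_{\mathbf{p}_x(N)}$, the pairing $\sum_{k}g_kx_{\mathbf{p}_x(k)}$ matches the largest drifts with the smallest coordinates and is therefore minimal among all pairings $\sum_{k}g_kx_{\mathbf{p}_y(k)}$.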
This proves the contraction condition~\eqref{eq:contraction}, and thus completes the proof.
\textbf{Proof of (b) for finite systems.} The vector $Y = (Y(t), t \ge 0)$ of ranked particles is a (normally) reflected Brownian motion in the (convex) wedge
$
D := \{y \in \mathbb{R}^N\mid y_1 \le \ldots \le y_N\},
$
with constant drift $g := (g_1, \ldots, g_N)$ and constant diffusion $A = \diag(\sigma_1^2, \ldots, \sigma_N^2)$. It suffices to apply Corollary~\ref{cor:RBM} of Theorem~\ref{thm:main}. This completes the proof for finite systems.
\textbf{The case of infinite systems.} Approximate the infinite system by corresponding finite systems. For every $N \ge 1$, consider a system of $N$ competing Brownian particles
$
X = (X^{(N)}_1, \ldots, X_N^{(N)})
$
with drift and diffusion coefficients $g_1, \ldots, g_N$ and $\sigma_1^2, \ldots, \sigma_N^2$, starting from $(x_1, \ldots, x_N)$. Denote the corresponding ranked particles by
$
Y_k^{(N)}, \quad k = 1, \ldots, N.
$
By \cite[Theorem 3.3]{MyOwn6}, as $N \to \infty$, via some subsequence we have weak convergence in $C([0, T], \mathbb{R}^k)$:
\begin{align*}
\bigl(X_1^{(N)}, \ldots, X_k^{(N)}\bigr) & \Rightarrow \bigl(X_1, \ldots, X_k\bigr) \quad \mbox{in case of (a)};\\
\bigl(Y^{(N)}_1, \ldots, Y^{(N)}_k\bigr) &\Rightarrow \bigl(Y_1, \ldots, Y_k\bigr)\quad \mbox{in case of (b)}.
\end{align*}
Since $T_2(C)$ inequalities are preserved under weak limits (\cite[Lemma 2.2]{Djellout}), we are done.
\end{proof}
\noindent
\begin{thebibliography}{12}
\bibitem{BFK} \textsc{Adrian D. Banner, E. Robert Fernholz, Ioannis Karatzas} (2005). Atlas Models of Equity Markets. \textit{Ann. Appl. Probab.} \textbf{15} (4), 2296--2330.
\bibitem{Bass1987} \textsc{Richard Bass, \'Etienne Pardoux} (1987). Uniqueness for Diffusions with Piecewise Constant Coefficients. \textit{Probab. Th. Rel. Fields} \textbf{76} (4), 557--572.
\bibitem{Neutral} \textsc{Jianhai Bao, Feng-Yu Wang, Chenggui Yang} (2013). Transportation Cost Inequalities for Neutral Functional Stochastic Equations. \textit{Z. Anal. Anwend.} \textbf{32} (4), 457--475.
\bibitem{Bobkov1} \textsc{Sergey Bobkov, Friedrich G\"otze} (1999). Exponential Integrability and Transportation Cost Related to Logarithmic Sobolev Inequalities. \textit{J. Funct. Anal.} \textbf{163} (1), 1--28.
\bibitem{Bobkov3} \textsc{Sergey G. Bobkov, Mokshay Madiman} (2011). Concentration of the Information in Data with Log-Concave Distributions. \textit{Ann. Probab.} \textbf{39} (4), 1528-1543.
\bibitem{Order} \textsc{St\'ephane Boucheron and Maud Thomas} (2012). Concentration Inequalities for Order Statistics. \textit{Electr. Comm. Probab.} \textbf{17} (51), 1--12.
\bibitem{NewBook} \textsc{St\'ephane Boucheron, G\'abor Lugosi, Pascal Massart} (2013). \textit{Concentration Inequalities: A Nonasymptotic Theory of Independence.} Oxford University Press.
\bibitem{MoroccoFBM} \textsc{Brahim Boufossi, Salah Hajji} (2017). Transportation Inequalities for Neutral Stochastic Differential Equations Driven by Fractional Brownian Motion with Hurst Parameter Lesser than $1/2$. \textit{Mediterr. J. Math.} \textbf{14} (5), 1--16.
\bibitem{Morocco} \textsc{Brahim Boufossi, Salah Hajji} (2018). Transportation Inequalities for Stochastic Heat Equation. \textit{Stat. Probab. Let.} \textbf{139}, 75--83.
\bibitem{CDSS} \textsc{Manuel Cabezas, Amir Dembo, Andrey Sarantsev, Vladas Sidoravicius} (2018). Brownian Particles with Rank-Dependent Drifts: Out of Equilibrium Behavior. To appear in \textit{Comm. Pure Appl. Math.} Available at arXiv:1708.01918.
\bibitem{LogConcave} \textsc{Patrick Cattiaux, Arnaud Guillin} (2014). Semi Log-Concave Markov Diffusions. \textit{S\'eminaire de Probabilit\'es XLVI} , 231--292. \textit{Lecture Notes in Mathematics} \textbf{2123}, Springer, Cham.
\bibitem{C3} \textsc{Patrick Cattiaux, Arnaud Guillin, Liming Wu} (2009). A Note on Talagrand's Transportation Inequality and Logarithmic Sobolev Inequality. \textit{Probab. Th. Rel. Fields} \textbf{148} (1-2), 285--304.
\bibitem{CP2010} \textsc{Sourav Chatterjee, Soumik Pal} (2010). A Phase Transition Behavior for Brownian Motions Interacting Through Their Ranks. \textit{Probab. Th. Rel. Fields} \textbf{147} (1-2), 123--159.
\bibitem{DemboTsai} \textsc{Amir Dembo, Li-Cheng Tsai} (2017). Equilibrium fluctuation of the Atlas model. \textit{Ann. Probab.}, \textbf{45} (6B), 4529--4560.
\bibitem{4people} \textsc{Amir Dembo, Mykhaylo Shkolnikov, S.R. Srinivasa Varadhan, Ofer Zeitouni} (2016). Large Deviations for Diffusions Interacting Through Their Ranks. \textit{Comm. Pure Appl. Math.} \textbf{69} (7), 1259--1313.
\bibitem{Djellout} \textsc{Hac\'ene Djellout, Arnaud Guillin, Liming Wu} (2004). Transportation Cost-Information Inequalities and Applications to Random Dynamical Systems and Diffusions. \textit{Ann. Probab.} \textbf{32} (3B), 2702--2732.
\bibitem{Algorithm} \textsc{Devdatt P. Dubhashi, Alessandro Panconesi} (2012). \textit{Concentration of Measure for the Analysis of Randomized Algorithms.} Cambridge University Press.
\bibitem{FernholzBook} \textsc{E. Robert Fernholz} (2002). \textit{Stochastic Portfolio Theory.} Applications of Mathematics \textbf{48}. Springer-Verlag.
\bibitem{Feyel-Ustunel} \textsc{Denis Feyel, Ali S\"uleyman \"Ust\"unel} (2000). The Notion of Convexity and Concavity on Wiener Space. \textit{J. Funct. Anal.} \textbf{176} (2), 400--428.
\bibitem{Gozlansurvey} \textsc{Nathael Gozlan, Christian L\'{e}onard} (2010). Transport Inequalities. A Survey. \textit{Markov Proc. Rel. Fields} \textbf{16} (4), 635--736.
\bibitem{Gozlan} \textsc{Nathael Gozlan, Cyril Roberto, Paul-Marie Samson} (2011). From Concentration to Logarithmic Sobolev and Poincar\'{e} Inequalities. \textit{J. Funct. Anal.} \textbf{260} (5), 1491--1522.
\bibitem{IKS2013} \textsc{Tomoyuki Ichiba, Ioannis Karatzas, Mykhaylo Shkolnikov} (2013). Strong Solutions of Stochastic Equations with Rank-Based Coefficients. \textit{Probab. Th. Rel. Fields} \textbf{156} (1-2), 229--248.
\bibitem{IPS2013} \textsc{Tomoyuki Ichiba, Soumik Pal, Mykhaylo Shkolnikov} (2013). Convergence Rates for Rank-Based Models with Applications to Portfolio Theory. \textit{Probab. Th. Rel. Fields} \textbf{156} (1-2), 415--448.
\bibitem{JM2008} \textsc{Benjamin Jourdain, Florent Malrieu} (2008). Propagation of Chaos and Poincar\'{e} Inequalities for a System of Particles Interacting Through Their cdf. \textit{Ann. Appl. Probab.} \textbf{18} (5), 1706--1736.
\bibitem{Lacker} \textsc{Daniel Lacker} (2015). Liquidity, Risk Measures, and Concentration of Measure. \textit{Math. Oper. Res.} \textbf{43} (3).
\bibitem{Davar} \textsc{Davar Khoshnevisan, Andrey Sarantsev} (2018). Talagrand Concentration Inequalities for Stochastic Partial Differential Equations. Available at arXiv:1709.07098.
\bibitem{KolliShkol} \textsc{Praveen Kolli, Mykhaylo Shkolnikov} (2018). SPDE limit of the global fluctuations in rank-based models. \textit{Ann. Probab.} \textbf{46}, no. 2, 1042--1069.
\bibitem{Delay} \textsc{Zhi Li, Jiaowan Luo} (2015). Transportation Inequalities for Stochastic Delay Evolution Equations Driven by Fractional Brownian Motion. \textit{Front. Math. China} \textbf{10} (2), 303--321.
\bibitem{Marton} \textsc{Katalin Marton} (1996). Bounding $\bar{d}$-Distance by Information Divergence: a Method to Prove Measure Concentration. \textit{Ann. Probab.} \textbf{24} (2), 857--866.
\bibitem{Marton1} \textsc{Katalin Marton} (1996). A Measure Concentration Inequality for Contracting Markov Chains. \textit{Geom. Funct. Anal.} \textbf{6} (3), 556--571.
\bibitem{Marton2} \textsc{Katalin Marton} (1998). Measure Concentration for a Class of Random Processes. \textit{Probab. Th. Rel. Fields} \textbf{110} (3), 427--439.
\bibitem{Model} \textsc{Pascal Massart} (2007). \textit{Concentration Inequalities and Model Selection.} Lecture Notes in Mathematics \textbf{1896}. Springer.
\bibitem{Otto} \textsc{Felix Otto, C\'edric Villani} (2000). Generalization of an Inequality by Talagrand and Links with the Logarithmic Sobolev Inequality. \textit{J. Funct. Anal.} \textbf{173} (2), 361--400.
\bibitem{Pal} \textsc{Soumik Pal} (2012). Concentration for Multidimensional Diffusions and their Boundary Local Times. \textit{Probab. Th. Rel. Fields} \textbf{154} (1), 225--254.
\bibitem{PalPitman} \textsc{Soumik Pal, Jim Pitman} (2008). One-Dimensional Brownian Particle Systems with Rank-Dependent Drifts. \textit{Ann. Appl. Probab.} \textbf{18} (6), 2179--2207.
\bibitem{PalShkolnikov} \textsc{Soumik Pal, Mykhaylo Shkolnikov} (2014). Concentration of Measure for Brownian Particle Systems Interacting Through Their Ranks. \textit{Ann. Appl. Probab.} \textbf{24} (4), 1482--1508.
\bibitem{Paulin} \textsc{Daniel Paulin} (2015). Concentration Inequalities for Markov Chains by Marton Couplings and Spectral Methods. \textit{Electr. J. Probab.} \textbf{20} (79), 1--32.
\bibitem{Sason} \textsc{Maxim Raginsky, Igal Sason} (2019). \textit{Concentration of Measure Inequalities in Information Theory, Communications, and Coding,} 3rd edition. Foundations and Trends in Communications and Information \textbf{45}. Now Publishers.
\bibitem{Riedel} \textsc{Sebastian Riedel} (2017). Transportation-Cost Inequalities for Diffusions Driven by Gaussian Processes. \textit{Electr. J. Probab.} \textbf{22} (24), 1--26.
\bibitem{Samson} \textsc{Paul-Marie Samson} (2000). Concentration of Measure Inequalities for Markov Chains and $\Phi$-Mixing Processes. \textit{Ann. Probab.} \textbf{28} (1), 416--461.
\bibitem{MyOwn3} \textsc{Andrey Sarantsev} (2015). Triple and Simultaneous Collisions of Competing Brownian Particles. \textit{Electr. J. Probab.} \textbf{20} (29), 1--28.
\bibitem{MyOwn6} \textsc{Andrey Sarantsev} (2017). Infinite Systems of Competing Brownian Particles. \textit{Ann. Inst. H. Poincar\'{e} Probab. Stat.} \textbf{53} (4), 2279--2315.
\bibitem{MyOwn11} \textsc{Andrey Sarantsev} (2017). Two-Sided Infinite Systems of Competing Brownian Particles. \textit{ESAIM Probab. Stat.} \textbf{21}, 317--349.
\bibitem{Saussereau} \textsc{Bruno Saussereau} (2012). Transportation Inequalities for Stochastic
Differential Equations Driven by a Fractional Brownian Motion. \textit{Bernoulli} \textbf{18} (1), 1--23.
\bibitem{Talagrand3} \textsc{Michel Talagrand} (1995). Concentration of Measure and Isoperimetric Inequalities in Product Spaces. \textit{Publications Math\'ematiques de l'IH\'ES} \textbf{81} (1), 73--205.
\bibitem{Talagrand5} \textsc{Michel Talagrand} (1996). Transportation Cost for Gaussian and Other Product Measures. \textit{Geom. Funct. Anal.} \textbf{6} (3), 587--600.
\bibitem{Talagrand1} \textsc{Michel Talagrand} (2006). A New Isoperimetric Inequality for Product Measure, and the Concentration of Measure Phenomenon. \textit{Israel Seminar, Geom. Funct. Anal.} Lecture Notes in Mathematics, \textbf{1469}, 91--124. Springer-Verlag.
\bibitem{Tanaka1979} \textsc{Hiroshi Tanaka} (1979). Stochastic Differential Equations with Reflecting Boundary Conditions in Convex Regions. \textit{Hiroshima Math. J.} \textbf{9} (1), 163--177.
\bibitem{Ustunel} \textsc{Ali S\"uleyman \"Ust\"unel} (2012). Transportation Cost Inequalities for Diffusions Under Uniform Distance. \textit{Stochastic Analysis and Related Topics,} \textit{Springer Proc. Math. Stat.} \textbf{22}, 203--214.
\bibitem{RDE} \textsc{Liming Wu, Zhengliang Zhang} (2006). Talagrand's $T_2$-Transportation Inequality and Log-Sobolev Inequality for Dissipative SPDEs and Applications to Reaction-Diffusion Equations. \textit{Chinese Ann. Math. B} \textbf{27} (3), 243--262.
\end{thebibliography}
\end{document}
\begin{document}
\maketitle
\begin{abstract}
We study the Landau-Lifshitz system
associated with Maxwell equations in a bilayered ferromagnetic body when
super-exchange and surface anisotropy interactions are present in the
spacer in-between the layers. In the presence of these surface
energies, the Neumann boundary condition becomes nonlinear.
We prove, in three dimensions, the existence of global weak solutions to the
Landau-Lifshitz-Maxwell system with nonlinear Neumann boundary conditions.
\end{abstract}
\section{Introduction}
Ferromagnetic materials are widely used in the industrial
world. Their four main applications are data storage (hard drives), stealth technology,
communications (wave circulators), and energy (transformers). For an
introduction to ferromagnetism, see Aharoni~\cite{Aharoni:introduc} or
Brown~\cite{Brown:microm}.
The state of a ferromagnetic body is characterized by its
magnetization $\bm{m}$, a vector field whose norm is equal to $1$
inside the ferromagnetic body and null outside. The evolution of
$\bm{m}$ can be modeled by the Landau-Lifshitz equation
\begin{equation*}
\frac{\partial\bm{m}}{\partial t}=
-\bm{m}{\wedge}\bm{h}_{\mathrm{tot}}
-\alpha\bm{m}{\wedge}(\bm{m}{\wedge}\bm{h}_{\mathrm{tot}}),
\end{equation*}
where $\bm{h}_{\mathrm{tot}}$ depends on $\bm{m}$ and contains various
contributions. In particular, in this paper, $\bm{h}_{\mathrm{tot}}$
includes various volume and surface energies, among which
the magnetic field obtained from the Maxwell equations and several surface terms such as
super-exchange and surface anisotropy.
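Note that both terms on the right-hand side are orthogonal to $\bm{m}$, so the constraint $\lVert\bm{m}\rVert=1$ is, at least formally, preserved by the flow:
\begin{equation*}
\frac{1}{2}\frac{\partial\lVert\bm{m}\rVert^2}{\partial t}
=\bm{m}\cdot\frac{\partial\bm{m}}{\partial t}
=-\bm{m}\cdot\bigl(\bm{m}{\wedge}\bm{h}_{\mathrm{tot}}\bigr)
-\alpha\,\bm{m}\cdot\bigl(\bm{m}{\wedge}(\bm{m}{\wedge}\bm{h}_{\mathrm{tot}})\bigr)=0.
\end{equation*}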
F.~Alouges and A.~Soyeur\cite{Alouges.Soyeur:OnGlobal} established
the existence and the non-uniqueness of weak solutions to the Landau-Lifshitz system when only
exchange is present, \textit{i.e.} when
$\bm{h}_{\mathrm{tot}}=\Lapl\bm{m}$; see also A.~Visintin~\cite{Visintin:Landau}.
S.~Labbé \cite[Ch. 10]{labbe:mumag} extended the existence result in the presence of the
magnetostatic field. In the absence of the exchange interaction,
J.L.~Joly, G.~Métivier and J.~Rauch
obtained global existence and uniqueness results in~\cite{Joly:global}.
G. Carbou and P. Fabrie \cite{Carbou.Fabrie:Time} proved
the existence of weak solutions when the Landau-Lifshitz equation is
associated with Maxwell equations.
K.~Santugini proved in~\cite{Santugini:SolutionsLL}, see
also~\cite[chap. 6]{Santugini:These}, the existence of
weak solutions globally in
time to the magnetostatic Landau-Lifshitz system in the presence of surface energies
that cause the Neumann boundary conditions to become nonlinear.
In this paper, we prove the existence of weak solutions to the full
Landau-Lifshitz-Maxwell system with the nonlinear Neumann boundary
conditions arising from the super-exchange and the surface anisotropy energies. In addition, we address the long time behavior by describing the $\omega$-limit set of the trajectories.
The plan of the paper is the following. In \S\ref{sect:notations}, we introduce several notations we use
throughout this paper. In \S\ref{sect:MicromagneticModel}, we recall the micromagnetic
model. In \S\ref{sect:LandauLifshitzSystem}, we state our main
theorems.
Theorem~\ref{theo:ExistenceWeakLandauLifshitzMaxwellSurfaceEnergies}
states the global existence in time of weak solutions to the
Landau-Lifshitz system with the nonlinear Neumann Boundary conditions
arising from the super-exchange and the surface anisotropy energies. Theorem \ref{theo-omegalim} describes the $\omega$-limit set of a solution given by the previous theorem.
In \S\ref{sect:prerequisites}, before starting the proofs, we recall technical results on Sobolev
Spaces we use in this paper. We prove Theorem~\ref{theo:ExistenceWeakLandauLifshitzMaxwellSurfaceEnergies}
in \S\ref{sect:proofExistenceWeakLLMSurfaceEnergies} and Theorem~\ref{theo-omegalim} in \S\ref{section-omegalimit}.
{\bf Notation} Throughout the paper, $\lVert\cdot\rVert$ denotes the Euclidean
norm over $\mathbb{R}^d$, where $d$ is a positive integer, often equal
to $3$. When referring to the $\mathrm{L}^2$ norm over a measurable set
$A$, we use instead the notation $\lVert\cdot\rVert_{\mathrm{L}^2(A)}$.
\section{Geometry of spacers and related notations}\label{sect:notations}
In this paper, we consider a ferromagnetic domain with a spacer. We denote this domain by
$\Omega={{B}}{\times}I$,
where ${{B}}$ is a bounded domain of $\mathbb{R}^2$ with smooth
boundary and $I$ is the interval $\rbrack-L^-,L^+\lbrack\setminus\{0\}$,
$L^+$ and $L^-$ being two positive real numbers.
We set $Q_T= \rbrack0,T\lbrack{\times}\Omega$.
\begin{center}
\begin{tikzpicture}
\filldraw[fill=blue!50] (0,1.6)--(4,1.6)--(4,3.0)--(0,3.0)--cycle;
\filldraw[fill=blue!50] (0,0)--(4,0)--(4,1.5)--(0,1.5)--cycle;
\filldraw[fill=blue!50] (4,1.6)--(5.5,3.6)--(5.5,5.0)--(4,3.0)--cycle;
\filldraw[fill=blue!50] (4,0)--(5.5,2.0)--(5.5,3.5)--(4,1.5)--cycle;
\begin{scope}
\def(0,3.0)--(4,3.0)--(5.5,5.0)--(2,5.0)--cycle{(0,3.0)--(4,3.0)--(5.5,5.0)--(2,5.0)--cycle}
\filldraw[fill=blue!50] (0,3.0)--(4,3.0)--(5.5,5.0)--(2,5.0)--cycle;
\clip(0,3.0)--(4,3.0)--(5.5,5.0)--(2,5.0)--cycle;
\end{scope}
\end{tikzpicture}
\end{center}
On the common boundary
$\Gamma={{B}}{\times}\{0\}$ (the spacer), $\gamma^+$
is the trace map from above, which sends the restriction $\bm{m}_{\vert{{B}}{\times}\rbrack0,L^+\lbrack}$
to its trace $\gamma^+\bm{m}$ on $\Gamma$, and
$\gamma^-$ is the trace map from below, which sends the restriction $\bm{m}_{\vert{{B}}{\times}\rbrack-L^-,0\lbrack}$
to its trace $\gamma^-\bm{m}$ on $\Gamma$. To simplify notations, we consider that $\Gamma$
has two sides: $\Gamma^+={{B}}{\times}\{0^+\}$ and $\Gamma^-={{B}}{\times}\{0^-\}$.
By $\Gamma^\pm$, we denote the union of these two sides $\Gamma^+\cup\Gamma^-$. In this
paper, integrating over $\Gamma^\pm$ means integrating over both
sides, while integrating over $\Gamma$ means integrating only once.
On $\Gamma^\pm$, $\gamma$ is the map that sends $\bm{m}$
to its trace on both sides. The trace map $\gamma^{*}$ is the trace
map that exchanges the two sides of $\Gamma$: it maps $\bm{m}$
to $\gamma(\bm{m}\circ s)$, where $s$
is the map that sends
$(x,y,z,t)$ to $(x,y,-z,t)$.
For convenience, we denote by $\bm{\nu}$ the extension to $\Omega$ of
the unitary exterior normal defined
on $\Gamma^\pm$, thus $\bm{\nu}(\bm{x})=-\bm{e}_z$
if $z>0$ or if $\bm{x}$ belongs to $\Gamma^+$,
and $\bm{\nu}(\bm{x})=\bm{e}_z$ if $z<0$ or
if $\bm{x}$ belongs to $\Gamma^-$.
In this paper, $\mathbb{H}^1(\Omega)$ denotes
$\mathrm{H}^1(\Omega;\mathbb{R}^3)$,
and $\mathbb{L}^2(\Omega)$ denotes
$L^2(\Omega;\mathbb{R}^3)$.
By $\mathcal{C}^\infty_c(\Omega)$, we denote the set of
$\mathcal{C}^\infty$ functions that have compact support in
$\Omega$. By $\mathcal{C}^\infty_c(\lbrack0,T\rbrack{\times}\Omega)$,
we denote the set of
$\mathcal{C}^\infty$ functions that have compact support in
$\lbrack0,T\rbrack{\times} \Omega$.
\section{The micromagnetic model}\label{sect:MicromagneticModel}
One possible model of ferromagnetism is the micromagnetic model
introduced by W.F.~Brown~\cite{Brown:microm}. In the micromagnetic model, the
magnetization $\bm{M}$ is the mean at the mesoscopic scale of
the microscopic magnetization and has constant norm $M_s$ in the
ferromagnetic material and is null outside. In this
paper, we only work with the dimensionless magnetization
$\bm{m}=\bm{M}/M_s$.
To each interaction $p$ present in the ferromagnetic material is
associated an energy $\mathrm{E}_p(\bm{m})$ and an operator
$\mathcal{H}_p$ linked by
\begin{equation*}
\mathrm{D}\mathrm{E}_p(\bm{m})\cdot\bm{v}=-\int_{\Omega}\mathcal{H}_p(\bm{m}) (\bm{x})\cdot\bm{v}(\bm{x}){\mathrm{d}}\bm{x}
\end{equation*}
The vector field $\bm{h}_p=\mathcal{H}_p(\bm{m})$ is the magnetic
effective field associated to interaction $p$. The total energy is the sum
of all the energies associated with every interaction.
These energies completely characterize the stationary problem:
the steady states of the magnetization are the
minimizers of the total energy under the constraint
$\lVert\bm{m}\rVert=1$.
To have an evolution problem, a phenomenological partial differential
equation was introduced in Landau-Lifshitz~\cite{L.L.:Phys}, the Landau-Lifshitz
equation:
\begin{equation*}
\frac{\partial\bm{m}}{\partial t}=-\bm{m}{\wedge}\bm{h}_{\mathrm{tot}}
-\alpha\bm{m}{\wedge}(\bm{m}{\wedge}\bm{h}_{\mathrm{tot}}).
\end{equation*}
where $\bm{h}_{\mathrm{tot}}$ contains all the contributions to the
magnetic effective field. These contributions can be either volume or
surface contributions.
\subsection{Volume energies}
\subsubsection{Exchange}
Exchange is essential in the micromagnetic theory. Without exchange,
there would be no ferromagnetic materials. This
interaction aligns the magnetization over short distances.
In the isotropic and homogeneous case,
the exchange energy may be modeled by
\begin{equation*}
\mathrm{E}_{e}(\bm{m})=\frac{A}{2}\int_\Omega\lVert\nabla\bm{m}\rVert^2{\mathrm{d}}\bm{x}.
\end{equation*}
The associated exchange operator is $\mathcal{H}_e(\bm{m})=A\Lapl\bm{m}$.
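This expression can be recovered, at least formally, from the energy through the relation above: for smooth $\bm{m}$ and $\bm{v}$, an integration by parts gives
\begin{equation*}
\mathrm{D}\mathrm{E}_{e}(\bm{m})\cdot\bm{v}
=A\int_\Omega\sum_{i=1}^3\frac{\partial\bm{m}}{\partial x_i}\cdot\frac{\partial\bm{v}}{\partial x_i}\,{\mathrm{d}}\bm{x}
=-A\int_\Omega\Lapl\bm{m}\cdot\bm{v}\,{\mathrm{d}}\bm{x}
+A\int_{\partial\Omega}\frac{\partial\bm{m}}{\partial\bm{\nu}}\cdot\gamma\bm{v}\,{\mathrm{d}} S(\hat{\bm{x}}),
\end{equation*}
the boundary term being the one that produces the boundary conditions discussed below.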
\subsubsection{Anisotropy}
Many ferromagnetic materials have a crystalline structure. This
crystalline structure can penalize some directions of magnetization
and favor others. Anisotropy can be modeled by
\begin{equation*}
\mathrm{E}_{a}(\bm{m})=\frac{1}{2}\int_\Omega(\mathbf{K}(\bm{x})\bm{m}(\bm{x}))\cdot\bm{m}(\bm{x}){\mathrm{d}}\bm{x}.
\end{equation*}
where $\mathbf{K}$ is a positive symmetric matrix field.
The associated anisotropy operator is $\mathcal{H}_a(\bm{m})=-\mathbf{K}\bm{m}$.
\subsubsection{Maxwell}
This is the magnetic interaction that comes from Maxwell
equations. The constitutive relations in the ferromagnetic medium are given by:
\begin{equation*}\left\{
\begin{array}{l}
B=\mu_0(\bm{h}+\overline{\bm{m}}),\\
D=\varepsilon_0 \bm{e},
\end{array}
\right.
\end{equation*}
where $\overline{\bm{m}}$ is the extension of $\bm{m}$ by zero outside $\Omega$.
Starting from the Maxwell equations, the magnetic excitation $\bm{h}$ and the electric field
$\bm{e}$ are solutions to the following system:
\begin{align*}
\mu_0\frac{\partial(\bm{h}+\overline{\bm{m}})}{\partial t}+\mathrm{Rot}\,\bm{e}&=0,\\
\varepsilon_0\frac{\partial\bm{e}}{\partial
t}+\sigma(\bm{e}+\bm{f})\mathds{1}_\Omega-\mathrm{Rot}\,\bm{h}&=0.
\end{align*}
As these are evolution equations, initial conditions are needed to
complete the system.
The energy associated with the Maxwell
interaction is
\begin{equation*}
E_{\textrm{maxw}}(\bm{h},\bm{e})=\frac{1}{2}\lVert\bm{h}\rVert^2_{\mathrm{L}^2(\mathbb{R}^3)}
+\frac{\varepsilon_0}{2\mu_0}\lVert\bm{e}\rVert^2_{\mathrm{L}^2(\mathbb{R}^3)}.
\end{equation*}
We also recall the magnetic flux conservation law
$\mbox{div } B=0.$
Here, the constitutive relation reads $B=\mu_0(\bm{h}+\overline{\bm{m}})$. Therefore, we must assume that this condition is satisfied at the initial time; for positive times, it is propagated by the system.
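Formally, this propagation follows by taking the divergence of the first Maxwell equation:
\begin{equation*}
\frac{\partial}{\partial t}\,\mbox{div } B
=\mu_0\,\frac{\partial}{\partial t}\,\mbox{div }(\bm{h}+\overline{\bm{m}})
=-\mbox{div }(\mathrm{Rot}\,\bm{e})=0,
\end{equation*}
so that $\mbox{div } B(t,\cdot)=0$ for all $t\ge0$ as soon as it holds at $t=0$.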
\subsection{Surface energies}
When a spacer is present inside a ferromagnetic material, new physical
phenomena may appear in the spacer. These phenomena are modeled
by surface energies, see M.~Labrune and J.~Miltat~\cite{Labrune.Miltat:wall.structure}.
\subsubsection{Super-exchange}
This surface energy penalizes the jump of the magnetization across the
spacer. It is modeled by a quadratic and a biquadratic term:
\begin{equation}\label{eq:SuperExchangeEnergy}
\mathrm{E}_{se}(\bm{m})=
\frac{J_1}{2}\int_\Gamma\lVert\gamma^+\bm{m}-\gamma^-\bm{m}\rVert^2{\mathrm{d}} S(\hat{\bm{x}})
+J_2\int_\Gamma\lVert\gamma^+\bm{m}{\wedge}\gamma^-\bm{m}\rVert^2{\mathrm{d}} S(\hat{\bm{x}}).
\end{equation}
The magnetic excitation associated with super-exchange is:
\begin{equation*}
\mathcal{H}_{se}(\bm{m})=\Big(J_1(\gamma^{*}\bm{m}-\gamma\bm{m})
+2J_2\big((\gamma\bm{m}\cdot\gamma^{*}\bm{m})\gamma^{*}\bm{m}
-\lVert\gamma^*\bm{m}\rVert^2\gamma\bm{m}\big)\Big)
{\mathrm{d}} S(\Gamma^+\cup\Gamma^-),
\end{equation*}
where $\gamma^*$ is defined in \S\ref{sect:notations}.
Integration over ${\mathrm{d}} S(\Gamma^+\cup\Gamma^-)$ should be understood as
integrating over both faces of the surface $\Gamma$.
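A formal computation illustrates where this expression comes from. Writing the quadratic part of~\eqref{eq:SuperExchangeEnergy} as an integral over both sides of the spacer and using the symmetry that exchanges $\Gamma^+$ and $\Gamma^-$,
\begin{equation*}
\frac{J_1}{2}\int_\Gamma\lVert\gamma^+\bm{m}-\gamma^-\bm{m}\rVert^2{\mathrm{d}} S(\hat{\bm{x}})
=\frac{J_1}{4}\int_{\Gamma^\pm}\lVert\gamma\bm{m}-\gamma^{*}\bm{m}\rVert^2{\mathrm{d}} S(\hat{\bm{x}}),
\end{equation*}
whose derivative in the direction $\bm{v}$ is $J_1\int_{\Gamma^\pm}(\gamma\bm{m}-\gamma^{*}\bm{m})\cdot\gamma\bm{v}\,{\mathrm{d}} S(\hat{\bm{x}})$. This yields the $J_1$-part of $\mathcal{H}_{se}$; the biquadratic part is obtained in the same way.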
\subsubsection{Surface anisotropy}
Surface anisotropy penalizes, on the boundary, the components of the
magnetization orthogonal to the normal direction $\bm{\nu}$. In the micromagnetic model, it is modeled by a surface energy:
\begin{equation}\label{eq:SurfaceAnisotropyEnergy}
\begin{split}
\mathrm{E}_{sa}(\bm{m})&=\frac{K_s}{2}\int_{\Gamma^+}\lVert\gamma\bm{m}{\wedge}\bm{\nu}\rVert^2{\mathrm{d}} S(\hat{\bm{x}})
+\frac{K_s}{2}\int_{\Gamma^-}\lVert\gamma\bm{m}{\wedge}\bm{\nu}\rVert^2{\mathrm{d}} S(\hat{\bm{x}})\\
&=\frac{K_s}{2}\int_{\Gamma^\pm}\lVert\gamma\bm{m}{\wedge}\bm{\nu}\rVert^2{\mathrm{d}} S(\hat{\bm{x}}).
\end{split}
\end{equation}
The magnetic excitation associated with surface anisotropy is:
\begin{equation*}
\mathcal{H}_{sa}(\bm{m})=K_s\big((\gamma\bm{m}\cdot\bm{\nu})\bm{\nu}-\gamma\bm{m}\big)
{\mathrm{d}} S(\Gamma^+\cup\Gamma^-).
\end{equation*}
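For instance, since $\lVert\bm{\nu}\rVert=1$, one has $\lVert\gamma\bm{m}{\wedge}\bm{\nu}\rVert^2=\lVert\gamma\bm{m}\rVert^2-(\gamma\bm{m}\cdot\bm{\nu})^2$, so a formal differentiation of~\eqref{eq:SurfaceAnisotropyEnergy} gives
\begin{equation*}
\mathrm{D}\mathrm{E}_{sa}(\bm{m})\cdot\bm{v}
=K_s\int_{\Gamma^\pm}\bigl(\gamma\bm{m}-(\gamma\bm{m}\cdot\bm{\nu})\bm{\nu}\bigr)\cdot\gamma\bm{v}\,{\mathrm{d}} S(\hat{\bm{x}}),
\end{equation*}
which is consistent with the expression of $\mathcal{H}_{sa}$ above.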
\subsubsection{New boundary conditions}
Without surface energies, the standard boundary condition is the
homogeneous Neumann condition. When surface energies are present,
the boundary conditions are the ones arising from the stationarity conditions
on the total magnetic energy:
\begin{equation*}
A\gamma\bm{m}{\wedge}\frac{\partial\bm{m}}{\partial\bm{\nu}}=
K_s(\bm{\nu}\cdot\gamma\bm{m})\gamma\bm{m}{\wedge}\bm{\nu}
+J_1\gamma\bm{m}{\wedge}\gamma^{*}\bm{m}
+2J_2(\gamma\bm{m}\cdot\gamma^{*}\bm{m})\gamma\bm{m}{\wedge}\gamma^{*}\bm{m}
\end{equation*}
on the interface $\Gamma^\pm$. A more convincing justification for
these boundary conditions is that they are the ones needed to recover
formally the energy inequality. These boundary conditions are
nonlinear.
\section{The Landau-Lifshitz system}\label{sect:LandauLifshitzSystem}
We consider the following Landau-Lifshitz-Maxwell system:
\begin{subequations}
\begin{align}
\frac{\partial\bm{m}}{\partial t}&=
-\bm{m}{\wedge}\bm{h}_{\mathrm{tot}}^{\mathrm{vol}}-\alpha\bm{m}{\wedge}(\bm{m}{\wedge}\bm{h}_{\mathrm{tot}}^{\mathrm{vol}})\mbox{ in }\mathbb{R}^+\times \Omega,\label{41a}\\
\bm{m}(0,\cdot)&=\bm{m}_0\mbox{ on } \Omega,\\
\lVert\bm{m}\rVert&=1\mbox{ in }\mathbb{R}^+\times \Omega,\\
\frac{\partial\bm{m}}{\partial\bm{\nu}}&=
0\qquad\text{on $\partial\Omega\setminus\Gamma^\pm$},\\
\begin{split}
\frac{\partial\bm{m}}{\partial\bm{\nu}}&=
\frac{K_s}{A}(\bm{\nu}\cdot\gamma\bm{m})
(\bm{\nu}-(\bm{\nu}\cdot\gamma\bm{m})\gamma\bm{m})
\\&\phantom{=}
+\frac{J_1}{A}(\gamma^{*}\bm{m}-
(\gamma\bm{m}\cdot\gamma^{*}\bm{m})
\gamma\bm{m})
\\&\phantom{=}
+2\frac{J_2}{A}(\gamma\bm{m}\cdot\gamma^{*}\bm{m})
(\gamma^{*}\bm{m}-
(\gamma\bm{m}\cdot\gamma^{*}\bm{m})
\gamma\bm{m})
\qquad\text{on $\mathbb{R}^+\times\Gamma^\pm$},
\end{split}
\end{align}
\end{subequations}
where $\bm{h}_{\mathrm{tot}}^{\mathrm{vol}}=\bm{h}-\mathbf{K}\bm{m}+A\Lapl\bm{m}$
and $(\bm{e},\bm{h})$ is solution to Maxwell equations:
\begin{subequations}
\begin{align}
\mu_0\frac{\partial(\overline{\bm{m}}+\bm{h})}{\partial t}+\mathrm{Rot}\,\bm{e}&=0\mbox{ in }\mathbb{R}^+\times \mathbb{R}^3,\\
\varepsilon_0\frac{\partial\bm{e}}{\partial t}+\sigma(\bm{e}+\bm{f})\mathds{1}_\Omega-\mathrm{Rot}\,\bm{h}&=0\mbox{ in }\mathbb{R}^+\times \mathbb{R}^3,\\
\bm{e}(0,\cdot)&=\bm{e}_0\mbox{ in }\mathbb{R}^3,\\
\bm{h}(0,\cdot)&=\bm{h}_0\mbox{ in } \mathbb{R}^3.
\end{align}
\end{subequations}
We first define the concept of weak solution to the
Landau-Lifshitz-Maxwell system with surface energies. This
concept of weak solutions is present
in~\cite{Alouges.Soyeur:OnGlobal, Carbou.Fabrie:Time, labbe:mumag,
Santugini:SolutionsLL}. The key point is that the Landau-Lifshitz equation (\ref{41a}) is formally equivalent to the following Landau-Lifshitz-Gilbert equation:
$$\frac{\partial\bm{m}}{\partial t}-\alpha \bm{m} {\wedge} \frac{\partial\bm{m}}{\partial t}=-(1+\alpha^2)\bm{m}{\wedge}\bm{h}_{\mathrm{tot}}^{\mathrm{vol}},$$
which is more convenient for deriving the weak formulation below.
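This equivalence follows from a short formal computation. Assume $\lVert\bm{m}\rVert=1$ and write $\bm{h}=\bm{h}_{\mathrm{tot}}^{\mathrm{vol}}$; taking the cross product of (\ref{41a}) with $\bm{m}$ and using $\bm{m}{\wedge}(\bm{m}{\wedge}(\bm{m}{\wedge}\bm{h}))=-\bm{m}{\wedge}\bm{h}$, we get
\begin{equation*}
\bm{m}{\wedge}\frac{\partial\bm{m}}{\partial t}
=-\bm{m}{\wedge}(\bm{m}{\wedge}\bm{h})+\alpha\,\bm{m}{\wedge}\bm{h},
\end{equation*}
so that
\begin{equation*}
\frac{\partial\bm{m}}{\partial t}-\alpha\,\bm{m}{\wedge}\frac{\partial\bm{m}}{\partial t}
=-\bm{m}{\wedge}\bm{h}-\alpha\,\bm{m}{\wedge}(\bm{m}{\wedge}\bm{h})
-\alpha\bigl(-\bm{m}{\wedge}(\bm{m}{\wedge}\bm{h})+\alpha\,\bm{m}{\wedge}\bm{h}\bigr)
=-(1+\alpha^2)\,\bm{m}{\wedge}\bm{h}.
\end{equation*}
This motivates the following definition.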
\begin{definition}[Weak solutions to Landau-Lifshitz-Maxwell with
surface energies]\label{defin:LLMaxwellWeak}
\begin{subequations}
Functions $\bm{m}$ in $\mathrm{L}^\infty(0,+\infty;\mathbb{H}^1(\Omega))$
and in $\mathrm{H}^1_{loc}([0,+\infty\lbrack;\mathbb{L}^2(\Omega))$ with
$\frac{\partial\bm{m}}{\partial t}$ in $ \mathbb{L}^2(\mathbb{R}^+{\times}\Omega)$,
$\bm{e}$ in
$\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\mathbb{R}^3))$,
and $\bm{h}$
in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\mathbb{R}^3))$
are said to be weak solutions to the Landau-Lifshitz Maxwell
system with surface energies if
\begin{enumerate}
\item
$\lVert \bm{m}\rVert=1$ almost
everywhere in $\rbrack0,T\lbrack{\times}\Omega$.
\item For all $T>0$ and $\bm{\phi}$ in $\mathbb{H}^1(\rbrack0,T\lbrack{\times}\Omega)$,
\begin{equation}\label{eq:WeakFormulationMagnetization}
\begin{split}
&\phantom{=}
\iint_{Q_T}
\frac{\partial\bm{m}}{\partial t}\cdot\bm{\phi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
-\alpha\iint_{Q_T}
\left(\bm{m}(t,\bm{x}){\wedge}\frac{\partial\bm{m}}{\partial t}(t,\bm{x})\!\right)
\cdot\bm{\phi}(t,\bm{x}){\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&=
(1+\alpha^2) A\iint_{Q_T}\sum_{i=1}^3
\left(\bm{m}(t,\bm{x}){\wedge}\frac{\partial \bm{m}}{\partial x_i}(t,\bm{x})\right)
\cdot\frac{\partial\bm{\phi}}{\partial x_i}(t,\bm{x}){\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
+(1+\alpha^2)\iint_{Q_T}
\left(\bm{m}(t,\bm{x}){\wedge}\mathbf{K}(\bm{x})\bm{m}(t,\bm{x})\right)
\cdot\bm{\phi}(t,\bm{x}){\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
-(1+\alpha^2)\iint_{Q_T}
\left(\bm{m}(t,\bm{x}){\wedge}\bm{h}(t,\bm{x})\right)
\cdot\bm{\phi}(t,\bm{x}){\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
-(1+\alpha^2)K_s\iint_{\rbrack0,T\lbrack {\times}\Gamma^\pm}
(\bm{\nu}\cdot\gamma\bm{m})(\gamma\bm{m}{\wedge}\bm{\nu})
\cdot\gamma\bm{\phi}{\mathrm{d}} S(\hat{\bm{x}}){\mathrm{d}} t
\\&\phantom{=}
-(1+\alpha^2)J_1\iint_{\rbrack0,T\lbrack{\times}\Gamma^\pm}
(\gamma\bm{m}{\wedge}\gamma^{*}\bm{m})\cdot\gamma\bm{\phi}{\mathrm{d}} S(\hat{\bm{x}}){\mathrm{d}} t
\\&\phantom{=}
-2(1+\alpha^2)J_2
\iint_{\rbrack0,T\lbrack{\times}\Gamma^\pm}
(\gamma\bm{m}\cdot\gamma^{*}\bm{m})(\gamma\bm{m}{\wedge}\gamma^{*}\bm{m})
\cdot\gamma\bm{\phi}{\mathrm{d}} S(\hat{\bm{x}}){\mathrm{d}} t.
\end{split}
\end{equation}
\item In the sense of traces,
$\bm{m}(0,\cdot)=\bm{m}_0$.
\item For all $\bm{\psi}$ in $\mathcal{C}^\infty_c(\lbrack0,+\infty\lbrack{\times}\mathbb{R}^3)$:
\begin{multline}\label{eq:WeakFormulationExcitation}
-\mu_0\iint_{\mathbb{R}^+{\times}\mathbb{R}^3}(\bm{h}+\bm{m})\cdot
\frac{\partial\bm{\psi}}{\partial t}{\mathrm{d}}\bm{x}{\mathrm{d}} t
+\iint_{\mathbb{R}^+{\times}\mathbb{R}^3}\bm{e}\cdot\mathrm{Rot}\,\bm{\psi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
=\\=
\mu_0\int_{\mathbb{R}^3}(\bm{h}_0+\bm{m}_0)\cdot\bm{\psi}_0 {\mathrm{d}}\bm{x}
\end{multline}
\item For all $\bm{\Theta}$ in $\mathcal{C}^\infty_c(\lbrack0,+\infty\lbrack{\times}\mathbb{R}^3)$:
\begin{multline}\label{eq:WeakFormulationElectric}
-\varepsilon_0\iint_{\mathbb{R}^+{\times}\mathbb{R}^3}\bm{e}\cdot
\frac{\partial\bm{\Theta}}{\partial t}{\mathrm{d}}\bm{x}{\mathrm{d}} t
-\iint_{\mathbb{R}^+{\times}\mathbb{R}^3}\bm{h}\cdot\mathrm{Rot}\,\bm{\Theta}{\mathrm{d}}\bm{x}{\mathrm{d}} t
+\sigma\iint_{\mathbb{R}^+{\times}\Omega}(\bm{e}+\bm{f})
\cdot\bm{\Theta}
{\mathrm{d}}\bm{x}{\mathrm{d}} t
=\\=
\varepsilon_0\int_{\mathbb{R}^3}\bm{e}_0\cdot\bm{\Theta}_0 {\mathrm{d}}\bm{x}.
\end{multline}
\item The following energy inequality holds
\begin{equation}\label{eq:EnergyInequality}
\begin{split}
\mathrm{E}(\bm{m}(T),\bm{h}(T),\bm{e}(T))+\frac{\alpha}{1+\alpha^2}\iint_{Q_T}
\left\vert \frac{\partial \bm{m}}{\partial t}\right\vert^2 {\mathrm{d}}\bm{x}
{\mathrm{d}} t
&\\
+\frac{\sigma}{\mu_0}\int_0^T\lVert\bm{e}\rVert^2_{\mathbb{L}^2(\Omega)}{\mathrm{d}} t+
\frac{\sigma}{\mu_0} \iint_{Q_T} \bm{e}\cdot\bm{f}{\mathrm{d}}\bm{x} {\mathrm{d}} t
&\leq \mathrm{E}(\bm{m}_0,\bm{h}_0,\bm{e}_0),
\end{split}
\end{equation}
where
\begin{equation*}
\begin{split}
\mathrm{E}(\bm{m},\bm{h},\bm{e})&=\frac{A}{2}\int_\Omega\lVert\nabla\bm{m}\rVert^2{\mathrm{d}}\bm{x}
+\frac{1}{2}\int_\Omega(\mathbf{K}(\bm{x})\bm{m}(\bm{x}))\cdot\bm{m}(\bm{x}){\mathrm{d}}\bm{x}\\
&\phantom{=}+\frac{\varepsilon_0}{2\mu_0}\int_{\mathbb{R}^3}\lVert\bm{e}(\bm{x})\rVert^2{\mathrm{d}}\bm{x}
+\frac{1}{2}\int_{\mathbb{R}^3}\lVert\bm{h}(\bm{x})\rVert^2{\mathrm{d}}\bm{x}
+\frac{K_s}{2}\int_{\Gamma^+\cup\Gamma^-}\lVert\gamma\bm{m}{\wedge}\bm{\nu}\rVert^2{\mathrm{d}} S(\hat{\bm{x}})
\\
&\phantom{=}
+\frac{J_1}{2}\int_\Gamma\lVert\gamma^+\bm{m}-\gamma^-\bm{m}\rVert^2{\mathrm{d}} S(\hat{\bm{x}})
+J_2\int_\Gamma\lVert\gamma^+\bm{m}{\wedge}\gamma^-\bm{m}\rVert^2{\mathrm{d}} S(\hat{\bm{x}}).
\end{split}
\end{equation*}
\end{enumerate}
\end{subequations}
\end{definition}
Our first result states the existence of a global-in-time weak solution to the Landau-Lifshitz-Maxwell system.
\begin{theorem}\label{theo:ExistenceWeakLandauLifshitzMaxwellSurfaceEnergies}
Let $\bm{m}_0$ be in $\mathbb{H}^1(\Omega)$ such that
$\lVert\bm{m}_0\rVert=1$ almost everywhere in $\Omega$. Let
$\bm{h}_0$ and $\bm{e}_0$ be in $\mathbb{L}^2(\mathbb{R}^3)$. Let $\bm{f}$
be in $\mathbb{L}^2(\mathbb{R}^+{\times}\Omega)$.
Suppose $\mbox{div}(\bm{h}_0+\overline{\bm{m}_0})=0$ in $\mathbb{R}^3$, where
$\overline{\bm{m}_0}$ is the extension of $\bm{m}_0$ by $0$ outside $\Omega$.
Then, there exists at least one weak solution to
the Landau-Lifshitz-Maxwell system in the sense of
Definition~\ref{defin:LLMaxwellWeak}.
\end{theorem}
Uniqueness is unlikely to hold, as the solution is already not unique when only the
exchange energy is present, see~\cite{Alouges.Soyeur:OnGlobal}.
In our second result we characterize the $\omega$-limit set of a trajectory. The definition is the following:
\begin{definition}
\label{defomega}
Let $(\bm{m},\bm{h},\bm{e})$ be a weak solution of the Landau-Lifshitz-Maxwell system given by Theorem~\ref{theo:ExistenceWeakLandauLifshitzMaxwellSurfaceEnergies}. We call the $\omega$-limit set of this trajectory the set:
\begin{equation*}\omega(\bm{m})=\left\{ \bm{v}\in \mathbb{H}^1(\Omega):\ \exists (t_n)_n, \; \lim_{n\rightarrow +\infty} t_n=+\infty, \; \bm{m}(t_n,\cdot)\rightharpoonup \bm{v} \mbox{ weakly in } \mathbb{H}^1(\Omega)\right\}.\end{equation*}
\end{definition}
We remark that $\bm{m}\in \mathrm{L}^\infty(0,+\infty;\mathbb{H}^1(\Omega))$; hence, for any sequence $t_n\to+\infty$, the sequence $(\bm{m}(t_n,\cdot))_n$ is bounded in $\mathbb{H}^1(\Omega)$ and admits a weakly convergent subsequence, so that $\omega(\bm{m})$ is nonempty.
\begin{theorem}
\label{theo-omegalim}
Let $(\bm{m},\bm{e},\bm{h})$ be a weak solution of the Landau-Lifshitz-Maxwell system given by Theorem~\ref{theo:ExistenceWeakLandauLifshitzMaxwellSurfaceEnergies}. Let $\bm u\in \omega(\bm m)$. Then $\bm u$ satisfies:
\begin{enumerate}
\item $\bm{u}\in \mathbb{H}^1(\Omega)$, $\vert \bm{u}\vert =1$ almost everywhere,
\item for all $\bm{\varphi}\in \mathbb{H}^1(\Omega)$,
\begin{equation}\label{eq:WeakFormulationMagnetizationOmega}
\begin{split}
0&=
A\int_{\Omega}\sum_{i=1}^3
\left(\bm{u}(\bm{x}){\wedge}\frac{\partial \bm{u}}{\partial x_i}(\bm{x})\right)
\cdot\frac{\partial\bm{\varphi}}{\partial x_i}(\bm{x}){\mathrm{d}}\bm{x}
\\&\phantom{=}
+\int_{\Omega}\
\left(\bm{u}(\bm{x}){\wedge}\mathbf{K}(\bm{x})\bm{u}(\bm{x})\right)
\cdot\bm{\varphi}(\bm{x}){\mathrm{d}}\bm{x}
\\&\phantom{=}
-\int_{\Omega}\
\left(\bm{u}(\bm{x}){\wedge}\bm{H}(\bm{x})\right)
\cdot\bm{\varphi}(\bm{x}){\mathrm{d}}\bm{x}
\\&\phantom{=}
-K_s\int_{(\Gamma^\pm)}
(\bm{\nu}\cdot\gamma\bm{u})(\gamma\bm{u}{\wedge}\bm{\nu})
\cdot\gamma\bm{\varphi}{\mathrm{d}} S(\hat{\bm{x}})
\\&\phantom{=}
-J_1\int_{(\Gamma^\pm)}
(\gamma\bm{u}{\wedge}\gamma^{*}\bm{u})\cdot\gamma\bm{\varphi}{\mathrm{d}} S(\hat{\bm{x}})
\\&\phantom{=}
-2J_2
\int_{\Gamma^\pm}
(\gamma\bm{u}\cdot\gamma^{*}\bm{u})(\gamma\bm{u}{\wedge}\gamma^{*}\bm{u})
\cdot\gamma\bm{\varphi}{\mathrm{d}} S(\hat{\bm{x}}).
\end{split}
\end{equation}
\item $\bm H$ is deduced from $\bm u$ by the relations:
\begin{equation*}\mbox{div } (\bm H +\overline{ \bm u})=0 \mbox{ and }\mbox{curl } \bm H =0 \mbox{ in }{\mathcal D}'(\mathbb{R}^3).\end{equation*}
\end{enumerate}
\end{theorem}
\section{Technical prerequisite results on Sobolev Spaces}\label{sect:prerequisites}
In this section, we remind the reader about some useful previously
known results on Sobolev Spaces that we use in this paper. In the
whole section, $\mathcal{O}$ is any bounded open set of $\mathbb{R}^3$,
regular enough for the usual embedding results to hold. For example,
it is enough that $\mathcal{O}$
satisfies the cone property, see~\cite[\S4.3]{Adams:SobolevSpaces}.
We start with Aubin's lemma \cite{Aubin:TheoremeCompacite}, as extended in~\cite[Corollary 4]{Simon:CompactSets}.
\begin{lemma}[Aubin's lemma]\label{lemma:AubinSimon}
Let $X\subset\subset B\subset Y$ be Banach spaces. Let $F$ be bounded
in $L^p(0,T;X)$, and suppose that $\{\partial_tu,\ u\in F\}$ is bounded in
$L^r(0,T;Y)$.
\begin{itemize}
\item If $r\geq 1$ and $1\leq p<+\infty$, then $F$ is relatively compact in $L^p(0,T;B)$.
\item If $r>1$ and $p=+\infty$, then $F$ is relatively compact in $\mathcal{C}(\lbrack0,T\rbrack;B)$.
\end{itemize}
\end{lemma}
\begin{lemma}\label{lemma:compactCL2imbedding}
For all $T>0$, the imbedding from $H^1(\rbrack0,T\lbrack{\times}\mathcal{O})$ to
$\mathcal{C}(\lbrack0,T\rbrack,L^2(\mathcal{O}))$ is compact.
\end{lemma}
\begin{proof}
We use Aubin's lemma, see~\cite[Corollary 4]{Simon:CompactSets},
in the case $p=+\infty$, with
$X=H^1(\mathcal{O})$ and $B=Y=L^2(\mathcal{O})$.
\end{proof}
\begin{lemma}
Let $u$ belong to $H^1(\rbrack0,T\lbrack{\times}\mathcal{O})\cap
L^\infty(\rbrack0,T\lbrack;{H}^1(\mathcal{O}))$,
then $u$ belongs to
$\mathcal{C}(\lbrack0,T\rbrack;{H}^1_\omega(\mathcal{O}))$,
where $H_\omega^1(\mathcal{O})$ is the space $H^1(\mathcal{O})$ endowed with its weak topology.
\end{lemma}
\begin{proof}
The function $u$ belongs to
$\mathcal{C}(\lbrack0,T\rbrack,{L}^2(\mathcal{O}))$ by Lemma~\ref{lemma:compactCL2imbedding}. Let now $(t_n)_n$ be a
sequence in $\lbrack0,T\rbrack$ converging to $t$. Then,
$u(t_n,\cdot)$ converges to
$u(t,\cdot)$ in $L^2(\mathcal{O})$. Also, the sequence
$(u(t_n,\cdot))_{n\in\mathbb{N}}$ is bounded in
${H}^1(\mathcal{O})$, therefore from any subsequence of
$(u(t_n,\cdot))_{n\in\mathbb{N}}$, one can extract a subsequence
that converges weakly in $H^1(\mathcal{O})$. The only possible limit is
$u(t,\cdot)$ therefore the whole sequence converges weakly in $H^1(\mathcal{O})$.
\end{proof}
\begin{lemma}\label{lemma:SameSubsequenceGivenTime}
Let $(u_n)_{n\in\mathbb{N}}$ be bounded in
$H^1(\rbrack0,T\lbrack{\times}\mathcal{O})$ and in
$L^\infty(\rbrack0,T\lbrack;H^1(\mathcal{O}))$. Let
$(u_{n_k})_{k\in\mathbb{N}}$
be a subsequence which converges weakly to
some $u$ in $H^1(\rbrack0,T\lbrack{\times}\mathcal{O})$. Then, for all $t$ in
$\lbrack0,T\rbrack$, the same subsequence $u_{n_k}(t,\cdot)$ converges weakly to
$u(t,\cdot)$ in $H^1(\mathcal{O})$.
\end{lemma}
\begin{proof}
For all $t$ in $\lbrack0,T\rbrack$, $u_{n_k}(t,\cdot)$ converges strongly
to $u(t,\cdot)$ in $L^2(\mathcal{O})$. Therefore, any subsequence
$u_{n_{k_j}}(t,\cdot)$ that converges weakly in $H^1(\mathcal{O})$ has
$u(t,\cdot)$ for limit. Since $u_{n_k}(t,\cdot)$ is bounded
in $H^1(\mathcal{O})$, from any subsequence of $u_{n_k}(t,\cdot)$, one
can extract a further subsequence that converges weakly in
$H^1(\mathcal{O})$, therefore, for all $t$ in $\lbrack0,T\rbrack$,
the whole subsequence $u_{n_k}(t,\cdot)$
converges weakly to $u(t,\cdot)$ in $H^1(\mathcal{O})$.
\end{proof}
\section{Proof of
Theorem~\ref{theo:ExistenceWeakLandauLifshitzMaxwellSurfaceEnergies}}
\label{sect:proofExistenceWeakLLMSurfaceEnergies}
\subsection{Idea of the proof}
We proceed as in \cite{Carbou.Fabrie:Time}
and~\cite{Santugini:SolutionsLL} and combine the ideas of both
papers.
We start by extending the surface energies to
a thin layer of thickness $2\eta>0$.
As in~\cite{Santugini:SolutionsLL}, we consider the operator
\begin{equation}
\begin{gathered}
\mathcal{H}_s^\eta:\mathbb{H}^1(\Omega) \cap \mathbb{L}^\infty(\Omega)
\to\mathbb{H}^1(\Omega) \cap \mathbb{L}^\infty(\Omega)\\
\bm{m}\mapsto\frac{1}{2\eta}\begin{cases}
0&\text{in $\mathbb{R}^3\setminus\bigl(\,{{B}}{\times}(\rbrack-\eta,\eta\lbrack\setminus\{0\})\,\bigr)$}, \\
\begin{gathered}
2K_s((\bm{m}\cdot\bm{\nu})\bm{\nu}-\bm{m})+2J_1(\bm{m}^{*}-\bm{m})\\
+4J_2\big((\bm{m}\cdot\bm{m}^{*})\bm{m}^{*}
-\lVert\bm{m}^{*}\rVert^2\bm{m}\big)
\end{gathered}
&\text{in ${{B}}{\times}(\rbrack-\eta,\eta\lbrack\setminus\{0\})$},
\end{cases}
\end{gathered}
\end{equation}
where $\bm{m}^*$ is the reflection of $\bm{m}$, \textit{i.e.}
$\bm{m}^*(x,y,z,t)=\bm{m}(x,y,-z,t)$, see Figure~\ref{fig:boundarylayer}.
\begin{figure}
\caption{Artificial boundary layer}
\label{fig:boundarylayer}
\end{figure}
The associated energy is:
\begin{equation}
\begin{split}
\mathrm{E}_s^\eta(\bm{m})&=\frac{K_s}{2\eta}
\int_{{{B}}{\times}\rbrack-\eta,\eta\lbrack}
\left(\lVert\bm{m}\rVert^2-(\bm{m}\cdot\bm{\nu})^2\right){\mathrm{d}}\bm{x}\\
&\phantom{=}+\frac{J_1}{2\eta}
\int_{{{B}}{\times}\rbrack-\eta,\eta\lbrack}
\left(\frac{\lVert \bm{m}\rVert^2+\lVert \bm{m}^{*}\rVert^2}{2}-(\bm{m}\cdot\bm{m}^{*})\right) {\mathrm{d}}\bm{x}\\
&\phantom{=}+\frac{J_2}{2\eta}
\int_{{{B}}{\times}\rbrack-\eta,\eta\lbrack}
\left(\lVert\bm{m}^{*}\rVert^2\lVert\bm{m}\rVert^2-(\bm{m}\cdot\bm{m}^{*})^2\right)
{\mathrm{d}}\bm{x}.
\end{split}
\end{equation}
This energy will replace the surface energies~\eqref{eq:SuperExchangeEnergy} and~\eqref{eq:SurfaceAnisotropyEnergy}. The idea
is to consider the Landau-Lifshitz-Maxwell system with homogeneous
Neumann boundary conditions, with the effective field containing this new
component, and then to let $\eta$ tend to $0$.
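Heuristically, for $\bm{m}$ regular enough on each side of the spacer, these layer averages converge to the corresponding surface integrals as $\eta\to0$; for instance, for the $K_s$-term,
\begin{equation*}
\frac{K_s}{2\eta}\int_{{{B}}{\times}\rbrack-\eta,\eta\lbrack}
\bigl(\lVert\bm{m}\rVert^2-(\bm{m}\cdot\bm{\nu})^2\bigr){\mathrm{d}}\bm{x}
\;\longrightarrow\;
\frac{K_s}{2}\int_{\Gamma^\pm}\lVert\gamma\bm{m}{\wedge}\bm{\nu}\rVert^2{\mathrm{d}} S(\hat{\bm{x}})
\quad\text{as }\eta\to0,
\end{equation*}
and similarly for the $J_1$- and $J_2$-terms, so that $\mathrm{E}_s^\eta(\bm{m})$ formally approximates $\mathrm{E}_{sa}(\bm{m})+\mathrm{E}_{se}(\bm{m})$.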
We consider the doubly penalized problem:
\begin{subequations}\label{subeq:PenalizedLL}
\begin{align}
\begin{split}
\alpha\frac{\partial\bm{m}_{k,\eta}}{\partial t}+\bm{m}_{k,\eta}{\wedge}\frac{\partial\bm{m}_{k,\eta}}{\partial t}&=
(1+\alpha^2)(A\Lapl\bm{m}_{k,\eta}-\mathbf{K}\bm{m}_{k,\eta}+\bm{h}_{k,\eta}+\mathcal{H}_s^\eta(\bm{m}_{k,\eta}))
\\&\phantom{=}
-k(1+\alpha^2)((\lVert\bm{m}_{k,\eta}\rVert^2-1)\bm{m}_{k,\eta}),
\end{split}\\
\frac{\partial\bm{m}_{k,\eta}}{\partial\bm{\nu}}&=0 \quad\text{on $\partial\Omega$},\\
\bm{m}_{k,\eta} (0,\cdot)&=\bm{m}_0,
\end{align}
\end{subequations}
with Maxwell equations:
\begin{subequations}
\begin{align}
\varepsilon_0\frac{\partial\bm{e}_{k,\eta}}{\partial t}+\sigma(\bm{e}_{k,\eta}+\bm{f})\mathds{1}_\Omega-\mathbb{R}ot\bm{h}_{k,\eta}&=0,\\
\mu_0\frac{\partial(\bm{m}_{k,\eta}+\bm{h}_{k,\eta})}{\partial t}+\mathbb{R}ot\bm{e}_{k,\eta}&=0,\\
\bm{e}_{k,\eta} (0,\cdot)&=\bm{e}_0,\\
\bm{h}_{k,\eta} (0,\cdot)&=\bm{h}_0.
\end{align}
\end{subequations}
The basic idea is to prove the existence of weak solutions to the penalized problem via
a Galerkin method, then to let $k$ tend to $+\infty$ to recover the local norm
constraint on the magnetization, and finally to let $\eta$ tend to $0$ to transform the homogeneous Neumann
boundary condition into the nonlinear boundary condition above.
\subsection{First step of Galerkin's method}
As
in~\cite{Alouges.Soyeur:OnGlobal}, we consider the eigenvectors
$(v_j)_{j\geq1}$ of the Laplace operator
with homogeneous Neumann conditions. This basis is, up to a
renormalisation, a Hilbert basis for the spaces
$\mathbb{L}^2(\Omega)$, $\mathbb{H}^1(\Omega)$, and
$\{\bm{u}\in
\mathbb{H}^2(\Omega),\frac{\partial\bm{u}}{\partial\bm{\nu}}=0\}$. The
eigenvectors $v_j$ all belong to $\mathcal{C}^\infty(\overline{\Omega};\mathbb{R}^3)$.
We call $V_n$ the space spanned by $(v_j)_{1\leq j\leq n}$.
As in \cite{Carbou.Fabrie:Time}, we
consider a Hilbert basis $(\bm{\omega}_j)_{j\geq1}$ of
$\mathrm{L}^2(\mathbb{R}^3;\mathbb{R}^3)$ such that every $\bm{\omega}_j$ belongs
to $\mathcal{C}^\infty_c(\mathbb{R}^3;\mathbb{R}^3)$.
We call $W_n$ the space spanned by $(\bm{\omega}_j)_{1\leq j\leq n}$.
Fix $n\geq1$, $\eta>0$ and $k>0$.
We search for
$\bm{m}_{n,k,\eta}$ in $H^1(\mathbb{R}^+;(V_n)^3)$,
$\bm{h}_{n,k,\eta}$ in $H^1(\mathbb{R}^+;W_n)$, and
$\bm{e}_{n,k,\eta}$ in $H^1(\mathbb{R}^+;W_n)$
such that
\begin{subequations}\label{subeq:EqDiffmhenketa}
\begin{equation}
\begin{split}
\alpha\frac{{\mathrm{d}}\bm{m}_{n,k,\eta}}{{\mathrm{d}} t}&=-\mathcal{P}_{V_n}(\bm{m}_{n,k,\eta}{\wedge} \frac{{\mathrm{d}}\bm{m}_{n,k,\eta}}{{\mathrm{d}} t})
\\&\phantom{=} +(1+\alpha^2)\mathcal{P}_{V_n}(A\Lapl\bm{m}_{n,k,\eta}-\mathbf{K}\bm{m}_{n,k,\eta})
\\&\phantom{=}
+(1+\alpha^2)\mathcal{P}_{V_n}(\bm{h}_{n,k,\eta}
+\mathcal{H}_s^\eta(\bm{m}_{n,k,\eta}))
\\&\phantom{=}
-(1+\alpha^2)k\mathcal{P}_{V_n}((\lVert\bm{m}_{n,k,\eta}\rVert^2-1)\bm{m}_{n,k,\eta}),
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
\mu_0\frac{{\mathrm{d}}\bm{h}_{n,k,\eta}}{{\mathrm{d}}
t}&=-\mu_0\mathcal{P}_{W_n}\left(\frac{{\mathrm{d}}\bm{m}_{n,k,\eta}}{{\mathrm{d}}
t}\right)
-\mathcal{P}_{W_n}(\mathrm{Rot}\,\bm{e}_{n,k,\eta}).
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
\varepsilon_0\frac{{\mathrm{d}}\bm{e}_{n,k,\eta}}{{\mathrm{d}} t}&=
\mathcal{P}_{W_n}(\mathrm{Rot}\,\bm{h}_{n,k,\eta})
-\sigma\mathcal{P}_{W_n}(\mathds{1}_\Omega(\bm{e}_{n,k,\eta}+\bm{f})),
\end{split}
\end{equation}
\end{subequations}
with the initial conditions:
\begin{subequations}\label{subeq:EqDiffCondInitmhenketa}
\begin{align}
\bm{m}_{n,k,\eta}(0,\cdot)=\mathcal{P}_{V_n}(\bm{m}_0),\\
\bm{h}_{n,k,\eta}(0,\cdot)=\mathcal{P}_{W_n}(\bm{h}_0),\\
\bm{e}_{n,k,\eta}(0,\cdot)=\mathcal{P}_{W_n}(\bm{e}_0),
\end{align}
\end{subequations}
where $\mathcal{P}_{V_n}$ is the orthogonal projection on $V_n$ in
$\mathrm{L}^2(\Omega)$ and $\mathcal{P}_{W_n}$ is the orthogonal projection on $W_n$ in
$\mathbb{L}^2(\mathbb{R}^3)$. Let $\mathbf{a}(t)=(\bm{a}_i(t))_{1\leq
i\leq n}$,
$\mathbf{b}(t)=(b_i(t))_{1\leq i\leq n}$ and $\mathbf{c}(t)=(c_i(t))_{1\leq
i\leq n}$ be the coefficients of $\bm{m}_{n,k,\eta}(t,\cdot)$,
$\bm{h}_{n,k,\eta}(t,\cdot)$ and $\bm{e}_{n,k,\eta}(t,\cdot)$ in the decomposition
\begin{align*}
\bm{m}_{n,k,\eta}(t,\cdot)&=\sum_{i=1}^n\bm{a_i}(t)v_i,\\
\bm{h}_{n,k,\eta}(t,\cdot)&=\sum_{i=1}^nb_i(t)\bm{\omega}_i,\\
\bm{e}_{n,k,\eta}(t,\cdot)&=\sum_{i=1}^nc_i(t)\bm{\omega}_i.
\end{align*}
Then, System~\eqref{subeq:EqDiffmhenketa} is equivalent to
\begin{subequations}\label{subeq:EqDiffmhenketaCoeffABC}
\begin{align}
\frac{{\mathrm{d}}\mathbf{a}}{{\mathrm{d}}
t}+\mathbf{\phi}(\mathbf{a},\frac{{\mathrm{d}}\mathbf{a}}{{\mathrm{d}} t})&=F_{\bm{m}}(\mathbf{a},\mathbf{b}),\\
\frac{{\mathrm{d}}(\mathbf{b}+L\mathbf{a})}{{\mathrm{d}} t}&=F_{\bm{h}}(\mathbf{c}),\\
\frac{{\mathrm{d}}\mathbf{c}}{{\mathrm{d}}
t}&=F_{\bm{e}}(\bm{h}_{n,k,\eta},\bm{e}_{n,k,\eta})+\mathbf{f}^*,
\end{align}
\end{subequations}
where $L$ is linear, $F_{\bm{m}}$, $F_{\bm{h}}$ and $F_{\bm{e}}$ are polynomial thus
of class $\mathcal{C}^\infty$, and $\bm{f}^*$ is in
$\mathrm{L}^2(\mathbb{R}^+;\mathbb{R}^n)$.
These are supplemented by initial conditions
\begin{align}\label{eq:EqDiffCondInitmnketaCoeffABC}
\mathbf{a}(0,\cdot)&=\mathbf{a}_0,&
\mathbf{b}(0,\cdot)&=\mathbf{b}_0,&
\mathbf{c}(0,\cdot)&=\mathbf{c}_0,
\end{align}
where $\mathbf{a}_0$, $\mathbf{b}_0$, and $\mathbf{c}_0$ are obtained
by orthogonal projection of $\bm{m}_0$, $\bm{h}_0$, $\bm{e}_0$ over the $v_i$
or the $\bm{\omega}_i$. The function $\bm{f}^{*}$ belongs to $\mathrm{L}^2$.
Moreover, as $\mathbf{\phi}(\cdot,\cdot)$ is bilinear
continuous and $\mathbf{\phi}(\mathbf{a},\cdot)$ is
antisymmetric, the linear map $\mathrm{Id}+\mathbf{\phi}(\mathbf{a},\cdot)$ is invertible.
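Indeed, for fixed $\mathbf{a}$ and any vector $\mathbf{x}$, the antisymmetry of $\mathbf{\phi}(\mathbf{a},\cdot)$ gives
\begin{equation*}
\bigl(\mathbf{x}+\mathbf{\phi}(\mathbf{a},\mathbf{x})\bigr)\cdot\mathbf{x}=\lVert\mathbf{x}\rVert^2,
\end{equation*}
so $\mathrm{Id}+\mathbf{\phi}(\mathbf{a},\cdot)$ is injective, hence invertible in finite dimension.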
Therefore, by the Carathéodory theorem,
System~\eqref{subeq:EqDiffmhenketaCoeffABC} has local
solutions. Therefore, there exists $T^*>0$ and $\bm{m}_{n,k,\eta}$ in
$\mathrm{H}^1(\rbrack0,T^*\lbrack;(V_n)^3)$, $\bm{h}_{n,k,\eta}$ in
$\mathrm{H}^1(\rbrack0,T^*\lbrack;W_n)$
and $\bm{e}_{n,k,\eta}$ in $\mathrm{H}^1(\rbrack0,T^*\lbrack;W_n)$ that
satisfy~\eqref{subeq:EqDiffmhenketa}
and~\eqref{subeq:EqDiffCondInitmhenketa}.
Multiplying~\eqref{subeq:EqDiffmhenketa} by test functions and
integrating by parts yields:
\begin{subequations}\label{subeq:WeakFormulationGalerkin}
\begin{equation}\label{eq:WeakFormulationGalerkinMagnetization}
\begin{split}
&\phantom{=}
\alpha\iint_{Q_T}
\frac{\partial\bm{m}_{n,k,\eta}}{\partial t}\cdot\bm{\phi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
+\iint_{Q_T}
\left(\bm{m}_{n,k,\eta}{\wedge}\frac{\partial\bm{m}_{n,k,\eta}}{\partial t}\right)
\cdot\bm{\phi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&=
-(1+\alpha^2) A\iint_{Q_T}\sum_{i=1}^3
\frac{\partial \bm{m}_{n,k,\eta}}{\partial x_i}\cdot\frac{\partial\bm{\phi}}{\partial x_i}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
-(1+\alpha^2)\iint_{Q_T}(
\mathbf{K}(\bm{x}) \bm{m}_{n,k,\eta}(\bm{x}))
\cdot\bm{\phi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
+(1+\alpha^2)\iint_{Q_T}
\bm{h}_{n,k,\eta}
\cdot\bm{\phi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
-(1+\alpha^2)k\iint_{Q_T}(\lVert\bm{m}_{n,k,\eta}\rVert^2-1)\bm{m}_{n,k,\eta}\cdot\bm{\phi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
+(1+\alpha^2)\frac{K_s}{\eta}\iint_{\rbrack0,T\lbrack{\times}(B{\times}\rbrack-\eta,\eta\lbrack)}
((\bm{\nu}\cdot\bm{m}_{n,k,\eta})\bm{\nu}-\bm{m}_{n,k,\eta})
\cdot\bm{\phi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
+(1+\alpha^2) \frac{ J_1}{\eta}\iint_{\rbrack0,T\lbrack{\times}(B{\times}\rbrack-\eta,\eta\lbrack)}
(\bm{m}^{*}_{n,k,\eta}-\bm{m}_{n,k,\eta})\cdot\bm{\phi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
+2(1+\alpha^2)\frac{J_2}{\eta}
\iint_{\rbrack0,T\lbrack{\times}(B{\times}\rbrack-\eta,\eta\lbrack)}
\left((\bm{m}_{n,k,\eta}\cdot\bm{m}_{n,k,\eta}^{*})\bm{m}^{*}_{n,k,\eta}
-\lVert\bm{m}_{n,k,\eta}^*\rVert^2\bm{m}_{n,k,\eta}\right)
\cdot\bm{\phi}{\mathrm{d}}\bm{x}{\mathrm{d}} t,
\end{split}
\end{equation}
for all $\bm{\phi}$ in $\mathcal{C}^\infty(\lbrack0,T^*\rbrack,V_n^3)$. And
\begin{equation}\label{eq:WeakFormulationGalerkinExcitation}
\begin{split}
\mathrm{\mu}_0\iint_{\rbrack0,T\lbrack{\times}\mathbb{R}^3}\left(\frac{\partial\bm{h}_{n,k,\eta}}{\partial t}+
\frac{\partial\bm{m}_{n,k,\eta}}{\partial t}\right)\cdot \bm{\psi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
+\iint_{\rbrack0,T\lbrack{\times}\mathbb{R}^3}\mathrm{Rot}\,\bm{e}_{n,k,\eta}\cdot\bm{\psi}{\mathrm{d}}\bm{x}{\mathrm{d}} t&=0,
\end{split}
\end{equation}
for all $\bm{\psi}$
in $\mathcal{C}^\infty(\lbrack0,T^*\rbrack,W_n)$. And
\begin{equation}\label{eq:WeakFormulationGalerkinElectric}
\begin{split}
\mathrm{\varepsilon}_0\iint_{\rbrack0,T\lbrack{\times}\mathbb{R}^3}\frac{\partial\bm{e}_{n,k,\eta}}{\partial
t}\cdot\bm{\Theta}{\mathrm{d}}\bm{x}{\mathrm{d}} t-\iint_{\rbrack0,T\lbrack{\times}\mathbb{R}^3}\mathrm{Rot}\,\bm{h}_{n,k,\eta}\cdot\bm{\Theta}{\mathrm{d}}\bm{x}{\mathrm{d}}
t
&\\
+\sigma\iint_{Q_T}(\bm{e}_{n,k,\eta}+\bm{f})\cdot\bm{\Theta}{\mathrm{d}}\bm{x}{\mathrm{d}} t&=0,
\end{split}
\end{equation}
for all $\bm{\Theta}$ in
$\mathcal{C}_c^\infty(\lbrack0,T^*\rbrack,W_n)$.
By density, \eqref{subeq:WeakFormulationGalerkin} also
holds if $\bm{\phi}$ belongs to $\mathrm{L}^2(\rbrack0,T^*\lbrack;V_n^3)$, $\bm{\psi}$
belongs to $\mathrm{L}^2(\rbrack0,T^*\lbrack,W_n)$, and $\bm{\Theta}$
belongs to $\mathrm{L}^2(\rbrack0,T^*\lbrack,W_n)$.
\end{subequations}
As in~\cite{Carbou.Fabrie:Time}, setting
$\bm{\phi}=\frac{\partial\bm{m}_{n,k,\eta}}{\partial t}$ in~\eqref{eq:WeakFormulationGalerkinMagnetization},
we obtain
\begin{equation*}
\begin{split}
&\phantom{=}\frac{A}{2}\int_{\Omega}\lVert\nabla\bm{m}_{n,k,\eta}(T,\bm{x})\rVert^2{\mathrm{d}}\bm{x}
+\frac{1}{2}\int_{\Omega}(\mathbf{K}(\bm{x})\bm{m}_{n,k,\eta}(T,\bm{x}))\cdot\bm{m}_{n,k,\eta}(T,\bm{x}){\mathrm{d}}\bm{x}
\\&\phantom{=}
+\frac{k}{4}\int_{\Omega}(\lVert\bm{m}_{n,k,\eta}(T,\bm{x})\rVert^2-1)^2{\mathrm{d}}\bm{x}
-\iint_{Q_T}\bm{h}_{n,k,\eta}\cdot\frac{\partial\bm{m}_{n,k,\eta}}{\partial
t}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
+\mathrm{E}_s^\eta(\bm{m}_{n,k,\eta}(T,\cdot))
+\frac{\alpha}{1+\alpha^2}\iint_{Q_T}\left\lVert\frac{\partial\bm{m}_{n,k,\eta}}{\partial
t}\right\rVert^2 {\mathrm{d}}\bm{x}{\mathrm{d}} t\\
&\leq
\frac{A}{2}\int_{\Omega}\lVert\nabla\mathcal{P}_{V_n}(\bm{m}_0)\rVert^2{\mathrm{d}}\bm{x}
+\frac{1}{2}\int_{\Omega}(\mathbf{K}(\bm{x}) \mathcal{P}_{V_n}(\bm{m}_0))\cdot\mathcal{P}_{V_n}(\bm{m}_0){\mathrm{d}}\bm{x}
\\&\phantom{=}
+\frac{k}{4}\int_{\Omega}(\lVert \mathcal{P}_{V_n}(\bm{m}_0)\rVert^2-1)^2{\mathrm{d}}\bm{x}
+\mathrm{E}_s^\eta(\mathcal{P}_{V_n}(\bm{m}_0)).
\end{split}
\end{equation*}
Setting $\bm{\psi}=\bm{h}_{n,k,\eta}$ in
\eqref{eq:WeakFormulationGalerkinExcitation}, we obtain
\begin{equation*}
\begin{split}
&\phantom{=}
\frac{\mathrm{\mu}_0}{2}\int_{\mathbb{R}^3}\lVert\bm{h}_{n,k,\eta}(T,\bm{x})\rVert^2 {\mathrm{d}}\bm{x}
+\mu_0\iint_{Q_T}\frac{\partial\bm{m}_{n,k,\eta}}{\partial t}\cdot
\bm{h}_{n,k,\eta}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
+\iint_{\rbrack0,T\lbrack{\times}\mathbb{R}^3}\bm{h}_{n,k,\eta}\cdot\mathrm{Rot}\,\bm{e}_{n,k,\eta}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\leq \frac{\mathrm{\mu}_0}{2}\int_{\mathbb{R}^3}\lVert\mathcal{P}_{W_n}(\bm{h}_0)\rVert^2 {\mathrm{d}}\bm{x}
.
\end{split}
\end{equation*}
Setting $\bm{\Theta}=\bm{e}_{n,k,\eta}$ in \eqref{eq:WeakFormulationGalerkinElectric}, we obtain
\begin{equation*}
\begin{split}
&\phantom{=}\frac{\varepsilon_0}{2}\int_{\mathbb{R}^3}\lVert\bm{e}_{n,k,\eta}(T,\cdot)\rVert^2{\mathrm{d}}\bm{x}
-\iint_{\rbrack0,T\lbrack{\times}\mathbb{R}^3}\bm{e}_{n,k,\eta}\cdot\mathrm{Rot}\,\bm{h}_{n,k,\eta}{\mathrm{d}}\bm{x}{\mathrm{d}}
t
\\&\phantom{\leq}
+\sigma\iint_{\rbrack0,T\lbrack{\times}\mathbb{R}^3}\lVert\bm{e}_{n,k,\eta}\rVert^2{\mathrm{d}}\bm{x}{\mathrm{d}} t
+\sigma\iint_{\rbrack0,T\lbrack{\times}\mathbb{R}^3}\bm{f}\cdot\bm{e}_{n,k,\eta}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\leq
\frac{\varepsilon_0}{2}\int_{\mathbb{R}^3}\lVert\mathcal{P}_{W_n}(\bm{e}_0)\rVert^2{\mathrm{d}}\bm{x}.
\end{split}
\end{equation*}
Combining these three inequalities, we get an energy inequality
\begin{equation}\label{eq:EnergyInequalityGalerkin}
\begin{split}
&\phantom{=}\frac{A}{2}\int_{\Omega}\lVert\nabla\bm{m}_{n,k,\eta}(T,\cdot)\rVert^2{\mathrm{d}}\bm{x}
+\frac{1}{2}\int_{\Omega}(\mathbf{K}(\bm{x})\bm{m}_{n,k,\eta}(T,\bm{x}))\cdot\bm{m}_{n,k,\eta}(T,\bm{x}){\mathrm{d}}\bm{x}
\\&\phantom{=}
+\frac{k}{4}\int_{\Omega}(\lVert\bm{m}_{n,k,\eta}(T,\bm{x})\rVert^2-1)^2{\mathrm{d}}\bm{x}
\\&\phantom{=}
+\frac{\varepsilon_0}{2\mu_0}\int_{\mathbb{R}^3}\lVert\bm{e}_{n,k,\eta}(T,\bm{x})\rVert^2{\mathrm{d}}\bm{x}
+\frac{1}{2}\int_{\mathbb{R}^3}\lVert\bm{h}_{n,k,\eta}(T,\bm{x})\rVert^2{\mathrm{d}}\bm{x}
\\&\phantom{=}
+\mathrm{E}_s^\eta(\bm{m}_{n,k,\eta}(T,\cdot))
+\frac{\alpha}{1+\alpha^2}\iint_{Q_T}\left\lVert\frac{\partial\bm{m}_{n,k,\eta}}{\partial
t}\right\rVert^2 {\mathrm{d}}\bm{x}{\mathrm{d}} t\\
&\phantom{\leq}+\frac{\sigma}{\mu_0}\iint_{\rbrack0,T\lbrack{\times}\mathbb{R}^3}\lVert\bm{e}_{n,k,\eta}\rVert^2{\mathrm{d}}\bm{x}{\mathrm{d}} t
+\frac{\sigma}{\mu_0}\iint_{\rbrack0,T\lbrack{\times}\mathbb{R}^3}\bm{f}\cdot\bm{e}_{n,k,\eta}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\leq
\frac{A}{2}\int_{\Omega}\lVert\nabla\mathcal{P}_{V_n}(\bm{m}_0)\rVert^2{\mathrm{d}}\bm{x}
+\frac{1}{2}\int_{\Omega}(\mathbf{K}(\bm{x}) \mathcal{P}_{V_n}(\bm{m}_0))\cdot\mathcal{P}_{V_n}(\bm{m}_0){\mathrm{d}}\bm{x}
\\&\phantom{=}
+\frac{k}{4}\int_{\Omega}(\lVert \mathcal{P}_{V_n}(\bm{m}_0)\rVert^2-1)^2{\mathrm{d}}\bm{x}
+\mathrm{E}_s^\eta(\mathcal{P}_{V_n}(\bm{m}_0))
\\&\phantom{=}
+\frac{\varepsilon_0}{2\mu_0}\int_{\mathbb{R}^3}\lVert\mathcal{P}_{W_n}(\bm{e}_0)\rVert^2{\mathrm{d}}\bm{x}
+\frac{1}{2}\int_{\mathbb{R}^3}\lVert\mathcal{P}_{W_n}(\bm{h}_0)\rVert^2{\mathrm{d}}\bm{x}
\end{split}
\end{equation}
The projection $\mathcal{P}_{V_n}(\bm{m}_0)$ converges to $\bm{m}_0$
in $\mathbb{H}^1(\Omega)$ and in $\mathbb{L}^6(\Omega)$
by the Sobolev imbedding.
The terms on the right-hand side therefore remain bounded independently of
$n$. The last term on the left-hand side may be dealt with by Young's
inequality. Thus, $\bm{m}_{n,k,\eta}$, $\bm{h}_{n,k,\eta}$ and $\bm{e}_{n,k,\eta}$
cannot explode in finite time and exist globally.
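For instance, the Young inequality step mentioned above reads
\begin{equation*}
\frac{\sigma}{\mu_0}\left|\iint_{\rbrack0,T\lbrack{\times}\mathbb{R}^3}\bm{f}\cdot\bm{e}_{n,k,\eta}\,{\mathrm{d}}\bm{x}\,{\mathrm{d}} t\right|
\leq \frac{\sigma}{2\mu_0}\iint_{\rbrack0,T\lbrack{\times}\mathbb{R}^3}\lVert\bm{e}_{n,k,\eta}\rVert^2{\mathrm{d}}\bm{x}\,{\mathrm{d}} t
+\frac{\sigma}{2\mu_0}\iint_{\mathbb{R}^+{\times}\Omega}\lVert\bm{f}\rVert^2{\mathrm{d}}\bm{x}\,{\mathrm{d}} t,
\end{equation*}
so the first term on the right is absorbed by the corresponding term on the left-hand side of~\eqref{eq:EnergyInequalityGalerkin}, while the second is bounded since $\bm{f}\in\mathbb{L}^2(\mathbb{R}^+{\times}\Omega)$.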
\subsection{Final step of Galerkin's method}\label{subsect:nLimit}
We now let $n$ tend to $+\infty$.
By~\eqref{eq:EnergyInequalityGalerkin}, and using Young's inequality to
deal with the term containing $\bm{f}$:
\begin{itemize}
\item $\bm{m}_{n,k,\eta}$ is bounded in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^4(\Omega))$ independently
of $n$.
\item $\nabla \bm{m}_{n,k,\eta}$ is bounded in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\Omega))$ independently
of $n$.
\item $\frac{\partial\bm{m}_{n,k,\eta}}{\partial t}$ is bounded in $\mathrm{L}^2(\mathbb{R}^+;\mathbb{L}^2(\Omega))$ independently
of $n$.
\item $\bm{h}_{n,k,\eta}$ is bounded in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\Omega))$ independently
of $n$.
\item $\bm{e}_{n,k,\eta}$ is bounded in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\Omega))$ independently
of $n$.
\end{itemize}
Thus, there exist $\bm{m}_{k,\eta}$ in
$\mathrm{H}_{loc}^1(\lbrack0,+\infty\lbrack;\mathbb{L}^2(\Omega))\cap
\mathrm{L}^\infty(0,+\infty;\mathbb{H}^1(\Omega))$,
$\bm{h}_{k,\eta}$ in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\Omega))$,
$\bm{e}_{k,\eta}$ in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\Omega))$,
such that up to a subsequence:
\begin{itemize}
\item $\bm{m}_{n,k,\eta}$ converges weakly to $\bm{m}_{k,\eta}$ in
$\mathbb{H}^1(\rbrack0,T\lbrack{\times}\Omega)$.
\item $\bm{m}_{n,k,\eta}$ converges strongly to $\bm{m}_{k,\eta}$ in
$\mathbb{L}^2(\rbrack0,T\lbrack{\times}\Omega)$.
\item $\bm{m}_{n,k,\eta}$ converges strongly to $\bm{m}_{k,\eta}$ in
$\mathcal{C}(\lbrack0,T\rbrack;\mathbb{L}^2(\Omega))$ and thus in
$\mathcal{C}(\lbrack0,T\rbrack;\mathbb{L}^p(\Omega))$ for all $1\leq p<6$.
\item $\nabla\bm{m}_{n,k,\eta}$ converges weakly to $\nabla\bm{m}_{k,\eta}$ in
$\mathbb{L}^2(\rbrack0,T\lbrack{\times}\Omega)$.
\item For all time $T$, $\nabla\bm{m}_{n,k,\eta}(T,\cdot)$ converges
weakly to
$\nabla \bm{m}_{k,\eta}(T,\cdot)$ in
$\mathbb{L}^2(\Omega)$. The same subsequence can be used for all time
$T\geq 0$, see Lemma~\ref{lemma:SameSubsequenceGivenTime}.
\item $\frac{\partial\bm{m}_{n,k,\eta}}{\partial t}$ converges weakly to
$\frac{\partial\bm{m}_{k,\eta}}{\partial t}$ in
$\mathrm{L}^2(\mathbb{R}^+;\mathbb{L}^2(\Omega))$.
\item $\bm{h}_{n,k,\eta}$ converges weakly-$*$ to $\bm{h}_{k,\eta}$ in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\Omega))$.
\item $\bm{e}_{n,k,\eta}$ converges weakly-$*$ to $\bm{e}_{k,\eta}$ in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\Omega))$.
\end{itemize}
Moreover, by Aubin's lemma, see~\cite{Aubin:TheoremeCompacite}, $\bm{m}_{n,k,\eta}$ converges strongly to
$\bm{m}_{k,\eta}$ in $\mathrm{L}^p(0,T;\mathrm{L}^q(\Omega))$ for all $T>0$,
for $1\leq p<+\infty$ and $1\leq q <6$.
Taking the limit in the energy
inequality~\eqref{eq:EnergyInequalityGalerkin} as $n$ tends to
$+\infty$ is delicate because of the terms
involving the $\mathbb{L}^2(\Omega)$ norms of $\bm{e}_{n,k,\eta}(T,\cdot)$ and
$\bm{h}_{n,k,\eta}(T,\cdot)$. For all $T>0$, we can extract
a subsequence of $\bm{e}_{n,k,\eta}(T,\cdot)$ that converges weakly to
some $\bm{e}^T_{k,\eta}$ in $L^2(\Omega)$ as $n$ tends to $+\infty$, but it is not
clear that $\bm{e}^T_{k,\eta}$ is equal to $\bm{e}_{k,\eta}(T,\cdot)$. If we
had strong convergence of $\bm{e}_{n,k,\eta}$ as a function defined on
$\mathbb{R}^+{\times}\Omega$, or if there existed a single subsequence along which $\bm{e}_{n,k,\eta}(T,\cdot)$
converged weakly in $L^2(\Omega)$ for almost every time $T$, then we
could conclude directly. Unfortunately, while for every $T>0$
there exists a subsequence of $\bm{e}_{n,k,\eta}(T,\cdot)$ that
converges weakly in $L^2(\Omega)$, this subsequence depends on $T$.
We have the same problem for $\bm{h}_{n,k,\eta}$. There is no such
problem with $\bm{m}_{n,k,\eta}(T,\cdot)$,
see Lemma~\ref{lemma:SameSubsequenceGivenTime}. To solve the problem,
we first integrate~\eqref{eq:EnergyInequalityGalerkin} over $\rbrack
T_1,T_2\lbrack$, where $0\leq T_1<T_2<+\infty$, and then take the limit as $n$ tends to $+\infty$:
\begin{equation*}
\begin{split}
&\phantom{=}\frac{A}{2}\int_{T_1}^{T_2}\int_{\Omega}\lVert\nabla\bm{m}_{k,\eta}(T,\cdot)\rVert^2{\mathrm{d}}\bm{x}{\mathrm{d}} T
+\frac{1}{2}\int_{T_1}^{T_2}\int_{\Omega}(\mathbf{K}(\bm{x})\bm{m}_{k,\eta}(T,\bm{x}))\cdot\bm{m}_{k,\eta}(T,\bm{x}){\mathrm{d}}\bm{x}{\mathrm{d}} T
\\&\phantom{=}
+\frac{k}{4}\int_{T_1}^{T_2}\int_{\Omega}(\lVert\bm{m}_{k,\eta}(T,\bm{x})\rVert^2-1)^2{\mathrm{d}}\bm{x}
\\&\phantom{=}
+\frac{\varepsilon_0}{2\mu_0}\int_{T_1}^{T_2}\int_{\mathbb{R}^3}\lVert\bm{e}_{k,\eta}(T,\bm{x})\rVert^2{\mathrm{d}}\bm{x}
+\frac{1}{2}\int_{T_1}^{T_2}\int_{\mathbb{R}^3}\lVert\bm{h}_{k,\eta}(T,\bm{x})\rVert^2{\mathrm{d}}\bm{x}
\\&\phantom{=}
+\int_{T_1}^{T_2}\mathrm{E}_s^\eta(\bm{m}_{k,\eta}(T,\cdot))
+\frac{\alpha}{1+\alpha^2}\int_{T_1}^{T_2}\iint_{Q_T}\left\lVert\frac{\partial\bm{m}_{k,\eta}}{\partial
t}\right\rVert^2 {\mathrm{d}}\bm{x}{\mathrm{d}} t\\
&\phantom{\leq}+\frac{\sigma}{\mu_0}\int_{T_1}^{T_2}\iint_{\rbrack0,T\lbrack{\times}\mathbb{R}^3}\lVert\bm{e}_{k,\eta}\rVert^2{\mathrm{d}}\bm{x}{\mathrm{d}} t
+\frac{\sigma}{\mu_0}\int_{T_1}^{T_2}\iint_{\rbrack0,T\lbrack{\times}\mathbb{R}^3}\bm{f}\cdot\bm{e}_{k,\eta}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\leq
\frac{A}{2}\int_{T_1}^{T_2}\int_{\Omega}\lVert\nabla\bm{m}_0\rVert^2{\mathrm{d}}\bm{x}
+\frac{1}{2}\int_{T_1}^{T_2}\int_{\Omega}(\mathbf{K}(\bm{x}) \bm{m}_0)\cdot\bm{m}_0{\mathrm{d}}\bm{x}
\\&\phantom{=}
+\int_{T_1}^{T_2}\mathrm{E}_s^\eta(\bm{m}_0)
+\frac{\varepsilon_0}{2\mu_0}\int_{T_1}^{T_2}\int_{\mathbb{R}^3}\lVert\bm{e}_0\rVert^2{\mathrm{d}}\bm{x}
+\frac{1}{2}\int_{T_1}^{T_2}\int_{\mathbb{R}^3}\lVert\bm{h}_0\rVert^2{\mathrm{d}}\bm{x},
\end{split}
\end{equation*}
for all $0\leq T_1<T_2<+\infty$. Since this inequality holds for every
such interval, the Lebesgue differentiation theorem yields, for almost all $T>0$,
\begin{equation}\label{eq:EnergyInequalityPenalized}
\begin{split}
&\phantom{=}\frac{A}{2}\int_{\Omega}\lVert\nabla\bm{m}_{k,\eta}(T,\cdot)\rVert^2{\mathrm{d}}\bm{x}
+\frac{1}{2}\int_{\Omega}(\mathbf{K}(\bm{x})\bm{m}_{k,\eta}(T,\bm{x}))\cdot\bm{m}_{k,\eta}(T,\bm{x}){\mathrm{d}}\bm{x}
\\&\phantom{=}
+\frac{k}{4}\int_{\Omega}(\lVert\bm{m}_{k,\eta}(T,\bm{x})\rVert^2-1)^2{\mathrm{d}}\bm{x}
\\&\phantom{=}
+\frac{\varepsilon_0}{2\mu_0}\int_{\mathbb{R}^3}\lVert\bm{e}_{k,\eta}(T,\bm{x})\rVert^2{\mathrm{d}}\bm{x}
+\frac{1}{2}\int_{\mathbb{R}^3}\lVert\bm{h}_{k,\eta}(T,\bm{x})\rVert^2{\mathrm{d}}\bm{x}
\\&\phantom{=}
+\mathrm{E}_s^\eta(\bm{m}_{k,\eta}(T,\cdot))
+\frac{\alpha}{1+\alpha^2}\iint_{Q_T}\left\lVert\frac{\partial\bm{m}_{k,\eta}}{\partial
t}\right\rVert^2 {\mathrm{d}}\bm{x}{\mathrm{d}} t\\
&\phantom{\leq}+\frac{\sigma}{\mu_0}\iint_{\rbrack0,T\lbrack{\times}\mathbb{R}^3}\lVert\bm{e}_{k,\eta}\rVert^2{\mathrm{d}}\bm{x}{\mathrm{d}} t
+\frac{\sigma}{\mu_0}\iint_{\rbrack0,T\lbrack{\times}\mathbb{R}^3}\bm{f}\cdot\bm{e}_{k,\eta}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\leq
\frac{A}{2}\int_{\Omega}\lVert\nabla\bm{m}_0\rVert^2{\mathrm{d}}\bm{x}
+\frac{1}{2}\int_{\Omega}(\mathbf{K}(\bm{x}) \bm{m}_0)\cdot\bm{m}_0{\mathrm{d}}\bm{x}
\\&\phantom{=}
+\mathrm{E}_s^\eta(\bm{m}_0)
+\frac{\varepsilon_0}{2\mu_0}\int_{\mathbb{R}^3}\lVert\bm{e}_0\rVert^2{\mathrm{d}}\bm{x}
+\frac{1}{2}\int_{\mathbb{R}^3}\lVert\bm{h}_0\rVert^2{\mathrm{d}}\bm{x},
\end{split}
\end{equation}
We take the limit in~\eqref{eq:WeakFormulationGalerkinMagnetization}
as $n$ tends to $+\infty$:
\begin{subequations}\label{subeq:WeakFormulationPenalized}
\begin{equation}\label{eq:WeakFormulationPenalizedMagnetization}
\begin{split}
&\phantom{=}
\iint_{Q_T}
\alpha\frac{\partial\bm{m}_{k,\eta}}{\partial t}\cdot\bm{\phi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
+\iint_{Q_T}
\left(\bm{m}_{k,\eta}{\wedge}\frac{\partial\bm{m}_{k,\eta}}{\partial t}\right)
\cdot\bm{\phi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&=
-(1+\alpha^2) A\iint_{Q_T}\sum_{i=1}^3
\frac{\partial \bm{m}_{k,\eta}}{\partial x_i}\cdot\frac{\partial\bm{\phi}}{\partial x_i}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
-(1+\alpha^2)\iint_{Q_T}
(\mathbf{K}(\bm{x}) \bm{m}_{k,\eta}(t,\bm{x}))
\cdot\bm{\phi}(t,\bm{x}){\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
+(1+\alpha^2)\iint_{Q_T}
\bm{h}_{k,\eta}
\cdot\bm{\phi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
+(1+\alpha^2)\frac{K_s}{\eta}\iint_{\rbrack0,T\lbrack{\times}(B{\times}\rbrack-\eta,\eta\lbrack)}
((\bm{\nu}\cdot\bm{m}_{k,\eta})\bm{\nu}-\bm{m}_{k,\eta})
\cdot\bm{\phi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
+(1+\alpha^2) \frac{ J_1}{\eta}\iint_{\rbrack0,T\lbrack{\times}(B{\times}\rbrack-\eta,\eta\lbrack)}
(\bm{m}^{*}_{k,\eta}-\bm{m}_{k,\eta})\cdot\bm{\phi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
+2(1+\alpha^2)\frac{J_2}{\eta}
\iint_{\rbrack0,T\lbrack{\times}(B{\times}\rbrack-\eta,\eta\lbrack)}
\left((\bm{m}_{k,\eta}\cdot\bm{m}_{k,\eta}^{*})\bm{m}^{*}_{k,\eta}
-\lVert\bm{m}_{k,\eta}^*\rVert^2\bm{m}_{k,\eta}\right)
\cdot\bm{\phi}{\mathrm{d}}\bm{x}{\mathrm{d}} t,
\end{split}
\end{equation}
for all $\bm{\phi}$ in $\bigcup_n\mathcal{C}^\infty(\lbrack0,T\lbrack;V_n^3)$. By
density, it also holds for all $\bm{\phi}$ in
$\mathbb{H}^1(\rbrack0,T\lbrack{\times}\Omega)$.
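In the passage to the limit above, the quadratic term is handled as the product of a strongly and a weakly converging sequence; we sketch the argument for a fixed smooth test function $\bm{\phi}$ (using the strong convergence of $\bm{m}_{n,k,\eta}$ given by Aubin's lemma and the mixed product identity):
\begin{equation*}
\iint_{Q_T}\Bigl(\bm{m}_{n,k,\eta}{\wedge}\frac{\partial\bm{m}_{n,k,\eta}}{\partial t}\Bigr)\cdot\bm{\phi}\,{\mathrm{d}}\bm{x}{\mathrm{d}} t
=\iint_{Q_T}\frac{\partial\bm{m}_{n,k,\eta}}{\partial t}\cdot\bigl(\bm{\phi}{\wedge}\bm{m}_{n,k,\eta}\bigr){\mathrm{d}}\bm{x}{\mathrm{d}} t
\xrightarrow[n\to+\infty]{}
\iint_{Q_T}\frac{\partial\bm{m}_{k,\eta}}{\partial t}\cdot\bigl(\bm{\phi}{\wedge}\bm{m}_{k,\eta}\bigr){\mathrm{d}}\bm{x}{\mathrm{d}} t,
\end{equation*}
since $\bm{\phi}{\wedge}\bm{m}_{n,k,\eta}$ converges strongly to $\bm{\phi}{\wedge}\bm{m}_{k,\eta}$ in $\mathbb{L}^2(\rbrack0,T\lbrack{\times}\Omega)$ while $\frac{\partial\bm{m}_{n,k,\eta}}{\partial t}$ converges weakly in that space.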
We integrate~\eqref{eq:WeakFormulationGalerkinExcitation} by parts then take the limit as
$n$ tends to $+\infty$.
\begin{equation}\label{eq:WeakFormulationPenalizedExcitation}
\begin{split}
&\phantom{=}-\mathrm{\mu}_0\iint_{\mathbb{R}^+{\times}\mathbb{R}^3}\left(\bm{h}_{k,\eta}+
\bm{m}_{k,\eta}\right)\cdot \frac{\partial\bm{\psi}}{\partial t}{\mathrm{d}}\bm{x}{\mathrm{d}} t
+\iint_{\mathbb{R}^+{\times}\mathbb{R}^3}\bm{e}_{k,\eta}\cdot\operatorname{rot}\bm{\psi}{\mathrm{d}}\bm{x}{\mathrm{d}}
t\\
&=\mathrm{\mu}_0\int_{\mathbb{R}^3}\left(\bm{h}_{0}+
\bm{m}_{0}\right)\cdot\bm{\psi}(0,\cdot){\mathrm{d}}\bm{x},
\end{split}
\end{equation}
for all $\bm{\psi}$ in $\bigcup_n\mathcal{C}_c^\infty([0,+\infty\lbrack;W_n)$. By
density, it also holds for all $\bm{\psi}$ in
$\mathrm{L}^1(\mathbb{R}^+;\mathbb{H}^1(\Omega))$ such that
$\frac{\partial \bm{\psi}}{\partial t}$ belongs to $\mathrm{L}^1(\mathbb{R}^+;\mathbb{H}^1(\Omega))$.
We integrate~\eqref{eq:WeakFormulationGalerkinElectric} by parts then take the limit as
$n$ tends to $+\infty$.
\begin{equation}\label{eq:WeakFormulationPenalizedElectric}
\begin{split}
&\phantom{=}
-\mathrm{\varepsilon}_0\iint_{\mathbb{R}^+{\times}\mathbb{R}^3}\bm{e}_{k,\eta}\cdot \frac{\partial\bm{\Theta}}{\partial
t}{\mathrm{d}}\bm{x}{\mathrm{d}} t-\iint_{\mathbb{R}^+{\times}\mathbb{R}^3}\bm{h}_{k,\eta}\cdot \operatorname{rot}\bm{\Theta}{\mathrm{d}}\bm{x}{\mathrm{d}}
t
\\&\phantom{=}
+\sigma\iint_{\mathbb{R}^+{\times}\Omega}(\bm{e}_{k,\eta}+\bm{f})\cdot\bm{\Theta}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&=
\mathrm{\varepsilon}_0\int_{\mathbb{R}^3}\bm{e}_{0}\cdot\bm{\Theta}(0,\cdot){\mathrm{d}}\bm{x},
\end{split}
\end{equation}
for all $\bm{\Theta}$ in $\bigcup_n\mathcal{C}_c^\infty([0,+\infty\lbrack;W_n)$. By
density, it also holds for all $\bm{\Theta}$ in
$\mathrm{L}^1(\mathbb{R}^+;\mathbb{H}^1(\Omega))$ such that
$\frac{\partial \bm{\Theta}}{\partial t}$ belongs to $\mathrm{L}^1(\mathbb{R}^+;\mathbb{H}^1(\Omega))$.
\end{subequations}
\subsection{Limit as $k$ tends to $+\infty$}\label{subsect:kLimit}
By~\eqref{eq:EnergyInequalityPenalized} and using the Young inequality to
deal with the term containing $\bm{f}$ (a sketch of this estimate is given after the list):
\begin{itemize}
\item $\bm{m}_{k,\eta}$ is bounded in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^4(\Omega))$ independently
of $k$.
\item $\nabla \bm{m}_{k,\eta}$ is bounded in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\Omega))$ independently
of $k$.
\item $\frac{\partial\bm{m}_{k,\eta}}{\partial t}$ is bounded in $\mathrm{L}^2(\mathbb{R}^+;\mathrm{L}^2(\Omega))$ independently
of $k$.
\item $\bm{h}_{k,\eta}$ is bounded in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\Omega))$ independently
of $k$.
\item $\bm{e}_{k,\eta}$ is bounded in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\Omega))$ independently
of $k$.
\item $\sqrt{k}(\lVert\bm{m}_{k,\eta}\rVert^2-1)$ is bounded in $\mathrm{L}^\infty(\mathbb{R}^+;\mathrm{L}^2(\Omega))$ independently
of $k$.
\end{itemize}
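The $\bm f$ term is handled by the Young inequality; a minimal sketch (the splitting $\tfrac12+\tfrac12$ is one possible choice):
\begin{equation*}
\frac{\sigma}{\mu_0}\left\lvert\iint_{\rbrack0,T\lbrack{\times}\mathbb{R}^3}\bm{f}\cdot\bm{e}_{k,\eta}{\mathrm{d}}\bm{x}{\mathrm{d}} t\right\rvert
\leq\frac{\sigma}{2\mu_0}\iint_{\rbrack0,T\lbrack{\times}\mathbb{R}^3}\lVert\bm{f}\rVert^2{\mathrm{d}}\bm{x}{\mathrm{d}} t
+\frac{\sigma}{2\mu_0}\iint_{\rbrack0,T\lbrack{\times}\mathbb{R}^3}\lVert\bm{e}_{k,\eta}\rVert^2{\mathrm{d}}\bm{x}{\mathrm{d}} t,
\end{equation*}
so that half of the dissipation term in~\eqref{eq:EnergyInequalityPenalized} absorbs the $\bm{e}_{k,\eta}$ contribution, the remaining contribution being controlled by $\lVert\bm f\rVert_{\mathrm{L}^2(\mathbb{R}^+{\times}\Omega)}^2$.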
Thus, there exist $\bm{m}_{\eta}$, $\bm{h}_{\eta}$, $\bm{e}_{\eta}$,
such that up to a subsequence:
\begin{itemize}
\item $\bm{m}_{k,\eta}$ converges weakly to $\bm{m}_{\eta}$ in
$\mathbb{H}^1(\rbrack0,T\lbrack{\times}\Omega)$.
\item $\bm{m}_{k,\eta}$ converges strongly to $\bm{m}_{\eta}$ in
$\mathbb{L}^2(\rbrack0,T\lbrack{\times}\Omega)$.
\item $\bm{m}_{k,\eta}$ converges strongly to $\bm{m}_{\eta}$ in
$\mathcal{C}(\lbrack0,T\rbrack;\mathbb{L}^2(\Omega))$ and thus in
$\mathcal{C}(\lbrack0,T\rbrack;\mathbb{L}^p(\Omega))$ for all $1\leq p<6$.
\item $\nabla\bm{m}_{k,\eta}$ converges weakly to $\nabla\bm{m}_{\eta}$ in
$\mathbb{L}^2(\rbrack0,T\lbrack{\times}\Omega)$.
\item For all time $T$, $\nabla\bm{m}_{k,\eta}(T,\cdot)$ converges weakly to $\nabla\bm{m}_{\eta}(T,\cdot)$ in
$\mathbb{L}^2(\Omega)$.
\item $\frac{\partial\bm{m}_{k,\eta}}{\partial t}$ converges star weakly to
$\frac{\partial\bm{m}_{\eta}}{\partial t}$ in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\Omega))$.
\item $\bm{h}_{k,\eta}$ converges star weakly to $\bm{h}_{\eta}$ in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\Omega))$.
\item $\bm{e}_{k,\eta}$ converges star weakly to $\bm{e}_{\eta}$ in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\Omega))$.
\end{itemize}
Moreover, by Aubin's lemma, $\bm{m}_{k,\eta}$ converges strongly to
$\bm{m}_{\eta}$ in $\mathrm{L}^p(\mathbb{R}^+;\mathbb{L}^q(\Omega))$
for $1\leq p<+\infty$ and $1\leq q <6$.
Since $\lVert\bm{m}_{k,\eta}\rVert^2-1$ converges to $0$ in $\mathrm{L}^\infty(\mathbb{R}^+;\mathrm{L}^2(\Omega))$,
$\lVert\bm{m}_\eta\rVert=1$ almost everywhere on $\mathbb{R}^+{\times}\Omega$.
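A brief justification: denoting by $C$ the ($k$-independent) right-hand side of~\eqref{eq:EnergyInequalityPenalized}, the penalization term gives, for almost every $T$,
\begin{equation*}
\frac{k}{4}\int_{\Omega}(\lVert\bm{m}_{k,\eta}(T,\bm{x})\rVert^2-1)^2{\mathrm{d}}\bm{x}\leq C,
\qquad\text{hence}\qquad
\bigl\lVert\,\lVert\bm{m}_{k,\eta}\rVert^2-1\bigr\rVert_{\mathrm{L}^\infty(\mathbb{R}^+;\mathrm{L}^2(\Omega))}\leq 2\sqrt{\frac{C}{k}}\xrightarrow[k\to+\infty]{}0.
\end{equation*}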
For the reasons explained in \S\ref{subsect:nLimit},
we integrate~\eqref{eq:EnergyInequalityPenalized} over $\lbrack T_1,T_2\rbrack$, drop the nonnegative term
$k\lVert\lVert\bm{m}_{k,\eta}\rVert^2-1\rVert_{\mathrm{L}^2}^2/4$, and
compute the limit as $k$ tends to $+\infty$. After the limit is taken,
we remove the integral over $\lbrack T_1,T_2\rbrack$ as before and obtain that for
almost all $T>0$:
\begin{equation}\label{eq:EnergyInequalityLayer}
\begin{split}
&\phantom{=}\frac{A}{2}\int_{\Omega}\lVert\nabla\bm{m}_{\eta}(T,\cdot)\rVert^2{\mathrm{d}}\bm{x}
+\frac{1}{2}\int_{\Omega}(\mathbf{K}(\bm{x})\bm{m}_{\eta}(T,\bm{x}))\cdot\bm{m}_{\eta}(T,\bm{x}){\mathrm{d}}\bm{x}
\\&\phantom{=}
+\frac{\varepsilon_0}{2\mu_0}\int_{\mathbb{R}^3}\lVert\bm{e}_{\eta}(T,\bm{x})\rVert^2{\mathrm{d}}\bm{x}
+\frac{1}{2}\int_{\mathbb{R}^3}\lVert\bm{h}_{\eta}(T,\bm{x})\rVert^2{\mathrm{d}}\bm{x}
\\&\phantom{=}
+\mathrm{E}_s^\eta(\bm{m}_\eta(T,\cdot))
+\frac{\alpha}{1+\alpha^2}\iint_{Q_T}\left\lVert\frac{\partial\bm{m}_{\eta}}{\partial
t}\right\rVert^2 {\mathrm{d}}\bm{x}{\mathrm{d}} t\\
&\phantom{\leq}+\frac{\sigma}{\mu_0}\iint_{\rbrack0,T\lbrack{\times}\mathbb{R}^3}\lVert\bm{e}_{\eta}\rVert^2{\mathrm{d}}\bm{x}{\mathrm{d}} t
+\frac{\sigma}{\mu_0}\iint_{\rbrack0,T\lbrack{\times}\mathbb{R}^3}\bm{f}\cdot\bm{e}_{\eta}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\leq
\frac{A}{2}\int_{\Omega}\lVert\nabla\bm{m}_0\rVert^2{\mathrm{d}}\bm{x}
+\frac{1}{2}\int_{\Omega}(\mathbf{K}(\bm{x}) \bm{m}_0)\cdot\bm{m}_0{\mathrm{d}}\bm{x}
\\&\phantom{=}
+\mathrm{E}_s^\eta(\bm{m}_0)
+\frac{\varepsilon_0}{2\mu_0}\int_{\mathbb{R}^3}\lVert\bm{e}_0\rVert^2{\mathrm{d}}\bm{x}
+\frac{1}{2}\int_{\mathbb{R}^3}\lVert\bm{h}_0\rVert^2{\mathrm{d}}\bm{x}.
\end{split}
\end{equation}
We replace $\bm{\phi}$
in~\eqref{eq:WeakFormulationPenalizedMagnetization}
with $\bm{m}_{k,\eta}{\wedge}\bm{\varphi}$, where $\bm{\varphi}$
belongs to $\mathcal{C}^\infty_c(\mathbb{R}^+{\times}\Omega)$:
\begin{equation*}
\begin{split}
&\phantom{=}
-\alpha\iint_{Q_T}
\left(\bm{m}_{k,\eta}{\wedge}\frac{\partial\bm{m}_{k,\eta}}{\partial t}\right)\cdot\bm{\varphi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
+\iint_{Q_T}
\lVert\bm{m}_{k,\eta}\rVert^2\frac{\partial\bm{m}_{k,\eta}}{\partial t}\cdot\bm{\varphi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&=\iint_{Q_T}
\left(\bm{m}_{k,\eta}\cdot\frac{\partial\bm{m}_{k,\eta}}{\partial t}\right)(\bm{m}_{k,\eta}\cdot\bm{\varphi}){\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
+(1+\alpha^2) A\iint_{Q_T}\sum_{i=1}^3
\left(\bm{m}_{k,\eta}{\wedge}\frac{\partial \bm{m}_{k,\eta}}{\partial x_i}\right)
\cdot\frac{\partial\bm{\varphi}}{\partial x_i}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
+(1+\alpha^2)\iint_{Q_T}
\left(\bm{m}_{k,\eta}(t,\bm{x}){\wedge}\mathbf{K}(\bm{x}) \bm{m}_{k,\eta}(t,\bm{x})\right)
\cdot\bm{\varphi}(t,\bm{x}){\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
-(1+\alpha^2)\iint_{Q_T}
\left(\bm{m}_{k,\eta}{\wedge}\bm{h}_{k,\eta}\right)
\cdot\bm{\varphi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
-(1+\alpha^2)\frac{K_s}{\eta}\iint_{\rbrack0,T\lbrack{\times}(B{\times}\rbrack-\eta,\eta\lbrack)}
(\bm{\nu}\cdot\bm{m}_{k,\eta})(\bm{m}_{k,\eta}{\wedge}\bm{\nu})
\cdot\bm{\varphi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
-(1+\alpha^2) \frac{ J_1}{\eta}\iint_{\rbrack0,T\lbrack{\times}(B{\times}\rbrack-\eta,\eta\lbrack)}
(\bm{m}_{k,\eta}{\wedge} \bm{m}^{*}_{k,\eta})\cdot\bm{\varphi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
-2(1+\alpha^2)\frac{J_2}{\eta}
\iint_{\rbrack0,T\lbrack{\times}(B{\times}\rbrack-\eta,\eta\lbrack)}
(\bm{m}_{k,\eta}\cdot\bm{m}_{k,\eta}^{*})(\bm{m}_{k,\eta}{\wedge}\bm{m}^{*}_{k,\eta})
\cdot\bm{\varphi}{\mathrm{d}}\bm{x}{\mathrm{d}} t,
\end{split}
\end{equation*}
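In the computation above, two elementary vector identities are used; we record them for the reader's convenience:
\begin{align*}
\frac{\partial\bm{m}_{k,\eta}}{\partial t}\cdot\bigl(\bm{m}_{k,\eta}{\wedge}\bm{\varphi}\bigr)
&=-\Bigl(\bm{m}_{k,\eta}{\wedge}\frac{\partial\bm{m}_{k,\eta}}{\partial t}\Bigr)\cdot\bm{\varphi},\\
\Bigl(\bm{m}_{k,\eta}{\wedge}\frac{\partial\bm{m}_{k,\eta}}{\partial t}\Bigr)\cdot\bigl(\bm{m}_{k,\eta}{\wedge}\bm{\varphi}\bigr)
&=\lVert\bm{m}_{k,\eta}\rVert^2\,\frac{\partial\bm{m}_{k,\eta}}{\partial t}\cdot\bm{\varphi}
-\Bigl(\bm{m}_{k,\eta}\cdot\frac{\partial\bm{m}_{k,\eta}}{\partial t}\Bigr)\bigl(\bm{m}_{k,\eta}\cdot\bm{\varphi}\bigr).
\end{align*}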
We then take the
limit as $k$ tends to $+\infty$:
\begin{subequations}
\begin{equation}\label{eq:WeakFormulationLayerMagnetization}
\begin{split}
&\phantom{=}
-\alpha\iint_{Q_T}
\left(\bm{m}_{\eta}{\wedge}\frac{\partial\bm{m}_{\eta}}{\partial t}\right)\cdot\bm{\varphi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
+\iint_{Q_T}
\frac{\partial\bm{m}_{\eta}}{\partial t}\cdot\bm{\varphi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&=
+(1+\alpha^2) A\iint_{Q_T}\sum_{i=1}^3
\left(\bm{m}_{\eta}{\wedge}\frac{\partial \bm{m}_{\eta}}{\partial x_i}\right)
\cdot\frac{\partial\bm{\varphi}}{\partial x_i}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
+(1+\alpha^2)\iint_{Q_T}
\left(\bm{m}_{\eta}(t,\bm{x}){\wedge}\mathbf{K}(\bm{x}) \bm{m}_{\eta}(t,\bm{x})\right)
\cdot\bm{\varphi}(t,\bm{x}){\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
-(1+\alpha^2)\iint_{Q_T}
\left(\bm{m}_{\eta}{\wedge}\bm{h}_{\eta}\right)
\cdot\bm{\varphi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
-(1+\alpha^2)\frac{K_s}{\eta}\iint_{\rbrack0,T\lbrack{\times}(B{\times}\rbrack-\eta,\eta\lbrack)}
(\bm{\nu}\cdot\bm{m}_{\eta})(\bm{m}_{\eta}{\wedge}\bm{\nu})
\cdot\bm{\varphi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
-(1+\alpha^2) \frac{ J_1}{\eta}\iint_{\rbrack0,T\lbrack{\times}(B{\times}\rbrack-\eta,\eta\lbrack)}
(\bm{m}_{\eta}{\wedge} \bm{m}^{*}_{\eta})\cdot\bm{\varphi}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&\phantom{=}
-2(1+\alpha^2)\frac{J_2}{\eta}
\iint_{\rbrack0,T\lbrack{\times}(B{\times}\rbrack-\eta,\eta\lbrack)}
(\bm{m}_{\eta}\cdot\bm{m}_{\eta}^{*})(\bm{m}_{\eta}{\wedge}\bm{m}^{*}_{\eta})
\cdot\bm{\varphi}{\mathrm{d}}\bm{x}{\mathrm{d}} t,
\end{split}
\end{equation}
for all $\bm{\varphi}$ in $\mathcal{C}^\infty_c(\mathbb{R}^+{\times}\Omega)$.
We take the limit in~\eqref{eq:WeakFormulationPenalizedExcitation} as
$k$ tends to $+\infty$:
\begin{equation}\label{eq:WeakFormulationLayerExcitation}
\begin{split}
&\phantom{=}-\mathrm{\mu}_0\iint_{\mathbb{R}^+{\times}\mathbb{R}^3}\left(\bm{h}_{\eta}+
\bm{m}_{\eta}\right)\cdot \frac{\partial\bm{\psi}}{\partial t}{\mathrm{d}}\bm{x}{\mathrm{d}} t
+\iint_{\mathbb{R}^+{\times}\mathbb{R}^3}\bm{e}_{\eta}\cdot\operatorname{rot}\bm{\psi}{\mathrm{d}}\bm{x}{\mathrm{d}}
t\\
&=\mathrm{\mu}_0\int_{\mathbb{R}^3}\left(\bm{h}_{0}+
\bm{m}_{0}\right)\cdot\bm{\psi}(0,\cdot){\mathrm{d}}\bm{x},
\end{split}
\end{equation}
for all $\bm{\psi}$ in
$\mathrm{L}^1(\mathbb{R}^+;\mathbb{H}^1(\Omega))$ such that
$\frac{\partial \bm{\psi}}{\partial t}$ belongs to $\mathrm{L}^1(\mathbb{R}^+;\mathbb{L}^2(\Omega))$.
We take the limit in~\eqref{eq:WeakFormulationPenalizedElectric} as
$k$ tends to $+\infty$.
\begin{equation}\label{eq:WeakFormulationLayerElectric}
\begin{split}
&\phantom{=}
-\mathrm{\varepsilon}_0\iint_{\mathbb{R}^+{\times}\mathbb{R}^3}\bm{e}_{\eta}\cdot \frac{\partial\bm{\Theta}}{\partial
t}{\mathrm{d}}\bm{x}{\mathrm{d}} t-\iint_{\mathbb{R}^+{\times}\mathbb{R}^3}\bm{h}_{\eta}\cdot \operatorname{rot}\bm{\Theta}{\mathrm{d}}\bm{x}{\mathrm{d}}
t
\\&\phantom{=}
+\sigma\iint_{\mathbb{R}^+{\times}\Omega}(\bm{e}_\eta+\bm{f})\cdot\bm{\Theta}{\mathrm{d}}\bm{x}{\mathrm{d}} t
\\&=
\mathrm{\varepsilon}_0\int_{\mathbb{R}^3}\bm{e}_{0}\cdot\bm{\Theta}(0,\cdot){\mathrm{d}}\bm{x},
\end{split}
\end{equation}
for all $\bm{\Theta}$ in
$\mathrm{L}^1(\mathbb{R}^+;\mathbb{H}^1(\Omega))$ such that
$\frac{\partial \bm{\Theta}}{\partial t}$ belongs to $\mathrm{L}^1(\mathbb{R}^+;\mathbb{L}^2(\Omega))$.
\end{subequations}
\subsection{Limit as $\eta$ tends to $0$}\label{subsect:EtaLimit}
Since $\mathbb{H}^1(\Omega)$ is continuously imbedded in
$\mathcal{C}^0\big(\rbrack-L_{\mathrm{sous}},L_{\mathrm{sur}}\lbrack\setminus\{0\};\mathbb{L}^4({{B}})\big)$,
$\mathrm{E}_s^\eta(\bm{m}_0)$ remains bounded independently of
$\eta$ and converges to $\mathrm{E}_s(\bm{m}_0)$. Thus, using~\eqref{eq:EnergyInequalityLayer} and the
constraint $\lVert\bm{m}_\eta\rVert=1$ almost everywhere:
\begin{itemize}
\item $\bm{m}_{\eta}$ is bounded in
$\mathbb{L}^\infty(\mathbb{R}^+{\times}\Omega)$ by $1$.
\item $\nabla \bm{m}_{\eta}$ is bounded in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\Omega))$ independently
of $\eta$.
\item $\frac{\partial\bm{m}_{\eta}}{\partial t}$ is bounded in $\mathrm{L}^2(\mathbb{R}^+;\mathbb{L}^2(\Omega))$ independently
of $\eta$.
\item $\bm{h}_{\eta}$ is bounded in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\Omega))$ independently
of $\eta$.
\item $\bm{e}_{\eta}$ is bounded in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\Omega))$ independently
of $\eta$.
\end{itemize}
Thus, there exist $\bm{m}$ in
$\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{H}^1(\Omega))$ and in
$\mathbb{H}^1_{\mathrm{loc}}(\lbrack0,+\infty\lbrack;\mathbb{L}^2(\Omega))$,
$\bm{h}$ in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\Omega))$
and $\bm{e}$ in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\Omega))$ such that, up to a subsequence,
\begin{itemize}
\item $\bm{m}_{\eta}$ converges weakly to $\bm{m}$ in
$\mathbb{H}^1(\rbrack0,T\lbrack{\times}\Omega)$.
\item $\bm{m}_{\eta}$ converges strongly to $\bm{m}$ in
$\mathbb{L}^2(\rbrack0,T\lbrack{\times}\Omega)$.
\item $\bm{m}_{\eta}$ converges strongly to $\bm{m}$ in
$\mathcal{C}(\lbrack0,T\rbrack;\mathbb{L}^2(\Omega))$ and thus in
$\mathcal{C}(\lbrack0,T\rbrack;\mathbb{L}^p(\Omega))$ for all $1\leq p<+\infty$.
\item $\nabla\bm{m}_{\eta}$ converges weakly to $\nabla\bm{m}$ in
$\mathbb{L}^2(\rbrack0,T\lbrack{\times}\Omega)$.
\item For all time $T$, $\nabla\bm{m}_{\eta}(T,\cdot)$ converges weakly to $\nabla\bm{m}(T,\cdot)$ in
$\mathbb{L}^2(\Omega)$.
\item $\frac{\partial\bm{m}_{\eta}}{\partial t}$ converges star weakly to
$\frac{\partial\bm{m}}{\partial t}$ in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\Omega))$.
\item $\bm{h}_{\eta}$ converges star weakly to $\bm{h}$ in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\Omega))$.
\item $\bm{e}_{\eta}$ converges star weakly to $\bm{e}$ in $\mathrm{L}^\infty(\mathbb{R}^+;\mathbb{L}^2(\Omega))$.
\end{itemize}
As $\lVert\bm{m}_\eta\rVert=1$ almost everywhere,
$\lVert\bm{m}\rVert=1$ almost everywhere. Moreover, as
$\bm{m}_\eta(0,\cdot)=\bm{m}_0$, we have $\bm{m}(0,\cdot)=\bm{m}_0$.
For the reasons explained in \S\ref{subsect:nLimit},
we integrate~\eqref{eq:EnergyInequalityLayer} over $\lbrack T_1,T_2\rbrack$ and
compute the limit as $\eta$ tends to $0$. All the volume terms
converge to their intuitive limits; once the limit is taken,
the integral over $\lbrack T_1,T_2\rbrack$ will be removed as before to obtain the
inequality for almost all $T>0$.
Taking the limit in the surface terms requires more work.
First, the space $\mathbb{H}^1(\rbrack0,T\lbrack{\times}\Omega)$ is compactly imbedded into
\begin{equation*}
\mathcal{C}^0\big(\lbrack-L_{\mathrm{sous}},0\rbrack;\mathbb{L}^2(\rbrack0,T\lbrack{\times}{{B}})\big)\otimes\mathcal{C}^0\big(\lbrack0,L_{\mathrm{sur}}\rbrack;\mathbb{L}^2(\rbrack0,T\lbrack{\times}{{B}})\big).
\end{equation*}
This
is a direct application of Lemma~\ref{lemma:compactCL2imbedding} with
$\mathcal{O}=\rbrack0,T\lbrack {\times} B$
and, thus a
direct consequence of the extended Aubin's
lemma~\ref{lemma:AubinSimon}.
Therefore, $\bm{m}_\eta$ converges
strongly to $\bm{m}$ in
\begin{equation*}
\mathcal{C}^0\big(\lbrack-L_{\mathrm{sous}},0\rbrack;\mathbb{L}^2(\rbrack0,T\lbrack{\times}{{B}})\big)\otimes\mathcal{C}^0\big(\lbrack0,L_{\mathrm{sur}}\rbrack;\mathbb{L}^2(\rbrack0,T\lbrack{\times}{{B}})\big).
\end{equation*}
Since $\lVert \bm{m}_\eta\rVert=1$, the convergence is strong in
\begin{equation*}
\mathcal{C}^0\big(\lbrack-L_{\mathrm{sous}},0\rbrack;\mathbb{L}^p(\rbrack0,T\lbrack{\times}{{B}})\big)\otimes\mathcal{C}^0\big(\lbrack0,L_{\mathrm{sur}}\rbrack;\mathbb{L}^p(\rbrack0,T\lbrack{\times}{{B}})\big),
\end{equation*}
for all $p<+\infty$. Hence, denoting by $P$ the polynomial through which the
surface energy densities are expressed, we have
\begin{equation*}
\begin{split}
&\phantom{\leq}\limsup_{\eta\to0}\left\lvert\int_{T_1}^{T_2}\bigl(\mathrm{E}^\eta_s(\bm{m}_\eta(t,\cdot))-\mathrm{E}^\eta_s(\bm{m}(t,\cdot))\bigr){\mathrm{d}} t\right\rvert
\\&\leq
\limsup_{\eta\to0}\frac{1}{2\eta}\int_{-\eta}^{\eta}\int_{T_1}^{T_2}\iint_{{{B}}}\left\lVert
P(\bm{m}_\eta(t),\bm{m}_\eta^{*}(t))
-P(\bm{m}(t),\bm{m}^{*}(t))\right\rVert{\mathrm{d}} x{\mathrm{d}} y{\mathrm{d}} t{\mathrm{d}} z
\\&\leq
\limsup_{\eta\to0}\sup_{z\in\lbrack-\eta,\eta\rbrack}\int_{T_1}^{T_2}\iint_{{{B}}}\left\lVert P(\bm{m}_\eta(t),\bm{m}_\eta^{*}(t))
-P(\bm{m}(t),\bm{m}^{*}(t))
\right\rVert{\mathrm{d}} x{\mathrm{d}} y{\mathrm{d}} t
\\&\leq0.
\end{split}
\end{equation*}
Moreover, $\bm{m}(\cdot,\cdot)$ belongs to:
\begin{equation*}
\mathcal{C}^0\big(\lbrack-L_{\mathrm{sous}},0\rbrack;\mathbb{L}^p(\rbrack0,T\lbrack{\times}{{B}})\big)
\otimes
\mathcal{C}^0\big(\lbrack0,L_{\mathrm{sur}}\rbrack;\mathbb{L}^p(\rbrack0,T\lbrack{\times}{{B}})\big).
\end{equation*}
Therefore, we have
\begin{equation*}
\begin{split}
&\phantom{\leq}
\lim_{\eta\to0}\left\lvert\int_{T_1}^{T_2}\bigl(\mathrm{E}^\eta_s(\bm{m}(t,\cdot))-\mathrm{E}_s(\bm{m}(t,\cdot))\bigr){\mathrm{d}} t\right\rvert
\\&\leq \lim_{\eta\to0}\frac{1}{2\eta}\int_{-\eta}^{\eta}\int_{T_1}^{T_2}\iint_{{{B}}}\left\lVert P(\bm{m}(t),\bm{m}^{*}(t))
-P(\bm{m}(x,y,0^+,t),\bm{m}(x,y,0^-,t))
\right\rVert{\mathrm{d}} x{\mathrm{d}} y{\mathrm{d}} t{\mathrm{d}} z
\\&\leq \lim_{\eta\to0}\sup_{\lvert z\rvert<\eta}\int_{T_1}^{T_2}\iint_{{{B}}}\left\lVert P(\bm{m}(z,t),\bm{m}^{*}(z,t))
-P(\bm{m}(x,y,0^+,t),\bm{m}(x,y,0^-,t))
\right\rVert{\mathrm{d}} x{\mathrm{d}} y{\mathrm{d}} t
\\&\leq 0.
\end{split}
\end{equation*}
Hence, the integral over $\lbrack T_1,T_2\rbrack$ of
inequality~\eqref{eq:EnergyInequality} holds for all $0\leq T_1<T_2$,
and therefore inequality~\eqref{eq:EnergyInequality} is satisfied for
almost all $t>0$.
We take the limit in~\eqref{eq:WeakFormulationLayerMagnetization} as $\eta$ tends
to $0$. All the volume terms converge to their intuitive limits.
Moreover, because of the strong convergence, along a subsequence, of
$\bm{m}_\eta$ to $\bm{m}$ in
\begin{equation*}
\mathcal{C}^0\big(\lbrack-L_{\mathrm{sous}},0\rbrack;\mathbb{L}^p(\rbrack0,T\lbrack{\times}{{B}})\big)\otimes\mathcal{C}^0\big(\lbrack0,L_{\mathrm{sur}}\rbrack;\mathbb{L}^p(\rbrack0,T\lbrack{\times}{{B}})\big),
\end{equation*}
for all $p<+\infty$,
we have
\begin{align*}
\begin{split}
\limsup_{\eta\to0} \frac{1}{\eta}\bigg\lVert
\iint_{\rbrack0,T\lbrack{\times}(B{\times}\rbrack-\eta,\eta\lbrack)}
(\bm{\nu}\cdot\bm{m}_{\eta})(\bm{m}_{\eta}{\wedge}\bm{\nu})
\cdot\bm{\varphi}(t,\bm{x}) {\mathrm{d}}\bm{x}{\mathrm{d}} t&\\
-\iint_{\rbrack0,T\lbrack{\times}(B{\times}\rbrack-\eta,\eta\lbrack)}
(\bm{\nu}\cdot\bm{m})(\bm{m}{\wedge}\bm{\nu})
\cdot\bm{\varphi}(t,\bm{x})
{\mathrm{d}}\bm{x}{\mathrm{d}} t
\bigg\rVert&=0,
\end{split}\\
\begin{split}
\limsup_{\eta\to0} \frac{1}{\eta}\bigg\lVert
\iint_{\rbrack0,T\lbrack{\times}(B{\times}\rbrack-\eta,\eta\lbrack)}
(\bm{m}_{\eta}{\wedge}
\bm{m}^{*}_{\eta})\cdot\bm{\varphi}(t,\bm{x}) {\mathrm{d}}\bm{x}{\mathrm{d}} t&\\
-\iint_{\rbrack0,T\lbrack{\times}(B{\times}\rbrack-\eta,\eta\lbrack)}
(\bm{m}{\wedge}
\bm{m}^{*})\cdot\bm{\varphi}(t,\bm{x})
{\mathrm{d}}\bm{x}{\mathrm{d}} t
\bigg\rVert
&=0,
\end{split}\\
\begin{split}
\limsup_{\eta\to0} \frac{1}{\eta}\bigg\lVert
\iint_{\rbrack0,T\lbrack{\times}(B{\times}\rbrack-\eta,\eta\lbrack)}
(\bm{m}_{\eta}\cdot\bm{m}_{\eta}^{*})(\bm{m}_{\eta}{\wedge}\bm{m}^{*}_{\eta})
\cdot\bm{\varphi}(t,\bm{x}) {\mathrm{d}}\bm{x}{\mathrm{d}} t&\\
- \iint_{\rbrack0,T\lbrack{\times}(B{\times}\rbrack-\eta,\eta\lbrack)}
(\bm{m}\cdot\bm{m}^{*})(\bm{m}{\wedge}\bm{m}^{*})
\cdot\bm{\varphi}(t,\bm{x})
{\mathrm{d}}\bm{x}{\mathrm{d}} t\bigg\rVert&=0.
\end{split}
\end{align*}
Since
$\bm{m}$ belongs to
\begin{equation*}
\mathcal{C}^0\big(\lbrack-L_{\mathrm{sous}},0\rbrack;\mathbb{L}^p(\rbrack0,T\lbrack{\times}{{B}})\big)\otimes\mathcal{C}^0\big(\lbrack0,L_{\mathrm{sur}}\rbrack;\mathbb{L}^p(\rbrack0,T\lbrack{\times}{{B}})\big),
\end{equation*}
each surface term also converges to its intuitive
limit. Therefore, the weak
formulation~\eqref{eq:WeakFormulationMagnetization} is also satisfied.
We take the limits as $\eta$ tends to $0$ in~\eqref{eq:WeakFormulationLayerExcitation}
and~\eqref{eq:WeakFormulationLayerElectric}. All the volume terms
converge to their intuitive limits.
Hence, relations~\eqref{eq:WeakFormulationExcitation}
and~\eqref{eq:WeakFormulationElectric} are satisfied.
This finishes our proof of Theorem~\ref{theo:ExistenceWeakLandauLifshitzMaxwellSurfaceEnergies}.
\section{Characterization of the $\omega$-limit set}
\label{section-omegalimit}
We consider $(\bm{m},\bm h,\bm e)$ a weak solution to the Landau-Lifschitz-Maxwell system given by Theorem~\ref{theo:ExistenceWeakLandauLifshitzMaxwellSurfaceEnergies}.
We consider $\bm u\in \omega(\bm m)$. There exists a nondecreasing sequence $(t_n)_n$ such that $t_n\longrightarrow +\infty$ and $\bm m (t_n,.)\rightharpoonup \bm u$ in $\mathbb{H}^1(\Omega)$ weak.
Since $\Omega $ is a smooth bounded domain, $\bm m (t_n,.)$ tends to $\bm u$ in $\mathbb{L}^p(\Omega)$ strongly for $p\in [1,6[$, and, extracting a subsequence, we may assume that $\bm m(t_n,.)$ tends to $\bm u$ almost everywhere, so that the saturation constraint $\vert \bm u \vert =1$ is satisfied almost everywhere.
In addition, we remark that for all $n$, $\vert \bm m(t_n,.)\vert=1$ almost everywhere, so that $\|\bm m(t_n,.)\|_{\mathbb{L}^\infty(\Omega)}=1$. By interpolation inequalities in the $\mathrm{L}^p$ spaces, we obtain that for all $p<+\infty$, $\bm m(t_n,.)$ tends to $\bm u$ in $\mathbb{L}^p(\Omega) $ strongly.
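For instance, a minimal form of the interpolation argument, for $2\leq p<+\infty$ (using $\|\bm m(t_n,.)\|_{\mathbb{L}^\infty(\Omega)}=\|\bm u\|_{\mathbb{L}^\infty(\Omega)}=1$):
\begin{equation*}
\|\bm m(t_n,.)-\bm u\|_{\mathbb{L}^p(\Omega)}
\leq\|\bm m(t_n,.)-\bm u\|_{\mathbb{L}^2(\Omega)}^{\frac{2}{p}}\,\|\bm m(t_n,.)-\bm u\|_{\mathbb{L}^\infty(\Omega)}^{1-\frac{2}{p}}
\leq 2^{1-\frac{2}{p}}\,\|\bm m(t_n,.)-\bm u\|_{\mathbb{L}^2(\Omega)}^{\frac{2}{p}}\longrightarrow 0.
\end{equation*}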
{\bf First Step.} We fix a nonnegative real number $a$. For $s\in ]-a,a[$ and $x\in \Omega$, and for $n$ large enough, we set
\begin{equation*}U_n(s,x)=\bm m(t_n+s,x).\end{equation*}
We have the following estimate:
\begin{equation*}\begin{array}{rl}
\frac{1}{2a} \int_{-a}^a\int_\Omega |U_n(s,x)-\bm m (t_n,x)|^2dx ds = & \frac{1}{2a} \int_{-a}^a \int_\Omega \left| \int_0^s \dd{\bm m}{t}(t_n+\tau,x) d\tau \right|^2 dx ds\\\\
\leq &\frac{1}{2a} \int_{-a}^a|s|\int_\Omega \int_{t_n-a}^{+\infty} \left| \dd{\bm m}{t}(\tau,x)\right|^2 d\tau dx ds\\
\\
\leq & a\int_{t_n-a}^{+\infty} \int_\Omega\left| \dd{\bm m}{t}(\tau,x)\right|^2 d\tau dx .
\end{array}\end{equation*}
Since $\dd{\bm m}{t}$ is in $\mathbb{L}^2(\mathbb{R}^+\times \Omega)$, we obtain that
\begin{equation*}\int_{-a}^a\int_\Omega |U_n(s,x)-\bm m (t_n,x)|^2dx ds\longrightarrow 0\mbox{ as $n$ tends to }+\infty.\end{equation*}
Since $\bm m (t_n,.)$ tends strongly to $\bm u$ in $\mathbb{L}^2(\Omega)$, it follows that
\begin{equation}
\label{cacaprout}
U_n\mbox{ tends strongly to } \bm u \mbox{ in }L^2(-a,a; \mathbb{L}^2(\Omega)).\end{equation}
We remark now that the sequence $(\nabla U_n)_n$ is bounded in $L^\infty(-a,a;\mathbb{L}^2(\Omega))$. In addition, $(\dd{U_n}{t})_n$ is bounded in $L^2(-a,a;\mathbb{L}^2(\Omega))$.
So, by applying Aubin's Lemma with $X=\mathbb{H}^1(\Omega)$, $B= \mathbb{H}^{\frac{3}{4}}(\Omega)$, $Y=\mathbb{L}^2(\Omega)$, $r=2$ and $p=+\infty$, we obtain that $(U_n)_n$ is compact in ${\mathcal C}^0([-a,a];\mathbb{H}^{\frac{3}{4}}(\Omega))$, so that
\begin{equation}
\label{cacaproutbis}
U_n\mbox{ tends strongly to } \bm u \mbox{ in }{\mathcal C}^0([-a,a];\mathbb{H}^{\frac{3}{4}}(\Omega)).\end{equation}
By continuity of the trace operator, since $\mathbb{H}^{\frac{1}{4}}(\Gamma)\subset \mathbb{L}^2(\Gamma)$, we obtain that $$\gamma(U_n)\longrightarrow \gamma(\bm u)\mbox{ strongly in }{\mathcal C}^0([-a,a];\mathbb{L}^2(\Gamma)).$$
In addition, by classical properties of the trace operator, for all $n$, $\|U_n\|_{L^\infty([-a,a]\times\Omega)}=1$, so $\|\gamma(U_n)\|_{L^\infty([-a,a]\times\Gamma)}\leq 1$. We obtain then in particular that\begin{equation*}
\gamma (U_n)\longrightarrow \gamma (\bm u)\mbox{ strongly in }\mathbb{L}^p([-a,a]\times \partial \Omega), \; p<+\infty\end{equation*}
{\bf Second step.} We consider a smooth positive function $\rho_a$ compactly supported in $[-a,a]$ such that
\begin{equation*}\begin{array}{l}
\rho_a(\tau)=1\mbox{ for }\tau\in [-a+1,a-1],\\
\\
0\leq \rho_a\leq 1,\\
\\
\vert \rho_a'\vert\leq 2.
\end{array}\end{equation*}
For $n$ large enough, we set
\begin{equation*}\bm h_a^n(x)=\frac{1}{2a}\int_{-a}^a \bm h (t_n+s,x)\rho_a(s) ds \mbox{ and }\bm e_a^n(x)=\frac{1}{2a}\int_{-a}^a \bm e (t_n+s,x)\rho_a(s) ds.\end{equation*}
By construction of $(\bm m, \bm h ,\bm e)$, we know that $\bm h$ and $\bm e$ are in $\mathrm{L}^\infty(\mathbb R^+; \mathbb L ^2(\mathbb R ^3)).$ We have the following estimate:
\begin{equation*}
\begin{array}{rl}
\|\bm h_a^n\|_{\mathbb L^2(\mathbb R^3)}^2 = & \int_{\mathbb R^3} \left|\frac{1}{2a}
\int_{-a}^a \bm h (t_n + s,x)\rho_a(s) ds\right|^2 dx\\
\\
\leq &\frac{1}{2a}\int_{-a}^a \rho_a^2(s)ds \, \frac{1}{2a} \int_{\mathbb R^3}\int_{-a}^a |\bm h (t_n+s,x)|^2ds dx\\
\\
\leq & \frac{2a+2}{2a} \|\bm h \|^2_{L^\infty(\mathbb R^+;\mathbb L^2(\mathbb R^3))}.
\end{array}\end{equation*}
Therefore,
\begin{equation}
\label{estihna}
\forall a\geq 1, \; \forall n,\; \|\bm h^n_a \|_{\mathbb L^2(\mathbb R^3)}\leq 2 \|\bm h \|_{L^\infty(\mathbb R^+;\mathbb L^2(\mathbb R^3))}.
\end{equation}
In the same way, we prove that
\begin{equation}
\label{estiena}
\forall a\geq 1, \; \forall n,\; \|\bm e^n_a \|_{\mathbb L^2(\mathbb R^3)}\leq 2 \|\bm e \|_{L^\infty(\mathbb R^+;\mathbb L^2(\mathbb R^3))}.
\end{equation}
So for a fixed value of $a$ we can assume by extracting a subsequence that $\bm h^n_a$ and $\bm e^n_a$ converge weakly in $\mathbb L^2(\mathbb R^3)$ when $n$ tends to $+\infty$:
\begin{equation*}\bm h^n_a \rightharpoonup \bm h_a \mbox{ and } \bm e^n_a \rightharpoonup \bm e_a\mbox{ weakly in } \mathbb L^2(\mathbb R^3)\mbox{ when }n\rightarrow +\infty.\end{equation*}
In the weak formulation \eqref{eq:WeakFormulationMagnetization}, we take $\bm \phi(t,x)=\frac{1}{2a}\rho_a(t-t_n)\bm \psi (x)$ where $\bm \psi\in {\mathcal D}(\overline{\Omega})$. We obtain after the change of variables $s=t-t_n$:
\begin{equation*}\frac{1}{2a} \int_{-a}^a \int_\Omega \left( \dd{U_n}{t}-\alpha U_n {\wedge} \dd{U_n}{t}\right)\cdot \bm \psi(x) \rho_a(s) dx ds= T_1 + \ldots +T_6\end{equation*}
with
\begin{equation*}T_1=
(1+\alpha^2) A\frac{1}{2a}\int_{-a}^a \int_\Omega \sum_{i=1}^3
\left(U_n(s,\bm{x}){\wedge}\frac{\partial U_n}{\partial x_i}(s,\bm{x})\right)
\cdot\frac{\partial\bm{\psi}}{\partial x_i}(\bm{x})\rho_a(s){\mathrm{d}}\bm{x}{\mathrm{d}} s,\end{equation*}
\begin{equation*}T_2=(1+\alpha^2)\frac{1}{2a}\int_{-a}^a\int_\Omega
\left(U_n(s,\bm{x}){\wedge}\mathbf{K}(\bm{x})U_n(s,\bm{x})\right)
\cdot\bm{\psi}(\bm{x})\rho_a(s){\mathrm{d}}\bm{x}{\mathrm{d}} s,\end{equation*}
\begin{equation*}T_3=
-(1+\alpha^2)\frac{1}{2a}\int_{-a}^a\int_\Omega
\left(U_n(s,\bm{x}){\wedge}\bm{h}(t_n+s,\bm{x})\right)
\cdot\bm{\psi}(\bm{x})\rho_a(s){\mathrm{d}}\bm{x}{\mathrm{d}} s,\end{equation*}
\begin{equation*}T_4=
-(1+\alpha^2)K_s\frac{1}{2a}\int_{-a}^a
\int_{(\Gamma^\pm)}
(\bm{\nu}\cdot\gamma U_n)(\gamma U_n{\wedge}\bm{\nu})
\cdot\gamma\bm{\psi}(\hat{\bm{x}})\rho_a(s){\mathrm{d}} S(\hat{\bm{x}}){\mathrm{d}} s,\end{equation*}
\begin{equation*}T_5=
-(1+\alpha^2)J_1\frac{1}{2a}\int_{-a}^a
\int_{(\Gamma^\pm)}
(\gamma U_n {\wedge}\gamma^{*}U_n)\cdot\gamma\bm{\psi}(\hat{\bm{x}})\rho_a(s){\mathrm{d}} S(\hat{\bm{x}}){\mathrm{d}} s, \end{equation*}
\begin{equation*}T_6=
-2(1+\alpha^2)J_2
\frac{1}{2a}\int_{-a}^a
\int_{(\Gamma^\pm)}
(\gamma U_n\cdot\gamma^{*}U_n)(\gamma U_n{\wedge}\gamma^{*}U_n)
\cdot\gamma\bm{\psi}(\hat{\bm{x}})\rho_a(s){\mathrm{d}} S(\hat{\bm{x}}){\mathrm{d}} s.\end{equation*}
Now for a fixed value of the parameter $a$, we take the limit of the previous equation when $n$ tends to $+\infty$.
{\it Left hand side term:} we have the following estimates.
\begin{equation*}\begin{array}{r}
\left|
\frac{1}{2a} \int_{-a}^a \int_\Omega \left( \dd{U_n}{t}-\alpha U_n {\wedge} \dd{U_n}{t}\right)\cdot \bm \psi(x) \rho_a(s) dx ds\right|\\
\\
\leq (1+\alpha) \frac{1}{2a} \int_{-a}^a \rho_a(s) \|\dd{U_n}{t}(s,.)\|_{\mathbb L^2(\Omega)}\|\bm \psi\|_{\mathbb L^2(\Omega)} ds\\\\
\leq \frac{1}{\sqrt{2a} }\|\bm \psi\|_{\mathbb L^2(\Omega)}(1+\alpha)\left( \int_{-a}^a\int_\Omega \left|\dd{U_n}{t}\right|^2dxds\right)^{\frac{1}{2}}\\
\\
\leq \frac{1}{\sqrt{2a} }\|\bm \psi\|_{\mathbb L^2(\Omega)}(1+\alpha)\left( \int_{t_n-a}^{+\infty}\int_\Omega \left|\dd{\bm m}{t}\right|^2dxds\right)^{\frac{1}{2}}
\end{array}\end{equation*}
Since $\dd{\bm m}{t}\in L^2(\mathbb R^+;\mathbb L^2(\Omega))$, the last right hand side term tends to zero when $n$ (and so $t_n$) tends to $+\infty$. Therefore
\begin{equation*}\frac{1}{2a} \int_{-a}^a \int_\Omega \left( \dd{U_n}{t}-\alpha U_n {\wedge} \dd{U_n}{t}\right)\cdot \bm \psi(x) \rho_a(s) dx ds\longrightarrow 0 \mbox{ when }n\longrightarrow +\infty.\end{equation*}
{\it Limit for $T_1$:} since $U_n\longrightarrow \bm u$ strongly in $\mathbb L^2([-a,a]\times \Omega)$ and $\dd{U_n}{x_i}\rightharpoonup \dd{\bm u}{x_i}$ in $\mathbb L^2(]-a,a[\times \Omega)$ weak, we obtain that
\begin{equation*}T_1\longrightarrow (1+\alpha^2)A\frac{1}{2a} \int_{-a}^a \rho_a(s) ds \int_{\Omega}
\sum_{i=1}^3
\left(\bm u(\bm{x}){\wedge}\frac{\partial \bm u}{\partial x_i}(\bm{x})\right)
\cdot\frac{\partial\bm{\psi}}{\partial x_i}(\bm{x}){\mathrm{d}}\bm{x}.\end{equation*}
{\it Limit for $T_2$:} since $U_n$ tends to $\bm u$ strongly in $\mathbb L^2([-a,a]\times \Omega)$,
\begin{equation*}T_2\longrightarrow (1+\alpha^2)\frac{1}{2a} \int_{-a}^a \rho_a(s) ds \int_{\Omega}
\left(\bm u(\bm{x}){\wedge}\mathbf{K}(\bm{x})\bm u(\bm{x})\right)
\cdot\bm{\psi}(\bm{x}){\mathrm{d}}\bm{x}.\end{equation*}
{\it Limit for $T_3$:} we write
\begin{equation*}T_3=-(1+\alpha^2)\int_\Omega (\bm u {\wedge} \bm h^n_a)\cdot\bm \psi dx + (1+\alpha^2)\frac{1}{2a} \int_{-a}^a\int_\Omega ((\bm u - U_n){\wedge}\bm h(t_n+s, x))\cdot\bm \psi(x)\rho_a(s) dx ds.\end{equation*}
We estimate the right hand side term as follows:
\begin{equation*}\begin{array}{r}
\left| \frac{1}{2a} \int_{-a}^a\int_\Omega ((\bm u - U_n){\wedge}\bm h(t_n+s, x))\cdot\bm \psi(x)\rho_a(s) dx ds\right|\hspace{1cm}\\
\\
\leq \|\bm \psi\|_{\mathbb L^\infty(\Omega)}\|\bm u -U_n\|_{\mathbb L^2(]-a,a[\times \Omega)} \|\bm h\|_{\mathbb L^2([t_n-a,t_n+a]\times \Omega)}.
\end{array}\end{equation*}
Since $U_n$ tends to $\bm u$ in $\mathbb L^2(]-a,a[\times \Omega)$ and $\bm h$ is bounded in $\mathbb L^2([t_n-a,t_n+a]\times \Omega)$ uniformly in $n$ (for a fixed $a$), we obtain that
\begin{equation*}T_3\longrightarrow -(1+\alpha^2)\int_\Omega (\bm{u} {\wedge} \bm h_a)\cdot\bm \psi dx.\end{equation*}
{\it Limit for $T_4$, $T_5$ and $T_6$:} since $\gamma(U_n)\longrightarrow \gamma(\bm{u})$ strongly in $\mathbb L^p([-a,a]\times \Gamma^\pm)$ for $p<+\infty$, the same occurs for $\gamma^*(U_n)$ so that we obtain:
\begin{equation*}T_4\longrightarrow -(1+\alpha^2)K_s \frac{1}{2a} \int_{-a}^a \rho_a(s){\mathrm{d}} s \int_{(\Gamma^\pm)}
(\bm{\nu}\cdot\gamma \bm{u})(\gamma \bm u {\wedge}\bm{\nu})
\cdot\gamma\bm{\psi}(\hat{\bm{x}}){\mathrm{d}} S(\hat{\bm{x}}),\end{equation*}
\begin{equation*}T_5\longrightarrow
-(1+\alpha^2)J_1\frac{1}{2a}\int_{-a}^a\rho_a(s){\mathrm{d}} s
\int_{(\Gamma^\pm)}
(\gamma \bm{u} {\wedge}\gamma^{*}\bm{u})\cdot\gamma\bm{\psi}(\hat{\bm{x}}){\mathrm{d}} S(\hat{\bm{x}}), \end{equation*}
\begin{equation*}T_6\longrightarrow
-2(1+\alpha^2)J_2
\frac{1}{2a}\int_{-a}^a\rho_a(s){\mathrm{d}} s
\int_{(\Gamma^\pm)}
(\gamma \bm{u}\cdot\gamma^{*}\bm{u})(\gamma \bm{u}{\wedge}\gamma^{*}\bm{u})
\cdot\gamma\bm{\psi}(\hat{\bm{x}}){\mathrm{d}} S(\hat{\bm{x}}).\end{equation*}
So we obtain that $\bm{u}$ satisfies for all $\bm \psi\in {\mathcal D}(\overline{\Omega})$:
\begin{equation*}
\begin{array}{l}
A \int_{\Omega}
\sum_{i=1}^3
\left(\bm u(\bm{x}){\wedge}\frac{\partial \bm u}{\partial x_i}(\bm{x})\right)
\cdot\frac{\partial\bm{\psi}}{\partial x_i}(\bm{x}){\mathrm{d}}\bm{x}
+ \int_{\Omega}
\left(\bm u(\bm{x}){\wedge}\mathbf{K}(\bm{x})\bm u(\bm{x})\right)
\cdot\bm{\psi}(\bm{x}){\mathrm{d}}\bm{x}\\
\\
-\frac{2a}{\int_{-a}^a \rho_a(s){\mathrm{d}} s}\int_\Omega (\bm{u} {\wedge} \bm h_a)\cdot\bm \psi\, dx
-K_s \int_{(\Gamma^\pm)}
(\bm{\nu}\cdot\gamma \bm{u})(\gamma \bm u {\wedge}\bm{\nu})
\cdot\gamma\bm{\psi}(\hat{\bm{x}}){\mathrm{d}} S(\hat{\bm{x}})\\
\\
-J_1\int_{(\Gamma^\pm)}
(\gamma \bm{u} {\wedge}\gamma^{*}\bm{u})\cdot\gamma\bm{\psi}(\hat{\bm{x}}){\mathrm{d}} S(\hat{\bm{x}})
-2J_2\int_{(\Gamma^\pm)}
(\gamma \bm{u}\cdot\gamma^{*}\bm{u})(\gamma \bm{u}{\wedge}\gamma^{*}\bm{u})
\cdot\gamma\bm{\psi}(\hat{\bm{x}}){\mathrm{d}} S(\hat{\bm{x}})=0.
\end{array}\end{equation*}
We remark that, by density, we can extend this equality to all $\bm \psi \in \mathbb H^1(\Omega)$.
We take now the limit when $a$ tends to $+\infty$. By definition of $\rho_a$ we obtain that
\begin{equation*}\frac{2a}{\int_{-a}^a \rho_a(s){\mathrm{d}} s}\longrightarrow 1.\end{equation*}
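Indeed, since $0\leq \rho_a\leq 1$ and $\rho_a=1$ on $[-a+1,a-1]$, for $a>1$ we have
\begin{equation*}
1\leq \frac{2a}{\int_{-a}^a \rho_a(s){\mathrm{d}} s}\leq \frac{2a}{2(a-1)}=\frac{a}{a-1}\longrightarrow 1\mbox{ when }a\longrightarrow +\infty.
\end{equation*}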
Concerning $h_a$, by taking the weak limit in Estimate \eqref{estihna}, we obtain that:
\begin{equation}
\label{estiha}
\forall a\geq 1, \;
\| \bm h_a \|_{\mathbb L^2(\mathbb R ^3)}\leq 2 \| \bm h \|_{L^\infty(\mathbb R^+; \mathbb L^2(\mathbb R ^3))}.
\end{equation}
So by extracting a subsequence, we can assume that
\begin{equation*}\bm h_a -\hspace{-2mm}\rightharpoonup \bm H \mbox{ in } \mathbb L^2(\mathbb R^3) \mbox{ weak when }a\longrightarrow +\infty.
\end{equation*}
In \eqref{eq:WeakFormulationExcitation}, we take $\bm \psi(t,x)=\theta_a(t-t_n)\nabla \xi(x)$ where $\xi \in {\mathcal D}(\mathbb R^3)$ and where
\begin{equation*}\theta_a(t)=\int_a^t \rho_a(s){\mathrm{d}} s.\end{equation*}
We obtain then that
\begin{equation*}\begin{array}{r}
-\mu_0\int_{-a}^a\int_{\mathbb{R}^3}(\bm{h}(t_n+s,\bm x)+\overline{U_n(s,\bm x)})\cdot
\nabla \xi (\bm x) \rho_a(s){\mathrm{d}}\bm{x}{\mathrm{d}} s\\
\\
=\mu_0\int_{\mathbb{R}^3}(\bm{h}_0+\overline{\bm{m}_0})\cdot \nabla \xi(\bm x) \theta_a(-t_n) {\mathrm{d}}\bm{x}=0\end{array}
\end{equation*}
since $\mbox{div } (\bm h_0 + \overline{\bm m_0})=0$.
So for all $\xi \in {\mathcal D}'(\mathbb R^3)$, for all $a\geq 1$ and all $n$ great enough,
\begin{equation*}-\mu_0 \int_{\mathbb{R}^3}(\bm h_a^n(\bm x) +\frac{1}{2a}\int_{-a}^a \overline{U_n(s,\bm x)}\rho_a(s) {\mathrm{d}} s) \cdot \nabla \xi (\bm x) {\mathrm{d}} \bm x=0.\end{equation*}
We take the limit of this equality when $n$ tends to $+\infty$ for a fixed $a$:
\begin{equation*}-\mu_0 \int_{\mathbb{R}^3}(\bm h_a(\bm x) +\frac{1}{2a}\int_{-a}^a \rho_a(s) {\mathrm{d}} s \overline{\bm u(\bm x)}) \cdot \nabla \xi (\bm x) {\mathrm{d}} \bm x=0,\end{equation*}
and taking the limit when $a$ tends to $+\infty$, we get:
\begin{equation*}-\mu_0 \int_{\mathbb{R}^3}(\bm H(\bm x) + \overline{\bm u(\bm x)}) \cdot \nabla \xi (\bm x) {\mathrm{d}} \bm x=0,\end{equation*}
that is
\begin{equation*}\mbox{div } (\bm H + \overline{\bm u})=0\mbox{ in }{\mathcal D}'(\mathbb R^3).\end{equation*}
In \eqref{eq:WeakFormulationElectric}, we take $\bm{\Theta}(t,x)=\frac{1}{2a}\rho_a(t-t_n) \xi(x)$, where $\xi \in {\mathcal D}(\mathbb R^3)$. We obtain:
\begin{multline}\label{machin}
-\varepsilon_0\frac{1}{2a}\int_{-a}^a \int_{\mathbb{R}^3}\bm{e}(t_n+s,\bm x) \cdot
\rho_a'(s) \xi(\bm x) {\mathrm{d}}\bm{x}{\mathrm{d}} s
-\int_{\mathbb{R}^3}\bm{h}_a^n\cdot\operatorname{rot}{\xi}{\mathrm{d}}\bm{x}
\\+\sigma\int_{\Omega}\bm{e}_a^n\cdot \xi(\bm x){\mathrm{d}} \bm x +\sigma \int_\Omega \frac{1}{2a}\int_{-a}^a \bm{f}(t_n+s, \bm x) \rho_a(s) \xi(\bm x){\mathrm{d}}\bm{x}{\mathrm{d}} s
\\=
\varepsilon_0\int_{\mathbb{R}^3}\bm{e}_0\cdot \xi(x) \rho_a(-t_n){\mathrm{d}}\bm{x}.
\end{multline}
For $n$ large enough, the right hand side term vanishes. We denote by $\gamma_a^n$ the term:
\begin{equation*}\gamma_a^n=-\varepsilon_0\frac{1}{2a}\int_{-a}^a \int_{\mathbb{R}^3}\bm{e}(t_n+s,\bm x) \cdot
\rho_a'(s) \xi(\bm x) {\mathrm{d}}\bm{x}{\mathrm{d}} s.\end{equation*}
We have:
\begin{equation*}\left\vert \gamma_a^n\right\vert \leq \frac{\varepsilon_0}{a}\| \xi\|_{L^2(\mathbb R^3)} \|\bm e \|_{L^\infty(\mathbb R^+;\mathbb L^2(\mathbb R^3))}.\end{equation*}
So for a fixed $a$, we can extract a subsequence, still denoted $\gamma_a^n$, which converges to a limit $\gamma_a$ such that
\begin{equation*}\vert \gamma_a\vert\leq \frac{\varepsilon_0}{a}\| \xi\|_{L^2(\mathbb R^3)} \|\bm e \|_{L^\infty(\mathbb R^+;\mathbb L^2(\mathbb R^3))}.\end{equation*}
Moreover,
\begin{equation*}
\begin{split}
&\phantom{\leq}\left\vert \frac{1}{2a}\int_{-a}^a \int_\Omega \bm{f}(t_n+s, \bm x)
\rho_a(s) \xi(\bm x){\mathrm{d}}\bm{x}{\mathrm{d}} s\right\vert \\
&\leq\frac{1}{2a}\left(\int_{t_n-a}^{t_n+a} \|\bm f(s,\cdot) \|^2_{\mathbb
L^2(\Omega)}{\mathrm{d}} s\right)^{\frac{1}{2}}\left(\int_{-a}^a
(\rho_a(s))^2{\mathrm{d}} s\right)^{\frac{1}{2}}\|\xi\|_{\mathbb
L^2(\Omega)}.
\end{split}
\end{equation*}
So
\begin{equation*}\left\vert \frac{1}{2a}\int_{-a}^a \int_\Omega \bm{f}(t_n+s, \bm x) \rho_a(s) \xi(\bm x){\mathrm{d}}\bm{x}{\mathrm{d}} s\right\vert \leq\frac{1}{\sqrt{2a}}\|\xi\|_{\mathbb L^2(\Omega)}
\left(\int_{t_n-a}^{+\infty} \|\bm f(s,\cdot) \|^2_{\mathbb L^2(\Omega)}{\mathrm{d}} s\right)^{\frac{1}{2}}\end{equation*}
thus for a fixed $a$, since $\bm f\in \mathbb L^2(\mathbb{R}^+\times \Omega)$, this term tends to zero as $n$ tends to $+\infty$.
Therefore taking the limit when $n$ tends to $+\infty$ in \eqref{machin} we obtain:
\begin{equation*}\gamma_a -\int_{\mathbb{R}^3}\bm{h}_a\cdot\operatorname{rot}{\xi}{\mathrm{d}}\bm{x}+\sigma \int_{\Omega}\bm{e}_a\cdot \xi(\bm x){\mathrm{d}} \bm x =0.\end{equation*}
Since $\vert\gamma_a\vert\leq \frac{\varepsilon_0}{a}\| \xi\|_{L^2(\mathbb R^3)} \|\bm e \|_{L^\infty(\mathbb R^+;\mathbb L^2(\mathbb R^3))}$ tends to zero, taking the limit as $a$ tends to $+\infty$ yields
\begin{equation}
\label{bidule}
-\int_{\mathbb{R}^3}\bm{H}\cdot\operatorname{rot}{\xi}{\mathrm{d}}\bm{x}+\sigma \int_{\Omega}\bm{E}\cdot \xi(\bm x){\mathrm{d}} \bm x =0,
\end{equation}
where $\bm E$ is a weak limit of a subsequence of $(\bm e_a)_a$.
In the same way, in \eqref{eq:WeakFormulationExcitation}, we take $\bm \psi (t,\bm x)=\rho_a(t-t_n)\xi(\bm x)$. By the same arguments, we obtain that
\begin{equation*}\int_{\mathbb R^3} \bm E \cdot\operatorname{rot} \xi=0,\end{equation*}
that is $\operatorname{rot} \bm E=0$ in ${\mathcal D}'(\mathbb R^3)$.
So we remark that $\bm E$ is in $\mathbb H_{\mathrm{curl}}(\mathbb R^3)$ and, by density of ${\mathcal D}(\mathbb R^3)$ in this space, we can take $\xi = \bm E$ in \eqref{bidule}. We then obtain that
\begin{equation*}\sigma\int_\Omega \vert \bm E\vert ^2 =0.\end{equation*}
Therefore we obtain from \eqref{bidule} that
\begin{equation*}\forall \xi \in {\mathcal D}(\mathbb R^3), \; \; \int_{\mathbb{R}^3}\bm{H}\cdot\operatorname{rot}{\xi}{\mathrm{d}}\bm{x}=0,
\end{equation*}
that is $\operatorname{rot} \bm H =0$ in ${\mathcal D}'(\mathbb R^3)$.
So $\bm H$ satisfies:
\begin{equation*}\begin{array}{l}
\mbox{div }(\bm H + \overline{\bm u})=0,\\
\\
\operatorname{rot} \bm H =0.
\end{array}\end{equation*}
This concludes the proof of Theorem \ref{theo-omegalim}.
\section{Conclusion}
In this paper, we have proven the existence of solutions to the
Landau-Lifshitz-Maxwell system with nonlinear Neumann boundary
conditions arising from surface energies. We have also characterized
the $\omega$-limit set of those weak solutions.
Further improvements should be possible. On the one hand, we expect
that these results can be extended to curved spacers. No
fundamentally new idea should be necessary to carry out such an
extension as long as the spacer fully separates the
domain into two parts. However, even in that case, the technicalities
would lengthen the proof and the statement of the theorem, as it would
be necessary to write down geometric conditions on the spacer (the
spacer cannot share a tangent plane with the domain boundary, as this
would create cusps).
On the other hand, the construction of more regular solutions for this model remains open.
\end{document}
\begin{document}
\baselineskip=17pt
\subjclass[2010]{Primary 14J60; Secondary 14H60, 14J10}
\author{Rupam Karmakar}
\address{Institute of Mathematical Sciences\\ CIT Campus, Taramani, Chennai 600113, India and Homi Bhabha National Institute, Training School Complex, Anushakti Nagar, Mumbai 400094, India}
\email[Rupam Karmakar]{[email protected]}
\begin{abstract}
Let $X = \mathbb{P}(E_1) \times_C \mathbb{P}(E_2)$ where $C$ is a smooth curve and $E_1$, $E_2$ are vector bundles over $C$. In this paper we compute the pseudo-effective cones of higher-codimension cycles on $X$.
\end{abstract}
\title{Effective cones of cycles on products of projective bundles over curves}
\maketitle
\section{Introduction}
The cones of divisors and curves on projective varieties have been extensively studied over the years and by now are quite well understood. However, more recently the theory of cones of cycles of higher dimension has been the subject of increasing interest (see \cite{F}, \cite{DELV}, \cite{DJV}, \cite{CC} etc). Lately, there has been significant progress in the theoretical understanding of such cycles, due to \cite{FL1}, \cite{FL2} and others. But the number of examples where the cone of effective cycles has been explicitly computed is relatively small to date (\cite{F}, \cite{CLO}, \cite{PP} etc).
Let $E_1$ and $E_2$ be two vector bundles over a smooth curve $C$ and consider the fibre product $X = \mathbb{P}(E_1) \times_C \mathbb{P}(E_2)$. Motivated by the results in \cite{F}, in this paper, we compute the cones of effective cycles on $X$ in the following cases.
Case I: When both $E_1$ and $E_2$ are semistable vector bundles of rank $r_1$ and $r_2$ respectively, the cones of effective codimension-$k$ cycles are described in Theorem 3.2.
Case II: When neither $E_1$ nor $E_2$ is semistable, the cones of low-dimensional effective cycles are computed in Theorem 3.3 and the remaining cases in Theorem 3.5.
\section{Preliminaries}
Let $X$ be a smooth projective variety of dimension $n$. $N_k(X)$ is the real vector space of $k$-cycles on $X$ modulo numerical equivalence. For each $k$, $N_k(X)$ is a real vector space of finite dimension. Since $X$ is smooth, we can identify $N_k(X)$ with the abstract dual $N^{n - k}(X) := N_{n - k}(X)^\vee$ via the intersection pairing $N_k(X) \times N_{n - k}(X) \longrightarrow \mathbb{R}$.
For any $k$-dimensional subvariety $Y$ of $X$, let $[Y]$ be its class in $N_k(X)$. A class $\alpha \in N_k(X)$ is said to be effective if there exist subvarieties $Y_1, Y_2, ... , Y_m$ and non-negative real numbers $n_1, n_2, ..., n_m$ such that $\alpha$ can be written as $ \alpha = \sum n_i Y_i$. The \textit{pseudo-effective cone} \, $\overline{\Eff}_k(X) \subset N_k(X)$ is the closure of the cone generated by classes of effective cycles. It is full-dimensional and does not contain any nonzero linear subspaces. The pseudo-effective dual classes form a closed cone in $N^k(X)$ which we denote by $\overline{\Eff}^k(X)$.
For smooth varieties $Y$ and $Z$, a map $f: N^k(Y) \longrightarrow N^k(Z)$ is called pseudo-effective if $f(\overline{\Eff}^k(Y)) \subset \overline{\Eff}^k(Z)$.
The \textit{nef cone} $\Nef^k(X) \subset N^k(X)$ is the dual of $\overline{\Eff}_k(X) \subset N_k(X)$ via the pairing $N^k(X) \times N_k(X) \longrightarrow \mathbb{R}$, i.e.,
\begin{align*}
\Nef^k(X) := \Big\{ \alpha \in N^k(X) \;|\; \alpha \cdot \beta \geq 0 \;\; \forall \beta \in \overline{\Eff}_k(X) \Big\}
\end{align*}
\section{Cone of effective cycles}
Let $E_1$ and $E_2$ be two vector bundles over a smooth curve $C$ of rank $r_1$, $r_2$ and degrees $d_1$, $d_2$ respectively. Let $\mathbb{P}(E_1) = \bf Proj $ $(\oplus_{d \geq 0}Sym^d(E_1))$ and $\mathbb{P}(E_2) = \bf Proj $ $(\oplus_{d \geq 0}Sym^d(E_2))$ be the
associated projective bundle together with the projection morphisms $\pi_1 : \mathbb{P}(E_1) \longrightarrow C$ and $\pi_2 : \mathbb{P}(E_2) \longrightarrow C$ respectively. Let $X = \mathbb{P}(E_1) \times_C \mathbb{P}(E_2)$ be the fibre product over
$C$. Consider the following commutative diagram:
\begin{center}
\begin{equation}
\begin{tikzcd}
X = \mathbb{P}(E_1) \times_C \mathbb{P}(E_2) \arrow[r, "p_2"] \arrow[d, "p_1"]
& \mathbb{P}(E_2)\arrow[d,"\pi_2"]\\
\mathbb{P}(E_1) \arrow[r, "\pi_1" ]
& C
\end{tikzcd}
\end{equation}
\end{center}
Let $f_1,f_2$ and $F$ denote the numerical equivalence classes of the fibres of the maps $\pi_1,\pi_2$ and $\pi_1 \circ p_1 = \pi_2 \circ p_2$ respectively. Note that, $X \cong \mathbb{P}(\pi_1^*(E_2)) \cong \mathbb{P}(\pi_2^*(E_1))$.
We first fix the following notations for the numerical equivalence classes,
\begin{center}
$\eta_1 = [\mathcal{O}_{\mathbb{P}(E_1)}(1)] \in N^1(\mathbb{P}(E_1))$ \hspace{3mm}, \hspace{3mm} $\eta_2 = [\mathcal{O}_{\mathbb{P}(E_2)}(1)] \in N^1(\mathbb{P}(E_2)),$
\end{center}
\begin{center}
$\xi_1 = [\mathcal{O}_{\mathbb{P}(\pi_1^*(E_2))}(1)]$ \hspace{2mm}, \hspace{2mm} $\xi_2 = [\mathcal{O}_{\mathbb{P}(\pi_2^*(E_1))}(1)]$\hspace{2mm} ,\hspace{2mm} $\zeta_1 = p_1^*(\eta_1)$\hspace{2mm} , \hspace{2mm} $\zeta_2 = p_2^*(\eta_2) $
\end{center}
\begin{center}
$ \zeta_1 = \xi_2$,\, $\zeta_2 = \xi_1$ \hspace{2mm}, \hspace{2mm} $F= p_1^\ast(f_1) = p_2^\ast(f_2)$
\end{center}
We here summarise some results that have been discussed in \cite{KMR} (see Section 3 in \cite{KMR} for more details):
\begin{center}
$N^1(X)_\mathbb{R} = \mathbb{R}\zeta_1 \oplus \mathbb{R}\zeta_2 \oplus \mathbb{R}F,$
$\zeta_1^{r_1}\cdot F = 0\hspace{1.5mm},\hspace{1.5mm} \zeta_1^{r_1 + 1} = 0 \hspace{1.5mm}, \hspace{1.5mm} \zeta_2^{r_2}\cdot F = 0 \hspace{1.5mm}, \hspace{1.5mm} \zeta_2^{r_2 + 1} = 0 \hspace{1.5mm}, \hspace{1.5mm} F^2 = 0$ ,
$\zeta_1^{r_1} = (\deg(E_1))F\cdot\zeta_1^{r_1-1}\hspace{1.5mm}, \hspace{1.5mm} \zeta_2^{r_2} = (\deg(E_2))F\cdot\zeta_2^{r_2-1}\hspace{3.5mm}, \hspace{3.5mm}$
$\zeta_1^{r_1}\cdot\zeta_2^{r_2-1} = \deg(E_1)\hspace{3.5mm}, \hspace{3.5mm} \zeta_2^{r_2}\cdot\zeta_1^{r_1-1} = \deg(E_2)$.
\end{center}
Also, the dual basis of $N_1(X)_\mathbb{R}$ is given by $\{\delta_1, \delta_2, \delta_3\}$, where
\begin{center}
$\delta_1 = F\cdot\zeta_1^{r_1-2}\cdot\zeta_2^{r_2-1}, $
$\delta_2 = F\cdot\zeta_1^{r_1-1}\cdot\zeta_2^{r_2-2},$
$\delta_3 = \zeta_1^{r_1-1}\cdot\zeta_2^{r_2-1} - \deg(E_1)F\cdot\zeta_1^{r_1-2}\cdot\zeta_2^{r_2-1} - \deg(E_2)F\cdot\zeta_1^{r_1-1}\cdot\zeta_2^{r_2-2}.$
\end{center}
\begin{thm}
Let $r_1 = \rank(E_1)$ and $r_2 = \rank(E_2)$ and without loss of generality assume that $ r_1 \leq r_2$. Then, for each $k$, a basis of $N^k(X)$ is given by
$$
N^k(X) =
\begin{cases}
\Big( \{ \zeta_1^i \cdot \zeta_2^{k - i}\}_{i = 0}^k, \{ F \cdot \zeta_1^j \cdot \zeta_2^{k - j - 1} \}_{j = 0}^ {k - 1} \Big) & if \quad k < r_1\\ \\
\Big( \{ \zeta_1^i \cdot \zeta_2^{k - i} \}_{i = 0} ^{r_1 - 1}, \{ F \cdot \zeta_1^j \cdot \zeta_2^{k - j - 1} \}_{j = 0}^ {r_1 - 1} \Big) & if \quad r_1 \leq k < r_2 \\ \\
\Big( \{ \zeta_1^i \cdot \zeta_2^{k - i} \}_{i = t+1} ^{r_1 - 1} , \{ F \cdot \zeta_1^j \cdot \zeta_2^{k - j - 1} \}_{j = t}^{r_1 - 1} \Big) & if \quad k = r_2 + t \quad where \quad t \in \{0, 1, 2, ..., r_1 - 2 \}.
\end{cases}
$$
\end{thm}
\begin{proof}
To begin with, consider the case where $ k < r_1$. We know that $X \cong \mathbb{P}(\pi_2^* E_1)$ and the natural morphism $ \mathbb{P}(\pi_2^*E_1) \longrightarrow \mathbb{P}(E_2)$ can be identified with $p_2$. With the above identifications in place, the Chow group of $X$ satisfies the following isomorphism [see Theorem 3.3, page 64, \cite{Ful}]
\begin{align}
A(X) \cong \bigoplus_{i = 0}^ {r_1 - 1} \zeta_1^i A(\mathbb{P}(E_2))
\end{align}
\quad Choose $i_1, i_2$ such that $ 0\leq i_1 < i_2 \leq k$. Consider the $k$-cycle $\alpha := F \cdot \zeta_1^{r_1 - i_1 -1} \cdot \zeta_2^{r_2 + i_1 -k - 1}$.
Then $\zeta_1^{i_1} \cdot \zeta_2^{k - i_1} \cdot \alpha = 1$ but $ \zeta_1^{i_2} \cdot \zeta_2^{k - i_2} \cdot \alpha = 0$. So, $\{ \zeta_1^{i_1} \cdot \zeta_2^{k - i_1} \}$ and $\{\zeta_1^{i_2} \cdot \zeta_2^{k - i_2} \}$ can not be numerically equivalent.
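For instance, these products can be computed directly from $F\cdot\zeta_1^{r_1-1}\cdot\zeta_2^{r_2-1}=1$ (the class of a point) and $F\cdot\zeta_1^{r_1}=0$:
\begin{align*}
\zeta_1^{i_1} \cdot \zeta_2^{k - i_1} \cdot \alpha &= F\cdot\zeta_1^{r_1-1}\cdot\zeta_2^{r_2-1}=1,\\
\zeta_1^{i_2} \cdot \zeta_2^{k - i_2} \cdot \alpha &= F\cdot\zeta_1^{r_1+(i_2-i_1)-1}\cdot\zeta_2^{r_2-(i_2-i_1)-1}=0,
\end{align*}
the second product vanishing because $i_2>i_1$.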
Similarly, take $j_1, j_2$ such that $ 0 \leq j_1 < j_2 \leq k$ and consider the $k$-cycle\\ $\beta := \zeta_1^{r_1 - j_1 - 1} \cdot \zeta_2^{r_2 + j_1 - k}$.
Then as before it happens that $F \cdot \zeta_1^{j_1} \cdot \zeta_2^{k - j_1 - 1} \cdot \beta = 1$ but $F\cdot \zeta_1^{j_2} \cdot \zeta_2^{k - j_2 - 1} \cdot \beta = 0$. So $\{ F \cdot \zeta_1^{j_1} \cdot \zeta_2^{k - j_1 - 1}\}$ and $\{ F \cdot \zeta_1^{j_2} \cdot \zeta_2^{k - j_2 - 1} \}$ can not be numerically equivalent.
For the remaining case, let us assume $0 \leq i \leq j \leq k$ and consider the $k$-cycle $\gamma := F \cdot \zeta_1^{r_1 - i -1} \cdot \zeta_2^{r_2 + i - 1 - k}$.
Then $\{ \zeta_1^{i} \cdot \zeta_2^{k - i} \} \cdot \gamma = 1$ and $ \{F \cdot \zeta_1^j \cdot \zeta_2^{k - j - 1} \} \cdot \gamma = 0$. So, they can not be numerically equivalent.
From these observations and $(2)$ we obtain a basis of $N^k(X)$ which is given by
\begin{align*}
N^k(X) = \Big( \{ \zeta_1^i \cdot \zeta_2^{k - i} \}_{i = 0}^ k, \{ F \cdot \zeta_1^j \cdot \zeta_2^{k - j - 1} \}_{j = 0}^{k - 1} \Big)
\end{align*}
For the case $ r_1 \leq k < r_2$ observe that $ \zeta_1^{r_1 + 1} = 0$, $ F \cdot \zeta_1^ {r_1} = 0$ and $ \zeta_1^{r_1} = \deg(E_1)F \cdot \zeta_1^{ r_1 - 1}$.
When $k \geq r_2$ we write $k = r_2 + t$ where $t$ ranges from $ 0$ to $r_1 - 1$. In that case the observations $ \zeta_2^{r_2 + 1} = 0$, $F\cdot\zeta_2^{r_2} = 0$ and $\zeta_2^{r_2} = \deg(E_2)F \cdot \zeta_2^{r_2 - 1}$ prove our claim.
\end{proof}
Now we are ready to treat the case where both $E_1$ and $E_2$ are semistable vector bundles over $C$.
\begin{thm}
Let $E_1$ and $E_2$ be two semistable vector bundles over $C$ of rank $r_1$ and $r_2$ respectively with $r_1 \leq r_2$ and $X = \mathbb{P}(E_1) \times_C \mathbb{P}(E_2)$.
Then for all $k \in \{1, 2, ..., r_1 + r_2 - 1 \}$
$$
\overline{\Eff}^k(X) =
\begin{cases}
\Bigg\langle \Big\{ (\zeta_1 - \mu_1F)^i (\zeta_2 - \mu_2F)^{k - i} \Big\}_{i = 0}^k, \Big\{ F \cdot \zeta_1^j \cdot\zeta_2^{k - j - 1} \Big\}_{j = 0}^{k - 1} \Bigg\rangle & if \quad k< r_1 \\ \\
\Bigg\langle \Big\{ (\zeta_1 - \mu_1F)^i (\zeta_2 - \mu_2F)^{k - i} \Big\}_{i = 0}^{r_1 - 1}, \Big\{ F \cdot \zeta_1^j \cdot \zeta_2^{k - j - 1} \Big\}_{j = 0}^ {r_1 - 1} \Bigg\rangle & if \quad r_1 \leq k < r_2 \\ \\
\Bigg\langle \Big\{ (\zeta_1 - \mu_1F)^i (\zeta_2 - \mu_2F)^{k - i} \Big\}_{i = t +1}^ {r_1 - 1}, \Big\{ F \cdot \zeta_1^j \cdot \zeta_2^{k - j -1} \Big\}_{j = t}^ {r_1 - 1} \Bigg\rangle & if \quad k = r_2 + t, \quad t = 0,..., r_1-1 .
\end{cases}
$$
where $\mu_1 = \mu(E_1)$ and $\mu_2 = \mu(E_2)$.
\end{thm}
\begin{proof}
Firstly, $(\zeta_1 - \mu_1F)^i \cdot(\zeta_2 - \mu_2F)^{k - i}$ and $ F\cdot \zeta_1^j \cdot \zeta_2^{k - j - 1} [ = F \cdot (\zeta_1 - \mu_1F)^j \cdot(\zeta_2 - \mu_2F)^{k - j -1} ]$ are intersections of nef divisors. So, they are pseudo-effective for all $i \in\{0, 1, 2, ..., k \}$ and $j \in\{0, 1, 2, ..., k-1 \}$.
Conversely, when $ k < r_1$, notice that we can write any element $C$ of $\overline{\Eff}^k(X)$ as \begin{align*}
C = \sum_{i = 0}^k a_i(\zeta_1 - \mu_1F)^i \cdot (\zeta_2 - \mu_2F)^{k - i} + \sum_{j = 0}^{k-1} b_j F\cdot\zeta_1^j \cdot\zeta_2^{k - j -1}
\end{align*}
where $a_i, b_i \in \mathbb{R}$.
For a fixed $i_1$ intersect $C$ with $D_{i_1} :=F \cdot(\zeta_1 - \mu_1F)^{r_1 - i_1 - 1} \cdot(\zeta_2 - \mu_2F)^{r_2 - k + i_1 -1}$ and for a fixed $j_1$ intersect $C$ with $D_{j_1}:= (\zeta_1 - \mu_1F)^{r_1 - j_1 - 1} \cdot(\zeta_2 - \mu_2F)^{r_2 +j_1 - k}$. These intersections lead us to
\begin{align*}
C \cdot D_{i_1} = a_{i_1}\quad and \quad C\cdot D_{j_1} = b_{j_1}
\end{align*}
Since $C \in \overline{\Eff}^k(X)$ and $D_{i_1}, D_{j_1}$ are intersections of nef divisors, $a_{i_1}$ and $b_{j_1}$ are non-negative. Now running $i_1$ through $\{ 0, 1, 2, ..., k \}$ and $j_1$ through $\{ 0, 1, 2, ..., k-1 \}$ we get that all the $a_i$'s and $b_j$'s are non-negative, and that proves our result for $k < r_1$. The cases where $r_1 \leq k < r_2$ and $k \geq r_2$ can be proved very similarly once the intersection products involving $\zeta_1$ and $\zeta_2$ on page $2$ are taken into account.
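We also record the relation that underlies these intersection computations; it follows from $\zeta_1^{r_1} = \deg(E_1)F\cdot\zeta_1^{r_1-1}$, $F^2=0$ and $\mu_1=\deg(E_1)/r_1$ (and similarly for $\zeta_2$):
\begin{equation*}
(\zeta_1-\mu_1F)^{r_1}=\zeta_1^{r_1}-r_1\mu_1F\cdot\zeta_1^{r_1-1}=\bigl(\deg(E_1)-r_1\mu_1\bigr)F\cdot\zeta_1^{r_1-1}=0.
\end{equation*}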
\end{proof}
Next we study the more interesting case where $E_1$ and $E_2$ are two unstable vector bundles of rank $r_1$ and $r_2$ and degree $d_1$ and $d_2$ respectively over a smooth curve $C$.
Let $E_1$ have the unique Harder-Narasimhan filtration
\begin{align*}
E_1 = E_{10} \supset E_{11} \supset ... \supset E_{1l_1} = 0
\end{align*}
with $Q_{1i} := E_{1(i-1)}/ E_{1i}$ being semistable for all $i \in [1,l_1-1]$. Denote $ n_{1i} = \rank(Q_{1i}), \\
d_{1i} = \deg(Q_{1i})$ and $\mu_{1i} = \mu(Q_{1i}) := \frac{d_{1i}}{n_{1i}}$ for all $i$.
Similarly, $E_2$ also admits the unique Harder-Narasimhan filtration
\begin{align*}
E_2 = E_{20} \supset E_{21} \supset ... \supset E_{2l_2} = 0
\end{align*}
with $ Q_{2i} := E_{2(i-1)} / E_{2i}$ being semistable for $i \in [1,l_2-1]$. Denote $n_{2i} = \rank(Q_{2i}), \\
d_{2i} = \deg(Q_{2i})$ and $\mu_{2i} = \mu(Q_{2i}) := \frac{d_{2i}}{n_{2i}}$ for all $i$.
Consider the natural inclusion $ \overline{i} = i_1 \times i_2 : \mathbb{P}(Q_{11}) \times_C \mathbb{P}(Q_{21}) \longrightarrow \mathbb{P}(E_1) \times_C \mathbb{P}(E_2)$, which is induced by the natural inclusions $i_1 : \mathbb{P}(Q_{11}) \longrightarrow \mathbb{P}(E_1)$ and $i_2 : \mathbb{P}(Q_{21}) \longrightarrow \mathbb{P}(E_2)$. In the next theorem we will see that the effective cycles of $ \mathbb{P}(E_1) \times_C \mathbb{P}(E_2)$ of dimension at most $n_{11} + n_{21} - 1$ come from cycles of $\mathbb{P}(Q_{11}) \times_C \mathbb{P}(Q_{21})$ via $\overline{i}$.
\begin{thm}
Let $E_1$ and $E_2$ be two unstable bundles of ranks $r_1$ and $r_2$ and degrees $d_1$ and $d_2$ respectively over a smooth curve $C$, with $r_1 \leq r_2$ without loss of generality, and let $X = \mathbb{P}(E_1) \times_C \mathbb{P}(E_2)$.
Then for all $k \in \{1, 2, ..., \mathbf{n} \} \, \, (\mathbf{n} := n_{11} + n_{21} - 1)$
$Case(1)$: \quad $n_{11} \leq n_{21}$
$$
\overline{\Eff}_k(X) =
\begin{cases}
\Bigg\langle\Big\{ [\mathbb{P}(Q_{11}) \times \mathbb{P}(Q_{21})] (\zeta_1 - \mu_{11}F)^i (\zeta_2 - \mu_{21}F)^{\mathbf{n} - k - i} \Big\}_{i = t + 1}^{n_{11} - 1}, \Big\{ F \cdot \zeta_1^{r_1 - n_{11} +j} \cdot \zeta_2^{r_2 + n_{11} - k - j - 2} \Big\}_{j = t}^{n_{11} - 1} \Bigg\rangle & \\ \qquad \qquad if \quad k < n_{11} \quad and \quad t = 0, 1, 2, ..., n_{11} - 2 \\ \\
\Bigg\langle\Big\{ [\mathbb{P}(Q_{11}) \times \mathbb{P}(Q_{21})] (\zeta_1 - \mu_{11}F)^i (\zeta_2 - \mu_{21}F)^{\mathbf{n} - k - i}\Big\}_{i = 0}^{n_{11} - 1}, \Big\{ F\cdot \zeta_1^{r_1 - n_{11} + j} \cdot \zeta_2^{r_2 + n_{11} - k - j - 2} \Big\}_{j = 0}^{n_{11} - 1} \Bigg\rangle & \\ \qquad \qquad if \quad n_{11} \leq k < n_{21}. \\ \\
\Bigg\langle\Big\{ [\mathbb{P}(Q_{11}) \times \mathbb{P}(Q_{21})] (\zeta_1 - \mu_{11}F)^i (\zeta_2 - \mu_{21}F)^{ \mathbf{n} - k - i}\Big\}_{i = 0}^{\mathbf{n} - k}, \Big\{ F \cdot \zeta_1^{r_1 - n_{11} + j} \cdot \zeta_2^{r_2 + n_{11} - k - j - 2} \Big\}_{j = 0}^{ \mathbf{n} - k} \Bigg\rangle & \\ \qquad \qquad if \quad k \geq n_{21}.
\end{cases}
$$
$Case(2)$: \quad $n_{21} \leq n_{11}$
$$
\overline{\Eff}_k(X) =
\begin{cases}
\Bigg\langle\Big\{ [\mathbb{P}(Q_{11}) \times \mathbb{P}(Q_{21})] (\zeta_2 - \mu_{21}F)^i (\zeta_1 - \mu_{11}F)^{\mathbf{n} - k - i} \Big\}_{i = t + 1}^{n_{21} - 1}, \Big\{ F \cdot \zeta_2^{r_2 - n_{21} +j} \cdot \zeta_1^{r_1 + n_{21} - k - j - 2} \Big\}_{j = t}^{n_{21} - 1} \Bigg\rangle & \\ \qquad \qquad if \quad k < n_{21} \quad and \quad t = 0, 1, 2, ..., n_{21} - 2 \\ \\
\Bigg\langle\Big\{ [\mathbb{P}(Q_{11}) \times \mathbb{P}(Q_{21})] (\zeta_2 - \mu_{21}F)^i (\zeta_1 - \mu_{11}F)^{\mathbf{n} - k - i}\Big\}_{i = 0}^{n_{21} - 1}, \Big\{ F\cdot \zeta_2^{r_2 - n_{21} + j} \cdot \zeta_1^{r_1 + n_{21} - k - j - 2} \Big\}_{j = 0}^{n_{21} - 1} \Bigg\rangle & \\ \qquad \qquad if \quad n_{21} \leq k < n_{11}. \\ \\
\Bigg\langle\Big\{ [\mathbb{P}(Q_{11}) \times \mathbb{P}(Q_{21})] (\zeta_2 - \mu_{21}F)^i (\zeta_1 - \mu_{11}F)^{ \mathbf{n} - k - i}\Big\}_{i = 0}^{\mathbf{n} - k}, \Big\{ F \cdot \zeta_2^{r_2 - n_{21} + j} \cdot \zeta_1^{r_1 + n_{21} - k - j - 2} \Big\}_{j = 0}^{ \mathbf{n} - k} \Bigg\rangle & \\ \qquad \qquad if \quad k \geq n_{11}.
\end{cases}
$$
Thus in both cases $ \overline{i}_\ast$ induces an isomorphism between $ \overline{\Eff}_k([\mathbb{P}(Q_{11}) \times_C \mathbb{P}(Q_{21})])$ and $\overline{\Eff}_k(X)$ for $ k \leq \mathbf{n}$.
\end{thm}
\begin{proof}
To begin with, consider $Case(1)$ and take $ k \geq n_{21}$. Since $ (\zeta_1 - \mu_{11}F)$ and $ (\zeta_2 - \mu_{21}F)$ are nef,
\begin{align*}
\phi_i := [\mathbb{P}(Q_{11}) \times \mathbb{P}(Q_{21})] (\zeta_1 - \mu_{11}F)^i (\zeta_2 - \mu_{21}F)^{ \mathbf{n} - k - i} \in \overline{\Eff}_k(X)
\end{align*}
for all $i \in \{ 0, 1, 2, ..., \mathbf{n} -k \}$.
Now the result in [Example 3.2.17, \cite{Ful}], adjusted to quotient bundles over curves, shows that
\begin{align*}
[\mathbb{P}(Q_{11})] = \eta_1^{r_1 - n_{11}} + (d_{11} - d_1)\eta_1^{r_1 - n_{11} - 1}f_1
\end{align*}
and
\begin{align*}
[\mathbb{P}(Q_{21})] = \eta_2^{r_2 - n_{21}} + (d_{21} - d_2)\eta_2^{r_2 - n_{21} - 1}f_2
\end{align*}
Also, $p_1^\ast[\mathbb{P}(Q_{11})] \cdot p_2^\ast[\mathbb{P}(Q_{21})] = [\mathbb{P}(Q_{11}) \times_C \mathbb{P}(Q_{21})]$. With a little calculation it can be shown that
$\phi_i \cdot (\zeta_1 - \mu_{11}F)^{n_{11} - i} \cdot (\zeta_2 - \mu_{21}F)^{k + i + 1 - n_{11}}$ \\
$ = (\zeta_1^{r_1 - n_{11}} + (d_{11} - d_1)F \cdot \zeta_1^{r_1 - n_{11} - 1})(\zeta_2^{r_2 - n_{21}} + (d_{21} - d_2)F \cdot \zeta_2^{r_2 - n_{21} - 1})(\zeta_1 - \mu_{11}F)^{n_{11} - i} \cdot (\zeta_2 - \mu_{21}F)^{k + i + 1 - n_{11}}$
$= (\zeta_1^{r_1} - d_1F \cdot \zeta_1^{r_1 - 1})(\zeta_2^{r_2} - d_2F\cdot \zeta_2^{r_2 - 1})$
$= 0$,
the last equality following from the Grothendieck relation $\zeta_\ell^{r_\ell} = d_\ell\, F \cdot \zeta_\ell^{r_\ell - 1}$, $\ell = 1, 2$.
So, the $\phi_i$'s are in the boundary of $\overline{\Eff}_k(X)$ for all $ i \in \{0, 1, ..., \mathbf{n} - k \}$. The fact that the $F \cdot \zeta_1^{r_1 - n_{11} + j} \cdot \zeta_2^{r_2 + n_{11} - k - j - 2}$'s are in the boundary of $\overline{\Eff}_k(X)$ for all $ j \in \{0, 1, ..., \mathbf{n} - k \}$ can be deduced from the proof of Theorem 2.2. The other cases can be proved similarly.
The proof of $Case(2)$ is similar to the proof of $Case(1)$.
Now, to show the isomorphism between pseudo-effective cones induced by $\overline{i}_\ast$, observe that $Q_{11}$ and $Q_{21}$ are semistable bundles over $C$. So, Theorem 2.2 gives the expressions for $\overline{\Eff}_k([\mathbb{P}(Q_{11}) \times_C \mathbb{P}(Q_{21})])$. Let $\zeta_{11} = \mathcal{O}_{\mathbb{P}(\tilde{\pi}_2^\ast(Q_{11}))}(1) =\tilde{p}_1^\ast(\mathcal{O}_{\mathbb{P}(Q_{11})}(1))$ and $\zeta_{21} = \mathcal{O}_{\mathbb{P}(\tilde{\pi}_1^\ast(Q_{21}))}(1) = \tilde{p}_2^\ast(\mathcal{O}_{\mathbb{P}(Q_{21})}(1))$, where $\tilde{\pi_2} = \pi_2|_{\mathbb{P}(Q_{21})}$, $ \tilde{\pi_1} = \pi_1|_{\mathbb{P}(Q_{11})}$ and $\tilde{p}_1 : \mathbb{P}(Q_{11}) \times_C \mathbb{P}(Q_{21}) \longrightarrow \mathbb{P}(Q_{11})$, $\tilde{p}_2 : \mathbb{P}(Q_{11}) \times_C\mathbb{P}(Q_{21}) \longrightarrow \mathbb{P}(Q_{21})$ are the projection maps. Also notice that $ \overline{i}^\ast \zeta_1 = \zeta_{11}$ and $\overline{i}^\ast \zeta_2 = \zeta_{21}$.
Using the above relations and the projection formula, the isomorphism between $ \overline{\Eff}_k([\mathbb{P}(Q_{11}) \times_C \mathbb{P}(Q_{21})])$ and $\overline{\Eff}_k(X)$ for $ k \leq \mathbf{n}$ can be proved easily.
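In a little more detail, the key point is the projection formula: for any class $\alpha \in N(X)$,
\begin{align*}
\overline{i}_\ast\big(\,\overline{i}^{\,\ast}\alpha\,\big) = \alpha \cdot \big[\mathbb{P}(Q_{11}) \times_C \mathbb{P}(Q_{21})\big] ,
\end{align*}
so, since $\overline{i}^{\,\ast}\zeta_1 = \zeta_{11}$, $\overline{i}^{\,\ast}\zeta_2 = \zeta_{21}$ and $\overline{i}^{\,\ast}F$ is the fibre class of $\mathbb{P}(Q_{11}) \times_C \mathbb{P}(Q_{21})$ over $C$, one checks that $\overline{i}_\ast$ carries the generators of $\overline{\Eff}_k([\mathbb{P}(Q_{11}) \times_C \mathbb{P}(Q_{21})])$ given by Theorem 2.2 to the generators of $\overline{\Eff}_k(X)$ appearing in the statement of the theorem.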
\end{proof}
Next we want to show that higher-dimensional pseudo-effective cycles on $X$ can be related to the pseudo-effective cycles on $ \mathbb{P}(E_{11}) \times_C \mathbb{P}(E_{21})$. More precisely, there is an isomorphism between $\overline{\Eff}^k(X)$ and $\overline{\Eff}^k([\mathbb{P}(E_{11}) \times_C \mathbb{P}(E_{21})])$ for $ k < r_1 + r_2 - 1 - \mathbf{n}$. Using the coning construction as in \cite{Ful}, we show this in two steps: first we establish an isomorphism between $\overline{\Eff}^k([\mathbb{P}(E_1) \times_C \mathbb{P}(E_2)])$ and $\overline{\Eff}^k([\mathbb{P}(E_{11}) \times_C \mathbb{P}(E_2)])$, and then an isomorphism between $\overline{\Eff}^k([\mathbb{P}(E_{11}) \times_C \mathbb{P}(E_2)])$ and $\overline{\Eff}^k([\mathbb{P}(E_{11}) \times_C \mathbb{P}(E_{21})])$ in a similar fashion. But before proceeding any further we need to explore some more facts.
Let $E$ be an unstable vector bundle over a non-singular projective variety $V$. There is a unique filtration
\begin{align*}
E = E^0 \supset E^1 \supset E^2 \supset ... \supset E^l = 0
\end{align*}
which is called the Harder-Narasimhan filtration of $E$, with $Q^i := E^{i - 1}/E^i$ being semistable for $ i \in [1, l - 1]$. Now the following short exact sequence
\begin{align*}
0 \longrightarrow E^1 \longrightarrow E \longrightarrow Q^1 \longrightarrow 0
\end{align*}
induced by the Harder-Narasimhan filtration of $E$ gives us the natural inclusion $j : \mathbb{P}(Q^1) \hookrightarrow \mathbb{P}(E)$. Considering $\mathbb{P}(Q^1)$ as a subscheme of $\mathbb{P}(E)$, we obtain the commutative diagram below by blowing up $\mathbb{P}(E)$ along $\mathbb{P}(Q^1)$.
\begin{center}
\begin{equation}
\begin{tikzcd}
\tilde{Y} = Bl_{\mathbb{P}(Q^1)}{\mathbb{P}(E)} \arrow[r, "\Phi"] \arrow[d, "\Psi"] & \mathbb{P}(E^1) = Z \arrow [d, "q"] \\
Y = \mathbb{P}(E) \arrow [r, "p"]
& V
\end{tikzcd}
\end{equation}
\end{center}
where $\Psi$ is the blow-down map.
\begin{thm}
With the above notation, there exists a locally free sheaf $G$ on $Z$ such that $\tilde{Y} \simeq \mathbb{P}_Z(G)$, with $\nu : \mathbb{P}_Z(G) \longrightarrow Z$ its corresponding bundle map.
In particular, if we set $V = \mathbb{P}(E_2)$, $ E = \pi_2^\ast E_1$, $E^1 = \pi_2^\ast E_{11}$ and $ Q^1 = \pi_2^\ast Q_{11}$, then the above commutative diagram becomes
\begin{center}
\begin{equation}
\begin{tikzcd}
\tilde{Y'} = Bl_{\mathbb{P}(\pi_2^\ast Q_{11})}{\mathbb{P}(\pi_2^\ast E_1)} \arrow[r, "\Phi'"] \arrow[d, "\Psi'"] &
\mathbb{P}(\pi_2^\ast E_{11}) = Z' \arrow[d, "\overline{p}_2"] \\
Y' = \mathbb{P}(\pi_2^\ast E_1) \arrow[r, "p_2"]
& \mathbb{P}(E_2)
\end{tikzcd}
\end{equation}
\end{center}
where $p_2 : \mathbb{P}(\pi_2^\ast E_1) \longrightarrow \mathbb{P}(E_2)$ and $\overline{p}_2 : \mathbb{P}(\pi_2^\ast E_{11}) \longrightarrow \mathbb{P}(E_2)$ are the projection maps, and there exists a locally free sheaf $G'$ on $Z'$ such that $\tilde{Y'} \simeq \mathbb{P}_{Z'}(G')$, with $\nu' : \mathbb{P}_{Z'}(G') \longrightarrow Z'$ its bundle map.
Now let $\zeta_{Z'} = \mathcal{O}_{Z'}(1)$, $\gamma = \mathcal{O}_{\mathbb{P}_{Z'}(G')}(1)$, $F$ the numerical equivalence class of a fibre of $\pi_2 \circ p_2$, $F_1$ the numerical equivalence class of a fibre of $\pi_2 \circ \overline{p}_2$, $\tilde{E}$ the class of the exceptional divisor of $\Psi'$ and $\zeta_1 = p_1^\ast(\eta_1) = \mathcal{O}_{\mathbb{P}(\pi_2^\ast E_1)}(1)$. Then we have the following relations:
\begin{align}
\gamma = (\Psi')^\ast \, \zeta_1, \quad (\Phi')^\ast \, \zeta_{Z'} = (\Psi')^\ast \, \zeta_1 - \tilde{E}, \quad (\Phi')^\ast F_1 = (\Psi')^\ast F
\end{align}
\begin{align}
\tilde{E} \cdot (\Psi')^\ast \, (\zeta_1 - \mu_{11}F)^{n_{11}} = 0
\end{align}
Additionally, if we also denote the support of the exceptional divisor of $\tilde{Y'}$ by $\tilde{E}$, then $\tilde{E} \cdot N(\tilde{Y'}) = (j_{\tilde{E}})_\ast N(\tilde{E})$, where $j_{\tilde{E}}: \tilde{E} \longrightarrow \tilde{Y'}$ is the canonical inclusion.
\end{thm}
\begin{proof}
With the above hypothesis the following commutative diagram is formed:
\begin{center}
\begin{tikzcd}
0 \arrow[r] & q^\ast E^1 \arrow[r] \arrow[d, two heads] & q^\ast E \arrow[r] \arrow[d] & q^\ast Q^1 \arrow[r] \arrow[d, equal] & 0 \\
0 \arrow[r] & \mathcal{O}_{\mathbb{P}(E^1)}(1) \arrow[r] & G \arrow[r] & q^\ast Q^1 \arrow[r] & 0
\end{tikzcd}
\end{center}
where $G$ is the push-out of the morphisms $ q^\ast E^1 \longrightarrow q^\ast E$ and $ q^\ast E^1 \longrightarrow \mathcal{O}_{\mathbb{P}(E^1)}(1)$ and the first vertical map is the natural surjection. Now let $W = \mathbb{P}_Z(G)$ and let $\nu : W \longrightarrow Z$ be its bundle map. So there is a canonical surjection $\nu^\ast G \longrightarrow \mathcal{O}_{\mathbb{P}_Z(G)}(1)$. Also note that $q^\ast E \longrightarrow G$ is surjective by the snake lemma. Combining these two we obtain a surjective morphism $\nu^\ast q^\ast E \longrightarrow \mathcal{O}_{\mathbb{P}_Z(G)}(1)$ which determines $\omega : W \longrightarrow Y$. We claim that we can identify $(\tilde{Y}, \Phi, \Psi)$ and $(W, \nu, \omega)$. Now consider the following commutative diagram:
\begin{center}
\begin{equation}
\begin{tikzcd}
W = \mathbb{P}_Z(G)
\arrow[drr, bend left, "\nu"]
\arrow[ddr, bend right, "\omega"]
\arrow[dr, "\mathbf{i}"] & & \\
& Y \times_V Z = \mathbb{P}_Z(q^\ast E) \arrow[r, "pr_2"] \arrow[d, "pr_1"]
& \mathbb{P}(E^1) = Z \arrow[d, "q"] \\
& Y = \mathbb{P}(E) \arrow[r, "p"]
& V
\end{tikzcd}
\end{equation}
\end{center}
where $\mathbf{i}$ is induced by the universal property of the fiber product. Since $\mathbf{i}$ can also be obtained from the surjective morphism $q^\ast E \longrightarrow G$, it is a closed immersion. Let $\mathcal{T}$ be the $\mathcal{O}_Y$-algebra $\mathcal{O}_Y \oplus \mathcal{I} \oplus \mathcal{I}^2 \oplus ...$, where $\mathcal{I}$ is the ideal sheaf of $\mathbb{P}(Q^1)$ in $Y$. We have an induced map of $\mathcal{O}_Y$-algebras $Sym(p^\ast E^1) \longrightarrow \mathcal{T} \ast \mathcal{O}_Y(1)$ which is onto because the image of the composition $ p^\ast E^1 \longrightarrow p^\ast E \longrightarrow \mathcal{O}_Y(1)$ is $ \mathcal{I} \otimes \mathcal{O}_Y(1)$. This induces a closed immersion
\begin{center}
$\mathbf{i}' : \tilde{Y} = Proj(\mathcal{T} \ast \mathcal{O}_Y(1)) \longrightarrow Proj(Sym(p^\ast E^1)) = Y \times_V Z$.
\end{center}
$\mathbf{i}'$ fits into a commutative diagram similar to $(5)$, and as a result $\Phi$ and $\Psi$ factor through $pr_2$ and $pr_1$.
Both $W$ and $\tilde{Y}$ lie inside $Y \times_V Z$; $\omega$ and $\Psi$ factor through $pr_1$, and $\nu$ and $\Phi$ factor through $pr_2$. So, to prove the identification between $(\tilde{Y}, \Phi, \Psi )$ and $(W, \nu, \omega)$, it is enough to show that $ \tilde{Y} \cong W$. This can be checked locally. So, after choosing a suitable open cover for $V$, it is enough to prove $\tilde{Y} \cong W$ restricted to each of these open sets. Also we know that $p^{-1}(U) \cong \mathbb{P}_U ^{rk(E) - 1}$ when $E_{|U}$ is trivial, where $\mathbb{P}_U^n = \mathbb{P}_{\mathbb{C}}^n \times U$. Now the isomorphism follows from [Proposition 9.11, \cite{EH}] after adjusting the definition of projectivization in terms of \cite{H}.
We now turn our attention to the diagram $(3)$. Observe that if we fix the notation $W' = \mathbb{P}_{Z'}(G')$ with $\omega' : W' \longrightarrow Y'$ as discussed above, then we have an identification between $(\tilde{Y}', \Phi', \Psi')$ and $(W', \nu', \omega')$.
$\omega' : W' \longrightarrow Y'$ comes with $(\omega')^\ast \mathcal{O}_
{Y'}(1) = \mathcal{O}_{\mathbb{P}_{Z'}(G')}(1)$. So, $\gamma = (\Psi')^\ast \, \zeta_1$ is obtained. $(\Phi')^\ast F_1 = (\Psi')^\ast F$ follows from the commutativity of the diagram $(3)$.
The closed immersion $\mathbf{i}'$ induces a relation between the $\mathcal{O}(1)$ sheaves of $Y \times_V Z$ and $\tilde{Y}$. For $Y \times_V Z$ the $\mathcal{O}(1)$ sheaf is $pr_2^ \ast \mathcal{O}_{Z}(1)$, and for $ Proj(\mathcal{T} \ast \mathcal{O}_Y(1))$ the $\mathcal{O}(1)$ sheaf is $\mathcal{O}_{\tilde{Y}}( - \tilde{E}) \otimes (\Psi)^ \ast \mathcal{O}_Y(1)$. Since $\Phi$ factors through $pr_2$, $(\Phi)^ \ast \mathcal{O}_Z(1) = \mathcal{O}_{\tilde{Y}}( - \tilde{E}) \otimes (\Psi)^ \ast \mathcal{O}_Y(1)$. In the particular case (see diagram $(3)$), $(\Phi')^ \ast \mathcal{O}_{Z'}(1) = \mathcal{O}_{\tilde{Y}'}( - \tilde{E}) \otimes (\Psi')^ \ast \mathcal{O}_{Y'}(1)$, i.e. $(\Phi')^\ast \, \zeta_{Z'} = (\Psi')^\ast \, \zeta_1 - \tilde{E} $.
Next consider the short exact sequence:
\begin{center}$ 0 \longrightarrow \mathcal{O}_{Z'}(1) \longrightarrow G' \longrightarrow \overline{p}_2^\ast \pi_2^ \ast Q_{11} \longrightarrow 0$
\end{center}
We now compute the total Chern class of $G'$ through the Chern class relation obtained from the above short exact sequence.
\begin{center}
$c(G') = c(\mathcal{O}_{Z'}(1)) \cdot c(\overline{p}_2^\ast \pi_2^ \ast Q_{11}) = (1 + \zeta_{Z'}) \cdot \overline{p}_2^\ast \pi_2^\ast(1 + d_{11}[pt]) = (1 + \zeta_{Z'})(1 + d_{11}F_1)$
\end{center}
From the Grothendieck relation for $G'$ we have
$\gamma^{n_{11} + 1} - {\Phi'} ^ \ast(\zeta_{Z'} + d_{11}F_1) \cdot \gamma^{n_{11}} + {\Phi'} ^ \ast (d_{11}F_1 \cdot \zeta_{Z'}) \cdot \gamma^ {n_{11} - 1} = 0$ \\
$\Rightarrow \gamma^{n_{11} + 1} - \big( ({\Psi'} ^ \ast \zeta_1 - \tilde{E}) + d_{11}{\Psi'}^ \ast F \big) \cdot \gamma^{n_{11}} + d_{11}({\Psi'} ^ \ast \zeta_1 - \tilde{E}) \cdot {\Psi'}^ \ast F \cdot \gamma^{n_{11} - 1} = 0$ \\
$\Rightarrow \tilde{E} \cdot \gamma^{n_{11}} - d_{11}\tilde{E} \cdot {\Psi'} ^ \ast F \cdot \gamma^{n_{11} - 1} = 0$\\
$\Rightarrow \tilde{E} \cdot {\Psi'}^ \ast (\zeta_1 - \mu_{11}F)^{n_{11}} = 0$
For the last part note that $\tilde{E} = \mathbb{P}(\pi_2^\ast Q_{11}) \times_{\mathbb{P}(E_2)} Z'$. Also, $N(\tilde{Y}')$ and $N(\tilde{E})$ are free $N(Z')$-modules. Using this and the projection formula, the identity $\tilde{E} \cdot N(\tilde{Y'}) = (j_{\tilde{E}})_\ast N(\tilde{E})$ is obtained easily.
\end{proof}
Now we are in a position to prove the next theorem.
\begin{thm}
$\overline{\Eff}^k(X) \cong \overline{\Eff}^k(Y') \cong \overline{\Eff}^k(Z')$ and $\overline{\Eff}^k(Z') \cong \overline{\Eff}^k(Z'')$. So, $\overline{\Eff}^k(X) \cong \overline{\Eff}^k(Z'')$ for $k < r_1 + r_2 - 1 - \mathbf{n}$,
where $Z'= \mathbb{P}(E_{11}) \times_C \mathbb{P}(E_2)$ and $ Z'' = \mathbb{P}(E_{11}) \times_C \mathbb{P}(E_{21})$.
\end{thm}
\begin{proof}
Since $Y' = \mathbb{P}(\pi_2^ \ast E_1) \cong \mathbb{P}(E_1) \times_C \mathbb{P}(E_2) = X$, \, $\overline{\Eff}^k(X) \cong \overline{\Eff}^k(Y')$ follows at once. To prove that $\overline{\Eff}^k(X) \cong \overline{\Eff}^k(Z')$ we first define the map:
$\theta_k: N^k(X) \longrightarrow N^k(Z')$
by
\begin{align*}
\zeta_1^ i \cdot \zeta_2^ {k - i} \mapsto \bar{\zeta_1}^ i \cdot \bar{\zeta_2}^{k - i}, \quad F\ \cdot \zeta_1^j \cdot \zeta_2^{k - j - 1} \mapsto F_1 \cdot \bar{\zeta}_1^ j \cdot \bar{\zeta}_2^ {k - j - 1}
\end{align*}
where $\bar{\zeta_1} = \overline{p}_1 ^\ast(\mathcal{O}_{\mathbb{P}(E_{11})}(1))$ and $\bar{\zeta_2} = \overline{p}_2 ^\ast(\mathcal{O}_{\mathbb{P}(E_2)}(1))$. $ \overline{p}_1 : \mathbb{P}(E_{11}) \times_C \mathbb{P}(E_2) \longrightarrow \mathbb{P}(E_{11})$ and $ \overline{p}_2 : \mathbb{P}(E_{11}) \times_C \mathbb{P}(E_2) \longrightarrow \mathbb{P}(E_2)$ are respective projection maps.
It is evident that the above map is an isomorphism of abstract groups. We claim that it induces an isomorphism between $ \overline{\Eff}^k(X)$ and $\overline{\Eff}^k(Z')$. First we construct an inverse for $\theta_k$. Define $\Omega_k : N^k(Z') \longrightarrow N^k(X)$ by
\begin{center}
$\Omega_k (l) = {\Psi'}_\ast {\Phi'}^\ast (l)$
\end{center}
$\Omega_k$ is well defined since $\Phi'$ is flat and $\Psi'$ is birational. $\Omega_k$ is also pseudo-effective. Now we need to show that $\Omega_k$ is the inverse of $\theta_k$.
\begin{align*}
\Omega_k(\bar{\zeta_1}^ i \cdot \bar{\zeta_2}^{k - i}) & = {\Psi'}_\ast (({\Phi'}^ \ast \bar{\zeta_1})^i \cdot ({\Phi'}^\ast \bar{\zeta_2})^{k - i}) \\
& = {\Psi'}_\ast (({\Phi'}^\ast \zeta_{Z'})^i \cdot ({\Phi'}^\ast \bar{\zeta_2})^{k - i}) \\
& = {\Psi'}_\ast (({\Psi'}^\ast \zeta_1 - \tilde{E})^i \cdot ({\Psi'}^\ast \zeta_2)^{k - i}) \\
& = {\Psi'}_\ast\Big(\Big(\sum_{0 \leq c \leq i} (-1)^c \binom{i}{c}\, \tilde{E}^c ({\Psi'}^\ast \zeta_1)^{i - c}\Big) \cdot ({\Psi'}^\ast \zeta_2)^{k - i}\Big) \\
\end{align*}
Similarly,
\begin{align*}
\Omega_k(F_1 \cdot \bar{\zeta_1}^j \cdot \bar{\zeta_2}^{k - j -1}) = {\Psi'}_\ast \Big({\Psi'}^\ast F \cdot \Big( \sum_{0 \leq d \leq j} (-1)^d \binom{j}{d}\, \tilde{E}^d ({\Psi'}^\ast \zeta_1)^{j - d}\Big) \cdot ({\Psi'}^\ast \zeta_2)^{k - j - 1}\Big)
\end{align*}
So,
$\Omega_k\Big(\sum_i a_i\, \bar{\zeta_1}^ i \cdot \bar{\zeta_2}^{k - i} + \sum_j b_j \, F_1 \cdot \bar{\zeta_1}^j \cdot \bar{\zeta_2}^{k - j -1} \Big)$
\begin{align*}
= \Big(\sum_i a_i\,{\zeta_1}^ i \cdot{\zeta_2}^{k - i} + \sum_j b_j \, F \cdot{\zeta_1}^j \cdot{\zeta_2}^{k - j -1} \Big) + {\Psi'}_\ast \Big( \sum_i \sum_{1 \leq c \leq i} \tilde{E}^c {\Psi'}^\ast(\alpha_{i, c}) + \sum_j \sum_{1 \leq d \leq j} \tilde{E}^d {\Psi'}^\ast (\beta_{j, d})\Big)
\end{align*}
for some cycles $\alpha_{i, c}, \beta_{j, d} \in N(X)$.
But ${\Psi'}_\ast(\tilde{E}^t) = 0$ for all $1 \leq t \leq i \leq r_1 + r_2 - 1 - \mathbf{n}$ for dimensional reasons. Hence, by the projection formula, the second part on the right hand side of the above equation vanishes and we conclude that $\Omega_k = \theta_k ^{-1}$.
Next we seek an inverse of $\Omega_k$ which is pseudo-effective and meets our demand of being equal to $\theta_k$. Define $\eta_k : N^k(X) \longrightarrow N^k(Z')$ by
\begin{align*}
\eta_k(s) = {\Phi'}_\ast(\delta \cdot {\Psi'}^\ast s)
\end{align*}
where $ \delta = {\Psi'}^\ast (\zeta_1 - \mu_{11}F)^{n_{11}}$.
By the relations $(5)$ and $(6)$, ${\Psi'}^ \ast(\zeta_1^i \cdot \zeta_2^{k - i})$ is ${\Phi'}^\ast(\bar{\zeta_1}^i \cdot \bar{\zeta_2}^{k - i})$ modulo $\tilde{E}$, and $\delta \cdot \tilde{E} = 0$. Also ${\Phi'}_\ast \delta = [Z']$, which is derived from the fact that ${\Phi'}_\ast \gamma^{n_{11}} = [Z']$ and the same relations $(5)$ and $(6)$. Therefore
\begin{align*}
\eta_k(\zeta_1^i \cdot \zeta_2^{k - i}) = {\Phi'}_\ast (\delta \cdot {\Phi'}^\ast(\bar{\zeta_1}^i \cdot \bar{\zeta_2}^{k - i})) = (\bar{\zeta_1}^i \cdot \bar{\zeta_2}^{k - i}) \cdot [Z'] = \bar{\zeta_1}^i \cdot \bar{\zeta_2}^{k - i}
\end{align*}
In a similar way, ${\Psi'}^ \ast(F \cdot \zeta_1^ j \cdot \zeta_2^{k - j - 1})$ is ${\Phi'}^\ast (F_1 \cdot \bar{\zeta_1}^j \cdot \bar{\zeta_2}^{k - j -1})$ modulo $\tilde{E}$, and as a result of this
\begin{align*}
\eta_k(F \cdot \zeta_1^ j \cdot \zeta_2^{k - j - 1}) = F_1 \cdot \bar{\zeta_1}^j \cdot \bar{\zeta_2}^{k - j -1}
\end{align*}
So, $\eta_k = \theta_k$.
Next we need to show that $\eta_k$ is a pseudo-effective map. Notice that ${\Psi'}^\ast s = \bar{s} + \mathbf{j}_\ast s'$ for any effective cycle $s$ on $X$, where $\bar{s}$ is the strict transform under $\Psi'$ and hence effective. Now $\delta$ is an intersection of nef classes. So, $\delta \cdot \bar{s}$ is pseudo-effective. Also $\delta \cdot \mathbf{j}_\ast s' = 0$ from Theorem 2.4, and ${\Phi'}_\ast$ is pseudo-effective. Therefore $\eta_k$ is pseudo-effective and the first part of the theorem is proved. We will sketch the proof of the second part, i.e. $\overline{\Eff}^k(Z') \cong \overline{\Eff}^k(Z'')$, which is similar to the proof of the first part. Consider the following diagram:
\begin{center}
\begin{equation}
\begin{tikzcd}
Z'' = \mathbb{P}(E_{11}) \times_C \mathbb{P}(E_{21}) \arrow[r, "\hat{p}_2"] \arrow[d, "\hat{p}_1"]
& \mathbb{P}(E_{21})\arrow[d,"\hat{\pi}_2"]\\
\mathbb{P}(E_{11}) \arrow[r, "\hat{\pi}_1" ]
& C
\end{tikzcd}
\end{equation}
\end{center}
Define $\hat{\theta}_k : N^k(Z') \longrightarrow N^k(Z'')$ by
\begin{align*}
\bar{\zeta_1}^ i \cdot \bar{\zeta_2}^ {k - i} \mapsto \hat{\zeta_1}^ i \cdot \hat{\zeta_2}^{k - i}, \quad F \cdot \bar{\zeta_1}^j \cdot \bar{\zeta_2}^ {k - j - 1} \mapsto F_2 \cdot \hat{\zeta_1}^j \cdot \hat{\zeta_2}^{k - j - 1}
\end{align*}
where $\hat{\zeta_1} = \hat{p_1}^\ast (\mathcal{O}_{\mathbb{P}(E_{11})}(1)), \hat{\zeta_2} = \hat{p_2}^\ast (\mathcal{O}_{\mathbb{P}(E_{21})}(1))$ and $F_2$ is the class of a fibre of $\hat{\pi_1} \circ \hat{p_1}$.
This is an isomorphism of abstract groups and behaves exactly in the same way as $\theta_k$. The methods applied to get the result for $\theta_k$ can also be applied successfully here.
\end{proof}
\end{document}
\begin{document}
\title{The geometry of contact metric three manifolds}
\author{Karatsobanis J.\\
Mathematics Division\\
School of Technology\\
Aristotle University of Thessaloniki\\
Thessaloniki 54124 - Greece\\
[email protected]\\
Math. subj. class [2000] 53D10\\
}
\maketitle
\footnotesize Keywords: Contact Metric Structures.
\begin{abstract}
By determining the associated metrics we obtain a local classification of contact metric three manifolds.
\end{abstract}
\section{Introduction}\label{intro}
It is known that a $C^{\infty}$ manifold $M$ admits many Riemannian metrics. Once such a metric has been specified, $M$ acquires a great rigidity. This rigidity imposed by the metric on $M$ is what distinguishes geometry from topology.
Riemannian metrics associated to a contact or symplectic structure have been studied extensively. They form an infinite dimensional space and each element of this space defines the same volume element.
The associated metrics of a three dimensional contact metric manifold are determined in the present paper through an explicit formula ((\ref{metric-simp}), p. \pageref{metric-simp}). This formula comes from the solution of a partial differential equation system ((\ref{S1}) - (\ref{S5}), p. \pageref{S1}) which we call \emph{contact system} (Definition \ref{cs1}, p. \pageref{cs1}). We introduce a special coordinate system (simplifying coordinates,
Definition \ref{simpdef}, p. \pageref{simpdef}) such that the contact system is reduced to a
Riccati equation (\ref{S*3}, p. \pageref{S*3}).
Every contact manifold carries a contact metric structure, i.e. a Riemannian metric compatible with the contact structure \cite{B02}. It is a classic result \cite{MA71,LU70} that every $3$-dimensional closed and orientable manifold admits a contact structure and hence, a contact metric structure. Thus the metric (\ref{metric-simp}) represents the local geometry of the contact metric structure of these manifolds.
Concerning the existence of contact structures on open manifolds, the result of Gromov \cite{G86} states that a non-compact, odd-dimensional manifold carrying a hyperplane field with an almost complex structure admits a contact structure. Here we provide an existence theorem (section \ref{example}, Theorem \ref{lemma1}) for contact metric structures on $3$-dimensional orientable manifolds, both compact and non-compact. The proof of Theorem \ref{lemma1} establishes the above-mentioned contact system (Definition \ref{cs1}) whose solution leads to the determination of the associated metrics. This theorem requires a Levi-Civita connection with a particular framing. Its premises are indeed met since there are many explicit examples of contact metric $3$-manifolds satisfying them, e.g. the example in section \ref{example1} of this paper.
In section \ref{algorithm} we give an algorithm for the construction of contact metric structures in $3$-manifolds. Finally in section \ref{example1} we give an example of a contact metric $3$-manifold utilizing the algorithm and the simplifying coordinates.
\section{Preliminaries}\label{intro2}
In what follows if $f\left( x^{1},x^{2},\ldots ,x^{n}\right) $ is a smooth function then $f_{i}$ will denote the partial derivative $\frac{\partial f}{\partial x^{i}}$. If $B$ is some $n\times n$ invertible matrix with elements $b_{\alpha }^{i}$, $\alpha =1,\ldots n$, $i=1,\ldots n$, then $\left(b^{-1}\right) _{i}^{\alpha }$, $\alpha =1,\ldots n$, $i=1,\ldots n$, will denote the element of the $i$ row and the $\alpha $ column of the matrix $B^{-1}$.
In this section we collect some basic facts on contact metric manifolds. All manifolds are assumed to be smooth.
A differentiable $(2n+1)$-dimensional manifold $M$ is called a contact manifold if it carries a nowhere integrable affine distribution of hyperplanes. It can be shown that every contact manifold carries a $1$-form $\eta $ satisfying $\eta \wedge (d\eta )^{n}\neq 0$ everywhere on $M$. It is well known \cite{B02} that given $\eta$ there exists a unique vector field $\xi$ (called the characteristic vector field or Reeb field) defined by $(d\eta )(\xi ,X)=0$ and $\eta (\xi)=1$. Polarizing $d\eta $ on the contact $2n$-subbundle $D$ given by $\eta =0$, one obtains a Riemannian metric $g$ and a $(1,1)$-tensor field $\phi $ such that:
\begin{equation*}
(d\eta )(X,Y)=g(\phi X,Y),~~~\eta (X)=g(X,\xi ),~~~\phi ^{2}=-I+\eta \otimes
\xi
\end{equation*}
The metric $g$ is called \emph{associated metric} of $\eta $, and $(\phi,\eta ,\xi ,g)$ a contact metric structure \cite{B02}. The associated metric is not unique. A differentiable $(2n+1)$-dimensional manifold $M$ equipped with a contact metric structure is called \emph{contact metric} $(2n+1)$- manifold.
In the theory of contact metric manifolds the symmetric tensor field $h:=\frac{1}{2}\mathcal{L}_{\xi }\phi $, $\mathcal{L}$ being the Lie derivative, plays a fundamental role. It satisfies
\begin{eqnarray}
h\xi &=&0,~~~Tr\left( h\right) =0,~~~Tr(h\phi )=0,~~~h\phi =-\phi h,~~~
\label{1} \\
\eta \circ h &=&0,\text{ }(\nabla _{\xi }h)\phi =-\phi (\nabla _{\xi
}h),~~~(\nabla _{\xi }h)\xi =0 \notag
\end{eqnarray}
The equation $h=0$ holds if and only if $\xi $ is Killing with respect to the associated metric, and then $M$ is called $K$-\emph{contact}. On a contact metric manifold we have $h\phi X=\nabla _{X}\xi +\phi X$ and\ $\nabla _{\xi }\phi =0$, where $\nabla $ is the Levi-Civita connection and $X$ is any vector field.
If the almost complex structure $J$ on $M\times \mathbb{R}$ defined by $J(X,f\frac{d}{dt})=(\phi X-f\xi ,\eta (X)\frac{d}{dt})$ is integrable, then $M$ is said to be Sasakian. A Sasakian manifold is $K$-contact and the converse is true only for $3$-dimensional manifolds.
The sectional curvature of a plane section containing $\xi $ is called $\xi $-sectional curvature. The element of the contact subbundle $D$ over a point $p$ is denoted by $D_{p}$. If $X\in D_{p}$ we denote the $\xi $-sectional curvature by $K(X,\xi )$. The sectional curvature $K(X,\phi X)$ of a plane section spanned by the vector fields $X$ and $\phi X$ (both orthogonal to $\xi $) is called $\phi $-\emph{sectional curvature}.
If $e\in D_{p}$ is a unit vector then $\phi e$ is another unit vector of $D_{p}$ orthogonal to $e$ and $\left\{ e,\phi e\right\} $ is the so-called symplectic basis of $D_{p}$. The triple $\left\{ e,\phi e,\xi \right\} $ is a basis of $T_{p}M^{3}$ and is called $\phi $-basis. Since there is a rotational freedom in choosing the symplectic basis it follows
that if $h\neq 0$ at $p$, this basis can be chosen to diagonalize $h$. Such a $\phi $-basis is called $\phi $-eigenbasis. From (\ref{1}) it follows that if $he=\lambda e$ then $h\phi e=-\lambda \phi e$.
Let $M^{3}:=(M^{3},\phi ,\eta ,\xi ,g)$ be a $3$-dimensional contact metric manifold (\emph{contact metric $3$-manifold}). Let $U_{0}$ and $U_{1}$ be two open sets defined by:
\begin{equation*}
U_{0}:=\left\{ p\in M^{3}:\lambda =0\text{ in a neighborhood of }p\right\}
\end{equation*}
\begin{equation*}
U_{1}:=\left\{ p\in M^{3}:\lambda \neq 0\text{ in a neighborhood of }
p\right\}
\end{equation*}
The closure of $U_{0}\cup U_{1}$ equals $M^{3}$, i.e. $U_{1}\cup U_{0}$ is open and dense in $M^{3}$. For every point $p\in U_{0}\cup U_{1}$ there exists a $\phi $-eigenbasis $\{e,\phi e,\xi \}$ in a neighborhood of $p$.
Using (\ref{1}) and assuming that $\left\{ e,\phi e,\xi \right\} $ is a $\phi $-eigenbasis, it is not hard to prove the following:
\begin{lemma}
\label{aux1}On a contact metric $3$-manifold $M^{3}$ we have:
\begin{align*}
\nabla _{e}e& =b\,\phi e,~~~\nabla _{e}\phi e=-b\,e+(\lambda +1)\,\xi
,~~~\nabla _{e}\xi =-(\lambda +1)\,\phi e \\
\nabla _{\phi e}e& =-c\,\phi e+(\lambda -1)\,\xi ,~~~\nabla _{\phi e}\phi
e=c\,e,~~~\nabla _{\phi e}\xi =(1-\lambda )\,e \\
\nabla _{\xi }e& =a\,\phi e,~~~\nabla _{\xi }\phi e=-a\,e
\end{align*}
\end{lemma}
\begin{remark}
Recalling that on $U_{0}$ the relation $\nabla _{X}\xi +\phi X=0$ holds, Lemma \ref{aux1} is true on $U_{0}\cup U_{1}$. But the relations in the Lemma \ref{aux1} are equations between continuous functions. Moreover $U_{0}\cup U_{1}$ is open and dense in $M^{3}$. Hence the Lemma \ref{aux1} can be extended to $M^{3}$.
\end{remark}
From Lemma \ref{aux1} and the well known property $\nabla _{X}Y-\nabla _{Y}X=\left[ X,Y\right]$, we can prove the commutation relations of a $\phi$-eigenbasis:
\begin{equation}
\left[ e,\phi e\right] =-be+c\phi e+2\xi \tag{$\pi $} \label{a}
\end{equation}
\begin{equation}
\left[ \xi ,e\right] =\left( a+\lambda +1\right) \phi e \tag{$\delta $}
\label{b}
\end{equation}
\begin{equation}
\left[ \phi e,\xi \right] =\left( a-\lambda +1\right) e \tag{$\tau $}
\label{c}
\end{equation}
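For instance, (\ref{a}) follows directly from Lemma \ref{aux1}:
\begin{equation*}
\left[ e,\phi e\right] =\nabla _{e}\phi e-\nabla _{\phi e}e=\left( -b\,e+(\lambda +1)\,\xi \right) -\left( -c\,\phi e+(\lambda -1)\,\xi \right) =-b\,e+c\,\phi e+2\,\xi ,
\end{equation*}
and (\ref{b}), (\ref{c}) are obtained in the same way.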
\section{An existence theorem\label{example}}
A convenient way to establish the contact system (Definition \ref{cs1}) is through the proof of Theorem \ref{lemma1}. So in this section we discuss this theorem and the contact system will be presented in the next section.
The theorem of Martinet \cite{MA71}, improved by Lutz \cite{LU70}, states that a three dimensional orientable, compact manifold without boundary admits a contact structure. On the other hand, the result of Gromov \cite{G86} states that any non-compact, odd-dimensional manifold carrying a hyperplane field with an almost complex structure admits a contact structure. The theorem of this section is an existence result concerning both the compact and the non-compact case. Moreover, the conditions of the theorem are local, hence they can be checked easily.
Let $M^{3}$ be a three dimensional smooth manifold. An absolute parallelism (see e.g. \cite{KNO63} p.122) on $M^{3}$ is a global trivialization of the tangent bundle of $M^{3}$. In other words an absolute parallelism is a $3$-tuple of linearly independent nowhere zero vector fields $\left\{ e_{1},e_{2},e_{3}\right\} $, such that at every $p\in M^{3}$ this $3$-tuple constitutes a basis of $T_{p}M^{3}$. If $M^{3}$ accepts an absolute parallelism then it is said to be \emph{parallelizable}. It is well known that a $3$-dimensional manifold is orientable if and only if it is parallelizable. In what follows $\omega ^{k}$ will denote the dual 1-form of $e_{k}$, $k=1,2,3$. Duality between tangent and cotangent bundle implies $\omega ^{k}\left( e_{j}\right) =\delta _{j}^{k}$.
\begin{theorem}\label{lemma1}
Let $M^{3}$ be a 3-dimensional, orientable, smooth manifold equipped with a Levi-Civita connection $\nabla $. Suppose that
\begin{enumerate}
\item[(1)] Among the trivializations of the tangent bundle of $M^{3}$, there exists one $\left\{ e_{1},e_{2},e_{3}\right\} $ satisfying the following condition: there exists a fixed permutation $i_{0}$, $j_{0}$, $k_{0}$ of $1$, $2$, $3$, such that $f:=\omega ^{k_{0}}\left( \left[ e_{i_{0}},e_{j_{0}}\right] \right) $ is everywhere non zero,
\item[(2)] The integral curves of $e_{k_{0}}$ are geodesics i.e. $\nabla _{e_{k_{0}}}e_{k_{0}}=0$ (same index $k_{0}$ as in (1)),
\item[(3)] The function $f$ is constant along $e_{k_{0}}$ i.e. $e_{k_{0}}\left( f\right) =0$ (same index $k_{0}$ as in (1)).
\end{enumerate}
Then, $M^{3}$ admits a contact metric structure $\left( g,\eta ,\xi ,\phi \right) $. If moreover
\begin{equation}
e_{j_{0}}\left( f\right) =e_{i_{0}}\left( f\right) =0\text{ and }
c_{k_{0}j_{0}}^{j_{0}}=c_{k_{0}i_{0}}^{i_{0}} \label{eigenbasish}
\end{equation}
then $\left\{ e_{1},e_{2},e_{3}\right\} $ is an eigenbasis of $\frac{1}{2} \mathcal{L}_{\xi }\phi $. In this case $c_{k_{0}j_{0}}^{j_{0}}=c_{k_{0}i_{0}}^{i_{0}}=0$ and $f$ is a constant.
Conversely if $M^{3}$ is a contact metric manifold then there exists a global trivialization $\left\{ e_{1}, e_{2}, e_{3}\right\} $ of the tangent bundle of $M^{3}$ satisfying conditions (1), (2) and (3).
\end{theorem}
\begin{proof}
Let $\left\{ e_{1},e_{2},e_{3}\right\} $ be an absolute parallelism of $M^{3} $. The structure functions $c_{ij}^{k}$ are defined by the relation:
\begin{equation*}
\left[ e_{i},e_{j}\right] =c_{ij}^{k}e_{k}
\end{equation*}
Without loss of generality we may suppose that $\omega^{3}\left( \left[ e_{1},e_{2}\right] \right) =c_{12}^{3}$ is everywhere non zero. For further simplicity we shall denote the function $c_{12}^{3}$ by $f$. If the Euler class $e\left( e_{3}\right) \in H^{2}\left( M,\mathbb{Z}\right) $ of the vector field $e_{3}$ is not zero, then we have to introduce local charts. So on local charts define an almost contact structure on $M^{3}$ as follows: Set $e=e_{1}$, $\phi e=e_{2}$, $\xi =-e_{3}$, and introduce the metric $g\left( e_{i},e_{j}\right) =\delta _{ij}$. Moreover set
\begin{equation}
\phi =\omega ^{1}\otimes e_{2}-\omega ^{2}\otimes e_{1} \label{u1}
\end{equation}
where $\omega ^{i}$ is the dual $1$-form of $e_{i}$:
\begin{equation*}
\omega ^{i}\left( e_{j}\right) =\delta _{j}^{i}
\end{equation*}
Then
\begin{equation*}
\left( \omega ^{1}\otimes e_{2}-\omega ^{2}\otimes e_{1}\right) \left(
e_{1}\right) =\omega ^{1}\left( e_{1}\right) e_{2}-\omega ^{2}\left(
e_{1}\right) e_{1}=e_{2}
\end{equation*}
\begin{equation*}
\left( \omega ^{1}\otimes e_{2}-\omega ^{2}\otimes e_{1}\right) \left(
e_{2}\right) =\omega ^{1}\left( e_{2}\right) e_{2}-\omega ^{2}\left(
e_{2}\right) e_{1}=-e_{1}
\end{equation*}
In other words
\begin{equation*}
e_{2}=\phi e_{1},~~~-e_{1}=\phi e_{2},~~~\phi \xi =0
\end{equation*}
Moreover
\begin{equation*}
\Phi =g\left( \cdot ,\phi \cdot \right) =\omega ^{1}\otimes \omega
^{2}-\omega ^{2}\otimes \omega ^{1}
\end{equation*}
It is well known (\cite{B02} p. 53) that $\left( \phi ,\xi ,\omega^{3}\right) $ is an almost contact structure, hence the structure group of the tangent bundle can be reduced to $U\left( 1\right) \times 1$. The transition maps of the tangent bundle respect the metric $\delta _{ij}$ since $\xi $ is globally defined and the symplectic basis $\left\{ e,\phi
e\right\} $ is transformed from chart to chart by the action of an element of $U\left( 1\right) $, since the plane field defined by $\left\{ e,\phi e\right\} $ is also globally defined. Now, from the well known identity $X\omega \left( Y\right) -Y\omega \left( X\right) -\omega \left( \left[ X,Y\right] \right) =d\omega \left( X,Y\right) $ we get
\begin{equation}
d\omega ^{3}\left( e_{1},e_{2}\right) =-\omega ^{3}\left( \left[ e_{1},e_{2}
\right] \right) =-f \label{lasd1}
\end{equation}
\begin{equation}
d\omega ^{3}\left( e_{1},e_{3}\right) =-\omega ^{3}\left( \left[ e_{1},e_{3}
\right] \right) =-c_{13}^{3} \label{lasd2}
\end{equation}
\begin{equation}
d\omega ^{3}\left( e_{3},e_{2}\right) =-\omega ^{3}\left( \left[ e_{3},e_{2}
\right] \right) =-c_{32}^{3} \label{lasd3}
\end{equation}
Since $\nabla g=0$ we have $g\left( \nabla _{e_{i}}e_{j},e_{k}\right)+g\left( e_{j},\nabla _{e_{i}}e_{k}\right) =0$. Putting $i=j=3$ and taking account the hypothesis $\nabla _{e_{3}}e_{3}=0$ we get:
\begin{equation*}
g\left( e_{3},\nabla _{e_{3}}e_{j}\right) =0
\end{equation*}
On the other hand for $j=k=3$ we get $g\left( \nabla_{e_{i}}e_{3},e_{3}\right) +g\left( e_{3},\nabla _{e_{i}}e_{3}\right) =0$. Since $\nabla _{X}Y-\nabla _{Y}X=\left[ X,Y\right] $ we have
\begin{equation}
g\left( \left[ e_{i},e_{3}\right] ,e_{3}\right) =c_{i3}^{3}=0,\text{ }i=1,2,3
\label{conditionb}
\end{equation}
Now, the only non zero component of $\Phi $ is
\begin{equation}
\Phi \left( e_{1},e_{2}\right) =\omega ^{1}\left( e_{1}\right) \omega
^{2}\left( e_{2}\right) =1 \label{lasd4}
\end{equation}
From (\ref{lasd1}), (\ref{lasd2}), (\ref{lasd3}) and (\ref{lasd4}) we get $-d\omega ^{3}\left( e_{i},e_{j}\right) =f\Phi \left( e_{i},e_{j}\right) $, and thus
\begin{equation*}
\Phi \left( X,Y\right) =-\frac{1}{f}d\omega ^{3}\left( X,Y\right)
\end{equation*}
for all vector fields $X$, $Y$. Hence the above defined almost contact metric structure $\left( \phi ,e_{3},\omega ^{3},g\right) $ is not an associated one. Note further that $\Phi \wedge \omega ^{3}=2\omega^{1}\wedge \omega ^{2}\wedge \omega ^{3}=-\frac{1}{f}d\omega ^{3}\wedge\omega ^{3}$ is everywhere non zero.
In order to make the almost contact metric structure $\left( \phi ,e_{3},\omega ^{3},g\right) $ an associated one, we apply the deformation
\begin{equation}
\overline{\omega }^{3}:=\frac{f}{2}\omega ^{3},~\overline{e}_{3}:=\frac{2}{f}
e_{3},~\overline{\phi }:=\phi ,~\overline{g}:=\frac{f^{2}}{4}g
\label{deform1}
\end{equation}
Moreover we set
\begin{equation*}
\overline{e}_{1}:=\frac{2e_{1}}{f}\text{ and }\overline{e}_{2}:=\frac{2e_{2}}{f}
\end{equation*}
so that $\overline{g}\left( \overline{e}_{i},\overline{e}_{j}\right) =\delta
_{ij}$. Observe that after the deformation $\overline{\omega }^{3}\wedge d
\overline{\omega }^{3}=\frac{f^{2}}{4}\omega ^{3}\wedge d\omega ^{3}=\omega
^{1}\wedge \omega ^{2}\wedge \omega ^{3}$, so the positive sign of the
volume form is restored. Now $d\overline{\omega }^{3}=\frac{1}{2}d\left(
f\omega ^{3}\right) =\frac{1}{2}df\wedge \omega ^{3}+\frac{1}{2}fd\omega
^{3} $. From this relation and the identity $\left( a\wedge b\right) \left(
e_{1},e_{2}\right) =a\left( e_{1}\right) b\left( e_{2}\right) -a\left(
e_{2}\right) b\left( e_{1}\right) $ we have
\begin{equation*}
d\overline{\omega }^{3}\left( e_{1},e_{2}\right) =-\frac{f^{2}}{4}
\end{equation*}
\begin{equation*}
d\overline{\omega }^{3}\left( e_{1},e_{3}\right) =e_{1}\left( f\right) =0
\end{equation*}
\begin{equation*}
d\overline{\omega }^{3}\left( e_{2},e_{3}\right) =e_{2}\left( f\right) =0
\end{equation*}
Hence
\begin{equation*}
\overline{\Phi }\left( \overline{e}_{1},\overline{e}_{2}\right) =\overline{g}
\left( \overline{e}_{1},\overline{\phi }\overline{e}_{2}\right) =\frac{4}{
f^{2}}\frac{f^{2}}{4}g\left( e_{1},\phi e_{2}\right) =-1=d\overline{\omega }
^{3}\left( \overline{e}_{1},\overline{e}_{2}\right)
\end{equation*}
\begin{equation*}
\overline{\Phi }\left( \overline{e}_{1},\overline{e}_{3}\right) =\frac{4}{
f^{2}}\overline{g}\left( e_{1},\overline{\phi }e_{3}\right) =0
\end{equation*}
\begin{equation*}
\overline{\Phi }\left( \overline{e}_{2},\overline{e}_{3}\right) =\frac{4}{
f^{2}}\overline{g}\left( e_{2},\overline{\phi }e_{3}\right) =0
\end{equation*}
Observe that since $\nabla _{\overline{e}_{3}}\overline{e}_{3}=-\frac{4}{
f^{3}}e_{3}\left( f\right) e_{3}=0$ it follows that $d\overline{\omega }
^{3}\left( \overline{e}_{1},\overline{e}_{3}\right) =d\overline{\omega }
^{3}\left( \overline{e}_{2},\overline{e}_{3}\right) =0$. Since $\overline{g}
\left( \overline{\phi }X,\overline{\phi }Y\right) =\overline{g}\left(
X,Y\right) -\overline{\omega }^{3}\left( X\right) \overline{\omega }
^{3}\left( Y\right) $ we conclude that $\left( \overline{\phi },\overline{e}
_{3},\overline{\omega }^{3},\overline{g}\right) $ is an associated contact
metric structure. Note that:
\begin{enumerate}
\item The vector field $\overline{e}_{3}$ is divergence free since $\mathcal{
L}_{\frac{2}{f}e_{3}}f=0$.
\item The commutation relations of the deformed $\phi $-basis become
\begin{equation*}
\left[ \overline{e}_{1},\overline{e}_{2}\right] =\frac{4c_{12}^{1}}{f^{2}}
e_{1}+\frac{4c_{12}^{2}}{f^{2}}e_{2}+\frac{4}{f}e_{3}
\end{equation*}
\begin{equation*}
\left[ \overline{e}_{2},\overline{e}_{3}\right] =\frac{4c_{23}^{1}}{f^{2}}
e_{1}+\frac{4c_{23}^{2}}{f^{2}}e_{2}
\end{equation*}
\begin{equation*}
\left[ \overline{e}_{3},\overline{e}_{1}\right] =\frac{4c_{31}^{1}}{f^{2}}
e_{1}+\frac{4c_{31}^{2}}{f^{2}}e_{2}
\end{equation*}
\end{enumerate}
A calculation of the Lie derivative of $\phi $ with respect to $\frac{2}{f}
e_{3}$ yields
\begin{eqnarray*}
\frac{f}{2}\left( \mathcal{L}_{\frac{2}{f}e_{3}}\phi \right) \left(
e_{i}\right) &=&-c_{3i}^{2}e_{1}+c_{3i}^{1}e_{2}+\left( \delta
_{i}^{2}c_{31}^{1}-\delta _{i}^{1}c_{32}^{1}\right) e_{1}+\left( \delta
_{i}^{2}c_{31}^{2}-\delta _{i}^{1}c_{32}^{2}\right) e_{2}- \\
&&\frac{f}{2}\delta _{i}^{2}e_{1}\left( \frac{2}{f}\right) e_{3}+\frac{f}{2}
\delta _{i}^{1}e_{2}\left( \frac{2}{f}\right) e_{3}
\end{eqnarray*}
By definition $2\overline{h}=\mathcal{L}_{\frac{2}{f}e_{3}}\overline{\phi }$
. Thus the above expression yields
\begin{equation}
f\overline{h}\left( e_{1}\right) =\left( c_{23}^{1}-c_{31}^{2}\right)
e_{1}+\left( c_{31}^{1}-c_{32}^{2}\right) e_{2}+\frac{f}{2}e_{2}\left( \frac{
2}{f}\right) e_{3} \label{eigen1}
\end{equation}
\begin{equation}
f\overline{h}\left( e_{2}\right) =\left( -c_{32}^{2}+c_{31}^{1}\right)
e_{1}+\left( c_{31}^{2}+c_{32}^{1}\right) e_{2}-\frac{f}{2}e_{1}\left( \frac{
2}{f}\right) e_{3} \label{eigen2}
\end{equation}
\begin{equation}
f\overline{h}\left( e_{3}\right) =0 \label{eigen3}
\end{equation}
The vector $e_{3}$ is an eigenvector of $\overline{h}$. However the vectors $
e_{1}$, $e_{2}$ are eigenvectors of $\overline{h}$ if and only if
\begin{equation}
c_{32}^{2}=c_{31}^{1}\text{ and }e_{1}\left( f\right) =e_{2}\left( f\right)
=0 \label{conditiona}
\end{equation}
Under (\ref{conditiona}), the commutation relations take the forms
\begin{equation*}
\left[ \overline{e}_{1},\overline{e}_{2}\right] =\frac{2c_{12}^{1}}{f}
\overline{e}_{1}+\frac{2c_{12}^{2}}{f}\overline{e}_{2}+2\overline{e}_{3}
\end{equation*}
\begin{equation*}
\left[ \overline{e}_{2},\overline{e}_{3}\right] =\frac{2c_{23}^{1}}{f}
\overline{e}_{1}-\frac{2c_{31}^{1}}{f}\overline{e}_{2}
\end{equation*}
\begin{equation*}
\left[ \overline{e}_{3},\overline{e}_{1}\right] =\frac{2c_{31}^{1}}{f}
\overline{e}_{1}+\frac{2c_{31}^{2}}{f}\overline{e}_{2}
\end{equation*}
Now using the fact that on $M^{3}$ $\left( \mathcal{L}_{\phi X}\overline{
\omega }^{3}\right) \left( Y\right) =\left( \mathcal{L}_{\phi Y}\overline{
\omega }^{3}\right) \left( X\right) $, $\overline{\Phi }=d\overline{\omega }
^{3}$, $\nabla _{\overline{e}_{3}}\overline{e}_{3}=0$ and $\overline{g}
\left( \overline{h}X,Y\right) =\overline{g}\left( X,\overline{h}Y\right) $
(see \cite{B02} p.53) we obtain $\nabla _{X}\overline{e}_{3}=-\overline{\phi
}X-\overline{\phi }\overline{h}X$. Setting $X=\overline{e}_{1}$ and taking into
account that $g\left( \nabla _{\overline{e}_{3}}\overline{e}_{1},\overline{e}
_{1}\right) =0$ we obtain
\begin{equation}
c_{31}^{1}=0 \label{conditionc}
\end{equation}
Thus, the above commutator relations become
\begin{equation}
\left[ \overline{e}_{1},\overline{e}_{2}\right] =\frac{2c_{12}^{1}}{f}
\overline{e}_{1}+\frac{2c_{12}^{2}}{f}\overline{e}_{2}+2\overline{e}_{3}
\label{lasteigen1}
\end{equation}
\begin{equation}
\left[ \overline{e}_{2},\overline{e}_{3}\right] =\frac{2c_{23}^{1}}{f}
\overline{e}_{1} \label{lasteigen2}
\end{equation}
\begin{equation}
\left[ \overline{e}_{3},\overline{e}_{1}\right] =\frac{2c_{31}^{2}}{f}
\overline{e}_{2} \label{lasteigen3}
\end{equation}
Setting $\frac{2c_{23}^{1}}{f}=a-\lambda +1$, $\frac{2c_{31}^{2}}{f}
=a+\lambda +1$,
\begin{equation}
\frac{2c_{12}^{1}}{f}=-b,\text{ and }\frac{2c_{12}^{2}}{f}=c \label{bc}
\end{equation}
we obtain the relations ($\pi $), ($\delta $) and ($\tau $) with
\begin{equation}
\lambda =\frac{c_{31}^{2}-c_{23}^{1}}{f}\text{ and }a=-1+\frac{
c_{31}^{2}+c_{23}^{1}}{f} \label{al}
\end{equation}
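Indeed, subtracting and adding the two defining relations gives
\begin{equation*}
\frac{2\left( c_{31}^{2}-c_{23}^{1}\right) }{f}=2\lambda \quad \text{and}\quad \frac{2\left( c_{31}^{2}+c_{23}^{1}\right) }{f}=2\left( a+1\right) ,
\end{equation*}
which is exactly (\ref{al}).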
Conversely let $(M^{3},\phi ,g,\eta ,\xi )$ be a contact metric manifold.
Since $M^{3}$ is orientable it is parallelizable. Choose a parallelization $
\left\{ e_{1},e_{2},e_{3}\right\} $ with $e_{3}=\xi $. Consider the sets
\begin{equation*}
U_{0}=\left\{ m\in M^{3}:\lambda =0\text{ in a neighborhood of }m\right\}
\end{equation*}
\begin{equation*}
U_{1}=\left\{ m\in M^{3}:\lambda \neq 0\text{ in a neighborhood of }m\right\}
\end{equation*}
On $U_{1}$ we can take $e_{1}$ and $e_{2}$ as eigenvectors of $h$ with
eigenvalues $\lambda $ and $-\lambda $ respectively. Then the conditions
\emph{(1)}, \emph{(2)} and \emph{(3)} of the present Theorem are fulfilled
since by hypothesis $M^{3}$ is a contact metric manifold hence $f=2$ and $
\nabla _{e_{3}}e_{3}=0$.
$U_{0}$ is Sasakian. Hence $\nabla _{e_{3}}e_{3}=0$ and thus (2) is
fulfilled. Condition (1) also holds due to the non-integrability of the
contact subbundle $D$ (if $\left[ e_{1},e_{2}\right] \in D$ then $D$ would
be integrable). Condition (3) is also true: From identity
\begin{equation}
\left( \mathcal{L}_{X}g\right) \left( Y_{1},Y_{2}\right) =X\cdot g\left(
Y_{1},Y_{2}\right) -g\left( \left[ X,Y_{1}\right] ,Y_{2}\right) -g\left(
Y_{1},\left[ X,Y_{2}\right] \right) \label{id1}
\end{equation}
setting $X=\xi ,Y_{1}=\left[ e,\phi e\right] ,Y_{2}=\xi $ we get the relation $\left(
\mathcal{L}_{\xi }g\right) \left( \left[ e,\phi e\right] ,\xi \right) =\xi
\cdot g\left( \left[ e,\phi e\right] ,\xi \right) -g\left( \left[ \xi ,\left[
e,\phi e\right] \right] ,\xi \right) -g\left( \left[ e,\phi e\right] ,\left[
\xi ,\xi \right] \right) $. Since $\mathcal{L}_{\xi }g=0$ and $g\left( \left[
e,\phi e\right] ,\left[ \xi ,\xi \right] \right) =0$ the identity (\ref{id1}
) gives $\xi \cdot g\left( \left[ e,\phi e\right] ,\xi \right) =g\left(
\left[ \xi ,\left[ e,\phi e\right] \right] ,\xi \right) $. Using Lemma \ref
{aux1} and after some calculations we get the relation $g\left( \left[ \xi ,\left[ e,\phi
e\right] \right] ,\xi \right) =cg\left( \left[ \xi ,\phi e\right] ,\xi
\right) -bg\left( \left[ \xi ,e\right] ,\xi \right)$, which is zero. Hence $
\xi \cdot g\left( \left[ e,\phi e\right] ,\xi \right) =0$.
Properties (1), (2) and (3) are true on $U_{0}\cup U_{1}$ which is open and
dense in $M^{3}$. But these properties involve equations between continuous
functions. Hence they are true all over $M^{3}$.
\end{proof}
\begin{remark}
The following question arises: Is the definition of $\phi $ by (\ref{u1})
and the transformation (\ref{deform1}) unique? In other words, are there any
other definitions for $\phi $ and transformations of the resulting almost
contact structure which give rise to a contact metric structure on $M^{3}$?
If no then the $\phi $ defined by (\ref{u1}) and the transformation (\ref
{deform1}) are universal. If yes then is there any connection among the
resulting contact structures? Note that in the latter case, if the resulting
contact structures constitute a $1$-parameter smooth family and if the
manifold is closed then by a Theorem of Gray \cite{GR59} they are all
isotopic.
\end{remark}
\section{The contact system}\label{par4456}
\subsection{The form of the contact system}
Let $M^{3}$ be an orientable $3$-manifold. Choose on $M^{3}$ a local chart $
\left( U^{\prime },x\right) $ and on $U:=x\left( U^{\prime }\right) \subset
\mathbb{R}^{3}$ introduce a positively oriented coordinate basis $\left\{
\frac{\partial }{\partial x^{1}},\frac{\partial }{\partial x^{2}},\frac{\partial }{\partial x^{3}}\right\}$. If $g$ is a Riemannian metric on $
M^{3} $ we have $g_{ij}=g\left( \frac{\partial }{\partial x^{i}},\frac{
\partial }{\partial x^{j}}\right) $. Let $\left\{ e_{1},e_{2},e_{3}\right\} $
be a positively oriented orthonormal parallelization of $U$, i.e. $\delta
_{ab}=g\left( e_{\alpha },e_{\beta }\right) $. On $U$ there exist nine
functions $b_{\alpha }^{i}$, $i,\alpha =1,2,3$ such that
\begin{equation}
\left[
\begin{array}{c}
e_{1} \\
e_{2} \\
e_{3}
\end{array}
\right] =\left[
\begin{array}{ccc}
b_{1}^{1} & b_{1}^{2} & b_{1}^{3} \\
b_{2}^{1} & b_{2}^{2} & b_{2}^{3} \\
b_{3}^{1} & b_{3}^{2} & b_{3}^{3}
\end{array}
\right] \left[
\begin{array}{c}
\frac{\partial }{\partial x^{1}} \\
\frac{\partial }{\partial x^{2}} \\
\frac{\partial }{\partial x^{3}}
\end{array}
\right] ,\text{ and }\left[
\begin{array}{c}
\frac{\partial }{\partial x^{1}} \\
\frac{\partial }{\partial x^{2}} \\
\frac{\partial }{\partial x^{3}}
\end{array}
\right] =\left[
\begin{array}{ccc}
b_{1}^{1} & b_{1}^{2} & b_{1}^{3} \\
b_{2}^{1} & b_{2}^{2} & b_{2}^{3} \\
b_{3}^{1} & b_{3}^{2} & b_{3}^{3}
\end{array}
\right] ^{-1}\left[
\begin{array}{c}
e_{1} \\
e_{2} \\
e_{3}
\end{array}
\right] \label{matrixform1}
\end{equation}
where $b_{\alpha }^{i}\left( b^{-1}\right) _{i}^{\beta }=\delta _{\alpha
}^{\beta }$ and $\left( b^{-1}\right) _{i}^{\alpha }b_{\alpha }^{j}=\delta
_{i}^{j}$.
\begin{definition}
We call $B$-matrix the matrix whose elements are the functions $b_{\alpha
}^{i}$ defined in (\ref{matrixform1}). The function $\left( b^{-1}\right)
_{i}^{\alpha }$, $\alpha =1,2,3$, $i=1,2,3$, is the element of the $i$ row
and the $\alpha $ column of the matrix $B^{-1}$. It is obvious that $\det
B>0 $ everywhere.
\end{definition}
We seek a necessary and sufficient condition for the basis $\left\{
e_{1}, e_{2}, e_{3}\right\}$ in (\ref{matrixform1}) to be a $\phi$-eigenbasis of a contact metric structure $\left( g,\eta ,\xi ,\phi \right)$ of $M^{3}$. In terms of the structure functions $c_{ij}^{k}$ these conditions have already been found in the course of the proof of Theorem \ref{lemma1}. These are (\ref{conditionb}), (\ref{conditiona}) and (\ref{conditionc}). Let us summarize these conditions:
\begin{equation}
c_{12}^{3}=2,\text{ }c_{31}^{3}=c_{32}^{3}=0,\text{ }c_{32}^{2}=c_{31}^{1}=0
\label{easy1}
\end{equation}
Define the functions $c_{\alpha \beta }^{\gamma }$ by the relation $\left[
e_{\alpha },e_{\beta }\right] =c_{\alpha \beta }^{\gamma }e_{\gamma }$. Then
using (\ref{matrixform1}) we obtain
\begin{equation}
c_{\alpha \beta }^{\gamma }=\left( b^{-1}\right) _{j}^{\gamma }\left(
b_{\alpha }^{i}\frac{\partial b_{\beta }^{j}}{\partial x^{i}}-b_{\beta }^{i}
\frac{\partial b_{\alpha }^{j}}{\partial x^{i}}\right) \label{struct1}
\end{equation}
where $x^{i}$, $i=1,2,3$, are local coordinates of $U$.
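Indeed, writing $e_{\alpha }=b_{\alpha }^{i}\frac{\partial }{\partial x^{i}}$ and $\frac{\partial }{\partial x^{j}}=\left( b^{-1}\right) _{j}^{\gamma }e_{\gamma }$, a sketch of the computation behind (\ref{struct1}) is
\begin{equation*}
\left[ e_{\alpha },e_{\beta }\right] =\left( b_{\alpha }^{i}\frac{\partial b_{\beta }^{j}}{\partial x^{i}}-b_{\beta }^{i}\frac{\partial b_{\alpha }^{j}}{\partial x^{i}}\right) \frac{\partial }{\partial x^{j}}=\left( b^{-1}\right) _{j}^{\gamma }\left( b_{\alpha }^{i}\frac{\partial b_{\beta }^{j}}{\partial x^{i}}-b_{\beta }^{i}\frac{\partial b_{\alpha }^{j}}{\partial x^{i}}\right) e_{\gamma }.
\end{equation*}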
Calculating $B^{-1}$ and using (\ref{easy1}) and (\ref{struct1}) we have the following
\begin{theorem}
The following equations are necessary and sufficient conditions in order for
the basis $\left\{ e_{1},e_{2},e_{3}\right\} $ to be a $\phi $-eigenbasis of
a contact metric structure $\left( g,\eta ,\xi ,\phi \right) $ of $M^{3}$.
\begin{gather}
\left( b_{1}^{2}b_{2}^{3}-b_{1}^{3}b_{2}^{2}\right) \left( b_{3}^{i}\frac{
\partial b_{1}^{1}}{\partial x^{i}}-b_{1}^{i}\frac{\partial b_{3}^{1}}{
\partial x^{i}}\right) + \label{contact1} \\
\left( b_{1}^{3}b_{2}^{1}-b_{1}^{1}b_{2}^{3}\right) \left( b_{3}^{i}\frac{
\partial b_{1}^{2}}{\partial x^{i}}-b_{1}^{i}\frac{\partial b_{3}^{2}}{
\partial x^{i}}\right) + \notag \\
\left( b_{1}^{1}b_{2}^{2}-b_{1}^{2}b_{2}^{1}\right) \left( b_{3}^{i}\frac{
\partial b_{1}^{3}}{\partial x^{i}}-b_{1}^{i}\frac{\partial b_{3}^{3}}{
\partial x^{i}}\right) =0 \notag
\end{gather}
\begin{gather}
\left( b_{1}^{2}b_{2}^{3}-b_{1}^{3}b_{2}^{2}\right) \left( b_{3}^{i}\frac{
\partial b_{2}^{1}}{\partial x^{i}}-b_{2}^{i}\frac{\partial b_{3}^{1}}{
\partial x^{i}}\right) + \label{contact2} \\
\left( b_{1}^{3}b_{2}^{1}-b_{1}^{1}b_{2}^{3}\right) \left( b_{3}^{i}\frac{
\partial b_{2}^{2}}{\partial x^{i}}-b_{2}^{i}\frac{\partial b_{3}^{2}}{
\partial x^{i}}\right) + \notag \\
\left( b_{1}^{1}b_{2}^{2}-b_{1}^{2}b_{2}^{1}\right) \left( b_{3}^{i}\frac{
\partial b_{2}^{3}}{\partial x^{i}}-b_{2}^{i}\frac{\partial b_{3}^{3}}{
\partial x^{i}}\right) =0 \notag
\end{gather}
\begin{gather}
\left( b_{1}^{2}b_{2}^{3}-b_{1}^{3}b_{2}^{2}\right) \left( b_{1}^{i}\frac{
\partial b_{2}^{1}}{\partial x^{i}}-b_{2}^{i}\frac{\partial b_{1}^{1}}{
\partial x^{i}}\right) + \label{contact3} \\
\left( b_{1}^{3}b_{2}^{1}-b_{1}^{1}b_{2}^{3}\right) \left( b_{1}^{i}\frac{
\partial b_{2}^{2}}{\partial x^{i}}-b_{2}^{i}\frac{\partial b_{1}^{2}}{
\partial x^{i}}\right) + \notag \\
\left( b_{1}^{1}b_{2}^{2}-b_{1}^{2}b_{2}^{1}\right) \left( b_{1}^{i}\frac{
\partial b_{2}^{3}}{\partial x^{i}}-b_{2}^{i}\frac{\partial b_{1}^{3}}{
\partial x^{i}}\right) =2\det B \notag
\end{gather}
\begin{gather}
\left( b_{2}^{2}b_{3}^{3}-b_{2}^{3}b_{3}^{2}\right) \left( b_{3}^{i}\frac{
\partial b_{1}^{1}}{\partial x^{i}}-b_{1}^{i}\frac{\partial b_{3}^{1}}{
\partial x^{i}}\right) + \label{contact4} \\
\left( b_{2}^{3}b_{3}^{1}-b_{2}^{1}b_{3}^{3}\right) \left( b_{3}^{i}\frac{
\partial b_{1}^{2}}{\partial x^{i}}-b_{1}^{i}\frac{\partial b_{3}^{2}}{
\partial x^{i}}\right) + \notag \\
\left( b_{2}^{1}b_{3}^{2}-b_{2}^{2}b_{3}^{1}\right) \left( b_{3}^{i}\frac{
\partial b_{1}^{3}}{\partial x^{i}}-b_{1}^{i}\frac{\partial b_{3}^{3}}{
\partial x^{i}}\right) =0 \notag
\end{gather}
\begin{gather}
\left( b_{1}^{3}b_{3}^{2}-b_{1}^{2}b_{3}^{3}\right) \left( b_{3}^{i}\frac{
\partial b_{2}^{1}}{\partial x^{i}}-b_{2}^{i}\frac{\partial b_{3}^{1}}{
\partial x^{i}}\right) + \label{contact5} \\
\left( b_{1}^{1}b_{3}^{3}-b_{1}^{3}b_{3}^{1}\right) \left( b_{3}^{i}\frac{
\partial b_{2}^{2}}{\partial x^{i}}-b_{2}^{i}\frac{\partial b_{3}^{2}}{
\partial x^{i}}\right) + \notag \\
\left( b_{1}^{2}b_{3}^{1}-b_{1}^{1}b_{3}^{2}\right) \left( b_{3}^{i}\frac{
\partial b_{2}^{3}}{\partial x^{i}}-b_{2}^{i}\frac{\partial b_{3}^{3}}{
\partial x^{i}}\right) =0 \notag
\end{gather}
\end{theorem}
\begin{definition}
\label{cs1}Equations (\ref{contact1}) - (\ref{contact5}) are called the
contact system.
\end{definition}
\begin{remark}
The metric of $M^{3}$ in local coordinates $x$ is given by the relation
\begin{equation}
g_{ij}=\left( b^{-1}\right) _{i}^{m}\left( b^{-1}\right) _{j}^{n}\delta _{mn}
\label{m2}
\end{equation}
\end{remark}
\section{Reduction and solution of the contact system\label{sectsim}}
\subsection{A Lemma}
By the following Lemma, we will simplify the $B$-matrix:
\begin{lemma}
\label{lemmaauxsimple}Let $M^{3}$ be an orientable $3$-manifold and let $
\left( U^{\prime },\theta \right) $ be a local chart of $M^{3}$. Let $
\left\{ e_{1},e_{2},e_{3}\right\} $ be an orthonormal parallelization of $
M^{3}$ and let $U=\theta \left( U^{\prime }\right) \subset \mathbb{R}^{3}$.
There exist on $U$ local coordinates $\omega $ such that
\begin{equation*}
e_{3}=\frac{\partial }{\partial \omega ^{1}}
\end{equation*}
and
\begin{equation*}
e_{1}=\chi _{1}\frac{\partial }{\partial \omega ^{1}}+\chi _{2}\frac{
\partial }{\partial \omega ^{2}}
\end{equation*}
where $\chi _{1}$ and $\chi _{2}$ are smooth functions on $U$.
\end{lemma}
\begin{proof}
We may write on $U$:
\begin{equation}
e_{i}=b_{i}^{\alpha }\left( \theta \right) \frac{\partial }{\partial \theta
^{\alpha }}=b_{i}^{\alpha }\left( \theta \right) \frac{\partial \psi ^{\beta
}}{\partial \theta ^{\alpha }}\frac{\partial }{\partial \psi ^{\beta }}
\label{gold1}
\end{equation}
where $\psi $ are some other coordinates on $U$. We claim that $\psi $ can
be chosen so that the following relations hold:
\begin{equation}
b_{3}^{\alpha }\left( \theta \right) \frac{\partial \psi ^{1}}{\partial
\theta ^{\alpha }}=1,\text{ }b_{3}^{\alpha }\left( \theta \right) \frac{
\partial \psi ^{2}}{\partial \theta ^{\alpha }}=0,\text{ }b_{3}^{\alpha
}\left( \theta \right) \frac{\partial \psi ^{3}}{\partial \theta ^{\alpha }}
=0,~b_{1}^{\alpha }\left( \theta \right) \frac{\partial \psi ^{3}}{\partial
\theta ^{\alpha }}=0 \label{lemmaaux1}
\end{equation}
Set
\begin{equation}
\frac{\partial \psi ^{1}}{\partial \theta ^{\beta }}:=g_{\beta \gamma
}\left( \theta \right) b_{3}^{\gamma }\left( \theta \right) ,\text{ }\frac{
\partial \psi ^{2}}{\partial \theta ^{\beta }}:=g_{\beta \gamma }\left(
\theta \right) b_{1}^{\gamma }\left( \theta \right) ,\text{ }\frac{\partial
\psi ^{3}}{\partial \theta ^{\beta }}:=g_{\beta \gamma }\left( \theta
\right) b_{2}^{\gamma }\left( \theta \right) \label{lemmaaux2}
\end{equation}
and observe that, since $g\left( e_{3},e_{3}\right) =1$, we have
\begin{equation}
g_{\alpha \gamma }\left( \theta \right) b_{3}^{\gamma }\left( \theta \right)
b_{3}^{\alpha }\left( \theta \right) =1 \label{lemmaaux3}
\end{equation}
Since $g\left( e_{2},e_{3}\right) =g\left( e_{1},e_{3}\right) =g\left(
e_{1},e_{2}\right) =0$ we must have
\begin{equation}
g_{\alpha \gamma }\left( \theta \right) b_{3}^{\gamma }\left( \theta \right)
b_{1}^{\alpha }\left( \theta \right) =g_{\alpha \gamma }\left( \theta
\right) b_{3}^{\gamma }\left( \theta \right) b_{2}^{\alpha }\left( \theta
\right) =g_{\alpha \gamma }\left( \theta \right) b_{1}^{\gamma }\left(
\theta \right) b_{2}^{\alpha }\left( \theta \right) =0 \label{lemmaaux4}
\end{equation}
In view of (\ref{lemmaaux3}) and (\ref{lemmaaux4}) we see that the
coordinates $\left( \psi ^{1},\psi ^{2},\psi ^{3}\right) $ defined by (\ref
{lemmaaux2}) satisfy the claimed relations, namely (\ref{lemmaaux1}). Then
using (\ref{gold1}) and setting $\chi _{1}=b_{1}^{\alpha }\left( \theta
\right) \frac{\partial \psi ^{1}}{\partial \theta ^{\alpha }}$, $\chi
_{2}=b_{1}^{\alpha }\left( \theta \right) \frac{\partial \psi ^{2}}{\partial
\theta ^{\alpha }}$ and $\omega =\psi $, the Lemma is proved.
\end{proof}
\subsection{The reduction}
From Lemma \ref{lemmaauxsimple} we see that there exists a coordinate system
on $U$ such that the $B$-matrix takes the form
\begin{equation}
B=\left[
\begin{array}{ccc}
b_{1}^{1} & b_{1}^{2} & 0 \\
b_{2}^{1} & b_{2}^{2} & b_{2}^{3} \\
1 & 0 & 0
\end{array}
\right] \label{simplified1}
\end{equation}
\begin{definition}
\label{simpdef}A local coordinate system on which the $B$-matrix takes the
form (\ref{simplified1}) is called simplifying coordinates. From now on we
will set
\begin{equation}
\left[
\begin{array}{ccc}
b_{1}^{1} & b_{1}^{2} & 0 \\
b_{2}^{1} & b_{2}^{2} & b_{2}^{3} \\
1 & 0 & 0
\end{array}
\right] :=\left[
\begin{array}{ccc}
\alpha & \beta & 0 \\
\delta & \epsilon & \zeta \\
1 & 0 & 0
\end{array}
\right] \label{simplified2}
\end{equation}
\end{definition}
Taking into account (\ref{simplified1}) as well as (\ref{contact1})--(\ref{contact5}), and the fact that $\det B=\beta \zeta \neq 0$, the contact system in simplifying coordinates becomes:
\begin{equation}
\alpha _{1}=0 \label{S1}
\end{equation}
\begin{equation}
\beta \delta _{1}=\alpha \epsilon _{1} \label{S2}
\end{equation}
\begin{equation}
\beta \left( \alpha \epsilon -\delta \beta \right) \zeta _{2}+\left( \alpha
\beta _{3}-\beta \alpha _{3}\right) \zeta ^{2}+\left( \beta ^{2}\delta
_{2}-\alpha \beta \epsilon _{2}+\alpha \epsilon \beta _{2}-\beta \epsilon
\alpha _{2}-2\beta \right) \zeta =0 \label{S3}
\end{equation}
\begin{equation}
\beta _{1}=0 \label{S4}
\end{equation}
\begin{equation}
\zeta _{1}=0 \label{S5}
\end{equation}
The relation (\ref{S2}) can be written as $\delta _{1}=\frac{\alpha \epsilon
_{1}}{\beta }$. Using (\ref{S1}) and (\ref{S4}) we get
\begin{equation}
\delta =\frac{\alpha \epsilon }{\beta }+F\left( x^{2},x^{3}\right)
\label{T2}
\end{equation}
where $F\left( x^{2},x^{3}\right) $ is an arbitrary smooth function. Then $\delta _{2}=F_{2}+\frac{\alpha _{2}\epsilon \beta +\alpha \epsilon _{2}\beta -\alpha \epsilon \beta _{2}}{\beta ^{2}}$ and (\ref{S3}) becomes
\begin{equation}
F\zeta _{2}+\left( \frac{\alpha }{\beta }\right) _{3}\zeta ^{2}-\left( F_{2}-
\frac{2}{\beta }\right) \zeta =0 \label{T3}
\end{equation}
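For the reader's convenience we record the intermediate identities behind this substitution (a routine check, included here only as a guide): using $\delta =\frac{\alpha \epsilon }{\beta }+F$ one finds
\begin{gather*}
\alpha \epsilon -\delta \beta =-F\beta ,\qquad \alpha \beta _{3}-\beta \alpha _{3}=-\beta ^{2}\left( \frac{\alpha }{\beta }\right) _{3}, \\
\beta ^{2}\delta _{2}-\alpha \beta \epsilon _{2}+\alpha \epsilon \beta _{2}-\beta \epsilon \alpha _{2}=\beta ^{2}F_{2},
\end{gather*}
so that (\ref{S3}) reads $-\beta ^{2}\left[ F\zeta _{2}+\left( \frac{\alpha }{\beta }\right) _{3}\zeta ^{2}-\left( F_{2}-\frac{2}{\beta }\right) \zeta \right] =0$; dividing by $-\beta ^{2}\neq 0$ gives (\ref{T3}).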
If $F=0$ then, after dividing by $\zeta \neq 0$, (\ref{T3}) becomes a linear equation with respect to $\zeta $, namely
\begin{equation}
\left( \frac{\alpha }{\beta }\right) _{3}\zeta =\left( F_{2}-\frac{2}{\beta }
\right) \label{T*3}
\end{equation}
If $F_{2}=0$ and $\left( \frac{\alpha }{\beta }\right) _{3}=0$ then (\ref{T*3}) reduces to $\frac{2}{\beta }=0$, which is absurd since $\beta \neq 0$. If $F_{2}\neq 0$
and $\left( \frac{\alpha }{\beta }\right) _{3}=0$ then $F_{2}=\frac{2}{\beta
}$ and $\zeta $ is any non-zero function of $x^{2}$ and $x^{3}$. If $\left(
\frac{\alpha }{\beta }\right) _{3}\neq 0$ then $\zeta =-\frac{2}{\beta
\left( \frac{\alpha }{\beta }\right) _{3}}$. If $F\neq 0$ then (\ref{T3})
yields
\begin{equation}
\zeta _{2}+\frac{1}{F}\left( \frac{\alpha }{\beta }\right) _{3}\zeta
^{2}-\left( \frac{F_{2}}{F}-\frac{2}{F\beta }\right) \zeta =0 \label{S*3}
\end{equation}
The equation (\ref{S*3}) is a Riccati ordinary differential equation. The
unknown function is $\zeta \left( x^{2},x^{3}\right) $ and the variable is $
x^{2}$ while $x^{3}$ is treated as a parameter. The solution of the above
equation is
\begin{equation}
\zeta =F\frac{e^{-2\int \frac{dx^{2}}{\beta F}}}{\int e^{-2\int \frac{dx^{2}
}{\beta F}}\left( \frac{\alpha }{\beta }\right) _{3}dx^{2}-K\left(
x^{3}\right) } \label{sur}
\end{equation}
where $K\left( x^{3}\right) $ is a smooth function. Observe that the right
hand side of (\ref{sur}) is a function of $x^{2}$ and $x^{3}$ in accordance
with (\ref{S5}).
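Let us sketch, only as a guide for the reader, how (\ref{sur}) is obtained. Since the free term of (\ref{S*3}) vanishes, the substitution $w=\frac{1}{\zeta }$ linearizes it:
\begin{equation*}
w_{2}+\left( \frac{F_{2}}{F}-\frac{2}{F\beta }\right) w=\frac{1}{F}\left( \frac{\alpha }{\beta }\right) _{3}.
\end{equation*}
With $\mu =Fe^{-2\int \frac{dx^{2}}{\beta F}}$ one checks directly that $\left( \mu w\right) _{2}=e^{-2\int \frac{dx^{2}}{\beta F}}\left( \frac{\alpha }{\beta }\right) _{3}$, hence
\begin{equation*}
\mu w=\int e^{-2\int \frac{dx^{2}}{\beta F}}\left( \frac{\alpha }{\beta }\right) _{3}dx^{2}-K\left( x^{3}\right) ,
\end{equation*}
and $\zeta =\frac{\mu }{\mu w}$ is exactly (\ref{sur}).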
\subsection{The contact form and the metric}
Define the $1$-form $\eta $ by $\eta \left( X\right) =g\left( X,e_{3}\right)
$. In simplifying coordinates we have $e_{3}=\frac{\partial }{\partial x^{1}}
$ and $\eta =dx^{1}-\frac{\alpha }{\beta }dx^{2}-\frac{F}{\zeta }dx^{3}$. It
is easy to see that (\ref{S3}) can be written as
\begin{equation*}
\left[ \frac{\alpha \epsilon -\delta \beta }{\beta \zeta }\right]
_{2}+\left( \frac{\alpha }{\beta }\right) _{3}=-\frac{2}{\beta \zeta }
\end{equation*}
A calculation shows that
\begin{equation*}
\eta \wedge d\eta =\frac{1}{2}\left[ -\left( \frac{\alpha }{\beta }\right)
_{3}-\left( \frac{\alpha \epsilon -\beta \delta }{\beta \zeta }\right) _{2}
\right] dx^{1}\wedge dx^{2}\wedge dx^{3}
\end{equation*}
Hence $\eta \wedge d\eta =\frac{1}{\beta \zeta }dx^{1}\wedge dx^{2}\wedge
dx^{3}\neq 0$ everywhere, and thus $\eta $ is a contact form, as expected.
The formula (\ref{m2}) yields
\begin{equation}
g=\left[
\begin{array}{ccc}
1 & -\frac{\alpha }{\beta } & \frac{-F}{\zeta } \\
-\frac{\alpha }{\beta } & \frac{1+\alpha ^{2}}{\beta ^{2}} & \frac{\alpha
\beta F-\epsilon }{\beta ^{2}\zeta } \\
\frac{-F}{\zeta } & \frac{\alpha \beta F-\epsilon }{\beta ^{2}\zeta } &
\frac{\beta ^{2}\left( 1+F^{2}\right) +\epsilon ^{2}}{\beta ^{2}\zeta ^{2}}
\end{array}
\right] \label{metric-simp}
\end{equation}
and $\det g=\frac{1}{\beta ^{2}\zeta ^{2}}$. The relation (\ref{metric-simp}
) along with (\ref{sur}), (\ref{T*3}), (\ref{S4}) and (\ref{S5}) yield the
local classification of $3$-dimensional contact manifolds.
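Note that the value of $\det g$ can also be seen directly from (\ref{m2}): written in matrix form, (\ref{m2}) says $g=\left( B^{-1}\right) \left( B^{-1}\right) ^{T}$, so
\begin{equation*}
\det g=\left( \det B\right) ^{-2}=\frac{1}{\beta ^{2}\zeta ^{2}},
\end{equation*}
in agreement with the expression above.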
\section{Construction of contact metric structures on $3$-manifolds}
\subsection{An algorithm for the construction of contact metric structures
on $3$-manifolds\label{algorithm}}
Summarizing the results of the previous section, we give an algorithm
that produces contact metric structures on $3$-manifolds.
\begin{enumerate}
\item Consider a $3$-dimensional orientable manifold $M^{3}$ with a
parallelization $\left\{ e_{1},e_{2},e_{3}\right\} $, a local
chart $\left( U^{\prime },\theta \right) $ and the metric $g\left(
e_{i},e_{j}\right) =\delta _{ij}$. Consider the open set $U=\theta \left( U^{\prime }\right) \subset \mathbb{R}^{3}$. Introduce on $U$ simplifying coordinates $x^{1}$, $x^{2}$ and $x^{3}$.
\item \label{step2}Choose smooth functions $\alpha \left( x^{2},x^{3}\right) $, $
\beta \left( x^{2},x^{3}\right) \neq 0$, $\epsilon \left(
x^{1},x^{2},x^{3}\right) $ and $F\left( x^{2},x^{3}\right) $.\\ Set $\delta
\left( x^{1},x^{2},x^{3}\right) :=\frac{\alpha \epsilon }{\beta }+F$.
\item If $F=0$ then compute the quantity $\left( \frac{\alpha }{\beta }
\right) _{3}$.
\begin{enumerate}
\item If $\left( \frac{\alpha }{\beta }\right) _{3}=0$ then examine if $
F_{2}=\frac{2}{\beta }$.
\begin{enumerate}
\item If $F_{2}=\frac{2}{\beta }$ holds then set $\zeta $ to be any non-zero
smooth function of $x^{2}$ and $x^{3}$ and go to step \ref{step}.
\item If $F_{2}\neq \frac{2}{\beta }$ then go to step \ref{step2} and try an
$F$ for which $F_{2}=\frac{2}{\beta }$.
\end{enumerate}
\item If $\left( \frac{\alpha }{\beta }\right) _{3}\neq 0$ then set $\zeta =
\frac{F_{2}-\frac{2}{\beta }}{\left( \frac{\alpha }{\beta }\right) _{3}}$
and go to step \ref{step}.
\end{enumerate}
\item If $F\neq 0$ then compute $\zeta $ by the relation
\begin{equation*}
\zeta =F\frac{e^{-2\int \frac{dx^{2}}{\beta F}}}{\int e^{-2\int \frac{dx^{2}
}{\beta F}}\left( \frac{\alpha }{\beta }\right) _{3}dx^{2}-K\left(
x^{3}\right) }
\end{equation*}
\item \label{step}Compute the metric $g$ in simplifying coordinates by the
relation:
\begin{equation*}
g=\left[
\begin{array}{ccc}
1 & -\frac{\alpha }{\beta } & \frac{-F}{\zeta } \\
-\frac{\alpha }{\beta } & \frac{1+\alpha ^{2}}{\beta ^{2}} & \frac{\alpha
\beta F-\epsilon }{\beta ^{2}\zeta } \\
\frac{-F}{\zeta } & \frac{\alpha \beta F-\epsilon }{\beta ^{2}\zeta } &
\frac{\beta ^{2}\left( 1+F^{2}\right) +\epsilon ^{2}}{\beta ^{2}\zeta ^{2}}
\end{array}
\right]
\end{equation*}
\item Write down the $B$-matrix
\begin{equation*}
B=\left[
\begin{array}{ccc}
\alpha & \beta & 0 \\
\delta & \epsilon & \zeta \\
1 & 0 & 0
\end{array}
\right]
\end{equation*}
Using Lemma \ref{lemmaauxsimple} write down the vector fields $e_{1}=\alpha
\frac{\partial }{\partial x^{1}}+\beta \frac{\partial }{\partial x^{2}}$, $
e_{2}=\delta \frac{\partial }{\partial x^{1}}+\epsilon \frac{\partial }{
\partial x^{2}}+\zeta \frac{\partial }{\partial x^{3}}$ and $e_{3}=\frac{\partial }{\partial x^{1}}$ on $U$ in simplifying coordinates.
\item Define on $M^{3}$ the $1$-form $\eta $ by $\eta \left( X\right)
=g\left( X,e_{3}\right) $. On $U$ and in simplifying coordinates we have $
\eta =dx^{1}-\frac{\alpha }{\beta }dx^{2}-\frac{F}{\zeta }dx^{3}$. A
calculation shows that $\eta \wedge d\eta =\frac{1}{\beta \zeta }
dx^{1}\wedge dx^{2}\wedge dx^{3}\neq 0$ everywhere on $U$, thus $\eta $ is a
contact form.
\item Define the $(1,1)$ tensor field $\phi $ by $\phi e_{3}=0$, $\phi
e_{1}=e_{2}$, $\phi e_{2}=-e_{1}$. Then $\eta \left( e_{3}\right) =1$, $\phi
^{2}X=-X+\eta \left( X\right) e_{3}$, $d\eta \left( X,Y\right) =g\left(
X,\phi Y\right) $ and $g\left( \phi X,\phi Y\right) =g\left( X,Y\right)
-\eta \left( X\right) \eta \left( Y\right) $ for all vector fields $X$, $Y$
on $M^{3}$.
\item Put $\xi :=e_{3}$, $\phi e:=e_{2}$, $e:=e_{1}$. The tetrad $(\eta $, $
\phi $, $\xi $, $g)$ is a contact metric structure.
\end{enumerate}
\subsection{Example\label{example1}}
We give an example of a contact metric $3$-manifold. We use the algorithm of
section \ref{algorithm} and the notion of simplifying coordinates.
Consider $M^{3}=\mathbb{R}^{3}-\left\{ x^{2}=0\right\} $ with simplifying
coordinates $x^{1}$, $x^{2}$ and $x^{3}$. Let $\alpha \left(
x^{2},x^{3}\right) =0$, $\beta \left( x^{2},x^{3}\right) =x^{2}$, $\epsilon
\left( x^{1},x^{2},x^{3}\right) =x^{1}$ and $F\left( x^{2},x^{3}\right) =1$.
Set $\delta \left( x^{1},x^{2},x^{3}\right) :=\frac{\alpha \epsilon }{\beta }
+F=1$. Since $F\neq 0$ we have $\zeta =-\frac{1}{\left( x^{2}\right)
^{2}K\left( x^{3}\right) }$. The metric is
\begin{equation*}
g=\left[
\begin{array}{ccc}
1 & 0 & \left( x^{2}\right) ^{2}K\left( x^{3}\right) \\
0 & \frac{1}{\left( x^{2}\right) ^{2}} & x^{1}K\left( x^{3}\right) \\
\left( x^{2}\right) ^{2}K\left( x^{3}\right) & x^{1}K\left( x^{3}\right) &
\left( x^{2}\right) ^{2}K^{2}\left( x^{3}\right) \left( \left( x^{1}\right)
^{2}+2\left( x^{2}\right) ^{2}\right)
\end{array}
\right]
\end{equation*}
The $B$-matrix is
\begin{equation*}
B=\left[
\begin{array}{ccc}
0 & x^{2} & 0 \\
1 & x^{1} & -\frac{1}{\left( x^{2}\right) ^{2}K\left( x^{3}\right) } \\
1 & 0 & 0
\end{array}
\right]
\end{equation*}
and define the vector fields $e_{1}:=x^{2}\frac{\partial }{\partial x^{2}}$,
$e_{2}:=\frac{\partial }{\partial x^{1}}+x^{1}\frac{\partial }{\partial x^{2}
}-\frac{1}{\left( x^{2}\right) ^{2}K\left( x^{3}\right) }\frac{\partial }{
\partial x^{3}}$ and $e_{3}:=\frac{\partial }{\partial x^{1}}$. Define the $1$-form $\eta $ by $\eta \left( X\right) =g\left( X,e_{3}\right) $. In
simplifying coordinates we have $\eta =dx^{1}+\left( x^{2}\right)
^{2}K\left( x^{3}\right) dx^{3}$ and $\eta \wedge d\eta =-x^{2}K\left( x^{3}\right) dx^{1}\wedge dx^{2}\wedge dx^{3}\neq 0$
everywhere. Define the $(1,1)$ tensor field $\phi $ by $\phi e_{3}=0$, $\phi
e_{1}=e_{2}$, $\phi e_{2}=-e_{1}$. Then $\eta \left( e_{3}\right) =1$, $\phi
^{2}X=-X+\eta \left( X\right) e_{3}$, $d\eta \left( X,Y\right) =g\left(
X,\phi Y\right) $ and $g\left( \phi X,\phi Y\right) =g\left( X,Y\right)
-\eta \left( X\right) \eta \left( Y\right) $ for all vector fields $X$, $Y$
on $M^{3}$. Put $\xi :=e_{3}$, $\phi e:=e_{2}$, $e:=e_{1}$. The tetrad $
(\eta $, $\phi $, $\xi $, $g)$ is a contact metric structure on $M^{3}$.
Further calculations yield
\begin{equation*}
\left[ e_{1},e_{2}\right] =-2e_{1}-\frac{x^{1}}{x^{2}}e_{2}+2e_{3},~~~\left[
e_{2},e_{3}\right] =-\frac{1}{x^{2}}e_{1},~~~\left[ e_{3},e_{1}\right] =0
\end{equation*}
Note that $M^{3}$ is not flat since the scalar curvature is $r=-10-\frac{
1+8x^{2}}{2\left( x^{2}\right) ^{2}}$.
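As a sanity check (recorded here only for illustration), the matrix $g$ above does make the frame orthonormal; for instance
\begin{equation*}
g\left( e_{1},e_{1}\right) =\left( x^{2}\right) ^{2}g_{22}=1,\qquad g\left( e_{3},e_{3}\right) =g_{11}=1,\qquad g\left( e_{1},e_{3}\right) =x^{2}g_{21}=0,
\end{equation*}
and a short computation gives $g\left( e_{2},e_{2}\right) =1$ and $g\left( e_{2},e_{1}\right) =g\left( e_{2},e_{3}\right) =0$ as well.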
\begin{remark}
Given an associated metric $g$, any Riemannian metric $\gamma $ can be obtained
by adding to $g$ a tensor field $t$, provided that $\gamma $ remains a
symmetric, positive definite bilinear form. The following formula gives such
a $t$ in simplifying coordinates:
\begin{equation}
t=\left[
\begin{array}{ccc}
e^{\iota }-1 & \rho & \sigma \\
\rho & e^{\kappa }-\frac{1+\alpha ^{2}}{\beta ^{2}} & \upsilon \\
\sigma & \upsilon & e^{\nu }-\frac{\beta ^{2}\left( 1+F^{2}\right)
+\epsilon ^{2}}{\beta ^{2}\zeta ^{2}}
\end{array}
\right] \label{mt}
\end{equation}
where $\iota $, $\kappa $, $\nu $, $\rho $, $\sigma $ and $\upsilon $ are
any smooth functions of $x^{1}$, $x^{2}$ and $x^{3}$. The final formula
giving any Riemannian metric is
\begin{equation}
g+t=\gamma =\left[
\begin{array}{ccc}
e^{\iota } & -\frac{\alpha }{\beta }+\rho & -\frac{F}{\zeta }+\sigma \\
-\frac{\alpha }{\beta }+\rho & e^{\kappa } & \frac{\alpha \beta F-\epsilon
}{\beta ^{2}\zeta }+\upsilon \\
-\frac{F}{\zeta }+\sigma & \frac{\alpha \beta F-\epsilon }{\beta ^{2}\zeta }
+\upsilon & e^{\nu }
\end{array}
\right] \label{rm}
\end{equation}
Recalling step (\ref{step2}) of the algorithm presented in section \ref
{algorithm}, we see that in (\ref{rm}) the functions $\alpha $, $\beta $, $
\zeta $, $\epsilon $ and $F$ can be absorbed by a redefinition of $\rho $, $
\sigma $ and $\upsilon $ thus yielding (as expected) any symmetric, positive
definite bilinear form as a Riemannian metric. However in (\ref{mt}), the
matrix $t$ involves the functions $\alpha $, $\beta $, $\zeta $, $\epsilon $
and $F$ in a crucial way.
\end{remark}
\end{document}
\begin{document}
\date{}
\title{On Ryser's conjecture for \texorpdfstring{$t$}{t}-intersecting and degree-bounded hypergraphs\let\thefootnote\relax\footnote{Mathematics Subject Classifications: 05C15, 05C65, 05B40, 05B25}}
\begin{abstract}
A famous conjecture (usually called Ryser's conjecture) that appeared in the
PhD thesis of his student, J.~R.~Henderson \cite{H}, states that
for an $r$-uniform
$r$-partite hypergraph $\ensuremath{\mathcal{H}}$, the inequality
$\tau(\ensuremath{\mathcal{H}})\le(r-1)\!\cdot\! \nu(\ensuremath{\mathcal{H}})$ always holds.
This conjecture is widely open, except in the case of
$r=2$, when it is
equivalent to K\H onig's theorem \cite{K}, and in the case of $r=3$, which was
proved by Aharoni in 2001 \cite{A}.
Here we study some special cases of Ryser's conjecture.
First of all, the most studied special case is when $\ensuremath{\mathcal{H}}$ is
intersecting. Even for this special case, not too much is known: this
conjecture is proved only for $r\le 5$ in \cite{Gy,T2}. For $r>5$
it is also widely open.
Generalizing the conjecture for intersecting hypergraphs, we conjecture
the following.
If an $r$-uniform
$r$-partite hypergraph $\ensuremath{\mathcal{H}}$ is $t$-intersecting (i.e., every two hyperedges
meet in at least $t<r$ vertices), then $\tau(\ensuremath{\mathcal{H}})\le r-t$. We prove this
conjecture for the case $t> r/4$.
Gy\'arf\'as
\cite{Gy} showed that Ryser's conjecture for intersecting hypergraphs is equivalent to
saying that the vertices of an $r$-edge-colored complete graph can be covered
by $r-1$ monochromatic components.
Motivated by this formulation, we examine what fraction of the vertices can
be covered by $r-1$ monochromatic components of \emph{different} colors in an
$r$-edge-colored complete graph.
We prove a sharp bound for this problem.
Finally we prove Ryser's conjecture for the very special case when the maximum
degree of the hypergraph is two.
\end{abstract}
\section{Introduction}
A hypergraph is a pair $\ensuremath{\mathcal{H}}=(V,E)$ where $V$ is a finite set (vertices), and $E$ is a multiset of subsets of $V$ (hyperedges).
A hypergraph is $r$-partite if its vertex set has a partition into $r$ nonempty
classes such that no hyperedge contains two vertices from the same class.
We refer to the partite classes simply as \emph{classes} (note that in some
papers they are called sides).
A set is called \emph{multi-colored} if it intersects every class in at most
one vertex, i.e., in an $r$-partite hypergraph every hyperedge is
multi-colored.
A hypergraph is $r$-uniform if all of its hyperedges have cardinality $r$. A
hypergraph is $d$-regular if every vertex is contained in exactly $d$
hyperedges.
A hypergraph is $t$-intersecting if every pair of hyperedges have at least $t$
common vertices. Throughout the paper we assume $0<t<r$ when speaking about
$t$-intersecting $r$-uniform hypergraphs.
A hypergraph is intersecting if it is 1-intersecting.
Let us introduce some more standard notations.
For a hypergraph $\ensuremath{\mathcal{H}}$ with vertex set $V=V(\ensuremath{\mathcal{H}})$ and
hyperedge set $E=E(\ensuremath{\mathcal{H}})$
\[\textrm{the vertex covering number is: } \tau(\ensuremath{\mathcal{H}})=\min\{|T| : T\subseteq V,\; T\cap f\neq\emptyset \;\; \forall f\in
E\},\]
\[\textrm{the edge covering number is: } \varrho(\ensuremath{\mathcal{H}})=\min\{|F| : F\subseteq E, \; \; \bigcup F=V\},\]
\[\textrm{the matching number is: } \nu(\ensuremath{\mathcal{H}})=\max\{|F| : F\subseteq E , \; f_1\cap f_2=\emptyset \; \; \forall
f_1\ne f_2\in F \},\]
\[\textrm{the maximum degree is: } \Delta(\ensuremath{\mathcal{H}})=\max\{|F|: \; F\subseteq E, \; \bigcap F\ne \emptyset\},\]
\[\textrm{the independence number is: } \alpha(\ensuremath{\mathcal{H}})=\max\{|X|: X\subseteq V, \; f\not\subseteq X \;\; \forall
f\in E\},\]
\[\textrm{the strong independence number is: }\alpha'(\ensuremath{\mathcal{H}})=\max\{|X|: X\subseteq V, \; |f\cap X|\leq 1\ \;\; \forall
f\in E\}.\]
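For a small illustration of these parameters (an example added only for orientation), consider the hypergraph with $V=\{1,2,3,4\}$ and $E=\big\{\{1,2,3\},\{2,3,4\}\big\}$. Here
\[\tau=1,\quad \varrho=2,\quad \nu=1,\quad \Delta=2,\quad \alpha=3,\quad \alpha'=2:\]
indeed, $\{2\}$ meets both hyperedges, both hyperedges are needed to cover $V$, the two hyperedges intersect, the vertex $2$ lies in two hyperedges, $\{1,2,4\}$ contains no hyperedge, and $\{1,4\}$ meets each hyperedge in at most one vertex.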
A famous conjecture of Ryser (which appeared in the
PhD thesis of his student, J.R.~Henderson \cite{H}) states that
for an $r$-uniform
$r$-partite hypergraph $\ensuremath{\mathcal{H}}$, we have
$\tau(\ensuremath{\mathcal{H}})\le(r-1)\!\cdot\! \nu(\ensuremath{\mathcal{H}})$.
This conjecture is widely open, except in the special case of
$r=2$, when it is
equivalent to K\H onig's theorem \cite{K}, and when
$r=3$, which was
proved by Aharoni in 2001 \cite{A}, using topological results from \cite{AH}. We
mention also some related results. Henderson \cite{H} showed that the conjecture
cannot be improved if $r-1$ is a prime power.
Haxell and Scott \cite{HS} showed that the constant in the conjecture cannot be smaller than $r-4$ for all but finitely many values of $r$.
F\"uredi \cite{F} proved that
the fractional covering number is always at most $(r-1)\!\cdot\!\nu(\ensuremath{\mathcal{H}})$, and
Lov\'asz \cite{L} proved that the fractional matching number is always at
least $\frac2r\!\cdot\!\tau(\ensuremath{\mathcal{H}})$.
The hypergraphs achieving $\tau(\ensuremath{\mathcal{H}}) = (r-1)\!\cdot\! \nu(\ensuremath{\mathcal{H}})$ have also been investigated, but this problem is also widely open. Haxell, Narins and Szab\'o characterized the sharp examples for $r=3$ \cite{HNSz1,HNSz2}. For larger values of $r$, truncated projective planes give an infinite family of sharp examples.
Apart from these, there are some sporadic examples \cite{ABW,AP,FHMW,MSY}; moreover, Abu-Khazneh, Bar\'at, Pokrovskiy and Szab\'o \cite{ABPSz} constructed another infinite family of extremal hypergraphs, but projective planes also play an important role in their construction.
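As an illustration of these sharp examples (a standard instance, recalled here for convenience), consider the truncated Fano plane: delete a point $p$ of the Fano plane together with the three lines through $p$. The resulting hypergraph is $3$-uniform, $3$-partite and intersecting, so $\nu=1$; every surviving point lies on exactly two of the four surviving lines, hence no single vertex covers all of them, while the intersection point of two of the lines together with the intersection point of the other two is a cover, so
\[\tau=2=(r-1)\cdot\nu \qquad (r=3).\]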
Here we study some special cases of Ryser's conjecture.
First of all, the most studied special case is when $\nu=1$, i.e., when $\ensuremath{\mathcal{H}}$ is
intersecting. Even for this case, not too much is known. Gy\'arf\'as
\cite{Gy} showed that this special case of the conjecture is equivalent to
saying that the vertices of an $r$-edge-colored complete graph can be covered
by $r-1$ monochromatic components (see below).
He also proved this conjecture for $r\le 4$ \cite{Gy}, and later Tuza \cite{T2}
proved it for $r=5$. For $r>5$ this conjecture is also widely open. Some
recent papers study this special case, e.g., see \cite{AB, HS, FHMW, MSY}.
For intersecting hypergraphs, we generalize Ryser's conjecture by conjecturing the following.
If an $r$-uniform
$r$-partite hypergraph $\ensuremath{\mathcal{H}}$ is $t$-intersecting, then $\tau(\ensuremath{\mathcal{H}})\le r-t$. We
prove this conjecture for the case $r>t> r/4$. This question was also studied (independently) by Bustamante and Stein, see \cite{BS}.
The construction of Gy\'arf\'as \cite{Gy} (see also \cite{KZ}) is the
following. We associate a multi edge-colored graph to an $r$-partite $r$-uniform
hypergraph.
\begin{defn} \label{def:Gyarfas_constr}
For an $r$-partite $r$-uniform hypergraph $\ensuremath{\mathcal{H}}$, let $G=G(\ensuremath{\mathcal{H}})$ be the
following multi edge-colored graph:
The vertex set of $G$ is $V(G)=E(\ensuremath{\mathcal{H}})$.
Two vertices $u,v\in V(G)$ are connected by an edge if the corresponding hyperedges $E_u, E_v\in E(\ensuremath{\mathcal{H}})$ have a nonempty intersection. The edge $uv$ is colored by the colors $\{i: E_u\textrm{ and } E_v \textrm{ share a vertex from the $i^{th}$ class}\}$.
We denote the set of colors of edge $uv$ by $\ensuremath{\mathrm{Col}}(uv)$. If $i\in \ensuremath{\mathrm{Col}}(uv)$, then we say that the edge $uv$ \emph{has} the color $i$.
\end{defn}
Note that if $\ensuremath{\mathcal{H}}$ is intersecting,
then $G$ is a complete graph.
\begin{remark}
The original construction of Gy\'arf\'as colored each edge $uv$ by only one
color, chosen arbitrarily from $\ensuremath{\mathrm{Col}}(uv)$.
\end{remark}
\begin{remark}\label{remark_multi}
The color sets we defined in this way are transitive: if
$i\in \ensuremath{\mathrm{Col}}(uv)\cap \ensuremath{\mathrm{Col}}(vw)$, then $i\in \ensuremath{\mathrm{Col}}(uw)$. We call a complete
graph $G$ \emph{multi $r$-edge-colored} if for each distinct vertex pair $\{u,v\}$
we have $\emptyset\ne \ensuremath{\mathrm{Col}}(uv)\subseteq[r]=\{1,\dots,r\}$ and if the
coloring is transitive.
In a multi $r$-edge-colored graph, a \emph{monochromatic component} of color
$i$ is a component of the subgraph formed by the edges using the color $i$.
Note that -- as the coloring is transitive -- if $U$ is the vertex set
of a monochromatic component of color $i$, then for every $u\ne v\in U$ we
have $i\in\ensuremath{\mathrm{Col}}(uv)$; in other words, the monochromatic components of color $i$ partition $V(G)$ into $i$-colored cliques.
Each vertex of $\ensuremath{\mathcal{H}}$ in the class $i$ corresponds to
one maximal clique of color $i$, which is also a monochromatic ($i$-colored)
connected component of $G$. A set of vertices $T\subseteq V(\ensuremath{\mathcal{H}})$ covers the
hyperedges of $\ensuremath{\mathcal{H}}$ (as in the definition of $\tau$) if and only if the
monochromatic components corresponding to its elements cover $V(G)$.
\end{remark}
\begin{remark}\label{remark_multi2}
We also note that for any edge-colored complete graph we can consider the
color-transitive closure: for any edge $uv$ we define
$\ensuremath{\mathrm{Col}}(uv)=\{i \;|\; u \textrm{ and } v$
are in the same monochromatic component of color $i\}$.
The vertex sets of the monochromatic components of this multi edge-colored graph are the same as the vertex sets of the monochromatic components of the original edge-colored graph.
\end{remark}
Ryser's conjecture for intersecting hypergraphs is equivalent to the statement
that $r-1$ monochromatic components can cover $V(G(\ensuremath{\mathcal{H}}))$. The more general
conjecture for $t$-intersecting hypergraphs is equivalent to the statement
that for every multi $r$-edge-colored complete graph, where each edge has at
least $t$ colors, there is a set of $r-t$ monochromatic components that cover
the vertices (if $t<r$).
For the case of $r$-edge-colored complete graphs, we also study the following
problem:
What fraction of the vertices can be
covered by $r-1$ monochromatic components of \emph{different} colors?
We prove a sharp bound for this problem, namely
$\big(1-\frac{r-2}{(r-1)^2}\big)\cdot |V(G)|$.
In the hypergraph language, this corresponds to the question of ``How many
hyperedges can be covered by a multi-colored set of size $r-1$ in an
intersecting $r$-partite $r$-uniform hypergraph?'' We show that the
hypergraphs giving the minimum are exactly the hypergraphs that can be obtained from a truncated
projective plane by replacing each hyperedge by $b$ parallel copies for some
integer $b$.
Finally we prove Ryser's conjecture for the very special case when the maximum
degree of the hypergraph is two, i.e., when no vertex is contained in three or
more hyperedges.
A preliminary version of this paper can be found in \cite{KTtech}.
\section{The \texorpdfstring{$t$}{t}-intersecting case}
\begin{conj}\label{conj:t-intersecting}
Let $\ensuremath{\mathcal{H}}$ be an $r$-uniform $r$-partite $t$-intersecting hypergraph with $1\leq t \leq r-1$.
Then $\tau(\ensuremath{\mathcal{H}}) \leq r-t$.
\end{conj}
\begin{theorem}\label{p:t-intersecting}
If $\ensuremath{\mathcal{H}}$ is an $r$-uniform $r$-partite $t$-intersecting hypergraph and
$\frac{r}{4}< t \leq r-1$, then $\tau(\ensuremath{\mathcal{H}}) \leq r-t$.
\end{theorem}
Using Gy\'arf\'as' construction (Definition \ref{def:Gyarfas_constr}),
Theorem \ref{p:t-intersecting} follows from
the following statement:
\begin{theorem}\label{thm:t-metszo_eros}
Let $G$ be a multi $r$-edge-colored complete graph
where each edge has at least $t$ different colors.
If $r-1\geq t> \frac{r}{4}$, then $V(G)$ can be covered by at most $r-t$
monochromatic components.
\end{theorem}
\begin{remark}\label{not_stronger}
Conjecture \ref{conj:t-intersecting} is seemingly a
strengthening of Ryser's conjecture for
intersecting hypergraphs (which corresponds to $t=1$).
However, the statement is stronger for smaller $t$
values.
To see this, suppose the conjecture is proved for a fixed $t$ and for every
$r>t$.
To prove it for $t+1$, suppose we are given a multi $r$-edge-colored complete graph
where each edge has at least $t+1$ different colors and $r>t+1$.
Deleting color $r$ from every $\ensuremath{\mathrm{Col}}(uv)$, we get a multi
$(r\!-\!1)$-edge-colored complete graph where each edge has at least
$t$ different colors, so by the assumption its vertex set can be covered by
$r-1-t=r-(t+1)$ monochromatic components.
\end{remark}
\begin{remark}\label{rem_BS}
In a recent manuscript \cite{BS},
Bustamante and Stein independently formulated the same conjecture (we are thankful to the reviewer who drew our attention to it).
They proved that the conjecture is true if $r-1\geq t\ge \frac{r-2}{2}$.
We note that our theorem is stronger (except that their result contains the
well-known case $r=4,\; t=1$ while our result does not).
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm:t-metszo_eros}]
We assume $[r]=\{1,2,\ldots,r\}$ is the set of colors, and if $x$ is a vertex and $I\subseteq [r]$ is a set of colors, then we denote by $\ensuremath{\mathcal{C}}(x,I)$ the set of monochromatic components containing $x$ and having a color in $I$.
The proof goes by induction on the number of vertices. If $|V(G)|\le 2$, we can
cover $V(G)$ by 1 monochromatic component.
If there are $x\ne y\in V(G)$ where
$|\ensuremath{\mathrm{Col}}(xy)|=r$, then contract the edge $xy$ to a vertex $x^*$
(by color-transitivity, for any vertex
$z\ne x,y$ we have $\ensuremath{\mathrm{Col}}(zx)=\ensuremath{\mathrm{Col}}(zy)$, so we define $\ensuremath{\mathrm{Col}}(zx^*)=\ensuremath{\mathrm{Col}}(zx)$).
By induction, the graph obtained can be covered by at most $r-t$ monochromatic
components. It is easy to see that the preimages of these components are
monochromatic and cover $V(G)$. So from this point we may (and will)
suppose that $|\ensuremath{\mathrm{Col}}(xy)|<r$ for every pair $x\ne y$.
First we prove some special cases.
\begin{lemma}\label{lem:t-metszo_eros}
Let $G$ be a multi $r$-edge-colored complete graph, where
each edge has
exactly $t$ different colors.
If $t+1\le r\le 4t-2$, then $V(G)$ can be covered by at most $r-t$
monochromatic components.
\end{lemma}
\begin{proof}
Take any edge $xy$. Without loss of generality, we can suppose that
$\ensuremath{\mathrm{Col}}(xy)=I=[t]$.
First consider the case $r\leq 2t$.
Let $J=[r-t]$. Now $J\subseteq I$ since $r-t\leq t$.
We claim that $\ensuremath{\mathcal{C}}(x,J)=\ensuremath{\mathcal{C}}(y,J)$ covers $V(G)$. If a vertex $z$ is not
covered, then $\ensuremath{\mathrm{Col}}(xz)=\ensuremath{\mathrm{Col}}(yz)=\{r-t+1,\ldots,r\}$. However, since each
monochromatic component is a clique, we get $\{r-t+1,\ldots,r\}\subseteq \ensuremath{\mathrm{Col}}(xy)=I$, so
$t=r$ contradicting
the assumption $t<r$.
Thus it remains to prove the case $r>2t$. Let
$j=\lfloor\frac{r}{2}\rfloor-t$ and
$J=\{t+1, \dots, t+j\}$ if $j>0$, and
$J=\emptyset$ otherwise. Take $\ensuremath{\mathcal{C}}(x,I)\cup \ensuremath{\mathcal{C}}(x,J)\cup \ensuremath{\mathcal{C}}(y,J)$.
We claim that these $t+2j\le r-t$ monochromatic components cover the
vertices of $G$.
If a vertex $z$ is not
covered, then $\ensuremath{\mathrm{Col}}(xz)\subseteq \{t+j+1, \dots, r\}$ and
$\ensuremath{\mathrm{Col}}(yz)\subseteq \{t+j+1, \dots, r\}$ and, as the coloring is transitive,
$\ensuremath{\mathrm{Col}}(xz)\cap \ensuremath{\mathrm{Col}}(yz)\subseteq I$,
thus $\ensuremath{\mathrm{Col}}(xz)\cap \ensuremath{\mathrm{Col}}(yz)=\emptyset$.
However, $|\ensuremath{\mathrm{Col}}(xz)|=|\ensuremath{\mathrm{Col}}(yz)|=t$, so $2t\leq r-t-j$, i.e.,
$2t\leq \lceil\frac{r}{2}\rceil$ or equivalently $r\geq 4t - 1$, a contradiction.
\end{proof}
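To illustrate the second case with concrete numbers (chosen here only for illustration), let $r=6$, $t=2$ and $\ensuremath{\mathrm{Col}}(xy)=\{1,2\}$. Then $j=\lfloor r/2\rfloor-t=1$, $J=\{3\}$, and we take the $t+2j=4=r-t$ components
\[\ensuremath{\mathcal{C}}(x,\{1,2\})\cup \ensuremath{\mathcal{C}}(x,\{3\})\cup \ensuremath{\mathcal{C}}(y,\{3\}).\]
If some vertex $z$ were uncovered, then $\ensuremath{\mathrm{Col}}(xz)$ and $\ensuremath{\mathrm{Col}}(yz)$ would be disjoint $2$-element subsets of $\{4,5,6\}$, which is impossible.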
\begin{lemma}
Let $G$ be a multi $r$-edge-colored complete graph where each
edge has
at least $t$ different colors.
If $t+1\le r\leq 4t-1$ and there is an edge $xy$ with $t<|\ensuremath{\mathrm{Col}}(xy)|<r$,
then $V(G)$ can be covered by at most $r-t$
monochromatic components.
\end{lemma}
\begin{proof}
Take an edge $xy$ with $|\ensuremath{\mathrm{Col}}(xy)|>t$. Without loss of generality, we can suppose that
$\ensuremath{\mathrm{Col}}(xy)=I=[\ell]$ where $t<\ell<r$.
First consider the case $r\leq t+\ell$.
Let $J=[r-t]$. Now $J\subseteq I$.
We claim that $\ensuremath{\mathcal{C}}(x,J)=\ensuremath{\mathcal{C}}(y,J)$ covers $V(G)$. If a vertex $z$ is not
covered, then $\ensuremath{\mathrm{Col}}(xz)=\ensuremath{\mathrm{Col}}(yz)=\{r-t+1,\ldots,r\}$. However, since the
coloring is transitive, we get $\{r-t+1,\ldots,r\}\subseteq I$, so
$\ell=|I|=r$ contradicting
the assumption $\ell<r$.
Thus it remains to prove the case $r>t+\ell$. Let
$j=\lfloor\frac{r-t-\ell}{2}\rfloor$ and
$J=\{\ell+1, \dots, \ell+j\}$ if $j>0$, and
$J=\emptyset$ otherwise. Take $\ensuremath{\mathcal{C}}(x,I)\cup \ensuremath{\mathcal{C}}(x,J)\cup \ensuremath{\mathcal{C}}(y,J)$.
We claim that these $\ell+2j\le r-t$ monochromatic components cover the
vertices of $G$.
If a vertex $z$ is not
covered, then $\ensuremath{\mathrm{Col}}(xz)\subseteq \{\ell+j+1, \dots, r\}$ and
$\ensuremath{\mathrm{Col}}(yz)\subseteq \{\ell+j+1, \dots, r\}$ and, as each
monochromatic component is a clique, $\ensuremath{\mathrm{Col}}(xz)\cap \ensuremath{\mathrm{Col}}(yz)=\emptyset$.
However, $|\ensuremath{\mathrm{Col}}(xz)|\geq t$ and $|\ensuremath{\mathrm{Col}}(yz)|\geq t$, so $2t\leq r-\ell-j$, i.e.,
$t\leq \lceil\frac{r-t-\ell}{2}\rceil$ or equivalently
$r\geq 3t+\ell-1\geq 4t$, a contradiction.
\end{proof}
It remains to prove the case $r=4t-1$ and $|\ensuremath{\mathrm{Col}}(xy)|=t$ for each $x\ne y$.
Let $k$ be the largest integer $j$ such that there is a triangle in $G$ with
$j$ colors occurring on all three edges.
Let $xyz$ be a triangle with $k$ common colors on its edges.
Let us introduce some further notations.
Let $K=\ensuremath{\mathrm{Col}}(xy)\cap\ensuremath{\mathrm{Col}}(yz)\cap\ensuremath{\mathrm{Col}}(zx)$
and $X=\ensuremath{\mathrm{Col}}(yz) - K$
and $Y=\ensuremath{\mathrm{Col}}(xz) - K$
and $Z=\ensuremath{\mathrm{Col}}(xy) - K$, finally let
$S=[r]- (K\cup X\cup Y\cup Z)$.
\noindent
For a set $A$, $A'$ always denotes a subset of $A$. Moreover, we denote $A- A'$ by $A''$. Note that $|K|=k$ and $|X|=|Y|=|Z|=t-k$.
\par \textbf{Case 0: } $k=0$.
Now $|V(G)|\le r+1$ as no two incident edges may have the same color.
Let $V(G)=\{u_1,\ldots,u_n\}$, where $n\le r+1$, and let $c_i\in \ensuremath{\mathrm{Col}}(u_{2i-1}u_{2i})$ for $1\le i\le n/2$.
If $n$ is even, then consider $\ensuremath{\mathcal{C}}(u_1,c_1)\cup\ensuremath{\mathcal{C}}(u_3,c_2)\cup\ldots\cup\ensuremath{\mathcal{C}}(u_{n-1},c_{n/2})$, otherwise consider $\ensuremath{\mathcal{C}}(u_1,c_1)\cup\ensuremath{\mathcal{C}}(u_3,c_2)\cup\ldots\cup\ensuremath{\mathcal{C}}(u_{n-2},c_{(n-1)/2})\cup\ensuremath{\mathcal{C}}(u_n,c)$, where $c$ is an arbitrary color of an edge incident to $u_n$. These (at most $\lfloor(n+1)/2\rfloor\le \lfloor(r+2)/2\rfloor$) monochromatic components obviously cover $V(G)$, and $\lfloor(r+2)/2\rfloor\le r-t$ as $r=4t-1$.
\par \textbf{Case 1: } $0<3k\le t$.
Choose $Y'\subseteq Y$ and $Z'\subseteq Z$ so that $|Y'|+|Z'|=t+k-1$. This is
possible, since $|Y|+|Z|= 2t-2k \geq t+k$ because $t\geq 3k$.
Take the following monochromatic components:
$\ensuremath{\mathcal{C}}(x,K\cup Y\cup Z)\cup \ensuremath{\mathcal{C}}(y,Y')\cup \ensuremath{\mathcal{C}}(z,Z')$.
The number of components chosen is at most $(2t-k)+(t+k-1)=3t-1=r-t$.
\begin{claim}
The components $\ensuremath{\mathcal{C}}(x,K\cup Y\cup Z)\cup \ensuremath{\mathcal{C}}(y,Y')\cup \ensuremath{\mathcal{C}}(z,Z')$ cover each vertex.
\end{claim}
\begin{proof}
Suppose that a vertex $w$ is not covered.
Then $\bigl(\ensuremath{\mathrm{Col}}(xw)\cup\ensuremath{\mathrm{Col}}(yw)\cup\ensuremath{\mathrm{Col}}(zw)\bigr)\cap K=\emptyset$.
Similarly, $\ensuremath{\mathrm{Col}}(xw)\cap Y=\emptyset$ and $\ensuremath{\mathrm{Col}}(yw)\cap Y'=\emptyset$.
We claim that also $\ensuremath{\mathrm{Col}}(zw)\cap Y=\emptyset$. Indeed, as $Y\subseteq \ensuremath{\mathrm{Col}}(xz)$, if $zw$ had a color from $Y$, then $xw$
would also have that color (since the coloring is transitive), a contradiction.
By the same reasoning, $\ensuremath{\mathrm{Col}}(xw)\cap Z=\emptyset$, $\;\ensuremath{\mathrm{Col}}(yw)\cap Z=\emptyset$ and $\ensuremath{\mathrm{Col}}(zw)\cap Z'=\emptyset$.
As a consequence, $\ensuremath{\mathrm{Col}}(xw)\subseteq X\cup S$, $\;\ensuremath{\mathrm{Col}}(yw)\subseteq X\cup Y''\cup S$ and $\ensuremath{\mathrm{Col}}(zw)\subseteq X\cup Z''\cup S$.
Next we claim that the colors in $X$ can occur altogether (counting with multiplicity) at most $t$ times on the edges $xw, yw$ and $zw$.
Let $c\in X$ be a color. If it occurs more than once on edges $xw, yw$ and $zw$, then it is in $\ensuremath{\mathrm{Col}}(yw)\cap\ensuremath{\mathrm{Col}}(zw)$ but $c\not\in\ensuremath{\mathrm{Col}}(xw)$. To see this, note that if $c\in\ensuremath{\mathrm{Col}}(xw)\cap\ensuremath{\mathrm{Col}}(yw)$, then $c\in\ensuremath{\mathrm{Col}}(xy)$ contradicting $X\cap\ensuremath{\mathrm{Col}}(xy)=\emptyset$; similarly $c\not\in\ensuremath{\mathrm{Col}}(xw)\cap\ensuremath{\mathrm{Col}}(zw)$.
By the choice of $k$, $|\ensuremath{\mathrm{Col}}(yw)\cap\ensuremath{\mathrm{Col}}(zw)|\le k$. Hence the colors in $X$ occur at most $|X|+k\leq t$ times on the edges $xw, yw$ and $zw$.
Each color in $S$ can only occur once on $xw, yw$ and $zw$, since by color-transitivity, a color occurring on at least two of the edges $xw, yw$ and $zw$ would also occur on one of the edges $xy, yz$ and $zx$, and that would contradict the definition of $S$.
Hence counting the colors of the edges $xw, yw$ and $zw$:
$3t \le |S|+|Z''|+|Y''|+t=|S|+(|Y|+|Z|-(|Y'|+|Z'|))+t=(4t-1-(3t-2k))+(2t-2k-(t+k-1))+t=(t+2k-1)+(t-3k+1)+t=3t-k$, which is a contradiction.
\end{proof}
\par \textbf{Case 2: } $3k>t$.
If $|X|+|Y|+|Z|= 3t-3k \geq 2k-1$, then choose $X'\subseteq X$, $Y'\subseteq Y$ and $Z'\subseteq Z$ so that $|X'|+|Y'|+|Z'|= 2k-1$.
If $|X|+|Y|+|Z|= 3t-3k < 2k-1$, then let $X'=X$, $Y'=Y$ and $Z'=Z$.
Take the following monochromatic components:
$\ensuremath{\mathcal{C}}(x,K\cup X'\cup Y\cup Z)\cup \ensuremath{\mathcal{C}}(y,Y')\cup \ensuremath{\mathcal{C}}(z,X\cup Z')$.
The number of components chosen is at most
$|K|+|X|+|Y|+|Z|+|X'|+|Y'|+|Z'|\leq k+3(t-k)+2k-1=3t-1= r-t$.
We claim that the components chosen cover each vertex.
Suppose that there is a vertex $w$ which is not covered.
Similarly to the previous case, it is easy to prove that the colors of $xw, yw$ and $zw$
are all from $S\cup X''\cup Y''\cup Z''$, and each color is used at most once altogether on these three edges. Hence $3t\leq |S|+|X''|+|Y''|+|Z''|$.
If $3t-3k\geq 2k-1$, then $3t\leq |S|+|X''|+|Y''|+|Z''|=4t-1-(|K|+|X'|+|Y'|+|Z'|)=4t-1-(k+2k-1)=4t-3k<3t$ since $t<3k$. This is a contradiction.
If $3t-3k < 2k-1$, then $3t\leq |S\cup X''\cup Y''\cup Z''|=|S|=4t-1-(3t-2k)=t+2k-1$. But this implies $2t\leq 2k-1$, hence $k>t$, which contradicts the assumption that each edge has exactly $t$ colors.
This finishes the proof of Theorem \ref{thm:t-metszo_eros}.
\end{proof}
\begin{remark}
We think that with a more diversified case analysis, Theorem \ref{thm:t-metszo_eros} can be extended to the case $t\ge r/5$. Note however, that the case $t=r/6$ would include the first unsolved case of Ryser's conjecture for intersecting hypergraphs.
\end{remark}
\section{Covering large fraction by few monochromatic components}
In this section, we give a sharp bound for the ratio of vertices that can be
covered by $r-1$ monochromatic components of pairwise different colors in an
$r$-edge colored complete graph. By Remark \ref{remark_multi2}, we can assume that the monochromatic components of the graph are cliques, since in
the color-transitive closure of a graph, the monochromatic components have the same vertex sets as in the original graph.
\begin{theorem}\label{thm:cover_diff}
Let $G$ be a multi $r$-edge-colored complete graph on $n$ vertices.
Then at least
$\big(1-\frac{r-2}{(r-1)^2}\big)\cdot n$ vertices of
$G$ can be covered by
$r-1$ monochromatic components of pairwise different colors, and this bound
is sharp for infinitely many values of $r$. Moreover, the $r-1$ monochromatic components can be chosen so that their intersection is nonempty.
\end{theorem}
Applying the construction of Gy\'arf\'as (Definition
\ref{def:Gyarfas_constr}), we get the following equivalent statement for hypergraphs.
\begin{theorem}\label{thm:cover_diff_hyp}
Let $\ensuremath{\mathcal{H}}$ be an $r$-partite $r$-uniform intersecting hypergraph.
Then at least
$\big(1-\frac{r-2}{(r-1)^2}\big)\cdot |E(\ensuremath{\mathcal{H}})|$ hyperedges of
$\ensuremath{\mathcal{H}}$ can be covered by a multi-colored set of size
$r-1$, and this bound
is sharp for infinitely many values of $r$. Moreover, the cover can be
chosen so that it is a subset of some hyperedge of $\ensuremath{\mathcal{H}}$.
\end{theorem}
The following strengthening of Ryser's conjecture was phrased by Aharoni et al. \cite[Conjecture 3.1]{ABW}: ``In an intersecting $r$-partite $r$-uniform hypergraph $\ensuremath{\mathcal{H}}$, there exists a class of size $r-1$ or less, or a
cover of the form $e- \{x\}$ for some $e \in E$ and $x \in e$.''
This conjecture was disproved in \cite{FHMW}. Note however, that by Theorem
\ref{thm:cover_diff_hyp}, if we require the cover to be multi-colored,
then additionally requiring
it to be a subset of a hyperedge does not decrease the number of
coverable hyperedges in the worst case.
We call the reader's attention to the fact that, although our result is sharp
for infinitely many values of $r$, in all our examples showing sharpness every
class has exactly $r-1$ vertices, thus they are far from exhibiting a
counterexample to Ryser's conjecture.
\begin{proof}[Proof of Theorem \ref{thm:cover_diff}]
We call an edge-coloring of $G$ \emph{spanning} if for every color $c$ and vertex $u$ there is an edge $uv$ of $G$ such that $c\in\ensuremath{\mathrm{Col}}(uv)$.
If the edge-coloring of $G$ is not spanning, then we can cover all the vertices of $G$ by $r-1$ monochromatic components of pairwise different colors. Indeed, if there is a vertex $v$ and a color $i$ such that no edge incident to $v$ has color $i$, then $\ensuremath{\mathcal{C}}(v,[r]-\{i\})$ covers the vertices of $G$.
Now suppose that the coloring of $G$ is spanning. For $r=2$ we can cover the vertex set by one monochromatic component
by a well-known folklore observation, so we
may assume $r\ge 3$.
Let the number of monochromatic components of color $i$ be $k_i$. Let us
denote the set of monochromatic components of color $i$ by
$\mathcal{C}_i$. We may suppose
that $k_1\ge k_2\ge\ldots\ge k_r\ge 2$, otherwise (if $k_r=1$) we are
done. In the following proof, we will think of monochromatic components as vertex sets, hence when we write $C\in \mathcal{C}_i$, we mean that $C$ is the vertex set of a monochromatic component of color $i$.
\par \textbf{Case 1: } $k_1\ge r-1$.
We have
\begin{equation} \label{eq:komp_kul}
\sum_{C\in \mathcal{C}_1,\; C'\in \mathcal{C}_r} |C-C'|=(k_r-1)\cdot n,
\end{equation}
since each vertex occurs in exactly one component of color $r$ and one
component of color 1. Hence each vertex is counted $k_r-1$ times for the
$k_r-1$ components of color $r$ that do not contain it.
From (\ref{eq:komp_kul}) it follows that among the $k_1\cdot k_r$ sets
$\{C-C': C\in \mathcal{C}_1,C'\in\mathcal{C}_r\}$,
there is one which has size at most
$\frac{k_r-1}{k_1\cdot k_r}\cdot n$.
Let $C_1-C'_r$ be such a set with minimum cardinality. As $k_1\ge k_r$ we have
$\frac{k_r-1}{k_r}\le\frac{k_1-1}{k_1}$, so
$\frac{k_r-1}{k_1\cdot k_r}\cdot n\le \frac{k_1-1}{k_1^2}\cdot n$.
Using $2\le r-1\le k_1$ we also have
$\frac{k_1-1}{k_1^2}\le \frac{r-2}{(r-1)^2}$, so
$\frac{k_r-1}{k_1\cdot k_r}\cdot n\le \frac{r-2}{(r-1)^2}\cdot n$.
We claim that $C_1\cap C'_r\ne\emptyset$. Indeed, take a vertex $x\in C_1$. If
$C_1\cap C'_r=\emptyset$, then $|C_1 - \ensuremath{\mathcal{C}}(x,\{r\})|<|C_1|=|C_1 - C'_r|$ which
contradicts the minimality of $C_1-C'_r$. Thus we can choose a vertex $x$ in
$C_1\cap C'_r$. Take $\ensuremath{\mathcal{C}}(x,[r]-\{1\})$. These components cover each
vertex outside $C_1-C'_r$, hence at least $(1-\frac{r-2}{(r-1)^2})\cdot n$
vertices.
\par \textbf{Case 2: } $k_1\le r-1$ (i.e., $k_i\le r-1$ for all $i$).
Notice that Case 1 and Case 2 overlap. However, this overlapping categorization will be convenient when examining sharpness.
For a vertex $v$ and a color $i\in [r]$, let $d_i(v)=|\{u\in V(G): \ensuremath{\mathrm{Col}}(uv)=\{i\}\}|$, i.e., the number of neighbors of $v$ that are connected to $v$ by an edge having only color $i$.
It is enough to show that there exists $v\in V$ and $i \in \{1,\dots , r\}$ such that
$d_i(v)\leq \frac{r-2}{(r-1)^2}\cdot n$.
Indeed, in this case $\ensuremath{\mathcal{C}}(v,[r]-\{i\})$ cover each vertex except those that are connected to $v$ by an edge of
unique color $i$, that is, at most $\frac{r-2}{(r-1)^2}\cdot n$ vertices are uncovered.
Let $m_i=|\{uv\in E(G): \ensuremath{\mathrm{Col}}(uv)=\{i\}\}|$,
and $M_i=|\{uv\in E(G): i\in \ensuremath{\mathrm{Col}}(uv)\}|$.
Since
$\sum_{v\in V}d_i(v)=2m_i$, it is enough to show that there exists a color
$i$ such that $m_i\leq \frac{r-2}{2(r-1)^2}\cdot n^2$.
For this, it is enough to show that $\sum_{i=1}^r m_i\leq \frac{r(r-2)}{2(r-1)^2}\cdot n^2$.
We have $\sum_{i=1}^r m_i = \binom{n}{2} -t$ where $t$ denotes the number of edges having multiple colors.
It is not hard to see that
$t\geq \frac{1}{r-1}\cdot \big[\sum_{i=1}^r M_i-\binom{n}{2}\big]$, since each
edge has at most $r$ colors.
\begin{claim} \label{cl:M_i bound}
If $\ell=k_i\le r-1$, then $M_i\geq \frac{n^2}{2\ell}-\frac{n}{2}
\geq \frac{n^2}{2(r-1)}-\frac{n}{2}$.
\end{claim}
\begin{proof}
Let the cardinalities of the components of color $i$ be $\gamma_1,\dots,
\gamma_\ell$.
Then $M_i=\binom{\gamma_1}{2} + \dots + \binom{\gamma_\ell}{2}=
\frac{\gamma_1^2+ \dots +\gamma_\ell^2}{2}-\frac{\gamma_1+ \dots +\gamma_\ell}{2}=
\frac{\gamma_1^2+ \dots +\gamma_\ell^2}{2}-\frac{n}{2}$.
Now it is enough to show that $\frac{\gamma_1^2+ \dots +\gamma_\ell^2}{2}\geq
\frac{n^2}{2\ell}$
but this follows from the Arithmetic Mean--Quadratic Mean Inequality.
\end{proof}
Using the claim, we get that
$t\geq \frac{1}{r-1}\cdot \big[\sum_{i=1}^r M_i-\binom{n}{
2}\big]\geq \frac{1}{r-1}\cdot \big[\frac{r(n^2-(r-1)n)}{2(r-1)}-\binom{n}{2}\big]=\frac{rn^2-r(r-1)n-(r-1)n^2+(r-1)n}{2(r-1)^2}=
\frac{n^2}{2(r-1)^2}-\frac{n}{2}$.
So $\sum_{i=1}^r m_i= \binom{n}{2}-t\leq \binom{n}{2}-\frac{n^2}{2(r-1)^2}+\frac{n}{2}=
\frac{(r-1)^2 n^2-(r-1)^2 n-n^2+(r-1)^2 n}{2(r-1)^2}=
\frac{r(r-2)n^2}{2(r-1)^2}$.
For the proof of sharpness see Theorem \ref{thm:cover_diff_sharp}.
\end{proof}
\subsection{Characterization of sharp examples}
In this subsection we characterize the sharp examples for Theorem \ref{thm:cover_diff}.
For this, we will need the definition of an affine plane of order $r-1$.
\begin{defn}\label{def:aff_plane}
An incidence structure $\mathcal{A}=(\mathcal{P},\mathcal{L})$, where the elements of $\mathcal{P}$ are referred to as the points, and the elements of $\mathcal{L}$ are referred to as the lines is called an \emph{affine plane of order $r-1$} if the following five conditions hold.
\begin{description}
\item[(i)] Every pair of points are connected by exactly one line.
\item[(ii)] For each point $x$ and line $L$ such that $x\notin L$, there exists exactly one line $L'$ such that $x\in L'$, but $L'$ is disjoint from $L$.
\item[(iii)] Each line contains at least 2 points.
\item[(iv)] Each point is incident with at least 3 lines.
\item[(v)] The maximum number of pairwise parallel lines is $r-1$.
\end{description}
\end{defn}
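For orientation (a small example, not needed for the proofs), the affine plane of order $2$ has $\mathcal{P}=\{1,2,3,4\}$ and $\mathcal{L}$ consisting of all six $2$-element subsets, which split into the three parallel classes
\[\big\{\{1,2\},\{3,4\}\big\},\qquad \big\{\{1,3\},\{2,4\}\big\},\qquad \big\{\{1,4\},\{2,3\}\big\},\]
and conditions (i)--(v) are immediate; in particular the maximum number of pairwise parallel lines is $2=r-1$ for $r=3$.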
We also need the following definition.
\begin{defn}
We call a multi edge-colored
complete graph $G$ the \emph{blowup of an affine plane} if there
exists an affine plane $\mathcal{A}=(\mathcal{P},\mathcal{L})$, a positive integer $b$ and a function $f:V(G)\rightarrow \mathcal{P}$ such that
\begin{itemize}
\item the lines of $\mathcal{A}$ are colored such that two lines have the same color if and only if they are disjoint (i.e., parallel),
\item for each point $p\in \mathcal{P}$, $|\{v\in V(G): f(v)=p\}|=b$
\item $i\in \ensuremath{\mathrm{Col}}(uv)$ if and only if $f(u)$ and $f(v)$ are incident to a common line of color $i$ (note that this includes the case if $f(u)=f(v)$).
\end{itemize}
\end{defn}
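For instance (an added illustration), the blowup of the affine plane of order $2$ with $b=1$ is the complete graph $K_{4}$ with its proper $3$-edge-coloring, where color $i$ consists of the two edges of the $i$-th parallel class. Here $n=4$, any two monochromatic components of different colors are edges sharing exactly one vertex, so $r-1=2$ such components cover exactly
\[3=\Big(1-\frac{r-2}{(r-1)^2}\Big)\cdot n\]
vertices, matching the bound of Theorem \ref{thm:cover_diff}.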
\begin{theorem}\label{thm:cover_diff_sharp}
For a multi $r$-edge-colored complete graph $G$ on $n$ vertices, the maximum number of vertices coverable by $r-1$ monochromatic components of pairwise different colors equals $\big(1-\frac{r-2}{(r-1)^2}\big)\cdot n$ if and only if $G$ is a blowup of an affine plane.
\end{theorem}
\begin{proof}
Suppose $G$ is a sharp example, i.e., no $r-1$ monochromatic components of
pairwise different colors can cover more than
$\big(1-\frac{r-2}{(r-1)^2}\big)\cdot n$ vertices and
$\big(1-\frac{r-2}{(r-1)^2}\big)\cdot n$ is an integer.
As noted at the beginning of the proof of Theorem \ref{thm:cover_diff}, if the edge-coloring of $G$ is not spanning or $r=2$, then all the vertices of $G$ can be covered by $r-1$ monochromatic components of pairwise different colors, hence in these cases there is no sharp example.
Now suppose that the coloring of $G$ is spanning, and $r\geq 3$. We examine the proof of Theorem \ref{thm:cover_diff} to see how the inequalities can be equalities.
In Case 1, $k_1=\dots = k_r=r-1$ for a sharp example, since otherwise $\frac{k_r-1}{k_1\cdot k_r}\cdot n$ would be strictly smaller than $\frac{r-2}{(r-1)^2}\cdot n$.
Also in Case 2, $k_1=\dots = k_r=r-1$ for a sharp example,
since we need $M_i= \frac{n^2}{2(r-1)}-\frac{n}{2}$ for each $i$. But if $k_i<r-1$ for some $i$, then $M_i\geq \frac{n^2}{2 k_i}-\frac{n}{2} > \frac{n^2}{2(r-1)}-\frac{n}{2}$.
Hence a sharp example is necessarily in the intersection of Case 1 and Case 2,
and the bounds in both cases are sharp for it.
We claim that the intersection of any two components of different colors must
have cardinality exactly $\frac{n}{(r-1)^2}$ (and consequently,
the cardinality of any monochromatic component is exactly
$\frac{n}{r-1}$). Let $i,j\in [r]$ be two different
colors. We know $k_i=k_j=r-1$ and by (\ref{eq:komp_kul})
\begin{equation}
\sum_{C_i\in \mathcal{C}_i,\; C_j\in \mathcal{C}_j} |C_i-C_j|=(r-2)\cdot n.
\end{equation}
Choose $C'_i\in \mathcal{C}_i$ and $C'_j\in \mathcal{C}_j$ such that
$s=|C'_i-C'_j|$ is minimum and recall from the proof of Case 1 that in this
case $C'_i\cap C'_j\ne\emptyset$. If $s<\frac{r-2}{(r-1)^2}n$, then for any
$x\in C'_i\cap C'_j$, the components $\ensuremath{\mathcal{C}}(x,[r]-\{i\})$ cover each
vertex outside $C'_i-C'_j$, hence strictly more than
$\big(1-\frac{r-2}{(r-1)^2}\big)\cdot n$ vertices but this contradicts the
assumption. Since $s$ is the minimum, it cannot be bigger than
the average, thus for any $C_i\in \mathcal{C}_i$ and $C_j\in \mathcal{C}_j$ we
have $|C_i-C_j|=\frac{r-2}{(r-1)^2}n$. Now take any $C_i\in \mathcal{C}_i$ and
$C_j,C'_j\in \mathcal{C}_j$. As $|C_i-C_j|=|C_i-C'_j|$, we also have $|C_i\cap
C_j|=|C_i\cap C'_j|$. By symmetry we also get $|C_i\cap C_j|=|C'_i\cap C_j|$
for any $C_j\in \mathcal{C}_j$ and $C_i,C'_i\in \mathcal{C}_i$, proving the
claim.
Moreover, since the inequality $t\geq \frac{1}{r-1}\cdot \big[\sum_{i=1}^r M_i-\binom{n}{2}\big]$ must hold with equality,
for each edge $uv\in E(G)$ either $|\ensuremath{\mathrm{Col}}(uv)|=1$ or $|\ensuremath{\mathrm{Col}}(uv)|=r$.
From this, the following useful property follows:
\begin{claim}\label{cl:inters_nice}
If $C_1\cap \dots \cap C_r\neq \emptyset$ where $C_1\in \mathcal{C}_1, \dots,
C_r\in\mathcal{C}_r$, then for arbitrary $1\leq i< j\leq r$,
we have $C_i\cap C_j=C_1\cap \dots \cap C_r$.
\end{claim}
\begin{proof}
If there were a vertex $x\in C_1\cap \dots \cap C_r$ and a vertex $y\in C_i\cap C_j- C_\ell$ for some $\ell$, then the edge $xy$ would have color $i$ and $j$ but not color $\ell$, which would contradict the fact that either $|\ensuremath{\mathrm{Col}}(xy)|=1$ or $|\ensuremath{\mathrm{Col}}(xy)|=r$.
\end{proof}
Now let us take the following incidence structure $\mathcal{A}$: Let the points of $\mathcal{A}$ be the nonempty intersections $C_1\cap \dots \cap C_r\neq \emptyset$, where $C_1\in \mathcal{C}_1, \dots, C_r\in\mathcal{C}_r$.
Let the lines of $\mathcal{A}$ be the monochromatic components of $G$. Let a point corresponding to $C_1\cap \dots \cap C_r\neq \emptyset$ be incident with the lines corresponding to $C_1, \dots , C_r$.
Since each vertex of $G$ is incident with edges of each color, this way each vertex of $G$ is mapped to a point of $\mathcal{A}$. Also, for a nonempty intersection, $C_1\cap \dots \cap C_r=C_1\cap C_2$. Since $|C_1\cap C_2|=\frac{n}{(r-1)^2}$, each point of $\mathcal{A}$ corresponds exactly to $\frac{n}{(r-1)^2}=:b$ vertices of $G$.
We claim that $\mathcal{A}$ is an affine plane of order $r-1$. Moreover, we claim that two lines are disjoint if and only if the corresponding monochromatic components have the same color. Note that if we prove these statements, it follows that $G$ is the blowup of an affine plane.
We have already proved that two components of $G$ of different colors have a nonempty intersection. On the other hand, two monochromatic components of the same color are disjoint by the definition of a component. Hence indeed two lines in $\mathcal{A}$ are disjoint if and only if the corresponding monochromatic components have the same color.
To prove that $\mathcal{A}$ is an affine plane of order $r-1$, we need to check the five conditions given in Definition \ref{def:aff_plane}.
(i)
We claim that the points corresponding to $C_1\cap \dots \cap C_r\neq \emptyset$ and $C'_1\cap \dots \cap C'_r\neq \emptyset$ where $C_1,C'_1\in \mathcal{C}_1, \dots, C_r,C'_r\in\mathcal{C}_r$ have at least one common monochromatic component. Indeed, take $x\in C_1\cap \dots \cap C_r$ and $y\in C'_1\cap \dots \cap C'_r$. Since $G$ is complete, $xy\in E(G)$. This edge has at least one color, hence $x$ and $y$ have a common monochromatic component.
Now we claim that these two points have at most one common monochromatic component. Indeed, by Claim \ref{cl:inters_nice}, if $C_i=C'_i$ and $C_j=C'_j$ for some $i\neq j$, then $C_1\cap \dots \cap C_r=C_i\cap C_j=C'_i\cap C'_j=C'_1\cap \dots \cap C'_r$.
(ii) Let $C$ be the monochromatic component of $G$ corresponding to the line $L$.
As we noted before, two monochromatic components in $G$ are disjoint if and only if they have the same color. Suppose that $C$ has color $i$. Let $C'$ be the component of color $i$ that contains $x$. The line corresponding to $C'$ satisfies the requirements of (ii).
(iii)
If there is a line containing only one point, let the monochromatic component of $G$ corresponding to the line be $C_i\in \mathcal{C}_i$ and the intersection corresponding to the point be $C_1\cap \dots \cap C_r\neq \emptyset$ where
$C_1\in \mathcal{C}_1,\dots , C_r\in \mathcal{C}_r$. From the fact that the line has only one point, $C_i\subseteq C_1\cap \dots \cap C_{i-1}\cap C_{i+1}\cap \dots \cap C_r$.
But then $C_1,\dots ,C_{i-1},C_{i+1},\dots ,C_r$ cover all the vertices of $G$ since $G$ is complete. Thus, the example is not sharp.
(iv)
It can be seen from the definition that each point of $\mathcal{A}$ is incident with $r\geq 3$ lines.
(v) This follows from the fact that two lines are parallel if and only if they correspond to monochromatic components of the same color, and for each color, there are exactly $r-1$ monochromatic components.
With this, we have proved that any sharp example needs to be a blowup of an affine plane.
Now we prove that the blowup of an affine plane is always a sharp example.
We claim that $r-1$ monochromatic components of pairwise different colors cover at most $\big( 1-\frac{r-2}{(r-1)^2}\big)\cdot n= \big( 1-\frac{r-2}{(r-1)^2}\big) \cdot b(r-1)^2=b\cdot((r-1)^2-r+2)$ vertices.
Indeed, take the first component. This covers $b(r-1)$ vertices. The second component has a different color from the first, hence they have an intersection of size $\frac{n}{(r-1)^2}=b$. Hence the two components together cover at most $b(2(r-1)-1)$ vertices. And so on, each subsequent component needs to have an intersection of size at least $b$ with the union of the previous ones, hence altogether they cover at most $b((r-1)^2-r+2)$ vertices. We can also see that for covering exactly $b((r-1)^2-r+2)$ vertices, we need to take $r-1$ monochromatic components having a common intersection of $b$ points.
\end{proof}
\begin{remark}
In the case when $\big(1-\frac{r-2}{(r-1)^2}\big)\cdot n$ is not an integer, it would be reasonable to call the multi $r$-edge-colored graph $G$ sharp if the number of vertices coverable by $r-1$ monochromatic components of pairwise different colors is the minimum possible, i.e.,
$$\Bigl\lceil \big(1-\frac{r-2}{(r-1)^2}\big)\cdot n\Bigr\rceil.$$
We do not know the structure of the sharp examples in this sense.
\end{remark}
Recall the definition of a truncated projective plane:
\begin{defn}
Take a projective plane of order $r-1$. The \emph{truncated projective plane of order $r-1$} is the following hypergraph: Remove a point and the lines incident to it from the projective plane. Let the vertices of the hypergraph be the remaining points, and the hyperedges be the remaining lines.
\end{defn}
Note that this is an $r$-partite $r$-uniform hypergraph (each partite class consists of the surviving points of one removed line).
Truncated projective planes play an important role in the study of Ryser's
conjecture. They give a family of sharp examples. Moreover, the only other known family of extremals \cite{ABPSz} is also constructed using truncated projective planes.
In \cite{HNSz2} it is shown that the truncated
Fano plane is the main building block in the characterization of the sharp hypergraphs for Ryser's conjecture in the case $r=3$. In addition,
the near-extremal family recently constructed by Haxell and Scott \cite{HS} is also
based on truncated projective planes.
Note that if one switches the role of vertices and hyperedges, an affine plane becomes a truncated projective plane.
Hence Theorem \ref{thm:cover_diff_sharp} gives the following result for hypergraphs:
\begin{theorem}\label{thm:cover_diff_hyp_sharp}
Let $\ensuremath{\mathcal{H}}$ be an $r$-partite $r$-uniform intersecting hypergraph.
The maximum number of hyperedges coverable by a multi-colored set of
size $r-1$ is equal to
$\big(1-\frac{r-2}{(r-1)^2}\big)\cdot |E(\ensuremath{\mathcal{H}})|$ if and only if $\ensuremath{\mathcal{H}}$
can be obtained from a truncated projective plane by taking $b$
parallel copies of each hyperedge for some fixed integer $b$.
\end{theorem}
\section{Ryser's conjecture in the case
\texorpdfstring{$\Delta(\ensuremath{\mathcal{H}})=2$}{Delta(H) = 2}}
For $r=2$, Ryser's conjecture follows from K\H onig's theorem. In this section, we prove Ryser's conjecture for the very special case $\Delta(\ensuremath{\mathcal{H}})=2$ and $r\geq 3$. We note that in this special case, the hypergraph does not even need to be $r$-partite for Ryser's bound to hold.
\begin{theorem}
Let $\ensuremath{\mathcal{H}}$ be an $r$-uniform hypergraph with $r\geq 3$ and $\Delta(\ensuremath{\mathcal{H}})=2$. Then $\tau(\ensuremath{\mathcal{H}})\leq (r-1)\cdot \nu(\ensuremath{\mathcal{H}})$.
\end{theorem}
\begin{proof}
Let the dual of a hypergraph $\ensuremath{\mathcal{H}}$ be the following hypergraph $\ensuremath{\mathcal{H}}^*$, with multiple hyperedges possible:
$$V(\ensuremath{\mathcal{H}}^*)= E(\ensuremath{\mathcal{H}})$$
$$E(\ensuremath{\mathcal{H}}^*)= \{\{ e\in E(\ensuremath{\mathcal{H}}): e\ni v\}: v\in V(\ensuremath{\mathcal{H}})\} \quad \textrm{taken as a multiset}.$$
We have $\ensuremath{\mathcal{H}}^{**}=\ensuremath{\mathcal{H}}$, hence vertices of $\ensuremath{\mathcal{H}}$ correspond exactly to hyperedges in $\mathcal{H^*}$ and hyperedges of $\ensuremath{\mathcal{H}}$ correspond exactly to vertices in $\mathcal{H^*}$.
Note that a set of vertices $T\subseteq V(\ensuremath{\mathcal{H}})$ covers the hyperedges of $\ensuremath{\mathcal{H}}$
if and only if the corresponding hyperedge set in $\mathcal{H^*}$ covers the
vertices of $\mathcal{H^*}$, so $\tau(\ensuremath{\mathcal{H}})=\varrho(\ensuremath{\mathcal{H}}^*)$.
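(As a small illustration of this duality, consider the hypergraph $\ensuremath{\mathcal{H}}$ with $V(\ensuremath{\mathcal{H}})=\{a,b,c\}$ and hyperedges $e_1=\{a,b\}$, $e_2=\{b,c\}$. Then $V(\ensuremath{\mathcal{H}}^*)=\{e_1,e_2\}$ and $E(\ensuremath{\mathcal{H}}^*)=\{\{e_1\},\{e_1,e_2\},\{e_2\}\}$, coming from the vertices $a,b,c$, respectively. The vertex cover $\{b\}$ of $\ensuremath{\mathcal{H}}$ corresponds to the hyperedge $\{e_1,e_2\}$ of $\ensuremath{\mathcal{H}}^*$, which covers all vertices of $\ensuremath{\mathcal{H}}^*$, so indeed $\tau(\ensuremath{\mathcal{H}})=\varrho(\ensuremath{\mathcal{H}}^*)=1$ here.)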
The degree of a vertex of $\mathcal{H^*}$ is the cardinality of the
corresponding hyperedge of $\ensuremath{\mathcal{H}}$. Hence $\ensuremath{\mathcal{H}}$ is $r$-uniform if and only if
$\mathcal{H^*}$ is $r$-regular, consequently $\Delta(\ensuremath{\mathcal{H}}^*)=r$.
By definition, $\alpha'(\ensuremath{\mathcal{H}}^*)=\nu(\ensuremath{\mathcal{H}})$.
If $\Delta(\ensuremath{\mathcal{H}})=2$, then $\mathcal{H^*}$ is a hypergraph with hyperedge cardinalities one or two, and the statement of the theorem is equivalent to $\varrho(\mathcal{H^*})\leq (\Delta(\mathcal{H^*})-1) \alpha'(\mathcal{H^*})$.
We can suppose that there are no hyperedges of cardinality one in $\mathcal{H^*}$. Indeed, if a hyperedge of cardinality one is contained by a hyperedge of cardinality two, then we can remove the hyperedge of cardinality one. This does not change the value of $\alpha'$, and $\Delta=\Delta(\ensuremath{\mathcal{H}}^*)$ can only decrease.
Moreover, the value of $\varrho$ can only increase by removing a hyperedge, since a covering hyperedge set of the modified hypergraph is also a covering hyperedge set in the original hypergraph.
Hence if the statement is true for the hypergraph after removing a hyperedge, then the statement is also true for the original hypergraph.
If a hyperedge of cardinality one is not contained by a hyperedge of cardinality two, then this hyperedge (or a parallel copy of it) needs to occur in each hyperedge cover. Hence, removing its vertex together with the cardinality-one hyperedges incident to it, $\varrho$ decreases by one. On the other hand, $\alpha'$ also decreases by one and $\Delta$ can only decrease. Hence if the statement is true for the modified hypergraph, it is also true for the original hypergraph.
The following lemma proves the theorem if the cardinality two hyperedges form a graph which is not a cycle.
\begin{lemma}
If $G$ is a graph which is not a cycle, then $\varrho(G)\leq (\Delta(G)-1)\cdot \alpha(G)$.
\end{lemma}
\begin{proof}
We will denote by $G[X]$ the subgraph of $G$ induced by the vertex set $X$. For a set of vertices $U\subseteq V$, we denote by $\Gamma(U)$ the set of neighbors of $U$.
The statement is easily seen to be true for complete graphs with at least four vertices, hence we can suppose that $G$ is not complete.
Let $n=|V(G)|$.
Since $G$ is neither complete nor a cycle, Brooks' theorem shows that $G$ is colorable by $\Delta(G)$ colors. As a consequence, $\alpha(G)\geq \frac{n}{\Delta}$.
Take an independent vertex set $I\subseteq V(G)$ of maximum size, and take a maximum matching $M$ in $G[V(G)-I]$.
Let $X=V(M)$ and $Y=V-I-X$. Since $M$ is a maximum matching in $G[V(G)-I]$, it follows that $Y$ is an independent set.
Hence $G[Y\cup I]$ is a bipartite graph.
We show that in $G[Y\cup I]$ there is a matching covering $Y$.
Suppose for contradiction that the condition of Hall's theorem is not satisfied in $G[Y\cup I]$, i.e., there exists $U \subseteq Y$ such that $|\Gamma(U)\cap I| < |U|$. Then $(I- \Gamma(U)) \cup U$ is an independent set, whose size is greater than $|I|$, which contradicts the choice of $I$.
Now take the following set of edges: the edges of $M$, the edges of a matching covering $Y$ in $G[Y\cup I]$, and for each thus uncovered vertex in $I$, an edge covering it. This is an edge cover of $G$ of cardinality at most $|M|+|Y|+(|I|-|Y|)=|M|+|I|$. Thus $\varrho\leq |M|+|I|$.
We show that $|M|+|I|\leq (\Delta(G)-1)\alpha(G)$. Indeed, since $|X|\leq n-|I|= n-\alpha(G)\leq n(1-1/\Delta)$, we have $|M|\leq \lfloor \frac{n(1-1/\Delta)}{2}\rfloor=\lfloor (\Delta-1)\frac{n}{2\Delta}\rfloor \leq \lfloor \frac{(\Delta-1)\alpha}{2} \rfloor$. Thus $\varrho\leq |M|+|I|\leq \lfloor \frac{(\Delta-1)\alpha}{2} \rfloor + \alpha \leq (\Delta-1)\alpha$.
\end{proof}
Now the only remaining case is when the cardinality-two hyperedges of $\ensuremath{\mathcal{H}}^*$ form a cycle, that is, $\ensuremath{\mathcal{H}}^*$ is a cycle with some additional cardinality-one hyperedges. Suppose that the cycle has $l$ vertices, plus there are $k$ isolated vertices. Then the vertex set of $\ensuremath{\mathcal{H}}^*$ can be covered by $\lceil \frac{l}{2} \rceil + k$ hyperedges, and $\alpha'(\ensuremath{\mathcal{H}}^*)=\lfloor \frac{l}{2} \rfloor + k$. Since $r=\Delta(\ensuremath{\mathcal{H}}^*)>2$, this means $\varrho(\ensuremath{\mathcal{H}}^*)\leq (\Delta(\ensuremath{\mathcal{H}}^*)-1) \alpha'(\ensuremath{\mathcal{H}}^*)$.
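(As a quick sanity check of the last estimate: if the cycle has $l=5$ vertices and $k=1$, then the vertex set of $\ensuremath{\mathcal{H}}^*$ can be covered by $\lceil 5/2 \rceil+1=4$ hyperedges, while $\alpha'(\ensuremath{\mathcal{H}}^*)=\lfloor 5/2 \rfloor+1=3$, and indeed $4\leq (r-1)\cdot 3$ for every $r\geq 3$.)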
\end{proof}
\end{document}
\begin{document}
\title{Fibred torti-rational knots}
\begin{abstract}
A torti-rational knot,
denoted by $K(2\alpha,\beta|r)$,
is a knot obtained from the $2$-bridge link $B(2\alpha,\beta)$
by applying Dehn twists an arbitrary number of times, $r$,
along one component of $B(2\alpha,\beta)$.
We determine the genus of $K(2\alpha,\beta|r)$ and settle
the question of when $K(2\alpha,\beta|r)$ is fibred.
In most cases, the Alexander polynomials
determine the genus and fibredness of these knots.
We develop both
algebraic and geometric techniques to describe the genus
and fibredness by means of continued fraction expansions
of $\beta/2\alpha$.
Then, we explicitly construct minimal genus Seifert surfaces.
As an application, we solve the same question for
the satellite knots of tunnel number one.
\end{abstract}
\keywords{Fibred knot, 2-bridge knot, satellite knot, tunnel number of knots,
genus of knots, Alexander polynomial}
\ccode{Mathematics Subject Classification 2000: 57M25, 57M27}
\section{Introduction}
A torti-rational knot \footnote[1]{
This naming is due to Lee Rudolph.}, denoted by $K(2\alpha,\beta|r)$,
is a knot obtained from the $2$-bridge link $B(2\alpha,\beta)$
by applying Dehn twists an arbitrary number of times, $r$,
along one component of $B(2\alpha,\beta)$.
(For the precise definition, see Section 2.)
Torti-rational knots have occasionally
appeared in the literature of knot theory.
For example, twist knots are
torti-rational knots. By \cite{KMS}, we know when
$K(2\alpha, \beta|r)$ is unknotted (see Proposition \ref{prop:8.6++}).
A torti-rational knot is a $g1$-$b1$ knot
(i.e., admits a genus-one bridge-one decomposition), and hence
has tunnel number one.
In 1991, Morimoto and Sakuma \cite{MS}
proved that for a satellite knot of tunnel number
one, the companion knot is a torus knot $T(p,q)$
and the pattern knot is a torti-rational knot
$K(2\alpha, \beta|pq)$, for some $p,q$, and $\alpha, \beta$.
Then,
Goda and Teragaito \cite{GT} determined which of such satellite knots
of tunnel number one are of genus one.
In this paper, we study torti-rational knots systematically and
completely determine the genus of
any torti-rational knot and settle the question of when it is fibred.
In fact, we prove the following:
{\bf Theorem A}. (Theorems \ref{thm:C} and \ref{thm:A})
{\it
Let $B(2\alpha, \beta)$ be an oriented $2$-bridge link,
with linking number $\ell$.
Let $K=K(2\alpha, \beta|r)$ be a torti-rational knot.
Suppose $\ell \ne 0$. \\
(1) The genus of $K$ is exactly half of the degree of
the Alexander polynomial $\Delta_K(t)$.\\
(2) $K$ is fibred if (and only if) $\Delta_K(t)$
is monic (i.e., the leading coefficient
is $\pm1$).
}
See
Theorems \ref{thm:8.4}
and
\ref{thm:5.1}
for a practical method for determination.
If $\ell = 0$, Theorem A does not hold true.
For this case, the genus and the characterization
of a fibred torti-rational
knot are stated as follows:
{\bf Theorem B.} (Theorems \ref{thm:D} and \ref{thm:B})
{\it
Suppose $\ell=0$.
Let
$[2c_1, 2c_2, \ldots, 2c_m]$
be the continued fraction of $\frac{\beta}{2\alpha}$.\\
(1) For any $r\neq 0$,
${\displaystyle
g(K(2\alpha, \beta|r))=
\dfrac{1}{2}\sum_{i:\ {\rm odd}} |c_i|
}$.\\
(2)
(a) If $|r|>1$, then $K(2\alpha, \beta|r)$ is not
fibred.
(b) Suppose $r=\pm 1$. Then
$K(2 \alpha, \beta|r)$ is fibred if and only if
$\frac{\beta}{2\alpha}$ has the continued fraction of
the following special form:
$\frac{\beta}{2\alpha}=
\pm[2 a_1, 2, 2a_2, 2, \ldots, 2a_p, \pm 2, -2a'_1, -2,
-2a'_2, -2, \ldots, -2a'_q]$,
where $2\alpha>\beta>-2\alpha$,
$a_i, a'_j>~0$ and $\sum_{i=1}^pa_i
=\sum_{j=1}^q a'_j$.
}
See Section 2 for our convention of continued fractions.
To prove these theorems, we construct explicitly a
minimal genus Seifert surface for
$K$ and determine whether or
not it is a fibre surface for $K$.
Proofs of these theorems will be given in Sections 8--11.
This work is a part of our project to determine
the genus and fibredness of double torus knots
(i.e., knots embedded in a standard closed surface of genus $2$).
See \cite{HirM} for a relevant work.
Double torus knots are classified into five types
(Type $(1,1), (1,2), (1,3), (2,3)$ and $(3,3)$).
In \cite{HirM}, we settled the problem for all double torus
knots of type $(1,1)$.
A $g1$-$b1$ knot can be presented as a double torus knot
of type either $(1, 2), (2, 2)$ or $(2,3)$. In this paper, we
settle the problem for the $g1$-$b1$ knots presented as of
type $(1,2)$.
As an application of our study,
we determine the genus and the fibredness problem
for the satellite knots of tunnel number one.
In fact, we show that a similar theorem to
Theorem A
holds true for satellite knots in a slightly wider class.
The precise statements can be found in Section 13.
Recently, Goda, Hayashi and Song \cite{GHS}
study torti-rational knots
with a different motivation.
Their approach is completely different from ours.
This paper is organized as follows.
In Section 2, we give precise statements of our main
theorems (Theorems \ref{thm:C} - \ref{thm:B}).
In Section 3, we first introduce several
notions needed in this paper,
such as {\it graphs of continued fractions},
{\it dual graphs}. Then we prove that for our
study of $K(2\alpha, \beta|r)$, we may assume
$\ell\ge0$ and $r>0$, where $\ell$ is the linking number
of $B(2\alpha, \beta)$. This restriction simplifies considerably
the proofs of our main theorems.
At the end of Section 3, we construct a
spanning disk of a nice form
for one component of the $2$-bridge link.
In Sections 4 and 5,
we study the Alexander
polynomial of various knots:
In Section 4, we determine the Alexander polynomials
of $K(2\alpha, \beta|r)$.
In Section 5,
we prove one
subtle property of the (2-variable)
Alexander polynomial of
$B(2\alpha,\beta)$.
The determination of the degree of the Alexander
polynomials of $K(2\alpha, \beta|r)$ depends on
this property.
Section 6 is devoted to
characterizing the monic
Alexander polynomial of a knot $K(2\alpha,\beta|r)$:
First, we deal with knots $K(2\alpha, \beta|r)$
for the case $\ell >0$, and
characterize the monic Alexander polynomials
in terms of a continued fraction of $\beta/2\alpha$
using the formulae given in Section 5.
In particular, we give an equivalent algebraic condition
for Theorem \ref{thm:A} (Theorem \ref{thm:8.4}).
However, for the case $\ell=0$, the monic Alexander
polynomials of $K(2\alpha, \beta|r)$ cannot be characterized
by the continued fractions.
This case is considered in the rest of Section 6,
and the characterization will be done
using a geometric interpretation of the Alexander polynomials
of $K(2\alpha, \beta|r)$.
In Section 7, we construct explicitly
a Seifert surface for $K(2\alpha,\beta|r)$,
which in most cases is of minimal genus.
In Section 8, we prove Theorem \ref{thm:C}.
In this case, some of the surfaces constructed in Section 7
are not of minimal genus, but we obtain minimal
genus surfaces after explicitly compressing them.
In Section 9, we prove Theorem \ref{thm:A}.
In Sections 10 and 11, we prove Theorems \ref{thm:D}
and \ref{thm:B}.
Various examples that illustrate our main theorems are
discussed in Section 12.
In Section 13, we consider the satellite knot with fibred
companion and
$K(2\alpha,\beta|r)$, $r \ne 0$, as a pattern, and
prove an analogous theorem to Theorem A.
In the final section, Section 14,
we determine the genus one knots in our
family of knots $K(2\alpha,\beta|r)$.
In particular, we find satellite knots among them, and
hence give a negative answer to the problem posed in
\cite{AM}.
After this paper was completed,
D. Silver pointed out that Theorem 5.5 in this paper
makes it possible to prove Theorem A algebraically
using Brown's graphs in \cite{brown}
(without explicit construction of minimal genus Seifert
surfaces).
However, Brown's method does not work for proving Theorem B.
\section{Statements of main theorems}
We begin with an (even) continued fraction
of a rational number $\frac{\beta}{2 \alpha},
0<\beta<2 \alpha$, and gcd$(2\alpha, \beta)=1$.
The (unique) continued fraction of
$$\frac{\beta}{2 \alpha}
=\cfrac{1}{2c_1
-\cfrac{1}{2c_2
-\cfrac{1}{2c_3
-\cfrac{1}{\ddots
-\cfrac{1}{2c_{m-1}
-\cfrac{1}{2c_m}}}}}}, $$
where $c_i \neq 0$, is denoted by
$\frac{\beta}{2\alpha}=[2c_1, 2c_2, \cdots, 2 c_m]$
or $[[c_1, c_2, \cdots, c_m]]$.
Note that $m$ is odd.
Throughout this paper, we consider only even continued fraction
expansions, and hence omit the word \lq even\rq.
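As a quick illustration of this convention (it is the example used in Figure 2.1 below), the expansion $[2, 2, -2, -2, 2]=[[1,1,-1,-1,1]]$ evaluates, working from the innermost level outwards, to
$$\cfrac{1}{2-\cfrac{1}{2-\cfrac{1}{-2-\cfrac{1}{-2-\cfrac{1}{2}}}}}
=\cfrac{1}{2-\cfrac{1}{2-\cfrac{1}{-2+\frac{2}{5}}}}
=\cfrac{1}{2-\cfrac{1}{2+\frac{5}{8}}}
=\cfrac{1}{2-\frac{8}{21}}=\frac{21}{34},$$
so here $m=5$, $\beta=21$ and $2\alpha=34$.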
Now, using the continued fraction of
$\frac{\beta}{2 \alpha}$,
we can obtain a diagram of
an oriented $2$-bridge link
$B(2\alpha, \beta)$ as
follows.
\begin{minipage}{8cm}
Let
$\sigma_1=$ \sone and $\sigma_2=$\stwo
be Artin's generators of the
$3$-braid group. First construct a $3$-braid
$\gamma=\sigma_2^{2c_1}
\sigma_1^{2c_2}
\sigma_2^{2c_3}\cdots \sigma_2^{2c_m}$.
Close $\gamma$ by joining the first and second strings
(at the both ends)
and then join the top and bottom of the third string by
a simple arc as in Figure 2.1.
We give downward orientation to the second and third
strings.
Figure 2.1 shows the (oriented) $2$-bridge link obtained from
the continued fraction $\frac{21}{34}=
[2, 2, -2, -2, 2]=[[1,1,-1,-1,1]]$.
Now an oriented $2$-bridge link $B(2\alpha, \beta)$
consists of two unknotted knots $K_1$ and $K_2$,
where $K_2$ is formed from the third
and fourth strings.
\end{minipage}
\hfil
\begin{minipage}{2cm}
\Abb
\end{minipage}
{\bf Note.}
Our convention for the orientation
of a $2$-bridge link is not standard, but is used
for the convenience in utilizing the $2$-variable
Alexander polynomials. (Usually we reverse the orientation
of one component so that the $2$-bridge link is fibred if
and only if all the entries of the even continued fraction
are~$\pm~2$.)
Since $K_2$ is unknotted, $K_1$ can be considered as
a knot in an unknotted solid torus $V$ and $K_2$
is a meridian of $V$.
Then by applying Dehn twists along
$K_2$ an arbitrary number of times, say $r$,
we obtain a new knot $K$ from $K_1$.
We denote this knot $K$ by $K(2\alpha, \beta|r)$,
or simply $K(r)$.
More precisely, one Dehn twist along $K_2$
is the operation that replaces the part of $K_1$ in a cylinder
by the braid
$(\sigma_1\sigma_2\cdots\sigma_{k-1})^{k}$,
where $k$ is the
wrapping number. See Figure 2.2.
(Since $B(2\alpha, \beta)$ is symmetric, $K_1$ and $K_2$
can be interchanged, and hence this notation is justified.)
\Acc
We note that if $r=0$, then $K(2\alpha, \beta|r)$
is unknotted for any $\alpha, \beta$, and henceforth we
assume
$r\neq 0$ unless otherwise specified.
Now, given an oriented $2$-bridge link $B(2\alpha, \beta)$,
let $\ell=\mbox{$\ell k$}(K_1, K_2)$ be the linking number between
$K_1$ and $K_2$
which, for simplicity, is denoted by $\mbox{$\ell k$} B(2\alpha, \beta)$.
Our main theorems in this paper are the following four theorems:
\begin{thm}\label{thm:C}
Suppose $\ell\neq 0$.
Then the genus of $K=K(2\alpha, \beta|r)$ is half of
the degree of its Alexander polynomial $\Delta_{K}(t)$.
Namely we have
$g(K)=
\dfrac{1}{2}{\rm deg}\Delta_{K}(t)$.
\end{thm}
\begin{thm} \label{thm:A}
Suppose $\ell\neq0$. Then
$K=K(2\alpha, \beta|r)$
is a fibred knot if (and only if) $\Delta_K(t)$
is monic, i.e., $\Delta_K(0)=\pm 1$.
\end{thm}
Theorem \ref{thm:A} is divided into two parts:
Theorem \ref{thm:8.4} is the algebraic part,
where we determine when
$\Delta_K(t)$ is monic in terms of continued fractions,
and Theorem \ref{thm:5.1} is the geometric part,
where we show the fibredness,
by actually constructing fibre surfaces using the
continued fractions.
\begin{thm}\label{thm:D}
Suppose $\ell=0$.
Let
$[2c_1, 2c_2, \ldots, 2c_m]$
be the continued fraction of $\frac{\beta}{2\alpha}$.
Then for any $r\neq 0$,
${\displaystyle
g(K(2\alpha, \beta|r))=
\dfrac{1}{2}\sum_{i:\ {\rm odd}} |c_i|
}$.
\end{thm}
\begin{thm}\label{thm:B}
Suppose $\ell=0$.
(a) If $|r|>1$, then $K(2\alpha, \beta|r)$ is not
fibred.
(b) Suppose $r=\pm 1$. Then
$K(2 \alpha, \beta|r)$ is fibred if and only if
$\frac{\beta}{2\alpha}$ has the continued fraction of
the following special form:\\
$\frac{\beta}{2\alpha}=
\pm[2 a_1, 2, 2a_2, 2, \ldots, 2a_p, \pm 2, -2a'_1, -2,
-2a'_2, -2, \ldots, -2a'_q]$,
where $2\alpha>\beta>-2\alpha$,
$a_i, a'_j>~0$ and $\sum_{i=1}^pa_i
=\sum_{j=1}^q a'_j$.
\end{thm}
In Theorem \ref{thm:B}, we have non-fibred knots $K$
such that $\Delta_K(t)$ are
monic and $\deg \Delta_K(t)=2 g(K)$.
\section{Preliminaries}
In this section, we first introduce two fundamental
concepts, a {\it graph of a continued fraction}
and the {\it dual graph}, which
play a key role throughout this paper.
Next, in Subsection 3.4, we show that we can assume
$\mbox{$\ell k$}B\ge 0$ and $r>0$ without loss of
generality. This assumption is very important
to simplify the proof of our main theorem.
In the last Subsection (Subsection 3.5),
we introduce the concept of the primitive
spanning disk for $K_1$, which is the
first step of constructing a minimal genus
Seifert surface for $K(2\alpha, \beta|r)$.
{\bf 3.1. Modified continued fractions and their graphs.}
Let $S=[[c_1, c_2, \ldots, c_{2k+1}]]$ be
the continued fraction of $\frac{\beta}{2\alpha}$,
where $-2\alpha<\beta<2\alpha$ and
gcd$(2\alpha, \beta)=1$.
The {\it length} of $S$ is defined as $2k+1$.
To define the dual of $S$, we need to extend $S$ slightly to
$S^*$, called the modified form of $S$.
We will see that $S$ and $S^*$ correspond to
the same rational number.
\begin{dfn}\label{dfn:3.1}
Let $S=[[c_1, c_2, \ldots, c_{2k+1}]]$.
Then we obtain a continued fraction $S^*$
by thoroughly repeating the following and
call it {\it modified form of} $S$.\\
(1) If $c_{2i+1}>1, 0\le i\le k$, then $c_{2i+1}$ is replaced by
the new sequence of length
\hspace*{5mm}
$2c_{2i+1}-1$,
$(1, 0, 1, 0, \ldots, 0,1)$, and\\
(2) if $c_{2i+1}<-1, 0\le i\le k$, then $c_{2i+1}$ is replaced
by the new sequence of length
\hspace*{5mm}
$2|c_{2i+1}|-1$,
$(-1, 0, -1, 0, \ldots, 0, -1)$.
Note that the length of $S^*$ is \\
\centerline{${\displaystyle
\sum_{i=0}^k (2|c_{2i+1}|-2)+2k+1
=\sum_{i=0}^k 2|c_{2i+1}|-1}$.
}\\
The original continued fraction $S$ may be called
the {\it standard} continued fraction of $\beta/2\alpha$,
which does not contain entries $0$.
\end{dfn}
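For instance (a small illustration, not used later), take $S=[[3]]$, i.e.\ $\beta/2\alpha=1/6$. The single entry $c_1=3>1$ is replaced by the sequence $(1,0,1,0,1)$ of length $2\cdot 3-1=5$, so
$$S^*=[[1,0,1,0,1]]=[2,0,2,0,2],$$
and one checks directly that $[2,0,2,0,2]$ again represents $1/6$, while the length of $S^*$ is $2|c_1|-1=5$, as the formula above predicts.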
The modified form of $\beta/2\alpha$ is of the form:
\begin{equation}
[2u_1, 2v_1, 2u_2, 2v_2, \ldots, 2u_d, 2v_d,
2u_{d+1}],
\end{equation}
where $u_i=+1$ or $-1$, for $1\le i\le d+1$, and
$v_i, (1\le i\le d)$ are arbitrary, including $0$.
Now, given the continued fraction $S$ of $\beta/2\alpha$,
consider the modified form $S^*$ for $S$ of the form (3.1).
\begin{dfn}\label{dfn:3.2}
The {\it graph} $G(S^*)$ of $S^*$,
(or the graph $G(S)$ of $S$), is a plane graph in
${\mathbb R}^2$, consisting of $d+2$ vertices
$V_0, V_1, \ldots, V_{d+1}$ and
$d+1$ line segments $E_k, (1\le k\le d+1)$ joining
two vertices $V_{k-1}$ and $V_k$, where
$V_0 = (0, 0)$ and $V_i=(i, \sum_{j=1}^i u_j)$, for
$1\le i\le d+1$.
The graph is a {\it weighted graph}, where the weight
of $V_i$ $(1\le i \le d)$ is defined to be $2 v_i$.
The weights of both $V_0$ and $V_{d+1}$ are $0$.
\end{dfn}
\Baa
\begin{ex}\label{ex:3.1}
Let $S^*=[2, 0, 2, -2, -2, 0, -2]$.
Then $G(S^*)$ is depicted in Figure 3.1.
The weight of $V_i$ is denoted by $(m)$ near $V_i$.
\end{ex}
The following is
immediate from the diagram of $B(2\alpha, \beta)$
(Figure 2.1).
\begin{prop}\label{prop:3.3}
The $y$-coordinate of the last vertex
$V_{d+1}$ gives the linking number $\ell=
\mbox{$\ell k$}B$. Namely,
${\displaystyle \ell=\sum_{i=1}^{d+1} u_i}$.
\end{prop}
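For instance, for the modified form $S^*=[2,0,2,-2,-2,0,-2]$ of Example \ref{ex:3.1}, we have $u_1=u_2=1$ and $u_3=u_4=-1$, so Proposition \ref{prop:3.3} gives
$$\ell=1+1-1-1=0,$$
in accordance with the fact that the graph of Figure 3.1 ends at height $0$.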
{\bf 3.2. Dual graphs and dual continued fractions.}
For a continued fraction $S$,
we define the dual $\wti{S}$ to $S$
and
the dual graph $\wti{G}$ to a graph $G(S)$.
Then we have the following theorem,
proved in Subsection 3.5:
\begin{thm}\label{thm:3.6}
Let $S$ be the continued fraction of $\beta/2\alpha$.
Then the dual $\wti{S}$ of $S$ is the continued fraction of
$(2\alpha-\beta)/2\alpha$
(resp. $(-2\alpha-\beta)/2\alpha$),
if $\beta>0$ (resp. $\beta<0$).
\end{thm}
\begin{dfn}\label{dfn:3.4}
The {\it dual} $\wti{G}$ to the graph $G(S)$
is defined as follows. The underlying graph of
$\wti{G}$ is exactly the same as that of $G(S)$,
but the weight $\wti{w}(V_i)$ is given as follows;\\
(1) If $V_i$ is a local maximal or local minimal vertex
(including the ends of $G(S)$), then
$\wti{w}(V_i)=-w(V_i)$, and\\
(2) for the other vertices, $\wti{w}(V_i)= 2 \varepsilon_i-w(V_i)$,
where $\varepsilon_i$ is the sign of $u_i$, i.e., $\varepsilon_i=u_i/|u_i|$.
The dual $\wti{S}^*$ to the modified form $S^*$ is defined to be
the modified form of the continued fraction represented
by the dual graph $\wti{G}$.
The dual $\wti{S}$ of $S$ is the standard continued fraction
obtained from $\wti{S}^*$.
\end{dfn}
\Bb
\begin{ex}\label{ex:3.2}
Let $S=[[2,-1,-1,1,-1]]$. \\
Then
$S^*=[[1,0,1,-1,-1,1,-1]]=[2,0,2,-2,-2,2,-2]$.\\
Thus
$\widetilde{S}^*=[[1,1,1,1,-1,-2,-1]]=
[2,2,2,2,-2,-4,-2]=\wti{S}$.
\end{ex}
In the following, we give an alternative formulation
of $\widetilde{S}$, the dual of $S$.
Given the continued fraction of $\beta/2\alpha$,
\begin{equation}
[[c_1,c_2, \ldots, c_{2d+1}]], c_i\neq 0,
\end{equation}
consider the
partial sequence of (3.2):
\begin{equation}
\{c_1, c_3, c_5, \ldots, c_{2d+1}\},
\end{equation}
consisting of only $c_{2i+1}, 0\le i\le d$.
In this sequence, for convenience, each negative entry $c_i$
is written as $-a$ with $a>0$, so that every entry of (3.3)
appears as a positive number preceded by an explicit sign.
Thus, the sequence (3.3) is divided into several
\lq positive' or \lq negative' sub-sequences:
Therefore we can write,
$[[c_1, c_2, \ldots, c_{2d+1}]]=$
$[[a_1, b_1, a_2,$
$ \ldots, a_p, b_p,$
$ -a_{p+1}, -b_{p+1},
\ldots, -a_{r}, b_r, a_{r+1}, b_{r+1}, \ldots]]$,
where $a_i>0$ for all $i$.
Note that $a_i=c_{2i-1}$ or $-c_{2i-1},
i=1, 2, \ldots$ and $b_j=c_{2j}, j=1, 2, \ldots$
We call a sequence of the form
$[[a_1, b_1, a_2, b_2, \ldots, a_k, b_k, a_{k+1}]]$\\
(or $[[-a_1, -b_1, -a_2, -b_2, \ldots, -a_k, -b_k, -a_{k+1}]]$)
a {\it positive} (or {\it negative}) sequence, where
$a_i>0,
1\le i \le k+1$, but $b_j, (1\le j\le k)$ are arbitrary ($\neq 0)$.
We denote by $P_i$ (resp. $Q_i$) a positive (resp. negative)
subsequence.
\begin{ex}\label{ex:3.3}
\begin{align*}
&\ \ \ [[1, 1, 2, -1, 1, -1, -2, 1, -2, -1, 2, 1, 2, 1, -2, -1, -2]]\\
&=[[\underbrace{1, 1, 2, -1, 1}_{P_1}, -1 ,
\underbrace{-2, -(-1), -2}_{Q_1}, -1,
\underbrace{2, 1, 2}_{P_2}, 1,
\underbrace{-2, -1, -2}_{Q_2}]]\\
&=[[P_1,-1,Q_1,-1,P_2,1,Q_2]]
\end{align*}
\end{ex}
Thus, the sequence (3.2) can be written as
\begin{equation}
[[c_1, c_2, c_3, \ldots, c_{2d+1}]]=
\{P_1, d_1, Q_1, e_1, P_2, d_2, Q_2, e_2,P_3, \ldots\},
\end{equation}
where $d_i,e_j$ are some $c_{2k}$.
This form (3.4) is called the {\it canonical decomposition}
of the continued fraction of $\beta/2\alpha$.
\begin{rem}\label{rem:3.5}
If $\beta>0$, then the first entry $c_1>0$, but if $\beta<0$,
then $c_1<0$, and hence, the canonical decomposition begins
with $Q_1$ (not a positive sequence $P_1$ and $d_1$ is missing).
However, since this does not change our argument,
we may assume in general that $c_1>0$.
\end{rem}
Now, the {\it dual continued fraction} of (3.2)
is reformulated as follows.
Let $S=\{P_1, d_1, Q_1, e_1, P_2, \ldots\}$
be the canonical decomposition of
$[[c_1, c_2, \ldots, c_{2d+1}]]$.
First the {\it dual of a positive sequence} $P$
is obtained as follows.
Given $P=[[a_1, b_1, a_2, b_2, \ldots a_{m+1}]], a_j>0$,
consider the modified form $P^*$ of $P$
\begin{equation}
P^*=[[a^*_1, b_1^*, a_2^*, b_2^*, \ldots,
a_k^*, b_k^*, a^*_{k+1}]],
\end{equation}
where $a^*_j=1 (1\le j\le k+1)$ and $b^*_j$'s $(1\le j\le k)$
are arbitrary including $0$.
Then the dual of $P^*$, denoted by $\wti{P^*}$,
is $\wti{P^*}=
[[\wti{a}_1^*, \wti{b}_1^*, \wti{a}_2^*, \wti{b}_2^*,
\ldots,\wti{a}_{k+1}^*]]$,
where $\wti{a}_j^*=a_j^*=1$, for
$1\le j\le k+1$ and $\wti{b}_j^*=1-b_j^*$ for $1\le j\le k$.\\
The {\it dual $\wti{P}$ of $P$}
is the standard form obtained
from $\wti{P}^*$.
For the negative sequence $Q$,
apply the same operation for the positive sequence $-Q$
to obtain the dual $\wti{-Q}$ of $-Q$.
Then the dual $\wti{Q}$ of $Q$
is the negative sequence $-(\wti{-Q})$.
Finally, the {\it dual} $\wti{S}$ of $S$ is
$\{\wti{P}_1,-d_1,\wti{Q}_1,-e_1,\wti{P}_2,-d_2,
\wti{Q}_2,-e_2, \ldots\}$.
{\bf Example} \ref{ex:3.3} {\bf (continued)}
(1)Since $P_1=[[1,1,2,-1,1]]$, $P^*_1=[[1,1,1,0,1,-1,1]]$,
and hence\\
$\wti{P}^*_1=[[1,0,1,1,1,2,1]]$ and
$\wti{P}_1=[[2,1,1,2,1]]$.\\
(2) Since $P_2=[[2,1,2]]$,
$P^*_2=[[1,0,1,1,1,0,1]]$,
and hence\\
$\wti{P}^*_2=[[1,1,1,0,1,1,1]]$ and
$\wti{P}_2=[[1,1,2,1,1]]$.\\
(3)
Since $Q_1=[[-2,1,-2]], -Q_1=[[2,-1,2]]$ and hence\\
$(-Q_1)^*=[[1,0,1,-1,1,0,1]]$,\\
$\wti{-Q_1^*}=[[1,1,1,2,1,1,1]]=\wti{-Q_1}$, so
$\wti{Q}_1=[[-1,-1,-1,-2,-1,-1,-1]]$\\
(4)
Since $Q_2=[[-2,-1,-2]], -Q_2=[[2,1,2]]$ and hence\\
$\wti{-Q_2}=[[1,1,2,1,1]]$, so
$\wti{Q}_2=[[-1,-1,-2,-1,-1]]$.
Thus,
$\wti{S}=[[2,1,1,2,1,1,-1,-1,-1,-2,-1,-1,-1,1,1,1,2,1,1,-1,
-1,-1,-2,-1,-1]]$
{\bf 3.3. Applications.}
In this subsection, we study
some of the invariants of a $2$-bridge link
$B(2\alpha, \beta)$ deduced from $S$ or its dual $\wti{S}$.
The following three propositions show that $S$ or $\wti{S}$
determines the degree of the Alexander polynomial
$\Delta_{B(2\alpha, \beta)} (x, y)$ of $B(2\alpha, \beta)$.
Let $S=\{P_1, d_1, Q_1, e_1, P_2, \ldots, P_m, d_m, Q_m\}$
be the canonical decomposition of the continued fraction of
$\beta/2\alpha$.
We write more precisely:
\begin{align}
P_i&=[[a_{i,1}, b_{i,1}, a_{i,2}, b_{i,2}, \ldots,
b_{i,s_i}, a_{i, s_{i}+1}]],\ {\rm and} \nonumber\\
Q_j&=[[-a'_{j,1},-b'_{j,1},-a'_{j,2},-b'_{j,2}, \ldots,
-b'_{j,q_j},-a'_{j,q_{j}+1}]]
\end{align}
\begin{dfn}\label{dfn:3.8}
For $1\le i,j \le m$, define
\begin{align}
\rho_i&=|\{b_{i,\ell}|b_{i, \ell}=1, 1\le \ell \le s_i\}|,\nonumber\\
\rho'_j&=|\{b'_{j,\ell}|b'_{j,\ell}=1,1\le \ell \le q_j\}|,\ {\rm and}\nonumber\\
\rho&=\rho(\beta/2\alpha)=\sum_{i=1}^{m}
\rho_i +\sum_{j=1}^{m}\rho'_j.
\end{align}
We call this $\rho$ the {\it deficiency} (see Theorem \ref{thm:7.4}
and Sections 6 and 8).
Further, we define;
\begin{align}
\lambda_i&=\sum_{\ell=1}^{s_{i}+1}a_{i, \ell},
1\le i\le m \nonumber\\
\lambda'_j&=\sum_{\ell=1}^{q_{j}+1}a'_{j,\ell}, 1\le j\le m,\ {\rm and}\nonumber\\
\lambda&=\sum_{i=1}^{m}\lambda_i +\sum_{j=1}^{m}\lambda'_j.
\end{align}
\end{dfn}
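As an illustration of these quantities, consider Example \ref{ex:3.3}: there $P_1=[[1,1,2,-1,1]]$, $P_2=[[2,1,2]]$, $Q_1=[[-2,1,-2]]$ and $Q_2=[[-2,-1,-2]]$, so $b_{1,1}=1$, $b_{1,2}=-1$, $b_{2,1}=1$, $b'_{1,1}=-1$ and $b'_{2,1}=1$. Hence
$$(\rho_1,\rho_2,\rho'_1,\rho'_2)=(1,1,0,1),\qquad (\lambda_1,\lambda_2,\lambda'_1,\lambda'_2)=(4,4,4,4),$$
so that $\rho=3$ and $\lambda=16$ for this example.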
Note that $\lambda$ equals the number of edges in $G(S)$,
which also equals the number of disks in Figure 3.4.
This number is neatly evaluated by Kanenobu as follows:
\begin{prop}\label{prop:3.9}
\cite[(4.10)]{K}
Write $\Delta_{B(2\alpha, \beta)}(x,y)=
{\displaystyle \sum_{0\le i, j}} c_{i,j} x^i y^j
\in {\mathbb Z}[x,y]$
in such a way that $\min y$-deg $\Delta_{B(2\alpha, \beta)}
(x,y) = \min\{j|c_{i,j}\neq 0\}=0$.
Then $\max y$-deg $\Delta_{B(2\alpha, \beta)}(x,y)=
\max\{j|c_{i,j}\neq 0\}=\lambda -1$.
\end{prop}
The following proposition shows that $\lambda$ and $\rho$
are related to the dual of $S$.
\begin{prop}\label{prop:3.10}
Let $\wti{S}$ be the dual of $S$. Then the length of $\wti{S}$
is $2(\lambda-\rho)-1$.
\end{prop}
{\it Proof.}
First consider the positive sequence $P_i$.
Let $P_i=[
2a_{i,1},2b_{i,1},2a_{i,2},\ldots,2a_{i,s_i},2b_{i,s_i},
2a_{i,s_{i}+1}]$.
Then the length of $P_i$ is $2s_{i}+1$.
Now to get the dual, consider the modified form
$P^*_i$ of $P_i$
that is of the form:
\hfil
$P^*_i=[\underbrace{2, 0, 2, \ldots, 0, 2}_{2a_{i,1}-1},
2b_{i, 1}, \underbrace{2, 0, 2, \ldots, 0, 2}_{
2a_{i,2}-1}, 2b_{i, 2}, \ldots].
$
Then the modified form of the dual $\wti{P}_i$ is obtained from $P^*_i$
by replacing each entry $0$ by $2$ and each entry $2b_{i,r}$ by $2(1-b_{i,r})$.
Therefore, in this modified form, the entry $0$ occurs exactly $\rho_i$ times, and passing to the standard form $\wti{P}_i$ shortens the length by $2$ for each such occurrence.
Since the length of $P^*_i$ is $\sum^{s_i+1}_{r=1}
(2a_{i, r}-1)+s_i$, the length of the dual $\wti{P}_i$ is
\begin{align*}
\sum^{s_i+1}_{r=1} (2a_{i, r}-1)+s_i -2 \rho_i
&= \sum^{s_i+1}_{r=1} 2a_{i, r}-(s_i +1)+ s_i -2\rho_i\\
&= 2\lambda_i-2\rho_i-1.
\end{align*}
By the same reasoning, the length of the dual
$\wti{Q}_j$ of $Q_j$ is equal to $2\lambda'_j-2\rho'_j-1$.
Therefore, the length of the dual $\wti{S}$ of $S$ is
\begin{equation*}
\sum^m_{i=1} (2\lambda_i-2\rho_i-1)+\sum^m_{j=1}
(2\lambda'_j-2\rho'_j-1)+(2m-1)=2\lambda-2\rho-1.
\end{equation*}
\fbox{}
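As a check of Proposition \ref{prop:3.10} on Example \ref{ex:3.3}, we have $\lambda=16$ and $\rho=3$ (see the illustration after Definition \ref{dfn:3.8}), so the length of the dual should be
$$2(\lambda-\rho)-1=2(16-3)-1=25;$$
indeed, the dual continued fraction displayed at the end of Example \ref{ex:3.3} (continued) has $25$ entries, namely $\wti{P}_1$, $\wti{Q}_1$, $\wti{P}_2$, $\wti{Q}_2$ of lengths $5,7,5,5$ together with the three connecting entries.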
Note that Proposition \ref{prop:3.10}
holds if $Q_m=\phi$ (and hence $d_m$ is missing) or $P_1=\phi$
(and hence $d_1$ is missing).
Combining Proposition 2 in \cite{KM} with
Proposition \ref{prop:3.10} and
Proposition \ref{prop:4.4} (in the next subsection),
we obtain:
\begin{prop}\label{prop:3.11}
Let $\Delta_{B(2\alpha, \beta)} (x, y)$ be the
Alexander polynomial of a 2-bridge link $B(2\alpha, \beta)$.
Then $\Delta_{B(2\alpha, \beta)} (t, t)$ is a polynomial of
degree $2(\lambda-\rho-1)$.
\end{prop}
{\it Proof.}
Let $\Delta_{B}(t)$ be the reduced Alexander polynomial of
$B(2\alpha, \beta)$.
Then $\Delta_{B}(t)=\Delta_{B(2\alpha, \beta)} (t, t)(1-t)$.
Apply Proposition 2 in \cite{KM}.
\fbox{}
{\bf 3.4. Reduction.}
In this subsection, we justify
the assumptions $\ell\ge 0$ and $r>0$.
This restriction drastically
simplifies the proofs of
the main theorems.
For a knot $K$, we denote by $\overline{K}$
the mirror image of $K$.
\begin{thm}\label{thm:1}
In studying the genera and fibredness of
$K(2\alpha, \beta|r)$
with $r\neq0$,
we may assume:\\
(1) $-2\alpha<\beta<2\alpha$,
(2) $\mbox{$\ell k$} B(2\alpha, \beta)\ge 0$, and
(3) $r>0$.
More precisely, we have the following:
Suppose $0<\beta<2\alpha$. Then, for any $r\neq 0$\\
(I) \
$K(2\alpha, \beta|r)=\overline{K(2\alpha, -\beta|-r)}$.\\
(II)
$K(2\alpha, \beta|r)=
\overline{K(2\alpha, 2\alpha-\beta|-r)}$ and
$K(2\alpha, -\beta|r)=
\overline{K(2\alpha, -2\alpha+\beta|-r)}$.
\end{thm}
We remark the following:
(i) $\mbox{$\ell k$}B=
-\mbox{$\ell k$} B(2\alpha, -\beta)$,
(ii) $\mbox{$\ell k$}B=
\mbox{$\ell k$} B(2\alpha, \pm 2\alpha-\beta)$,
(iii) if $\mbox{$\ell k$} B(2\alpha, \beta)=0$, then we may assume
$r>0$ and $\beta>0$.
\begin{ex}
(1) $K(4,-1|3) = \overline{K(4,1|-3)} = K(4,3|3)$.\\
(2) $K(14,9|2)= \overline{K(14,-9|-2)}=K(14,-5|2)$.\\
(3) $K(14,9|-2)=\overline{K(14,-9|2)}$.\\
(4) $K(14,-9|-2)=\overline{K(14,-5|2)}$.\\
Note that $\mbox{$\ell k$} B(4,-1)=-2$ and $\mbox{$\ell k$} B(14,9)=-1$.
So, we first take the mirror
image to make the linking numbers positive,
at the expense of changing the sign of~$r$.
\end{ex}
{\it Proof of Theorem \ref{thm:1}.}
First, we prove the former part, namely we show;
\begin{prop}
We may assume (1), (2) and (3) of Theorem \ref{thm:1}.
\end{prop}
{\it Proof.}
Take a $2$-bridge link $B(2\alpha, \beta)=K_1\cup K_2$.
Without loss of generality, we may assume
$-2\alpha<\beta<2\alpha$.
If $\mbox{$\ell k$}(K_1,K_2)<0$, take
$B(2\alpha, -\beta)$.
This corresponds to
taking the mirror image of $B(2\alpha, \beta)$
while preserving the orientation
of the components.
Therefore, $\mbox{$\ell k$} B(2\alpha, -\beta)>0$,
and hence, from now on,
we assume $2$-bridge links always have
a non-negative linking number. Note that we still have
$-2\alpha<-\beta<2\alpha$.
Take $K(2\alpha, \beta|r)$.
Suppose $r<0$.
Let $B(2\alpha',\beta')=K'_1 \cup K'_2$
be the link obtained by
taking the mirror
image of $B(2\alpha, \beta)$
while reversing the orientation of
$K_2$.
Now the linking number is preserved,
i.e., $\mbox{$\ell k$}B=\mbox{$\ell k$}
B(2\alpha',\beta')$.
Recall that $K(2\alpha, \beta|r)$ is obtained
by twisting $K_1$ by $K_2$, $r$ times.
This does not depend on
the orientation of $K_2$,
and hence
the knot obtained by twisting $K_1$ along $K_2$ $r$ times
is the mirror image of the knot obtained by twisting
$K_{1}'$ along $K_{2}'$ $-r$ times.
Therefore, we see
$K(2\alpha, \beta|r)=\overline{K(2\alpha', \beta'|-r)}$
\fbox{}
Next, we show the following to prove the latter half
of Theorem \ref{thm:1}.
\begin{prop}\label{prop:4.4}
Let $L=B(2\alpha,\beta)$ be a $2$-bridge link,
where $-2\alpha<\beta<2\alpha$.
Let $L'$ be obtained by taking the mirror image of $L$
while reversing the orientation of one component.
Then, we have:
$L'=
\begin{cases}
\begin{array}{ll}
B(2\alpha, 2\alpha-\beta) & {\rm if\ } \beta>0,\\
B(2\alpha, -2\alpha-\beta) & {\rm if\ } \beta<0.
\end{array}
\end{cases}
$
\end{prop}
{\it Proof.}
Since the other case is similar,
we only deal with the case $\beta>0$.
Consider the Schubert normal form of $B(2\alpha, \beta)$.
See Figure 3.3.
\foA
First, take the mirror image by changing all crossings
simultaneously. Flip the figure by the horizontal axis.
Now we have the Schubert normal form of
$B(2\alpha, -\beta)$. Rotate the right over-bridge
clockwise by $\pi$. Change the orientation of
the component containing the right over-bridge.
This gives the Schubert normal form of
$B(2\alpha, 2\alpha -\beta)$.
\fbox{}
By two propositions above, we have
Theorem \ref{thm:1}.
\fbox{}
\noindent
\begin{minipage}{7.5cm}
{\bf 3.5. Primitive spanning disk for $K_1$.}
In this subsection, we introduce the notion of
{\it primitive spanning disk}
for $K_1$, which locally looks like Figure 3.5 (b).
This surface is the first step to
construct a minimal genus Seifert surface
for $K(2\alpha, \beta|r)$.
Let $D$ be
a diagram obtained from the continued fraction
$S$ of $B(2\alpha, \beta)$ as in Figure 2.1.
By a slight modification of $D$ corresponding to
the modification of $S$ to $S^*$, as in Figure 3.4,
construct a spanning disk for $K_1$,
which consists of horizontal disks and vertical bands,
whose interiors are mutually disjoint.
In Figure 3.4, each box contains an even number of
twists (including $0$).
Note that
in Figure 3.4, all disks are showing the same side,
though $K_2$ may penetrate them from various sides.
The set of horizontal disks is divided into several
families so that each member of a family
meets $K_2$ from the same side as its neighbouring
member(s).
This corresponds to the canonical
decomposition $\{P_1, d_1, Q_1, e_1, \cdots\}$
of $S$.
For simplicity,
the disks belonging to the family corresponding
to $P_i$'s (resp. $Q_i$'s) are called
{\it positive disks} (resp. {\it negative disks}),
and a band connecting two positive (resp. negative)
disks is called a
{\it positive} (resp. {\it negative}) band.
The other bands are called {\it connecting bands}.
\end{minipage}
\hfil
\begin{minipage}{4cm}
\embfig{80}{3-4.eps}{Figure 3.4:
$B(2\alpha, \beta)$
}
\end{minipage}
\begin{rem}\label{rem:4.1}
Disks and bands in the spanning disk for $K_1$
correspond to edges and vertices, respectively,
of the graph $G(S)$ of $S$ as follows,
except for the end vertices of $G(S)$.\\
(1) Positive/negative disks correspond to
edges with positive/negative slope.\\
(2) Positive/negative bands correspond to vertices
between positive/negative edges. \\
(3) Connecting bands correspond to local maximal or
minimal vertices\\
(4) The number of twists of a band corresponds to
the weight of a vertex.
\end{rem}
\begin{dfn}\label{dfn:primitivedisk}
Slide each band so that both of its ends are
attached to the front edge of each small disk
as in Figure 3.5 (b).
The {\it primitive spanning disk} for $K_1$ is
the union of all the small disks
together with all bands arranged this way.
See Figures 7.2 (a) and 7.4 left.
\end{dfn}
We remark that in the process of sliding a band,
another band may stand in the way. However,
as shown in the following proposition, we
can always arrange the bands so that each of them
appears as in Figure 3.5 (b).
\foC
\begin{prop}\label{prop:4.2}
The relative position of the bands in a primitive
spanning disk can be chosen arbitrarily.
\end{prop}
{\it Proof.}
It suffices to examine the situation locally.
See Figure 3.6, where disks, say $D_1, D_2, D_3$
and bands $B_1, B_2, B_3$ are depicted.
To change from (a) to (b), fix $D_2$
and everything
lying above $D_2$, and simultaneously
turn around everything
that hangs below $D_2$.
Similarly we can change (b) to (c).
\fbox{}
\foD
Now, we demonstrate the process of replacing
$B(2\alpha, \beta) =K_1 \cup K_2$ by
$B(2\alpha, 2\alpha-\beta)$, that is
to take the mirror image and reverse
(the orientation of) $K_2$:
Since $K_2$ consecutively penetrates the disks
transversely, we have a diagram of $K_1$ as in
Figure 3.7, where (i) all the disks are concentric,
(ii) the higher disk appears smaller
and (iii) the only crossings are in the twists of bands.
Figure 3.7 shows the process of taking the mirror
image and then reversing the orientation of the mirror image of $K_2$.
\foE
Then we notice that the effect of the process is
simply replacing each of the bands in $K_1$ by
its mirror image.
Therefore, the process can be depicted as in
Figure 3.5 (b) to (c).
Now reversing the operation of (a) to (b), we obtain
the standard diagram of the $2$-bridge link
$B(2\alpha, 2\alpha-\beta)$.
Finally, Theorem \ref{thm:3.6} is now almost immediate.
{\it Proof of Theorem} \ref{thm:3.6}.
It is easy to see that the final diagram Figure 3.5(d)
is the primitive disk obtained from the dual
$\widetilde{S}$ of $S$.
Therefore Theorem \ref{thm:3.6} follows from
Proposition \ref{prop:4.4}.
\fbox{}
\section{Alexander polynomials (I)}
In this section, we determine the Alexander polynomial
$\Delta_{K(r)}(t)$ for $K(2\alpha, \beta|r)$.
In fact, we prove the following
\begin{prop}\label{prop:6.1}
Let $\Delta_{B(2\alpha, \beta)} (x, y)$ be the
Alexander polynomial of an (oriented) $2$-bridge link
$B(2\alpha, \beta)$.
Let $\Delta_{K(r)}(t)$ be the Alexander polynomial of
$K(2\alpha, \beta|r), r>~0$.\\
(1)\cite{Ki} If $\mbox{$\ell k$}B=\ell\ne0$, then
$\Delta_{K(r)}(t)=\dfrac{1-t}{1-t^\ell}
\Delta_{B(2\alpha, \beta)} (t,t^{\ell r})$\\
(2) If $\ell=0$, then, for some $a=\pm 1$ and $b$,
\begin{equation*}
\Delta_{K(r)}(t)=r \biggl[
\dfrac{\Delta_{B(2\alpha, \beta)}(x, y)}
{1-y}\biggr]_{x=t, y=1} (1-t)+a t^b
\end{equation*}
\end{prop}
In Subsection 6.2,
we will give a geometric interpretation of
$\Delta_{K(r)}(t)$ when $\ell=0$. (See also \cite{Gon}
or \cite{MM})
Now Proposition \ref{prop:6.1} (1)
follows from a general result due
to Kidwell \cite{Ki}, and hence we omit the proof.
However, part (2) was not proved in \cite{Ki}.
In this section, we prove the following more general
result suggested by M. Kidwell.
\begin{prop}\label{prop:6.2}
Let $K_1$ be an oriented knot embedded in an
(unknotted) solid torus $V$.
Suppose $\mbox{$\ell k$} (K_1, K_2)=0$, where
$K_2$ is an oriented meridian of $\partial V$.
Denote by $K_1(r)$ the knot obtained from $K_1$
by applying Dehn twists $r$ times along $K_2$ ($r>0$).
Let $L=K_1\cup K_2$. Then, for some $a=\pm 1$ and $b$, we have:
\begin{equation}
\Delta_{K_1(r)}(t)=r(1-t) \biggl[
\dfrac{\Delta_{L}(x, y)}{1-y}\biggr]_{
\genfrac{}{}{0pt}{}{x=t}{y=1}}+
a t^b
\Delta_{K_1}(t).
\end{equation}
\end{prop}
Proposition \ref{prop:6.1} (2)
follows from Proposition \ref{prop:6.2} immediately,
since $\Delta_{K_1}(t)=1$.
{\it Proof of Proposition \ref{prop:6.2}.}
First, consider the link $L=K_1\cup K_2$.
We add one trivial knot $K_3$ to $L$ such that
$\mbox{$\ell k$}(K_1, K_3)=\mbox{$\ell k$}(K_2, K_3)=1$
as in Figure 4.1.
Let $\wti{L}=K_1\cup K_2\cup K_3$
be the $3$-component link.
\sixa
Using this diagram, we obtain the following Wirtinger
presentation of the link group
$G(\wti{L})$ of $\wti{L}$.
\begin{equation*}
G(\wti{L})=\langle
x_1, x_2, \ldots, x_m, y, z_1, z_2|r_1, \ldots, r_m, s, t_1, t_2
\rangle,\ {\rm where}
\end{equation*}
\hspace*{4mm}
$
\begin{array}{ll}
r_1=z_1 x_1 z_1^{-1} x_2^{-1},
&
\hspace*{8mm}
s=(x_{i_{k}}^{\varepsilon_k}\cdots
x_{i_2}^{\varepsilon_2} x_2 z_1) y (z_1^{-1} x_2^{-1}
x_{i_2}^{-\varepsilon_2}\cdots x_{i_k}^{-\varepsilon_k})y^{-1},\\
r_2=y x_2 y^{-1} x_3^{-1},
&
\hspace*{8mm} t_1=y z_1 y^{-1} z_2^{-1},\\
r_3=w_3 x_3 w_3^{-1} x_4^{-1},
&
\hspace*{8mm} t_2=x_1 z_2 x_1^{-1} z_1^{-1}\\
\hspace*{8mm}\vdots & \\
r_m=w_m x_m w_m^{-1} x_1^{-1}. &
\end{array}$\\
Here, $w_i$ is a word in $x_i$ and/or $y$.
We note that $\varepsilon_k+\cdots+\varepsilon_2+1=0$,
since $\ell k(K_1, K_2)=0$.
Now using $t_1$ and $t_2$, we can eliminate $z_2$ and
obtain a new presentation.
For simplicity, we write $z=z_1$. Then
\begin{equation*}
G(\wti{L})=\langle
x_1, x_2, \ldots, x_m, y, z|r_1, \ldots, r_m, s, t'\rangle,\ {\rm where}\
t'=x_1 y z y^{-1} x_1^{-1} z^{-1}.
\end{equation*}
From this presentation,
we obtain the Alexander matrix
$M(\wti{L})$ for $\wti{L}$.
The matrix $M(\wti{L})$ is an
$(m+2)\times(m+2)$ matrix.
A simple calculation shows
\begin{align}
&(1)\
(\dfrac{\partial r_i}{\partial y})^\phi=
\delta_i(1-x),\ {\rm where}\ \delta_i=0, 1\ {\rm or}\ -y^{-1},\nonumber\\
&(2)\
(\dfrac{\partial r_1}{\partial z})^\phi=1-x,
(\dfrac{\partial r_i}{\partial z})^\phi=0,\ {\rm for}\ i \ne1,
\end{align}
where $\partial$ indicates Fox's
free derivative and $\phi$ is the induced homomorphism
from $G(\wti{L})$ to the free abelian group
$G(\wti{L})/[G(\wti{L}),G(\wti{L})]$, where
$x_i^\phi=x, y^\phi=y$ and $z^\phi=z$.
Let $U=x_{i_k}^{\varepsilon_k}
\cdots x_{i_2}^{\varepsilon_2} x_{2}z$.
Then $s=U y U^{-1} y^{-1}$, and
\begin{equation}(\dfrac{\partial s}{\partial x_i})^\phi=
(1-y) (\dfrac{\partial U}{\partial x_i})^\phi.
\end{equation}
Since $U$ does not involve $y$ and $\varepsilon_k+
\cdots +\varepsilon_2+1=0$,
we see
\begin{align}
&(1)\ (\dfrac{\partial s}{\partial y})^\phi=z-1\nonumber\\
&(2)\ (\dfrac{\partial s}{\partial z})^\phi=1-y
\end{align}
Furthermore, we have:
\begin{align}
&(1)\ (\dfrac{\partial t'}{\partial x_1})^\phi=1-z,
\
(\dfrac{\partial t'}{\partial x_i})^\phi=0, {\rm for}\ i \ne1,\nonumber\\
&(2)\ (\dfrac{\partial t'}{\partial y})^\phi=x(1-z),\nonumber\\
&(3)\ (\dfrac{\partial t'}{\partial z})^\phi=xy-1.
\end{align}
Now, the Alexander polynomial
$\Delta_{\wti{L}}(x, y, z)$ of $\wti{L}$ is obtained as follows.
Denote by $\hat{M}(\wti{L})$
the $(m+1)\times(m+2)$ matrix obtained from
$M(\wti{L})$ by striking out the $m^{\rm th}$ row:
$\biggl(
(\dfrac{\partial r_m}{\partial x_1})^\phi,
\cdots, (\dfrac{\partial r_m}{\partial x_m})^\phi,
(\dfrac{\partial r_m}{\partial y})^\phi,
(\dfrac{\partial r_m}{\partial z})^\phi\biggr)$
Further, $\hat{M}(\wti{L})_\nu$ denotes the
$(m+1)\times(m+1)$ matrix obtained from $\hat{M}(\wti{L})$
by striking out the column corresponding to the generator
$\nu$. (For instance, to get
$\hat{M}(\wti{L})_z$, eliminate the last column of
$\hat{M}(\wti{L})$.)
Then the following is known:
\begin{equation}
\Delta_{\wti{L}}(x, y, z)\doteq
\dfrac{\det \hat{M}(\wti{L})_z}{1-z}.
\end{equation}
Since the last row of $\hat{M}(\wti{L})_z$ is divisible by
$1-z$, we have:
\begin{equation}
\Delta_{\wti{L}}(x, y, z)=\det
\left[
\begin{array}{c|c}
\bigl(\dfrac{\partial r_i}{\partial x_j}\bigr)^\phi&
\delta_{i}(1-x)\\
\noalign{\vskip3pt\hrule \vskip3pt}
(1-y)\bigl(\dfrac{\partial U}{\partial x_j}\bigr)^\phi&
z-1\\
\noalign{\vskip3pt\hrule \vskip3pt}
1\ \ 0\cdots 0&x
\end{array}
\right].
\end{equation}
Let $\hat{L}(r)$ be the link obtained from
$K_1\cup K_3$ by applying Dehn twists $r(>0)$ times along $K_2$.
Since $\ell k(K_2,K_1)=0$ and $\ell k(K_2, K_3)=1$,
by Kidwell's theorem \cite[Corollary 3.2]{Ki} we have:
\begin{equation}
\Delta_{\hat{L}}(x, z)=
\dfrac{1}{1-z} \det
\left[
\begin{array}{c|c}
\bigl(\dfrac{\partial r_i}{\partial x_j}\bigr)^\phi_{y=z^r}&
\delta_{i}(1-x)\\
\noalign{\vskip3pt\hrule \vskip3pt}
(1-z^r)\bigl(\dfrac{\partial U}{\partial x_j}\bigr)^\phi&
z-1\\
\noalign{\vskip3pt\hrule \vskip3pt}
1\ \ 0\ 0\ \cdots \ 0&x
\end{array}
\right].
\end{equation}
Further, our knot $K_1(r)$ is obtained from $\hat{L}$
by eliminating $K_3$, and hence, by
Torres' Theorem \cite{T},
noting $\ell k(K_3, K_1(r))=1$, we have:
\begin{align}
&\Delta_{K_1(r)} (x)=\Delta_{\hat{L}} (x, 1),
\ {\rm and\ hence},\\
&\Delta_{K_1(r)} (x)=\det
\left[
\begin{array}{c|c}
\bigl(\dfrac{\partial r_i}{\partial x_j}\bigr)^\phi&
\delta_{i}(1-x)\\
\noalign{\vskip3pt\hrule \vskip3pt}
r\bigl(\dfrac{\partial U}{\partial x_j}\bigr)^\phi&
-1\\
\noalign{\vskip3pt\hrule \vskip3pt}
1\ \ 0\ 0\ \cdots 0&x
\end{array}
\right]_{y=z=1}
\end{align}
We evaluate $\Delta_{K_1(r)}(x)$
by expanding it along the last row, and hence
\begin{equation}
\Delta_{K_1(r)} (x)\doteq\det
\left[
\begin{array}{c|c}
\bigl(\dfrac{\partial r_i}{\partial x_j}
\bigr)^\phi_{\genfrac{}{}{0pt}{}{i\ge 1}{j\ge 2}}&
\delta_{i}(1-x)\\
\noalign{\vskip3pt\hrule \vskip3pt}
r\bigl(\dfrac{\partial U}{\partial x_j}\bigr)^\phi_{ j\ge2 }&
-1
\end{array}
\right]_{y=z=1}
+(-1)^m x \det
\left[
\begin{array}{c}
\bigl(\dfrac{\partial r_i}{\partial x_j}
\bigr)^\phi_{\genfrac{}{}{0pt}{}{i\ge 1}{j\ge 1} }\\
\noalign{\vskip3pt}
r\bigl(\dfrac{\partial U}{\partial x_j}\bigr)^\phi_{j\ge 1}
\end{array}
\right]_{y=z=1}
\end{equation}
First we claim:
\begin{lem}\label{lem:6.3}
$\det
\left[
\begin{array}{c}
\bigl(\dfrac{\partial r_i}{\partial x_j}
\bigr)^\phi_{\genfrac{}{}{0pt}{}{i\ge 1}{j\ge 1}}\\
\noalign{\vskip3pt}
r\bigl(\dfrac{\partial U}{\partial x_j}\bigr)^\phi_{j\ge1}\\
\end{array}
\right]_{y=z=1}
=0$
\end{lem}
{\it Proof.}
Since $y=1$ and $\varepsilon_k+\cdots+\varepsilon_2+1=0$, we have
$\sum_{j=1}^m(\frac{\partial r_i}{\partial x_j})_{y=1}^\phi =0$
and $\sum_{j=1}^m\bigl(\frac{\partial U}
{\partial x_j}\bigr)_{y=1}^\phi =0$,
and hence Lemma \ref{lem:6.3} follows.
\fbox{}
Now we return to the proof of Proposition \ref{prop:6.2}.
From (4.11)
and Lemma \ref{lem:6.3}, we see the following:
\begin{equation}
\Delta_{K_1(r)} (x)\doteq\det
\left[
\begin{array}{c|c}
\bigl(\dfrac{\partial r_i}{\partial x_j}
\bigr)^\phi_{\genfrac{}{}{0pt}{}{i\ge 1}{j\ge 2}}&
\delta_{i}(1-x)\\
\noalign{\vskip3pt\hrule \vskip3pt}
r\bigl(\dfrac{\partial U}{\partial x_j}\bigr)^\phi_{j\ge2}&
-1
\end{array}
\right]_{y=1}
\end{equation}
The determinant is decomposed into two
terms as follows:
\begin{equation}
\Delta_{K_1(r)}(x) \doteq \det
\left[
\begin{array}{c|c}
\bigl(\dfrac{\partial r_i}{\partial x_j}
\bigr)^\phi_{\genfrac{}{}{0pt}{}{i\ge 1}{j\ge 2}}&
\delta_{i}(1-x)\\
\noalign{\vskip3pt\hrule \vskip3pt}
r\bigl(\dfrac{\partial U}{\partial x_j}\bigr)^\phi_{j\ge2}&
0
\end{array}
\right]_{y=1}
+
\det
\left[
\begin{array}{c|c}
\bigl(\dfrac{\partial r_i}{\partial x_j}
\bigr)^\phi_{\genfrac{}{}{0pt}{}{i\ge 1}{j\ge 2}}&
0\\
\noalign{\vskip3pt\hrule \vskip3pt}
r\bigl(\dfrac{\partial U}{\partial x_j}\bigr)^\phi_{j\ge2}&
-1
\end{array}
\right]_{y=1}
\end{equation}
The second term is equivalent to
$\det\Bigl[\bigl(\dfrac{\partial r_i}{\partial x_j}\bigr)^\phi_
{\genfrac{}{}{0pt}{}{1\le i \le m-1}{2\le j\le m}}\Bigr]_{y=1}
$
that is equal to $\Delta_{K_1}(x)$ (up to $\pm x^k$).
Therefore, the final step is to show that
\begin{equation}
\det
\left[
\begin{array}{c|c}
\bigl(\dfrac{\partial r_i}{\partial x_j}
\bigr)^\phi_{\genfrac{}{}{0pt}{}{i\ge 1}{j\ge 2}}&
\delta_{i}\\
\noalign{\vskip3pt\hrule \vskip3pt}
\bigl(\dfrac{\partial U}{\partial x_j}\bigr)^\phi_{j\ge2}&
0
\end{array}
\right]_{y=1}
\doteq
\left[\dfrac{\Delta_{B(2\alpha, \beta)}(x,y)}{1-y}\right]_{y=1}.
\end{equation}
To show (4.14)
we go back to $M(\wti{L})$ and compute
$\Delta_{\wti{L}} (x, y, z)$ in a different way.
We use the following formula:
\begin{equation}
\Delta_{\wti{L}} (x, y, z)=
\dfrac{\det\hat{M}(\wti{L})_y}{1-y}
\end{equation}
Then the row
$(\dfrac{\partial s}{\partial x_1},
\dfrac{\partial s}{\partial x_2},
\cdots, \dfrac{\partial s}{\partial x_m},
\dfrac{\partial s}{\partial z})^\phi$
is divisible by $1-y$, and hence, we have:
\begin{equation}
\Delta_{\wti{L}} (x, y, z)=\det
\left[
\begin{array}{c|c}
\bigl(\dfrac{\partial r_i}{\partial x_j}\bigr)^\phi_{j\ge1}&
\begin{array}{c}
1-x\\
0\\
\vdots\\
0\\
\end{array}\\
\noalign{\vskip3pt\hrule \vskip3pt}
\bigl(\dfrac{\partial U}{\partial x_j}\bigr)^\phi_{j\ge1}&
1\\
\noalign{\vskip3pt\hrule \vskip3pt}
1\hspace{-1.5mm}-\hspace{-1.5mm}z\ 0 \cdots 0&
xy-1
\end{array}
\right]
\end{equation}
Now we try to find $\Delta_L (x, y)$ from
$\Delta_{\wti{L}} (x,y,z)$.
To do this, we eliminate $K_3$ from $\wti{L}=K_1 \cup K_2 \cup K_3$.
Then, since $\ell k (K_3, K_1)=\ell k (K_3, K_2)=1$,
Torres' Theorem \cite{T} implies:
\begin{equation}
\Delta_L (x, y)=\dfrac{\Delta_{\wti{L}}(x, y, 1)}{xy-1},
\end{equation}
that is, from (4.16),
\begin{equation}
\Delta_L (x, y)=\det
\left[
\begin{array}{c}
\bigl(\dfrac{\partial r_i}{\partial x_j}\bigr)^\phi_{j\ge1}\\
\noalign{\vskip3pt}
\bigl(\dfrac{\partial U}{\partial x_j}\bigr)^\phi_{j\ge1}
\end{array}
\right]_{z=1}
=N
\end{equation}
We describe $N$ precisely.
First we note:
\begin{align}
&(1)\
{\displaystyle
\sum_{j=1}^m (\frac{\partial r_i}{\partial x_j})^\phi
=
\begin{cases}
y^{\varepsilon}-1, {\rm if\ } r_i {\rm \ is\ of\ the\ form}:
y^{\varepsilon} x_i
y^{-\varepsilon} x_{i+1}^{-1}, \varepsilon=\pm1
\\
0, {\rm otherwise}.
\end{cases}
}\nonumber\\
&(2)\
{\displaystyle
\sum_{j=1}^m (\frac{\partial U}{\partial x_j})^\phi=0
}.
\end{align}
Therefore, if we add all columns of $N$ to the first column
to get $N_1$, then the first column of $N_1$ is
divisible by $1-y$. Further,
\begin{equation}
{\displaystyle
\sum_{j=1}^m (\frac{\partial r_i}{\partial x_j}
)^\phi=\varepsilon y^{\frac{\varepsilon -1}{2}} (1-y)}.
\end{equation}
Since $\varepsilon y^{\frac{\varepsilon -1}{2}} =\delta_i$,
we have:
\begin{equation}
\dfrac{N_1}{1-y}=(-1)^m \det
\left[
\begin{array}{c|c}
\bigl(\dfrac{\partial r_i}{\partial x_j}\bigr)^\phi_{j\ge2}&
\delta_{i}\\
\noalign{\vskip3pt\hrule \vskip3pt}
\bigl(\dfrac{\partial U}{\partial x_j}\bigr)^\phi_{j\ge2}&
0
\end{array}
\right]
=\dfrac{\Delta_L (x, y)}{1-y}.
\end{equation}
Evaluations of both polynomials at $y=1$ give (4.14).
The proof of Proposition \ref{prop:6.2} is now completed.
\fbox{}
\section{Alexander polynomials (II)}
We have established some relationships between
the Alexander polynomial of
$K(2\alpha, \beta|r)$
and that of the 2-bridge link $B(2\alpha, \beta)$.
However, these relations are not sufficient for our purpose.
Therefore, in this section, we prove some subtle properties of
$\Delta_{B(2\alpha, \beta)}(x, y)$.
These properties are indispensable to study
the Alexander polynomial of our knot
$K(2\alpha, \beta|r)$. See Theorem \ref{thm:7.4}.
Let $S=\{P_1,d_1, Q_1, e_1, P_2, \cdots, P_m, d_m, Q_m\}$
be the canonical decomposition of the continued fraction of
$\beta/2\alpha$.
Let $\rho_i, \rho'_j, \rho, \lambda_i, \lambda'_j$ and
$\lambda$ be integers as defined in Definition \ref{dfn:3.8}.
Now, by Proposition \ref{prop:3.9}, we can write
\begin{equation}
\Delta_{B (2\alpha, \beta)}(x, y)=
f_{\lambda-1}(x)
y^{\lambda-1}+f_{\lambda-2}(x) y^{\lambda-2}+\cdots+f_0(x),
\end{equation}
where $f_i(x), 0 \le i \le \lambda-1$,
are integer polynomials in $x$ of degree at most $\lambda-1$.
Our purpose is to determine these polynomials $f_i(x)$,
in particular, $f_{\lambda-1}(x)$.
{\bf 5.1. Skein relation}
Let $[[u_1, v_1, u_2, v_2, \cdots, $
$u_s, v_s, u_{s+1}]]$
be the continued fraction of $\beta/2\alpha$.
Then it is shown in \cite[Theorem 2 (4.2)]{K} that
\begin{align}
&\ \ \ \Delta_{B (2\alpha, \beta)}(x,y)\nonumber\\
&=v_s (x-1)(y-1) F_{u_{s+1}} (x, y)
\Delta[[u_1, v_1, \cdots, u_s]]
-\Delta[[u_1, v_1,
\cdots, v_{s-1}, u_s +u_{s+1}]],
\end{align}
where
$\Delta [[c_1, \cdots, c_k]]$ is the Alexander polynomial of
the 2-bridge link associated to the continued fraction
$[[c_1, c_2, \cdots, c_k]]$, and $F_{n}(x,y)$ is defined below:
\begin{align}
&(1)\ F_0 (x,y)=0.\nonumber\\
&(2)\ {\rm For}\ n>0,\nonumber\\
&\ \ \ (a)\ F_n (x,y)=1+xy+\cdots+(xy)^{n-1}=
\frac{(xy)^n-1}{xy-1},\nonumber\\
&\ \ \ (b)\ F_{-n} (x,y)=-\{(xy)^{-1}+\cdots+(xy)^{-n}\}
=\frac{-1}{(xy)^n} F_n(x,y).
\end{align}
Note that $F_c(x,y)=\Delta[[c]]$.
Formula (5.2) is obtained by applying crossing changes
and smoothing
at $v_s$, i.e.,
at the crossings
corresponding to $v_s$.
We should note that (5.2) is slightly different from the
original formula given in \cite[(4.2)]{K},
since we use a different notation.
By applying (5.2) on all $v_j, j=1, 2, \cdots, s$,
we obtain $\Delta_{B(2\alpha, \beta)}(x, y)$
in terms of various $
\Delta [[c]]$, where $c$ is written as the sum of $u_i$.
The following example illustrates a calculation.
\begin{ex}\label{ex:7.1}
Write $\frac{\beta}{2\alpha}=[[u_1, v_1, u_2, v_2, u_3]]$.
Then,
\begin{align*}
&\ \ \ \ \Delta_{B(2\alpha, \beta)}\\
&=v_2 (x-1)(y-1) F_{u_3}(x, y)
\Delta [[u_1, v_1, u_2]]-\Delta[[u_1,v_1, u_2+u_3]]\\
&=v_2 (x-1)(y-1) F_{u_3}(x, y)\{v_1(x-1)(y-1)
F_{u_2}(x, y) F_{u_1}(x, y)-\Delta[[u_1+u_2]]\}\\
&\ \ \ \ -\{v_1(x-1)(y-1) F_{u_2+u_3}(x, y)
\Delta[[u_1]]
-\Delta[[u_1+u_2+u_3]]\}\\
&=v_1 v_2 (x-1)^2 (y-1)^2 F_{u_1} F_{u_2} F_{u_3}
-(x-1)(y-1)\{v_1 F_{u_1} F_{u_2+u_3} +
v_2 F_{u_1+u_2} F_{u_3}\}\\
&\ \ \ \ +F_{u_1+u_2+u_3}
\end{align*}
\end{ex}
As is illustrated in Example \ref{ex:7.1}, we see that
$\Delta_{B(2\alpha, \beta)}(x, y)$ is of the following form:
\begin{equation}
\Delta_{B(2\alpha,\beta)}(x,y)=
\sum_{\genfrac{}{}{0pt}{}{0\le k \le s}{
1\le i_1<i_2<\cdots<i_k\le s}}
(-1)^k v_{i_1}v_{i_2}\cdots v_{i_k}(x-1)^k (y-1)^k
F_{\mu_1}F_{\mu_2}\cdots F_{\mu_k},
\end{equation}
where the summation is taken over all indices
$i_j$ such that
$1\le i_1<i_2<\cdots<i_k\le s$, and
$\mu_j$ is of the form:
$\mu_j=u_{j_1}+u_{j_1+1} +\cdots+u_{j_1+p}$ and
$\mu_1 + \mu_2 + \cdots +\mu_{k+1}
= u_1 +u_2 + \cdots+ u_{s+1}$.
For convenience, we denote by $\Lambda_{p,r}$
the set of all $p$ indices $i_1, \cdots, i_p$
such that $1 \le i_1< \cdots <i_p\le r$.
Since $F_c(x,y)$ is a rational function, we replace
$F_c(x,y)$ by a polynomial $\widetilde{F}_c(x,y)$ below.
\begin{align}
&{\it For}\ n>0,\nonumber\\
&(1)\ \wti{F}_n(x,y) = (xy-1) F_n(x,y) =(xy)^n -1.\nonumber\\
&(2)\ \wti{F}_{-n}(x,y) = (xy)^n(xy-1)
F_{-n}(x,y)= (-1)\wti{F}_n(x,y)=(-1)\{(xy)^n -1\}.
\end{align}
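For instance, under this convention,
$$\wti{F}_3(x,y)=(xy)^3-1 \quad {\rm and} \quad \wti{F}_{-3}(x,y)=1-(xy)^3,$$
so each $\wti{F}_c$ is an honest polynomial, whereas $F_{-3}(x,y)=-\{(xy)^{-1}+(xy)^{-2}+(xy)^{-3}\}$ is not.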
Using these polynomials, we obtain an integer polynomial
$\widetilde{\Delta}_{B(2 \alpha,\beta)}(x,y)$ from
$\Delta_{B(2 \alpha,\beta)}(x,y)$:
\begin{equation}
\widetilde{\Delta}_{B(2 \alpha,\beta)}(x,y) =
(xy)^{\sum_{j=1}^m \lambda'_j}
(xy-1)^{\sum_i(s_i+1)+\sum_j (q_j+1)}
\Delta_{B(2 \alpha,\beta)}(x,y).
\end{equation}
Therefore we have:
\begin{align}
(1)\ &\max \mbox{$y$-}\deg\widetilde{\Delta}_{B(2 \alpha,\beta)}(x,y)\nonumber\\
&=\max \mbox{$y$-}\deg \Delta_{B(2 \alpha,\beta)}(x,y)
+{\displaystyle\sum_{i=1}^{m}s_i +\sum_{j=1}^{m}q_j +2m
}\nonumber\\
&= \lambda +\sum_{i=1}^{m}s_i +\sum_{j=1}^{m}q_j + 2m-1,\nonumber\\
(2)\ & \min \mbox{$y$-}\deg \widetilde{\Delta}_{B(2 \alpha,\beta)}(x,y) = 0.
\end{align}
Now we can write
\begin{align}
\widetilde{\Delta}_{B(2 \alpha,\beta)}(x,y)
&= \tilde{f}_\nu (x) y^\nu + \cdots +\tilde{f}_0(x),\ {\rm and}\nonumber\\
\tilde{f}_\nu (x)&=f_{\lambda -1}(x) x^{\sum_1^m s_i +
\sum_1^m q_j +2m},
\end{align}
where $\nu= \lambda +\sum_{i=1}^{m}s_i +
\sum_{j=1}^{m}q_j + 2m - 1$.
First we show
\begin{equation}
\deg \tilde{f}_\nu(x) = \lambda +
\sum_1^m s_i + \sum_1^m q_j -
\rho
+2m - 1.
\end{equation}
{\bf 5.2. Proof of (5.9) (I)}
We consider two special cases.
Case 1. All $u_i >0$.
Consider $\beta/2\alpha = [[u_1, v_1,u_2,v_2,
\cdots, u_s,v_s,u_{s+1}]]$.\\
Then
$\displaystyle
\Delta_{B(2 \alpha,\beta)}(x,y)
=\sum_{\Lambda_{k,s},0\le k\le s}(-1)^k v_{i_1} v_{i_2}
\cdots v_{i_k} (x-1)^k(y-1)^k
F_{\mu_1} F_{\mu_2} \cdots F_{\mu_{k+1}}$,\\
where $\mu_i>0, 1\le i\le k+1,{\rm and}\
\lambda =\sum_{i=1}^{k+1} \mu_i, \lambda' =0$.
Therefore,
\begin{align}
&\ \ \ \ \widetilde{\Delta}_{B(2 \alpha,\beta)}(x,y)\nonumber\\
&=
(xy-1)^{s+1} \Delta_{B(2 \alpha,\beta)}(x,y)\nonumber\\
&=\sum_{\genfrac{}{}{0pt}{}
{\Lambda_{k,s}}{ 0\le k\le s}}
(-1)^k v_{i_1} v_{i_2}
\cdots v_{i_k} (x-1)^k
(y-1)^k \wti{F}_{\mu_1} \wti{F}_{\mu_2} \cdots
\wti{F}_{\mu_{k+1}}(xy-1)^{s-k}.
\end{align}
Case 2. All $-u_i <0$.
Consider $\beta/2\alpha= [[-u_1, -v_1,-u_2,-v_2, \cdots,
-u_q,-v_q,-u_{q+1}]]$. Then
$\Delta_{B(2 \alpha,\beta)}(x,y) = {\displaystyle\sum_{\Lambda_{k,q},\,0\le k\le q}}(-1)^k(-v_{i_1})(-v_{i_2})
\cdots (-v_{i_k})
(x-1)^k(y-1)^k F_{-\mu_1} F_{-\mu_2} \cdots
F_{-\mu_{k+1}}$,\\
and hence $\lambda' = u_1 + u_2 +\cdots+u_{q+1}=
\mu_1 +\mu_2+\cdots+\mu_{k+1}$.
Therefore:
\begin{align}
&\ \ \ \
\widetilde{\Delta}_{B(2\alpha,\beta)}(x,y) \nonumber\\
&=(xy)^{\lambda'}
(xy-1)^{q+1}
\Delta_{B(2 \alpha,\beta)}(x,y)\nonumber\\
&= \sum_{\Lambda_{k,q}, 0\le k \le q}
v_{i_1}
v_{i_2} \cdots
v_{i_k} (-1)^{k+1}(x-1)^k(y-1)^k
\wti{F}_{\mu_1} \wti{F}_{\mu_2}
\cdots \wti{F}_{\mu_{k+1}}(xy-1)^{q-k}\nonumber\\
&=-\sum_{\Lambda_{k,q},0\le k \le q}
(-1)^k v_{i_1} v_{i_2}
\cdots v_{i_k} (x-1)^k(y-1)^k
\wti{F}_{\mu_1} \wti{F}_{\mu_2} \cdots \wti{F}_{\mu_{k+1}}
(xy-1)^{q-k}.
\end{align}
Note that (5.10) and (5.11) are of the same form.
Now consider the general case.
Let $\{P_1, d_1, Q_1, e_1, P_2,\cdots, P_m, d_m, Q_m\}$
be the canonical decomposition of the continued fraction of
$\beta/2\alpha$.\\
Denote
$P_i=[[a_{i,1},b_{i,1},a_{i,2},b_{i,2}, \cdots, a_{i,s_i},
b_{i,s_i},a_{i,s_i +1}]]$, $1\le i \le m$, and\\
$Q_j= [[-a'_{j,1},-b'_{j,1},-a'_{j,2},-b'_{j,2},
\cdots,-a'_{j,q_j},-b'_{j,q_j},-a'_{j,q_j+1}]]$, $1\le j\le m$,
where $a_{i,p}>0$ and $a'_{j,p} >0$, but
$b_{i,q}, b'_{j,q}$ are arbitrary.
Then by Proposition \ref{prop:3.9},
max $y$-deg $\Delta_{B(2 \alpha,\beta)}(x,y) =
{\displaystyle
\sum_{i=1}^{m}\sum_{k=1}^{s_{i}+1}
a_{i,k} + \sum_{j=1}^{m}\sum_{k=1}^{q_{j}+1} a'_{j,k} - 1
}
= \lambda - 1$.
First we try to find the term with the max $y$-degree
in $\widetilde{\Delta}_{B(2 \alpha,\beta)}(x,y)$.
Denote by $\Delta_{P_i}(x,y)$ (resp. $\Delta_{Q_j}(x,y))$
the Alexander polynomial of the 2-bridge link associated
to $P_i$ (resp. $Q_j$).
Then, as we did above, we obtain\\
$\Delta_{P_i} (x,y) =
{\displaystyle
\sum_{\Lambda_{k,s_i},0\le k \le s_i}
}(-1)^k b_{i,p_1} b_{i,p_2}
\cdots b_{i,p_k} (x-1)^k(y-1)^k F_{\mu_1}
F_{\mu_2}
\cdots F_{\mu_{k+1}}$,\\
where $\mu_1+\mu_2 + \cdots +\mu_{k+1}= \lambda_i$,
and hence,
\begin{align}
&\ \ \ \ \widetilde{\Delta}_{P_i} (x,y) \nonumber\\
&=
(xy-1)^{s_i+1}
\Delta_{P_i} (x,y)\nonumber\\
&=\sum_{\Lambda_{k,s_i}}
(-1)^k b_{i,p_1} b_{i,p_2} \cdots b_{i,p_k}
(x-1)^k(y-1)^k \wti{F}_{\mu_1} \wti{F}_{\mu_2}
\cdots \wti{F}_{\mu_{k+1}}(xy-1)^{s_i-k},
\end{align}
where $\wti{F}_\mu=(xy)^{\mu}-1, \mu>~0$.
On the other hand,
$\Delta_{Q_j} (x,y) =
{\displaystyle
\sum_{\Lambda_{k,q_j}}
}
(-1)^k b'_{j,r_1} b'_{j,r_2}
\cdots b'_{j,r_k} (x-1)^k(y-1)^k F_{-\mu'_1} F_{-\mu'_2}
\cdots F_{-\mu'_{k+1}}$, and hence we have:
\begin{align}
&\ \ \ \ \widetilde{\Delta}_{Q_j} (x,y)\nonumber\\
& = (xy)^{\lambda'_j}
(xy-1)^{q_j+1} \Delta_{Q_j}(x,y) \nonumber\\
&=-\sum_{\Lambda_{k,q_j}}
(-1)^k b'_{j,r_1} b'_{j,r_2} \cdots
b'_{j,r_k} (x-1)^k(y-1)^k \wti{F}_{\mu'_1} \wti{F}_{\mu'_2}
\cdots \wti{F}_{\mu'_{k+1}}(xy-1)^{q_j-k}.
\end{align}
{\bf 5.3. Proof of (5.9) (II)}
To evaluate $\Delta_B(x,y)$, we must split and smooth
at various crossings.
We classify these operations into two types.
Type 1. Split all crossings at every $d_i$ and $e_j$.
Type 2. Smooth some crossings at some $d_i$ and/or
$e_j$.
From Type 1 operation, we obtain the following term in
$\widetilde{\Delta}_B(x,y)$ :
\begin{align}
A = &(-1)^m
d_1\cdots d_m (-1)^{m-1}
e_1 \cdots e_{m-1}(x-1)^{2m-1} (y-1)^{2m-1}\nonumber\\
&\times
\prod_{i=1}^{m}\widetilde{\Delta}_{P_i} (x,y)
\prod_{j=1}^{m}\widetilde{\Delta}_{Q_j} (x,y).
\end{align}
Terms in $A$ with the max $y$-degree are obtained by\\
(1) taking $y^{2m-1}$ from $(y-1)^{2m-1}$, \\
(2) taking, in each $P_i$, $y^k$ from $(y-1)^k$,
$(xy)^{\mu_i}$ from each $\wti{F}_{\mu_i}$ and
$(xy)^{s_i -k}$ from
$(xy-1)^{s_{i}-k}$, and\\
(3) taking, in each $Q_j, y^k$ from $(y-1)^k,
(xy)^{\mu'_i}$
from $\wti{F}_{\mu'_i}$, and
$(xy)^{q_j -k}$ from
$(xy-1)^{q_j -k}$.
Therefore, the max $y$-degree in $A$ is
${\displaystyle
2m-1+\sum_{i=1}^{m}(k+\mu_1+\cdots+\mu_{k+1}+s_i -k)+
\sum_{j=1}^{m}(k+\mu'_1 + \cdots + \mu'_{k+1}+q_j- k)
}$
${\displaystyle
= 2m-1+\sum_{i=1}^{m}(s_i +\lambda_i ) +
\sum_{j=1}^{m}(q_j + \lambda'_j)
= 2m-1 +\sum_{i=1}^{m}s_i + \sum_{j=1}^{m}q_j + \lambda
}$.\\
On the other hand, the min $y$-deg in $A$ is obviously $0$. \\
Since $
{\displaystyle
\widetilde{\Delta}_B(x,y) = (xy)^{\sum_{i=1}^{m}\lambda'_i}
(xy-1)^{\sum_{i=1}^{m}(s_i+1)+\sum_{j=1}^{m}(q_j+1)}
\Delta_B(x,y)
}$, \\
the $y$-degree of
$\Delta_B(x,y)$ is at least
$2m-1 +\sum s_i+ \sum q_j + \lambda -
(\sum s_i +m +\sum q_j + m) = \lambda - 1$,
which coincides with Proposition \ref{prop:3.9}.
Therefore,
these terms are in fact the terms with maximal $y$-degree.
{\bf 5.4. Proof of (5.9) (III)}
Next we show that a Type 2
operation does not yield a term with the
max $y$-degree in $\widetilde{\Delta}_B(x,y)$.
To see this, we can assume without loss of generality that
we smooth only crossings at $d_1$, but not others.
Namely, we split at other crossings $d_i (i\neq1)$ and
$e_j, 1\le j\le m-1$.
Case 1. Suppose $a_{1,s_1+1} > a'_{1,1}$.
By smoothing at $d_1$, we have a new canonical
decomposition of the new continued fraction:
$\hat{S} = \{\hat{P}_1, \hat{c}_1, \hat{Q}_1, e_1, P_2,
d_2, Q_2, e_2, \cdots,
P_m, d_m, Q_m\}$ ,
where
$\hat{P}_1=[[a_{1,1},b_{1,1},a_{1,2},b_{1,2}, \cdots,
a_{1,s_1},
b_{1,s_1},a_{1,s_1 +1} - a'_{1,1}]], \
\hat{c}_1 = - b'_{1,1}$ and
$\hat{Q}_1= [[-a'_{1,2},-b'_{1,2},
\cdots,-a'_{1,q_1},-b'_{1,q_1},-a'_{1,q_1+1}]]$.
Consider
$\widetilde{\Delta}_B(x,y) = (xy)^{\lambda'}
(xy-1)^{\sum (s_i+1)+\sum(q_j+1)}
\Delta_B(x,y)$.
Using the previous argument,
we can determine the terms of max
$y$-degree of $\Delta(\hat{S})$ in $\widetilde{\Delta}_B(x,y)$.
Since the terms
of max $y$-degree are obtained as those in each
$P_i$ and $Q_j$, we will determine these terms for
$\hat{P}_1$ and $\hat{Q}_1$.
For $\hat{P}_1$, the max $y$-degree is
\begin{equation}
k + \hat{\mu}_1 + \cdots + \hat{\mu}_{k+1} +
s_1 + 1 - (k + 1).
\end{equation}
Since $\hat{\mu}_1 + \cdots + \hat{\mu}_{k+1} =
a_{1,1} + a_{1,2} +
\cdots + a_{1,s_1 +1} - a'_{1,1} = \lambda_1 - a'_{1,1}$,
it follows from (5.15) that the max $y$-degree is
$\lambda_1 - a'_{1,1} + s_1$.
For $\hat{Q}_1$, the maximal terms are contained in\\
${\displaystyle
\sum
(-1)^k b'_{1,r_1}\cdots b'_{1,r_k} (x-1)^k(y-1)^k
\wti{F}_{\mu'_1} \cdots
\wti{F}_{\mu'_{k+1}} (xy-1)^{q_1-k-1}
(xy)^{a'_{1,1}} (xy-1),
}$
where the sum is taken over
$2\le r_1<\cdots<r_k\leq q_1,
k=0,1, \ldots, q_{1}-1$.
Since the original multiplier
$(xy)^{\lambda'}$
cannot be completely cancelled in this case, the factor $(xy)^{a'_{1,1}}$
remains.
Therefore, max $y$-degree in $\hat{Q}_1$ is
$k + q_1 - k - 1 + \hat{\mu}'_1 + \cdots +
\hat{\mu}'_{k+1} + a'_{1,1} + 1
= q_1 + \lambda'_1 - a'_{1,1} + a'_{1,1}
= q_1 + \lambda'_1$,
since $\hat{\mu'}_1 + \cdots + \hat{\mu'}_{k+1}
= \lambda'_1 - a'_{1,1}$, and hence, the
max $y$-deg of $\widetilde{\Delta}_B(x,y)$ is
$2m - 1 + (\lambda_1 - a'_{1,1})
+s_1 + \lambda'_1 + q_1 +\sum_{i=2}^{m}(s_i +\lambda_i )
+\sum_{j=2}^m (q_j + \lambda'_j)
= 2m - 1 +\sum_{i=1}^{m}s_i +\sum_{j=1}^mq_j + \lambda - a'_{1,1}$.
Since $a'_{1,1} > 0$, we cannot get a term of
the max $y$-degree from $\Delta(\hat{S})$.
Case 2. $a_{1,s_1 +1} < a'_{1,1}$ or $a_{1,s_1 +1} = a'_{1,1}$.
A similar argument works, and hence we omit the details.
Therefore, to evaluate $\wti{f}_\nu (x)$, it suffices to consider
$\widetilde{\Delta}(P_i)$, since the treatment for $\widetilde{\Delta}(Q_j)$
is similar to $\widetilde{\Delta}(P_i)$.
In other words, we will show the following:
\begin{prop}\label{prop:7.1}
Let $S=[[u_1, v_1,u_2,v_2, \cdots, u_s,v_s,u_{s+1}]]$, where
$u_i > 0, 1\le i\le s +~1$.
Write
$\widetilde{\Delta}_{B}(x,y) = f_{s+\lambda}(x) y^{s+\lambda} +
\cdots + f_0(x)$,
where $\lambda = \sum_{i=1}^{s+1}u_i$. Then $f_{s+\lambda}(x)$ can be written
as follows, with some integer $\gamma_{s+\lambda-\rho}
\neq 0$:
\begin{equation}
f_{s+\lambda}(x) =
\gamma_{s+\lambda-\rho}x^{s+\lambda-\rho}
+ \cdots + \gamma_\zeta x^\zeta, \
{\it for\ some}\
\zeta \ge 0, s+\lambda-\rho>\zeta.
\end{equation}
\end{prop}
{\bf 5.5. Auxiliary Lemmas}
Before we proceed to the proof of
Proposition \ref{prop:7.1}, we show the following
two lemmas.
\begin{lem}\label{lem:7.2}
Assume $n\ge k\ge 0$ and $n\ge m \ge 0$. Then
\begin{align}
&\ \ \ \bi{n}{k}-\bi{n-1}{k-1}\bi{m}{m-1}+\bi{n-2}{k-2}\bi{m}{m-2}- \nonumber\\
&\ \ \ \ \ \
\cdots + (-1)^\ell \bi{n-\ell}{k-\ell}\bi{m}{m-\ell}+
\cdots+(-1)^m
\bi{n-m}{k-m}\bi{m}{0}\nonumber\\
&=\bi{n-m}{k}
\end{align}
\end{lem}
Note. In (5.17) we assume that $\bi{n}{k}=0$
if $n< 0$ or $k< 0$, and $\bi{n}{0}=1$ for $n\ge 0$.
{\it Proof.}
We prove (5.17) by induction on $n,k$ and $m$.
Direct calculations verify the base case.
Suppose (5.17) holds up to $n, k$ and $m-1$.
Then we see the following:
The LHS of (5.17) is
\begin{align*}
&\bi{n}{k}-\Bigl\{\bi{n-1}{k-1}\bi{m-1}{1}
+\bi{n-1}{k-1}\bi{m-1}{0}\Bigr\}\\
&\ \ \
+\Bigl\{\bi{n-2}{k-2}\bi{m-1}{2}
+\bi{n-2}{k-2}\bi{m-1}{1}
\Bigr\}-\cdots\\
&\ \ \
+(-1)^{m-1}
\Bigl\{\bi{n-m+1}{k-m+1}\bi{m-1}{m-1}+\bi{n-m+1}{k-m+1}
\bi{m-1}{m-2}\Bigr\}\\
&\ \ \
+(-1)^m\Bigl\{
\bi{n-m}{k-m}\bi{m-1}{m-1}\Bigr\}\\
&=\bi{n}{k}-\bi{n-1}{k-1}\bi{m-1}{1}+\bi{n-2}{k-2}\bi{m-1}{2}
+\cdots\\
&\ \ \ + (-1)^{m-1}\bi{n-m+1}{k-m+1}\bi{m-1}{m-1}\\
&\ \ \ \
-\Bigl\{
\bi{n-1}{k-1}\bi{m-1}{0}-\bi{n-2}{k-2}\bi{m-1}{1}+\cdots \\
&\ \ \ +
(-1)^{m-1}\bi{n-m}{k-m}\bi{m-1}{m-1}\Bigr\}\\
&=
\bi{n-(m-1)}{k}-\bi{n-m}{k-1}=\bi{n-m}{k}+\bi{n-m}{k-1}-
\bi{n-m}{k-1}=\bi{n-m}{k},
\end{align*}
by induction hypothesis.
\fbox{}
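As an aside, the identity (5.17) is easily checked numerically for small parameters; the sketch below uses the standard convention $\bi{a}{b}=0$ for $b<0$ or $b>a$.
\begin{verbatim}
# Numerical check of (5.17) for small n, k, m.
from math import comb

def binom(a, b):
    return comb(a, b) if 0 <= b <= a else 0

for n in range(0, 9):
    for k in range(0, n + 1):
        for m in range(0, n + 1):
            lhs = sum((-1)**l * binom(n - l, k - l) * binom(m, m - l)
                      for l in range(m + 1))
            assert lhs == binom(n - m, k), (n, k, m)
\end{verbatim}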
\begin{lem}\label{lem:7.3}
Let $n\ge k \ge0$.
Then the following equality holds among integer polynomials
in $n$ variables $x_1,x_2, \cdots,x_n$:
\begin{align*}
&\bi{n}{k}x_1\cdots x_n -\bi{n-1}{k}\sum_{\Lambda_{n-1}}x_{i_1}
\cdots x_{i_{n-1}}\\
&\ \ \ +\bi{n-2}{k}\sum_{\Lambda_{n-2}} x_{i_1}\cdots x_{i_{n-2}}+
\cdots
+(-1)^{n-k} \bi{k}{k} \sum_{\Lambda_{k}} x_{i_1}\cdots
x_{i_k}\\
&
=\bi{n}{k}(x_{1}-1)\cdots(x_{n}-1)+\bi{n-1}{k-1}
\sum_{\Lambda_{n-1}}(x_{i_1}-1)\cdots(x_{i_{n-1}}-1)\\
&\ \ \
+\bi{n-2}{k-2}\sum_{\Lambda_{n-2}}(x_{i_1}-1)\cdots
(x_{i_{n-2}}-1)+\cdots\\
&\ \ \ +\bi{n-k}{0}\sum_{\Lambda_{n-k}}
(x_{i_1}-1)\cdots(x_{i_{n-k}}-1),
\end{align*}
where the summation is taken over the set $\Lambda_j$
consisting of all indices $i_1,\cdots, i_j$ such that
$1 \le i_1<\cdots< i_j\le n$.
If $n-k=0$, then the last term on the right side is interpreted
as~$1$.
\end{lem}
{\it Proof.} Since both sides are
symmetric polynomials in $x_1,x_2,\cdots,x_n$ in which each variable appears with exponent at most $1$,
it is enough to compare the coefficients of
$x_1 x_2 \cdots x_m$, $0\le m\le n$ (the case $m=0$ being the constant term).
For example, the constant term of the LHS is $0$ if $k>0$,
while that of the RHS is
$(-1)^n \bi{n}{k} +(-1)^{n-1}\bi{n-1}{k-1}\bi{n}{n-1}
+\cdots+(-1)^{n-k}\bi{n-k}{0}\bi{n}{n-k}$\\
$=(-1)^n\sum_{i=0}^k (-1)^i \bi{n-i}{k-i}\bi{n}{n-i}$\\
$=(-1)^n\sum_{i=0}^k (-1)^i\bi{n}{k}\bi{k}{i}$\\
$=(-1)^n\bi{n}{k}\sum_{i=0}^k (-1)^i\bi{k}{i}$\\
$=0$.
First, $x_1 x_2 \cdots x_n$ appears $\bi{n}{k}$
times in both sides. Thus the
formula is true for $x_1 x_2 \cdots x_n$.
Next, consider $x_1 x_2 \cdots x_{n-1}$.
This appears $-\bi{n-1}{k}$ times in the LHS, while
it appears, in the RHS,
$-\bi{n}{k}+\bi{n-1}{k-1}=-\bi{n-1}{k}$ times.
Thus the formula is true.
In general, $x_1 x_2 \cdots x_{n-r}$, $r\ge1$,
appears $(-1)^r \bi{n-r}{k}$ times in the LHS,
while in the RHS, it appears as many times as
\begin{align*}
&(-1)^r \Bigl\{
\bi{n}{k}-\bi{n-1}{k-1}\bi{r}{r-1}+\bi{n-2}{k-2}
\bi{r}{r-2}- \cdots + (-1)^r \bi{n-r}{k-r}\Bigr\}\\
&=(-1)^r \bi{n-r}{k}\ {\rm (by}\ {\rm Lemma }\ \ref{lem:7.2}).
\end{align*}
\fbox{}
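Similarly, the identity of Lemma \ref{lem:7.3} can be verified symbolically for small $n$; the following sketch is only a numerical aid.
\begin{verbatim}
# Symbolic check of Lemma 7.3 for small n and k; e(vs, j) is the
# elementary symmetric polynomial of degree j (e_0 = 1).
import sympy as sp
from itertools import combinations

def e(vs, j):
    return sum(sp.Mul(*c) for c in combinations(vs, j)) if j else sp.Integer(1)

for n in range(1, 6):
    xs = sp.symbols(f'x1:{n + 1}')
    for k in range(0, n + 1):
        lhs = sum((-1)**(n - j)*sp.binomial(j, k)*e(xs, j)
                  for j in range(k, n + 1))
        rhs = sum(sp.binomial(n - j, k - j)*e([v - 1 for v in xs], n - j)
                  for j in range(0, k + 1))
        assert sp.expand(lhs - rhs) == 0, (n, k)
\end{verbatim}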
{\bf 5.6. Proof of Proposition \ref{prop:7.1}.}
By (5.10), we can write
\begin{align}
\widetilde{\Delta}_B(x,y) &= (xy-1)^{s+1} \Delta_B(x,y)\nonumber\\
&=\sum_{p=0}^{s}\sum_{\Lambda_{p,s}}
(-1)^p v_{i_1} v_{i_2}
\cdots v_{i_p} (x-1)^p(y-1)^p \wti{F}_{\mu_1} \wti{F}_{\mu_2}
\cdots \wti{F}_{\mu_{p+1}}
(xy-1)^{s-p},
\end{align}
where $\mu_1 + \cdots + \mu_{p+1} =
\lambda = u_1 + u_2+ \cdots + u_{s+1}$.
In (5.18), terms with $y^{s+\lambda}$ are obtained as follows.
Let $B_k$ be the coefficient of the term
$x^{s+\lambda-k}y^{s+\lambda}$.\\
(1) For $p = 0$,
since we smooth all crossings at $v_i$,
we have only one term
$\wti{F}_{\mu_1}(xy-1)^s$.
Since $\mu_1 = \lambda$, we have one term
$x^{\lambda+s}y^{\lambda+s}$.\\
(2) For $p=1$,
we have the following polynomial
\begin{equation*}
(-1)\sum_{i=1}^s v_i(x-1)(y-1) \widetilde{F}_{\mu_1}
\widetilde{F}_{\mu_2} (xy-1)^{s-1}.
\end{equation*}
Thus the contribution to $B_0$ by these polynomials is
$(-1)\sum_{i=1}^s v_i$.\\
(3) For general $p$,
the contribution to $B_0$ by the polynomials in
$(-1)^p \sum v_{i_1} v_{i_2}
\cdots v_{i_p} (x-1)^p(y-1)^p
\wti{F}_{\mu_1} \wti{F}_{\mu_2} \cdots
\wti{F}_{\mu_{p+1}}(xy-1)^{s-p}$\
is \
${\displaystyle
(-1)^p \sum_{1\le i_1<\cdots<i_p\le s}
v_{i_1} v_{i_2} \cdots v_{i_p}
}$.
Therefore, by letting $n=s$ and $k=0$ in Lemma \ref{lem:7.3},
we have:
\begin{align}
B_0 &= 1 - \sum_{i=1}^{s}v_i+\sum_{1\le i<j\le s}v_i v_j+
\cdots\nonumber\\
& \ \ \ +
(-1)^p\sum_{\Lambda_{p,s}}v_{i_1} v_{i_2}
\cdots v_{i_p} + \cdots + (-1)^s v_1 v_2
\cdots v_s\nonumber\\
&=(-1)^s( v_1 - 1)( v_2 - 1) \cdots (v_s - 1).
\end{align}
If $\rho = 0$, i.e., $v_j\neq 1$
for any $j$, then $B_0 \neq 0$,
i.e. $x^{\lambda+s}y^{\lambda+s}$ does exist.
However, if $\rho >0$, then $B_0=0$, and hence
$x^{\lambda+s} y^{\lambda+s}$ does not exist.
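The computation of $B_0$ can also be confirmed directly from (5.18) for small continued fractions.
In the sketch below the test fractions are arbitrary and the symbols $x,y$ are those of the earlier sketches.
\begin{verbatim}
# Build the subset expansion (5.18) for [[u_1, v_1, ..., u_{s+1}]] and
# compare the coefficient B_0 of x^(lambda+s) y^(lambda+s) with
# (-1)^s (v_1 - 1)...(v_s - 1).
import sympy as sp
from itertools import combinations

def wtF(n):                                   # wtF_n = (xy)^n - 1, n > 0
    return (x*y)**n - 1

def tilde_Delta(us, vs):
    s = len(vs)
    total = sp.Integer(0)
    for p in range(s + 1):
        for idx in combinations(range(1, s + 1), p):
            cuts = [0, *idx, s + 1]           # split the u's at the chosen v's
            mus = [sum(us[cuts[a]:cuts[a + 1]]) for a in range(len(cuts) - 1)]
            term = sp.Integer((-1)**p)*sp.Mul(*[vs[i - 1] for i in idx])
            term *= (x - 1)**p*(y - 1)**p*(x*y - 1)**(s - p)
            term *= sp.Mul(*[wtF(m) for m in mus])
            total += term
    return sp.expand(total)

for us, vs in ([(2, 3), (1,)], [(1, 2, 3), (2, 3)], [(2, 1, 2), (1, 4)]):
    s, lam = len(vs), sum(us)
    B0 = tilde_Delta(list(us), list(vs)).coeff(x, lam + s).coeff(y, lam + s)
    assert B0 == (-1)**s*sp.Mul(*[v - 1 for v in vs]), (us, vs)
\end{verbatim}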
Next we consider $B_1$. The terms
$x^{\lambda+s - 1 }y^{\lambda+s}$ are obtained as follows.\\
(1) If $p =0$, we do not get the term
$x^{\lambda+s - 1 }y^{\lambda+s}$.\\
(2) Suppose $p \ge1$. Then in order to get
$x^{\lambda+s - 1 }y^{\lambda+s}$,
we must take every possible $y$-term of maximal degree.
In other words, we take $(xy)^{\mu_j}$ from each $\wti{F}_{\mu_j}$,
$(xy)^{s - p}$ from $(xy - 1)^{s - p}$, and
$y^p$ from $(y - 1)^p$.
For the $x$-terms we take
$(-1)\bi{p}{p-1} x^{p-1}$ from $(x-1)^p$.\\
Therefore, we have, by Lemma \ref{lem:7.3},
\begin{align}
B_1&=(-1)\sum_{1\le i\le s} v_{i}(-1)\bi{1}{0}+
(-1)^2\sum_{\Lambda_{2,s}} v_{i_1}v_{i_2}(-1)\bi{2}{1}\nonumber\\
&\ \ \ +(-1)^3\sum_{\Lambda_{3,s}}v_{i_1}v_{i_2}v_{i_3}(-1)
\bi{3}{2}+\cdots+
(-1)^{s}v_1\cdots v_{s}(-1)\bi{s}{s-1}\nonumber\\
&=(-1)^{s+1}\Bigl\{s(v_{1}-1)\cdots(v_{s}-1)+
\sum_{1\le i_{1}<\cdots<i_{s-1}\le s}(v_{i_1}-1)
\cdots(v_{i_{s-1}}-1)
\Bigr\}.
\end{align}
If $\rho = 1$, then the first term in the RHS is
$0$, but one term in the second summation survives.
Thus, $x^{\lambda+s - 1 }y^{\lambda+s}$ does exist, and
$B_1 =(-1)^{s+1}(v_1 - 1)(v_2 - 1) \cdots (v_{t-1} - 1)
(v_{t+1} - 1) \cdots (v_s -1)$
for some $t$.\\
However, if $\rho \ge 2$, then $B_1 = 0$.
By the same argument, we can show:
\begin{align}
B_r
&=(-1)^r\sum_{\Lambda_{r,s}} v_{i_1}\cdots v_{i_r}
(-1)^r\bi{r}{0}\nonumber
+(-1)^{r+1}\sum_{\Lambda_{r+1,s}}v_{i_1}\cdots v_{i_{r+1}}
(-1)^r\bi{r+1}{1}\nonumber\\
&\ \ \ +(-1)^{r+2}\sum_{\Lambda_{r+2,s}}v_{i_1}\cdots v_{i_{r+2}}
(-1)^r\bi{r+2}{2}+\cdots
+(-1)^s v_1\cdots v_{s}(-1)^r \bi{s}{s-r}\nonumber\\
&=(-1)^{s+r}\Bigl\{
\bi{s}{r}v_1 \cdots v_{s} - \bi{s-1}{r}
\sum_{\Lambda_{s-1,s}}v_{i_1}\cdots v_{i_{s-1}}\nonumber\\
&\ \ \ \ \ \
+\bi{s-2}{r}\sum_{\Lambda_{s-2,s}}v_{i_1}\cdots v_{i_{s-2}}
-\cdots\nonumber\\
&\ \ \ \ \ \
+(-1)^{s-r-1}\bi{r+1}{r}\sum_{\Lambda_{r+1,s}}v_{i_1}\cdots
v_{i_{r+1}}+(-1)^{s-r}\bi{r}{r}\sum_{\Lambda_{r,s}}v_{i_1}
\cdots v_{i_{r}}\Bigr\}\nonumber\\
&=(-1)^{s+r}\Bigl\{\bi{s}{r}(v_1-1)\cdots (v_{s}-1)
+\bi{s-1}{r-1}\sum_{\Lambda_{s-1,s}} (v_{i_{1}}-1)\cdots
(v_{i_{s-1}}-1)+\cdots\nonumber\\
&\ \ \ \ \ \ +\bi{s-r}{0}\sum_{\Lambda_{s-r,s}}(v_{i_1}-1)\cdots
(v_{i_{s-r}}-1)\Bigr\}
\end{align}
Thus, if $\rho \ge r+1$, then $B_r = 0$. However,
if $\rho =r$, say $v_1=v_2 =\cdots=v_r=~1$, but
$v_j\neq 1, j\ge r+1$, then only the last summation
contains one non-zero term:
$(v_{r+1} -1)\cdots (v_s -1) \neq 0$.
Therefore, if $\rho = r$, then
$B_0=B_1=\cdots=B_{\rho-1}=0$, but there exist
$s - \rho$ integers $v_{i_1}, v_{i_2},
\cdots, v_{i_{s-\rho}}$,
each of which is not $1$, and
\begin{equation}
B_\rho = (-1)^{s+r}(v_{i_1} - 1)( v_{i_2} - 1)
\cdots( v_{i_{s-\rho}} - 1) \neq 0.
\end{equation}
This proves Proposition \ref{prop:7.1}.
\fbox{}
{\bf 5.7. Precise form of $\Delta_B(x,y)$}
Now we arrive at our final theorem of this section.
\begin{thm}\label{thm:7.4}
Let $S = \{P_1, d_1, Q_1, e_1, P_2, d_2, Q_2, e_2,
\cdots, P_m, d_m, Q_m\}$ be the canonical
decomposition of the continued fraction of $\beta/2\alpha$.
Let $\rho$ and $\lambda$ be the numbers defined in
Definition \ref{dfn:3.8}. Write
\begin{equation*}
\Delta_{B(2\alpha,\beta)}(x,y) = f_{\lambda - 1}(x)
y^{\lambda - 1} + \cdots + f_0(x),
\end{equation*}
where $f_{\lambda - 1}(x) \neq 0$ and $f_0(x) \neq 0$,
and $f_i(x), 0 \le i \le \lambda - 1$, are integer polynomials.
Then we have:
\begin{align}
(1)\ &f_i(x^{-1}) x^{\lambda-1} =
f_{\lambda-1-i}(x), 0\le i\le \lambda - 1,\nonumber\\
(2)\ &f_{\lambda - 1}(x) =
\gamma_{\lambda-1,\lambda-1-\rho}
x^{\lambda-1-\rho} + \cdots +\gamma_{\lambda-1,\zeta} x^{\zeta},\nonumber\\
&f_{\lambda - 2}(x) =
\gamma_{\lambda-2,\lambda-1-\rho+1}
x^{\lambda-1-\rho+1} + \cdots,\nonumber\\
& \ldots \nonumber\\
& f_{\lambda - i - 1 }(x) =
\gamma_{\lambda- i-1,\lambda-1-\rho+i} x^{\lambda-1-\rho+i} +
\cdots \nonumber\\
& \ldots \nonumber\\
& f_{\lambda - 1 - \rho}(x) =\gamma_{\lambda-1 -
\rho,\lambda-1} x^{\lambda-1} + \cdots,\ {\it where\ } \gamma_{\lambda-1,\lambda-1-\rho} =
\gamma_{\lambda-1-\rho,\lambda-1} \neq 0, \nonumber\\
& {\it and\ hence,\ } \nonumber\\
&\deg f_{\lambda-1}(x)=\lambda-1-\rho,
\deg f_{\lambda-1-i}(x) \le \lambda - 1 +i- \rho,
1 \le i \le \rho - 1\ {\it and\ }\nonumber\\
&\deg f_{\lambda-1-\rho}(x) = \lambda - 1.\nonumber\\
(3)\ & \ {\it All\ \mbox{\it non-zero}\ leading\ coefficients\ of}\
f_i(x)\ {\it are\ of\ the\ same\ sign}.\nonumber\\
(4)\ &
\prod_{i=1}^m d_i \prod_{j=1}^{m-1} e_j
\ {\it divides}\
\gamma_{\lambda - 1, \lambda - 1 - \rho}\
{\it (and}\ \gamma_{\lambda-1-\rho,\lambda-1})\nonumber\\
(5)\ &
\gamma_{\lambda - 1, \lambda - 1 - \rho}\
{\it (and}\ \gamma_{\lambda-1-\rho,\lambda-1}\ {\it )\
is\ equal\ to}\ \pm 1\ {\it if\ and\ only\ if}\nonumber \\
&
\begin{tabular}{ll}
(i)& all $d_i = \pm 1$, $1 \le i \le m$, and\\
(ii)& all $e_j = \pm 1$, $1\le j \le m-1$, and\\
(iii) &all $b_{i,k}$, $1 \le i \le m, 1 \le k \le s_i$ \\
& and all $b'_{j,k},1\le j\le m, 1 \le k \le q_j$
are either $1$ or $2$.
\end{tabular}
\end{align}
\end{thm}
{\it Proof.}
(1) Since a 2-bridge link $B(2\alpha,\beta)$ is invertible, we have
$\Delta_{ B(2\alpha,\beta)} (x^{-1},y^{-1})
x^{\lambda-1} y^{\lambda-1}= \Delta_ {B(2\alpha,\beta)}(x,y)$.
This implies:\\
$x^{\lambda-1} y^{\lambda-1}
\Bigl\{f_{\lambda-1}
(x^{-1}) y^{-(\lambda-1)} + f_{\lambda-2}
(x^{-1}) y^{-(\lambda-2)} + \cdots + f_0(x^{-1})
\Bigr\}$
$= f_{\lambda-1}(x^{-1}) x^{\lambda-1} +
f_{\lambda-2}(x^{-1}) x^{\lambda-1}y
+ \cdots + f_0(x^{-1}) x^{\lambda-1} y^{\lambda-1}$,
and hence, we have (1).
(2) Proposition \ref{prop:7.1}
shows that $f_{\lambda-1}(x)$ has the required form.
Since $B(2\alpha,\beta)$ is
interchangeable, we see $\Delta_B(x,y) = \Delta_B(y,x)$,
and hence
$\gamma_{\lambda-1,\lambda-1-\rho}
=\gamma_{\lambda-1-\rho,\lambda-1}$.
Next, to show that $\deg f_{\lambda-1-i}\le
\lambda-1+i-\rho, 1 \le i \le \rho-1$,
we need the following easy lemma.
\begin{lem}\label{lem:7.5}
The number of terms of
$\Delta_{B(2\alpha,\beta)}(x,y)$ is exactly $\alpha$.
In other words, if we write
${
\displaystyle
\Delta_{B(2\alpha,\beta)}(x,y) =\sum_{0\le p,q}
c_{p,q} x^p y^q
}$,
then
${\displaystyle
\sum_{0\le p,q}|c_{p,q}| = \alpha
}$.
\end{lem}
{\it Proof.}
The group of $B(2\alpha,\beta)$ has the following
Wirtinger presentation:\\
$\pi_1(S^3 - B(2\alpha,\beta)) =\langle x,y|R\rangle$,
where $R=W x W^{-1}x^{-1}$, and \\
$W=y^{\varepsilon_1}
x^{\varepsilon_2} y^{\varepsilon_3}\cdots
y^{\varepsilon_{2\alpha-1}}, \varepsilon_i = \pm1$.
Therefore, the Alexander matrix $M$ is of the form:
$M=\Bigl[\frac{\partial R}{\partial x}\ \frac{\partial R}
{\partial y}\Bigr]^\phi
=\Bigl[\frac{\partial W}{\partial x}(1-x)+W-1,
\frac{\partial W}{\partial y}(1-x)\Bigr]^\phi
=\Bigl[\frac{\partial W}{\partial y}(1-y)\ \frac{
\partial W}{\partial y}(1-x)\Bigr]^\phi
$
and hence,
$\Delta_B(x,y) =\det \bigl[
\frac{\partial W}{\partial y}\bigr]^\phi$.
Here $\det \bigl[
\frac{\partial W}{\partial y}\bigr]^\phi$
is the sum of $\alpha$ terms, while
$|\Delta_{B(2\alpha,\beta)}(-1,-1)| = \alpha$,
and hence no cancellation occurs among these
$\alpha$ terms.
\fbox{}
Now we return to the proof of (2).
Suppose $\deg f_{\lambda-1-i}(x) >
\lambda - 1 +i- \rho$.
Then
$\deg f_{\lambda-1-i}(t) t^{\lambda-1-i} >
2( \lambda - 1)- \rho$.\\
Write
$f_{\lambda-1-i}(x) =
\gamma_{\lambda - 1 -i, k} x^k
+ \cdots +\gamma_{\lambda-1-i,r} x^r$,
where $k>\lambda - 1 + i- \rho$, and $k\ge r$.
Then by~(1),
$f_i(x) = f_{\lambda-1-i}(x^{-1})x^{\lambda-1} =
\gamma_{\lambda - 1 -i, r} x^{\lambda-1-r} +
\cdots +\gamma_{\lambda-1-i,k} x^{\lambda-1-k}$.
Since $\lambda - 1 - r \ge \lambda - 1 - k$,
$\Delta_{B}(t,t)$ contains the term with degree
$\lambda - 1 - k + i$.
Since no cancellation occurs when we set $x=y=t$,
we see
\begin{align*}
\deg \Delta_{B(2\alpha,\beta)}(t,t) &>
2(\lambda - 1) - \rho - (\lambda - 1 - k + i)\\
& =
\lambda - 1- i - \rho + k \\
&> \lambda - 1- i - \rho +
\lambda - 1+ i - \rho \\
&=2 \lambda - 2 - 2\rho.
\end{align*}
This contradicts Proposition \ref{prop:3.11}. This proves (2).
(3) follows also from the fact that no cancellations
occur when we set $x=y=t$ in $\Delta_B(x,y)$.
(4) follows from (5.14).
(5) follows also from (5.14) and (5.22).
Theorem \ref{thm:7.4} is now proved.
\fbox{}
\begin{rem}\label{rem:7.6}
It is quite likely that
\begin{equation}
\deg f_{\lambda - 1- i }(x) =
\lambda - 1+i - \rho, 1 \le i \le \rho - 1.
\end{equation}
\end{rem}
\section{Monic Alexander polynomials}
In this section, we determine
when the Alexander polynomial of
$K(2\alpha, \beta|r), r>0$, is monic.
We use the results proved in the previous section.
In Subsection 6.1, we deal with the case $\mbox{$\ell k$}B \neq 0$,
using the
continued fraction of $\beta/2\alpha$.
However, if $\mbox{$\ell k$}B = 0$,
we cannot characterize $K(2\alpha, \beta|r)$ with
monic Alexander polynomials in terms of continued fractions.
We then deal with this case in Subsection 6.2.
Let $\{P_1, d_1, Q_1, e_1, P_2, d_2, Q_2, e_2,
\cdots, P_m, d_m, Q_m\}$ be the canonical
decomposition of the continued fraction
of $\beta/2\alpha$.\\
Write
$P_i=[[a_{i,1},b_{ i,1},a_{i,2},b_{i,2},
\cdots, a_{i,s_i}, b_{i,s_i},a_{i,s_i +1} ]]$,
$a_{i,j}>0$, and \\
$Q_j= [[-a'_{j,1},-b'_{j,1},
\cdots,-a'_{j,q_j},-b'_{j,q_j},-a'_{j,q_j+1}]]$,
$a'_{j,k}>0$.
{\bf 6.1. The case $\mbox{$\ell k$}B >0$.}\
The purpose of this subsection is to state
algebraic conditions equivalent to those in Theorem \ref{thm:A}.
Namely, we prove the following:
\begin{thm} \label{thm:8.4}
Suppose $\ell = \mbox{$\ell k$}B\neq0$.\\
(1) Suppose $\ell =r=1$.
Then,
$\Delta_{K(2\alpha, \beta|1)}(t)$ is monic
if and only if, for any $i, j, k, p$,
(a) $d_i, e_j = \pm 1$ and
(b) $b_{i,k}=b'_{j,p}=2$, \\
(2) Suppose $\ell \ge 2$.
Then for any $r \ge1$,
$\Delta_{K(2\alpha, \beta|r)}(t)$ is monic
if and only if
(a) $d_i, e_j = \pm1$ and
(b) $ b_{i,k}$ and $b'_{j,p}$ are $1$ or $2$.
\end{thm}
Let $\Delta_{B(2\alpha,\beta)}(x,y)$ be the
Alexander polynomial of $B(2\alpha,\beta)$.
Suppose $\ell=\mbox{$\ell k$}B>0$.
Then by Proposition \ref{prop:6.1}
(1) and Theorem \ref{thm:7.4},
the Alexander polynomial
$\Delta_K(t)$ of $K=K(2\alpha,\beta|r), r >0$,
is given by
\begin{equation}
\Delta_K(t) = \dfrac{1 - t}{1-t^\ell}
\bigl\{f_{\lambda-1}(t) t^{(\lambda-1)\ell r} +
f_{\lambda-2}(t)t^{(\lambda-2)\ell r} + \cdots +
f_{0}(t)\bigr\},
\end{equation}
where $\lambda - 1$ is the maximal
$y$-degree of $\Delta_{B(2\alpha,\beta)}(x,y)$.
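Computationally, the passage from $\Delta_{B(2 \alpha,\beta)}(x,y)$ to $\Delta_K(t)$ is just the substitution and division displayed above.
A minimal sketch (assuming the bivariate polynomial is normalized so that the quotient is again a polynomial) reads as follows.
\begin{verbatim}
# Sketch of the substitution above:
#   Delta_K(t) = (1 - t)/(1 - t^ell) * Delta_B(t, t^(ell*r)).
# delta_B is a bivariate expression in the symbols x, y of the earlier
# sketches; sp.cancel performs the (assumed exact) division by 1 - t^ell.
import sympy as sp
t = sp.symbols('t')

def delta_K(delta_B, ell, r):
    num = sp.expand(delta_B.subs({x: t, y: t**(ell*r)}))
    return sp.cancel(num*(1 - t)/(1 - t**ell))
\end{verbatim}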
First we determine the degree of
$\Delta_{B(2\alpha, \beta)}(t, t^{\ell r})$.
\begin{prop}\label{prop:8.1+}
(1) The highest degree of
$\Delta_{B(2\alpha, \beta)}(t, t^{\ell r})$ is
\begin{equation}
\lambda-1-\rho+(\lambda-1)\ell r.
\end{equation}
(2) The lowest degree of
$\Delta_{B(2\alpha, \beta)}(t, t^{\ell r})$ is $\rho$.
\end{prop}
\noindent
{\it Proof.}
(1)
We write
$f_{\lambda-1}(x) =
\gamma_{\lambda-1,\lambda-1-\rho}
x^{\lambda-1-\rho} + \cdots +
\gamma_{\lambda-1,\zeta}x^\zeta,
\gamma_{\lambda-1,\lambda-1-\rho}\neq0$.\\
We show that if $\ell r \ge 2$, then
$\gamma_{\lambda-1,\lambda-1-\rho}
t^{\lambda-1-\rho} t^{(\lambda-1)\ell r}$
is the only term with the highest degree in
$\Delta_{B(2\alpha, \beta)}(t,t^{\ell r})$.
In fact, by Theorem \ref{thm:7.4}, we see that
for $1\le i\le \rho-1$,
$\deg f_{\lambda-1-i}(x)\le \lambda-1+i-\rho$,
and hence, since $\ell r\ge 1$, for $1\le i\le \rho-1$
we have:
\begin{equation}
\lambda-1-\rho+(\lambda-1)\ell r \ge
\lambda-1+i-\rho+(\lambda-1-i)\ell r.
\end{equation}
Moreover, obviously, $\deg f_j(x)\le \lambda-1$,
for $0\le j\le \lambda-2-\rho$, and hence,
if $\ell r\ge1$, then for $0\le j\le \lambda-2-\rho$, we see
\begin{equation}
\lambda-1-\rho+(\lambda-1)\ell r \ge \lambda-1+j\ell r.
\end{equation}
Combining (6.3) and (6.4), we have (1).
In particular, if $\ell r\ge 2$, the strict inequality
holds in (6.3), and therefore,
$\gamma_{\lambda-1,\lambda-1-\rho}t^{\lambda-1-\rho}
t^{(\lambda-1)\ell r}$ is the only term with
the highest degree in
$\Delta_{B(2\alpha,\beta)}(t,t^{\ell r})$.
However, if $\ell r=1$, then the equality holds in (6.3), and
thus, the terms with the highest degree appear at least in
$f_{\lambda-1}(t) t^{(\lambda-1)\ell r}$
and
$f_{\lambda-1-\rho}(t) t^{(\lambda-1-\rho) \ell r}$.
(2) First we note from Proposition \ref{prop:3.11}
that the lowest degree
of $\Delta_{B(2\alpha,\beta)}(t, t)$
is $\rho$, since the highest degree of
$\Delta_{B(2\alpha, \beta)}(t,t)$ is $2(\lambda-1)-\rho$ by (1).
By Theorem \ref{thm:7.4}, we see that
$\deg f_{\lambda-1-i}(x) \le \lambda -1+i - \rho,
1 \le i\le \rho - 1$ and, of course,
$\deg f_j \le \lambda - 1$, for
$0\le j\le \lambda - 2 - \rho$.
Now, since $f_0(x) = f_{\lambda-1}(x^{-1})
x^{\lambda-1}$, it follows that the lowest degree of
$f_{\lambda-1}(t) t^{(\lambda-1)\ell r}$ is
$\zeta + (\lambda-1)\ell r$,
and that of $f_0(t)$ is $\rho$.
Since $\rho\le\lambda-1$,
$\min\{\zeta+(\lambda-1) \ell r,\rho\}=\rho$,
and hence, if $\ell r\ge 1$, $f_0(t)$ contains
the term of degree $\rho$.
Furthermore, since the lowest degree of
$\Delta_{B(2\alpha, \beta)}(t,t)$
is $\rho$, the degree of any term in
$\Delta_{B(2\alpha, \beta)}(t,t^{\ell r})$ is
at least $\rho$. This proves (2).
\fbox{}
Proposition \ref{prop:8.1+} implies the following:
\begin{prop}\label{prop:8.1}
If $\ell r \ge 1$, then
$\deg \Delta_{K(r)}(t)=(\lambda-1)(\ell r+1)
- 2\rho-(\ell -1)$.
In particular, if $\ell r=1$ (i.e. $\ell =r=1$), then
$\deg \Delta_{K(r)}(t)=2(\lambda -\rho-1)$.
\end{prop}
Using the above results, we can now characterize when the
Alexander polynomial is monic.
First, we see that if
$\ell r\ge2$, then the leading coefficient of
$\Delta_K(t)$ is given by
$\gamma_{\lambda-1,\lambda-1-\rho}$.
Therefore, $\Delta_K(t)$ is monic if and only if
$\gamma_{\lambda-1,\lambda-1-\rho}$ is $\pm 1$,
and hence Theorem \ref{thm:7.4}(5) gives us
immediately the following:
\begin{prop}\label{prop:8.2}
Suppose $\ell r\ge2$. Then $\Delta_K(t)$ is
monic if and only if the following conditions hold:
(1) $d_i, e_j=\pm1$ for any $i,j$, and
(2) $b_{i,k}= 1$ or $2$ and $b'_{j,p}= 1$ or $2$,
for any $1\le i\le m$, $1\le k\le s_i$,
and $1\le j\le m$, $1\le p\le q_j$.
\end{prop}
Note that $a_{i,j}$ and
$a'_{j,k}$ are arbitrary.
If $\ell r=1$, then the following proposition holds.
\begin{prop}\label{prop:8.3}
Suppose $\ell =r=1$. Then $\Delta_{K(1)}(t)$
is monic if and only if
(1) $d_i, e_j = \pm1$ for any $i,j$, and
(2) $b_{i,k}= b'_{j,p}=2$ for any $i,k,j,p$.
(In particular, $\rho =0$.)
\end{prop}
{\it Proof.}
(1) Suppose that $d_i$ or $e_j$ is not $\pm 1$.
Then $\Delta_{K(1)}(t)$ is not monic by Theorem \ref{thm:7.4}.
(2) Suppose $\rho \neq 0$. Then $\Delta_{B(2\alpha, \beta)}(x,y)$
contains at least two non-zero terms,
$\gamma_{\lambda-1, \lambda-1-\rho}
x^{\lambda-1-\rho}y^{\lambda-1}$
and $\gamma_{\lambda-1-\rho, \lambda-1}
x^{\lambda-1}y^{\lambda-1-\rho}$.
Since $\gamma_{\lambda-1, \lambda-1-\rho}=
\gamma_{\lambda-1-\rho, \lambda-1}$ by Theorem \ref{thm:7.4}(2),
we see that $\Delta_{K(2\alpha, \beta|1)}(t)$ is not monic.
Further, as is proved in Subsection 5.6,
we have all $b_{i,k}=b'_{j,k}=2$.
The converse follows from Theorem \ref{thm:7.4}.
\fbox{}
By the results above, we obtain Theorem \ref{thm:8.4}.
Finally, we note that
we can prove the following
(cf.\ \cite[Theorem 4.2]{KMS})
as a simple consequence of
Proposition \ref{prop:8.1}.
\begin{prop}\label{prop:8.6++}
Suppose $\ell, r>0$.
Then $K(2\alpha, \beta|r)$ is unknotted if and only if
$(2\alpha, \beta)=(4,3)$ and $r=1$,
and hence $\ell=2$.
\end{prop}
{\it Proof.}
Since the \lq\lq if\rq\rq\ part is obvious, we only
consider the \lq\lq only if\rq\rq\ part.
Suppose $K(r) = K(2\alpha, \beta|r)$
is unknotted.
Then, by Proposition \ref{prop:8.1},
we have:
\begin{equation}
(\lambda-1)(\ell r+1)-2\rho-(\ell-1)=0.
\end{equation}
Rewrite the LHS of (6.5) as
\begin{equation}
2(\lambda-1-\rho)+(\lambda-1)(r-1)\ell+
(\lambda-2)(\ell-1)=0.
\end{equation}
Since $\lambda\ge 2$ and $\lambda-1\ge \rho$,
it follows that
each term of the LHS is non-negative,
and hence, the equality holds only if
we have:
(i) $\lambda-1=\rho, r=1$, and
(ii) $\lambda=2$ or $\ell=1$.
Now, since $\lambda-1=\rho$,
we see that $(2\alpha, \beta)=(2\lambda, 2\lambda-1)$
and $\mbox{$\ell k$} B(2\alpha, \beta)=\lambda$.
Since $\lambda\ge2$, $\ell$ cannot be $1$,
and hence the conclusion follows.
\fbox{}
{\bf 6.2. The case $\mbox{$\ell k$}B =0$.}
In this subsection, we characterize the
monic Alexander polynomial of $K(r)$ when
$\mbox{$\ell k$}B=0$.
To do this, we first give a geometric interpretation of
$\Delta_{K(r)}(t)$. Recall that the calculation of the
Alexander polynomial of $K(r)$ from
$\Delta_{B(2\alpha,\beta)}(x,y)$ is quite different when
$\mbox{$\ell k$}B=0$,
as is seen in Proposition \ref{prop:6.1}.
Let $B(2\alpha,\beta)$ be a 2-bridge link
consisting of $K_1$ and $K_2$. Suppose that
$\ell k(K_1,K_2)=0$.
Consider the infinite cyclic
cover $M^3$ of $S^3\setminus K_2$.
Since $K_2$ is unknotted, $M^3$ is an infinite cylinder
$D^2 \times {\mathbb R}^1$.
Let $\{\wti{K}_m, m=0, \pm1,\pm2,\ldots\}$ be the set of
lifts of $K_1$ in $M^3$, where
$\wti{K}_j=\psi^{j}(\wti{K}_0)$ with the covering
translation $\psi$.
Since $\ell k(K_1,K_2)=0$,
each lift $\wti{K}_m$
is a knot in $M^3$ with orientation inherited from that of
$K_1$.
Denote $c_j =\ell k(\wti{K}_0,\wti{K}_j), j\neq 0$, and let
\begin{equation}
\Gamma(t) = \sum_{-\infty<j<\infty} c_jt^j,\
{\rm where}\ c_0 = - \sum_{j\neq 0} c_j.
\end{equation}
Note that $c_0$ is well defined, and also $c_j=c_{-j}$
for any $j$. Then the following is proved in \cite{MM}:
\begin{prop}\label{prop:10.1}
$\Gamma(t) \doteq (t-1)\Bigl[\dfrac{\Delta_{B}(x,y)}
{1-y}\Bigr]_{x=t,y=1}$
\end{prop}
We note that
Gonz\'{a}lez-Acu\~{n}a
also studied this polynomial
$\Gamma(t)$ in \cite{Gon}.
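For computations, the bracket in Proposition \ref{prop:10.1} may be evaluated by exact division, the divisibility by $1-y$ being a consequence of $\mbox{$\ell k$}B=0$; the sketch below returns $\Gamma(t)$ only up to units, as the symbol $\doteq$ indicates.
\begin{verbatim}
# Sketch of the proposition above:
#   Gamma(t) = (t - 1) * [Delta_B(x, y)/(1 - y)] evaluated at x = t, y = 1,
# up to units.  delta_B is a bivariate expression in the symbols x, y of the
# earlier sketches; the division by 1 - y is assumed to be exact.
import sympy as sp
t = sp.symbols('t')

def Gamma(delta_B):
    bracket = sp.cancel(delta_B.subs(x, t)/(1 - y)).subs(y, 1)
    return sp.expand((t - 1)*bracket)
\end{verbatim}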
Using Proposition \ref{prop:10.1},
we can estimate the
maximal and minimal degree of
$\Delta_{K(2\alpha,\beta|r)}(t)$.\\
Let $S = [[ u_1,v_1,u_2,v_2,\ldots,u_s,v_s,u_{s+1}]]$
be the continued fraction of $\beta/2\alpha$.
Let $G(S)$ be the graph of $S$.
This graph $G(S)$ will be used to estimate the degree of
$\Delta_{K(2\alpha,\beta|r)}(t)$.
\begin{prop}\label{prop:10.2}
Let $h$ and $q$ be the highest and lowest
$y$-coordinates of $G(S)$. Then
$\deg \Delta_{K(2\alpha,\beta|r)}(t) \le 2 \max\{h,|q|\}$.
\end{prop}
Note that $h\ge 0$ and $q\le0$.
{\it Proof.}
We span $K_2$ by a disk $D$ in such a way that
$K_1$ intersects $D$ transversally
at $\sum_{i=1}^{s+1}|u_i|$ points.
Using $D$, we construct $M^3$.
Then it is easy to evaluate the linking number
between $\wti{K}_0$ and $\wti{K}_j$ for
$j \ge1$ using the
primitive disk for $K_1$. See Example \ref{ex:lzero}.
Also we can easily determine $h$ and $q$ from the graph
$G(S)$. In fact, $h$ is the $y$-coordinate of the
absolute maximal vertices of $G(S)$, and $q$ is the
$y$-coordinate of the absolute minimal vertices.
Let $V_{i,1}, V_{i,2},\ldots,V_{i,p}$ be the absolute maximal
vertices and $V_{j,1},V_{j,2},\ldots,V_{j,s}$
be the absolute minimal vertices of $G(S)$.
Let $w_{i,k}$ be the weight of $V_{i,k},
k=1, 2, \cdots, p$,
and $w_{j,n}$ the weight of $V_{j,n},
n=1,2, \cdots, s$.
Then we have:
\begin{align}
&(1)\ {\rm If}\ h>|q|,\ {\rm then}\
\mbox{$\ell k$}(\wti{K}_0,\wti{K}_h)=
-\frac{1}{2}\sum^p_{k=1}
w_{i,k}.\nonumber\\
&(2)\ {\rm If}\ h<|q|,\ {\rm then}\
\mbox{$\ell k$}(\wti{K}_0,\wti{K}_q)=
-\frac{1}{2}\sum^s_{n=1}
w_{j,n}.\nonumber\\
&(3)\ {\rm If}\ h=|q|,\ {\rm then}\
\mbox{$\ell k$}(\wti{K}_0,\wti{K}_h)=-\frac{1}{2}
\left\{
\sum^p_{k=1}w_{i,k}+\sum^s_{n=1}w_{j,n}
\right\}.\nonumber\\
&(4)\ {\rm If}\ d>\max\{h,|q|\},\ {\rm then}\
\mbox{$\ell k$}(\wti{K}_0,\wti{K}_d)=0.
\end{align}
Therefore,
$\max \deg \Gamma(t) \le \max\{h, |q|\}$,
and
$\min \deg \Gamma(t) \ge - \max\{h,|q|\}$,
and hence,
$\deg \Delta_{K(2\alpha,\beta|r)}(t)
\le 2 \max\{h,|q|\}$.
\fbox{}
Under the same notation used in the proof of Proposition
\ref{prop:10.2}, we have
\begin{cor}\label{cor:10.3}
$\biggl[\dfrac{\Delta_{B(2\alpha,\beta)}(x,y)}{1-y}
\biggr]_{\genfrac{}{}{0pt}{}
{x=t}{y=1}}(1-t)$
is monic and its degree is equal to
$2\max\{h,|q|\}$
if and only if
(1) $\sum_{k=1}^{p} w_{i,k}=\pm 2$, when $h>|q|$,
(2) $\sum_{n=1}^s w_{j,n} = \pm 2$, when $h<|q|$,
(3) $\sum_{k=1}^p w_{i,k} +\sum_{n=1}^s w_{j,n}
= \pm 2$,
when $h=|q|$.
\end{cor}
\begin{prop}\label{prop:10.4}
Suppose $\mbox{$\ell k$}B=0$.
(1) For $r >1$,
$\Delta_{K(2\alpha,\beta|r)}(t)$ is
either non-monic or $\Delta_{K(2\alpha,\beta|r)}(t) = 1$.
(2) $\Delta_{K(2\alpha,\beta|1)}(t)$ is monic if and only if
$\biggl[\dfrac{\Delta_{B(2\alpha,\beta)}(x,y)}{1-y}
\biggr]_{\genfrac{}{}{0pt}{}
{x=t}{y=1}}$
is monic.
\end{prop}
This is an immediate consequence of
Proposition \ref{prop:6.1}.
The rest of this paper (except for the last three sections)
will be devoted to the proofs of our main theorems.
\section{Construction of a Seifert surface $F_1$ for $K_1$}
By Theorem \ref{thm:1}, we assume $\mbox{$\ell k$}B =\ell
\ge0, r>0$.
In Section 3,
we constructed a primitive spanning disk
$F_D$ for $K_1$, which consists of disks and bands
corresponding to the edges
and vertices in $G(S)$.
Recall that $F_D$ intersects $K_2$
as many times as the number of edges in $G(S)$,
which is equal to $\lambda$.
In this section, we construct a Seifert surface
$F(r)$ for $K(2\alpha, \beta|r)=K(r)$.
First,
using $G(S)$,
we construct a new Seifert surface $F_1$ for $K_1$
which intersects $K_2$ exactly
$\ell \ge0$ times.
We call $F_1$ a {\it canonical surface} for $K_1$.
Let $S=\{P_1,d_1,Q_1,e_1,P_2, \ldots \}$ be the
canonical decomposition of the continued fraction of
$\beta/2\alpha$. Let $G(S)$ be the graph of $S$,
which by definition is the graph $G(S^*)$ of
the modified continued fraction $S^*$ of $S$.
We construct a new surface $F_1$ by induction on
$\nu(G(S))$,
the total number of local maximal and local minimal
vertices,
including the end vertices.
Case 1: $\nu(G)=2$.
Since $\ell\ge 0$,
$G$ is an ascending
line segment. See Figure 7.1 (a).
\tena
In this case, $F_D$ itself is our new surface $F_1$,
which consists of two disks connected by a twisted band.
Case 2: $\nu(G)=3$.
There are two cases. See Figure 7.1 (b1) and (b2).
Note that the vertex $B$ may be on the $x$-axis.
For the first case (b1), $F_D$ consists of $p$, say,
positive disks $D_1, D_2, \ldots, D_p$ followed by $q$,
say, negative disks
$D'_1, D'_2, \ldots, D'_q, p\ge q$, and $p+q-1$ bands
$B_j, j=1,2, \ldots, p+q-1$, connecting
these disks. See Figure 7.2 (a).
\tenc
Then replace two disks $D_p$ and $D'_1$
by a cylinder $R_1$, where
$R_1\cap F_D=\partial R_1=\partial(D_p \cup D'_1)$.
The orientation of $R_1$ is naturally induced from
those of disks.
Next we replace two disks $D_{p-1}$ and $D'_2$
by a cylinder $R_2$ that is inside of $R_1$ and
$R_2\cap F_D=\partial R_2=\partial(D_{p-1} \cup D'_2)$.
Repeat this operation for every pair of a positive
disk $D_{p-i+1}$ and a negative disk $D'_i$, $1\le i\le q$,
in the same manner,
so that we have a sequence of cylinders.
Positive disks and bands corresponding to the subgraph $OB'$
are untouched as in Case 1.
This untouched part of $F_D$ and the cylinders and all
bands form our new surface $F_1$.
See Figure 7.2 (b).
For the second case (b2), $F_1$ is constructed in the same
manner, and it looks like a surface depicted in
Figure 7.2 (c).
We note that
in this construction, all bands are untouched,
and therefore,
we are only concerned with disks.
Case 3: $\nu(G)\ge 4$.
Subcase (i): The origin $O$ is local minimal.
Then a local maximal vertex $A$ is followed
by a local minimal vertex $B$.
See Figure 7.3 (a1) and (a2).
Subcase (ii): The origin $O$ is local maximal.
Then a local minimal vertex $A$ is followed by a local maximal vertex
$B$. See Figure 7.3 (b1) and (b2).
\tend
In Subcase (i) (a1),
first, apply the argument used in Case 2 (b1)
on the subgraph $OA\cup AB$
of $G(S)$, and replace pairs of disks corresponding to the edges on
$B'A$ and $AB$ by cylinders. Secondly, delete the subgraph
$B'A\cup AB$ from $G$ and then identify $B'$ and $B$ to obtain a new
graph $G'$.
Since $\nu(G')=\nu(G)-2$,
we can inductively construct a surface $F'_1$ in such a way that
all cylinders in $F'_1$ are inside the cylinders we previously
constructed.
Our new surface $F_1$ is the union of the cylinders
constructed first, $F'_1$, and all the bands.
As an example, in Figure 7.4 below,
we depict a sequence of
modifications of $G$ and corresponding surfaces.
The last surface is the surface $F_1$ we sought.
In Subcase (i) (a2),
apply the argument used in Case 2 (b1) on the subgraph $OA\cup AB'$,
and replace pairs of disks by cylinders.
Then delete the subgraph $OA\cup AB'$ from $G(S)$ and
identify $O$ and $B'$ so that a new graph $G'$ is obtained.
Since $\nu(G')=\nu(G)-1$, apply induction on $G'$.
In Subcase (ii) (b1) and (b2),
apply the argument used in Case 2 (b2) and repeat similar arguments
used in Subcase (i) (a1) and (a2).
\tenf
\begin{ex} \label{ex:9.1}
Figures 7.5 and 7.6 depict graphs and
corresponding canonical surfaces.
\end{ex}
\teng
\tenh
To construct $F_1$, we start with a primitive spanning
disk
and the genus increases by $1$ each time
we replace a pair of small disks by a cylinder.
We have
$(\lambda-\ell)/2$ pairs of disks to replace.
Therefore, we have the following:
\begin{prop}\label{prop:10.last}
Let $\lambda$ be the number of edges in $G(S)$.
The number of disks in $F_1$ constructed above is
$\ell=\mbox{$\ell k$}B$,
and the number of cylinders of $F_1$ is $\frac{1}{2}(\lambda-\ell)$,
therefore, $g(F_1)= \frac{1}{2}(\lambda-\ell)$.
\end{prop}
Now we twist $F_1$ by $K_2$.
First, suppose $\ell=0$.
Then just by twisting, we obtain a Seifert surface $F(r)$
for $K(r)$. In Section 10, (10.1),
we show that $g(K(r))=g(F(r))$, which is equal to the number
of cylinders, i.e., $(\lambda-\ell)/2$.
Next, suppose $\ell \neq 0$.
$F_1$ consists of $\ell$ disks and
$(\lambda-\ell)/2$ cylinders, connected by bands,
and $g(F_1)= \frac{1}{2} (\lambda-\ell)$
(by Proposition \ref{prop:10.last}).
If we twist $F_1$ $r$ times by $K_2$,
we obtain a singular surface,
in which cylinders penetrate the $\ell$ disks transversely.
Remove ribbon singularities by smoothing intersections
in the standard way.
Then we obtain a Seifert surface $F(r)$ for
$K(r)$.
See Figure 7.7 for the case $r=1$.
For a technical reason, we need a more specific description
(given at the end of this section)
of the positions of the bands connecting disks
and cylinders.
\fotnA
Each time we make a hole,
the genus of the surface is increased by one.
We see that there are exactly
$\frac{1}{2}(\lambda-\ell)\ell r$ intersections.
Furthermore, by twisting along $K_2$, the boundaries of $\ell$ disks
form a torus link of
type $(\ell, \ell r)$.
Therefore, we have:
\begin{align}
g(F(r))&=\frac{1}{2}
(\lambda-\ell) +
\frac{1}{2}(\lambda-\ell)\ell r +
\frac{1}{2}(\ell-1)\ell r\nonumber\\
&= \frac{1}{2}\bigl\{(\lambda-1)(\ell r +1) - (\ell-1)
\bigr\}.
\end{align}
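The simplification carried out in the last display is elementary and can be confirmed symbolically:
\begin{verbatim}
# Check that the three contributions to g(F(r)) add up to the closed form.
import sympy as sp
lam, ell, r = sp.symbols('lambda ell r')
lhs = sp.Rational(1, 2)*((lam - ell) + (lam - ell)*ell*r + (ell - 1)*ell*r)
rhs = sp.Rational(1, 2)*((lam - 1)*(ell*r + 1) - (ell - 1))
assert sp.simplify(lhs - rhs) == 0
\end{verbatim}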
In the proof of Theorem \ref{thm:C}, in the next section,
we show (i) if $\rho=0$, then $F(r)$ is a minimal genus
Seifert surface for $K(r)$,
and (ii) if $\rho>0$, then $F(r)$ admits
compressions $\rho$ times and the result is a minimal genus
Seifert surface for $K(r)$, where $\rho$ is the deficiency
(Definition \ref{dfn:3.8}).
Now we give a precise description of the relative
position of bands connecting
disks and cylinders.
Proposition \ref{prop:4.2} also implies
that, in $F_1$, we have some freedom in the relative positions
of the bands.
However, this freedom
is lost once we twist $F_1$ to obtain $F(r)$.
A band $B$ in $F_1$
is of one of the five types below. See Figure 7.8.
Note that the boundary of
($F_1 -$ bands) consists of circles. In Figure 7.8,
a circle with an arrow
heading toward left (resp. right)
corresponds to a rising (resp. falling) edge of $G$.
Each band corresponds to a vertex in $G$.
Type I:
$B$ connects the two boundary circles of an outermost cylinder.
Type II:
$B$ connects two stacked cylinders.
Type III:
$B$ connects two cylinders side by side
(of the same sign
or the opposite sign).
Type IV:
$B$ connects a disk and a cylinder.
In this type, we have three subtypes
as in Figure 7.8.3.
Type V:
$B$ connects two positive disks.
\constAa
\constAb
\constAc
\constAd
Note.
In Figure 7.8.3 (c), there are no
disks above the depicted cylinder
by our construction of $F_1$.
To obtain $F(r)$, we place the bands of Type V
as in Figure 7.8.4, so that they are placed close to
the bands that emerge from the twisting.
The bands of Types I, II and III should be
arranged as in Figure 7.9, where the horizontal disk
is the very top disk in $F_1$, i.e., it corresponds
to the edge in $G$ that is the first rising edge
after the last intersection of $G$ and the $x$-axis.
Bands of Type IV, where a positive cylinder is connected (Figure 7.8.3 (a), (b)),
should be arranged as in Figure 7.7.
If $r\ge 2$,
then the first band (resp. the second band, if any)
is placed before (resp. after) the $r$-twists.
A band of Type IV where a negative cylinder is connected (Figure 7.8.3 (c)) is treated similarly
to that of Figure 7.8.3 (b). See Figure 8.5 for
a local picture for this type of band
after twisting by $K_2$.
Now we have constructed a Seifert surface
for $K(r)$.
\constB
\secti{Proof of Theorem \ref{thm:C}}
Let $S = \{P_1,d_1,Q_1,e_1,P_2,d_2,Q_2,e_2,\ldots, P_m,d_m,Q_m\}$
be the
canonical decomposition of the continued fraction of
$\beta/2\alpha$.
Express each $P_i$ and $Q_j$
by modified continued fractions,
and write
\begin{align*}
&P_i =[[1,b_{i,1},1,b_{i,2},1,\ldots,1,b_{i,s_i},1]],
1\le i\le m,\nonumber\\
&Q_j= [[-1,-b_{j,1}',-1,-b_{j,2}',-1,
\ldots,-1,-b_{j,q_j}',-1]],
1\le j\le m,
\end{align*}
where $b_{i,k}$ and $b_{j,k}'$ are arbitrary, and may be $0$.
Since $\ell \neq 0$, we see from Proposition \ref{prop:6.1} that
\begin{equation}
\Delta_{K(r)}(t) = \frac{1-t}{1-t^\ell}
\Delta_{B(2\alpha,\beta)}(t,t^{\ell r}),
\end{equation}
where $\ell >0$ and $r>0$.
Then, by Proposition \ref{prop:8.1}, we have:
\begin{equation*}
\deg \Delta_{K(r)}(t) = (\lambda-1)(\ell r+1)-(\ell-1)-2\rho.
\end{equation*}
Recall that in Section 7, (7.1),
we constructed a Seifert surface $F(r)$
for $K(r)$ with $g(F(r))
= \frac{1}{2}\bigl\{(\lambda-1)(\ell r +1) - (\ell-1) \bigr\}$.
If $\rho =0$, then $F(r)$
is a minimal genus Seifert surface for $K(r)$,
since $g(F(r))= \frac{1}{2}\deg \Delta_{K(r)}(t)$.
Therefore, to prove Theorem \ref{thm:C},
it suffices to confirm that we can compress $F(r)$
as many times as $\rho$ (Definition \ref{dfn:3.8}).
In Proposition \ref{prop:15.2} below,
we demonstrate where to apply compression
corresponding to each $b_{i,k}=1$ and $b_{j,k}'=1$ in $S$.
Since we need it in the proof of Theorem \ref{thm:A},
we also show
where we can deplumb twisted annuli from
$F(r)$. But first, we apply compressions.
\begin{prop}\label{prop:15.2}
We can compress $F(r)$ $\rho$ times, where
each compression
corresponds to an occurrence of
$b_{i,k} = 1$ or $b'_{j,k} = 1$.
Furthermore,
we can deplumb $\sum_{i=1}^{m} s_i +
\sum_{j=1}^{m} q_j + 2m-1-\rho$ unknotted,
twisted annuli from $F(r)$, where
each deplumbing corresponds
to an occurrence of
$b_{i,k} \neq 1$, $b'_{j,k} \neq 1$, $d_i$, or $e_j$.
\end{prop}
{\it Proof.}
At each band connecting disks and cylinders,
we explicitly show how to apply either compression or
deplumbing of an annulus.
Each deplumbing corresponds to removing a band,
and each compression corresponds to cutting
the surface along a properly embedded arc.
If a band is of Type I, it is obvious that
we can deplumb an unknotted annulus with
$d_i$ or $e_j$ full twists. Compressions never occur
for this type.
For a band of Type II, the relevant part of $F(r)$ is depicted in Figure 8.1.
If $b\neq 1$ or $b' \neq 1$, then we can deplumb
a twisted annulus and thus remove a band.
See Figure 8.1 (a).
In particular,
if $b$ or $b'$ is either $0$ or $2$,
then the annulus is a Hopf band.
However, if $b=1$ or $b'=1$,
then the annulus yields a compressing disk, so we
do not deplumb a band, but apply compression
(see Figure 8.1 (b)).
\fotnC
Take a band $B$ of Type III. If $B$ connects two
cylinders showing the same side (see Figure 8.2),
we can deplumb a twisted annulus.
In this case, compression never occurs.
In particular we can deplumb a Hopf band if
$d=\pm1$ or $e=\pm1$.
On the other hand, if $B$ connects two cylinders
showing the opposite sides (see Figure 8.3),
then we can apply compression if $b$ or $b'$
equals $1$,
and otherwise deplumb a twisted annulus.
\fotnD
\fotnE
Take a band $B$ of Type IV (Figure 7.8.3).
There are three subtypes according
to the feature of the corresponding vertex $v_B$ in $G$:
(a) $v_B$, not on the $x$-axis,
is between two rising edges of $G$,
(b) $v_B$ is a local minimum, and
(c) $v_B$, on the $x$-axis, is between two rising edges.
For (a), see Figure 8.4. If $b=1$, then we can compress,
and otherwise we can remove the band by deplumbing an
annulus with $b-1$ full twists. (See Figure 7.7.)
For (b), we can deplumb an annulus with $e$ full twists,
and compression never occurs.
For (c), see Figure 8.5.
\fotnF
\fotnG
For bands of Type V,
deform $F(r)$ by isotopy as in Figure 8.6
so that each band of Type V is adjacent to
a band that emerges from twisting $F_1$.
Then we can remove the band by deplumbing if $b \neq 1$,
and otherwise cancel it with its neighbor, which corresponds
to compressing $F(r)$.
Note that since $r\ge1$, even if we cancel all bands of
Type V, there are still bands connecting each
pair of adjacent disks.
\fbox{}
\fotnH
By Proposition \ref{prop:15.2}, the proof of
Theorem \ref{thm:C} is now completed.
\secti{Proof of Theorem \ref{thm:A}}
By Theorem \ref{thm:1},
we assume $\mbox{$\ell k$}B=
\ell\ge0$ and $r>0$.
Let $\beta/2\alpha=[[c_1, c_2, \ldots, c_{2d+1}]]$
be a continued fraction of $\beta/2\alpha$, and
$S=$\\
$\{P_1, d_1, Q_1, e_1, P_2, \ldots, P_m, d_m, Q_m\}$
be its canonical decomposition.
Write
\begin{align*}
&P_i=[[a_{i,1},b_{ i,1},a_{i,2},b_{i,2},
\cdots, a_{i,s_i}, b_{i,s_i},a_{i,s_i +1} ]], a_{i,j}>0,\
{\rm and}\\
&Q_j= [[-a'_{j,1},-b'_{j,1},
\cdots,-a'_{j,q_j},-b'_{j,q_j},-a'_{j,q_j+1}]],
a'_{j,k}>0.
\end{align*}
{\bf 9.1. Reformulation of Theorem \ref{thm:A}.}
In Theorem \ref{thm:8.4}, we have
characterized $K(2\alpha, \beta|r)$ with $\ell>0, r>0$
whose Alexander polynomial is monic.
Hence now Theorem \ref{thm:A} is equivalent
to Theorem \ref{thm:5.1} below.
\begin{thm}\label{thm:5.1}
Fibredness of $K(r)=K(2\alpha,\beta|r)$ with
$\ell=\mbox{$\ell k$}B \neq 0$ is determined as follows,
where we assume $\ell>0$ and $r>0$
by Theorem \ref{thm:1}.
In each case below,
$a_{i,j}$ and $a'_{i,j}$ are arbitrary.\\
Case 1. $\ell=r=1$.
$K(1)$ is fibred if and only if
(a) $d_i, e_j=\pm 1$, for any $i, j$, and
(b) in each $P_i$ and $Q_j$,
$b_{i,k}$ and $b'_{j,p}$ are $2$.\\
Case 2. $\ell r\ge2$.
$K(r)$ is fibred if and only if
(a) $d_i, e_j=\pm 1$, for any $i, j$, and
(b) in each $P_i$ and $Q_j$,
$b_{i,k}$ and $b'_{j,p}$ are $1$ or $2$.
\end{thm}
{\bf 9.2. Proof of Theorem \ref{thm:5.1}, Case 1.}
We assume in this subsection
\begin{equation}
\ell = \mbox{$\ell k$}B = 1\ {\rm and}\ r=1.
\end{equation}
First suppose $K(r)$ is fibred.
Then $\Delta_{K(r)}(t)$ is monic, and hence, by
Proposition \ref{prop:8.3},
\begin{align}
&(a)\ d_i, e_j = \pm 1,\ {\rm and}\nonumber\\
&(b)\ b_{i,k}=b_{j,p}'=2\ {\rm for\ any}\ i,j,k,p.
\end{align}
This proves the \lq\lq only if\rq\rq\ part of Case 1.
Conversely, suppose (9.2) is satisfied.
Rewrite the continued fraction as the
modified continued fraction.
Then some of the new $b_{i,k}, b'_{j,p}$ may be
zero, but still (9.2) implies the
deficiency $\rho = 0$.
Now, by Proposition \ref{prop:15.2},
we see that (9.2) also implies the following:
Let $F^*$ be the surface obtained from
$F(r)$ (constructed in Section 7) by
removing all the bands connecting the disks
and cylinders. Then $F^*$ is
obtained from $F(r)$ by deplumbing Hopf bands.
Therefore, to prove that $K(r)$ is fibred,
it suffices to show that $F^*$
is a fibre surface.
The following lemma shows that $F^*$ is a
fibre surface, and hence Theorem \ref{thm:A} Case 1
is proved.
\fbox{}
\begin{lem}\label{lem:16.1} (Braided fibre surface)
Let $L_1$ and $L_2$ be (naturally) oriented closed braids
in a tubular neighbourhood
$N(L)$ of a Hopf link $L$, where $L_1$ and $L_2$ are
embedded in different components of $N(L)$.
Suppose that $L_1$ is a positive closed braid.
Then $L_1$ is a fibred link and a fibre surface $S$ for $L_1$
is obtained by applying Seifert's algorithm.
Now replace each component $L_{2,i}$ $(1\le i\le \mu)$ of
$L_2$ by an annulus
$B_i$ whose core is $L_{2,i}$,
but the number of twists of $B_i$ is arbitrary.
We assume that $B_i$ intersects $S$ transversally
in ribbon singularities.
By smoothing all the
ribbon singularities, we obtain a Seifert surface
$F$ for $L_1\cup\partial
B_1\cup \cdots \cup\partial B_{\mu}$.
Then $F$ is a fibre surface.
\end{lem}
{\it Proof.}
The surface $S$ consists of disks and
bands connecting these disks.
Since $L_1$ is a positive braid,
each band has only a positive half twist.
Since two neighbouring bands form a Hopf band,
we can eliminate one of the bands by deplumbing.
After all these deplumbings,
we may assume that $L_1$ is a trivial knot, i.e.,
$S$ consists of $\nu$ disks and $\nu-1$ bands,
and it suffices to show that $F$ constructed
using this $S$ is a fibre surface.
See Figure 9.1.
\fif
Denote by $\mu$ the number of strings of the braid of $L_2$.
Now consider a sutured manifold $M=F \times I$.
For the definition of sutured manifold and its decompositions,
see
\cite[pp.8--10 and Appendix A]{Ga1} and
\cite[Section 1]{Ga2}.
$M$ is a solid ball with $\nu \times \mu$ holes and $\nu \times \mu$
$1$-handles attached.
Applying a series of $C$-product decompositions, first we fill
these holes by $2$-handles, and obtain a ball
$M'$ with
$\nu \times \mu$ 1-handles attached,
where the suture on $M'$ is the equator, and
each $1$-handle has exactly one suture, which is parallel to a co-core.
Since each of the $1$-handles
connects the north hemisphere
and the south hemisphere of $M'$
without a local knotting,
we can arrange the $1$-handles by sliding their feet
so that they are attached to $M'$ trivially.
Then, by a $C$-product decomposition, we can amalgamate
a pair of $1$-handles, and eventually, we have
a solid torus whose sutures are two meridians.
Applying one more $C$-product decomposition, we have
a ball with a single suture.
Therefore, the original surface $F$ is a fibre
surface.
\fbox{}
{\bf 9.3. Proof of Theorem \ref{thm:5.1}, Case 2.}
In this subsection, we assume that
\begin{equation}
\ell r\ge 2.
\end{equation}
First, we note that Proposition \ref{prop:8.2}
proves the \lq\lq only if\rq\rq\ part.
Therefore, suppose that
the continued fraction of $\beta/2\alpha$ satisfies
\begin{align}
&(a)\ d_i, e_j = \pm1\ {\rm and}\nonumber\\
&(b)\ b_{i,k}\ {\rm and}\
b_{j,p}' = 1\ {\rm or}\ 2\ {\rm for\ any}\ i,j,k,p.
\end{align}
Again rewrite the continued fraction as a
modified continued fraction. Then in (9.4) (b)
we have \lq $0, 1$ or $2$', instead of
\lq $1$ or $2$'.
By Proposition \ref{prop:15.2},
we can apply compressions
corresponding to all $b_{i,k}, b'_{j,p} =1$, and
remove all bands with $d_i, e_j = \pm1$
and $b_{i,k} =
b_{j,p}' = 0$ or $2$ by deplumbing Hopf bands.
Denote the
resulting surface by $\wti{F}$.
To complete the proof, it suffices to show
$\wti{F}$ is a fibre surface.
Recall that in the
proof of Proposition \ref{prop:15.2},
each compression corresponds to cutting the surface
along a properly embedded arc.
In the following,
we depict how to undo each of the cuts
by plumbing a Hopf band.
To do this, the assumption that
$\ell r \ge 2$ is essential.
In fact, to undo the cuts,
we use two consecutive holes
that occur as intersections of a
cylinder and the disk(s).
It suffices to consider each of Types II, III and IV in
Proposition \ref{prop:15.2}.
For Type II (resp. III),
we plumb a band $B$ along the curve depicted in
Figure 9.2 (resp. 9.3).
Then we can undo the cut by
plumbing a Hopf band.
\stnA
\stnB
Type IV is a bit more complicated. Since the arguments are similar, we treat only the case where the band connects
the positive disk and the positive cylinder
(Figure 7.8.3 (a)). The cut made by the compression is
depicted in Figure 8.4.
In Figure 9.4,
the band $B'$ gives a compressing disk so that the
result of compression is as in Figure 8.4.
\stnC
To undo the cut, we consider three subcases:
Let $A$ be the annulus to which $B'$ is connected.
Subcase (i): There are some disks below $A$ in
$\wti{F}$ (Figure 9.4 (a)).
Subcase (ii): There is more than one disk
above $A$ but no disk below $A$ in
$\wti{F}$ (Figure 9.4 (b)).
Subcase (iii): There is only one disk above $A$
and no disk below $A$ in $\wti{F}$
(Figure 9.4 (c)). Note that in Subcase (iii),
$r\ge 2$ by the assumption that $\ell r\ge 2$.
In each subcase, using the arc depicted in Figure 9.4,
we can add a band $B$ by
plumbing a Hopf band after compressing at $B'$.
Then, by sliding $B$ along $A$
to the cite of compression,
we can undo the cut.
This can also be seen from the fact that the bands
$B$ and $B'$ cancel each other.
Now all cuts are undone by Hopf plumbings.
Then as in Case 1, we can further deplumb Hopf bands
and apply
Lemma \ref{lem:16.1}.
This proves Theorem \ref{thm:5.1}, Case~2.
\fbox{}
\secti{Proof of Theorem \ref{thm:D}}
In this section,
we determine the genus of a knot $K(r)$ for
the case $\mbox{$\ell k$}B=\ell=0$, and thus prove Theorem
\ref{thm:D}.
In Section 7, we spanned $K_1$ by a canonical Seifert surface $F_1$.
By applying Dehn twists on $F_1$ along $K_2$,
we obtain a Seifert surface $F(r)=F$ for $K(r)$.
We will show that $F$ is of minimal genus.
Since $F$ and $F_1$ have the same genus,
we show in fact that
$g(K(r))$ is equal to the number of cylinders in $F_1$.
However, $g(F)$ is much
larger than one half of
the degree of the Alexander polynomial of $K(r)$,
cf.\ Proposition \ref{prop:10.2}.
Therefore, in order to show that $F$ is of minimal genus,
we use geometry.
First, deplumbing a twisted annulus from $F$ does not
affect genus-minimality, by the additivity of genus under
the Murasugi sum.
So we remove all bands connecting
the boundaries of the same cylinder.
Then our main tool is the sutured manifold hierarchies.
As a special case of general results
of sutured manifold hierarchies, we have the following
(see \cite[Corollary 1.29]{Ga1}):
\begin{prop}\label{prop:std}
Let $(M, \gamma)=(F\times I, \partial F \times I)$
be the sutured manifold obtained from a Seifert surface $F$.
Apply complementary disk- (annulus-) decompositions to
$(M, \gamma)$ and suppose we obtain
$(V, \delta)$ where $V$ is a standard solid torus and
each suture is a loop running longitudinally once and
meridionally non-zero times.
Then $F$ is of minimal genus.
\end{prop}
Throughout the rest of this section,
we omit the adjective \lq complementary'
for complementary sutured manifold decompositions,
since we only deal with such decompositions and
no confusion is expected.
Let $[2u_1,2v_1,2u_2,2v_2,\ldots, 2u_m,2v_m, 2u_{m+1}]$
be the continued fraction of $\beta/2\alpha$.
Suppose that $\mbox{$\ell k$}B=0$.
Then to prove Theorem \ref{thm:D}, we show
the following:
\begin{equation}
g(K(r))= \dfrac{1}{2}\sum_{i=1}^{m+1}|u_i|
=\dfrac{\lambda}{2}
(= \#\{{\rm cylinders\ of\ } F_1\}).
\end{equation}
Note that since $\mbox{$\ell k$}B=0$, the number $\sum_{i=1}^{m+1}|u_i|=
\lambda=\#\{{\rm edges\ of\ } G(S)\}$
is even.
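As a quick numerical illustration of (10.1) (a minimal sketch, not needed for the proof; the convention $[a_1,\dots,a_n]=1/(a_1-1/(a_2-\cdots-1/a_n))$ for continued fractions is our reading of the examples in Section 12), one may check the fractions appearing in Example \ref{ex:lzero}:
\begin{verbatim}
from fractions import Fraction

def eval_cf(cf):
    # [a_1,...,a_n] = 1/(a_1 - 1/(a_2 - ... - 1/a_n))
    x = Fraction(cf[-1])
    for c in reversed(cf[:-1]):
        x = c - 1 / x
    return 1 / x

def genus_from_cf(cf):
    # (10.1): g(K(r)) = (1/2) sum |u_i|, where the 2u_i are the odd-position entries
    u = [c // 2 for c in cf[::2]]
    return sum(abs(v) for v in u) // 2

cf = [2, 2, -4, 2, 2]                  # beta/(2 alpha) = 31/48, from Section 12
assert eval_cf(cf) == Fraction(31, 48)
print(genus_from_cf(cf))               # 2
\end{verbatim}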
{\it Proof.}
We prove (10.1) by induction on $\lambda$.
If $\lambda=2$, then (10.1) is obvious since
$F(r)$ is a plumbing of two twisted annuli.
Suppose $\lambda\ge4$.
First we deplumb all bands corresponding to
proper local maximal or minimal vertices,
i.e., those connecting the two boundaries of a cylinder.
Denote by $\hat{F}$ the resulting surface.
We inductively reduce the graph $G(S)$ and accordingly
amalgamate the solid tori in
$(\hat{F}\times I, \partial \hat{F}\times I)$
by disk- (annulus-) decompositions,
until we have only one solid torus in which each of the sutures
runs longitudinally once and
meridionally a non-zero number of times.
After that, we will see that all such deplumbings and
amalgamations commute
with the Dehn twists along $K_2$, and hence
by Proposition \ref{prop:std}, we have (10.1).
\elebb
Case 1: There is a vertex in $G(S)$ incident to
two consecutive rising edges and two consecutive
falling edges, as in Figure 10.1 (a1) or (b1),
where each white
vertex may be terminal
or non-terminal, and may lie on the $x$-axis.
As in Figure 10.2, we amalgamate the top solid torus
with the second one from the top.
There are several cases according to the
numbers of twists in the two connecting bands.
\eleb
Case 1.1. The two bands are twisted in the opposite directions:
See Figure 10.2 (a). First apply a
disk decomposition
using the disk with shadow, then apply a product
disk decomposition. Note that if the two bands
are both only half-twisted, then the first disk decomposition
is also a product disk decomposition.
Case 1.2. The two bands are twisted in the same direction:
See Figure 10.2 (b) and (c).
As before, we apply a disk decomposition and a product disk
decomposition. Figure 10.2 (b) depicts the case where both bands
are more than half-twisted. In this case, we have two extra
sutures, but they do not affect the following inductive amalgamations.
Figure 10.2 (c) depicts the case where at least one of the bands is
half-twisted. Note that if both bands are half-twisted,
then the first disk decomposition is a product disk decomposition.
Case 2: There is no subgraph as considered in Case 1, or all such subgraphs
have been removed.
Now it suffices to find a subgraph as in Figure 10.3 or 10.4 and
amalgamate a solid torus.
\elecc
\eledd
Since the other cases are similar, we only consider subgraphs in
Figure 10.3 (a1) and Figure 10.4.
By construction,
a rising edge above (resp. below)
the $x$-axis is paired with
a falling edge on its right (resp. left) so that
the pair corresponds to a cylinder in $F_1$.
Sutured manifold decompositions amalgamate
the solid tori as depicted in Figures 10.5 and 10.6, respectively.
In Figure 10.5, corresponding to Figure 10.3 (a1),
two cylinders showing the same side are
connected by an even-twisted band.
In Figure 10.6, corresponding to Figure 10.4,
two cylinders showing the opposite sides are connected
by an odd-twisted band.
However, as seen in Figure 10.2, longitudinal sutures may
have accumulated.
In Figures 10.5 and 10.6, we first
apply an annulus decomposition and then a disk decomposition.
Now we have amalgamated all the tori into one and see
that we may apply the Dehn twist along $K_2$ beforehand.
Therefore, by Proposition \ref{prop:std},
(10.1) is obtained.
\fbox{}
The proof of Theorem \ref{thm:D} is now completed.
\eleee
\eleff
As an immediate consequence of Theorem \ref{thm:D},
we have:
\begin{cor}\label{cor:11.last}
Suppose $\mbox{$\ell k$}B=0$ and $\alpha\ge2$.
Then, for any $r>0$, $K(2\alpha, \beta|r)$
is a nontrivial knot.
\end{cor}
\secti{Proof of Theorem \ref{thm:B}}
{\it Proof of Theorem \ref{thm:B} (a). }
Write
$\Delta_{B(2\alpha,\beta)}(x,y) = (x-1)(y-1)f(x,y)$.
Then by Proposition \ref{prop:6.1}, we can write
$\Delta_{K(r)}(t) = r(t-1)^2 f(t,1) +\varepsilon t^k$,
where $\varepsilon = \pm 1$ and $k$ is chosen so that
$\Delta_{K(r)}(t)$ is symmetric.
Suppose $r\ge 2$.
If $f(t,1) \neq 0$, then $\Delta_{K(r)}(t) $
is not monic and hence $K(r)$ is not
fibred.
Suppose $f(t,1)=0$. Then $\Delta_{K(r)}(t)=1$.
However, since $\alpha \ge 2$, $K(2\alpha,\beta|r)$
is
not trivial by Corollary \ref{cor:11.last},
and therefore $K(r)$ is not
fibred for
$r \ge 2$.
\fbox{}
The rest of this section is devoted to the
proof of Theorem \ref{thm:B} (b).
As we remarked in subsection 3.4,
we may assume $r>0$ and $\beta>0$.
To prove Theorem \ref{thm:B} (b),
we need the following two propositions.\\
Let $\{P_1,d_1,Q_1,e_1,P_2,d_2,Q_2,e_2,
\ldots, P_m,d_m,Q_m\}$ be the
canonical decomposition of $\beta/2\alpha$.
Using Theorem \ref{thm:D}, first we prove the following.
\begin{prop}\label{prop:12.1}
Suppose $\mbox{$\ell k$}B=0$.
Then, for any $r\ge 1$,
$g\bigl(K(r)\bigr)=\frac{1}{2}\deg \Delta_{K(r)}(t)$
if and only if $m=1$, i.e. $\{P_1,d_1,Q_1\}$ is the
canonical decomposition of $\beta/2\alpha$.
\end{prop}
{\it Proof.} By Theorem \ref{thm:D}, for any $r\ge1$,
${\displaystyle
2g\bigl(K(r)\bigr)= \sum_{i=1}^{m}\Bigl\{
\sum_{k=1}^{s_i+1}|a_{i,k}| +\sum_{k=1}^{q_i +1}|a'_{i,k}|\Bigr\}
}$.\\
On the other hand,
$\deg \Delta_{K(r)}(t) \le 2 \max\{h, |q|\}$,
where $h$ (resp. $q$) is the $y$-coordinate of
the absolute maximal
(resp. minimal) vertex in $G(S)$
(Proposition \ref{prop:10.2}).
Therefore, if there exists a local minimal vertex (other than an end vertex),
we see easily that
$2g\bigl(K(r)\bigr) > 2 \max\{h, |q|\} \ge \deg
\Delta_{K(r)}(t)$,
contradicting the hypothesis; hence there is only one local (and hence, absolute)
maximal vertex, and therefore, $m=1$.
\fbox{}
\begin{prop}\label{prop:12.2}
Suppose $\mbox{$\ell k$}B=0$.
Suppose further that $\{P_1,d_1,Q_1\}$ is the canonical
decomposition of $\beta/2\alpha$. Then
$\Delta_{K(1)}(t)$ is monic if and only if
$d_1 = \pm1$.
\end{prop}
{\it Proof.}
This follows from Corollary \ref{cor:10.3}.
\fbox{}
Now we proceed to the proof of Theorem \ref{thm:B}(b).
{\it Proof of the \lq\lq if\rq\rq\ part.}
Suppose
that
the modified continued fraction of $\beta/2\alpha$
is of the form
$S = [[1,b_1,1,b_2,
\ldots,1,b_{p-1},1,d_{1},-1,-b'_{p-1},-1,\ldots, -b'_{1},
-1]]$,
where $b_i$ and $b_i'$, $1\le i \le p-1$, are $0$ or $1$
and
$d_1 = \pm1$.
We must show that $K(1)=K(2\alpha, \beta|1)$ is fibred.
Let $F_1$ be the canonical Seifert surface for
$K_1$. (See Figure 7.6 (a).)
By applying a Dehn twist to $F_1$ once along $K_2$, we obtain a
Seifert surface for $K(1)$.
Since $d_1=\pm 1$, the band corresponding to the maximal
vertex is a Hopf band, and hence we may remove it
by deplumbing. Denote the resulting surface by $\hat{F}$.
Now, since $b_i$ and $b_i'$ $(1\le i\le p-1)$ are either $0$ or $1$,
every band in $\hat{F}$ is only half-twisted.
Therefore, every disk decomposition employed in the proof of
(10.1) Case 1
is in fact a product disk decomposition,
and hence $\hat{F}$ is a fibre surface by \cite[Theorem 1.9]{Ga2},
and $K(1)$ is a fibred knot.
{\it Proof of the \lq\lq only if\rq\rq\ part.}
Suppose $K(1)=K(2\alpha,\beta|1)$ is a
fibred knot.
Then by
Proposition \ref{prop:12.1}, the canonical decomposition
of $\beta/2\alpha$ must be
$\{P_1,d_1,Q_1\}$.
Therefore, the modified continued
fraction is of the form:
$\beta/2\alpha=[[1,b_1,1,b_2, \ldots,
1,b_{p-1},1,d_1,-1,-b_{p-1}',-1, \ldots, -1,-b'_1,-1]]$.
Consider the canonical Seifert surface $F_1$ for $K_1$,
as in Figure 7.6 (a).
A Seifert surface $F$ for $K(1)$, consisting of $p$ cylinders
$A_1, A_2, \ldots, A_p$ and $2p-1$ bands,
is obtained from $F_1$
by applying a Dehn twist once along $K_2$.
In Section 10, we have seen that $F$ is of minimal genus,
and hence it is a fibre surface for the fibred knot $K(1)$.
By Proposition \ref{prop:12.2}, $d_1 = \pm1$
and hence we remove the band on the top annulus $A_p$
by deplumbing a Hopf band, and denote by
$\hat{F}$ the resulting surface.
Then the homomorphism induced by the inclusion,
\begin{equation*}
\phi : \pi_1(\hat{F}) \longrightarrow \pi_1(S^3 -
\hat{F}), \tag{11.1}
\end{equation*}
must be onto.
\setcounter{equation}{1}
Now we show that
all the $b_j$ and $b_j'$ are $0$ or $1$. To this end, we show that
if at least one of the $b_j$ and
$b_j'$ is neither $0$ nor $1$,
then $\phi$ is not onto and hence $K(1)$
is not fibred.
If $b_1$ and $b_1'$ are $0$ or $1$,
then we can \lq remove\rq\ the bottom annulus
$A_1$ by two product decompositions
as in the proof of (10.1) Case 1.
Therefore by \cite[Lemma 2.2]{Ga2}, we may
assume without loss of generality that
$b_1'\neq 0,1$, i.e., the band $B'_1$ is more than
half-twisted.
To show that $\phi$ is not onto, we
need explicit presentations of the groups
$\pi_1(\hat{F})$ and $\pi_1(S^3-\hat{F})$.
To do that, we deform $\hat{F}$ as depicted in
Figure 11.1 (where $p=3$).
Note that both $\pi_1(\hat{F})$ and
$\pi_1(S^3-\hat{F})$ are free of rank $2p-1$.
Figure 11.1 also depicts
a base point $**$ and the generators of
$\pi_1(S^3-\hat{F})$
denoted by $x_1, x_2, \ldots,x_p$ and
$a_1,a_2,\ldots,a_{p-1}$.
Take a base point $*$ for $\pi_1(\hat{F})$ on
$A_1$ as in Figure 11.1.
The generators for
$\pi_1(\hat{F})$ are denoted by
$\alpha_1, \alpha_2, \ldots, \alpha_p$ and
$\beta_1, \beta_2,\ldots, \beta_{p-1}$.
A loop $\alpha_i$ starts at $*$, moves toward $A_i$
through the bands
$B_1,\ldots,B_{i-1}$, circles once around $A_i$
counter-clockwise,
and then returns to $*$ through $B_{i-1}, \ldots, B_1$.
A loop $\beta_i$ starts at $*$, moves toward $A_{i+1}$ through
$B_1,B_2, \ldots, B_i$,
and returns to $*$ passing first through $B_i'$ and then through
$B_{i-1}, B_{i-2}, \ldots, B_1$.
\thir
We must express $\phi(\alpha_i), \phi(\beta_j)$ in terms of
$x_i , a_j$.
Let $D=x_1 x_2 \ldots x_p$.
For simplicity, we write $\alpha_i$ (resp. $\beta_j$) instead of $\phi(\alpha_i)$
(resp. $\phi(\beta_j)$).
Then we have the following:
\begin{align}
&\alpha_1= a_1D, \nonumber\\
&\alpha_2= u_1a_2D u_1^{-1}, \nonumber\\
&\alpha_3= u_1u_2a_3 D u_2^{-1} u_1^{-1}, \nonumber\\
&\indent \vdots\nonumber\\
& \alpha_{p-1}= u_1 u_2\ldots u_{p-2}a_{p-1}D u_{p-2}^{-1}
\cdots u_1^{-1}, \nonumber\\
& \alpha_p= u_1u_2\cdots u_{p-1} D u_{p-1}^{-1}\ldots u_1^{-1}.\\
&\beta_1 = u_1 w_1,\nonumber\\
&\beta_2 = u_1(u_2 w_2) u_1^{-1},\nonumber\\
&\beta_3 = u_1u_2(u_3 w_3) u_2^{-1} u_1^{-1},\nonumber\\
& \indent
\vdots \nonumber\\
& \beta_{p-1} = u_1u_2\ldots u_{p-2}(u_{p-1} w_{p-1})
u_{p-2}^{-1}\ldots u_1^{-1},
\end{align}
where $u_i=a_{i}^{b_{i}}$
and $w_i = x_{i+1}^{-1}(x_i^{-1} \ldots x_1^{-1} D x_1
\ldots x_i D^{-1} a_i^{-1} )^{b_i'}, 1 \le i\le p-1$.
Denote $h_i=x_i^{-1} \ldots x_1^{-1} D x_1
\ldots x_i D^{-1} a_i^{-1}$.
By assumption, $b'_1 \neq 0,1$.
Let $H = {\rm Im} \phi$ and $G= \pi_1(S^3-\hat{F})$.
We want to show that $H \neq G$.
First we may suppose that $a_1 \in H$ and $x_1 x_2 \in H$.
Otherwise, obviously, $H \neq G$, and we are done.
Now since $a_1\in H$, by (11.2), we have $D \in H$, and hence,
$a_2 \in H$, since $H \ni\alpha_2=u_1a_2D u_1^{-1}$, and
$H \ni u_1=a_1^{b_1}$ and $H \ni D$.
Repeat this process to obtain
\begin{equation}
a_1, a_2, \ldots, a_{p-1} \in H\ {\rm and}\ D\in H.
\end{equation}
Therefore, $H$ is generated by
\begin{equation}
\{ a_1, a_2, \ldots, a_{p-1}, D, x_1x_2,
\alpha_1, \ldots, \alpha_p,
\beta_1,\ldots, \beta_{p-1}\}.
\end{equation}
However, since each $\alpha_i$ is written in terms of the $a_j$ and $D$,
we can eliminate the $\alpha_i$ from the set of generators (11.5),
and hence $H$ is generated by
\begin{equation}
\{ a_1, a_2, \ldots, a_{p-1}, D, x_1x_2, \beta_1,\ldots, \beta_{p-1}\}.
\end{equation}
Since $u_i \in H, 1\le i\le p-1$, we introduce new generators
$\gamma_j$, replacing $\beta_j$, as
\begin{equation}
\gamma_1 = w_1,\gamma_2 = w_2, \ldots, \gamma_{p-1} = w_{p-1}.
\end{equation}
Then $H$ is generated by
\begin{equation}
\{ a_1, a_2, \ldots, a_{p-1}, D, x_1x_2,
\gamma_1,\ldots, \gamma_{p-1}\}.
\end{equation}
Since $h_2^{b_2'} =
(x_2^{-1} x_1^{-1} D x_1 x_2 D^{-1} a_2^{-1})^{b_2'} \in H$
and
$H\ni\gamma_2=x_3^{-1} h_2^{b_2'}$, it follows that
${x_3}^{-1} \in H$.
Similarly,
using $\gamma_3, \ldots, \gamma_{p-1}$,
we have ${x_4}^{-1},\ldots, {x_p}^{-1} \in H$.
Therefore, we can replace $\gamma_i$ by $x_{i+1}$ for $i\ge2$,
and we see that $H$ is generated by
\begin{equation}
\{ a_1, a_2, \ldots, a_{p-1}, D, x_1x_2, x_3, \ldots, x_p, \gamma_1\}.
\end{equation}
Since $D=x_1x_2\cdots x_p$, we can eliminate $D$ from the set of generators of $H$,
and $H$ is generated by $2p-1$ elements
\begin{equation}
\{ a_1, a_2, \ldots, a_{p-1}, x_1x_2, x_3, \ldots, x_p, \gamma_1\}.
\end{equation}
Since $H$ is free of rank $2p-1$, the above set is a set of free generators of $H$.\\
On the other hand, $G$ is freely generated by
$\{ a_1, a_2, \ldots, a_{p-1}, x_1,x_2, x_3, \ldots, x_p\}$.
Now introduce a new generator $z_2 = x_1x_2$ and
replace $x_2$ by $z_2$. Then we have:
\begin{align}
&(1)\ G \ {\rm is\ generated\ (freely)\ by}\
\{ a_1, a_2, \ldots, a_{p-1}, x_1,z_2, x_3, \ldots, x_p\},\ {\rm and}\nonumber\\
&(2)\ H\ {\rm is\ generated\ (freely)\ by}\
\{ a_1, a_2, \ldots, a_{p-1}, z_2, x_3, \ldots, x_p, \gamma_1\},\nonumber\\
&\ \ \ \ {\rm where}\ \gamma_1=w_1=x_2^{-1}(x_1^{-1} D x_1D^{-1} a_1^{-1})^{b_1'}.
\end{align}
Therefore, if $H=G$, then $x_1\in H$.
In other words, $x_1$ can be written as a word on
$a_i, 1\le i \le p-1,z_2,x_j, 3\le j\le p$, and $\gamma_1$.
(Note that $H$ is a free group of rank $2p-1$.)
Recall $b'_1\neq 0,1$.
Case 1: $ b'_1 > 0$, i.e. $b'_1\ge2$.
Then,
\begin{align*}
\gamma_1 &=
x_2^{-1}(x_1^{-1} D x_1D^{-1} a_1^{-1})(x_1^{-1} D x_1D^{-1}
a_1^{-1})^{b'_1-1}\\
&=
z_2^{-1} D x_1D^{-1} a_1^{-1}(x_1^{-1} D x_1D^{-1}
a_1^{-1})^{b'_1-1}.
\end{align*}
Since $z_2^{-1}$, $D$ and $D^{-1}a_1^{-1}$ are in $H$,
we can replace $\gamma_1$ by
\begin{equation*}
\delta_1= x_1(D^{-1} a_1^{-1})(x_1^{-1} D x_1D^{-1}
a_1^{-1})^{b'_1-2}(x_1^{-1} D x_1).
\end{equation*}
Since $D=z_2 x_3 \cdots x_p$, $D$ does not involve $x_1$ and hence
$\delta_1$ is of the form:\\
$\delta_1= x_1 W_1(x_1^{-1} W_2 x_1W_1)^{b'_1-2}(x_1^{-1} W_2 x_1)$,
where $W_1 =D^{-1} a_1^{-1}$ and $W_2 = D$,
none of which involves $x_1$.
Therefore, $\delta_1$ is a reduced word.
Since $b'_1-2\ge0$, $\delta_1$ involves $x_1$
at least three times, and $\delta_1$ is of the form
$x_1 U x_1$.
Therefore, we cannot write $x_1$ in terms of $a_i, 1
\le i\le p-1, z_2,x_3,\ldots,x_p$ and $\delta_1$.
Case 2: $b'_1<0$. Write $b'_1=-q, q\ge1$. Then
$\gamma_1=x_2^{-1}(a_1 D x_1^{-1}D^{-1} x_1)^q
=z_2^{-1}x_1(a_1 D x_1^{-1}D^{-1} x_1)^q$.
Again, since $z_2 \in H$, we can replace $\gamma_1$ by $\delta_1'$,
$\delta_1'=x_1(a_1 D x_1^{-1}D^{-1} x_1)^q
=x_1(W_1^{-1}x_1^{-1} W_2^{-1} x_1)^q$.
Since $q>0$, $\delta_1'$ is a reduced word and $\delta_1'$
is of the form $x_1Vx_1$, where $V$ contains $x_1$ at least once.
Therefore $x_1$ cannot be written in terms of
$\delta_1', a_i,
1\le i\le p-1, z_2, x_3, \ldots, x_p$.
This proves that $H\neq G$, and hence $\phi$ is not onto.
Theorem \ref{thm:B} is now
completely proved.
\fbox{}
\section{Examples}
In this section, we discuss several examples
that illustrate our main theorems.
\begin{ex}\label{ex:lzero}
All $2$-bridge links in this example have linking number
$0$.
Theorem \ref{thm:D} determines
$g(K(2\alpha,\beta|r))$,
and fibredness is checked by Theorem
\ref{thm:B}.
(1) Consider $K(48,31|r)$.
Since $31/48=[2,2,-4,2,2]$,
the genus is $2$, and it is not fibred for any $r>0$.
The graph is given in Figure 12.1 (1).
The lifts $\{\wti{K}_j\}$ of $K_1$ in the infinite cyclic cover
$M^3$ of $S^3\setminus K_2$ are depicted in
Figure 12.2 (1).
From it, we see that $\Delta_{K(48,31|1)}(t)$ is not monic.
This can also be checked by evaluating
$\Delta_{B(48,31)}(x,y)$:\\
$\Delta_{B(48,31)}(x,y)=(1-x)(1-y)\{
1-(x+y)-xy(x+y)+x^2y^2\}$,
and hence, $\Delta_{K(48,31|r)}(t)=2r(1-t)^2+t$.
Thus, we see that
$\Delta_{K(48,31|1)}(t)$ is not monic.
(2) Consider $K(64,41|r)$.
Since $41/64=[2,2,-4,-2,2]$, the genus is $2$,
and it is not fibred for any $r>0$.
See Figure 12.1 (2) for its graph.
The lifts $\{\wti{K}_j\}$ of $K_1$ in $M^3$
are depicted in Figure 12.2 (2).
From it, we see that $\Delta_{K(64,41|r)}(t)=1$.
The same result is also obtained using
$\Delta_{B(64,41)}(x,y)=(1-x)^2(1-y)^2(1+xy)$,
and hence $\Delta_{K(64,41|r)}(t)=1$
for any $r>0$.
\etna
\etnb
(3) Consider $K(40,11|r)$.
Since $11/40=[4,2,-2,-2,-2]$,
the genus is $2$ and it is fibred only for $r=1$.
See Figure 12.1(3) for its graph.
The lifts are depicted in Figure 12.2~(3).
Also, we have:
$\Delta_{B(40,11)}(x,y)=(1-x)(1-y)\{(x+y)-xy+
xy(x+y)\}$ and hence,
$\Delta_{K(40,11|r)}(t)=r(1-t-t^3+t^4)+t^2$.
(4) Consider $K(112,71|r)$.
Since $71/112=[2,2,-2,2,2,2,-2]$,
the genus is $2$ and it is not fibred for any $r>0$.
The lifts $\{\wti{K}_j\}$ of $K_1$ are depicted in Figure 12.2 (4).
Also, we have:
$\Delta_{B(112,71)}(x,y)=
(1-x)(1-y)\{1-2(x+y)+(x+y)^2-2xy(x+y)+x^2y^2\}$,
and hence, $\Delta_{K(112,71|r)}(t)=2r(1-t)^2+t$.
\end{ex}
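As a complementary check of part (1) of Example \ref{ex:lzero}, the passage from $\Delta_{B(48,31)}(x,y)$ to $\Delta_{K(48,31|r)}(t)$ via the formula $\Delta_{K(r)}(t) = r(t-1)^2 f(t,1) +\varepsilon t^k$ used in the proof of Theorem \ref{thm:B} (a) can be verified symbolically. The following is a minimal sympy sketch; the choice $\varepsilon=-1$, $k=2$ is ours, made so that the two expressions agree up to the usual unit $\pm t^{j}$ ambiguity of the Alexander polynomial.
\begin{verbatim}
import sympy as sp

t, r, x, y = sp.symbols('t r x y')

# Delta_B(48,31)(x,y) = (x-1)(y-1) f(x,y), with f as quoted in part (1) above
f = 1 - (x + y) - x*y*(x + y) + x**2*y**2
f_t1 = sp.expand(f.subs({x: t, y: 1}))     # f(t,1) = -2*t
print(f_t1)

lhs = r*(t - 1)**2*f_t1 - t**2             # r(t-1)^2 f(t,1) + eps t^k, eps = -1, k = 2
rhs = -t*(2*r*(1 - t)**2 + t)              # the stated polynomial times the unit -t
print(sp.simplify(lhs - rhs))              # 0
\end{verbatim}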
\begin{ex}
Each of the first two $2$-bridge links has linking
number $1$, and the other two links have
linking number $2$.
The graphs are depicted in Figure 12.3.
We use Theorem \ref{thm:A},
Proposition \ref{prop:8.1} and
Theorem \ref{thm:8.4}.\\
(1) Consider $K(18,13|r)$.
Since $13/18=[2,2,2,-2,-2]$, we see
$P_1=[[1,1,1]], Q_1=[[-1]]$ and $d_1=-1$.
Since $b_{11}=1$, it follows from Theorem \ref{thm:8.4}
that $\Delta_{K(18,13|r)}(t)$ is not monic for $r=1$,
but it is monic for $r>1$,
and hence $K(18,13|r)$ is fibred for $r>1$.
Since $\lambda=3$ and $\rho=1$, the degree of
$\Delta_{K(18,13|r)}(t)$ is $2r$, and hence,
the genus is $r$ by Proposition \ref{prop:8.1}
and Theorem \ref{thm:A}.
These facts are also confirmed
by evaluating
$\Delta_{B(18,13)}(x,y)$:
$\Delta_{B(18,13)}(x,y)=(x+y)-(x^2+3xy+y^2)
+xy(x+y)$,
and using Proposition \ref{prop:6.1} (1).\\
(2)
Consider $K(482,381|r)$.
Since $381/482=[2,2,2,2,-4,-2,-2,2,2,2,2]$,
we see that $P_1=[[1,1,1]] , Q_1=[[-2,-1,-1]],
P_2=[[1,1,1]], d_1=1$ and $e_1=1$.
Therefore, we see that $\lambda = 7$ and
$\rho =3$, and hence, by
Proposition \ref{prop:8.1}, the degree of
$\Delta_{K(482,381|r)}(t)$ is $6r$ and the genus is $3r$.
Further, it follows from Theorem \ref{thm:8.4}(1)
that it is not fibred for $r=1$, but it is fibred for $r >1$.
These facts are also confirmed by evaluating the
Alexander polynomial of $B(482,381)$, and
using Proposition \ref{prop:6.1} (1).
$\Delta_{B(482,381)}(x,y) =(-x^3 + 2x^2 -x)y^6 +
(-3x^4 + 8x^3 -8x^2 +4x -1)y^5 +
(-3x^5 + 12x^4 -17x^3 +14x^2 -8x +2)y^4 +
(-x^6 + 8x^5 -17x^4 +21x^3 -17x^2 +8x -1)y^3 +
(2x^6 - 8x^5 + 14x^4 -17x^3 +12x^2-3x)y^2 +
(- x^6 + 4x^5 - 8x^4 +8x^3 -3x^2)y -x^5 + 2x^4 -x^3$.\\
(3) Consider $K(60,47|r)$.
Note that $\mbox{$\ell k$} B(60,47)=2$.
Since $47/60=[2,2,2,2,-2,-2,2]$,
we see that the genus is $3r$ and
$\Delta_{K(60,47|r)}(t)$ is monic,
and hence $K(60,47|r)$ is fibred for any $r>0$.
Note that $\Delta_{B(60,47)}(x,y)=
(x+y)-(2x^2+3xy+2y^2)+
(x^3+5x^2y+5xy^2+y^3)-xy(2x^2+3xy+2y^2)
+x^2y^2(x+y)$.\\
(4)
Consider $K(1732,-671|r)$.
Since $-671/1732=[-2,2,4,2,-2,2,2,4,2]$,
we see by Theorem \ref{thm:8.4} (2) that
$\Delta_{K(r)}(t)$ is monic for $r>0$
and hence $K(r)$ is fibred for any $r>0$.
Also, since $\lambda=6$ and $\rho=0$, we have
$g(K(r))=5r+2$.
Note that
$\Delta_{B(1732,-671)}(x,y)=
(x^5-5x^4+9x^3-7x^2+2x)y^5
-(5x^5-23x^4+44x^3-42x^2+18x-2)y^4+
(9x^5-44x^4+87x^3-86x^2+42x-7)y^3-
(7x^5-42x^4+86x^3-87x^2+44x-9)y^2+
(2x^5-18x^4+42x^3-44x^2+23x-5)y+
2x^4-7x^3+9x^2-5x+1$.
\end{ex}
\etnc
\section{Fibred satellite knots of tunnel number one}
In this section,
we determine the genera of the satellite knots of
tunnel number one, and also solve the question of
when they are fibred.
Let $\widehat{K}$ be a satellite knot of tunnel number one.
According to \cite{MS},
the companion of $\widehat{K}$ is a torus knot of type $(a, b)$,
say, with $|a|, |b| >1$,
and the pattern is the torti-rational knot
$K(2\alpha, \beta|ab)$ with $|\alpha|>1$.
To be more precise,
let $B(2\alpha,\beta)=K_1 \cup K_2$ be a $2$-bridge link.
Then by construction, a torti-rational knot,
$K(2\alpha,\beta|r)$ is a knot in
the interior of a (unknotted) solid torus $V$,
where $V$ is homeomorphic to
$S^3 \setminus N(K_2)$,
$N(K_2)$ being a tubular neighbourhood of $K_2$.
Let $m$ be a meridian of $V$.
Then the pattern is a pair $(V, K(2\alpha, \beta|r))$.
Generally, if the pattern is $(V, K(2\alpha, \beta|r))$,
then our technique can be applied to any satellite knot
with a fibred companion $K_C$.
Therefore,
in this section we can prove slightly more general
theorems as follows.
\begin{thm} \label{thm:17.1}
Let $K_C$ be a non-trivial fibred knot and
$\widehat{K}$ be the satellite knot
with companion $K_C$ and the pattern
$(V, K(2\alpha, \beta|r)), r \ne 0$.
Suppose $\ell = lk(K_1,K_2) \ne 0$.
Then the following hold:\\
(1) the genus of $\widehat{K}$ is exactly half of the
degree of the Alexander polynomial of $\widehat{K}$.\\
(2) $\widehat{K}$ is fibred
if (and only if) the Alexander polynomial of
$\widehat{K}$ (and hence, that of
$K(2\alpha, \beta|r)$) is monic.
\end{thm}
\begin{thm}\label{thm:17.2}
Under the same notation as in Theorem \ref{thm:17.1},
suppose $r \ne 0$ and $\ell =0$. Then:\\
(1) $\widehat{K}$ is not fibred for any $r\ne 0$
(see \cite[Proposition 4.15]{BZ}).\\
(2) Let $[2c_1,2c_2, \ldots, 2c_m]$ be the continued fraction of
$\beta/2\alpha$. Then $g(\widehat{K}) = \sum_{odd\ j} |c_j|$.
\end{thm}
{\it Proof of Theorem \ref{thm:17.1}.}
(1) Let $\phi$ be a faithful homeomorphism from a solid torus
$V$, in which
$K(2\alpha,\beta|r)$ is embedded, to a tubular
neighbourhood
$N(K_C)$ of $K_C$ in $S^3$.
The minimal genus Seifert surface $F$ for $K=K(2\alpha, \beta|r)$
we had in Section 9
intersects $\partial V$ at
$\ell$ parallel longitudes,
where $\ell=\ell k(K_1,K_2)>0$.
Since the image of each longitude under $\phi$
spans a fiber surface $S_C$ of genus
$g(K_C)$ in $S^3-\phi(V)$, we can construct
a Seifert surface $\widehat{F}$ for $\widehat{K}$ by
capping off $\phi(V\cap F)$ by $\ell$ copies of $S_C$.
Hence we have
$g(\widehat{K})\le
g(K) +
\ell g(K_C)$.
Combining with Schubert's inequality
\cite[Proposition 2.10]{BZ}, we have:
\begin{equation}
g(\widehat{K})=
g(K) + \ell g(K_C).
\end{equation}
Moreover, since $K_C$ is a fibred knot, we have:
\begin{equation}
g(K_C) = \frac{1}{2}\deg \Delta_{K_C}(t).
\end{equation}
With the fact that $g(K)=
\frac{1}{2} \deg
\Delta_{K}(t)$ and
Seifert's theorem \cite[Proposition 8.23 (b)]{BZ}
(namely, $\Delta_{\widehat{K}}(t)=\Delta_{K}(t)\,\Delta_{K_C}(t^{\ell})$),
we obtain
\begin{equation}
2g(\widehat{K})=
\deg \Delta_{K}(t) +
\ell \deg \Delta_{K_C}(t) =
\deg \Delta_{\widehat{K}}(t).
\end{equation}
This proves (1).
Next, we prove (2).
Suppose $\Delta_{K(2\alpha,\beta|r)}(t)$ is monic.
Then construct a fibre surface $F$ for
$K=K(2\alpha,\beta|r)$ as in Section 9. As we did in the proof of
(1), construct a minimal genus Seifert surface $\widehat{F}$
for $\widehat{K}$.
If we needed compressions to obtain $F$ from
the surface $F(r)$, then
we undo the cuts in $\widehat{F}$
caused by the compressions by plumbing Hopf bands.
Note that this is possible since
the plumbings of Hopf bands in the proof of Theorem \ref{thm:5.1}
can be done locally in $V$.
Denote by $F'$ the resulting Seifert surface.
To complete the proof, it suffices to show that
$F'$ is a fibre surface.
Now $F'$ looks as in Figure 7.7, where the
horizontal parts are understood as parallel copies of
$S_C$. We can remove the bands by deplumbing Hopf bands,
as we did in the proof of Proposition \ref{prop:15.2},
until we have a new Seifert surface
$F''$ \lq consisting\rq\
of the annuli, $\ell$ copies of $S_C$ and $\ell-1$ half-twisted bands,
where each pair of adjacent copies of $S_C$ is connected by
a half-twisted band.
We show that $F''$ is a fibre surface by using
sutured manifolds.
Let $(F''\times I, \partial F'' \times I)$ be the sutured manifold
obtained from $F''$.
Apply a $C$-product disk decomposition corresponding
to each site of ribbon holes arising from the intersection
of the annuli and copies of $S_C$.
Then we have $1$-handles each with a meridional suture.
Actually, we have $\ell(\lambda-\ell)/2$ $1$-handles.
See Figure 13.1 (a).
We can remove, by $C$-product decompositions,
all such $1$-handles coming from the annuli (Figure 13.1 (b)).
Apply a $C$-product decomposition
between a pair of
parallel copies of $S_C$'s. Then the complementary sutured manifold
is separated into two pieces: one of which is a product sutured manifold
between two copies of $S_C$ and hence we can disregard it
(Figure 13.1 (c)).
Repeating this, we have a sutured manifold obtained from $S_C$
(Figure 13.1 (d)).
Since $S_C$ is a fibre surface, the complementary sutured manifold
is a product sutured manifold. Therefore,
$F''$ is a fibre surface.
Theorem \ref{thm:17.1} (2) is now proved.
\fbox{}
\satel
{\it Proof of Theorem \ref{thm:17.2}.}
For any $r>0$ and any fibred companion $K_C$, we see
$g(\widehat{K})\ge g(K(2\alpha, \beta|r))$.
On the other hand, since $\ell=0$, from the construction
of $F_1$ in Section 7, we have $g(\widehat{K})=
g(K(2\alpha, \beta|r))$, and hence (2) follows immediately
from Theorem \ref{thm:D}.
\fbox{}
\begin{rem}\label{rem:17.3}
When $r = 0$, $K(2\alpha, \beta|0)$ is a trivial knot.
If we consider the satellite knot $\widehat{K}$
with $K(2\alpha,\beta|0)$ as a pattern knot,
this satellite knot gives rise to a very interesting problem.
As is known to some specialists,
even if the pattern and the companion of
$\widehat{K}$ are both fibred,
$\widehat{K}$ may not be fibred \cite{EN}.
Therefore, the determination of the genus and fibredness
of a satellite knot $\widehat{K}$ with a fibred companion and
the pattern $K(2\alpha, \beta|0)$ is not straightforward.
We leave this problem untouched.
We refer the reader who is interested in this problem
to \cite{Dan}.
\end{rem}
\section{Genus one knots}
In this final section,
we determine torti-rational knots
$K(2\alpha, \beta|r)$ of genus one (Theorem \ref{thm:gtOne}).
Recall that torti-rational knots have tunnel number one.
(We denote the tunnel number of $K$ by $t(K)$.)
Recently Scharlemann \cite{S}
proved a conjecture by Goda and Teragaito
which states that if
a knot $K$ has $g(K)=t(K)=1$,
then $K$ is a $2$-bridge knot or a satellite knot.
Before that, Goda and Teragaito \cite{GT} had classified
satellite knots $K$ with $g(K)=t(K)=1$.
\begin{prop}\cite[Proposition 18.1]{GT} \label{prop:GT}
Let $K$ be a satellite knot with $g(K)=t(K)=1$.
Then the pattern knot is a genus one $2$-bridge knot.
More precisely, the pattern knot is the torti-rational
knot $K(8d, 4d+1 |pq)$ and the companion knot is a
torus knot $T(p,q)$.
\end{prop}
Note that $(4d+1)/8d = [2, 2d, -2]$, and hence
the pattern knot is a $2$-bridge knot whose
associated continued fraction is $[2d,2pq]$.
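For instance, the identity $(4d+1)/8d = [2, 2d, -2]$ can be checked symbolically; the following is a minimal sympy sketch, where the convention $[a_1,\dots,a_n]=1/(a_1-1/(a_2-\cdots-1/a_n))$ is our assumption.
\begin{verbatim}
import sympy as sp

d = sp.symbols('d', positive=True)

def eval_cf(cf):
    # [a_1,...,a_n] = 1/(a_1 - 1/(a_2 - ... - 1/a_n))
    x = sp.S(cf[-1])
    for c in reversed(cf[:-1]):
        x = c - 1 / x
    return 1 / x

print(sp.simplify(eval_cf([2, 2*d, -2]) - (4*d + 1)/(8*d)))   # 0
\end{verbatim}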
In Proposition \ref{prop:GT}, it is evident that
the associated $2$-bridge link has linking number zero,
and that the pattern knot has genus one.
Hence Proposition \ref{prop:GT} immediately follows from
Theorem \ref{thm:gtOne} below.
In Theorem \ref{thm:gtOne}, we have a family of
torti-rational knots $K$ with \mbox{$g(K)=t(K)=1$},
which, at first glance, look like
counterexamples to the Goda-Teragaito conjecture
(see Example \ref{ex:19.1}).
\begin{thm}\label{thm:gtOne}
A torti-rational knot
$K=K(2\alpha, \beta|r)$ with
$\ell=\mbox{$\ell k$} B(2\alpha, \beta)\ge0, r>0$ has
$g(K)=1$ if and only if one of the following is
satisfied:
Case A: $\ell=0$.\\
A1: $\beta/2\alpha=\pm [2, 2d, -2], d\neq 0$ and $r$ is arbitrary.\\
Case B: $\ell>0$.\\
B1: $\beta/2\alpha=[2,2,2], r=2$\\
B2: $\beta/2\alpha=[2,2,2,2,2], r=1$\\
B3: $\beta/2\alpha=[2,2d,2], d\neq 1, r=1$.
Note possibly, $d=0$.\\
B4: $\beta/2\alpha=\pm [
\underbrace{2,2,\ldots,2}_{2a+1},
2b,
\underbrace{-2,-2,\ldots,-2}_{2a-1}]$,
or
$\pm [
\underbrace{-2,-2,\ldots,-2}_{2a-1},
2b,
\underbrace{2,2,\ldots,2}_{2a+1}]$, $a \ge1, b\neq0, r=1$.
\end{thm}
\begin{rem}\label{rem:tunnel1}
In Theorem \ref{thm:gtOne},
$K=K(2\alpha, \beta|r)$ in A1, or in B4 with $a=1$,
is a $2$-bridge knot.
If $K$ is in B1 or B2, then $K$ is a trefoil knot.
If $K$ is in B3,
then $K$ is a twist knot.
If $K$ is in B4 with $a\ge 2$, then $K$ is a satellite knot
with its companion a torus knot
$T(a,a+1)$ and the
pattern knot $B(4ab(a+1)-1,2a(a+1))$,
where the Alexander polynomial is
$\Delta_{K(1)}(t) = ab(a+1)(t-1)^2 + t$.
\end{rem}
As a direct consequence, we have:
\begin{cor}\label{cor:tunnel2}
A torti-rational knot $K$ of genus one is
a satellite knot if and only if $K$ is in
Case B4 with $a\ge 2$ of Theorem \ref{thm:gtOne}.
\end{cor}
\begin{rem}\label{cex}
(1) Our knots in B4 with $a\ge 2$
have genus one and hence are prime and not cable knots.
Therefore, they give a negative answer to the question
posed in \cite{AM}.
(2) We can also prove that if $\beta/2\alpha=[4,\pm2,\pm 4]$,
or $[4,\pm3, \pm4]$, then $K(2\alpha, \beta|\pm1)$
cannot be a prime satellite knot
(c.f. \cite[Theorem 1.6 (2)]{GHS}).
\end{rem}
\begin{ex} \label{ex:19.1}
Consider a 2-bridge link $B(46,39)$, where
$39/46=[2,2,2,2,2,2,-2,-2,-2]$.
(This is the case $(a, b)=(2,1)$
in Case B4.)
The diagram of $K(r), r=1$, is depicted in Figure 14.1(a)
together with a compressible Seifert surface $F$.
Compressing $F$ three times, we obtain
a minimal genus Seifert surface $F'$ of
genus $1$, isotopic
to those depicted in Figures 14.1 (b), (c) and (d).
Then we see that $F'$ is a Seifert surface
for the satellite
knot with its companion
$T(2,3)$ and the pattern knot a
$2$-bridge knot $B(23,12)$.
\end{ex}
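The numerical data in this example agree with Remark \ref{rem:tunnel1} for $(a,b)=(2,1)$. A minimal sketch of the bookkeeping (the continued-fraction convention $[a_1,\dots,a_n]=1/(a_1-1/(a_2-\cdots-1/a_n))$ is again our assumption):
\begin{verbatim}
from fractions import Fraction

def eval_cf(cf):
    # [a_1,...,a_n] = 1/(a_1 - 1/(a_2 - ... - 1/a_n))
    x = Fraction(cf[-1])
    for c in reversed(cf[:-1]):
        x = c - 1 / x
    return 1 / x

a, b = 2, 1
# Case B4: (2a+1) entries 2, then 2b, then (2a-1) entries -2
cf = [2]*(2*a + 1) + [2*b] + [-2]*(2*a - 1)
print(cf, eval_cf(cf))                    # [2,2,2,2,2,2,-2,-2,-2]  39/46
print(4*a*b*(a + 1) - 1, 2*a*(a + 1))     # 23 12 : pattern knot B(23,12)
print((a, a + 1))                         # (2, 3): companion torus knot T(2,3)
\end{verbatim}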
\eighteen
{\it Proof of Theorem \ref{thm:gtOne}.}
Case A: By (10.1),
$g(K) = 1$ if and only if
$\beta/2\alpha=\pm[2,2d,-2], d\neq 0$.
Therefore, $K$ is
a $2$-bridge knot associated to
the continued fraction $[\pm2d,2r]$.
Case B:
By Proposition \ref{prop:8.1},
we see
\begin{equation}
2(\lambda-1-\rho)+(\lambda-1)(r-1)\ell+
(\lambda-2)(\ell -1)=2.
\end{equation}
Since
each term of the LHS in (14.1) is non-negative,
we have two cases: (i) $\lambda-1-\rho=0$
and (ii) $\lambda-1-\rho=1$.
For case (i), as is seen in the proof of
Proposition \ref{prop:8.6++}, we have
$(2\alpha, \beta)=(2\lambda, 2\lambda-1)$
and $\lambda=\ell$,
and hence, either
(a) $\lambda=\ell=2$ and $r=2$, or
(b) $\lambda=\ell=3$ and $r=1$.
Therefore, we have either B1 or B2.
For case (ii), we have either
(c) $\lambda=2$ and hence $r=1$ and $\rho=0$, or
(d) $\lambda>2, \ell=r=1$ and hence $\rho=\lambda-2$.
Therefore, we have B3 or B4.
This proves Theorem \ref{thm:gtOne}.
\fbox{}
\noindent
{\bf Acknowledgements.}
The first author is
partially supported by MEXT, Grant-in-Aid for
Young Scientists (B) 18740035,
and the second author is
partially supported by NSERC Grant~A~4034.
The authors express their appreciation to
K. Morimoto, M. Sakuma, T. Kobayashi, T. Kanenobu, M. Kidwell
and D. Silver
for their invaluable comments.
\end{document}
\begin{document}
\title[]{Twists and resonance of $L$-functions, II}
\author[]{J. Kaczorowski \lowercase{and} A. Perelli}
\date{}
\maketitle
{\bf Abstract.} We continue our investigations of the analytic properties of nonlinear twists of $L$-functions developed in \cite{Ka-Pe/2005}, \cite{Ka-Pe/2011a} and \cite{Ka-Pe/resoI}. Let $F(s)$ be an $L$-function of degree $d$. First we extend the transformation formula in \cite{Ka-Pe/2011a}, relating a twist $F(s;f)$ with leading exponent $\kappa_0>1/d$ to its dual twist $\overline{F}(s;f^*)$. Then we combine the results in \cite{Ka-Pe/resoI} with such a transformation formula to obtain the analytic properties of new classes of nonlinear twists. This allows us to detect several new cases of resonance of the classical $L$-functions.
{\bf Mathematics Subject Classification (2000):} 11 M 41
{\bf Keywords:} $L$-functions, Selberg class, twists.
\vskip1cm
\section{Introduction}
1.1.~{\bf Outline.} This paper is a continuation of \cite{Ka-Pe/resoI}, to which we refer for definitions, notation and a general discussion of twists of $L$-functions. In \cite{Ka-Pe/resoI} we gave a rather complete description of the analytic properties of the nonlinear twists (as usual $e(x)=e^{2\pi ix}$)
\[
F(s;f) = \sum_{n=1}^\infty \frac{a(n)}{n^s} e(-f(n,\bfalpha))
\]
with leading exponent $\kappa_0\leq 1/d$, as well as some applications to the resonance problem for such twists. Here $F(s)$ is a function of degree $d$ in the extended Selberg class (i.e. $F\in{\mathcal S}_d^\sharp$) and
\begin{equation}
\label{1-1}
f(\xi,\bfalpha) = \sum_{j=0}^N \alpha_j \xi^{\kappa_j}, \hskip1.5cm 0\leq \kappa_N<\dots<\kappa_0, \ \alpha_j\in\RR \ \text{and} \ \prod_{j=0}^N\alpha_j\neq0.
\end{equation}
In this paper we deal with the case $\kappa_0>1/d$. We start with an extension of the transformation formula, given in Theorem 1.1 of \cite{Ka-Pe/2011a}, relating a nonlinear twist $F(s;f)$ with $\kappa_0>1/d$ to its dual twist $\overline{F}(s;f^*)$; see Theorem 1 below. Then we combine the results in \cite{Ka-Pe/resoI} and Theorem 1 to obtain the analytic properties of new classes of nonlinear twists. Finally, we consider in greater detail some classes of twists of degree 2 $L$-functions. The applications to the resonance problem are similar to those in \cite{Ka-Pe/resoI}, hence we shall very briefly outline them.
In order to give the flavor of the results in the paper, we first present two very special cases of our theorems. The elliptic curve $E$ of equation $y^2-y = x^3-x$ has conductor 37 and corresponds to a newform of weight 2 and level 37; see Zagier \cite{Zag/1985} for a pre-Wiles justification of this. Denoting by $L_E(s)$ the associated $L$-function and by $a_E(n)$ its coefficients, we have that
\[
F(s) = L_E(s+\frac12)
\]
is a degree 2 element of the Selberg class, hence we may apply Theorem 6 below. Choosing $k=1$, $\ell=0$ and $\alpha=(37)^{-2/3}$, and recalling that the conductor of $F(s)$ equals 37, from the definition of spectrum of $F(s)$ in \cite{Ka-Pe/resoI} we see that $2(37)^{-1/2} \in \ \text{Spec}(F)$. Hence, using the notation introduced below, from Theorem 6 we deduce that the corresponding function
\[
f(\xi)= \frac{3}{2(2738)^{1/3}} \xi^{2/3} + \frac{1}{(1369)^{1/3}} \xi^{1/6}
\]
belongs to $\A_0(F)\setminus \A_{00}(F)$, since $F(s)$ is entire. Thus by Theorem 4 below (after a change of variable) we see that the twisted $L$-function
\[
L_E(s;f)=\sum_{n=1}^\infty \frac{a_E(n)}{n^s} e(-\frac{3}{2(2738)^{1/3}} n^{2/3} - \frac{1}{(1369)^{1/3} }n^{1/6})
\]
is meromorphic on $\CC$, and has a simple pole at $s_0 = 1 + \frac{1}{4D(f)}$ since $\theta_F=0$. Moreover, we have that $f=TS(f_0)$, where $f_0$ is the standard twist and $S$ is a shift of degree 2; hence $D(f) = 2\cdot2-1=3$, thus $s_0=13/12$. As a consequence, choosing $r=1$ in \eqref{1-16} below, we deduce the following asymptotic formula for the associated nonlinear exponential sum:
\[
\sum_{n=1}^\infty a_E(n) e(-\frac{3}{2(2738)^{1/3}} n^{2/3} - \frac{1}{(1369)^{1/3} }n^{1/6}) e^{-n/x} \sim c_0 x^{13/12}
\]
as $x\to\infty$, with a certain $c_0\neq0$. If we change the value of $\alpha$, e.g. we choose $\alpha=(37)^{-20/31}$, then by a similar argument we see that the resulting function $f(\xi)$ belongs to $\A(F)\setminus\A_0(F)$. Hence by Theorems 3 and 6 below (again after a change of variable) we obtain that
\[
L_E(s;f)=\sum_{n=1}^\infty \frac{a_E(n)}{n^s} e(-\frac{3}{2(2738)^{1/3}} n^{2/3} - \frac{1}{(37)^{20/31} }n^{1/6})
\]
is entire, and the corresponding nonlinear exponential sum is $O(1)$. Similar results can be deduced, from the above quoted theorems, for the coefficients any degree 2 $L$-function, e.g. the divisor function $d(n)$ or the Ramanujan function $\tau(n)$.
1.2.~{\bf Transformation formula.} In order to state Theorem 1 we need to introduce further notation, not present in \cite{Ka-Pe/resoI}, concerning nonlinear twists of a given $F\in{\mathcal S}_d^\sharp$, as well as some results proved in \cite{Ka-Pe/2011a}. First we rewrite the twist function $f(\xi,\bfalpha)$ (sometimes simply $f(\xi)$ or $f$) as
\[
f(\xi,\bfalpha) = \xi^{\kappa_0}\sum_{j=0}^N \alpha_j \xi^{-\omega_j} \hskip1.5cm 0=\omega_0<\dots<\omega_N\leq \kappa_0,
\]
we recall that $\kappa_0$ is the {\it leading exponent} of $f$ and write $\kappa_0=\text{lexp}(f)$, and consider the semigroup
\[
\D_f = \{\omega=\sum_{j=1}^N m_j\omega_j: m_j\in\ZZ, \ m_j\geq 0\}.
\]
If $\alpha_0>0$ and $ \kappa_0>1/d$ we put ($q_F$ is the conductor of $F(s)$, see e.g. p.1398 of \cite{Ka-Pe/2011a})
\begin{equation}
\label{1-2}
\Phi(z,\xi,\bfalpha) = z^{1/d} - 2\pi f(\frac{q z}{\xi},\bfalpha) \hskip1.5cm q=q_F(2\pi d)^{-d}.
\end{equation}
Thanks to (i) of Theorem 1.2 of \cite{Ka-Pe/2011a} we have the operator
\begin{equation}
\label{1-3}
T(f)(\xi,\bfalpha) := \frac{1}{4\pi^2 i} \int_{\C} \frac{\Phi(z,\xi,\bfalpha) \frac{\partial^2}{\partial z^2} \Phi(z,\xi,\bfalpha)}{\frac{\partial}{\partial z} \Phi(z,\xi,\bfalpha)} \d z = \xi^{\kappa_0^*} \sum_{\omega\in\D_f} A_\omega(\bfalpha) \xi^{-\omega^*},
\end{equation}
where
\[
\kappa_0^* = \frac{\kappa_0}{d\kappa_0-1}, \hskip1.5cm \omega^* = \frac{\omega}{d\kappa_0-1}
\]
and $\C$ is, roughly, a circle around the $z$-critical point of $\Phi(z,\xi,\bfalpha)$; see p.1401 of \cite{Ka-Pe/2011a} for the exact definition of $\C$. Moreover, from Theorem 1.2 of \cite{Ka-Pe/2011a} we obtain some properties of the coefficients $A_\omega(\bfalpha)$, and also that $T$ is self-reciprocal (i.e. $T^2=$ identity).
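As a quick consistency check of the exponent bookkeeping only (the full operator $T$ involves the contour integral \eqref{1-3}), note that the map $\kappa\mapsto\kappa/(d\kappa-1)$ is an involution, and that for the example of Section 1.1 (where $d=2$, $\kappa_0=2$ and $\omega_1=3/2$) it reproduces the exponents $2/3$ and $1/6$. A minimal sympy sketch:
\begin{verbatim}
import sympy as sp

d, kappa = sp.symbols('d kappa', positive=True)
dual = lambda k: k/(d*k - 1)                      # kappa -> kappa^*

print(sp.simplify(dual(dual(kappa)) - kappa))     # 0, consistent with T^2 = identity

# Section 1.1 example: d = 2, S(f_0)(xi) = xi^2 + alpha*xi^(1/2),
# hence kappa_0 = 2 and omega_1 = 2 - 1/2 = 3/2.
k0, w1 = sp.Integer(2), sp.Rational(3, 2)
k0s, w1s = k0/(2*k0 - 1), w1/(2*k0 - 1)
print(k0s, k0s - w1s)                             # 2/3 1/6
\end{verbatim}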
{\bf Remark 1.} Note that the operator $T$ depends on the degree $d$ of $F(s)$ but also on its conductor $q_F$. Therefore we fix the function $F\in{\mathcal S}^\sharp_d$ when dealing with the operator $T$ or, at least, we fix degree and conductor. \fine
\noindent
We also write
\begin{equation}
\label{1-4}
s^* = \frac{s+\frac{d\kappa_0}{2}-1+id\theta_F\kappa_0}{d\kappa_0-1},
\end{equation}
where $\theta_F$ is the internal shift of $F(s)$ (see \cite{Ka-Pe/resoI}; usually $\theta_F=0$ for the classical $L$-functions). We denote by $T^\flat(f)(\xi,\bfalpha)$ the finite sum obtained from the right hand side of \eqref{1-3} dropping the terms with $\omega>\kappa_0$, and write
\begin{equation}
\label{1-5}
f^*(\xi,\bfalpha) = T^\flat(f)(\xi,\bfalpha).
\end{equation}
Since $T$ is self-reciprocal we have
\[
(f^*)^*(\xi,\bfalpha) = f(\xi,\bfalpha).
\]
Finally, for $\alpha_0<0$ we put
\[
T(f)(\xi,\bfalpha) = -T(-f)(\xi,\bfalpha),
\]
and hence for every $f(\xi,\bfalpha)$ we have
\begin{equation}
\label{1-6}
f^*(\xi,\bfalpha) = - (-f)^*(\xi,\bfalpha) \hskip1.5cm (f^*)^*(\xi,\bfalpha) = f(\xi,\bfalpha).
\end{equation}
{\bf Theorem 1.} {\sl Let $F\in{\mathcal S}_d^\sharp$ with $d\geq1$, $f(\xi,\bfalpha)$ be as in \eqref{1-1} with $\kappa_0>1/d$, and let $K\geq0$. There exist an integer $J\geq0$, constants $0=\eta_0<\eta_1<\dots<\eta_J$ and functions $W_0(s),\dots,W_J(s)$, $G(s;f)$ holomorphic for $\sigma>-K$, with $W_0(s)$ nonvanishing there, such that for $\sigma>-K$
\begin{equation}
\label{1-7}
F(s;f) = \sum_{j=0}^J W_j(s) \overline{F}(s^*+\eta_j;f^*) + G(s;f).
\end{equation}
Moreover, uniformly for $s$ in any given vertical strip inside $\sigma>-K$ and any $\epsilon>0$ we have}
\[
G(s;f) \ll e^{\epsilon|t|} \hskip1.5cm |t|\to\infty.
\]
The meaning of \eqref{1-7} is that $F(s;f) - \sum_{j=0}^J W_j(s) \overline{F}(s^*+\eta_j;f^*)$ has holomorphic continuation to $\si>-K$. If $1<d\kappa_0<2$ and $\si>1/2$, then $\Re s^* > \si$ and hence \eqref{1-7} gives some analytic continuation of $F(s;f)$ to the left of $\si=1$. However, below we show that Theorem 1 can be coupled with other ideas to give meromorphic continuation for a new class of nonlinear twists.
{\bf Remark 2.} Comparing with Theorem 1.1 of \cite{Ka-Pe/2011a}, we see that in Theorem 1 we dropped the three restrictions $F(s)$ entire, $\theta_F=0$ and $\sigma>0$. Hence the transformation formula in Theorem 1.1 of \cite{Ka-Pe/2011a} holds now for every $F\in{\mathcal S}_d^\sharp$ and for $s$ in any given right half-plane. Moreover, Theorem 1 contains in addition a bound on the order of growth of the function $G(s;f)$. Note that condition $d\geq 1$ is equivalent to $d>0$, since the extended Selberg class is empty for degrees between 0 and 1; see Conrey-Ghosh \cite{Co-Gh/1993} and our paper \cite{Ka-Pe/1999a}. Note also that the integer $J$, the constants $\eta_j$ and the functions $W_j(s)$ and $G(s;f)$ may depend on $F(s)$, $f(\xi,\bfalpha)$ and $K$. \fine
{\bf Remark 3.} An inspection of the proof of Theorem 1, see Section 2.5, allows us to say something more on the shifts $\eta_j$ and on the functions $W_j(s)$. In particular, $\eta_J>(K+\frac{d\kappa_0}{2})/(d\kappa_0-1)$ and the $W_j(s)$ are of the form $W_j(s) = e^{a_js}P_j(s)$ with $a_j\in\RR$, $P_j\in\CC[s]$ and $\deg P_j \ll j$. \fine
{\bf Remark 4.} In Theorem 4 of \cite{Ka-Pe/resoI} we showed that the presence of negative exponents in the function $f(\xi,\bfalpha)$ produces a kind of stratification of the twist $F(s;f)$ in terms of shifts of the twists $F(; f^\flat)$ ($f^\flat$ denotes, according to the above notation, the part of $f$ with exponents $\geq0$); see formula (4.1) there. In the proof of Theorem 1 we actually cut off the negative exponents coming out naturally in $T(f)(\xi,\bfalpha)$, but their effect is present in the stratification appearing in formula \eqref{1-7} above. \fine
The strategy of the proof of Theorem 1 is sketched at the beginning of Section 2.
1.3.~{\bf The group $\frak{G}$.} Now we turn to the applications of Theorem 1 and of the results in \cite{Ka-Pe/resoI}, but we first have to set up further notation. Along with the {\it duality} operator $T$ in \eqref{1-3}, whose applicability is limited by the fact that it is self-reciprocal, we consider the {\it shift} operator $S$, defined by
\begin{equation}
\label{1-8}
S(f)(\xi,\bfalpha) = f(\xi,\bfalpha) + P(\xi),
\end{equation}
where $P\in\ZZ[\xi]$ has $\deg P\geq 1$. We define the degree of $S$ as $\deg S= \deg P$, and note that every $S$ can be obtained by repeated applications of the shifts $S^{(k)}$, associated with $P(\xi)=\xi^k$ for $k=1,2,\dots$, and of their inverses (associated with $P(\xi)=-\xi^k$). Clearly, $S$ acts trivially on twists, i.e. $F(s;S(f)) = F(s;f)$, but the action of $T$ on $S(f)(\xi,\bfalpha)$ differs from the action of $T$ on $f(\xi,\bfalpha)$. This enables us to build nontrivial chains of applications of the operators $T$ and $S$.
{\bf Remark 5.} This phenomenon, along with the transformation formula (see Theorem 1) and the properties of the standard twist (see \cite{Ka-Pe/2005} and \cite{Ka-Pe/resoI}), is the basis for the most interesting aspects of our twist theory. Indeed, in \cite{Ka-Pe/2002} and \cite{Ka-Pe/2011a} we exploited it to prove the degree conjecture for ${\mathcal S}^\sharp$ in the range $1<d<2$, while in this paper it is used to obtain the analytic properties of a new class of nonlinear twists. \fine
We first consider the formal group $\frak{G}$ generated by the symbols $\bfT, \bfS^{(1)}, \bfS^{(2)}, \dots$, satisfying the relations $\bfT^2=$ identity and $\bfS^{(h)}\bfS^{(k)} = \bfS^{(k)}\bfS^{(h)}$. Thus every element of $\frak{G}$ has the form
\[
\bfG = \bfT^a \bfS_k \bfT \bfS_{k-1} \cdots \bfT \bfS_1 \bfT^b \hskip1.5cm a,b\in\{0,1\},
\]
where each $\bfS_j$ is a product of $\bfS^{(k)}$'s and their inverses. Then, for given $F\in{\mathcal S}_d^\sharp$ with $d\geq1$ and $f(\xi,\bfalpha)$ as in \eqref{1-1}, to an element $\bfG$ as above we associate, with obvious notation, the operator $G$ defined by
\begin{equation}
\label{1-9}
G(f) = (T^\flat)^a S_k T^\flat S_{k-1} \cdots T^\flat S_1 (T^\flat)^b (f),
\end{equation}
provided it is well defined. Indeed, in view of Remark 1 the action of $T^\flat$ depends on $F(s)$ and, by Theorem 1, $T^\flat(f)$ makes sense under the compatibility condition that lexp$(f)>1/d$. Since in the applications of Theorem 1 in this paper we usually start with functions $f_0(\xi,\bfalpha)$ satisfying
\begin{equation}
\label{1-10}
0\leq \text{lexp}(f_0) \leq 1/d,
\end{equation}
we choose $b=0$ in \eqref{1-9}. Moreover, we may always assume that $a=1$ since, as we said, the action of $S$ on twists is trivial. Therefore we may assume that $G$ has the form
\begin{equation}
\label{1-11}
G = T^\flat S_k T^\flat S_{k-1} \cdots T^\flat S_1,
\end{equation}
in which case $G(f_0)$ is well defined if and only if for $j=1,\dots,k$ we have
\begin{equation}
\label{1-12}
\ell_j := \text{lexp}(S_jT^\flat \cdots T^\flat S_1(f_0))> 1/d.
\end{equation}
We remark that condition \eqref{1-12} is a genuine restriction only when $d\leq 2$, i.e. for $d=1$ and $d=2$ in view of the main result in \cite{Ka-Pe/2011a}, thanks to the following
{\bf Fact 1.} {\sl If $d>2$ then $\ell_j=\deg S_j$ for $j=1,\dots,k$. In particular, \eqref{1-12} is satisfied.} \fine
\noindent
The proof of this fact will be given at the beginning of Section 3.
Given $F\in{\mathcal S}^\sharp$, if $G$ as in \eqref{1-11} and $f_0(\xi,\bfalpha)$ as in \eqref{1-1} satisfy \eqref{1-10} and \eqref{1-12}, thanks to Theorem 1 and the results in \cite{Ka-Pe/resoI} we may hope to get nontrivial information on the analytic properties of the twist $F(s;G(f_0))$. We therefore define
\begin{equation}
\label{1-13}
\A(F) = \{f = G(f_0): \text{$G$ as in \eqref{1-11} and $f_0$ as in \eqref{1-10} satisfy \eqref{1-12}}\}.
\end{equation}
Given $G(f_0), H(f_1)\in\A(F)$, we first observe the following
{\bf Fact 2.} {\sl If $d>2$ we have $G(f_0)=H(f_1) \Rightarrow G=H$.} \fine
\noindent
Again, for the proof see the beginning of Section 3. For $d\leq 2$ this does not hold, as the following example shows. Suppose that $F(s)$ has degree 2 and conductor 1, and let $f_0(\xi)=\alpha\sqrt{\xi}$. Then a computation based on Theorem 1.2 of \cite{Ka-Pe/2011a} shows that
\[
T^\flat S^{(1)}(f_0)(\xi) = -\xi +\alpha\sqrt{\xi},
\]
hence, for example, $T^\flat S^{(1)} S^{(1)}T^\flat S^{(1)}(f_0) = T^\flat S^{(1)}(f_0)$ but $T^\flat S^{(1)} S^{(1)}T^\flat S^{(1)} \neq T^\flat S^{(1)}$.
The number of $T^\flat$ in $G$ is called the {\it weight} of $G$ and is denoted by $\omega(G)$; moreover, given $f\in\A(F)$ we denote by
\[
\omega(f) = \min\{\omega(G): \text{$G$ as in \eqref{1-11} and $G(f_0)=f$ for some $f_0$ satisfying \eqref{1-10}}\}
\]
the {\it weight} of $f$. In view of Fact 2, for $d>2$ the pair $(G,f_0)$ is uniquely determined by $f=G(f_0)$, and in this case we have $\omega(f)=\omega(G)$. If $f\in\A(F)$ has the form $f=G(f_0)$ and $\omega(f)\geq 1$ we write
\[
D(f) = \prod_{j=1}^k(d\ell_j-1),
\]
where the $\ell_j$ are associated with $f_0$ as in \eqref{1-12}, while if $\omega(f)=0$ we simply put $D(f) = 1$. The main feature of $D(f)$ is that it is independent of the particular representation of $f$ inside $\A(F)$, as shown (with obvious notation) by the following
{\bf Fact 3.} {\sl If $f=G(f_0)=H(f_0')$ then $\prod_{j=1}^k (d\ell_j-1) = \prod_{j=1}^{k'} (d\ell'_j-1)$.} \fine
\noindent
Fact 3 follows at once from Theorem 4 below. Of course, in view of Fact 2, it is nontrivial only for $d\leq 2$.
Finally, we refer to \cite{Ka-Pe/resoI} for the notions and properties of standard twist $F(s,\alpha)$ and spectrum Spec$(F)$ of $F\in{\mathcal S}^\sharp$, and define
\[
\text{Spec$^*(F) = \{\alpha\in\RR: F(s,\alpha)$ is not entire$\}$}.
\]
Clearly
\[
\text{Spec}^*(F) =
\begin{cases}
\text{Spec}(F) \cup (-\text{Spec}(F)) \ \ & \text{if $F(s)$ is entire} \\
\text{Spec}(F) \cup (-\text{Spec}(F)) \cup\{0\} \ \ & \text{if $F(s)$ has a pole at $s=1$.}
\end{cases}
\]
The results in \cite{Ka-Pe/resoI} show a different behavior of $F(s,\alpha)$ with $\alpha\in$ Spec$(F)$ with respect to the other twists $F(s;f_0)$ with $0\leq \text{lexp}(f_0) \leq 1/d$; accordingly, we consider the subsets of $\A(F)$, see \eqref{1-13}, defined as
\begin{equation}
\label{1-14}
\A_0(F) = \{f\in\A(F): f=G(f_0) \ \text{with} \ f_0(\xi,\alpha) = \alpha\xi^{1/d}, \alpha\in \text{Spec}^*(F)\}
\end{equation}
\begin{equation}
\label{1-15}
\A_{00}(F) = \{f\in\A_0(F) \ \text{with $\alpha=0$ in \eqref{1-14}}\}.
\end{equation}
Of course $\A_{00}(F) \subset \A_0(F) \subset \A(F)$, and $\A_{00}(F) \neq \emptyset$ if and only if $F(s)$ is polar; recall that the polar order of $F(s)$ at $s=1$ is denoted by $m_F$.
1.4.~{\bf Nonlinear twists.} Now we can state several general results about the twists $F(s;f)$. Recalling the definitions in \eqref{1-13}, \eqref{1-14} and \eqref{1-15} we have
{\bf Theorem 2.} {\sl Let $F\in{\mathcal S}^\sharp_d$ with $d\geq 1$. Then $F(s;f)$ is meromorphic on $\CC$ for every $f\in\A(F)$, with all poles in a horizontal strip of type $|t|\leq T_0(F)$ and of order $\leq \max(1,m_F)$. Moreover, for any $\epsilon>0$ and $A<B$, as $|t|\to\infty$ we have
\[
F(s;f) \ll e^{\epsilon|t|}
\]
uniformly for} $A\leq \sigma\leq B$.
{\bf Theorem 3.} {\sl Let $F\in{\mathcal S}^\sharp_d$ with $d\geq 1$. If $f\in\A(F)\setminus \A_0(F)$ then $F(s;f)$ is entire.}
Probably, in this case the twists $F(s;f)$ have finite order. Proving this would require additional uniformity estimates in Theorem 1, similar to those in \cite{Ka-Pe/twist}. Since this would considerably enlarge the size of the paper, we shall not pursue this problem here.
{\bf Theorem 4.} {\sl Let $F\in{\mathcal S}^\sharp_d$ with $d\geq 1$. If $f\in\A_0(F)\setminus \A_{00}(F)$ then $F(s;f)$ is not entire, all its poles are simple and lie on the half-line
\[
s=\sigma -i\theta_F \quad \text{with} \quad \sigma \leq \frac12 + \frac{1}{2dD(f)},
\]
the initial point $s_0= \frac12 + \frac{1}{2dD(f)} -i\theta_F$ being a pole.}
{\bf Theorem 5.} {\sl Let $F\in{\mathcal S}^\sharp_d$ with $d\geq 1$. If $f\in\A_{00}(F)$ then $F(s;f)$ is not entire, all its poles have order $\leq m_F$ and lie on the half-line
\[
s=\sigma + i\theta_F\big(\frac{(-1)^{\omega(f)}}{D(f)}-1\big) \quad \text{with} \quad \sigma \leq \frac12 + \frac{1}{2D(f)},
\]
the initial point $s_0= \frac12 + \frac{1}{2D(f)} + i \big(\frac{(-1)^{\omega(f)}}{D(f)}-1\big) \theta_F$ being a pole of order $m_F$.}
1.5.~{\bf Resonance.} As an application of the above theorems we consider the smoothed nonlinear exponential sums
\begin{equation}
\label{1-16}
S_F(x;f,r) = \sum_{n=1}^\infty a(n)e(-f(n,\bfalpha)) e^{-(n/x)^r}
\end{equation}
with $r>0$ arbitrary. Then, as $x\to\infty$, under the hypotheses of Theorems 3, 4 and 5 we have respectively that
\[
\begin{split}
S_F(x;f,r) &= O(1), \\
S_F(x;f,r) &\sim c_0 x^{1/2 + 1/(2dD(f) )- i\theta_F}
\end{split}
\]
with a certain $c_0\neq 0$, and
\[
\hskip1.2cm S_F(x;f,r) \sim x^{1/2 + 1/(2D(f) )+ i\lambda_f\theta_F} P(\log x)
\]
with $\lambda_f = \big(\frac{(-1)^{\omega(f)}}{D(f)}-1\big)$ and $P\in\CC[x]$, $\deg P = m_F-1$. Proofs are standard and hence omitted.
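For the reader's convenience we sketch the second formula (the other cases are analogous); this is the standard argument and is not used elsewhere. By Mellin's transform, for $c$ sufficiently large,
\[
S_F(x;f,r) = \frac{1}{2\pi i r}\int_{(c)} F(w;f)\,\Gamma\Big(\frac{w}{r}\Big)\, x^{w}\,\d w,
\]
since $e^{-y^r} = \frac{1}{2\pi i r}\int_{(c)} \Gamma(w/r)\, y^{-w}\,\d w$ for $y>0$. Shifting the line of integration to $\Re w = c'$ with $0<c'<\frac12+\frac{1}{2dD(f)}$, chosen so that no further pole of $F(w;f)$ is crossed, we pick up the residue at the simple pole $s_0$ described in Theorem 4, which gives the main term $c_0x^{s_0}$; the remaining integral is $\ll x^{c'}$, thanks to the bound in Theorem 2 and the exponential decay of $\Gamma(w/r)$ on vertical lines.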
1.6.~{\bf Degree 2 examples.} As an illustration of the above theorems we describe in detail some functions $f\in\A(F)$ in the degree 2 case. The two examples presented in Section 1.1, concerning the $L$-function associated with an elliptic curve, are special cases of the following
{\bf Theorem 6.} {\sl Let $F\in{\mathcal S}^\sharp_2$. Then for every $k,\ell\in\ZZ$, $k\neq0$, and $\alpha\in\RR$ the function
\[
f(\xi) = \frac{3}{2} \frac{1}{(2kq_F^2)^{1/3}} \xi^{2/3} + \frac{3}{4} \frac{\ell}{(2k^2q_F)^{2/3}} \xi^{1/3} + \alpha \xi^{1/6}
\]
belongs to $\A(F)$. Moreover,} $f\in\A_0(F) \Leftrightarrow 2(k^2q_F)^{1/6}|\alpha| \in$ Spec$(F)$.
{\bf Remark 6.} In \cite{Ka-Pe/resoI} we proved that if the leading exponent $\kappa_0$ is $\leq 1/d$ and $f(\xi,\bfalpha)$ has at least one exponent $<1/d$, then $F(s;f)$ is entire. Theorem 6, and in particular the first example given in Section 1.1, shows that this is not true in the case $\kappa_0>1/d$. A heuristic explanation of these facts would be of interest. \fine
We conclude by remarking that explicit results similar to Theorem 6 can be computed for any $L$-function of degree $d\geq1$. For example, starting with $\zeta(s)$ we can get the basic analytic properties of
\[
\sum_{n=1}^\infty \frac{e(\alpha n^{3/2} +\beta n)}{n^s}
\]
with $\alpha$ in a certain infinite discrete set and any real $\beta$. As a consequence, the asymptotic behavior of the corresponding nonlinear exponential sums can be detected.
{\bf Acknowledgements.}
This research was partially supported by the Istituto Nazionale di Alta Matematica, by grant PRIN2010-11 {\sl Arithmetic Algebraic Geometry and Number Theory} and by grants N N201 605940 and 2013/11/B/ST1/ 02799 {\sl Analytic Methods in Arithmetic} of the National Science Centre.
\section{Proof of Theorem 1}
We follow the proof of Theorem 1.1 of \cite{Ka-Pe/2011a}, indicating only the main differences and referring to our previous papers \cite{Ka-Pe/2005}, \cite{Ka-Pe/2011a} and \cite{Ka-Pe/resoI} as much as possible. Moreover, we take this opportunity to streamline the arguments in \cite{Ka-Pe/2011a} and to correct some inaccuracies. Let $F(s)$ and $f(\xi,\bfalpha)$ be as in Theorem 1, with $\alpha_0>0$ otherwise we take conjugates, and for $X\geq 1$ let
\[
F_X(s;f) = \sum_{n=1}^\infty \frac{a(n)}{n^s} e(-f(n,\bfz)), \qquad \bfz = (z_0,\dots,z_N), \qquad z_j=\frac{1}{X}+2\pi i\alpha_j.
\]
Clearly, the series is convergent for every $s\in\CC$, and for $\si>1$
\[
\lim_{X\to\infty} F_X(s;f) = F(s;f).
\]
Before embarking into the proof of Theorem 1, we briefly sketch our basic strategy. Note that the arguments in the proof sometimes give more than the minimal requirements needed to prove Theorem 1, which we use in the following sketch. Let $R$ be a given positive real number and $\epsilon>0$ be arbitrarily small. We first show, see Lemma 2.1 below, that for $-R<\sigma<-R+\delta$
\[
F_X(s;f) = M^{(1)}_X(s) + H^{(1)}_X(s),
\]
where $\delta>0$ is a small constant, $M^{(1)}_X(s)$ is a certain main term and the error term $H^{(1)}_X(s)$ is $\ll e^{\epsilon|t|}$ as $|t|\to\infty$, uniformly in $X$. Then we progressively refine $M^{(1)}_X(s)$, thus getting similar expressions for $F_X(s)$ with certain terms $M^{(2)}_X(s), \dots$ and $H^{(2)}_X(s), \dots$, satisfying the same upper bound, in place of $M^{(1)}_X(s)$ and $H^{(1)}_X(s)$, respectively. Eventually we obtain that
\[
F_X(s;f) - M_X(s;f) = H_X(s;f),
\]
where, thanks to its rather explicit form, $M_X(s;f)$ is holomorphic for $\si>-R$ and hence so is $H_X(s;f)$. Moreover, $H_X(s;f)\ll e^{\epsilon|t|}$ uniformly in $X$ for $-R<\sigma<-R+\delta$ and, again thanks to the form of $M_X(s;f)$, $H_X(s;f)$ is bounded uniformly in $X$ for $\si>\si_0$, with a certain $\si_0>1$. Further, for each $X\geq1$ we have that $H_X(s;f)$ is bounded for $\si>-R$. Therefore, by an application of the Phragm\'en-Lindel\"of theorem we have that $H_X(s;f)\ll e^{\epsilon|t|}$ uniformly in $X$ for $s$ in any substrip of $-R<\sigma<\si_0+1$. Since for $\si>\si_0$ the limit as $X\to\infty$ of $M_X(s;f)$, call it $M(s;f)$, exists and is holomorphic, by an application of Vitali's convergence theorem (see Section 5.21 of Titchmarsh \cite{Tit/1939} or Lemma C of \cite{Ka-Pe/resoI}) we obtain that the limit as $X\to\infty$ of $H_X(s;f)$ exists, is holomorphic and $\ll e^{\epsilon|t|}$ in any such substrip. As a consequence, $F(s;f)-M(s;f)$ has holomorphic continuation to $\si>-R$ and is $\ll e^{\epsilon|t|}$ there. Finally, Theorem 1 follows by further refining $M(s;f)$, similarly as in Section 2.5 on p.1421 of \cite{Ka-Pe/2011a}.
2.1.~{\bf Set up.} Let $F_X(s;f)$ be as above, $c>0$ be a constant (not necessarily the same at each occurrence and possibly depending on some parameters), $\epsilon>0$ be arbitrarily small and $\delta>0$ be sufficiently small. Moreover, for $N$ as above let $[N]=\{0,1,\dots,N\}$, $\emptyset \neq \A\subseteq [N]$, $|\A|$ be the cardinality of the set $\A$ and (with the obvious notation $\LL_{|\A}$)
\[
\bfw = \sum_{j=0}^N \kappa_jw_j, \quad \d \bfw = \prod_{j=0}^N \d w_j, \quad G(\bfw) = \prod_{j=0}^N \Gamma(w_j)z_j^{-w_j},
\]
\[
\bfw_{|\A} = \sum_{j\in\A} \kappa_jw_j, \quad \d \bfw_{|\A} = \prod_{j\in\A} \d w_j, \quad G(\bfw_{|\A}) = \prod_{j\in\A} \Gamma(w_j)z_j^{-w_j},
\]
\[
\frac{1}{2} <\eta<\frac{3}{4}, \quad \int_{\LL}\d\bfw=\int_{(-\eta)}\dots\int_{(-\eta)}\d\bfw \ \ \text{and analogously for} \ \int_{\LL_{|\A}}\d\bfw_{|\A},
\]
\[
I_X(s,\A)=\frac{1}{(2\pi i)^{|\A|}} \int_{\LL_{|\A}}F(s+\bfw_{|\A}) G(\bfw_{|\A}) \d \bfw_{|\A}.
\]
Let $K\geq 0$, $R\geq K+1$ be such that $\frac{d+1}{2}+dR\not\in\NN$, $\sigma>-R$ and $\rho>\frac{R+1}{\K}$, where $\K = \sum_{j=0}^N\kappa_j$. Then for $\Re w_j = \rho$ we have $\Re(s+\bfw) = \sigma+\K\rho>1$, hence by Mellin's transform we get
\begin{equation}
\label{2-1}
F_X(s;f) = \frac{1}{(2\pi i)^{N+1}} \int_{(\rho)} \dots \int_{(\rho)} F(s+\bfw)G(\bfw)\d \bfw.
\end{equation}
As in Lemma 2.1 of \cite{Ka-Pe/2011a} we want to shift the line of integration in \eqref{2-1} to $\LL$, but now we have to cross the possible pole of $F(s+\bfw)$ at $\bfw=1-s$, in addition to the poles of $G(\bfw)$ at $w_j=0$. This adds extra difficulties, hence we start with the following
{\bf Lemma 2.1.} {\sl With the above notation, for $-R<\sigma<-R+\delta$ we have
\[
F_X(s;f) = \sum_{\emptyset \neq \A \subseteq [N]} I_X(s,\A) + H^{(1)}_X(s),
\]
where $H^{(1)}_X(s) \ll (1+|t|)^c$ as $|t|\to\infty$ with some $c>0$, uniformly for $X\geq1$.}
{\it Proof.} Suppose that $F(s)$ has the following Laurent expansion at $s=1$:
\[
F(s) = \sum_{k=1}^{m_F} \frac{\beta_k}{(s-1)^k} + F_0(s), \hskip1.5cm \text{$F_0(s)$ entire}.
\]
Let $-R<\si<-R+\delta$. We move the line of integration in the $w_0$-variable to the right (if necessary) to $\Re w_0 = \rho_0$ with $\rho_0>\frac{R+1}{\kappa_0}$, and all the others to the left to $\Re w_j=\epsilon$ with a small $\epsilon>0$. Since we don't cross any pole, choosing $\A=\{1,\dots,N\}$ we have
\begin{equation}
\label{2-2}
F_X(s;f) = \frac{1}{(2\pi i)^{N}} \int_{(\epsilon)} \dots \int_{(\epsilon)} \big(\frac{1}{2\pi i} \int_{(\rho_0)} F(s+\bfw)\Gamma(w_0) z_0^{-w_0} \d w_0 \big) G(\bfw_{|\A}) \d \bfw_{|\A}.
\end{equation}
Now we move the line of integration in the inner integral to $\Re w_0 = \epsilon$; hence such an integral equals
\begin{equation}
\label{2-3}
\begin{split}
R_X(s,w_1,\dots,w_N) &+ \frac{1}{2\pi i} \int_{(\epsilon)} F(s+\bfw)\Gamma(w_0) z_0^{-w_0} \d w_0, \\
R_X(s,w_1,\dots,w_N) &= \text{res}_{w_0=\frac{1}{\kappa_0}(1-s-\bfw_{|\A})} F(s+\bfw) \Gamma(w_0) z_0^{-w_0}.
\end{split}
\end{equation}
The residual function $R_X(s,w_1,\dots,w_N)$ coincides with the residual function $R_N^{(1)}(s,\alpha)$ defined on p.333 of \cite{Ka-Pe/2005}, after the following changes in the latter function:
\begin{equation}
\label{2-4}
s\mapsto s+\bfw_{|\A}, \qquad d\mapsto 1/\kappa_0, \qquad N\mapsto X, \qquad \alpha\mapsto \alpha_0.
\end{equation}
Therefore, the computations on p.334 of \cite{Ka-Pe/2005} give (recall that the coefficients $\alpha_k$ of the Laurent expansion of $F(s)$ in \cite{Ka-Pe/2005} are now called $\beta_k$)
\[
R_X(s,w_1,\dots,w_N) = \sum_{k=1}^{m_F} \frac{\beta_k}{\kappa_0^k(k-1)!} \sum_{\nu=0}^{k-1} {k-1 \choose \nu} \Gamma^{(\nu)}\big(\frac{1-s-\bfw_{|\A}}{\kappa_0}\big) (-\log z_0)^{k-\nu-1} z_0^{\frac{s+\bfw_{|\A}-1}{\kappa_0}}.
\]
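Indeed, near the pole we have $s+\bfw-1=\kappa_0\big(w_0-w_0^*\big)$ with $w_0^*=\frac{1-s-\bfw_{|\A}}{\kappa_0}$, hence the term $\beta_k/(s+\bfw-1)^k$ of the Laurent expansion contributes
\[
\frac{\beta_k}{\kappa_0^k(k-1)!}\, \frac{\d^{k-1}}{\d w_0^{k-1}} \Big[\Gamma(w_0)z_0^{-w_0}\Big]_{w_0=w_0^*}
\]
to the residue, and the above expression then follows from Leibniz's rule, each derivative of $z_0^{-w_0}$ producing a factor $-\log z_0$.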
Hence, in view of \eqref{2-3}, the contribution of $R_X(s,w_1,\dots,w_N)$ to the integral in \eqref{2-2} equals
\begin{equation}
\label{2-5}
\begin{split}
\sum_{k=1}^{m_F} \frac{\beta_k}{\kappa_0^k(k-1)!} &\sum_{\nu=0}^{k-1} {k-1 \choose \nu} z_0^{\frac{s-1}{\kappa_0}} (-\log z_0)^{k-\nu-1}\times \\
&\times \frac{1}{(2\pi i)^{N}} \int_{(\epsilon)} \dots \int_{(\epsilon)} \prod_{j=1}^N\Gamma(w_j) (z_0^{-\frac{\kappa_j}{\kappa_0}}z_j)^{-w_j} \Gamma^{(\nu)}\big(\frac{1-s-\bfw_{|\A}}{\kappa_0}\big) \d \bfw_{|\A}.
\end{split}
\end{equation}
Thanks to Stirling's formula and to the form of the $z_j$'s, recalling that $\kappa_j<\kappa_0$ we may shift the lines of integration in \eqref{2-5} to $-\infty$, thus getting that \eqref{2-5} equals
\begin{equation}
\label{2-6}
\sum_{k=1}^{m_F} \frac{\beta_k}{\kappa_0^k(k-1)!} \sum_{\nu=0}^{k-1} {k-1 \choose \nu} z_0^{\frac{s-1}{\kappa_0}} (-\log z_0)^{k-\nu-1} H_{\nu}(s,\bfz),
\end{equation}
where
\begin{equation}
\label{2-7}
\begin{split}
H_{\nu}(s,\bfz) &= \sum_{k_1=0}^\infty \cdots \sum_{k_N=0}^\infty \frac{(-1)^{k_1+\cdots +k_N}}{k_1!\cdots k_N!} \Gamma^{(\nu)}\big(\frac{1-s+\sum_{j=1}^N\kappa_jk_j}{\kappa_0}\big) \prod_{j=1}^N (z_0^{-\frac{\kappa_j}{\kappa_0}}z_j)^{k_j} \\
&= \Gamma^{(\nu)}\big(\frac{1-s}{\kappa_0}\big) + \widetilde{H}_{\nu}(s,\bfz),
\end{split}
\end{equation}
say, where $\widetilde{H}_{\nu}(s,\bfz)$ denotes the above summation over $(k_1,\dots,k_N)\neq (0,\dots,0)$. Moreover, since $\kappa_j<\kappa_0$, from Stirling's formula we have that there exists a constant $\delta_1>0$ such that
\[
\Gamma^{(\nu)}\big(\frac{1-s+\sum_{j=1}^N\kappa_jk_j}{\kappa_0}\big) \ll \prod_{j=1}^N k_j^{(1-\delta_1)k_j}
\]
for $-R < \sigma < -R+\delta$ (the implicit constant may depend on $R$ and $\nu$). Hence from \eqref{2-7}
\[
\widetilde{H}_{\nu}(s,\bfz) \ll \prod_{j=1}^N\big(\sum_{k_j=0}^\infty \frac{c^{k_j}k_j^{(1-\delta_2)k_j}}{k_j!}\big) \ll 1
\]
for $-R < \sigma < -R+\delta$ and some $\delta_2>0$, uniformly for $X\geq 1$. As a consequence, the part of \eqref{2-6} coming from $ \widetilde{H}_{\nu}(s,\bfz)$, which we denote by $k_X(s)$, is uniformly bounded in $X$ for $-R<\sigma <-R+\delta$. Moreover, the computations before (3.6) on p.334 of \cite{Ka-Pe/2005} and \eqref{2-4} show that the part of \eqref{2-6} coming from $ \Gamma^{(\nu)}\big(\frac{1-s}{\kappa_0}\big)$ equals
\begin{equation}
\label{2-8}
- \sum_{k=1}^{m_F}\frac{\beta_k}{(s-1)^k} + g_X(s),
\end{equation}
where $g_X(s)$ is also uniformly bounded in $X$ for $-R<\sigma <-R+\delta$. Therefore, gathering \eqref{2-2}, \eqref{2-3} and \eqref{2-5}-\eqref{2-8}, from the properties of $k_X(s)$ and $g_X(s)$ we get
\begin{equation}
\label{2-9}
F_X(s;f) = - \sum_{k=1}^{m_F}\frac{\beta_k}{(s-1)^k} + \frac{1}{(2\pi i)^{N+1}} \int_{(\epsilon)} \dots \int_{(\epsilon)} F(s+\bfw) G(\bfw) \d \bfw +h_X(s),
\end{equation}
where $h_X(s)$ is uniformly bounded for $X\geq1$ and $-R<\sigma <-R+\delta$.
Finally, shifting the lines to $-\eta$ we have to cross the poles of $G(\bfw)$ at $w_j=0$, hence we deal with the integral in \eqref{2-9} as in Lemma 2.1 of \cite{Ka-Pe/2011a}, thus getting
\[
\begin{split}
F_X(s;f) &= F(s) - \sum_{k=1}^{m_F}\frac{\beta_k}{(s-1)^k} + \sum_{\emptyset \neq \A \subseteq [N]} \frac{1}{(2\pi i)^{|\A|}} \int_{\LL_{|\A}}F(s+\bfw_{|\A}) G(\bfw_{|\A}) \d \bfw_{|\A} +h_X(s) \\
&= \sum_{\emptyset \neq \A \subseteq [N]} I_X(s,\A) + H^{(1)}_X(s),
\end{split}
\]
say. Moreover, thanks to the properties of $h_X(s)$ and $F(s)$, $H^{(1)}_X(s)$ satisfies
\[
H^{(1)}_X(s) \ll (1+|t|)^c
\]
for some $c>0$, uniformly for $X\geq1$ and $-R<\sigma<-R+\delta$, and the lemma follows. \fine
{\bf Remark 7.} Note that $H^{(1)}_X(s)$ is holomorphic for $-R<\sigma<-R+\delta$, but this information is not necessary to prove Theorem 1. Indeed, holomorphy of the relevant terms will follow in a simpler way at a later stage of the proof, as we pointed out in the sketch at the beginning of the section. The same remark applies to the other terms below, in the sense that they are holomorphic for $-R<\sigma<-R+\delta$, $\delta>0$ sufficiently small. Indeed, in general such terms are meromorphic with poles in a horizontal strip of finite height. We shall more or less implicitly use the latter property in the rest of the proof; we refer to Section 2 of \cite{Ka-Pe/2011a} for details. \fine
Let $-R<\sigma<-R+\delta$. Writing $\bfx_{|\A} = s+\bfw_{|\A}$ and
\[
\widetilde{G}(\bfx_{|\A}) = \prod_{j=1}^r \Gamma(\lambda_j(1-\bfx_{|\A}) +\overline{\mu}_j) \Gamma(1-\lambda_j\bfx_{|\A} -\mu_j), \qquad S(\bfx_{|\A}) = \prod_{j=1}^r \sin\pi(\lambda_j\bfx_{|\A}+\mu_j),
\]
we apply, in the integrals $I_X(s,\A)$, the functional equation to $F(s+\bfw_{|\A})$ and then the reflection formula to the resulting $\Gamma$-factors. Expanding the Dirichlet series of $\overline{F}(1-s-\bfw_{|\A})$ we obtain
\begin{equation}
\label{2-10}
I_X(s,\A) = \frac{\omega}{\pi^r}Q^{1-2s} \sum_{n=1}^\infty \frac{\overline{a(n)}}{n^{1-s}} \frac{1}{(2\pi i)^{|\A|}} \int_{\LL_{|\A}} \widetilde{G}(\bfx_{|\A}) S(\bfx_{|\A}) G(\bfw_{|\A}) \big(\frac{n}{Q^2}\big)^{\bfw_{|\A}} \d \bfw_{|\A};
\end{equation}
see (2.3) of \cite{Ka-Pe/2011a}. Since for $\bfw_{|\A}\in\LL_{|\A}$ we have $\Re(-\bfx_{|\A})>|\sigma|+\kappa_N\eta>0$, we may apply Stirling's formula to $\widetilde{G}(\bfx_{|\A})$. By a computation similar to the one leading to (2.4) of \cite{Ka-Pe/2011a} (recalling that here we do not assume that $\theta_F=0$ as in \cite{Ka-Pe/2011a}) for any integer $L>0$ we get
\begin{equation}
\label{2-11}
\widetilde{G}(\bfx_{|\A}) = B^{\bfy_{|\A}} \sum_{\ell=0}^L c_\ell \Gamma\big(\frac{d+1}{2} - d\bfy_{|\A}-\ell \big) + O\big(e^{-\frac{\pi}{2}d|\Im\bfx_{|\A}|} (1+|\Im\bfx_{|\A}|)^{-L+c}\big),
\end{equation}
where $\bfy_{|\A}=\bfx_{|\A}+i\theta_F$, $B = d^d/\beta$, $\beta=\prod_{j=1}^r\lambda_j^{2\lambda_j}$, $c_0\neq0$ and $c>0$. Turning to $S(\bfx_{|\A})$, by the same argument leading to (2.6) and (2.7) of \cite{Ka-Pe/2011a} we obtain
\begin{equation}
\label{2-12}
S(\bfx_{|\A}) = k_1e^{-i\frac{\pi}{2} d \bfx_{|\A}} + k_2e^{i\frac{\pi}{2} d \bfx_{|\A}} + O(e^{\frac{\pi}{2}(d-c)|\Im \bfx_{|\A}|})
\end{equation}
with constants $k_1,k_2\neq0$ and some $c>0$. Therefore, a computation based on \eqref{2-11} and \eqref{2-12} shows that
\begin{equation}
\label{2-13}
\widetilde{G}(\bfx_{|\A}) S(\bfx_{|\A}) = \big(k_1e^{-i\frac{\pi}{2} d \bfx_{|\A}} + k_2e^{i\frac{\pi}{2} d \bfx_{|\A}}\big) B^{\bfy_{|\A}} \sum_{\ell=0}^L c_\ell \Gamma\big(\frac{d+1}{2} - d\bfy_{|\A}-\ell \big) + R(\bfx_{|\A})
\end{equation}
with $R(\bfx_{|\A}) \ll 1$ provided $L>c$. Moreover, by Stirling's formula we have
\[
G(\bfw_{|\A}) \ll \prod_{j\in\A} (1+ |w_j|^{1/2+\eta})^{-1}
\]
uniformly for $X\geq1$ and hence, recalling that $\eta>1/2$, with the same uniformity we get
\begin{equation}
\label{2-14}
\frac{1}{(2\pi i)^{|\A|}} \int_{\LL_{|\A}} \big|R(\bfx_{|\A}) G(\bfw_{|\A}) \big(\frac{n}{Q^2}\big)^{\bfw_{|\A}} \d \bfw_{|\A} \big| \ll 1.
\end{equation}
From \eqref{2-10}, \eqref{2-13} and \eqref{2-14} we finally obtain that for $-R<\sigma<-R+\delta$
\begin{equation}
\label{2-15}
I_X(s,\A) = e^{a_1s+b_1} \sum_{n=1}^\infty \frac{\overline{a(n)}}{n^{1-s}} \sum_{\ell=0}^L\big(e_\ell I_X(s,\A,n,\ell) + e'_\ell J_X(s,\A,n,\ell)\big) + H_X^{(2)}(s,\A)
\end{equation}
where $a_1\in\RR$ and $b_1,e_\ell,e'_\ell$ are certain constants with $e_0e'_0\neq0$,
\begin{equation}
\label{2-16}
I_X(s,\A,n,\ell) = \frac{1}{(2\pi i)^{|\A|}} \int_{\LL_{|\A}} \Gamma\big(\frac{d+1}{2}-d\bfx_{|\A}-\ell -id\theta_F\big) e^{-i\frac{\pi}{2}d\bfx_{|\A}} G(\bfw_{|\A}) \big(\frac{n}{q}\big)^{\bfw_{|\A}} \d \bfw_{|\A},
\end{equation}
$q=Q^2/B$, $J_X(s,\A,n,\ell)$ is equal to $I_X(s,\A,n,\ell)$ with $e^{-i\frac{\pi}{2}d\bfx_{|\A}}$ replaced by $e^{i\frac{\pi}{2}d\bfx_{|\A}}$, and
\begin{equation}
\label{2-17}
\sum_{\emptyset\neq\A\subseteq [N]} H_X^{(2)}(s,\A)\ll 1
\end{equation}
as $|t|\to\infty$ uniformly for $X\geq1$.
2.2.~{\bf Mellin transform.} Thanks to the factor $G(\bfw_{|\A})$, the integral \eqref{2-16} with $e^{- i\frac{\pi}{2}d\bfx_{|\A}}$ replaced by $e^{i\pi \Lambda\bfx_{|\A}}$, $|\Lambda|\leq d/2$, is meromorphic with poles in a horizontal strip of finite height. For $|\Lambda|<d/2$, the Mellin transform argument on p.1410 of \cite{Ka-Pe/2011a} shows that for $\frac{d+1}{2d}-\frac{\ell}{d} <\sigma< \frac{d+1}{2d}-\frac{\ell}{d} + \delta$
\begin{equation}
\label{2-18}
\begin{split}
&\frac{1}{(2\pi i)^{|\A|}} \int_{\LL_{|\A}} \Gamma\big(\frac{d+1}{2}-d\bfx_{|\A}-\ell - id\theta_F\big) e^{i\pi \Lambda\bfx_{|\A}} G(\bfw_{|\A}) \big(\frac{n}{q}\big)^{\bfw_{|\A}} \d \bfw_{|\A} \\
&= f_\ell \int_0^\infty \exp(-e^{\pi i\Lambda/d}x^{1/d}) \prod_{j\in\A}\big(e^{-z_j(\frac{qx}{n})^{\kappa_j}}-1\big) x^{\frac{d+1}{2d} - \frac{\ell}{d} -s -1 -i\theta_F} \d x
\end{split}
\end{equation}
with a certain $f_\ell\neq0$. Indeed, under the above conditions all the integrals involved in the argument are absolutely convergent. Moreover, for $\Lambda=\mp d/2$ the integrals in \eqref{2-18} are also absolutely convergent for $s$ in the above range. Hence we let $\Lambda \to \mp d/2$ in \eqref{2-18}, thus getting similar expressions for $I_X(s,\A,n,\ell)$ and $J_X(s,\A,n,\ell)$. Next we sum such expressions over $\A$ and argue as on p.1411 of \cite{Ka-Pe/2011a} to get the analog of (2.17) and (2.19) of \cite{Ka-Pe/2011a}. We obtain
\begin{equation}
\label{2-19}
\sum_{\emptyset\neq\A\subseteq[N]} I_X(s,\A,n,\ell) =e^{a_2s} f_\ell \int_0^\infty e^{ix^{1/d}} \big(e^{-\Psi_X(x,n)} e^{-2\pi if(\frac{qx}{n},\bfalpha)} -1\big) x^{\frac{d+1}{2d} - \frac{\ell}{d} -s -1 -i\theta_F} \d x
\end{equation}
for $\frac{d+1}{2d}-\frac{\ell}{d} <\sigma< \frac{d+1}{2d}-\frac{\ell}{d} + \delta$, where $a_2\in\RR$ and $\Psi_X(x,n) = \frac{1}{X} \sum_{j=0}^N \big(\frac{qx}{n}\big)^{\kappa_j}$, and similarly for $J_X(s,\A,n,\ell)$. Accordingly we write ($J_X(s,n,\ell)$ is as in \eqref{2-19} with $e^{-ix^{1/d}}$ in place of $e^{ix^{1/d}}$)
\begin{equation}
\label{2-20}
\sum_{\emptyset\neq\A\subseteq[N]} I_X(s,\A,n,\ell) = f_\ell I_X(s,n,\ell) \ \ \text{and} \ \ \sum_{\emptyset\neq\A\subseteq[N]} J_X(s,\A,n,\ell) = f_\ell J_X(s,n,\ell).
\end{equation}
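For the reader's convenience we also recall that the Mellin pair behind \eqref{2-18} is
\[
\int_0^\infty e^{-cx^{1/d}} x^{u-1} \d x = d\,\Gamma(d u)\, c^{-d u}, \hskip1.5cm \Re c>0,\ \Re u>0,
\]
as one checks by the substitution $x=v^d$. It is applied with $c=e^{\pi i\Lambda/d}$ and $u$ linear in $\bfx_{|\A}$, thus producing, up to constant factors, the $\Gamma$-factor and the factor $e^{i\pi \Lambda\bfx_{|\A}}$ in \eqref{2-18}, while the factors $e^{-z_j(\frac{qx}{n})^{\kappa_j}}-1$ come from the same Mellin formula used in Section 2.1, now integrated along $(-\eta)$, the $-1$ arising from the pole of $\Gamma(w_j)$ at $w_j=0$.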
The integral $J_X(s,n,\ell)$, without saddle point, is dealt with by the following
{\bf Lemma 2.2.} {\sl $J_X(s,n,\ell)$ is meromorphic for $\sigma<\frac{d+1}{2d}+\delta$, and for every given $-R < \sigma< -R + \delta$ and $\epsilon>0$, $J_X(s,n,\ell) \ll e^{\epsilon|t|}$ as $|t|\to\infty$ uniformly for $n\in\NN$ and $X\geq 1$}.
{\it Proof.} Let $\frac{d+1}{2d}-\frac{\ell}{d} <\sigma< \frac{d+1}{2d}-\frac{\ell}{d} + \delta$. We start as in the proof of Lemma 2.2 of \cite{Ka-Pe/2011a}, switching to the complex variable $z$ in place of the real variable $x$ and then replacing the path of integration by $z=\rho e^{-i\phi}$, $0<\rho<\infty$ and $\phi>0$ arbitrarily small. On the new path we have
$|e^{iz^{1/d}}| = e^{-\rho^{1/d}\sin(\phi/d)}\ll e^{-c\phi\rho^{1/d}}$ and we split the integral as
\[
J_X(s,n,\ell) = \int_0^{e^{-i\phi}\infty} \hskip-.5cm \dots \d z = e^{-i\phi}\big(\int_0^1 \dots\d\rho + \int_1^{n^\epsilon} \hskip-.2cm\dots\d\rho + \int_{n^\epsilon}^{\infty} \dots\d\rho\big) = I_1(s)+I_2(s) +I_3(s),
\]
say. Arguing similarly as in (2.21) of \cite{Ka-Pe/2011a}, using the above bound for $e^{iz^{1/d}}$ we see that $I_3(s)$ is entire and $\ll 1$ uniformly in $n$ and $X$ for $\sigma>-R$. Analogously, $I_2(s)$ is entire and satisfies
\[
I_2(s) \ll \int_1^{n^\epsilon} |e^{-\Psi_X(z,n)} e^{-2\pi if(\frac{qz}{n},\bfalpha)} -1| \rho^{\frac{d+1}{2d} +R -1} \d\rho \ll \frac{1}{n^{\kappa_N}} \int_1^{n^\epsilon} \rho^{\frac{d+1}{2d} +R -1+\kappa_N} \d\rho \ll 1
\]
uniformly in $n$ and $X$ for $\sigma>-R$, provided $\epsilon>0$ is sufficiently small.
In order to deal with $I_1(s)$, for $0<\rho<1$ we expand the exponentials in such a way that given $W\geq L/d+1+\delta$ we find an integer $M\geq1$ with the property that
\[
e^{iz^{1/d}} \big(e^{-\Psi_X(z,n)} e^{-2\pi if(\frac{qz}{n},\bfalpha)} -1\big) = \sum_{m=1}^M \beta_m(n) \rho^{u_m} + E_M(z,n)
\]
with $|E_M(z,n)|\ll \rho^W$ uniformly in $n$ and $X$. Moreover, we also have that $\beta_m(n) \ll n^{-\kappa_N}$ and the exponents $u_m$ are of the form $\frac{k}{d} + \sum_{j=0}^N \ell_j\kappa_j$ with $k,\ell_j\geq0$, $ \sum_{j=0}^N \ell_j>0$ and $0<u_m<W$. Consequently we have
\[
I_1(s) = \sum_{m=1}^M \frac{\beta_m(n) e^{-i\phi(\frac{d+1}{2d} - \frac{\ell}{d} -s -i\theta_F)}}{u_m + \frac{d+1}{2d} - \frac{\ell}{d} -s -i\theta_F} + h_{X,M}(s,n) = \Sigma_{X,M}(s,n) + h_{X,M}(s,n),
\]
say, with
\[
h_{X,M}(s,n) = e^{-i\phi(\frac{d+1}{2d} - \frac{\ell}{d} -s -i\theta_F)} \int_0^1 E_M(z,n) \rho^{\frac{d+1}{2d} - \frac{\ell}{d} -s -1 -i\theta_F} \d\rho.
\]
Thanks to the choice of $W$, this integral is absolutely convergent for $\sigma< \frac{d+1}{2d} + \delta$. Hence in view of our choice of $\phi$, $h_{X,M}(s,n)$ is holomorphic and satisfies $h_{X,M}(s,n) \ll e^{\epsilon|t|}$ uniformly in $n$ and $X$ for $s$ in any finite vertical strip contained in $\sigma< \frac{d+1}{2d} + \delta$. Clearly, $\Sigma_{X,M}(s,n)$ is meromorphic over $\CC$ with poles at $s= u_m + \frac{d+1}{2d} - \frac{\ell}{d} -i\theta_F$, $1\leq m\leq M$ and $0\leq \ell \leq L$, therefore we may choose $-R<\sigma<-R+\delta$ away from the poles. Moreover, thanks to the bound for $\beta_m(n)$, for such a $\sigma$ we have that $\Sigma_{X,M}(s,n) \ll e^{\epsilon|t|}$ uniformly in $n$ and $X$, and the lemma follows. \fine
2.3.~{\bf Saddle point.} Here we follow closely the saddle point argument in Section 2.3 of \cite{Ka-Pe/2011a}, hence we only briefly outline the needed changes and refer to \cite{Ka-Pe/2011a} for details and notation (see also the Introduction for some notation). Let $\xi$ be sufficiently large, $x_0=x_0(\xi,\bfalpha)\in\RR$ be the critical point of $\Phi(z,\xi,\bfalpha)$ as in Lemma 2.3 of \cite{Ka-Pe/2011a} and
\begin{equation}
\label{2-21}
K_X(s,\xi) = \gamma x_0^{\frac{d+1}{2d}-s} \int_{-r}^r e^{-\Psi_X(z,\xi)+i\Phi(z,\xi,\bfalpha)}(1+\gamma\lambda)^{\frac{d+1}{2d}-s-1} \d\lambda,
\end{equation}
where $\gamma=1-i$, $z=x_0(1+\gamma\lambda)$ and $r\in(0,1)$ is given in (2.29) of \cite{Ka-Pe/2011a}. Clearly, $K_X(s,\xi)$ is an entire function since $\Re(1+\gamma\lambda)>0$. As in \cite{Ka-Pe/2011a} we show that, for $n$ sufficiently large, the main contribution to the meromorphic integral $I_X(s,n,\ell)$ comes from $K_X(s+\frac{\ell}{d}+i\theta_F,n)$.
{\bf Lemma 2.3.} {\sl Let $n_0$ be sufficiently large. Then for $n\geq n_0$ we have
\[
I_X(s,n,\ell) = K_X(s+\frac{\ell}{d}+i\theta_F,n) + H_X^{(3)}(s,n,\ell),
\]
where $H_X^{(3)}(s,n,\ell)$ is meromorphic for $\sigma<\frac{d+1}{2d} + \delta$ and, for every given $-R < \sigma< -R + \delta$ and $\epsilon>0$, satisfies $H^{(3)}_X(\sigma+it,n,\ell) \ll e^{\epsilon|t|}$ as $|t|\to\infty$ uniformly for $n\geq n_0$ and $X\geq 1$. Moreover, $I_X(s,n,\ell)$ has the same properties of $H_X^{(3)}(s,n,\ell)$ for $n<n_0$.}
{\it Proof.} This is the analog of Lemma 2.4 of \cite{Ka-Pe/2011a} and its proof is similar, provided we make the same changes as in Lemma 2.2 above with respect to Lemma 2.2 of \cite{Ka-Pe/2011a}. In particular, we have to split the integral over the path $z=\rho e^{i\phi}$, with $\phi$ arbitrarily small and $\rho>0$, into three parts, as for $J_X(s,n,\ell)$. We omit the details to keep the paper to a reasonable length. \fine
From Lemma 2.1, \eqref{2-15}, \eqref{2-17}, \eqref{2-19}, \eqref{2-20} and Lemmas 2.2 and 2.3 we deduce that
\begin{equation}
\label{2-22}
F_X(s;f) = e^{as+b} \sum_{\ell=0}^L g_\ell \sum_{n=n_0}^\infty \frac{\overline{a(n)}}{n^{1-s}} K_X(s+ \frac{\ell}{d} + i\theta_F,n) + H_X(s;f) = M_X(s;f) + H_X(s;f),
\end{equation}
say, where $n_0$ is sufficiently large, $a\in\RR$, $g_0\neq0$ and $b,g_1,\dots,g_L$ are constants. Moreover, for any given $-R<\sigma<-R+\delta$ and $\epsilon>0$, $H_X(\sigma+it;f) \ll e^{\epsilon|t|}$ uniformly for $X\geq1$.
2.4.~{\bf Limit as $X\to\infty$.} We write EBV$(X)$ for ``entire and bounded on every vertical strip, depending on $X$''. Arguing exactly as in Lemma 2.5 of \cite{Ka-Pe/2011a} (the extra $i\theta_F$ we have here does not change the bounds in \cite{Ka-Pe/2011a}), recalling that $d\kappa_0>1$ and using \eqref{2-22} we have
$\bullet$ $H_X(s;f)$ is EBV$(X)$;
$\bullet$ for any given $\sigma>d\kappa_0$, $H_X(\sigma+it;f) \ll 1$ uniformly for $X\geq1$;
$\bullet$ for any given $-R<\sigma<-R+\delta$ and $\epsilon>0$, $H_X(\sigma+it;f) \ll e^{\epsilon|t|}$ uniformly for $X\geq1$.
\noindent
Hence, by an application of the Phragm\'en-Lindel\"of theorem to $\Gamma(2\epsilon s+c)H_X(s;f)$ ($c>0$ suitable) and the strip $\sigma_1\leq \sigma\leq \sigma_2$, where $-R<\sigma_1<-R+\delta$ and $\sigma_2>d\kappa_0$, we deduce that
(i) {\it for $\sigma_1,\sigma_2$ as above and every $\epsilon>0$, $H_X(s;f)$ is holomorphic in the strip $\sigma_1<\sigma<\sigma_2$ and satisfies $H_X(s;f)\ll e^{\epsilon|t|}$ as $|t|\to\infty$, uniformly for $X\geq1$.}
\noindent
Moreover, still arguing as in Lemma 2.5 of \cite{Ka-Pe/2011a}, we also have that
(ii) {\it the limit as $X\to\infty$ of $H_X(s;f)$ exists for every $s$ in the strip $d\kappa_0<\sigma<\sigma_2$.}
Therefore, thanks to (i) and (ii), from Vitali's convergence theorem (see Section 5.21 of Titchmarsh \cite{Tit/1939} or Lemma C of \cite{Ka-Pe/resoI}) we deduce that
\[
H(s;f) = \lim_{X\to\infty} H_X(s;f)
\]
exists and is holomorphic in any substrip of $\sigma_1<\sigma<\sigma_2$, and satisfies $H(s;f)\ll e^{\epsilon|t|}$. Writing
\[
K(s,\xi) = \gamma x_0^{\frac{d+1}{2d}-s} \int_{-r}^r e^{i\Phi(z,\xi,\bfalpha)}(1+\gamma\lambda)^{\frac{d+1}{2d}-s-1} \d\lambda
\]
(notation is as in \eqref{2-21}), once again by the arguments in Section 2.5 of \cite{Ka-Pe/2011a}, applied to the term $M_X(s;f)$ in \eqref{2-22}, we have that
\begin{equation}
\label{2-23}
M(s;f) = \lim_{X\to\infty} M_X(s;f) = e^{as+b} \sum_{\ell=0}^L g_\ell \sum_{n=n_0}^\infty \frac{\overline{a(n)}}{n^{1-s}} K(s+ \frac{\ell}{d} + i\theta_F,n)
\end{equation}
exists and is holomorphic and bounded for $\sigma>d\kappa_0$. Thus, letting $X\to\infty$ in \eqref{2-22} we obtain
\[
F(s;f) = M(s;f) + H(s;f)
\]
where $H(s;f)$ is holomorphic for $\sigma>-K$ and satisfies $H(s;f)\ll e^{\epsilon|t|}$ as $|t|\to\infty$.
2.5.~{\bf Completion of the proof.} To complete the proof of Theorem 1 we follow the arguments in Section 2.5 of \cite{Ka-Pe/2011a}, but here we have to take into account many more terms in the expansions. Such terms will eventually contribute to the sum on the right hand side of \eqref{1-7}. Again, to keep the paper to a reasonable length, we only outline the changes and refer to \cite{Ka-Pe/2011a} as much as possible. We recall that $x_0$ is as in \eqref{2-21}, $P=x_0^2\Phi''(x_0,\xi,\bfalpha)$ (see (2.29) of \cite{Ka-Pe/2011a}, where the same quantity is called $R$), $\gamma=1-i$ and $\kappa_0^*=\frac{\kappa_0}{d\kappa_0-1}$. Moreover, we denote by $Q(s,\xi)$ a finite sum of type
\begin{equation}
\label{2-24}
Q(s,\xi) = \sum_i P_{i}(s) r_{i}(\xi)
\end{equation}
with polynomials $P_{i}(s)$ and $r_{i}(\xi)=x_0^{a_{i}}/\xi^{b_{i}}$, $a_{i},b_{i}\geq0$. The analog of Lemma 2.7 of \cite{Ka-Pe/2011a} is
{\bf Lemma 2.4.} {\sl Let $\xi_0$ be sufficiently large, $\xi\geq \xi_0$, $0\leq \ell\leq L$, $M\geq 2$ be a given integer and $\epsilon>0$. Then there exist $h_j>0$ and $Q_{j,\ell}(s,\xi)$ as in \eqref{2-24}, $j=0,\dots,M(M+1)$, such that
\[
K(s+\frac{\ell}{d}+i\theta_F,\xi) = \sum_{j=0}^{M(M+1)} \frac{\gamma h_j}{|P|^{(j+1)/2}} Q_{j,\ell}(s,\xi) x_0^{\frac{d+1}{2d} - \frac{\ell}{d} - s -i\theta_F} e\big(\frac{1}{2\pi}\Phi(x_0,\xi,\bfalpha)\big) + H_\ell^{(4)}(s,\xi)
\]
where $h_0=\sqrt{\pi}$, $Q_{0,\ell}(s,\xi)=1$ identically and $H_\ell^{(4)}(s,\xi)$ is entire. Moreover, for $s$ in any given vertical strip and $0\leq \ell\leq L$, $H_\ell^{(4)}(s,\xi)$ satisfies
\[
H_\ell^{(4)}(s,\xi) \ll e^{\epsilon|t|} \xi^{-d\kappa_0^*\sigma-(M-d)\kappa_0^*/2}\log^{3M+1}\xi,
\]
and the functions $r_{i}(\xi)$ inside $Q_{j,\ell}(s,\xi)$ satisfy $r_{i}(\xi)\ll \xi^{j\kappa_0^*/2}$.}
{\it Proof.} Apart from switching from $R$ to $P$, we keep the notation of Lemma 2.7 of \cite{Ka-Pe/2011a} and write
\begin{equation}
\label{2-25}
K(s+ \frac{\ell}{d} + i\theta_F,\xi) = \gamma x_0^{\frac{d+1}{2d} - \frac{\ell}{d} - s -i\theta_F} e^{i\Phi(x_0)} I(s), \quad I(s) = \int_{-r}^r e^{i (\Phi(z)-\Phi(x_0))} (1+\gamma\lambda)^{c(s,\ell)} \d\lambda
\end{equation}
where $c(s,\ell) = \frac{d+1}{2d} - \frac{\ell}{d} - s-1 -i\theta_F$ and $r\ll \xi^{-\kappa_0^*/2}\log \xi$. We need the following expansions, based on (2.29), (2.30) and (2.41) of \cite{Ka-Pe/2011a} and valid for $-r\leq \lambda\leq r$. First we have
\[
\Phi(z)-\Phi(x_0) = \sum_{m=2}^M \frac{R_m}{m!} (\gamma\lambda)^m + O(\xi^{-(M-1)\kappa_0^*/2} \log^{M+1}\xi)
\]
(see p.1422 of \cite{Ka-Pe/2011a}). Since $R_2=P<0$ and $| \sum_{m=3}^M (\gamma\lambda)^m\frac{R_m}{m!}|\ll \xi^{-\kappa_0^*/2} \log^3\xi$ we deduce that
\[
\begin{split}
e&^{i (\Phi(z)-\Phi(x_0))} = e^{-|P|\lambda^2} \exp\big(i \sum_{m=3}^M \frac{R_m}{m!}(\gamma\lambda)^m\big)(1+O(\xi^{-(M-1)\kappa_0^*/2} \log^{M+1}\xi)) \\
&= \big(e^{-|P|\lambda^2} \sum_{k=0}^M \frac{1}{k!}\big(i \sum_{m=3}^M (\gamma\lambda)^m\frac{R_m}{m!}\big)^k +O(\xi^{-M\kappa_0^*/2} \log^{3M}\xi)\big) \big(1+O(\xi^{-(M-1)\kappa_0^*/2} \log^{M+1}\xi)\big) \\
&= \big(e^{-|P|\lambda^2} \big(1+ \sum_{h=3}^{M^2} R_h(\xi) \lambda^h\big) + O(\xi^{-M\kappa_0^*/2} \log^{3M}\xi) \big) \big(1+O(\xi^{-(M-1)\kappa_0^*/2} \log^{M+1}\xi)\big)
\end{split}
\]
where, in view of the definition of the $R_m$ on p.1421 of \cite{Ka-Pe/2011a}, the functions $R_h(\xi)$ are linear combinations of terms of type $x_0^a/\xi^b$. Moreover, for $s$ in any fixed vertical strip we have
\[
(1+\gamma\lambda)^{c(s,\ell)}= 1 + \sum_{k=1}^M P_k(s,\ell) \lambda^k +O(e^{\epsilon|t|} \xi^{-M\kappa_0^*/2} \log^{M}\xi)
\]
with certain polynomials $P_k(s,\ell)$. Hence, for $s$ in any vertical strip, the integrand in $I(s)$ equals
\[
e^{-|P|\lambda^2}\big(1 + \sum_{j=1}^{M(M+1)} Q_{j,\ell}(s,\xi)\lambda^j\big) + O(e^{\epsilon|t|} \xi^{-M\kappa_0^*/2}\log^{3M}\xi)
\]
where the $Q_{j,\ell}(s,\xi)$ are as in \eqref{2-24}. By (2.41) of \cite{Ka-Pe/2011a}, $|R_m|\ll \xi^{\kappa_0^*}$, hence $R_h(\xi)\ll \xi^{h\kappa_0^*/3}$ since the above summation over $m$ starts from $m=3$. Therefore, the functions $r_{i}(\xi)$ inside $Q_{j,\ell}(s,\xi)$ satisfy $r_{i}(\xi)\ll \xi^{j\kappa_0^*/2}$. Integrating over $[-r,r]$ and arguing similarly as on p.1422 of \cite{Ka-Pe/2011a} we get
\begin{equation}
\label{2-26}
I(s) = \sum_{j=0}^{M(M+1)} \frac{h_j}{|P|^{(j+1)/2}} Q_{j,\ell}(s,\xi) + O(e^{\epsilon|t|} \xi^{-(M+1)\kappa_0^*/2}\log^{3M+1}\xi)
\end{equation}
for $s$ in any vertical strip, where $h_0 = \sqrt{\pi}$, $h_j>0$ and $Q_{0,\ell}(s,\xi)=1$ identically. The lemma follows from \eqref{2-25} and \eqref{2-26}. \fine
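Note that the shape of the terms in \eqref{2-26} reflects the complete Gaussian integrals
\[
\int_{-\infty}^{+\infty} e^{-|P|\lambda^2} \lambda^{2m} \d\lambda = \Gamma\big(m+\frac12\big)\, |P|^{-m-1/2}, \hskip1.5cm m\geq0,
\]
in accordance with the value $h_0=\Gamma(1/2)=\sqrt{\pi}$.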
Now we are ready to conclude the proof. Recalling the notation before both \eqref{2-21} and Lemma 2.4, for $n\geq n_0$ we denote by $x_n$ the value of $x_0$ relative to $\xi=n$, i.e. $x_n=x_0(n,\bfalpha)$; analogously, $P_n$ is the value of $P$ relative to $\xi=n$. From \eqref{2-23} and Lemma 2.4 we obtain that
\[
M(s;f) = \gamma e^{as+b} \sum_{\ell=0}^L g_\ell \sum_{j=0}^{M(M+1)} h_j \sum_{n=n_0}^\infty \frac{\overline{a(n)}}{n^{1-s}} \frac{Q_{j,\ell}(s,n)}{|P_n|^{(j+1)/2}} x_n^{\frac{d+1}{2d} - \frac{\ell}{d} - s -i\theta_F} e\big(\frac{1}{2\pi}\Phi(x_n,n,\bfalpha)\big) + H^{(5)}(s;f)
\]
where, choosing $M=M(K)$ sufficiently large, $H^{(5)}(s;f)$ is holomorphic for $\sigma>-K$ and bounded by $e^{\epsilon|t|}$ for $s$ in any vertical strip inside $\sigma>-K$. Then we use Lemma 2.8 of \cite{Ka-Pe/2011a} and proceed analogously to pp.1424-1426 there. More precisely, we use the expansions of $1/\sqrt{|P_n|}$, $x_n^{\frac{d+1}{2d} - \frac{\ell}{d} - s -i\theta_F}$ and $e\big(\frac{1}{2\pi}\Phi(x_n,n,\bfalpha)\big)$ in (2.47), in the displayed equation after (2.47) and in (2.49) of \cite{Ka-Pe/2011a}, respectively. Thanks to the shape of the functions $r_i(\xi)$ in $Q_{j,\ell}(s,\xi)$ we also have
\[
Q_{j,\ell}(s,n) = n^{\theta_j}\sum_{\omega(j)} c_{\omega(j)}(s,\ell)n^{-\omega(j)}
\]
where, for each $j\geq1$, $0\leq\omega(j)\to\infty$ is a certain sequence, $0\leq \theta_j<j\kappa_0^*/2$ and $c_{\omega(j)}(s,\ell)$ is holomorphic and bounded by $e^{\epsilon|t|}$. Collecting sufficiently many terms in such expansions we get
\[
\frac{Q_{j,\ell}(s,n)}{|P_n|^{(j+1)/2}} x_n^{\frac{d+1}{2d} - \frac{\ell}{d} - s -i\theta_F} e\big(\frac{1}{2\pi}\Phi(x_n,n,\bfalpha)\big) = \frac{e(f^*(n,\bfalpha))}{n^{d\kappa_0^*(s-\frac{d+1}{2d}+i\theta_F) +\frac{\kappa^*}{2}}} \sum_{\nu=0}^V \frac{c_{\nu,\ell,j}(s,\bfalpha)}{n^{\delta_{\nu,\ell,j}}} + H_{\ell,j}^{(6)}(s,n)
\]
where $c_{\nu,\ell,j}(s,\bfalpha)$ are entire with $c_{0,0,0}(s,\bfalpha)\neq0$, $\delta_{\nu,\ell,j}\geq0$ with $\delta_{0,0,0}=0$. Moreover, if $V=V(K)$ is sufficiently large, the sum over $n\geq n_0$ of the entire functions $\overline{a(n)}n^{s-1}H_{\ell,j}^{(6)}(s,n)$ is absolutely convergent for $\sigma>-K$ and bounded by $e^{\epsilon|t|}$ for $s$ in any vertical strip inside $\sigma>-K$.
Theorem 1 follows now summing the last equation over $n,j$ and $\ell$, since clearly
\[
s^* = 1 - s + d\kappa_0^*(s-\frac{d+1}{2d}+i\theta_F) +\frac{\kappa^*}{2}.
\]
The assertions in Remark 3 in the Introduction follow by an analysis of the above arguments.
\section{Proof of the other statements}
3.1.~{\bf Proof of Fact 1.} We proceed by induction on $j$, recalling that $\deg S\geq 1$; see after \eqref{1-8}. Clearly, for $j=1$ we have $\ell_1=\deg S_1$. Assuming that $\ell_{j-1}=\deg S_{j-1}$, thanks to \eqref{1-3} we have
\[
\text{lexp}((TS_{j-1}\cdots TS_1)^\flat) = \frac{\ell_{j-1}}{d\ell_{j-1}-1} <1,
\]
since $\deg S_{j-1}\geq 1$ and $d>2$ (so that $(d-1)\ell_{j-1}>1$, i.e. $\ell_{j-1}<d\ell_{j-1}-1$). Hence applying $S_j$ we immediately have that $\ell_j=\deg S_j$. \fine
3.2.~{\bf Proof of Fact 2.} Suppose that $G(f_0)=H(f_1)$ and $G\neq H$, thus we may write
\[
\begin{split}
G &= TS_M\cdots TS_NTS_{N-1}\cdots TS_1 \\
H &= TS_M\cdots TS_NTS'_R\cdots TS'_1
\end{split}
\]
with, in particular, $S_{N-1}\neq S'_R$. Hence, since $\frak{G}$ is a group, we must have
\[
S_{N-1}\cdots TS_1(f_0) = S'_R\cdots TS'_1(f_1).
\]
But $\deg(S_{N-1}^{-1}S'_R) \geq 1$, hence $f_0=S_1^{-1}T\cdots TS_{N-1}^{-1}S'_R\cdots TS'_1(f_1)$, all the operations being allowed by Fact 1. But this gives a contradiction, since $\text{lexp}(f_0)= \deg S_1^{-1}=\deg S_1\geq 1$ by Fact 1, and $\text{lexp}(f_0)\leq 1/d$ by definition of $\A(F)$. \fine
3.3.~{\bf Proof of Theorems 2, 3, 4 and 5.} We apply induction with respect to the weight $\omega(f)$. If $\omega(f)=0$ then $f(\xi) = f_0(\xi) + P(\xi)$ with $0\leq \ \text{lexp}(f_0) \leq 1/d$ and $P\in\ZZ[\xi]$. Hence $F(s;f)=F(s;f_0)$ and therefore the assertions of all theorems hold true in this case thanks to the results in \cite{Ka-Pe/resoI}, since $D(f)=1$ by definition in this case. In order to perform the inductive step, we suppose that $\omega(f)=M$ and assume the assertions of the theorems true for any $F\in{\mathcal S}^\sharp$ with degree $d\geq 1$ and any $f_1\in\A(F)$ (resp. $\A(F)\setminus\A_0(F)$, $\A_0(F)\setminus\A_{00}(F)$ and $\A_{00}(F)$) of weight $M-1$. Hence we have
\begin{equation}
\label{3-1}
f(\xi) = (f_1(\xi) + P(\xi))^*
\end{equation}
with some $P\in\ZZ[\xi]$ of degree $\geq 1$, lexp$(f_1+P)=\ell_M>1/d$ and $\omega(f_1)=M-1$. We shall use different arguments to show that $F(s;f)$ has the required properties according to the case at hand.
To prove Theorem 2, suppose that $f,f_1\in\A(F)$ and satisfy \eqref{3-1}, and let $K\geq0$ be arbitrary. Then we apply Theorem 1, thus getting an expression of type \eqref{1-7} for $F(s;f)$. But in view of \eqref{3-1} and \eqref{1-6} we have
\begin{equation}
\label{3-2}
\overline{F}(s^*+\eta_j; f^*) = \overline{F}(s^*+\eta_j; f_1),
\end{equation}
hence the right hand side of \eqref{1-7} is meromorphic for $\sigma>-K$ by the inductive hypothesis. Moreover, still from Theorem 1 we have that
\[
W_j(s), G(s) \ll e^{\epsilon|t|}
\]
uniformly in every vertical strip contained in $\sigma>-K$. Theorem 2 follows from the inductive hypothesis, since $K$ is arbitrarily large. \fine
To prove Theorem 3 we just observe that if $f\in\A(F)\setminus\A_0(F)$ then $f_1\in\A(F)\setminus\A_0(F)$ as well. Hence, since $F(s;f_1)$ is entire in this case, every $\overline{F}(s^*+\eta_j; f_1)$ is also entire, and arguing as before we obtain that $F(s;f)$ is entire as well, thus proving Theorem 3. \fine
Suppose now that $f\in\A_0(F)\setminus\A_{00}(F)$. Then $f_1\in\A_0(F)\setminus\A_{00}(F)$ and by the inductive hypothesis, $F(s;f_1)$ has simple poles at $s_0 = \frac12 + \frac{1}{2dD(f_1)} -i\theta_F$ and on the half-line
\[
s=\sigma - i\theta_F \hskip1.5cm \sigma \leq \frac12 + \frac{1}{2dD(f_1)}.
\]
Writing $\kappa_0=$ lexp$(f)$, by \eqref{1-12} we have
\begin{equation}
\label{3-3}
\kappa_0 = \frac{\ell_M}{(d\ell_M-1)}.
\end{equation}
Hence by \eqref{1-7}, \eqref{1-4} and \eqref{3-2}, recalling that $\theta_{\overline{F}} = -\theta_F$ we see that the poles of $F(s;f)$ are simple and lie on the half-line
\[
\frac{s+\frac{d\kappa_0}{2}-1+id\theta_F\kappa_0}{d\kappa_0-1} = \sigma +i\theta_F \hskip1.5cm \sigma \leq \frac12 + \frac{1}{2dD(f_1)}.
\]
Therefore, the polar half-line of $F(s;f)$ becomes
\[
s= (d\kappa_0-1)\sigma - \frac{d\kappa_0}{2} + 1 - i\theta_F = \sigma' - i\theta_F,
\]
say, where thanks to \eqref{3-3} we have
\[
\sigma' \leq (d\kappa_0-1)\big(\frac12 + \frac{1}{2dD(f_1)}\big) - \frac{d\kappa_0}{2} + 1 = \frac12 + \frac{d\kappa_0-1}{2dD(f_1)} = \frac12 + \frac{1}{2dD(f)}.
\]
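Here we used that, thanks to \eqref{3-3},
\[
d\kappa_0-1 = \frac{d\ell_M}{d\ell_M-1} - 1 = \frac{1}{d\ell_M-1}.
\]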
Moreover, the initial point of such a half-line is a simple pole, and Theorem 4 follows. \fine
Suppose finally that $f\in\A_{00}(F)$. Arguing as before we see that $F(s;f)$ has poles of order $\leq m_F$ on the half-line
\[
\frac{s+\frac{d\kappa_0}{2}-1+id\theta_F\kappa_0}{d\kappa_0-1} = \sigma -i\theta_F\big(\frac{(-1)^{M-1}}{D(f_1)}-1\big) \hskip1.5cm \sigma \leq \frac12 + \frac{1}{2D(f_1)},
\]
and a pole of order $m_F$ at its initial point. Hence
\[
s= (d\kappa_0-1)\sigma - \frac{d\kappa_0}{2} + 1 - i\theta_F\big(d\kappa_0 + \big(\frac{(-1)^{M-1}}{D(f_1)}-1\big)(d\kappa_0-1)\big) = \sigma' -i\theta_F D',
\]
say. But thanks to \eqref{3-3} we have
\[
D' = 1 + \frac{1}{d\ell_M-1} + \big(\frac{(-1)^{M-1}}{D(f_1)}-1\big) \frac{1}{d\ell_M-1} = - \big(\frac{(-1)^M}{D(f)}-1\big)
\]
and
\[
\sigma' \leq (d\kappa_0-1)\big(\frac12 + \frac{1}{2D(f_1)}\big) - \frac{d\kappa_0}{2} + 1 = \frac{d\kappa_0-1}{2D(f_1)} +\frac12 = \frac12 + \frac{1}{D(f)},
\]
thus proving Theorem 5. \fine
3.4.~{\bf Proof of Theorem 6.} Let $F\in{\mathcal S}_2^\sharp$. We start with the twist function $g(\xi) = k\xi^2 + \ell \xi + \beta \sqrt{\xi}$, where $k,\ell\in\ZZ$, $k>0$ and $\beta \in\RR$. Since clearly $g= S(f_0)$, where $f_0$ is the standard twist and $S$ is a shift of degree 2, we have that $g\in\A(F)$. Then we apply the operator $T$ and compute explicitly $T(g)^\flat=g^*\in\A(F)$; the function $f(\xi)$ in Theorem 6 will be closely related to $g^*(\xi)$.
To this end we recall the definition of $x_0$ and the displayed equation before (1.11) on p.1401 of \cite{Ka-Pe/2011a}, saying that
\[
g^*(\xi) = \frac{1}{2\pi} \Phi(x_0,\xi,\bfalpha)^\flat,
\]
where $\Phi(z,\xi,\bfalpha)$ is as in \eqref{1-2} (with $g$ in place of $f$ in this case) and the real number $x_0=x_0(\xi)\geq 1$ is the unique solution (in a certain region) of the equation $\frac{\partial}{\partial z} \Phi(z,\xi,\bfalpha)=0$. Therefore, writing for simplicity $q$ for the conductor $q_F$ and $\Phi(z)$ for $ \Phi(z,\xi,\bfalpha)$, we first compute the critical point $x_0$ of the function
\begin{equation}
\label{3-4}
\Phi(z) = \big(1-2\pi \beta \sqrt{\frac{\lambda}{\xi}}\big) z^{1/2} - 2\pi\big(\frac{k\lambda^2}{\xi^2}z^2 + \frac{\ell\lambda}{\xi}z\big) \hskip1.5cm \lambda=\frac{q}{(4\pi)^2},
\end{equation}
which satisfies
\begin{equation}
\label{3-5}
\frac12\big(1-2\pi \beta\sqrt{\frac{\lambda}{\xi}}\big)x_0^{-1/2} = \frac{4\pi k \lambda^2}{\xi^2}x_0 + \frac{2\pi\ell\lambda}{\xi}.
\end{equation}
Putting $x_0^{1/2} = X_0$ we obtain the cubic equation
\[
X_0^3 + \frac{\ell \xi}{2k\lambda} X_0 - \frac{\xi^2\beta(\xi)}{8\pi k \lambda^2} = 0,
\]
where $\beta(\xi) = 1-2\pi \beta \sqrt{\lambda/\xi}$. Hence by Cardano's formulae we get
\begin{equation}
\label{3-6}
X_0 = a\xi^{2/3}\left\{\left(1+\sqrt{1+\frac{b}{\xi}}\right)^{1/3} + \left(1-\sqrt{1+\frac{b}{\xi}}\right)^{1/3} \right\}
\end{equation}
with
\begin{equation}
\label{3-7}
a=a(\xi)=\big(\frac{\beta(\xi)}{16\pi k \lambda^2}\big)^{1/3} \hskip2cm b =b(\xi)= \frac{32}{27} \frac{\pi^2\ell^3\lambda}{k\beta(\xi)^2}.
\end{equation}
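Here we applied Cardano's formula for a real root of the depressed cubic $X_0^3+pX_0+q=0$, namely
\[
X_0 = \Big(-\frac{q}{2}+\sqrt{\frac{q^2}{4}+\frac{p^3}{27}}\Big)^{1/3} + \Big(-\frac{q}{2}-\sqrt{\frac{q^2}{4}+\frac{p^3}{27}}\Big)^{1/3},
\]
with $p=\frac{\ell\xi}{2k\lambda}$ and $q=-\frac{\xi^2\beta(\xi)}{8\pi k\lambda^2}$; a direct computation shows that $\big(-\frac{q}{2}\big)^{1/3}=a\xi^{2/3}$ and $\frac{p^3/27}{q^2/4}=\frac{b}{\xi}$, which gives \eqref{3-6} with $a$ and $b$ as in \eqref{3-7}.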
In view of \eqref{3-4} and \eqref{3-5} we therefore have
\begin{equation}
\label{3-8}
\Phi(x_0) = \frac{3}{4}\beta(\xi)x_0^{1/2} - \frac{\pi \ell \lambda}{\xi} x_0 = \frac{3}{4}\beta(\xi)X_0 - \frac{\pi \ell \lambda}{\xi} X_0^2.
\end{equation}
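Indeed, multiplying \eqref{3-5} by $x_0/2$ we get
\[
\frac{2\pi k\lambda^2}{\xi^2}\,x_0^2 = \frac14\beta(\xi)x_0^{1/2} - \frac{\pi\ell\lambda}{\xi}\,x_0,
\]
and inserting this into \eqref{3-4} evaluated at $z=x_0$ gives \eqref{3-8}.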
In order to compute $g^*(\xi)$ we have to approximate the right hand side of \eqref{3-8} up to negative powers of $\xi$. To this end we first note that, since $b/\xi\to 0$ as $\xi\to\infty$, a simple computation gives
\[
\begin{split}
\big(1+\sqrt{1+\frac{b}{\xi}}\big)^{1/3} &= 2^{1/3} +O\big(\frac{1}{\xi}\big) \\
\big(1-\sqrt{1+\frac{b}{\xi}}\big)^{1/3} &= -\big(\frac{b}{2\xi}\big)^{1/3} \big(1+ O(\frac{1}{\xi})\big)
\end{split}
\]
as $\xi\to\infty$. Hence, observing that $a,b = O(1)$ as $\xi\to\infty$, from \eqref{3-6} we obtain
\[
\begin{split}
X_0 &= a\xi^{2/3}\big(2^{1/3} - \big(\frac{b}{2\xi}\big)^{1/3}\big) + O(\xi^{-1/3}) \\
X_0^2 &= 2^{2/3} a^2 \xi^{4/3} - 2a^2 b^{1/3} \xi + O(\xi^{2/3}).
\end{split}
\]
Therefore from \eqref{3-8} we finally get
\begin{equation}
\label{3-9}
\begin{split}
\Phi(x_0) &= \frac{3}{4} \beta(\xi) a \xi^{2/3}\big(2^{1/3} - \big(\frac{b}{2\xi}\big)^{1/3}\big) - \pi\ell\lambda (2^{2/3} a^2 \xi^{1/3} - 2a^2 b^{1/3}) + O(\xi^{-1/3}) \\
&= 2\pi\big(A\xi^{2/3} + B\xi^{1/3} + C\xi^{1/6} + c_0 \big)+ O(\xi^{-1/6})
\end{split}
\end{equation}
as $\xi\to\infty$, where
\begin{equation}
\label{3-10}
A = \frac{3}{2(2kq^2)^{1/3}} \hskip1.5cm
B = -\frac{3\ell}{4(2k^2q)^{1/3}} \hskip1.5cm
C = -\frac{\beta}{2(k^2q)^{1/6}}
\end{equation}
and $c_0$ is a certain constant. Indeed, writing
\[
\tilde{\beta} = 2\pi\beta\lambda^{1/2}, \quad \tilde{a} = (16\pi k \lambda^2)^{-1/3}, \quad \tilde{b} = \frac{32}{27}\frac{\pi^2\ell^3\lambda}{k}
\]
and recalling \eqref{3-7} and $\beta(\xi) = 1-\tilde{\beta}\xi^{-1/2}$, a computation shows that the relevant functions in the first line of \eqref{3-9} satisfy
\[
\begin{split}
\beta(\xi)a &= \tilde{a} - \frac{4}{3} \tilde{\beta} \tilde{a}\xi^{-1/2} + O(\xi^{-1}), \\
\big(\frac{b}{\xi}\big)^{1/3} &= \tilde{b}^{1/3}\xi^{-1/3} - \frac{2}{3} \tilde{\beta}\tilde{b}^{1/3} \xi^{-2/3} + O(\xi^{-4/3}), \\
a^2 &= \tilde{a}^2 + O(\xi^{-1/2}), \\
a^2b^{1/3} &= c + O(\xi^{-1/2}),
\end{split}
\]
where $c$ is a certain constant. This proves the expansion of $\Phi(x_0)$ in the second line of \eqref{3-9}, apart from the explicit value of the constants $A,B$ and $C$ in \eqref{3-10}, since
\[
\Phi(x_0) = \big(\frac{3}{4} 2^{1/3} \tilde{a}\big) \xi^{2/3} - \big(\frac{3}{4} 2^{-1/3} \tilde{a}\tilde{b}^{1/3} + \pi\ell \lambda 2^{2/3} \tilde{a}^2\big) \xi^{1/3} -\big(2^{1/3}\tilde{\beta}\tilde{a}\big) \xi^{1/6} + c_0 + O(\xi^{-1/6}).
\]
Then a further computation (which we omit), involving also the value of $\lambda$ in \eqref{3-4}, proves \eqref{3-10}. Therefore
\[
g^*(\xi) = A\xi^{2/3} + B\xi^{1/3} + C\xi^{1/6} + c_0
\]
with $A,B,C$ as in \eqref{3-10} and a certain constant $c_0$, and $g^*\in\A_0(F) \Leftrightarrow |\beta|\in$ Spec$(F)$.
Note that since both $\ell$ and $\beta$ can be positive or negative, the two minus signs in \eqref{3-10} may be omitted.
Hence by the substitution $\alpha = \frac{\beta}{2(k^2q)^{1/6}}$ we see that Theorem 6 is proved in the case $k>0$, since the constant $c_0$ may clearly be omitted without affecting the statement. If $k<0$ we just recall \eqref{1-6}, and the result follows in its full generality. \fine
\vskip 1cm
\noindent
Jerzy Kaczorowski, Faculty of Mathematics and Computer Science, A.Mickiewicz University, 61-614 Pozna\'n, Poland and Institute of Mathematics of the Polish Academy of Sciences,
00-956 Warsaw, Poland. e-mail: [email protected]
\noindent
Alberto Perelli, Dipartimento di Matematica, Universit\`a di Genova, via Dodecaneso 35, 16146 Genova, Italy. e-mail: [email protected]
\end{document}
\begin{document}
\begin{center}
{\Large \textbf{On convex hull of Gaussian samples}}\\
{\large Yu. Davydov\footnote{University of Lille 1, France}
}
\end{center}
\noindent\textbf{Abstract:} {\small
Let $X_i = \{X_i(t), \;t \in T\}$ be i.i.d. copies of a centered Gaussian process\\ $X = \{X(t),\;\; t \in T\,\}$ with values in $\mathbb{R}^d$ defined on a separable metric space $T.$\\ It
is supposed that $X$ is bounded.
We consider the asymptotic behaviour of convex hulls
$$
W_n = \conv {}\{\,X_1(t),\ldots, X_n(t),\;\;t \in T\}
$$
and show that with probability 1
$$
\lim_{n\rightarrow \infty} \frac{1}{\sqrt{2\ln n}}\,W_n = W
$$
(in the sense of Hausdorff distance), where the limit shape $W$ is defined by the covariance structure of $X$:
$\;\;W = \conv {}\{K_t, \;t\in T\}, \;\; K_t $ being the concentration ellipsoid of $X(t).$
The asymptotic behavior of the mathematical expectations $Ef(W_n),$ where $f$ is a homogeneous functional, is also studied.}\\
\noindent\emph{Key-words:} Gaussian process, Gaussian sample, convex hull, limit theorem.
\section{Introduction}
Let $T$ be a separable metric space. Let $X_i = \{X_i(t), \;t \in T\}$ be i.i.d. copies of a centered Gaussian process $X = \{X(t),\; t \in T\}$ with values in $\mathbb{R}^d.$ Assume that $X$ has a.s. bounded paths and consider the convex hulls
\begin{equation}
\label{W_n}
W_n = \conv {}\{\,X_1(t),\ldots, X_n(t),\;\;t \in T\}.
\end{equation}
We are studying the existence of a limit shape for the sequence $\{W_n\}.$
Our work is motivated by the recent papers \cite{RFMC, MCRF}, inspired by an interesting application in an ecological context: estimating the home range of a herd of animals with population size $n.$
The mathematical results of these articles consist in the exact computation of the mean perimeter $L_n$ and the mean area $A_n$ of $W_n$ in the case when $d=2$ and $X$ is a standard Brownian motion on $T= [0,1].$
It was shown that
\begin{equation}
\label{perim_area}
L_n \sim 2\pi\sqrt{2\ln n},\;\;\;\; A_n \sim 2\pi\ln n,\;\;\; n\rightarrow \infty.
\end{equation}
Since the relation between $L_n$ and $A_n$ is the same as that between the perimeter and the area of a circle of radius $\sqrt{2\ln n},$ it
seems plausible that $W_n$ becomes asymptotically round as $n$ grows. Our aim is to show that this phenomenon indeed occurs for all bounded
Gaussian processes. Our main result (Theorem 1) establishes the existence with probability 1 of the limit
\begin{equation}
\label{asympt}
\lim_{n\rightarrow \infty} \frac{1}{\sqrt{2\ln n}}\,W_n = W
\end{equation}
(in the sense of Hausdorff distance) and gives the complete description of the limit set $W$ which is natural to call {\it limit shape} for
convex hulls $W_n.$ In particular case of standard Brownian motion on $[0,1]$ the set $W$ coincides with the unit ball $B_d(0,1)$ of $\mathbb{R}^d.$
An interesting consequence of (\ref{asympt}) is that the rate of the growth of the convex hulls $W_n$ is the same for all bounded Gaussian processes.
The proof for continuous Gaussian processes may easily be deduced from known results concerning the asymptotics of Gaussian samples
(see \cite{AK, G}), but in the general case one needs
an independent proof.
Let us remark in addition that if $T$ is a singleton, $T=\{t_0\},$ and $d=1$, then the process $X$ is simply a real random variable and $W_n$ is the segment $[\min \{\,X_1,\ldots, X_n\,\},\,
\max \{\,X_1,\ldots, X_n\,\}].$ This means that in some sense our study is closely connected with the classical theory of extremes.
\section{Asymptotic behavior of $W_n$}
\subsection{Notation}
$B_d(0,1),\;\;S_d(0,1)$ are respectively unit ball and unit sphere of $\mathbb{R}^d.$
\noindent
$\langle \cdot, \cdot\rangle$ is the scalar product in $\mathbb{R}^d.$
\noindent
$\mathcal{K}(B)$ is the space of compact convex subsets of a Banach space $\mathbb B$ provided with Hausdorff distance $\rho_{\mathbb B}$ :
$$
\rho_{\mathbb B}(A,B) = \max\{\inf\{\,\epsilon \;|\;A \subset B^\epsilon\},\;\; \inf\{\,\epsilon \;|\;B \;\subset A^\epsilon\}\},
$$
$A^\epsilon$ is the open $\epsilon$-neighbourhood of $A$.
\noindent
We set $\mathcal{K}^d = \mathcal{K}(\mathbb{R}^d)$ and $\rho = \rho_{{\mathbb R}^d}.$
\noindent
$\mathcal{M}_A(\theta),\;\;\theta \in S_d(0,1),$ is a support function of a set $A\in \mathcal{K}^d:$
$$
\mathcal{M}_A(\theta)= \sup_{x\in A}\langle x, \theta\rangle, \;\;\;\theta \in S_d(0,1).
$$
$T$ is a separable metric space.
\noindent
$\mathbb{C}(T)$ is the space of continuous functions on $T$ with uniform norm.
\noindent
$X = \{X(t),\; t \in T\}$ is a separable bounded centered Gaussian process with values in $\mathbb{R}^d.$
\noindent
$R_t$ is the covariance matrix of $X(t).$
\noindent
$K_t$ is the ellipsoid of concentration of $X(t):$
$$
K_t = \{x\in \mathbb{R}^d\;|\;\langle R_t^{-1}x, x\rangle \leq 1\}.
$$
\noindent
Finally we set
\begin{equation}
\label{lim_shape}
W = \conv {}\{K_t, \;t\in T\}.
\end{equation}
\subsection{Limit shape}
\begin{thm} \label{as_conv}
${}$
1) Let $X = \{X(t),\; t \in T\}$ be a bounded centered Gaussian process with values in $\mathbb{R}^d.$ Let $(X_i)$ be a sequence of i.i.d. copies of $X$ and $W_n$ be
the convex hull defined by (\ref{W_n}).
Then with probability 1
\begin{equation}
\label{thm1-1}
\frac{1}{\sqrt{2\ln n}}\,W_n \;\;
\stackrel{\mathcal{K}^d}{\longrightarrow}\;\; W.
\end{equation}
2) If $T$ is compact and $X$ is continuous, then a.s.
\begin{equation}
\label{thm1-2}
\rho\left(\frac{1}{\sqrt{2\ln n}}\,W_n,\;\;W\right) = o\left(\frac{1}{\sqrt{\ln n}}\right).
\end{equation}
\end{thm}
\begin{rem} \label{th1}
It is not difficult to see that the support function $\mathcal{M}_W$ of the limit shape $W$ admits the following representation
$$
\mathcal{M}_W(\theta) = \sigma(\theta),
$$
where
$$
\sigma^2(\theta) = \sup_{t\in T} \langle R_t\theta,\, \theta\rangle,\;\;\; \theta \in S_d(0,1).
$$
\end{rem}
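Indeed, writing $x = R_t^{1/2}y$ with $|y|\leq 1$ we get
$$
\mathcal{M}_{K_t}(\theta) = \sup_{x\in K_t}\langle x,\, \theta\rangle = \sup_{|y|\leq 1}\langle y,\, R_t^{1/2}\theta\rangle = \sqrt{\langle R_t\theta,\, \theta\rangle},
$$
and since the support function of the convex hull of a union of sets is the supremum of their support functions, we obtain $\mathcal{M}_W(\theta) = \sup_{t\in T}\mathcal{M}_{K_t}(\theta) = \sigma(\theta).$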
The examples below show that in concrete cases the identification of $W$ is not very complicated.
\begin{rem} \label{th1-2}
For non-centered processes the relation (\ref{thm1-1}) remains the same whe\-reas (\ref{thm1-2}) must be replaced by
\begin{equation}
\label{thm1-3}
\rho\left(\frac{1}{\sqrt{2\ln n}}\,W_n,\;\;W\right) = O\left(\frac{1}{\sqrt{\ln n}}\right).
\end{equation}
\end{rem}
\subsection{Asymptotic behavior of moments}
Let $f: \mathcal{K}^d \rightarrow \mathbb{R}^1$ be a continuous,
positive, increasing, homogeneous function of degree $p$, that is,
$f(A) \geq 0 \;\;\; \forall A \in \mathcal{K}^d;$
$f(A_1) \leq f(A_2) \;\;\;\forall A_1\subset A_2,\;\, A_1,A_2 \in \mathcal{K}^d;$
$f(cA) = c^pf(A),\;\; \forall \;c\geq 0,\; \forall A \in \mathcal{K}^d.$
\begin{thm} \label{esper}
Let $f$ be a function with the properties described above. Then, under hypothesis of Theorem 1
\begin{equation}
\label{thm2}
Ef\left(\frac{1}{\sqrt{2\ln n}}\,W_n\right) \rightarrow f\left(W\right).
\end{equation}
\end{thm}
\begin{rem} \label{th2}
${}$
1) This theorem gives in particular the asymptotic behavior for mean values of all reasonable geometrical characteristics of $W_n$ (such as volume, surface measure, diameter,\ ...).
2) By replacing $f$ with $f^m, m>0,$ we get the asymptotic behavior of higher order moments
\begin{equation}
\label{remthm2}
Ef^m\left(\frac{1}{\sqrt{2\ln n}}\,W_n\right) \rightarrow f^m\left(W\right).
\end{equation}
\end{rem}
\subsection{Examples}
{\bf Brownian motion.} Let $X$ be a standard $d$-dimensional Brownian motion on $T=[0,1].$
Then $K_t = \sqrt{t}B_d(0,1)$ and the limit shape is $W= B_d(0,1).$
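Indeed, here $R_t$ is $t$ times the identity matrix, so that
$$
K_t = \{x\in \mathbb{R}^d\;|\;\langle x, x\rangle \leq t\} = \sqrt{t}\,B_d(0,1) \;\;\;\textrm{and}\;\;\; W = \conv {}\{\sqrt{t}\,B_d(0,1),\; 0\leq t\leq 1\} = B_d(0,1).
$$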
In particular, Theorem 2 gives for $d=2$ the relations (\ref{perim_area}).
\noindent
{\bf Self-similar processes.} Let $X= \{X(t),\;t\in \mathbb{R}_+\}$ be a Gaussian centered self-similar process (SSP) with values in $\mathbb{R}^d.$ It means that for some $\alpha>0$ the processes
$$
\{X(at), \;t\in \mathbb{R}_+\},\;\;\;\{a^{\alpha}X(t), \;t\in \mathbb{R}_+\}
$$
have the same law for any $a>0.$
If we suppose that $X$ is bounded, then we can apply our Theorem 1 to the restriction of $X$ to $[0,1].$ As $X(t) \stackrel{\mathcal{D}}{=} t^{\alpha}X(1),$ we have
$K_t = t^\alpha K_1,$ which gives $W=K_1.$
This applies in particular when $X$ has stationary increments: indeed, in this case the process is continuous, since for any $\theta \in S_d(0,1)$ the process $\langle X(t),\, \theta\rangle$ is a fractional Brownian motion (FBM).
\noindent
{\bf Fractional Brownian Bridge.} Now let us suppose that the coordinates of the process
$Y(t)=\{Y_1(t),\ldots,Y_d(t)\}$ are independent FBM's:
$$
EY_i(t)=0,\;\;\;r(t,s):= EY_i(t)Y_i(s)=\frac{1}{2}(t^{2H} +s^{2H} -|t-s|^{2H}).
$$
Then the conditional
process related to the condition $Y(1)=0$, which can be called {\it Fractional
Brownian Bridge,} coincides in distribution with the process
$$
X(t)=\{X_1(t),\ldots,X_d(t)\},\;\;\; X_i(t)=Y_i(t)-r(t,1)Y_i(1),\;\;t\in [0,1].
$$
It is clear that $K_t=\sigma(t)B_d(0,1),$ where $\sigma^2(t) =
t^{2H} -\frac{1}{4}(t^{2H}+1-|1-t|^{2H})^2.$ The function $\sigma^2$ reaches its
maximum at $t=\frac{1}{2}$ and $\sigma^2_{\mathrm {max}}=
\frac{1}{2^{2H}}-\frac{1}{4}.$ Finally we see that $W=\sigma_{\mathrm {max}}B_d(0,1).$
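For instance, at $t=\frac{1}{2}$ a direct computation gives
$$
\sigma^2\Big(\frac{1}{2}\Big) = \Big(\frac{1}{2}\Big)^{2H} - \frac{1}{4}\Big(\Big(\frac{1}{2}\Big)^{2H}+1-\Big(\frac{1}{2}\Big)^{2H}\Big)^2 = \frac{1}{2^{2H}} - \frac{1}{4},
$$
in agreement with the value of $\sigma^2_{\mathrm {max}}$ given above.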
\section{Proofs}
{\bf Proving Theorem 1.} The theorem is a consequence of the following two lemmas.
\begin{lem}
\label{lem1}
Let $Y$ be a r.v. such that for all $\gamma <\frac{1}{2}$
$$
E\exp{\{\gamma Y^2\}} < \infty.
$$
Let $(Y_k)$ be a sequence of independent copies of $Y.$ Then with probability 1
$$
\limsup_n \frac{1}{\sqrt{2\ln n}}\max\{Y_1,\ldots,Y_n\} \leq 1.
$$
\end{lem}
\begin{lem}
\label{lem2}
Let $Y = \sup_T X(t),$ where $X = (X(t), \;t\in T)$ is a centered bounded Gaussian process with
$\sup_T \var {}X(t) = 1$ and let $(Y_n)$ be a sequence of independent copies of $Y.$
Then with probability 1
$$
\liminf_n \frac{1}{\sqrt{2\ln n}}\max\{Y_1,\ldots,Y_n\} \geq 1.
$$
\end{lem}
\noindent
{\usefont{T1}{cmr}{m}{sc}
\selectfont
Proof of Theorem 1, first part.}
Fix $\theta \in S_d(0,1).$ Define $\pi_\theta : \mathbb{R}^d\rightarrow \mathbb{R}^1$ by $\pi_\theta(x) = \langle \theta, x\rangle, \;\;x \in \mathbb{R}^d,$ and set
$$
\pi_\theta(W_n) = [m_n^{(\theta)}, M_n^{(\theta)}],
$$
where
$$
m_n^{(\theta)}= \min_{i\leq n}\{m_{i,\theta}\}, \;\; \;\;
m_{i,\theta}= \inf_{t\in T}\langle \theta, X_i(t)\rangle,
$$
$$
M_n^{(\theta)}=\max_{i\leq n}\{M_{i,\theta}\},\;\; \;\;
M_{i,\theta}= \sup_{t\in T}\langle \theta, X_i(t)\rangle.
$$
Since the paths of $X$ are bounded, by the well-known results of Fernique and of Marcus and Shepp \cite{F, MSh} we have that
$$
E\exp{\{\gamma M_{i,\theta}^2\}} < \infty
$$
for all $\gamma < \frac{1}{2\sigma^2(\theta)},$ where $\sigma^2(\theta)= \sup_{t\in T} \var{} \langle \theta, X(t)\rangle.$
Then by Lemma 1, applied to the variables $M_{i,\theta}/\sigma(\theta),$ with probability 1
\begin{equation}
\label{appl1lem1}
\limsup_n \frac{1}{\sqrt{2\ln n}} M_n^{(\theta)}\leq \sigma(\theta).
\end{equation}
On the other hand, by Lemma 2
\begin{equation}
\label{appl1lem2}
\liminf_n \frac{1}{\sqrt{2\ln n}} M_n^{(\theta)}\geq \sigma(\theta), \;\;\;{\textrm a.s.}
\end{equation}
Therefore
\begin{equation}
\label{lem1-2max}
\lim_n \frac{1}{\sqrt{2\ln n}} M_n^{(\theta)} = \sigma(\theta), \;\;\;{\textrm a.s.}
\end{equation}
By the same arguments
\begin{equation}
\label{lem1-2min}
\lim_n \frac{1}{\sqrt{2\ln n}} m_n^{(\theta)} = -\sigma(\theta), \;\;\;{\textrm a.s.}
\end{equation}
It means that for any $\theta \in S_d(0,1)$ with probability 1
\begin{equation}
\label{conv_proj}
\pi_\theta\left( \frac{1}{\sqrt{2\ln n}} W_n\right)\; \longrightarrow\;\;
[- \sigma(\theta), \;\;\sigma(\theta)].
\end{equation}
Let $\{e_i,\; i= 1,\ldots,d\}$ be a basis of $\mathbb{R}^d.$ Consider the parallelepiped $C_n$ defined by the orthogonal projections of $\frac{1}{\sqrt{2\ln n}} W_n$ onto coordinate axes.
The relation (\ref{conv_proj}) implies that
$$
C_n \longrightarrow \prod_{i=1}^d [- \sigma(e_i), \;\;\sigma(e_i)], \;\;\;{\textrm a.s.}
$$
Since $\frac{1}{\sqrt{2\ln n}} W_n \subset C_n,$ the sequence $\{\frac{1}{\sqrt{2\ln n}} W_n\}$ is bounded, and hence relatively compact, in $\mathcal{K}^d.$
Due to the natural isometric embedding of $(\mathcal{K}^d,\,\rho)$ into $\mathbb{C}(S_d(0,1))$ given by the support functions, it follows from this that the sequence $\{\mathcal{M}_n(\theta),\;\; \theta \in S_d(0,1)\}$ of support functions of the sets $\{\frac{1}{\sqrt{2\ln n}} W_n\}$ is a.s. relatively compact in the space $\mathbb{C}(S_d(0,1)).$ Let $\Theta$ be a countable dense subset of $S_d(0,1).$ Using the relation
(\ref{lem1-2max}) we see that with probability 1 for all $\theta \in \Theta$
$$
\mathcal{M}_n(\theta) \rightarrow \sigma(\theta).
$$
Together with relative compactness this shows that almost surely the sequence $\{\mathcal{M}_n(\cdot)\}$ has a unique limit point. Then the same is true for $\{\frac{1}{\sqrt{2\ln n}} W_n\},$ and Remark 1 concludes the proof of the first part.
\noindent
{\usefont{T1}{cmr}{m}{sc}
\selectfont
Proof of Theorem 1, second part.} Now we can consider the processes $X$ and $X_k$ as random elements of the separable Banach space $\mathbb{B}=\mathbb{C}(T).$ By Theorem 2.1. of \cite{G} with probability 1
\begin{equation}
\label{goodman}
\rho_{\mathbb{B}}\left(\widetilde{W}_n,\;\;\sqrt{2\ln n}\,\widetilde{W}\right) = o(1),
\end{equation}
where $\widetilde{W}_n = \conv{}_{\mathbb{B}}\{X_1,\ldots,X_n\}$ and $\widetilde{W}$ is the ellipsoid of concentration of $X.$ Let $\varphi : \mathbb{B} \rightarrow \mathcal{K}^d$ be defined by
$$
\varphi(x) = \conv {}\{\,x(t), \;t\in T\}.
$$
It is clear that $\varphi(\widetilde{W}_n)= W_n,\;\;\;\varphi(\widetilde{W})= W,$ and it is easy to check that the map $\varphi$ is Lipschitzian:
$$
\rho(\varphi(x),\;\varphi(y)) \leq \rho_{\mathbb{B}}(x,y),\;\; x,y \in \mathbb{B}.
$$
Therefore (\ref{thm1-2}) follows directly from (\ref{goodman}).
$
\mbox{$\Box$}\\[2mm]$
{\bf Proofs of Lemmas 1-2.}
\noindent
{\usefont{T1}{cmr}{m}{sc}
\selectfont
Proof of Lemma 1.} Let $s>0.$ Setting
$$
Z_n = \frac{1}{\sqrt{2\ln n}}\max\{Y_1,\ldots,Y_n \}
$$
our assumption implies
$$
P\left\{Z_n \geq 1+s \right\}\;\; \leq \;\;
nP\{Y\geq (1+s)\sqrt{2\ln n}\}\leq
$$
$$
\hspace{-40pt}\leq
\;\;\frac{n E\exp{\{\gamma Y^2\}}}{\exp{\{\gamma (1+s)^22\ln n\}}}\;\; =\;\; C(\gamma)n^{-\delta},
$$
where $\delta = 2\gamma(1+s)^2 - 1 >0$ if we choose $\frac{1}{2(1+s)^2}<\gamma <\frac{1}{2}.$
This inequality shows that the series
$$
\sum_m P\left\{Z_{m^a} \geq 1+s \right\}
$$
converges if $a > 1/\delta.$
We apply the Borel--Cantelli lemma and since $s$ is arbitrary, we find that
$$
\limsup_m Z_{m^a} \leq 1.
$$
As for $k \in [m^a, (m+1)^a)$
$$
Z_k \leq Z_{(m+1)^a}\sqrt{\frac{\ln (m+1)}{\ln m}},
$$
we get
$$
\limsup_n Z_{n} \leq 1.
$$
$
\mbox{$\Box$}\\[2mm]$
\noindent
{\usefont{T1}{cmr}{m}{sc}
\selectfont
Proof of Lemma 2.} Let $0<s<1$ and let $t_0 \in T$ be chosen so that $\sigma_0^2 :=\var {}X(t_0) > s.$ We use the same notation $Z_n$ for $\frac{1}{\sqrt{2\ln n}}\max\{Y_1,\ldots,Y_n \}.$ We have
$$
P\left\{Z_n \leq s \right\}\;\;= P\left\{Y \leq s \sqrt{2\ln n}\right\}^n\;\; = \;\;
F(s\sqrt{2\ln n})^n\;\;\;
$$
$$
\leq \;\;\exp{\{-n(1-F(s\sqrt{2\ln n}))\}},
$$
where $F$ is the distribution function of $Y.$
Note that
$$
1-F(x) = P\{\sup_T X(t) > x\}\;\; \geq \;\;P\{X(t_0) > x\}\;\; \geq\;\;Cx^{-1}\exp{\left\{-\frac{x^2}{2\sigma_0^2}\right\}},
$$
which shows that
$$
P\left\{Z_n \leq s \right\}\;\;\leq \exp{\left\{-C(\ln n)^{-\frac{1}{2}}n^{1-\frac{s^2}{\sigma_0^2}}\right\}}.
$$
This means that the series $\sum_n P\left\{Z_n \leq s \right\}$ converges, so by
the Borel--Cantelli lemma $\liminf_n Z_n \geq s$ a.s.; since $0<s<1$ is arbitrary, the proof is finished. $
\mbox{$\Box$}\\[2mm]$
{\bf Proving Theorem 2.} The proof is based on the following lemma completing the information given by Lemma 1.
\begin{lem}
\label{lem3}
Let $Y$ be a r.v. such that for some $\gamma > 0$
$$
E\exp{\{\,\gamma Y^2\}} < \infty.
$$
Let $(Y_i)$ be a sequence of independent copies of $Y.$
Then for any $k\in {\mathbb N}$
$$
\sup_n E \left(\frac{1}{\sqrt{2\ln n}}\max\{\,Y_1,\ldots,Y_n\}\right)^k < \infty.
$$
\end{lem}
\noindent
{\usefont{T1}{cmr}{m}{sc}
\selectfont
Proof of Theorem 2.}
Due to the continuity of $f$ and the convergence (\ref{thm1-1}) the result will follow from the uniform integrability of the family $\left\{f\left(\frac{W_n}{\sqrt{2\ln n}}\right)\right\}.$
Using the notation from the proof of the first part of Theorem 1, we set
$$
m_n = \min_{i=1,\ldots,d} m_n^{e_i},\;\;\;M_n = \max_{i=1,\ldots,d} M_n^{e_i},\;\;\;
D_n = \max\{- m_n,M_n\},
$$
$$
L_n = [-D_n, D_n]^d.
$$
Since $W_n \subset L_n,$ we have
$$
f\left(\frac{W_n}{\sqrt{2\ln n}}\right) \;\;\leq \;\;f\left(\frac{L_n}{\sqrt{2\ln n}}\right)
\;\;=\;\; \left(\frac{D_n}{\sqrt{2\ln n}}\right)^p f([-1,1]^d),
$$
hence it is sufficient to state that for all $p>0$
\begin{equation}
\label{supE}
\sup_n E\left(\frac{D_n}{\sqrt{2\ln n}}\right)^p \;\;< \infty.
\end{equation}
The latter relation follows directly from Lemma 3, and the
theorem is proved. $
\mbox{$\Box$}\\[2mm]$
\noindent
{\usefont{T1}{cmr}{m}{sc}
\selectfont
Proof of Lemma 3.} Let
$$
Z_n = \frac{1}{\sqrt{2\ln n}}\max\{\,Y_1,\ldots,Y_n \}.
$$
Denote by $F,\; F_n$ the distribution functions of $Y$ and $Z_n$ respectively. By Markov's inequality and by the assumption of the lemma
\begin{equation}
\label{lem3_1}
1-F_n(x) \leq nP\{Y\geq x\sqrt{2\ln n}\} \;\;\leq An^{1-2\gamma x^2}.
\end{equation}
Hence for $a=\frac{1}{\sqrt{2\gamma}}$
\begin{equation}
\label{lem3_2}
E(Z_n)^k \leq k\int_0^\infty x^{k-1}(1-F_n(x))dx \;\;\leq
a^k\;+ \;kA\int_a^\infty x^{k-1}n^{1-2\gamma x^2}dx.
\end{equation}
As for $x\geq a=\frac{1}{\sqrt{2\gamma}}$
$$
n^{1-2\gamma x^2} \leq \exp{\{-\gamma (x^2-a^2)\ln n\}},
$$
we find that
$$
\limsup_n \int_a^\infty x^{k-1}n^{1-2\gamma x^2}dx \;\;=\;\; 0,
$$
and we get from (\ref{lem3_2})
$$
\limsup_n E(Z_n)^k\;\;\leq \;\;a^k,
$$
which completes the proof. $
\mbox{$\Box$}\\[2mm]$
\section{Concluding remarks}
\hspace{11pt} 1. It is clear that the result of the second part of Theorem 1 remains valid if the space
$\mathbb{R}^d$ is replaced by a separable Banach space.
On the contrary, the analogous question about the first part of Theorem 1 and about Theorem 2 is more delicate: their proofs are essentially based on the
compactness of bounded subsets of $\mathbb{R}^d$, and it is not clear how to handle this obstacle in the infinite-dimensional case.
2. The second interesting question is about the character of approach of $\frac{W_n}{\sqrt{2\ln n}}$ to $W$. Is it true that $b_n \rho\left(\frac{W_n}{\sqrt{2\ln n}}\,,\;W\right)$ does converge in law to some limit for some choice of normalizing constants $b_n$?
The same question can be also asked for the processes \\ $\{b_n(M_n(\theta) - M(\theta))\,,\;\;\theta \in S^{d-1}\},$
where $M_n(\theta),\; M(\theta)$ are respectively the support functions of $\frac{W_n}{\sqrt{2\ln n}}$ and $W.$ It seems that the recent paper \cite{K} may be useful in this context.
3. What can we say about the behaviour of $W_n$ in the non-Gaussian
case? It is more or less clear that almost sure convergence must be
replaced by weak convergence, and that the normalizing constants
change from logarithmic to power ones. Indeed, if $X$ is a
vector in $\mathbb{R}^m$ with regularly varying distribution and $(X_i)$ is a
sequence of i.i.d. copies of $X$, then it is well known that the
point processes
$$
\beta_n = \sum_{i=1}^n \delta_{\left\{\frac{X_i}{n^{1/\alpha}}\right\}}
$$
converge weekly to some Poisson point process $\Pi_\alpha.$
It follows immediately from this that
\begin{equation}
\label{stable}
\frac{W_n}{n^{1/\alpha}} \Longrightarrow \conv{}(\Pi_\alpha).
\end{equation}
This fact remains valid in the much more general case where $X$ is a random element of an abstract convex cone $\mathbb K$ (see \cite{DMZ}), provided $X$ satisfies the regular variation condition (condition (4.5) in \cite{DMZ}).
Hence the convergence (\ref{stable}) may be considered as a ``regularly varying'' analog of the second part of Theorem 1.
The main difficulty now is how to check the regular variation condition in concrete situations.
{\bf Acknowledgments.} The author wishes to thank M. Lifshits
for his interest in this work and for useful discussions, and V. Paulauskas for stimulating remarks,
as well as all participants of the working seminar on Stochastic Geometry of the University Lille 1 for their support.
\end{document}
|
\begin{document}
\author{Ahmet M. G\"ulo\u{g}lu }
\address{Department of Mathematics, Bilkent University, Ankara, Turkey} \email{[email protected]}
\keywords{Non-vanishing, Moments, Cubic Dirichlet characters, Cubic Gauss sums, Hecke $L$-functions.}
\title[Non-vanishing of Cubic Dirichlet L-Functions]{Non-vanishing of Cubic Dirichlet L-Functions over the Eisenstein Field}
\begin{abstract}
We establish an asymptotic formula for the first moment and derive an upper bound for the second moment of L-functions associated with the complete family of primitive cubic Dirichlet characters defined over the Eisenstein field. Our results are unconditional, and indicate that there are infinitely many characters within this family for which the L-function $L(s,\chi)$ does not vanish at the central point $s=1/2$.
\end{abstract}
\maketitle
\section{Introduction}
Chowla's conjecture \cite{Chow} asserts that $L(1/2, \chi) \neq 0$ for any real non-principal Dirichlet character $\chi$. However, Li \cite{Li} demonstrated that there are infinitely many quadratic Dirichlet $L$-functions over the rational function field for which $L(1/2, \chi) = 0$. Nonetheless, it is widely believed that the set of quadratic characters with $L(1/2, \chi) = 0$ should have density zero.
In the case of quadratic Dirichlet $L$-functions, Özlük and Snyder \cite{OS-1999}, assuming the Generalized Riemann Hypothesis (GRH), computed the one-level density for the low-lying zeroes in the family and demonstrated that at least 15/16 of these functions do not vanish at $s=1/2$. The conjectures put forth by Katz and Sarnak \cite{KS-book} suggest that $L(1/2, \chi) \neq 0$ for almost all quadratic Dirichlet $L$-functions. Soundararajan \cite{Sound}, by calculating the first two mollified moments, proved that at least 87.5\% of quadratic Dirichlet $L$-functions do not vanish at $s = 1/2$ without assuming GRH. It is worth noting that using only the first two (non-mollified) moments does not yield a positive proportion of non-vanishing, as their growth rate is too rapid (refer to Conjecture 1.5.3 in \cite{CFKRS} and the work of Jutila \cite{Jut}).
In the function field case, Bui and Florea \cite{BuiFlo2018} computed the one-level density and obtained a proportion of at least 94\% for non-vanishing quadratic Dirichlet $L$-functions.
In this paper, we consider the family of $L$-functions attached to primitive cubic Dirichlet characters defined over the Eisenstein field $K = \Q(\omega)$, where $\omega = e^{2\pi i/3}$. Various researchers have investigated the one-level density for families of cubic characters over both $\Q$ and $K$, employing test functions whose Fourier transforms are limited to the interval $(-1, 1)$. These families possess unitary symmetry. However, to achieve a positive proportion of non-vanishing, it becomes necessary to extend beyond the interval $(-1, 1)$. David and the author accomplished this for a subset of cubic characters over $K$ in their work \cite{DG}. They employed test functions with support in $(-13/11, 13/11)$, which yielded a proportion of non-vanishing equal to $2/13$. Notably, these results are obtained assuming the Generalized Riemann Hypothesis (GRH).
Regarding the moments of $L(1/2, \chi)$, where $\chi$ is a primitive cubic character, Baier and Young \cite{BaYo} computed the first moment over $\Q$, Luo \cite{Luo} calculated it for the same thin sub-family as in \cite{DG} over $K$, and David, Florea, and Lalin \cite{DFL-1} did so over function fields. The first two papers utilized smoothed sums over the respective families. In all three cases, the authors obtained lower bounds for the number of non-vanishing cubic twists but not positive proportions, relying on upper bounds for higher moments. The asymptotic behavior of the second moment for cubic Dirichlet $L$-functions remains an open question for function fields and number fields.
David, Florea, and Lalin \cite{DFL-2} established, using mollified moments of $L$-functions, that there is a positive proportion of non-vanishing among cubic Dirichlet $L$-functions at $s = 1/2$ over function fields in the non-Kummer case. Using the same methodology, Yesilyurt and the author have recently demonstrated a positive proportion of non-vanishing for the entire family of cubic Dirichlet characters over $K$. This result had not previously been established for the complete family of cubic characters. It is important to note, however, that it relies on the GRH for the higher moments.
In this study, we unconditionally obtain an asymptotic formula for the first moment and an upper bound for the second moment of the $L$-functions associated with the entire family of primitive cubic Dirichlet characters defined over the Eisenstein field $K$. Our approach is similar to that of Luo \cite{Luo}, although we consider characters with conductors of bounded norm rather than employing a smoothed sum over the family.
The result presented in this paper builds upon the methods developed in \cite{DG}, which rely on the profound contributions of Kubota and Patterson regarding the average of cubic Gauss sums. Furthermore, we utilize Heath-Brown's cubic large sieve inequality as described in \cite[Theorem 2]{HB2000}. These techniques form the foundation for our analysis and enable us to derive the following result.
\begin{theorem} \label{thm:moments}
Consider the family $\mathscr F(X)$, defined in \eqref{family}, consisting of all primitive cubic Dirichlet characters whose conductors have norm not exceeding $X$. Then,
\eqs{
\sum_{\chi \in \mathscr F(X)} L(1/2, \chi) = D X\log X + EX + O (X^{65/66 + {\varepsilon}}),}
where $D$ and $E$ are given in \eqref{AsymptoticConstant}. Furthermore,
\eqn{\label{secondmoment}
\sum_{\chi \in \mathscr F(X)} |L(1/2, \chi)|^2 \ll X^{7/6+{\varepsilon}} .}
\end{theorem}
Using Theorem \ref{thm:moments} together with the Cauchy--Schwarz inequality we obtain the following result.
\begin{cor}
There exist infinitely many primitive cubic Dirichlet characters $\chi$ such that $L(1/2, \chi) \neq 0$.
More precisely, the number of such characters whose conductor has a norm not exceeding $X$ is $\gg X^{5/6-{\varepsilon}}$.
\end{cor}
\section{Preliminaries}
\subsection{Notation}
$\N(n) \sim N$ means $N < \N(n) \leqslant 2N$. We write $\N\fa$ for the norm of the ideal $\fa$, and $\N(a)$ for the norm of the ideal $a\ring$. We use
$e(z) = \exp ( 2\pi i z)$ and put $\tr(z) = z + \bar z$.
The following lemma is used to find optimal bounds in the proofs of several results throughout the paper.
\begin{lem}[{\cite[Lemma 2.4]{GraKol}}] \label{balancing}
Suppose that $$L(H) = \sum_{i=1}^m A_i H^{a_i} + \sum_{j=1}^n B_j H^{-b_j},$$ where
$A_i, B_j, a_i, b_j$ are positive, and that $H_1 \leqslant H_2$. Then, there is some $H$ with $H_1 \leqslant H \leqslant H_2$ such that
$$
L(H) \ll \sum_{i=1}^m \sum_{j=1}^n \left(A_i^{b_j} B_j^{a_i}\right)^{1/(a_i+b_j)} + \sum_{i=1}^m A_i H_1^{a_i} + \sum_{j=1}^n B_j H_2^{-b_j},
$$
where the implied constant depends only on $m$ and $n$.
\end{lem}
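To illustrate how Lemma \ref{balancing} is typically applied (the following simple special case is ours and is not taken from \cite{GraKol}): when $m=n=1$ and the choice $H=(B_1/A_1)^{1/(a_1+b_1)}$ lies in $[H_1,H_2]$, it balances the two terms and gives
$$
L(H) = A_1 H^{a_1} + B_1 H^{-b_1} = 2\left(A_1^{b_1} B_1^{a_1}\right)^{1/(a_1+b_1)},
$$
which is precisely the shape of the first sum in the conclusion of the lemma.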
\subsection{Cubic Characters}
The ring of integers $\ring$ of $K$ has class number one and six units $\left\{ \pm 1, \pm \omega, \pm \omega^2 \right\}$. Each non-trivial principal ideal $\mathfrak{n}$ co-prime to $3$ has a unique generator $n \equiv 1 \mod 3$.
The cubic Dirichlet characters on ${\mathbb Z}} \def\R{{\mathbb R}} \def\F{{\mathbb F}} \def\N{{\mathrm N}}\def\C{{\mathbb C}} \def\Q{{\mathbb Q}} \def\P{{\mathbb P}[\omega]$ are given by the cubic residue symbols. For each prime $\pi \in \ring$ with $\pi \equiv 1 \mod 3$, there are two primitive characters of conductor $(\pi)$; the cubic residue character $\chi_\pi$ satisfying
\eqs{\chi_\pi(\alpha) = \left( \frac{\alpha}{\pi}\right)_3 \equiv \alpha^{(\N(\pi)-1)/3} \mod \pi\ring,}
and its conjugate $\overline{\chi}_\pi= \chi_\pi^2$.
In general, for $n\in\ring$ with $n \equiv 1 \mod 3$, the cubic residue symbol $\chi_n$ is defined multiplicatively using the characters of prime conductor by
\eqs{\chi_n (\alpha) = \left( \frac{\alpha}{n} \right)_3 = \prod_{\pi^{v_\pi} \| n} \chi_\pi (\alpha)^{v_\pi} .}
Such a character $\chi_n$ is primitive when it is a product of characters of distinct prime conductors, i.e. either $\chi_\pi$ or $\overline \chi_\pi = \chi_{\pi}^2 = \chi_{\pi^2}$.
Moreover, $\chi_n$ is a (cubic) \emph{Hecke} character of conductor $n \ring$ if $\chi_n(\omega) = 1$.
Since
\eqs{\left( \frac{\omega}{n}\right)_3 = \prod_{\pi \mid n} \omega^{v_\pi(n) (\N(\pi)-1)/3} = \omega^{\sum_{\pi \mid n} v_\pi(n) (\N(\pi)-1)/3} = \omega^{(\N(n) - 1)/3},}
we conclude that a given Dirichlet character $\chi$ is a primitive cubic Hecke character provided that $\chi = \chi_n$, where
\begin{enumerate}
\item $n=n_1 n_2^2$, where $n_1, n_2$ are square-free and co-prime, and
\item $\N(n) \equiv 1 \mod 9$, or equivalently, $\N(n_1) \equiv \N(n_2) \mod 9$.
\end{enumerate}
In this case, $\chi$ has conductor $n_1 n_2\ring$.
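For completeness, the equivalence in the second condition above can be checked directly: $\N(n_2) \equiv 1 \mod 3$, hence $\N(n_2)^3 \equiv 1 \mod 9$ and $\N(n_2)$ is invertible modulo $9$, so that
$$
\N(n) = \N(n_1)\,\N(n_2)^2 \equiv 1 \mod 9
\;\Longleftrightarrow\;
\N(n_1)\,\N(n_2)^2 \equiv \N(n_2)^3 \mod 9
\;\Longleftrightarrow\;
\N(n_1) \equiv \N(n_2) \mod 9.
$$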
We recall the cubic reciprocity theorem (cf. \cite[page 114, Theorem 1]{IR}) for cubic characters. Let $m,n \in \ring$ be relatively prime with $m,n \equiv \pm 1 \mod 3.$ Then,
$$
\left( \frac{m}{n}\right)_3 = \left( \frac{n}{m}\right)_3.
$$
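As a purely illustrative aside (not part of the argument), the defining congruence $\chi_\pi(\alpha)\equiv \alpha^{(\N(\pi)-1)/3} \mod \pi\ring$ can be evaluated numerically for a degree-one prime $\pi$. The following Python sketch represents an Eisenstein integer $a+b\omega$ as the pair $(a,b)$ and returns the exponent $k$ with $\chi_\pi(\alpha)=\omega^k$; the choice $\pi = 3+\omega$ (of norm $7$) and $\alpha = 2$ at the end is only an example.
\begin{verbatim}
# Sketch: cubic residue character chi_pi(alpha) = omega^k for a degree-one
# prime pi of Z[omega], via alpha^((N(pi)-1)/3) mod pi.  Here omega^2 = -1-omega,
# and an Eisenstein integer a + b*omega is stored as the pair (a, b).

def mul(x, y, p=None):
    (a, b), (c, d) = x, y
    re, im = a*c - b*d, a*d + b*c - b*d        # (a+bw)(c+dw) with w^2 = -1-w
    return (re % p, im % p) if p else (re, im)

def norm(x):
    a, b = x
    return a*a - a*b + b*b

def power_mod(x, e, p):
    result, base = (1, 0), (x[0] % p, x[1] % p)
    while e:
        if e & 1:
            result = mul(result, base, p)
        base = mul(base, base, p)
        e >>= 1
    return result

def divides(pi, x):
    # pi | x in Z[omega] iff both coordinates of x*conj(pi) are divisible by N(pi)
    a, b = pi
    u, v = mul(x, (a - b, -b))                 # conj(a + b*w) = (a - b) - b*w
    n = norm(pi)
    return u % n == 0 and v % n == 0

def cubic_character(alpha, pi):
    p = norm(pi)                               # assumed rational prime, p = 1 mod 3
    beta = power_mod(alpha, (p - 1) // 3, p)
    for k, root in enumerate([(1, 0), (0, 1), (-1, -1)]):   # 1, omega, omega^2
        if divides(pi, (beta[0] - root[0], beta[1] - root[1])):
            return k
    raise ValueError("alpha is not coprime to pi")

# Example: pi = 3 + omega has norm 7; 2 is not a cube mod 7 and chi_pi(2) = omega.
print(cubic_character((2, 0), (3, 1)))         # prints 1, i.e. omega
\end{verbatim}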
\subsection{The family $\mathscr F} \def\sD{\mathscr D} \def\sK{\mathscr K} \def\sL{\mathscr L} \def\sl{\mathscr l} \def\sP{\mathscr P$ of cubic Dirichlet characters}
We shall consider the family $\mathscr F$ of cubic characters $\chi_{c_1}\overline{\chi_{c_2}}$ given by cubic residue symbols, where $c_1, c_2 \equiv 1 \mod 3$ in ${\mathbb Z}[\omega]$ are square-free and co-prime, and $c_1c_2^2 \equiv 1 \mod 9$. We naturally exclude the case $c_1=c_2=1$. By a slight abuse of notation (dropping the letter $\chi$), we shall write
\bsc{ \label{family}
\mathscr F &= \Big\{ c_1c_2^2 \in \ring\setminus\{1\} : \begin{array}{l} c_1, c_2 \equiv 1 \mod 3 \text{ both square-free},\\
(c_1,c_2)=1, \;c_1c_2^2 \equiv 1 \mod 9
\end{array} \Big\} \\
&= \Big\{ qd \in \ring\setminus\{1\} : \begin{array}{l}
q, d \equiv 1 \mod 3, q \text{ square-free},\\
d \mid q, \;qd \equiv 1 \mod 9
\end{array} \Big\}.
}
Note that for $c=qd \in \mathscr F$, the conductor of the character $\chi_{qd}$ is $q\ring$. We shall write $\cond{c}$ for the norm of the conductor of $\chi_c$, and $c \in \mathscr F(X)$ to mean that $c\in \mathscr F$ and $\cond c \leqslant X$.
\subsection{Cubic Gauss Sums}
For any $n\equiv 1 \mod 3$, the shifted cubic Gauss sum is defined by
\eqn{ \label{ShiftedGaussSum}
g(r,n) = \sum_{\alpha \mod n} \chi_n (\alpha) \e \bigl(\tr( r \alpha/n)\bigr).}
\section{Second Moment}
By the approximate functional equation of $L(s,\chi)$ for $s=1/2$ we have (cf., for example, \cite[Theorem 5.3]{IwaKow}) that
\bsc{\label{approxfnceqn}
L(1/2,\chi_c) &= \sum_{r \ge 0} 3^{-r/2} \sum_{a \equiv 1 \mod 3} \frac{\chi_c (a)}{\N(a)^{1/2}} V \biggl( \frac {3^{r}\N(a)} Y \biggr) \\
&\quad + \frac{W(\chi_c)}{\sqrt{\cond{c}}}
\sum_{r \ge 0} 3^{-r/2} \sum_{a \equiv 1 \mod 3} \frac{\overline{\chi_c (a)}}{\N(a)^{1/2}} V \biggl( \frac {3^{r}\N(a)Y} {3\cond{c}} \biggr),}
where
\eqs{
V (y) = \frac 1 {2\pi i} \int_{2-i\infty}^{2+i\infty} (2 \pi y)^{-u} \frac{\Gamma(1/2+u)}{\Gamma(1/2)} \frac{du}{u}.
}
\begin{rem}
For $0< \alpha \leqslant 1/2-{\varepsilon}$ with ${\varepsilon} < 1/2$, and $A > 0$, it follows from Stirling's formula for the Gamma function (see, for example, \cite[Eqn. 5.112]{IwaKow}) by shifting the contour to $\re u = -\alpha$ and to $\re u = A$, respectively, that
\bsc{\label{Vbound}
y^a \frac{d^aV}{dy^a} (y) &= \delta_{0,a} + O_{{\varepsilon}, \alpha} ( y^\alpha) \\
y^a \frac{d^aV}{dy^a} (y) &\ll_A y^{-A}
}
for any integer $a \ge 0$, where $\delta_{0,a}$ is the Kronecker delta function.
\end{rem}
We choose $Y = X^{1/2}$ in this section. By the Cauchy--Schwarz inequality
\eqs{|L(1/2,\chi_c)|^2 \leqslant \frac {2\sqrt 3} {\sqrt 3 - 1}
\sum_{r \ge 0} 3^{-r/2} \Bigl( \big| \Sigma_1 \big |^2 + \big | \Sigma_2 \big|^2\Bigr)
}
where
\bs{\Sigma_1 &= \sum_{a \equiv 1 \mod 3} \frac{\chi_c (a)}{\N(a)^{1/2}} V \biggl( \frac {3^{r}\N(a)} Y \biggr) \\
\Sigma_2 &= \sum_{a \equiv 1 \mod 3} \frac{\overline{\chi_c (a)}}{\N(a)^{1/2}} V \biggl( \frac {3^{r}\N(a)Y} {3\cond{c}} \biggr).
}
Using partial integration and the Cauchy--Schwarz inequality again we have that
\bsc{\label{2ndmomentsum1}
\big| \Sigma_1 \big |^2
&\leqslant
\int_{1^-}^\infty \bigg|\frac {3^r} Y V' \biggl( \frac {3^r z} Y \biggr) \bigg| dz \cdot \int_{1^-}^\infty \bigg|\frac {3^r} Y V' \biggl( \frac {3^r z} Y \biggr) \bigg| \bigg| \su{a \equiv 1 \mod 3\\ \N(a) \leqslant z} \frac{\chi_c (a)}{\N(a)^{1/2}} \bigg|^2 dz \\
&\ll \log Y \biggl(\int_1^Y z^{-1} + \int_Y^\infty \frac{Y^2}{3^{2r}z^3} \biggr)
\bigg| \su{a \equiv 1 \mod 3\\ \N(a) \leqslant z} \frac{\chi_c (a)}{\N(a)^{1/2}} \bigg|^2 dz,
}
where we used the bounds $yV'(y) \ll 1$ when $z \leqslant Y= \sqrt X$, and $yV'(y) \ll y^{-2}$ for $z > \sqrt X$.
Writing $a= a_1a_2^2$ with $a_i \equiv 1 \mod 3$ and $a_1$ square-free, and using Cauchy's inequality shows that
\bs{\bigg| \su{a \equiv 1 \mod 3\\ \N(a) \leqslant z} \frac{\chi_c (a)}{\N(a)^{1/2}} \bigg|^2
&=
\bigg|\su{a_2 \equiv 1 \mod 3\\ \N(a_2) \leqslant \sqrt z} \frac{\chi_c (a_2^2)}{\N(a_2)}
\ps{a_1 \equiv 1 \mod 3\\ \N(a_1a_2^2) \leqslant z} \frac{\chi_c (a_1)}{\N(a_1)^{1/2}}\bigg|^2 \\
& \ll \log z \su{a_2 \equiv 1 \mod 3\\ \N(a_2) \leqslant \sqrt z} \frac 1 {\N(a_2)} \bigg| \quad \ps{a_1 \equiv 1 \mod 3\\ \N(a_1a_2^2) \leqslant z} \frac{\chi_c (a_1)}{\N(a_1)^{1/2}}\bigg|^2.
}
Here and in what follows, $\Sigma^\ast$ indicates summation over square-free integers.
Note that if $A(c) \ge 0$ for all $c\in \mathscr F(X)$, removing the conditions $(c_1, c_2)=1$ and $c_1c_2^2 \equiv 1 \mod 9$ on each $c=c_1c_2^2$, it follows that
\bs{
\su{c \in \mathscr F(X)} A(c)
&\leqslant \ps{c_1 \equiv 1 \mod 3\\ \N(c_1) \leqslant \sqrt X}\; \ps{c_2 \equiv 1 \mod 3\\ \N(c_1c_2) \leqslant X} A (c_1c_2^2) + \ps{c_2 \equiv 1 \mod 3\\ \N(c_2) \leqslant \sqrt X}\; \ps{c_1 \equiv 1 \mod 3\\ \N(c_1c_2) \leqslant X} A (c_1c_2^2).
}
Applying this idea with
\eqs{A(c) = \bigg| \quad \ps{a_1 \equiv 1 \mod 3\\ \N(a_1a_2^2) \leqslant z} \frac{\chi_c (a_1)}{\N(a_1)^{1/2}}\bigg|^2}
and using the cubic large sieve inequality \cite[Theorem 2]{HB2000}, we see that
\bsc {\label{2ndmomentsieve}
\su{c \in \mathscr F(X)} \bigg| \su{a \equiv 1 \mod 3\\ \N(a) \leqslant z} \frac{\chi_c (a)}{\N(a)^{1/2}} \bigg|^2
&\ll X^{1+{\varepsilon}}z^{\varepsilon} + z^{1+{\varepsilon}} X^{1/2+{\varepsilon}} + X^{5/6+{\varepsilon}} z^{2/3+{\varepsilon}}.
}
Thus, summing \eqref{2ndmomentsum1} over $c$ and using the last estimate \eqref{2ndmomentsieve} gives that
\bs{\sum_c \big| \Sigma_1 \big |^2
&\ll X^{1+{\varepsilon}} Y^{2{\varepsilon}} + Y^{1+2{\varepsilon}} X^{1/2+{\varepsilon}} + X^{5/6+{\varepsilon}} Y^{2/3+2{\varepsilon}} \ll X^{7/6+2{\varepsilon}}.
}
Similarly we have
\bs {
\big| \Sigma_2 \big |^2
&\ll \log X \biggl(\int_1^{\sqrt X} \frac 1 z + \int_{\sqrt X}^\infty \frac {X^2}{z^3Y^2} \biggr) \bigg| \su{a \equiv 1 \mod 3\\ \N(a) \leqslant z} \frac{\chi_c (a^2)}{\N(a)^{1/2}} \bigg|^2 dz
. }
By summing over $c$ and applying \eqref{2ndmomentsieve} once more, we arrive at the same estimate of $X^{7/6+2{\varepsilon}}$.
Therefore, the claim \eqref{secondmoment} in Theorem \ref{thm:moments} follows.
\section{First Moment}
We write the first moment using \eqref{approxfnceqn} as
\eqs{
\sum_{c \in \mathscr F(X)} L(1/2, \chi_c) = S_1 + S_2 + S_3,}
where
\bs {
S_1 &= \sum_{r \ge 0} 3^{-r/2} \su{a \equiv 1 \mod 3\\ a = \cube} \frac 1 {\N(a)^{1/2}} V \biggl( \frac {3^{r}\N(a)} Y \biggr) \su{c \in \mathscr F(X)\\ (c,a)=1} 1 \\
S_2 &= \sum_{r \ge 0} 3^{-r/2} \su{a \equiv 1 \mod 3\\ a \neq \cube} \frac 1 {\N(a)^{1/2}} V \biggl( \frac {3^{r}\N(a)} Y \biggr) \su{c \in \mathscr F(X)\\ (c,a)=1} \chi_{a} (c) \\
S_3 &= \sum_{r \ge 0} 3^{-r/2} \sum_{a \equiv 1 \mod 3} \frac 1 {\N(a)^{1/2}}
\sum_{c \in \mathscr F(X)} \frac{\chi_c (a^2) W(\chi_c)}{\sqrt{\cond{c}}} V \biggl( \frac {3^{r}Y\N(a)} {3\cond{c}} \biggr) . }
\subsection{The main term $S_1$}
First recalling that $V(y) = 1 + O(y^{1/6-{\varepsilon}})$ (see \eqref{Vbound}) and trivially estimating the sum over $c$ by $X\log X$ we see that
\eqs{
S_1 = \sum_{r \ge 0} 3^{-r/2} \su{a \equiv 1 \mod 3\\ a = \cube} \frac 1 {\N(a)^{1/2}} \su{c \in \mathscr F(X)\\ (c,a)=1} 1 + O (X^{1+{\varepsilon}} Y^{-1/6}).
}
Then, one can proceed as in \cite[Lemma 3.1]{DG} to get the main terms with an admissible error. However, to obtain a better error term, we continue as follows.
Next, removing the conditions $(c_1, c_2) =1$, $c_1c_2^2 \equiv 1 \mod 9$ and the square-freeness of $c_1, c_2$, we can write
\bs{\su{c \in \mathscr F(X)\\ (c,a)=1} 1
&= \frac 1 {h_{(9)}} \sum_{\psi \mod 9}
\su{d \equiv 1 \mod 3\\ (d, a)=1\\ \N(d)^2 \leqslant X} \mu_K(d) \psi (d^3)
\su{e_1 \equiv 1\mod 3\\ (e_1,ad)=1\\ \N(de_1)^2 \leqslant X} \mu_K (e_1) \psi (e_1^2) \\
& \quad \cdot
\su{e_2 \equiv 1\mod 3\\ (e_2, ad)=1\\ \N(de_1e_2)^2 \leqslant X} \mu_K (e_2) \psi (e_2^4) \su{c_1 \equiv 1 \mod 3\\ (c_1, ad)=1} \psi (c_1)
\su{c_2 \equiv 1 \mod 3\\ (c_2, ad)=1\\ \N(c_1c_2) \leqslant W} \psi (c_2^2),
}
where $W= X/\N(de_1e_2)^2$. Applying Perron's formula, the sums over $c_1, c_2$ then become
\eqs{\int_{\varrho-iT}^{\varrho+iT} W^s G_\psi (s) L(s,\psi) L(s,\psi^2) \frac{ds} s + O ( W^{\varepsilon} + W^{1+{\varepsilon}}T^{-1} ),
}
where $\varrho = 1 + 1/\log (2W), T \in [1,W]$ and
\eqs{G_\psi(s) = \prod_{\pi \mid ad} \Bigl(1-\frac{\psi(\pi)}{\N(\pi)^s} \Bigr)\Bigl(1-\frac{\psi^2(\pi)}{\N(\pi)^s} \Bigr)
.
}
Moving the contour to $\sigma = {\varepsilon}$ we pick up the residue at $s=1$ coming from the double pole of $L(s,\psi)L(s,\psi^2)$ when $\psi = 1$, and using the classical convexity bound $L(s,\psi) \ll {\varepsilon}^{-1} (\cond{\psi}(1+|t|))^{1+{\varepsilon}-\sigma}$ for $-{\varepsilon} \leqslant \sigma \leqslant 1+ {\varepsilon}$, the horizontal and the vertical integrals are
\eqs{ \ll W^{\varepsilon} \prod_{\pi \mid ad} (1+\N(\pi)^{-{\varepsilon}}) \bigl(W T^{-1} + W^{{\varepsilon}} T^{2- 2{\varepsilon}}\bigr).}
Choosing $T = W^{\frac{1-{\varepsilon}}{3-2{\varepsilon}}}$ shows that the double sum over $c_1, c_2$ equals
\eqs{R + O \Bigl( \prod_{\pi \mid ad} (1+\N(\pi)^{-{\varepsilon}}) W^{2/3 + 2 {\varepsilon}} \Bigr),}
where the residue $R$ at $s=1$ is given by
\bs{
R &= \lim_{s\to 1} (W^s s^{-1} G(s) f^2 (s) )'\\
&= A G(1) W \log W - A G(1) W + A G'(1) W + B G(1) W \\
&= A G(1) W \log X - W C.
}
Here we wrote $G$ for $G_1$ and
\bs{f (s) &= (s-1) L(s,1) = (s-1) (1-3^{-s})\zeta_K (s), \\
A &= \lim_{s\to 1} f^2 (s) = \frac{4\pi}{27\sqrt{3}} , \qquad B = \lim_{s\to 1} (f^2 (s))' \\
C&= A G(1) \log \N(de_1e_2)^2 + A G(1) - A G'(1) - B G(1) .}
Therefore, we have shown so far that
\bs{\su{c \in \mathscr F(X)\\ (c,a)=1} 1 &=
\frac 1 {h_{(9)}} \su{d \equiv 1 \mod 3\\ (d, a)=1\\ \N(d)^2 \leqslant X} \mu_K(d) \su{e_1 \equiv 1\mod 3\\ (e_1,ad)=1\\ \N(de_1)^2 \leqslant X} \mu_K (e_1) \su{e_2 \equiv 1\mod 3\\ (e_2, ad)=1\\ \N(de_1e_2)^2 \leqslant X} \mu_K (e_2) \\
&\quad \cdot \Bigl( A G(1) W \log X - W C + O \Bigl( \prod_{\pi \mid ad} (1+\N(\pi)^{-{\varepsilon}}) W^{2/3 + 2 {\varepsilon}} \Bigr) \Bigr).
}
Completing the sums over $d, e_1, e_2$ introduces an error $\ll \log \N(a) X^{1/2} \log^3 X$ since
\eqn{\label{GG'}
G'(1) \ll \log \N(ad), \qquad G(1) \leqslant 1,}
and we obtain
\bs{\su{c \in \mathscr F(X)\\ (c,a)=1} 1
& = \frac {AX\log X} {h_{(9)}} \prod_{\pi \equiv 1 \mod 3} \Bigl(1 - \frac 3 {\N(\pi)^2} + \frac 2 {\N(\pi)^3} \Bigr) \pr{\pi \equiv 1 \mod 3 \\ \pi \mid a} \frac {\N(\pi)} {\N(\pi) + 2} \\
& + E(a) X + O\Bigl( \prod_{\pi \mid a} (1+\N(\pi)^{-{\varepsilon}}) X ^{2/3 + 2{\varepsilon}} \Bigr), \\
}
where
\bs{
E(a) & = -\frac 1 {h_{(9)}} \su{d \equiv 1 \mod 3\\ (d, a)=1} \frac{\mu_K(d)}{\N(d)^2} \su{e_1 \equiv 1\mod 3\\ (e_1,ad)=1} \frac{\mu_K (e_1)}{\N(e_1)^2} \su{e_2 \equiv 1\mod 3\\ (e_2, ad)=1} \frac {\mu_K (e_2)} {\N(e_2)^2} C \\
& \ll \log \N(a),
}
where the inequality follows by \eqref{GG'}. Now, recalling that $a = \cube$, it follows that
\eqn{\label{S1}
S_1 = DX\log X + EX + O \bigl( X^{1+{\varepsilon}} Y^{-1/6} + X ^{2/3 + 2{\varepsilon}} \bigr),
}
where
\bsc{\label{AsymptoticConstant}
D &= \frac{4\pi}{81(\sqrt{3}-1)} \\
&\qquad \cdot \prod_{\pi \equiv 1 \mod 3} \Bigl(1 - \frac 3 {\N(\pi)^2} + \frac 2 {\N(\pi)^3} \Bigr) \Bigl(1 + \frac {\N(\pi)} {(\N(\pi) + 2)(\N(\pi)^{3/2}-1)} \Bigr), \\
E &= \frac 1 {1-1/\sqrt 3} \su{a \equiv 1 \mod 3\\ a = \cube} \frac {E(a)} {\N(a)^{1/2}}
.}
\subsection{Estimate of $S_2$}
Note that
\bsc{\label{S2integral}
&\su{a \equiv 1 \mod 3\\ a \neq \cube} \frac 1 {\N(a)^{1/2}} V \biggl( \frac {3^{r}\N(a)} Y \biggr) \su{c \in \mathscr F(X)\\ (c,a)=1} \chi_{a} (c) \\
&= - \int_{1^-}^\infty \biggl( \su{a \equiv 1 \mod 3\\ a \neq \cube\\ \N(a) \leqslant z} \frac 1 {\N(a)^{1/2}} \su{c \in \mathscr F(X)\\ (c,a)=1} \chi_{a} (c) \biggr)
\frac{3^r} Y V' \biggl( \frac {3^r z} Y \biggr) dz.
}
First consider the sums
\bs{
\su{c \in \mathscr F\\ \cond c \sim y} \chi_{a} (c) &= \frac 1 {h_{(9)}} \sum_{\psi \mod 9} \su{d \equiv 1 \mod 3\\ (d, a)=1\\ \N(d) \leqslant y^{1/2}
} \mu_K(d) \psi (d^3) \\
& \quad \cdot
\ps{c_1 \equiv 1 \mod 3\\ (c_1, ad)=1} (\chi_a \psi) (c_1)
\ps{c_2 \equiv 1 \mod 3\\ (c_2, ad)=1\\ \N(c_1c_2) \sim W} (\chi_a \psi) (c_2^2)
}
for $y=X2^{-k} \ge 1$ with $k \ge 1$, where $W = y/\N(d)^2$. It will be enough to estimate the sums
\eqn{\label{S2dyadic}
\su{a \equiv 1 \mod 3\\ (a,d) = 1 \\ a \neq \cube\\ \N(a) \sim z} \frac 1 {\N(a)^{1/2}} \ps{c_1 \equiv 1 \mod 3\\ (c_1,ad) = 1\\ \N(c_1) \sim U} (\chi_a \psi) (c_1)
\ps{c_2 \equiv 1 \mod 3\\ \N(c_2) \sim V \\ (c_2,ad) = 1\\ \N(c_1c_2) \sim W} (\chi_a \psi) (c_2^2)
}
for a fixed $d$ with $\N(d) \in [1,\sqrt y]$ and $1 \leqslant U = W2^{-i}, V=W2^{-j}$ with $i, j \ge 1$ satisfying $W/4 < UV < 2W$. Applying Perron's formula with $T = W$ for the sums over $c_1$ and $c_2$ and summing over $a$, we see that \eqref{S2dyadic} is bounded by
\eqn{
\label{S2AfterPerron}
W \log W \sup_t \su{a \equiv 1 \mod 3\\ (a,d) = 1 \\ a \neq \cube \\ \N(a) \sim z} \frac 1 {\N(a)^{1/2}} \big|\Sigma_1 (U) \Sigma_2 (V) \big|
+ O\bigl( W^{\varepsilon} z^{1/2} \bigr),
}
where
\eqs{
\Sigma_1 (U) = \ps{c_1 \equiv 1 \mod 3\\ (c_1,ad) = 1\\ \N(c_1) \sim U} \frac{(\chi_a \psi) (c_1)}{\N(c_1)^s}, \qquad
\Sigma_2 (V) = \ps{c_2 \equiv 1 \mod 3\\ (c_2,ad) = 1\\ \N(c_2) \sim V} \frac{(\chi_a \psi) (c_2^2)}{\N(c_2)^s},
}
and $s= 1 + 1/\log 2W + it$. Without loss of generality, we can assume that $U \leqslant V$ so that $V \gg W^{1/2}$. Removing the square-free condition on $c_2$, we have that
\bs{\Sigma_2 (V) &= \su{e \equiv 1 \mod 3\\ (e,ad) = 1\\ \N(e) \leqslant V^{1/2} } \frac{\mu_K(e) (\chi_a \psi) (e^4)}{\N(e)^{2s}} \su{c_2 \equiv 1 \mod 3\\ (c_2,ad) = 1\\ \N(c_2e^2) \sim V} \frac{(\chi_a \psi) (c_2^2)}{\N(c_2)^s}.
}
For the inner sum, since $a \neq \cube$, we use the estimate
\eqs{\su{c_2 \equiv 1 \mod 3\\ (c_2,ad) = 1\\ \N(c_2e^2) \sim V} \frac{(\chi_a \psi) (c_2^2)}{\N(c_2)^s} \ll (V/\N(e)^2)^{-1/2+{\varepsilon}} \N(a)^{1/4} \N(ad)^{\varepsilon},
}
which follows easily using Perron's formula and classical convexity bound together with partial integration, so that
\eqs{
\Sigma_2 (V) \ll V^{-1/2+{\varepsilon}} \N(a)^{1/4} \N(ad)^{\varepsilon}.
}
Using this result in \eqref{S2AfterPerron} and applying the cubic large sieve inequality for $\Sigma_1(U)$ yields that
\bs{
\eqref{S2AfterPerron}
& \ll \N(d)^{\varepsilon} \bigl(W^{ 1/2+{\varepsilon}} z^{3/4+3{\varepsilon}/2} + W^{3/4+{\varepsilon}} z^{1/4+2{\varepsilon}} + W^{2/3+{\varepsilon}} z^{7/12+{\varepsilon}}\bigr).
}
Summing this over all $\ll \log W$ possible $U$'s and then over $\N(d) \leqslant y^{1/2}$, and finally over $\ll \log X$ possible $y$'s, and inserting the resulting estimates in \eqref{S2integral} we conclude that
\eqn{\label{S2bound}
S_2 \ll X^{ 1/2+{\varepsilon}} Y^{3/4+3{\varepsilon}/2} + X^{3/4+{\varepsilon}} Y^{1/4+2{\varepsilon}} + X^{2/3+{\varepsilon}} Y^{7/12+{\varepsilon}}.
}
\subsection{Estimate of $S_3$}
The contribution of $c$ with $\cond c \leqslant Y$ to $S_3$ is $\ll Y\log Y$. Hence it is enough to consider
\eqs{\sum_{a \equiv 1 \mod 3} \frac 1 {\N(a)^{1/2}}
\su{c \in \mathscr F\\ Y < \cond c \leqslant X} \frac{\chi_c (a^2) W(\chi_c)}{\sqrt{\cond{c}}} V \biggl( \frac {Y\N(a)} {\cond{c}} \biggr).}
Using partial integration twice expresses this sum as
\bsc{\label{S3integral}
& - \int_{1^-}^\infty \frac Y X V' \biggl( \frac {Yz} X \biggr) \su{a \equiv 1 \mod 3\\ \N(a) \leqslant z} \frac 1 {\N(a)^{1/2}} \su{c \in \mathscr F\\ Y < \cond c \leqslant X} \chi_c (a^2) \frac {W(\chi_c)}{\sqrt{\cond{c}}}
dz \\
&- Y \int_{1^-}^\infty \int_Y^X \biggl( \frac 1 {y^2} V' \biggl( \frac {Yz} y \biggr) + \frac {Yz} {y^3} V'' \biggl( \frac {Yz} y \biggr) \biggr) \\
& \qquad \cdot \su{a \equiv 1 \mod 3\\ \N(a) \leqslant z} \frac 1 {\N(a)^{1/2}} \su{c \in \mathscr F\\ Y < \cond c \leqslant y} \chi_c (a^2) \frac {W(\chi_c)}{\sqrt{\cond{c}}} dy dz.
}
First note that for $c = c_1^2c_2 \in \mathscr F$,
\eqs{W(\chi_c) = \chi_{c_1^2} (c_2) \chi_{c_2} (c_1) \overline{W(\chi_{c_1})} W(\chi_{c_2}) = \overline{W(\chi_{c_1})} W(\chi_{c_2}),}
where the second equality follows by cubic reciprocity law. Note also that
\eqs{W(\chi_{c_i}) = \chi_{c_i} (\sqrt{-3}) g(1,c_i),}
where $g(r,d)$ is the shifted Gauss sum defined in \eqref{ShiftedGaussSum}. Hence,
\eqs{W(\chi_c) = \chi_c (\sqrt{-3}) \overline{g(1,c_1)} g(1,c_2) = \overline{g(1,c_1)} g(1,c_2),}
since $\chi_c (\sqrt{-3}) = \chi_{c} (\omega(1-\omega)) = 1$ whenever $c \equiv 1 \mod 9$. Since $(a,c)=1$, it follows from \cite[Lemma 2.7]{DG} that $\chi_{c_1^2}(a^2)\overline{g(1, c_1)} = \overline{g(a,c_1)}$, and $\chi_{c_2}(a^2) g(1,c_2) = g(a,c_2)$. We can also remove the square-free conditions on $c_1, c_2$ using \cite[Lemma 2.8]{DG}. Thus, using ray class characters modulo 9 to remove the condition $c_1^2c_2 \equiv 1 \mod 9$, we derive that
\bs{\su{c \in \mathscr F \cup \{1\} \\ \cond c \sim y} \chi_c (a^2) \frac{W(\chi_c)}{\sqrt{\cond{c}}}
&= \frac 1 {h_{(9)}} \sum_{\psi \mod 9}
\su{c_1 \equiv 1 \mod 3\\ (c_1, a)=1} \psi (c_1^2) \frac{\overline{g(a, c_1)}}{\N(c_1)^{1/2}} \\
& \qquad \cdot
\su{c_2 \equiv 1 \mod 3\\ (c_2,a c_1)=1 \\ \N(c_1c_2) \sim y} \psi (c_2) \frac{g(a,c_2)}{\N(c_2)^{1/2}}.
}
Finally, we remove the condition $(c_1, c_2)=1$ to get
\bs{
&\su{c \in \mathscr F \cup \{1\} \\ \cond c \sim y} \chi_c (a^2) \frac{W(\chi_c)}{\sqrt{\cond{c}}}
= \frac 1 {h_{(9)}} \sum_{\psi \mod 9} \su{d \equiv 1 \mod 3\\ (d, a)=1\\ \N(d) \leqslant \sqrt y} \mu_K(d) \psi (d^3) \\
&\qquad \cdot \su{c_1 \equiv 1 \mod 3\\ (c_1, ad)=1} \psi (c_1^2) \frac{\overline{g(ad, c_1)}}{\N(c_1)^{1/2}}
\su{c_2 \equiv 1 \mod 3\\ (c_2, ad)=1\\ \N(c_1c_2) \sim y/\N(d)^2} \psi(c_2) \frac{g(ad,c_2)}{\N(c_2)^{1/2}}
}
Here, we used \cite[Lemma 2.7]{DG} to first introduce the condition $(d,c_1c_2)=1$, which can be done since otherwise the Gauss sums vanish, and then write $g(a,dc_i)$ as $g(ad, c_i)g(a,d)$, and finally used the fact that $|g(a,d)|^2 = \N(d)$ as $(d,a)=1$.
We shall first estimate
\eqn{\label{S3dyadicaverage}
\su{a \equiv 1 \mod 3\\ (a,d) = 1 \\ \N(a) \sim z} \frac 1 {\N(a)^{1/2}} \su{c_1 \equiv 1 \mod 3\\ (c_1, ad)=1\\ \N(c_1) \sim U} \psi (c_1^2) \frac{\overline{g(ad, c_1)}}{\N(c_1)^{1/2}}
\su{c_2 \equiv 1 \mod 3\\ (c_2, ad)=1\\ \N(c_2) \sim V \\ \N(c_1c_2) \sim W} \psi(c_2) \frac{g(ad,c_2)}{\N(c_2)^{1/2}}
}
for a fixed $d$ with $\N(d) \in [1,\sqrt y]$ and $1 \leqslant U = W2^{-i}, V= W2^{-j}$ satisfying $W/4 < UV < 2W$, where $W= y/\N(d)^2$. Applying Perron's formula with $T = W$, we see that \eqref{S3dyadicaverage} is bounded by
\eqn{
\label{LargeD}
W \log W \sup_t \su{a \equiv 1 \mod 3\\ (a,d) = 1 \\ \N(a) \sim z} \frac 1 {\N(a)^{1/2}} \big|\Sigma_1 (U) \Sigma_2 (V) \big|
+ O\bigl( W^{\varepsilon} z^{1/2} \bigr)
}
where
\eqs{
\Sigma_1 (U) = \su{c_1 \equiv 1 \mod 3\\ (c_1, ad)=1\\ \N(c_1) \sim U} \psi (c_1^2) \frac{\overline{g(ad, c_1)}}{\N(c_1)^{1/2+s}}, \quad
\Sigma_2 (V) = \su{c_2 \equiv 1 \mod 3\\ (c_2, ad)=1\\ \N(c_2) \sim V} \psi(c_2) \frac{g(ad,c_2)}{\N(c_2)^{1/2+s}},
}
and $s= 1 + 1/\log 2W + it$.
By the last equation in the proof of \cite[Proposition 6.2]{DG},
\mult{\su{c \equiv 1 \mod 3\\ (c, r)=1\\ \N(c) \leqslant z} \lambda(c) \frac{g(r, c)}{\N(c)^{1/2}} \\
\ll \N(r_1r_3^\ast)^{\varepsilon} \bigl( z^{5/6}\N(r_1)^{-1/6} + z^{2/3+{\varepsilon}} \N(r_1r_2^2)^{1/6} + z^{1/2+{\varepsilon}} \N(r_1r_2^2)^{1/4} \bigr), }
where $r=r_1r_2^2r_3^3$ with $r_i \equiv 1 \mod 3$, $r_1, r_2$ coprime and square-free and $r_3^\ast$ is the product of primes dividing $r_3$ but not $r_1r_2$. Also, the first error term appears only when $r_2 = 1$. Write $a = a_1a_2^2 a_3^3$ with $a_i \equiv 1\mod 3$, and $a_1, a_2$ square-free and coprime. Then, $ad = r_1r_2^2r_3^3$ with $r_1 = a_1d, r_2 = a_2$ and $r_3 = a_3$, since $(a,d)=1$ and $d$ is square-free. Using partial integration and the above result, we derive that
\eqn{\label{PV}
\Sigma_2 (V) \ll \N(a_1da_3)^{\varepsilon} \Bigl(\frac 1 {V^{1/6} \N(a_1d)^{1/6}} + \frac{\N(da_1a_2^2)^{1/6}}{V^{1/3-{\varepsilon}}} + \frac{\N(da_1a_2^2)^{1/4}}{V^{1/2-{\varepsilon}}} \Bigr) }
where the first term appears only when $a_2 = 1$. Using \eqref{PV} it then follows that the sum over $a$ in \eqref{LargeD} is bounded by
\bs{
& \frac{(z\N(d))^{\varepsilon}}{(z\N(d)V)^{1/6}} \su{a_3 \equiv 1 \mod 3\\ \N(a_3) \leqslant z^{1/3}} \N(a_3)^{-1} \biggl( \; \ps{a_1\equiv 1 \mod 3\\ \N(a_1a_3^3) \sim z} \big| \Sigma_1 (U) \big|^2 \biggr)^{1/2}
+ \frac{(z\N(d))^{1/6+{\varepsilon}} }{V^{1/3-{\varepsilon}}} \\
&
\cdot \sum_{\N(a_3) \leqslant z^{1/3}} \N(a_3)^{-2-2{\varepsilon}} \ps{\N(a_2^2a_3^3) \leqslant z} \frac 1 {\N(a_2)^{1+2{\varepsilon}}} \biggl( \ps{a_1 \equiv 1 \mod 3\\ \N(a_1) \sim z/\N(a_2^2a_3^3)} |\Sigma_1(U)|^2\biggr)^{1/2} \\
& + \frac{(z\N(d))^{1/4+{\varepsilon}} }{V^{1/2-{\varepsilon}}} \sum_{\N(a_3) \leqslant z^{1/3}} \frac 1 {\N(a_3)^{9/4+2{\varepsilon}}} \ps{\N(a_2^2a_3^3) \leqslant z} \frac 1 {\N(a_2)^{1+2{\varepsilon}}} \biggl( \ps{a_1 \equiv 1 \mod 3\\ \N(a_1) \sim z/\N(a_2^2a_3^3)} |\Sigma_1(U)|^2\biggr)^{1/2}.}
Applying the cubic large sieve inequality for each term and assuming that
$U \leqslant V$ shows that \eqref{LargeD} is bounded by
\bs{
& W^{11/12+ {\varepsilon}} z^{-1/6+3{\varepsilon}/2} \N(d)^{-1/6+{\varepsilon}}
+ W^{5/6+{\varepsilon}} z^{1/3+3{\varepsilon}/2} \N(d)^{-1/6+{\varepsilon}}
\\
& + W^{3/4+2{\varepsilon}} \N(d)^{1/6+{\varepsilon}} z^{1/2+3{\varepsilon}/2}
+ W^{3/4+2{\varepsilon}} \N(d)^{ 1/4+{\varepsilon}} z^{1/4+3{\varepsilon}/2} \\
&
+ W^{2/3+{\varepsilon}} \N(d)^{1/6+{\varepsilon}} z^{2/3+3{\varepsilon}/2}
+ W^{2/3+2{\varepsilon}} \N(d)^{ 1/4+{\varepsilon}} z^{7/12+3{\varepsilon}/2}
\\
& + W^{1/2+2{\varepsilon}} \N(d)^{ 1/4+{\varepsilon}} z^{3/4+3{\varepsilon}/2}
+ W^{5/6+2{\varepsilon}} \N(d)^{1/6+{\varepsilon}} z^{1/6+3{\varepsilon}/2} + W^{\varepsilon} z^{1/2} .
}
Summing over the range $\N(d) \leqslant B$ yields the bound
\bsc{\label{S3Smalld}
& y^{11/12+ {\varepsilon}} {z^{-1/6+3{\varepsilon}/2} }
+ y^{5/6+{\varepsilon}} z^{1/3+3{\varepsilon}/2}
+ y^{3/4+2{\varepsilon}} z^{1/2+3{\varepsilon}/2}
\\
&
+ y^{2/3+{\varepsilon}} z^{2/3+3{\varepsilon}/2}
+ y^{1/2+2{\varepsilon}} B^{ 1/4-{\varepsilon}} z^{3/4+3{\varepsilon}/2}
+ y^{\varepsilon} B^{1-2{\varepsilon}} z^{1/2} .
}
Assume now that $B < \N(d) \leqslant y^{1/2}$. Using the Cauchy--Schwarz inequality gives
\eqs{
\su{a \equiv 1 \mod 3\\ (a,d) = 1 \\ \N(a) \sim z} \frac {\big|\Sigma_1 (U) \Sigma_2 (V) \big|} {\N(a)^{1/2}}
\leqslant z^{-1/2}\su{a_2 \equiv 1 \mod 3\\ \N(a_2) \leqslant \sqrt z} \biggl( \; \ps{a_1 \equiv 1 \mod 3\\ \N(a_1a_2^2) \sim z} \big|\Sigma_1 (U) \big|^2 \ps{a_1 \equiv 1 \mod 3\\ \N(a_1a_2^2) \sim z} \big| \Sigma_2 (V) \big|^2 \biggr)^{1/2}.
}
By the cubic large sieve inequality,
\bs{
\ps{a_1 \equiv 1 \mod 3\\ \N(a_1a_2^2) \sim z} \big|\Sigma_1 (U) \big|^2 &\ll
\Bigl(\frac z {U\N(a_2)^2}\Bigr)^{1+{\varepsilon}} + \Bigl(\frac {Uz} {\N(a_2)^2}\Bigr)^{{\varepsilon}} + U^{-1/3+{\varepsilon}} \Bigl(\frac z {\N(a_2)^2}\Bigr)^{2/3+{\varepsilon}}, }
so that
\mult{
\su{a \equiv 1 \mod 3\\ (a,d) = 1 \\ \N(a) \sim z} \frac {\big|\Sigma_1 (U) \Sigma_2 (V) \big|} {\N(a)^{1/2}} \ll W^{{\varepsilon}/2} \biggl(
W^{-1/2} z^{1/2+{\varepsilon}} + U^{-1/2} z^{{\varepsilon}} \\
+ U^{-1/2} z^{1/3+{\varepsilon}} V^{-1/6} + V^{-1/2} z^{{\varepsilon}} + 1 + z^{1/3+ {\varepsilon}} U^{-1/6} V^{-1/2} + W^{-1/6} z^{1/6+{\varepsilon}}
\biggr).
}
Inserting this in \eqref{LargeD} and
assuming $V \leqslant U$ shows that \eqref{S3dyadicaverage} is bounded by
\eqs{
W^{1/2+2{\varepsilon}} z^{1/2+{\varepsilon}} + W^{3/4+2{\varepsilon}} z^{{\varepsilon}} + W^{2/3+2{\varepsilon}} z^{1/3+{\varepsilon}} + W^{1+{\varepsilon}} z^{{\varepsilon}} + z^{1/3+ {\varepsilon}} W^{5/6+{\varepsilon}} .
}
Summing over $B < \N(d) \leqslant \sqrt y$ we get
\multn{\label{S3Larged}
y^{1/2+2{\varepsilon}} z^{1/2+{\varepsilon}} + y^{3/4+2{\varepsilon}} z^{{\varepsilon}} B^{-1/2} + y^{2/3+2{\varepsilon}} z^{1/3+{\varepsilon}} B^{-1/3} \\
+ y^{1+{\varepsilon}} z^{{\varepsilon}} B^{-1} + y^{5/6+{\varepsilon}} z^{1/3+ {\varepsilon}} B^{-2/3}.
}
Combining \eqref{S3Smalld} and \eqref{S3Larged} and balancing terms using Lemma \ref{balancing} with $B \in [1,y^{1/2}]$ shows that for $y \in (Y,X]$,
\eqs{ \su{a \equiv 1 \mod 3\\ \N(a) \leqslant z} \frac 1 {\N(a)^{1/2}} \su{c \in \mathscr F\\ Y < \cond c \leqslant y} \chi_c (a^2) \frac {W(\chi_c)}{\sqrt{\cond{c}}}}
is bounded by
\mult{
y^{11/12+ {\varepsilon}} z^{-1/6+3{\varepsilon}/2} + y^{5/6+{\varepsilon}} z^{1/3+3{\varepsilon}/2} + y^{3/4+2{\varepsilon}} z^{1/2+3{\varepsilon}/2}
\\
+ y^{2/3+{\varepsilon}} z^{2/3+3{\varepsilon}/2} + y^{1/2+2{\varepsilon}} z^{3/4+3{\varepsilon}/2} + y^{3/5+2{\varepsilon}} z^{3/5+2{\varepsilon}} .
}
Using this result in \eqref{S3integral} we derive that
\eqn{\label{S3bound}
S_3
\ll X^{\varepsilon} \bigl(X^{11/12} + X^{7/6} Y^{-1/3}
+ X^{5/4}Y^{-1/2}
+ X^{4/3} Y^{-2/3} + X^{6/5}Y^{-3/5} \bigr),
}
upon redefining ${\varepsilon}$.
\subsection{Gluing the pieces}
Gathering \eqref{S1}, \eqref{S2bound} and \eqref{S3bound} we have shown that
\bs{\sum_{c \in \mathscr F(X)}& L(1/2, \chi_c)
- DX\log X - EX \\
& \ll X^{\varepsilon} \Bigl(X^{11/12} + X^{1/2} Y^{3/4} + X^{3/4} Y^{1/4} + X^{2/3} Y^{7/12} \\
& \quad + X Y^{-1/6} + X^{7/6} Y^{-1/3} + X^{5/4}Y^{-1/2} + X^{4/3} Y^{-2/3} + X^{6/5}Y^{-3/5}\Bigr), }
upon redefining ${\varepsilon}$.
Using Lemma \ref{balancing} with $Y \in [1,X]$ gives the claimed error term in Theorem \ref{thm:moments} and finishes the proof.
\begin{rem}
In the study conducted by Luo in \cite{Luo}, the first and second smoothed moments were examined within a thin subfamily of all cubic Dirichlet characters. In order to estimate the corresponding sum $S_2$, Luo utilized the cubic large sieve inequality for large values of $d$, along with a smoothed version of the P\'olya--Vinogradov inequality for small values of $d$. However, if one considers the first moment without using smoothing, as we have done in our work, it is sufficient to rely solely on the P\'olya--Vinogradov type inequality for the entire range of $1 \leqslant \N(d) \leqslant \sqrt y$.
Regarding the calculation of $S_3$, Luo employed a result concerning the shifted cubic sums, attributed to Patterson and referenced in \cite[equation 7]{Luo}. However, we encountered difficulty in locating this particular estimate in the cited paper. Furthermore, the mentioned estimate does not provide any indication of the dependence on the conductor of the character $\chi_a$, the shift parameter of the Gauss sums. In our previous work \cite{DG}, we addressed this issue. Upon reexamining Luo's case without incorporating the smooth sums, we discovered that it is still possible to obtain the first moment for the thin family, albeit with a slightly worse error term.
\end{rem}
\nocite{*}
\end{document}
|
\begin{document}
\title{AABC: approximate approximate Bayesian computation when simulating a large number of data sets is computationally infeasible}
\author{Erkan O. Buzbas \\
Department of Biology \\ Stanford University, Stanford, CA 94305-5020 USA \\
and\\
Department of Statistical Science \\ University of Idaho, Moscow, ID 84844-1104 USA\\
email: \texttt{[email protected]}\\
and\\
Noah A. Rosenberg \\
Department of Biology \\ Stanford University, Stanford, CA 94305-5020 USA \\
email: \texttt{[email protected]}
}
\maketitle
\begin{center}
\textbf{Abstract}
\end{center}
Approximate Bayesian computation (ABC) methods perform inference on model-specific parameters of mechanistically motivated parametric statistical models
when evaluating likelihoods is difficult.
Central to the success of ABC methods is computationally inexpensive simulation of data sets from the parametric model of interest.
However, when simulating data sets from a model is so computationally expensive that the posterior distribution of parameters cannot be adequately sampled by ABC, inference is not straightforward.
We present ``approximate approximate Bayesian computation'' (AABC), a class of methods that extends simulation-based inference by ABC to models in which simulating data is expensive.
In AABC, we first simulate a \emph{limited number} of data sets that is computationally feasible to simulate from the parametric model.
We use these data sets as fixed background information to inform a non-mechanistic statistical model that approximates the correct parametric model and enables efficient simulation of a large number of data sets by Bayesian resampling methods.
We show that under mild assumptions, the posterior distribution obtained by AABC converges to the posterior distribution obtained by ABC, as the number of data sets simulated from the parametric model and the sample size of the observed data set increase simultaneously.
We illustrate the performance of AABC on a population-genetic model of natural selection, as well as on a model of the admixture history of hybrid populations.
\vspace*{.3in}
\noindent\textsc{Keywords}: {Approximate Bayesian computation, likelihood-free methods, nonparametrics, posterior distribution}
\section{Introduction}\label{Intro}
Stochastic processes motivated by mechanistic considerations enable investigators to capture salient phenomena in modeling natural systems.
Statistical models resulting from these stochastic processes are often parametric, and estimating model-specific parameters---which often have a natural interpretation---is a major aim of data analysis.
Contemporary mechanistic models tend to involve complex stochastic processes, however, and parametric statistical models resulting from these processes lead to computationally intractable likelihood functions.
When likelihood functions are computationally intractable, likelihood-based inference is a challenging problem that has received considerable attention
in the literature \citep{RobertCasella2004, Liu2008}.
\par
When statistical models are known only at the level of the stochastic mechanism generating the data---such as in implicit statistical models \citep{DiggleGratton1984}---explicit evaluation of likelihoods might be impossible.
In these models, standard computational methods that require evaluation of likelihoods up to a proportionality constant (e.g., rejection methods) cannot be used to sample distributions of interest.
However, data sets simulated from the model under a range of parameter values
can be used to assess parameter likelihoods without explicit evaluation \citep{Rubin1984}.
Approximate Bayesian computation (ABC) methods \citep{Tavareetal1997,Beaumontetal2002, Marjorametal2003}
implement this idea in a Bayesian context to sample an {\em approximate} posterior distribution of the parameters.
Intuitively, parameter values producing simulated data sets similar to the observed data set
arise in approximate proportion to their likelihood, and hence, when weighted by prior probabilities, to their posterior probabilities.
\par
\subsection{The ABC literature}
ABC methods have been based on rejection algorithms
\citep{Tavareetal1997, Beaumontetal2002, BlumFrancois2010},
Markov chain Monte Carlo \citep{Beaumont2003, Marjorametal2003, Bortotetal2007, Wegmannetal2009}, and
sequential Monte Carlo \citep{Sissonetal2007, Sissonetal2009, Beaumontetal2009, Tonietal2009}.
Model selection using ABC \citep{Pritchardetal1999, Fagundesetal2007, Grelaudetal2009,
BlumJakobsson2010, Robertetal2011}, the choice of summary statistics when the likelihood is based on
summary statistics instead of the full data \citep{JoyceMarjoram2008, Wegmannetal2009, NunesBalding2010, FearnheadPrangle2012},
and the equivalence of posterior distributions targeted in different ABC methods \citep{Wilkinson2008, Sissonetal2010} have also
been investigated.
\par
ABC methods have had a considerable effect on model-based inference in disciplines that rely on genetic data, particularly data
shaped by diverse evolutionary, demographic, and environmental forces.
Example applications have included problems in the demographic history of populations \citep{Pritchardetal1999, Francoisetal2008, Verduetal2009, BlumJakobsson2010} and species
\citep{Estoupetal2004, PlagnolTavare2004, BecquetPrzeworski2007, Fagundesetal2007, Wilkinsonetal2010},
as well as problems in the evolution of cancer cell lineages \citep{Tavare2005, Siegmundetal2008} and the evolution of protein networks \citep{Ratmannetal2009}.
Other applications outside of genetics have included inference on the physics of stereological extremes \citep{Bortotetal2007},
the ecology of tropical forests \citep{JabotChave2009}, dynamical systems in biology
\citep{Tonietal2009}, and small-world network disease models \citep{Walkeretal2010}.
ABC methods have been reviewed by \citet{MarjoramTavare2006}, \citet{Cornuetetal2008}, \citet{Beaumontetal2009}, \citet{Beaumont2010},
\citet{Csilleryetal2010}, and \citet{Marinetal2011}.
\par
\subsection{A limitation of ABC methods}
An informal categorization of the information available about the likelihood function is helpful
to illustrate the class of models in which ABC methods are most useful.
First, exact inference on the posterior distribution of the parameters is possible only if the likelihood function is analytically available.
Second, if the likelihood function is not analytically available but can be evaluated up to a constant given a parameter value, then standard computational methods such as rejection algorithms can sample the posterior distribution.
In this case, inference is exact up to a Monte Carlo error due to sampling from the posterior.
Third, if the likelihood function cannot be evaluated, but data sets can feasibly be simulated from the model, then ABC methods sample the posterior distribution using approximations on the {\em data space} in addition to a Monte Carlo error due to sampling.
\par
Although ABC methods sample the posterior distribution of parameters without evaluating the likelihood function, they are computationally intensive.
Adequately sampling a posterior distribution of a parameter by ABC requires many random realizations from the prior distribution
of the parameter and the sampling distribution of the data.
Simulating from the prior is straightforward, but the computational cost of simulating a data set from the mechanistic model increases quickly with the complexity and number of stochastic processes involved.
Henceforth, we refer to statistical models in which not only is evaluating the likelihood difficult, but simulating a large number of data sets is also computationally infeasible, as {\em limited-generative} models.
When a model is limited-generative and only a small number of data sets can be simulated from the model, likelihoods cannot be assessed using ABC and hence, the posterior distribution of parameters cannot be adequately sampled.
\par
\subsection{Our contribution}
In this article, we introduce {\em approximate} approximate Bayesian computation (AABC), a class of methods that perform inference on model-specific parameters of limited-generative models when standard ABC methods are computationally infeasible to apply.
In AABC, the idea of assessing the likelihoods approximately using simulated data sets is taken one step further than in ABC.
AABC methods make approximations on the {\em parameter space} and the {\em model space} in addition to standard ABC approximations on the data space.
In conjunction with Bayesian resampling methods, these approximations help us overcome the computational intractability associated with simulating data from a limited-generative model (Figure \ref{fig:1}).
\par
Our key innovation is to condition on a limited number of data sets that can be feasibly simulated from the limited-generative model and to employ a non-mechanistic statistical model to simulate a large number of data sets.
We set up the non-mechanistic model based on empirical distributions of the limited number of data sets simulated from the mechanistic model.
Since the data values from the limited number of simulated data sets are used to construct new random data sets by resampling methods, it is computationally inexpensive to simulate a large number of data sets in AABC.
The AABC approach allows a researcher to allocate a fixed computer time to simulating a limited number of data sets from the limited-generative model,
thus making otherwise challenging likelihood-based inference attainable.
\par
Intuitively, the information conditioned upon by the non-mechanistic model increases with the number of data sets simulated from the mechanistic model, and the expected accuracy of inference obtained by AABC methods increases.
We formalize this intuition by showing that the posterior distribution of parameters obtained by AABC converges to the corresponding posterior distribution obtained by standard ABC, as the sample size of the observed data set and the number of data sets simulated from the limited-generative model increase simultaneously.
\par
\begin{figure}\label{fig:1}
\end{figure}
\par
AABC methods utilize the established machinery of ABC methods in sampling the posterior distribution of the parameters.
Therefore, standard approximations on the data space involved in an ABC method---which facilitate the sampling of the posterior distribution---apply to AABC methods as well.
We now briefly review these approximations in the context of ABC by rejection algorithms.
\section{Review of ABC by rejection algorithms}\label{sec:ABCreview}
To more formally set up the class of problems in which ABC methods are useful, we assume that a parametric model generates observations conditional on parameter $\theta \in \Theta \equiv \mathbf{R}^p,\;p\geq1.$
We let $P_{\theta}$ be the sampling distribution of a data set of $n$ observations independent and identically distributed (IID) from this model.
We denote a random data set by ${\bf x}=(x_1,x_2,...,x_n) \in \mathcal{X},$ where $\mathcal{X}$ is the space in which the data set sits, and the observed data set by ${\bf x}_{\rm obs}.$
In the genetics context, a data point $x_i$
might be a vector denoting the allelic types of a genetic locus at genomic position $i$ in a group of individuals;
the data matrix ${\bf x}$ might then contain genotypes from these individuals in a sample of $n$ independent genetic loci.
\par
Suppose that $P_{\theta}$ is available to the extent that the likelihood function $p({\bf x}_{\rm obs}|\theta)$ can be evaluated up to a constant whose value does not depend on the parameters.
Given a prior distribution $\pi(\theta)$ on parameter $\theta,$ the posterior
distribution of $\theta$ given the observed data ${\bf x}_{\rm obs}$ under the model $P_{\theta}$ is
$\pi(\theta|{\bf x}_{\rm obs}, P_{\theta}).$
Then $\pi(\theta|{\bf x}_{\rm obs}, P_{\theta})$ can be sampled by standard rejection sampling from $p({\bf x}_{\rm obs}|\theta)\pi(\theta),$ a quantity that is proportional to
$\pi(\theta|{\bf x}_{\rm obs}, P_{\theta})$ by Bayes' Theorem.
In principle, sampling $\pi(\theta|{\bf x}_{\rm obs}, P_{\theta})$ without evaluating the
likelihood function $p({\bf x}_{\rm obs}|\theta)$ is possible, if simulating the data from the model $P_{\theta}$ is feasible.
An early example due to Tavar\'e {\em et al.} (1997) samples $\pi(\theta|{\bf x}_{\rm obs},P_{\theta})$
by accepting a value $\theta_i$ simulated from the prior $\pi(\theta)$
only if the data set ${\bf x}_i$ simulated from $P_{\theta_i}$ satisfies ${\bf x}_i={\bf x}_{\rm obs}.$
By standard rejection algorithm arguments, the $\theta_i$ sampled in this fashion are from the correct posterior distribution.
However, the acceptance condition ${\bf x}_i={\bf x}_{\rm obs}$ is rarely satisfied with high-dimensional data.
A first approximation in ABC methods is dimension reduction by substituting the data set ${\bf x}$ with a low-dimensional set of summary statistics $\textrm{\boldmath$s$}.$
The observed data ${\bf x}_{\rm obs}$ and the simulated data ${\bf x}_i$ are substituted by
$\textrm{\boldmath$s$}_{\rm obs}$ and $\textrm{\boldmath$s$}_i,$ calculated from their respective data sets.
This is equivalent to substituting the likelihood function of the data $p({\bf x}|\theta)$ with the likelihood function of the summary statistics $p(\textrm{\boldmath$s$}|\theta).$
Since ABC is most useful in statistical models that do not admit sufficient statistics, dimension
reduction to summary statistics often entails information loss about the parameters.
The choice of summary statistics minimizing this information loss
is an active research area \citep{JoyceMarjoram2008, Wegmannetal2009, Robertetal2011, Aeschbacheretal2012, FearnheadPrangle2012}.
\par
When the data are substituted with summary statistics, the acceptance condition ${\bf x}_i={\bf x}_{\rm obs}$ is substituted by $\textrm{\boldmath$s$}_i=\textrm{\boldmath$s$}_{\rm obs},$ but exact equality may still be too stringent a condition to be satisfied with simulated data.
A second approximation in ABC is to relax the exact acceptance condition with a tolerance acceptance condition.
For example, \citet{Pritchardetal1999} used the Euclidean distance $||\cdot||$ and a small tuning parameter $\epsilon$
to accept a value $\theta_i$ from an approximate posterior distribution if the data set ${\bf x}_i$ simulated from $P_{\theta_i}$ produced $\textrm{\boldmath$s$}_i$ satisfying
\begin{equation}\label{eq:euclideandistance}
||\textrm{\boldmath$s$}_i-\textrm{\boldmath$s$}_{\rm obs}||=\left[\sum_{j=1}^{k}(s_{ij}-s_{oj})^2\right]^{1/2}\leq\epsilon,
\end{equation}
where $\textrm{\boldmath$s$}$ is a $k$-dimensional statistic, and $s_{ij}$ and $s_{oj}$ are the $j$th components of $\textrm{\boldmath$s$}_i$ and $\textrm{\boldmath$s$}_{\rm obs},$ respectively
(see also \citet{WeissVonHaeseler1998} for an application in a pure likelihood inference context).
Distance metrics other than the Euclidean distance, such as the total variation distance \citep{Tavareetal2002}, have also been used.
\par
Substituting the binary accept/reject step in the rejection sampling by weighting $\textrm{\boldmath$s$}_i$ smoothly according to its distance from $\textrm{\boldmath$s$}_{obs}$ using a kernel density ${\rm K}_{\epsilon}(\textrm{\boldmath$s$}_i,\textrm{\boldmath$s$}_{obs})$ with bandwidth $\epsilon$ leads to importance sampling \citep{Wilkinson2008}.
The tolerance condition $||\textrm{\boldmath$s$}_i-\textrm{\boldmath$s$}_{obs}||\leq\epsilon$ in the rejection algorithm of \citet{Pritchardetal1999} then corresponds to using a uniform kernel on an $\epsilon$-ball around $\textrm{\boldmath$s$}_{obs}.$
Other approaches to kernel choice include Epanechnikov \citep{Beaumontetal2002} and Gaussian \citep{LeuenbergerWegmann2010}
kernels.
\par
When the data likelihood is substituted by the likelihood based on the summary statistics and a
tolerance condition with a uniform kernel and the Euclidean distance is used, the posterior distribution
sampled with ABC by rejection is
\begin{equation}\label{eq:2}
\pi_{\epsilon}(\theta|{\bf x}_{obs}, P_{\theta}) =\frac{1}{C_{P_{\theta}}}\int_{\mathcal{X}} \mathbf{I}_{\{||\textrm{\boldmath$s$}-\textrm{\boldmath$s$}_{obs}||<\epsilon\}} p({\bf x}|\theta)\pi(\theta) \;d{\bf x},
\end{equation}
where $\mathbf{I}_{A}$ is an indicator function that takes a value of 1 on set $A$ and is zero otherwise,
and $C_{P_{\theta}}=\int_{\Theta}\int_{\mathcal{X}} \mathbf{I}_{\{||\textrm{\boldmath$s$}-\textrm{\boldmath$s$}_{obs}||<\epsilon\}} p({\bf x}|\theta)\pi(\theta) \;d{\bf x} \;d\theta$ is the normalizing constant.
A standard ABC algorithm that samples $\pi_{\epsilon}(\theta|{\bf x}_{obs}, P_{\theta})$ appears in Figure \ref{fig:2}.
\par
\begin{figure}\label{fig:2}
\end{figure}
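The following minimal Python sketch shows one way this rejection sampler could be organized; the functions \texttt{sample\_prior}, \texttt{simulate\_data}, and \texttt{summary} are hypothetical placeholders for the prior sampler, the mechanistic model $P_{\theta},$ and the summary-statistic map, and are not part of the original algorithm description.
\begin{verbatim}
import numpy as np

def abc_rejection(x_obs, sample_prior, simulate_data, summary, eps, M):
    # Sketch of ABC by rejection: keep theta whenever the summary statistics
    # of a data set simulated from P_theta fall within Euclidean distance
    # eps of the observed summary statistics.
    s_obs = np.asarray(summary(x_obs))
    accepted = []
    for _ in range(M):
        theta = sample_prior()                 # theta_i ~ pi(theta)
        x = simulate_data(theta)               # x_i ~ P_{theta_i}
        s = np.asarray(summary(x))             # reduce to summary statistics
        if np.linalg.norm(s - s_obs) <= eps:   # tolerance condition
            accepted.append(theta)
    return accepted
\end{verbatim}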
\par
The choice of summary statistics, tolerance parameter $\epsilon,$ distance function, and kernel constitute approximations on the data space in ABC methods.
We assume that these standard ABC approximations work reasonably well, and we focus on new modeling approximations on the parameter and model spaces introduced by AABC (Figure \ref{tablo:1}).
\begin{figure}\label{tablo:1}
\end{figure}
\par
\section{Approximate approximate Bayesian computation (AABC)}\label{sec:theory}
Algorithm 1 returns an adequate sample size from the posterior distribution of a parameter if it is iterated a large number of times, $M$.
The set of realizations simulated from the joint distribution of the parameter and the data by steps 1 and 2 of Algorithm 1 is then $\{({\bf x}_1, \theta_1),({\bf x}_2, \theta_2),...,({\bf x}_M, \theta_M)\}.$
AABC methods seek inference on parameter $\theta$ when the model $P_{\theta}$ is limited-generative, and simulating $M$ data sets under $P_{\theta}$ is therefore computationally infeasible.
We thus assume that only a limited number $m$ of data sets ${\bf x}_1,{\bf x}_2,...,{\bf x}_m$ can be obtained by step 2 of Algorithm 1 $(m \ll M)$.
We denote the set of realizations simulated from the joint distribution of the parameter and the data by $\mathcal{Z}_{n,m}=\{({\bf x}_1, \theta_1),({\bf x}_2, \theta_2),...,({\bf x}_m, \theta_m)\},$
where each data set ${\bf x}_i$ of $n$ IID observations is simulated from the model $P_{\theta_i}.$
\par
In AABC, we substitute the joint sampling distribution $P_{\theta}$ of a data set of size $n$ with the joint sampling distribution $Q_{\theta},$ from which simulating
data sets is computationally inexpensive.
In replacing $P_{\theta}$ with $Q_{\theta},$ we require that the posterior distribution $\pi(\theta|{\bf x}_{obs}, Q_{\theta})$ based on the likelihood implied by model $Q_{\theta}$ approximates the posterior distribution $\pi(\theta|{\bf x}_{obs}, P_{\theta})$ based on the likelihood implied by model $P_{\theta}.$
Further, we require that $Q_{\theta}$ can be used with a wide range of $P_{\theta},$ in the sense that $Q_{\theta}$ is constructed without using the details of model $P_{\theta}.$
\par
\subsection{Approximations on the parameter and model spaces due to replacing $P_{\theta}$ with $Q_{\theta}$}\label{subsec:nonparametric}
Two approximations are involved in substituting $P_{\theta}$ with $Q_{\theta}.$
First, $\mathcal{Z}_{n,m}$ includes only $m$ parameter values $\theta_1,\theta_2,...,\theta_m$ under which data sets are simulated from $P_{\theta}$.
After obtaining $\mathcal{Z}_{n,m},$ for any new parameter value $\theta$ from the prior distribution under which we want to simulate a new data set, we substitute $\theta$ with $\tilde{\theta}$ such that
$(\tilde{{\bf x}},\tilde{\theta})\in\mathcal{Z}_{n,m}.$
The value $\tilde{\theta}$ has the minimum Euclidean distance to the value $\theta$ among all parameter values in $\mathcal{Z}_{n,m}.$
More precisely, $\tilde{\theta}=\displaystyle{\mathop{\mbox{arg\;min}}_{\theta_j \in \mathcal{Z}_{n,m}}}||\theta_j-\theta||.$
In essence, this approximation is equivalent to replacing the sampling distribution of the data set $P_{\theta}$ with the sampling distribution $P_{\tilde{\theta}}$; we call this an approximation on the parameter space.
However, this parameter space approximation is not sufficient to simulate data sets efficiently, since the model $P_{\tilde{\theta}}$ is still limited-generative after this substitution.
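As an illustration, this nearest-value substitution can be written in a few lines of Python; here \texttt{thetas} is assumed to be an array whose rows are the parameter values $\theta_1,\theta_2,...,\theta_m$ stored in $\mathcal{Z}_{n,m},$ and the function name is hypothetical.
\begin{verbatim}
import numpy as np

def closest_realization(theta, thetas):
    # thetas: array of shape (m, d) holding the parameter values in Z_{n,m};
    # theta: a new draw from the prior, shape (d,).
    # Returns the index of the value with minimum Euclidean distance to
    # theta, i.e. the tilde-theta of the text.
    dists = np.linalg.norm(thetas - theta, axis=1)
    return int(np.argmin(dists))
\end{verbatim}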
\par
As a second approximation, we substitute the model $P_{\tilde{\theta}}$ with the empirical distribution of the data set $\tilde{{\bf x}}$ that has already been simulated from
$P_{\tilde{\theta}}$ as $(\tilde{{\bf x}},\tilde{\theta})\in\mathcal{Z}_{n,m}.$
Here, we assume a positive probability mass only on the data values observed in the set $\tilde{{\bf x}}.$
We call this an approximation on the model space because the model $P_{\tilde{\theta}}$ is substituted with the empirical distribution of a data set simulated from
$P_{\tilde{\theta}}.$
\par
To simulate a new data set ${\bf x}$ in AABC, we utilize a vector of positive auxiliary parameters $\textrm{\boldmath$\phi$}=(\phi_1,\phi_2,...,\phi_{n}),$ that satisfy $\sum_{i=1}^{n}\phi_i=1.$
We let $\phi_i$ be the probability that a random data value $x_j\in{\bf x}$ is equal to a given value $\tilde{x}_i$ found in the data set $\tilde{{\bf x}}=(\tilde{x}_1,\tilde{x}_2,...,\tilde{x}_n).$
The premise is that the sample $\tilde{{\bf x}}$ simulated under $\tilde{\theta}$ provides information about the model $P_{\tilde{\theta}},$ and by an approximation of $\theta$ to $\tilde{\theta}$ on the parameter space, about $P_{\theta}$.
\par
If we denote the approximate sampling distribution of a data set ${\bf x}=(x_1,x_2,...,x_n)$ by $Q_{\theta},$ its joint probability mass function is
\begin{equation}\label{eq:Q}
\int_{{\Phi}}q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})\pi(\textrm{\boldmath$\phi$}) \;d\textrm{\boldmath$\phi$}\; \mathbf{I}_{\{\theta,\tilde{\theta}\}},
\end{equation}
where $q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})={n \choose n_1 \;n_2\; \cdots\; n_k}\prod_{j=1}^{n}\prod_{i=1}^{n}\phi_i^{\mathbf{I}_{\{x_j=\tilde{x}_i\}}},$ and $\mathbf{I}_{\{\theta,\tilde{\theta}\}}$ is 1 if $\tilde{\theta}\in\mathcal{Z}_{n,m}$ is the closest value to $\theta$ in the Euclidean sense and is 0 otherwise.
Here, $n_i$ is the number of times $\tilde{x}_i$ observed in the new sample ${\bf x},$ $k$ is the number of distinct data values observed in the data set ${\bf x},$ and $\mathbf{I}_{\{x_j=\tilde{x}_i\}}$ is 1 if $x_j=\tilde{x}_i$ and is 0 otherwise.
The distribution $q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})$ is that of an IID sample ${\bf x}=(x_1,x_2,...,x_n),$ where $x_j$ is drawn from the values $(\tilde{x}_1,\tilde{x}_2,...,\tilde{x}_n)$ with probabilities $(\phi_1,\phi_2,...,\phi_n).$
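A minimal Python sketch of one such draw from $Q_{\theta},$ assuming the realization $\tilde{{\bf x}}$ is stored as a NumPy array \texttt{x\_tilde} whose entries (or rows) are the $n$ data values; the symmetric Dirichlet draw corresponds to the prior $\pi(\textrm{\boldmath$\phi$})$ introduced in the next paragraph, and the function name is hypothetical.
\begin{verbatim}
import numpy as np

def simulate_from_Q(x_tilde, rng=None):
    # One draw from Q_theta conditional on the realization x_tilde:
    # draw phi from the symmetric Dirichlet(1,...,1) prior on the
    # (n-1)-simplex, then resample n points from x_tilde with
    # probabilities phi (a Bayesian-bootstrap style resampling).
    if rng is None:
        rng = np.random.default_rng()
    x_tilde = np.asarray(x_tilde)
    n = len(x_tilde)
    phi = rng.dirichlet(np.ones(n))
    idx = rng.choice(n, size=n, replace=True, p=phi)
    return x_tilde[idx]
\end{verbatim}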
\par
The probability vector $\textrm{\boldmath$\phi$}$ is a parameter of the model conditional on $\tilde{{\bf x}},$ and thus, we need to posit a prior distribution on $\textrm{\boldmath$\phi$}.$
As a natural prior on probabilities, we let the prior distribution $\pi(\textrm{\boldmath$\phi$})$ on $\textrm{\boldmath$\phi$}$ be the symmetric Dirichlet distribution on the $(n-1)$-dimensional simplex $\Phi,$ with hyperparameters $(1,1,...,1)$ and a uniform probability density function proportional to $1.$
This choice assigns equal weight to all distributions placing positive probability mass on the data points $\tilde{x}_i\in\tilde{{\bf x}}.$
Further, it assigns zero posterior probability to data values unobserved in the sample $\tilde{{\bf x}},$ thereby avoiding difficulties created by such values in the likelihood \citep{Rubin1981, Owen1990}.
\par
To distinguish the parameter and data set realizations in $\mathcal{Z}_{n,m}=\{({\bf x}_i,\theta_i)\}_{i=1}^{m}$ from the parameter and data sets simulated using AABC, we use starred versions of each quantity to denote specific values simulated in AABC.
For example, as the sampling distribution $P_{\theta_i}$ delivers a data set ${\bf x}_i$ under a given parameter value $\theta_i$ in the ABC procedure of Algorithm 2, the sampling distribution
$Q_{\theta^*_i}$ delivers a data set ${\bf x}^*_i$ under a given parameter value $\theta^*_i$ simulated from its prior distribution (see Figure \ref{fig:11} for notation).
\begin{figure}\label{fig:11}
\end{figure}
\par
The sampling distribution $Q_{\theta}$ utilizes the information available in the set of realizations $\mathcal{Z}_{n,m}$ through the parameter $\textrm{\boldmath$\phi$},$ since the prior distribution of $\textrm{\boldmath$\phi$}$ conditions on $(\tilde{{\bf x}},\tilde{\theta})\in \mathcal{Z}_{n,m}$ and thus on the set $\mathcal{Z}_{n,m}.$
In this sense, the available realizations $\mathcal{Z}_{n,m}$ are used as fixed background information about $P_{\theta},$ and inferences using the substitute model $Q_{\theta}$ are conditional on the simulated sets $\mathcal{Z}_{n,m}.$
\par
\subsection{The posterior distribution of $\theta$ sampled by AABC}\label{sec:posterior}
In sampling the approximate posterior distribution of $\theta$ by AABC methods, we use the two ABC approximations described in Section \ref{sec:ABCreview}.
First, we substitute each data instance ${\bf x}$ with summary statistics $\textrm{\boldmath$s$}.$
Second, we use an acceptance condition with tolerance $\epsilon,$ employing the Euclidean distance to measure the proximity of the summary statistics calculated from the observed and simulated data, as in equation \ref{eq:euclideandistance}.
If we let $\theta^*_j$ be a new parameter value simulated from its prior distribution after obtaining the set $\mathcal{Z}_{n,m},$ in AABC we accept
the parameter values $\theta^*_j$ producing summary statistics $\textrm{\boldmath$s$}^*_j$ that satisfy the condition $||\textrm{\boldmath$s$}^*_j-\textrm{\boldmath$s$}_{obs}||<\epsilon$ as being draws from the posterior distribution.
This acceptance condition corresponds to a uniform kernel, which we use throughout this article, although like ABC, AABC can employ other kernels
to obtain smooth weighting of $\textrm{\boldmath$s$}^*_j$ values by their distance from $\textrm{\boldmath$s$}_{obs}.$
Substituting $P_{\theta}$ with $Q_{\theta}$ involves replacing $p({\bf x}|\theta)$ in expression \ref{eq:2} with expression \ref{eq:Q} and adjusting the normalizing constant accordingly.
The approximate posterior distribution sampled by an AABC method is
\begin{equation}\label{eq:4}
\pi_{\epsilon}(\theta|{\bf x}_{obs},Q_{\theta}) =
\frac{1}{C_{Q_{\theta}}}\int_{\mathcal{X}}\mathbf{I}_{\{||\textrm{\boldmath$s$}-\textrm{\boldmath$s$}_{obs}||<\epsilon\}}\left[\int_{\Phi}q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})\pi(\textrm{\boldmath$\phi$})\;d\textrm{\boldmath$\phi$}\;\mathbf{I}_{\{\theta,\tilde{\theta}\}}\right]\pi(\theta) \;d{\bf x},
\end{equation}
where $C_{Q_{\theta}}=\int_{\Theta}\int_{\mathcal{X}}\mathbf{I}_{\{||\textrm{\boldmath$s$}-\textrm{\boldmath$s$}_{obs}||<\epsilon\}}\left[\int_{\Phi}q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})\pi(\textrm{\boldmath$\phi$})\;d\textrm{\boldmath$\phi$}\;\mathbf{I}_{\{\theta,\tilde{\theta}\}}\right]\pi(\theta) \;d{\bf x} \;d\theta$ is the normalizing constant.
\par
The AABC approach is sensible in that, as the limited-generative model permits an increasingly large number of simulated data sets, the posterior distribution obtained by an AABC method approaches, for large sample sizes, the posterior distribution obtained by an ABC method.
We codify this claim with a theorem.
\par
\vskip 0.5cm
\noindent
{\em Theorem.}
Let $\pi(\theta)$ be a bounded prior on $\theta.$ Let $\pi_{\epsilon}(\theta|{\bf x}_{obs},P_{\theta})$ and $\pi_{\epsilon}(\theta|{\bf x}_{obs},Q_{\theta})$ be the posterior distributions sampled by a standard ABC method and an AABC method, respectively. Then
\begin{equation}\label{eq:theorem}
\lim_{m \rightarrow \infty}\lim_{n\rightarrow \infty}\pi_{\epsilon}(\theta|{\bf x}_{obs},Q_{\theta})=\lim_{n \rightarrow \infty}\pi_{\epsilon}(\theta|{\bf x}_{obs},P_{\theta}).
\end{equation}
A proof of the theorem is given in Appendix 1.
The convergence of the posterior distribution sampled by AABC is a consequence of the fact that, for each given value of $\theta,$ the sampling distribution $\int_{\Phi}q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})\pi(\textrm{\boldmath$\phi$})\;d\textrm{\boldmath$\phi$}\;\mathbf{I}_{\{\theta,\tilde{\theta}\}}$ converges to the true sampling distribution $p({\bf x}|\theta)$ as the sample size $n$ and the number of simulated samples $m$ from $P_{\theta}$ increase.
The intuition for the double limit in equation \ref{eq:theorem} is as follows.
The standard notion of a distribution converging to a point in the parameter space as the sample size $n$ increases does not directly apply to the posterior distribution $\pi_{\epsilon}(\theta|{\bf x}_{obs},Q_{\theta}),$ since this posterior depends not only on the sample size $n,$ but also on the number $m$ of simulated data sets from $P_{\theta}.$
Hence, for convergence of the posterior distribution based on the likelihood of $Q_{\theta},$ the requirement is that both $n\rightarrow \infty$ and $m\rightarrow \infty.$
As $n\rightarrow\infty,$ the empirical distribution converges to $P_{\tilde{\theta}},$ the correct sampling distribution with the incorrect parameter value $\tilde{\theta}.$
As $m\rightarrow \infty,$ the distance between the parameter value $\theta$ under which we want to simulate a new data set and the parameter value $\tilde{\theta}\in\mathcal{Z}_{n,m}$ closest to $\theta$ approaches zero.
Therefore, taking both limits simultaneously results in convergence to the correct sampling distribution $P_{\theta}.$
\par
\subsection{AABC algorithms}\label{subsec:ABCapproximations}
The structure of AABC algorithms sampling the posterior distribution in expression \ref{eq:4} can be conveniently summarized in three parts, as shown in AABC by a rejection algorithm (Figure \ref{fig:3}).
In Algorithm 2, Part I involves obtaining a limited number of realizations
from the joint distribution of the parameter and the data from the limited-generative model $P_{\theta}.$
Part I simply involves the application of steps 1 and 2 from Algorithm 1, but only for $m$ iterations.
Part II involves simulating a new parameter value $\theta^*_i$ from its prior distribution (step 4) and then simulating a data set ${\bf x}^*_i$
from the model $Q_{\theta^*_i}$ (steps 5, 6, 7), conditional on $\mathcal{Z}_{n,m}$ obtained in Part I.
Part III involves comparing the summary statistics $\textrm{\boldmath$s$}^*_i$ calculated from the simulated data set ${\bf x}^*_i$ with the summary statistics $\textrm{\boldmath$s$}_{obs}$ calculated from the observed data set ${\bf x}_{obs},$ to accept or reject the parameter value $\theta^*_i.$
The calculation and comparison of summary statistics follows the same procedure as in steps 3 and 4 of Algorithm 1.
Hence, Part II of AABC by rejection has the novel steps 5, 6, and 7, whereas Parts I and III use the machinery of ABC by rejection from Algorithm 1.
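Putting the pieces together, a hedged Python sketch of Algorithm 2 might look as follows; it reuses the hypothetical helpers \texttt{closest\_realization} and \texttt{simulate\_from\_Q} sketched earlier, and \texttt{Z\_thetas}, \texttt{Z\_data} are assumed to hold the $m$ realizations obtained in Part I.
\begin{verbatim}
import numpy as np

def aabc_rejection(x_obs, Z_thetas, Z_data, sample_prior, summary,
                   eps, M, rng=None):
    # Sketch of AABC by rejection.  Part I (building Z_{n,m}) is assumed
    # done: Z_thetas[i] and Z_data[i] hold the m realizations (theta_i, x_i).
    if rng is None:
        rng = np.random.default_rng()
    s_obs = np.asarray(summary(x_obs))
    accepted = []
    for _ in range(M):
        theta_star = sample_prior()                    # Part II: theta* ~ pi(theta)
        j = closest_realization(theta_star, Z_thetas)  # pick tilde-theta (cf. step 5)
        x_star = simulate_from_Q(Z_data[j], rng)       # draw from Q (cf. steps 6-7)
        s_star = np.asarray(summary(x_star))           # Part III: summary statistics
        if np.linalg.norm(s_star - s_obs) <= eps:      # tolerance condition
            accepted.append(theta_star)
    return accepted
\end{verbatim}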
\par
We can show that Algorithm 2 samples the correct posterior distribution $\pi_\epsilon(\theta|{\bf x}_{obs},Q_{\theta}).$
The probability of sampling a parameter value $\theta$ in Algorithm 2 is proportional to
\begin{align*}
&\sum_{\textrm{\boldmath$s$}}\sum_{\textrm{\boldmath$\phi$}}\pi(\theta)\mathbf{I}_{\{\theta,\tilde{\theta}\}}\pi(\textrm{\boldmath$\phi$})q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})
\mathbf{I}_{\{||\textrm{\boldmath$s$}-\textrm{\boldmath$s$}_{obs}||<\epsilon\}}\\
& = \sum_{\textrm{\boldmath$s$}}\sum_{\textrm{\boldmath$\phi$}}\pi(\theta,\textrm{\boldmath$\phi$})\mathbf{I}_{\{\theta,\tilde{\theta}\}}q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})\mathbf{I}_{\{||\textrm{\boldmath$s$}-\textrm{\boldmath$s$}_{obs}||<\epsilon\}}\\
&\propto \sum_{\textrm{\boldmath$s$}}\sum_{\textrm{\boldmath$\phi$}} \pi(\theta,\textrm{\boldmath$\phi$}|Q_{\theta})\mathbf{I}_{\{||\textrm{\boldmath$s$}-\textrm{\boldmath$s$}_{obs}||<\epsilon\}}\\
& \propto \pi_\epsilon(\theta|{\bf x}_{obs},Q_{\theta}),
\end{align*}
where the third line follows from the fact that the expression on the second line is the product of the likelihood under the model $Q_{\theta}$ and the prior, and therefore it is proportional to the posterior distribution of parameters based on the model $Q_{\theta}.$
\begin{figure}\label{fig:3}
\end{figure}
\section{Applications}
In this section, we investigate the inferential performance of the AABC approach with two examples.
The following simulation setup is used in both examples.
\subsection{Simulation study design}\label{sec:simulation}
We simulated a reference set with $M=10^5$ realizations $\{({\bf x}_1,\theta_1),({\bf x}_2,\theta_2),...,({\bf x}_{10^5},\theta_{10^5})\},$ by first generating $\theta_i\sim \pi(\theta)$ and then simulating a data set ${\bf x}_i\sim P_{\theta_i}.$
We then sampled 1000 pairs $({\bf x}_i,\theta_i)$ from the reference set, uniformly at random without replacement.
Thus, we selected 1000 ``true'' parameter values $\theta_i,$ along with corresponding test data sets ${\bf x}_i$ generated under each value $\theta_i$ from the model $P_{\theta_i}$.
Further, we built the sets $\mathcal{Z}_{n,m},$ with $m=10^2, 5\times 10^2, 10^3, 5\times 10^3, 10^4,5\times 10^4, 10^5$ by sampling the reference set uniformly at random without replacement for $m<10^5,$ and taking all the realizations in the reference set for $m=M=10^5.$
The sample size $n$ of the data is described in each relevant example.
\par
On each test data set, we performed AABC by rejection sampling (Algorithm 2) using each set $\mathcal{Z}_{n,m}.$
In example 1, where our goal is to compare the performance of the AABC and ABC approaches, we performed ABC analyses by rejection sampling (Algorithm 1) using the same sets $\mathcal{Z}_{n,m}.$
For all analyses, we obtained a sample from the joint posterior distribution of the parameter vector $\theta$ by accepting the parameter vector values that generated data whose summary statistics were in the top $1$ percentile with respect to the statistics calculated from the test data set, in the sense of equation \ref{eq:euclideandistance}.
Compared to the approach of fixing the $\epsilon$ cutoff, accepting parameter vectors that generate data whose summary statistics are in a top percentile has the advantage that a desired number of samples from the posterior is always obtained given a total fixed number of proposed parameter values.
This approach is often preferred by ABC practitioners and is convenient in our case for comparing ABC and AABC.
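The percentile-based acceptance rule can be sketched in a few lines of Python; \texttt{thetas} is assumed to be an array of proposed parameter vectors and \texttt{dists} the corresponding summary-statistic distances $||\textrm{\boldmath$s$}_i-\textrm{\boldmath$s$}_{obs}||,$ with hypothetical names.
\begin{verbatim}
import numpy as np

def accept_top_percentile(thetas, dists, pct=1.0):
    # Keep proposed parameter values whose summary-statistic distances to
    # the test data set fall in the smallest pct percent of all proposals
    # (the "top 1 percentile" for pct = 1.0).
    dists = np.asarray(dists)
    cutoff = np.percentile(dists, pct)
    return thetas[dists <= cutoff]
\end{verbatim}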
\par
We assessed the accuracy of the posterior samples for each component of the parameter vector $\theta$ separately, using the root sum of squared error for standardized parameter values accepted in the posterior sample.
For a generic scalar parameter $\alpha,$ the root sum of squared errors is given by $\textrm{RSSE}=(1/r)\sqrt{\sum_{j=1}^{r}(\alpha_j-\alpha_T)^2/\textrm{Var}(\mathbf{\alpha})},$
where $\mathbf{\alpha}=(\alpha_1,\alpha_2,...,\alpha_r)$ are $r$ accepted values in the posterior sample, $\alpha_T$ is the true parameter value, and $\textrm{Var}(\mathbf{\alpha})$ is the variance of the set of $r$ values.
We report the mean RSSE over 1000 test data sets as $\textrm{RMSE}=(1/1000)\sum_{i=1}^{1000}\textrm{RSSE}_i$ (see \citet{NunesBalding2010}).
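For reference, a direct Python transcription of these error measures is given below; it is one possible implementation, with the population variance used for $\textrm{Var}(\mathbf{\alpha}).$
\begin{verbatim}
import numpy as np

def rsse(alpha_accepted, alpha_true):
    # Root sum of squared errors for one posterior sample of a scalar
    # parameter, standardized by the variance of the accepted values.
    a = np.asarray(alpha_accepted, dtype=float)
    r = len(a)
    return np.sqrt(np.sum((a - alpha_true) ** 2) / np.var(a)) / r

def rmse(rsse_values):
    # Mean RSSE over the collection of test data sets.
    return float(np.mean(rsse_values))
\end{verbatim}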
\par
\subsection{Example 1: The strength of balancing selection in a multi-locus $K${\em-}allele model}\label{sec:example1}
In this section, we consider inference from the stationary distribution of allele frequencies in the diffusion approximation
to a Wright-Fisher model with symmetric balancing selection and mutation \citep{Wright1949}.
If we let $a_i>0,$ with $i=1,2,...,K$ and $\sum_{i=1}^{K}a_i=1,$ denote the frequency of allelic type $i$ in the population at a genetic locus, the joint probability density function of allele frequencies $x=(a_1,a_2,...,a_K)$ is $f(x|\sigma, \mu)= c(\sigma,\mu)^{-1}\exp(-\sigma\sum_{i=1}^{K}a_i^2)\prod_{i=1}^{K}a_i^{\mu/K-1}.$
Parameters $\sigma$ and $\mu$ determine the population-scaled strength of balancing selection and the mutation rate, respectively.
A data set of observed allele frequencies is a random sample of $n$ draws from the population frequencies $f(x|\sigma,\mu).$
\par
ABC methods are well-suited for inference from this model for three reasons.
First, the statistics $\sum_{j=1}^{K}a_j^2$ and $-\sum_{j=1}^{K}\log a_j$ are jointly sufficient for parameters $\sigma$ and $\mu,$ and no information loss occurs in dimension reduction to the summary statistics.
Second, the parameter-dependent normalizing constant $c(\sigma,\mu)$ is hard to calculate, and performing likelihood-based inference on $\sigma$ and $\mu$ is therefore difficult.
Third, a method specifically designed to simulate data sets from $f(x|\sigma,\mu)$ is readily available \citep{Joyceetal2012}, and performing ABC is therefore straightforward.
For simplicity, we assume 100 loci with the same true parameter values, each with $K=4,$ and that the allele frequencies at each locus are independent of the allele frequencies at other loci.
Thus, the joint probability density function of allele frequencies for 100 loci is equal to the product of probability density functions across loci.
We choose uniform prior distributions on $(0.1,10)$ for the mutation rate $\mu$ and on $(0,50)$ for the selection parameter $\sigma$.
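As an illustration, the summary statistics for a multi-locus data set could be computed as below, assuming the data set is stored as an array \texttt{freqs} with one row of $K$ allele frequencies per locus and that, as one natural choice, the per-locus sufficient statistics are summed across the independent loci; the function name is hypothetical.
\begin{verbatim}
import numpy as np

def balancing_selection_summaries(freqs):
    # freqs: array of shape (loci, K) of allele frequencies at each locus.
    # The statistics sum_j a_j^2 and -sum_j log a_j are jointly sufficient
    # for (sigma, mu) at a single locus; summing over the independent loci
    # gives summary statistics for the whole data set.
    s1 = (freqs ** 2).sum()
    s2 = -np.log(freqs).sum()
    return np.array([s1, s2])
\end{verbatim}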
\par
{\em Results.}
Posterior samples of the model parameters $(\sigma,\mu)$ obtained by ABC and AABC using a typical data set are given in Figure \ref{fig:4}.
In analyses with $m=10^2,5\times 10^2, 10^3$ or $5\times 10^3$ simulated data sets, few samples are accepted with ABC, and thus, little mass is observed in ABC histograms (black).
For small $m,$ ABC does not produce an adequate sample size from the posterior distribution of parameters.
AABC, however, produces a posterior sample of size $10^3$ for any $m,$ because $10^5$ data sets are simulated from the non-mechanistic model (Algorithm 2, steps 5, 6, 7) and the top 1 percentile are accepted as belonging to the approximate posterior distribution.
The histograms obtained by AABC recover the true value reasonably well (Figure \ref{fig:4}).
The RMSE values in AABC procedures are approximately constant with increasing $m.$
For $m=10^2,5\times 10^2, 10^3, 5\times 10^3, 10^4, 5 \times 10^4,$ and $10^5$ simulated data sets, the RMSE values for parameter $\mu$ are 5.988, 5.932, 6.012, 6.086, 6.125, 6.078, and 6.088 respectively, close to the RMSE of 5.290 obtained by a standard ABC approach using $M=10^5$ simulated data sets from the mechanistic model.
The RMSE values in the last column of Figure \ref{fig:4} show that an AABC approach produces posterior samples that have on average greater variance than posterior samples obtained from ABC with the same large number of realizations.
Here, greater variance in posterior samples obtained by AABC is a result of simulating data sets in AABC by resampling the observed data values that are found only in the $m$ realizations in $\mathcal{Z}_{n,m}.$
Consider two parameter values $\theta^*_1$ and $\theta^*_2$ for which data sets ${\bf x}^*_1$ and ${\bf x}^*_2$ are simulated in the AABC approach by steps 5, 6, 7 of Algorithm 2 such that the parameter value $\tilde{\theta}\in\mathcal{Z}_{n,m}$ closest to both $\theta^*_1$ and $\theta^*_2$ is the same value.
The data sets ${\bf x}^*_1$ and ${\bf x}^*_2$ can include only the data values observed in $\tilde{{\bf x}}$ of the pair $(\tilde{{\bf x}},\tilde{\theta})\in\mathcal{Z}_{n,m}.$
On average, ${\bf x}^*_1$ and ${\bf x}^*_2$ share more observations in common than two data sets simulated from the respective mechanistic models $P_{\theta^*_1}$ and $P_{\theta^*_2}.$
Therefore, each data set simulated in the AABC approach using $Q_{\theta}$ is expected to be less able to distinguish between different parameter values than the independent data sets simulated in the ABC approach using $P_{\theta}.$
This situation results in relatively flat likelihoods and hence posterior samples with larger variance.
\par
\begin{figure}\label{fig:4}
\end{figure}
\subsection{Example 2: Admixture rates in hybrid populations}
Models in which hybrid populations are founded by, and receive genetic contributions from, multiple source populations are of interest in describing the demographic history of admixture.
Stochastic models including admixture often result in likelihoods that are difficult to calculate, and statistical methods capable of performing inference on admixture rates have received much attention for their implications on topics ranging from human evolution to conservation ecology \citep{Falushetal2003, Tangetal2005, BuerkleLexer2008}.
Here, we consider inference on admixture rates from a mechanistic model of \citet{VerduRosenberg2011}.
We use reported estimates of individual admixture as data.
\par
We consider a model of admixture for a diploid hybrid population of constant size $N,$ founded at some known $t$ generations in the past with contributions from source populations A and B.
We follow the distribution of admixture fractions of individuals in the hybrid population at a given genetic locus.
Each generation, the admixture fraction for each individual in the hybrid population is obtained as the mean of the admixture fractions of its parents.
The parents are chosen independently of each other, from source population A, source population B, or the hybrid population of the previous generation with probabilities $p_A,p_B,$ and $p_H,$ respectively ($p_A+p_B+p_H=1$).
In the special case of the founding generation, $p_H=0,$ and we assume $p_A=p_B=0.5.$
Individuals from source populations A and B are assigned admixture fractions of $1$ and $0$ respectively.
For example, if both parents of an individual in the hybrid population of the founding generation are from source population A, that individual has admixture fraction
$(1+1)/2=1.$
If both parents are from source population B, the admixture fraction is $(0+0)/2=0,$ and if one parent is from source population A and the other is from source population B, then the admixture fraction is $(1+0)/2=0.5.$
The distribution of the admixture fraction in the hybrid population is propagated in this manner for $t$ generations until the present, in which a sample of $n$ individuals is obtained from the resulting distribution (Figure \ref{fig:5}).
Our goal is to estimate the admixture rates $(p_A,p_B,p_H),$ given the individual admixture fractions estimated from observed genetic data.
\begin{figure}\label{fig:5}
\end{figure}
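A hedged Python sketch of this forward simulation is given below; it treats the founding event as the first of the $t$ generations, uses hypothetical argument names, and is meant to illustrate the mechanism rather than reproduce the exact implementation used in the analyses.
\begin{verbatim}
import numpy as np

def simulate_admixture(pA, pB, pH, N, t, n, rng=None):
    # Each generation, every individual's admixture fraction is the mean of
    # its two parents' fractions.  Each parent is drawn from source A
    # (fraction 1), source B (fraction 0), or the hybrid population of the
    # previous generation, with probabilities pA, pB, pH.  The founding
    # generation uses pA = pB = 0.5 and pH = 0.
    if rng is None:
        rng = np.random.default_rng()

    def one_parent(pop, probs):
        src = rng.choice(3, size=N, p=probs)   # 0 = source A, 1 = source B, 2 = hybrid
        frac = np.where(src == 0, 1.0, 0.0)
        idx = np.flatnonzero(src == 2)
        if idx.size:
            frac[idx] = rng.choice(pop, size=idx.size)
        return frac

    pop = 0.5 * (one_parent(None, [0.5, 0.5, 0.0])
                 + one_parent(None, [0.5, 0.5, 0.0]))
    for _ in range(t - 1):
        pop = 0.5 * (one_parent(pop, [pA, pB, pH])
                     + one_parent(pop, [pA, pB, pH]))
    return rng.choice(pop, size=n, replace=False)   # sample n individuals today
\end{verbatim}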
\par
We apply the AABC approach using individual admixture fractions from $n=604$ individuals from Central African Pygmy populations reported by \citet{Verduetal2009}, with an assumed constant population size of $N=10^4.$
This assumption differs slightly from the original model in \citet{VerduRosenberg2011} in that a finite population size is assumed, so that only $10^4$ admixture fraction values are allowed in the population at any given generation.
We assume that an admixture event with contributions from two ancestral source populations started at the mean estimate of $t=771$ generations ago \citep{Verduetal2009} with a generation time of 25 years, and that it continued until the present.
Source population A refers to an ancestral Pygmy population, and source population B refers to an ancestral non-Pygmy population.
The feature of this model relevant to our method is the computational intractability of simulating data sets.
For each set of parameter values $(p_A,p_B,p_H)$ simulated from the priors, the distribution of admixture fractions is discrete on a support of a number of admixture fraction values that doubles each generation, and this distribution evolves for 771 generations.
A random sample of admixture fraction values comparable to the values calculated from the observed data set is obtained from the distribution of the present generation.
Simulating a large number of data sets under this model with such a large number of generations is computationally infeasible, and standard ABC is impractical.
We thus perform AABC by rejection (Algorithm 2) using $m=10^4$ realizations from this model.
We assume a Dirichlet prior with hyperparameters $(1,1,1)$ on parameters $(p_A,p_B,p_H).$
\par
We also assessed the contribution of the approximations on the parameter and model spaces in the AABC approach to the RMSE separately, with a simulation study using a small number of generations ($t=30$), where simulating data sets from the mechanistic model is feasible.
First, we performed AABC with rejection as in Algorithm 2 with 1000 ``true'' data sets using $m=10^2, 5\times10^2,10^3, 5\times10^3,10^4, 5\times10^4,$ and $10^5$ realizations from the model, and we calculated the RMSE for $p_A,p_B,$ and $p_H$ over 1000 ``true'' data sets as described in Section \ref{sec:simulation}.
This AABC analysis includes error due to approximations on the parameter space and on the model space.
Second, we performed an AABC analysis with the same set of $m$ realizations, by including the error only due to the approximation on the parameter space.
We achieved this by running Algorithm 2 up through step 5, and then simulating data sets from the mechanistic model by substituting steps 6 and 7 of Algorithm 2 with step 2 of Algorithm 1, the standard ABC approach by rejection.
By this substitution, all data sets are simulated from the mechanistic model, but each data set is obtained using a parameter vector $(\tilde{p}_A,\tilde{p}_B,\tilde{p}_H)$ found in step 5 of Algorithm 2.
In this procedure, the error due to the approximation on the model space is eliminated, because data sets are simulated from the correct mechanistic model and not by resampling from the available realizations in $\mathcal{Z}_{n,m}$.
However, this procedure includes error due to the approximation on the parameter space, because each data set is simulated not under the correct proposed parameter value, but under the parameter value $(\tilde{p}_A,\tilde{p}_B,\tilde{p}_H),$ the closest value to the correct proposed value that can be found in $\mathcal{Z}_{n,m}.$
We compared the RMSE of the AABC procedure involving the approximation on both the parameter and model spaces and the RMSE of the AABC procedure involving only the approximation on the parameter space to the RMSE obtained from a standard ABC approach.
For these two AABC procedures, we also compared the percent excess in RMSE, defined as the ratio of the absolute difference in RMSE of the AABC and standard ABC approaches to the RMSE of the standard ABC approach, expressed as a percent.
\par
{\em Results.}
The individual admixture fractions calculated from the Pygmy data carry substantial information about the admixture parameters $p_A,p_B,$ and $p_H,$ since the joint posterior distribution is concentrated in a relatively small region of the 3-dimensional unit simplex on which $(p_A,p_B,p_H)$ sits (Figure \ref{fig:6}A). The marginal posterior distributions (Figure \ref{fig:6}B, \ref{fig:6}C, and \ref{fig:6}D) have means $p_A=0.151,\; p_B=0.132,$ and $p_H=0.717.$
These values are interpreted as contribution of genetic material of 15.1\% from the ancestral Pygmy population (source population A), 13.2\% from the ancestral Non-Pygmy population (source population B), and 71.7\% from the hybrid population to itself at each generation, over $771$ generations of constant admixture.
\begin{figure}\label{fig:6}
\end{figure}
\par
For the simulation study with $t=30$ generations and 1000 ``true'' data sets, the RMSE values from AABC analyses decrease with increasing $m$ (Figure \ref{fig:7}A, \ref{fig:7}B,
\ref{fig:7}C).
Further, as $m$ increases, the error due to the approximation on the parameter space decreases (Figure \ref{fig:7}D last column), due to the fact that for large $m,$ the difference decreases between the closest parameter value chosen at step 5 of Algorithm 2 and the correct parameter value under which we want to simulate a data set.
In fact, the RMSE from the AABC analysis with $m=10^5$ realizations and approximation only on the parameter space and the RMSE from the standard ABC approach are virtually indistinguishable (Figure \ref{fig:7}A, \ref{fig:7}B, \ref{fig:7}C, red star).
For $m=10^3,$ the AABC analysis with approximations on the parameter and model spaces has a percent excess RMSE of 13.81\%, whereas AABC analysis including only the approximation on the parameter space has excess RMSE of 6.61\%.
That is, at $m=10^3,$ approximately half of the excess RMSE in the AABC approach with respect to the standard ABC analysis comes from the error due to the approximation on the parameter space and half arises due to the approximation on the model space.
\begin{figure}\label{fig:7}
\end{figure}
\section{Discussion}
Performing likelihood-based inference from statistical models incorporating a multitude of stochastic processes is often challenging due to computationally intractable likelihoods.
In principle, when stochastic processes are complex but a family of parametric statistical models is well-defined, data can be simulated from the model to assess the parameter likelihoods.
In the last decade, ABC methods have become a standard tool to perform approximate Bayesian inference in subject areas such as ecology and evolution, by exploiting
the idea of simulating many data sets from a model, when such simulations are computationally feasible.
To deliver an adequate sample from the posterior distribution of the parameters, however, ABC requires a large number of simulated data sets, and it might not perform well when only a limited number of data sets can be simulated.
\par
In this article, we introduced an approach that extends simulation-based Bayesian inference methods to model spaces in which only a limited number of data sets can be simulated from the model, at the expense of requiring approximations on the parameter and the model spaces.
Our AABC approaches rely on two statistical approximations.
In our approximation on the parameter space, for each parameter simulated from the prior distribution, we take the closest parameter value available in the set of realizations
$\mathcal{Z}_{n,m}$ obtained from the mechanistic model.
This approach has a uniform kernel smoothing interpretation
in the sense that each parameter value in the set $\mathcal{Z}_{n,m}$ dissects the support of the prior distribution into non-overlapping components such that each interval is mapped to the same parameter value in $\mathcal{Z}_{n,m}.$
Each component then represents the support of a uniform kernel.
Kernel approximations have an operational role in implementing ABC methods, and a natural future direction for AABC is to improve the accuracy of posterior samples using smooth weighting kernels for the approximation on the parameter space.
\par
The approximation on the model space is achieved by assigning Dirichlet probabilities to data points of realizations obtained from the mechanistic model.
This is a variation on the resampling method originally introduced in Rubin's Bayesian bootstrap \citep{Rubin1981}, and therefore, it is an application of Bayesian nonparametric methods.
From this perspective, AABC methods connect standard model-based Bayesian inference on model-specific parameters and Bayesian nonparametric methods within the ABC framework.
\par
Our approach of using a non-mechanistic model and Bayesian resampling methods to help perform inference on model-specific parameters of a mechanistic model is a fundamental difference between AABC and existing ABC methods.
ABC performs inference on model-specific parameters of a mechanistic model using a likelihood based purely on the mechanistic model.
AABC instead performs inference on the same model-specific parameters of the mechanistic model as ABC, using a likelihood based on a non-mechanistic model that incorporates a limited number of data sets simulated from the mechanistic model.
Consequently, the model likelihoods used in ABC and AABC are not exactly the same, and the posterior distributions targeted by the two classes of methods are not exactly equivalent for finite sample sizes.
The advantage of AABC methods in contrast to pure non-mechanistic modeling approaches (e.g., nonparametric methods) is that AABC can perform inference on the quantities of interest---the model-specific parameters of the mechanistic model.
\par
Unlike other ABC methods, the AABC approach delivers a posterior sample of desired size from the joint distribution of parameters for any $m>1.$
This is both a strength and a limitation of AABC.
The strength is that in practice, a researcher can fix $m$ and thus the computation time {\em a priori}, to simulate data from the mechanistic model to obtain a reasonable inference by AABC; other ABC methods may fail to produce an adequate posterior sample in equivalent computation time.
In our example, for moderate values of $m$ (e.g., $10^3$ to $10^4$) for which standard ABC approaches were unsatisfactory, AABC adequately sampled an approximate posterior distribution.
The limitation is that when $m$ is too small, the posterior sample obtained by AABC can be a distorted representation of the true posterior distribution.
Although in the limit, AABC and ABC are expected to produce similar results, the posterior distribution sampled by an AABC approach is not the correct posterior distribution, because many parameter values simulated from the prior are tested for acceptance based on repeated use of the data values in $m$ realizations, instead of based on data sets simulated independently of each other.
A future direction is to investigate the relationship between $m$ and the dimensionality of the parameter space to optimize $m$ in producing a given level of accuracy for approximating the true posterior distributions.
\par
\section*{Acknowledgments}
The authors thank Paul Verdu for helpful discussions on the genetics of Central African Pygmy populations. Support for this research is partially provided by NIH grant R01 GM 081441, NSF grant DBI-1146722, and the Burroughs Wellcome Fund.
\section*{Appendix 1}
We let $k\leq n$ be the number of distinct values $\tilde{x}_1,\tilde{x}_2,...,\tilde{x}_k$ in the data set $\tilde{{\bf x}},$ and denote the number of observed $\tilde{x}_i$ by $\tilde{n}_i,$ where $n=\sum_{i=1}^{k}\tilde{n}_i.$
Then the prior distribution for the probabilities of an AABC replicate data set based on the ABC simulated data set $\tilde{{\bf x}}$ is the Dirichlet distribution $\pi(\textrm{\boldmath$\phi$})=[\Gamma(\sum_{i=1}^k \tilde{n}_i)/\prod_{i=1}^k\Gamma(\tilde{n}_i)] \prod_{i=1}^{k}\phi_i^{\tilde{n}_i-1}$ with parameters $\tilde{n}_1,\tilde{n}_2,...,\tilde{n}_k.$
The special case of the prior proportional to $1$ described in the text is obtained with $k=n,$ when all observations in $\tilde{{\bf x}}$ are distinct $(\tilde{n}_1=\tilde{n}_2=\;\cdots\;=\tilde{n}_n=1)$.
Our goal is to show that $\lim_{m\rightarrow \infty}\lim_{n\rightarrow \infty}\pi_\epsilon(\theta|{\bf x}_{obs},Q_{\theta})=\lim_{n\rightarrow\infty}\pi_{\epsilon}(\theta|{\bf x}_{obs},P_{\theta}).$
\par
Recalling equation \ref{eq:4},
\begin{equation}\label{eq:app1}
\lim_{m\rightarrow \infty}\lim_{n\rightarrow \infty}\pi_\epsilon(\theta|{\bf x}_{obs},Q_{\theta})= \displaystyle{\lim_{m\rightarrow\infty}\lim_{n\rightarrow \infty}}\frac{1}{C_{Q_{\theta}}}\int_{\mathcal{X}}\mathbf{I}_{\{||\textrm{\boldmath$s$}-\textrm{\boldmath$s$}_{obs}||<\epsilon\}}\left[\int_{\Phi}q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})\pi(\textrm{\boldmath$\phi$})\;d\textrm{\boldmath$\phi$}\;\mathbf{I}_{\{\theta,\tilde{\theta}\}}\right]\pi(\theta)\; d{\bf x}.
\end{equation}
The integral in the brackets is the expectation of $q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}}),$ with respect to the prior $\pi(\textrm{\boldmath$\phi$}).$ We let $C={n \choose n_1 \;n_2\; \cdots\; n_k},$ and using the definition of $q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})=C\;\prod_{j=1}^{n}\prod_{i=1}^{n}\phi_i^{\mathbf{I}_{\{x_j=\tilde{x}_i\}}}$ in Section \ref{subsec:nonparametric}, and $\pi(\textrm{\boldmath$\phi$})=[\Gamma(\sum_{i=1}^k \tilde{n}_i)/\prod_{i=1}^k\Gamma(\tilde{n}_i)] \prod_{i=1}^{k}\phi_i^{\tilde{n}_i-1}$ we get
\begin{equation*}
\int_{\Phi}q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})\pi(\textrm{\boldmath$\phi$})\;d\textrm{\boldmath$\phi$}=C\;\frac{\Gamma(\sum_{i=1}^{k}\tilde{n}_i)}{\prod_{i=1}^{k}\Gamma(\tilde{n}_i)} \;\prod_{j=1}^{n}\int_{\Phi}\left(\prod_{i=1}^{n}\phi_i^{\mathbf{I}_{\{x_j=\tilde{x}_i\}}}\right)\left(\prod_{i=1}^{k}\phi_i^{\tilde{n}_i-1}\right)\;d\textrm{\boldmath$\phi$}.
\end{equation*}
Here, we have exchanged the order of the product over $j$ with the integral since the expectation of the product of $n$ IID observations in sample ${\bf x}$ is equal to the product of the expectations of observations $x_j.$
We label the realized value of the $j$th data point $x_j$ by $(j)$ such that $\prod_{i=1}^{n}\phi_i^{\mathbf{I}_{\{x_j=\tilde{x}_i\}}}=\phi_{(j)},$ and write
\begin{equation}\label{eq:app2}
\int_{\Phi}q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})\pi(\textrm{\boldmath$\phi$})\;d\textrm{\boldmath$\phi$}=C\;\frac{\Gamma(\sum_{i=1}^{k}\tilde{n}_i)}{\prod_{i=1}^{k}\Gamma(\tilde{n}_i)}\;\prod_{j=1}^{n}\int_{\Phi} \left(\prod_{\substack{i=1\\i\neq(j)}}^{k}\phi_i^{\tilde{n}_i-1}\right)\phi_{(j)}^{\tilde{n}_{(j)}}\;d\textrm{\boldmath$\phi$}.
\end{equation}
Using
$\int_{\Phi}\frac{\Gamma[(\sum_{i=1,i\neq (j)}^{k}\tilde{n}_i)+\tilde{n}_{(j)}+1]}{[\prod_{i=1,i\neq (j)}^{k}\Gamma(\tilde{n}_i)]\Gamma(\tilde{n}_{(j)}+1)} \; \left(\prod_{i=1,i\neq (j)}^{k}\phi_i^{\tilde{n}_i-1}\right)\phi_{(j)}^{\tilde{n}_{(j)}}\;d\textrm{\boldmath$\phi$}=1$
(p. 487, \citet{Kotzetal2000}), we substitute the integral in equation (\ref{eq:app2}) with the ratio of the gamma functions to get
\begin{align*}
\int_{\Phi}q({\bf x}|\textrm{\boldmath$\phi$},\tilde{{\bf x}})\pi(\textrm{\boldmath$\phi$})\;d\textrm{\boldmath$\phi$}&=C\;\frac{\Gamma(\sum_{i=1}^{k}\tilde{n}_i)}{\prod_{i=1}^{k}\Gamma(\tilde{n}_i)}\prod_{j=1}^{n}
\frac{\left[\prod_{i=1,i\neq (j)}^{k}\Gamma(\tilde{n}_i)\right]\Gamma(\tilde{n}_{(j)}+1)}{\Gamma[(\sum_{i=1,i\neq (j)}^{k}\tilde{n}_i)+\tilde{n}_{(j)}+1]}\\
&=C\;\prod_{j=1}^{n}\frac{\Gamma(n)}{\Gamma(\tilde{n}_{(j)})}\frac{\Gamma(\tilde{n}_{(j)}+1)}{\Gamma(n+1)}=C\;\prod_{j=1}^{n}\left(\frac{\tilde{n}_{(j)}}{n}\right).
\end{align*}
Substituting $C\;\prod_{j=1}^{n}\left(\frac{\tilde{n}_{(j)}}{n}\right)$ for the integral in brackets in equation (\ref{eq:app1}), we have
\begin{align}
\nonumber
\lim_{m\rightarrow \infty}\lim_{n\rightarrow \infty}\pi_\epsilon(\theta|{\bf x}_{obs},Q_{\theta})&=\displaystyle{\lim_{m\rightarrow\infty}\lim_{n\rightarrow \infty}}\frac{1}{C_{Q_{\theta}}}\int_{\mathcal{X}}\mathbf{I}_{\{||\textrm{\boldmath$s$}-\textrm{\boldmath$s$}_{obs}||<\epsilon\}}\;C\;\prod_{j=1}^{n}\left(\frac{\tilde{n}_{(j)}}{n}\right)\;\mathbf{I}_{\{\theta,\tilde{\theta}\}}\pi(\theta)\; d{\bf x}\\
\label{eq:exchangelimit}
&=\frac{\displaystyle{\lim_{m\rightarrow\infty}\lim_{n\rightarrow \infty}}\int_{\mathcal{X}}\mathbf{I}_{\{||\textrm{\boldmath$s$}-\textrm{\boldmath$s$}_{obs}||<\epsilon\}}\;C\;\prod_{j=1}^{n}\left(\frac{\tilde{n}_{(j)}}{n}\right)\;\mathbf{I}_{\{\theta,\tilde{\theta}\}}\pi(\theta)\; d{\bf x}}{\displaystyle{\lim_{m \rightarrow \infty }\lim_{n\rightarrow \infty}}C_{Q_{\theta}}}.
\end{align}
\par
We apply the dominated convergence theorem to exchange the limits in $n$ and the integrals in the numerator and denominator of equation (\ref{eq:exchangelimit}).
The assumptions of the theorem are satisfied as follows:
1) The integrand in equation (\ref{eq:exchangelimit}) is bounded: the indicator functions are bounded by 1, the ratios $(\tilde{n}_{(j)}/n),$ where $\tilde{n}_{(j)}\leq n,$ are bounded by 1, and the prior $\pi(\theta)$ is bounded by assumption.
2) $\lim_{n\rightarrow \infty}(\tilde{n}_{(j)}/n)$ converges pointwise to the probability of $x_{(j)}$ under $\tilde{\theta}$ and the model $P_{\tilde{\theta}},$ given by
$p(x_{(j)}|\tilde{\theta}),$ by the frequency interpretation of probability.
Exchanging the limits in $n$ and the integrals, and using $\lim_{n\rightarrow \infty}(\tilde{n}_{(j)}/n)=p(x_{(j)}|\tilde{\theta}),$
\begin{align}
\nonumber
\lim_{m\rightarrow \infty}\lim_{n\rightarrow \infty}\pi_\epsilon(\theta|{\bf x}_{obs},Q_{\theta})&=\frac{\displaystyle{\lim_{m\rightarrow\infty}}\int_{\mathcal{X}}\mathbf{I}_{\{||\textrm{\boldmath$s$}-\textrm{\boldmath$s$}_{obs}||<\epsilon\}}\prod_{j=1}^{k}\left[p(x_{(j)}|\tilde{\theta})\right]^{n_{(j)}}\;\mathbf{I}_{\{\theta,\tilde{\theta}\}}\pi(\theta)\;d{\bf x}}{\displaystyle{\lim_{m \rightarrow \infty}}C_{P_{\tilde{\theta}}}}\\
\label{eq:app4}
&=\frac{\displaystyle{\lim_{m\rightarrow\infty}}\int_{\mathcal{X}}\mathbf{I}_{\{||\textrm{\boldmath$s$}-\textrm{\boldmath$s$}_{obs}||<\epsilon\}}p({\bf x}|\tilde{\theta})\;\mathbf{I}_{\{\theta,\tilde{\theta}\}}\pi(\theta)\;d{\bf x}}{\displaystyle{\lim_{m \rightarrow \infty}}C_{P_{\tilde{\theta}}}},
\end{align}
where (\ref{eq:app4}) follows by the definition of the joint distribution $p({\bf x}|\tilde{\theta})=\prod_{j=1}^{k}\left[p(x_{(j)}|\tilde{\theta})\right]^{n_{(j)}}.$
\par
We now apply the dominated convergence theorem a second time to exchange the limits in $m$ and the integrals on $\mathcal{X}$. Again, the assumptions of the dominated convergence theorem are satisfied since the integrand in (\ref{eq:app4}) is a sequence in $m$ of bounded functions, and as $m\rightarrow \infty,$ $\tilde{\theta}\rightarrow \theta,$ and $p({\bf x}|\tilde{\theta})\rightarrow p({\bf x}|\theta).$
We get
\begin{equation*}
\lim_{m\rightarrow \infty}\lim_{n\rightarrow \infty}\pi_\epsilon(\theta|{\bf x}_{obs},Q_{\theta})=\frac{1}{C_{P_{\theta}}}\int_{\mathcal{X}}\mathbf{I}_{\{||\textrm{\boldmath$s$}-\textrm{\boldmath$s$}_{obs}||<\epsilon\}}p({\bf x}|\theta)\pi(\theta)\;d{\bf x}=\displaystyle{\lim_{n\rightarrow \infty}}\pi_{\epsilon}(\theta|{\bf x}_{obs},P_{\theta})
\end{equation*}
which shows that the AABC posterior converges to the ABC posterior as the sample size $n$ and the number $m$ of simulated data sets increase.
\end{document}
\begin{document}
\title[A link invariant related to Khovanov homology and HFK]{A link invariant related to Khovanov homology and knot Floer homology}
\author{Akram Alishahi}
\thanks{AA was supported by NSF grants DMS-1505798 and DMS-1811210.}
\address{Department of Mathematics, Columbia University, New York, NY 10027}
\email{\href{mailto:[email protected]}{[email protected]}}
\author{Nathan Dowlin}
\thanks{ND was supported by NSF grant DMS-1606421.}
\address{Department of Mathematics, Columbia University, New York, NY 10027}
\email{\href{mailto:[email protected] }{[email protected]}}
\keywords{}
\date{\today}
\begin{abstract}
In this paper we introduce a chain complex $C_{1 \pm 1}(D)$ where $D$ is a plat braid diagram for a knot $K$. This complex is inspired by knot Floer homology, but the construction is purely algebraic. It is constructed as an oriented cube of resolutions with differential $d=d_{0}+d_{1}$. We show that the $E_{2}$ page of the associated spectral sequence is isomorphic to the Khovanov homology of $K$, and that the total homology is a link invariant which we conjecture is isomorphic to $\delta$-graded knot Floer homology.
The complex can be refined to a tangle invariant for braids on $2n$ strands, where the associated invariant is a bimodule over an algebra $\mathcal{A}_{n}$. We show that $\mathcal{A}_{n}$ is isomorphic to $\overline{\mathcal{B}}'(2n+1, n)$, the algebra used for the $DA$-bimodule constructed by Ozsv\'{a}th and Szab\'{o} in their algebraic construction of knot Floer homology \cite{OS:Kauffman}.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction} The goal of this paper is to introduce a new homology theory for links in $S^{3}$ which is related to both the knot Floer homology of Ozsv\'{a}th-Szab\'{o} and Rasmussen (\cite{OS04:Knots}, \cite{Rasmussen03:Knots}) and Khovanov homology (\cite{Khovanov00:CatJones}). Knot Floer homology and Khovanov homology are defined using very different methods - the former is a Lagrangian Floer homology whose differential is often quite difficult to compute, while the latter is defined algebraically and has its roots in representation theory. Despite these differences, the two theories seem to contain a great deal of the same information and are conjectured to be related by a spectral sequence:
\begin{conjecture}[\cite{Rasmussen:KnotPolynomials}]\label{Conj1}
For any knot $K$ in $S^{3}$, there is a spectral sequence from the Khovanov homology of $K$ to the knot Floer homology of $K$.
\end{conjecture}
\noindent
Our construction gives a candidate for this spectral sequence.
\begin{remark}
Although the construction is inspired by holomorphic disc counts, the actual complex is an algebraic construction, so the reader need not be familiar with the holomorphic geometry typically present in a paper involving knot Floer homology.
\end{remark}
Given a diagram $D$ for a knot $K$ which is the plat closure of a braid, we define a complex $C_{1 \pm 1}(D)$ as an oriented cube of resolutions for $D$. The differential includes only vertex maps and edge maps. We write $d=d_{0}+d_{1}$ where $d_{i}$ increases cube filtration by $i$.
\begin{definition}
Given a knot $K$ in $S^{3}$, we write the $E_{2}$ page of the spectral sequence induced by the cube filtration as $H_{1+1}(K)$ and the total homology as $H_{1-1}(K)$.
\[H_{1+1}(K)=H_{*}(H_{*}(C_{1 \pm 1}(D), d_{0}), d_{1}^{*})\]
\[H_{1-1}(K)=H_{*}(C_{1 \pm 1}(D), d_{0}+d_{1})\]
\end{definition}
\noindent
The total homology $H_{1-1}(K)$ is singly graded, while $H_{1+1}(K)$ has a second grading coming from the cube filtration.
\begin{theorem}
The homology $H_{1+1}(K)$ is isomorphic to Khovanov homology.
\end{theorem}
\begin{theorem}
The total homology $H_{1-1}(K)$ is a link invariant.
\end{theorem}
The construction of $C_{1 \pm 1}(K)$ arose from an attempt to construct an oriented cube of resolutions for $\mathit{HFK}_{2}(K)$, a (singly graded) homology theory defined by counting holomorphic discs which pass through basepoints which are typically blocked in knot Floer homology (\cite{Dowlin1}). The relevant results on $\mathit{HFK}_{2}$ from this paper are listed below:
I) The reduced homology $\widehat{\mathit{HFK}}_{2}(K)$ is isomorphic to $\delta$-graded knot Floer homology.
II) For a completely singular link $S$ (a singular diagram with no crossings) $\mathit{HFK}_{2}(S)$ is isomorphic to the Khovanov homology of $\mathsf{sm}(S)$, where $\mathsf{sm}(S)$ is the diagram obtained by replacing each singularization with the unoriented smoothing.
If there were an oriented cube of resolutions for $\mathit{HFK}_{2}$ and the edge maps corresponded to the Khovanov edge maps, this would give the desired spectral sequence. Unfortunately, the standard construction of the oriented cube of resolutions for knot Floer homology doesn't seem to work with the additional differentials.
The complex $C_{1 \pm 1}(K)$ aims to give an algebraic construction of this exact triangle. Let $D$ be a diagram for $K$ which is the plat closure of a braid. For each complete resolution $S$ of $D$, we replace $\mathit{CFK}_{2}(S)$ with an algebraically defined complex $(C_{1 \pm 1}(S), d_{0})$ which is conjectured to be chain homotopy equivalent. (It is certainly not isomorphic, as it is obtained by making some cancellations on $\mathit{CFK}_{2}(S)$ then guessing at the resulting differentials and module structure.) Then, for each $S_{1}$ and $S_{2}$ which differ at an edge, we define an edge map
\[d_{1}: C_{ 1 \pm 1 }(S_{1}) \to C_{ 1 \pm 1 }(S_{2}) \]
\noindent
The resulting complex is $C_{ 1 \pm 1}(D)$.
\begin{conjecture}\label{Conj2}
The total homology $H_{1-1}(K)$ is isomorphic to $\mathit{HFK}_{2}(K)$.
\end{conjecture}
\noindent
Note that this would prove Conjecture \ref{Conj1}.
The ground ring for $C_{ 1 \pm 1}(D)$ is a polynomial ring
\[ R = \mathbb{Q}[U_{1},...,U_{m}] \]
\noindent
where each $U_{i}$ corresponds to an edge $e_{i}$ in the diagram $D$. For each complete resolution $S$ of $D$, the complex $C_{ 1 \pm 1}(S)$ can be viewed as an $R$-module $\mathscr{M}(S)$ tensored with a Koszul complex $\mathsf{K}(D)$.
\[C_{ 1 \pm 1}(S) = \mathscr{M}(S) \otimes \mathsf{K}(D) \]
The complex $\mathsf{K}(D)$ is the `closing off' factor - it depends only on the cups and caps in the plat closure, so it is the same for any resolution $S$. Since the edge map is the identity on $\mathsf{K}(D)$, we can write
\[ d_{0} = 1 \otimes d_{\mathsf{K}} \hspace{20mm} d_{1} = d_{\mathscr{M}} \otimes 1 \]
Let $\mathscr{M}(D)$ denote the cube complex which has the module $\mathscr{M}(S)$ at the vertex in the cube corresponding to $S$, with differential $d_{\mathscr{M}}$. Then $C_{1 \pm 1}(D) = \mathscr{M}(D) \otimes \mathsf{K}(D) $. We are able to construct a local theory for $\mathscr{M}(D)$ which allows us to slice our diagram $D$ into tangles as in Figure \ref{FirstExample}.
\begin{figure}
\caption{Breaking a plat diagram for the trefoil into four plat tangles}
\label{FirstExample}
\end{figure}
The local theory assigns to each $(2m, 2n)$-tangle $T$ an $(\mathcal{A}_{m}, \mathcal{A}_{n})$-bimodule $\mathsf{M}[T]$. ($\mathcal{A}_{0}$ is defined to be $\mathbb{Q}$.) We orient our tangles from bottom to top, so a $(2m, 2n)$-tangle has $2m$ strands on the bottom and $2n$ strands at the top. Note that since our diagram is the plat closure of a braid, we either have $m=n$, $m=0$, or $n=0$. We will refer to tangles which are horizontal slices of a plat closure as \emph{plat tangles}.
\begin{theorem}
The local theory satisfies the following properties:
a) Let $T_{1}$ be a $(2l, 2m)$ plat tangle and $T_{2}$ a $(2m, 2n)$ plat tangle. Then
\[ \mathsf{M}[T_{1} \circ T_{2}] \cong \mathsf{M}[T_{1}] \otimes_{\mathcal{A}_{m}} \mathsf{M}[T_{2}] \]
b) The chain homotopy type of $[T]$ is an invariant for open braids, i.e. it is invariant under braid-like Reidemeister II and III moves.
\end{theorem}
\noindent
For example, if $D$ is the diagram for the trefoil in Figure \ref{FirstExample}, then
\[\mathscr{M}(D)=\mathsf{M}[T_{1}] \otimes_{\mathcal{A}_{2}} \mathsf{M}[T_{2}] \otimes_{\mathcal{A}_{2}} \mathsf{M}[T_{3}] \otimes_{\mathcal{A}_{2}} \mathsf{M}[T_{4}] \]
\begin{remark}
Note that $\mathsf{M}[T]$ is not a tangle invariant when the cups and caps are involved - the Koszul complex $\mathsf{K}(D)$ is required.
\end{remark}
This construction is quite similar in spirit to the algebraic description of knot Floer homology by Ozsv\'{a}th and Szab\'{o} in \cite{OS:Kauffman}. There are a few main differences: our tangle invariant is a bimodule while theirs is a $DA$-bimodule, and our complex comes from a planar Heegaard diagram while theirs comes from the large genus Kauffman states diagram. Especially in light of the second difference, the following theorem is quite surprising.
\begin{theorem}
The algebra $\mathcal{A}_{n}$ is isomorphic to the algebra $\overline{\mathcal{B}}'(2n+1, n)$ from \cite{OS:Kauffman}.
\end{theorem}
\noindent
This theorem gives some evidence for Conjecture \ref{Conj2}, and gives a construction of Khovanov homology using the same algebras present in Ozsv\'{a}th and Szab\'{o}'s algebraic description of knot Floer homology.
\subsection*{Acknowledgments} We would like to thank Andy Manion, Ciprian Manolescu, Peter Ozsv\'{a}th, Ina Petkova, Robert Lipshitz, and Zoltan Szab\'{o} for helpful discussions.
\section{Background: Khovanov homology}
In this section we provide a definition of Khovanov's categorification of the Jones polynomial, known as Khovanov homology (\cite{Khovanov00:CatJones}). In order to make the relationship with our construction more clear, we will highlight the module structure of the Khovanov chain complex over the edge ring.
\subsection{Definition of the Khovanov Complex} Let $L$ be a link in $S^{3}$ with diagram $D \subset \mathbb{R}^{2}$. Let $\mathfrak{C}=\{c_{1}, c_{2}, ..., c_{n}\}$ denote the crossings in $D$, and viewing $D$ as a 4-valent graph, let $E=\{e_{1}, e_{2}, ..., e_{m}\}$ denote the edges of $D$. The \emph{edge ring} is defined to be
\[ R := \mathbb{Q}[X_{1}, X_{2},...,X_{m}]/\{X_{1}^{2}=X_{2}^{2}=...=X_{m}^{2}=0 \} \]
\noindent
with each variable $X_{i}$ corresponding to the edge $e_{i}$.
While our construction is an oriented cube of resolutions complex, the Khovanov complex is an \emph{unoriented} cube of resolutions. Each crossing $c_{i}$ can be resolved in two ways, the 0-resolution and the 1-resolution (see Figure \ref{resolutions}). For each $v \in \{0,1\}^{n}$, let $D_{v}$ denote the diagram obtained by replacing the crossing $c_{i}$ with the $v_{i}$-resolution. The diagram $D_{v}$ is a disjoint union of circles - denote the number of circles by $k_{v}$. The vector $v$ determines an equivalence relation on $E$, where $e_{p} \sim_{v} e_{q}$ if $e_{p}$ and $e_{q}$ lie on the same component of $D_{v}$.
\begin{figure}
\caption{The $0$- and $1$-resolutions of a crossing}
\label{resolutions}
\end{figure}
The module $CKh(D_{v})$ is defined to be a quotient of the ground ring:
\[ CKh(D_{v}) := R / \{X_{p}=X_{q} \text{ if } e_{p} \sim_{v} e_{q} \} \]
\noindent
we will denote this quotient by $R_{v}$.
There is a partial ordering on $\{0,1\}^{n}$ obtained by setting $u \le v$ if $u_{i} \le v_{i}$ for all $i$. We will write $u \lessdot v$ if $u \le v$ and they differ at a single crossing, i.e. there is some $i$ for which $u_{i}=0$ and $v_{i}=1$, and $u_{j}=v_{j}$ for all $j \ne i$. Corresponding to each edge of the cube, i.e. a pair $(u \lessdot v)$, there is an embedded cobordism in $\mathbb{R}^{2} \times [0,1]$ from $D_{u}$ to $D_{v}$ constructed by attaching a 1-handle near the crossing $c_{i}$ where $u_{i}<v_{i}$. This cobordism is always a pair of pants, either going from one circle to two circles (when $k_{u}=k_{v}-1$) or from two circles to one circle (when $k_{u}=k_{v}+1$). We call the former a \emph{merge} cobordism and the latter a \emph{split} cobordism.
For each vertex $v$ of the cube, the quotient ring $R_{v}$ is naturally isomorphic to $\mathcal{A}^{\otimes k_{v}}$, where $\mathcal{A}$ is the Frobenius algebra $\mathbb{Q}[X]/(X^{2}=0)$. Recall that the multiplication and comultiplication maps of $\mathcal{A}$ are given as:
\begin{displaymath}
m:\mathcal{A}\otimes_{\mathbb{Q}}\mathcal{A}\rightarrow\mathcal{A}:\begin{cases}
\begin{array}{lcc}
1 \mapsto 1&,&X_{1} \mapsto X\\
X_{1}X_{2} \mapsto 0&,&X_{2} \mapsto X
\end{array}
\end{cases}
\end{displaymath}
and
\begin{displaymath}
\Delta:\mathcal{A}\rightarrow\mathcal{A}\otimes_{\mathbb{Q}}\mathcal{A}:\begin{cases}
1 \mapsto X_{1}+X_{2}\\
X \mapsto X_{1}X_{2}
\end{cases}
\end{displaymath}
The chain complex $CKh(D)$ is defined to be the direct sum of the $CKh(D_{v})$ over all vertices in the cube:
\[ CKh(D) := \bigoplus_{v \in \{0,1\}^{n}} CKh(D_{v}) \]
The differential decomposes over the edges of the cube. When $u \lessdot v$ corresponds to a merge cobordism, define
\[ d_{u,v}: CKh(D_{u}) \to CKh(D_{v}) \]
\noindent
to be the Frobenius multiplication map, and when $u \lessdot v$ corresponds to a split cobordism, define $d_{u,v}$ to be the comultiplication map. In terms of the quotient rings $R_{u}$ and $R_{v}$, the map $m$ is the natural projection, while $\Delta$ is multiplication by $X_{j}+X_{k}$, where $e_{i}$, $e_{j}$, $e_{k}$, $e_{l}$ are the edges at the corresponding crossing as in Figure \ref{resolutions}. Note that $X_{j}+X_{k}=X_{i}+X_{l}$.
If $D_{u}$ and $D_{v}$ differ at crossing $c_{i}$, define $\epsilon_{u,v} = \sum_{j < i } u_{j} $. Then
\[ d = \sum_{u \lessdot v} (-1)^{\epsilon_{u,v}} d_{u,v} \]
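For example, if $D$ has three crossings and $u=(1,0,0)\lessdot v=(1,0,1)$, then the two resolutions differ at $c_{3}$, so $\epsilon_{u,v}=u_{1}+u_{2}=1$ and the edge map $d_{u,v}$ appears in $d$ with sign $-1$.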
The Khovanov complex is bigraded, with a homological grading and a quantum grading. Up to an overall grading shift, the homological grading is just the height in the cube. Setting $|v|= \sum_{i} v_{i}$, $n_{+}$ the number of positive crossings in $D$, and $n_{-}$ the number of negative crossings in $D$, we have
\[ \mathrm{gr}_{h}(R_{v}) = |v|-n_{-}\]
For each vertex $v$ of the cube, the quantum grading of $1\in R_{v}$ is given by
\[ \mathrm{gr}_{q}(1 \in R_{v}) = n_{+}-2n_{-}+|v|+k_{v} \]
\noindent
and each variable $X_{i}$ has quantum grading $-2$. With respect to the bigrading $(\mathrm{gr}_{h}, \mathrm{gr}_{q})$, the differential $d$ has bigrading $(1,0)$. The Khovanov homology $Kh(D)$ is the homology of this complex
\[ Kh(D) = H_{*}(CKh(D), d) \]
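For example, if $D$ is the crossingless diagram of the unknot, then there are no crossings, the unique resolution consists of a single circle, and $CKh(D)\cong\mathcal{A}$ with vanishing differential; the generators $1$ and $X$ sit in bigradings $(\mathrm{gr}_{h},\mathrm{gr}_{q})=(0,1)$ and $(0,-1)$, so the Khovanov homology of the unknot is a copy of $\mathbb{Q}$ in quantum gradings $\pm 1$.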
There is a third grading which will be of interest called the $\delta$-grading. It is given by
\[ \gr_{\delta} = \gr_{q} - 2 \gr_{h} \]
\noindent
The differential is homogeneous of degree $-2$ with respect to $\gr_{\delta}$ (it has bigrading $(1,0)$ with respect to $(\mathrm{gr}_{h},\mathrm{gr}_{q})$), and multiplication by $X_{i}$ lowers $\gr_{\delta}$ by $2$.
The grading on our complex $C_{1 \pm 1}(D)$ will correspond to the $\delta$-grading on Khovanov homology.
\section{Construction of $C_{1 \pm 1}$ for singular knots}
The complex $C_{1 \pm 1}(D)$ is constructed as an oriented cube of resolutions. Each crossing can be resolved in one of two ways - with the oriented smoothing, or with a singularization (see Figure \ref{OrientedResolutions}).
\begin{figure}
\caption{Resolutions for a positive and negative crossing}
\label{OrientedResolutions}
\end{figure}
\begin{definition}
A diagram $S$ which has been obtained from $D$ by replacing every crossing with the oriented smoothing or singularization is called a \emph{complete resolution} of $D$.
\end{definition}
\begin{remark}
A complete resolution $S$ can also be viewed as a trivalent graph by replacing each 4-valent vertex with a wide edge, as in Figure \ref{WideEdge}.
\end{remark}
\begin{figure}
\caption{Wide edge vs singularization notation}
\label{WideEdge}
\end{figure}
In this section we will describe the chain complex $C_{1\pm1}(S)$ for a complete resolution $S$. In the following section, we will give a local theory for the complex, which will allow us to construct the edge differential $d_{1}$.
\subsection{Plat and Singular Closures of Braids} \label{closuresection}
Let $b\in Br_{2n}$ be a braid with $2n$ strands. The \emph{plat closure} of $b$, denoted by $\mathsf{p}(b)$, is defined to be the link obtained by joining the pairs of strings consecutively at the top and bottom. More precisely, given tangles
\[T_n^-=\cup\ \cup\ ...\ \cup\ \ \ \ \ \ \text{and}\ \ \ \ \ \ T_n^+=\cap\ \cap\ ...\ \cap\]
consisting of $n$ cups and $n$ caps, respectively, we set $\mathsf{p}(b)=T_n^-bT_n^+$, as in Figure \ref{PlatClosure}. We say that $D$ is a plat diagram if $D=\mathsf{p}(b)$ for some braid $b$. For the plat closure we have the following well-known analogue of Alexander's theorem \cite{Alex:BraidClosure}.
\begin{figure}
\caption{A braid $b$ and its plat closure $\mathsf{p}(b)$}
\label{PlatClosure}
\end{figure}
\begin{theorem}
For every link $L$, there is some braid $b$ with an even number of strands such that $L=\mathsf{p}(b)$.
\end{theorem}
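For example, the plat closure of the identity braid in $Br_{2n}$ is an $n$-component unlink, and the trefoil of Figure \ref{FirstExample} is the plat closure of a braid in $Br_{4}$.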
The construction of $C_{1 \pm 1}(D)$ will take as input a plat diagram $D$. However, it will often be useful to replace the plat closure $\mathsf{p}(b)$ with a diagram $\mathsf{x}(b)$ which we will call the \emph{singular closure}. The singular closure of a braid $b$ is obtained by adding a singularization at the top of the braid between strands $2i-1$ and $2i$ for $i=1,...,n$, then taking the standard braid closure, as in Figure \ref{SingularClosure}.
\begin{figure}
\caption{A braid $b$ and its singular closure $\mathsf{x}(b)$}
\label{SingularClosure}
\end{figure}
The reason we are interested in the singular braid is that in $\mathit{HFK}_{2}$, singularizations are closely related to the unoriented smoothing. Replacing the singularizations in $\mathsf{x}(b)$ with unoriented smoothings gives the plat closure $\mathsf{p}(b)$. However, the orientations are better behaved on $\mathsf{x}(b)$, and it gives a pairing between the cups and caps in $\mathsf{p}(b)$.
\subsection{Singular braids, the edge ring, and cycles} \label{sub3.2}
\begin{definition}
A \emph{singular braid} is a properly embedded $4$-valent graph in $\mathbb{R}\times [0,1]$ obtained by replacing all crossings of a braid diagram with singular points.
\end{definition}
Let $S$ be a singular braid with $n$ strands. Then $S$ is an oriented graph with $n$ incoming edges, $n$ outgoing edges, and interior vertices each having two incoming and two outgoing edges. To each edge $e_i$ of $S$, we assign a variable $U_i$, and define the \emph{edge ring} of $S$ to be
\[R(S)=\mathbb{Q}[U_1,...,U_m].\]
Here, $m$ is the number of edges of $S$.
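For instance, the elementary singular braid on two strands with a single $4$-valent vertex has four edges, so its edge ring is $\mathbb{Q}[U_1,U_2,U_3,U_4]$; more generally, a singular braid with $n$ strands and $s$ singular points has $m=n+2s$ edges, since each singular point subdivides two strand arcs.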
Let $S$ be a singular braid with $2n$ strands. We define the \emph{plat closure} of $S$, denoted by $\mathsf{p}(S)$, to be the oriented graph obtained by attaching $n$ oriented singular caps (Figure~\ref{S-caps}) and $n$ oriented singular cups (Figure~\ref{S-cups}) to the top and bottom of $S$, respectively.
\begin{figure}
\caption{Oriented singular caps}
\label{S-caps}
\end{figure}
\begin{figure}
\caption{Oriented singular cups}
\label{S-cups}
\end{figure}
The result is a singular oriented graph with some $4$-valent interior vertices, $n$ $2$-valent bottom vertices and $n$ $2$-valent top vertices. Bottom vertices have two outgoing edges and no incoming edges, while top vertices have two incoming edges and no outgoing edges.
\begin{definition}
A \emph{cycle} $Z$ in $\mathsf{p}(S)$ is a subset of the edges whose union is $n$ pairwise disjoint paths connecting bottom vertices to top vertices, with $\partial Z=w_1^++...+w_n^+-w_1^--...-w_n^-$. Here, $w_1^+,...,w_n^+$ denote the top vertices and $w_1^-,...,w_n^-$ denote the bottom vertices.
\end{definition}
\begin{remark} Note that the graph $\mathsf{p}(S)$ can be viewed as the singular closure of $S$ pulled apart at the $n$ singular points involved in the closure. If we were to reidentify each $w_{i}^{+}$ with $w_{i}^{-}$, then we can view cycles as subsets of $\mathsf{x}(S)$ which are homeomorphic to $n$ disjoint circles.
\end{remark}
For any $4$-valent vertex $v$ of $S$, assume $e_{i(v)}$ and $e_{j(v)}$ are the left and right incoming edges and $e_{k(v)}$ and $e_{l(v)}$ are the left and right outgoing edges. Associated to $v$ we define a quadratic polynomial:
\[
Q_v=U_{i(v)}U_{j(v)}-U_{k(v)}U_{l(v)}
\]
in $R(S)$. The edges in $\mathsf{p}(S)$ are in one-to-one correspondence with the edges in $S$. Considering this correspondence, any cycle $Z$ specifies two ideals
\[
\begin{split}
&I_Z=\langle U_i\ |\ \text{for any}\ e_i\subset Z\rangle\\
&Q_Z=\langle Q_v\ |\ v\ \text{is any $4$-valent vertex disjoint from $Z$} \rangle
\end{split}
\]
in $R(S)$ and we define the $\mathbb{Q}$-module
\[R_Z=\frac{\mathbb{Q}[U_1,...,U_m]}{Q_Z+I_Z}.\]
Finally, associated to $S$ we define the $\mathbb{Q}$-module
\[M(S)=\bigoplus _{Z\in c(S)}R_{Z}\]
where $c(S)$ denotes the set of cycles in $\mathsf{p}(S)$.
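For example, let $S$ be the identity braid on two strands, viewed as a singular braid with no singular points, so that $\mathsf{p}(S)$ is a single circle with a left edge $e_1$ and a right edge $e_2$. There are exactly two cycles, $Z_1=\{e_1\}$ and $Z_2=\{e_2\}$; since there are no $4$-valent vertices, $Q_{Z_i}$ is the zero ideal, while $I_{Z_1}=(U_1)$ and $I_{Z_2}=(U_2)$, so
\[ M(S)=R_{Z_1}\oplus R_{Z_2}\cong \mathbb{Q}[U_2]\oplus\mathbb{Q}[U_1]. \]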
For any pair $(Z,e_i)$ of a cycle $Z$ in $\mathsf{p}(S)$ and an edge $e_i$ in $Z$, let $\mathcal{D}(Z,e_i)$ denote the set of disks in the plane satisfying the following conditions:
\begin{enumerate}
\item Each $D\in \mathcal{D}(Z,e_i)$ is a bounded subset of $\mathbb{R}^2$ homeomorphic to a disk so that $\partial D\subset S$.
\item There are vertices $v_{t}(D)$ and $v_{b}(D)$ on the boundary of $D$ so that they decompose the boundary as $\partial D=\partial_L D\cup_{\{v_t(D),v_b(D)\}}\partial_RD$ where both $\partial_L D$ and $\partial_RD$ are oriented from $v_b(D)$ to $v_t(D)$.
\item $\partial_LD$ is a subset of $Z$ containing $e_i$.
\end{enumerate}
The intersection of any two disks in $\mathcal{D}(Z,e_i)$ is a disk in $\mathcal{D}(Z,e_i)$. So, let $D(Z,e_i)$ denote the \emph{smallest} disk in this set, i.e.\ the intersection of all of them. See Figure \ref{Discs} for an example. We allow the special case that $\mathcal{D}(Z,e_i)$ is the empty set - in this case, set $D(Z,e_i)$ to be the empty disc.
Similarly, for any vertex $v$ in $Z$, we define $\mathcal{D}(Z,v)$ to be the set of all disks satisfying the above conditions, where we replace $e_i$ with $v$ in $(3)$ and require $v_t(D)$ and $v_b(D)$ to be distinct from $v$ in $(2)$. Also, denote the intersection of all such disks by $D(Z,v)$.
For any edge $e$ in $S$, let $v_t(e)$ and $v_b(e)$ denote the top and bottom vertices of $e$. Let $D$ be a disk in $\mathcal{D}(Z,e_i)$ (or $\mathcal{D}(Z,v)$). We define $I(D)$ to be the set of edges $e$ in $S$ that intersect $D$ in $v_t(e)$, such that $v_t(e)$ lies on the segment of $Z$ from $v_b(D)$ to $v_{b}(e_i)$ (respectively, $v$), excluding $v_b(D)$. In other words, $I(D)$ consists of incoming edges on the left boundary of $D$ from $v_b(D)$ to $v_{b}(e_i)$. Similarly, $O(D)$ is the set of edges pointing to the left between $v_t(e_i)$ and $v_{t}(D)$. More precisely, each edge $e$ in $O(D)$ intersects $D$ in $v_b(e)$ and this vertex lies on the segment of $Z$ from $v_t(e_i)$ (respectively, $v$) to $v_{t}(D)$, excluding $v_{t}(D)$.
\begin{figure}
\caption{The grey region is the disc $D(Z,e_{6})$}
\label{Discs}
\end{figure}
For any disk $D$ in $\mathcal{D}(Z,\star)$, where $\star\subset Z$ is an edge or vertex, we define a monomial in $R(S)$, called its \emph{coefficient}, by setting
\[U(D)=\prod_{e_i\subset \left(I(D)\cup O(D)\right)}U_i \in R(S).\]
We define an $R(S)$-module structure on $M(S)$ by setting
\[
U_i x_{Z}=\begin{cases}
\begin{array}{ll}
U_ix_Z&\text{if}\ e_i\not\subset Z\\
U\left(D(Z,e_i)\right)x_{U_i(Z)}&\text{if}\ e_i\subset Z\ \text{and $\partial_R(D(Z,e_i))$ only intersects Z} \\
&\text{at its endpoints}\\
0&\text{otherwise}
\end{array}
\end{cases}
\]
Here, $x_Z$ denotes $1$ in the summand $R_Z$ of $M(S)$, and $U_i(Z)$ denotes the cycle obtained from $Z$ by replacing $\partial_LD(Z,e_i)$ with $\partial_RD(Z,e_i)$ (see Figure \ref{CycleExample}). Note that when $\mathcal{D}(Z,e_i)=\emptyset$, $U_{i}x_{Z}=0$.
For $e_i\subset Z$, we refer to the equality $U_ix_Z=U\left(D(Z,e_i)\right)x_{U_i(Z)}$ as \emph{$U_i$ maps the cycle $Z$ to $U_i(Z)$ with coefficient $U(D(Z,e_i))$}.
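As a simple illustration, let $S$ be the identity braid on two strands as above, with cycles $Z_1=\{e_1\}$ and $Z_2=\{e_2\}$. The disc $D(Z_1,e_1)$ is the region bounded by the two edges, with $v_b(D)=w_1^-$, $v_t(D)=w_1^+$ and $I(D)=O(D)=\emptyset$, so
\[ U_1x_{Z_1}=x_{Z_2},\qquad U_2x_{Z_2}=0\ \ (\text{since } \mathcal{D}(Z_2,e_2)=\emptyset),\qquad U_2x_{Z_1}=U_2\cdot 1\in R_{Z_1}. \]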
\begin{figure}
\caption{The cycle $U_{6}(Z)$}
\label{CycleExample}
\end{figure}
Similarly, for any vertex $v$ in $Z$, if replacing $\partial_LD(Z,v)$ by $\partial_R D(Z,v)$ in $Z$ gives a cycle, denote it by $v(Z)$, and define
\[f_v:M(S)\longrightarrow M(S)\]
by setting:
\[
f_v(x_Z)=
\begin{cases}
\begin{array}{ll}
U_{i(v)}U_{j(v)}x_Z=U_{k(v)}U_{l(v)}x_Z&\text{if}\ v\not\subset Z\\
U(D(Z,v))x_{v(Z)}&\text{if}\ v\subset Z\ \text{and $\partial_R(D(Z,v))$ only intersects Z} \\
&\text{at its endpoints}\\
0&\text{otherwise}
\end{array}
\end{cases}
\]
In the next subsection, we will show that the $R(S)$-module structure on $M(S)$ is well-defined, and that $f_v=U_{i(v)}U_{j(v)}=U_{k(v)}U_{l(v)}$ as operators on $M(S)$ for any $4$-valent vertex $v$ of $S$.
\subsection{Well-definedness of Module Structure} \label{welldef}
Let $v$ be a 4-valent vertex in $\mathsf{p}(S)$ with incoming edges $e_{1}$ and $e_{2}$ and outgoing edges $e_{3}$ and $e_{4}$. When a cycle $Z$ is locally empty at $v$, the associated summand $R_{Z}$ comes with the relation $U_{1}U_{2}=U_{3}U_{4}$ coming from $Q_{v}$. However, it is possible that multiplication by some $U_{i}$ will map this cycle so that it is locally $e_{1}e_{3}$, $e_{1}e_{4}$, $e_{2}e_{3}$, or $e_{2}e_{4}$. Thus, we need to show that $U_{1}U_{2}=U_{3}U_{4}$ on these local cycles.
Consider the local cycle $e_{1}e_{3}$. As shown in Figures \ref{U1U2} and \ref{fv}, multiplication by $U_{1}$ followed by multiplication by $U_{2}$ maps to the same cycle as applying $f_{v}$. Moreover it does so with the same coefficient, as the right boundary of the yellow disc will not have any outgoing edges.
\begin{figure}
\caption{Multiplication by $U_{1}$ followed by $U_{2}$}
\label{U1U2}
\end{figure}
\begin{figure}
\caption{Applying $f_{v}$}
\label{fv}
\end{figure}
The same argument applies to multiplication by $U_{3}$ followed by multiplication by $U_{4}$. Thus, we have that on the local cycle $e_{1}e_{3}$, $U_{1}U_{2}=U_{3}U_{4}=f_{v}$.
\begin{figure}
\caption{Multiplication by $U_{3}$ followed by $U_{4}$}
\label{U3U4}
\end{figure}
The other 3 local cycles behave in much the same way, with $U_{1}U_{2}=U_{3}U_{4}=f_{v}$. The only difference in the proof is that there is a local outgoing edge that appears as a coefficient in $f_{v}$ - fortunately, this edge also appears as a coefficient in both $U_{1}U_{2}$ and $U_{3}U_{4}$.
\begin{remark}
A consequence of this argument is that for any 4-valent $v$ in $S$, $Q_{v}M(S)=0$.
\end{remark}
The second thing that we need to show is that the edge actions are actually commutative. More precisely, we need to show that for $x \in R_{Z}$, $U_{i}U_{j}x=U_{j}U_{i}x$ for any pair of edges $e_{i}$, $e_{j}$. There are 3 cases to consider:
1) Both $e_{i}$ and $e_{j}$ are in $Z^{c}$.
2) One edge is in $Z$ (WLOG assume this is $e_{i}$) and the other edge ($e_{j}$) is in $Z^{c}$.
3) Both $e_{i}$ and $e_{j}$ are in $Z$.
The first two cases follow directly from the definitions. However, the third case is fairly subtle, as there can be interactions between the two multiplications. Before proving this case, we will need a lemma inspired by Khovanov-Rozansky (\hspace{1sp}\cite{khovanov2008matrix}, \cite{KhovanovRozansky08:MatrixFactorizations}).
\begin{lemma} \label{ProductLemma}
Let $Z$ be a cycle in $S$, and let $D$ be an embedded disc in $\mathbb{R} \times [0,1]$ whose boundary is transverse to $S$. Suppose also that $Z \cap D = \emptyset$ and that $D$ doesn't contain any caps or cups from the closing off portions of $S$. Then
\[ \prod_{ e_{i} \in In(D)}U_{i} = \prod_{e_{j} \in Out(D)}U_{j} \]
\noindent
in $R_{Z}$, where $In(D)$ is the set of incoming edges in $D \cap S$ and $Out(D)$ is the set of outgoing edges in $D \cap S$.
\end{lemma}
\begin{proof}
To see this, we just start with the product of the incoming edges. Let $v_{1}$,...,$v_{k}$ be the vertices in $S$ which are in the interior of $D$, ordered by their height (their $[0,1]$-coordinate) in $\mathbb{R} \times I$. If we apply the quadratic relations $Q(v_{i})$ sequentially, the result is the product of the outgoing edges.
\end{proof}
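In the simplest case, $D$ contains a single $4$-valent vertex $v$, with incoming edges $e_{1}, e_{2}$ and outgoing edges $e_{3}, e_{4}$; the lemma then asserts that $U_{1}U_{2}=U_{3}U_{4}$ in $R_{Z}$, which is exactly the relation $Q_{v}=0$ (note that $Q_{v}\in Q_{Z}$, since $v$ is disjoint from $Z$).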
For case 3, there are several subcases to consider:
a) $e_{i}$ and $e_{j}$ lie on the same strand in $Z$
b) $e_{i}$ and $e_{j}$ lie on adjacent strands in $Z$
c) $e_{i}$ and $e_{j}$ lie on different strands in $Z$ which are not adjacent.
For (a), assume $e_{i}$ is above $e_{j}$. If $\Int(D(Z, e_{i})) \cap \Int(D(Z, e_{j})) = \emptyset$, then again we can see right away that $U_{i}U_{j}x_{Z}=U_{j}U_{i}x_{Z}$. However, if this is not the case, then the two multiplications interact. There are
4 possible ways that they can interact, based on whether $e_{j}$ is in $\partial D(Z,e_{i})$, and whether $e_{i}$ is in $\partial D(Z,e_{j})$. One of these cases is shown in Figures \ref{Commute1a} and \ref{Commute1b}. In Figure \ref{Commute1a}, we see that $U_{i}U_{j}x_{Z}$ and $U_{j}U_{i}x_{Z}$ end up on the same cycle. To see that the coefficients are the same, we apply Lemma \ref{ProductLemma} to the shaded discs in Figure \ref{Commute1b}. The cases where $e_{j} \in \partial D(Z,e_{i})$ or $e_{i} \in \partial D(Z,e_{j})$ are similar, so we leave them to the reader.
\begin{figure}
\caption{Multiplication by $U_{i}U_{j}$ and $U_{j}U_{i}$}
\label{Commute1a}
\end{figure}
\begin{figure}
\caption{The product of the edges with yellow dots can be identified with the product of the edges with purple dots by applying Lemma \ref{ProductLemma}
\label{Commute1b}
\end{figure}
For (b), assume that $e_{i}$ lies on the strand in $Z$ to the left of the strand $e_{j}$ lies on. If $D(Z, e_{i}) \cap D(Z, e_{j}) = \emptyset$, then $U_{i}$ and $U_{j}$ clearly commute. However, if they intersect at all (even on the boundary this time) then we get interactions. In particular, $U_{i}x_{Z} = 0$, so $U_{j}U_{i}x_{Z} = 0$. Thus, to show commutativity, we need to show that $U_{i}U_{j}x_{Z}=0$.
Suppose $v$ is the 4-valent vertex in $D(Z, e_{i}) \cap D(Z, e_{j})$ which is closest to the top of the braid. If $e_{a}$ is the left outgoing edge of $v$ and $e_{b}$ is the right outgoing edge of $v$, then $e_{b}$ is in $Z$, so $U_{a}$ is a coefficient of $U_{j}x_{Z}$. But $e_{a}$ lies in $U_{i}(U_{j}(Z))$, so $U_{i}U_{j}x_{Z}$ gets mapped farther to the right. Since $e_{b}$ lies in $U_{a}(U_{i}(U_{j}(Z)))$, we can travel along this strand in $Z$ to the point where $U_{a}(U_{i}(U_{j}(Z)))$ diverges from $Z$, and repeat this argument. After doing this some number of times, the cycle will hit the top vertex in $D(Z, e_{j})$, making $U_{i}U_{j}x_{Z}$ equal to zero.
For (c), there is no possible way for the two multiplications to interact, so they commute.
\subsection{Definition of the Complex} \label{defcomplex}
The goal of this section is to introduce a chain complex for any singular braid $S$. Let $R=R(S)$, the edge ring of $S$. To any $4$-valent vertex $v$ we associate the linear elements
\[L_v=U_{i(v)}+U_{j(v)}-U_{k(v)}-U_{l(v)}\in R.\]
\[L'_v=U_{i(v)}+U_{j(v)}+U_{k(v)}+U_{l(v)}\in R.\]
Recall that the indices $i(v), j(v), k(v)$ and $l(v)$ are defined so that $e_{i(v)}$ and $e_{j(v)}$ are the left and right incoming edges, respectively, while $e_{k(v)}$ and $e_{l(v)}$ are the left and right outgoing edges, respectively. Let $L \subset R$ be the ideal generated by the $L_v$ corresponding to all $4$-valent vertices. We define the $R$-module $\mathscr{M}(S)$ as
\[\mathscr{M}(S):=M(S)\otimes_{R} R/L=M(S)/LM(S).\]
For any $1\le i\le n$, associated to the bottom and top vertices $w_i^-$ and $w_i^+$ in $\mathsf{p}(S)$, we define the linear elements
\[
\begin{array}{ll}
L_{w_{i}^{\pm}}=U_{i(w_i^+)}+U_{j(w_i^+)}-U_{k(w_i^-)}-U_{l(w_{i}^-)}, &\text{and}\\
L'_{w_{i}^{\pm}}=U_{i(w_i^+)}+U_{j(w_i^+)}+U_{k(w_i^-)}+U_{l(w_{i}^-)}&
\end{array}
\]
in $R$. Let $\mathsf{K}(S)$ denote the \emph{closing off} Koszul complex
\[ \mathsf{K}(S)=
\big{(}
\xymatrix{R\ar@<1ex>[r]^{L_{w_1^\pm}}&R\ar@<1ex>[l]^{L'_{w_1^{\pm}}}}\big{)}\otimes\big{(}
\xymatrix{R\ar@<1ex>[r]^{L_{w_2^\pm}}&R\ar@<1ex>[l]^{L'_{w_2^{\pm}}}}\big{)}\otimes...\otimes\big{(}
\xymatrix{R\ar@<1ex>[r]^{L_{w_n^\pm}}&R\ar@<1ex>[l]^{L'_{w_n^{\pm}}}}\big{)}\]
\begin{definition}
We define $C_{1\pm 1}(S)$ to be the tensor product $\mathscr{M}(S)\otimes \mathsf{K}(S)$.
\end{definition}
\begin{lemma}
$C_{1+1}(S)$ is a chain complex i.e. $d^2=0$.
\end{lemma}
\begin{proof}
For each $i$, the matrix factorization $\xymatrix{R\ar@<1ex>[r]^{ L_{w_i^\pm}}&R\ar@<1ex>[l]^{L'_{w_i^{\pm}}}}$ has potential $L_{w_i^\pm}L'_{w_i^\pm}$. Thus, to prove $d^2=0$, we need to show that \[(\sum_{i=1}^nL_{w_i^\pm}L'_{w_i^\pm})\mathscr{M}(S)=0.\]
For any $1\le i\le n$
\[\begin{split}
L_{w_i^\pm}L'_{w_i^\pm}&= (U_{i(w_i^+)}+U_{j(w_i^+)})^2-(U_{k(w_i^-)}+U_{l(w_i^-)})^2\\
&=U_{i(w_i^+)}^2+U_{j(w_i^+)}^2-U_{k(w_i^-)}^2-U_{l(w_i^-)}^2\\
&+2U_{i(w_i^+)}U_{j(w_i^+)}-2U_{k(w_i^-)}U_{l(w_i^-)}.
\end{split}
\]
It follows from the module structure on $\mathscr{M}(S)$ that \[U_{i(w_i^+)}U_{j(w_i^+)}x_Z=0\ \ \ \ \text{and}\ \ \ \ U_{k(w_i^-)}U_{l(w_i^-)}x_Z=0\]
for any cycle $Z$ and any $1\le i\le n$. Thus, it suffices to show that
\[\left(\sum_{i=1}^n(U_{i(w_i^+)}^2+U_{j(w_i^+)}^2-U_{k(w_i^-)}^2-U_{l(w_i^-)}^2)\right)\mathscr{M}(S)=0.\]
On the other hand,
\[\sum_{i=1}^n(U_{i(w_i^+)}^2+U_{j(w_i^+)}^2-U_{k(w_i^-)}^2-U_{l(w_i^-)}^2)=-\sum_{v}(U_{i(v)}^2+U_{j(v)}^2-U_{k(v)}^2-U_{l(v)}^2)
\]
where the second sum is over all $4$-valent vertices in $S$. For any 4-valent vertex $v$ in $S$, we have that $L_{v}=Q_{v}=0$ on $\mathscr{M}(S)$, so $U_{i(v)}^2+U_{j(v)}^2=(U_{i(v)}+U_{j(v)})^2-2U_{i(v)}U_{j(v)}=(U_{k(v)}+U_{l(v)})^2-2U_{k(v)}U_{l(v)}=U_{k(v)}^2+U_{l(v)}^2$ on $\mathscr{M}(S)$. It follows that $U_{i(v)}^2+U_{j(v)}^2-U_{k(v)}^2-U_{l(v)}^2=0$ for any $v$, so the sum is zero as well.
\end{proof}
\subsection{Some Properties of the Edge Ring Action}
In this section we will give some relations among the $U_{i}$-actions on $C_{1 \pm 1}(S)$. Note that for a complete resolution $S$, $H_{1+1}(S) \cong H_{1-1}(S)$, as there are no edge maps. We will write the results in terms of $H_{1+1}(S)$, but they also apply to $H_{1-1}(S)$.
\begin{lemma} \label{prop1}
Suppose $e_{i}$ and $e_{j}$ are the two incoming or the two outgoing edges at a vertex $v$ of $\mathsf{p}(S)$. Then $U_{i}=-U_{j}$ on $H_{1+1}(S)$.
\end{lemma}
\begin{proof}
Suppose first that $v=w_{i}^{+}$ for some $i$, so that the two incoming edges are $e_{i(w_{i}^{+})},e_{j(w_{i}^{+})} $. The complex $\mathsf{K}(S)$ includes the factor
\[\xymatrix{R\ar@<1ex>[r]^{L_{w_i^\pm}}&R\ar@<1ex>[l]^{L'_{w_i^{\pm}}}} \]
\noindent
We can define a homotopy $H$ to act on this Koszul factor by
\[\xymatrix{R\ar@<1ex>[r]^{1}&R\ar@<1ex>[l]^{1}} \]
\noindent
and by the identity on the other factors. Then $dH+Hd = 2U_{i(w_{i}^{+})}+2U_{j(w_{i}^{+})}$. Since we are working over $\mathbb{Q}$, this shows that $U_{i(w_{i}^{+})}=-U_{j(w_{i}^{+})}$ on homology.
Similarly, for $w_{i}^{-}$, we can use the homotopy $H'$ which acts on this Koszul factor by
\[\xymatrix{R\ar@<1ex>[r]^{1}&R\ar@<1ex>[l]^{-1}} \]
\noindent
and by the identity on other factors. Then $dH'+H'd = 2U_{k(w_{i}^{-})}+2U_{l(w_{i}^{-})}$, so $U_{k(w_{i}^{-})}=-U_{l(w_{i}^{-})}$ on homology.
For the 4-valent vertices, it helps to modify the complex to a quasi-isomorphic one. We can write $C_{1\pm 1}(S)$ as
\[ M(S) \otimes \mathsf{K}(S) \otimes R/L \]
\noindent
Since the $L_{v}$ form a regular sequence, $R/L$ can be replaced with the Koszul complex
\[ \mathcal{L} = \bigotimes_{v}
\big{(}
\xymatrix{R\ar@<1ex>[r]^{L_{v}}&R\ar@<1ex>[l]^{L'_{v}}}\big{)}\]
Then $C_{1\pm 1}(S)$ is quasi-isomorphic to $M(S) \otimes \mathsf{K}(S) \otimes \mathcal{L}$. Using the same homotopies $H$ and $H'$ on the factor $\xymatrix{R\ar@<1ex>[r]^{L_{v}}&R\ar@<1ex>[l]^{L'_{v}}}$ gives the desired result.
\end{proof}
\begin{lemma} \label{prop2}
For any $i$, $U_{i}^{2}=0$ on $H_{1+1}(S)$.
\end{lemma}
\begin{proof}
For any two edges $e_{i}$ and $e_{j}$ on the same component of $\mathsf{sm}(S)$, this follows from Lemma \ref{prop1}. Since $S$ is connected, the quadratic relations $Q_{v}$ force $U_{i}^{2}=U_{j}^{2}$ for any $U_{i}$, $U_{j}$. Together with the observation that on homology $U^{2}_{i(w^{+}_{1})}=-U_{i(w^{+}_{1})}U_{j(w^{+}_{1})}=0$, this proves the lemma.
\end{proof}
Suppose $\mathsf{sm}(\mathsf{p}(S))$ consists of $k$ circles. Let $X_{i}=(-1)^{l}U_{j}$, where $e_{j}$ is an edge on the $i$th circle in $\mathsf{sm}(\mathsf{p}(S))$ and lies on the $l$th strand of $S$.
\begin{corollary} \label{modulelemma1}
The homology $H_{1+1}(S)$ is a finitely generated module over
\[\mathcal{A}^{\otimes k} = \mathbb{Q}[X_{1},...,X_{k}]/(X_{1}^{2}=...=X_{k}^{2}=0)\]
\end{corollary}
\section{A Local Bimodule Representation and the Total Complex}
\subsection{A Bimodule for Tangles}
In this section we will define an algebra $A_n$ for any positive integer $n$ and a bimodule $\mathsf{M}[S]$ over $A_n$ for any singular braid $S$ with $2n$ strands such that
\[\mathsf{M}[S_1]\otimes_{A_n}\mathsf{M}[S_2]=\mathsf{M}[S_1\circ S_2]\]
for all $2n$-strand singular braids $S_1$ and $S_2$. In addition, we will define a right $A_n$-module $\mathsf{M}[S_{cup}]$ for the singular cups and a left $A_n$-module $\mathsf{M}[S_{cap}]$ for the singular caps such that
\[\mathsf{M}[S_{cup}]\otimes_{A_n}\mathsf{M}[S]\otimes_{A_n}\mathsf{M}[S_{cap}]=\mathscr{M}(S)\]
for every singular braid $S$ with $2n$ strands.
\subsubsection{The algebra $A_n$} Let $[2n]$ denote the set $\{1,2,...,2n\}$. The algebra $A_{n}$ is generated over $\mathbb{Q}[u_1,...,u_{2n}]$ by monotone bijections $P: S_{1} \to S_{2}$, where $S_{1}$ and $S_{2}$ are $n$-element subsets of $[2n]$. By \emph{monotone} we mean that if $i<j$, then $P(i)<P(j)$. These generators can be viewed pictorially as a strands algebra with $n$ strands out of $2n$ slots. The monotonicity imposes the restriction that there are no crossings between the strands.
Given $P_{1}: S_{1} \to S_{2}$ and $P_{2}: S_{3} \to S_{4}$, we define the product $P_{1}P_{2}=\prod_{i=1}^{2n}u_i^{\alpha_i}P$ where $P$ is the concatenation of the two strands diagrams when $S_{2}=S_{3}$, and $0$ otherwise, and
\[\alpha_i=\#\left\{j\ |\ \max\{j,P_2(P_1(j))\}\le i<P_1(j)\ \text{or}\ \min\{j,P_2(P_1(j))\}\ge i>P_1(j)\right\}.\]
See Figure \ref{Strands} for an $n=2$ example. These strands diagrams are drawn vertically, with left-to-right multiplication corresponding to bottom-to-top concatenation.
The idempotents are given by $id: S \to S$, which are denoted $\iota_{S}$. Note that $\sum_{S} \iota_{S} = 1$.
\begin{figure}
\caption{$P_{1}$}
\caption{$P_{2}$}
\caption{$P_{3}$}
\caption{$P_{1}P_{2}$}
\label{Strands}
\end{figure}
The algebra has a simplified set of generators consisting of the idempotents together with a set of elementary strand moves. Given a subset $S$ of $[2n]$ with $n$ elements such that $i \in S$ and $i+1 \notin S$, define $r_{i}(S)=S \cup \{i+1\} - \{i\}$. Let $R_{i}^{S}: S \to r_{i}(S)$ denote the bijection
\[
R_{i}^{S}(j)=\begin{cases}
\begin{array}{ll}
i+1&\text{ if } j = i \\
j &\text{ if } j \ne i
\end{array}
\end{cases}
\]
\noindent
For example, in Figure \ref{Strands}, $P_{2}=R_{1}^{\{1,3\}}$. Note that $R_{i}^{S}=\iota_{S}R_{i}^{S}\iota_{r_{i}(S)}$. Let $R_{i}^{S}=0$ when $r_{i}(S)$ is not well-defined (i.e. $i\notin S$ or $i+1 \in S$). The element $R_{i}$ is defined to be the sum over all the $R_{i}^{S}$:
\[ R_{i} = \sum_{S} R_{i}^{S} \]
There is a similar definition for the elementary leftward-moving elements. Given an idempotent $\iota_{S}$ with $i+1 \in S$ and $i \notin S$, define $l_{i}(S)=S \cup \{i\}-\{i+1\}$. Let $L_{i}^{S}: S \to l_{i}(S)$ denote the bijection
\[
L_{i}^{S}(j)=\begin{cases}
\begin{array}{ll}
i&\text{ if } j = i+1 \\
j &\text{ if } j \ne i+1
\end{array}
\end{cases}
\]
\noindent
In Figure \ref{Strands}, $P_{1}=L_{3}^{\{1,4\}}$. In terms of idempotents, $L_{i}^{S}=\iota_{S}L_{i}^{S}\iota_{l_{i}(S)}$. Let $L_{i}^{S}=0$ when $l_{i}(S)$ is not well-defined. Then we define
\[ L_{i} = \sum_{S} L_{i}^{S} \]
The strands algebra is generated by the idempotents together with $R_{i}$ and $L_{i}$, for $i=1,...,2n-1$. These elements satisfy five relations, some of which are already described, but we will list them all here for clarity.
\begin{itemize}
\item[R1)] The $u_{i}$ commute with each $R_i$, $L_i$ and all idempotents.
\item[R2)] For every $S$, $\iota_S\iota_S=\iota_S$.
\item[R3)] For every $S\neq S'$, $\iota_S\iota_{S'}=0$.
\item[R4)] For every $S$, if $r_{i}(S)$ (resp. $l_{i}(S)$) is not well-defined, then $\iota_{S}R_{i}=0$ (resp. $\iota_{S}L_{i}=0$). Otherwise, $\iota_SR_i=R_i\iota_{r_i(S)}=\iota_SR_i\iota_{r_i(S)}$ and $\iota_SL_i=L_i\iota_{l_i(S)}=\iota_SL_i\iota_{l_i(S)}$.
\item[R5)] For $i=1,...,2n-1$, $R_{i}L_{i}=\iota_{i}u_{i}$ and $L_{i}R_{i} = \iota_{i+1}u_{i}$, where \[ \iota_{i} = \sum_{S \text{ containing } i} \iota_{S} \]
\end{itemize}
\noindent
Relations R2-R5 are illustrated in Figure \ref{algrelations}.
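For instance, relation R5 is consistent with the product formula above: if $i\in S$ and $i+1\notin S$, then in the product $R_{i}^{S}L_{i}^{r_{i}(S)}$ the only strand contributing to the exponents is $j=i$, for which $\max\{i,P_{2}(P_{1}(i))\}=i\le i<i+1=P_{1}(i)$, so $R_{i}^{S}L_{i}^{r_{i}(S)}=u_{i}\,\iota_{S}$.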
\begin{figure}
\caption{An example of R2}
\caption{An example of R3}
\caption{An example of R4}
\caption{An example of R5}
\caption{Some pictorial examples of the algebra relations}
\label{algrelations}
\end{figure}
For each pair $j>i$, let \[\rho_{i,j}=R_iR_{i+1}...R_{j-1}\ \ \ \text{and}\ \ \ \delta_{j,i}=L_{j-1}L_{j-2}...L_{i}.\]
Thus, $R_{i}=\rho_{i,i+1}$ and $L_{i}=\delta_{i+1,i}$.
\subsubsection{The bimodule for a singular braid.} Let $S$ be a singular braid with $2n$ strands and $m$ edges. Corresponding to the top and bottom boundaries of $S$ we have the algebra $A_n$ defined in the previous subsection. We will define a $\mathbb{Q}[U_1,...,U_m]$-module $\mathsf{M}[S]$ with a left and right action by $A_n$ such that the left action corresponds to the bottom boundary and the right action corresponds to the top boundary.
For a singular braid $S$, a \emph{cycle} $Z$ in $S$ is a set of $n$ pairwise disjoint paths oriented from the bottom boundary to the top boundary. Let $b(Z), t(Z)\subset [2n]$ denote the set of vertices occupied at the bottom and top boundary, respectively. In addition, let $e_{b(i)}$ and $e_{t(i)}$ be the outgoing edge from $\{i\}\times\{0\}$ and incoming edge to $\{i\}\times\{1\}$, respectively.
Similar to Section \ref{sub3.2}, for any cycle $Z$ and edge $e_i\subset Z$ we define $\mathcal{D}(Z,e_i)$ to be the set of disks $D$ in $\mathbb{R}\times [0,1]$ such that $\partial D\subset S\cup(\mathbb{R}\times\{0\})\cup(\mathbb{R}\times\{1\})$ decomposes as $\partial D=\partial_LD\cup \partial_RD\cup\partial_TD\cup \partial_BD$ satisfying the following conditions:
\begin{enumerate}
\item $\partial_BD$ (resp. $\partial_TD$) is either a $4$-valent vertex of $S$ or a connected line segment in $\mathbb{R}\times\{0\}$ (resp. $\mathbb{R}\times\{1\}$).
\item $\partial_LD$ (resp. $\partial_RD$) is an oriented path connecting the left (resp. right) endpoint of $\partial_BD$ to the left (resp. right) endpoint of $\partial_TD$. Note that if $\partial_BD$ or $\partial_TD$ is a vertex then both its left and right endpoints are equal to the vertex.
\item $D\cap Z=\partial_LD$ and $\partial_LD$ contains $e_i$.
\end{enumerate}
As before, let $D(Z,e_i)$ denote the smallest disk, i.e.\ the intersection of all disks in $\mathcal{D}(Z,e_i)$. Also, $U_i(Z)$ denotes the cycle obtained from $Z$ by replacing $\partial_LD(Z,e_i)\subset Z$ with $\partial_RD(Z,e_i)$. As in Section \ref{sub3.2}, each disk $D(Z,e_i)$ has a coefficient $U(D(Z,e_i))$ which is a monomial in $\mathbb{Q}[U_1,...,U_m]$, where as before $U_i$ is the variable corresponding to the edge $e_i$.
For each cycle $Z$, the bimodule $\mathsf{M}[S]$ contains an element denoted by $x_Z$. We define $\mathsf{M}[S]$ to be the $\mathbb{Q}[U_1,...,U_m]$-module generated by elements of the form $ax_Za'$ where $a,a'\in A_n$ such that $a\iota_{b(Z)}=a$ and $\iota_{t(Z)}a'=a'$, modulo the following conditions.
\begin{itemize}
\item[M1)] $x_Z=\iota_{b(Z)}x_Z\iota_{t(Z)}$.
\item[M2)] For every $4$-valent vertex $v$ of $S$ and any cycle $Z$, $U_{i(v)}U_{j(v)}x_Z=U_{k(v)}U_{l(v)}x_Z$.
\item[M3)] For every $4$-valent vertex $v$ of $S$ and any cycle $Z$, $(U_{i(v)}+U_{j(v)})x_Z=(U_{k(v)}+U_{l(v)})x_Z$.
\item[M4)] $u_{i}x_Z=U_{b(i)}x_Z$ for every cycle $Z$ and $1\le i\le 2n$.
\item[M5)] $x_Zu_i=U_{t(i)}x_Z$ for every cycle $Z$ and $1\le i\le 2n$.
\item[M6)] Consider an edge $e_i\subset Z$. If for $D=D(Z,e_i)$, both $\partial_BD$ and $\partial_TD$ are $4$-valent vertices in $S$, then $U_ix_Z=U(D)x_{U_i(Z)}$, and if $D=\emptyset$ then $U_ix_Z=0$.
\item[M7)] For every $D=D(Z,e_i)$, if $\partial_TD$ is a vertex in $S$ and $\partial_BD=[j,k]\times \{0\}\subset\mathbb{R}\times\{0\}$, then
\[\delta_{k,j}x_Z=U(D)x_{U_i(Z)}\ \ \ \text{and}\ \ \ \rho_{j,k}(U(D)x_{U_i(Z)})=U_ix_Z.\]
\item[M8)] For every $D=D(Z,e_i)$, if $\partial_BD$ is a vertex and $\partial_TD=[j,k]\times\{1\}\subset\mathbb{R}\times\{1\}$, then
\[x_Z\rho_{j,k}=x_{U_i(Z)}\ \ \ \text{and}\ \ \ (U(D)x_{U_i(Z)})\delta_{k,j}=U_ix_Z,\]
\item[M9)] For every $D=D(Z,e_i)$ with $\partial_BD=[j_0,k_0]\subset \mathbb{R}\times\{0\}$ and $\partial_TD=[j_1,k_1]\subset\mathbb{R}\times\{1\}$
we have
\[\begin{split}
\delta_{k_0,j_0}x_{Z}=x_{U_i(Z)}\delta_{k_1,j_1},\ \ \ \ \ x_Z\rho_{j_1,k_1}=\rho_{j_0,k_0}x_{U_i(Z)}\\
\text{and}\ \ U_ix_Z=\rho_{j_0,k_0}(U(D)x_{U_i(Z)})\delta_{k_1,j_1}.
\end{split}
\]
\end{itemize}
\begin{example} Let $S_{id}$ be the identity braid on $2n$ strands as in Figure \ref{sid}. In this case, the cycles are in one-to-one correspondence with the idempotents in $A_n$ and $\mathsf{M}[S_{id}]\cong A_n$ as $A_n$-bimodules.
\begin{figure}
\caption{The trivial tangle $S_{id}$}
\label{sid}
\end{figure}
\end{example}
\begin{example}\label{ex:sing}
Let $\mathsf{X}_i$ denote the elementary singular braid on $2n$ strands with a singularization between strands $i$ and $i+1$. Let $e_1$ and $e_2$ (resp. $e_3$ and $e_4$) denote the left and right incoming (resp. outgoing) edges at the only $4$-valent vertex of $\mathsf{X}_i$ (see Figure \ref{xi}). For each subset $I\subset\{1,2,3,4\}$, let $\mathcal{Z}I$ denote the set of cycles that locally consist of the edges labelled with elements in $I$.
\begin{figure}
\caption{The elementary singular braid $\mathsf{X}_i$}
\label{xi}
\end{figure}
Similar to the identity bimodule, the bimodule $\mathsf{M}[\mathsf{X}_i]$ is the set of elements $A_n.x_Z.A_n$ modulo the following relations:
\begin{enumerate}
\item $x_{Z}=\iota_{b(Z)}x_{Z}\iota_{t(Z)}$,
\item For $j<i$ or $j>i+1$, $u_jx_Z=x_Zu_j$,
\item For $j<i-1$ or $j>i+1$, if $j+1\in b(Z)$, then $L_{j}x_{l_j(Z)}=x_{Z}L_{j}$,
\item For $j<i-1$ or $j>i+1$, if $j\in b(Z)$, then $R_{j}x_{r_{j}(Z)}=x_{Z}R_{j}$,
\item $(u_i+u_{i+1})x_Z=x_Z(u_i+u_{i+1})$,
\item $(u_iu_{i+1})x_{Z}=x_Z(u_iu_{i+1})$,
\item For $Z\in\mathcal{Z}\emptyset$, if $i-1\in b(Z)$, then
\[L_{i-1}x_{Z}=x_{Z(1,3)}L_{i-1} \ \ \ \ \text{and}\ \ \ \ x_ZR_{i-1}=R_{i-1}x_{Z(1,3)},\]
where $Z(1,3)\in\mathcal{Z}\{1,3\}$ denotes the cycle obtained from $Z$ by replacing the $(i-1)$-th strand with local cycle $e_1e_3$.
\item For $Z\in\mathcal{Z}\{1,3\}$, \[L_ix_Z=x_{Z(2,3)}\ \ \ \text{and}\ \ \ x_{Z}R_i=x_{Z(1,4)},\]
where $Z(2,3)=(Z\setminus e_1)\cup e_2$ and $Z(1,4)=(Z\setminus e_3)\cup e_4$.
\item For $Z\in\mathcal{Z}\{1,4\}$, $L_ix_Z=x_{Z(2,4)}$ where $Z(2,4)=(Z\setminus e_1)\cup e_2$.
\item For $Z\in\mathcal{Z}\{2,3\}$, $x_ZR_i=x_{Z(2,4)}$ where $Z(2,4)=(Z\setminus e_3)\cup e_4$.
\item For each $Z\in\mathcal{Z}\emptyset$, if $i+2\in b(Z)$ then,
\[ \begin{split}
&x_ZL_{i+1}L_i=L_{i+1}x_{Z(2,3)}, \ \ \ \ x_ZU_3L_{i+1}=L_{i+1}x_{Z(2,4)}\\
&R_iR_{i+1}x_Z =U_3x_{Z(1,4)}R_{i+1},\ \ \ \ R_{i+1}U_3x_Z=x_{Z(2,4)}R_{i+1}.
\end{split}
\]
where $Z(a,b)$ denotes the cycle obtained from $Z$ by replacing the $(i+2)$-th strand with the local cycle $e_ae_b$.
\end{enumerate}
\end{example}
\subsubsection{The Cup and Cap Modules}
Let $\mycap_i$ denote the elementary tangle on $2n$ strands with a singular cap between the strands $i$ and $i+1$ as in Figure \ref{elcapdiagram}. This tangle consists of $2n$ edges, which we label by their bottom vertices. As before, a cycle $Z$ in $\mycap_i$ is a union of $n$ disjoint edges such that it contains exactly one of $e_i$ or $e_{i+1}$. Associated to $\mycap_i$ we define a bimodule $\mathsf{M}[\mycap_i]$, generated by the cycles, so that it is a left module over $A_n$ and a right module over $A_{n-1}$. Specifically, it is the set of elements $A_n.x_Z.A_{n-1}$ for all cycles $Z$ subject to the following relations.\begin{enumerate}
\item $x_Z=\iota_{b(Z)}x_Z\iota_{t(Z)}$
\item For all $1\le j<i-1$ we have $L_{j}x_Z=x_{U_j(Z)}L_j$ and $x_ZR_j=R_jx_{U_j(Z)}$.
\item For all $i+2\le j< 2n$ we have $L_jx_{Z}=x_{U_j(Z)}L_{j-2}$ and $R_jx_{U_j(Z)}=x_ZL_{j-2}$.
\item For every $Z$, $L_{i-1}x_Z=L_{i+1}x_Z=x_ZL_{i-1}=0$ and $R_{i-1}x_Z=R_{i+1}x_Z=x_{Z}R_{i-1}=0$.
\item If $e_i\subset Z$, then $L_ix_Z=x_{U_i(Z)}$.
\end{enumerate}
\begin{figure}
\caption{The elementary singular cap diagram}
\label{elcapdiagram}
\end{figure}
\noindent
Similarly, $\mycup_i$ denotes the elementary tangle on $2n$ strands with a singular cup between the strands $i$ and $i+1$ as in Figure \ref{elcupdiagram}. Associated with this we define a bimodule $\mathsf{M}[\mycup_i]$ which is a left $A_{n-1}$-module and a right $A_n$-module generated by the cycles. Similarly, if we label the edges by their top boundary, a cycle $Z$ is a union of $n$ pairwise disjoint edges containing exactly one of $e_i$ or $e_{i+1}$, and $x_Z$ denotes the corresponding generator. Then, $\mathsf{M}[\mycup_i]$ is the set of elements $A_{n-1}.x_Z.A_n$ for all cycles $Z$ subject to the following relations.
\begin{enumerate}
\item $x_Z=\iota_{b(Z)}x_Z\iota_{t(Z)}$
\item For all $1\le j<i-1$ we have $L_{j}x_Z=x_{U_j(Z)}L_j$ and $x_ZR_j=R_jx_{U_j(Z)}$.
\item For all $i+2\le j< 2n$ we have $L_jx_{Z}=x_{U_j(Z)}L_{j+2}$ and $R_jx_{U_j(Z)}=x_ZL_{j+2}$.
\item For every $Z$, $L_{i-1}x_Z=x_ZL_{i-1}=x_ZL_{i+1}=0$ and $R_{i-1}x_Z=x_ZR_{i-1}=x_ZR_{i+1}=0$.
\item If $e_i\subset Z$, then $x_ZR_i=x_{U_i(Z)}$.
\end{enumerate}
\begin{figure}
\caption{The elementary singular cup diagram}
\label{elcupdiagram}
\end{figure}
Finally, by tensoring these bimodules we define $\mathsf{M}[S_{cup}]$ and $\mathsf{M}[S_{cap}]$ for the tangle consisting of $n$ singular cups (Figure \ref{S-cups}) and caps (Figure \ref{S-caps}), respectively.
Specifically, $\mathsf{M}[S_{cup}]$ is a right $A_n$-module defined as follows. Label the edges such that $e_i$ denotes the edge occupying the $i$-th vertex on the top boundary. The generators are given by local cycles $Z$ with $n$ strands that contain exactly one of the edges $e_{2j}$ and $e_{2j-1}$ for each $j\in\{1,...,n\}$. As before, $x_Z$ denotes the generator corresponding to $Z$ and $t(Z)$ denotes the set of occupied strands at the top boundary. The module $\mathsf{M}[S_{cup}]$ is the set of elements $x_Z.A_n$ for every cycle $Z$ modulo the following relations:
\begin{enumerate}
\item[C1)] $x_Z=x_Z.\iota_{t(Z)}$.
\item[C2)] If $2j-1\in t(Z)$ for some $1\le j\le n$, then $x_ZR_{2j-1}=x_{U_j(Z)}$.
\item[C3)] If $2j\in t(Z)$ for some $1\le j\le n$, then $x_ZR_{2j}=0$.
\end{enumerate}
Similarly, for $S_{cap}$, $\mathsf{M}[S_{cap}]$ is a left $A_n$-module consisting of the elements $A_n.x_Z$ for all cycles $Z$ modulo the following relations:
\begin{enumerate}
\item[C'1)] $x_Z=\iota_{b(Z)}x_Z$
\item[C'2)] If $2j-1\in b(Z)$ for some $1\le j\le n$, then $L_{2j-1}x_Z=x_{U_j(Z)}$.
\item[C'3)] If $2j\in b(Z)$ for some $1\le j\le n$, then $L_{2j}x_{Z}=0$.
\end{enumerate}
\begin{theorem}
Let $S_1$ and $S_2$ be singular braids with $2n$ strands. Then there is an isomorphism of $\mathbb{Q}[U_1,...,U_m]$-modules
\[\mathsf{M}[S_1]\otimes_{A_n}\mathsf{M}[S_2]\cong\mathsf{M}[S_1\circ S_2]\]
which preserves the left and right $A_n$-actions. Here, $m$ denotes the number of edges in $S=S_1\circ S_2$.
\end{theorem}
\begin{proof}
Let
\[h:\mathsf{M}[S_1\circ S_2]\to \mathsf{M}[S_1]\otimes_{A_n}\mathsf{M}[S_2]\]
be the map defined by $h(x_Z)=x_{Z_1}\otimes x_{Z_2}$ where $Z_i=Z\cap S_i$. First, we show that $h$ extends to a well-defined $\mathbb{Q}[U_1,...,U_m]$-homomorphism that preserves the right and left $A_n$ actions.
Suppose $S_1\subset \mathbb{R}\times [0,1]$ and $S_2\subset \mathbb{R}\times [1,2]$ such that $S_1\cap (\mathbb{R}\times\{1\})=S_2\cap (\mathbb{R}\times\{1\})$. Let $Z$ be a cycle for $S$ and $Z_i=Z\cap S_i$. Consider an edge $e_i\subset Z$. If $D=D(Z,e_i)$ is a disk away from $\mathbb{R}\times\{1\}$, then it is clear from the definition that \[h(U_ix_Z)=U_ix_{Z_1}\otimes x_{Z_2}.\]
Suppose $e_i\subset S_1$, and $D\cap (\mathbb{R}\times\{1\})=[j,k]\times\{1\}$. Then, $D=D_1\cup D_2$ where $D_1=D(Z_1,e_i)$ and $D_2\subset \mathbb{R}\times [1,2]$ is the smallest disk with left boundary on $Z_2$ and bottom boundary equal to $[j,k]\times \{1\}$. Depending on the type of $D$, one of the relations M6, M7, M8 or M9 implies that $U_ix_Z=a_0(U(D)x_{U_i(Z)})a_2$
where $a_0,a_2$ are generators of $A_n$. Since, $D_1=D(Z_1,e_i)$ we have
\[U_ix_{Z_1}=a_0 U(D_1)x_{Z_1'}\delta_{k,j}.\]
On the other hand, it is not hard to show that \[D_2=D(Z_2,e_{b(j)})\cup D(u_j(Z_2),e_{b(j+1)})\cup...\cup D(u_{k-1}(...(u_j(Z_2))),e_{b(k-1)}),\]
and thus $\delta_{k,j}x_{Z_2}=\frac{U(D)}{U(D_1)}x_{Z_2'}a_2$ where $U_i(Z)=Z_1'\cup Z_2'$. Therefore, \[U_ih(x_{Z})=U_ix_{Z_1}\otimes x_{Z_2}=a_0U(D)x_{Z_1'}\otimes x_{Z_2'}a_2=a_0U(D)h(x_{U_i(Z)})a_2.\]
Finally, if $e_i\cap (\mathbb{R}\times\{1\})\neq \emptyset$, then $U_ix_{Z_1}=x_{Z_1}R_iL_i$. On the other hand, $D=D_1\cup D_2$ where $D_1=D(Z_1,e_{t(i)})$ and $D_2=D(Z_2,e_{b(i)})$. Thus,
\[\begin{split}
U_ix_{Z_1}\otimes x_{Z_2}=x_{Z_1}R_i\otimes L_ix_{Z_2}&=(U(D_1)a_0x_{Z_1'})\otimes (U(D_2)x_{Z_2'}a_2)\\
&=U(D)a_0x_{Z_1'}\otimes x_{Z_2'}a_2.
\end{split}
\]
where $U_ix_Z=U(D)a_0x_{U_i(Z)}a_2$ and $U_i(Z)=Z_1'\cup Z_2'$.
The proof for the rest of relations is similar.
Next, we show that $h$ is surjective. It is enough to prove that any element of the form $x_{Z_1}\otimes ax_{Z_2}$ is in the image of $h$. Here, $Z_1$ and $Z_2$ are cycles for $S_1$ and $S_2$, respectively, and $a:t(Z_1)\to b(Z_2)$ is a monotone bijection. The relations M7, M8, M9 imply that $x_{Z_1}\otimes ax_{Z_2}=q.a_1x_{Z_1'}\otimes x_{Z_2'}a_2$, where $q\in\mathbb{Q}[U_1,...,U_m]$, $a_1,a_2\in A_n$ and $t(Z_1')=b(Z_2')$. Therefore, $h$ is surjective. It is not hard to see that $h$ is injective, and thus is an isomorphism.
\end{proof}
\begin{theorem}
Let $S$ be a singular braid with $2n$ strands and $m$ edges. Then,
\[\mathscr{M}(S)\cong\mathsf{M}(S_{cup})\otimes_{A_n}\mathsf{M}(S)\otimes_{A_n}\mathsf{M}(S_{cap})\]
as $\mathbb{Q}[U_1,...,U_m]$-modules.
\end{theorem}
\begin{proof}
By definition $\mathsf{M}(S_{cup})\otimes_{A_n}\mathsf{M}(S)\otimes_{A_n}\mathsf{M}(S_{cap})$ is generated by the elements of the form $x_{Z_0}a_0\otimes x_{Z}\otimes a_1x_{Z_1}$ where $Z, Z_0$ and $Z_1$ are cycles for $S$, $S_{cup}$ and $S_{cap}$, respectively. Moreover, $a_0:t(Z_0)\to b(Z)$ and $a_{1}:t(Z)\to b(Z_1)$ are the corresponding monotone bijections. Factoring $a_0$ and $a_1$ as products of $R_j$'s and $L_j$'s, the relations imply that $x_{Z_0}a_0\otimes x_{Z}\otimes a_1x_{Z_1}=q.x_{Z'}$ where $q\in\mathbb{Q}[U_1,...,U_m]$ and $Z'$ is a cycle in the plat closure of $S$. Because of the relations M2 and M3, we are done.
\end{proof}
\subsection{The Edge Bimodule Homomorphisms and the Total Complex}
In this section, we define $A_n$-bimodule homomorphisms
\[
\begin{split}
d^-:\mathsf{M}[\mathsf{X}_i]\to \mathsf{M}[S_{id}]\\
d^+:\mathsf{M}[S_{id}]\to\mathsf{M}[\mathsf{X}_i].
\end{split}
\]
The cycles in $S_{id}$ are in one-to-one correspondence with subsets of $[2n]$ with $n$ elements. For each such subset $S$, denote the corresponding cycle and generator by $Z_{S}$ and $x_{S}$, respectively. Furthermore, for each cycle $Z=Z_{S}$ (i.e., $S=b(Z)=t(Z)$) such that $\{i,i+1\}\not\subset S$, there is a unique cycle $Z^z$ in $\mathsf{X}_i$ for which $b(Z^z)=t(Z^z)=S$. Conversely, for any cycle $Z$ in $\mathsf{X}_i$ with $b(Z)=t(Z)$, there is a unique cycle $Z^u=Z_{b(Z)}$ in $S_{id}$. So we define:
\[
d^-(x_Z)=\begin{cases}
\begin{array}{lcl}
x_{Z^u}&&\text{if}\ \{i+1\}\cap (b(Z)\cup t(Z))=\emptyset\\
x_{b(Z)}.L_i&&\text{if}\ i+1\in b(Z)\ \text{and}\ i\in t(Z)\\
R_i.x_{t(Z)}&&\text{if}\ i\in b(Z)\ \text{and}\ i+1\in t(Z)\\
U_i.x_{Z^u}&&\text{if}\ \{i+1\}\subset b(Z)\cap t(Z)
\end{array}
\end{cases}
\]
\begin{lemma}
$d^-$ is a well-defined $A_n$-bimodule homomorphism.
\end{lemma}
\begin{proof}
The proof is straightforward by checking the local relations.
\end{proof}
Next, we define
\[
d^+(x_{Z})=
\begin{cases}
\begin{array}{lcl}
x_{Z^z}u_i-u_{i+1}x_{Z^z}&&\text{if}\ i+1\notin b(Z)=t(Z)\\
x_{Z^z}-\epsilon R_{i+1} x_{e_{i+1}(Z)^z}L_{i+1}&&\text{if}\ \{i,i+1\}\cap b(Z)=\{i+1\}\\
-\epsilon R_{i+1}x_{e_{i+1}(Z)^z}L_{i+1}&&\text{if}\ \{i,i+1\}\subset b(Z)
\end{array}
\end{cases}
\]
where $\epsilon=0$ if $i+2\in b(Z)$, otherwise $\epsilon=1$.
\begin{lemma}\label{lem:comp}
For any $i$,
\[d^+_{i}\circ d^-_i=U_i-U_{i+1}\quad\text{and}\quad d^-_{i}\circ d^+_i(x)=xu_{i}-u_{i+1}x,\]
where the first identity holds on $\mathsf{M}[\mathsf{X}_i]$ and the second holds for any $x\in\mathsf{M}[S_{id}]$.
\end{lemma}
\begin{proof}
It is straightforward.
\end{proof}
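For instance, if $i+1\notin b(Z)=t(Z)$ for a cycle $Z$ in $S_{id}$, then $d^+_i(x_Z)=x_{Z^z}u_i-u_{i+1}x_{Z^z}$ and $d^-_i(x_{Z^z})=x_Z$, so $d^-_i(d^+_i(x_Z))=x_Zu_i-u_{i+1}x_Z$, as claimed.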
We now have the necessary tools to define the total complex $C_{1 \pm 1}(D)$. Let $\mathfrak{C}=\{c_{1},...,c_{N} \}$ denote the crossings in $D$, and for each $v \in \{0,1\}^{N}$, let $D_{v}$ denote the corresponding complete singular resolution.
The complex $(C_{1 \pm 1}(D), d_{0})$ is given by
\[ \bigoplus_{v \in \{0,1\}^{N}} C_{1 \pm 1}(D_{v}) \]
\noindent
For each $u \lessdot v$, the edge map $d_{u,v}$ from $C_{1 \pm 1}(D_{u})$ to $C_{1 \pm 1}(D_{v})$ is given by the natural extension of $d^{+}$ for a positive crossing and of $d^{-}$ for a negative crossing. The total edge map is given by
\[ d_{1} = \sum_{u \lessdot v} (-1)^{\epsilon_{u,v}} d_{u,v} \]
The fact that $(d_{0}+d_{1})^{2}=0$ follows from the local definition of $d_{1}$. In particular, if $D = D_{1} \circ ... \circ D_{j}$ where each $D_{i}$ has a single crossing, we could rewrite the complex $\mathscr{M}(D)$ as a tensor product
\[ \mathscr{M}(D) = \mathsf{M}(S_{cup}) \otimes_{A_n} \mathsf{M}(D_{1}) \otimes_{A_n} ... \otimes_{A_n} \mathsf{M}(D_{j}) \otimes_{A_n} \mathsf{M}(S_{cap}) \]
\noindent
where each $\mathsf{M}(D_{i})$ is the mapping cone on $d^{+}$ or $d^{-}$, depending on the sign of the crossing. The total complex is
\[ C_{1 \pm 1}(D) = \mathscr{M}(D) \otimes \mathsf{K}(D) \]
\noindent
Since each tensorand satisfies $d^{2}=0$, the total complex does as well.
\subsection{A Bigrading for Complete Resolutions}\label{gradingsect1}
In this section we will define a bigrading on $C_{1 \pm 1}(S)$ for a complete resolution $S$. It is denoted $(\mathfrak{gr}_{q}, \mathfrak{gr}_{a})$. We will refer to $\mathfrak{gr}_{q}$ as the quantum grading and $\mathfrak{gr}_{a}$ as the horizontal grading, in analogy with the corresponding gradings on Khovanov-Rozansky $\mathfrak{sl}_{2}$ homology.
Let $R\{i,j\}$ denote a copy of the ground ring with $1 \in R$ in quantum grading $i$ and horizontal grading $j$. The variables $U_{i}$ in $R$ have bigrading $(-2, 0)$.
We will start with the quantum grading. Since $R$ has an internal grading, it suffices to define the quantum grading on each generator $x_{Z}$. Given a cycle $Z$, let $T_{1}(Z)$ denote the number of 4-valent vertices at which $Z$ has edges $e_{1}$ and $e_{3}$, let $T_{2}(Z)$ be the number of 4-valent vertices at which $Z$ has edges $e_{2}$ and $e_{4}$, and $E(Z)$ the number of 4-valent vertices at which $Z$ contains none of the four edges (see Figure \ref{labeled4v} for the edge labelings).
\begin{figure}
\caption{A 4-valent vertex with labeled edges}
\label{labeled4v}
\end{figure}
At each $w_{i}^{+}$ and $w_{i}^{-}$, $Z$ contains either the left or the right edge. Define $w_{i}(Z)$ by
\[
w_{i}(Z)=\begin{cases}
\begin{array}{ll}
1 &\text{ if } Z \text{ contains the left edge at both } w_{i}^{+} \text{ and } w_{i}^{-} \\
-1 &\text{ if } Z \text{ contains the right edge at both } w_{i}^{+} \text{ and } w_{i}^{-} \\
0&\text{otherwise}
\end{array}
\end{cases}
\]
\noindent
Let $w(Z) = \sum_{i}w_{i}(Z)$.
We define $x_{Z}$ to lie in quantum grading $T_{1}(Z)-T_{2}(Z)+E(Z)+w(Z)$. For $e_{i}$ not in $Z$, multiplication by $U_{i}$ decreases $\mathfrak{gr}_{q}$ by 2 by construction. Assume $e_{i}$ is in $Z$. The vertices $v_{t}D(Z, e_{i})$ and $v_{b}D(Z, e_{i})$ contribute 2 more to $T_{1}(Z)-T_{2}(Z)+E(Z)+w(Z)$ than to $T_{1}(U_{i}(Z))-T_{2}(U_{i}(Z))+E(U_{i}(Z))+w(U_{i}(Z))$. Further, the total contributions of vertices on $\partial(D(Z, e_{i}))\setminus\{ v_{t}D(Z, e_{i}), v_{b}D(Z, e_{i}) \}$ to $T_{1}(U_{i}(Z))-T_{2}(U_{i}(Z))+E(U_{i}(Z))+w(U_{i}(Z))$ and $T_{1}(Z)-T_{2}(Z)+E(Z)+w(Z)$ differ by $\mathfrak{gr}_{q}(U(D(Z,e_i)))$. Thus, the quantum grading is well-defined on $\mathscr{M}(S)$.
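For example, for the identity braid on two strands, the cycle $Z_1=\{e_1\}$ contains the left edge at both $w_1^+$ and $w_1^-$, so $\mathfrak{gr}_{q}(x_{Z_1})=w(Z_1)=1$, while $\mathfrak{gr}_{q}(x_{Z_2})=-1$; the relation $U_1x_{Z_1}=x_{Z_2}$ then drops the quantum grading by $2$, as expected.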
For the horizontal grading, we place the whole complex $\mathscr{M}(S)$ in grading $0$. The bigrading extends to $C_{1 \pm 1}(S)$ by placing gradings on the matrix factorizations in Section \ref{defcomplex} as follows:
\[\xymatrix{R\{0,-2\}\ar@<1ex>[r]^{ L_{w_i^\pm}}&R\{0,0\}\ar@<1ex>[l]^{L'_{w_i^{\pm}}}}\]
Let $d_{0+}$ denote the component of the differential $d_0$ which is homogeneous of degree $2$ with respect to the horizontal grading and $d_{0-}$ the component which is homogeneous of degree $-2$. Both $d_{0+}$ and $d_{0-}$ are homogeneous of degree $-2$ with respect to the quantum grading.
\section{Computing the homology for complete resolutions}
In this section we will compute the homology of $C_{1\pm 1}(S)$ for any singular braid $S$. The homology will first be computed as a bigraded vector space over $\mathbb{Q}$. This requires two main tools: a filtration induced by basepoints in the diagram, and a formula called the composition product. After computing the bigraded vector space, we will describe it as an $R$-module by showing that $H_{1+1}(S)$ satisfies the MOY relations of Murakami, Ohtsuki, and Yamada \cite{murakami1998homfly}.
\subsection{The Basepoint Filtration}\label{bpf}
Let $A_{1},...,A_{r}$ denote the bounded regions of the complement of $S$ in $\mathbb{R}^{2}$, and for each $A_{i}$, let $p_{i}$ denote a basepoint in $A_{i}$. It turns out that each $p_{i}$ gives a filtration on $M(S)$, which we will define in terms of a grading $\gr_{i}$.
Let $Z$ be a cycle in $\mathsf{p}(S)$. Instead of viewing $Z$ as strands in $\mathsf{p}(S)$, we can move to the singular closure viewpoint of Section \ref{closuresection}, so that the cycle consists of $n$ disjoint circles in $\mathsf{x}(S)$. If $Z$ is a cycle in $\mathsf{p}(S)$, let $\overline{Z}$ denote the corresponding cycle in $\mathsf{x}(S)$.
The grading $\gr_{i}$ is defined as follows. Let $\mathcal{C}_{Z}$ denote the unique 2-chain in $\mathbb{R}^{2}$ with $\partial \mathcal{C}_{Z} = \overline{Z}$. Recall that as a $\mathbb{Q}$-module,
\[M(S)=\bigoplus _{Z\in c(S)}R_{Z}\]
\noindent
On each summand $R_{Z}$, define $\gr_i$ to be the coefficient of $\mathcal{C}_{Z}$ at $p_{i}$. Since the $R$-action only moves cycles to the right, it is non-increasing with respect to each $\gr_{i}$. Thus, each $p_{i}$ induces an ascending filtration on $M(S)$
\[ ... \subseteq \mathcal{F}_{k-1}(M(S)) \subseteq \mathcal{F}_{k}(M(S)) \subseteq \mathcal{F}_{k+1}(M(S)) \subseteq ... \]
\noindent
where $\mathcal{F}_{k}(M(S))$ consists of elements of $M(S)$ with $\gr_{i} \le k$.
The module $\mathscr{M}(S) = M(S) \otimes R/L$ inherits the quotient filtration from $M(S)$. We make $C_{1 \pm 1}(S)=\mathscr{M}(S) \otimes \mathsf{K}(S)$ a filtered complex by putting the generators of $\mathsf{K}(S)$ in the same filtration level. Thus, the complex $C_{1 \pm 1}(S)$ comes equipped with a $\mathbb{Z}^{r}$ filtration by the basepoints $p_{1},...,p_{r}$, which we will call the basepoint filtration.
Note that when the filtration is bounded, a filtered chain map $f$ which induces an isomorphism on the $E_{k}$ pages for some $k$ also induces an isomorphism on the $E_{\infty}$ pages, and hence on the total homology.
\subsection{The $\mathfrak{sl}_{1}$ Homology of Singular Braids}
Let $S$ be a singular braid, and let $\mathsf{b}(S)$ denote the standard (clockwise oriented) braid closure of $S$. Each vertex in $\mathsf{b}(S)$ is either 2-valent or 4-valent and has the same number of incoming and outgoing edges. We take the convention that there is a bivalent vertex on each arc corresponding to the braid closure, so that if the diagram were cut at these vertices, the resulting diagram would be $S$. These vertices will be called \emph{closing vertices}.
In \cite{KhovanovRozansky08:MatrixFactorizations}, Khovanov and Rozansky defined a family of knot invariants called $\mathfrak{sl}_{n}$ homology. Each $\mathfrak{sl}_{n}$ homology categorifies the $\mathfrak{sl}_{n}$ polynomial, and the $\mathfrak{sl}_{2}$ homology agrees with Khovanov homology over $\mathbb{Q}$. In this section, we will describe the $\mathfrak{sl}_{1}$ homology of a singular braid.
The $\mathfrak{sl}_{1}$ complex $C_{1}(S)$ is defined over the same ground ring $R$. The complex comes with two gradings, the quantum grading $gr_{q}$ and the horizontal grading $gr_{h}$. As with the $C_{1 \pm 1}(S)$ complex, each variable $U_{i}$ has quantum grading $-2$ and horizontal grading $0$, and we write $R\{i,j\}$ for a copy of the ground ring with $1 \in R$ in bigrading $(i,j)$.
To each 2-valent vertex $v$, let $e_{i}$ be the incoming edge at $v$ and $e_{j}$ the outgoing edge. We define the matrix factorization corresponding to $v$ to be
\[ C_{1}(v) = \big{(}\xymatrix{R\ar@<1ex>[r]^{U_{i}-U_{j}}&R\ar@<1ex>[l]^{U_{i}+U_{j}}}\big{)} \]
\noindent
Now suppose $v$ is a 4-valent vertex. In Sections \ref{sub3.2} and \ref{defcomplex}, we defined linear polynomials $L_{v}$, $L'_{v}$, and a quadratic polynomial $Q_{v}$. For 4-valent $v$, we define the matrix factorization $C_{1}(v)$ by the Koszul complex in Figure \ref{sl14}. The total $\mathfrak{sl}_{1}$ complex $C_{1}(S)$ is given by
\[ C_{1}(S) = \bigotimes_{v \in S} C_{1}(v) \]
One can see by inspection that at each vertex (both 2-valent and 4-valent), $d^{2}= \sum_{e_{i} \in In(v)} U_{i}^{2} - \sum_{e_{j} \in Out(v)} U_{j}^{2}$, so each $C_{1}(v)$ is a matrix factorization. However, the sum of these potentials over all vertices in $S$ is zero, so $d^{2}=0$ on $C_{1}(S)$ making it a true chain complex.
\begin{figure}
\caption{The $\mathfrak{sl}_{1}$ Koszul complex for a $4$-valent vertex}
\label{sl14}
\end{figure}
Let $d_{0+}$ denote the component of the differential which has horizontal grading $2$ and $d_{0-}$ the component which has horizontal grading $-2$. Both $d_{0+}$ and $d_{0-}$ are homogeneous of degree $-2$ with respect to the quantum grading.
The $\mathfrak{sl}_{1}$ homology of $S$ is defined to be
\[ H_{1}(S) = H_{*}(C_{1}(S), d_{0+}+d_{0-}) \]
\noindent
The homology only has one grading - the quantum grading - as $d_{0+}+d_{0-}$ is not homogeneous with respect to the horizontal grading. However, if we take homology first with respect to $d_{0+}$, then with respect to $d_{0-}^{*}$, this $E_{2}$ page turns out to be isomorphic to the total homology, but now the differentials are homogeneous so the homology admits a horizontal grading. We will denote this homology by $H_{1}^{\pm}(S)$:
\[ H_{1}^{\pm}(S) = H_{*}(H_{*}(C_{1}(S), d_{0+}), d^{*}_{0-}) \]
\begin{example}
Let $\mathcal{U}$ be the 1-strand diagram for the unknot with a single bivalent vertex, and a single edge $e_{1}$. Then $C_{1}(\mathcal{U})$ is given by
\[ \big{(}\xymatrix{R\ar@<1ex>[r]^{0}&R\ar@<1ex>[l]^{2U_{1}}}\big{)} \]
with $R=\mathbb{Q}[U_{1}]$.
\noindent
Thus, $H^{\pm}_{1}(\mathcal{U}) \cong \mathbb{Q}\{0,-2\}$.
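As a minimal sketch of this computation (assuming the bivalent factorization takes the form described above): the incoming and outgoing edges at the unique vertex coincide, so the two maps become $U_{1}-U_{1}=0$ and $U_{1}+U_{1}=2U_{1}$. Taking homology with respect to the zero map and then with respect to multiplication by $2U_{1}$ (or its dual) leaves a single copy of
\[ \mathbb{Q}[U_{1}]/(U_{1})\cong\mathbb{Q}, \]
accounting for the one-dimensional answer above.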
\end{example}
\begin{remark}
The $\mathfrak{sl}_{1}$ homology of a singular diagram is invariant under addition or removal of bivalent vertices, provided each component of $S$ has at least one vertex.
\end{remark}
The $\mathfrak{sl}_{1}$ homology of singular diagrams in general is described in the following lemma.
\begin{lemma}
Let $S$ be a $k$-strand singular braid. Then
\[
H^{\pm}_{1}(S)=\begin{cases}
\begin{array}{ll}
\mathbb{Q}\{0,-2k \} &\text{ if } \mathsf{b}(S) \text{ is the }k \text{ component unlink} \\
0&\text{otherwise}
\end{array}
\end{cases}
\]
\end{lemma}
\begin{proof}
The computation for the $k$ component unlink follows from the computation for the unknot, together with the fact that disjoint union corresponds to tensor product. In other words, if $S=S_{1} \bigsqcup S_{2}$, then $C_{1}(S)=C_{1}(S_{1}) \otimes C_{1}(S_{2})$.
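In symbols, if $\mathsf{b}(S)$ is the $k$ component unlink then $S$ is the trivial braid on $k$ strands, so the K\"unneth formula over $\mathbb{Q}$ gives
\[ H^{\pm}_{1}(S)\cong H^{\pm}_{1}(\mathcal{U})^{\otimes k}\cong\left(\mathbb{Q}\{0,-2\}\right)^{\otimes k}=\mathbb{Q}\{0,-2k\}. \]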
If $\mathsf{b}(S)$ is not the $k$ component unlink, then it has at least one 4-valent vertex. In Figure \ref{sl14}, the arrows with coefficient $2$ are isomorphisms because we are working over $\mathbb{Q}$. Thus, $C_{1}(S)$ is acyclic.
\end{proof}
\subsection{The Filtered Homology for Singular Braids}\label{bpsect}
In this section we will compute the homology of the associated graded object of $C_{1 \pm 1}(S)$ with respect to the basepoint filtration. The complex splits as a direct sum over cycles $Z$, with the summand corresponding to $Z$ given by
\[
R_{Z}\otimes\big{(}
\xymatrix{R\ar@<1ex>[r]^{L_{w_1^\pm}}&R\ar@<1ex>[l]^{L'_{w_1^{\pm}}}}\big{)}\otimes\big{(}
\xymatrix{R\ar@<1ex>[r]^{L_{w_2^\pm}}&R\ar@<1ex>[l]^{L'_{w_2^{\pm}}}}\big{)}\otimes...\otimes\big{(}
\xymatrix{R\ar@<1ex>[r]^{L_{w_n^\pm}}&R\ar@<1ex>[l]^{L'_{w_n^{\pm}}}}\big{)}.
\]
Since $Z$ contains one of the edges at each $w_{k}^{+}$ and $w_{k}^{-}$, we can rewrite this as
\begin{equation}\label{sl1cycle}
R_{Z}\otimes\big{(}
\xymatrix{R\ar@<1ex>[r]^{U_{i_{1}}-U_{j_{1}}}&R\ar@<1ex>[l]^{U_{i_{1}}+U_{j_{1}}}}\big{)}\otimes\big{(}
\xymatrix{R\ar@<1ex>[r]^{U_{i_{2}}-U_{j_{2}}}&R\ar@<1ex>[l]^{U_{i_{2}}+U_{j_{2}}}}\big{)}\otimes...\otimes\big{(}
\xymatrix{R\ar@<1ex>[r]^{U_{i_{n}}-U_{j_{n}}}&R\ar@<1ex>[l]^{U_{i_{n}}+U_{j_{n}}}}\big{)}
\end{equation}
\noindent
where $e_{i_{k}}$ is the edge adjacent to $w_{k}^{+}$ which is not in $Z$, and $e_{j_{k}}$ is the edge adjacent to $w_{k}^{-}$ which is not in $Z$.
We can view $S-Z$ as an open singular braid on $n$ strands. Let $\mathsf{b}(S-Z)$ denote the braid closure of $S-Z$ so that each $w_{i}^{+}=w_{i}^{-}$ is now a bivalent vertex, which we will denote $w_{i}$.
\begin{lemma}
Up to an overall grading shift, the complex in Equation \ref{sl1cycle} is quasi-isomorphic to the $\mathfrak{sl}_{1}$ complex of $\mathsf{b}(S-Z)$.
\end{lemma}
\begin{proof}
This is equivalent to showing that the set of relations $\{L_{v},Q_{v}:v \ne w_{i}\}$ form a regular sequence in $R$. To see that they do in fact form a regular sequence, we can start at the bottom of the braid, and use these relations to perform variable exclusion in the polynomial ring on the incoming edges at each vertex.
At each bivalent vertex, we get the complex in Figure \ref{bivcomplex}, where $e_{i}$ is the incoming edge and $e_{j}$ is the outgoing edge. We will use the relation $U_{i}-U_{j}$ to substitute for $U_{i}$.
\begin{figure}\label{bivcomplex}
\end{figure}
At each 4-valent vertex, we get the complex in Figure \ref{4comp}. We can use the relation $L_{v}=U_{i(v)}+U_{j(v)}-U_{k(v)}-U_{l(v)}$ to substitute for $U_{i(v)}$. After making this substitution, $Q_{v}$ becomes equal to $-U_{j(v)}^{2}+U_{j(v)}(U_{k(v)}+U_{l(v)})-U_{k(v)}U_{l(v)}$, so we can use this relation to exclude $U_{j(v)}$.
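Explicitly, writing $Q_{v}=U_{i(v)}U_{j(v)}-U_{k(v)}U_{l(v)}$ (the shape consistent with the expression just stated, and with the quadratic relation in Lemma \ref{lem:MII} below), the substitution $U_{i(v)}=U_{k(v)}+U_{l(v)}-U_{j(v)}$ gives
\[ Q_{v}=(U_{k(v)}+U_{l(v)}-U_{j(v)})U_{j(v)}-U_{k(v)}U_{l(v)}=-U_{j(v)}^{2}+U_{j(v)}(U_{k(v)}+U_{l(v)})-U_{k(v)}U_{l(v)}, \]
which is monic in $U_{j(v)}$ up to sign, so it can indeed be used to exclude $U_{j(v)}$.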
\begin{figure}\label{4comp}
\end{figure}
Since at each vertex the relations can be written in terms of exclusions on the incoming edges, the relations form a regular sequence in $R$. After canceling the differentials corresponding to these relations in $C_{1}(\mathsf{b}(S-Z))$, we get exactly the complex in \ref{sl1cycle}, so the two complexes are quasi-isomorphic. To see that the bigradings match up (up to an overall grading shift), note that the gradings assigned to the Koszul complex on a bivalent vertex in $C_{1}(\mathsf{b}(S-Z))$ are the same as the gradings assigned to the closing-off Koszul complex in $C_{1 \pm 1}(S)$.
\end{proof}
The element $x_{Z}$ in $R_{Z}$ has bigrading $(T_{1}(Z)-T_{2}(Z)+E(Z)+w(Z),0)$, while the element at the bottom of the Koszul complex on $L_{v}$ and $Q_{v}$ has bigrading $(E(Z), 0)$. Thus, we can write the basepoint-filtered homology of $C_{1 \pm 1}(S)$ as follows:
\begin{lemma} \label{compproduct}
Let $d_{0+}^{f}$ and $d_{0-}^{f}$ denote the components of $d_{0+}$ and $d_{0-}$ on $C_{1 \pm 1}(S)$ which preserve the basepoint filtration. Then
\[ H_{*}((C_{1 \pm 1}(S), d_{0+}^{f}),d_{0-}^{f}) \cong \bigoplus_{Z}H^{\pm}_{1}(S-Z)\{T_{1}(Z)-T_{2}(Z)+w(Z), 0 \} \]
\end{lemma}
There is a spectral sequence from $H_{*}((C_{1 \pm 1}(S), d_{0+}^{f}),d_{0-}^{f})$ to the total homology $H_{1+1}(S)$. However, $H^{\pm}_{1}(S-Z)$ always lies in a single horizontal grading, so $H_{*}((C_{1 \pm 1}(S), d_{0+}^{f}),d_{0-}^{f})$ does as well. Thus, there are no higher differentials, and with respect to the quantum grading, we have the following isomorphism:
\[ H_{1+1}(S) \cong \bigoplus_{Z}H^{\pm}_{1}(S-Z)\{T_{1}(Z)-T_{2}(Z)+w(Z)\} \]
\begin{corollary} \label{KhovCor}
With respect to the quantum grading, $H_{1+1}(S) \cong Kh(\mathsf{sm}(\mathsf{p}(S)))$ as graded $\mathbb{Q}$-vector spaces.
\end{corollary}
\begin{proof}
As before, if $Z$ is a cycle in $\mathsf{p}(S)$, let $\overline{Z}$ be the corresponding cycle in $\mathsf{x}(S)$. Then $T_{1}(Z)-T_{2}(Z)+w(Z) = T_{1}(\overline{Z}) - T_{2}(\overline{Z})$, and we get
\[ H_{1+1}(S) \cong \bigoplus_{Z}H^{\pm}_{1}(\mathsf{x}(S)-\overline{Z})\{T_{1}(\overline{Z}) - T_{2}(\overline{Z})\} \]
\noindent
This is precisely the composition product formula for $m=n=1$, which proves that this sum is isomorphic to
$H_{2}(\mathsf{x}(S))$. (See \cite{dowlin2015knot}, equation (5).)
The $\mathfrak{sl}_{2}$ homology of a singular diagram is known to be isomorphic to the Khovanov homology of the smoothing of that diagram \cite{hughes2014note}:
\[ H_{2}(\mathsf{x}(S)) \cong Kh(\mathsf{sm}(\mathsf{x}(S))) = Kh(\mathsf{sm}(\mathsf{p}(S))) \]
\noindent
The second equality is due to the fact that $\mathsf{sm}(\mathsf{x}(S))$ and $\mathsf{sm}(\mathsf{p}(S))$ are the same diagram.
\end{proof}
\subsection{MOY Relations} Khovanov-Rozansky homology satisfies a set of graph relations known as the MOY relations. In this section, we will show that $H_{1+1}(S)$ satisfies relations analogous to those satisfied by $\mathfrak{sl}_{2}$ homology.
First, we recall these relations. Let $\mathcal{A} = \mathbb{Q}[U]/(U^{2})$ denote the Khovanov homology of the unknot, and let $S$ and $S'$ be singular braids with an even number of strands.
\begin{itemize}
\item (MOY 0) If $\mathsf{p}(S')$ is the disjoint union of $\mathsf{p}(S)$ with the plat closure of the trivial braid with two strands (i.e.\ an unknot), then
\[H_{1+1}(S')\cong H_{1+1}(S)\otimes \mathcal{A}\{1\}.\]
\item (MOY \rom{1}) If $\mathsf{p}(S')$ is obtained from $\mathsf{p}(S)$ by one of the local moves as in Figure \ref{fig:MI} then
\[H_{1+1}(S')\cong H_{1+1}(S)\]
\begin{figure}
\caption{MOY \rom{1} moves}
\label{fig:MI}
\end{figure}
\item (MOY \rom{2}) If $\mathsf{p}(S')$ is obtained from $\mathsf{p}(S)$ by one of the local moves as in Figure \ref{fig:MII} then
\[H_{1+1}(S')\cong H_{1+1}(S)\otimes \mathcal{A}.\]
\begin{figure}
\caption{MOY \rom{2} moves}
\label{fig:MII}
\end{figure}
\item (MOY \rom{3}) If $\mathsf{p}(S')$ is obtained from $\mathsf{p}(S)$ by one of the local moves as in Figure \ref{fig:MIII}, then
\[H_{1+1}(S')\cong H_{1+1}(S)\]
\end{itemize}
\begin{figure}
\caption{MOY \rom{3} moves}
\label{fig:MIII}
\end{figure}
\noindent
Note that these isomorphisms follow from Corollary \ref{KhovCor} as graded vector spaces. However, in this section we will make the isomorphisms explicit on the chain level and show that the maps commute with the $U_{i}$-actions.
\subsubsection{MOY 0} Suppose $S$ and $S'$ are singular braids with an even number of strands. Then, it is clear from the definition that
\[C_{1\pm1}(S|S')\simeq C_{1\pm1}(S)\otimes C_{1\pm1}(S'),\]
where $S|S'$ is the disjoint union of $S$ and $S'$. In particular, $C_{1\pm1}(S|\mathcal{U}_2)\simeq C_{1\pm1}(S)\otimes C_{1\pm1}(\mathcal{U}_2)$ for any braid $S$ with an even number of strands. Let $e_1$ and $e_2$ denote the left and right strands of $\mathcal{U}_2$, respectively. The diagram $\mathcal{U}_2$ has two cycles: $Z_{1}=\{e_{1}\}$ and $Z_{2}=\{e_{2}\}$. Let $x_{1}$ and $x_{2}$ be the corresponding generators in $\mathscr{M}(\mathcal{U}_2)$. The relations are given by $U_{1}x_{1}=x_{2}$, $U_{2}x_{2}=0$. Thus, $\mathscr{M}(\mathcal{U}_2) \cong \mathbb{Q}[U_{1}, U_{2}]/(U_{1}U_{2})$. So, the complex $C_{1\pm1}(\mathcal{U}_2)$ becomes
\[
\xymatrix@C+2pc{\mathbb{Q}[U_{1},U_{2}]/(U_{1}U_{2})&\mathbb{Q}[U_{1},U_{2}]/(U_{1}U_{2})\ar[l]^{2U_{1}+2U_{2}}}. \]
Since $x_{Z_{1}}$ lies in quantum grading $1$, the generator of $\mathscr{M}(\mathcal{U}_2)$ also lies in quantum grading $1$ and the homology of this complex is isomorphic to $\mathcal{A}\{1\}$, with $U_{1}$ acting as $U$ and $U_{2}$ acting as $-U$. Thus,
\[ H_{1+1}(S')=H_{1+1}(S|\mathcal{U}_2) \cong H_{1+1}(S) \otimes \mathcal{A}\{1\} \]
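Explicitly, multiplication by $2(U_{1}+U_{2})$ is injective on $\mathbb{Q}[U_{1},U_{2}]/(U_{1}U_{2})$, so the homology of the two-term complex above is its cokernel,
\[ \mathbb{Q}[U_{1},U_{2}]/(U_{1}U_{2},\,U_{1}+U_{2})\;\cong\;\mathbb{Q}[U_{1}]/(U_{1}^{2})=\mathcal{A}, \]
on which $U_{2}$ acts as $-U_{1}$, in agreement with the description above.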
\subsubsection{MOY \rom{2}} Let $\mathsf{X}_i$ be the elementary singular braid where the singularization takes place between strands $i$ and $i+1$ as in Example \ref{ex:sing}. Suppose $S_{\RN{2}}^i=\mathsf{X}_i\circ \mathsf{X}_i$ i.e. $S^i_{\RN{2}}$ is obtained from $\mathsf{X}_i$ by a MOY \rom{2} move as in Figure \ref{fig:MII}.
For each cycle $Z$ in $\mathsf{X}_i$, there is a cycle $\imath(Z)$ in $S_{\RN{2}}^i$ such that $b(Z)=b(\imath(Z))$, $t(Z)=t(\imath(Z))$ and if $Z$ is not locally empty at the crossing then $e_5\subset \imath(Z)$. This induces an $A_n$-bimodule homomorphism $\imath_{\RN{2}}:\mathsf{M}[\mathsf{X}_i]\to\mathsf{M}[S^i_{\RN{2}}]$ such that $\imath_{\RN{2}}(x_Z)=x_{\imath(Z)}$.
\begin{lemma}\label{lem:MII}
With the above notation,
\[\mathsf{M}[S_{\RN{2}}^i]\cong\mathsf{M}[\mathsf{X}_i]\otimes_{A_n}\mathsf{M}[\mathsf{X}_i]\cong\mathsf{M}[\mathsf{X}_i]\{1\}\oplus\mathsf{M}[\mathsf{X}_i]\{-1\}\]
as $A_n$-bimodules. Moreover, $\mathbb{Q}[U_5,U_6]$ acts on this module such that multiplication by $U_5$ is given by
\[\begin{bmatrix}0&-u_iu_{i+1}\\1&u_i+u_{i+1}\end{bmatrix},\] while $U_6=u_i+u_{i+1}-U_5$.
\end{lemma}
\begin{proof}
It is easy to check that the $A_n$-subbimodule $\mathsf{M}_1=\imath_{\RN{2}}(\mathsf{M}[\mathsf{X}_i])$ is canonically isomorphic to $\mathsf{M}[\mathsf{X}_i]$. Furthermore, $U_5\mathsf{M}_1\cong \mathsf{M}_1\cong\mathsf{M}[\mathsf{X}_i]$ and $\mathsf{M}_1\oplus U_5\mathsf{M}_1\subset \mathsf{M}[S_{\RN{2}}^i]$.
For each element $x\in\mathsf{M}[S_{\RN{2}}^i]$, it follows from the relations that $(u_i+u_{i+1})x=x(u_i+u_{i+1})$. Using the linear relation $U_{6} = u_{i} + u_{i+1} - U_{5}$, we can substitute for $U_{6}$ in the internal action on $\mathsf{M}[S_{\RN{2}}^i]$. Similarly, the quadratic relation $U_{5}^{2} = (u_{i}+u_{i+1})U_{5} -u_{i}u_{i+1}$ allows us to substitute for $U_{5}^{2}$. Therefore,
\[\mathsf{M}[S_{\RN{2}}^i]\cong \mathsf{M}_1\oplus U_5\mathsf{M}_1\cong\mathsf{M}[\mathsf{X}_i]\oplus\mathsf{M}[\mathsf{X}_i]\]
as $A_n$-bimodules (up to quantum grading shifts), and $U_5$ and $U_6$ act as described.
For each locally empty cycle $Z$ in $\mathsf{X}_i$, the complement of $Z$ contains one fewer $4$-valent vertex compared to the complement of $\imath(Z)$ in $S_{\RN{2}}^i$. Thus, $\mathfrak{gr}_{q}(x_{\imath(Z)})=\mathfrak{gr}_{q}(x_{Z})+1$ for each locally empty cycle $Z$. Similarly, for the other cycles in $\mathsf{X}_i$ one can check that $\mathfrak{gr}_{q}(x_{\imath(Z)})=\mathfrak{gr}_{q}(x_{Z})+1$. Therefore, $\mathsf{M}_1\cong \mathsf{M}[\mathsf{X}_i]\{1\}$ and so $\mathsf{M}_2=U_5\mathsf{M}_1\cong\mathsf{M}[\mathsf{X}_i]\{-1\}$.
\end{proof}
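For concreteness, the matrix in the statement of Lemma \ref{lem:MII} is just the quadratic relation written out in the ordered generators $x$ and $U_{5}x$ (for $x\in\mathsf{M}_{1}$): multiplication by $U_{5}$ sends
\[ x\longmapsto U_{5}x,\qquad U_{5}x\longmapsto U_{5}^{2}x=-u_{i}u_{i+1}\,x+(u_{i}+u_{i+1})\,U_{5}x, \]
and the characteristic polynomial of the resulting matrix, $\lambda^{2}-(u_{i}+u_{i+1})\lambda+u_{i}u_{i+1}$, recovers that relation.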
\begin{corollary}\label{cor:MII}
Suppose $S=S_1\circ \mathsf{X}_i\circ S_2$ is a singular braid with $2n$ strands. Let $S'=S_1\circ S_{\RN{2}}^i\circ S_2$. Then,
\[H_{1+1}(S')\cong H_{1+1}(S)\otimes_{\mathbb{Q}}\mathcal{A}\{1\}\]
\end{corollary}
\begin{proof} From Lemmas \ref{prop1} and \ref{prop2}, we see that on $H_{1+1}(S')$ we have $U_{5}=-U_{6}$ and $U_{5}^{2}=0$. So \[H_{1+1}(S')\cong H_{1+1}(S)\otimes_{\mathbb{Q}}\mathcal{A}\{1\}\]
with $U_{5}$ acting as $U$ and $U_{6}$ acting as $-U$.
\end{proof}
\begin{lemma}\label{lem:cap} For each odd integer $i$
\[\mathsf{M}[\mathsf{X}_i]\otimes_{A_n}\mathsf{M}[\mycap_i]\cong \mathsf{M}[\mycap_i]\{1\}\oplus\mathsf{M}[\mycap_i]\{-1\}\]
as left $A_n$-modules. Moreover, considering the labels in Figure \ref{fig:MII} $\mathsf{M}[\mathsf{X}_i]\otimes_{A_n}\mathsf{M}[\mycap_i]$ is a module over $\mathbb{Q}[U_5,U_6]$ where $U_5$ acts as
\[
\begin{bmatrix}
0&-u_iu_{i+1}\\
1&u_i+u_{i+1}
\end{bmatrix}
\]
and $U_6=u_i+u_{i+1}-U_5$.
\end{lemma}
\begin{proof}
For each cycle $Z$ in $\mycap_i$, let $\imath(Z)$ be the cycle in $\mathsf{X}_i\circ \mycap_i$ such that $b(Z)=b(\imath(Z))$ and $e_{5}\subset \imath(Z)$. The set of all generators $x_{\imath(Z)}$ generates an $A_n$-subbimodule $\mathsf{M}_1$ of $\mathsf{M}[\mathsf{X}_i\circ \mycap_i]$, which is isomorphic to $\mathsf{M}[\mycap_i]$. Furthermore, $U_5\mathsf{M}_1\cong \mathsf{M}_1$ and \[\mathsf{M}[\mathsf{X}_i]\otimes_{A_n}\mathsf{M}[\mycap_i]\cong\mathsf{M}_1\oplus U_5\mathsf{M}_1.\]
The rest is similar to the proof of Lemma \ref{lem:MII}.
\end{proof}
Similarly, we can prove that:
\begin{lemma}\label{lem:cup}
As right $A_n$-modules
\[\mathsf{M}[\mycup_i]\otimes_{A_n}\mathsf{M}[\mathsf{X}_i]\cong \mathsf{M}[\mycup_i]\{1\}\oplus\mathsf{M}[\mycup_i]\{-1\}.\]
Moreover, considering the labels in Figure \ref{fig:MII} $\mathsf{M}[\mycup_i]\otimes_{A_n}\mathsf{M}[\mathsf{X}_i]$ is a module over $\mathbb{Q}[U_5,U_6]$ where $U_5$ acts as
\[
\begin{bmatrix}
0&-u_iu_{i+1}\\
1&u_i+u_{i+1}
\end{bmatrix}
\]
and $U_6=u_i+u_{i+1}-U_5$.
\end{lemma}
\begin{corollary}
Let $S$ be a singular braid with $2n$ strands. Then, for each odd integer $i$,
\[H_{1+1}(S\circ\mathsf{X}_i)\cong H_{1+1}(\mathsf{X}_i\circ S)\cong H_{1+1}(S)\otimes_{\mathbb{Q}}\mathcal{A}\{1\}.\]
\end{corollary}
\begin{proof}
It is a straightforward corollary of Lemmas \ref{lem:cap} and \ref{lem:cup}, with the same proof as Corollary \ref{cor:MII}.
\end{proof}
Finally, we will review some properties of the chain map $\imath_{\RN{2}}$ that will be used later to prove invariance.
\begin{lemma}\label{lem:propII}
For all $\mathsf{X}_i$ the following hold:
\begin{enumerate}
\item $(d^-\otimes \mathrm{id})\circ \imath_{\RN{2}}=(\mathrm{id}\otimes d^-)\circ\imath_{\RN{2}}=\mathrm{id}$.
\item $\pi_{\RN{2}}^2\circ (d^+\otimes\mathrm{id})=\pi_{\RN{2}}^2\circ (\mathrm{id}\otimes d^+)=\mathrm{id}$, while $\pi_{\RN{2}}^1\circ (d^+\otimes\mathrm{id})=-U_2\mathrm{id}$ and $\pi_{\RN{2}}^1\circ (\mathrm{id}\otimes d^+)=-U_4\mathrm{id}$
\end{enumerate}
Here, for $k=1,2$, $\pi_{\RN{2}}^k:\mathsf{M}[S_{\RN{2}}^i]\to\mathsf{M}[\mathsf{X}_i]$ denotes the projection onto the $k$-th summand.\end{lemma}
\begin{proof}
The $A_n$-bimodule $\mathsf{M}[\mathsf{X}_i]$ is generated by $x_Z$ for all cycles $Z$ in $\mathsf{X}_i$ such that $b(Z)=t(Z)$ and $i+1\notin b(Z)$. It is then clear from the definitions of $\imath_{\RN{2}}$, $\pi_{\RN{2}}$, $d^-$ and $d^+$ that identities (1) and (2) hold for any such $Z$.
\end{proof}
\subsubsection{MOY \rom{1} and MOY \rom{3}} Consider the singular braid $S_{\RN{3}a}^i=\mathsf{X}_{i}\circ\mathsf{X}_{i+1}\circ\mathsf{X}_{i}$ which is obtained from $\mathsf{X}_i$ by applying the MOY \rom{3} move as in Figure \ref{fig:MIII}. We define the $A_n$-bimodule homomorphism \[\imath_{\RN{3}a}:\mathsf{M}[\mathsf{X}_i]\to\mathsf{M}[S_{\RN{3}a}^i]\cong \mathsf{M}[\mathsf{X}_{i}]\otimes_{A_n}\mathsf{M}[\mathsf{X}_{i+1}]\otimes_{A_n}\mathsf{M}[\mathsf{X}_{i}]\]
as the composition $\imath_{\RN{3}a}=(\mathrm{id}\otimes d^+\otimes \mathrm{id})\circ\imath_{\RN{2}}$ where
\begin{multline*}
\mathrm{id}\otimes d^+\otimes \mathrm{id}: \mathsf{M}[S_{\RN{2}}^{i}]\cong\mathsf{M}[\mathsf{X}_{i}]\otimes_{A_n}\mathsf{M}[S_{id}]\otimes_{A_n}\mathsf{M}[\mathsf{X}_{i}]\\ \to \mathsf{M}[\mathsf{X}_{i}]\otimes_{A_n}\mathsf{M}[\mathsf{X}_{i+1}]\otimes_{A_n}\mathsf{M}[\mathsf{X}_{i}].
\end{multline*}
Let $\mathsf{M}_1^a=\imath_{\RN{3}a}(\mathsf{M}[\mathsf{X}_{i}])$.
\begin{lemma}\label{lem:submodMOY3}
The homomorphism $\imath_{\RN{3}a}$ is injective and so $\mathsf{M}_1^a\cong\mathsf{M}[\mathsf{X}_i]$. \end{lemma}
\begin{proof}
For each cycle $Z$ in $\mathsf{X}_i$ there is a unique cycle $\overline{Z}$ in $S_{\RN{3}a}^i$ such that $b(\overline{Z})=b(Z)$, $t(\overline{Z})=t(Z)$ and if $b(Z)\cap\{i,i+1\}\neq\emptyset$ then $e_9\subset \overline{Z}$. It follows from the definition of $d^+$ and $\imath_{\RN{2}}$ that
\[\imath_{\RN{3}a}(x_Z)=\begin{cases}
\begin{array}{lll}
(U_8-U_3)x_{\overline{Z}}&&\text{if}\ i+2\notin b(Z)\\
x_{\overline{Z}}-\epsilon R_{i+2}x_{\overline{u_{i+2}(Z)}}L_{i+2}&&\text{otherwise}
\end{array}
\end{cases}
\]
where $\epsilon=0$ if $i+3\in b(Z)$, otherwise $\epsilon=1$. It is straightforward that $\imath_{\RN{3}a}$ is injective and thus
$\mathsf{M}^a_1\cong\mathsf{M}[\mathsf{X}_i]$.
\end{proof}
As before, $\mathcal{Z}\{i_1,...,i_k\}$ denotes the set of cycles that locally contain the edges $e_{i_1},...,e_{i_k}$ for any subset $\{i_1,...,i_k\}\subset \{1,...,9\}$. Let $\mathsf{M}_2^a$ be the $A_n$-bimodule generated by $x_Z$ for every $Z\in\mathcal{Z}\emptyset\cup\mathcal{Z}\{1,9,4\}$. In particular, $x_Z\in\mathsf{M}_2^a$ for all
\begin{multline*}
Z\in\mathcal{Z}\emptyset\amalg\mathcal{Z}\{1,9,4\}\amalg\mathcal{Z}\{1,9,5\}\amalg\mathcal{Z}\{2,9,4\}\amalg\mathcal{Z} \{2,9,5\}\amalg\\
\mathcal{Z}\{1,7,6\}\amalg \mathcal{Z}\{2,7,6\}\amalg \mathcal{Z}\{3,8,4\}\amalg\mathcal{Z}\{3,8,5\}.
\end{multline*}
Moreover, if $Z\in\mathcal{Z}\{3,6\}$ then $U_9x_Z\in \mathsf{M}_2^a$.
\begin{lemma}\label{lem:MIIIsplit}
The $A_n$-bimodule $\mathsf{M}[S_{\RN{3}a}^i]$ splits as the direct sum $\mathsf{M}_1^a\oplus\mathsf{M}_2^a$.
\end{lemma}
\begin{proof}
The fact that $\mathsf{M}_{1}^a$ and $\mathsf{M}_{2}^a$ intersect trivially follows from checking the local relations, which we leave to the reader. To see that $\mathsf{M}_{1}^a \oplus \mathsf{M}_{2}^a$ spans $\mathsf{M}[S^i_{\RN{3}a}]$, note that we can substitute for $U_{7}$, $U_{8}$, and $U_{9}^{2}$ by the relations
\[
U_{7} = U_{1}+U_{2}-U_{9}, \hspace{5mm} U_{8} = U_{4}+U_{5} - U_{9}, \hspace{5mm} U_{9}^{2} = (U_{1}+U_{2})U_{9} - U_{1}U_{2}
\]
\noindent
The variables $U_{1},\dots,U_{6}$ act through the two $A_{n}$-actions, so $\mathsf{M}[S^i_{\RN{3}a}]$ is generated as an $A_{n}$-bimodule by the elements $x_{Z}$ and $U_{9}x_{Z}$, over all cycles $Z$.
For any cycle $Z$ with $i+2\notin t(Z)\cup b(Z)$, i.e.\ any $Z$ containing neither $e_3$ nor $e_6$, $\mathsf{M}_2^a$ contains $x_Z$ and $\mathsf{M}_1^a$ contains $(U_{3}-U_{8})x_{Z} = (U_{3}-U_{4}-U_{5}+U_{9})x_{Z}$. Since $U_{3}$, $U_{4}$, and $U_{5}$ act through the bimodule structure, the span of these two elements is $\langle x_{Z}, U_{9}x_{Z} \rangle$.
If $Z\in\mathcal{Z}\{3,6\}$, then $\mathsf{M}_1^a$ contains $x_Z-\epsilon R_{i+2}x_{u_{3}(Z)}R_{i+2}$ and $\mathsf{M}_2^a$ contains $x_{u_{3}(Z)}$, so $x_Z$ is in the direct sum. Also, $\mathsf{M}_2^a$ contains $U_9x_{Z}$.
If $Z\in\mathcal{Z}\{1,7,6\}$, then $x_Z\in\mathsf{M}_2^a$ and $U_9x_{Z}=(U_1+U_2-U_7)x_Z$. Thus, it suffices to show that $U_7x_Z\in\mathsf{M}_1^a\oplus\mathsf{M}_2^a$. Note that $U_{7}x_{Z}=R_{i}R_{i+1}x_{U_7(Z)}$ and $U_7(Z)\in\mathcal{Z}\{3,6\}$. So $x_{U_7(Z)}$, and consequently $R_{i}R_{i+1}x_{U_{7}(Z)}$, is in $\mathsf{M}_{1}^a \oplus \mathsf{M}_{2}^a$ as well. The proof for the cycles $Z\in\mathcal{Z}\{2,7,6\}\amalg\mathcal{Z}\{3,8,4\}\amalg\mathcal{Z}\{3,8,5\}$ is similar.
Finally, for any cycle $Z$ in \[\mathcal{Z}\{1,9,4,3,6\}\amalg\mathcal{Z}\{1,9,5,3,6\}\amalg\mathcal{Z}\{2,9,4,3,6\}\amalg\mathcal{Z}\{2,9,5,3,6\},\] $\mathsf{M}_2^a$ contains $x_{u_3(Z)}$, and so $\mathsf{M}_1^a\oplus\mathsf{M}_2^a$ contains $x_{Z}$. Furthermore, $U_9x_Z=0$. Thus, we are done.
\end{proof}
Similar statements hold for the singular braid $S_{\RN{3}b}^i=\mathsf{X}_{i}\circ\mathsf{X}_{i-1}\circ\mathsf{X}_{i}$ obtained from $\mathsf{X}_i$ by applying a type b MOY \rom{3} move, as in Figure \ref{fig:MIII}. Specifically, the $A_n$-bimodule $\mathsf{M}[S_{\RN{3}b}^i]$ splits as the direct sum $\mathsf{M}^b_1\oplus\mathsf{M}_2^b$ where $\mathsf{M}_1^b=\imath_{\RN{3}b}(\mathsf{M}[\mathsf{X}_i])\cong\mathsf{M}[\mathsf{X}_i]$, and $\mathsf{M}_2^b$ gives a cyclic summand of the total complex.
In addition, the summands $\mathsf{M}_2^a$ and $\mathsf{M}_2^b$ of $\mathsf{M}[S_{\RN{3}a}^i]$ and $\mathsf{M}[S_{\RN{3}b}^{i+1}]$ are isomorphic, with isomorphisms
\begin{equation}\label{eq:isomofcyclic}
j_{ab}:\mathsf{M}_2^a\to\mathsf{M}_2^b\ \ \ \text{and}\ \ \ j_{ba}:\mathsf{M}_2^b\to\mathsf{M}_2^a,
\end{equation}
defined as follows. For every cycle $Z\in\mathcal{Z}\emptyset\amalg\mathcal{Z}\{1,9,4\}$ of $S_{\RN{3}a}^i$ there is a unique cycle $Z'\in\mathcal{Z}\emptyset\amalg\mathcal{Z}\{1,4\}$ of $S_{\RN{3}b}^i$ such that $b(Z)=b(Z')=t(Z)=t(Z')$ and vice versa. We set $j_{ab}(x_Z)=x_{Z'}$ and $j_{ba}(x_{Z'})=x_Z$. By checking the local relations, it is straightforward that these relations induce well-defined isomorphisms as above.
\begin{corollary}\label{cor:IIIcap}
Let $S$ be the singular tangle obtained by applying a MOY \rom{3} move to the singular cap $\mycap_i$ (resp. cup $\mycup_i$), as in Figure \ref{fig:MIII}. Then, there is an injective homomorphism from $\mathsf{M}[\mycap_i]$ (resp. $\mathsf{M}[\mycup_i]$) to $\mathsf{M}[S]$
such that its corresponding short exact sequence splits.
\end{corollary}
\begin{proof} Without loss of generality, we assume that $S=\mathsf{X}_i\circ\mathsf{X}_{i+1}\circ\mycap_i$ i.e. the singular tangle obtained from applying type a MOY \rom{3} move to $\mycap_i$. Let $S'=S_{\RN{3}a}^i\circ\mycap_i$, and consider the $A_n$-bimodule homomorphism
\[\imath_{\RN{3}a}\otimes\mathrm{id}:\mathsf{M}[\mathsf{X}_i]\otimes_{A_n}\mathsf{M}[\mycap_i]\to \mathsf{M}[S']\cong\mathsf{M}[S_{\RN{3}a}^i]\otimes_{A_n}\mathsf{M}[\mycap_i].\]
By Lemma \ref{lem:cap} we have
\[
\begin{split}
&\mathsf{M}[S']\cong \mathsf{M}[S]\{1\}\oplus\mathsf{M}[S]\{-1\}\\
&\mathsf{M}[\mathsf{X}_i\circ\mycap_i]\cong\mathsf{M}[\mycap_i]\{1\}\oplus\mathsf{M}[\mycap_i]\{-1\}
\end{split}
\]
It is clear from the definition that $\imath_{\RN{3}a}\otimes\mathrm{id}$ respects these splittings, and induces an $A_n$-bimodule homomorphism
\[\imath:\mathsf{M}[\mycap_i]\to\mathsf{M}[S].\]
Then, injectivity of $\imath_{\RN{3}a}$ implies injectivity of $\imath$ and since the short exact sequence for $\imath_{\RN{3}a}$ splits, the short exact sequence for $\imath$ also splits.
\end{proof}
\begin{lemma}\label{lem:MIII}
Let $S$ and $S'$ be singular braids such that $\mathsf{p}(S')$ is obtained from $\mathsf{p}(S)$ by the MOY \rom{3} local move in Figure \ref{fig:MIII}. Then, the induced chain map from $C_{1\pm1}(S)$ to $C_{1\pm 1}(S')$ by the homomorphism $\imath$ from Lemma \ref{lem:submodMOY3} or Corollary \ref{cor:IIIcap} is a quasi-isomorphism.
\end{lemma}
\begin{proof}
By Lemmas \ref{lem:submodMOY3} and \ref{lem:MIIIsplit}, and Corollary \ref{cor:IIIcap}, we get a splitting $C_{1\pm1}(S')=C_1\oplus C_2$ so that $H_{*}(C_1)=H_{1+1}(S)$. By applying Corollary \ref{KhovCor}, $\dim H_{1+1}(S) = \dim H_{1+1}(S')$, so all of the homology must come from the complex $C_{1}$. Thus, $C_2$ is acyclic and the chain map induced by $\imath$ is a quasi-isomorphism.
\end{proof}
\begin{corollary}(MOY \rom{1}) Let $S$ and $S'$ be singular braids such that $\mathsf{p}(S')$ is obtained from $\mathsf{p}(S)$ by one of the local moves in Figure \ref{fig:MI}. Then, $H_{1+1}(S)\cong H_{1+1}(S')$.
\end{corollary}
\begin{proof}
Suppose $S'$ and $S$ differ by a type a MOY \rom{1} move, and the extra $4$-valent vertex of $S'$ lies between the second and third strands. Let $S''=S'\circ \mathsf{X}_1$. Then, $\mathsf{p}(S'')$ is obtained from $\mathsf{p}(S)\amalg \mathsf{p}(\mathcal{U}_2)$ by a MOY \rom{3} move, so $\imath_{\star}$ induces an isomorphism $H_{1+1}(S'')\cong H_{1+1}(S)\otimes_{\mathbb{Q}}\mathcal{A}\{1\}$. On the other hand, $\mathsf{p}(S'')$ differs from $\mathsf{p}(S')$ by a MOY \rom{2} move. Thus, $H_{1+1}(S'')\cong H_{1+1}(S')\otimes_{\mathbb{Q}}\mathcal{A}\{1\}$. It is straightforward to check that $\imath_{\star}$ induces an isomorphism between $H_{1+1}(S')$ and $H_{1+1}(S)$.
\end{proof}
Finally, we will review some properties of this splitting that will be used in Section \ref{sec:invariance} for proving invariance.
For $\circ\in\{a,b\}$, let $\pi_{\RN{3}\circ}^1$ be the map from $\mathsf{M}[S_{\RN{3}\circ}^i]$ to $\mathsf{M}[\mathsf{X}_i]$ defined by composing the projection on $\mathsf{M}_1^\circ$ with $\imath_{\RN{3}\circ}^{-1}$, and $\pi_{\RN{3}\circ}^2$ be the projection on $\mathsf{M}_2^\circ$.
\begin{lemma}\label{lem:propIII-1}
For every $i$ and $\circ=a,b$ we have:
\[\pi_{\RN{3}\circ}^1=-\pi_{\RN{2}}^2\circ (\mathrm{id}\otimes d^-\otimes\mathrm{id})\]
\end{lemma}
\begin{proof}
By definition,
\[\begin{split}
\pi_{\RN{2}}^2\circ(\mathrm{id}\otimes d^-\otimes \mathrm{id})\circ \imath_{\RN{3}a}&=\pi_{\RN{2}}^2\circ(\mathrm{id}\otimes d^-\otimes \mathrm{id})\circ (\mathrm{id}\otimes d^+\otimes \mathrm{id})\circ\imath_{\RN{2}}\\
&=\pi_{\RN{2}}^2\circ (U_6 \imath_{\RN{2}})= \pi_{\RN{2}}^2\circ ((U_3+U_4-U_5) \imath_{\RN{2}})=-\mathrm{id}.
\end{split}\]
Note that in the second line of the above equalities, we are considering $S_{\RN{2}}^i$ with the labeling as in Figure \ref{fig:MII}.
\end{proof}
Let $S=\mathsf{X}_i\circ \mathsf{X}_{i+1}$. Then,
\[S_{\RN{3}a}^i=S\circ \mathsf{X}_i\quad \text{and}\quad S_{\RN{3}b}^{i+1}=\mathsf{X}_{i+1}\circ S.\]
The edge homomorphism $d^+$ induces homomorphisms $f_a=\mathrm{id}\otimes d^+$ and $f_b=d^+\otimes \mathrm{id}$ from $\mathsf{M}[S]$ to $\mathsf{M}[S_{\RN{3}a}^i]$ and $\mathsf{M}[S_{\RN{3}b}^{i+1}]$.
Moreover, $\mathsf{X}_i$ and $\mathsf{X}_{i+1}$ are obtained from $S$ by smoothing the top and bottom singular points, respectively. So, the edge map $d^-$ induces homomorphisms $g_a=\mathrm{id}\otimes d^-$ and $g_b=d^-\otimes \mathrm{id}$ from $\mathsf{M}[S]$ to $\mathsf{M}[\mathsf{X}_i]$ and $\mathsf{M}[\mathsf{X}_{i+1}]$.
\begin{lemma}\label{lem:propIII-2} With the above notation fixed,
\begin{enumerate}
\item For $\circ=a,b$ we have $g_\circ=-\pi_{\RN{3}\circ}^1\circ f_\circ$.
\item $j_{ab}\circ \pi_{\RN{3}a}^2\circ f_a=\pi_{\RN{3}b}^2\circ f_b$.
\end{enumerate}
\end{lemma}
\begin{proof} First, we prove part (1). For the singular braid $S_{\RN{2}}^i$, the edge maps induce homomorphisms $\mathrm{id}\otimes d^-\otimes\mathrm{id}$ and $\mathrm{id}\otimes d^+$ from $\mathsf{M}[S_{\RN{3}a}^i]$ and $\mathsf{M}[\mathsf{X}_i]$ to $\mathsf{M}[S_{\RN{2}}^i]$, respectively. By Lemma \ref{lem:propIII-1} we have
\[
-\pi_{\RN{3}a}^1\circ f_a=\pi_{\RN{2}}^2\circ (\mathrm{id}\otimes d^-\otimes \mathrm{id})\circ f_a=\pi_{\RN{2}}^2\circ(\mathrm{id}\otimes d^+)\circ g_a=g_a
\]
The last equality follows from part (2) of Lemma \ref{lem:propII}. The proof for $\circ=b$ is similar.
For part (2), it is enough to check it for the generators $x_Z$ where
\[Z\in\mathcal{Z}\emptyset\amalg\mathcal{Z}\{1,4\}\amalg\mathcal{Z}\{3,5\}\amalg \mathcal{Z}\{1,3,4,5\}\]
Each $Z\in \mathcal{Z}\emptyset\amalg\mathcal{Z}\{1,4\}\amalg\mathcal{Z}\{3,5\}$ extends to unique cycles $Z_a$ and $Z_b$ in $S_{\RN{3}a}^i$ and $S_{\RN{3}b}^i$ such that $b(Z)=t(Z)=b(Z_a)=t(Z_a)=b(Z_b)=t(Z_b)$. Moreover, $j_{ab}(x_{Z_a})=x_{Z_b}$. It follows from the definitions that for $Z\in \mathcal{Z}\emptyset\amalg\mathcal{Z}\{1,4\}$
\[j_{ab}\circ\pi_{\RN{3}a}^2\circ f_a(x_Z)=j_{ab}\left((U_4-U_3)x_{Z_a}\right)=\pi_{\RN{3}b}^2\circ f_b(x_Z),\]
and for $Z\in\mathcal{Z}\{3,5\}$
\[j_{ab}\circ\pi_{\RN{3}a}^2\circ f_a(x_Z)=j_{ab}\left(x_{Z_a}-R_{i+2}x_{u_3(Z)_a}L_{i+2}L_{i+1}\right)=\pi_{\RN{3}b}^2\circ f_b(x_Z).\]
Finally, for $Z\in \mathcal{Z}\{1,3,4,5\}$,
\[j_{ab}\circ\pi_{\RN{3}a}^2\circ f_a(x_Z)=-j_{ab}\left(R_{i+2}x_{u_3(Z)_a}L_{i+2}L_{i+1}\right)=\pi_{\RN{3}b}^2\circ f_b(x_Z).\]
\end{proof}
Similarly, the edge homomorphism $d^-$ induces homomorphisms $f_a'=\mathrm{id}\otimes d^-$ and $f_b'=d^-\otimes \mathrm{id}$ from $\mathsf{M}[S^{i}_{\RN{3}a}]$ and $\mathsf{M}[S^{i+1}_{\RN{3}b}]$ to $\mathsf{M}[S]$. Further, $d^+$ induces homomorphisms $g_a'=\mathrm{id}\otimes d^+$ and $g_{b}'=d^+\otimes\mathrm{id}$ from $\mathsf{M}[\mathsf{X}_i]$ and $\mathsf{M}[\mathsf{X}_{i+1}]$ to $\mathsf{M}[S]$.
\begin{lemma}\label{lem:propIII-3} For $\circ=a,b$ we have $g_{\circ}'=f_{\circ}\circ \imath_{\RN{3}\circ}$ and $f_a'|_{\mathsf{M}_2^a}=f_b'\circ j_{ab}$.
\end{lemma}
\begin{proof}
The proof is similar to the proof of Lemma \ref{lem:propIII-2}.
\end{proof}
Analogous statements hold for $S=\mathsf{X}_{i+1}\circ \mathsf{X}_i$.
\subsection{The Module Action for Singular Braids} \label{ModuleActionSection} Let $S$ be a singular braid such that $\mathsf{sm}(\mathsf{p}(S))$ consists of $k$ circles. In Corollary \ref{modulelemma1}, we showed that $H_{1+1}(S)$ is a module over $\mathcal{A}^{\otimes k}$. Moreover, from Corollary \ref{KhovCor}, we know that the rank over $\mathbb{Q}$ is $2^k$. In this section, we will prove the following:
\begin{theorem} \label{SingularKhovanovTheorem}
The homology $H_{1+1}(S)$ is a free, rank one $\mathcal{A}^{\otimes k}$-module.
\end{theorem}
We will prove this by induction on the number of components in $\mathsf{sm}(\mathsf{p}(S))$. Let $c$ be a component of $\mathsf{sm}(\mathsf{p}(S))$ which does not contain any other components, and let $E(c)$ be the set of edges in $c$. In $S$, the edges $E(c)$ bound a set of regions $A_{i}$ which make up the interior of $c$ in $\mathsf{sm}(\mathsf{p}(S))$. Some of these regions will meet at a 4-valent vertex as in Figure \ref{Tree1}.
\begin{figure}
\caption{Two regions meeting at a 4-valent vertex}
\label{Tree1}
\end{figure}
\begin{definition}
Define the graph $G(c)$ to have vertices $A_{i}$, and add in an edge between $A_{i}$ and $A_{j}$ for each 4-valent vertex as in Figure \ref{Tree1}.
\end{definition}
Since the edges correspond to a single circle when the singularizations are replaced with smoothings, this graph must be a tree.
\begin{definition}
We say that a region $A_{i}$ meets a vertex $v$ from the left, right, top, or bottom, depending on which of the 4 quadrants are occupied.
\end{definition}
\begin{lemma} \label{prunelemma}
Let $\mathcal{L}$ be a leaf on the tree $G(c)$. Then there is a MOY I or a MOY III move which trims this leaf.
\end{lemma}
\begin{proof}
Let $A_{i}$ be the vertex on $\mathcal{L}$, and $v$ the unique 4-valent vertex at which $A_{i}$ meets another region $A_{j}$. Without loss of generality, assume that $A_{i}$ meets $v$ from the left. Since the boundary of $A_{i}$ cannot meet any other vertices from the left or right, the only option is for it to meet a single vertex from the bottom and a single vertex from the top (see Figure \ref{Tree2}). The resulting region corresponds to a MOY I or a MOY III move, and applying this relation trims the leaf.
\end{proof}
\begin{figure}
\caption{A leaf on the tree $G(c)$}
\label{Tree2}
\end{figure}
\begin{proof}[Proof of Theorem \ref{SingularKhovanovTheorem}]
Given an innermost circle $c$ in $\mathsf{sm}(\mathsf{p}(S))$, we can recursively apply MOY I and MOY III moves as in Lemma \ref{prunelemma} to prune $G(c)$ down to a single vertex. This gives a new singular diagram $S'$ with $H_{1+1}(S)=H_{1+1}(S')$.
In $S'$, the circle $c$ has been replaced with a circle $c'$ with $G(c')$ equal to a single vertex. Let $A'_{i}$ denote the corresponding region in $S'$. Since $A'_{i}$ cannot meet any vertices from the left or right, it must meet a single vertex from the bottom and a single vertex from the top, i.e. it is a bigon. Thus, we can apply a MOY 0 or a MOY II move to remove the two edges bounding $A'_{i}$, obtaining a new diagram $S''$ satisfying
\[ H_{1+1}(S) \cong H_{1+1}(S'') \otimes \mathcal{A} \]
For $e_{i}$ in $E(c)$, $U_{i}$ acts on $H_{1+1}(S'') \otimes \mathcal{A}$ by $\text{id} \otimes (-1)^{l}U$. The remaining edges in $S$ correspond to edges in $S''$. The smoothed diagram $\mathsf{sm}(\mathsf{p}(S''))$ has $k-1$ components, so by the inductive hypothesis $H_{1+1}(S'') \cong \mathcal{A}^{\otimes(k-1)}$, proving the theorem.
\end{proof}
\subsection{The Quantum and $\delta$-gradings on the Total Complex} Now that we understand each vertex in the cube of resolutions, it is time to turn our attention to the total complex. Before identifying the edge maps with the Khovanov edge maps, we will need the total complex $C_{1 \pm 1}(D)$ to be graded.
We have a grading $\mathfrak{gr}_{q}$ on each vertex of the cube, with respect to which the variables $U_{i}$ have grading $-2$, and the differential $d_{0}$ is homogeneous of degree $-2$. We extend $\mathfrak{gr}_{q}$ to the total complex as follows: let $S = D_{v}$ be a complete resolution of $D$, and let $|v|$ be the height of $v$ in the cube of resolutions. Then we give $\mathfrak{gr}_{q}$ a shift of $|v|$ from the definition in Section \ref{gradingsect1}.
To be more precise, if $Z$ is a cycle in $S=D_{v}$, then $x_{Z}$ has quantum grading
\[ \mathfrak{gr}_{q}(x_{Z}) = T_{1}(Z)-T_{2}(Z)+E(Z)+w(Z) +|v|+n_{+}-2n_{-} \]
With respect to $\mathfrak{gr}_{q}$, we can see by inspection that the edge maps $d_{1}$ are homogeneous of degree $0$. Unfortunately, since $d_{0}$ is homogeneous of degree $-2$, $\mathfrak{gr}_{q}$ does not give a well-defined grading on the total homology.
\begin{definition}
The $\delta$-grading $\mathfrak{gr}_{\delta}$ is given by $\mathfrak{gr}_{q} - 2 |v| + 2 n_{-}$.
\end{definition}
With respect to the $\delta$-grading, $d_{0}+d_{1}$ is homogeneous of degree $-2$, and multiplication by $U_{i}$ is as well.
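As a quick degree check: the internal differential $d_{0}$ fixes $|v|$ and lowers $\mathfrak{gr}_{q}$ by $2$, while an edge map $d_{1}$ raises $|v|$ by $1$ and preserves $\mathfrak{gr}_{q}$, so in both cases
\[ \mathfrak{gr}_{\delta}=\mathfrak{gr}_{q}-2|v|+2n_{-} \]
drops by exactly $2$. Multiplication by $U_{i}$ also fixes $|v|$ and lowers $\mathfrak{gr}_{q}$ by $2$, hence lowers $\mathfrak{gr}_{\delta}$ by $2$.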
\begin{lemma}
Under the module isomorphism
\[ H_{*}(C_{1 \pm 1}(D), d_{0}) \cong CKh(\mathsf{p}(D)) \]
\noindent
the gradings $\mathfrak{gr}_{q}$ and $\mathfrak{gr}_{\delta}$ on $C_{1 \pm 1}(D)$ correspond to the gradings $\gr_{q}$ and $\gr_{\delta}$ on the Khovanov complex.
\end{lemma}
\begin{proof}
Applying the MOY relations to a complete resolution $D_{v}$, we see that the generator of the homology $H_{1+1}(D_{v})$ lies in quantum grading $k_{v}+|v|+n_{+}-2n_{-}$, just as in the Khovanov complex. The $\delta$-gradings being the same follows from the definition of the $\delta$-grading on Khovanov homology.
\end{proof}
\iffalse
\section{Edge Maps}
\subsection{Definition of the Edge Maps}
Suppose $S$ is a singular braid. For each $4$-valent vertex $v$ of $S$ denote the singular braid obtained by smoothing (\emph{unzipping}) $v$ by $S^u$ ( See Figure ....) \todo{AA add figure here!}. Suppose $e_1$ and $e_2$ are the incoming edges, respectively left and right, and $e_3$ and $e_4$ are the outgoing edges of $v$, respectively left and right, in $S$. Correspondingly, let $e_1'$ and $e_2'$ be the left and right edges in $S^u$, respectively. There is a natural quotient homomorphism
\[q:R(S)\to R(S^u)\]
such that $q(U_1)=q(U_3)=U_1'$ and $q(U_2)=q(U_4)=U_2'$. Here, $U_i$ is the variable corresponding to $e_i$ for $1\le i\le 4$, while $U_i'$ is the variable corresponding to $e_i'$ for $i=1,2$. So, $q$ makes $\mathscr{M}(S^u)$ into an $R(S)$-modules. In this section, we define $R(S_0)$-module homomorphisms
\[\begin{split}
&d_{v}^-: \mathscr{M}(S)\to \mathscr{M}(S^u)\\
&d_{v}^+: \mathscr{M}(S^u)\to \mathscr{M}(S).
\end{split}
\]
First, we define $d_v^-$. For any local cycle $\bullet\in\{\emptyset, e_1e_3, e_1e_4, e_2e_3, e_2e_4\}$, let $\mathcal{Z}_{\bullet}$ denote the set of all cycles in $S$ that contain $\bullet$. Each cycle $Z$ in $\tilde{\mathcal{Z}}=\mathcal{Z}_{\emptyset}\cup\mathcal{Z}_{e_1e_3}\cup\mathcal{Z}_{e_2e_4}$ naturally corresponds to a cycle in $S^u$, by unzipping the crossing. Denote this cycle by $Z^u$.
Using this correspondance, we define an $R(S)$-module homomorphism $d_v^-$ satisfying the followings:
\[
d_v^-(x_Z)=\begin{cases}
\begin{array}{lcl}
x_{Z^u}&&Z\in\mathcal{Z}_\emptyset\cup\mathcal{Z}_{e_1e_3}\\
U(D(Z,e_1))x_{e_1(Z)^u}&&Z\in \mathcal{Z}_{e_1e_4}\\
U(D(Z,e_3))x_{e_3(Z)^u}&&Z\in \mathcal{Z}_{e_2e_3}\\
U_1 x_{Z^u}&&Z\in\mathcal{Z}_{e_2e_4}
\end{array}
\end{cases}
\]
\begin{lemma}
$d_v^-$ is well-defined.
\end{lemma}
\begin{proof}
We should check that $d_v^-$ commutes with multiplication by $U_i$ for every $i$. For $i\notin \{1,2,3,4\}$, the only nontrivial case is when multiplication by $U_i$ maps a cycle $Z\in\mathcal{Z}_{\emptyset}$ to a cycle $Z_{13}\in\mathcal{Z}_{e_1e_3}$, with coefficient $P=U(D(Z,e_i))$. It is clear that $P$ is a monomial which doesn't contain any of the variables $U_1,U_2, U_3$ and $U_4$. Thus,
\[\begin{split}
d_{v}^-(U_i x_{Z})&=d_v^-(P x_{Z_{13}})=Pd_{v}^-(x_{Z_{13}})=P x_{Z^u_{13}}\\
&=U_i x_{Z^u}=U_id_{v}^-(x_Z).
\end{split}\]
Note that the first equality of the second line follows from the equivalence of the left boundary of $D(Z,e_i)$ and $D(Z^u,e_i)$.
Next, suppose $i=1$ and $Z\in\mathcal{Z}_{e_1e_3}$. Denote $P=U(D(Z,e_1))$ and $Q=U(D(Z,e_3))$. Both $P$ and $Q$ are monomials not containing $U_1, U_2, U_3$ and $U_4$ and $U_1 x_{Z}=P x_{Z_{23}}$ and $U_3 x_{Z_{23}}=Q x_{Z_{24}}$ where $Z_{23}=e_1(Z)\in \mathcal{Z}_{e_2e_3}$ and $Z_{24}=e_3(e_1(Z))\in\mathcal{Z}_{e_22_4}$.
\[
\begin{split}
d_v^-(U_1 x_Z)&=d_v^-(P x_{Z_{23}})=Pd_v^-(x_{Z_{23}})=PQ x_{Z_{24}^u}\\
&=U_1 x_{Z^{u}}=U_1d_v^-(x_Z).
\end{split}\]
The disk $D(Z^u,e_1'=e_1e_3)$ is obtained from concatenation of the disks $D(Z,e_1)$ and $D(Z,e_3)$ after smoothing the crossing, which concludes the first equality of the second line holds.
\todo{AA add a fig?}
The argument for the rest of cases is similar, so we leave it for the reader to check.
\end{proof}
Next, we define $d_v^+$. Denote the set of all cycle in $S^u$ by $\mathcal{Z}^u$ and the set of cycles that contain a local cycle $\bullet\in\{\emptyset,e_1', e_2', e_1'e_2'\}$ by $\mathcal{Z}^u_{\bullet}$. Every cycle $Z$ that is not locally equal to $e_1'e_2'$ is obtained from unzipping some cycle in $S$, denoted by $Z^z$. So $(Z^z)^u=Z$. Then, the $R(S)$-module homomorphism $d_v^+$ is defined by the relations:
\[
d_v^+(x_{Z})=\begin{cases}
\begin{array}{lll}
(U_3-U_2)x_{Z^z}&&Z\in \mathcal{Z}_{\emptyset}^u\cup\mathcal{Z}_{e_1'}^u\\
x_{Z^z}-U(D(Z,e'_2))x_{e'_2(Z)^z}&&Z\in\mathcal{Z}_{e_2'}^u\\
-U(D(Z,e_2'))x_{e_2'(Z)^z}&&Z\in\mathcal{Z}_{e_1'e_2'}^u.
\end{array}
\end{cases}
\]
Note that for cycles $Z$ containing $e_2'$ coefficient of the disk $D(Z,e_2')$ is a monomial which doesn't contain $U_1'$ and $U_2'$. So abusing the notation $U(D(Z,e_2'))$ denotes the corresponding monomial as an element of $R(S)$.
\begin{lemma}
$d_v^+$ is a well-defined $R(S)$-module homomorphism.
\end{lemma}
\begin{proof}
Every element of $\mathscr{M}(S^u)$ is in the kernel of multiplication by $U_1-U_3$ and $U_2-U_4$. Thus, we need to show that $U_1-U_3$ and $U_2-U_4$ act trivially on the image of $d_v^+$. The linear relation $L_v$ implies that multiplication by $U_1-U_3$ is equal to $U_4-U_2$. So,
\[(U_4-U_2)(U_3-U_2)x_{Z}=[U_2^2-(U_3+U_4)U_2+U_3U_4]x_Z=[U_2^2-(U_1+U_2)+U_1U_2]x_Z=0.\]
Note that the quadratic relation $Q_v$ implies $U_1U_2=U_3U_4$ in $R(S)$. Thus, for a cycle $Z$ that is locally empty or equal to $e_1'$ we have \[(U_3-U_1)d_v^+(x_Z)=(U_4-U_2)d_v^+(x_Z)=0.\]
If $Z$ is locally equal to $e_2'$, then $U(D(Z^z,e_2))=U_3U(D(Z,e_2'))$ and $U(D(Z^z,e_4))=U_1U(D(Z,e_2'))$. Therefore,
\[(U_4-U_2)d_v^+(x_Z)=(U_4-U_2)x_{Z^z}-(U_1-U_3)U(D(Z,e_2'))x_{e_2'(Z)^z}=0.\]
The only case left is $Z\in\mathcal{Z}_{e_1'e_2'}^u$. In this case, $U_1 x_Z=U_3 x_Z=0$, so we need to show that both $U_1$ and $U_3$ act trivially on $d_v^+(x_Z)$. Let $P_I$ and $P_O$ denote the monomials obtained by multiplying the variables associated to incoming and outgoing edges on the left boundary of $D(Z,e_2')$, respectively, i.e $P_I=\prod_{e_i\subset I(D(Z,e_2'))}U_i$ and $P_O=\prod_{e_i\subset O(D(Z,e_2'))}U_i$. Thus $U(D(Z,e_2'))=P_IP_O$. An argument similar to the proof of part $(c)$ of Lemma \ref{ProductLemma} concludes that $U_1P_I x_{e_2'(Z)^z}=0$ and $U_3P_{O} x_{e_2'(Z)^z}=0$.
There is no obstruction for defining $d_v^+$ such that it commutes with multiplication by $U_i$, unless it changes the local cycle. First, suppose $Z$ is a locally empty cycle and multiplication by $U_i$ maps it to a cycle $Z_{1}$ which is locally equal to $e_1'$. Then,
\[d_v^+(U_i x_{Z})=d_v^+(P x_{Z_{1}})=P(U_3-U_2)x_{Z_{1}^z}=U_id_{v^+}(x_{Z}),\]
where $P=U(D(Z,e_i))=U(D(Z^z,e_i))$.
Second, let $Z$ be a cycle locally equal to $e_1'$ and $i=1$. Then,
\[\begin{split}
d_v^+(U_1 x_Z)&=d_v^+(P x_{Z_2})=P x_{Z_2^z}-PQ x_{e_2'(Z_2)^z}\\
&=U_1U_3 x_{Z^z}-U_1U_2 x_{Z^z}=U_1d_v^+(x_Z)
\end{split}\]
where $Z_2=e_2'(Z)$, $P=U(D(Z,e_1'))$ and $Q=U(D(Z_2,e_2'))$. Similarly, $d_v^+$ commutes with $U_3$ in this case.
Third, assume $Z\in\mathcal{Z}_{e_2'}^u$. Then, two types of multiplications can change $Z$: (1) $U_2$ (or $U_4$) maps $Z$ to a locally empty cycle denoted by $Z_{\circ}$, and (2) there might exist some $U_i$ that maps $Z$ to a cycle $Z_{12}\in \mathcal{Z}_{e_1'e_2'}^u$. In case (1):
\[\begin{split}
U_2d_v^+(x_Z)&=U_2 x_{Z^z}-U_2P x_{Z_{\circ}^z}=U_3P x_{Z_{\circ^z}}-U_2P x_{Z_{\circ}^z}\\
&=d_v^+(P x_{Z_{\circ}^z})=d_v^+(U_2 x_Z)
\end{split}\]
where $P=U(D(Z,e_2'))$. Proof for $U_4$ is similar. For case (2) if such $U_i$ exists, it is clear from the definition of $d_v^+$ that it commutes with multiplication by $U_i$.
Finally, for $Z$ locally equal to $e_1'e_2'$, multiplication by $U_2$ (or $U_4$) maps $Z$ to a cycle $Z_1\in\mathcal{Z}_{e_1'}^u$. Then, $U_3d_v^+(x_Z)=U_1d_v^+(x_Z)$ concludes the claim. More precisely,
\[d_v^+(U_2 x_Z)=d_v^+(P x_{Z_1})=(U_3-U_2)P x_{Z_x^z}=-U_2P x_{Z_1^z}=U_2d_v^+(x_Z),\]
where $P=U(D(Z,e_2'))$. Similar proof works for $U_4$ instead of $U_2$.
\end{proof}
\fi
\subsection{Isomorphism Between $H_{1+1}(L)$ and Khovanov Homology}
In this section, we will show that the $E_{2}$ page of the spectral sequence on $C_{1 \pm 1}(D)$ induced by the cube filtration is isomorphic to the Khovanov homology of $\mathsf{p}(D)$.
\begin{theorem} \label{FullKhovIsoTheorem}
Let $D$ be an open braid whose plat closure $\mathsf{p}(D)$ is a diagram for a link $L$. Then
\[H_{1+1}(D)=H_{*}(H_{*}(C_{1\pm 1}(D), d_{0}), d^{*}_{1}) \cong Kh(L) \]
\end{theorem}
Let $S$ be a complete resolution of $D$. In Theorem \ref{SingularKhovanovTheorem}, we showed that $H_{1+1}(S)$ is isomorphic to $Kh(\mathsf{sm}(\mathsf{p}(S)))$. Thus, $H_{*}(C_{1 \pm 1}(D), d_{0})$ is isomorphic to the Khovanov complex as a module, so Theorem \ref{FullKhovIsoTheorem} will follow from the induced edge maps being the Khovanov edge maps.
\begin{proof}
Let $S_{1}$ and $S_{2}$ be two complete resolutions which differ at a single crossing, with $S_{1}$ having the singularization and $S_{2}$ the smoothing, and suppose the edges are labeled $e_{1}, e_{2}, e_{3}, e_{4}$ as in Figure \ref{labeled4v}. There are two edge maps, corresponding to a positive and negative crossing, respectively:
\[ d_{1}^{-}: C_{1 \pm 1}(S_{1}) \to C_{1 \pm 1}(S_{2}) \]
\[ d_{1}^{+}: C_{1 \pm 1}(S_{2}) \to C_{1 \pm 1}(S_{1}) \]
Applying Lemma \ref{lem:comp}, we see that
\[ d_{1}^{-} \circ d_{1}^{+} = d_{1}^{+} \circ d_{1}^{-}= U_{1}-U_{4} \]
\noindent
Thus, after taking homology with respect to $d_{0}$, the induced edge maps satisfy the same property.
After identifying $\mathbb{Q}[U_{1},...,U_{k}]$ with the Khovanov ground ring $\mathbb{Q}[X_{1},...,X_{k}]$ via $X_{i}=(-1)^{b(i)}U_{i}$, where $b(i)$ is the number of the strand on which $e_{i}$ lies, we get
\[ (d_{1}^{-})^{*} \circ (d_{1}^{+})^{*} = (d_{1}^{+})^{*} \circ (d_{1}^{-})^{*}= \pm(X_{1}+X_{4}) \]
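As a sign check, note that $U_{i}=(-1)^{b(i)}X_{i}$; assuming (per Figure \ref{labeled4v}) that $e_{1}$ and $e_{4}$ lie on adjacent strands, so that $b(1)$ and $b(4)$ have opposite parities, we get
\[ U_{1}-U_{4}=(-1)^{b(1)}X_{1}-(-1)^{b(4)}X_{4}=(-1)^{b(1)}\bigl(X_{1}+X_{4}\bigr). \]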
In other words, up to sign, the composition of our two edge maps is precisely the composition of the two Khovanov edge maps. Since the edge maps are module homomorphisms, this composition uniquely determines the two edge maps up to scaling by some non-zero element of $\mathbb{Q}$.
Consider first the homomorphism corresponding to a merge of two circles. Since the map preserves the quantum grading, the only option is
\[ 1 \mapsto r \]
\noindent
for some nonzero $r \in \mathbb{Q}$. Thus, the merge map is $r$ times the Khovanov merge map. The split map then sends $r$ to $\pm(X_{1}+X_{4})$, so it is $\pm 1/r$ times the Khovanov split map.
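Spelled out, write $m_{Kh}$ and $\Delta_{Kh}$ for the Khovanov merge and split maps (notation introduced only for this remark). If our merge map is $r\,m_{Kh}$, then
\[ \mathrm{split}\circ\mathrm{merge}(1)=\mathrm{split}(r\cdot 1)=\pm(X_{1}+X_{4})=\pm\,\Delta_{Kh}\circ m_{Kh}(1), \]
so $\mathrm{split}(1)=\pm\tfrac{1}{r}\,\Delta_{Kh}(1)$, and module-linearity then forces $\mathrm{split}=\pm\tfrac{1}{r}\,\Delta_{Kh}$.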
So we have shown that each edge map is the Khovanov edge map, up to scaling. It is not hard to see that any such complex is isomorphic to the Khovanov complex, where the isomorphism is obtained by scaling the complex at each vertex so that the edge maps become the Khovanov edge maps.
\end{proof}
\iffalse
\begin{proof}
Suppose $c$ is a positive crossing in $D$. Let $S_{0}$, $S_{1}$ denote two complete resolutions of $D$ which agree on all crossings except $c$, with $S_{0}$ having the 0-resolution at $c$ and $S_{1}$ having the 1-resolution.
Replacing $\mathsf{p}(S_{0})$ and $\mathsf{p}(S_{1})$ with $\mathsf{sm}(\mathsf{p}(S_{0}))$ and $\mathsf{sm}(\mathsf{p}(S_{1}))$, the edge from $S_{0}$ to $S_{1}$ corresponds to either a merge or a split between two circles. These two circles are either concentric or not concentric, giving a total of 4 cases, which we will treat separately.
\textbf{Case 1: Merge of non-concentric circles.} Let $\gamma_{1}$ and $\gamma_{2}$ be the two circles in $s(S_{0})$, and let $\gamma'$ be their merge in $s(S_{0})$.
The circles $\gamma_{1}$ and $\gamma_{2}$ may contain other circles $\gamma_{i}$ - each such circle has a corresponding circle $\gamma'_{i}$ in $s(S_{1})$. In Section \ref{ModuleActionSection}, we showed that any innermost circle can be removed by a sequence of MOY relations. Applying this process repeatedly on an innermost circle, we can remove all of the $\gamma_{i}$ contained in $\gamma_{1}$ or $\gamma_{2}$. Let $S'_{0}$ denote the resulting diagram. These same relations can be applied to remove the circles $\gamma'_{i}$ from $S_{1}$ - let $S'_{1}$ denote the resulting diagram.
In these new diagrams, $\gamma_{1}$, $\gamma_{2}$, and $\gamma'$ are now innermost circles. There are corresponding trees $G(\gamma_{1})$, $G(\gamma_{2})$, and $G(\gamma')$. These trees satisfy
\[G(\gamma') = G(\gamma_{1}) \amalg G(\gamma_{2}) /v_{i}=v_{j} \]
\noindent
where $v_{i}$ and $v_{j}$ are the vertices corresponding to regions $A_{i}$ and $A_{j}$ (see Figure \todo{ND - make figure}. Applying Lemma \ref{prunelemma}, we can use MOY I and III relations on both $S_{0}'$ and $S'_{1}$ to trim the leaves in such a way that the resulting graph for $S'_{0}$ consists of only the two vertices $v_{i}$ and $v_{j}$, and in $S'_{1}$ the graph only consists of the single vertex $v_{i}=v_{j}$. Let $S''_{0}$ and $S''_{1}$ denote the resulting singular braids. Note that locally, the diagrams $S''_{0}$ and $S''_{1}$ are now as in Figure \todo{ND - make figure}
If $k$ circles were removed from each diagram during the first set of moves, then
\[ H_{1+1}(S_{0}) \cong H_{1+1}(S''_{0}) \otimes \mathcal{A}^{\otimes k}, \hspace{10mm} H_{1+1}(S_{1}) \cong H_{1+1}(S''_{1}) \otimes \mathcal{A}^{\otimes k}\]
\noindent
Since the MOY relations only involve local data, the induced edge map
\[ d_{1}^{*}: H_{1+1}(S_{0}) \to H_{1+1}(S_{1}) \]
\noindent
respects this decomposition and is identity on the $\mathcal{A}^{\otimes k}$ factor.
Define $\tilde{d}_{1}^{*}$ by $d_{1}^{*} = \tilde{d}_{1}^{*}\otimes \text{id}$. It suffices to show that $\tilde{d}_{1}^{*}$ is the Khovanov merge map. In terms of cycles, $\tilde{d}_{1}$ is defined by
\[ x_{\emptyset} \mapsto x'_{\emptyset} \]\[ x_{13} \mapsto x'_{1} \]\[ x_{14} \mapsto x'_{2} \]\[ x_{23} \mapsto x'_{2} \]\[ x_{24} \mapsto U'_{1}x'_{2} \]
Let $S$ be the singular diagram obtained by removing the two bigons in $S''_{0}$ by MOY II relations (or a MOY II and a MOY 0). Note that $S$ is also the diagram obtained by removing the bigon in $S''_{1}$ by a MOY II (or MOY 0) relation, so we have isomorphisms
\[ H_{1+1}(C''_{0}) \cong H_{1+1}(S) \otimes \mathcal{A}^{\otimes 2}, \hspace{10mm} H_{1+1}(S''_{1}) \cong H_{1+1}(S) \otimes \mathcal{A} \]
\noindent
If we write the $\mathcal{A}^{\otimes 2}$ on the left as $\{1,a_{1},a_{2},a_{1}a_{2}\}$ and the $\mathcal{A}$ on the right as $\{1,a\}$, then we see from the MOY II decomposition that $\tilde{d}_{1}^{*}$ is given by
\[ x \otimes 1 \mapsto x \otimes 1 \]\[ x \otimes a_{1} \mapsto x \otimes a \]\[ x \otimes a_{2} \mapsto x \otimes a \]\[ x \otimes a_{1}a_{2} \mapsto 0 \]
\noindent
The first three are clear, but to see the last one, note that $U'_{1}x'_{2}=(U'_{1})^{2}x'_{1}$, and $(U'_{1})^{2}=0$ on homology. This proves that $\tilde{d}_{1}^{*}$ is the Khovanov edge map, so $d_{1}^{*}$ is as well.
\textbf{Case 2: Merge of concentric circles.} Let $\gamma_{1}$ denote the inner circle in $s(S_{0})$, $\gamma_{2}$ the outer circle in $s(S_{0})$, and $\gamma'$ their merge in $s(S_{1})$. Without loss of generality, suppose $\gamma_{1}$ contains $A_{i}$ and $\gamma_{2}$ contains $A_{j}$. We will also assume that $\gamma_{2}$ does not contain any circles besides $\gamma_{1}$, as these circles can be removed via MOY relations as in Case 1.
\todo{ND - make diagram}
Since $\gamma_{1}$ is innermost, $G(\gamma_{1})$ is a tree, we can apply MOY I and III relations to trim the edges of this tree until we are left with a single vertex corresponding to $A_{i}$. We can apply the same MOY relations to $S_{1}$. Let $S'_{0}$ and $S'_{1}$ denote the the singular braids after these MOY relations, and $\alpha_{1}$, $\alpha_{2}$, and $\alpha'$ the circles in $S'_{0}$ and $S'_{1}$.
Since $G(\alpha_{1})$ is a single vertex, it corresponds to a bigon in $S'_0$. The singular point at the bottom changes to a smoothing in $S'_{1}$, but the singular point on top remains. This singular point corresponds to a particular edge $e'$ in $G(\alpha')$. We can apply MOY I and III relations to $S'_{1}$ to trim $G(\alpha')$ until only $e'$ remains, and apply these relations to $S'_{0}$ as well. Let $S_{0}''$ and $S_{1}''$ denote these modified diagrams (see Figure \ref{C2}). As with Case 1, it suffices to prove that the edge map
\[ \tilde{d}_{1}^{*}: H_{1+1}(S_{0}'') \to H_{1+1}(S''_{1}) \]
\noindent
is the Khovanov merge map.
\begin{figure}
\caption{The two diagrams $S''_{0}
\label{C2}
\end{figure}
By the MOY II relation, $H_{1+1}(S''_{0}) \cong H_{1+1}(S''_{1}) \otimes \mathcal{A}$. Since $\tilde{d}_{1}^{*}$ is an $R$-module homomorphism, it suffices to show that $\tilde{d}_{1}^{*}$ restricted to $H_{1+1}(S''_{1}) \otimes 1$ is the canonical isomorphism. The generator $1$ in $\mathcal{A}$ corresponds to the cycles which do not contain $e_{7}$ or $e_{8}$, and cycles which contain $e_{7}$ but not $e_{8}$. The action of $\tilde{d}_{1}^{*}$ on these cycles is given by
\[ x_{\emptyset} \to x'_{\emptyset}, \hspace{10mm}, x_{1} \to x'_{1},\hspace{10mm}, x_{4} \to x'_{4},\hspace{10mm}, x_{14} \to x'_{14} \]
\[ x_{276} \to x'_{26},\hspace{10mm} x_{2764} \to x'_{264},\hspace{10mm} x_{376} \to x'_{36} \]
\[ x_{275} \to x'_{25}, \hspace{10mm} x_{375} \to x'_{35},\hspace{10mm} x_{1375} \to x'_{135} \]
\noindent
This map is the canonical isomorphism, so this proves Case 2.
\end{proof}
\fi
\section{Invariance of the Total Homology}\label{sec:invariance}
In this section we will prove our main theorem, that the total homology is a link invariant.
\begin{theorem}
The total homology \[H_{1-1}(D)=H_{*}(C_{1\pm1}(D), d_0 + d_1)\] is a graded link invariant.
\end{theorem}
To prove invariance, we need to show that the homology is invariant under five types of local moves: braid-like Reidemeister I, II, and III moves, twists at the top and bottom, and cap swaps/cup swaps. For Reidemeister I, twists, and cap swaps / cup swaps, we will use the following lemma from homological algebra:
\begin{lemma}[\cite{SSBook}]\label{SSLemma} Let $f:C_{1} \to C_{2}$ be a filtered chain map of filtered chain complexes. If $f$ induces an isomorphism on the $E_{k}$ pages of the associated spectral sequences, then it induces an isomorphism on the $E_{l}$ pages for $l \ge k$.
\end{lemma}
Together with Theorem \ref{FullKhovIsoTheorem} and familiar results from Khovanov homology, these invariance proofs will be fairly easy. For the braid-like Reidemeister II and III moves, we will actually prove that the chain homotopy type of the bimodules is invariant, making the ($A_{n}, A_{n}$)-bimodule an invariant of open braids.
\subsection{Twists at Top and Bottom}
Let $D_{1}$ and $D_{2}$ be two diagrams which differ by a single positive twist near $w_{i}^{+}$ as in Figure \ref{fig:PTwist}.
Let $D_1^0$ and $D_1^1$ denote the $0$- and $1$-resolutions of $D_1$, respectively, at the crossing shown in Figure \ref{fig:PTwist}. Note that $D_1^0$ is isotopic to $D_2$, and $D_1^1$ differs from $D_2$ by a MOY \rom{2} move. We define
\[f=(f_0,f_1):C_{1\pm 1}(D_2)\to C_{1\pm 1}(D_1)=C_{1\pm 1}(D_1^0)\oplus C_{1\pm 1}(D_1^1)\]
such that $f_0=0$ and $f_1=\imath_{\RN{2}}$.
\begin{figure}
\caption{Two diagrams differing by a positive twist at $w_{i}^{+}$}
\label{fig:PTwist}
\end{figure}
The map $f$ is a filtered chain map with respect to the cube filtration. Let $E_{k}(C_{1 \pm 1}(D))$ denote the spectral sequence induced by the cube filtration. By the MOY II relation, $E_{1}(C_{1 \pm 1}(D_{1}^{1})) \cong E_{1}(C_{1 \pm 1}(D_{2})) \otimes \mathcal{A}$, and with respect to this isomorphism, $f$ maps $E_{1}(C_{1 \pm 1}(D_{2}))$ to $E_{1}(C_{1 \pm 1}(D_{2})) \otimes 1$.
This is the standard quasi-isomorphism on Khovanov homology corresponding to a Reidemeister I move, so $f^{*}: E_{2}(D_{2}) \to E_{2}(D_{1})$ is an isomorphism. Applying Lemma \ref{SSLemma}, $f$ induces an isomorphism on the total homology. Moreover, since $f^{*}$ preserves the $\delta$-grading on Khovanov homology, it preserves $\mathfrak{gr}_{\delta}$.
The negative twist, as well as the two twists at the bottom, follow from similar arguments.
\subsection{Reidemeister I}
Let $D_{1}$ and $D_{2}$ be two diagrams which differ by a positive Reidemeister I move as in Figure \ref{NegativeR1}.
\begin{figure}
\caption{Two diagrams differing by a positive Reidemeister I move}
\label{NegativeR1}
\end{figure}
This argument is similar to the twists, as both correspond to a Reidemeister I move on Khovanov homology. Let $D_{1}^{i}$ denote the $i$-resolution of $D_{1}$ at the crossing in Figure \ref{NegativeR1}. By the MOY 0 relation, there is a splitting
\[ C_{1 \pm 1}(D_{1}^{0}) \cong C_{1 \pm 1}(D_{2}) \otimes C_{1 \pm 1}(\mathcal{U}) \]
Since $U_{2}=U_{4}$ on $C_{1 \pm 1}(\mathcal{U})$, it can be written
\[ C_{1 \pm 1}(\mathcal{U}) =
\xymatrix@C+2pc{\mathbb{Q}[U_{2},U_{3}]/(U_{2}U_{3})&\mathbb{Q}[U_{2},U_{3}]/(U_{2}U_{3})\ar[l]^{2U_{2}+2U_{3}}}. \]
Let $x_{1}$ and $x_{2}$ be the generators of the two copies of $\mathbb{Q}[U_{2},U_{3}]/(U_{2}U_{3})$ so that we can write
\[ C_{1 \pm 1}(\mathcal{U}) =
\xymatrix@C+2pc{x_1 & x_2 \ar[l]^{2U_{2}+2U_{3}}}. \]
We define
\[f=(f_0,f_1): C_{1\pm 1}(D_1)=C_{1\pm 1}(D_1^0)\oplus C_{1\pm 1}(D_1^1) \to C_{1\pm 1}(D_2) \]
such that $f_0 (a \otimes U_{2}x_{1}) = a$, $f_0 (a \otimes x) =0 $ for $x \ne U_{2}x_{1}$, and $f_{1}=0$.
The map $f$ is clearly a filtered chain map with respect to the cube filtration. Let $E_{k}(C_{1 \pm 1}(D))$ denote the spectral sequence induced by the cube filtration. By the MOY 0 relation, $E_{1}(C_{1 \pm 1}(D^{0}_{1})) \cong E_{1}(C_{1 \pm 1}(D_2)) \otimes \mathcal{A}$, and $f^{*}$ is defined by
\[ f^{*}(a \otimes 1)=0 \hspace{8mm} f^{*}(a \otimes U)= a \]
This is the standard quasi-isomorphism on Khovanov homology corresponding to a Reidemeister I move, so $f^{*}: E_{2}(D_{2}) \to E_{2}(D_{1})$ is an isomorphism. Applying Lemma \ref{SSLemma}, $f$ induces an isomorphism on the total homology. Moreover, since $f^{*}$ preserves the $\delta$-grading on Khovanov homology, it preserves $\mathfrak{gr}_{\delta}$.
\subsection{Reidemeister II}
Let $D_{1}$ and $D_{2}$ be diagrams which differ by a Reidemeister II move as in Figure \ref{RII}. Let $D_{1}^{ij}$ denote the diagram $D_{1}$ where the top crossing has been resolved with the $i$-resolution and the bottom crossing has been resolved with the $j$-resolution.
\begin{figure}
\caption{Two diagrams differing by a Reidemeister II move}
\label{RII}
\end{figure}
Then, $D_{1}^{00}$ and $D_{1}^{11}$ are isotopic to the diagram $D_x$ obtained by locally replacing the two crossings with a singularization. Moreover, $D_1^{01}$ differs from $D_2$ by a MOY \rom{2} move, while $D_1^{10}$ is isotopic to $D_2$. So, $C_{1\pm 1}(D_1^{01})\cong C_{1\pm 1}(D_x)\{1\}\oplus C_{1\pm 1}(D_x)\{-1\}$, and with respect to this decomposition we can write $C_{1\pm 1}(D_1)$ as follows:
\[
\begin{diagram}
C_{1\pm 1}(D_x)&\rTo^{d_1^b}&C_{1\pm 1}(D_2)\\
\dTo_{d_1^a}&&\dTo_{d_1^d}\\
C_{1\pm 1}(D_x)\{1\}\oplus C_{1\pm 1}(D_x)\{-1\}&\rTo^{d_1^c}&C_{1\pm 1}(D_x)
\end{diagram}
\]
Then, part (2) of Lemma \ref{lem:propII} implies that $d_1^a=(-U_2\mathrm{id},\mathrm{id})$ under the above identification. Similarly, part (1) of Lemma \ref{lem:propII} implies that under the above identification the restriction of $d_1^c$ to the first and the second summands is equal to $\mathrm{id}$ and $U_3\mathrm{id}$, respectively.
We want to define a chain map $f: C_{1\pm 1}(D_{2}) \to C_{1\pm 1}(D_{1})$ which is a homotopy equivalence. Let $f^{ij}$ denote the component of $f$ mapping to $C_{1\pm 1}(D^{ij}_{1})$. We define $f^{00}=f^{11}=0$, and under the above identification $f^{10}=\mathrm{id}$ and $f^{01}=(-d_1^d,0)$.
It is straightforward to check that $f$ is a chain map.
\begin{lemma}
The chain map $f$ is a homotopy equivalence.
\end{lemma}
\begin{proof}
We define a chain map $g:C_{1\pm 1}(D_1)\to C_{1\pm 1}(D_2)$ such that $g\circ f=\mathrm{id}$ and $f\circ g\simeq \mathrm{id}$. Let $g^{ij}$ denote the restriction of $g$ to $C_{1\pm 1}(D_1^{ij})$. Then, we define $g^{00}=g^{11}=0$ and under the above identification $g^{10}=\mathrm{id}$ and $g^{01}=d_1^b\circ \pi_2$, where $\pi_2$ denotes the projection on the second summand. It is clear that $g\circ f=\mathrm{id}$.
Furthermore, \[\mathrm{id}-f^{01}\circ g^{01}=\begin{bmatrix}
1&U_3-U_2\\
0&1
\end{bmatrix}=d_1^a\circ \pi_2+i_1\circ d_1^c.\]
Therefore, $f\circ g\simeq \mathrm{id}$.
\end{proof}
This proves invariance of $C_{1\pm 1}$ under the Reidemeister II move shown in Figure \ref{RII}. The other Reidemeister II move is similar.
\subsection{Reidemeister \rom{3}}
Suppose $D$ and $D'$ are braid diagrams, such that $D'$ is obtained from $D$ by a Reidemeister III move, and $D''$ is the braid diagram obtained from them by the move indicated in Figure \ref{fig:RIII}. For $i=0,1$, $D_i$ and $D_i'$ denote the $i$-resolutions of the middle crossing in $D$ and $D'$, respectively.
\begin{figure}\label{fig:RIII}
\end{figure}
The diagrams $D_0$ and $D_0'$ differ from $D''$ by a Reidemeister II move, so both $C_{1\pm1}(D_0)$ and $C_{1\pm1}(D_0')$ are homotopy equivalent to $C_{1\pm1}(D'')$. Specifically, there exist chain maps
\[\imath: C_{1\pm1}(D'')\to C_{1\pm1}(D_0)\ \ \ \text{and}\ \ \ p:C_{1\pm1}(D_0)\to C_{1\pm1}(D''), \]
such that $p\circ \imath=\mathrm{id}$ and $\imath\circ p=dh+hd$ for some $h:C_{1\pm1}(D_0)\to C_{1\pm1}(D_0)$ such that $h\imath=0$. In other words, $p$ is a strong deformation retraction. Similarly, there exists a strong deformation retraction $p':C_{1\pm1}(D_0')\to C_{1\pm1}(D'')$. Denote the corresponding inclusion by $\imath'$. So \cite[Lemma 4.5]{BarNatan05:Kh-tangle-cob} implies that the chain complex $C_{1\pm1}(D)$, given by the mapping cone
\[C_{1\pm1}(D_0)\xrightarrow{d^+} C_{1\pm1}(D_1)\]
is chain homotopy equivalent to the mapping cone
\[C_{1\pm1}(D'')\xrightarrow{d^+\circ\imath} C_{1\pm1}(D_1).\]
Similarly, $C_{1\pm1}(D')$ is homotopy equivalent to the mapping cone
\[C_{1\pm1}(D'')\xrightarrow{d^+\circ\imath'} C_{1\pm1}(D_1').\]
Thus, it is enough to show that the above cones are homotopy equivalent.
For $\bullet=0,1$, let $D_{\bullet ij}$ (resp. $D_{\bullet ij}'$) denote the result of resolving the top and bottom crossings of $D_\bullet$ (resp. $D_\bullet'$) with the $i$-resolution and the $j$-resolution. For $ij=00$ and $11$, the diagrams $D_{1ij}$ and $D_{1ij}'$ are isotopic, while for $ij=01$ and $10$, they differ by a MOY III move. So, $C_{1\pm 1}(D_{110})\cong C_{1\pm 1}(D_{110}')\oplus C_2$, $C_{1\pm 1}(D'_{101})=C_{1\pm 1}(D_{101})\oplus C_2'$ and the isomorphisms $j_{ab}$ and $j_{ba}$ (see Equation \ref{eq:isomofcyclic}) induce isomorphisms
\[j:C_2\to C_2'\quad\text{and}\quad j':C_2'\to C_2,\]
respectively. Abusing the notation, we will denote the map from $C_{1\pm1}(D_1)$ to $C_{1\pm1}(D_1')$ (resp. $C_{1\pm1}(D_1')$ to $C_{1\pm 1}(D_1)$) that is equal to $j$ on $C_2$ (resp. $j'$ on $C_2'$) and zero on the rest of the summands by $j$ (resp. $j'$).
Let $f^{ij}:C_{1\pm1}(D_{1ij})\to C_{1\pm1}(D'_{1ij})$ be the canonical isomorphism for $ij=00, 11$, and the MOY III maps $\imath_{\RN{3}b}$ and $\pi_{\RN{3}a}^1$ for $ij=01$ and $10$, respectively. Putting these chain maps together, we get a map $f:C_{1\pm 1}(D_1)\to C_{1\pm 1}(D_1')$.
\begin{lemma}
$f+j$ is a chain homotopy equivalence from $C_{1\pm 1}(D_1)$ to $C_{1\pm 1}(D_1')$. Further, the identity map on $C_{1\pm 1}(D'')$ along with $f+j$ is a chain homotopy equivalence between the mapping cones of $d^+\circ \imath$ and $d^+\circ \imath'$, and thus $C_{1\pm 1}(D)$ and $C_{1\pm 1}(D')$.
\end{lemma}
\begin{proof}
Lemmas \ref{lem:propIII-2} and \ref{lem:propIII-2} imply that $f+j$ is a chain map, and it is clear from the definition that $f+j$ is invertible. Additionally, it follows from the definitions of $\imath$ and $\imath'$ that $(\mathrm{id}, f+j)$ is a chain map, and thus a chain homotopy equivalence.
\end{proof}
\subsection{Cup and Cap Swaps}
The final moves that we have to prove invariance under are the cup and cap swaps. The cap swap moves are depicted in Figure \ref{R4}; the cup swap moves are similar, but they occur at the bottom of the braid instead of the top.
\begin{figure}
\caption{The cap swap moves}
\label{R4}
\end{figure}
Since the left and right diagrams have 4 crossings, they will have 16 vertices in the cube of resolutions, so proving invariance for these diagrams directly would be quite messy. Instead, since we already have Reidemeister II invariance, the moves pictured in Figure \ref{R4Simplified} imply invariance under the two cap swaps.
\begin{figure}
\caption{An alternate version of the first cap swap}
\label{R4Simplified}
\end{figure}
Let $D_{1}^{ij}$ denote the resolution of $D_{1}$ where the central crossing has resolution $i$ and the right-most crossing has resolution $j$. The four resolutions are shown in Figure \ref{R4Cube}. Note that $D_{1}^{10}$ is isotopic to $D_{3}$.
\begin{figure}
\caption{The cube of resolutions for $D_{1}$}
\label{R4Cube}
\end{figure}
We define $f$ by
\[ f = (f_{00}, f_{01}, f_{10}, f_{11}): C_{ 1 \pm 1}(D_{3}) \to C_{1 \pm 1}(D_{1}) \]
\begin{equation*}
\begin{split}
f_{00} & = 0 \\
f_{10} & = \mathrm{id} \\
f_{01} & = -\imath_{\RN{2}} \circ d_{1}^{(d)} \circ f_{10} \\
f_{11} & = 0 \\
\end{split}
\end{equation*}
As with the twists and the Reidemeister I move, this map is a filtered chain map with respect to the cube filtrations on $C_{1 \pm 1}(D_{1})$ and $C_{1 \pm 1}(D_{3})$. To see that it is a chain map, note that $d_{1}^{(c)} \circ -\imath_{\RN{2}} = \mathrm{id}$.
Let $E_{k}(C_{1 \pm 1}(D))$ denote the spectral sequence induced by the cube filtration on $C_{1 \pm 1}(D)$. Then $f^{*}: E_{1}(D_{3}) \to E_{1}(D_{1})$ is the standard quasi-isomorphism on Khovanov homology corresponding to a Reidemeister II move. This can be seen by replacing each diagram $S$ in Figure \ref{R4Cube} with $\mathsf{sm}(S)$. Thus, $f^{*}$ induces an isomorphism on the $E_{2}$ pages, and therefore on the total homology as well.
The isomorphism $H_{1-1}(D_{2}) \cong H_{1-1}(D_{3})$ follows from a similar argument, as do the cup swaps.
\section{Relationship with Ozsv\'{a}th-Szab\'{o} Bordered Algebra}
Let $D_1$ and $D_2$ be braid diagrams with $n$ strands, and $D=D_1\circ D_2$. In this section, we define a quotient $\mathcal{A}_n$ of the algebra $A_n$ which is isomorphic to $\overline{\mathcal{B}}'(2n+1,n)$ and show that
\[\mathscr{M}[D]\cong (\mathsf{M}[S_{cup}\circ D_1]\otimes\mathcal{A}_n)\otimes_{\mathcal{A}_n}(\mathsf{M}'[D_2\circ S_{cap}])\otimes\mathcal{A}_n.\]
First, we review the definition of the algebra $\mathcal{B}'(m,k)$. Given a set of $m$ points on a line identified with $\{1,2,...,m\}\subset\mathbb{R}$, a \emph{local state} $\mathbf{x}$ is a choice of $k$ intervals $[i,i+1]$ where $1\le i\le m-1$. For each local state $\mathbf{x}$ there is an idempotent denoted by $I_{\mathbf{x}}$ in $\mathcal{B}'(m,k)$.
We identify each interval $[i,i+1]$ with its midpoint $i+1/2$, so each local state is a subset of $\{3/2,5/2,...,m-1/2\}$. If $\mathbf{x}\cap\{i-1/2,i+1/2\}=\{i-1/2\}$, then the local state $r_i(\mathbf{x}):=(\mathbf{x}\setminus\{i-1/2\})\cup\{i+1/2\}$ is defined. Similarly, if $\mathbf{x}\cap\{i-1/2,i+1/2\}=\{i+1/2\}$, then the local state $l_i(\mathbf{x}):=(\mathbf{x}\setminus\{i+1/2\})\cup\{i-1/2\}$ is defined. For each $2\le i\le m-1$, the algebra $\mathcal{B}'(m,k)$ contains elements $R'_i$ and $L'_i$ corresponding to shifting to the right and left, respectively. Specifically, $\mathcal{B}'(m,k)$ is the $\mathbb{Q}[u_1,...,u_m]$-algebra generated by the idempotents $I_{\mathbf{x}}$ for all local states $\mathbf{x}$ and $R'_i$ and $L'_i$ for every $2\le i\le m-1$ modulo the following conditions:
\begin{itemize}
\item[B1)] For each local state $\mathbf{x}$, $I_{\mathbf{x}}R'_i=R'_iI_{r_i(\mathbf{x})}=I_{\mathbf{x}}R'_iI_{r_i(\mathbf{x})}$ and $I_{\mathbf{x}}L'_{i}=L'_iI_{l_i(\mathbf{x})}=I_{\mathbf{x}}L'_iI_{l_i(\mathbf{x})}$. Note that $I_{\mathbf{x}}R'_i$ and $I_{\mathbf{x}}L'_i$ vanish if $r_i(\mathbf{x})$ and $l_i(\mathbf{x})$ are not defined, respectively.
\item[B2)] If $r_i(\mathbf{x})$ is defined, then $I_{\mathbf{x}}R'_iL'_i=u_iI_{\mathbf{x}}$. Similarly, if $l_i(\mathbf{x})$ is defined then $I_{\mathbf{x}}L'_iR'_i=u_iI_{\mathbf{x}}$.
\item[B3)] For every $i$, $R'_iR'_{i+1}=0$ and $L'_{i+1}L'_i=0$.
\item[B4)] If $\{i-1/2,i+1/2\}\cap\mathbf{x}=\emptyset$, then $u_iI_{\mathbf{x}}=0$.
\end{itemize}
Assume $m=2n+1$ and $k=n$. Each local state $\mathbf{x}$ determines a subset $S_{\mathbf{x}}\subset\{1,2,...,2n\}$ with $n$ elements defined as
\[S_{\mathbf{x}}=\{i\ |\ i+1/2\notin\mathbf{x}\}.\]
Therefore, each idempotent $I_{\mathbf{x}}\in\mathcal{B}'(2n+1,n)$ determines an idempotent $\iota_{S_{\mathbf{x}}}$ in $A_n$.
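To make this identification concrete, the following short Python sketch (purely illustrative; the helper name is ours and does not belong to any existing package) computes the subset $S_{\mathbf{x}}$ from a local state $\mathbf{x}$ given as a set of midpoints.
\begin{verbatim}
# A minimal sketch of the correspondence between local states and subsets.
# The helper name is illustrative only; it is not part of any library.

def local_state_to_subset(x, n):
    """Given a local state x (a set of n midpoints i + 1/2, 1 <= i <= 2n),
    return S_x = { i in {1,...,2n} : i + 1/2 not in x }."""
    assert len(x) == n, "a local state for B'(2n+1,n) consists of n intervals"
    return {i for i in range(1, 2 * n + 1) if i + 0.5 not in x}

# Example with n = 2 (so m = 5): the local state {3/2, 7/2}
# corresponds to the subset {2, 4} of {1, 2, 3, 4}.
print(local_state_to_subset({1.5, 3.5}, 2))   # -> {2, 4}
\end{verbatim}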
\begin{definition}
We define $\mathcal{A}_n$ to be the quotient of $A_n$ by the ideal generated by $R_{i}R_{i-1}$ and $L_{i-1}L_i$ for every $2\le i\le 2n$.
\end{definition}
Let $\overline{\mathcal{B}}'(2n+1,n)$ be the quotient of $\mathcal{B}'(2n+1,n)$ by the ideal generated by $u_1$.
\begin{lemma}
There is an isomorphism
\[h:\overline{\mathcal{B}}'(2n+1,n)\to \mathcal{A}_n\]
such that $h(I_{\mathbf{x}})=\iota_{S_{\mathbf{x}}}$ for every $\mathbf{x}$, and $h(R_{i}')=L_{i-1}$, $h(L_{i}')=R_{i-1}$ and $h(u_ia)=u_{i-1}a$ for every $i$.
\end{lemma}
\begin{proof}
For every local state $\mathbf{x}$ if $r_i(\mathbf{x})$ is defined, then $\{i-1,i\}\cap S_{\mathbf{x}}=\{i\}$ and
\[h(I_{\mathbf{x}}R_i'L_i')=\iota_{S_{\mathbf{x}}}L_{i-1}R_{i-1}=u_{i-1}\iota_{S_{\mathbf{x}}}=h(u_iI_{\mathbf{x}}).\]
Similarly, if $l_i(\mathbf{x})$ is defined, then $\{i-1,i\}\cap S_{\mathbf{x}}=\{i-1\}$ and
\[h(I_{\mathbf{x}}L_i'R_i')=\iota_{S_{\mathbf{x}}}R_{i-1}L_{i-1}=u_{i-1}\iota_{S_{\mathbf{x}}}=h(u_iI_{\mathbf{x}}).\]
By the definition of $\mathcal{A}_n$, for every $i$, $h(R_i'R_{i+1}')=L_{i-1}L_{i}=0$ and $h(L_{i+1}'L_{i}')=R_{i}R_{i-1}=0$.
Finally, if $\{i-1/2,i+1/2\}\cap \mathbf{x}=\emptyset$, then $\{i-1,i\}\subset S_{\mathbf{x}}$ and \[h(u_iI_{\mathbf{x}})=u_{i-1}\iota_{S_{\mathbf{x}}}=\iota_{S_{\mathbf{x}}}R_{i-1}L_{i-1}=0.\]
Thus, $h$ is an isomorphism of $\mathbb{Q}[u_1,...,u_{2n}]$-modules.
\end{proof}
\begin{theorem}
For any braid $D=D_1\circ D_2$, \[\mathscr{M}[D]\cong (\mathsf{M}[S_{cup}\circ D_1]\otimes\mathcal{A}_n)\otimes_{\mathcal{A}_n}(\mathsf{M}'[D_2\circ S_{cap}])\otimes\mathcal{A}_n.\]
\end{theorem}
\noindent
This theorem allows us to work over the quotient algebra $\mathcal{A}_n$ instead of $A_{n}$.
\begin{proof}
Consider a complete resolution $S$ of $D$ and assume $S=S_1\circ S_2$ where $S_i$ is a complete resolution of $D_i$. We show that for any cycle $Z_1$ of $S_{cup}\circ S_1$
\[x_{Z_1}R_{i}R_{i-1}=0\quad\text{for all}\ i\]
and for every cycle $Z_2$ of $S_2\circ S_{cap}$
\[L_{i-1}L_ix_{Z_2}=0\quad\text{for all}\ i\]
This essentially follows from the argument in part (b) of Section \ref{welldef}, but we will give the argument here as well.
Let $Z$ be a cycle in $S_{cup}\circ S_1$, and suppose the $n$ outgoing edges of $S_{1}$ are labeled $e_{1},\ldots,e_{n}$. Then
\[ x_{Z}R_{i} = x_{U_{i}(Z)} U(D(Z,e_{i}))\]
\noindent
The variables in the coefficient $U(D(Z,e_{i}))=U_{j_{1}}\cdot ... \cdot U_{j_{m}}$ can be ordered from top to bottom based on where they come into $\partial_{L}D(Z,e_{i})$. We want to show that $ x_{U_{i}(Z)} U(D(Z,e_{i})) R_{i-1}=0$.
If $U(D(Z,e_{i}))=1$ then we are done, as the bottom vertex of the disc $v_{b}(D(Z,e_{i}))$ lies in both strands of $U_{i-1}(U_{i}(Z))$. Otherwise,
\[ x_{U_{i}(Z)} U(D(Z,e_{i})) R_{i-1} = x_{U_{i-1}(U_{i}(Z))}U(D(Z,e_{i})) U(D(U_{i}(Z),e_{i-1})) \]
\noindent
We claim that $x_{U_{i-1}(U_{i}(Z))}U(D(Z,e_{i}))=0$. To see this, note that $e_{j_{1}}$ is an edge in $U_{i-1}(U_{i}(Z))$. Thus, the $U_{j_{1}}$ in $U(D(Z,e_{i}))$ maps the cycle farther to the right. But we can do this recursively, as for each $k$, the edge $e_{j_{k}}$ lies in $U_{j_{k-1}}(...(U_{j_{1}}(U_{i-1}(U_{i}(Z))))...)$. In the end, $v_{b}(D(Z,e_{i}))$ is a vertex in both strands of the cycle $U_{j_{m}}(...(U_{j_{1}}(U_{i-1}(U_{i}(Z))))...)$, making the product zero.
The proof that \[L_{i-1}L_ix_{Z_2}=0\quad\text{for all}\ i\] is similar.
The reason that Ozsv\'{a}th and Szab\'{o} work with $\mathcal{B}'(2n,n)$ while we work with $\overline{\mathcal{B}}'(2n+1,n)$ is that our planar Heegaard diagram has an extra reduced unknotted component `at infinity.' This extra component can be viewed as a single strand to the left of the diagram in the Ozsv\'{a}th-Szab\'{o} picture, which we reduce by setting $u_1=0$.
\end{proof}
\end{document}
\begin{document}
\maketitle
\begin{abstract}
In this paper we present a new algorithmic realization of a projection-based scheme for general convex
constrained optimization problems. The general idea is to transform the
original optimization problem to a sequence of feasibility problems by
iteratively constraining the objective function from above until the
feasibility problem is inconsistent. For each of the feasibility problems one
may apply any of the existing projection methods for solving it. In particular, the
scheme allows the use of subgradient projections and does not require exact
projections onto the constraints sets as in existing similar methods.
We also apply the newly introduced concept of superiorization to the optimization formulation and compare its performance with that of our scheme. We provide some numerical results for convex quadratic test problems
as well as for real-life optimization problems coming from medical treatment
planning.
\textbf{Keywords}: Projection methods, feasibility problems, superiorization,
subgradient, iterative methods
\textbf{$2010$ MSC}: 65K10, 65K15, 90C25
\end{abstract}
\section{Introduction}
In this paper we are concerned with a general convex optimization problem. Let
$f:
\mathbb{R}
^{n}\rightarrow
\mathbb{R}
$ and $\left\{ g_{i}:
\mathbb{R}
^{n}\rightarrow
\mathbb{R}
\right\} _{i\in I}$, where $I=\{1,\ldots,m\}$, be convex functions. We wish to solve the following convex optimization
problem:
\begin{gather}
\min f(x)\nonumber\\
\text{such that }g_{i}(x)\leq0\text{ for all }i\in I. \label{Problem:1}
\end{gather}
The literature on this problem is vast and there exist many different
techniques for solving it, see e.g., \cite{BSS06,Bertsekas99,
BoydVandenberghe04} and the many references therein. As a special case when
$f\equiv0$, (\ref{Problem:1}) reduces to finding a point in the convex set,
\begin{equation}
\boldsymbol{C}:=\cap_{i\in I}C_{i}=\cap_{i\in I}\left\{ x\in
\mathbb{R}
^{n}\mid g_{i}(x)\leq0\right\} \neq\emptyset. \label{eq:Bold_C}
\end{equation}
$\boldsymbol{C}$ is called the \textit{feasibility set} of (\ref{Problem:1}).
In general, the problem of finding a point in the intersection of convex sets
is known as the \textit{Convex Feasibility Problem} (CFP) or \textit{Set
Theoretic Formulation}. Many real-world problems in various areas of
mathematics and of physical sciences can be modeled in this way; see
\cite{cap88} and the references therein for an early example. More work on the
CFP can be found in \cite{Byrne, byrne04, cdh10}.
In the case where all the $\left\{ g_{i}\right\} _{i\in I}$ are linear and only
equalities are considered in $C_{i}$, meaning that all the $C_{i}\ $ are
hyper-planes, the CFP reduces to a system of linear equations; Kaczmarz
\cite{Kaczmarz37} and Cimmino \cite{Cimmino38} in the 1930's, proposed two
different projection methods, sequential and simultaneous projection methods
for solving a system of linear equalities. These methods were later extended
for solving systems of linear inequalities, see \cite{Agmon54, MS54}. Today,
projection methods are more general and are applied to solve general convex
feasibility problems.
In general, projection methods are iterative procedures that employ
projections onto convex sets in various ways. They typically follow the
principle that it is easier to project onto the individual sets (usually
closed and convex) instead of projecting onto other derived sets (e.g. their
intersection). The methods have different algorithmic structures, of which
some are particularly suitable for parallel computing, and they demonstrate
desirable convergence properties and good initial behavior patterns, for more
details see for example \cite{cccdh11}.
In the last decades, due to their computational efficiency, projection methods
have been applied successfully to many real-world applications, for example in
imaging, see Bauschke and Borwein \cite{bb96} and Censor and Zenios \cite{CZ97}, in transportation problems \cite{cz91b,bk13}, sensor networks \cite{bh06}, in radiation therapy treatment planning \cite{cap88,ccmzkh10},
in resolution enhancement \cite{coo03}, in graph matching \cite{ww04}, and in matrix balancing \cite{cz91, Amelunxen11}; see also \cite{cccdh11, BC11, cc14}, to name but a few.
Their success is based on their ability to handle huge-size problems since they do not require storage or inversion of the full constraint matrix. Their algorithmic structure is either sequential or simultaneous, or in-between, as in the block-iterative projection methods or string-averaging projection methods which naturally support parallelization. This is one of the reasons that this class of methods was called \textquotedblleft Swiss army knife\textquotedblright,
see \cite{bk13}.
Following the above we aim to apply different projection methods for solving
(\ref{Problem:1}); in order to do that we first put (\ref{Problem:1}) into
epigraph form.
\begin{gather}
\min t\nonumber\\
\text{such that }t\in
\mathbb{R}
\nonumber\\
\text{and for some }x\in\boldsymbol{C}\text{ one has }f(x)\leq t.
\label{Problem:2}
\end{gather}
Denote by $t^{\ast}$ the optimal value of (\ref{Problem:2}) which we assume is attained
and finite. Now, a natural idea for solving
(\ref{Problem:2}) is to construct a decreasing sequence $\left\{
t_{k}\right\} $ such that $t_{k}\rightarrow t^{\ast}$ and at each step, for a
fixed $t_{k}$, to solve a corresponding CFP; see also \cite[Subsection
2.1.2]{Bertsekas99}. Formally this can be phrased as follows. Set
$t_{-1}=\infty$ and at the $k$-th step, with $k\geq0$, solve the following
problem:
\begin{equation}
\text{find a point }x^{k}\text{ such that}\left\{
\begin{array}
[c]{l}
f(x^{k})\leq t_{k-1}\\
\text{and}\\
g_{i}(x^{k})\leq0\text{ for all }i\in I.
\end{array}
\right. \label{Problem:k_CFP}
\end{equation}
Once a feasible point $x^{k}$ is obtained, $t_{k}$ is updated according to the
formula
\begin{equation}
t_{k}=f(x^{k})-\varepsilon_{k}, \label{eq:epsilon_decrease}
\end{equation}
where $\varepsilon_{k}>0$ is some user chosen constant; in the numerical
experiments in Section \ref{sec:Numerical_Experiments} we use $\epsilon
_{k}=0.1|f(x^{k})|$ whenever $|f(x^{k})|>1$ and $\epsilon_{k}=0.1$
otherwise. For solving these CFPs at each $k$-th step, we apply
different projection methods based on the \textit{Cyclic Subgradient
Projections} \textit{Method} (CSPM) \cite{cl81, cl82} and thus obtain several
algorithmic realizations of this general scheme.
In Subsection \ref{sec:Convergence} we discuss the issue of convergence to an approximate optimal solution of (\ref{Problem:1}) (which we call an $\varepsilon$-optimal solution).
It might happen that the objective function decreases by only $\varepsilon_{k}$ after
each step. Therefore, we not only wish to solve each of the CFPs (\ref{Problem:k_CFP}), but also to end
up with a \textit{Slater point}, that is, a point that solves (\ref{Problem:k_CFP}) with strict inequalities at each step,
in order to maximize the decrease in $t_{k}$. To this end, we will make use of
over-relaxation parameters in the CSPM for one realization of the scheme, but also
apply the newly introduced \textit{Superiorization} idea \cite{cdh10}, where
perturbations are used in the CSPM to steer the algorithm into the interior of
the objective level set after the $k$-th CFP. Our major contribution in this work, presented in Section \ref{sec:Numerical_Experiments}, is to demonstrate the applicability of the scheme and to compare it intensively with some existing methods on benchmark quadratic programming problems and on medical treatment planning problems.
The paper is organized as follows. In Section \ref{sec:Prelim} we
present several projection methods and definitions which will be useful for
our analysis. Next in Section \ref{sec:GeneralOpt} our general scheme for
solving convex optimization problems is presented and analyzed. Later in
Section \ref{sec:Numerical_Experiments} numerical experiments illustrating the
different realizations of our scheme are presented and tested for convex
quadratic programming problems and for Intensity-Modulated Radiation Therapy (IMRT). Finally in Section \ref{sec:conclusion}
conclusions and further research directions are presented.
\section{Preliminaries\label{sec:Prelim}}
In this section we provide several projection methods which are relevant to
our results, mainly orthogonal and subgradient projections. We start by presenting several definitions which will be useful for our analysis.
\begin{definition}
A sequence $\left\{ x^{k}\right\} _{k=0}^{\infty}$ is said to be
\texttt{finite convergent} if $\lim_{k\rightarrow\infty}x^{k}=x^{\ast}$ and
there exists $N\in
\mathbb{N}
$ such that for all $k\geq N$, $x^{k}=x^{\ast}$.
\end{definition}
Let $C$ be a non-empty, closed and convex set in the Euclidean space $
\mathbb{R}
^{n}$. Assume that the set $C$ can be represented as
\begin{equation}
C=\left\{ x\in
\mathbb{R}
^{n}\mid c(x)\leq0\right\} , \label{eq:C}
\end{equation}
where $c:
\mathbb{R}
^{n}\rightarrow
\mathbb{R}
$ is an appropriate continuous and convex function. Take, for example,
$c(x)=\operatorname*{dist}(x,C),$ where $\operatorname*{dist}$ is the distance
function; see, e.g., \cite[Chapter B, Subsection 1.3(c)]{hul01}.
\begin{definition}
For any point $x\in
\mathbb{R}
^{n}$,\ the \texttt{orthogonal projection} of $x$ onto $C$, denoted by
$P_{C}(x)$ is the closest point to $x$ in $C$, that is,
\begin{equation}
\left\Vert x-P_{C}\left( x\right) \right\Vert \leq\left\Vert x-y\right\Vert
\text{ for all }y\in C.
\end{equation}
\end{definition}
\begin{definition}
Let $c$ be as in the representation of $C$ in (\ref{eq:C}). The set
\begin{equation}
\partial c(z):=\{\xi\in
\mathbb{R}
^{n}\mid c(y)\geq c(z)+\langle\xi,y-z\rangle\text{ for all }y\in
\mathbb{R}
^{n}\}
\label{eq:2.3}
\end{equation}
is called the \texttt{subdifferential} of $c$ at $z$ and any element of
$\partial c(z)$ is called a \texttt{subgradient}.
\end{definition}
It is well-known that if $C$ is non-empty, closed and convex, then $P_C(x)$ exists and is unique. Moreover, if $c$ is differentiable at $z,$ then
$\partial c(z)=\{\nabla c(z)\}$, see for example \cite[Theorem 5.37 (p. 77)]{Tiel84}.
Now, for any $x\in\mathbb{R}^{n}$, let $\xi(x)$ denote some chosen subgradient of $c$ at $x$, that is, $\xi(x)\in\partial c(x)$.
\begin{definition}
For any point $x\in
\mathbb{R}
^{n}$,\ the \texttt{subgradient projection}\textit{ }of $x$ is defined as
\begin{equation}
\Pi_{_{C}}(x):=\left\{
\begin{array}
[c]{ll}
x-\frac{\displaystyle c(x)}{\displaystyle\left\Vert \xi\right\Vert ^{2}}\xi &
\text{if\ }c(x)>0\text{,}\\
x & \text{if\ }c(x)\leq0\text{,}
\end{array}
\right.
\end{equation}
where $\xi\in\partial c(x)$. It must be that $\xi \neq 0$ when $c(x)>0$, because if $\xi=0$, then from (\ref{eq:2.3}) one has $c(x)\leq c(y)$ for every $y\in \mathbb{R}^{n}$; in particular, for $y\in C$ we have $c(x)\leq c(y)=0$, a contradiction to the assumption that $c(x)>0$.
\end{definition}
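As a concrete illustration, the following Python sketch (with names chosen here for illustration only; the functions $c$ and $\xi$ are assumed to be supplied by the user) implements the subgradient projection operator of the above definition.
\begin{verbatim}
import numpy as np

def subgradient_projection(x, c, xi):
    """One subgradient projection onto C = {x : c(x) <= 0}.

    c  : callable returning the convex constraint value c(x)
    xi : callable returning some subgradient xi(x) in the
         subdifferential of c at x (nonzero whenever c(x) > 0)
    """
    cx = c(x)
    if cx <= 0:               # x already belongs to C
        return x
    g = xi(x)
    return x - (cx / np.dot(g, g)) * g

# Example: C is the Euclidean unit ball, c(x) = ||x|| - 1.
x = np.array([3.0, 4.0])
p = subgradient_projection(x, lambda y: np.linalg.norm(y) - 1.0,
                           lambda y: y / np.linalg.norm(y))
print(p)                      # approximately (0.6, 0.8)
\end{verbatim}
In this particular example the subgradient projection coincides with the orthogonal projection onto the ball.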
\begin{remark}
It is well known and easily verified that if the set $C$ is a half-space represented in its canonical form (by means of a normal vector), then the subgradient projection coincides with the orthogonal projection onto $C$.
\end{remark}
\begin{definition}
\label{Def:Slater}Consider the CFP
\begin{equation}
\boldsymbol{C}:=\cap_{i\in I}C_{i}=\cap_{i\in I}\left\{ x\in
\mathbb{R}
^{n}\mid g_{i}(x)\leq0\right\} .
\end{equation}
We say that $\boldsymbol{C}$ satisfies the \texttt{Slater Condition} if there exists a point $x\in\boldsymbol{C}$ having the property that $g_i(x)<0$ for all $i\in I$.
\end{definition}
\subsection{Projection methods and Superiorization}\label{subsec:prog}
Now we present two relevant classes of projection methods, the Orthogonal and
the Subgradient projections methods. We only introduce the
sequential versions, which are relevant to our results, but simultaneous versions also exist; see e.g., \cite[Chapter 5]{CZ97}. Later
we also present the Superiorization methodology.
\textbf{1. }\underline{\textbf{Sequential methods}}
Sequential projection methods are also referred to as \textquotedblleft
row-action\textquotedblright\ methods. The main idea is that at each iteration
one constraint set $C_{i}$ is chosen with respect to some control sequence and
either an orthogonal or a subgradient projection is calculated.
\textbf{1.1. }\textit{Projection Onto Convex Sets} (POCS). The general
iterative step has the following form:
\begin{equation}
x^{k+1}=x^{k}-\lambda_{k}\left(x^{k}-P_{C_{i(k)}}(x^{k})\right)
\label{eq:POCS}
\end{equation}
where $\lambda_{k}\in\lbrack\epsilon_{1},2-\epsilon_{2}]$ are called
\textit{relaxation parameters} for arbitrary $\epsilon_{1},\epsilon_{2}>0$ such that $\epsilon_{1}+\epsilon_{2}<2$,
$P_{C_{i(k)}}$ is the orthogonal projection of $x^{k}$ onto $C_{i(k)}$,
$\left\{ i(k)\right\} $ is a sequence of indices according to which
individual sets $C_{i}$ are chosen, for example \textit{cyclic}
$i(k)=k\operatorname*{mod}m+1$. For the linear case with equalities and
$\lambda_{k}=1$ for all $k$, this is known as Kaczmarz's algorithm
\cite{Kaczmarz37} or \textit{Algebraic Reconstruction Technique} (ART) in the
field of image reconstruction from projections, see \cite{bgh70, hm93}. For
solving a system of interval linear inequalities which appears for example in
the field of \textit{Intensity-Modulated Radiation Therapy} (IMRT), ART3 and
especially its faster version ART3+ (see \cite{hc08}) are known to find a
solution in a finite number of steps, provided that the feasible region is
full dimensional. The successful idea of ART3+ was extended for solving
optimization problems with linear objective and interval linear inequalities
constraints; this is known as ART3+O \cite{ccmzkh10}.
\textbf{1.2.} The \textit{Cyclic Subgradient Projections} (CSP) method introduced by
Censor and Lent \cite{cl81, cl82} for solving the CFP. The iterative step of
the method is formulated as follows.
\begin{equation}
x^{k+1}=\left\{
\begin{tabular}
[c]{ll}
$x^{k}-\lambda_{k}\frac{g_{i(k)}(x^{k})}{\left\Vert \xi^{k}\right\Vert ^{2}
}\xi^{k}$ & $g_{i(k)}(x^{k})>0$\\
$x^{k}$ & $g_{i(k)}(x^{k})\leq0$
\end{tabular}
\ \ \ \ \ \ \ \right.\label{csp-def}
\end{equation}
where $\xi^{k}\in\partial g_{i(k)}(x^{k})$ is arbitrary, $\lambda_{k}$ is taken as in
(\ref{eq:POCS}) and $\left\{ i(k)\right\} $ is cyclic. Of course, in the
linear case this method coincides with POCS.
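For concreteness, the following Python sketch (illustrative names only; the constraint functions, the subgradient selectors and the simple stopping rule are user-supplied assumptions) implements the CSP iteration (\ref{csp-def}) with cyclic control.
\begin{verbatim}
import numpy as np

def csp(x0, gs, subgrads, lam=1.0, max_iter=1000, tol=1e-8):
    """Cyclic Subgradient Projections for the CFP  g_i(x) <= 0, i = 1..m.

    gs       : list of convex constraint functions g_i
    subgrads : list of callables returning a subgradient of g_i at x
    lam      : relaxation parameter in (0, 2)
    """
    x = np.asarray(x0, dtype=float)
    m = len(gs)
    for k in range(max_iter):
        i = k % m                      # cyclic control sequence
        gi = gs[i](x)
        if gi > 0:
            xi = subgrads[i](x)
            x = x - lam * gi / np.dot(xi, xi) * xi
        # heuristic stopping rule: all constraints nearly satisfied
        if max(g(x) for g in gs) <= tol:
            break
    return x

# Example: intersection of the half-space x1 + x2 <= 1 and the unit ball.
gs = [lambda y: y[0] + y[1] - 1.0, lambda y: np.linalg.norm(y) - 1.0]
subgrads = [lambda y: np.array([1.0, 1.0]), lambda y: y / np.linalg.norm(y)]
print(csp(np.array([2.0, 2.0]), gs, subgrads))
\end{verbatim}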
\textbf{2.} \underline{\textbf{Superiorization}}
Superiorization is a recently introduced methodology which is gaining increasing interest and recognition,
as evidenced by the dedicated special issue entitled:
\textquotedblleft Superiorization: Theory and Applications\textquotedblright, in the journal \textit{Inverse Problems} \cite{chj17}.
The state of current research on superiorization can best be appreciated from the \textquotedblleft
Superiorization and Perturbation Resilience of Algorithms: A Bibliography
compiled and continuously updated by Yair Censor\textquotedblright \cite{Censor_sup-page}. In addition, \cite{herman-review-sm}, \cite{weak-strong15} and \cite[Section 4]{rm15} are recent reviews of interest.
This methodology is heuristic and its goal is to find certain good, or superior, solutions to optimization problems. More precisely, suppose that we want to solve a certain optimization problem, for example, minimization of a convex function under constraints (below we focus on this optimization problem because it is relevant to our paper; for an approach which considers the superiorization methodology in a much broader form, see \cite[Section 4]{rm15}). Often, solving the full problem can be rather demanding from the computational point of view, but solving part of it, say the feasibility part (namely, finding a point which satisfies all the constraints) is, in many cases, less demanding. Suppose further that our algorithmic scheme which solves the feasibility problem is perturbation resilient, that is, it converges to a solution of the feasibility problem despite perturbations which may appear in the algorithmic steps due to noise, computational errors, and so on.
Under these assumptions, the superiorization methodology claims that there is an advantage in considering perturbations in an active way during the performance of the scheme which tries to solve the feasibility part. What is this advantage? It may simply be a solution (or an approximation solution) to the feasibility problem which is found faster thanks to the perturbations; it may also be a feasible solution $x'$ which is better than (or superior) feasible solutions $x$ which would have been obtained without the perturbations, where we measure this \textquotedblleft superiority \textquotedblright with respect to some given cost/merit function $\phi$, namely we want to have $\phi(x')\leq \phi(x)$ (and hopefully $\phi(x')$ will be much smaller than $\phi(x)$).
Since our original optimization problem is the minimization of some convex function, we may, but not obliged to, take $\phi$ to be that function, and we can combine a feasibility-seeking step (a step aiming at finding a solution to the feasibility problem) with a perturbation which will reduce the cost function (such a perturbation can be chosen or be guessed in a non-ascending direction, if such a direction exists: see Definition \ref{eq:nonascend} and Algorithm \ref{Algorithm:Yair-superiorization} below). We note that the above-mentioned assumption that the algorithmic scheme which solves the feasibility part is perturbation resilient often holds in practice: for example, this is the case for the schemes considered in \cite{bdhk07, cr13, hgcd12}.
\begin{definition}\label{eq:nonascend}
Given a function $\phi:\Delta\subseteq
\mathbb{R}
^{n}\rightarrow
\mathbb{R}
$ and a point $x\in\Delta$, we say that a vector $d\in
\mathbb{R}
^{n}$ is \texttt{non-ascending} \texttt{for }$\phi$\texttt{ at }$x$ if
$\left\Vert d\right\Vert \leq1$ and there is a $\delta>0$ such that
\begin{equation}
\text{for all }\lambda\in\left[ 0,\delta\right] \text{ we have }\left(
x+\lambda d\right) \in\Delta\text{ and }\phi\left( x+\lambda d\right)
\leq\phi\left( x\right) .
\end{equation}
\end{definition}
Observe that one option of choosing the perturbations, in order to steer the algorithm to a superior feasible point with respect to $\phi$, is along $-\nabla\phi$ (when $\phi$ is convex and differentiable), but this is only one example and of course the scheme allows the use of other directions.
The following pseudocode, which is a small modification of a similar algorithm mentioned in \cite{hgcd12}, illustrates one option to perform the perturbations when applying the superiorization methodology.
\begin{algorithm}
\label{Algorithm:Yair-superiorization}$\left. {}\right. $
\textbf{Initialization:} Select an arbitrary starting point\textit{ }$x^{0}\in\mathbb{R}^{n}$, a positive integer $N$, an integer $\ell$, a sequence $(\eta_{\ell})_{\ell=0}^\infty$ of positive real numbers which is strictly decreasing to zero (for example, $\eta_{\ell}=a^\ell$ where $a\in(0,1)$) and a family of algorithmic
operators $(P_k)_{k=0}^{\infty}$.
\textbf{Iterative step:}
\textbf{set} $k=0$
\textbf{set} $x^{k}=x^{0}$
\textbf{set} $\ell=-1$
\textbf{repeat until a stopping criterion is satisfied (see Section \ref{sec:GeneralOpt})}
$\qquad$\textbf{set} $m=0$
$\qquad$\textbf{set} $x^{k,m}=x^{k}$
$\qquad$\textbf{while }$m$\textbf{$<$}$N$
$\qquad\qquad$\textbf{set }$v^{k,m}$\textbf{ }to be a non-ascending vector for
$\phi$ at $x^{k,m}$
$\qquad$\textbf{$\qquad$set} \emph{loop=true}
$\qquad$\textbf{$\qquad$while}\emph{ loop}
$\qquad\qquad\qquad$\textbf{set $\ell=\ell+1$}
$\qquad\qquad\qquad$\textbf{set} $\beta_{k,m}=\eta_{\ell}$
$\qquad\qquad\qquad$\textbf{set} $z=x^{k,m}+\beta_{k,m}v^{k,m}$
$\qquad\qquad\qquad$\textbf{if }$z$\textbf{$\in$}$\Delta$\textbf{ and }
$\phi\left( z\right) $\textbf{$\leq$}$\phi\left( x^{k}\right) $\textbf{
\textbf{then}}
$\qquad\qquad\qquad\qquad$\textbf{set }$m$\textbf{$=$}$m+1$
$\qquad\qquad\qquad\qquad$\textbf{set }$x^{k,m}$\textbf{$=$}$z$
$\qquad\qquad\qquad\qquad$\textbf{set }\emph{loop = false}
$\qquad$\textbf{set }$x^{k+1}$\textbf{$=$}$\boldsymbol{P}_k(x^{k,N})$
$\qquad$\textbf{set }$k=k+1$
\end{algorithm}
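The following Python sketch is a simplified, purely illustrative rendering of Algorithm \ref{Algorithm:Yair-superiorization}: the non-ascending directions are taken along the negative gradient of $\phi$, the domain check $z\in\Delta$ is omitted (i.e., $\Delta=\mathbb{R}^{n}$ is assumed), and the algorithmic operator $P_k$ is an arbitrary user-supplied feasibility-seeking sweep.
\begin{verbatim}
import numpy as np

def superiorized_loop(x0, phi, grad_phi, feasibility_sweep,
                      N=5, a=0.5, outer_iters=50):
    """Simplified sketch: N perturbation substeps along a non-ascending
    direction of phi, followed by one application of the feasibility-
    seeking operator P_k (here the user-supplied feasibility_sweep).

    phi               : merit (objective) function to be reduced
    grad_phi          : gradient of phi, used for non-ascending directions
    feasibility_sweep : callable playing the role of P_k (e.g. one CSP sweep)
    a                 : ratio of the step sizes eta_l = a**l
    """
    x = np.asarray(x0, dtype=float)
    ell = -1
    for _ in range(outer_iters):
        y = x.copy()
        for _ in range(N):
            g = grad_phi(y)
            d = -g / max(1.0, np.linalg.norm(g))  # non-ascending direction
            while True:                           # shrink eta until phi does not increase
                ell += 1
                z = y + (a ** ell) * d
                if phi(z) <= phi(x):
                    y = z
                    break
        x = feasibility_sweep(y)                  # the algorithmic operator P_k
    return x
\end{verbatim}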
\section{A general projection scheme for convex
optimization\label{sec:GeneralOpt}}
In this section we present our scheme (Algorithm \ref{Algorithm:epsilon-scheme}) for solving convex optimization problems
by translating them into a sequence of convex feasibility problems
(\ref{Problem:k_CFP}) and then solving each of them by using some projection
method; since we are concerned with the general convex case, subgradient
projections are most likely to be used, but we can use any type of projections, for example orthogonal and Bregman projections, see e.g., \cite{CZ97}. There are two essential questions in
this scheme, the first is how to construct the sequence of convex feasibility
problems, meaning how to choose an $\varepsilon$ to update $t_{k}$, and the
other question is when to stop the procedure. For the latter question, that is, the stopping criterion, there are several options and the most popular are a maximum number of iterations (an upper bound on the number of iterations should be specified in advance), or checking whether either $\|x^{k+1}-x^k\|$ or $|f(x^{k+1})-f(x^k)|$ is smaller than some given positive parameter.\\
For (\ref{Problem:1}) we denote $C_{i}:=\left\{ x\in
\mathbb{R}
^{n}\mid g_{i}(x)\leq0\right\} $, we assume that $\boldsymbol{C}:=\cap_{i\in I}C_{i}
\neq\emptyset$ and for $t\in
\mathbb{R}
$ we denote $C^{t}:=\left\{ x\in
\mathbb{R}
^{n}\mid f(x)\leq t\right\} $.
We would like to motivate our scheme (Algorithm \ref{Algorithm:epsilon-scheme}) by reviewing a natural extension of the
ART3+O \cite{ccmzkh10}, which was designed for linear problems, for the
general convex case. We will also show how the mathematical disadvantages of
ART3+O can be treated in our new scheme. ART3+O is based on the same
reformulation of the original optimization problem (\ref{Problem:1}) into a
feasibility problem (\ref{Problem:2}). Then the optimal level set value is
determined using a one-dimensional line search on $t_{k}$. In the original
work \cite{ccmzkh10} the \textit{Dichotomous} (bisection) line search
\cite[Chapter 8]{BSS06} was used (in practice a somewhat modified variant of the line search was used), but any line search could be applied.
Assume that a lower bound of $f$ is given and denote it by $f_{l}$. We denote by
$f^{h}$ an upper bound of $f$ and initialize it to $\infty$.
In \cite{ccmzkh10} a bisection scheme is also used, but only for the linear case; in what follows we generalize this scheme to the convex optimization setting (below $k$ is a natural number).
\begin{algorithm}
[Bisection scheme]\label{Algorithm:Bisection}$\left. {}\right. $
\textbf{Initialization:} Solve the following CFP
\begin{equation}
\text{find a point }x^{0}\in\boldsymbol{C}
\end{equation}
set $f^{h}=f(x^{0})$ and $t_{0}=\left( f_{l}+f^{h}\right) /2$.
\textbf{Iterative step:} Given $t_{k-1}$, try to find a feasible solution $x^{k}
\in\boldsymbol{C}\cap C^{t_{k-1}}$;
(i) If there exists a feasible solution, set $f^{h}=f(x^{k})$ and continue
with $t_{k}:=\left( f_{l}+f^{h}\right) /2$;
(ii) If there is no feasible solution, determined by a \textquotedblleft time-out\textquotedblright\ rule (meaning that a feasible point cannot
be found within $n_{max}$ iterations; other alternatives might be \cite{kac91, Kiwiel96} and \cite{cl02}), then set $f_{l}=t_{k-1}$ and
continue with $t_{k}:=\left( f_{l}+f^{h}\right) /2$;
(iii) If $\left\vert f^{h} - f_{l} \right\vert \leq\gamma$ for small enough $\gamma>0$, then stop. A $\gamma$-optimal solution is obtained.
\end{algorithm}
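A minimal Python sketch of the bisection scheme (Algorithm \ref{Algorithm:Bisection}) follows; the routine \texttt{solve\_cfp} (which, for a given level $t$, returns a feasible point with $f(x)\leq t$ or \texttt{None} after the time-out) is an assumed user-supplied feasibility solver.
\begin{verbatim}
def bisection_scheme(f, solve_cfp, f_low, gamma=1e-5, max_outer=100):
    """Sketch of the bisection scheme.

    f         : objective function
    solve_cfp : callable t -> feasible point x with f(x) <= t and g_i(x) <= 0,
                or None if no such point is found within the time-out
    f_low     : a known lower bound of f on the feasible set
    """
    x = solve_cfp(float("inf"))          # initialization: any feasible point
    if x is None:
        return None
    f_high = f(x)
    for _ in range(max_outer):
        if abs(f_high - f_low) <= gamma: # (iii) gamma-optimal solution reached
            return x
        t = 0.5 * (f_low + f_high)
        y = solve_cfp(t)
        if y is not None:                # (i) feasible: tighten the upper bound
            x, f_high = y, f(y)
        else:                            # (ii) infeasible: raise the lower bound
            f_low = t
    return x
\end{verbatim}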
Next we present our new scheme which we call the level set scheme for solving the
constrained minimization (\ref{Problem:1}). Let $\left\{ \varepsilon
_{k}\right\} _{k=0}^{\infty}$ be some user-chosen positive sequence, such
that $\sum_{k=0}^{\infty}\varepsilon_{k}=\infty$. We choose $\varepsilon_{k}
=\max\{0.1|f(x^{k})|,0.1\}$.
\begin{algorithm}
[Level set scheme]\label{Algorithm:epsilon-scheme}$\left. {}\right. $
\textbf{Initialization:} Solve the following CFP
\begin{equation}
\text{find a point }x^{0}\in\boldsymbol{C}
\end{equation}
and set $t_{0}=f(x^{0})-\varepsilon_{0}$.
\textbf{Iterative step:} Given the current point $x^{k-1}$, try to find a
point $x^{k}\in\boldsymbol{C}\cap C^{t_{k-1}}$;
(i) If there exists a feasible solution, set $t_{k}=f(x^{k})-\varepsilon_{k}$
and continue.
(ii) If there is no feasible solution, then $x^{k-1}$ is an $\varepsilon_{k-1}$-optimal solution.
\end{algorithm}
\begin{remark}
Compared with the bisection strategy, infeasibility is detected only once,
just before we get the $\varepsilon_{k}$-optimal solution.
\end{remark}
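A minimal Python sketch of the level set scheme is given below; as before, \texttt{solve\_cfp} is an assumed user-supplied routine that tries to solve the CFP (\ref{Problem:k_CFP}) for a given level $t$ and returns \texttt{None} when it declares infeasibility, and the update of $\varepsilon_{k}$ follows the rule used in Section \ref{sec:Numerical_Experiments}.
\begin{verbatim}
def level_set_scheme(f, solve_cfp, max_outer=1000):
    """Sketch of the level set scheme.

    f         : objective function
    solve_cfp : callable t -> feasible point x with f(x) <= t and g_i(x) <= 0,
                or None if the CFP is declared infeasible
    """
    x = solve_cfp(float("inf"))              # initialization: x^0 in C
    if x is None:
        return None
    eps = max(0.1, 0.1 * abs(f(x)))          # eps_k = max{0.1, 0.1|f(x^k)|}
    t = f(x) - eps
    for _ in range(max_outer):
        y = solve_cfp(t)
        if y is None:                        # (ii) infeasible: x is eps-optimal
            return x
        x = y                                # (i) feasible: lower the level set
        eps = max(0.1, 0.1 * abs(f(x)))
        t = f(x) - eps
    return x
\end{verbatim}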
This level set scheme is quite general as it allows users to decide in
advance what projection method they would like to use in that scheme. For the
numerical results, we decided to apply the scheme with the following
variations of projection methods.
\textbf{1.} Each convex feasibility problem in the level set scheme is solved
via the \textit{Cyclic Subgradient Projections Method} (CSPM) (\ref{csp-def}) with over-relaxation parameters $\lambda_k\in(1,2)$.
\textbf{2.} Each convex feasibility problem in the level set scheme is solved
based on the superiorization methodology. By doing so we try to decrease the objective function value below $t_{k-1}$. Following the recent result of
\cite[Section 7]{cr13}, the methodology can be extended to convex and non-convex constraints. In general,
superiorization does not provide an optimality certificate, therefore, we
propose a sequential superiorization method where we decrease the sub-level
sets of the objective function $f$ according to the level set scheme.
\textbf{3.} In the first variation where CSPM is used to solve the resulting
feasibility problems, it may happen that the objective function only decreases
by some small $\varepsilon_k$ in each step $k$. Combining the previous ideas, if only small
steps are detected as progress, a perturbation along the negative gradient of
the objective is performed - just like in superiorization. That is, if
insufficient decrease is detected within a block of iterations, then the current
iterate is shifted by $x^{k}\leftarrow x^{k}-1.9\nabla f(x^{k})$. It is clear that this
is a heuristic step and does not guarantee that $f(x^{k-1}-1.9\nabla f(x^{k-1}))\leq f(x^{k-1})$, and so it can be revised by using an adaptive step-size rule for some positive $\alpha$ such that $f(x^{k-1}-\alpha\nabla f(x^{k-1}))\leq f(x^{k-1})$.
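One possible realization of such an adaptive step-size rule is a simple backtracking search, sketched below in Python (illustrative only; the fixed factor $1.9$ of the perturbation step is used merely as the initial trial step).
\begin{verbatim}
import numpy as np

def perturb_with_backtracking(x, f, grad_f, alpha0=1.9, max_halvings=30):
    """Replace x by x - alpha * grad_f(x) with the largest tested
    alpha <= alpha0 for which the objective does not increase;
    if no such alpha is found, keep x unchanged."""
    fx = f(x)
    g = grad_f(x)
    alpha = alpha0
    for _ in range(max_halvings):
        z = x - alpha * g
        if f(z) <= fx:
            return z
        alpha *= 0.5
    return x
\end{verbatim}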
Let $c,s>0$ be small user-chosen constants and let $\left\{ \varepsilon_{k}\right\} _{k=0}^{\infty}$ be a positive sequence such that $\sum_{k=0}^{\infty}\varepsilon_{k}=\infty$; set $\delta=0$. In addition, determine the size $BLOCK$ of each block of iterations; for example, if we decide to run 1000 iterations then $BLOCK=1000$.
\begin{algorithm}
\label{Algorithm:modified-epsilon-scheme} $\left. {}\right. $
\textbf{Initialization:} Let $\delta=0$ and $t_{-1}:=\infty$; Solve the
following CFP
\begin{equation}
\text{find a point }x^{0}\in\boldsymbol{C}
\end{equation}
and set $t_{0}=f(x^{0})-\varepsilon_{0}$.
\textbf{Iterative step:} At the $k$-th iterate compute $\left\vert
t_{k-2}-t_{k-1}\right\vert $;
If $\left\vert t_{k-2}-t_{k-1}\right\vert \leq c\varepsilon_{k-1}$, then
$\delta=\delta+1$.
If $\delta/BLOCK>s,$ then $\delta=0$ and $x^{k-1}\leftarrow x^{k-1}-1.9\nabla
f(x^{k-1}).$
Set $t_{k-1}=f(x^{k-1})-\varepsilon_{k-1}$ and try to find a solution $x^{k}\in\boldsymbol{C}\cap C^{t_{k-1}}$ to the
CFP.
\end{algorithm}
\begin{remark}
\textbf{Relation with previous work.} There are numerous approaches in
the literature on how to update $t_{k}$, and thereby the sub-level set of $f$, in
solving (\ref{Problem:k_CFP}), or how to transform (\ref{Problem:1}) to a
sequence of CFPs. Some of these schemes are Khabibullin \cite[English
translation]{Khabibullin87_1} and \cite[English translation]{Khabibullin87},
\textit{Cutting-planes methods} or \textit{localization methods}; see
\cite{Kelly60, em75, lnn95, gly96, Kiwiel96, gk99} and \cite{BV07, cl02}, Cegielski in \cite{cegielski99}
and also in \cite{cd02,cd03}, \textit{subgradient method for constrained
optimization}, see e.g. \cite{bm08}. For optimization problem (\ref{Problem:1}
) with separable objective see De Pierro and Helou Neto \cite{dn09} and see also \cite{ccmzkh10}.
\end{remark}
\subsection{Convergence \label{sec:Convergence}}
Next we present the convergence proof of Algorithm \ref{Algorithm:epsilon-scheme}. We use different arguments
than those presented for other finite convergent projection methods, for example, to name but a few, Khabibullin \cite[English translation]{Khabibullin87} (see also
Kulikov and Fazylov \cite{kf89} and Konnov \cite[Procedure A.]{konnov98}) and
Iusem and Moledo \cite{im87}, De Pierro and Iusem \cite{di98} and Censor, Chen
and Pajoohesh \cite{ccp11}. These algorithms assume that the Slater Condition
holds, i.e., Definition \ref{Def:Slater}.
\begin{theorem}
\label{thm:P_k} Let $(P_{k})_{k=0}^{\infty}$ be a sequence of algorithmic
schemes (formally, each $P_{k}$ is a Turing machine). For every $k\in
\mathbb{N}\cup\{0\}$ the goal of $P_{k}$ is to solve the sub-problem (\ref{Problem:k_CFP}). It
produces, after a finite number of machine operations, an output, and then it
terminates. There are three possible cases for this output: if there exists a
solution to (\ref{Problem:k_CFP}) and the machine is able to find it before it passes a given
threshold (that is, before it performs a too large number of machine
operations, where this \textquotedblleft large number\textquotedblright\ is fixed in the beginning), then this
output is a point $x^{k}$ which solves (\ref{Problem:k_CFP}); if there exists no solution to
(\ref{Problem:k_CFP}) and the machine is able to determine this case before it passes the
threshold, then the output is a string indicating that (\ref{Problem:k_CFP}) has no solution;
otherwise the output is a string indicating that the threshold has been
passed. In addition, if for some $k\in\mathbb{N}\cup\{0\}$ the algorithmic
scheme $P_{k}$ is able to find a point $x^{k}$ satisfying (\ref{Problem:k_CFP}), then a
positive number $\epsilon_{k}$ is produced (it may or may not depend on
$x^{k}$) and one defines $t_{k}:=f(x^{k})-\epsilon_{k}$. Assume further that
there exists a sequence $(\tilde{\epsilon}_{k})_{k=0}^{\infty}$ satisfying
$\sum_{k=1}^{\infty}\tilde{\epsilon}_{k}=\infty$ and having the property that
for every $k\in\mathbb{N}\cup\{0\}$, if $P_{k}$ is able to find a point
$x^{k}$ satisfying (\ref{Problem:k_CFP}) before passing the threshold, then $\epsilon_{k}
\geq\tilde{\epsilon}_{k}$.
Under the above mentioned assumptions, Algorithm \ref{Algorithm:epsilon-scheme} terminates after a
finite number of machine operations, and, moreover, exactly one of the
following cases must hold:
\textbf{Case 1:}\label{item:P_0} The only algorithmic scheme that has been applied is
$P_{0}$ and either it declares that (\ref{Problem:k_CFP}) has no solution or it declares that
the threshold has been passed;
\textbf{Case 2:}\label{item:LastComputed} there exists $k\in\mathbb{N}\cup\{0\}$ such
that $P_{0},\ldots,P_{k}$ are able to solve (\ref{Problem:k_CFP}) before the threshold has
been passed and $P_{k+1}$ terminates by declaring that (\ref{Problem:k_CFP}) does not have a
solution. In this case $x^{k}$ is an $\epsilon_{k}$-approximate solution of
the minimization problem (\ref{Problem:2}).
\textbf{Case 3:}\label{item:threshold} there exists $k\in\mathbb{N}\cup\{0\}$ such that
$P_{0},\ldots,P_{k}$ are able to solve (\ref{Problem:k_CFP}) before the threshold has been
passed and $P_{k+1}$ terminates by declaring that the threshold has been passed.
\end{theorem}
\begin{proof}
A simple verification shows that the three cases mentioned above are mutually disjoint, and therefore at
most one of them can hold. Hence it is sufficient to show that at least one of
these cases holds. The level-set scheme starts at $k=0$. According to our
assumption on $P_{0}$ (and on any other algorithmic scheme), either it is able
to solve (\ref{Problem:k_CFP}) before passing the threshold, or it is able to show before
passing the threshold that (\ref{Problem:k_CFP}) does not have any solution, or it passes the
threshold before being able to determine whether (\ref{Problem:k_CFP}) has or does not have
any solution. If either the second or the third cases holds, then we are in
the first case (Case 1) mentioned by the theorem, and
the proof is complete (the number of machine operations done in both cases is
finite by the assumption on $P_{0}$). Hence from now on we assume that $P_{0}$
is able to solve (\ref{Problem:k_CFP}) before passing the threshold.
According to the level-set scheme definition, since we assume that $P_{0}$ was
able to solve (\ref{Problem:k_CFP}), we should now consider $P_{1}$. Either $P_{1}$ finds a
solution $x^{1}$ to (\ref{Problem:k_CFP}) before passing the threshold, or it is able to show
before passing the threshold that (\ref{Problem:k_CFP}) does not have any solution, or it
passes the threshold before being able to determine whether (\ref{Problem:k_CFP}) has or does
not have any solution. In the second case we are in Case
2 of the theorem and in the third case we are in Case 3 of the theorem. Hence in the second
and third cases the proof is complete (up to the verification that in the
second case $x^{k}$ is an $\epsilon_{k}$-optimal solution: see the next
paragraph), and so we assume from now on that $P_1$ finds a solution to (\ref{Problem:k_CFP}) before passing the threshold. By continuing this reasoning it can be shown by induction that one of the following subcases
must hold: either any $P_{k}$, $k\in\mathbb{N}\cup\{0\}$, is able to solve
(\ref{Problem:k_CFP}) before passing the threshold, or there exists a minimal $k\in
\mathbb{N}\cup\{0\}$ such that any $P_{j}$, $j\in\{0,\ldots,k\}$ is able to
solve (\ref{Problem:k_CFP}) before passing the threshold but $P_{k+1}$ either shows that (\ref{Problem:k_CFP})
does not have any solution or $P_{k+1}$ passes the threshold before being able
to determine whether (\ref{Problem:k_CFP}) has a solution or does not have any solution. In
the second subcase we are in Case 2 of the
theorem, and in the third subcase we are in Case 3 of the theorem. In both subcases the accumulated number of machine operations is, of course, finite, since it is the sum of the finitely many machine operations
done by each of the algorithmic schemes $P_{j}$, $j\in\{0,1,\ldots,k+1\}$.
In the third subcase the proof is complete but in the second subcase we also need to
show that $x^{k}$ is an $\epsilon_{k}$-optimal solution. Indeed, suppose that
this subcase holds. Then $C\cap C^{t_{k}}=\emptyset$. Since a basic assumption of
the paper is that the set of minimizers of $f$ over $C$ is non-empty, there
exists $x^{*}\in C$ satisfying $f(x^{*})=t^{*}:=\inf\{f(x): x\in C\}$. It must
be that $t^{*}>t_{k}$ because otherwise we would have $f(x^{*})=t^{*}\leq
t_{k}$, i.e., $x^{*}\in C\cap C^{t_{k}}$, a contradiction. Because $x^{k}\in
C$ one has $t^{*}\leq f(x^{k})$. Hence $t^{*}\leq f(x^{k})=t_{k}+\epsilon
_{k}<t^{*}+\epsilon_{k}$ and therefore $|f(x^{k})-t^{*}|<\epsilon_{k}$. In
other words, $x^{k}$ is an $\epsilon_{k}$-optimal solution, as required.
Therefore it remains to deal with the first subcase mentioned earlier in which
each $P_{k}$, $k\in\mathbb{N}\cup\{0\}$, is able to solve (\ref{Problem:k_CFP}) before passing
the threshold. Assume to the contrary that this subcase holds. Then for
each $k\in\mathbb{N}\cup\{0\}$ the point $x^{k}$ and the numbers $\epsilon
_{k}$ and $t_{k}$ are well-defined and their definitions imply (by induction)
that when $k\geq2$, then
\begin{equation}
t_{k}=f(x^{k})-\epsilon_{k}\leq t_{k-1}-\epsilon_{k}\leq\ldots\leq t_{0}
-\sum_{j=1}^{k}\epsilon_{j}\leq t_{0}-\sum_{j=1}^{k}\tilde{\epsilon}_{j}.
\label{t_k<t_0-sum}
\end{equation}
Because $t^{\ast}=f(x^{\ast})\in\mathbb{R}$ and since, according to our
assumption, $\sum_{j=1}^{\infty}\tilde{\epsilon}_{j}=\infty$, for large enough
$k\in\mathbb{N}$ we have $t_{0}-t^{\ast}<\sum_{j=1}^{k}\tilde{\epsilon}_{j}$.
By combining this with (\ref{t_k<t_0-sum}) it follows that
\begin{equation}
t_{k}\leq t_{0}-\sum_{j=1}^{k}\tilde{\epsilon}_{j}<t^{\ast}. \label{t_k<v_0}
\end{equation}
It must be that one of the algorithmic schemes $P_{j}$, $j\in\{1,\ldots,k+1\}$
will fail to solve (\ref{Problem:k_CFP}) (either by passing the threshold or by
determining that (\ref{Problem:k_CFP}) does not have any solution), since if this is not true,
then in iteration $k+1$ a solution $x^{k+1}$ to (\ref{Problem:k_CFP}) will be found by
$P_{k+1}$. Now, because $x^{k+1}$ solves (\ref{Problem:k_CFP}) we have $f(x^{k+1})\leq t_{k}$.
Hence it follows from (\ref{t_k<v_0}) that $f(x^{k+1})<t^{\ast}$, a
contradiction to the definition of $t^{\ast}$. This contradiction shows that
the subcase mentioned earlier in which each $P_k$, $k\in \mathbb{N}\cup\{0\}$ is able to solve (\ref{Problem:k_CFP}) before passing the threshold, cannot occur.
\end{proof}
\begin{remark}
If for some given $\epsilon>0$ the sequence $\{\epsilon_{k}\}_{k=0}^{\infty}$
satisfies $\epsilon_{k}<\epsilon$ for all $k\in\mathbb{N}$ sufficiently large,
then the theorem ensures, in the second case mentioned in it, that the point
$x^{k}$ will be an $\epsilon$-approximate solution.
\end{remark}
\begin{remark}
An illustration of the condition needed in Theorem \ref{thm:P_k} is to let $\epsilon
_{k}:=\max\{0.1,0.1 |f(x^{k})|\}$, as done in the numerical simulations
(Section \ref{sec:Numerical_Experiments}). In this case $\epsilon_{k}\geq\tilde{\epsilon}_{k}:=0.1$ for all
$k\in\mathbb{N}\cup\{0\}$ for which $P_{k}$ is able to solve (\ref{Problem:k_CFP}) before
passing the threshold, and, in addition, $\sum_{k=0}^{\infty} \tilde{\epsilon
}_{k}=\infty$, as required. However, if one merely defines $\epsilon_{k}:=0.1
|f(x^{k})|$ instead of defining $\epsilon_{k}:=\max\{0.1,0.1 |f(x^{k})|\}$,
or, more generally, if one uses algorithmic schemes $P_{k}$ which, for every
$k\in\mathbb{N}\cup\{0\}$, are able to solve (\ref{Problem:k_CFP}) before passing the
threshold, and if $\sum_{k=0}^{\infty}\epsilon_{k}<\infty$, then it may happen
that none of the values $f(x^{k})$ approximate well the optimal value $t^{*}$.
Indeed, consider $f(x):=x^{2}-100$, $x\in C:=\mathbb{R}$. Denote $x^{0}:=\sqrt{500}$.
Suppose that for each $k\in\mathbb{N}$ our schemes $P_{k}$ find a point
$x^{k}$ satisfying $f(x^{k})=t_{k-1}$, namely $x^{k}=\sqrt{100+t_{k-1}}$ (we
assume that $P_{k}$ can represent numbers in an algebraic way which allows it
to store square roots without the need to represent them in a decimal way;
such schemes can be found in the scientific domains called \textquotedblleft
computer algebra\textquotedblright, \textquotedblleft exact numerical
computation\textquotedblright, and \textquotedblleft symbolic
computation\textquotedblright). Let $\epsilon_{0}:=0.1|f(x^{0})|$. Since
$f(x^{0})=400>0$ we have $t_{0}=f(x^{0})-\epsilon_{0}=0.9\cdot400=360$. Now we
need to find a point $x^{1}$ satisfying $f(x^{1})=360$, i.e.,
\[
(x^{1})^{2}-100=360=0.9\cdot f(x^{0})=0.9((x^{0})^{2}-100)=0.9(x^{0})^{2}-90
\]
and thus $x^{1}=\sqrt{10+0.9(x^{0})^{2}}$. By induction
$x^{k}=\sqrt{10+0.9(x^{k-1})^{2}}$, $f(x^{k})=t_{k-1}$ and $\epsilon
_{k}:=0.1|f(x^{k})|$ for every $k\in\mathbb{N}$. In particular, we can see by
induction that $x^{k}\geq10$ for all $k$ and hence $\epsilon_{k}=0.1f(x^{k})$
and $t_{k}=f(x^k)-\epsilon_k=0.9f(x^{k})\geq0$ for all $k\in\mathbb{N}$. Therefore
$|t_{k}-t^{\ast}|\geq100$ and $|f(x^{k})-t^{\ast}|>100$ for all $k\in
\mathbb{N}$ and hence we neither have $\lim_{k\rightarrow\infty}
f(x^{k})=-100=t^{\ast}$ nor $\lim_{k\rightarrow\infty}t_{k}=t^{\ast}$. It
remains to show that $\sum_{k=1}^{\infty}\epsilon_{k}<\infty$. Indeed, observe
that since $0<f(x^{k+1})\leq t_{k}$ for every $k\in\mathbb{N}\cup\{0\}$ and
since from (\ref{t_k<t_0-sum}) we have $\sum_{i=0}^{k}\epsilon
_{i}\leq t_{0}-t_{k}\leq t_{0}$, it follows that $\sum_{k=1}^{\infty}
\epsilon_{k}\leq t_{0}$. Therefore $\sum_{k=1}^{\infty}\epsilon
_{k}<\infty$ as claimed.
\end{remark}
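The behaviour described in this remark is easy to reproduce numerically; the short script below (a plain illustration, not one of the solvers of Section \ref{sec:Numerical_Experiments}) iterates $x^{k}=\sqrt{10+0.9(x^{k-1})^{2}}$ and shows $f(x^{k})$ stalling near $0$ instead of approaching $t^{\ast}=-100$.
\begin{verbatim}
import math

f = lambda x: x ** 2 - 100.0
x = math.sqrt(500.0)                 # x^0, so f(x^0) = 400
for k in range(1, 101):
    x = math.sqrt(10.0 + 0.9 * x * x)
    if k % 25 == 0:
        print(k, x, f(x))
# x^k decreases towards 10 and f(x^k) towards 0, far from t* = -100,
# because the chosen eps_k = 0.1|f(x^k)| are summable along this run.
\end{verbatim}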
\section{Numerical experiments \label{sec:Numerical_Experiments}}
In this section, we compare several variants of the two optimization schemes (Algorithms \ref{Algorithm:epsilon-scheme}
and \ref{Algorithm:modified-epsilon-scheme}) for some selected optimization problems. All solvers were tested against the
freely available library of convex quadratic programming problems stored in
the QPS format by Maros and M\'{e}sz\'{a}ros \cite{mm99} as well as clinical
cases from intensity modulated radiation therapy planning (IMRT) provided to
us by the German Cancer Research Center (DKFZ) in Heidelberg. The QPS problems
were parsed using the parser from the CoinUtils package \cite{CoinUtils} and consist of quadratic
objectives and linear constraints. The IMRT problem data is constructed using
a prototypical treatment planning system developed by the Fraunhofer ITWM and
consists of nonlinear convex objectives and constraints.
\begin{remark}
\label{remark:conv} It is clear that from the mathematical point of view only
finite convergence projection methods can be applied in each iterative step.
However, numerical experiments show that even asymptotically convergent
algorithms can be used, when the stopping rule is chosen in an educated way.
For further finite convergence methods see \cite{pm79, fuku82, mph81}.
\end{remark}
The algorithms were implemented in C++. As solvers for the feasibility
problem, we implemented the finite convergence variants of the Cyclic Subgradient Projections Method (CSPM) and
the Algebraic Reconstruction Technique 3 (ART3+), and also their regular versions with a standard stopping rule; see the beginning of Section \ref{sec:GeneralOpt} and Remark \ref{remark:conv}. The superiorized versions of these methods simply use the objective function of the optimization problem as
a merit function to decrease. Although the superiorized versions of CSPM and
ART3+ performed surprisingly well in terms of the objective function value
they obtained, the solutions were in almost all cases far from the optimum.
Table \ref{tab:solver_variants} lists the variants of the level set and bisection
schemes that are compared:
\begin{table}[htp]
\begin{tabular}
[c]{|l|l|}\hline
Scheme variant & Abbreviation\\\hline
Level set (Alg. \ref{Algorithm:epsilon-scheme}) with CSPM & ls\_cspm\\\hline
Level set (Alg. \ref{Algorithm:epsilon-scheme}) with ART3+ & ls\_art3+\\\hline
Accelerated level set (Alg. \ref{Algorithm:modified-epsilon-scheme}) with
CSPM & ls\_acc\_cspm\\\hline
Level set (Alg. \ref{Algorithm:epsilon-scheme}) with superiorized CSPM &
ls\_sup\_cspm\\\hline
Level set (Alg. \ref{Algorithm:epsilon-scheme}) with superiorized ART3+ &
ls\_sup\_art3+\\\hline
Accelerated level set (Alg. \ref{Algorithm:modified-epsilon-scheme}) with
superiorized CSPM & ls\_acc\_sup\_cspm\\\hline
Bisection (Alg. \ref{Algorithm:Bisection}) with CSPM & bis\_cspm\\\hline
Bisection (Alg. \ref{Algorithm:Bisection}) with ART3+ & bis\_art3+\\\hline
Accelerated bisection with CSPM & bis\_acc\_cspm\\\hline
Bisection (Alg. \ref{Algorithm:Bisection}) with superiorized CSPM &
bis\_sup\_cspm\\\hline
Bisection (Alg. \ref{Algorithm:Bisection}) with superiorized ART3+ &
bis\_sup\_art3+\\\hline
Accelerated bisection with superiorized CSPM & bis\_acc\_sup\_cspm\\\hline
\end{tabular}
\caption{Overview of all tested schemes.}
\label{tab:solver_variants}
\end{table}
The bisection schemes were accelerated in the same way as in Algorithm
\ref{Algorithm:modified-epsilon-scheme}; although this acceleration is a heuristic that is not guaranteed to converge, we decided to test it and add it to our comparison. To determine whether a feasible solution exists, we set a maximum number of 1000 iterations for each of the
feasibility solvers. In Algorithm \ref{Algorithm:Bisection}(iii) we choose $\gamma=10^{-5}$. If no feasible solution is found after 1000 projections,
it is assumed that none exists. For the choice of $\varepsilon_{k}$ we used
a multiplicative update rule ($\varepsilon_{k}=0.1|f(x^{k})|$) if the absolute value of the objective function is
greater than $1$ and a subtractive update rule ($\varepsilon_{k}=0.1$) otherwise.
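For concreteness, the following C++ fragment sketches this update rule; the function and variable names are illustrative and do not correspond to our actual implementation.
\begin{verbatim}
#include <cmath>

// Sketch of the epsilon update rule described above (illustrative names):
// multiplicative if the absolute objective value exceeds 1, constant otherwise.
double epsilonUpdate(double objectiveValue) {
    const double factor = 0.1;
    if (std::abs(objectiveValue) > 1.0) {
        return factor * std::abs(objectiveValue); // epsilon_k = 0.1 |f(x^k)|
    }
    return factor;                                // epsilon_k = 0.1
}
\end{verbatim}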
\subsection{IMRT cases descriptions}
Given fixed irradiation directions, the objective in IMRT optimization is to
determine a treatment plan consisting of an energy fluence distribution to
create a dose in the patient that irradiates the tumor as homogeneously as
possible while sparing critical healthy organs \cite{KueMonSche09}. Figure
\ref{fig:imrt} shows irradiation directions and some energy fluence maps for a paraspinal tumor case.
\begin{figure}
\caption{The gantry moves around the couch on which the patient lies. The couch position
may also be changed to alter the beam directions.}
\label{fig:imrt}
\end{figure}
The problem is multi-criteria in nature and numerical optimization
problems are often a weighted sum scalarization of the multiple
objectives involved. IMRT optimization problems can be formulated
so that they are convex. For this work, we selected nine
head-and-neck cancer patients and posed the same optimization
formulations for each to ensure comparability, see Figure
\ref{fig:Head-Neck} for two of the nine patients. We then chose
four types of scalarization weights to determine four treatment
plans for each patient, each with distinct solution
properties. Overall, this resulted in 36 optimization problems.
The following list describes the different scalarizations.\\
\textbf{1. }High weights on tumor volumes, low weights on healthy organs;
\textbf{2. }High weights on tumor volumes and brain stem;
\textbf{3. }High weights on tumor volumes and spinal cord;
\textbf{4. }High weights on tumor volumes and parotis glands.\\
In order to numerically optimize the treatment plans, we define
the energy fluence distribution - the variables of the optimization problem - as a vector $x\in\mathbb{R}^{n}$ (typically $n\approx 10^3$)
and assume that the resulting radiation dose in the patient is
given by $d=Dx\in\mathbb{R}^{m}$ (typically $m\approx 10^6$), where the entries $D_{ij}$ of
the so-called dose matrix $D\in\mathbb{R}^{m\times n}$ contain the
information of how much radiation is deposited in voxel $i$ of the
patient body by a unit amount of energy emitted by a small area
$j$ on the beam surface. That is, the dose in each voxel $i$ is given by
$ d_i(x):= \sum_{j=1}^n D_{ij} x_j $. To achieve a homogeneous dose in a tumor
volume given by voxel indices $\mathcal{T}$, we use functions to
minimize the amount of under-dosage below a prescribed dose $R$,
\[
f_{\text{under}}(x):=\left( \left\vert \mathcal{T}\right\vert ^{-1}\sum
_{i\in\mathcal{T}}\max(0,R-d_{i}(x))^{2}\right) ^{\frac{1}{2}},
\]
and, symmetrically, the over-dosage above a given prescription. This is done
using two functions to provide better control over both aspects. These
objectives are also constrained from above, resulting in nonlinear but convex
constraints. Dose in risk organs given by voxel indices $\mathcal{R}$ is
minimized by norms of the dose in those organs:
\[
f_{\text{norm}}(x):=\left( \left\vert \mathcal{R}\right\vert ^{-1}\sum
_{i\in\mathcal{R}}d_{i}^{p}(x)\right) ^{\frac{1}{p}},
\]
where we used $p=2$ and $p=8$, depending on whether the organ is more
sensitive to the general amount of radiation (e.g. parotis glands) or the
maximal dose (e.g. spinal cord) received. Further relevant data, typical and standard in IMRT and in particular needed for the implementation of our scheme, i.e., the constraint set $\boldsymbol{C}$, can be found in \cite{KueMonSche09}, which also includes details on numerical optimization in IMRT planning; see also the works \cite{cekb05, aygk1505}. Note that the IMRT problem formulations in our case do not contain any linear constraints, so that in the analysis ART3+ is
omitted as it is identical to CSPM in this setting.
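To make the construction of the IMRT objectives concrete, the following C++ sketch evaluates the dose $d=Dx$ and the two functions above for a dense dose matrix; the data layout and names are illustrative only and differ from the actual treatment planning system.
\begin{verbatim}
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// d = D x for a dense m x n dose matrix (illustrative sketch).
std::vector<double> dose(const std::vector<std::vector<double>>& D,
                         const std::vector<double>& x) {
    std::vector<double> d(D.size(), 0.0);
    for (std::size_t i = 0; i < D.size(); ++i)
        for (std::size_t j = 0; j < x.size(); ++j)
            d[i] += D[i][j] * x[j];
    return d;
}

// Under-dosage below the prescription R, averaged over the tumor voxels T.
double fUnder(const std::vector<double>& d, const std::vector<int>& T, double R) {
    double sum = 0.0;
    for (int i : T) sum += std::pow(std::max(0.0, R - d[i]), 2.0);
    return std::sqrt(sum / T.size());
}

// Mean p-norm of the dose over the voxels of a risk organ (p = 2 or p = 8 above).
double fNorm(const std::vector<double>& d, const std::vector<int>& organ, double p) {
    double sum = 0.0;
    for (int i : organ) sum += std::pow(d[i], p);
    return std::pow(sum / organ.size(), 1.0 / p);
}
\end{verbatim}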
\begin{figure}
\caption{Two of the nine
patients. Highlighted are some tumor volumes (lymphatic pathways
and the macroscopic tumor volume) and some healthy
structures to be spared.}
\label{fig:imrt_case_hnaa_sagital}
\label{fig:imrt_case_hnaa_trans}
\label{fig:imrt_case_hnbb_sagital}
\label{fig:imrt_case_hnbb_trans}
\label{fig:Head-Neck}
\end{figure}
\subsection{Quality evaluation of the solutions}
All of the 36 IMRT optimization problems could be solved by all variants of
the schemes. This was not the case for the QPS problems: of the 101 problems
tested in the library, only 50 problems could be solved by all of the variants of the
schemes. There are two reasons why, for the other 51 cases, the projection
methods were unable to find an initial feasible solution. The first reason is that most of these QPS problems are indeed infeasible. The second reason is that in \cite{mm99}, the primal infeasibility stopping criterion is determined as $\|Ax-b\|/(1-\|b\|)<10^{-8}$ (where $A$ and $b$ are part of the QPS problem constraints and $\|b\|<1$), combined with an additional stopping criterion for the dual infeasibility, while in our implementations we choose the maximum number of iterations, as in \cite{ccmzkh10} (denoted there by $Q$), to be the stopping rule. As can be seen, these two stopping criteria are different. It turns out that even when the number of iterations was increased to $10,000$, the CSPM still did not make a difference, and hence these problems were declared infeasible.
In general, the behaviour of projection methods in the inconsistent (infeasible) case has attracted many researchers and the subject is not fully explored. Some of the results in this area state that in the case of infeasibility there is a cyclic convergence, while for other methods, mainly simultaneous ones, there is convergence to some point that minimizes the norm of the infeasibilities. For further details, the reader is referred to the works of Gubin, Polyak and Raik \cite[Theorem 2]{gpr67}, Censor and Tom \cite{ct03}, the book of Chinneck \cite{Chinneck08} and the many references therein. Moreover, in the recent result of Censor and Zur \cite{cz16}, superiorization is used for the inconsistent linear case and it is shown that the generated sequence converges to a point that minimizes a proximity function which measures the violation of the linear constraints.
The following analysis concerning the QPS problems is restricted to those problems that could be solved by all variants. We report the required total number of projections and the total number of objective evaluations for each method as measures of numerical complexity, since these are independent of machine architecture or efficiency of implementation (parallelization or other software acceleration techniques). To measure the quality of the solutions, the following score $Q$ was calculated for each solver variant and each problem. Let $\hat{f}$ be the best objective the solver found and $f^{*} $ the best known objective value for the problem. We define
\[
Q :=
\begin{cases}
\hat{f}, & \mbox{if $f^*=0$}\\
\hat{f} - f^{*}, & \mbox{if $|f^*| \leq 1$}\\
(\hat{f} - f^{*}) / (|f^{*}|), & \mbox{else}
\end{cases}
\]
Thus, the closer the score is to $0$, the better; a positive value of $Q$ measures
the deviation from optimality. Tables
\ref{table:SummaryQualityScoresAlgorithms_QPS} and
\ref{table:SummaryQualityScoresAlgorithms_IMRT} show some statistics for the
deviation from optimality for the solvers.
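For reference, a minimal C++ sketch of the score $Q$ defined above (with illustrative names) reads as follows.
\begin{verbatim}
#include <cmath>

// Quality score Q as defined above: fHat is the best objective value the
// solver found, fStar the best known objective value for the problem.
double qualityScore(double fHat, double fStar) {
    if (fStar == 0.0)           return fHat;
    if (std::abs(fStar) <= 1.0) return fHat - fStar;
    return (fHat - fStar) / std::abs(fStar);
}
\end{verbatim}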
The median quality scores of all optimization schemes for the problems that
could be solved are very good, meaning each solver can be expected to find the
optimal solution if the underlying projection method can find a feasible
starting point. The average is heavily skewed towards some outliers, i.e. instances for
which the algorithms needed many iterations -
especially for the QPS problems and in the bisection schemes for both problem
types. However, not all problems are solved very well, as the 90-th quantiles show: in 10\%
of all problem instances the algorithms did not produce a very good solution.
Based on these findings, the following questions are addressed in the subsequent subsections:
\textbf{1. }Do the accelerated versions of the schemes outperform the basic
versions in terms of quality and complexity?
\textbf{2. }Do the superiorized versions of the feasibility solvers outperform
their basic variants since the average deviations are lower for those solvers
with \textquotedblleft sup\textquotedblright?
\textbf{3. }Is the level set scheme better than the bisection scheme in terms
of quality and complexity?
\begin{table}[H]
\begin{tabular}
[c]{|l|c|c|c|c|}\hline
Scheme variant & Average Q & Median Q & 10-th quantile Q & 90-th quantile
Q\\\hline
ls\_cspm & 0.44 & 0.05 & 0.00 & 0.66\\\hline
ls\_art3+ & 1.83 & 0.05 & 0.00 & 0.66\\\hline
ls\_acc\_cspm & 0.44 & 0.06 & 0.00 & 0.66\\\hline
ls\_sup\_cspm & 0.17 & 0.05 & 0.00 & 0.66\\\hline
ls\_sup\_art3+ & 0.14 & 0.06 & 0.00 & 0.66\\\hline
ls\_acc\_sup\_cspm & 0.20 & 0.05 & 0.00 & 0.66\\\hline
bis\_cspm & 7.22 & 0.04 & 0.00 & 1.10\\\hline
bis\_art3+ & 7.07 & 0.04 & 0.00 & 1.00\\\hline
bis\_acc\_cspm & 7.24 & 0.05 & 0.00 & 1.10\\\hline
bis\_sup\_cspm & 0.19 & 0.03 & 0.00 & 0.89\\\hline
bis\_sup\_art3+ & 0.16 & 0.02 & 0.00 & 0.83\\\hline
bis\_acc\_sup\_cspm & 0.21 & 0.05 & 0.00 & 0.89\\\hline
\end{tabular}
\caption{Quality scores of all tested algorithms over all 50 QPS problems that
could be solved by all variants. The statistics are taken over the 50 problem
runs, thus aggregating the results over all problems.}
\label{table:SummaryQualityScoresAlgorithms_QPS}
\end{table}
\begin{table}[H]
\begin{tabular}
[c]{|l|c|c|c|c|}\hline
Scheme variant & Average Q & Median Q & 10-th quantile Q & 90-th quantile
Q\\\hline
ls\_cspm & 0.13 & 0.11 & 0.03 & 0.23\\\hline
ls\_acc\_cspm & 0.13 & 0.11 & 0.03 & 0.23\\\hline
ls\_sup\_cspm & 0.06 & 0.06 & 0.00 & 0.12\\\hline
ls\_acc\_sup\_cspm & 0.05 & 0.05 & 0.00 & 0.13\\\hline
bis\_cspm & 2.72 & 0.02 & 0.00 & 8.52\\\hline
bis\_acc\_cspm & 2.72 & 0.02 & 0.00 & 8.52\\\hline
bis\_sup\_cspm & 0.04 & 0.04 & 0.00 & 0.08\\\hline
bis\_acc\_sup\_cspm & 0.04 & 0.03 & 0.00 & 0.09\\\hline
\end{tabular}
\caption{Quality scores of all tested algorithms over all IMRT problems. ART3+
was omitted as the problem formulations did not contain any linear
constraints. The statistics are taken over the 36 problem runs, thus
aggregating the results over all problems.}
\label{table:SummaryQualityScoresAlgorithms_IMRT}
\end{table}
\subsection{Is the accelerated scheme better than the basic scheme?}
Only the CSPM variants for the level set scheme and the bisection scheme are
studied, since they are the most promising candidates for each (in the bisection
scheme, there is no difference between CSPM and ART3+). The quality scores for
the level set scheme with CSPM ls\_cspm and the accelerated level set scheme
ls\_acc\_cspm were compared to see if there is a statistically significant
difference in the outcome of the methods. For the 50 QPS problems, the median
difference is 0, and the t-Test for two-tailed sample mean difference returns
a p-value of 0.472, indicating that there is no real difference between the
two versions. For the IMRT problems there was absolutely no difference in the
quality score between the two variants.
The results for the bisection scheme are even clearer. For the QPS problems,
the median difference is 0 and the average difference is -0.01. In no case
could the accelerated version produce a better objective score, and at worst,
it produced a loss of 0.28 objective score. A statistical test was not
performed for this case. Similar results were found for the IMRT problems.
As there is no difference in quality, the question remains whether the accelerated variants can be better in terms of faster function decrease or number of constraint projections or function evaluations. For IMRT, there was no difference in running time or rate of decrease of the objective function: the behavior was identical for the base
and accelerated versions. For the QPS problems, however, the acceleration of
the level set schemes led to a faster-converging algorithm. The rate of
decrease of the objective function over all feasible solutions produced by the
scheme with the accelerated version can be up to 15 times the rate of the
basic version. The chart in Figure \ref{fig:speedupObjDecLsAccCspmVsLsCspm}
shows the frequencies of different speedup factors realized for the
accelerated level set scheme over the basic level set scheme. The bins on the
horizontal axis denote the multiplication factor of how much faster the
objective scores (in case of Figure
\ref{fig:speedupObjDecLsAccCspmVsLsCspmQPS}) decreased in the accelerated
cases. ``Frequency'' refers to the count of solver instances for which the given
multiplication factor was observed. It should be noted that the frequency for
the 0 bin is exactly the count of instances with zero speedup, that is, the cases in which the accelerated version
was not faster in terms of function decrease.
\begin{figure}
\caption{Factor of how much the objective function decreased faster in the
accelerated versions of the schemes (QPS problems). A speedup factor of 0 means that no difference in speed was observed. Negative speedups indicate that the
indicated scheme performed worse than the compared method.}
\label{fig:speedupObjDecLsAccCspmVsLsCspm}
\label{fig:speedupObjDecLsAccCspmVsLsCspmQPS}
\end{figure}
For the bisection scheme and QPS problems, there is no difference in the rate
of objective decrease between accelerated and basic version. Therefore, for
certain problem types, the accelerated level set scheme using CSPM has a great
potential to speed up the rate of objective decrease and only causes a very
moderate increase in the number of projections required by the algorithm (only
about 1.5 times as many for 17 problems).
\subsection{Are the superiorized versions a good choice for the level set
scheme?}
The quality scores in Tables \ref{table:SummaryQualityScoresAlgorithms_QPS}
and \ref{table:SummaryQualityScoresAlgorithms_IMRT} seem to indicate that the
superiorized versions outperform their basic variants in both schemes. However,
for the QPS problems, the t-tests for paired two sample mean comparisons
showed that there is, in fact, no statistically significant difference in the
means (p-values for two-tailed tests were 0.148 for the level set scheme
comparison between ls\_acc\_cspm and ls\_acc\_sup\_cspm and 0.292 for the
bisection scheme comparison between bis\_art3+ and bis\_sup\_art3+).
Nevertheless, for our test cases, the superiorized versions in the level set
scheme outperformed their basic counterparts in more cases: the superiorized
version ls\_acc\_sup\_cspm produced a quality score at least as good as the
basic version ls\_acc\_cspm in about 68\% of all cases, and in 64\% it
actually achieved a better score.
On the other hand, for the IMRT problems, the superiorized versions clearly
outperform the basic versions in terms of objective function scores.
Statistical tests are all significant up to a level of 0.0013 (p-values of
two-tailed tests for the mean difference being 0). This clearly shows a
promising feature of superiorized algorithms: they are very often able to
obtain better solutions, even when used in an optimization framework. The
accelerated versions of the superiorized schemes, however, did not differ in
quality from their unaccelerated versions.
However, the superiorized versions require more projections and objective
evaluations than the normal versions (see Figures
\ref{fig:SpeedupCompLsSupAccCspmVsLsAccCspmQPS} and
\ref{fig:SpeedupCompLsSupCspmVsLsCspmIMRT}). In these charts, the horizontal
axis again denotes the multiplication factor by which the number of
projections or objective function evaluations of the basic versions would have
to be multiplied to be equal to the values for the superiorized versions. An increase factor of 0 means that the indicated scheme needed as many projections as the compared method. In
many instances, the superiorized versions required more than 10 times the
number of projections over the basic variant, leading to significantly higher
computation times.
\begin{figure}
\caption{Factor of how much the complexity increases by using the
superiorized version of CSPM in the level set scheme (QPS problems). }
\label{fig:SpeedupCompLsSupAccCspmVsLsAccCspmQPS}
\end{figure}
\begin{figure}
\caption{Factor of how much the complexity increases by using the
superiorized version of CSPM in the level set scheme (IMRT problems).}
\label{fig:SpeedupCompLsSupCspmVsLsCspmIMRT}
\end{figure}
Yet, for the QPS problems, there is also some potential when it comes to the
rate of decrease of the objective function, as shown in Figure
\ref{fig:SpeedupObjDecLsSupAccCspmVsLsAccCspm}.
\begin{figure}
\caption{Factor
of speedup of the objective function decrease by using the superiorized
version of CSPM in the level set scheme for QPS problems. A factor of 0
indicates no difference, negative factors indicate that the superiorized
version exhibited a slower decrease.}
\label{fig:SpeedupObjDecLsSupAccCspmVsLsAccCspm}
\end{figure}
For IMRT problems, this speedup was only marginal: the rate of
decrease of the superiorized versions is on average only about 0.5
times faster than that of the basic versions (note that a factor of 0 would indicate that they progress at the same rate).
Hence, the superiorized version of CSPM uses many more projections and
evaluations. However, if these are cheap to compute, then the potential
increase in objective function reduction in early iterations could lead to a
faster approach overall for some problems if the user is willing to stop the
optimization prematurely for practical reasons.
\subsection{Which is the better optimization scheme?}
We compare the level set scheme using the accelerated CSPM and the bisection
scheme with CSPM, as these are the most promising candidates for each optimization scheme given the analysis above.
For the QPS problems, there is no statistically significant
difference in the objective score between the two methods (the p-value of the
two-tailed test is 0.389). However, ls\_acc\_cspm outperforms the bisection
scheme significantly for IMRT problems - even if 4 outliers of the 36 problems
were removed (those which skewed the average quality score of the bisection
scheme to the right). With a p-value of 0.007, ls\_acc\_cspm produces a better
quality score than the equivalent bisection scheme.
Moreover, as Figure \ref{fig:ComplexityBisVsLs} shows for QPS problems, on
average, the bisection scheme requires many more projections and objective
evaluations.
An intuitive explanation for these results is that in the bisection scheme one may be in an infeasibility-detecting stage several times, and in each such stage many calculations are done (which eventually lead one to conclude that infeasibility has been detected). In the level set scheme an infeasibility stage can happen only once, and in the other stages feasibility is usually detected.
\begin{figure}
\caption{Factor of how much the complexity increases for QPS problems by
using the bisection scheme over the level set scheme. `0' indicates no
difference, and negative values indicate a decrease in complexity.}
\label{fig:ComplexityBisVsLs}
\end{figure}
For IMRT problems, the increase is less pronounced but
consistent; there the increase is up to a factor of 4.
In terms of rate of objective decrease, the results are mixed. In fact, it
seems that bisection can be expected to decrease the objective a little faster
than the level set scheme. However, Figure
\ref{fig:SpeedupObjDecBisCspmVsLsAccCspm} shows that for the QPS problems this
does not happen very often.
\begin{figure}
\caption{Factor
of speedup of the objective function decrease by using the bisection scheme
over the level set scheme (QPS problems).}
\label{fig:SpeedupObjDecBisCspmVsLsAccCspm}
\end{figure}For IMRT problems, the results are similar; however, it seems that
a speedup factor of 2 occurred quite frequently.
The level set scheme seems to be the better optimization tool for IMRT
problems in terms of quality and complexity. But if one allows superiorization, then this is not always the case and the differences might be very minor; compare, for example, ``ls\_sup\_cspm'' and ``bis\_sup\_cspm'' in Tables \ref{table:SummaryQualityScoresAlgorithms_QPS} and \ref{table:SummaryQualityScoresAlgorithms_IMRT}.
Overall, although both strategies are able to obtain similar qualities in the solutions for the QPS
problems, there is a clear advantage of the level set scheme over the
bisection scheme when it comes to complexity.
\section{Concluding remarks and further research}
\label{sec:conclusion}
Projection methods are known for their computational efficiency and
simplicity. This is the reason we decided to use a well-known reformulation of
a convex optimization problem and apply projection methods within that
general scheme. While at this point the convergence proof for the scheme is
valid only when finite convergence algorithms are used, numerical experiments
show that asymptotically convergent algorithms also generate good solutions when the
stopping rule is chosen in an educated way. We believe that the mathematical
justification of this behavior is related to Remark \ref{remark:conv}; it is still under investigation.
Another direction we plan to investigate is based on the results of Yamagishi and Yamada
\cite{yy08}, which show how to replace the subgradient projections by
a more efficient projection when additional knowledge, such as lower bounds, is
provided. In addition, we plan to study acceleration techniques for projection
methods, for example the recent works of Pang \cite{Pang12} and \cite{Pang13}. Another direction for investigation is the use of other types of projection methods, for example Bregman projections; see, e.g., \cite{CZ97}.
\end{document}
\begin{document}
\title{Motivic zeta functions of hyperplane arrangements}
\begin{abstract}
For each central essential hyperplane arrangement $\mathcal{A}$ over an algebraically closed field, let $Z_\mathcal{A}^{\hat\mu}(T)$ denote the Denef-Loeser motivic zeta function of $\mathcal{A}$. We prove a formula expressing $Z_\mathcal{A}^{\hat\mu}(T)$ in terms of the Milnor fibers of related hyperplane arrangements. We use this formula to show that the map taking each complex arrangement $\mathcal{A}$ to the Hodge-Deligne specialization of $Z_{\mathcal{A}}^{\hat\mu}(T)$ is locally constant on the realization space of any loop-free matroid. We also prove a combinatorial formula expressing the motivic Igusa zeta function of $\mathcal{A}$ in terms of the characteristic polynomials of related arrangements.
\end{abstract}
\numberwithin{theorem}{section}
\numberwithin{lemma}{section}
\numberwithin{corollary}{section}
\numberwithin{proposition}{section}
\numberwithin{remark}{section}
\numberwithin{definition}{section}
\section{Introduction}
We study hyperplane arrangements and the motivic zeta functions of Denef and Loeser. Let $k$ be an algebraically closed field, and let $H_1, \dots, H_n$ be a central essential arrangement of hyperplanes in $\mathbb{A}_k^d$. If $f_1, \dots, f_n$ are linear forms defining $H_1, \dots, H_n$, respectively, then we can consider the Denef-Loeser motivic zeta function $Z_f^{\hat\mu}(T)$ of $f=f_1 \cdots f_n$ and the motivic Igusa zeta function $Z_f^{\mathrm{naive}}(T)$ of $f$.
Inspired by Kontsevich's theory of motivic integration \cite{Kontsevich}, Denef and Loeser defined zeta functions \cite{DenefLoeser1998, DenefLoeser2001, DenefLoeser2002} that are power series with coefficients in a Grothendieck ring of varieties. These zeta functions are related to multiple well-known invariants in singularity theory and birational geometry, and they have implications for Igusa's monodromy conjecture, a longstanding conjecture concerning the poles of Igusa's local zeta function. There has been interest in understanding these motivic zeta functions, and the closely related topological zeta function, in the case of polynomials defining hyperplane arrangements \cite{BudurSaitoYuzvinsky, BudurMustataTeitler, vanderVeer}.
In this paper, we prove a formula for $Z_f^{\hat\mu}(T)$ in terms of the classes of Milnor fibers of certain related hyperplane arrangements. We use this formula and a result in \cite{KutlerUsatine} to show that certain specializations of $Z_f^{\hat\mu}(T)$, including the Hodge-Deligne specialization, remain constant as we vary the arrangement $H_1, \dots, H_n$ within the same connected component of a matroid's realization space. We also prove a combinatorial formula for $Z_f^{\mathrm{naive}}(T)$ in terms of the characteristic polynomials of certain related matroids.
\subsection{Statements of main results}
Throughout this paper, $k$ will be an algebraically closed field. Before we state our results, we need to set some notation.
For each $n \in \mathbb{Z}_{>0}$, let $\mu_n \subset k^\times$ be the group of $n$-th roots of unity, let $K_0^{\mu_n}(\mathbf{Var}_k)$ be the $\mu_n$-equivariant Grothendieck ring of $k$-varieties, let $\mathbb{L} \in K_0^{\mu_n}(\mathbf{Var}_k)$ be the class of $\mathbb{A}_k^1$ with the trivial $\mu_n$-action, and let $\mathscr{M}_k^{\mu_n} = K_0^{\mu_n}(\mathbf{Var}_k)[\mathbb{L}^{-1}]$. Let $\mathscr{M}_k^{\hat\mu}=\varinjlim_n \mathscr{M}_k^{\mu_n}$, and let $\mathbb{L} \in \mathscr{M}_k^{\hat\mu}$ be the image of $\mathbb{L} \in \mathscr{M}_k^{\mu_n}$ for any $n$.
Let $d, n \in \mathbb{Z}_{>0}$, and let $\mathrm{Gr}_{d,n}$ be the Grassmannian of $d$-dimensional linear subspaces in $\mathbb{A}_k^n = \Spec(k[x_1, \dots, x_n])$. For each $\mathcal{A} \in \mathrm{Gr}_{d,n}(k)$, let $X_\mathcal{A}$ denote the corresponding linear subspace, let $F_\mathcal{A}$ be the scheme theoretic intersection of $X_\mathcal{A}$ with the closed subscheme of $\mathbb{A}_k^n$ defined by $(x_1 \cdots x_n -1)$, and endow $F_\mathcal{A}$ with the restriction of the $\mu_n$-action on $\mathbb{A}_k^n$ where each $\xi \in \mu_n$ acts by scalar multiplication. Let $Z_{\mathcal{A},k}^{\hat\mu}(T) \in \mathscr{M}_k^{\hat\mu} \llbracket T \rrbracket$ be the Denef-Loeser motivic zeta function of $(x_1 \cdots x_n)|_{X_\mathcal{A}}$, and let $Z_{\mathcal{A}, 0}^{\hat\mu}(T) \in \mathscr{M}_k^{\hat\mu} \llbracket T \rrbracket$ be the Denef-Loeser motivic zeta function of $(x_1 \cdots x_n)|_{X_\mathcal{A}}$ at the origin of $\mathbb{A}_k^n$.
If $X_\mathcal{A}$ is not contained in a coordinate hyperplane of $\mathbb{A}_k^n$, then the restrictions of the coordinates $x_i$ define a central essential hyperplane arrangement in $X_\mathcal{A}$, the Milnor fiber of that hyperplane arrangement is $F_\mathcal{A}$, the $\mu_n$-action on $F_\mathcal{A}$ is the monodromy action, and $Z_{\mathcal{A},k}^{\hat\mu}(T)$ and $Z_{\mathcal{A},0}^{\hat\mu}(T)$ are the Denef-Loeser motivic zeta functions associated to that arrangement. Note that we are using a definition of the Milnor fiber that takes advantage of the fact that a hyperplane arrangement is defined by a homogeneous polynomial. This definition is common in the hyperplane arrangement literature, and it allows us to consider the Milnor fiber $F_\mathcal{A}$ as a variety.
\begin{remark}
If $H_1, \dots, H_n$ is a central essential hyperplane arrangement in $\mathbb{A}_k^d$, then any choice of linear forms defining $H_1, \dots, H_n$ gives a linear embedding of $\mathbb{A}_k^d$ into $\mathbb{A}_k^n$, and $H_1, \dots, H_n$ is the arrangement associated to the resulting subspace of $\mathbb{A}_k^n$. Therefore, we lose no generality by considering the arrangements associated to $d$-dimensional linear subspaces in $\mathbb{A}_k^n$.
\end{remark}
Let $\mathcal{M}$ be a rank $d$ loop-free matroid on $\{1, \dots, n\}$, let $\Trop(\mathcal{M}) \subset \mathbb{R}^n$ be the Bergman fan of $\mathcal{M}$, and let $\mathrm{Gr}_\mathcal{M} \subset \mathrm{Gr}_{d,n}$ be the locus parametrizing linear subspaces whose associated hyperplane arrangements have combinatorial type $\mathcal{M}$. For any $w \in \Trop(\mathcal{M})$, there exists a rank $d$ loop-free matroid $\mathcal{M}_w$ on $\{1, \dots, n\}$ such that for all $\mathcal{A} \in \mathrm{Gr}_\mathcal{M}(k)$, the initial degeneration $\init_w (X_\mathcal{A} \cap \mathbb{G}_{m,k}^n)$ is equal to $X_{\mathcal{A}_w} \cap \mathbb{G}_{m,k}^n$ for some unique $\mathcal{A}_w \in \mathrm{Gr}_{\mathcal{M}_w}(k)$. We refer to Section \ref*{linearsubspacesandmatroidssubsection} for the definition of $\mathcal{M}_w$. Let $\mathcal{B}(\mathcal{M})$ be the set of bases in $\mathcal{M}$, and set
\[
\wt_{\mathcal{M}}: \mathbb{R}^n \to \mathbb{R}: (w_1, \dots, w_n) \mapsto \max_{B \in \mathcal{B}(\mathcal{M})} \sum_{i \in B} w_i.
\]
In this paper, we will prove the following formulas that express the motivic zeta functions $Z^{\hat\mu}_{\mathcal{A}, k}(T)$ and $Z^{\hat\mu}_{\mathcal{A},0}(T)$ in terms of classes of the Milnor fibers $F_{\mathcal{A}_w}$.
\begin{theorem}
\label{hyperplanearrangementDLzetaformula}
Let $\mathcal{A} \in \mathrm{Gr}_\mathcal{M}(k)$. Then
\[
Z^{\hat\mu}_{\mathcal{A}, k}(T) = \sum_{w \in \Trop(\mathcal{M}) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\})} [F_{\mathcal{A}_w}, \hat\mu] \mathbb{L}^{-d-\wt_\mathcal{M}(w)}(T, \dots, T)^w \in \mathscr{M}_k^{\hat\mu}\llbracket T \rrbracket,
\]
and
\[
Z^{\hat\mu}_{\mathcal{A},0}(T) = \sum_{w \in \Trop(\mathcal{M}) \cap \mathbb{Z}_{> 0}^n} [F_{\mathcal{A}_w}, \hat\mu] \mathbb{L}^{-d-\wt_\mathcal{M}(w)}(T, \dots, T)^w \in \mathscr{M}_k^{\hat\mu}\llbracket T \rrbracket.
\]
\end{theorem}
In the course of proving \autoref*{hyperplanearrangementDLzetaformula}, we prove \autoref*{zetafunctionschon} and \autoref*{tropicalzetaformulahomogeneous}, which give formulas for motivic zeta functions when certain tropical hypotheses are satisfied. We think of \autoref*{zetafunctionschon} and \autoref*{tropicalzetaformulahomogeneous} as being in the spirit of the formulas for zeta functions of so-called Newton non-degenerate hypersurfaces \cite{DenefHoornaert, Guibert, BoriesVeys, BultotNicaise}. To prove \autoref*{zetafunctionschon} and \autoref*{tropicalzetaformulahomogeneous}, we use certain $k \llbracket \pi \rrbracket$-schemes whose special fibers are the initial degenerations that arise in tropical geometry. These $k \llbracket \pi \rrbracket$-schemes have played an essential role in much of tropical geometry. See for example \cite{Gubler}. We also use Sebag's \cite{Sebag} theory of motivic integration for Greenberg schemes, which are non-constant coefficient versions of arc schemes. For our proofs to account for the $\hat\mu$-action, we use Hartmann's \cite{Hartmann} equivariant version of Sebag's motivic integration.
\autoref*{hyperplanearrangementDLzetaformula} allows us to use results about additive invariants of the Milnor fibers $F_{\mathcal{A}_w}$ to obtain results about specializations of the Denef-Loeser motivic zeta functions. To state such an application, we first define some terminology that can apply to additive invariants. Let $\mathbb{Z}[\mathbb{L}]$ be the polynomial ring over $\mathbb{Z}$ in the symbol $\mathbb{L}$, and endow $\mathscr{M}_k^{\hat\mu}$ with the $\mathbb{Z}[\mathbb{L}]$-algebra structure given by $\mathbb{L} \mapsto \mathbb{L}$.
\begin{definition}
Let $P$ be a $\mathbb{Z}[\mathbb{L}]$-module, and let $\nu: \mathscr{M}_k^{\hat\mu} \to P$ be a $\mathbb{Z}[\mathbb{L}]$-module morphism. We say that $\nu$ is \emph{constant on smooth projective families with $\mu_n$-action} if the following always holds.
\begin{itemize}
\item If $S$ is a connected separated finite type $k$-scheme with trivial $\mu_n$-action and $X \to S$ is a $\mu_n$-equivariant smooth projective morphism from a scheme $X$ with $\mu_n$-action, then the map $S(k) \to P: s \mapsto \nu[X_s, \hat\mu]$ is constant, where $X_s$ denotes the fiber of $X \to S$ over $s$.
\end{itemize}
\end{definition}
\begin{remark}
If $k = \mathbb{C}$ and $\mathrm{HD}: \mathscr{M}_k^{\hat\mu} \to \mathbb{Z}[u^{\pm 1},v^{\pm 1}]$ is the morphism that sends the class of each variety to its Hodge-Deligne polynomial, then $\mathrm{HD}$ is constant on smooth projective families with $\mu_n$-action.
\end{remark}
Note that if $w \in \Trop(\mathcal{M})$ and $\mathcal{A}_1, \mathcal{A}_2 \in \mathrm{Gr}_{\mathcal{M}}(k)$ are in the same connected component of $\mathrm{Gr}_\mathcal{M}$, then $(\mathcal{A}_1)_w, (\mathcal{A}_2)_w \in \mathrm{Gr}_{\mathcal{M}_w}(k)$ are in the same connected component of $\mathrm{Gr}_{\mathcal{M}_w}$. See for example \cite[Fact 2.4]{KutlerUsatine}. Therefore the following theorem is a direct consequence of \autoref*{hyperplanearrangementDLzetaformula} and \cite[Theorem 1.4]{KutlerUsatine}.
\begin{theorem}
\label{specializationinvarianceinconnectedcomponent}
Let $P$ be a torsion-free $\mathbb{Z}[\mathbb{L}]$-module, let $\nu: \mathscr{M}_k^{\hat\mu} \to P$ be a $\mathbb{Z}[\mathbb{L}]$-module morphism that is constant on smooth projective families with $\mu_n$-action, and assume that the characteristic of $k$ does not divide $n$.
If $\mathcal{A}_1, \mathcal{A}_2 \in \mathrm{Gr}_\mathcal{M}(k)$ are in the same connected component of $\mathrm{Gr}_\mathcal{M}$, then
\[
\nu(Z_{\mathcal{A}_1, k}^{\hat\mu}(T)) = \nu(Z_{\mathcal{A}_2, k}^{\hat\mu}(T)) \in P\llbracket T \rrbracket,
\]
and
\[
\nu(Z_{\mathcal{A}_1, 0}^{\hat\mu}(T)) = \nu(Z_{\mathcal{A}_2, 0}^{\hat\mu}(T)) \in P\llbracket T \rrbracket.
\]
\end{theorem}
\begin{remark}
In the statement of \autoref*{specializationinvarianceinconnectedcomponent}, by $\nu$ applied to a power series, we mean the power series obtained by applying $\nu$ to each coefficient.
\end{remark}
In particular, \autoref*{specializationinvarianceinconnectedcomponent} implies that the Hodge-Deligne specialization of the Denef-Loeser motivic zeta function remains constant as we vary the linear subspace within the same connected component of $\mathrm{Gr}_\mathcal{M}$. There has been much interest in understanding how invariants of hyperplane arrangements, particularly those invariants arising in singularity theory, vary as the arrangements vary with fixed combinatorial type. For example, a major open conjecture predicts that when $k = \mathbb{C}$, the Betti numbers of a hyperplane arrangement's Milnor fiber depend only on combinatorial type, i.e., they depend only on the matroid. Budur and Saito proved that a related invariant, the Hodge spectrum, depends only on the combinatorial type \cite{BudurSaito}. Randell proved that the diffeomorphism type, and thus Betti numbers, of the Milnor fiber is constant in smooth families of hyperplane arrangements with fixed combinatorial type \cite{Randell}. See \cite{Suciu} for a survey on such questions. Our perspective on \autoref*{specializationinvarianceinconnectedcomponent} is in the context of that literature, and we hope it illustrates the use of \autoref*{hyperplanearrangementDLzetaformula} in answering related questions.
Our final main result consists of combinatorial formulas for the motivic Igusa zeta functions of a hyperplane arrangement. It is well known that the motivic Igusa zeta functions are combinatorial invariants. For example, one can see this by using De Concini and Procesi's wonderful models \cite{DeConciniProcesi} and Denef and Loeser's formula for the motivic Igusa zeta function in terms of a log resolution \cite[Corollary 3.3.2]{DenefLoeser2001}. Regardless, we believe it is worth stating \autoref*{hyperplanearrangementIgusazetaformula} below, as it follows from the methods of this paper with little extra effort, and because we are not aware of these particular formulas having appeared in the literature.
Let $K_0(\mathbf{Var}_k)$ be the Grothendieck ring of $k$-varieties, let $\mathbb{L} \in K_0(\mathbf{Var}_k)$ be the class of $\mathbb{A}_k^1$, and let $\mathscr{M}_k = K_0(\mathbf{Var}_k)[\mathbb{L}^{-1}]$. For each $\mathcal{A} \in \mathrm{Gr}_{d,n}(k)$, let $Z_{\mathcal{A},k}^{\mathrm{naive}}(T) \in \mathscr{M}_k \llbracket T \rrbracket$ be the motivic Igusa zeta function of $(x_1 \cdots x_n)|_{X_\mathcal{A}}$, and let $Z_{\mathcal{A}, 0}^{\mathrm{naive}}(T) \in \mathscr{M}_k \llbracket T \rrbracket$ be the motivic Igusa zeta function of $(x_1 \cdots x_n)|_{X_\mathcal{A}}$ at the origin of $\mathbb{A}_k^n$.
\begin{theorem}
\label{hyperplanearrangementIgusazetaformula}
Let $\mathcal{A} \in \mathrm{Gr}_\mathcal{M}(k)$. Then
\[
Z^{\mathrm{naive}}_{\mathcal{A},k}(T) = \sum_{w \in \Trop(\mathcal{M}) \cap \mathbb{Z}_{\geq 0}^n} \chi_{\mathcal{M}_w}(\mathbb{L}) \mathbb{L}^{-d-\wt_\mathcal{M}(w)}(T, \dots, T)^w \in \mathscr{M}_k\llbracket T \rrbracket,
\]
and
\[
Z^{\mathrm{naive}}_{\mathcal{A},0}(T) = \sum_{w \in \Trop(\mathcal{M}) \cap \mathbb{Z}_{> 0}^n} \chi_{\mathcal{M}_w}(\mathbb{L}) \mathbb{L}^{-d-\wt_\mathcal{M}(w)}(T, \dots, T)^w \in \mathscr{M}_k\llbracket T \rrbracket,
\]
where $\chi_{\mathcal{M}_w}(\mathbb{L}) \in \mathscr{M}_k$ is the characteristic polynomial of $\mathcal{M}_w$ evaluated at $\mathbb{L}$.
\end{theorem}
\section{Preliminaries}
\label{preliminariessection}
In this section, we will set some notation and recall facts about the equivariant Grothendieck ring of varieties, the motivic zeta functions of Denef and Loeser, Hartmann's equivariant motivic integration, and linear subspaces and matroids.
\subsection{The equivariant Grothendieck ring of varieties}
Suppose $X$ is a separated finite type scheme over $k$. We will let $K_0(\mathbf{Var}_X)$ denote the Grothendieck ring of varieties over $X$, we will let $\mathbb{L} \in K_0(\mathbf{Var}_X)$ denote the class of $\mathbb{A}_k^1 \times_k X$, and for each separated finite type $X$-scheme $Y$, we will let $[Y/X] \in K_0(\mathbf{Var}_X)$ denote the class of $Y$. We will let $\mathscr{M}_X$ denote the ring obtained by inverting $\mathbb{L}$ in $K_0(\mathbf{Var}_X)$, and by slight abuse of notation, we will let $\mathbb{L}, [Y/X] \in \mathscr{M}_X$ denote the images of $\mathbb{L}, [Y/X]$, respectively, in $\mathscr{M}_X$.
We will let $K_0(\mathbf{Var}_k)$ and $\mathscr{M}_k$ denote $K_0(\mathbf{Var}_{\Spec(k)})$ and $\mathscr{M}_{\Spec(k)}$, respectively, and for each separated finite type $k$-scheme $Y$, we will let $[Y] = [Y/\Spec(k)]$ in both $K_0(\mathbf{Var}_k)$ and $\mathscr{M}_k$.
Suppose $G$ is a finite abelian group. An action of $G$ on a scheme is said to be \emph{good} if each orbit is contained in an affine open subscheme. For example, any $G$-action on any quasiprojective $k$-scheme is good. Suppose $X$ is a separated finite type $k$-scheme with a good $G$-action. We will let $K_0^G(\mathbf{Var}_X)$ denote the $G$-equivariant Grothendieck ring of varieties over $X$. For the precise definition of $K_0^G(\mathbf{Var}_X)$, we refer to \cite[Definition 4.1]{Hartmann}. We will let $\mathbb{L} \in K_0^G(\mathbf{Var}_X)$ denote the class of $\mathbb{A}_k^1 \times_k X$ with the action induced by the trivial $G$-action on $\mathbb{A}_k^1$ and the given $G$-action on $X$, and for each separated finite type $X$-scheme $Y$ with good $G$-action making the structure morphism $G$-equivariant, we will let $[Y/X,G] \in K_0^G(\mathbf{Var}_X)$ denote the class of $Y$ with its given $G$-action. We will let $\mathscr{M}_X^G$ denote the ring obtained by inverting $\mathbb{L}$ in $K_0^G(\mathbf{Var}_X)$, and by slight abuse of notation, we will let $\mathbb{L}, [Y/X,G] \in \mathscr{M}_X^G$ denote the images of $\mathbb{L}, [Y/X, G]$, respectively, in $\mathscr{M}_X^G$.
If $X$ is a separated finite type $k$-scheme with no specified $G$-action and we refer to $K_0^G(\mathbf{Var}_X)$ or $\mathscr{M}_X^G$, then we are considering $X$ with the trivial $G$-action. We will let $K_0^G(\mathbf{Var}_k)$ and $\mathscr{M}_k^G$ denote $K_0^G(\mathbf{Var}_{\Spec(k)})$ and $\mathscr{M}_{\Spec(k)}^G$, respectively, and for each separated finite type $k$-scheme $Y$ with good $G$-action making the structure morphism $G$-equivariant, we will let $[Y, G] = [Y/\Spec(k), G]$ in both $K_0^G(\mathbf{Var}_k)$ and $\mathscr{M}_k^G$.
For each $\ell \in \mathbb{Z}_{>0}$, we will let $\mu_\ell \subset k^\times$ denote the group of $\ell$-th roots of unity.
\begin{remark}
We will only consider $\mu_\ell$ as a finite group, so when the characteristic of $k$ divides $\ell$, we will not consider the non-reduced scheme structure of $\mu_\ell$.
\end{remark}
For each $\ell, m \in \mathbb{Z}_{>0}$, there is a morphism $\mu_{\ell m} \to \mu_\ell: \xi \mapsto \xi^m$. Suppose that $X$ is a separated finite type scheme over $k$. Then for each $\ell, m \in \mathbb{Z}_{>0}$, the morphism $\mu_{\ell m} \to \mu_\ell$ induces ring morphisms $K_0^{\mu_\ell}(\mathbf{Var}_X) \to K_0^{\mu_{\ell m}}(\mathbf{Var}_X)$ and $\mathscr{M}_X^{\mu_\ell} \to \mathscr{M}_X^{\mu_{\ell m}}$. We will let $K_0^{\hat\mu}(\mathbf{Var}_X) = \varinjlim_{\ell}K_0^{\mu_\ell}(\mathbf{Var}_X)$ and $\mathscr{M}_X^{\hat\mu} = \varinjlim_\ell \mathscr{M}_X^{\mu_\ell}$. We will let $\mathbb{L} \in K_0^{\hat\mu}(\mathbf{Var}_X)$ denote the image of $\mathbb{L} \in K_0^{\mu_\ell}(\mathbf{Var}_X)$ for any $\ell \in \mathbb{Z}_{>0}$, and similarly we will let $\mathbb{L} \in \mathscr{M}_X^{\hat\mu}$ denote the image of $\mathbb{L} \in \mathscr{M}_X^{\mu_\ell}$ for any $\ell \in \mathbb{Z}_{>0}$. For each $\ell \in \mathbb{Z}_{>0}$ and each separated finite type $X$-scheme $Y$ with good $\mu_\ell$-action making the structure morphism $\mu_\ell$-equivariant, we will let $[Y/X, \hat\mu] \in K_0^{\hat\mu}(\mathbf{Var}_X)$ denote the image of $[Y/X, \mu_\ell] \in K_0^{\mu_\ell}(\mathbf{Var}_X)$, and we will similarly let $[Y/X, \hat\mu] \in \mathscr{M}_X^{\hat\mu}$ denote the image of $[Y/X, \mu_\ell] \in \mathscr{M}_X^{\mu_\ell}$.
We will let $K_0^{\hat\mu}(\mathbf{Var}_k)$ and $\mathscr{M}_k^{\hat\mu}$ denote $K_0^{\hat\mu}(\mathbf{Var}_{\Spec(k)})$ and $\mathscr{M}_{\Spec(k)}^{\hat\mu}$, respectively, and for each $\ell \in \mathbb{Z}_{>0}$ and each separated finite type $k$-scheme $Y$ with good $\mu_\ell$-action making the structure morphism $\mu_\ell$-equivariant, we will let $[Y, \hat\mu] = [Y/\Spec(k), \hat\mu]$ in both $K_0^{\hat\mu}(\mathbf{Var}_k)$ and $\mathscr{M}_k^{\hat\mu}$.
\subsection{The motivic zeta functions of Denef and Loeser}
Let $X$ be a smooth, pure dimensional, separated, finite type $k$-scheme. For each $\ell \in \mathbb{Z}_{\geq 0}$, we will let $\mathscr{L}_\ell(X)$ denote the $\ell$-th jet scheme of $X$, and for each $m \geq \ell$, we will let $\theta^m_\ell: \mathscr{L}_m(X) \to \mathscr{L}_\ell(X)$ denote the truncation morphism. We will let $\mathscr{L}(X) = \varprojlim_\ell \mathscr{L}_\ell(X)$ denote the arc scheme of $X$, and for each $\ell \in \mathbb{Z}_{\geq 0}$, we will let $\theta_\ell: \mathscr{L}(X) \to \mathscr{L}_\ell(X)$ denote the canonical morphism. The following is a special case of a theorem of Bhatt's \cite[Theorem 1.1]{Bhatt}.
\begin{theorem}[Bhatt]
The $k$-scheme $\mathscr{L}(X)$ represents the functor taking each $k$-algebra $A$ to $\Hom_k(\Spec(A\llbracket \pi \rrbracket), X)$, and under this identification, each morphism $\theta_\ell: \mathscr{L}(X) \to \mathscr{L}_\ell(X)$ is the truncation morphism.
\end{theorem}
A subset of $\mathscr{L}(X)$ is called a \emph{cylinder} if it is the preimage, under $\theta_\ell$, of a constructible subset of $\mathscr{L}_\ell(X)$ for some $\ell \in \mathbb{Z}_{\geq 0}$. We will let $\mu_X$ denote the motivic measure on $\mathscr{L}(X)$, which assigns a motivic volume in $\mathscr{M}_X$ to each cylinder.
Suppose $f$ is a regular function on $X$. If $x \in \mathscr{L}(X)$ has residue field $k(x)$, then it corresponds to a $k$-morphism $\psi_x: \Spec(k(x)\llbracket \pi \rrbracket) \to X$, and we will let $f(x)$ denote $f(\psi_x) \in k(x)\llbracket \pi \rrbracket$. For each $x \in \mathscr{L}(X)$, the \emph{order} of $f$ at $x$ will refer to the order of $\pi$ in the power series $f(x)$, and the \emph{angular component} of $f$ at $x$ will refer to the leading coefficient of the power series $f(x)$. We will let $\ord_f: \mathscr{L}(X) \to \mathbb{Z}_{\geq 0} \cup \{\infty\}$ denote the function taking each $x \in \mathscr{L}(X)$ to the order of $f$ at $x$. We will let $Z^\mathrm{naive}_f(T) \in \mathscr{M}_X\llbracket T\rrbracket$ denote the motivic Igusa zeta function of $f$. Then
\[
Z^\mathrm{naive}_f(T) = \sum_{\ell \in \mathbb{Z}_{\geq 0}} \mu_X(\ord_f^{-1}(\ell)) T^\ell \in \mathscr{M}_X \llbracket T \rrbracket.
\]
\begin{remark}
In the literature, the motivic Igusa zeta function is sometimes referred to as the naive zeta function of Denef and Loeser.
\end{remark}
We will let $Z_f^{\hat\mu}(T) \in \mathscr{M}_X^{\hat\mu}\llbracket T\rrbracket$ denote the Denef-Loeser motivic zeta function of $f$. We briefly recall the definition of $Z_f^{\hat\mu}(T)$. The constant term of $Z_f^{\hat\mu}(T)$ is equal to $0$. Let $\ell \in \mathbb{Z}_{>0}$, and let $Y_{\ell,1}$ be the closed subscheme of $\mathscr{L}_\ell(X)$ where $f$ is equal to $\pi^\ell$. For each $k$-algebra $A$, there is a $\mu_\ell$-action on $A\llbracket \pi \rrbracket$ given by $\pi \mapsto \xi \pi$ for each $\xi \in \mu_\ell$, and these actions induce a $\mu_\ell$-action on $\mathscr{L}_\ell(X)$ making $Y_{\ell, 1}$ invariant. Note also that the truncation morphism $\theta^\ell_0: \mathscr{L}_\ell(X) \to X$ restricts to a $\mu_\ell$-equivariant morphism $Y_{\ell, 1} \to X$. Then the coefficient of $T^\ell$ in $Z_f^{\hat\mu}(T)$ is defined to be equal to $[Y_{\ell, 1}/X, \hat\mu]\mathbb{L}^{-(\ell+1)\dim X} \in \mathscr{M}_X^{\hat\mu}$.
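For instance, if $X = \mathbb{A}_k^1 = \Spec(k[x])$ and $f = x$, then for each $\ell \in \mathbb{Z}_{>0}$ the scheme $Y_{\ell,1}$ consists of the single jet $x = \pi^\ell$, which lies over the origin and on which the $\mu_\ell$-action is trivial; thus, after pushing forward to $\mathscr{M}_k^{\hat\mu}$, the coefficient of $T^\ell$ in $Z_x^{\hat\mu}(T)$ is $\mathbb{L}^{-(\ell+1)}$.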
\begin{remark}
Denef and Loeser defined versions of these zeta functions with coefficients in $\mathscr{M}_k$ and $\mathscr{M}^{\hat\mu}_k$ \cite{DenefLoeser1998, DenefLoeser2002}, and Looijenga introduced versions with coefficients in the relative Grothendieck rings $\mathscr{M}_X$ and $\mathscr{M}_X^{\hat\mu}$ \cite{Looijenga}. See \cite{DenefLoeser2001} for the definitions we are using for $Z_f^{\mathrm{naive}}(T)$ and $Z_f^{\hat\mu}(T)$, but note that compared to those definitions, ours differ by a normalization factor of $\mathbb{L}^{-\dim X}$.
\end{remark}
\subsection{Hartmann's equivariant motivic integration}
For the remainder of this paper, let $R = k\llbracket \pi \rrbracket$, the ring of power series over $k$. We will set up some notation and recall facts for Greenberg schemes and Hartmann's equivariant motivic integration \cite{Hartmann}, which is an equivariant version of Sebag's motivic integration for formal schemes \cite{Sebag}. For the non-equivariant version of this theory, we also recommend the book \cite{ChambertLoirNicaiseSebag}.
\begin{remark}
In \cite{Hartmann}, Hartmann uses formal $R$-schemes. The analogous theory for algebraic $R$-schemes, as stated here, directly follows by taking $\pi$-adic completion.
\end{remark}
Let $\mathfrak{X}$ be a smooth, pure relative dimensional, separated, finite type $R$-scheme. We will let $\mathfrak{X}_0$ denote the special fiber of $\mathfrak{X}$. For each $\ell \in \mathbb{Z}_{\geq 0}$, we will let $\mathscr{G}_\ell(\mathfrak{X})$ denote the $\ell$-th Greenberg scheme of $\mathfrak{X}$. Thus $\mathscr{G}_\ell(\mathfrak{X})$ represents the functor taking each $k$-algebra $A$ to $\Hom_R(\Spec(A[\pi]/(\pi^{\ell+1})), \mathfrak{X})$. For each $m \geq \ell$, we will let $\theta^m_\ell: \mathscr{G}_m(\mathfrak{X}) \to \mathscr{G}_\ell(\mathfrak{X})$ denote the truncation morphism. We will let $\mathscr{G}(\mathfrak{X}) = \varprojlim_\ell \mathscr{G}_\ell(\mathfrak{X})$ denote the Greenberg scheme of $\mathfrak{X}$, and for each $\ell \in \mathbb{Z}_{\geq 0}$, we will let $\theta_\ell: \mathscr{G}(\mathfrak{X}) \to \mathscr{G}_\ell(\mathfrak{X})$ denote the canonical morphism. As for arc schemes, the following is a special case of \cite[Theorem 1.1]{Bhatt}. See for example \cite[Chapter 4, Proposition 3.1.7]{ChambertLoirNicaiseSebag}.
\begin{theorem}[Bhatt]
The $k$-scheme $\mathscr{G}(\mathfrak{X})$ represents the functor taking each $k$-algebra $A$ to $\Hom_R(\Spec(A\llbracket \pi \rrbracket), \mathfrak{X})$, and under this identification, each morphism $\theta_\ell: \mathscr{G}(\mathfrak{X}) \to \mathscr{G}_\ell(\mathfrak{X})$ is the truncation morphism.
\end{theorem}
A subset of $\mathscr{G}(\mathfrak{X})$ is called a \emph{cylinder} if it is the preimage, under $\theta_\ell$, of a constructible subset of $\mathscr{G}_\ell(\mathfrak{X})$ for some $\ell \in \mathbb{Z}_{\geq 0}$. We will let $\mu_\mathfrak{X}$ denote the motivic measure on $\mathscr{G}(\mathfrak{X})$, which assigns a motivic volume in $\mathscr{M}_{\mathfrak{X}_0}$ to each cylinder.
Suppose $f$ is a regular function on $\mathfrak{X}$. If $x \in \mathscr{G}(\mathfrak{X})$ has residue field $k(x)$, then it corresponds to an $R$-morphism $\psi_x: \Spec(k(x)\llbracket \pi \rrbracket) \to \mathfrak{X}$, and we will let $f(x)$ denote $f(\psi_x) \in k(x)\llbracket \pi \rrbracket$. As for arc schemes, this is used to define the \emph{order} and \emph{angular component} of $f$ at $x$ and the order function $\ord_f: \mathscr{G}(\mathfrak{X}) \to \mathbb{Z}_{\geq 0} \cup \{\infty\}$.
Now suppose $G$ is a finite abelian group acting on $R$, and suppose that each element of $G$ acts on $R$ by a $\pi$-adically continuous $k$-algebra morphism. Endow $\mathfrak{X}$ with a good $G$-action making the structure morphism $G$-equivariant, and endow $\mathfrak{X}_0$ with the restriction of the $G$-action on $\mathfrak{X}$. The $G$-action on $\mathfrak{X}$ induces good $G$-actions on $\mathscr{G}(\mathfrak{X})$ and each $\mathscr{G}_\ell(\mathfrak{X})$. We refer to \cite[Section 3.2]{Hartmann} for the construction and properties of these $G$-actions on the Greenberg schemes. We will let $\mu_{\mathfrak{X}}^G$ denote the $G$-equivariant motivic measure on $\mathscr{G}(\mathfrak{X})$, which assigns a motivic volume in $\mathscr{M}_{\mathfrak{X}_0}^G$ to each $G$-invariant cylinder in $\mathscr{G}(\mathfrak{X})$. We refer to \cite[Section 4.2]{Hartmann} for the definition of $\mu_{\mathfrak{X}}^G$.
If $A \subset \mathscr{G}(\mathfrak{X})$ is a $G$-invariant cylinder and $\alpha: A \to \mathbb{Z}$ is a function whose fibers are $G$-invariant cylinders, then the integral of $\alpha$ is defined to be
\[
\int_A \mathbb{L}^{-\alpha} \mathrm{d}\mu_{\mathfrak{X}}^G = \sum_{\ell \in \mathbb{Z}} \mu_\mathfrak{X}^G(\alpha^{-1}(\ell)) \mathbb{L}^{-\ell} \in \mathscr{M}_{\mathfrak{X}_0}^G.
\]
\begin{remark}
By the quasi-compactness of the constructible topology, $\alpha$ takes finitely many values, so the above sum is well defined. See \cite[Chapter 6, Section 1.2]{ChambertLoirNicaiseSebag}.
\end{remark}
We now state the equivariant version of the motivic change of variables formula \cite[Theorem 4.18]{Hartmann}. If $h: \mathfrak{Y} \to \mathfrak{X}$ is a morphism of $R$-schemes, then we let $\ordjac_h: \mathscr{G}(\mathfrak{Y}) \to \mathbb{Z}_{\geq 0} \cup \{\infty\}$ denote the order function of the jacobian ideal of $h$.
\begin{theorem}[Hartmann]
Suppose $\# G$ is not divisible by the characteristic of $k$. Let $\mathfrak{X}, \mathfrak{Y}$ be smooth, pure relative dimensional, separated, finite type $R$-schemes with good $G$-action making the structure morphisms equivariant, and let $h: \mathfrak{Y} \to \mathfrak{X}$ be a $G$-equivariant morphism that induces an open immersion on generic fibers. Let $A,B$ be $G$-invariant cylinders in $\mathscr{G}(\mathfrak{X}), \mathscr{G}(\mathfrak{Y})$, respectively, such that $h$ induces a bijection $B(k') \to A(k')$ for all extensions $k'$ of $k$.
If $\alpha: A \to \mathbb{Z}$ is a function whose fibers are $G$-invariant cylinders, then $\alpha \circ \mathscr{G}(h) - \ordjac_h: B \to \mathbb{Z}$ is a function whose fibers are $G$-invariant cylinders, and
\[
\int_A \mathbb{L}^{-\alpha} \mathrm{d}\mu_{\mathfrak{X}}^G = \int_B \mathbb{L}^{-(\alpha \circ \mathscr{G}(h) + \ordjac_h)} \mathrm{d}\mu_{\mathfrak{Y}}^G \in \mathscr{M}_{\mathfrak{X}_0}^G.
\]
\end{theorem}
\begin{remark}
Hartmann stated the formula when $A = \mathscr{G}(\mathfrak{X})$ and $B = \mathscr{G}(\mathfrak{Y})$, but the same proof works when replacing $\mathscr{G}(\mathfrak{X})$ and $\mathscr{G}(\mathfrak{Y})$ with $G$-invariant cylinders. See for example the proof of the non-equivariant version in \cite{ChambertLoirNicaiseSebag}.
\end{remark}
We note that for all $\ell \in \mathbb{Z}_{>0}$, the characteristic of $k$ never divides $\# \mu_\ell$.
\subsection{Linear subspaces and matroids}
\label{linearsubspacesandmatroidssubsection}
Let $d, n \in \mathbb{Z}_{>0}$. We will let $\mathrm{Gr}_{d,n}$ denote the Grassmannian of $d$-dimensional linear subspaces in $\mathbb{A}_k^n = \Spec(k[x_1, \dots, x_n])$. We will let $\mathbb{G}_{m,k}^n = \Spec(k[x_1^{\pm 1}, \dots, x_n^{\pm 1}]) \subset \mathbb{A}_k^n$ denote the complement of the coordinate hyperplanes, and we will let $V(x_1 \cdots x_n -1)$ denote the closed subscheme of $\mathbb{A}_k^n$ defined by $(x_1 \cdots x_n -1)$. For each $\mathcal{A} \in \mathrm{Gr}_{d,n}(k)$, we will let $X_\mathcal{A} \hookrightarrow \mathbb{A}_k^n$ denote the corresponding linear subspace. If $X_\mathcal{A}$ is not contained in a coordinate hyperplane of $\mathbb{A}_k^n$, then the restrictions to $X_\mathcal{A}$ of the coordinates $x_i$ define a central essential hyperplane arrangement in $X_\mathcal{A}$. We let $U_\mathcal{A} = X_\mathcal{A} \cap \mathbb{G}_{m,k}^n$ and $F_\mathcal{A} = X_\mathcal{A} \cap V(x_1 \cdots x_n -1)$ denote this arrangement's complement and Milnor fiber, respectively, and we endow $F_\mathcal{A}$ with the restriction of the $\mu_n$-action on $\mathbb{A}_k^n$ where each $\xi \in \mu_n$ acts by scalar multiplication. In the context of tropical geometry, we will consider both $U_\mathcal{A}$ and $F_\mathcal{A}$ as closed subschemes of the algebraic torus $\mathbb{G}_{m,k}^n$. We will let $Z_{\mathcal{A}}^{\hat\mu}(T) \in \mathscr{M}^{\hat\mu}_{X_\mathcal{A}}\llbracket T \rrbracket$ and $Z_{\mathcal{A}}^{\mathrm{naive}}(T) \in \mathscr{M}_{X_\mathcal{A}} \llbracket T \rrbracket$ denote the Denef-Loeser motivic zeta function and the motivic Igusa zeta function, respectively, of the restriction of $(x_1 \cdots x_n)$ to $X_\mathcal{A}$. We will let $Z^{\hat\mu}_{\mathcal{A}, k}(T) \in \mathscr{M}_k^{\hat\mu} \llbracket T \rrbracket$ (resp. $Z^{\mathrm{naive}}_{\mathcal{A}, k}(T) \in \mathscr{M}_k \llbracket T \rrbracket$) denote the power series obtained by pushing forward each coefficient of $Z_{\mathcal{A}}^{\hat\mu}(T)$ (resp. $Z_{\mathcal{A}}^{\mathrm{naive}}(T)$) along the structure morphism of $X_\mathcal{A}$. We will let $Z^{\hat\mu}_{\mathcal{A}, 0}(T) \in \mathscr{M}_k^{\hat\mu} \llbracket T \rrbracket$ (resp. $Z^{\mathrm{naive}}_{\mathcal{A}, 0}(T) \in \mathscr{M}_k \llbracket T \rrbracket$) denote the power series obtained by pulling back each coefficient of $Z_{\mathcal{A}}^{\hat\mu}(T)$ (resp. $Z_{\mathcal{A}}^{\mathrm{naive}}(T)$) along the inclusion of the origin into $X_\mathcal{A}$.
\begin{remark}
The zeta functions $Z_{\mathcal{A},k}^{\hat\mu}(T), Z_{\mathcal{A},k}^\mathrm{naive}(T), Z_{\mathcal{A},0}^{\hat\mu}(T)$, and $Z_{\mathcal{A}, 0}^\mathrm{naive}(T)$ are as denoted in the introduction of this paper.
\end{remark}
Let $\mathcal{M}$ be a rank $d$ loop-free matroid on $\{1, \dots, n\}$. We will let $\chi_\mathcal{M}(\mathbb{L}) \in \mathscr{M}_k$ denote the characteristic polynomial of $\mathcal{M}$ evaluated at $\mathbb{L}$, so
\[
\chi_\mathcal{M}(\mathbb{L}) = \sum_{I \subset \{1, \dots, n\}} (-1)^{\# I} \mathbb{L}^{d-\rk I} \in \mathscr{M}_k,
\]
where $\rk I$ is the rank function of $\mathcal{M}$ applied to $I$. We will let $\mathcal{B}(\mathcal{M})$ denote the set of bases of $\mathcal{M}$, and we will let $\wt_\mathcal{M}: \mathbb{R}^n \to \mathbb{R}$ denote the function $(w_1, \dots, w_n) \mapsto \max_{B \in \mathcal{B}(\mathcal{M})} \sum_{i \in B} w_i$. For each $w = (w_1, \dots, w_n) \in \mathbb{R}^n$, we will set
\[
\mathcal{B}(\mathcal{M}_w) = \{ B \in \mathcal{B}(\mathcal{M}) \, | \, \sum_{i \in B} w_i = \wt_\mathcal{M}(w)\}.
\]
Then $\mathcal{B}(\mathcal{M}_w)$ is the set of bases for a rank $d$ matroid on $\{1, \dots, n\}$, and we will let $\mathcal{M}_w$ denote that matroid. We let $\Trop(\mathcal{M})$ denote the Bergman fan of $\mathcal{M}$, so
\[
\Trop(\mathcal{M}) = \{w \in \mathbb{R}^n \, | \, \text{$\mathcal{M}_w$ is loop-free}\}.
\]
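For example, if $\mathcal{M} = U_{2,3}$ is the uniform matroid of rank $2$ on $\{1,2,3\}$, whose bases are the two-element subsets and whose only circuit is $\{1,2,3\}$, then
\[
\chi_\mathcal{M}(\mathbb{L}) = \mathbb{L}^2 - 3\mathbb{L} + 2 = (\mathbb{L}-1)(\mathbb{L}-2), \qquad \wt_\mathcal{M}(w_1, w_2, w_3) = \max\{w_1 + w_2, \ w_1 + w_3, \ w_2 + w_3\},
\]
and $\Trop(\mathcal{M})$ consists of the $w \in \mathbb{R}^3$ whose smallest coordinate is attained at least twice. For instance, $\mathcal{M}_{(0,0,1)}$ has bases $\{1,3\}$ and $\{2,3\}$ and is loop-free, whereas $\mathcal{M}_{(0,1,1)}$ has the single basis $\{2,3\}$, so that $1$ is a loop and $(0,1,1) \notin \Trop(\mathcal{M})$.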
We will let $\mathrm{Gr}_\mathcal{M} \subset \mathrm{Gr}_{d,n}$ denote the locus parametrizing linear subspaces whose associated hyperplane arrangements have combinatorial type $\mathcal{M}$. For all $\mathcal{A} \in \mathrm{Gr}_\mathcal{M}(k)$, the fact that $\mathcal{M}$ is loop-free implies that $X_\mathcal{A}$ is not contained in a coordinate hyperplane. Note that if $\mathcal{A} \in \mathrm{Gr}_\mathcal{M}(k)$, then $[U_\mathcal{A}] = \chi_\mathcal{M}(\mathbb{L}) \in \mathscr{M}_k$ and
\[
\Trop(U_\mathcal{A}) = \{w \in \mathbb{R}^n \, | \, \init_w U_\mathcal{A} \neq \emptyset\} = \Trop(\mathcal{M}).
\]
For each $\mathcal{A} \in \mathrm{Gr}_\mathcal{M}(k)$ and each $w \in \Trop(\mathcal{M})$, we will let $\mathcal{A}_w \in \mathrm{Gr}_{\mathcal{M}_w}(k)$ denote the unique point such that $\init_w U_\mathcal{A} = U_{\mathcal{A}_w}$.
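Continuing the example above, if $X_\mathcal{A} = V(x_1 + x_2 + x_3) \subset \mathbb{A}_k^3$, then the associated arrangement has combinatorial type $U_{2,3}$, so $\mathcal{A} \in \mathrm{Gr}_{U_{2,3}}(k)$ and $[U_\mathcal{A}] = (\mathbb{L}-1)(\mathbb{L}-2) \in \mathscr{M}_k$. For $w = (0,0,1) \in \Trop(U_{2,3})$ we have
\[
\init_w U_\mathcal{A} = V(x_1 + x_2) \cap \mathbb{G}_{m,k}^3 = U_{\mathcal{A}_w},
\]
where $X_{\mathcal{A}_w} = V(x_1 + x_2)$, and the matroid of the arrangement on $X_{\mathcal{A}_w}$ has bases $\{1,3\}$ and $\{2,3\}$, in agreement with the description of $(U_{2,3})_w$ above.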
Before concluding the preliminaries, we recall two propositions proved in \cite{KutlerUsatine} that will be used in Section \ref*{motiviczetafunctionsprovingmainresultsection}. If $B \in \mathcal{B}(\mathcal{M})$ and $i \in \{1, \dots, n\} \setminus B$, then we will let $C(\mathcal{M}, i, B)$ denote the fundamental circuit in $\mathcal{M}$ of $i$ with respect to $B$, so $C(\mathcal{M}, i, B)$ is the unique circuit in $\mathcal{M}$ contained in $B \cup \{i\}$. For each circuit $C$ in $\mathcal{M}$ and each $\mathcal{A} \in \mathrm{Gr}_\mathcal{M}(k)$, we will let $L_C^\mathcal{A} \in k[x_1, \dots, x_n]$ denote a linear form in the ideal defining $X_\mathcal{A}$ in $\mathbb{A}_k^n$ such that the coefficient of $x_i$ in $L_C^\mathcal{A}$ is nonzero if and only if $i \in C$. Such an $L_C^\mathcal{A}$ exists and is unique up to scaling by a unit in $k$. Once and for all, we fix such an $L_C^\mathcal{A}$ for all $C$ and $\mathcal{A}$.
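In the running example, the only circuit of $U_{2,3}$ is $C = \{1,2,3\}$, so $C(U_{2,3}, i, B) = \{1,2,3\}$ for every basis $B$ and every $i \notin B$, and for $\mathcal{A}$ with $X_\mathcal{A} = V(x_1 + x_2 + x_3)$ one may take $L_C^\mathcal{A} = x_1 + x_2 + x_3$.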
\begin{proposition}[Proposition 3.6 in \cite{KutlerUsatine}]
\label{gbasisforlinearsubspace}
Let $\mathcal{A} \in \mathrm{Gr}_\mathcal{M}(k)$, let $w \in \mathbb{R}^n$, and let $B \in \mathcal{B}(\mathcal{M}_w)$. Then
\[
\{L_{C(\mathcal{M}, i,B)}^\mathcal{A} \, | \, i \in \{1, \dots, n\} \setminus B\} \subset k[x_1, \dots, x_n]
\]
generates the ideal of $X_\mathcal{A}$ in $\mathbb{A}_k^n$, and
\[
\{ \init_w L_{C(\mathcal{M}, i, B)}^\mathcal{A} \, | \, i \in \{1, \dots, n\} \setminus B\} \subset k[x_1^{\pm}, \dots, x_n^{\pm}]
\]
generates the ideal of $\init_w U_{\mathcal{A}}$ in $\mathbb{G}_{m,k}^n$.
\end{proposition}
\begin{proposition}[Proposition 3.2 in \cite{KutlerUsatine}]
\label{wmaximalextrahasminimalweight}
Let $w = (w_1, \dots, w_n) \in \mathbb{R}^n$, let $B \in \mathcal{B}(\mathcal{M}_w)$, and let $i \in \{1, \dots, n\} \setminus B$. Then
\[
\min_{j \in C(\mathcal{M},i,B)} w_j = w_i.
\]
\end{proposition}
For additional information on matroids and the tropical geometry of linear subspaces, we refer to \cite[Chapter 4]{MaclaganSturmfels}.
\section{Equivariant motivic integration and the motivic zeta function}
Let $\ell \in \mathbb{Z}_{>0}$, and throughout this section, endow $R$ with the $\mu_\ell$-action where each $\xi \in \mu_{\ell}$ acts on $R$ by the $\pi$-adically continuous $k$-morphism $\pi \mapsto \xi^{-1}\pi$.
Let $X$ be a smooth, pure dimensional, finite type, separated scheme over $k$. We will endow $\mathscr{L}(X)$ and each $\mathscr{L}_m(X)$ with $\mu_\ell$-actions that make the truncation morphisms $\mu_\ell$-equivariant as follows. Let $\xi \in \mu_\ell$, let $A$ be a $k$-algebra, let $\xi_{A\llbracket \pi \rrbracket}: \Spec(A\llbracket \pi \rrbracket) \to \Spec(A \llbracket \pi \rrbracket)$ be the morphism whose pullback is the $\pi$-adically continuous $A$-algebra morphism $\pi \mapsto \xi^{-1}\pi$, and for each $m \in \mathbb{Z}_{\geq 0}$, let $\xi_{A[ \pi ]/(\pi^{m+1})}: \Spec(A[ \pi ]/(\pi^{m+1})) \to \Spec(A[ \pi ]/(\pi^{m+1}))$ be the morphism whose pullback is the $A$-algebra morphism $\pi \mapsto \xi^{-1}\pi$.
If $x \in \mathscr{L}(X)(A)$ corresponds to a $k$-morphism
\[
\psi_x: \Spec(A \llbracket \pi \rrbracket) \to X,
\]
then let $\xi \cdot x \in \mathscr{L}(X)(A)$ correspond to the $k$-morphism
\[
\psi_x \circ \xi_{A \llbracket \pi \rrbracket}^{-1}: \Spec(A \llbracket \pi \rrbracket) \to X.
\]
This action is clearly functorial in $A$, so it defines a $\mu_\ell$-action on $\mathscr{L}(X)$. Similarly, if $x \in \mathscr{L}_m(X)(A)$ corresponds to a $k$-morphism
\[
\psi_x: \Spec(A [ \pi ]/(\pi^{m+1})) \to X,
\]
then let $\xi \cdot x \in \mathscr{L}_m(X)(A)$ correspond to the $k$-morphism
\[
\psi_x \circ \xi_{A [ \pi ]/(\pi^{m+1})}^{-1}: \Spec(A [ \pi ]/(\pi^{m+1})) \to X.
\]
This action is also functorial in $A$, so it defines a $\mu_\ell$-action on $\mathscr{L}_m(X)$. We also see that these $\mu_\ell$-actions make the truncation morphisms $\mu_\ell$-equivariant.
\begin{proposition}
\label{orderangularcomponentarcscheme}
Let $f$ be a regular function on $X$. Then $f$ has constant order on any $\mu_\ell$-orbit of $\mathscr{L}(X)$. Furthermore, $f$ has constant angular component on any $\mu_\ell$-orbit of $\mathscr{L}(X)$ on which $f$ has order $\ell$.
\end{proposition}
\begin{proof}
Let $\xi \in \mu_\ell$, let $\xi_{\mathscr{L}(X)}: \mathscr{L}(X) \to \mathscr{L}(X)$ be its action on $\mathscr{L}(X)$, let $x \in \mathscr{L}(X)(k')$ for some extension $k'$ of $k$, let $R' = k' \llbracket \pi \rrbracket$, and let $\xi_{R'}: \Spec(R') \to \Spec(R')$ be the morphism whose pullback is the $\pi$-adically continuous $k'$-algebra morphism $\pi \mapsto \xi^{-1}\pi$. Then $x$ corresponds to a $k$-morphism
\[
\psi_x: \Spec(R') \to X,
\]
and $\xi_{\mathscr{L}(X)}(x) \in \mathscr{L}(X)(k')$ corresponds to the $k$-morphism
\[
\psi_x \circ \xi_{R'}^{-1}: \Spec(R') \to X.
\]
Write
\[
f(x) = f(\psi_x) = \sum_{i \geq 0} a_i \pi^i \in R',
\]
where each $a_i \in k'$. Then
\[
f(\xi_{\mathscr{L}(X)}(x)) = f(\psi_x \circ \xi_{R'}^{-1}) = \sum_{i \geq 0} a_i \xi^i \pi^i \in R'.
\]
Thus the order of $f(x)$ is equal to the order of $f(\xi_{\mathscr{L}(X)}(x))$, and if $f(x)$ has order $\ell$, then the fact that $\xi ^\ell = 1$ implies that the angular component of $f(x)$ is equal to the angular component of $f(\xi_{\mathscr{L}(X)}(x))$. Thus we are done.
\end{proof}
Let $\mathfrak{X} = X \times_k \Spec(R)$ and endow $\mathfrak{X}$ with the $\mu_\ell$-action induced by the $\mu_\ell$-action on $R$ and the trivial $\mu_\ell$-action on $X$. Note that any open affine cover of $X$ induces an open cover of $\mathfrak{X}$ by $\mu_\ell$-invariant affines, so the $\mu_\ell$-action on $\mathfrak{X}$ is good. Composition with the projection $\mathfrak{X} \to X$ induces isomorphisms $\mathscr{G}(\mathfrak{X}) \to \mathscr{L}(X)$ and $\mathscr{G}_m(\mathfrak{X}) \to \mathscr{L}_m(X)$ that commute with the truncation morphisms.
\begin{proposition}
\label{equivariantisogreenbergarc}
The isomorphisms $\mathscr{G}(\mathfrak{X}) \to \mathscr{L}(X)$ and $\mathscr{G}_m(\mathfrak{X}) \to \mathscr{L}_m(X)$ are $\mu_\ell$-equivariant.
\end{proposition}
\begin{proof}
Let $m \in \mathbb{Z}_{\geq 0}$. It will be sufficient to show that the isomorphism $\mathscr{G}_m(\mathfrak{X}) \to \mathscr{L}_m(X)$ is $\mu_\ell$-equivariant, as we get the remainder of the proposition by taking the inverse limit.
Let $\xi \in \mu_\ell$, let $\xi_\mathfrak{X}: \mathfrak{X} \to \mathfrak{X}$ be its action on $\mathfrak{X}$, and let $\xi_{\mathscr{G}_m(\mathfrak{X})}: \mathscr{G}_m(\mathfrak{X}) \to \mathscr{G}_m(\mathfrak{X})$ be its action on $\mathscr{G}_m(\mathfrak{X})$.
Let $x \in \mathscr{G}_m(\mathfrak{X})(A)$ for some $k$-algebra $A$, and let
\[
\xi_{A [ \pi ]/(\pi^{m+1})}: \Spec(A [ \pi ]/(\pi^{m+1})) \to \Spec(A [ \pi ]/(\pi^{m+1}))
\]
be the morphism whose pullback is the $A$-algebra morphism $\pi \mapsto \xi^{-1} \pi$. Then $x$ corresponds to an $R$-morphism
\[
\psi_x: \Spec(A [ \pi ]/(\pi^{m+1})) \to \mathfrak{X},
\]
and $\xi_{\mathscr{G}_m(\mathfrak{X})}(x) \in \mathscr{G}_m(\mathfrak{X})(A)$ corresponds to the $R$-morphism
\[
\xi_{\mathfrak{X}} \circ \psi_x \circ \xi_{A [ \pi ]/(\pi^{m+1})}^{-1}: \Spec(A [ \pi ]/(\pi^{m+1})) \to \mathfrak{X}.
\]
Because $\xi_{\mathfrak{X}}$ is trivial on the factor $X$, we get that the composition of the above morphism with the projection $\mathfrak{X} \to X$ is equal to the composition of
\[
\psi_x \circ \xi_{A [ \pi ]/(\pi^{m+1})}^{-1}: \Spec(A [ \pi ]/(\pi^{m+1})) \to \mathfrak{X}
\]
with the projection $\mathfrak{X} \to X$. Thus the proposition follows by our definition of the $\mu_\ell$-action on $\mathscr{L}_m(X)$.
\end{proof}
\begin{proposition}
\label{orderangularcomponentgreenbergscheme}
Let $f$ be a regular function on $\mathfrak{X}$ obtained by pulling back a regular function on $X$ along the projection $\mathfrak{X} \to X$. Then $f$ has constant order on any $\mu_\ell$-orbit of $\mathscr{G}(\mathfrak{X})$. Furthermore, $f$ has constant angular component on any $\mu_\ell$-orbit of $\mathscr{G}(\mathfrak{X})$ on which $f$ has order $\ell$.
\end{proposition}
\begin{proof}
This follows from Propositions \ref*{orderangularcomponentarcscheme} and \ref*{equivariantisogreenbergarc}.
\end{proof}
Let $f$ be a regular function on $X$, let $Z_f^{\hat\mu}(T) \in \mathscr{M}_{X}^{\hat\mu} \llbracket T \rrbracket$ denote the Denef-Loeser motivic zeta function of $f$, and let $Z_f^\mathrm{naive}(T) \in \mathscr{M}_X \llbracket T \rrbracket$ denote the motivic Igusa zeta function of $f$. By slight abuse of notation, we will also let $f$ denote the regular function on $\mathfrak{X}$ obtained by pulling back $f$ along the projection $\mathfrak{X} \to X$.
\begin{proposition}
\label{coefficientDLfromvolume}
Let $A_{\ell,1} \subset \mathscr{G}(\mathfrak{X})$ be the subset of arcs where $f$ has order $\ell$ and angular component 1. Then $A_{\ell,1}$ is a $\mu_\ell$-invariant cylinder, and the coefficient of $T^\ell$ in $Z_f^{\hat\mu}(T)$ is equal to the image of $\mu_{\mathfrak{X}}^{\mu_\ell}(A_{\ell,1})$ in $\mathscr{M}_X^{\hat\mu}$.
\end{proposition}
\begin{proof}
Let $B_{\ell,1} \subset \mathscr{L}(X)$ be the subset of arcs where $f$ has order $\ell$ and angular component 1, and let $Y_{\ell, 1}$ be the closed subscheme of $\mathscr{L}_\ell(X)$ consisting of jets where $f$ is equal to $\pi^\ell$. Then $\theta_\ell(B_{\ell, 1}) = Y_{\ell, 1}$. By \autoref*{orderangularcomponentarcscheme}, $B_{\ell,1}$ is a $\mu_\ell$-invariant subset of $\mathscr{L}(X)$, so because $\theta_\ell$ is $\mu_{\ell}$-equivariant, we have that $Y_{\ell, 1}$ is a $\mu_\ell$-invariant subset of $\mathscr{L}_\ell(X)$. Thus we may endow $Y_{\ell, 1}$ with the $\mu_\ell$-action given by restriction of the $\mu_\ell$-action on $\mathscr{L}_\ell(X)$. By the definition of $Z^{\hat\mu}_f(T)$ and the $\mu_\ell$-action on $Y_{\ell, 1}$, the coefficient of $T^\ell$ in $Z_f^{\hat\mu}(T)$ is equal to
\[
[Y_{\ell, 1} / X, \hat\mu] \mathbb{L}^{-(\ell+1)\dim X} \in \mathscr{M}_X^{\hat\mu}.
\]
But by the $\mu_\ell$-equivariant isomorphisms $\mathscr{G}(\mathfrak{X}) \to \mathscr{L}(X)$ and $\mathscr{G}_\ell(\mathfrak{X}) \to \mathscr{L}_\ell(X)$, the fact that the image of $A_{\ell, 1}$ under $\mathscr{G}(\mathfrak{X}) \to \mathscr{L}(X)$ is equal to $B_{\ell, 1}$, and the fact that $\theta_\ell^{-1}(Y_{\ell, 1}) = B_{\ell,1}$, we have that $A_{\ell,1}$ is a $\mu_\ell$-invariant cylinder and
\[
\mu_{\mathfrak{X}}^{\mu_\ell}(A_{\ell,1}) = [Y_{\ell, 1} / X, \mu_\ell] \mathbb{L}^{-(\ell+1)\dim X} \in \mathscr{M}_X^{\mu_\ell},
\]
and we are done.
\end{proof}
\begin{proposition}
\label{coefficientIgfromvolume}
Let $A_\ell \subset \mathscr{G}(\mathfrak{X})$ be the subset of arcs where $f$ has order $\ell$. Then $A_\ell$ is a cylinder and the coefficient of $T^\ell$ in $Z_f^\mathrm{naive}(T)$ is equal to $\mu_{\mathfrak{X}}(A_\ell)$.
\end{proposition}
\begin{proof}
This proposition follows from the definition of $Z_f^\mathrm{naive}(T)$ and the fact that the isomorphism $\mathscr{G}(\mathfrak{X}) \to \mathscr{L}(X)$ preserves cylinders and motivic volumes.
\end{proof}
\section{Actions of the roots of unity on an algebraic torus}
Let $T$ be an algebraic torus over $k$ with character lattice $M$ and co-character lattice $N$, and for each $u \in M$, let $\chi^u \in k[M]$ denote the corresponding character on $T$. In this section, we will establish some notation and facts regarding certain actions of the roots of unity on closed subschemes of $T$.
\begin{definition}
Let $\ell \in \mathbb{Z}_{> 0}$, let $w \in N$, and let $\mathbb{G}_{m,k} \to T$ be the corresponding co-character. Then we define the \emph{$(\mu_\ell, w)$-action} to be the $\mu_\ell$-action on $T$ induced by the group homomorphism $\mu_\ell \hookrightarrow \mathbb{G}_{m,k} \to T$.
For each closed subscheme $U$ of $T$ that is invariant under the $(\mu_\ell, w)$-action, we will let $U_\ell^w$ denote the scheme $U$ endowed with the $\mu_\ell$-action given by restriction of the $(\mu_\ell, w)$-action.
\end{definition}
\begin{remark}
\label{characterctionpullback}
Under the $(\mu_\ell, w)$-action, each $\xi \in \mu_\ell$ acts on $T$ with pullback
\[
\chi^u \mapsto \xi^{\langle u, w \rangle} \chi^u.
\]
\end{remark}
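For instance, if $T = \mathbb{G}_{m,k}^2$ with $M = N = \mathbb{Z}^2$ and $w = (1,2)$, then under the $(\mu_\ell, w)$-action each $\xi \in \mu_\ell$ acts by $(x_1, x_2) \mapsto (\xi x_1, \xi^2 x_2)$, so the pullback of the character $\chi^{(a,b)} = x_1^a x_2^b$ is
\[
\chi^{(a,b)} \mapsto \xi^{a + 2b} \chi^{(a,b)},
\]
in accordance with \autoref*{characterctionpullback}.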
\begin{proposition}
\label{initialdegenerationinvariant}
Let $\ell \in \mathbb{Z}_{>0}$, let $w \in N$, and let $U$ be a closed subscheme of $T$. Then the initial degeneration $\init_w U$ is a closed subscheme of $T$ that is invariant under the $(\mu_\ell, w)$-action.
\end{proposition}
\begin{proof}
Let $\xi \in \mu_\ell$, and let $\xi_T: T \to T$ be its action on $T$. It will be sufficient to show that for all $f \in k[M]$, the pullback $\xi_T^*(\init_w f)$ is contained in the ideal of $k[M]$ generated by $\init_w f$.
By definition,
\[
\supp(\init_w f) = \{u \in \supp(f) \, | \, \langle u, w \rangle = \trop(f)(w)\},
\]
so by \autoref*{characterctionpullback},
\[
\xi_T^*(\init_w f) = \xi^{\trop(f)(w)} \init_w f,
\]
and we are done.
\end{proof}
\begin{proposition}
\label{binomialcharacterinvariant}
Let $w \in N$, let $u \in M$ such that $\langle u, w \rangle > 0$, and let $V(\chi^u-1)$ be the closed subscheme of $T$ defined by $\chi^u-1 \in k[M]$. Then $V(\chi^u-1)$ is invariant under the $(\mu_{\langle u, w \rangle}, w)$-action.
\end{proposition}
\begin{proof}
Let $\xi \in \mu_{\langle u, w \rangle}$, and let $\xi_T: T \to T$ be its action on $T$. Then by \autoref*{characterctionpullback},
\[
\xi_T^*(\chi^u - 1) = \xi^{\langle u, w \rangle} \chi^u - 1 = \chi^u -1,
\]
and we are done.
\end{proof}
\begin{proposition}
\label{initdegscalingcharacteraction}
Let $U$ be a closed subscheme of $T$, let $\ell \in \mathbb{Z}_{>0}$, let $w \in N$, and let $u \in M$ be such that $\langle u, w \rangle > 0$.
Then $U$ is invariant under the $(\mu_{\langle u, w \rangle}, w)$-action if and only if $U$ is invariant under the $(\mu_{\langle u, \ell w \rangle}, \ell w)$-action.
Furthermore, if $U$ is invariant under the $(\mu_{\langle u, w \rangle}, w)$-action, then
\[
[U_{\langle u, w \rangle}^w, \hat\mu] = [U_{\langle u, \ell w \rangle}^{\ell w}, \hat\mu] \in K_0^{\hat\mu}(\mathbf{Var}_k).
\]
\end{proposition}
\begin{proof}
Under the $(\mu_{\langle u, w \rangle}, w)$-action, each $\xi \in \mu_{\langle u, w \rangle}$ acts on $T$ with pullback
\[
\chi^{u'} \mapsto \xi^{\langle u', w \rangle} \chi^{u'}.
\]
The homomorphism $\mu_{\langle u, \ell w \rangle} \to \mu_{\langle u, w \rangle}: \xi \mapsto \xi^\ell$ and the $(\mu_{\langle u, w \rangle}, w)$-action induce a $\mu_{\langle u, \ell w \rangle}$-action on $T$ such that each $\xi \in \mu_{\langle u, \ell w \rangle}$ acts on $T$ with pullback
\[
\chi^{u'} \mapsto (\xi^\ell)^{\langle u', w \rangle} \chi^{u'} = \xi^{\langle u', \ell w \rangle} \chi^{u'}.
\]
We see that this action is equal to the $(\mu_{\langle u, \ell w \rangle}, \ell w)$-action. Then the surjectivity of $\mu_{\langle u, \ell w\rangle} \to \mu_{\langle u, w \rangle}$ implies that $U$ is invariant under the $(\mu_{\langle u, w \rangle}, w)$-action if and only if it is invariant under the $(\mu_{\langle u, \ell w \rangle}, \ell w)$-action. The remainder of the proposition follows from the definition of the map $K_0^{\mu_{\langle u, w \rangle}}(\mathbf{Var}_k) \to K_0^{\mu_{\langle u, \ell w \rangle}}(\mathbf{Var}_k)$.
\end{proof}
We will devote the remainder of this section to proving the following proposition.
\begin{proposition}
\label{moninitialdeg}
Let $U$ be a closed subscheme of $T$, let $u \in M$, let $V(\chi^u-1)$ be the closed subscheme of $T$ defined by $\chi^u-1 \in k[M]$, let $w \in u^\perp \cap N$, and let $v \in N$ be such that $\ell = \langle u, v \rangle > 0$ and such that $\init_w U$ is invariant under the $(\mu_\ell, v)$-action.
Then $\init_w U$ is invariant under the $(\mu_\ell, v-w)$-action, and
\[
[(V(\chi^u-1) \cap \init_w U)_\ell^v, \mu_\ell] = [ (V(\chi^u-1) \cap \init_w U)_\ell^{v-w}, \mu_\ell] \in K_0^{\mu_\ell}(\mathbf{Var}_k).
\]
\end{proposition}
\begin{remark}
In the statement of \autoref*{moninitialdeg}, because $\ell = \langle u, v \rangle = \langle u, v-w \rangle$, \autoref*{binomialcharacterinvariant} implies that $V(\chi^u-1)$ is invariant under the $(\mu_\ell, v)$-action and the $(\mu_\ell, v-w)$-action, so the classes in the statement are well defined.
\end{remark}
\subsection{Proof of \autoref*{moninitialdeg}}
Let $U$ be a closed subscheme of $T$, let $u \in M$, let $V(\chi^u-1)$ be the closed subscheme of $T$ defined by $\chi^u-1 \in k[M]$, let $w \in u^\perp \cap N$, and let $v \in N$ be such that $\ell = \langle u, v \rangle > 0$ and such that $\init_w U$ is invariant under the $(\mu_\ell, v)$-action. \autoref*{moninitialdeg} is clear when $w=0$, so we assume that $w \neq 0$.
Let $\mathcal{O}_w = \Spec(k[w^\perp \cap M])$, and let $T \to \mathcal{O}_w$ be the algebraic group homomorphism induced by the inclusion $k[w^\perp \cap M] \to k[M]$.
\begin{lemma}
\label{shiftintoperp}
Let $f \in k[M]$. Then there exists $u' \in M$ such that $\init_w (\chi^{u'} f) \in k[w^\perp \cap M]$.
\end{lemma}
\begin{proof}
By definition,
\[
\supp(\init_w f) = \{u' \in \supp(f) \, | \, \langle u', w \rangle = \trop(f)(w)\}.
\]
If $f=0$, the statement is obvious. Thus we may assume that there exists $u' \in M$ such that $-u' \in \supp(\init_w f)$. Then we have that
\[
\init_w (\chi^{u'} f) = \chi^{u'} \init_w f \in k[w^\perp \cap M].
\]
\end{proof}
\begin{proposition}
\label{initdegpreimageprojection}
There exist closed subschemes $Y$ and $Z$ of $\mathcal{O}_w$ such that $\init_w U$ is equal to the pre-image of $Y$ under the morphism $T \to \mathcal{O}_w$ and $V(\chi^u-1) \cap \init_w U$ is equal to the pre-image of $Z$ under the morphism $T \to \mathcal{O}_w$.
\end{proposition}
\begin{proof}
Let $f_1, \dots, f_m \in k[M]$ be such that $\init_w f_1, \dots, \init_w f_m \in k[M]$ generate the ideal defining $\init_w U$ in $T$. By \autoref*{shiftintoperp}, we can assume that $\init_w f_1, \dots, \init_w f_m \in k[w^\perp \cap M]$. Because $w \in u^\perp$, we have that $\chi^u \in k[w^\perp \cap M]$.
Thus we may let $Y$ be the closed subscheme of $\mathcal{O}_w$ defined by the ideal generated by $\init_w f_1, \dots, \init_w f_m \in k[w^\perp \cap M]$, and we may let $Z$ be the closed subscheme of $\mathcal{O}_w$ defined by the ideal generated by $\init_w f_1, \dots, \init_w f_m, \chi^u-1 \in k[w^\perp \cap M]$, and we are done.
\end{proof}
\begin{lemma}
\label{characterssameafterprojection}
The composition of the co-character $\mathbb{G}_{m,k} \to T$ corresponding to $v$ with the morphism $T \to \mathcal{O}_w$ is equal to the composition of the co-character $\mathbb{G}_{m,k} \to T$ corresponding to $v-w$ with the morphism $T \to \mathcal{O}_w$.
\end{lemma}
\begin{proof}
The composition of the co-character $\mathbb{G}_{m,k} \to T$ corresponding to $v$ with the morphism $T \to \mathcal{O}_w$ corresponds to the map of lattices $w^\perp \cap M \hookrightarrow M \xrightarrow{\langle \cdot, v \rangle} \mathbb{Z}$, and the composition of the co-character $\mathbb{G}_{m,k} \to T$ corresponding to $v-w$ with the morphism $T \to \mathcal{O}_w$ corresponds to the map of lattices $w^\perp \cap M \hookrightarrow M \xrightarrow{\langle \cdot, v-w \rangle} \mathbb{Z}$. These are clearly the same lattice maps, so we are done.
\end{proof}
Let $T_w = \Spec(k[(\mathbb{R} w \cap N)^\vee])$. Any splitting of $0 \to \mathbb{R} w \cap N \to N \to N/(\mathbb{R} w \cap N) \to 0$ induces an isomorphism of algebraic groups $T \cong T_w \times_k \mathcal{O}_w$ such that $T \to \mathcal{O}_w$ corresponds to the projection $T_w \times_k \mathcal{O}_w \to \mathcal{O}_w$.
Let $\phi_1: \mu_\ell \to T$ (resp. $\phi_2: \mu_\ell \to T$) be the composition of $\mu_\ell \hookrightarrow \mathbb{G}_{m,k}$ with the co-character $\mathbb{G}_{m,k} \to T$ corresponding to $v$ (resp. $v-w$).
Let $\varphi_1: \mu_\ell \to \mathcal{O}_w$ (resp. $\varphi_2: \mu_\ell \to \mathcal{O}_w$) be the composition of $\phi_1$ (resp. $\phi_2$) with $T \to \mathcal{O}_w$.
\begin{lemma}
\label{diagonalactionequalfactor}
We have that $\varphi_1 = \varphi_2$.
\end{lemma}
\begin{proof}
This follows directly from \autoref*{characterssameafterprojection}.
\end{proof}
Let $\psi_1: \mu_\ell \to T_w$ (resp. $\psi_2: \mu_\ell \to T_w$) be the composition of $\phi_1$ (resp. $\phi_2$) with the projection $T \cong T_w \times_k \mathcal{O}_w \to T_w$.
\begin{remark}
\label{characteractiondiagonal}
We see that under the identification $T \cong T_w \times_k \mathcal{O}_w$, the $(\mu_\ell, v)$-action (resp. $(\mu_\ell, v-w)$-action) is the diagonal action defined by the action on $\mathcal{O}_w$ induced by $\varphi_1$ (resp. $\varphi_2$) and the action on $T_w$ induced by $\psi_1$ (resp. $\psi_2$).
\end{remark}
We now prove the first part of \autoref*{moninitialdeg}.
\begin{proposition}
We have that $\init_w U$ is invariant under the $(\mu_\ell, v-w)$-action.
\end{proposition}
\begin{proof}
By \autoref*{initdegpreimageprojection}, there exists a closed subscheme $Y$ of $\mathcal{O}_w$ such that $\init_w U$ is equal to the pre-image of $Y$ under the morphism $T \to \mathcal{O}_w$. Then under the identification $T \cong T_w \times_k \mathcal{O}_w$, we have that
\[
\init_w U = T_w \times_k Y.
\]
Because $\init_w U$ is invariant under the $(\mu_\ell, v)$-action, \autoref*{characteractiondiagonal} implies that $Y$ is invariant under the $\mu_\ell$-action on $\mathcal{O}_w$ induced by $\varphi_1$. By \autoref*{diagonalactionequalfactor}, $Y$ is invariant under the $\mu_\ell$-action on $\mathcal{O}_w$ induced by $\varphi_2$, and by \autoref*{characteractiondiagonal}, this implies that $\init_w U$ is invariant under the $(\mu_\ell, v-w)$-action.
\end{proof}
Before we complete the proof of \autoref*{moninitialdeg}, we make the following observation, which follows from \cite[Lemma 7.1]{KutlerUsatine} and the fact that $\dim T_w = 1$.
\begin{remark}
\label{trivialclassinvarianttorus}
The class in $K_0^{\mu_\ell}(\mathbf{Var}_k)$ of $T_w$ with the $\mu_\ell$-action induced by $\psi_1$ (resp. $\psi_2$) is equal to $\mathbb{L}-1$.
\end{remark}
We now complete the proof of \autoref*{moninitialdeg}.
\begin{proposition}
We have that
\[
[(V(\chi^u-1) \cap \init_w U)_\ell^v, \mu_\ell] = [ (V(\chi^u-1) \cap \init_w U)_\ell^{v-w}, \mu_\ell] \in K_0^{\mu_\ell}(\mathbf{Var}_k).
\]
\end{proposition}
\begin{proof}
By \autoref*{initdegpreimageprojection}, there exists a closed subscheme $Z$ of $\mathcal{O}_w$ such that $V(\chi^u-1) \cap \init_w U$ is equal to the pre-image of $Z$ under the morphism $T \to \mathcal{O}_w$. Then under the identification $T \cong T_w \times_k \mathcal{O}_w$, we have that
\[
V(\chi^u-1) \cap \init_w U = T_w \times_k Z.
\]
Because $V(\chi^u-1) \cap \init_w U$ is invariant under the $(\mu_\ell, v)$-action, \autoref*{characteractiondiagonal} implies that $Z$ is invariant under the $\mu_\ell$-action on $\mathcal{O}_w$ induced by $\varphi_1$.
Now endow $Z$ with the $\mu_\ell$-action given by restriction of the $\mu_\ell$-action on $\mathcal{O}_w$ induced by $\varphi_1$, which by \autoref*{diagonalactionequalfactor} is the same as the $\mu_\ell$-action given by restriction of the $\mu_\ell$-action on $\mathcal{O}_w$ induced by $\varphi_2$. Then by Remarks \ref*{characteractiondiagonal} and \ref*{trivialclassinvarianttorus},
\begin{align*}
[(V(\chi^u-1) \cap \init_w U)_\ell^v, \mu_\ell] &= (\mathbb{L}-1)[Z, \mu_\ell]\\
&= [ (V(\chi^u-1) \cap \init_w U)_\ell^{v-w}, \mu_\ell].
\end{align*}
\end{proof}
\section{Motivic zeta functions and smooth initial degenerations}
\label{motiviczetafunctionsforschonvarieties}
Let $n \in \mathbb{Z}_{>0}$, let $\mathbb{A}_k^n = \Spec(k[x_1, \dots, x_n])$, and let $\mathbb{G}_{m,k}^n = \Spec(k[x_1^{\pm 1}, \dots, x_n^{\pm 1}])$. Let $X$ be a smooth pure dimension $d$ closed subscheme of $\mathbb{A}_k^n$ such that $U = X \cap \mathbb{G}_{m,k}^n$ is nonempty and such that for all $w \in \Trop(U) \cap \mathbb{Z}_{\geq 0}^n$, the initial degeneration $\init_w U$ is smooth and there exist $f_1, \dots, f_{n-d} \in k[x_1, \dots, x_n]$ that generate the ideal of $X$ in $\mathbb{A}_k^n$ such that $\init_w f_1, \dots, \init_w f_{n-d} \in k[x_1^{\pm 1}, \dots, x_n^{\pm 1}]$ generate the ideal of $\init_w U$ in $\mathbb{G}_{m,k}^n$.
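For example, the linear subspaces considered in \autoref*{gbasisforlinearsubspace} satisfy these hypotheses: if $X = X_\mathcal{A}$ for some $\mathcal{A} \in \mathrm{Gr}_\mathcal{M}(k)$ with $\mathcal{M}$ loop-free, then for each $w \in \Trop(\mathcal{M}) \cap \mathbb{Z}_{\geq 0}^n$ the initial degeneration $\init_w U_\mathcal{A} = U_{\mathcal{A}_w}$ is an open subscheme of a linear subspace and hence smooth, and \autoref*{gbasisforlinearsubspace} provides a generating set of $n-d$ linear forms for the ideal of $X_\mathcal{A}$ whose initial forms generate the ideal of $\init_w U_\mathcal{A}$. This is the case used in Section \ref*{motiviczetafunctionsprovingmainresultsection}.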
Let $u \in \mathbb{Z}_{>0}^n$, let $Z^{\hat\mu}_{X,u}(T) \in \mathscr{M}_X^{\hat\mu}\llbracket T \rrbracket$ be the Denef-Loeser motivic zeta function of the restriction $(x_1, \dots, x_n)^u|_X$, and let $Z^{\mathrm{naive}}_{X,u}(T) \in \mathscr{M}_X\llbracket T \rrbracket$ be the motivic Igusa zeta function of $(x_1, \dots, x_n)^u|_X$.
To state \autoref*{zetafunctionschon} below, we will need the following proposition, which will also be proved in this section.
\begin{proposition}
\label{initialdegenerationXschemestructure}
Let $w = (w_1, \dots, w_n) \in \Trop(U) \cap \mathbb{Z}_{\geq 0}^n$, and let $\varphi: \mathbb{G}_{m,k}^n \to \mathbb{A}_k^n$ be the morphism whose pullback is given by $x_i \mapsto 0^{w_i}x_i$, that is, $x_i \mapsto x_i$ when $w_i = 0$ and $x_i \mapsto 0$ when $w_i > 0$. Then the restriction of $\varphi$ to $\init_w U$ factors through $X$, and if $w \neq 0$, the induced map $(\init_w U)^w_{u \cdot w} \to X$ is $\mu_{u \cdot w}$-equivariant with respect to the trivial $\mu_{u \cdot w}$-action on $X$.
\end{proposition}
In this section, we will prove the following theorem and its corollary.
\begin{theorem}
\label{zetafunctionschon}
Let $V_u$ be the subscheme of $\mathbb{G}_{m,k}^n$ defined by $(x_1, \dots, x_n)^u - 1$. For any $w \in \Trop(U) \cap \mathbb{Z}_{\geq 0}^n$, endow the initial degeneration $\init_w U$ and the intersection $V_u \cap \init_w U$ with the $X$-scheme structure given by \autoref*{initialdegenerationXschemestructure}.
Then there exists a function $\ordjac: \Trop(U) \cap \mathbb{Z}_{\geq 0}^n \to \mathbb{Z}$ that satisfies the following.
\begin{enumerate}[(a)]
\item If $w = (w_1, \dots, w_n) \in \Trop(U) \cap \mathbb{Z}_{\geq 0}^n$ and $f_1, \dots, f_{n-d} \in k[x_1, \dots, x_n]$ are a generating set for the ideal of $X$ such that $\init_w f_1, \dots, \init_w f_{n-d} \in k[x_1^{\pm 1}, \dots, x_n^{\pm 1}]$ generate the ideal of $\init_w U$, then
\[
\ordjac(w) = w_1 + \dots + w_n - (\trop(f_1)(w) + \dots + \trop(f_{n-d})(w)) \in \mathbb{Z}.
\]
\item We have that
\[
Z^{\hat\mu}_{X,u}(T) = \sum_{w \in \Trop(U) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\})} [(V_u \cap \init_w U)^w_{u \cdot w} / X, \hat\mu] \mathbb{L}^{-d-\ordjac(w)} T^{u \cdot w},
\]
and
\[
Z^\mathrm{naive}_{X,u}(T) = \sum_{w \in \Trop(U) \cap \mathbb{Z}_{\geq 0}^n} [\init_w U / X] \mathbb{L}^{-d-\ordjac(w)} T^{u \cdot w}.
\]
\end{enumerate}
\end{theorem}
\begin{remark}
The classes above are well defined by Propositions \ref*{initialdegenerationinvariant}, \ref*{binomialcharacterinvariant}, and \ref*{initialdegenerationXschemestructure}.
\end{remark}
Let $Z^{\hat\mu}_{X,u,k}(T) \in \mathscr{M}_k^{\hat\mu}\llbracket T \rrbracket$ be the power series obtained by pushing forward each coefficient of $Z^{\hat\mu}_{X,u}(T)$ along the structure morphism of $X$, and if the origin of $\mathbb{A}_k^n$ is contained in $X$, let $Z^{\hat\mu}_{X,u,0}(T) \in \mathscr{M}_k^{\hat\mu}\llbracket T \rrbracket$ be the power series obtained by pulling back each coefficient of $Z^{\hat\mu}_{X,u}(T)$ along the inclusion of the origin into $X$.
\begin{corollary}
\label{tropicalzetaformulahomogeneous}
Again let $V_u$ be the subscheme of $\mathbb{G}_{m,k}^n$ defined by $(x_1, \dots, x_n)^u - 1$. Suppose there exists $v \in \mathbb{Z}^n$ such that $u \cdot v > 0$ and such that for all $w \in \mathbb{Z}^n$,
\[
\init_w U = \init_{w+v} U.
\]
Then for all $w \in \Trop(U) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\})$, we have that $V_u \cap \init_w U$ is invariant under the $(\mu_{u \cdot v}, v)$-action, and there exists a function $\ordjac: \Trop(U) \cap \mathbb{Z}_{\geq 0}^n \to \mathbb{Z}$ that satisfies the following.
\begin{enumerate}[(a)]
\item If $w = (w_1, \dots, w_n) \in \Trop(U) \cap \mathbb{Z}_{\geq 0}^n$ and $f_1, \dots, f_{n-d} \in k[x_1, \dots, x_n]$ are a generating set for the ideal of $X$ such that $\init_w f_1, \dots, \init_w f_{n-d} \in k[x_1^{\pm 1}, \dots, x_n^{\pm 1}]$ generate the ideal of $\init_w U$, then
\[
\ordjac(w) = w_1 + \dots + w_n - (\trop(f_1)(w) + \dots + \trop(f_{n-d})(w)) \in \mathbb{Z}.
\]
\item We have that
\[
Z^{\hat\mu}_{X,u,k}(T) = \sum_{w \in \Trop(U) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\})} [(V_u \cap \init_w U)^v_{u \cdot v}, \hat\mu] \mathbb{L}^{-d-\ordjac(w)} T^{u \cdot w}.
\]
\item If the origin of $\mathbb{A}_k^n$ is contained in $X$, then
\[
Z^{\hat\mu}_{X,u,0}(T) = \sum_{w \in \Trop(U) \cap \mathbb{Z}_{> 0}^n} [(V_u \cap \init_w U)^v_{u \cdot v}, \hat\mu] \mathbb{L}^{-d-\ordjac(w)} T^{u \cdot w}.
\]
\end{enumerate}
\end{corollary}
\subsection{Proof of \autoref*{tropicalzetaformulahomogeneous}}
Before we prove \autoref*{initialdegenerationXschemestructure} and \autoref*{zetafunctionschon}, we will show that they imply \autoref*{tropicalzetaformulahomogeneous}.
\begin{proposition}
\label{initialformmappedtoorigin}
Let $w \in \Trop(U) \cap \mathbb{Z}_{\geq 0}^n$, suppose that the origin of $\mathbb{A}_k^n$ is contained in $X$, and endow $\init_w U$ with the $X$-scheme structure given by \autoref*{initialdegenerationXschemestructure}. Then
\begin{enumerate}[(a)]
\item if $w \in \mathbb{Z}_{> 0}^n$, the fiber of $\init_w U$ over the origin of $\mathbb{A}_k^n$ is equal to $\init_w U$,
\item and if $w \notin \mathbb{Z}_{>0}^n$, the fiber of $\init_w U$ over the origin of $\mathbb{A}_k^n$ is empty.
\end{enumerate}
\end{proposition}
\begin{proof}
This is a direct consequence of the $X$-scheme structure of $\init_w U$.
\end{proof}
Using the notation in the theorem's statement, \autoref*{zetafunctionschon} implies
\[
Z^{\hat\mu}_{X,u,k}(T) = \sum_{w \in \Trop(U) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\})} [(V_u \cap \init_w U)^w_{u \cdot w}, \hat\mu] \mathbb{L}^{-d-\ordjac(w)} T^{u \cdot w},
\]
and if in addition, the origin of $\mathbb{A}_k^n$ is contained in $X$, \autoref*{initialformmappedtoorigin} and \autoref*{zetafunctionschon} imply
\[
Z^{\hat\mu}_{X,u,0}(T) = \sum_{w \in \Trop(U) \cap \mathbb{Z}_{> 0}^n} [(V_u \cap \init_w U)^w_{u \cdot w}, \hat\mu] \mathbb{L}^{-d-\ordjac(w)} T^{u \cdot w}.
\]
Thus \autoref*{tropicalzetaformulahomogeneous} follows from \autoref*{zetafunctionschon} and the following proposition.
\begin{proposition}
\label{laststeptogetzetaformulacorollary}
Suppose there exists $v \in \mathbb{Z}^n$ such that $u \cdot v > 0$ and such that for all $w \in \mathbb{Z}^n$,
\[
\init_w U = \init_{w+v} U.
\]
Let $w \in \Trop(U) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\})$, and let $V_u$ be the subscheme of $\mathbb{G}_{m,k}^n$ defined by $(x_1, \dots, x_n)^u-1$. Then $V_u \cap \init_w U$ is invariant under the $(\mu_{u \cdot v}, v)$-action and
\[
[(V_u \cap \init_w U)^w_{u \cdot w}, \hat\mu] = [(V_u \cap \init_w U)^v_{u \cdot v}, \hat\mu] \in K_0^{\hat\mu}(\mathbf{Var}_k).
\]
\end{proposition}
\begin{proof}
Because $u \cdot w > 0$, there exist $\ell, \ell' \in \mathbb{Z}_{>0}$ and $w' \in \mathbb{Z}^n$ such that $u \cdot w' = 0$ and
\[
\ell w = \ell' v + w'.
\]
By \autoref*{initdegscalingcharacteraction}, $V_u \cap \init_w U$ is invariant under the $(\mu_{u \cdot \ell w}, \ell w)$-action and
\[
[(V_u \cap \init_w U)^w_{u \cdot w}, \hat\mu] = [(V_u \cap \init_w U)^{\ell w}_{u \cdot \ell w}, \hat\mu] \in K_0^{\hat\mu}(\mathbf{Var}_k).
\]
By the hypotheses on $v$, we have that
\[
\init_{w'} U = \init_{\ell w} U,
\]
so by \autoref*{initialdegenerationinvariant}, $\init_{w'}U$ is invariant under the $(\mu_{u \cdot \ell w}, \ell w)$-action. Then by \autoref*{moninitialdeg}, $\init_{w'} U$ is invariant under the $(\mu_{u \cdot \ell' v}, \ell' v)$-action, and noting that $u \cdot \ell w = u \cdot \ell' v$,
\[
[(V_u \cap \init_{w'} U)^{\ell w}_{u \cdot \ell w}, \mu_{u \cdot \ell w}] = [(V_u \cap \init_{w'} U)^{\ell' v}_{u \cdot \ell' v}, \mu_{u \cdot \ell' v}] \in K_0^{\mu_{u \cdot \ell' v}}(\mathbf{Var}_k).
\]
Again by \autoref*{initdegscalingcharacteraction}, $V_u \cap \init_{w'} U$ is invariant under the $(\mu_{u \cdot v}, v)$-action and
\[
[(V_u \cap \init_{w'} U)^{\ell' v}_{u \cdot \ell' v}, \hat\mu] = [(V_u \cap \init_{w'} U)^v_{u \cdot v}, \hat\mu] \in K_0^{\hat\mu}(\mathbf{Var}_k).
\]
All together, noting that $\init_w U = \init_{\ell w} U = \init_{w'} U$,
\begin{align*}
[(V_u \cap \init_w U)^w_{u \cdot w}, \hat\mu] &= [(V_u \cap \init_w U)^{\ell w}_{u \cdot \ell w}, \hat\mu]\\
&= [(V_u \cap \init_{w'} U)^{\ell w}_{u \cdot \ell w}, \hat\mu]\\
&= [(V_u \cap \init_{w'} U)^{\ell' v}_{u \cdot \ell' v}, \hat\mu]\\
&= [(V_u \cap \init_{w'} U)^v_{u \cdot v}, \hat\mu]\\
&= [(V_u \cap \init_w U)^v_{u \cdot v}, \hat\mu].
\end{align*}
\end{proof}
\subsection{Fibers of tropicalization}
For the remainder of Section \ref*{motiviczetafunctionsforschonvarieties}, fix $\ell \in \mathbb{Z}_{>0}$ and endow $R$ with the $\mu_\ell$-action where each $\xi \in \mu_{\ell}$ acts on $R$ by the $\pi$-adically continuous $k$-morphism $\pi \mapsto \xi^{-1}\pi$. Let $\mathbb{A}_R^n = \Spec(R[x_1, \dots, x_n])$, let $\mathfrak{X} = X \times_k \Spec(R) \subset \mathbb{A}_R^n$, and endow $\mathbb{A}_R^n$ (resp. $\mathfrak{X}$) with the $\mu_\ell$-action induced by the $\mu_\ell$-action on $R$ and the trivial $\mu_\ell$-action on $\mathbb{A}_k^n$ (resp. $X$).
Let $A_\ell \subset \mathscr{G}(\mathfrak{X})$ be the subset of arcs where $(x_1, \dots, x_n)^u|_\mathfrak{X}$ has order $\ell$, and let $A_{\ell, 1} \subset \mathscr{G}(\mathfrak{X})$ be the subset of arcs where $(x_1, \dots, x_n)^u|_\mathfrak{X}$ has order $\ell$ and angular component 1.
Let $\trop: \mathscr{G}(\mathfrak{X}) \to (\mathbb{Z}_{\geq 0} \cup \{\infty\})^n$ be the function $(\ord_{x_1|_\mathfrak{X}}, \dots, \ord_{x_n|_\mathfrak{X}})$. Any arc that tropicalizes to a point in $\mathbb{Z}_{\geq 0}^n$ has generic point in $U \times_k \Spec(R)$, so
\[
\trop(\mathscr{G}(\mathfrak{X})) \cap \mathbb{Z}_{\geq 0}^n \subset \Trop(U).
\]
Also because $u \in \mathbb{Z}_{>0}^n$ and $\ell \neq 0$,
\[
\trop(A_\ell) \subset \trop(\mathscr{G}(\mathfrak{X})) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\}) \subset \Trop(U) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\}).
\]
Thus
\[
A_\ell = \bigcup_{\substack{w \in \Trop(U) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\})\\ u \cdot w = \ell}} \trop^{-1}(w).
\]
This union is disjoint, and because $u \in \mathbb{Z}_{> 0}^n$, it is also finite. By \autoref*{orderangularcomponentgreenbergscheme}, for each $w \in \mathbb{Z}_{\geq 0}^n$, we have that the fiber $\trop^{-1}(w)$ and the intersection $\trop^{-1}(w) \cap A_{\ell, 1}$ are $\mu_\ell$-invariant cylinders in $\mathscr{G}(\mathfrak{X})$. We have thus proved the following.
\begin{proposition}
\label{summingvolumesoffibersgivescoefficient}
We have that
\[
\mu_{\mathfrak{X}}^{\mu_\ell}(A_{\ell, 1}) = \sum_{\substack{w \in \Trop(U) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\})\\ u \cdot w = \ell}} \mu_{\mathfrak{X}}^{\mu_\ell} (\trop^{-1}(w) \cap A_{\ell, 1}),
\]
and
\[
\mu_{\mathfrak{X}}(A_\ell) = \sum_{\substack{w \in \Trop(U) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\})\\ u \cdot w = \ell}} \mu_{\mathfrak{X}}(\trop^{-1}(w)).
\]
\end{proposition}
\subsection{Morphisms for computing volumes}
\label{morphismsforcomputingvolumes}
Throughout Subsection \ref*{morphismsforcomputingvolumes}, we will fix some $w = (w_1, \dots, w_n) \in \Trop(U) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\})$ such that $u \cdot w = \ell$. We will construct a smooth, pure relative dimension $d$, finite type, separated $R$-scheme $\mathfrak{X}^w$ with good $\mu_\ell$-action making the structure morphism equivariant, and we will construct a $\mu_\ell$-equivariant morphism $\psi_w: \mathfrak{X}^w \to \mathfrak{X}$ that will eventually be used to compute the motivic volumes of $\trop^{-1}(w) \cap A_{\ell, 1}$ and $\trop^{-1}(w)$.
Let $\mathbb{G}_{m,R}^n = \Spec(R[x_1^{\pm 1}, \dots, x_n^{\pm 1}]) = \mathbb{G}_{m,k}^n \times_k \Spec(R)$, and endow it with the $\mu_\ell$-action induced by the $\mu_\ell$-action on $\Spec(R)$ and the $(\mu_\ell, w)$-action on $\mathbb{G}_{m,k}^n$. Let $\varphi_w: \mathbb{G}_{m,R}^n \to \mathbb{A}_R^n$ be the $R$-scheme morphism corresponding to the $R$-algebra morphism
\[
\varphi_w^*: R[x_1, \dots, x_n] \to R[x_1^{\pm 1}, \dots, x_n^{\pm 1}]: x_i \mapsto \pi^{w_i}x_i.
\]
\begin{proposition}
\label{varphiwequivariant}
The morphism $\varphi_w: \mathbb{G}_{m,R}^n \to \mathbb{A}_R^n$ is $\mu_\ell$-equivariant.
\end{proposition}
\begin{proof}
Let $\xi \in \mu_\ell$, and let $\xi_1: R[x_1^{\pm 1}, \dots, x_n^{\pm 1}] \to R[x_1^{\pm 1}, \dots, x_n^{\pm 1}]$ and $\xi_2: R[x_1, \dots, x_n] \to R[x_1, \dots, x_n]$ be its actions. We need to show that
\[
\xi_1 \circ \varphi_w^* = \varphi_w^* \circ \xi_2.
\]
Because the structure morphisms of $\mathbb{G}_{m,R}^n$ and $\mathbb{A}_R^n$ are $\mu_\ell$-equivariant, it is sufficient to show that if $i \in \{1, \dots, n\}$, then
\[
\xi_1(\varphi_w^*(x_i)) = \varphi_w^*(\xi_2(x_i)),
\]
which holds because
\begin{align*}
\xi_1(\varphi_w^*(x_i)) &= \xi_1(\pi^{w_i} x_i)\\
&= (\xi^{-1} \pi)^{w_i}\xi^{w_i} x_i\\
&= \pi^{w_i} x_i\\
&= \varphi_w^*(x_i)\\
&= \varphi_w^*(\xi_2(x_i)).
\end{align*}
\end{proof}
Now let $\mathfrak{X}_\eta$ be the generic fiber of $\mathfrak{X}$, let $\varphi_{w,\eta}: \mathbb{G}_{m,K}^n \to \mathbb{A}_{K}^n$ be the base change of $\varphi_w$ to the fraction field $K$ of $R$, let $\mathfrak{X}^w_\eta \subset \mathbb{G}_{m,K}^n$ be the pre-image of $\mathfrak{X}_\eta$ under $\varphi_{w,\eta}$, and let $\mathfrak{X}^w \subset \mathbb{G}_{m,R}^n$ be the unique closed subscheme of $\mathbb{G}_{m,R}^n$ that is flat over $R$ and has generic fiber $\mathfrak{X}^w_\eta$, see for example \cite[Section 4]{Gubler}. By construction, the generic fiber of $\mathfrak{X}^w$ is isomorphic to $U \times_k \Spec(K)$, and its special fiber is equal to $\init_w U \subset \mathbb{G}_{m,k}^n$, which is smooth by the hypotheses on $X$. Thus $\mathfrak{X}^w$ is smooth and pure relative dimension $d$ over $R$. Note that by uniqueness, $\mathfrak{X}^w$ is equal to the closed subscheme of $\varphi_w^{-1}(\mathfrak{X})$ defined by its $R$-torsion ideal. Thus we have a morphism $\psi_w: \mathfrak{X}^w \to \mathfrak{X}$ induced from $\varphi_w$ by restriction.
\begin{remark}
\label{openimmersionongenericfibers}
Note that if $\psi_{w,\eta}: \mathfrak{X}^w_\eta \to \mathfrak{X}_\eta$ is the base change of $\psi_w$ to $K$, we have that $\psi_{w,\eta}$ is isomorphic to the open immersion $U \times_k \Spec(K) \to X \times_k \Spec(K)$. In particular, $\psi_w$ induces an open immersion on generic fibers.
\end{remark}
To obtain a generating set for the ideal defining $\mathfrak{X}^w$ in $\mathbb{G}_{m,R}^n$, we first need to prove the following lemma.
\begin{lemma}
\label{correctspecialfibergivesflat}
Let $\mathfrak{Y}$ be a finite type $R$-scheme, and let $\mathfrak{Y}^\flat$ be the closed subscheme of $\mathfrak{Y}$ defined by its $R$-torsion ideal. If as closed subschemes of $\mathfrak{Y}$, the special fiber of $\mathfrak{Y}^\flat$ is equal to the special fiber of $\mathfrak{Y}$, then $\mathfrak{Y}$ is a flat $R$-scheme.
\end{lemma}
\begin{proof}
We may assume $\mathfrak{Y} = \Spec(\mathfrak{A})$ for some finite type $R$-algebra $\mathfrak{A}$. Let $I \subset \mathfrak{A}$ be the $\pi$-torsion ideal of $\mathfrak{A}$. Because $I$ is finitely generated, there exists $m \in \mathbb{Z}_{\geq 0}$ such that $\pi^mI = 0$. By the hypotheses,
\[
I \subset \pi \mathfrak{A}.
\]
Let $f \in I$. Then there exists $g \in \mathfrak{A}$ such that $f = \pi g$. But $\pi g \in I$ implies that $g \in I$. Thus
\[
I = \pi I = \pi^m I = 0.
\]
Therefore $\mathfrak{A}$ is $\pi$-torsion free, so it is flat over $R$.
\end{proof}
We can now prove the following two propositions.
\begin{proposition}
\label{grobnerflatness}
Let $f_1, \dots, f_m \in k[x_1, \dots, x_n]$ be a generating set for the ideal defining $X$ in $\mathbb{A}_k^n$ such that $\init_w f_1, \dots, \init_w f_m \in k[x_1^{\pm 1}, \dots, x_n^{\pm 1}]$ form a generating set for the ideal of $\init_w U$ in $\mathbb{G}_{m,k}^n$. Then
\[
\pi^{-\trop(f_1)(w)}\varphi_w^*(f_1), \dots, \pi^{-\trop(f_m)(w)}\varphi_w^*(f_m) \in R[x_1^{\pm 1}, \dots, x_n^{\pm 1}]
\]
form a generating set for the ideal defining $\mathfrak{X}^w$ in $\mathbb{G}_{m,R}^n$.
\end{proposition}
\begin{proof}
Let $\mathfrak{Y}$ be the closed subscheme of $\mathbb{G}_{m,R}^n$ defined by the ideal generated by
\[
\pi^{-\trop(f_1)(w)}\varphi_w^*(f_1), \dots, \pi^{-\trop(f_m)(w)}\varphi_w^*(f_m).
\]
Then by construction, the generic fiber of $\mathfrak{Y}$ is equal to $\mathfrak{X}^w_\eta$, and $\mathfrak{X}^w$ is equal to the closed subscheme of $\mathfrak{Y}$ defined by its $R$-torsion ideal. The special fiber of $\mathfrak{Y}$ is the closed subscheme of $\mathbb{G}_{m,k}^n$ defined by $\init_w f_1, \dots, \init_w f_m$ and thus is equal to $\init_w U$, which is also the special fiber of $\mathfrak{X}^w$. Therefore by \autoref*{correctspecialfibergivesflat}, $\mathfrak{Y}$ is flat over $R$, so $\mathfrak{X}^w$ is equal to $\mathfrak{Y}$.
\end{proof}
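To illustrate the construction, suppose $X = V(x_1 + x_2 + x_3) \subset \mathbb{A}_k^3$ and $w = (1,0,0)$, so that $\ell = u \cdot w = u_1$. Then $\trop(x_1 + x_2 + x_3)(w) = 0$, so by \autoref*{grobnerflatness} the ideal of $\mathfrak{X}^w$ in $\mathbb{G}_{m,R}^3$ is generated by
\[
\varphi_w^*(x_1 + x_2 + x_3) = \pi x_1 + x_2 + x_3 \in R[x_1^{\pm 1}, x_2^{\pm 1}, x_3^{\pm 1}],
\]
whose reduction modulo $\pi$ is $x_2 + x_3 = \init_w(x_1 + x_2 + x_3)$, recovering the special fiber $\init_w U$ of $\mathfrak{X}^w$.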
\begin{proposition}
The closed subscheme $\mathfrak{X}^w \subset \mathbb{G}_{m,R}^n$ is $\mu_\ell$-invariant.
\end{proposition}
\begin{proof}
By the hypotheses on $X$, we know there exist $f_1, \dots, f_{n-d} \in k[x_1, \dots, x_n]$ that generate the ideal of $X$ such that $\init_w f_1, \dots, \init_w f_{n-d}$ generate the ideal of $\init_w U$, so by \autoref*{grobnerflatness},
\[
\pi^{-\trop(f_1)(w)}\varphi_w^*(f_1), \dots, \pi^{-\trop(f_{n-d})(w)}\varphi_w^*(f_{n-d}) \in R[x_1^{\pm 1}, \dots, x_n^{\pm 1}]
\]
generate the ideal defining $\mathfrak{X}^w$ in $\mathbb{G}_{m,R}^n$.
Thus it will be sufficient to show that if $f \in k[x_1, \dots, x_n]$, $\xi \in \mu_\ell$, and $\xi_1: R[x_1^{\pm 1}, \dots, x_n^{\pm 1}] \to R[x_1^{\pm 1}, \dots, x_n^{\pm 1}]$ is its action, then $\xi_1(\pi^{-\trop(f)(w)}\varphi_w^*(f))$ is in the ideal of $R[x_1^{\pm 1}, \dots, x_n^{\pm 1}]$ generated by $\pi^{-\trop(f)(w)}\varphi_w^*(f)$. Write
\[
f = \sum_{u' \in \mathbb{Z}_{\geq 0}^n} a_{u'} (x_1, \dots, x_n)^{u'},
\]
where each $a_{u'} \in k$. Then
\[
\pi^{-\trop(f)(w)}\varphi_w^*(f) = \sum_{u' \in \mathbb{Z}_{\geq 0}^n} \pi^{u' \cdot w - \trop(f)(w)} a_{u'} (x_1, \dots, x_n)^{u'},
\]
so
\begin{align*}
\xi_1(\pi^{-\trop(f)(w)}\varphi_w^*(f)) &= \sum_{{u'} \in \mathbb{Z}_{\geq 0}^n} (\xi^{-1} \pi)^{u' \cdot w - \trop(f)(w)} a_{u'} \xi^{u' \cdot w} (x_1, \dots, x_n)^{u'}\\
&= \xi^{\trop(f)(w)}\sum_{u' \in \mathbb{Z}_{\geq 0}^n} \pi^{u' \cdot w - \trop(f)(w)} a_{u'} (x_1, \dots, x_n)^{u'}\\
&= \xi^{\trop(f)(w)} \pi^{-\trop(f)(w)} \varphi_w^*(f).
\end{align*}
Thus we are done.
\end{proof}
We now endow $\mathfrak{X}^w$ with the restriction of the $\mu_\ell$-action on $\mathbb{G}_{m,R}^n$. Because $\mathfrak{X}^w$ is affine, this $\mu_\ell$-action is good, and by construction, this $\mu_\ell$-action makes the structure morphism equivariant. By \autoref*{varphiwequivariant}, we have that the morphism $\psi_w: \mathfrak{X}^w \to \mathfrak{X}$ is $\mu_\ell$-equivariant.
\begin{remark}
\label{specialfiberofXwaction}
By construction, the special fiber of $\mathfrak{X}^w$ with its induced $\mu_\ell$-action is equal to $(\init_w U)^w_\ell$.
\end{remark}
\subsection{Preparing for the change of variables formula}
For the remainder of Section \ref*{motiviczetafunctionsforschonvarieties}, let $V_u$ be the subscheme of $\mathbb{G}_{m,k}^n$ defined by $(x_1, \dots, x_n)^u -1$, and if $w \in \Trop(U) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\})$ is such that $u \cdot w = \ell$, let $\mathfrak{X}^w$ and $\psi_w: \mathfrak{X}^w \to \mathfrak{X}$ be as constructed in Subsection \ref*{morphismsforcomputingvolumes}.
\begin{proposition}
\label{cylinderforcomputingangularcomponent1tropicalfiber}
Let $w \in \Trop(U) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\})$ be such that $u \cdot w = \ell$. Noting that $V_u \cap \init_w U \subset \init_w U = \mathfrak{X}^w_0$, the subset $\theta_0^{-1}(V_u \cap \init_w U) \subset \mathscr{G}(\mathfrak{X}^w)$ is a $\mu_\ell$-invariant cylinder, and
\[
\mu_{\mathfrak{X}^w}^{\mu_\ell}(\theta_0^{-1}(V_u \cap \init_w U)) = [(V_u \cap \init_w U)^w_\ell / \mathfrak{X}^w_0, \mu_\ell] \mathbb{L}^{-d} \in \mathscr{M}^{\mu_\ell}_{\mathfrak{X}^w_0}.
\]
\end{proposition}
\begin{proof}
By \autoref*{binomialcharacterinvariant} and \autoref*{specialfiberofXwaction}, we have that $V_u \cap \init_w U$ is a $\mu_\ell$-invariant subscheme of $\mathfrak{X}^w_0$, and with the restriction of this $\mu_\ell$-action, it is equal to $(V_u \cap \init_w U)^w_\ell$. The proposition then follows from the fact that the truncation morphism $\theta_0: \mathscr{G}(\mathfrak{X}^w) \to \mathscr{G}_0(\mathfrak{X}^w) = \mathfrak{X}^w_0$ is $\mu_\ell$-equivariant and the definition of the $\mu_\ell$-equivariant motivic measure.
\end{proof}
\begin{lemma}
\label{lemmaonwmapfromtorustoaffine}
Let $w \in \Trop(U) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\})$ be such that $u \cdot w = \ell$, let $\varphi_w: \mathbb{G}_{m,R}^n \to \mathbb{A}_R^n$ be as in Subsection \ref*{morphismsforcomputingvolumes}, and let $k'$ be an extension of $k$. Then $\varphi_w$ induces a bijection
\[
\mathscr{G}( \mathbb{G}_{m,R}^n )(k') \to \{ x \in \mathscr{G}(\mathbb{A}_R^n)(k') \, | \, w = (\ord_{x_1}(x), \dots, \ord_{x_n}(x))\}.
\]
\end{lemma}
\begin{proof}
Let $R' = k'\llbracket \pi \rrbracket$, and let $K'$ be its field of fractions. Because $\varphi_w$ induces an open immersion on generic fibers, it induces an injection $\mathbb{G}_{m,R}^n(K') \to \mathbb{A}_R^n(K')$. Because $\mathbb{G}^n_{m,R}$ is separated, this implies that $\varphi_w$ induces an injection $\mathscr{G}(\mathbb{G}_{m,R}^n)(k') \to \mathscr{G}(\mathbb{A}_R^n)(k')$. We thus only need to show that the image of this injection is $\{ x \in \mathscr{G}(\mathbb{A}_R^n)(k') \, | \, w = (\ord_{x_1}(x), \dots, \ord_{x_n}(x))\}$.
Let $y: \Spec(R') \to \mathbb{G}_{m,R}^n$. Then for each $i \in \{1, \dots, n\}$, we have that $x_i(y)$ is a unit in $R'$, so by construction,
\[
\varphi_w(y) \in \{ x \in \mathscr{G}(\mathbb{A}_R^n)(k') \, | \, w = (\ord_{x_1}(x), \dots, \ord_{x_n}(x))\}.
\]
Write $w = (w_1, \dots, w_n)$, and let $x : \Spec(R') \to \mathbb{A}_R^n$ be such that $\ord_{x_i}(x) = w_i$ for each $i \in \{1, \dots, n\}$. Then for each $i \in \{1, \dots, n\}$, we have that $\pi^{-w_i} x_i(x)$ is a unit in $R'$, so we may set $y: \Spec(R') \to \mathbb{G}_{m,R}^n$ to be the morphism whose pullback is given by $x_i \mapsto \pi^{-w_i}x_i(x) \in R'$. By construction, $\varphi_w(y) = x$, and we are done.
\end{proof}
\begin{proposition}
\label{bijectionforchangeofvariables}
Let $w \in \Trop(U) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\})$ be such that $u \cdot w = \ell$. Then $\psi_w: \mathfrak{X}^w \to \mathfrak{X}$ induces bijections $\mathscr{G}(\mathfrak{X}^w)(k') \to \trop^{-1}(w)(k')$ and $\theta_0^{-1}(V_u \cap \init_w U)(k') \to (\trop^{-1}(w) \cap A_{\ell, 1})(k')$ for all extensions $k'$ of $k$.
\end{proposition}
\begin{proof}
Fix an extension $k'$ of $k$. Because $\psi_w$ induces an open immersion on generic fibers and because $\mathfrak{X}^w$ is separated, we have that $\varphi_w$ induces an injection from $\mathscr{G}(\mathfrak{X}^w)(k')$ to $\mathscr{G}(\mathfrak{X})(k')$. Thus we need to show that the image of $\mathscr{G}(\mathfrak{X}^w)(k')$ is $\trop^{-1}(w)(k')$ and that the image of $\theta_0^{-1}(V_u \cap \init_w U)(k')$ is $(\trop^{-1}(w) \cap A_{\ell, 1})(k')$.
Let $y \in \mathscr{G}(\mathfrak{X}^w)(k') \subset \mathscr{G}(\mathbb{G}_{m,R}^n)(k')$. By \autoref*{lemmaonwmapfromtorustoaffine}, $\psi_w(y) \in \trop^{-1}(w)(k')$. Let $x \in \trop^{-1}(w)(k') \subset \{ x' \in \mathscr{G}(\mathbb{A}_R^n)(k') \, | \, w = (\ord_{x_1}(x'), \dots, \ord_{x_n}(x'))\}$. By \autoref*{lemmaonwmapfromtorustoaffine}, $x$ is in the image of $\varphi_w$, where $\varphi_w$ is as in Subsection \ref*{morphismsforcomputingvolumes}. Because $\mathfrak{X}^w$ is the closed subscheme of $\varphi_w^{-1}(\mathfrak{X})$ defined by its $R$-torsion ideal, this implies that $x$ is in the image of $\psi_w$. Thus $\psi_w$ induces a bijection $\mathscr{G}(\mathfrak{X}^w)(k') \to \trop^{-1}(w)(k')$.
Let $y \in \mathscr{G}(\mathfrak{X}^w)(k')$. We only need to show that $\psi_w(y) \in A_{\ell, 1}(k')$ if and only if $\theta_0(y) \in (V_u \cap \init_w U)(k')$. Write $w = (w_1, \dots, w_n)$, and let $R' = k' \llbracket \pi \rrbracket$. Then
\begin{align*}
\psi_w(y) \in A_{\ell, 1}(k') &\iff (x_1, \dots, x_n)^u(\psi_w(y)) = \pi^{\ell}(1+\pi r) \text{ for some $r \in R'$}\\
&\iff (\pi^{w_1} x_1, \dots, \pi^{w_n} x_n)^u (y) = \pi^{u \cdot w}(1 + \pi r) \text{ for some $r \in R'$}\\
&\iff (x_1, \dots, x_n)^u(y) = 1 + \pi r \text{ for some $r \in R'$}\\
&\iff ((x_1, \dots, x_n)^u - 1)(\theta_0(y)) = 0\\
&\iff \theta_0(y) \in (V_u \cap \init_w U)(k').
\end{align*}
\end{proof}
\begin{proposition}
\label{jacobiancomputation}
Let $w \in \Trop(U) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\})$ be such that $u \cdot w = \ell$, and let $f_1, \dots, f_{n-d} \in k[x_1, \dots, x_n]$ be a generating set for the ideal defining $X$ in $\mathbb{A}_k^n$ such that $\init_w f_1, \dots, \init_w f_{n-d} \in k[x_1^{\pm 1}, \dots, x_n^{\pm 1}]$ form a generating set for the ideal of $\init_w U$ in $\mathbb{G}_{m,k}^n$. Then the jacobian ideal of $\psi_w$ is generated by
\[
\pi^{w_1+\dots+w_n-(\trop(f_1)(w)+\dots+\trop(f_{n-d})(w))}.
\]
\end{proposition}
\begin{proof}
Let $\varphi_w: \mathbb{G}_{m,R}^n \to \mathbb{A}_R^n$ be as in Subsection \ref*{morphismsforcomputingvolumes}, and for any $f \in R[x_1, \dots, x_n]$, we will set
\[
f^w = \pi^{-\trop(f)(w)}\varphi_w^*(f) \in R[x_1^{\pm 1}, \dots, x_n^{\pm 1}].
\]
Then by \autoref*{grobnerflatness}, the ideal defining $\mathfrak{X}^w$ is generated by $f_1^w, \dots, f_{n-d}^w$. Let $\mathfrak{A}_w= R[x_1^{\pm 1}, \dots, x_n^{\pm 1}]/(f_1^w, \dots, f_{n-d}^w)$ be the coordinate ring of $\mathfrak{X}^w$. Then we have the diagram
\begin{center}
\begin{tikzcd}
&&&0\\
\psi_w^*\Omega_{\mathfrak{X}/R} \arrow[r] & \Omega_{\mathfrak{X}^w/R} \arrow[r, twoheadrightarrow] & \Omega_{\mathfrak{X}^w/\mathfrak{X}} \arrow[ur] \arrow[r] & 0\\
\mathfrak{A}_w^n \arrow[u, twoheadrightarrow] \arrow[r] & \mathfrak{A}_w^n \arrow[u, twoheadrightarrow] \arrow[ur, twoheadrightarrow]\\
\mathfrak{A}_w^n \oplus \mathfrak{A}_w^{n-d} \arrow[u] \arrow[ur] \arrow[r] & \mathfrak{A}_w^{n-d} \arrow[u]
\end{tikzcd}
\end{center}
where the right vertical sequence is the presentation for the differentials module $\Omega_{\mathfrak{X}^w / R}$ induced by our presentation for $\mathfrak{A}_w$, the top horizontal sequence is the standard presentation for the relative differentials module $\Omega_{\mathfrak{X}^w / \mathfrak{X}}$, and the top left vertical arrow picks out the generators of $\psi_w^*\Omega_{\mathfrak{X}/R}$ induced by the coordinates of $\mathbb{A}_R^n$. The diagonal sequence gives a presentation for $\Omega_{\mathfrak{X}^w / \mathfrak{X}}$ with matrix whose entries are the images in $\mathfrak{A}_w$ of the entries in the matrix
\[
\begin{pmatrix}
\pi^{w_1} & & & \partial f_1^w/\partial x_1 & & \partial f_{n-d}^w/ \partial x_1\\
& \ddots & & \vdots & \cdots & \vdots\\
& & \pi^{w_n} & \partial f_1^w/\partial x_n & & \partial f_{n-d}^w/\partial x_n
\end{pmatrix}
\]
For each $i \in \{1,\dots, n\}$ and $j \in \{1, \dots, n-d\}$,
\[
\partial \varphi_w^*(f_j)/ \partial x_i = \sum_{i'=1}^n (\varphi_w^*(\partial f_j / \partial x_{i'})) (\partial \varphi_w^*(x_{i'}) /\partial x_i) = \pi^{w_i} \varphi_w^*(\partial f_j / \partial x_i),
\]
so
\[
\partial f_j^w/\partial x_i = \pi^{w_i - \trop(f_j)(w)} \varphi_w^*( \partial f_j / \partial x_i).
\]
For each $m \in \{0, 1, \dots, n-d\}$ and size $m$ subsets $I \subset \{1, \dots, n\}$ and $J \subset \{1, \dots, n-d\}$, let $\Delta_I^J$ be the determinant of the size $m$ minor of the matrix $(\partial f_j / \partial x_i)_{i,j}$ given by rows in $I$ and columns in $J$. Then the jacobian ideal of $\psi_w$ is generated by the images in $\mathfrak{A}_w$ of
\[
\{\pi^{w_1 + \dots + w_n - \sum_{j \in J}\trop(f_j)(w)} \varphi_w^* \Delta_I^J\}_{m, I, J}.
\]
Because $\mathfrak{X}$ is smooth and pure relative dimension $d$ over $R$, the unit ideal of $\mathfrak{X}$ is generated by the images in $R[x_1, \dots, x_n]/(f_1, \dots, f_{n-d})$ of
\[
\{\Delta_I^{\{1, \dots, n-d\}} \, | \, \text{$I \subset \{1, \dots, n\}$ has size $n-d$}\},
\]
so the jacobian ideal of $\psi_w$ contains
\[
\pi^{w_1+\dots+w_n-(\trop(f_1)(w)+\dots+\trop(f_{n-d})(w))}.
\]
Because $w \in \mathbb{Z}_{\geq 0}^n$ and each $f_j \in k[x_1, \dots, x_n]$, we have that each $\trop(f_j)(w) \geq 0$. Therefore the jacobian ideal of $\psi_w$ is in fact generated by
\[
\pi^{w_1+\dots+w_n-(\trop(f_1)(w)+\dots+\trop(f_{n-d})(w))}.
\]
\end{proof}
\subsection{Proofs of \autoref*{initialdegenerationXschemestructure} and \autoref*{zetafunctionschon}}
We prove \autoref*{initialdegenerationXschemestructure}.
\begin{proof}[Proof of \autoref*{initialdegenerationXschemestructure}]
This is clear if $w = 0$, so we may assume that $w \in \Trop(U) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\})$ is such that $u \cdot w = \ell$. In this case, the proposition follows by \autoref*{specialfiberofXwaction} and by considering the special fiber of $\psi_w: \mathfrak{X}^w \to \mathfrak{X}$.
\end{proof}
Now define $\ordjac: \Trop(U) \cap \mathbb{Z}^n_{\geq 0} \to \mathbb{Z}$ by setting $\ordjac(w) = \ordjac_{\psi_w}(y)$ for any $y \in \mathscr{G}(\mathfrak{X}^w)$ when $w \neq 0$, noting that by \autoref*{jacobiancomputation}, this does not depend on the choice of $y$, and by setting $\ordjac(0) = 0$.
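For instance, if $X = V(x_1 + x_2 + x_3) \subset \mathbb{A}_k^3$ and $w = (1,0,0) \in \Trop(U) \cap \mathbb{Z}_{\geq 0}^3$, then taking $f_1 = x_1 + x_2 + x_3$ in \autoref*{jacobiancomputation} gives
\[
\ordjac(w) = 1 + 0 + 0 - \trop(f_1)(w) = 1.
\]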
\begin{proposition}
\label{computingvolumeoffiberoftropicalization}
Let $w \in \Trop(U) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\})$ be such that $u \cdot w = \ell$. Then
\[
\mu_{\mathfrak{X}}(\trop^{-1}(w)) = [\init_w U / X] \mathbb{L}^{-d - \ordjac(w)},
\]
and
\[
\mu_{\mathfrak{X}}^{\mu_\ell}(\trop^{-1}(w) \cap A_{\ell, 1}) = [(V_u \cap \init_w U)^w_{u \cdot w}/X, \mu_\ell]\mathbb{L}^{-d-\ordjac(w)}.
\]
\end{proposition}
\begin{proof}
By \autoref*{openimmersionongenericfibers} and Propositions \ref*{cylinderforcomputingangularcomponent1tropicalfiber} and \ref*{bijectionforchangeofvariables}, the proposition follows from the (equivariant) motivic change of variables formula applied to $\psi_w$.
\end{proof}
Because $u \in \mathbb{Z}_{>0}^n$, the monomial $(x_1, \dots, x_n)^u$ vanishes on all of $X \setminus U$. Thus the constant term of $Z_{X,u}^\mathrm{naive}(T)$ is equal to
\[
[U / X]\mathbb{L}^{-d} = [\init_0 U / X] \mathbb{L}^{-d - \ordjac(0)}.
\]
Therefore, \autoref*{zetafunctionschon} follows from \autoref*{jacobiancomputation} and the next proposition.
\begin{proposition}
The coefficient of $T^\ell$ in $Z_{X,u}^{\hat\mu}(T)$ is equal to
\[
\sum_{\substack{w \in \Trop(U) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\})\\ u \cdot w = \ell}} [(V_u \cap \init_w U)^w_{u \cdot w} / X, \hat\mu] \mathbb{L}^{-d - \ordjac(w)},
\]
and the coefficient of $T^\ell$ in $Z_{X,u}^\mathrm{naive}(T)$ is equal to
\[
\sum_{\substack{w \in \Trop(U) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\})\\ u \cdot w = \ell}} [\init_w U / X] \mathbb{L}^{-d - \ordjac(w)}.
\]
\end{proposition}
\begin{proof}
This follows from Propositions \ref*{coefficientDLfromvolume}, \ref*{coefficientIgfromvolume}, \ref*{summingvolumesoffibersgivescoefficient}, and \ref*{computingvolumeoffiberoftropicalization}.
\end{proof}
\section{Motivic zeta functions of hyperplane arrangements}
\label{motiviczetafunctionsprovingmainresultsection}
Let $d, n \in \mathbb{Z}_{>0}$, let $\mathcal{M}$ be a rank $d$ loop-free matroid on $\{1, \dots, n\}$, and let $\mathcal{A} \in \mathrm{Gr}_\mathcal{M}(k)$. We will prove Theorems \ref*{hyperplanearrangementDLzetaformula} and \ref*{hyperplanearrangementIgusazetaformula}. Throughout this section, we will be using the notation defined in Subsection \ref*{linearsubspacesandmatroidssubsection}.
\begin{lemma}
\label{grobnerbasislinearsubspaceordjac}
Let $w=(w_1, \dots, w_n) \in \,\mathbb{R}^n$. Then there exist $f_1, \dots, f_{n-d} \in k[x_1, \dots, x_n]$ that generate the ideal of $X_\mathcal{A}$ in $\mathbb{A}_k^n$ such that the ideal of $\init_w U_\mathcal{A}$ in $\mathbb{G}_{m,k}^n$ is generated by $\init_w f_1, \dots, \init_w f_{n-d} \in k[x_1^{\pm 1}, \dots, x_n^{\pm 1}]$ and such that
\[
w_1 + \dots + w_n - (\trop(f_1)(w) + \dots + \trop(f_{n-d})(w)) = \wt_{\mathcal{M}}(w).
\]
\end{lemma}
\begin{proof}
Fix some $B \in \mathcal{B}(\mathcal{M}_w)$. Then by \autoref*{gbasisforlinearsubspace},
\[
\{L_{C(\mathcal{M}, i,B)}^\mathcal{A} \, | \, i \in \{1, \dots, n\} \setminus B\} \subset k[x_1, \dots, x_n]
\]
generate the ideal of $X_\mathcal{A}$ in $\mathbb{A}_k^n$ and
\[
\{ \init_w L_{C(\mathcal{M}, i, B)}^\mathcal{A} \, | \, i \in \{1, \dots, n\} \setminus B\} \subset k[x_1^{\pm}, \dots, x_n^{\pm}]
\]
generate the ideal of $\init_w U_\mathcal{A}$ in $\mathbb{G}_{m,k}^n$. By \autoref*{wmaximalextrahasminimalweight},
\[
\trop(L_{C(\mathcal{M},i,B)}^\mathcal{A})(w) = w_i
\]
for each $i \in \{1, \dots, n\} \setminus B$. Thus
\[
w_1 + \dots + w_n - \sum_{i \in \{1, \dots, n\} \setminus B} \trop(L_{C(\mathcal{M},i,B)}^\mathcal{A})(w) = \sum_{i \in B} w_i = \wt_\mathcal{M}(w).
\]
\end{proof}
We now prove \autoref*{hyperplanearrangementDLzetaformula}.
\begin{proposition}
We have that
\[
Z^{\hat\mu}_{\mathcal{A}, k}(T) = \sum_{w \in \Trop(\mathcal{M}) \cap (\mathbb{Z}_{\geq 0}^n \setminus \{0\})} [F_{\mathcal{A}_w}, \hat\mu] \mathbb{L}^{-d-\wt_\mathcal{M}(w)}(T, \dots, T)^w \in \mathscr{M}_k^{\hat\mu}\llbracket T \rrbracket,
\]
and
\[
Z^{\hat\mu}_{\mathcal{A},0}(T) = \sum_{w \in \Trop(\mathcal{M}) \cap \mathbb{Z}_{> 0}^n} [F_{\mathcal{A}_w}, \hat\mu] \mathbb{L}^{-d-\wt_\mathcal{M}(w)}(T, \dots, T)^w \in \mathscr{M}_k^{\hat\mu}\llbracket T \rrbracket.
\]
\end{proposition}
\begin{proof}
By setting $X = X_\mathcal{A}$, $u = (1, \dots, 1)$, and $v = (1, \dots, 1)$, the proposition follows directly from \autoref*{tropicalzetaformulahomogeneous} and \autoref*{grobnerbasislinearsubspaceordjac}.
\end{proof}
We end this section by proving \autoref*{hyperplanearrangementIgusazetaformula}.
\begin{proposition}
We have that
\[
Z^{\mathrm{naive}}_{\mathcal{A},k}(T) = \sum_{w \in \Trop(\mathcal{M}) \cap \mathbb{Z}_{\geq 0}^n} \chi_{\mathcal{M}_w}(\mathbb{L}) \mathbb{L}^{-d-\wt_\mathcal{M}(w)}(T, \dots, T)^w \in \mathscr{M}_k\llbracket T \rrbracket,
\]
and
\[
Z^{\mathrm{naive}}_{\mathcal{A},0}(T) = \sum_{w \in \Trop(\mathcal{M}) \cap \mathbb{Z}_{> 0}^n} \chi_{\mathcal{M}_w}(\mathbb{L}) \mathbb{L}^{-d-\wt_\mathcal{M}(w)}(T, \dots, T)^w \in \mathscr{M}_k\llbracket T \rrbracket.
\]
\end{proposition}
\begin{proof}
By setting $X = X_\mathcal{A}$ and $u = (1, \dots, 1)$, the proposition follows from \autoref*{zetafunctionschon}, \autoref*{initialformmappedtoorigin}, \autoref*{grobnerbasislinearsubspaceordjac}, and the fact that for each $w \in \Trop(\mathcal{M})$, the class $[U_{\mathcal{A}_w}] \in \mathscr{M}_k$ is equal to $\chi_{\mathcal{M}_w}(\mathbb{L})$.
\end{proof}
\end{document}
\begin{document}
\title{The extremal length systole of the Bolza surface}
\author[M. Fortier Bourque]{Maxime Fortier Bourque}
\address{
School of Mathematics and Statistics, University of Glasgow, University Place, Glasgow, United Kingdom, G12 8QQ}
\email{[email protected]}
\author[D. Martínez-Granado]{Dídac Martínez-Granado}
\address{Department of Mathematics\\
University of California Davis,
Davis, California, 95616\\
USA}
\email{[email protected]}
\author[F. Vargas Pallete]{Franco Vargas Pallete}
\address{
Department of Mathematics\\
Yale University\\
New Haven, CT 08540\\
USA}
\email{[email protected]}
\date{Draft of \today}
\begin{abstract}
We prove that the extremal length systole of genus two surfaces attains a strict local maximum at the Bolza surface, where it takes the value $\sqrt{2}$.
\end{abstract}
\maketitle
\section{Introduction}
Extremal length is a conformal invariant that plays an important role in complex ana\-ly\-sis, complex dynamics, and Teichm\"uller theory \cite{AhlforsQuasi,AhlforsConf,JenkinsBook}. It can be used to define the notion of quasiconformality, upon which the Teichm\"uller distance between Riemann surfaces is based. In turn, a formula of Kerckhoff \cite[Theorem 4]{Kerckhoff} shows that Teichm\"uller distance is determined by extremal lengths of (homotopy classes of) essential simple closed curves, as opposed to all families of curves.
The \emph{extremal length systole} of a Riemann surface $X$ is defined as the infimum of the extremal lengths of all essential closed curves in $X$. This function fits in the framework of \emph{generalized systoles} (infima of collections of ``length'' functions) developed by Bavard in \cite{Bav:97} and \cite{Bav:05}. In contrast with the hyperbolic systole, the extremal length systole has not been studied much so far.
For flat tori, we will see that the extremal length systole agrees with the systolic ratio, from which it follows that the regular hexagonal torus uniquely maximizes the extremal length systole in genus one (c.f. Loewner's torus inequality \cite{Pu}).
In \cite{MGVP}, the last two authors of the present paper conjectured that the Bolza surface maximizes the extremal length systole in genus two. This surface, which can be obtained as a double branched cover of the regular octahedron branched over the vertices, is the most natural candidate since it maximizes several other invariants in genus two such as the hyperbolic systole \cite{Jenni}, the kissing number \cite{Schmutz:kiss}, and the number of automorphisms \cite[Section 3.2]{KarcherWeber}. The maximizer of the systolic ratio among all non-positively curved surfaces of genus two is also in the conformal class of the Bolza surface \cite{KatzSabourau} and the same is true for the maximizer of the first positive eigenvalue of the Laplacian among all Riemannian surfaces of genus two with a fixed area \cite{NatayaniShoda}.
Here we make partial progress toward the aforementioned conjecture, by showing that the Bolza surface is a strict local maximizer for the extremal length systole.
\begin{thm} \label{thm:main}
The extremal length systole of Riemann surfaces of genus two attains a strict local maximum at the Bolza surface, where it takes the value $\sqrt{2}$.
\end{thm}
Once the curves with minimal extremal length have been identified, the proof that the Bolza surface is a strict local maximizer boils down to a calculation. Indeed, there is a sufficient criterion for generalized systoles to attain a strict local maximum at a point in terms of the derivatives of the lengths of the shortest curves \cite[Proposition~2.1]{Bav:97}, generalizing Voronoi's characterization of extreme lattices in terms of eutaxy and perfection.
The crux of the proof is thus to identify the curves with minimal extremal length. This is not a trivial task because extremal length is hard to compute exactly in general, although it is fairly easy to estimate. In this particular case, we are able to calculate the extremal length of certain curves by finding branched coverings from the Bolza surface to rectangular pillowcases, where extremal length is expressed in terms of elliptic integrals. We then show that all other curves are longer by using a piecewise Euclidean metric to estimate their extremal length. The last step is to compute the first derivative of the extremal length of each shortest curve as we deform the complex structure. These derivatives are encoded by the associated quadratic differentials thanks to Gardiner's formula \cite{G84:MinimalNormProperty}.
The value of $\sqrt{2}$ for the extremal length systole of the Bolza surface came as a surprise to us. Indeed, we initially expressed it as the ratio of elliptic integrals
\[
\left. \int_0^1 \frac{x+1+\sqrt{2}}{ \sqrt{x(1-x^4)}} \, dx \middle/ \int_1^\infty \frac{x+1+\sqrt{2}}{\sqrt{x(x^4-1)} } \, dx \right.
\]
and numerical calculations suggested that it coincided with the square root of two, which we proved. We later found that this follows from a pair of identities between elliptic integrals called the \emph{Landen transformations}. The flip side is that we appear to have discovered a new geometric proof of these identities.
The other surprising phenomenon is that the curves with minimal extremal length on the Bolza surface correspond to the second shortest curves on the punctured octahedron rather than the first. What is going on is that extremal length is not preserved under double branched covers; it is either multiplied or divided by two depending on the type of curve. While there are twelve shortest curves on the Bolza surface, there are only four on the punctured octahedron. In particular, the punctured octahedron is not perfect. Thus, either the punctured octahedron is not a local maximizer or Voronoi's criterion fails for the extremal length systole. We think the second option is more likely.
To conclude this introduction, we note that the proof that the Bolza surface maximizes the hyperbolic systole in genus two \cite{Jenni} (see also \cite{Bav:hyper}) rests on two ingredients: the fact that every genus two surface is hyperelliptic, and a bound of B\"or\"oczky on the density of sphere packings in the hyperbolic plane. A similar approach is used in \cite{KatzSabourau} to determine the optimal systolic ratio among locally $\CAT(0)$ metrics. While we use the first ingredient in the proof of \thmref{thm:main}, the second ingredient is not available because the extremal length systole is calculated using a different metric for each closed curve. Schmutz's proof that the Bolza surface is the unique local maximizer for the hyperbolic systole \cite[Theorem 5.3]{Schmutz:localmax} is similarly very geometric and does not seem applicable for extremal length. New ideas would therefore be required to remove the word ``local'' from the statement of \thmref{thm:main}.
\subsection*{Organization}
We define extremal length and illustrate how to compute it using Beurling's criterion in \secref{sec:EL}. In \secref{sec:simple}, we show that with the exception of the thrice-punctured sphere, the extremal length systole is only achieved by simple closed curves. \secref{sec:branched} explains how extremal length behaves under branched coverings. In \secref{sec:explicit}, we use elliptic integrals to compute the extremal length of various curves on the punctured octahedron. We then prove lower bounds on the extremal length of all other curves in \secref{sec:estimates} to determine the extremal length systole of the punctured octahedron and the Bolza surface. We prove that the Bolza surface is a strict local maximizer in \secref{sec:derivatives}. Our geometric proof of the Landen transformations is given in Appendix \ref{sec:Landen}, and Appendix \ref{sec:prisms} contains upper bounds for the extremal length systole of six-times-punctured prisms and antiprisms.
\section{Extremal length} \label{sec:EL}
\subsection{Extremal length} A \emph{conformal metric} on a Riemann surface $X$ is a Borel-measurable map $\rho : TX \to \R_{\geq 0}$ such that $\rho(\lambda v)=|\lambda| \rho(v)$ for every $v \in TX$ and every $\lambda \in \C$. This gives a choice of scale at every point in $X$, with respect to which we can measure length or area. We denote the set of conformal metrics of finite positive area on $X$ by $\conf(X)$.
Given a conformal metric $\rho$ and a map $\gamma$ from a $1$-manifold $I$ to $X$, we define
\[
\length_\rho(\gamma) := \int_\gamma \rho = \int_I \rho(\gamma'(t)) \, dt
\] if $\gamma$ is locally rectifiable and $\length_\rho(\gamma)=\infty$ otherwise. If $\Gamma$ is a set of maps from $1$-manifolds to $X$, then we set $\ell_\rho(\Gamma) := \inf\{\length_\rho(\gamma) : \gamma \in \Gamma \}$. Finally, the \emph{extremal length} of $\Gamma$ is
\[
\EL(\Gamma) := \EL(\Gamma,X) := \sup_{\rho \in \conf(X)} \frac{\ell_\rho(\Gamma)^2}{\area_\rho(X)}.
\]
This powerful conformal invariant was introduced by Ahlfors and Beurling in \cite{AhlforsBeurling}. The standard reference on this topic is \cite[Chapter 4]{AhlforsConf}.
Typically, one takes $\Gamma$ to be the homotopy class $[\gamma]$ of a map $\gamma$ from a $1$-manifold to $X$. In this case, we will often abuse notation and write $\EL(\gamma)$ or $\EL(\gamma, X)$ instead of $\EL([\gamma])$ or $\EL([\gamma],X)$. Similarly, we may write $\ell_\rho(\gamma)$ instead of $\ell_\rho([\gamma])$.
\subsection{The extremal length systole}
A \emph{closed curve} in a Riemann surface $X$ is the continuous image of a circle. It is \emph{simple} if it is embedded, and \emph{essential} if it cannot be homotoped to a point or into an arbitrarily small neighborhood of a puncture. The sets of homotopy classes of essential closed curves and of essential simple closed curves in $X$ will be denoted by ${\mathcal C}(X)$ and ${\mathcal S}(X)$ respectively.
\begin{defn} \label{def:ELsys}
The \emph{extremal length systole} of a Riemann surface $X$ is defined as
\[
\ELsys(X) :=\inf_{c \in {\mathcal C}(X)} \EL(c,X).
\]
\end{defn}
The reason for restricting to \emph{essential} closed curves is that the extremal length of any inessential closed curve is equal to zero. We will see in \secref{sec:simple} that if $X$ is different from the thrice-punctured sphere, then we can replace ${\mathcal C}(X)$ by ${\mathcal S}(X)$ in \defref{def:ELsys} without affecting the resulting value.
\subsection{Beurling's criterion}
For there to be any hope of computing the extremal length systole of a Riemann surface, we should first be able to compute the extremal length of some essential closed curves.
The definition of extremal length makes it easy to find lower bounds for it: any conformal metric of finite positive area provides a lower bound. To determine its exact value is harder; all known examples use the following criterion \cite[Theorem 4-4]{AhlforsConf} (c.f. \cite[Section 5.5]{Gromov} or \cite{Bav:92}), which encapsulates the \emph{length-area method}.
\begin{thm}[Beurling's criterion]
Let $\Gamma$ be a set of maps from $1$-manifolds to a Riemann surface $X$. Suppose that $\rho \in \conf(X)$ is such that there is a nonempty subset $\Gamma_0 \subseteq \Gamma$ of shortest curves, meaning that $\length_{\rho}(\gamma) = \ell_{\rho}(\Gamma)$ for every $\gamma \in \Gamma_0$, and that the implication
\begin{equation} \label{eq:fubini}
\int_{\gamma} f \rho \geq 0\text{ for all }\gamma \in \Gamma_0 \Longrightarrow \int_X f \rho^2 \geq 0
\end{equation}
holds for every measurable function $f$ on $X$. Then $\EL(\Gamma)= \ell_{\rho}(\Gamma)^2/\area_{\rho}(X)$.
\end{thm}
In practice, one often finds a measure $\mu$ on the set $\Gamma_0$ such that
\begin{equation} \label{eq:fubini2}
\int_X f \rho^2 = k \int_{\Gamma_0} \left( \int_{\gamma} f \rho \right) d\mu(\gamma)
\end{equation}
for some constant $k>0$, which obviously implies \eqref{eq:fubini}. In this case, we will say that $\Gamma_0$ \emph{sweeps out $X$ evenly}.
A conformal metric $\rho$ such that $\EL(\Gamma)= \ell_{\rho}(\Gamma)^2/\area_{\rho}(X)$ as in Beurling's criterion is said to be \emph{extremal} for $\Gamma$. Extremal metrics always exist in a weak sense \cite[Theorem 12]{Rodin} (c.f. \cite[Theorem 5.6.C']{Gromov}), though we will not use this. By a convexity argument, if an extremal metric exists, then it is unique (in the sense of equality almost everywhere) up to scaling \cite[Theorem 2.2]{JenkinsBook}.
\subsection{Examples}
Here are some examples of extremal metrics.
\begin{ex} \label{ex:simple}
If $X$ is a Riemann surface with a finitely generated fundamental group and $c$ is the homotopy class of an essential simple closed curve in $X$, then the extremal metric for $c$ is equal to $\sqrt{|q|}$ for an integrable holomorphic quadratic differential $q$ all of whose regular horizontal trajectories belong to $c$, and this quadratic differential is unique up to scaling \cite{Jenkins}. In simpler terms, the extremal metric looks like a Euclidean cylinder with some parts of its boundary glued together via isometries. If the metric $\rho = \sqrt{|q|}$ is scaled so that the height of the cylinder is $1$, then $\ell_{\rho}(c)=\area_{\rho}(X)$ so that the extremal length of $c$ is equal to either of these two quantities (the circumference or the area of the cylinder).
\end{ex}
We refer the reader to \cite{Strebel} for background on quadratic differentials. Given the existence of $q$, the fact that $\sqrt{|q|}$ is extremal follows from Beurling's criterion since the horizontal trajectories of $q$ have minimal length in their homotopy class and integration against $|q|$ is the same as iterated integration along the horizontal trajectories, then against the transverse measure.
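As a concrete instance of this standard result, consider the flat torus $X = \C/(\Z + \tau\Z)$ with $\im \tau > 0$ and the class $\alpha$ of the closed curve coming from $1 \in \Z + \tau\Z$. The quadratic differential $q = dz^2$ has no zeros, its regular horizontal trajectories are exactly the closed curves in $\alpha$, and they sweep out a single cylinder of circumference $1$ and height $\im \tau$, so that
\[
\EL(\alpha, X) = \frac{\ell_{\sqrt{|q|}}(\alpha)^2}{\area_{\sqrt{|q|}}(X)} = \frac{1}{\im \tau}.
\]
This computation reappears in the discussion of flat tori below.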
The above result is true more generally if $c$ is the homotopy class of a simple multi-curve with weights \cite{Renelt}. The Heights Theorem of Hubbard and Masur further generalizes the existence and uniqueness of $q$ to equivalence classes of measured foliations \cite{HM79:QuadFoliations} (see also \cite{MS84:Heights}) and this can be used to define the extremal length of such things.
For closed curves that are not simple, very little is known about the extremal metric. The investigations in \cite{Calabi,HZ18:SwissCross,NZ19:Isosystolic} suggest that it might have positive curvature in general. Here are three examples of non-simple closed curves on the thrice-punctured sphere for which the extremal metric is flat but does not come from a quadratic differential.
\begin{figure}
\caption{A figure-eight}
\label{fig:figure-eight}
\caption{A curve with two self-intersections}
\label{fig:sausage}
\caption{A curve with three self-intersections}
\label{fig:trefoil}
\caption{Some extremal metrics on the thrice-punctured sphere}
\label{fig:double_triangles}
\end{figure}
\begin{ex} \label{ex:figure-eight}
Take two copies of a Euclidean isosceles right triangle, glue them along their boundary, and puncture the resulting surface at the three vertices. Then the resulting metric $\rho$ is extremal for the figure-eight curve $\gamma$ which winds once around each of the two acute vertices (see \figref{fig:figure-eight}). If the short sides of the triangle have length $1$, then the surface has area $1$ and $\ell_\rho(\gamma)=2$ so that $\EL(\gamma)=4$.
\end{ex}
\begin{ex} \label{ex:sausage}
The metric from \exref{ex:figure-eight} is also extremal for the curve $\gamma$ with two self-intersections depicted in \figref{fig:sausage}. This curve satisfies $\ell_\rho(\gamma)=2\sqrt{2}$ so that $\EL(\gamma)=8$.
\end{ex}
The next example is closely related to \cite[Example 4-2]{AhlforsConf}.
\begin{ex} \label{ex:isoceles}
Take two copies of a Euclidean equilateral triangle, glue them along their boundary, and puncture the resulting surface at the three vertices. Then the resulting metric $\rho$ is extremal for the curve $\gamma$ depicted in \figref{fig:trefoil}. If the side length of the triangle is equal to $1$, then the area of the surface is equal to $\sqrt{3}/2$ and $\ell_\rho(\gamma)=3$, so that $\EL(\gamma)= 6 \sqrt{3}$.
\end{ex}
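The three values above are simply $\ell_\rho(\gamma)^2/\area_\rho(X)$ computed with the metrics just described:
\[
\frac{2^2}{1} = 4, \qquad \frac{(2\sqrt{2})^2}{1} = 8, \qquad \frac{3^2}{\sqrt{3}/2} = 6\sqrt{3}.
\]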
In each case, the proof is an application of Beurling's criterion. The first observation is that any closed geodesic in a locally $\CAT(0)$ space (which these surfaces are) minimizes length in its homotopy class. Thus, the closed geodesics depicted in \figref{fig:double_triangles} have minimal length in their homotopy class. Furthermore, in each case the set $\Gamma_0$ of closed geodesics homotopic to $\gamma$ sweeps out the surface evenly.
To prove this, we use the fact that in each example there is an isometric immersion $\pi$ from an open Euclidean cylinder $C$ to the surface $X$ that sends closed geodesics in $C$ to closed geodesics in the desired homotopy class on $X$. These immersions can be obtained by folding along the dashed lines in \figref{fig:immersions}. We observe that the immersion is $2$-to-$1$ almost every\-where in the first two cases and $3$-to-$1$ almost every\-where in the third case (only the edges of the double triangle get covered fewer times). Therefore, if $f$ is a measurable function on $X$, then
\[
\deg(\pi) \int_X f \rho^2 = \int_C (f \circ \pi) \, (\pi^*\rho)^2 = \int_y \left( \int_{\alpha_y} (f \circ \pi) \, (\pi^*\rho) \right) dy
= \int_y \left( \int_{\pi(\alpha_y)} f \rho \right) dy
\]
where $\alpha_y$ is the closed geodesic at height $y$ in $C$, which shows that \eqnref{eq:fubini2} holds.
\begin{figure}
\caption{A $2$-to-$1$ immersion}
\label{fig:figure-eight_cyl}
\caption{Another $2$-to-$1$ immersion}
\label{fig:sausage-cyl}
\caption{A $3$-to-$1$ immersion}
\label{fig:trefoil-cyl}
\caption{Isometric immersions from cylinders to double triangles}
\label{fig:immersions}
\end{figure}
\subsection{Pulling curves tight}
In order to apply Beurling's criterion, or even to obtain a lower bound for extremal length given a metric $\rho$, the first step is to determine the infimum $\ell_\rho(\Gamma)$. A useful trick for this is to pull curves tight. This is a straightforward procedure if the surface is compact, but it leads to slight complications in the presence of punctures.
\begin{prop} \label{prop:tight}
Let $\overline{X}$ be a closed Riemann surface equipped with a conformal metric $\rho$ whose induced distance is compatible with the topology on $\overline{X}$, let $X = \overline{X} \setminus P$ where $P \subset \overline{X}$ is a finite set, and let $c$ be the homotopy class of an essential closed curve in $X$. Then there is a closed curve $\gamma$ in $\overline{X}$ such that $\gamma$ is the limit of a sequence $(\gamma_n)$ of curves in $c$, the restriction $\gamma \cap X$ is locally geodesic, and $\length_\rho(\gamma)= \ell_\rho(c)$. If $c$ is the homotopy class of an essential simple closed curve, then the approximating curves $\gamma_n$ can be chosen to be simple. Furthermore, if $\rho$ is piecewise Euclidean with finitely many cone points, then $\gamma$ can be chosen to pass through either a cone point in $X$ or a point in $P$, unless $X$ is a flat torus.
\end{prop}
This is a well-known fact, at least for metrics coming from quadratic differentials (see \cite[Chapter V]{Strebel}, \cite[Section 4]{Minsky} or \cite[Section 2.4]{DLR}), but we could not track down a proof in the case where $X$ has punctures. Here we will apply the proposition to polyhedra punctured at the vertices. It is worth pointing out that even when $c$ contains simple closed curves, the length-minimizer $\gamma$ need not be simple; it can have tangential self-intersections.
\begin{proof}
Take any sequence $(\gamma_n) \subset c$ of curves parametrized proportionally to arc length such that $\length_\rho(\gamma_n)$ tends to the infimum $\ell_\rho(c)$ as $n \to \infty$. Since these are uniformly Lipschitz maps from the circle into $\overline X$, which is compact, we can apply the Arzel\`a--Ascoli theorem to extract a subsequence that converges uniformly to a closed curve $\gamma$ in $\overline X$. We also have $\length_\rho(\gamma) \leq \ell_\rho(c)$ since $\gamma$ is $\ell_\rho(c)$-Lipschitz (i.e., length is lower semi-continuous under uniform convergence). Note that $\gamma$ is not reduced to a point since $c$ is essential.
If $\gamma\cap X$ is not locally geodesic, then we can shorten $\gamma_n$ by a definite amount whenever $n$ is large enough while staying in the same homotopy class, contradicting the hypothesis that $\length_\rho(\gamma_n) \to \ell_\rho(c)$ as $n \to \infty$.
To prove the reverse inequality $\ell_\rho(c) \leq \length_\rho(\gamma)$, the idea is to reconstruct a sequence of curves $\alpha_n$ in $c$ from $\gamma$. We can choose $\alpha_n$ to follow $\gamma$ except where $\gamma$ hits the set $P$. At each of these occurrences, we take $\alpha_n$ to stop a little bit before hitting the given point $p \in P$, wind around $p$ a certain number of times along a small circle centered at $p$, then continue along $\gamma$ where $\gamma$ crosses that circle a second time. Once we have fixed a small enough radius at each of the points in $P$, there is a unique way to choose the winding around each puncture so that $\alpha_n$ belongs to $c$. Indeed, the intersection of a small neighborhood of $\gamma$ with $X$ deformation retracts onto the union of circles and segments of $\gamma$ where we allow $\alpha_n$ to travel. By letting the radii of the circles tend to $0$ as $n \to \infty$, we obtain that $\length_\rho(\alpha_n) \to \length_\rho(\gamma)$ as $n \to \infty$ since the amount of winding around each puncture stays fixed but the circumference of each circle tends to zero.
Suppose that $c$ contains simple closed curves. We can assume that $\gamma_n$ only has transverse self-intersections for otherwise we can perturb it so that this holds, without increasing its length by more than any positive amount we choose. Let $k$ be the number of self-intersections of $\gamma_n$. If $\gamma_n$ is not simple, then it must bound a monogon or a bigon in $X$ \cite[Theorem 2.7]{HassScott}. By erasing the monogon or by pushing the bigon off to its shorter side, we obtain a curve $\gamma_n'$ which has fewer self-intersections and is not longer than $\gamma_n$ by more than any positive amount we choose, say $1/(kn)$. After a finite number of steps, we obtain a simple closed curve $\beta_n$ in $c$ which is not longer than $\gamma_n$ by more than $1/n$. It follows that $\lim_{n \to \infty} \length_\rho(\beta_n) = \ell_\rho(c)$. We could therefore have chosen $(\beta_n)$ from the start (perhaps ending up with a different limit $\gamma$ after passing to a subsequence).
For the last part of the proof, we assume that $\rho$ is piecewise Euclidean with cone points. Let $Q$ be the set of cone points in $X$ and suppose that $\gamma$ is contained in $X\setminus Q$. In particular, $\gamma$ is in $X$ so that it is a closed geodesic by the second paragraph of this proof. The fact that $X\setminus Q$ is locally Euclidean and orientable allows us to push $\gamma$ parallel to itself by using the geodesic flow in the normal direction. Let $\gamma_t$ be the geodesic obtained after pushing $\gamma$ by distance $t\in \R$ to the left (where negative $t$ means pushing to the right). This is well-defined if $t$ is close enough to $0$, but if we bump into $Q$ or $P$, then it is not possible to continue. Let $T$ be the supremum of the set of $s>0$ such that $\gamma_t$ is defined for all $t \in (-s,s)$.
Suppose that $T < \infty$. By the same argument as in the first paragraph, $\gamma_{t_n}$ subconverges to limiting curves $\gamma_{\pm T}$ in $\overline{X}$ as $t_n \to \pm T$. At least one of these two curves must pass through $Q$ or $P$, otherwise the flow could be continued and $\gamma_t$ would be defined on a larger interval. Thus, we can replace $\gamma$ by one of $\gamma_T$ or $\gamma_{-T}$.
If $T = \infty$, then there is a local isometry from $S^1 \times \R$ to $X \setminus Q$. Since the domain is complete and the range is connected, this local isometry is a covering map \cite[Proposition I.3.28]{BH:MetricSpaces}. But the cylinder $S^1 \times \R$ only covers itself, tori, or Klein bottles. The only one of these which is orientable and whose completion is compact is the torus.
\end{proof}
\subsection{Systolic ratio}
Besides the homotopy class of an essential closed curve in a Riemann surface $X$, there is another set of curves $\Gamma$ whose extremal length one might want to compute, namely, the set $\Gamma_{\mathrm{all}}$ of \emph{all} essential closed curves in $X$.
Given a conformal metric $\rho$ on $X$, the \emph{systole} of $(X,\rho)$ is
\[
\sys(X,\rho) := \ell_\rho(\Gamma_{\mathrm{all}}) = \inf_{\gamma \in \Gamma_{\mathrm{all}}} \length_\rho(\gamma)
\]
and
\[
\SR(X,\rho) := \frac{\sys(X,\rho)^2}{\area_\rho(X)}
\]
is its \emph{systolic ratio}. By definition, the extremal length of $\Gamma_{\mathrm{all}}$ is equal to
\[
\EL(\Gamma_{\mathrm{all}},X) = \sup_{\rho \in \conf(X)} \frac{\ell_\rho(\Gamma_{\mathrm{all}})^2}{\area_\rho(X)} = \sup_{\rho \in \conf(X)} \SR(X,\rho)=:\overline{\SR}(X),
\]
that is, to the \emph{optimal systolic ratio} in the conformal class of $X$. The \emph{isosystolic pro\-blem} consists in maximizing the optimal systolic ratio over all conformal classes on a given manifold.
Extremal metrics for the isosystolic problem are known for the torus \cite{Pu}, the projective plane \cite{Pu} and the Klein bottle \cite{Bav:Klein1,Bav:Klein2}. When the conformal class is fixed, there are two further examples of optimal metrics known in genus three \cite[Section 7]{Calabi} and a one-parameter family in genus five \cite[Section 6]{WZ}.
We emphasize that the extremal length systole
\[
\ELsys(X) = \inf_{c \in {\mathcal C}(X)} \EL(c,X) = \inf_{c \in {\mathcal C}(X)} \sup_{\rho \in \conf(X)} \frac{\ell_\rho(c)^2}{\area_\rho(X)}
\]
is different from the optimal systolic ratio
\[
\overline{\SR}(X) = \sup_{\rho \in \conf(X)} \frac{\ell_\rho(\Gamma_\mathrm{all})^2}{\area_\rho(X)}=\sup_{\rho \in \conf(X)} \inf_{c \in {\mathcal C}(X)} \frac{\ell_\rho(c)^2}{\area_\rho(X)}.
\]
The maximin-minimax principle (which says that $\sup_x \inf_y F(x,y) \leq \inf_y \sup_x F(x,y)$ for any function $F$) yields the inequality
\begin{equation} \label{eq:SRvsSysEL}
\overline{\SR}(X) \leq \ELsys(X).
\end{equation}
If $X$ is a torus, then equality holds in \eqref{eq:SRvsSysEL}. This is because the extremal metric (for extremal length) is the same for all homotopy classes of curves. Indeed, every essential closed curve $\gamma$ in a torus is homotopic to a power $\alpha^k$ of a simple closed curve $\alpha$. By \exref{ex:simple}, the extremal metric for $\alpha$ (and hence $\gamma$) is realized by a holomorphic quadratic differential $q$ on $X$. The resulting metric is just the flat metric $\rho$ on $X$ because $q$ does not have any singularities (the space of holomorphic quadratic differentials on $X$ is $1$-dimensional). It follows that any homotopy class $c$ with minimal $\rho$-length realizes the extremal length systole, giving \[\overline{\SR}(X) \geq \SR(X,\rho) =\frac{\ell_\rho(c)^2}{\area_\rho(X)}= \EL(c,X)\geq\ELsys(X),\]
and hence $\overline{\SR}(X) = \ELsys(X)$.
By Loewner's torus inequality \cite{Pu}, $\overline{\SR}(X)$ is strictly maximized at the regular hexa\-go\-nal torus, where it takes the value $2/\sqrt{3}$. In fact, it is easy to see that this is the only local maximum (see below). Thus, the same holds for the extremal length systole.
\begin{cor} \label{cor:torus}
The extremal length systole of tori attains a unique (strict) local maximum at the regular hexagonal torus, where it takes the value $2/\sqrt{3}$.
\end{cor}
The proof of Loewner's torus inequality is quite straightforward once we know that the optimal metric is flat. Indeed, the moduli space of flat tori up to similarity is equal to the modular surface $\ensuremath{\mathbb{H}}/\PSL(2,\Z)$. The standard fundamental domain for the action of $\PSL(2,\Z)$ on $\ensuremath{\mathbb{H}}$ is \[F=\{ z \in \ensuremath{\mathbb{H}} : |\re z| \leq 1/2 \text{ and } |z|\geq 1 \}.\] For any $\tau \in F$, it is easy to see that the shortest non-zero vectors in the lattice $\Z + \tau \Z$ have length $1$. This means that the systolic ratio of the torus $\C / (\Z + \tau \Z)$ is the reciprocal of its area, or $1/ \im \tau$. This quantity is only locally maximized at the corners $\{e^{\pi i / 3}, e^{2\pi i / 3} \}$ of $F$, both of which represent the regular hexagonal torus.
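At the corner $\tau = e^{\pi i/3}$, for instance, this gives
\[
\SR\bigl(\C/(\Z + e^{\pi i/3}\Z), \rho\bigr) = \frac{1}{\im e^{\pi i/3}} = \frac{1}{\sin(\pi/3)} = \frac{2}{\sqrt{3}},
\]
which is the value appearing in \corref{cor:torus}.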
The equality $\overline{\SR}(X) = \ELsys(X)$ also holds for the projective plane because there is only one homotopy class of primitive essential closed curves in that case.
However, if $X$ is the thrice-punctured sphere, then
\begin{equation}
\overline{\SR}(X) = 2 \sqrt{3} < 4 = \ELsys(X).
\end{equation}
We explain the first equality here while the second one will be shown in \corref{cor:ELsysPants}.
\begin{prop} \label{prop:SRsphere}
The Euclidean metric on the double equilateral triangle punctured at the vertices is optimal for the systolic ratio, giving $\overline{\SR}(X) = 2 \sqrt{3}$ for the thrice-punctured sphere.
\end{prop}
\begin{proof}
This is the metric $\rho$ described in \exref{ex:isoceles}. The shortest curves are not those depicted in \figref{fig:trefoil}, though; they are figure-eight curves of length $\sqrt{3}$.
To prove that every essential curve in $X$ has length at least $\sqrt{3}$, we use \propref{prop:tight} to get a closed curve $\gamma$ into the metric completion $\overline{X}$ (the unpunctured double equilateral triangle) such that $\gamma$ is a limit of curves in $c$, satisfies $\length_\rho(\gamma)\leq \ell_\rho(c)$, and $\gamma$ passes through a vertex $v$ of $\overline{X}$.
The curve $\gamma$ must also intersect the edge opposite to $v$, for otherwise the curves in $c$ close enough to $\gamma$ could be homotoped into a neighborhood of $v$, contradicting the assumption that they are essential. As the distance from $v$ to the opposite edge is $\sqrt{3}/2$, we obtain \[\ell_\rho(c) \geq \length_\rho(\gamma) \geq \sqrt{3}.\]
It remains to show that $X$ is swept out evenly by shortest essential closed curves. This is best seen by observing that there is a covering map from the regular hexagonal torus $Y$ punctured at three points to $X$ (see \figref{fig:torus_cover}). The shortest essential closed curves in $Y$ have length $\sqrt{3}$, and those that do not pass through one of the three punctures project to shortest essential closed curves in $X$. These are organised in three parallel families, each of which foliates $Y$ minus three closed geodesics. By picking any one of these parallel families, we obtain an isometric immersion $\pi$ from a union $U$ of $3$ open Euclidean cylinders to $X$ which is $3$-to-$1$ almost everywhere and maps each simple closed geodesic in $U$ to a shortest essential closed curve in $X$. As before, we obtain
\[
3 \int_X f \rho^2 = \int_U (f\circ \pi) \,(\pi^*\rho)^2 = \int_y \left( \int_{\alpha_y} (f\circ \pi) \, \pi^*\rho \right) dy =\int_{y} \left( \int_{\pi(\alpha_y)} f \rho \right) dy
\]
for any measurable function $f$ on $X$, where $y$ is a height coordinate in $U$ and $\alpha_y$ is the closed geodesic at height $y$.
By Beurling's criterion, $\rho$ is extremal for the set of curves $\Gamma_\mathrm{all}$, so that
\[
\overline{\SR}(X) =\EL(\Gamma_\mathrm{all},X) = \frac{\ell_\rho(\Gamma_\mathrm{all})^2}{\area_\rho(X)} = \frac{\sqrt{3}^2}{\sqrt{3}/2}=2\sqrt{3}. \qedhere
\]
\end{proof}
\begin{figure}
\caption{Degree $3$ cover from the hexagonal torus to the double triangle. None of the three homotopy classes of figure-eights sweeps out the double triangle evenly by itself, but together they do.}
\label{fig:torus_cover}
\end{figure}
Similarly, if $X$ is the Bolza surface, then
\begin{equation}
\overline{\SR}(X) \leq \frac{\pi}{3} < \sqrt{2} = \ELsys(X)
\end{equation}
where the first inequality is from \cite[Theorem~5.1]{KatzSabourau} and the equality on the right-hand side will be shown in \corref{cor:roottwo}.
\section{Systoles are simple} \label{sec:simple}
In this section, we show that we can restrict to \emph{simple} closed curves in the definition of the extremal length systole (except in the case where no such curve is essential). For the systole with respect to a fixed metric, this is an easy surgery argument, but since the extremal lengths of different curves are computed using different metrics, the argument is more subtle for the extremal length systole. As the statement is obvious for tori, we restrict to hyperbolic surfaces in this section.
We start by showing that the infimum in the definition of extremal length systole is achieved. This statement will also be used in \secref{sec:derivatives} to show that the extremal length systole is a generalized systole in the sense of Bavard.
\begin{lem} \label{lem:finite}
Let $X$ be a hyperbolic surface of finite area. For any $L>0$, there are at most finitely many homotopy classes $c$ of essential closed curves in $X$ such that $\EL(c,X) \leq L$. In particular, there is a homotopy class $c\in {\mathcal C}(X)$ such that $\EL(c,X)=\ELsys(X)$.
\end{lem}
\begin{proof}
In the complete hyperbolic metric $\rho$ on $X$, every homotopy class $c \in {\mathcal C}(X)$ contains a unique closed geodesic and for any $B>0$, there are at most finitely many closed geodesics of length at most $B$ (since this set is discrete and compact). By definition of extremal length, we have $\EL(c,X) \geq \ell_\rho(c)^2 / \area_\rho(X)$. Thus, an upper bound on extremal length implies an upper bound on hyperbolic length, which restricts to finitely many homotopy classes. The infimum is therefore a minimum.
\end{proof}
We proceed to show that the extremal length systole is only realized by essential closed curves with the minimum number of self-intersections possible.
\begin{thm}\label{thm:ELsyssimple}
Let $X$ be a hyperbolic surface of finite area. Then any essential closed curve $\gamma$ in $X$ such that $\EL(\gamma,X)=\ELsys(X)$ is simple unless $X$ is the thrice-punctured sphere, in which case the figure-eight curves are the ones with minimal extremal length.
\end{thm}
\begin{proof}
Let $\gamma$ be an essential closed curve in $X$ that is not homotopic to a simple closed curve or to a figure-eight on the thrice-punctured sphere. Our goal is to find an essential closed curve $\alpha$ with strictly smaller extremal length than $\gamma$.
There are two ways to perform surgery on $\gamma$ at an essential self-intersection which we call \emph{smoothings}. These are obtained by cutting the circle at two preimages of the intersection point and regluing in the two other possible ways (see \cite[Definition 2.16]{MGT:20}). As such, the smoothings of $\gamma$ have the same image as $\gamma$ and the same length with respect to any metric. One of the two smoothings is a pair of curves and the other one is a single curve.
Suppose first that all three components of the two smoothings of $\gamma$ at some essential self-intersection $x\in \gamma$ are inessential. Consider the smoothing $\gamma'$ of $\gamma$ at $x$ which has two components. By assumption, each component of $\gamma'$ can be homotoped into a puncture of $X$ (it cannot be homotopic to a point since the self-intersection at $x$ is essential). It can therefore be homotoped to a power of a simple loop from $x$ that encloses the puncture. Thus, up to homotopy, we may assume that the image of $\gamma$ is homeomorphic to a figure-eight curve bounding two punctures. Since the other smoothing $\gamma''$ of $\gamma$ at $x$ is also inessential, the third component of $X \setminus \gamma$ must be a punctured disk, so that $X$ is the thrice-punctured sphere. This also implies that the two components of $\gamma'$ are simple, because the fundamental group of $X$ is the free group on the two simple loops $a$ and $b$ forming the figure-eight, and the only words homotopic to the third puncture are conjugate to powers of $ab$. If $\gamma \sim a^m b^n$, then its other smoothing $\gamma''$ is $a^m b^{-n}$, which is conjugate to a power of $ab$ only if $m=\pm 1$ and $n = \mp 1$. We conclude that $\gamma$ is homotopic to a figure-eight on the thrice-punctured sphere, contrary to our assumption.
It follows that at least one of the two smoothings of $\gamma$ has an essential component $\beta$. Then $\EL(\beta) \leq \EL(\gamma)$ since $\ell_{\rho}(\beta) \leq \ell_{\rho}(\gamma)$ for every conformal metric $\rho$ on $X$. This is because any curve $c$ homotopic to $\gamma$ has a smoothing with one component $b$ homotopic to $\beta$ (see \cite[Lemma~2.1]{NC:01} or ~\cite[Lemma~2.17]{MGT:20}) and $b$ is at most as long as $c$ with respect to $\rho$.
If $\beta$ still has essential self-intersections, then we can repeat the above process of smoothing and keeping an essential component until we are left with a curve $\alpha$ for which this is no longer possible. Then $\alpha$ is either simple or a figure-eight on the thrice-punctured sphere, and we have $\EL(\alpha) \leq \EL(\gamma)$. The tricky part is to prove that the inequality is strict. In either case, we know what the extremal metric $\rho$ for $\alpha$ looks like. Since $\alpha$ was obtained from $\gamma$ after repeated smoothings, we have $\ell_\rho(\alpha) \leq \ell_\rho(\gamma)$. If $\ell_\rho(\alpha) < \ell_\rho(\gamma)$, then $\EL(\alpha)< \EL(\gamma)$ as required. Otherwise, we need to modify the metric $\rho$.
Assume that $\alpha$ is simple. By \exref{ex:simple}, the extremal metric $\rho$ comes from an integrable holomorphic quadratic differential $q$ on $X$. In this metric, the punctures are at a finite distance away, so that the metric completion $\overline{X}$ is a closed surface. By \propref{prop:tight}, the curve $\gamma$ can be pulled tight to a curve $\gamma^*$ in $\overline{X}$ such that $\length_\rho(\gamma^*)=\ell_\rho(\gamma)$. If $\gamma^*$ is a regular closed geodesic in $X$ (that does not pass through cone points), then it must be a power of a simple closed curve (necessarily homotopic to $\alpha$) because its slope with respect to $q$ is constant. As we assumed that $\gamma$ was not simple, we have that $\gamma$ is homotopic to a proper power $\alpha^k$ with $k>1$. We thus have $\ell_\rho(\gamma)= k \ell_\rho(\alpha) > \ell_\rho(\alpha)$ so that $\EL(\alpha)<\EL(\gamma)$.
On the other hand, if every length-minimizer $\gamma^*$ for the homotopy class $[\gamma]$ passes through a cone point or a puncture, then there are only finitely many possible length-minimizers for $[\gamma]$, as there are only finitely many geodesic segments of length at most $L$ between the points in this finite set (for any $L>0$). In fact, the length-minimizer is unique in this case, but we will not need this. All we need to use is that there is a compact set $K \subset X$ with nonempty interior that is disjoint from every length-minimizer. Then every curve homotopic to $\gamma$ that passes through $K$ is longer than $\ell_\rho(\gamma)$ by a definite amount, for otherwise the compactness argument from the proof of \propref{prop:tight} would yield a length-minimizer passing through $K$. We can therefore decrease $\rho$ by a small amount in $K$ without affecting $\ell_{\rho}(\gamma)$, but thereby decreasing the area of $\rho$. This improved metric shows that $\EL(\gamma) > \EL(\alpha)$, as required.
Next, suppose that $\alpha$ is a figure-eight on the thrice-punctured sphere. Then the extremal metric $\rho$ for $\alpha$ is the double of an isosceles right triangle as described in \exref{ex:figure-eight} (scaled to have edge lengths $1$ and $\sqrt{2}$). If the homotopy class of $\gamma$ does not contain any closed geodesics in $X$, then we can apply the same trick as above to reduce $\rho$ in a set $K$ away from all the length-minimizers to obtain the strict inequality $\EL(\gamma)>\EL(\alpha)$.
The only case left is if $[\gamma]$ contains closed geodesics in $X$. In contrast with the case of simple closed curves, the metric $\rho$ comes from a quartic differential rather than a qua\-dra\-tic differential, so that its closed geodesics can self-intersect (at right angles). However, we can still prove that $\ell_\rho(\gamma)> \ell_\rho(\alpha)$. Suppose on the contrary that $\ell_\rho(\gamma)= \ell_\rho(\alpha)$. Let $\gamma^* \subset X$ be a closed geodesic homotopic to $\gamma$ and let $\gamma^\dagger \subset \overline{X}$ be a curve obtained by pushing $\gamma^*$ to one side until it passes through a puncture, as in the proof of \propref{prop:tight}. Consider the covering map $\R^2 \setminus \Z^2 \to X$ coming from the regular tiling of the plane by isosceles right triangles. We can lift $\gamma^\dagger$ under this covering to an arc in the plane with endpoints in $\Z^2$. Since $\gamma^\dagger$ is a limit of closed geodesics, that lift must be a straight line segment $I$. Its length is therefore equal to $\sqrt{m^2+n^2}$ for some integers $m$ and $n$. The only way to obtain $\ell_\rho(\alpha)=2$ is if one of $m$ or $n$ is zero (since $m^2+n^2=4$ has no integer solutions with both $m$ and $n$ non-zero), meaning that $I$ is parallel to one of the coordinate axes. Since any closed geodesic in $X$ elevates to a straight line in $\R^2 \setminus \Z^2$, the deck transformation corresponding to $\gamma^*$ must be a translation. Furthermore, as $\gamma^*$ is parallel to $\gamma^\dagger$ and of the same length, that translation is by distance $2$ along one of the coordinate axes. It follows that $\gamma^*$ is a figure-eight in $X$ (see \figref{fig:figure-eight_cyl} and \figref{fig:figure-eight}). This contradicts our initial hypothesis that $\gamma$ was not homotopic to a figure-eight on the thrice-punctured sphere. We conclude that $\ell_\rho(\gamma)> \ell_\rho(\alpha)$ and hence $\EL(\gamma)> \EL(\alpha)$.
\end{proof}
We obtain the extremal length systole of the thrice-punctured sphere as a bonus.
\begin{cor} \label{cor:ELsysPants}
The extremal length systole of the thrice-punctured sphere is equal to $4$.
\end{cor}
\begin{proof}
\thmref{thm:ELsyssimple} shows that the figure-eight curves have minimal extremal length and \exref{ex:figure-eight} shows that the extremal length of these curves is equal to $4$.
\end{proof}
\section{Branched coverings} \label{sec:branched}
A Riemann surface is \emph{hyperelliptic} if it admits a holomorphic map of degree two onto the Riemann sphere. On a surface of genus $g$, such a holomorphic map has $2g+2$ critical points, called the \emph{Weierstrass points}, and the same number of critical values. The conformal automorphism that swaps the two preimages of any non-critical value and fixes the Weierstrass points is called the \emph{hyperelliptic involution}.
Every closed Riemann surface of genus two is hyperelliptic, hence arises as a double branched cover of the Riemann sphere branched over six points. As the next lemma shows, extremal length behaves well under branched coverings. This will allow us to reduce computations of extremal length on surfaces of genus two to computations on six-times-punctured spheres.
\begin{lem} \label{lem:branch}
Let $f:X \to Y$ be a holomorphic map of degree $d$ between Riemann surfaces with finitely generated fundamental groups, let $Q \subset Y$ be a finite set containing the critical values of $f$, and let $P \subset f^{-1}(Q)$ be such that $f^{-1}(Q) \setminus P$ is a subset of the critical points of $f$. Then
\[
\EL(f^{-1}(\gamma) , X \setminus P) = d \cdot \EL(\gamma, Y \setminus Q)
\]
for any simple closed curve $\gamma$ in $Y \setminus Q$.
\end{lem}
Typically, we will take $Q$ to be the set of critical values of $f$ and $P$ to be the empty set, provided that $f^{-1}(Q)$ only consists of critical points. This is the case if $d=2$ since each point in $Q$ has only one (double) preimage.
\begin{proof}[Proof of \lemref{lem:branch}]
By Jenkins's theorem \cite{Jenkins}, there is a unique integrable holomorphic quadratic differential $q$ on $Y\setminus Q$ whose regular trajectories are all homotopic to $\gamma$ and form a cylinder $C$ of height $1$. With this normalization, the extremal length $\EL(\gamma, Y \setminus Q)$ is equal to the area $\int_{Y \setminus Q} |q|$ of the cylinder.
The pull-back differential $f^* q$ is holomorphic on $X \setminus P$ since simple poles pull back to regular points or zeros at branch points. Since $f:X \setminus f^{-1}(Q) \to Y \setminus Q$ is a covering map, $f^{-1}(C)$ is a union of cylinders of height $1$, the union of whose core curves is homotopic to $f^{-1}(\gamma)$ relative to $f^{-1}(Q)$, hence relative to $P$ as well. Moreover, $f^{-1}(C)$ contains all the regular horizontal trajectories of $f^* q$.
By Renelt's theorem \cite{Renelt}, the extremal metric for $f^{-1}(\gamma)$ is given by $\sqrt{|f^*q|}$ so that
\[
\EL(f^{-1}(\gamma) , X \setminus P) = \int_{X \setminus P} |f^*q| = d \int_{Y \setminus Q} |q| = d \cdot \EL(\gamma, Y \setminus Q),
\]
as required.
\end{proof}
Note that in the above lemma, the inverse image $f^{-1}(\gamma)$ is not necessarily connected; it may have up to $d$ connected components. It may also happen that some components of $f^{-1}(\gamma)$ are homotopic to each other. In the case where $Y \setminus Q$ is a punctured sphere and $d=2$, the number of components of $f^{-1}(\gamma)$ and whether these components are homotopic to each other is determined by how $\gamma$ separates the punctures.
\begin{defn}
Let $2\leq m \leq n$ be integers. An \emph{$(m,n)$-curve} on a sphere with $(m+n)$ punctures is a simple closed curve that separates $m$ punctures from $n$ punctures.
\end{defn}
\begin{lem} \label{lem:curves}
Let $f:X \to \widehat{\C}$ be a holomorphic map of degree two from a closed Riemann surface to the Riemann sphere, let $Q \subset \widehat{\C}$ be its set of critical values and let $\gamma \subset \widehat{\C}\setminus Q$ be an $(m,n)$-curve. Then $f^{-1}(\gamma)$ is connected if and only if both $m$ and $n$ are odd, in which case, $f^{-1}(\gamma)$ is separating. If $m$ and $n$ are even, then the two components of $f^{-1}(\gamma)$ are individually non-separating, and they are homotopic to each other if and only if $m=2$.
\end{lem}
\begin{proof}
Recall that $m+n = |Q| = 2g+2$ where $g$ is the genus of $X$, so that $m$ and $n$ have the same parity. The curve $\gamma$ separates the Riemann sphere into a disk $D_m$ with $m$ critical values and a disk $D_n$ with $n$ critical values. The Riemann--Hurwitz formula gives $\chi(f^{-1}(D_m)) = 2 - m$ and $\chi(f^{-1}(D_n)) = 2 - n$. On the other hand, if $f^{-1}(D_m)$ has genus $g_m$ and $b$ boundary components, then $\chi(f^{-1}(D_m)) = 2 - 2g_m - b$. In particular, $b$ is odd if and only if $m$ (and $n$) is. Since $b$ is equal to either $1$ or $2$, we have that $f^{-1}(\gamma)$ is connected if and only if $m$ and $n$ are odd. Since every simple closed curve on the sphere is separating, so is its full preimage under $f$.
Assume that $m$ and $n$ are even. Then $f^{-1}(\gamma)$ has two connected components. Similarly, $X \setminus f^{-1}(\gamma)$ has exactly two connected components, namely $f^{-1}(D_m)$ and $f^{-1}(D_n)$. These surfaces are connected because each one is a branched cover of degree two of a disk with non-trivial branching. In particular, given a point in each component of $f^{-1}(\gamma)$, there is a path in $f^{-1}(D_m)$ between them and another such path in $f^{-1}(D_n)$. The concatenation of these two paths gives a closed curve intersecting each component $C$ of $f^{-1}(\gamma)$ only once and transversely, which implies that $C$ is non-separating.
Suppose that the two components of $f^{-1}(\gamma)$ are homotopic to each other. Then these two components bound a cylinder, necessarily equal to one of $f^{-1}(D_m)$ or $f^{-1}(D_n)$. As the Euler characteristic of a cylinder is equal to $0$, one of $m$ or $n$ must be equal to $2$. Since we assumed that $2\leq m\leq n$, we conclude that $m=2$. Conversely, if $m=2$, then $f^{-1}(D_m)$ must be homeomorphic to a cylinder since it has two boundary components and Euler characteristic zero. This implies that the two components of $f^{-1}(\gamma)$ are homotopic to each other.
\end{proof}
Before stating the consequences of the above lemmas for surfaces of genus two, let us see what they mean for surfaces of genus one. Every torus $X$ is hyperelliptic (or rather \emph{elliptic}). The quotient map by the elliptic involution has $4$ critical points and $4$ critical values. Let $f:X \to \widehat{\C}$ be this quotient map and let $Q$ be its set of critical values. Since any essential simple closed curve $\gamma$ in $\widehat{\C} \setminus Q$ is a $(2,2)$-curve, \lemref{lem:curves} tells us that its preimage $f^{-1}(\gamma)$ has two components, both of which are homotopic to a given curve $\alpha \subset X$. Conversely, every essential simple closed curve $\alpha$ in $X$ can be homotoped off of the critical points of $f$, after which it projects to some $(2,2)$-curve $\gamma$ in $\widehat{\C} \setminus Q$.
By \lemref{lem:branch} applied with $P = \varnothing$, we obtain
\[
2^2 \EL( \alpha , X)= \EL(2 \alpha , X) = \EL(f^{-1}(\gamma) , X) = 2 \EL(\gamma, \widehat{\C} \setminus Q),
\]
or $\EL(\gamma, \widehat{\C} \setminus Q)=2\EL( \alpha , X)$. By $2 \alpha$ we mean $2$ copies of the curve $\alpha$, and the first equality holds because $\ell_\rho(2 \alpha) = 2 \ell_\rho(\alpha)$ for every conformal metric $\rho$.
By \thmref{thm:ELsyssimple}, the extremal length systole is always achieved by simple closed curves. We conclude that the extremal length systole of a four-times-punctured sphere is equal to twice the extremal length systole of the elliptic double cover branched over the four punctures. Since the quotient of the regular hexagonal torus by the elliptic involution is isometric to the regular tetrahedron, \corref{cor:torus} implies the following.
\begin{cor} \label{cor:tetrahedron}
The extremal length systole of four-times-punctured spheres attains a unique (strict) local maximum at the regular tetrahedron punctured at its vertices, where it takes the value $4/\sqrt{3}$.
\end{cor}
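Explicitly, if $Y$ denotes the regular hexagonal torus, which is the elliptic double cover of the punctured tetrahedron $X$ branched over its four vertices, then the doubling relation above combined with \corref{cor:torus} gives
\[
\ELsys(X) = 2\,\ELsys(Y) = 2 \cdot \frac{2}{\sqrt{3}} = \frac{4}{\sqrt{3}}.
\]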
For surfaces of genus two, it is still true that every homotopy class of simple closed curve is preserved by the hyperelliptic involution \cite{HaasSusskind}. However, the situation is a bit more complicated as there are two types of curves. The extremal length of an essential simple closed curve on a surface of genus two is equal to either twice or half the extremal length of some curve on the sphere punctured at the images of the Weierstrass points, depending on the type of curve.
\begin{prop} \label{prop:genus2}
Let $X$ be a closed Riemann surface of genus two, let $f:X \to \widehat{\C}$ be a holomorphic map of degree two, let $Q$ be the set of critical values of $f$, and let $\alpha$ be an essential simple closed curve in $X$. Then $\alpha$ is separating if and only if it is homotopic to $f^{-1}(\beta)$ for some $(3,3)$-curve $\beta$ on $\widehat{\C} \setminus Q$, in which case $\EL(\alpha,X) = 2\EL(\beta,\widehat{\C} \setminus Q)$. Similarly, $\alpha$ is non-separating if and only if it is homotopic to either component of $f^{-1}(\beta)$ for some $(2,4)$-curve $\beta$ on $\widehat{\C} \setminus Q$, in which case $\EL(\alpha,X) = \EL(\beta,\widehat{\C} \setminus Q)/2$.
\end{prop}
\begin{proof}
Let $J : X \to X$ be the hyperelliptic involution and let $\alpha^*$ be the geodesic representative of $\alpha$ with respect to the hyperbolic metric on $X$.
If $\alpha$ is separating, then each component of $X \setminus \alpha^*$ is a one-holed torus preserved by $J$, because $J$ preserves $\alpha^*$ together with its orientation. If $C$ is one such component, then $f(C)$ is a disk and the Riemann-Hurwitz formula tells us that
\[
-1 = \chi(C) = 2 - |f^{-1}(Q) \cap C|
\]
so that $C$ contains $3$ critical points of $f$ and hence $f(C)$ contains $3$ critical values. This shows that $f(\alpha^*)$ is a $(3,3)$-curve. More precisely, $f(\alpha^*)=\beta^2$ for some $(3,3)$-curve $\beta$ since $\alpha^*$ covers its image by degree two. As $\alpha$ is homotopic to $\alpha^*$, we have
\[
\EL(\alpha,X)=\EL(\alpha^*,X)=\EL(f^{-1}(\beta),X)= 2\EL(\beta, \widehat{\C} \setminus Q)
\]
according to \lemref{lem:branch}.
If $\alpha$ is non-separating, then the isometry $J$ sends $\alpha^*$ to itself in an orientation-reversing manner, so that $\alpha^*$ passes through two Weierstrass points. Let $\varepsilon>0$ be small enough so that the $\varepsilon$-neighborhood of $\alpha^*$ in $X$ is an annulus $A$ that contains only these two Weierstrass points. Then $J$ maps $A$ to itself and exchanges its two boundary components. Let $\beta$ be the image of either boundary component by $f$. Since $f(A)=A / J$ is a disk containing two critical values, $\beta$ is a $(2,4)$-curve on $\widehat{\C}\setminus Q$. Furthermore, $f^{-1}(\beta)=\partial A$ is a union of two curves homotopic to $\alpha$. By \lemref{lem:branch}, we have
\[
2^2 \EL(\alpha, X)=\EL(2 \alpha, X) = \EL(f^{-1}(\beta), X) = 2 \EL(\beta, \widehat{\C} \setminus Q).
\]
The converse statements follow from \lemref{lem:curves}, which tells us that the preimage of a $(3,3)$-curve is connected and separating, while the preimage of a $(2,4)$-curve has two homotopic non-separating components.
\end{proof}
It is perhaps more intuitive to think in terms of embedded cylinders. The inverse image of a $(3,3)$-cylinder in $\widehat{\C} \setminus Q$ has twice the circumference and the same height, while the inverse image of a $(2,4)$-cylinder $C$ consists of two parallel copies of $C$, so the circumference stays the same but the total height is multiplied by two.
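In these terms, if the extremal cylinder for a curve $\beta$ on $\widehat{\C} \setminus Q$ has circumference $c$ and height $h$, so that $\EL(\beta, \widehat{\C} \setminus Q) = c^2/(ch) = c/h$ as in \exref{ex:simple}, then \propref{prop:genus2} reads
\[
\EL(\alpha, X) = \frac{2c}{h} = 2\,\EL(\beta, \widehat{\C} \setminus Q)
\quad \text{and} \quad
\EL(\alpha, X) = \frac{c}{2h} = \frac{\EL(\beta, \widehat{\C} \setminus Q)}{2}
\]
for $(3,3)$-curves and $(2,4)$-curves respectively.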
\section{From the octahedron to pillowcases} \label{sec:explicit}
The Bolza surface ${\mathcal B}$ can be defined as the one-point compactification of the algebraic curve
\[
\left\{(x,y)\in \C^2 : y^2 = x(x^4-1)\right\}.
\]
In these coordinates, the hyperelliptic involution takes the form $(x,y)\mapsto (x,-y)$ and the corresponding quotient map is realized by the projection $(x,y) \mapsto x$, which has critical values $\{0, \pm1, \pm i, \infty\}$. By \propref{prop:genus2}, calculating extremal lengths on ${\mathcal B}$ is equivalent to calculating extremal lengths on ${\mathcal O} := \widehat{\C} \setminus \{0, \pm1, \pm i, \infty\}$, where $\widehat{\C}$ is the Riemann sphere. This surface is conformally equivalent to the unit sphere $S^2$ in $\R^3$ punctured where the coordinate axes intersect it. Under the stereographic projection, the three great circles obtained by intersecting a coordinate plane in $\R^3$ with $S^2$ map to the two coordinate axes and the unit circle in $\C$. We will refer to the vertices, edges, and faces of this cell division below.
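The critical values listed above can be read off from the defining equation: the projection $(x,y) \mapsto x$ branches exactly over the zeros of
\[
x(x^4-1) = x(x-1)(x+1)(x-i)(x+i)
\]
and, since this polynomial has odd degree, over $\infty$ as well. The Riemann--Hurwitz formula then gives $\chi({\mathcal B}) = 2\chi(\widehat{\C}) - 6 = -2$, consistent with ${\mathcal B}$ having genus two.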
For certain simple closed curves in ${\mathcal O}$, we are able to explicitly compute their extremal length. We do this by finding branched covers from ${\mathcal O}$ to four-times-punctured spheres, where extremal length is calculated using elliptic integrals. We will then prove lower bounds for the extremal length of other curves in the next section by using the Euclidean metric on the regular octahedron.
\subsection{The curves}
We distinguish four kinds of curves in ${\mathcal O}$:
\begin{itemize}
\item A \emph{baseball curve} is a simple closed curve that separates a pair of consecutive edges (adjacent edges that do not belong to a common face) from another such pair. There are six baseball curves.
\item An \emph{edge curve} is a simple closed curve that separates an edge from the five edges that are not adjacent to it. There are twelve edge curves, one for each edge.
\item An \emph{altitude curve} is a simple closed curve that surrounds a pair of altitudes sharing a common foot. There are twelve altitude curves, one dual to each edge.
\item A \emph{face curve} is a simple closed curve that separates two opposite faces. There are four face curves, one for each pair of opposite faces.
\end{itemize}
An example of each is depicted in \figref{fig:curves}. The baseball curves are so named because of their resemblance to the stitching pattern of a baseball. From this point onward, we will conflate closed curves with their homotopy classes.
\begin{figure}
\caption{A baseball curve}
\label{fig:baseball}
\caption{An edge curve}
\label{fig:edge}
\caption{An altitude curve}
\label{fig:altitude}
\caption{A face curve}
\label{fig:face}
\caption{Some curves on the punctured octahedron}
\label{fig:curves}
\end{figure}
For any two curves of one kind, there is a conformal automorphism of ${\mathcal O}$ sending one to the other. Thus, all curves of a given kind have the same extremal length. In each case, there is also a non-trivial conformal automorphism $h$ that preserves the curve. If we quotient ${\mathcal O}$ by the group generated by $h$, we obtain a holomorphic branched cover $f$ onto the Riemann sphere. By construction, $f$ sends the curve we started with to a power of a simple closed curve in $\widehat{\C}$ punctured at the critical values of $f$ and the images of the vertices of ${\mathcal O}$.
We find the holomorphic map $f$ for each kind of curve in the next subsections, and use this to compute their extremal length.
\subsection{The baseball curves}
We begin with the baseball curves, as this case is the simplest. Strictly speaking, we will not need this calculation to determine the extremal length systole of ${\mathcal O}$ or ${\mathcal B}$. We only use it to illustrate the method outlined above.
\begin{prop} \label{prop:baseball}
The extremal length of any baseball curve in ${\mathcal O}$ is equal to $4$.
\end{prop}
\begin{proof}
The baseball curve $\alpha$ depicted in \figref{fig:baseball} is invariant under the rotation $z \mapsto -z$. To quotient by this involution, we apply the squaring map $f(z)=z^2$. This defines a covering map $f: {\mathcal O} \to \widehat{\C} \setminus \{-1,0,1,\infty\}$ of degree two that sends $\alpha$ to $\beta^2$, where $\beta$ is a simple closed curve separating $[0,1]$ from $[-\infty,-1]$.
It is easy to see that $\widehat{\C} \setminus \{-1,0,1,\infty\}$ is conformally equivalent to a square pillowcase, that is, to the double of a Euclidean square punctured at the vertices. Indeed, the closed upper half-plane with vertices at $\{-1,0,1,\infty\}$ is conformally equivalent to a square because it has four-fold symmetry with respect to the point $i$ (the conformal map is given by the Schwarz--Christoffel formula). Doubling across the boundary gives the desired result.
Let $\rho$ be the Euclidean metric on $W=\widehat{\C} \setminus \{-1,0,1,\infty\}$ coming from the square pillowcase of side length $1$. Then $\rho$ is extremal for $\beta$. Indeed, the closed geodesics that go across the squares parallel to the sides have minimal length in their homotopy class because the metric is locally $\CAT(0)$ (this can also be shown using \propref{prop:tight}). The double square is clearly swept out evenly by these closed geodesics homotopic to $\beta$ (this is just Fubini integration on each square). By Beurling's criterion, $\rho$ is extremal, so that
\[
\EL(\beta,W) = \frac{\ell_\rho(\beta)^2}{\area_\rho(W)} = \frac{2^2}{2} = 2
\]
and hence $\EL(\alpha,{\mathcal O}) = \EL(f^{-1}(\beta),{\mathcal O}) = 2 \EL(\beta,W) = 4$ by \lemref{lem:branch}.
\end{proof}
Here we got lucky because $\widehat{\C} \setminus \{-1,0,1,\infty\}$ is particularly symmetric. For the other kinds of curves, we will follow a similar approach of finding branched covers onto four-times-punctured spheres, but these will be rectangular rather than square. Their flat metric can be calculated using elliptic integrals, which we briefly discuss now.
\subsection{Elliptic integrals} \label{subsec:elliptic}
For $k \in (0,1)$, the \emph{complete elliptic integral of the first kind} is defined as
\[
K(k) = \int_0^1 \frac{dt}{\sqrt{(1-t^2)(1-k^2 t^2)}}.
\]
The variable $k$ is called the \emph{modulus}. The \emph{complementary modulus} is $k' = \sqrt{1-k^2}$ and the \emph{complementary integral} is $K'(k):=K(k')$. This terminology can be explained by \eqnref{eq:complement}, taken from \cite[p.501]{WW}, in the proof below.
\begin{lem} \label{lem:elliptic}
For any $k \in (0,1)$, the extremal length of the simple closed curve separating the interval $(-1,1)$ from $\pm 1/k$ in $\widehat{\C} \setminus \{ \pm 1, \pm 1/k \}$ is equal to $4K(k)/K'(k)$.
\end{lem}
\begin{proof}
The Schwarz-Christoffel transformation
\[
\zeta \mapsto \int_{\zeta_0}^\zeta \frac{dz}{\sqrt{(1-z^2)(1-k^2z^2)}}
\]
sends the closed upper half-plane to a rectangle $R(k)$ with sides parallel to the coordinate axes, maps $\{\pm 1, \pm 1/k\}$ to the vertices, and is conformal in the interior \cite[p.238--240]{AhlforsComplex}.
The width of $R(k)$ is equal to
\[
W(k) = \int_{-1}^1 \frac{dt}{\sqrt{(1-t^2)(1-k^2 t^2)}} = 2\int_{0}^1 \frac{dt}{\sqrt{(1-t^2)(1-k^2 t^2)}} = 2 K(k)
\]
and its height is equal to
\[
H(k) = \int_{1}^{1/k} \frac{dt}{\sqrt{(t^2-1)(1-k^2 t^2)}}.
\]
The change of variable $t = \sqrt{1-(k')^2s^2}/k$ or equivalently $s= \sqrt{1-k^2t^2}/k'$ shows that
\begin{equation} \label{eq:complement}
\int_{1}^{1/k} \frac{dt}{\sqrt{(t^2-1)(1-k^2 t^2)}} = \int_0^1 \frac{ds}{\sqrt{(1-s^2)(1-(k')^2s^2)}} = K(k')=K'(k).
\end{equation}
Let $P(k)$ be the pillowcase obtained by doubling $R(k)$ across its boundary and puncturing at the vertices. Then $P(k)$ is foliated by closed horizontal geodesics of length $2W(k)=4K(k)$ and its height is equal to $H(k)=K'(k)$. By Beurling's criterion (or \exref{ex:simple}), the flat metric on $P(k)$ is extremal for the homotopy class of these curves. The extremal length is therefore equal to $2W(k)/H(k) = 4 K(k)/K'(k)$.
\end{proof}
Note that the elliptic double cover of $\widehat{\C} \setminus \{ \pm 1, \pm 1/k \}$ is a torus with periods $4K(k)$ and $i 2K'(k)$. For this reason, the integrals $K(k)$ and $K'(k)$ are often called quarter- or half-periods.
Elliptic integrals satisfy several identities that can be used to compute them efficiently. We will require the \emph{upward Landen transformation}
\begin{equation} \label{eq:upward}
K(k) = \frac{1}{1+k} \, K \left( \frac{2\sqrt{k}}{1+k}\right)
\end{equation}
and the \emph{downward Landen transformation}
\begin{equation} \label{eq:downward}
K'(k) = \frac{2}{1+k} \, K \left( \frac{1-k}{1+k}\right),
\end{equation}
which are valid for every $k \in (0,1)$ \cite[Theorem 1.2]{AGM}. If we define $k^* := 2 \sqrt{k} / (1+k)$, then it is elementary to check that $(k^*)' = (1-k)/(1+k)$. Upon dividing the two Landen transformations, we thus obtain the \emph{multiplication rule}
\begin{equation} \label{eq:multiplication}
\frac{K(k)}{K'(k)} = \frac12 \, \frac{K(k^*)}{K'(k^*)},
\end{equation}
which is actually what we are going to use.
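For readers who wish to experiment numerically, here is a minimal Python sketch (an illustration only, relying on nothing beyond the standard library) that implements $K$ through the arithmetic-geometric mean $M$ via the classical identity $K(k)=\pi/(2M(1,k'))$ and checks the Landen transformations \eqref{eq:upward} and \eqref{eq:downward} as well as the multiplication rule \eqref{eq:multiplication} at a sample modulus.
\begin{verbatim}
from math import sqrt, pi

def agm(a, b):
    # Arithmetic-geometric mean M(a, b) of two positive numbers.
    while abs(a - b) > 1e-15 * a:
        a, b = (a + b) / 2, sqrt(a * b)
    return a

def K(k):
    # Complete elliptic integral of the first kind, K(k) = pi / (2 M(1, k')).
    return pi / (2 * agm(1.0, sqrt(1 - k * k)))

def Kp(k):
    # Complementary integral K'(k) = K(k').
    return K(sqrt(1 - k * k))

k = 0.3
ks = 2 * sqrt(k) / (1 + k)               # k^*
print(K(k), K(ks) / (1 + k))             # upward Landen transformation
print(Kp(k), 2 * Kp(ks) / (1 + k))       # downward Landen transformation
print(K(k) / Kp(k), K(ks) / Kp(ks) / 2)  # multiplication rule
\end{verbatim}
Each printed pair should agree to roughly machine precision.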
\subsection{The edge curves} \label{subsec:edge}
We are now able to compute the extremal length of the edge curves.
\begin{prop} \label{prop:roottwo}
The extremal length of any edge curve in ${\mathcal O}$ is equal to $2\sqrt{2}$.
\end{prop}
\begin{proof}
We first apply a ``rotation'' of angle $-\pi/4$ around the points $\pm i$ to better display the symmetries of the edge curve in \figref{fig:edge}. This is done with the M\"obius transformation
\[
M(z) = \frac{\cos(\pi/8)z-\sin(\pi/8)}{\sin(\pi/8)z +\cos(\pi/8)}= \frac{\sqrt{2+\sqrt{2}} z - \sqrt{2-\sqrt{2}}}{\sqrt{2-\sqrt{2}} z + \sqrt{2+\sqrt{2}}},
\]
which fixes $\pm i$ and sends $-1$, $0$, $1$, and $\infty$ to
\[
-(\sqrt{2}+1), \quad - (\sqrt{2}-1), \quad \sqrt{2}-1, \quad \text{and} \quad \sqrt{2}+1
\]
respectively. The transformation $M$ also sends the edge curve surrounding $[0,1]$ to a curve $\alpha$ surrounding the interval $[- (\sqrt{2}-1), \sqrt{2}-1]$ in $Z:=\widehat{\C} \setminus\{\pm i,\pm(\sqrt{2}-1), \pm (\sqrt{2}+1) \}$.
Now that everything is symmetric about the origin, we apply the squaring map $f(z) = z^2$. This sends the punctures to $-1$, $3-2\sqrt{2}$ and $3+2\sqrt{2}$, and has critical values at $0$ and $\infty$. Moreover, it maps $\alpha$ to $\beta^2$ where $\beta$ is a curve surrounding the interval $[0,3-2\sqrt{2}]$.
We will compute the extremal length of $\beta$ in $W:= \widehat{\C} \setminus \{ -1 , 0, 3-2\sqrt{2}, 3+2\sqrt{2} \}$. This turns out to be the same as the extremal length of $\beta$ in $W \setminus \{\infty\}$. Indeed, the quadratic differential $q$ realizing the extremal length of $\beta$ in $W$ has two singular horizontal trajectories: the interval $(0,3-2\sqrt{2})$ and $(-\infty,-1)\cup \{\infty\} \cup(3+2\sqrt{2},+\infty)$. So the regular horizontal trajectories of $q$ are homotopic to $\beta$ whether we puncture at $\infty$ or not.
To express the extremal length of $\beta$ as a ratio of elliptic integrals, we first map the punctures $\{-1,0,3-2\sqrt{2},3+2\sqrt{2}\}$ to $\{\pm 1, \pm{1/k}\}$ for some $k \in (0,1)$ via a M\"obius transformation $T$. We begin by applying $z \mapsto 1/z$ to get the points $\{ -1,3-2\sqrt{2},3+2\sqrt{2} , \infty \} $. We then translate by $1$ and scale by $1/(4-2\sqrt{2})$ to end up with
\[
\left\{ 0,1,\frac{4+2\sqrt{2}}{4-2\sqrt{2}}=(\sqrt{2}+1)^2, \infty \right\}.
\]
After these transformations, the curve $\beta$ separates $[0,1]$ from the other two punctures.
The M\"obius transformation $g$ sending $-1$, $1$, and $-1/k$ to $0$, $1$ and $\infty$ respectively is given by
\[
g(z) = \frac{k+1}{2}\left(\frac{z+1}{kz+1}\right).
\]
For it to send $1/k$ to $(\sqrt{2}+1)^2$, we must have
\[
(\sqrt{2}+1)^2 = g(1/k) = \frac{k+1}{2}\left(\frac{\frac{1}{k}+1}{2}\right) = \left(\frac{1+k}{2\sqrt{k}}\right)^2 = \frac{1}{(k^*)^2},
\]
which is equivalent to
\[
k^*= \frac{1}{\sqrt{2}+1} = \sqrt{2}-1.
\]
Also note that
\[
(k^*)' = \sqrt{ 1 - (k^*)^2 } = \sqrt{1 - (\sqrt{2}-1)^2} = \sqrt{2\sqrt{2}-2}.
\]
By \lemref{lem:elliptic}, the extremal length of $\beta$ is equal to $4K(k)/K'(k)$, and the multiplication rule \eqref{eq:multiplication} equates this with $2K(k^*)/K'(k^*)$.
One computes that
\[
(k^*)^* = \frac{2\sqrt{k^*}}{1+k^*} = \sqrt{2\sqrt{2}-2} = (k^*)' \quad \text{and hence} \quad ((k^*)^*)' = k^*.
\]
The multiplication rule applied to $k^*$ then yields
\[
\frac{2K(k^*)}{K'(k^*)}=\frac{K((k^*)^*)}{K'((k^*)^*)}=\frac{K'(k^*)}{K(k^*)},
\]
so that $K'(k^*)/K(k^*) = \sqrt{2}$ and $\EL(\beta,W) = \sqrt{2}$.
By \lemref{lem:branch}, the extremal length of the edge curves is then
\[
\EL(\alpha,Z) = \EL(f^{-1}(\beta),Z) = 2 \EL(\beta,W \setminus \{ \infty \}) = 2 \EL(\beta,W) = 2 \sqrt{2}. \qedhere
\]
\end{proof}
\begin{remark}
Given a positive integer $n$, the unique $k_n \in (0,1)$ such that $K'(k_n) / K(k_n) = \sqrt{n}$ is called a \emph{singular modulus} \cite[p.139]{AGM}. In the above proof, we showed the well-known fact that $k_2 = \sqrt{2} -1$. The square pillowcase from \propref{prop:baseball} corresponds to the first singular modulus $k_1 = 1/\sqrt{2}$.
\end{remark}
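As a quick (non-certified) numerical sanity check, and assuming the Python library \texttt{mpmath} is available, one can confirm both singular moduli; note that \texttt{mpmath.ellipk} takes the parameter $m=k^2$ rather than the modulus $k$.
\begin{verbatim}
from mpmath import mp, ellipk, sqrt

mp.dps = 30
for n, k in [(1, 1 / sqrt(2)), (2, sqrt(2) - 1)]:
    # K'(k_n) / K(k_n) should be close to sqrt(n)
    print(n, ellipk(1 - k**2) / ellipk(k**2))
\end{verbatim}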
We will later need the quadratic differentials realizing the extremal length of the edge curves in order to compute the derivatives of these extremal lengths as we deform ${\mathcal O}$. The quadratic differential associated to the edge curve surrounding $[0,1]$ in ${\mathcal O}$ can be recovered by pulling back the quadratic differential
\[
\frac{dz^2}{(1-z^2)(1-k^2z^2)},
\]
which realizes the extremal length on the quotient pillowcase (see \lemref{lem:elliptic}), under the branched cover $T \circ f\circ M$ from the above proof.
The result is equal to
\begin{equation} \label{eq:qdiff}
q = \frac{(z+1+\sqrt{2})^2}{z(1-z^4)}dz^2
\end{equation}
up to a positive constant. We leave the details of this calculation to the diligent reader.
\begin{figure}
\caption{Horizontal trajectories for the quadratic differential realizing the extremal length of an edge curve}
\label{fig:qd_edge}
\end{figure}
A less computationally intensive approach is to check that $q$ is invariant under the reflection across the real axis as well as the inversion $J$ in the circle of radius $\sqrt{2}$ centered at $-1$. The invariance of $q$ under complex conjugation holds because the coefficients of $q$ are real. Since $J(z)=h(\overline z)$ where $h(z) = (1-z)/(1+z)$, the invariance of $q$ under $J$ follows from its invariance under $h$, which is a calculation:
\begin{align*}
-h^*q &= \frac{(h(z)+1+\sqrt{2})^2}{h(z)(h(z)^4-1)}(h'(z))^2dz^2 \\
&= \frac{(h(z)+1+\sqrt{2})^2}{h(z)(h(z)^4-1)}\left( \frac{-2}{(1+z)^2}\right)^2dz^2 \\
&= \frac{4((1-z)+(1+\sqrt{2})(1+z))^2}{(1+z)(1-z)((1-z)^4-(1+z)^4)}dz^2 \\
&= \frac{8(z+1+\sqrt{2})^2}{(1-z^2)(-8z-8z^3)}dz^2 \\
& = \frac{(z+1+\sqrt{2})^2}{z(z^4- 1)}dz^2 = -q.
\end{align*}
From these symmetries and by checking the sign of $q$ at certain tangent vectors, it follows that the union of the critical horizontal trajectories of $q$ is equal to
\[
(-\infty,-1] \cup \left\{ -1- \sqrt{2}e^{i \theta} : \theta \in [-3\pi/4, 3\pi/4] \right \} \cup [0,1].
\]
The complement of this locus is homeomorphic to a cylinder, which forces the regular horizontal trajectories of $q$ to be closed and homotopic to each other. By the result of Jenkins cited in \exref{ex:simple}, $q$ is the extremal quadratic differential for the edge curve around $[0,1]$. A plot of the horizontal trajectories of $q$ is shown in \figref{fig:qd_edge}.
\subsection{The altitude curves}
The extremal length of the altitude curves can be calculated in the same way as for the edge curves. We will not use this result; we only include it because it is one of the few symmetric examples where we can compute the extremal length.
\begin{prop} \label{prop:altitude}
The extremal length of any altitude curve in ${\mathcal O}$ is equal to \[4K(u) / K(u') \in \left[5.8768721265012 \pm 1.18\cdot 10^{-14}\right],\]
where $u=\frac{\sqrt{2+\sqrt{2}}}{2}$ and $u'=\frac{\sqrt{2-\sqrt{2}}}{2}$.
\end{prop}
\begin{proof}
We use the same model $Z=\widehat{\C} \setminus\{\pm i,\pm(\sqrt{2}-1), \pm (\sqrt{2}+1)\}$ as for the edge curves, and take the altitude curve $\alpha$ to surround the vertical line segment between $i$ and $-i$.
Upon squaring, the punctures map to $-1$, $3-2\sqrt{2}$ and $3+2\sqrt{2}$. However, we also need to puncture at the critical values $0$ and $\infty$. The curve $\alpha$ maps to $\beta^2$ where $\beta$ is a simple closed curve surrounding $[-1,0]$. The critical trajectories for the quadratic differential associated to $\beta$ on $W= \C \setminus \{ -1,0,3-2\sqrt{2} \}$ are equal to $[-1,0]$ and $[3-2\sqrt{2},+\infty]$. Since $3+2\sqrt{2}$ lies along one of these trajectories, this puncture does not affect extremal length.
We translate by $1$ to map the punctures of $W$ to $0$, $1$, $4-2\sqrt{2}$ and $\infty$. The M\"obius transformation sending $-1$ to $0$, $1$ to $1$ and $-1/k$ to $\infty$ is equal to
\[
g(z) = \frac{k+1}{2}\left(\frac{z+1}{kz+1}\right).
\]
For it to send $1/k$ to $4-2\sqrt{2}$ we need to have
\[
4-2\sqrt{2} = g(1/k) = \left(\frac{1+k}{2\sqrt{k}}\right)^2 = \frac{1}{(k^*)^2},
\]
which can be rewritten as $k^*= \sqrt{\frac{1}{4-2\sqrt{2}}} = \frac{\sqrt{2+\sqrt{2}}}{2}$.
The complementary modulus is
\[
(k^*)' = \sqrt{1-(k^*)^2} = \frac{\sqrt{2-\sqrt{2}}}{2}.
\]
By \lemref{lem:branch}, the above remark about the superfluous puncture, \lemref{lem:elliptic}, and the multiplication rule \eqref{eq:multiplication}, we obtain
\[
\EL(\alpha,Z) = \EL(f^{-1}(\beta),Z) = 2 \EL(\beta, W \setminus \{ 3+2\sqrt{2} \}) = 2\EL(\beta,W) = \frac{8K(k)}{K'(k)} = \frac{4K(k^*)}{K'(k^*)}.
\]
To compute elliptic integrals numerically, computer algebra systems use the arithmetic-geometric mean $M$ via the formula $K(a) = \pi / (2M(1,a'))$ \cite[Theorem 1.1]{AGM}. The ratio of two complementary integrals then becomes $K(a)/K'(a) = M(1,a)/M(1,a')$. The C library Arb for arbitrary-precision interval arithmetic developed by Fredrik Johansson \cite{Johansson} contains an implementation of $M$, which provides certified error bounds for its calculation. The Arb package is available in SageMath \cite{sagemath}, where the command
\begin{center}
\texttt{4*CBF(sqrt(2+sqrt(2))/2).agm1()/CBF(sqrt(2-sqrt(2))/2).agm1()}
\end{center}
certifies that $4K(k^*)/K'(k^*)$ belongs to the interval $\left[5.8768721265012 \pm 1.18\cdot 10^{-14}\right]$.
\end{proof}
\subsection{The face curves}
We finish with the face curves, which have the smallest extremal length of the lot.
\begin{prop} \label{prop:zigzag}
The extremal length of any face curve in ${\mathcal O}$ is equal to \[6K(v)/K(v')\in \left[2.79957467136936 \pm 8.4\cdot10^{-15}\right],\] where $v=1/\sqrt{27+15\sqrt{3}}$ and $v' = 1/\sqrt{27-15\sqrt{3}}$.
\end{prop}
\begin{proof}
We start by applying a M\"obius transformation $M$ to display the three-fold symmetry of the face curves. We do this by sending $0$, $1$ and $i$ to the third roots of unity. Writing the explicit formula for $M$ is a bit messy, so we do some trigonometry instead in order to determine where $M$ sends the other vertices of ${\mathcal O}$. The key observation is that $M$ sends the coordinate axes and the unit circle to three congruent circles that pairwise intersect orthogonally at one of the three third roots of unity.
\begin{figure}
\caption{Three congruent circles intersecting at right angles}
\label{fig:venn}
\end{figure}
Among the centers of these three circles, let $c$ be the one that lies on the real axis. By looking at the angles in \figref{fig:venn}, we see that the triangle with vertices $1$, $e^{2\pi i /3}$ and $c$ is isosceles, so that $c = 1 + \sqrt{3}$. Now consider the triangle $\Delta$ with vertices at $0$, $c$, and the intersection point $p=r e^{ i \pi/ 3 }$ between two of the circles. The interior angles of $\Delta$ at $0$, $p$, and $c$ are equal to $\pi/3$, $\pi/4$ and $5\pi / 12$ respectively. By the law of sines, we have
\begin{align*}
r & = \frac{(1+\sqrt{3})}{\sin(\frac{\pi}{4})} \sin\left(\frac{5\pi}{12}\right)\\
&= \frac{(1+\sqrt{3})}{\sin(\frac{\pi}{4})} \left( \cos\left(\frac{\pi}{6}\right) \sin\left(\frac{\pi}{4}\right)+\sin\left(\frac{\pi}{6}\right) \cos\left(\frac{\pi}{4}\right)\right) \\
& = \frac{(1+\sqrt{3})^2}{2} = 2 + \sqrt{3}.
\end{align*}
It follows that $Z = M({\mathcal O}) = \widehat{\C} \setminus \{ 1,e^{2\pi i / 3},e^{-2\pi i / 3}, -(2 + \sqrt{3}), (2+\sqrt{3})e^{\pi i/ 3}, (2+\sqrt{3})e^{-\pi i/ 3} \}$. By construction, $M$ sends the face curve depicted in \figref{fig:face} to the homotopy class of the circle $\alpha$ of radius $2$ centered at the origin.
The cubing map $f(z)=z^3$ quotients out the three-fold symmetry. It maps the punctures to $1$ and $-(2+\sqrt{3})^3$ and has critical values at $0$ and $\infty$. We thus let $W = \C \setminus \{-(2+\sqrt{3})^3, 0, 1 \}$. The curve $f(\alpha)$ is equal to $\beta^3$ where $\beta$ separates $[0,1]$ from the other two punctures.
In order to express $\EL(\beta,W)$ as a ratio of elliptic integrals, we apply another M\"obius transformation to send $0$, $1$ and $\infty$ to $-1$, $1$ and $1/k$ respectively, for some $k\in (0,1)$. The inverse of the required transformation is
\[
g(z)= \frac{1-k}{2}\left(\frac{1+z}{1-kz}\right).
\]
Since we want $g$ to map $-1/k$ to $-(2+\sqrt{3})^3$, we obtain the equation
\[
-(2+\sqrt{3})^3 = g(-1/k) = \left(\frac{1-k}{2}\right)\left(\frac{1-\frac{1}{k}}{2}\right)= -\left( \frac{1-k}{2\sqrt{k}} \right)^2,
\]
or
\[
\frac{1-k}{2\sqrt{k}} = (2+\sqrt{3})^{3/2}.
\]
Observe that
\[
\frac{1}{((k^*)')^2} = \left( \frac{1+k}{1-k} \right)^2 = 1 + \left( \frac{2 \sqrt{k}}{1-k} \right)^2 = 1 + \frac{1}{(2+\sqrt{3})^3}=1 + (2-\sqrt{3})^3=27-15\sqrt{3}
\]
so that
\[
(k^*)^2 = 1 - ((k^*)')^2 = 1 - \frac{1}{1+(2-\sqrt{3})^3} = \frac{(2-\sqrt{3})^3}{1+(2-\sqrt{3})^3} = \frac{1}{1+(2+\sqrt{3})^3}=\frac{1}{27+15\sqrt{3}}.
\]
By \lemref{lem:branch}, \lemref{lem:elliptic}, and the multiplication rule \eqref{eq:multiplication}, we obtain
\[
\EL(\alpha, Z)= \EL(f^{-1}(\beta),Z)= 3 \EL(\beta, W) = \frac{12K(k)}{K'(k)} = \frac{6K(k^*)}{K'(k^*)}.
\]
The command
\begin{center}
\texttt{6*CBF(1/sqrt(27+15*sqrt(3))).agm1()/CBF(1/sqrt(27-15*sqrt(3))).agm1()}
\end{center}
in SageMath certifies that $6K(k^*)/K'(k^*)$ belongs to the interval stated.
\end{proof}
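The exact arithmetic from the proof is easy to double-check in a computer algebra system. The following SymPy sketch (an illustration, assuming SymPy is installed) verifies that $v'=1/\sqrt{27-15\sqrt{3}}$ is indeed the complementary modulus of $v=1/\sqrt{27+15\sqrt{3}}$, so that the second \texttt{agm1} call above really computes $M(1,v')$.
\begin{verbatim}
from sympy import sqrt, simplify

v2  = 1 / (27 + 15 * sqrt(3))    # v^2
vp2 = 1 / (27 - 15 * sqrt(3))    # (v')^2
print(simplify(v2 + vp2))                         # should print 1
print(simplify(v2 - 1 / (1 + (2 + sqrt(3))**3)))  # should print 0
\end{verbatim}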
Although we will not use this, we note that the quadratic differential realizing the extremal length of the face curve $\alpha$ surrounding the cube roots of unity in \[\widehat{\C} \setminus \{ 1,e^{2\pi i / 3},e^{-2\pi i / 3}, -(2 + \sqrt{3}), (2+\sqrt{3})e^{\pi i/ 3}, (2+\sqrt{3})e^{-\pi i/ 3} \}\] is
\[
q=\frac{z}{(1-z^3)(z^3+(2+\sqrt{3})^3)} dz^2.
\]
Indeed, this differential is invariant under rotations by $e^{2\pi i / 3}$ and is positive along \[(-\infty,-2-\sqrt{3}) \cup (0,1).\] This implies that the set of critical horizontal trajectories is a tripod joining the origin to $1$, $e^{2\pi i / 3}$, $e^{-2\pi i / 3}$ together with a ray from each of the other three punctures out to infinity. Since the complement of the set of critical horizontal trajectories is homeomorphic to an annulus, all other horizontal trajectories are closed and homotopic to each other.
By a similar argument, the regular vertical trajectories of $q$ are all homotopic to a simple closed curve $\gamma$ intersecting the face curve $\alpha$ six times. The equality case in Minsky's inequality \cite[Lemma 5.1]{MinskyIneq} then yields \[\EL(\gamma)= 6^2 / \EL(\alpha) = 6K(v')/K(v) \in [12.8590961934912 \pm 6.81\cdot 10^{-14}].\]
Some horizontal and vertical trajectories of $q$ are shown in \figref{fig:qd_face}.
\begin{figure}
\caption{Horizontal (blue) and vertical (red) trajectories for the quadratic differential realizing the extremal length of a face curve}
\label{fig:qd_face}
\end{figure}
\section{Geodesics on the regular octahedron} \label{sec:estimates}
In this section, we prove lower bounds on the extremal length of all simple closed curves on the punctured octahedron ${\mathcal O}$ other than those for which we could compute it explicitly. We do this by using the conformal metric $\rho$ coming from the surface of the regular octahedron, scaled so that the edges have length $1$ and hence the total area is equal to $2\sqrt{3}$. This metric, which we call the \emph{flat metric}, is in the conformal class of ${\mathcal O}$. Indeed, any of the curvilinear faces of ${\mathcal O}$ can be mapped conformally onto an equilateral face of the regular octahedron via the Riemann mapping theorem, and the mapping can be extended to all of ${\mathcal O}$ by repeated Schwarz reflection across the sides.
The first step is to obtain lower bounds on the infimal length $\ell_\rho(c)$ for various homotopy classes of simple closed curves in ${\mathcal O}$.
\begin{lem} \label{lem:geodesic}
Let $c$ be a homotopy class of simple closed curves in ${\mathcal O}$. If $\ell_\rho(c)=2$, then $c$ is an edge curve. If $\ell_\rho(c)=3$, then $c$ is a face curve. Otherwise, $\ell_\rho(c)\geq 2\sqrt{3}$.
\end{lem}
\begin{proof}
Since $\rho$ is locally Euclidean and its completion $\overline{{\mathcal O}}$ is compact, \propref{prop:tight} tells us that there is a closed curve $\gamma \subset \overline{{\mathcal O}}$ which is a limit of a sequence of simple closed curves $\gamma_n$ in $c$ with $\lim_{n\to\infty} \length_\rho(\gamma_n)=\ell_\rho(c)$, passes through at least one vertex, and is geodesic away from the vertices. The curve $\gamma$ is thus a concatenation of saddle connections on the regular octahedron.
Suppose first that $\gamma$ passes through only one vertex of $\overline{{\mathcal O}}$. Since $\gamma$ is geodesic away from that vertex, if it self-intersects, then the intersection must be transverse. It follows that $\gamma_n$ is not simple if $n$ is large enough, contrary to our assumption. We deduce that $\gamma$ is simple. However, there does not exist a simple geodesic loop from a vertex to itself on the regular octahedron \cite[Theorem~3.1]{Fuchs}. Therefore, $\gamma$ passes through at least two vertices.
Note that the distance between adjacent vertices is equal to $1$ and the distance between opposite vertices is equal to $\sqrt{3}$. In particular, if $\gamma$ passes through two opposite vertices, then its length is at least $2\sqrt{3}$. We can thus assume that $\gamma$ passes through two or three pairwise adjacent vertices and no other vertices.
If $\gamma$ passes through two adjacent vertices, then its length is at least $2$ with equality only if it traces an edge twice. In that case, $c$ is the homotopy class of the associated edge curve because a small neighborhood of the edge in ${\mathcal O}$ is a twice punctured disk, and the only essential simple closed curve in such a surface is the boundary curve.
By inspecting the planar development of the regular octahedron, we see that the second shortest geodesic segment between two adjacent vertices has length $\sqrt{7}$ as depicted in \figref{fig:root7} (this is the shortest vector length in the hexagonal lattice after $0$, $1$, $\sqrt{3}$, and $2$). Thus, if $\gamma$ passes through only two adjacent vertices and has length larger than $2$, then it has length at least $1+\sqrt{7} > 2\sqrt{3}$.
\begin{figure}
\caption{The second shortest saddle connection between adjacent vertices on the octahedron}
\label{fig:root7}
\end{figure}
The final case to consider is if $\gamma$ passes through three adjacent vertices. Then its length is at least $3$, with equality only if $\gamma$ traces the boundary of a triangle. In this case, $c$ has to be a face curve. Indeed, the approximating curve $\gamma_n$ can bypass each vertex by circling either inside or outside the triangle, but if it passes inside, then $\gamma_n$ can be shortened by a definite amount within $c$, contradicting the hypothesis that $\length_\rho(\gamma_n)$ tends to the infimum $\ell_\rho(c)$.
By the above argument, the next shortest closed curve that passes through only three adjacent vertices and is otherwise geodesic has length at least $1+1+\sqrt{7} > 2\sqrt{3}$.
\end{proof}
From this, we easily deduce that the essential simple closed curves in ${\mathcal O}$ with the first and second smallest extremal length are the face curves and the edge curves.
\begin{cor} \label{cor:spectrum}
The first and second smallest extremal lengths of essential simple closed curves in ${\mathcal O}$ are $6K(v)/K'(v)$ and $2\sqrt{2}$,
where $v=1/\sqrt{27+15\sqrt{3}}$, and these are realized by the face curves and the edge curves respectively.
\end{cor}
\begin{proof}
The extremal lengths of the face and edge curves were determined in Propositions \ref{prop:zigzag} and \ref{prop:roottwo}. Note that $6K(v)/K'(v) < 2.799575 < 2.828427 < 2\sqrt{2}$.
Let $c$ be any other homotopy class of essential simple closed curve in ${\mathcal O}$ and let $\rho$ be the flat metric on ${\mathcal O}$. By \lemref{lem:geodesic}, we have $\ell_\rho(c) \geq 2\sqrt{3}$ so that
\[
\EL(c,{\mathcal O})\geq \frac{\ell_\rho(c)^2}{\area_\rho({\mathcal O})}\geq \frac{(2\sqrt{3})^2}{2\sqrt{3}}= 2\sqrt{3} > 2 \sqrt{2}. \qedhere
\]
\end{proof}
An interesting consequence is that \cite[Proposition 3.3]{FanoniParlier} (stating that if two shortest closed geodesics on a hyperbolic surface intersect twice, then one of them must bound a pair of punctures) is false for the extremal length systole. Indeed, any two distinct face curves intersect twice and they are all $(3,3)$-curves. Given two distinct face curves $f_1$ and $f_2$, observe that their smoothing with two components is a union of two edge curves $e_1$ and $e_2$. Despite the fact that
\begin{equation} \label{eq:smoothing}
\ell_\rho(e_1)+\ell_\rho(e_2) =\ell_\rho(e_1 \cup e_2) \leq \ell_\rho(f_1 \cup f_2) = \ell_\rho(f_1)+\ell_\rho(f_2)
\end{equation}
for every conformal metric $\rho$ and hence $\EL(e_1 \cup e_2) \leq \EL(f_1 \cup f_2)$ \cite[Lemma 4.16]{MGT:20}, we do not have $\min_j \EL(e_j)\leq \max_k \EL(f_k)$. In other words, the smoothing argument from the proof of \thmref{thm:ELsyssimple} does not work for pairs of shortest closed curves. By \eqnref{eq:smoothing}, the edge curves are shorter than the face curves for any conformal metric $\rho$ invariant under the automorphisms of ${\mathcal O}$, such as the hyperbolic metric. In the hyperbolic metric on ${\mathcal O}$, the shortest closed geodesics are the edge curves and furthermore ${\mathcal O}$ globally maximizes the hyperbolic systole in its moduli space. This is true more generally for principal congruence covers of the modular surface (see \cite[Theorem 13]{Schmutz:congruence} and \cite[Theorem 7.2]{Adams}).
We then apply \corref{cor:spectrum} to determine the extremal length systole of the Bolza surface.
\begin{cor} \label{cor:roottwo}
The extremal length systole of the Bolza surface ${\mathcal B}$ is equal to $\sqrt{2}$, and the only simple closed curves with this extremal length are lifts of edge curves in ${\mathcal O}$.
\end{cor}
\begin{proof}
By \thmref{thm:ELsyssimple}, the extremal length systole is realized by essential simple closed curves, so we may restrict our attention to these.
Let $f: {\mathcal B} \to \overline{{\mathcal O}}$ be the quotient by the hyperelliptic involution. If $\alpha$ is a non-separating curve in ${\mathcal B}$, then $\alpha$ is homotopic to one component of $f^{-1}(\beta)$ for some $(2,4)$-curve $\beta$ in ${\mathcal O}$ and $\EL(\alpha,{\mathcal B})=\EL(\beta,{\mathcal O})/2$ according to \propref{prop:genus2}. If $\beta$ is an edge curve, then we get $\EL(\alpha,{\mathcal B}) = \sqrt{2}$. Otherwise, since $\beta$ is a $(2,4)$-curve it is not a face curve, so $\EL(\beta,{\mathcal O}) > 2\sqrt{2}$ by \corref{cor:spectrum} and hence $\EL(\alpha,{\mathcal B}) > \sqrt{2}$.
If $\alpha$ is a separating curve in ${\mathcal B}$, then \propref{prop:genus2} tells us that $\alpha$ is homotopic to $f^{-1}(\beta)$ for some $(3,3)$-curve $\beta$ in ${\mathcal O}$ and $\EL(\alpha,{\mathcal B})=2 \EL(\beta,{\mathcal O})$. By \corref{cor:spectrum}, we have $\EL(\beta,{\mathcal O}) \geq 6K(v)/K'(v) > 2.799574$ with $v$ as above, so that \[\EL(\alpha,{\mathcal B}) \geq 12K(v)/K'(v) > 5.599148 > \sqrt{2},\]
as required.
\end{proof}
\section{Derivatives} \label{sec:derivatives}
In this section, we prove that the extremal length systole of genus two surfaces attains a strict local maximum at the Bolza surface ${\mathcal B}$.
\subsection{Generalized systoles}
Let $M$ be a smooth connected manifold, let ${\mathcal I}$ be an arbitrary set and for each $\alpha \in {\mathcal I}$, let $L_\alpha : M \to \R$ be a $C^1$ function. If for every $x \in M$ and $B\in \R$, there is a neighborhood $U$ of $x$ such that the set
\[
\{ \alpha \in {\mathcal I} : L_\alpha(y) \leq B \text{ for some } y \in U \}
\]
is finite, then the infimum
\[
\mu(x)= \inf_{\alpha \in {\mathcal I}} L_\alpha(x)
\]
is called a \emph{generalized systole} \cite{Bav:05}. The technical condition is there to ensure that the resulting function $\mu$ is continuous. As mentioned in the introduction, the extremal length systole fits into this framework.
\begin{lem}
The extremal length systole $\ELsys$, as a function on the Teichm\"uller space ${\mathcal T}(S)$ of a surface $S$ of finite type, is a generalized systole in the sense of Bavard.
\end{lem}
\begin{proof}
First of all, the Teichm\"uller space ${\mathcal T}(S)$ is a connected complex manifold.
If $S$ is the thrice-punctured sphere, then ${\mathcal T}(S)$ is a point and there is nothing to show except that for every $B>0$, there are only finitely many essential closed curves with extremal length at most $B$, which was proved in \lemref{lem:finite}.
Otherwise, the extremal length systole is only achieved by simple closed curves according to \thmref{thm:ELsyssimple}, so we might as well restrict to these when taking the infimum. The extremal length of such a curve is $C^1$ on ${\mathcal T}(S)$ \cite[Proposition 4.2]{GardinerMasur}.
The last thing to do is to improve the pointwise finiteness of \lemref{lem:finite} to a local one. To this end, recall that the logarithm of extremal length is Lipschitz with respect to the Teichm\"uller distance $d$. More precisely, we have
\[
\EL(\alpha, X) \leq e^{2d(X,Y)}\EL(\alpha, Y)
\]
for every $\alpha \in {\mathcal S}(S)$ and every $X,Y \in {\mathcal T}(S)$ (see e.g. \cite{Kerckhoff}). This implies the required local finiteness, as an upper bound for $\EL(\alpha, Y)$ in a ball centered at $X$ implies an upper bound on $\EL(\alpha, X)$, hence restricts $\alpha$ to a finite subset of ${\mathcal S}(S)$.
\end{proof}
This implies that $\ELsys$ is continuous on Teichm\"uller space and therefore on moduli space. Since extremal length and hyperbolic length tend to zero together \cite[Corollary 2]{Maskit}, Mumford's compactness criterion implies that $\ELsys$ attains its maximum.
\subsection{Perfection and eutaxy}
Given $x\in M$, we denote by ${\mathcal I}_x$ the set of $\alpha \in {\mathcal I}$ such that $L_\alpha(x)=\mu(x)$. Bavard's definition of eutaxy and perfection \cite[D\'efinition 1.2]{Bav:05} is easily seen to be equivalent to the following, which we find easier to state.
\begin{defn} A point $x \in M$ is \emph{eutactic} if for every tangent vector $v \in T_x M$, the following implication holds: if $dL_\alpha(v)\geq 0$ for all $\alpha \in {\mathcal I}_x$, then $dL_\alpha(v) = 0$ for all $\alpha \in {\mathcal I}_x$.
\end{defn}
\begin{defn}
A point $x \in M$ is \emph{perfect} if for every $v \in T_x M$, the following implication holds: if $dL_\alpha(v) = 0$ for all $\alpha \in {\mathcal I}_x$, then $v = 0$.
\end{defn}
If $x\in M$ is perfect and eutactic, then for every $v \in T_x M \setminus \{ 0 \}$ there is some $\alpha \in {\mathcal I}_x$ such that $dL_\alpha(v) < 0$, and it follows easily that $\mu$ attains a strict local maximum at $x$ \cite[Proposition 2.1]{Bav:97}. Bavard proved that the converse holds if the functions $L_\alpha$ are convexoidal (i.e., convex up to reparametrization) along the geodesics for a connection on $M$ \cite[Proposition 2.3]{Bav:97}, thereby generalizing a theorem of Voronoi on the systole of flat tori \cite{Voronoi} and its analogue for hyperbolic surfaces \cite{Schmutz:localmax}. Akrout further proved that generalized systoles obtained from convex length functions are topologically Morse, with singularities equal to the eutactic points and index equal to the rank of the linear map $(dL_\alpha)_{\alpha \in {\mathcal I}_x}$ \cite{Akrout}.
We do not know if there exists a connection on Teichm\"uller space with respect to which the extremal length functions are convexoidal; they are not convexoidal along Teichm\"uller geodesics \cite{nonconvex} or horocycles \cite{horo}. However, all we need here is the easy direction of Bavard's result, namely, that perfection and eutaxy are sufficient to have a local maximum.
\subsection{Triangular surfaces}
A Riemann surface $X$ is \emph{triangular} or \emph{quasiplatonic} if any of the following equi\-va\-lent conditions hold \cite[Theorem 4]{Wolfart}:
\begin{itemize}
\item the quotient of $X$ by its group of conformal automorphisms is isomorphic to a sphere with three cone points (as an orbifold);
\item $X \cong \ensuremath{\mathbb{H}} / \Gamma$ where $\Gamma$ is a normal subgroup of a triangle rotation group;
\item $X$ is an isolated fixed point of a finite subgroup of the mapping class group acting on Teichm\"uller space.
\end{itemize}
Bavard showed that if the collection of length functions $\{ L_\alpha : \alpha \in {\mathcal I} \}$ is invariant under a finite group $G$ acting by isometries on $M$, then any isolated fixed point of $G$ is eutactic \cite[Corollaire~1.3]{Bav:05}. Since the set of extremal length functions is invariant under the action of the mapping class group (which acts by isometries on Teichm\"uller space), we conclude that any triangular surface is eutactic for the extremal length systole (c.f. \cite[p.255]{Bav:05}).
\subsection{The Bolza surface}
It is clear that the punctured octahedron ${\mathcal O}$ is quasiplatonic. The same is true for the Bolza surface ${\mathcal B}$ since any conformal automorphism of ${\mathcal O}$ lifts to ${\mathcal B}$ in two different ways (related by the hyperelliptic involution). In fact, ${\mathcal O}$ and ${\mathcal B}$ are \emph{Platonic} in the sense that they admit tilings by regular polygons (triangles in this case) such that their group of conformal and anti-conformal automorphisms acts transitively on the flags of these tilings (triples consisting of a vertex, an edge, and a face, each contained in the next). The surfaces ${\mathcal O}$ and ${\mathcal B}$ are therefore eutactic for the extremal length systole.
To prove that $\ELsys$ attains a strict local maximum at ${\mathcal B}$, all we have left to show is that ${\mathcal B}$ is perfect, which amounts to proving that a certain linear map is injective. We can do this calculation at the level of six-times-punctured spheres. Indeed, the hyperelliptic involution induces a diffeomorphism $\Phi:{\mathcal T}(S_{2,0}) \to {\mathcal T}(S_{0,6})$ from the space of closed surfaces of genus two to the space of six-times-punctured spheres.
Recall from \corref{cor:roottwo} that the curves in ${\mathcal B}$ with the smallest extremal lengths are lifts of edge curves in ${\mathcal O}$ (this is the set ${\mathcal I}_x$ in the notation of generalized systoles). Let $f: {\mathcal B} \to \overline{{\mathcal O}}$ be the quotient by the hyperelliptic involution. For ease of notation, we will write $\EL_\alpha(Z)$ instead of $\EL(\alpha,Z)$. If $\beta \subset {\mathcal O}$ is an edge curve, $\alpha$ is either component of $f^{-1}(\beta)$, and $Z$ is any surface in ${\mathcal T}(S_{2,0})$, then $\EL_\alpha(Z) = \EL_\beta(\Phi(Z))/2$ according to \propref{prop:genus2}. It follows that $d\EL_\alpha(v) = d\EL_\beta(d\Phi(v))/2$ for every tangent vector $v \in T_{\mathcal B} {\mathcal T}(S_{2,0})$. Since $d\Phi$ is a bijection, to prove that ${\mathcal B}$ is perfect, it thus suffices to show that if $d\EL_\beta(w) = 0$ for every edge curve $\beta \subset {\mathcal O}$, then $w=0$.
\subsection{Gardiner's formula}
The tangent space $T_X {\mathcal T}(S)$ to Teichm\"uller space at a surface $X$ is isomorphic to a quotient of the space of essentially bounded Beltrami differentials (or $(-1,1)$-forms) on $X$, while the cotangent space $T^*_X {\mathcal T}(S)$ can be identified with the set of integrable holomorphic quadratic differentials (or $(2,0)$-forms) on $X$. We can define a bilinear pairing $T_X {\mathcal T}(S) \times T^*_X {\mathcal T}(S) \to \C$ between these objects by sending any pair $(\mu, q)$ to the integral of the $(1,1)$-form $\mu q$ over $X$.
Given an essential simple closed curve $\alpha \subset X$, recall that there is a holomorphic quadratic differential $q_\alpha$ all of whose regular horizontal trajectories are homotopic to $\alpha$, and that $q_\alpha$ is unique up to scaling. Gardiner's formula \cite[Theorem 8]{G84:MinimalNormProperty} says that the logarithmic derivative of the extremal length of $\alpha$ in the direction of $\mu$ is
\[
\frac{d\EL_\alpha(\mu)}{\EL_\alpha(X)} = \frac{2}{\|q_\alpha\|} \re \int_X \mu q_\alpha
\]
for every $\mu \in T_X {\mathcal T}(S)$, where $\|q_\alpha\|=\int_X |q_\alpha|$ is the area of the induced conformal metric.
\subsection{Punctured spheres}
The Teichm\"uller space ${\mathcal T}(S_{0,p})$ of a punctured sphere admits local coordinates to $\C^{p-3}$. Indeed, if we map three of the punctures to $0$, $1$ and $\infty$ with a M\"obius transformation, then the location of the remaining $p-3$ punctures determines the surface locally (i.e., up to the action of the mapping class group). From this point of view, the tangent space $T_X {\mathcal T}(S_{0,p})$ is naturally isomorphic to $\C^{p-3}$, whereby we attach a complex number $V(z_j)$ to each puncture $z_j$ of $X$ other than $0$, $1$, and $\infty$.
The two points of view can be reconciled by extending $V$ to a smooth vector field on $\widehat{\C}$ that vanishes at $0$, $1$, and $\infty$. The Beltrami form $\mu = \overline{\partial} V$ then represents the same infinitesimal deformation as the one obtained by flowing the punctures along $V$ \cite[Equation (1.5)]{AhlforsRemarks}. Furthermore, the pairing of this deformation with an integrable holomorphic quadratic differential $q$ on $X$ is given by
\[
\int_X \mu q = - \pi \sum_{j=1}^{p} \operatorname{Res}_{z_j} \left( q \cdot V(z_j)\frac{\partial}{\partial z}\right),
\]
where the sum is taken over all the punctures of $X$ \cite[Lemma 8.2]{FB15:HoloCouchTheorem}. Note that the product of a quadratic differential with a vector field is a $1$-form (locally, $dz^2 \cdot \frac{\partial}{\partial z} = dz$). The residue of a $1$-form $\omega$ at a point $z_j$ is defined in the usual way as $\operatorname{Res}_{z_j}(\omega)=\frac{1}{2\pi i} \oint_\gamma \omega$ where $\gamma$ is a small counterclockwise loop around $z_j$.
Combined with Gardiner's formula, this yields
\begin{equation} \label{eq:residue}
\frac{d\EL_\alpha(\mu)}{\EL_\alpha(X)} = -\frac{2\pi}{\|q_\alpha\|} \re \sum_{j=1}^{p} \operatorname{Res}_{z_j} \left( q_\alpha \cdot V(z_j)\frac{\partial}{\partial z}\right)
\end{equation}
for any essential simple closed curve $\alpha$ in a punctured sphere $X$ and any $V:\widehat{\C}\setminus X \to \C$.
\subsection{The edge curves}
A real basis $B=\{ b_1, \ldots, b_6 \}$ for the tangent space $T_{\mathcal O} {\mathcal T}(S_{0,6})$ is given by the vectors $\frac{\partial}{\partial z}$ and $-i\frac{\partial}{\partial z}$ at each of the three punctures $i$, $-1$, and $-i$. For each edge curve $\beta \in {\mathcal E}$, to compute the derivatives $d\EL_\beta(b_j)$ we first need to write down a formula for the associated quadratic differential $q_\beta$.
Recall from subsection \ref{subsec:edge} that the quadratic differential for the edge curve $\beta_{0,1}$ surrounding the edge $[0,1]$ is
\[
q = \frac{(z+1+\sqrt{2})^2}{z(1-z^4)}dz^2=\frac{(z+1+\sqrt{2})^2}{z(1-z)(1+z)(z-i)(z+i)}dz^2.
\]
The residue of $q$ in the direction of $\frac{\partial}{\partial z}$ at each of $i$, $-1$, and $-i$ is equal to
\[
-\left(\frac{1+\sqrt{2}}{2}\right)(1+i), \quad -\frac{1}{2} , \quad \text{and} \quad -\left(\frac{1+\sqrt{2}}{2}\right)(1-i)
\]
respectively. Note that $\re \left(\operatorname{Res}_{a} \left( q \cdot (-i)\frac{\partial}{\partial z}\right)\right) = \im\left( \operatorname{Res}_{a} \left( q \cdot \frac{\partial}{\partial z}\right) \right)$ at any point $a$. Up to a constant, the logarithmic derivative of $\EL_{\beta_{0,1}}$ with respect to the basis $B$ is thus given by
\[
-\frac{\|q\|}{2\pi}\frac{d\EL_{\beta_{0,1}}}{\EL_{\beta_{0,1}}({\mathcal O})} = \left(\frac{-1-\sqrt{2}}{2}, \frac{-1-\sqrt{2}}{2},-\frac{1}{2}, 0 , \frac{-1-\sqrt{2}}{2}, \frac{1+\sqrt{2}}{2} \right)
\]
according to \eqnref{eq:residue}.
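The residues listed above (and the analogous ones for the other edge curves computed below) can also be reproduced symbolically. Here is a minimal SymPy sketch, assuming SymPy is available, that computes the residues of $q\cdot\frac{\partial}{\partial z}$ at the three free punctures.
\begin{verbatim}
from sympy import symbols, sqrt, I, residue, simplify

z = symbols('z')
q_dz = (z + 1 + sqrt(2))**2 / (z * (1 - z**4))   # the 1-form q . d/dz
for a in (I, -1, -I):
    # should match the residues listed above
    print(a, simplify(residue(q_dz, z, a)))
\end{verbatim}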
To compute the quadratic differential $q_\beta$ associated to a given edge curve $\beta\in {\mathcal E}$, it suffices to find a M\"obius transformation $g: {\mathcal O} \to {\mathcal O}$ that sends $\beta$ to $\beta_{0,1}$. The desired quadratic differential is then the pullback $g^* q$. The required M\"obius transformations for the $12$ edge curves are $z$, $i z$, $-z$, $-iz$, $-(z-i)/(z+i)$, $-i(z-i)/(z+i)$, $(z-i)/(z+i)$, $i(z-i)/(z+i)$, $1/z$, $i/z$, $-1/z$, and $-i/z$. For each of these, we calculated the pullback differential and the residues at $i$, $-1$, and $-i$ using Maple \cite{Maple} (which we found was better than other computer algebra systems at cancelling factors on the denominator to compute residues) to avoid calculation mistakes. The resulting matrix for $-\frac{\|q_\beta\|}{\pi}\frac{d\EL_{\beta}(b_j)}{\EL_{\beta}({\mathcal O})}$, where $\beta$ ranges over the edge curves and $b_j$ ranges over the basis vectors, is
\[
\begin{pmatrix}
-1 - \sqrt{2} & -1 - \sqrt{2} & -1 & 0 & -1 - \sqrt{2} & 1 + \sqrt{2}\\
0 & -1 & -1 - \sqrt{2} & -1 - \sqrt{2} & 0 & -3-2\sqrt{2} \\
1 + \sqrt{2} & -1 - \sqrt{2} & 3+2\sqrt{2} & 0 & 1 + \sqrt{2} & 1 + \sqrt{2} \\
0 & 3+2\sqrt{2} & -1 -\sqrt{2} & 1 + \sqrt{2} & 0 & 1 \\
0 & 3+2\sqrt{2} & -1 -\sqrt{2} & 1 + \sqrt{2} & 0 & 1 \\
-3-2\sqrt{2} & 0 & 0 & -3 -2\sqrt{2} & 1 & 0 \\
0 & -3-2\sqrt{2} & 1 + \sqrt{2} & 1 + \sqrt{2} & 0 & -1 \\
3 + 2\sqrt{2} & 0 & 0 & 1 & -1 & 0 \\
-1 - \sqrt{2} & 1 + \sqrt{2} & 1 & 0 & -1 - \sqrt{2} & -1 - \sqrt{2} \\
0 & -3-2\sqrt{2} & 1 + \sqrt{2} & 1 +\sqrt{2} & 0 & -1 \\
1+\sqrt{2} & 1 + \sqrt{2} & -3-2\sqrt{2} & 0 & 1 + \sqrt{2} & -1 - \sqrt{2} \\
0 & 1 & 1 + \sqrt{2} & -1 - \sqrt{2} & 0 & 3 + 2\sqrt{2}
\end{pmatrix}.
\]
As can be checked either by hand or in any computer algebra system, the above matrix has full rank $6$. The corresponding linear map $w \mapsto (d\EL_\beta(w))_{\beta \in {\mathcal E}}$ is therefore injective, proving that the Bolza surface is perfect.
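For instance, the following SymPy sketch (assuming SymPy is installed) enters the matrix with exact entries and computes its rank.
\begin{verbatim}
from sympy import sqrt, Matrix

s = sqrt(2)
M = Matrix([
    [  -1-s,   -1-s,     -1,      0,  -1-s,    1+s],
    [     0,     -1,   -1-s,   -1-s,     0, -3-2*s],
    [   1+s,   -1-s,  3+2*s,      0,   1+s,    1+s],
    [     0,  3+2*s,   -1-s,    1+s,     0,      1],
    [     0,  3+2*s,   -1-s,    1+s,     0,      1],
    [-3-2*s,      0,      0, -3-2*s,     1,      0],
    [     0, -3-2*s,    1+s,    1+s,     0,     -1],
    [ 3+2*s,      0,      0,      1,    -1,      0],
    [  -1-s,    1+s,      1,      0,  -1-s,   -1-s],
    [     0, -3-2*s,    1+s,    1+s,     0,     -1],
    [   1+s,    1+s, -3-2*s,      0,   1+s,   -1-s],
    [     0,      1,    1+s,   -1-s,     0,  3+2*s],
])
print(M.rank(simplify=True))   # should print 6, i.e. full column rank
\end{verbatim}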
We saw earlier that the Bolza surface is also eutactic. Hence, it is a strict local maximum for $\ELsys$ by \cite[Proposition 2.1]{Bav:97}. Together with \corref{cor:roottwo}, this proves \thmref{thm:main}.
\subsection{The face curves}
By \corref{cor:spectrum}, the extremal length systole of ${\mathcal O}$ is realized by the face curves, of which there are only four. It follows that ${\mathcal O}$ is not perfect, since the dimension of $T_{\mathcal O} {\mathcal T}(S_{0,6})$ is equal to $6$. If extremal length were convexoidal, then we could conclude that ${\mathcal O}$ is not a local maximizer for the extremal length systole by \cite[Proposition 2.3]{Bav:97}.
Since ${\mathcal O}$ is eutactic, there does not exist a tangent vector in the direction of which the extremal length of each face curve has a positive derivative. To determine if ${\mathcal O}$ is a local maximizer for $\ELsys$ would therefore require estimating extremal length up to order two. Theorem 1.1 in \cite{LiuSu} shows that the sum of the second derivative of the extremal length of a curve along the Weil--Petersson geodesics in two directions $v$ and $iv$ is positive, but this could still allow one of them to be negative.
A potentially interesting deformation for disproving local maximality would be to twist two opposite faces with respect to each other and push them towards each other (at a slower rate). This should correspond to the direction $iv$ where $v$ is the gradient of the extremal length of the associated face curve. The extremal length systole of the square pillowcase (or the square torus) can be increased in that way. If such a deformation increased the extremal length systole of the octahedron, then one would expect to reach a local maximum once the opposite faces are aligned, that is, at a right triangular prism with equilateral base. However, the numerical calculations carried out in Appendix \ref{sec:prisms} indicate that all such prisms have extremal length systole at most $2.6236<\ELsys({\mathcal O})$. Furthermore, the regular octahedron is an antiprism like the regular tetrahedron, which maximizes the extremal length systole in its moduli space by \corref{cor:tetrahedron}.
We therefore conjecture that ${\mathcal O}$ maximizes the extremal length systole among all six-times-punctured spheres. This would imply that Voronoi's criterion fails for the extremal length systole and hence that extremal length is not convexoidal with respect to any connection on Teichm\"uller space.
\appendix
\section{A geometric proof of the Landen transformations} \label{sec:Landen}
There are many known proofs of the Landen transformations \cite{LandenSurvey}. Although the proof we give below can be reformulated as a change of variable, it at least explains where the latter comes from.
\begin{thm} \label{thm:landen}
For any $k \in (0,1)$, we have
\[
K(k) = \frac{1}{1+k}\, K(k^*) \quad \text{and} \quad K'(k) = \frac{2}{1+k} \,K'(k^*)
\]
where $k^* = 2 \sqrt{k} / (1+k)$.
\end{thm}
\begin{proof}
We start by proving the multiplication rule $2K(k)/K'(k) = K(k^*)/K'(k^*)$.
Recall that by choosing an appropriate branch of the square root in each half-plane, the Schwarz--Christoffel transformation
\[
T(\zeta) = \int_{\zeta_0}^\zeta \frac{dz}{\sqrt{(1-z^2)(1-(k^*)^2z^2)}}
\]
sends $\widehat{\C} \setminus \{ -1/k^*, -1, 1, 1/k^* \}$ conformally onto a rectangular pillowcase $P(k^*)$ of width $2K(k^*)$ and height $K'(k^*)$, with the punctures mapping to the vertices. By symmetry, $T(0)$ is the midpoint of the bottom side of $P(k^*)$ and $T(\infty)$ is the midpoint of the top side.
Clearly, $T$ conjugates the action of $z \mapsto -z$ on $\widehat{\C} \setminus \{ -1/k^*, -1, 1, 1/k^* \}$ with the rotation $J$ of angle $\pi$ around the vertical axis through $T(0)$ and $T(\infty)$ if we think of $P(k^*)$ as sitting upright in $\R^3$ (see \figref{fig:skewer}). That is, $J$ swaps the front and back faces of $P(k^*)$ and preserves the bottom and top edges.
\begin{figure}
\caption{The hyperelliptic involution $J$ of the pillowcase $P(k^*)$ and the quotient $Q(k^*)$}
\label{fig:skewer}
\end{figure}
To quotient by the action of $z\mapsto -z$, we apply the squaring map and puncture at the critical values, resulting in $\widehat{\C} \setminus \{0,1, 1/(k^*)^2, \infty \}$. On the other hand, quotienting $P(k^*)$ by $J$ gives a pillowcase $Q(k^*)$ of half the width $K(k^*)$ and the same height $K'(k^*)$. The transformation $T$ descends to a conformal map between these two objects.
Observe that the M\"obius transformation
\[
g(z) = \left( \frac{1+k}{2} \right) \frac{1+z}{1+kz}
\]
sends $-1/k$, $-1$, $1$, and $1/k$ to $\infty$, $0$, $1$, and $1/(k^*)^2$ respectively. Another Schwarz-Christoffel transformation sends $\widehat{\C} \setminus \{-1/k,-1,1,1/k \}$ to a pillowcase $P(k)$ of width $2K(k)$ and height $K'(k)$. Since the pillowcase representation of a four-times-punctured sphere is unique up to scaling, we have that $P(k)$ and $Q(k^*)$ have the same aspect ratio. That is,
\[
2K(k)/K'(k) = K(k^*)/K'(k^*).
\]
This implies that $K(k) = \lambda(k) K(k^*)$ and $K'(k) = 2 \lambda(k) K'(k^*)$ for some $\lambda(k)>0$.
To determine this scaling factor, define
\[
q=\frac{dz^2}{(1-z^2)(1-(k^*)^2z^2)}, \quad \omega = \frac{dz^2}{4z(1-z)(1-(k^*)^2 z)} \quad \text{and} \quad \tau = \frac{dz^2}{(1-z^2)(1-k^2z^2)}.
\]
If $f(z)=z^2$, then $f^*\omega = q$ and similarly $g^*\omega = \left( \frac{1+k}{2} \right)^2 \tau$ (these elementary calculations are left to the reader).
We then have
\[
K(k^*) = \int_0^1 \sqrt{q} = \int_0^1 \sqrt{f^* \omega} = \int_0^1 \sqrt{\omega}\]
and
\[
K(k) = \int_0^1 \sqrt{\tau} = \frac{2}{1+k} \int_0^1 \sqrt{g^* \omega} = \frac{1}{1+k} \int_{-1}^1 \sqrt{g^* \omega} =
\frac{1}{1+k} \int_{0}^1 \sqrt{\omega} = \frac{1}{1+k} K(k^*).
\]
It follows that $K'(k) = (2K(k)/K(k^*)) K'(k^*) = 2K'(k^*)/(1+k)$.
\end{proof}
\section{Prisms and antiprisms} \label{sec:prisms}
In this section, we obtain upper bounds on the extremal length systole of six-times-punctured spheres with $D_3$-symmetry. These estimates indicate that the regular octahedron has maximal extremal length systole among all prisms and antiprisms, which leads us to think that it is the global maximizer despite not being perfect.
\subsection{Antiprisms}
Within the $1$-parameter family of antiprisms \[{\mathcal A}_r = \widehat{\C} \setminus \{ 1, e^{2\pi i / 3}, e^{-2\pi i / 3}, -r , re^{\pi i / 3}, r e^{-\pi i / 3} \}\] where $r\geq 1$, it is easy to see that the regular octahedron ${\mathcal O} \cong {\mathcal A}_{2+\sqrt{3}}$ locally maximizes the extremal length systole. Indeed, the extremal length of the central face curve surrounding the cube roots of unity in ${\mathcal A}_r$ has a strictly negative derivative in the $\partial / \partial r$ direction, because this pushes the three punctures furthest away from the origin exactly in the direction where the residue of the associated quadratic differential is positive (the ``horizontal'' direction at these poles). Alternatively, this can be shown by applying the cubing map as in the proof of \propref{prop:zigzag} to get that the extremal length is exactly $6K(1/\sqrt{1+r^3})/K'(1/\sqrt{1+r^3})$. This ratio has a strictly negative derivative. Since ${\mathcal O}$ is eutactic, the derivative of the extremal length of the other face curves at $r=2+\sqrt{3}$ must be negative in the direction $-\partial / \partial r$ (by rotational symmetry, the three other face curves have the same extremal length). Thus, the directional derivative of the extremal length systole is negative in both directions $\partial / \partial r$ and $-\partial / \partial r$ at the regular octahedron.
The other remarkable surface in the family of antiprisms is ${\mathcal A}_1$, which is conformally equivalent to the double of a regular hexagon.
\begin{prop}
The extremal length systole of ${\mathcal A}_1$ is at most \[
4K(w)/K'(w) \in \left[2.34031875460627 \pm 5.71\cdot 10^{-15} \right]\]
where $w=2-\sqrt{3}$.
\end{prop}
\begin{proof}
We begin by applying the Cayley transform $z\mapsto i(z-i)/(z+i)$ to send the unit circle to the real line. The image of ${\mathcal A}_1$ is $\widehat{\C} \setminus \left\{\pm (2-\sqrt{3}), \pm 1 ,\pm (2+\sqrt{3})\right\}$. Then scale by $(2+\sqrt{3})$ to obtain $Z=\widehat{\C} \setminus \left\{\pm 1, \pm (2+\sqrt{3}) ,\pm (2+\sqrt{3})^2 \right\}$. To compute the extremal length of the curve $\alpha$ surrounding $[-1,1]$, we apply the squaring map $f(z)=z^2$ and puncture at its critical values to obtain $\widehat{\C} \setminus \{ 0, 1 , (2+\sqrt{3})^2, (2+\sqrt{3})^4,\infty \}$. Then $f(\alpha)=\beta^2$ for the simple closed curve $\beta$ surrounding $[0,1]$. Furthermore, the extremal length of $\beta$ is the same in \[\widehat{\C} \setminus \{ 0, 1 , (2+\sqrt{3})^2, (2+\sqrt{3})^4, \infty \}\] as in $W=\widehat{\C} \setminus \{ 0, 1 , (2+\sqrt{3})^2, \infty \}$ because $(2+\sqrt{3})^4$ lies on one of the critical horizontal trajectories of the extremal differential on $W$.
By the proof of \thmref{thm:landen}, there is a M\"obius transformation sending $0$, $1$, $(2+\sqrt{3})^2$, and $\infty$ to $-1$, $1$, $1/k$, and $-1/k$ for the unique $k\in (0,1)$ such that $1/(k^*)^2 =(2+\sqrt{3})^2$. This gives $k^* = 1/(2+\sqrt{3}) = 2-\sqrt{3}$. Then
\[
\EL(\alpha,Z)= \EL(f^{-1}(\beta),Z)= 2\EL(\beta,W\setminus\{(2+\sqrt{3})^4\}) = 2\EL(\beta,W)= \frac{8K(k)}{K'(k)} = \frac{4K(k^*)}{K'(k^*)}.
\]
This last ratio belongs to the interval $\left[2.34031875460627 \pm 5.71\cdot 10^{-15} \right]$.
\end{proof}
In particular, $\ELsys({\mathcal A}_1)<\ELsys({\mathcal O})$. With techniques similar to those in \secref{sec:estimates}, it is possible to show that the shortest curves in ${\mathcal A}_1$ are the six ``edge curves'' whose extremal length was computed in the above proposition. As $r$ increases, we believe that their extremal length increases until at some point three face curves become shorter. Then $\ELsys({\mathcal A}_r)$ keeps increasing until $r$ reaches $2+\sqrt{3}$ where the fourth face curve has the same extremal length as the others. After that point, the central face curve becomes shortest and its extremal length decreases to zero as $r \to \infty$. That is, we conjecture that $r \mapsto \ELsys({\mathcal A}_r)$ attains a unique local maximum at $r=2+\sqrt{3}$.
\subsection{Prisms}
The next interesting family of six-times-punctured spheres are the right tri\-an\-gular prisms with equilateral base punctured at their vertices. Every such prism is conformally equivalent to
\[
{\mathcal P}_r := \widehat{\C} \setminus \{ 1, e^{2\pi i / 3}, e^{-2\pi i / 3}, r , re^{2\pi i / 3}, r e^{-2\pi i / 3} \}
\]
for some $r>1$.
We start with a rigorous upper bound for the extremal length systole of ${\mathcal P}_r$.
\begin{prop} \label{prop:upper}
The inequality $\ELsys({\mathcal P}_r) \leq 2\sqrt{3} \approx 3.464102$ holds for every $r>1$.
\end{prop}
\begin{proof}
Let $\alpha$ be the circle of radius $\sqrt{r}$ in ${\mathcal P}_r$ and let $\beta=\beta_0 \cup \beta_1 \cup \beta_2$ where each $\beta_j$ surrounds the two punctures on the ray at angle $2\pi j/3$ from the positive real axis. The cubing map $f(z)=z^3$ sends ${\mathcal P}_r$ to $\widehat{\C} \setminus \{ 0 , 1 , r^3, \infty \}$ after puncturing at the critical values. Furthermore $f(\alpha)=\gamma^3$ and $f(\beta)=3\delta$ where $\gamma$ and $\delta$ surround $[0,1]$ and $[1,r^3]$ respectively.
Let $W$ and $H$ be the width and height of the pillowcase representation of $\widehat{\C} \setminus \{ 0 , 1 , r^3, \infty \}$ where $[0,1]$ is horizontal. Then $\EL(\gamma) = 2 W / H$ and $\EL(\delta) = 2H/W$, so that \[\EL(\gamma)\EL(\delta)=4.\]
By \lemref{lem:branch}, we have $\EL(\alpha) = 3 \EL(\gamma)$. Furthermore, we claim that $\EL(\beta_0) \leq \EL(\delta)$. This is because the cylinder $C$ of circumference $2H$ and height $W$ for $\delta$ lifts under $f$ to a cylinder homotopic to $\beta_0$ (or any $\beta_j$). By monotonicity of extremal length under inclusion, we have $\EL(\beta_0)\leq \EL(C)= \EL(\delta)$.
We thus have
\[
\EL(\alpha)\EL(\beta_0) \leq 3\EL(\gamma)\EL(\delta) = 12,
\]
from which it follows that
\[
\ELsys({\mathcal P}_r)\leq \min(\EL(\alpha),\EL(\beta_0)) \leq \sqrt{12} = 2 \sqrt{3}. \qedhere
\]
\end{proof}
The upper bound from \propref{prop:upper} can be improved to
\begin{equation}
\ELsys({\mathcal P}_r) \lesssim 2.6236 < 2.799 < \ELsys({\mathcal O})
\end{equation}
for all $r>0$ using numerical calculations, as we now explain.
Let $x = \EL(\alpha,{\mathcal P}_r)$. One can show that $x = 6K(r^{-3/2})/K'(r^{-3/2})$, so that the map $r\mapsto x$ is a strictly decreasing diffeomorphism from $(0,\infty)$ to itself. To obtain this formula, the idea is to apply the cubing map $f(z)=z^3$ and to puncture at $0$ and $\infty$. Then $f$ maps $\alpha$ to $\gamma^3$ where $\gamma$ is a simple closed curve surrounding the interval $[0,1]$ in $\widehat{\C} \setminus \{ 0, 1, r^3 , \infty \}$. By \lemref{lem:elliptic}, the extremal metric for $\gamma$ is a rectangular pillowcase. Therefore, the extremal metric $\rho$ for $\alpha$ is the triple branched cover of this pillowcase branched over two punctures that lie above each other. This means that $\rho$ looks like the 2-sided surface of the cartesian product between a tripod and an interval (see \figref{fig:tripod}). If we scale the metric so that the height is equal to $1$, then each leg of the tripod has length $x/6$, so that the total circumference is $x$. The cylinder $C$ evoked in the proof of \propref{prop:upper} simply goes around one of the pages of this open book in the vertical direction. To get a better estimate for $\EL(\beta_0)$, it suffices to find a larger embedded annulus in the same homotopy class. We construct one using the flat metric $\rho$.
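For the reader's convenience, here is a small Python sketch (again only an illustration, using the formula for $x$ stated above and SciPy's parameter convention for \texttt{ellipk}) that evaluates $x=6K(r^{-3/2})/K'(r^{-3/2})$ and inverts the relation numerically; a prescribed value of $x$, such as the one found below, then corresponds to a specific radius $r$.
\begin{verbatim}
# Illustrative sketch: x(r) = 6 K(r^{-3/2}) / K'(r^{-3/2}), as stated in the text.
from scipy.special import ellipk
from scipy.optimize import brentq

def x_of_r(r):
    k = r ** -1.5
    m = k * k                            # SciPy parameter m = k^2
    return 6 * ellipk(m) / ellipk(1 - m)

# x is strictly decreasing in r, so a prescribed value of x pins down r.
r_star = brentq(lambda r: x_of_r(r) - 2.6236, 1.0001, 100.0)
print(r_star, x_of_r(r_star))
\end{verbatim}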
\begin{figure}
\caption{The polygon $L_x$, the annulus $A_x$, and the flat metric $\rho$ on ${\mathcal P}_r$.}
\label{fig:tripod}
\end{figure}
Consider the polygon $L_x \subset \C$ with vertices at $0$, $x/3$, $x/3+i$, $x/6+i$, $x/6+i/2$, and $i/2$, as depicted in \figref{fig:tripod}. Reflect $L_x$ across the real axis, then double the resulting $T$-shape across the two pairs of sides that form an interior angle of $3\pi/2$. The resulting object $A_x$ is a topological annulus. Geometrically, it is equal to the cylinder $C$ from before glued onto an $x/3$ by $2$ rectangle via a vertical slit in the center. The interior $A_x^\circ$ embeds isometrically into ${\mathcal P}_r$ equipped with the metric $\rho$ (see \figref{fig:tripod}), with its core curve mapping to $\beta_0$. By monotonicity of extremal length under inclusion, we have $\EL(\beta_0, {\mathcal P}_r) \leq \EL(\beta_0,A_x^\circ)$. Furthermore, a symmetry argument implies that $\EL(\beta_0,A_x^\circ) = 4 \EL(\eta, L_x)$ where $\eta$ is the set of all arcs joining the bottom side of $L_x$ to the pair of sides forming an interior angle of $3\pi/2$. We thus obtain
\[
\ELsys({\mathcal P}_r) \leq \min(\EL(\alpha,{\mathcal P}_r), \EL(\beta_0,{\mathcal P}_r)) \leq \min(x, 4\EL(\eta, L_x)).
\]
\begin{figure}
\caption{Numerical estimates for $\ELsys({\mathcal P}_r)$.}
\label{fig:plot}
\end{figure}
This last extremal length $\EL(\eta, L_x)$ can be calculated by finding a conformal map from $L_x$ onto a rectangle $R_x$, with the ``sides'' of $L_x$ joined by $\eta$ mapping to the vertical sides of $R_x$. We carried out this computation for one thousand equally spaced values of $x$ in the interval $[2,3.4]$ using the Schwarz--Christoffel Toolbox \cite{Driscoll} for MATLAB \cite{Matlab}. The resulting bounds for $\ELsys({\mathcal P}_r)$ are shown in \figref{fig:plot}. Note that $\EL(\eta, L_x)$ is strictly decreasing in $x$, so the maximum of the upper bound $\min(x,4\EL(\eta, L_x))$ is achieved where $x = 4\EL(\eta, L_x)$, which occurs around $x\approx2.6236$ according to our numerical calculations.
Although the Schwarz--Christoffel Toolbox does not come with certified error bounds, it is quite reliable, especially for the range of polygons we consider, where crowding of vertices does not occur. One could turn this into a rigorous upper bound using methods similar to those in \cite[Section 6]{nonconvex}.
\end{document}
\begin{document}
\title{On the integrability of the\\
co-CR quaternionic structures}
\author{Radu Pantilie}
\email{\href{mailto:[email protected]}{[email protected]}}
\address{R.~Pantilie, Institutul de Matematic\u a ``Simion~Stoilow'' al Academiei Rom\^ane,
C.P. 1-764, 014700, Bucure\c sti, Rom\^ania}
\subjclass[2010]{Primary 53C28, Secondary 53C26}
\newtheorem{thm}{Theorem}[section]
\newtheorem{lem}[thm]{Lemma}
\newtheorem{cor}[thm]{Corollary}
\newtheorem{prop}[thm]{Proposition}
\theoremstyle{definition}
\newtheorem{defn}[thm]{Definition}
\newtheorem{rem}[thm]{Remark}
\newtheorem{exm}[thm]{Example}
\numberwithin{equation}{section}
\begin{abstract}
We characterise the integrability of any co-CR quaternionic structure in terms of the curvature
and a generalized torsion of the connection. Also, we apply this result to obtain, for example, the following:\\
\indent
$\bullet$ New co-CR quaternionic structures built on vector bundles over a quaternionic
manifold $M$, whose twistor spaces are holomorphic vector bundles over the twistor space $Z$ of $M$.
Moreover, all the holomorphic vector bundles over $Z$,
which are positive and isotypic when restricted to the twistor lines, are obtained this way.\\
\indent
$\bullet$ Under generic dimensional conditions, any manifold endowed with an almost \mbox{$f$-quaternionic} structure
and a compatible torsion free connection is, locally, a product of a hypercomplex manifold
with $({\rm Im}\mathbb{H})^k$, for some $k\in\mathbb{N}$\,.
\end{abstract}
\maketitle
\thispagestyle{empty}
\section*{Introduction}
\indent
It is a basic fact that any notion has an adequate level of generality. For example, anyone with some interest in
three-dimensional Einstein--Weyl spaces and quaternionic manifolds should have felt a need for a more general
geometric notion, with a twistor space endowed with a locally complete family of spheres with positive normal bundle.
In \cite{fq_2} and \cite{Pan-twistor_(co-)cr_q} (see, also, \cite{fq}\,) it is shown that the \emph{co-CR quaternionic manifolds}
fulfill the needs for such a notion. Up to now, we know the following manifolds which are endowed with natural
co-CR quaternionic structures:\\
\indent
\quad(a) the three-dimensional Einstein--Weyl spaces.\\
\indent
\quad(b) the quaternionic manifolds (in particular, the anti-self-dual manifolds).\\
\indent
\quad(c) the local orbit spaces of any nowhere zero quaternionic vector field on a quaternionic manifold.\\
\indent
\quad(d) a principal bundle built over any quaternionic manifold (in particular, $S^{4n+3}$);
the corresponding twistor space is the product of the sphere with the twistor space of the quaternionic manifold
($\C\!P^1\times\C\!P^{2n+1}$ for $S^{4n+3}$).\\
\indent
\quad(e) vector bundles over any quaternionic manifold; the twistor spaces are
holomorphic vector bundles over the twistor space of the quaternionic manifold.\\
\indent
\quad(f) the Grassmannian of oriented three-dimensional vector subspaces of the Euclidean space of dimension $n+1$\,;
the twistor space is the nondegenerate hyperquadric in the $n$-dimensional complex projective space, $n\geq3$\,.\\
\indent
\quad(g) the complex manifold $M_V$ formed of the isotropic two-dimensional vector subspaces of any complex symplectic vector space $V$; the twistor space is $M_V$ itself.\\
\indent
\quad(h) the space of holomorphic sections of $P\bigl(\mathcal{O}\oplus\mathcal{O}(n)\bigr)$ induced by the holomorphic sections of $\mathcal{O}(m)\oplus\mathcal{O}(m+n)$ which
intertwine the antipodal map and the conjugation; the twistor space is $P\bigl(\mathcal{O}\oplus\mathcal{O}(n)\bigr)$\,, where $m,n\in\mathbb{N}$ are even, $n\neq0$\,.\\
\indent
\quad(i) the space of holomorphic maps of a fixed odd degree from $\C\!P^1$ to $\C\!P^1$ which commute with the antipodal map;
the twistor space is $\C\!P^1\times\C\!P^1$.\\
(The details for (d), (f), (g) can be found in \cite{fq_2}\,, for (c), (h), (i) in \cite{Pan-twistor_(co-)cr_q}\,,
whilst the details for (e) will be given in Section \ref{section:co-cr_q_integrab}\,, below.)\\
\indent
In this paper, we settle the problem of finding a useful characterisation for the integrability of the co-CR quaternionic structures.
For this, we use a seemingly new generalized torsion associated to any connection on a vector bundle $E$, over a manifold $M$,
endowed with a morphism $\r:E\to TM$ (if $\r$ is an isomorphism this reduces to the classical torsion
of a connection on a manifold). This is studied in Section \ref{section:gen_torsion}\,, where we show that it provides a necessary
tool to handle the integrability of distributions defined on Grassmannian bundles.\\
\indent
In Section \ref{section:co-cr_q_integrab}\,, we give the main integrability result (Theorem \ref{thm:co-cr_q_integrab}\,),
and its first applications. For example, there we prove (Theorem \ref{thm:holo_bundles_over_Z}\,) that the following
holds for any holomorphic vector bundle $\mathcal{Z}$\,, over the twistor space $Z$ of a quaternionic manifold $M$,
endowed with a conjugation covering the conjugation of $Z$\,: \emph{if the Birkhoff--Grothendieck decomposition
of $\mathcal{Z}$ restricted to each twistor line contains only terms of Chern number $m\geq1$ then $\mathcal{Z}$ is the twistor space
of a co-CR quaternionic manifold, built on the total space of a vector bundle over $M$}. This gives example (e)\,, above,
and, in particular, for $m=1$ it reduces to \cite[Theorem 7.2]{Sal-dg_qm}\,.\\
\indent
An important particular type of co-CR quaternionic manifolds is provided by the $f$-quaternionic manifolds.
For example, all of (a), (b), (d), (f), and (g), above, give such manifolds \cite{fq_2}\,.
Also, the same applies to example (e) (Theorem \ref{thm:holo_bundles_over_Z}\,) if $m=2$\,.\\
\indent
In Section \ref{section:f-q_integrab}\,, we apply Theorem \ref{thm:co-cr_q_integrab}\,, to study the integrability of
$f$-quaternionic structures. This leads to Theorem \ref{thm:f-q_generic_dims} by which, under generic dimensional conditions,
any manifold endowed with an almost $f$-quaternionic structure
and a compatible torsion free connection is, locally, a product of a hypercomplex manifold
with $({\rm Im}\mathbb{H})^k$, for some $k\in\mathbb{N}$\,.
\section{A generalized torsion} \label{section:gen_torsion}
\indent
We work in the smooth and the complex-analytic categories (in the latter case, by the tangent bundle we mean
the holomorphic tangent bundle). For simplicity, sometimes, the bundle projections will be denoted in the same way, when
the base manifold is the same.\\
\indent
Let $E$ be a vector bundle, endowed with a connection $\nabla$, over a manifold $M$. Suppose that we are given a
morphism of vector bundles $\r:E\to TM$.\\
\indent
Then, firstly, note that there exists a unique section $T$ of $TM\otimes\Lambda^2E^*$ such that
\begin{equation} \label{e:gen_torsion}
T(s_1,s_2)=\r\circ\bigl(\nabla_{\r\circ s_1}s_2-\nabla_{\r\circ s_2}s_1\bigr)-[\r\circ s_1,\r\circ s_2]\;,
\end{equation}
for any (local) sections $s_1$ and $s_2$ of $E$\,; we call $T$ the \emph{torsion (with respect to $\r$)} of $\nabla$.\\
\indent
Let $F$ and $G$ be the typical fibre and structural group of $E$\,, respectively, and assume $\nabla$ compatible with $G$\,.
Denote by $(P,M,G)$ the frame bundle of $E$ and let $\mathscr{H}\subseteq TP$ be the principal connection on $P$ corresponding to $\nabla$.\\
\indent
On composing the projection
$P\times F\to E$ with $\r$\,, we obtain a morphism of vector bundles from $P\times F$ to $TM$ which covers
the projection $\p:P\to M$\,.
Consequently, this morphism factorises as a morphism of vector bundles, over $P$, from $P\times F$ to $\mathscr{H}$ followed by the
canonical morphism from $\mathscr{H}\bigl(=\p^*(TM)\bigr)$ onto $TM$. Thus, if $\xi\in F$
the corresponding (constant) section of $P\times F$ determines a horizontal vector field $B(\xi)$ on $P$.\\
\indent
Note that, $B(\xi)$ is characterised by $\dif\!\p\bigl(B(\xi)_u\bigr)=\r(u\xi)$\,, for any $u\in P$,
and the fact that it is horizontal (compare \cite[p.\ 119]{KoNo}\,). However, unlike the classical case
$B(\xi)$ may have zeros; indeed $B(\xi)$ is zero at $u\in P$ if and only if $\r(u\xi)=0$\,.
Also, $\mathscr{H}$ is generated as a vector bundle by all $B(\xi)$\,, $\xi\in F$, if and only if $\r$ is surjective.\\
\indent
Furthermore, $B:F\to\G(TP)$ is $G$-equivariant. Indeed, if we denote by $R_a$ (the differential of) the right translation by some $a\in G$
on $P$, we have
$$\dif\!\p\bigl(\bigl(R_a\bigl(B(\xi)\bigr)\bigr)_u\bigr)=\dif\!\p\bigl(B(\xi)_{ua^{-1}}\bigr)=\r\bigl(u(a^{-1}\xi)\bigr)\;.$$
Hence, $R_a\bigl(B(\xi)\bigr)=B(a^{-1}\xi)$\,, for any $a\in G$ and $\xi\in F$; in particular, $[A,B(\xi)]=B(A\xi)$ for any $A\in\mathfrak{g}$
and $\xi\in F$, where $\mathfrak{g}$ is the Lie algebra of $G$ and we denote in the same way its elements and the corresponding
fundamental vector fields on $P$ (compare \cite[Proposition III.2.3]{KoNo}\,).
\begin{rem}
Let $\xi\in F$ and let $(u(t))_t$ be an integral curve of $B(\xi)$\,; denote $c=\p\circ u$\,, $s=u\xi$\,. Then $c$ is a curve
in $M$, and $s$ is a section of $c^*E$ satisfying $\r\circ s=\dot{c}$ and $(c^*\nabla)(s)=0$\,.
These curves $s$ lead to a natural generalization of the notion of geodesic of a connection on a manifold. Note that, for any
$e\in E$ there exists a unique germ of such a curve $s$ with $s(0)=e$\,.
\end{rem}
\indent
In this setting, Cartan's first structural equation is replaced by the following fact.
\begin{prop} \label{prop:gen_torsion}
For any $u\in P$ and $\xi,\e\in F$ we have
\begin{equation} \label{e:gen_torsion_P}
T(u\xi,u\e)=-\dif\!\p\bigl([B(\xi),B(\e)]_u\bigr)\;.
\end{equation}
\end{prop}
\begin{proof}
Let $u_0\in P$ and let $u$ be a local section of $P$, defined on some open neighbourhood $U$ of $x_0=\p(u_0)$\,, such that
$u_{x_0}=u_0$ and the local connection form $\G$ of $\mathscr{H}$, with respect to $u$\,, is zero at $x_0$\,.\\
\indent
If $\xi\in F$ then, under the isomorphism $P|_U=U\times G$ corresponding to $u$\,, we have $B(\xi)_{ua}=\bigl(\r(ua\xi),-\G\bigl(\r(ua\xi)\bigr)a\bigr)$\,,
for any $a\in G$\,.\\
\indent
By using the fact that $\G_{x_0}=0$\,, we quickly obtain that, at $u_0$\,, both sides of \eqref{e:gen_torsion_P} are equal to
$-[\r(u\xi),\r(u\e)]_{x_0}$\,, for any $\xi,\e\in F$.
\end{proof}
\indent
Also, we obtain the following natural generalization of the first Bianchi identity.
\begin{prop} \label{prop:Bianchi_1}
Let $E$ be a vector bundle, over $M$, and suppose that there exists a morphism of vector bundles $\r:E\to TM$.\\
\indent
Then the curvature form $R$ of any torsion free connection on $E$ satisfies
\begin{equation} \label{e:Bianchi_1}
\r\bigg(R\bigl(\r(e_1),\r(e_2)\bigr)e_3+R\bigl(\r(e_2),\r(e_3)\bigr)e_1+ R\bigl(\r(e_3),\r(e_1)\bigr)e_2\bigg)=0\;,
\end{equation}
for any $e_1\,,e_2\,,e_3\in E$\,.
\end{prop}
\begin{proof}
Let $\O$ be the curvature form of the corresponding principal connection on the frame bundle $P$ of $E$
(we think of $\O$ as a two-form on $P$ with values in the Lie algebra of the structural group of $E$\,; see \cite{KoNo}\,).
Equation \eqref{e:Bianchi_1} is equivalent to the following
\begin{equation} \label{e:Bianchi_1_princ}
B\bigg(\O_{u\!}\bigl(B(\xi),B(\e)\bigr)\mu+\O_{u\!}\bigl(B(\e),B(\mu)\bigr)\xi+\O_{u\!}\bigl(B(\mu),B(\xi)\bigr)\e\bigg)_u=0\;,
\end{equation}
for any $u\in P$ and $\xi$\,, $\e$\,, $\mu$ in the typical fibre $F$ of $E$\,.\\
\indent
By using the fact that the connection is torsion free, we obtain that, for any $u\in P$ and $\xi,\e,\mu\in F$,
the horizontal part of $\bigl[B(\mu),[B(\xi),B(\e)]\bigr]_u$ is $B\bigl(\O_{u\!}\bigl(B(\xi),B(\e)\bigr)\mu\bigr)_u$\,.
Therefore \eqref{e:Bianchi_1_princ} is just the horizontal part, at $u$\,, of the Jacobi identity, for the usual bracket,
applied to $B(\xi)$\,, $B(\e)$\,, $B(\mu)$\,.
\end{proof}
\indent
Let $S$ be a submanifold of a Grassmannian of $F$ on which
$G$ acts transitively. Then $Z=P\times_GS$ is a subbundle of a Grassmannian bundle of $E$ on which $\nabla$
induces a connection $\mathscr{H}\subseteq TZ$\,.\\
\indent
Suppose that for any $p\in Z$ the restriction of $\r$ to $p$ is an isomorphism
onto some vector subspace of $T_{\p(p)}M$, where $\p:Z\to M$ is the projection.
Then we can construct a distribution $\mathcal{C}$ on $Z$ by requiring $\mathcal{C}\subseteq\mathscr{H}$ and $\dif\!\p(\mathcal{C}_p)=\r(p)$\,, for any $p\in Z$\,.
\begin{prop} \label{prop:rho_integrab}
The following assertions are equivalent, where $R$ and $T$ are the curvature form and the torsion of $\nabla$, respectively:\\
\indent
\quad{\rm (i)} $\mathcal{C}$ is integrable;\\
\indent
\quad{\rm (ii)} $R\bigl(\Lambda^{2\!}\bigl(\r(p)\bigr)\bigr)(p)\subseteq p$ and
$T\bigl(\Lambda^2p\bigr)\subseteq\r(p)$\,, for any $p\in Z$\,.\\
\indent
Consequently, if\/ $\nabla$ is torsion free and $\mathcal{C}$ is integrable then
\begin{equation} \label{e:first_Bianchi}
R\bigl(\r(e_1),\r(e_2)\bigr)e_3+R\bigl(\r(e_2),\r(e_3)\bigr)e_1+ R\bigl(\r(e_3),\r(e_1)\bigr)e_2=0\;,
\end{equation}
for any $p\in Z$ and $e_1,e_2,e_3\in p$\,.
\end{prop}
\begin{proof}
Let $(P,M,G)$ be the frame bundle of $E$\,, and let $H$ be the isotropy subgroup of $G$ at some $p_0\in S$\,.
Then $Z=P/H$, and let $\s:P\to Z$ be the projection.\\
\indent
If we denote $\mathcal{C}_P=(\dif\!\s)^{-1}(\mathcal{C})$ then, as $\s$ is a surjective submersion, we also have $\mathcal{C}=(\dif\!\s)(\mathcal{C}_P)$\,.
Therefore $\mathcal{C}$ is integrable if and only if $\mathcal{C}_P$ is integrable.\\
\indent
Now, note that $\mathcal{C}_P$ is generated by all $B(\xi)$\,, with $\xi\in p_0$ and all (the fundamental vector fields) $A\in\mathfrak{h}$\,,
where $\mathfrak{h}$ is the Lie algebra of $H$. Hence, $\mathcal{C}_P$ is integrable if and only if $[B(\xi),B(\e)]$ is a section of $\mathcal{C}_P$\,,
for any $\xi,\e\in p_0$\,; equivalently, $\O_{u\!}\bigl(B(\xi),B(\e)\bigr)\in\mathfrak{h}$ and $\dif\!\p\bigl([B(\xi),B(\e)]_u\bigr)\in\r(up_0)$
for any $u\in P$ and $\xi,\e\in p_0$\,. Together with Cartan's second structural equation and Proposition \ref{prop:gen_torsion}\,, this
completes the proof.
\end{proof}
\indent
Note that, the last statement of Proposition \ref{prop:rho_integrab} could have been proved directly by observing that, under that hypothesis,
the leaves of $\mathcal{C}$ are, locally, projected by $\p$ onto submanifolds of $M$ on which $\nabla$ induces a torsion free connection.\\
\indent
In the following definition, the notations are as in Proposition \ref{prop:rho_integrab}\,.
\begin{defn} \label{defn:first_Bianchi}
1) We say that $\nabla$ \emph{satisfies the first Bianchi identity} if it is torsion free
and \eqref{e:first_Bianchi} holds for any $e_1,e_2,e_3\in E$\,.\\
\indent
2) We say that $\nabla$ \emph{satisfies the first Bianchi identity, with respect to $Z$\,,} if it is torsion free
and \eqref{e:first_Bianchi} holds for any $p\in Z$ and $e_1,e_2,e_3\in p$\,.
\end{defn}
\indent
Let $E$ be a vector bundle, endowed with a connection $\nabla$, over a manifold $M$. Suppose that $\r:E\to TM$
is a morphism of vector bundles and let $[s_1,s_2]=\nabla_{\r\circ s_1}s_2-\nabla_{\r\circ s_2}s_1$\,,
for any sections $s_1$ and $s_2$ of $E$\,.
\begin{prop} \label{prop:Lie_algebroid}
The following assertions are equivalent:\\
\indent
{\rm (i)} $\nabla$ satisfies the first Bianchi identity;\\
\indent
{\rm (ii)} $(E,[\cdot,\cdot],\r)$ is a Lie algebroid.
\end{prop}
\begin{proof}
This is a straightforward computation.
\end{proof}
\section{On the integrability of the co-CR quaternionic structures} \label{section:co-cr_q_integrab}
\indent
A \emph{quaternionic vector bundle} is a vector bundle $E$ whose structural group is the Lie group ${\rm Sp}(1)\cdot{\rm GL}(k,\mathbb{H})$
acting on $\mathbb{H}^{k}$ by $\bigl(\pm(a,A),q\bigr)\mapsto aqA^{-1}$, for any $\pm(a,A)\in{\rm Sp}(1)\cdot{\rm GL}(k,\mathbb{H})$
and $q\in\mathbb{H}^{k}$. Then the morphism of Lie groups ${\rm Sp}(1)\cdot{\rm GL}(k,\mathbb{H})\to{\rm SO}(3)$\,,
$\pm(a,A)\to\pm a$\,, induces an oriented Riemannian vector bundle of rank three whose sphere bundle $Z$
is the bundle of \emph{admissible linear complex structures} of $E$.\\
\indent
An \emph{almost co-CR quaternionic structure} on $M$ is a pair $(E,\r)$ where $E$ is a quaternionic vector bundle over $M$
and $\r:E\to TM$ is a surjective morphism of vector bundles whose kernel contains no nonzero subspace preserved by some admissible
linear complex structure of $E$.\\
\indent
By duality, we obtain the notion of \emph{almost CR quaternionic structure}.\\
\indent
Let $(M,E,\r)$ be an almost co-CR quaternionic manifold and let $\nabla$ be a compatible connection on $E$ (that is,
$\nabla$ is compatible with the structural group of $E$). Then we can construct a complex distribution $\mathcal{C}$ on the bundle $Z$
of admissible linear complex structures of $E$, as follows. Firstly, for any $J\in Z$, let $\mathcal{B}_J$ be the horizontal lift
of $\r\bigl({\rm ker}(J+{\rm i})\bigr)$\,, with respect to $\nabla$. Then $\mathcal{C}=({\rm ker}\dif\!\p)^{0,1}\oplus\mathcal{B}$
is a complex distribution on $Z$ such that $\mathcal{C}+\overline{\mathcal{C}}=T^{\C\!}Z$\,; that is, $\mathcal{C}$ is an \emph{almost co-CR structure}
on $Z$.\\
\indent
We say that $(M,E,\r,\nabla)$ is \emph{co-CR quaternionic} if $\mathcal{C}$ is integrable. Note that, then $\mathcal{C}\cap\overline{\mathcal{C}}$ is the complexification
of (the tangent bundle of) a foliation $\mathscr{F}$ on $Z$\,; moreover, with respect to it, $\mathcal{C}$ is projectable onto complex structures on the local leaf spaces
of $\mathscr{F}$. If there exists a surjective submersion $\p_Y:Z\to Y$ such that ${\rm ker}\dif\!\p_Y=\mathscr{F}$ and $\mathcal{C}$ is projectable with respect to $\p_Y$
(the latter condition is unnecessary if the fibres of $\p_Y$ are connected)
then the complex manifold $\bigl(Y,\dif\!\p_Y(\mathcal{C})\bigr)$ is the \emph{twistor space} of $(M,E,\r,\nabla)$\,.
\begin{displaymath}
\xymatrix{
& Z \ar[dl]_{\p_Y} \ar[dr]^{\p} & \\
Y & & M
}
\end{displaymath}
\indent
Note that, if $\r$ is an isomorphism then we obtain the classical notion of \emph{quaternionic manifold} \cite{Sal-dg_qm}
(see \cite[Remark 2.10(2)]{IMOP}\,). Also, more information on (co-)CR quaternionic manifolds can be found
in \cite{fq}, \cite{fq_2}, \cite{MP2}, \cite{Pan-twistor_(co-)cr_q}\,.
\begin{thm} \label{thm:co-cr_q_integrab}
Let $(M,E,\r)$ be an almost co-CR quaternionic manifold and let $Z$ be the bundle of admissible linear complex structures on $E$\,.
Let $\nabla$ be a compatible connection on $E$\,.\\
\indent
The following assertions are equivalent, where $R$ and $T$ are the curvature form and the torsion of $\nabla$, respectively:\\
\indent
\quad{\rm (i)} $(E,\r,\nabla)$ is integrable;\\
\indent
\quad{\rm (ii)} $R\bigl(\Lambda^{2\!}\bigl(\r\bigl(E^J\bigr)\bigr)\bigr)\bigl(E^J\bigr)\subseteq E^J$ and
$T\bigl(\Lambda^2\bigl(E^J\bigr)\bigr)\subseteq\r\bigl(E^J\bigr)$, for any $J\in Z$\,, where $E^J={\rm ker}(J+{\rm i})$\,.
\end{thm}
\begin{proof}
Assuming real-analyticity, this follows quickly from Proposition \ref{prop:rho_integrab}\,, after a complexification.
In the smooth category, this follows from an extension of Proposition \ref{prop:rho_integrab}\,, similar to \cite[Theorem A.3]{fq}\,.
\end{proof}
\indent
Let $M$ be a quaternionic manifold, $\dim M=4k$\,, endowed with a torsion free compatible connection.
Denote by $L$ the complexification of the line bundle over $M$ characterised by the fact that its $(4k)$-th tensorial power is $\Lambda^{4k}TM$
(we use the orientation on $M$ compatible with all of the admissible linear complex structures on it).\\
\indent
Then, at least locally, we have $T^{\C\!}M=H\otimes W$, where $H$ and $W$ are complex vector bundles of rank $2$ and $2k$\,, respectively,
and the structural group of $H$ is ${\rm SL}(2,\C\!)$ ($H$ and $W$ exist globally if and only if
the vector bundle generated by the admissible linear complex structures on $M$ is spin).\\
\indent
Denote $H'=(L^*)^{k/(k+1)}\otimes H$. Then $H'\setminus0$ is endowed with a natural hypercomplex structure
(\cite{Sal-dg_qm}\,; see \cite{PePoSw-98}\,), such that the projection onto $M$ is twistorial. In particular,
on endowing $H'\setminus0$ with one of the admissible complex structures (corresponding
to some imaginary quaternion of length $1$\,) then $H'\setminus0$ is the total space of a holomorphic principal bundle over the
twistor space $Z$ of $M$, with group $\C\!\setminus\{0\}$\,. We shall denote by $\mathcal{L}$ the dual of the corresponding holomorphic
line bundle over $Z$\,; note that, if $m$ is even then $\mathcal{L}^m$ is globally defined.
For example, if $M=\mathbb{H}P^k$ then $\mathcal{L}$ is just the hyperplane line bundle over $\C\!P^{2k+1}$.\\
\indent
Now, let $U_m=\odot^m(H')^*$, where $\odot$ denotes the symmetric product, $m\in\mathbb{N}$\,.\\
\indent
If $m$ is even then $U_m$ is globally defined and is the complexification of a (real)
vector bundle which will be denoted in the same way (note that, $L^{k/(k+1)}\otimes U_2$ is just the oriented Riemannian vector bundle of
rank three generated by the admissible linear complex structures on $M$).
Let $F$ be a vector bundle over $M$ endowed with a connection whose $(0,2)$ components of its curvature,
with respect to any admissible linear complex structure on $M$, are zero.
We endow $\mathcal{F}=(\p^*F)^{\C\!}$ with the (Koszul--Malgrange) holomorphic structure determined by the pull back of the connection
on $F$ and the complex structure of $Z$.\\
\indent
If $m$ is odd then $U_m$ is a hypercomplex vector bundle. Therefore if $F$ is a hypercomplex vector bundle over $M$ then $U_m\otimes F$
is the complexification of a vector bundle which will be denoted in the same way; in the tensor product $U_m$ and $F$ are endowed with $I_1$ and $J_1$\,, respectively,
whilst the conjugation on $U_m\otimes F$ is $I_2\otimes J_2$, where $I_i$ and $J_i$\,, $i=1,2,3$\,,
give the linear hypercomplex structures of $U_m$ and $F$, respectively.
Suppose that $F$ is endowed with a compatible connection whose $(0,2)$ components
of its curvature, with respect to any admissible linear complex structure on $M$, are zero.
On endowing $F$ with $J_1$\,, let $\mathcal{F}=\p^*F$ endowed with the holomorphic structure determined by the pull back of the connection
on $F$ and the complex structure of $Z$.\\
\indent
If $m=1$ the next result gives \cite[Theorem 7.2]{Sal-dg_qm}\,.
\begin{thm} \label{thm:holo_bundles_over_Z}
{\rm (a)} There exists a natural co-CR quaternionic structure on the total space of\/ $U_m\otimes F$ whose twistor space is $\mathcal{L}^m\otimes\mathcal{F}$.\\
\indent
{\rm (b)} Conversely, let $\mathcal{Z}$ be a holomorphic vector bundle over $Z$ such that:\\
\indent
\quad{\rm (i)} the Birkhoff--Grothendieck decomposition of $\mathcal{Z}$ restricted to each twistor line
contains only terms of Chern number $m$\,;\\
\indent
\quad{\rm (ii)} $\mathcal{Z}$ is endowed with a conjugation covering the conjugation of $Z$\,.\\
\indent
Then $\mathcal{Z}$ is the twistor space of a co-CR quaternionic manifold, obtained as in {\rm (a)}\,.
\end{thm}
\begin{proof}
For simplicity, we work in the complex-analytic category. Thus, in particular, at least locally, a (complex-)quaternionic vector bundle
is a bundle which is the tensor product of a vector bundle of rank $2$ and another vector bundle; for example, on denoting $V=L^{k/(k+1)}\otimes W$,
we have $TM=U_1^*\otimes V$.\\
\indent
Also, let $E=(U_{m-2}\oplus U_m)\otimes F$\,. As $U_{m-2}\oplus U_m=U_1\otimes U_{m-1}$\,, we have $E=U_1\otimes U_{m-1}\otimes F$,
and, in particular, $E$ is a quaternionic vector bundle.
Furthermore, by using the fact that the structural group of $L^{k/(k+1)}\otimes U_1^*$
is ${\rm SL}(2,\C\!)$\,, we obtain that $U_1=U_1^*\otimes L^{2k/(k+1)}$. Hence, also, $E\oplus TM$ is a quaternionic vector bundle.\\
\indent
By using the induced connection on $U_m\otimes F$ we obtain
\begin{equation} \label{e:for_holo_bundles_over_Z}
\begin{split}
T(U_m\otimes F)&=\p^*(U_m\otimes F)\oplus\p^*(TM)\;,\\
\p^*(E\oplus TM)&=T(U_m\otimes F)\oplus\p^*(U_{m-2}\otimes F)\;,
\end{split}
\end{equation}
where $\p:U_m\otimes F\to M$ is the projection.\\
\indent
Thus, $\p^*(E\oplus TM)$ and the projection $\r$ from it onto $T(U_m\otimes F)$ provide
an almost co-CR quaternionic structure on $U_m\otimes F$. Furthermore, the connections on $M$ and $F$ induce a compatible connection $\nabla$ on
$\p^*(E\oplus TM)$\,, which preserves the decomposition given by the second relation of \eqref{e:for_holo_bundles_over_Z}\,.
Thus, $\nabla$ is flat when restricted to the fibres of $U_m\otimes F$, whilst if $X\in\p^*(TM)$ then $\nabla_X$ is given by the pull back of the connection
on $E\oplus TM$; in particular, if $X$ and $Y$ are pull backs of local vector fields on $M$ then $\nabla_XY$ is the pull back of
the covariant derivative of $\dif\!\p(Y)$ along $\dif\!\p(X)$\,.\\
\indent
We have $U_m=\odot^mU_1$\,, where $\odot$ denotes the symmetric product.
Also, each $e\in U_1$ may be extended to a covariantly constant local section
of $U_1$ (this is the reason for which the `tensorisation' with $(L^*)^{k/(k+1)}$ is needed).\\
\indent
In this setting, the bundle of admissible linear complex structures on $M$ is replaced by $P(U_1^*)$ so that if $J$ `corresponds' to $[e]$\,, for some $e\in U_1^*$,
then ${\rm ker}(J+{\rm i})$ corresponds to the space $\{e\otimes f\,|\,f\in V\,\}$\,. Then, on denoting $\mathcal{E}=\p^*(E\oplus TM)$\,,
for any nonzero $e\in U_1$\,, we have that $\r(\mathcal{E}^e)$ is isomorphic to the direct sum of $\{e\otimes f\,|\,f\in V\,\}$
and the tensor product of the corresponding fibre of $F$ with the space of polynomials from $U_m=\odot^mU_1$ which are divisible by $e$\,.\\
\indent
To verify that condition (ii) of Theorem \ref{thm:co-cr_q_integrab} is satisfied we shall, also, use the fact that $\nabla$ restricted to each fibre of $U_m\otimes F$
is flat. This and the fact that $M$ is quaternionic (and endowed with a torsion free connection) quickly imply
that the curvature form of $\nabla$ satisfies (ii) of Theorem \ref{thm:co-cr_q_integrab}\,.
For the torsion $T$, it is sufficient to check the condition on pairs of local sections $A,X$ and $X,Y$ from $\mathcal{E}^e$
with $A$ induced by a section of $E$ and $X,Y$ induced by sections of $TM$, where $e\in U_1$\,. Then we have $T(A,X)=-\r(\nabla_XA)$
and $T(X,Y)$ is the `vertical' component of $-[X,Y]$\,; in particular, $T(X,Y)$ is determined by the curvature form of $U_m\otimes F$,
applied to $(X,Y)$\,.\\
\indent
Locally, we may assume $L$ trivial so that $U_1=U_1^*$ but, note that, this isomorphism does not preserve the connections
(the connection on $U_1$ is just the dual of the connection on $U_1^*$).
Then we may choose $(e_1,e_2)$ a local frame for $U_1^*$ such that it corresponds to $(e^2,-e^1)$\,, where $(e^1,e^2)$ is the dual
of $(e_1,e_2)$\,, and such that $e_1$ is covariantly constant. Thus, we have to check that $T(A,X)$ and $T(X,Y)$ are contained by $\r(\mathcal{E}^{e_1})$\,,
where $A$ is the pull back of the tensor product of a local section of $F$ and a polynomial of degree $m$ which is divisible by $e^2$, whilst
$X=\p^*(e_1\otimes u)$ and $Y=\p^*(e_1\otimes v)$\,, with $u$ and $v$ local sections of $V$. Now, the condition on the torsion follows quickly
by using the fact that $e_1$ is covariantly constant and the fact that the curvature form of $F$ is zero when restricted to spaces of the form
$\{e\otimes f\,|\,f\in V\,\}$\,, with $e\in U_1^*$.\\
\indent
In the complex-analytic category, the twistor space of $M$ is (locally) the leaf space of the foliation $\mathscr{V}$ on $P(U_1)$ which, at each $[e]\in P(U_1)$\,,
is the horizontal lift of the space $\{e\otimes f\,|\,f\in V\,\}$\,. Similarly, the twistor space of $U_m\otimes F$ is the leaf space of the foliation
$\mathscr{V}_m$ on $\p^*\bigl(P(U_1)\bigr)$
which at each $\p^*[e]$ is the horizontal lift of $\r(\mathcal{E}^e)$\,.\\
\indent
On the other hand, the pull back of $\mathcal{L}^*\setminus0$ to $P(U_1)$ is the principal bundle whose projection is $U_1\setminus0\to P(U_1)$\,;
equivalently, the pull back of $\mathcal{L}^*$ to $P(U_1)$ is the tautological line bundle over $P(U_1)$\,.
This is, further, equivalent to the fact that the pull back of $\mathcal{L}$ to $P(U_1)$ is (locally; globally, if $H^1(M,\C\!\setminus\{0\}\,)$ is zero)
isomorphic to the quotient of $\p_1^*(U_1)$ through the tautological line bundle over $P(U_1)$\,,
where $\p_1:U_1\to M$ is the projection. Therefore $\mathcal{L}$ is the leaf space of the foliation on $\p_1^*\bigl(P(U_1)\bigr)$
which at each $\p_1^*[e]$ is the horizontal lift of $[e]\oplus\{e\otimes f\,|\,f\in V\,\}$\,.
Similarly, we obtain that the twistor space of $U_m$ is $\mathcal{L}^m$. Together with $\p_m^*\bigl(P(U_1)\bigr)=U_m+P(U_1)$\,,
this gives a surjective submersion $\phi_m:U_m+P(U_1)\to\mathcal{L}^m$ which is linear along the fibres of the projection from
$U_m+P(U_1)$ onto $P(U_1)$\,; that is, $\phi_m$ is a surjective morphism of vector bundles, covering the surjective submersion $P(U_1)\to Z$\,.\\
\indent
Also, the condition on the connection of $F$ is equivalent to the fact that its pull back to $P(U_1)$ is flat when restricted to the leaves of $\mathscr{V}$. Hence,
the pull back of $F$ to $P(U_1)$ is, also, the pull back of a vector bundle $\mathcal{F}$ on $Z$\,. Thus, we, also, have a surjective morphism of vector bundles
$\phi:F+P(U_1)\to\mathcal{F}$\,, covering $P(U_1)\to Z$\,.\\
\indent
Therefore there exists a morphism of vector bundles $\psi$ from $(U_m\otimes F)+P(U_1)$ onto $\mathcal{L}^m\otimes\mathcal{F}$,
covering $P(U_1)\to Z$\,. Moreover, ${\rm ker}\dif\!\psi=\mathscr{V}_m$ and, hence, the twistor space of $U_m\otimes F$ is~$\mathcal{L}^m\otimes\mathcal{F}$.\\
\indent
Conversely, if $\mathcal{Z}$ is a vector bundle over $Z$ satisfying (i) then $\bigl(\mathcal{L}^*\bigr)^m\otimes\mathcal{Z}$
restricted to each twistor line is trivial. Thus, it corresponds (through the Ward transform) to a vector bundle $F$ over $M$ endowed with a connection
whose curvature form is zero when restricted to spaces of the form $\{e\otimes f\,|\,f\in V\,\}$\,, with $e\in U_1$\,.
Similarly to above, we obtain that $\mathcal{Z}$ is the twistor space of $U_m\otimes F$, and the proof is complete.
\end{proof}
\indent
With the same notations as in Theorem \ref{thm:holo_bundles_over_Z}\,, the projection from $U_m\otimes F$ onto $M$ is the twistorial map
corresponding to the projection from $\mathcal{L}^m\otimes\mathcal{F}$ onto $Z$. Also, further examples of co-CR quaternionic manifolds can be obtained
by taking direct sums of bundles $U_m\otimes F$ (with different values for $m$).\\
\indent
Here is another application of Theorem \ref{thm:co-cr_q_integrab}\,.
\begin{cor} \label{cor:co-cr_q_integrab}
Let $(M,E,\r)$ be an almost co-CR quaternionic manifold, $\rank E>4$\,, and let $Z$ be the bundle of admissible linear complex structures
of $E$\,.\\
\indent
If there exists a compatible connection $\nabla$ on $E$ which satisfies the first Bianchi identity, with respect to $Z$\,,
then $(E,\r,\nabla)$ is integrable.
\end{cor}
\begin{proof}
Locally, we may suppose $E^{\C\!}=H\otimes F$, where $H$ and $F$ are complex vector bundles with $\rank H=2$\,. Moreover,
the following hold:\\
\indent
\quad(1) $Z=PH$ such that if $J\in Z$ corresponds to $[e]\in PH$ then $$E^J=\{e\otimes f\,|\,f\in F_{\p(e)}\}\;,$$ where $\p$ is the projection.\\
\indent
\quad(2) $\nabla=\nabla^H\otimes\nabla^F$ for some connections $\nabla^H$ on $H$ and $\nabla^F$ on $F$.\\
\indent
By Theorem \ref{thm:co-cr_q_integrab}\,, we have to show that, for any $e\in H$ and $f_1,f_2,f_3\in F$, we have
$$R\bigl(\r(e\otimes f_1),\r(e\otimes f_2)\bigr)(e\otimes f_3)\in\{e\otimes f\,|\,f\in F_{\p(e)}\}\;;$$
equivalently, $R^H\bigl(\r(e\otimes f_1),\r(e\otimes f_2)\bigr)e$ is proportional to $e$\,, where $R^H$ is the curvature form of $\nabla^H$.\\
\indent
We know that, for any $e\in H$ and $f_1,f_2,f_3\in F$, we have
$$R\bigl(\r(e\otimes f_1),\r(e\otimes f_2)\bigr)(e\otimes f_3)+\textrm{circular\;permutations}=0\;,$$
which implies
\begin{equation} \label{e:for_integrab_from_Bianchi}
\bigl(R^H\bigl(\r(e\otimes f_1),\r(e\otimes f_2)\bigr)e\bigr)\otimes f_3+\textrm{circular\;permutations}\in\{e\otimes f\,|\,f\in F_{\p(e)}\}\;.
\end{equation}
\indent
As $\rank E>4$\,, we have $\rank F>2$\,. Therefore \eqref{e:for_integrab_from_Bianchi} holds, for any $e\in H$ and $f_1,f_2,f_3\in F$,
if and only if each term of the left hand side of \eqref{e:for_integrab_from_Bianchi} is contained by $\{e\otimes f\,|\,f\in F_{\p(e)}\}$\,,
for any $e\in H$ and $f_1,f_2,f_3\in F$. The proof is complete.
\end{proof}
\indent
Note that, the fact that $\rank H=2$ is not used in the proof of Corollary \ref{cor:co-cr_q_integrab}\,.
\begin{prop} \label{prop:co-cr_q_with_first_Bianchi}
Let $(M,E,\r)$ be an almost co-CR quaternionic manifold such that $\rank E>4$ and there exists a compatible connection
$\nabla$ on $E$ which satisfies the first Bianchi identity.\\
\indent
Then, locally, ${\rm ker}\r$ can be endowed with a quaternionic structure such that the projection from ${\rm ker}\r$ onto $M$ is a
twistorial map.
\end{prop}
\begin{proof}
{}From Proposition \ref{prop:Lie_algebroid} and \cite[Theorem 2.2]{KMa-BLMS95} it follows that, locally, there exists
a section $\iota:TM\to E$ of $\r$ such that for any sections $s_1$ and $s_2$ of the vector subbundle ${\rm im}\,\iota\subseteq E$
we have that $\nabla_{\r\circ s_1}s_2-\nabla_{\r\circ s_2}s_1$ is a section of ${\rm im}\,\iota$\,. In particular, we have
$E={\rm ker}\r\oplus TM$, where we have used the obvious isomorphism $TM={\rm im}\,\iota$\,.\\
\indent
Furthermore, Proposition \ref{prop:Lie_algebroid} quickly implies that $\nabla$ restricts to a flat connection $\nabla^v$
on ${\rm ker}\r$\,. Locally, we may suppose that $\nabla^v$ is the trivial connection corresponding to some trivialization
of ${\rm ker}\r$\,; that is, ${\rm ker}\r$ is generated by (global) sections which are covariantly constant, with respect to $\nabla$.\\
\indent
Let $\p:{\rm ker}\r\to M$ be the projection. Note that, we have two decompositions $\p^*E=\p^*({\rm ker}\r)\oplus\p^*(TM)$
and $T({\rm ker}\r)=\p^*({\rm ker}\r)\oplus\p^*(TM)$\,, where the latter is induced by $\nabla^v$. Therefore we have a vector
bundle isomorphism $T({\rm ker}\r)=\p^*E$ which depends only on $\iota$ (and the given co-CR quaternionic structure).
Hence, ${\rm ker}\r$ is endowed with an almost quaternionic structure.\\
\indent
To complete the proof it is sufficient to prove that $\p^*\nabla$ is torsion free. Indeed, let $U,V$ be sections of $\p^*({\rm ker}\r)$
induced by sections of ${\rm ker}\r$ which are covariantly constant, with respect to $\nabla$, and let $X,Y$ be sections of
$\p^*(TM)$ induced by vector fields on $M$; in particular, $X,Y$ are projectable, with respect to $\dif\!\p$\,.\\
\indent
Then we have that all of $[U,V]$\,, $[U,X]$\,, $(\p^*\nabla)_UV$, $(\p^*\nabla)_VU$\,, $(\p^*\nabla)_UX$\,, $(\p^*\nabla)_XU$ are zero.
Also, as $\nabla$ is torsion free, we have $(\p^*\nabla)_XY-(\p^*\nabla)_YX-[X,Y]=0$\,, thus, completing the proof.
\end{proof}
\indent
Let $N$ be a quaternionic-K\"ahler manifold and let $\nabla$ be its Levi--Civita connection. If $M\subseteq N$ is a hypersurface or a CR quaternionic submanifold
\cite{MP2} then the following assertions are equivalent:\\
\indent
\quad(i) $\nabla$ restricted to $TN|_M$ satisfies the first Bianchi identity;\\
\indent
\quad(ii) $M$ is geodesic and the normal connection is flat.
\section{On the integrability of the $f$-quaternionic structures} \label{section:f-q_integrab}
\indent
An \emph{almost $f$-quaternionic structure} \cite{fq_2} on a manifold $M$ is a pair $(E,V)$\,, where $E$ is a quaternionic vector bundle over $M$,
with $V,TM\subseteq E$ vector subbundles such that $E=V\oplus TM$ and $JV\subseteq TM$, for any admissible linear complex structure
$J$ of~$E$. Then $(E,\iota)$ and $(E,\r)$ are almost CR quaternionic and almost co-CR quaternionic structures
on $M$, where $\iota:TM\to E$ and $\r:E\to TM$ are the inclusion and the projection, respectively.\\
\indent
Any almost $f$-quaternionic structure on $M$ corresponds to a reduction of its frame bundle to the group $G_{l,m}$
of \emph{$f$-quaternionic linear isomorphisms} of $({\rm Im}\mathbb{H})^l\times\mathbb{H}^{m}$, in particular $\dim M=3l+4m$\,.
More precisely, $G_{l,m}={\rm GL}(l,\R)\times\bigl({\rm Sp}(1)\cdot{\rm GL}(m,\mathbb{H})\bigr)$\,,
where ${\rm Sp}(1)\cdot{\rm GL}(m,\mathbb{H})$ acts canonically on $\mathbb{H}^{m}$, whilst the action of $G_{l,m}$ on $({\rm Im}\mathbb{H})^l=\R^l\otimes{\rm Im}\mathbb{H}$
is given by the tensor product of the canonical representations of ${\rm GL}(l,\R)$ and ${\rm SO}(3)$ on $\R^l$ and ${\rm Im}\mathbb{H}$,
respectively, and the canonical morphisms of Lie groups from $G_{l,m}$ onto ${\rm GL}(l,\R)$ and ${\rm SO}(3)$\,.
Furthermore, $G_{l,m}$ is isomorphic to the group of quaternionic linear isomorphisms of $\mathbb{H}^{l+m}$ which preserve both $\R^l$ and
$({\rm Im}\mathbb{H})^l\times\mathbb{H}^{m}$.\\
\indent
Consequently, any almost $f$-quaternionic structure on $M$, also, corresponds to a decomposition $TM=(V\otimes Q)\oplus W$,
where $V$ is a vector bundle, $Q$ is an oriented Riemannian vector bundle of rank three, and $W$ is a quaternionic vector bundle
such that the frame bundle of $Q$ is the principal bundle induced by the frame bundle of $W$ through the canonical morphism of Lie groups
${\rm Sp}(1)\cdot{\rm GL}(m,\mathbb{H})\to{\rm SO}(3)$\,, where $\rank W=4m$\,.\\
\indent
Thus, any connection $\nabla$ on $E$ compatible with $G_{l,m}$ induces a connection $D$ on $M$
such that $\nabla=D^V\oplus D$, where $D^V$ is the connection induced by $D$ on $V$; then we say that $D$
is \emph{compatible with $(E,V)$}\,. Moreover, if we denote by $T^E$ and $T$ the torsions of $\nabla$ and $D$,
respectively, then $T^E=\r^*T$; in particular, $\nabla$ is torsion free if and only if $D$ is torsion free.\\
\indent
Furthermore, $D=\bigl(D^V\otimes D^Q\bigr)\oplus D^W$, where $D^W$ is a compatible connection on the quaternionic vector bundle $W$,
and $D^Q$ is the connection induced by $D^W$ on $Q$. In particular,
if $D$ is torsion free then $V\otimes Q$ and $W$ are foliations on $M$,
and the leaves of the latter are quaternionic manifolds.
\begin{cor} \label{cor:f-q_D}
Let $(E,V)$ be an almost $f$-quaternionic structure on $M$ and let $\nabla$ be the connection on $E$ induced
by some connection $D$ on $M$, compatible with $(E,V)$\,. Let $\r:E\to TM$ be the projection,
$T$ the torsion of $D$, and $R^{Q}$ the curvature form of the connection induced on $Q$\,.\\
\indent
Then $(E,\r,\nabla)$ is integrable if and only if, for any $J\in Z$\,, we have
\begin{equation} \label{e:f-q_D}
\begin{split}
&T\bigl(\Lambda^{2\!}\bigl(\r\bigl(E^J\bigr)\bigr)\bigr)\subseteq\r\bigl(E^J\bigr)\;,\\
&R^Q\bigl(\Lambda^{2\!}\bigl(\r\bigl(E^J\bigr)\bigr)\bigr)(J)\subseteq(J'+{\rm i}J'')^{\perp}\;,
\end{split}
\end{equation}
where $E^J={\rm ker}(J+{\rm i})$\,, and $J',J''\in Z$ such that $(J,J',J'')$ is a positive orthonormal frame.
\end{cor}
\begin{proof}
Let $D^V$ be the connection induced on $V$ and $T^E$ the torsion of $\nabla$. Because $T^E=\r^*T$,
from Theorem \ref{thm:co-cr_q_integrab} we obtain that it is sufficient to prove that, for any $J\in Z$\,,
the second relation of \eqref{e:f-q_D} holds if and only if
\begin{equation} \label{e:f-q_D_1}
R^E\bigl(\Lambda^{2\!}\bigl(\r\bigl(E^J\bigr)\bigr)\bigr)\bigl(E^J\bigr)\subseteq E^J\;,
\end{equation}
where $R^E$ is the curvature form of $\nabla$.\\
\indent
We have $R^Q(X,Y)J=\bigl[R^E(X,Y),J\bigr]$\,, for any $J\in Z$ and $X,Y\in TM$.
Therefore if $J\in Z$ and $A,B,C\in E^J$ then $$\bigl(R^Q\bigl(\r(A),\r(B)\bigr)J\bigr)C=-(J+{\rm i})\bigl(R^E\bigl(\r(A),\r(B)\bigr)C\bigr)\;.$$
As, up to a nonzero factor, $J+{\rm i}$ is the projection from $E^{\C}$ onto $\overline{E^J}$, we have that \eqref{e:f-q_D_1} holds
if and only if $R^Q\bigl(\Lambda^{2\!}\bigl(\r\bigl(E^J\bigr)\bigr)\bigr)\bigl(E^J\bigr)=0$. But, for any $X,Y\in TM$, we have
$R^Q(X,Y)J=\a(X,Y)(J'+{\rm i}J'')+\b(X,Y)(J'-{\rm i}J'')$\,, for some two-forms $\a$ and $\b$\,.\\
\indent
To complete the proof just note that the obvious relation $J'+{\rm i}J''=J'\circ(1-{\rm i}J)$
and its conjugate imply that $(J'+{\rm i}J'')\bigl(E^J\bigr)=0$
whilst $J'-{\rm i}J''$ maps $E^J$ isomorphically onto $\overline{E^J}$.
\end{proof}
\begin{prop} \label{prop:f-q_rV>1_rW>0}
Let $(E,V)$ be an almost $f$-quaternionic structure on $M$ and let $\nabla$ be the connection on $E$ induced
by some torsion free connection on $M$, compatible with $(E,V)$\,.\\
\indent
If $\rank E>4\rank V\geq8$ then the connection induced on $Q$ is flat.
\end{prop}
\begin{proof}
Let $TM=(V\otimes Q)\oplus W$ be the decomposition corresponding to $(E,V)$\,. Note that,
$\rank E>4\rank V$ if and only if $\rank W>0$\,. Also, for any $J\in Z\,(\subseteq Q)$ and $U\in V$,
we have $JU=U\otimes J$.\\
\indent
Let $D$\,, $D^V$, $D^Q$, $D^W$ be the connections induced on $TM$, $V$, $Q$\,, $W$,
and let $R^M$, $R^V$, $R^Q$, $R^W$ be their curvature forms, respectively;
recall that, $D=\bigl(D^V\otimes D^Q\bigr)\oplus D^W$.\\
\indent
Now, firstly, let $J\in Z$\,, $U\in V$ and $X,Y\in W$. From the first Bianchi identity applied to $D$ we obtain
$$R^M(X,Y)(U\otimes J)+R^M(Y,U\otimes J)X+R^M(U\otimes J,X)Y=0\;;$$
hence, we, also, have
\begin{equation} \label{e:f-q_rV>1_rW>0_1}
\bigl(R^V(X,Y)U\bigr)\otimes J+U\otimes\bigl(R^Q(X,Y)J\bigr)=0\;.
\end{equation}
As $R^Q(X,Y)J\in J^{\perp}$, from \eqref{e:f-q_rV>1_rW>0_1} we obtain $R^Q(X,Y)J=0$\,.\\
\indent
Secondly, let $J,J'\in Z$\,, $S,U\in V$, and $X\in W$. Then we have
\begin{equation} \label{e:f-q_rV>1_rW>0_2}
R^M(X,S\otimes J)(U\otimes J')+R^M(S\otimes J,U\otimes J')X+R^M(U\otimes J',X)(S\otimes J)=0\;.
\end{equation}
\indent
Relation \eqref{e:f-q_rV>1_rW>0_2} implies $R^W(S\otimes J,U\otimes J')X=0$\,, and, as this holds for any $X\in W$,
we obtain $R^Q(S\otimes J,U\otimes J')=0$\,.\\
\indent
Furthermore, with $J=J'$, relation \eqref{e:f-q_rV>1_rW>0_2}\,, also, gives
\begin{equation} \label{e:f-q_rV>1_rW>0_3}
\begin{split}
&\bigl(R^V(X,S\otimes J)U\bigr)\otimes J+U\otimes\bigl(R^Q(X,S\otimes J)J\bigr)\\
&+\bigl(R^V(U\otimes J,X)S\bigr)\otimes J+S\otimes\bigl(R^Q(U\otimes J,X)J\bigr)=0\;.
\end{split}
\end{equation}
Thus, if in \eqref{e:f-q_rV>1_rW>0_3} we assume $S,U$ linearly independent, we obtain
$R^Q(X,S\otimes J)J=0$\,, for any $X\in W$; equivalently,
\begin{equation} \label{e:f-q_rV>1_rW>0_4}
<R^Q(X,S\otimes J)J',J>=0\;,
\end{equation}
for any $X\in W$ and $J'\in Z$, orthogonal on $J$, where $<\cdot,\cdot>$ denotes the Riemannian structure on $Q$\,.\\
\indent
Finally, if $(J,J',J'')$ is an orthonormal frame on $Q$\,, then \eqref{e:f-q_rV>1_rW>0_2} gives
\begin{equation} \label{e:f-q_rV>1_rW>0_5}
\begin{split}
&\bigl(R^V(X,S\otimes J)U\bigr)\otimes J'+U\otimes\bigl(R^Q(X,S\otimes J)J'\bigr)\\
&+\bigl(R^V(U\otimes J',X)S\bigr)\otimes J+S\otimes\bigl(R^Q(U\otimes J',X)J\bigr)=0\;,
\end{split}
\end{equation}
for any $S,U\in V$ and $X\in W$. Hence, if $S,U$ are linearly independent, we deduce
$<R^Q(X,S\otimes J)J',J''>=0$\,, for any $X\in W$. Together with \eqref{e:f-q_rV>1_rW>0_4}\,,
this shows that $R^Q(X,S\otimes J)J'=0$\,, and the proof is complete.
\end{proof}
\begin{prop} \label{prop:f-q_rV>2_rW=0}
Let $(E,V)$ be an almost $f$-quaternionic structure on $M$ and let $\nabla$ be the connection on $E$ induced
by some torsion free connection on $M$, compatible with $(E,V)$\,; denote by $\r:E\to TM$ the projection.\\
\indent
If $\rank E=4\rank V\geq12$ then $(E,\r,\nabla)$ is integrable.
\end{prop}
\begin{proof}
We shall use the same notations as in the proof of Proposition \ref{prop:f-q_rV>1_rW>0}\,. Note that,
$\rank E=4\rank V\geq12$ if and only if $W=0$ and $\rank V\geq3$.\\
\indent
Firstly, we shall prove that, for any $S,U\in V$ and any orthonormal frame $(J,J',J'')$ on $Q$\,, the
following relations hold:
\begin{equation} \label{e:f-q_rV>2_rW=0}
\begin{split}
R^Q(S\otimes J,U\otimes J)J=&\,0\;,\\
R^Q(S\otimes J,U\otimes J)J'=&\,0\;,\\
R^Q(S\otimes J,U\otimes J')J''=&\,0\;.
\end{split}
\end{equation}
\indent
{}From the first Bianchi identity applied to $D$ we obtain that, for any $J,J'\in Z$ and $S,T,U\in V$, we have:
\begin{equation} \label{e:f-q_rV>2_rW=0_1}
\begin{split}
\bigl(R^V&(S\otimes J,T\otimes J)U\bigr)\otimes J'+U\otimes\bigl(R^Q(S\otimes J,T\otimes J)J'\bigr)\\
+&\bigl(R^V(T\otimes J,U\otimes J')S\bigr)\otimes J+S\otimes\bigl(R^Q(T\otimes J,U\otimes J')J\bigr)\\
&+\bigl(R^V(U\otimes J',S\otimes J)T\bigr)\otimes J+T\otimes\bigl(R^Q(U\otimes J',S\otimes J)J\bigr)=0\;.
\end{split}
\end{equation}
\indent
If in \eqref{e:f-q_rV>2_rW=0_1} we take $J=J'$ and $S,T,U$ linearly independent, we obtain that
the first relation of \eqref{e:f-q_rV>2_rW=0} holds, for any $J\in Z$ and $S,U\in V$ (note that, if $S,U$ are linearly
dependent then the first two relations of \eqref{e:f-q_rV>2_rW=0} are trivial).\\
\indent
If in \eqref{e:f-q_rV>2_rW=0_1} we take $J\perp J'$ and either $S,T,U$ linearly independent, or $S=U$ and $S,T$ linearly independent, we obtain
\begin{equation} \label{e:f-q_rV>2_rW=0_2}
\begin{split}
<R^Q(S\otimes J,U\otimes J)J',J''>\,=&\,0\;,\\
<R^Q(S\otimes J,U\otimes J')J,J''>\,=&\,0\;,
\end{split}
\end{equation}
for any orthonormal frame $(J,J',J'')$ on $Q$\,, and any $S,U\in V$.\\
\indent
On swapping $J$ and $J'$, in the second relation of \eqref{e:f-q_rV>2_rW=0_2}\,, we deduce
\begin{equation} \label{e:f-q_rV>2_rW=0_3}
<R^Q(S\otimes J,U\otimes J')J',J''>\,=\,0\;,
\end{equation}
for any orthonormal frame $(J,J',J'')$ on $Q$\,, and any $S,U\in V$.\\
\indent
Now, the second relation of \eqref{e:f-q_rV>2_rW=0_2} and \eqref{e:f-q_rV>2_rW=0_3} imply that the third relation
of \eqref{e:f-q_rV>2_rW=0} holds, as claimed.\\
\indent
Further, the first relation of \eqref{e:f-q_rV>2_rW=0_2} implies that the second relation of \eqref{e:f-q_rV>2_rW=0} holds
if and only if $<R^Q(S\otimes J,U\otimes J)J',J>\,=0$\,; but this is a consequence of the first relation of \eqref{e:f-q_rV>2_rW=0}\,.\\
\indent
To complete the proof, we use Corollary \ref{cor:f-q_D}\,. Thus, we have to prove that for any positive orthonormal frame $(J,J',J'')$ on $Q$\,,
and any $S,U\in V$, the following holds:
\begin{equation} \label{e:f-q_rV>2_rW=0_4}
\begin{split}
<R^Q(S\otimes J,U\otimes J)J,J'+{\rm i}J''>\,=&\,0\;,\\
<R^Q\bigl(S\otimes J,U\otimes(J'+{\rm i}J'')\bigr)J,J'+{\rm i}J''>\,=&\,0\;,\\
<R^Q\bigl(S\otimes(J'+{\rm i}J''),U\otimes(J'+{\rm i}J'')\bigr)J,J'+{\rm i}J''>\,=&\,0\;.
\end{split}
\end{equation}
\indent
Obviously, the first relation of \eqref{e:f-q_rV>2_rW=0_4} is an immediate consequence of the first relation of \eqref{e:f-q_rV>2_rW=0}\,.\\
\indent
Note that, the second relation of \eqref{e:f-q_rV>2_rW=0_2} implies that, for any $A\in J^{\perp}\setminus\{0\}$, we have
\begin{equation*}
R^Q(S\otimes J,U\otimes A)J=\,\frac{<R^Q(S\otimes J,U\otimes A)J,A>}{<A,A>}\,A\;;
\end{equation*}
in particular, $<R^Q(S\otimes J,U\otimes A)J,A>\,=c<A,A>$\,, for any $A\in J^{\perp}$, where $c$ does not depend on $A$\,.
As $J'+{\rm i}J''$ is isotropic this shows that the second relation of \eqref{e:f-q_rV>2_rW=0_4} holds.\\
\indent
Finally, the last two relations of \eqref{e:f-q_rV>2_rW=0} (applied to suitable orthonormal frames)
imply $R^Q\bigl(S\otimes(J'+{\rm i}J''),U\otimes(J'+{\rm i}J'')\bigr)J=0$\,. Hence, also, the third relation of \eqref{e:f-q_rV>2_rW=0_4} holds.\\
\indent
The proof is complete.
\end{proof}
\indent
We can, now, give a new proof for \cite[Theorem 4.9]{fq_2}\,, where, note that, the condition $\rank V\neq1$ was
omitted, due to a misprint.
\begin{cor} \label{cor:f-q_D_torsion-free_integrab}
Let $(E,V)$ be an almost $f$-quaternionic structure on $M$ and let $\nabla$ be the connection on $E$ induced
by some torsion free connection $D$ on $M$, compatible with $(E,V)$\,.\\
\indent
If either $\rank E>4\rank V\geq8$ or $\rank E=4\rank V\geq12$ then $(E,V,\nabla)$ is integrable.
\end{cor}
\begin{proof}
The integrability of the underlying almost co-CR quaternionic structure is a consequence of
Corollary \ref{cor:f-q_D}\,, and Propositions \ref{prop:f-q_rV>1_rW>0} and \ref{prop:f-q_rV>2_rW=0}\,.\\
\indent
The integrability of the underlying almost CR quaternionic structure is a consequence of
Proposition \ref{prop:f-q_rV>1_rW>0} and \cite[Proposition 4.5]{fq}\,.
\end{proof}
\begin{cor} \label{for:connections_flat_on_Q_and_V}
Let $(E,V)$ be an almost $f$-quaternionic structure on $M$ and let $\nabla$ be the connection on $E$ induced
by some torsion free connection on $M$, compatible with $(E,V)$\,.\\
\indent
If the connection induced on $Q$ is flat then, also, the connection induced on $V$ is flat.\\
\indent
The converse also holds if $\rank V\geq3$\,.
\end{cor}
\begin{proof}
As in the proof of Proposition \ref{prop:f-q_rV>1_rW>0} we deduce that \eqref{e:f-q_rV>1_rW>0_1}
and \eqref{e:f-q_rV>1_rW>0_5} hold. Hence, if $R^Q=0$ then $R^V(X,Y)U=0$ and $R^V(S\otimes J,X)U=0$ for
any $S,U\in V$\,, $X,Y\in W$ and $J\in Z$\,.\\
\indent
{}From the first Bianchi identity applied to $D$ we obtain that, for any $J,J',J''\in Z$ and $S,T,U\in V$, we have:
\begin{equation} \label{e:connections_flat_on_Q_and_V}
\begin{split}
\bigl(R^V&(S\otimes J,T\otimes J')U\bigr)\otimes J''+U\otimes\bigl(R^Q(S\otimes J,T\otimes J')J''\bigr)\\
+&\bigl(R^V(T\otimes J',U\otimes J'')S\bigr)\otimes J+S\otimes\bigl(R^Q(T\otimes J',U\otimes J'')J\bigr)\\
&+\bigl(R^V(U\otimes J'',S\otimes J)T\bigr)\otimes J'+T\otimes\bigl(R^Q(U\otimes J'',S\otimes J)J'\bigr)=0\;.
\end{split}
\end{equation}
\indent
If $R^Q=0$ and $J,J',J''$ are linearly independent then, from \eqref{e:connections_flat_on_Q_and_V} we obtain that
$R^V(S\otimes A,T\otimes B)U=0$ for any $S,T,U\in V$ and $A,B\in Q$ (here, we have used the continuity of the map
$(A,B)\mapsto R^V(S\otimes A,T\otimes B)U$, to allow $A,B$ linearly dependent).\\
\indent
Similarly, if $\rank V\geq3$ and $R^V=0$\,, from Proposition \ref{prop:f-q_rV>1_rW>0} and \eqref{e:connections_flat_on_Q_and_V}
we obtain $R^Q=0$\,.
\end{proof}
\indent
We end with the following result.
\begin{thm} \label{thm:f-q_generic_dims}
Let $(E,V)$ be an almost $f$-quaternionic structure on $M$ and let $\nabla$ be the connection on $E$ induced
by some torsion free connection on $M$, compatible with $(E,V)$\,.\\
\indent
If $\rank E>4\rank V\geq8$ then, locally, $(M,E,V,\nabla)$ is the product of
$({\rm Im}\mathbb{H})^{\rank V}$ with a hypercomplex manifold.
\end{thm}
\begin{proof}
By Proposition \ref{prop:f-q_rV>1_rW>0} and Corollary \ref{for:connections_flat_on_Q_and_V} the connections induced
on $V$ and $Q$ are flat. Furthermore, as in the proof of Proposition \ref{prop:co-cr_q_with_first_Bianchi}
(note that, here, we do not need \cite[Theorem 2.2]{KMa-BLMS95}\,) we obtain
that, locally, $V$ is a hypercomplex manifold such that the projection onto $M$ is twistorial.\\
\indent
Moreover, we have that $\p^*\nabla$ restricts to give a flat connection on the quaternionic distribution $K$ on $V$
generated by ${\rm ker}\dif\!\p$\,; indeed, we have $K=\p^*\bigl(V\oplus(V\otimes Q)\bigr)$\,.
Therefore $K$ is integrable and, as $\p^*\nabla$ is, also, torsion free, its leaves are,
locally, quaternionic vector spaces, whose linear quaternionic structures are preserved by the parallel transport of $\p^*\nabla$.
Thus, if $U$ is a covariantly constant section of $K$ and $X$ is a section of $\p^*W$ then $[U,X]=(\p^*\nabla)_UX$ is a section
of $\p^*W$. Hence, the linear quaternionic structures on the leaves of $K$ are (locally) projectable with respect to $\p^*W$.
Therefore, locally, there exists a quaternionic submersion $\phi$ from $V$ onto $\mathbb{H}^{\rank V}$ which, by \cite{IMOP}\,, is twistorial.
Thus, $\phi$ restricted to $M$, identified with the zero section of $V$, is a twistorial submersion onto $({\rm Im}\mathbb{H})^{\rank V}$ whose fibres
are the leaves of $W$.\\
\indent
Now, as $Q$ is flat, $V$ is locally a hypercomplex manifold and $\p^*\nabla$ is its Obata connection. Let $J$ be any covariantly constant
admissible complex structure on $V$. Thus, $T^JV={\rm ker}(J+{\rm i})$ is preserved by $\p^*\nabla$. Hence, if $X$ is a section of $T^JV$
and $U$ is a section of $K$, we have that $[U,X]=(\p^*\nabla)_UX-(\p^*\nabla)_XU$ is a section of $T^JV+K$. Therefore $T^JV$ is projectable
with respect to $K$. This shows that, locally, there exists a triholomorphic submersion from $V$ onto a hypercomplex manifold $N$, with $\dim N=\rank W$,
which factorises into $\p$ followed by a twistorial submersion $\psi$ from $M$ to $N$; also, the latter is triholomorphic when restricted to the leaves of $W$.\\
\indent
Finally, the map $M\to({\rm Im}\mathscr{H}q\!)^{\rank V}\times N$, $x\mapsto\bigl(\phi(x),\psi(x)\bigr)$ provides the claimed (twistorial) identification.
\end{proof}
\end{document}
\begin{document}
\baselineskip=16pt
\titlerunning{}
\title{On $q$--covering designs}
\author{Francesco Pavese}
\date{}
\maketitle
\address{F. Pavese: Dipartimento di Meccanica, Matematica e Management, Politecnico di Bari, Via Orabona 4, 70125 Bari, Italy; \email{[email protected]}}
\MSC{Primary 51E20, 05B40; Secondary 05B25, 51A05.}
\begin{abstract}
A $q$--covering design $\mathbb{C}_q(n,k,r)$, $k \ge r$, is a collection $\mathcal X$ of $(k-1)$--spaces of ${\rm PG}(n-1,q)$ such that every $(r-1)$--space of ${\rm PG}(n-1,q)$ is contained in at least one element of $\mathcal X$. Let $\mathcal C_q(n,k,r)$ denote the minimum number of $(k-1)$--spaces in a $q$--covering design $\mathbb{C}_q(n,k,r)$. In this paper improved upper bounds on $\mathcal C_q(2n, 3, 2)$, $n \ge 4$, $\mathcal C_q(3n+8, 4, 2)$, $n \ge 0$, and $\mathcal C_q(2n, 4, 3)$, $n \ge 4$, are presented. The results are achieved by constructing the related $q$--covering designs.
\keywords{$q$--covering design; MRD--codes.}
\end{abstract}
\section{Introduction}
Let $q$ be any prime power, let ${\rm GF}(q)$ be the finite field with $q$ elements and let ${\rm PG}(n-1, q)$ be the $(n-1)$--dimensional projective space over ${\rm GF}(q)$. We will use the term $k$--space to denote a subspace of ${\rm PG}(n-1,q)$ of projective dimension $k$. Let $t \le s$. A {\em blocking set } $\mathbb B$ is a set of $(t-1)$--spaces of ${\rm PG}(n-1,q)$ such that every $(s-1)$--space of ${\rm PG}(n-1,q)$ contains at least one element of $\mathbb B$.
In the last fifty years the general problem of determining the smallest cardinality of a blocking set $\mathbb B$ has been studied by several authors (see \cite{KM, CSS} and references therein) and in very few cases has been completely solved \cite{BB, B1, BU, EM, KM1}.
A blocking set $\mathbb B$ can be seen as a $q$--analog of a well known combinatorial design, called {\em Tur\'an design}, see \cite{EV}, \cite{E}. Indeed, a blocking set $\mathbb B$ is also called a {\em $q$--Tur\'an design} $\mathbb{T}_q(n,t,s)$. The dual structure of a $q$--Tur\'an design $\mathbb{T}_q(n,t,s)$ is called a {\em $q$--covering design} and it is denoted by $\mathbb{C}_q(n,n-t,n-s)$. In other words, a $q$--covering design $\mathbb{C}_q(n,k,r)$ is a collection $\mathcal X$ of $(k-1)$--spaces of ${\rm PG}(n-1,q)$ such that every $(r-1)$--space of ${\rm PG}(n-1,q)$ is contained in at least one element of $\mathcal X$. Let $\mathcal C_q(n,k,r)$ denote the minimum number of $(k-1)$--spaces in a $q$--covering design $\mathbb{C}_q(n,k,r)$. Lower and upper bounds on $\mathcal C_q(n,k,r)$ were considered in \cite{EV}, \cite{E}. Lower bounds are obtained by analytical methods, and upper bounds are obtained by explicit constructions of the related $q$--covering designs. A $q$--covering design $\mathbb{C}_q(n,k,r)$ which covers every $(r-1)$--space exactly once is called a {\em $q$--Steiner system}.
The concept of $q$--covering design is of interest not only in projective geometry and design theory, but also in coding theory. Indeed, in recent years there has been an increasing interest in $q$--covering designs due to their connections with constant--dimension codes. In particular, a $q$--Steiner system is an optimal constant--dimension code (so far there is only one known example of $q$--Steiner system, i.e., the $2$--covering design $\mathbb{C}_2(13, 3, 2)$ of smallest possible size \cite{BEOVW}). Observe that, as shown in the inspiring article by Koetter and Kschischang \cite{KK}, constant--dimension codes can be used for error--correction in random linear network coding theory.
In this paper we discuss bounds on $q$--covering designs. In Section \ref{1}, based on the $q$--covering design $\mathbb{C}_q(6, 3, 2)$ constructed in \cite{CP}, an improved upper bound on $\mathcal C_q(2n, 3, 2)$, $n \ge 4$, is presented. In the last two sections, starting from a lifted MRD--code, improvements on the upper bounds of $\mathcal C_q(3n+8, 4, 2)$, $n \ge 0$, and $\mathcal C_q(2n, 4, 3)$, $n \ge 4$, are obtained. In particular, first $q$--covering designs $\mathbb{C}_q(8, 4, r)$, $r = 2, 3$, of ${\rm PG}(7,q)$ are constructed. Then, by induction, $q$--covering designs $\mathbb{C}_q(3n+8, 4, 2)$, $n \ge 0$, and $\mathbb{C}_q(2n, 4, 3)$, $n \ge 4$, are presented.
In the sequel we will use the following notation $\theta_{n,q}:= \genfrac{[}{]}{0pt}{}{n+1}{1}_q=q^n + \ldots + q + 1$.
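Recall that $\theta_{n,q}$ is the number of points of an $n$--dimensional projective space ${\rm PG}(n,q)$; this observation underlies several of the counting arguments in the sequel.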
\section{Preliminaries}
A {\em conic} of ${\rm PG}(2,q)$ is the set of points of ${\rm PG}(2,q)$ satisfying a quadratic equation: $a_{11} X_1^2 + a_{22}X_2^2 + a_{33}X_3^2 + a_{12}X_1X_2 + a_{13}X_1X_3 + a_{23}X_2X_3 = 0$. There exist four kinds of conics in ${\rm PG}(2,q)$, three of which are degenerate (splitting into lines, which may be defined only over the extended plane ${\rm PG}(2,q^2)$) and one of which is non--degenerate, see \cite{H}.
A {\em regulus} is the set of lines intersecting three skew (disjoint) lines and has size $q+1$. The {\em hyperbolic quadric} $\mathcal Q^+(3,q)$, is the set of points of ${\rm PG}(3,q)$ which satisfy the equation $X_1 X_2 + X_3 X_4 = 0$. The hyperbolic quadric $\mathcal Q^+(3,q)$ consists of $(q+1)^2$ points and $2(q+1)$ lines that are the union of two reguli. Through a point of $\mathcal Q^+(3,q)$ there pass two lines belonging to different reguli.
A {\em line--spread} of ${\rm PG}(3,q)$ is a set $\mathcal S$ of $q^2+1$ lines of ${\rm PG}(3,q)$ with the property that each point of ${\rm PG}(3,q)$ is incident with exactly one element of $\mathcal S$. A {\em $1$--parallelism} of ${\rm PG}(3,q)$ is a collection $\mathcal P$ of $q^2+q+1$ line--spreads such that each line of ${\rm PG}(3,q)$ is contained in exactly one line--spread of $\mathcal P$. In \cite{B} the author proved that there exist $1$--parallelisms in ${\rm PG}(3,q)$.
The {\em Klein quadric} $\mathcal Q^+(5,q)$, is the set of points of ${\rm PG}(5,q)$ which satisfy the equation $X_1 X_2 + X_3 X_4 + X_5 X_6 = 0$. The Klein quadric contains $(q^2+1)(q^2+q+1)$ points and two families each consisting of $q^3+q^2+q+1$ planes called {\em Latin planes} and {\em Greek planes}. Two distinct planes in the same family share exactly one point, whereas planes lying in distinct families are either disjoint or meet in a line. A line of ${\rm PG}(5,q)$ not contained in $\mathcal Q^+(5,q)$ is either {\em external}, or {\em tangent}, or {\em secant} to $\mathcal Q^+(5,q)$, according as it contains $0, 1$ or $2$ points of $\mathcal Q^+(5,q)$. A hyperplane of ${\rm PG}(5,q)$ contains either $q^3+2q^2+q+1$ or $q^3+q^2+q+1$ points of $\mathcal Q^+(5,q)$. In the former case the hyperplane is called {\em tangent}, contains the $2(q+1)$ planes of $\mathcal Q^+(5,q)$ through one of its points, say $R$, and meets $\mathcal Q^+(5,q)$ in a cone having as vertex the point $R$ and as base a hyperbolic quadric $\mathcal Q^+(3,q)$. In the latter case the hyperplane is called {\em secant} and contains no plane of $\mathcal Q^+(5,q)$. The stabilizer of $\mathcal Q^+(5,q)$ in ${\rm PG}L(6,q)$, say $G$, contains a subgroup isomorphic to ${\rm PG}L(4,q)$. Also, the stabilizer in $G$ of a plane $g$ of $\mathcal Q^+(5,q)$ contains a subgroup $H$ isomorphic to ${\rm PG}L(3,q)$ acting in its natural representation on the points and lines of $g$. For more details see \cite[Chapter 1]{H1}. A {\em Singer cyclic subgroup} of ${\rm PG}L(3, q)$ is a cyclic group acting regularly on points and lines of a projective plane ${\rm PG}(2,q)$.
\subsection{Lifting an MRD--code}
The set ${\cal M}_{n \times m}(q)$, $n \le m$, of $n \times m$ matrices over the finite field ${\rm GF}(q)$ forms a metric space with respect to the {\em rank distance} defined by $d_r(A,B)= \mbox{{\em rank}} (A-B)$. The maximum size of a code of minimum distance $\delta$, with $1 \le \delta \le n$, in $\left({\cal M}_{n\times m}(q),d_r\right)$ is $q^{m(n-\delta+1)}$. A code $\mathcal A \subset \mathcal M_{n \times m}(q)$ attaining this bound is said to be an $(n \times m, \delta)_q$ {\em maximum rank distance code} (or {\em MRD--code} for short). A rank distance code $\mathcal A$ is called {\em ${\rm GF}(q)$--linear} if $\cal A$ is a subspace of ${\cal M}_{n \times m}(q)$ considered as a vector space over ${\rm GF}(q)$. Linear MRD--codes exist for all possible parameters \cite{Delsarte, Gabidulin, Roth, Sheekey}.
We recall the so--called {\em lifting process} for a matrix $A \in \mathcal M_{n \times m}(q)$, see \cite{SKK}. Let $I_n$ be the $n \times n$ identity matrix. The rows of the $n \times (n+m)$ matrix $(I_n | A)$ can be viewed as coordinates of points in general position of an $(n-1)$--space of ${\rm PG}(n+m-1, q)$. This subspace is denoted by $L(A)$. Hence the matrix $A$ can be ``lifted'' to the $(n-1)$--space $L(A)$. Let $U_i$ be the point of ${\rm PG}(n+m-1, q)$ having $1$ in the $i$--th position and $0$ elsewhere, $1 \le i \le n+m$, and let $\Sigma$ be the $(m-1)$--space of ${\rm PG}(n+m-1,q)$ containing $U_{n+1}, \ldots, U_{n+m}$. Note that if $A \in \mathcal A$, then $L(A)$ is disjoint from $\Sigma$.
\begin{prop} \label{lifting}
\begin{itemize}
\item[i)] If $\mathcal A$ is a $(3 \times m, 2)_q$ MRD--code, $m \ge 3$, then $\mathcal X = \{L(A) \; | \; A \in \mathcal A\}$ is a set of $q^{2m}$ planes of ${\rm PG}(m+2, q)$ such that every line of ${\rm PG}(m+2, q)$ disjoint from $\Sigma$ is contained in exactly one element of $\mathcal X$.
\item[ii)] If $\mathcal A$ is a $(4 \times m, 3)_q$ MRD--code, $m \ge 4$, then $\mathcal X = \{L(A) \; | \; A \in \mathcal A\}$ is a set of $q^{2m}$ solids of ${\rm PG}(m+3,q)$ such that every line of ${\rm PG}(m+3, q)$ disjoint from $\Sigma$ is contained in exactly one element of $\mathcal X$.
\item[iii)] If $\mathcal A$ is a $(4 \times m, 2)_q$ MRD--code, $m \ge 4$, then $\mathcal X = \{L(A) \; | \; A \in \mathcal A\}$ is a set of $q^{3m}$ solids of ${\rm PG}(m+3,q)$ such that every plane of ${\rm PG}(m+3, q)$ disjoint from $\Sigma$ is contained in exactly one element of $\mathcal X$.
\end{itemize}
\end{prop}
\begin{proof}
{\em i)} \ Let $L(A_1), L(A_2) \in \mathcal X$, where $A_1, A_2 \in \mathcal A$ and $A _1 \ne A_2$. Assume by contradiction that $L(A_1) \cap L(A_2)$ contains a line. Then we would have
$$
\mbox{{\em rank}}
\left(
\begin{array}{c|c}
I_3 & A_1 \\
I_3 & A_2
\end{array}
\right) \le 4 .
$$
On the other hand,
$$
\mbox{{\em rank}}
\left(
\begin{array}{c|c}
I_3 & A_1 \\
I_3 & A_2
\end{array}
\right) =
\mbox{{\em rank}}
\left(
\begin{array}{c|c}
I_3 & A_1 \\
0 & A_2 - A_1
\end{array}
\right) =
3 + \mbox{{\em rank}}
\left( A_2 - A_1 \right) \ge 5 ,
$$
a contradiction. Therefore the set $\mathcal X$ contains $q^{2m}$ planes pairwise meeting in at most a point. Hence there are $(q^2+q+1)q^{2m}$ lines of ${\rm PG}(m+2, q)$ lying on a plane of $\mathcal X$. But
$$
\frac{\left( \theta_{m+2, q} - \theta_{m-1, q} \right) \left( \theta_{m+1, q} - \theta_{m-1, q} \right)}{q+1} = q^{2m}(q^2+q+1)
$$
is the total number of lines of ${\rm PG}(m+2, q)$ that are disjoint from $\Sigma$.
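Indeed, there are $\theta_{m+2,q}-\theta_{m-1,q}=q^m(q^2+q+1)$ points of ${\rm PG}(m+2,q)$ not in $\Sigma$, through each of them there pass $\theta_{m+1,q}-\theta_{m-1,q}=q^m(q+1)$ lines disjoint from $\Sigma$, and every such line contains $q+1$ of these points, so that the left hand side above equals $q^{2m}(q^2+q+1)$.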
A similar argument can be used in the remaining cases {\em ii)} and {\em iii)}.
\end{proof}
\section{On $\mathcal C_q(2n,3,2)$}\label{1}
In this section we provide an upper bound on $\mathcal C_q(2n, 3,2)$, $n \ge 4$. In \cite{CP}, a constructive upper bound on $\mathcal C_q(6,3,2)$ has been given. In what follows we recall the construction and some of the properties of this $q$--covering design.
\begin{cons}
Let $g$ be a Greek plane of $\mathcal Q^+(5,q)$. From \cite[Lemma 2.2]{CP}, there exists a set $\mathcal X$ of $q^6-q^3$ planes disjoint from $g$ and meeting $\mathcal Q^+(5,q)$ in a non--degenerate conic that, together with the set $\mathcal Y$ of $q^3+q^2+q$ Greek planes of $\mathcal Q^+(5,q)$ distinct from $g$, cover every line $\ell$ of ${\rm PG}(5,q)$ that is either disjoint from $g$ or contained in $\mathcal Q^+(5,q) \setminus g$.
Let $\ell$ be a line of $g$. Through the line $\ell$ there pass $q-1$ planes meeting $\mathcal Q^+(5,q)$ exactly in $\ell$ and a unique Latin plane $\pi$. Varying the line $\ell$ over the plane $g$ and considering the planes meeting $\mathcal Q^+(5,q)$ exactly in $\ell$, we get a family $\mathcal Z$ consisting of $(q-1)(q^2+q+1) = q^3-1$ planes. From \cite[Lemma 2.3]{CP}, every line that is tangent to $\mathcal Q^+(5,q)$ at a point of $g$ is contained in exactly one plane of $\mathcal Z$.
Let $P$ be a point of $\ell$. Through the point $P$ there pass $q$ lines of $\pi$ and $q$ lines of $g$ distinct from $\ell$ and contained in $\mathcal Q^+(5,q)$. Let $S$ be the set of $q^2$ planes generated by a line of $\pi$ through $P$ distinct from $\ell$ and a line of $g$ through $P$ distinct from $\ell$. Let $C$ be a Singer cyclic group of the group $H \simeq {\rm PG}L(3,q)$. Here $H$ is a subgroup of $G$ stabilizing the plane $g$. Let $\mathcal T$ be the orbit of the set $S$ under $C$. Then $\mathcal T$ consists of $q^2(q^2+q+1)$ planes and each of these planes has $2q+1$ points in common with $\mathcal Q^+(5,q)$ on two intersecting lines of $\mathcal Q^+(5,q)$. From \cite[Lemma 2.4]{CP}, every line that is secant to $\mathcal Q^+(5,q)$ and has a point on $g$ is contained in exactly one plane of $\mathcal T$.
\end{cons}
\begin{theorem}\cite[Theorem 2.5]{CP}\label{th1}
The set $\mathcal X \cup \mathcal Y \cup \mathcal Z \cup \mathcal T$ is a $q$--covering design $\mathbb{C}_q(6,3,2)$ of size $q^6+q^4+2q^3+2q^2+q-1$.
\end{theorem}
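The stated size is immediate from the construction:
$$|\mathcal X| + |\mathcal Y| + |\mathcal Z| + |\mathcal T| = (q^6-q^3) + (q^3+q^2+q) + (q^3-1) + q^2(q^2+q+1) = q^6+q^4+2q^3+2q^2+q-1.$$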
We will need the following result.
\begin{theorem}\label{th2}
There exists a hyperplane $\Gamma$ of ${\rm PG}(5,q)$ such that $q^3+2q^2+q-1$ elements of $\mathcal X \cup \mathcal Y \cup \mathcal Z \cup \mathcal T$ are contained in $\Gamma$.
\end{theorem}
\begin{proof}
Let $\Gamma$ be a hyperplane of ${\rm PG}(5,q)$ containing $g$. Then $\Gamma$ is a tangent hyperplane and contains the planes of $\mathcal Q^+(5,q)$ through a point $R$ of $g$. In particular, there are $q$ planes of $\mathcal Y$ contained in $\Gamma$. First of all observe that no plane of $\mathcal X$ is contained in $\Gamma$. Indeed, by way of contradiction, assume that a plane of $\mathcal X$ is contained in $\Gamma$. Then such a plane would meet $g$ in at least one point, contradicting the fact that every plane of $\mathcal X$ is disjoint from $g$. A plane of $\mathcal Z$ that is contained in $\Gamma$ has to contain the point $R$. On the other hand, the $q-1$ planes of $\mathcal Z$, passing through a line of $g$ which is incident with $R$, are contained in $\Gamma$. Hence there are $(q+1)(q-1) = q^2-1$ planes of $\mathcal Z$ contained in $\Gamma$. If $\pi$ is a Latin plane contained in $\Gamma$, then $\pi \cap g$ is a line, say $\ell$. By construction there is a point $P \in \ell$ such that the set $\mathcal T$ contains $q^2$ planes meeting $\pi$ in a line through $P$ and $g$ in a line through $P$. Note that these $q^2$ planes of $\mathcal T$ are contained in $\Gamma$. It follows that there are $q^2(q+1)$ planes of $\mathcal T$ contained in $\Gamma$. Summing up, $q + (q^2-1) + q^2(q+1) = q^3+2q^2+q-1$ elements of $\mathcal X \cup \mathcal Y \cup \mathcal Z \cup \mathcal T$ are contained in $\Gamma$, and the result follows.
\end{proof}
The $q$--covering design of Theorem \ref{th1} can be used recursively to obtain a $q$--covering design $\mathbb{C}_q(2n, 3, 2)$, $n \ge 4$, as described in the following result.
\begin{theorem}\label{th3}
There exists a $q$--covering design $\mathbb{C}_q(2n, 3, 2)$, $n \ge 3$, and a hyperplane $\Gamma$ of ${\rm PG}(2n-1,q)$ such that there are $q^{2n-3} + \sum_{j = 0}^{n-2} q^{2(n+j-1)}$ planes of $\mathbb{C}_q(2n,3,2)$ not contained in $\Gamma$ and $(q+1) \left(\sum_{i=2}^{n-1} \left(q^{2i-3} + \sum_{j = 0}^{i-2} q^{2(i+j-1)}\right)\right) - 1$ planes of $\mathbb{C}_q(2n,3,2)$ contained in $\Gamma$.
\end{theorem}
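Note that for $n = 3$ the two counts above specialize to $q^6+q^4+q^3$ and $(q+1)(q+q^2)-1 = q^3+2q^2+q-1$, in accordance with Theorem \ref{th1} and Theorem \ref{th2}.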
\begin{proof}
By induction on $n$. If $n = 3$, then, from Theorem \ref{th1} and Theorem \ref{th2}, the result holds true. Assume that the result holds true for $n-1$: there are $q^{4n-10} + q^{4n-12} + \ldots + q^{2n-2} + q^{2n-4} + q^{2n-5}$ planes of a $\mathbb{C}_q(2n-2,3,2)$ not contained in a $(2n-4)$--space $\bar{\Lambda}$ of ${\rm PG}(2n-3,q)$ and $(q+1) \left(\sum_{i=2}^{n-2} q^{4i-6} + q^{4i-8} + \ldots + q^{2i} + q^{2i-2} + q^{2i-3} \right) - 1$ planes of $\mathbb{C}_q(2n-2,3,2)$ contained in $\bar{\Lambda}$.
We claim that the result holds true for $n$. In ${\rm PG}(2n-1,q)$, let $\Lambda$ be the $(2n-4)$--space $\langle U_4, \ldots, U_{2n} \rangle$. Let $\mathcal A$ be a $(3 \times (2n-3), 2)_q$ MRD--code and let $\mathcal U = \{L(A) \; | \; A \in \mathcal A\}$ be the set of $q^{4n-6}$ planes of ${\rm PG}(2n-1,q)$ obtained by lifting the matrices of $\mathcal A$. Let $\pi$ be a plane disjoint from $\Lambda$. For every point $P \in \pi$, let $\Lambda_P$ be the $(2n-3)$--space $\langle \Lambda, P \rangle$. From the induction hypothesis there is a $q$--covering design $\mathbb{C}_q(2n-2,3,2)$ of $\Lambda_P$, say $\mathcal D_P$, such that there is a subset of $\mathcal D_P$, say $\bar{\mathcal D}_P$, consisting of the $q^{4n-10} + q^{4n-12} + \ldots + q^{2n-2} + q^{2n-4} + q^{2n-5}$ planes of $\mathcal D_P$ not contained in $\Lambda$. Moreover $|\mathcal D_P \setminus \bar{\mathcal D}_P| = (q+1) \left(\sum_{i=2}^{n-2} q^{4i-6} + q^{4i-8} + \ldots + q^{2i} + q^{2i-2} + q^{2i-3} \right) - 1$.
Let $\mathcal V = \bigcup_{P \in \pi} \bar{\mathcal D}_P$ and let $\mathcal W = \mathcal D_P \setminus \bar{\mathcal D}_P$ (here the designs $\mathcal D_P$ are chosen in such a way that the planes of $\mathcal D_P$ contained in $\Lambda$ are the same for every $P \in \pi$, so that $\mathcal W$ does not depend on $P$). First of all observe that $\mathcal U \cup \mathcal V \cup \mathcal W$ is a $q$--covering design $\mathbb{C}_q(2n,3,2)$. Indeed, from Proposition~\ref{lifting}, every line of ${\rm PG}(2n-1,q)$ disjoint from $\Lambda$ is contained in exactly one element of $\mathcal U$. On the other hand, if $r$ is a line of ${\rm PG}(2n-1,q)$ meeting $\Lambda$ in at least one point, then $r$ is contained in $\Lambda_P$, for some $P \in \pi$, and $r$ is contained in at least one plane of $\mathcal D_P$. Hence $\mathcal U \cup \mathcal V \cup \mathcal W$ is a $q$--covering design $\mathbb{C}_q(2n,3,2)$.
Let $\ell$ be a line of $\pi$ and let $\Gamma$ be the hyperplane $\langle \Lambda, \ell \rangle$ of ${\rm PG}(2n-1,q)$. We will prove that there are $q^{4n-6} + q^{4n-8} + \ldots + q^{2n} + q^{2n-2} + q^{2n-3}$ planes of $\mathcal U \cup \mathcal V \cup \mathcal W$ not contained in $\Gamma$. Since every plane of $\mathcal U$ is disjoint from $\Lambda$, we have that no plane of $\mathcal U$ is contained in $\Gamma$. The planes of $\mathcal V$ not contained in $\Gamma$ are those of the set $\bigcup_{P \in \pi, P \notin \ell} \bar{\mathcal D}_P$. Moreover the planes of $\mathcal W$ are contained in $\Gamma$. Hence there are
\begin{multline*}
q^{4n-6} + q^2\left(q^{4n-10} + q^{4n-12} + \ldots + q^{2n-2} + q^{2n-4} + q^{2n-5}\right) \\
= q^{4n-6} + q^{4n-8} + \ldots + q^{2n} + q^{2n-2} + q^{2n-3}
\end{multline*}
planes of $\mathcal U \cup \mathcal V \cup \mathcal W$ not contained in $\Gamma$. Finally note that the planes of $\mathcal U \cup \mathcal V \cup \mathcal W$ contained in $\Gamma$ are those of $\bigcup_{P \in \ell} \bar{\mathcal D}_P \cup \mathcal W$. Hence there are
\begin{multline*}
(q+1)\left(q^{4n-10} + q^{4n-12} + \ldots + q^{2n-2} + q^{2n-4} + q^{2n-5}\right) + \\
+ (q+1) \left(\sum_{i=2}^{n-2} q^{4i-6} + q^{4i-8} + \ldots + q^{2i} + q^{2i-2} + q^{2i-3} \right) - 1 \\
= (q+1) \left(\sum_{i=2}^{n-1} q^{4i-6} + q^{4i-8} + \ldots + q^{2i} + q^{2i-2} + q^{2i-3} \right) - 1
\end{multline*}
planes of $\mathcal U \cup \mathcal V \cup \mathcal W$ contained in $\Gamma$.
\end{proof}
Theorem \ref{th3}, together with \cite[Theorem 1]{E} gives the following result.
\begin{cor}\label{lines}
If $n \ge 3$, then
\begin{multline*}
\left\lceil \frac{\theta_{n-1,q^2} \theta_{2n-2,q}}{q^2+q+1} \right\rceil \le \mathcal C_q(2n,3,2) \le q^{2n-2} \theta_{n-2,q^2} + q^{2n-3} -1 + \sum_{i = 2}^{n-1} \left( \theta_{4i-5, q} - \theta_{2i-4, q} + q^{2i-2} \right) .
\end{multline*}
\end{cor}
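For instance, for $n = 3$ and $q = 2$ the bounds of Corollary \ref{lines} read $93 \le \mathcal C_2(6,3,2) \le 105$.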
\section{On $\mathcal C_q(3n+8,4,2)$}\label{2}
In this section we provide an upper bound on $\mathcal C_q(3n+8,4,2)$, $n \ge 0$. We first deal with the case $n = 0$.
\begin{cons}
Let $\mathcal A$ be a $(4 \times 4, 3)_q$ MRD--code and let $\mathcal X = \{L(A) \; | \; A \in \mathcal A\}$ be the set of $q^8$ solids of ${\rm PG}(7,q)$ obtained by lifting the matrices of $\mathcal A$. Let $\Sigma'$ be the solid of ${\rm PG}(7,q)$ containing $U_1, U_2, U_3, U_4$. Then $\Sigma'$ is disjoint from $\Sigma$. Let $\mathcal S = \{\ell_i \; | \; 1 \le i \le q^2+1\}$ be a line--spread of $\Sigma$, let $\mathcal S' = \{\ell_i' \; | \; 1 \le i \le q^2+1\}$ be a line--spread of $\Sigma'$ and let $\mu: \ell_i' \in \mathcal S' \longmapsto \ell_i \in \mathcal S$ be a bijection. Let $\Gamma_i$ denote the $5$--space containing $\Sigma$ and $\ell_i'$, $1 \le i \le q^2+1$. If $\gamma$ is a plane of $\Sigma$, then there are $q^2+q$ solids of $\Gamma_i$ meeting $\Sigma$ exactly in $\gamma$. Let $\mathcal Y_i$ be the set of $q(q+1)^2$ solids of $\Gamma_i$ (distinct from $\Sigma$) meeting $\Sigma$ in a plane containing $\mu(\ell_i')$; indeed, there are $q+1$ planes of $\Sigma$ through $\mu(\ell_i')$, each lying on $q^2+q$ such solids, so that $|\mathcal Y_i| = (q+1)(q^2+q) = q(q+1)^2$. Let $\mathcal Y = \bigcup_{i=1}^{q^2+1} \mathcal Y_i$. Then $\mathcal Y$ consists of $q(q+1)^2(q^2+1)$ solids.
\end{cons}
\begin{theorem}\label{lines1}
The set $\mathcal X \cup \mathcal Y$ is a $q$--covering design $\mathbb{C}_q(8,4,2)$ of size $q^8+q(q+1)^2(q^2+1)$.
\end{theorem}
\begin{proof}
Let $r$ be a line of ${\rm PG}(7,q)$. If $r$ is disjoint from $\Sigma$, then, from Proposition \ref{lifting}, we have that $r$ is contained in exactly one element of $\mathcal X$. If $r$ meets $\Sigma$ in one point, say $P$, then let $\Lambda$ be the $4$--space $\langle \Sigma, r \rangle$, let $\ell_j$ be the unique line of $\mathcal S$ containing $P$, let $P'$ be the point $\Sigma' \cap \Lambda$ and let $\ell_k'$ be the unique line of $\mathcal S'$ containing $P'$. If $j = k$, then $P \in \ell_k$ and $r$ is contained in the $q+1$ solids $\langle \alpha, r \rangle$ of $\mathcal Y$, where $\alpha$ is a plane of $\Sigma$ containing $\ell_k$. If $j \ne k$, then $P \notin \ell_k$. Let $\beta$ be the plane of $\Sigma$ containing $\ell_k$ and $P$. Then $r$ is contained in $\langle \beta, r \rangle$, where $\langle \beta, r \rangle$ is a solid of $\mathcal Y$. Finally, if $r$ is a line of $\Sigma$, then $r$ is contained in $q(q+1)^2$ solids of $\mathcal Y$.
\end{proof}
\begin{remark}
Let $\mathcal L$ be a Desarguesian line--spread of ${\rm PG}(7,q)$. There are $(q^4+1)(q^4+q^2+1)$ solids of ${\rm PG}(7,q)$ containing exactly $q^2+1$ lines of $\mathcal L$. If $\mathcal Z$ denotes the set of these solids, then it is not difficult to see that every line of ${\rm PG}(7,q)$ is contained in at least a solid of $\mathcal Z$. In \cite[p. 221]{KM}, K. Metsch posed the following question:
``Is $(q^4 +1)(q^4 +q^2 +1)$ the smallest cardinality of a set of $3$--spaces of ${\rm PG}(7,q)$ that cover every line?'' Theorem \ref{lines1} provides a negative answer to this question.
\end{remark}
\begin{remark}
When $q = 2$, the result of Theorem \ref{lines1} was obtained in \cite[Theorem 13]{E}.
\end{remark}
\begin{prop}\label{hyp}
There exists a hyperplane $\Gamma$ of ${\rm PG}(7,q)$ such that precisely $q(q+1)(2q+1)$ members of $\mathcal X \cup \mathcal Y$ are contained in $\Gamma$.
\end{prop}
\begin{proof}
Let $\Gamma$ be a hyperplane of ${\rm PG}(7,q)$ containing $\Sigma$. Then no element of $\mathcal X$ is contained in $\Gamma$, otherwise such a solid would meet $\Sigma$, contradicting the fact that every solid in $\mathcal X$ is disjoint from $\Sigma$. The hyperplane $\Gamma$ intersects $\Sigma'$ in a plane $\sigma$. The plane $\sigma$ contains exactly one line of $\mathcal S'$, say $\ell_k'$. Hence the $q(q+1)^2$ solids of $\mathcal Y$ meeting $\Sigma$ in a plane through the line $\mu(\ell_k') = \ell_k$ are contained in $\Gamma$. If $\ell_j' \in \mathcal S'$, with $j \ne k$, then $\ell_j' \cap \sigma$ is a point, say $R$. In this case the $q+1$ solids generated by $R$ and a plane of $\Sigma$ through $\mu(\ell_j') = \ell_j$ are contained in $\Gamma$. Since the elements of $\mathcal Y$ are those contained in the $5$--space $\langle \Sigma, \ell_i' \rangle$, where $\ell_i' \in \mathcal S'$, and meeting $\Sigma$ in a plane through $\ell_i$, this accounts for $q(q+1)^2 + q^2(q+1) = q(q+1)(2q+1)$ solids of $\mathcal Y$ contained in $\Gamma$, and the proof is complete.
\end{proof}
The $q$--covering design of Theorem \ref{lines1} can be used recursively to obtain a $q$--covering design $\mathbb{C}_q(3n+8, 4, 2)$, $n \ge 1$, as described in the following result.
\begin{theorem}\label{th4}
There exists a $q$--covering design $\mathbb{C}_q(3n+8, 4, 2)$, $n \ge 0$, and a hyperplane $\Gamma$ of ${\rm PG}(3n+7,q)$ such that there are $q^{3n+2} (2q^2-1) + \sum_{j=0}^{n+1} q^{3(n+j)+5}$ solids of $\mathbb{C}_q(3n+8, 4, 2)$ not contained in $\Gamma$ and $(q^2+q+1) \left(\sum_{i=0}^{n-1} \left(q^{3i+2}(2q^2-1) + \sum_{j=0}^{i+1} q^{3(i+j)+5} \right) \right) + q(q+1)(2q+1)$ solids of $\mathbb{C}_q(3n+8, 4, 2)$ contained in $\Gamma$.
\end{theorem}
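Note that for $n = 0$ the two counts above reduce to $q^8+q^5+2q^4-q^2$ and $q(q+1)(2q+1)$ (the sum over $i$ being empty), in accordance with Theorem \ref{lines1} and Proposition \ref{hyp}.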
\begin{proof}
By induction on $n$. If $n = 0$, then, from Theorem \ref{lines1} and Proposition \ref{hyp}, the result holds true. Assume that the result holds true for $n-1$: there are $q^{6n+2} + q^{6n-1} + \ldots + q^{3n+5} + q^{3n+2} + 2 q^{3n+1}-q^{3n-1}$ solids of a $\mathbb{C}_q(3n+5, 4, 2)$ not contained in a $3(n+1)$--space $\bar{\Lambda}$ of ${\rm PG}(3n+4,q)$ and $(q^2+q+1) \left(\sum_{i=0}^{n-2} q^{6i+8} + q^{6i+5} + \ldots + q^{3i+8} + q^{3i+5} + 2 q^{3i+4} - q^{3i+2} \right) + q(q+1)(2q+1)$ solids of $\mathbb{C}_q(3n+5, 4, 2)$ contained in $\bar{\Lambda}$.
We claim that the result holds true for $n$. In ${\rm PG}(3n+7,q)$, let $\Lambda$ be the $3(n+1)$--space $\langle U_5, \ldots, U_{3n+8} \rangle$. Let $\mathcal A$ be a $(4 \times (3n+4), 3)_q$ MRD--code and let $\mathcal U = \{L(A) \; | \; A \in \mathcal A\}$ be the set of $q^{6n+8}$ solids of ${\rm PG}(3n+7,q)$ obtained by lifting the matrices of $\mathcal A$. Let $\Pi$ be a solid disjoint from $\Lambda$. For every point $P \in \Pi$, let $\Lambda_P$ be the $(3n+4)$--space $\langle \Lambda, P \rangle$. From the induction hypothesis there is a $q$--covering design $\mathbb{C}_q(3n+5, 4, 2)$ of $\Lambda_P$, say $\mathcal D_P$, such that there is a subset of $\mathcal D_P$, say $\bar{\mathcal D}_P$, consisting of the $q^{6n+2} + q^{6n-1} + \ldots + q^{3n+5} + q^{3n+2} + 2 q^{3n+1}-q^{3n-1}$ solids of $\mathcal D_P$ not contained in $\Lambda$. Moreover $|\mathcal D_P \setminus \bar{\mathcal D}_P| = (q^2+q+1) \left(\sum_{i=0}^{n-2} q^{6i+8} + q^{6i+5} + \ldots + q^{3i+8} + q^{3i+5} + 2 q^{3i+4} - q^{3i+2} \right) + q(q+1)(2q+1)$.
Let $\mathcal V = \bigcup_{P \in \Pi} \bar{\mathcal D}_P$ and let $\mathcal W = \mathcal D_P \setminus \bar{\mathcal D}_P$ (again, the designs $\mathcal D_P$ are chosen so that the solids of $\mathcal D_P$ contained in $\Lambda$ are the same for every $P \in \Pi$, so that $\mathcal W$ does not depend on $P$). First of all observe that $\mathcal U \cup \mathcal V \cup \mathcal W$ is a $q$--covering design $\mathbb{C}_q(3n+8, 4, 2)$. Indeed, from Proposition~\ref{lifting}, every line of ${\rm PG}(3n+7,q)$ disjoint from $\Lambda$ is contained in exactly one element of $\mathcal U$. On the other hand, if $r$ is a line of ${\rm PG}(3n+7,q)$ meeting $\Lambda$ in at least one point, then $r$ is contained in $\Lambda_P$, for some $P \in \Pi$, and $r$ is contained in at least one solid of $\mathcal D_P$. Hence $\mathcal U \cup \mathcal V \cup \mathcal W$ is a $q$--covering design $\mathbb{C}_q(3n+8, 4, 2)$.
Let $\sigma$ be a plane of $\Pi$ and let $\Gamma$ be the hyperplane $\langle \Lambda, \sigma \rangle$ of ${\rm PG}(3n+7,q)$. Since every solid of $\mathcal U$ is disjoint from $\Lambda$, we have that no solid of $\mathcal U$ is contained in $\Gamma$. The solids of $\mathcal V$ not contained in $\Gamma$ are those of the set $\bigcup_{P \in \Pi, P \notin \sigma} \bar{\mathcal D}_P$. Furthermore the solids of $\mathcal W$ are contained in $\Gamma$. Hence there are
\begin{multline*}
q^{6n+8} + q^3 \left(q^{6n+2} + q^{6n-1} + \ldots + q^{3n+5} + q^{3n+2} + 2 q^{3n+1} - q^{3n-1}\right) \\
= q^{6n+8} + q^{6n+5} + \ldots + q^{3n+8} + q^{3n+5} + 2 q^{3n+4} - q^{3n+2}
\end{multline*}
solids of $\mathcal U \cup \mathcal V \cup \mathcal W$ not contained in $\Gamma$. Finally note that the solids of $\mathcal U \cup \mathcal V \cup \mathcal W$ contained in $\Gamma$ are those of $\bigcup_{P \in \sigma} \bar{\mathcal D}_P \cup \mathcal W$. Hence there are
\begin{multline*}
(q^2+q+1)\left(q^{6n+2} + q^{6n-1} + \ldots + q^{3n+5} + q^{3n+2} + 2 q^{3n+1}-q^{3n-1}\right) + \\
+ (q^2+q+1) \left(\sum_{i=0}^{n-2} q^{6i+8} + q^{6i+5} + \ldots + q^{3i+8} + q^{3i+5} + 2 q^{3i+4} - q^{3i+2} \right) + q(q+1)(2q+1) \\
= (q^2+q+1) \left(\sum_{i=0}^{n-1} q^{6i+8} + q^{6i+5} + \ldots + q^{3i+8} + q^{3i+5} + 2 q^{3i+4} - q^{3i+2} \right) + q(q+1)(2q+1)
\end{multline*}
solids of $\mathcal U \cup \mathcal V \cup \mathcal W$ contained in $\Gamma$.
\end{proof}
Taking into account Theorem \ref{th4} and \cite[Theorem 1]{E}, the following result holds true.
\begin{cor}
If $n \ge 0$, then
\begin{multline*}
\left\lceil \frac{\theta_{3n+7,q} \theta_{3n+6,q}}{(q+1)(q^2+1)(q^2+q+1)} \right\rceil \le \mathcal C_q(3n+8, 4, 2) \le q^{3n+5} \theta_{n+1,q^3} + q^{3n+2} (2q^2-1) + \\
+ \sum_{i = 0}^{n-1} \left( \theta_{6i+10, q} - \theta_{3i+4, q} + (q^2+q+1)\, q^{3i+2} (2q^2-1) \right) + q(q+1)(2q+1) .
\end{multline*}
\end{cor}
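For instance, for $n = 0$ and $q = 2$ the bounds above read $309 \le \mathcal C_2(8,4,2) \le 346$.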
\section{On $\mathcal C_q(2n, 4, 3)$}\label{3}
The main goal of this section is to give an upper bound on $\mathcal C_q(2n, 4, 3)$, $n \ge 4$. We begin by providing a construction in the case $n = 4$.
\begin{cons}
Let $\mathcal A$ be a $(4 \times 4, 2)_q$ MRD--code and let $\mathcal X = \{L(A) \; | \; A \in \mathcal A\}$ be the set of $q^{12}$ solids of ${\rm PG}(7,q)$ obtained by lifting the matrices of $\mathcal A$. Let $\Sigma'$ be the solid of ${\rm PG}(7,q)$ containing $U_1, U_2, U_3, U_4$. Then $\Sigma'$ is disjoint from $\Sigma$. Let $\mathcal P = \{\mathcal S_i \; | \; 1 \le i \le q^2+q+1\}$ be a $1$--parallelism of $\Sigma$, let $\mathcal P' = \{\mathcal S_i' \; | \; 1 \le i \le q^2+q+1\}$ be a $1$--parallelism of $\Sigma'$ and let $\mu: \mathcal S_i' \in \mathcal P' \longmapsto \mathcal S_i \in \mathcal P$ be a bijection. For a line $\ell'$ of $\Sigma'$, let $\Gamma_{\ell'}$ denote the $5$--space containing $\Sigma$ and $\ell'$. Since $\mathcal P'$ is a $1$--parallelism of $\Sigma'$, there exists a unique $j$, with $1 \le j \le q^2+q+1$, such that $\ell' \in \mathcal S_j'$. Note that $\mu(\mathcal S_j') = \mathcal S_j$ is a line--spread of $\Sigma$. Let $\ell$ be a line of $\mathcal S_j$ and let $\mathcal Y_{\ell}$ be the set of $q^4$ solids of $\Gamma_{\ell'}$ (distinct from $\Sigma$) meeting $\Sigma$ exactly in $\ell$. Let $\mathcal Z_{\ell'} = \bigcup_{\ell \in \mathcal S_j} \mathcal Y_{\ell}$. Then $\mathcal Z_{\ell'}$ consists of $q^4(q^2+1)$ solids. Varying $\ell'$ among the lines of $\Sigma'$, we get a set
$$
\mathcal Z = \bigcup_{\ell' \mbox{ {\em line of} } \Sigma'} \mathcal Z_{\ell'}
$$
consisting of $q^4(q^2+1)^2(q^2+q+1)$ solids.
\end{cons}
\begin{theorem}\label{planes}
The set $\mathcal X \cup \mathcal Z \cup \{\Sigma\}$ is a $q$--covering design $\mathbb{C}_q(8,4,3)$ of size $q^{12}+q^4(q^2+1)^2(q^2+q+1)+1$.
\end{theorem}
\begin{proof}
Let $\pi$ be a plane of ${\rm PG}(7,q)$. If $\pi$ is disjoint from $\Sigma$, then, from Proposition \ref{lifting}, we have that $\pi$ is contained in exactly one element of $\mathcal X$. If $\pi$ meets $\Sigma$ in one point, say $P$, then let $\Lambda$ be the $5$--space $\langle \Sigma, \pi \rangle$ and let $\ell'$ be the line of $\Sigma'$ obtained by intersecting $\Sigma'$ with $\Lambda$. Note that $\Lambda = \Gamma_{\ell'}$. Let $\mathcal S_j'$ be the unique line--spread of $\mathcal P'$ containing $\ell'$. Then there exists a unique line $\ell$ of $\mathcal S_j = \mu(\mathcal S_j')$ such that $P \in \ell$ and $\pi$ is contained in $\langle \pi, \ell \rangle$, which is a solid of $\mathcal Z$. If $\pi$ meets $\Sigma$ in a line, say $r$, then let $\mathcal S_k$ be the unique line--spread of $\mathcal P$ containing $r$ and let $\Lambda$ be the $4$--space $\langle \Sigma, \pi \rangle$. Then $\Lambda \cap \Sigma'$ is a point, which belongs to a unique line, say $r'$, of the line--spread $\mu^{-1}(\mathcal S_k) = \mathcal S_k'$ of $\mathcal P'$. Since there are $q^2$ solids of $\Gamma_{r'}$ meeting $\Sigma$ exactly in $r$ and containing $\pi$, we have that in this case $\pi$ is contained in $q^2$ members of $\mathcal Z$.
Finally if $\pi$ is a plane of $\Sigma$, then $\pi$ is contained in $\Sigma$.
\end{proof}
\begin{remark}
Note that Theorem \ref{planes} generalizes the result of \cite[Theorem 16]{E}.
\end{remark}
\begin{prop}\label{hyp1}
There exists a $5$--space $\Lambda$ of ${\rm PG}(7,q)$ containing exactly $q^4(q^2+1)+1$ members of $\mathcal X \cup \mathcal Z \cup \{\Sigma\}$. Moreover every hyperplane of ${\rm PG}(7,q)$ through $\Lambda$ contains precisely $q^4(q^2+1)(q^2+q+1)+1$ solids of $\mathcal X \cup \mathcal Z \cup \{\Sigma\}$.
\end{prop}
\begin{proof}
Let $\Lambda$ be a $5$--space containing $\Sigma$. Then $\Lambda$ meets $\Sigma'$ in a line, say $r$, and $\Lambda = \langle \Sigma, r \rangle$. The line $r$ belongs to a unique line--spread $\mathcal S_i'$ of the $1$--parallelism $\mathcal P'$ of $\Sigma'$. Then $\mu(\mathcal S_i') = \mathcal S_i$ is a line--spread belonging to the $1$--parallelism $\mathcal P$ of $\Sigma$. The $q^4(q^2+1)$ solids of $\mathcal Z$ lying in $\langle \Sigma, r \rangle$ meet $\Sigma$ in a line of $\mathcal S_i$ and are contained in $\Lambda$. Let $s$ be a line of $\Sigma'$ such that $s \ne r$. In this case none of the $q^4(q^2+1)$ solids of $\mathcal Z$ lying in $\langle \Sigma, s \rangle$ is contained in $\Lambda$. Indeed, assume by contradiction that there is a solid $\Delta$ contained in $\Lambda \cap \langle \Sigma, s \rangle$, then $\Delta \subset \langle \Sigma, s \cap r \rangle$ and hence $\Delta \cap \Sigma$ is a plane of $\Sigma$, contradicting the fact that every solid of $\mathcal Z$ meets $\Sigma$ in a line. On the other hand, no solid of $\mathcal X$ is contained in $\Lambda$, otherwise such a solid would meet $\Sigma$ non--trivially. Finally note that $\Sigma$ is a solid of $\Lambda$.
Let $\Gamma$ be a hyperplane of ${\rm PG}(7,q)$ through $\Lambda$. Then $\Gamma \cap \Sigma'$ is a plane, say $\sigma$, containing the line $r$. Repeating the previous argument for every line of the plane $\sigma$, it turns out that there are $q^4(q^2+1)(q^2+q+1)$ solids of $\mathcal Z$ in $\Gamma$, as required.
\end{proof}
Let $\mathbb S_4$ denote $\mathcal X \cup \mathcal Z \cup \{\Sigma\}$. Similarly to \cite[Theorem 17]{E}, $\mathbb S_4$ can be used as a base for a recursive construction of $q$--covering designs $\mathbb{C}_q(2n, 4, 3)$, $n \ge 5$.
\begin{theorem}
Let $\mathbb S_n$ be a $q$--covering design $\mathbb{C}_q(2n, 4, 3)$, $n \ge 4$, such that there is a $(2n-3)$--space of ${\rm PG}(2n-1, q)$, say $\Lambda_n$, containing precisely $\alpha_n$ elements of $\mathbb S_n$ and every hyperplane of ${\rm PG}(2n-1, q)$ through $\Lambda_n$ contains $\beta_n$ members of $\mathbb S_n$. Then there exists a $q$--covering design $\mathbb{C}_q(2n+2, 4, 3)$, say $\mathbb S_{n+1}$, where
$$
|\mathbb S_{n+1}| = q^{6(n-1)} + (q^2+1)(q^2+q+1) |\mathbb S_n| - q(q+1)^2(q^2+1) \beta_n + q^3(q^2+q+1) \alpha_n.
$$
Moreover there exists a $(2n-1)$--space of ${\rm PG}(2n+1, q)$, say $\Lambda_{n+1}$, containing $\alpha_{n+1} = |\mathbb S_n|$ elements of $\mathbb S_{n+1}$ and such that every hyperplane of ${\rm PG}(2n+1, q)$ through $\Lambda_{n+1}$ contains $\beta_{n+1}$ members of $\mathbb S_{n+1}$, where
$$
\beta_{n+1} = (q^2+q+1)|\mathbb S_n| - (q^3+q^2+q) \beta_n + q^3 \alpha_n.
$$
\end{theorem}
\begin{proof}
Let $\Lambda_n$ be the $(2n-3)$--space of ${\rm PG}(2n+1, q)$ generated by $U_5, \dots, U_{2n+2}$, let $\mathcal A$ be a $(4 \times (2n-2), 2)_q$ MRD--code and let $\mathcal U$ be the set of $q^{6(n-1)}$ solids obtained by lifting the matrices of $\mathcal A$. Let $\Gamma$ be a solid disjoint from $\Lambda_n$. For a line $\ell$ of $\Gamma$ there exists a $q$--covering design $\mathbb{C}_q(2n, 4, 3)$, say $\mathbb S_n$, of $\langle \Lambda_n, \ell \rangle$ such that $\alpha_n$ elements of $\mathbb S_n$ are contained in $\Lambda_n$ and every $2(n-1)$--space of $\langle \Lambda_n, \ell \rangle$ through $\Lambda_n$ contains $\beta_n$ members of $\mathbb S_n$. Let $\mathcal V_\ell$ be the set of solids of $\mathbb S_n$ not contained in a $2(n-1)$--space of $\langle \Lambda_n, \ell \rangle$ through $\Lambda_n$. Then $\mathcal V_\ell$ consists of $|\mathbb S_n| - \beta_n - q(\beta_n - \alpha_n)$ solids. Varying $\ell$ among the lines of $\Gamma$, we obtain the following set of solids:
$$
\mathcal V = \bigcup_{\ell \mbox{ {\em line of} } \Gamma} \mathcal V_{\ell} .
$$
For a point $P$ of $\Gamma$, there are $\beta_n$ solids of $\mathbb S_n$ contained in $\langle \Lambda_n, P \rangle$, among which $\alpha_n$ are contained in $\Lambda_n$. Let $\mathcal W_P$ denote the set of $\beta_n - \alpha_n$ solids of $\mathbb S_n$ contained in $\langle \Lambda_n, P \rangle$ and not contained in $\Lambda_n$. Varying $P \in \Gamma$ we get the following set of solids:
$$
\mathcal W = \bigcup_{P \in \Gamma} \mathcal W_{P} .
$$
Let $\bar{\mathcal W}$ be the set of $\alpha_n$ solids of $\mathbb S_n$ contained in $\Lambda_n$ and let $\mathbb S_{n+1} = \mathcal U \cup \mathcal V \cup \mathcal W \cup \bar{\mathcal W}$. We claim that $\mathbb S_{n+1}$ is a $q$--covering design $\mathbb{C}_q(2n+2, 4, 3)$. Let $\pi$ be a plane of ${\rm PG}(2n+1,q)$. If $\pi$ is disjoint from $\Lambda_n$, then, from Proposition \ref{lifting}, there is a unique solid of $\mathcal U$ containing $\pi$. If $\pi$ meets $\Lambda_n$ in a point, then $\langle \Lambda_n, \pi \rangle$ is a $(2n-1)$--space meeting the solid $\Gamma$ in a line, say $r$. Then there is a solid of $\mathcal V_{r}$ containing $\pi$. Finally, if $\pi$ shares with $\Lambda_n$ at least one line, then there is at least one solid of $\mathcal W \cup \bar{\mathcal W}$ containing $\pi$.
By construction it follows that
\begin{multline*}
|\mathbb S_{n+1}| = q^{6(n-1)} + (q^2+1)(q^2+q+1)\left( |\mathbb S_n| - \beta_n - q (\beta_n - \alpha_n) \right) + (q+1)(q^2+1) (\beta_n - \alpha_n) + \alpha_n \\
= q^{6(n-1)} + (q^2+1)(q^2+q+1) |\mathbb S_n| - q(q+1)^2(q^2+1) \beta_n + q^3(q^2+q+1) \alpha_n.
\end{multline*}
In order to complete the proof, let us denote by $\Lambda_{n+1}$ the $(2n-1)$--space $\langle \Lambda_n, s \rangle$, where $s$ is a fixed line of $\Gamma$. The solids of $\mathbb S_{n+1}$ that are contained in $\Lambda_{n+1}$ are those of $\mathcal V_s \cup \bigcup_{P \in s} \mathcal W_P \cup \bar{\mathcal W}$, and these are exactly the solids of $\mathbb S_n$. Hence $\alpha_{n+1} = |\mathbb S_n|$. A hyperplane $\mathcal H$ of ${\rm PG}(2n+1,q)$ through $\Lambda_{n+1}$ meets $\Gamma$ in a plane, say $\sigma$, where $s \subset \sigma$. Then the solids of $\mathbb S_{n+1}$ contained in $\mathcal H$ are the solids contained in
$$
\left(\bigcup_{\ell \mbox{ {\em line of} } \sigma} \mathcal V_{\ell} \right) \cup \left( \bigcup_{P \in \sigma} \mathcal W_{P} \right) \cup \bar{\mathcal W}.
$$
Therefore
\begin{multline*}
\beta_{n+1} = (q^2+q+1) \left( |\mathbb S_n| - \beta_n - q (\beta_n - \alpha_n) \right) + (q^2+q+1) (\beta_n - \alpha_n) + \alpha_n \\
= (q^2+q+1) |\mathbb S_n| - q(q^2+q+1) \beta_n + q^3 \alpha_n.
\end{multline*}
\end{proof}
\begin{cor}
$$
q^{12} + q^2(q^4+1)(q^2+1)(q^2+q+1) + 1 \le \mathcal C_q(8, 4, 3) \le q^{12} + q^4(q^2+1)^2(q^2+q+1) + 1
$$
\begin{multline*}
(q^4+1)(q^6+q^3+1)(q^8+q^6+q^4+q^2+1) \le \mathcal C_q(10, 4, 3) \le \\
q^{18} + q^4(q^2+1)(q^2+q+1)(q^8+q^6+q^4+q^3+q^2+1) + 1
\end{multline*}
\end{cor}
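The following small Python fragment (the variable names are ours and purely illustrative) cross-checks, for a few small values of $q$, that one application of the recursive size formula to $\mathbb S_4$ reproduces the upper bound on $\mathcal C_q(10,4,3)$ displayed above.
\begin{verbatim}
# Cross-check: |S_5| obtained from the recursion applied to S_4 equals
# the closed-form upper bound on C_q(10,4,3) stated above.
for q in (2, 3, 4, 5, 7, 8, 9):
    S4 = q**12 + q**4*(q**2 + 1)**2*(q**2 + q + 1) + 1
    a4 = q**4*(q**2 + 1) + 1                    # alpha_4 (proposition above)
    b4 = q**4*(q**2 + 1)*(q**2 + q + 1) + 1     # beta_4  (proposition above)
    S5 = (q**18 + (q**2 + 1)*(q**2 + q + 1)*S4
          - q*(q + 1)**2*(q**2 + 1)*b4 + q**3*(q**2 + q + 1)*a4)
    bound = (q**18 + q**4*(q**2 + 1)*(q**2 + q + 1)
             * (q**8 + q**6 + q**4 + q**3 + q**2 + 1) + 1)
    assert S5 == bound
\end{verbatim}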
\end{document}
\begin{document}
\title{\LARGE \bf
Linear and Dynamic Programs for Risk-Sensitive\\ Cost Minimization}
\thispagestyle{empty}
\begin{abstract}
We derive equivalent linear and dynamic programs for infinite horizon
risk-sensitive control for minimization of the asymptotic growth rate of
the cumulative cost.
\end{abstract}
\IEEEpeerreviewmaketitle
\section{Introduction}
Risk-sensitive control problems that seek to minimize over an infinite time
horizon the asymptotic growth rate of mean exponentiated cumulative cost of
a controlled Markov chain were first studied in \cite{Flem1,Flem2}, which also
pioneered the most popular approach to such problems, viz., to use the celebrated
`log-transformation' to convert it to a zero sum stochastic game with long run
average or `ergodic' payoffs. An equivalent alternative approach that treats the corresponding reward maximization problem as a nonlinear eigenvalue problem was
developed in \cite{Ananth}. This leads to an equivalent ergodic reward maximization
problem and an associated linear program. For the finite state-action case,
the complete details of the latter were worked out in \cite{Borkar}.
Unfortunately the techniques therein do not extend to the cost minimization problem,
which is equivalent to a zero sum ergodic stochastic game.
It may be recalled that unlike the classical criteria such as discounted or ergodic,
risk-sensitive reward maximization cannot be converted to a cost minimization and vice versa,
by a simple sign flip. Thus the two are not equivalent.
In this work, we make the key observation that the aforementioned zero sum
ergodic game belongs to a very special subclass thereof, viz., a single controller
game wherein one agent affects only the payoff and not the dynamics.
This case is indeed amenable to a linear programming formulation as pointed out
in \cite{Vrieze}.
We exploit this fact to derive the counterparts of the results of \cite{Borkar}
for the cost minimization problem.
It may be noted that an LP formulation for risk-sensitive
cost or reward is not a priori obvious because unlike the classical criteria such
as discounted or ergodic, where the uncontrolled problems lead to linear `one step analysis'
(or the Poisson equation),
risk-sensitive control leads to an eigenvalue problem which is already nonlinear.
We introduce the control problem in \cref{S2}. The equivalent single
controller ergodic game and its linear programming formulation is given in \cref{S3}.
\Cref{S4} uses this in turn to derive the corresponding dynamic programming equations
for risk-sensitive control without the assumption of irreducibility.
This leads to a second `dynamic programming' equation coupled to the usual one in
what is a counterpart of the corresponding system of equations for
ergodic control without irreducibility (\cite{Puterman}, Chapter 9).
The interesting twist here is the appearance of the so called `twisted' transition kernel.
\section{Risk-sensitive cost minimization}\label{S2}
Consider a controlled Markov chain $\{X_n\}$ on a finite state space
$S \coloneqq \{1,2,\dotsc,s\}$, controlled by a control process $\{Z_n\}$ taking values
in a finite action space $\mathcal{U}$, with running cost $c(i,u)$, $i \in S$, $u \in \mathcal{U}$.
Let
\begin{equation*}
(i,u,j) \in S\times \mathcal{U}\times S \,\mapsto\, p(j\,|\,i,u) \in [0,1]\,,
\end{equation*}
with $\sum_j p(j\,|\,i,u) = 1 \ \forall \ (i,u)\in S\times \mathcal{U}$ be its controlled
transition kernel, that is, the following `controlled Markov property' holds:
$$P(X_{n+1} = j \,|\, X_m,Z_m, m \leq n) \,=\,
p(j\,|\,X_n,Z_n)\,,\ \ n \geq 0\,.$$
Such $\{Z_n\}$ will be called admissible controls.
We call $\{Z_n\}$ a stationary (randomized) policy if
$$P(Z_n = u \,|\,X_m, m \leq n;\, Z_m, m < n) \,=\, \varphi(u\,|\,X_n)$$
for some $\varphi\colon i \in S \mapsto \varphi( \cdot \,|\, i) \in \mathcal{P}(\mathcal{U})$,
with $\mathcal{P}(\mathcal{U})$ denoting the simplex of probability vectors on $\mathcal{U}$.
A stationary policy is called
\emph{pure} or \emph{deterministic} if $Z_n = v(X_n)$ for all $n \ge 0$,
for some $v\colon S \mapsto \mathcal{U}$, equivalently, when $\varphi( \cdot | i ) = \delta_{v(i)}(\cdot) \ \forall i$, i.e., a Dirac measure at $v(i) \ \forall i$.
We let $\mathfrak{U}_{\mathsf{sm}}$ and $\mathfrak{U}_{\mathsf{p}}$ denote the class of all stationary
and pure policies, respectively.
By abuse of terminology, stationary policies, resp.\ pure policies, are identified with the map $\varphi$, resp.\ $v$,
in the preceding definition.
The risk-sensitive cost minimization problem we are interested in seeks to determine
\begin{align}
\Bar\lambda^* \, &\coloneqq \, \max_{i\in S}\, \lambda^*_i\,, \label{cost} \\
\lambda^*_i \, &\coloneqq \, \inf_{\{Z_m\}} \,\limsup_{n\uparrow\infty}\,
\frac{1}{n}\log \mathrm{E}_i\Bigl[\mathrm{e}^{\sum_{m=0}^{n-1}c(X_m, Z_m)}\Bigr]\,, \label{cost-i}
\end{align}
where the infimum is over all admissible controls,
and $\mathrm{E}_i[\,\cdot\,]$ denotes the
expectation with $X_0 = i$.
We restrict ourselves to stationary policies.
For a stationary policy $v\in\mathfrak{U}_{\mathsf{sm}}$, we use the notation
\begin{equation}\label{E-not1}
\begin{aligned}
c_v(i) &\,\coloneqq\, \sum_{u\in \mathcal{U}} c(i,u) v(u\,|\,i)\,,\\[3pt]
p_v(j\,|\,i) &\,\coloneqq\, \sum_{u\in \mathcal{U}} p(j\,|\,i,u)v(u\,|\,i)\,.
\end{aligned}
\end{equation}
Let
$$\lambda_i^v \,\coloneqq\, \lim_{n\uparrow\infty}\,\frac{1}{n}\,
\log \mathrm{E}_i^v\Bigl[\mathrm{e}^{\sum_{m=0}^{n-1}c_v(X_m)}\Bigr]\,,$$
where $\mathrm{E}_i^v[\,\cdot\,]$ indicates the expectation under
the policy $v\in\mathfrak{U}_{\mathsf{sm}}$ with $X_0 = i$. Thus
\begin{equation}\label{E-min}
\Bar\lambda^* \,=\, \min_{v\in\mathfrak{U}_{\mathsf{sm}}}\,\max_{i\in S}\,\lambda_i^v\,.
\end{equation}
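For a fixed stationary policy $v$, the quantity $\lambda^v_i$ admits a simple matrix representation that is convenient for numerical experiments: writing $M_{ij} = \mathrm{e}^{c_v(i)}\,p_v(j\,|\,i)$, one has $\mathrm{E}^v_i\bigl[\mathrm{e}^{\sum_{m=0}^{n-1}c_v(X_m)}\bigr] = (M^n\mathbf{1})_i$, so that $\lambda^v_i$ is the exponential growth rate of $(M^n\mathbf{1})_i$; when $p_v$ is irreducible this equals the logarithm of the spectral radius of $M$ for every $i$. The following minimal Python sketch (all data are made up and illustrative only) illustrates this.
\begin{verbatim}
import numpy as np

# Toy illustration: lambda^v via the "twisted" matrix
# M[i, j] = exp(c_v(i)) * p_v(j | i).
c_v = np.array([0.3, 1.0])               # running cost under the policy v
P_v = np.array([[0.9, 0.1],
                [0.4, 0.6]])             # p_v(j | i), irreducible
M = np.diag(np.exp(c_v)) @ P_v

lam_eig = np.log(max(abs(np.linalg.eigvals(M))))     # log spectral radius
n = 500
lam_pow = np.log(np.linalg.matrix_power(M, n) @ np.ones(2)) / n
print(lam_eig, lam_pow)                  # agree up to an O(1/n) error
\end{verbatim}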
\begin{definition}\label{D2.1}
Let $\mathcal{Q}$ denote the class of stochastic matrices $q=[q_{ij}]_{i,j\in S}$
such that $$q_{ij}\,=\,0\quad\text{if\ \ }\max_{u\in\mathcal{U}}\, p(j\,|\, i,u)\,=\,0\,.$$
Also, $\mathcal{M}_q$ denotes the set of invariant probability
vectors of $q\in\mathcal{Q}$.
Using the equivalent notation $q(j\,|\,i)=q_{ij}$, we define
\begin{equation}\label{E-runreward}
\Tilde{c}(i,q,u) \,=\, c(i,u) - D\bigl( q(\cdot\,|\,i) \,\|\, p(\cdot\,|\,i,u)\bigr)\,,
\end{equation}
if $q(\cdot\,|\,i)\ll p(\cdot\,|\,i,u)$,
and $\Tilde{c}(i,q,u)=-\infty$, otherwise.
Here,
\begin{equation*}
D\bigl( q(\cdot\,|\,i) \,\|\, p(\cdot\,|\,i,u)\bigr)
\,=\,
\sum_{j\in S} q(j\,|\,i)\,\log\frac{q(j\,|\,i)}{p(j\,|\,i,u)}
\end{equation*}
denotes the Kullback-Leibler divergence. For $v\in\mathfrak{U}_{\mathsf{sm}}$, we let $\Tilde{c}_v(i,q)$ be defined analogously to \cref{E-not1},
that is,
\begin{equation*}
\Tilde{c}_v(i,q) \,\coloneqq\, \sum_{u\in \mathcal{U}} \Tilde{c}(i,q,u)\, v(u\,|\,i)\,.
\end{equation*}
\end{definition}
Specializing \cite[Theorem~3.3]{Ananth} to the above, we have
\begin{equation}\label{variational}
\max_{i\in S}\lambda_i^v \,=\, \max_{q \in\mathcal{Q}}\,\max_{\pi \in \mathcal{M}_q}\,
\sum_{i\in S} \pi(i) \Tilde{c}_v(i,q)\,.
\end{equation}
The reason that we can restrict the maximization to the set $\mathcal{Q}$ is
the following.
Suppose $(\Hat{q},\Hat{\pi})$ is a pair where the maximum in \cref{variational}
is attained. Without loss of generality we may assume that $\Hat\pi$ is
an ergodic measure. It is clear then that we must have
$\Hat{q}(\cdot\,|\,i)\ll p_v(\cdot\,|\,i)$ on the support of $\Hat\pi$,
otherwise $\max_{i\in S}\lambda_i^v=-\infty$, which is not possible.
\Cref{E-min,variational,E-runreward} suggest
an ergodic game for a controlled Markov chain which we describe next.
\begin{definition}\label{D2.2}
The model for the controlled Markov chain $\{\widetilde{X}_n\}$ is
as follows:
\begin{itemize}
\item The state space is $S$.
\item The action space is $\mathcal{Q}(i)\times\mathcal{U}$, for $i\in S$,
where
\begin{equation}\label{E-Ai}
\mathcal{Q}(i) \,\coloneqq\, \bigl\{\bigl(q_{ij}\bigr)_{j\in S}\,\colon q\in\mathcal{Q}\bigr\}\,.
\end{equation}
\item The controlled transition probabilities are dictated by $q\in\mathcal{Q}$.
Note then that $\mathcal{Q}$ may be viewed as the set of stationary policies
with action spaces $\{\mathcal{Q}(i),\,i\in S\}$.
It is clear that in this space there is no difference
between randomized and pure policies.
\item The running \textit{reward} is
$\Tilde{c}(i,q,u)$ defined in \eqref{E-runreward}.
\end{itemize}
\end{definition}
With $\{\widetilde{X}_n\}_{n\in\mathbb{N}_0}$ denoting the chain defined above,
and $\widetilde{\mathrm{E}}^v_i$ the expectation operator under
the policy $v\in\mathfrak{U}_{\mathsf{sm}}$ with $\widetilde{X}_0=i\in S$, define
\begin{equation}\label{E-hPhi}
\widehat\Phi(q,v)\,\coloneqq\,
\max_{i\in S}\, \lim_{N\to\infty}\,
\frac{1}{N}\, \widetilde{\mathrm{E}}^v_i
\Biggl[\sum_{k=0}^{N-1} \Tilde{c}_v(\widetilde{X}_k,q)\Biggr]\,,
\end{equation}
with $q\in \mathcal{Q}$.
The preceding analysis shows that we seek to maximize
$\widehat\Phi(q,v)$ with respect to $q\in \mathcal{Q}$ and minimize it
with respect to $v\in\mathfrak{U}_{\mathsf{sm}}$.
This forms a single controller zero-sum ergodic game
between the agent who chooses $q$ to maximize the long-term average value
of the reward $\Tilde{c}(i,q,u)$
and the agent who chooses $u$ to minimize it.
The reason that it is a single controller game is
that the decisions of the second player affect only the payoff and not
the transition probability.
This facilitates the application of \cite{Vrieze} to derive equivalent
linear programs, which we do in \cref{S3}.
It is clear that
\begin{equation}\label{E-hPhi2}
\widehat\Phi(q,v) \,=\, \max_{\pi\in\mathcal{M}_q}\,\sum_{i\in S}\pi(i)\,\Tilde{c}_v(i,q) \,.
\end{equation}
Suppose we can show that
\begin{equation*}
\min_{v\in\mathfrak{U}_{\mathsf{sm}}}\,\max_{q\in\mathcal{Q}}\,\widehat\Phi(q,v)
\end{equation*}
is attained at some $v^*\in\mathfrak{U}_{\mathsf{p}}$.
Then, in view of \cref{E-min,variational}, and the fact
that
\begin{equation*}
\Tilde{c}_v (i,q) \,=\,
c_v(i) - D\bigl(q(\cdot\,|\,i) \bigm\| p_v( \cdot \,|\, i)\bigr)
\quad\forall\,v\in\mathfrak{U}_{\mathsf{p}}\,,
\end{equation*}
we obtain
\begin{equation}\label{E-minmax0}
\Bar\lambda^* \,=\, \min_{v\in\mathfrak{U}_{\mathsf{sm}}}\,\max_{q\in\mathcal{Q}}\,\widehat\Phi(q,v)\,.
\end{equation}
In fact, in \cref{S3} we show that the game has a value $\widehat\Phi^*$,
that is,
\begin{equation}\label{E-value}
\widehat\Phi^* \,=\, \adjustlimits\inf_{v\in\mathfrak{U}_{\mathsf{sm}}\,}\sup_{q\in\mathcal{Q}}\,\widehat\Phi(q,v) \,=\,
\adjustlimits\sup_{q\in\mathcal{Q}\,}\inf_{v\in\mathfrak{U}_{\mathsf{sm}}}\,\widehat\Phi(q,v)\,,
\end{equation}
and there exists $v^*\in\mathfrak{U}_{\mathsf{p}}$ and $q^*\in \mathcal{Q}$ such that
\begin{equation}\label{E-saddle}
\widehat\Phi^* \,=\, \inf_{v\in\mathfrak{U}_{\mathsf{sm}}}\,\widehat\Phi(q^*, v)
\,=\, \sup_{q\in\mathcal{Q}}\,\widehat\Phi(q, v^*) \,.
\end{equation}
In other words, the pair $(q^*,v^*)$ is optimal.
\section{Equivalent linear programs}\label{S3}
We now adapt the key results of \cite{Vrieze} relevant for us.
Since \cite{Vrieze} works with finite state and action spaces and $\mathcal{Q}$ is not finite,
we first replace $\mathcal{Q}$ by a finite approximation $\mathcal{Q}_n$, $n \geq 1$, consisting of
transition probability kernels $q( \cdot \,|\, \cdot )$ such that
for all $i,j \in S$,
$q(j\,|\,i)$ takes values in the set of dyadic rationals of the form $\frac{k}{2^n}$
for some $0 \leq k \leq 2^n$.
Let $A_n(i)$ be the corresponding action spaces defined as in \cref{E-Ai},
but with $\mathcal{Q}$ replaced by $\mathcal{Q}_n$.
As noted in \cref{D2.2}, $\mathcal{Q}_n$ may be viewed as the set of stationary policies
with action spaces $\{A_n(i),\,i\in S\}$.
For $(q,v)\in\mathcal{Q}_n\times\mathfrak{U}_{\mathsf{sm}}$, we let
\begin{equation}\label{E-Phi}
\Phi_i(q,v)\,\coloneqq\, \lim_{N\to\infty}\, \frac{1}{N}\, \widetilde{\mathrm{E}}^v_i
\Biggl[\sum_{k=0}^{N-1} \Tilde{c}_v(\widetilde{X}_k,q)\Biggr]\,,
\quad i\in S\,.
\end{equation}
We consider the corresponding single controller zero-sum
game analogous to the one described in \cref{S2}.
As we show later, the single controller zero-sum
game with the objective in \eqref{E-Phi} over
$(q,v)\in \mathcal{Q}_n\times\mathfrak{U}_{\mathsf{sm}}$ has the following equivalent linear programming
formulation.
\noindent \textbf{Primal~program \cref{LPn}:} The primal variables are
\begin{equation*}
V\,=\,(V_1,\dotsc,V_s)\in\mathbb{R}^s\,,\quad \beta\,=\,(\beta_1,\dotsc,\beta_s)\in\mathbb{R}^s\,,
\end{equation*}
and
$$y \,=\, (y_1,\dotsc,y_s)\colon S\to \mathcal{P}(\mathcal{U})\,,$$
and the linear program is the following:
\begin{equation}\label{LPn}
\begin{aligned}
&\text{Minimize\ } \sum_{i\in S} \beta_i \text{\ subject to:}\\
&\beta_i \,\ge\, \sum_{j\in S} q_{ij}\,\beta_j \quad \forall\,q\in A_n(i)\,,\ \forall\, i \in S,\\
&V_i \,\ge\, \sum_{u\in \mathcal{U}} \Tilde{c}(i,q,u)\,y_i(u) - \beta_i +
\sum_{j\in S} q_{ij}V_j \quad \forall\,q\in A_n(i)\,,\ \forall\, i \in S.
\end{aligned}\tag{$\mathsf{LP}_n$}
\end{equation}
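As an illustration of how \cref{LPn} can be set up in practice, the following Python sketch builds and solves it for a small made-up model (two states, two actions, dyadic level $n=2$) with an off-the-shelf LP solver. All names and data below are ours; for simplicity the kernels $p(\cdot\,|\,i,u)$ are taken with full support, so that $\Tilde{c}$ is finite and $A_n(i)$ is the full set of level-$n$ dyadic distributions on $S$.
\begin{verbatim}
import itertools
import numpy as np
from scipy.optimize import linprog

S, U, n = 2, 2, 2                              # states, actions, dyadic level
c = np.array([[0.0, 1.0], [2.0, 0.5]])         # running cost c(i, u)
p = np.array([[[0.7, 0.3], [0.2, 0.8]],        # p(. | i = 0, u)
              [[0.5, 0.5], [0.9, 0.1]]])       # p(. | i = 1, u)

def dyadic_rows(s, n):
    """All probability vectors on S with entries of the form k / 2**n."""
    return [np.array(k) / 2**n
            for k in itertools.product(range(2**n + 1), repeat=s)
            if sum(k) == 2**n]

def c_tilde(i, q, u):                          # c(i,u) - KL(q || p(.|i,u))
    m = q > 0
    return c[i, u] - np.sum(q[m] * np.log(q[m] / p[i, u, m]))

A_n = dyadic_rows(S, n)                        # same action set for every i
nv = 2*S + S*U                                 # variables x = (V, beta, y)
A_ub, b_ub = [], []
for i in range(S):
    for q in A_n:
        r1 = np.zeros(nv)                      # sum_j q_j beta_j - beta_i <= 0
        r1[S:2*S] = q; r1[S + i] -= 1.0
        r2 = np.zeros(nv)                      # second constraint of (LP_n)
        r2[:S] = q; r2[i] -= 1.0; r2[S + i] -= 1.0
        for u in range(U):
            r2[2*S + i*U + u] = c_tilde(i, q, u)
        A_ub += [r1, r2]; b_ub += [0.0, 0.0]
A_eq = np.zeros((S, nv)); b_eq = np.ones(S)
for i in range(S):
    A_eq[i, 2*S + i*U: 2*S + (i + 1)*U] = 1.0  # each y_i is a distribution
obj = np.r_[np.zeros(S), np.ones(S), np.zeros(S*U)]  # minimize sum_i beta_i
bnd = [(None, None)]*(2*S) + [(0, None)]*(S*U)
res = linprog(obj, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bnd, method="highs")
print(res.x[S:2*S])                            # beta^n, the value of the game
\end{verbatim}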
\noindent \textbf{Dual program \cref{LPn'}:} The dual variables are
$$\bigl(\mu(i,q),\,\nu(i,q)\colon (i,q)\in S\times A_n(i)\bigr)\,,$$
and $w=(w_1,\dotsc,w_s)\in\mathbb{R}^s$, and the dual linear program is:
\begin{equation}\label{LPn'}
\begin{aligned}
&\text{Maximize\ } \sum_{i\in S} w_i \text{\ subject to:}\\
&\sum_{(i,q)\in S\times A_n(i)}\bigl(\delta_{ij}-\Tilde{p}(j\,|\,i,q)\bigr)\mu(i,q)
\,=\, 0\ \ \forall j\in S, \\
&\sum_{(i,q)\in S\times A_n(i)}\bigl(\delta_{ij}-\Tilde{p}(j\,|\,i,q)\bigr)\nu(i,q) \\
&\mspace{120mu}+ \sum_{q\in A_n(j)}\mu(j,q) \,=\, 1
\quad\forall\,j\in S\,, \\
&\sum_{q\in A_n(i)}
\Tilde{c}(i,q,u)\mu(i,q) \,\ge\, w_i \quad \forall\,(i,u)\in S\times \mathcal{U}\,, \\
&\mu(i,q),\, \nu(i,q) \,\ge\, 0 \quad\forall\,(i,q)\in S\times A_n(i)\,.
\end{aligned}\tag{$\mathsf{LP}'_n$}
\end{equation}
In the above constraints, $\delta_{ij}=1$ if $i=j$ and $\delta_{ij}=0$ otherwise, while $\Tilde{p}(j\,|\,i,q)\coloneqq q(j\,|\,i)$ is the transition probability of the controlled chain of \cref{D2.2}.
The programs in \cref{LPn,LPn'} are exactly as given in \cite[Section~2]{Vrieze},
with the notation adapted to the current setting.
Arguing as in \cite[Lemma~2.1]{Vrieze},
we deduce that both linear programs are feasible and have bounded
solutions.
We note that $\Tilde{c}$ is extended-valued here, whereas it is
real-valued and bounded in \cite{Vrieze}.
Nevertheless, note that $q'\in A_n(i)$ can always be selected so that
$\Tilde{c}(i,q',u)>-\infty$, and this shows that
the solution $\beta$ is bounded.
\begin{definition}
Let $(V^n,\beta^n,y^n), (\mu^n,\nu^n,w^n)$ denote solutions
for \cref{LPn,LPn'}, resp., for each $n\in\mathbb{N}$.
Define
\begin{equation*}
\overline\alpha^n_i\,\coloneqq\,\sum_{q\in A_n(i)}\mu^n(i,q)\,,
\end{equation*}
and
\begin{equation}\label{E-alphan}
\alpha^n_i(q) \,\coloneqq\,
\begin{cases}
\frac{\mu^n(i,q)}{\overline\alpha^n_i}&\text{if\ } \
\overline\alpha^n_i\ne0\\[5pt]
\frac{\nu^n(i,q)}{\sum_{q\in A_n(i)}\nu^n(i,q)}&\text{otherwise.}
\end{cases}
\end{equation}
\end{definition}
The following lemma follows from the results in \cite{Vrieze},
some of them drawn from \cite{Bewley,Partha,Vrieze0}.
\begin{lemma}\label{L3.1}
The single controller zero-sum
game with the objective in \eqref{E-Phi} over
$(q,v)\in \mathcal{Q}_n\times\mathfrak{U}_{\mathsf{sm}}$ has a value
$$\Phi^{(n)} = \bigl(\Phi^{(n)}_1, \dotsc , \Phi^{(n)}_s\bigr)\in\mathbb{R}^s\,,$$ that is,
\begin{equation}\label{E-game-n}
\begin{aligned}
\Phi^{(n)}_i &\,=\, \adjustlimits\inf_{v\in\mathfrak{U}_{\mathsf{sm}}\,}\sup_{q\in\mathcal{Q}_n}\,\Phi_i(q,v)\\
&\,=\, \adjustlimits\sup_{q\in\mathcal{Q}_n\,}\inf_{v\in\mathfrak{U}_{\mathsf{sm}}}\,\Phi_i(q,v)\,,
\end{aligned}
\end{equation}
and the following hold:
\begin{itemize}
\item[(a)]
We have $\beta^n=\Phi^{(n)}$, where $\beta^n$ is the solution of $\cref{LPn}$.
\item[(b)]
A pair of optimal stationary policies $(q^*_n, v^*_n)\in\mathcal{Q}_n\times\mathfrak{U}_{\mathsf{p}}$
exists.
\item[(c)] The inner supremum (resp., infimum) in the left (resp., right) hand side
of \eqref{E-game-n} is attained at a stationary (nonrandomized) policy.
\item[(d)]
For any solution $(V^n,\beta^n,y^n)$ of \cref{LPn},
$y^n$ is an optimal policy for player 2.
In other words,
$v^*_n (\cdot\,|\, i) = y^n_i(\cdot)$ for all $i\in S$.
Moreover $y^n$ can be selected so as to induce a pure Markov policy.
\item[(e)]
For any solution $(\mu^n,\nu^n,w^n)$ of \cref{LPn'}, $q^*_n$ can be
selected as
\begin{equation*}
q^*_n(\cdot\,|\,i) \,=\, \sum_{q\in A_n(i)} q(\cdot\,|\,i)\alpha^n_i(q)\,,
\end{equation*}
with $\alpha^n_i$ as defined in \cref{E-alphan}.
\end{itemize}
\end{lemma}
\begin{proof}
The proof is based on the results in \cite{Vrieze}.
However, the roles of the players
should be interchanged, since it is player 1 that does not influence the transition probabilities
in \cite{Vrieze}. But if we define the expected average payoff $V$ as
$$V(v,q) \,=\, -\Phi(q,v)\,,$$
then with $v\in\mathfrak{U}_{\mathsf{sm}}$ the stationary strategies of player~1, and
$q\in\mathcal{Q}_n$ those of player~2, the model matches exactly that
of \cite{Vrieze}.
That the game has a value and parts (a) and (b)
then follow from \cite[Theorem~2.15]{Vrieze}.
Part (c) is the statement of \cite[Lemma~1.2]{Vrieze}.
Part (d) then follows by considering the second constraint in \cref{LPn} together
with \cite[Lemma~2.14]{Vrieze}.
Part (e) follows from the definitions (2.4)-(2.10) following the proof of
\cite[Lemma~2.2]{Vrieze}
together with \cite[Lemma~2.9]{Vrieze}.
This completes the proof.
\end{proof}
\subsection{The semi-infinite linear programs}
Letting $n\nearrow\infty$, we obtain a pair of semi-infinite linear programs
with $\mathcal{Q}_n$ replaced by $\mathcal{Q}$ in
\eqref{LPn}--\eqref{LPn'}, that is, linear programs with finitely many
variables, but infinitely many constraints.
These are as follows:
\noindent \textbf{Primal~program \cref{LP}:} The primal variables are
as in \cref{LPn},
and the program is the following:
\begin{equation}\label{LP}
\begin{aligned}
&\text{Minimize\ } \sum_{i\in S} \beta_i \text{\ subject to:}\\
&\beta_i \,\ge\, \sum_{j\in S} \Tilde{p}(j\,|\,i,q')\beta_j\,,\\
&V_i \,\ge\, \sum_{u\in \mathcal{U}} \Tilde{c}(i, q', u)\,y_i(u) - \beta_i
+\sum_{j\in S} \Tilde{p}(j\,|\,i,q')V_j\,,\\
&\mspace{50mu}\quad\forall\,q'\in \mathcal{Q}(i)\,,\ \forall\,i\in S\,.
\end{aligned}\tag{$\mathsf{LP}$}
\end{equation}
\noindent \textbf{Dual program \cref{LP'}:} The dual variables are
$$\bigl(\mu(i,q),\,\nu(i,q)\colon (i,q)\in S\times \mathcal{Q}(i)\bigr)\,,$$
and $w=(w_1,\dotsc,w_s)\in\mathbb{R}^s$, and the dual linear program is:
\begin{equation}\label{LP'}
\begin{aligned}
&\text{Maximize\ } \sum_{i\in S} w_i \text{\ subject to:}\\
&\sum_{(i,q)\in S\times \mathcal{Q}(i)}\bigl(\delta_{ij}-\Tilde{p}(j\,|\,i,q)\bigr)\mu(i,q)
\,=\, 0\quad\forall\,j\in S\,, \\
&\sum_{(i,q)\in S\times \mathcal{Q}(i)}\bigl(\delta_{ij}-\Tilde{p}(j\,|\,i,q)\bigr)\nu(i,q) \\
&\mspace{120mu}+ \sum_{q\in \mathcal{Q}(j)}\mu(j,q) \,=\, 1
\quad\forall\,j\in S\,, \\
&\sum_{q\in \mathcal{Q}(i)}
\Tilde{c}(i,q,u)\mu(i,q) \,\ge\, w_i \quad \forall\,(i,u)\in S\times \mathcal{U}\,, \\
&\mu(i,q),\, \nu(i,q) \,\ge\, 0 \quad\forall\,(i,q)\in S\times \mathcal{Q}(i)\,.
\end{aligned}\tag{$\mathsf{LP}'$}
\end{equation}
With an eye on the passage from the approximate linear programs
\cref{LPn,LPn'}
on $\mathcal{Q}_n$ to the analogous semi-infinite linear programs
\cref{LP,LP'} over $\mathcal{Q}$,
we need the following two lemmas.
\begin{lemma}\label{L3.2}
The sequence $\{\beta^n\}_{n\in\mathbb{N}}$ converges monotonically to
some $\widehat\beta\in\mathbb{R}^s$ in each component.
Moreover, $\widehat\beta$ is the infimum of all feasible values of \cref{LP}.
\end{lemma}
\begin{proof}
Since any solution $(V^n,\beta^n,y^n)$ of \cref{LPn} is feasible for the program
$\mathsf{LP}_{n+1}$,
it is clear that $\beta^n$ is nonincreasing
in $n$ in each component.
It also follows by the definition in \cref{E-runreward} that there
exists a constant $M$ such that
\begin{equation*}
\adjustlimits\min_{(i,u)\in S\times\mathcal{U}\,}\max_{q\in A_n(i)}\,
\Tilde{c}(i,q,u) \,\ge\, M\,.
\end{equation*}
Thus, since $\beta^n_i$ is clearly bounded below by $M$ for each $i\in S$, and $n\in\mathbb{N}$,
there exists a limit
\begin{equation*}
\widehat\beta\,\coloneqq\, \lim_{n\nearrow\infty}\, \beta^n\,.
\end{equation*}
Now, it is straightforward to show that any feasible solution $\beta$ of
\cref{LP} satisfies $\beta\ge\widehat\beta$.
Indeed, if some $\beta$ with $\beta_i<\widehat\beta_i$ is feasible for \cref{LP},
one can find a $\Tilde\beta$ arbitrarily close to $\beta$ which is
feasible for \cref{LPn} for large enough $n$ by continuity.
This of course contradicts the fact that $\beta^n\ge\widehat\beta$ for all $n\in\mathbb{N}$,
and completes the proof.
\end{proof}
Let $P_n$ denote the transition matrix induced by \cref{LPn'} via
the optimal policy $q^*_n$ defined in \cref{L3.1}\,(e).
Note that this satisfies $\beta^n = P_n\beta^n$ for all $n\in\mathbb{N}$ by \cref{LPn}
(see \cite[Lemma~2.9]{Vrieze}).
Recall also that $y^n$ is pure Markov, and can be identified with
$v^*_n$ as asserted in \cref{L3.1}\,(d).
We continue with the following lemma.
\begin{lemma}\label{L3.3}
Any limit point $(\widehat\beta,\widehat{P},\Hat{y})$ of
$(\beta^n,P_n,y^n)$ along a subsequence as $n\to\infty$
is feasible for \cref{LP}.
\end{lemma}
\begin{proof}
Let $Q_n$ be defined by
\begin{equation*}
Q_n\,\coloneqq\, \lim_{N\to\infty}\,\frac{1}{N} \sum_{k=0}^{N-1} P_n^k\,,
\end{equation*}
and similarly define $\widehat{Q}$ relative to the stochastic matrix $\widehat{P}$.
Also let $\Tilde{c}_n$ denote the running cost under $P_n$ and $y^n$.
It is clear that $\Tilde{c}_n$ converges to some $\Hat{c}$ as $n\to\infty$
along the same subsequence.
Since $(\beta^n,P_n,y^n)$ is optimal for \cref{LPn}, we have (in vector notation)
$\beta^n = Q_n \Tilde{c}_n$,
and $\beta^n = P_n\beta^n$ for all $n\in\mathbb{N}$ by \cref{LPn}
(see \cite[Lemma~2.9]{Vrieze}).
Thus taking limits as $n\to\infty$, we obtain
\begin{equation*}
\widehat\beta\,=\, \widehat{Q}\,\Hat{c}\,,\quad\text{and\ }
\widehat\beta\,=\, \widehat{P}\widehat\beta\,.
\end{equation*}
Note that $V^n$ can be selected as
\begin{equation*}
V^n \,=\, \bigl(I - P_n + Q_n\bigr)^{-1}
\bigl(I - Q_n\bigr) \Tilde{c}_n\,.
\end{equation*}
Taking limits as $n\to\infty$, it follows that $V^n\to\widehat{V}$
which satisfies
\begin{equation*}
\widehat{V} \,=\, \bigl(I - \widehat{P} + \widehat{Q}\bigr)^{-1}
\bigl(I - \widehat{Q}\bigr) \Hat{c}\,.
\end{equation*}
Inserting the dependence of $\Tilde{c}_n$ and $P_n$ explicitly in the
notation, the second constraint in \cref{LPn} can be written as
\begin{equation}\label{PL3.3F}
V^n + \beta^n \,\ge\, \Tilde{c}_n(q) + P(q) V^n
\end{equation}
for all $q\in\mathcal{Q}_n$. Now, fix some $m\in\mathbb{N}$ and
$q\in\mathcal{Q}_m$.
Taking limits in \cref{PL3.3F} as $n\to\infty$, we obtain
\begin{equation}\label{PL3.3G}
\widehat{V} + \widehat\beta \,\ge\, \Hat{c}(q) + P(q) \widehat{V}.
\end{equation}
Since $q\in\mathcal{Q}_m$ is arbitrary and $\cup_{m\in\mathbb{N}} \mathcal{Q}_m$ is dense in $\mathcal{Q}$,
it follows that \cref{PL3.3G} holds for all $q\in\mathcal{Q}$.
Hence the second constraint in \cref{LP} is satisfied.
Similarly, starting from
\begin{equation*}
\beta^n \,\ge\, P(q) \beta^n \quad\forall q\in\mathcal{Q}_n\,,
\end{equation*}
and repeating the same argument, we see that the first constraint in
\cref{LP} is also satisfied.
This completes the proof of the lemma.
\end{proof}
\begin{remark}
It is also possible to start from a solution
$(\mu^n,\nu^n,w^n)$ of the dual program \cref{LPn'}, and then take limits as $n\to\infty$.
Note that $\mu^n$ is a Dirac mass, so convergence to (say) $\widehat\mu$
is interpreted in the weak sense. Same for $\nu^n\to\widehat\nu$.
It is easy to see then that any subsequential limit
$(\widehat\mu,\widehat\nu,\widehat{w})$ satisfies \cref{LP'} by continuity.
\end{remark}
By \cref{L3.2,L3.3}, the linear programs in
\cref{LP,LP'} are feasible and have bounded solutions.
This allows us to extend \cref{L3.1} as follows.
\begin{theorem}\label{T3.1}
The single controller zero-sum
game with the objective in \eqref{E-Phi} over
$(q,v)\in \mathcal{Q}\times\mathfrak{U}_{\mathsf{sm}}$ has a value
$\Phi^* = \bigl(\Phi^*_1, \dotsc , \Phi^*_s\bigr)\in\mathbb{R}^s$, that is,
\begin{equation*}
\Phi^*_i \,=\, \adjustlimits\inf_{v\in\mathfrak{U}_{\mathsf{sm}}\,}\sup_{q\in\mathcal{Q}}\,\Phi_i(q,v)
\,=\, \adjustlimits\sup_{q\in\mathcal{Q}\,}\inf_{v\in\mathfrak{U}_{\mathsf{sm}}}\,\Phi_i(q,v)\,,
\end{equation*}
and the following hold:
\begin{itemize}
\item[(i)]
$\Phi^*=\beta$, the solution to \cref{LP}.
\item[(ii)]
A pair of optimal stationary policies $(q^*, v^*)\in\mathcal{Q}\times\mathfrak{U}_{\mathsf{p}}$
exists.
\item[(iii)] The analogous statements of parts (c)--(e) in \cref{L3.1} hold.
\end{itemize}
\end{theorem}
It is now easy to connect the original game in \cref{E-hPhi} to the game
with the objective in \cref{E-Phi}.
Since the maximum of $\Phi_i(q,v)$ over $i\in S$ is attained in some ergodic
class (a communicating class of recurrent states), in view
of \cref{E-hPhi2} we have
\begin{equation}\label{E-equal1}
\widehat\Phi(q,v) \,=\, \max_{i\in S}\, \Phi_i(q,v)\qquad
\forall (q,v)\in\mathcal{Q}\times\mathfrak{U}_{\mathsf{sm}}\,.
\end{equation}
Thus, by \cref{T3.1}, the game in \cref{E-hPhi} has
the value $\widehat\Phi^*=\max_{i\in S}\, \Phi_i^*$,
and \cref{E-value} holds.
In addition, \cref{E-equal1} implies that the pair
$(q^*, v^*)\in\mathcal{Q}\times\mathfrak{U}_{\mathsf{p}}$ in \cref{T3.1}\,(ii) is optimal
for the game in \cref{E-hPhi}, and thus \cref{E-saddle} holds.
In addition, the fact that $v^*\in\mathfrak{U}_{\mathsf{p}}$ as asserted in
\cref{T3.1}\,(ii) implies that \cref{E-minmax0} holds.
So, in summary, the risk-sensitive value $\Bar\lambda^*$ defined in \cref{cost}
satisfies
\begin{equation*}
\begin{aligned}
\Bar\lambda^* &\,=\, \adjustlimits\inf_{v\in\mathfrak{U}_{\mathsf{sm}}\,}\sup_{q\in\mathcal{Q}}\,\widehat\Phi(q,v) \,=\,
\adjustlimits\sup_{q\in\mathcal{Q}\,}\inf_{v\in\mathfrak{U}_{\mathsf{sm}}}\,\widehat\Phi(q,v)\\
&\,=\, \inf_{v\in\mathfrak{U}_{\mathsf{sm}}}\,\widehat\Phi(q^*, v)
\,=\, \sup_{q\in\mathcal{Q}}\,\widehat\Phi(q, v^*) \,.
\end{aligned}
\end{equation*}
\section{Dynamic programming}\label{S4}
It can be seen from the linear program \cref{LP}
that the values $\{\Phi^*_i\colon i\in S\}$ can be calculated
by nested dynamic programming equations (see \cite{Puterman}, pp.\ 442--443).
We simplify the notation and write the stochastic matrix $q$
as $[q_{ij}]$.
We have the following theorem.
\begin{theorem}\label{T4.1}
It holds that
$$\Bar\lambda^* \,=\, \max_{i\in S}\, \lambda^*_i
\,=\, \max_{i\in S}\, \Phi^*_i\,,$$
where $\{\Phi^*_i\}_{i\in S}$ solves, for all $i\in S$,
\begin{align}
\Phi^*_i &\,=\, \max_{q \in \mathcal{Q}}\,
\sum_{j\in S} q_{ij}\Phi^*_j\,, \label{DP-1}\\
\Phi^*_i + V_i &\,=\, \min_{u\in \mathcal{U}}\,\max_{q\in B_i}\,
\Biggl[\Tilde{c}(i,q,u)+\sum_{j\in S} q_{ij} V_j \Biggr]\,, \label{DP-2}
\end{align}
with
$$B_i\,\coloneqq\, \Biggl\{q\in\mathcal{Q}\,\colon
\sum_{j\in S} q_{ij}\Phi^*_j = \Phi^*_i\Biggr\}\,.$$
\end{theorem}
Note that \cref{DP-1,DP-2} simply match the constraints in \cref{LP},
so that existence of a solution to these equations follows from
\cref{T3.1}.
The proof of \cref{T4.1}
again goes through a sequence of finite approximations of $\mathcal{A}$ so that
the aforementioned results from \cite{Puterman} apply.
Care should be taken when performing
the maximization over $q\in\mathcal{Q}$ in \eqref{DP-2} explicitly using the
Gibbs variational principle (see \cite[Proposition~2.3]{DaiPra}),
since the variables $q$ in \cref{DP-2} are not free
but depend on the maximization in \cref{DP-1}.
Re-order the solution $\{\Phi^*_i\}$ so that
over a partition $\{\mathcal{I}_1,\dotsc,\mathcal{I}_m\}$ of $S$, we have
\begin{equation*}
\Phi^*_i\,=\,\beta^*_\ell\quad\forall\,i\in\mathcal{I}_\ell
\end{equation*}
and
\begin{equation*}
\beta^*_1\,<\,\beta^*_2\,<\,\dotsb\,<\,\beta^*_m\,.
\end{equation*}
It is clear then that
\begin{equation*}
B_i\,=\, \{q\in \mathcal{Q}\,\colon q_{ij}=0 \ \ \forall\, j\notin \mathcal{I}_1\} \quad\forall\,i\in \mathcal{I}_1\,,
\end{equation*}
and in general
\begin{equation*}
B_i\,=\, \{q\in \mathcal{Q}\,\colon q_{ij}=0 \ \ \forall\, j\notin \mathcal{I}_k\} \quad\forall\,i\in \mathcal{I}_k\,.
\end{equation*}
Let
\begin{equation*}
\Hat{p}(j\,|\, i,u) \,\coloneqq\,
\begin{cases}
p(j\,|\, i,u) &\text{if\ } i,j\in \mathcal{I}_k\ \text{for\ } k\in\{1,\dotsc,m\}\\
0&\text{otherwise.}
\end{cases}
\end{equation*}
Note that the matrix $[\Hat{p}]$ is block-diagonal.
Thus we can write the maximum in \cref{DP-2} as
\begin{equation*}
q^*(j\,|\,i,u) \,\coloneqq\, \frac{\Hat{p}(j\,|\,i,u)\mathrm{e}^{c(i,u) + V_j}}
{\sum_k \Hat{p}(k\,|\,i,u)\mathrm{e}^{c(i,u) + V_k}}\,.
\end{equation*}
Substituting this back into \cref{DP-1,DP-2} along with the change of variables
$\Psi_i = \mathrm{e}^{V_i}$, $\Lambda_i = \mathrm{e}^{\Phi^*_i}$, and $\Lambda^* = \mathrm{e}^{\Bar\lambda^*}$,
we get
\begin{align}
\Lambda^* &\,=\, \max_{i \in S}\,\Lambda_i\,, \label{DP*1} \\
\Lambda_i\Psi_i &\,=\, \min_{u\in \mathcal{U}}\,
\Biggl(\sum_{j \in S} \Hat{p}(j\,|\,i,u)\mathrm{e}^{c(i,u)}\Psi_j\Biggr), \label{DP*2} \\
\Lambda_i &\,=\, \min_{u \in B^*_i}\,\sum_{j \in S}
\Biggl(\frac{\Hat{p}(j\,|\,i,u)\mathrm{e}^{c(i,u)}\Psi_j}
{\sum_k\Hat{p}(k\,|\,i,u)\mathrm{e}^{c(i,u)}\Psi_k}\Biggr)
\Lambda_j \label{DP*3}
\end{align}
for $i \in S$, where $B^*_i$ is the set of minimizers in \eqref{DP*2}.
As in \cite{Borkar}, the important observation here is the appearance of
a `twisted kernel' for averaging in \eqref{DP*2}.\footnote{This also serves as a `correction note' to the derivation of (11)--(12) in \cite{Borkar}. The treatment of dynamic programs in \textit{ibid.} is flawed and should be replaced by the exact counterpart of the above.}
\section{Comments on a counterexample of \texorpdfstring{\cite{CavHern-04}}{}}
We discuss the counterexample in \cite[Example~2.1]{CavHern-04}, which
is for an uncontrolled model.
\begin{example}\label{Ex4.1}
Let
\begin{equation*}
p_{21}=1-\rho,\quad p_{22} =\rho,\quad p_{11} =1,\quad c(2)=1,\quad c(1)=0\,,
\end{equation*}
with $\rho\in(0,1)$.
Solving \cref{DP-1,DP-2} for $i=1$, we obtain
\begin{equation*}
\Phi^*_1 =0\,,\quad q_{11} =1\,,\quad V_1=\text{any constant}\,.
\end{equation*}
The equations for $i=2$ are
\begin{align*}
\Phi^*_2 &\,=\, \max_{q \in \mathcal{Q}}\,
\bigl[ q_{22}\Phi^*_2\bigr]\,, \\
\Phi^*_2 + V_2 &\,=\, \max_{q\in B_2}\,
\Biggl[1-q_{22}\log\frac{q_{22}}{\rho} -
q_{21}\log\frac{q_{21}}{1-\rho}\\
&\mspace{200mu}+ q_{22} V_2 + q_{21} V_1 \Biggr] \,.
\end{align*}
Thus we must have $q_{22}=1$ if $\Phi^*_2\ne0$.
In this case, from \cref{DP-1,DP-2}, we get
\begin{equation*}
\Phi^*_2 = 1+\log \rho\,,\quad q_{22} =1\,,\quad V_2=\text{any constant}\,.
\end{equation*}
If $\log\rho>-1$, then the first hitting time to state
$1$ (from state $2$) does not have an exponential moment,
and $\lambda^*_2= 1+\log \rho$, while of course $\lambda^*_1=0$.
On the other hand if $\log\rho<-1$, then $q_{22}\ne1$, and we get
$\Phi^*_2 = 0$, and $q\equiv q_{22}\in(0,1)$ solves
\begin{equation*}
\log\frac{q}{1-q}-\log\frac{\rho}{1-\rho}+ \frac{1}{1-q}\Bigl((1-2q)V_1
- B(q) \Bigr)\,=\,0\,,
\end{equation*}
with
\begin{equation*}
B(q)\,\coloneqq\, 1-q\log\frac{q}{\rho} - (1-q)\log \frac{1-q}{1-\rho}\,.
\end{equation*}
Also $V_2 = \frac{q V_1 + B(q)}{1-q}$.
Thus, in either case, $\Bar\lambda^* = \max\,\{\Phi^*_1,\Phi^*_2\}$.
\end{example}
However, as noted in \cite[Example~2.1]{CavHern-04},
the multiplicative Poisson equation does not have a solution when $\log\rho>-1$,
because there is no pair of numbers $(h_1,h_2)$ that even solves the inequality
\begin{equation*}
\begin{aligned}
\mathrm{e}\rho \mathrm{e}^{h_2}\,=\,\mathrm{e}^{\lambda^*_2} \mathrm{e}^{h_2}
&\,\ge\, \mathrm{e}^{c(2)} \Bigl[ p_{22} \mathrm{e}^{h_2} + p_{21} \mathrm{e}^{h_1}\Bigr]\\
&\,=\, \mathrm{e}\Bigl[ \rho \mathrm{e}^{h_2} + (1-\rho) \mathrm{e}^{h_1}\Bigr]\,.
\end{aligned}
\end{equation*}
Indeed, the right-hand side exceeds the left-hand side by
$\mathrm{e}(1-\rho)\mathrm{e}^{h_1}>0$, so no such pair can exist.
We compare \cref{T4.1} with the results in \cite{CavHern-05}.
As shown in \cite[Theorem~3.5]{CavHern-05}, under a Doeblin hypothesis,
it holds that
\begin{equation}\label{E-inf}
\lambda^*_i\,=\,\inf_{g\in\mathscr{G}}\, g(i)\,,
\end{equation}
where $\mathscr{G}$ is the class of functions satisfying
\begin{equation*}
g(i) \,=\, \min_{u\in\mathcal{U}}\,\Bigl(\max\,\{g(j)\,\colon p(j\,|\, i,u)>0\}\Bigr)\,,
\end{equation*}
and
\begin{equation*}
\mathrm{e}^{g(i) + h_i} \,\ge\,
\min_{u\in B_g(i)}\, \Biggl[ \mathrm{e}^{c(i,u)} \sum_{j\in S} p(j\,|\, i,u) \mathrm{e}^{h_j}\Biggr]
\,,
\end{equation*}
where $h=(h_1,\dotsc,h_{s})\in\mathbb{R}^{s}$ is a vector possibly depending
on $g$, and
\begin{equation*}
B_g(i) \,\coloneqq\,\bigl\{ u\in\mathcal{U}\,\colon g(i)=\max\,\{g(j)\,\colon p(j\,|\, i,u)>0\}\bigr\}\,.
\end{equation*}
It is important to note that the infimum in \cref{E-inf} might not be realized
in $\mathscr{G}$. This is what \cref{Ex4.1} shows in the case $\log\rho>-1$.
\section{Future Directions}
One interesting problem that still remains is to show optimality of stationary or pure policies under very general conditions that do not require irreducibility. Yet another interesting direction is an extension of this paradigm to general state spaces and to continuous time risk-sensitive control.
\eject
\end{document}
\begin{document}
\begin{onecolumn}
\thispagestyle{empty}
\vspace*{0.6cm}
\begin{center}
{\LARGE Robust Particle Filter by Dynamic Averaging of Multiple Noise Models
}\\
{\large
Bin Liu\footnote{Corresponding author.
E-mail: {\sf [email protected]}.\\
Copyright 2017 IEEE. Published in the IEEE 2017 International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2017), scheduled for 5-9 March 2017 in New Orleans, Louisiana, USA. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from the IEEE. Contact: Manager, Copyrights and Permissions / IEEE Service Center / 445 Hoes Lane / P.O. Box 1331 / Piscataway, NJ 08855-1331, USA. Telephone: + Intl. 908-562-3966.}
{$~$School of Computer Science and Technology, \\Nanjing University of Posts and Telecommunications}\\
{$~$Nanjing, Jiangsu, 210023, China}
Manuscript Submitted --- Sep. 5th, 2016\\
This Manuscript has been accepted by ICASSP 2017 as an oral presentation paper\\
}
\end{center}
\end{onecolumn}
\begin{twocolumn}
\title{Robust Particle Filter by Dynamic Averaging of Multiple Noise Models}
\name{Bin~Liu$^{\star}$
\thanks{$^\star$Address correspondence to [email protected]. This work
was partly supported by the National Natural Science Foundation
(NSF) of China (Nos. 61302158 and 61571238), the China Postdoctoral Science Foundation (Nos. 2015M580455 and 2016T90483), the NSF of Jiangsu Province (No. BK20130869), Scientific and Technological Support Project (Society) of Jiangsu Province (No. BE2016776).}}
\address{School of Computer Science and Technology, Nanjing University of \\
Posts and Telecommunications, Nanjing, 210023 China}
\maketitle
\begin{abstract}
State filtering is a key problem in many signal processing applications.
From a series of noisy measurements, one would like to estimate the state
of a dynamic system. Existing techniques usually adopt a Gaussian noise assumption,
which may lead to a major degradation in performance when the measurements are
contaminated by outliers. A robust algorithm that is immune to the presence of outliers is therefore desirable.
To this end, a robust particle filter (PF) algorithm is proposed, in which heavier-tailed Student's t distributions
are employed together with the Gaussian distribution to model the measurement noise.
The effect of each model is automatically and dynamically adjusted via a Bayesian model averaging mechanism.
The validity of the proposed algorithm is evaluated by illustrative simulations.
\end{abstract}
\begin{keywords}
Bayesian, dynamic model averaging, robust particle filter, Student's t distribution, outliers
\end{keywords}
\section{Introduction}\label{sec:intro}
This paper focuses on nonlinear state filtering, a key problem in many signal processing applications.
The aim here is to derive a novel particle filter (PF) algorithm that is robust towards outliers in the measurement noise.
We regard filtering with outliers as a model uncertainty problem, and address it using a multiple model strategy (MMS).
The MMS is a generic approach to handling model uncertainty problems. For example, in \cite{yi2016robust} and \cite{liu2011instantaneous}, the MMS is utilized to account for measurement model uncertainty and state evolution model uncertainty, respectively. Here, we employ the MMS to account for the possible appearance of outliers in the measurements, in the context of nonlinear state filtering. Three candidate measurement models, one Gaussian and two Student's t distribution models, are employed together to represent the measurement noise. By virtue of a model averaging mechanism, the effect of each model is dynamically adjusted according to its posterior probability, which is updated sequentially as more data are observed. The method allows the heavier-tailed Student's t models to dominate the Gaussian model when outliers arrive. This is done autonomously and dynamically within the PF algorithmic framework. The validity of our method is evaluated by illustrative simulations.
\section{Particle Filter}
In this section, we give a succinct description of the PF algorithm. For more details, readers may refer to \cite{arulampalam2002tutorial,smith2013sequential,doucet2000sequential}.
Let us first consider a state space model:
\begin{eqnarray}
x_k&=&f(x_{k-1})+u_k\\
y_k&=&h(x_k)+n_k,
\end{eqnarray}
where $x_k\in\mathbb{R}^{d_x}$ and $y_k\in\mathbb{R}^{d_y}$ denote the target state vector and the measurement at the $k$th time step, respectively; $d_x$ and $d_y$ denote the corresponding dimensions. $f$ and $h$ denote the nonlinear state evolution function and measurement function, respectively. $u_k$ and $n_k$ represent independent identically distributed (i.i.d.) process and measurement noise sequence, respectively.
The probability density functions (pdfs) of $u_k$ and $n_k$, which are usually specified by the modeler, define the state transition prior density $p(x_k|x_{k-1})$ and the likelihood function $p(y_k|x_k)$, respectively.
The Bayesian state filtering problem consists of computing the a posteriori pdf of $x_k$ given $y_{0:k}=\{y_i\}_{i=0}^k$, denoted by $p(x_k|y_{0:k})$ (or in short $p_{k|k}$). Recursive solutions are preferable to batch-mode methods, and indeed $p_{k|k}$ can be computed from $p_{k-1|k-1}$ recursively as follows
\begin{equation}\label{eqn:filter}
p_{k|k}=\frac{p(y_k|x_k)\int p(x_k|x_{k-1})p_{k-1|k-1}dx_{k-1}}{p(y_k|y_{0:k-1})}.
\end{equation}
The PF algorithm is an approximate solution to Eqn.(3) based on the sequential application of importance sampling (IS) techniques. Suppose that, at time step $k-1$, we have a discrete approximation of $p(x_{0:k-1}|y_{0:k-1})$ given by a set of weighted samples $\{x_{0:k-1}^i,\omega_{k-1}^i\}_{i=1}^N$, in which $x_{0:k-1}^i\sim q(x_{0:k-1}|y_{0:k-1})$,
$\omega_{k-1}^i\propto p(x_{0:k-1}|y_{0:k-1})/q(x_{0:k-1}|y_{0:k-1})$, $\sum_{i=1}^N\omega_{k-1}^i=1$. At time $k$, the $i$th trajectory is first extended by a particle $\hat{x}_k^i$ sampled from an importance distribution $q(x_k|x_{k-1},y_{0:k})$ and then weighted by
\begin{equation}
\omega_k^i\propto\omega_{k-1}^ip(\hat{x}_k^i|x_{k-1}^i)p(y_k|\hat{x}_k^i)/q(\hat{x}_k^i|x_{k-1}^i,y_{0:k}).
\end{equation}
It is well known that the above algorithm suffers from particle degeneracy when it is applied sequentially \cite{doucet2000sequential}: after some iterations, only a few particles retain a non-negligible weight. A common practice to get around this problem is to apply, after the weighting step,
a resampling step meant to discard the particles with low weights and duplicate those with high
weights. Several resampling techniques have been proposed, see e.g. \cite{douc2005comparison,Li2015Resampling,Hol2006on}. The main scheme for one iteration of the PF algorithm can be summarized as follows (a minimal code sketch is given after the list). Starting from $\{x_{k-1}^i,\omega_{k-1}^i\}_{i=1}^N$:
\begin{itemize}
\item Sampling step. Sample $\hat{x}_k^i\sim q(x_k|x_{k-1}^i,y_{0:k})$, for all $i$, $1\leq i\leq N$;
\item Weighting step. Set $\omega_k^i$ using Eqn.(4) for all $i$, $1\leq i\leq N$, and normalize these weights to guarantee that $\sum_{i=1}^N\omega_k^i=1$;
\item Resampling step. Sample $x_k^i\sim\sum_{j=1}^N\omega_k^j\delta_{\hat{x}_k^j}$, set $\omega_k^i=1/N$, for all $i$, $1\leq i\leq N$. $\delta_{x}$ denotes the Dirac-delta function located at $x$.
\end{itemize}
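For concreteness, a minimal Python sketch of one such iteration is given below. It assumes a scalar state, the state transition prior as the importance distribution, Gaussian noise and multinomial resampling; the functions \texttt{f}, \texttt{h} and the noise scales \texttt{sigma\_u}, \texttt{sigma\_n} are placeholders.
\begin{verbatim}
# Hedged sketch of one bootstrap PF iteration (prior proposal, Gaussian
# likelihood, multinomial resampling); f, h, sigma_u, sigma_n are
# placeholders for the model of Eqns. (1)-(2).
import numpy as np

def pf_step(x_prev, w_prev, y_k, f, h, sigma_u, sigma_n, rng):
    N = x_prev.shape[0]
    # Sampling step: propose from the (here Gaussian) transition prior
    x_hat = f(x_prev) + sigma_u * rng.standard_normal(N)
    # Weighting step: with the prior proposal, the update in Eqn. (4)
    # reduces to multiplication by the likelihood p(y_k | x_hat)
    log_w = np.log(w_prev) - 0.5 * ((y_k - h(x_hat)) / sigma_n) ** 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # Resampling step: draw indices with probabilities w, reset weights
    idx = rng.choice(N, size=N, p=w)
    return x_hat[idx], np.full(N, 1.0 / N)
\end{verbatim}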
\section{The Proposed Robust Particle Filter}
Here the measurement noise $n_k$ in Eqn.(2) is modeled by $M$ candidate models together. Let $\mathcal{H}_k=m$ denote the event that the $m$th model, $\mathcal{M}_m$, is the best one for use at time $k$.
Based on the Bayesian model averaging strategy \cite{hoeting1999bayesian,raftery1997bayesian,wintle2003use}, the posterior pdf under this multiple model setting is calculated as follows
\begin{equation}
p_{k|k}=\sum_{m=1}^Mp_{m,k|k}\pi_{m,k|k},
\end{equation}
where $p_{m,k|k}\triangleq p(x_k|\mathcal{H}_k=m,y_{0:k})$ and $\pi_{m,k|k}\triangleq p(\mathcal{H}_{k}=m|y_{0:k})$.
A recursive solution to compute Eqn.(5) is of particular interest here. Assume that at time $k-1$, we have at hand $\pi_{m,k-1|k-1}$, for all $m$, $1\leq m\leq M$, and a weighted sample set, $\{x_{0:k-1}^i,\omega_{k-1}^i\}$, which satisfies
\begin{equation}\label{eqn:particle_approx_t-1}
p_{k-1|k-1}\simeq\sum_{i=1}^N\omega_{k-1}^i\delta_{x_{k-1}^i}.
\end{equation}
At time $k$, the $i$th trajectory is first extended by a particle $\hat{x}_k^i$
sampled from an importance distribution $q(x_k|x_{k-1},y_{0:k})$ and then weighted by a weight
\begin{equation}\label{eqn:omega_mm}
\omega_{m,k}^i\propto\omega_{k-1}^ip(\hat{x}_k^i|x_{k-1}^i)p_m(y_k|\hat{x}_k^i)/q(\hat{x}_k^i|x_{k-1}^i,y_{0:k}),
\end{equation}
under the hypothesis $\mathcal{H}_{k}=m$, where $p_m(y_k|x_k)$ denotes the likelihood function associated with $\mathcal{M}_m$.
According to the IS principle, we have
\begin{equation}
p_{m,k|k}\simeq\sum_{i=1}^N\omega_{m,k}^i\delta_{\hat{x}_{k}^i}.
\end{equation}
Now let us consider, given $\pi_{m,k-1|k-1}$, how to derive $\pi_{m,k|k}$.
First we specify a model transition process in terms of forgetting \cite{liu2011instantaneous}, in order to predict the model indicator $\mathcal{H}$.
Let $\alpha$, $0<\alpha<1$, denote the forgetting factor. Given $\pi_{m,k-1|k-1}$, we have
\begin{equation}\label{model_pred_forget}
\pi_{m,k|k-1}=\frac{\pi_{m,k-1|k-1}^{\alpha}}{\sum_{m=1}^M\pi_{m,k-1|k-1}^{\alpha}},
\end{equation}
where $\pi_{m,k|k-1}\triangleq p(\mathcal{H}_{k}=m|y_{0:k-1})$. Then, employing Bayes' rule we have
\begin{equation}\label{posterior_model_indicator}
\pi_{m,k|k}=\frac{\pi_{m,k|k-1}p_m(y_k|y_{0:k-1})}{\sum_{m=1}^M\pi_{m,k|k-1}p_m(y_k|y_{0:k-1})},
\end{equation}
where $p_m(y_k|y_{0:k-1})$ is the marginal likelihood of
$\mathcal{M}_m$ at time $k$, defined to be
\begin{equation}\label{marginal_lik}
p_m(y_k|y_{0:k-1})=\int p_m(y_k|x_k)p(x_k|y_{0:k-1})dx_k.
\end{equation}
Here the state transition prior is adopted as the importance distribution, namely $q(x_k|x_{k-1},y_{0:k})=p(x_k|x_{k-1})$. Accordingly we have $p(x_k|y_{0:k-1})\simeq\sum_{i=1}^N\omega_{k-1}^i\delta_{\hat{x}_k^i}$. Then the integral in Eqn.(\ref{marginal_lik}) can be approximated as follows
\begin{equation}\label{eqn:particle_marignal_lik}
p_m(y_k|y_{0:k-1})\simeq\sum_{i=1}^N \omega_{k-1}^ip_m(y_k|\hat{x}_k^i).
\end{equation}
To summarize, one iteration of the proposed robust particle filter (RPF) is as follows (a sketch of the model averaging and resampling steps is given after the list). Starting from $\{x_{k-1}^i,\omega_{k-1}^i\}_{i=1}^N$ and $\pi_{m,k-1|k-1}$, for all $m$, $1\leq m\leq M$:
\begin{itemize}
\item Sampling step. Sample $\hat{x}_k^i\sim q(x_k|x_{k-1}^i,y_{0:k})$, for all $i$, $1\leq i\leq N$;
\item Weighting step. Set $\omega_{m,k}^i$ using Eqn.(\ref{eqn:omega_mm}) for all $i$, $1\leq i\leq N$, and normalize these weights to guarantee that $\sum_{i=1}^N\omega_{m,k}^i=1$, for all $m$, $1\leq m\leq M$;
\item Model Averaging step. Compute the posterior pdf of the model indicator, $\pi_{m,k|k}$, using Eqns.(\ref{model_pred_forget})--(\ref{eqn:particle_marignal_lik}).
\item Resampling step. Sample $x_k^i\sim\sum_{j=1}^N\omega_k^j\delta_{\hat{x}_k^j}$, in which $\omega_k^j=\sum_{m=1}^M\pi_{m,k|k}\omega_{m,k}^j$; Set $\omega_k^i=1/N$, for all $i$, $1\leq i\leq N$.
\end{itemize}
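For clarity, the model averaging and resampling steps are sketched below in Python, assuming the prior proposal and that \texttt{loglik[m]} evaluates $\log p_m(y_k|\cdot)$ over the particles; all variable names are placeholders.
\begin{verbatim}
# Hedged sketch of the RPF model-averaging and resampling steps,
# following Eqns. (7) and (9)-(12); loglik[m](y, x) is assumed to return
# log p_m(y | x) elementwise over the particle array x.
import numpy as np

def rpf_average(x_hat, w_prev, y_k, loglik, pi_prev, alpha, rng):
    M, N = len(loglik), x_hat.shape[0]
    # Model prediction by forgetting, Eqn. (9)
    pi_pred = pi_prev ** alpha
    pi_pred /= pi_pred.sum()
    # Per-model particle weights, Eqn. (7), with the prior proposal
    lik = np.array([np.exp(loglik[m](y_k, x_hat)) for m in range(M)])
    w_m = w_prev * lik                              # shape (M, N)
    marg = w_m.sum(axis=1)                          # Eqn. (12)
    w_m /= w_m.sum(axis=1, keepdims=True)
    # Posterior model probabilities, Eqn. (10)
    pi_post = pi_pred * marg
    pi_post /= pi_post.sum()
    # Averaged weights, then multinomial resampling
    w = pi_post @ w_m
    idx = rng.choice(N, size=N, p=w)
    return x_hat[idx], np.full(N, 1.0 / N), pi_post
\end{verbatim}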
The presented RPF uses three measurement noise models, namely two Student's t distribution models and one Gaussian model, based on which the likelihood functions $p_m(y_k|x_k)$, $m=1,2,3$, are defined. All distribution models are zero mean with a fixed covariance $\Sigma$. The two Student's t models differ from each other in the parameter known as the degrees of freedom (DoF). The DoF values in use are 3 and 50, corresponding to an extremely heavy-tailed and a moderately heavy-tailed distribution, respectively.
Suppose that $x$ is a $d$ dimensional random variable that follows the multivariate Student's $t$ distribution,
denoted by $\mathcal{S}(\cdot|\mu,\Sigma,v)$, where $\mu$ denotes the mean and $v\in(0,\infty]$ is the DoF. Then the density function of $x$ is:
\begin{equation}\label{def_t}
\mathcal{S}(x|\mu,\Sigma,v)=\frac{\Gamma(\frac{v+d}{2})|\Sigma|^{-0.5}}{(\pi v)^{0.5d}\Gamma(\frac{v}{2})\{1+M_d(x,\mu,\Sigma)/v\}^{0.5(v+d)}},
\end{equation}
where
\begin{equation}
M_d(x,\mu,\Sigma)=(x-\mu)^T \Sigma^{-1}(x-\mu)
\end{equation}
denotes the Mahalanobis squared distance from $x$ to $\mu$ with respect to $\Sigma$, $A^{-1}$ denotes the inverse of $A$ and $\Gamma(\cdot)$ denotes the gamma function.
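As an implementation note, the log of the density in Eqn.(\ref{def_t}) can be evaluated as in the following sketch; the Cholesky-based computation of the Mahalanobis distance and of $\log|\Sigma|$ is one standard, numerically convenient choice, not the only one.
\begin{verbatim}
# Hedged sketch: log-density of the multivariate Student's t distribution
# defined in Eqn. (13), evaluated via a Cholesky factorization of Sigma.
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.special import gammaln

def student_t_logpdf(x, mu, Sigma, v):
    d = mu.shape[0]
    L = cho_factor(Sigma, lower=True)
    diff = x - mu
    maha = diff @ cho_solve(L, diff)        # (x-mu)^T Sigma^{-1} (x-mu)
    logdet = 2.0 * np.log(np.diag(L[0])).sum()
    return (gammaln(0.5 * (v + d)) - gammaln(0.5 * v)
            - 0.5 * d * np.log(np.pi * v) - 0.5 * logdet
            - 0.5 * (v + d) * np.log1p(maha / v))
\end{verbatim}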
\section{Simulations}
We evaluated the validity of the proposed algorithm using the time-series experiment presented in \cite{van2000the}. The time-series is generated by the following
state evolution model
\begin{equation}
x_{k+1}=1+\sin(0.04\pi\times(k+1))+0.5x_k+u_k,
\end{equation}
where $u_k$ is a Gamma(3,2) random variable modeling the process noise. The observation model is
\begin{equation}
y_k=\left\{\begin{array}{ll}
0.2x_k^2+n_k,\quad\quad\quad k\leq30 \\
0.2x_k-2+n_k,\quad\, k>30 \end{array} \right.
\end{equation}
The goal is to estimate the underlying clean state sequence $x_k$ online based on the noisy observations, $y_k$, for $k=1,\ldots,60$.
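For reference, a short sketch of how this benchmark time series can be generated is given below; the Gamma(3,2) convention (shape 3, scale 2) and the Gaussian measurement-noise scale \texttt{sigma\_n} are assumptions that should be matched to the setting of \cite{van2000the}.
\begin{verbatim}
# Hedged sketch: generate the benchmark series of Eqns. (15)-(16).
# The Gamma(3,2) parameterization (shape, scale) and sigma_n are
# assumptions, not values taken from the paper.
import numpy as np

def simulate(T=60, sigma_n=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(T + 1)
    y = np.zeros(T + 1)
    for k in range(T):
        u = rng.gamma(shape=3.0, scale=2.0)
        x[k + 1] = 1.0 + np.sin(0.04 * np.pi * (k + 1)) + 0.5 * x[k] + u
        n = sigma_n * rng.standard_normal()
        if k + 1 <= 30:
            y[k + 1] = 0.2 * x[k + 1] ** 2 + n
        else:
            y[k + 1] = 0.2 * x[k + 1] - 2.0 + n
    return x[1:], y[1:]
\end{verbatim}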
\subsection{Case I: filtering without the presence of outliers}
First we considered the case without outliers. In this case the measurement noise, $n_k$, was drawn from a zero-mean Gaussian distribution.
A few different PF algorithms were used for performance comparison. The experiment was repeated 30 times with random re-initialization for each run. All of the PFs used 200 particles and residual resampling \cite{Hol2006on}. The forgetting factor $\alpha$ of the RPF was set to 0.9. The performance of the different filters is summarized in Table 1, wherein EKPF and UPF denote the PFs which employ the extended Kalman filter and the unscented Kalman filter, respectively, to generate the importance distribution. The table shows the execution time (in seconds) and the means and variances of the mean-square error (MSE) of the state estimates. All the reported computing times are based on a computer equipped with an Intel i5-3210M 2.50 GHz processor with one core; they do not involve any parallel processing. The results show that the proposed RPF is more accurate than the other competitor algorithms in the sense of MSE, with less execution time than the UPFs.
\begin{table}\centering\small
\begin{tabular}{c||c||c|c}
\hline
Algorithm & Time & \multicolumn{2}{c}{MSE} \\
& &mean&var \\\hline
PF: Generic& 1.561 &0.350&0.056 \\\hline
PF: MCMC move step& 3.275&0.371&0.047 \\\hline
EKPF&2.958&0.280&0.015\\\hline
EKPF: MCMC move step&7.033&0.278&0.013\\\hline
UPF&9.095&0.055&0.008\\\hline
UPF: MCMC move step&19.735&0.052&0.008\\\hline
the proposed RPF&5.509&0.018&0.0001\\\hline
\end{tabular}
\label{Table:convergence values}
\caption{Execution time (in seconds), Mean and variance of the MSE calculated over 30 independent runs for Case I.}
\end{table}
\subsection{Case II: filtering with the presence of outliers}
Next we designed a simulation case that involves outliers. The setting for the experiment time series was the same as in Case I, except that the measurements at several time steps were replaced by outliers. The time steps associated with the presence of outliers are $k=7,8,9,20,37,38,39,50$. For the regular measurements, the associated measurement noise was drawn from the same zero-mean Gaussian distribution as in Case I. For the outliers, the noise term $n_k$ in Eqn.(16) was drawn randomly from a uniform distribution between 40 and 50. All the considered algorithms were blind to the above information on the outliers. The other settings for the experiment were the same as for Case I. The performance of the different filters is summarized in Table 2, which shows that the presented RPF method provides the most accurate online state estimation. For this case, the EKPFs and UPFs perform much worse than the other filters. We argue that this is because both the EKPFs and UPFs utilize the measurement information to build up the importance distribution; they improperly treat outliers as regular measurements when outliers arrive, which makes the resulting importance distribution inefficient and misleading.
\begin{table}\centering\small
\begin{tabular}{c||c|c}
\hline
Algorithm & \multicolumn{2}{c}{MSE}\\
&mean&var\\\hline
PF: Generic&0.533&0.040\\\hline
PF: MCMC move step&0.523&0.039\\\hline
EKPF&22.663&0.343\\\hline
EKPF: MCMC move step&22.668&0.358\\\hline
UPF&19.804&0.289\\\hline
UPF: MCMC move step&19.808&0.274\\\hline
the proposed RPF&0.357&0.010\\\hline
\end{tabular}
\label{Table:convergence values II}
\caption{Mean and variance of the MSE calculated over 30 independent runs for Case II.}
\end{table}
\subsection{Further evaluations of RPF}
First we evaluated the sensitivity of the RPF's performance with respect to the forgetting factor $\alpha$.
We considered $\alpha$ values 0.1, 0.3, 0.5, 0.7 and 0.9. For each value, we ran the RPF algorithm 30 times for both Case I and II and calculated the corresponding mean of MSE. The result is depicted in Fig.\ref{fig:mse_alpha}, which shows that the performance of the presented RPF algorithm is not very sensitive to the selected values of $\alpha$, for both Case I and II.
\begin{figure}
\caption{Mean of the MSE calculated over 30 independent runs, in case of different $\alpha$ values, for both Case I and II.}
\label{fig:mse_alpha}
\end{figure}
Next we fixed the value of $\alpha$ to 0.1, and recorded the averaged posterior probability of each candidate measurement model at each time step over 30 runs of the experiment for both Case I and Case II. The result is plotted in Fig.\ref{fig:model_prob}. It shows that, for both cases, the Student's t ($v$=3) model always dominates the other models.
The curves show no obvious pattern for Case I, but a clear pattern for Case II: the posterior probability of the Student's t ($v$=3) model increases with the appearance of outliers. Specifically, once an outlier appears (corresponding to time steps $k=7,8,9,20,37,38,39,50$), the posterior probability of the Student's t ($v$=3) model suddenly increases to a value close to 1; meanwhile, the posterior probabilities of the other two models decrease to 0 correspondingly.
\begin{figure}
\caption{Averaged posterior probability of candidate models outputted by the proposed RPF method. The top and bottom sub-figures correspond to Case I and II, respectively.}
\label{fig:model_prob}
\end{figure}
Observing that the posterior probability of the Student's t ($v$=3) model is always much bigger than those of the other models, we wondered whether a PF algorithm which only employs the Student's t ($v$=3) model could produce performance similar to that of the presented RPF algorithm. We set $\alpha=0.9$ and repeated the experiment of running the single Student's t ($v$=3) model based PF in the same way as described before for Cases I and II. The resulting mean and variance of the MSE are presented in Table 3. In contrast with the performance of the RPF as presented in Tables 1 and 2, we see that the single-model based PF performs similarly to the presented RPF for Case II, while it loses in terms of MSE against the RPF for Case I.
\begin{table}[htbp]\centering\small
\begin{tabular}{c||c|c}
\hline
& \multicolumn{2}{c}{MSE}\\
&mean&var\\\hline
Case I &0.060&0.012\\\hline
Case II&0.366&0.006\\\hline
\end{tabular}
\label{Table:convergence PF_t}
\caption{Mean and variance of the MSE calculated over 30 independent runs of the Student's t (v=3) model based PF.}
\end{table}
\section{Conclusions}
In this paper, we propose a multi-model based PF method, which is robust against the presence of outliers in the measurements.
In the proposed RPF method, heavier-tailed Student's t models are employed together with the conventionally used Gaussian model to represent the measurement noise. A Bayesian model averaging strategy is adopted to handle the issue of model uncertainty. It is shown that the proposed method is able to dynamically adjust the effect of each candidate model in an automatic and theoretically sound manner. The validity of this method is evaluated via illustrative simulations. Empirical results show that the RPF method performs strikingly better than several existing PFs for all cases under consideration. Future work lies in adapting the presented RPF algorithm to deal with real-life problems, e.g., sonar/radar target tracking \cite{liu2010multi}, in which the behavior of the outliers may be more complex.
\end{twocolumn}
\end{document}
\begin{document}
\date{}
\begin{abstract}
The \emph{contact graph} of a packing of translates of a convex body in
Euclidean $d$-space $\mathbb E^d$ is the simple graph whose vertices are the
members of the packing, and whose two vertices are connected by an edge if the
two members touch each other. A packing of translates of a convex body is
called \emph{totally separable}, if any two members can be separated by a
hyperplane in $\mathbb E^d$ disjoint from the interior of every packing element.
We give upper bounds on the maximum vertex degree (called \emph{separable Hadwiger
number}) and the maximum number of edges (called \emph{separable contact
number}) of the contact graph of a totally separable packing of $n$ translates
of an arbitrary smooth convex body in $\mathbb E^d$ with $d=2,3,4$. In the proofs,
linear algebraic and convexity methods are combined
with volumetric and packing density estimates based on the underlying
isoperimetric (resp., reverse isoperimetric) inequality.
\end{abstract}
\maketitle
\sloppy
\renewcommand\footnotemark{}
\section{Introduction}\label{sec:intro}
We denote the $d$-dimensional Euclidean space by $\mathbb{E}^d$, and the
unit ball centered at the origin $\mathbf{o}$ by $\mathbf{B}^d$. A \emph{convex
body} $\mathbf{K}$ is a compact convex subset of $\mathbb{E}^d$ with nonempty interior.
Throughout the paper, $\mathbf{K}$ always denotes a convex body in $\mathbb{E}^d$.
If $\mathbf{K} = -\mathbf{K}:=\{-x: x\in \mathbf{K}\}$, then $\mathbf{K}$ is said to be \emph{$\mathbf{o}$-symmetric}.
$\mathbf{K}$ is said to be \emph{smooth} if at every point on the boundary $\bd \mathbf{K}$ of
$\mathbf{K}$, the body $\mathbf{K}$ is supported by a unique hyperplane of $\mathbb{E}^d$. $\mathbf{K}$ is
\emph{strictly convex} if the boundary of $\mathbf{K}$ contains no nontrivial line
segment.
The \emph{kissing number problem} asks for the maximum number $k(d)$ of
non-overlapping translates of $\mathbf{B}^d$ that can touch $\mathbf{B}^d$. Clearly,
$k(2)=6$. To date, the only known kissing number values are $k(3)=12$
\cite{ScWa53},
$k(4)=24$ \cite{Mu08}, $k(8)=240$ \cite{OdSl79}, and
$k(24)=196560$ \cite{OdSl79}.
For a survey of kissing numbers we refer the interested reader to
\cite{Bo}.
Generalizing the kissing number, the \emph{Hadwiger number} or \emph{the
trans\-la\-ti\-ve kissing number} $H(\mathbf{K})$ of a convex body
$\mathbf{K}$ is the maximum number of non-overlapping translates of $\mathbf{K}$ that all touch
$\mathbf{K}$. Given the difficulty of the kissing number problem, determining
Hadwiger numbers is highly nontrivial with few exact values known for $d\ge 3$.
The best general upper and lower bounds on $H(\mathbf{K})$ are due to Hadwiger
\cite{Ha} and Talata \cite{Ta} respectively, and can be expressed as
\begin{equation}\label{eq:hadwiger}
2^{cd}\le H(\mathbf{K})\le 3^{d} - 1,
\end{equation}
where $c$ is an absolute constant and equality holds in the right inequality if
and only if $\mathbf{K}$ is an affine $d$-dimensional cube \cite{Gr61}.
A packing of translates of a \emph{convex domain}, that is, a convex
body $\mathbf{K}$ in $\mathbb{E}^2$ is said to be \emph{totally separable} if any two packing
elements can be separated by a line of
$\mathbb{E}^{2}$ disjoint from the interior of every packing element. This
notion was introduced by G. Fejes T\'{o}th and L. Fejes T\'{o}th \cite{FTFT73}.
We can define a totally separable packing of translates of a
$d$-di\-men\-si\-o\-nal
convex body $\mathbf{K}$ in a similar way by requiring any two packing elements to be
separated by a hyperplane in $\mathbb{E}^d$ disjoint from the interior of
every packing element \cite{BeSzSz, Ke}.
Recall that the \emph{contact graph} of a packing of translates of $\mathbf{K}$ is the
simple graph whose vertices are the members of the packing, and whose two
vertices are connected by an edge if and only if the
two members touch each other. In this paper we
investigate the maximum vertex degree (called \emph{separable Hadwiger number}), as
well as the maximum number of edges (called the \emph{maximum separable contact
number}) of the contact graphs of totally separable packings by a
given number of translates of a smooth or strictly convex body $\mathbf{K}$ in $\mathbb{E}^d$.
This extends and generalizes the results of \cite{BeKhOl} and \cite{BeSzSz}.
The details follow.
\subsection{Separable Hadwiger numbers}\label{sec:sepHintro}
It is natural to introduce the totally separable analogue of the
Hadwiger number as follows \cite{BeKhOl}.
\begin{definition}
Let $\mathbf{K}$ be a convex body in $\mathbb{E}^d$. We call a family of translates of $\mathbf{K}$ that
all touch
$\mathbf{K}$ and, together with $\mathbf{K}$, form a totally separable packing in $\mathbb{E}^d$ a
\emph{separable Hadwiger configuration} of $\mathbf{K}$.
The \emph{separable Hadwiger number}
$H_{\rm sep}(\mathbf{K})$ of $\mathbf{K}$ is the maximum size of a separable Hadwiger configuration
of $\mathbf{K}$.
\end{definition}
Recall that the \emph{Minkowski symmetrization} of the convex body
$\mathbf{K}$ in $\mathbb{E}^d$, denoted by $\mathbf{K}_{\mathbf{o}}$, is defined by
$\mathbf{K}_{\mathbf{o}} := \frac{1}{2}(\mathbf{K} + (- \mathbf{K})) = \frac{1}{2}(\mathbf{K} - \mathbf{K}) = \frac{1}{2}\{\mathbf{x}
- \mathbf{y} : \mathbf{x} , \mathbf{y} \in \mathbf{K}\}$. Clearly, $\mathbf{K}_{\mathbf{o}}$ is an $\mathbf{o}$-symmetric
$d$-dimensional convex body.
Minkowski \cite{Mi04} showed that if ${\mathcal
P}=\{\mathbf{x}_1+\mathbf{K}, \mathbf{x}_2+\mathbf{K}, \dots ,\mathbf{x}_n+\mathbf{K}\}$
is a packing of translates of $\mathbf{K}$, then ${\mathcal P}_{\mathbf{o}}=\{\mathbf{x}_1+\mathbf{K}_{\mathbf{o}},
\mathbf{x}_2+\mathbf{K}_{\mathbf{o}}, \dots ,\mathbf{x}_n+\mathbf{K}_{\mathbf{o}}\}$
is a packing as well. Moreover, the contact graphs of ${\mathcal P}$ and
${\mathcal P}_{\mathbf{o}}$ are the same. Using
the same method, it is easy to see that Minkowski's above statement applies to
totally separable packings as well.
(See also \cite{BeKhOl}.)
Thus, from this point on, we only consider $\mathbf{o}$-symmetric convex bodies.
It is mentioned in \cite{BeSzSz} that based on \cite{DH} (see also, \cite{Ra}
and \cite{Ku}) it follows in a straightforward way that $H_{\rm sep}(\mathbf{B}^d)=2d$
for all $d\geq 2$. On the other hand,
if $\mathbf{K}$ is an $\mathbf{o}$-symmetric convex body in $\mathbb{E}^d$, then each facet
of the minimum volume circumscribed parallelotope of $\mathbf{K}$ touches $\mathbf{K}$ at the
center of the facet and so, clearly $H_{\rm sep}(\mathbf{K})\geq 2d$. Thus,
\begin{equation}\label{basic-bounds}
2d\leq H_{\rm sep}(\mathbf{K})\leq H(\mathbf{K})\leq 3^d-1
\end{equation}
holds for any $\mathbf{o}$-symmetric convex body $\mathbf{K}$ in $\mathbb{E}^d$.
Furthermore, the $d$-cube is the only $\mathbf{o}$-symmetric convex body
in $\mathbb{E}^d$ with separable Hadwiger number $3^d-1$ \cite{Gr61}.
We investigate equality in the first inequality of \eqref{basic-bounds}.
First, we note as an easy exercise that $H_{\rm sep}$ as a map from
the set of convex bodies equipped with any reasonable topology to the reals is
upper semi-continuous. Thus, for any $d$, if an $\mathbf{o}$-symmetric convex body
$\mathbf{K}$ in $\mathbb{E}^d$ is sufficiently close to the Euclidean ball $\mathbf{B}^d$ (say,
$\mathbf{B}^d\subseteq \mathbf{K}\subseteq (1+\varepsilon_d)\mathbf{B}^d$, where $\varepsilon_d>0$
depends on $d$ only), then $H_{\rm sep}(\mathbf{K})=2d$.
Hence, it is natural to ask whether the set of those $\mathbf{o}$-symmetric convex
bodies $\mathbf{K}$ in $\mathbb{E}^d$ with $H_{\rm sep}(\mathbf{K})=2d$ is dense. In this paper, we investigate
whether $H_{\rm sep}(\mathbf{K})=2d$ holds for any $\mathbf{o}$-symmetric smooth or strictly convex
$\mathbf{K}$ in $\mathbb{E}^d$. Our first main result is a partial answer to this question.
\begin{definition}
An \emph{Auerbach basis} of an $\mathbf{o}$-symmetric convex body
$\mathbf{K}$ in $\mathbb{E}^d$ is a set of $d$ points on the boundary of $\mathbf{K}$ that form a basis
of $\mathbb{E}^d$ with the property that the hyperplane through any one of them,
parallel to the other $d-1$, supports $\mathbf{K}$.
\end{definition}
\begin{theorem}\label{thm:smoothstrictlycvx}
Let $\mathbf{K}$ be an $\mathbf{o}$-symmetric convex body in $\mathbb{E}^d$, which is smooth \emph{or}
strictly convex. Then
\begin{enumerate}[(a)]
\item\label{item:threefourdim}
For $d\in\{1,2,3,4\}$, we have $H_{\rm sep}(\mathbf{K})=2d$ and, in any separable Hadwiger
configuration of $\mathbf{K}$ with $2d$ translates of $\mathbf{K}$, the translation vectors are
$d$ pairs of opposite vectors, where picking one from each pair yields an
Auerbach basis of $\mathbf{K}$.
\item\label{item:DG}
$H_{\rm sep}(\mathbf{K})\leq 2^{d+1}-3$ for all $d\geq 5$.
\end{enumerate}
\end{theorem}
We note that part (a) of Theorem~\ref{thm:smoothstrictlycvx} was
proved for $d=2$ and smooth $\mathbf{o}$-symmetric convex domains in \cite{BeKhOl}. We
prove Theorem~\ref{thm:smoothstrictlycvx} in Section~\ref{sec:thmproofs}.
\subsection{One-sided separable Hadwiger numbers}\label{sec:onesided}
The one-sided Had\-wi\-ger number $h(\mathbf{K})$ of an $\mathbf{o}$-symmetric convex body
$\mathbf{K}$ in
$\mathbb{E}^d$ has been defined in \cite{BeBr} as the maximum number of non-overlapping
translates of $\mathbf{K}$ that can touch $\mathbf{K}$ and lie in a {\it closed} supporting
half-space of $\mathbf{K}$. It is proved in \cite{BeBr} that $h(\mathbf{K})\leq 2\cdot
3^{d-1}-1$ holds
for any $\mathbf{o}$-symmetric convex body $\mathbf{K}$ in $\mathbb{E}^d$ with equality for affine
$d$-cubes only.
One could consider the obvious extension of the one-sided Had\-wi\-ger number to
separable
Hadwiger configurations. However, a more restrictive and slightly more
technical definition serves our purposes better, the reason for which will
become clear in Theorem~\ref{thm:strict} and Example~\ref{ex:fivedonesidedex}.
\begin{definition}
Let $\mathbf{K}$ be a smooth $\mathbf{o}$-symmetric convex body in $\mathbb{E}^d$. The \emph{one-sided
separable Hadwiger number} $h_{\rm sep}(\mathbf{K})$ of $\mathbf{K}$ is the maximum number $n$ of
translates $2\mathbf{x}_1+\mathbf{K},\ldots,\linebreak[0] 2\mathbf{x}_n+\mathbf{K}$ of $\mathbf{K}$ that
form a separable Hadwiger configuration of $\mathbf{K}$, and the
following holds. If $f_1,\ldots,f_n$ denote supporting linear functionals of $\mathbf{K}$ at the
points $\mathbf{x}_1,\ldots,\mathbf{x}_n$, respectively, then $\mathbf{o}\notin\conv\{\mathbf{x}_1,\ldots,\mathbf{x}_n\}$ and
$\mathbf{o}\notin\conv\{f_1,\ldots,f_n\}$.
\end{definition}
\begin{definition}
For a positive integer $d$, let
\begin{center}
$
h_{\rm sep}(d):=\max\{h_{\rm sep}(\mathbf{K})\; : \; \mathbf{K} $ is an
$\mathbf{o}$-symmetric, smooth and strictly convex body in $\mathbb{E}^d\}$,
\\
$
H_{\rm sep}(d):=\max\{H_{\rm sep}(\mathbf{K})\; : \; \mathbf{K} $ is an
$\mathbf{o}$-symmetric, smooth and strictly convex body in $\mathbb{E}^d\}$,
\end{center}
and set $H_{\rm sep}(0)=h_{\rm sep}(0)=0$.
\end{definition}
The proof of part (\ref{item:threefourdim}) of
Theorem~\ref{thm:smoothstrictlycvx} relies on the following fact: for the
smallest dimensional example $\mathbf{K}$ of an $\mathbf{o}$-symmetric, smooth and strictly
convex body with $H_{\rm sep}(\mathbf{K})>2d$, we have $h_{\rm sep}(\mathbf{K})>2d$. More precisely,
\begin{theorem}\label{thm:strict}\leavevmode
\begin{enumerate}[(a)]
\item\label{item:twosided}
$h_{\rm sep}(d)\leq H_{\rm sep}(d)\leq\max\{2\ell+h_{\rm sep}(d-\ell)\; : \; \ell=0,\ldots,d\}$.
\item\label{item:onesided}
$h_{\rm sep}(d)=d$ for $d\in\{1,2,3,4\}$.
\item\label{item:onesidedeuclidean}
$h_{\rm sep}(\mathbf{B}^d)=d$ for the $d$-dimensional Euclidean ball $\mathbf{B}^d$ with $d\in\mathbb{Z}^+$.
\end{enumerate}
\end{theorem}
According to Note~\ref{note:projection}, when bounding $H_{\rm sep}(\mathbf{K})$ for a smooth
\emph{or} strictly convex body $\mathbf{K}$, it is sufficient to consider smooth
\emph{and} strictly convex bodies.
As a warning sign, in Example~\ref{ex:fivedonesidedex} we show that there is an
$\mathbf{o}$-symmetric, smooth and strictly convex body $\mathbf{K}$ in $\mathbb{E}^5$, which has a
set of 6 translates that form a separable Hadwiger configuration, and
the origin is not in the convex hull of the translation vectors.
We prove Theorem~\ref{thm:strict}, and present Example~\ref{ex:fivedonesidedex}
in Section~\ref{sec:thmproofs}.
\subsection{Maximum separable contact numbers}
Let $\mathbf{K}$ be an $\mathbf{o}$-sym\-met\-ric convex body in $\mathbb{E}^d$, and let
${\mathcal P}:=\{\mathbf{x}_1+\mathbf{K},\ldots,\mathbf{x}_n+\mathbf{K}\}$ be a packing of translates
of $\mathbf{K}$. The number of edges in the contact graph of $\mathcal P$ is called the
\emph{contact number} of $\mathcal P$. Finally let $c(\mathbf{K},n)$ denote the
\emph{largest
contact number} of a packing of $n$ translates of $\mathbf{K}$ in $\mathbb{E}^d$. It is proved
in \cite{Be02}
that $c(\mathbf{K},n)\leq \frac{H(\mathbf{K})}{2}n-n^{\frac{d-1}{d}}g(\mathbf{K})$ holds for all $n>1$,
where
$g(\mathbf{K})>0$ depends on $\mathbf{K}$ only.
\begin{definition}
If $d, n\in\mathbb{Z}^+$ and $\mathbf{K}$ is an $\mathbf{o}$-sym\-met\-ric convex body in $\mathbb{E}^d$,
then let $\csep$ denote the largest
contact number of a totally separable packing of $n$ translates of $\mathbf{K}$.
\end{definition}
According to Theorem~\ref{thm:smoothstrictlycvx}, the maximum degree in the
contact graph of a totally separable packing of a smooth convex body $\mathbf{K}$ is
at most $2d$, and hence, $\csep\leq dn$, for $d\in\{1,2,3,4\}$. Our second
main result is a stronger bound.
\begin{theorem}\label{thm:contactno}
Let $\mathbf{K}$ be a smooth $\mathbf{o}$-symmetric convex body in $\mathbb{E}^d$ with $d\in\{1,2,3,4\}$.
Then
\begin{equation*}
\csep\leq dn-n^{(d-1)/d}f(\mathbf{K})
\end{equation*}
for all $n>1$, where $f(\mathbf{K})>0$ depends on $\mathbf{K}$ only.
In particular, if $\mathbf{K}$ is a smooth $\mathbf{o}$-symmetric convex domain in $\mathbb{E}^2$,
then
\begin{equation*}
\csep\leq 2n-\frac{\sqrt{\pi}}{8}\sqrt{n}
\end{equation*}
holds for all $n>1$.
\end{theorem}
In \cite{BeKhOl} it is proved that $\csep =\floor{2n-2\sqrt{n}}$ holds for any
$\mathbf{o}$-symmetric smooth {\it strictly convex} domain $\mathbf{K}$ and any $n>1$. Thus,
one may wonder whether the same statement holds for any smooth $\mathbf{o}$-symmetric
convex domain $\mathbf{K}$.
We prove Theorem~\ref{thm:contactno} in
Section~\ref{sec:contactproof}. For a more explicit form
of Theorem~\ref{thm:contactno} see Theorem~\ref{thm:contactnoPrecise} in
Section~\ref{sec:contactproof}.
\subsection{Organization of the paper}
In Section~\ref{sec:basics} we develop a dictionary that helps translate the
study of separable Hadwiger configurations of smooth or strictly
convex bodies to the language of systems of vector--linear functional pairs. In
Section~\ref{sec:thmproofs}, based on our observations in
Section~\ref{sec:basics}, we prove Theorem~\ref{thm:strict}, and show how our
first main result, Theorem~\ref{thm:smoothstrictlycvx} follows from it.
In Section~\ref{sec:contactproof} we prove our second main result,
Theorem~\ref{thm:contactno}. This proof is an adaptation of the proof of the
main result of \cite{Be02} to the setting of totally separable packings of
smooth convex bodies. One of the main challenges of the adaptation is to
compute the maximum vertex degree of the contact graph of a totally separable family
of translates of a smooth convex body $\mathbf{K}$, and to characterize locally the
geometric setting where this maximum is attained. This local characterization
is provided by Theorem~\ref{thm:smoothstrictlycvx}.
Finally, in Section~\ref{sec:remarks}, we describe open problems and outline
the difficulties in translating Theorem~\ref{thm:contactno} to strictly convex
(but, possibly not smooth) convex bodies.
\section{Linearization, fundamental properties}\label{sec:basics}
First, in order to give a linearization of the problem, we
consider a set of $n$ pairs $(\mathbf{x}_1,f_1),\ldots,(\mathbf{x}_n,f_n)$, where $\mathbf{x}_i\in\mathbb{E}^d$ and $f_i$ is a linear
functional on $\mathbb{E}^d$ for all $\ensuremath{1\leq i\leq n}$, and we define the following conditions
that they may satisfy.
\begin{align}
f_i(\mathbf{x}_i)=1\;\;\mbox{ and }\;\; f_i(\mathbf{x}_j)\in[-1,0] &\mbox{ holds for all }
\ensuremath{1\leq i,j\leq n, i\neq j}.
\label{eq:linearizationFirst}\tag{Lin}\\
f_i(\mathbf{x}_j)=-1, \mbox{ if and only if, } \mathbf{x}_j=-\mathbf{x}_i &\mbox{ holds for all } \ensuremath{1\leq i,j\leq n, i\neq j}.
\label{eq:linearizationSC}\tag{StrictC}\\
f_i(\mathbf{x}_j)=-1, \mbox{ if and only if, } f_j=-f_i &\mbox{ holds for all } \ensuremath{1\leq i,j\leq n, i\neq j}.
\label{eq:linearizationSM}\tag{Smooth}\\
f_i(\mathbf{x}_i)=1\;\;\mbox{ and }\;\; f_i(\mathbf{x}_j)\in(-1,0] &\mbox{ holds for all }
\ensuremath{1\leq i,j\leq n, i\neq j}.
\label{eq:linearization}\tag{OpenLin}
\end{align}
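For example (a routine check included for illustration), let $\mathbf{K}=\mathbf{B}^d$ and, for $1\leq k\leq d$, set $\mathbf{x}_k=\mathbf{e}_k$, $\mathbf{x}_{d+k}=-\mathbf{e}_k$, $f_k=\langle\mathbf{e}_k,\cdot\rangle$ and $f_{d+k}=-f_k$, where $\mathbf{e}_1,\ldots,\mathbf{e}_d$ denotes the standard orthonormal basis. Then $f_i(\mathbf{x}_i)=1$ for every $i$, and $f_i(\mathbf{x}_j)\in\{-1,0\}$ for $i\neq j$, with $f_i(\mathbf{x}_j)=-1$ exactly when $\mathbf{x}_j=-\mathbf{x}_i$ (equivalently, when $f_j=-f_i$). Hence these $n=2d$ pairs satisfy \eqref{eq:linearizationFirst}, \eqref{eq:linearizationSC} and \eqref{eq:linearizationSM}, while keeping only the $d$ pairs with positive signs yields a system satisfying \eqref{eq:linearization}.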
\begin{lemma}\label{lem:linearization}
There is an $\mathbf{o}$-symmetric, strictly convex body $\mathbf{K}$ in $\mathbb{E}^d$ with
$H_{\rm sep}(\mathbf{K})\geq n$ if and
only if there is a set of $n$ vector--linear functional pairs $(\mathbf{x}_1,f_1),\ldots,(\mathbf{x}_n,f_n)$ in $\mathbb{E}^d$
satisfying \eqref{eq:linearizationFirst} and \eqref{eq:linearizationSC}.
Similarly, there is an $\mathbf{o}$-symmetric, smooth convex body $\mathbf{K}$ in $\mathbb{E}^d$ with
$H_{\rm sep}(\mathbf{K})\geq n$
if and only if there is a set of
$n$ vector--linear functional pairs $(\mathbf{x}_1,f_1),\ldots,(\mathbf{x}_n,f_n)$ in $\mathbb{E}^d$
satisfying \eqref{eq:linearizationFirst} and \eqref{eq:linearizationSM}.
Furthermore, the existence of an $\mathbf{o}$-symmetric, smooth and strictly convex
body $\mathbf{K}$ with
$H_{\rm sep}(\mathbf{K})\geq n$ is equivalent to the existence of
$n$ vector--linear functional pairs satisfying \eqref{eq:linearizationFirst},
\eqref{eq:linearizationSC} and \eqref{eq:linearizationSM}.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:linearization}]
Let $\mathbf{K}$ be an $\mathbf{o}$-symmetric convex body in $\mathbb{E}^d$.
Assume that $2\mathbf{x}_1+\mathbf{K},2\mathbf{x}_2+\mathbf{K},\ldots,2\mathbf{x}_n+\mathbf{K}$ is a separable
Hadwiger configuration of $\mathbf{K}$, where $\mathbf{x}_1,\dots ,\mathbf{x}_n\in\bd \mathbf{K}$.
For $\ensuremath{1\leq i\leq n}$, let $f_i$ be the linear functional corresponding to the separating
hyperplane of $\mathbf{K}$ and $2\mathbf{x}_i+\mathbf{K}$ which is disjoint from the interior of all
members of the family. That is, $f_i(\mathbf{x}_i)=1$ and $-1\leq f_i|_K\leq 1$.
Total separability yields that $f_i(\mathbf{x}_j)\in[-1,1]\setminus(0,1)$, for any
$\ensuremath{1\leq i,j\leq n, i\neq j}$.
Suppose that $f_i(\mathbf{x}_j)=1$. Then $2\mathbf{x}_i+\mathbf{K}$ and $2\mathbf{x}_j+\mathbf{K}$ both touch the
hyperplane $H:=\{x\in\mathbb{E}^d\; : \; f_i(x)=1\}$ from one side, while $\mathbf{K}$ is on the
other side of this hyperplane.
If $\mathbf{K}$ is strictly convex, then this is clearly not possible.
If $\mathbf{K}$ is smooth, then let $S$ be a separating hyperplane of $2\mathbf{x}_i+\mathbf{K}$
and $2\mathbf{x}_j+\mathbf{K}$ which is disjoint from $\inter \mathbf{K}$. Since $\mathbf{K}$ is smooth,
$\mathbf{K}\cap H\cap S=\emptyset$, and hence, $\mathbf{K}$ does not touch
$2\mathbf{x}_i+\mathbf{K}$ or $2\mathbf{x}_j+\mathbf{K}$, a contradiction.
Thus, if $\mathbf{K}$ is strictly convex or smooth, then \eqref{eq:linearizationFirst}
holds.
If $\mathbf{K}$ is strictly convex (resp., smooth), then \eqref{eq:linearizationSC}
(resp., \linebreak[0]\eqref{eq:linearizationSM}) follows immediately.
Next, assume that $(\mathbf{x}_1,f_1),\ldots,(\mathbf{x}_n,f_n)$ is a set of $n$
vector-linear functional pairs satisfying \eqref{eq:linearizationFirst} and
\eqref{eq:linearizationSC}. We need to show that there is a strictly convex
body $\mathbf{K}$ with $H_{\rm sep}(\mathbf{K})\geq n$.
Consider the $\mathbf{o}$-symmetric convex set $\mathbf{L}:=\{\mathbf{x}\in\mathbb{E}d\; : \; f_i(\mathbf{x})\in[-1,1]
\text{ for all }\ensuremath{1\leq i\leq n}\}$, the intersection of $n$ $\mathbf{o}$-symmetric slabs.
Fix an index $i$ with \ensuremath{1\leq i\leq n}. If there is no $j\neq i$ with $f_j(\mathbf{x}_i)=-1$, then
$\mathbf{x}_i$ is in the relative interior of a facet of the
polyhedral set $\mathbf{L}$, moreover, by \eqref{eq:linearizationSC}, no other point
of the set $\{\pm \mathbf{x}_1,\ldots,\pm\mathbf{x}_n\}$ lies on that facet.
If there is a $j\neq i$ with $f_j(\mathbf{x}_i)=-1$, then
$\mathbf{x}_i$ is in the intersection of two facets of $\mathbf{L}$, moreover, by
\eqref{eq:linearizationSC}, no other point of the set $\{\pm
\mathbf{x}_1,\ldots,\pm\mathbf{x}_n\}$ lies on the union of those two facets.
Thus, there is an $\mathbf{o}$-symmetric, strictly convex body $\mathbf{K}\subset \mathbf{L}$ which
contains each $\mathbf{x}_i$. Clearly, for
\ensuremath{1\leq i\leq n}, the hyperplane $\{\mathbf{x}\in\mathbb{E}d\; : \; f_i(\mathbf{x})=1\}$ supports $\mathbf{K}$ at $\mathbf{x}_i$. It
is an easy exercise to see that the family $2\mathbf{x}_1+\mathbf{K},2\mathbf{x}_2+\mathbf{K},\ldots,2\mathbf{x}_n+\mathbf{K}$
is a separable Hadwiger configuration of $\mathbf{K}$.
Next, assume that $(\mathbf{x}_1,f_1),\ldots,(\mathbf{x}_n,f_n)$ is a set of $n$ vector-linear functional pairs satisfying
\eqref{eq:linearizationFirst} and
\eqref{eq:linearizationSM}. To show that there is a smooth convex body
$\mathbf{K}$ with $H_{\rm sep}(\mathbf{K})\geq n$, one may either copy the above proof and make the
obvious modifications, or use duality: interchange the role of the $\mathbf{x}_i$s with
that of the $f_i$s, obtain a strictly convex body in the space of linear
functionals, and then, by polarity obtain a smooth convex body in $\mathbb{E}d$. We
leave the details to the reader.
Finally, if \eqref{eq:linearizationFirst}, \eqref{eq:linearizationSC} and
\eqref{eq:linearizationSM} hold, then in the above con\-struc\-ti\-on of a
strictly
convex body, we had that each point of the set $\{\pm
\mathbf{x}_1,\ldots,\linebreak[0]\pm\mathbf{x}_n\}$ lies in the relative interior of a facet of $\mathbf{L}$,
with no other
point lying on the same facet. Thus, there is an $\mathbf{o}$-symmetric, smooth and
strictly convex body $\mathbf{K}\subset \mathbf{L}$ which contains each $\mathbf{x}_i$. Clearly, we
have $H_{\rm sep}(\mathbf{K})\geq n$.
\end{proof}
\begin{note}\label{note:projection}
Let $\mathbf{K}$ be an $\mathbf{o}$-symmetric, strictly convex body in $\mathbb{E}d$, and consider a
separable Hadwiger configuration of $\mathbf{K}$ with $n$ members. Then, by
Lemma~\ref{lem:linearization}, we have a set of $n$ vector-linear functional
pairs satisfying \eqref{eq:linearizationFirst} and \eqref{eq:linearizationSC}.
If for each $\ensuremath{1\leq i\leq n}$, we have that $-\mathbf{x}_i$ is not in the set of vectors, then
\eqref{eq:linearization} is automatically satisfied. We remark that in this
case, we may replace $\mathbf{K}$ with a strictly convex \emph{and} smooth body.
If for some $k\neq \ell$ we have $\mathbf{x}_{\ell}=-\mathbf{x}_k$, then
by \eqref{eq:linearizationFirst}, $f_j(\mathbf{x}_k)=0$ for all
$j\in[n]\setminus\{k,\ell\}$. Thus, if we remove $(\mathbf{x}_k,f_k)$ and
$(\mathbf{x}_{\ell},f_{\ell})$ from the set of vector-linear functional pairs, then we
obtain $n-2$ pairs that still satisfy \eqref{eq:linearizationFirst} and
\eqref{eq:linearizationSC}, and the linear functionals lie in a
$(d-1)$-dimensional linear hyperplane. Thus, we may consider the problem of
bounding their maximum number, $n-2$ in $\mathbb{E}^{d-1}$.
The same dimension reduction argument can be repeated when $\mathbf{K}$ is smooth.
Thus, in order to bound $H_{\rm sep}(\mathbf{K})$ for smooth \emph{ or } strictly convex
bodies, it is sufficient to consider smooth \emph{ and } strictly convex
bodies, and bound $n$ for which there are $n$ vectors with
linear functionals satisfying \eqref{eq:linearization}.
\end{note}
We will rely on the following basic fact from convexity due to Steinitz
\cite{St13} in its original form, and then refined later with
the characterization of the case of equality, see \cite{Re65}.
\begin{lemma}\label{lem:steinitz}
Let $\mathbf{x}_1,\ldots,\mathbf{x}_n$ be points in $\mathbb{E}^d$ with $\mathbf{o}\in\inter\conv\{\mathbf{x}_1,\ldots,\mathbf{x}_n\}$. Then there is a
subset $A\subseteq\{\mathbf{x}_1,\ldots,\mathbf{x}_n\}$ of cardinality at most $2d$ with $\mathbf{o}\in\inter\conv
A$.
Furthermore, if the minimal cardinality of such $A$ is $2d$, then $A$ consists
of the endpoints of $d$ line segments which span $\mathbb{E}^d$, and whose relative
interiors intersect in $\mathbf{o}$.
\end{lemma}
\begin{proposition}\label{prop:applysteinitzfirst}
Let $(\mathbf{x}_1,f_1),\ldots,(\mathbf{x}_n,f_n)$ be vector-linear functional pairs in $\mathbb{E}^d$ satisfying \eqref{eq:linearizationFirst}.
Assume
further that $\mathbf{o}\in\inter\conv\{\mathbf{x}_1,\ldots,\mathbf{x}_n\}$. Then $n\leq 2d$.
Moreover, if $n=2d$, then the points $\mathbf{x}_1,\ldots,\mathbf{x}_n$ are vertices of a cross-polytope
with center $\mathbf{o}$.
\end{proposition}
\begin{proof}[Proof of Proposition~\ref{prop:applysteinitzfirst}]
By \eqref{eq:linearizationFirst}, for any proper subset $A\subsetneq\{\mathbf{x}_1,\ldots,\mathbf{x}_n\}$,
we
have that the origin is not in the interior of $\conv A$. Thus, by
Lemma~\ref{lem:steinitz}, $n\leq2d$.
Next, assume that $n=2d$. Observe that it follows from
\eqref{eq:linearizationFirst} that if $\mathbf{x}_i=\lambda \mathbf{x}_j$ for some \ensuremath{1\leq i,j\leq n, i\neq j}{} and
$\lambda\in{\mathbb R}$, then $\lambda=-1$.
Thus, combining the argument in the previous paragraph with the second part of
Lemma~\ref{lem:steinitz} yields the second part of
Proposition~\ref{prop:applysteinitzfirst}.
\end{proof}
\begin{proposition}\label{prop:edge}
Let $(\mathbf{x}_1,f_1),\ldots,(\mathbf{x}_n,f_n)$ be vector-linear functional pairs in $\mathbb{E}^d$ satisfying \eqref{eq:linearization}. Assume
that $\mathbf{o}\notin\conv\{\mathbf{x}_1,\ldots,\mathbf{x}_n\}$. Then for any $1\leq k<\ell \leq n$, the triangle
$\conv\{\mathbf{o},\mathbf{x}_k,\mathbf{x}_\ell\}$ is a face of the convex polytope
$\mathbf{P}:=\conv(\{\mathbf{x}_1,\ldots,\mathbf{x}_n\}\cup\{\mathbf{o}\})$.
\end{proposition}
\begin{proof}[Proof of Proposition~\ref{prop:edge}]
By \eqref{eq:linearization}, we have that $f_i(\mathbf{x}_j)>-1$ for all \ensuremath{1\leq i,j\leq n, i\neq j}.
Suppose for a contradiction that $\conv\{\mathbf{x}_j\; : \; j\in [n]\setminus\{k,\ell\}\}$
contains a point of the form $\mathbf{x}=\lambda \mathbf{x}_k+\mu \mathbf{x}_\ell$ with
$\lambda,\mu\geq0,
0<\lambda+\mu\leq1$. Since $\mathbf{x}$ is a convex combination of points $\mathbf{x}_j$ with $j\in[n]\setminus\{k,\ell\}$, and $f_k(\mathbf{x}_j),f_\ell(\mathbf{x}_j)\leq 0$ for all such $j$ by \eqref{eq:linearization}, we
have
$f_k(\mathbf{x}), f_\ell(\mathbf{x})\leq 0$. Thus,
\[
0\geq f_k(\mathbf{x})+f_\ell(\mathbf{x})=
\lambda(1+f_\ell(\mathbf{x}_k))+\mu(1+f_k(\mathbf{x}_\ell))>0,
\]
a contradiction.
\end{proof}
\section{Proofs of Theorems~\ref{thm:smoothstrictlycvx} and
\ref{thm:strict}}\label{sec:thmproofs}
\begin{proof}[Proof of Theorem~\ref{thm:strict}]
To prove part (\ref{item:twosided}), we will use induction on $d$, the base
case, $d=1$ being trivial.
By the dimension-reduction argument in Note~\ref{note:projection}, we may
assume that there are $n$ vector-linear functional pairs $(\mathbf{x}_1,f_1),\ldots,(\mathbf{x}_n,f_n)$ satisfying \eqref{eq:linearization}.
If $\mathbf{o}\notin\conv\{\mathbf{x}_1,\ldots,\mathbf{x}_n\}$, and $\mathbf{o}\notin\conv\{\ensuremath{f_1,\linebreak[0]\ldots,\linebreak[0]f_n}\}$, then, clearly,
$n\leq h_{\rm sep}(d)$.
Thus, we may assume that $\mathbf{o}\in\conv\{\mathbf{x}_1,\ldots,\mathbf{x}_n\}$.
We may also assume that $F=\conv\{\mathbf{x}_1,\ldots,\mathbf{x}_k\}$ is the face of the
polytope
$\conv\{\mathbf{x}_1,\ldots,\mathbf{x}_n\}$ that \emph{supports} $\mathbf{o}$, that is, the face which contains
$\mathbf{o}$ in its relative interior.
Let $H:=\operatorname{span} F$. If $H$ is the entire space $\mathbb{E}^d$, then
$\mathbf{o}\in\inter\conv\{\mathbf{x}_1,\ldots,\mathbf{x}_n\}$ and hence,
$n\leq2d$ follows from Proposition~\ref{prop:applysteinitzfirst}.
On the other hand, if $H$ is a proper linear subspace of $\mathbb{E}^d$, then,
for any $i>k$, the functional $f_i$ is identically zero on $H$: indeed, $\mathbf{o}$ is a convex combination of $\mathbf{x}_1,\ldots,\mathbf{x}_k$ with positive coefficients, $f_i(\mathbf{o})=0$, and $f_i(\mathbf{x}_m)\leq 0$ for every $m\leq k$, so $f_i(\mathbf{x}_m)=0$ for every $m\leq k$, and these points span $H$.
Applying Proposition~\ref{prop:applysteinitzfirst} on $H$ for $\{\mathbf{x}_i\; : \; i\leq
k\}$ with $\{f_i|_{H}\; : \; i\leq k\}$, we have
\begin{equation}\label{eq:kleqtwodim}
k\leq 2\dim H.
\end{equation}
Denote by $H^{\perp}$ the orthogonal
complement of $H$, and by $P$ the orthogonal projection of $\mathbb{E}d$ onto
$H^{\perp}$.
It is not hard to see that $P$ is one-to-one on the set $\{\mathbf{x}_i\; : \; i>k\}$.
Moreover, the set of points $\{P\mathbf{x}_i\; : \; i>k\}$, with
linear functionals $\{f_i|_{H^{\perp}}\; : \; i>k\}$ restricted to $H^{\perp}$,
satisfy \eqref{eq:linearization} in $H^{\perp}$.
Combining \eqref{eq:kleqtwodim} with the induction hypothesis applied on
$H^\perp$, we complete the proof of part (\ref{item:twosided}).
For the three-dimensional bound in part (\ref{item:onesided}), suppose that
$\mathbf{x}_1,\ldots,\mathbf{x}_4\in\mathbb{E}^3$ and
$\mathbf{o}\notin\conv\{\mathbf{x}_1,\ldots,\mathbf{x}_4\}$. By Radon's
lemma, the set
$\{\mathbf{o},\mathbf{x}_1,\ldots,\mathbf{x}_4\}$ admits a
partition
into two parts whose convex hulls intersect, contradicting
Proposition~\ref{prop:edge}. The same proof yields the two- and
four-dimensional
statements, while the one-dimensional claim is trivial.
We use a projection argument to prove part (\ref{item:onesidedeuclidean}).
Assume that $\mathbf{x}_1,\linebreak[0]\ldots,\linebreak[0]\mathbf{x}_n$ is a set of Euclidean
unit vectors with
$\iprod{\mathbf{x}_i}{\mathbf{x}_j}\in(-1,0]$ for all $\ensuremath{1\leq i,j\leq n, i\neq j}$. Furthermore, let $\mathbf{y}$ be a unit
vector with $\iprod{\mathbf{y}}{\mathbf{x}_i}>0$ for all \ensuremath{1\leq i\leq n}. Consider the set of vectors
$\mathbf{x}_i^{\prime}:=\mathbf{x}_i-\iprod{\mathbf{y}}{\mathbf{x}_i}\mathbf{y}$, $i=1,\ldots,n$, all lying in the
hyperplane $\mathbf{y}^{\perp}$. Now, for \ensuremath{1\leq i,j\leq n, i\neq j}, we have
\begin{equation*}
\iprod{\mathbf{x}_i^{\prime}}{\mathbf{x}_j^{\prime}}=\iprod{\mathbf{x}_i}{\mathbf{x}_j}-\iprod{\mathbf{y}}{\mathbf{x}_i}\iprod{
\mathbf{y}}{\mathbf{x}_j}<0.
\end{equation*}
Thus, the vectors $\mathbf{x}_i^{\prime}$, $i=1,\ldots,n$, form a set of $n$ vectors in a
$(d-1)$-dimensional space with pairwise obtuse angles. It is known \cite{DH,
Ra, Ku},
and may also be proved using the same projection argument and induction on the
dimension (projecting orthogonally onto $(\mathbf{x}_n^{\prime})^{\perp}$), that
$n\leq d$.
\end{proof}
\begin{example}\label{ex:fivedonesidedex}
By Lemma~\ref{lem:linearization}, it is sufficient to exhibit 6 vectors (with their
convex hull not containing $\mathbf{o}$ in $\mathbb{E}^5$) and
corresponding linear functionals satisfying \eqref{eq:linearization}.
Let the unit vectors $\mathbf{v}_4,\mathbf{v}_5,\mathbf{v}_6$ be the vertices
of an equilateral triangle centered at $\mathbf{o}$ in the linear plane
$\operatorname{span}\{\mathbf{e}_4,\mathbf{e}_5\}$ of $\mathbb{E}^5$.
Let $\mathbf{x}_i=\mathbf{e}_i$, for $i=1,2,3$, and
let $\mathbf{x}_i=(\mathbf{e}_1+\mathbf{e}_2+\mathbf{e}_3)/3 +\mathbf{v}_i$, for $i=4,5,6$.
Observe that $\mathbf{o}\notin\allowbreak\conv\{\mathbf{x}_1,\ldots,\mathbf{x}_6\}$,
as
$\iprod{\mathbf{e}_1+\mathbf{e}_2+\mathbf{e}_3}{\mathbf{x}_i}>0$ for $i=1,\ldots,6$.
We define the following linear functionals.
$f_1(\mathbf{x})=\linebreak[0]\iprod{\mathbf{e}_1-\frac{\mathbf{e}_2+\mathbf{e}_3}{2}}{\mathbf{x}}$,
$f_2(\mathbf{x})=\linebreak[0]\iprod{\mathbf{e}_2-\frac{\mathbf{e}_1+\mathbf{e}_3}{2}}{\mathbf{x}}$,
$f_3(\mathbf{x})\linebreak[0]=\linebreak[0]\iprod{\mathbf{e}_3-\frac{\mathbf{e}_1+\mathbf{e}_2}{2}}{\mathbf{x}}$,
and
$f_i(\mathbf{x})=\linebreak[0]\iprod{\mathbf{v}_i}{\mathbf{x}}$, for $i=4,5,6$.
Clearly, \eqref{eq:linearization} holds.
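As a quick sanity check of a few of the required values (the remaining ones are computed in the same way):
\[
f_1(\mathbf{x}_2)=-\tfrac{1}{2},\qquad f_1(\mathbf{x}_4)=\tfrac{1}{3}-\tfrac{1}{3}=0,\qquad
f_4(\mathbf{x}_1)=0,\qquad f_4(\mathbf{x}_5)=\iprod{\mathbf{v}_4}{\mathbf{v}_5}=-\tfrac{1}{2},
\]
all lying in $(-1,0]$, while $f_i(\mathbf{x}_i)=1$ for every $i$.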
\end{example}
\begin{proof}[Proof of Theorem~\ref{thm:smoothstrictlycvx}]
First, we prove part (\ref{item:threefourdim}).
If the origin is in the interior of the convex hull of the translation vectors,
then Proposition~\ref{prop:applysteinitzfirst} yields $n\leq2d$ and the
characterization of equality. In the case when $\mathbf{o}\notin\inter\conv\{\mathbf{x}_i\}$,
Theorem~\ref{thm:strict} combined with Note~\ref{note:projection}
yields $n<2d$.
The proof of part (\ref{item:DG}) follows closely a classical proof of Danzer
and Gr\"unbaum on the maximum size of an antipodal set in $\mathbb{E}^d$ \cite{DG62}.
By Lemma~\ref{lem:linearization} and Note~\ref{note:projection}, we may assume
that $\mathbf{K}$ is an $\mathbf{o}$-symmetric smooth strictly convex body in $\mathbb{E}^d$. Assume
that $2\mathbf{x}_1+\mathbf{K},2\mathbf{x}_2+\mathbf{K},\ldots,2\mathbf{x}_n+\mathbf{K}$ is a separable
Hadwiger configuration of $\mathbf{K}$, where $\mathbf{x}_1,\dots ,\mathbf{x}_n\in\bd \mathbf{K}$.
Let $f_i$ denote the linear functional
corresponding to the hyperplane that separates $\mathbf{K}$ from $2\mathbf{x}_i+\mathbf{K}$.
For each \ensuremath{1\leq i\leq n}, let $\mathbf{K}_i$ be the set that we obtain by applying a homothety of
ratio $1/2$ with center $\mathbf{x}_i$ on the set $\mathbf{K}\cap\{\mathbf{x}\in\mathbb{E}d\; : \; f_i(\mathbf{x})\geq
0\}$,
that is,
\[
\mathbf{K}_i:=\frac{1}{2}\left(\mathbf{K}\cap\{\mathbf{x}\in\mathbb{E}d\; : \; f_i(\mathbf{x})\geq
0\}\right)+\frac{\mathbf{x}_i}{2}.
\]
These sets are pairwise non-overlapping.
In fact, it is easy to see that the following even stronger statement holds:
\[
\left(\mu \mathbf{x}_i+{\rm int}\left(\frac{1}{2}\mathbf{K}\right)\right)\cap\left(\bigcup_{
j\neq i}\mathbf{K}_j\right)=\emptyset
\]
for any $\mu\geq 0$ and $1\leq i\leq n$.
On the other hand, $\vol{\mathbf{K}_i}=2^{-(d+1)}\vol{\mathbf{K}}$ by the
central symmetry of $\mathbf{K}$,
where $\vol\cdot$ stands for the $d$-dimensional volume of the
given set.
We remark that -- unlike in the proof of the main
result of \cite{DG62} by Danzer and Gr\"unbaum -- the sets $\mathbf{K}_i$
are not translates
of each other. Since each $\mathbf{K}_i$ is contained in
$\mathbf{K}\setminus\inter\left(\frac{1}{2}\mathbf{K}\right)$, we immediately
obtain the bound $n\leq 2^{d+1}-2$.
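Spelled out, the volume count behind this bound reads
\[
n\cdot 2^{-(d+1)}\vol{\mathbf{K}}=\sum_{i=1}^{n}\vol{\mathbf{K}_i}\leq
\vol{\mathbf{K}\setminus\inter\left(\tfrac{1}{2}\mathbf{K}\right)}=\left(1-2^{-d}\right)\vol{\mathbf{K}},
\]
which rearranges to $n\leq 2^{d+1}-2$.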
To decrease the bound further, replace $\mathbf{K}_1$ by
\[
\widehat{\mathbf{K}}_1:=\mathbf{K}\cap\{\mathbf{x}\in\mathbb{E}d\; : \; f_1(\mathbf{x})\geq
1/2\}.
\]
Now, $\widehat{\mathbf{K}}_1,\mathbf{K}_2,\ldots,\mathbf{K}_n$ are still pairwise non-overlapping, and
are contained in $\mathbf{K}\setminus\inter\left(\frac{1}{2}\mathbf{K}\right)$.
The smoothness of $\mathbf{K}$ yields $\widehat{\mathbf{K}}_1\supsetneq \mathbf{K}_1$, and hence,
$\vol{\widehat{\mathbf{K}}_1}>2^{-(d+1)}\vol{\mathbf{K}}$.
This completes the proof of part (\ref{item:DG}) of
Theorem~\ref{thm:smoothstrictlycvx}.
\end{proof}
\section{Proof of Theorem~\ref{thm:contactno}}\label{sec:contactproof}
We define a local version of a totally separable packing.
\begin{definition}
Let $\mathcal{P}:=\{\mathbf{x}_i+\mathbf{K}\; : \; i\in I\}$ be a finite or infinite packing of translates
of $\mathbf{K}$, and $\rho>0$. We say that $\mathcal{P}$ is \emph{$\rho$-separable} if for each
$i\in I$ we have that the family $\{\mathbf{x}_j+\mathbf{K}\; : \; j\in I,
\mathbf{x}_j+\mathbf{K}\subset\mathbf{x}_i+\rho\mathbf{K}\}$
is a totally separable packing of translates of $\mathbf{K}$. Let $\dsep$ denote the
largest density of a $\rho$-separable packing of translates of $\mathbf{K}$, that is,
\begin{equation*}
\dsep:=\sup_{\mathcal{P}}\limsup_{\lambda\rightarrow\infty}\frac{\sum\limits_{i:
\mathbf{x}_i+\mathbf{K}\subset[-\lambda,\lambda]^d}\vol{\mathbf{x}_i+\mathbf{K}}}{(2\lambda)^d},
\end{equation*}
where the supremum is taken over all $\rho$-separable packings $\mathcal{P}$ of
translates of $\mathbf{K}$.
\end{definition}
We quote Lemma~1 of \cite{BeLa18}.
\begin{lemma}\label{lem:BeLa18}
Let $\{\mathbf{x}_i+\mathbf{K}\; : \; \ensuremath{1\leq i\leq n}\}$ be a $\rho$-separable packing of translates of an
$\mathbf{o}$-symmetric convex body $\mathbf{K}$ in $\mathbb{E}d$ with $\rho\geq 1, n\geq 1$ and
$d\geq2$. Then
\begin{equation*}
\frac{n\vol{\mathbf{K}}}{\vol{\bigcup\limits_{\ensuremath{1\leq i\leq n}} \mathbf{x}_i+2\rho \mathbf{K}}}
\leq \dsep.
\end{equation*}
\end{lemma}
\begin{lemma}\label{lem:lsep}
Let $\mathbf{K}$ be a smooth $\mathbf{o}$-symmetric convex body in $\mathbb{E}d$ with $d\in\{1,2,3,4\}$.
Then there is a $\lambda>0$ such that for any separable Hadwiger
configuration $\{\mathbf{K}\}\cup\{\mathbf{x}_i+\mathbf{K}\; : \; i=1,\ldots,2d\}$ of $\mathbf{K}$,
\begin{equation}\label{eq:lsep}
\lambda \mathbf{K}\subseteq \bigcup_{i=1}^{2d} (\mathbf{x}_i+\lambda\mathbf{K}).
\end{equation}
holds. In particular, \eqref{eq:lsep} holds with $\lambda=2$ when $d=2$.
\end{lemma}
\begin{definition}
We denote the smallest $\lambda$ satisfying \eqref{eq:lsep} by $\lsep$, and
note that $\lsep\linebreak[0]\geq 2$, since otherwise $\bigcup_{i=1}^{2d}
(\mathbf{x}_i+\lambda\mathbf{K})$ does not contain $\mathbf{o}$.
\end{definition}
\begin{proof}[Proof of Lemma~\ref{lem:lsep}]
Clearly, $\lambda$ satisfies \eqref{eq:lsep} if and only if,
for each boundary point $\mathbf{b}\in\bd(\mathbf{K})$ we have that at least one
of the $2d$ points $\mathbf{b}-\frac{2}{\lambda}\mathbf{x}_i$ is in $\mathbf{K}$.
First, we fix a separable Hadwiger configuration of $\mathbf{K}$ with
$2d$ members and show that for some $\lambda>0$,
\eqref{eq:lsep} holds.
By Theorem~\ref{thm:smoothstrictlycvx}, we have that $\{\mathbf{x}_i\; : \;
i=1,\ldots,2d\}$ is an Auerbach basis of $\mathbf{K}$, and, in particular, the
origin is in the interior of $\conv\{\mathbf{x}_i\; : \; i=1,\ldots,2d\}$. It follows from
the smoothness of $\mathbf{K}$ that for each boundary point
$\mathbf{b}\in\bd(\mathbf{K})$ we have that
at least one of the $2d$ rays $\{\mathbf{b}-t\mathbf{x}_i\; : \; t>0\}$ intersects the interior of
$\mathbf{K}$. The existence of $\lambda$ now follows from the compactness of $\mathbf{K}$.
Next, since the set of Auerbach bases of $\mathbf{K}$ is compact (consider
them as points in $\mathbf{K}^d$), it follows in a straightforward way that there is a
$\lambda>0$, for which \eqref{eq:lsep} holds for all separable Hadwiger
configurations of $\mathbf{K}$ with $2d$ members.
To prove the part concerning $d=2$, we make use of the characterization of
the equality case in Part (\ref{item:threefourdim}) of
Theorem~\ref{thm:smoothstrictlycvx}. An Auerbach basis of a planar
$\mathbf{o}$-symmetric convex body $\mathbf{K}$ means that $\mathbf{K}$ is contained in an
$\mathbf{o}$-symmetric parallelogram, the midpoints of whose edges are
$\pm\mathbf{x}_1,\pm\mathbf{x}_2$, and $\pm\mathbf{x}_1,\pm\mathbf{x}_2\in\mathbf{K}$. We leave it as an exercise to
the reader to verify that in this case, for each boundary point
$\mathbf{b}\in\bd(\mathbf{K})$ we have that at least one of the $4$
points $\mathbf{b}\pm\frac{\mathbf{x}_1}{2},\mathbf{b}\pm\frac{\mathbf{x}_2}{2}$ is in $\mathbf{K}$.
\end{proof}
We denote the $(d-1)$-dimensional Hausdorff measure by $\vol[d-1]{\cdot}$, and
the
\emph{isoperimetric ratio} of a bounded set $\mathbf{S}\subset\mathbb{E}^d$, for
which it is
defined, as
\[
\Iq{\mathbf{S}}:=\frac{(\vol[d-1]{{\rm
bd}\mathbf{S}})^d}{(\vol{\mathbf{S}})^{d-1}},
\]
and recall the \emph{isoperimetric inequality}, according to which it is
minimized by Euclidean balls, that is, $\Iq{\mathbf{B}^d}\leq\Iq{\mathbf{S}}$
for any bounded
set
$\mathbf{S}\subset\mathbb{E}^d$ for which $\Iq{\mathbf{S}}$ is defined.
Finally, we are ready to state our main result, from which
Theorem~\ref{thm:contactno} immediately follows.
\begin{theorem}\label{thm:contactnoPrecise}
Let $\mathbf{K}$ be a smooth $\mathbf{o}$-symmetric convex body in $\mathbb{E}d$ with $d\in\{1,2,3,4\}$.
Then
\begin{align*}
\csep&\leq
dn-\frac{n^{(d-1)/d}}{2\left[\lsep\right]^{d-1}
\left[\dsep[\frac{\lsep}{2},\mathbf{K}]\right]^{(d-1)/d}}
\left[\frac{\Iq{\mathbf{B}^d}}{\Iq{\mathbf{K}}}\right]^{1/d}\\
&\leq
dn-\frac{n^{(d-1)/d} (\vol{\mathbf{B}^d})^{1/d}}{4\left[\lsep\right]^{d-1}}
\end{align*}
for all $n>1$.
In particular, in the plane, we have
\begin{equation*}
\csep\leq 2n-\frac{\sqrt{\pi}}{8}\sqrt{n}
\end{equation*}
for all $n>1$.
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{thm:contactnoPrecise}]
Let $\mathcal{P}=C+\mathbf{K}$ be a totally separable packing of translates of $\mathbf{K}$, where $C$
denotes the set of centers $C=\{\mathbf{x}_1,\ldots,\mathbf{x}_n\}$. Assume that $m$ of the $n$
translates are touched by the maximum possible number of others, which, by
Theorem~\ref{thm:smoothstrictlycvx}, is $H_{\rm sep}(\mathbf{K})=2d$. By
Lemma~\ref{lem:lsep}, we have
\begin{equation}\label{eq:lsepapplied}
\vol[d-1]{\bd\left( C+\lsep \mathbf{K} \right)}\leq
(n-m)(\lsep)^{d-1}\vol[d-1]{\bd(\mathbf{K})}.
\end{equation}
By the isoperimetric inequality, we have
\begin{equation}\label{eq:isoperimetric}
\Iq{\mathbf{B}^d}\leq
\Iq{C+\lsep \mathbf{K}}=
\frac{\left(\vol[d-1]{\bd\left( C+\lsep \mathbf{K}
\right)}\right)^d}{\left(\vol{ C+\lsep \mathbf{K}}\right)^{d-1}}.
\end{equation}
Combining \eqref{eq:lsepapplied} and \eqref{eq:isoperimetric} yields
\[
n-m\geq
\frac{(\Iq{\mathbf{B}^d})^{1/d}\left[\vol{ C+\lsep
\mathbf{K}}\right]^{(d-1)/d}}{(\lsep)^{d-1}\vol[d-1]{\bd \mathbf{K}}}.
\]
The latter, by Lemma~\ref{lem:BeLa18} is at least
\[
\frac{(\Iq{\mathbf{B}^d})^{1/d}
\left[
\frac{n\vol{\mathbf{K}}}{\dsep[\lsep/2,\mathbf{K}]}
\right]^{(d-1)/d}}{(\lsep)^{d-1}\vol[d-1]{\bd \mathbf{K}}}.
\]
This gives a lower bound on $n-m$. Since each of the $m$ distinguished translates touches $2d$ others, while each of the remaining $n-m$ translates touches at most $2d-1$ others, the number of contacts is at most $\frac{1}{2}\bigl(2dm+(2d-1)(n-m)\bigr)=dn-\frac{n-m}{2}$. Substituting the above lower bound on $n-m$ and using $\frac{(\vol{\mathbf{K}})^{(d-1)/d}}{\vol[d-1]{\bd \mathbf{K}}}=\left[\Iq{\mathbf{K}}\right]^{-1/d}$, we obtain, after rearrangement,
the first inequality in Theorem~\ref{thm:contactnoPrecise}.
To prove the second inequality, we adopt the proof of \cite[Corollary~1]{Be02}.
First, note that $\dsep[\frac{\lsep}{2},\mathbf{K}]\leq 1$, and
$(\Iq{\mathbf{B}^d})^{1/d}=d(\vol{\mathbf{B}^d})^{1/d}$. Next,
according to Ball's reverse isoperimetric inequality
\cite{Ba91}, for any convex body $\mathbf{K}$, there is a non-degenerate affine map
$T:\mathbb{E}d\rightarrow\mathbb{E}d$ with $\Iq{T\mathbf{K}}\leq(2d)^d$. Finally, notice that
$\csep=\csep[T\mathbf{K},n]$, and the inequality follows in a straightforward way.
The planar bound follows by substituting the value $\lsep=2$ from
Lemma~\ref{lem:lsep}.
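Indeed, substituting $d=2$, $\lsep=2$ and $\vol{\mathbf{B}^2}=\pi$ into the second bound gives
\[
\csep\leq 2n-\frac{n^{1/2}\pi^{1/2}}{4\cdot 2}=2n-\frac{\sqrt{\pi}}{8}\sqrt{n}.
\]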
\end{proof}
\section{Remarks}\label{sec:remarks}
Lemma~\ref{lem:lsep} does not hold for strictly convex but not
smooth convex bodies. Indeed, in $\mathbb{E}^3$, consider the $\mathbf{o}$-symmetric polytope
$\mathbf{P}:=\conv\{\pm \mathbf{e}_1,\linebreak[0]\pm \mathbf{e}_2,\linebreak[0]\pm
\mathbf{e}_3,\linebreak[0] \pm 0.9(\mathbf{e}_1+\mathbf{e}_2+\mathbf{e}_3)\}$ where the
$\mathbf{e}_i$s are the standard
basis vectors. The six translation vectors $\pm
2\mathbf{e}_1,\pm2\mathbf{e}_2,\pm2\mathbf{e}_3$ generate a
separable Hadwiger configuration of $\mathbf{P}$. For the vertex
$\mathbf{b}:=0.9(\mathbf{e}_1+\mathbf{e}_2+\mathbf{e}_3)$, we have that each of the
$3$ lines $\{\mathbf{b}+ t \mathbf{e}_i\; : \;
t\in{\mathbb R}\}$ intersect $\mathbf{P}$ in $\mathbf{b}$ only. Thus,
there is a strictly convex
$\mathbf{o}$-symmetric body $\mathbf{K}$ with the following properties:
$\mathbf{P}\subset\mathbf{K}$; for each $i=1,2,3$, the point $\pm
\mathbf{e}_i$ is a boundary point of $\mathbf{K}$, and at $\pm
\mathbf{e}_i$ the
plane orthogonal to $\mathbf{e}_i$ is a support plane of $\mathbf{K}$;
$\mathbf{b}$ is a boundary
point of $\mathbf{K}$; and the $3$ lines $\{\mathbf{b}+ t \mathbf{e}_i\; : \;
t\in{\mathbb R}\}$ intersect $\mathbf{K}$ in $\mathbf{b}$ only. For this strictly convex
$\mathbf{K}$, we have
$\lsep=\infty$.
Thus, it is natural to ask if in Theorem~\ref{thm:contactno} smoothness can be
replaced by strict convexity. We note that in our proof, Lemma~\ref{lem:lsep}
is
the only place which does not carry over to this case.
The same construction of the polytope $\mathbf{P}$ shows that $\lsep$ may be
arbitrarily
large for a
three-dimensional smooth convex body $\mathbf{K}$. Indeed, if we take
$\mathbf{K}:=\mathbf{P}+\varepsilon\mathbf{B}^3$ with a small $\varepsilon>0$, we obtain a smooth
body
for which, by the previous argument, $\lsep$ is large.
Thus, it would be very interesting to see a lower bound on $f(\mathbf{K})$ of
Theorem~\ref{thm:contactno} which depends on $d$ only.
\end{document}
\begin{document}
\title{Isomorphisms of Moduli Spaces}
\author{C. Casorr\'an Amilburu}
\author{S. Barmeier}
\author{B. Callander}
\author{E. Gasparim}
\address{Elizabeth Gasparim and Brian Callander}
\address{ IMECC -- UNICAMP,
Rua S\'ergio Buarque de Holanda, 651,
Cidade Universit\'aria Zeferino Vaz,
Distr. Bar\~ao Geraldo, Campinas SP, Brasil
13083-859 }
\email{[email protected]}
\email{[email protected]}
\address{Carlos Casorr\'an Amilburu,
Depto. de Estad\'istica e Investigaci\'on Operativa,
Universidad de Alicante, 03080-Alicante, España}
\email{[email protected]}
\address{Severin Barmeier,
Graduate School of Mathematical Sciences,
The University of Tokyo,
3-8-1 Komaba, Meguro,
Tokyo, 153-8914 Japan}
\email{[email protected]}
\subjclass{}
\keywords{}
\thanks{The second and third authors acknowledge support of the
Royal Society and of the Glasgow Mathematical Journal.}
\begin{abstract} We give infinitely many new isomorphisms between
moduli spaces of
bundles on local surfaces and on local Calabi--Yau threefolds.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}\label{intro}
To study moduli spaces
of rank 2 bundles on local surfaces and local threefolds we
present concrete descriptions of these moduli
as quotients of the vector spaces of extensions of line bundles by holomorphic
isomorphism.
Our favourite varieties are the following:
$$Z_k:=\Tot({\mathcal O}_{\mathbb P^1}(-k))\qquad \text{and} \qquad
W_i:=\Tot ({\mathcal O}_{\mathbb P^1}(i-2)\oplus {\mathcal O}_{\mathbb P^1}(-i))
\text{,}$$
together with moduli of bundles on them.
Let $\ell$ denote the zero section of $Z_k $
and denote by $X_k$ the surface obtained from $Z_k$
by contracting $\ell$ to a point; thus $X_k$ is singular for $k>1$.
Here $\ell$ also denotes the zero section of $\mathcal{O}_{\mathbb{P}^1}(-k)$ considered as a subvariety of $Z_k$, and $\pi \colon Z_k \to X_k$ is the map that contracts $\ell$ to a point $x$; hence $\pi$ is the inverse of blowing up $x$. In the following, we shall also let $Y$ denote either $W_i$ or $Z_k$.
\begin{definition}
The {\it charge} of a bundle $E\to Y$ around $\ell$ is the {\it local holomorphic Euler characteristic} of $\pi_*E$ at $x$, defined as
\begin{equation}\label{eq.euler}
\chi\bigl(x, \pi_*E\bigr) := \chi\bigl(\ell, E\bigr)
:= h^0\bigl(X; \; (\pi_*E)^{\vee\vee} \bigl/ \pi_* E \bigr)
+ \sum_{i=1}^{n-1}(-1)^{i-1} h^0\bigl(X; \; R^i \pi_* E\bigr) \text{ .}
\end{equation}
\end{definition}
\noindent Note that we have only $\chi\bigl(\ell, E\bigr) = h^0\bigl(X; \; (\pi_*E)^{\vee\vee}\bigl/ \pi_* E \bigr) + h^0\bigl(X; \; R^1 \pi_* E\bigr)$ since our spaces only have two coordinate charts (see \ref{can2}).
\begin{definition} \label{definicion} Let $\sim$ denote bundle isomorphism and introduce the following notation and definitions.
\begin{enumerate}
\item $ \mathcal M_{j_1,j_2}(Y) :=
\Ext^1(\mathcal O_Y(j_2),\mathcal O_Y(j_1))\bigm/\sim $
\item ${\mathcal M}_j(Y, 0) := {\mathcal M}_{j, -j}(Y)$
\item ${\mathcal M}_j(Y, 1) := {\mathcal M}_{j+1, -j}(Y)$
\end{enumerate}
\noindent Note that the second entry, that is either $1$ or $0$, denotes the {\it first Chern class} of the bundles considered in each case.
\noindent From such quotients we extract the following moduli spaces. Let
$\epsilon = 0 $ or $1$.
\begin{enumerate}
\item ${\mathfrak M}^1_j(Y, \epsilon) \subset {\mathcal M}_j(Y, \epsilon)$ consisting of elements given by an extension class vanishing to order exactly 1 over $\ell$,
\item ${\mathfrak M}^s_j(Y, \epsilon) \subset {\mathfrak M}^1_j(Y, \epsilon)$ consisting of elements with lowest charge $\chi_{\rm low}$,
where
$\chi_{\rm low}:= \inf\{\chi(E) \vert E \in {\mathfrak M}^1_j(Y, \epsilon)\}$.
\end{enumerate}
\end{definition}
\begin{remark}
For $W_1$, it follows by lemma \ref{alg} that all rank $2$ bundles are extensions of line bundles. In fact, we also have this filtrability for $W_2$ but not for $W_i$ with $i\geq 3$.
\end{remark}
Our main results are the following:
\noindent{\bf Theorem }{\it (Coincidence of moduli of bundles on surfaces and threefolds)}
For all positive integers $i,j,k$, there are isomorphisms
$$\mathfrak M^1_{2j+\left\lfloor\frac{k-3}{2}\right\rfloor+\delta}(Z_k,\epsilon) \simeq \mathfrak M^1_j(W_i, \delta)$$
and birational equivalences
$$\mathfrak M^s_{2j+\left\lfloor\frac{k-3}{2}\right\rfloor+\delta}(Z_k, \epsilon) \dashrightarrow
\mathfrak M^s_j(W_1, \delta)$$
when $\epsilon\equiv k+1\,{\rm mod}\,2$ and $\delta \in \{0,1\}$.
\noindent{\bf Theorem }{\it (Atiyah--Jones type statement for local moduli)}
For $q \leq 2(2j - k -2+\delta)$ there are isomorphisms
\begin{itemize}
\item[($\iota$)] $H_q(\mathfrak M^1_j(Z_k, \delta)) = H_q(\mathfrak M^1_{j+1}(Z_k, \delta))$
\item[($\iota\iota$)] $\pi_q(\mathfrak M^1_j(Z_k, \delta))= \pi_q(\mathfrak M^1_{j+1}(Z_k, \delta))\text{.}$
\end{itemize}
and for $q \leq 2(4j - 3-2\delta)$ there are isomorphisms
\begin{itemize}
\item[($\iota\iota\iota$)] $H_q(\mathfrak M^1_j(W_i, \delta)) = H_q(\mathfrak M^1_{j+1}(W_i, \delta))$
\item[($\iota\nu$)] $\pi_q(\mathfrak M^1_j(W_i, \delta))= \pi_q(\mathfrak M^1_{j+1}(W_i, \delta))\text{.}$
\end{itemize}
\begin{remark}
We obtain isomorphisms between bundles $E$ and $F$ over $Z_k$ with $c_1(F)=c_1(E)+2$ by tensoring with $\mathcal O(-1)$, as
\[
\left(\begin{matrix}
z^{-j_1} & p \\ 0 & z^{-j_2}
\end{matrix}\right)\otimes z =
\left(\begin{matrix}
z^{-j_1+1} & zp \\ 0 & z^{-j_2+1}
\end{matrix}\right)
\]
so that we could consider $\epsilon\in\mathbb Z$, as long as $\epsilon\equiv k+1\,\mathrm{mod}\,2$ still holds.
\end{remark}
\section{Filtrability and algebraicity}\label{sec.alg}
We deal with bundles on local surfaces and threefolds,
that is, on a neighborhood of a curve $C$ embedded
in a smooth surface or threefold $W$,
typically the total space of a vector bundle $N$ over $C$.
We focus on the case when
$C \simeq \mathbb P^1$. In the $2$-dimensional case we
focus on the case when $N^*$ is ample, and in the $3$-dimensional
case we focus on Calabi--Yau threefolds.
Let $W$ be a connected complex manifold (or smooth algebraic variety)
and $C$ a curve contained in $W$ that is reduced, connected and
a local complete intersection. Let $\widehat C$ denote the formal
completion of $C$ in $W$. Ampleness of the conormal bundle
has a strong influence on the
behaviour of bundles on $\widehat C$.
We will use the following basic fact from formal geometry.
\begin{lemma}\label{alg}\cite[thm.~3.2]{BGK2}
If the conormal bundle $N^*_C$ is ample, then every vector
bundle on $\widehat C$ is filtrable. If in addition $C$ is smooth,
then every holomorphic bundle on $\widehat C$ is algebraic.
\end{lemma}
\begin{remark} Ampleness of $N^*_C$ is essential. For example, consider
the Calabi--Yau threefold
$$W_i = \Tot ({\mathcal O}_{\mathbb P^1}(i-2)\oplus {\mathcal O}_{\mathbb P^1}(-i))
\text{.}$$
Then $W_1$ satisfies the hypothesis of \ref{alg}, hence holomorphic bundles
on $W_1$ are filtrable and algebraic, whereas on $W_2$ filtrability still
holds, but there exist proper
holomorphic bundles on $W_2$ that are not algebraic, and on $W_i$ for $i\geq 3$ neither
filtrability nor algebraicity holds, see \cite{K} chapter 3.3.
\end{remark}
\section{Surfaces}\label{2}
We use the very concrete description of moduli spaces
of rank 2 bundles over the surfaces $Z_k :=
\Tot (\mathcal O_{{\mathbb P}^1}(-k))$ given in \cite{BGK1}.
Let $\ell$ denote the zero section inside $Z_k$. Given a bundle
$E$ over $Z_k$, its restriction to $\ell$ splits by Grothendieck's
principle, and if $E\vert_\ell \simeq {\mathcal O}(a_1) \oplus \cdots \oplus
{\mathcal O}(a_r)$ then $(a_1, \dots, a_r)$ is called the {\it splitting type}
of $E$.
By \cite[thm.~3.3]{CA1}, a holomorphic
bundle $E$ over $Z_k$ having
splitting type $(j_1,j_2)$ with $j_1 \leq j_2$
can be written as an algebraic extension
\begin{equation}\label{eq!ext}
0 \longrightarrow \mathcal O(j_1) \longrightarrow E \longrightarrow \mathcal O(j_2) \longrightarrow 0
\end{equation}
and therefore corresponds to an {\it extension class}
\[ p \in \Ext^1_{Z_k}\!(\mathcal O(j_2),\; \mathcal
O(j_1)) .\]
We fix once and for all
coordinate charts on our surfaces $Z_k = U \cup V$, where
\begin{equation}\label{can2}
U = \mathbb C^2_{z,u} = \{(z,u) \in \mathbb C^2\} \qquad\text{and}\qquad
V = \mathbb C^2_{\xi,v} = \{(\xi, v) \in \mathbb C^2\} \end{equation}
and
$$ (\xi, v) = (z^{-1}, z^ku) \quad \mbox{on} \quad U \cap V.$$
In these coordinates, the bundle $E$
may be represented by a transition matrix in {\it canonical form} as
\[
T =
\left(\begin{matrix}{z^{-j_1}}& {p}\\ {0}& {z^{-j_2}}\end{matrix}
\right)
\]
where
\begin{equation}\label{poly}
p =\hspace{-0.2cm}\sum_{i=1}^{\lfloor \vphantom{\textstyle X}(j_2 - j_1 - 2)/k \rfloor}\hspace{-0.25cm}
\sum_{l = ki + j_1 + 1}^{j_2 - 1} p_{il} \, z^l u^i \text{ .}
\end{equation}
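For orientation, here is a small illustrative instance of (\ref{poly}), not needed in what follows: on $Z_2$, for splitting type $(j_1,j_2)=(-3,3)$, the canonical form reads
\[
p=p_{1,0}\,u+p_{1,1}\,zu+p_{1,2}\,z^2u+p_{2,2}\,z^2u^2,
\]
so that the extension class is determined by four complex coefficients.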
Since we are interested in isomorphism classes of vector bundles
rather than extension classes, we use the following moduli:
\[ \mathcal M_{j_1,j_2}(Z_k) =
\Ext^1(\mathcal O_{Z_k}(j_2),\mathcal O_{Z_k}(j_1))\bigm/\sim \]
where $\sim $ denotes bundle isomorphism. We observe that this
quotient gives rise to a moduli stack, but we will only describe here
subsets of its coarse moduli space considered as a variety.
Considered just as a topological space, the full quotient
will not be Hausdorff except in the trivial case, when
it contains only a point. The latter happens when
the only bundle with splitting type $(j_1,j_2)$ is
$ \mathcal O_{Z_k} (j_1)\oplus\mathcal O_{Z_k} (j_2)$, that is, whenever $j_2-j_1 <k+2$; indeed, in that range the outer sum in (\ref{poly}) is empty, so the canonical form forces $p=0$ and the bundle splits.
To specify the topology in this quotient space, we use the canonical form of the
extension class (\ref{poly}). Then the
coefficients of $p$ written in lexicographical order form
a vector in $\mathbb C^m$, where $m$ is the number of complex coefficients
appearing in the expression of $p$.
We define an equivalence relation in $\mathbb C^m$ by setting
$p \sim p'$ if $(j_1,j_2,p)$ and $(j_1,j_2,p')$
define isomorphic bundles
over $Z_k$, and give $\mathbb C^m\bigl/\sim$ the quotient topology.
Now setting
$n := \lfloor (j_2-j_1-2)/k \rfloor$,
we obtain a bijection
\begin{eqnarray*}
\phi \colon \mathcal M_{j_1,j_2}(Z_k) &\to& \mathbb C^m\bigl/\sim \text{ ,} \\
\begin{pmatrix} z^{-j_1} & p \\ 0 & z^{-j_2}\end{pmatrix}
&\mapsto& \bigl(p_{1,k+j_1+1}, \dotsc, p_{n,j_2-1}\bigr)
\end{eqnarray*}
and give $\mathcal M_{j_1,j_2}(Z_k)$
the topology induced by this bijection.
Now observe that it is always the case that $p \sim \lambda p $ for
any $\lambda \in \mathbb C - \{0\}$. The
moduli space is then evidently non-Hausdorff,
as the only open neighborhood of
the split bundle is the entire moduli space. In the spirit of GIT
one would like to extract nice moduli spaces out of these quotient
spaces. Clearly the split bundle needs to be removed, but there is
quite a bit more topological complexity.
\subsection{Vanishing $c_1$ case: moduli spaces}\label{2.0}
For rank 2 bundles $E$ over $Z_k$ with $c_1(E)=0$ there is a non-negative integer $j$
such that $E \vert_\ell \simeq {\mathcal O}(j) \oplus {\mathcal O}(-j)$
and we will say $E$ has splitting type $j$.
We denote by $\mathcal M_j$ the moduli of all bundles with
this fixed splitting type (see Definition \ref{definicion}, item (2)):
\[ \mathcal M_j(Z_k,0) := \Ext^1(\mathcal O_{Z_k}(-j),\mathcal O_{Z_k}(j))\bigm/\sim \text{.}\]
We now recall some results about the topological structure of
these spaces and their relation to instantons.
These moduli spaces are stratified into Hausdorff components
by local analytic invariants. Given a reflexive sheaf $E$ over $Z_k$ we set:
\[{\bf w}_k(E):= h^0((\pi_*E)^{\vee\vee}/\pi_*E),\qquad
{\bf h}_k(E):=h^0(R^1\pi_* E)\text{.}\]
called the {\it width} and {\it height} of $E$, respectively.
\begin{definition}
$\chi(\ell,E) := {\bf w}_k(E)+{\bf h}_k(E)$ is called the
{\it local holomorphic
Euler characteristic} or {\it charge} of $E$.
\end{definition}
We quote the following results to show the connection with mathematical physics.
\begin{theorem}\cite[cor.~5.5]{BGK1} {\it Correspondence with
instantons.}
An $\mathfrak{sl}(2,\mathbb{C})$-bundle
over $Z_k$ represents an instanton if and only if its splitting type
is a multiple of $k.$
\end{theorem}
\begin{theorem}\cite[thm.~4.15]{BGK1} {\it Stratifications.}
If $j=nk$ for some $n \in \mathbb{N}$, then the pair $({\bf h}_k,{\bf w}_k)$
stratifies instanton moduli stacks $\mathcal M_j(k)$ into Hausdorff
components.
\end{theorem}
\begin{remark}\label{usu} Let us note the following:
\begin{itemize}
\item $\chi$ alone is not fine enough to
stratify the moduli spaces.
\item Constructing such a stratification
for the non-instanton case is an open problem.
\item There are various ways to obtain moduli spaces inside the
$\mathcal M_j$. One possible choice is to take the largest Hausdorff
component as our moduli space. This will produce compact moduli, and
we study this case in section
\ref{order1}. A second, more natural choice is to
fix some numerical invariant,
to which end the local holomorphic Euler characteristic
presents itself as the most natural candidate.
\end{itemize}
\end{remark}
\subsection{Vanishing $c_1$ case: first order deformations}\label{order1}
\begin{notation}
Let ${\mathfrak M}^1_j(Z_k,0) \subset {\mathcal M}_j(Z_k)$ denote the
subset which parametrizes isomorphism classes of
bundles on $Z_k$ consisting of isomorphism classes
of nontrivial
first order deformations of ${\mathcal O}(j) \oplus {\mathcal O}(-j)$,
that is, bundles $E$ fitting into an exact sequence
\begin{equation}\label{filt}
0 \rightarrow {\mathcal O}(-j) \rightarrow E \rightarrow
{\mathcal O}(j) \rightarrow 0 \end{equation}
whose corresponding extension class
vanishes to order exactly one on $\ell$ (note that this excludes the split
bundle itself). In other words,
${\mathcal I}_\ell=\langle u\rangle$ on the $u$-chart and we consider only
extensions $p \in \Ext({\mathcal O}(j),
{\mathcal O}(-j))$ with $p= u q$ and $u \nmid q$.
\end{notation}
\begin{remark} If $2j-2<k$ then $\mathcal M_j(Z_k) $ consists of
just a point represented by the split bundle, consequently
if $2j-2<k$ then ${\mathfrak M}^1_j(Z_k,0) = \emptyset.$
\end{remark}
A simple observation, which we now describe, then implies that ${\mathfrak M}^1_j(Z_k)$ is
compact and smooth.
\begin{theorem}\label{inf}\cite[thm.~4.9]{BGK1}\label{thm.proj}
On the first infinitesimal neighbourhood, two bundles $E^{(1)}$ and
$F^{(1)}$ with respective transition matrices
\[ \begin{pmatrix} z^j & p_1 \\ 0 & z^{-j} \end{pmatrix} \quad\text{and}\quad
\begin{pmatrix} z^j & q_1 \\ 0 & z^{-j} \end{pmatrix} \]
are isomorphic if and only if $q_1 = \lambda p_1$ for some
$\lambda \in \mathbb{C}-\{0\}$.
\end{theorem}
\begin{remark}
Note that no similar result holds true if we include higher order
deformations, because then there are further identifications and
the quotient space is no longer Hausdorff.
\end{remark}
\begin{corollary}\label{cor}
${\mathfrak M}^1_j(Z_k,0) \simeq \mathbb P^{2j-k-2}\text{.}$
\end{corollary}
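For example (purely as an illustration), on $Z_1$ the first non-empty case is $j=2$, where ${\mathfrak M}^1_2(Z_1,0)\simeq\mathbb P^{2\cdot 2-1-2}=\mathbb P^{1}$, while ${\mathfrak M}^1_1(Z_1,0)=\emptyset$ since $2j-2<k$.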
\subsection{Vanishing $c_1$ case: minimal charge}\label{chimin}
Another possible choice of moduli space, more compatible with the physics
motivation, is to
consider the subset of bundles on ${\mathfrak M}^1_j(Z_k,0)$
having fixed charge; this is preferable, because the
charge is an analytic invariant on the bundles, and minimal
charge corresponds to a generic choice for the corresponding
instanton interpretation. In this case we take the open subset
of the moduli of first order deformations defined by:
$${\mathfrak M}^s_j(Z_k,0):=
\{E \in {\mathfrak M}^1_j(Z_k,0) : \chi(E) = \chi_{\rm min}(Z_k)\}\text{.}$$
Charge is lower semi-continuous on the splitting type,
and we have that the locus of bundles with charge higher than $\chi_{\rm min}$
is Zariski closed; in fact, such locus
is determined by $k+1$ polynomial equations \cite[thm.~4.11]{BGK1}.
\begin{corollary}\label{quasi}
${\mathfrak M}^s_j(Z_k,0) $ is a quasi-projective
variety, whose complement in $\mathbb P^{2j-k-2}$ is cut out by
$k+1$ equations.
\begin{proof}
On the first infinitesimal neighbourhood, $p_1$ has $2j-k-1$ coefficients (see equation \ref{poly}); modulo projectivisation, by means of Theorem \ref{inf}, we arrive at the desired result.
\end{proof}
\end{corollary}
\subsection{Case $c_1=1$}
From expression (\ref{poly}) we can read off the case $c_1=1$ by setting $j_1=-j$ and $j_2=j+1$,
considering extensions ${\rm Ext}^1_{Z_k}(\mathcal O(j+1),\mathcal O(-j))$.
The form of the extension class restricted to the first infinitesimal neighborhood expressed in canonical coordinates is
$$ \sum_{l = k - j + 1}^{j} p_{1l} \, z^l u.$$
The coefficients vary in $\mathbb C^{2j-k}$, so that modulo the relation $p\sim \lambda p$, $\lambda\in\mathbb C-\{0\}$,
we have:
\begin{lemma}\label{z1}
$\mathfrak M_j^1(Z_k,1)\simeq \mathbb P^{2j-k-1}$.
\end{lemma}
\begin{proof}
The proof of this lemma is just a modification of the proof of Theorem \ref{thm.proj}, which goes through successfully by replacing the appropriate $j$s with $j+1$: On the first infinitesimal neighbourhood, two bundles $E^{(1)}$ and $F^{(1)}$ with respective transition matrices
\[ \begin{pmatrix} z^j & p_1 \\ 0 & z^{-j-1} \end{pmatrix} \quad\text{and}\quad
\begin{pmatrix} z^j & q_1 \\ 0 & z^{-j-1} \end{pmatrix} \]
are isomorphic if and only if $q_1 = \lambda p_1$ for some
$\lambda \in \mathbb{C}-\{0\}$. Thus, projectivising the space of bundles on the first formal neighbourhood gives the isomorphism classes in the case $c_1=1$
just like we had in the vanishing $c_1$ case.
\end{proof}
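For instance (illustration only), on $Z_2$ with $j=2$ the extension class restricted to the first infinitesimal neighbourhood is $p_{1,1}zu+p_{1,2}z^2u$, with two coefficients, so $\mathfrak M_2^1(Z_2,1)\simeq\mathbb P^{1}$, consistent with $2j-k-1=1$.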
The moduli space ${\mathfrak M}^s_j(Z_k,1)$ of bundles with minimal charge can also be considered as well. Since charge is lower semi-continuous, the set ${\mathfrak M}^s_j(Z_k,1)$
of bundles in ${\mathfrak M}^1_j(Z_k,1)$ achieving minimal charge is Zariski open.
\section{Threefolds}\label{3}
Consider the threefolds
$$W_i = \Tot ({\mathcal O}_{\mathbb P^1}(i-2)\oplus {\mathcal O}_{\mathbb P^1}(-i))
$$ to which we alluded earlier in section \ref{sec.alg}, and denote by $\ell$
the zero section inside $W_i$. We focus on the cases of
rank 2 and either $c_1=0$ or else $c_1=1$ as we did
in section \ref{2} and for a bundle $E$ over $W_i$ such that
$E\vert_\ell \simeq {\mathcal O}(j) \oplus {\mathcal O}(-j) $
we call the non-negative integer $j$ the splitting type of $E$.
Note that here again $\Pic W_i \simeq \Pic \, \ell$ so we can avoid
a subscript in the notation ${\mathcal O}(j)$.
We now consider only {\it algebraic extensions} over the $W_i$ and then
define moduli spaces analogous to the ones we defined in section
\ref{2}. First the set of isomorphism classes of bundles
with fixed splitting type:
\[ \mathcal M_j(W_i) =
\bigl\{ E\to W_i : E\vert_{\ell} \simeq
\mathcal O (j)\oplus\mathcal O (-j) \bigr\}
\bigm/ \sim \text{,}\]
and $${\mathfrak M}^1_j(W_i) \subset {\mathcal M}_j(W_i)$$ the
subset which parametrizes
bundles on $W_i$ which are nontrivial
first order deformations of ${\mathcal O}(j) \oplus {\mathcal O}(-j)$,
that is, bundles $E$ fitting into an exact sequence
$$
0 \rightarrow {\mathcal O}(-j) \rightarrow E \rightarrow
{\mathcal O}(j) \rightarrow 0 $$
whose corresponding extension class
vanishes to order exactly one on $\ell$ (note that this excludes the split
bundle itself).
In local canonical coordinate charts, we have
\begin{equation}\label{can3}
W_i = U \cup V, \qquad\text{with}\qquad U =\mathbb C^3= \{(z,u_1,u_2)\},
\qquad
V=\mathbb C^3 = \{(\xi,v_1,v_2)\}\qquad \end{equation}
$$ \text{and}\qquad (\xi,v_1,v_2) = (z^{-1},z^{2-i}u_1,z^{i}u_2)
\qquad\text{in}\qquad U \cap V\text{.}$$
Then on the $U$-chart
${\mathcal I}_\ell= \langle u_1,u_2\rangle$ and elements of ${\mathfrak M}^1_j$
are determined by extension classes
$p \in \Ext({\mathcal O}(j),
{\mathcal O}(-j))$ with either $p= u_1p'$ or else $p=u_2p''$
and $u_1 \nmid p'p''$, $u_2 \nmid p'p''$.
\begin{lemma}\label{thm:gk}\cite[cor.~5.6]{GK} We have an isomorphism of varieties
$${\mathfrak M}^1_j(W_i,0) \simeq {\mathbb P}^{4j-5}\text{.}$$
\end{lemma}
Once again, fixing a numerical invariant seems to be a preferable
choice (as suggested by the last item on Remark \ref{usu}), so we define:
$${\mathfrak M}^s_j(W_i,0):=
\{E \in {\mathfrak M}^1_j(W_i,0) : \chi(E) = \chi_{\rm min}(W_i)\}\text{,}$$
and this is a Zariski open subvariety of ${\mathfrak M}^1_j$.
\begin{lemma}\label{w1}
$\mathfrak M_j^1(W_i, 1) = \mathbb P ^{4j-3}$.
\begin{proof}
In canonical coordinates, an extension of $\mathcal{O}(j+1)$ by $\mathcal{O}(-j)$ may be represented over $W_i$ by the transition matrix:
\[
T =
\left(\begin{matrix}{z^{j}}& {p}\\ {0}& {z^{-j-1}}\end{matrix}
\right)\text{.}
\]
On the intersection $U\cap V = (\mathbb C-\{0\})\times \mathbb C^2$ the holomorphic functions are of the form $$p=\sum_{r=-\infty}^{\infty}\sum_{s=0}^\infty\sum_{t=0}^\infty p_{rst} z^ru_1^s u_2^t\text{.}$$
By changing coordinates one can show that it is equivalent to consider $p$ as
\begin{align*}
&(p_{-j,0,0}z^{-j}+\dotsb + p_{j-1,0,0}z^{j-1}) \\
+ & (p_{-j-i+2,1,0}z^{-j-i+2}+\dotsb + p_{j-1,1,0}z^{j-1})u_1 \\
+ &(p_{-j+i,0,1}z^{-j+i} + \dotsb + p_{j-1,0,1}z^{j-1})u_2 \\
+ &\text{ higher-order terms}.
\end{align*}
Therefore, counting coefficients on the first infinitesimal neighbourhood gives $(2j+i-2)+(2j-i)=4j-2$ coefficients, giving dimension $4j-3$ after projectivising.
\end{proof}
\end{lemma}
\begin{theorem}
For all positive integers $i,j,k$, there are isomorphisms
$$\mathfrak M^1_{2j+\left\lfloor\frac{k-3}{2}\right\rfloor+\delta}(Z_k,\epsilon) \simeq \mathfrak M^1_j(W_1, \delta)$$
and birational equivalences
$$\mathfrak M^s_{2j+\left\lfloor\frac{k-3}{2}\right\rfloor+\delta}(Z_k, \epsilon) \dashrightarrow
\mathfrak M^s_j(W_1, \delta)$$
when $\epsilon\equiv k+1\,{\rm mod}\,2$ and $\delta \in \{0,1\}$.
\end{theorem}
\begin{proof}
By setting $j\mapsto 2j+\left\lfloor\frac{k-3}{2}\right\rfloor+\delta$ in Corollary \ref{cor}, we obtain isomorphisms \[
\mathfrak M^1_{2j+\left\lfloor\frac{k-3}{2}\right\rfloor+\delta}(Z_k,0) \simeq \mathbb P^{4j-5+2\delta}
\]
for $k$ odd, since in that case $2\bigl(2j+\frac{k-3}{2}+\delta\bigr)-k-2=4j-5+2\delta$. Similarly, we can use lemma \ref{z1} to obtain isomorphisms
\[
\mathfrak M^1_{2j+\left\lfloor\frac{k-3}{2}\right\rfloor+\delta}(Z_k,1) \simeq \mathbb P^{4j-5+2\delta}
\]
for $k$ even, since then $2\bigl(2j+\frac{k-4}{2}+\delta\bigr)-k-1=4j-5+2\delta$. The required isomorphisms to $\mathfrak M^1_j(W_1, \delta)$ then follow from lemmas \ref{thm:gk} and \ref{w1}, which give $\mathbb P^{4j-5}$ for $\delta=0$ and $\mathbb P^{4j-3}$ for $\delta=1$, respectively.
To find the birational equivalences, first note that we have $$\mathfrak M^s_{2j+\left\lfloor\frac{k-3}{2}\right\rfloor+\delta}(Z_k, \epsilon) \subset \mathfrak M^1_{2j+\left\lfloor\frac{k-3}{2}\right\rfloor+\delta}(Z_k, \epsilon) \text{ and } \mathfrak M^s_j(W_i, \delta)\subset \mathfrak M^1_j(W_i, \delta)$$by definition. Lemma \ref{quasi} shows that $\mathfrak M^s_{2j+\left\lfloor\frac{k-3}{2}\right\rfloor+\delta}(Z_k, \epsilon)$ is a quasi-projective variety and we now show that $\mathfrak M^s_j(W_1, \delta)$ is also quasi-projective.
For any bundle on $W_1$, \cite[lem.~5.2]{BGK2} shows that the width is always ${\bf w}(E) = h^0\left( (\pi_*E)^{\vee\vee}\bigm/ \pi_*E\right) = 0$. Thus, fixed charge is equivalent to fixed height. Since height is minimal on a Zariski open set of $W_1$ of codimension at least $3$ given by the vanishing of certain coefficients of $p$, ${\mathfrak M}^s_j(W_1)$ is Zariski open in ${\mathfrak M}^1_j(W_1)$.
Restricting the isomorphisms above to a suitably small neighbourhood of these quasi-projective varieties then gives the required birational equivalences.
\end{proof}
\begin{question}
Since $\ell\subset W_i$ cannot be contracted to a point for $i>1$, our definition of charge does not apply. Can similar numerical invariants be defined for bundles on $W_i$, $i>1$? Some such invariants were defined in \cite{K} chapter 3.5, though much remains to be understood about their geometrical meaning.
\end{question}
\begin{theorem}
For $q \leq 2(2j - k -2+\delta)$ there are isomorphisms
\begin{itemize}
\item[($\iota$)] $H_q(\mathfrak M^1_j(Z_k, \delta)) = H_q(\mathfrak M^1_{j+1}(Z_k, \delta))$
\item[($\iota\iota$)] $\pi_q(\mathfrak M^1_j(Z_k, \delta))= \pi_q(\mathfrak M^1_{j+1}(Z_k, \delta))\text{.}$
\end{itemize}
and for $q \leq 2(4j - 3-2\delta)$ there are isomorphisms
\begin{itemize}
\item[($\iota\iota\iota$)] $H_q(\mathfrak M^1_j(W_i, \delta)) = H_q(\mathfrak M^1_{j+1}(W_i, \delta))$
\item[($\iota\nu$)] $\pi_q(\mathfrak M^1_j(W_i, \delta))= \pi_q(\mathfrak M^1_{j+1}(W_i, \delta))\text{.}$
\end{itemize}
\end{theorem}
\begin{proof}
The statements follow immediately from corollary \ref{cor} and lemmas \ref{z1}, \ref{thm:gk} and \ref{w1}.
\end{proof}
\end{document}
\begin{document}
\begin{abstract}
A linear $3$-graph is a set of vertices along with a set of edges, which are three element subsets of the vertices, such that any two edges intersect in at most one vertex. The crown, $C$, is a specific $3$-graph consisting of three pairwise disjoint edges, called jewels, along with a fourth edge intersecting all three jewels. For a linear $3$-graph, $F$, the linear Turán number, $ex(n,F)$, is the maximum number of edges in any linear $3$-graph that does not contain $F$ as a subgraph. Currently, the best known bounds on the linear Turán number of the crown are
\[ 6 \floor{\frac{n - 3}{4}} \leq {\rm ex}(n, C) \leq 2n. \]
In this paper, the upper bound is improved to ${\rm ex}(n,C) < \frac{5n}{3}$.
\end{abstract}
\title{Improved Upper Bound on the Linear Turán Number of the Crown}
\section{Introduction}
A \textit{$3$-graph}, $H$, is a non-empty set, $V(H)$, whose elements are called vertices, along with a set, $E(H)$, of $3$-element subsets of $V(H)$ called edges. A \textit{linear} $3$-graph is a $3$-graph where any two edges intersect in at most one vertex. This paper only concerns linear $3$-graphs, and for the remainder of it, $3$-graph will be used to mean linear $3$-graph.
If $H$ and $F$ are $3$-graphs, then $H$ is \textit{$F$-free} if it has no subgraph isomorphic to $F$. For a $3$-graph, $F$, and $n \in \mathbb{N}$, the linear Turán number ${\rm ex}(n,F)$ is the greatest number of edges in any $F$-free 3-graph on $n$ vertices.
A $3$-tree is a $3$-graph that can be constructed as follows: start with a single edge and add edges one at a time so that each new edge intersects exactly one other edge when it is added. This paper concerns a specific $3$-tree called the crown or $C$. It consists of three pairwise disjoint edges, called jewels, and one edge, called the base, intersecting all three jewels (See figure \ref{fig-crown}). The crown was named in \cite{CFGGWY} where it was proven that every $3$-graph with minimum vertex degree $\delta(H) \geq 4$ contains a crown.
\begin{figure}
\caption{The crown}
\label{fig-crown}
\end{figure}
In \cite{GYRS}, A. Gy\'arf\'as, M. Ruszink\'o, and G. N. S\'ark\"ozy initiated the study of linear Tur\'an numbers for acyclic $3$-graphs, focusing on paths, matchings, and small $3$-trees. The crown is the smallest $3$-tree whose linear Turán number is not known. They provided the bounds
$$ 6 \floor{\frac{n - 3}{4}} \leq {\rm ex}(n, C) \leq 2n. $$
The purpose of this paper is to improve the upper bound.
\begin{theorem} \label{main-theorem}
For $n \geq 1$, ${\rm ex}(n,C) < \frac{5n}{3}.$
\end{theorem}
The rest of this paper is dedicated to the proof of Theorem \ref{main-theorem}.
\section{Preliminary Results and Notation}
Let $H$ be a $3$-graph and $v \in V(H)$. The number of edges containing $v$ is the \textit{degree} of $v$ and is written $d(v)$. For an edge $e = \{a,b,c\} \in E(H)$, the \textit{degree vector} of $e$ is $D(e) = \degvec{d(a),d(b),d(c)}$ where the coordinates are in non-increasing order. Define a partial order on the degree vectors by $D(e) \geq D(f)$ if $D(e)$ is greater than or equal to $D(f)$ in all three coordinates. The degree vector of an edge can be useful for finding a crown in a $3$-graph using the following result mentioned in \cite{CFGGWY} and \cite{GYRS}.
\begin{lemma} \label{lemma-1}
If $H$ is a $3$-graph and there is an edge $e \in E(H)$ such that $D(e) \geq \degvec{6,4,2}$, then $H$ is not crown-free.
\end{lemma}
\begin{proof}
Let $e = \{a,b,c\} \in E(H)$ with $D(e) = \degvec{d(a),d(b),d(c)} \geq \degvec{6,4,2}$. Since $d(c) \geq 2$, there is an edge $f \neq e$ containing $c$. Since $d(b) \geq 4$ and, by linearity, at most two edges containing $b$ other than $e$ intersect $f$, there is an edge $g$ containing $b$ and not intersecting $f$. Similarly, since $d(a) \geq 6$ and at most two edges containing $a$ other than $e$ intersect $f$ and at most two intersect $g$, there is an edge $h$ containing $a$ and not intersecting $f$ or $g$. Therefore, $H$ is not crown-free since $e,f,g, \text{ and } h$ form a crown.
\end{proof}
Define a \textit{counter-example} to be a crown-free $3$-graph, $H$, with $|E(H)| \geq \frac{5 |V(H)|}{3}$. To prove Theorem \ref{main-theorem} we want to show that no such counter-example exists.
\begin{lemma} \label{lemma-min-vertices}
Any counter-example has a vertex set of at least 11 elements.
\end{lemma}
\begin{proof}
Let $H$ be a counter-example with $|V(H)| = n$. By linearity, $|E(H)| \leq \frac{n(n-1)}{6}$ and, since $H$ is a counter-example, $|E(H)| \geq \frac{5n}{3}$. Hence $\frac{n(n-1)}{6} \geq \frac{5n}{3}$, which forces $n-1 \geq 10$, that is, $n \geq 11$. Thus, all counter-examples have at least $11$ vertices.
\end{proof}
A \textit{minimal counter-example} is a counter-example containing no proper subgraph that is also a counter-example. If $X \subseteq V(H)$ is a set of vertices, let $E_X(H)$ be the set of all edges containing at least one vertex in $X$.
\begin{lemma} \label{lemma-inductive}
If $H$ is a minimal counter-example, then there does not exist a non-empty proper subset $X \subsetneq V(H)$ of vertices such that $|E_X(H)| \leq \frac{5 |X|}{3}$.
\end{lemma}
\begin{proof}
If such an $X$ exists, removing $E_X(H)$ and $X$ gives a proper subgraph that is a counter-example, contradicting $H$ being minimal.
\end{proof}
It follows from lemma \ref{lemma-inductive} that there are no vertices of degree zero or one in a minimal counter-example. The proof of the following lemma is given in section \ref{sec-proof-lemma-3}.
\begin{lemma} \label{lemma-3}
If $H$ is a minimal counter-example, then there is no edge $e \in E(H)$ with $D(e) \geq \degvec{5,5,5}$.
\end{lemma}
If $e \in E(H)$ and the coordinates of $D(e) = \degvec{x,y,z}$ are summed, we get the value $s(e) = x+y+z$. Define $L(H) \subseteq V(H)$ to be the set of ``large'' vertices, of degree 9 or higher. Then, for an edge $e \in E(H)$ with $D(e) = \degvec{x,y,z}$, we define a modified $s(e)$ by:
\begin{equation*}
s^*(e) = \begin{cases}
\min\{ s(e), 15 \} &\text{if $x \geq 9$}, \\ s(e) & \text{ otherwise.}
\end{cases}
\end{equation*}
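To illustrate the definition: an edge with $D(e)=\degvec{10,4,3}$ has $s(e)=17$ but $s^*(e)=15$, while an edge with $D(e)=\degvec{8,5,3}$ has $s^*(e)=s(e)=16$.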
If we sum $s^*(e)$ over all of the edges in $H$, we get the value $$T^*(H) = \sum_{e \in E(H) }s^*(e).$$
\begin{lemma} \label{lemma-2}
If $H$ is a minimal counter-example on $n$ vertices, then $$\frac{T^*(H)}{|E(H)|} \geq \frac{25n+14 |L(H)|}{\frac{5n+2}{3}}.$$
\end{lemma}
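For concreteness, the two instances of this bound used below are: if $L(H)\neq\emptyset$, the right-hand side equals $\frac{75n+42|L(H)|}{5n+2}>15$, while if $L(H)=\emptyset$, it equals $\frac{75n}{5n+2}$, which exceeds $14$ whenever $n\geq 6$, and in particular whenever $n\geq 11$.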
Lemma \ref{lemma-2} is proven in section \ref{sec-proof-lemma-2}. We now give the proof of Theorem \ref{main-theorem}.
\begin{proof}
Suppose for contradiction that there exists a counter-example. Let $H$ be a (possibly non-proper) subgraph that is a minimal counter-example. If $L(H)$ is not empty, then $\frac{T^*(H)}{|E(H)|} > 15$ by lemma \ref{lemma-2}, so there exists an $e \in E(H)$ such that $s^*(e) \geq 16$. By definition of $s^*$ there is no vertex $v \in e \cap L(H)$, and by lemma \ref{lemma-inductive} there is no vertex in $e$ of degree one. Since every degree on $e$ is then at most $8$ and at least $2$, $s(e) \geq 16$ forces $D(e) \geq \degvec{6,4,2}$, so lemma \ref{lemma-1} tells us there is a crown in $H$, a contradiction.
Now, suppose that $L(H)$ is empty. Then, by lemmas \ref{lemma-min-vertices} and \ref{lemma-2}, $\frac{T^*(H)}{|E(H)|} > 14$, so there exists an $e \in E(H)$ such that $s^*(e) \geq 15$. By assumption, there is no vertex $v \in e \cap L(H)$. There is also no vertex of degree one in $e$ and, by lemma \ref{lemma-3}, $D(e) \neq \degvec{5,5,5}$; since every degree on $e$ is at most $8$ and at least $2$, this forces $D(e) \geq \degvec{6,4,2}$. Then lemma \ref{lemma-1} again gives a contradiction. Therefore, no counter-example exists, proving Theorem \ref{main-theorem}.
\end{proof}
\section{Proof of Lemma \ref{lemma-3}} \label{sec-proof-lemma-3}
\begin{definition}[Link graph] \label{linkgraph}
Let $H$ be a $3$-graph and $e = \{a, b, c\} \in E(H)$. The \textit{link graph} of $e$ in $H$ is the graph $G(e)$, together with a proper coloring $\varphi$ of its edges, defined as follows. The vertex set of $G(e)$ is all vertices covered by $E_e(H)$ minus $a$, $b$, and $c$. The edge set consists of the pairs $\{y,z\}$ for $\{x,y,z\} \in E_e(H) \setminus \{e\}$ where $x \in e$. Such an edge, $\{y,z\}$, is assigned color $\varphi(\{y,z\}) = x$.
\end{definition}
A \textit{rainbow matching} in $G(e)$ is a set of three pairwise disjoint edges representing all three edge colors. There is a crown in $H$ with base $e$ if and only if there is a rainbow matching in $G(e)$.
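The equivalence between crowns with base $e$ and rainbow matchings in $G(e)$ suggests a simple brute-force test. The following Python sketch is our own illustration (helper names and the input format, a linear $3$-graph given as a collection of $3$-element frozensets, are our choices); it builds the colored link graph of an edge and searches for a rainbow matching:
\begin{verbatim}
def link_graph(H, e):
    # Colored link graph of edge e: list of (pair, color) with color a vertex of e.
    # In a linear 3-graph, any f meeting e (f != e) shares exactly one vertex x with e.
    colored = []
    for f in H:
        if f == e or not (f & e):
            continue
        x = next(iter(f & e))
        colored.append((frozenset(f - e), x))
    return colored

def has_rainbow_matching(colored_edges, colors):
    # Brute-force search for three pairwise disjoint edges, one of each color.
    by_color = {c: [p for p, col in colored_edges if col == c] for c in colors}
    for p1 in by_color[colors[0]]:
        for p2 in by_color[colors[1]]:
            if p1 & p2:
                continue
            for p3 in by_color[colors[2]]:
                if not (p3 & p1) and not (p3 & p2):
                    return True
    return False

# A crown with base e exists iff has_rainbow_matching(link_graph(H, e), tuple(e)).
\end{verbatim}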
\begin{lemma} \label{triple-five-lemma}
Let $H$ be a crown-free $3$-graph and $e = \{a, b, c\} \in E(H)$ with $D(e) = \degvec{5, 5, 5}$. Then $G(e)$ is isomorphic to the graph $G$ shown in Figure \ref{fig-link}.
\end{lemma}
\begin{figure}
\caption{Link Graph G}
\label{fig-link}
\end{figure}
\begin{proof}
Given $i,j \in \{a, b, c \}$ where $i \neq j$, every edge of color $i$ must intersect two edges of color $j$, otherwise there is a rainbow matching. Therefore, the $a$- and $b$-colored edges either form an alternating $8$-cycle or two disjoint alternating $4$-cycles. In the former case, any $c$-colored edge gives a rainbow matching. In the latter case, the only possible $c$-colored edges are the diagonals of the $4$-cycles, in which case $G(e)$ is isomorphic to $G$.
\end{proof}
Let $H$ be a minimal counter-example. Suppose for contradiction that there exists an edge $e = \{ a,b,c \} \in E(H)$ with $D(e) \geq \degvec{5,5,5}$. By lemma \ref{lemma-1}, $D(e) = \degvec{5,5,5}$ so $G(e)$ is isomorphic to $G$. Let $X = \{a, b, c \} \cup \{v_1, \ldots, v_8\}$ where $v_1, \ldots, v_8$ are the vertices of $G$ (see Figure \ref{fig-link}). If $E_X(H) = E_e(H)$, then
$$ |E_X(H)| = |E_e(H)| = 13 < \frac{5 |X|}{3} $$
contradicting $H$ being minimal by lemma \ref{lemma-inductive}.
Thus, there exists an edge $f \in E_X(H) \setminus E_e(H)$. The edge $f$ cannot contain any vertex of $e$, but must contain at least one $v_i$. By linearity and symmetry, we may assume $f = \{v_1, w_1, w_2 \}$ where $w_2 \not \in V\big(G(e)\big)$ and either $w_1 = v_5$ or $w_1 \not \in V\big(G(e)\big)$. In either case, there is a crown in $H$ with base
$$ \{v_1, v_4, a\} $$
and jewels
$$ \{v_1, w_1, w_2\} \text{, } \{v_2, v_4, c\} \text{, and } \{v_6, v_7, a\} \text{,} $$
contradicting $H$ being a counter-example.\qed
\section{Proof of Lemma \ref{lemma-2}}
\label{sec-proof-lemma-2}
Let $H$ be a minimal counter-example on $n$ vertices. By lemma \ref{lemma-inductive} there are no degree one vertices in $H$. Since $H$ is minimal, $|E(H)| = \lceil \frac{5n}{3} \rceil$ (otherwise removing an edge would leave a proper subgraph that is still a counter-example), so $$\sum_{v \in V(H)} d(v) = 3|E(H)| = 5n + l$$ where $l \in \{ 0, 1, 2 \}$.
\begin{lemma}
For some $k > 0$ there exists a sequence of functions $\{f_i\}_{i=0}^k: V(H) \rightarrow \mathbb{N}$ and a partition of $V(H)$ into $I$ and $D = V(H) \setminus I$ such that:
\begin{enumerate}
\item For all $v \in V(H)$, $5 \leq f_0(v) \leq 7$.
\item $f_k = d$ where $d$ is the degree function.
\item For $ 1 \leq i \leq k$, there exists $x_i \in I$ and $y_i \in D$ such that $f_{i-1}(x_i) \geq f_{i-1}(y_i) $, $f_i(x_i) = f_{i-1}(x_i) + 1$, $f_i(y_i) = f_{i-1}(y_i) - 1$, and for all $z \in V(H) \setminus \{x_i, y_i\}$, $f_i(z) = f_{i-1}(z)$.
\item If $v \in D$, $f_0(v) = 5$.
\end{enumerate}
\end{lemma}
\begin{proof}
Label the vertices in $V(H)$ as $v_1$ through $v_n$ so that $\{ d(v_i) \}_{i=1}^n$ is in non-decreasing order. First, suppose $l = 0$. Then, let $f_0(v_i) = 5$ for all $i$. To define $f_i$ for $i > 0$, assume $f_{i-1}$ is already defined. Take the minimal $a$ such that $f_{i-1}(v_a) > d(v_a)$ and the maximal $b$ such that $f_{i-1}(v_b) < d(v_b)$. If no such $a$ and $b$ exist, then $i-1 = k$ and we are done. Otherwise, define $f_i(v_a) = f_{i-1}(v_a) - 1$, $f_i(v_b) = f_{i-1}(v_b) + 1$, and $f_i(v_c) = f_{i-1}(v_c)$ for $c \not \in \{a,b\}$. Eventually, we reach an $f_i = d$. Each vertex is either decreased at some step, in which case it belongs to $D$, or is never decreased, in which case it belongs to $I$. If $l = 1$, do the same except with $f_0(v_n) = 6$. If $l = 2$ and there is a vertex $v \in V(H)$ with $d(v) \geq 7$, let $f_0(v_n) = 7$. If no such vertex exists, then there are at least two vertices of degree six, so let $f_0(v_n) = f_0(v_{n-1}) = 6$ and the final criterion is still satisfied.
\end{proof}
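The procedure in the proof is constructive. The following Python sketch is our own illustration, restricted to the case $l = 0$ for brevity; it carries out the rebalancing steps on a degree sequence sorted in non-decreasing order and returns the intermediate functions $f_0, \ldots, f_k$:
\begin{verbatim}
def rebalance(deg):
    # deg: vertex degrees in non-decreasing order with sum(deg) == 5*len(deg),
    # i.e. the case l = 0.  Returns the list [f_0, f_1, ..., f_k] of the proof.
    n = len(deg)
    assert sum(deg) == 5 * n
    f = [5] * n
    steps = [list(f)]
    while f != list(deg):
        a = min(i for i in range(n) if f[i] > deg[i])   # decreased, ends up in D
        b = max(i for i in range(n) if f[i] < deg[i])   # increased, ends up in I
        f[a] -= 1
        f[b] += 1
        steps.append(list(f))
    return steps
\end{verbatim}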
Consider such a sequence of functions. We now introduce some notation. For $0 \leq i \leq k$, let $$T_i = \sum_{v \in V(H)} f_i(v)^2.$$ For $1 \leq i \leq k$, let $\Delta_i = T_i - T_{i-1}$. For $v \in V(H)$, let $I_v = \{1 \leq i \leq k: f_i(v) \neq f_{i-1}(v) \}$. For $v \in V(H)$, define $$\Delta_v = \sum_{i \in I_v} \Delta_i.$$ For a fixed $i$, $1 \leq i \leq k$, there is exactly one $v \in V(H)$ such that $f_i(v) > f_{i-1}(v)$; let $g(i) = f_i(v)^2 - f_{i-1}(v)^2$. Similarly, there is exactly one $w \in V(H)$ with $f_i(w) < f_{i-1}(w)$; let $h(i) = f_{i-1}(w)^2 - f_i(w)^2$, so that $g(i) - h(i) = \Delta_i$. Note that $$T_k = \sum_{v \in V(H)} d(v)^2 = \sum_{e \in E(H)} s(e)$$ and $\Delta_i > 0$ for all $1 \leq i \leq k$.
\begin{lemma}
If $v \in L(H)$ and $d(v) = m$, then $\Delta_v \geq m^2-9m+14$.
\end{lemma}
\begin{proof}
We have $\Delta_v = \sum_{i \in I_v} \big(g(i)-h(i)\big)$. Now, $f_0(v) = x \in \{ 5,6,7 \}$ and $f_k(v) = m$ so $\sum_{i \in I_v} g(i) = m^2 - x^2$. For any $i \in I_v$, $h(i) \leq 5^2 - 4^2 = 9$. Thus, $\sum_{i \in I_v} h(i) \leq 9 |I_v| = 9 (m-x)$. Therefore, $\Delta_v \geq m^2 - x^2 - 9(m-x)$ which is minimal when $x=7$, giving $\Delta_v \geq m^2-9m+14$.
\end{proof}
\begin{lemma}
If $v \in L(H)$ and $d(v) = m$, then $$\sum_{e \in E_v} \big( s(e) - s^*(e) \big ) \leq m^2 - 9m.$$
\end{lemma}
\begin{proof}
Since $d(v) \geq 9$, the other two vertices in an edge with $v$ have maximum degree $3$ by lemma \ref{lemma-1}. Thus, if $e \in E_v$, $s(e) \leq m + 6$. Since there are $m$ edges in $E_v$, we have $\sum_{e \in E_v} \big ( s(e) - s^*(e) \big ) \leq m(m+6-15) = m^2 - 9m$.
\end{proof}
These two lemmas now give:
\begin{align*}
T^*(H) & = T_k - \sum_{v \in L(H)}\Big(\sum_{e \in E_v} \big (s(e) - s^*(e)\big)\Big) \\
& \geq T_0 + \sum_{v \in L(H)} \Delta_v - \sum_{v \in L(H)}\Big(\sum_{e \in E_v} \big(s(e) - s^*(e)\big)\Big) \\
& = T_0 + \sum_{v \in L(H)} \Big(\Delta_v - \sum_{e \in E_v} \big(s(e) - s^*(e)\big)\Big) \\
& \geq T_0 + \sum_{v \in L(H)} \Big(\big(d(v)^2-9d(v)+14\big) - \big(d(v)^2 - 9d(v)\big)\Big) \\
& = T_0 + \sum_{v \in L(H)} 14 \\
& \geq 25n + 14 |L(H)|
\end{align*}
Since $|E(H)| = \frac{5n+l}{3} \leq \frac{5n+2}{3}$,
$$ \frac{T^*(H)}{|E(H)|} \geq \frac{25n+14 |L(H)|}{\frac{5n+2}{3}}. $$ \qed
\end{document}
\begin{document}
\title{No state-independent contextuality can be extracted from contextual measurement-based quantum computation with qudits of odd prime dimension}
\author{Markus Frembs}
\email{[email protected]}
\affiliation{Centre for Quantum Dynamics, Griffith University,\\ Yugambeh Country, Gold Coast, QLD 4222, Australia}
\author{Cihan Okay}
\email{[email protected]}
\affiliation{Department of Mathematics, Bilkent University, Ankara, Turkey}
\author{Ho Yiu Chung}
\email{[email protected]}
\affiliation{Department of Mathematics, Bilkent University, Ankara, Turkey}
\begin{abstract}
Linear constraint systems (LCS) have proven to be a surprisingly prolific tool in the study of non-classical correlations and various related issues in quantum foundations. Many results are known for the Boolean case, yet the generalisation to systems of odd dimension is largely open. In particular, it is not known whether there exist LCS in odd dimension, which admit finite-dimensional quantum, but no classical solutions.
Here, we approach this question from a computational perspective. We observe that every deterministic, non-adaptive measurement-based quantum computation (MBQC) with linear side-processing defines a LCS.
Moreover, the measurement operators of such a MBQC \textit{almost} define a quantum solution to the respective LCS: the only difference is that measurement operators generally only commute with respect to the resource state of the MBQC. This raises the question whether this state-dependence can be lifted in certain cases, thus providing examples of quantum solutions to LCS in odd dimension. Our main result asserts that no such examples arise within a large extension of the Pauli group for $p$ odd prime, which naturally arises from and is universal for computation in deterministic, non-adaptive MBQC with linear side-processing.
\end{abstract}
\maketitle
\section{Introduction}
Quantum nonlocality and contextuality are dramatic expressions of nature's refusal to behave according to our classical intuition.
A particularly strong form of contextuality is exhibited in the Mermin-Peres square \cite{Peres1991,Mermin1993}. The essence of its non-classical behaviour can be expressed in terms of a linear constraint system (LCS), i.e., a set of linear equations $Ax=b$ with $A \in M_{m,n}(\mathbb Z_2)$ and $b \in \mathbb Z_2^m$ \cite{CleveLiuSlofstra2017}. The Mermin-Peres square in Fig.~\ref{fig: Mermin(Peres) square and star} (a) defines such a system, which admits no solution in the form of a bit-string $x \in \mathbb Z_2^n$, yet which admits a quantum solution in the form of two-qubit Pauli operators (see Sec.~\ref{sec: MBQC and LCS} for details).
Linear constraint systems have sparked a lot of interest in recent years, see \cite{Arkhipov2012,CleveMittal2014,Fritz2016,CleveLiuSlofstra2017,Slofstra2019,OkayRaussendorf2020} for instance. Most prominently, they have played a major role in the various stages and final solution of Tsirelson's problem \cite{Tsirelson2006,Slofstra2019,Slofstra2019b,JiEtAl2020}.
Quantum solutions to LCS are known to exist only in even or infinite (Hilbert space) dimension;\footnote{Here, the dimension refers to the (minimal) Hilbert space on which operators act. The existence of linear constraint systems over $\mathbb Z_d$ admitting possibly infinite-dimensional quantum but no classical solutions for arbitrary $d \in \mathbb{N}$ has been reported in \cite{ZhangSlofstra2020}.}\label{fn: ZhangSlofstra} it is an open question whether finite-dimensional quantum solutions to LCS also exist over $\mathbb Z_d$ for $d$ odd. Notably, the qubit Pauli operators in the Mermin-Peres square define a quantum solution of an associated LCS (see Eq. ~(\ref{eq: MP-square LCS})). This raises the question whether LCS with quantum but no classical solutions exist within (tensor products of) the generalised Pauli group $\mc{P}_d$ for arbitrary finite dimension $d$; \cite{QassimWallman2020} have shown that this is not the case. Here, we generalise this result, by approaching the problem from a computational perspective.
Note that the proof of (state-independent) contextuality in Mermin-Peres square is closely related to Mermin's star in Fig.~\ref{fig: Mermin(Peres) square and star} (b) \cite{BartlettRaussendorf2016}, which in turn can be cast as a contextual computation within the setting of MBQC with linear side-processing ($ld$-MBQC, see Sec.~\ref{sec: MBQC and LCS}) \cite{AndersBrowne2009,Raussendorf2013}. Unlike the former, the latter notion of contextuality is inherently state-dependent. The close relation between state-dependent and state-independent contextuality, and the existence of quantum solutions to LCS in the qubit case suggests that quantum solutions to LCS might arise from $ld$-MBQC also in odd dimension. This is further motivated by the fact that state-dependent contextuality turns out to have a straightforward generalisation to higher dimensions \cite{FrembsRobertsBartlett2018,FrembsRobertsCampbellBartlett2022}.
In fact, every deterministic, non-adaptive $ld$-MBQC naturally defines a LCS (see Eq.~(\ref{eq: qudit LCS}) below).
However, the state-dependent nature of MBQC only requires measurement operators to commute on their common resource eigenstate. Consequently, a deterministic, non-adaptive $ld$-MBQC is generally not a quantum solution of its associated LCS;
it is one only if the measurement operators commute on arbitrary states.
Surprisingly, our main result (Thm.~\ref{thm: main result - Clifford hierarchy}) shows that any solution built from a set of measurement operators, which are universal for computation in deterministic, non-adaptive $ld$-MBQC (App.~C in \cite{FrembsRobertsCampbellBartlett2022}), reduces to a classical solution. In this way, \textit{state-dependent contextuality in MBQC does not lift to state-independent contextuality of LCS.} The following diagram summarises the main tenet of our work:
\begin{center}
\begin{tikzpicture}[x=0.07mm,y=0.07mm]
\node[align=center] at (0,300) (A) {\textbf{deterministic, non-adaptive}\\ \textbf{$ld$-MBQC}};
\node[align=center] at (0,0) (B) {\textbf{contextual $ld$-MBQC}\\ \textbf{(state-dependent contextuality)}};
\node at (1250,300) (C) {\textbf{associated LCS (Eq.~\ref{eq: qudit LCS})}};
\node[align=center] at (1250,0) (D) { \textbf{quantum solution}\\ \textbf{(state-independent contextuality)}};
\draw[->,thick] (A) -- (B) node[midway,left] {\cite{FrembsRobertsBartlett2018} \textbf{: deg(o) $\geq$ d}};
\draw[->,thick] (A) -- (C) node[midway,above=-0.05cm] {\textbf{operator solution}};;
\draw[->,dashed] (A) -- (D) node[midway,above=-0.35cm] {\rotatebox{-13}{\textbf{commutativity}}};
\draw[->,thick] (D) -- (C);
\draw[->,ultra thick, red] (B) -- (D)
node[midway,above=0.1cm] {\textcolor{red}{\textbf{Thm. 4}}}
node[midway,thick,red,sloped] {\large \textbf{/}};;
\end{tikzpicture}
\end{center}
The rest of this paper is organised as follows. In Sec.~\ref{sec: MBQC and LCS}, we show that deterministic, non-adaptive $ld$-MBQC naturally defines operator solutions to LCS, which are quantum solutions if they also satisfy the commutativity constraint of LCS (see Def.~\ref{def: solution group} below).
We then show that the boundary between qubit and qudit systems is seamless in the case of state-dependent contextuality, by constructing an explicit example of contextual $ld$-MBQC (Thm.~\ref{thm: AB qudit computation}), which generalises Mermin's star to arbitrary prime dimensions in Sec.~\ref{sec: Contextual computation in qudit MBQC}.
In Sec.~\ref{sec: MBQC-group}, we study the group arising from local measurement operators in deterministic, non-adaptive $ld$-MBQC. This section builds up to the proof of our key technical contribution (Thm.~\ref{thm: quasi-local homomorphism in abelian subgroups of order p}). The details are of independent interest, but may be skipped on first reading. Sec.~\ref{sec: from noncommutative to commutative LCS} contains the main result of our work (Thm.~\ref{thm: main result - Clifford hierarchy}), and Sec.~\ref{sec: discussion} concludes.
\section{MBQC vs LCS -- state-dependent vs state-independent contextuality}\label{sec: MBQC and LCS}
Before we define LCS from deterministic, non-adaptive $ld$-MBQC, it is instructive to first recall the qubit case and, in particular, Mermin-Peres square and Mermin's star (see Fig.~\ref{fig: Mermin(Peres) square and star}).\\
\begin{figure}
\caption{(a) The Mermin-Peres square; (b) Mermin's star.}
\label{fig: Mermin(Peres) square and star}
\end{figure}
\textbf{LCS from Mermin-Peres square.} The nine two-qubit Pauli operators $x_k \in \mc{P}^{\otimes 2}_2$ in Fig.~\ref{fig: Mermin(Peres) square and star} (a) satisfy the following constraints: (i) every operator squares to the identity, $x_k^2 = 1$, (ii) the operators in every row and column commute, $[x_k,x_l] = 0$, and (iii) the operators in every row and column obey the multiplicative constraints
\begin{equation*}
\prod_{k=1}^9 x_k^{A_{ik}} = (-1)^{b_i}\; ,
\end{equation*}
where $A \in M_{6,9}(\mathbb Z_2)$ and $b \in \mathbb Z^6_2$ are defined as follows
\begin{equation}\label{eq: MP-square LCS}
A = \begin{pmatrix}
1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \\
1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \\
\end{pmatrix} \quad \quad \quad b = \begin{pmatrix}
0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1
\end{pmatrix} \; .
\end{equation}
Assume that there exists a noncontextual value assignment $x_k = (-1)^{\tilde{x}_k}$ with $\tilde{x}_k \in \mathbb Z_2$, i.e., $\tilde{x} \in \mathbb Z^9_2$. Then the constraints can be equivalently expressed in terms of the additive equation $A\tilde{x} = b \mod 2$,
\begin{equation}\label{eq: LCS relation}
\prod_{k=1}^9 x_k^{A_{ik}} = (-1)^{\sum_{k=1}^9 A_{ik}\tilde{x}_k} = (-1)^{b_i}\; .
\end{equation}
However, no such solution exists: the Mermin-Peres square is an example of \emph{state-independent} contextuality.
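Indeed, the non-existence of a classical solution follows from the standard parity argument: summing all six equations of $A\tilde{x} = b \mod 2$, every variable $\tilde{x}_k$ appears exactly twice (once in its row constraint and once in its column constraint), so that
\begin{equation*}
0 = \sum_{i=1}^{6}\sum_{k=1}^{9} A_{ik}\tilde{x}_k = \sum_{i=1}^{6} b_i = 1 \mod 2\; ,
\end{equation*}
a contradiction.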
Generalising this example, one defines linear constraint systems (LCS) `$Ax=b \mod d$' for any $A \in M_{m, n}(\mathbb Z_d)$ and $b \in \mathbb Z^m_d$. A \emph{classical solution} to a LCS is a vector $\tilde{x} \in \mathbb Z_d^n$ solving the system of linear equations $A\tilde{x}=b \mod d$, whereas a \emph{quantum solution} is an assignment of unitary
operators $x \in U^{\otimes n}(d)$ such that (i) $x_k^d = \mathbbm{1}$ for all $k \in [n]$ (`$d$-torsion'), (ii) $[x_k,x_{k'}] := x_kx_{k'}x_k^{-1}x_{k'}^{-1} = \mathbbm{1}$ whenever $(k,k')$ appear in the same row of $A$, i.e., $A_{lk} \neq 0 \neq A_{lk'}$ for some $l \in [m] := \{1,\cdots,m\}$ (`commutativity'), and (iii) $\prod_{k=1}^n x_k^{A_{lk}} = \omega^{b_l}$ for all $l \in [m]$, where $\omega = e^{\frac{2\pi i}{d}}$ (`constraint satisfaction').\footnote{With \cite{OkayRaussendorf2020}, we call an \emph{operator solution} an assignment $x \in U^{\otimes n}(d)$ satisfying (i) and (iii), but not (ii).} For more details on linear constraint systems, see \cite{CleveLiuSlofstra2017,Slofstra2019,QassimWallman2020}.\\
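For concreteness, the following Python sketch verifies conditions (i)--(iii) numerically for one standard filling of the Mermin-Peres square; the specific assignment of Pauli operators to the grid is our own choice, made consistent with Eq.~(\ref{eq: MP-square LCS}):
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1., -1.]).astype(complex)
Y = 1j * X @ Z                                   # Y = iXZ

# One standard filling of the Mermin-Peres square (rows of the 3x3 grid).
square = [
    [np.kron(X, I2), np.kron(I2, X), np.kron(X, X)],
    [np.kron(I2, Z), np.kron(Z, I2), np.kron(Z, Z)],
    [np.kron(X, Z),  np.kron(Z, X),  np.kron(Y, Y)],
]

lines = square + [list(col) for col in zip(*square)]   # 3 rows, then 3 columns
for idx, line in enumerate(lines):
    sign = -1 if idx == 5 else 1                 # b = (0,0,0,0,0,1)
    prod = line[0] @ line[1] @ line[2]
    assert np.allclose(prod, sign * np.eye(4))   # (iii) constraint satisfaction
    for k in range(3):
        assert np.allclose(line[k] @ line[k], np.eye(4))              # (i)
        for l in range(k + 1, 3):
            assert np.allclose(line[k] @ line[l], line[l] @ line[k])  # (ii)
print("Mermin-Peres square: quantum solution verified")
\end{verbatim}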
\textbf{Mermin's star and contextual MBQC with qubits.} Following \cite{AndersBrowne2009,Raussendorf2013}, we encode the constraints in Mermin's star in computational form. Fix the three-qubit resource GHZ-state $|\psi\rangle = \frac{1}{\sqrt{2}}(|000\rangle + |111\rangle)$, and define local measurement operators $M_k(0) = X_k$, $M_k(1) = Y_k$, where $X_k,Y_k,Z_k \in \mc{P}_2$ denote the \emph{local} Pauli measurement operators in Fig.~\ref{fig: Mermin(Peres) square and star}.\footnote{When it is clear from context, we sometimes omit the subscript $k$ to avoid clutter.}
Let $\mathbf{i} \in \mathbb Z_2^2$ be the input of the computation and define local measurement settings on $|\psi\rangle$ given by the functions $l_1(\mathbf{i}) = i_1$, $ l_2(\mathbf{i}) = i_2$, and $l_3(\mathbf{i}) = i_1 + i_2 \mod 2$. Further, the output of the computation is defined as $o = \sum_{k=1}^3 m_k \mod 2$, where $m_k \in \mathbb Z_2$ denotes the local measurement outcomes. Note that the $l_k$ and $o$ are linear functions in the input $\mathbf{i} \in \mathbb Z_2^2$. It follows that Mermin's star defines a deterministic, non-adaptive $l2$-MBQC, where `deterministic' means that the output is computed with probability $1$, `non-adaptive' means that the measurement settings $l_k = l_k(\mathbf{i})$ are functions of the input only (but not of previous measurement outcomes $m_k$), and $l2$-MBQC means that the classical side-processing (that is, pre-processing of local measurement settings $l_k$ and post-processing of outcomes $m_k$) is restricted to $\mathbb Z_2$-linear computation (for details, see \cite{HobanCampbell2011, Raussendorf2013,FrembsRobertsCampbellBartlett2022}).
Note that the resource state $|\psi\rangle$ is a simultaneous eigenstate of $X_1X_2X_3$, $-X_1Y_2Y_3$, $-Y_1X_2Y_3$, and $-Y_1Y_2X_3$--- the three-qubit Pauli operators in the horizontal context of Mermin's star.\footnote{As in Fig.~\ref{fig: Mermin(Peres) square and star}, we use the common shorthand omitting tensor products in e.g. $X_1X_2X_3 := X_1 \otimes X_2 \otimes X_3$.} The output function $o: \mathbb Z_2^2 \rightarrow \mathbb Z_2$ is then the nonlinear OR gate. The nonclassical correlations thus appear in the form of nonlinearity in the $l2$-MBQC, boosting the classical computer beyond its (limited to $\mathbb Z_2$-linear side-processing) capabilities. In fact, nonlinearity becomes a witness of \emph{state-dependent} contextuality (nonlocality) in $l2$-MBQC more generally \cite{HobanCampbell2011,Raussendorf2013,FrembsRobertsBartlett2018,FrembsRobertsCampbellBartlett2022}.
We can capture the constraints of this $l2$-MBQC in the form of a LCS $Ly=o \mod 2$, where
\begin{equation}\label{eq: M-star LCS}
L_{\mathbf{i}k} := (l_1(\mathbf{i}),l_2(\mathbf{i}), l_3(\mathbf{i})) = \begin{pmatrix}
0 & 0 & 0 \\
0 & 1 & 1 \\
1 & 0 & 1 \\
1 & 1 & 0 \\
\end{pmatrix} \in M_{4\times 3}(\mathbb Z_2) \quad \quad \quad o(\mathbf{i}) = \begin{pmatrix}
0 \\ 0 \\ 0 \\ 1
\end{pmatrix} \in \mathbb Z_2^4\; .
\end{equation}
More generally, we can associate a LCS $Ly=o \mod 2$ to any deterministic, non-adaptive $l2$-MBQC, where $L \in M_{2^n\times N}(\mathbb Z_2)$ encodes the measurement settings, $o \in \mathbb Z_2^{2^n}$ the output, $n$ the size of the input vector, and $N$ the number of qubits of the $l2$-MBQC. A classical solution (i.e., a Boolean vector $y \in \mathbb Z^N_2$ solving $Ly = o \mod 2$) corresponds with a noncontextual hidden variable model of the respective $l2$-MBQC. In turn, state-dependent contextuality is observed whenever the LCS $Ly = o \mod 2$ admits no classical solution \cite{Raussendorf2013}.
To understand how the $l2$-MBQC nevertheless `computes' a nonlinear output function, we return to Mermin's star and the associated LCS in Eq.~(\ref{eq: M-star LCS}). Write $Y_k = S_kX_k$ with $S_k = \mathrm{diag}(-i,i)$ such that
\begin{equation*}\label{eq: Mermin star as LCS}
\left(\otimes_{k=1}^3 M_k(l_k(\mathbf{i}))\right) |\psi\rangle
= \left(\otimes_{k=1}^3 S_k^{L_{\mathbf{i}k}}X_k\right) |\psi\rangle
= (-1)^{o(\mathbf{i})} |\psi\rangle\; ,
\end{equation*}
then the output function of the deterministic, non-adaptive
$l2$-MBQC is of the form
\begin{equation}\label{eq: output function in l2-MBQC}
o(\mathbf{i}) = \sum_{k=1}^3 l_k(\mathbf{i}) f_k(q_k) \mod 2\; ,
\end{equation}
where $f_k: \mathbb Z_2 \ensuremath{\rightarrow} \mathbb R$, $f_k(q_k) = (-1)^{q_k}\frac{1}{4}$ defines a phase gate $S_k(q_k) = e^{if_k(q_k)}$ as a function of the computational basis. Unlike the $\mathbb Z_2$-linear equation $o = \sum_{k=1}^3 m_k \mod 2$, Eq.~(\ref{eq: output function in l2-MBQC}) is a set of \emph{real-linear} equations. In particular, the above contextual computation based on Mermin's star (cf. \cite{AndersBrowne2009}) defines a solution of the LCS $Ly=o \mod 2$ over the reals (cf. \cite{HobanCampbell2011}). More generally, \cite{FrembsRobertsCampbellBartlett2022} show that a LCS of the form $Ly=o \mod 2$ is implemented by a deterministic, non-adaptive $l2$-MBQC if and only if it has a solution over the reals via Eq.~(\ref{eq: output function in l2-MBQC}).\footnote{Furthermore, \cite{FrembsRobertsCampbellBartlett2022} studies the existence of deterministic, non-adaptive $l2$-MBQC implementing Eq.~(\ref{eq: output function in l2-MBQC}), in dependence of $N$ and the level of the Clifford hierarchy to which local measurement operators $M_k$ belong.}\\
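As a numerical illustration (our own sketch, not part of the original construction), the following Python snippet checks that the four operator products of the Mermin-star $l2$-MBQC stabilise the GHZ state with signs $(-1)^{o(\mathbf{i})}$, i.e., that the computation outputs $\mathrm{OR}(i_1,i_2)$ deterministically:
\begin{verbatim}
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)                  # (|000> + |111>)/sqrt(2)

M = {0: X, 1: Y}                                  # measurement setting -> operator
for i1 in (0, 1):
    for i2 in (0, 1):
        l1, l2, l3 = i1, i2, (i1 + i2) % 2        # Z_2-linear pre-processing
        op = np.kron(np.kron(M[l1], M[l2]), M[l3])
        o = i1 | i2                               # expected output OR(i1, i2)
        assert np.allclose(op @ ghz, (-1) ** o * ghz)
print("Mermin-star MBQC outputs OR deterministically")
\end{verbatim}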
\textbf{LCS from MBQC with qudits.} It is straightforward to generalise the setting in the preceding paragraphs to deterministic, non-adaptive $ld$-MBQC (with $\mathbb Z_d$-linear side-processing) for qudits of dimension $d$. We fix the $N$-qudit GHZ-resource state $|\psi\rangle = \frac{1}{\sqrt{d}} \sum_{q=0}^{d-1} \otimes_{k=1}^N |q_k\rangle$ and consider local measurement operators of the form $M_\xi(l)|q\rangle = S^l_\xi X|q\rangle = \xi(q+1)^l|q+1\rangle$, $l \in \mathbb{N}$, where $S_\xi = e^{if}$ is a type of generalised phase gate for any $f: \mathbb Z_d \ensuremath{\rightarrow} \mathbb{R}$, equivalently $\xi: \mathbb Z_d \ensuremath{\rightarrow} U(1)$, such that $\prod_{q=0}^{d-1} S_\xi(q) = e^{i\sum_{q=0}^{d-1} f(q)} = \mathbbm{1}$.
Moreover, a deterministic, non-adaptive $ld$-MBQC defines a linear constraint system $Ly = o \mod d$, where
\begin{equation}\label{eq: qudit LCS}
L = (L_{\mathbf{i},k})_{k=1,\mathbf{i} \in \mathbb Z_d^n}^N = (l_1(\mathbf{i}),\cdots, l_N(\mathbf{i})) \in M_{d^n\times N}(\mathbb Z_d)\; ,
\end{equation}
for all $\mathbf{i} \in \mathbb Z_d^n$ is the matrix corresponding to the measurement settings and $o: \mathbb Z_d^n \rightarrow \mathbb Z_d$, equivalently $o \in \mathbb Z_d^{d^n}$, is the output function of the $ld$-MBQC. The latter is of the form
\begin{equation}\label{eq: output function in MBQC}
o(\mathbf{i}) = \sum_{k=1}^N m_k = \sum_{k=1}^N l_k(\mathbf{i}) f_k(q_k)\; .
\end{equation}
Again, a classical solution $y \in \mathbb Z^N_d$ of the LCS $Ly=o \mod d$ corresponds with a noncontextual hidden variable model for the $ld$-MBQC. Moreover, for $d$ prime a solution over the reals can be implemented in terms of contextual $ld$-MBQC. In fact, the operators $M_\xi(l)$ are universal for contextuality in $ld$-MBQC for $d$ prime: any (linear and nonlinear) output function can be computed deterministically using such operators acting on the $N$-qudit GHZ state (for details, see App. C in \cite{FrembsRobertsCampbellBartlett2022}). \footnote{We emphasise, however, that deterministic, non-adaptive $ld$-MBQC is not universal for quantum computation. \label{fn: universal ld-MBQC}} In the next section we provide an explicit example for state-dependent contextuality, which generalises the qubit MBQC based on Mermin's star in Sec.~\ref{sec: MBQC and LCS}.
\section{Contextual MBQC for qudit systems}\label{sec: Contextual computation in qudit MBQC}
In this section we construct a generalisation of the state-dependent contextual arrangement in Mermin's star \cite{Mermin1993} (see Fig.~\ref{fig: Mermin(Peres) square and star} (b)) from qubit to qudit systems in odd prime dimension. The latter assumes a computational character within the scheme of measurement-based computation \cite{RaussendorfBriegel2001,BriegelRaussendorf2001,AndersBrowne2009}, which moreover is at the heart of one of the only known examples of a provable quantum over classical computational advantage \cite{BravyiGossetKoenig2018,BravyiGossetKoenig2020}. The resource behind this advantage is contextuality (nonlocality), i.e., the non-existence of any consistent value assignment to all measurement operators. Sharp thresholds for state-dependent contextuality can be derived under the restriction of $\mathbb Z_d$-linear side-processing $ld$-MBQC \cite{HobanCampbell2011,Raussendorf2013,FrembsRobertsBartlett2018,FrembsRobertsCampbellBartlett2022}.\\
\textbf{A contextual qudit example.} We generalise the contextual computation in \cite{AndersBrowne2009} based on the operators in Mermin's star to qudit systems of arbitrary prime dimension. Define the following $d$ operators by their action on computational basis states, $|q\ensuremath{\rightarrow}ngle$ for $0 \leq q \leq d-1$ and for $\omega = e^{\frac{2\pi i}{d}}$, as follows: for all $l \in \mathbb Z_d$,
\begin{equation}\label{eq: qudit measurement operators}
M(l)|q\rangle := \theta(l) \omega^{lq^{d-1}} |q+1\rangle\; .
\end{equation}
Note first that $M(l)^d = 1$ if we set $\theta(l)^d = \omega^l$ and thus $\theta(i)^d\theta(j)^d\theta(d-i-j)^d = 1$. A particular choice for the $\theta(l)$ is given by $\theta(l) = e^{\frac{l2\pi i}{d^2}}$. Similarly to the qubit case, we also specify $\mathbb Z_d$-linear functions, $l_1(\mathbf{i}) := i_1$, $l_2(\mathbf{i}) := i_2$, and $l_3(\mathbf{i}) := -i_1-i_2$ for $\mathbf{i} = (i_1,i_2) \in \mathbb Z_d^2$ and take the resource state to be,
\begin{equation}\label{eq: qudit resource state}
|\psi\rangle = \frac{1}{\sqrt{d}} \sum_{q=0}^{d-1}|q\rangle^{\otimes 3}.
\end{equation}
\begin{theorem}\label{thm: AB qudit computation}
The deterministic, non-adaptive $ld$-MBQC for $d$ prime with measurements $M(\mathbf{l}) = \otimes _{k=1}^3 M_k(l_k(\mathbf{i}))$ where the input $\mathbf{i} = (i_1,i_2) \in \mathbb Z_d^2$ sets the measurements via $\mathbb Z_d$-linear functions $l_1(\mathbf{i}) := i_1$, $l_2(\mathbf{i}) := i_2$, and $l_3(\mathbf{i}) := - i_1 - i_2$, is contextual when evaluated on the resource GHZ-state in Eq.~(\ref{eq: qudit resource state}).
\end{theorem}
\begin{proof}
One easily computes the output function of this computation to be of the form:
\begin{equation}\label{eq: general non-linear function}
o(\mathbf{i}) = \begin{cases} 0 &\mathrm{if} \ i_1=i_2=0 \\
1 &\mathrm{if} \ 0 < i_1 + i_2 \leq d \\
2 &\mathrm{if} \ i_1 + i_2 > d \end{cases}\; .
\end{equation}
By Thm.~1 in \cite{FrembsRobertsBartlett2018}, the computation is contextual if $o(\mathbf{i})$ has degree at least $d$. Note that the number of monomials of degree at most $d-1$ is
the same as the number of constraints for $i_1 + i_2 \leq d-1$ in Eq.~(\ref{eq: general non-linear function}). If the computation were noncontextual, the latter would thus uniquely fix a function of degree at most $d-1$, namely $g = (i_1 + i_2)^{d-1}$. However, $g$ does not satisfy the constraints for $i_1 + i_2 \geq d$. Hence, $o \neq g$ from which it follows that $o$ contains at least one term of degree $d$ or greater and is thus contextual.
\end{proof}
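The computation in Thm.~\ref{thm: AB qudit computation} can be checked numerically for small primes. The following Python sketch is our own verification, for $d=3$ and the choice $\theta(l)=e^{2\pi i l/d^2}$ made above; it confirms that $M(l_1)\otimes M(l_2)\otimes M(l_3)$ acts on the resource state of Eq.~(\ref{eq: qudit resource state}) as $\omega^{o(\mathbf{i})}$ with $o$ as in Eq.~(\ref{eq: general non-linear function}):
\begin{verbatim}
import numpy as np

d = 3
w = np.exp(2j * np.pi / d)

def M(l):
    # M(l)|q> = theta(l) * w^(l * q^(d-1)) |q+1>,  theta(l) = exp(2*pi*i*l/d^2).
    theta = np.exp(2j * np.pi * l / d**2)
    op = np.zeros((d, d), dtype=complex)
    for q in range(d):
        op[(q + 1) % d, q] = theta * w ** (l * pow(q, d - 1, d))
    return op

ghz = np.zeros(d**3, dtype=complex)
for q in range(d):
    ghz[q * d**2 + q * d + q] = 1 / np.sqrt(d)    # (1/sqrt(d)) sum_q |qqq>

def o(i1, i2):                                    # expected output function
    if i1 == i2 == 0:
        return 0
    return 1 if i1 + i2 <= d else 2

for i1 in range(d):
    for i2 in range(d):
        l1, l2, l3 = i1, i2, (-i1 - i2) % d       # Z_d-linear pre-processing
        op = np.kron(np.kron(M(l1), M(l2)), M(l3))
        assert np.allclose(op @ ghz, w ** o(i1, i2) * ghz)
print("qutrit generalisation of Mermin-star MBQC verified")
\end{verbatim}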
Thm.~\ref{thm: AB qudit computation} generalises the contextual computation in Eq.~(\ref{eq: output function in l2-MBQC}) from even to odd prime dimensions. However, there are two notable differences. First, unlike the qubit Pauli group, the Pauli group in odd dimension is noncontextual \cite{Gross2006}. Consequently, the measurement operators $M(\mathbf{l})$ necessarily lie outside the qudit Pauli group.
Second, unlike the three-qubit Pauli operators in Mermin's star, the measurement operators $M(\mathbf{l})$ do not commute---they only commute on the resource state $|\psi\ensuremath{\rightarrow}ngle$. In other words, the measurement operators
only share a single common eigenstate, but not a basis of common eigenstates. As such, deterministic, non-adaptive $ld$-MBQC generally only gives rise to an operator solution of the associated LCS $Ly=o \mod d$ in Eq.~(\ref{eq: qudit LCS}), that is, it does not satisfy the commutativity constraint required of a quantum solution (cf. \cite{OkayRaussendorf2020}). \\
\textbf{Statement of problem.} Nevertheless, the close connection between $ld$-MBQC and LCS for Mermin's star and Mermin-Peres square (as discussed in Sec.~\ref{sec: MBQC and LCS}), together with the fact that state-dependent contextuality in $ld$-MBQC is not restricted to qubit systems (as shown in Sec.~\ref{sec: Contextual computation in qudit MBQC}), suggests that state-dependent contextuality in $ld$-MBQC may at least sometimes be lifted to state-independent contextuality in LCS also in odd prime dimension. We therefore ask: \textit{can the operator solution given by a deterministic, non-adaptive $ld$-MBQC sometimes be lifted to a quantum solution of its associated LCS (Eq.~(\ref{eq: qudit LCS})) in odd prime dimension, that is, does it sometimes also satisfy the commutativity constraint (see Def. \ref{def: solution group} below)?}
In order to study this problem, in the next section we define a group generated by a set of local measurement operators, which is universal for (contextual) computation in deterministic, non-adaptive $ld$-MBQC.
\section{Group of measurement operators in deterministic, non-adaptive $ld$-MBQC.}\label{sec: MBQC-group}
We define a group generated from (local) measurement operators in deterministic, non-adaptive $ld$-MBQC---generalising the operators in Eq.~(\ref{eq: qudit measurement operators}): more precisely, we consider operators of the form $S_\xi X^b$ for a generalised phase gate $S_\xi$ and the generalised Pauli shift operator $X$, as well as (Kronecker) tensor products thereof. Note that by App. A in \cite{FrembsRobertsCampbellBartlett2022}, such operators are universal for computation in deterministic, non-adaptive $ld$-MBQC (see also footnote \ref{fn: universal ld-MBQC}).
The group arising from (local) measurement operators in deterministic, non-adaptive $ld$-MBQC is closely related to the Heisenberg-Weyl group $\mH(\mathbb Z_d)$. We first review some basic facts about the latter.
\subsection{Pontryagin duality and Heisenberg-Weyl group}
Given any locally compact abelian group $G$, we define its Pontryagin dual $\hat{G}$ as the set of continuous group homomorphisms into the unitary group $U(1)$,
\begin{equation}\label{eq: Pontryagin dual}
\hat{G} := \mathrm{Hom}(G,U(1)) = \{\chi: G \rightarrow U(1) \mid \chi(gh) = \chi(g)\chi(h)\ \forall g,h \in G\}\; .
\end{equation}
$\hat{G}$ itself is a locally compact abelian group under pointwise multiplication, and equipped with the compact-open topology. By Pontryagin duality, there is a canonical isomorphism $\mathrm{ev}_G: G \rightarrow \hat{\hat{G}}$ between any locally compact abelian group $G$ and its double dual $\hat{\hat{G}}$ given by
\begin{equation*}
\mathrm{ev}_G(x)(\chi) := \chi(x)\; .
\end{equation*}
The Heisenberg-Weyl group $\mH(G)$ of $G$ is now defined as follows. Let $L^2(G)$ be the Hilbert space of complex-valued square-integrable functions on $G$ and define the translation operators $t_x$, $x \in G$ and multiplication operators $m_\chi$, $\chi \in \hat{G}$ by
\begin{equation}\label{eq: GW generators}
(t_x f)(y) = f(x+y) \quad \quad \quad
(m_\chi f)(y) = \chi(y)f(y)\; ,
\end{equation}
for all $f \in L^2(G)$.\footnote{The measure needed in the definition of $L^2(G)$ is the unique Haar measure on $G$.} Clearly, the operators in Eq.~(\ref{eq: GW generators}) do not commute, instead they satisfy the canonical Weyl commutation relations,
\begin{equation}\label{eq: Weyl CCR}
t_xm_\chi = \overline{\chi(x)} m_\chi t_x \quad \forall x \in G, \chi \in \hat{G}\; .
\end{equation}
The Heisenberg-Weyl group is the subgroup of the unitary group on the Hilbert space $L^2(G)$ generated by these operators.
Of particular interest for quantum computation is the discrete Heisenberg-Weyl group $\mH(\mathbb Z_d)$ with $\hat{\mathbb Z}_d = \{\chi^a(q) = \omega^{aq} \mid \omega = e^{\frac{2\pi i}{d}}, a \in \mathbb Z_d\} \cong \mathbb Z_d$.
Note that the Heisenberg-Weyl group can also be seen as a subgroup of the normaliser $\mH(\mathbb Z_d) \subset N(T)$ of a maximal torus of the special unitary group,
\begin{align}\label{eq: maximal torus SU(d)}
T= T(SU(d)) = \{S_\xi = \mathrm{diag}(\xi(0),\cdots,\xi(d-1)) \in SU(d) \mid \xi: \mathbb Z_d \ensuremath{\rightarrow} U(1), \prod_{q\in \mathbb Z_d} \xi(q)=1\}\; .
\end{align}
There is a split extension
\begin{equation}\label{eq: split extension - normaliser}
1\to T\to N(T)\to W\to 1\; ,
\end{equation}
where
$N(T)$ is the normalizer of $T$ and $W=N(T)/T$ is the Weyl group. It induces an action of $W$ on $T$ by
\begin{equation}{\label{eq-action}}
x\cdot t=xtx^{-1}\; ,
\end{equation}
where $x\in W$ and $t\in T$. (See \cite[Chapter 11]{Hall} for details about maximal tori in Lie groups.) Let $X$
be the generalised Pauli shift operator. The group $\langle X\rangle\cong\mathbb Z_d$ generated by $X$ can be identified with a subgroup of $W$. In particular, note that the Heisenberg-Weyl group arises from the split extension in Eq.~(\ref{eq: split extension - normaliser}), by restriction to the Pontryagin dual
\begin{equation}\label{eq: split extension - HW group}
1\to \hat{\mathbb Z}_d \to \mH(\mathbb Z_d) \to \langle X\rangle \to 1\; .
\end{equation}
Comparing Eq.~(\ref{eq: split extension - HW group}) with Eq.~(\ref{eq: split extension - normaliser}) suggests to extend the Pontryagin dual $\hat{\mathbb Z}_d$ in the definition of the (discrete) Heisenberg-Weyl group to a maximal torus subgroup of the special unitary group.\footnote{The restriction to the special unitary group $SU(d)$ in Eq.~(\ref{eq: maximal torus SU(d)}) ensures that (local measurement) operators $M(\xi,b) = S_\xi X^b$, with $b\neq 0$ and $S_\xi \in T$ have order $d$ (Lm.~\ref{lem-torsion-K}). We also remark that the extension of $\hat{\mathbb Z}_d$ to $T$ is closely related to the diagonal Clifford hierarchy (cf. \cite{CuiGottesmanKrishna2017}), which embeds into the maximal torus of the unitary group $T(U(d))$ (see also Remark.~\ref{rm: diagonal Clifford hierarchy}).}
\subsection{Definition of $K_Q^{\otimes n}(d)$}
In this section, we define a group generated from operators $S_\xi X^b$ that are $d$-torsion in the sense that $(S_\xi X^b)^d=\mathbbm{1}$. For an integer $m\geq 1$, let $T_{(d^m)}$ denote the subgroup of the maximal torus of the special unitary group $T$
consisting of elements of order $d^m$; that is $T_{(d^m)} =\set{S_\xi\in T|\, S_\xi^{d^m}=\mathbbm{1}}$.
In particular, when $d$ is an odd prime, the subgroup $T_{(d)}$ consists of elements $S_\xi$ where $\xi$ is of the form $\xi(q)= \omega^{a+bq}$ for some $a,b\in \mathbb Z_d$.
\begin{definition}\label{def: local MBQC-group}
For a subgroup $T_{(d)}\subset Q\subset T_{(d^m)}$ invariant under the action of $\Span{X}$ we define the following subgroup of the special unitary group:
\[K_Q(d) = \langle S_\xi X^b\;\mid\; b \in \mathbb Z_d,S_\xi \in Q \rangle \subset SU(d)\; ,\]
where $X|q\rangle = |q+1\rangle$ is the generalised Pauli shift operator and $S_\xi|q\rangle = \xi(q)|q\rangle$ is a generalised phase gate.\footnote{Note that not all (diagonal) elements in $K_Q(d)$ have order $d$. Such operators do not arise as local measurement operators in (deterministic, non-adaptive) $ld$-MBQC, they are merely a byproduct of Def.~\ref{def: local MBQC-group}.}
We denote the $n$-fold Kronecker tensor product of $K_Q(d)$ by
\[K^{\otimes n}_Q(d)=\overbrace{K_Q(d)\otimes\cdots\otimes K_Q(d)}^{\text{n copies}}.\]
Similarly $H^{\otimes n}(\mathbb Z_d)$ denotes the
$n$-fold Kronecker tensor product of $H(\mathbb Z_d)$.
\end{definition}
Note that for $Q=T_{(d)}$ with $d$ an odd prime, $K_Q(d) = \mH(\mathbb Z_d)$ reduces to the discrete Heisenberg-Weyl group.\footnote{For $d$ odd, $\mc{P}_d \cong \mH(\mathbb Z_d)$, but for $d$ even these groups differ by central elements \cite{deBeaudrap2013}. In particular, note that the qubit Pauli group $\mc{P}_2 = \langle i,X,Z\rangle$, whereas $\mH(\mathbb Z_2) = \langle X,Z \rangle$.} In this paper we will consider the other extreme, that is, when $Q=T_{(d^m)}$.
Note also that $K_Q(d) = Q \rtimes \langle X \rangle \cong Q \rtimes \mathbb Z_d$ is a split extension (as a restriction of Eq.~(\ref{eq: split extension - normaliser}))
\begin{equation}\label{eq: split extension}
1 \ensuremath{\rightarrow} Q \ensuremath{\rightarrow} K_Q(d) \ensuremath{\rightarrow} \langle X \rangle \ensuremath{\rightarrow} 1\; .
\end{equation}
We will therefore often write $S_\xi X^b\in K_Q(d)$ as $(\xi,b)$, where $b\in\mathbb Z_d$ and $S_\xi\in Q$. By Eq. (\ref{eq-action}), the group action of $\mathbb Z_d$ on $Q$ is given by
\[S_{b\cdot\xi}|q\rangle:=X^b\cdot S_\xi|q\rangle=X^bS_\xi X^{-b}|q\rangle=X^bS_\xi |q-b\rangle=X^b\xi(q-b)|q-b\rangle=\xi(q-b)|q\rangle\; ,\]
that is $(b\cdot\xi)(q)=\xi(q-b)$ for all $q\in\mathbb Z_d$. The group operation of $K_Q(d)$ is given by
\[(\xi,b)(\xi',b')=(\xi b\cdot \xi',b+b')\; ,\]
and the inverse is given by
\[(\xi,b)^{-1}=((-b)\cdot \xi^{-1},-b)\; ,\]
where $(\xi,b),(\xi',b')\in K_Q(d)$.
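As a quick consistency check of this notation, the stated inverse indeed satisfies
\begin{equation*}
(\xi,b)\,\big((-b)\cdot \xi^{-1},-b\big)=\big(\xi\cdot b\cdot\big((-b)\cdot\xi^{-1}\big),\,b-b\big)=(\xi\xi^{-1},0)=(1,0)\; ,
\end{equation*}
since $b\cdot\big((-b)\cdot\xi^{-1}\big)(q) = \xi^{-1}\big((q-b)+b\big) = \xi^{-1}(q)$.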
\begin{lemma}{\label{lem-for}}
Let $(\xi,b),(\chi,0)\in K_Q(d)$ and let $n$ be a positive integer. Then we have
\begin{enumerate}[(1)]
\item $(\xi,b)^n=\left(\prod_{i=0}^{n-1}(ib)\cdot\xi \, , \, nb\right)$.
\item $(\xi,b)^n(\chi,0)=((nb)\cdot\chi,0)(\xi,b)^n$. Thus we also have
$(\chi,0)(\xi,b)^n=(\xi,b)^n((-nb)\cdot\chi,0)$.
\item $\big((\xi,b)(\chi,0)\big)^n=\big(\prod_{i=1}^n(ib)\cdot\chi,0\big)(\xi,b)^n$.
\end{enumerate}
\end{lemma}
\begin{proof}
The proofs of all three statements are straightforward; see App.~\ref{pf useful formula} for details.
\end{proof}
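The identities of Lm.~\ref{lem-for} can also be verified numerically by representing $(\xi,b)$ as the matrix $S_\xi X^b$. The following Python sketch is our own check (the random phase functions are chosen so that $S_\xi$ lies in $T_{(p^m)}$); it tests identities (1)--(3) for $p=5$ and $m=2$:
\begin{verbatim}
import numpy as np

p, m = 5, 2
rng = np.random.default_rng(0)
Xs = np.roll(np.eye(p), 1, axis=0)                # X|q> = |q+1>

def S(xi):                                        # diagonal phase gate S_xi
    return np.diag(xi)

def shift(xi, b):                                 # (b . xi)(q) = xi(q - b)
    return np.roll(xi, b)

def random_xi():
    # Random p^m-torsion phase function with prod_q xi(q) = 1, so S_xi lies in Q.
    g = rng.integers(0, p**m, size=p)
    g[-1] = (-g[:-1].sum()) % (p**m)
    return np.exp(2j * np.pi * g / p**m)

xi, chi, b = random_xi(), random_xi(), 3
A = S(xi) @ np.linalg.matrix_power(Xs, b)

for n in range(1, 2 * p):
    An = np.linalg.matrix_power(A, n)
    acc = np.ones(p, dtype=complex)
    for i in range(n):
        acc *= shift(xi, i * b)
    # (1): (xi,b)^n = (prod_{i<n} (ib).xi , nb)
    assert np.allclose(An, S(acc) @ np.linalg.matrix_power(Xs, (n * b) % p))
    # (2): (xi,b)^n (chi,0) = ((nb).chi, 0) (xi,b)^n
    assert np.allclose(An @ S(chi), S(shift(chi, n * b)) @ An)
    # (3): ((xi,b)(chi,0))^n = (prod_{i=1..n} (ib).chi, 0) (xi,b)^n
    acc3 = np.ones(p, dtype=complex)
    for i in range(1, n + 1):
        acc3 *= shift(chi, i * b)
    assert np.allclose(np.linalg.matrix_power(A @ S(chi), n), S(acc3) @ An)
print("Lemma identities verified numerically")
\end{verbatim}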
We are interested in quantum solutions to LCS associated to deterministic, non-adaptive $ld$-MBQC; having defined a group from local measurement operators, we next analyse its abelian subgroups.
\subsection{Abelian subgroups of $K^{\otimes n}_Q(p)$}
From now on, we fix $p$ to be an odd prime. In this section we will describe ($p$-torsion) abelian subgroups of $K^{\otimes n}_Q(p)$ where $Q=T_{(p^m)}$.
For simplicity, we will use the following notation:
$$
K=K_Q(p)\;\;\text{ and }\;\; K^{\otimes n}= K^{\otimes n}_Q(p)\; .
$$
We begin with a description of $p$-torsion elements in $K$.
\begin{lemma}\label{lem-torsion-K}
An element $M\in K$ is $p$-torsion, i.e. $M^p=\mathbbm{1}$, if and only if either
\begin{itemize}
\item $M=S_\xi$ for some $S_\xi\in T_{(p)}$, or
\item $M=S_\xi X^b$ with $b\neq 0$.
\end{itemize}
\end{lemma}
\begin{proof}
Follows from Lm. \ref{lem-for}(1): for $b\neq 0$ we get $(\xi,b)^p=\big(\prod_{i=0}^{p-1}(ib)\cdot\xi,0\big)$, and since $ib$ runs over all of $\mathbb Z_p$, $\big(\prod_{i=0}^{p-1}(ib)\cdot\xi\big)(q)=\prod_{q'\in\mathbb Z_p}\xi(q')=\det(S_\xi)=1$ by our assumption $S_\xi \in T$.
\end{proof}
Next, we describe commuting pairs of elements in $K$ up to a phase.
\begin{lemma}\label{lm: commutation relations gen HW group up to phase}
Let $M=S_\xi X^b$ and $M'=S_{\xi'}X^{b'}$ be elements in $K$ such that $[M,M']=\omega^c \mathbbm{1}$ for some $c\in\mathbb Z_p$.
Then one of the following cases holds:
\begin{enumerate}[(1)]
\item $b=b'=0$ and $M,M'\in Q$. In particular, $c=0$.
\item $b\neq 0$ and $b'=0$ and $M'\in T_{(p)}$. In addition, if $c=0$, then $M'\in Z(K)$.
\item $b,b'\neq 0$ and there exist $a,y\in\mathbb Z_p$ and $S_\chi \in T_{(p)}$ such that $\omega^a M'=(M S_\chi)^y$. Here $\chi(q)=\omega^{c'q}$ where $b'c'=-c$ and $yb=b'$. In addition, if $c=0$, then $\omega^a M'=M^y$.
\end{enumerate}
\end{lemma}
\begin{proof} We will use the alternative notation $(\xi,b)=S_\xi X^b$.
First, we calculate the commutator:
\begin{equation}{\label{eqq}}
\begin{aligned}
[M,M']&=(\xi,b)(\xi',b')(\xi,b)^{-1}(\xi',b')^{-1}\\
&=\left(\xi b\cdot\xi',b+b'\right) \,\, \left((-b)\cdot\xi^{-1},-b\right)\left((-b')\cdot\xi'^{-1},-b'\right)\\
&=\left(\xi b\cdot\xi',b+b'\right) \left( (-b)\cdot\xi^{-1}(-b-b')\cdot\xi'^{-1},-b-b'\right)\\
&=\left(\xi b\cdot\xi'(b+b'-b)\cdot\xi^{-1}(b+b'-b-b')\cdot\xi'^{-1},b+b'-b-b'\right)\\
&=\left(\xi b\cdot\xi' b'\cdot\xi^{-1}\xi'^{-1},0\right)\; .
\end{aligned}
\end{equation}
Thus we have $[M,M']=(\hat{\xi},0)$, where $\hat{\xi}(q)=\xi(q)\xi'(q-b)\xi^{-1}(q-b')\xi'^{-1}(q)$ for all $q \in \mathbb Z_p$.
Case 1: We assume $b=b'=0$. In this case, we have $M,M'\in Q$. This group is abelian, thus $c=0$.
Case 2: We assume $b\neq 0$ and $b'=0$. In this case, we have
\begin{align*}
[M,M']=\omega^c \mathbbm{1} &\iff \xi(q)\xi'(q-b)\xi^{-1}(q)\xi'^{-1}(q)=\omega^c\\
&\iff \xi'(q-b)=\xi'(q)\omega^c\\
&\iff \xi'(q)=\xi'(q+b)\omega^c\; .
\end{align*}
Let $m\in\mathbb Z_p$ such that $mb=-1$. Thus we have $q+mqb=0$. By the equation above, we have
\[\xi'(q)=\xi'(q+b)\omega^c=\xi'(q+2b)\omega^{2c}=\cdots=\xi'(q+mqb)\omega^{mqc}=\xi'(0)\omega^{mqc}\; .\]
By definition (cf. Eq.~(\ref{eq: maximal torus SU(d)})), we have $\prod_{q=0}^{p-1}\xi'(q)=1$. Thus we obtain
\[1=\prod_{q=0}^{p-1}\xi'(q)=\prod_{q=0}^{p-1}\xi'(0)\omega^{mqc}=(\xi'(0))^p\omega^{mc\sum_{q=0}^{p-1}q}=(\xi'(0))^p\; ,\]
where we used that $p-1$ is even ($p$ is an odd prime), hence $\sum_{q=0}^{p-1}q=\frac{p(p-1)}{2}$ is a multiple of $p$ and thus $\omega^{mc\sum_{q=0}^{p-1}q}=1$.
Finally, we set $\xi'(0)=\omega^a$ for some $a\in\mathbb Z_p$ and obtain
\[\xi'(q)=\xi'(0)\omega^{mqc}=\omega^a\omega^{mqc}=\omega^{mcq+a}\; ,\]
therefore $M'=(\xi',b')\in T_{(p)}$. In particular, if $c=0$ we have $\xi'(q)=\omega^a$, and as a result $M'\in Z(K)$.
Case 3: Assume that $b,b'\neq 0$. We pick $c'\in \mathbb Z_p$ and $y\in\mathbb Z_p$ such that $b'c'=-c$ and $yb=b'$, and we define $\chi(q)=\omega^{c'q}$. By Lm. \ref{lem-for}(1), we have
\begin{equation}{\label{eq3}}
(MS_\chi)^y=\left((\xi,b)(\chi,0)\right)^y=(\xi b\cdot\chi,b)^y=(\bar{\xi},yb)=(\bar{\xi},b')\; ,
\end{equation}
where $\bar{\xi}=\left(\prod_{i=0}^{y-1}(ib)\cdot \xi\right)\left(\prod_{i=0}^{y-1}(ib+b)\cdot\chi\right)$. Since $[M,M']=\omega^c \mathbbm{1}$, by Eq. (\ref{eqq}), we have
\begin{equation}{\label{eq2}}
\xi(q)\xi'(q-b)\xi^{-1}(q-b')\xi'^{-1}(q)=\omega^c\iff \frac{\xi(q)}{\xi(q-b')}=\frac{\xi'(q)}{\xi'(q-b)}\omega^c\; .
\end{equation}
Then we calculate
\begin{align*}
\frac{\bar{\xi}(q)}{\bar{\xi}(q-b)}=\frac{\bar{\xi}}{b\cdot\bar{\xi}}(q)&=\frac{\left(\prod_{i=0}^{y-1}(ib)\cdot\xi\right)\left(\prod_{i=0}^{y-1}(ib+b)\cdot\chi\right)}{\left(\prod_{i=0}^{y-1}(ib+b)\cdot\xi\right)\left(\prod_{i=0}^{y-1}(ib+b+b)\cdot\chi\right)}(q)\\
&=\frac{\xi\left(\prod_{i=1}^{y-1}(ib)\cdot\xi\right)b\cdot\chi\left(\prod_{i=1}^{y-1}(ib+b)\cdot\chi\right)}{\left(\prod_{i=1}^{y}(ib)\cdot\xi\right)\left(\prod_{i=1}^{y}(ib+b)\cdot\chi\right)}(q)\\
&=\left(\frac{\xi}{(yb)\cdot\xi}\right)\frac{b\cdot\chi}{(yb+b)\cdot\chi}(q)\\
&=\left(\frac{\xi(q)}{\xi(q-b')}\right)\frac{\omega^{(q-b)c'}}{\omega^{(q-b'-b)c'}}\,\,\,\,\,\text{(by definition of $\chi$ and $yb=b'$)}\\
&=\frac{\xi'(q)}{\xi'(q-b)}\omega^c\omega^{b'c'}\,\,\,\,\,\,\text{(by Eq. (\ref{eq2})})\\
&=\frac{\xi'(q)}{\xi'(q-b)}\; .
\end{align*}
By rearranging this equation, we get
\[\frac{\bar{\xi}(q)}{\xi'(q)}=\frac{\bar{\xi}(q-b)}{\xi'(q-b)}\; ,\]
for all $q\in \mathbb Z_p$. Thus we conclude that there is a constant $x$ such that for all $q\in\mathbb Z_p$, we have $\bar{\xi}(q)=x\xi'(q)$. By definition, we have $\prod_{q=0}^{p-1}\bar{\xi}(q)=\prod_{q=0}^{p-1}\xi'(q)=1.$
It follows that
\[1=\prod_{q=0}^{p-1}\bar{\xi}(q)=\prod_{q=0}^{p-1}x\xi'(q)=x^p\prod_{q=0}^{p-1}\xi'(q)=x^p\; .\]
Thus $x=\omega^a$ for some $a\in \mathbb Z_p$ and we have $\bar{\xi}(q)=\omega^a\xi'(q)$ for some $a\in\mathbb Z_p$. Combining $\bar{\xi}(q)=\omega^a\xi'(q)$ with Eq. (\ref{eq3}), we obtain
\[(MS_\chi)^y=(\bar{\xi},b')=(\omega^a\xi',b')=(\omega^a,0)(\xi',b')=\omega^aM'\; .\]
In particular, if $c=0$, then $\chi(q)=1$ and $\omega^aM'=M^y$.
\end{proof}
These two results combined give us the description of $p$-torsion abelian subgroups of $K$.
\begin{corollary}\label{cor: abelian subgroups in K_Q(p)}
Maximal $p$-torsion abelian subgroups of $K$ fall into two classes:
\begin{itemize}
\item the subgroup $T_{(p)}$, and
\item $\Span{\omega\mathbbm{1},S_\xi X}$ where $S_\xi\in Q=T_{(p^m)}$.
\end{itemize}
Any two distinct maximal $p$-torsion abelian subgroups intersect at the center $Z(K)=\Span{\omega \mathbbm{1}}$.
\end{corollary}
\begin{proof}
This follows immediately from Lm. \ref{lm: commutation relations gen HW group up to phase} with $c = 0$ and Lm. \ref{lem-torsion-K}.
\end{proof}
We reformulate Lm. \ref{lm: commutation relations gen HW group up to phase} in a way that will be useful for computing the commutator in terms of a symplectic form.
\begin{corollary}\label{cor:commutator-K}
Let $M=S_\xi X^b$ and $M'=S_{\xi'}X^{b'}$ be elements in $K$ such that $[M,M']=\omega^c \mathbbm{1}$ for some $c\in\mathbb Z_p$.
Then one of the following cases holds:
\begin{enumerate}[(1)]
\item $b=b'=0$, in which case $c=0$.
\item $b\neq 0$, in which case $M'=S_{\chi'}M^{b'/b}$ where $\chi'(q)=\omega^{\alpha'_0+\alpha'_1q}$ for some $\alpha_0',\alpha_1'\in \mathbb Z_p$ and $c=-b\alpha'_1$.
\item $b'\neq 0$, in which case $M=S_{\chi}{M'}^{b/b'}$ where $\chi(q)=\omega^{\alpha_0+\alpha_1q}$ for some $\alpha_0,\alpha_1\in \mathbb Z_p$ and $c=b'\alpha_1$.
\end{enumerate}
\end{corollary}
\begin{proof}
We will explain the case where at least one of $b$ or $b'$ is nonzero. Then we are either in case (2) or (3) of Lm. \ref{lm: commutation relations gen HW group up to phase}. Assume we are in case (3). We can write
$
M' = S_{\chi'} M^{b'/b}
$
for some $S_{\chi'}\in T_{(p)}$ using part (2) and (3) of Lm. \ref{lm: commutation relations gen HW group up to phase}. Using Eq. (\ref{eqq}) we can compute that $c=-b\alpha_1'$. Note that taking $b'=0$ we see that Lm. \ref{lm: commutation relations gen HW group up to phase} part (2) is also covered by this case. Case (3) can be dealt with in a similar way.
\end{proof}
Next we move on to analyzing $K^{\otimes n}$.
Using Thm. \ref{thm-poly} in the Appendix we can express $\xi$ as follows:
\begin{equation}\label{eq:xi}
\xi(q) = \exp\left( \sum_{j=1}^m \frac{2\pi i}{p^j} f_j(q) \right)\; ,
\end{equation}
where each $f_j$ is a polynomial of the form
\begin{equation}\label{eq: polynomial}
f_j(q) = \sum_{a=0}^{p-1} \vartheta_{j,a} q^a,\;\;\; \vartheta_{j,a}\in \mathbb Z_p\; .
\end{equation}
We need a generalized version of Lm. \ref{lem-torsion-K} to describe $p$-torsion elements in $K^{\otimes n}$.
\begin{lemma}\label{lem:torsion-Kn}
An element $M=M_1\otimes M_2\otimes \cdots\otimes M_n$ in $K^{\otimes n}$ is $p$-torsion if there exists a subset $I\subset \set{1,2,\cdots,n}$ such that the following holds:
\begin{itemize}
\item $M_i\notin Q$, that is, $M_i=S_{\xi_i} X^{b_i}$ with $b_i\neq 0$ for all $i\notin I$.
\item For all $i\in I$ we have $M_i\in Q$ such that $\vartheta_{j,a}^{(i)}=0$ for all $j\geq 2,a\neq 0$ and $\sum_{i\in I} \vartheta_{2,0}^{(i)}=0\mod p$.
\end{itemize}
\end{lemma}
\begin{proof}
It suffices to show that $M_i\in Q$ satisfies $M_i^p = \omega^{c_i}\mathbbm{1}$ for some $c_i \in \mathbb Z_p$ whenever $\vartheta_{j,a}^{(i)}=0$ for all $j\geq 2,a\neq 0$. This follows from Eq. (\ref{eq:xi}).
\end{proof}
To track the commutator in the case of $K^{\otimes n}$ we will need the symplectic form borrowed from the Heisenberg-Weyl group $H^{\otimes n}(\mathbb Z_p)$. For $v=(v_Z,v_X)$ and $v'=(v'_Z,v'_X)$ in $\mathbb Z_p^n\times \mathbb Z_p^n$ the symplectic form is defined by
$$
[v,v'] = v_Z\cdot v'_X - v'_Z\cdot v_X \mod p\; ,
$$
where $\cdot$ denotes the inner product on vectors in $\mathbb Z^n_p$. Using this symplectic form we can describe the commutator of two elements that commute up to a phase.
\begin{corollary}\label{cor:commutator-Kn}
Let $M=M_1\otimes M_2\otimes \cdots \otimes M_n$ and $M'=M'_1\otimes M'_2\otimes \cdots \otimes M'_n$ be elements in $K^{\otimes n}$ such that $[M,M']=\omega^c\mathbbm{1}$. For every $1\leq i\leq n$, write $M_i=(\xi_i,b_i)$, let $\bar b=(b_1,b_2,\cdots,b_n)$ and let $\bar f=(\vartheta_{1,1}^{(1)},\vartheta_{1,1}^{(2)},\cdots, \vartheta_{1,1}^{(n)})$, where $\vartheta_{1,1}^{(i)}$ is the coefficient in Eq.~(\ref{eq: polynomial}) that appears in the expansion of $\xi_i$ in Eq.~(\ref{eq:xi}); similarly for $\bar f'$ and $\bar b'$.
Then we have
$$
[M,M'] = \omega^{[v,v']}\mathbbm{1}\; ,
$$
where $v=(\bar f,\bar b)$ and $v'=(\bar f',\bar b')$.
\end{corollary}
\begin{proof}
Since $[M,M']=\omega^c\mathbbm{1}$ we have $[M_i,M_i'] = \omega^{c_i}\mathbbm{1}$ such that $\sum_{i=1}^n c_i =c \mod p$.
Cor. \ref{cor:commutator-K} implies that $c_i$ is either zero or given by the formula in parts (2) and (3) of this result.
Suppose we are in case (2). Then $M'_i= S_{\chi'} M_i^{b'/b}$ where $\chi'(q) = \omega^{\alpha_0'+\alpha_1' q}$. Using Lm. \ref{lem-for} (1) we can compute the coefficient $\vartheta_{1,1}'^{(i)}$ in $\xi'_i$. It turns out that
$
\vartheta_{1,1}'^{(i)} = \alpha_1' + \frac{b'}{b} \vartheta_{1,1}^{(i)},
$
which gives
$$
[M_i,M'_i]=\omega^{-b\alpha_1'}\mathbbm{1} = \omega^{-b(\vartheta_{1,1}'^{(i)}-\frac{b'}{b} \vartheta_{1,1}^{(i)})}\mathbbm{1}=\omega^{\vartheta_{1,1}^{(i)}b'-\vartheta_{1,1}'^{(i)}b}\mathbbm{1}\; .
$$
Case (3) can be dealt with similarly. Combining $\omega^{c_i}$'s we obtain the formula for the symplectic form.
\end{proof}
For each $M=M_1\otimes \cdots\otimes M_n$ we associate the vector $v=(\bar f,\bar b)$ defined in Cor. \ref{cor:commutator-Kn}. A subspace $W$ in $\mathbb Z_p^n\times \mathbb Z_p^n$ is called isotropic if $[w,w']=0$ for all $w,w'\in W$. Using this definition we observe that
a subgroup $\Span{M^{(1)},M^{(2)}, \cdots,M^{(n)}}$ of $K^{\otimes n}$ is abelian if and only if the subspace $\Span{v^{(1)},v^{(2)},\cdots, v^{(n)} }$ of $\mathbb Z_p^{2n}$, where $v^{(i)}=(\bar f^{(i)},\bar b^{(i)})$, is isotropic.
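As a sanity check of Cor.~\ref{cor:commutator-Kn} (our own numerical sketch, restricted to the simplest case where every local phase is linear, $\xi_i(q)=\omega^{\alpha_i+\beta_i q}$, so that $\vartheta^{(i)}_{1,1}=\beta_i$), the commutator of two such tensor products is indeed the global phase $\omega^{[v,v']}$:
\begin{verbatim}
import numpy as np

p, n = 5, 2
w = np.exp(2j * np.pi / p)
Xs = np.roll(np.eye(p), 1, axis=0)
q = np.arange(p)
rng = np.random.default_rng(1)

def site_op(alpha, beta, b):
    # S_xi X^b with the linear phase xi(q) = w^(alpha + beta*q).
    return np.diag(w ** (alpha + beta * q)) @ np.linalg.matrix_power(Xs, b)

def tensor(ops):
    out = np.eye(1)
    for o in ops:
        out = np.kron(out, o)
    return out

a1, f1, b1 = (rng.integers(0, p, size=n) for _ in range(3))
a2, f2, b2 = (rng.integers(0, p, size=n) for _ in range(3))
M1 = tensor([site_op(a1[i], f1[i], b1[i]) for i in range(n)])
M2 = tensor([site_op(a2[i], f2[i], b2[i]) for i in range(n)])

comm = M1 @ M2 @ np.linalg.inv(M1) @ np.linalg.inv(M2)
symp = (f1 @ b2 - f2 @ b1) % p              # [v,v'] with v = (f-bar, b-bar)
assert np.allclose(comm, w ** symp * np.eye(p**n))
print("commutator equals the symplectic-form phase")
\end{verbatim}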
\subsection{Projecting $p$-torsion abelian subgroups into the Heisenberg-Weyl group}
\label{sec: homomorphism of abelian subgroups of order p}
Let $K^{\otimes n}_{(p)}$ denote the subset of $p$-torsion elements in $K^{\otimes n}$.
In this section, we will construct a map
$$
\phi: K^{\otimes n}_{(p)} \to H^{\otimes n}(\mathbb Z_p)\; ,
$$
that restricts to a group homomorphism on every $p$-torsion abelian subgroup.
Cor. \ref{cor:commutator-Kn} hints at a ``linearization map'' that could potentially serve for this purpose. The map we need is closely related to linearization, yet needs to be slightly adjusted when acting on $p$-torsion elements. We will revisit this point in Remark~\ref{rm: almost linearisation} below, after the definition of the map.
\begin{definition}{\label{def phi map}}
For $\xi$ given as in Eq. (\ref{eq:xi}) we introduce two maps:
$$
R(\xi) = \omega^{\vartheta_{1,0} + \vartheta_{1,1}q}\,\sqrt[p]{\omega}^{\,\vartheta_{2,0}}\;\;\text{ and }\;\; P(\xi) = \omega^{\vartheta_{1,0} + \vartheta_{1,1}q}\; .
$$
Using this notation we define a map
$$
\phi: K_{(p)}^{\otimes n} \to H^{\otimes n}(\mathbb Z_p)
$$
as follows:
$$
\phi(M_1\otimes M_2 \otimes \cdots \otimes M_n) = \phi_1(M_1)\otimes \phi_1(M_2)\otimes \cdots \otimes \phi_1(M_n)\; ,
$$
where $\phi_1(M)$ for $M=(\xi,b)\in K$ is defined by
$$
\phi_1(M) =
\left\lbrace
\begin{array}{ll}
(R(\xi),0) & \text{if $b=0$}\\
(P(\xi),1) & \text{if $b=1$}
\end{array}
\right.\; .
$$
and $\phi_1(M) = \phi_1(M^{b^{-1}})^{b}$ if $1<b\leq p-1$.
\end{definition}
\begin{remk}\label{rm: almost linearisation}
Note the additional factor $\sqrt[p]{\omega}^{\,\vartheta_{2,0}}$ in the definition of $\phi_1(M)$ for $M\in Q$. Recall that for $M^p=\mathbbm{1}$,
\[M=(\omega^{\sum_{a=0}^{p-1}\vartheta_{1,a}q^a}\sqrt[p]{\omega}^{\vartheta_{2,0}} ,0 )\; ,\]
by Lm. \ref{lem:torsion-Kn}. It is easy to see that defining $\phi_1$ via the projection $P$ in both cases, that is for $b=0$ and $b=1$, would not restrict to a group homomorphism as desired.
To see this, let $p=3$ and consider $M_1 \otimes M_2 = S_{\xi_1}\otimes S_{\xi_2}$ where $\xi_1(q)= \sqrt[3]{\omega}$ and $\xi_2(q)=\sqrt[3]{\omega}^2$:
$$
\begin{aligned}
\phi(M_1\otimes M_2)\phi(M_1\otimes M_2) &= (\phi_1(M_1)\otimes \phi_1(M_2)) (\phi_1(M_1)\otimes \phi_1(M_2)) \\
& = \mathbbm{1} \otimes \mathbbm{1}\; ,
\end{aligned}
$$
which is not the same as
$$
\begin{aligned}
\phi((M_1\otimes M_2)(M_1\otimes M_2)) &= \phi( M_1^2 \otimes M_2^2 ) \\
&= \phi( \sqrt[3]{\omega}^2 \mathbbm{1} \otimes \omega \sqrt[3]{\omega} \mathbbm{1} ) \\
&= \phi_1(\sqrt[3]{\omega}^2 \mathbbm{1})\otimes \phi_1(\omega \sqrt[3]{\omega} \mathbbm{1}) \\
&= \omega (\mathbbm{1} \otimes \mathbbm{1})\; .
\end{aligned}
$$
\end{remk}
\begin{lemma}{\label{lem-homoq}}
Let $S_\xi,S_{\xi'}\in Q$ be such that $S_{\xi}^p,S_{\xi'}^p \in Z(K)$. Then we have
$$
\phi(S_\xi S_{\xi'}) = \phi(S_\xi)\phi(S_{\xi'})\; .
$$
\end{lemma}
\begin{proof}
The condition $S_{\xi}^p,S_{\xi'}^p \in Z(K)$ implies that $\vartheta_{j,a}=0$ for $j\geq 2, a\neq 0$; similarly for $\vartheta_{j,a}'$ (according to the proof of Lm. \ref{lem:torsion-Kn}). In other words, we have
\begin{align*}
\xi(q)&=\omega^{\sum_{a=0}^{p-1}\vartheta_{1,a}q^a}\sqrt[p]{\omega}^{\vartheta_{2,0}}\\
\xi'(q)&=\omega^{\sum_{a=0}^{p-1}\vartheta'_{1,a}q^a}\sqrt[p]{\omega}^{\vartheta'_{2,0}}\\
\xi(q)\xi'(q)&=\omega^{\sum_{a=0}^{p-1}(\vartheta_{1,a}+\vartheta'_{1,a})q^a}\sqrt[p]{\omega}^{\vartheta_{2,0}+\vartheta'_{2,0}}.\\
\end{align*}
By definition of $\phi$, we have
\[\phi(S_\xi S_{\xi'})(q)=\omega^{\vartheta_{1,0}+\vartheta'_{1,0}+(\vartheta_{1,1}+\vartheta'_{1,1})q}\sqrt[p]{\omega}^{\vartheta_{2,0}+\vartheta'_{2,0}}\; ,\]
which is the same as
\[\phi(S_\xi)(q)\phi(S_{\xi'})(q)=\omega^{\vartheta_{1,0}+\vartheta_{1,1}q}\sqrt[p]{\omega}^{\vartheta_{2,0}}\omega^{\vartheta'_{1,0}+\vartheta'_{1,1}q}\sqrt[p]{\omega}^{\vartheta'_{2,0}}=\omega^{\vartheta_{1,0}+\vartheta'_{1,0}+(\vartheta_{1,1}+\vartheta'_{1,1})q}\sqrt[p]{\omega}^{\vartheta_{2,0}+\vartheta'_{2,0}}\; .\]
Thus $\phi(S_\xi S_{\xi'}) = \phi(S_\xi)\phi(S_{\xi'})$.
\end{proof}
\begin{lemma}{\label{lem-cen}}
Let $M\in K$ such that $M\not\in Q$ and $S_{\chi}\in T_{(p)}$. Then we have
$$
\phi(S_{\chi}M)=\phi(S_{\chi})\phi(M)=S_{\chi}\phi(M)\; .
$$
\end{lemma}
\begin{proof}
We denote $M=(\xi,b)$ and $S_{\chi}=(\chi,0)$.
Define $\bar{\chi}=\prod_{i=0}^{y-1}(ib)\cdot \chi$ and $\bar{\xi}=\prod_{i=0}^{y-1}(ib)\cdot\xi$, where $y\in\mathbb Z_p$ such that $yb=1$.
Consider the element $(\chi,b)$. By Lm. \ref{lem-for}, we have
\[(\chi,b)=((\chi,b)^y)^b=\left(\prod_{i=0}^{y-1}(ib)\cdot\chi,1\right)^b=\left(\prod_{j=0}^{b-1}j\cdot\bar{\chi},b\right)
\implies\chi=\prod_{j=0}^{b-1}j\cdot\bar{\chi}\; .\]
Next, consider the element $(\xi,b)$, we have
\begin{align*}
&(\xi,b)=((\xi,b)^y)^b=\left(\prod_{i=0}^{y-1}(ib)\cdot\xi,1\right)^b=(\bar{\xi},1)^b\\
\implies & \phi(\xi,b)=(P(\bar{\xi}),1)^b=\left(\prod_{j=0}^{b-1}j\cdot P(\bar{\xi}),b\right)\; .
\end{align*}
For the element $S_{\chi}M$, we have
\[(\chi,0)(\xi,b)=((\chi\xi,b)^y)^b=\left(\left(\prod_{i=0}^{y-1}(ib)\cdot\chi\right)\left(\prod_{i=0}^{y-1}(ib)\cdot\xi\right),1\right)^b=(\bar{\chi}\bar{\xi},1)^b.\]
Notice that since $S_{\bar{\chi}}\in T_{(p)}$, we have $P(\bar{\chi}\bar{\xi})=\bar{\chi}P(\bar{\xi})$. Therefore
\[
\phi((\chi,0)(\xi,b))=(P(\bar{\chi}\bar{\xi}),1)^b=(\bar{\chi}P(\bar{\xi}),1)^b=\left(\left(\prod_{j=0}^{b-1}j\cdot\bar{\chi}\right)\left(\prod_{j=0}^{b-1}j\cdot P(\bar{\xi})\right),b\right)=(\chi,0)\phi(\xi,b) =\phi(\chi,0)\phi(\xi,b)\; .
\]
\end{proof}
\begin{lemma}{\label{lem}}
Let $M=(\xi,b)$ and $M'=(\xi',b')$ be elements of $K$ such that $[M,M']=\omega^c \mathbbm{1}$ for some $c\in\mathbb Z_p$ and $b,b'\neq 0$. Then we have
$$\phi(MM')=\phi(M)\phi(M')\; .$$
\end{lemma}
\begin{proof}
We have $M'=(\xi',b')=(\xi_0',1)^{b'}$ where $\xi'_0=\prod_{i=0}^{z-1}(ib')\cdot\xi'$ and $z\in\mathbb Z_p$ such that $zb'=1$. Thus we have
\[\phi(M')=(P(\xi_0'),1)^{b'}.\]
By Lm. \ref{lm: commutation relations gen HW group up to phase}(3), there exist $a\in\mathbb Z_p$, $y\in \mathbb Z_p$ with $yb'=b$, and $S_\chi=(\chi,0)\in T_{(p)}$ such that $M=S_{\omega^a}(M'S_\chi)^y$. Thus we have
\begin{align*}
M=\omega^a((\xi',b')(\chi,0))^y&=\omega^a\left(\prod_{j=1}^y(jb')\cdot\chi,0\right)(\xi',b')^y\,\,\,\,\,\text{(apply Lm. \ref{lem-for}(3))}\\
&=\omega^a\left(\prod_{j=1}^y(jb')\cdot\chi,0\right)((\xi'_0,1)^{b'})^y\\
&=\omega^a\left(\prod_{j=1}^y(jb')\cdot\chi,0\right)(\xi'_0,1)^b\; .
\end{align*}
By Lm. \ref{lem-cen}, we have
\[\phi(M)=\omega^a\left(\prod_{j=1}^y(jb')\cdot\chi,0\right)\phi(({\xi'}_0,1)^b)=\omega^a\left(\prod_{j=1}^y(jb')\cdot\chi,0\right)(P(\xi'_0),1)^b\; .\]
Next, consider the product of $M$ and $M'$. We have
\[MM'=\omega^a\left(\prod_{j=1}^y(jb')\cdot\chi,0\right)(\xi'_0,1)^b(\xi_0',1)^{b'}
=\omega^a\left(\prod_{j=1}^y(jb')\cdot\chi,0\right)(\xi'_0,1)^{b+b'}.\]
We consider two cases. First, we assume $b+b'$ is a multiple of $p$. In this case, we have $(\xi'_0,1)^{b+b'}=(P(\xi'_0),1)^{b+b'}=\mathbbm{1}$. Thus $MM'\in T_{(p)}$. Hence
\[\phi(MM')=\phi\left(\omega^a\left(\prod_{j=1}^y(jb')\cdot\chi,0\right)\right)=\omega^a\left(\prod_{j=1}^y(jb')\cdot\chi,0\right)\; .\]
Therefore we have
\[\phi(M)\phi(M')=\omega^a\left(\prod_{j=1}^y(jb')\cdot\chi,0\right)(P(\xi'_0),1)^{b+b'}=\omega^a\left(\prod_{j=1}^y(jb')\cdot\chi,0\right)=\phi(MM')\; .\]
Second, we assume that $b+b'$ is not a multiple of $p$. By Lm.~\ref{lem-cen}, we then have
\begin{align*}
\phi(MM')=\omega^a\left(\prod_{j=1}^y(jb')\cdot\chi,0\right)\phi((\xi'_0,1)^{b+b'})&=\omega^a\left(\prod_{j=1}^y(jb')\cdot\chi,0\right)(P({\xi'}_0),1)^b(P({\xi'}_0),1)^{b'}\\
&=\phi(M)\phi(M')\; .
\end{align*}
\end{proof}
\begin{theorem}\label{thm: quasi-local homomorphism in abelian subgroups of order p}
The map $\phi: K^{\otimes n}_{(p)} \to H^{\otimes n}(\mathbb Z_p)$ restricts to a group homomorphism on $p$-torsion abelian subgroups. That is,
$$
\phi(MM') = \phi(M)\phi(M')\; ,
$$
for all $M,M'\in K^{\otimes n }_{(p)}$ such that $[M,M']=\mathbbm{1}$.
\end{theorem}
\begin{proof}
Let $M=M_1\otimes \cdots\otimes M_n$ and $M'=M_1'\otimes\cdots\otimes M'_n$ be elements of $K^{\otimes n}_{(p)}$. Since $M^p=(M')^p=\mathbbm{1}$ and $[M,M']=\mathbbm{1}$, both Lm. \ref{lem:torsion-Kn} and Cor. \ref{cor:commutator-Kn} apply. In particular, we can reduce to the case $n=1$ according to Def.~\ref{def phi map}.
Consequently, there are four cases to consider:
\begin{itemize}
\item Case 1: $M_i,M'_i\in Q$. Follows from Lm. \ref{lem-homoq}.
\item Case 2: $M_i\in T_{(p)}$ and $M'_i\not\in Q$. Follows from Lm. \ref{lem-cen}.
\item Case 3: $M_i\not\in Q$ and $M_i'\in T_{(p)}$. Follows from
$$
\begin{aligned}
\phi(MM') & = \phi(\omega^\alpha M'M)\\
&= \omega^\alpha \phi(M')\phi(M)\\
&=\omega^\alpha \omega^{-\alpha} \phi(M)\phi(M')\\
&= \phi(M)\phi(M')\; ,
\end{aligned}
$$
where in the first line $\omega^\alpha=[M,M']$, in the second line we used Lm. \ref{lem-cen} and in the third line we used $[M,M']=[\phi(M),\phi(M')]$ which is a consequence of Cor. \ref{cor:commutator-Kn}.
\item Case 4: $M_i$ and $M_i'$ are not in $Q$. Follows from Lm. \ref{lem}.
\end{itemize}
\end{proof}
\section{From state-dependent to state-independent contextuality}\label{sec: from noncommutative to commutative LCS}
For convenience, we recall the definition of a solution group associated to a LCS from Def. 1 in \cite{QassimWallman2020}:
\begin{definition}\label{def: solution group}
For a LCS `$Ax=b \mod d$' with $A \in M_{m, n}(\mathbb Z_d)$ and $b \in \mathbb Z^m_d$, the solution group $\Gamma(A,b)$ is the finitely presented group generated by the symbols $\{J, g_1, \cdots, g_n\}$ and the relations
\begin{itemize}
\item[(a)] $g^d_i = e$, $J^d = e$ (`$d$-torsion'),
\item[(b)] $Jg_iJ^{-1}g_i^{-1}=e$ for all $i\in\{1,\cdots,n\}$ and $g_jg_kg_j^{-1}g_k^{-1} = e$ whenever there exists a row $i$ such that $A_{ij} \neq 0$ and $A_{ik} \neq 0$ (`commutativity'),
\item[(c)] $\prod_{j=1}^n g^{A_{ij}}_j = J^{b_i}$ for all $i\in\{1, \cdots ,m\}$ (`constraint satisfaction').
\end{itemize}
\end{definition}
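For orientation, a standard example (added here for illustration; it is not part of Def.~\ref{def: solution group}) is the LCS underlying the Mermin-Peres square of Fig.~\ref{fig: Mermin(Peres) square and star} (a): nine variables $x_1,\cdots,x_9$ over $\mathbb Z_2$, arranged in a $3\times 3$ grid and subject to
$$
x_1+x_2+x_3 = x_4+x_5+x_6 = x_7+x_8+x_9 = x_1+x_4+x_7 = x_2+x_5+x_8 = 0\; , \qquad x_3+x_6+x_9 = 1 \mod 2\; .
$$
Summing all six constraints shows that no classical solution exists (each variable appears twice on the left, while the right-hand sides sum to $1$), whereas the usual assignment of two-qubit Pauli observables to the grid yields a quantum solution.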
We are interested in solutions of LCS in $K^{\otimes n}_Q(p)$ for $p$ odd prime (see Def. \ref{def: local MBQC-group}), the group generated by (local) measurement operators in deterministic, non-adaptive $ld$-MBQC. In fact, operators in $K^{\otimes n}_Q(p)$ are universal for computation in deterministic, non-adaptive $ld$-MBQC \cite{FrembsRobertsCampbellBartlett2022}. From this perspective, $K^{\otimes n}_Q(p)$ is a natural candidate to study solutions to LCS in $p$ odd prime dimension.
Surprisingly, we find that LCS admitting a solution $\Gamma \ensuremath{\rightarrow} K^{\otimes n}_Q(p)$ are classical. To prove this, we will reduce a solution $\Gamma \ensuremath{\rightarrow} K^{\otimes n}_Q(p)$ to a solution $\Gamma \ensuremath{\rightarrow} \mH^{\otimes n}(\mathbb Z_p)$ using the map $\phi$ from the previous section.\\
\begin{theorem}\label{thm: main result - Clifford hierarchy}
Let $\Gamma$ be the solution group of a LCS over $\mathbb Z_p$ where $p$ is an odd prime. Then the LCS admits a quantum solution $\eta: \Gamma \ensuremath{\rightarrow} K^{\otimes n}_Q(p)$ with $T_{(p)}\subset Q\subset T_{(p^m)}$ if and only if it admits a classical solution $\eta: \Gamma \ensuremath{\rightarrow} \mathbb Z_p$.
\end{theorem}
\begin{proof}
A classical solution $x \in \mathbb Z^n_p$ becomes a quantum solution under the identification $M_k = \omega^{x_k}\mathbbm{1} \in Z(K^{\otimes n}_Q(p))$ for all $k\in\{1,\cdots,n\}$. For the converse, assume that $\eta: \Gamma \ensuremath{\rightarrow} K^{\otimes n}_Q(p)$ is a quantum solution of the LCS. We show that $\phi \circ \eta: \Gamma \ensuremath{\rightarrow} \mH^{\otimes n}(\mathbb Z_p)$ defines a solution of the LCS, i.e., that the map $\phi: K^{\otimes n}_Q(p)_{(p)} \ensuremath{\rightarrow} \mH^{\otimes n}(\mathbb Z_p)$ preserves the constraints in $\Gamma$. This follows directly from Thm. \ref{thm: quasi-local homomorphism in abelian subgroups of order p}; more precisely,
\begin{itemize}
\item[(i)] $d$-torsion: clearly, every ($n$-qudit Pauli) operator $P \in \mH^{\otimes n}(\mathbb Z_p)$
has order $p$ (for $p$ odd prime).
In fact, $\phi$ preserves $d$-torsion by Thm. \ref{thm: quasi-local homomorphism in abelian subgroups of order p}: $\phi(M)^p = \phi(M^p) = \phi(\mathbbm{1}) = \mathbbm{1}$ for all $M\in K^{\otimes n}_Q(p)_{(p)}$.
\item[(ii)] commutativity: $\phi$ preserves commutativity by Thm. \ref{thm: quasi-local homomorphism in abelian subgroups of order p}.
\item[(iii)] constraint satisfaction: let $\{M_j\}_{j \in J}$, $M_j \in K^{\otimes n}_Q(p)$ be a set of pairwise commuting operators such that $M_j^p=\mathbbm{1}$ for all $j \in J$ and $\omega^{b_J}\mathbbm{1} = \prod_{j \in J} M_j$. It follows that $\{M_j\}_{j \in J}$ generates an abelian subgroup of order $p$, hence, $\phi$ preserves constraints by Thm. \ref{thm: quasi-local homomorphism in abelian subgroups of order p}:
\[\omega^{b_J}\mathbbm{1}=\phi(\omega^{b_J}\mathbbm{1})=\phi(\prod_{j\in J} M_j)=\prod_{j\in J}\phi(M_j)\; .\]
\end{itemize}
Consequently, $\phi \circ \eta: \Gamma \ensuremath{\rightarrow} \mH^{\otimes n}(\mathbb Z_p)$ also defines a solution of the LCS.
Finally, the existence of a classical solution of the LCS follows from Thm. 2 in \cite{QassimWallman2020}, which defines a map $v: \mH(\mathbb Z_p) \ensuremath{\rightarrow} \mathbb Z_p$---a homomorphism in abelian subgroups. Similar to above, it follows that $v \circ \phi \circ \eta: \Gamma \ensuremath{\rightarrow} \mathbb Z_p$ yields a classical solution of the LCS described by $\Gamma$.
\end{proof}
More generally, the argument applies to any map $\phi:G\to H$ that restricts to a group homomorphism on $p$-torsion abelian subgroups.
\begin{corollary}\label{cor: reduced solutions from quasi-local homos}
Let $\Gamma$ be the solution group of a LCS over $\mathbb Z_p$ for $p$ odd prime, let $\eta: \Gamma \ensuremath{\rightarrow} G$ be a solution and $\phi: G \ensuremath{\rightarrow} H$ be a map that restricts to a group homomorphism on $p$-torsion abelian subgroups of $G$.
Then $\phi \circ \eta: \Gamma \ensuremath{\rightarrow} H$ is a solution to the LCS.
\end{corollary}
\begin{proof}
The constraints of the LCS hold in abelian subgroups of $\Gamma$, hence, are preserved by $\phi$ (cf. Thm. \ref{thm: main result - Clifford hierarchy}).
\end{proof}
\begin{remk}\label{rm: diagonal Clifford hierarchy}
Thm.~\ref{thm: main result - Clifford hierarchy} is a generalisation of Thm.~2 in \cite{QassimWallman2020}. The latter proves that any solution in the Heisenberg-Weyl group $\mc{P}^{\otimes n} \cong \mH^{\otimes n}(\mathbb Z_p) \cong K^{\otimes n}_{T_{(p)}}(p) = (T_{(p)} \rtimes \langle X \rangle)^{\otimes n}$ for $p$ odd prime is classical. Diagonal elements $S = \mathrm{diag}(\xi(0),\cdots,\xi(d-1)) \in \mH(\mathbb Z_p)$ are restricted to the Pontryagin dual $T_{(p)} \cong \hat{\mathbb Z}_p$; equivalently they lie in the first level of the diagonal Clifford hierarchy $\mc{C}^{(1)}_d$ (cf. \cite{CuiGottesmanKrishna2017}). Thm.~\ref{thm: main result - Clifford hierarchy} generalises these to higher levels in the diagonal Clifford hierarchy $\mc{C}^{(k)}_d$.
In particular, we note that Thm.~\ref{thm-poly} (see App. \ref{sec: app K_Q(p)}) closely resembles the classification of the diagonal Clifford hierarchy of Thm.~2 in \cite{CuiGottesmanKrishna2017}, as well as the resource analysis of contextuality in $ld$-MBQC of Thm.~5 in \cite{FrembsRobertsCampbellBartlett2022}.
Finally, we will discuss the even prime case in App. \ref{even prime}.
\end{remk}
\section{Discussion}\label{sec: discussion}
We constructed generalisations of the contextuality proof in Mermin's star in Fig.~\ref{fig: Mermin(Peres) square and star} (b) from qubit to qudit systems in odd prime dimension. Unlike the qubit case, these require non-Pauli operators and are inherently \emph{state-dependent}.
As such, they naturally assume a computational character within the setting of deterministic, non-adaptive measurement-based quantum computation with linear side-processing ($ld$-MBQC) \cite{FrembsRobertsCampbellBartlett2022}.
What is more, Mermin's star and its close cousin the Mermin-Peres square in Fig.~\ref{fig: Mermin(Peres) square and star} (a) also constitute \emph{state-independent} proofs of contextuality. As such, they provide examples of quantum solutions to linear constraint systems (LCS) over the Boolean ring $\mathbb Z_2$. To date, no generalisation of the state-independent contextuality proof in the Mermin-Peres square to qudit systems, equivalently no finite-dimensional quantum solution to a LCS over $\mathbb Z_d$ for $d$ odd, has been found.\footnote{Note that possibly infinite-dimensional solutions to LCS over $\mathbb Z_d$ for arbitrary $d$ exist by \cite{ZhangSlofstra2020}.} It has been shown that no such examples can be constructed within the Heisenberg-Weyl group \cite{QassimWallman2020}. This might not be surprising since non-Pauli operators are already required for state-dependent contextuality in MBQC. In turn, this suggests that examples for state-independent contextuality might sometimes arise from state-dependent contextuality in MBQC with qudits of odd prime local dimension. Surprisingly, this is not the case.
In essence, Thm.~\ref{thm: main result - Clifford hierarchy} thus demonstrates a sharp distinction between state-dependent contextuality (in the form of deterministic, non-adaptive $ld$-MBQC) and state-independent contextuality (in the form of quantum solutions of its naturally associated LCS).\\
\paragraph*{Acknowledgements.} This work was conducted during a visit of the first author at Bilkent University, Ankara. MF was
supported through grant number FQXi-RFP-1807 from the Foundational Questions Institute and Fetzer Franklin Fund, a donor advised fund of Silicon Valley Community Foundation, and ARC Future Fellowship FT180100317. CO and HC are supported by the Air Force Office of Scientific Research under award number FA9550-21-1-0002.
\appendix
\section{Properties of $K_Q(p)$}\label{sec: app K_Q(p)}
\begin{proof}[Proof of Lm. \ref{lem-for}]
{\label{pf useful formula}}
($1$) We do induction on $n$. The statement is trivial for $n=1$. Assume the statement is true when $n=k$ and consider the case $n=k+1$. We have
\begin{align*}
(\xi,b)^{k+1}=(\xi,b)^k(\xi,b)=\left(\prod_{i=0}^{k-1}(ib)\cdot\xi,kb\right)(\xi,b)&=\left((\prod_{i=0}^{k-1}(ib)\cdot\xi)(kb)\cdot\xi \, ,\, (k+1)b \right)\\
&=\left(\prod_{i=0}^k(ib)\cdot\xi \, , \, (k+1)b\right)\; .
\end{align*}
($2$) We first prove the statement for $n=1$. In this case, we have
\begin{equation}{\label{eq-comm}}
(\xi,b)(\chi,0)=(\xi b\cdot \chi,b)=(b\cdot\chi,0)(\xi,b)\; .
\end{equation}
Now, we consider the general case. By part (1), we have
\[(\xi,b)^n(\chi,0)=\left(\prod_{i=0}^{n-1}(ib)\cdot\xi \, , \, nb\right)(\chi,0)\stackrel{\text{Eq. (\ref{eq-comm})}}{=}((nb)\cdot\chi,0)\left(\prod_{i=0}^{n-1}(ib)\cdot\xi \, , \, nb\right)=((nb)\cdot\chi,0)(\xi,b)^n\; .\]
($3$) We proceed by induction on $n$. By part (2), the statement holds for $n=1$. Consider $n=k+1$. We have
\begin{align*}
\big((\xi,b)(\chi,0)\big)^{k+1}=\big((\xi,b)(\chi,0)\big)^{k}(\xi,b)(\chi,0)&=\left(\prod_{i=1}^k(ib)\cdot\chi,0\right)(\xi,b)^k(\xi,b)(\chi,0)\\
&=\left(\prod_{i=1}^k(ib)\cdot\chi,0\right)(\xi,b)^{k+1}(\chi,0)\\
&=\left(\prod_{i=1}^k(ib)\cdot\chi,0\right)((k+1)b\cdot\chi,0)(\xi,b)^{k+1}\, \, \, \, \, \, \text{(by part (2))} \\
&=\left(\prod_{i=1}^{k+1}(ib)\cdot\chi,0\right)(\xi,b)^{k+1}\; .
\end{align*}
\end{proof}
Next, we are going to prove that for every $S_\xi\in T_{(p^m)}$, the function $\xi$ can be expressed uniquely as
\begin{equation*}
\xi(q)=\prod_{k=1}^m \exp\left[\frac{2\pi i}{p^k}\left(\sum_{a=0}^{p-1}\vartheta_{k,a}q^{a}\right)\right]\; ,
\end{equation*}
where $\vartheta_{i,j}\in \mathbb Z_p$. We define the following sets:
\[Q_m^{(p)}=\left\{\begin{pmatrix}x_0 & & \\ & \ddots & \\ & & x_{p-1}\end{pmatrix}\;\middle|\; \forall i\in \{0,\cdots,p-1\},\,x_i=\exp\left(\frac{2\pi i}{p^m}y_i\right) \text{ where $y_i\in\mathbb Z_{p^m}$}\right\}\; ,\]
\[
G_m^{(p)}=\left\{S_\xi=\begin{pmatrix}\xi(0) & & \\ & \ddots & \\ & & \xi(p-1)\end{pmatrix}\;\middle|\; \xi(q)=\prod_{k=1}^m \exp\left[\frac{2\pi i}{p^k}\left(\sum_{a=0}^{p-1}\vartheta_{k,a}q^{a}\right)\right]\text{ where } \vartheta_{k,a}\in\mathbb Z_p\right\}\; .
\]
\begin{remk}
We have the following remarks related to the above definitions:
($i$) Both $Q_m^{(p)}$ and $G_m^{(p)}$ are groups with respect to matrix multiplication.
($ii$) For $i\in\{1,\cdots,p\}$, define a matrix $M_i$ to be as follows
\[(M_i)_{jk}=\begin{cases}
\exp\left(\frac{2\pi i}{p^m}\right) & \text{if $j=k=i$} \\
1 & \text{if $j=k$ and $j,k\neq i$}\\
0 & \text{otherwise}
\end{cases}\; .\]
Then $\{M_1,\cdots,M_p\}$ is a generating set of $Q_m^{(p)}$.
($iii$) $T_{(p^m)}$ is a subgroup of $Q_m^{(p)}$. Explicitly, we have $T_{(p^m)}=\{M\in Q_m^{(p)}\mid \det(M)=1\}$.
\end{remk}
\begin{remk}[Fermat's little theorem]
Let $p$ be a prime. If $a\in\mathbb Z$ and $a$ is not divisible by $p$, then we have
\[a^{p-1}\equiv 1 \mod p\; .\]
\end{remk}
\begin{corollary}{\label{cor-prime}}
Let $p$ be a prime and $q\in\{1,\cdots,p-1\}$. Then there exists $0\neq c\in\mathbb N$ such that $1+(p-1)q^{p-1}=cp$.
\end{corollary}
\begin{proof}
Since $p$ is a prime and $q$ is not divisible by $p$, we have
\begin{align*}
&q^{p-1}\equiv 1 \mod p\\
\implies & (p-1)q^{p-1}\equiv p-1 \mod p\\
\implies & 1+(p-1)q^{p-1}\equiv p\equiv 0 \mod p\; .
\end{align*}
Thus there exists $c\in\mathbb N$ such that $1+(p-1)q^{p-1}=cp$.
\end{proof}
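For instance, with $p=3$ and $q=2$ we get $1+(p-1)q^{p-1}=1+2\cdot 4=9=3\cdot 3$, so $c=3$; similarly, for $p=5$ and $q=2$ we get $1+4\cdot 2^{4}=65=13\cdot 5$.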
\begin{theorem}{\label{thm-poly}}
Let $p$ be a prime. For any $m\in\mathbb N$, we have $Q_m^{(p)}=G_m^{(p)}$.
In particular, for every element $S_\xi\in T_{(p^m)}$, we can express $\xi$ uniquely as
\begin{equation}{\label{lab-xi}}
\xi(q)=\prod_{k=1}^m \exp\left[\frac{2\pi i}{p^k}\left(\sum_{a=0}^{p-1}\vartheta_{k,a}q^{a}\right)\right]\; ,
\end{equation}
where $\vartheta_{k,a}\in\mathbb Z_p$ (and $\prod_{q=0}^{p-1} \xi(q)=1$).
\end{theorem}
\begin{proof}
This result can be obtained from Thm.~2 in \cite{CuiGottesmanKrishna2017}. Here, we provide an independent proof.
It is clear that $G_m^{(p)}\subseteq Q_m^{(p)}$. We are left to show that $Q_m^{(p)}\subseteq G_m^{(p)}$. We claim that there exists $S_{\xi_0}\in G_m^{(p)}$ such that $S_{\xi_0}=\mathrm{diag}\left(\exp(\frac{2\pi i}{p^{m}}),1,\cdots,1\right)$. Then for $b\in\{1,\cdots,p-1\}$, we define $S_{\xi_b}\in G_m^{(p)}$ by $\xi_b(q)=\xi_0(q-b)$. Thus we have $S_{\xi_b}=\mathrm{diag}(1,\cdots,1,\exp(\frac{2\pi i}{p^m}),1,\cdots,1)$ where $\exp(\frac{2\pi i}{p^m})$ appears in the $(b+1)$-th position. Observe that any $M\in Q_m^{(p)}$ can be expressed as
\[M=S_{\xi_0}^{k_0}S_{\xi_1}^{k_1}\cdots S_{\xi_{p-1}}^{k_{p-1}}\; ,\]
where $k_0,\cdots,k_{p-1}\in \mathbb Z_{p^m}$. Therefore we can define $S_{\bar{\xi}}\in G_m^{(p)}$ where $\bar{\xi}=\prod_{b=0}^{p-1}\xi_b^{k_b}$ such that $S_{\bar{\xi}}=M$. Thus we can conclude that for any $M\in Q_m^{(p)}$, there exists $S_\xi\in G_m^{(p)}$ such that $S_\xi=M$, hence, $Q_m^{(p)}\subseteq G_m^{(p)}$.
Next we prove our claim. We proceed by induction on $m$.
Consider the case $m=1$, and define
\[\xi(q)=\exp\left(\frac{2\pi i}{p}(1+(p-1)q^{p-1})\right)\; .\]
Then for $q=0$, we have $\xi(0)=\exp(\frac{2\pi i}{p})$. For $q\in\{1,\cdots,p-1\}$, by Cor. \ref{cor-prime}, there exists $c_q\in\mathbb N$ such that
\begin{equation}\label{eq: cor 5}
1+(p-1)q^{p-1}=c_qp\; .
\end{equation}
Thus for $q\neq 0$, we have
\[\xi(q)=\exp\left(\frac{2\pi i}{p}(1+(p-1)q^{p-1})\right)=\exp(2\pi i c_q)=1\; .\]
Assume the statement is true for $m=n$, and consider the case $m=n+1$. We want to find $S_\xi\in G_{n+1}^{(p)}$ such that $S_\xi=\mathrm{diag}\left(\exp(\frac{2\pi i}{p^{n+1}}),1,\cdots,1\right)$. Let $M\in Q_n^{(p)}$ be given as
\[M=\mathrm{diag}\left(1,\exp\left(-\frac{2\pi i}{p^n}(c_1)\right),\exp\left(-\frac{2\pi i}{p^n}(c_2)\right),\cdots,\exp\left(-\frac{2\pi i}{p^n}(c_{p-1})\right)\right)\; .\]
By induction hypothesis, there exists $S_\theta\in G_n^{(p)}$ where
\[\theta(q)=\prod_{k=1}^n \exp\left[\frac{2\pi i}{p^k}\left(\sum_{a=0}^{p-1}c_{k,a}q^{a}\right)\right]\; ,\]
such that
\[S_\theta=\mathrm{diag}\left(1,\exp\left(-\frac{2\pi i}{p^n}(c_1)\right),\exp\left(-\frac{2\pi i}{p^n}(c_2)\right),\cdots,\exp\left(-\frac{2\pi i}{p^n}(c_{p-1})\right)\right)\; .\]
Next, we define $S_\xi\in G_{n+1}^{(p)}$ where
\[\xi(q)=\theta(q)\exp\left(\frac{2\pi i}{p^{n+1}}(1+(p-1)q^{p-1})\right)\; .\]
For $q=0$, we have $\xi(0)=\theta(0)\exp(\frac{2\pi i}{p^{n+1}})=\exp(\frac{2\pi i}{p^{n+1}})$. For $q\in\{1,\cdots,p-1\}$, we have
\[\xi(q)=\theta(q)\exp\left(\frac{2\pi i}{p^{n+1}}(1+(p-1)q^{p-1})\right)=\exp\left(-\frac{2\pi i}{p^n}c_q\right)\exp\left(\frac{2\pi i}{p^{n+1}}(c_qp)\right)=1\; ,\]
where we used Eq.~(\ref{eq: cor 5}). Consequently, there exists $S_\xi\in G_{n+1}^{(p)}$ such that $S_\xi=\mathrm{diag}(\exp(\frac{2\pi i}{p^{n+1}}),1,\cdots,1)$.
\end{proof}
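For a concrete illustration of the base case, take $p=3$ and $m=1$: the function $\xi(q)=\exp\left(\frac{2\pi i}{3}(1+2q^{2})\right)$ satisfies $\xi(0)=e^{2\pi i/3}$, $\xi(1)=\exp(2\pi i)=1$ and $\xi(2)=\exp(6\pi i)=1$, so that $S_\xi=\mathrm{diag}(e^{2\pi i/3},1,1)$, as claimed in the proof above.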
\section{Even vs odd prime}{\label{even prime}}
In this section, we will discuss the group structure of $K^{\otimes n}_Q(2)$ where $Q=T_{(2^m)}$ for some $m\geq 1$. We will show that Thm. \ref{thm: main result - Clifford hierarchy} does not work for $p=2$.
\begin{lemma}{\label{lem dihedral}}
Let $Q=T_{(2^m)}$.
Then $K_Q(2)$ is isomorphic to the dihedral group $D_{2^{m+1}}$ of order $2^{m+1}$.
\end{lemma}
\begin{proof}
Observe that $T_{(2^m)}$ can be expressed as
$$
T_{(2^m)}=\left\{\begin{pmatrix}a & 0\\0 & a^{-1}\end{pmatrix}\;\middle|\; a\in \mathbb C,\; a^{2^m}=1\right\}\; ,
$$
which is a cyclic group generated by the matrix $M=\begin{pmatrix}e^{2\pi i/2^m} & 0\\0 & e^{-2\pi i/2^m}\end{pmatrix}$. Therefore
\begin{equation}{\label{c1}}
K_{T_{(2^m)}}(2)=\left\langle M,X\right\rangle\; .
\end{equation}
Notice that $M$ has order $2^m$ and $X$ has order 2. By simple calculation, we also have $XMX=M^{-1}$. By identifying $X$ with reflection and $M$ with rotation in the dihedral group, we have
$$K_{T_{(2^m)}}(2)=\langle M,X\rangle\cong \langle r,s\mid r^{2^m}=s^2=1, srs=r^{-1}\rangle\cong D_{2^{m+1}}\; .$$
\end{proof}
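For completeness, the relation $XMX=M^{-1}$ used in the proof above amounts to the computation
$$
XMX=\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}e^{2\pi i/2^m}&0\\0&e^{-2\pi i/2^m}\end{pmatrix}\begin{pmatrix}0&1\\1&0\end{pmatrix}=\begin{pmatrix}e^{-2\pi i/2^m}&0\\0&e^{2\pi i/2^m}\end{pmatrix}=M^{-1}\; .
$$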
\begin{remk}{\label{remk k4}}
Observe that $K_{T_{(4)}}(2)\cong D_8\cong H(\mathbb Z_2)=\langle X,Z\rangle$\footnote{The isomorphism is given by mapping $X\in K_{T_{(4)}}(2)$ to $X\in H(\mathbb Z_2)$ and mapping $\begin{pmatrix}i&0\\0 & -i\end{pmatrix}\in K_{T_{(4)}}(2)$ to $XZ\in H(\mathbb Z_2)$.} and $K_{T_{(2)}}(2)\cong \mathbb Z_2^2$.
\end{remk}
\begin{lemma}{\label{lem-even prime}}
Assume that $Q=T_{(2^m)}$ where $m\geq 2$. Let $M=\begin{pmatrix}a&0\\0&a^{-1}\end{pmatrix}X^b$ and $M'=\begin{pmatrix}a'&0\\0&{a'}^{-1}\end{pmatrix}X^{b'}$ be elements of $K_Q(2)$ such that $[M,M']=\pm \mathbbm{1}\in Z(K_Q(2))$. Then one of the following cases hold:
\begin{enumerate}[(1)]
\item $b=b'=0$ and $M,M'\in Q$. In particular, $[M,M']=\mathbbm{1}$.
\item $b=1$, $b'=0$ and $M'\in T_{(4)}$. In addition, if $[M,M']=\mathbbm{1}$, then $M'\in Z(K_Q(2))$.
\item $b=0$, $b'=1$ and $M\in T_{(4)}$. In addition, if $[M,M']=\mathbbm{1}$, then $M\in Z(K_Q(2))$.
\item $b=b'=1$ and $M=S_\chi M'$ where $S_\chi\in T_{(4)}$. In addition, if $[M,M']=\mathbbm{1}$, then $M=\pm M'$.
\end{enumerate}
\end{lemma}
\begin{proof}
First, we consider the case where $b=b'=0$. In this case, we have $M,M'\in Q$. Thus $[M,M']=\mathbbm{1}$. Next, by Lm. \ref{lem dihedral}, we can identify $K_Q(2)$ with the dihedral group. In other words, we have $M=r^ks^b$ and $M'=r^{k'}s^{b'}$ for some $k,k'\in\mathbb N$, where $r$ and $s$ are the rotation and reflection generators of the dihedral group, respectively. Then our assumption $[M,M']=\pm\mathbbm{1}$ can be rewritten as $[r^ks^b,r^{k'}s^{b'}]=r^n$ where $r^{2n}=1$.
First, we consider $b=1$ and $b'=0$. In this case, we have $M=r^ks$ and $M'=r^{k'}$. By calculation, we get
$$r^{-n}=r^{n}=[M,M']=(r^ks)(r^{k'})(sr^{-k})(r^{-k'})=r^kr^{-k'}r^{-k}r^{-k'}=r^{-2k'}\;\Rightarrow\; (r^{k'})^4=1\; .$$
Thus we can conclude that $M'^4=\mathbbm{1}$ and therefore $M'\in T_{(4)}$. In addition, if $[M,M']=\mathbbm{1}$, we have $1=r^{2k'}$, then $M'^2=\mathbbm{1}$. It follows that $M'=\pm \mathbbm{1}\in Z(K_Q(2))$. The case $b=0$ and $b'=1$ is similar.
Next, consider $b=b'=1$. In this case, we have $M=r^ks$ and $M'=r^{k'}s$. By calculation, we get
$$r^n=[M,M']=(r^ks)(r^{k'}s)(sr^{-k})(sr^{-k'})=r^kr^{-k'}r^kr^{-k'}=r^{2(k-k')}\; .$$
We can now conclude that $(r^{k-k'})^4=1$ and $r^{k'}=r^{-n}r^{k-k'}r^k$. Observe that $(r^{-n}r^{k-k'})^4=1$. Thus $M=S_\chi M'$ for some $S_\chi\in T_{(4)}$. In addition, if $[M,M']=\mathbbm{1}$, we have $(r^{k-k'})^2=1$ and therefore $M=\pm M'$.
\end{proof}
\begin{corollary}\label{cor:poset-2}
Maximal $2$-torsion abelian subgroups of $K_Q(2)$ fall into two classes:
\begin{itemize}
\item the subgroup $T_{(2)}$, and
\item $\Span{-\mathbbm{1},S_\xi X}$ where $S_\xi\in Q=T_{(2^m)}$.
\end{itemize}
Any two distinct maximal $2$-torsion abelian subgroups intersect at the center $Z(K_Q(2))=\{\pm\mathbbm{1}\}$.
\end{corollary}
\begin{proof}
This follows immediately from Lm. \ref{lem-even prime} with $[M,M']=\mathbbm{1}$.
\end{proof}
\begin{example}\label{ex: p=2}
Define $M=S_{\xi_1} X$ and $N=S_{\xi_2}X$ where
\begin{align*}
\xi_1(q)&=(-1)^{1+q}i\; ,\\
\xi_2(q)&=(-1)^qi\; .
\end{align*}
In other words, we have $M=\begin{pmatrix}0&-i\\ i&0\end{pmatrix}$ and $N=\begin{pmatrix}0&i\\ -i&0\end{pmatrix}$. Notice that $M^2=N^2=\mathbbm{1}$ and $[M,N]=\mathbbm{1}$. Suppose that we define $\phi$ for $p=2$ analogously to the odd case, more precisely using the $P$ map in Def.~\ref{def phi map}. Then we have
\begin{align*}
\phi(M)=\begin{pmatrix}-1 & 0\\0 &1\end{pmatrix}X=\begin{pmatrix}0 & -1\\1 &0\end{pmatrix}\; ,\\
\phi(N)=\begin{pmatrix}1 & 0\\0 &-1\end{pmatrix}X=\begin{pmatrix}0 & 1\\-1 &0\end{pmatrix}\; .
\end{align*}
Thus we have $\phi(M)\phi(N)=\mathbbm{1}$. On the other hand, we have $MN=-\mathbbm{1}$ and therefore
\[\phi(MN)=-\mathbbm{1}\neq \phi(M)\phi(N)\; .\]
Notice that both $\phi(M)$ and $\phi(N)$ are elements of $H(\mathbb Z_2)$ rather than $K_{T_{(2)}}(2)$.
\end{example}
\begin{remk}
Thm. \ref{thm: main result - Clifford hierarchy} does not work for $p=2$. Note first that unlike the odd prime case (cf. Rmk.~\ref{rm: diagonal Clifford hierarchy}), for $p=2$ we have $\mc{P}_2 \not\cong H(\mathbb Z_2)
\not\cong K_{T_{(2)}}(2) = T_{(2)} \rtimes \langle X\rangle = \langle -\mathbbm{1},X\rangle$.
Now, as defined in Def. \ref{def phi map}, $\phi$ maps into $H^{\otimes n}(\mathbb Z_2)$ rather than $\langle -\mathbbm{1},X\rangle^{\otimes n}$, yet it is easy to see that $\phi: K^{\otimes n}_{T_{(2)}} \ensuremath{\rightarrow} H^{\otimes n}(\mathbb Z_2)$ is not a homomorphism when restricted to $2$-torsion abelian subgroups (see Ex.~\ref{ex: p=2} above).
Moreover, there is no way to adapt Def. \ref{def phi map} in such a way that $\phi$ maps $K_Q^{\otimes n}(2)$ into $\langle -\mathbbm{1},X\rangle^{\otimes n}$ for $n>1$. If that were the case, analogous arguments to those in Thm.~\ref{thm: main result - Clifford hierarchy} would imply that $K^{\otimes n}_Q$ is noncontextual, yet $K^{\otimes n}_Q$ is contextual. To see this, we remark that the quantum solution corresponding to the LCS given by the Mermin-Peres square already arises for $H(\mathbb Z_2)^{\otimes 2}$.\footnote{In fact, it is sufficient to note that $Y \otimes Y = - ZX \otimes ZX$ in Fig. \ref{fig: Mermin(Peres) square and star} (a).} Consequently, $H(\mathbb Z_2)^{\otimes n}$ is contextual for all $n > 1$. Finally, using the isomorphism
$K_{T_{(4)}}(2) \cong H(\mathbb Z_2)$ (cf. Rmk. \ref{remk k4})
it also follows that $K_{T_{(2^m)}}^{\otimes n}(2)$ is contextual for all $n>1,m>2$. As a final remark note that Corollary \ref{cor:poset-2} implies that $K_Q(2)$ is noncontextual.
\end{remk}
\end{document}
\begin{document}
\begin{abstract}
In the Reed-Frost model, an example of an SIR epidemic model, one can examine a statistic that counts the number of concurrently infected individuals. This statistic can be reformulated as a statistic on the Erd\H{o}s-R\'{e}nyi random graph $G(n,p)$. Within the critical window of Aldous \cite{A97_1} and Martin-L\"{o}f \cite{MartinLof98}, i.e. when $p = p(n) = n^{-1}+\lambda n^{-4/3}$, this statistic converges weakly to a Brownian motion with parabolic drift stopped upon reaching a level. The same statistic exhibits a deterministic scaling limit when $p = (1+\lambda \varepsilon_n)/n$ whenever $\varepsilon_n\to 0$ and $n^{1/3}\varepsilon_n\to\infty$.
\end{abstract}
\keywords{Epidemic models, Reed-Frost model, Erd\H{o}s-R\'{e}nyi random graphs, Lamperti transformation, scaling limits, Brownian motion with parabolic drift}
\subjclass[2010]{60C05, 60F17, 92D30}
\maketitle
\section{Introduction}
In this paper we provide a new relationship between an Erd\H{o}s-R\'{e}nyi random graph $G(n,p)$ when $n\to\infty$ with $p = p(n) = n^{-1}+\lambda n^{-4/3}$ and a Brownian motion with parabolic drift, $\mathbf{X}^\lambda = (\mathbf{X}^\lambda(t);t\ge 0)$, defined by
\begin{equation}\label{eqn:xlamb}
\mathbf{X}^\lambda(t) = B(t)+\lambda t - \frac{1}{2} t^2,
\end{equation} for a standard Brownian motion $B$. The connection between this asymptotic regime and a Brownian motion with parabolic drift dates back to Aldous' work in \cite{A97_1} and the independent work of Martin-L\"{o}f \cite{MartinLof98}. The latter reference relies on the connection between Erd\H{o}s-R\'{e}nyi random graphs and the so-called Reed-Frost model for epidemics. The results presented below have implications for the Reed-Frost model as well.
The Reed-Frost model is an SIR model - that is individuals are either Susceptible to the disease, Infected with the disease, or have Recovered from the disease (sometimes called Removed). At time $t = 0$, there is some number of initially infected individuals $I_0$ in a population of size $n$. Consequently, there are $S_0 = n-I_0$ susceptible individuals. At time $t = 0,1,2,\dotsm$, each of the $I_t$ infected individuals infects each of the $S_t$ susceptible individuals with probability $p$. The susceptible individuals who become infected at time $t$ make up the $I_{t+1}$ infected individuals at time $t+1$. The connection between the Reed-Frost model and an Erd\H{o}s-R\'{e}nyi random graph $G \sim G(n,p)$ is explained in \cite{BM90}. In brief, the initially infected individuals are uniformly selected vertices without replacement. The neighbors of infected individuals at time $t$, who have not already been infected, become the infected individuals at time $t+1$.
The structure of large random graphs has been an object of immense research dating back to the 1960s. One of the simplest models is the Erd\H{o}s-R\'{e}nyi random graph $G(n,p)$ on $n$ vertices where each of the $\displaystyle \binom{n}{2}$ possible edges is independently added with probability $p$. In their original work \cite{ER60}, Erd\H{o}s and R\'{e}nyi show that if $p = p(n) = c/n$ for some constant $c$ then the following phase shift occurs
\begin{enumerate}
\item if $c<1$ the largest component is of order $\Theta(\log n)$;
\item if $c>1$ the largest component is of order $\Theta(n)$ and the second largest component is of order $\Theta(\log n)$;
\item if $c = 1$ then the two largest components are of order $\Theta(n^{2/3})$.
\end{enumerate} This has a corresponding interpretation for the Reed-Frost model: the largest components of the Erd\H{o}s-R\'{e}nyi graph represent the sizes of the largest possible outbreaks in the Reed-Frost model when only a single individual is initially infected.
Much interest has been paid to the phase shift that occurs at and around $c =1$. In the critical window $p(n) = n^{-1}+\lambda n^{-4/3}$ for a real parameter $\lambda$, the sizes of the components of the random graph were established in \cite{A97_1} and are related to the excursion lengths of a Brownian motion with parabolic drift. More formally, let $\mathbf{X}^\lambda = \left(\mathbf{X}^\lambda(t);t\ge0\right)$ be a Brownian motion with parabolic drift defined by \eqref{eqn:xlamb}, and let $\gamma^\lambda(1)\ge \gamma^\lambda(2)\ge\dotsm$ denote the lengths of the excursions of $\mathbf{X}^\lambda$ above its past infimum ordered by decreasing lengths. Then if $\mathscr{C}_n(1),\mathscr{C}_n(2),\dotsm$ are the components of $G(n,n^{-1}+\lambda n^{-4/3})$ ordered by decreasing cardinality there is convergence in distribution
\begin{equation}\label{eqn:compSize}
\left( n^{-2/3} \# \mathscr{C}_n(1), n^{-2/3} \# \mathscr{C}_n(2),\dotsm\right) \Longrightarrow \left(\gamma^\lambda(1),\gamma^\lambda (2),\dotsm \right)
\end{equation} with respect to the $\ell^2$-topology.
The results in this paper are motivated by a question posed by David Aldous to the author during a presentation of the author's results in \cite{Clancy19}. The main results of \cite{Clancy19} relate the scaling limit of two statistics on a random forest model to the integral of an encoding L\'{e}vy process without negative jumps. The connection relies on a breadth-first exploration of the random forest. Aldous \cite{A97_1} used a breadth-first exploration to obtain the relationship between the Erd\H{o}s-R\'{e}nyi random graph $G(n,n^{-1}+\lambda n^{-4/3})$ and the process $\mathbf{X}^\lambda$ in \eqref{eqn:xlamb}. Aldous asked if there was some relationship between analogous statistics on the graph $G(n,n^{-1}+\lambda n^{-4/3})$ and the integral of the Brownian motion with parabolic drift in equation \eqref{eqn:xlamb}. The answer to the question is yes and is provided with Theorem \ref{thm:kconv} and Theorem \ref{thm:kconv_gen} below.
\subsection{Statement of Results}\label{sec:statements}
Fix a real parameter $\lambda$, and define $\mathscr{G}_n = G(n,n^{-1}+\lambda n^{-4/3})$. Fix a $k\le n$ and uniformly choose $k$ vertices without replacement in the Erd\H{o}s-R\'{e}nyi graph $\mathscr{G}_n$, and denote these by $\rho_n(1),\rho_n(2),\dotsm, \rho_n(k)$. Let $\operatorname{{dist}}(-,-)$ denote the graph distance on $\mathscr{G}_n$ with the convention $\operatorname{{dist}}(w,v) = \infty$ if $w$ and $v$ are in distinct connected components.
For each vertex $v\in \mathscr{G}_n$, define the height of a vertex, denoted by ${\operatorname{\mathbf{ht}}}^k_n(v)$, by
\begin{equation*}
{\operatorname{\mathbf{ht}}}^k_n(v) = \min_{j\le k} \operatorname{{dist}}(\rho_n(j),v).
\end{equation*}
We remark that applying a uniformly chosen permutation to the vertex labels in an Erd\H{o}s-R\'{e}nyi graph $G(n,p)$ gives an identically distributed random graph and so we could take the vertices $\{\rho_n(1),\dotsm, \rho_n(k)\}$ to simply be the vertices $\{1,\dotsm,k\}$. With this observation, using $\{\rho_n(1),\dotsm, \rho_n(k)\}$ instead of $\{1,2,\dotsm,k\}$ may seem like an unnatural choice in terms of the Erd\H{o}s-R\'{e}nyi random graph. If we instead think of the corresponding SIR epidemic model -- more specifically the Reed-Frost model -- this choice becomes much more natural. Indeed, these vertices $\rho_n(1),\dotsm, \rho_n(k)$ become the $k$ initially infected individuals in a population of size $n$.
We define the process $Z_n^k = \left( Z_n^k(h);h = 0,1,\dotsm\right)$ by
\begin{equation}\label{eqn:zDiscrete}
Z_n^k(h) = \# \{v\in \mathscr{G}_n: {\operatorname{\mathbf{ht}}}^k_n(v) = h\}.
\end{equation} In words, $Z_n^k(h)$ is the number of vertices at distance exactly $h$ from the $k$ uniformly chosen vertices $\rho_n(1),\dotsm, \rho_n(k)$. In terms of the corresponding SIR model, $Z_n^k(h)$ represents the number of individuals infected at ``time'' $h$ when $k$ individuals are infected at time $0$.
The statistic we examine measures how many vertices in $\mathscr{G}_n$ are at the same distance from the $k$ uniformly chosen vertices. Namely, given a $k\le n$ and a vertex $v\in \mathscr{G}_n$ we define the statistic
\begin{equation*}
\operatorname{\mathbf{csn}}^k_n(v) = \#\{w\in \mathscr{G}_n: {\operatorname{\mathbf{ht}}}_n^k(v) = {\operatorname{\mathbf{ht}}}_n^k(w)\},
\end{equation*} and call this the \textit{cousin statistic}. In a random forest model where a genealogical interpretation is more natural, the statistic was used in \cite{Clancy19} to count the number of ``cousin vertices.'' In the graph context this statistic seems like an unnatural choice. If we instead think of the epidemic model as described in the second paragraph of the introduction, then $\operatorname{\mathbf{csn}}_n^k(v)$ becomes much more natural. The value of $\operatorname{\mathbf{csn}}_n^k(v)$ represents the number of people infected at the same instance that individual $v$ is infected when $k$ individuals are infected at time $0$ and the total population is exactly $n$.
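Note that, directly from the definitions, $\operatorname{\mathbf{csn}}_n^k(v) = Z_n^k\left({\operatorname{\mathbf{ht}}}_n^k(v)\right)$ whenever ${\operatorname{\mathbf{ht}}}_n^k(v)<\infty$: the cousin statistic of a vertex is just the height profile \eqref{eqn:zDiscrete} evaluated at that vertex's height. This elementary observation is what ties the process $Z_n^k$ to the cousin statistic in the results below.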
Before discussing a scaling limit involving the cousin statistic, we introduce a labeling of the vertices
$$
\{v\in \mathscr{G}_n: {\operatorname{\mathbf{ht}}}_n^k(v)<\infty\},
$$ i.e. the vertices connected to one of the randomly chosen vertices $\rho_n(1),\dotsm, \rho_n(k)$. We label these vertices $w_n^k(0),w_n^k(1),\dotsm$ in any way that $j\mapsto {\operatorname{\mathbf{ht}}}_n^k(w_n^k(j))$ is non-decreasing. One such way is by first setting $w_n^k(0) = \rho_n(1)$, $w_n^k(1) = \rho_n(2)$, $\dotsm, w_n^k(k-1) = \rho_n(k)$, and then assigning labels inductively so that the unlabeled neighbors of $w_n^k(i)$ are assigned labels before the unlabeled neighbors of $w_n^k(j)$ for $i<j$. Since $\operatorname{\mathbf{csn}}_n^k(w_n^k(j))$ only depends on the height ${\operatorname{\mathbf{ht}}}_n^k(w_n^k(j))$, the specific ordering of neighbors within the same height is not of much importance. In terms of the epidemic model that we've mentioned several times already, the ordering $w_n^k(0), w_n^k(1),\dotsm$ orders the total number of infected individuals in terms of who got infected first.
We define the cumulative cousin process
\begin{equation} \label{eqn:kDef}
K_n^k(j) = \sum_{i=0}^{j-1} \operatorname{\mathbf{csn}}_n^k(w_n^k(i)).
\end{equation} Eventually there will be no vertex labeled $w_n^k(i)$ in the graph, i.e. we have exhausted all vertices in the connected components containing $\rho_n(1),\dotsm,\rho_n(k)$, at which point we just define $\operatorname{\mathbf{csn}}_n^k(w_n^k(i)) = 0$.
The following theorems describe the scaling limit of the cousin statistic $\operatorname{\mathbf{csn}}$ and the cumulative sum $K_n^k$.
In the regime studied by Aldous \cite{A97_1}:
\begin{thm}\label{thm:kconv} Fix a $\lambda\in \mathbb{R}$ and consider the graph $\mathscr{G}_n = G(n,n^{-1}+\lambda n^{-4/3})$.
Fix an $x>0$ and let $k = k(n,x) = \fl{n^{1/3}x}$. Then the following convergence holds in the Skorohod space $\mathbb{D}(\mathbb{R}_+,\mathbb{R}_+)$:
\begin{equation}\label{eqn:csnconv}
\left(n^{-1/3}\operatorname{\mathbf{csn}}_n^k(w_n^k(\fl{n^{2/3}t}));t\ge 0 \right) \Longrightarrow \left(x + \mathbf{X}^\lambda(t\wedge T_{-x}) ;t \ge 0 \right),
\end{equation} where $\mathbf{X}^\lambda$ is a Brownian motion with parabolic drift in \eqref{eqn:xlamb} and $T_{-x} = \inf\{t: \mathbf{X}^\lambda (t) = -x\}$.
\end{thm}
In a more general view of the critical window, which has been studied in, for example, \cite{DKLP10,DKLP11,Luczak98,RW10}, we have the following theorem
\begin{thm}\label{thm:kconv_gen}
Consider the Erd\H{o}s-R\'{e}nyi random graph $\mathscr{G}_n^\varepsilon:= G(n,(1+\lambda\varepsilon_n)/n)$, where $\varepsilon_n>0$, and $\varepsilon_n\to 0$ but $\varepsilon_n^3 n\to\infty$. Let $k = k(n,x) = \fl{\varepsilon_n^2 n x}$. Then, for this sequence of graphs, the following convergence holds on the Skorohod space
\begin{equation*}
\left(n^{-1}\varepsilon_n^{-2} \operatorname{\mathbf{csn}}_n^k(w_n^k(\fl{n\varepsilon_n t}));t\ge0 \right) \Longrightarrow \left((x+\lambda t - \frac{1}{2}t^2)\vee 0; t\ge0 \right).
\end{equation*}
\end{thm}
The proof of Theorem \ref{thm:kconv} above can be found in Section \ref{sec:klim} and the proof of Theorem \ref{thm:kconv_gen} can be found in Section \ref{sec:theta}. The proof relies heavily on a scaling limit for the process $Z_n^k$ and a time-change argument similar to the Lamperti transform. For the critical window in Theorem \ref{thm:kconv}, the scaling limit is known in the literature for continuous time epidemic models \cite{DL06} and \cite{Simatos15}. See also, \cite[Appendix 2]{vBML80}. We state it as the following lemma.
\begin{lem}[\cite{DL06,Simatos15}] \label{thm:zconv}
Fix an $x>0$ and let $k = k(n) = \fl{n^{1/3}x}$. Then, as $n\to\infty$, the following weak convergence holds in the Skorohod space $\mathbb{D}(\mathbb{R}_+,\mathbb{R}_+)$
\begin{equation*}
\left(n^{-1/3} Z_n^{k(n,x)}(\fl{n^{1/3}t}) ;t \ge 0\right) \Longrightarrow \left(\mathbf{Z}(t);t\ge 0 \right),
\end{equation*} where $\mathbf{Z}$ is the unique strong solution of the following stochastic equation
\begin{equation}\label{eqn:zsde1}
\mathbf{Z}(t) = x + \int_0^t \sqrt{\mathbf{Z}(s)}\,dW(s) + \left( \lambda - \frac{1}{2} \int_0^t \mathbf{Z}(s)\,ds\right)\int_0^t\mathbf{Z}(s)\,ds,
\end{equation} which is absorbed upon hitting zero and $W$ is a standard Brownian motion.
\end{lem}
In Section \ref{sec:z} we provide proofs of the lemmas needed to go from the continuous time statements in \cite{DL06,Simatos15} to the formulation in Lemma \ref{thm:zconv}. The results we prove will be used in Section \ref{sec:theta} to argue a similar scaling result as in Lemma \ref{thm:zconv} in a more general critical window. We do provide a proof of the strong existence and uniqueness of solutions to the stochastic equation \eqref{eqn:zsde1} with Lemma \ref{lem:uniquenessLemma}.
We also argue the following proposition for the number of vertices in the connected subset of the graph connected to one of the $k$ randomly chosen vertices (cf. \cite[Theorem 1]{MartinLof98}).
\begin{prop}\label{prop:components}
Let $k = k(n) = \fl{n \varepsilon_n^2 x}$ where $\varepsilon_n$ satisfies \eqref{eqn:thetan}. Let $A_n^\varepsilon(k)$ denote the number of vertices in $\mathscr{G}_n^\varepsilon = G(n,(1+\lambda \varepsilon_n)/n)$ which are in the same connected component as some vertex in $\{1,2,\dotsm, k\}$. Then if $\varepsilon_n\to 0$ but $n\varepsilon_n^3\to\infty$, for each $\eta>0$,
\begin{equation*}
\mathbb{P}\left(n^{-1/3} \varepsilon_n A_n^\varepsilon(k) > \lambda+ \sqrt{\lambda^2+2x}-\eta \right) \longrightarrow 1,\qquad \text{as }n\to\infty.
\end{equation*}
\end{prop}
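We remark that $\lambda+\sqrt{\lambda^2+2x}$ is the positive root of $x+\lambda t - \frac{1}{2}t^2 = 0$, i.e. the time at which the deterministic limit appearing in Theorem \ref{thm:kconv_gen} is absorbed at zero.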
We remark that the limiting process $\mathbf{Z}$ found in \eqref{eqn:zsde1} is precisely what one should expect from Aldous' convergence of a rescaled breadth-first walk towards \eqref{eqn:xlamb} found in \cite{A97_1} and the results connecting breadth-first walks on forests and height profiles in \cite{CPU13}. Using the time change $u(t)$ satisfying $\int_0^{u(t)}\mathbf{Z}(s)\,ds = t$, one can see that the process $Y(t) = \mathbf{Z}(u(t))$ becomes a Brownian motion with parabolic drift killed upon hitting zero. This is further explained in Lemma \ref{lem:uniquenessLemma}. The connection is a random time-change called the Lamperti transform in the literature on branching processes. This Lamperti transformation has a natural interpretation which relates a breadth-first walk and a corresponding time-change which counts the number of cousin vertices. Moreover, this transformation gives a bijective relationship between a certain class of L\'{e}vy processes and continuous state branching processes. The bijection originated in the work of Lamperti \cite{Lamperti67}, but was proved by Silverstein \cite{Silverstein67}. For a more recent approach see \cite{CLU09}. See \cite{CPU13,CPU17} for generalizations and results involving scaling limits.
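Concretely, writing $\tau(t) = \int_0^t \mathbf{Z}(s)\,ds$, equation \eqref{eqn:zsde1} can be rewritten as
\begin{equation*}
\mathbf{Z}(t) = x + B(\tau(t)) + \lambda \tau(t) - \frac{1}{2}\tau(t)^2 = x + \mathbf{X}^\lambda(\tau(t)),
\end{equation*} where $B(\tau(t)) = \int_0^t \sqrt{\mathbf{Z}(s)}\,dW(s)$ defines a Brownian motion $B$ by the Dambis-Dubins-Schwarz theorem. Inverting $\tau$ by $u$ then gives $\mathbf{Z}(u(t)) = x + \mathbf{X}^\lambda(t)$ up to the absorption time, which is the time-change described above.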
To view the connection with the author's previous work in \cite{Clancy19} we include the following corollary of Theorems \ref{thm:kconv} and \ref{thm:kconv_gen}.
\begin{cor}\label{cor:2} Fix a $\lambda\in \mathbb{R}$.
\begin{enumerate}
\item Let $k = k(n,x) = \fl{n^{1/3}x}$. Let $\mathscr{G}_n = G(n, n^{-1}+\lambda n^{-4/3})$, and let $K_n^k$ be the cumulative cousin process defined by \eqref{eqn:kDef}. Then on the Skorohod space
\begin{equation}\label{eqn:kconv}
\left(n^{-1} K_n^{k(n,x)}(\fl{n^{2/3}t}); t\ge0\right) \Longrightarrow \left(\int_0^{t\wedge T_{-x}} \left(x+ \mathbf{X}^\lambda(s) \right)\,ds; t \ge 0 \right).
\end{equation}
\item Let $\varepsilon_n$ be a sequence of strictly positive numbers such that $\varepsilon_n\to 0,$ but $\varepsilon_n^3 n\to \infty$. Let $k = k(n,x) = \fl{\varepsilon_n^2 nx}$. Let $\mathscr{G}_n^\varepsilon = G(n,(1+\lambda\varepsilon_n)/n)$ and let $K_n^{\varepsilon,k}$ be the cumulative cousin process defined by \eqref{eqn:kDef} for this sequence of graphs. Then on the Skorohod space $\mathbb{D}(\mathbb{R}_+,\mathbb{R}_+)$ we have
\begin{equation*}
\left(\frac{1}{\varepsilon_n^3 n^{2}} K^{\varepsilon,k(n,x)}_n (\fl{n\varepsilon_n t}) ;t\ge 0 \right) \longrightarrow \left( \left(xt + \frac{1}{2}\lambda t^2 - \frac{1}{6}t^3 \right)\vee 0 ;t\ge 0 \right)
\end{equation*} in probability.
\end{enumerate}
\end{cor}
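Note that the cubic appearing in part (2) satisfies the elementary identity $xt + \frac{1}{2}\lambda t^2 - \frac{1}{6}t^3 = \int_0^t \left(x+\lambda s - \frac{1}{2}s^2\right)\,ds$, i.e. it is the integral of the deterministic limit appearing in Theorem \ref{thm:kconv_gen}.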
Let us take some time to discuss the connection between Theorem \ref{thm:kconv} and the results in \cite{Clancy19}. The work in \cite{Clancy19} originated in trying to give a random tree interpretation of various results connected to edge limits for eigenvalues of random matrices \cite{GS18, LS19}. In short, those works give a description of the
random variable
\begin{equation*}
A = \sqrt{12} \left(\int_0^1 r(t)\,dt - \frac{1}{2} \int_0^\infty \left(L_1^v(r)\right)^2\,dv \right),
\end{equation*} where $r = (r(t);t\in[0,1])$ is a reflected Brownian bridge and $L(r) = \left(L_t^v(r);t\in[0,1],v\ge 0\right)$ is its local time (see Chapter VI of \cite{RY99}). The appearance of the $\sqrt{12}$ term is simply a convenient scaling. In \cite{LS19}, the authors compute some moments of $A$ which led them to ``believe that $A$ admits an interesting combinatorial interpretation.'' Such an interpretation was given in \cite{Clancy19} involving comparisons of two statistics on random trees and forests, one of which is the number of ``cousin'' vertices $\operatorname{\mathbf{csn}}(v)$.
\subsection{Overview of the Paper}
In Section \ref{sec:prelims}, we discuss some preliminaries on random graphs and the Reed-Frost model. In Section \ref{sec:labeling}, we discuss in more detail the ordering of vertices discussed briefly prior to Theorem \ref{thm:kconv}.
In Section \ref{sec:z}, we discuss some lemmas on the asymptotics of the Reed-Frost model. This allows us to go from the scaling limits of the continuous time SIR models in \cite{DL06, Simatos15} to the statement of Lemma \ref{thm:genz}. This section is focused on the nearly critical regime $p(n) = n^{-1}+\lambda n^{-4/3}$. In Section \ref{sec:zthm}, we prove the strong existence and uniqueness of solutions to equation \eqref{eqn:zsde1}. We also discuss the time-change discussed in the introduction.
In Section \ref{sec:klim}, we prove Theorem \ref{thm:kconv} using Lemma \ref{thm:zconv}. In Section \ref{sec:ss}, we prove a self-similarity result for the solution of the stochastic differential equation \eqref{eqn:zsde1}. This is analogous to how Aldous \cite{A97_1} described the (time-inhomogeneous) excursion measure of the process $\mathbf{X}^\lambda$ in terms of the It\^{o} excursion measure.
In Section \ref{sec:theta}, we study the more general nearly critical window $(1+\lambda\varepsilon_n)/n$ where $\varepsilon_n\to 0$ and $\varepsilon_n^3 n\to\infty$. In this section we generalize many of the lemmas in Section \ref{sec:z}, in order to prove Theorem \ref{thm:kconv_gen}.
\section{Preliminaries} \label{sec:prelims}
\subsection{Random Graphs}\label{sec:ER}
Recall that the Erd\H{o}s-R\'{e}nyi graph $G(n,p)$ is the graph on $n$ elements where each edge is independently included with probability $p$. The fundamental paper of Erd\H{o}s and R\'{e}nyi \cite{ER60} describes the size of the largest component as $n\to \infty$ and $p = \frac{c}{n}$. As briefly discussed in the introduction, a phase transition occurs at $c = 1$. In the subcritical case ($c<1$) the largest component is of (random) order $\Theta(\log n)$ and in the supercritical case $(c>1)$ the largest component is of order $\Theta(n)$ while in the critical case ($c=1$) the largest two components are of order $\Theta(n^{2/3})$.
Much interest has been paid towards the phase transition which occurs near $c = 1$. Bollob\'{a}s \cite{B84a} showed that if $c = 1+ n^{-1/3}(\log n)^{1/2}$ then the largest component is of order $n^{2/3}(\log n)^{1/2}$. Later {\L}uczak, Pittel and Wierman \cite{LPW94} showed that in the regime $c = 1+\lambda n^{-1/3}$ for some constant $\lambda$, any component of the Erd\H{o}s-R\'{e}nyi graph has at most $\xi_n$ surplus edges, i.e. each component of size $m$ has at most $m-1+\xi_n$ edges and $\xi_n$ is bounded in probability as $n\to\infty$. Prior to the work in \cite{LPW94}, Bollob\'{a}s studied this regime in \cite{B84b}. See also the monograph \cite{B85}.
We now briefly recall some central results for the asymptotics of $\mathscr{G}_n$. Without going into all of the details, Aldous \cite{A97_1} encapsulates information on the size of the components in terms of a random walk $X_n = \left(X_n(j) ;j = 0,1,\dotsm \right)$. Namely, setting $T_n(\ell) = \inf\{j: X_n(j) = -\ell \}$, the sizes of the components of $\mathscr{G}_n$ are recovered by $(T_n(\ell+1) -T_n(\ell))$. Using this relationship, along with the scaling limit
\begin{equation*}
\left(n^{-1/3}X_n(\fl{n^{2/3}t});t\ge0 \right) \Longrightarrow \left( \mathbf{X}^\lambda(t);t\ge 0\right),
\end{equation*} Aldous \cite{A97_1} is able to relate asymptotics of both the number of surplus edges and the size of the components to the excursions above past minima of the process $\mathbf{X}^\lambda$ defined in \eqref{eqn:xlamb}.
Just as there is a theory of continuum limits of random trees (see, e.g. Aldous's work \cite{A90,A91,A93} and the monograph \cite{LD02}) there is a continuum limit of the largest components of $\mathscr{G}_n$. The scaling limit for the largest components was originally described by Addario-Berry, Broutin and Goldschmidt in \cite{ABG12} and additional results about the continuum limit object can be found in their companion paper \cite{ABG10}. Their results have been generalized in several aspects. Within a Brownian setting, Bhamidi et al. \cite{BSW17} provided scaling limits of measured metric spaces for a large class of inhomogeneous graph models. Continuum limits related to excursions of ``L\'{e}vy processes without replacement'' are described in \cite{BHS18}. In an $\alpha$-stable setting, continuum limits are described in the work of Conchon-Kerjan and Goldschmidt \cite{CKG20}, where the limiting objects are related to \textit{tilting} excursions of a spectrally positive $\alpha$-stable process and their corresponding height processes (cf. \cite{LL98a,LL98b}). See also, \cite{GHS18} for more information about the continuum limits in the $\alpha$-stable setting.
\subsection{Epidemic Models}\label{sec:RFER}
The Reed-Frost model of epidemics describes the spread of a disease in a population of $n$ individuals in discrete time. It is described in terms of two processes $I = (I(t); t = 0,1,\dotsm)$ and $S = (S(t); t =0,1,\dotsm)$ where $I$ represents the number of infected individuals and $S$ represents the number of susceptible individuals. At each time $t$, every infected individual has a probability $p$ of coming into contact with each susceptible individual and infecting that individual.
It is further assumed that each infected individual at time $t$ recovers at time $t+1$. While $I$ itself is not Markov since the number of people who can be infected at time $t+1$ depends on the total number of infected individuals by time $t$, the pair $(S,I)$ is. Moreover, it can be easily seen that
\begin{equation*}
\left(I(t+1) \big| I(t) = i, S(t) =s \right) \overset{d}{=} \text{Bin}\left(s, 1-(1-p)^i \right),
\end{equation*} and $S(t+1)$ is obtained by setting $S(t+1) = S(t)-I(t+1)$.
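In particular, $\mathbb{E}\left[I(t+1) \,\big|\, I(t) = i, S(t) = s\right] = s\left(1-(1-p)^i\right) \approx spi$ when $pi$ is small, so that with $s \approx n$ and $p \approx 1/n$ each generation of infecteds has roughly the same expected size as the previous one; this is the usual heuristic for criticality of the Reed-Frost model near $p = 1/n$.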
As described in \cite{BM90}, the Reed-Frost model can be described as exploring an Erd\H{o}s-R\'{e}nyi graph $G(n,p)$. We quote them at length:
\begin{quote}
[O]ne or more initial vertices [of $G(n,p)$] are chosen at random as the $I(0)$ initial infectives, their neighbors become the $I(1)$ infectives at time $1$, and, inductively the $I(t+1)$ infectives at time $t+1$ are taken to be those neighbours of the $I(t)$ infectives at time $t$ which have not previously been infected.
\end{quote}
In \cite{vBML80}, von Bahr and Martin-L\"{o}f give a back-of-the-envelope calculation to show that the Reed-Frost epidemic should have a scaling limit for the process $I$ when $p = (n^{-1} + \lambda n^{-4/3}+o(n^{-4/3}))$ and suitable Lindeberg conditions hold. In Lemma \ref{lem:asympt}, we provide detailed results on the asymptotics in this regime, and generalize this with Lemma \ref{lem:asympt2} to the regime where $p = n^{-1}+\lambda \theta_n n^{-4/3}$ where $\theta_n\to \infty$, but $\theta_n = o(n^{1/3})$.
\subsection{The breadth-first labeling} \label{sec:labeling}
The breadth-first labeling we will use can be described on any graph, so we will describe it on a generic finite graph $G$ with $n$ vertices. See Figure \ref{fig:comp} below as well. The breadth-first ordering in this work is described, in short, as follows
\begin{itemize}
\item Randomly select $k$ distinct vertices in a graph $G$. Call these vertices the roots and label these by $\rho(j)$ for $j = 1,2,\dotsm,k$.
\item Begin the labeling of these roots by $w^k(0) = \rho(1)$, $w^k(1) = \rho(2),\dotsm w^k(k-1) = \rho(k)$,
\item Label the unlabeled vertices neighboring vertex $w^k(0)$ by $w^k(k),w^k(k+1),\dotsm,w^k(\ell-1)$.
\item After exploring the neighbors of vertex $w^k(j-1)$ and using the labels $w^k(0),\dotsm, w^k(m-1)$ (say), label the unlabeled vertices neighboring vertex $w^k(j)$ by $w^k(m),w^k(m+1), \dotsm$. Continue this until all labeled vertices have been explored (which occurs when you explore every vertex connected to $\{\rho(1),\rho(2),\dotsm, \rho(k)\}$).
\end{itemize}
In more detail we label all the roots as in the second bullet point above as $w^k(j)$ for $j = 0,1,\dotsm, k-1$, and define the vertex set $\mathscr{V}^k(1) = \{w^k(0),\dotsm, w^k(k-1)\}.$ Now given a vertex set $\mathscr{V}^k(i) = \{w^k(j),w^k(j+1),\dotsm, w^k(\ell)\}$ we define the vertices $w^k(\ell+1), w^k(\ell+2),\dotsm,w^k(\ell+c)$ as the unlabeled vertices which are a neighbor of $w^k(j)$ (if any), where $c$ represents the total number of such vertices. Then define the vertex set $$\mathscr{V}^k(i+1) = \mathscr{V}^k(i)\setminus\{ w^k(j) \}\cup\{w^k(\ell+1),\dotsm,w^k(\ell+c) \}.$$
When we have a sequence of graphs $(G_n; n = 1,2,\dotsm)$, we include a subscript $n$ for both the roots and the breadth-first labeling. That is we write $\rho_n(j)$ and $w_n^k(j)$.
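The breadth-first labeling and the statistics ${\operatorname{\mathbf{ht}}}_n^k$, $Z_n^k$, $\operatorname{\mathbf{csn}}_n^k$ and $K_n^k$ are straightforward to compute. The following short Python sketch (added for illustration only; it is not part of the original arguments, the function names are ours, and the graph is simply assumed to be given by adjacency lists) implements the exploration from $k$ roots and the resulting statistics, together with a naive sampler of $G(n,p)$.
\begin{verbatim}
from collections import deque
import random

def breadth_first_labeling(adj, roots):
    # adj: dict vertex -> list of neighbours; roots: the k initially infected.
    # Returns the exploration order w(0), w(1), ... and height[v] = min
    # distance from v to a root (only explored vertices appear).
    height = {r: 0 for r in roots}
    order = list(roots)
    queue = deque(roots)
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in height:            # unlabeled neighbour
                height[w] = height[v] + 1
                order.append(w)
                queue.append(w)
    return order, height

def cousin_statistics(order, height):
    # Z[h] = number of explored vertices at height h; csn(v) = Z[height(v)].
    Z = {}
    for v in order:
        Z[height[v]] = Z.get(height[v], 0) + 1
    csn = [Z[height[v]] for v in order]
    K = [0]
    for c in csn:                          # cumulative cousin process
        K.append(K[-1] + c)
    return Z, csn, K

def erdos_renyi(n, p, rng=random):
    # Naive O(n^2) sampler of G(n, p) on vertices 0, ..., n - 1.
    adj = {v: [] for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    return adj

if __name__ == "__main__":
    n, lam, x = 1000, 0.5, 1.0
    p = 1 / n + lam * n ** (-4 / 3)
    k = round(n ** (1 / 3) * x)            # k of order n^{1/3}
    adj = erdos_renyi(n, p)
    order, height = breadth_first_labeling(adj, list(range(k)))
    Z, csn, K = cousin_statistics(order, height)
    print(len(order), max(Z.values()), K[-1])
\end{verbatim}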
\begin{figure}[h!]
\begin{subfigure}[t]{\textwidth}
\begin{tikzpicture}[scale=.7]
\node (parent) {} [grow'=up]
child {
node [circle,scale=.7,draw] (a) {1} edge from parent[draw=none]
child {
node [circle,scale=.7,draw] (b) {\phantom{-}}
}
}
child {
node [circle,scale=.7,draw] (c) {2} edge from parent[draw=none]
}
child {
node [circle,scale=.7, draw] (d) {3} edge from parent[draw=none]
child{
node [circle,scale=.7,draw] (e) {\phantom{-}}
}
child{
node [circle,scale=.7,draw] (f) {\phantom{-}}
child{
node [circle,scale=.7,draw] (g) {\phantom{-}}
}
child{
node [circle,scale=.7,draw] (h) {11}
}
}
}
child {
node [circle,scale=.7,draw] (i) {4} edge from parent[draw=none]
child{
node [circle,scale=.7,draw] (j) {\phantom{-}}
}
}
child {
node [circle,scale=.7,draw] (k) {5} edge from parent[draw=none]
}
child{
node [circle,scale=.7,draw] (l) {6} edge from parent[draw=none]
child{
node [circle,scale=.7,draw] (m) {\phantom{-}}
}
child{
node [circle,scale=.7,draw] (n) {\phantom{-}}
}
}
child{
node [circle,scale=.7,draw] (o) {7} edge from parent[draw=none]
child{
node [circle,scale=.7,draw] (p) {\phantom{-}}
}
}
child{
node[circle,scale=.7,draw] (q) {8} edge from parent[draw=none]
}
child {
node[circle,scale=.7,draw] (r) {9} edge from parent[draw=none]
}
child {
node[circle,scale=.7,draw] (s) {10} edge from parent[draw=none]
child {
node[circle,scale=.7,draw] (t) {\phantom{-}}
child {
node[circle,scale=.7,draw] (u) {\phantom{-}}
}
child {
node[circle,scale=.7,draw] (v) {\phantom{-}}
child {
node[circle,scale=.7,draw] (w) {\phantom{-}}
}
}
}
child {
node[circle,scale=.7,draw] (x) {\phantom{-}}
child {
node[circle,scale=.7,draw] (y) {\phantom{-}}
child {
node (blank2) {} edge from parent[draw=none]
}
child {
node[circle,scale=.7,draw] (z) {\phantom{-}}
}
child {
node[circle,scale=.7,draw] (aa) {\phantom{-}}
child {
node[circle,scale=.7,draw] (ab) {\phantom{-}}
}
}
}
}
child {
node[circle,scale=.7,draw] (ac) {\phantom{-}}
}
}
child {
node[circle,scale=.7,draw] (ad) {12} edge from parent [draw=none]
}
child {
node[circle,scale=.7,draw] (ae) {13} edge from parent [draw=none]
child {
node[circle,scale=.7,draw] (af) {\phantom{-}}
}
}
child {
node[circle,scale=.7,draw] (ag) {14} edge from parent [draw=none]
}
;
\draw[-] (x) -- (u);
\end{tikzpicture}
\subcaption{The example graph used in Aldous \cite[Fig. 1]{A97_1}. The labeled vertices are $\rho_n(1),\dotsm, \rho_n(14)$.}
\end{subfigure}
\newline
\begin{subfigure}[b]{\textwidth}
\begin{tikzpicture}[scale=.7]
\node (parent) {} [grow'=up]
child {
node [circle,scale=.7,draw] (a) {0} edge from parent[draw=none]
child {
node [circle,scale=.7,draw] (b) {11}
}
}
child {
node [circle,scale=.7,draw] (c) {1} edge from parent[draw=none]
}
child {
node [circle,scale=.7, draw] (d) {2} edge from parent[draw=none]
child{
node [circle,scale=.7,draw] (e) {12}
}
child{
node [circle,scale=.7,draw] (f) {13}
child{
node [circle,scale=.7,draw] (g) {21}
}
child [grow=down]{
node [circle,scale=.7,draw] (h) {10}
}
}
}
child {
node [circle,scale=.7,draw] (i) {3} edge from parent[draw=none]
child{
node [circle,scale=.7,draw] (j) {14}
}
}
child {
node [circle,scale=.7,draw] (k) {4} edge from parent[draw=none]
}
child{
node [circle,scale=.7,draw] (l) {5} edge from parent[draw=none]
child{
node [circle,scale=.7,draw] (m) {15}
}
child{
node [circle,scale=.7,draw] (n) {16}
}
}
child{
node [circle,scale=.7,draw] (o) {6} edge from parent[draw=none]
child{
node [circle,scale=.7,draw] (p) {17}
}
}
child{
node[circle,scale=.7,draw] (q) {7} edge from parent[draw=none]
}
child {
node[circle,scale=.7,draw] (r) {8} edge from parent[draw=none]
}
child {
node[circle,scale=.7,draw] (s) {9} edge from parent[draw=none]
child {
node[circle,scale=.7,draw] (t) {18}
child {
node[circle,scale=.7,draw] (u) {22}
}
child {
node[circle,scale=.7,draw] (v) {23}
child {
node[circle,scale=.7,draw] (w) {25}
}
}
}
child {
node[circle,scale=.7,draw] (x) {19}
child {
node[circle,scale=.7,draw] (y) {24}
child {
node (blank2) {} edge from parent[draw=none]
}
child {
node[circle,scale=.7,draw] (z) {26}
}
child {
node[circle,scale=.7,draw] (aa) {27}
child {
node[circle,scale=.7,draw] (ab) {28}
}
}
}
}
child {
node[circle,scale=.7,draw] (ac) {20}
}
}
child {
node[circle,scale=.7,draw] (ad) {\phantom{-}} edge from parent [draw=none]
}
child {
node[circle,scale=.7,draw] (ae) {\phantom{-}} edge from parent [draw=none]
child {
node[circle,scale=.7,draw] (af) {\phantom{-}}
}
}
child {
node[circle,scale=.7,draw] (ag) {\phantom{-}} edge from parent [draw=none]
}
;
\draw[-] (x) -- (u);
\end{tikzpicture}
\subcaption{The breadth-first labeling of the example graph in \cite{A97_1} where $\rho_n(j) = j$ for $j=1,2,\dotsm,11$. In lieu of writing $w_n^{11}(j)$ in each vertex, we just write $j$ instead. }
\end{subfigure}
\caption{Exploration of the graph.} \label{fig:comp}
\end{figure}
\section{Limit of Height Profile}\label{sec:z}
In this section we provide lemmas necessary to go from the convergence in \cite{DL06, Simatos15} of a continuous-time epidemic model to the statement presented in Lemma \ref{thm:zconv}. This is hinted at in \cite[Appendix 2]{vBML80} as well. We fix a $\lambda\in \mathbb{R}$ and let $\mathscr{G}_n = G(n,n^{-1}+\lambda n^{-4/3})$ denote an Erd\H{o}s-R\'{e}nyi random graph, and let $Z_n^k$ be defined by \eqref{eqn:zDiscrete}. For convenience, we let $C_n^k = \left(C_n^k(h); h = 0,1,\dotsm \right)$ be defined by
\begin{equation}\label{eqn:cDiscDef}
C_n^k(h) = \sum_{j=0}^h Z_n^k(j).
\end{equation} In terms of the graph $\mathscr{G}_n$, $C_n^k(h)$ represents the number of vertices within distance $h$ of the $k$ randomly selected vertices $\{\rho_n(1),\dotsm, \rho_n(k)\}$. In terms of the SIR model, $C_n^k(h)$ represents the number of individuals who have contracted the disease at or before ``time'' $h$.
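For instance, if $k=3$ and the first few values of the height profile are $Z_n^3(0)=3$, $Z_n^3(1)=5$ and $Z_n^3(2)=2$, then $C_n^3(2) = 3+5+2 = 10$: ten individuals have contracted the disease at or before time $2$.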
From the correspondence of the Reed-Frost model and the Erd\H{o}s-R\'{e}nyi random graph, we know that $(Z_n^k(h),C_n^k(h))$ is a Markov chain with state space
\begin{equation*}
\mathcal{S} = \{(z,c)\in \mathbb{Z}^2: z,c\ge 0\}
\end{equation*} which is absorbed upon hitting the line $\{(0,c): c \ge 0\}$. Moreover, the conditional distribution of $Z_n^k(h+1)$ given $\left(Z_n^k(h),C_n^k(h)\right)$ is
\begin{equation}\label{eqn:condDist}
\left(Z_n^k(h+1) \bigg| Z_n^k(h) = z, C_n^k(h) = c\right) \overset{d}{=} \left\{
\begin{array}{ll}
\text{Bin}(n-c, q(n,z))&:z>0, c<n\\
0&: \text{else}
\end{array}
\right.,
\end{equation} where $q(n,z)$ is defined by
\begin{equation}\label{eqn:qDef}
q(n,z) = 1- \left(1-n^{-1} - \lambda n^{-4/3} \right)^z.
\end{equation} The joint conditional distribution of $(Z_n^k(h+1),C_n^k(h+1))$ is easily deduced from equations \eqref{eqn:condDist} and \eqref{eqn:cDiscDef}.
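To make the transition mechanism concrete, here is a minimal simulation sketch (our own illustration, assuming $p = n^{-1}+\lambda n^{-4/3}\in[0,1]$; names are not taken from any reference) that samples the chain $(Z_n^k,C_n^k)$ directly from \eqref{eqn:condDist}:
\begin{verbatim}
import numpy as np

def simulate_reed_frost(n, lam, k, rng=None):
    # Sample (Z_n^k(h), C_n^k(h)) until absorption at Z = 0.
    rng = np.random.default_rng() if rng is None else rng
    p = 1.0 / n + lam * n ** (-4.0 / 3.0)
    Z, C = [k], [k]                          # Z(0) = C(0) = k
    while Z[-1] > 0 and C[-1] < n:
        q = 1.0 - (1.0 - p) ** Z[-1]         # q(n, z) from (eqn:qDef)
        z_next = int(rng.binomial(n - C[-1], q))
        Z.append(z_next)
        C.append(C[-1] + z_next)
    return Z, C
\end{verbatim}
The update $C(h+1) = C(h) + Z(h+1)$ is exactly the joint transition deduced from \eqref{eqn:condDist} and \eqref{eqn:cDiscDef}.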
\subsection{Asymptotics for binomial statistics}
We begin by examining the binomial random variables
$$
\beta(n,z,c) \overset{d}{\sim} \text{Bin}(n-c,q(n,z)),
$$
where $q(n,z)$ is defined by \eqref{eqn:qDef}. Examining the convergence in Lemma \ref{thm:zconv}, we'll study various statistics of $\beta(n,z,c)$ as $n\to\infty$ with $z = O(n^{1/3})$ and $c = O(n^{2/3})$.
We define the following statistics
\begin{equation}\label{eqn:betaStats}
\mu(n,z,c) = \mathbb{E}\left[\beta(n,z,c) \right],\quad \sigma^2(n,z,c) = \operatorname{Var}\left[\beta(n,z,c) \right],
\quad \kappa(n,z,c) = \mathbb{E}\left[(\beta(n,z,c) - z)^4 \right].
\end{equation}
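For later use we also record the standard moment identities for a binomial random variable (textbook facts, stated here only for ease of reference): if $X\sim \text{Bin}(m,p)$ and $\varsigma^2 = mp(1-p)$, then
\begin{equation*}
\mathbb{E}[X] = mp,\qquad \mathbb{E}\left[(X-mp)^3\right] = \varsigma^2(1-2p),\qquad \mathbb{E}\left[(X-mp)^4\right] = \varsigma^2\bigl(1+3(m-2)p(1-p)\bigr).
\end{equation*}
Applied with $m = n-c$ and $p = q(n,z)$, these identities are the ones invoked for the third and fourth central moments of $\beta(n,z,c)$ in the proof below.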
The main purpose of this subsection is to establish the following lemma:
\begin{lem}\label{lem:asympt}
Fix an $r>0$ and $T>0$ and let
\begin{equation*}
\Omega_n = \Omega (n,r,T) = \left\{(z,c)\in \mathbb{Z}^2: 0\le z\le n^{1/3}r, 0\le c\le n^{2/3} Tr \right\}.
\end{equation*}
Then, as $n\to\infty$, the following bounds hold:
\begin{equation}\label{eqn:asypmtotics}
\begin{split}
&\sup_{\Omega_n} \left|\mu(n,z,c) - z - n^{-1/3} z(\lambda - n^{-2/3}c) \right| = O(n^{-1/3})\\
&\sup_{\Omega_n} \left| \sigma^2(n,z,c) - z - n^{-1/3} z(\lambda - n^{-2/3} c) \right| = O(n^{-1/3})\\
&\sup_{\Omega_n} \left|\kappa(n,z,c)\right| = O(n^{2/3})
\end{split}
\end{equation} In particular, \begin{equation*}
\begin{split}
&\sup_{\Omega_n} \left| \mu(n,z,c) - z\right| = O(1)\\
&\sup_{\Omega_n} \left| \sigma^2(n,z,c) - z\right| = O(1)
\end{split}, \qquad \text{as }n\to\infty.
\end{equation*}
\end{lem}
\begin{proof}
We prove the statements in equation \eqref{eqn:asypmtotics}, since the latter bounds easily follow from the more detailed asymptotics.
We start with the expansion of $\mu(n,z,c)$. The binomial theorem gives
\begin{align*}
\mu(n,z,c) &= (n-c) \left(1-(1-n^{-1}-\lambda n^{-4/3})^z \right)\\
&= (n-c) \left(z(n^{-1}+\lambda n^{-4/3}) - \sum_{j=2}^z \binom{z}{j} (-1)^j (n^{-1} + \lambda n^{-4/3})^j \right)\\
&= z + n^{-1/3} z(\lambda -n^{-2/3}c) - \lambda n^{-4/3} z c - (n-c)\sum_{j=2}^z \binom{z}{j} (-1)^j (n^{-1}+\lambda n^{-4/3})^j.
\end{align*}
For $n$ sufficiently large, we can obtain the bounds
\begin{align*}
\left|\mu(n,z,c) - z - n^{-1/3}z(\lambda - n^{-2/3}c) \right|&\le |\lambda| n^{-4/3} z c + (n-c)\left|\sum_{j=2}^z \binom{z}{j} (-1)^j (n^{-1}+\lambda n^{-4/3})^j\right|\\
&\le |\lambda| n^{-4/3} zc + n\sum_{j=2}^z \binom{z}{j} (n^{-1}+\lambda n^{-4/3})^j\\
&\le |\lambda| n^{-4/3}zc + n\sum_{j=2}^z \left(\frac{2ez}{n}\right)^j.
\end{align*} In the second and third inequalities above, we used the bound $0<n^{-1}+\lambda n^{-4/3} \le 2n^{-1}$, valid for $n$ sufficiently large, and the bound $\displaystyle \binom{m}{k} \le (em)^k.$
Taking the supremum over $\Omega_n$ gives
\begin{align*}
\sup_{\Omega_n} \left|\mu(n,z,c) - z - n^{-1/3}z(\lambda - n^{-2/3}c) \right| &\le n^{-1/3} |\lambda| Tr^2 + n\sum_{j=2}^{n^{1/3} r} \left( \frac{2er}{n^{2/3}}\right)^j\\
&\le n^{-1/3} |\lambda|Tr^2 + n\left(\frac{4e^2r^2 n^{-4/3}}{1-2ern^{-2/3}}\right)\\
&\le \left(|\lambda|Tr^2 + 8e^2r^2 \right)n^{-1/3} = O(n^{-1/3}).
\end{align*}
This proves the desired expansion and bound for $\mu(n,z,c)$.
We now examine the bounds for $\sigma^2(n,z,c)$. Again, we use the binomial theorem:
\begin{align}
\sigma^2(n,z,c) &= \mu(n,z,c) (1-n^{-1} - \lambda n^{-4/3})^z \nonumber\\
&= \mu(n,z,c) \left(1-z \left(n^{-1}+\lambda n^{-4/3}\right) + \sum_{j=2}^z \binom{z}{j} (-1)^j(n^{-1}+\lambda n^{-4/3})^j \right)\nonumber\\
&= \mu(n,z,c) - \mu(n,z,c)z(n^{-1}+ \lambda n^{-4/3}) + \mu(n,z,c)\sum_{j=2}^z \binom{z}{j} (-1)^j (n^{-1}+\lambda n^{-4/3})^j. \label{eqn:sigmaBound}
\end{align} We can then use the previous asymptotic bounds for $\mu(n,z,c)$ to get
\begin{equation*}
\sup_{\Omega_n} \left|\sigma^2(n,z,c) - z - n^{-1/3}z(\lambda -n^{-2/3}c) \right| \le \sup_{\Omega_n} |\sigma^2(n,z,c) - \mu(n,z,c)|+O(n^{-1/3}).
\end{equation*} We can bound the first term on the right-hand side as we did for $\mu(n,z,c)$ above. We get
\begin{align*}
\sup_{\Omega_n}& \left|\sigma^2(n,z,c) - \mu(n,z,c) \right| \le \sup_{\Omega_n} \left|\mu(n,z,c)z (n^{-1}+\lambda n^{-4/3}) + n\sum_{j=2}^z \binom{z}{j} (n^{-1}+\lambda n^{-4/3})^j \right| \\
&\le \sup_{\Omega_n} \left(2rn^{-2/3} \mu(n,z,c) + n \sum_{j=2}^z \binom{z}{j} (n^{-1} +\lambda n^{-4/3})^j \right)\\
&\le 2rn^{-2/3} \sup_{\Omega_n}\left(|\mu(n,z,c) - z - n^{-1/3}z(\lambda - n^{-2/3}c)| + |z+n^{-1/3}z(\lambda- n^{-2/3}c)| \right)\\
&\qquad + O(n^{-1/3})\\
&= 2rn^{-2/3} \left(O(n^{-1/3})+ O(n^{1/3}) \right) + O(n^{-1/3})\\
&= O(n^{-1/3}).
\end{align*} In the first line we used the bound $0<n^{-1}+\lambda n^{-4/3} \le 2n^{-1}$, valid for $n$ large enough, together with $\mu(n,z,c)\le n$; the second inequality used the bound $(n^{-1} + \lambda n^{-4/3})z\le 2rn^{-2/3}$ on $\Omega_n$. The third inequality used the previously derived bound $\displaystyle \sum_{j=2}^z \binom{z}{j}(n^{-1}+\lambda n^{-4/3})^j \le 8e^2 r^2 n^{-4/3}$, which holds on $\Omega_n$ for $n$ sufficiently large.
To show the bound for $\kappa(n,z,c)$, we expand it as follows:
\begin{align*}
\kappa(n,z,c) &= \mathbb{E}\left[(\beta(n,z,c)-\mu(n,z,c))^4 \right] + 4 \mathbb{E}\left[(\beta(n,z,c)-\mu(n,z,c))^3\right](\mu(n,z,c)-z) \\
&\qquad + 6 \mathbb{E}\left[ (\beta(n,z,c)-\mu(n,z,c))^2 \right](\mu(n,z,c)-z)^2 \\
&\qquad + 4\mathbb{E}\left[\beta(n,z,c)-\mu(n,z,c)\right](\mu(n,z,c)-z)^3\\
&\qquad +(\mu(n,z,c)-z)^4\\
&=: \kappa_4(n,z,c) + 4\kappa_3(n,z,c) + 6 \kappa_2(n,z,c) +0 + \kappa_0(n,z,c).
\end{align*}
We now show that $\kappa_j(n,z,c)$ for $j = 0,2,3,4$ have the desired bound.
By the approximations for $\mu(n,z,c)$ it is easy to see that $$
\sup_{\Omega_n}|\kappa_0(n,z,c)| = O(1),\qquad \text{as }n\to\infty.
$$ Similarly, we can use the approximations for both $\mu(n,z,c)$ and $\sigma^2(n,z,c)$ to arrive at
\begin{align*}
\sup_{\Omega_n}| \kappa_2(n,z,c) |&= \sup_{\Omega_n} \left|\sigma^2(n,z,c)(\mu(n,z,c) - z)^2\right|\\
&\le O(1) \cdot \sup_{\Omega_n} \left(\left| \sigma^2(n,z,c)-z- n^{-1/3}z(\lambda -n^{-2/3}c)\right|+ |z+n^{-1/3}z(\lambda - n^{-2/3}c)|\right)\\
&= O(1) \cdot \left(O(n^{-1/3}) + O(n^{1/3})\right) = O(n^{1/3}).
\end{align*}
Using the third central moment of a binomial random variable gives
\begin{align*}
\kappa_3(n,z,c) = \sigma^2(n,z,c)(1-2q(n,z)) (\mu(n,z,c)-z).
\end{align*} A similar expansion as that for $\kappa_2(n,z,c)$ shows that $\sup_{\Omega_n}|\sigma^2(n,z,c)| = O(n^{1/3})$, and the other two factors are $O(1)$ over $\Omega_n$; hence
\begin{equation*}
\sup_{\Omega_n} |\kappa_3(n,z,c)| = O(n^{1/3}).
\end{equation*}
By the fourth central moment for a binomial random variable, we have
\begin{align*}
|\kappa_4(n,z,c)| &= \sigma^2(n,z,c)\left|1 + 3(n-2-c)(q(n,z)-q(n,z)^2)\right|\\
&\le \sigma^2(n,z,c) \left(\left|1 + 3(n-c)(q(n,z)-q(n,z)^2) \right|+6|q(n,z)-q(n,z)^2 | \right)\\
&\le \sigma^2(n,z,c) \left(3 + 3\sigma^2(n,z,c) \right).
\end{align*} Hence, by the bound for $\sigma^2(n,z,c)$,
\begin{equation*}
\sup_{\Omega_n} |\kappa_4(n,z,c)| = O(n^{2/3}).
\end{equation*} This proves the desired claim.
\end{proof}
\subsection{Martingale estimates}
In this section we verify the conditions of the martingale functional central limit theorem, as found in \cite[Theorem 7.4.1]{EK86}. Before moving on to the lemma, we establish some notation.
We let
\begin{equation*}
\mathscr{F}_n^k(h) = \sigma\left( Z_n^k(j) : j\le h\right)
\end{equation*} denote the filtration generated by $Z_n^k$. We let
\begin{equation*}
Z_n^k(h) = k + M_n^k(h) + B_n^k(h),
\end{equation*}
be the Doob decomposition of $Z_n^k$ into an $\{\mathscr{F}_n^k(h)\}_{h\ge 0}$-martingale $M_n^k$ and a predictable process $B_n^k$. Similarly, we let $Q_n^k$ be the unique increasing process which makes $(M_n^k(h))^2-Q_n^k(h)$ an $\{\mathscr{F}_n^k(h)\}_{h\ge 0}$-martingale. That is,
\begin{equation*}
\begin{split}
B_n^k(h) &= \sum_{\ell = 0}^{h-1} \mathbb{E}\left[ Z_n^k(\ell+1) - Z_n^k(\ell) | \mathscr{F}_n^k(\ell)\right]\\
Q_n^k(h) &= \sum_{\ell=0}^{h-1} \mathbb{E}\left[\left(Z_n^k(\ell+1)-Z_n^k(\ell) \right)^2\big| \mathscr{F}_n^k(\ell) \right] - \mathbb{E}\left[Z_n^k(\ell+1)-Z_n^k(\ell) | \mathscr{F}_n^k(\ell) \right]^2.
\end{split}
\end{equation*}
We define the following rescaled processes
\begin{equation}\label{eqn:rescales}
\begin{split}
\tilde{Z}_n^k(t) &= n^{-1/3}Z_n^k(\fl{n^{1/3}t})\qquad \tilde{C}_n^k(t) = n^{-2/3}C_n^k(\fl{n^{1/3}t})\qquad \\
\tilde{M}_n^k(t) &= n^{-1/3}M^k_n(\fl{n^{1/3}t})\qquad \tilde{B}_n^k(t) = n^{-1/3}B^k_n(\fl{n^{1/3}t}) \qquad \tilde{Q}_n^k(t)= n^{-2/3}Q^k_n(\fl{n^{1/3}t})
\end{split}.
\end{equation} Also define $\tau_n^k(r) = \inf\{t:\tilde{Z}_n^k(t)\vee \tilde{Z}_n^k(t-)\ge r\}$ and $\hat{\tau}_n^k(r) = n^{-1/3}\inf\{h: Z_n^k(h)\ge n^{1/3}r\}$.
\begin{lem} \label{lem:ekVerify} Fix any $r>0$, $T>0$ and $x>0$. Let $k = k(n) = \fl{n^{1/3}x}$.
The following limits hold
\begin{enumerate}
\item $\displaystyle \lim_{n\to\infty}\mathbb{E}\left[ \sup_{t\le T\wedge \tau_n^k(r)} |\tilde{Z}_n^k(t)-\tilde{Z}_n^k(t-)|^2\right] = 0$.
\item $\displaystyle \lim_{n\to\infty}\mathbb{E}\left[ \sup_{t\le T\wedge \tau_n^k(r)} |\tilde{B}_n^k(t)-\tilde{B}_n^k(t-)|^2\right] = 0$.
\item $\displaystyle \lim_{n\to\infty}\mathbb{E}\left[ \sup_{t\le T\wedge \tau_n^k(r)} |\tilde{Q}_n^k(t)-\tilde{Q}_n^k(t-)| \right] = 0$.
\item $\displaystyle \sup_{t\le T\wedge \tau_n^k(r)} \left|\tilde{Q}_n^k(t) - \int_0^t \tilde{Z}^k_n(s)\,ds \right| \longrightarrow 0$, in probability as $n\to\infty$.
\item $\displaystyle \sup_{t\le T\wedge \tau_n^k(r)}\left| \tilde{B}_n^k(t)- \int_0^t (\lambda -\tilde{C}^k_n(s))\tilde{Z}^k_n(s) \,ds\right|{\longrightarrow}0$, in probability as $n\to\infty$.
\end{enumerate}
\end{lem}
\begin{proof}
In order to show (1), we prove the stronger claim
\begin{equation*}
\lim_{n\to\infty}\mathbb{E}\left[ \sup_{t\le T\wedge \tau_{n}^k(r)} |\tilde{Z}_n^k(t)-\tilde{Z}_n^k(t-)|^4\right] = 0.
\end{equation*}
To show this, note the following string of inequalities
\begin{align*}
\mathbb{E}&\left[\sup_{t\le T\wedge \tau_n^k(r)} |\tilde{Z}_n^k(t)-\tilde{Z}_n^k(t-)|^4 \right] \le n^{-4/3} \mathbb{E}\left[\sup_{h\le n^{1/3}(T\wedge \hat{\tau}_n^k(r))} |Z_n^k(h+1)-Z_n^k(h)|^4 \right]\\
&\le n^{-4/3} \sum_{h=0}^{\fl{n^{1/3}(T\wedge \hat{\tau}_n^k(r))}} \mathbb{E}\left[|Z_n^k(h+1)-Z_n^k(h)|^4 \right]\\
&\le n^{-4/3} \sum_{h=0}^{\fl{n^{1/3}(T\wedge \hat{\tau}^k_n(r))}}\sup_{\Omega_n} \mathbb{E} \left[\mathbb{E}\left[(Z_n^k(h+1)-Z_n^k(h))^4\bigg| Z_n^k(h) = z, C_n^k(h) = c\right] \right]\\
&\le Tn^{-1} \sup_{\Omega_n} \mathbb{E}\left[ (\beta(n,z,c)-z)^4 \right] = O(n^{-1/3}).
\end{align*} In the third inequality above, we used the tower property and the fact that for $h\le n^{1/3}(T\wedge \hat{\tau}^k_n(r))$ both $Z_n^k(h)\le n^{1/3}r$ and $C_n^k(h)\le n^{2/3}Tr$. The fourth inequality used the Markov property of $(Z_n^k,C_n^k)$. The convergence then holds by the asymptotic result for $\kappa(n,z,c)$ shown in Lemma \ref{lem:asympt}.
To verify (2), we begin by noting that
\begin{equation*}
B_n^k(h) = \sum_{j=0}^{h-1} \mathbb{E}\left[(Z_n^k(j+1)-Z_n^k(j))|\mathscr{F}_n^k(j) \right],
\end{equation*} and hence
\begin{equation*}
\sup_{t\le T\wedge \tau_n^k(r)} |\tilde{B}_n^k(t)-\tilde{B}_n^k(t-)|^2 \le n^{-2/3} \sup_{h\le n^{1/3}(T\wedge \hat{\tau}_n^k(r))} \left|\mathbb{E}\left[Z_n^k(h+1)-Z_n^k(h)\Big|\mathscr{F}_n^k(h) \right] \right|^2.
\end{equation*}
We also note that almost surely on $h\le n^{1/3}(T\wedge \hat{\tau}^k_n(r))$
\begin{equation*}
\mathbb{E}\left[ Z_n^k(h+1) - Z_n^k(h) \Big|\mathscr{F}_n^k(h) \right]\le \sup_{\Omega_n} |\mathbb{E}[(\beta(n,z,c) - z)]| = O(1)\text{ as }n\to\infty,
\end{equation*} by Lemma \ref{lem:asympt}. Hence,
\begin{align*}
\mathbb{E} \left[\sup_{t\le T\wedge \tau_n^k(r)} |\tilde{B}_n^k(t)-\tilde{B}_n^k(t-)|^2 \right]&\le \mathbb{E} \left[ n^{-2/3} \sup_{h\le n^{1/3}(T\wedge \hat{\tau}^k_n(r))} \left|\mathbb{E}\left[Z_n^k(h+1)-Z_n^k(h)\Big|\mathscr{F}_n^k(h) \right] \right|^2 \right]\\
&\le n^{-2/3} \cdot O(1) = O(n^{-2/3}),
\end{align*} which argues (2).
We next show (3). We begin by noting that
$$
Q_n^k(h) = \sum_{j=0}^{h-1} \mathbb{E}\left[ \left(Z_n^k(j+1) - Z_n^k(j) \right)^2\Big| \mathscr{F}_n^k(j)\right] - \mathbb{E}\left[Z_n^k(j+1)-Z_n^k(j) \Big| \mathscr{F}_n^k(j)\right]^2.
$$ Hence
\begin{align*}
\mathbb{E}&\left[\sup_{t\le T\wedge \tau_n^k(r)} |\tilde{Q}_n^k(t)-\tilde{Q}_n^k(t-)|\right] \\
&\le n^{-2/3} \mathbb{E} \left[\sup_{h\le n^{1/3}(T\wedge \hat{\tau}^k_n(r))} \mathbb{E}[(Z_n^k(h+1)-Z_n^k(h))^2 |\mathscr{F}_n^k(h)]+ \mathbb{E}[Z_n^k(h+1)-Z_n^k(h)|\mathscr{F}_n^k(h)]^2 \right]\\
&\le n^{-2/3} \mathbb{E}\left[\sup_{h\le n^{1/3}(T\wedge \hat{\tau}^k_n(r))} \sup_{\Omega_n} \left|\mathbb{E}[(\beta(n,z,c)-z)^2] + (\mu(n,z,c)-z)^2\right|\right]\\
&=n^{-2/3} \sup_{\Omega_n} \left|\sigma^2(n,z,c) + 2(\mu(n,z,c)-z)^2 \right| = O(n^{-1/3}).
\end{align*} In the last equalities, we used Lemma \ref{lem:asympt} and the observation that $\sup_{\Omega_n} \sigma^2(n,z,c) = O(n^{1/3})$.
To argue claim (4), we observe
\begin{align*}
\left(Q_n^k(h+1)-Q_n^k(h) \right) &= \mathbb{E}\left[(\beta(n,Z_n^k(h),C_n^k(h)) - Z_n^k(h))^2 |\mathscr{F}_n^k(h) \right] \\ &\qquad\qquad - (\mu(n,Z_n^k(h),C_n^k(h)) - Z_n^k(h))^2\\
& = \sigma^2(n,Z_n^k(h),C_n^k(h)).
\end{align*}
Therefore,
\begin{align*}
\sup_{h\le n^{1/3}(T\wedge \hat{\tau}_n^k(r))}\left|Q_n^k(h) - \sum_{j=0}^{h-1} Z_n^k(j) \right| &=\sup_{h\le n^{1/3}(T\wedge \hat{\tau}^k_n(r))} \left|\sum_{j=0}^{h-1} \left(\sigma^2(n,Z_n^k(j),C_n^k(j)) - Z_n^k(j)\right)\right|\\
&\le \sup_{h\le n^{1/3}(T\wedge \hat{\tau}^k_n(r))}\sum_{j=0}^{h-1} |\sigma^2(n,Z_n^k(j),C_n^k(j))- Z_n^k(j)|\\
&\le \sum_{j=0}^{n^{1/3}T} \sup_{\Omega_n} |\sigma^2(n,z,c) - z|\\
&= O(n^{1/3}).
\end{align*} In the third inequality above, we used the previously discussed bounds on $Z_n^k(h)$ and $C_n^k(h)$ for all $h$ such that $h\le n^{1/3}(T \wedge \hat{\tau}^k_n(r))$, and in the last term we used the bound for $\sigma^2(n,z,c)-z$ on $\Omega_n$ given by Lemma \ref{lem:asympt}.
Hence
\begin{align}\label{eqn:expandSupQtilde}
\sup_{t\le T\wedge \tau_{n}^k(r)} \left| \tilde{Q}_n^k(t) - \int_0^t \tilde{Z}_n^k(s)\,ds\right| &\le \sup_{t\le T\wedge \tau_n^k(r)} \left| \tilde{Q}_n^k(t) - n^{-2/3}\sum_{j=0}^{\fl{n^{1/3} t}} Z_n^k(j) \right|\\
&\qquad \qquad +\sup_{t\le T\wedge \tau_n^k(r)} \left| n^{-2/3}\sum_{j=0}^{\fl{n^{1/3} t}} Z_n^k(j) - \int_0^t \tilde{Z}_n^k(s)\,ds\right| .\nonumber
\end{align} We can bound the first term, using the bound for $Q_n^k(h)-\sum_{j=0}^{h-1}Z_n^k(j)$ from above, to get
\begin{align*}
\sup_{t\le T\wedge \tau_n^k(r)} \left| \tilde{Q}_n^k(t) - n^{-2/3}\sum_{j=0}^{\fl{n^{1/3} t}} Z_n^k(j) \right|& \le n^{-2/3} \sup_{h\le n^{1/3}(T\wedge \hat{\tau}_n^k(r))} \left|Q_n^k(h) - \sum_{j=0}^{h-1} Z_n^k(j) \right|\\&=O(n^{-1/3}).
\end{align*}
We can bound the second term as follows
\begin{align*}
\sup_{t\le T\wedge \tau_n^k(r)} &\left|n^{-2/3} \sum_{j=0}^{\fl{n^{1/3}t}} Z_n^k(j) -\int_0^t \tilde{Z}_n^k(s)\,ds \right|\\
&\le \sup_{t\le T\wedge \tau_n^k(r)} \left|n^{-2/3}\int_0^{\fl{n^{1/3}t}+1} Z_n^k(\fl{u})\,du - \int_0^t \tilde{Z}_n^k(s)\,ds \right|\\
&\le \sup_{t\le T\wedge \tau_n^k(r)} \left|\int_0^{n^{-1/3}(\fl{n^{1/3}t}+1)} \tilde{Z}_n^k(s)\,ds -\int_0^t \tilde{Z}_n^k(s)\,ds \right|\\
&\le r\sup_{t\le T} \left|t-n^{-1/3}(\fl{n^{1/3}t}+1) \right|\longrightarrow 0.
\end{align*} The above bounds hold almost surely. This proves the convergence in (4).
We lastly establish (5). We begin by noting that
$$
B_n^k(h+1) -B_n^k(h) = \mathbb{E}\left[Z_n^k(h+1)-Z_n^k(h)|\mathscr{F}_n^k(h) \right] = \mu(n,Z_n^k(h),C_n^k(h))-Z_n^k(h).
$$ Hence,
\begin{equation*}
B_n^k(h) = \sum_{j=0}^{h-1} \left(\mu(n,Z_n^k(j),C_n^k(j))-Z_n^k(j)\right).
\end{equation*} Therefore, almost surely we have
\begin{align*}
\sup_{h\le n^{1/3}(T\wedge \hat{\tau}^k_n(r))} &\left|B_n^k(h) - \sum_{j=0}^{h-1} n^{-1/3}Z_n^k(j) (\lambda -n^{-2/3}C_n^k(j)) \right| \\
&= \sup_{h\le n^{1/3}(T\wedge \hat{\tau}^k_n(r))} \left| \sum_{j=0}^{h-1}\left( \mu(n,Z_n^k(j),C_n^k(j)) - Z_n^k(j) -n^{-1/3}Z_n^k(j)(\lambda -n^{-2/3}C_n^k(j))\right) \right|\\
&\le \sup_{h\le n^{1/3}(T\wedge \hat{\tau}^k_n(r))}\sum_{j=0}^{h-1} \left|\mu(n,Z_n^k(j),C_n^k(j)) - Z_n^k(j) -n^{-1/3}Z_n^k(j)(\lambda -n^{-2/3}C_n^k(j))\right|\\
&\le Tn^{1/3} \sup_{\Omega_n} \left| \mu(n,z,c) - z - n^{-1/3}z (\lambda -n^{-2/3}c) \right| = O(1).
\end{align*}
Hence,
\begin{align*}
\sup_{t\le T\wedge \tau_n^k(r)}&\left| \tilde{B}_n^k(t)- \int_0^t (\lambda -\tilde{C}_n^k(s))\tilde{Z}_n^k(s) \,ds\right|\\
&\le \sup_{h\le n^{1/3}(T\wedge \hat{\tau}_n^k(r))} \left|n^{-1/3}B_n^k(h) - n^{-1/3}\sum_{j=0}^{h-1} n^{-1/3}Z_n^k(j) (\lambda -n^{-2/3}C_n^k(j)) \right|\\
&\qquad +\sup_{t\le T\wedge \tau_n^k(r)} \left|n^{-1/3}\sum_{j=0}^{\fl{n^{1/3}t}} n^{-1/3}Z_n^k(j) (\lambda -n^{-2/3}C_n^k(j)) - \int_0^t (\lambda - \tilde{C}_n^k(s))\tilde{Z}_n^k(s)\,ds\right|.
\end{align*} By factoring out an $n^{-1/3}$ from the first term on the right-hand side, it is easy to see that this term is $O(n^{-1/3})$ almost surely. Examining the second term on the right-hand side, we get almost surely
\begin{align*}
&\sup_{t\le T\wedge \tau_n^k(r)} \left|n^{-1/3}\sum_{j=0}^{\fl{n^{1/3}t}} n^{-1/3}Z_n^k(j) (\lambda -n^{-2/3}C_n^k(j)) - \int_0^t (\lambda - \tilde{C}_n^k(s))\tilde{Z}_n^k(s)\,ds\right|\\
&\quad= \sup_{t\le T\wedge \tau_n^k(r)} \left|n^{-1/3}\int_0^{\fl{n^{1/3}t}+1} n^{-1/3}Z_n^k(\fl{u})(\lambda -n^{-2/3} C_n^k(\fl{u}))\,du - \int_0^t (\lambda-\tilde{C}_n^k(s))\tilde{Z}_n^k(s)\,ds \right|\\
&\quad= \sup_{t\le T\wedge\tau_n^k(r)} \left|\int_0^{n^{-1/3}(\fl{n^{1/3}t}+1)}\tilde{Z}_n^k(s)(\lambda - \tilde{C}_n^k(s))\,ds - \int_0^t \tilde{Z}_n^k(s)(\lambda - \tilde{C}_n^k(s))\,ds\right|\\
&\quad\le (|\lambda|+Tr)r \sup_{t\le T}\left|t-n^{-1/3}(\fl{n^{1/3}t}+1) \right| \to 0.
\end{align*}
This proves the lemma.
\end{proof}
\subsection{Existence and uniqueness lemma, and a corollary} \label{sec:zthm}
Using the functional central limit machinery found in \cite[Chapter 7]{EK86}, it is not difficult to argue Lemma \ref{thm:zconv} from Lemma \ref{lem:ekVerify} along with the following existence and uniqueness lemma: \begin{lem}\label{lem:uniquenessLemma}
Fix a $\lambda\in \mathbb{R}$ and an $x\ge 0$.
\begin{enumerate}
\item There exists a unique strong solution to the following stochastic differential equation
\begin{equation}\label{eqn:zeqn.2}
\begin{split}
d\mathbf{Z}(t) &= \sqrt{\mathbf{Z}(t)} dW(t) + \left(\lambda - \mathbf{C}(t) \right)\,\mathbf{Z}(t)\,dt,\qquad \mathbf{Z}(0) = x\\
d\mathbf{C}(t) &= \mathbf{Z}(t)\,dt,\qquad \mathbf{C}(0) = 0,
\end{split}
\end{equation} which is absorbed upon $\mathbf{Z}$ hitting zero.
\item Given a weak solution $(\mathbf{Z},\mathbf{C})$ of equation \eqref{eqn:zeqn.2}, on an enlarged probability space there exists a Brownian motion $B$ such that $(\mathbf{Z},\mathbf{C})$ solves
\begin{equation}\label{eqn:ctsTC}
\mathbf{Z}(t) = x + \mathbf{X}^\lambda \left( \mathbf{C}(t) \wedge T_{-x}\right),
\end{equation} where $\mathbf{X}^\lambda(t) = B(t) + \lambda t -\frac{1}{2}t^2$.
\item Given $\mathbf{X}^\lambda(t) = B(t) +\lambda t-\frac{1}{2}t^2$ for a Brownian motion $B$, there exists a path-wise unique solution $(\mathbf{Z},\mathbf{C})$ of \eqref{eqn:ctsTC} with $\mathbf{C}(t) = \int_0^t \mathbf{Z}(s)\,ds$, and such a solution is a weak solution to \eqref{eqn:zeqn.2}.
\end{enumerate}
\end{lem}
\begin{remark}
We observe that the SDE in equation \eqref{eqn:zeqn.2} does not have a $\frac{1}{2}$, while in the integrated form found in equation \eqref{eqn:zsde1} there is such a term. This is because
$$
\int_0^t \mathbf{C}(s)\,\mathbf{Z}(s) \,ds = \int_0^t \mathbf{C}(s)\,d\mathbf{C}(s) = \frac{1}{2} \mathbf{C}(t)^2.
$$
\end{remark}
\begin{proof}
The strong existence and uniqueness in the first item follows from the Yamada-Watanabe theorem \cite[Theorem 1]{YW71}. The absorption upon $\mathbf{Z}$ hitting zero is obvious: stop $(\mathbf{Z},\mathbf{C})$ upon $\mathbf{Z}$ hitting zero and observe that this stopped process still solves \eqref{eqn:zeqn.2}.
The path-wise existence found in the third item follows from known theorems on random time-changes. See, for example, \cite[Chapter VI, Section 1]{EK86}, \cite{CLU09} or \cite[Section 2]{CPU13}.
Now suppose that $(\mathbf{Z},\mathbf{C})$ solves \eqref{eqn:zeqn.2} for a Brownian motion $W$. We observe that the quadratic variation of $\mathbf{Z}$ is given by
$$
\langle \mathbf{Z}\rangle(t) = \int_0^t \mathbf{Z}(s)\,ds.
$$ Define the process $M(t) = \int_0^t \sqrt{\mathbf{Z}(s)}\,dW(s)$. Define $V(t) = \inf\{s:\mathbf{C}(s)>t\}$ with the convention that $\inf\emptyset = \infty$.
Hence, by the Dambis, Dubins-Schwarz theorem \cite[Chapter V, Theorem 1.7]{RY99}, on an enlarged probability space, there exists a Brownian motion $\tilde{B}$ such that the process
$$
B(t) = \left\{\begin{array}{ll}
M(V(t))&: t< \int_0^\infty \mathbf{Z}(s)\,ds\\
M(\infty) + \tilde{B}_{t-\langle M\rangle(\infty)} &: t\ge \int_0^\infty \mathbf{Z}(s)\,ds
\end{array}
\right.
$$ is a Brownian motion.
We then have for $t<\int_0^\infty \mathbf{Z}(s)\,ds$:
\begin{align*}
\mathbf{Z}(V(t)) &=x+ M(V(t)) + \int_0^{V(t)} (\lambda - \mathbf{C}(s)) \mathbf{Z}(s)\,ds\\
&= x+ B(t) + \int_0^t \left(\lambda - s \right)\,ds\\
&= x+\mathbf{X}^\lambda(t).
\end{align*} Observe that $t<\int_0^\infty \mathbf{Z}(s)\,ds$ occurs if and only if $\mathbf{Z}(V(t))>0$. Indeed, since $\mathbf{Z}$ is continuous, non-negative and absorbed upon reaching zero, by \cite[Lemma 0.4.8]{RY99} we have $t\le \mathbf{C}(V(t)) = \int_0^{V(t)}\mathbf{Z}(s)\,ds < \int_0^\infty \mathbf{Z}(s)\,ds$ if and only if $\int_{V(t)}^\infty \mathbf{Z}(s)\,ds> 0$, which occurs if and only if $\mathbf{Z}(V(t))>0$.
Hence, we can rewrite the above string of equalities as
\begin{equation*}
\mathbf{Z}(V(t)) = x +\mathbf{X}^\lambda (t \wedge T_{-x}),
\end{equation*} which now holds for all $t$. Indeed, if $t\ge \int_0^\infty \mathbf{Z}(s)\,ds = \int_0^\zeta \mathbf{Z}(s)\,ds$ then $\mathbf{Z}(V(t)) = 0$, while $\mathbf{C}(\zeta) = T_{-x}$ and so $x+\mathbf{X}^\lambda(t\wedge T_{-x}) = x+\mathbf{X}^\lambda(T_{-x}) = 0$ as well. Since $V$ and $\mathbf{C}$ are two-sided inverses of each other prior to $V(t)=\infty$, the above equation implies that equation \eqref{eqn:ctsTC} holds.
Reversing the above steps gives the implication in the third item.
\end{proof}
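For intuition (and for numerical experiments), the following crude Euler--Maruyama sketch simulates a path of \eqref{eqn:zeqn.2}; it is our own illustration, not part of any proof, and the step size and the truncation at zero are ad hoc choices.
\begin{verbatim}
import numpy as np

def simulate_limit_sde(x, lam, T=10.0, dt=1e-3, rng=None):
    # Euler-Maruyama for dZ = sqrt(Z) dW + (lam - C) Z dt, dC = Z dt,
    # started from Z(0) = x, C(0) = 0 and absorbed when Z hits zero.
    rng = np.random.default_rng() if rng is None else rng
    steps = int(T / dt)
    Z = np.zeros(steps + 1)
    C = np.zeros(steps + 1)
    Z[0] = x
    for i in range(steps):
        if Z[i] <= 0.0:                  # absorption at zero
            Z[i + 1], C[i + 1] = 0.0, C[i]
            continue
        dW = rng.normal(0.0, np.sqrt(dt))
        Z[i + 1] = max(Z[i] + np.sqrt(Z[i]) * dW + (lam - C[i]) * Z[i] * dt, 0.0)
        C[i + 1] = C[i] + Z[i] * dt
    return Z, C
\end{verbatim}
In such simulations the total area $\int_0^\infty \mathbf{Z}(s)\,ds \approx \texttt{C[-1]}$ can be compared with the hitting time $T_{-x}$ of the parabolic Brownian motion, in line with part (2) of the lemma.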
\begin{comment}
We now move to the proof of Lemma \ref{thm:zconv}.
\begin{proof}[Proof of Lemma \ref{thm:zconv}]
To simplify the notation slightly, we omit the explicit reference to $k$; however, $k = k(n) = \fl{n^{1/3}x}$. We also use the definitions in \mathfrak{m}athbf{e}qref{eqn:rescales} with the superscript $k$ omitted.
For $r>0$, define the stopping time $$\alpha_n(r) = \inf\left\{t: (\tilde{Z}_n(t)\vee \tilde{Z}_n(t-)) + (\tilde{C}_n(t)\vee \tilde{C}_n(t-))> r\right\}.$$
We note that Lemma \ref{lem:ekVerify} and Theorem 7.4.1 in \mathscr{C}ite{EK86} imply that the sequence
$$
\left(\tilde{M}_n(\mathscr{C}dot \omegaedge \alpha_n(r), \tilde{Z}_n(\mathscr{C}dot \omegaedge \alpha_n(r)), \tilde{C}_n(\mathscr{C}dot\omegaedge \alpha_n(r)) ; n \ge 1\right)
$$ is tight in the Skorohod space for each fixed $r>0$. Moreover, Lemma \ref{lem:ekVerify} implies that any subsequential limit point has continuous sample paths almost surely.
Now fix $r_0>0$, and let
\begin{equation*}
\left(\tilde{M}_{n_m}(\mathscr{C}dot\omegaedge \alpha_{n_m}(r_0)),\tilde{Z}_{n_m}(\mathscr{C}dot\omegaedge \alpha_{n_m}(r_0)),\tilde{C}_{n_m}(\mathscr{C}dot \omegaedge \alpha_{n_m}(r_0)) \right)
\mathfrak{m}athbf{e}nd{equation*} be a convergent subsequence with limit point $(M^{(r_0)}(\mathscr{C}dot),Z^{(r_0)}(\mathscr{C}dot), C^{(r_0)}(\mathscr{C}dot))$. Let $$\alpha^{(r_0)}(r) = \inf\left\{t: Z^{(r_0)}(t)+ C^{(r_0)}(t)> r\right\}.$$
By Lemma 2.10 and Proposition 2.11 in \mathscr{C}ite[Chapter VI]{JS87}, for each $r<r_0$ which satisfies
\begin{equation}\label{eqn:alphar_0}
\mathbb{P}\left(\lim_{s\to r} \alpha^{(r_0)}(s) = \alpha^{(r_0)}(r) \right) = 1
\mathfrak{m}athbf{e}nd{equation} we have
\begin{equation*}
\alpha_{n_m}(r)\overset{d}{\longrightarrow} \alpha^{(r_0)}(r)
\mathfrak{m}athbf{e}nd{equation*} and consequently
\begin{equation}\label{eqn:covwithalpha} \begin{split}
\bigg( \tilde{M}_{n_m}&(\mathscr{C}dot\omegaedge \alpha_{n_m}(r_0)), \tilde{Z}_{n_m}(\mathscr{C}dot\omegaedge \alpha_{n_m}(r)) ,\tilde{C}_{n_m}(\mathscr{C}dot\omegaedge \alpha_{n_m}(r)) , \alpha_{n_m}(r) \bigg)\\
&\Longrightarrow \left(M^{(r_0)}(\mathscr{C}dot \omegaedge \alpha^{(r_0)}(r)), Z^{(r_0)}(\mathscr{C}dot \omegaedge \alpha^{(r_0)}(r)), C^{(r_0)}(\mathscr{C}dot \omegaedge \alpha^{(r_0)}(r)), \alpha^{(r_0)}(r) \right)
\mathfrak{m}athbf{e}nd{split}.
\mathfrak{m}athbf{e}nd{equation} It is easily argued that \mathfrak{m}athbf{e}qref{eqn:alphar_0} and \mathfrak{m}athbf{e}qref{eqn:covwithalpha} hold for all but countably many $r<r_0$.
By Lemma \ref{lem:asympt}, and as in the proof of Theorem 7.1.4(b) in \mathscr{C}ite{EK86} the martingales $M_{n_m}(\mathscr{C}dot \omegaedge \alpha_{n_m}(r_0))$ and $\left(M_{n_m}(\mathscr{C}dot \omegaedge \alpha_{n_m}(r_0) \right)^2 - \tilde{Q}_{n_m}(\mathscr{C}dot \omegaedge \alpha_{n_m}(r_0))$ are uniformly integrable. By Problem 7 in \mathscr{C}ite[Chapter 7]{EK86}, and property (4) in Lemma \ref{lem:ekVerify}, we have all but countably many $r<r_0$ $$M^{(r_0)}(t\omegaedge \alpha^{(r_0)}(r)) = Z^{(r_0)}(t\omegaedge \alpha^{(r_0)}(r)) - Z^{(r_0)}(0) - \int_0^{t\omegaedge \alpha^{(r_0)}(r)} (\lambda - C^{r_0}(s))Z^{r_0}(s)\,ds $$ is a martingale and by property (5) in Lemma \ref{lem:ekVerify} we have
$$
\left(M^{(r_0)}(t\omegaedge \alpha^{(r_0)}(r)) \right)^2 - \int_0^{t\omegaedge \alpha^{(r_0)}(r)} Z^{r_0}(s)\,ds
$$ is a martingale. These statements imply that there exists a Brownian motion $W$ (on a possibly enlarged probability space) such that
$$
M^{(r_0)}(t\omegaedge \alpha^{(r_0)}(r)) = \int_0^{t\omegaedge \alpha^{(r_0)}(r)} \sqrt{Z^{(r_0)}(s)}\,dW(s).
$$ But then
\begin{equation}\label{eqn:stopped}
Z^{(r_0)}(t\omegaedge \alpha^{(r_0)}(r)) = x+ \int_0^{t\omegaedge \alpha^{(r_0)}(r)} \sqrt{Z^{(r_0)}(s)}\,dW(s) + \int_0^{t\omegaedge \alpha^{(r_0)}(r)} Z^{(r_0)}(s)\left( \lambda - C^{(r_0)}(s)\right)\,ds.
\mathfrak{m}athbf{e}nd{equation}
Let $(\mathbf{Z},\mathfrak{m}athbf{C})$ be the unique strong solution to the SDE \mathfrak{m}athbf{e}qref{eqn:zeqn.2} with respect to the Brownian motion $W$, and let $\alpha(r) = \inf\{t: \mathbf{Z}(t)+\mathfrak{m}athbf{C}(t)> r\}$. Then, $(\mathbf{Z}(\mathscr{C}dot\omegaedge \alpha(r)),\mathfrak{m}athbf{C}(\mathscr{C}dot\omegaedge \alpha(r)))$ is the unique solution of the stopped SDE in \mathfrak{m}athbf{e}qref{eqn:stopped}. Therefore, for all but countably many $r$,
$$
\left(\tilde{Z}_{n}(\mathscr{C}dot \omegaedge \alpha_n(r)), \tilde{C}_n(\mathscr{C}dot \omegaedge \alpha_n(r)))\right)\Longrightarrow
\left(\mathbf{Z}(\mathscr{C}dot \omegaedge \alpha(r)),\mathfrak{m}athbf{C}(\mathscr{C}dot\omegaedge \alpha(r)) \right).
$$ Since $\alpha(r)\to \infty$ as $r\to\infty$, we can conclude the desired convergence.
\end{proof}
\end{comment}
\begin{comment}
We can see the following corollary of Lemma \ref{thm:zconv}, which can be found in \mathscr{C}ite{A97_1} and \mathscr{C}ite[Theorem 1]{MartinLof98}.
\begin{cor} \label{cor:Cor1}
Let $A_n(k)$ denote the number of vertices in $\mathfrak{m}athscr{G}_n$ the same connected component of some vertex in $\{1,2,\dotsm, k\}$, or equivalently for the Reed-Frost model the total number of individuals who ever contract the disease. Then for each $x >0$,
\begin{equation*}
n^{-2/3} A_n(\fl{n^{1/3}x}) \Longrightarrow \int_0^\infty \mathbf{Z}(t)\,dt \overset{d}{=} T_{-x},
\mathfrak{m}athbf{e}nd{equation*} where $\mathbf{Z}$ solves \mathfrak{m}athbf{e}qref{eqn:zsde1} and where $T_{-x}$ is the first hitting to of $-x$ for the parabolic Brownian motion $\mathbf{X}^\lambda$.
\mathfrak{m}athbf{e}nd{cor}
\begin{proof}
The fact that
$$
\int_0^\infty \mathbf{Z}(t)\,dt \overset{d}{=}T_{-x}
$$ follows from Lemma \ref{lem:uniquenessLemma} parts (2) and (3).
The key observation is that $$A_n(k) = \sum_{h\ge0 } Z_n^k(h)$$ for every $k$. We now argue that we can take the rescaling limit.
To do this, we use the Skorohod representation theorem, and Lemma \ref{thm:zconv}, to suppose that on a single probability space $(\Omega,\mathfrak{m}athscr{F},\mathbb{P})$ we have
\begin{equation*}
\left(n^{-2/3}C_n^k(\fl{n^{1/3}t});t\ge 0 \right) \to \left(\mathfrak{m}athbf{C}(t);t\ge 0 \right) \qquad\text{a.s. in }\mathfrak{m}athbb{D}(\mathbb{R}_+,\mathbb{R}_+).
\mathfrak{m}athbf{e}nd{equation*} We also let $\zeta = \inf\{t:\mathbf{Z}(t) = 0\}$. Then, on the event $\{\zeta +1 < t\}$, we have by the continuity of the projection map on the space of continuous functions
\begin{equation*}
n^{-2/3}C_n^k(\fl{n^{1/3}t}) \longrightarrow \mathfrak{m}athbf{C}(t) = \int_0^t \mathbf{Z}(s)\,ds\qquad\text{a.s. on }\{\zeta+1<t\}.
\mathfrak{m}athbf{e}nd{equation*}
Now suppose that on the event $\{\zeta+1 < t\}$, $A_n(k) > C_n^k(\fl{n^{1/3}t})+ n^{2/3}\mathfrak{m}athbf{e}ps$ for some $\mathfrak{m}athbf{e}ps>0$. In words this means that there are order $n^{2/3}$ many vertices at distance $n^{1/3}t$ or larger from the vertices $1,\dotsm, k$.
Taking $t\to\infty$ gives the desired claim if $\zeta<\infty$ almost surely. That's the content of the following lemma.
\end{proof}
\end{comment}
\begin{lem}\label{lem:compactsupport}
Let $(\mathbf{Z},\mathbf{C})$ be a solution of \eqref{eqn:zeqn.2}. Then almost surely $$
\zeta:=\inf\{t:\mathbf{Z}(t) = 0\} <\infty.
$$
\end{lem}
\begin{proof}
We let $\mathbf{X}^\lambda(t)$ be the process defined in Lemma \ref{lem:uniquenessLemma}(2), so that $(\mathbf{Z},\mathbf{C})$ solves \eqref{eqn:ctsTC}. We can write $\mathbf{C}$ as a function of just the process $\mathbf{X}^\lambda$. Indeed, letting $T_{-x}$ be the first hitting time of $-x$ by $\mathbf{X}^{\lambda}$, we have
$$
\mathbf{C}(t) = \inf\left\{s: \int_0^s \frac{1}{x+\mathbf{X}^\lambda(u\wedge T_{-x}) } \,du = t\right\}.
$$ This is a simple calculus exercise and the proof can be found in \cite[Section 2]{CPU13} and in \cite[Chapter VI, Section 1]{EK86}.
We note that almost surely
$$
I:=\int_0^{T_{-x}} \frac{1}{x+\mathbf{X}^{\lambda}(u\wedge T_{-x})}\,du <\infty.
$$ Indeed, this is true if we replace $\mathbf{X}^\lambda$ with a Brownian motion $B$, and Girsanov's theorem \cite[Chapter VIII]{RY99} implies that it is then true almost surely for $\mathbf{X}^\lambda$ as well. Hence $$
\zeta = \inf\{t:x+\mathbf{X}^\lambda(\mathbf{C}(t)) = 0\} =\inf\{s: \mathbf{C}(s) = T_{-x}\} = I <\infty.
$$
\end{proof}
\section{Convergence of the Cumulative Cousin Process} \label{sec:klim}
We begin by recalling the notation in Theorem \ref{thm:kconv}. The vertices are labeled by $w_n^k(j)$ for $j=0,1,\dotsm, m-1$ in a breadth-first order. Each vertex $w_n^k(j)$ is at some height $h = {\operatorname{\mathbf{ht}}}_n^k(w_n^k(j))$, and for this $j$ and $h$ we have
$$
\operatorname{\mathbf{csn}}_n^k(w_n^k(j)) = Z_n^k(h).
$$
\begin{proof}[Proof of Theorem \ref{thm:kconv} and Part (1) of Corollary \ref{cor:2}]
Throughout the proof we let $k = k(n,x) = \fl{n^{1/3}x}$. We also omit reference to the vertex $w_n^k$ when using the $\operatorname{\mathbf{csn}}$ statistic and write
\begin{equation*}
\operatorname{\mathbf{csn}}_n^k (j) = \operatorname{\mathbf{csn}}_n^k(w_n^k(j)).
\end{equation*} Recall that we have defined $\displaystyle C_n^k(h) = \sum_{j=0}^h Z_n^k(j)$. By Lemma \ref{thm:zconv} and the Skorohod representation theorem, we can and do assume that
\begin{equation*}
\left(\left(n^{-1/3}Z_n^k(\fl{n^{1/3}t}),n^{-2/3}C_n^k(\fl{n^{1/3}t}) \right);t\ge0 \right)\longrightarrow \left((\mathbf{Z}(t),\mathbf{C}(t));t\ge 0 \right),\qquad a.s.
\end{equation*} where $(\mathbf{Z},\mathbf{C})$ is a weak solution of the stochastic differential equation in \eqref{eqn:zeqn.2}. By Lemma \ref{lem:compactsupport}, we can define
\begin{equation*}
\mathbf{Z}(\infty) = 0.
\end{equation*}
Let $A_n(k) := \sum_{h\ge 0} Z_{n}^k(h)$ be the total number of vertices ever infected.
We assume that this convergence occurs on the probability space $(\Omega,\mathscr{F},\mathbb{P})$. Due to the Skorohod representation theorem, Theorem 1 in \cite{MartinLof98}, and the equality in distribution
\begin{equation*}
\int_0^\infty \mathbf{Z}(s)\,ds = T_{-x} = \inf\{t: \mathbf{X}^\lambda(t) = -x\}
\end{equation*} following from Lemma \ref{lem:uniquenessLemma}, without loss of generality we can assume that
\begin{equation}
\label{eqn:AnkAS}
n^{-2/3}A_n(k)\to \int_0^\infty \mathbf{Z}(s)\,ds,\qquad \text{a.s.}
\end{equation}
We break the proof down into several steps.
\begin{enumerate}
\item[Step 1:] Show that $\displaystyle\left(n^{-1/3}\operatorname{\mathbf{csn}}_n^k \circ C_n^k (\fl{n^{1/3}r});r\ge0 \right) \to \left( \mathbf{Z}(r);r\ge 0 \right)$ a.s. in $\mathbb{D}(\mathbb{R}_+,\mathbb{R}_+)$.
\item[Step 2:] Show that for large values of $t$, $\displaystyle n^{-1/3}\operatorname{\mathbf{csn}}_n^k(\fl{n^{2/3}t}) \to \mathbf{Z}(S_t(\mathbf{C}))$ a.s. in $\mathbb{R}$, for some time-change $S_t(\mathbf{C})$.
\item[Step 3:] Argue that the same convergence as in Step 2 holds for small values of $t$.
\item[Step 4:] Argue that Steps 2 and 3 imply convergence in the Skorohod space.
\item[Step 5:] Argue that $\displaystyle \int_0^{S_t(\mathbf{C})} \mathbf{Z}(s)^2\,ds = \int_0^{t\wedge T_{-x}} \left(x+\mathbf{X}^\lambda (s)\right)\,ds.$
\end{enumerate}
\textbf{Step 1:} With the convention that we start labeling the breadth-first order at $0$, it is a simple counting argument to see that $w_n^k(C_n^k(h))$ is the first vertex in the breadth-first ordering that is at distance $h+1$ from the root in its connected component. Thus $$
\{ w_n^k(j): 0\le j \le C_n^k(h)-1\} =\left\{v\in \mathscr{G}_n :{\operatorname{\mathbf{ht}}}_n^k(v)\le h\right\}.
$$ See also Figure \ref{fig:comp}. Therefore, \begin{equation*}
\operatorname{\mathbf{csn}}_n^k(C_n^k(r)) =Z_n^k(r).
\end{equation*} Consequently,
\begin{equation*}
\left(n^{-1/3}\operatorname{\mathbf{csn}}_n^k(C_n^k(\fl{n^{1/3}r}));r\ge0\right)\longrightarrow \left(\mathbf{Z}(r);r\ge 0 \right)\qquad \text{a.s. in }\mathbb{D}(\mathbb{R}_+,\mathbb{R}_+).
\end{equation*}
\textbf{Step 2:} The next part of the proof mimics part of the proof of Theorem 1.5 in \cite{Duquesne09}.
We define for each $f\in \mathbb{D}(\mathbb{R}_+,\mathbb{R}_+)$ and $y \in \mathbb{R}_+$ the function
$$
S_y(f) = \inf\{t: f(t)\vee f(t-)>y\},
$$ where $\inf\emptyset = \infty$. Set $\mathcal{V}(f) = \{y\in \mathbb{R}_+: S_{y-}(f) < S_{y}(f)\}$. By Lemma 2.10 and Proposition 2.11 in \cite[Chapter VI]{JS87}, for each fixed $y$, $f\mapsto S_y(f)$ is a measurable map from $\mathbb{D}(\mathbb{R}_+,\mathbb{R}_+)$ to $\mathbb{R}$ which is continuous at each $f$ such that $y\notin\mathcal{V}(f)$.
Define $\zeta = \inf\{t:\mathbf{Z}(t) = 0\}$, which is finite by Lemma \ref{lem:compactsupport}. We observe that $\mathbf{C}(t)$ is a strictly increasing continuous function for all $t\in [0,\zeta)$, since its derivative is strictly positive, and $\mathbf{C}(t) = \mathbf{C}(\zeta) = \int_0^\infty \mathbf{Z}(s)\,ds$ for all $t\ge \zeta$. In particular, $S_{\mathbf{C}(\zeta)}(\mathbf{C}) = +\infty$ whereas $S_{\mathbf{C}(\zeta)-}(\mathbf{C}) = \zeta<\infty$. Therefore, we have $\mathcal{V}(\mathbf{C}) = \{\mathbf{C}(\zeta)\}$ a.s.
We recall from \eqref{eqn:AnkAS}, with the notation above, that for each $\varepsilon>0$ there exists an $ N(\omega)<\infty$ such that $n^{-2/3}A_n(k) < \mathbf{C}(\zeta)+\varepsilon$ for all $n\ge N(\omega)$. For those $n$ sufficiently large, we have almost surely and for each $t>\mathbf{C}(\zeta)+\varepsilon$
\begin{align*}
n^{-1/3}\operatorname{\mathbf{csn}}_n^k(\fl{n^{2/3} t}) &= 0= \mathbf{Z}(S_t(\mathbf{C})),
\end{align*} where we use the fact that $S_t(\mathbf{C}) = \infty$ for all $t\ge \mathbf{C}(\zeta) = \sup_{s} \mathbf{C}(s)$. Indeed, $A_n(k) < n^{2/3}(\mathbf{C}(\zeta)+\varepsilon)< n^{2/3}t$ and so $\operatorname{\mathbf{csn}}_n^k(\fl{n^{2/3}t}) = 0$ by definition (see around equation \eqref{eqn:kDef}) for those $t$ sufficiently large.
By taking $\varepsilon\downarrow 0$, we have argued that almost surely
\begin{equation}\label{eqn:csnConv_bigt}
n^{-1/3} \operatorname{\mathbf{csn}}_n^k(\fl{n^{2/3}t}) \to \mathbf{Z}(S_t(\mathbf{C}))\qquad \forall t> \mathbf{C}(\zeta),
\end{equation} where the convergence is convergence of real numbers. Let $\mathscr{N}_>\subset \Omega$ denote the null set on which the above statement does not hold.
\textbf{Step 3:} We now argue that \eqref{eqn:csnConv_bigt} also holds for $t<\mathbf{C}(\zeta)$. To begin, we define the process $V_n^k = (V_n^k(j);j=0,1\dotsm)$ by
\begin{equation*}
V_n^k(j) = \min\{h: C_n^k(h)> j\}.
\end{equation*} We observe that if vertex $w_n^k (j)$ is at height $h$, then $V_n^k(j) = h$, because we start indexing the vertices at zero. Hence,
\begin{equation*}
|C_n^k(V_n^k(j)) -j | \le Z_n^k(V_n^k(j)).
\end{equation*} Indeed, the first labeled vertex at height $h$ is $w_n^k(C_n^k(h-1))$ and the last vertex of height $h$ is $w_n^k(C_n^k(h-1)+ Z_n^k(h) - 1)$.
In particular,
\begin{equation}\label{eqn:CV}
\left|n^{-2/3} C_n^{k}(V^{k}_n(\fl{n^{2/3}t})) - t\right| \le \sup_{h\ge 0} n^{-2/3} Z_n^k(h) \longrightarrow 0.
\end{equation}
We now look at the events
$$
E_q = \{q<\mathbf{C}(\zeta)\}.
$$ We observe that there exists a null set $\mathscr{N}_q\subset E_q$ such that for each $\omega\in E_q\setminus\mathscr{N}_q$ and for every $t\in[0,q]$ we have the following convergence of real numbers
\begin{equation}\label{eqn:vconv}
n^{-1/3}V_n^k(\fl{n^{2/3} t}) = S_t \left( n^{-2/3}C_n^k(\fl{n^{1/3}\cdot})\right) \longrightarrow S_t(\mathbf{C}),
\end{equation} by the aforementioned continuity of $S_t(\cdot)$ at those $f\in\mathbb{D}(\mathbb{R}_+,\mathbb{R}_+)$ with $t\notin\mathcal{V}(f)$. We therefore have, for each $\omega\in E_q\setminus \mathscr{N}_q$,
$$
n^{-1/3}\operatorname{\mathbf{csn}}_n^k\left(C_n^k\left( V_n^k(\fl{n^{2/3}t})\right) \right) \to \mathbf{Z}(S_t(\mathbf{C})),\qquad \forall t\in[0,q].
$$ Indeed, this follows from \cite[Lemma pg.\ 151]{Billingsley99} and the observation that $n^{-1/3}\operatorname{\mathbf{csn}}_n^k \circ C_n^k(\fl{n^{1/3}\cdot})$ converges in the $J_1$ topology to the continuous function $t\mapsto \mathbf{Z}(S_t(\mathbf{C}))$ for $t\in[0,\zeta)$.
Still working with an $\omega\in E_q\setminus \mathscr{N}_q$, we now observe that
\begin{align*}
&\left| n^{-1/3} \operatorname{\mathbf{csn}}_n^k \left(C_n^k\left( V_n^k(\fl{n^{2/3}t})\right) \right) - n^{-1/3} \operatorname{\mathbf{csn}}_n^k(\fl{n^{2/3} t}) \right| \longrightarrow 0 ,
\end{align*} by equation \eqref{eqn:CV} and \cite[Lemma pg.\ 151]{Billingsley99}. By taking the union over all $q\in \mathbb{Q}\cap\mathbb{R}_+$, we have argued Step 3.
\textbf{Step 4:} We have shown that, outside of the null set $\mathscr{N} = \mathscr{N}_> \cup \bigcup_{q\in \mathbb{Q}\cap\mathbb{R}_+} \mathscr{N}_q$,
\begin{equation}\label{eqn:csnconv_allt}
n^{-1/3}\operatorname{\mathbf{csn}}_n^k (\fl{n^{2/3}t}) \longrightarrow \mathbf{Z}(S_t(\mathbf{C})) ,\qquad \forall t\neq \mathbf{C}(\zeta)(\omega),
\end{equation} where the convergence is as real numbers.
We now argue that for each such $\omega\in \Omega\setminus \mathscr{N}$ and each $t\neq \mathbf{C}(\zeta)(\omega)$
\begin{equation}\label{eqn:csnjumps}
\sum_{0\le s\le t}\left( n^{-1/3}\operatorname{\mathbf{csn}}_n^k(\fl{n^{2/3}s}) - n^{-1/3}\operatorname{\mathbf{csn}}_n^k(\fl{n^{2/3}s-}) \right)^2 \longrightarrow 0.
\end{equation}
Towards this end we have the following string of inequalities
\begin{align*}
\sum_{0\le s\le t}&\left( n^{-1/3}\operatorname{\mathbf{csn}}_n^k(\fl{n^{2/3}s}) - n^{-1/3}\operatorname{\mathbf{csn}}_n^k(\fl{n^{2/3}s-}) \right)^2 \le n^{-2/3} \sum_{h = 0}^{V_n^k(\fl{n^{2/3}t})} \left(Z_n^k(h)-Z_n^k(h-1)\right)^2\\
&= \int_0^{V_n^k(\fl{n^{2/3}t})+1} \left( Z_{n}^k (\fl{u}) - Z_n^k(\fl{u}-1)\right)^2\,du\\
&= \int_0^{n^{-1/3} V_n^k (\fl{n^{2/3} t}) + n^{-1/3} } \left(n^{-1/3}Z_n^k(\fl{n^{1/3}u}) - n^{-1/3}Z_n^k(\fl{n^{1/3}u}-1) \right)^2\,du \\
&\longrightarrow \int_0^{S_t(\mathbf{C})} \left(\mathbf{Z}(u)-\mathbf{Z}(u)\right)^2\,du = 0.
\end{align*} The first inequality comes from examining the jumps of $\operatorname{\mathbf{csn}}_n^k(\fl{n^{2/3}s})$: they occur when $h = n^{2/3}s \in \mathbb{Z}$ and the jump is of size $Z_n^{k}(h)-Z_n^{k}(h-1)$. The a.s.\ convergence for the integral follows from the time-change lemma in \cite[pg.\ 151]{Billingsley99} and the convergence in \eqref{eqn:vconv}.
By Theorem 2.15 in \cite[Chapter VI]{JS87}, equations \eqref{eqn:csnconv_allt} and \eqref{eqn:csnjumps} imply that for all $\omega\in \Omega\setminus \mathscr{N}$
\begin{equation*}
\left(n^{-1/3} \operatorname{\mathbf{csn}}_n^k(\fl{n^{2/3}t}); t\ge0 \right) \longrightarrow \left( \mathbf{Z}(S_t(\mathbf{C})) ; t\ge0\right) \qquad \text{ in } \mathbb{D}(\mathbb{R}_+,\mathbb{R}).
\end{equation*}
\textbf{Step 5:} We let $\mathbf{X}^\lambda$ be the Brownian motion with parabolic drift related to the processes $(\mathbf{Z},\mathbf{C})$ in Lemma \ref{lem:uniquenessLemma}(2). The theorem follows from the following change of variables
\begin{align*}
\mathbf{Z}(S_t(\mathbf{C})) &= x+ \mathbf{X}^\lambda (\mathbf{C}(S_t(\mathbf{C}))) = x+ \mathbf{X}^\lambda({t\wedge T_{-x}}),
\end{align*} where we used the relationship $\mathbf{C}(S_t(\mathbf{C})) = t\wedge \mathbf{C}(\zeta) = t \wedge T_{-x}$.
The rest of Theorem \ref{thm:kconv} now follows from integration.
\end{proof}
\section{A Self-Similarity Result}\label{sec:ss}
We first observe the following relationship in $\lambda$ for the process $\mathbf{X}^\lambda$. Namely,
\begin{equation*}
\left(\left( \mathbf{X}^\lambda(t_0+t) - \mathbf{X}^\lambda(t_0); t\ge 0 \right)\bigg| \mathbf{X}^\lambda(t_0) = \inf_{s\le t_0} \mathbf{X}^\lambda(s) \right) \overset{d}{=} \left( \mathbf{X}^{\lambda-t_0}(t);t\ge0 \right).
\end{equation*} This observation was used by Aldous \cite{A97_1} to simplify the description of the (time-inhomogeneous) excursion measure of $\mathbf{X}^\lambda$ at time $t$ to the excursion measure of $\mathbf{X}^{\lambda-t}$ at time 0. See also \cite{ABG12}.
A similar result holds in our situation as well. We state it in the following theorem.
\begin{thm}
Let $\mathbf{Z}^{\lambda}_x(t), \mathfrak{m}athbf{C}^{\lambda}_x(t)$ denote the solution to
\begin{equation*}\begin{split}
d\mathbf{Z}^{\lambda}_x (t) &= \sqrt{\mathbf{Z}^\lambda_x(t)}\,dW(t)+ \left(\lambda - \mathbf{C}^{\lambda}_x(t) \right) \mathbf{Z}^{\lambda}_x(t)\,dt,\qquad \mathbf{Z}^\lambda_x(0) = x\\
d\mathbf{C}^\lambda_x(t) &= \mathbf{Z}^\lambda_x(t)\,dt \qquad \mathbf{C}^{\lambda}_x(0) = 0
\end{split}.
\end{equation*}
Then the following self-similarity result holds for any $t_0>0$, $z>0$ and $\mu>0$:
\begin{equation}\label{eqn:selfsim.1}
\left( \left(\left(\mathbf{Z}^{\lambda}_x(t_0+t), \mathbf{C}^{\lambda}_x(t_0+t)\right);t\ge0 \right) \Big| \mathbf{Z}^\lambda_x(t_0) = z, \mathbf{C}^{\lambda}_x(t_0) = \mu\right) \overset{d}{=} \left(\left(\mathbf{Z}^{\lambda-\mu}_z(t),\mathbf{C}^{\lambda-\mu}_z(t)\right);t\ge 0\right).
\end{equation}
\end{thm}
\begin{proof}
The proof follows from the decomposition in Lemma \ref{lem:uniquenessLemma}, particularly equation \eqref{eqn:ctsTC}. Namely, there exists a Brownian motion with parabolic drift $\mathbf{X}^\lambda(t)$ such that
\begin{equation*}
\mathbf{Z}^\lambda_x(t) = x+ \mathbf{X}^{\lambda}(\mathbf{C}^\lambda_x(t) \wedge T_{-x}).
\end{equation*}
We also observe that
\begin{align*}
\mathbf{X}^\lambda(s_0+s) &= B(s_0+s)+\lambda(s_0+s) -\frac{1}{2}(s_0+s)^2\\
&= B(s_0)+B(s_0+s)-B(s_0) + \lambda s_0 + \lambda s- \frac{1}{2}s_0^2 - s_0 s - \frac{1}{2}s^2\\
&= \mathbf{X}^\lambda(s_0) + \left(B(s_0+s)-B(s_0) + (\lambda-s_0) s - \frac{1}{2}s^2 \right)\\
&= \mathbf{X}^\lambda(s_0) + \tilde{\mathbf{X}}^{\lambda-s_0}(s),
\end{align*} for a process $\tilde{\mathbf{X}}^{\lambda-s_0} \overset{d}{=} \mathbf{X}^{\lambda-s_0}$ which is independent of $\sigma\left\{\mathbf{X}^\lambda(u); u\le s_0 \right\}$.
Hence, we have
\begin{align*}
\mathbf{Z}^\lambda_x(t_0+t)&= x + \mathbf{X}^\lambda\left(\mathbf{C}^\lambda_x(t_0+t)\right)\\
&= x+ \mathbf{X}^\lambda\left(\mathbf{C}^\lambda_x(t_0) + \int_0^t \mathbf{Z}_x^\lambda(t_0+s)\,ds \right) \\
&= x + \mathbf{X}^\lambda(\mathbf{C}_x^\lambda(t_0)) + \tilde{B}\left(\int_0^t \mathbf{Z}^\lambda_x(t_0+s)\,ds\right) \\
&\qquad\qquad\qquad\qquad+ \left(\lambda-\mathbf{C}_x^\lambda(t_0)\right) \int_0^t \mathbf{Z}^\lambda_x(t_0+s)\,ds - \frac{1}{2}\left(\int_0^t \mathbf{Z}^\lambda_x(t_0+s)\,ds\right)^2,
\end{align*} where $\tilde{B}$ is a Brownian motion independent of $\sigma\{\mathbf{X}^\lambda(u): u\le \mathbf{C}^\lambda_x(t_0)\}$. Hence, conditioning on $\mathbf{Z}^\lambda_x (t_0) = z$ and $\mathbf{C}^\lambda_x(t_0) = \mu$ gives
$$
\mathbf{Z}^\lambda_x(t_0+t) = z + \tilde{\mathbf{X}}^{\lambda- \mu}\left(\int_0^t \mathbf{Z}^\lambda_x(t_0+s)\,ds \right).
$$ By Lemma \ref{lem:uniquenessLemma}, this is equivalent to the statement in \eqref{eqn:selfsim.1}.
\end{proof}
\section{A More General Asymptotic Regime}\label{sec:theta}
As observed by Bollob\'{a}s in \cite{B85}, the asymptotic order of the largest component of the Erd\H{o}s-R\'{e}nyi random graph $G(n,n^{-1}+\lambda\log(n)^{1/2}n^{-4/3})$ is $n^{2/3}(\log n)^{1/2}$ as $n\to\infty$. Actually, he proves a much more general result, but we will not state that fully here. We instead examine a more general asymptotic regime.
We consider any sequence of real numbers $\theta_n$ such that
\begin{equation}
\label{eqn:thetan}
\theta_n = o(n^{1/3}), \qquad\text{and}\qquad \lim_{n\to\infty} \theta_n =\infty.
\end{equation} In the introduction we used the notation $\varepsilon_n$ instead of $\theta_n$. The conditions in \eqref{eqn:thetan} can be reformulated for $\varepsilon_n$ in the statement of Theorem \ref{thm:kconv_gen} by setting $$
\theta_n = n^{1/3} \varepsilon_n.
$$ We also fix a $\lambda\in \mathbb{R}$ and let \begin{equation*}
\mathscr{G}_n^\theta = G(n,n^{-1}+\lambda \theta_n n^{-4/3}).\end{equation*}
To distinguish the notation, we let $Z^{\theta,k}_n(h)$ denote the height profile of $\mathscr{G}_n^\theta$ starting from $k$ uniformly chosen vertices (see Section \ref{sec:labeling} for more information on how this is constructed). With this notation, we can state the following lemma:
\begin{lem} \label{thm:genz} Fix $x>0$.
Suppose that $\theta_n$ satisfies \eqref{eqn:thetan} and $k = k(n) = \fl{ \theta_n^2 n^{1/3}x}$. Then the following convergence holds in the Skorohod space $\mathbb{D}(\mathbb{R}_+,\mathbb{R}_+)$
\begin{equation}\label{eqn:ztheta}
\left( \frac{1}{\theta_n^2 n^{1/3}} Z_n^{\theta,k} \left( \fl{ \theta_n^{-1} n^{1/3} t}\right) ;t\ge0\right) \Longrightarrow \left(z(t);t\ge0 \right),
\end{equation} where $z$ solves the deterministic equation
\begin{equation}\label{eqn:detz}
z(t) = f\left(\int_0^t z(s)\,ds\right),\qquad f(t) =x + \lambda t -\frac{1}{2}t^2.
\end{equation}
\end{lem}
The proof follows from lemmas similar to the lemmas found in Section \ref{sec:zthm}. Before stating those lemmas, we make some comments on the solution $z(t)$ found in \mathfrak{m}athbf{e}qref{eqn:detz}. We have already mentioned that $$
c(t) = \int_0^t z(s)\,ds = \inf\{s: \int_0^s \frac{1}{f(u)}\,du = t\}.
$$ See also, \mathscr{C}ite[Section 2]{CPU13} and \mathscr{C}ite[Section 6.1]{EK86} for more details on time changes. We have ``$=t$'' instead of ``$>t$" because the inverse is actually a two-sided inverse. Indeed, since $\int_0^{t_0} \frac{1}{f(u)}\,du = \infty$ where $t_0 = \lambda+\sqrt{2x+\lambda^2}$ is the largest root of $f(t)$, the function $c$ is strictly increasing continuous function $c:[0,\infty)\to [0,\lambda+\sqrt{2x+\lambda^2})$. The function $c$ can actually be explicitly computed:
\begin{equation*}
c(t) = \lambda + \sqrt{2x+\lambda^2} \tanh\left(\frac{\sqrt{2x+\lambda^2}}{2} t + \operatorname{arctanh}\left(\frac{-\lambda}{\sqrt{2x+\lambda^2}} \right) \right).
\mathfrak{m}athbf{e}nd{equation*}
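Writing $R = \sqrt{2x+\lambda^2}$, one checks directly that this expression satisfies $c(0)=0$ and solves the time-change equation:
\begin{align*}
c'(t) &= \frac{R^2}{2}\operatorname{sech}^2\left(\frac{R}{2} t + \operatorname{arctanh}\left(\frac{-\lambda}{R}\right)\right) = \frac{R^2}{2}\left(1-\left(\frac{c(t)-\lambda}{R}\right)^2\right)\\
&= x + \lambda c(t) - \frac{1}{2}c(t)^2 = f(c(t)),
\end{align*}
so that $z(t) = c'(t) = f(c(t))$ indeed solves \eqref{eqn:detz}.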
We also make some comments on the scaling found in Lemma \ref{thm:genz}. In order to describe this scaling, we introduce the diameter of the graph $\mathscr{G}_n^\theta$ as
\begin{equation*}
\mathscr{D}_n^\theta = \mathscr{D}_n^{\theta_n} = \max_{u,v\in \mathscr{G}_n^\theta}\left\{ \operatorname{{dist}}(u,v): \operatorname{{dist}}(u,v)<\infty\right\}.
\end{equation*} The trivial observation is that $Z_n^{\theta,k}(h)>0$ implies that $\mathscr{D}^\theta_n\ge h$.
A result of {\L}uczak \cite[Theorem 11(iii)]{Luczak98} implies that, when $\lambda<0$,
\begin{equation*}
\mathscr{D}_n^\theta = \frac{\log(2\theta_n^3)+O(1)}{-\log(1-{\theta_n}n^{-1/3})}
\end{equation*} with high probability as $n\to\infty$. There is a typo in the statement of \cite[Theorem 11(iii)]{Luczak98}: it contains a $\log(2\eps^2n)$ term where there should be a $\log(2\eps^3n)$ term. In the supercritical ($\lambda>0$) regime, the work of Ding, Kim, Lubetzky and Peres \cite{DKLP10,DKLP11} provides more precise results. Namely, they show \cite[Theorem 1.1]{DKLP11} that if $\mathscr{C}_n^\theta$ is the largest component of $\mathscr{G}_n^\theta$, for $\lambda>0$, then with high probability
\begin{equation*}
\text{diam}(\mathscr{C}_n^\theta) = (3+o(1))n^{1/3}\theta_n^{-1} \log (\theta_n^3)\qquad\text{as }n\to\infty.
\end{equation*}
Even more precise asymptotic results in this regime can be found in \cite{RW10}, again in the supercritical regime $\lambda>0$.
Both of these results on the asymptotic diameter $\mathscr{D}_n^\theta$ suggest that the proper ``time'' scaling in Lemma \ref{thm:genz} should be $\theta_n^{-1}n^{1/3}\log(\theta_n)t$ as compared with $\theta_n^{-1}n^{1/3}t$; however, this is not the correct scaling to obtain a non-trivial limit.
\subsection{Lemmas}
In connection with the Reed-Frost model of epidemics, it is easy to see that the analog of \eqref{eqn:condDist} becomes the following:
\begin{equation*}
\left(Z_n^{\theta,k}(h+1) \big| Z_n^{\theta,k}(h) = z, C_n^{\theta,k}(h) = c \right) \overset{d}{=} \left\{ \begin{array}{ll}\text{Bin} \left(n-c, q_\theta(n,z) \right) &: z>0, c<n\\
0 &: \text{else}
\end{array}\right.,
\end{equation*} where $q_\theta(n,z)$ is defined as
\begin{equation*}
q_\theta(n,z) = 1-\left(1- n^{-1} -\lambda \theta_n n^{-4/3} \right)^z.
\end{equation*}
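As a heuristic for the drift term appearing in the next lemma, note that for $z = O(\theta_n^2 n^{1/3})$ and $c = O(\theta_n n^{2/3})$ a first-order expansion gives
$$
q_\theta(n,z) = z\left(n^{-1}+\lambda\theta_n n^{-4/3}\right) + O\left(z^2 n^{-2}\right),
$$
so that the conditional mean satisfies
$$
(n-c)\,q_\theta(n,z) = z + n^{-1/3} z\left(\lambda\theta_n - n^{-2/3}c\right) + O\left(\theta_n^4 n^{-1/3}\right).
$$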
The analog of Lemma \ref{lem:asympt} becomes the following:
\begin{lem}\label{lem:asympt2}
Let $\beta_\theta(n,z,c)$ denote a $\text{Bin}(n-c,q_\theta(n,z))$ random variable. Let $\mu_\theta,\sigma_\theta^2,\kappa_\theta$ denote the statistics in \eqref{eqn:betaStats} with $\beta_\theta$ replacing $\beta$. Fix $r>0$ and $T>0$ and define $$
\Omega_n^\theta = \Omega_n^\theta(n,r,T):= \left\{(z,c)\in \mathbb{Z}^2: 0\le z\le n^{1/3}\theta_n^2 r, 0\le c\le n^{2/3}\theta_n rT \right\}.
$$ Then the following bounds hold:
\begin{equation*}
\begin{split}
&\sup_{\Omega_n^\theta} \left|\mu_\theta(n,z,c) - z - n^{-1/3}z(\lambda \theta_n - n^{-2/3}c) \right| = O\left( \theta_n^4 n^{-1/3}+1\right)\\
&\sup_{\Omega_n^\theta} \left|\sigma_\theta^2(n,z,c) - z - n^{-1/3}z(\lambda \theta_n - n^{-2/3}c) \right| = O\left( \theta_n^4 n^{-1/3}+1\right)\\
&\sup_{\Omega_n^\theta} |\kappa_\theta(n,z,c) | = O(\theta_n^{12}+ \theta_n^8 n^{1/3} +\theta_n^4 n^{2/3})
\end{split}
\end{equation*}
\end{lem}
\begin{proof}
The estimates for $\mu_\theta$ and $\sigma_\theta^2$ follow from the same argument as in the proof of Lemma \ref{lem:asympt}, and we omit them here.
We do argue the result for $\kappa_\theta$, since it is much more involved computationally. We again use the expansion:
\begin{align*}
\kappa_\theta(n,z,c) &= \mathbb{E}\left[(\beta_\theta(n,z,c)-\mu_\theta(n,z,c))^4 \right] + 4 \mathbb{E}\left[(\beta_\theta(n,z,c)-\mu_\theta(n,z,c))^3\right](\mu_\theta(n,z,c)-z) \\
&\qquad + 6 \mathbb{E}\left[ (\beta_\theta(n,z,c)-\mu_\theta(n,z,c))^2 \right](\mu_\theta(n,z,c)-z)^2 \\
&\qquad + 4\mathbb{E}\left[\beta_\theta(n,z,c)-\mu_\theta(n,z,c)\right](\mu_\theta(n,z,c)-z)^3\\
&\qquad +(\mu_\theta(n,z,c)-z)^4\\
&=: \kappa_{4,\theta}(n,z,c) + 4\kappa_{3,\theta}(n,z,c) + 6 \kappa_{2,\theta}(n,z,c) +0 + \kappa_{0,\theta}(n,z,c).
\end{align*}
We can use the bound for $\mu_\theta$ and Minkowski's inequality to get
\begin{align*}
\sup_{\Omega_n^\theta}\left|\kappa_{0,\theta}(n,z,c) \right| &= \sup_{\Omega_n^\theta}\left(\mu_\theta(n,z,c) - z \right)^4\\
&\le \left[\sup_{\Omega_n^\theta} \left|n^{-1/3} z(\lambda\theta_n-n^{-2/3}c) \right| + O(\theta_n^4n^{-1/3} + \theta_n^2 n^{-2/3})\right]^4\\
&\le C \left( \sup_{\Omega_n^\theta} |n^{-1/3} z(\lambda\theta_n - n^{-2/3}c)|^4 + O(\theta_n^{16} n^{-4/3} + \theta_n^8 n^{-8/3}) \right)\\
&= O\left(\theta_n^{12} + \theta_n^{16} n^{-4/3} + \theta_n^8 n^{-8/3} \right) \le O(\theta_n^{12}),
\end{align*} where in the last inequality we used the bounds in \eqref{eqn:thetan}.
The next three follow from the bounds below. They are easy to verify using the original bounds on $\sigma_\theta^2$ and $\mu_\theta$, and computations similar to the one above:
\begin{align*}
&\sup_{\Omega_n^\theta} \left|\mu_\theta(n,z,c) - z \right| = O\left(\theta_n^3 \right)\\
&\sup_{\Omega_n^\theta} \left|\sigma_\theta^2(n,z,c) \right| = O\left(\theta_n^2 n^{1/3} + \theta_n^3 + \theta_n^4 n^{-1/3} \right)\\
&\qquad\qquad\qquad\qquad = O\left(\theta_n^2 n^{1/3} \right).
\end{align*}
Using the same expansions as in Lemma \ref{lem:asympt}, we have
\begin{align*}
\sup_{\Omega_n^\theta} |\kappa_{2,\theta}(n,z,c)| &= O(\theta_n^6)\times O(\theta_n^2 n^{1/3}) = O(\theta_n^8 n^{1/3})\\
\sup_{\Omega_n^\theta} |\kappa_{3,\theta}(n,z,c)| &= O(\theta_n^2 n^{1/3}) \times O(\theta_n^3) = O(\theta_n^8 n^{1/3})\\
\sup_{\Omega_n^\theta} |\kappa_{4,\theta}(n,z,c)|& = O(\theta_n^2 n^{1/3})^2 = O(\theta_n^4 n^{2/3}).
\end{align*}
This proves the desired bounds.
\end{proof}
One can use the bounds in the lemma above to prove an analog of Lemma \ref{lem:ekVerify}. We first establish some notation.
We now let $\mathscr{F}_n^{\theta,k}(h) = \sigma(Z_n^{\theta,k}(j),j\le h)$ be the filtration generated by $Z_n^{\theta,k}$ and let $Z_n^{\theta,k}(h) = M_n^{\theta,k}(h) + B_n^{\theta,k}(h)$ be the decomposition of $Z_n^{\theta,k}$ into an $\mathscr{F}_n^{\theta,k}(h)$-martingale $M_n^{\theta,k}$ and a process $B_n^{\theta,k}$. We also let $Q_n^{\theta,k}$ be the process which makes $(M_n^{\theta,k}(h))^2 - Q_n^{\theta,k}(h)$ an $\mathscr{F}_n^{\theta,k}(h)$-martingale. Define the rescaled processes, in comparison to \eqref{eqn:rescales},
\begin{equation}\label{eqn:rescales2}
\begin{split}
\tilde{Z}_n^{\theta,k}(t) &= \theta_n^{-2} n^{-1/3}Z_n^{\theta,k}(\fl{ \theta_n^{-1} n^{1/3} t})\quad \tilde{C}_n^{\theta,k}(t) = \theta_n^{-1}n^{-2/3}C_n^{\theta,k}(\fl{ \theta_n^{-1} n^{1/3} t})\qquad \\
\tilde{M}_n^{\theta,k}(t) &= \theta_n^{-2}n^{-1/3}M^{\theta,k}_n(\fl{ \theta_n^{-1} n^{1/3} t})\quad \tilde{B}_n^{\theta,k}(t) = \theta_n^{-2}n^{-1/3}B^{\theta,k}_n(\fl{ \theta_n^{-1} n^{1/3} t}) \\
\tilde{Q}_n^{\theta,k}(t)&= \theta_n^{-4} n^{-2/3}Q^{\theta,k}_n(\fl{ \theta_n^{-1} n^{1/3} t}).
\end{split}
\end{equation} Also define $\tau_n^{\theta,k}(r) = \inf\{t:\tilde{Z}_n^{\theta,k}(t)\vee \tilde{Z}_n^{\theta,k}(t-)>r\}$ and $\hat{\tau}_n^{\theta,k}(r) = \theta_n n^{-1/3}\inf\{h: Z_n^{\theta,k}(h)>\theta_n^2 n^{1/3}r\}$.
The analog of Lemma \ref{lem:ekVerify} is the following lemma. The proof is omitted since it is similar to the proof of Lemma \ref{lem:ekVerify}.
\begin{lem} \label{lem:ekVerify2} Fix any $r>0$, $T>0$ and $x>0$. Let $k = k(n) = \fl{\theta_n^2 n^{1/3}x}$.
The following limits hold:
\begin{enumerate}
\item $\displaystyle \lim_{n\to\infty}\mathbb{E}\left[ \sup_{t\le T\wedge \tau_n^{\theta,k}(r)} |\tilde{Z}_n^{\theta,k}(t)-\tilde{Z}_n^{\theta,k}(t-)|^2\right] = 0$.
\item $\displaystyle \lim_{n\to\infty}\mathbb{E}\left[ \sup_{t\le T\wedge \tau_n^{\theta,k}(r)} |\tilde{B}_n^{\theta,k}(t)-\tilde{B}_n^{\theta,k}(t-)|^2\right] = 0$.
\item $\displaystyle \lim_{n\to\infty}\mathbb{E}\left[ \sup_{t\le T\wedge \tau_n^{\theta,k}(r)} |\tilde{Q}_n^{\theta,k}(t)-\tilde{Q}_n^{\theta,k}(t-)| \right] = 0$.
\item $\displaystyle \sup_{t\le T\wedge \tau_n^{\theta,k}(r)} \left|\tilde{Q}_n^{\theta,k}(t) \right| \longrightarrow 0$, as $n\to\infty$, almost surely.
\item $\displaystyle \sup_{t\le T\wedge \tau_n^{\theta,k}(r)}\left| \tilde{B}_n^{\theta,k}(t)- \int_0^t (\lambda -\tilde{C}_n^{\theta,k}(s))\tilde{Z}_n^{\theta,k}(s) \,ds\right|\overset{P}{\longrightarrow}0$, as $n\to\infty$.
\end{enumerate}
\end{lem}
Finally, using the machinery of \cite[Chapter 7]{EK86}, in particular Theorem 7.4.1, Lemma \ref{thm:genz} follows from Lemma \ref{lem:ekVerify2}.
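As an illustration (not needed for the proof), the convergence in Lemma \ref{thm:genz} can be probed numerically with a short simulation of the binomial chain above; the following minimal Python sketch, with the arbitrary choice $\theta_n = n^{1/12}$, rescales the chain as in \eqref{eqn:ztheta}.
\begin{verbatim}
import numpy as np

# Illustration only: simulate the chain Z_n^{theta,k} via its binomial
# transition and rescale it as in the statement of the lemma.
rng = np.random.default_rng(0)
n, lam, x = 10**6, 1.0, 1.0
theta = n ** (1 / 12)                 # any theta_n = o(n^{1/3}) tending to infinity
k = int(theta**2 * n**(1/3) * x)      # number of starting vertices
p = 1 / n + lam * theta * n**(-4/3)   # edge probability of G_n^theta

Z, C = [k], k
while Z[-1] > 0 and C < n:
    q = 1 - (1 - p) ** Z[-1]          # q_theta(n, z)
    nxt = int(rng.binomial(n - C, q))
    Z.append(nxt)
    C += nxt

t = np.arange(len(Z)) * theta / n**(1/3)          # rescaled height
z_rescaled = np.array(Z) / (theta**2 * n**(1/3))  # compare with z(t) = f(c(t))
\end{verbatim}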
\subsection{The Cumulative Cousin Process and Corollaries}
Just as we examined the cousin process and the cumulative cousin process of the Erd\H{o}s-R\'{e}nyi random graph $\mathscr{G}_n$ and obtained a non-trivial rescaled limit, we get a similar result in this more general regime.
\begin{proof}[Proof of Theorem \ref{thm:kconv_gen} and Part (2) of Corollary \ref{cor:2}] We write $\theta_n = \eps_n n^{1/3}$ and $C_n^{\theta,k}(h) = \sum_{j\le h} Z_n^{\theta,k}(j)$. The proof of the scaling of the $\operatorname{\mathbf{csn}}$ statistic follows from a similar argument as in the proof of Theorem \ref{thm:kconv}, with the only changes being the scaling and the scaling limits. We omit that here, but include the proof for the cumulative cousin statistic, which does not rely on integration alone.
Just as in the proof of Theorem \ref{thm:kconv}, we can write
\begin{align*}
K_n^{\eps,k}\circ C_n^{\theta,k}(h) = \sum_{\ell=0}^h \left(Z_n^{\theta,k}(\ell)\right)^2.
\end{align*}
Then
\begin{align*}
\frac{1}{\eps_n^3 n^{2}}K_n^{\eps,k} \circ C_n^{\theta,k}(\fl{\eps_n^{-1}t}) &=\frac{1}{\theta_n^3 n} \int_0^{\fl{n^{1/3}\theta_n^{-1}t}} \left(Z_n^{\theta,k}(\fl{u}) \right)^2\,du\\
&=\frac{1}{\theta_n^3 n}\int_0^t \left(Z_n^{\theta,k}(\fl{n^{1/3}\theta_n^{-1} s}) \right)^2 n^{1/3} \theta_n^{-1}\,ds + o(1)\\
&=\int_0^t \frac{1}{\theta_n^4 n^{2/3}}\left( Z_n^{\theta,k}(\fl{n^{1/3} \theta_n^{-1}s}) \right)^2\,ds + o(1)
\\
& \Longrightarrow \int_0^t \left(z(s)\right)^2\,ds,
\end{align*} where the $o(1)$ term vanishes as $n\to\infty$.
Just as in Step 3 of the proof of Theorem \ref{thm:kconv}, we can go from the convergence above to the convergence
\begin{equation*}
\left(\frac{1}{\eps_n^3 n^{8/3}} K_n^{\eps,k} (\fl{\eps_n n t});t\ge 0\right) \Longrightarrow \left(\int_0^{\inf\{u: c(u)>t\}} z(s)^2\,ds\right),
\end{equation*} where $c(t) = \int_0^t z(s)\,ds$. However, by \eqref{eqn:detz} and the paragraph thereafter,
$$
z(t) = f\circ c(t), \qquad \text{where }f(t) = x+\lambda t -\frac{1}{2}t^2,
$$ and $c:[0,\infty) \to [0,\lambda + \sqrt{2x+\lambda^2})$.
Hence, by \cite[Chapter 0]{RY99},
$$
\int_0^{\inf\{u:c(u)>t\}} z(s)^2\,ds = \int_0^{c^{-1}(t)} f(c(s))\,dc(s) = \int_0^{t\wedge (\lambda + \sqrt{2x+\lambda^2})} f(s)\,ds = xt+\frac{\lambda t^2}{2}-\frac{t^3}{6} \vee 0.
$$
\end{proof}
We can prove Proposition \ref{prop:components}, which is just a corollary of Lemma \ref{thm:genz}.
\begin{proof}[Proof of Proposition \ref{prop:components}]
The proof comes from the following general observation. If $f_n,f\in \mathbb{D}(\mathbb{R}_+,\mathbb{R}_+)\cap L^1(\mathbb{R}_+,dx)$ and $f_n\to f$ in the $J_1$ topology, then
$$
\lim_{n\to\infty} \int_0^\infty f_n(t)\,dt \ge \lim_{n\to\infty} \int_0^T f_n(t)\,dt = \int_0^T f(t)\,dt.
$$ By taking $T$ large enough, one can make $\int_0^T f(t)\,dt$ arbitrarily close to $\int_0^\infty f(t)\,dt.$
The proof is finished by the following observation, where we again write $\theta = \theta_n = n^{1/3}\eps_n$:
\begin{align*}
n^{-1/3} \eps_n A_n^\eps(k) &= n^{-1/3}\eps_n\sum_{h\ge 0} Z_n^{\theta,k}(h)\\
&= n^{-2/3}\theta_n \sum_{h\ge 0} Z_n^{\theta,k}(h)\\
&= n^{-2/3} \theta_n \int_0^\infty Z_n^{\theta,k}(\fl{u})\,du\\
&= n^{-2/3} \theta_n \int_0^\infty Z_n^{\theta,k}(\fl{\theta_n^{-1}n^{1/3}t}) \frac{n^{1/3}}{\theta_n}\,dt\\
&= \int_0^\infty \tilde{Z}_n^{\theta,k}(t)\,dt.
\end{align*}
\end{proof}
\subsection{A Conjecture}
The scaling found in Corollary 1 in \cite{CPU17} tells us that, under reasonable conditions (see \cite{CPU13}), if a breadth-first walk $X_n = (X_n(k);k=0,1,\dots)$ has a rescaled limit in the Skorohod space
\begin{equation*}
\left(\frac{\alpha_n}{\gamma_n} X_n(\fl{\gamma_n t});t\ge0 \right) \Longrightarrow \left( X(t);t\ge0 \right),
\end{equation*} then the process $Z_n = (Z_n(h);h\ge0)$ defined as a solution to the difference equation
\begin{equation}\label{eqn:lamp.disc}
Z_n(h) = X_n\circ C_n(h-1),\qquad C_n(h) = \sum_{j=0}^h Z_n(j),
\end{equation} has the rescaled limit
\begin{equation}\label{eqn:lamp.cts}
\left(\frac{\alpha_n}{\gamma_n} Z_{n}(\fl{\alpha_n t});t\ge 0 \right) \Longrightarrow \left(Z(t);t\ge0 \right),
\end{equation} where $Z$ is the unique solution to $$
Z(t) = X\left(\int_0^t Z(s)\,ds\right).
$$
Even though we have no breadth-first walk in this work to which we can apply the discrete Lamperti transform \eqref{eqn:lamp.disc}, we did use the continuous analog \eqref{eqn:lamp.cts} and the breadth-first walk in \cite{A97_1} to formulate Lemma \ref{thm:zconv}. This is precisely the content of parts (2) and (3) of Lemma \ref{lem:uniquenessLemma}. One can ask whether the breadth-first walk for $\mathscr{G}_n^\theta$, constructed as Aldous constructs his walk in \cite{A97_1}, satisfies a scaling limit. We formulate this as a conjecture:
\begin{conj}
Suppose that $\theta_n$ satisfies \eqref{eqn:thetan}. Let $X_n = (X_n(k);k = 0,1,\dots)$ be the breadth-first walk on $\mathscr{G}_n^\theta$ described in \cite{A97_1} for the $\mathscr{G}_n$ model. Then, in the Skorohod space $\mathbb{D}(\mathbb{R}_+,\mathbb{R})$ the following convergence holds:
\begin{equation*}
\left(\frac{1}{n^{1/3} \theta_n^2} X_n(\fl{n^{2/3} \theta_n t});t\ge0 \right) \Longrightarrow \left(\lambda t -\frac{1}{2}t^2 ;t\ge0 \right).
\end{equation*}
\end{conj}
\begin{thebibliography}{10}
\bibitem{ABG10}
L.~Addario-Berry, N.~Broutin, and C.~Goldschmidt.
\newblock Critical random graphs: limiting constructions and distributional
properties.
\newblock {\em Electron. J. Probab.}, 15:no. 25, 741--775, 2010.
\bibitem{ABG12}
L.~Addario-Berry, N.~Broutin, and C.~Goldschmidt.
\newblock The continuum limit of critical random graphs.
\newblock {\em Probab. Theory Related Fields}, 152(3-4):367--406, 2012.
\bibitem{A91}
D.~Aldous.
\newblock The continuum random tree. {I}.
\newblock {\em Ann. Probab.}, 19(1):1--28, 1991.
\bibitem{A90}
D.~Aldous.
\newblock The continuum random tree. {II}. {A}n overview.
\newblock In {\em Stochastic analysis ({D}urham, 1990)}, volume 167 of {\em
London Math. Soc. Lecture Note Ser.}, pages 23--70. Cambridge Univ. Press,
Cambridge, 1991.
\bibitem{A93}
D.~Aldous.
\newblock The continuum random tree. {III}.
\newblock {\em Ann. Probab.}, 21(1):248--289, 1993.
\bibitem{A97_1}
D.~Aldous.
\newblock Brownian excursions, critical random graphs and the multiplicative
coalescent.
\newblock {\em Ann. Probab.}, 25(2):812--854, 1997.
\bibitem{BM90}
A.~Barbour and D.~Mollison.
\newblock Epidemics and random graphs.
\newblock {\em Lect. Notes Biomath.}, 86:86--89, 01 1990.
\bibitem{BSW17}
S.~Bhamidi, S.~Sen, and X.~Wang.
\newblock Continuum limit of critical inhomogeneous random graphs.
\newblock {\em Probab. Theory Related Fields}, 169(1-2):565--641, 2017.
\bibitem{BHS18}
S.~Bhamidi, R.~van~der Hofstad, and S.~Sen.
\newblock The multiplicative coalescent, inhomogeneous continuum random trees,
and new universality classes for critical random graphs.
\newblock {\em Probab. Theory Related Fields}, 170(1-2):387--474, 2018.
\bibitem{Billingsley99}
P.~Billingsley.
\newblock {\em Convergence of probability measures}.
\newblock Wiley Series in Probability and Statistics: Probability and
Statistics. John Wiley \& Sons, Inc., New York, second edition, 1999.
\newblock A Wiley-Interscience Publication.
\bibitem{B84a}
B.~Bollob\'{a}s.
\newblock The evolution of random graphs.
\newblock {\em Trans. Amer. Math. Soc.}, 286(1):257--274, 1984.
\bibitem{B84b}
B.~Bollob\'{a}s.
\newblock The evolution of sparse graphs.
\newblock In {\em Graph theory and combinatorics ({C}ambridge, 1983)}, pages
35--57. Academic Press, London, 1984.
\bibitem{B85}
B.~Bollob\'{a}s.
\newblock {\em Random graphs}.
\newblock Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], London,
1985.
\bibitem{CLU09}
M.~E. Caballero, A.~Lambert, and G.~Uribe~Bravo.
\newblock Proof(s) of the {L}amperti representation of continuous-state
branching processes.
\newblock {\em Probab. Surv.}, 6:62--89, 2009.
\bibitem{CPU13}
M.~E. Caballero, J.~L. P\'{e}rez~Garmendia, and G.~Uribe~Bravo.
\newblock A {L}amperti-type representation of continuous-state branching
processes with immigration.
\newblock {\em Ann. Probab.}, 41(3A):1585--1627, 2013.
\bibitem{CPU17}
M.~E. Caballero, J.~L. P\'{e}rez~Garmendia, and G.~Uribe~Bravo.
\newblock Affine processes on {$\mathbb R_+^m\times\mathbb R^n$} and
multiparameter time changes.
\newblock {\em Ann. Inst. Henri Poincar\'{e} Probab. Stat.}, 53(3):1280--1304,
2017.
\bibitem{Clancy19}
D.~{Clancy, Jr}.
\newblock {The Gorin-Shkolnikov identity and its random tree generalization}.
\newblock {\em arXiv e-prints}, page arXiv:1910.08672, Oct 2019.
\bibitem{CKG20}
G.~{Conchon--Kerjan} and C.~{Goldschmidt}.
\newblock {The stable graph: the metric space scaling limit of a critical
random graph with i.i.d. power-law degrees}.
\newblock {\em arXiv e-prints}, page arXiv:2002.04954, Feb. 2020.
\bibitem{DKLP10}
J.~Ding, J.~H. Kim, E.~Lubetzky, and Y.~Peres.
\newblock Diameters in supercritical random graphs via first passage
percolation.
\newblock {\em Combin. Probab. Comput.}, 19(5-6):729--751, 2010.
\bibitem{DKLP11}
J.~Ding, J.~H. Kim, E.~Lubetzky, and Y.~Peres.
\newblock Anatomy of a young giant component in the random graph.
\newblock {\em Random Structures Algorithms}, 39(2):139--178, 2011.
\bibitem{DL06}
R.~G. Dolgoarshinnykh and S.~P. Lalley.
\newblock Critical scaling for the {SIS} stochastic epidemic.
\newblock {\em J. Appl. Probab.}, 43(3):892--898, 2006.
\bibitem{Duquesne09}
T.~Duquesne.
\newblock Continuum random trees and branching processes with immigration.
\newblock {\em Stochastic Process. Appl.}, 119(1):99--129, 2009.
\bibitem{LD02}
T.~Duquesne and J.-F. Le~Gall.
\newblock Random trees, {L}\'{e}vy processes and spatial branching processes.
\newblock {\em Ast\'{e}risque}, (281):vi+147, 2002.
\bibitem{ER60}
P.~Erd\H{o}s and A.~R\'{e}nyi.
\newblock On the evolution of random graphs.
\newblock {\em Magyar Tud. Akad. Mat. Kutat\'{o} Int. K\"{o}zl.}, 5:17--61,
1960.
\bibitem{EK86}
S.~N. Ethier and T.~G. Kurtz.
\newblock {\em Markov processes}.
\newblock Wiley Series in Probability and Mathematical Statistics: Probability
and Mathematical Statistics. John Wiley \& Sons, Inc., New York, 1986.
\newblock Characterization and convergence.
\bibitem{GHS18}
C.~{Goldschmidt}, B.~{Haas}, and D.~{S{\'e}nizergues}.
\newblock {Stable graphs: distributions and line-breaking construction}.
\newblock {\em arXiv e-prints}, page arXiv:1811.06940, Nov 2018.
\bibitem{GS18}
V.~Gorin and M.~Shkolnikov.
\newblock Stochastic {A}iry semigroup through tridiagonal matrices.
\newblock {\em Ann. Probab.}, 46(4):2287--2344, 2018.
\bibitem{JS87}
J.~Jacod and A.~N. Shiryaev.
\newblock {\em Limit theorems for stochastic processes}, volume 288 of {\em
Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of
Mathematical Sciences]}.
\newblock Springer-Verlag, Berlin, 1987.
\bibitem{LS19}
P.~Y.~G. Lamarre and M.~Shkolnikov.
\newblock Edge of spiked beta ensembles, stochastic {A}iry semigroups and
reflected {B}rownian motions.
\newblock {\em Ann. Inst. Henri Poincar\'{e} Probab. Stat.}, 55(3):1402--1438,
2019.
\bibitem{Lamperti67}
J.~Lamperti.
\newblock Continuous state branching processes.
\newblock {\em Bull. Amer. Math. Soc.}, 73:382--386, 1967.
\bibitem{LL98b}
J.-F. Le~Gall and Y.~Le~Jan.
\newblock Branching processes in {L}\'{e}vy processes: {L}aplace functionals of
snakes and superprocesses.
\newblock {\em Ann. Probab.}, 26(4):1407--1432, 1998.
\bibitem{LL98a}
J.-F. Le~Gall and Y.~Le~Jan.
\newblock Branching processes in {L}\'{e}vy processes: the exploration process.
\newblock {\em Ann. Probab.}, 26(1):213--252, 1998.
\bibitem{Luczak98}
T.~{\L}uczak.
\newblock Random trees and random graphs.
\newblock In {\em Proceedings of the {E}ighth {I}nternational {C}onference
``{R}andom {S}tructures and {A}lgorithms'' ({P}oznan, 1997)}, volume~13,
pages 485--500, 1998.
\bibitem{LPW94}
T.~{\L}uczak, B.~Pittel, and J.~C. Wierman.
\newblock The structure of a random graph at the point of the phase transition.
\newblock {\em Trans. Amer. Math. Soc.}, 341(2):721--748, 1994.
\bibitem{MartinLof98}
A.~Martin-L\"{o}f.
\newblock The final size of a nearly critical epidemic, and the first passage
time of a {W}iener process to a parabolic barrier.
\newblock {\em J. Appl. Probab.}, 35(3):671--682, 1998.
\bibitem{RY99}
D.~Revuz and M.~Yor.
\newblock {\em Continuous martingales and Brownian motion}, volume 293 of {\em
Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of
Mathematical Sciences]}.
\newblock Springer-Verlag, Berlin, third edition, 1999.
\bibitem{RW10}
O.~Riordan and N.~Wormald.
\newblock The diameter of sparse random graphs.
\newblock {\em Combin. Probab. Comput.}, 19(5-6):835--926, 2010.
\bibitem{Silverstein67}
M.~L. Silverstein.
\newblock A new approach to local times.
\newblock {\em J. Math. Mech.}, 17:1023--1054, 1967/1968.
\bibitem{Simatos15}
F.~Simatos.
\newblock State space collapse for critical multistage epidemics.
\newblock {\em Adv. in Appl. Probab.}, 47(3):715--740, 2015.
\bibitem{vBML80}
B.~von Bahr and A.~Martin-L\"{o}f.
\newblock Threshold limit theorems for some epidemic processes.
\newblock {\em Adv. in Appl. Probab.}, 12(2):319--349, 1980.
\bibitem{Whitt80}
W.~Whitt.
\newblock Some useful functions for functional limit theorems.
\newblock {\em Math. Oper. Res.}, 5(1):67--85, 1980.
\bibitem{YW71}
T.~Yamada and S.~Watanabe.
\newblock On the uniqueness of solutions of stochastic differential equations.
\newblock {\em J. Math. Kyoto Univ.}, 11:155--167, 1971.
\end{thebibliography}
\end{document}
\begin{document}
\begin{frontmatter}
\title{An Efficient Data Structure for\\ Dynamic Two-Dimensional Reconfiguration\tnoteref{ccc}\tnoteref{arcs}}
\tnotetext[ccc]{This work was supported by the DFG Research Group
FOR-1800, ``Controlling Concurrent Change'', under contract number
FE407/17-1.}
\tnotetext[arcs]{A preliminary extended abstract of this
paper appears in ARCS2016~\cite{ARCS}.}
\author[tubs]{S\'andor P.\ Fekete\corref{cor}}\ead{[email protected]}
\author[tubs]{Jan-Marc Reinhardt}\ead{[email protected]}
\author[tubs]{Christian Scheffer}\ead{[email protected]}
\address[tubs]{Department of Computer Science, TU Braunschweig,
Germany.}
\cortext[cor]{Corresponding author}
\begin{abstract}
In the presence of dynamic insertions and deletions into a partially reconfigurable FPGA,
fragmentation is unavoidable. This poses the challenge of developing efficient approaches to
dynamic defragmentation and reallocation.
One key aspect is to develop efficient
algorithms and data structures that exploit the two-dimensional geometry
of a chip, instead of just one.
We propose a new method for this task,
based on the fractal structure of a quadtree,
which allows dynamic segmentation of the chip area,
along with dynamically adjusting the necessary communication
infrastructure. We describe a number of algorithmic
aspects, and present different solutions.
{\color{black} We also provide a number of basic simulations that indicate that the theoretical worst-case bound may be
pessimistic.}
\end{abstract}
\begin{keyword}
FPGAs \sep partial reconfiguration \sep two-dimensional reallocation
\sep defragmentation \sep dynamic data structures \sep insertions and
deletions
\end{keyword}
\end{frontmatter}
\section{Introduction}
\label{sec:intro}
In recent years, a wide range of methodological developments on FPGAs
{\color{black} aim at combining} the performance of an ASIC implementation
with the flexibility of software realizations. One important development
is partial runtime reconfiguration, which allows overcoming significant area overhead,
monetary cost, higher power consumption, or speed penalties (see e.g.~\cite{rose_FPGAgap}).
As described in~\cite{fks-ddrd-12}, the idea is to load a sequence of different
modules by partial runtime reconfiguration.
In a general setting, we are faced with a dynamically changing set of modules,
which may be modified by deletions and insertions. Typically, there is no full
a-priori knowledge of the arrival or departure of modules, i.e., we have to deal
with an online situation. The challenge is to ensure that
arriving modules can be allocated. Because previously deleted modules may
have been located in different areas of the layout, free space may be fragmented,
making it necessary to {\em relocate} existing modules in order to provide
sufficient area. In principle, this can be achieved by completely {\em defragmenting}
the layout when necessary; however, the lack of control over
the module sequence makes it hard to avoid frequent full defragmentation,
resulting in expensive operations for insertions if a na\"ive approach is used.
Dynamic insertion and deletion are classic problems of Computer Science.
Many data structures (from simple
to sophisticated) have been studied
that result in low-cost operations and efficient maintenance of
a changing set of objects. These data structures are mostly
one-dimensional (or even dimensionless) by nature, making it hard to
fully exploit the 2D nature of an FPGA. In this
paper, we propose a 2D data structure based on a quadtree
for maintaining the module layout under partial reconfiguration and reallocation.
The key idea is to control the overall structure of the layout,
such that future insertions can be performed with a limited
amount of relocation, even when free space is limited.
Our main contribution is to introduce a 2D
approach that is able to achieve provable constant-factor efficiency
for different types of relocation cost. To this end,
we give detailed mathematical proofs for a slightly simplified setting, along
with sketches of extensions to the more general cases. {\color{black} We also provide
basic simulation runs for various scenarios, indicating the quality of
our approach.}
The rest of this paper is organized as follows. The following Section~2
provides a survey of related work. For better accessibility of the key
ideas and due to limited space, our technical description
in Section~3, Section~4, and Section~5 focuses on the case of discretized quadratic modules
on a quadratic chip area. We discuss in Section~6 how general
rectangles can be dealt with, with corresponding {\color{black} simulations}
in Section~7. {\color{black}Along the same lines, we do not explicitly elaborate on
the dynamic maintenance of the communication infrastructure; see Figure~\ref{fig:config} for the basic
idea. Further details are left to future work, with groundwork laid in~\cite{meyer}.}
\section{Related Work}
The problem considered in our paper has a resemblance to one-dimensional
{\em dynamic storage allocation}, in which a sequence of storage requests of varying
size have to be assigned to a block of memory cells, such
that the length of each block corresponds to the size of the request.
In its classic form (without virtual memory), this block needs to be contiguous;
in our setting, contiguity of two-dimensional allocation is a must, as reconfigurable devices
do not provide techniques such as paging and virtual memory.
Once the allocation has been performed,
it is static in space: after a block has been occupied,
it will remain fixed until the corresponding data is no longer needed
and the block is released. As a consequence, a sequence of
allocations and releases can result in fragmentation of
the memory array, making it hard or even impossible to store
new data.
On the practical side,
classic buddy systems partition the one-dimensional storage into a number of standard block
sizes and allocate a block in a smallest free standard interval
to contain it. Differing only in the
choice of the standard size, various systems have been
proposed \cite{Bromley80,Hinds75,Hirs73,Know65,Shen74}.
Newer approaches based on cache-oblivious structures
in memory hierarchies
include Bender et al.~\cite{Bender05,Bender05a}.
Theoretical work on one-dimensional contiguous allocation
includes
Bender and Hu~\cite{bender_adaptive_2007}, who consider
maintaining $n$ elements in sorted
order, with not more than $O(n)$ space.
Bender et
al.~\cite{bender_maintaining_2009} aim at reducing
fragmentation when maintaining $n$ objects that require
contiguous space. Fekete et
al.~\cite{fks-ddrd-12} study
complexity results and consider practical applications on FPGAs.
Reallocations have also been studied in the context of heap
allocation. Bendersky and Petrank~\cite{bendersky_space_2012} observe
that full compaction, i.e., creating a contiguous block of free space
on the heap,
is prohibitively expensive and consider partial compaction.
Cohen and Petrank~\cite{cohen_limitations_2013} extend these
to practical applications.
Bender et al.~\cite{bender_cost-oblivious_2014}
describe a strategy that achieves good amortized movement costs
for reallocations, where allocated blocks {\color{blue}can} be moved at a cost to a new position that is disjoint
from the old position.
Another paper by the same authors~\cite{bender_reallocation_2014} deals
with reallocations in the context of scheduling.
Examples for packing problems in applied computer science come from
allocating FPGAs. Fekete et al.~\cite{fekete_efficient_2014} examined
a problem dealing with the allocation of different types of resources
on an FPGA that had to satisfy additional properties. For example, to
achieve specified clock frequencies, diameter restrictions had to be
obeyed by the packing. The authors were able to solve the problem
using integer linear programming techniques.
Over the years, a large variety of methods and results for
allocating storage have been proposed. The classical sequential fit
algorithms, First Fit, Best Fit, Next Fit and Worst Fit can be found
in Knuth~\cite{Knuth97} and Wilson et al.~\cite{Wils95}.
These are closely related to problems of offline and online packing of
two-dimensional objects. One of the earliest considered packing variants is the problem of finding
a dense packing of a known set of squares for a rectangular container; see
Moser~\cite{m66}, Moon and Moser~\cite{mm67} and
Kleitman and Krieger~\cite{kk70}, as well as more recent work by
Novotn{\'y}~\cite{n95,n96} and Hougardy~\cite{h11}.
There is also a considerable number of other related work on offline packing squares, cubes, or hypercubes;
see~\cite{ck-soda04,js-ptas08,h09} for prominent examples.
The {\em online} version of square packing has been studied by
Januszewski and Lassak~\cite{jl97} and Han et al.~\cite{hiz08}, with more recent
progress due to Fekete and Hoffmann~\cite{fh-ossp-13,fh-ossp-17}.
A different kind of online square packing was considered by
Fekete et al.~\cite{fks-osp-09,fks-ospg-14}. The container is an unbounded strip,
into which objects enter from above in a Tetris-like fashion; any new
object must come to rest on a previously placed object, and the
path to its final destination must be collision-free.
There are various ways to generalize the online packing of squares; see Epstein and van Stee~\cite{es-soda04,es05,es07} for online bin packing variants
in two and higher dimensions. In this context, also see parts of Zhang et al.~\cite{zcchtt10}.
A natural generalization of online packing of squares is online packing of rectangles,
which have also received a serious amount of attention. Most notably, online strip packing
has been considered; for prominent examples, see Azar and Epstein~\cite{ae-strip97}, who employ
shelf packing, and Epstein and van Stee~\cite{es-soda04}.
Offline packing of rectangles into a unit square or rectangle has also been considered
in different variants; for examples, see \cite{fgjs05}, as well as \cite{jz-profit07}.
Particularly interesting for methods for online packing into a single container may be the work by Bansal et
al.~\cite{bcj-struct-09}, who show that for any complicated packing of rectangular items into a rectangular container,
there is a simpler packing with almost the same value of items. For another variant of online allocation, see~\cite{frs-csdaosa-14},
which extends previous work on optimal shapes for allocation~\cite{bbd-wosc-04}.
From within the FPGA community, there is a huge amount of related work
dealing with problems related to relocation.
Becker et al.~\cite{blc-erpbr-07} present a method for
enhancing the relocability of partial
bitstreams for FPGA runtime configuration, with a special focus on
heterogeneities. They study the underlying prerequisites and
technical conditions for dynamic relocation.
Gericota et al.~\cite{gericota05} present a relocation procedure for
Configurable Logic Blocks (CLBs) that is able to carry out online
rearrangements, defragmenting the available FPGA resources without
disturbing functions currently running. Another relevant approach was
given by Compton et al.~\cite{clckh-crdrt-02}, who present a new
reconfigurable architecture design extension based on the ideas of
relocation and defragmentation.
Koch et al.~\cite{kabk-faepm-04} introduce efficient hardware extensions to
typical FPGA architectures in order to allow hardware task
preemption.
These papers do not consider the algorithmic implications and how the
relocation capabilities can be exploited
to optimize module layout in a fast, practical fashion, which is what we consider in this paper. Koester et
al.~\cite{koester07} also address the problem of
defragmentation. Different defragmentation algorithms that minimize
different types of costs are analyzed.
The general concept of defragmentation is well known, and has been applied to
many fields, e.g., it is typically employed for memory management. Our approach
is significantly different from defragmentation techniques which have been
conceived so far: these require a freeze of the system, followed by
a computation of the new layout and a complete reconfiguration of all modules
at once. Instead, we just copy one module
at a time, and simply switch the execution to the new module as soon as the move is complete.
{\color{blue} This concept aims at providing a seamless, dynamic defragmentation
of the module layout, eventually resulting in much better
utilization of the available space for modules.} All this
makes our work a two-dimensional extension of the one-dimensional approach
described in \cite{fks-ddrd-12}.
\section{Preliminaries}
We are faced with an (online) sequence of configuration requests
that are to be carried out on a rectangular chip area.
A request may consist of {\em deleting} an existing module, which
simply means that the module may be terminated and
its occupied area can be released to free space.
On the other hand,
a request may consist of {\em inserting} a new module,
requiring an axis-aligned, rectangular module
to be allocated to an unoccupied section of the chip;
if necessary, this may require rearranging the
allocated modules in order to create free space of
the required dimensions, incurring some cost.
Previous work on reallocation problems of this type
has focused on one-dimensional approaches.
Using these in a two-dimensional setting does not result in
satisfactory performance.
The main contribution of our paper is to demonstrate a two-dimensional
approach that is able to achieve an efficiency that is provably within a constant factor of the optimum,
even in the worst case, which
requires a variety of mathematical details.
{\color{black} For better accessibility of the key ideas,
our technical description in the rest of this Section~3,
as well as in Section~4 and Section~5 focuses on the case of quadratic modules
on a quadratic chip area. Section~6 addresses how to deal with general
rectangles.}
The rest of this section provides technical notation and descriptions.
A square is called \emph{aligned} if its edge length equals $2^{-r}$
for some $r \in \mathbb{N}z$. It is called an $r$-square if its edge length is
$2^{-r}$ for a specific $r \in \mathbb{N}z$. The \emph{volume} of an $r$-square $Q$ is $|Q|=4^{-r}$.
A {\em quadtree} is a rooted tree in which every node has either four
children or none. As a quadtree can be interpreted as the subdivision
of the unit square into nested $r$-squares, we can use quadtrees to
describe certain packings of aligned squares into the unit square.
\begin{defi}
A \emph{(quadtree) configuration} $T$ assigns a set of axis-aligned squares to the
nodes of a quadtree. The nodes with a distance $j$ to the root of the
quadtree form \emph{layer} $j$. Nodes are also called \emph{pixels}
and pixels in layer $j$ are called \emph{$j$-pixels}. Thus,
$j$-squares can only be assigned to $j$-pixels. A
pixel $p$ \emph{contains} a square $s$ if $s$ is assigned to $p$ or
one of the children of $p$ contains $s$. A $j$-pixel that has
an assigned $j$-square is \emph{occupied}.
For a pixel $p$ that is not occupied, with $P$ the unique path from $p$ to the root,
we call $p$
\begin{itemize}
\item \emph{blocked} if there is a $q \in P$ that is occupied,
\item \emph{free} if it is not blocked,
\item \emph{fractional} if it is free and contains a square,
\item \emph{empty} if it is free but not fractional,
\item \emph{maximally empty} if it is empty but its parent is not.
\end{itemize}
The \emph{height $h(T)$} of a configuration $T$ is defined as $0$ if the root of $T$ is empty, and otherwise as the maximum $i+1$ such that $T$ contains an $i$-square.
\end{defi}
\begin{obs}\label{obs:disjoint}
Let $p \ne q$ be two maximally empty pixels and $P$ and $Q$ be the
paths from the root to $p$ and $q$, respectively. Then $p \notin Q$
and $q \notin P$.
\end{obs}
\begin{proof}
Without loss of generality, it is sufficient to show $p \notin Q$.
Assume $p \in Q$. Let $r \in Q$ be the parent of $q$. As $p$ is
maximally empty and $r$ is on the path from $p$ to $q$, $r$ must be
empty. However, that would imply that $q$ is not maximally empty, in
contradiction to the assumption.\qed
\end{proof}
The \emph{(remaining) capacity $\mathrm{cap}(p)$} of a $j$-pixel $p$
is defined as $0$ if $p$ is occupied or blocked and as $4^{-j}$ if $p$ is empty. Otherwise, $\mathrm{cap}(p) := \sum_{p' \in C(p)} \mathrm{cap}(p')$, where $C(p)$ is the set of children of $p$. The \emph{(remaining)
capacity} of $T$, denoted $\mathrm{cap}(T)$, is the remaining
capacity of the root of $T$.
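To make this bookkeeping concrete, the following minimal Python sketch (an illustration only, not the data structure proposed in this paper) represents a quadtree configuration and evaluates remaining capacities exactly as defined above.
\begin{verbatim}
# Illustration only: a quadtree configuration and its remaining capacity.
class Pixel:
    def __init__(self, layer):
        self.layer = layer        # j for a j-pixel
        self.occupied = False     # True if a j-square is assigned to it
        self.children = []        # either empty or exactly four child pixels

    def subdivide(self):
        self.children = [Pixel(self.layer + 1) for _ in range(4)]
        return self.children

    def cap(self):
        """0 if occupied (its descendants are blocked), 4^{-j} if empty,
        otherwise the sum of the children's capacities."""
        if self.occupied:
            return 0.0
        if not self.children:
            return 4.0 ** (-self.layer)
        return sum(child.cap() for child in self.children)

# Example: one occupied 1-pixel, one fractional 1-pixel containing an
# occupied 2-pixel, and two empty 1-pixels.
root = Pixel(0)
a, b, c, d = root.subdivide()
a.occupied = True
b.subdivide()[0].occupied = True
print(root.cap())   # 0 + 3/16 + 1/4 + 1/4 = 0.6875
\end{verbatim}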
\begin{lem}\label{lem:fullcap}
Let $p_1, p_2, \ldots, p_k$ be all maximally empty pixels of a quadtree
configuration $T$. Then we have $\mathrm{cap}(T) = \sum_{i=1}^k \mathrm{cap}(p_i)$.
\end{lem}
\begin{proof}
The claim follows directly from the definition of the capacity, as the only
positive capacities considered for $\mathrm{cap}(T)$ are exactly those
of the maximally empty pixels. \qed
\end{proof}
\begin{figure}
\caption{A quadtree configuration and the corresponding packing of aligned squares into the unit square.}
\label{fig:config}
\end{figure}
See Figure~\ref{fig:config} for an example of a quadtree configuration
and the corresponding packing of aligned squares in the unit square.
Quadtree configurations are transformed using \emph{moves}
(\emph{reallocations}). A $j$-square $s$ assigned to a $j$-pixel
$p$ can be \emph{moved} (\emph{reallocated}) to another $j$-pixel $q$
by creating a new assignment from $q$ to $s$ and deleting the old
assignment from $p$ to $s$. $q$ must have been empty for this to be
allowed.
We allow only one move at a time. For example, two squares cannot
change places unless there is a sufficiently large pixel to
temporarily store one of them. Furthermore, we do not put limitations
on how to transfer a square from one place to another, i.e., we can
always move a square even if there is no collision-free path between
the origin and the destination.
\begin{defi}
A fractional pixel is \emph{open} if at least one of its children is
(maximally) empty. A configuration is called \emph{compact} if there
is at most one open $j$-pixel for every $j \in \mathbb{N}z$.
\end{defi}
In (one-dimensional) storage allocation and scheduling, there are
techniques that avoid reallocations by requiring more space than the
sum of the sizes of the allocated
pieces. See Bender et al.~\cite{bender_reallocation_2014} for an
example. From there we adopt the term \emph{underallocation}. In particular, given two squares $s_1$ and $s_2$, $s_2$ is an $x$-underallocated copy
of $s_1$, if $|s_2| = x \cdot |s_1|$ for $x > 1$.
\begin{defi}
A \emph{request} has one of the forms \textsc{Insert($x$)} or
\textsc{Delete($x$)}, where $x$ is a unique identifier for a
square. Let $v \in [0, 1]$ be the volume of the square $x$. The
\emph{volume} of a request $\sigma$ is defined as
\[
\mathrm{vol}(\sigma) = \left\{ \begin{array}{ccl}
v & \text{if} & \sigma=\textsc{Insert($x$)},\\
-v & \text{if} & \sigma=\textsc{Delete($x$)}.
\end{array} \right.
\]
\end{defi}
\begin{defi}
A sequence of requests $\sigma_1, \sigma_2, \ldots, \sigma_k$
is \emph{valid} if $\sum_{i=1}^j \mathrm{vol}(\sigma_i) \le 1$ holds
for every $j=1,2,\ldots,k$. It is called \emph{aligned}, if
$|\mathrm{vol}(\sigma_j)| = 4^{-\ell_j}, \ell_j \in \mathbb{N}z,$
where $|.|$ denotes the absolute value, holds for every
$j=1,2,\ldots,k$, i.e., if only aligned squares are packed.
\end{defi}
Our goal is to minimize the costs of reallocations. Costs can be
measured in different ways, for example in the number of moves or the
reallocated volume.
\begin{defi}
Assume we fulfill a request $\sigma$ and as a consequence reallocate
a set of squares $\{s_1, s_2, \ldots, s_k\}$. The \emph{movement
cost} of $\sigma$ is defined as $c_{\mathrm{move}}(\sigma) = k$,
the \emph{total volume cost} of $\sigma$ is defined as
$c_{\mathrm{total}}(\sigma) = \sum_{i=1}^k |s_i|$,
and the \emph{(relative) volume cost} of $\sigma$ is defined as
$c_{\mathrm{vol}}(\sigma) = \frac{c_{\mathrm{total}}(\sigma)}{|\mathrm{vol}(\sigma)|}$.
\end{defi}
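For example, if fulfilling a request $\sigma=\textsc{Insert($x$)}$ for a $2$-square forces the reallocation of one $3$-square and two $4$-squares, then $c_{\mathrm{move}}(\sigma) = 3$, $c_{\mathrm{total}}(\sigma) = 4^{-3} + 2\cdot 4^{-4} = \tfrac{3}{128}$, and $c_{\mathrm{vol}}(\sigma) = \frac{3/128}{4^{-2}} = \tfrac{3}{8}$.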
\section{Inserting into a Given Configuration}
In this section we examine the problem of rearranging a given
configuration in such a way that the insertion of a new square is
possible. {\color{black}
Before we present our results in mathematical detail, including all
necessary proofs, we give a short overview of the individual
propositions and their significance: We first examine properties of
quadtree configurations culminating in Theorem~\ref{thm:qtmoves}, which
establishes that any configuration with sufficient capacity allows the
insertion of a square. Creating the required contiguous space for the insertion
comes at a cost due to required reallocations. This cost is analysed
in detail in Subsection~\ref{sec:costs}. There, we present matching
upper and lower bounds on the reallocation cost for our three cost
functions -- total volume cost (Theorems~\ref{thm:volumebound} and
\ref{thm:volumexample}), (relative) volume cost
(Corollary~\ref{cor:relvoltight}), and movement cost
(Theorems~\ref{thm:movbound} and \ref{thm:movexample}).
}
\subsection{Coping with Fragmented Allocations}\label{sec:delete}
Our strategy follows one general idea: larger empty pixels can be
built from smaller ones; e.g., four empty $i$-pixels can
be combined into one empty $(i-1)$-pixel. This can be iterated
to build an empty pixel of suitable volume.
\begin{lem}\label{lem:order}
Let $p_1, p_2, \ldots, p_k$ be a sequence of empty pixels sorted by
volume in descending order. Then
$\sum_{i=1}^k \mathrm{cap}(p_i) \ge 4^{-\ell} > \sum_{i=1}^{k-1} \mathrm{cap}(p_i)$
implies the following properties:
\begin{equation}\label{eq:four_p1}
k < 4 \Leftrightarrow k = 1
\end{equation}
\begin{equation}\label{eq:exact}
k \ge 4 \Rightarrow \sum_{i=1}^k \mathrm{cap}(p_i) = 4^{-\ell}
\end{equation}
\begin{equation}\label{eq:four_p2}
k \ge 4 \Rightarrow \mathrm{cap}(p_k) = \mathrm{cap}(p_{k-1}) =
\mathrm{cap}(p_{k-2}) = \mathrm{cap}(p_{k-3})
\end{equation}
\end{lem}
\begin{proof}
For $k \ge 2$, $p_1$ must be a pixel of smaller capacity than an
$\ell$-pixel, because otherwise we would not need $p_2$ for the sum to
be greater than $4^{-\ell}$ -- in contradiction to the
assumption. Thus, we need to add up smaller capacities to at least
$4^{-\ell}$. As we need at least four $(\ell+1)$-pixels for that,
statement~\eqref{eq:four_p1} holds.
In the following we assume $k \ge 4$. Let $x=\sum_{i=1}^{k-1}
\mathrm{cap}(p_i)$. We know from the assumption that $x$ is strictly
less than $4^{-\ell}$, but $x+\mathrm{cap}(p_k)$ is at least
$4^{-\ell}$. Consider the base-4 (quaternary) representation
of $x/4^{-\ell}$: $x_4=(x/4^{-\ell})_4$. It has a zero before the
decimal point and a sequence of base-4 digits after. Let $n$ be the
rightmost non-zero digit of $x_4$. As the sequence is sorted in
descending order and the capacities are all negative powers of four,
adding the capacity of $p_k$ can only increase $n$, or a digit right
of $n$, by one. Since all digits right of $n$ are zero, increasing one
of them by one does not increase $x$ to at least
$4^{-\ell}$. Therefore, it must increase $n$. But if increasing $n$ by
one means increasing $x$ to at least $4^{-\ell}$, then every digit of
$x_4$ after the decimal point and up to $n$ must have been
three. Consequently, increasing $n$ by one leads not only to
$x + \mathrm{cap}(p_k) \ge 4^{-\ell}$ but also to
$x + \mathrm{cap}(p_k)=4^{-\ell}$, which is statement~\eqref{eq:exact}.
Furthermore, as $n$ must have been three and the sequence is sorted,
the previous three capacities added must have each increased $n$ by
exactly one as well. This proves statement~\eqref{eq:four_p2}. \qed
\end{proof}
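As a concrete illustration of the lemma, take $\ell=0$ and empty pixels with capacities $\tfrac{1}{4},\tfrac{1}{4},\tfrac{1}{4},\tfrac{1}{16},\tfrac{1}{16},\tfrac{1}{16},\tfrac{1}{16}$. Here $k=7$, the sum of the first six capacities is $x=\tfrac{15}{16}$, i.e., $x_4=(0.33)_4$, and adding the last capacity $\tfrac{1}{16}$ raises the rightmost non-zero digit by one, so the total is exactly $4^{-\ell}=1$ and the last four capacities coincide, in accordance with \eqref{eq:exact} and \eqref{eq:four_p2}.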
\begin{figure}
\caption{Illustration to Lemma~\ref{lem:four}.}
\label{fig:four_pixels}
\end{figure}
\begin{lem}\label{lem:four}
Given a quadtree configuration $T$ with four maximally empty
$j$-pixels. Then $T$ can be transformed (using a sequence
of moves) into a configuration $T^*$ with one more maximally
empty $(j-1)$-pixel and four fewer maximally empty $j$-pixels than $T$
while retaining all its maximally empty $i$-pixels for $i < j-1$.
\end{lem}
\begin{proof}
Let $p_1, p_2, p_3$ and $p_4$ be four maximally empty $j$-pixels and
$q_1, q_2, q_3$ and $q_4$ be the parents of $p_1, p_2, p_3$ and $p_4$,
respectively. Then $q_i$ has at most three children that are not
empty. Now, we can move the at most three non-empty subtrees from one
of the $q_i$ to the others, $i=1,2,3,4$. Without loss of generality,
we choose $q_1$. Let $a, b$ and $c$ be the children of $q_1$ that are
not $p_1$. We move $a$ to $p_2$, $b$ to $p_3$ and $c$ to $p_4$. See
Figure~\ref{fig:four_pixels} for an illustration. Thus, we get a new
configuration $T^*$ with the empty $(j-1)$-pixel $q_1$ and occupied or
fractional pixels {\color{black}$q_2$, $q_3$, $q_4$}. Note that $p_1$ is still empty,
but no longer maximally empty, because its parent $q_1$ is now empty.
The construction does not affect any other maximally empty pixels. \qed
\end{proof}
\begin{thm}\label{thm:qtmoves}
Given a quadtree configuration $T$ with a remaining capacity of at least
$4^{-j}$, you can transform $T$ into a quadtree configuration $T^*$
with an empty $j$-pixel using a sequence of moves.
\end{thm}
\begin{proof}
Let $S=p_1, p_2, \ldots, p_n$ be the sequence containing all maximally
empty pixels of $T$ sorted by capacity in descending order. If the
capacity of $p_1$ is at least $4^{-j}$, then there already is an empty
$j$-pixel in $T$ and we can simply set $T^* = T$.
Assume $\mathrm{cap}(p_1) < 4^{-j}$. In this case we inductively build an empty
$j$-pixel. Let $S'=p_1, p_2, \ldots, p_k$ be the shortest prefix of
$S$ satisfying $\sum_{i=1}^{k} \mathrm{cap}(p_i) \ge 4^{-j}$.
Such a prefix has to exist because of {\color{black}Lemma~\ref{lem:fullcap}}.
Note that due to Observation~\ref{obs:disjoint} no pixel
$p_i$ is contained in another pixel $p_j$, $i, j \in
\{1,2,\ldots,k\}$, $i \ne j$.
Lemma~\ref{lem:order} tells us $k \ge 4$ and the
last four pixels in $S'$, $p_{k-3}, p_{k-2}, p_{k-1}$ and
$p_k$, are from the same layer, say layer $\ell$. Thus, we can apply
Lemma~\ref{lem:four} to $p_{k-3}, p_{k-2}, p_{k-1}, p_k$ to get a new
maximally empty $(\ell - 1)$-pixel $q$. We remove $p_{k-3}, p_{k-2},
p_{k-1}, p_k$ from $S'$ and insert $q$ into $S'$ according to its
capacity.
The length of the resulting sequence $S''$ is three less than
the length of $S'$. This does not change the sum of the capacities, since
an empty $(\ell-1)$-pixel has the same capacity as four empty
$\ell$-pixels. That is, $\sum_{p \in S'} \mathrm{cap}(p) = \sum_{p \in
S''} \mathrm{cap}(p)$ holds.
We can repeat these steps until $k < 4$ holds. Then
Lemma~\ref{lem:order} implies that $k=1$, i.e., the sequence contains
only one pixel $p_1$, and because $\mathrm{cap}(p_1)=4^{-j}$, $p_1$ is
an empty $j$-pixel. \qed
\end{proof}
\subsection{Reallocation Cost}\label{sec:costs}
Reallocation cost is made non-trivial by \emph{cascading moves}:
Reallocated squares may cause further reallocations, when there is no
empty pixel of the required size available.
\begin{obs}\label{obs:largebad}
In the worst case, reallocating an $\ell$-square is not cheaper than
reallocating four $(\ell+1)$-squares -- using any of the three defined
cost types.
\end{obs}
\begin{proof}
It is straightforward to see this for volume costs, total or relative:
Wherever you can move one $\ell$-square you can also move four
$(\ell+1)$-squares without causing more cascading moves.
For movement costs a single move of an $\ell$-square is less than four
moves of $(\ell+1)$-squares, but it can cause cascading moves of three
$(\ell+1)$-squares plus the cascading moves caused by the reallocation
of an $(\ell+1)$-square and, therefore, does not cause lower costs in
total. \qed
\end{proof}
\begin{thm}\label{thm:volumebound}
The maximum total volume cost caused by the insertion of an
$i$-square $Q$, $i \in \mathbb{N}z$,
into a quadtree configuration $T$ with
$\mathrm{cap}(T) \ge 4^{-i}$ is bounded by
\[
c_\mathrm{total,max} \le \frac{3}{4} \cdot
4^{-i} \cdot \mathrm{min} \{(s-i), i\} \in O(|Q| \cdot h(T))
\]
when the smallest previously inserted square is an $s$-square.
\end{thm}
\begin{proof}
For $s \le i$ there has to be an empty $i$-pixel in $T$, as
$\mathrm{cap}(T) \ge 4^{-i}$, and we can insert $Q$ without any moves. In
the following, we assume $s > i$.
Let $Q$ be the $i$-square to be inserted. We can assume that we do not
choose an $i$-pixel with a remaining capacity of zero to pack $Q$ -- if
there were no other pixels, $\mathrm{cap}(T)$ would be zero as
well. Therefore, the chosen pixel, say $p$, must have a remaining
capacity of at least $4^{-s}$. From Observation~\ref{obs:largebad}
follows that the worst case for $p$ would be to be filled with 3
$k$-squares, for every $i < k \le s$. Let $v_i$ be the worst-case
volume of a reallocated $i$-pixel. We get $v_i \le \sum_{j=i+1}^s \frac{3}{4^j} = 4^{-i} - 4^{-s}$.
Now we have to consider cascading moves. Whenever we move an
$\ell$-square, $\ell > i$, to make room for $Q$, we might
have to reallocate a volume of $v_{\ell}$ to make room for the
$\ell$-square. Let $x_i$ be the total volume that is at most
reallocated when inserting an $i$-square.
Then we get the recurrence $x_i = v_i + \sum_{j=i+1}^s 3 \cdot x_j$
with $x_s=v_s=0$. This resolves to $x_i=3/4 \cdot 4^{-i} \cdot (s-i)$.
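To verify this closed form, subtract the recurrence for $x_{i+1}$ from the one for $x_i$ to obtain $x_i = v_i - v_{i+1} + 4x_{i+1} = 3\cdot 4^{-(i+1)} + 4x_{i+1}$; together with $x_s=0$, a straightforward induction yields the stated formula.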
$v_i$ cannot get arbitrarily large, as the
remaining capacity must suffice to insert an $i$-square. Therefore,
if all the possible $i$-pixels contain a volume of $4^{-s}$ (if some
contained more, we would choose those and avoid the worst case), we
can bound $s$ by $4^i \cdot 4^{-s} \ge 4^{-i} \Leftrightarrow s \le 2i$,
which leads to $c_\mathrm{total,max} \le \frac{3}{4} \cdot 4^{-i} \cdot i$.
With $|Q|=4^{-i}$ and $i < s < h(T)$ we get $c_\mathrm{total,max} \in
O(|Q| \cdot h(T))$. \qed
\end{proof}
\begin{cor}\label{cor:maxtotalvol}
Inserting a square into a quadtree configuration has a total volume
cost of no more than $3/16=0.1875$.
\end{cor}
\begin{proof}
Looking at Theorem~\ref{thm:volumebound}, it is easy to see that the
worst case is attained for $i=1$, since $4^{-i}\cdot\min\{s-i,i\} \le 4^{-i}\, i$ is decreasing in $i \ge 1$: $c_\mathrm{total}
= 3/4 \cdot 4^{-1} \cdot 1 = 3/16=0.1875$. \qed
\end{proof}
\begin{figure}
\caption{The worst-case construction for volume cost for $s=6$ and
$i=3$. Every 3-pixel contains three 4-, 5-, and 6-squares with only
one remaining empty 6-pixel.}
\label{fig:wc_volume}
\end{figure}
\begin{thm}\label{thm:volumexample}
For every $i \in \mathbb{N}_0$ there are quadtree configurations $T$ for which the
insertion of an $i$-square $Q$ causes a total volume
cost of
\[
c_\mathrm{total,max} \ge \frac{3}{4} \cdot
4^{-i} \cdot \mathrm{min} \{(s-i), i\} \in \Omega(|Q| \cdot h(T))
\]
when the smallest previously inserted square is an $s$-square.
\end{thm}
\begin{proof}
We build a quadtree configuration to match the upper bound of
Theorem~\ref{thm:volumebound}. Let $s=2i$ and consider a subtree
rooted at an $i$-pixel
that contains three occupied $k$-pixels for every $i < k \le s$. They do not have
to be arranged in such a way that the single free $s$-pixel is in the
lower right corner, but the nesting structure is important. Assume all
$4^i$ $i$-pixels of $T$ are constructed in such a way. Then you have
to reallocate three $k$-squares for every $i < k \le s$. However, every
fractional $k$-pixel in the configuration in turn contains three occupied
$k'$-pixels for every $k < k' \le s$, i.e., moving every $k$-square
causes cascading moves. See Figure~\ref{fig:wc_volume} for the whole
construction for $s=6$ and $i=3$. The reallocated volume without
cascading moves adds up to $v_i = \sum_{k=i+1}^s 3 \cdot 4^{-k}$.
Including cascading moves we get $x_i = v_i + \sum_{k=i+1}^s 3 \cdot x_k$,
which resolves to $x_i=3/4 \cdot 4^{-i} \cdot (s-i)$.
With
$s=h(T)-1$, $i=s/2$ and $|Q|=4^{-i}$ we get
$c_\mathrm{total,max} \in \Omega(|Q| \cdot h(T))$. \qed
\end{proof}
As a corollary we get an upper bound for the (relative) volume cost
and a construction matching the bound.
\begin{cor}\label{cor:relvoltight}
Inserting an $i$-square into a quadtree configuration $T$ with sufficient capacity
$\mathrm{cap}(T) \ge 4^{-i}$ causes a (relative) volume cost of at
most
\[
c_\mathrm{vol,max} \le \frac{3}{4} \cdot \mathrm{min} \{(s-i), i\} \in
\Theta(h(T)),
\]
when the smallest previously inserted square is an $s$-square,
and this bound is tight, i.e., there are configurations for which the
bound is matched.
\end{cor}
It is important to note that
relative volume cost can be arbitrarily bad by increasing the height
of the configuration, as opposed to total volume cost with the upper
bound derived in
Corollary~\ref{cor:maxtotalvol}. What is more, large total volume
cost is achieved by inserting $i$-squares for small $i$, whereas
large relative volume cost is only possible for large $i$ (and large
$s-i$). This has an interesting interpretation with regard to the structure of
the quadtree: Large total volume cost can happen when you assign a
square to a node close to the root. To get large relative volume cost
you need a high quadtree and assign a square to a node roughly in the
middle (with respect to height).
The same methods we used to derive worst case bounds for volume cost
can also be used to establish bounds for movement cost, which results
in $c_\mathrm{move,max} \le 4^{\mathrm{min}\{s-i, i\}}-1 \in
O(2^{h(T)})$. A matching construction is the same as the one in the
proof of Theorem~\ref{thm:volumexample}.
\begin{thm}\label{thm:movbound}
The maximum movement cost caused by the insertion of an
$i$-square $Q$, $i \in \mathbb{N}_0$, into a quadtree configuration $T$ with
$\mathrm{cap}(T) \ge 4^{-i}$ is bounded by
\[
c_\mathrm{move,max} \le 4^{\mathrm{min}\{s-i, i\}}-1 \in O(2^{h(T)})
\]
when the smallest previously inserted square is an $s$-square.
\end{thm}
\begin{proof}
The proof is analogous to the proof of
Theorem~\ref{thm:volumebound}. We can use
Observation~\ref{obs:largebad} and formulate a new recurrence. The
number of reallocations without cascading moves caused by the
insertion of $Q$ can be bounded by $v_i \le 3(s-i)$ and including
cascading moves we get $x_i = v_i + \sum_{j=i+1}^s 3 x_j$, which
resolves to $x_i = 4^{s-i} - 1$.
As we need at least $4^{-i}$ remaining capacity to insert $Q$ we can
again deduce $s \le 2i$. With $s=h(T)-1$ we get $\mathrm{min}\{s-i,
i\} \le h(T)/2$, which results in the claimed bound. \qed
\end{proof}
\begin{thm}\label{thm:movexample}
For every $i \in \mathbb{N}_0$ there are quadtree configurations $T$ for which the
insertion of an $i$-square $Q$ causes a movement
cost of
\[
c_\mathrm{move,max} \ge 4^{\mathrm{min}\{s-i, i\}}-1 \in \Omega(2^{h(T)})
\]
when the smallest previously inserted square is an $s$-square.
\end{thm}
\begin{proof}
The example from Theorem~\ref{thm:volumexample} works here as well. As
every fractional $j$-pixel, $j < s$, contains three $(j+1)$-pixels,
you have to move three squares for every $j=i,\ldots,s-1$ and account
for cascading moves. This results in a number of moves $c_\mathrm{move,max}
\ge x_i = 3(s-i) + \sum_{j=i+1}^s 3 x_j = 4^{s-i} - 1$, where
$s=2i=h(T)-1$. \qed
\end{proof}
\section{Online Packing and Reallocation}
Applying Theorem~\ref{thm:qtmoves} repeatedly to successive configurations
yields a strategy for the dynamic allocation of aligned squares.
\begin{cor}\label{cor:strategy}
Starting with an empty square and given a valid, aligned sequence of
requests, there is a strategy that fulfills every request in the
sequence.
\end{cor}
\begin{proof}
We only have to deal with aligned squares and can use quadtree
configurations to pack the squares, since the sequence of requests
$\sigma_1, \sigma_2, \ldots, \sigma_k$ is aligned. We start with the
empty configuration that contains only one empty $0$-pixel. Thus, we
have a configuration with capacity $1$. We only have to consider
insertions, because deletions can always be fulfilled by definition.
As the sequence of requests is valid, whenever a request $\sigma_\ell$
demands to insert a $j$-square $s$, we have
$\sum_{i=1}^{\ell-1} \mathrm{vol}(\sigma_i) + 4^{-j} \le 1$, so the
remaining capacity of the current quadtree configuration $T$ is at least
$1 - \sum_{i=1}^{\ell-1} \mathrm{vol}(\sigma_i) \ge 4^{-j}$.
Therefore, we can use Theorem~\ref{thm:qtmoves} to transform $T$ into
a configuration $T^*$ with an empty $j$-pixel $p$. We assign $s$
to $p$. \qed
\end{proof}
This strategy may incur the heavy insertion cost
derived in the previous section. However, when we do not have to
work with a given configuration and have the freedom to handle all
requests starting from the empty unit square, we can use the added
flexibility to derive a more sophisticated strategy. In particular,
we can use reallocations to clean up a configuration when squares
are deleted. This can make deletions costly operations, but allows us
to eliminate insertion cost entirely.
\subsection{First-Fit Packing}
We present an algorithm that fulfills any valid, aligned sequence of
requests and does not cause any reallocations on insertions. We call
it \emph{First Fit} in imitation of the well-known technique employed
in one-dimensional allocation problems.
Given a one-dimensional
packing and a request to allocate space for an additional item,
First Fit chooses the first suitable location. In one dimension it is
trivial to define an order in which to check possible locations. For
example, assume your resources are arranged horizontally and proceed
from left to right.
{\color{black}
Finding an order in two or more dimensions
is less straightforward than in 1D. We use space-filling curves to overcome this
impediment. Space-filling curves are of theoretical interest, because
they fill the entire unit square
(i.e., their Hausdorff dimension is $2$).
More useful for us are the schemes used to create a
space-filling curve, which employ a recursive construction on the nodes
of a quadtree and become space-filling as the height of the tree
approaches infinity. In particular, they provide an order for the
nodes of a quadtree. In the following, we make use of the z-order
curve~\cite{morton_1966}.
}
First Fit assigns items to be packed to the next available position in
z-order. We denote the position
of a pixel $p$ in z-order by $z(p)$, i.e.,
$z(p) < z(q)$ if and only if $p$ comes before $q$ in z-order.
In
general, the z-order is only a partial order, as it does not make
sense to compare nodes with their parents or children. However, there
are three important occasions for which the z-order is a total order: If
you only consider pixels in one layer, if you only consider
occupied pixels, and if you only consider maximally empty pixels. In
all three cases pixels are pairwise disjoint, which
leads to a total order.
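To make the order concrete, a pixel in layer $j$ can be identified by its
layer and its integer column/row position within that layer; comparing two
disjoint pixels in z-order then amounts to comparing interleaved-bit
(Morton) keys of their corners. A minimal sketch (the exact child order
within a pixel is a convention and may differ from the one used in
Figure~\ref{fig:firstfit}):
\begin{verbatim}
def morton_key(level, x, y, max_level):
    """Z-order key of the pixel at (level, x, y), 0 <= x, y < 2**level.

    The key of the pixel's corner is computed at resolution max_level;
    for pairwise-disjoint pixels these keys give a total order.
    """
    shift = max_level - level          # promote corner to finest resolution
    x, y = x << shift, y << shift
    key = 0
    for bit in range(max_level):
        key |= ((x >> bit) & 1) << (2 * bit)       # x bit -> even position
        key |= ((y >> bit) & 1) << (2 * bit + 1)   # y bit -> odd position
    return key

# the four children of the root, listed in z-order
print(sorted(((0, 0), (1, 0), (0, 1), (1, 1)),
             key=lambda p: morton_key(1, p[0], p[1], 3)))
\end{verbatim}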
\begin{figure}
\caption{The z-order for layer 2 pixels (left); a First Fit allocation and the z-order of the occupied pixels
-- which is not necessarily the insertion order (right).}
\label{fig:firstfit}
\end{figure}
First Fit proceeds as follows: A request to insert an $i$-square
$Q$ is handled by assigning $Q$ to the first empty $i$-pixel in
z-order; see Figure~\ref{fig:firstfit}. Deletions are more
complicated. After unassigning a
deleted square $Q$ from a pixel $p$ the following procedure handles
reallocations (an example deletion can be seen in
Figure~\ref{fig:invstrategy}):
\begin{algorithmic}[1]
\State $S \gets \{p'\}$, where $p'$ is the maximally empty pixel
containing $p$
\While{$S \ne \varnothing$}
\State Let $a$ be the element of $S$ that is first in z-order.
\State $S \gets S \setminus \{a\}$
\State Let $b$ be the last occupied pixel in z-order.
\While{$z(b) > z(a)$}
\If{the square assigned to $b$, $B$, can be packed into $a$}
\State Assign $B$ to the first suitable descendant of $a$ in
z-order.
\State Unassign $B$ from $b$.
\State Let $b'$ be the maximally empty pixel containing $b$.
\State $S \gets S \cup \{b'\}$
\State $S \gets S \setminus \{b'': b''\text{ is child of }b'\}$
\EndIf
\State Move $b$ back in z-order to the previous occupied
pixel.
\EndWhile
\EndWhile
\end{algorithmic}
The general idea is to reallocate squares from the current end of the
z-order to empty spots. As reallocating creates new empty squares, we
need to apply the method repeatedly in what can be considered an
inverse case of cascading moves. We ensure termination by always
moving the currently {\color{black}considered} empty pixel in positive z-order and
reallocating squares in negative z-order. We analyze the strategy in
more detail now.
\begin{inv}\label{inv:inv}
For every empty $i$-pixel $p$ in a quadtree configuration $T$ there is
no occupied $i$-pixel $q$ with $z(q) > z(p)$.
\end{inv}
\begin{lem}\label{lem:invcompact}
Every quadtree configuration $T$ satisfying Invariant~\ref{inv:inv}
is compact.
\end{lem}
\begin{proof}
Assume a quadtree configuration $T$ is not compact. Then it contains
two fractional $i$-pixels, $i \in \mathbb{N}$, $p$ and $q$ with maximally
empty children
$p'$ and $q'$, respectively. Without loss of generality, assume $z(p)
< z(q)$. As $q$ is fractional, there is a $j$-square, $j > i$, assigned
to some descendant of $q$, say $q''$. However, $p'$ is an empty
$(i+1)$-pixel and therefore contains an empty $j$-pixel, $p''$. As
$z(p) < z(q)$, we also have $z(p'') < z(q'')$ and
Invariant~\ref{inv:inv} does not hold. \qed
\end{proof}
\begin{lem}\label{lem:3max}
In a compact quadtree configuration $T$ there are at most three
maximally empty $j$-pixels for every $j \in \mathbb{N}_0$.
\end{lem}
\begin{proof}
The statement holds for $j=0$, since there is only one $0$-pixel. For
$j>0$ there is at most one open $(j-1)$-pixel $p$ in $T$, because $T$
is compact. Therefore, all other $(j-1)$-pixels except for $p$ either
do not have an empty child or are maximally empty themselves. Thus,
all maximally empty $j$-pixels have to be children of $p$. Since $p$
is not empty, there can be at most three. \qed
\end{proof}
\begin{lem}\label{lem:compactspace}
Given an $\ell$-square $s$ and a compact quadtree configuration $T$,
then $s$ can be assigned to an empty $\ell$-pixel in $T$, if and only
if $\mathrm{cap}(T) \ge 4^{-\ell}$.
\end{lem}
\begin{proof}
The direction from left to right is obvious, as there can be no empty
$\ell$-pixel if the capacity is less than $4^{-\ell}$. For the other
direction assume there is no empty $\ell$-pixel in $T$. Since
there is no empty $\ell$-pixel, there is also no empty $j$-pixel for
any $j < \ell$. Let the smallest square assigned to a node be an
$s$-square. As $T$ is compact, we can use Lemma~\ref{lem:3max}
and {\color{black}Lemma~\ref{lem:fullcap}} to bound the remaining capacity of $T$ from
above: $\mathrm{cap}(T) \le \sum_{k=\ell+1}^{s} 3 \cdot 4^{-k} = 4^{-\ell} -
4^{-s} < 4^{-\ell}$. \qed
\end{proof}
In other words, packing an $\ell$-square in a compact configuration
requires no reallocations.
\begin{thm}\label{thm:ff}
The strategy presented above is correct. In particular,
\begin{enumerate}
\item every valid insertion request is fulfilled at zero cost,
\item every deletion request is fulfilled,
\item after every request Invariant~\ref{inv:inv} holds.
\end{enumerate}
\end{thm}
\begin{proof}
The first part follows from Lemmas~\ref{lem:compactspace}
and \ref{lem:invcompact} and point 3. Insertions maintain the invariant,
because the inserted square is assigned to the first suitable empty
pixel in z-order. Deletions can obviously always be fulfilled. We
still need to prove the important part, which is that the invariant
holds after a deletion.
We show this by proving that whenever the procedure reaches line 3 and
sets $a$, the invariant holds for all squares in
z-order up to $a$. As we only move squares in negative z-order, the
sequence of pixels $a$ refers to is increasing in z-order. Since we
have a finite number of squares, the procedure terminates after a
finite number of steps when no suitable $a$ is left. At that point the
invariant holds throughout the configuration.
Assume we are at step 3 of the procedure and the invariant holds for
all squares up to $a$. None of the squares considered to be moved to
$a$ fit anywhere before $a$ in z-order -- otherwise the invariant
would not hold for pixels before $a$. Afterwards, no square that has
not been moved to $a$ fits into $a$, because it would have been moved
there otherwise. Once we reach line 3 again, and set the new $a$, say
$a'$, consider the pixels between $a$ and $a'$ in z-order. If any
square after $a'$ would fit somewhere into a pixel between $a$ and
$a'$, then the invariant would not have held before the
deletion. Therefore, the invariant holds up to $a'$. \qed
\end{proof}
{\color{black}
Comparing our results in Section~4 to those in this section, a major
advantage of an empty initial configuration becomes apparent. For all
examined cost functions there are configurations into which no square
can be inserted at zero cost (cf. Theorem~\ref{thm:volumexample},
Corollary~\ref{cor:relvoltight}, Theorem~\ref{thm:movexample}). This
is in contrast to First Fit, which achieves insertion at zero
cost (Theorem~\ref{thm:ff}). The downside is the potentially large cost
of deletions. The thorough analysis of a strategy with provably low
cost for both insertions and deletions is the subject of future work.
}
\begin{figure}
\caption{Deleting a square causes several moves. The
deleted square is marked with a cross. Once it is unassigned, the
squares are checked in reverse z-order until square 1, which
fits. Afterwards, there is a now maximally empty pixel into which
square 2 can be moved. Finally, the same happens for square 3.}
\label{fig:invstrategy}
\end{figure}
\section{General Squares and Rectangles}\label{sec:generals}
Due to limited space and for clearer exposition,
the description in the previous three sections considered aligned squares.
We can adapt the technique to general squares
and even rectangles at the expense of a constant factor.
{\color{black}
To accommodate a non-aligned square, we pack it like an
aligned square of the next larger volume. That is, a square of size
$s$ with $2^{i-1} < s < 2^i$ for some $i \in \{0, -1,-2,\ldots\}$ is
assigned to an $i$-pixel. This approach results in space that cannot
be used to assign squares, even though the remaining capacity
would suffice, and we can no longer guarantee to fit every valid sequence of
squares into the unit square. However,
we can guarantee to pack every such sequence into a $4$-underallocated
unit square (i.e., a $2 \times 2$ square), as every square is assigned
to a pixel that can hold no more than four times its volume. Most
importantly, our reallocation schemes continue to work in this setting
without any modifications. An example allocation is shown in
Figure~\ref{fig:quadtree}, where solid gray areas are assigned squares
and shaded areas indicate wasted space.
Note that a satisfactory reallocation scheme for arbitrary squares
with no or next to no underallocation is unlikely. Even the problem
of handling a sequence of insertions of total volume at most one,
without considering dynamic deletions and reallocation, requires
underallocation. This problem is known as {\em online square packing}, see
Fekete and Hoffmann~\cite{fh-ossp-13,fh-ossp-17};
currently, the best known approach results in
$5/2$-underallocation, see Brubach~\cite{brubach_improved_2014}.
}
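For illustration, the pixel level used for a (possibly non-aligned) square
follows directly from its side length; a small sketch, assuming side
lengths in $(0,1]$:
\begin{verbatim}
import math

def pixel_level(side):
    # a square of side s with 2^-(i+1) < s <= 2^-i is padded to an
    # aligned i-square, so at most 3/4 of the i-pixel's area is wasted
    assert 0.0 < side <= 1.0
    i = int(math.floor(-math.log2(side)))
    if 2.0 ** (-(i + 1)) >= side:   # guard against rounding at powers of two
        i += 1
    return i

assert pixel_level(1.0) == 0 and pixel_level(0.5) == 1 and pixel_level(0.3) == 1
\end{verbatim}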
\begin{figure}
\caption{Example allocation of general (non-aligned) squares: solid gray areas are assigned squares, shaded areas indicate wasted space.}
\label{fig:quadtree}
\end{figure}
Rectangles of bounded aspect ratio $k$ are dealt with in the same way.
Also accounting for intermodule communication, every rectangle is
padded to the size of the next largest aligned square and assigned to
a node of the quadtree, at a cost of at most a factor of $4k$
compared to the bound we established for the worst case.
{\color{black} As described in the following section, this theoretical bound
is rather pessimistic: the performance in basic simulation runs
is considerably better.}
\section{\color{black}Simulation Results}\label{sec:experiments}
We carried out a number of {\color{black} simulation runs to get an idea of the potential performance
of our approach}. For each test, we generated a random sequence of $1000$ requests
that were chosen as \textsc{Insert($\cdot$)} (probability $0.7$) or \textsc{Delete($\cdot$)} (probability $0.3$).
We apply a larger probability for \textsc{Insert($\cdot$)} to avoid the (relatively simple)
situation that repeatedly just a few rectangles are inserted and deleted, and in order
to observe the effects of increasing congestion. The individual modules were generated
by considering an upper
bound $b \in [0,1]$ for the side lengths of the considered squares. For
$b=0.125$, the value of the current underallocation seems to be stable except
for the range of the first $50$-$150$ requests. For $b=1$, the current
underallocation may be unstable, which could be caused by the following
simple observation: A larger $b$ allows larger rectangles that induce
$4k$-underallocations.
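A minimal sketch of such a request generator is given below; the particular
distributions of side lengths and aspect ratios are illustrative
assumptions, as only the bound $b$ and the insert/delete probabilities are
fixed above.
\begin{verbatim}
import random

def random_requests(n=1000, b=0.125, k=2.0, p_insert=0.7, seed=0):
    # deletions pick a uniformly random live module; collisions (inserts
    # that cannot be processed) are handled by the packing strategy itself
    rng = random.Random(seed)
    live, requests = [], []
    for t in range(n):
        if rng.random() < p_insert or not live:
            w = rng.uniform(0.0, b)
            h = min(b, w * rng.uniform(1.0, k))   # aspect ratio at most k
            live.append(t)
            requests.append(("insert", t, w, h))
        else:
            victim = live.pop(rng.randrange(len(live)))
            requests.append(("delete", victim))
    return requests
\end{verbatim}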
Our {\color{black} simulations indicate that
the theoretical worst-case bound of $1/(4k)$ may be overly pessimistic, see Figures~\ref{fig:experimentsA}--\ref{fig:experimentsF}}.
\textcolor{black}{In particular, the $x$-axis represents the number of operations and the $y$-axis the inverse value of the underallocation. Furthermore, the red curves illustrate the inverse values of the underallocation and lie below the worst-case value of $4k$.}
Taking into account that a purely one-dimensional approach cannot provide
an upper bound on the achievable underallocation, this {\color{black} provides reason to be optimistic about the potential
practical performance.}
{\color{black} Simulations of} the First-Fit approach for different values of
$k$ and upper bounds of $b = 0.125$ and $b=1$ for the side length of the
considered squares {\color{black} are shown in Figures~\ref{fig:experimentsA}--\ref{fig:experimentsF}.
Each diagram illustrates the results of one run of $1000$ requests, generated
as described above. The red graph shows the total current
underallocation after each request. The green graph shows the average of the
total underallocation in the range between the first and the current request.
We denote by~$c$ the number of collisions, i.e., the situations in which an
\textsc{Insert($\cdot$)} cannot be processed.}
\begin{figure}
\caption{\textcolor{black}{First-Fit simulation run (see the text for the respective values of $k$ and $b$).}}
\label{fig:experimentsA}
\end{figure}
\begin{figure}
\caption{\textcolor{black}{First-Fit simulation run (see the text for the respective values of $k$ and $b$).}}
\label{fig:experimentsB}
\end{figure}
\begin{figure}
\caption{\textcolor{black}{First-Fit simulation run (see the text for the respective values of $k$ and $b$).}}
\label{fig:experimentsC}
\end{figure}
\begin{figure}
\caption{\textcolor{black}{First-Fit simulation run (see the text for the respective values of $k$ and $b$).}}
\label{fig:experimentsD}
\end{figure}
\begin{figure}
\caption{\textcolor{black}{First-Fit simulation run (see the text for the respective values of $k$ and $b$).}}
\label{fig:experimentsE}
\end{figure}
\begin{figure}
\caption{\textcolor{black}{First-Fit simulation run (see the text for the respective values of $k$ and $b$).}}
\label{fig:experimentsF}
\end{figure}
\section{Conclusions}
\label{sec:conc}
We have presented a data structure for exploiting
the full dimensionality of dynamic geometric storage and reallocation
tasks, such as online maintenance of the module layout for an FPGA.
These first results indicate that our approach is suitable for
making progress over purely one-dimensional approaches.
There are several possible refinements and extensions, including
a more sophisticated way of handling rectangles inside of
square pieces of the subdivision, handling heterogeneous chip areas,
and advanced algorithmic methods. These will be addressed in future work.
{\color{black}
Another aspect of forthcoming work is an explicitly self-refining intermodule wiring.
As indicated in Section~3, dynamically maintaining this communication infrastructure can be envisioned along the subdivision
of the recursive quadtree structure: reserving a certain proportion of each cell's area for routing provides
a dynamically adjustable bandwidth, along with intersection-free routing, as shown in Figure~\ref{fig:config}.
First steps in this direction have been taken with an MA thesis~\cite{meyer}, with more work
to follow; this also addresses the aspect of robustness of communication in a hostile
environment that may cause individual connections to fail.}
\balance
\end{document}
\begin{document}
\preprint{}
\title{Qubit-Qutrit
Separability-Probability Ratios}
\author{Paul B. Slater}
\email{[email protected]}
\affiliation{
ISBER, University of California, Santa Barbara, CA 93106\\
}
\date{\today}
\begin{abstract}
Paralleling our recent computationally-intensive (quasi-Monte Carlo)
work for the case $N=4$
(quant-ph/0308037),
we undertake the task for $N=6$ of computing to
high numerical accuracy, the formulas
of Sommers and \.Zyczkowski (quant-ph/0304041) for the $(N^2-1)$-dimensional volume and $(N^2-2)$-dimensional
hyperarea of the (separable and nonseparable)
$N \times N$ density
matrices, based on the Bures (minimal monotone) metric --- and
also their
analogous
formulas (quant-ph/0302197) for the (non-monotone) flat Hilbert-Schmidt
metric. With the same seven {\it billion}
well-distributed
(``low-discrepancy'') sample points,
we estimate the {\it unknown} volumes and hyperareas based on
five additional (monotone) metrics of interest, including the Kubo-Mori
and Wigner-Yanase. Further, we
estimate all of
these seven volume and seven hyperarea
(unknown) quantities when restricted
to the {\it separable} density matrices. The ratios of separable volumes
(hyperareas) to separable {\it plus}
nonseparable volumes (hyperareas)
yield estimates of the {\it separability probabilities} of
generically rank-six (rank-five) density matrices.
The (rank-six) separability probabilities obtained based on the
35-dimensional volumes
appear to be --- {\it independently} of the metric
(each of the seven
inducing Haar measure) employed --- {\it twice} as large as those
(rank-five ones)
based on the 34-dimensional
hyperareas. (An additional estimate --- 33.9982 --- of
the ratio of the rank-6 Hilbert-Schmidt separability probability to the
rank-{\it 4} one is quite clearly close to integral too.) The doubling relationship also appears to hold
for the $N=4$ case for the Hilbert-Schmidt metric, but not the others. We
fit simple {\it exact} formulas to
our estimates of the Hilbert-Schmidt
{\it separable} volumes and hyperareas in both the $N=4$ and $N=6$ cases.
\end{abstract}
\pacs{03.65.Ud, 03.67.-a, 02.60.Jh, 02.40.Ky, 02.40.Re}
\maketitle
\section{Introduction}
In Part I of the influential paper \cite{ZHSL,zycz2}, ``Volume of the set of mixed entangled
states'', \.Zyczkowski, Sanpera, Horodecki and Lewenstein considered
the
``question of how many entangled or, respectively, separable states are there in the set of all quantum states''. They cited philosophical, practical and physical reasons for doing so. They gave a qualitative argument
\cite[sec. III.B]{ZHSL} --- contrary
to their
initial supposition --- that the measure of separable states could not be strictly zero. There has since been considerable work
\cite{slatersilver,qubitqutrit,slaterA,slaterC,slaterSEP,firstqubitqubit,slaterprecursor,clifton,sepsize1,sepsize2,sepsize3,szarek}, using various forms of measures,
to determine/estimate the ``volume of separable states'', as well as the volume of separable {\it and} nonseparable states \cite{hans1,hilb2}, and hence probabilities of separability. One somewhat surprising development has been the
(principally
numerical) indication --- two independent estimates being 0.137884
\cite{slatersilver} and 0.138119 (sec.~\ref{N4qubitqubit} below) ---
that the volume of separable states itself
can take on a notably
elegant form, in particular, $(\sqrt{2}-1)/3 \approx 0.138071$, for the case of qubit-qubit pairs endowed with
the {\it statistical distinguishability} metric (four times the Bures metric).
(However, there seems to be a paucity of ideas on how to {\it formally}
prove or disprove such a conjecture.)
The research reported below was undertaken initially with the specific purpose of finding whether a putative
comparably elegant formula for the volume of separable
qubit-qutrit pairs might exist. We will report below (sec.~\ref{wellfitting})
the obtaining of certain possible formulas that fit our numerical results well, but
none of such striking simplicity (nor any that extends it in any apparent natural fashion). But we also obtain some new types of
results of substantial independent interest.
In a recent highly comprehensive
analysis \cite{hans1} (cf. \cite{hansrecent}), Sommers and \.Zyczkowski obtained
``a fairly general expression for the Bures volume of the submanifold of the states of rank $N-n$ of the set of complex ($\beta=2$) or real ($\beta=1$)
$N \times N$ density matrices
\begin{equation} \label{HZ1}
S^{(\beta,Bures)}_{N,n}= 2^{-d_{n}} \frac{\pi^{(d_{n}+1)/2}}{\Gamma((d_n+1)/2)} \Pi^{N-n}_{j=1} \frac{\Gamma(j \beta/2) \Gamma[1+(2 n+j-1) \beta/2]}{\Gamma[(n+j) \beta/2] \Gamma[1+(n+j-1) \beta/2]},
\end{equation}
where $d_{n}= (N-n) [1+(N+n-1) \beta/2] -1$ represents the dimensionality of the
manifold \ldots for $n=0$ the last factor simply equals unity and (\ref{HZ1})
gives the Bures volume of the entire space of density matrices, equal to that
of a $d_{0}$-dimensional hyper-hemisphere with radius 1/2. In the case
$n=1$ we obtain the volume of the surface of this set, while for $n=N-1$ we get the volume of the set of pure states \ldots which for $\beta=1(2)$ gives correctly the volume of the real (complex) projective space of dimensions
$N-1$'' \cite{hans1}.
The Bures metric on various spaces of density matrices ($\rho$)
has been widely
studied \cite{hubner1,hubner2,ditt1,ditt2}. In a broader context, it serves
as the {\it minimal} monotone metric \cite{petz1}.
In Part II of \cite{ZHSL,zycz2}, \.Zyczkowski put forth a certain
proposition. It was that ``the link between the purity of the mixed states
and the probability of entanglement is not sensitive to the measure [on the space of $N \times N$ density matrices] used''. His assertion was based on
comparisons between a unitary product measure and an orthogonal product
measure for the (qubit-qubit) case $N=4$ \cite[Fig. 2b]{zycz2}.
The {\it participation ratio} --- $1/\mbox{Tr}(\rho^{2})$ --- was used as the measure of purity.
\section{Separability-Probability Ratios}
In this study, we present (sec.~\ref{ratios})
numerical evidence --- limited largely to the
specific (qubit-qutrit) case $N=6$ --- for a
somewhat related
proposition (which appears to be possibly
topological in nature \cite{witten}).
It is that a certain ``ratio
of ratios''
\begin{equation} \label{ratofrat}
\Omega^{metric} \equiv \frac{R^{metric}_{sep+nonsep}}{R^{metric}_{sep}}
\end{equation}
is equal to 2, {\it independently} of the
measure used --- where the possible measures (including the
just-discussed Bures)
comprise the
{\it volume elements} (all incorporating {\it Haar} measure as a factor) of certain {\it
metrics} defined on the $N \times N$ density
matrices.
Here by
\begin{equation}
R^{metric}_{sep+nonsep} \equiv \frac{S^{(2,metric)}_{N,1}}{S^{(2,metric)}_{N,0}},
\end{equation}
is indicated
the ratio of the hyperarea of the $(N^2-2)$-dimensional boundary of the $(N^2-1)$-dimensional
convex set ($C_{N}$) of $N \times N$ density matrices
to the total
volume of $C_{N}$. Further,
\begin{equation}
R^{metric}_{sep} \equiv \frac{\Sigma^{(2,metric)}_{N,1}}{\Sigma^{(2,metric)}_{N,0}}
\end{equation}
is the same type of
hyperarea-volume ratio,
but now restricted to the (classical/non-quantum)
subset of $C_{N}$ composed of the {\it separable}
states \cite{werner}
(which we designate using $\Sigma$ rather than $S$).
A simple algebraic
rearrangement of quotients then reveals that $\Omega^{metric}$
(\ref{ratofrat}) is also interpretable as
the
ratio
\begin{equation}
\Omega^{metric} \equiv \frac{P_{N}^{[metric,rank-N]}}{P_{N}^{[metric,rank-(N-1)]}}
\end{equation}
of the {\it separability probability} of the totality of
(generically rank-$N$)
states in $C_{N}$
\begin{equation}
P_{N}^{[metric,rank-N]} \equiv
\frac{\Sigma^{(2,metric)}_{N,0}}{S^{(2,metric)}_{N,0}}
\end{equation}
to the separability probability
\begin{equation}
P_{N}^{[metric,rank-(N-1)]} \equiv
\frac{\Sigma^{(2,metric)}_{N,1}}{S^{(2,metric)}_{N,1}}
\end{equation}
of the
(generically rank-$(N-1)$) states
that lie on the boundary of $C_{N}$.
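Spelled out in terms of the quantities just defined, this rearrangement
reads
\[
\Omega^{metric} = \frac{R^{metric}_{sep+nonsep}}{R^{metric}_{sep}}
= \frac{S^{(2,metric)}_{N,1}/S^{(2,metric)}_{N,0}}
       {\Sigma^{(2,metric)}_{N,1}/\Sigma^{(2,metric)}_{N,0}}
= \frac{\Sigma^{(2,metric)}_{N,0}/S^{(2,metric)}_{N,0}}
       {\Sigma^{(2,metric)}_{N,1}/S^{(2,metric)}_{N,1}}
= \frac{P_{N}^{[metric,rank-N]}}{P_{N}^{[metric,rank-(N-1)]}}.
\]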
\section{Metrics of interest}
Let us apply the \.Zyczkowski-Sommers Bures formula
(\ref{HZ1}) to the two cases that will be
of specific interest in this study,
$N=6$, $ n=0$, $\beta=2$ and $N=6$, $n=1$, $\beta=2$ --- that is, the Bures 35-dimensional volume and
34-dimensional hyperarea of the {\it complex}
$6 \times 6$ density matrices. (It would, of course, also be of interest to
study the {\it real} case, $\beta=1$, though we have not undertaken any work
in that direction.)
We then have that
\begin{equation} \label{m1}
S^{(2,Bures)}_{6,0}= \frac{{\pi }^{18}}{12221326970165372387328000} \approx
7.27075 \cdot {10}^{-17}
\end{equation}
and
\begin{equation} \label{m2}
S^{(2,Bures)}_{6,1}= \frac{{\pi }^{17}}{138339065763438059520000} \approx 2.04457 \cdot {10}^{-15}.
\end{equation}
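These two values can be reproduced directly from the general formula
(\ref{HZ1}); a short numerical sketch:
\begin{verbatim}
from math import gamma, pi

def bures_volume(N, n, beta=2):
    # S^{(beta,Bures)}_{N,n} as given in the Sommers-Zyczkowski formula
    d = (N - n) * (1 + (N + n - 1) * beta / 2) - 1
    value = 2.0 ** (-d) * pi ** ((d + 1) / 2) / gamma((d + 1) / 2)
    for j in range(1, N - n + 1):
        value *= (gamma(j * beta / 2) * gamma(1 + (2 * n + j - 1) * beta / 2)
                  / (gamma((n + j) * beta / 2)
                     * gamma(1 + (n + j - 1) * beta / 2)))
    return value

print(bures_volume(6, 0))  # ~7.27075e-17, the 35-dimensional Bures volume
print(bures_volume(6, 1))  # ~2.04457e-15, the 34-dimensional Bures hyperarea
\end{verbatim}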
Here, we are able
(somewhat paralleling our recent work for the qubit-qubit case $N=4$
\cite{slatersilver}, but in a
rather
more systematic manner {\it ab initio}
than there), through
advanced numerical (quasi-Monte Carlo/quasi-random) methods, to reproduce
both of these values (\ref{m1}), (\ref{m2}), to a considerable accuracy.
At the same time, we compute numerical values --- it would seem reasonable
to presume, at least initially,
with roughly the same level of accuracy --- of these two quantities, but for the replacement of the Bures metric by five other {\it monotone} metrics of
interest. These are the Kubo-Mori \cite{hasegawa,petz3,michor,streater},
(arithmetic) average \cite{slatersilver}, Wigner-Yanase \cite{gi,wy,luo,luo2},
Grosse-Krattenthaler-Slater
(GKS) \cite{KS} (or ``quasi-Bures'' \cite{gillmassar})
and (geometric) average monotone
metrics --- the two ``averages'' being formed from the minimal
(Bures) and {\it maximal} (Yuen-Lax \cite{yuenlax})
monotone metrics, following the suggested procedure in
\cite[eq. (20)]{petz2}. No proven
formulas, such as (\ref{HZ1}), are presently available for these
other various quantities,
although our research in \cite{slatersilver} had suggested that
the Kubo-Mori volume of the $N \times N$ density matrices is expressible as
\begin{equation} \label{KMgeneral}
S^{(2,KM)}_{N,0} = 2^{N(N-1)/2} S^{(2,Bures)}_{N,0},
\end{equation}
which for our case of $N=6$ would give
\begin{equation} \label{KM32768}
S^{(2,KM)}_{6,0} = 32768 S^{(2,Bures)}_{6,0}.
\end{equation}
In light of
the considerable attention recently devoted to the (Riemannian, but
{\it non}-monotone \cite{ozawa})
Hilbert-Schmidt metric \cite{hansrecent,hilb1,hilb2},
including the availability of exact volume and hypersurface formulas
\cite{hilb2},
we include it in supplementary analyses too.
Further, we estimate for all these seven (six monotone and one
non-monotone)
metrics the (unknown) 35-dimensional volumes and
34-dimensional hyperareas restricted to the {\it separable} $2 \times 3$ and
$3 \times 2$ systems. Then, we can, obviously, by taking ratios
of separable quantities to their separable {\it plus} nonseparable
counterparts obtain ``probabilities of separability'' --- a topic which was first investigated in \cite{ZHSL},
and studied further, using the Bures metric, in \cite{slatersilver,slaterA,slaterC,qubitqutrit}.
\section{Two forms of partial transposition}
We will employ the convenient Peres-Horodecki necessary {\it and}
sufficient positive
partial
transposition criterion for separability \cite{asher,michal} --- asserting that a $4 \times 4$ or $6 \times 6$ density matrix is separable if and only if all the eigenvalues of its
partial transpose are nonnegative. (In the $4 \times 4$ [qubit-qubit]
case, it simply suffices to test the determinant of the partial transpose
for nonnegativity \cite{sanpera,verstraete}.) But in the $6 \times 6$ case, we have the qualitative difference that partial transposes can be determined in
(at least) {\it
two} inequivalent ways, either by transposing in place, in the natural
blockwise manner,
the {\it nine} $2 \times 2$ submatrices or the {\it four}
$3 \times 3$ submatrices \cite[eq. (20)]{michal}. (Obviously, such a nonuniqueness arises in a bipartite system only if the dimensions of the two parts are unequal.)
We will throughout this study --- as in \cite{qubitqutrit} --- at the expense of added computation,
analyze results using {\it both} forms of partial transpose.
It is our anticipation --- although yet
without a formal demonstration --- that in the limit of
large sample size, the two sets of (separable volume and separable hyperarea) results
of interest here should converge
to true {\it common} values.
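To make the two forms of partial transposition concrete, the following
sketch implements one natural reading of them (transposing each
$2 \times 2$, respectively each $3 \times 3$, block of a $6 \times 6$
matrix in place); it is meant only as an illustration of the operations,
not of the precise conventions used in our computations.
\begin{verbatim}
import numpy as np

def blockwise_transpose(M, b):
    # transpose each b x b sub-block of the 6 x 6 matrix M in place
    M = np.asarray(M)
    out = M.copy()
    for r in range(0, 6, b):
        for c in range(0, 6, b):
            out[r:r + b, c:c + b] = M[r:r + b, c:c + b].T
    return out

pt_2x2 = lambda M: blockwise_transpose(M, 2)   # nine 2 x 2 blocks
pt_3x3 = lambda M: blockwise_transpose(M, 3)   # four 3 x 3 blocks

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
rho = A @ A.conj().T
rho /= np.trace(rho).real                      # a generic 6 x 6 density matrix
assert np.allclose(pt_2x2(pt_2x2(rho)), rho)   # each operation is an involution
assert np.allclose(pt_3x3(pt_3x3(rho)), rho)
print(np.linalg.eigvalsh(pt_2x2(rho)).min(),   # the two spectra generally differ
      np.linalg.eigvalsh(pt_3x3(rho)).min())
\end{verbatim}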
Now, the author must admit that he initially thought that it made no difference at all in which of the two ways the partial transpose was taken, that is,
a $6 \times 6$ density matrix would either pass or fail {\it both} tests. Also, this seems to be a common attitude in the quantum information community
(as judged by a number of personal reactions [cf. \cite[fn. 2]{ZHSL}]).
Therefore, we present below a specific
example of a $6 \times 6$ density
matrix ($\rho_{1}$)
that remains a density matrix if its four $3 \times 3$ blocks are transposed, but not its nine $2 \times 2$ blocks, since the latter
result has a {\it
negative}
eigenvalue (-0.00129836).
\begin{equation} \label{counterexample}
\begin{pmatrix}
\frac{2}{9}& 0 & 0 & 0 & 0 & 0 \\
0 & \frac{1}{7} & 0 & 0 & 0 &
-\frac{1}{24} + \frac{\imath }{38} \\
0 & 0 & \frac{1}{5} & \frac{\imath }{23} & \frac{-\imath }{41} &
- \frac{1}{10} - \frac{\imath }{21} \\
0 & 0 & \frac{-\imath }{23} & \frac{1}{7} & 0 &
\frac{\imath }{13} \\
0 & 0 & \frac{\imath }{41} & 0 & \frac{1}{6} & 0 \\
0 & - \frac{1}{24} - \frac{\imath }{38} &
-\frac{1}{10} + \frac{\imath }{21} &
\frac{-\imath }{13} & 0 & \frac{79}{630} \\
\end{pmatrix}
\end{equation}
K. \.Zyczkowski has pointed out that the question of whether a given state
$\rho$ is entangled or not depends crucially upon the decomposition of the composite Hilbert space $H_{A} \otimes H_{B}$ (cf. \cite{zanardi,caban}). For instance, for the simplest
$2 \times 2$ case, the maximally entangled Bell state becomes ``separable'', he points out, if one considers entanglement with respect to another division
of the space, {\it e. g.} $A'=\{\Phi_+,\Phi_-\},B'=\{\Psi_+,\Psi_-\}$. So, it should not be surprising, at least in retrospect,
that some states are separable with respect to one form of partial transposition, and not the other.
In the course of examining this issue, we found that if one starts with
an arbitrary $6 \times 6$ matrix ($M$), and alternates the two forms of partial transposition on it, after twelve ($=2 \times 6$) iterations of this process, one arrives back at the {\it original} $ 6 \times 6$ matrix. So, in group-theoretic terms, if we denote the three-by-three operation by $a_{3}$ and the two-by-two operation by $a_{2}$, we have
the involutions $a_{2}^2=a_{3}^2=I$ and the relation $(a_{2} a_{3})^6 = (a_{3} a_{2})^6 =I$. Further, one can go from the
partial transpose $a_{3}(M)$ to the partial transpose
$a_{2}(M)$ {\it via} the matrix corresponding to the
permutation $\{1,4,2,5,3,6\}$.
Further, we constructed the related density matrix ($\rho_{2}$)
\begin{equation}
\begin{pmatrix}
\frac{2}{9} & 0 & 0 & 0 & 0 & 0 \\
0 & \frac{1}{7} & 0 & 0 & \frac{\imath }{23} &
\frac{-\imath }{41} \\
0 & 0 & \frac{1}{5} & 0 & -\frac{1}{24} +
\frac{\imath }{38} & 0 \\
0 & 0 & 0 & \frac{1}{7} & - \frac{1}{10} -
\frac{\imath }{21} & \frac{-\imath }{13} \\
0 & \frac{-\imath }{23} &
-\frac{1}{24} - \frac{\imath }{38} &
-\frac{1}{10} + \frac{\imath }{21} &
\frac{1}{6} & 0 \\
0 & \frac{\imath }{41} & 0 &
\frac{\imath }{13} & 0 & \frac{79}{630} \\
\end{pmatrix}
\end{equation}
Now, if $\rho_{2}$ is partially transposed using its nine $2 \times 2$ blocks,
it gives the identical matrix as when $\rho_{1}$ is partially
transposed using four $3 \times3$ blocks. But the six eigenvalues of
$\rho_{1}$, that is, $ \{ 0.322635,
0.222222, 0.1721, 0.149677, 0.119158, 0.0142076 \}$ are {\it not} the same
as the six eigenvalues of $\rho_{2}$, that is, $ \{0.300489,
0.222222, 0.204982, 0.168304, 0.0992763, 0.00472644 \} $.
So, there can be no unitary transformation taking $\rho_{1}$ to $\rho_{2}$.
(The possibility that $\rho_{1}$ and $\rho_{2}$ might have
the same total measure(s) attached to them, can not formally be ruled out, however.)
\section{Research Design}
Our main analysis will take the form of a quasi-Monte Carlo (Tezuka-Faure
\cite{tezuka,giray1})
numerical integration over the 35-dimensional hypercube ($[0,1]^{35}$) and a
34-dimensional subhypercube of it. In doing so, we implement
a parameterization of the $6 \times 6$ density matrices in terms of {\it thirty} Euler angles (parameterizing $6 \times 6$ unitary matrices) and {\it five} hyperspherical angles
(parameterizing the {\it six} eigenvalues --- constrained to sum to 1)
\cite{sudarshan,toddecg}. We hold a single one of the five hyperspherical angles fixed in the 34-dimensional analysis, so that one of the six eigenvalues is
always zero --- and the density matrix is generically of rank five. The parameters are linearly transformed so
that they each lie in the unit interval [0,1] and, thus, collectively in the
unit hypercube. The computations consumed approximately five months using
six PowerMacs in parallel, each generating a different segment of the
Tezuka-Faure sequence.
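As an illustration of the eigenvalue part of this parameterization (the
precise convention of \cite{sudarshan,toddecg} may differ in detail), five
hyperspherical angles can be mapped to six nonnegative eigenvalues summing
to one as follows:
\begin{verbatim}
import numpy as np

def eigenvalues_from_angles(theta):
    # squared components of a unit vector in R^6 in hyperspherical
    # coordinates; fixing, e.g., theta[4] = pi/2 forces one eigenvalue
    # to zero (generically rank five)
    t = np.asarray(theta, dtype=float)      # five angles
    v = np.ones(6)
    for k in range(5):
        v[k] *= np.cos(t[k])
        v[k + 1:] *= np.sin(t[k])
    return v ** 2

lam = eigenvalues_from_angles([0.3, 0.7, 1.1, 0.4, 0.9])
assert abs(lam.sum() - 1.0) < 1e-12 and (lam >= 0).all()
\end{verbatim}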
\subsection{Silver mean ($\sqrt{2}-1$) conjectures for $N=4$}
We have previously pursued a similar numerical analysis in investigating the separable and nonseparable volumes and hyperareas of the $4 \times 4$ density matrices \cite{slatersilver}.
Highly accurate results (as gauged in terms of {\it known}
Bures quantities \cite{hans1}) --- based on two {\it billion} points of a Tezuka-Faure (``low discrepancy'')
sequence lying in the 15-dimensional hypercube --- led us to advance several strikingly simple conjectures.
For example, it was indicated that the Kubo-Mori volume of separable and nonseparable states was exactly $64 =2^6$ times the known
Bures volume. (The exponent
6 is expressible --- in terms of our general conjecture (\ref{KMgeneral}),
relating the Bures and Kubo-Mori volumes --- as $N(N-1)/2$, with
$N=4$.)
Most prominently, though, it was conjectured
that the volume of separable states is $\frac{\sigma_{Ag}}{3}$ in terms of
the statistical distinguishability (SD) metric and $10 \sigma_{Ag}$ in terms of
(four times) the Kubo-Mori
metric. Here, $\sigma_{Ag} = \sqrt{2}-1 \approx 0.414214$ is the ``silver
mean'' \cite{christos,spinadel,gumbs,kappraff} (cf. \cite{markowsky}).
The SD metric is identically four times the Bures metric
\cite{caves}. (Consequently, the SD 15-dimensional volume of the $4 \times 4$ complex density matrices
is $2^{15}$ times that of the Bures volume --- given by
formula (\ref{HZ1}) for $N=4,n=0,\beta=2$ --- thus equaling the volume
of a 15-dimensional hyper-hemisphere with radius 1, rather than $\frac{1}{2}$
as in the Bures case itself \cite{hans1}.)
Unfortunately, there appears to be little in the way of
indications in the literature, as to how one might {\it formally} prove or disprove these conjectures --- ``brute
force'' {\it symbolic} integration seeming to be well beyond present technical/conceptual capabilities (cf. \cite[sec. 5]{sudarshan}). (Certainly, Sommers and \.Zyczkowski \cite{hans1} did not directly
employ symbolic integration methodologies in deriving the Bures volume, hyperarea,...for $N$-level [separable {\it and} nonseparable] systems, but rather, principally,
used concepts of random matrix theory.)
One approach we have considered in this regard \cite{slaterSEP}
is to parameterize the 15-dimensional
convex set of bipartite qubit states in terms of the weights used in
the expansion of the state in some basis of sixteen extreme
separable $4 \times 4$ density matrices (cf. \cite{rudiger1}).
For a certain basis composed of $SU(4)$ generators \cite{kk,byrdkhaneja,gen},
the associated $15 \times 15$
Bures metric tensor \cite{ditt1} is {\it diagonal} in form
(having all entries equal)
at the fully mixed state \cite[sec.~II.F]{slaterSEP}.
(Also, we have speculated that perhaps there is some way of ``bypassing''
the formidable computation of the Bures metric tensor, and yet being
able to arrive at the required volume element.) Perhaps, though,
at least in the Bures/{\it minimal}
monotone case, a proof might be based on the concept of ``minimal volume'' \cite{bayard,bowditch,bambah}.
\subsection{Formulas for monotone metrics}
The monotone metrics (of which we study five, in addition to the
Bures) can all be expressed
in the general form
\begin{equation}
g_{\rho}(X',X)
= \frac{1}{4} \Sigma_{\alpha,\beta} \langle \alpha |X'| \beta \rangle \langle \beta |X| \alpha \rangle \, c_{monotone}(\lambda_{\alpha},\lambda_{\beta})
\end{equation}
(cf. \cite{hubner1,hubner2}).
Here $X,X'$ lie in the tangent space of all Hermitian $N \times N$ density
matrices $\rho$ and $|\alpha \rangle, \alpha =1, 2 \ldots$ are
the eigenvectors
of $\rho$ with eigenvalues $\lambda_{\alpha}$.
Now, $c_{monotone}(\lambda_{\alpha},\lambda_{\beta})$ represents the specific {\it Morozova-Chentsov} function for the monotone metric in question \cite{petz2}. This function takes the form
for: (1) the Bures metric,
\begin{equation} \label{Bures}
c_{Bures}(\lambda_{\alpha},\lambda_{\beta}) = \frac{2}{\lambda_{\alpha} +\lambda_{\beta}};
\end{equation}
(2) the Kubo-Mori metric (which, up to a scale factor, is
the unique monotone
Riemannian metric with respect to which the {\it exponential} and {\it mixture}
connections are dual \cite{streater}),
\begin{equation} \label{KM}
c_{KM}(\lambda_{\alpha},\lambda_{\beta}) =\frac{\log{\lambda_{\alpha}}-\log{\lambda_{\beta}}}{\lambda_{\alpha}-\lambda_{\beta}};
\end{equation}
(3) the (arithmetic) average metric (first discussed in \cite{slatersilver}),
\begin{equation}
c_{arith}(\lambda_{\alpha},\lambda_{\beta}) = \frac{4 (\lambda_{\alpha}+\lambda_{\beta})}{\lambda_{\alpha}^2 + 6 \lambda_{\alpha} \lambda_{\beta} +\lambda_{\beta}^2};
\end{equation}
(4) the Wigner-Yanase metric (which corresponds
to a space of {\it constant curvature} \cite{gi});
\begin{equation}
c_{WY}(\lambda_{\alpha},\lambda_{\beta}) =\frac{4}{(\sqrt{\lambda_{\alpha}} +\sqrt{\lambda_{\beta}})^2};
\end{equation}
(5) the GKS/quasi-Bures metric (which yields the asymptotic redundancy for
universal quantum data compression \cite{KS});
\begin{equation}
c_{GKS}(\lambda_{\alpha},\lambda_{\beta})= \frac{{(\frac{\lambda_{\alpha}}{\lambda_{\beta}})}^{\lambda_{\alpha}/(\lambda_{\beta}-\lambda_{\alpha})}}{\lambda_{\beta}} e;
\end{equation}
and (6) the (geometric) average metric (apparently previously unanalyzed),
\begin{equation} \label{geom}
c_{geom}(\lambda_{\alpha},\lambda_{\beta}) =\frac{1}{\sqrt{ \lambda_{\alpha} \lambda_{\beta}}}.
\end{equation}
(The results obtained below for the geometric average monotone metric
seem, in retrospect,
to be of little interest, other than indicating that --- like the
maximal monotone (Yuen-Lax) metric itself \cite{slatersilver} --- volumes
and hyperareas appear to be simply
{\it infinite} in magnitude.)
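For reference, the six Morozova-Chentsov functions above can be collected
into a small sketch (with the obvious continuous extensions at
$\lambda_{\alpha} = \lambda_{\beta}$, where each of them reduces to
$1/\lambda_{\alpha}$):
\begin{verbatim}
import math

c = {
    "Bures": lambda x, y: 2.0 / (x + y),
    "KM":    lambda x, y: 1.0 / x if x == y
                          else (math.log(x) - math.log(y)) / (x - y),
    "arith": lambda x, y: 4.0 * (x + y) / (x**2 + 6.0 * x * y + y**2),
    "WY":    lambda x, y: 4.0 / (math.sqrt(x) + math.sqrt(y)) ** 2,
    "GKS":   lambda x, y: 1.0 / x if x == y
                          else math.e * (x / y) ** (x / (y - x)) / y,
    "geom":  lambda x, y: 1.0 / math.sqrt(x * y),
}

x = 0.3
assert all(abs(f(x, x) - 1.0 / x) < 1e-12 for f in c.values())
\end{verbatim}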
\section{Analyses}
\subsection{Volumes and Hyperareas Based on Certain Monotone Metrics} \label{secMonotone}
Using
the first seven billion points of a Tezuka-Faure sequence, we
obtained the results reported in Tables~\ref{tab:table1}-\ref{tab:isoperimetric2} and Figs.~\ref{fig:graphBuresTotVol}-\ref{fig:ratioofratiosKM}.
We followed the Bures
formulas in \cite[secs. III.C, III.D]{hans1}, substituting for
(\ref{Bures}) the Morozova-Chentsov functions
given above (\ref{KM})-(\ref{geom}) to obtain the non-Bures counterparts.
In Fig.~\ref{fig:graphBuresTotVol}
we show the ratios of the cumulative estimates
of the 35-dimensional volume
$S^{(2,Bures)}_{6,0}$ to its {\it known} value (\ref{m1}). Each successive
point is based on
ten million ($10^7$) more systematically
sampled values in the 35-dimensional hypercube than the
previous point in the computational sequence.
\begin{figure}
\caption{Ratios of the cumulative estimates of the 35-dimensional Bures volume $S^{(2,Bures)}_{6,0}$ to its known value (\ref{m1}).}
\label{fig:graphBuresTotVol}
\end{figure}
In Fig.~\ref{fig:graphBuresTotArea}
we show the ratios of the cumulative estimates
of the 34-dimensional hyperarea
$S^{(2,Bures)}_{6,1}$ to its known value (\ref{m2}). Each successive
point is based on
ten million more sampled values in the 34-dimensional hypercube than the
previous point in the computational sequence. The
single Tezuka-Faure sequence we
employ for all our purposes, however, is specifically designed
as a {\it 35}-dimensional one --- of which we take an essentially arbitrary
34-dimensional {\it projection}. This is arguably
a suboptimal strategy
for generating well-distributed points in the 34-dimensional hypercube
(cf. \cite[sec. 7]{morokoff}),
but it is certainly highly computationally convenient for us (since we avoid having to generate a totally
new 34-dimensional sequence --- which would, we believe,
increase our computation time
roughly 50 percent), and seems to perform rather well.
(In fact, as discussed below, the {\it bias} of our estimates seems to be
--- contrary to expectations --- markedly less for the known [Bures and
Hilbert-Schmidt] 34-dimensional hyperareas than for the
35-dimensional volumes.)
\begin{figure}
\caption{Ratios of the cumulative estimates of the 34-dimensional Bures hyperarea $S^{(2,Bures)}_{6,1}$ to its known value (\ref{m2}).}
\label{fig:graphBuresTotArea}
\end{figure}
We also present a joint plot (Fig.~\ref{fig:graphBuressep}) of the
two sets of cumulative estimates of the Bures
volume of {\it separable} qubit-qutrit
states based on {\it both} forms of partial transposition.
The estimates obtained using the four blocks of $3 \times 3$ submatrices,
in general, dominate those using nine blocks of $2 \times 2$ submatrices.
\begin{figure}
\caption{Cumulative estimates of the Bures volume of separable qubit-qutrit states, based on the two forms of partial transposition.}
\label{fig:graphBuressep}
\end{figure}
In Table~\ref{tab:table1}, we scale the estimates (which we denote using
$\tilde{S}$)
of the volumes and hyperareas by the {\it known} values (\ref{m1}), (\ref{m2}) of
$S^{(2,Bures)}_{6,0}$ and $S^{(2,Bures)}_{6,1}$, while in Table~\ref{tab:table2} we scale these estimates by the {\it estimated} values ($7.22904 \cdot 10^{-17}$ and $2.03991 \cdot 10^{-15}$) of these two quantities.
(We use {\it both} approaches because we are uncertain as to which may be more revealing as to possible exact formulas --- an approach
suggested by our work in \cite{slatersilver}.) The
results for the geometric average monotone metric in Table~\ref{tab:table1}
appear to be divergent.
We might speculate that the
middle four scaled hyperareas
in the last column of Table~\ref{tab:table1}
correspond to the actual values $7 \cdot 13/2=45.5,
2^2 \cdot 5^2/3 \approx 31.333, 3 \cdot 13/4=9.75$ and $7/2=3.5$, and for the
second column that we have $132 = 12 \cdot 11$ and 12, as actual values.
\begin{table}
\caption{\label{tab:table1}Scaled estimates based on the Tezuka-Faure sequence of 7 billion points of
the 35-dimensional volumes and 34-dimensional
hyperareas of the $6 \times 6$ density matrices, using several monotone metrics. The scaling factors are the {\it known}
values of the volume and
hyperarea for the Bures metric, given by (\ref{HZ1}), and more specifically
for the cases: $N=6$; $n=0,1$; and $\beta=2$ by
(\ref{m1}) and (\ref{m2}).}
\begin{ruledtabular}
\begin{tabular}{rrr}
metric & $\tilde{S}_{6,0}^{(2,metric)}/S_{6,0}^{(2,Bures)}$ &
$\tilde{S}_{6,1}^{(2,metric)}/S_{6,1}^{(2,Bures)}$ \\
\hline
Bures & 0.996899 & 0.999022 \\
KM & 32419.4 & 45.4577 \\
arith & 621.714 & 31.291 \\
WY & 131.711 & 9.76835 \\
GKS & 12.4001 & 3.55929 \\
geom & $2.80011 \cdot 10^{44}$ & $1.44011 \cdot 10^{14}$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}
\caption{\label{tab:table2}Scaled estimates based on the Tezuka-Faure sequence of 7 billion points of the 35-dimensional volumes and 34-dimensional
hyperareas of the $6 \times 6$ density matrices,
using several monotone metrics. The scaling factors are the {\it estimated}
values ($\tilde{S}^{(2,Bures)}_{6,0} = 7.2482 \cdot 10^{-17}$ and
$\tilde{S}^{(2,Bures)}_{6,1} = 2.04257 \cdot 10^{-15}$ ) of the volume and hyperarea for the Bures metric.}
\begin{ruledtabular}
\begin{tabular}{rrr}
metric & $\tilde{S}_{6,0}^{(2,metric)}/\tilde{S}_{6,0}^{(2,Bures)}$ &
$\tilde{S}_{6,1}^{(2,metric)}/\tilde{S}_{6,1}^{(2,Bures)}$ \\
\hline
KM & 32520.3 & 45.5022 \\
arith & 623.648 & 31.3216 \\
WY & 132.121 & 9.77791 \\
GKS & 12.4387 & 3.56278 \\
geom & $2.80882 \cdot 10^{44}$ & $1.44152 \cdot 10^{14}$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
In Tables~\ref{tab:table3} and \ref{tab:table4}, we report our estimates (scaled by the values obtained for the Bures metric)
of the volumes and hyperareas of the $6 \times 6$ separable complex density matrices.
Let us note, however, that
to compute the hyperarea of the {\it complete} boundary of the separable states, one must also include those $6 \times 6$ density matrices of {\it full}
rank,
the partial transposes of which have a zero eigenvalue, with all other eigenvalues being nonnegative
\cite{shidu}. (We do not compute this additional
contribution here --- as we undertook to do in our lower-dimensional
analysis \cite{slatersilver} --- as
it would slow quite considerably the overall process in which we are engaged, since high-degree polynomials
would need to be
solved at each iteration.)
In
\cite{slatersilver}, we had been led to conjecture that
that part of the 14-dimensional boundary of separable $4 \times 4$ density matrices consisting
generically of rank-{\it four} density matrices had
SD hyperarea $\frac{55 \sigma_{Ag}}{39}$
and that part composed of rank-{\it three} density matrices, $\frac{43 \sigma_{Ag}}{39}$, for a
total 14-dimensional boundary
SD hyperarea of $\frac{98 \sigma_{Ag}}{39}$. We, then, sought to apply the
``Levy-Gromov isoperimetric inequality'' \cite{gromov} to the relation between the
known and estimated SD
volumes and hyperareas of the separable and separable plus nonseparable
states \cite[sec. VII.C]{slatersilver}.
Restricting ourselves now to considering only the separable density matrices,
for Table~\ref{tab:table3} we computed the partial transposes of the $6 \times 6$ density matrices by transposing in place the {\it four} $3 \times 3$ submatrices, while
in Table~\ref{tab:table4} we transposed in place the nine
$2 \times 2$ submatrices.
\begin{table}
\caption{\label{tab:table3}Scaled estimates based on the Tezuka-Faure sequence of 7 billion points of the 35-dimensional volumes and 34-dimensional
hyperareas of the {\it separable} $6 \times 6$ density matrices, using several monotone metrics. The scaling factors are the {\it estimated}
values ($\tilde{\Sigma}^{(2,Bures)}_{6,0} = 1.0739 \cdot 10^{-19}$ and $
\tilde{\Sigma}^{(2,Bures)}_{6,1}= 1.53932 \cdot 10^{-18}$) --- the true values being unknown --- of the separable
volume and hyperarea for the Bures metric. To implement the Peres-Horodecki positive partial transposition criterion,
we compute
the partial transposes
of the four $3 \times 3$ submatrices (blocks) of the density matrix.}
\begin{ruledtabular}
\begin{tabular}{rrr}
metric & Bures-scaled separable volume & Bures-scaled separable (rank-5) hyperarea \\
\hline
KM & 8694.79 & 9.43481 \\
arith & 220.75 & 10.6415 \\
WY & 55.3839 & 4.13924 \\
GKS & 7.97798 & 2.28649 \\
geom & $3.33872 \cdot 10^{32} $ & $3.61411 \cdot 10^{8}$\\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}
\caption{\label{tab:table4}Scaled estimates based on the Tezuka-Faure sequence of 7 billion points of the 35-dimensional volumes and 34-dimensional
hyperareas of the {\it separable} $6 \times 6$ density matrices, using several monotone metrics. The scaling factors are the {\it estimated}
values ($\tilde{\Sigma}^{(2,Bures)}_{6,0} = 9.54508 \cdot 10^{-20}$ and $\tilde{\Sigma}^{(2,Bures)}_{6,1}= 1.40208 \cdot 10^{-18}$) --- the true values being unknown --- of the separable volume and hyperarea
for the Bures metric. To implement the Peres-Horodecki positive partial transposition criterion, we compute
the partial transposes of the {\it nine} $2 \times 2$ submatrices (blocks) of the density
matrix.}
\begin{ruledtabular}
\begin{tabular}{rrr}
metric & Bures-scaled separable volume & Bures-scaled separable (rank-5) hyperarea \\
\hline
KM & 6465.86 & 9.0409 \\
arith & 218.602 & 10.3248 \\
WY & 55.5199 & 4.05201 \\
GKS & 7.92729 & 2.26136 \\
geom & $5.4299 \cdot 10^{35}$ & $4.35667 \cdot 10^{9}$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
In Table~\ref{tab:table5}, we only require the density matrix
in question to pass {\it either} of the two tests, while
in Table~\ref{tab:table6}, we require it
to pass {\it both} tests for separability.
(Of the 7 billion points of the Tezuka-Faure 35-dimensional sequence so
far generated,
approximately 2.91 percent yielded density matrices passing the test for Table III, 2.84 percent for
Table IV, 4 percent for Table V and 1.75 percent for Table VI. K.
\.Zyczkowski commented that ``it is not reasonable to ask about the probability that
{\it both} partial transpositions are simultaneously positive, since one should not mix two different physical problems together''.)
\begin{table}
\caption{\label{tab:table5}Scaled estimates based on the Tezuka-Faure sequence of 7 billion points of the 35-dimensional volumes and 34-dimensional
hyperareas of the {\it separable} $6 \times 6$ density matrices, using several monotone metrics. The scaling factors are the {\it estimated}
values ($1.99772 \cdot 10^{-19}$ and $2.90956 \cdot 10^{-18}$)--- the true values being unknown --- for the Bures metric.
A density matrix is included here if it passes {\it either} form of the positive partial transposition test.}
\begin{ruledtabular}
\begin{tabular}{rrr}
metric & Bures-scaled separable volume & Bures-scaled separable (rank-5)
hyperarea \\
\hline
KM & 7735.7 & 9.30446 \\
arith & 221.689 & 10.5467 \\
WY & 55.8928 & 4.11453 \\
GKS & 7.99075 & 2.28089 \\
geom & $2.59619 \cdot 10^{35}$ & $2.29051 \cdot 10^{9}$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}
\caption{\label{tab:table6}Scaled estimates based on the Tezuka-Faure sequence of 7 billion points of the 35-dimensional volumes and 34-dimensional
hyperareas of the {\it separable} $6 \times 6$ density matrices, using several monotone metrics. The scaling factors are the {\it estimated}
values ($3.06807 \cdot 10^{-21}$ and $3.78991 \cdot 10^{-32}$) --- the true values being unknown --- for the Bures metric.
A density matrix is included here {\it only}
if it passes
{\it both} forms of the positive partial transposition test.}
\begin{ruledtabular}
\begin{tabular}{rrr}
metric & Bures-scaled separable volume & Bures-scaled separable (rank-5) hyperarea \\
\hline
KM & 1800.19 & 3.99932 \\
arith & 92.7744 & 5.3548 \\
WY & 26.4785 & 2.55592 \\
GKS & 5.56969 & 1.77049 \\
geom & $3.96779 \cdot 10^{27}$ & $1.09937 \cdot 10^{7}$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
In Table VII, we ``pool'' (average) the results for the separable volumes
and hyperareas reported in Tables III and IV, based on the two
distinct forms of
partial transposition, to obtain possibly superior estimates of these
quantities, which presumably are actually one and the same {\it independent}
of the particular form of partial transposition.
\begin{table}
\caption{\label{tab:table7}Scaled estimates
obtained by pooling the results from Tables III and
IV --- based on the two forms of partial transposition --- for
the separable volumes and hyperareas. The Bures scaling factors
(pooled volume and hyperarea) are $ \tilde{\Sigma}^{(2,Bures)}_{6,0} = 1.0142 \cdot 10^{-19}$
and $ \tilde{\Sigma}^{(2,Bures)}_{6,1} = 1.4707 \cdot 10^{-18}$.}
\begin{ruledtabular}
\begin{tabular}{rrr}
metric & Bures-scaled separable volume & Bures-scaled separable (rank-5) hyperarea \\
\hline
KM & 7645.92 & 9.24704 \\
arith & 219.739 & 10.4905 \\
WY & 55.4479 & 4.09766 \\
GKS & 7.95413 & 2.27537 \\
geom & $2.55692 \cdot 10^{35}$ & $2.26584 \cdot 10^{9}$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
In Fig.~\ref{fig:graphKMBuresratio}
we show the ratios
of $\tilde{S}^{(2,KM)}_{6,0}$
to its conjectured value (\ref{KM32768})
of $32768 S^{(2,Bures)}_{6,0}$.
\begin{figure}
\caption{\label{fig:graphKMBuresratio}Ratios of the cumulative estimates of $\tilde{S}^{(2,KM)}_{6,0}$ to its conjectured value (\ref{KM32768}) of $32768 S^{(2,Bures)}_{6,0}$.}
\end{figure}
\subsection{Volumes and Hyperareas based on the Hilbert-Schmidt Metric} \label{secHS}
Along with the computations based on six distinct monotone metrics, reported
above in sec.~\ref{secMonotone},
we have at the same time carried out fully parallel analyses of the
(Riemannian, but
non-monotone) Hilbert-Schmidt metric \cite{ozawa}. These have
only been conducted
{\it after} an earlier less-extensive
form of this analysis \cite{slaterprecursor},
reporting initial numerical estimates for the same six monotone
metrics based on 600 million points of a
Tezuka-Faure sequence, was
posted. At that stage of our research, we had --- with certainly some
regrets --- decided to {\it fully} redo the
computations reported there. This was done
to avoid a (somewhat inadvertent) programming limitation, which had seemed of minor importance at the time, but which became
consequential once we had, in time,
understood how to greatly speed up the
computations: we had not been able to
sample {\it more} than two billion Tezuka-Faure points.
This fresh beginning (incorporating a much larger sampling limit,
of which we here take advantage)
allowed us then, as well, to additionally fully
include the Hilbert-Schmidt metric. (It is somewhat
unfortunate, however, at this
point, that we had not conducted analyses based on the HS metric
for the $N=4$ qubit-qubit case, having restricted our earlier attention to monotone metrics only \cite{slatersilver} [cf. sec.~\ref{N4qubitqubit}].)
Prior to Sommers and \.Zyczkowski reporting their exact formula
(\ref{HZ1})
for the Bures volume of the submanifold of the states
of rank $N-n$ of the set of complex ($\beta=2$) or real ($\beta=1$)
$N \times N$ density matrices, they had obtained fully analogous
formulas for the Hilbert-Schmidt metric, which for the specific
volume ($n=0$) case gives
\cite[eq. (4.5)]{hilb2},
\begin{equation}
S^{(2,HS)}_{N,0} = \sqrt{N} (2 \pi)^{N(N-1)/2} \frac{\Gamma(1) \ldots
\Gamma(N)}{\Gamma(N^2)}
\end{equation}
and for the hyperarea ($n=1$) case \cite[eq. (5.2)]{hilb2} gives,
\begin{equation}
S^{(2,HS)}_{N,1} =
\sqrt{N-1} (2 \pi)^{N(N-1)/2} \frac{\Gamma(1) \ldots \Gamma(N+1)}{\Gamma(N) \Gamma(N^2-1)}.
\end{equation}
For the (qubit-qutrit)
case $N=6$ under study in this paper, these give us, for the
35-dimensional HS volume,
\begin{equation} \label{HSvol1}
S^{(2,HS)}_{6,0} =
\frac{{\pi }^{15}}
{1520749664069126407256340000000\,{\sqrt{6}}} \approx 7.69334 \cdot {10}^{-24}
\end{equation}
and for the 34-dimensional HS hyperarea
\begin{equation} \label{HShyperarea1}
S^{(2,HS)}_{6,1} = \frac{{\pi }^{15}}
{8689998080395008041464800000\,{\sqrt{5}}} \approx 1.47483 \cdot {10}^{-21}.
\end{equation}
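As a simple numerical check of the two formulas above (a sketch of our own, with factorials standing in for the integer-argument Gamma functions), one may verify the values (\ref{HSvol1}) and (\ref{HShyperarea1}) directly:
\begin{verbatim}
from math import factorial, pi, sqrt

def hs_volume(N):        # S^{(2,HS)}_{N,0}
    v = sqrt(N) * (2 * pi) ** (N * (N - 1) / 2)
    for k in range(1, N + 1):
        v *= factorial(k - 1)          # Gamma(1)...Gamma(N)
    return v / factorial(N * N - 1)    # Gamma(N^2)

def hs_hyperarea(N):     # S^{(2,HS)}_{N,1}
    v = sqrt(N - 1) * (2 * pi) ** (N * (N - 1) / 2)
    for k in range(1, N + 2):
        v *= factorial(k - 1)          # Gamma(1)...Gamma(N+1)
    return v / (factorial(N - 1) * factorial(N * N - 2))

print(hs_volume(6))      # ~7.69334e-24
print(hs_hyperarea(6))   # ~1.47483e-21
\end{verbatim}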
So, as above, using the Bures metric,
we can further gauge the accuracy of the Tezuka-Faure numerical
integration in terms of these {\it known} volumes and hyperareas.
(This somewhat alleviates the shortcoming of the Tezuka-Faure procedure
in not lending itself to statistical testing in any straightforward manner.)
The estimated probability of separability is {\it greater} for the HS-metric
than for any monotone one. (The minimal monotone or Bures metric appears
to give the greatest probability in the
nondenumerable class of monotone metrics. Also, the {\it maximal}
monotone metric seems to give a {\it zero} probability \cite{slatersilver}.)
Therefore, one might surmise that the much-discussed
estimates of the sizes of the separable
neighborhoods \cite{sepsize1,sepsize2,sepsize3} --- which usually appear to be based on the
HS or Frobenius metric --- surrounding
the fully mixed state are on the rather generous side (cf. \cite{szarek}),
relatively speaking.
In Fig.~\ref{fig:graphHSTotVol}
we show --- paralleling Fig.~\ref{fig:graphBuresTotVol} --- the
ratios of our cumulative estimates
of $S^{(2,HS)}_{6,0}$ to its known value (\ref{HSvol1}).
\begin{figure}
\caption{\label{fig:graphHSTotVol}Ratios of the cumulative estimates of $S^{(2,HS)}_{6,0}$ to its known value (\ref{HSvol1}).}
\end{figure}
In Fig.~\ref{fig:graphHSTotArea}
we show --- paralleling Fig.~\ref{fig:graphBuresTotArea} --- the
ratios of the cumulative estimates
of $S^{(2,HS)}_{6,1}$ to its known value (\ref{HShyperarea1}).
\begin{figure}
\caption{\label{fig:graphHSTotArea}Ratios of the cumulative estimates of $S^{(2,HS)}_{6,1}$ to its known value (\ref{HShyperarea1}).}
\end{figure}
A plot (Fig.~\ref{fig:graphHSsep}) of the cumulative estimates of the Hilbert-Schmidt volume of separable qubit-qutrit states (for
the two forms of partial transposition) is also presented.
(The ratio of the two cumulative estimates at the final [seven billion]
mark is 1.03236, while the comparable ratio is 1.12508 in the analogous
Bures plot [Fig~\ref{fig:graphBuressep}].)
\begin{figure}
\caption{\label{fig:graphHSsep}Cumulative estimates of the Hilbert-Schmidt volume of the separable qubit-qutrit states, for the two forms of partial transposition.}
\end{figure}
In their two studies \cite{hans1,hilb2}, deriving exact formulas for the Bures and Hilbert-Schmidt volumes and hyperareas of the $N \times N$ density matrices, Sommers and \.Zyczkowski also
explicitly derived expressions for the {\it ratios} of
$(N^2-2)$-dimensional hyperareas to $(N^2-1)$-dimensional volumes.
These were \cite[eq. (4.20)]{hans1}
\begin{equation} \label{gammaBures}
\gamma_{Bures,N}= \frac{S^{(2,Bures)}_{N,1}}{S^{(2,Bures)}_{N,0}}
= \frac{2}{\sqrt{\pi}} \frac{\Gamma (N^2/2)}{\Gamma(N^2/2-1/2)} N
\end{equation}
and \cite[eq. (6.5)]{hilb2}
\begin{equation} \label{gammaHS}
\gamma_{HS,N}= \frac{S^{(2,HS)}_{N,1}}{S^{(2,HS)}_{N,0}}
= \sqrt{N(N-1)} (N^2-1).
\end{equation}
In the $N=6$ Bures case, this ratio (equivalently what we have earlier denoted
$R^{Bures}_{sep+nonsep}$) is
\begin{equation}
\gamma_{Bures,6} \equiv \frac{S^{(2,Bures)}_{6,1}}{S^{(2,Bures)}_{6,0}} = \frac{2^{34}}{3^2 \cdot 5 \cdot 11 \cdot 19 \cdot 23 \cdot 29 \cdot 31 \pi}
= \frac{17179869184}{194467185 \pi} \approx 28.1205,
\end{equation}
and in the Hilbert-Schmidt case (equivalently $R^{HS}_{sep+nonsep}$),
\begin{equation}
\gamma_{HS,6} \equiv \frac{S^{(2,HS)}_{6,1}}{S^{(2,HS)}_{6,0}}
= 35 \sqrt{30} \approx 191.703.
\end{equation}
(For large $N$, the Bures ratio grows as $D$, where $D=N^2-1$ is the dimensionality of the
$N \times N$ density matrices \cite[sec.~IV.C]{hans1}, while the Hilbert-Schmidt ratio grows as $D^{3/2}$
\cite[sec.~VI]{hilb2}.)
Our sample estimates for these two quantities are 28.1804 and 192.468,
respectively.
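For the record, the two theoretical values just quoted can be reproduced directly from the general formulas (\ref{gammaBures}) and (\ref{gammaHS}) (a minimal sketch of our own, in Python):
\begin{verbatim}
from math import gamma, pi, sqrt

def gamma_bures(N):      # Bures hyperarea-to-volume ratio
    return 2 / sqrt(pi) * gamma(N**2 / 2) / gamma(N**2 / 2 - 0.5) * N

def gamma_hs(N):         # Hilbert-Schmidt hyperarea-to-volume ratio
    return sqrt(N * (N - 1)) * (N**2 - 1)

print(gamma_bures(6), gamma_hs(6))   # ~28.1205 and ~191.703
\end{verbatim}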
In Table~\ref{tab:isoperimetric}, we report these estimates, as well as
the sample estimates for the other
five metrics under study here. We also list the two known values
and also give the corresponding ratios of hyperarea to volume for
a 35-dimensional {\it Euclidean}
ball having: (1) the same volume as for
the metric in question;
and (2) the same hyperarea.
{\it Only} for the (flat)
HS-metric are these last two ratios {\it less}
than unity (cf. \cite[sec. VI]{hilb2}).
\begin{table}
\caption{\label{tab:isoperimetric}Sample estimates of the ratio ($R^{metric}_{sep+nonsep}
=S^{(2,metric)}_{6,1}/S^{(2,metric)}_{6,0}$)
of the
34-dimensional hyperarea to the 35-dimensional volume for the seven metrics
under study, and the corresponding ratios for a 35-dimensional {\it Euclidean}
ball having: (1)
the same volume as for the metric; and (2) the same hyperarea}
\begin{ruledtabular}
\begin{tabular}{rrrrr}
metric & known ratio & sample ratio ($R^{metric}_{sep+nonsep}$) & isovolumetric ratio & isohyperarea ratio \\
\hline
Bures & 28.1205 & 28.1804 & 2.34553 & 2.40508 \\
KM & --- & 0.0394299 & 1245.79 & 1536.34 \\
arith & --- & 1.41531 & 38.858 & 43.2743 \\
WY & --- & 2.08556 & 27.5655 & 30.39 \\
GKS & --- & 8.07163 & 7.61987 & 8.08886 \\
geom & --- & $1.44625 \cdot 10^{-29}$ & $2.45463 \cdot 10^{29} $ & $1.79638 \cdot 10^{30}$ \\
HS & 191.703 & 192.468 & 0.543466 & 0.533806 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\subsection{Separability-probability ratios} \label{ratios}
\subsubsection{The $N=6$ qubit-qutrit case}
In Table~\ref{tab:isoperimetric2} we list for the seven metrics the
estimated ratios, which we denote
$R^{metric}_{sep}$, of
the hyperarea (consisting of only the rank-five but not the rank-six $6 \times 6$ density matrices constituting the boundary of the
{\it separable}
density matrices \cite{shidu}) to the volume of all the separable states themselves.
\begin{table}
\caption{\label{tab:isoperimetric2}Sample estimates of the ratio ($R^{metric}_{sep}$)
of the
34-dimensional hyperarea consisting only
of rank-five $6 \times 6$
{\it separable} density
matrices to the 35-dimensional separable volume for the seven metrics
under study. In the last column there are given the ratios of
ratios ($\Omega^{metric}$) of the middle (third)
column of Table~\ref{tab:isoperimetric} to these values}
\begin{ruledtabular}
\begin{tabular}{rrr}
metric & $R^{metric}_{sep}$ & $\Omega^{metric} \equiv
R^{metric}_{sep+nonsep}/R^{metric}_{sep}
=P_{6}^{[metric,6]}/P_{6}^{[metric,5]}$ \\
\hline
Bures & 14.501 & 1.94334 \\
KM & 0.0175377 & 2.24829 \\
arith & 0.692291 & 2.04439 \\
WY & 1.07164 & 1.94613 \\
GKS & 4.14819 & 1.94582\\
geom & $1.28502 \cdot 10^{-25}$ & 0.000112547 \\
HS & 94.9063 & 2.0279 \\
\end{tabular}
\end{ruledtabular}
\end{table}
We see that $R^{WY}_{sep}$ is quite close to 1.
(The Wigner-Yanase metric is one of constant curvature \cite{gi}.)
In Fig.~\ref{fig:graphratiosepWY} we show the deviations of the cumulative
estimates of $R^{WY}_{sep}$ from 1.
\begin{figure}
\caption{\label{fig:graphratiosepWY}Deviations from 1 of the cumulative estimates of $R^{WY}_{sep}$.}
\end{figure}
In the last column of Table~\ref{tab:isoperimetric2}
there are given the ratios of
ratios $\Omega^{metric} \equiv
R^{metric}_{sep+nonsep}/R^{metric}_{sep}$. (The
exceptional [geometric average] case might possibly simply be dismissed from serious consideration
on the basis of numerical instabilities, with the associated volumes
for this metric appearing to be actually
{\it infinite} in nature. Also, as we will see below, $\Omega^{KM}$ is subject to particularly severe jumps, perhaps decreasing the reliability of the
estimates. The other five are rather close to 2 --- but it is also somewhat intriguing that three of the
estimated monotone metric
ratios are quite close to one another ($\approx 1.945$),
suggesting perhaps a common value
{\it unequal} to 2.)
This ratio-of-ratios can easily be rewritten --- as explicated in the
Introduction --- to take the form of a ratio of separability probabilities. That is, $\Omega^{metric}$ is equivalently the ratio
of the probability of separability ($P_{6}^{[metric,6]}$)
for all qubit-qutrit states to the
conditional
probability of separability ($P_{6}^{[metric,5]}$) for those states on the (rank-five) boundary of the
35-dimensional convex set.
An interesting conjecture now would be that this ratio ($\Omega^{metric}$)
is equal to the integral value 2,
{\it independently} of the (monotone or HS) metric used to measure the volumes and hyperareas.
If, in fact, valid, then there is presumably a {\it topological}
explanation \cite{witten} for this. (We were able to quite readily reject
the speculation that this
phenomenon might be in some way an {\it artifact} of our particular
experimental design, in that we employ, as previously discussed,
only for simple computational
convenience, a 34-dimensional subsequence of
the 35-dimensional Tezuka-Faure sequence --- rather than an {\it ab initio}
independent 34-dimensional Tezuka-Faure sequence for the calculation of the
hyperareas.)
We must observe, however, that all the seven
metrics specifically studied here induce
the {\it same} (Haar) measure over 30 of the 35 variables --- that is, the
30 Euler angles parameterizing the unitary matrices
\cite{sudarshan,toddecg} --- but not over the five
independent eigenvalues of the $6 \times
6$ density matrix. Therefore, it is certainly valid to point out
that we
have not considered {\it all} types of
possible metrics over the 35-dimensional
space, but have restricted attention only to certain of those that are
{\it not} inconsistent with quantum mechanical principles.
(S. Stolz has pointed out, in a personal communication, that, in general,
one could modify a metric in the interior away from the boundary {\it and}
outside the separable states,
without affecting the metric on the separable states,
thus changing $R^{metric}_{sep+nonsep}$
without changing $R^{metric}_{sep}$, but obviously then also
altering the ratio of ratios ({\it proportio proportionum} \cite{oresme})
$\Omega^{metric}$.
But presumably such a modification
would lead, in our context, to the volume element of the
so-modified metric {\it not} respecting Haar measure [cf. \cite[App. A]{hayden}].)
The {\it topology} of the $(N^2-1)$-dimensional convex set
of $N \times N$ density matrices has been laid out by \.Zyczkowski
and S{\l}omczynski \cite[sec. 2.1]{slom}.
The topological structure is expressible as
\begin{equation}
[U(N)/T^N] \times G_{N},
\end{equation}
where the group of unitary matrices of size $N$ is denoted by $U(N)$ and the
unit circle (one-dimensional torus $\approx U(1)$) by $T$, while $G_{N}$
represents an $(N-1)$-dimensional {\it asymmetric}
simplex. It would appear, however, that the set of separable states lacks
such a product topological structure (thus, rendering integrations over the
set --- and hence the computation of
corresponding volumes --- quite problematical).
In Fig.~\ref{fig:ratioofratiosGKS} are plotted the deviations from the conjectured integral value of 2 of the cumulative estimates of
the ratio ($\Omega^{GKS}$) --- given in Table~\ref{tab:isoperimetric2} --- of
the two hyperarea-to-volume ratios for the GKS
monotone metric,
the numerator ($R^{GKS}_{sep+nonsep}$)
of $\Omega^{GKS}$ being based on
the entirety of qubit-qutrit states, and the
denominator ($R^{GKS}_{sep}$) being based on the boundary qubit-qutrit
states only. (All the succeeding plots of deviations from the conjectured integral value of 2 will be drawn to the {\it same} scale.)
\begin{figure}
\caption{\label{fig:ratioofratiosGKS}Deviations from the conjectured value of 2 of the cumulative estimates of $\Omega^{GKS}$.}
\end{figure}
In Figures~\ref{fig:ratioofratiosBures}, \ref{fig:ratioofratiosHS}
and \ref{fig:ratioofratiosKM},
we show the corresponding plots based on the Bures, Hilbert-Schmidt
and
Kubo-Mori metrics, respectively.
\begin{figure}
\caption{\label{fig:ratioofratiosBures}Deviations from the conjectured value of 2 of the cumulative estimates of $\Omega^{Bures}$.}
\end{figure}
\begin{figure}
\caption{\label{fig:ratioofratiosHS}Deviations from the conjectured value of 2 of the cumulative estimates of $\Omega^{HS}$.}
\end{figure}
\begin{figure}
\caption{\label{fig:ratioofratiosKM}Deviations from the conjectured value of 2 of the cumulative estimates of $\Omega^{KM}$.}
\end{figure}
We note that the cumulative estimates in this last plot were relatively close
to 2, before a sudden spike in the curve drove it upward.
The values for the (quite unrelated) Bures and HS-metrics are rather
close to 2,
which is the main factor in our advancing the conjecture in question.
It would, of course, be of interest to study comparable ratios involving
$6 \times 6$ density matrices of generic
rank less than 5. We did not originally
incorporate these into our MATHEMATICA
Tezuka-Faure
calculations (in particular, since we did not anticipate the apparent metric-independent phenomenon we have
observed here). In sec.~\ref{newestone} below, we have, however, subsequently pursued such analyses.
\subsubsection{The $N=4$ qubit-qubit case} \label{N4qubitqubit}
We adapted our MATHEMATICA routine used so far
for the scenario $N=6$, so that it
would yield analogous results for $N=4$. Based on 400 million points of
a new independent 15-dimensional
Tezuka-Faure sequence, we obtained the results reported
in Table~\ref{tab:reducedcase}. (We now use the lower-case counterparts of
the symbols $R$ and $\Omega$ to differentiate the $N=4$ case from the
$N=6$ one.)
\begin{table}
\caption{\label{tab:reducedcase}Counterparts for the qubit-{\it qubit}
case $N=4$ of the
ratios of separability probabilities, based on 400 million points of a Tezuka-Faure sequence}
\begin{ruledtabular}
\begin{tabular}{rrrr}
metric & $r^{metric}_{sep+nonsep}$ & $r^{metric}_{sep}$ & $\omega^{metric}$ \\
\hline
Bures & 12.1563 & 6.58246 & 1.84676 \\
KM & 0.506688 & 0.348945 & 1.45206 \\
arith & 2.19634 & 1.2269 & 1.79015 \\
WY & 2.93791 & 1.73028 & 1.69794 \\
GKS & 6.03321 & 3.3661 & 1.79234 \\
geom & $4.02853 \cdot 10^{-16}$ & $7.1263 \cdot 10^{-16} $& 0.565304 \\
HS & 51.9626 & 25.9596 & 2.00167 \\
\end{tabular}
\end{ruledtabular}
\end{table}
Here, once more, the ratios-of-ratios ($\omega^{metric}$)
tend to show rather similar values,
with the two exceptional cases again being the geometric average metric
(which we suspect ---like the maximal (Yuen-Lax) monotone metric, from
which it is partially formed --- simply
gives infinite volumes and hyperareas) and
the somewhat unstable KM monotone metric (which now gives an atypically
{\it low} value!). We were somewhat surprised that
the Hilbert-Schmidt metric again gives, as for $N=6$,
a value quite close to 2.
In Fig.~\ref{fig:graphRank4HS} we show (on a comparatively very fine scale)
the deviations from 2 of the
cumulative estimates of the ratio of the Hilbert-Schmidt separability probability for the rank-4 states to that for the rank-3 states.
\begin{figure}
\caption{\label{fig:graphRank4HS}Deviations from 2 of the cumulative estimates of the ratio of the Hilbert-Schmidt rank-4 to rank-3 separability probabilities ($N=4$).}
\end{figure}
However, it now seems fairly certain
that if there is a true common value for $\omega^{metric}$ across the metrics, then it is not an integral one (and thus possibly not admitting a {\it topological} explanation).
The theoretical values predicted by (\ref{gammaBures}) and
(\ref{gammaHS}) for $r^{Bures}_{sep+nonsep}$
and $r^{HS}_{sep+nonsep}$ are $16384/(429 \pi) \approx 12.1566$ and
$30 \sqrt{3} \approx 51.9615$, respectively.
Also, consulting Table 6 of our earlier study
\cite{slatersilver}, we find that
using the conjectured and
known values for the qubit-{\it qubit} case ($N=4$) presented there
gives us $\omega^{Bures}
= 8192/(1419 \pi) \approx 1.83763$ and a somewhat similar numerical
value
$\omega^{arith} = 408260608/(73153125 \pi)
\approx 1.77646$.
Let us also indicate in passing that this new independent Tezuka-Faure sequence yields estimates that are quite close to previously known and conjectured values. For example, the ratios of the estimates of $S^{(2,Bures)}_{4,0}$
and $S^{(2,Bures)}_{4,1}$ to their respective known values are 1.0001 and
0.9999. Further, the ratios of the estimates of $\Sigma^{(2,Bures)}_{4,0}$
and $\Sigma^{(2,Bures)}_{4,1}$ (our estimate being
0.138119) to their respective
conjectured values are 1.0001 and 0.99999.
Let us take this opportunity to note that our analyses here indicate that the conjectures given in Table 6 of \cite{slatersilver} for the 14-dimensional hyperareas --- denoted $B^s$ and $B^{s+n}$ there --- appear
to have been too large by a factor of 8.
\subsubsection{The $N=9$ rank-9 and rank-8 cases}
K. \.Zyczkowski has indicated to us that he has an argument, if not yet fully rigorous, to the effect that the ratio of the probability of rank-$N$ states having positive partial transposes to the probability of such rank-$(N-1)$ states should be 2 {\it independently} of $N$.
Some early analyses of ours --- based on a
so-far relatively short Tezuka-Faure sequence of 126 million points in the extraordinarily high-dimensional (80-dimensional)
hypercube --- gave us a
Hilbert-Schmidt rank-9/rank-8
probability
ratio of 1.89125. (The analogous
ratios for the monotone metrics were largely on the order
of 0.15. In these same analyses we also --- for our first time --- implemented, as well, the computable cross-norm criterion for separability
\cite{rudolph},
and found that {\it many} more density matrices could not be ruled out
as possibly separable than with the [apparently much more discriminating]
positive
partial transposition criterion. The Hilbert-Schmidt
probability ratio based on the cross-norm criterion was 0.223149.)
In Fig.~\ref{fig:graphRank9} we show the deviations from 2 of the
cumulative estimates of the Hilbert-Schmidt rank-9/rank-8 ratio based on the positivity of the partial transpose.
\begin{figure}
\caption{\label{fig:graphRank9}Deviations from 2 of the cumulative estimates of the Hilbert-Schmidt rank-9/rank-8 probability ratio, based on the positivity of the partial transpose ($N=9$).}
\end{figure}
However, this plot seems very unstable, so we must be quite cautious
(pending a much more extended analysis) in
its interpretation.
\subsubsection{The $N=6$ rank-4 and rank-3 cases} \label{newestone}
The principal analyses above have been concerned with the full rank
(rank-6) and rank-5 $6 \times 6$ density matrices.
We adapted our MATHEMATICA procedure so that it would analyze the rank-4 and rank-3 cases, in a similar fashion. Now, we are dealing with 31-dimensional and 26-dimensional scenarios, {\it vis \`a vis}
the original 35- and 34-dimensional ones.
In a preliminary run, based on 35 million points of corresponding Tezuka-Faure sequences, not a single rank-3 separable $6 \times 6$ density matrix was
generated. (The general results of Lockhart \cite{lockhart} --- based on
Sard's Theorem --- tell us
that
the measures of the rank-2 and rank-1 $6 \times 6$ separable density matrices must be
zero, but not that of the rank-3 ones, for which, nevertheless, we appear to have observed a zero [or near-zero] value.)
At that stage, we decided to concentrate further in our calculations
on the rank-4 case alone.
In Table~\ref{tab:N6rank4} we report results based on one billion points of
a (new/independent) 31-dimensional Tezuka-Faure sequence, coupled with our estimates obtained
on the basis of our principal analysis, using the before-mentioned seven billion points.
\begin{table}
\caption{\label{tab:N6rank4}Estimated ratios of both
rank-6 and rank-5 qubit-qutrit separability probabilities to rank-4 separability probabilities}
\begin{ruledtabular}
\begin{tabular}{rrr}
metric & rank-6/rank-4 ratio & rank-5/rank-4 ratio \\
\hline
Bures & 20.9605 & 10.7858 \\
KM & 12.2764 & 5.4603 \\
arith & 17.4245 & 8.52308 \\
WY & 15.5015 & 7.96527 \\
GKS & 18.3778 & 9.44474 \\
geom & $1.30244 \cdot 10^{-7}$ & 0.00115724\\
HS & 33.9982 & 16.7652 \\
\end{tabular}
\end{ruledtabular}
\end{table}
We note that for the Hilbert-Schmidt metric, the rank-6/rank-4 ratio of 33.9982 ($= 2 \times 16.9991$)
is quite close to an integral value.
In Fig.~\ref{fig:graphDev34} we show the deviations of the cumulative estimates of this ratio from the value 34.
\begin{figure}
\caption{\label{fig:graphDev34}Deviations from the value 34 of the cumulative estimates of the Hilbert-Schmidt rank-6/rank-4 separability probability ratio.}
\end{figure}
(Of course, if the ratio of the rank-6 HS separability probability to the rank-5 HS separability probability is exactly, in theory, equal to 2, and the rank-6/rank-4 separability probability exactly 34, then
the rank-5/rank-4 ratio should be 17. Since it is
based on greater numbers of sampled separable density matrices,
we suspect the sample estimate of the rank-6/rank-4 HS separability
probability may perhaps be superior to the
[less closely integral in value --- that is, 16.7652] rank-5/rank-4 estimate.)
Though the convergence to the predicted Hilbert-Schmidt volume was quite
good (99.9654\% of that given by the \.Zyczkowski-Sommers formula \cite[eq. (5.3)]{hilb2},
for $N=6,n=2$),
we were initially
rather disappointed/surprised that our estimate of the Bures volume deviated from the predicted value by some 25\%. This indicated to us that either
the numerics were much more difficult for the Bures computation,
or there was a possible error in our programming
(which we were unable to locate) or even the
possibility that something was incorrect with the {\it specific}
Sommers-\.Zyczkowski formula \cite[eq. (4.19)]{hansrecent} we were using,
\begin{equation} \label{HZ2}
S^{(2,Bures)}_{N,n}= 2^{-d_{n}} \frac{\pi^{(d_{n}+1)/2}}{\Gamma((d_n+1)/2)}
\binom{N+n-1}{n}.
\end{equation}
This last possibility, in fact, proved to be the case, as we found that
their formula
(4.19) did not agree (for cases other than $n=0,1,N-1$)
with the
more general formula (5.15) --- reproduced above as (\ref{HZ1}) --- and
that using the correct formulation (5.15) (which we found also agrees with
(4.18) of \cite{hansrecent})
with $\beta=2,n=2,N=6$,
our numerical deviation was reduced from 25\% to a more acceptable/less surprising 0.1\%. A rectified version of their formula (4.19),
\begin{equation}
S^{(2,Bures)}_{N,n}=2^{-d_n}\frac{\pi^{(d_n+1)/2}}
{\Gamma((d_n+1)/2)}
\prod_{j=0}^{N-n-1} \frac{j! (j+2n)!}{[(j+n)!]^2} ,
\end{equation}
has since been posted
by Sommers and \.Zyczkowski (quant-ph/0304041 v3) after this
matter was brought to their attention.
\subsection{Well-fitting
formulas for the Bures and Hilbert-Schmidt separable volumes and hyperareas} \label{wellfitting}
\subsubsection{The Bures case}
Proceeding under the assumption of the validity of our
conjecture above (regarding the integral value of 2 for
$\Omega^{metric}$),
computational experimentation indicates that the Tezuka-Faure quasi-Monte Carlo separable Bures
results
can be quite well fitted by taking
for the {\it separable} Bures 35-dimensional volume
\begin{equation} \label{sepconj60}
\Sigma^{(2,Bures)}_{6,0}= \frac{3 c_{Bures}}{2^{77}} \approx 1.03447
\cdot 10^{-19},
\end{equation}
and for the {\it separable} (rank-five) 34-dimensional
hyperarea
\begin{equation} \label{sepconj61}
\Sigma^{(2,Bures)}_{6,1}= (2^{43} \cdot 3 \cdot 5 c_{Bures})^{-1} \approx 1.45449 \cdot 10^{-18},
\end{equation}
where by $\Sigma$ we denote volumes and hyperareas of {\it separable} states
and
\begin{equation}
c_{Bures} = \sqrt{8642986 \pi} = \sqrt{\pi \cdot 2 \cdot 11 \cdot 19 \cdot 23
\cdot 29 \cdot 31} \approx 5210.83.
\end{equation}
The pooled {\it sample} estimates, $\tilde{\Sigma}^{(2,Bures)}_{6,0}$ and
$\tilde{\Sigma}^{(2,Bures)}_{6,1}$, as indicated in the caption to
Table~\ref{tab:table7}, are $1.0142 \cdot 10^{-19}$ and
$1.4707 \cdot 10^{-18}$.
Then, we would have the Bures probability of separability of the
(generically rank-six) qubit-qutrit
states as
\begin{equation} \label{Buresprob}
P_{6}^{[Bures,rank-6]} =
\frac{3^7 \cdot 5^3 \cdot 7^2 \cdot 11 \cdot 13 \cdot 17 c_{Bures}}{2^{27} \pi^{18}} \approx 0.00142278,
\end{equation}
and the Bures probability of separability of the generically rank-five
qubit-qutrit states, exactly (by our integral conjecture)
one-half of this.
(However, there appears to be no obvious way in which the formulas immediately
above extend the
analogous ones in the qubit-qubit separable
case \cite{slatersilver}, which
were hypothesized to involve the ``silver mean'', $\sigma_{Ag}=\sqrt{2}-1$.
Thus, it does not seem readily
possible to use the results here to, in any way,
support our earlier conjectures.)
We have also devised another set of exact Bures formulas that fit our data
roughly as well as (\ref{sepconj60}) and (\ref{sepconj61}).
These are
\begin{equation} \label{ssepconj60}
\Sigma^{(2,Bures)}_{6,0}= \frac{3^2 \cdot 11 \cdot 19 \cdot 23 \cdot 29 \cdot 31 \pi}{2^{76} \cdot 5^6} \approx 1.03497
\cdot 10^{-19}
\end{equation}
and for the {\it separable} (rank-five) 34-dimensional
hyperarea
\begin{equation} \label{ssepconj61}
\Sigma^{(2,Bures)}_{6,1}= (2^{43} \cdot 5^7)^{-1} \approx 1.45519 \cdot
10^{-18}.
\end{equation}
\subsubsection{The Hilbert-Schmidt case}
Additionally, we can achieve {\it excellent} fits to our
Hilbert-Schmidt estimates
by taking for the {\it separable} (rank-six) 35-dimensional volume
\begin{equation} \label{Sepconj60}
\Sigma^{(2,HS)}_{6,0}= (2^{45} \cdot 3 \cdot 5^{13} \cdot 7 \sqrt{30})^{-1}
\approx 2.02423 \cdot 10^{-25}
\end{equation}
(the sample estimate being $2.05328 \cdot 10^{-25}$)
and for the {\it separable} (rank-five) 34-dimensional
hyperarea
\begin{equation} \label{Sepconj61}
\Sigma^{(2,HS)}_{6,1}= (2^{46} \cdot 3 \cdot 5^{12})^{-1}
\approx 1.94026 \cdot 10^{-23}
\end{equation}
(the sample estimate being $1.94869 \cdot 10^{-23}$).
This gives us a Hilbert-Schmidt probability of separability of the
generically rank-six states of
\begin{equation}
P_{6}^{[HS,rank-6]}= \frac{3^{10} \cdot 7^4 \cdot 11^3 \cdot 13^2 \cdot 17^2 \cdot 19 \cdot 23 \cdot 29 \cdot 31 \sqrt{5}}{2^{37} \cdot 5^7 \pi^{15}} \approx 0.0263115,
\end{equation}
approximately 18.5 times the predicted Bures probability (\ref{Buresprob}).
(As we have noted, the Bures separability probability appears to be the
{\it greatest} among the monotone metrics.) An {\it upper} bound of
$0.166083 \approx (0.95)^{35}$ on $P_{6}^{[HS,rank-6]}$ is given in Appendix G of \cite{szarek}.
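The probability just displayed is simply the ratio of the conjectured separable volume (\ref{Sepconj60}) to the known total volume (\ref{HSvol1}); a minimal numerical confirmation (our own sketch) is:
\begin{verbatim}
from math import factorial, pi, sqrt

def hs_volume(N):                      # total HS volume, as in sec. above
    v = sqrt(N) * (2 * pi) ** (N * (N - 1) / 2)
    for k in range(1, N + 1):
        v *= factorial(k - 1)
    return v / factorial(N * N - 1)

sigma_60 = 1 / (2**45 * 3 * 5**13 * 7 * sqrt(30))  # conjectured separable volume
print(sigma_60 / hs_volume(6))                     # ~0.0263115
\end{verbatim}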
Our simple
excellent Hilbert-Schmidt fits here led us to investigate whether the same could be achieved in the qubit-qubit ($N=4$) case, using the same
400 million point Tezuka-Faure sequence employed in sec.~\ref{N4qubitqubit}.
This, in fact, seemed definitely doable, by taking
\begin{equation}
\Sigma^{(2,HS)}_{4,0} = (3^3 5^7 \sqrt{3})^{-1} \approx 2.73707 \cdot 10^{-7},
\end{equation}
(the sample estimate being $2.73928 \cdot 10^{-7}$)
and
\begin{equation}
\Sigma^{(2,HS)}_{4,1}= (3^2 5^6)^{-1} \approx 7.11111 \cdot 10^{-6}
\end{equation}
(the sample estimate being $7.11109 \cdot 10^{-6}$).
The {\it exact}
Hilbert-Schmidt probability of separability of the generically rank-4
qubit-qubit states would then be
\begin{equation}
P_{4}^{[HS,rank-4]}= \frac{2^2 \cdot 3 \cdot 7^2 \cdot 11 \cdot 13 \sqrt{3}}{5^4 \pi^6} \approx
0.242379.
\end{equation}
(A {\it lower} bound for $\Sigma_{4,0}^{(2,HS)}$ of
${256 \pi^7/29085593211675} \approx 2.65834 \cdot 10^{-8}$ --- that is, the volume of a 15-dimensional ball of radius $\frac{1}{3}$ --- appears obtainable
from the results of \cite{sepsize3}, although it is not fully clear
to this reader whether
the argument there applies to the {\it two}-qubit case [$m=2$], since the exponent $\frac{m}{2}-1$ appears.)
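A parallel check for the qubit-qubit fit just given (again a sketch of our own): with the total Hilbert-Schmidt volume for $N=4$ computed from the general formula of sec.~\ref{secHS}, the conjectured $\Sigma^{(2,HS)}_{4,0}$ indeed reproduces the stated probability.
\begin{verbatim}
from math import factorial, pi, sqrt

# Total HS volume for N = 4: sqrt(4) (2 pi)^6 Gamma(1)...Gamma(4) / Gamma(16)
s_40 = 2 * (2 * pi) ** 6 * (1 * 1 * 2 * 6) / factorial(15)
sigma_40 = 1 / (3**3 * 5**7 * sqrt(3))   # conjectured separable HS volume
print(sigma_40 / s_40)                   # ~0.242379
\end{verbatim}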
Of course, one would now like to try to extend these Hilbert-Schmidt results
to cases $N>6$.
Also, let us propose the formula
\begin{equation}
\Sigma^{(2,HS)}_{6,2} = \frac{7 \cdot 11}{2^{41} 5^{11} \sqrt{5} \pi} \approx
1.02084 \cdot 10^{-19},
\end{equation}
(the sample estimate [sec.~\ref{newestone}] being $1.04058 \cdot 10^{-19}$).
\section{Discussion}
In the main numerical analysis of
this study (secs.~\ref{secMonotone} and \ref{secHS}),
we have directly estimated 28 quantities of interest --- seven
total volumes of the 35-dimensional space of qubit-qutrit states, seven
34-dimensional hyperareas of the boundary of those states, and the same
quantities when restricted to the separable qubit-qutrit states.
Of these 28 quantities, four
(that is, $S^{(2,Bures)}_{6,0}, S^{(2,Bures)}_{6,1}, S^{(2,HS)}_{6,0}$
and $S^{(2,HS)}_{6,1}$) were
precisely
known from previous analyses of Sommers
and \.Zyczkowski \cite{hans1,hilb2}.
It is interesting to observe that the Tezuka-Faure
quasi-Monte Carlo numerical integration procedure has, in all four of these
cases, as shown in the corresponding table and figures
(Table~\ref{tab:table1} and Figs.~\ref{fig:graphBuresTotVol}, \ref{fig:graphBuresTotArea}, \ref{fig:graphHSTotVol}, \ref{fig:graphHSTotArea}),
slightly but consistently
{\it under}estimated the known values --- more so, it seems, with
the 35-dimensional volumes, as opposed to the 34-dimensional
hyperareas. (So, in statistical terminology, we
appear to have {\it biased} estimators. The very same form of bias --- in terms of the Bures metric --- was observed
in the precursor analysis \cite{slaterprecursor} to this one, based on
an independent, shorter Tezuka-Faure sequence. {\it Randomizing} deterministic algorithms --- such as the Tezuka-Faure --- can remove such bias \cite{hong}.)
This suggests that we
might possibly
improve the accuracy of the estimates of the 24 unknown quantities
by scaling them in accordance with the magnitude of
known underestimation.
Also, we have in our several tables only reported the results at the
(seven-billion point)
end of the Tezuka-Faure procedure. We might also report results at intermediate
stages at which the estimates of the 4 known quantities are closest to their
true values, since estimates of the 24 unknown quantities might arguably
also be most accurate at those stages.
Of course, as we have done,
taking the {\it ratios} of estimates of the volumes/hyperareas
of separable states to the estimates of the volumes/hyperareas of
separable plus nonseparable states, one, in turn, obtains estimates of
the probabilities of separability \cite{ZHSL}
for the various monotone metrics studied. ({\it Scaling} the estimated volumes
and hyperareas by the corresponding estimates for the
Bures metric, as we have done in certain of the
tables above for numerical convenience and possible insightfulness,
would be inappropriate in such a process.)
Among the metrics studied,
the Hilbert-Schmidt metric gives the {\it largest}
qubit-qutrit probability of separability
($\approx 0.0268283$), while the Bures metric --- the {\it minimal} monotone one --- gives
the largest separability probability ($\approx 0.00139925$, considerably smaller than the
Hilbert-Schmidt value)
among the monotone metrics studied
(and presumably among all monotone metrics). The (Yuen-Lax) {\it maximal}
monotone metric appears to give a null separability probability.
In \cite{qubitqutrit}, we had
attempted a somewhat similar quasi-Monte Carlo
qubit-qutrit
analysis (but restricted simply
to the Bures metric) to that reported above, but based on many fewer points (70 million {\it vs.} the 7 billion so far
used here) of a
(Halton) sequence.
At this stage, having made use of considerably increased computer power (and streamlined MATHEMATICA programming --- in particular
employing the Compile command, which enables the program to
proceed under the condition
that certain variables will enter a calculation only as
machine numbers, and
not as lists, algebraic objects or any other kind of expression),
we must regard this earlier study as entirely superseded by the one here.
(Our pooled estimate of the Bures volume of the separable qubit-qutrit
systems here [Table VII]
is $1.0142 \cdot 10^{-19}$, while in \cite{qubitqutrit}, following our earlier work for $N=4$ \cite{firstqubitqubit},
we formulated a conjecture (\cite[eq. (5)]{qubitqutrit}) --- in which we can now have but
very little
confidence --- that would give [converting from the SD metric to the
Bures] a value of
$2^{-35} \cdot (2.19053 \cdot 10^{-9}) \approx 6.37528 \cdot 10^{-20}$.)
We also anticipate revisiting --- as in sec.~\ref{N4qubitqubit} --- the
$N=4$ (qubit-qubit) case \cite{slatersilver}
with our newly accelerated programming methods, in a similarly systematic
manner.
Perhaps, in the future, subject to research priorities, we will
add to the 7 billion points of the Tezuka-Faure sequence employed above, and hope to report considerably more accurate results in
the future (based on which, possibly,
we can further appraise the hypotheses offered above as to the values of the various
volumes and hyperareas). Also, we may seek to
estimate the hyperarea of that part
of the boundary of the separable qubit-qutrit states consisting of
generically rank-six $6 \times 6$ density matrices \cite{slatersilver,shidu},
though this involves a much greater amount of
computation per point. (This would entail first
finding the values, if any,
of the undetermined [35-{\it th}] parameter that would set
the determinants of the two forms of
partial transpose equal to zero, and then --- using these values ---
ascertaining whether or not
all the six eigenvalues of the resultant partial transposes
were nonnegative.)
In this study, we have utilized additional computer power
recently available to us,
together with an advanced quasi-Monte Carlo procedure
(scrambled Faure-Tezuka sequences
\cite{tezuka,giray1} --- the use of which was recommended to us
by G. \"Okten, who provided a corresponding MATHEMATICA
code). Faure and Tezuka were guided ``by the construction $C^{(i)} = A^{(i)}
P^{(i-1)}$ and by some possible extensions of the generator formal series in
the framework of Niederreiter''. ($A^{(i)}$ is an arbitrary nonsingular lower
triangular [NLT] matrix, $P$ is the Pascal matrix \cite{call}
and $C^{(i)}$ is a generator matrix of a sequence $X$.)
Their idea was to multiply from the right by
nonsingular upper triangular (NUT) random matrices and get the new generator
matrices $C^{(i)}= P^{(i-1)} U^{(i)}$ for $(0,s)$-sequences
\cite{tezuka,giray1}. ``Faure-Tezuka scrambling
scrambles the digits of $i$ before multiplying by the generator matrices \ldots\
The effect of the Faure-Tezuka-scrambling can be thought of as reordering
the original sequence, rather than permuting its digits like the Owen
scrambling \ldots Scrambled sequences often have smaller
discrepancies than their nonscrambled counterparts. Moreover, random
scramblings facilitate error estimation'' \cite[p. 107]{hong}.
It would be interesting to conduct analogous investigations to those reported here ($N=6$)
and in \cite{slatersilver} for the case $N=4$, using
quasi-random sequences {\it other}
than Tezuka-Faure ones \cite{tezuka,giray1}, particularly those for which it is possible to do {\it statistical} testing on the results
(such as constructing confidence intervals) \cite{hong}. It is, of course,
possible to conduct statistical testing using simple Monte Carlo methods, but
their convergence is much weaker than that of the quasi-Monte Carlo
procedures. Since we have been dealing with
extraordinarily high-dimensional spaces, good
convergence has been a dominant consideration in the selection of
numerical integration methodologies to employ.
``It is easier to estimate the error of Monte Carlo methods because one can
perform a number of replications and compute the variance. Clever
randomizations of quasi-Monte Carlo methods combine higher accuracy with practical
error estimates'' \cite[p. 95]{hong}.
G. \"Okten is presently developing a new MATHEMATICA version of scrambled
Faure-Tezuka sequences in which
there will be
a random generating matrix for each dimension --- rather than
one for all the dimensions together --- which will then be
susceptible to {\it statistical} testing \cite{hong}.
At the strong urging of K. \.Zyczkowski, we disaggregated
the pooled results in the last
column of Table IX into the part based on partial transposition of
four three-by-three blocks and obtained
$\{1.966, 2.53505, 2.04826, 1.94679, 1.96481, 9.32089 \times 10^{-7}, 1.99954\}$
and into
the part based on nine two-by-two blocks and obtained
$\{1.91846, 1.91976, 2.04, 1.94539, 1.92476, 0.0001227, 2.05803\}$.
We bring to the reader's attention the particular
closeness to 2 of the Hilbert-Schmidt ratio (1.99954) in the first of these two sets.
\begin{acknowledgments}
I wish to express gratitude to the Kavli Institute for Theoretical
Physics (KITP)
for computational support in this research and to Giray \"Okten
for supplying the MATHEMATICA code for the Tezuka-Faure quasi-Monte Carlo
procedure and for numerous communications. Chris Herzog of the KITP
kindly provided certain computer assistance,
as well as comments on the course of the research.
S. Stolz remarked on our conjecture regarding the integral value of 2. Also,
K. \.Zyczkowski supplied many useful comments and insights.
\end{acknowledgments}
\end{document}
\begin{document}
\begin{abstract} We define a transcendence degree for division algebras, by modifying the lower transcendence degree construction of Zhang. We show that this invariant has many of the desirable properties one would expect a noncommutative analogue of the ordinary transcendence degree for fields to have. Using this invariant, we prove the following conjecture of Small.
Let $k$ be a field, let $A$ be a finitely generated $k$-algebra that is an Ore domain, and let $D$ denote the quotient division algebra of $A$. If $A$ does not satisfy a polynomial identity then ${\rm GKdim}(K) \le {\rm GKdim}(A)-1$ for every commutative subalgebra $K$ of $D$.
\end{abstract}
\maketitle
\section{Introduction}
Transcendence degree for fields is an important invariant, which has proven incredibly useful in algebraic geometry. In the noncommutative setting, many different transcendence degrees have been proposed \cite{GK, BK, Re1, Sc1, St, Z, Z1, Z2}, many of which possess some of the desirable properties that one would hope for a noncommutative analogue of transcendence degree to possess. Sadly, none of these has proved as versatile as the ordinary transcendence degree has in the commutative setting, as there has always been the fundamental problem: they are either difficult to compute in practice or are not powerful enough to say anything about division subalgebras.
The first such invariant was defined by Gelfand and Kirillov \cite{GK}, who used their Gelfand-Kirillov transcendence degree to prove that if the quotient division algebras of the $n$th and $m$th Weyl algebras were isomorphic, then $n=m$. Gelfand-Kirillov transcendence degree is obtained from Gelfand-Kirillov dimension in a natural way.
Given a finitely generated algebra $A$ over a field $k$, the \emph{Gelfand-Kirillov dimension} (GK dimension, for short) of $A$ is defined to be
\[ {\rm GKdim}(A) \ = \ \limsup_{n\rightarrow\infty} \frac{\log\,{\rm dim}_k V^n}{\log \, n}, \]
where $V$ is a finite-dimensional $k$-vector subspace of $A$ which contains $1$ and generates $A$ as a $k$-algebra. We note that this definition is independent of the choice of vector space $V$ with the above properties.
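For example (a standard computation, recorded here for the reader's convenience): if $A=k[x_1,\ldots ,x_m]$ is a polynomial ring and $V$ is the span of $1,x_1,\ldots ,x_m$, then $V^n$ consists of the polynomials of degree at most $n$, so ${\rm dim}_k V^n=\binom{n+m}{m}$ grows like $n^m/m!$ and ${\rm GKdim}(A)=m$, which agrees with the transcendence degree of the quotient field $k(x_1,\ldots ,x_m)$.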
The \emph{Gelfand-Kirillov transcendence degree} for a division algebra $D$ with centre $k$ is defined to be
\[ {\rm Tdeg}(D) \ = \ \sup_V \inf_{b} \limsup_{n\rightarrow\infty} \frac{\log\,{\rm dim}_k (k+bV)^n}{\log \, n}, \]
where $V$ ranges over all finite-dimensional $k$-vector subspaces of $D$ and $b$ ranges over all nonzero elements of $D$.
Zhang \cite{Z} introduced a combinatorial invariant, which he called the \emph{lower transcendence degree} of a division algebra $D$, which he denoted by ${\rm Ld}(D)$. We define this degree in Section \ref{sec: def}. Zhang showed that this degree had many of the basic properties that one would expect a transcendence degree to have. In particular, he showed that if $k$ is a field, $A$ is a $k$-algebra that is a domain of finite GK dimension, and $D$ is the quotient division algebra of $A$, then
$${\rm GKdim}(A) \ \ge \ {\rm Ld}(D).$$
He also showed that ${\rm Ld}(K)={\rm trdeg}_k(K)$ in the case that $K$ is a field and that if $E$ is a division subalgebra of $D$ then ${\rm Ld}(E)\le {\rm Ld}(D)$. We modify Zhang's construction to define a new transcendence degree, which we call the \emph{strong lower transcendence degree} and which we denote by ${\rm Ld}^*$. We define this invariant in Section \ref{sec: def}.
We use the adjective strong, simply because we have the inequality
$${\rm Ld}^*(D)\ge {\rm Ld}(D).$$
We are able to show that the strong lower transcendence degree has the following properties.
\begin{enumerate}
\item If $D$ is a division algebra and $E$ is a division subalgebra of $D$ then ${\rm Ld}(E)\le {\rm Ld}^*(D)$.
\item If $D$ is a division algebra and $E$ is a division subalgebra of $D$ such that $D$ is finite-dimensional as either a left or right $E$-vector space, then ${\rm Ld}^*(D)\le {\rm Ld}^*(E)$.
\item If $D$ is a finitely generated division algebra and $E$ is a division subalgebra of $D$ such that $D$ is infinite-dimensional as a left $E$-vector space, then ${\rm Ld}^*(D)\ge {\rm Ld}(E)+1$.
\item If ${\rm Ld}^*(D)<1$ then ${\rm Ld}^*(D)=0$; moreover, ${\rm Ld}^*(D)=0$ if and only if every finitely generated subalgebra of $D$ is finite-dimensional.
\item If $k$ is a field and $A$ is a finitely generated $k$-algebra that is an Ore domain and $D$ is its quotient division algebra, then
$${\rm Ld}^*(D)\le {\rm GKdim}(A).$$
\item If $k$ is a field and $K$ is a field extension of $k$ of transcendence degree $d$ then ${\rm Ld}^*(K)=d$.
\end{enumerate}
Using these results, we prove the following conjecture of Small \cite[Conjecture 8.1]{Z}.
\begin{thm} Let $k$ be a field and let $A$ be a finitely generated $k$-algebra that is an Ore domain. If $A$ does not satisfy a polynomial identity and $K$ is a commutative subalgebra of the quotient division algebra of $A$ then ${\rm GKdim}(K)\le {\rm GKdim}(A)-1$.\label{thm: main2}
\end{thm}
Zhang \cite[Corollary 0.8]{Z} showed that Theorem \ref{thm: main2} holds if the conclusion is replaced by ${\rm GKdim}(K)\le {\rm GKdim}(A)$, and, moreover, the hypothesis that $A$ not satisfy a polynomial identity is unnecessary with this bound.
The outline of this paper is as follows. In Section \ref{sec: def} we recall Zhang's definition of lower transcendence degree and define the strong lower transcendence degree. In Section \ref{sec: prop}, we prove that the strong lower transcendence degree has properties (1), (2), (4), (5), and (6). In Section \ref{sec: estimates}, we show that property (3) holds and we prove Theorem \ref{thm: main2}.
\section{Definitions}
\label{sec: def}
In this section, we recall the definition of lower transcendence degree, defined by Zhang \cite{Z}, and recall some basic facts about this invariant. We then proceed to modify his construction to provide a two-sided version of this invariant, which we call the \emph{strong lower transcendence degree} and which we denote by ${\rm Ld}^*$.
Given a field $k$ and a $k$-algebra $A$, we say that a $k$-vector subspace $V$ of $A$ is a \emph{subframe} of $A$ if $V$ is finite-dimensional and contains $1$; we say that $V$ is a \emph{frame} if $V$ is a subframe and $V$ generates $A$ as a $k$-algebra.
The definition of lower transcendence degree is fairly technical and we refer the reader to Zhang \cite{Z} for more insight into this definition. Let $k$ be a field and let $A$ be a $k$-algebra that is a domain. If $V$ is a subframe of $A$, we define
${\rm VDI}(V)$ to be the supremum over all nonnegative numbers $d$ such that there exists a positive constant $C$ such that
$${\rm dim}_k(VW)\ge {\rm dim}_k(W)+C\left({\rm dim}_k(W)\right)^{(d-1)/d}$$ for every subframe $W$ of $A$. (If no nonnegative $d$ exists, we take ${\rm VDI}(V)$ to be zero.) VDI stands for ``volume difference inequality'' and it gives a measure of the growth of an algebra. We then define the \emph{lower transcendence degree} of $A$ by
$${\rm Ld}(A) = \sup_V {\rm VDI}(V),$$ where $V$ ranges over all subframes of $A$.
The definition, while technical, gives a powerful invariant that Zhang \cite{Z} has used to answer many difficult problems about division algebras. Zhang showed that if $A$ is an Ore domain of finite GK dimension and $D$ is the quotient division algebra of $A$ then ${\rm Ld}(D)\le {\rm GKdim}(A)$. Moreover, equality holds for many classes of rings. In particular, if $A$ is a commutative domain over a field $k$, then equality holds and so lower transcendence degree agrees with ordinary transcendence degree.
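To illustrate how the volume difference inequality is verified in a concrete (and admittedly trivial) case, a remark of our own: take $A=k[x]$ and $V=k+kx$. For any subframe $W$ of $A$, if $f\in W$ has maximal degree among the elements of $W$, then $xf\in VW$ has degree strictly greater than that of every element of $W$, so ${\rm dim}_k(VW)\ge {\rm dim}_k(W)+1={\rm dim}_k(W)+\left({\rm dim}_k(W)\right)^{(1-1)/1}$. Thus ${\rm VDI}(V)\ge 1$ and hence ${\rm Ld}(k[x])\ge 1$; combined with ${\rm Ld}(k[x])\le {\rm GKdim}(k[x])=1$, this recovers ${\rm Ld}(k[x])=1={\rm trdeg}_k\, k(x)$.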
One of the weaknesses of lower transcendence degree is that it is unknown whether it satisfies the equality ${\rm Ld}(D)={\rm Ld}(D^{{\rm op}})$. To correct this, we use a two-sided approach.
We define
${\rm VDI}^*(V)$ to be the supremum over all nonnegative numbers $d$ such that there exists a positive constant $C$ such that
$$\max\left({\rm dim}_k(VW),{\rm dim}_k(WV)\right)\ge {\rm dim}_k(W)+C\left({\rm dim}_k(W)\right)^{(d-1)/d}$$ for every subframe $W$ of $A$. (As before, if no nonnegative $d$ exists, we take ${\rm VDI}^*(V)$ to be zero.) We then define the strong lower transcendence degree of a domain $A$ by
$${\rm Ld}^*(A) = \sup_V {\rm VDI}^*(V),$$ where $V$ ranges over all subframes of $A$.
We note that we trivially have the estimate
\begin{equation}
{\rm Ld}^*(D)\ge \max({\rm Ld}(D),{\rm Ld}(D^{\rm op}))
\end{equation}
and by construction
$${\rm Ld}^*(D)={\rm Ld}^*(D^{\rm op}).$$
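We also note, in passing, the elementary observation that if $A$ is commutative then $VW=WV$ for all subspaces $V$ and $W$ of $A$, so ${\rm VDI}^*(V)={\rm VDI}(V)$ for every subframe $V$ and hence ${\rm Ld}^*(A)={\rm Ld}(A)$ in the commutative case.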
\section{Basic properties}
\label{sec: prop}
In this section, we prove the basic properties of strong lower transcendence degree. For the most part, we follow the work of Zhang \cite{Z}. We note that the first property listed in the introduction, namely that ${\rm Ld}(E)\le {\rm Ld}^*(D)$ whenever $E$ is a division subalgebra of $D$, follows immediately from the fact that ${\rm Ld}^*(D) \ge {\rm Ld}(D)$ and Theorem 2.4 of Zhang \cite{Z}.
We first show that if $k$ is a field, $A$ is an Ore domain that is a $k$-algebra, and $D$ is the quotient division algebra of $A$, then we have
\begin{equation}
{\rm Ld}(D)\le {\rm Ld}^*(D) \le {\rm GKdim}(A).
\end{equation}
This is property (5) on the list of properties given in the introduction.
\begin{prop} Let $k$ be a field and let $A$ be a $k$-algebra that is a domain. Then
$${\rm Ld}^*(A)\le {\rm GKdim}(A).$$
\end{prop}
\begin{proof} If ${\rm Ld}^*(A)=0$ or ${\rm GKdim}(A)=\infty$ there is nothing to prove. Thus we assume that ${\rm Ld}^*(A)>0$ and ${\rm GKdim}(A)<\infty$. Let $d$ be a positive number less than ${\rm Ld}^*(A)$.
Then by assumption, there exists a subframe $V$ of $A$ and a positive constant $C$ such that $$\max({\rm dim}_k(VW), {\rm dim}_k(WV))\ge{\rm dim}_k(W)+C\left( {\rm dim}_k(W)\right)^{(d-1)/d}$$ for every subframe $W$ of $A$. Letting $W=V^n$ gives
$$ {\rm dim}_k(V^{n+1})\ge {\rm dim}_k(V^n) + C\left( {\rm dim}_k(V^n)\right)^{(d-1)/d}.$$
Telescoping gives,
\begin{eqnarray*}
{\rm dim}_k(V^{2n}) &\ge & {\rm dim}_k(V^{2n}) - {\rm dim}_k(V^n) \\
& \ge & C\sum_{j=n}^{2n-1} \left( {\rm dim}_k(V^j)\right)^{(d-1)/d} \\
&\ge & Cn \left( {\rm dim}_k(V^n)\right)^{(d-1)/d}.
\end{eqnarray*}
Let $e={\rm GKdim}(A)$ and let $\epsilon>0$. Then there are infinitely many $n$ such that
${\rm dim}(V^n)\ge n^{e-\epsilon}$, but we must have
${\rm dim}(V^n)<n^{e+\epsilon}$ for all sufficiently large $n$. In particular, there are infinitely many $n$ such that
$$(2n)^{e+\epsilon}>{\rm dim}_k(V^{2n}) \ge C n\cdot \left(n^{e-\epsilon}\right)^{(d-1)/d}.$$
Since this holds for infinitely many $n$, it follows that
$$e+\epsilon \ge 1 + (e-\epsilon)(d-1)/d$$ for every $\epsilon>0$.
Letting $\epsilon$ tend to zero gives
$$e\ge 1+e(d-1)/d$$ or
equivalently, $e\ge d$. The result follows.\end{proof}
This shows that lower transcendence degree does not blow up under localization. Makar-Limanov \cite{ML} has shown that the quotient division algebra of the Weyl algebra over a field of characteristic $0$ contains a copy of the free algebra on two generators, and hence GK dimension generally blows up under localization, except when we are dealing with algebras that are in some sense very close to being commutative.
As an immediate corollary, we obtain property (6).
\begin{cor} Let $k$ be a field and let $K$ be an extension of $k$. Then ${\rm Ld}^*(K)={\rm GKdim}(K)$.
\end{cor}
\begin{proof} By a result of Zhang \cite[Corollary 2.8 (1)]{Z}, we have
$${\rm GKdim}(K) = {\rm Ld}(K)\le {\rm Ld}^*(K)\le {\rm GKdim}(K).$$
The result follows.
\end{proof}
We now show that the strong lower transcendence degree behaves as one would hope with respect to large division subalgebras. The following proposition is a proof that property (2) holds.
\begin{prop} Let $D$ be a division algebra over a field $k$ and let $E$ be a division subalgebra. If $D$ is finite-dimensional as both a left and right $E$-vector space then ${\rm Ld}^*(D)\le {\rm Ld}^*(E)$.
\end{prop}
\begin{proof} We modify the proof of Proposition 3.1 given by Zhang \cite{Z}. We may assume that ${\rm Ld}^*(E)<\infty$, since otherwise there is nothing to prove in this case. Thus we may assume that there is a nonnegative real number $d$ such that $d={\rm Ld}^*(E)$. Let $\epsilon>0$.
Write $D=x_1E\oplus \cdots \oplus x_pE=Ey_1\oplus \cdots \oplus Ey_q$ and
pick a subframe $V$ of $D$. Then there exists a subframe $V_1$ of $E$ such that
$$Vx_1+\cdots +Vx_p \subseteq x_1V_1+\cdots+x_p V_1$$ and
$$y_1V+\cdots + y_qV\subseteq V_1y_1+\cdots + V_1 y_q.$$
Since ${\rm Ld}^*(E)=d$, there exists a subframe $W$ of $E$ such that
$\max\left({\rm dim}(WV_1/W),{\rm dim}(V_1W/W)\right)<\left({\rm dim}(W)\right)^{(d-1+\epsilon)/(d+\epsilon)}$. Let $W_0=\sum_{i,j} x_iWy_j$.
Then
\begin{eqnarray*}
VW_0 & \subseteq & V\left( \sum_{i,j} x_iWy_j\right) \\
&\subseteq & \sum_{i,j} x_iV_1Wy_j\\
&=& \sum_{i,j} x_iWy_j + x_i Ty_j\\
&=& W_0+ \sum_{i,j} x_i Ty_j,
\end{eqnarray*}
where $T$ is a finite-dimensional vector subspace which satisfies $V_1W = W\oplus T$. Then
by assumption ${\rm dim}(T)<\left({\rm dim}(W_0)\right)^{(d-1+\epsilon)/(d+\epsilon)}$.
Hence $${\rm dim}\left( VW_0/W_0\right)\le pq \left({\rm dim}(W_0)\right)^{(d-1+\epsilon)/(d+\epsilon)}$$ and
similarly,
$${\rm dim}\left( W_0V/W_0\right)\le pq \left({\rm dim}(W_0)\right)^{(d-1+\epsilon)/(d+\epsilon)}.$$
It follows that ${\rm Ld}^*(D)\le d$.
\end{proof}
We next show that property (4) holds.
\begin{prop} Let $k$ be a field and let $D$ be a division algebra over $k$. If ${\rm Ld}^*(D)<1$ then ${\rm Ld}^*(D)=0$; moreover, ${\rm Ld}^*(D) = 0$ if and only if every finitely generated subalgebra
of $D$ is finite-dimensional as a $k$-vector space.
\end{prop}
\begin{proof}
Note that if ${\rm Ld}^*(D)=0$ then ${\rm Ld}(D)\le {\rm Ld}^*(D)=0$ and so every finitely generated subalgebra of $D$ is finite-dimensional over $k$ by Proposition 1.1 (4) of Zhang \cite{Z}. Furthermore, if every finitely generated subalgebra of $D$ is finite-dimensional, then we necessarily have that ${\rm Ld}^*(D)=0$. To see this, let $V$ be a subframe of $D$ and let $D_0$ be the finite-dimensional division subalgebra generated by $V$. Then if $W$ is a subframe that is a left and right $D_0$-vector space, then $VW=WV=W$ and so ${\rm Ld}^*(D)=0$.
On the other hand, if $D$ has a finitely generated division subalgebra that is not finite-dimensional over $k$, then ${\rm Ld}^*(D)\ge {\rm Ld}(D)\ge 1$ \cite[Prop 1.1 (2) \& (4)]{Z}.
\end{proof}
\section{Estimates}
\lambdaabel{sec: estimates}
In this section, we prove the basic estimates that we will use to obtain a proof that property (3), given in the introduction, holds. We will then use this to prove Theorem \ref{thm: main2}.
We introduce the notion of a decomposition of a vector space, which will be key in all of our estimates.
\begin{defn} {\em Let $k$ be a field, let $D$ be a division algebra over $k$, and let $W$ be a finite-dimensional $k$-vector subspace of $D$. Given a division subalgebra $E$ of $D$ and a finite-dimensional $k$-vector subspace $V$ of $D$, we say that $W$ admits a left $(E,V)$-\emph{decomposition} if there exist subspaces $U_1,\ldots ,U_r$ of $W$, $x_1,\ldots ,x_r\in V$, and natural numbers $a_1,\ldots ,a_r$ with $i<a_i\le r+1$ such that:
\begin{enumerate}
\item $W=U_1\oplus U_2\oplus \cdots \oplus U_r$;
\item $U_i x_i \subseteq EU_1+EU_2+\cdots +EU_{a_i}$, where $U_{r+1}=D$;
\item $U_i x_i \cap \left(EU_1+EU_2+\cdots +EU_{a_i-1}\right)=(0)$;
\item $U_j x_i \subseteq EU_1+EU_2+\cdots +EU_{a_i-1}$ for $j<i$.
\end{enumerate}
In this case, we will write $U_1\oplus \cdots \oplus U_r$ is a left $(E,V)$-decomposition of $W$.}
\label{def: 14} The notion of a right $(E,V)$-decomposition is defined analogously.
\end{defn}
We show that under general conditions such decompositions exist.
\begin{lem} Let $k$ be a field, let $D$ be a division algebra over $k$, and let $E$ be a division subalgebra of $D$. If $W$ and $V$ are non-trivial subframes of $D$ such that $WV\not\subseteq EW$, then $W$ admits a left $(E,V)$-decomposition.
\label{lem: decomp}
\end{lem}
\begin{proof} We prove this by induction on the dimension of $W$.
If the dimension of $W$ is $1$, then there exists some $x_1\in V$ such that $Wx_1\cap EW=(0)$; otherwise, $WV\subseteq EW$, a contradiction. We then take $U_1=W$ and $a_1=2$ and obtain the result in this case.
We next assume that the conclusion of the statement of the proposition holds for all $k$-vector subspaces of $D$ whose dimension is strictly less than the dimension of $W$.
Since $WV\not\subseteq EW$, there exists $x\in V$ such that $Wx\not\subseteq EW$. Let $$W_1=\{w\in W~: ~wx\in EW\}.$$
Pick a subspace $W_0$ of $W$ such that $$W_0\oplus W_1=W.$$ By the inductive hypothesis, $W_1$ has a left $(E,V)$-decomposition $$W_1=U_1\oplus \cdots \oplus U_r.$$
Furthermore, there exist $x_1,\ldots ,x_r\in V$ and natural numbers $a_1,\ldots ,a_r$ such that the conditions (1)--(4) of Definition \ref{def: 14} are satisfied.
Let \begin{equation} S= \{i : 1\le i\le r, ~a_i=r+1\} \qquad {\rm and}\qquad T = \{1,2,\ldots ,r+1\}\setminus S.
\end{equation}
For $i\in S$, it is possible that $U_i x_i \cap (EU_1+\cdots + EU_r+EW_0)\not = (0)$. Thus we let $$U_{i,0}=\{u\in U_i~:~ux_i\in EU_1+\cdots + EU_r+EW_0\}$$ and choose $U_{i,1}$ such that $$U_{i,0}\oplus U_{i,1}\ = \ U_i.$$ For $i\in S$, we let $x_{i,0}=x_{i,1}=x_i$ and $a_{i,0}=r+1$, $a_{i,1}=r+2$; and we let $U_{r+1}=W_0$, $x_{r+1}=x$, and $a_{r+1}=r+2$.
We construct a left $(E,V)$-decomposition of $W$ using the subspaces $U_j$ with $j\in T$ and $U_{i,0}, U_{i,1}$ with $i\in S$. We create a total ordering on the indices by declaring \begin{equation}
(i-1,1)\ < \ (i,0) \ < \ i \ < \ (i,1) \ < \ (i+1,0)
\end{equation}
for every natural number $i$.
Notice that for $i\in S$, we have $EU_i = EU_{i,0}+EU_{i,1}$.
Then for $j\in T$ with $j<r+1$ we have:
\begin{equation} U_j x_j \subseteq EU_1+\cdots + EU_{a_j} = \sum_{i\in T,\, i\le a_j} EU_i + \sum_{i\in S,\, i\le a_j}\left( EU_{i,0}+EU_{i,1}\right);\end{equation}
\begin{equation} U_j x_j \cap \left( \sum_{i\in T,\, i\le a_j-1} EU_i + \sum_{i\in S,\, i\le a_j-1}\left( EU_{i,0}+EU_{i,1}\right)\right)=(0).\end{equation}
For $j\in S$ we have:
\begin{equation}U_{j,0} x_{j,0} \subseteq \sum_{i\in T} EU_i + \sum_{i\in S}\left( EU_{i,0}+EU_{i,1}\right)+ EU_{r+1};\end{equation}
\begin{equation} U_{j,0}x_{j,0}\cap \left( \sum_{i\in T} EU_i + \sum_{i\in S}\left( EU_{i,0}+EU_{i,1}\right)\right)=(0);\end{equation}
\begin{equation} U_{j,1}x_{j,1}\cap \left( \sum_{i\in T} EU_i + \sum_{i\in S}\left( EU_{i,0}+EU_{i,1}\right)+EU_{r+1}\right)
=(0).\end{equation}
Finally, we take $x_{r+1}=x$. Then by construction, $U_{r+1}x_{r+1}=W_0x$ which has trivial intersection with $EU_1+\cdots +EU_r$. Thus these subspaces give a left $(E,V)$-decomposition of $W$. \end{proof}
A similar result holds for right decompositions.
\begin{lem} Let $k$ be a field, let $D$ be a division algebra over $k$, and let $W$ and $V$ be non-trivial subframes of $D$. Suppose that $E$ is a division subalgebra and $U_1\oplus \cdots \oplus U_r$ is a left $(E,V)$-decomposition of $W$. Then
$EU_1+\cdots + EU_r$ is direct. \label{direct}
\end{lem}
\begin{proof} Suppose not. Then for $1\le i\le r$ we can find $w_i\in EU_i$ with $w_1,\ldots ,w_r$ not all zero such that
$$w_1+\cdots + w_r=0.$$
Then there exist $x_1,\ldots ,x_r\in V$, and natural numbers $a_1,\ldots ,a_r$ with $i<a_i\le r+1$ satisfying conditions (1)--(4) of Definition \ref{def: 14}.
Let $m$ be such that $w_m$ is nonzero and $w_i=0$ for every $i>m$.
Note that $w_i x_m\in EU_1+\cdots + EU_{a_m-1}$ for $i<m$, and so
$$w_m x_m = -(w_1+\cdots +w_{m-1})x_m \in EU_1+\cdots + EU_{a_m-1}.$$
But by hypothesis, $w_m x_m \cap ( EU_1+\cdots + EU_{a_m-1})=(0)$, and so $w_m x_m=0$. Since $D$ is a domain, $w_m=0$, a contradiction. Thus we obtain the desired result.
\end{proof}
We now give two estimates which we will use to estimate the strong lower transcendence degree of division subalgebras. This first lemma is rather technical and is where we really use all requirements listed in the definition of left $(E,V)$-decompositions.
\begin{lem} Let $k$ be a field, let $D$ be a division algebra over $k$ and let $E$ be a division subalgebra of $D$. Suppose that $W$ and $V$ are non-trivial subframes of $D$ and that $W$ admits a left $(E,V)$-decomposition $U_1\oplus \cdots \oplus U_r$.
Then
\[ {\rm dim}_k(W+WV) \ \ge \ {\rm dim}_k(W) + \max_{1\le i\le r} {\rm dim}_k\left(U_i\right).\]
\label{lem: Z1}
\end{lem}
\begin{proof}
By assumption, there exist $x_1,\ldots, x_r$ in $V$ and natural numbers $a_1,\ldots ,a_r$ satisfying conditions (1)--(4) of Definition \ref{def: 14}.
Pick $i_{0}$ such that
$${\rm dim}_k(U_{i_0}) \ \ge \ {\rm dim}_k(U_j)$$ for $1\le j\le r$.
Let $i_1=a_{i_0}$. If $i_1>r$, then we let $Y_0=U_{i_0}$; otherwise, by assumption there is a $k$-vector space embedding of $U_{i_0}x_{i_0}$ in $$\left(EU_1+\cdots+EU_{i_1}\right)/\left(EU_1+\cdots +EU_{i_1-1}\right).$$
By Lemma \ref{direct}, $U_{i_1}$ also embeds into $$\left(EU_1+\cdots+EU_{i_1}\right)/\left(EU_1+\cdots +EU_{i_1-1}\right),$$ and so it follows that there exists a subspace $Y_0$ of $U_{i_0}x_{i_0}$ such that the image of $Y_0$ in $$\left(EU_1+\cdots+EU_{i_1}\right)/\left(EU_1+\cdots +EU_{i_1-1}\right)$$
intersects the image of $U_{i_1}$ trivially and their sum is the image of $U_{i_0}x_{i_0}+U_{i_1}$.
Then
\begin{equation}
{\rm dim}(Y_0) \ \ge \ {\rm dim}(U_{i_0})-{\rm dim}(U_{i_1})
\end{equation} if $i_1\lambdae r$; otherwise, ${\rm dim}(Y_0)={\rm dim}(U_{i_0})$.
If $i_1>r$, then we stop; otherwise, we can repeat the procedure, taking $i_2=a_{i_1}$, and we can construct a subspace $Y_1$ of $U_{i_1}x_{i_1}$. If $i_2>r$, then $Y_1=U_{i_1}x_{i_1}$; otherwise, we take $Y_1$ such that its image in $$\left(EU_1+\cdots+EU_{i_2}\right)/\left(EU_1+\cdots +EU_{i_2-1}\right)$$ has trivial intersection with the image of $U_{i_2}$ and its sum with the image of $U_{i_2}$ is the image of $U_{i_1}x_{i_1}+U_{i_2}$.
Then
\begin{equation}
{\rm dim}(Y_1) \ \ge \ {\rm dim}(U_{i_1})-{\rm dim}(U_{i_2})
\end{equation}
if $i_2\lambdae r$; otherwise, ${\rm dim}(Y_1)={\rm dim}(U_{i_1})$.
If we continue in this manner, we eventually reach an index $\ell$ such that $i_{\ell+1}=r+1$.
Notice $$WV+W \supseteq Y_0+\cdots +Y_{\ell}+W.$$ Moreover, we claim that the sum on the right is direct. If not, there exists a dependence
$$y_0+y_1+\cdots +y_{\ell}+u_1+\cdots + u_r=0,$$ with $y_i\in Y_i$, $u_j\in U_j$ not all zero.
Let $j$ be the largest index with $y_j\not = 0$. Then $y_j\in EU_1+\cdots +EU_{i_{j+1}}$. By Lemma \ref{direct}, $EU_1+\cdots +EU_r$ is direct, and so $u_n=0$ for $n>i_{j+1}$. Then $y_j+u_{i_{j+1}}\in EU_1+\cdots + EU_{i_{j+1}-1}$, where we take $u_{r+1}=0$. Thus the image of $y_j+u_{i_{j+1}}$ in $$\left(EU_1+\cdots+EU_{i_{j+1}}\right)/\left(EU_1+\cdots +EU_{i_{j+1}-1}\right)$$
is trivial, and so $y_j=0$ by construction of the space $Y_j$. This contradicts the fact that the $y_i$ cannot all be zero. Thus we see that the sum
$$Y_0+\cdots +Y_{\ell}+W$$ is direct.
Hence
\[ {\rm dim}(WV+W) \ \ge \ {\rm dim}(W) + \sum_{i=0}^{\ell} {\rm dim}(Y_i).\]
At this point, we use telescoping sums:
\begin{eqnarray*}
\sum_{i=0}^{\ell} {\rm dim}(Y_i) &=& \sum_{j=0}^{\ell-1} \left( {\rm dim}(U_{i_j})-{\rm dim}(U_{i_{j+1}}) \right)
+ {\rm dim}(U_{i_{\ell}}) \\
&=& {\rm dim}(U_{i_0}) \\
&=& \max_{1\le i\le r} {\rm dim}_k\left(U_i\right).
\end{eqnarray*}
The result now follows. \end{proof}
\begin{lem}
Let $k$ be a field, let $D$ be a division algebra over $k$, let $E$ be a division subalgebra of $D$ of lower transcendence degree $d$, and let $V$ be a subframe of $D$. For every $\epsilon>0$ there exists a subframe $V'\supseteq V$ and a positive constant $C>0$ such that whenever $U_1\oplus \cdots \oplus U_r$ is a left $(E,V')$-decomposition of a finite-dimensional $k$-vector subspace $W$ of $D$, we have
\[{\rm dim}_k(V'W) \ \ge \ {\rm dim}_k(W) + \sum_{i=1}^r C \left({\rm dim}_k\left(U_i\right)\right)^{\frac{d-1-\epsilon}{d-\epsilon}}.\]
\label{lem: Z2}
\end{lem}
\begin{proof} Let $\epsilon>0$.
By definition of lower transcendence degree, there is some subframe $V_0$ of $E$ and a positive constant $C$ such that
\[ {\rm dim}_k\left(V_0U\right) \ \ge \
{\rm dim}_k\left(U\right) + C\left({\rm dim}_k(U)\right)^{(d-1-\epsilon)/(d-\epsilon)} \] for every subframe $U$ of $E$.
We let $V'=V+V_0$ and let $W$ be a subframe of $D$. Suppose that $U_1\oplus \cdots \oplus U_r$ is a left $(E,V')$-decomposition of $W$ and let
\[b_i \ := \ {\rm dim}_k(U_i). \] Then
\[ {\rm dim}_k\left(V'U_i\right) \ \ge \ {\rm dim}_k\left(V_0U_i\right) \ \ge \
{\rm dim}_k\left(U_i\right) + Cb_i^{(d-1-\epsilon)/(d-\epsilon)} \] for all $i$.
By Lemma \ref{direct}, the sum
$$EU_1+\cdots +EU_r$$ is direct and since $V_0\subseteq E$, we see
\begin{eqnarray*}
{\rm dim}_k(V'W) - {\rm dim}_k(W) &\ge &
{\rm dim}_k(V_0W) - {\rm dim}_k(W) \\ & = &
\sum_{i=1}^r {\rm dim}_k \left(V_0U_i/U_{i}\right) \\
&\ge &\sum_{i=1}^r C b_i^{(d-1-\epsilon)/(d-\epsilon)}.
\end{eqnarray*}
\end{proof}
We now give a simple estimate which will allow us to combine the preceding two estimates.
\begin{lem} Let $b_1,\ldots ,b_m, d$ be positive real numbers and let $N = b_1+\cdots + b_m$. Then either:
\begin{enumerate}
\item{$b_i\ge \left( \frac{d-1}{d}\right)^d N^{d/(d+1)}$ for some $i$; or}
\item{$b_1^{(d-1)/d}+\cdots + b_m^{(d-1)/d}\ge N^{d/(d+1)}$.}
\end{enumerate}
\label{lem: Z3}
\end{lem}
\begin{proof} Without loss of generality, we may assume that
$$b_1 \ \ge \ b_2 \ \ge \ \cdots \ \ge \ b_m.$$
Suppose that
$$b_1\lambdae \frac{(d-1)^d}{d^d} N^{d/(d+1)}.$$
By the mean value theorem
$$b_i^{(d-1)/d}-b_{i+1}^{(d-1)/d} \ge (b_i-b_{i+1})\frac{d-1}{d} b_i^{-1/d} \ge
(b_i-b_{i+1})\frac{d-1}{d} \frac{d}{d-1} N^{-1/(d+1)}.$$
Thus
\begin{eqnarray*}
\sum_{i=1}^m b_i^{(d-1)/d} &=&
m b_m^{(d-1)/d}+ \sum_{i=1}^{m-1} i (b_i^{(d-1)/d}-b_{i+1}^{(d-1)/d})\\
&\ge & m b_m^{(d-1)/d} + \sum_{i=1}^{m-1} i (b_i-b_{i+1})\frac{d-1}{d} \frac{d}{d-1} N^{-1/(d+1)} \\
&=& m b_m^{(d-1)/d} + N^{-1/(d+1)} (b_1+\cdots + b_m - mb_m) \\
&=& mb_m^{(d-1)/d} + N^{d/(d+1)} - mb_m N^{-1/(d+1)} \\
&=& N^{d/(d+1)} + mb_m^{(d-1)/d}\Big(1 - \big(b_mN^{-d/(d+1)}\big)^{1/d}\Big) \\
&\ge & N^{d/(d+1)}.
\end{eqnarray*}
The result follows. \end{proof}
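Although elementary, the dichotomy of Lemma \ref{lem: Z3} is easy to sanity-check numerically. The short Python snippet below does so on randomly sampled data; the ranges chosen for $m$, $d$ and the $b_i$ are arbitrary illustrative values and play no role in the argument.
\begin{verbatim}
# Numerical sanity check of the dichotomy in Lemma Z3 (illustrative only).
import random

def z3_dichotomy_holds(b, d):
    N = sum(b)
    threshold = ((d - 1.0) / d) ** d * N ** (d / (d + 1.0))
    case1 = any(bi >= threshold for bi in b)
    case2 = sum(bi ** ((d - 1.0) / d) for bi in b) >= N ** (d / (d + 1.0)) - 1e-9
    return case1 or case2

random.seed(0)
for _ in range(1000):
    d = random.uniform(1.1, 5.0)            # d > 1 so that (d-1)/d is positive
    m = random.randint(1, 20)
    b = [random.uniform(0.1, 50.0) for _ in range(m)]
    assert z3_dichotomy_holds(b, d)
print("Lemma Z3 dichotomy held in all sampled cases")
\end{verbatim}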
We now prove property (3) in the list of properties given in the introduction.
\begin{thm} Let $k$ be a field and let $D$ be a finitely generated division algebra over $k$. If $E$ is a division subalgebra of $D$ with the property that $D$ is infinite-dimensional as a left $E$-vector space, then $${\rm Ld}^*(D)\ge {\rm Ld}(E)+1.$$
\label{thm: mainx}
\end{thm}
\begin{proof} If ${\rm Ld}(E)=\infty$, there is nothing to prove, as ${\rm Ld}^*(D)\ge {\rm Ld}(D)\ge {\rm Ld}(E)=\infty$. Thus we may assume that there is a positive real number $d$ such that ${\rm Ld}(E)=d$. Since $D$ is finitely generated and is infinite dimensional as a left $E$-vector space, we may pick a subframe $V$ of $D$ such that
$EV^{n+1}$ properly contains $EV^n$ for every natural number $n$.
Let $W$ be a subframe of $D$. We note that
$WV\not \subseteq EW$; otherwise, we would have
$V^n\subseteq WV^n\subseteq EW$ for every natural number $n$ and so $EV^n\subseteq EW$ for every natural number $n$, and so there must exist some $n$ such that $EV^n=EV^{n+1}$, a contradiction. Thus $W$ admits a left $(E,V)$-decomposition by Lemma \ref{lem: decomp}. Similarly, $W$ must admit a left $(E,V')$-decomposition for every subframe $V'$ containing $V$.
Let $\epsilon>0$. Then by Lemma \ref{lem: Z2}, there exists a subframe $V'\supseteq V$ and a positive constant $C>0$ such that
if $U_1\oplus \cdots \oplus U_r$ is a left $(E,V')$-decomposition of $W$ then
\[{\rm dim}_k(W+V'W) \ \ge \ {\rm dim}_k(W) + C\sum_{i=1}^{ r} {\rm dim}_k\left(U_i\right)^{(d-1-\epsilon)/(d-\epsilon)}.\]
Similarly, by Lemma \ref{lem: Z1},
\[ {\rm dim}_k(W+WV') \ \ge \ {\rm dim}_k(W) + \max_{1\le i\le r} {\rm dim}_k\left(U_i\right).\]
Let $b_i={\rm dim}_k\left(U_i\right)$ for $1\le i\le r$; then we have
$b_1+\cdots + b_r={\rm dim}_k(W)$.
By Lemma \ref{lem: Z3}, there is a constant $C_0>0$, independent of $W$, such that
$$\max\left( {\rm dim}_k(W+WV'), {\rm dim}_k(W+V'W)\right) \ge {\rm dim}_k(W) + C_0\left({\rm dim}_k(W)\right)^{(d-\epsilon)/(d+1-\epsilon)}$$ for every subframe $W$ of $D$.
Thus by definition, ${\rm Ld}^*(D)\ge d+1-\epsilon$. Since this holds for every $\epsilon>0$, we obtain the desired result.
\end{proof}
As an immediate corollary, we obtain the proof of Theorem \ref{thm: main2}.
\begin{proof}[Proof of Theorem \ref{thm: main2}]
We may assume that $A$ has finite GK dimension.
Let $D$ denote the quotient division algebra of $A$ and let $K$ be a subfield of $D$ that contains $k$. If ${\rm GKdim}(K)>{\rm GKdim}(A)-1$ then we have
$${\rm Ld}(K)= {\rm GKdim}(K)>{\rm GKdim}(A)-1\ge {\rm Ld}^*(D)-1.$$ By Theorem \ref{thm: mainx} we have that $D$ must be finite-dimensional as a left $K$-vector space and hence $D$ embeds in a matrix ring over a field. But this gives that $A$ satisfies a polynomial identity, a contradiction. The result follows.
\end{proof}
\section{Concluding remarks and questions}
We make a few remarks. Ideally, a transcendence degree should have the property that if $D$ is a finitely generated division algebra and $E$ is a division subalgebra such that $D$ is infinite-dimensional as a left $E$-vector space, then the transcendence degree of $E$ should be at most the transcendence degree of $D$ minus $1$. We ask if this property holds for the strong lower transcendence degree. This would have profound implications. In particular, it would show that if
$$k=D_0\subseteq D_1 \subseteq D_2 \subseteq \cdots \subseteq D_n=D$$ is a chain of finitely generated division subalgebras of $D$ such that each $D_i$ is infinite-dimensional as a left $D_{i-1}$-vector space, then $n\le {\rm Ld}^*(D)$. This is Zhang's conjecture \cite[Conjecture 8.4]{Z}.
The author \cite{Bell1} proved this in the case that $D$ is the quotient division algebra of a domain of GK dimension strictly less than $3$.
This is related to Schofield's notion of stratiform length \cite{Sc}.
Schofield has pathological constructions of division algebras $D$ which are finite-dimensional over a division subalgebra on one side but are infinite-dimensional on the other \cite[Section 5.9]{Co}. In the case that we are dealing with division algebras of finite transcendence degree, however, it is expected that this type of phenomenon should not occur. Again, an inequality of this sort could be used to show that division algebras of finite transcendence degree are well-behaved in this sense.
\section*{Acknowledgments} The author thanks Lance Small, Dan Rogalski, and James Zhang for many helpful comments and suggestions.
\end{document}
\begin{document}
\title{A numerical scheme for an improved Green–Naghdi model in the Camassa-Holm regime for the propagation of internal waves}
\begin{abstract}
In this paper we introduce a new reformulation of the Green-Naghdi model in the Camassa-Holm regime for the propagation of internal waves over a flat topography derived by
Duch\^ene, Israwi and Talhouk [{\em SIAM J. Math. Anal.}, 47(1), 240--290]. These new Green–Naghdi systems are adapted to improve the frequency dispersion of the original model; they share the same order of precision as the standard one but have an appropriate structure which makes them much more suitable for numerical resolution. We develop a second order splitting scheme where the hyperbolic part of the system is treated with a high-order finite volume scheme and the dispersive part is treated with a finite difference approach. Numerical simulations are then performed to validate the model.
\end{abstract}
\textbf{Key words} : Green–Naghdi model, Nonlinear shallow water, Dispersive waves, Nonlinear interactions, Improved dispersion, Splitting method, Hybrid method, Finite volume, Finite Difference, High order scheme, WENO reconstruction
\tableofcontents
\section{Introduction}
This study deals with the propagation of internal waves in the uni-dimensional setting located at the interface between two layers of fluids of different densities. The fluids are assumed to be incompressible, homogeneous,
and immiscible, limited from above by a rigid lid and from below by a flat bottom.
This type of fluid dynamics problem is encountered by researchers in oceanography when they study waves near the shore.
Because of the difference in salinity between the different layers of water near the shore, it is useful to model the flow of salt water as a two-layer incompressible fluid flow.
The usual way of describing such a flow is to use the 3D-Euler equations for the different layers adding some thermodynamic and
dynamic conditions at the interface. This system will be called the \emph {full Euler system}. This system of partial differential
equations is very rich but very difficult to manipulate both mathematically and numerically.
This is the reason why reduced models have been derived to characterize the evolution of the solution in some physical and geometrical specific regimes. Many models for the water wave (air-water interface) system have already been derived and studied in the shallow-water regime, where we consider
that the wave length of the flow is large compared to the typical depth. We refer the reader to the following papers \cite{Barthelemy04,BD08,LB09,BBCCLMT11,Lannes}.
Earlier works have also set a very interesting theoretical background for the two-fluid system see~\cite{BonaLannesSaut08,Anh09,Duchene13,DucheneIsrawiTalhouk14,DucheneIsrawiTalhouk15},
for more details.
One of these reduced models is the Green-Naghdi system of partial differential equations (denoted GN in the following). Many numerical studies of the fully nonlinear GN equations for the one layer case have been proposed in the literature. Let us introduce some of these results. The GN model describing dispersive shallow water waves has been numerically studied in~\cite{MGH10} after being written in terms of potential variables in a pseudo-conservative form and using a Godunov scheme. Bonneton {\em et al.} studied numerically in~\cite{BCLMT} the fully nonlinear and weakly dispersive GN model for shallow water waves of large amplitude over variable topography. The original model was first adjusted in a suitable way for numerical resolution, with an improvement of the dispersive properties; then they proposed to use a finite volume-finite difference splitting scheme that decomposes the hyperbolic and dispersive parts of the equations. A similar approach was introduced earlier in~\cite{EIK05} for the solution of the Boussinesq equations. In~\cite{CLM}, a three-parameter GN system is derived, yielding further improvements of the dispersive properties. Let us mention also~\cite{MID14}, where a highly accurate and stable numerical scheme based on the Galerkin / finite-element method was presented for the Serre system. The method is based on smooth periodic splines in space and an explicit fourth-order Runge-Kutta method in time.
Recently, Lannes and Marche introduced in~\cite{LannesMarche14} a new class of two-dimensional GN equations over varying topography. These fully nonlinear and weakly dispersive equations have a mathematical structure more suitable for 2D simulations. Using the same splitting strategy initiated in~\cite{CLM}, they develop a high order, well balanced, and robust numerical code for these new models. Finally, we would like to mention the recent work of Duch\^ene, Israwi and Talhouk~\cite{DucheneIsrawiTalhouk16}, where they derive a new class of GN models with improved frequency dispersion for the propagation of internal waves at the interface between two layers of fluids. They numerically compute their class of GN models using spectral methods~\cite{Trefethen00} for space discretization and the Matlab solver ode45, which is based on the fourth and fifth order Runge-Kutta-Merson method for time evolution. Their numerical simulations show how the different frequency dispersion of the modified GN models may affect the appearance of high frequency Kelvin-Helmholtz instabilities.
In this paper, we present the numerical resolution of the GN model in the Camassa-Holm (medium amplitude) regime obtained and fully justified by Duch\^ene, Israwi and Talhouk in \cite{DucheneIsrawiTalhouk15} using the same strategy initiated in~\cite{BCLMT,LannesMarche14}. Let us recall that this model describes the propagation of one-dimensional internal waves at the interface between two layers of ideal fluids, limited by a rigid lid from above and a flat bottom from below. This model is first recast under a new formulation more suitable for numerical resolution with the same order of precision as the standard one but with improved frequency dispersion. More precisely, following~\cite{BCLMT} we derive a family of GN equations depending on a parameter $\alpha$ to be chosen in order to minimize the phase velocity error between the reduced model and the \emph {full Euler system}. Then we propose a numerical scheme that decomposes the hyperbolic and dispersive parts of the equations. Following the same strategy adopted in~\cite{BCLMT,LannesMarche14}, we use a second order splitting scheme.
The approximation $U^{n+1}=(\zeta^{n+1}, v^{n+1})$ is computed at time $t^{n+1} = t^n + \Delta t$, where $\zeta$ represents the deformation of the interface and $v$ represents the {\em shear mean velocity} defined in Section~\ref{GNCHsec},
in terms of the approximation $U^n$ at time $t^n$ by solving
$$U^{n+1} = S_1(\Delta t/2)S_2(\Delta t)S_1(\Delta t/2)U^n,$$
where $S_1(.)$ is the operator associated to the hyperbolic part and $S_2(.)$ the operator associated to the dispersive part of the GN equations.
For the numerical computation of $S_1(.)$, we use a finite volume method. We begin by the VFRoe method (see~\cite{BGH00,GHN02,GHN03}), that is an approximate Godunov scheme. Unfortunately, the VFRoe scheme seems to be very diffusive. To this end, we propose a second-order scheme following the classical ``MUSCL" approach~\cite{Leer79}. Finally, following~\cite{JiangShu96} we implement a fifth-order accuracy WENO reconstruction in order to reach higher order accuracy in smooth regions and a good resolution around discontinuities since the second order schemes are known to degenerate to first
order accuracy at smooth extrema. On the other hand, $S_2(.)$ is computed using a finite difference scheme discretized using second and fourth order formulas, whereas for time discretization we use classical second and fourth order Runge-Kutta methods according to the order of the space derivative in consideration.
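Schematically, the overall time stepping can be summarized by the following Python skeleton (a sketch only, not the actual implementation used here; the routines \texttt{hyperbolic\_step} and \texttt{dispersive\_step} are placeholders standing for the finite volume solver $S_1$ and the finite difference solver $S_2$ described above).
\begin{verbatim}
# Strang splitting skeleton: U^{n+1} = S1(dt/2) S2(dt) S1(dt/2) U^n  (illustrative sketch).

def strang_step(U, dt, hyperbolic_step, dispersive_step):
    """One time step of the second order splitting scheme."""
    U = hyperbolic_step(U, 0.5 * dt)   # S1(dt/2): hyperbolic (finite volume) part
    U = dispersive_step(U, dt)         # S2(dt):   dispersive (finite difference) part
    U = hyperbolic_step(U, 0.5 * dt)   # S1(dt/2)
    return U

def evolve(U0, dt, n_steps, hyperbolic_step, dispersive_step):
    U = U0
    for _ in range(n_steps):
        U = strang_step(U, dt, hyperbolic_step, dispersive_step)
    return U
\end{verbatim}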
The computation of $S_2(\Delta t)$ in the above splitting scheme (dispersive part) requires the inversion of the following symmetric second order differential operator introduced in \cite{DucheneIsrawiTalhouk15}
$$
{\mathfrak T}[\epsilon\zeta]V \ = \ q_1(\epsilon\zeta)V \ - \ \mu \nu \partial_x \Big(q_2(\epsilon\zeta)\partial_xV \Big).
$$
with $q_i(X)\equiv 1+\kappa_i X $ ($i=1,2$), where $\kappa_i$ and $\nu$ are constants depending on three parameters: the ratio between the densities of the two layers, the ratio between their depth and the capillary effect of the interface.
Moreover, it is known that third order derivatives involved in this model may create high-frequency instabilities, but the presence of the operator ${\mathfrak T}[\epsilon\zeta]^{-1}$ in the second equation stabilizes the model with respect to these perturbations, allowing for more robust numerical computations, see Section~\ref{IHFsec}. The invertibility of this operator also played an important role in the well-posedness of these equations (see~\cite{Israwi11} for the one layer case and~\cite{DucheneIsrawiTalhouk15} for the two layers case).
However, this operator is time dependent since it depends on the deformation of the interface $\zeta(t,x)$. In order to avoid the inversion of this operator at each time step keeping its stabilizing effects, we derive a new family of physical models that are equivalent to the standard one in the sense of consistency (same order of precision $\mathcal {O}(\mu^2)$), where we remove the time dependency of the operator $\mathfrak{T}$. The structure of the time-independent model leads to a small improvement in terms of computational time due to its simple one-dimensional structure. In fact, this strategy was originally initiated for numerical simulations of the fully nonlinear and weakly dispersive GN models in the two-dimensional case~\cite{LannesMarche14}, in order to reduce significantly the computational time. \\
\\
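To make this gain concrete, here is a minimal Python sketch (not the code used for the simulations in this paper) showing how a constant-coefficient operator of the form $V\mapsto V-c\,\partial_x^2V$, with $c>0$ a constant, can be assembled once with centered finite differences on a periodic grid and then reused at every time step; the grid size and the value of $c$ below are illustrative choices.
\begin{verbatim}
# Assemble-once / reuse-at-every-step sketch for V -> V - c * V_xx (periodic grid).
import numpy as np

def assemble_operator(n, dx, c):
    """Dense matrix of the second-order centered FD discretization of I - c*d_xx."""
    A = np.eye(n)
    r = c / dx**2
    for i in range(n):
        A[i, i] += 2.0 * r
        A[i, (i - 1) % n] -= r
        A[i, (i + 1) % n] -= r
    return A

n, L = 256, 10.0
dx = L / n
A = assemble_operator(n, dx, c=0.05)   # c plays the role of the constant coefficient
A_inv = np.linalg.inv(A)               # factor/invert once, reuse at every time step
x = dx * np.arange(n)
rhs = np.sin(2.0 * np.pi * x / L)      # some right-hand side arising in the dispersive step
V = A_inv @ rhs
\end{verbatim}
In practice a banded or sparse factorization would be stored instead of a dense inverse, but the point is the same: the operator no longer depends on $\zeta(t,x)$, so no linear system has to be re-assembled during the time loop.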
We organize the paper as follows.
In Section~\ref{FEsec}, we introduce the non-dimensonalized {\em full Euler system}.
Section~\ref{GNCHsec} is devoted to recall the GN model in the Camassa-Holm regime, where an equivalent time-independent reformulation is given with an improved frequency dispersion based on the choice of a parameter $\alpha$. The stability issue of this new reformulation is discussed in the one and two layers cases.
Section~\ref{NMSec} is then devoted to the presentation of the numerical scheme. Firstly the hyperbolic/dispersive splitting is introduced.
Then we describe the finite-volume spatial discretization of the hyperbolic part. The first-order finite volume scheme is given, an extension to the second-order accuracy is considered following the ``MUSCL" approach and finally high-order accuracy around discontinuities is achieved thanks to the implementation of a fifth-order WENO reconstruction. The finite-difference discretization of the dispersive part is then detailed and the boundary conditions are briefly described. Finally, we present in Section~\ref{NVsec} several numerical validations of our model. The one layer case is numerically validated: we compare the accuracy of the first, second and fifth orders of accuracy by studying the propagation of a solitary wave. To evaluate the influence of dispersive nonlinear terms, we consider the head-on collision of counter-propagating waves and the breaking of a Gaussian hump. Dealing with discontinuities is numerically validated by studying the dam-break problem in the one layer case. We highlight then the importance of the choice of the parameter $\alpha$ in improving the frequency dispersion of the model and compare our results with numerical experiments. Finally, we validate the ability of our numerical scheme to deal with discontinuities when considering a dam-break problem in the two layers case.
\section{Full Euler system}\label{FEsec}
In this section, we briefly recall the derivation of the {\em full Euler system} governing the evolution equations of the two-layers flow and refer to
\cite{Anh09,BonaLannesSaut08,Duchene13,DucheneIsrawiTalhouk14} for more details. The two-layers flow considered are assumed to be incompressible, homogeneous, immiscible perfect fluids of different densities under the sole influence of gravity.
\begin{figure}
\caption{Domain of study and governing equations.}
\label{domain}
\end{figure}
The study is restricted to the one-dimensional horizontal variable.
The deformation of the interface between the two layers is represented by the graph of a function $\zeta(t,x)$, while the bottom and the top surfaces are assumed to be rigid and flat. The domains of the upper and lower fluid at time $t$ (denoted, respectively, $\Omega_1^t$ and $\Omega_2^t$), are given by
\begin{eqnarray*}
\Omega_1^t \ &=& \ \{\ (x,z)\in\mathbb{R}\times\mathbb{R}, \quad \zeta(t,x)\ \leq\ z\ \leq \ d_1\ \}, \\
\Omega_2^t \ &=& \ \{\ (x,z)\in\mathbb{R}\times\mathbb{R}, \quad -d_2 \ \leq\ z\ \leq \ \zeta(t,x)\ \}.
\end{eqnarray*}
In what follows, we assume that the domains remain strictly connected, that is there exists $h_0>0$ such that $d_1-\zeta(t,x)\geq h_0>0$ and $d_2+\zeta(t,x)\geq h_0>0$.
The density and velocity fields of the upper and lower layers are denoted by $(\rho_i,{\mathbf v}_i)$, $(i=1,2)$ respectively.
We assume the fluids to be incompressible, homogeneous and irrotational so that the velocity fields are divergence free and expressed as gradients of a potential denoted $\phi_i$. The fluids being ideal, that is with no viscosity, we may assume that each fluid follows the Euler equations. Assuming that the surface, the bottom and the interface are impenetrable, one deduces the kinematic boundary conditions. Finally, the set of equations is completed by the continuity of the stress tensor at the interface.
Altogether, the governing equations are given by the following system:
\everymath{\displaystyle}
\begin{equation} \label{eqn:EulerComplet}
\left\{\begin{array}{ll}
\partial_x^2 \phi_i \ + \ \partial_z^2 \phi_i \ = \ 0 & \mbox{ in }\Omega^t_i, \ i=1,2,\\
\partial_t \phi_i+\frac{1}{2} |\nabla_{x,z} \phi_i|^2=-\frac{P_{i}}{\rho_i}-gz & \mbox{ in }\Omega^t_i, \ i=1,2, \\
\partial_{z}\phi_1 \ = \ 0 & \mbox{ on } \Gamma_{\rm t}\equiv\{(x,z),z=d_1\}, \\
\partial_t \zeta \ = \ \sqrt{1+|\partial_x\zeta|^2}\partial_{n}\phi_1 \ = \ \sqrt{1+|\partial_x\zeta|^2}\partial_{n}\phi_2 & \mbox{ on } \Gamma \equiv\{(x,z),z=\zeta(t,x)\},\\
\partial_{z}\phi_2 \ = \ 0 & \mbox{ on } \Gamma_{\rm b}\equiv\{(x,z),z=-d_2\}, \\
\lim\limits_{\varepsilon\to 0} \Big( P(t,x,\zeta(t,x)+\varepsilon)
- P(t,x,\zeta(t,x)-\varepsilon) \Big) = -\sigma k(\zeta) & \mbox{ on } \Gamma,
\end{array}
\right.
\end{equation}
where $n$ denotes the unit upward normal vector at the interface.\\
The function $ k(\zeta)=-\partial_x \Big(\frac1{\sqrt{1+|\partial_x\zeta|^2}}\partial_x\zeta\Big)$ denotes the mean curvature of the interface and $\sigma$ the surface (or interfacial) tension coefficient.
To go further in the study of the flow of the two layers in order to build a numerical scheme for the asymptotic dynamics, we write the system in dimensionless form. To this end, we introduce dimensionless parameters and variables that reduce the setting to the physical regime under consideration.
Firstly, let $a$ be the maximum amplitude of the deformation of the interface and $\lambda$ the wavelength of the interface. Then the typical velocity of small propagating internal waves (or wave celerity) is given by
\[c_0 \ = \ \sqrt{g\frac{(\rho_2-\rho_1) d_1 d_2}{\rho_2 d_1+\rho_1 d_2}}.\]
Consequently, we introduce the dimensionless variables:
\[
\tilde z \ \equiv\ \dfrac{z}{d_1}, \quad\quad \tilde x\ \equiv \ \dfrac{x}{\lambda}, \quad\quad \tilde t\ \equiv\ \dfrac{c_0}{\lambda}t,
\]
the dimensionless unknowns:
\[
\tilde{\zeta}(\tilde t,\tilde x)\ \equiv\ \dfrac{\zeta(t,x)}{a}, \quad\quad \tilde{\phi}_i(\tilde t,\tilde x,\tilde z)\ \equiv\ \dfrac{d_1}{a\lambda c_0}\phi_i(t,x,z) \quad (i=1,2),
\]
and finally the dimensionless parameters:
\[
\gamma\ =\ \dfrac{\rho_1}{\rho_2}, \quad \epsilon\ \equiv\ \dfrac{a}{d_1},\quad \mu\ \equiv\ \dfrac{d_1^2}{\lambda^2}, \quad \delta\ \equiv \ \dfrac{d_1}{d_2}, \quad {\rm bo}\ =\ \frac{ g(\rho_2-\rho_1) d_1^2}{\sigma}.
\]
In what follows we assume that the depth ratio $\delta$ does not approach zero or infinity which means that the two layers of fluid have comparable depth. Therefore, the choice of the reference vertical length is harmless so we decided to choose $d_1$. We would like to mention that $\rm Bo=\mu\rm bo$ where $\rm Bo$ is the classical Bond number.
The meanings of the different dimensionless parameters are the following (an illustrative numerical evaluation is given after the list):
\begin{itemize}
\item $\gamma$ represents the ratio between the densities of the two layers,
\item $\epsilon$ represents the relative amplitude of the deformation of the interface between the two layers,
\item $\mu$ represents the shallowness of the layers and measures the strength of the dispersive effects,
\item $\delta$ represents the ratio between the depth of the two layers,
\item $\rm bo$ represents the capillary effect of the interface.
\end{itemize}
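To fix ideas, the short Python snippet below evaluates $c_0$ and the dimensionless parameters for a set of sample physical values; the numbers used are purely illustrative choices and are not data used elsewhere in this paper.
\begin{verbatim}
# Illustrative evaluation of c0 and the dimensionless parameters (sample values only).
import math

rho1, rho2 = 1000.0, 1030.0      # upper / lower densities (kg/m^3)
d1, d2 = 10.0, 50.0              # upper / lower depths (m)
a, lam = 1.0, 200.0              # interface amplitude and wavelength (m)
sigma, g = 0.073, 9.81           # interfacial tension (N/m) and gravity (m/s^2)

c0 = math.sqrt(g * (rho2 - rho1) * d1 * d2 / (rho2 * d1 + rho1 * d2))
gamma = rho1 / rho2
eps = a / d1
mu = d1**2 / lam**2
delta = d1 / d2
bo = g * (rho2 - rho1) * d1**2 / sigma

print(f"c0={c0:.3f} m/s, gamma={gamma:.3f}, eps={eps:.3f}, "
      f"mu={mu:.4f}, delta={delta:.2f}, bo={bo:.1f}")
\end{verbatim}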
Let us now remark that the system can be rewritten as two evolution equations coupling Zakharov's canonical variables \cite{Zakharov68,CraigSulem93}, $(\zeta,\psi)$ representing respectively the deformation of the interface and the trace of the dimensionless upper potential at the interface defined by $\psi \ \equiv \ \phi_1(t,x,\zeta(t,x)).$ \\
\\
In order to do so, we define the so-called Dirichlet-Neumann operators. The tildes are removed for the sake of readability.
\everymath{\displaystyle}
\begin{definition}[Dirichlet-Neumann operators]
Let $\zeta\in H^{t_0+1}(\mathbb{R})$, $t_0>1/2$, such that there exists $h>0$ with
$h_1 \ \equiv\ 1-\epsilon\zeta \geq h>0$ and $h_2 \ \equiv \ \frac1\delta +\epsilon \zeta\geq h>0$, and let $\psi\in L^2_{\rm loc}(\mathbb{R}),\partial_x \psi\in H^{1/2}(\mathbb{R})$.
Then we define:
\begin{eqnarray*}
G^{\mu}\psi &\equiv & G^{\mu}[\epsilon\zeta]\psi \equiv \sqrt{1+\mu|\epsilon\partial_x\zeta|^2}\big(\partial_n \phi_1 \big)\id{z=\epsilon\zeta} = -\mu\epsilon(\partial_x\zeta) (\partial_x\phi_1)\id{z=\epsilon\zeta}+(\partial_z\phi_1)\id{z=\epsilon\zeta},\\
H^{\mu,\delta}\psi &\equiv & H^{\mu,\delta}[\epsilon\zeta]\psi \equiv \partial_x \big(\phi_2\id{z=\epsilon\zeta}\big) = (\partial_x\phi_2)\id{z=\epsilon\zeta}+\epsilon(\partial_x \zeta)(\partial_z\phi_2)\id{z=\epsilon\zeta},
\end{eqnarray*}
where $\phi_1$ and $\phi_2$ are uniquely deduced from $(\zeta,\psi)$ as solutions of the following Laplace's problems:
\begin{eqnarray*}
&&\left\{
\begin{array}{ll}
\left(\ \mu\partial_x^2 \ +\ \partial_z^2\ \right)\phi_1=0 & \mbox{ in } \Omega_1\equiv \{(x,z)\in \mathbb{R}^{2},\ \epsilon{\zeta}(x)<z<1\}, \\
\partial_z \phi_1 =0 & \mbox{ on } \Gamma_{\rm t}\equiv \{(x,z)\in \mathbb{R}^{2},\ z=1\}, \\
\phi_1 =\psi & \mbox{ on } \Gamma\equiv \{(x,z)\in \mathbb{R}^{2},\ z=\epsilon \zeta\},
\end{array}
\right.\\
&&\left\{
\begin{array}{ll}
\left(\ \mu\partial_x^2\ + \ \partial_z^2\ \right)\phi_2=0 & \mbox{ in } \Omega_2\equiv\{(x,z)\in \mathbb{R}^{2},\ -\frac{1}{\delta}<z<\epsilon\zeta\}, \\
\partial_{n}\phi_2 = \partial_{n}\phi_1 & \mbox{ on } \Gamma, \\
\partial_{z}\phi_2 =0 & \mbox{ on } \Gamma_{\rm b}\equiv \{(x,z)\in \mathbb{R}^{2},\ z=-\frac{1}{\delta}\}.
\end{array}
\right.
\end{eqnarray*}
\end{definition}
At this stage of the model, using the above definition and without making any assumption on the different dimensionless parameters,
one can rewrite the nondimensionalized version of~\eqref{eqn:EulerComplet} as a system of two coupled evolution equations, which writes:
\begin{equation}\label{eqn:EulerCompletAdim}
\left\{ \begin{array}{l}
\displaystyle\partial_{ t}{\zeta} \ -\ \frac{1}{\mu}G^{\mu}\psi\ =\ 0, \\ \\
\displaystyle\partial_{ t}\Big(H^{\mu,\delta}\psi-\gamma \partial_x{\psi} \Big)\ + \ (\gamma+\delta)\partial_x{\zeta} \ + \ \frac{\epsilon}{2} \partial_x\Big(|H^{\mu,\delta}\psi|^2 -\gamma |\partial_x {\psi}|^2 \Big) \\
\displaystyle\hspace{5cm} = \ \mu\epsilon\partial_x\mathcal{N}^{\mu,\delta}-\mu\frac{\gamma+\delta}{\rm bo}\frac{\partial_x \big(k(\epsilon\sqrt\mu\zeta)\big)}{{\epsilon\sqrt\mu}} \ ,
\end{array}
\right.
\end{equation}
where we denote:
\[ \mathcal{N}^{\mu,\delta} \ \equiv \ \frac{\big(\frac{1}{\mu}G^{\mu}\psi+\epsilon(\partial_x{\zeta})H^{\mu,\delta}\psi \big)^2\ -\ \gamma\big(\frac{1}{\mu}G^{\mu}\psi+\epsilon(\partial_x{\zeta})(\partial_x{\psi}) \big)^2}{2(1+\mu|\epsilon\partial_x{\zeta}|^2)}.
\]
We will refer to \eqref{eqn:EulerCompletAdim} as the {\em full Euler system}.
\section{Green-Naghdi model in the Camassa-Holm regime}\label{GNCHsec}
We now recall the new Green-Naghdi model in the Camassa-Holm regime recently derived by Duchêne, Israwi and Talhouk in \cite{DucheneIsrawiTalhouk15}.
This new model is derived after expanding the different operators of the original Green-Naghdi model with respect to $\epsilon \mbox{ and } \mu$. Then several additional transformations are made using the smallness assumption $\epsilon=\mathcal {O}(\sqrt{\mu})$ in order to produce an equivalent precise system of partial differential equations whose unknowns are
$\zeta \mbox{ and }v$ :
\everymath{\displaystyle}
\begin{equation}\label{eq:Serre2mr}\left\{ \begin{array}{l}
\partial_{ t}\zeta +\partial_x\left(\dfrac{h_1 h_2}{h_1+\gamma h_2}v\right)\ =\ 0,\\ \\
\mathfrak T[\epsilon\zeta] \left( \partial_{ t} v + \epsilon \varsigma {v } \partial_x {v} \right) + (\gamma+\delta)q_1(\epsilon\zeta)\partial_x \zeta \\
\qquad \qquad+\frac\epsilon2 q_1(\epsilon\zeta) \partial_x \left(\frac{h_1^2 -\gamma h_2^2 }{(h_1+\gamma h_2)^2}| v|^2-\varsigma |v|^2\right)= - \mu \epsilon\frac23\frac{1-\gamma}{(\gamma+\delta)^2} \partial_x\big((\partial_x v)^2\big) ,
\end{array} \right. \end{equation}
where $h_1\equiv1-\epsilon\zeta$ (resp. $h_2\equiv\frac1\delta+\epsilon\zeta$) denotes the depth of the upper (resp. lower) fluid, and $v$ is the {\em shear mean velocity} defined by:
\begin{equation*}v\equiv \frac{1}{h_2}\int_{-\frac1\delta}^{\epsilon\zeta(t,x)} \partial_x \phi_2(t,x,z) \ dz - \frac{\gamma}{h_1}\int_{\epsilon\zeta(t,x)}^{1} \partial_x \phi_1(t,x,z) \ dz.\end{equation*}
The operator ${\mathfrak T}$ is defined as:
\begin{equation*}
{\mathfrak T}[\epsilon\zeta]V \ = \ q_1(\epsilon\zeta)V \ - \ \mu\nu \partial_x \Big(q_2(\epsilon\zeta)\partial_xV \Big),
\end{equation*}
with $q_i(X)\equiv 1+\kappa_i X $ ($i=1,2$) and $\nu$, $\kappa_1$, $\kappa_2$, $\varsigma$ are defined below.
Let us first introduce the following constants in order to ease the reading:
\begin{equation}\label{eqn:deflambdaalphabeta}
\lambda=\frac{1+\gamma\delta}{3\delta(\gamma+\delta)}\ , \quad
\alpha=\dfrac{1-\gamma}{(\gamma+\delta)^2} \quad
\mbox{ and } \quad \beta=\dfrac{(1+\gamma\delta)(\delta^2-\gamma)}{\delta(\gamma+\delta)^3}\ .
\end{equation}
Thus
\begin{equation*}
\nu \ = \ \lambda-\frac1{\rm bo} \ = \ \frac{1+\gamma\delta}{3\delta(\gamma+\delta)}-\frac1{\rm bo},
\end{equation*}
\begin{equation*}
(\lambda-\frac1{\rm bo})\kappa_1 \ = \ \frac{\gamma+\delta}{3}(2\beta-\alpha) , \quad (\lambda-\frac1{\rm bo})\kappa_2 \ = \ (\gamma+\delta)\beta , \end{equation*}
\begin{equation*}
(\lambda-\frac1{\rm bo})\varsigma \ = \ \frac{2\alpha-\beta}{3} \ - \ \frac{1}{\rm bo} \dfrac{\delta^2 -\gamma }{(\delta+\gamma )^2}.
\end{equation*}
The function $f$ is defined as follows:
\[ f:X\to\frac{(1-X)(\delta^{-1}+X)}{1-X+\gamma(\delta^{-1}+X)},\]
so one has:
\[
f(\epsilon\zeta) \ =\ \displaystyle\frac{h_1h_2}{h_1+\gamma h_2} \quad \mbox{ and } \quad f'(\epsilon\zeta) \ =\ \displaystyle\frac{h_1^2-\gamma h_2^2}{(h_1+\gamma h_2)^2} \ .\]
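These identities are straightforward to verify; for instance, the derivative formula can be checked symbolically with a few lines of Python/SymPy (a verification sketch only; the symbols below simply mirror the notation above).
\begin{verbatim}
# Symbolic check of f'(X) = (h1^2 - gamma*h2^2)/(h1 + gamma*h2)^2, with X = eps*zeta.
import sympy as sp

X, gamma, delta = sp.symbols('X gamma delta', positive=True)
h1 = 1 - X                 # depth of the upper fluid
h2 = 1/delta + X           # depth of the lower fluid
f = h1*h2 / (h1 + gamma*h2)

claimed_derivative = (h1**2 - gamma*h2**2) / (h1 + gamma*h2)**2
print(sp.simplify(sp.diff(f, X) - claimed_derivative))   # prints 0
\end{verbatim}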
Additionally, let us denote:
\begin{equation}\label{defkappaq3}
\kappa=\frac23\frac{1-\gamma}{(\delta+\gamma)^2}
\quad \mbox{ and } \quad q_3(\epsilon\zeta)=\frac12\big(f'(\epsilon\zeta)-\varsigma\big) = \frac12\Big(\frac{h_1^2-\gamma h_2^2}{(h_1+\gamma h_2)^2}-\varsigma\Big) \ .
\end{equation}
Problem \eqref{eq:Serre2mr} writes now in compact form:
\begin{equation}\label{GNCH2}\left\{ \begin{array}{l}
\displaystyle \partial_{ t}\zeta +\partial_x\big(f(\epsilon\zeta) v\big)\ =\ 0,\\ \\
\displaystyle {\mathfrak T} \left( \partial_{ t} v + \epsilon\varsigma v \partial_x v \right) + (\gamma+\delta)q_1(\epsilon\zeta)\partial_x \zeta + \epsilon q_1(\epsilon\zeta)\partial_x(q_3(\epsilon\zeta) {v}^2) + \mu\epsilon\kappa\partial_x\big((\partial_x v)^2\big)=0.
\end{array} \right. \end{equation}
Let us now recall the regime of validity of the system~\eqref{GNCH2} as exactly given in~\cite{DucheneIsrawiTalhouk15}. In the first place, we consider the so-called {\em shallow water regime} for two layers of comparable depths:
\begin{multline} \label{eqn:defRegimeSWmr}
\mathcal{P}_{\rm SW} \ \equiv \ \Big\{ (\mu,\epsilon,\delta,\gamma,\rm bo):\ 0\ < \ \mu \ \leq \ \mu_{\max}, \ 0 \ \leq \ \epsilon \ \leq \ 1, \ \delta \in (\delta_{\min},\delta_{\max}), \Big.\\
\Big. \ 0\ \leq \ \gamma\ <\ 1,\ \rm bo_{\min}\leq \rm bo\leq \infty \ \Big\},
\end{multline}
with given $0\leq \mu_{\max},\delta_{\min}^{-1},\delta_{\max},\rm bo_{\min}^{-1}<\infty$.
Two additional key restrictions are necessary for the validity of the model:
\begin{equation} \label{eqn:defRegimeCHmr}
\mathcal{P}_{\rm CH} \equiv \left\{ (\mu,\epsilon,\delta,\gamma,\rm bo) \in \mathcal{P}_{\rm SW}:\ \epsilon \leq M \sqrt{\mu} \quad \mbox{ and } \quad \nu \equiv \frac{1+\gamma\delta}{3\delta(\gamma+\delta)}-\frac1{\rm bo} \ge \nu_{0} \ \right\},
\end{equation}
with given $0\leq M,\nu_0^{-1}<\infty$.\\
\\
Duch\^ene, Israwi and Talhouk proved in~\cite{DucheneIsrawiTalhouk15} that the system \eqref{GNCH2} is well-posed (in the sense of Hadamard) in the energy space $X^s\ = \ H^s(\mathbb{R})\times H^{s+1}(\mathbb{R})$, endowed with the norm
\[
\forall\; U=(\zeta,v)^\top \in X^s, \quad \vert U\vert^2_{X^s}\equiv \vert \zeta\vert^2 _{H^s}+\vert v\vert^2 _{H^s}+ \mu\vert \partial_xv\vert^2 _{H^s}.
\]
This result is obtained for parameters in the {\em Camassa-Holm regime}~\eqref{eqn:defRegimeCHmr} and under the following non-zero depth and ellipticity (for the operator $\mathfrak{T}$) conditions:
\begin{equation}\tag{H1}
\exists h_{01}>0 \mbox{ such that, } \inf_{x\in \mathbb{R}} h_1 \geq\ h_{01}\ >\ 0, \quad \inf_{x\in \mathbb{R}} h_2\geq\ h_{01}\ >\ 0.
\end{equation}
\begin{equation}\tag{H2}
\exists h_{02}>0 \mbox{ such that, } \inf_{x\in \mathbb{R}} \left(1+\epsilon\kappa_1\zeta\right) \ge \ h_{02} \ > \ 0, \ \quad \inf_{x\in \mathbb{R}} \left( 1+\epsilon \kappa_2\zeta \right)\ge \ h_{02} \ > \ 0.
\end{equation}
They also prove that this new asymptotic model is fully justified by a convergence result in the Camassa-Holm regime.
\subsection{Reformulation of the model}\label{REFsec}
One can easily check that the operator $\mathfrak{T}$ can be written under the form:
\begin{equation*}
\mathfrak T[\epsilon\zeta]V=(q_1(\epsilon\zeta)I+\mu\nu T[\epsilon\zeta])V \quad \mbox{with} \quad T[\epsilon\zeta]V=-\partial_x(q_2(\epsilon\zeta)\partial_x V).
\end{equation*}
Therefore, this operator is time dependent through the presence of the term $\zeta$, and at each time step this operator has to be inverted in order to
solve equation \eqref{GNCH2}.
Thus we want to derive a new model, equivalent to \eqref{eq:Serre2mr} up to terms of higher order in $\epsilon \mbox{ or } \mu$, but with a structure
adapted to the construction of a numerical scheme for its resolution.
Let us first denote:
\begin{equation}
\label{Q1}Q_1(v)=\kappa\partial_x((\partial_xv)^2),
\end{equation}
so that one can rewrite problem \eqref{GNCH2} as:
\begin{equation}\label{GNCH3}
\left\{ \begin{array}{l}
\hspace*{-0.25cm}\displaystyle \partial_{ t}\zeta +\partial_x\big(f(\epsilon\zeta) v\big)\ =\ 0,\\ \\
\hspace*{-0.25cm}\displaystyle \Big(q_1(\epsilon\zeta)I+\mu\nu T[\epsilon\zeta]\Big) \Big( \partial_{ t} v + \epsilon\varsigma v \partial_x v \Big) + (\gamma+\delta)q_1(\epsilon\zeta)\partial_x \zeta + \epsilon q_1(\epsilon\zeta)\partial_x(q_3(\epsilon\zeta) {v}^2) + \mu\epsilon Q_1(v)=0 .
\end{array} \right. \end{equation}
As we said before, since the operator $(q_1(\epsilon\zeta)I+\mu\nu T[\epsilon\zeta])$ depends on $\zeta(t,x)$,
one has to remove this time dependency to avoid its inversion at each time step.
Firstly, let us write the operator $T[\epsilon\zeta]$ under the form:
\[T[\epsilon\zeta]V=-\partial_x (q_2(\epsilon\zeta)\partial_x V)=T[0]V+ \epsilon S[\zeta]V,\]with
\[
T[0]V=-\partial_x^2V
\quad \mbox{and} \quad S[\zeta]V=-\kappa_2\partial_x(\zeta\partial_xV).\]
Expanding the term $\Big(q_1(\epsilon\zeta)I+\mu\nu T[\epsilon\zeta]\Big) \Big( \partial_{ t} v + \epsilon\varsigma v \partial_x v \Big)$ in problem
\eqref{GNCH3} leads to:
\begin{equation}\label{GNCH4}\left\{ \begin{array}{l}
\displaystyle \partial_{ t}\zeta +\partial_x\big(f(\epsilon\zeta) v\big)\ =\ 0,\\ \\
\displaystyle \Big(I+\mu\nu T[0]\Big) \Big( \partial_{ t} v + \epsilon\varsigma v \partial_x v \Big) + (\gamma+\delta)q_1(\epsilon\zeta)\partial_x \zeta + \epsilon q_1(\epsilon\zeta)\partial_x(q_3(\epsilon\zeta) {v}^2) + \mu\epsilon Q_1(v)\\+\mu\epsilon \nu S[\zeta] \left( \partial_{ t} v + \epsilon\varsigma v \partial_x v \right)+\epsilon\kappa_1\zeta \left( \partial_{ t} v + \epsilon\varsigma v \partial_x v \right)=0 .
\end{array} \right. \end{equation}
But we have:
$\mu\epsilon \nu S[\zeta] \left( \partial_{ t} v + \epsilon\varsigma v \partial_x v \right) = \mu\epsilon \nu S[\zeta] \partial_{ t} v + \mathcal {O}(\mu\epsilon^2)$.
By assumption the term $\mathcal {O}(\mu\epsilon^2)$ is of order $\mathcal {O}(\mu^2)$; thus Problem \eqref{GNCH4} is equivalent to the following system:
\begin{equation}\label{GNCH5}\left\{ \begin{array}{l}
\displaystyle \partial_{ t}\zeta +\partial_x\big(f(\epsilon\zeta) v\big)\ =\ 0,\\ \\
\displaystyle \Big(I+\mu\nu T[0]\Big) \Big( \partial_{ t} v + \epsilon\varsigma v \partial_x v \Big) + (\gamma+\delta)q_1(\epsilon\zeta)\partial_x \zeta + \epsilon q_1(\epsilon\zeta)\partial_x(q_3(\epsilon\zeta) {v}^2) + \mu\epsilon Q_1(v)\\+\mu\epsilon \nu S[\zeta] \left( \partial_{ t} v \right)+\epsilon\kappa_1\zeta \left( \partial_{ t} v + \epsilon\varsigma v \partial_x v \right)=0 .
\end{array} \right. \end{equation}
\subsection{Improved Green-Naghdi equations}\label{GNID}
As done by many authors, see \cite{Witting84,MMS91,CBB07}, the frequency dispersion of problem \eqref{GNCH5} can be improved by adding some terms of order $\mathcal {O}(\mu^2)$ to the momentum equation. The accuracy of the model is not affected by this manipulation since this equation is precise up to terms of the same order as the added ones, namely $\mathcal {O}(\mu^2)$.
Going back to problem \eqref{GNCH3}, one notices that:
\begin{equation}\label{eq1}q_1(\epsilon\zeta)\left( \partial_{ t} v + \epsilon\varsigma v \partial_x v \right)+(\gamma+\delta)q_1(\epsilon\zeta)\partial_x \zeta + \epsilon q_1(\epsilon\zeta)\partial_x(q_3(\epsilon\zeta) {v}^2)+\mathcal {O}(\mu)=0.\end{equation}
We can now divide \eqref{eq1} by $q_1(\epsilon\zeta)$ to obtain:
\begin{equation}\label{eq2}\partial_{ t} v=-\epsilon\varsigma v \partial_x v -(\gamma+\delta)\partial_x \zeta -\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)-\dfrac{\mathcal {O}(\mu)}{q_1(\epsilon\zeta)}.\end{equation}
Let us fix $\alpha\in\mathbb{R}$. Multiplying \eqref{eq2} by $(1-\alpha)$ leads to:
\begin{equation}\label{eq3}\partial_{ t} v=\alpha\partial_{ t} v-(1-\alpha)[\epsilon\varsigma v \partial_x v +(\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)]-(1-\alpha)\dfrac{\mathcal {O}(\mu)}{q_1(\epsilon\zeta)}.\end{equation}
Replacing $\partial_t v$ by its expression given in \eqref{eq3} in the second equation of \eqref{GNCH5} and regrouping the $\mathcal {O}(\mu^2)$
terms yields the following equation:
\begin{multline*}
(I+\mu\nu T[0]) \left( \alpha\partial_{ t} v-(1-\alpha)[\epsilon\varsigma v \partial_x v +(\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)]-(1-\alpha)\dfrac{\mathcal {O}(\mu)}{q_1(\epsilon\zeta)} + \epsilon\varsigma v \partial_x v \right) \\ + (\gamma+\delta)q_1(\epsilon\zeta)\partial_x \zeta + \epsilon q_1(\epsilon\zeta)\partial_x(q_3(\epsilon\zeta) {v}^2) + \mu\epsilon Q_1(v)+\mu\epsilon\nu S[\zeta] \partial_t v+\epsilon\kappa_1\zeta \left( \partial_{ t} v + \epsilon\varsigma v \partial_x v \right)+ \mathcal {O}(\mu^2)=0
\end{multline*}
After straightforward computations,
\begin{multline}\label{eq4}
(I+\mu\nu\alpha T[0]) \left( \partial_{ t} v+\epsilon\varsigma v\partial_x v\right) +(\alpha-1)[\partial_t v+\epsilon\varsigma v \partial_x v +(\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)]\\+(\alpha-1)\mu\nu T[0]\big((\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)\big)+(\gamma+\delta)q_1(\epsilon\zeta)\partial_x \zeta+\epsilon q_1(\epsilon\zeta)\partial_x(q_3(\epsilon\zeta)v^2)\\+\mu\epsilon Q_1(v)+ \mu \epsilon \nu S[\zeta] \partial_t v+\epsilon\kappa_1\zeta \left( \partial_{ t} v + \epsilon\varsigma v \partial_x v \right)-(1-\alpha)\dfrac{\mathcal {O}(\mu)}{q_1(\epsilon\zeta)}+\mathcal {O}(\mu^2) =0
\end{multline}
Replacing in \eqref{eq4},
$\epsilon\kappa_1\zeta \left( \partial_{ t} v + \epsilon\varsigma v \partial_x v \right)$ by $\epsilon\kappa_1\zeta \Big(-(\gamma+\delta)\partial_x \zeta -\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)-\dfrac{\mathcal {O}(\mu)}{q_1(\epsilon\zeta)}\Big)$,
we have:
\begin{multline}\label{eq5}
(I+\mu\nu\alpha T[0]) \left( \partial_{ t} v+\epsilon\varsigma v\partial_x v\right) +(I-\mu\nu(1-\alpha) T[0])\big((\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)\big)\\+\mu\epsilon Q_1(v)+
\mu\epsilon \nu S[\zeta] \partial_t v-\epsilon\kappa_1\zeta\dfrac{\mathcal {O}(\mu)}{q_1(\epsilon\zeta)}+\mathcal {O}(\mu^2) =0.
\end{multline}
One deduces easily that,
\begin{multline}\label{eq6}(I-\mu\nu(1-\alpha) T[0])\big((\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)\big)=\dfrac{1}{\alpha}\big((\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)\big) \\ +\dfrac{\alpha-1}{\alpha}(I+\mu\nu\alpha T[0])\big((\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)\big).
\end{multline}
Plugging \eqref{eq6} in \eqref{eq5} one has:
\begin{multline}\label{eq7}
(I+\mu\nu\alpha T[0]) \big[ \partial_{ t} v+\epsilon\varsigma v\partial_x v+\dfrac{\alpha-1}{\alpha}\big((\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)\big)\big]+\dfrac{1}{\alpha}\big((\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)\big)\\+\mu\epsilon Q_1(v)+\mu \epsilon \nu S[\zeta] \partial_t v-\epsilon\kappa_1\zeta\dfrac{\mathcal {O}(\mu)}{q_1(\epsilon\zeta)}+\mathcal {O}(\mu^2) =0
\end{multline}
Remembering that $\mu\nu T[0] ( \partial_{ t} v+\epsilon\varsigma v\partial_x v) +\mu\epsilon Q_1(v)+\mu\epsilon \nu S[\zeta] \partial_t v= \mathcal {O}(\mu)$,
and regrouping the $\mathcal {O}(\mu\epsilon^2)$ terms, \eqref{eq7} becomes:
\begin{multline}\label{eq8}
(I+\mu\nu\alpha T[0]) \big[ \partial_{ t} v+\epsilon\varsigma v\partial_x v+\dfrac{\alpha-1}{\alpha}\big((\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)\big)\big]+\dfrac{1}{\alpha}\big((\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)\big)\\+\mu\epsilon Q_1(v)+\mu \epsilon \nu S[\zeta] (\partial_t v)-\epsilon\kappa_1\zeta\mu\nu T[0]
(\partial_t v)+\mathcal {O}(\mu^2,\mu\epsilon^2) =0
\end{multline}
Expanding in terms of $\epsilon$ and $\mu$, and looking for the term $\partial_t v$, we get:
\begin{equation}\label{dtv1}
\partial_{ t} v = -(I+\mu\nu \alpha T[0])^{-1}[(\gamma+\delta)\partial_x \zeta]+\mathcal {O}(\epsilon,\mu)
\end{equation}
Replacing $ \partial_{ t} v $ by its expression obtained in~\eqref{dtv1} in the last two terms of equation \eqref{eq8}, the Green–Naghdi equations with improved dispersion can therefore be written as (up to terms of order $\mathcal {O}(\mu^2)$):
\begin{equation}\label{GNCH6}\left\{ \begin{array}{l}
\displaystyle \partial_{ t}\zeta +\partial_x\big(f(\epsilon\zeta) v\big)\ =\ 0,\\ \\
\displaystyle (I+\mu\nu\alpha T[0]) \big[ \partial_{ t} v+\epsilon\varsigma v\partial_x v+\dfrac{\alpha-1}{\alpha}\big((\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)\big)\big]\\+\dfrac{1}{\alpha}\big((\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)\big)+\mu\epsilon Q_1(v)+\mu\epsilon\nu Q_2(\zeta)+\mu\epsilon\nu Q_3(\zeta) =0.
\end{array} \right. \end{equation}
with
\begin{equation}\label{Q2}
Q_2(\zeta)=- S[\zeta] \Big(I+\mu\nu \alpha T[0]\Big)^{-1}[(\gamma+\delta)\partial_x \zeta ] ,
\end{equation}
and
\begin{equation}\label{Q3}
Q_3(\zeta)=\kappa_1\zeta T[0]\Big(I+\mu\nu \alpha T[0]\Big)^{-1}[(\gamma+\delta)\partial_x \zeta ].
\end{equation}
In \cite{CLM}, a three-parameter family of Green–Naghdi equations in the one layer case is derived yielding additional improvements of the dispersive properties. In this paper we stick to the one-parameter family~\eqref{GNCH6}. This new formulation does not contain any third-order derivative, thus one can expect more stable and robust numerical computations.
\subsection{Choice of the parameter $\alpha$}\label{Secalphachoice}
The choice of the parameter $\alpha$ is motivated by the agreement of the dispersion properties of the \emph {full Euler system} and the improved Green-Naghdi system
\eqref{GNCH6} in terms of the dispersion relation. Therefore, we have first to find the dispersion relation for the improved Green-Naghdi system, then the dispersion relation
for the \emph {full Euler system} and finally to find an optimal parameter $\alpha_{opt}$ to ensure a good agreement between these two relations.
\subsubsection{The dispersion relation associated to the improved GN formulation}
The system \eqref{GNCH6} can be written under the following form:
\begin{equation}\label{GNCH7}\left\{ \begin{array}{l}
\displaystyle \partial_{ t}\zeta +\partial_x\big(f(\epsilon\zeta) v\big)\ =\ 0,\\ \\
\displaystyle (I+\mu\nu\alpha T[0]) \big[ \partial_{ t} v+\epsilon\varsigma v\partial_x v+\dfrac{\alpha-1}{\alpha}\big((\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)\big)\big]\\+\dfrac{1}{\alpha}\big((\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)\big)+\mu\epsilon Q_1(v)+\mu\epsilon\nu Q_2(\zeta)+\mu\epsilon\nu Q_3(\zeta) =0,
\end{array} \right. \end{equation}
where the operators $Q_1$, $Q_2$ and $Q_3$ are explicitly given in~\eqref{Q1},~\eqref{Q2} and~\eqref{Q3}. Looking at the linearization of~\eqref{GNCH7} around the rest state $(\zeta,v)=(0,0)$, one derives the dispersion
relation associated to~\eqref{GNCH7}.\\
This relation is obtained by looking for plane wave solutions of the form
$(\zeta,v) = (\underline{\zeta},\underline{v})e^{i(kx-wt)}$, with $k$ the spatial wave number and $w$ the time pulsation, of the linearized equations.\\
\\
The first equation of \eqref{GNCH7} gives:
\[-iw\underline{\zeta}+\dfrac{1}{(\gamma+\delta)}ik\underline{v}=0 \ .\]
Thus we first obtain:
\begin{equation}\label{eq9}\underline{v}=\dfrac{w(\gamma+\delta)}{k}\underline{\zeta}.\end{equation}
From the second equation of \eqref{GNCH7} we have:
\[(1+\mu\nu\alpha k^2)\Big(-iw\underline{v}+\dfrac{\alpha-1}{\alpha}(\gamma+\delta)ik\underline{\zeta}\Big)+\dfrac{1}{\alpha}(\gamma+\delta)ik\underline{\zeta}=0 .\]
This equation may be written as:
\begin{equation}\label{eq10}
-iw\underline{v}+(\gamma+\delta)ik\underline{\zeta}-\mu\nu\alpha iwk^2\underline{v}+\mu\nu(\alpha-1)(\gamma+\delta)ik^3\underline{\zeta}=0.
\end{equation}
Replacing $\underline{v}$ in~\eqref{eq10} by its expression given in~\eqref{eq9} we obtain:
\begin{equation}
-w^2\dfrac{(\gamma+\delta)}{k}\underline{\zeta}+(\gamma+\delta)k\underline{\zeta}-\mu\nu\alpha wk^2\dfrac{w(\gamma+\delta)}{k}\underline{\zeta}+\mu\nu(\alpha-1)(\gamma+\delta)k^3\underline{\zeta}=0.
\end{equation}
After straightforward computations, we obtain the following dispersion relation of the new GN formulation for $\alpha\geq 1$:
\begin{equation}\label{rdgn}
w_{\alpha,GN}=\pm|k|\sqrt{\dfrac{1+\mu\nu(\alpha-1)k^2}{1+\mu\nu\alpha k^2}}
\end{equation}
Defining the linear phase velocity associated to~\eqref{rdgn} as:
\[C_{GN}^p(k)=\dfrac{w(k)}{|k|} ,\]
we choose $\alpha$ such that the phase velocity stays close to the reference phase velocity $C_S^p(k)$ coming from Stokes
linear theory. With this approach, the error on the phase velocity is minimized for any discrete value of $\mu|k|$ and the corresponding local optimal
value of $\alpha$, denoted by $\alpha_{opt}$, is computed.
To this end, we first have to obtain the dispersion relation of the original \emph{full Euler system}.
\subsubsection{The dispersion relation associated to the full-Euler system}
Let us recall the {\em full-Euler system} given in~\eqref{eqn:EulerCompletAdim}:
\begin{equation}\label{Eulercomplet}
\left\{ \begin{array}{l}
\displaystyle\partial_{ t}{\zeta} \ -\ \frac{1}{\mu}G^{\mu}\psi\ =\ 0, \\ \\
\displaystyle\partial_{ t}\Big(H^{\mu,\delta}\psi-\gamma \partial_x{\psi} \Big)\ + \ (\gamma+\delta)\partial_x{\zeta} \ + \ \frac{\epsilon}{2} \partial_x\Big(|H^{\mu,\delta}\psi|^2 -\gamma |\partial_x {\psi}|^2 \Big) \\ \hspace{5cm} = \mu\epsilon\partial_x\mathcal{N}^{\mu,\delta}-\mu\frac{\gamma+\delta}{\rm bo}\frac{\partial_x \big(k(\epsilon\sqrt\mu\zeta)\big)}{{\epsilon\sqrt\mu}} \ ,
\end{array}
\right.
\end{equation}
where
\[ \mathcal{N}^{\mu,\delta} \ \equiv \ \frac{\big(\frac{1}{\mu}G^{\mu}\psi+\epsilon(\partial_x{\zeta})H^{\mu,\delta}\psi \big)^2\ -\ \gamma\big(\frac{1}{\mu}G^{\mu}\psi+\epsilon(\partial_x{\zeta})(\partial_x{\psi}) \big)^2}{2(1+\mu|\epsilon\partial_x{\zeta}|^2)},
\]
and
\begin{eqnarray*}
G^{\mu}\psi &\equiv & G^{\mu}[\epsilon\zeta]\psi \equiv \sqrt{1+\mu|\epsilon\partial_x\zeta|^2}\big(\partial_n \phi_1 \big)\id{z=\epsilon\zeta} = -\mu\epsilon(\partial_x\zeta) (\partial_x\phi_1)\id{z=\epsilon\zeta}+(\partial_z\phi_1)\id{z=\epsilon\zeta},\\
H^{\mu,\delta}\psi &\equiv & H^{\mu,\delta}[\epsilon\zeta,\beta b]\psi \equiv \partial_x \big(\phi_2\id{z=\epsilon\zeta}\big) = (\partial_x\phi_2)\id{z=\epsilon\zeta}+\epsilon(\partial_x \zeta)(\partial_z\phi_2)\id{z=\epsilon\zeta},
\end{eqnarray*}
where $\phi_1$ and $\phi_2$ are uniquely defined (up to a constant for $\phi_2$) as the solutions in $H^2(\mathbb{R})$ of the following Laplace problems.
\begin{eqnarray}
\label{Laplace1} &&\left \{
\begin{array}{ll}\left(\ \mu\partial_x^2 \ +\ \partial_z^2\ \right)\ \phi_1=0 & \mbox{ in } \Omega_1\equiv \{(x,z)\in \mathbb{R}^{2},\ \epsilon{\zeta}(x)<z<1\}, \\
\partial_z \phi_1 =0 & \mbox{ on } \Gamma_{\rm t}\equiv \{(x,z)\in \mathbb{R}^{2},\ z=1\}, \\
\phi_1 =\psi & \mbox{ on } \Gamma\equiv \{(x,z)\in \mathbb{R}^{2},\ z=\epsilon \zeta\},
\end{array}
\right.\\
\label{Laplace2}&&\left\{
\begin{array}{ll}
\left(\ \mu\partial_x^2\ + \ \partial_z^2\ \right)\ \phi_2=0 & \mbox{ in } \Omega_2\equiv\{(x,z)\in \mathbb{R}^{2},\ -\frac{1}{\delta}<z<\epsilon\zeta\}, \\
\partial_{n}\phi_2 = \partial_{n}\phi_1 & \mbox{ on } \Gamma,\\
\partial_{z}\phi_2 =0 & \mbox{ on } \Gamma_{\rm b}\equiv \{(x,z)\in \mathbb{R}^{2},\ z=-\frac{1}{\delta}\}.
\end{array}
\right.
\end{eqnarray}
To obtain the dispersion relation associated to the {\em full-Euler system}, we first have to linearize it around the rest state, that is
$\zeta=0 \, , \, v = 0$. We thus have to compute the two operators $G^{\mu}$ and $H^{\mu,\delta}$ at the rest state.
To this end, we first linearize the two previous problems \eqref{Laplace1}-\eqref{Laplace2},
then write the linear system in wave number space by performing a Fourier transform.
After some algebraic computations, we obtain the dispersion relation.
Let us remark that the two problems \eqref{Laplace1} and \eqref{Laplace2} are very similar, except for the boundary conditions
and the space domains $\Omega_1$ and $\Omega_2$. We have nevertheless chosen to perform the complete analysis for both.
$\bullet$ Linearizing~\eqref{Laplace1} around the rest state ($\zeta=0$) gives:
\begin{eqnarray}
\label{Laplace1lin} &\left\{
\begin{array}{ll}\left(\ \mu\partial_x^2 \ +\ \partial_z^2\ \right)\ \phi_1=0 & \mbox{ in } \Omega_1\equiv \{(x,z)\in \mathbb{R}^{2},\ 0<z<1\}, \\
\partial_z \phi_1 =0 & \mbox{ on } \Gamma_{\rm t}\equiv \{(x,z)\in \mathbb{R}^{2},\ z=1\}, \\
\phi_1 =\psi & \mbox{ on } \Gamma\equiv \{(x,z)\in \mathbb{R}^{2},\ z=0\},
\end{array}
\right.
\end{eqnarray}
Applying the Fourier transform with respect to $x$, one has:
\begin{eqnarray}
\label{Laplace1four} &\left\{
\begin{array}{ll}\ -\mu|k|^2\widehat{\phi_1}+\partial_z^2\widehat{\phi_1}=0 & \mbox{ in } \Omega_1\equiv \{(x,z)\in \mathbb{R}^{2},\ 0<z<1\}, \\
\partial_z \widehat{\phi_1} =0 & \mbox{ on } \Gamma_{\rm t}\equiv \{(x,z)\in \mathbb{R}^{2},\ z=1\}, \\
\widehat{\phi_1} =\widehat{\psi} & \mbox{ on } \Gamma\equiv \{(x,z)\in \mathbb{R}^{2},\ z=0\},
\end{array}
\right.
\end{eqnarray}
One can easily remark that~\eqref{Laplace1four} is an ordinary linear differential equation of order 2 whose solution has the following form
(since $\mu k^2 \geq 0$):
\[\widehat{\phi_1}(k)=A(k)\cosh (\sqrt{\mu}|k|z)+B(k)\sinh (\sqrt{\mu}|k|z).\]
Writing the boundary condition at $z=0$ i.e.
$\widehat{\phi_1}|_{z=0} =\widehat{\psi} $ and at $z=1$ i.e. $(\partial_z \widehat{\phi_1})|_{z=1}=0$ we obtain:
$$ A(k)=\widehat{\psi}(k) \quad \mbox{and} \quad B(k)=-\tanh(\sqrt{\mu}|k|)\widehat{\psi}(k) \ .$$
Thus we obtain:
$$ \widehat{\phi_1}(k)=\Big[\cosh (\sqrt{\mu}|k|z)-\tanh(\sqrt{\mu}|k|)\sinh (\sqrt{\mu}|k|z)\Big]\widehat{\psi}(k) \ . $$
Therefore as:
\[ \widehat{G^{\mu}[0]\psi}=(\partial_z \widehat{\phi_1})|_{z=0} = -\sqrt{\mu}|k|\tanh(\sqrt{\mu}|k|)\widehat{\psi}(k),\]
we finally have:
\begin{equation}\label{Gmu0} G^{\mu}[0]\psi= -\sqrt{\mu}|k|\tanh(\sqrt{\mu}|k|)\psi(k).\end{equation}
$\bullet$ Linearizing~\eqref{Laplace2} around the rest state ($\zeta=0$) gives:
\begin{eqnarray}
\label{Laplace2lin} &\left\{
\begin{array}{ll}
\left(\ \mu\partial_x^2\ + \ \partial_z^2\ \right)\ \phi_2=0 & \mbox{ in } \Omega_2\equiv\{(x,z)\in \mathbb{R}^{2},\ -\frac{1}{\delta}<z<0\}, \\
\partial_{z}\phi_2 = \partial_{z}\phi_1= G^{\mu}[0]\psi & \mbox{ on } \Gamma\equiv \{(x,z)\in \mathbb{R}^{2},\ z=0\},\\
\partial_{z}\phi_2 =0 & \mbox{ on } \Gamma_{\rm b}\equiv \{(x,z)\in \mathbb{R}^{2},\ z=-\frac{1}{\delta}\}.
\end{array}
\right.
\end{eqnarray}
Applying the Fourier transform with respect to $x$, one has:
\begin{eqnarray}
\label{Laplace2four} &\left\{
\begin{array}{ll}
-\mu|k|^2\widehat{ \phi_2}+\partial_z^2\widehat{ \phi_2}=0 & \mbox{ in } \Omega_2\equiv\{(x,z)\in \mathbb{R}^{2},\ -\frac{1}{\delta}<z<0\}, \\
\partial_{z}\widehat{\phi_2} = \widehat{G^{\mu}[0]\psi}= -\sqrt{\mu}|k|\tanh(\sqrt{\mu}|k|)\widehat{\psi}(k)& \mbox{ on } \Gamma\equiv \{(x,z)\in \mathbb{R}^{2},\ z=0\},\\
\partial_{z}\widehat{\phi_2} =0 & \mbox{ on } \Gamma_{\rm b}\equiv \{(x,z)\in \mathbb{R}^{2},\ z=-\frac{1}{\delta}\}.
\end{array}
\right.
\end{eqnarray}
One can easily remark that~\eqref{Laplace2four} is an ordinary differential equation of order 2 whose solution has the following form (since $\mu k^2 \geq 0$):
\[\widehat{\phi_2}(k)=A(k)\cosh (\sqrt{\mu}|k|z)+B(k)\sinh (\sqrt{\mu}|k|z).\]
Writing the boundary condition at $z=0$ i.e.
$(\partial_z \widehat{\phi_2})|_{z=0}=-\sqrt{\mu}|k|\tanh(\sqrt{\mu}|k|)\widehat{\psi}(k)$ and at $z=-1/\delta$ i.e. $(\partial_z \widehat{\phi_2})|_{z=-1/\delta}=0$ we obtain:
$$ B(k)=-\tanh(\sqrt{\mu}|k|)\widehat{\psi}(k) \quad \mbox{and} \quad
-A(k)\sinh\Big(\dfrac{\sqrt{\mu}|k|}{\delta}\Big)+B(k)\cosh\Big(\dfrac{\sqrt{\mu}|k|}{\delta}\Big)=0 \ .$$
Replacing $B(k)$ by its expression, we obtain:
\[A(k)=-\dfrac{\tanh\Big(\sqrt{\mu}|k|\Big)}{\tanh\Big(\dfrac{\sqrt{\mu}|k|}{\delta}\Big)}\widehat{\psi}(k).\]
Since $\widehat{\phi_2}(k)|_{z=0}=A(k)$ we thus have:
$\widehat{\phi_2}(k)|_{z=0}=-\dfrac{\tanh\Big(\sqrt{\mu}|k|\Big)}{\tanh\Big(\dfrac{\sqrt{\mu}|k|}{\delta}\Big)}\widehat{\psi}(k).$\\
Therefore,
\begin{equation}\label{Hmu0}H^{\mu,\delta}[0]\psi=\partial_x(\phi_2|_{z=0})=
-\dfrac{\tanh\Big(\sqrt{\mu}|k|\Big)}{\tanh\Big(\dfrac{\sqrt{\mu}|k|}{\delta}\Big)}\partial_x\psi .\end{equation}
Having obtained the two operators $G^{\mu}$ and $H^{\mu,\delta}$ at the rest state, \eqref{Gmu0} and \eqref{Hmu0}, we can now proceed to compute the dispersion relation for the \emph{full Euler system}.\\
One derives the dispersion relation associated to~\eqref{Eulercomplet} by looking for plane wave solutions of the form $(\zeta,\psi) = (\underline{\zeta},\underline{\psi})e^{i(kx-wt)}$
to the linearized equations around the rest state $(\zeta,\psi )=(0,0)$.\\
\\
The linearization of the \emph{full Euler system}~\eqref{Eulercomplet} at the rest state, $\zeta = 0 \,,\, \psi = 0$, reads:
\begin{equation}\label{Eulercompletlin}
\left\{ \begin{array}{l}
\displaystyle\partial_{ t}{\zeta} \ -\ \frac{1}{\mu}G^{\mu}[0]\psi\ =\ 0, \\ \\
\displaystyle\partial_{ t}\Big(H^{\mu,\delta}[0]\psi-\gamma \partial_x{\psi} \Big)\ + \ (\gamma+\delta)\partial_x{\zeta} = -\mu\frac{\gamma+\delta}{\rm bo}\frac{\partial_x \big(k(\epsilon\sqrt\mu\zeta)\big)}{{\epsilon\sqrt\mu}} \ ,
\end{array}
\right.
\end{equation}
Replacing $G^\mu[0]\psi$ by its expression given by \eqref{Gmu0} in the first equation of the system \eqref{Eulercompletlin}, we first have:
\[-iw\underline{\zeta}+\dfrac{1}{\sqrt{\mu}}|k|\tanh\Big(\sqrt{\mu}|k|\Big)\underline{\psi}=0 .\]
Thus,
\begin{equation}\label{zetabar}\underline{\zeta}=\dfrac{1}{\sqrt{\mu}iw}|k|\tanh\Big(\sqrt{\mu}|k|\Big)\underline{\psi}.\end{equation}
Replacing $H^{\mu,\delta}[0]\psi$ by its expression given by \eqref{Hmu0} in the second equation of the system~\eqref{Eulercompletlin}, we obtain:
\[-\dfrac{\tanh\Big(\sqrt{\mu}|k|\Big)}{\tanh\Big(\dfrac{\sqrt{\mu}|k|}{\delta}\Big)}\partial_t(\partial_x\psi)-\gamma\partial_t(\partial_x\psi)+(\gamma+\delta)\partial_x\zeta-\mu\frac{\gamma+\delta}{\rm bo}\partial_x^3\zeta=0 .\]
Thus,
\[-\dfrac{\tanh\Big(\sqrt{\mu}|k|\Big)}{\tanh\Big(\dfrac{\sqrt{\mu}|k|}{\delta}\Big)}wk\underline{\psi}-
\gamma wk \underline{\psi} +(\gamma+\delta)ik\underline{\zeta}+\mu\frac{\gamma+\delta}{\rm bo}ik^3\underline{\zeta}=0.\]
After straightforward computations and replacing $\underline{\zeta}$ by its expression given in~\eqref{zetabar} we finally obtain the following dispersion relation:
\begin{equation}\label{rdfe}
w^2_{F.E}=\dfrac{(\gamma+\delta)|k|\tanh\Big(\sqrt{\mu}|k|\Big)(1+\dfrac{\mu}{\rm bo}k^2)\tanh\Big(\sqrt{\mu}\dfrac{|k|}{\delta}\Big)}
{\sqrt{\mu}[\tanh\Big(\sqrt{\mu}|k|\Big)+\gamma\tanh\Big(\sqrt{\mu}\dfrac{|k|}{\delta}\Big)]}.\end{equation}
\subsubsection{Phase velocity agreement}
One can easily remark that the Taylor expansions of the two previous dispersion relations~\eqref{rdgn} and~\eqref{rdfe} are equivalent for small wavenumbers,
\[w_{\alpha,GN}^2\equiv w^2_{F.E}\simeq k^2-\mu\nu k^4 + \mathcal {O}(\mu^2 k^6).\]
Indeed, for small wavenumber one has:
\begin{eqnarray*}w^2_{\alpha,GN}&=&k^2\Big(\dfrac{1+\mu\nu(\alpha-1)k^2}{1+\mu\nu\alpha k^2}\Big)
\\&\approx& k^2(1+\mu\nu(\alpha-1)k^2)(1-\mu\nu\alpha k^2+\mathcal {O}((\sqrt{\mu}k)^4))
\\&\approx &k^2(1-\mu\nu\alpha k^2+\mu\nu(\alpha-1)k^2+\mathcal {O}((\sqrt{\mu}k)^4))
\\&\approx&k^2-\mu\nu k^4+\mathcal {O}(\mu^2 k^6).\end{eqnarray*}
and
\begin{eqnarray*}
w^2_{FE}&=&\dfrac{(\gamma+\delta)|k|\tanh(\sqrt{\mu}|k|)(1+\dfrac{\mu}{\rm bo}k^2)\tanh(\sqrt{\mu}\dfrac{|k|}{\delta})}{\sqrt{\mu}[\tanh(\sqrt{\mu}|k|)+\gamma\tanh(\sqrt{\mu}\dfrac{|k|}{\delta})]}
\\&\approx& \dfrac{(\gamma+\delta)|k|\big[\sqrt{\mu}k-\dfrac{(\sqrt{\mu}k)^3}{3}+\mathcal {O}((\sqrt{\mu}k)^5)\big](1+\dfrac{\mu}{\rm bo}k^2)\big[\dfrac{\sqrt{\mu}k}{\delta}-\big(\dfrac{\sqrt{\mu}k}{\delta}\big)^3\dfrac{1}{3}+\mathcal {O}((\sqrt{\mu}k)^5)\big]}{\sqrt{\mu}\Big[\sqrt{\mu}k-\dfrac{(\sqrt{\mu}k)^3}{3}+\mathcal {O}((\sqrt{\mu}k)^5)+\gamma\Big(\dfrac{\sqrt{\mu}k}{\delta}-\big(\dfrac{\sqrt{\mu}k}{\delta}\big)^3\dfrac{1}{3}+\mathcal {O}((\sqrt{\mu}k)^5)\Big)\Big]}
\\&\approx&k^2+\mu k^4\Big[\dfrac{1}{\rm bo}-\dfrac{\delta^2(1+\gamma\delta)}{3\delta^3(\gamma+\delta)}\Big]+\mathcal {O}(\mu^2 k^6)
\\&\approx&k^2-\mu\nu k^4+\mathcal {O}(\mu^2 k^6).
\end{eqnarray*}
Therefore, for small wavenumbers and in the whole range of regimes, the choice of $\alpha$ does not influence the dispersion relation \eqref{rdgn}.
In the desired simulations, we are interested in the short wavelength (i.e. large wavenumber) dispersion characteristics, and thus we would like to find an optimal value of $\alpha$ in order
to observe the same dispersion properties for the reduced Green-Naghdi model as for the original \emph{full Euler} problem. Therefore, we are interested in values of $\alpha$
such that $C_{GN}^p(k)=C_{F.E}^p(k)$ for large values of $k$.
We can thus compute $\alpha_{opt}$ from the equality: $w_{\alpha,GN}^2= w^2_{F.E}$.\\
Let us denote $X=\sqrt{\mu}|k|$. The previous equality writes:
\[\dfrac{1+\nu X^2(\alpha_{opt}-1)}{1+\nu X^2\alpha_{opt}}
=\dfrac{(\gamma+\delta)\tanh(X)(1+\dfrac{X^2}{\rm bo})\tanh(\dfrac{X}{\delta})}{X[\tanh(X)+\gamma\tanh(\dfrac{X}{\delta})]}=g(X) .\]
After straightforward computations we obtain the following expression for the value of $\alpha_{opt}$:
\[\alpha_{opt}=\dfrac{g(X)-1+\nu X^2}{\nu X^2(1-g(X))}.\]
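To make this choice reproducible, the short Python sketch below evaluates $g(X)$ and the local optimal value $\alpha_{opt}$. It is only an illustration, not part of a reference implementation; the one layer values $\gamma=0$, $\delta=1$, ${\rm bo}^{-1}=0$ and $\nu=1/3$ used in the example are the ones appearing in the one layer regime discussed in this paper, and any other parameter choice is left to the reader.
\begin{verbatim}
import numpy as np

def alpha_opt(X, gamma, delta, bo_inv, nu):
    # Local optimal alpha obtained by solving
    # (1 + nu X^2 (alpha-1)) / (1 + nu X^2 alpha) = g(X),  X = sqrt(mu)|k|.
    g = ((gamma + delta) * np.tanh(X) * (1.0 + bo_inv * X**2)
         * np.tanh(X / delta)
         / (X * (np.tanh(X) + gamma * np.tanh(X / delta))))
    return (g - 1.0 + nu * X**2) / (nu * X**2 * (1.0 - g))

# One layer example (gamma=0, delta=1, bo^-1=0, nu=1/3); X kept away from 0
X = np.linspace(0.1, 4.0, 200)
print(alpha_opt(X, gamma=0.0, delta=1.0, bo_inv=0.0, nu=1.0/3.0)[:3])
\end{verbatim}
As a sanity check, in the one layer case the expression above tends to $6/5=1.2$ as $X\to 0$, of the same order as the value $\alpha=1.159$ appearing in Figure~\ref{figalphaopt}.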
In Figure~\ref{figalphaopt} (top), $\alpha_{opt}$ is plotted against the spatial wave number $k$, for $k \in [0,4]$ and $\mu=1$, for the one and two layers cases. For the one layer case we set $\gamma=0$, $\delta=1$ and $\rm bo^{-1}=0$, whereas for the two layers case we set $\gamma=0.95$, $\delta=0.5$ and $\rm bo^{-1}=5\times 10^{-5}$. We would like to mention that $\alpha_{opt}\rightarrow 1$ in both cases when considering very large values of $k$; this fact is confirmed by the numerical simulation done in Section~\ref{DAM2}, see Figure~\ref{dam2}, where we notice similar observations at time $t=20 \ s$. Therefore, we will use an optimal value of $\alpha$ different from $1$ only when looking at the dispersive effects for intermediate wave numbers. This is highlighted when comparing with the numerical experiments done in~\cite{DucheneIsrawiTalhouk16} (for more details see Section~\ref{KH}).
In Figure~\ref{figalphaopt} (bottom), the ratio $\dfrac{C^p_{GN}(k)}{C^p_{S}(k)}$ (i.e. the linear phase velocity error) is plotted against the spatial wave number $k$ for $k\in[0,4]$, in the one and two layers cases.
\begin{figure}
\caption{Top: local optimal values of $\alpha$ for $k\in[0,4]$, for the two layers case (full line) and one layer case (dashed line). Bottom: linear phase velocity errors in the one layer case (blue) and two layers case (red) for $\alpha=1.159$ (full blue line) and $\alpha=1.498$ (full red line) and $\alpha=1$ (dashed line) }
\label{figalphaopt}
\end{figure}
\subsection{High frequencies instabilities}\label{IHFsec}
In what follows, a qualitative discussion on the stability of the improved model~\eqref{GNCH6} is given. We study the high frequencies instabilities in the one layer case, as well as in the two layers case.
As $\mu \ll 1$, a simple expansion of the operator $(I+\mu\nu \alpha T[0])^{-1}$ shows that we have:
$$(I+\mu\nu \alpha T[0])^{-1}[(\gamma+\delta)\partial_x \zeta ]=(\gamma+\delta)\partial_x \zeta +\mathcal {O}(\mu).$$ Thus, we could replace
$ Q_2(\zeta)$ by $\widetilde{Q}_2(\zeta)=-S[\zeta]\big((\gamma+\delta)\partial_x \zeta\big),$ and
$ Q_3(\zeta)$ by $\widetilde{Q}_3(\zeta) = \kappa_1\zeta T[0][(\gamma+\delta)\partial_x \zeta ]$
in the second equation of \eqref{GNCH6}, keeping the same order of precision $\mathcal {O}(\mu^2)$.
This simple expansion would avoid the inversion of $(I+\mu\nu \alpha T[0])$ (resolution of an extra linear system) in the computation of
${Q}_2$ and ${Q}_3$ but it leads to instabilities.
Indeed, this is due to the fact that the two terms $\widetilde{Q}_2(\zeta)$ and $\widetilde{Q}_3(\zeta)$
contain third order derivatives in $\zeta$ that may create high frequencies instabilities.
\begin{remark}
In order to recover the dimensionalized version of system~\eqref{GNCH6}, one has to set $\mu=\epsilon=1$ and add the acceleration of gravity term $g$ when necessary to obtain the following system:
\begin{equation}\label{GNCH6dim}\left\{ \begin{array}{l}
\displaystyle \partial_{ t}\zeta +\partial_x\big(f(\zeta) v\big)\ =\ 0,\\ \\
\displaystyle (I+\nu\alpha T[0]) \big[ \partial_{ t} v+\varsigma v\partial_x v+\dfrac{\alpha-1}{\alpha}\big((\gamma+\delta)g \partial_x \zeta +\partial_x(q_3(\zeta) {v}^2)\big)\big]\\+\dfrac{1}{\alpha}\big((\gamma+\delta)g\partial_x \zeta +\partial_x(q_3(\zeta) {v}^2)\big)+Q_1(v)+\nu Q_2(\zeta)+\nu Q_3(\zeta)=0,
\end{array} \right. \end{equation}
with
\begin{equation}\label{Q2dim2layers}Q_2(\zeta)= \kappa_2 \partial_x \Big(\zeta \partial_x \big((I+\nu\alpha T[0])^{-1}[(\gamma+\delta)g\partial_x \zeta ]\big)\Big)\end{equation}
and
\begin{equation}\label{Q3dim2layers}Q_3(\zeta)=\kappa_1\zeta T[0](I+\nu\alpha T[0])^{-1}[(\gamma+\delta)g\partial_x \zeta ].\end{equation}
\end{remark}
Firstly, we discuss the stability of the model~\eqref{GNCH6dim} in the one layer case without surface tension. To this end, we set $\gamma=0$, $\delta=1$ and $\dfrac{1}{\rm bo}=0$. Thus system~\eqref{GNCH6dim} becomes:
\begin{equation}\label{GNCH6dim1layer}\left\{ \begin{array}{l}
\displaystyle \partial_{ t}\zeta +\partial_x\big((1+\zeta) v\big)\ =\ 0,\\ \\
\displaystyle (I+\frac{\alpha}{3} T[0]) \big[ \partial_{ t} v+v\partial_x v+\dfrac{\alpha-1}{\alpha}g \partial_x \zeta\big]+\dfrac{1}{\alpha}g\partial_x \zeta+Q_1(v)+ Q_2(\zeta)+\dfrac{1}{3} Q_3(\zeta)=0,
\end{array} \right. \end{equation}
with
\begin{equation*}
Q_1(v)=\dfrac{2}{3}\partial_x ((\partial_x v)^2),
\end{equation*}
\begin{equation}\label{Q2dim1layer}
Q_2(\zeta)= \partial_x \Big(\zeta \partial_x\big((I+\frac{\alpha}{3} T[0])^{-1}[g\partial_x \zeta]\big)\Big),
\end{equation}
\begin{equation}\label{Q3dim1layer}
Q_3(\zeta)=\zeta T[0](I+\frac{\alpha}{3}T[0])^{-1}[g\partial_x \zeta ].
\end{equation}
When linearizing system~\eqref{GNCH6dim1layer} around a constant state solution $(\underline{\zeta}, \underline{v})$, one obtains the following linear system in $(\tilde{\zeta},\tilde{v})$:
\begin{equation}\label{GNCH6dim1layerlin}\left\{ \begin{array}{l}
\displaystyle \partial_{ t}\tilde{\zeta} +(1+\underline{\zeta}) \partial_x \tilde{v} + \underline{v} \partial_x{\tilde{\zeta}}\ =\ 0,\\ \\
\displaystyle (I-\frac{\alpha}{3} \partial_x^2) \big[ \partial_{ t} \tilde{v}+\underline{v}\partial_x \tilde{v}+\dfrac{\alpha-1}{\alpha}g \partial_x \tilde{\zeta}\big]+\dfrac{1}{\alpha}g\partial_x \tilde{\zeta}+\dfrac{2}{3}g\underline{\zeta}\partial_x^2\Big((I-\frac{\alpha}{3} \partial_x^2)^{-1}\partial_x \tilde{\zeta}\Big)=0.
\end{array} \right. \end{equation}
Looking for plane wave solutions of the form $(\tilde{\zeta},\tilde{v})=e^{i(kx-wt)}(\zeta^0,v^0)$ as solutions of the above system, one obtains the following dispersion relation:
\begin{equation}\label{rd}
\dfrac{(w-k\underline{v})^2}{g(1+\underline{\zeta}) k^2}=\dfrac{1+\frac{(\alpha-1)}{3}k^2-\dfrac{2k^2\underline{\zeta}}{3(1+\frac{\alpha}{3} k^2)}}{1+\frac{\alpha}{3} k^2}.
\end{equation}
When we consider the linearization of our new model around the rest state $(\underline{\zeta},\underline{v})=(0, 0)$, the dispersion relation~\eqref{rd} becomes:
\begin{equation*}\label{rd0}
w^2=gk^2\dfrac{1+\frac{(\alpha-1)}{3}k^2}{1+\frac{\alpha}{3} k^2}.
\end{equation*}
In this case the perturbations are always stable if $\alpha \geq 1$.
We refer to Section~\ref{Secalphachoice} for the discussion concerning the choice of $\alpha$ in order to improve the dispersive properties of the model.\\
Now, replacing $ Q_2(\zeta)$ defined in~\eqref{Q2dim1layer} by $\widetilde{Q}_2(\zeta)=\partial_x \Big(\zeta \partial_x\big(g\partial_x \zeta\big)\Big)$ and
$ Q_3(\zeta)$ defined in~\eqref{Q3dim1layer} by $\widetilde{Q}_3(\zeta) = \kappa_1\zeta T[0][g\partial_x \zeta ]$ in the second equation of \eqref{GNCH6dim1layer},
modifies the dispersion relation~\eqref{rd} and we obtain instead:
\begin{equation}\label{rdIHF}
\dfrac{(\tildeilde{w}-k\underline{v})^2}{g(1+\underline{\zeta}) k^2}=\dfrac{1+\frac{(\alpha-1)}{3}k^2-\frac{2k^2\underline{\zeta}}{3}}{1+\frac{\alpha}{3} k^2}.
\end{equation}
One can easily check that if $\underline{\zeta} > \frac{(\alpha -1)}{2}$, the numerator of the right hand side of the dispersion relation~\eqref{rdIHF} will become negative for sufficiently large values of $k^2$. Thus the root $\tilde{w}$ will become complex, inducing a high frequency instability of the model, see Figure~\ref{figIHF}. On the other hand, the numerator of the r.h.s of the dispersion relation~\eqref{rd} is positive for sufficiently large values of $k^2$, due to the term $-\dfrac{2 k^2 \underline{\zeta}}{3(1+\frac{\alpha}{3}k^2)}$, thus $w$ is always real at high frequencies.
\begin{figure}
\caption{High frequencies instabilities in the one layer case due to third order derivatives.}
\label{figIHF}
\end{figure}
Let us now discuss the stability of the model~\eqref{GNCH6dim} in the two layers case without surface tension. When linearizing system~\eqref{GNCH6dim} around a motionless steady state solution $(\underline{\zeta}=\text{cst}, \underline{v}=0)$ and following the same method as above, one obtains the following dispersion relation:
\begin{equation}\label{rd2layers}
\dfrac{w^2}{gf(\underline{\zeta}) k^2}=\dfrac{(\gamma+\delta)\Big(1+\nu(\alpha-1)k^2-\nu\dfrac{(\kappa_2-\kappa_1)k^2\underline{\zeta}}{(1+\nu\alpha k^2)}\Big)}{1+\nu\alpha k^2}.
\end{equation}
Replacing $Q_2(\zeta)$ defined in~\eqref{Q2dim2layers} by $\widetilde{Q}_2(\zeta)=\partial_x \Big(\zeta \partial_x\big(g(\gamma+\delta)\partial_x \zeta\big)\Big)$ and
$Q_3(\zeta)$ defined in~\eqref{Q3dim2layers} by $\widetilde{Q}_3(\zeta) = \kappa_1\zeta T[0][g(\gamma+\delta)\partial_x \zeta ]$ in the second equation of \eqref{GNCH6dim},
modifies the dispersion relation~\eqref{rd2layers} and we obtain instead:
\begin{equation}\label{rd2layersIHF}
\dfrac{\tilde{w}^2}{gf(\underline{\zeta}) k^2}=\dfrac{(\gamma+\delta)(1+\nu(\alpha-1)k^2-\nu(\kappa_2-\kappa_1)k^2\underline{\zeta})}{(1+\nu\alpha k^2)}.
\end{equation}
In this case, there exists a critical ratio for the depth of the two layers. Indeed, when $\delta^2 < \gamma$, one has $\kappa_2<\kappa_1$, thus the perturbations are always stable if $\alpha \geq 1$. Whereas, when $\delta^2\geq\gamma$, e.g. taking $\gamma=0.5$, $\delta=0.8$ and $\dfrac{1}{\rm bo}=0$, one has $\kappa_2>\kappa_1$, thus the perturbations are unstable if $\underline{\zeta} > \dfrac{\alpha -1}{\kappa_2-\kappa_1}$. In fact, under the previous conditions the numerator of the right hand side of the dispersion relation~\eqref{rd2layersIHF} will become negative for sufficiently large values of $k^2$. Thus the root $\tilde{w}$ will become complex, inducing a high frequency instability of the model, see Figure~\ref{figIHF2layers}. On the other hand, the numerator of the r.h.s of the dispersion relation~\eqref{rd2layers} is positive for sufficiently large values of $k^2$, due to the term $-\nu\dfrac{(\kappa_2-\kappa_1)k^2\underline{\zeta}}{(1+\nu\alpha k^2)}$, thus $w$ is always real at high frequencies. This ensures the numerical stability of the model~\eqref{GNCH6dim}, which will be considered in the rest of this work.
\begin{figure}
\caption{High frequencies instabilities in the two layers case due to third order derivatives.}
\label{figIHF2layers}
\end{figure}
\section{Numerical methods}\label{NMSec}
This section is devoted to the numerical methods developed to solve the improved Green-Naghdi equations~\eqref{GNCH6}.
As pointed out by many authors~\cite{BCLMT,LannesMarche14}, the improved dispersion Green-Naghdi equations~\eqref{GNCH6} are well-adapted to the implementation of a splitting scheme separating the hyperbolic and the dispersive parts of the equations.
We present in Section~\ref{SSsec} this splitting scheme inspired by~\cite{BCLMT,LannesMarche14}.
We explain in Sections~\ref{Hyperbolic} and~\ref{Dispersive} how we treat respectively the hyperbolic and dispersive parts of the equations.
We decided to treat the hyperbolic part by a finite volume method of Roe type. We construct a first order scheme, a second order ``MUSCL''-type scheme and finally a fifth order WENO scheme. The high order methods are suitable to compute correctly the maximum value of the height $\zeta$ and the discontinuities,
by limiting the diffusive effects.
As we will show in the numerical validations, the high order scheme is also suitable to capture correctly the dispersive effects.
The dispersive part of the proposed splitting method is solved using a classical finite difference method.
\subsection{The splitting scheme}\label{SSsec}
Let us recall the improved Green-Naghdi system that we consider:
\begin{equation}\label{GNCH66}\left\{ \begin{array}{l}
\displaystyle \partial_{ t}\zeta +\partial_x\big(f(\epsilon\zeta) v\big)\ =\ 0,\\ \\
\displaystyle (I+\mu\nu\alpha T[0]) \big[ \partial_{ t} v+\epsilon\varsigma v\partial_x v+\dfrac{\alpha-1}{\alpha}\big((\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)\big)\big]\\+\dfrac{1}{\alpha}\big((\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)\big)+\mu\epsilon Q_1(v)+\mu\nu Q_2(\zeta)+\mu\epsilon\nu Q_3(\zeta) =0.
\end{array} \right. \end{equation}
Here $q_3$ is defined by~\eqref{defkappaq3}, and $Q_1$, $Q_2$ and $Q_3$ are defined by~\eqref{Q1},~\eqref{Q2} and~\eqref{Q3}.\\
We decompose the solution operator $S(.)$ associated to~\eqref{GNCH66} at each time step $\Delta t$ by the following second order splitting scheme:
$$S(\Delta t) = S_1(\Delta t/2)S_2(\Delta t)S_1(\Delta t/2)$$
where $S_1(.)$ is the solution operator associated to the hyperbolic part, and $S_2(.)$ the solution operator associated to the dispersive part of the
equations \eqref{GNCH66}.
$\bullet \ S_1(t)$ is the solution operator associated to the hyperbolic part namely the nonlinear shallow water equations, NSWE:
\begin{equation}\label{hyp}\left\{ \begin{array}{l}
\displaystyle \partial_{ t}\zeta +\partial_x\big(f(\epsilon\zeta) v\big)\ =\ 0,\\ \\
\displaystyle \partial_{ t} v+\epsilon\varsigma v\partial_x v+\dfrac{\alpha-1}{\alpha}\big((\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)\big)+\dfrac{1}{\alpha}\big((\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)\big) =0.
\end{array} \right. \end{equation}
Using the definition of $q_3(\epsilon\zeta)$ given in~\eqref{defkappaq3}, one can easily check that
$\displaystyle{\dfrac{\varsigma}{2}+q_3(\epsilon\zeta)=\dfrac {f'(\epsilon\zeta)}{2}}$. Thus we rewrite the NSWE system \eqref{hyp} in the following condensed form:
\begin{equation}\label{hypcons}\left\{ \begin{array}{l}
\displaystyle \partial_{ t}\zeta +\partial_x\big(f(\epsilon\zeta) v\big)\ =\ 0,\\ \\
\displaystyle \partial_{ t} v+\partial_x\Big(\dfrac{\epsilon f'(\epsilon\zeta)}{2}v^2+(\gamma+\delta)\zeta\Big) =0.
\end{array} \right. \end{equation}
We recall that, $f(\epsilon\zeta) =\displaystyle\frac{h_1h_2}{h_1+\gamma h_2}$ and $f'(\epsilon\zeta) = \displaystyle\frac{h_1^2-\gamma h_2^2}{(h_1+\gamma h_2)^2}$, with $h_1=1-\epsilon\zeta$ and $h_2=1/\delta+\epsilon\zeta$.
$\bullet \ S_2(t)$ is the solution operator associated to the remaining (dispersive) part of the equations.
\begin{equation}\label{disp}\left\{ \begin{array}{l}
\displaystyle \partial_{ t}\zeta \ =\ 0,\\ \\
\displaystyle (I+\mu\nu\alpha T[0]) \big[ \partial_{ t} v-\dfrac{1}{\alpha}\big((\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)\big)\big]\\+\dfrac{1}{\alpha}\big((\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)\big)+\mu\epsilon Q_1(v)+\mu\nu Q_2(\zeta)+\mu\epsilon\nu Q_3(\zeta)=0.
\end{array} \right. \end{equation}
In this study, $S_1$ is computed using a finite volume method while $S_2$ is computed using a classical finite-difference method.
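Before detailing the two building blocks, we illustrate how one time step of this splitting is typically organised. The following Python sketch is purely illustrative: \texttt{hyperbolic\_step} and \texttt{dispersive\_step} are placeholders for the finite volume and finite difference solvers described in Sections~\ref{Hyperbolic} and~\ref{Dispersive}, and are not part of an actual reference code.
\begin{verbatim}
def strang_step(U, dt, hyperbolic_step, dispersive_step):
    # One step of the second order splitting S(dt) = S1(dt/2) S2(dt) S1(dt/2).
    # U = (zeta, v) is the discrete state; both step functions are assumed
    # to return the advanced state (illustrative sketch only).
    U = hyperbolic_step(U, 0.5 * dt)   # S1(dt/2): nonlinear shallow water part
    U = dispersive_step(U, dt)         # S2(dt):   dispersive part
    U = hyperbolic_step(U, 0.5 * dt)   # S1(dt/2)
    return U
\end{verbatim}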
In order to discretize system \eqref{GNCH66}, the numerical domain is an interval of length $L$ denoted $[0,L]$.
Let $N\in\mathbb{N}^{*}$, and let us consider the following mesh on $[0,L]$. Cells are denoted, for every
$i\in [0,N+1]$, by $m_i =(x_{i-1/2},x_{i+1/2})$, with $x_i=\displaystyle\frac{x_{i-1/2}+x_{i+1/2}}{2}$ and
$h_{i}=x_{i+1/2}-x_{i-1/2} $ the local mesh size. The ``fictitious'' cells $m_0$ and $m_{N+1}$ denote the boundary cells and
the mesh interfaces located at $x_{1/2} = 0$ and $x_{N+1/2} = L$ are respectively the upstream and the downstream ends (see Figure \ref{domain-discret}).
\begin{figure}
\caption{The space discretization.}
\label{domain-discret}
\end{figure}
We denote $\displaystyle \Delta x = \min_{i=1,N}h_i$.
We also consider a time discretization $t^n$ defined by $t^{n+1}=t^n+\Delta t$ with $\Delta t$ the time step.
\subsection{Finite volume scheme}\label{Hyperbolic}
For the sake of simplicity in the notations, it is convenient to rewrite the hyperbolic system~\eqref{hypcons} in the following form:
\begin{equation}\label{condform}\partial_t U +\partial_x (F(U))=0,\end{equation}
with the following conservative variables and flux function:
\begin{equation}\label{consvar}
U=\begin{pmatrix}
\zeta\\
v
\end{pmatrix}
,\quad F(U)=\begin{pmatrix}
f(\epsilon\zeta)v\\
\dfrac{\epsilon f'(\epsilon\zeta)}{2} v^2+(\gamma+\delta)\zeta
\end{pmatrix}.
\end{equation}
The Jacobian matrix is given by:
\begin{equation}\label{matricejacob}
A(U)=d(F(U))=\begin{pmatrix}
\epsilon f'(\epsilon\zeta)v&f(\epsilon\zeta)\\
(\gamma+\delta)+\epsilon^2\frac{f''(\epsilon\zeta)}{2}v^2 & \epsilon f'(\epsilon\zeta)v
\end{pmatrix},
\end{equation}
where $f''(\epsilon\zeta)= -\dfrac{2\gamma(h_1+h_2)^2}{(h_1+\gamma h_2
)^3}$.\\
\\
A simple computation shows that the homogeneous system~\eqref{condform} is strictly hyperbolic provided that:
\begin{equation}\label{Condhyp}\left\{ \begin{array}{l}
\inf_{x\in \mathbb{R}} h_1 >\ 0,\\ \\
\inf_{x\in \mathbb{R}} h_2 >\ 0,\\ \\
\inf_{x\in \mathbb{R}} \Big[(\gamma+\delta) -\dfrac{\gamma(h_1+h_2)^2}{(h_1+\gamma h_2
)^3}\epsilon^2v^2 \Big]>\ 0.\\ \\
\end{array} \right. \end{equation}
As a matter of fact, these conditions simply consist in assuming that the deformation of the interface is not too large and in imposing a smallness assumption on $\epsilon v$. Notice that these conditions correspond exactly to the hyperbolicity condition for the shallow water system provided in~\cite{GuyenneLannesSaut10}.\\
As a consequence, the solutions may develop shock discontinuities. In order to rule out the
unphysical solutions, the system~\eqref{condform} must be supplemented by entropy inequalities, (see for instance~\cite{Bouchut04} and references therein for more details).
The Cauchy problem associated to~\eqref{condform} is the following:
\begin{equation}\label{cauchy}\left\{ \begin{array}{l}
\partial_t U +\partial_x (F(U)) \ =\ 0, \qquad t\geq0, x\in \mathbb{R},\\ \\
U(0,x)=U_0(x), \qquad x \in \mathbb{R}.
\end{array} \right. \end{equation}
We are interested in the approximation of~\eqref{cauchy} by the finite volume method.\\
We denote
$\overline{U}_i=(\zeta_i,v_i)$, the cell-centered approximation of $U$ on the cell $m_i$ at time $t$ given by:
$$\overline U_i = \frac{1}{h_i}\int_{m_i}U(t,x)\,dx \ .$$
The piecewise constant representation of $U$ is then given by $U(t,x) = \displaystyle \sum_i \overline U_i \mathds{1}_{m_i}(x)$.
We denote $\overline{U}_i^n=(\zeta_i^n,v_i^n)$, the cell-centered approximation of $U$ on the cell $m_i$ at time $t^n$ given by:
$$\overline U^n_i = \frac{1}{h_i}\int_{m_i}U(t^n,x)\,dx \ .$$
The spatial discretization of the homogeneous system \eqref{hyp} can be recast under the following classical semi-discrete finite-volume formalism:
\begin{equation}\label{volfinidiscret}
\frac{d \overline{U}_i(t)}{dt} + \frac{1}{h_i}\Big(\widetilde{F}(\overline U_i,\overline U_{i+1}) - \widetilde{F}(\overline U_{i-1},\overline U_{i}) \Big) = 0
\end{equation}
where $\widetilde{F}$ is a numerical flux function based on a conservative flux consistent with the homogeneous NSWE:
\begin{equation}\label{flux} F_{i+1/2}=\widetilde{F}(\overline U_i,\overline U_{i+1})\approx F\big(U(t,x_{i+1/2})\big).\end{equation}
\paragraph{VFRoe method.}
In what follows, we consider the numerical approximation of the hyperbolic system of conservation laws in the form of~\eqref{condform}. To this end, we adopt the VFRoe method (see~\cite{BGH00,GHN02,GHN03}) which is an approximate Godunov scheme. It relies on the exact
resolution of the following linearized Riemann problem:
\begin{equation}\label{PRL}\left\{ \begin{array}{l}
\displaystyle \partial_{ t}U + \widetilde{A}(\overline U_i^n, \overline U_{i+1}^n)\partial_x U\ =\ 0,\\ \\
\displaystyle U(0,x)=\left \{ \begin{array}{l}
\displaystyle \overline U^n_i \quad \mbox{if} \quad x<x_{i+1/2},\\ \\
\displaystyle \overline U_{i+1}^n \quad \mbox{if} \quad x>x_{i+1/2},
\end{array}\right.\end{array} \right.
\end{equation}
where $\widetilde{A}(\overline U_i^n,\overline U_{i+1}^n)=A\left(\dfrac{\overline U_i^n+\overline U_{i+1}^n}{2}\right)$.\\
By solving the linearized Riemann problem we obtain $\overline U_{i+1/2}^*= U(x=x_{i+1/2},t=t_{n})$, the interface value between two neighbouring cells.\\
In what follows, we will detail the choice of the numerical flux for different orders of approximation. The only change is in the computation of the interface values $\overline U_{i+1/2}^*$, which depend on the right and the left states of the linearized Riemann problem. For the sake of simplicity, we will
still denote by $\widetilde{F}$ these numerical fluxes.
\paragraph{CFL condition.}\label{CFLpar}
It is always necessary to impose what is called a CFL condition (for
Courant, Friedrichs, Lewy) on the timestep to prevent the blow up of
the numerical values. It usually comes under the form
\begin{equation}\label{CFL}a_{i+1/2}\Delta t \leq \Delta x, \quad i=1,\ldots,N,\end{equation}
where $a_{i+1/2}=\displaystyle \max_{j=1,2} |\lambda_j(\widetilde U_i)|$ and $\lambda_j(\widetilde U_i)$ are the eigenvalues of $A\big(\widetilde U_i=\dfrac{\overline U_i^n+ \overline U_{i+1}^n}{2}\big)$.\\
The restriction~\eqref{CFL} enables in practice to compute the timestep
at each time level $t_n$, in order to determine the new time level $t_{n+1} =
t_n + \Delta t$ (within this view, $\Delta t$ is not constant, it is computed in an
adaptive fashion).
\paragraph{Consistency.}
The numerical flux $\widetilde{F}(U_l,U_r)$ is called consistent with~\eqref{condform} if
\begin{equation}\label{consis}\widetilde{F}(U,U)=F(U) \quad \mbox{for all U}.\end{equation}
\subsubsection{First order finite-volume scheme}\label{FV1sec}
The semi-discrete equation \eqref{volfinidiscret} is discretized in time by an explicit Euler method to obtain:
\begin{equation}\label{volfini}
\overline U^{n+1}_i= \overline U^n_i-\dfrac{\Delta t}{h_i}(F_{i+1/2}^n-F_{i-1/2}^n),
\end{equation}
where the numerical flux is defined directly as the value of the exact flux at the interface value, namely:
\begin{eqnarray}\label{numflux}
F_{i+1/2}^n=\widetilde{F}(\overline U_i^n,\overline U_{i+1}^n)=F(\overline U_{i+1/2}^*)\nonumber \\
\\F_{i-1/2}^n=\widetilde{F}(\overline U_{i-1}^n,\overline U_i^n)=F(\overline U_{i-1/2}^*)\nonumber.
\end{eqnarray}
Let us remark that by construction the numerical flux given by \eqref{numflux} ensures the consistency property.
In the sequel, we will suppose that the space discretisation is uniform.
\paragraph{Algorithm.}\label{algo} In the following, we state the algorithm for computing the discrete values $\overline U_i^{n+1}$ at $t^{n+1}$. Given the initial data, the boundary conditions and a number $CFL \leq 1$, we start with the known discrete averaged values $(\overline{ U}_i^n)$ for $i=0,...,N+1$ at $t^n$. As long as $t<T$, one performs the following steps:\\
\\
1) Computation of $\widetilde{A}_i$ for $i=0,...,N$ where $\widetilde{A}_i=A\left(\dfrac{\overline U_i^n+\overline U_{i+1}^n}{2}\right)$.\\
\\
2) Computation of $r_i^1$,$r_i^2$ and $\lambda_i^1$, $\lambda_i^2$ set respectively as the eigenvectors and eigenvalues of $\widetilde{A}_i$.\\
\\
3) Computation of $\Delta t$, such that $\dfrac{\Delta t}{\Delta x} \leq \dfrac{CFL}{a_{i+1/2}}$.\\
\\
4) Computation of ${\overline U_{i-1/2}^*}$ for $i=1,...,N+1$ by solving the linearized Riemann problem.\\
\\
In fact we have 3 cases:\\
\\
$\bullet$ if $\lambda_i^1,\lambda_i^2<0$ then ${\overline U_{i-1/2}^*}=\overline U_i^n$.\\
\\
$\bullet$ if $\lambda_i^1,\lambda_i^2>0$ then ${\overline U_{i-1/2}^*}=\overline U_{i-1}^n$.\\
\\
$\bullet$ if $\lambda_i^1<0<\lambda_i^2$ then:\begin{equation}\left\{ \begin{array}{l}\displaystyle \mbox{for } x<\lambda_i^1t \quad \mbox{one has} \quad {\overline U_{i-1/2}^*}=\overline U_{i-1}^n, \\ \\
\displaystyle \mbox{for } \lambda_i^1t<x<\lambda_i^2t \quad \mbox{one has} \quad {\overline U_{i-1/2}^*}=\overline U_{i}^n-(R^{-1}[U])_2r_i^2=\overline U_{i-1}^n+(R^{-1}[U])_1r_i^1,\\ \\
\displaystyle \mbox{for } x>\lambda_i^2t \quad \mbox{one has} \quad {\overline U_{i-1/2}^*}=\overline U_{i}^n,\end{array} \right. \end{equation}
with $R=(r_i^1|r_i^2)$ and $[U]=\overline U^n_{i}-\overline U^n_{i-1}$.\\
\\
5) Computation of $F({\overline U_{i-1/2}^*})$.\\
\\
6) Computation of $\overline U^{n+1}_i=\overline U^n_i-\dfrac{\Delta t}{\Delta x}(F_{i+1/2}^n-F_{i-1/2}^n)$ for $i=1,...,N$.\\
\\
We repeat this algorithm for the new time level $t^{n+1}=t^{n}+\Delta t$, until we reach the required final time $T$.
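For completeness, a compact Python transcription of this first order algorithm is sketched below. It is an illustration under simplifying assumptions (uniform mesh, periodic boundaries, a fixed time step satisfying~\eqref{CFL}); \texttt{flux} and \texttt{jacobian} stand for user-supplied functions implementing $F(U)$ of~\eqref{consvar} and $A(U)$ of~\eqref{matricejacob}.
\begin{verbatim}
import numpy as np

def vfroe_first_order_step(U, dt, dx, flux, jacobian):
    # One explicit Euler step of the first order VFRoe scheme.
    # U has shape (N, 2) with columns (zeta, v); periodic boundaries assumed.
    Ul = U
    Ur = np.roll(U, -1, axis=0)        # right state at interface i+1/2
    Um = 0.5 * (Ul + Ur)               # mean state used to linearize
    Ustar = np.empty_like(U)
    for i in range(U.shape[0]):
        A = jacobian(Um[i])            # 2x2 Jacobian at the mean state
        lam, R = np.linalg.eig(A)
        order = np.argsort(lam)
        lam, R = lam[order], R[:, order]
        a = np.linalg.solve(R, Ur[i] - Ul[i])   # wave strengths R^{-1}[U]
        if lam[1] < 0:                 # both waves move to the left
            Ustar[i] = Ur[i]
        elif lam[0] > 0:               # both waves move to the right
            Ustar[i] = Ul[i]
        else:                          # intermediate state at x/t = 0
            Ustar[i] = Ul[i] + a[0] * R[:, 0]
    F = np.array([flux(u) for u in Ustar])      # fluxes F_{i+1/2}
    return U - dt / dx * (F - np.roll(F, 1, axis=0))
\end{verbatim}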
\subsubsection{Second order finite-volume scheme: MUSCL-RK2}\label{secMUSCLRK2}
A drawback of the Roe scheme (like the Godunov scheme) is that it is very diffusive. A remedy is to extend the scheme to second order in space, combined with a second order Runge-Kutta scheme in time. In fact, we would like to reduce both numerical dissipation and dispersion within the hyperbolic component $S_1(.)$.
To this end, high order reconstructed
states at each interface have to be considered, following the classical MUSCL approach~\cite{Leer79} (Monotonic Upstream Scheme for Conservation Laws). To prevent the spurious oscillations that would occur around discontinuities or shocks, we use the ``minmod'' limiter, designed to generate slope limited, reconstructed left and right states for each cell that are used to calculate the flux at the interfaces. The implementation of this scheme is straightforward and provides a natural extension of the Roe scheme described above. The main benefits observed in the tests are a gain of precision and stability.
The steps of the second-order reconstruction are as follows: \\
1. Using the discrete averaged values $\overline U_i^n$, we construct the slopes $S_i^n$ using the ``minmod" limiter as the reconstruction must be non oscillatory in some sense, see~\cite{GR91}. We consider:
\begin{equation}\label{minmod}
S^{n}_i=\mbox{\emph{minmod}} \left(\dfrac{\overline U_{i+1}-\overline U_i}{x_{i+1}-x_i}, \dfrac{\overline U_{i}-\overline U_{i-1}}{x_{i}-x_{i-1}}\right)
\end{equation} where the function \emph{minmod} is defined on $\mathcal{R}R^2$ by
\begin{equation*}\mbox{\emph{minmod}}(a,b)=\left\{ \begin{array}{l}
\min (|a|,|b|)\, \mathrm{sgn}(a) \quad \mbox{if} \quad \mathrm{sgn}(a)=\mathrm{sgn}(b),\\ \\
0 \quad \mbox{otherwise}.
\end{array} \right. \end{equation*}
2. On the cell $m_i = ]x_{i-1/2}, x_{i+1/2}[$, the solution is approached by:
\begin{equation*} U^{n}(x)= \overline U_i^{n} +(x-x_i)S_i^n. \end{equation*}
3. We compute the numerical flux at the interfaces:
\begin{equation*}F^n_{i+1/2}=\widetilde{F}(\overline U_i^{n,+},\overline U_{i+1}^{n,-}) \quad \mbox{and} \quad F^n_{i-1/2}=\widetilde{F}(\overline U_{i-1}^{n,+},\overline U_{i}^{n,-})\end{equation*} with:
\begin{equation*}\left\{ \begin{array}{l}
\displaystyle \overline U_i^{n,+}\ =\ \overline U_i^{n}+\dfrac{\Delta x}{2}S_i^n\\ \\
\overline U_{i+1}^{n,-}\ =\ \overline U_{i+1}^{n}-\dfrac{\Delta x}{2}S_{i+1}^n.
\end{array} \right. \end{equation*}
4. We compute $\widetilde{U}_i^{n+1}$ by the application of~\eqref{volfini}, thus the scheme is given as follows:
\begin{equation*}\widetilde{U}^{n+1}_{i}= \overline U_{i}^n - \dfrac{\Delta t}{\Delta x}(F_{i+1/2}^n-F_{i-1/2}^n). \end{equation*}
As far as time discretization is concerned, we use the second-order explicit Runge–Kutta ``RK2"
method which is described in the following. Given the ODE $\dfrac{dy}{dt}=f(t,y)$, one has,
\begin{eqnarray}\label{RK2}
y^{n+1}&=&y^n+\dfrac{h}{2}\Big(f(t^n,y^n)+f(t^{n+1},\tildeilde{y}^{n+1})\Big),
\end{eqnarray}
with $\tildeilde{y}^{n+1}=y^n+hf(t^n,y^n)$, and $t^{n+1}=t^{n}+h$.\\
Applying~\eqref{RK2} to~\eqref{volfini}, we obtain the following modified scheme ``MUSCL-RK2":
\begin{equation}\label{RK2MUSCL}\overline U^{n+1}_i=\overline U^n_i-\dfrac{\Delta t}{2\Delta x}(F_{i+1/2}^n-F_{i-1/2}^n+F_{i+1/2}^{n+1}-F_{i-1/2}^{n+1}),\end{equation}
with \begin{equation*}F^{n+1}_{i+1/2}=\widetilde{F}(\widetilde{U}_i^{n+1,+},\widetilde{U}_{i+1}^{n+1,-}) \quad \mbox{and} \quad F^{n+1}_{i-1/2}=\widetilde{F}(\widetilde{U}_{i-1}^{n+1,+},\widetilde{U}_{i}^{n+1,-}) \ ,
\end{equation*}
$-\widetilde{F}$: the numerical flux determined as in the first order VFRoe method, given by \eqref{numflux}.\\
$- \widetilde{U}^{n+1}_{i}$ is computed as follows:
\begin{equation*}\widetilde{U}^{n+1}_{i}= \overline U_{i}^n - \dfrac{\Delta t}{\Delta x}(F_{i+1/2}^n-F_{i-1/2}^n) .
\end{equation*}
$-\widetilde{U}^{n+1,+}_{i}=\widetilde{U}^{n+1}_{i}+\dfrac{\Delta x}{2} \widetilde{S}_{i}^{n+1}.$ \\
$-\widetilde{U}^{n+1,-}_{i+1}=\widetilde{U}^{n+1}_{i+1}-\dfrac{\Delta x}{2} \widetilde{S}_{i+1}^{n+1}.$\\
$-\widetilde{S}_{i}^{n+1}$ and $\widetilde{S}_{i+1}^{n+1}$ are associated respectively to $\widetilde{U}_{i}^{n+1}$ and $\widetilde{U}_{i+1}^{n+1}$ by \eqref{minmod}.
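A minimal Python sketch of the limited reconstruction used in steps 1--3 above is given below (uniform mesh and periodic boundaries assumed; illustrative only). The interface fluxes and the RK2 update then reuse the VFRoe flux of the previous section.
\begin{verbatim}
import numpy as np

def minmod(a, b):
    # Componentwise minmod limiter.
    return np.where(np.sign(a) == np.sign(b),
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_states(U, dx):
    # Limited left/right states U^{n,+}_i and U^{n,-}_i of each cell.
    S = minmod((np.roll(U, -1, axis=0) - U) / dx,
               (U - np.roll(U, 1, axis=0)) / dx)
    return U + 0.5 * dx * S, U - 0.5 * dx * S   # (U^{+}, U^{-})
\end{verbatim}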
\subsubsection{Higher order finite-volume scheme: WENO5-RK4}\label{WENO5sec}
The second order schemes are known to degenerate to first
order accuracy at smooth extrema. To reach higher order accuracy in smooth regions and a good resolution around discontinuities, we implement fifth-order accuracy WENO reconstruction, following~\cite{JiangShu96,Shu98}, where Jiang and Shu constructed third and fifth order finite difference WENO schemes in multi-space dimensions with a general framework for the design of the smoothness indicators and nonlinear weights. To automatically achieve high order accuracy and non-oscillatory property near discontinuities, WENO schemes use the idea of adaptive stencils in the reconstruction procedure based on the local smoothness of the numerical solution. We would like to mention also the previous studies~\cite{CLM,BCLMT,LannesMarche14}, where it is shown that for the study of dispersive waves, it is necessary to use high-order schemes to prevent the corruption of the dispersive properties of the model by some dispersive truncation errors linked to second-order schemes.
Using the same reconstruction proposed in~\cite{BCLMT}, we consider a cell $m_i$ and the corresponding constant averaged value $\overline{U}_i^n=(\overline{\zeta}_i^n,\overline{v}_i^n)$, with a constant space step $\Delta x$. This approach provides high order reconstructed left and right values $\overline{U}_i^{n,-}$ and $\overline{U}_i^{n,+}$, built following the five points stencil,
and introduced as follows:
\begin{equation}\label{recval}
\overline{U}_i^{n,+}=\overline{U}_i^{n}+\dfrac{1}{2}\overline{\delta U}_i^{n,+} \quad \mbox{and} \quad \overline{U}_i^{n,-}=\overline{U}_i^{n}-\dfrac{1}{2}\overline{\delta U}_i^{n,-},
\end{equation}
where $\overline{\delta U}_i^{n,+}$ and $\overline{\delta U}_i^{n,-}$ are defined as follows:
\begin{eqnarray}\label{deltaUinplus}
\overline{\delta U}_i^{n,+}&=\dfrac{2}{3}(\overline{U}_{i+1}^{n}-\overline{U}_{i}^{n})+\dfrac{1}{3}(\overline{U}_{i}^{n}-\overline{U}_{i-1}^{n})-\dfrac{1}{10}(-\overline{U}_{i-1}^{n}+3\overline{U}_{i}^{n}-3\overline{U}_{i+1}^{n}+\overline{U}_{i+2}^{n})\nonumber\\&\quad-\dfrac{1}{15}(-\overline{U}_{i-2}^{n}+3\overline{U}_{i-1}^{n}-3\overline{U}_{i}^{n}+\overline{U}_{i+1}^{n})
\end{eqnarray}
\begin{eqnarray}\label{deltaUinmoins}
\overline{\delta U}_i^{n,-}&=\dfrac{2}{3}(\overline{U}_{i}^{n}-\overline{U}_{i-1}^{n})+\dfrac{1}{3}(\overline{U}_{i+1}^{n}-\overline{U}_{i}^{n})-\dfrac{1}{10}(-\overline{U}_{i-2}^{n}+3\overline{U}_{i-1}^{n}-3\overline{U}_{i}^{n}+\overline{U}_{i+1}^{n})\nonumber\\&\quad-\dfrac{1}{15}(-\overline{U}_{i-1}^{n}+3\overline{U}_{i}^{n}-3\overline{U}_{i+1}^{n}+\overline{U}_{i+2}^{n})
\end{eqnarray}
and the coefficients $\dfrac{2}{3}$, $\dfrac{1}{3}$, $\dfrac{-1}{10}$ and $\dfrac{-1}{15}$ are set in order to obtain better dissipation and dispersion properties
in the truncation error. We consider the following modified scheme:
\begin{equation}\label{WENO5}\overline{U}^{n+1}_{i}= \overline U_{i}^n - \dfrac{\Delta t}{\Delta x}\Big(\widetilde{F}(\overline{U}_i^{n,+},\overline{U}_{i+1}^{n,-})-\widetilde{F}(\overline{U}_{i-1}^{n,+},\overline{U}_{i}^{n,-})\Big). \end{equation}
To reduce spurious oscillations near discontinuities, we apply the same limitation procedure as in~\cite{BCLMT}, preserving the scheme positivity and the high order accuracy. Thus scheme~\eqref{WENO5} becomes
\begin{equation}\label{WENO5LIM}\overline{U}^{n+1}_{i}= \overline U_{i}^n - \dfrac{\Delta t}{\Delta x}\Big(\widetilde{F}(^L\overline{U}_i^{n,+},^L\overline{U}_{i+1}^{n,-})-\widetilde{F}(^L\overline{U}_{i-1}^{n,+},^L\overline{U}_{i}^{n,-})\Big). \end{equation}
We define the limited high-order values as follows:
\begin{equation}\label{LIMrecval}
^L\overline{U}_i^{n,+}=\overline{U}_i^{n}+\dfrac{1}{2}L_{i}^{+}(\overline U^n) \quad \mbox{and} \quad ^L\overline{U}_i^{n,-}=\overline{U}_i^{n}-\dfrac{1}{2}L_{i}^{-}(\overline U^n).
\end{equation}
Using the following limiter,
\begin{equation*}L(u,v,w)=\left\{ \begin{array}{l}
\min (|u|,|v|,2|w|) \ \mathrm{sgn}(u) \quad \mbox{if} \quad \mathrm{sgn}(u)=\mathrm{sgn}(v),\\ \\
0 \quad \mbox{otherwise},
\end{array} \right. \end{equation*}
we define the limiting process as,
\begin{equation*}L_{i}^{+}(\overline U^n)=L(\delta \overline{U}_{i}^{n},\delta \overline{U}_{i+1}^{n},\overline{\delta U}_i^{n,+})\quad \mbox{and} \quad L_{i}^{-}(\overline U^n)=L(\delta \overline{U}_{i+1}^{n},\delta \overline{U}_{i}^{n},\overline{\delta U}_i^{n,-}),\end{equation*}
with $\delta \overline{U}_{i+1}^{n}=\overline U_{i+1}^{n}-\overline U_{i}^{n}$ and $\delta \overline{U}_{i}^{n}=\overline U_{i}^{n}-\overline U_{i-1}^{n}$ are upstream and downstream variations, and $\overline{\delta U}_i^{n,+} $ and $\overline{\delta U}_i^{n,-}$ taken from~\eqref{deltaUinplus} and~\eqref{deltaUinmoins}.
The limited high order reconstructions stated above must be performed on both conservative variables $\overline{U}_i^n=(\overline{\zeta}_i^n,\overline{v}_i^n)$. We would like to mention that the resulting finite volume scheme preserves motionless steady states, $\zeta=\text{cst}$ and $v=0$.
As far as time discretization is concerned, we use the fourth-order explicit Runge–Kutta ``RK4" method which is described in the following. Given the ODE $\dfrac{dy}{dt}=f(t,y)$, one has,
\begin{eqnarray}\label{RK4}
k_1&=&f(t^n,y^n)\nonumber,\\
k_2&=&f(t^n+\dfrac{h}{2},y^n+h\dfrac{k_1}{2})\nonumber,\\
k_3&=&f(t^n+\dfrac{h}{2},y^n+h\dfrac{k_2}{2})\nonumber,\\
k_4&=&f(t^n+h,y^n+hk_3)\nonumber,\\
y^{n+1}&=&y^n+\dfrac{h}{6}\big(k_1+2k_2+2k_3+k_4\big),
\end{eqnarray}
with $t^{n+1}=t^n+h$. Applying~\eqref{RK4} to~\eqref{WENO5LIM}, one gets the ``WENO5-RK4" scheme.
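The reconstruction increments~\eqref{deltaUinplus}-\eqref{deltaUinmoins} and the limiter $L$ translate directly into the following Python sketch (periodic indexing assumed; illustrative only, not the reference implementation):
\begin{verbatim}
import numpy as np

def weno5_increments(U):
    # delta U^{n,+}_i and delta U^{n,-}_i with periodic indexing.
    Um2, Um1 = np.roll(U, 2, axis=0), np.roll(U, 1, axis=0)
    Up1, Up2 = np.roll(U, -1, axis=0), np.roll(U, -2, axis=0)
    dUp = (2.0/3.0)*(Up1 - U) + (1.0/3.0)*(U - Um1) \
          - 0.1*(-Um1 + 3*U - 3*Up1 + Up2) \
          - (1.0/15.0)*(-Um2 + 3*Um1 - 3*U + Up1)
    dUm = (2.0/3.0)*(U - Um1) + (1.0/3.0)*(Up1 - U) \
          - 0.1*(-Um2 + 3*Um1 - 3*U + Up1) \
          - (1.0/15.0)*(-Um1 + 3*U - 3*Up1 + Up2)
    return dUp, dUm

def limiter_L(u, v, w):
    # Limiter L(u, v, w) used in the limited reconstruction.
    return np.where(np.sign(u) == np.sign(v),
                    np.sign(u) * np.minimum(np.minimum(np.abs(u),
                                                       np.abs(v)),
                                            2.0 * np.abs(w)),
                    0.0)
\end{verbatim}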
\subsection{Finite difference scheme for the dispersive part}\label{Dispersive}
First of all, let us detail how to construct nodal values of the unknowns (which are the ones used for a finite difference discretization) from the
cell-averaged value computed by a finite volume scheme and vice versa.
We denote by $U_i^n$ the nodal value of $U$ at the $i$th node $(x_{i+1/2})_{i\in[0,N]}$ and at time $t_n$, i.e. $U_i^n$ is an approximation of $U(x_{i+1/2},t^n)$.
The finite volume-finite difference mix implies switching between the cell-averaged and nodal values for each unknown and at each time step.
To this end, we use a fifth-order accuracy WENO reconstruction, which allows us to approximate the nodal values (i.e. finite difference unknowns)
$(U_i^n)_{i =1,N+1}$ in terms of the cell-averaged values (i.e. finite volume unknowns) $(\overline U_i^n)_{i =1,N}$. The relation is given by:
\begin{equation}\label{switchFVDF}
U_i^n=\dfrac{1}{30}\overline U_{i-2}^n-\dfrac{13}{60}\overline U_{i-1}^n+\dfrac{47}{60}\overline U_i^n+\dfrac{9}{20}\overline U_{i+1}^n-\dfrac{1}{20}\overline U_{i+2}^n +\mathcal {O}(\Delta x^5), \quad 1\leq i \leq N+1,
\end{equation}
with adaptations at the boundaries following the method presented in Section~\ref{BCsec}.
One can easily recover the relation that allows to determine the cell-averaged values $(\overline U_i^n)_{i \in [1:N]}$ in terms of the nodal values $(U_i^n)_{i \in [1:N+1]}$.
We can easily check that~\eqref{switchFVDF} preserves the steady state at rest and that this formula is accurate up to $\mathcal {O}(\Delta x^5)$
terms, thus preserving the global order of the scheme.
We can now proceed by explaining how we compute the solution operator $S_2(.)$ associated to the dispersive part of the equations.
Let us recall the system~\eqref{disp} corresponding to the operator $S_2(.)$, given in Section~\ref{SSsec}:
\begin{equation}\label{disp1}\left\{ \begin{array}{l}
\displaystyle \partial_{ t}\zeta \ =\ 0,\\ \\
\displaystyle \partial_{ t} v-\dfrac{1}{\alpha}\big((\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)\big)\\+(I+\mu\nu\alpha T[0])^{-1}\Big[\dfrac{1}{\alpha}\big((\gamma+\delta)\partial_x \zeta +\epsilon\partial_x(q_3(\epsilon\zeta) {v}^2)\big)+\mu\epsilon Q_1(v)+\mu\nu Q_2(\zeta)+\mu\epsilon\nu Q_3(\zeta)\Big]=0.
\end{array} \right. \end{equation}
where the operators $Q_1$, $Q_2$ and $Q_3$ are explicitly given in~\eqref{Q1},~\eqref{Q2} and~\eqref{Q3}.
For the sake of simplicity, we detail the numerical resolution of \eqref{disp1} using an explicit Euler scheme in time. Standard extensions to second
and fourth order Runge-Kutta methods have been used according to the order of the space derivative operators, as done in the previous section.
The finite difference discretization of the system~\eqref{disp1} leads to the following discrete problem:
\begin{equation}\label{disp1disc}\left\{ \begin{array}{l}
\displaystyle \dfrac{\zeta^{n+1}-\zeta^n}{\Delta t} \ =\ 0,\\ \\
\displaystyle \dfrac{v^{n+1}-v^n}{\Delta t}-\dfrac{1}{\alpha}(\gamma+\delta)D_1 (\zeta^{n})
-2\dfrac{\epsilon}{\alpha}q_3(\epsilon\zeta^{n})v^{n}D_1(v^{n})-\dfrac{\epsilon^2}{\alpha}q'_3(\epsilon\zeta^{n})D_1(\zeta^{n})(v^{n})(v^{n})
\\+(I-\mu\nu\alpha D_2)^{-1} \Big[ \dfrac{1}{\alpha}(\gamma+\delta)D_1(\zeta^{n})+2\dfrac{\epsilon}{\alpha}q_3(\epsilon\zeta^{n})v^{n}D_1(v^{n})+\dfrac{\epsilon^2}{\alpha}q'_3(\epsilon\zeta^{n})D_1(\zeta^{n})(v^{n})(v^{n})\\
+\mu\epsilon Q_1(v^n)+\mu\nu\epsilon Q_2(\zeta^n)+\mu\epsilon\nu Q_3(\zeta^n)\Big]=0,
\end{array} \right. \end{equation}
with
\begin{equation*}\label{discQ1}Q_1(v^n)=2 \kappa D_1(v^{n})D_2(v^{n}),\end{equation*}
\begin{equation*}\label{discQ2}Q_2(\zeta^n)=\kappa_2D_1\Big[\zeta^{n}D_1\Big((I-\mu\nu\alpha D_2)^{-1}(\gamma+\delta)D_1(\zeta^{n})
\Big)\Big],
\end{equation*}
\begin{equation*}\label{discQ3}Q_3(\zeta^n)=-\kappa_1\zeta^{n}D_2\Big[(I-\mu\nu \alpha D_2)^{-1}(\gamma+\delta)D_1(\zeta^{n})
\Big].\end{equation*}
The system~\eqref{disp1disc} is solved at each time step using a classical finite-difference technique, where the matrices $D_1$ and $D_2$ are the classical centered discretizations of the derivatives $\mathfrak{p}artial_x$ and $\mathfrak{p}artial^2_x$ given below.
The first formula is the second-order formula called ``DF2", where the spatial derivatives are given as follows:
\begin{equation*}\label{Diff1}
(\partial_x U)_i=\dfrac{1}{2\Delta x}(U_{i+1}-U_{i-1}),
\end{equation*}
\begin{equation*}\label{Diff2}
(\partial_x^2 U)_i=\dfrac{1}{\Delta x^2}(U_{i+1}-2U_{i}+U_{i-1}).
\end{equation*}
For time discretization, the second-order formula ``DF2" is associated to a second-order classical Runge-Kutta ``RK2" scheme, and thus one obtains the ``DF2-RK2" scheme.
The second formula is the fourth-order formula called ``DF4" where the spatial derivatives are given as follows:
\begin{equation*}\label{Diff1-4}
(\partial_x U)_i=\dfrac{1}{12\Delta x}(-U_{i+2}+8U_{i+1}-8U_{i-1}+U_{i-2}),
\end{equation*}
\begin{equation*}\label{Diff2-4}
(\partial_x^2 U)_i=\dfrac{1}{12\Delta x^2}(-U_{i+2}+16U_{i+1}-30U_{i}+16U_{i-1}-U_{i-2}).
\end{equation*}
For time discretization, the fourth-order formula ``DF4" is associated to a fourth-order classical Runge-Kutta ``RK4" scheme, and thus one obtains the ``DF4-RK4" scheme.
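In practice the operators $D_1$ and $D_2$ can be assembled once and for all as matrices. The following Python sketch builds the periodic ``DF2'' and ``DF4'' versions; it is an illustration only and ignores the boundary treatment of Section~\ref{BCsec}.
\begin{verbatim}
import numpy as np

def centered_derivative_matrices(N, dx, order=4):
    # Periodic centered matrices D1 ~ d/dx and D2 ~ d^2/dx^2 ("DF2"/"DF4").
    D1 = np.zeros((N, N)); D2 = np.zeros((N, N))
    for i in range(N):
        ip1, im1 = (i + 1) % N, (i - 1) % N
        ip2, im2 = (i + 2) % N, (i - 2) % N
        if order == 2:
            D1[i, ip1], D1[i, im1] = 1/(2*dx), -1/(2*dx)
            D2[i, ip1], D2[i, i], D2[i, im1] = 1/dx**2, -2/dx**2, 1/dx**2
        else:
            D1[i, [ip2, ip1, im1, im2]] = np.array([-1, 8, -8, 1]) / (12*dx)
            D2[i, [ip2, ip1, i, im1, im2]] = \
                np.array([-1, 16, -30, 16, -1]) / (12*dx**2)
    return D1, D2
\end{verbatim}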
\subsection{ Boundary conditions}\label{BCsec}
In this section, we show how to treat the boundary conditions for the hyperbolic and dispersive parts of the splitting scheme.
Suitable relations are imposed on both cell-averaged and nodal quantities. We only treat either periodic boundary conditions or
reflective boundary conditions. We detail now how we have implemented these boundary conditions for the hyperbolic part and the dispersive part
of the numerical scheme.
For the hyperbolic part, we have introduced ``ghost cells'' respectively at the upstream and downstream boundaries of the domain.
The imposed relations on the cell-averaged quantities are the following:
$\bullet$ $\overline U_{-k+1}=\overline U_{N-k+1}$, and $\overline U_{N+k}=\overline U_{k}$, $k \geq 1$, for periodic conditions on upstream and downstream boundaries.
$\bullet$ $\overline \zeta_{-k+1}=\overline \zeta_{-k}$, $\overline v_{-k+1}=-\overline v_{-k}$ and $\overline \zeta_{N+k-1}=\overline \zeta_{N+k}$, $\overline v_{N+k-1}=-\overline v_{N+k}$, $k \geq 1$, for reflective conditions on the left and right boundaries.\\
For the dispersive part of the splitting, the boundary conditions are simply imposed on the nodal values that are located outside of the domain, in order to maintain centered formula at the boundaries, while keeping a regular structure in the discretized model:
$\bullet$ $U_{-k+1}= U_{N-k+1}$, and $U_{N+k}=U_{k}$, $k \geq 1$, for periodic conditions on upstream and downstream boundaries.
$\bullet$ $\zeta_{-k+1}= \zeta_{-k}$, $ v_{-k+1}=-v_{-k}$ and $ \zeta_{N+k-1}=\zeta_{N+k}$, $ v_{N+k-1}=-v_{N+k}$, $k \geq 1$, for reflective conditions on upstream and downstream boundaries.
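One common way to realise these two boundary treatments in a code is to fill ghost values by copying (periodic) or mirroring (reflective, with a sign change on $v$) the interior values. The following Python sketch is an illustration of this idea only; it does not reproduce the exact indexing conventions written above.
\begin{verbatim}
import numpy as np

def fill_ghost_cells(zeta, v, ng, kind="periodic"):
    # Pad the interior arrays with ng ghost cells on each side.
    if kind == "periodic":
        pad = lambda a: np.concatenate([a[-ng:], a, a[:ng]])
        return pad(zeta), pad(v)
    # reflective: mirror zeta, mirror and change the sign of v
    zp = np.concatenate([zeta[ng-1::-1], zeta, zeta[:-ng-1:-1]])
    vp = np.concatenate([-v[ng-1::-1], v, -v[:-ng-1:-1]])
    return zp, vp
\end{verbatim}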
\section{Numerical validations}\label{NVsec}
In this section, several numerical tests are performed in both the one and two layers cases in order to validate the numerical efficiency and accuracy of the improved Green-Naghdi model~\eqref{GNCH6dim}. We first consider several numerical tests in the one layer case without any surface tension, i.e. $\rm bo^{-1}=0$. We begin by studying the propagation of a solitary wave over a flat bottom. We compare our numerical solution with an analytic one (up to an $\mathcal {O}(\mu^2)$ remainder) at several times and for different orders of discretization and show that our numerical scheme is very efficient and accurate. To evaluate the influence of the nonlinear and dispersive terms we study the collision between two solitary waves traveling in opposite directions (head-on collision). We then study the breaking of a Gaussian hump into two solitary waves. Finally, we study the dam-break problem, supplemented by a comparison between the second and fifth order accuracy in order to show the ability of the higher order numerical scheme to deal with discontinuities. We used the value $\alpha=1$ in the aforementioned cases. In fact, we obtained very similar results when performing the same simulations with $\alpha=1.159$. Secondly, two numerical simulations are performed in the two layers case. In the first one, we compare our results with numerical data from~\cite{DucheneIsrawiTalhouk16}, where we show that a very good matching is observable if $\alpha$ is carefully chosen as in Section~\ref{Secalphachoice}. We then consider the dam-break problem in the two layers case, where we test the ability of the splitting scheme to compute dispersive shock waves with high accuracy. We would like to mention that in all the numerical tests, we use the WENO5 reconstruction for the hyperbolic part of the splitting scheme and a fourth order finite difference scheme ``DF4'' for the dispersive part, both associated to a fourth-order classical Runge-Kutta ``RK4'' time scheme. In every numerical simulation presented in the following section, we choose to use a CFL number equal to 1 in the algorithm stated on page~\pageref{algo}, in order to obtain a stable numerical scheme.
\subsection{Numerical validations in the one layer case}\label{NV1sec}
\subsubsection{Propagation of a solitary wave}~\label{PSWsec}
Here, we test the accuracy of our numerical scheme~\eqref{GNCH6dim} with $\alpha=1$, by using the exact solitary wave solutions of the one layer Green-Naghdi equations in the one-dimensional setting over a flat bottom (see~\cite{LannesMarche14}), given in variables with dimensions, by
\begin{equation}\label{soliton}\left\{ \begin{array}{l}
\displaystyle \zeta(t,x)=a \ \text{sech}^2(k(x-ct)) ,\\ \\
\displaystyle v(t,x)=c \Big(\dfrac{\zeta(t,x)}{d_2+\zeta(t,x)}\Big),\\ \\
\displaystyle k=\dfrac{\sqrt{3 a}}{2d_2\sqrt{d_2+a}}, \quad c=\sqrt{g(d_2+a)},
\end{array} \right. \end{equation}
where we recall that $d_2$ is the depth of the fluid when considering the one layer case.
Such solitary waves are also solutions of the improved Green-Naghdi model~\eqref{GNCH6dim} up to an $\mathcal {O}(\mu^2)$ remainder.
We consider the propagation of a solitary wave initially centered at $x_0=20 \ m$, of relative amplitude $a=0.2 \ m$, over a constant water depth $d_2=1 \ m$. The computational domain is $200 \ m$ in length and is discretized with 1280 cells. The single solitary wave propagates from left to right. In this test, since the solitary wave is initially far from the boundaries, the boundary conditions do not affect the computation, thus we choose to impose periodic boundary conditions on each boundary for the sake of simplicity.
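As an illustration of this test case, a minimal Python sketch of how the initial condition~\eqref{soliton} may be sampled at the cell centers is given below; the value of $g$, the cell-center convention and all variable names are our own assumptions and do not reflect the actual code used for the computations reported here.
\begin{verbatim}
import numpy as np

# hypothetical parameters taken from the description of the test case
g, d2, a, x0 = 9.81, 1.0, 0.2, 20.0      # gravity, depth, amplitude, initial center
L, N = 200.0, 1280                       # domain length and number of cells
x = (np.arange(N) + 0.5) * L / N         # cell centers

k = np.sqrt(3.0 * a) / (2.0 * d2 * np.sqrt(d2 + a))
c = np.sqrt(g * (d2 + a))

def zeta_exact(t, x):
    # a * sech^2(k(x - x0 - c t)): the solitary wave of (soliton), recentred at x0
    return a / np.cosh(k * (x - x0 - c * t))**2

def v_exact(t, x):
    z = zeta_exact(t, x)
    return c * z / (d2 + z)

zeta0, v0 = zeta_exact(0.0, x), v_exact(0.0, x)   # initial data at t = 0
\end{verbatim}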
We compare the water surface profile of our numerical solution provided by the model~\eqref{GNCH6dim} (after setting the parameters $\gamma=0$ and $\delta=1$ corresponding to the one layer case) with the exact one given by~\eqref{soliton} at several times, using the first, second and fifth order discretizations (see Figure~\ref{125}). One can easily remark that the fifth order discretization ``WENO5-DF4-RK4" provides very accurate solutions and reduces both numerical dissipation and dispersion, contrary to the first order discretization ``FV1-DF2-Euler", which appears to be very diffusive. Indeed, looking at the amplitude and shape of the solitary wave at $t=50 \ s$ at the bottom of Figure~\ref{125}, we observe an excellent agreement between the numerical and exact solutions, unlike the second order discretization ``MUSCL-DF2-RK2" (middle of Figure~\ref{125}), which generates a less accurate numerical solution. The preservation of the amplitude and shape of the solitary wave computed using the fifth order scheme, even after $200 \ m$ of propagation, indicates that the governing equations have been accurately discretized in space and time.
In what follows, we quantify the numerical accuracy of our numerical scheme by computing the numerical solution for this particular test case for an increasing number of cells $N$, over a duration $T=5\ s$. Starting with $N=80$ cells, we successively double the number of cells. The relative errors $E_{L^2}(\zeta)$ and $E_{L^2}(v)$ on the water surface deformation and the averaged velocity presented in Table~\ref{L2err} are computed at $t= 5 \ s$, using the discrete $L^2$ norm $\|.\|_{2}$:
\begin{equation}
E_{L^2}(\zeta)=\dfrac{\|\zeta_{num}-\zeta_{sol}\|_{2}}{\|\zeta_{sol}\|_{2}}; \qquad E_{L^2}(v)=\dfrac{\|v_{num}-v_{sol}\|_{2}}{\|v_{sol}\|_{2}},
\end{equation}
where $(\zeta_{num}, v_{num})$ are the numerical solutions and $(\zeta_{sol}, v_{sol})$ are the analytical ones coming from~\eqref{soliton}. Very accurate results are thus obtained, confirming the capacity of our numerical method to compute the propagation of a solitary wave in a stable way.
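A short Python sketch of this error computation is given below for illustration; the function and variable names are hypothetical and the reference fields are assumed to be sampled on the same grid as the numerical solution.
\begin{verbatim}
import numpy as np

def relative_L2_error(q_num, q_ref):
    # discrete relative L2 error ||q_num - q_ref||_2 / ||q_ref||_2
    return np.linalg.norm(q_num - q_ref) / np.linalg.norm(q_ref)

# e.g.  E_zeta = relative_L2_error(zeta_num, zeta_exact(5.0, x))
#       E_v    = relative_L2_error(v_num,    v_exact(5.0, x))
\end{verbatim}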
\begin{figure}
\caption{Propagation of a solitary wave over a flat bottom: water surface profiles at t=0, 10, 20, 30, 40 and 50s. Top: FV1-DF2-Euler, middle: MUSCL-DF2-RK2 and bottom: WENO5-DF4-RK4. }
\label{125}
\end{figure}
\begin{table}[H]
\centering
\begin{tabular}{ p{4cm} p{4cm} p{3cm} }
FV1-DF2-Euler &MUSCL-DF2-RK2 & WENO5-DF4-RK4
\end{tabular}
\begin{tabular}{ | c | c | c | c | c | c | c | }
\hline
N & $E_{L^2}(\zeta)$&$E_{L^2}(v)$& $E_{L^2}(\zeta)$&$E_{L^2}(v)$& $E_{L^2}(\zeta)$&$E_{L^2}(v)$\\
\hline
80 & $5.79\times 10^{-1}$ & $5.56\times 10^{-1}$ & $5.57\times 10^{-1}$ & $5.30\times 10^{-1}$& $4.32\times 10^{-1}$ & $4.02\times 10^{-1}$\\
160 & $4.30\times 10^{-1}$ & $4.07\times 10^{-1}$ & $3.54\times 10^{-1}$ & $3.27\times 10^{-1}$& $1.94\times 10^{-1}$ & $1.67\times 10^{-1}$\\
320 & $3.04\times 10^{-1}$ & $2.83\times 10^{-1}$ & $1.76\times 10^{-1}$ & $1.54\times 10^{-1}$& $6.45\times 10^{-2}$ & $5.25\times 10^{-2}$ \\
640 & $1.95\times 10^{-1}$ & $1.79\times 10^{-1}$ & $5.96\times 10^{-2}$ & $5.00\times 10^{-2}$& $1.16\times 10^{-2}$ & $9.30\times 10^{-3}$ \\
1280& $1.14\times 10^{-1}$ & $1.04\times 10^{-1}$ & $1.38\times 10^{-2}$& $1.20\times 10^{-2}$ & $3.60\times 10^{-3}$& $3.40\times 10^{-3}$\\
\hline
\end{tabular}
\caption{Propagation of a solitary wave over a flat bottom: relative $L^2$-error table for the conservative variables.}
\label{L2err}
\end{table}
\begin{remark}\label{remfilter}
We believe that the main reason for not obtaining the predicted order with each space discretization might be the fact that the analytic solution given in~\eqref{soliton} satisfies the model~\eqref{GNCH6dim} only up to an $\mathcal {O}(\mu^2)$ remainder, that is to say it is an approximate solution. A remedy for this situation could be an ``iterative cleaning" technique that acts to damp the high frequency oscillations (i.e. the oscillatory dispersive tails) that appear due to the remainder term of size $\mathcal {O}(\mu^2)$. This technique has been used by several authors, see for instance~\cite{BC98,BDM07,ND08}. In this paper, we do not attempt to establish an optimal convergence result, and the filtering technique is left to future work.
\end{remark}
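For reference, since the number of cells is doubled at each refinement step, the observed convergence order between two successive rows of Table~\ref{L2err} can be estimated as $\log_2\big(E_{N}/E_{2N}\big)$. A small Python sketch of this computation (our own illustration, not part of the scheme itself) is the following.
\begin{verbatim}
import numpy as np

def observed_orders(errors):
    # convergence rates log2(E_N / E_{2N}) for errors obtained while
    # doubling the number of cells at each refinement step
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# example with the E_L2(zeta) column of WENO5-DF4-RK4 in Table L2err
print(observed_orders([4.32e-1, 1.94e-1, 6.45e-2, 1.16e-2, 3.60e-3]))
\end{verbatim}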
\subsubsection{Head-on collision of counter-propagating waves}\label{HOCsec}
We now investigate the interaction of solitary waves, which allows us to evaluate the impact of the nonlinear and dispersive terms. To this end, we study the head-on collision of two solitary waves traveling in opposite directions. The initial data for the two counter-propagating solitary waves are given in~\eqref{soliton}. Many authors have used different models and numerical methods in order to numerically study this problem (see~\cite{CGHHS06,Arnaud14,MID14}). Unlike solitary waves in integrable systems, those of the \emph{full Euler} equations are often followed by a nonzero residual wave after interactions. The resulting residual has the form of dispersive trailing waves of small amplitude. In this test, we study the interaction of two solitary waves of equal amplitudes propagating in opposite directions, initially located at $x=100 \ m$ and $x= 300 \ m$. The spatial domain is $400 \ m$ in length and is discretized using $1200$ cells. Periodic conditions are imposed on each boundary.
\begin{figure}
\caption{Head on collision of two counter-propagating solitary waves: water surface profiles at several times during the propagation.}
\label{HOC}
\end{figure}
The results are shown in Figure~\ref{HOC} at different propagation times. Before the collision, at times $t=20 \ s$ and $t= 25 \ s$, one can observe two dispersive tails of very small amplitude located to the left and right of each solitary wave. The generation of such dispersive tails is due to the $\mathcal {O}(\mu^2)$ remainder term, as mentioned in Remark~\ref{remfilter}. As expected, the waves collide and reach a maximum height larger than the sum of the
amplitudes of the two incident solitary waves at the time $t = 27 \ s$. After the collision, dispersive tails with small amplitudes appear clearly when zooming at $t= 70 \ s$, illustrating an appropriate characterization of the nonlinear interactions. Capturing these dynamics validates the high precision of our numerical scheme. The head-on collision is also studied in~\cite{MID14,Arnaud14}, leading to similar observations.
\subsubsection{Breaking of a Gaussian hump}\label{HOWsec}
In this test, we consider the following initial data representing a heap of water,
\begin{equation*}
\zeta(0,x)=a e^{-\frac{1}{\lambda}(x-\frac{L}{2})^2}, \quad v(0,x)=0,
\end{equation*}
where $a$ represents the amplitude, $\lambda$ the width and $L$ the length of the domain. Figure~\ref{HOW} shows the overall behavior of the solutions using $a=0.4 \ m$, $\lambda=40 \ m$ and $L=400 \ m$, discretized with $2000$ cells and periodic boundary conditions. The initial hump breaks up into two large solitary waves and smaller dispersive tails. These waves and the dispersive tails travel in opposite directions.
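A minimal Python sketch of this initial condition, under the same hypothetical cell-center convention as before, reads:
\begin{verbatim}
import numpy as np

a, lam, L, N = 0.4, 40.0, 400.0, 2000        # amplitude, width, domain length, cells
x = (np.arange(N) + 0.5) * L / N             # cell centers
zeta0 = a * np.exp(-(x - L / 2)**2 / lam)    # heap of water centred at L/2
v0 = np.zeros_like(x)                        # fluid initially at rest
\end{verbatim}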
\begin{figure}
\caption{Breakup of a Gaussian hump into two solitary waves traveling in opposite directions and dispersive tails. }
\label{HOW}
\end{figure}
The results shown in Figure~\ref{HOW} tend to confirm the ability of our numerical scheme to capture these dynamics accurately. Indeed, this typical behaviour is also studied in~\cite{MID14}, leading to very similar observations.
\subsubsection{Dam-break problem in the one layer case}\label{DB1sec}
We consider now a dam-break problem in the one layer case in order to test the ability of our numerical scheme to deal with discontinuities.
In general, discontinuous initial data of this type generates dispersive shock waves due to the dispersive effects~\cite{MGH10}. Analytic and computational studies of the dispersive shock waves in fully-nonlinear dispersive shallow water systems were carried out in~\cite{EGS06,MGH10}.
We would like to mention also the previous studies~\cite{CLM,BCLMT,LannesMarche14}, where it is shown that for the study of dispersive waves, it is necessary to use high-order schemes to prevent the corruption of the dispersive properties of the model by some dispersive truncation errors linked to second-order schemes. Indeed, this test is supplemented by a comparison between the second and fifth order accuracy exhibiting the ability of higher order schemes to capture the rapid oscillations in dispersive shock waves. We use the following initial data:
\begin{equation}\label{daminitialdata}
\zeta(0,x)=a[1+\tanh(250-|x|)], \quad v(0,x)=0,
\end{equation}
where $a=0.2091 \ m$. The computational domain is the interval $x\in (-700,700)$ and is discretized using 2800 cells. We choose to impose periodic boundary conditions.
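For illustration, the initial data~\eqref{daminitialdata} may be sampled as follows (the grid convention and variable names are again our own assumptions):
\begin{verbatim}
import numpy as np

a, N = 0.2091, 2800
dx = 1400.0 / N
x = -700.0 + (np.arange(N) + 0.5) * dx            # cell centers in (-700, 700)
zeta0 = a * (1.0 + np.tanh(250.0 - np.abs(x)))    # "dam" of half-width 250 m
v0 = np.zeros_like(x)
\end{verbatim}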
\begin{figure}
\caption{Dam break in the one layer case: water surface profiles at several times}
\label{dam1}
\end{figure}
Figure~\ref{dam1} shows the results of the numerical simulations with two different orders of discretization, ``MUSCL-DF2-RK2" and ``WENO5-DF4-RK4", generating two dispersive tails propagating in opposite directions, on the left and right sides of the “dam”, and two rarefaction waves that travel towards the center. A zoom on the shock and rarefaction waves at $t=30 \ s$ and $t=65 \ s$ clearly shows the corruption of the dispersive properties by the second order scheme. The dam-break problem was also studied in~\cite{EGS06,MGH10}, and the results there fit well with our numerical observations. The numerical model proposed in~\cite{MID14} computes, using a finite element method, all the nonlinear dispersive terms, in particular the third order ones. These third order derivatives are present in our model, but in order to improve the frequency dispersion we have proposed to factorize these terms, making it possible not to compute them. This formulation has the inconvenience of introducing some numerical diffusion. This is the reason why the dispersive tails observed in the dam-break problem in~\cite{MID14} have larger amplitude oscillations.
\subsection{Numerical validations in the two layers case}\label{NV2sec}
\subsubsection{Kelvin-Helmholtz instabilities}\label{KH}
In this case, we would like to highlight the importance of the choice of the parameter $\alpha$ in improving the frequency dispersion of the model~\eqref{GNCH6}, through the simulation of a sufficiently regular initial wave, following the numerical experiments performed in~\cite{DucheneIsrawiTalhouk16}. In the aforementioned paper, the authors introduce a new class of Green-Naghdi type models for the propagation of internal waves with improved frequency dispersion in order to prevent high-frequency Kelvin-Helmholtz instabilities. These models are obtained by regularizing the original Green-Naghdi model, slightly modifying the dispersion components using a class of Fourier multipliers. They present three different choices of Fourier multipliers, each one yielding a specific Green-Naghdi model, which they denote as follows:\\
\\
$\bullet$ ``original" as the classical Green-Naghi model introduced in~\cite{ChoiCamassa99}.\\
\\
$\bullet$ ``regularized" which is a well-posed system for sufficiently small and regular data, even in absence of surface tension. In addition its dispersion relation fit the one of \emph {full Euler system} at order $\mathcal {O}(\mu^3)$.\\
\\
$\bullet$ ``improved" whose dispersion relation is the same as the one of the \emph {full Euler system}.\\
\\
Using spectral methods~\cite{Trefethen00} for the space discretization and the Matlab solver ode45, based on an explicit Runge-Kutta (4,5) pair, for the time evolution, they numerically compute several of their Green-Naghdi systems. Several computations are made, with and without surface tension, in order to observe how the different frequency dispersions may affect the appearance of Kelvin-Helmholtz instabilities. In order to compare with the numerical experiments done in~\cite{DucheneIsrawiTalhouk16}, we choose the initial data $\zeta(0,x)=-e^{-4|x|^2}$ and $v(0,x)=0$ (represented by the dashed lines). The dimensionless parameters are set as follows: $\mu=0.1$, $\epsilon=0.5$, $\delta=0.5$, $\gamma=0.95$ and $\rm bo^{-1}=5\times10^{-5}$. The computational domain is the interval $x \in (-4,4)$ discretized with 512 cells using periodic boundary conditions. In all the following simulations we compute our numerical solution using the fifth order accuracy scheme ``WENO5-DF4-RK4".
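A brief Python sketch of this set-up (dimensionless parameters and initial data only; the names and the grid convention are our own assumptions) is:
\begin{verbatim}
import numpy as np

params = dict(mu=0.1, eps=0.5, delta=0.5, gamma=0.95, bo_inv=5e-5)  # dimensionless parameters
N = 512
x = -4.0 + (np.arange(N) + 0.5) * 8.0 / N      # cell centers in (-4, 4)
zeta0 = -np.exp(-4.0 * np.abs(x)**2)           # initial interface deformation
v0 = np.zeros_like(x)                          # zero initial velocity
\end{verbatim}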
\begin{figure}
\caption{Comparison with the Green-Naghdi models, with surface tension, at time $t=2$, for $\alpha_{opt}=1$ (left) and $\alpha_{opt}=1.271$ (right).}
\label{DIT1}
\end{figure}
\begin{figure}
\caption{Comparison with the Green-Naghdi models, with surface tension, at time $t=3$, for $\alpha_{opt}=1$ (left) and $\alpha_{opt}=1.271$ (right).}
\label{DIT2}
\end{figure}
\begin{figure}
\caption{Comparison with the Green-Naghdi models, without surface tension ($\rm bo^{-1}=0$), at time $t=2$.}
\label{DIT3}
\end{figure}
\begin{figure}
\caption{Comparison with the Green-Naghdi models, without surface tension ($\rm bo^{-1}=0$), at time $t=5$.}
\label{DIT4}
\end{figure}
Figures~\ref{DIT1} and~\ref{DIT2} show the comparisons between our numerical solution for $\alpha_{opt}=1$ (left) and $\alpha_{opt}=1.271$ (right) and the Green-Naghdi models solutions obtained in~\cite{DucheneIsrawiTalhouk16}, with a small amount of surface tension, at time $t=2$ and $t=3$ respectively.
We observe an excellent agreement between our numerical solution computed for $\alpha_{opt}=1.271$ and both the ``improved" and ``regularized" models at $t=2$ and $t=3$. As expected, at $t=3$ the original model induces Kelvin-Helmholtz instabilities. Meanwhile, the flows predicted by the regularized and improved models and by our model~\eqref{GNCH6} with $\alpha_{opt}=1.271$ remain smooth and are very similar. Similarly, Figure~\ref{DIT3} shows an excellent agreement between the numerical solution computed for $\alpha_{opt}=1.271$ and the ``improved" and ``regularized" models, without surface tension, at time $t=2$, while the flow of the original model is completely destroyed due to Kelvin-Helmholtz instabilities. However, at a larger time ($t=5$), Figure~\ref{DIT4} shows that in the absence of surface tension ($\rm bo^{-1}=0$) Kelvin-Helmholtz instabilities completely destroy both the ``original" and ``improved" models, while the numerical solution computed for $\alpha_{opt}=1.271$ and that of the ``regularized" model remain smooth and very similar.
The overall observations show the importance of the choice of the parameter $\alpha$, as in Section~\ref{Secalphachoice}, in improving the frequency dispersion. Indeed, when choosing $\alpha_{opt}=1.271$, we observe an excellent matching between our numerical solutions and those obtained by the ``improved" model, before the latter is completely destroyed in the absence of surface tension due to the Kelvin-Helmholtz instabilities. Moreover, our numerical solution matches the one computed by the ``regularized" model even for a large time, with or without surface tension. This is not the case when choosing $\alpha_{opt}=1$. In fact, the ``improved" model has the same dispersion relation as the \emph{full Euler system}, and the dispersion relation of the ``regularized" model fits that of the \emph{full Euler system} at order $\mathcal {O}(\mu^3)$. This explains the matching observed when choosing an optimal value for $\alpha$.
\subsubsection{Dam-break problem in the two layers case}\label{DAM2}
This simulation tests the ability of our numerical scheme~\eqref{GNCH6dim} to handle discontinuities when considering a dam-break problem in the two layers case. To this end, we use the same initial data as in the one layer case, given by~\eqref{daminitialdata}, where $a=0.2091 \ m$. The simulations are performed on the interval $x \in (-700,700)$, discretized with 2800 cells and imposing periodic conditions on each boundary. The dimensionless parameters representing the ratio between the depths and the ratio between the densities of the two layers are set respectively as follows: $\delta=0.5$, $\gamma=0.95$. Taking into account a small amount of surface tension, we set $\rm bo^{-1}= 5\times 10^{-5}$. We would like to mention that the simulations are computed using $\alpha=1$. As expected, the same simulations performed when choosing $\alpha=1.1498$ yielded the same result, since $\alpha_{opt}\rightarrow 1$ when considering large wave numbers, as explained in Section~\ref{Secalphachoice}. Figure~\ref{dam2} shows the results of the numerical simulation at several times, generating two dispersive tails propagating towards the center and two rarefaction waves that travel in opposite directions, on each side of the “dam”. A zoom on the dispersive tails is proposed at $t=55 \ s$ and $t=75 \ s$. Capturing this phenomenon accurately exhibits the high accuracy of our numerical scheme.
\begin{figure}
\caption{Dam break in the two layers case: water surface profiles at several times}
\label{dam2}
\end{figure}
\section{Conclusion}
In this work, we presented a numerical scheme for the Green-Naghdi model in the Camassa-Holm regime. This model is first reformulated in a more appropriate structure, where the time dependency of a second order differential operator present in the model is removed while keeping the stabilizing effects of its inversion. Furthermore, we improved the frequency dispersion of the original model, keeping the same order of precision, thanks to a parameter $\alpha$ to be precisely chosen. Additionally, our model does not contain third order derivatives that may create high-frequency instabilities. We then propose an efficient, precise and stable numerical splitting scheme that decomposes the hyperbolic and dispersive parts of the equations. The hyperbolic part is treated with a finite-volume method, where we consider the following schemes with different orders of accuracy: a first order finite-volume scheme based on a VFRoe method, a second order finite-volume scheme following the classical MUSCL approach and finally a fifth-order WENO reconstruction. On the other hand, we treat the dispersive part with a finite-difference scheme, using second and fourth order formulas.
Concerning the time discretization, we use classical second and fourth order Runge-Kutta methods, according to the order of the space discretization under consideration. Finally, we presented several numerical validations in the one and two layers cases, showing the numerical efficiency and accuracy of our scheme and exhibiting its ability to reduce numerical dispersion and dissipation and to deal with discontinuities. The next step of this study may concern the extension of this numerical scheme to the more general configuration of a variable topography. We believe that this splitting strategy may be applied in the variable bottom case.
\end{document}
\begin{document}
\renewcommand{(\roman{enumi})}{(\roman{enumi})}
\title {A characterisation of almost simple groups with socle ${}^2\E_6(2)$ or $\mathcal{M}(22)$ }
\author{Chris Parker}
\author{M. Reza Salarian}
\author{Gernot Stroth}
\address{Chris Parker\\
School of Mathematics\\
University of Birmingham\\
Edgbaston\\
Birmingham B15 2TT\\
United Kingdom} \email{[email protected]}
\address{M. Reza Salarian \\ Department of Mathematics\\
Tarbiat Moallem University\\
University square- The end of Shahid beheshti Avenue\\
31979-37551 Karaj- Iran}
\email {[email protected]}
\address{Gernot Stroth\\
Institut f\"ur Mathematik\\ Universit\"at Halle - Wittenberg\\
Theodor-Lieser-Str. 5\\ 06099 Halle\\ Germany}
\email{[email protected]}
\date{\today}
\maketitle \pagestyle{myheadings}
\markright{{\sc }} \markleft{{\sc Chris Parker, M. Reza Salarian and Gernot Stroth}}
\begin{abstract} We show that the
sporadic simple group $\mathcal{M}(22)$, the exceptional group of Lie type ${}^2\E_6(2)$ and their automorphism groups are
uniquely determined by the approximate structure of the centralizer of an element of order $3$ together with some
information about the fusion of this element in the group.
\end{abstract}
\section{Introduction}
The aim of this article is to identify the groups with minimal normal subgroup $\mathcal{M}(22)$, one of the sporadic
simple groups discovered by Fischer, and the exceptional Lie type group ${}^2\E_6(2)$ from certain information
about the centralizer of an element of order $3$.
The purpose of this paper and its companions \cite{Parker1,PS1, F4, ParkerRowley2} is to provide identification theorems
for the work in \cite{almostlie}, where the following configuration, relevant to the classification of groups with
a so-called large $p$-subgroup, is considered. We are given a group $G$, a prime $p$ and a large $p$-subgroup $Q$
(the definition of a large $p$-subgroup is not important for this discussion) and we find ourselves in the
following situation. There is a group $H$ containing a Sylow $p$-subgroup $S$ of $G$ such that $F^*(H)$ is a
simple group of Lie type. This is the typical situation, and one would expect that this group $H$ is in fact
the entire group $G$. However, it can exceptionally happen that the normalizer of the large subgroup is
not contained in $H$. This happens more frequently than one might expect when $F^*(H)$ is defined over the field
of $2$ or $3$ elements and $N_H(Q)$ is soluble. Indeed, in \cite{almostlie} the authors determine all the cases
when this phenomenon appears. This paper fits into the picture when we consider $F^*(H) \cong \Omega_7(3)$. In
$H$, the large subgroup $Q$ is extraspecial of order $3^7$ and $N_{F^*(H)}(Q) \approx 3^{1+6}_+.(SL_2(3)\times \Omega_3(3)).$ In \cite{almostlie} we show that if $N_G(Q)$ is not contained in $H$, then $C_H(Z(Q))$ is a centralizer in a group of type either $\mathcal{M}(22)$ or ${}^2\E_6(2)$, where these centralizers are defined as follows.
\begin{definition} We say that $X$ is similar to a $3$-centralizer in a group of type ${}^2\E_6(2)$
provided
\begin{enumerate}
\item $Q=F^*(X)$ is extraspecial of order $3^{1+6}$ and $Z(F^*(X)) =Z(X)$; and
\item $O_{2}(X/Q) \cong \mathrm{Q}_8\times \mathrm{Q}_8\times \mathrm{Q}_8$.
\end{enumerate}
\end{definition}
\begin{definition} We say that $X$ is similar to a $3$-centralizer in a group of type $\mathcal{M}(22)$
provided
\begin{enumerate}
\item $Q=F^*(X)$ is extraspecial of order $3^{1+6}$ and $Z(F^*(X)) =Z(X)$; and
\item $O_{2}(X/Q)$ acts on $Q/Z$ as a subgroup of order $2^7$ of $\mathrm{Q}_8\times \mathrm{Q}_8\times \mathrm{Q}_8$, which contains
$Z(\mathrm{Q}_8\times \mathrm{Q}_8\times \mathrm{Q}_8)$.
\end{enumerate}
\end{definition}
In this paper we will prove the following two theorems.
\begin{theorem}\label{Main}
Suppose that $G$ is a group, $H \le G$ is similar to a $3$-centralizer in a group of type ${}^2\E_6(2)$, $Z = Z(F^*(H))$ and $H= C_G(Z)$. If $S \in \Syl_3(G)$ and $Z$ is not weakly closed in $S$ with respect to $G$, then $Z$ is not weakly closed in $O_3(H)$ and $G \cong {}^2\E_6(2)$, ${}^2\E_6(2).2$, ${}^2\E_6(2).3$ or ${}^2\E_6(2).\mathrm{Sym}(3)$.
\end{theorem}
\begin{theorem}\label{Main1}
Suppose that $G$ is a group, $H \le G$ is similar to a $3$-centralizer in a group of type $\mathcal{M}(22)$, $Z =
Z(F^*(H))$ and $H= C_G(Z)$. If $S \in \Syl_3(G)$ and $Z$ is not weakly closed in $S$ with respect to $G$, then $Z$ is not weakly closed in $O_3(H)$ and $G
\cong \mathcal{M}(22)$ or $\mathrm{Aut}(\mathcal{M}(22))$.
\end{theorem}
A minor observation that is useful to us in our forthcoming work on $\mathcal{M}(23)$ and the Baby Monster $\F_2$ is that the interim statements that we prove in this paper become observations about the structure of $\mathcal{M}(22)$ and ${}^2\E_6(2)$ once the main theorems have been proved.
The paper is organised as follows. In Section 2, we gather together facts about the 20-dimensional $\mathrm{G}F(2)\U_6(2)$-module, centralizers of involutions in this group and in the split extension $2^{20}:\U_6(2)$, as well as a transfer theorem for groups of shape $2^{10}.\mathrm{Aut}(\mathcal{M}at(22))$. We close Section 2 with a collection of theorems and lemmas which will be applied in the proof of our main theorems.
Section 3 contains a proof of the following theorem, which we use to determine the structure of the centralizer of an involution in groups satisfying the hypothesis of Theorem~\ref{Main}.
\begin{theorem}\label{Not3embedded} Suppose that $X$ is a group, $O_{2'}(X)= 1$, $H = N_X(A)= AK$ with $H/A \cong K \cong \U_6(2)$ or $\U_6(2):2$, $|A|=2^{20}$ and $A$
a minimal normal subgroup of $H$. Then $H$ is not a strongly $3$-embedded subgroup of $X$.
\end{theorem}
In Section 4, we set $H= C_G(Z)$ and $Q= O_3(H)$ and start by investigating the possible structure of $H$. Almost immediately from the hypothesis we know that $H/O_3(H)$ embeds into $\mathrm{Sp}_2(3)\wr \mathrm{Sym}(3)$. Lemma~\ref{ZnotweakQ} shows that $Z$ is not weakly closed in $Q$ and we use this information to build a further $3$-local subgroup $M$. It turns out that $M$ is the normalizer of the Thompson subgroup of a Sylow $3$-subgroup of $G$ contained in $H$ and, further, Lemma~\ref{N_G(J)} shows that $O_3(M)$ is elementary abelian of order either $3^5$ or $3^6$ and that $F^*(M/O_3(M)) \cong \Omega_5(3)$.
Section 5 is devoted to the proof of Theorem~\ref{Main1}. From the information gathered in Section 4, we quickly show that the centralizer of an involution has shape $2\udot \U_6(2)$ or $2\udot\U_6(2).2$. From this we can build a further $2$-local subgroup of shape $2^{10}:\mathcal{M}at(22)$ or $2^{10}:\mathrm{Aut}(\mathcal{M}at(22))$, and we use Lemma~\ref{autm22} to show that $G$ has a subgroup of index $2$ in the latter case. Finally we apply
\cite[Theorem 31.1]{Asch} to prove Theorem~\ref{Main1}.
From Section 7 onwards we may assume that $H$ is a $3$-centralizer in a group of type ${}^2\E_6(2)$. In particular, we have that $O_{2}(H/Q) \cong \mathrm{Q}_8 \times \mathrm{Q}_8 \times \mathrm{Q}_8$ and we let $r_1$ be an involution in $H$ such that $r_1Q$ is contained in the first direct factor.
By the end of Section 7 we know that $r_1$ is a $2$-central involution whose centralizer contains an extraspecial subgroup $E\cong 2^{1+20}_+$ and that $F^*(N_G(E)/E)\cong \U_6(2)$. Our next objective is to control the embedding of $N_G(E)$ in $C_G(r_1)$ so that we can show that $C_G(r_1)= N_G(E)$. To do this we first transfer elements of order $2$ and order $3$ from $G$. The transfer of an element of order 2 is carried out in Section 8 and then the element of order 3 easily follows in Section 9. At this stage we know that $ N_G(E) \approx 2^{1+20}_+.\U_6(2)$; however, we still do not know enough about the centralizers of elements of order $3$ in $C_G(r_1)$ to be able to show that $N_G(E)$ is strongly 3-embedded in $C_G(r_1)$. Thus in Section 10, we determine the centralizer of a further element of order $3$ with the help of Astill's Theorem \cite{Astill}. With this we can prove that $N_G(E)$ is indeed strongly $3$-embedded in $ C_G(r_1)$ and conclude from Theorem~\ref{Not3embedded} that $C_G(r_1) = N_G(E)$. At this stage, we could apply Aschbacher's Theorem~\cite{Aschbacher2E6} to identify $G$; however, partly because some of the background material about the simple connectivity of certain graphs related to geometries of type $\F_4$ has not yet been published, and also because we would prefer a uniform building theoretic approach to the classification of groups such as ${}^2\E_6(2)$, in the penultimate section we identify ${}^2\E_6(2)$ by showing that the coset geometry constructed from certain $2$-local subgroups containing the normalizer of a Sylow 2-subgroup of $G$ is in fact a chamber system of type $\F_4$. Tits' Local Approach Theorem yields that the group generated by these $2$-local subgroups is $\F_4(2)$. Finally we apply Holt's Theorem \cite{Ho} to see that $G \cong {}^2\E_6(2)$. Combining this with the transfer arguments presented earlier finally proves Theorem~\ref{Main}, the details being presented in our brief final section.
Throughout this article we follow the now standard Atlas \cite{Atlas} notation for group extensions. Thus $X\udot
Y$ denotes a non-split extension of $X$ by $Y$, $X{:}Y$ is a split extension of $X$ by $Y$ and we reserve the
notation $X.Y$ to denote an extension of undesignated type (so it is either unknown, or we do not care). Our
group theoretic notation is mostly standard and follows that in \cite{GLS2} for example. For odd primes $p$,
the extraspecial groups of exponent $p$ and order $p^{2n+1}$ are denoted by $p^{1+2n}_+$. The extraspecial
$2$-groups of order $2^{2n+1}$ are denoted by $2^{1+2n}_+$ if the maximal elementary abelian subgroups have order
$2^{1+n}$ and otherwise we write $2^{1+2n}_-$. The extraspecial group of order $8$ is denoted by $\mathrm{Q}_8$. We
expect our notation for specific groups is self-explanatory. For a subset $X$ of a group $G$, $X^G$ denotes the
set of $G$-conjugates of $X$. If $x, y \in H \le G$, we write $x\sim _Hy$ to indicate that $x$ and $y$ are
conjugate in $H$. Often we shall give suggestive descriptions of groups which indicate the isomorphism type of
certain composition factors. We refer to such descriptions as the \emph{shape} of a group. Groups of the same
shape have normal series with isomorphic sections. We use the symbol $\approx$ to indicate the shape of a group.
\noindent {\bf Acknowledgement.} The initial work on this paper was prepared during a visit of the first and third author
to the Mathematisches Forschungsinstitut Oberwolfach as part of the
Research in Pairs Programme, 30th November--12 December, 2009. The authors are pleased to thank the MFO and its staff for the pleasant and
stimulating research environment that they provided. The first author is also grateful to the DFG for support and the mathematics department in Halle for their hospitality.
\section{Preliminary facts}
Suppose that $X = \U_6(2){:}2$, $Y = \U_6(2)$, $\ov X=\SU_6(2){:}2$, $\ov Y= \SU_6(2)$ and $W$ is the natural
$\mathrm{G}F(4)\ov Y$-module. Let $\{w_1, \dots, w_6\}$ be a unitary basis for $W$. Note that $\ov X$ acts on $W$ with
the outer elements acting as semilinear transformations. Let $\ov M$ be the monomial subgroup of $\ov Y$ of shape
$3^5{:}\mathrm{Sym}(6)$ and $M$ be its image in $Y$. Set $J= O_3(M)$. Then $J$ is elementary abelian of order $3^4$ and
$\ov J$ is elementary abelian of order $3^5$. Note that $M$ contains a Sylow $3$-subgroup of $Y$. We let $e_1$,
$e_2$ and $e_3$ be the images of the diagonal matrices $\diag (\omega, \omega^{-1},1,1,1,1)$, $\diag (\omega,
\omega,\omega^{-1}, \omega^{-1},1,1)$ and $\diag(\omega, \omega,\omega,\omega^{-1},\omega^{-1},\omega^{-1})$ in $
Y$ respectively. Then $e_1, e_2$ and $e_3$ are representatives of the three conjugacy classes of elements of
order $3$ in $Y$.
\begin{lemma}\label{U62J} Every element of order $3$ in $X$ is $X$-conjugate to an element of $J$ and the centralizers of
elements of order $3$ are as follows.
\begin{enumerate}
\item $C_Y(e_1) \cong 3 \times \SU_4(2)$;\item $C_Y(e_2) \cong 3 \times \mathrm{Sym}(3)\wr 3$ and has order $2^3.3^5$;
and \item $C_Y(e_3) \cong (\SU_3(2)\circ \SU_3(2)).3 \approx 3^{1+4}_+.(\mathrm{Q}_8\times \mathrm{Q}_8).3$.\end{enumerate}
\end{lemma}
\begin{proof} Given the descriptions of $e_1$, $e_2$ and $e_3$ above this is an easy calculation. (See also \cite[(23.9)]{Asch} and correct the typographical error.)\end{proof}
We also need to know the centralizers of involutions in $X$.
\begin{lemma}\label{centralizerinvsU6}
$X$ has five conjugacy classes of involutions and their centralizers have shapes as follows.
\begin{eqnarray*}
C_X(t_1) &\approx& 2^{1+8}_+:\SU_4(2). 2; \\
C_X(t_2) &\approx& 2^{4+8}.(\mathrm{Sym}(3) \times \mathrm{Sym}(3)).2; \\
C_X(t_3) &\approx& 2^{9}.3^2.\mathrm{Q}_8.2 \le 2^9:\L_3(4).2;\\
C_X(t_4) &\approx& 2\times \mathrm{Sp}_6(2); \text{ and}\\
C_X(t_5) &\approx& 2 \times (2^5:\mathrm{Sp}_4(2)).\\
\end{eqnarray*}
The involutions $t_1,t_2$ and $t_3$ are contained in $Y$ and their centralizers in $Y$ are obtained by dropping
the final $2$ in their description in $X$. Furthermore we may suppose that $t_5= t_4t_1$ and $C_X(t_5) \le C_X(t_4)$.
\end{lemma}
\begin{proof} This can be found in \cite{AschSe} for the involutions $t_1$, $t_2$ and $t_3$ (see also \cite[(23.2)]{Asch} and the following discussion). For the involutions
$t_4$ and $t_5$ we refer to \cite[Proposition 4.9.2]{GLS3}.
\end{proof}
We note that the involutions $t_1$, $t_2$, and $t_3$ are the images in $Y$ of the involutions $\diag(t,I,I)$,
$\diag(t,t,I)$ and $\diag(t,t,t)$ respectively, where $t = \left(\begin{array}{cc} 0&1\\1&0\end{array}\right)$
and $I$ is the $2\times 2$ identity matrix. The conjugates of $t_1$ are called \emph{unitary transvections}.
\begin{lemma}\label{nofours} There are no fours groups in $X$ all of whose non-trivial elements are unitary
transvections. In particular, if $t$ is a unitary transvection, then $\langle t\rangle$ is weakly closed in
$O_2(C_X(t))$.
\end{lemma}
\begin{proof} Suppose that $F$ is a fours group in $X$ and that all the non-trivial elements of $F$ are unitary
transvections. Let $x_1,x_2$ and $x_3$ be the non-trivial elements of $F$. Since $C_X(x_1)$ is a maximal subgroup
of $X$ and $Z(C_X(x_1))=\langle x_1\rangle$, $X= \langle C_X(x_1),C_X(x_2)\rangle$. Therefore, $C_W(x_1) \not =
C_W(x_2)$. Let $v\in W \setminus C_W(x_1)$ and $w \in C_W(x_2)\setminus C_W(x_1)$. Then $[v,x_3] =[v,x_2]$ and
$[w,x_3] = [w,x_2]$. Hence, as $\dim [W,x_3]=1$, $[W,x_1]=[W,x_2]$ is normalized by $X$, which is a
contradiction. If $O_2(C_X(t))$ contains a unitary transvection $s$ with $t\neq s$, then conjugation in
$O_2(C_X(t))$ reveals that all elements of $\langle s,t\rangle$ are unitary transvections and this is impossible
as we have just seen. Thus $\langle t \rangle$ is weakly closed in $O_2(C_X(t))$.
\end{proof}
Let $P_1$ and $P_2$ be the connected parabolic subgroups of $Y$ containing a fixed Borel subgroup where notation
is chosen so that $$P_1\approx 2^{1+8}_+{:}\SU_4(2)$$ and $$P_2\approx 2^9{:}\L_3(4).$$
\begin{lemma}\label{unique} Suppose that $Y \cong \U_6(2)$ and that $V$ is an irreducible $20$-dimensional
$\mathrm{G}F(2)Y$-module. Then $V\otimes \mathrm{G}F(4)$ is the exterior cube of $W$. In particular, $\dim C_V(O_2(P_2))=1$ and
$\dim C_V(e_3)=2$.
\end{lemma}
\begin{proof} First consider the restriction of $V$ to $O_3(C_Y(e_3))$. This group has no faithful characteristic $2$-representation
of dimension less than $9$ and as $e_3$ is inverted by a conjugate $t$ of $t_3$, we see that any characteristic~$2$ representation of $O_3(C_Y(e_3))\langle t\rangle$ has dimension at least $18$. It follows that $\dim
C_V(e_3) = 2$ and that $V$ is absolutely irreducible. By Smith's Theorem \cite{Smith}, we now have, for $i=1,2$,
$C_V(O_2(P_i))$ are irreducible $P_i$-modules. Suppose that $\dim C_V(O_2(P_2)) >1$. Then, as $P_2/O_2(P_2)\cong
\L_3(4)$ contains an elementary abelian subgroup of order 9 all of whose subgroups of order 3 are conjugate, we
have $\dim C_V(O_2(P_2)) \ge 8$. Since $t_1 \in O_2(P_2)$ and since there exists $x\in P_1$ such that $P_1=
\langle O_2(P_2),O_2(P_2)^x \rangle$, we either have $\dim C_V(t_1) \ge 15$ or $\dim C_V(P_1) \ge 2$. The latter
possibility violates Smith's Theorem. Hence $\dim C_V(t_1) \ge 15$. Thus $V/C_V(t_1)$ has dimension at most $5$.
Since $P_1/O_2(P_1)\cong \SU_4(2)$ has Sylow $3$-subgroups of order $3^4$, we have $[V,P_1] \le C_V(t_1)$ and so
$t_1$ is a transvection by Smith's Theorem. Since $t_1$ inverts $e_1$, we now have $\dim C_V(e_1) \ge 18$ and
taking a suitable product of three conjugates of $e_1$ we obtain a conjugate of $e_3$ centralizing a $14$-space
rather than a $2$-space. At which stage we conclude $\dim C_V(O_2(P_1))= 1$. Finally, using
\cite[5.5]{Aschbacher2E6} we obtain the statement of the lemma. \end{proof}
We note that the $20$-dimensional $\mathrm{G}F(2)Y$-module in Lemma~\ref{unique} extends to an action of $X$ (as can be
seen in the group ${}^2\E_6(2).2$). Our next goal is to determine the action of elements of $X$ on $V$ described
in Lemma~\ref{unique}. We recall that $P_1/O_2(P_1) \cong\SU_4(2)$. We call the $4$-dimensional $\mathrm{G}F(4)\SU_4(2)$-module
viewed as an $8$-dimensional $\mathrm{G}F(2)$-module the \emph{unitary module} for $\SU_4(2)$ and the $6$-dimensional
$\mathrm{G}F(2)\SU_4(2)$-module which can be seen as the exterior square of the unitary module is called the
\emph{orthogonal module} for $\SU_4(2)$. We will also meet the \emph{symplectic module} for $C_X(t_4)/\langle
t_4\rangle \cong \mathrm{Sp}_6(2)$ as well as the \emph{spin module} which has dimension $8$ and this is the unique
$8$-dimensional irreducible $\mathrm{Sp}_6(2)$-module (see \cite[5.4]{Aschbacher2E6}). Finally, from
Lemma~\ref{centralizerinvsU6} we have that $C_Y(t_2)/O_2(C_Y(t_2)) \cong \Omega_4^+(2)$ and so this group
has an orthogonal module.
\begin{proposition}\label{Vaction} Suppose that $X = \U_6(2):2$ and $V$ is the irreducible $\mathrm{G}F(2)X$-module of dimension $20$.
\begin{enumerate}
\item The following hold:
\begin{enumerate}
\item $\dim C_V(t_1) = 14$, $[V,t_1]$ is the orthogonal module and $C_V(t_1)/[V,t_1]$ is
the unitary module for $C_X(t_1)/O_2(C_X(t_1)) \cong \SU_4(2)$; \item $\dim C_V(t_2) = 12$, $C_V(t_2)/[V,t_2]$ is the orthogonal module for $C_X(t_2)/O_2(C_X(t_2)) \cong \Omega_4^+(2)$;
\item $\dim C_V(t_4) = 14$,
$[V,t_4]$ is the symplectic module and $C_V(t_4)/[V,t_4]$ is the spin module for $C_X(t_4)/O_2(C_X(t_4)) \cong \mathrm{Sp}_6(2)$;
\item $\dim C_V(t_3) = \dim C_V(t_5)= 10$;
\end{enumerate}
\item The stabilizers of non-zero vectors in $V$ are as follows:
\begin{eqnarray*}
\Stab_X(v_1)&\approx&2^9:\L_3(4).2;\\
\Stab_X(v_2)&\approx&2^{1+8}.\mathrm{Sp}_4(2).2;\\
\Stab_X(v_3)&\approx&2^8:3^2.\mathrm{Q}_8.2;\\
\Stab_X(v_4)&\approx&\L_3(4).2.2; \text{ and }\\
\Stab_X(v_5)&\approx&3^{1+4}_+.(\mathrm{Q}_8\times \mathrm{Q}_8).2.2.
\end{eqnarray*}
Here $v_1,v_2,v_3$ are the singular vectors.
\end{enumerate}
\end{proposition}
\begin{proof} For the involutions $t_i$, $i=1,2,3$, $\dim [V,t_i]$ is given in \cite[7.4
(1)]{Aschbacher2E6}. In particular (i) (c) holds and the dimension statements in (i)(a) and (i)(b) hold.
The remaining parts of (i)(a) can be deduced from \cite[(5.6)]{Aschbacher2E6}.
The involution $t_2$ centralizes the image in $X$ of $\langle a,b\rangle$ where $a=\diag
(\omega,\omega,\omega^{-1},\omega^{-1},\omega,\omega^{-1})$ and $b=\diag(
\omega^{-1},\omega^{-1},\omega,\omega,\omega,\omega^{-1})$. Thus the Sylow $3$-subgroup $T$ of $C_X(t_2)$
contains two conjugates of $\langle e_3\rangle$, a conjugate of $\langle e_1\rangle$ and a conjugate of $\langle
e_2\rangle$. Now $C_V(a) =\langle w_1\wedge w_2\wedge w_5, w_3\wedge w_4\wedge w_6\rangle$ and $C_V(b) =\langle
w_1\wedge w_2\wedge w_6, w_3\wedge w_4\wedge w_5\rangle$ and so $C_V(T)=0$. It follows that $C_V(t_2)/[V,t_2]$
admits $C_X(t_2)$ as described in (i)(b).
There is a conjugate of $t_4$ which centralizes a subgroup isomorphic to $\mathrm{Sp}_4(2)$ in $C_X(t_1)/O_2(C_X(t_1))$.
By part (i)(a) $C_X(t_1)$ acts as $\OO^-_6(2)$ on $[V,t_1]$ and $V/C_V(t_1)$ and naturally as $\SU_4(2)$ on
$C_V(t_1)/[V,t_1]$. Since $t_4$ is not a unitary transvection of $C_X(t_1)/O_2(C_X(t_1))$, we see that $\dim
[V,t_4] \geq 6$ and $[C_X(t_4), [V,t_4]]\not= 1$. Furthermore $\mathrm{Sp}_4(2)$ acts fixed point freely on
$C_W(t_4)/[W,t_4]$ for every $\U_4(2)$-section $W$ of $V$. Therefore
$\mathrm{Sp}_4(2)$ acts fixed point freely on $C_V(t_4)/[V,t_4]$. In particular $|C_V(t_4)/[V,t_4]| = 2^{4x}$ where $x$ is some positive integer.
This shows that this module must be the 8-dimensional $\mathrm{Sp}_6(2)$-module and then we deduce $\dim C_V(t_4) = 14$.
We have $t_5 = t_4t_1$ and $C_X(t_5) \leq C_X(t_4)$. As seen before, there is $U = \mathrm{Sym}(3) \times
\U_4(2)$ in $X$ such that, as a $U$-module, $V$ is the direct sum of the unitary module $V_2$ and a tensor product $V_1$
of the 2-dimensional $\mathrm{Sym}(3)$-module with the $\OO^-_6(2)$-module. We may assume that $t_1 \in \mathrm{Sym}(3)$ and
$t_5$ and $t_4$ induce an outer automorphism on $\U_4(2)$. As $C_X(t_5)$ does not contain $\mathrm{Sym}(6) \times
\mathrm{Sym}(3)$, we see that $t_5$ acts faithfully on the normal $\mathrm{Sym}(3)$, while $t_4$ centralizes this group. We have
that $C_{V_2}(t_5)$ is of order 16. As $t_5$ inverts an element of order three in $\mathrm{Sym}(3)$, which acts fixed
point freely on $V_1$, we get that $C_{V_1}(t_5)$ is of order 64.
Hence we have that $\dim C_V(t_5)= 10$.
For part (ii) we refer to Aschbacher \cite[7.5 (4)]{Aschbacher2E6} for centralizers of singular vectors in $V$.
This gives the centralizers of $v_1$, $v_2$ and $v_3$.
Let $Z= \langle e_3\rangle$, $Q= O_3(C_X(Z))\cong 3^{1+4}_+$ and set $U= C_V(Z)$. Then $\dim U= 2$ and $\dim [V,Z]=18$.
Since $Q \le C_X(Z)'$, we have that $Q$ centralizes $U$. As none of
the singular vectors have such a subgroup centralizing them, we infer that the non-trivial elements of $U$ are
all non-singular. Now $U$ is normalized by $N_X(Z)$ and so we have that $C_X(U)$ has index at most $6$ in
$N_X(Z)$. By Lemma~\ref{U62J}, there is a conjugate $Y$ of $Z$ in $C_X(Z)$ which is not contained in $Q$. If
$[Y,U] = 1$, then $U = C_V(Y) = C_V(Z)$ and so $Y$ is conjugate to $Z$ in $N_X(U)$, which is not the case. Hence
$Y$ acts transitively on $U^\sharp$. This shows that $C_X(v_5)$ is as stated.
Let $L\cong \L_3(4)$ be the Levi complement of the parabolic subgroup of $X$ which is the image of the
stabilizer of an isotropic $3$-space $I$ of the unitary space $W$. Then $L$ also stabilizes an isotropic subspace
$J$ with $I \cap J=0$ and in fact $I$ and $J$ are the only such subspaces normalized by $L$. Now $L$ centralizes
$\langle i_1\wedge i_2\wedge i_3, j_1\wedge j_2\wedge j_3\rangle$ where $\{i_1,i_2,i_3\}$ and $\{j_1,j_2,j_3\}$
are bases for $I$ and $J$ respectively.
Thus,
by Lemma~\ref{unique}, $\dim C_V(L)= 2$ and this space is normalized by $\L_3(4):2$. It follows that this group
centralizes at least one non-zero vector and this vector must be non-singular as none of the singular vectors
have such a stabilizer. By \cite{Atlas} we have that $\L_3(4):2$ is a maximal subgroup in $F^\ast(X)$. Thus we
have at least two orbits of non-singular vectors and
summing the lengths of these orbits we see that we have accounted for all the orbits of $X$ on $V$.
\end{proof}
\begin{lemma}\label{centralizerinvs}
Assume that $X \cong \U_6(2):2$ and that $V$ is a $20$-dimensional $\mathrm{G}F(2)X$-module. Let $Y$ be the semidirect product of $V$ and $X$. Then for $j$ an involution in $Y\setminus V$ we have one of the following:
\begin{enumerate}
\item $Vj$ is a $2$-central involution in $Y'/V$, $|C_V(j)|=2^{14}$ and
\begin{enumerate}
\item $C_{Y'}(j) \approx 2^{14}.2^{1+8}_+.\U_4(2)$;
\item $C_{Y'}(j) \approx 2^{14}.2^{1+8}_+.2^{1+4}.\mathrm{Sym}(3)$;
\item $C_{Y'}(j) \approx 2^{14}.2^{1+8}_+.3^{1+2}_+.\mathrm{Q}_8$;
\end{enumerate}
\item $Vj$ is not $2$-central in $Y'/V$ and $C_{Y'/V}(Vj) = 2^{4+8}.(\mathrm{Sym}(3)\times \mathrm{Sym}(3)) $, $|C_V(j)|= 2^{12}$ and
\begin{enumerate}
\item $C_{Y'}(j) \approx 2^{12}.2^{4+8}.(\mathrm{Sym}(3)\times \mathrm{Sym}(3))$;
\item $C_{Y'}(j) \approx 2^{12}.2^{4+8}.\mathrm{Sym}(3)$;
\item $C_{Y'}(j) \approx 2^{12}.2^{4+8}.2^2$;
\end{enumerate}
\item $Vj$ is not $2$-central in $Y'/V$, $|C_V(j)|= 2^{10}$ and $C_{Y'}(j) \approx 2^{10}.2^{9}.3^2:\mathrm{Q}_8$;
\item $j \in Y\setminus Y'$, $|C_V(j)|= 2^{14}$ and
\begin{enumerate}
\item $C_Y(j) \approx 2^{14}.(2 \times\mathrm{Sp}_6(2))$;
\item $C_Y(j) \approx 2^{14}.(2 \times 2^6.\L_3(2))$;
\item $C_Y(j) \approx 2^{14}.(2 \times \mathrm G_2(2))$; and
\end{enumerate}
\item $j \in Y\setminus Y'$, $C_Y(j)\approx 2^{10}.(2 \times 2^5 .\mathrm{Sym}(6))$.
\end{enumerate}
\end{lemma}
\begin{proof} If $|C_V(j)| = 2^{10}$, then all involutions in $Vj$ are conjugate. Hence (iii) and (v) hold by Proposition \ref{Vaction}.
Let $j$ be $2$-central. Then $C_V(j)/[V,j]$ is the unitary module for $\U_4(2)$ by Proposition \ref{Vaction}. In particular we have three orbits of lengths 1, 135 and 120, which gives (i)(a)--(c).
If $j$ is as in (iv), then by Proposition \ref{Vaction} $C_X(j)$ induces on $C_V(j)/[V,j]$ the spin module and we have again orbits of lengths 1, 135 and 120, which gives (iv)(a)--(c).
Let finally $j$ be as in (ii). Then $|[V,j]| = 2^8$ and by Proposition \ref{Vaction} $C_V(j)/[V,j]$ is the $\OO^+_4(2)$-module for $C_X(j)$. Hence we have three orbits of lengths 1, 6 and 9, which gives (ii)(a)--(c).
\end{proof}
\begin{lemma} \label{Vfacts} Suppose that $X \cong \U_6(2){:}2$ and that $V$ is an irreducible
$20$-dimensional $\mathrm{G}F(2)X$-module. Then $V$ is not a failure of factorization module.
\end{lemma}
\begin{proof} Suppose that $A \le P_1$ is an elementary abelian $2$-subgroup of $X$, $|V:C_V(A)|\le |A|$ and
$[V,A,A]=0$. Then Lemma~\ref{nofours} and Proposition~\ref{Vaction}(i) imply that $$2^{8} \le |V:C_V(A)| \le |A|
\le 2^9$$ as the $2$-rank of $X$ is $9$. In particular, Proposition~\ref{Vaction} implies that all the
non-trivial elements of $A$ are conjugate to either $t_1$ or $t_2$. Set $Q_1=O_2(P_1)$. As the $2$-rank of $P_1/Q_1$ is $4$, $|A\cap
Q_1| \ge 2^4$. Since $t_1$ is weakly closed in $Q_1$ by Lemma~\ref{nofours}, there exists $b \in A\cap Q_1$
conjugate to $t_2$. Hence $C_V(A)= C_V(b)\ge C_V(Q_1)$. Now $C_X(C_V(Q_1)) = Q_1$ by Proposition~\ref{Vaction}
and so $A \le Q_1$ which is absurd as $Q_1$ is extraspecial of order $2^9$.
\end{proof}
\begin{lemma}\label{contains a 2-central} Suppose that $X= \U_6(2){:}2$ and that $j\sim_Xt_2$. Then every normal subgroup of order
$8$ in a Sylow $2$-subgroup of $C_X(j)$ contains a unitary transvection.
\end{lemma}
\begin{proof} By Lemma \ref{centralizerinvs} we may assume that $P_1$ contains a Sylow 2-subgroup $T$
of $C_X(j)$ and $j \in Q_1$. Suppose that $A$ is a normal subgroup of $T$ of order $8$ with $j\in A$. If $A
\cap C_{Q_1}(j) = \langle j \rangle$, then $[A, C_{Q_1}(j)] \leq \langle j \rangle$ and every non-trivial element
of $AQ_1/Q_1$ acts as a unitary transvection on $Q_1/\langle t_1 \rangle$. From \cite[Proposition~2.12
(viii)]{PS1}, we have $|A Q_1/Q_1|\le 2$ which means that $|A|\le 4$, a contradiction. Thus $A \cap C_{Q_1}(j)
\not \leq \langle j \rangle$. Since $C_{Q_1}(j)$ normalizes $A$ and $|Q_1:C_{Q_1}(j)|=2$, we now get $t_1 \in A$
and we are done.
\end{proof}
In the next lemma we present some results about the 10-dimensional Todd module for $\mathcal{M}_{22}$. A description of
this module may be found in \cite[Section 22]{Asch}. This module is seen to admit the action of $\mathrm{Aut}(\mathcal{M}_{22})$
and we continue to call this module the Todd module. We note that it is a quotient of the natural 22-dimensional
permutation module for $\mathrm{Aut}(\mathcal{M}_{22})$ (see \cite[(22.3)]{Asch}) and that the module is uniquely determined by this
property. The Todd module for $H=\L_3(4)$ is obtained as an irreducible 9-dimensional quotient of the
$\mathrm{G}F(2)$-permutation module arising from the action of $H$ on the 21 points of the projective plane. Once
tensored with $\mathrm{G}F(4)$, it can also be identified with the tensor product $N \otimes N^\sigma$ where $N$ is the
natural $\mathrm{SL}_3(4)$-module and $\sigma$ is the Frobenius automorphism. In particular, if $H_1$ and $H_2$ are the
two parabolic subgroups of $H$ containing a fixed Borel subgroup of $H$, then, without loss of generality, $H_1$
fixes a $1$-space and $O_2(H_2)$ centralizes a $4$-space on which $H_2/O_2(H_2)$ acts as an orthogonal module.
\begin{lemma}\label{M22} Let $X = \mathrm{Aut}(\mathcal{M}_{22})$, $Y= X'$ and $V$ be the irreducible 10-dimensional Todd module for $X$ over $\mathrm{G}F(2)$.
\begin{enumerate}
\item If $x \in Y$ is an involution, then $|C_V(x)| = 2^6$.
\item Assume that $M \le X$ with $M \approx 2^4.\mathrm{Sym}(5)$ and $L= O_2(M)$, then $L$ is elementary abelian of order 16 and $|C_V(L)| = 4$.
\item Assume that $M \le X$ with $M \approx 2^4.\mathrm{Alt}(6)$ and $L= O_2(M)$, then $L$ is elementary abelian of order 16, and $|C_V(L)| = 2^5$.
\item If $x \in X\setminus Y$ centralizes $M \approx 2^3.\L_3(2)$, then $|C_V(x)| = 2^7$ and involves two nontrivial $\L_3(2)$-modules.
\end{enumerate}
\end{lemma}
\begin{proof} From \cite[Table 5.3 c]{GLS3}, we have that there is just one class of involutions in $Y=\mathcal{M}_{22}$.
Let $v$ be some vector in $V$ such that $|v^{X}| = 22$. Then $v$ is centralized by a subgroup $H \cong \L_3(4)$
and $V/\langle v \rangle$ is the Todd module \cite[(22.2) and (22.3.1)]{Asch}. Hence, by \cite[(22.2.1)]{Asch},
there is a parabolic subgroup $H_1 \le H$ fixing a $1$-space in $V/\langle v \rangle$ such that, setting $E=
O_2(H_1)$, we have $H_1/E\cong \mathrm{SL}_2(4)$ and $E$ is elementary abelian of order $2^4$ admitting $H_1/E$ as
$\mathrm{SL}_2(4)$. It follows that $|C_V(E)| = 4$. Choose an involution $x \in H_1\setminus E$, then $x$ inverts some
element $\omega$ of order 5 with $|[V,\omega]| = 2^8$. Further $[C_V(\omega),x] = 1$. This shows $|C_V(x)| = 2^6$
and proves (i).
Let $H_2\le H$ be the companion parabolic subgroup to $H_1$, then, setting $E_2= O_2(H_2)$, we have $C_{V/\langle
v \rangle}(E_2)$ has dimension $4$, and it follows that $C_V(E_2)$ has dimension $5$.
In $Y$ there is a subgroup $M \approx 2^4.\mathrm{Alt}(6)$ with $L=O_2(M)$ elementary abelian of order 16. As the
orbits of $Y$ on $V$ have lengths $22$, $231$ and $770$, we see that $M$ has no fixed point on $V$. Hence $E$ is
not normalized by $M$, so $N_X(E)\approx 2^4:\mathrm{Sym}(5)$ and we have (ii). Furthermore $E_2$ is
normalized by $M$ and so $E_2$ has to centralize the preimage of $C_{V/\langle v \rangle}(E_2)$ and we have
(iii).
Now let $x \in X\setminus Y$ be an involution, which centralizes $U \approx 2^3.\L_3(2)$ in $Y$. As
only elements from the orbit $v^{Y}$ are centralized by an element $\nu$ of
order 7, we see that $|C_V(\nu)| = 2$ and so $V$ involves three nontrivial $\L_3(2)$-modules. As $U$ is not a subgroup of
$\L_3(4)$, we see that $C_V(U) = 1$. In particular $\L_3(2)$ acts nontrivially on $[V,x]$. This now shows that $|[V,x]| = 8$ or $16$. In the second case we have that $|C_V(x)/[V,x]| = 4$ and so is centralized by an element of order 7, a contradiction. This
shows (iv).
\end{proof}
Our next lemma of this section requires the following transfer theorem.
\begin{theorem}\label{transfer} Let $M$ be a subgroup of a finite group $G$ with $G = O^2(G)$,
$|G:M|$ odd and $M > O^2(M)M^\prime$. Suppose that $E$ is an elementary abelian subgroup of a Sylow $2$-subgroup
$T$ of $M$ such that $E$ is weakly closed in $T$ and $N_G(E) \leq M$. Let $T_1$ be a maximal subgroup of $T$ with
$|M : O^2(M)T_1| = 2$. Then there exists $g \in G\setminus M$ such that $|E^g : E^g \cap M| \leq 2$ and $E^g \cap
M \not\leq O^2(M)T_1$.
\end{theorem}
\begin{proof} This is \cite[Theorem 2.11 (i)]{SoWo}.
\end{proof}
\begin{lemma}\label{autm22} Suppose that $G$ is a group, $M$ is a $2$-local subgroup of $G$ with
$F^*(M)=O_2(M)$. Assume that
$M/O_2(M) \cong \mathrm{Aut}(\mathcal{M}_{22})$, $O_2(M)$ is elementary abelian of order $2^{10}$ and $O_2(M)$ is the Todd module for
$M/O_2(M)$.
Then
\begin{enumerate}
\item For involutions $x$ in $M \setminus O^2(M)$, the $2$-rank of $C_{M}(x)$ is at most $ 8$; and
\item $G$ has a subgroup of index $2$.
\end{enumerate}
\end{lemma}
\begin{proof} Let $E= O_2(M)$, $X= M/E$ and $Y= X'$. From \cite[Table 5.3 c]{GLS3} we see that $X$ has exactly two conjugacy classes of involutions not in $Y$, one with centralizer of shape $2 \times 2^3:\L_3(2)$ and the other with centralizer $2\times 2^4:(5:4)$. Also by \cite[Table 5.3 c]{GLS3}, the normalizer of a Sylow $11$-subgroup of $Y$ has order $55$. Hence one class of involutions in $X\setminus Y$ contains elements which normalize, and consequently invert, a Sylow $11$-subgroup. Furthermore, such an involution commutes with an element of order $5$.
Aiming for a contradiction, let $x \in N_G(E)$ with $Ex\not\in Y$ and $F \leq C_{M}(x)$ with $F$
elementary abelian of order at least $2^9$. Since the $2$-rank of $X$ is $5$, we have $|C_E(F)|\ge 2^4$.
If $Ex$ inverts an element of order 11 in $X$, then
$|C_E(x)| = 2^5$ and $C_X(Ex) \cong 2\times (2^4:(5:4))$. Let $L= O_2(C_Y(Ex))$.
By Lemma \ref{M22} (ii), we have that $|C_E(L)| \leq 2^2$. Since the involutions which invert an element of order $5$ in $C_X(Ex)$ can only centralize a subgroup of order $2^3$ in $C_E(x)$, we infer that $FE/E \leq L$.
If $F$ centralizes $C_E(x)$, then the normal closure of $FE/E$ in $C_{M/E}(Ex)$ is also abelian and so we may assume that $FE/E = L$ in this case. On the other hand, if $F$ does not centralize $C_E(x)$, then $|FE/E|\ge 2^5$ and we also have $FE/E = L$.
Hence in any case $FE/E = L$. However this implies that $|F| \leq 2^7$ as $|C_E(L)| \leq 4$, which is a contradiction. Hence $F$ contains no such involutions.
So we have $C_X( Ex) \cong 2 \times 2^3:\L_3(2)$. Let $L=O_2(C_Y( Ex))$
and $L_1 \le C_X(Ex)$ be such that $L_1 \cong \L_3(2)$. Let $e \in L_1$ be an involution. Then
$Le$ contains representatives of two $LL_1$-conjugacy classes of involutions. As $x$ is not 2-central in $X$, we have that $x \sim_X x\ell$ for
some $1 \not= \ell \in L$. It follows that all the involutions in $Lx$ are conjugate to $x$ in $X$. Hence
we see that the coset $Lex$ contains an involution which is not conjugate to $x$ in $X$.
Assume that $(F \cap T)E/E \not\leq L$. Let $e \in FE/E \cap L_1L \setminus L$.
If $|(FE/E) \cap L| > 2$ then $(FE/E \cap L)ex$ is the set of involutions in $Lex$. But this coset contains an
involution which inverts an element of order 11 and we have already seen that such elements cannot be in $F$.
So $|(FE/E) \cap L| \leq 2$ and consequently $|FE/E| \leq 16$. By Lemma \ref{M22} (iv), $|C_E(x)| = 2^7$ and, for $e \in FE/E \setminus L\langle Ex\rangle$, as $C_E(x)$ has two non-trivial $3$-dimensional composition factors for $L_1$,
$|C_{E}(x) : C_{C_E(x)}(e)| \geq 4$.
Therefore $|C_E(F)| =
2^5$ and $|FE/E| = 2^4$. In $L_1$ there are two conjugacy classes of fours groups. One which is contained in an elementary
abelian group of order $2^5$ in $M/E$ and one which is contained in a conjugate of $O_2(C_{M/E}(x))$.
If
$FE/E$ is contained in an elementary abelian group $F_1$ of order $2^5$ in $\mathrm{Aut}(\mathcal{M}_{22})$, then, as $|C_E(F)| = 2^5$,
we get that $|C_E(F_1)| \geq 2^3$, which contradicts Lemma \ref{M22} (ii). Therefore $FE/E$ is uniquely determined and
is conjugate to $\langle L, Ex\rangle$ in $M/E$. In particular $|C_E(\langle L, Ex\rangle)| =
2^5$. But then $L_1$ cannot induce two non-trivial irreducible modules in $C_E(x)$, which contradicts Lemma
\ref{M22}(iv).
Suppose that $w \in Lx$ and let $L_w= O_2(C_Y(w))$. We have that $C_{LL_1}(w)/L$ is a parabolic subgroup of $LL_1/L$. Therefore
$LL_w$ has order $2^5$ and consequently $L \cap L_w$ has order $2$. Now we have $(FE/E) \cap Y \le L \cap L_w$, which means that $|FE/E| \le 2^2$ and $|C_E(F)|\ge 2^7$. Using Lemma~\ref{M22}(i), for an involution $f \in O^2(M) \setminus E$, we have
that $|C_E(f)| = 2^6$. Hence $|FE/E|=2$ and $|C_E(F)|=2^8$, contrary to Lemma~\ref{M22} (iv). This proves (i).
We recall that $V$ is not a failure of factorization module for $X$. Thus, for $S \in \syl_2(M)$, $E= J(S)$ and hence $E$ is weakly closed in $S$ with respect to $G$. In particular, as $M= N_G(E)$, $S \in \Syl_2(G)$ and $M$ has odd index in $G$. Therefore (ii) follows from Theorem~\ref{transfer} and part (i).
\end{proof}
\begin{lemma}\label{Fusion}
Suppose that $p$ is a prime, $G$ is a group, $E$ is an extraspecial $p$-subgroup of $G$ with $Z=Z(E)$, $H=N_G(E)= N_G(Z)$, $C_G(E)= Z$ and $S\in \syl_p(H) \subseteq \syl_p(G)$. Assume that if $g \in G$ and $Z^g \le E$ then every element of $Z^gZ$ is conjugate to an element of $Z$ and assume that no element of $S\setminus E$ centralizes a subgroup of index $p$ in $E$. Then, for all $d \in E$ with $d^G \cap Z = \emptyset$, $\Syl_p(C_H(d)) \subseteq \syl_p(C_G(d))$ and $d^G \cap E = d^H$.
\end{lemma}
\begin{proof} Assume that $d \in E$ is not $G$-conjugate to an element of $Z$. Let $T \in \Syl_p(C_G(d))$. Then $Z(T)$ centralizes $C_E(d)$ which has index $p$ in $E$. Thus $Z(T) \le E$ and so $Z(T) = Z(C_E(d)) = \langle d \rangle Z$. In particular, $Z$ is the unique $G$-conjugate of $Z$ contained in $\langle d \rangle Z$. Therefore $N_G(T) \le H$, so $T \le H$ and consequently $T \in \Syl_p(C_H(d))$; this gives $\Syl_p(C_H(d)) \subseteq \Syl_p(C_G(d))$.
Now assume that $e=d^g \in d^G \cap E$ and let $R \in \Syl_p(C_H(e))$. Then, as $T^g \in \Syl_p(C_G(e))$, there exists $h \in C_G(e)$ such that $T^{gh}= R$. But then $(Z\langle d \rangle)^{gh} = Z\langle e \rangle$ and, as $Z$ is the unique conjugate of $Z$ in $Z\langle e \rangle$, we conclude that $Z^{gh}= Z$. Thus $gh \in H$ and $d^{gh}= e^h=e$. Thus $d^G \cap E = d^H$ as claimed.
\end{proof}
\begin{lemma}\label{ptransfer} Suppose that $p$ is a prime, $G$ is a group and $P \in \Syl_p(G)$.
Let $J=J(P)$ be the Thompson subgroup of $P$ and assume that $J$ is elementary abelian. Then
\begin{enumerate}
\item $N_G(J)$ controls $G$-fusion in $J$; and
\item if $J \not \le
N_G(J)'$, then $J \not \le G'$.
\end{enumerate}
\end{lemma}
\begin{proof} Part (i) is well known; see \cite[37.6]{Asch}. Part (ii) is proved in \cite[Lemma 2.2(iii)]{PS1}.\end{proof}
The next lemma is a straightforward consequence of Goldschmidt's Theorem on groups with a strongly closed abelian
subgroup \cite{Goldschmidt}. Recall that for subgroups $A \le H \le G$, we say that $A$ is \emph{weakly closed} in $H$ with respect to $G$ provided that
for $g \in G$, $A^g \le H$ implies that $A^g= A$. We say that $A$ is \emph{strongly closed} in $H$ with respect to $G$ so long as, for all
$g \in G$, $A^g \cap H \le A$.
\begin{lemma}\label{Gold} Suppose that $K$ is a group, $O_{2'}(K)=1$, $E$ is an abelian $2$-subgroup of $K$ and $E$ is strongly closed in $N_K(E)$. Assume that $F^*(N_K(E)/C_K(E))$ is a non-abelian simple group.
Then $K= N_K(E)$.
\end{lemma}
\begin{proof} See \cite[Lemma~2.15]{F4}. \end{proof}
We will also need the following statement of Holt's Theorem \cite{Ho}.
\begin{lemma}\label{Holt}
Suppose that $K$ is a simple group, $P$ is a proper subgroup of $K$ and $r$ is a $2$-central element of $K$.
If $r^K\cap P= r^P$ and $C_K(r)\le P$, then $K\cong \PSL_2(2^a)$ ($a \ge 2$), $\mathrm{PSU}_3(2^a)$ ($a \ge 2$),
${}^2\B_2(2^a)$ ($a\ge 3$ and odd) or $\mathrm{Alt}(n)$ where
in the first three cases $P$ is a Borel subgroup of $K$ and in the last case $P \cong \mathrm{Alt}(n-1)$.
\end{lemma}
\begin{proof} This is \cite[Lemma 2.16]{F4}.
\end{proof}
\begin{definition} \label{U6F4def}We say that $X$ is similar to a $3$-centralizer in a group of type $\U_6(2)$ or $\F_4(2)$
provided the following conditions hold.
\begin{enumerate}
\item $Q=F^*(X)$ is extraspecial of order $3^5$; and
\item $X/Q$ contains a normal subgroup isomorphic to $\mathrm{Q}_8 \times \mathrm{Q}_8$.
\end{enumerate}
\end{definition}
The main theorems of \cite{PS1, F4} combine to give the following result which is also recorded in \cite{F4}.
\begin{theorem}\label{F4U6Thm} Suppose that $G$ is a group, $Z \le G$ has order $3$ and set $M = C_G(Z)$.
If $M$ is similar to a $3$-centralizer of a group of type $\U_6(2)$ or $\F_4(2)$ and $Z$ is not weakly closed in
a Sylow $3$-subgroup of $G$ with respect to $G$, then either $F^*(G) \cong \U_6(2)$ or $F^*(G) \cong \F_4(2)$. Furthermore, if $F^*(G) \cong \U_6(2)$, then $Z$ is weakly closed in $O_3(M)$ with respect to $G$ and if $F^*(G) \cong \F_4(2)$, then $Z$ is not weakly closed in $O_3(M)$ with respect to $G$.
\end{theorem}
\begin{definition}\label{O8def}
We say that $X$ is similar to a $3$-centralizer in a group of type $\mathrm{Aut}(\Omega_8^+(2))$
provided the following conditions hold.
\begin{enumerate}
\item $Q=F^*(X)$ is extraspecial of order $3^5$;
\item $X/Q \cong \mathrm{SL}_2(3)$ or $\mathrm{SL}_2(3) \times 2$;
\item $[Q,O_{3,2}(X)]$ has order $27$.
\end{enumerate}
\end{definition}
\begin{theorem}[Astill \cite{Astill}]\label{Astill} Suppose that $G$ is a group, $Z \le G$ has order $3$ and set $M = C_G(Z)$.
If $M$ is similar to a $3$-centralizer of a group of type $\mathrm{Aut}(\Omega_8^+(2))$ and $Z$ is not weakly closed in $O_3(C_G(Z))$ with respect to $G$, then either $G \cong \Omega_8^+(2):3$ or $F^*(G) \cong \mathrm{Aut}(\Omega_8^+(2))$.
\end{theorem}
\section{Strong closure}
The main result of this section will be used in the final determination of the centralizer of an involution in
${}^2\E_6(2)$. Remember that for a prime $p$ and a group $X$ a subgroup $Y$ of order divisible by $p$ is \emph{strongly $p$-embedded} in $X$ so long as $Y\cap Y^g $ has order coprime to $p$ for all $g \in X \setminus Y$.
\begin{lemma}\label{fusion2} Suppose that $p$ is a prime, $X$ is a group and $H$ is strongly $p$-embedded in $X$.
If $x\in H$, $y \in x^X\cap H$ and $p$ divides both $|C_H(x)|$ and $|C_H(y)|$, then $y \in x^H$.
\end{lemma}
\begin{proof}
Since $H$ is strongly $p$-embedded in $X$ and $p$ divides $|C_H(x)|$, $C_{H}(x)$ contains a Sylow $p$-subgroup $P$ of
$C_X(x)$. Let $g \in X$ be such that $y^g = x$. Since $p$ divides $|C_H(y)|$ there is an element $d \in C_H(y)$ of
order $p$. Then $d^{g}$ is a $p$-element of $C_H(x)$ and hence there exists an element $w\in C_G(x)$ such that
$d^{gw} \in P$. Then, as $H$ controls $p$-fusion in $X$ (\cite[Prop. 17.11]{GLS2}), there exists $h\in H$ such that $d= d^{gwh}$. As $H$ is strongly $p$-embedded in $G$, we now have $gwh \in C_X(d) \le H$. Hence $gw \in H$, and $$y^{gw}= x^{w} = x$$ as claimed.
\end{proof}
\begin{lemma}\label{centinH} Suppose that $X$ is a group, $H = N_X(A) $ with $H/A \cong \U_6(2)$ or $\U_6(2):2$, $|A|=2^{20}$ and $A$
a minimal normal subgroup of $H$. Then $C_H(x)$ contains a Sylow $2$-subgroup of $C_X(x)$ for all {$x \in A$}.
\end{lemma}
\begin{proof} Let $S \in \Syl_2(C_X(x))$ with $S \cap H \in \Syl_2(C_H(x))$. As,
by Proposition~\ref{Vfacts} (i), $A$ is not a failure of factorization module for $H/A$, we have $A = J(S\cap H)$
{from \cite[Lemma 26.7]{GLS2}}. In particular, we have $N_S(S\cap H) \le N_G(J(S\cap H)) = H$. Hence $S =S\cap H$.
\end{proof}
We can now prove Theorem~\ref{Not3embedded} which we restate for the convenience of the reader.
\begin{theorem}\label{Not3embedded1} Suppose that $X$ is a group, $O_{2'}(X)= 1$, $H = N_X(A)= AK$ with $H/A \cong K \cong \U_6(2)$ or $\U_6(2):2$, $|A|=2^{20}$ and $A$
a minimal normal subgroup of $H$. Then $H$ is not a strongly $3$-embedded subgroup of $X$.
\end{theorem}
\begin{proof} Suppose that $H$ is strongly $3$-embedded in $X$. Let $S \in \Syl_2(H)$. Then Lemma \ref{centinH} yields $S \in \Syl_2(X)$.
We now claim that $A$ is strongly closed in $H$ with respect to $X$. Assume that, on the contrary, there
is $u \in A$, $g \in X$ and $v \in H\setminus A$ with $v^g = u$. If $3$ divides both $|C_H(u)|$ and $|C_H(v)|$,
then $u$ and $v$ are $H$-conjugate by Lemma~\ref{fusion2}. Since $A$ is normal in $H$, this is impossible.
{Therefore, as $H= AK$ is a split extension, Proposition~\ref{Vaction} and Lemma~\ref{centralizerinvs} together, imply that there is} a unique
possibility for the conjugacy class of $v$ in $H$ and $C_{S}(v)A/A$ has index $2$ in $S/A$. In addition, we have
$|C_{A}( v)|= 2^{12}$.
Since $v \in A^{g^{-1}}$, there exists a Sylow $2$-subgroup $T$ of $C_X(v) $ which contains both $C_S(v)$ and a
conjugate of $A$ which contains $v$. Let $A_v = J(T)$. If $C_A(v) \le A_v$, then, as $[A,v]\le C_A(v)$,
$\langle A,A_v\rangle$ normalizes $\langle v,A\cap A_v\rangle$. Because $A$ is the Thompson subgroup of any $2$-group
which contains $A$, $A$ and $A_v$ are conjugate in $\langle A,A_v\rangle$. But $A$ does not centralize $\langle v, A_v \cap A \rangle$ while $A_v$ does,
which is a contradiction. Thus $C_A(v) \not \le A_v$.
We have $(A_v \cap C_S(v))A/A$ is an elementary abelian normal subgroup of $C_S(v)A/A$ and, as $(A_v \cap
C_S(v))A/A$ only contains elements which are conjugate to $Av$, we have $|(A_v \cap C_S(v))A/A|\leq 4$ from
Lemma~\ref{contains a 2-central}. Combining this with the fact that $A_v \cap C_S(v) \cap A < C_A(v)$, we
deduce that $|A_v\cap C_S(v)|\le 2^{13}$. In particular we have that $|T : A_vC_S(v)| \le 4$. {Now using Lemma
\ref{centinH} and Proposition~\ref{Vaction} we see that $v$ is $H^{g^{-1}}$-conjugate to an element in $A_v$
in class $v_1$ or $v_2$ (using the notation as in Proposition \ref{Vaction}). Furthermore,} $v$ is a singular
element. Suppose that $v$ is conjugate to $v_2$. Then $|T : A_vC_S(v)| = 4$ and so $|A_v \cap C_S(v)| = 2^{13}$.
But any subgroup of $A_v$ of order $2^{13}$ is generated by non-singular vectors, and {as we have seen such
elements are not conjugate} to elements in $H \setminus A$, a contradiction. So we have that $v$ {is conjugate
to} $v_1$. Now let $T$ be a Sylow 2-subgroup of $C_X(v)$, which contains $A_vC_S(v)$. Then $T \in \Syl_2(X)$ {by
Lemma~\ref{centinH}}. {Once again, as $A_v \cap C_S(v)$ is not generated by non-singular vectors,} we get that
$|A_v \cap C_S(v)| \le 2^{12}$ and so $|T : A_vC_S(v)| \le 2$. Further we have $|C_S(v) \cap A_v| \ge 2^{11}$.
Therefore, {as there are only $891$ conjugates of $v$ in $A_v$}, $|(A_v \cap C_S(v))\setminus A| \le 891$. It
follows that $|A \cap A_v| \le 2^9$. Since $|(C_S(v) \cap A_v)A/A|\le 2^2$, we get $|A \cap A_v| = 2^9$ and
$|C_S(v) \cap A_v| = 2^{11}$. But then $|(A_v \cap C_S(v))\setminus A| = 2^{11}-2^{9} = 1536 > 891$, which is a contradiction.
Hence $A$ is strongly closed in $H$.
Since $A$ is strongly closed in $H$ and $O_{2'}(X)=1$, we now have that $X= H$ by Lemma~\ref{Gold} and this is
impossible as $H$ is strongly $3$-embedded. This completes the proof of the theorem.
\end{proof}
\section{The Structure of $H$}
From here on we assume that $G$ satisfies the hypothesis of Theorem~\ref{Main} or Theorem~\ref{Main1}. We let
$H \le G$ be a subgroup of $G$ which is similar to the $3$-centralizer in a group of type ${}^2\E_6(2)$ or $\mathcal{M}(22)$. We let $Z= Z(O_3(F^*(H)))$ and assume that $H = C_G(Z)$.
We will use the following notation $Q= O_3(H)$, $S \in \Syl_3(H)$ and $Z =\langle z\rangle= Z(S)$. We select $R
\in \Syl_2(O_{3,2}(H))$ such that $S=N_S(R)Q$. Then $R$ is isomorphic to a subgroup of $\mathrm{Q}_8\times \mathrm{Q}_8\times
\mathrm{Q}_8$ containing the centre of this group and of order $2^7$ when $H$ has type $\mathcal{M}(22)$ and order $2^9$ when
$H$ has type ${}^2\E_6(2)$. Note that $\Omega_1(R) $ is elementary abelian of order $2^3$. For $i=1,2,3$, let
$\langle r_i\rangle \le \Omega_1(R)$ be chosen so that $C_Q(r_i)$ is extraspecial of order $3^5$. {We set, for
$i=1,2,3$, $Q_i= [Q,r_i]$ and note that $Q_i$ is extraspecial of order $3^3$.}
{If $|R|= 2^9$, we let $R_1$, $R_2$ and $R_3$ be the three normal subgroups of $R$ which are isomorphic to
$\mathrm{Q}_8$ such that $[R_i,Q] = Q_i$. Notice that we have $Z(R_i) = \langle r_i\rangle$ in this case.}
Further we set $B = C_S(\Omega_1(Z(R)))$.
\begin{lemma} We have $Q_1 \cong Q_2 \cong Q_3 \cong 3^{1+2}_+$ and these subgroups pairwise commute. \end{lemma}
{\begin{proof} This follows from the Three Subgroup Lemma and the definitions of $r_i$ and $Q_i$.
\end{proof}
}
Since each $Q_i$ has exponent $3$, $Q$ has exponent $3$ and so $\mathrm{Out}(Q) \cong \mathrm{G}Sp_6(3)$. For later calculations,
for each $i= 1, 2,3 $, we select $q_i, \wt q_i \in Q_i$ such that $[q_i,S] \le Z$
\begin{center}$q_i^{r_i}= q_i^{-1}$, $\wt q_i^{r_i}= \wt q_i^{-1}$ and $[q_i,\wt q_i]=z$.
\end{center}
We set $\ov H = H/Q$. Then the following lemma follows from
the structure of $\mathrm{G}Sp_6(3)$ and the definition of the $3$-centralizers in groups of type $\mathcal{M}(22)$ and
${}^2\E_6(2)$.
\begin{lemma}\label{H/Q struct}
We have $\ov R $ is normal in $ \ov H$ and, in particular, $\ov H$ is isomorphic to a subgroup of
$\mathrm{Sp}_2(3)\wr \mathrm{Sym}(3)$ preserving the symplectic form.
\end{lemma}
\begin{proof} {This follows from the definition of $H$. Note also that $\ov H$ preserves the ``perpendicular" decomposition of
$Q$ as the central product of $Q_1$, $Q_2$ and $Q_3$.}
\end{proof}
If the Sylow $3$-subgroup $S$ of $H$ equals $Q$, then, as $Z$ is not weakly closed in $S$ by hypothesis, there
exists $g \in G$ such that $Z^g \le S=Q$ and $Z\neq Z^g$. Now $C_S(Z^g) \cong 3 \times 3^{1+4}_+$ and so
$C_Q(Z^g)'= Z$. However, $C_G(Z^g)$ is $3$-closed with Sylow $3$-subgroup $Q^g$ and derived subgroup $Z^g$.
Therefore we have
\begin{lemma}\label{S>Q} $S > Q$.
\end{lemma}
We draw further information about the structure of $\ov S$ from Lemma~\ref{H/Q struct}.
{
\begin{lemma}\label{added} The following hold:
\begin{enumerate}
\item $\ov S$ is isomorphic to a subgroup of $3\wr 3$ and $|S:BQ|\le 3$;
\item if $x \in S\setminus BQ$ has order $3$, then $|C_{Q/Z}(x)|= 9$, $|[Q/Z,x]| = 3^4$ and the preimage of
$C_{Q/Z}(x)$ is equal to the centre of $[Q,x]$;
\item if $x\in BQ$, then $|C_{Q/Z}(x)| \ge 3^3$;
\item if $\ov S$ contains $\ov E$ of order $9$ with $\ov S= \ov E \ov B$, then $|C_{Q/Z}(\ov E)| = 3$; and
\item if $\ov F \le \ov S$ is elementary abelian of order $27$, then $\ov F = \ov B$.
\end{enumerate}
\end{lemma}
\begin{proof} Lemma~\ref{H/Q struct} (i) implies that $\ov S$ is isomorphic to a subgroup of the wreath product $3\wr
3$ and, as by design, $\ov B$ is the intersection of $\ov S$ with the base group of this group, (i) holds.
Assume that $x \in S\setminus BQ$. Since $x\not\in BQ$, $x$ permutes the set $\{Q_1,Q_2,Q_3\}$ transitively and
therefore $Q/Z$ is a sum of two regular representations of $\langle x\rangle$. It follows that $[Q/Z,x]$ has
order $81$, $|C_{Q/Z}(x)|$ has order $9$ and $C_{Q/Z}(x)= [Q/Z,x,x]$. Let $J$ be the preimage of $C_{Q/Z}(x)$.
Then $[J,x,Q]=1$ and $[J,Q,x]=1$. Hence the Three Subgroup Lemma implies that $J\le Z([Q,x])$ and as $Q$ is
extraspecial, equality follows.
Part(iii) follows from the fact that $BQ$ normalizes each $Q_i$, $1\le i \le 3$.
For part (iv), we have $\ov E$ contains an element which acts nontrivially on each of $Q_i$, $i = 1,2,3$, and a
further element which permutes the $Q_i$ transitively. So the result follows.
Finally (v) follows from (i) as $3\wr 3$ contains a unique elementary abelian subgroup of order $27$.
\end{proof}
}
The next lemma shows that $Z$ is not weakly closed in $Q$. As we will see this is not an immediate observation.
\begin{lemma}\label{ZnotweakQ} $Z$ is not weakly closed in $Q$ with respect to $G$.
\end{lemma}
\begin{proof} Assume that $Z$ is weakly closed in $Q$.
By hypothesis we have that $Z$ is not weakly closed in $S$ with respect to $G$.
Hence {there exists $g\in G$ such that} $Y=Z^g \le S$ and $Y\not\le Q$.\\
\begin{claim}\label{weak1} We have $Y \le BQ$.
\end{claim}
\\
Suppose that $Y \not \le BQ$. Then, by Lemma~\ref{H/Q struct}, $\ov Y $ permutes the set $\{Q_1,Q_2,Q_3\}$
transitively and $\ov Y$ centralizes $\ov f = \ov {r_1r_2r_3}$ which has order $2$. Furthermore by
Lemma~\ref{added} (ii), $[Q/Z,Y]/C_{Q/Z}(Y)$ and $C_{Q/Z}(Y)$ have order $9$. In particular, every element of
order $3$ in $Qz^g$ is conjugate to an element of $Zz^g$. Therefore, as $Z$ normalizes $R$, we may assume that
$Y$ normalizes $R$ and so we can further assume that $f=r_1r_2r_3 \in C_R(Y)$.
{Let $J$ be the preimage of $C_{Q/Z}(Y)$ and set $E = [J,f]$. Then, as $J$ is abelian by Lemma~\ref{added} (ii),
$E$ has order $9$ and is centralized by $Y$. Hence $J= C_Q(Y)=ZE$. Furthermore, Lemma~\ref{added} (ii) shows that
$[Q,Y] = C_Q(E)$. Since $[Y,f]=1$ and $[C_Q(E),f]=[Q,Y,f]=[Q,Y]$, the Three Subgroup Lemma (to get the second
equality) implies}
$$[Q,Y,Y] = [C_Q(E),f, Y]=[C_Q(E),Y,f]=[Q,Y,Y,f]=E.$$
In particular, if $y = z^g$, then every element of the coset $Ey$ is conjugate to $z$. Hence $Ey \cap Q^g
\subseteq \{y,y^{-1}\}$ as $y^G \cap Q^g \subseteq \{y, y^{-1}\}$. Thus $E \cap Q^g= 1$. As $f$ inverts $Q \cap
Q^g$ we have that $Q \cap Q^g \leq E$ and so $Q \cap Q^g = 1$. Since $ZE\le C_G(Y)$, we now have $ZEQ^g/Q^g$ is
elementary abelian of order $3^3$. It follows from Lemma~\ref{added} (v) that $Z$ centralizes
$\Omega_1(R^g)Q^g/Q^g$. Hence $|C_{Q^g/Y}(Z)| \ge 3^3$ {by Lemma~\ref{added}(iii)}. Now we have that
$|\ov{C_{Q^g}(Z)}| \geq 3^3$. Since $\ov Y$ centralizes $\ov{C_{Q^g}(Z)}$ this is impossible. Hence
\ref{weak1} holds.\qedc
Reiterating the statement of \ref{weak1}, we have $z^G \cap H \subseteq BRQ$.
\begin{claim}\label{cent} We have that $C_Q(Y)$ does not contain a subgroup $F$ isomorphic to $3^2 \times 3^{1+2}_+$.
\end{claim}
Suppose false and assume that $F$ is such a subgroup. As
$Z \not \le Q^g$, we have that $FQ^g/Q^g$ is isomorphic to $3^{1+2}_+$. Since $F$ centralizes $F \cap Q^g$ which
has order $9$, we have a contradiction to the fact that $|C_{Q^g/Y}(F)|=3$, see Lemma \ref{added} (iv).
\qedc
\begin{claim}\label{weak2} For $\{i,j\} \subset\{1,2,3\}$ with $i \neq j$, $[Y,Q_iQ_j] \not \le Z$.
\end{claim}
Assume that $[Y,Q_iQ_j] \le Z$. Then $C_{Q/Z}(Y)$ has order $3^5$ and, letting $E_1$ be its preimage, we have
$E_1 \cong 3 \times 3^{1+4}_+$. If $E_1$ is centralized by $Y$, then $E_1Q^g/Q^g$ must be elementary abelian and we have
$Z \le Q^g$ which is a contradiction. So suppose that $[Y,E_1]=Z$. Then $E_2=C_{E_1}(Y)\cong 3^2 \times 3^{1+2}_+$. But this contradicts \ref{cent}.
\qedc
\begin{claim}\label{weak3} If $E \le C_Q(Y)$ with $|E|= 27$, then the non-trivial cyclic subgroups contained in $EY$
but not in $E$ are not all conjugate to $Z$.
\end{claim}\\
Suppose that every non-trivial cyclic subgroup of $EY$ not contained in $E$ is conjugate to $Z$. Then
$E \cap Q^g = 1$ for otherwise $(E \cap Q^g)Y\le Q^g$ contains a conjugate of $Z$. Thus \ref{weak1} implies that
$EY \le B^{gh} Q^g$ for some appropriate $h \in H^g$. But then there is a subgroup $U \le EY$, $U \not= Y$ such that $U$
is $G$-conjugate to $Z$ and such that $U$ centralizes $(Q_1Q_2)^{gh}/Y$. This violates \ref{weak2}.
\qedc
\begin{claim}\label{weak4} There are non-trivial cyclic subgroups of $YZ$ which are not conjugate to $Z$.
In particular, $C_{Q}(Y)/Z = C_{Q/Z}(Y)$.
\end{claim}
Suppose that the statement is false. Let the subgroups of order $3$ in $YZ$ be $Y_1$, $Y_2$, $Y$ and $Z$. Then by assumption all these groups are $G$-conjugate to $Z$. Let $E= [Q,Y]Z$. Then the cyclic subgroups of $EY$ not contained in $E$ are $Y_1^Q\cup Y_2^Q \cup Y^Q$. Since $|E| \ge 27$ by \ref{weak1} and \ref{weak2},
we have a contradiction to \ref{weak3}. Let $C$ be the preimage of $C_{Q/Z}(Y)$. Then, as $Y$ and $Z$ are the only
$G$-conjugates of $Z$ in $YZ$, $C$ centralizes $Y$ and we have $C= C_Q(Y)$.\qedc
\begin{claim}\label{weak5} $C_Q(Y)$ is elementary abelian of order $81$. In particular, for $i=1,2,3$, $[Q_i,Y] \not\le Z$.
\end{claim}
Otherwise $Y$ centralizes $Q_1/Z$ say and then $C_{Q}(Y) \cong 3^2\times 3^{1+2}_+$ by \ref{weak4}. Now
\ref{cent} gives a contradiction. \qedc
{ Since $[Q,Y] = C_Q(Y)$, every subgroup of $[Q,Y]Y$ of order $9$ containing $Z$ is $Q$-conjugate to $YZ$. As
$[Q,Y]Y = C_{QY}(C_Q(Y))$ is normalized by $\Omega_1(R)$, we may suppose that $[\Omega_1(R),ZY] = 1$.
From \ref{weak5} we have $|C_{Q^g}(Z)/Y|=3^3$ and so Thompson's $A\times B$ Lemma \cite[Lemma 11.7]{GLS2} implies that $\Omega_1(R)$ is isomorphic to a subgroup of $\mathrm{G}L_3(3)$. Since
all elementary abelian subgroups of order $2^3$ in $\mathrm{G}L_3(3)$ contain the centre of $\mathrm{G}L_3(3)$, there exists $x \in \Omega_1(R)$ such that $C_{Q^g}(Z)/Y$ is inverted by $x$. Hence $C_{Q^g}(Z)= Y[C_{Q^g}(Z),x]$. Because $\ov{C_{Q^g}(Z)}$ normalizes, and is normalized by, $\Omega_1(R)$, we have $$Q \ge [C_{Q^g}(Z),\Omega_1(R)]=[C_{Q^g}(Z),x].$$
Therefore $C_{Q^g}(Z)Q= YQ$ and $|C_{Q^g}(Z)\cap Q|= |Q\cap Q^g|=3^3$.}
{Set $D= Q\cap Q^g$ and $U= ZDY$. Then $U$ is elementary abelian of order $3^5$.} Let $P= \langle Q,
Q^g\rangle$ and note that $P$ normalizes $U$. Since $Z$ is the only $G$-conjugate of $Z$ in $DZ$ and $P$ does not
normalize $Z$, we see that there are $P$-conjugates of $Z$ which are not contained in $DZ$. Now conjugating by
$Q$, we see that there are $28$, $55$ or $82$ $P$-conjugates of $Z$ in $U$. Since $7$ and $41$ do not divide
$|\mathrm{G}L_5(3)|$, we have that there are exactly $55$ $P$-conjugates of $Z$ in $U$. Similarly, there are $55$
$P$-conjugates of $Y$ and so we infer that $Z$ and $Y$ are $P$-conjugate. Since $DZ$ and $DY$ each only have one
$G$-conjugate of $Z$, we have that $U\setminus( DZ\cup DY)$ contains at most two elements which are not conjugate
into $Z$. Since $Q$ does not normalize $Y$ and does normalize $DZ$, there is a $u \in P$ with $(ZD)^{u} \not
\subseteq DZ \cup DY$ .
Set $D_1 = D \cap (DZ)^{u}$.
Then $|D_1| \geq 9$. Choose $x \in (DZ)^{u} \setminus (ZD \cup DY)$. Then in { $\langle D_1, x \rangle$ there are
nine subgroups} of order three not in $ZD \cup DY$, in particular at least eight of them are conjugate to $Z$,
which is not possible as $Z^{u}$ is the only conjugate of $Z$ in $(ZD)^{u}$. This contradiction finally proves
that $Z$ is not weakly closed in $Q$ with respect to $G$.
\end{proof}
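For the reader's convenience we make the divisibility step in the above proof explicit. Using the standard order formula,
$$|\mathrm{G}L_5(3)| = 3^{10}\prod_{i=1}^{5}(3^{i}-1)= 2^{10}\cdot 3^{10}\cdot 5\cdot 11^{2}\cdot 13,$$
while $28 = 2^{2}\cdot 7$ and $82 = 2\cdot 41$; hence neither $28$ nor $82$ can occur as the number of $P$-conjugates of $Z$ in $U$, and so there are exactly $55 = 5\cdot 11$ such conjugates.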
Because of Lemma~\ref{ZnotweakQ} we may and do assume that for some $g \in G$ we have $Y = Z^g \le Q$ with $Y \neq Z$. Set $V = ZY$ and assume that $Y$ is chosen so that $C_{Q^g}(Z) \le S$.
Set $P= \langle Q,Q^g\rangle$ and $W= C_Q(Y)C_{Q^g}(Z)$.
\begin{lemma}\label{Pfacts1} The following hold:
\begin{enumerate}
\item $V \le Q \cap Q^g$;
\item $Q \cap Q^g$ is normal in $P$ and is elementary abelian;
\item $[Q\cap Q^g,P]= V$;
\item $P/C_P(V) \cong \mathrm{SL}_2(3)$ and there are exactly $4$ conjugates of $Z$ in $V$; and
\item $|N_G(Z):H|=2$.
\end{enumerate}
\end{lemma}
\begin{proof} We have $C_Q(Y) \cong 3\times 3^{1+4}_+$ and so, as $C_Q(Y) \le H^g$, the structure of $\ov S$ {given in Lemma~\ref{added} (i)} implies that $Z=C_{Q}(Y)'\le Q^g$. Hence (i) holds.
{Since $[Q\cap Q^g, Q]= Z \le V$ and $[Q\cap Q^g,Q^g] = Y \le V$, the first part of (ii) and (iii) hold.} Of course $\Phi(Q\cap Q^g)\le Z \cap Y=1$. Hence the second part of (ii) holds as well.
Since $|V|= 3^2$, $[V,Q]= Z$ and $[V, Q^g]= Y$, we get (iv). Finally there is an element in $P$ which inverts $V$, and so we have $|N_G(Z)/H|=2$.
\end{proof}
\begin{lemma}\label{Pfacts2}
\begin{enumerate}
\item $W$ is a normal subgroup of $P$, $P/W \cong \mathrm{SL}_2(3)$ and $W= C_P(V)$;
\item {$Q\cap Q^g$ is a maximal abelian subgroup of $Q$,} and $W/(Q\cap Q^g)$ is elementary abelian of
order $3^4$ which, as a $P/C_P(V)$-module, is a direct sum of two natural $\mathrm{SL}_2(3)$-modules;
\item {$WQ\not \le BQ$}, $\ov W$ has order $9$ and does not act quadratically on $Q/Z$;
\item $V$ is the second centre of $S$;
\item $S = WQ$ or $\ov{S}$ is extraspecial. {Furthermore, if $|R| = 2^7$, then $S = WQ$; and}
\item $\ov{W}$ is inverted by an involution $t\in N_P(Z)\cap N_G(S)$ which inverts $Z$.
\end{enumerate}
\end{lemma}
\begin{proof}
Since $C_Q(Y)$ normalizes $C_{Q^g}(Z)$, $W$ is a subgroup of $G$. We have that $[Q, Y,
C_{Q^g}(Z)]=[Z,C_{Q^g}(Z)]=1$ and $[Y, C_{Q^g}(Z), Q]=1$ and so $[Q, C_{Q^g}(Z), Y]= 1$ by the Three Subgroup
Lemma. Thus $[Q,C_{Q^g}(Z)] \le C_Q(Y) \le W$. Hence $[W,Q] \le W$ and similarly $[W,Q^g]\le W$. So $W$ is a
normal subgroup of $P$. Furthermore, $[C_P(V),Q]\le C_Q(Y) \le W$ and $[C_P(V),Q^g]\le C_{Q^g}(Z) \le W$ and so
$P/W$ is a central extension of $P/C_P(V)$. Let $T$ be a Sylow $2$-subgroup of $O^3(P)$. Then as $O^3(P)/W$ is
nilpotent, $Q$ normalizes and does not centralize $T$. It follows that $P= WTQ$ and then the action of $Q$ on $T$
and the fact that $T/C_T(V)\cong\mathrm{Q}_8$ implies that $T \cong \mathrm{Q}_8$ and that $P/W \cong \mathrm{SL}_2(3)$, as by \cite[Satz
V.25.3]{Hu} the Schur multiplier of a quaternion group is trivial. This proves (i).
Since $WQ = C_{Q^g}(Y)Q$ and $Y \le Q$, we have $\ov W$ is elementary abelian. Furthermore, as $Q$ is
extraspecial and as $Q\cap Q^g$ is elementary abelian by Lemma~\ref{Pfacts1} (iii), $Q\cap Q^g$ has index at
least $3^3$ in $Q^g$. Because $C_{Q^g}(Y)$ has index $3$ in $Q^g$, there is an integer $a$ such that
$$3^2 \le |\ov W|= |WQ^g/Q^g|=3^a \le 3^3.$$ Furthermore, we have that $W/(Q\cap Q^g)=C_Q(Y)C_{Q^g}(Z)/(Q\cap Q^g) $ has order $3^{2a}$ and is elementary abelian.
If $C_{W/(Q\cap Q^g)}(Q)> C_{Q}(Y)/(Q\cap Q^g)$, then $C_{W/(Q\cap Q^g)}(Q) \cap C_{Q^g}(Z)/(Q\cap Q^g) > 1$ and
is centralized by $P$. As $P$ acts transitively on the subgroups of $V$ of order $3$, we get $$C_{W/(Q\cap
Q^g)}(Q) \cap C_{Q^g}(Z)/(Q\cap Q^g) \le Q/(Q\cap Q^g)$$ which is absurd. Hence $C_{W/(Q\cap
Q^g)}(Q)=C_{Q}(Y)/(Q\cap Q^g)$. In particular, $C_{W/(Q\cap Q^g)}(P)=1$ and $[W,Q](Q\cap Q^g)/(Q\cap Q^g)$ has
order $3^a$. Since $Q$ acts quadratically on $W/(Q \cap Q^g)$, as a $P/W$-module, we have that $W/(Q\cap Q^g)$ is
a direct sum of $a$ natural $\mathrm{SL}_2(3)$-modules.
Assume that $|\ov{W}| = 3^3$. Then $WQ = BQ$ {and so $|[Q/Z,W]|= |[Q/Z,B]| \le 3^3$. Since
$[W, Q\cap Q^g] \ge [Q\cap Q^g, C_{Q^g}(Y)]= Y$ and $|[W/(Q\cap Q^g),Q]|= 3^a=27$, }we infer
that $3^3\ge|[Q/Z,W]| \ge 3^4$, which is a contradiction. This proves (ii).
{Suppose that $WQ \le BQ$ (which is equivalent to $W$ acting quadratically on $Q/Z$). }Then $[Q,W]V/V \le
Z(W/V)$ and as $(Q\cap Q^g)/V \le Z(W/V)$, we infer that $C_Q(Y)/V \le Z(W/V)$ and this means that $W/V$ is
abelian. Since $W$ is generated by elements of order $3$, we then have that $W/V$ is elementary abelian. Letting
$t$ be an involution in $P$, we now have that $W_1=[W,t]$ has order $3^6$, is abelian and is normal in $P$. Now
by (ii) $W_1/V$ is a direct sum of two natural $P/W$-modules and so there are exactly four normal subgroups of
$P$ in $W_1/V$ of order $3^2$. Let $U$ be such a subgroup. Then $[U,Q\cap Q^g]\le V$. By (ii) we have $C_Q(Q \cap
Q^g) = Q \cap Q^g$ and so $[U,Q\cap Q^g] \not= 1$. As $[U,Q\cap Q^g]$ is normal in $P$ we get $[U,Q\cap Q^g]= V$.
Therefore $|[U,Q]/Z|= 3^2$. {Now, as $WQ\le BQ$, $WQ$ normalizes $Q_1$, $Q_2$ and $Q_3$, so, as
$|[U,Q]/Z|=3^2$, $UQ$ centralizes exactly one of $Q_1/Z$, $Q_2/Z$ and $Q_3/Z$. This is true for all four
possibilities for $U$. Hence there exist two candidates for $U$ centralizing $Q_1/Z$ (say). Thus $\ov W$
centralizes $Q_1/Z$ and we get $[Q/Z,W]=[Q_2Q_3/Z,W]$ has order $3^2$. Since $|[Q/Z,W]|=3^3$, this is a
contradiction.
Hence $W \not\le BQ$ and $W$ does not act quadratically on $Q/Z$.} This proves (iii).
Since $W \not \le BQ$ and $|\ov W \cap \ov B|\neq 1$, we see that $C_{Q/Z}(W) = V/Z$ by using Lemma~\ref{added}
(iv). This then gives (iv).
{Note that, by (iv), $S=C_S(Y)Q$ and so $WQ$ is normalized by $S$. Since, by Lemma~\ref{added} (i), $\ov S$ is
isomorphic to a subgroup of $3\wr 3$ with $\ov B$ being the subgroup of $\ov S$ meeting the base group of the
wreath product, the possibilities for $\ov S$ now follow as $\ov W$ is normalized by $\ov S$. In the case when
$|R| = 2^7$, we have that $|R/Z(R)| = 2^4$ and so does not admit an extraspecial group of order 27. Hence in this
case we get $\ov S= \ov W$ has order $9$. This proves (v).}
Finally we note that the involution $t$ in a Sylow $2$-subgroup of $P$ inverts $Z$, normalizes $S$ and also
inverts $\ov W$. So (vi) holds.
\end{proof}
\begin{lemma}\label{Horder} One of the following holds:
\begin{enumerate}
\item $|R|= 2^9$, $S=WQ$ and either $|H|=2^9\cdot 3^9$, $H=WRQ$ and $$\ov H \approx (\mathrm{Q}_8 \times \mathrm{Q}_8 \times \mathrm{Q}_8).3^2$$ or $|H|=2^{10}\cdot3^9$, $H/BRQ \cong \mathrm{Sym}(3)$ and $$\ov H \approx (\mathrm{Q}_8 \times \mathrm{Q}_8 \times \mathrm{Q}_8).3. \mathrm{Sym}(3);$$
\item $|R|= 2^9$, $\ov S$ is extraspecial and either $|H|=2^9\cdot 3^{10}$, $H=SR$ and
$$\ov H \approx (\mathrm{Q}_8 \times \mathrm{Q}_8 \times \mathrm{Q}_8).3^{1+2}_+$$
or $|H|=2^{10}\cdot 3^{10}$, $H/BRQ \cong \mathrm{Sym}(3)$ and $$\ov H \approx (\mathrm{Q}_8 \times \mathrm{Q}_8 \times \mathrm{Q}_8).3^{1+2}_+ .2;$$ or
\item $|R|= 2^7$, $S=WQ$ and either $|H|=2^7\cdot 3^9$, $H=QRW$ and
$$\ov H \approx 2^7.3^2$$
or $|H|=2^{8}\cdot3^9$, $H/BRQ \cong \mathrm{Sym}(3)$ and
$$\ov H \approx 2^7.3.\mathrm{Sym}(3).$$
\end{enumerate}
\end{lemma}
\begin{proof} This is a summary of things we have learnt in Lemma~\ref{Pfacts2}
combined with the fact that $\ov H$ embeds into $\mathrm{Sp}_2(3) \wr \mathrm{Sym}(3)$.
\end{proof}
{We may now fill in the details of the structure of $N_G(Z)$ and while doing so establish some further notation
which will be used throughout the remainder of the paper.}
By
Lemma~\ref{Pfacts2} (iii), $\ov W$ does not act quadratically on $Q/Z$. Thus $W \not \le QB$. It follows that
$N_{S}(R)$ contains an element $w$ which permutes $\{Q_1, Q_2,Q_3\}$ transitively ($w$ is a wreathing element).
Furthermore, as $\ov W$ is abelian, $\ov W\cap \ov B$ contains a cyclic subgroup which is centralized by $wQ$.
We let $x_{123}$ be the corresponding element in $N_S(R)$ (here the notation should remind the readers (and the
authors) that $x_{123}$ acts non-trivially on $Q_1/Z$, $Q_2/Z$ and $Q_3/Z$ and on $R_1/\langle r_1\rangle$,
$R_2/\langle r_2\rangle$, $R_3/\langle r_3\rangle$). Since $x_{123}$ centralizes $r_1r_2r_3$, it normalizes
$\langle q_1,q_2,q_3\rangle$ and consequently $$[x_{123}, \langle q_1,q_2,q_3\rangle ]\le \langle
q_1,q_2,q_3\rangle \cap Z=1.$$ Hence $x_{123} \in C_{S}(\langle q_1,q_2,q_3\rangle)$.
If $S>QW$, then $\ov B$ has order $9$ and is normalized by $w$. Thus $N_S(R)$ contains an element
$x_2x_3^{-1}$, which as with $x_{123}$ centralizes $\langle q_1,q_2,q_3,Z\rangle$. Note that at this stage it
may be that $x_{123}$ and $x_2x_3^{-1}$ do not commute. We continue our investigations under the assumption that
if $S= WQ$, then $x_2x_3^{-1}$ is the identity element and $J=J_0$.
Set $A = [Q,B]= \langle Z, q_1,q_2,q_3\rangle$, $$J = C_{QW}(A)= \langle A, x_{123} \rangle$$ and $$J_0 =
C_S(A)=\langle A,x_{123},x_2x_3^{-1}\rangle.$$
\begin{lemma}\label{J} \begin{enumerate}
\item $J= J(W)$ is the Thompson subgroup of $W$, $ (Q\cap Q^g)J/(Q\cap Q^g)$ is a non-central $P$-chief factor
and $A \neq Q\cap Q^g$;
\item if $S>QW$ then $J_0$ is elementary abelian and $\ov{J_0}= \ov B$;
\item $x_{123}$ has order $3$ and, if $S> QW$, $x_{2}x_3^{-1}$ also has order $3$ and commutes with $x_{123}$;
\item if $S=WQ$, then $J= J(S)$ and, if $S>WQ$, then $J_0= J(S)$; and
\item if $S > QW$, then $|J_0|= 3^6$ and $S=QWJ_0$.
\end{enumerate}
\end{lemma}
\begin{proof} Because $A$ has index $3$ in $J$, $J$ is abelian.
As $J$ centralizes $V$ and $J \le QW$, $J \le C_{QW}(V)= W$. {As, by Lemma~\ref{Pfacts2} (ii)}, $W/(Q\cap Q^g)$
is a direct sum of two natural $\mathrm{SL}_2(3)$-modules, there is a normal subgroup $W_0$ of $P$ such that $(Q\cap
Q^g) \le W_0 \le W$ and $$\ov W_0 = \ov{\langle x_{123}\rangle}\le \ov B.$$
We have $|W_0 \cap Q:Q\cap Q^g|=3$. Thus, as $Q \cap Q^g$ is a maximal abelian subgroup of $Q$ by Lemma~\ref{Pfacts2} (ii), $Z(W_0 \cap Q)$ has index $3$ in $Q \cap Q^g$ and contains $V$. Hence $Z(W_0 \cap Q)$ is normal in $P$ by Lemma~\ref{Pfacts1} (iii) and this means that $Z(W_0) = Z(W_0)\cap Q$. From the definition of $A$ and of $W_0$, we have $[A,W_0]\le Z$. On the other hand, $Z(W_0) \le C_Q(W_0) \le A$. Thus $W_0$ centralizes a subgroup of $A$ of index $3$. It follows that $W_0$ induces a group of order $3$ on $A$. Hence $C_{W_0}(A) = J$ and $W_0 = (Q \cap Q^g)J$. As $[W_0,Q \cap Q^g] = V$, $W_0$ is
not abelian and hence $J$ is a maximal abelian subgroup of $W_0$.
If $J^*\le W_0$ is abelian with $|J^*|= |J|$
and $J\neq J^*$, then $W_0= JJ^*$ and $Z(W_0)\ge J\cap J^*$. Since $Q\cap Q^g \not \le Z(W_0)$ and $W_0/(Q\cap
Q^g)$ is a $P$-chief factor, we get $W_0= Z(W_0)(Q\cap Q^g)$ which means that $W_0$ is abelian and is a
contradiction. Hence $J= J(W_0)$ is normal in $P$ and, as $J = [J,Q][J,Q^g]$ is generated by elements of order
$3$, $J$ is elementary abelian.
Since $J$ contains a $P$-chief factor, we have $C_P(J) = C_W(J)= J$. Assume that $\wt A$ is an abelian subgroup
of $QW$ with $|\wt A|\ge |J|=3^5$. If $\wt AQ\not \le BQ$, then $|C_{Q/Z}(\wt A)| \le 3^2$ by
Lemma~\ref{added} (ii). Hence $|\wt A \cap Q| \le 3^3$ which means that $\wt A Q= WQ$ and so we have $|C_{Q/Z}(\wt
A)|=3$ by Lemma~\ref{added} (iv). But then $\ov W$ has order greater than $9$, a contradiction. So $\wt A \le
W_0Q$ and $|\wt A \cap Q|= 3^4$, it follows that $\wt A \cap Q= A$ and $\wt A \le J$. Thus $J = J(WQ)$ and if $S
= QW$ we even have $J = J(S)$.
This completes the proof of (i) and shows that $x_{123}$ has order $3$.
Since $J$ does not centralize $Q \cap Q^g$, $A \neq Q\cap Q^g$.
Now we consider $J_0$ and suppose that $S > QW$. Then $S= J_0QW$. Because $A$ is normalized by $S$, $J_0$ is a normal subgroup of
$S$ and $x_2x_3^{-1}\in J_0 \setminus J$. Set $A_1= A \cap Q\cap Q^g$. Then, as $W_0 \cap Q= A(Q\cap Q^g)$, we have $A_1$ has order $3^3$ and is centralized by $W_0J_0$. It follows that $W_0J_0= C_S(A_1)$.
Since $A_1$ is normalized by $P$ by Lemma~\ref{Pfacts1}(iii) and $C_{PS}(A_1) \le O_3(PS)$, we have $J_0W_0$ is normalized by $PS$ and that $J_0W_0/J$ is centralized by $O^3(P)$. As $J_0$ is normalized by $S$, we have that $J_0$ is a normal subgroup of
$PS$. Employing the fact that $A \le Z(J_0)$, yields $J=\langle A^P\rangle \le Z(J_0)$. Hence $J_0$ is abelian. As $J$ is
elementary abelian, $\Phi(J_0)$ has order at most $3$ and as $P$ does not normalize $Z$ we have $J_0$ is
elementary abelian. This then implies that { $x_2x_3^{-1}$ has order $3$ and $[x_{123},x_2x_3^{-1}]=1$}. Since $|J_0|= 3^6$, we also
have that $J_0 = J(S)$ in this case.
\end{proof}
The next lemma just reiterates what we have discovered in Lemma~\ref{J} (iii).
\begin{lemma}\label{BEA} $B= \langle x_{123},x_2x_3^{-1},z\rangle$ is elementary abelian.\qed
\end{lemma}
\begin{lemma}\label{controlfus}
The subgroup $N_G(J(S))$ controls $G$-fusion of elements in $J(S)$.
\end{lemma}
\begin{proof} This follows from Lemma~\ref{ptransfer} (i) as $J(S)$ is elementary abelian. \end{proof}
\begin{lemma}\label{fusion1} $N_G(Z)$ controls $G$-fusion of elements of order $3$ in $Q$ which are not conjugate to $z$.
In particular, $q_1$, $q_1q_2$ and $z$ represent distinct $G$-conjugacy classes of elements of $Q$.
\end{lemma}
\begin{proof} From Lemma~\ref{Pfacts2} (iii) and (v) no element of $\ov S$ centralizes a subgroup of index $3$ in $Q$. Furthermore, if $Z^g \le Q$, then all the elements of $ZZ^g$ are $G$-conjugate to elements of $Z$ by Lemma~\ref{Pfacts1} (iv). Hence $N_G(Z)$ controls $G$-fusion of elements of order $3$ in $Q$ which are not conjugate to $z$, by Lemma~\ref{Fusion}.
By Lemma \ref{Pfacts2}(iv), any conjugate of $z$ in $Q$ is in the second centre of some Sylow $3$-subgroup of $N_G(Z)$, and so neither $q_1$ nor $q_1q_2$ is conjugate to $z$ in $G$.
\end{proof}
\begin{lemma}\label{NHJ}
We have $N_H(J) = \Omega_1(Z(R))N_H(S)$.
\end{lemma}
\begin{proof} We know by direct calculation that $N_{\ov H}(\ov J)= \ov{\Omega_1(Z(R))}N_{\ov H}(\ov S)$ and so the result follows.
\end{proof}
Recall that, for $i=1,2,3$, $Q_i= \langle q_i,\wt q_i\rangle $ where $[q_i,\wt q_i]=z$ are specifically
defined. In the next lemma we give precise descriptions, some of which we have already seen, of a number of the
key subgroups of $Q$.
{\begin{lemma}\label{QQg} The following hold:
\begin{enumerate}
\item $V = \langle z, q_1q_2q_3 \rangle$;
\item $C_Q(V) = \langle A,\wt q_1\wt q_2^{-1}, \wt q_1 \wt q_2\wt q_3\rangle$;
\item $A=\langle z,q_1,q_2,q_3\rangle$;
\item $A \cap Q^g= \langle V, q_1q_2^{-1}\rangle= \langle V, q_2q_3^{-1}\rangle$; and
\item $Q\cap Q^g=\langle A\cap Q^g, \wt q_1 \wt q_2\wt q_3\rangle$.
\end{enumerate}
\end{lemma}
\begin{proof} We have that $V$ is centralized by $W$ and $\ov W = \langle wQ,x_{123}Q\rangle$, hence (i) holds and
(ii) follows from that. Part (iii) is the definition of $A$. Since $A \le C_Q(V)\le W$, $[A,W]= [A,w] \le Q^g$
and this gives (iv). Finally, since $[Q\cap Q^g, W]= V$, we get (v).
\end{proof}}
\begin{lemma}\label{Zconjs1} $A$ contains exactly $13$ conjugates of $Z$ and $A\cap Q^g$ contains exactly $4$ $G$-conjugates of $Z$.
\end{lemma}
\begin{proof}
Since the images of $G$-conjugates of $Z$ contained in $Q$ are $3$-central in $N_G(Z)/Z$ by Lemma~\ref{Pfacts2}
(iv), the conjugates of $Z$ in $Q$ are $N_G(Z)$-conjugate to $\langle q_1q_2q_3\rangle$ by Lemma~\ref{fusion1}.
{Therefore, in $A = \langle z, q_1,q_2,q_3\rangle$ we have thirteen candidates for such subgroups and they are
in the four groups $$\langle Z, q_1q_2q_3\rangle\;, \langle Z, q_1q_2^{-1}q_3\rangle\;,\langle Z,
q_1q_2q_3^{-1}\rangle\text{ and }\langle Z, q_1q_2^{-1}q_3^{-1}\rangle.$$ As all these groups are conjugate in
$\Omega_1(R)Q$, we see that $A$ contains exactly thirteen conjugates of $Z$. Now $A \cap Q^g= \langle
z,q_1q_2^{-1},q_2q_3^{-1}\rangle$ contains four conjugates of $Z$ all of which are contained in $V$.}
\end{proof}
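To spell the count out: any two of the four listed subgroups of order $9$ intersect exactly in $Z$, so each contributes three subgroups of order $3$ different from $Z$, and together with $Z$ itself this accounts for
$$4\cdot 3+1=13$$
candidates, in agreement with the total given in the lemma.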
\begin{lemma}\label{conjugates in Z} $J_0$ contains exactly $40$ subgroups which are $G$-conjugate to $Z$ and they are all
contained in $J$. In particular, $N_G(J) \ge N_G(J_0)$ and $|N_G(J)/J_0| = 2^{7+i}\cdot 3^4\cdot 5$ where $i$ is
such that $2^{i+2}= |N_G(S)/S| \le 8$.
\end{lemma}
\begin{proof} By Lemma~\ref{Zconjs1}, we have that $A= J\cap Q$ contains exactly thirteen conjugates of $Z$ and
$J \cap Q^g \cap Q= A \cap Q\cap Q^g= \langle z,q_1q_2^{-1},q_2q_3^{-1}\rangle$ contains exactly four conjugates
of $Z$. We have that both $J$ and $J \cap Q \cap Q^g$ are normal in $P$. As $J/(J \cap Q \cap Q^g)$ is a natural
$P$-module by Lemma \ref{Pfacts2}(ii), we see that $J = \cup_{x \in P}(J \cap Q)^x$ {is a union of four
conjugates of $J\cap Q$ pairwise meeting in $J\cap Q\cap Q^g$. This gives, using the inclusion exclusion
principle and Lemma~\ref{fusion1}, that there are exactly $4 \cdot 13 - 3 \cdot 4 = 40$ conjugates of $Z$ in
$J$.} In particular, $J =\langle Z^g \mid Z^g \le J\rangle$.
Suppose that $J_0 >J$.
Then $|R| = 2^9$ and $\ov S \cong 3^{1+2}_+$. If $N_G(J_0)$ normalizes $J$ then Lemma~\ref{controlfus} delivers the result. So we may
assume that $N_G(J_0)$ does not normalize $J$.
Suppose that $X$ is a subgroup of $J_0$ of order $3$ and that $X \not \le J$. Then $\ov X\le \ov B$, $\ov X \not= \ov J$ and $\ov X$ is conjugate to $\ov{\langle x_{2}x_3^{-1}\rangle}$, and so
we have that $C_{Q}(X)$ is conjugate to $Q_1A$ which has order $3^5$. Thus $XA$ is normalized by $Q$, $|X^Q|=
3^2$ and, {as $|(XQ)^{S}|=3$,} $|X^S| = 27$.
Hence, taking $X$ to be a conjugate of $Z$, we see that there are $40 + 27i$ conjugates of $Z$ contained in $J_0$
where $1 \le i \le 9$. If there is some non-trivial element of $A$ which has all its $G$-conjugates contained in
some proper subgroup of $J$, then we have that this subgroup is normal in $N_G(J_0) \ge S$ and so contains $Z$.
But then $Z$ is trapped in this subgroup, a contradiction. By Lemma \ref{fusion1} there are at least two
$G$-conjugacy classes of cyclic subgroups different from $Z$ in $A$ and so
there are at
least 54 cyclic subgroups of $J_0$ not in $J$, which are not $G$-conjugate to $Z$. It follows that $i \le 7$.
Now the only non-zero $i$ which has $40 + 27i$ dividing $|\mathrm{G}L_6(3)|$ is $i=3$. This means that there are 121
conjugates of $Z$ in $J_0$ and that $N_G(J_0)$ contains a cyclic group $D$ of order $121$. Let $J_1 \le J_0$ have
order $3^5$ and be normalized by $D$. Then $D$ acts transitively on the cyclic subgroups of $J_1$ and consequently
$J_1\cap Q = J_1 \cap A$, which has order $27$, has only one $G$-class of cyclic subgroups. As $Z \not\leq J_1 \cap
A$, we get that $(J_1 \cap A)Z = A$. Now all elements of $A$ not in $Z$ are conjugate, which contradicts
Lemma~\ref{Zconjs1}. Now we have that all the $G$-conjugates of $Z$ in $J_0$ are contained in $J$. Thus $N_G(J_0)
\le N_G(J)$.
\end{proof}
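For the reader's convenience, the divisibility argument used above can be checked directly. We have
$$|\mathrm{G}L_6(3)| = 3^{15}\prod_{i=1}^{6}(3^{i}-1)= 2^{13}\cdot 3^{15}\cdot 5\cdot 7\cdot 11^{2}\cdot 13^{2},$$
while for $1\le i\le 7$ the values of $40+27i$ are $67$, $94=2\cdot 47$, $121=11^{2}$, $148=2^{2}\cdot 37$, $175=5^{2}\cdot 7$, $202=2\cdot 101$ and $229$, of which only $121$ divides $|\mathrm{G}L_6(3)|$.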
\begin{lemma}\label{q1conjs} There are $36$ conjugates of $\langle q_1\rangle$ in $J$. In particular, $\langle q_1\rangle$ is centralized by an element of order $5$ in $N_G(J_0)$.
\end{lemma}
\begin{proof} In $J \cap Q$, there are nine $N_H(J)$-conjugates of $\langle q_1\rangle$
(which are already conjugate in $QW$) and in $Q\cap Q^g\cap J$ there are none by Lemmas~\ref{fusion1} and
\ref{QQg} (iv). Again as $J$ is the union of the four $P$-conjugates of $J \cap Q$, we
have $4\cdot 9$ conjugates of $\langle q_1\rangle$ in $J$.
Since, by Lemma~\ref{conjugates in Z}, $|N_G(J_0)|$ is divisible by $5$ and $5$ does not divide $36$,
we have that some element of order $5$ in $N_G(J_0)$ centralizes $\langle q_1\rangle$.
\end{proof}
\begin{lemma}\label{N_G(J)}
$N_G(J)/J_0 \cong \Omega_5(3).2$ or $\Omega_5(3).2 \times 2$. In particular, $r_1$ centralizes an element of order $5$ in $N_G(J)$.
\end{lemma}
\begin{proof}{ Let $M= N_G(J)$, $\mathcal P = Z^M$ and $\mathcal L = V^M$. We call the elements of $\mathcal P$
points and those in $\mathcal L$ lines. For $X\in \mathcal P$ and $Y \in \mathcal L$, declare $X$ and $Y$ to be
incident if and only if $X \le Y$. We claim that this makes $(\mathcal P, \mathcal L)$ into a generalized
quadrangle with parameters $(3,3)$.
For $X =Z^m \in \mathcal P$, $m \in M$, we set $Q_X =O_3(C_G(X))=Q^m$.
By Lemma~\ref{Pfacts1} (iv), we have $4$ points on each line. Suppose that $Z \le V^m\in \mathcal L$. Then either
$Z^m = Z$ or $Z^m \neq Z$ and $Z \le Q^m$. In the first case $m \in H\cap M$ and $V^m \le J\cap Q_Z$ and, in the
second case, we have $Z^m \le Q$ by Lemma~\ref{Pfacts1} (i) and so $V^m \le Q_Z$ again. Thus, if $X \in
\mathcal P$ is incident to a line $L\in \mathcal L$, then $L \le J\cap Q_X$.
By Lemma~\ref{Zconjs1} there are twelve $M$-conjugates of $Z$ in $(J \cap Q) \setminus Z$ and each of them
forms a line with $Z$. Thus $Z$ is contained in exactly $4$ lines and, furthermore, any two lines containing $Z$
meet in exactly $Z$ and any two points determine exactly one line.
Now suppose that $L\in \mathcal L$ is a line which is not incident to $X \in \mathcal P$. Then, as $|J:J\cap
Q_X|= 3$, we have $L \cap (J\cap Q_X)$ is a point and this is the unique point of $L$ which is collinear to $X$.
It follows that $(\mathcal P,\mathcal L)$ is a generalized quadrangle with parameters $(3,3)$.} By \cite{Payne1}
there is up to duality a unique such quadrangle. Hence we have that $N_G(J)/J_0$ induces a subgroup of
$\Omega_5(3).2$ on the quadrangle. Using Lemma \ref{conjugates in Z}, we see that the full group is induced. As
there might be some element which inverts $J$ and so acts trivially on $(\mathcal P,\mathcal L)$, we get the two
possibilities as stated.
Finally, as $r_1$ acts as a reflection on $J$, we see that $r_1$ centralizes an element of order 5.\end{proof}
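We note, as a check, that a generalized quadrangle with parameters $(3,3)$ has
$$(3+1)(3\cdot 3+1)=40$$
points, which agrees with the number of conjugates of $Z$ in $J$ given by Lemma~\ref{conjugates in Z}, and that $|\Omega_5(3)| = 2^{6}\cdot 3^{4}\cdot 5$.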
\begin{lemma}\label{alt6} We have $F^*(C_{N_G(J)}(q_1)/J_0) \cong \mathrm{Alt}(6) \cong \Omega_4^-(3)$.\end{lemma}
\begin{proof} Because $q_1$ is inverted by $r_1$ and $r_1$ acts on $J$ as a reflection, we have that $F^*(C_{N_G(J)}(q_1)/J_0)$ is an orthogonal group in dimension $4$. Since, by Lemma~\ref{q1conjs}, $q_1$ commutes with an element of order $5$, we have $F^*(C_{N_G(J)}(q_1)/J_0)\cong \Omega_4^-(3)\cong \mathrm{Alt}(6)$.
\end{proof}
\section{The Fischer group $\mathcal{M}(22)$ and its automorphism group}
In this section we will assume that $|R| = 2^7$ and determine the isomorphism type of $G$. Set $r = r_1$ and $K
= C_G(r)$. Recall that $R$ is a subgroup of $R_1\times R_2\times R_3 \cong \mathrm{Q}_8\times \mathrm{Q}_8\times\mathrm{Q}_8$ and $R \ge
\langle r_1, r_2, r_3\rangle= \Omega_1(Z(R))$.
\begin{lemma}\label{OmegaR} We have that $\Omega_1(Z(R)) \leq \Phi (R)$.
\end{lemma}
\begin{proof} Assume that $\Omega_1(Z(R)) \not\leq \Phi (R)$. As $w$ acts transitively on the set $\{r_1,r_2,r_3\}$, we
may assume that $r_i \not\in \Phi(R)$ for $1\le i\le3$. Let $U$ be a hyperplane in $\Omega_1(Z(R))$ which
contains $\Phi(R)$. Then, as $w$ normalizes $R$, we may assume that $\{r_1,r_2,r_3\} \cap U = \emptyset$. An easy
inspection of the maximal subgroups of $\Omega_1(Z(R))$ yields $U = \langle r_1r_2, r_2r_3 \rangle$. Therefore
$(R_1 \times R_2 \times R_3)/U$ is an extraspecial group of order $2^7$. We have that $R/U$ is of order $2^5$,
hence $R/U$ is not abelian. But then $\Phi (R) \not\leq U$, contrary to the choice of $U$, which is a contradiction. \end{proof}
{Recall from Lemma~\ref{Horder} (iii) that either $H =QRW$ or $H/BRQ \cong \mathrm{Sym}(3)$ and that in either case $S=WQ$. If
$H/BRQ \cong \mathrm{Sym}(3)$, then there is an element $iRQ$ of order $2$ which permutes $Q_2$ and $Q_3$ and centralizes
$r$. We let $i\in H$ be such an element where for convenience we understand that $i=1$ if $H=QRW$. Thus in any
case $H=QRW\langle i\rangle$. By Lemma~\ref{Pfacts2} (vi), $|N_G(Z):H|=2$ and $\ov W$ is
inverted by an involution $j$
in $N_G(Z) \cap N_G(S)$. Again, we can choose $j$ to centralize $rQ \in H/Q$ and consequently it can be further
chosen to centralize $r$. Thus we have
$N_K(Z) = Q_2Q_3RC_S(r) \langle i,j \rangle$ and this group has order $3^6\cdot 2^9$.}
\begin{lemma}\label{Cr1Fischer} Suppose that $|R| = 2^7$. Then $K \cong 2\udot \U_6(2) $ or $2\udot \U_6(2).2$.
\end{lemma}
\begin{proof} We have $N_K(Z) = Q_2Q_3RC_S(r) \langle i,j \rangle$. Since $Z(C_S(r)R/\langle r \rangle)$ acts faithfully on $Q_2Q_3$ and centralizes the fours group
$\Omega_1(R)/\langle r\rangle$, we see that $N_K(Z)/\langle r\rangle$ when embedded into $\mathrm{G}Sp_4(3)$ preserves
the decomposition of the associated symplectic space into a perpendicular sum of two non-degenerate spaces and
has $R/\langle r \rangle \cong \mathrm{Q}_8 \times \mathrm{Q}_8$ as a normal subgroup. Therefore, as $Q_2Q_3\cong
F^*(N_K(Z)/\langle r \rangle)$ is extraspecial of order $3^5$, we have
$N_K(Z)/\langle r\rangle$ is similar to a normalizer in a group
of $\U_6(2)$-type. By Lemma~\ref{fusion1}, no conjugate of $Z$ is $G$-conjugate to an element of $Q_2Q_3\setminus
Z$ and so $Z$ is weakly closed in $Q_2Q_3$ with respect to $K$. Since, by Lemma~\ref{N_G(J)}, $C_{N_G(J)}(r)$ has
an element $f$ of order $5$, we have $Z^f \le C_J(r)$ and, of course, $Z^f \neq Z$. It follows that $Z\langle
r\rangle/\langle r\rangle$ is not weakly closed in $C_S(r )\langle r\rangle/\langle r\rangle$ with respect to
$C_G(r)/\langle r\rangle$. Therefore, as $C_S(r)Q_2Q_3/Q_2Q_3$ has order $3$, Theorem~\ref{F4U6Thm} implies
that $C_G(r)/\langle r\rangle \cong \U_6(2)$ or $\U_6(2).2$. Since $R \leq C_G(r)$ and $r \in \Phi(R)$ by Lemma
\ref{OmegaR}, $F^*(C_G(r))$ does not split over $\langle r\rangle$. It follows that $K=C_G(r) \cong
2\udot\U_6(2)$ or $2\udot \U_6(2).2$ as claimed.
\end{proof}
Let $K_1= F^*(K) \cong 2\udot \U_6(2)$ and fix some Sylow 2-subgroup $T$ of $K_1$. In $T/\langle r \rangle$
there is a unique elementary abelian group of order $2^9$ with normalizer of shape $2^9:\PSL_3(4)$ (the
stabilizer of a totally isotropic subspace of dimension $3$). Let $E$ be the preimage of this subgroup. Since
$\PSL_3(4)$ acts irreducibly on $E/\langle r \rangle$ and $|E| = 2^{10}$, we get that $E$ is elementary abelian
of order $2^{10}$ with $N_{K_1}(E)/E \cong \PSL_3(4)$ and $C_G(E)=C_K(E) =E$. By \cite[(23.5.5)]{Asch}, $E$ is
an indecomposable module for $N_K(E)/E$.
\begin{lemma}\label{m22} We have that $N_G(E)/E \cong \mathcal{M}_{22}$ or $\mathrm{Aut}(\mathcal{M}_{22})$.
\end{lemma}
\begin{proof} As $r^H \cap R^\prime \not= \{r\}$ we have that $r^G \cap K_1 \not= \{r\}$.
As all involutions of $\U_6(2)$ are conjugate into $E$ (see \cite[(23.3)]{Asch}), we have that $r^{N_G(E)} \not=
\{r\}$. Recall that $E/\langle r \rangle$ is just the Todd module for $\L_3(4)$ and so $N_K(E)$ has orbits of
length $1$, $21$, $21$, $210$, $210$, $280$ and $280$ on $E$ (where some of these lengths may double as $E$ is
indecomposable) {by \cite[(22.2)]{Asch}.}
Then, as $Z(T)\le E$ has order $4$ by \cite[Table 5.3t]{GLS3},
$N_K(Z(T)) $ has shape $2.2^{1+8}_+.\SU_4(2)$. In particular, we can choose $t \in Z(T)$ such that $t$ is a
square in $K_1$ and $Z(T)=\langle r,t\rangle$. Since $r$ is not a square in $K_1$ by \cite[(23.5.3)]{Asch}, we have
$t$ is not $N_G(E)$-conjugate to $r$. Now taking into account that
$|N_G(E)/E|$ has to divide $|\mathrm{G}L_{10}(2)|$, we see that $|r^{N_G(E)}| = 2\cdot 11$, $2^9$ or $561$. {If $|r^{N_G(E)}|=
561$, then $|N_G(E)/E| = 2^{a}\cdot 3^3 \cdot 5 \cdot 7 \cdot 11 \cdot 17$, where $a = 6$ or $7$. As the
normalizer of a Sylow $17$-subgroup in $\mathrm{G}L_{10}(2)$ has order $2^4\cdot 3^2\cdot 5 \cdot 17$, Sylow's Theorem
implies that there must be $2^4\cdot 3\cdot 5 \cdot 7 \cdot 11$ Sylow $17$-subgroups in $N_{G}(E)/E$. In
particular the Sylow $3$-subgroup $D$ of the normalizer of the subgroup of order $17$ has order $9$ and is
elementary abelian. Two of the cyclic subgroups of $D$ are fixed point free on $E$, one has centralizer of order
$4$ and the final one centralizes a subgroup of order $2^8$. As the Sylow $3$-subgroups of $N_G(E)$ have order
$3^3$, at least one of these subgroups is conjugate into $N_K(E)$ and there we see that such groups all have
centralizer of order $2^4$ in $E$. This shows that this configuration cannot arise.}
So assume that $|r^{N_G(E)}| = 2^9$. {Then $|N_G(E)/E|=2^{a}\cdot 3^2 \cdot 5 \cdot 7$, $a = 15$ or $16$. Since
some orbit on $E$ is of odd length, we must have an orbit of length $21$, $231$, $301$ or $511$. As we know
$|N_G(E)|$, we get an orbit of length $21$. From the action of $\L_3(4)$ on this set, we see that no element of
odd order fixes more than $3$ points. Let $T \in \Syl_2(N_G(E)/E)$. Now $\mathrm{Sym}(21)$ has Sylow $2$-subgroups of
order $2^{18}$ and $\mathrm{Sym}(8)$ has Sylow $2$-subgroups of order $2^7$. Hence, as $|T| \ge 2^{15}$, there is an
involution $j \in T$ which fixes at least $13$ points and the product of two such involutions fixes at least $5$
points. It follows that $\langle j,j^x\rangle$ is a $2$-group for all $x \in N_G(E)/E$. Hence $O_2(N_G(E)) > E$
by the Baer-Suzuki Theorem and this contradicts the fact that $N_G(E)$ acts irreducibly on $E$ and $C_G(E)=E$.
So
we have that $|r^{N_G(E)}| = 22$. In particular we have that $N_G(E)/E$ acts triply transitively on $22$ points with
point stabilizer $\L_3(4)$ or $\L_3(4):2$. Using, for example, \cite{Lun}, we get that $N_G(E)/E$ is isomorphic to
$\mathcal{M}_{22}$ or $\mathrm{Aut}(\mathcal{M}_{22})$, the assertion.}\end{proof}
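As a consistency check on the orbit lengths used in the proof, note that
$$1+21+21+210+210+280+280 = 1023 = 2^{10}-1,$$
so the listed orbits account for all of the non-trivial elements of $E$.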
\begin{proof}[Proof of Theorem~\ref{Main1}] If $K= K_1$, then, as $r$ is not weakly closed in a Sylow $2$-subgroup of $G$ (it is conjugate to
$r_2$, for example), we have $G \cong \mathcal{M}(22)$ by \cite[Theorem 31.1]{Asch}. If $K> K_1$, then also $N_G(E)/E \cong
\mathrm{Aut}(\mathcal{M}_{22})$ and Lemma \ref{autm22} (ii) implies that $G$ has a subgroup $G_1$ of index $2$. We have $K_1 = K
\cap G_1$ and $G_1 \cong \mathcal{M}(22)$ by \cite[Theorem 31.1]{Asch}.
\end{proof}
\section{Some notation}
From here on we may suppose that $|R|= 2^9$. In this brief section we are going to reinforce some of our earlier notation in preparation for determining the centralizers of various elements in the coming sections.
We begin by recalling our basic notation which has already been established. We have that $R_1$, $R_2$, $R_3$ are the
normal quaternion subgroups of $R$ and that $Q_i = [Q,R_i]$ is extraspecial of order $27$. We have defined $Z(R_i) = \langle
r_i \rangle$, so that $Z(R) = \Omega_1(R)=\langle r_1,r_2,r_3 \rangle$. We have $B = C_S(Z(R))$ and $B =
\langle Z,x_{123}, x_2x_3^{-1} \rangle$, where the last generator is non-trivial just when $WQ < S$. By
Lemma~\ref{BEA} $B$ is elementary abelian. Further we have some $w \in N_H(R)$ with $Q_1^w = Q_2$, $Q_2^w = Q_3$
and $Q_3^w = Q_1$.
From Lemma~\ref{Horder} (i) and (ii) we have
$|H| = 2^{9+a}\cdot 3^{10}$ or $2^{9+a}\cdot 3^9$ where $a= 0, 1$. When $a=1$, just as in the case when
$|R|=2^7$,
there exists a further involution $i \in N_H(S)$. This involution can be chosen to centralize $Z$ and normalize
$R$. Since, by Lemma~\ref{Horder}, $\ov H$ is isomorphic to a subgroup of $\mathrm{Sp}_{2}(3) \wr \mathrm{Sym}(3)$, we see that
$i$ can be selected so that $Q_1$ is centralized by $i$, and so that $Q_2^i= Q_3$.
We take the involution $t \in N_P(Z) \cap N_G(S)$ from Lemma~\ref{Pfacts2} (vi). Since $t$ normalizes $QR$ and $Q
\le P$, we may assume that $t$ normalizes $R$. Since $t$ inverts $\ov W$, $t$ inverts $wQ$ and so $t$ permutes
$R_1$, $R_2$ and $R_3$ as a $2$-cycle. Thus we may suppose that $t$ normalizes $R_1$ and exchanges $R_2$ and
$R_3$. In particular, $t$ centralizes $r_1$ and acts on $Q_1$ inverting $Z$. Since $W/(Q\cap Q^g)$ is inverted by
$t$, we see, using Lemma~\ref{QQg} (iv), that $q_1(Q\cap Q^g)$ is inverted by $t$. Similarly $\wt {q_1} W$ is
centralized by $t$. It follows that $[Q_1, t]= Z\langle q_1\rangle$ and that $t$ inverts $q_1$.
\begin{lemma}\label{calc1} With the notation just established, we have $N_{N_G(Z)}(R) = R\langle z,x_{123}, x_2x_3^{-1}, w\rangle\langle i,t\rangle$. Furthermore,
\begin{enumerate}
\item $q_1^t = q_1^{-1}$.
\item $t$ inverts $\langle z,x_{123},w\rangle$ which is abelian and $t$ centralizes $x_{2}x_3^{-1}$.
\item $w^i= w^{-1}$ and $(x_{2}x_3^{-1})^i=(x_{2}x_3^{-1})^{-1}$.
\end{enumerate}
\end{lemma}
\begin{proof} We have already discussed (i).
By Lemma~\ref{Pfacts2}(vi), $t$ inverts $\ov W = \ov{\langle x_{123},w\rangle }$ and $t$ inverts $Z$. Thus we
may choose notation so that $t$ inverts $\langle z,x_{123}, w\rangle$.
Furthermore, we may suppose
that $t$ centralizes $x_2x_{3}^{-1}$. Now $C_X(i) =\langle Z,x_{123}\rangle$ and $[X,i] $ has order $9$. In
particular, $[X,i]\cap [X,t]$ has order $3$. We choose $w$ such that $[X,i]\cap [X,t]=\langle w \rangle$. Finally
we may suppose that $x_{2}x_3^{-1}$ is chosen so that it is inverted by $i$.
\end{proof}
\section{A signalizer}
Recall from Lemma~\ref{Pfacts2} (vi) that there is an involution $t \in P$ which inverts both $Z$ and $\ov W$ and that further properties of $t$ are listed in Section 6. We set $$H_0 = QWR\langle t\rangle$$ and note that, as $t$ inverts $\ov W$, $H_0$ is a normal subgroup of $N_G(Z)$.
\begin{lemma}\label{u62} The following hold.
\begin{enumerate}
\item $F^*(C_G(q_1)) \cong 3 \times \U_6(2)$; \item $|N_G(\langle q_1\rangle) :C_G(q_1)|=2$; and
\item $C_G(q_1)/F^*(C_G(q_1)) \cong N_G(Z)/H_0$ and is isomorphic to a subgroup of $\mathrm{Sym}(3)$.
\end{enumerate}
Furthermore $[r_1, E(C_G(q_1))] = 1$.
\end{lemma}
\begin{proof} We have $O^2(C_H(q_1)) = C_Q(q_1) (R_2R_3)B$ which has shape
$(3 \times 3^{1+4}_+).( \mathrm{Q}_8\times \mathrm{Q}_8).3^k$ where $3^k= |\ov B|$ with $k=1,2$. From Lemma~\ref{calc1} (i),
we have that $t$ inverts $q_1$ and, by definition, $t$ inverts $Z$; since $r_1$ inverts $q_1$ and centralizes $Z$,
we have that $r_1 t\in N_{C_G(q_1)}(Z)$. Thus $$C_{N_G(Z)}(q_1) = \langle q_1\rangle Q_2Q_3R_2R_3J_0\langle i,
r_1t\rangle.$$
Now we see that $O_3(C_{N_G(Z)}(q_1)/\langle q_1\rangle) = Q_2Q_3\langle q_1\rangle /\langle q_1\rangle$ is extraspecial of order $3^5$ and that $$O_2(C_{N_G(Z)}(q_1)/Q_2Q_3\langle q_1\rangle )= R_2R_3Q_2Q_3\langle q_1\rangle /Q_2Q_3\langle q_1\rangle \cong \mathrm{Q}_8\times \mathrm{Q}_8.$$ Thus $C_G(q_1)/\langle q_1\rangle$ is similar to a $3$-centralizer in either $\U_6(2)$ or $\F_4(2)$ (see Definition~\ref{U6F4def}).
By Lemma~\ref{q1conjs}, $q_1$ is centralized by an element $f$ of order $5$ in $N_G(J)$. Furthermore, $f$ does not
normalize $Z$ as $5$ does not divide the order of $H$. Since $Z^f \le J$ and $f \in C_G(q_1)$, we see that $Z$ is
not weakly closed in $C_S(q_1)$ and so it follows from Theorem~\ref{F4U6Thm} that $F^*(C_G(q_1))/\langle q_1\rangle \cong \U_6(2)$ or $\F_4(2)$ and that $C_G(q_1)/F^*(C_G(q_1)) \cong N_G(Z)/H_0$. Finally, as $N_{C_G(q_1)}(J)$ involves $\mathrm{Alt}(6)$ by Lemma~\ref{alt6}, the subgroup structure of $\F_4(2)$ implies that $$F^*(C_G(q_1)/\langle
q_1\rangle )\cong \U_6(2).$$
Now $\langle q_1 \rangle $ is normalized
by the involution $r_1$ and $r_1$ centralizes $C_H(q_1)/\langle q_1\rangle$. Hence, by Proposition
\ref{centralizerinvsU6}, $r_1$ centralizes $C_G(q_1)/\langle q_1\rangle$. Since $C_{H}(q_1)$ splits over $\langle q_1\rangle$,
we now have $F^*(C_G(q_1)) \cong 3 \times \U_6(2)$. This proves (i). Part (ii) follows as $r_1$ (and $t$) invert
$q_1$.
We also easily have
$C_G(q_1)/F^*(C_G(q_1)) \cong N_G(Z)/H_0$.
\end{proof}
Let $K = E(C_G(q_1))$. Then $K \cong \U_6(2)$ by Lemma~\ref{u62}. Since
$R_2 \le C_G(q_1)$, we have $r_2 \in K$. As $r_2$ centralizes $Q_3\cong 3^{1+2}_+$ in $K$, Proposition \ref{centralizerinvsU6} yields $$C_K(r_2) \cong 2^{1+8}_+:\U_4(2).$$ Notice
that $r_3$ is also in $K$ and therefore $q_2$ and $q_3 \in K$. From the structure of $C_S(q_1)$ we also have that
$z \in K$.
Furthermore, we have that $J_0 \cap K$ is elementary abelian of order $3^4$ and that $A \cap K = \langle
Z,q_2,q_3\rangle= C_A(r_1)$. Using \cite[Theorem 4.8]{PS1}, we get that $$F=N_{K}(J\cap K) \cong 3^4:\mathrm{Sym}(6).$$
Furthermore \cite[Lemma 4.2]{PS1} indicates that $Z$ has exactly 10 conjugates under the action of $F$. As $A
\cap K = J \cap O_3(C_K(Z))$ we see that $(A\cap K)^F$ has order $10$ and $F$ acts $2$-transitively on this
set.
We also have that $F$ commutes with $\langle q_1, r_1\rangle \le C_G(K)$ and $A \cap K = C_A(r_1)$. Let $f \in F$
be such that $C=(A\cap K) \cap ( A\cap K)^f = \langle q_2, q_3\rangle$.
Then, as $q_1$ and $q_2$ are $G$-conjugate, we obtain $$L=C_G(C)^\infty \le C_{C_G(q_2)}(q_3)^\infty \cong \U_4(2)$$ from
Lemma~\ref{u62}. In addition, $C$ commutes with $R_1R_1^f N_J(R_1)N_J(R_2)$ and therefore
$R_1R_1^f \le L \cong \U_4(2)$. If $R_1 = R_1^f$, then $R_1$ centralizes $J \cap K$. However,
$C_J(R_1) \le Q$ and $J \cap K \not \le Q$. Therefore $R_1 \not= R_1^f$ and this means that $r_1$ is a $2$-central
involution of $L$. Hence $R_1R_1^f \cong 2^{1+4}_+$ and we deduce that $R_1$ and $R_1^f$ commute as $R_1R_1^f$ contains exactly two subgroups isomorphic to $\mathrm{Q}_8$.
As $F$ acts $2$-transitively on the set $(A\cap K)^F$, we deduce that any two $F$-conjugates of $R_1$ commute and so $$E=\langle R_1^F\rangle \cong 2^{1+20}_+$$ and this is a $2$-signalizer for $F$.
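To make the order count explicit: any two distinct $F$-conjugates of $R_1$ commute and, as above, generate a copy of $2^{1+4}_+$, so they intersect precisely in $\langle r_1\rangle$. Hence $E$ is a central product over $\langle r_1\rangle$ of copies of $\mathrm{Q}_8$, one for each distinct $F$-conjugate of $R_1$, and the isomorphism type $2^{1+20}_+$ corresponds to there being ten such conjugates, in accordance with the ten members of the orbit $(A\cap K)^F$: indeed $|E| = 2^{1+2\cdot 10} = 2^{21}$.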
\begin{lemma}\label{signal1} The following hold.
\begin{enumerate}
\item $E$ is extraspecial of order $2^{21}$ and of plus type;
\item $C_E(Z)=R_1$;
\item $E$ is the unique maximal $2$-signalizer for $Q_2Q_3$ in $C_G(r_1)$; and
\item $C_{G}(\langle r_1, q_1\rangle)$ normalizes $E$.
\end{enumerate}
In particular, $K$ normalizes $E$.
\end{lemma}
\begin{proof} We have already remarked that (i) is true. Also, we know that $Q_2Q_3\le F$ and so $E$ is a $2$-signalizer for $Q_2Q_3$.
Suppose that $D$ is a $2$-signalizer for $Q_2Q_3$ in $C_G(r_1)$. Then $$D= \langle C_D(x) \mid x \in \langle
z,q_2\rangle^\# \rangle$$ and observe that $\langle z,q_2\rangle$ contains three $Q_2$-conjugates of $\langle q_2\rangle$. Now in $C_K(z)$ the only 2-subgroup which is normalized by $Q_2Q_3$ is $R_1$ and this
is contained in $E$. In particular, (ii) holds. So we consider signalizers for $\langle q_2,Q_3\rangle$ in $C_{C_G(r_1)}(q_2)$. First we
note that $R_1$ commutes with $q_2$ and so we have that $r_1 \in K_2=C_G(q_2)^\infty \cong \U_6(2)$ and, as
$Q_1Q_3 \le O_3(C_{K_2}(Z))$, we have that $Q_3 \le C_{K_2}(r_1)$ and this means that $r_1$ is a $2$-central
element of $K_2$ by Proposition \ref{centralizerinvsU6}. As an extraspecial group of order 27 in $\U_4(2)$ does
not normalize a non-trivial 2-group, we now have that the maximal signalizer for $Q_3$ in $C_{C_G(q_2)}(r_1)$ is
$O_2(C_{K_2}(r_1)) \cong 2^{1+8}_+$. We have that $\langle Z, q_2 \rangle$ acts on $E$ and $ C_E(\langle Z,
q_2\rangle) = C_E(Z) = R_1$. Since $$E = \langle C_E(x) \mid x \in \langle z,q_2\rangle^\# \rangle,$$ we have
$|C_E(q_2)|= 2^9$ and $C_E(q_2)= O_2(C_{K_2}(r_1))$. Therefore $C_D(q_2) \le E$. It now follows that $D \le E$
as claimed in (iii).
From the construction of $E$, we have that $E$ is normalized by $F$ and (ii) implies that $N_{C_G(\langle
q_1,r_1\rangle)}(Q_2Q_3)= N_{C_G(\langle q_1,r_1\rangle)}(Z)$ also normalizes $E$. Now either using \cite{Atlas}
or \cite{PS1} we have that $C_{G}(\langle q_1,r_1\rangle)$ normalizes $E$. This is (iv). Since $K \le
C_{G}(\langle q_1,r_1\rangle)$ by Lemma~\ref{u62}, we have $K \le N_G(E)$ as well.
\end{proof}
\begin{lemma}\label{NE} $F^\ast(N_{G}(E)/E) = KE/E \cong \U_6(2)$.
\end{lemma}
\begin{proof} Note that $N_G(E) = N_{C_G(r_1)}(E)$. In $N_{C_G(r_1)}(E)/E$ we have that $N_K(Z)E/E$ is a $3$-normalizer of type $\U_6(2)$. Therefore, as $Z$ is
not weakly closed in $C_S(r_1)E/E$ with respect to $N_{C_G(r_1)}(E)/E$, we have that $F^\ast(N_{C_G(r_1)}(E)/E)= EK/E$ from
Theorem~\ref{F4U6Thm}.
\end{proof}
\begin{lemma}\label{syl2} $N_G(E)/E$ acts irreducibly on $E/\langle r_1\rangle$ and $N_G(E)$ contains a Sylow $2$-subgroup of $G$.
\end{lemma}
\begin{proof} We know that $F^*(N_G(E)/E) \cong \U_6(2)$ and that $|E/\langle r_1\rangle|= 2^{20}$. The action of $F$ on $E$ shows that $E/\langle r_1\rangle$ is irreducible. Thus Lemma~\ref{Vfacts} implies that $E/\langle r_1\rangle$ is not a failure of factorization module for $N_G(E)/E$. In particular, if $T \in \syl_2(N_G(E))$, we have that $Z(T)=\langle r_1\rangle$ and the Thompson subgroup of $T/\langle r_1\rangle$ is $E/\langle r_1 \rangle$ by \cite[Lemma~26.15]{GLS2}. Thus $N_G(T) \le N_G(E)$ and so $T \in \syl_2(G)$.\end{proof}
We close this section with a technical detail that we shall need later.
\begin{lemma}\label{CKq2} We have $C_K(q_2) \cong 3 \times \U_4(2)$. \end{lemma}
\begin{proof} Set $X=\langle q_2, Q_3, (J\cap K) R_3\rangle \approx 3 \times 3^{1+2}_+.\mathrm{Q}_8.3$. Then $X \le C_K(q_2)$. As $\langle q_2\rangle =[J\cap K, r_2]$ and $N_{K}(J\cap K)/(J\cap K) \cong \mathrm O_4^-(3)$, we get $C_{N_K(J\cap K)}(q_2) \approx 3^4:\mathrm{Sym}(4)$. Hence $C_K(q_2) \cong 3 \times \U_4(2)$ as is seen in \cite{Atlas}.
\end{proof}
\section{The centralizer of an outer involution}
In this section we continue our investigation of the situation when $|R|=2^9$, assume that $H/BRQ\cong \mathrm{Sym}(3)$ and show that $G$ has a subgroup of index $2$. Thus, by Lemma~\ref{Horder}, $$\ov H \approx (\mathrm{Q}_8 \times \mathrm{Q}_8 \times \mathrm{Q}_8).3. \mathrm{Sym}(3)$$
or $$\ov H \approx (\mathrm{Q}_8 \times \mathrm{Q}_8 \times \mathrm{Q}_8).3^{1+2}_+ .2.$$
Since $H/BRQ\cong \mathrm{Sym}(3)$, Lemma~\ref{H/Q struct} implies that the Sylow $2$-subgroup of $H$ is isomorphic to
the Sylow $2$-subgroup of $\mathrm{Sp}_{2}(3)\wr \mathrm{Sym}(3)$ and hence we may select an involution $d$ which conjugates
$Q_2$ to $Q_3$ and centralizes an extraspecial ``diagonal'' subgroup of $Q_2Q_3$ and in addition centralizes $Q_1$ and normalizes $S$.
\begin{lemma}\label{F42} We have $C_G(d)/\langle d \rangle \cong \F_4(2)$.
\end{lemma}
\begin{proof} Since $d$ centralizes $Z$, we have $C_Q(d)$ is extraspecial of order $3^{1+4}$. Furthermore, as $\ov B$
has order $3$ or $3^2$ we have $|C_{\ov B}(d)|=3$. Thus $C_{S}(d)$ has order $3^{6}$. Furthermore, $C_{R}(d)= R_1
\times C_{R_2R_3}(d)$ is a direct product of two quaternion groups. It follows that $C_{C_G(d)}(Z)$ is a
$3$-centralizer in a group of type $\U_6(2)$ or $\F_4(2)$. Since $d$ normalizes $S$, $d$ normalizes $Z_2(S) = V$ and, as $V= Z\langle
q_1q_2q_3\rangle$, $d$ centralizes $V$ (see Lemma~\ref{Pfacts1}). From the definition of $P$, we now have that $d$ normalizes $P$.
Since $d$
centralizes $V$, we have that $C_{P\langle d\rangle}(V) = \langle d\rangle W$. A Frattini Argument now shows that
$C_{P\langle d\rangle}(d)W= P\langle d\rangle$. Therefore $C_{P}(d)$ acts transitively on the non-trivial
elements of $V$. Hence $Z$ is not weakly closed in $C_Q(d)$. Now Theorem~\ref{F4U6Thm} implies that $C_G(d)/\langle d\rangle
\cong \F_4(2)$ or $\mathrm{Aut}(\F_4(2))$. Since $|C_H(d)|= 2^7\cdot 3^6 $ it transpires that $C_G(d)/\langle d\rangle \cong
\F_4(2)$ as claimed.
\end{proof}
\begin{theorem}\label{index2}
If $H/BRQ\cong \mathrm{Sym}(3)$, then $G$ has a subgroup $G^\ast$ of index $2$ which satisfies the hypothesis of Theorem~\ref{Main} and in addition has $|(H \cap G^\ast)/BRQ|=3$.
\end{theorem}
\begin{proof}
Now let
$T \in \syl_2(N_G(E))$ and $T_0= T \cap EK$. By Lemma~\ref{syl2}, $T \in \syl_2(G)$.
Assume that $G$ does not have a subgroup of index $2$. Then by
\cite[Proposition 15.15]{GLS2} we have that there is a conjugate $d^*$ of $d$ in $T_0$ such that $C_T(d^*) \in
\syl_2(C_G(d^*))$. In particular, we must have $|C_{EK\langle d\rangle}(d^*)| = 2^{25}$. Using
Lemma~\ref{Vaction} (ii) we see that $d^*\not \in E$. Now note that $$C_{EK\langle d\rangle}(d^*)EK = EK\langle
d\rangle$$ by Lemma~\ref{centralizerinvs} and so we require $|C_{EK/\langle r_1\rangle}(d^* \langle r_1\rangle)| =
2^{23}$ or $2^{24}$ where in the latter case, we must have
$$C_{EK/\langle r_1\rangle}(d^* \langle r_1\rangle)> C_{EK}(d^*) \langle r_1\rangle/\langle r_1\rangle.$$
We now apply Lemma~\ref{centralizerinvs}. As $d^* \in Y^\prime$ in the notation of Lemma \ref{centralizerinvs},
this shows that (iv) and (v) do not apply. But then Lemma~\ref{centralizerinvs} provides no possibility for $d^*$, a contradiction.
\end{proof}
\section{Transferring the element of order $3$}
Because of Theorem~\ref{index2}, from here on we suppose that $H/BRQ$ has order 3.
In this section we show that if $S>QW$, then $G$ has a normal subgroup of index $3$ which satisfies the hypothesis of
Theorem~\ref{Main}. So assume that $S>QW$. Then, by Lemma~\ref{Horder} (ii), $\ov S$ is extraspecial and
$|H|= 2^{9}\cdot 3^{10}$ with $$\ov H \approx (\mathrm{Q}_8 \times \mathrm{Q}_8 \times \mathrm{Q}_8).3^{1+2}_+.$$
\begin{lemma}\label{index3} Suppose that $S>QW$ and $|H| = 2^{9}\cdot 3^{10}$. Then $G$ has a
normal subgroup $G^*$ of index $3$, $C_G(Z) \cap G^* = QWR\langle t\rangle$ is similar to a
$3$-centralizer of type ${}^2\E_6(2)$, and $Z$ is not weakly closed in $S\cap G^*$ with respect to $G^*$.
\end{lemma}
\begin{proof} We know that $S= QJ_0W$ and $N_G(Z)= QRWJ_0\langle t\rangle$ by Lemma~\ref{J}(v). From
Lemma~\ref{Pfacts2}(vi), $t$ inverts $\ov W$ and so, as $\ov S$ is extraspecial, $J_0Q/JQ \cong J_0/J$ is
centralized by $t$. Therefore $J_0\not \le N_G(Z)'$ and $S/J = J_0/J \times QW/J$. Since $J_0/J$ is a normal
subgroup of $N_G(J_0)/J$ we now have that $J_0 \not \le N_G(J_0)'$. As $J_0$ is abelian, we may use Lemma~\ref{ptransfer} (ii) to obtain $J_0 \not \le G'$.
Let $G^*$ be a normal subgroup of $G$ of index $3$. Then,
as $\ov W$ is inverted by $t$ and $Q= [Q,R]$, $S \cap G^* = QW$. It follows that $C_{G^*}(Z)= QWR$ and $M \cap
G^*= N_{G^*}(J) \not \le H$, in particular, $Z$ is not weakly closed in $S\cap G^*$ with respect to $G^*$. This
proves the lemma.
\end{proof}
\section{The centralizer of an involution}
Because of Lemma~\ref{index3}, we may now assume that $G$ satisfies the hypothesis of Theorem~\ref{Main}
with $S =QW$ and $H= QRW$. Thus we now have $$S= QW= Q\langle x_{123},w\rangle$$ where $x_{123}$ and $w$ are as introduced just before Lemma~\ref{J}.
\begin{lemma}\label{Sarah} We have $$C_G(q_2q_3^{-1})/\langle q_2q_3^{-1}\rangle \cong \Omega_8^+(2):3.$$
\end{lemma}
\begin{proof} Set $x=q_2q_3^{-1}$. Then $$C_Q(x) = \langle q_1, \wt q_1, q_2, q_3, \wt q_2 \wt q_3^{-1} \rangle.$$
Furthermore $[x_{123},x] = 1$ and $[w,x] \not\in Z$. Hence we see that $$C_S(x) = C_Q(x)\langle x_{123}\rangle.$$
We also have $C_R(x) = R_1$. So we have $$C_{H}(x)= \langle q_1,\wt q_1, q_2,q_3, \wt q_2 \wt
q_3^{-1}, x_{123},R_1 \rangle$$ and $C_{H}(x)/O_3(C_{C_G(Z)}(x))\cong \mathrm{SL}_2(3)$. Furthermore, $[C_{Q}(x),R_1] = Q_1$ has order $27$ and $C_{Q}(x)/\langle x \rangle$ is extraspecial of order $3^5$.
By Lemma \ref{QQg} we see that $x \in Q\cap Q^g$ and $[P,x] \leq V = ZZ^g$ by Lemma~\ref{Pfacts1}(iii). Since all the elements of the
coset $Vx$ are conjugate in $P$, it follows that we may assume that there is $U \le P$ with $U \cong \mathrm{Q}_8$ and $[U,x]=1$.
Then
$Z$ and $Z^g$ are conjugate by an element of $U$. It follows that $Z$ is not weakly closed in $C_Q(x)$ with respect to $C_G(x)$. Now we have $C_G(x)/\langle x \rangle \cong \POmega_8^+(2):3$
by Astill's Theorem~\ref{Astill}.
\end{proof}
Recall that the subgroup $E= \langle R_1^F\rangle$ from Lemma~\ref{signal1} is normalized by $C_{J}(r_1) = J\cap K$ and that $F= N_K(J\cap K) \approx 3^4:\mathrm O_4^-(3)$. Since $r_1$ centralizes $q_2q_3^{-1}$, we have that $q_2q_3^{-1} \in J\cap K$. Furthermore, we note that $F$ has exactly three orbits on the subgroups of order $3$ in $J \cap K$, with representatives $Z$, $\langle q_2\rangle$ and $\langle q_2q_3^{-1}\rangle$, and that these subgroups are in different $G$-conjugacy classes by Lemma~\ref{fusion1}. The next goal is to show that $N_G(E)$ is strongly $3$-embedded in $C_G(r_1)$. The next lemma facilitates this aim.
\begin{lemma}\label{q23sigs}
The following hold:
\begin{enumerate}
\item $C_E(q_2q_3^{-1}) \cong 2^{1+8}_+$;
\item $r_1$ is a $2$-central involution in $E(C_G(q_2q_3^{-1}))$;
\item $C_G(r_1) \cap C_G(\langle q_2q_{3}^{-1}\rangle) \le N_G(E)$;
\item $O_2(C_{E(C_G(q_2q_3^{-1}))}(r_1)) =C_E(q_2q_3^{-1})$; and
\item $r_1^{C_G(q_2q_3^{-1})} \cap E \not=\{r_1\}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $D= E(C_G(q_2q_3^{-1}))$. Then $D \cong \Omega_8^+(2)$ by Lemma~\ref{Sarah} as the Schur multiplier of $\Omega_8^+(2)$ is a $2$-group.
We have seen that $R_1$ centralizes $q_2q_{3}^{-1}$ and so $r_1 \in D $. As
$\langle z, q_2q_3^{-1} \rangle \le J\cap K$ acts on $E$ and $C_E(Z) =R_1\cong \mathrm{Q}_8$ by Lemma~\ref{signal1} (ii), by decomposing $E$ under the action of
$\langle z, q_2q_3^{-1} \rangle$
we see that $$C_E(q_2q_3^{-1}) \cong
2^{1+8}_+.$$
Hence (i) holds. Additionally, we have $S \cap K = C_{S}(\langle r_1,q_1\rangle) = Q_2Q_3\langle q_1, x_{123}\rangle$ and therefore
$$C_{S \cap EK}(q_2q_3^{-1}) = \langle q_2,q_3, \wt q_2 \wt q_3^{-1},x_{123}\rangle$$ has order $3^4$.
Using this and \cite{Atlas} we infer that $r_1$ is a
$2$-central element of $E(C_G(q_2q_{3}^{-1}))$ which is (ii).
Since $r_1$ is 2-central in $D$,
$$C_{C_G(q_2q_{3}^{-1})}(r_1) \approx (2^{1+8}_+.(\mathrm{Sym}(3)\times \mathrm{Sym}(3)\times \mathrm{Sym}(3)).3) \times 3$$ with
$O_2(C_{C_G(q_2q_{3}^{-1})}(r_1) ) = C_E(q_2q_3^{-1})$ normalized by $C_J(r_1)$. It follows that
$$C_{C_G(\langle q_2q_{3}^{-1} \rangle)}(r_1) = O_2(C_{C_G(q_2q_{3}^{-1})}(r_1) )N_{C_G(r_1)}(C_J(r_1)) \le N_G(E).$$ Thus (iii) and (iv) hold.
This proves the main part of the lemma and the remaining part
follows as $r_1$ is not weakly closed in $C_E(r_1)$ in $D$.
\end{proof}
\begin{lemma}\label{3-embedded} If $N_G(E) <C_G(r_1)$, then $N_G(E)=KE$ is strongly $3$-embedded in $C_G(r_1)$.
\end{lemma}
\begin{proof} Let $d \in N_G(E)$ be a $3$-element. Then $d$ is conjugate in $N_G(E)$ to an element of $C_J(r_1)$ by Lemma \ref{U62J}.
We have $N_{C_G(r_1)}(S\cap KE) = N_{C_G(r_1)}(Z)$ and so to prove the lemma it suffices to show that $$C_{C_G(r_1)}(\langle d \rangle) \le N_G(E)$$ for all $d \in C_J(r_1)^\#$ by \cite[Proposition 17.11]{GLS2}.
By Lemma \ref{q23sigs} (iii) we have that $$C_{C_G(r_1)}(\langle q_2q_3^{-1} \rangle ) \le N_G(E).$$ By Lemma \ref{signal1} we have that
$$C_{C_G(r_1)}(Z) \leq N_G(E).$$
Further we have that $C_{N_G(E)}(q_2)E/E = C_K(q_2)E/E \cong 3 \times
\U_4(2)$ from Lemma~\ref{CKq2}. Using Lemma \ref{u62} this shows that also $$C_{C_G(r_1)}(\langle q_2 \rangle) \le N_G(E).$$
By Lemma
\ref{fusion1} these subgroups $\langle q_2 \rangle$, $\langle q_2q_3^{-1} \rangle$ and $Z$ are in different conjugacy classes of $G$ and, as $N_K(J\cap K)$ has three orbits on the non-trivial cyclic subgroups of $J\cap K$, we have accounted for all conjugacy classes of elements of order three in $N_G(E)$. Consequently $N_G(E)$ is strongly $3$-embedded in $C_G(r_1)$.
\end{proof}
\begin{theorem}\label{NGECGr} $C_G(r_1) = N_G(E)=KE\approx 2^{1+20}_+:\U_6(2)$.
\end{theorem}
\begin{proof}This now follows from Lemma~\ref{3-embedded} and Theorem~\ref{Not3embedded}.
\end{proof}
\section{The identification of $G$}
For this section we set $r = r_1$, $L = C_G(r)$ and $K= E(C_G(q_1))$. From Theorem~\ref{NGECGr} we
have $L= N_G(E)$ and from Lemma~\ref{u62} and Lemma~\ref{NE} we have $K \cong \U_6(2)$ with $L=KE \approx 2^{1+20}_+.\U_6(2)$. In particular, $E$ is extraspecial of order $2^{21}$.
\begin{lemma}\label{controlfusionE} Suppose that $r^g \in E \setminus \langle r \rangle$ for some $g\in G$.
Define $F = \langle C_E(r^g), C_{E^g}(r) \rangle$ and $X = \langle E, E^g \rangle$.
Then
\begin{itemize}
\item[(i)] $E \cap E^g$ is elementary abelian of order $2^{11}$ and is a maximal elementary abelian subgroup of $E$.
\item[(ii)] $C_{E^g}(r) \leq L$ and $C_{E^g}(r)E/E$ is elementary abelian of order $2^9$.
\item[(iii)] $C_{L}(r^g)E/E \cong 2^9.\L_3(4)$ and $O_2(C_{L}(r^g)E)=(E^g\cap L)E$.
\item[(iv)] $F$ is normal in $X$, $X/F \cong \mathrm{Sym}(3)$ and
$[X,E \cap E^g] = \langle r,r^g \rangle$.
\item[(v)] If $h \in G$ and $r^h \in E \setminus \langle r \rangle$, then there is some $k \in EK$ such that $r^{hk} = r^g$.
\end{itemize}
\end{lemma}
\begin{proof} Since $E$ is extraspecial of order $2^{1+20}$, $C_E(r^g)$ is a direct product of $\langle r^g \rangle$ with an extraspecial group of order
$2^{1+18}$.
As $|L^g/E^g|$ is not divisible by $2^{19}$, there is no such extraspecial group in $L^g/E^g$ and therefore $r \in E^g$.\\
Because $\Phi (E \cap E^g) \leq \langle r \rangle \cap \langle r^g \rangle = 1$, $E\cap E^g$ is elementary
abelian. Hence, as $E$ is extraspecial, we have $|E \cap E^g| \leq 2^{11}$. In particular, as $|C_{E^g}(r)| =
2^{20}$, we have that $C_{E^g}(r)E/E$ is an elementary abelian group of order at least $2^9$. Since the $2$-rank
of $L/E$ is $9$, we deduce that $|C_{E^g}(r)E/E|= 2^9$ and $|E\cap E^g|= 2^{11}$. Furthermore $(E^g \cap L)E/E$
is uniquely determined. This completes the proof of parts (i) and (ii).
By Lemma \ref{Vfacts}, we have $|C_{E/\langle r \rangle}(C_{E^g}(r))| = 2$ and therefore $$C_{E/\langle r
\rangle}(C_{E^g}(r)) = \langle r, r^g \rangle/\langle r \rangle.$$ Hence we have that $C_{L/\langle r
\rangle}(\langle r, r^g \rangle/\langle r \rangle) = N_L(C_{E^g}(r))E$. This proves (iii).
As $C_E(r^g)$ and $ C_{E^g}(r)$ normalize each other,
$F$ is a 2-group and $$[E, C_{E^g}(r)] \leq C_E(r^g) \text{ and }[E^g, C_{E}(r^g)] \leq C_{E^g}(r)$$ which means that
$F$ is normal in $X$. In addition, $[E, E \cap E^g] \leq \langle r \rangle$ and $[E^g, E \cap E^g] \leq \langle r^g \rangle$. So the
group $(E \cap E^g )/\langle r, r^g \rangle$ is centralized by $X$. Suppose that $f \in C_X(\langle r, r^g
\rangle)$ has odd order. Then $f$ is in $L$ and centralizes $E \cap E^g$. As $E \cap E^g$ is a maximal
elementary abelian subgroup of $E$ we now have that $E$ is centralized by $f$ and this contradicts
Lemma~\ref{NE}. Thus $C_X(\langle r, r^g \rangle)$ is a $2$-group. Since, modulo $F$, the group $X$ is generated by two
conjugate involutions, $X/F$ is dihedral. This shows that $X/F \cong \mathrm{Sym}(3)$ and proves (iv).
Suppose that $r^h \in E \setminus \langle r \rangle$ for some $h \in G$. Then by (iii) $r^h\langle r \rangle$ is centralized by a
maximal parabolic subgroup of $L/E$ of shape $2^9.\L_3(4)$. But this group has a 1-dimensional centralizer in
$E/\langle r \rangle$ and so $r^h$ is conjugate to $r^g$ in $L$ which proves (v).
\end{proof}
We now fix some Sylow 2-subgroup $T$ of $L$. From Lemma \ref{q23sigs} we have that $$r^{C_G(q_2q_3^{-1})}
\cap E \not = \{r\}.$$ Thus there is $g \in G$ with $s =r^g \neq r$ and $s\in E$. By Lemma~\ref{controlfusionE} we
may assume that $Z_2(T) =\langle r,s\rangle$. We set $X= \langle E, E^g\rangle$, $$B= N_L(T)$$ and
$$P_1=BX.$$
For $2 \le j \le 4$, we let $P_j \ge B$ be such that $P_j/E$ is a minimal parabolic subgroup in $L/E$
containing $B/E$ and $L= \langle P_2,P_3,P_4\rangle.$ Set $I=\{1,2,3,4\}$ and for $\mathcal J \subseteq I$ define
$P_{\mathcal J}=\langle P_j \mid j \in \mathcal J\rangle$ and $M=P_I$.
We further choose notation such that
\begin{eqnarray*}P_{34}/O_2( P_{34} ) &\cong& \L_3(4)\\
P_{23} /O_2(P_{23}) &\cong& \U_4(2)\text{ and }\\P_{24}/O_2(P_{24}) &\cong& \mathrm{SL}_2(2)\times \mathrm{SL}_2(4).
\end{eqnarray*} Let ${\mathcal{C}} = (M/B, (M/P_k), k \in I)$ be the corresponding chamber system. Thus
$\mathcal C$ is an edge coloured graph with colours from $I=\{1,2,3,4\}$ and vertex set the right cosets $M/B$.
Furthermore, two cosets $Bg_1$ and $Bg_2$ form a $k$-coloured edge if and only if $Bg_2g_1^{-1} \subseteq P_k$.
Obviously $M$ acts on $\mathcal C$ by multiplying cosets on the right and this action preserves the colours. For
$\mathcal J \subseteq I$, set $M_{\mathcal J} = \langle P_k \mid k \in \mathcal J\rangle$ and $$\mathcal
C_{\mathcal J} = (M_{\mathcal J}/B, (M_{\mathcal J}/P_k), k \in\mathcal J) \subseteq \mathcal C.$$ Then
$\mathcal C_{\mathcal J}$ is the $\mathcal J$-coloured connected component of $\mathcal C$ containing the vertex
$B$.
\begin{lemma}\label{P1B} The following hold.
\begin{enumerate}
\item $|P_1:B|=3$.
\item $\mathcal{C}_{1,3}$ and $\mathcal{C}_{1,4}$ are generalized digons.\end{enumerate}
\end{lemma}
\begin{proof} By Lemma~\ref{controlfusionE} (iii), $P_{34}$ normalizes $Z_2(T)$. Hence $P_{34}$ acts on the set $\{E^h~|~ r^h \in Z_2(T)\}$ and consequently $P_{34}$ normalizes $X=\langle E, E^g
\rangle$. In particular, we have $P_1= BX$ and, as $X/O_2(X) \cong \mathrm{Sym}(3)$, (i) holds.
Now note that $$P_1P_3 = XBP_3= XP_3=P_3X= P_3BX= P_3P_1.$$
In particular, the cosets of $B$ in $\mathcal C_{1,3}$ correspond to the edges in a generalized digon with one part having valency $3$ and the other $5$. The same is true for $\mathcal C_{1,4}$ and so (ii) holds.
\end{proof}
Because of Lemma~\ref{P1B}, we have that $\mathcal C_1$ and $\mathcal C_2$ have three chambers and that
$\mathcal C_3$ and $\mathcal C_4$ each have five chambers. Furthermore, from the choice of notation we also have
that $\mathcal{C}_{3,4}$ is the projective plane $\mathrm{PG}(2,4)$ and that $\mathcal{C}_{2,3}$ is the generalised polygon
associated with $\SU_4(2)$. Furthermore, we have that $\mathcal{C}_{2,3,4}$ is the $\U_6(2)$ polar space.
\begin{lemma}\label{buildingup} We have $P_{12}/O_2(P_{12})
\cong \mathrm{SL}_3(2) \times 3$ and $P_{124} = P_{12}P_4$. In particular, $\mathcal{C}_{12}$ is the projective plane $\mathrm{PG}(2,2)$. \end{lemma}
\begin{proof}
We have that $C_{E/\langle r \rangle}(O_2(P_2))$ is 2-dimensional by Smith's Lemma \cite{Smith} and additionally $P_2/C_{P_2}(C_{E/\langle r \rangle}(O_2(P_2)))\cong \mathrm{SL}_2(2)$. It follows that $$C_{E/\langle r \rangle}(O_2(P_2))=Z_3(T)/\langle r \rangle.$$ Hence $P_2$ acts on $Z_3(T)$ and $O^3(P_2)$ induces $\mathrm{Sym}(4)$ on $Z_3(T)$
with the normal fours group inducing all transvections to $\langle r \rangle$. As $(E \cap E^g)/Z_2(T)$ is non-trivial and normal
in $T$, we have that $Z_3(T) \leq E \cap E^g$. Thus Lemma \ref{controlfusionE}(iv) yields that $P_1$
normalizes and induces $\mathrm{Sym}(4)$ on $Z_3(T)$ where now the normal fours group induces all transvections to
$Z_2(T)$. Hence $\langle O^3(P_1), O^3(P_2) \rangle$ induces $\mathrm{SL}_3(2)$ on $Z_3(T)$. Furthermore, we have that
$P_{12}=\langle O^3(P_1), O^3(P_2) \rangle C_G(Z_3(T))$.
We now see that $$X=\langle O^3(P_1), O^3(P_2) \rangle = \langle E^h ~|~ r^h \in Z_3(T) \rangle.$$
Since, by Lemma~\ref{P1B} (ii) and choice of notation, $X$ is normalized by $P_4$ and $\mathrm{SL}_2(4)$ is not isomorphic to a section of $\mathrm{SL}_3(2)$ we infer that
$O^2(P_4) \leq C_L(Z_3(T))$ and normalizes $\langle P_1, O^3(P_2) \rangle$. This shows that $C_{\langle P_1, O^3(P_2)
\rangle}(Z_3(T)) = O_2(\langle P_1, O^3(P_2) \rangle)$ as well as $P_{124}= P_{12}P_4$. Recall that $P_2 =
O^3(P_2)N_G(T)$ and $P_1 = O^3(P_1)N_G(T)$. So $P_{12} = \langle O^3(P_1), O^3(P_2) \rangle N_G(T)$ and this
completes the proof.
\end{proof}
\begin{lemma}\label{ominus} We have that $ P_{123} /O_2(P_{123}) \cong
\Omega^-_8(2)$. \end{lemma}
\begin{proof} Let $U_{23} $ be the preimage in $E$ of $C_{E}(O_2( P_{23}))$. Then, by Lemma~\ref{Vaction},
$U_{23}= [E,Et_1]$ where $Et_1$ is centralized by $P_{23}/E$. In particular, we have that $U_{23}/Z(E)$ is an
orthogonal module for $P_{23}/O_2(P_3) \cong \U_4(2)$ and, furthermore, $U_{23}/Z(E)$ is totally singular which
means that $U_{23}$ is elementary abelian. Since $U_{23}$ is normal in $T$, $Z_2(T) \le U_{23}\le E\cap L^g$
which is the unique $T$-invariant subgroup of $E$ of index $2$. Now $P_3/O_2(P_3) \cong \mathrm{SL}_2(4) \cong
\Omega_4^-(2)$ and
$$(E^g \cap L)O_{2}(P_{23})/O_2(P_{23}) = O_2(P_3)/O_2(P_{23}).$$ As $P_3$ normalizes a hyperplane in $U_{23}/Z(E)$, we have
that $[U_{23},E^g\cap L]$ has order $2^6$. In particular, $U_{23} \not \le E^g$ and, in fact,
$|U_{23}E^g/E^g|=2$ and is centralized by $O_{2}(P_1)E^g/E^g \in \syl_2(L^g/E^g)$. Thus $$[U_{23},E^g]=
U_{23}^g\text{ and } [U_{23}^g,E] =U_{23}.$$ Set $U_4 = U_{23}U_{23}^g$. Then, as $[U_{23},U_{23}^g]\le Z(E) \cap
Z(E^g)=1$, we have $U_4$ is elementary abelian. Furthermore, $[U_4,E^g] = U_{23}^g \le U_4$ and $[U_4,E]\le
U_{23}\le U_4$ and consequently $U_4$ is normalized by $X$. Since $X$ normalizes $P_3$ by Lemma~\ref{buildingup}
(i), we now have $\langle X, P_3\rangle =P_1P_3$ normalizes $U_4$. Note that $U_4E= U_{23}^gE= E\langle t_1\rangle$ and so
$C_{E}(U_4)$ has order $2^{15}$ by Lemma~\ref{Vaction}. Because $U_4$ is elementary abelian, we have $ U_4 \le
C_E(U_4)U_4$ and, as a $P_{23}/O_2(P_{23})$-module, $C_E(U_4)U_4/U_{23}$ has a natural $8$-dimensional
composition factor and a trivial factor. Since $U_4/U_{23}$ is stabilized by $P_3$ and the composition factors of
$P_3$ on $C_E(U_4)/U_{23}$ are both non-trivial, we find that $U_4$ is normalized by $P_{123}$.
Let $$\mathcal P=
\langle r \rangle^{P_{123}}\text { and }\mathcal L = \langle r, s \rangle ^{P_{123}}$$ and define incidence
between elements $x \in \mathcal P$ and $y \in \mathcal L$ if and only if $x \le y$. Of course all the points and
lines are contained in $U_4$. We claim that $(\mathcal P, \mathcal L)$ is a polar space. Because of the
transitivity of $P_{123}$ on $\mathcal P$, we only need to examine the relationship between $\langle r \rangle$
and an arbitrary member of $\mathcal L$. So let $l \in \mathcal L$. Then every involution of $l$ is $G$-conjugate
to $r$. Hence if $r^* \in l \cap E \;(= l\cap U_{23})$, then, by Lemma~\ref{controlfusionE} (v), $r^*$ is
$L$-conjugate to $r^g$. In particular, we have that $r^*$ is a vector of type $v_1$ in the notation of
Lemma~\ref{Vaction}. Since $P_{23}$ has three orbits on its $6$-dimensional module and since $U_{23}/\langle
r\rangle$ contains representatives of the three classes of singular vectors in $E/\langle r\rangle$, we infer
that $r^*$ is $P_{123}$-conjugate to an element of $\langle r,r^g\rangle$. Thus $\langle r, r^*\rangle \in
\mathcal L$. Since $|U_4:U_{23}|=2$, we have that $\langle r \rangle$ is incident to at least one point of $l$.
Assume that $\langle r \rangle$ is incident to at least two points, $p_1, p_2$ of $l$. Then $\langle r,
p_1\rangle \le E$ and $\langle r, p_2 \rangle\le E$. Hence $l \le E$. But then $r$ is incident to every point on
$l$. Thus we have shown that $(\mathcal P, \mathcal L)$ is a polar space. Since $Z_3(T) \le U_{23}$, we have that
$(\mathcal P, \mathcal L)$ has rank either $3$ or $4$. As $P_{123}$ induces $\Omega^-_6(2)$ on the lines
through $\langle r \rangle$, we get with \cite[Theorem on page 176]{Tits} that $(\mathcal P, \mathcal L)$ is the
polar space associated to $\Omega^-_8(2)$, the assertion.
\end{proof}
Combining Lemmas~\ref{P1B} and \ref{buildingup} we now have that $\mathcal C$ is a chamber system of type $\F_4$
with local parameters in which the panels of type $1$ and $2$ have three chambers and the panels of type $3$ and
$4$ have five chambers.
\begin{proposition}\label{2E62} We have $\mathcal C$ is a building of type $\F_4$
with automorphism group $\mathrm{Aut}({}^2\E_6(2))$. In particular, $M \cong {}^2\E_6(2)$. \end{proposition}
\begin{proof} The chamber systems $\mathcal C_{1,2}$, $\mathcal C_{3,4}$ are projective planes with parameters $3,3$ and $5,5$, and $\mathcal C_{2,3}$ is a generalized quadrangle with parameters $3,5$. The remaining $\mathcal C_{\mathcal J}$ with $|\mathcal J| = 2$ are all complete bipartite graphs. Thus, using the language of Tits in \cite{local}, $\mathcal C$ is a chamber system of type $\F_4$. Now suppose that $\mathcal J \subseteq \{1,2,3,4\}$ has cardinality three.
Then $\mathcal C_{1,2,3}$ is the $\mathrm O^-_8(2)$-building by Lemma~\ref{ominus} and, as $L/E \cong \U_6(2)$, we
have $\mathcal C_{2,3,4}$ is a building of type $\U_6(2)$. Finally, Lemma~\ref{buildingup} implies that
$\mathcal C_{1,3,4}$ and $\mathcal C_{1,2,4}$ are both buildings. Since each rank $3$-residue is a building, if
$\pi : {\mathcal{C}}^\prime \longrightarrow {\mathcal{C}}$ is the universal 2-covering of $\mathcal C$, then
$\mathcal{C}^\prime$ is a building of type $\F_4$ by \cite[Corollary 3]{local}. By \cite[Proof of Theorem 10.2
on page 214]{Tits} this building is uniquely determined by the two residues of rank three with connected
diagram (i.e. $\U_6(2)$, $\Omega^-_8(2)$) and so $F^*(\mathrm{Aut}(\mathcal{ C}^\prime))\cong {}^2\E_6(2)$. Now we have that there is a subgroup $U$ of
$\mathrm{Aut}(\mathcal {C}^\prime)$ such that $U$ contains $L$ and $U/D \cong M$ for a suitable normal subgroup
$D$ of $U$. As $L = L^\prime$, we have that $L \leq F^*(\mathrm{Aut}(\mathcal {C}^\prime))$ and so $L$ is a maximal parabolic of $F^*(\mathrm{Aut}(\mathcal {C}^\prime))$. As
$U \cap F^*(\mathrm{Aut}(\mathcal {C}^\prime)) > L$, we get
$F^*(\mathrm{Aut}(\mathcal {C}^\prime)) \leq U$. As $F^*(\mathrm{Aut}(\mathcal {C}^\prime))$ is simple this implies that $U = M$ and
therefore
$M\cong {}^2\E_6(2)$. \end{proof}
\begin{theorem}\label{G=M} The group $G$ is isomorphic to ${}^2\E_6(2)$.
\end{theorem}
\begin{proof} By \cite{AschSe} we have that $M$ has exactly three conjugacy classes of involutions. In
$E \setminus \langle r \rangle$ we also have three $C_M(r)$-classes by Lemma \ref{Vaction}. Using
Lemmas~\ref{controlfusionE} (iv) and (v) and the fact that $E/\langle r\rangle$ does not admit transvections from
$L$, we may apply Lemma~\ref{Fusion} to see that $x^G \cap E = x^L$ for all $x \in E\setminus \{1\}$. In
particular, the three conjugacy classes of involutions in $M$ all have representatives in $E$. Further, if $x \in
G$ with $r^x \in M$, then there is $h \in M$ such that $r^{xh} \in E$. But now by Lemma \ref{controlfusionE} we
may assume that $r^{xh} = r$. Then $xh \in L \leq M$ and so $x \in M$. Hence $M$ controls fusion of 2-central
elements in $M$.
If $Y$ is a normal subgroup of $G$, then, as $M$ contains the normalizer of a Sylow $3$-subgroup of $G$ and is
simple, we either have $M\le Y$ which means that $Y= G$ or $Y$ is a $3'$-group. Suppose the latter. Since $r_1$
is in $M$ and is non-central, we have $C_Y(r_1) \neq 1$. But then $C_Y(r_1) \le M$, a contradiction. Thus $Y=1$
and $G$ is a simple group. As $C_G(r_1) < M$ and $r_1^G \cap M= r_1^M$ we get with Lemma \ref{Holt}
that $G$ is isomorphic to one of
the following groups $\PSL_2(2^n)$, $\mathrm{PSU}_3(2^n)$, ${}^2\B_2(2^n)$ ($n\ge 3$ and odd) or $\mathrm{Alt}(\Omega)$.
In the first three classes of groups the point stabiliser in question is soluble and in the latter case it is
$\mathrm{Alt}(n-1)$. Since $M$ is neither soluble nor isomorphic to $\mathrm{Alt}(\Omega\setminus\{M\})$, we have a
contradiction. Hence $M=G$ and the proof of Theorem~\ref{G=M} is complete. \end{proof}
\section{The proof of Theorem~\ref{Main}}
Here we assemble the mosaic which proves Theorem~\ref{Main}. Thus we have that $C_G(Z)$ is a centralizer of type ${}^2\E_6(2)$ and so $|R|= 2^9$. Lemma~\ref{Horder} (i) and (ii) give the possibilities for the structure of $\ov H= H/Q$. If $|H|_2 = 2^{10}$, then Theorem~\ref{index2} implies that $G$ has a subgroup of index $2$ which satisfies the hypothesis of Theorem~\ref{Main}. Thus it suffices to prove the result for groups in which $|H|_2= 2^9$. This means that $S = QW$, or $S> QW$ and $\ov S = S/Q \cong 3^{1+2}_+$. The latter situation is addressed in Lemma~\ref{index3} where it is shown that if $S> QW$ then $G$ has a normal subgroup of index $3$ which also satisfies the hypothesis of Theorem~\ref{Main}. Thus we may assume that $S= QW$. Under this hypothesis, in Section~10 we prove Theorem~\ref{NGECGr} which asserts that $C_G(r_1)= N_G(E)= KE\approx 2^{1+20}_+:\U_6(2)$. Finally, in Section 11, we prove Theorem~\ref{G=M} which shows that, under the hypothesis that $C_G(r_1)=N_G(E) = KE$, $G \cong {}^2\E_6(2)$. Thus we have $F^*(G) \cong {}^2\E_6(2)$ and the theorem is proved.
\end{document}
\begin{document}
\title{The difference of convex algorithm\ on Hadamard manifolds}
\begin{abstract}
In this paper, we propose a Riemannian version of the difference of convex algorithm (DCA) to solve a minimization problem involving a difference of convex (DC) functions. We establish the equivalence between the classical and simplified Riemannian versions of the DCA. We also prove that, under mild assumptions, the Riemannian version of the DCA is well-defined, and every cluster point of the sequence generated by the proposed method, if any, is a critical point of the objective DC function. Additionally, we establish some duality relations between the DC problem and its dual. To illustrate the effectiveness of the algorithm, we present some numerical experiments.
\end{abstract}
\begin{keywords}
DC programming, DCA, Fenchel conjugate function, Riemannian manifolds
\end{keywords}
\begin{AMS}
\href{https://mathscinet.ams.org/msc/msc2010.html?t=90C30}{90C30}
\href{https://mathscinet.ams.org/msc/msc2010.html?t=90C26}{90C26}
\href{https://mathscinet.ams.org/msc/msc2010.html?t=49N14}{49N14}
\href{https://mathscinet.ams.org/msc/msc2010.html?t=49Q99}{49Q99}
\end{AMS}
\section{Introduction}
In this paper, we consider a general non-convex and non-smooth constrained optimization problem
involving a difference of convex functions (shortly, \emph{DC problem}) as follows
\begin{equation}\label{eq:DCOptP}
\argmin_{p\in \mathcal M} f(p), \qquad \text{where } f(p)\coloneqq g(p)-h(p),
\end{equation}
where the constrained set $\mathcal M $ is endowed with a structure of a
\emph{complete, simply connected Riemannian manifold of non-positive sectional curvature,
i.e., a Hadamard manifold}, the functions $g,h\colon\mathcal M \to \overline{\mathbb{R}}$,
are convex, lower semi-continuous and proper functions (called DC components), and
$\overline{\mathbb{R}}\coloneqq {\mathbb{R}}\cup\{+\infty\}$ is the extended real line.
Due to the increasing number of optimization problems arising from practical applications
posed in a Riemannian setting, the interest in this topic has increased significantly over
the years. Even though we are not concerned with practical issues at this point, we emphasize
that practical applications arise whenever the natural structure of the data is modeled as
an optimization problem on a Riemannian manifold. For example, several problems in image
processing, computational vision and signal processing can be modeled as problems in this
setting. Papers dealing with this subject include~\cite{BacakBergmannSteidlWeinmann2016,
BergmannPerschSteidl2016,BergmannWeinmann2016, BrediesHollerStorathWeinmann2018,
WeinmannDemaretStorath2014, WeinmannDemaretStorath2016}, and problems in medical imaging
modeled in this context are addressed in \cite{EspositoWeinmann2019}.
Problems of tracking, robotics and scene motion analysis are also posed on
Riemannian manifolds, as seen in~\cite{FreifeldBlack2012, ParkBobrowPloen1995}. Machine learning \cite{NickelKiela2018},
artificial intelligence~\cite{MuscoloniThomasCiucciBianconiVitt2017},
neural circuits \cite{Sharpee2019}, low-rank approximations of hyperbolic
embeddings~\cite{JawanpuriaMeghwanshiMishra2019,TabaghiDokmanic2020},
Procrustes problems~\cite{TabaghiDokmanic2021},
financial networks~\cite{Keller-ResselNargang2021},
complex networks~\cite{KrioukovPapadopoulosKitsakVahdatBoguna2010, MoshiriSafaeiSamei2021},
embeddings of data~\cite{WilsonHancockPekalskaDuin2014}
and strain analysis \cite{Vollmer2018,Yamaji2008} are some of the other areas where optimization problems on Riemannian manifolds are prevalent.
Additionally, we mention that there are many papers on statistics in the Riemannian context, as seen in \cite{BhattacharyaBhattacharya2008, Fletcher2013}.
As previously mentioned, there has been a significant increase in the number of works focusing on concepts and techniques of
nonlinear programming and convex analysis in the Riemannian setting, see~\cite{AbsilMahonySepulchre2008,Udriste1994}.
In addition to the theoretical issues addressed, which have an interest of their own,
the Riemannian machinery provides support to design efficient algorithms to solve
optimization problem in this setting; papers on this subject include~\cite{AbsiBakerGallivan2007,
EdelmanAriasSmith1999, HuangGallivanAbsil2015, LiMordukhovichWangYao2011, Manton2015,
MillerMalick2005, NesterovTodd2002, Smith1994, WenYin2012, WangLiWangYao2015} and references therein.
Recently, the concept of the conjugate of a convex function was introduced in the Riemannian context. This is an important tool in convex analysis and plays a significant role in the theory of duality on Riemannian manifolds,
see~\cite{SilvaLouzeiroBergmannHerzog2022,BergmannHerzogSilvaLouzeiroTenbrinckVidalNunez2021}.
In particular, this definition enables us to propose a Riemannian version of the DCA.
DC problems cover a broad class of non-convex optimization problems and DCA was the first method introduced especially for the standard DC problem \cref{eq:DCOptP}.
It was proposed by~\cite{PhamSouad1986} in the Euclidean setting.
The basic idea behind DCA is to compute a subgradient of each (convex) DC component separately,
i.e., at each iteration $k$, DCA calculates $y^{(k)}\in \partial h(x^{(k)})$ and uses this trial point
to compute $x^{(k+1)} \in \partial g^* (y^{(k)})$, where $\partial g^*$ denotes
the subdifferential of the conjugate function of $g$ in the sense of convex analysis.
Equivalently, DCA approximates the second DC component $h(x)$ by its affine minorization
$h_k(x)=h(x^{(k)}) + \langle x-x^{(k)}, y^{(k)} \rangle$, with $y^{(k)}\in \partial h(x^{(k)})$,
and minimizes the resulting convex function
\begin{equation*}
x^{(k+1)}\in \argmin_{x\in\mathbb R^n} g(x) - h_k(x),
\end{equation*}
which is called the alternative version of DCA therein.
On the other hand, computing $y^{(k)}\in \partial h(x^{(k)})$ is equivalent to finding a solution of
the dual problem
\begin{equation*}
\argmin_{y\in\mathbb R^n} h^*(y) - g^*(y^{(k-1)}) - \langle y-y^{(k-1)}, x^{(k)} \rangle.
\end{equation*}
Therefore, DCA can also be viewed as an iterative primal-dual subgradient method.
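To fix ideas, the following minimal sketch implements the Euclidean iteration just described for an illustrative DC decomposition that is not taken from this paper, namely $g(x)=\tfrac12\lVert x\rVert^2$ and $h(x)=\lVert x\rVert_1$; for this choice the convex subproblem has the closed-form solution $x^{(k+1)}=y^{(k)}$, i.e., $x^{(k+1)}\in\partial g^*(y^{(k)})$.
\begin{verbatim}
import numpy as np

# A minimal sketch of the Euclidean DCA, for the illustrative (assumed)
# decomposition g(x) = 1/2 ||x||^2 and h(x) = ||x||_1.
# Step 1: pick y_k in the subdifferential of h at x_k.
# Step 2: minimize the convex model g(x) - h(x_k) - <x - x_k, y_k>;
#         for g = 1/2 ||.||^2 this gives x_{k+1} = y_k.

def subgrad_h(x):
    # one subgradient of the l1-norm (the zero subgradient is chosen at kinks)
    return np.sign(x)

def dca(x0, iterations=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        y = subgrad_h(x)   # y_k in the subdifferential of h at x_k
        x = y              # argmin_x 1/2 ||x||^2 - <x, y_k>
    return x

print(dca([3.0, -0.2, 0.0]))   # a critical point of f = g - h
\end{verbatim}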
DC optimization algorithms have been proved to be particularly successful for analyzing
and solving a variety of highly structured and practical problems;
see for instance~\cite{Oliveira2020,LeThiPhamDinh2018,AnTao2005}.
To the best of our knowledge, the work in~\cite{SouzaOliveira2015} was the first to deal with DC functions in Riemannian manifolds. More precisely, the authors proposed the proximal point algorithm for DC functions (DCPPA) and studied the convergence of the method in Hadamard manifolds.
Recently, \cite{AlmeidaNetoOliveiraSouza2020} proposed a modified version
of the DCPPA in the same Riemannian setting in order to accelerate
the convergence of the method considered in~\cite{SouzaOliveira2015}.
The aim of this paper is to propose, for the first time, a Riemannian version of the DCA.
We obtain an equivalence between the classical and a simplified version of the Riemannian DCA.
Moreover, under mild assumptions, we prove that the Riemannian DCA is well-defined, and
every cluster point of the sequence generated by the proposed method, if any,
is a critical point of the objective DC function in \cref{eq:DCOptP}.
We also extend some relations between the DC problem \cref{eq:DCOptP} and
its dual to the Riemannian setting. To illustrate the effectiveness of DCA, we present some numerical experiments.
This paper is organized as follows. In \cref{sec:Preliminaries} we present some notations and preliminary results that will
be used throughout the paper. In \cref{sec:Duality} some relations between the DC problem and its dual
are established on Hadamard manifolds. In \cref{sec:DCA_manifolds} we present a formulation of the Riemannian DCA.
In \cref{sec:Convergence} we study the convergence properties of the proposed method.
In \cref{Sec:Numerics} we provide some applications to the problem of maximizing a convex function in a compact set and manifold-valued image denoising. Finally, \cref{sec:Conclusion} presents some conclusions.
\section{Preliminaries}
\label{sec:Preliminaries}
In this section, we recall some concepts, notations, and basic results about Riemannian manifolds and optimization.
For more details see, for example, \cite{doCarmo1992, Rapcsak1997, Sakai1996, Udriste1994}.
Let us begin with concepts about Riemannian manifolds.
We denote by $\mathcal M$ a finite dimensional Riemannian manifold and
by $T_p\mathcal M$ the \emph{tangent space} of $\mathcal M$ at $p$.
The corresponding norm associated to the Riemannian metric $\langle \cdot , \cdot \rangle$
is denoted by $\lVert\cdot\rVert$.
Moreover, the \emph{tangent bundle} of $\mathcal M$
will be denoted by $T\mathcal M$. We use $\ell(\gamma)$ to denote the length of a piecewise smooth curve
$\gamma\colon [a,b] \rightarrow \mathcal M$.
The Riemannian distance between $p$ and $q$ in $\mathcal M$ is denoted by $d(p,q)$;
it induces the original topology on $\mathcal M$, and
$(\mathcal M, d)$ is a complete metric space.
A complete, simply connected Riemannian manifold of non-positive sectional curvature
is called a Hadamard manifold. \emph{All Riemannian manifolds considered in this paper will be Hadamard manifolds and will be denoted by ${\mathcal M}$.} For $p\in\mathcal M$, the exponential map $\exp_p\colon T_p \mathcal M \to \mathcal M$ is a diffeomorphism
and $\exp^{-1}_p\colon\mathcal M\to T_p\mathcal M$ denotes its inverse.
In this case, $d(q, p) = \lVert \exp^{-1}_pq\rVert$ holds,
the function $d_q^2\colon\mathcal M\to\mathbb{R}$ defined by $d_q^2(p)\coloneqq d^2(q,p)$
is $C^{\infty}$ and its gradient is given by $\grad d_q^2(p) = -2\exp^{-1}_pq$.
Now, we recall some concepts and basic properties about optimization in the Riemannian context.
For that, given two points $p,q\in\mathcal M$, we denote by $\gamma_{pq}$ the geodesic segment joining $p$ to $q$, i.e.,
$\gamma_{pq}\colon[0,1]\rightarrow\mathcal M$ with $\gamma_{pq}(0)=p$ and $\gamma_{pq}(1)=q$. We denote by $\overline{\mathbb{R}}\coloneqq \mathbb{R}\cup\{+\infty\}$ the extended real line.
The \emph{domain} of a function $f\colon\mathcal M \to \overline{\mathbb{R}}$ is denoted by $\dom (f) \coloneqq \{ p\in \mathcal M\ : \ f(p) < +\infty\}$.
The function $f$ is said to be \emph{convex (resp. strictly convex)} if, for any $p,q\in \mathcal M$,
the composition $f\circ{\gamma_{pq}}\colon[0, 1]\to\mathbb{R}$ is convex (resp. strictly convex), i.e.,
$(f\circ{\gamma_{pq}})(t)\leq(1-t)f(p)+tf(q)$ (resp. $(f\circ{\gamma_{pq}})(t)<(1-t)f(p)+tf(q)$),
for all $t\in[0,1]$.
A function $f\colon\mathcal M \to \overline{\mathbb{R}}$ is said to be
\emph{$\sigma$-strongly convex} for $\sigma > 0$ if, for any
$p,q\in \mathcal M$, the composition $f\circ{\gamma_{pq}}\colon[0, 1]\to \overline{\mathbb{R}}$ is
$\sigma$-strongly convex, i.e.,
$(f\circ{\gamma_{pq}})(t)\leq(1-t)f(p)+tf(q)-\frac{\sigma}{2}t(1-t)d^2(q,p)$, for all $t\in[0,1]$.
\begin{definition}
The \emph{subdifferential} of a proper, convex function
$f\colon\mathcal M \to \overline{\mathbb{R}}$ at $p\in \dom (f)$ is the set
\begin{equation*}
\partial f(p)
\coloneqq
\bigl\{
X \in T_p\mathcal M\ : \
f(q) \geq f(p)+\langle X,\exp^{-1}_pq \rangle,
\quad\text{ for all } q\in \mathcal M
\bigr\}.
\end{equation*}
\end{definition}
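For instance, as a simple illustration (using the gradient formula recalled at the beginning of this section and the standard fact that, for a convex function which is differentiable at $p$, the subdifferential reduces to the singleton containing the Riemannian gradient), for a fixed $q\in\mathcal M$ the convex function $p\mapsto \tfrac12 d_q^2(p)$ satisfies
\begin{equation*}
\partial\Bigl(\tfrac12 d_q^2\Bigr)(p)=\bigl\{-\exp^{-1}_pq\bigr\},
\qquad\text{for all } p\in\mathcal M.
\end{equation*}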
The proof of the first item of the following theorem can be found in \cite[Theorem 4.10, p. 76]{Udriste1994}, while the proof of the second one follows the same idea as the first one.
\begin{theorem}
\label{thm:f-convex-subdiff}
Let $f\colon\mathcal M\to \mathbb{R}$ be a function. Then,
\begin{enumerate}
\item
\label{thm:f-convex-subdiffi}
The function $f$ is convex if and only if
$f(p)\geq f(q) + \langle X, \exp^{-1}_qp\rangle$,
for all $p, q\in \mathcal M$ and all $X\in \partial f(q)$.
\item
\label{thm:f-convex-subdiffii}
The function $f$ is $\sigma$-strongly convex if and only if
$f(p)\geq f(q) + \langle X, \exp^{-1}_q p \rangle + \frac{\sigma}{2}d^2(p,q)$,
for all $p, q\in \mathcal M$ and all $X\in \partial f(q)$.
\end{enumerate}
\end{theorem}
The following definition plays an important role in the paper, see \cite[p. 363]{Bourbaki1995}.
\begin{definition} \label{def:linf}
A function $f\colon\mathcal M\to \overline{\mathbb{R}}$ is said to be
\emph{lower semi-continuous} (\emph{lsc}) at $p\in \mathcal M$ if $ \liminf_{q\to p} f(q)\geq f(p)$. If $f$ is lower semi-continuous at every point of $\mathcal M$, we simply say that $f$ is \emph{lower semi-continuous}.
\end{definition}
The proof of the following result is an immediate consequence of \cite[Proposition 2.5]{WangLiWangYao2015}.
\begin{proposition}
\label{cont_subdif}
Let $f\colon \mathcal M \rightarrow \overline{\mathbb{R}}$ be a convex
and lower semi-continuous function. Consider the sequence
$(p^{(k)})_{k\in\mathbb{N}}\subset \intdom (f)$ such that $\displaystyle\lim_{k\to\infty}p^{(k)}={\bar p} \in \intdom (f)$.
If $(X^{(k)})_{k\in\mathbb{N}}$ is a sequence such that $X^{(k)}\in \partial f(p^{(k)})$
for every $k\in \mathbb{N}$, then $(X^{(k)})_{k\in\mathbb{N}}$ is bounded and
its cluster points belong to $\partial f({\bar p}).$
\end{proposition}
\begin{definition}
A function $f\colon\mathcal M \to \overline{\mathbb{R}}$ is said to be 1-\emph{coercive} if there exists a point ${\bar p}\in \mathcal M$ such that
\begin{equation*}
\lim_{d({\bar p},p)\to+\infty} \frac{{f(p)}}{{d({\bar p},p)}} = +\infty.
\end{equation*}
\end{definition}
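For instance, for any fixed ${\bar p}\in\mathcal M$ the function $p\mapsto d^2({\bar p},p)$ is 1-coercive, since $d^2({\bar p},p)/d({\bar p},p)=d({\bar p},p)\to+\infty$ as $d({\bar p},p)\to+\infty$.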
The \emph{global minimizer set} of a function $f\colon\mathcal M\to \overline{\mathbb{R}}$ is defined by
\begin{equation*}
\Omega^*\coloneqq\{q\in {\mathcal M}\ :\ f(q)\leq f(p), \text{ for all } p\in {\mathcal M}\}.
\end{equation*}
\begin{proposition}\label{prop:coercive}
Assume that $f\colon\mathcal M \to \overline{\mathbb{R}}$ is lsc and 1-\emph{coercive}. Then the global minimizer set of $f$ is non-empty.
\end{proposition}
\begin{proof}
Take ${\bar p}\in \mathcal M$ such that $\lim_{d({\bar p},p)\to+\infty} ({f(p)}/{d({\bar p},p)})= +\infty$. In particular, we conclude that $\displaystyle\lim_{d({\bar p},p)\to+\infty } {f(p)} = +\infty$.
Thus, there exists ${\bar r}>0$ such that
${\bar r}<d({\bar p},p)$ implies that $f({\bar p})\leq {f(p)}$.
Consider the set $B[{\bar p}, {\bar r}]\coloneqq \{ p\in \mathcal M\ :\ d(p, {\bar p})\leq {\bar r}\}$.
Since $\mathcal M$ is a Hadamard manifold, the Hopf-Rinow theorem ensures that
$B[{\bar p}, {\bar r}]$ is compact.
Thus, taking into account that $f$ is lsc, by~\cite[Theorem~3, p.~361]{Bourbaki1995}
there exists ${\hat p}\in B[{\bar p}, {\bar r}]$ such that $f({\hat p})\leq f(p)$,
for all $p\in B[{\bar p}, {\bar r}]$. Since $f({\hat p})\leq f({\bar p})\leq f(p)$ for all $p\notin B[{\bar p}, {\bar r}]$, we conclude that ${\hat p}$ is a global minimizer of $f$.
\end{proof}
\begin{lemma}\label{L:coercrf}
Let $g\colon\mathcal M \to\mathbb{R}$ be a $\sigma$-strongly convex function.
Take ${\bar p}\in \mathcal M$ and $X\in T_{{\bar p}}\mathcal M$.
Then, the function $f\colon\mathcal M \to\mathbb{R}$ defined by
$f(p)=g(p)-\big\langle X, \exp^{-1}_{\bar p}p \big\rangle$
is 1-coercive. Consequently, the global minimizer set of $f$ is non-empty.
\end{lemma}
\begin{proof}
Since the function $g\colon \mathcal M \to\mathbb{R}$ is $\sigma$-strongly convex, \cref{thm:f-convex-subdiff}~\cref{thm:f-convex-subdiffii} implies that
\begin{equation*}
g(p)\geq g({\bar p})
+ \langle {Y},\exp^{-1}_{\bar p}p\rangle
+ \frac{\sigma}{2}d^2({\bar p},p),
\qquad \text{for all } p \in \mathcal M \text{ and all } {Y}\in \partial g({\bar p}).
\end{equation*}
Thus, considering that $f(p)=g(p)-\big\langle X, \exp^{-1}_{\bar p}p \big\rangle$
and using the last inequality we conclude
\begin{align*}
\frac{f(p)}{d({\bar p},p)}
\geq
\frac{g({\bar p})}{d({\bar p},p)}
+ \Big\langle {Y},\frac{\exp^{-1}_{\bar p}p}{d({\bar p},p)}\Big\rangle
+ \frac{\sigma}{2}d({\bar p},p)
- \Big\langle X,\frac{\exp^{-1}_{\bar p}p}{d({\bar p},p)}\Big\rangle,
\qquad\text{ for all } {Y}\in \partial g({\bar p}).
\end{align*}
Since $d({\bar p},p)=\,\lVert\exp^{-1}_{\bar p}p\rVert$,
we obtain that the inner products in the last inequality are bounded. Hence, we have
\[
\lim_{d({\bar p},p)\to+\infty }\frac {f(p)}{d({\bar p},p)}=+\infty.
\]
Therefore, $f$ is 1-coercive. The second part of the lemma is an immediate consequence of the first one combined with \cref{prop:coercive}.
\end{proof}
The statement and proof of the next proposition can be found
in~\cite[Lemma 2.4, p. 666]{LiLopezMartinMarquez2009}.
\begin{proposition}\label{pr:cont_pi}
Let ${\bar p}\in \mathcal M$ and $(p^{(k)})_{k\in \mathbb{N}}\subset \mathcal M$
be such that $\displaystyle\lim_{k\to +\infty}p^{(k)}= {\bar p}$.
Then the following assertions hold:
\begin{enumerate}
\item
\label{pr:cont_pi_i}
For any $p\in \mathcal M$, we have
$\displaystyle\lim_{k\to +\infty}\exp_{p^{(k)}}^{-1}p = \exp_{{\bar p}}^{-1}p$
and $\displaystyle\lim_{k\to +\infty} \exp_p^{-1}p^{(k)} = \exp_p^{-1}{\bar p}$.
\item
\label{pr:cont_pi_ii}
If $X^{(k)}\in T_{p^{(k)}}\mathcal M$ and $\displaystyle\lim_{k\to +\infty}X^{(k)}= {\bar X}$,
then ${\bar X}\in T_{{\bar p}}\mathcal M$.
\item
\label{pr:cont_pi_iii}
Given $X^{(k)}\in T_{p^{(k)}}\mathcal M$, $Y^{(k)}\in T_{p^{(k)}}\mathcal M$,
${\bar X}\in T_{{\bar p}}\mathcal M$, and ${\bar Y}\in T_{{\bar p}}\mathcal M$.
If $\displaystyle\lim_{k\to +\infty}X^{(k)}= {\bar X}$
and $\displaystyle\lim_{k\to +\infty}Y^{(k)}= {\bar Y},$
then $\displaystyle\lim_{k\to +\infty}\langle X^{(k)}, Y^{(k)} \rangle
= \langle {\bar X},{\bar Y} \rangle$.
\end{enumerate}
\end{proposition}
We end this section by recalling several results from Fenchel duality on Hadamard manifolds, which play an important role in the following sections. It is worth emphasizing that we are limiting our study to finite-dimensional manifolds and that our emphasis is algorithmic. Consequently, we do not need to use the cotangent space for our purposes. Due to this, we decided to exclusively employ tangent spaces in the following results of the paper~\cite{SilvaLouzeiroBergmannHerzog2022}. We begin by recalling the definition of the conjugate of a proper function.
\begin{definition}\label{def:conj_nova}
Let $f\colon\mathcal M \to \overline{\mathbb{R}}$ be a proper function. The Fenchel conjugate of $f$ is
the function $f^{*} \colon T\mathcal M\to \overline{\mathbb{R}}$
defined by
\begin{equation*}
f^{*} (p,X)\coloneqq \sup_{q\in \mathcal M}
\bigl\{ \langle X, \exp^{-1}_pq \rangle - f(q) \bigr\},
\qquad (p,X) \in T\mathcal M.
\end{equation*}
\end{definition}
\begin{theorem}\label{th:FYIN}
Let $f\colon\mathcal M \to \overline{\mathbb{R}}$ be a proper function.
Then, the \emph{Fenchel-Young inequality} holds, i.e.,
for all $(p,X)\in T\mathcal M$ we have
\begin{equation*}
f(q)+f^{*}(p,X)\geq \langle X, \exp^{-1}_pq \rangle,
\qquad\text{for all } q \in \mathcal M.
\end{equation*}
\end{theorem}
\begin{theorem}
Let $f\colon \mathcal M\to \overline{\mathbb{R}}$ be a proper lsc convex function
and $p\in \mathcal M$.
Then the function
$f^{*}(p, \cdot)\colon T_p\mathcal M\to \overline{\mathbb{R}}$
is convex and proper.
\end{theorem}
\begin{definition}\label{def:subdif_novo}
Let $p \in \mathcal M$ and suppose that $f^{*}(p, \cdot)\colon T_p\mathcal M\to \overline{\mathbb{R}}$ is proper. The subdifferential of $f^{*}(p,\cdot )$ at $X \in T_p\mathcal M$,
denoted by $\partial_2f^{*}(p, X)$, is the set
\begin{equation*}
\partial_2f^{*}(p, X)
= \bigl\{
Y\in T_p\mathcal M ~:~ f^{*}(p,Z)\geq f^{*}(p,X)
+ \langle Z-X, Y \rangle, \quad \text{ for all }\quad
Z\in T_p\mathcal M
\bigr\}.
\end{equation*}
\end{definition}
Combining~\cite[Remark 3.3]{SilvaLouzeiroBergmannHerzog2022}
with~\cite[Corollary 3.16]{BergmannHerzogSilvaLouzeiroTenbrinckVidalNunez2021},
we obtain the following result.
\begin{theorem}\label{th:FYINPC}
Let $f\colon\mathcal M\to\overline{\mathbb{R}}$ be a proper function and $p\in \mathcal M$.
Then, $Y\in \partial_2f^{*}(p, X)$ if and only if
$f(\exp_p Y)+f^{*}(p,X)= \langle X, Y \rangle$.
\end{theorem}
\begin{remark}[{\cite[Remark 3.4]{SilvaLouzeiroBergmannHerzog2022}}]
\label{rmk:subdif}
If
$\mathcal M=\mathbb{R}^{n}$, then $f^{*}(p,X)=f^{*}(X)-\langle X,p \rangle$.
Moreover,
$\partial_2f^{*}(p,X) = \partial f^{*}(X)+\{-p\}$.
\end{remark}
\begin{definition}\label{def:bcconj_nova}
The Fenchel biconjugate of a function $f\colon \mathcal M\to \overline{\mathbb{R}}$ is the function
$f^{**}\colon\mathcal M\to \overline{\mathbb{R}}$ defined by
\begin{equation*}
f^{**} (p)\coloneqq \sup_{(q,X)\in T\mathcal M}
\Bigl\{
\langle X, \exp^{-1}_pq \rangle - f^*(q, X)
\Bigr\},
\qquad \text{ for all } p \in \mathcal M.
\end{equation*}
\end{definition}
\begin{theorem}
\label{th:bcec}
Let $f\colon \mathcal M\to \overline{\mathbb{R}}$ be a proper lsc convex function.
Then, $f^{**}=f$ holds.
\end{theorem}
\begin{theorem}\label{th:eqsubdifcf}
Let $f\colon \mathcal M\to \overline{\mathbb{R}}$ be a proper convex function.
Then,
\begin{equation*}
X \in \partial f(p)\text{ if and only if }f^{*}(p,{X})=-f(p).
\end{equation*}
\end{theorem}
\section{Duality in DC optimization in Hadamard manifolds}
\label{sec:Duality}
In this section our aim is to state and study the difference of convex optimization problem, or DC problem, and its dual problem, called the dual DC problem, in the Hadamard setting. The DC problem is defined as follows
\begin{equation}\label{Pr:DCproblem}
\argmin_{p\in \mathcal M} f(p), \qquad \text{where } f(p)\coloneqq g(p)-h(p),
\end{equation}
and $g\colon\mathcal M \to \overline{\mathbb{R}}$ and $h\colon\mathcal M \to \overline{\mathbb{R}}$ are proper, lsc and convex functions.
The DC problem is a non-convex and, in general, a non-smooth problem. In the following we further use the conventions
\begin{align}
\label{eq:conventions}
(+\infty)-(+\infty) = +\infty,\quad
(+\infty)-\lambda = +\infty, \text{ and }
\lambda- (+\infty) = -\infty,
\qquad \text{for all } \lambda \in \mathbb{R}.
\end{align}
Similarly to the Euclidean context, see \cite{TaoSouad1988}, the dual DC problem of problem~\eqref{Pr:DCproblem} is stated as follows
\begin{equation}\label{Pr:Dual}
\argmin_{(p,X) \in T\mathcal M}
\varphi (p, X),
\qquad\text{where } \varphi (p, X)
\coloneqq h^{*}(p,X)-g^{*}(p,X).
\end{equation}
Additional details concerning the appropriateness of the previous definition will be provided later in Theorem~\ref{equiv:dualpr}. In the following remark, we look at the relationship between \eqref{Pr:Dual} and its Euclidean counterpart.
\begin{remark}
\label{rem:DCPonRn}
If $\mathcal M=\mathbb{R}^{n}$, then $T_p\mathcal M \simeq \mathbb{R}^{n}$
for all $p\in \mathcal M$.
Consequently, $T\mathcal M \simeq \mathbb{R}^{n}$.
Moreover, by using \cref{rmk:subdif}, we obtain
\begin{align*}
h^{*}(p,X)-g^{*} (p,X)
= h^{*}(X)-\langle X,p \rangle - \left( g^{*}(X)-\langle X,p \rangle \right)
= h^{*}(X)-g^{*}(X), \quad \text{for all } X \in \mathbb{R}^n.
\end{align*}
Therefore, if $\mathcal M=\mathbb{R}^{n}$ then problem~\eqref{Pr:Dual} simplifies to
\begin{equation}\label{Pr:Dual_euclidean}
\argmin_{X \in \mathbb{R}^{n} } h^{*}(X)-g^{*} (X).
\end{equation}
In conclusion, for $\mathcal M=\mathbb{R}^{n}$, the dual \eqref{Pr:Dual} of the problem~\eqref{Pr:DCproblem} merges into the dual stated in \cite{TaoSouad1988}.
\end{remark}
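As a toy numerical illustration of \eqref{Pr:Dual_euclidean} (the decomposition below is an assumed example, not one used elsewhere in this paper), one may compare, on a grid, the values of the primal objective $g-h$ with those of the dual objective $h^{*}-g^{*}$ when both conjugates are available in closed form:
\begin{verbatim}
import numpy as np

# Assumed toy example: g(x) = a/2 x^2 and h(x) = b/2 x^2 with a > b > 0;
# the Fenchel conjugate of (c/2) x^2 is y^2 / (2c).
a, b = 2.0, 1.0
g      = lambda x: 0.5 * a * x**2
h      = lambda x: 0.5 * b * x**2
g_star = lambda y: y**2 / (2.0 * a)
h_star = lambda y: y**2 / (2.0 * b)

grid = np.linspace(-5.0, 5.0, 2001)
print(np.min(g(grid) - h(grid)))            # smallest primal value on the grid
print(np.min(h_star(grid) - g_star(grid)))  # smallest dual value on the grid
# both printed values are 0 for this particular decomposition
\end{verbatim}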
To proceed with the study of problems~\eqref{Pr:DCproblem} and~\eqref{Pr:Dual},
from now on we will assume that:
\begin{enumerate}[label={A\arabic*)}, ref={(A\arabic*)}]
\item
\label{it:A1} $g\colon\mathcal M \to \overline{\mathbb{R}}$
and $h\colon\mathcal M \to \overline{\mathbb{R}}$ are $\sigma$-strongly convex
and lsc functions, where $\sigma>0$;
\item
\label{it:A2} $f_{\inf}\coloneqq \displaystyle\inf_{x\in\mathcal M}
f(x) > -\infty$;
\item
\label{it:A3} $\dom (g) \subseteq \intdom(h);$
\item
\label{it:A4} $\partial_2g^{*}(p,X)\neq \varnothing$,
for every $X \in \dom (g^{*}(p,\cdot))
\coloneqq \{ X \in T_p\mathcal M\ : \ g^{*}(p,X)<+\infty \}$.
\end{enumerate}
Next, we discuss the above assumptions. First, we show that \ref{it:A1} is not restrictive.
\begin{remark}
Let $q \in \mathcal M$ and $\sigma >0$.
Consider the function $\mathcal M\ni p \mapsto \frac{\sigma}{2}d^{2}(q, p)$,
which is $\sigma$-strongly convex, see~\cite[Corollary 3.1]{NetoFerreiraPerez2002}.
If $\tilde g \colon\mathcal M\to \mathbb{R}$
and $\tilde h \colon\mathcal M\to \mathbb{R}$ are convex,
then taking $q\in \mathcal M$ and
setting $g(p)={\tilde g}(p) + \frac{\sigma}{2}d^{2}(q, p)$
and $h(p)={\tilde h}(p)+ \frac{\sigma}{2}d^{2}(q, p)$ we obtain two $\sigma$-strongly
convex functions $g$ and $h$ in $\mathcal M$.
In addition, $f (p)={\tilde g}(p)-{\tilde h}(p)=g(p)-h(p)$, for all $p\in \mathcal M$.
\end{remark}
\begin{remark}\label{rmk:1}
If assumption \ref{it:A2} holds, then $\dom (f)=\dom (g)\subseteq \dom (h)$.
Indeed, if $\dom (g)\nsubseteq \dom (h),$ then there exists $p\in \dom (g)$
such that $p\notin \dom (h)$, and hence by \eqref{eq:conventions}, we have that
$f(p)=g(p)-h(p)=g(p)-(+\infty)=-\infty$, which contradicts assumption \ref{it:A2}. Thus, $\dom (g)\subseteq \dom (h)$, which implies that $\dom (g)\subseteq \dom (f)$.
On the other hand, assume by contradiction that $\dom (f)\nsubseteq \dom (g)$.
Then, there exists $p\in \dom (f)$ such that $g(p)=+\infty$.
From \eqref{eq:conventions} we obtain that $f(p)=g(p)-h(p)=(+\infty)-h(p)=+\infty$,
which contradicts the fact that $p\in \dom (f)$.
Therefore, we conclude that $\dom (f)=\dom (g)$. In particular, under assumption \ref{it:A2} we have $\dom (f)=\dom (g)\subseteq \dom (h)$, so assumption \ref{it:A3} is only slightly more restrictive than assumption \ref{it:A2}.
We also note that if $\dom (h)=\mathcal M$, then assumption~\ref{it:A3} holds,
and if $\dom (g^{*}(p,\cdot)) = T_p\mathcal M$,
then assumption~\ref{it:A4} holds.
It is worth noting that assumption~\ref{it:A4} is used here to establish
the relationship between problems~\eqref{Pr:DCproblem} and~\eqref{Pr:Dual}.
\end{remark}
A necessary condition for the point $p^{*}\in \mathcal M$ to be a local minimum
of $f = g-h$ is that
$0\in \partial f (p^{*})\subset \partial g(p^{*})-\partial h(p^{*})$.
Hence, if $p^{*}\in \mathcal M$ is a solution of problem~\eqref{Pr:DCproblem},
then $\partial h(p^{*}) \subset\partial g(p^{*})$.
Consequently, $\partial g(p^{*}) \cap \partial h(p^{*}) \neq\varnothing$.
In this sense, we define a~\emph{critical point} of problem~\eqref{Pr:DCproblem}.
\begin{definition}
\label{def:critpoint}
A point $p^{*}\in \mathcal M$ is a critical point of $f$ in~\eqref{Pr:DCproblem}
if $\partial g(p^{*}) \cap \partial h(p^{*}) \neq\varnothing$.
\end{definition}
The next lemma establishes a necessary condition for a point
$({\bar p},{\bar X})\in T\mathcal M$
to be a solution of problem~\eqref{Pr:Dual}.
\begin{lemma}
\label{le:ncsoldual}
If $({\bar p},{\bar X})$ is a solution of problem~\eqref{Pr:Dual}, then
$\partial_2g^{*}({\bar p},{\bar X}) \subseteq \partial_2h^{*}({\bar p},{\bar X})$
holds.
\end{lemma}
\begin{proof}
Let $({\bar p},{\bar X})$ be a solution of problem~\eqref{Pr:Dual}.
Then,
$h^{*}(p, Y)- g^{*} (p,Y)\geq h^{*}({\bar p},{\bar X}) -g^{*} ({\bar p},{\bar X})$,
for all $(p,Y) \in T\mathcal M$. Thus, we have
\begin{align*}
h^{*}({\bar p},Y)-h^{*}({\bar p},{\bar X})
\geq
g^{*} ({\bar p},Y) -g^{*} ({\bar p},{\bar X}),
\qquad \text{for all}\quad Y \in T_{{\bar p}}\mathcal M.
\end{align*}
Take $Z\in \partial_2g^{*}({\bar p},{\bar X})$.
By \cref{def:subdif_novo}, we have
$g^{*}({\bar p},Y)-g^{*}({\bar p},{\bar X}) \geq \langle Y-{\bar X}, Z \rangle$,
for all $Y \in T_{{\bar p}}\mathcal M$,
which combined with the last inequality yields
\begin{align*}
h^{*}({\bar p},Y)-h^{*}({\bar p},{\bar X})
\geq \langle Y-{\bar X}, Z \rangle,
\qquad \text{for all}\quad Y \in T_{{\bar p}}\mathcal M.
\end{align*}
This implies, by \cref{def:subdif_novo}, that $Z\in \partial_2h^{*}({\bar p},{\bar X})$,
and the statement is proved.
\end{proof}
\begin{remark}
If $\mathcal M=\mathbb{R}^{n}$, then by using \cref{rmk:subdif} we have
$\partial_2g^{*}({\bar p},{\bar X}) = \partial g^{*}({\bar X})-\{{\bar p} \}$
and $\partial_2h^{*}({\bar p},{\bar X}) = \partial h^{*}({\bar X})-\{{\bar p} \}$.
Thus, from \cref{rem:DCPonRn} and \cref{le:ncsoldual} we conclude that
if ${\bar X} \in \mathbb{R}^{n}$ is a solution of problem~\eqref{Pr:Dual_euclidean},
then we have $\partial g^{*}({\bar X})\subseteq \partial h^{*}({\bar X})$,
which yields \cite[Theorem~2.1~(2)]{TaoSouad1988}.
\end{remark}
Having defined a critical point for the primal problem in \cref{def:critpoint}, we now turn to the dual problem. It follows from \cref{le:ncsoldual} that if $({\bar p},{\bar X})$ is a solution
of the problem~\eqref{Pr:Dual}, then the set
$\partial_2h^{*}({\bar p},{\bar X})\cap \partial_2g^{*}({\bar p},{\bar X})$
is non-empty. Hence, we define the notion of critical point for the problem~\eqref{Pr:Dual} as follows:
\begin{definition}
A point $({\bar p},{\bar X})$ is \emph{a critical point} for problem~\eqref{Pr:Dual} if $
\partial_2h^{*}({\bar p},{\bar X})\cap \partial_2g^{*}({\bar p},{\bar X})
\neq\varnothing.$
\end{definition}
\begin{remark}
If $\mathcal M=\mathbb{R}^{n}$, then by using \cref{rmk:subdif} we have
$\partial_2g^{*}({\bar p},{\bar X}) = \partial g^{*}({\bar X})-\{{\bar p} \}$ and
$\partial_2h^{*}({\bar p},{\bar X}) = \partial h^{*}({\bar X})-\{{\bar p} \}$.
Thus, if $({\bar p},{\bar X})$ is a critical point of problem~\eqref{Pr:Dual},
then there exists
$Z\in \partial_2h^{*}({\bar p},{\bar X})\cap \partial_2g^{*}({\bar p},{\bar X})$.
Hence,
$Z+{\bar p} \in \partial g^{*}({\bar X})\cap \partial h^{*}({\bar X})\neq\varnothing$.
Therefore, ${\bar X}\in \mathbb{R}^{n}$ is
a critical point of problem~\eqref{Pr:Dual_euclidean}.
\end{remark}
To proceed with our analysis we need the next lemma; for a proof, see \cite[p. 46]{BartleSherbert2000}.
\begin{lemma}\label{le:iterinf}
Let $X $ and $Y$ be non-empty sets and $f\colon X\times Y \to \mathbb{R}$ a function.
Then, it holds
\begin{align*}
\inf_{(x,y)\in X\times Y} f(x,y)
= \inf_{x\in X} \inf_{y \in Y} f(x,y)
= \inf_{y \in Y} \inf_{x \in X} f(x,y).
\end{align*}
\end{lemma}
The next theorem presents the relation between the optimum values of
problems~\eqref{Pr:DCproblem} and~\eqref{Pr:Dual}.
\begin{theorem}
\label{equiv:dualpr}
Let $g\colon\mathcal M \to \overline{\mathbb{R}}$ and $h\colon\mathcal M \to \overline{\mathbb{R}}$ be proper, lsc and convex functions. Then, there holds
\begin{equation*}
\inf_{(q,X) \in T\mathcal M}
\Bigl\{ h^{*}(q,X)-g^{*} (q,X) \Bigr\}
= \inf_{p\in \mathcal M}\left\{ g(p) - h(p) \right\}.
\end{equation*}
\end{theorem}
\begin{proof}
Since $h$ is convex, \cref{th:bcec} implies that $h^{**}=h$.
Thus, using \cref{def:bcconj_nova} we have
\begin{align*}
\inf_{p\in \mathcal M} \{g(p)-h(p)\}
& = \inf \{g(p)-h^{**}(p)\ : \ p\in \mathcal M\}
\\
& = \inf \Biggl\{
g(p) - \sup_{(q,X)\in T\mathcal M}
\Bigl\{
\langle X,\exp^{-1}_qp \rangle - h^{*}(q,X)
\Bigr\}
\ : \ p\in \mathcal M
\Biggr\}.
\end{align*}
Since $\displaystyle\sup_{(q,X)\in T\mathcal M}\{\langle X,\exp^{-1}_qp \rangle - h^{*}(q,X) \}=-\displaystyle\inf_{(q,X)\in T\mathcal M}\{h^{*}(q,X) -\langle X,\exp^{-1}_qp \rangle \}$, the last equality is equivalent to
\begin{equation*}
\inf_{p\in \mathcal M} \{g(p)-h(p)\}
= \inf_{p\in \mathcal M} \inf_{(q,X)\in T\mathcal M}
\Bigl\{ g(p) + h^{*}(q,X) -\langle X,\exp^{-1}_qp \rangle \Bigr\},
\end{equation*}
which, using \cref{le:iterinf}, can still be expressed equivalently as
\begin{equation*}
\inf_{p\in \mathcal M} \{g(p)-h(p)\}
= \inf_{(q,X)\in T\mathcal M} \inf_{p\in \mathcal M}
\Bigl\{ g(p) + h^{*}(q,X) -\langle X,\exp^{-1}_qp \rangle \Bigr\}.
\end{equation*}
Since $\inf_{p\in \mathcal M} \{ g(p) + h^{*}(q,X) -\langle X,\exp^{-1}_qp \rangle \}= h^{*}(q,X) - \sup_{p\in \mathcal M}\bigl\{ \langle X,\exp^{-1}_qp \rangle - g(p) \bigr\}$, the last equality can be rewritten as
\begin{equation*}
\inf_{p\in \mathcal M} \{g(p)-h(p)\}
= \inf_{(q,X)\in T\mathcal M}
\Biggl\{
h^{*}(q,X) - \sup_{p\in \mathcal M}
\bigl\{ \langle X,\exp^{-1}_qp \rangle - g(p) \bigr\}
\Biggr\},
\end{equation*}
which, by using \cref{def:conj_nova}, yields the desired equality
and the proof is concluded.
\end{proof}
\begin{theorem}\label{th:emp}
The following statements hold:
\begin{enumerate}
\item
\label{th:empi}
If ${\bar p}\in \mathcal M$ is a solution of problem~\eqref{Pr:DCproblem},
then $({\bar p},{\bar Y})\in T\mathcal M$ is a solution of
the problem~\eqref{Pr:Dual},
for all ${\bar Y} \in \partial h({\bar p}) \cap \partial g({\bar p})$.
\item
\label{th:empii}
If $({\bar p},{\bar Y})\in T\mathcal M$ is a solution of
problem~\eqref{Pr:Dual}, for some
${\bar Y} \in \partial h({\bar p}) \cap \partial g({\bar p})$,
then ${\bar p}\in \mathcal M$ is a solution of problem~\eqref{Pr:DCproblem}.
\end{enumerate}
\end{theorem}
\begin{proof}
To prove \cref{th:empi}, assume that ${\bar p}\in \mathcal M$ is a solution of
problem~\eqref{Pr:DCproblem}.
Thus, we have $\partial h({\bar p}) \cap \partial g({\bar p}) \neq \varnothing$.
Let ${\bar Y} \in \partial h({\bar p})\cap \partial g({\bar p})$.
Since $g$ and $h$ are convex, by \cref{th:eqsubdifcf} we have
$-g^{*}({\bar p},{\bar Y}) = g({\bar p})$ and
$ h^{*}({\bar p},{\bar Y}) = -h({\bar p})$, which implies that
$h^{*}({\bar p},{\bar Y})-g^{*}({\bar p},{\bar Y}) = g({\bar p})-h({\bar p})$.
Using again that ${\bar p}\in \mathcal M$ is a solution of problem~\eqref{Pr:DCproblem},
the last equality together with \cref{equiv:dualpr} ensure that $({\bar p},{\bar Y})$
is a solution of problem~\eqref{Pr:Dual}, and hence, the \cref{th:empi} is proved.
We proceed to prove \cref{th:empii}. To this end, we assume that $({\bar p},{\bar Y})$
is a solution of problem~\eqref{Pr:Dual} with
${\bar Y}\in \partial h({\bar p}) \cap \partial g({\bar p})$.
Since $g$ and $h$ are convex and
${\bar Y} \in \partial h({\bar p}) \cap \partial g({\bar p})$,
it follows from \cref{th:eqsubdifcf} that $-g^{*}({\bar p},{\bar Y}) = g({\bar p})$
and $ h^{*}({\bar p},{\bar Y}) = -h({\bar p})$, which implies
\begin{equation} \label{eq:eqif}
g({\bar p})-h({\bar p})=h^{*}({\bar p},{\bar Y})-g^{*} ({\bar p},{\bar Y})
=\inf_{(p,X) \in T\mathcal M}
\bigl\{ h^{*}(p,X)-g^{*} (p,X) \bigr\}.
\end{equation}
On the other hand, \cref{equiv:dualpr} implies that
\begin{equation*}
\inf_{(p,X) \in T\mathcal M}
\bigl\{ h^{*}(p,X)-g^{*} (p,X)\bigr\}
= \inf_{q\in \mathcal M} \bigl\{ g(q) - h(q) \bigr\}
\leq g({\bar p})-h({\bar p}).
\end{equation*}
Combining the last inequality with \eqref{eq:eqif} yields
$g({\bar p})-h({\bar p}) = \inf_{q\in \mathcal M}\bigl\{ g(q) - h(q) \bigr\}$.
Hence, ${\bar p}\in \mathcal M$ is a solution of problem~\eqref{Pr:DCproblem}.
\end{proof}
\section{DCA on Hadamard manifolds}
\label{sec:DCA_manifolds}
The aim of this section is to present an extension of the DCA to Hadamard manifolds.
To this end, we first propose an extension of the classical DCA, which is based
on the Fenchel conjugate introduced in \cref{def:conj_nova}.
As the DCA is dependent on the Fenchel conjugate of the first component of the objective function, which is in general difficult to compute, we provide a much simpler version of DCA on Hadamard manifolds based on a first-order approximation of the second component.
We also show that these algorithms are well defined and that they are equivalent in the Riemannian setting, just as in the linear setting.
The DCA based on Fenchel conjugate is stated in \cref{Alg:DCA1}, and
the second version in \cref{Alg:DCA2}.
\begin{algorithm}[hbp]
\caption{The DC Algorithm on Hadamard Manifolds (DCA1)}\label{Alg:DCA1}
\begin{algorithmic}[1]
\STATE {Choose an initial point $p^{(0)}\in \dom (g) $. Set $k=0$.}
\STATE{Take $X^{(k)}\in\partial h(p^{(k)})$, and compute
\begin{equation}
\label{eq:DCAS_natural}
\begin{split}
Y^{(k)}&\in \partial_2 g^{*}(p^{(k)},X^{(k)}),
\\
p^{(k+1)} &\coloneqq \exp_{p^{(k)}}Y^{(k)}.
\end{split}
\end{equation}
}
\STATE{If $p^{(k+1)} =p^{(k)}$, then STOP and return $p^{(k)}$. Otherwise, go to Step~4.}
\STATE{Set $k \leftarrow k+1$ and go to Step~2.}
\end{algorithmic}
\end{algorithm}
As mentioned before, \cref{Alg:DCA1} relies on the Fenchel conjugate, which can be difficult to compute in practice. However, this algorithm is conceptually useful and can be shown to be equivalent to a more practical and computable algorithm that does not rely on the Fenchel conjugate. The following two results will be used to demonstrate the well-definedness of \cref{Alg:DCA1}.
\begin{lemma}
\label{le:wedda}
If $p\in \dom (h)$ and $Y\in\partial h(p)$,
then $\dom (h^{*}(p,\cdot))\subseteq \dom (g^{*}(p,\cdot))$ and
\begin{equation*}
Y \in \dom (g^{*}(p,\cdot))
=\{ X \in T_p\mathcal M\ : \ g^{*}(p,X)<+\infty \}.
\end{equation*}
In particular, $\partial_2 g^{*}(p,Y)\neq \varnothing$.
\end{lemma}
\begin{proof}
Assume that $p\in \dom (h)$ and take $Y\in\partial h(p)$.
Thus, by using \cref{th:eqsubdifcf} we obtain
\begin{equation}\label{eq:aaa1}
h^{*}(p,Y)=-h(p)<+\infty .
\end{equation}
From \cref{equiv:dualpr} and assumption~\ref{it:A2} we have that
\begin{equation}\label{eq:aaa2}
h^{*}(p,Y)-g^{*}(p,Y)
\geq \inf_{(q,X) \in T\mathcal M}
\bigl\{ h^{*}(q,X) -g^{*} (q,X)\bigr\}
= \inf_{q\in \mathcal M}\bigl\{ g(q) - h(q) \bigr\}>-\infty.
\end{equation}
To prove the first statement, assume by contradiction that
$\dom (h^{*}(p,\cdot))\nsubseteq \dom (g^{*}(p,\cdot))$.
Thus, there exists ${\bar Y} \in T_p\mathcal M$ such that
$h^{*}(p, {\bar Y})<+\infty$ and $g^{*}(p, {\bar Y})=+\infty$.
By using \eqref{eq:conventions}, we have
$h^{*}(p,{\bar Y})-g^{*}(p,{\bar Y})=h^{*}(p,{\bar Y})-(+\infty)=-\infty$,
which contradicts~\eqref{eq:aaa2}, since the same chain of inequalities holds with ${\bar Y}$ in place of $Y$, and the first statement is proved.
Since $\dom (h^{*}(p,\cdot))\subseteq \dom (g^{*}(p,\cdot))$,
it follows from~\eqref{eq:aaa1} that $g^{*}(p,Y)<+\infty$.
Thus, $Y\in \dom (g^{*}(p,\cdot))$ and by assumption~\ref{it:A4}
we conclude that $\partial_2 g^{*}(p,Y)\neq \varnothing$.
\end{proof}
\begin{proposition}
\cref{Alg:DCA1} is well defined.
\end{proposition}
\begin{proof}
Assume $p^{(k)}\in \dom (g)$.
From \cref{rmk:1}, we have that $\dom (f)=\dom (g)\subseteq \dom (h)$,
and hence $p^{(k)}\in \dom (h)$.
By assumption~\ref{it:A3}, we have $p^{(k)}\in \intdom(h)$ and hence $\partial h(p^{(k)})\neq \varnothing$.
Let $X^{(k)}\in \partial h(p^{(k)})$.
Since $h$ is convex, \cref{th:eqsubdifcf} implies that
$h^{*}(p^{(k)},X^{(k)})=-h(p^{(k)})<+\infty$.
By the first part of \cref{le:wedda}, we have that $g^{*}(p^{(k)},X^{(k)})<+\infty$ and
$\partial_ 2g^{*}(p^{(k)},X^{(k)})\neq \varnothing$.
Let $Y^{(k)}\in \partial_2 g^{*}(p^{(k)},X^{(k)})$.
Since $\mathcal M$ is Hadamard, the point $p^{(k+1)}=\exp_{p^{(k)}}Y^{(k)}$ is well defined
and belongs to $\mathcal M$.
Moreover, applying \cref{th:FYINPC} with $f=g$, $p=p^{(k)}$, $X=X^{(k)}$ and $Y=Y^{(k)}$ we have
$g(p^{(k+1)})+g^{*}(p^{(k)},X^{(k)})=\langle X^{(k)},Y^{(k)} \rangle$ or equivalently
$g(p^{(k+1)})=\langle X^{(k)},Y^{(k)} \rangle -g^{*}(p^{(k)},X^{(k)})<+\infty$,
which implies that $p^{(k+1)}\in \dom (g)=\dom (f) \subseteq \dom (h)$.
Therefore, \cref{Alg:DCA1} is well defined.
\end{proof}
In the following, we present a second version of the DCA that is equivalent to \cref{Alg:DCA1}, which is described in \cref{Alg:DCA2}.
\begin{algorithm}[hbp]
\caption{The DC Algorithm on Hadamard Manifolds (DCA2)}\label{Alg:DCA2}
\begin{algorithmic}[1]
\STATE {Choose an initial point $p^{(0)}\in \dom (g) $. Set $k=0$.}
\STATE{Take $X^{(k)}\in\partial h(p^{(k)})$,
and define the next iterate $p^{(k+1)}$ as follows
\begin{equation} \label{eq:DCAS}
p^{(k+1)}\in \argmin_{p\in \mathcal M}
\biggl(
g(p)-\big\langle X^{(k)}, \exp^{-1}_{p^{(k)}}p\big\rangle
\biggr).
\end{equation}
}
\STATE{If $p^{(k+1)} =p^{(k)}$, then STOP and return $p^{(k)}$. Otherwise, go to Step~4.}
\STATE{Set $k \leftarrow k+1$ and go to Step~2.}
\end{algorithmic}
\end{algorithm}
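To make the structure of \cref{Alg:DCA2} concrete, the following library-free Julia sketch iterates its two main steps for a differentiable $h$; the names \lstinline!grad_h!, \lstinline!subsolve! and \lstinline!dist! are hypothetical user-supplied callables (they do not belong to any package), and the exact stopping test $p^{(k+1)}=p^{(k)}$ of step 3 is replaced by a small tolerance.
\begin{lstlisting}
# Library-free sketch of Algorithm DCA2 for a differentiable h, where
#   grad_h(p)      returns an element of the subdifferential of h at p,
#   subsolve(p, X) approximately solves argmin_q { g(q) - <X, exp_p^{-1}(q)> },
#   dist(p, q)     is the Riemannian distance, used for a practical stopping test.
function dca2(grad_h, subsolve, dist, p0; maxiter=100, tol=1e-10)
    p = p0
    for k in 1:maxiter
        X = grad_h(p)                    # step 2: X^(k) in the subdifferential of h
        q = subsolve(p, X)               # step 2: solve the subproblem for p^(k+1)
        dist(p, q) < tol && return q     # step 3: surrogate for p^(k+1) == p^(k)
        p = q                            # step 4
    end
    return p
end
\end{lstlisting}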
It should be noted that the stopping criterion in step 3 of \cref{Alg:DCA2} allows it to generate an infinite sequence. Therefore, in practice, to implement \cref{Alg:DCA2}, an appropriate stopping criterion will be required, which will be addressed further in the implementation section. Let us now analyze \cref{Alg:DCA2}. First of all, note that since the point $p^{(k+1)}$ is a solution of \eqref{eq:DCAS}, we have
\begin{equation}\label{eq:sol}
g(p) - \big\langle X^{(k)}, \exp^{-1}_{p^{(k)}}p \big\rangle
\geq
g(p^{(k+1)})-\big\langle X^{(k)}, \exp^{-1}_{p^{(k)}}{p^{(k+1)}} \big\rangle,
\qquad \text{for all } p\in \mathcal M.
\end{equation}
This inequality will play an important role in what follows.
\begin{proposition}
\cref{Alg:DCA2} is well defined.
\end{proposition}
\begin{proof}
Assume that $p^{(k)}\in \dom (g)$.
From \cref{rmk:1}, we have that $\dom (g)=\dom (f)\subseteq \dom (h)$,
which implies that $p^{(k)}\in \dom (h)$.
Thus, by assumption~\ref{it:A3}, we have $p^{(k)}\in \intdom(h)$ and hence $\partial h(p^{(k)})\neq \varnothing$.
Let $X^{(k)}\in \partial h(p^{(k)})$.
From \cref{L:coercrf}, we have that $g_{k}\colon\mathcal M\to \overline{\mathbb{R}}$
given by $g_{k}(p)\coloneqq g(p)-\langle X^{(k)},\exp^{-1}_{p^{(k)}}p\rangle$
is 1-coercive.
Consequently, its minimizer set is non-empty and is contained in $\dom (g)$.
Therefore, there exists $p^{(k+1)}\in \dom (g)=\dom (f)$ such that
$p^{(k+1)}\in \argmin_{p\in \mathcal M} (g(p)
-\langle X^{(k)}, \exp^{-1}_{p^{(k)}}p \rangle)$,
which implies that \cref{Alg:DCA2} is well defined.
\end{proof}
\begin{remark}
If $\mathcal M=\mathbb{R}^{n}$, then by \cref{rmk:subdif},
we have $\partial_2g^{*}(p^{(k)},X^{(k)} ) = \partial g^{*}(X^{(k)})-\{p^{(k)}\}$
and consequently
$Y^{(k)}+p^{(k)} = \exp_{p^{(k)}}Y^{(k)} = p^{(k+1)} \in \partial g^{*}(X^{(k)})
= \partial_2g^{*}(p^{(k)},X^{(k)} )+\{p^{(k)}\}$, i.e.,
$p^{(k+1)} \in \partial g^{*}(X^{(k)})$ and $X^{(k)}\in \partial h(p^{(k)})$.
Therefore, \cref{Alg:DCA1} coincides with the classical formulation of the DCA;
see~\cite{TaoSouad1988,AnTao2005}.
Moreover, if $\mathcal M=\mathbb{R}^{n}$, then \eqref{eq:sol} becomes
\begin{equation*}
g(p) - \big\langle X^{(k)}, p-p^{(k)} \big\rangle
\geq g(p^{(k+1)})-\big\langle X^{(k)}, p^{(k+1)}-p^{(k)} \big\rangle,
\qquad \text{ for all } p\in \mathbb{R}^{n},
\end{equation*}
which is equivalent to $p^{(k+1)}\in\argmin_{p \in \mathbb{R}^{n}} \{ g(p)-\big\langle X^{(k)}, p-p^{(k)} \big\rangle\}$.
In conclusion, \cref{Alg:DCA2} yields an alternative formulation of the classical DCA.
\end{remark}
In the next result, we show that \cref{Alg:DCA1} is equivalent to \cref{Alg:DCA2} in the Riemannian setting, similar to the linear setting.
\begin{proposition}
\label{equivalenceAlg}
If $p^{(k)}\in \dom (g)$, $X^{(k)}\in \partial h(p^{(k)})$ and
$Y^{(k)}\in \partial_2g^{*}(p^{(k)},X^{(k)})$,
then $p^{(k+1)}=\exp_{p^{(k)}}Y^{(k)}$ if and only if
$p^{(k+1)}\in \argmin_{p\in \mathcal M}
(g(p)-\langle X^{(k)}, \exp^{-1}_{p^{(k)}}p\rangle)$.
Consequently, \cref{Alg:DCA1} is equivalent to \cref{Alg:DCA2}.
\end{proposition}
\begin{proof}
Let $p^{(k)}\in \dom(g)$, $X^{(k)}\in \partial h(p^{(k)})$,
$Y^{(k)}\in \partial_2g^{*}(p^{(k)},X^{(k)})$, and $p^{(k+1)}=\exp_{p^{(k)}}Y^{(k)}$ be given
by \cref{Alg:DCA1}.
By applying \cref{th:FYINPC} with $f=g$, $p=p^{(k)}$, $Y=\exp^{-1}_{p^{(k)}}p^{(k+1)}$,
and $X=X^{(k)}$, we have
$g(p^{(k+1)})+g^{*}(p^{(k)},X^{(k)})=\langle X^{(k)}, \exp^{-1}_{p^{(k)}}p^{(k+1)} \rangle$,
which by using \cref{def:conj_nova} is equivalent to
\begin{equation*}
g(p^{(k+1)})-\langle X^{(k)},\exp^{-1}_{p^{(k)}}p^{(k+1)} \rangle
= -g^{*}(p^{(k)},X^{(k)}) = -\sup_{q\in \mathcal M} \left( \langle X^{(k)},\exp^{-1}_{p^{(k)}}q \rangle - g(q)\right),
\end{equation*}
or equivalently,
\begin{equation*}
g(p^{(k+1)})-\langle X^{(k)},\exp^{-1}_{p^{(k)}}p^{(k+1)} \rangle
= \displaystyle\inf_{q\in\mathcal M}
(g(q)-\langle X^{(k)}, \exp^{-1}_{p^{(k)}}q \rangle).
\end{equation*}
This is also equivalent to
$p^{(k+1)} \in \displaystyle\argmin_{p\in \mathcal M }
(g(p)-\langle X^{(k)}, \exp^{-1}_{p^{(k)}}p\rangle)$.
Therefore, \cref{Alg:DCA1} is equivalent to \cref{Alg:DCA2}.
\end{proof}
\section{Convergence analysis of DCA}\label{sec:Convergence}
The aim of this section is to study the convergence properties of DCA.
It is worth mentioning that the results in this section can be proved using either of
the formulations of DCA in \cref{Alg:DCA1} and~\ref{Alg:DCA2}, as they are equivalent
according to \cref{equivalenceAlg}.
For simplicity, we present the results only using \cref{Alg:DCA2}, but the proofs of
the results for \cref{Alg:DCA1} are quite similar.
We begin by showing a descent property of the algorithm.
\begin{proposition}\label{pr:ffr}
Let $(p^{(k)})_{k\in \mathbb N}$ be generated by \cref{Alg:DCA2}.
Then, the following inequality holds
\begin{equation}\label{eq:dsc}
f(p^{(k+1)})\leq f(p^{(k)})-\frac{\sigma}{2}d^{2}(p^{(k)},p^{(k+1)}).
\end{equation}
Moreover, if $p^{(k+1)} =p^{(k)}$, then $p^{(k)}$ is a critical point of $f$.
\end{proposition}
\begin{proof}
By using inequality in~\eqref{eq:sol} with $p=p^{(k)}$ we have
$g(p^{(k)})-g(p^{(k+1)}) \geq \langle -X^{(k)}, \exp^{-1}_{p^{(k)}}{p^{(k+1)}}\rangle$.
On the other hand, since $h$ is $\sigma$-strongly convex and
$X^{(k)}\in \partial h(p^{(k)})$, we obtain that
\begin{equation*}
h(p^{(k+1)})-h({p^{(k)}})
\geq \langle X^{(k)},\exp^{-1}_{p^{(k)}}p^{(k+1)}\rangle
+\frac{\sigma}{2}d^2(p^{(k+1)},p^{(k)}).
\end{equation*}
Hence, using that $f=g-h$ together with two previous inequalities we
obtain~\eqref{eq:dsc}. To prove the last statement, we assume that $p^{(k+1)} =p^{(k)}$.
Thus, \eqref{eq:sol} implies that
$g(p)\geq g(p^{(k)})+ \langle X^{(k)}, \exp^{-1}_{p^{(k)}}p \rangle$,
for all $p\in \mathcal M$, which shows that $X^{(k)}\in \partial g(p^{(k)})$.
Hence, taking into account that $X^{(k)}\in\partial h(p^{(k)})$, we conclude that
$X^{(k)}\in \partial g(p^{(k)}) \cap \partial h(p^{(k)}) \neq~\varnothing$.
Therefore, it follows from \cref{def:critpoint} that $p^{(k)}$ is a critical point
of $f$ in problem~\eqref{Pr:DCproblem}.
\end{proof}
\begin{proposition}\label{pr:consit}
Let $(p^{(k)})_{k\in \mathbb N}$ be generated by \cref{Alg:DCA2}.
Then,
\begin{equation*}
\sum_{k=0}^{+\infty}d^{2}(p^{(k)},p^{(k+1)})<+\infty.
\end{equation*}
In particular, $\displaystyle\lim_{k\to+\infty} d (p^{(k)},p^{(k+1)})=0$.
\end{proposition}
\begin{proof}
It follows from~\eqref{eq:dsc} that
$0\leq (\sigma/2)d^{2}(p^{(k)},p^{(k+1)})\leq f (p^{(k)})-f (p^{(k+1)})$,
for all $k\in \mathbb{N}$. Thus,
\begin{align*}
\sum_{k=0}^Td^{2}(p^{(k)},p^{(k+1)})
\leq \frac{2}{\sigma}\sum_{k=0}^T\left( f (p^{(k)})-f (p^{(k+1)}) \right) \leq \frac{2}{\sigma}\left( f (p^{(0)})-f _{\inf} \right),
\end{align*}
for each $T\in \mathbb N$, where $f_{\inf}>-\infty$ is given by assumption~\ref{it:A2}. Taking the limit in the last inequality, as $T$ goes to $+\infty$, we obtain
the first statement. The second statement is an immediate consequence of the first one.
\end{proof}
\begin{theorem}\label{pr:assympc}
Let $(p^{(k)})_{k\in \mathbb N}$ and $(X^{(k)})_{k\in \mathbb N}$ be generated
by \cref{Alg:DCA2}. If ${\bar p}$ is a cluster point of $(p^{(k)})_{k\in \mathbb{N}}$,
then ${\bar p}\in \dom(g)$ and there exists a cluster point ${\bar X}$ of
$(X^{(k)})_{k\in \mathbb N}$ such that
${\bar X}\in \partial g({\bar p})\cap \partial h({\bar p})$.
Consequently, every cluster point of $(p^{(k)})_{k\in \mathbb{N}}$,
if any, is a critical point of $f$.
\end{theorem}
\begin{proof}
Let ${\bar p}\in \mathcal M$ be a cluster point of $(p^{(k)})_{k\in \mathbb{N}}$.
Without loss of generality we can assume that
$\displaystyle\lim_{k \to +\infty}p^{(k)}={\bar p}$.
It follows from \cref{pr:ffr} together with assumption \ref{it:A2} that
$(f(p^{(k)}))_{k\in \mathbb N}$ is non-increasing and converges.
Moreover, since $f(p^{(0)})\geq f(p^{(k)})=g(p^{(k)})-h(p^{(k)})$ and $g$ is lsc, we have
\begin{equation*}
f(p^{(0)})\geq \liminf_{k\to +\infty} g(p^{(k)}) - \limsup _{k\to +\infty}h(p^{(k)})
\geq g({\bar p})- \limsup _{k\to +\infty}h(p^{(k)}).
\end{equation*}
Thus, using the convention~\eqref{eq:conventions} we conclude that ${\bar p}\in \dom (g)$.
Hence, using assumption \ref{it:A3}, we conclude that ${\bar p}\in \intdom(h)$.
We know that $X^{(k)}\in\partial h(p^{(k)})$, for all $k\in \mathbb{N}$.
Thus, by \cref{cont_subdif}, we can also conclude that
$\displaystyle\lim_{k\to+\infty} X^{(k)}={\bar X}\in \partial h({\bar p})$.
Due to the point $p^{(k+1)}$ being a solution of~\eqref{eq:DCAS},
it satisfies \eqref{eq:sol}.
Thus, taking the inferior limit in \eqref{eq:sol}, as $k$ goes to $+\infty$,
and using the fact that $\displaystyle\lim_{k\to+\infty}p^{(k)}={\bar p}$,
$g$ is lsc together with \cref{pr:cont_pi},\cref{pr:cont_pi_iii} and \cref{pr:consit},
we obtain
\begin{equation*}
g(p) \geq \liminf_{k\to+\infty}
\biggl(
g(p^{(k+1)}) + \langle X^{(k)}, \exp^{-1}_{p^{(k)}}p \rangle
- \langle X^{(k)}, \exp^{-1}_{p^{(k)}}p^{(k+1)} \rangle
\biggr)
\geq g({\bar p})+\langle {\bar X}, \exp^{-1}_{{\bar p}}p\rangle,
\end{equation*}
for each $p\in \mathcal M$, which implies that
$g(p) \geq g({\bar p})+\langle {\bar X}, \exp^{-1}_{{\bar p}}p\rangle$,
for all $p\in \mathcal M$.
Hence, ${\bar X}\in \partial g({\bar p})$.
Therefore, ${\bar X}\in \partial g({\bar p})\cap \partial h({\bar p})$,
and hence ${\bar p}$ is a critical point of $f$ in problem~\eqref{Pr:DCproblem}.
\end{proof}
\begin{proposition}
Let $(p^{(k)})_{k\in \mathbb N}$ be generated by \cref{Alg:DCA2}.
Then, for all $N\in \mathbb{N},$ there holds
\begin{equation*}
\min_{k=0,1,\ldots,N} d(p^{(k)},p^{(k+1)})
\leq \biggl( \frac{ 2(f (p^{(0)})-f _{\inf})}{(N+1)\sigma} \biggr)^{1/2}.
\end{equation*}
\end{proposition}
\begin{proof}
It follows from~\eqref{eq:dsc} that
$d^{2}(p^{(k)},p^{(k+1)})\leq (2/\sigma) \bigr( f (p^{(k)})-f (p^{(k+1)}) \bigr)$,
for all $k\in \mathbb{N}$. Thus,
\begin{equation*}
(N+1)\min _{k=0,1,\ldots,N}\biggl( d^{2}(p^{(k)},p^{(k+1)}) \biggr)
\leq \sum_{k=0}^{N}\frac{2}{\sigma} \biggl( f (p^{(k)})-f (p^{(k+1)}) \biggr)
\leq \frac{2}{\sigma}\biggl( f (p^{(0)})-f _{\inf}\biggr),
\end{equation*}
where $f_{\inf}>-\infty$ is given by assumption~\ref{it:A2}.
Therefore, the desired inequality directly follows.
\end{proof}
The last result of this section establishes a primal-dual asymptotic convergence of the
sequences generated by the DCA.
This result extends the known result from the Euclidean case,
cf.~\cite[Theorem 3]{TaoSouad1988}, to Hadamard manifolds.
Due to the nature of the problem, we will use the formulation of the DCA given
in \cref{Alg:DCA1}.
\begin{theorem}\label{th:pdac}
Let $(p^{(k)})_{k\in \mathbb{N}}$ and $(X^{(k)})_{k\in \mathbb{N}}$ be
the sequences generated by \cref{Alg:DCA1}. Then, the following statements hold:
\begin{enumerate}
\item
\label{th:pdac_i}
$g(p^{(k+1)})-h(p^{(k+1)})
\leq h^{*}(p^{(k)},X^{(k)})-g^{*}(p^{(k)},X^{(k)})
\leq g(p^{(k)})-h(p^{(k)})$,
for all $k=0,1, \ldots$.
\item
\label{th:pdac_ii}
$\displaystyle\lim_{k\to+\infty} ( g(p^{(k)})-h(p^{(k)}) )
= \displaystyle\lim_{k\to+\infty} (h^{*}(p^{(k)},X^{(k)})-g^{*}(p^{(k)},X^{(k)}))
= {\bar f} \geq f _{\inf}$.
\item
\label{th:pdac_iii}
If the sequence $(p^{(k)})_{k\in \mathbb{N}}$ is bounded and ${\bar p}$
is a cluster point of $(p^{(k)})_{k\in \mathbb{N}}$, then ${\bar p}\in \dom(g)$
and there exists a cluster point ${\bar X}$ of $(X^{(k)})_{k\in \mathbb{N}}$
such that
\begin{subequations}
\begin{equation}\label{pdac_eq1}
\partial g({\bar p})\cap \partial h({\bar p})\neq \varnothing ,
\end{equation}
\begin{equation}\label{pdac_eq2}
\lim_{k\to+\infty} ( h(p^{(k)})+h^{*}(p^{(k)},X^{(k)}) )
= h({\bar p})+h^{*}({\bar p},{\bar X})=0,
\end{equation}
\begin{equation}\label{pdac_eq3}
\lim_{k\to+\infty} ( g(p^{(k)})+g^{*}(p^{(k)},X^{(k)}) )
= g({\bar p})+g^{*}({\bar p},{\bar X})=0.
\end{equation}
\begin{equation}\label{pdac_eq4}
\partial_2 h^{*}({\bar p},{\bar X})\cap \partial_2 g^{*}({\bar p},{\bar X})
\neq \varnothing,
\end{equation}
\begin{equation}\label{pdac_eq5}
g({\bar p})-h({\bar p})
= h^{*}({\bar p},{\bar X})-g^{*}({\bar p},{\bar X})
={\bar f},
\end{equation}
\end{subequations}
\end{enumerate}
\end{theorem}
\begin{proof}\ \\[-2\baselineskip]
\begin{enumerate}
\item By applying \cref{th:FYIN} with $q=p^{(k+1)}$, $p=p^{(k)}$, $X=X^{(k)}$, and $f=h$,
we obtain that
$h(p^{(k+1)})+h^{*}(p^{(k)},X^{(k)})
\geq \langle X^{(k)},\exp^{-1}_{p^{(k)}}p^{(k+1)}\rangle$.
Since \eqref{eq:DCAS_natural} implies that
$\exp^{-1}_{p^{(k)}}p^{(k+1)}\in \partial_2 g^{*}(p^{(k)},X^{(k)})$,
we can apply \cref{th:FYINPC} with $f=g$, $p=p^{(k)}$, $Y=\exp^{-1}_{p^{(k)}}p^{(k+1)}$, and
$X=X^{(k)}$ to obtain
$\langle X^{(k)}, \exp^{-1}_{p^{(k)}}p^{(k+1)} \rangle=g(p^{(k+1)})+g^{*}(p^{(k)},X^{(k)})$.
Hence, we have $h(p^{(k+1)})+h^{*}(p^{(k)},X^{(k)}) \geq g(p^{(k+1)})+g^{*}(p^{(k)},X^{(k)})$,
which is equivalent to the first inequality of \cref{th:pdac_i}.
To prove the second one, we first note that since $X^{(k)}\in \partial h(p^{(k)})$
and $h$ is convex, by using \cref{th:eqsubdifcf},
we have $h^{*}(p^{(k)},X^{(k)})+h(p^{(k)})=0$.
Thus, applying \cref{th:FYIN} with $q=p=p^{(k)}$, $X=X^{(k)}$ and $f=g$, we have
$0\leq g^{*}(p^{(k)},X^{(k)})+g(p^{(k)})$, which combined with the last equality yields
the second inequality of \cref{th:pdac_i}.
\item First we recall that $f = g-h$ satisfies assumption~\ref{it:A2}.
Thus, \cref{th:pdac_i} implies that $(f (p^{(k)}))_{k\in \mathbb{N}}$ is
non-increasing and convergent. Hence
$\lim_{k\to+\infty}(g(p^{(k)})-h(p^{(k)}))\eqqcolon{\bar f}\in {\mathbb R}$.
Moreover, by using again \cref{th:pdac_i}, we also have
\begin{equation*}
\lim _{k\to+\infty}
(h^{*}(p^{(k)},X^{(k)})-g^{*}(p^{(k)},X^{(k)}))
\eqqcolon {\bar f} \in {\mathbb R}.
\end{equation*}
Finally, the inequality in \cref{th:pdac_ii} follows from assumption~\ref{it:A2}.
\item To prove the first part, we assume that $(p^{(k)})_{k\in \mathbb{N}}$ is bounded
and ${\bar p}$ a cluster point of $(p^{(k)})_{k\in \mathbb{N}}$.
By using \cref{pr:assympc}, we conclude that ${\bar p}\in \dom (g)$ and that there
exists a cluster point ${\bar X}$ of $(X^{(k)})_{k\in \mathbb{N}}$, such that
${\bar X}\in \partial g({\bar p})\cap \partial h({\bar p})$.
Therefore, \eqref{pdac_eq1} is proved.
Before proceeding with the proof we note that due to ${\bar p}\in \dom (g)$,
assumption \ref{it:A3} implies that ${\bar p}\in \dom(h)$.
To prove \eqref{pdac_eq2} note that since $X^{(k)}\in \partial h(p^{(k)})$,
for all $k\in \mathbb{N}$, and $h$ is convex, from \cref{th:eqsubdifcf}, we have
$h(p^{(k)})+h^{*}(p^{(k)},X^{(k)}) = 0$, for all $k \in \mathbb{N}$.
Consequently,
$\displaystyle\lim _{k\to+\infty} (h(p^{(k)})+h^{*}(p^{(k)},X^{(k)}))= 0.$
Since ${\bar X}\in \partial h({\bar p})$, using again \cref{th:eqsubdifcf}, we have
$h({\bar p})+h^{*}({\bar p},{\bar X}) =0$ and~\eqref{pdac_eq2} follows directly.
To prove~\eqref{pdac_eq3} we first note that
\begin{multline*}
g(p^{(k)})+g^{*}(p^{(k)},X^{(k)})
= g(p^{(k)})- h(p^{(k)})- \big(h^{*}(p^{(k)},X^{(k)})\\-g^{*}(p^{(k)},X^{(k)})\big)
+ h(p^{(k)})+ h^{*}(p^{(k)},X^{(k)}).
\end{multline*}
Thus, using \cref{th:pdac_ii} together with~\eqref{pdac_eq2}, we have
$\displaystyle\lim_{k\to+\infty} (g(p^{(k)})+g^{*}(p^{(k)},X^{(k)}))=0$.
Since ${\bar X}\in \partial g({\bar p})$, using again \cref{th:eqsubdifcf},
we have $g({\bar p})+g^{*}({\bar p},{\bar X}) =0$, which combined with the last
equality yields~\eqref{pdac_eq3}.
We proceed to prove~\eqref{pdac_eq4}. For that, we assume without loss of generality
that $\lim _{k\to+\infty} p^{(k)}={\bar p}$.
Now, by applying \cref{th:FYIN} with $f=h$, $p={\bar p}$, $q= p^{(k)}$, we obtain
\begin{equation*}
h(p^{(k)})+h^{*}({\bar p},Y)
\geq \langle Y,\exp^{-1}_{\bar p}p^{(k)} \rangle,
\qquad \text{ for all } Y \in T_{{\bar p}}\mathcal M,
\quad\text{ and all } k\in {\mathbb N}.
\end{equation*}
Thus, by using \cref{def:linf}, $\displaystyle\lim_{k\to+\infty} p^{(k)}={\bar p}$,
\cref{pr:cont_pi}, \cref{pr:cont_pi_i} and \cref{pr:cont_pi_iii},
and that $h$ is lsc, we have
$h({\bar p})+h^{*}({\bar p},Y)
=\displaystyle\liminf_{k\to+\infty}h(p^{(k)})+h^{*}({\bar p},Y) \geq 0$.
Thus, the second equality in \eqref{pdac_eq2} implies that
\begin{equation*}
h^{*}({\bar p},Y) \geq h^{*}({\bar p},{\bar X}),
\qquad \text{ for all } Y \in T_{{\bar p}}\mathcal M.
\end{equation*}
Hence, $0\in \partial_2h^{*}({\bar p},{\bar X})$. Similarly, by using \eqref{pdac_eq3},
we can also show that $0\in \partial_2g^{*}({\bar p},{\bar X})$.
Therefore, $0\in \partial_2h^{*}({\bar p},{\bar X}) \cap \partial_2g^{*}({\bar p},{\bar X})$,
which proves \eqref{pdac_eq4}.
Finally, we prove \eqref{pdac_eq5}.
Combining the second equality in~\eqref{pdac_eq2} and~\eqref{pdac_eq3},
we obtain the first equality in~\eqref{pdac_eq5}.
To prove the second equality, we first note that
${\bar p}\in \dom (g)\subseteq \intdom(h)$.
Since $h$ is convex, it is continuous on $\intdom(h)$,
which implies that $\lim _{k\to+\infty} h(p^{(k)})=h({\bar p})$.
Thus, using \cref{th:pdac_ii}, we conclude that
\begin{equation*}
\lim_{k\to+\infty}g(p^{(k)})
= \lim_{k\to+\infty}( g(p^{(k)})-h(p^{(k)})) + \lim _{k\to+\infty}h(p^{(k)})
={\bar f}+ h({\bar p}).
\end{equation*}
Hence, using \cref{def:linf}, we have
$\lim _{k\to+\infty} g(p^{(k)})=\liminf_{k\to+\infty} g(p^{(k)})= g({\bar p})$.
Therefore, we obtain that $g({\bar p})- h({\bar p})=\bar f$,
which concludes the proof.\qedhere
\end{enumerate}
\end{proof}
\section{Examples}\label{Sec:Examples}
In this section we consider examples of DC functions on the Hadamard manifold of symmetric positive definite matrices. These examples can also be seen as constrained problems on the Euclidean space of square matrices, but they are not DC problems thereon. Only by imposing the manifold structure on the constraint set, namely the set of symmetric positive definite matrices, do both components of the problem become convex.
Formally, we consider the symmetric positive definite (SPD) matrices cone ${\mathbb P}^{n}_{++}$.
Following~\cite{Rothaus1960}, see also \cite[Section 6.3]{NesterovTodd2002}, we introduce the Hadamard manifold,
\begin{equation} \label{eq:RiemPmatrix}
\mathcal{M}\coloneqq ({\mathbb P}^n_{++}, \langle \cdot , \cdot \rangle)
\end{equation}
endowed with the Riemannian metric given by
\begin{equation} \label{eq:metric}
\langle X,Y \rangle_p \coloneqq \operatorname{tr}(Xp^{-1}Yp^{-1}),
\end{equation}
for $p\in \mathcal{M}$ and $X,Y\in T_p\mathcal{M}$, where $\mbox{tr}(p)$ denotes the trace of the matrix $p\in {\mathbb P}^{n}_{++}$, $T_p\mathcal{M}\approx\mathbb{P}^n$ is the tangent space of $\mathcal{M}$ at $p$ and ${\mathbb P}^{n}$ denotes the set of symmetric matrices of order $n\times n$.
Further details about the Hadamard manifold $\mathcal{M}$ can be found, for example, in~\cite[Theorem~1.2. p. 325]{Lang1999}.
The \emph{exponential map and its inverse} at a point $p\in \mathcal{M}$ are given, respectively, by
\begin{align}
\exp_{p}X & \coloneqq p^{1/2} e^{p^{-1/2}Xp^{-1/2}} p^{1/2}, \qquad X\in T_p\mathcal{M}, p \in \mathcal M \label{eq:exp}\\
\exp^{-1}_{p}{q}& \coloneqq p^{1/2} \log({p^{-1/2}qp^{-1/2}}) p^{1/2}, \qquad p, q\in \mathcal{M} \label{eq:invexp}.
\end{align}
The \emph{dimension} of the manifold is $\operatorname{dim} {\mathbb P}^{n}_{++} = \frac{n(n+1)}{2}$.
The \emph{gradient} of a differentiable function $f\colon{\mathbb P}^n_{++}\to {\mathbb R}$ is given by
\begin{equation}
\label{eq:Grad}
\grad f(p)=pf'(p)p,
\end{equation}
where $f'(p)$ is the Euclidean gradient of $f$ at $p$. If $f$ is twice differentiable, then the \emph{hessian} of $f$ is given by
\begin{equation}
\label{eq:Hess}
\operatorname{Hess}\,f(p)X=pf''(p)Xp+\frac{1}{2}\bigl[ Xf'(p)p+ pf'(p)X \bigr],
\end{equation}
where $X\in T_p\mathcal{M}$ and $f''(p)$ is the Euclidean hessian of $f$ at $p$.
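As an illustration, the maps \eqref{eq:exp} and \eqref{eq:invexp} and the gradient formula \eqref{eq:Grad} can be implemented directly with matrix functions; the following Julia sketch uses only the \lstinline!LinearAlgebra! standard library, and the function names are ours and purely illustrative.
\begin{lstlisting}
using LinearAlgebra

# exponential map: exp_p(X) = p^{1/2} e^{p^{-1/2} X p^{-1/2}} p^{1/2}
function exp_spd(p, X)
    s  = sqrt(Symmetric(p))       # p^{1/2}
    si = inv(s)                   # p^{-1/2}
    return s * exp(Symmetric(si * X * si)) * s
end

# inverse exponential map: exp_p^{-1}(q) = p^{1/2} log(p^{-1/2} q p^{-1/2}) p^{1/2}
function log_spd(p, q)
    s  = sqrt(Symmetric(p))
    si = inv(s)
    return s * log(Symmetric(si * q * si)) * s
end

# Riemannian gradient from the Euclidean gradient f'(p): grad f(p) = p f'(p) p
grad_spd(p, euclidean_grad) = p * euclidean_grad * p
\end{lstlisting}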
In general, subproblem~\eqref{eq:DCAS} in \cref{Alg:DCA2} is not convex; nevertheless, in some special cases, as illustrated by the following examples, it actually is convex.
To begin, recall that the gradient and hessian of a function $p\mapsto \varphi(\det(p))$, where $\varphi\colon\mathbb{R}_{++}\to \mathbb{R}$ is twice differentiable, are given by
\begin{align}
\grad\varphi(\det(p))&=\bigl(\varphi'(\det(p))\det(p)\bigr)p, \label{eq:Graddetf}\\
\operatorname{Hess}\,\varphi(\det(p))v&=\bigl(\varphi''(\det(p))(\det(p))^2+\varphi'(\det(p))\det(p)\bigr)\operatorname{tr}(p^{-1}v)\,p\label{eq:Hessdetf},
\end{align}
where $v\in T_p\mathcal{M}$, $\varphi'$ and $\varphi''$ are the first and second derivative of $\varphi$, respectively.
\begin{example} \label{ex:loglog}
Consider the following optimization problem
\begin{equation} \label{eq:loglog}
\argmin_{p\in \mathcal M} f(p), \qquad \text{where } f(p)\coloneqq \varphi_1(\det(p))-\varphi_2(\det(p)),
\end{equation}
where the functions $\varphi_i\colon\mathbb{R}_{++}\to \mathbb{R}$, $i=1,2$, are twice differentiable and satisfy $\varphi_i''(t)t^2+\varphi_i'(t)t\geq 0$, for all $t\in \mathbb{R}_{++}$.
Indeed, by using~\eqref{eq:Grad} and~\eqref{eq:Hess}, we can show that~\eqref{eq:loglog} is a DC problem with components
\begin{equation} \label{eq:loglog1}
g(p)=\varphi_1(\det(p)), \qquad \quad h(p)=\varphi_2(\det(p)).
\end{equation}
Indeed, it follows from~\eqref{eq:Hessdetf} and the condition $\varphi_i''(t)t^2+\varphi_i'(t)t\geq 0$, for all $t\in \mathbb{R}_{++}$, that $\langle \operatorname{Hess}\,\varphi_i(\det(p))X, X\rangle \geq 0$, for all $X\in T_p\mathcal{M}$ and $i=1,2$, which implies that $g$ and $h$ are convex.
By using \eqref{eq:Graddetf} we conclude that critical points of $f$ are matrices ${\bar p}\in \mathbb{P}_{++}^n$ such that
\begin{equation} \label{eq:DCCriticalPointsG}
\varphi'_1(\det({\bar p}))=\varphi'_2(\det({\bar p})).
\end{equation}
Now, since $h$ is differentiable, consider, for a fixed $q\in \mathcal{M}$, the subproblem associated with problem~\eqref{eq:loglog},
\begin{equation} \label{eq:diffcasedetf}
\argmin_{p\in \mathcal M} \psi(p), \qquad \text{where } \psi(p) \coloneqq \varphi_1(\det(p))-\big\langle \grad(\varphi_2(\det(q))), \exp^{-1}_{q}{p} \big\rangle.
\end{equation}
It is worth noting that if we use \cref{Alg:DCA2} to solve problem~\eqref{eq:loglog}, the subproblem~\eqref{eq:DCAS} to be addressed has the form~\eqref{eq:diffcasedetf}.
In general, subproblem~\eqref{eq:DCAS} is not convex; nevertheless, we will show now that~\eqref{eq:diffcasedetf} is a convex problem.
In fact, by using~\eqref{eq:Graddetf}, it follows from~\eqref{eq:diffcasedetf} that
\begin{equation} \label{eq:diffcasedetf1}
\psi(p) =\varphi_1(\det(p))-\bigl(\varphi'_2(\det(q))\det(q)\bigr) \big\langle q, \exp^{-1}_{q}{p} \big\rangle.
\end{equation}
On the other hand, by using the exponential in~\eqref{eq:invexp} and the metric in~\eqref{eq:metric} we obtain that
\begin{equation*}
\big\langle q, \exp^{-1}_{q}{p} \big\rangle= \mbox{tr}\big( \log({q^{-1/2}pq^{-1/2}}) \big).
\end{equation*}
Since $ \operatorname{tr}\log Z =\log \det Z$, for any matrix $Z$, the last equality becomes
\begin{equation} \label{eq:cipmdetf}
\big\langle q, \exp^{-1}_{q}{p} \big\rangle= \log \det (p) - \log \det (q).
\end{equation}
Combining~\eqref{eq:diffcasedetf1} with~\eqref{eq:cipmdetf}, the function $\psi$ in subproblem \eqref{eq:diffcasedetf} is rewritten equivalently as
\begin{equation} \label{eq:diffcasedetffd}
\psi(p) =\varphi_1(\det(p))-\bigl(\varphi'_2(\det(q))\det(q)\bigr) \bigl(\log \det (p) - \log \det (q)\bigr).
\end{equation}
Since the matrix $q\in \mathcal{M}$ is fixed and the function $g(p)=\varphi_1(\det(p))$ is convex, to prove that $\psi$ is convex it is sufficient to prove that the function $\Upsilon(p)=-\log \det(p)$ is convex.
Applying \eqref{eq:Hessdetf} with $\varphi=\log$ we conclude that $\operatorname{Hess}\,\Upsilon(p)=0$, for all $p$, which implies that $\Upsilon$ is convex.
In conclusion, the objective function $f$ in problem~\eqref{eq:loglog} is not convex in general, while the function $\psi$ in the associated subproblem \eqref{eq:diffcasedetf} is.
Let us conclude by presenting some functions $\varphi\colon \mathbb{R}_{++}\to \mathbb{R}$ satisfying the condition $\varphi''(t)t^2+\varphi'(t)t\geq~0$, for all $t\in \mathbb{R}_{++}$:
\begin{enumerate}
\item $\varphi_1(t)=a_1(\log(t))^{2(b+1)}$ and $\varphi_2(t)=a_2(\log(t))^{2b}$ with $a_1, a_2 \in \mathbb{R}_{++}$ and $b\geq 1$; see the worked computation after this list. \label{ex:loglog:concrete1}
\item $\varphi_1(t)={\bar a}\log(t^b+c_1)-{\hat a}\log(t)$ and $\varphi_2(t)=\log(t+c_2)$ with ${\bar a}, {\hat a}, b, c_1, c_2 \in \mathbb{R}_{++}$. Note that, if $ab>d+1$, then $\varphi_1-\varphi_2$ has a critical point.
\item $\varphi_1(t)=a_1t^{b_1+2}$ and $\varphi_2(t)=a_2t^{b_2+2}$ with $a_1, a_2, b_1, b_2 \in \mathbb{R}_{+}$.
\end{enumerate}
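For instance, for the pair in item~\ref{ex:loglog:concrete1} the criticality condition~\eqref{eq:DCCriticalPointsG} can be solved explicitly. From
\begin{equation*}
\varphi_1'(t)=\frac{2a_1(b+1)(\log t)^{2b+1}}{t}
\qquad\text{and}\qquad
\varphi_2'(t)=\frac{2a_2b(\log t)^{2b-1}}{t},
\end{equation*}
any matrix ${\bar p}$ with $\log\det({\bar p})=0$, i.e.\ $\det({\bar p})=1$, is critical, and for $\log\det({\bar p})\neq 0$ condition~\eqref{eq:DCCriticalPointsG} becomes
\begin{equation*}
\bigl(\log\det({\bar p})\bigr)^{2}=\frac{a_2b}{a_1(b+1)},
\qquad\text{that is,}\qquad
\det({\bar p})=e^{\pm\sqrt{a_2b/(a_1(b+1))}}.
\end{equation*}
In particular, for $a_1=a_2=1$ and $b=1$ this gives $\det({\bar p})=e^{\pm 1/\sqrt{2}}$, which is the instance considered in \cref{Sec:Numerics}.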
Finally, it is worth noting that the functions $g$ and $h$ in~\eqref{eq:loglog1} associated with these problems are, in general, not convex in the Euclidean sense.
Consequently, \eqref{eq:loglog} is in general not a Euclidean DC problem although, as we just derived, it is a DC problem on the Hadamard manifold~\eqref{eq:RiemPmatrix}.
\end{example}
Let us examine another set of examples that are not Euclidean DC problems but are DC problems on the Hadamard manifold \eqref{eq:RiemPmatrix} described above.
\begin{example}
Consider the following optimization problem
\begin{equation} \label{eq:trlog}
\argmin_{p\in \mathcal M} f(p), \qquad \text{where } f(p)\coloneqq \varphi_1(\mbox{tr}(p))-\varphi_2(\det(p)),
\end{equation}
where the functions $\varphi_i\colon\mathbb{R}_{++}\to \mathbb{R}$, $i=1,2$, are twice differentiable and satisfy the following conditions
\begin{equation} \label{eq:condtrlog}
\varphi_1'(t)\geq 0, \quad \varphi_1''(t)\geq 0, \quad \varphi_2''(t)t^2+ \varphi_2'(t)t\geq 0 \qquad \forall t\in \mathbb{R}_{++}.
\end{equation}
In general, the objective function $f$ in problem~\eqref{eq:trlog} is convex neither in the Euclidean context nor on the Hadamard manifold~\eqref{eq:RiemPmatrix}.
However, by using~\eqref{eq:Grad} and~\eqref{eq:Hess}, we prove that the components in~\eqref{eq:trlog}, denoted by
\begin{equation} \label{eq:trlog1}
g(p)=\varphi_1(\mbox{tr}(p)), \qquad\text{ and }\qquad h(p)=\varphi_2(\det(p)),
\end{equation}
which, in general, are not convex in the Euclidean sense, are convex functions on the Hadamard manifold~\eqref{eq:RiemPmatrix}, since the conditions in~\eqref{eq:condtrlog} hold.
Therefore, \eqref{eq:trlog} is a DC optimization problem.
In addition, by using \eqref{eq:Grad}, we can show that the gradients of $g$ and $h$ are given by
\begin{equation} \label{eq:gradghmG}
\grad g(p)= \varphi_1'(\mbox{tr}(p)) p^2, \qquad \qquad \grad h(p)= \bigl( \varphi_2'(\det(p)) \det(p)\bigr)p,
\end{equation}
respectively.
By using \eqref{eq:gradghmG} we conclude that critical points of $f$ are matrices ${\bar p}\in \mathbb{P}_{++}^n$ such that
\begin{equation} \label{eq:DCCriticalPointtrdet}
\varphi_1'(\mbox{tr}({\bar p})){\bar p}=\bigl( \varphi_2'(\det({\bar p})) \det {\bar p}\bigr)I.
\end{equation}
Using the same arguments as in \cref{ex:loglog}, we can show that the subproblem associated with problem~\eqref{eq:trlog} is given by
\begin{equation} \label{eq:Subtrlog}
\argmin_{p\in \mathcal M} \psi(p), \quad \text{where } \psi(p) =\varphi_1(\mbox{tr}(p))-\bigl(\varphi'_2(\det(q))\det(q)\bigr) \bigl(\log \det (p) - \log \det (q)\bigr),
\end{equation}
for a fixed $q\in \mathcal{M}$, and the objective function $\psi$ is convex.
Finally, let us present some functions satisfying the condition~\eqref{eq:condtrlog}.
\begin{enumerate}
\item $\varphi_1(t)=a_1t^{b_1}$ and $\varphi_2(t)=a_2t^{b_2}$ with $a_1\geq1$ and $a_2, b_1, b_2>0$ such that $a_1b_1n^{b_1-1}=a_2b_2$.
\item $\varphi_1(t)=ae^{bt}$ and $\varphi_2(t)=\frac{1}{2}abe^{nb}t^2$, with $a, b>0$.
\end{enumerate}
\end{example}
\section{Numerics}\label{Sec:Numerics}
In this section, we present several numerical examples. On the one hand, we compare the algorithm to two existing algorithms and, on the other hand, we illustrate in a third example how optimization problems can be reformulated into DC problems to use this structure as an advantage in numerical computations. For all numerical examples, \cref{Alg:DCA2} is implemented in
Julia 1.8.5~\cite{BezansonEdelmanKarpinskiShah2017} within the package
\lstinline!Manopt.jl!~\cite{Bergmann2022}
version 0.4.12, using a trust region solver to solve the optimization problem
in~\eqref{eq:DCAS} within every step,
including a generic implementation of the corresponding cost and gradient.
This way, the algorithm is easy to use, while, when a more efficient implementation of the cost and gradient of the subproblem, or even a closed-form solution, is available, it can be provided to speed up the computation.
Together with \lstinline!Manifolds.jl!~\cite{AxenBaranBergmannRzecki2021}
this algorithm can be used on arbitrary manifolds.
All times refer to running the experiments on an Apple MacBook Pro M1 (2021), 16 GB Ram, Mac OS Ventura 13.0.1.
\subsection{A comparison to the Difference of Convex Proximal Point Algorithm}
We first consider the problem
\begin{equation*}
\argmin_{p\in\mathcal M} \bigl( \log\bigl(\det(p)\bigr)\bigr)^4 - \bigl(\log \det(p) \bigr)^2
\end{equation*}
on ${\mathbb P}_{++}^n$. Here we have $f(p) = g(p) - h(p)$ where
$g(p) = \varphi_1\bigl(\det(p)\bigr)$, $\varphi_1(t) = (\log t)^4$,
and $h(p) = \varphi_2\bigl(\det(p)\bigr)$, $\varphi_2(t) = (\log t)^2$,
which fits \cref{ex:loglog}, \cref{ex:loglog:concrete1}. The minimizers of this problem are the matrices $p^*\in {\mathbb P}_{++}^n$ such that $\det(p^*)=e^{\pm 1/\sqrt{2}}$. We have $f(p^*) = -\frac{1}{4}$, for each minimizer $p^*$.
We compare the DCA with the Difference of Convex Proximal Point Algorithm (DCPPA)
as introduced in~\cite{SouzaOliveira2015}. The algorithm is also available in \lstinline!Manopt.jl!,
implemented in the same generic manner, as the DCA explained above.
This means that the proximal map can be considered as a subproblem to solve.
When only $g$ and its gradient $\operatorname{grad} g$ are provided, the subproblem
is generated in a generic manner, that is, a default implementation of the minimization problem
that corresponds to step 3 of the DCPPA-Algorithm from~\cite{SouzaOliveira2015} is generated.
This is also the scenario we use for our example.
For the case that $\operatorname{prox}_{\lambda g}(p)$ is available, e.g.\ in closed form, it can be provided to the algorithm for speed-up.
For both DCA and DCPPA, the generation of the generic subproblem is the default in
\lstinline!Manopt.jl! as soon as the gradient $\operatorname{grad} g$ of $g$ is provided.
The function calls look like
\begin{lstlisting}
difference_of_convex_algorithm(M, f, g, grad_h, p0; grad_g=grad_g)
difference_of_convex_proximal_point(M, grad_h, p0; g=g, grad_g=grad_g)
\end{lstlisting}
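For completeness, a minimal sketch of how the cost and gradients for this concrete instance may be set up is given below; it assumes \lstinline!Manifolds.jl! and \lstinline!Manopt.jl! as above, uses the closed-form Riemannian gradients $\grad g(p)=4(\log\det p)^{3}\,p$ and $\grad h(p)=2(\log\det p)\,p$ obtained from~\eqref{eq:Graddetf}, and is not the exact benchmark code.
\begin{lstlisting}
using Manopt, Manifolds, LinearAlgebra

n  = 3
M  = SymmetricPositiveDefinite(n)
p0 = log(n) * Matrix{Float64}(I, n, n)   # initial point as in the experiments

f(M, p) = logdet(p)^4 - logdet(p)^2      # phi_1(det p) - phi_2(det p)
g(M, p) = logdet(p)^4
grad_g(M, p) = 4 * logdet(p)^3 .* p      # (phi_1'(det p) det p) p
grad_h(M, p) = 2 * logdet(p) .* p        # (phi_2'(det p) det p) p

p_star = difference_of_convex_algorithm(M, f, g, grad_h, p0; grad_g=grad_g)
\end{lstlisting}
The returned point can then be checked against the condition $\det(p^{*})=e^{\pm 1/\sqrt{2}}$ discussed in \cref{Sec:Examples}.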
By default, an approximation of the Hessian of both subproblems, obtained by a Riemannian variant of forward differences of the gradient, is used.
This enables the use of the~\lstinline!trust_regions!\footnote{see \href{https://manoptjl.org/stable/solvers/trust_regions/}{manoptjl.org/stable/solvers/trust\_regions/} for details}
algorithm to solve the sub-problem.
To make both algorithms comparable we
\begin{itemize}
\item for both sub-solvers, we \emph{stop} when the norm of the subproblem's gradient is below $10^{-10}$, or after $5000$ iterations if the gradient norm does not get that small;
\item for both algorithms, DCA and DCPPA, we stop when the gradient norm of $f$ is below $10^{-10}$, with a fallback to stop after $100$ iterations if this threshold is not reached;
\item we set the proximal parameter in the DCPPA to the constant $\lambda=\frac{1}{2n}$;
\item for both algorithms, we set $p^{(0)} = \log(n) I_n$ as the initial point, where $I_n$ denotes the $n\times n$ identity matrix.
\end{itemize}
For the matrix size $n$ of $\mathbb P_{++}^n$ we set $n=2,3,\ldots,80$ to compare the algorithm
for different manifold sizes, which yields manifolds of dimension $d=\frac{n(n+1)}{2}$.
In \cref{fig:DCAvsDCPPA-time} we compare the different run times for both the DCA and DCPPA.
These were obtained using the \lstinline!@benchmark! macro from \lstinline!BenchmarkTools.jl!~\cite{ChenRevels2016}.
Up to a dimension of approximately $d=40$ (or $8\times 8$ spd.\ matrices) the DCA is faster.
This includes the important case of $3\times 3$ spd.\ matrices, that is one representation of diffusion tensors,
where the DCA takes only $5.2434\cdot10^{-3}$ seconds while the DCPPA takes
$2.2672\cdot10^{-2}$ seconds.
For higher-dimensional problems, cf.\ \cref{fig:DCAvsDCPPA-time:high},
the run time of the DCPPA seems to increase only very slowly; here $d=465$, or $30\times 30$ spd.\ matrices,
seems to be an outlier, where the DCPPA takes over $22$ seconds, while otherwise it
stays at about half a second, even for the last case shown,
i.e.\ $d=990$ (or $44\times 44$ spd.\ matrices).
\begin{figure}
\caption{Low dimensional performance}
\label{fig:DCAvsDCPPA-time:low}
\caption{High dimensional performance}
\label{fig:DCAvsDCPPA-time:high}
\caption{A comparison of the run times of DCA and DCPPA for different manifold dimensions.}
\label{fig:DCAvsDCPPA-time}
\end{figure}
\begin{figure}
\caption{A comparison of how close the cost function is to the actual minimum for different sizes of problems and both algorithms.}
\label{fig:DCAvsDCPPA-cost}
\end{figure}
Comparing the number of iterations, we observe that after the first 5 experiments,
so starting from a dimension of $21$ ($6\times 6$ spd. matrices), the number of
iterations stabilizes around $25$ iterations for the DCA and $38$ for the DCPPA.
We compare different developments of the cost function in \cref{fig:DCAvsDCPPA-cost}.
Since for all dimensions we know that $f(p^*) = -\frac{1}{4}$ we plot $\lvert f(p^{(k)}) - f(p^*) \rvert$ over the iterations for the manifold
dimensions $d=15, 55, 210, 820, 3240$, that is the $n\times n$ matrices for $n=5,10,20,40,80$.
The initial value $p^{(0)}$ was chosen as above, which yields that the value $f(p^{(1)})$
is always below $10^{3}$ in our experiments.
All these different dimensions show the same slope in the decrease of the cost function $f(p)$
for both the DCA as well as the DCPPA.
For the DCPPA, the cost is already within $10^{-16}$ of the minimum for a few iterations before the stopping criterion of a gradient norm
$\lVert \operatorname{grad} f(p^{(k)})\rVert_{p^{(k)}} < 10^{-10}$ is reached.
The development of the cost function illustrates that the DCA converges faster than
the DCPPA, so the choice of the sub-solver seems to be crucial for the run time;
for these experiments we configured the sub-solvers equally in order to compare
the algorithms and not the sub-solvers.
\subsection{The Rosenbrock Problem}
The Rosenbrock problem consists of
\begin{equation}\label{eq:Rosenbrock}
\argmin_{x\in\mathbb R^2} a\bigl( x_{1}^2-x_{2}\bigr)^2 + \bigl(x_{1}-b\bigr)^2,
\end{equation}
where $a,b>0$ are positive numbers, classically $b=1$ and $a \gg b$, see \cite{Rosenbrock1960}.
Note that the function is non-convex on $\mathbb R^2$.
The minimizer $x^*$ is given by $x^* = (b,b^2)^\mathrm{T}$,
and the (Euclidean) gradient can be directly stated as
\begin{equation}\label{eq:Rosenbrock:gradf}
\nabla f(x) =
\begin{pmatrix}
4ax_1(x_1^2-x_2)\\
-2a(x_1^2-x_2)
\end{pmatrix}
+
\begin{pmatrix}
2(x_1-b)\\
0
\end{pmatrix}.
\end{equation}
We introduce a new metric for $\mathcal M = \mathbb R^2$:
For any $p \in \mathbb R^2$ we define
\begin{equation*}
G_p \coloneqq
\begin{pmatrix}
1+4p_{1}^2 & -2p_{1} \\
-2p_{1} & 1
\end{pmatrix}, \text{ which has the inverse matrix } G^{-1}_p =
\begin{pmatrix}
1 & 2p_1\\
2p_{1} & 1+4p_{1}^2 \\
\end{pmatrix}.
\end{equation*}
We define the inner product on $T_p\mathcal M = \mathbb R^2$ as
\begin{equation*}
\langle X,Y \rangle_p = X^\mathrm{T}G_pY.
\end{equation*}
In the following we refer to $\mathbb R^2$ with the default Euclidean metric
further as just $\mathbb R^2$ and to the same space with this new metric as $\mathcal M$.
The exponential and logarithmic map are given as
\begin{equation*}
\exp_p(X) =
\begin{pmatrix}
p_1 + X_1\\
p_2 + X_2 + X_1^2
\end{pmatrix},\qquad
\exp^{-1}_p(q) =
\begin{pmatrix}
q_1 - p_1\\
q_2 - p_2 - (q_1-p_1)^2
\end{pmatrix}.
\end{equation*}
Given some function $h\colon \mathcal M \to \mathbb R$, its Riemannian gradient
$\operatorname{grad} h\colon \mathcal M \to T\mathcal M$ can be computed from the
Euclidean one by
\begin{equation*}
\operatorname{grad} h(p) = G_p^{-1}\nabla h(p).
\end{equation*}
Denoting the two components of the Euclidean gradient by $\nabla h(p) = (h'_1(p), h'_2(p))^\mathrm{T}$ we can derive that given two points $p,q\in\mathcal M$ we have
\begin{equation}\label{eq:gradh-log}
\begin{split}
\Bigl\langle
\operatorname{grad} h(q),
\exp_q^{-1}(p)
\Bigr\rangle_q
&=
\bigl(\exp_q^{-1}(p)\bigr)^\mathrm{T}
\nabla h(q)
\\&
= (p_1-q_1)h'_1(q) + (p_2 - q_2 - (p_1-q_1)^2)h'_2(q)
\end{split}
\end{equation}
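The identity \eqref{eq:gradh-log} can also be checked numerically; the following short Julia sketch, using only \lstinline!LinearAlgebra!, a hypothetical choice $b=1$ and two arbitrary test points, verifies that $\langle G_q^{-1}\nabla h(q),\exp_q^{-1}(p)\rangle_q$ agrees with the closed form above.
\begin{lstlisting}
using LinearAlgebra

G(p)    = [(1 + 4 * p[1]^2)  (-2 * p[1]); (-2 * p[1])  1.0]  # metric tensor G_p
Ginv(p) = [1.0  (2 * p[1]); (2 * p[1])  (1 + 4 * p[1]^2)]    # its inverse
logmap(q, p) = [p[1] - q[1], p[2] - q[2] - (p[1] - q[1])^2]  # exp_q^{-1}(p)

b = 1.0                                   # hypothetical choice of the parameter b
dh(q) = [2 * (q[1] - b), 0.0]             # Euclidean gradient of h(x) = (x_1 - b)^2
rgrad_h(q) = Ginv(q) * dh(q)              # Riemannian gradient grad h(q)

q, p = [0.3, 0.5], [1.2, -0.4]            # arbitrary test points
lhs = dot(rgrad_h(q), G(q) * logmap(q, p))           # <grad h(q), exp_q^{-1}(p)>_q
rhs = (p[1] - q[1]) * dh(q)[1] + (p[2] - q[2] - (p[1] - q[1])^2) * dh(q)[2]
@assert isapprox(lhs, rhs)
\end{lstlisting}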
For the difference of convex algorithm we split the cost function from the Rosenbrock
problem \eqref{eq:Rosenbrock} as $f(x) = g(x) - h(x)$ with
\begin{equation*}
g(x) = a\bigl( x_{1}^2-x_{2}\bigr)^2 + 2\bigl(x_{1}-b\bigr)^2
\quad\text{ and }\quad
h(x) = \bigl(x_{1}-b\bigr)^2.
\end{equation*}
Using the isometry $\psi\colon \mathbb R^2 \to \mathcal M, \mathbf{z} \mapsto (z_1, z_1^2-z_2)$
we get
\begin{equation*}
(h\circ\psi)(x) = h(x_1, x_1^2-x_2) = (x_1-b)^2
\end{equation*}
and hence $h$ is (geodesically) convex on $\mathcal M$.
The corresponding Euclidean gradients of $g$ and $h$ are
\begin{equation*}
\nabla g(p) =
\begin{pmatrix}
4ap_1(p_1^2-p_2)\\
-2a(p_1^2-p_2)
\end{pmatrix}
+
\begin{pmatrix}
4(p_1-b)\\
0
\end{pmatrix}
\quad\text{ and }\quad
\nabla h(p) =
\begin{pmatrix}
2(p_1-b)\\
0
\end{pmatrix},
\end{equation*}
In particular, the second component is $h_2'(p) = 0$.

Considering the subproblem~\eqref{eq:DCAS} from \cref{Alg:DCA2}, we obtain,
together with \eqref{eq:gradh-log} and for some fixed $q$, that
\begin{equation*}
\begin{split}
\varphi(p)
&=
g(p) -
\Bigl\langle
\operatorname{grad} h(q),
\exp_q^{-1}(p)
\Bigr\rangle_q
\\
&=
a\bigl( p_{1}^2-p_{2}\bigr)^2
+ 2\bigl(p_{1}-b\bigr)^2 - 2(q_1-b)(p_1-q_1)
\\
&= a\bigl( p_{1}^2-p_{2}\bigr)^2
+ 2\bigl(p_{1}-b\bigr)^2 - 2(q_1-b)p_1 + 2(q_1-b)q_1,
\end{split}
\end{equation*}
where the last term is constant with respect to $p$ and hence
irrelevant when determining a minimizer. The Euclidean gradient reads
\begin{equation*}
\nabla \varphi(p)
= \begin{pmatrix}
4a p_1(p_1^2-p_2) + 4(p_1-b) - 2(q_1-b)\\
-2a(p_1^2-p_2)
\end{pmatrix}
\end{equation*}
and the Riemannian gradient is, as before, $\operatorname{grad} \varphi(p)
= G_p^{-1}\nabla \varphi(p)$.
This allows, for example, the Riemannian gradient descent from \lstinline!Manopt.jl!
to be used as a sub-solver for~\eqref{eq:DCAS} within \cref{Alg:DCA2}.
Since also for the Rosenbrock function the Riemannian gradient can be easily computed in the same manner from \eqref{eq:Rosenbrock:gradf},
we can now compare three different first order methods:
\begin{enumerate}
\item The Euclidean gradient descent algorithm on $\mathbb R^2$,
\item The Euclidean Difference of Convex Algorithm on $\mathbb R^2$
\item The Riemannian gradient descent algorithm on $\mathcal M$,
\item The Riemannian Difference of Convex Algorithm on $\mathcal M$,
using Riemannian gradient descent as a sub-solver
\end{enumerate}
All algorithms use \lstinline!ArmijoLinesearch(M)! when determining the step size in gradient descent,
and all stop either after 10 million steps, or when the change between two successive iterates drops below \lstinline!1e-16!.
The sub solver in the DCA is set to stop when the gradient is at \lstinline!1e-16! in norm or at 1000 iterations.
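For illustration, a minimal sketch of the Euclidean DCA run (the second of the four experiments) could look as follows; it assumes \lstinline!Manopt.jl! and \lstinline!Manifolds.jl! as above, uses the split and gradients derived in this subsection, and omits the exact stopping criteria and benchmarking of the actual experiments.
\begin{lstlisting}
using Manopt, Manifolds

a, b = 2.0e5, 1.0
M = Euclidean(2)                          # R^2 with the Euclidean metric

f(M, x) = a * (x[1]^2 - x[2])^2 + (x[1] - b)^2
g(M, x) = a * (x[1]^2 - x[2])^2 + 2 * (x[1] - b)^2
grad_g(M, x) = [4 * a * x[1] * (x[1]^2 - x[2]) + 4 * (x[1] - b), -2 * a * (x[1]^2 - x[2])]
grad_h(M, x) = [2 * (x[1] - b), 0.0]

x0 = [0.1, 0.2]                           # the initial point used in the experiments below
x_star = difference_of_convex_algorithm(M, f, g, grad_h, x0; grad_g=grad_g)
\end{lstlisting}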
\begin{figure}
\caption{A comparison of the cost function during the iterations of the four experiments performed on the Rosenbrock example.}
\label{fig:Rosenbrock}
\end{figure}
We set $b=1$ and $a=2\cdot10^5$. All algorithms start in $p^{(0)} = \frac{1}{10}(1, 2)^\mathrm{T}$.
The initial cost is $f(p^{(0)}) \approx 7220.81$.
The runtime and number of iterations is depicted in \cref{tab:Rosenbrock}
and the development of the cost function during the iterations in \cref{fig:Rosenbrock}.
For the cost $f(p^{(k)})$ during the iterations, we can observe that both gradient
algorithms as well as both difference of convex algorithms perform similarly in shape;
both groups even show a similar gain in their first step.
Still, even for the Euclidean case, the gradient descent with Armijo step size
requires several orders of magnitude more iterations than the Euclidean difference
of convex algorithm. The Riemannian gradient descent outperforms the Euclidean one
both in number of iterations as well as overall runtime.
Since a single iteration of the difference of convex algorithm requires solving
a sub-optimization problem, and we even employ a gradient descent per iteration,
even the Euclidean DCA is slower than the Riemannian gradient descent, although
the DCA requires about a factor of $50$ fewer iterations.
Similarly, the Riemannian DCA requires a factor of $1\,000$ fewer iterations than the
Riemannian gradient descent, but since a single iteration is more costly, it
is only about a factor of $2$ faster.
\begin{table}[tbp]
\centering
\begin{tabular}{lrr}
\toprule
\textbf{Algorithm} & \textbf{Runtime} & \textbf{\# Iterations} \\
\midrule
Euclidean GD & 305.567\,sec.& 53\,073\,227 \\
Euclidean DCA & 58.268\,sec.& 50\,588\\
Riemannian GD & 18.894\,sec.& 2\,454\,017 \\
Riemannian DCA & 7.704\,sec.& 2\,459 \\
\bottomrule
\end{tabular}
\caption{Summary of the runtime and number of iterations of the four experiments performed on the Rosenbrock example.}\label{tab:Rosenbrock}
\end{table}
\subsection{Constrained maximization of the Fréchet variance}
\label{sec:logdet}
Let $\mathcal M$ be the manifold \eqref{eq:RiemPmatrix} of symmetric positive definite matrices $\mathbb P^n_{++}$, $n\in\mathbb N$
with the affine invariant metric $\langle\cdot,\cdot\rangle$,
$\{q_1, \dots, q_m\}\subset {\mathcal M}$ be a data set of distinct points, \ie $q_i\neq q_j$ for $i\neq j$,
and $\mu_1, \ldots, \mu_m$ be non-negative weights with $\sum_{j=1}^m\mu_j=1$.
Let $h\colon \mathcal M\to \mathbb R$ be the function defined by
\begin{equation}
\label{eq:ff}
h(p) \coloneqq
\sum_{j=1}^m \mu_j d^2(p,q_j),
\qquad \text{where }
d^2(p,q_j)\coloneqq\operatorname{tr}\bigl(
\log^2(p^{-1/2}q_jp^{-1/2})
\bigr).
\end{equation}
Recall that
\begin{equation*}
d(p, q) \coloneqq
\lVert \log(p^{-1/2}qp^{-1/2})\rVert_{F}
= \sqrt{\operatorname{tr}\log^2(p^{-1/2}qp^{-1/2})}
\end{equation*}
is the Riemannian distance between $p$ and $q$ on $\mathcal M$.
When all of the weights $\mu_1, \ldots, \mu_m$ are equal,
this function $h$ is known as the \emph{Fréchet variance}
of the set $\{q_1, \dots, q_m\}$, see~\cite{Horev2017}.
In this example we want to consider the constrained Fréchet variance maximization problem, which is stated as
\begin{equation}\label{eq:probmaxpddK}
\operatorname*{arg\,max}_{p\in\mathcal C} h(p)
\end{equation}
where the constrained convex set is given by
\begin{equation}\label{eq:boxsetKd}
{\mathcal C} \coloneqq
\{ p\in {\mathcal M}\ |\ \bar q_{-}\preceq p \preceq \bar q_{+} \},
\end{equation}
where $\bar q_-, \bar q_+\in\mathcal M$ with $\bar q_{-}\prec \bar q_{+}$. Here, $p \prec q$ (resp.\ $p \preceq q$) denotes the strict (resp.\ non-strict) partial ordering on the spd matrices, i.\,e.\ that $q-p$ is positive (semi-)definite, or for short $q-p \succ 0$ ($q-p \succeq 0$). We point out that \cite[Lemma 2 (iii)]{Lim2012} implies that the set $\mathcal C$ is convex.
The problem~\eqref{eq:probmaxpddK} can be equivalently stated
as a Difference of Convex problem or a non-convex minimization problem.
The second formulation can be algorithmically solved by a Frank-Wolfe
algorithm~\cite{WeberSra2022}.
\paragraph{Maximizing the Fréchet variance as a DC problem.}
We define the \emph{indicator function} of the set $\mathcal C$ as
\begin{equation*}
\iota_{\mathcal C}(p) = \begin{cases}
0 & \text{ if } p \in \mathcal C\\
\infty & \text{ else.}
\end{cases}
\end{equation*}
Using $g = \iota_{\mathcal C}$ and the fact that
\begin{equation*}
\operatorname*{arg\,max}_{p\in\mathcal C} h(p)
=
\operatorname*{arg\,min}_{p\in\mathcal C} -h(p),
\end{equation*}
we rephrase problem~\eqref{eq:probmaxpddK} as
\begin{equation}
\label{eq:probmaxef}
\operatorname*{arg\,min}_{p\in\mathcal M} f(p),
\qquad\text{where } f(p)\coloneqq g(p)-h(p).
\end{equation}
Indeed, for $p\in \mathcal C$ we have $f(p) = -h(p)$, and since $f$ is infinite outside of $\mathcal C$, every minimizer of $f$ lies in $\mathcal C$ and is a maximizer of $h$ on $\mathcal C$.
This yields a DC problem as studied in the previous sections.
By using~\eqref{eq:Grad} and~\eqref{eq:ff}, the gradient $\grad h(p)$ is given by
\begin{align} \label{ed:subgrdpdm1dp}
\grad h(p)
&= -2\sum_{j=1}^{m}\mu_jp^{1/2}\log({p^{-1/2}q_jp^{-1/2}}) p^{1/2}\\
&= 2\sum_{j=1}^{m}{\mu_j}p^{1/2} \log({p^{1/2}q_j^{-1}p^{1/2}}) p^{1/2}.
\label{ed:subgrdpdm2dp}
\end{align}
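Both the cost \eqref{eq:ff} and the gradient \eqref{ed:subgrdpdm1dp} can be evaluated directly with standard matrix functions. The following NumPy/SciPy sketch is only illustrative (it is not the implementation used for the experiments) and assumes that the data points $q_j$ and the weights $\mu_j$ are given as a list of spd matrices and a list of floats.
\begin{lstlisting}[language=Python]
import numpy as np
from scipy.linalg import logm, fractional_matrix_power

def frechet_variance_and_grad(p, qs, mus):
    # h(p) and grad h(p) as in (eq:ff) and (ed:subgrdpdm1dp)
    p_half = fractional_matrix_power(p, 0.5)
    p_ihalf = fractional_matrix_power(p, -0.5)
    h, grad = 0.0, np.zeros_like(p)
    for mu, q in zip(mus, qs):
        L = logm(p_ihalf @ q @ p_ihalf)   # log(p^{-1/2} q_j p^{-1/2})
        h += mu * np.trace(L @ L).real    # mu_j * d^2(p, q_j)
        grad -= 2 * mu * (p_half @ L @ p_half).real
    return h, grad
\end{lstlisting}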
In this case, due to $g(p)=0$, for $p\in\mathcal C$, the subproblem~\eqref{eq:DCAS} for $X^{(k)}=\grad h(p^{(k)})$ is given by
\begin{equation}
\label{eq:DCASEc}
p^{(k+1)}\in \argmin_{p\in {\mathcal C}}
\big\langle- \grad h(p^{(k)}), \exp^{-1}_{p^{(k)}}p\big\rangle.
\end{equation}
On the other hand, it follows from~\eqref{eq:metric} and~\eqref{eq:invexp} that
\begin{align}
\big\langle-\grad h(p^{(k)}),& \exp^{-1}_{p^{(k)}}p\big\rangle
\\
& = \big\langle
-\grad h(p^{(k)}),
{\bigl(p^{(k)}\bigr)}^{1/2} \log
\Bigl(
{{\bigl(p^{(k)}\bigr)}^{-1/2}p{\bigl(p^{(k)}\bigr)}^{-1/2}}\Bigr) {\bigl(p^{(k)}\bigr)}^{1/2}
\big\rangle\\
& = \operatorname{tr}\Bigl(
-{\bigl(p^{(k)}\bigr)}^{-1/2} \grad h(p^{(k)}){\bigl(p^{(k)}\bigr)}^{-1/2}\log({{\bigl(p^{(k)}\bigr)}^{-1/2}p{\bigl(p^{(k)}\bigr)}^{-1/2}})
\Bigr).
\end{align}
Therefore, the problem~\eqref{eq:DCASEc} becomes
\begin{equation} \label{eq:DCASEcef}
p^{(k+1)}\in \argmin_{p\in {\mathcal C}} \operatorname{tr}\big(s^{(k)} \log({{\bigl(p^{(k)}\bigr)}^{-1/2}p{\bigl(p^{(k)}\bigr)}^{-1/2}})\big),
\end{equation}
where, by using~\eqref{ed:subgrdpdm1dp}, the matrix $s^{(k)}$ is given by
\begin{equation} \label{ed:subgrdpdm1it}
s^{(k)} = -{\bigl(p^{(k)}\bigr)}^{-1/2} \grad h(p^{(k)}){\bigl(p^{(k)}\bigr)}^{-1/2}= 2\sum_{j=1}^{m}{\mu_j}\log({\bigl(p^{(k)}\bigr)}^{-1/2}q_j{\bigl(p^{(k)}\bigr)}^{-1/2}).
\end{equation}
Alternatively, by using~\eqref{ed:subgrdpdm2dp}, the matrix $s^{(k)}$ is equivalently given by
\begin{equation} \label{ed:subgrdpdm2it}
s^{(k)}\coloneqq-{\bigl(p^{(k)}\bigr)}^{-1/2} \grad h(p^{(k)}){\bigl(p^{(k)}\bigr)}^{-1/2}= -2\sum_{j=1}^{m}{\mu_j}\log({\bigl(p^{(k)}\bigr)}^{1/2}q_j^{-1}{\bigl(p^{(k)}\bigr)}^{1/2}).
\end{equation}
In order to solve the subproblem \eqref{eq:DCASEcef} we consider the following theorem, which provides a closed-form solution, see \cite[Theorem~4]{WeberSra2022}.
\begin{theorem}\label{th:esmw}
Let $L,U\in {\mathbb P}^{n}_{++}$ such that $L\prec U$.
Let $S$ be a Hermitian $(n\times n)$ matrix and $X\in{\mathbb P}^{n}_{++}$ be arbitrary.
Then, the solution to the optimization problem
\begin{align*}
\min_{L\preceq Z\preceq U} \mbox{\emph{tr}}(S\log (XZX)),
\end{align*}
is given by $Z=X^{-1}Q\left( P^{\top}[-\mbox{sgn}(D)]_{+}P+\hat{L} \right)Q^{\top}X^{-1}$, where $S=QDQ^{\top}$ is a diagonalization of $S$, $\hat{U}-\hat{L}=P^{\top}P$ with $\hat{L}=Q^{\top}XLXQ$ and $\hat{U}=Q^{\top}XUXQ,$ where $[-\mbox{sgn}(D)]_{+}$ is the diagonal matrix
$$\mbox{\emph{diag}}\left([-\mbox{sgn}(d_{11})]_{+}, \ldots, [-\mbox{sgn}(d_{nn})]_{+} \right)$$ and $D=(d_{ij}).$
\end{theorem}
\begin{remark}
The solution to \eqref{eq:DCASEcef}
can be obtained from \cref{th:esmw} by setting $L=\bar q_-$, $U=\bar q_+$, $S = s^{(k)}$, $X=\bigl(p^{(k)}\bigr)^{-\frac{1}{2}}$ and $Z=p$.
Note that, given $p^{(k)}$, both $X$ and $X^{-1}$ can easily be computed using the eigendecomposition of $p^{(k)}$ and modifying the diagonal matrix.
\end{remark}
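A direct NumPy transcription of this formula may look as follows. This is only a sketch, not the implementation used for the experiments; it assumes that $\hat U-\hat L$ is numerically positive definite, so that a Cholesky-type factor $P$ with $P^{\top}P=\hat U-\hat L$ exists.
\begin{lstlisting}[language=Python]
import numpy as np

def constrained_trace_log_minimizer(S, X, L, U):
    # closed-form minimizer of tr(S log(X Z X)) over L <= Z <= U
    d, Q = np.linalg.eigh(S)                # S = Q diag(d) Q^T
    L_hat = Q.T @ X @ L @ X @ Q
    U_hat = Q.T @ X @ U @ X @ Q
    C = np.linalg.cholesky(U_hat - L_hat)   # C C^T = U_hat - L_hat
    P = C.T                                 # hence P^T P = U_hat - L_hat
    D_plus = np.diag(np.maximum(-np.sign(d), 0.0))
    X_inv = np.linalg.inv(X)
    return X_inv @ Q @ (P.T @ D_plus @ P + L_hat) @ Q.T @ X_inv
\end{lstlisting}
For the subproblem \eqref{eq:DCASEcef} one would call this with $S=s^{(k)}$, $X=\bigl(p^{(k)}\bigr)^{-1/2}$, $L=\bar q_-$ and $U=\bar q_+$.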
To minimize a constrained, non-convex function $f_{\mathrm{FW}} \colon \mathcal X \to \mathbb R$, $\mathcal X \subset \mathcal M$, \cite{WeberSra2022} propose the Riemannian Frank-Wolfe algorithm as summarized in \cref{Alg:FW}.
\begin{algorithm}[hbp]
\caption{The Riemannian Frank-Wolfe Algorithm, cf.~\cite[Algorithm 2]{WeberSra2022}.}\label{Alg:FW}
\begin{algorithmic}[1]
\STATE {
Choose an initial point $p^{(0)}\in \mathcal X$.
Set $k=0$.
}
\WHILE{convergence criterion is not met}
\STATE $q^{(k)} \gets
\displaystyle\operatorname*{argmin}_{q\in\mathcal X}\
\bigl\langle
\operatorname{grad} f_{\mathrm{FW}}(p^{(k)}),
\exp_{p^{(k)}}^{-1}q
\bigr\rangle$
\STATE $s_k \gets \frac{2}{2+k}$
\STATE $p^{(k+1)} = \gamma_{p^{(k)}q^{(k)}}(s_k)$
\STATE $k \gets k+1$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
In our example we have $\mathcal X = \mathcal C$ and $f_{\mathrm{FW}} = -h$, \ie we obtain a concave constrained problem. Since $f_{\mathrm{FW}} = -h$, we obtain the same subproblem in Step 3 of the Frank-Wolfe Algorithm as stated in~\eqref{eq:DCASEc}.
Thus, we have two algorithms for solving the problem~\eqref{eq:probmaxpddK} or equivalently~\eqref{eq:probmaxef},
namely \cref{Alg:FW} and \cref{Alg:DCA2}.
Both possess the same subproblem in this case.
They treat the result of the subproblem differently, though.
While \cref{Alg:DCA2} uses the subproblem solution directly for the next iterate, \cref{Alg:FW} uses the solution as the end point of a geodesic segment starting from the previous iterate. This geodesic segment is then evaluated at a certain intermediate point.
This also means that \cref{Alg:FW} has to start in a \emph{feasible point} $p^{(0)} \in \mathcal C$, while for \cref{Alg:DCA2} this is not necessary.
\\
In our numerical example we consider $\mathcal M = \mathbb P_{++}^{20}$, that is, the set of $20\times20$ symmetric positive definite matrices with the affine-invariant metric $\langle\cdot,\cdot\rangle$. This is a Riemannian manifold of dimension $d=210$. We further generate $m=100$ random spd\ matrices $q_i$ with corresponding random weights $w_i$ as the data set for the Fréchet variance.
We set
\begin{equation*}
q_{-} \coloneqq \biggl(\sum_{i=1}^m w_iq_i^{-1}\biggr)^{-1},
\qquad
q_{+} \coloneqq \sum_{i=1}^m w_iq_i,
\quad\text{ and }\quad
p^{(0)}\coloneqq \frac{1}{2}(q_{-}+q_{+}).
\end{equation*}
A numerical implementation of \cref{th:esmw} is used as a closed-form solver of the subproblem.
Numerically we observe that these results might suffer from
imprecision, which means they might violate the constraint, but only by around $2\cdot10^{-13}$.
Since we use these points as iterates in \cref{Alg:DCA2}, only for this algorithm we add a “safeguard”: a small line search on the geodesic from the previous iterate $p^{(k)}$ to the sub-solver's result $q^*$, which returns the point closest to $q^*$ that still fulfils the constraint.
\begin{figure}
\caption{A comparison of the Fréchet variance $h(p)$ during the iterations of \cref{Alg:DCA2} and \cref{Alg:FW}.}
\label{fig:DCAvsFW}
\end{figure}
We stop \cref{Alg:DCA2} if the change $d(p^{(k)}, p^{(k+1)})$ drops below $10^{-14}$ or if the change of the gradient between these two iterates (computed using parallel transport) is below $10^{-9}$ in norm.
The algorithm stops after $55$ iterations due to a small gradient change.
\\
For Frank-Wolfe a suitable stopping criterion is challenging to formulate.
Note that the gradient $\operatorname{grad} f_{\mathrm{FW}}$ does not tend to $0$ if the minimizer lies on the boundary. Even after $100\,000$ iterations, Frank-Wolfe still has not reached either of the stopping criteria; both changes are still of order $10^{-4}$.
While the Difference of Convex Algorithm reaches its minimum (its maximum in $h$) after $11$ iterations and then increases slightly, probably because the closed-form solution is not exact, Frank-Wolfe does not reach either of the two cost values attained by the DCA (after $11$ and after $55$ iterations) within its $100\,000$ iterations.
\\
Finally, comparing the time per iteration, both algorithms are comparable. With the numerical safeguard for this specific problem, $1000$ iterations of the DCA take $16.01$ seconds,
Frank-Wolfe takes $8.13$ seconds, and the DCA without the safeguard $7.475$ seconds. That is, in runtime per iteration and using the same sub-solver, both perform similarly, while the DCA additionally exhibits a vanishing gradient change.
\section{Conclusion}\label{sec:Conclusion}
In this paper, we investigated the extension of the Difference of Convex Algorithm (DCA) to the Riemannian case, enabling us to solve DC problems on Riemannian manifolds. We investigated its relation to duality on manifolds and stated a convergence result on Hadamard manifolds.
Numerically, the new algorithm outperforms the existing Difference of Convex Proximal Point Algorithm (DCPPA) in terms of the number of iterations. However, for large-dimensional manifolds, the DCPPA is faster. Additionally, for a specific class of constrained maximization problems, the DCA is well-suited and outperforms the Riemannian Frank-Wolfe algorithm, both because a suitable stopping criterion is available and in how close it gets to the actual minimizer.
Finally, rephrasing Euclidean problems as DC problems with respect to a suitable metric is another field where using the DCA seems very beneficial.
Extending the numerical algorithms to also employ duality is a topic of future research, which might improve both the time per iteration and the convergence speed.
\printbibliography
\end{document}
\begin{document}
\frenchspacing \newcommand\hreff[1] {{\footnotesize \url {https://#1}}}
\renewcommand\smile{\mbox{:-)}} \newtheorem{lemma} {Lemma}
\newcommand\emm[1]{{\ensuremath{#1}}} \newcommand\trm[1]{{\bf\em #1}}
\newcommand\edf{{\raisebox{-2pt}{$\stackrel{\mbox{\tiny df}}=$}}}
\newcommand\ceil[1]{{\lceil#1\rceil}}
\newcommand\ex\exists \newcommand\all\forall
\newcommand\then\Rightarrow \renewcommand\iff\Leftrightarrow
\renewcommand\a{{\emm\alpha}} \renewcommand\b{{\emm\beta}}
\newcommand\e\varepsilon \renewcommand\l{\lambda} \newcommand\w\omega
\newcommand\W{{\mathbf\Omega}} \newcommand\g{{\emm\gamma}}
\newcommand\N{{\mathbb N}}\newcommand\R{{\mathbb R}}\newcommand\Z{{\mathbb Z}}
\renewcommand\L{{\mbox{\bf L}}} \newcommand\Kt{{\mathbf Kt}}
\newcommand\M{{\mathbf M}} \newcommand\T{{\mathbf T}}
\newcommand\m{{\mathbf m}} \newcommand\K{{\mathbf K}}
\renewcommand\d{{\mathbf d}} \newcommand\St{{\mathbf S}}
\author{\aut\ (\hreff {www.cs.bu.edu/fac/lnd/}) \\ Boston University\thanks
{College of Arts and Sciences, Computer Science department, Boston,
MA 02215, USA}} \title{\ttl}\date{}\maketitle \begin{abstract}
Here I share a few notes I used in various
course lectures, talks, etc. Some may be just calculations that in
the textbooks are more complicated, scattered, or less specific;
others may be simple observations I found useful or curious.
\end{abstract} \tableofcontents
\noindent Copyright \textcircled{c} $\number\year$ by the author.
Last revised: \today.
\section [Nemirovski Estimate of Common Mean of Distributions]
{Nemirovski Estimate of Common Mean of\\
Arbitrary Distributions with Bounded Variance}
The popular Chernoff bounds\footnote
{First studied by S.N. Bernstein: {\em Theory of Probability.}, Moscow,
1927. Tightened by Wassily Hoeffding in: Probability inequalities for
sums of bounded random variables, J.Am.Stat.Assoc. 58(301):13-30, 1963.}
assume severe restrictions on distribution: it must be cut-off,
or vanish exponentially, etc. In [Nemirovsky Yudin]\footnote
{A.S.Nemirovsky, D.B.Yudin. {\em Problem Complexity
and Method Efficiency in Optimization.} Wiley, 1983.}
an equally simple bound uses no conditions at all
beyond independence and a known bound on the variance.
It is not widely used because it is not explained anywhere
with an explicit tight computation. I offer this version:
Assume independent variables $X_i(\w)$ with the same unknown mean
$m$ and known lower bounds $B_i^2$ on inverses $1/v_i$ of their variance.
We estimate $m$ as $M(\w)$ with probability $P(\pm(M\!-\!m)\ge\e)
\;\edf\; p^\pm< 2^{-k}$ for $k$ close to $\sum_i (B_i\e)^2/12$.
First, we normalize $X_i$ to set $\e\!=\!1$, spread them into $n$
groups, and take in each group $j$ its $B_i^2$-weighted mean $x_j(\w)$.
The inverse variance bounds $b_j^2$ for $x_j$ are additive; we grow
groups to assure $b_j\!>\!2$ and to increase the sum $k$ of {\em heights}
$h_j\edf\;\log_2\frac{b_j\!+b_j^{-\!1}}2$. (The best $h/b^2>1/12$ comes
with $b^2\approx 6$.)\footnote
{Giving up tightness, the rest may be simplified: assure
$b_j\ge b\edf \sqrt2+1$ and replace $b_j,h_j$ with $b,1/2$.}
For $s\subset[1,n]$, let $b_s\;\edf\;\prod_{j\in s}b_j$.
Let $L\edf\cup_tL_t$ consist of {\em light} $s$: those whose largest
superset $s'$ with $b_{s'}^2< b_{[1,n]}$ has $\|s\|\!+\!t$ elements.
As $s\!\in\!L_t$ do not include each other, $\|L_t\|\le \binom n{\ceil
{n/2}}{<}2^n\sqrt{2/(\pi n)}$, by Sperner's theorem, and since
$n!{=}(n/e)^n\sqrt{\pi(2n{+}1/3){+}\varepsilon/n}, \varepsilon{\in}[0,.1]$.
Our $M$ is the {\bf $(\log b_j)$-weighted}
median of $x_j$. Let $S^\pm(\w)\;\edf\;\{j:\pm (x_j\!-\!m)<1\}$.
Then $\pm(M(\w)\!-\!m)\ge1$ means $S^\pm(\w)\!\in\!L$.
By Chebyshev's inequality, $p_j^\pm\;\edf\;P(j\not\in S^\pm)\le 1/(b_j^2\!+
\!1)$. We assume $p_j^\pm=1/ (b_j^2\!+\!1)$: the general case follows by
so modifying the distribution without changing $m$, $b_j$, or decreasing
$p^+$ (respectively $p^-$). If $s\in L_t$, $S^\pm(\w)=s$ has probability
$$p_s^\pm=b_s^2/\prod_{j\le n} (b_j^2\!+\!1) < 4^{-t}\prod_{j\le n}
b_j/({b_j^2\!+\!1})= 4^{-t}2^{-(k+n)}\mbox{ . So,}$$
$$p^++p^-\le\sum_{t\ge0}\sum_{s\in L_t} (p_s^++p_s^-)\le 2(\sum_{t\ge0}
4^{-t}) 2^{-(k+n)} 2^n\sqrt {2/(\pi n)} <2^{-k} \sqrt{5/n}\;.\;\qed$$
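The estimator above is, in essence, a weighted median-of-means construction. The following Python sketch shows only a much simplified, unweighted variant (equal group sizes and a plain median instead of the $(\log b_j)$-weighted one); it is an illustration, not the tight version computed above.
\begin{verbatim}
import numpy as np

def median_of_means(xs, n_groups):
    # split the samples into groups, average each group,
    # and return the median of the group means
    groups = np.array_split(np.asarray(xs), n_groups)
    return float(np.median([g.mean() for g in groups]))

rng = np.random.default_rng(0)
xs = rng.standard_t(df=3, size=10_000)  # heavy-tailed, mean 0, finite variance
print(median_of_means(xs, n_groups=30), xs.mean())
\end{verbatim}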
\section {Leftover Hash Lemma}
The following Lemma is often useful to convert a stream of symbols with
absolutely unknown (except for a lower bound on its entropy)
distribution into a source of nearly uniform random bits $b\in
Z_2=\{0,1\}$.
The version I give is close to that in [HILL]\footnote
{Johan Hastad, Russell Impagliazzo, Leonid A. Levin, Michael
Luby.\\ A Pseudorandom Generator from any One-way Function. Section 4.5.
{\em SICOMP} {\bf 28}(4):1364-1396, 1999.},
though some aspects
are closer to that from [GL]\footnote
{Oded Goldreich, Leonid A. Levin. A Hard-core Predicate for any One-way
Function. Sec.5. {\em STOC} 1989.}.
Unlike [GL], I do not restrict hash functions to be linear and do
not guarantee polynomial reductions, i.e.\ I forfeit the case when the
unpredictability of the source has computational, rather than truly
random, nature. However, like [GL], I restrict hash functions only in
probability of collisions, not requiring pairwise uniform distribution.
Let $G$ be a probability distribution on $Z_2^n$ with Renyi entropy
$-\log\sum_x G^2(x)$ $\ge m$. Let $f_h(x)\!\in\! Z_2^k$, $h\!\in\! Z_2^t$,
$x\!\in\! Z_2^n$ be a hash function family in the sense that for each $x,y\ne
x$ the fraction of $h$ with $f_h(x) \!=\! f_h(y)$ is $\le 2^{-k}+2^{-m}$.
Let $U^t$ be the uniform probability distribution on $Z_2^t$ and
$s=m-k-1$. Consider a distribution $P(h,a)=2^{-t} G(f^{-1}_h(a))$
generated by identity and $f$ from $U^t\otimes G$. Let $\L_1(P,Q)=
\sum_z|P(z)-Q(z)|$ be the $\L_1$ distance between distributions $P$
and $Q=U^i$, $i=t+k$. It never exceeds their $\L_2$ distance
$$\L_2(P,Q)=\sqrt{2^i\sum_z(P(z)-Q(z))^2}\;.$$
\begin{lemma}[Leftover Hash Lemma]
$$\L_1(P,U^i)\le\L_2(P,U^i)<2^{-s/2}\;.$$ \end{lemma}
Note that $h$ must be uniformly distributed but can be reused for many
different $x$. These $x$ need to be independent only of $h$, not of
each other as long as they have $\ge m$ entropy in the distribution
conditional on all their predecessors.
\paragraph {Proof.}\begin {eqnarray*} (\L_2(P,U))^2 &=& 2^i\sum_{h,a} P(h,a)^2
+ 2^i\sum_z (2^{-2i}-2P(z)2^{-i}) = 2^i\sum_{h,a} P(h,a)^2 -1\\
&=& -1+ 2^i\sum_{x,y}G(x)G(y)2^{-2t}\sum_a\|\{h\!:f_h(x)=f_h(y)=a\}\|\\
&=& -1+ 2^{k-t}\sum_{x,y}G(x)G(y) \|\{h\!:f_h(x)\!=\!f_h(y)\}\|\\
&=& -1+ 2^{k-t}\left(\sum_xG(x)^2 2^t +
\sum_{x,y\ne x}G(x)G(y) \|\{h\!:f_h(x)\!=\!f_h(y)\}\|\right)\\
&\le& -1+ 2^k2^{-m}+2^{k-t}(1-2^{-m})2^t(2^{-k}+2^{-m}) < 2^{-s}\;.
\end{eqnarray*}\qed
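As a small illustration (not part of the lemma), the bound can be checked by brute force for tiny parameters, using the family of all linear maps $f_h(x)=hx$ over $Z_2$, for which the collision fraction is exactly $2^{-k}$; the source $G$ below (uniform on $2^m$ strings) is an assumption made only for this demonstration.
\begin{verbatim}
import numpy as np
from itertools import product

n, k, m = 6, 2, 4
# source G: uniform on 2^m strings, so its Renyi entropy is exactly m
xs = [np.array(bits) for bits in product([0, 1], repeat=n)][:2**m]
# hash family: all k x n binary matrices h, f_h(x) = h x over GF(2)
hs = [np.array(bits).reshape(k, n) for bits in product([0, 1], repeat=k*n)]
t = k * n                                   # |family| = 2^t

P = {}
for hi, h in enumerate(hs):
    for x in xs:
        a = tuple((h @ x) % 2)
        P[(hi, a)] = P.get((hi, a), 0.0) + 1.0 / (len(hs) * len(xs))

i, s = t + k, m - k - 1
u = 2.0 ** -i
l1 = sum(abs(P.get((hi, a), 0.0) - u)
         for hi in range(len(hs)) for a in product([0, 1], repeat=k))
print(l1, "<", 2.0 ** (-s / 2))             # Leftover Hash Lemma bound
\end{verbatim}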
\section {Disputed Ballots and Poll Instabilities}
Here is another curious example of advantages of quadratic norms.
The ever-vigilant struggle of major parties for the heart of the median
voter makes many elections quite tight. Add the Electoral College system
of the US Presidential elections and the history may hang on a small
number of ballots in one state. The problem is not in the randomness of
the outcome. In fact, chance brings a sort of fair power sharing
unplagued with indecision: either party wins sometimes, but the country
always has only one leader. If a close race must be settled by dice, so
be it. But the dice must be trusty and immune to manipulation!
Alas, this is not what our systems assure. Of course, old democratic
traditions help avoiding outrages endangering younger democracies, such
as Ukraine. Yet, we do not want parties to compete on tricks that may
decide the elections: appointing partisan election officials or judges,
easing voter access in sympathetic districts, etc. Better to make
the randomness of the outcome explicit, giving each candidate a chance
depending on his/her share of the vote. It is easy to implement the
lottery in an infallible way, the issue is how its chance should depend
on the share of votes.
In contrast to the present one, the system should avoid any big
jump from a small change in the number of votes. Yet, chance should
not be proportional to the share of votes. Otherwise each voter may
vote for himself, rendering election of a random person. The present
system encourages voters to consolidate around candidates acceptable
to many others. The `jumpless' system should preserve this feature.
This can be done by using a non-linear function: say the chance in the
post-poll lottery be proportional to the {\em squared} number of votes.
In other words, a voter has one vote per each person he agrees
with.\footnote {The dependence of lottery odds on the share of votes
may be sharper.\\ Yet, it must be smooth to minimize the effects of
manipulation. Even (trusty) noise alone,\\ e.g., discarding a randomly
chosen half of the votes, can ``smooth" the system a little.}
Consider for instance an 8-way race where the percents of votes are 60,
25, 10, 1, 1, 1, 1, 1. The leader's chance will be 5/6, his main rival's
1/7, the third party candidate's 1/43 and the combined chance of the
five `protest' runners 1/866.
This system would force major parties to determine the most popular
candidate via some sort of primaries, and will almost exclude marginal
runners. However it would have no discontinuity rendering any small
change in the vote distribution irrelevant. The system would preserve an
element of chance, but would be resistant to manipulation.
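The odds quoted in the 8-way example above can be recomputed in a couple of lines (vote shares as given, chances proportional to the squared votes):
\begin{verbatim}
shares = [60, 25, 10, 1, 1, 1, 1, 1]
weights = [s**2 for s in shares]
odds = [w / sum(weights) for w in weights]
print(odds[0], odds[1], odds[2], sum(odds[3:]))
# about 5/6, 1/7, 1/43, and 1/866 for the five protest runners combined
\end{verbatim}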
\section
[Universal Heuristics: How do humans solve ``unsolvable" problems?]
{Universal Heuristics:\\ How do humans solve ``unsolvable" problems?}
\begin{small}
Lots of crucial problems defeat current computer arts but yield to our
brains. Great many of~them can be stated in the form of inverting easily
computable functions. Still other problems, such~as extrapolation, are
related to this form. We have no idea which difficulties are intrinsic
to these problems and which just reflect our ignorance. We will remain
puzzled pending major foundational advances such as, e.g., on P=?NP.
And yet, traveling salesmen do get to their destinations, mathematicians
do find proofs of their theorems, and physicists do find patterns in
transformations of their elementary particles! How is this done, and how
could computers emulate their success?\end{small}
Brains of insects solve problems of such complexity and with
such efficiency, as we cannot dream of. Yet, few of us would be
flattered with a comparison to the brain of an insect \smile. What
advantage do we, humans, have? One is the ability to solve new problems,
those on which evolution did not train generations of our ancestors. We
must have some pretty universal methods, not restricted to the specifics
of focused problems. Of course, it is hard to tell how, say, the
mathematicians search for their proofs. Yet, the diversity and dynamism
of math achievements suggest that some pretty universal methods must be
at work.
In fact, whatever the difficulty of inverting functions $x{=}f(y)$ is,
we know a ``theoretically" optimal algorithm for all such problems,
one that cannot be sped-up\footnote {The speed is defined to include
the time for running $f$ on the solution $y$ to check it.}
by more than a constant factor, even on a subset of instances $x$.
It searches for solutions $y$, but in order of increasing complexity
$\Kt$, not increasing length: short solutions may be much harder to
find than long ones. $\Kt(y|x)$ can be defined as the minimal sum of
(1) the bit-length of a prefixless program $p$ transforming $x$ into $y$
and (2) the log of the running time of $p$.\footnote
{Realistically, $p$ runs on data which specify the instance, but also
encompass other available relevant information, possibly including
access to a huge database, such as a library, or even the Internet.}
\paragraph {Extrapolations} could be done by double-use of this concept.
The likelihood of a given extrapolation consistent with known data
decreases exponentially with the length of its shortest description.
This principle, Occam Razor, was clarified in
papers by Ray Solomonoff and his followers (see also
\hreff {arxiv.org/abs/1403.4539} and its references).
Decoding short descriptions should not take more time than
the complexity of the process that generated the data.
The major hurdle in implementing Occam Razor is finding
short descriptions: it may be exponentially hard.
Yet, this is an inversion problem, and the above optimal search applies.
Such approaches contrast with the methods employed currently by CS -
universal algorithms are used heavily, but mostly for negative results.
\paragraph {The point} of this note is to emphasize the following
problem:\\ The above methods are optimal only up to constant factors.
Nothing is known about these factors, and simplistic attempts make them
completely unreasonable. Current theory cannot even answer straight
questions, such as, e.g., is it true that some such optimal algorithm
cannot be sped-up 10-fold on infinitely many instances? Yet humans do
seem to use such generic methods successfully, raising hopes for a
reasonable approach to these factors.
\section {A Magic Trick}
A book ``Mathematics for Computer Science"\footnote {Problem 15.48
in a preprint: \hreff {courses.csail.mit.edu/6.042/fall17/mcs.pdf}}
by Eric Lehman, F.~Thomson Leighton, and Albert R.~Meyer has
a very nice magic trick with cards. I used in my class some
variation of it, described below (with the book authors' permission).
The trick is performed by a Wizard (W) and his assistant (A) for the viewers
(V).
In W's absence, V choose and give A four cards out of a 52-card deck.
A places them in a row with one of them ($H$) hidden (turned back up) and exits.
W then enters and guesses $H$.
However, placing $H$ in the middle of the 3 open cards hints that the cards'
order is informative, spoiling the surprise. I would instead place the chosen
cards so that 3 {\bf contiguous} cards are open and 1 is hidden, or all are
hidden (sometimes stellar patterns are so favorable to magic that wizards need
no information at all! \smile).
First, some terms: \trm{Senior} (S), \trm {Junior} (J), \trm {Middle} (M) below
refer to the order of ranks or rank-suit pairs. Kings (K) are special\footnote
{In Russia, the special one would be Queen, not King:
Queen of Spades is attributed a special malice. \smile}:
If chosen cards include King of spades~(K0),~all cards are hidden; K1 always
is J, K2 is M, K3 is S. A \trm {4-set} is a set of 4 cards with no K0.
A \trm {string} is an {\em ordered} 4-set with the first or last card replaced
by a symbol $H$ (hidden). $G$ is a bipartite graph of 4-sets connected to four
strings obtained by hiding one card and ordering the rest to reflect the rank
of $H$. A hidden K is treated as a duplicate of the respective (J, M, or S)
non-K open card. The Wizard only needs to figure the suit of $H$.
$G$ breaks into small connected components distinguished by their sets $R$
of non-K ranks of the 4 chosen cards and ranks' multiplicity (including K as
duplicates). With a {\bf uniform degree} 4, $G$ has a {\bf perfect matching},
described below, for A,W to use.
In a 4-set, let $\a$ be the $\Z_4$ sum of all suits in single-suit ranks.
Multiple suits in a rank are viewed in a circle ($\Z_7$ if $|R|{=}1$,
else $\Z_5$) including respective Kings (but not K2 for $|R|{=}2$).
Let $\b$ (and $\b'$ if $2$ such ranks) be $0$ if the suits are consecutive, else
$1$. Notations like $j,j'$ mean same rank suits, $j'{\equiv}j{+}1{+}\b\pmod 5$.
Let $\g$ be $2$ if $|R|{=}2$ with K2 present, else $\g{=}0$.
Below is a simple matching, blind to $\Z_5,\Z_7$ rotations.
(I omit cases with just $j,m,s$ permuted):
\begin{description} \item [$|R|{=}1$] $H$ is the suit in a row (in $\Z_7$)
adjacent to $1$-suit-shorter gap (left is preferred).
\item [$|R|{=}2$] suits $j,j',s,s'$: $H{=}j$ if $\b{=}\b'$, else $H{=}s$.
\item [$|R|{=}2$] suits $j,c{=}K2,s,s'$ or $j,s,c{=}s',s''$:
$H{=}j$ if $\a{=}\b{+}\g$; $H{=}c$ if $\a{+}\b{=}1$; else $H{=}s$.
\item [$|R|{=}3$] suits $j,j',m,s$:
$H{=}s$ if $x{=}(\a{+}\b \bmod 4$) is $0$; $H{=}m$ if $x{=}1$; else $H{=}j$.
\item [$|R|{=}4$] The seniority of $H$ reflects $\a$. \end{description}
\end{document}
\begin{document}
\markboth{R. Rajkumar and M. Gayathri}{Spectra of $(H_1,H_2)-$merged subdivision graph of a graph}
\title{\LARGE\bf Spectra of $(H_1,H_2)-$merged subdivision graph of a graph}
\author{R. Rajkumar\footnote{e-mail: {\tt [email protected]}},\ \ \
M. Gayathri\footnote{e-mail: {\tt [email protected]}, }\ \\
{\footnotesize Department of Mathematics, The Gandhigram Rural Institute -- Deemed to be University,}\\ \footnotesize{Gandhigram -- 624 302, Tamil Nadu, India}\\[3mm]
}
\date{ }
\maketitle
\begin{abstract}
In this paper, we define a ternary graph operation which generalizes the constructions of the subdivision graph, the $R-$graph and the central graph. It also generalizes the construction of the overlay graph (Marius Somodi \emph{et al.}, 2017), and consequently of the $Q-$graph, the total graph, and the quasitotal graph. We denote this new graph by
$[S(G)]^{H_1}_{H_2}$, where $G$ is a graph and, $H_1$ and $H_2$ are suitable graphs corresponding to $G$. Further, we define several new unary graph operations which become particular cases of this construction.
We determine the adjacency and Laplacian spectra of $[S(G)]^{H_1}_{H_2}$ for some classes of graphs $G$, $H_1$ and $H_2$. From these results, we derive the $L$-spectra of the graphs obtained by the unary graph operations mentioned above.
As applications, these results enable us to compute the number of spanning trees and Kirchhoff index of these graphs.
\paragraph{Keywords:} Adjacency spectrum; Laplacian spectrum; subdivision graph; spanning trees; Kirchhoff index\\
\textbf{2010 Mathematics Subject Classification:} 05C50, 05C76
\end{abstract}
\section{Introduction}\label{sec1}
All the graphs considered in this paper are undirected and simple. $K_n$, $C_n$ and $P_n$ denote the complete graph, the cycle graph and the path graph on $n$ vertices, respectively. The complete bipartite graph whose partite sets having sizes $p$ and $q$ is denoted by $K_{p,q}$. $J_{n\times m}$ denotes the matrix of size $n\times m$ in which all the entries are 1. We will denote $J_{n\times n}$ simply by $J_n$.
The study of the properties of graphs is an essential one, since several real-life problems can be modeled by using graphs. One of the approaches used in spectral graph theory is to associate matrices with the given graphs and to determine their eigenvalues and eigenvectors, through which the properties of the graphs can be described.
For a graph $G=(V,E)$ with $V(G) = \{v_1,v_2,\ldots,v_n\}$ and $E(G)=\{e_1,e_2,\ldots,e_m\}$, the \emph{adjacency
matrix of $G$} is the $n \times n$ matrix $A(G)=[a_{ij}]$, where $a_{ij}=1,$ if $i\neq j$ and, $v_i$ and $v_j$ are adjacent in $G$; 0, otherwise.
The \textit{vertex-edge incidence matrix of $G$} is the $n\times m$ matrix $B(G)=[b_{ij}]$, where $b_{ij}=1,$ if the vertex $v_i$ is incident with the edge $e_j$; 0, otherwise.
The \textit{degree matrix $D(G)$ of $G$} is the diagonal matrix $diag(d_1,d_2,\ldots,d_n)$, where $d_i$ denotes the degree of the vertex $i$.
The \emph{Laplacian matrix} $L(G)$ of $G$ is the matrix $D(G)-A(G)$ and the \emph{signless Laplacian matrix} $\mathcal Q(G)$ of $G$ is the matrix $D(G)+A(G)$. Note that $\mathcal Q(G)=B(G)B(G)^T$.
The characteristic polynomials of $A(G)$, $L(G)$ and $\mathcal Q(G)$ are denoted by $P_G(x)$, $L_G(x)$ and $\mathcal Q_G(x)$, respectively. The eigenvalues of $A(G)$, $L(G)$ and $\mathcal Q(G)$ are said to be the \textit{$A$-spectrum, $L$-spectrum and $\mathcal Q$-spectrum of $G$}, respectively.
Two graphs are said to be \textit{$A$-cospectral} (resp. \textit{$L$-cospectral, $\mathcal Q$-cospectral}) if they have the same $A$-spectrum (resp. $L$-spectrum, $\mathcal Q$-spectrum). The $A$-spectrum, $L$-spectrum and $\mathcal Q$-spectrum of a graph $G$ with $n$ vertices are denoted by $\lambda_i(G),\mu_i(G)$ and $\nu_i(G)$, $i=1,2,\ldots,n,$ respectively.
If $\lambda_1,\lambda_2,\ldots,\lambda_t$ are the distinct eigenvalues of a matrix $M$ with multiplicities $m_1,m_2,\ldots, m_t$, respectively, then the eigenvalues of $M$ are denoted by $\lambda_1^{m_1},\lambda_2^{m_2}, \ldots, \lambda_t^{m_t}$. If $m_i=1$, for some $i$, then $\lambda_i^{m_i}$ is simply denoted by $\lambda_i$.
Two graphs are said to \emph{commute} if their adjacency matrices commute. Some properties and examples of commuting graphs are studied in \cite{akbari2007,akbari2009}.
For example, the complete graph $K_n$ commutes with any regular graph of order $n$, and the complete bipartite graph $K_{p,p}$ commutes with any of its regular spanning subgraphs (see \cite[Proposition 2.3.6]{hei2011}).
The $A$-spectrum and the $L$-spectrum
of a graph are powerful tools for analyzing the properties of the corresponding graph.
Apart from graph theory, the determination of the various spectra of graphs has found applications in many other fields such as physics, chemistry, computer science etc.; see, for instance \cite{bapat2010,brouwer2012, cvetkovic2011}.
Let $G$ be a connected graph with $V(G) = \{ 1,2,\ldots, n\}$. Then the resistance distance $r_{ij}$
between vertices $i$ and $j$ of $G$ is defined to be the effective resistance between nodes $i$ and $j$ as
computed with Ohm's law when all the edges of $G$ are considered to be unit resistors. The \textit{Kirchhoff index} $Kf(G)$ of $G$ is defined as $Kf(G ) = \sum_{i<j}r_{ij}$ \cite{klein1993}.
The resistance distance and
the Kirchhoff index have attracted extensive attention due to their wide
applications in electric network theory, physics, chemistry, etc., and the Kirchhoff index of graphs constructed by graph operations has also been obtained; see \cite{bonchev1994,gao2012,qliu2016, xiao2003, yang2014, zhang2013, zhou2008}.
A natural question that arises is: to what extent can the spectrum of a given graph be expressed in terms of the spectra of some other graphs by using graph operations?
From this point of view, to construct graphs from given graphs, several graph operations were defined in the literature, such as the union, the complement, the Cartesian product, the Kronecker product, the NEPS, the corona, the edge corona, the join, deletion of a vertex, insertion/deletion of an edge, etc., and results on the spectra of the graphs obtained by using these graph operations were derived. See \cite{ barik2007, barik2015, cardoso2013, cui2012,cvetkovic1975, gao2012, hou2010, laali2016, lan2015,liu2014,wang20131} and the references therein. In addition, several unary graph operations were defined in the literature. Some of them are given below for the easy reference of the reader:
The \textit{line graph $\mathcal{L}(G)$ of $G$} is the graph having $E(G)$ as its vertex set, in which two vertices are adjacent if and only if the corresponding edges are adjacent in $G$. The \textit{subdivision graph} $S(G)$ of $G$ is the graph obtained by inserting a new vertex into every edge of $G$. The $R-$\textit{graph} $R(G)$ of $G$ is the graph obtained by taking one copy of $S(G)$ and joining the vertices which are adjacent in $G$. The \textit{middle graph} or $Q-$\textit{graph} $Q(G)$ of $G$ is the graph obtained from $G$ by inserting a new vertex into each edge of $G$, and joining by edges those pairs of new vertices which lie on adjacent edges of $G$. The \emph{total graph} $T(G)$ of $G$ is the graph obtained by taking one copy of $R(G)$ and joining the new vertices which lie on the adjacent edges of $G$. The \emph{quasitotal graph $QT(G)$ of $G$} is the graph obtained by taking one copy of $Q(G)$ and joining the vertices which are not adjacent in $G$. The determination of the $A$-spectra of these graphs has been made in \cite{cvetkovic1975,fiedler1973, wang20132,xie2016}. The \emph{central graph $C(G)$ of $G$} is the graph obtained by taking one copy of $S(G)$ and joining the vertices which are not adjacent in $G$.
In \cite{somody2017}, Marius Somodi \emph{et al.} defined the following graph operation which generalizes
the constructions of the middle, total, and quasitotal graphs:
Let $G$ and $G'$ be two graphs having $n$ vertices with the same vertex labeling $\{v_1,v_2,\ldots,v_n\}$. Then the \emph{overlay of $G$ and $G'$}, denoted by $G\ltimes G'$, is the graph obtained by taking $Q(G)$ and joining the vertices $v_i$ and $v_j$ of $G$ if and only if $v_i$ and $v_j$ are adjacent in $G'$.
Therein, they obtained the characteristic polynomials of the adjacency and Laplacian matrices of the overlay of two commuting graphs. Among other results, they determined the number of spanning trees and the Kirchhoff index of the overlay of two graphs. They also derived the $A$-spectrum and $L$-spectrum of the $Q-$graph, the total graph and the quasitotal graph of a graph.
By observing the constructions of the above-mentioned unary graph operations and of the overlay graph operation, we define a new ternary graph operation, namely the $(H_1,H_2)$-merged subdivision graph of $G$, which is obtained from the subdivision graph of $G$ by combining the suitable graphs $H_1$ and $H_2$. Consequently, this construction generalizes some graph operations defined in the literature, and it enables us to define some new unary operations in Section 2. In Section 3, we obtain the $A$-spectrum and $L$-spectrum of the $(H_1,H_2)$-merged subdivision graph for some classes of graphs $G$, $H_1$ and $H_2$. In addition, we deduce the $L$-spectra of the overlay graph for some classes of constituting graphs. In Section 4, we obtain the number of spanning trees and the Kirchhoff index of the $(H_1,H_2)$-merged subdivision graph for some classes of graphs $G$, $H_1$ and $H_2$.
\section{$(H_1,H_2)$-merged subdivision graph of $G$}
First we define the following ternary graph operation:
\begin{defn}\label{def1}
\normalfont
Let $G$ be a graph with $V(G)=\{v_1,v_2,\ldots,v_n\}$ and $E(G)=\{e_1,e_2,\ldots,e_m\}$. Let $H_1$ and $H_2$ be two graphs with $V(H_1)=\{u_1,u_2,\ldots,u_n\}$ and $V(H_2)=\{w_1,w_2,\ldots,w_m\}$. The \emph{ $(H_1,H_2)$-merged subdivision graph of $G$}, denoted by $[S(G)]^{H_1}_{H_2}$, is the graph obtained by taking $S(G)$ and joining the vertices $v_i$ and $v_j$ if and only if the vertices $u_i$ and $u_j$ are adjacent in $H_1$, and joining the new vertices which lie on the edges $e_t$ and $e_s$ if and only if $w_t$ and $w_s$ are adjacent in $H_2$ for $i,j=1,2,\ldots,n$ and $t,s=1,2,\ldots,m$.
\end{defn}
Clearly, if $G$ has $n$ vertices and $m$ edges, and $H_1$ and $H_2$ have $m_1$ and $m_2$ edges, respectively, then $[S(G)]^{H_1}_{H_2}$ has $n+m$ vertices and $2m+m_1+m_2$ edges.
We denote the graphs $[S(G)]_H^{\overline K_n}$ and $[S(G)]_{\overline K_m}^H$ simply by $[S(G)]_H$ and $[S(G)]^H$, respectively. The above construction is illustrated in Figure~\ref{HcombinedRgraphexample}.
\begin{figure}
\caption{An example of $(H_1,H_2)$-merged subdivision graph of a graph $G$}
\label{HcombinedRgraphexample}
\end{figure}
The construction used in Definition~\ref{def1} generalizes
many graph constructions: $S(G) \cong [S(G)]^{\overline K_n}_{\overline K_m}$, $R(G) \cong [S(G)]^G$ and $C(G) \cong [S(G)]^{\overline G}$. Also note that the graph $[S(G)]^{H}_{\mathcal L(G)}$ is the overlay of $G$ and $H$. Consequently, $Q(G)\cong [S(G)]_{\mathcal L(G)}$, $T(G) \cong [S(G)]^{G}_{\mathcal L(G)}$ and $QT(G) \cong [S(G)]^{\overline{ G}}_{\mathcal L(G)}$.
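For computations it is convenient to note that, with the labelings of Definition~\ref{def1}, the adjacency matrix of $[S(G)]^{H_1}_{H_2}$ has the block form $\begin{bmatrix} A(H_1) & B(G)\\ B(G)^T & A(H_2)\end{bmatrix}$. The following small NumPy sketch (not part of the paper; it assumes that the edge-vertices are ordered consistently with the vertex labeling of $H_2$) builds this matrix, here for the central graph $C(C_4)\cong [S(C_4)]^{\overline{C_4}}$.
\begin{verbatim}
import numpy as np

def merged_subdivision_adjacency(A_G, A_H1, A_H2):
    # block matrix [[A(H1), B(G)], [B(G)^T, A(H2)]],
    # with B(G) the vertex-edge incidence matrix of G
    n = A_G.shape[0]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if A_G[i, j]]
    B = np.zeros((n, len(edges)), dtype=int)
    for t, (i, j) in enumerate(edges):
        B[i, t] = B[j, t] = 1
    return np.block([[A_H1, B], [B.T, A_H2]])

A_C4 = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
A_H1 = np.ones((4, 4), dtype=int) - np.eye(4, dtype=int) - A_C4
A = merged_subdivision_adjacency(A_C4, A_H1, np.zeros((4, 4), dtype=int))
\end{verbatim}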
Some of the special cases of Definition~\ref{def1} enable us to define some interesting unary graph operations:
\begin{defn}\label{unary combined}
Let $G$ be a graph with $V(G)=\{v_1,v_2,\ldots,v_n\}$.
\normalfont
\begin{enumerate}[(1)]
\item The \emph{point complete subdivision graph of $G$} is the graph obtained by taking one copy of $S(G)$ and joining all the vertices $v_i,v_j\in V(G)$.
\item The \emph{$Q-$complemented graph of $G$} is the graph obtained by taking one copy of $S(G)$ and joining the new vertices which lie on the non-adjacent edges of $G$.
\item The \emph{total complemented graph of $G$} is the graph obtained by taking one copy of $R(G)$ and joining the new vertices which lie on the non-adjacent edges of $G$.
\item The \emph{quasitotal complemented graph of $G$} is the graph obtained by taking one copy of the $Q-$complemented graph of $G$ and joining all the vertices $v_i,v_j\in V(G)$ which are not adjacent in $G$.
\item The \emph{complete $Q-$complemented graph of $G$} is the graph obtained by taking one copy of the $Q-$complemented graph of $G$ and joining all the vertices $v_i,v_j\in V(G)$.
\item The \emph{complete subdivision graph of $G$} is the graph obtained by taking one copy of $S(G)$ and joining all the new vertices which lie on the edges of $G$.
\item The \emph{complete $R-$graph of $G$} is the graph obtained by taking one copy of $R(G)$ and joining all the new vertices which lie on the edges of $G$.
\item The \emph{complete central graph of $G$} is the graph obtained by taking one copy of central graph of $G$ and joining all the new vertices which lie on the edges of $G$.
\item The \emph{fully complete subdivision graph of $G$} is the graph obtained by taking one copy of $S(G)$ and joining all the vertices of $G$ and joining all the new vertices which lie on the edges of $G$.
\end{enumerate}
\end{defn}
Notice that the graphs mentioned in Definitions~\ref{unary combined}(1)-(9) are isomorphic to $[S(G)]^{K_n}$, $[S(G)]_{\overline{\mathcal L(G)}}$, $[S(G)]_{\overline {\mathcal L(G)}}^{G}$, $[S(G)]_{\overline { \mathcal L(G)}}^{\overline G}$, $[S(G)]_{\overline {\mathcal L(G)}}^{K_n}$, $[S(G)]_{K_m}$, $[S(G)]_{ K_m}^{G}$, $[S(G)]_{ K_m}^{\overline G}$, $[S(G)]_{ K_m}^{K_n}$, respectively. The structures of these graphs for $G=C_4$ are shown in Figures~\ref{fhhcombinedsubdivisionc4}(a)-(i), respectively.
\begin{figure}
\caption{(a) The point complete subdivision graph of $C_4$, (b) The $Q-$complemented graph of $C_4$, (c) The total complemented graph of $C_4$, (d) The quasitotal complemented graph of $C_4$, (e) The complete $Q-$complemented graph of $C_4$, (f) The complete subdivision graph of $C_4$, (g) The complete $R-$graph of $C_4$, (h) The complete central graph of $C_4$, (i) The fully complete subdivision graph of $C_4$}
\label{fhhcombinedsubdivisionc4}
\end{figure}
\section{$A$-spectra and $L$-spectra of $[S(G)]^{H_1}_{H_2}$}\label{sec2}
In this section, we compute the $L$-spectrum of $[S(G)]^{H_1}_{H_2}$ for some classes of graphs $G$, $H_1$ and $H_2$.
The Laplacian matrix of $[S(G)]^{H_1}_{H_2}$ is
\begin{eqnarray}\label{lmatrix of h1h2merged subdivision}
\begin{bmatrix}
L(H_1)+D(G) & -B(G) \\ -B(G)^T & L(H_2)+2I_m
\end{bmatrix}
\end{eqnarray}
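As a quick numerical sanity check of \eqref{lmatrix of h1h2merged subdivision} (and of the closed formulas derived from it below), the matrix can be assembled directly. The following NumPy sketch (not part of the paper) does so for $G=H_1=C_4$ and $H_2=\overline K_4$, i.e.\ for $R(C_4)\cong[S(C_4)]^{C_4}$; its eigenvalues can then be compared with Corollary~\ref{lspectrum of hmerged subdivision}(1).
\begin{verbatim}
import numpy as np

# adjacency matrix of C_4, which also serves as A(H_1)
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
D = np.diag(A.sum(axis=1))
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
B = np.zeros((4, 4), dtype=int)
for t, (i, j) in enumerate(edges):
    B[i, t] = B[j, t] = 1
L_H1 = D - A                   # L(H_1) with H_1 = C_4
L_H2 = np.zeros((4, 4))        # L(H_2) with H_2 the empty graph on 4 vertices
L = np.block([[L_H1 + D, -B], [-B.T, L_H2 + 2 * np.eye(4)]])
print(np.round(np.linalg.eigvalsh(L), 6))
\end{verbatim}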
We first state the following results which will be used later.
\begin{thm}\label{schur}(\cite{bapat2010})
Let $A$ be an $n\times n$ matrix partitioned as
\[A=\begin{bmatrix}
A_1&A_2\\A_3&A_4
\end{bmatrix},
\]where $A_1,A_4$ are square matrices. If $A_1,A_4$ are invertible, then
\begin{align*}
\left|A\right|&=\left|A_4\right| \left|A_1-A_2A_4^{-1}A_3\right|=\left|A_1\right| \left|A_4-A_3A_1^{-1}A_2\right|.
\end{align*}
\end{thm}
To obtain the $L$-spectrum of $[S(G)]^{H}$, $[S(G)]_{K_m}^{H}$ and $[S(G)]_{\overline {\mathcal L(G)}}^{H}$, where $G$ is regular, we first obtain the characteristic polynomial of a partitioned matrix:
\begin{pro}\label{determinant matrix including JBBT}
Let $A\in M_n(\mathbb R)$ and $B\in M_{n\times m}(\mathbb R)$. If the sum of all entries in each row of $B$ is equal to $r$, then the characteristic polynomial of the matrix
\[M=\begin{bmatrix}
A & B\\B^T& t_1I_m+t_2J_m+t_3B^TB
\end{bmatrix},
\] is
\[(x-t_1)^{m-n}\times \left|\left\{(x-t_1)I_n-t_3BB^T-\frac{t_2}{2}rJ_n\right\}(xI_n-A)-BB^T\right|.
\]
\end{pro}
\begin{proof}
\begin{eqnarray*}
\begin{vmatrix}
xI_n-A & -B\\-B^T& (x-t_1)I_m-t_2J_m-t_3B^TB
\end{vmatrix} &=&
\begin{vmatrix}
xI_n-A & -B\\-B^T-\displaystyle t_3B^T\left(xI_n-A\right)& (x-t_1)I_m-t_2J_{m}
\end{vmatrix}\\
&&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\left( R_2\rightarrow R_2-t_3B^TR_1\right)
\\ &=&
\begin{vmatrix}
xI_n-A & -B\\-B^T-\left\{\displaystyle t_3B^T+\frac{t_2}{2}J_{m\times n}\right\}\left( xI_n-A\right)& (x-t_1)I_m
\end{vmatrix}\\
&&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\left( R_2\rightarrow R_2-\frac{t_2}{2}J_{m\times n}R_1\right) .
\end{eqnarray*}
So, the result follows from Theorem~\ref{schur}.
\end{proof}
The following result gives a characterization of commuting matrices in terms of their eigenvectors.
\begin{pro}(\cite[Proposition 2.3.2]{hei2011})\label{eigenvectors of commting matrices}
Let $A_1,A_2,\ldots,A_m$ be symmetric matrices of order $n$. Then the following are equivalent.
\begin{enumerate}[(1)]
\item $A_iA_j=A_jA_i$, $\forall i,j\in\{1,2,\ldots,m\}$.
\item There exists an orthonormal basis $\{x_1,x_2,\ldots, x_n\}$ of $\mathbb R^n$ such that $x_1,x_2,\ldots, x_n$ are eigenvectors of $A_i$, $\forall i=1,2,\ldots,m$.
\end{enumerate}
\end{pro}
If $G$ and $H$ are two commuting graphs, then by Proposition \ref{eigenvectors of commting matrices}, there exists an orthonormal basis $\{x_1,x_2,\ldots, x_n\}$ of $\mathbb R^n$ such that $x_i'$s are eigenvectors of both $A(G)$ and $A(H)$. For such graphs, throughout this paper, the $A$-spectra of $G$ and $H$ are denoted by $\lambda_1(G),\lambda_2(G),\ldots, \lambda_n(G)$ and $\lambda_1(H),\lambda_2(H),\ldots, \lambda_n(H)$, respectively, where $\lambda_i(G)$ and $\lambda_i(H)$ are the eigenvalues of $A(G)$ and $A(H)$ corresponding to the same eigenvector $x_i$, $i=1,2,\ldots,n$. Further, if $G$ and $H$ are commuting graphs which are $r_1$, $r_2$ regular, respectively, then their $A$-spectra are denoted by $\lambda_1(G)(=r_1),\lambda_2(G),\ldots, \lambda_n(G)$ and $\lambda_1(H)(=r_2),\lambda_2(H),\ldots, \lambda_n(H)$, respectively; their $L$-spectra are denoted by $\mu_1(G)(=0),\mu_2(G),$ $\ldots, \mu_n(G)$ and $\mu_1(H)(=0),\mu_2(H),\ldots, \mu_n(H)$, respectively.
\begin{thm}\label{eigenvalues of M}
Let $G$ be an $r-$regular graph ($r\geq2$) with $n$ vertices and $m~(=\frac{1}{2}nr)$ edges. Let $H$ be a regular graph with $n$ vertices which commutes with $G$.
Let $M$ be a matrix of the form
\[M=\begin{bmatrix}
L(H)+rI_n & B(G) \\ B(G)^T & t_1I_m+t_2J_m+t_3B(G)^TB(G)
\end{bmatrix}.
\]Then the eigenvalues of $M$ are
\begin{eqnarray*}
&t_1^{m-n}, \displaystyle\frac{1}{2}\left(r+t_1+2rt_3+mt_2\pm\sqrt{(r-t_1-2rt_3-mt_2)^2+8r} \right), \notag
\\ &\displaystyle\frac{1}{2}\left( r+\mu_i(H)+t_1+2rt_3-t_3\mu_i(G)\pm\sqrt{\left( r+\mu_i(H)-t_1-2rt_3+t_3\mu_i(G)\right) ^2+4(2r-\mu_i(G))}\right).
\end{eqnarray*}
\end{thm}
\begin{proof}
By using Proposition~\ref{determinant matrix including JBBT}, $|xI_{n+m}-M|$ equals
\begin{eqnarray}
&(x-t_1)^{m-n}\left|\left\{(x-t_1)I_n-t_3B(G)B(G)^T-\frac{t_2}{2}J_n\right\}\left\{(x-r)I_n-L(H)\right\}-B(G)B(G)^T\right| \nonumber
\\=&(x-t_1)^{m-n}\left|\left\{(x-t_1)I_n-t_3(2rI_n-L(G))-\frac{t_2}{2}J_n\right\}\left\{(x-r)I_n-L(H)\right\}-2rI_n+L(G)\right|.\notag \\ \label{HHeq2}
\end{eqnarray}
Let $\mathcal D=\left|\left\{(x-t_1)I_n-t_3(2rI_n-L(G))-\displaystyle\frac{t_2}{2}J_n\right\}\left\{(x-r)I_n-L(H)\right\}-2rI_n+L(G)\right|$.
For any graph $G'$, the sum of entries in each row (column) of $L(G')$ is 0. So, we have $J_nL(G')=0=L(G')J_n$. That is, for any graph $G'$, $J_n$ commutes with $L(G')$ and so $J_n$ commutes with both $L(G)$ and $L(H)$. Since $G$ and $H$ are regular commuting graphs, $L(G)$ and $L(H)$ also commute. So $J_n$, $L(G)$ and $L(H)$ mutually commute with each other.
Then by Proposition \ref{eigenvectors of commting matrices}, there exist orthonormal vectors $x_1,x_2,\ldots,x_n$ which are eigenvectors of $J_n$, $L(G)$ and $L(H)$.
Since 0 is an eigenvalue of both $L(G)$ and $L(H)$ with eigenvector $\frac{1}{\sqrt{n}}J_{n\times 1}$, we assume that $\mu_1(G)=0$ and $\mu_1(H)=0$.
Since $\frac{1}{\sqrt{n}}J_{n\times 1}$ is an eigenvector of $J_n$ corresponding to the eigenvalue $n$, we take $\lambda_1(J_n)=n$ and all other eigenvalues of $J_n$ are $0$.
Let $P$ be the matrix with columns $\frac{1}{\sqrt{n}}J_{n\times 1},x_2,\ldots,x_n$. Then $P$ is orthonormal. Consequently, $P^TL(G)P=diag(0,\mu_2(G),\ldots,\mu_n(G))$, $P^TL(H)P=diag(0,\mu_2(H),\ldots,\mu_n(H))$ and $P^TJ_nP=diag(n,0,0,\ldots,0)$. So, we have
\begin{eqnarray*}
\mathcal D&=&|P^T|\left|\left\{(x-t_1)I_n-t_3(2rI_n-L(G))-\frac{t_2}{2}J_n\right\}\left\{(x-r)I_n-L(H)\right\}-2rI_n+L(G)\right||P|
\\&=&\left|\left\{(x-t_1)I_n-t_3(2rI_n-P^TL(G)P)-\frac{t_2}{2}P^TJ_nP\right\}\left\{(x-r)I_n-P^TL(H)P\right\}-2rI_n+L(G)\right|
\\&=&\left\{x^2-\left(r+t_1+2rt_3+mt_2\right)x+r(t_1+2rt_3+mt_2)-2r\right\}
\\&&\times\left\{\prod_{i=2}^{n}\left(x^2-\left[r+\mu_i(H)+t_1+t_3(2r-\mu_i(G))\right]x+t_3(2r-\mu_i(G))(r-\mu_i(H))-2r+\mu_i(G)\right) \right\}.
\end{eqnarray*}
Substituting this in \eqref{HHeq2} we get the result.
\end{proof}
In the following result we deduce the $L$-spectra of some special cases of the graph $[S(G)]^{H_1}_{H_2}$.
\begin{cor}\label{lspectrum of hmerged subdivision}
Let $G$ be an $r-$regular graph ($r\geq2$) with $n$ vertices and $m~(=\frac{1}{2}nr)$ edges. Let $H$ be a regular graph with $n$ vertices which commutes with $G$. Then we have the following.
\begin{enumerate}[(1)]
\item The $L$-spectrum of $[S(G)]^{H}$ is
\[0,r+2, 2^{m-n}, \frac{1}{2}\left( r+\mu_i(H)+2\pm\sqrt{(r+\mu_i(H)+2)^2-8\mu_i(H)-4\mu_i(G)}\right) \text{ for } i=2,3,\ldots,n.
\]
\item The $L$-spectrum of $[S(G)]_{K_m}^{H}$ is
\[0, r+2, (m+2)^{m-n}, \frac{1}{2}\left( m+r+\mu_i(H)+2\pm\sqrt{(m-r-\mu_i(H)+2)^2+4[2r-\mu_i(G)]}\right) \] for $i=2,3,\ldots,n$.
\item (\cite{somody2017}) The $L$-spectrum of $[S(G)]_{\mathcal L(G)}^{H}$ is
\[0, r+2, (2r+2)^{m-n}, \frac{1}{2}\left( r+\mu_i(H)+\mu_i(G)+2\pm\sqrt{(\mu_i(G)-r-\mu_i(H)+2)^2+4[2r-\mu_i(G)]}\right) ,
\] for $i=2,3,\ldots,n$.
\item The $L$-spectrum of $[S(G)]_{\overline{\mathcal L(G)}}^{H}$ is
\[0, r+2, (m-2r+2)^{m-n}, \frac{1}{2}\left( t_i+r+\mu_i(H)\pm\sqrt{(t_i-r-\mu_i(H))^2+4[2r-\mu_i(G)]}\right) ,
\] where $t_i=m-\mu_i(G)+2$ and $i=2,3,\ldots,n.$
\end{enumerate}
\end{cor}
\begin{proof}
The Laplacian matrix of $[S(G)]^{H}$, $[S(G)]_{K_m}^{H}$, $[S(G)]_{\mathcal L(G)}^{H}$ and $[S(G)]_{\overline{\mathcal L(G)}}^{H}$ can be obtained by substituting $L(\overline K_m)=0$, $L(K_m)=mI_m-J_m$, $L(\mathcal L(G))=2rI_m-B(G)^TB(G)$ and $L(\overline {\mathcal L(G)})=(m-2r)I_m-J_m+B(G)^TB(G)$, respectively in \eqref{lmatrix of h1h2merged subdivision}. The $L$-spectrum of these matrices can be obtained, respectively by taking $t_1, t_2$ and $t_3$ in Theorem~\ref{eigenvalues of M} in the following order: (1) $t_1=2$, $t_2=t_3=0$; (2) $t_1=m+2$, $t_2=-1$ and $t_3=0$; (3) $t_1=2r+2$, $t_2=0$ and $t_3=-1$; (4) $t_1=m-2r+2$, $t_2=-1$, and $t_3=1$.
\end{proof}
\begin{note}
\normalfont
For a given regular graph $G$, by suitably substituting the graph $H_1$ and $H_2$ in Corollary~\ref{lspectrum of hmerged subdivision}, we can obtain the $L$-spectra of its $R-$graph, central graph and each of the graphs defined in Definitions~\ref{unary combined}(1)-(9). Also, it can be seen that the $L$-spectra of these graphs constructed using $G$ are uniquely determined by the $L$-spectrum of $G$. Consequently, if $G$ and $G'$ are two regular $L$-cospectral graphs, then the graphs constructed using them as in Definitions~\ref{unary combined}(1)-(9) are $L$-cospectral.
\end{note}
\noindent\textbf{$(H_1,H_2)$-merged subdivision graph of $K_{p,p}$}
In the next result, we deduce the $L$-spectrum of $(H_1,H_2)$-merged subdivision graph of $K_{p,p}$, for $H_2=\overline K_m,K_m$, $\overline {\mathcal L(K_{p,p})}$.
To obtain this, we use the following result:
\begin{pro}(\cite[Proof of Lemma 3.13]{bapat2010}) \label{spectrum of bipartite graphs}
Let $G$ be a bipartite graph with bipartite sets $X$ and $Y$ having $p$ and $q$ vertices, respectively. If $\lambda(G)$ is an eigenvalue of $G$ with eigenvector $[x_1~x_2]^T$, then $-\lambda(G)$ is also an eigenvalue of $G$ with eigenvector $[x_1~-x_2]^T$, where $x_1\in \mathbb R^p$ and $x_2\in \mathbb R^q$.
\end{pro}
\begin{cor}\label{spectra of hhmerged subdivision of Kpp}
Let $H$ be a spanning $r-$regular subgraph of $K_{p,p}$. Then we have the following.
\begin{enumerate}[(1)]
\item The $L$-spectrum of $[S(K_{p,p})]^{H}$ is
\[0,p+2,p+2r, 2^{p^2-2p+1},
\frac{1}{2}\left( p+\mu_i(H)+2\pm\sqrt{(p+\mu_i(H)+2)^2-8\mu_i(H)-4p}\right)\]
for $i=3,4,\ldots,2p$.
\item The $L$-spectrum of $[S(K_{p,p})]_{K_{p^2}}^{H}$ is
\[0, p+2, (p^2+2)^{p^2-2p+1}, p+2r,
\frac{1}{2} \left( p^2+p+\mu_i(H)+2\pm\sqrt{(p^2-p-\mu_i(H)+2)^2+8p}\right) \] for $i=3,4,\ldots,2p$.
\item The $L$-spectrum of $[S(K_{p,p})]_{\overline{\mathcal L(K_{p,p})}}^{H}$ is
\[0, p+2, (p^2-2p+2)^{p^2-2p+1}, p+2r,
\frac{1}{2}\left( p^2+\mu_i(H)+2\pm\sqrt{(p^2-2p-\mu_i(H)+2)^2+4p}\right)\] for $i=3,4,\ldots,2p$.
\item The $L$-spectrum of $[S(H)]^{K_{p,p}}$ is
\[0,r+2, r+2p, 2^{m-2p+1},
\frac{1}{2}\left( r+2+p\pm\sqrt{(r+2+p)^2-8p-4\mu_i(H)}\right) \text { for } i=3,4,\ldots,2p.
\]
\item The $L$-spectrum of $[S(H)]_{K_{m}}^{K_{p,p}}$ is
\[0, r+2, r+2p, (m+2)^{m-2p+1},
\frac{1}{2} \left( m+r+p+2\pm\sqrt{(m-r-p+2)^2+4[2r-\mu_i(H)]}\right) \] for $i=3,4,\ldots,2p$.
\item The $L$-spectrum of $[S(H)]_{\overline{\mathcal L(H)}}^{K_{p,p}}$ is
\[0, r+2, r+2p, (m-2r+2)^{m-2p+1}, \frac{1}{2}\left(s_i+k\pm\sqrt{(s_i-k)^2+4[2r-\mu_i(H)]} \right) ,
\] where $k=r+p$, $s_i=m-\mu_i(H)+2$ and $i=3,4,\ldots,2p$.
\end{enumerate}
\end{cor}
\begin{proof}
Note that the spectrum of $K_{p,p}$ is $p,-p,0^{2p-2}$. Also $J_{2p\times 1}$ and $[J_{1\times p}~ -J_{1\times p}]^T$ are the eigenvectors corresponding to the eigenvalues $p$ and $-p$, respectively. Since $H$ is an $r-$regular spanning subgraph of $K_{p,p}$, it is a $r-$regular bipartite graph. Since $H$ is $r-$regular, $r$ is an eigenvalue of $A(H)$ corresponding to the eigenvector $J_{2p\times 1}$. By Proposition \ref{spectrum of bipartite graphs}, $-r$ is also an eigenvalue of $H$ corresponding to the eigenvector $[J_{1\times p}~ -J_{1\times p}]^T$, since $H$ is bipartite.
Consequently, $0$ is an eigenvalue of $L(K_{p,p})$ (resp. $L(H)$) corresponding to the eigenvector $J_{2p\times 1}$ and $2p$ (resp. $2r$) is an eigenvalue of $L(K_{p,p})$ (resp. $L(H)$) corresponding to the eigenvector $[J_{1\times p}~ -J_{1\times p}]^T$. So, we arrange the $L$-spectrum of $K_{p,p}$ and $H$ such that $\mu_1({K_{p,p}})=0,\mu_2({K_{p,p}})=2p,\mu_3({K_{p,p}})=p,\ldots,\mu_{2p}({K_{p,p}})=p$ and $\mu_1(H)=0,\mu_2(H)=2r,\mu_3(H),\ldots,\mu_{2p}(H)$. Then by using Corollary \ref{lspectrum of hmerged subdivision}, we get the result.
\end{proof}
\begin{note}
\normalfont
The argument used in the proof of Corollary~\ref{lspectrum of hmerged subdivision} can be used to derive the adjacency spectrum, the signless Laplacian spectrum and the normalized Laplacian spectrum of those graphs.
\end{note}
\noindent\textbf{$(H_1,H_2)$-merged subdivision graph of $K_{1,m}$}
\begin{thm}\label{lchpolygensubstar}
If $H$ is a graph with $m$ vertices, then we have the following.
\begin{enumerate}[(1)]
\item If $H$ is $r-$regular, then the $A$-spectrum of $[S(K_{1,m})]_{H}$ is \[ 0,\frac{1}{2}\left( r\pm\sqrt{r^2+4m+4}\right) ,\frac{1}{2}\left( \lambda_i(H)\pm\sqrt{\lambda_i(H)^2+4}\right) \text{ for } i=2,3, \ldots,m.\]
\item The $L$-spectrum of $[S(K_{1,m})]_{H}$ is $$0, \frac{1}{2}\left( m+3\pm\sqrt{(m-1)^2+4}\right) , \frac{1}{2}\left( \mu_i(H)+3\pm\sqrt{[\mu_i(H)+1]^2+4}\right)
\text{ for } i=2,3,\ldots,m. $$
\end{enumerate}
\end{thm}
\begin{proof}
\begin{enumerate}[(1)]
\item It is easy to see that
\begin{equation*}
A([S(G)]_H)=\begin{bmatrix}
0&B(K_{1,m})\\B(K_{1,m})^T&A(H)
\end{bmatrix}.
\end{equation*}
By using Theorem~\ref{schur} and the fact $\mathcal L(K_{1,m})=K_m$, we have
\begin{eqnarray*}
P_{[S(K_{1,m})]_{H}}(x)&=&x^{n-m}\left|x^2I_m-xA(H)-B(K_{1,m})^TB(K_{1,m})\right|
\\&=&x^{n-m}\left|(x^2-2)I_m-xA(H)-A(\mathcal L(K_{1,m}))\right|
\\&=&x^{n-m}\left|(x^2-2)I_m-xA(H)-A(K_m)\right|
\\&=&x^{n-m}\left|(x^2-1)I_m-xA(H)-J_m\right|
\\&=&x(x^2-rx-m-1)\times\left\{\prod_{i=2}^{m}(x^2-\lambda_i(H)x-1)\right\}.
\end{eqnarray*}
So the proof follows.
\item It is easy to see that
$$L([S(K_{1,m})]_H)=\begin{bmatrix}
D(K_{1,m})&-B(K_{1,m})\\-B(K_{1,m})^T&L(H)+2I_m
\end{bmatrix}.$$
By using Theorem~\ref{schur}, we have
\begin{eqnarray}\label{detlk1m}
L_{[S(K_{1,m})]_{H}}(x)&=&\left|xI_{m+1}-D(K_{1,m})\right|\times\notag\\ &&\left|xI_m-L(H)-2I_m-B(K_{1,m})^T\left[xI_{m+1}-D(K_{1,m})\right]^{-1}B(K_{1,m})\right|
\end{eqnarray}
Since $B(K_{1,m})^T=\begin{bmatrix}
J_{m\times 1}&I_m
\end{bmatrix}$ and $D(K_{1,m})=\begin{bmatrix}
m&0\\0&I_m
\end{bmatrix}$, we have
$$\left|xI_{m+1}-D(K_{1,m})\right|=(x-m)(x-1)^m$$ and
$$B(K_{1,m})^T(xI-D(K_{1,m}))^{-1}B(K_{1,m})=\frac{1}{x-m}J_m+\frac{1}{x-1}I_m.$$
Applying these in \eqref{detlk1m}, we get \begin{eqnarray*}
L_{[S(K_{1,m})]_{H}}(x)&=&(x-m)(x-1)^m~\left|(x-2)I_m-L(H)-\frac{1}{x-m}J_m-\frac{1}{x-1}I_m\right|
\\&=&(x-m)^{1-m}\left|(x-m)(x-1)\left[(x-2)I_m-L(H)\right]-(x-1)J_m-(x-m)I_m\right|
\end{eqnarray*}
Using the eigenvalues of $J_m$,
\begin{eqnarray*}
L_{[S(K_{1,m})]_{H}}(x)&=&
\left\{(x-m)(x-1)(x-2)-m(x-1)-(x-m)\right\}\\
&&
&&\times\left\{\prod_{i=2}^{m}\left((x-1)[x-\mu_i(H)-2]-1\right)\right\}
\\&=& x(x^2-(m+3)x+2m+1)\times\left\{\prod_{i=2}^{m}\left (x^2-(\mu_i(H)+3)x+\mu_i(H)+1 \right )\right\}.
\end{eqnarray*}
So the proof follows.
\end{enumerate}
\end{proof}
\noindent\textbf{$(H_1,H_2)$-merged subdivision graph of $P_n$}
The following result is proved in \cite{beezer1984}.
\begin{thm}(\cite[Theorem 3.2]{beezer1984}) \label{pathpolynomial graph}
Suppose that $p(x)$ is a polynomial of degree less than $n$. Then $p(A(P_n))$ is the adjacency matrix of a graph if and only if $p(x) = P_{P_{2i+1}}(x)$, for some
$i$, $0\leq i \leq \lfloor{\frac{n}{2}}\rfloor-1$.
\end{thm}
Using this result, we prove the following result.
\begin{cor}
Let $n \geq 3$ be an integer. If $H$ is a graph with $A(H)=P_{P_{2i+1}}(A(P_{n-1}))$, for some $i$, with $0\leq i \leq \lfloor{\frac{n-1}{2}}\rfloor-1$, then the $A$-spectrum of $[S(P_n)]_H$ is
\[0, \frac{c_j\pm\sqrt{c_j^2+8\left(\cos\frac{\pi j}{n}+1 \right)}}{2}, \] where $c_j=\displaystyle\sum_{k=0}^{i}(-1)^k\binom{2i+1-k}{k}\left(2\cos\frac{\pi j}{n} \right)^{2(i-k)+1} $ and $j=1,2,\ldots,n-1$.
\end{cor}
\begin{proof}
It is easy to see that
\[A\left([S(P_n)]_H \right)=\begin{bmatrix}
0 & B(P_n) \\ B(P_n)^T & A(H)
\end{bmatrix}
\]
So by Theorem~\ref{schur}, we have
\begin{eqnarray*}
P_{[S(P_n)]_H}(x)&=&x\times\left|x^2I_{n-1}-xA(H)-B(P_n)^TB(P_n)\right|
\\&=&x\times\left|(x^2-2)I_{n-1}-xA(H)-A(\mathcal L(P_n))\right|.
\end{eqnarray*}
Using the facts $\mathcal L(P_n)=P_{n-1}$ and $A(H)=P_{P_{2i+1}}(A(P_{n-1}))$, we have
$$ P_{[S(P_n)]_H}(x)=x\times\left\{\displaystyle\prod_{j=1}^{n-1}\left(x^2-P_{P_{2i+1}}(\lambda_j(P_{n-1}))x-\lambda_j(P_{n-1})-2\right)\right\}.$$
So the proof follows from the facts that $P_{P_{2i+1}}(x)=\displaystyle\sum_{k=0}^{i}(-1)^k\binom{2i+1-k}{k}x^{2(i-k)+1}$ and $\lambda_j(P_{n-1})=2\cos\frac{\pi j}{n}$, where $j=1,2,\ldots,n-1$.
\end{proof}
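In particular, for $i=0$ we have $A(H)=A(P_{n-1})$, and for $n=3$ the graph $[S(P_3)]_{P_2}$ is again the bull graph; here $c_1=1$ and $c_2=-1$, and the corollary yields the $A$-spectrum $0,\ \frac{1\pm\sqrt{13}}{2},\ \frac{-1\pm\sqrt{5}}{2}$, in agreement with the example given after Theorem~\ref{lchpolygensubstar}.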
\noindent\textbf{$Q-$complemented graph of a graph}
For a given $n\times n$ matrix $M$, its coronal $\chi_M(x)$ is defined as $\chi_M(x)=J_{1\times n}(xI_n-M)^{-1}J_{n\times 1}$ \cite{liu2014}. For a graph $G$, the coronal of $G$ is defined as the coronal of $A(G)$ and is simply denoted by $\chi_G(x)$.
\begin{pro}(\cite[Proposition 6]{mcleman2011}) \label{equal row sum matrices}
If $G$ is an $r-$regular graph with $n$ vertices, then \[\chi_G(x)=\frac{n}{x-r}.\]
\end{pro}
\begin{thm}(\cite[Proposition 6]{liu2017}) \label{chpoly of JM}
Let $M$ be a square matrix of order $n$ and $\alpha$ be a scalar. Then $$\left|xI_n-M-\alpha J_n\right|=\left(1-\alpha \chi_M(x)\right)~|xI_n-M|.$$
\end{thm}
\begin{thm}(\cite[Theorem 2.4.1, Equation (2.28)]{cvetkovic2010}) \label{chpolyline}
Let $G$ be a graph with $n$ vertices and $m$ edges. Then the characteristic polynomial of $A(\mathcal{L}(G))$ is $$(x+2)^{m-n}\mathcal Q_G(x+2).$$
\end{thm}
\begin{thm}\label{achpoly hcom sub}
Let $G$ be a graph with $n$ vertices and $m$ edges. Then the characteristic polynomial of $Q-$complemented graph of $G$ is $$(-1)^n(x-1)^m
\left(1-\frac{x}{1-x}\chi_{\mathcal L(G)}\left(\frac{x^2+x-2}{1-x}\right)\right)\mathcal Q_G(-x).$$
\end{thm}
\begin{proof}
Note that
\[A\left([S(G)]_{\overline {\mathcal L(G)}} \right) = \begin{bmatrix}
0 & B(G) \\B(G)^T & A(\overline{\mathcal L(G)})
\end{bmatrix}
\]
So by Theorem~\ref{schur} and using the fact $A(\overline{\mathcal L(G)})=J_m-I_m-A(\mathcal L(G))$
\begin{eqnarray*}
P_{[S(G)]_{\overline {\mathcal L(G)}}}(x)&=&x^{n-m}\times\left|x^2I_m-xA(\overline{\mathcal L(G)})-B(G)^TB(G)\right|
\\&=&x^{n-m}\times\left|(x^2+x-2)I_m-xJ_m-(1-x)A(\mathcal L(G))\right|.
\end{eqnarray*}
By using Theorem \ref{chpoly of JM}, we have,
\begin{eqnarray*}
P_{[S(G)]_{\overline {\mathcal L(G)}}}(x)&=&x^{n-m} \left(1-\frac{x}{1-x}\chi_{\mathcal L(G)}\left(\frac{x^2+x-2}{1-x}\right)\right)\left|(x^2+x-2)I_m-(1-x)A(\mathcal L(G))\right|
\\&=&x^{n-m} (1-x)^m \left(1-\frac{x}{1-x}\chi_{\mathcal L(G)}\left(\frac{x^2+x-2}{1-x}\right)\right)P_{\mathcal{L}(G)}\left(\frac{x^2+x-2}{1-x}\right).
\end{eqnarray*}
So the proof follows by Theorem \ref{chpolyline}.
\end{proof}
In the following result, we show that for a graph $G$ whose line graph is regular, the $A$-spectrum of $Q-$complemented graph of $G$ can be completely determined by the $\mathcal Q$-spectrum of $G$.
\begin{cor}\label{chpolygensubkmlinereg}
Let $G$ be a graph with $n$ vertices and $m$ edges whose line graph is $r-$regular $(r\geq1)$. Then the $A$-spectrum of $Q-$complemented graph of $G$ is
$$1^{m-1}, -\nu_i(G), \displaystyle\frac{m-r-1\pm\sqrt{(m-r-1)^2+4r+8}}{2},$$ where $i=2,3,\ldots,n$.
\end{cor}
\begin{proof}
Since $\mathcal L(G)$ is $r-$regular, by Proposition~\ref{equal row sum matrices} $\chi_{\mathcal L(G)}(x)=\displaystyle \frac{m}{x-r}$. So $$\chi_{\mathcal L(G)}\left(\displaystyle\frac{x^2+x-2}{1-x}\right)=\displaystyle\frac{m(1-x)}{(x+r+2)(x-1)}.$$ By Theorem \ref{achpoly hcom sub},
the characteristic polynomial of $Q-$complemented graph of $G$ is
\begin{eqnarray}
P_{[S(G)]_{\overline {\mathcal L(G)}}}(x)&=&(-1)^n(x-1)^m
\left(1-\frac{mx}{(x+r+2)(x-1)}\right)\mathcal Q_G(-x) \nonumber
\\&=&(-1)^n(x-1)^m
\left(\frac{x^2-(m-r-1)x-r-2}{(x+r+2)(x-1)}\right)\mathcal Q_G(-x) \label{HHeqn3}
\end{eqnarray}
Also, since $A(\mathcal{L}(G))=B(G)^TB(G)-2I_m$, the sum of the entries in each row of the matrix $B(G)^TB(G)$ is $r+2$ and so $\nu_1(G)=r+2$.
From this fact, we get $$\mathcal Q_G(-x)=(-1)^n(x+r+2)\times\left\{\displaystyle\prod_{i=2}^{n}(x+\nu_i(G))\right\}.$$ The proof follows by substituting this in \eqref{HHeqn3}.
\end{proof}
\begin{cor}
The $A$-spectrum of $Q-$complemented graph of $K_{p,q}$ is $$0,1^{pq-1},(-p)^{q-1},(-q)^{p-1},\frac{1}{2}\left( pq-p-q+1\pm\sqrt{(pq-p-q+1)^2+4(p+q)}\right).$$
\end{cor}
\begin{proof}
Since $\mathcal L(K_{p,q})$ is $(p+q-2)-$regular, by using Corollary \ref{chpolygensubkmlinereg} and the fact that the $\mathcal Q$-spectrum of $K_{p,q}$ is $p+q, 0, p^{q-1}, q^{p-1}$, we get the $A$-spectrum of $Q-$complemented graph of $K_{p,q}$.
\end{proof}
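For example, for $p=q=2$, so that $K_{2,2}=C_4$, the $Q-$complemented graph of $C_4$ is obtained from $S(C_4)=C_8$ by adding the two edges of $\overline{\mathcal L(C_4)}=2K_2$ joining opposite subdivision vertices, and the above corollary gives its $A$-spectrum as $0,\,1^{3},\,(-2)^{2},\,\frac{1\pm\sqrt{17}}{2}$; these eigenvalues sum to $0$ and their squares sum to $2|E|=20$, as they should.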
\noindent\textbf{Complete subdivision graph of a graph}
\begin{thm}\label{achpoly hcom sub2}
Let $G$ be a graph with $n$ vertices and $m$ edges. Then the characteristic polynomial of the complete subdivision graph of $G$ is $$(x+1)^{m-n}\left(1-x\chi_{\mathcal L(G)}(x^2+x-2)\right)\mathcal Q_G(x^2+x).$$
\end{thm}
\begin{proof}
Note that
\[A\left([S(G)]_{K_m} \right) = \begin{bmatrix}
0 & B(G) \\B(G)^T & A(K_m)
\end{bmatrix}
\]
So by Theorem~\ref{schur} and using the fact $A(K_m)=J_m-I_m$, we have
\begin{eqnarray*}
P_{[S(G)]_{K_m}}(x)&=&x^{n-m}\left|x^2I_m-x(J_m-I_m)-B(G)^TB(G)\right|
\\&=&x^{n-m}\left|(x^2+x-2)I_m-xJ_m-A(\mathcal L(G))\right|
\\&=&x^{n-m}\left(1-x\chi_{\mathcal L(G)}(x^2+x-2)\right)\left|(x^2+x-2)I_m-A(\mathcal L(G))\right|
\\&=&x^{n-m}\left(1-x\chi_{\mathcal L(G)}(x^2+x-2)\right) P_{\mathcal L(G)}(x^2+x-2).
\end{eqnarray*}
The proof follows by Theorem \ref{chpolyline}.
\end{proof}
In the next result, we show that for a graph $G$ whose line graph is regular, the $A$-spectrum of complete subdivision graph of $G$ can be completely determined by the $\mathcal Q$-spectrum of $G$.
\begin{cor}\label{chpolygensubkmlinereg2}
\begin{enumerate}[(1)]
\item The $A$-spectrum of complete subdivision graph of $tK_{1,2}$ $(t\geq1)$ is
$$0^t, \displaystyle \left(\frac{-1\pm\sqrt{5}}{2}\right)^t, \left(\frac{-1\pm\sqrt{13}}{2}\right)^{t-1},\frac{1}{2}\left( 2t-1\pm\sqrt{(2t-1)^2+12}\right) $$
\item Let $G$ be a graph with $n$ vertices and $m$ edges whose line graph is $r-$regular $(r\geq2)$. Then the $A$-spectrum of complete subdivision graph of $G$ is $$(-1)^{m-n},\displaystyle\frac{1}{2}\left( m-1\pm\sqrt{(m-1)^2+4r+8}\right) , \frac{1}{2}\left( -1\pm\sqrt{4\nu_i(G)+1}\right) \text{ for } i=2,3,\ldots,n.$$
\end{enumerate}
\end{cor}
\begin{proof}
\begin{enumerate}[(1)]
\item
Since $\mathcal L(tK_{1,2})=tK_2$ is $1-$regular, by Proposition~\ref{equal row sum matrices}, $\chi_{\mathcal{L}(G)}(x)=\displaystyle\frac{2t}{x-1}$. Using this fact and the fact that the $\mathcal Q$-spectrum of $tK_{1,2}$ is $3^t,1^t,0^t$ in Theorem \ref{achpoly hcom sub2}, we get the result.
\item Since $\mathcal L(G)$ is $r-$regular, by Proposition~\ref{equal row sum matrices} $\chi_{\mathcal L(G)}(x)=\displaystyle \frac{m}{x-r}$. So by Theorem \ref{achpoly hcom sub2},
the characteristic polynomial of $A([S(G)]_{K_m})$ is
\begin{align}\label{achpoly of complete subdivision line regular}
(x+1)^{m-n}\left(\frac{x^2-(m-1)x-r-2}{x^2+x-r-2}\right )Q_G(x^2+x).
\end{align}
Also, since $A(\mathcal{L}(G))=B(G)^TB(G)-2I_m$, the sum of the entries in each row of the matrix $B(G)^TB(G)$ is $r+2$ and so $\nu_1(G)=r+2$.
Using this fact, the proof follows from \eqref{achpoly of complete subdivision line regular}.
\end{enumerate}
\end{proof}
\begin{cor}
Let $(p,q)\neq (1,2), (2,1)$.
Then the $A$-spectrum of complete subdivision graph of $K_{p,q}$ is
$$\displaystyle 0,(-1)^{\alpha},\left(\frac{-1\pm\sqrt{4p+1}}{2}\right)^{q-1},\left(\frac{-1\pm\sqrt{4q+1}}{2}\right)^{p-1},
\frac{pq-1\pm\sqrt{(pq-1)^2+4(p+q)}}{2},$$
where $\alpha = pq-p-q+1$.
\end{cor}
\begin{proof}
Note that $\mathcal L(K_{p,q})$ is $(p+q-2)-$regular. So by using Corollary \ref{chpolygensubkmlinereg2}(2) and the fact that the $\mathcal Q$-spectrum of $K_{p,q}$ is $p+q, 0, p^{q-1}, q^{p-1} $, we get the $A$-spectrum of complete subdivision graph of $K_{p,q}$.
\end{proof}
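As an illustration, for $p=q=2$ the complete subdivision graph of $K_{2,2}=C_4$ consists of $C_8$ together with a copy of $K_4$ on the four subdivision vertices; here $\alpha=1$ and the corollary gives the $A$-spectrum $4,\,1^{2},\,0,\,(-1)^{2},\,(-2)^{2}$, whose sum and sum of squares equal $0$ and $2|E|=28$, respectively.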
\section{Applications}
In this section, we determine the number of spanning trees and the Kirchhoff index of $[S(G)]_{H_2}^{H_1}$ for some families of graphs $G$, $H_1$ and $H_2$.
The number of spanning trees of a graph $G$ is denoted by $\tau(G)$. First, we state a well-known result that counts the number of spanning trees of a graph using Laplacian eigenvalues.
\begin{thm}(\cite[Theorem 4.11]{bapat2010})\label{spanningtrees}
Let $G$ be a graph with $n$ vertices. Then
$$\tau(G)=\mu_2(G)\mu_3(G)\ldots\mu_n(G)/n.$$
\end{thm}
By using Corollary~\ref{lspectrum of hmerged subdivision} and Theorem~\ref{lchpolygensubstar} in Theorem~\ref{spanningtrees}, we have the following result.
\begin{cor}
Let $G$ be an $r-$regular graph with $V(G)=\{v_1,v_2,\ldots,v_n\}$ and $H$ be a graph with $V(H)=\{u_1,u_2,\ldots,u_n\}$ which commutes with $G$. Then we have the following.
\begin{enumerate}[(1)]
\item $\tau\left([S(G)]^{H} \right) =2^{m-n+1}\times \displaystyle\frac{1}{n}
\left\{\displaystyle\prod_{i=2}^{n}[2\mu_i(H)+\mu_i(G)]\right\}$,
\item $\tau\left([S(G)]_{K_m}^{H} \right)= (m+2)^{m-n}\times \displaystyle\frac{2}{n} \left\{\displaystyle \prod_{i=2}^{n}\left([m+2][r+\mu_i(H)]+\mu_i(G)-2r\right)\right\}$,
\item $\tau\left([S(G)]_{\overline{\mathcal L(G)}}^{H} \right)= (m-2r+2)^{m-n}\times \displaystyle\frac{2}{n} \left\{\displaystyle \prod_{i=2}^{n}\left([r+\mu_i(H)][m-\mu_i(G)+2]+\mu_i(G)-2r\right)\right\}$,
\item If $H$ is a graph with $m$ vertices, then $$\tau([S(K_{1,m})]_{H})=\displaystyle\ \left\{\displaystyle\prod_{i=2}^{m}(\mu_i(H)+1)\right\}.$$
\end{enumerate}
\end{cor}
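For instance, by part (4) with $m=2$ and $H=K_2$ we obtain $\tau([S(K_{1,2})]_{K_2})=\mu_2(K_2)+1=3$, which is indeed the number of spanning trees of the bull graph.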
The Kirchhoff index of a connected graph can be calculated by using the following result.
\begin{thm}(\cite[Lemma 3.4]{gao2012}) \label{kf formula}
For a connected graph $G$ with $n \geq 2$ vertices,
$$Kf(G)=n\sum_{i=2}^{n}\frac{1}{\mu_i(G)}$$
\end{thm}
\begin{cor}
Let $G$ be an $r-$regular graph with $V(G)=\{v_1,v_2,\ldots,v_n\}$ and $H$ be a graph with $V(H)=\{u_1,u_2,\ldots,u_n\}$ which commutes with $G$. Then we have the following.
\begin{enumerate}[(1)]
\item $Kf\left([S(G)]^{H} \right) = \displaystyle \frac{n}{2}+\frac{m^2-n^2}{2}+k_1\displaystyle\sum_{i=2}^{n}\left(\frac{k_2+\mu_i(H)}{2\mu_i(H)+\mu_i(G)} \right) $,
\item $Kf\left([S(G)]_{K_m}^{H}\right)=\displaystyle \frac{n}{2}+\frac{m^2-n^2}{m+2}+
k_1\displaystyle\sum_{i=2}^{n}\left(\frac{m+k_2+\mu_i(H)}{m(r+\mu_i(H))+2\mu_i(H)+\mu_i(G)} \right)$,
\item $Kf\left([S(G)]_{\overline{\mathcal L(G)}}^{H} \right) =\displaystyle \frac{n}{2}+\frac{m^2-n^2}{k_3}+k_1\displaystyle\sum_{i=2}^{n}\left(\frac{m+k_2+\mu_i(H)-\mu_i(G)}{[m-\mu_i(G)][r+\mu_i(H)]+2\mu_i(H)+\mu_i(G)} \right)$,
\item If $H$ is a graph with $m$ vertices, then $$Kf([S(K_{1,m})]_H)=m+3+(2m+1)\sum_{j=2}^m\left( \frac{\mu_j(H)+3}{\mu_j(H)+1}\right) .$$
\end{enumerate}
where $k_1=m+n$, $k_2=r+2$ and $k_3=m-2r+2$.
\end{cor}
\begin{proof}
\begin{enumerate}[(1)]
\item Let $a_i=r+\mu_i(H)+2$, $b_i=2\mu_i(H)+\mu_i(G)$. Then by applying Corollary~\ref{lspectrum of hmerged subdivision}(1) in Theorem \ref{kf formula}, we have,
\begin{eqnarray*}
Kf\left([S(G)]^{H} \right) &=& \displaystyle \frac{m+n}{r+2}+\frac{m^2-n^2}{2}+(m+n)\displaystyle\sum_{i=2}^{n}\left(\frac{2}{a_i+\sqrt{a_i^2-4b_i}}+\frac{2}{a_i-\sqrt{a_i^2-4b_i}} \right)
\\&=& \displaystyle \frac{n}{2}+\frac{m^2-n^2}{2}+(m+n)\displaystyle\sum_{i=2}^{n}\left(\frac{a_i}{b_i} \right),
\end{eqnarray*}
where the last equality uses $2m=nr$, so that $\frac{m+n}{r+2}=\frac{n}{2}$.
\item [(2)--(4):] The proofs are analogous to that of (1), using Corollary \ref{lspectrum of hmerged subdivision}(2), Corollary \ref{lspectrum of hmerged subdivision}(4) and Theorem \ref{lchpolygensubstar}, respectively, in Theorem~\ref{kf formula}.
\end{enumerate}
\end{proof}
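For example, for $m=2$ and $H=K_2$, part (4) gives $Kf([S(K_{1,2})]_{K_2})=5+5\cdot\frac{5}{3}=\frac{40}{3}$, which agrees with the value $5\left(1+\frac{5}{3}\right)=\frac{40}{3}$ computed directly from the Laplacian spectrum $0,\ \frac{5\pm\sqrt{5}}{2},\ \frac{5\pm\sqrt{13}}{2}$ of the bull graph via Theorem~\ref{kf formula}.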
\section {Concluding remarks}
The new graph construction defined in this paper generalizes many existing graph operations and makes it possible to define some new unary graph operations as particular cases of the above construction. Using Corollary 3.1, the $L$-spectra of the graphs obtained by the existing as well as the new unary graph operations mentioned above are readily derived.
The determination of the spectra of $[S(G)]^{H_1}_{H_2}$ for the classes of graphs $G$, $H_1$ and $H_2$ which are not considered in this paper is an interesting problem. The determination of the spectra of other graph matrices of these graphs is a further research topic in this direction. The study of other graph theoretic properties of the graphs constructed by this ternary graph operation, as well as by the new unary graph operations, needs further research.
\section*{Acknowledgment}
The authors would like to thank the referee for his/her useful comments and suggestions. The second author is supported by INSPIRE Fellowship, Ministry of Science and Technology, Government of India under the Grant no. DST/INSPIRE Fellowship/[IF150651] 2015.
\end{document}
\begin{document}
\title{On the growth rate of minor-closed classes of graphs}
\begin{abstract}
A minor-closed class of graphs is a set of labelled graphs which
is closed under isomorphism and under taking minors. For a
minor-closed class $\mathcal{G}$, we let $g_n$ be the number of graphs in
$\mathcal{G}$ which have $n$ vertices. A recent result of Norine \emph{et
al.} \cite{Norine:small} shows that for every proper minor-closed class
$\mathcal{G}$, there is a constant $c$ such that $g_n\leq c^n n!$. Our
main results show that the growth rate of $g_n$ is far from
arbitrary. For example, no minor-closed class $\mathcal{G}$ has $g_n=
c^{n+o(n)} n!$ with $0<c<1$ or $1<c<\xi\approx 1.76$.
\end{abstract}
\section{Introduction}
In 1994, Scheinerman and Zito \cite{Scheinerman:speed-hereditary}
introduced the study of the possible growth rates of hereditary
classes of graphs (that is, sets of graphs which are closed under
isomorphism and induced subgraphs). Here we study the same problem
for classes which are closed under taking minors. Clearly, being
minor-closed is a much stronger property than being hereditary.
However, many of the more structured hereditary classes such
as graphs embeddable in a fixed surface or graphs of tree
width bounded by a fixed constant are minor-closed and the
possible growth rates attainable are of independent interest.
A broad classification of possible growth rates for hereditary
classes given by Scheinerman and Zito
\cite{Scheinerman:speed-hereditary} is into four categories,
namely constant, polynomial, exponential and factorial. This
has been considerably extended in a series of papers by Balogh,
Bollobas and Weinrich
\cite{Bolagh:speed-hereditary,Bolagh:hereditary-penultimate,Bolagh:hereditary-Bell}
who use the term \emph{speed} for what we call growth rate.
A first and important point to note is that if a class of
graphs is minor-closed then it is hereditary. Hence, in what
follows we are working within the confines described by the
existing classifications of growth rates of hereditary classes.
Working in this more restricted context, we obtain simpler
characterization of the different categories of growth rate and
simpler proofs. This is done in Section
\ref{section:classification-thm}. In Section
\ref{section:growth-constants}, we establish some results about
the possible behaviour of classes in the most interesting range
of growth rates, namely the factorial range. We conclude by
listing some open questions in Section \ref{section:conclusion}.
A significant difference between hereditary and minor-closed classes is due to the following
recent result by Norine \emph{et al.}
A class is proper if it does not contain all graphs.
\begin{thm}[Norine et al. \cite{Norine:small}]\label{th:small}
If $\mathcal{G}$ is a proper minor-closed class of graphs then $g_n \le c^n n! $ for some constant $c$.
\end{thm}
\paragraph{Remark.} In contrast, a hereditary class such as the set of bipartite
graphs can have growth rate of order $2^{cn^2}$ with $c>0$.
We close this introduction with some definitions and notations. We
consider simple labelled graphs. The \emph{size} of a graph is the
number of vertices; graphs of size $n$ are labelled with vertex
set $\{1,2,\dots,n\}$. A \emph{class} of graphs is a family of
labelled graphs closed under isomorphism. For a class of graphs
$\mathcal{G}$, we let $\mathcal{G}_n$ be the graphs in
$\mathcal{G}$ with $n$ vertices, and we let $g_n
=|\mathcal{G}_n|$. The (exponential) \emph{generating function}
associated to a class $\mathcal{G}$ is $G(z)=\sum_{n\geq 0}
\frac{g_n}{n!} z^n$.
The relation $H < G$ between graphs means \emph{$H$ is a minor of
$G$}. A family $\mathcal{G}$ is \emph{minor-closed} if $G \in
\mathcal{G}$ and $H<G$ implies $H \in \mathcal{G}$. A class is
\emph{proper} if it does not contain all graphs. A graph $H$ is a
(minimal) \emph{excluded minor} for a minor-closed family
$\mathcal{G}$ if $H \not\in \mathcal{G}$ but every proper minor of
$H$ is in $\mathcal{G}$. We write $\mathcal{G} =\textrm{Ex}(H_1,H_2,
\cdots)$ if $H_1,H_2,\dots$ are the excluded minors of
$\mathcal{G}$. By the theory of graph minors developed by
Robertson and Seymour \cite{Seymour:Graph-minors}, the number of
excluded minors is always finite.
\section{A classification theorem}\label{section:classification-thm}
Our classification theorem for the possible growth rate of minor-closed classes of graphs involves the following classes;
it is easy to check that they are all minor-closed.\\
\noindent $\bullet~$ $\mathcal{P}$ is the class of \emph{path forests}: graphs whose connected components are paths.\\
\noindent $\bullet~$ $ \mathcal{S}$ is the class of \emph{star forests}: graphs whose connected components are stars (this includes isolated vertices).\\
\noindent $\bullet~$ $ \mathcal{M}$ is the class of \emph{matchings}: graphs whose connected components are edges and isolated vertices.\\
\noindent $\bullet~$ $ \mathcal{X}$ is the class of \emph{stars}: graphs made of one star and some isolated vertices.
\begin{thm}\label{th:refine}
Let $\mathcal{G}$ be a proper minor-closed family and let $g_n$ be the number of graphs in $\mathcal{G}$ with $n$ vertices.
\begin{enumerate}
\item If $\mathcal{G}$ contains all the paths, then $g_n$ has \emph{factorial growth}, that is, \\
$n! \leq g_n \leq c^n n! \textrm{ for some } c>1;$ \label{item:factorial}
\item else, if $\mathcal{G}$ contains all the star forests, then $g_n$ has \emph{almost-factorial growth}, that is,\\
$B(n) \leq g_n \leq \epsilon^n n!~ \textrm{ for all } \epsilon>0$, where $B(n)$ is the $n^{\rm th}$ Bell number;
\item else, if $\mathcal{G}$ contains all the matchings, then $g_n$ has \emph{semi-factorial growth}, that is, \\
$a^n n^{(1-1/k)n} \leq g_n \leq b^n n^{(1-1/k)n}~\textrm{ for some integer } k\geq 2 \textrm{ and some } a,b>0;$
\item else, if $\mathcal{G}$ contains all the stars, then $g_n$ has \emph{exponential growth}, that is,\\
$2^{n-1} \leq g_n \leq c^n~ \textrm{ for some } c>2;$ \label{item:exponential}
\item else, if $\mathcal{G}$ contains all the graphs with a single edge, then $g_n$ has \emph{polynomial growth}, that is,
$g_n = P(n)~\textrm{ for some polynomial } P(n) \textrm{ of degree at least 2 and } n \textrm{ sufficiently large};$
\item else, $g_n$ is \emph{constant}, namely
$g_n \textrm{ is equal to 0 or 1 for } n \textrm{ sufficiently large}.$
\end{enumerate}
\end{thm}
\paragraph{Remark.} As mentioned in the introduction, some of the results given by Theorem \ref{th:refine}
follow from the previous work on hereditary classes. In
particular, the classification of growth between \emph{pseudo
factorial} (this includes our categories factorial,
almost-factorial and semi-factorial), \emph{exponential},
\emph{polynomial} and \emph{constant} was proved by Scheinerman
and Zito in~\cite{Scheinerman:speed-hereditary}. A refined
description of the exponential growth category was also proved in
this paper (we have not included this refinement in our statement
of the classification Theorem~\ref{th:refine} since we found no
shorter proof of this result in the context of minor-closed
classes). The refined descriptions of the semi-factorial and
polynomial growth categories stated in Theorem~\ref{th:refine}
were established in \cite{Bolagh:speed-hereditary}. Finally, the
\emph{jump} between the semi-factorial growth category and the
almost-factorial growth category was established in
\cite{Bolagh:hereditary-Bell}.
The rest of this section is devoted to the proof of Theorem
\ref{th:refine}. This proof is self-contained and does not use the
results from
\cite{Scheinerman:speed-hereditary,Bolagh:speed-hereditary,Bolagh:hereditary-penultimate,Bolagh:hereditary-Bell}.
We begin by the following easy estimates.
\begin{lem}\label{lem:estimates}
1. The number of path forests of size $n$ satisfies $|\mathcal{P}_n| \geq n!$.\\
2. The number of star forests of size $n$ satisfies $|\mathcal{S}_n| \geq B(n)$.\\
3. The number of matchings of size $n$ satisfies $|\mathcal{M}_n| \geq n!!=n(n-2)(n-4)\ldots$.\\
4. The number of stars of size $n$ satisfies $|\mathcal{X}_n| \geq 2^{n-1}$.
\end{lem}
We recall that $\log(n!)=n\log(n)+O(n)$, $\log B(n) = n \log(n) - n \log(\log(n)) + O(n)$ and $\log(n!!)=n\log(n)/2+O(n)$.
\begin{proof}
1. The number of path forests of size $n\geq 2$ made of a single path is $n!/2$; the number of path forests of size $n\geq 2$ made of an isolated vertex and a path is $n!/2$.\\
2. A star-forest defines a partition of $[n]:=\{1,2,\dots,n\}$ (together with some marked vertices: the centers of the stars) and the partitions of $[n]$ are counted by the Bell numbers $B(n)$.\\
3. The vertex $n$ of a matching of size $n$ can be isolated or joined to any of the $(n-1)$ other vertices, hence $|\mathcal{M}_n|= |\mathcal{M}_{n-1}|+(n-1)|\mathcal{M}_{n-2}|$. Since $|\mathcal{M}_{n-1}|\geq (n-1)!!\geq (n-2)!!$, the property $|\mathcal{M}_n|\geq n!!$ follows by induction.\\
4. The number of stars for which 1 is the center of the star is $2^{n-1}$.
\end{proof}
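For small sizes these estimates are easy to verify directly: for $n=4$ there are exactly $10$ matchings, so $|\mathcal{M}_4|=10\geq 4!!=8$, while $B(4)=15$.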
\noindent \textbf{Proof of Theorem \ref{th:refine}} \\
\noindent $\bullet~$ The lower bound for classes of graphs containing all paths
follows from Lemma~\ref{lem:estimates} while the upper bound
follows from Theorem \ref{th:small}.
\noindent $\bullet~$ The lower bound for classes of graphs containing all the star
forests but not all the paths follows from
Lemma~\ref{lem:estimates}. The upper bound is given by the
following Claim (and the observation that if a class $\mathcal{G}$
does not contain a given path $P$, then $\mathcal{G} \subseteq
\textrm{Ex}(P)$).
\begin{claim}\label{claim:path}
For any path $P$, the growth rate of $\textrm{Ex}(P)$ is bounded by $\epsilon^n n^n$ for all $\epsilon>0$.
\end{claim}
The proof of Claim \ref{claim:path} uses the notion of
\emph{depth-first search spanning tree} (or \emph{DFS tree} for
short) of a graph. A DFS tree of a connected graph $G$ is a rooted
spanning tree obtained by a \emph{depth-first search algorithm} on
$G$ (see, for instance, \cite{Cormen:introduction-algo}). If $G$
is not connected, a choice of a DFS tree on each component of $G$
is a \emph{DFS spanning forest}. We recall that if $T$ is a DFS
spanning forest of $G$, every edge of $G$ which is not in $T$
joins a vertex of $T$ to one of its ancestors
(see~\cite{Cormen:introduction-algo}).
\begin{proof}
Let $P$ be the path of size $k$. Let $G$ be a graph in $\textrm{Ex}(P)$
and let $T$ be a DFS spanning forest of $G$.
We wish to bound the number of pairs $(G,T)$ of this kind.\\
\noindent $\bullet~$ First, the height of $T$ is at most $k-1$ (otherwise $G$
contains $P$). The number of (rooted labelled) forests of bounded
height is at most $\epsilon^n n^n $ for all $\epsilon>0$; this is
because the associated exponential generating function is
analytic everywhere and hence has infinite radius of convergence
(see Section III.8.2 in \cite{Flajolet:analytic}).
\noindent $\bullet~$ Second, since $T$ is a DFS spanning forest, any edge in $G$
which is not in $T$ joins a vertex of $T$ to one of its ancestors.
Since the height of $T$ is at most $k-1$, each vertex has at most
$k$ ancestors, so can be joined to its ancestors in at most $2^k$
different ways. This means that, given $T$, the graph $G$ can be
chosen in at most $2^{kn}$ ways, and so the upper bound
$\epsilon^n n^n$ for all $\epsilon>0$ holds for the number of
pairs $(G,T)$.
\end{proof}
\noindent $\bullet~$ We now consider minor-closed classes which do not contain all
the paths nor all the star forests. Given two sequences
$(f_n)_{n\in \mathbb{N}}$ and $(g_n)_{n\in \mathbb{N}}$, we write $f_n \asymp_{\textrm{exp}} g_n$
if there exist $a,b>0$ such that $f_n\leq a^ng_n$ and $g_n\leq
b^nf_n$. Observe that if $\mathcal{G}$ contains all the matchings, then
$g_n\geq n!!\asymp_{\textrm{exp}} n^{n/2}$ by Lemma~\ref{lem:estimates}. We prove
the following more precise result.
\begin{claim}\label{claim:pseudo-factorial}
Let $\mathcal{G}$ be a minor-closed class containing all matchings but not containing all the paths nor all the star forests. Then, there exists an integer $k\geq 2$ such that $g_n\asymp_{\textrm{exp}} n^{(1-1/k)n}$.
\end{claim}
\paragraph{Remark.} For any integer $k\geq 2$, there exists a minor-closed class of graphs $\mathcal{G}$ such that $g_n\asymp_{\textrm{exp}} n^{(1-1/k)n}$. For instance, the class $\mathcal{G}$ in which the connected components have no more than $k$ vertices satisfies this property (see Lemma \ref{lem:unbounded-multiplicity} below).
\begin{proof}
Let $\mathcal{G}$ be a minor-closed class containing all matchings but not a given path $P$ nor a given star forest $S$. We denote by $p$ and $s$ the size of $P$ and $S$ respectively. Let $\mathcal{F}$ be the set of graphs in $\mathcal{G}$ such that every vertex has degree at most $s$. The following lemma compares the growth rate of $\mathcal{F}$ and $\mathcal{G}$.
\begin{lem}
The number $f_n$ of graphs of size $n$ in $\mathcal{F}$ satisfies $f_n\asymp_{\textrm{exp}} g_n$.
\end{lem}
\begin{proof}
Clearly $f_n\leq g_n$ so we only have to prove that there exists
$b>0$ such that $g_n\leq b^nf_n$. Let $c$ be the number of stars
in the star forest $S$ and let $s_1,\ldots,s_c$ be the respective
number of edges of these stars (so that $s=c+ s_1+\ldots+s_c$).
\noindent $\bullet~$ We first prove that \emph{any graph in $\mathcal{G}$ has less than
$c$ vertices of degree greater than $s$}. We suppose that a graph
$G\in \mathcal{G}$ has $c$ vertices $v_1,\ldots,v_c$ of degree at least
$s$ and we want to prove that $G$ contains the forest $S$ as a
subgraph (hence as a minor; which is impossible). For
$i=1,\ldots,c$, let $V_i$ be the set of vertices distinct from
$v_1,\ldots,v_c$ which are adjacent to $v_i$. In order to prove
that $G$ contains the forest $S$ as a subgraph it suffices to show
that there exist disjoint subsets $S_1\subseteq
V_1,\ldots,S_c\subseteq V_c$ of respective size $s_1,\ldots,s_c$.
Suppose, by induction, that for a given $k\leq c$ there exist
disjoint subsets $S_1\subseteq V_1,\ldots,S_{k-1}\subseteq
V_{k-1}$ of respective size $s_1,\ldots,s_{k-1}$. The set
$R_k=V_k-\bigcup_{i< k} S_i$ has size at least $s-c-\sum_{i<
k}s_i\geq s_k$, hence there is a subset $S_k\subseteq R_k$,
disjoint from the $S_i,~i<k$, of size $s_k$. The induction follows.
\noindent $\bullet~$ We now prove that $g_n \leq {n \choose c} 2^{cn} f_n$. For
any graph in $\mathcal{G}$ one obtains a graph in $\mathcal{F}$ by deleting all
the edges incident to the vertices of degree greater than $s$.
Therefore, any graph of $\mathcal{G}_n$ can be obtained from a graph of
$\mathcal{F}_n$ by choosing $c$ vertices and adding some edges incident to
these vertices. There are at most ${n \choose c} 2^{cn}f_n$ graphs
obtained in this way.
\end{proof}
It remains to prove that $f_n\asymp_{\textrm{exp}} n^{(1-1/k)n}$ for some
integer $k\geq 2$. Let $G$ be a graph in $\mathcal{F}$ and let $T$ be a
spanning tree of one of its connected components. The tree $T$ has
height less than $p$ (otherwise $G$ contains the path $P$ as a
minor) and vertex degree at most $s$. Hence, $T$ has at most
$1+s+\ldots+s^{p-1}\leq s^p$ vertices. Thus the connected
components of the graphs in $\mathcal{F}$ have at most $s^p$ vertices. For
a connected graph $G$, we denote by $m(G)$ the maximum $r$ such
that $\mathcal{F}$ contains the graph consisting of $r$ disjoint copies of
$G$. We say that $G$ has \emph{unbounded multiplicity} if $m(G)$
is not bounded. Note that the graph consisting of 1 edge has
unbounded multiplicity since $\mathcal{G}$ contains all matchings.
\begin{lem}\label{lem:unbounded-multiplicity}
Let $k$ be the size of the largest connected graph in $\mathcal{F}$ having unbounded multiplicity.
Then, $\displaystyle f_n \asymp_{\textrm{exp}} n^{(1-1/k)n}$.
\end{lem}
\begin{proof}
\noindent $\bullet~$ Let $G$ be a connected graph in $\mathcal{F}$ of size $k$ having unbounded multiplicity. The class
of graphs consisting of disjoint copies of $G$ and isolated vertices (these are included in order
to avoid parity conditions) is contained in $\mathcal{F}$ and has exponential generating function
$\exp(z+ z^k/a(G))$, where $a(G)$ is the number of automorphisms of $G$. Hence
$f_n$ is of order at least $n^{(1-1/k)n}$, up to an exponential
factor (see Corollary VIII.2 in \cite{Flajolet:analytic}).
\noindent $\bullet~$ Let $\mathcal{L}$ be
the class of graphs in which every connected component $C$ appears
at most $m(C)$ times. Then clearly $\mathcal{F} \subseteq \mathcal{L}$.
The exponential generating function for $\mathcal{L}$ is
$P(z)\exp(Q(z))$, where $P(z)$ collects the connected graphs with
bounded multiplicity, and $Q(z)$ those with unbounded
multiplicity. Since $Q(z)$ has degree $k$, we have an upper bound
of order $n^{(1-1/k)n}$.
\end{proof}
This finishes the proof of Claim \ref{claim:pseudo-factorial}.
\end{proof}
\noindent $\bullet~$ We now consider the classes of graphs containing all the
stars but not all the matchings. The lower bound for these classes
follows from Lemma~\ref{lem:estimates} while the upper bound is
given by the following claim.
\begin{claim}
Let $M_k$ be a perfect matching on $2k$ vertices. The growth rate of $\textrm{Ex}(M_k)$ is at most exponential.
\end{claim}
\begin{proof}
Let $G$ be a graph of size $n$ in $\textrm{Ex}(M_k)$ and let $M$ be a
maximal matching of $G$. The matching $M$ has no more than $2k-2$
vertices (otherwise, $M_k<G$). Moreover, the remaining vertices
form an independent set (otherwise, $M$ is not maximal). Hence
$G$ is a subgraph of the sum $H_n$ of the complete graph
$K_{2k-2}$ and $n-(2k-2)$ independent vertices. There are ${n
\choose 2k-2}$ ways of labeling the graph $H_n$ and $2^{e(H_n)}$
ways of taking a subgraph, where $e(H_n)={2k -2 \choose 2} +
(2k-2)(n-2k+2)$ is the number of edges of $H_n$. Since ${n
\choose 2k-2}$ is polynomial and $e(H_n)$ is linear, the number of
graphs of size $n$ in $\textrm{Ex}(M_k)$ is bounded by an
exponential.\end{proof}
\noindent $\bullet~$ We now consider classes of graphs $\mathcal{G}$ containing
neither all the matchings nor all the stars. If $\mathcal{G}$ does not
contain all the graphs with a single edge, then either $\mathcal{G}$
contains all the graphs without edges and $g_n=1$ for $n$ large
enough or $g_n=0$ for $n$ large enough. Observe that if $\mathcal{G}$
contains the graphs with a single edge, then $g_n\geq
\frac{n(n-1)}{2}$. It only remains to prove the following claim:
\begin{claim}\label{claim:polynomial-growth}
Let $\mathcal{G}$ be a minor-closed class containing neither all the
matchings nor all the stars. Then, there exists an integer $N$ and
a polynomial $P$ such that $g_n=P(n)$ for all $n\geq N$.
\end{claim}
\paragraph{Remark.} For any integer $k\geq 2$, there exists a minor-closed class of graphs $\mathcal{G}$ such that $g_n=P(n)$ where $P$ is a polynomial of degree $k$. Indeed, we let the reader check that the class $\mathcal{G}$ of graphs made of one star of size at most $k$ plus some isolated vertices satisfies this property.
\begin{proof}
Since $\mathcal{G}$
does not contain all
matchings, one of the minimal excluded minors of $\mathcal{G}$ is a
graph $M$ which is made of a set of $k$ independent edges plus $l$
isolated vertices. Moreover, $\mathcal{G}$ does not contain all the stars,
thus one of the minimal excluded minors of $\mathcal{G}$ is a graph $S$
made of one star on $s$ vertices plus $r$ isolated vertices.
\noindent $\bullet~$ We first prove that \emph{for every graph $G$ in $\mathcal{G}$ having
$n\geq \max(s+r,2k+l)$ vertices, the number of isolated vertices
is at least $n-2ks$.} Observe that for every graph $G$ in $\mathcal{G}$
having at least $s+r$ vertices, the degree of the vertices is less
than $s$ (otherwise, $G$ contains the star $S$ as a minor).
Suppose now that a graph $G$ in $\mathcal{G}$ has $n\geq \max(s+r,2k+l)$
vertices from which at least $2ks$ are not isolated. Then, one can
perform a greedy algorithm in order to find $k$ independent edges.
In this case, $G$ contains the graph $M$ as a minor, which is
impossible.
\noindent $\bullet~$ Let $M,S,H_1,\ldots,H_h$ be the minimal excluded minors of
$\mathcal{G}$ and let $M',S',H_1',\ldots,H_h'$ be the same graphs after
deletion of their isolated vertices. We prove that \emph{there
exists $N\in \mathbb{N}$ such that $\mathcal{G}_n=\mathcal{F}_n$ for all $n\geq N$, where
$\mathcal{F}=\textrm{Ex}(M',S',H_1',\ldots,H_h')$}. Let $m$ be the maximal number of
isolated vertices in the excluded minors $M,S,H_1,\ldots,H_h$ and
let $N=\max(s+r,2k+l,2ks+m)$. If $G$ has at least $N$ vertices,
then $G$ has at least $m$ isolated vertices, hence $G$ is in $\mathcal{G}$
if and only if it is in $\mathcal{F}$.
\noindent $\bullet~$ We now prove that there exists a polynomial $P$ with rational
coefficients such that $f_n\equiv |\mathcal{F}_n|=P(n)$. Let $\mathcal{C}$ be the
set of graphs in $\mathcal{F}$ without isolated vertices; by convention we
consider the graph of size 0 as being in $\mathcal{C}$. The graphs in
$\mathcal{C}$ have at most $\max(s+r,2k+l,2ks)$ vertices, hence $\mathcal{C}$ is a
finite set. We say that a graph $G$ \emph{follows the pattern}
of a graph $C\in \mathcal{C}$ if $C$ is the graph obtained from $G$ by
deleting the isolated vertices of $G$ and reassigning the labels
in $\{1,\ldots,|C|\}$ respecting the order of the labels in $G$. By
the preceding points, any graph in $\mathcal{F}$ follows the pattern of a
graph in $\mathcal{C}$ and, conversely, any graph following the pattern of
a graph in $\mathcal{C}$ is in $\mathcal{F}$ (since the excluded minors
$M',S',H_1',\ldots,H_h'$ of $\mathcal{F}$ have no isolated vertices). The
number of graphs of size $n$ following the pattern of a given
graph $C\in \mathcal{C}$ is ${n \choose |C|}$, where $|C|$ is the number
of vertices of $C$. Thus, $f_n=\sum_{C\in\mathcal{C}} {n \choose |C|}$
which is a polynomial.
\end{proof}
This concludes the proof of Theorem \ref{th:refine}.
\section{Growth constants}\label{section:growth-constants}
We say that a class $\mathcal{G}$ \emph{has growth constant}
$\gamma$ if $\lim_{n \to \infty} \left(g_n/ n!\right)^{1/n} =
\gamma$, and we write $\gamma(\mathcal{G})= \gamma$.
\begin{prop}\label{th:existence}
Let $\mathcal{G}$ be a minor-closed class such that all the excluded minors of $\mathcal{G}$ are 2-connected. Then, $\gamma(\mathcal{G})$ exists.
\end{prop}
\begin{proof}
In the terminology of \cite{McDiarmid:growth-constant-planar-graphs}, the class $\mathcal{G}$ is small (because of Theorem
\ref{th:small}), and it is addable because of the assumption on the forbidden minors. Hence,
Theorem 3.3 from \cite{McDiarmid:growth-constant-planar-graphs} applies and there exists a growth constant.
\end{proof}
We now state a theorem about the set $\Gamma$ of growth constants of minor-closed classes. In what follows we denote by $\xi \approx 1.76$ the inverse of the unique positive root of $x \exp(x) =1 $.
\begin{thm}\label{thm:growth-constants}
Let $\Gamma$ be the set of real numbers which are growth constants of minor-closed classes of graphs.
\begin{enumerate}
\item The values $0,~1,~\xi$ and $e$ are in $\Gamma$.
\item If $\gamma \in \Gamma$ then $2 \gamma \in \Gamma$.
\item There is no $\gamma \in \Gamma$ with $0 < \gamma <1$.
\item There is no $\gamma \in \Gamma$ with $1 < \gamma <\xi$.
\end{enumerate}
\end{thm}
\paragraph{Remarks.} \noindent $\bullet~$ The property 1 of Theorem \ref{thm:growth-constants} can be extended with the growth constants of the minor-closed classes listed in Table \ref{table:known-constants}. \\
\noindent $\bullet~$ The properties 2, 3 and 4 of Theorem \ref{thm:growth-constants} remain valid if one replaces $\Gamma$ by the set $\Gamma'=\{\gamma'=\limsup \left(\frac{g_n}{n!}\right)^{1/n} / \mathcal{G} \textrm{ minor-closed}\}$.
\begin{table}[htb]
\begin{center}
\begin{tabular}{|l|r|l|}
\hline Class of graphs & Growth constant & Reference \\ \hline
$\textrm{Ex}(P_k)$ & $0$ & This paper\\
Path forests & $1$ & Standard \\
Caterpillar forests & $\xi \approx 1.76$ & This paper \\
Forests $= \textrm{Ex}(K_{3})$ & $e \approx 2.71$ & Standard\\
$\textrm{Ex}(C_4)$ & $3.63$ & \cite{Gimenez:given-3connected} \\
$\textrm{Ex}(K_4-e)$ & $4.18$ & \cite{Gimenez:given-3connected} \\
$\textrm{Ex}(C_5)$ & $4.60$ & \cite{Gimenez:given-3connected} \\
Outerplanar $=\textrm{Ex}(K_4,K_{2,3})$ & 7.320 & \cite{Bodirsky:series-parallel+outerplanar} \\
$\textrm{Ex}(K_{2,3})$ & 7.327 & \cite{Bodirsky:series-parallel+outerplanar} \\
Series parallel $=\textrm{Ex}(K_4)$ & 9.07 & \cite{Bodirsky:series-parallel+outerplanar} \\
$\textrm{Ex}(W_4)$ & $11.54$ & \cite{Gimenez:given-3connected} \\
$\textrm{Ex}(K_5-e)$ & $12.96$ & \cite{Gimenez:given-3connected} \\
$\textrm{Ex}(K_2 \times K_3)$ & $14.13$ & \cite{Gimenez:given-3connected}\\
Planar & 27.226 & \cite{Gimenez:planar-graphs} \\
Embeddable in a fixed surface & 27.226 & \cite{McDiarmid:graphs-on-surfaces} \\
$\textrm{Ex}(K_{3,3})$ & 27.229 & \cite{Gerke:K33-free} \\
\hline
\end{tabular}
\caption{A table of some known growth constants.}\label{table:known-constants}
\end{center}
\end{table}
Before the proof of Theorem \ref{thm:growth-constants}, we make
the following remark. Let $\mathcal{G}$ be a minor-closed class,
let $\mathcal{C}$ be the family of all connected members of
$\mathcal{G}$, and let $G(z)$ and $C(z)$ be the corresponding
generating functions. Then if $\mathcal{C}$ has growth constant
$\gamma$, so does $\mathcal{G}$. This is because the generating
function $G(z)$ is bounded by $\exp(C(z))$ (they are equal if the
forbidden minors for $\mathcal{G}$ are all connected), and both
functions have the same dominant singularity.
\begin{proof}
1) \noindent $\bullet~$ All classes whose growth is not at least factorial have
growth constant $0$. In particular, $\gamma(\textrm{Ex}(P)) = 0$ for any
path $P$.
\noindent $\bullet~$ The number of labelled paths is $n!/2$. Hence, by the remark
made before the proof, the growth constant of the class of path
forests is 1.
\noindent $\bullet~$ A \emph{caterpillar} is a tree consisting of a path and
vertices directly adjacent to (i.e. one edge away from) that path.
Let $\mathcal{C}$ be the class of graphs whose connected components are
caterpillars, which is clearly minor-closed. A rooted caterpillar
can be considered as an ordered sequence of stars. Hence the
associated generating function is $1/(1 - z e^z)$. The dominant
singularity is the smallest positive root of $1-ze^z=0$, and
$\gamma(\mathcal{C})$ is the inverse $\xi$ of this value.
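Numerically, the positive root of $ze^z=1$ is $z_0\approx 0.5671$ (the value $W(1)$ of the Lambert function), so that $\xi=1/z_0\approx 1.7632$.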
\noindent $\bullet~$ The growth constant of the class of acyclic graphs (forests)
is the number $e$. This is because the number of labelled trees is
$n^{n-2}$ which, up to a sub-exponential factor, grows like $e^n n!$.
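Indeed, Stirling's formula gives $n!=\sqrt{2\pi n}\,(n/e)^n(1+o(1))$, so $n^{n-2}=\frac{e^n n!}{n^2\sqrt{2\pi n}}(1+o(1))$ and hence $\left(n^{n-2}/n!\right)^{1/n}\to e$.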
2) This property follows from an idea by Colin McDiarmid. Suppose $\gamma(\mathcal{G})= \gamma$, and
let $\mathcal{AG}$ be the family of graphs $G$ having a
vertex $v$ such that $G-v$ is in $\mathcal{G}$; in this case we say that $v$ is an apex of $G$. It
is easy to check that if $\mathcal{G}$ is minor-closed, so is $\mathcal{AG}$. Now we have
$$ 2^n |\mathcal{G}_n| \le |\mathcal{AG}_{n+1}| \le (n+1) 2^n |\mathcal{G}_n|. $$
The lower bound is obtained by taking a graph $G \in \mathcal{G}$
with vertices $[n]$, adding $n+1$ as a new vertex, and making
$n+1$ adjacent to any subset of $[n]$. The upper bound follows the
same argument by considering which of the vertices $1,2,\dots,n+1$
acts as an apex. Dividing by $n!$ and taking $n$-th roots, we see
that $\gamma(\mathcal{AG})= 2\gamma(\mathcal{G})$.
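In particular, starting from the class of path forests (which has growth constant 1) and iterating the apex construction, one obtains minor-closed classes with growth constant $2^k$ for every integer $k\geq 0$.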
3) This has been already shown during the proof of Theorem \ref{th:refine}. Indeed, if a
minor-closed class $\mathcal{G}$ contains all paths, then $|\mathcal{G}_n|\ge n!/2$ and the growth constant is at least $1$.
Otherwise $g_n < \epsilon^n n^n $ for all $\epsilon>0$ and $\gamma(\mathcal{G})=0$.\\
4) We consider the graphs $\textrm{Cat}_l$ and $\textrm{Ap}_l$ represented in
Figure \ref{fig:two-obstructions}.
\fig{two-obstructions}{The graph $\textrm{Cat}_l$ (left) and the graph
$\textrm{Ap}_l$ (right).}
If a minor-closed class $\mathcal{G}$ contains the graphs $\textrm{Cat}_l$ for all
$l$, then $\mathcal{G}$ contains all the caterpillars hence
$\gamma(\mathcal{G})\geq \xi\approx 1.76$. If $\mathcal{G}$ contains the graphs
$\textrm{Ap}_l$ for all $l$, then $\mathcal{G}$ contains the apex class of path
forests and $\gamma(\mathcal{G})\geq 2$. Now, if $\mathcal{G}$ contains neither
$\textrm{Cat}_k$ nor $\textrm{Ap}_l$ for some $k,l$, then $\mathcal{G}\subseteq
\textrm{Ex}(\textrm{Cat}_l,\textrm{Ap}_l)$. Therefore, it is sufficient to prove the
following claim.
\begin{claim} \label{claim:gap}
The growth constant of the class \emph{$\textrm{Ex}(\textrm{Cat}_k,\textrm{Ap}_l)$} is 1 for all $k>2$, $l>1$.
\end{claim}
\paragraph{Remark.}
Claim \ref{claim:gap} gives in fact a characterization of the
minor-closed classes with growth constant 1. These are the classes
containing all the paths but neither all the caterpillars nor all
the graphs in the apex class of the path forests. For instance,
the class of trees not containing a given caterpillar (as a minor)
and the class of graphs not containing a given star (as a minor)
both have growth constant 1.
\begin{proof}
Observe that the class $\textrm{Ex}(\textrm{Cat}_k,\textrm{Ap}_l)$ contains all paths as
soon as $k>2$ and $l>1$. Hence, $\gamma(\textrm{Ex}(\textrm{Cat}_k,\textrm{Ap}_l))\geq 1$
(by Lemma \ref{lem:estimates}) and we only need to prove that
$\gamma(\textrm{Ex}(\textrm{Cat}_k,\textrm{Ap}_l))\leq~1$. We first prove a result about
the simple paths of the graphs in $\textrm{Ex}(\textrm{Cat}_k,\textrm{Ap}_l)$.
\begin{lem}\label{lem:degree2-on-path}
Let $G$ be a graph in \emph{$\textrm{Ex}(\textrm{Cat}_k,\textrm{Ap}_l)$} and let $P$ be a
simple path in $G$. Then, there are less than $kl+4k^3l$ vertices
in $P$ of degree greater than 2.
\end{lem}
\begin{proof}
\noindent $\bullet~$ We first prove that \emph{any vertex not in $P$ is adjacent
to less than $l$ vertices of $P$} and \emph{any vertex in $P$ is
adjacent to less than $2l$ vertices of $P$}. Clearly, if $G$
contains a vertex $v$ not in $P$ and adjacent to $l$ vertices of $P$,
then $G$ contains $\textrm{Ap}_l$ as a minor. Suppose now that there is a
vertex $v$ in $P$ adjacent to $2l$ other vertices of $P$. In this
case, $v$ is adjacent to at least $l$ vertices in one of the
simple paths $P_1,~P_2$ obtained by removing the vertex $v$ from
the path~$P$. Hence $G$ contains $\textrm{Ap}_l$ as a minor.
\noindent $\bullet~$ We now prove that \emph{there are less than $kl$ vertices
in~$P$ adjacent to at least one vertex not in~$P$}. We suppose
the contrary and we prove that there exist $k$ independent edges
$e_i=(u_i,v_i),~i=1\ldots k$ such that $u_i$ is in $P$ and $v_i$
is not in $P$ (thereby implying that $\textrm{Cat}_k$ is a minor of $G$).
Let $r< k$ and let $e_i=(u_i,v_i),~i\leq r$ be independent edges
with $u_i\in P$ and $v_i\notin P$. The set of vertices in $P$
adjacent to some vertices not in $P$ but to none of the vertices
$v_i,i\leq r$ has size at least $kl-rl>0$ (this is because each of
the vertices $v_i$ is adjacent to less than $l$ vertices of $P$).
Thus, there exists an edge $e_{r+1}=(u_{r+1},v_{r+1})$ independent
of the edges $e_i,i\leq r$ with $u_{r+1}\in P$ and $v_{r+1}\notin
P$. Thus, any set of $r<k$ independent edges with one endpoint in
$P$ and one endpoint not in $P$ can be increased.
\noindent $\bullet~$ We now prove that \emph{there are no more than $4k^3l$
vertices in $P$ adjacent to another vertex in~$P$ beside its 2
neighbors in $P$}. We suppose the contrary and we prove that
either $\textrm{Cat}_k$ or $\textrm{Ap}_l$ is a minor of $G$. Let $E_P$ be the set
of edges not in the path $P$ but joining 2 vertices of~$P$. We say
that two independent edges $e=(u,v)$ and $e'=(u',v')$ of $E_P$
\emph{cross} if the vertices $u,u',v,v'$ appear in this order
along the path $P$; this situation is represented in Figure
\ref{fig:crossing-edges}~(a).
\noindent $\circ~$ We first show that \emph{there is a subset $E_P'\subseteq
E_P$ of $k^3$ independent edges}. Let $S$ be any set of $r<k^3$
edges in $E_P$. The number of edges in $E_P$ sharing a vertex with
one of the edges in $S$ is at most $2r\times 2l<4k^3l$ (this is
because any vertex in $P$ is adjacent to less than $2l$ vertices
in $P$). Since $|E_P|\geq 4k^3l$, any set of independent edges in
$E_P$ of size less than $k^3$ can be increased.
\noindent $\circ~$ We now show that \emph{for any edge $e$ in $E_P'$ there are
at most $k$ edges of $E_P'$ crossing $e$}. Suppose that there is a
set $S\subseteq E_P'$ of $k$ edges crossing $e$. Let $P'$ be the
path obtained from $P\cup e$ by deleting the edges of $P$ that are
between the endpoints of $e$. The graph made of $P'$ and the set
of edges $S$ contains the graph $\textrm{Cat}_k$ as a minor, which is
impossible.
\noindent $\circ~$ We now show that \emph{there exists a subset $E_P''\subseteq
E_P'$ of $k^2$ non-crossing edges}. Let $S$ be any set of $r<k^2$
edges in $E_P'$. By the preceding point, the number of edges in
$E_P'$ crossing one of the edges in $S$ is less than $rk<k^3$.
Since $|E_P'|\geq k^3$, any set of non-crossing edges in~$E_P'$
of size less than $k^2$ can be increased.
\noindent $\circ~$ Lastly, we show that \emph{the graph $\textrm{Cat}_k$ is a minor of
$G$}. We say that an edge $e=(u,v)$ of~$E_P''$ is \emph{inside}
another edge $e'=(u',v')$ if $u',u,v,v'$ appear in this order
along the path $P$; this situation is represented in Figure
\ref{fig:crossing-edges}~(b). We define the \emph{height} of the
edges in $E_P''$ as follows: the height of an edge $e$ is 1 plus
the maximum height of edges of $E_P''$ which are inside $e$ (the
height is 1 if there is no edge inside $e$). The height of edges
have been indicated in Figure~\ref{fig:crossing-edges}~(c).
Suppose that there is an edge of height $k$ in $E_P''$. Then there
is a set $S$ of $k$ edges $e_1=(u_1,v_1),\ldots,e_k=(u_k,v_k)$
such that the vertices $u_1,u_2,\ldots,u_k,v_k,v_{k-1},\ldots,v_1$
appear in this order along $P$. In this case, the subgraph made of
$S$ and the subpath of $P$ between $u_1$ and $u_k$ contains
$\textrm{Cat}_k$ as a minor. Suppose now that there is no edge of height
$k$. Since there are $k^2$ edges in $E_P''$, there is an integer
$i<k$ such that the number of edges of height $i$ is greater than
$k$. Thus, there is a set $S$ of $k$ edges
$e_1=(u_1,v_1),\ldots,e_k=(u_k,v_k)$ such that the vertices
$u_1,v_1,u_2,v_2,\ldots,u_k,v_k$ appear in this order along $P$.
In this case, the subgraph obtained from $P\cup
\{e_1,\ldots,e_k\}$ by deleting an edge of $P$ between $u_i$ and
$v_i$ for all $i$ contains $\textrm{Cat}_k$ as a minor.
\end{proof}
\fig{crossing-edges}{(a) Two crossing edges. (b) An edge inside
another. (c) A set of non-crossing edges.}
For any integer $N$, we denote by $\mathcal{G}^{N}_T$ the set of pairs
$(G,T)$ where $G$ is a graph and $T$ is a DFS spanning forest on
$G$ having height at most $N$ (the definition of \emph{DFS
spanning forest} was given just after Claim \ref{claim:path}).
\begin{lem}\label{lem:subdivide-tree}
For any graph $G$ in $\textrm{Ex}(\textrm{Cat}_k,\textrm{Ap}_l)$, there exists a pair
$(G',T')$ in $\mathcal{G}^{kl+4k^3l}_T$ such that $G$ is obtained from
$G'$ by subdividing some edges of $T'$.
\end{lem}
\begin{proof}
Let $G$ be a graph in $\textrm{Ex}(\textrm{Cat}_k,\textrm{Ap}_l)$, let $T$ be a DFS
spanning forest of $G$ and let $R$ be the set of roots of $T$ (one
root for each connected component of~$G$). One \emph{contracts} a
vertex $v$ of degree $2$ by deleting $v$ and joining its two
neighbors by an edge. Let $G'$ and $T'$ be the graph and forest
obtained from $G$ and $T$ by contracting the vertices $v\notin R$
of degree 2 which are incident to 2 edges of $T$. We want to
prove that $(G',T')$ is in $\mathcal{G}^{kl+4k^3l}_T$.
\noindent $\bullet~$ Since $T$ is a DFS spanning forest of $G$, every edge of $G$
which is not in $T$ connects a vertex to one of its ancestors
\cite{Cormen:introduction-algo}. This property characterizes the
DFS spanning forests and is preserved by the contraction of the
vertices of degree 2. Hence, $T'$ is a DFS spanning forest of
$G'$.
\noindent $\bullet~$ By Lemma \ref{lem:degree2-on-path}, the number of vertices
which are not of degree $2$ along a path of $T$ from a root to a
leaf is less than $kl+4k^3l$. Thus, the height of $T'$ is at most
$kl+4k^3l$.
\end{proof}
We have already shown in the proof of Claim \ref{claim:path} that
the radius of convergence of the generating function $G^{N}_T(z)$
of the set $\mathcal{G}^{N}_T$ is infinite. Moreover, the generating
function of the set of graphs that can be obtained from pairs
$(G',T')$ in $\mathcal{G}^{N}_T$ by subdividing the tree $T'$ is bounded
(coefficient by coefficient) by $G^{N}_T(\frac{z}{1-z})$ (since a
forest $T'$ on a graph $G'$ of size $n$ has at most $n-1$ edges to
be subdivided). Thus, Lemma \ref{lem:subdivide-tree} implies that
the generating function of $\textrm{Ex}(\textrm{Cat}_k,\textrm{Ap}_l)$ is bounded by
$G^{kl+4k^3l}_T(\frac{z}{1-z})$ which has radius of convergence 1.
Hence, the growth constant $\gamma(\textrm{Ex}(\textrm{Cat}_k,\textrm{Ap}_l))$ is at most
1.
\end{proof}
This concludes the proof of Claim \ref{claim:gap} and Theorem
\ref{thm:growth-constants}.
\end{proof}
We now investigate the topological properties of the set $\Gamma$
and in particular its limit points. First note that $\Gamma$ is
countable
(as a consequence of the Graph Minor Theorem of Robertson and Seymour \cite{Seymour:Graph-minors}).
\begin{lem}\label{th:sep}
Let $H_1,H_2,\dots H_k$ be a family of 2-connected graphs, and let
$\mathcal{H} = {\rm Ex}(H_1,H_2,\dots H_k)$. If $G$ is a
2-connected graph in $\mathcal{H}$, then
$\gamma(\mathcal{H}\cap \textrm{Ex}(G)) < \gamma(\mathcal{H})$.
\end{lem}
\begin{proof}
The condition on 2-connectivity guarantees that the growth
constants exist. By Theorem 4.1 from
\cite{McDiarmid:growth-constant-planar-graphs}, the probability
that a random graph in $\mathcal{H}_n$ contains $G$ as a subgraph
is at least $1 - e^{-\alpha n}$ for some $\alpha>0$. Hence the
probability that a random graph in $\mathcal{H}_n$ does not
contain $G$ as a minor is at most $e^{-\alpha n}$. If we denote
$\mathcal{G} = \mathcal{H}\cap \textrm{Ex}(G)$, then we have
$$
{|\mathcal{G}_n| \over |\mathcal{H}_n|} = {|\mathcal{G}_n| \over n!} {n! \over |\mathcal{H}_n|}
\le e^{-\alpha n}.
$$
Taking limits, this implies
$$
{\gamma(\mathcal{G}) \over \gamma(\mathcal{H})} \le \lim \left(e^{-\alpha n}\right)^{1/n} =
e^{-\alpha} < 1.
$$
\end{proof}
We recall that given a set $A$ of real numbers, $a$ is a
\emph{limit point} of $A$ if for every $\epsilon>0$ there exists
$x\in A-\{a\}$ such that $|a-x| < \epsilon$.
\begin{thm}\label{th:limit}
Let $H_1,\ldots,H_r$ be 2-connected graphs which are not cycles. Then, $\gamma=\gamma(\textrm{Ex}(H_1,\ldots,H_r))$ is a limit point of $\Gamma$.
\end{thm}
\begin{proof}
For $k \ge3$, let $\mathcal{G}_k = \mathcal{G} \cap \textrm{Ex}(C_k)$,
where $C_k$ is the cycle of size $k$. Because of Proposition
\ref{th:existence}, the class $\mathcal{G}_k$ has a growth
constant $\gamma_k$, and because of Lemma \ref{th:sep} the
$\gamma_k$ are strictly increasing and $\gamma_k < \gamma$ for all
$k$. It follows that $\gamma' = \lim_{k \to \infty} \gamma_k$
exists and $\gamma' \le \gamma$. In order to show equality we
proceed as follows.
Let $g_n = |\mathcal{G}_n|$ and let $g_{k,n} = |(\mathcal{G}_k)_n|$. Since $\gamma =
\lim_{n\to\infty}(g_n/n!)^{1/n}$, for all $\epsilon>0$ there exists $N$ such that for $n>N$ we
have
$$
\left(g_n/ n! \right)^{1/n} \geq \gamma -\epsilon.
$$
Now define $\displaystyle f_n=\frac{g_n}{e^2 n!}$ and
$\displaystyle f_{k,n}=\frac{g_{k,n}}{e^2n!}$. From \cite[Theorem
3]{McDiarmid:growth-constant-planar-graphs}, the sequence $f_n$ is
supermultiplicative and $\displaystyle \gamma=\lim_{n\to
\infty}\left(f_n\right)^{1/n}=\lim_{n\to
\infty}\left(g_n/n!\right)^{1/n}$ exists and equals $\sup_n
\left(f_n\right)^{1/n}$. Similarly, $\gamma_k=\lim_{n\to
\infty}\left(f_{k,n}\right)^{1/n}=\sup_n
\left(f_{k,n}\right)^{1/n}$.
But since a graph on less than $k$ vertices cannot contain $C_k$
as a minor, we have $g_{k,n} = g_n$ for $k>n$. Equivalently,
$f_{k,n} = f_n$ for $k>n$. Combining all this, we have
$$\gamma_k \geq \left(f_{k,n}\right)^{1/n} \geq \left(f_n\right)^{1/n}\geq \gamma -\epsilon$$
for every $n>N$ and every $k>n$. Since $\epsilon>0$ is arbitrary, this implies $\gamma' = \lim \gamma_k \ge \gamma$.
\end{proof}
Notice that Theorem \ref{th:limit} applies to all the classes in
Table \ref{table:known-constants} starting at the class of
outerplanar graphs. However, it does not apply to the classes of
of forests. In this case we offer an independent proof based on
generating functions.
\begin{lem}
The number $e$ is a limit point of $\Gamma$.
\end{lem}
\begin{proof}
Let $\mathcal{F}_k$ be the class of forests whose trees are made
of a path and rooted trees of height at most $k$ attached to
vertices of the path. Observe that the classes $\mathcal{F}_k$ are
minor-closed, that $\mathcal{F}_k\subset \mathcal{F}_{k+1}$, and
that $\cup_{k} \mathcal{F}_k=\mathcal{F}$, where $\mathcal{F}$ is the
class of forests. We prove that $\gamma(\mathcal{F}_k)$ is a strictly
increasing sequence tending to $e=\gamma(\mathcal{F})$.
Recall that the class $\mathcal{F}_k$ and the class
$\mathcal{T}_k$ of its connected members have the same growth
constant. Moreover, the class $\vec{\mathcal{T}}_k$ of trees with
a distinguished oriented path to which rooted trees of height at
most $k$ are attached has the same growth constant as
$\mathcal{T}_k$ (this is because there are only $n(n-1)$ ways of
distinguishing and orienting a path in a tree of size $n$). The
generating function associated to $\vec{\mathcal{T}}_k$ is
$1/(1-F_k(z))$, where $F_k(z)$ is the generating function of
rooted trees of height at most $k$. Hence,
$\gamma(\mathcal{F}_k)=\gamma(\vec{\mathcal{T}}_k)$ is the inverse
of the unique positive root $\rho_k$ of $F_k(\rho_k) = 1$.
Recall that the generating functions $F_k$ are obtained as
follows (see Section III.8.2 in \cite{Flajolet:analytic}):
$$
F_0(z) = z; \qquad F_{k+1}(z) = ze^{F_k(z)} \quad \hbox{for $k\geq 0$}.
$$
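For instance, $F_1(z)=ze^{z}$ and $F_2(z)=ze^{ze^{z}}$, so that $\rho_0=1$ and $\rho_1\approx 0.5671$ is the root of $ze^z=1$; accordingly $\gamma(\mathcal{F}_0)=1$ (path forests) and $\gamma(\mathcal{F}_1)=\xi$ (caterpillar forests), consistently with the values in Table \ref{table:known-constants}.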
It is easy to check that the roots $\rho_k$ of $F_k(\rho_k) = 1$
are strictly decreasing.
Recall that the generating function $F(z)$ of rooted trees has a
singularity at $1/e$ and that $F(1/e)=1$
(see~\cite{Flajolet:analytic}). Moreover, for all $n$, $0\leq
[z^n]F_k(z) \leq [z^n]F(z)$ and $\lim_{k\to \infty}
[z^n]F_k(z)=[z^n]F(z)$, thus $\lim_{k\to \infty} F_k(1/e) = F(1/e)
=1$. Furthermore, the functions $F_k(z)$ are convex and
$F_k'(1/e)\geq 1$ (since the coefficients of $F_k$ are positive
and $[z^1]F_k(z)=1$). Thus, $F_k(z)>F_k(1/e)+(z-1/e)$ which
implies $1/e\leq \rho_k \leq 1/e+(F(1/e)-F_k(1/e))$. Thus, the
sequence $\rho_k$ tends to $1/e$ and the growth constants
$\gamma(\mathcal{F}_k)=1/\rho_k$ tend to $e$.
\end{proof}
\paragraph{Remark.} The number $\nu \approx 2.24$, which is the inverse of the smallest positive
root of $z\exp(z/(1-z))=1$, can be shown to be a limit point of $\Gamma$ by similar methods. It is
the smallest number which we know to be a limit point of $\Gamma$. It is the growth constant of
the family whose connected components are made of a path $P$ and any number of paths of any
length attached to the vertices of $P$.
\paragraph{Remark.} All our examples of limit points in $\Gamma$ come from
strictly increasing sequences of growth constants that converge to
another growth constant. Is it possible to have an infinite
strictly decreasing sequence of growth constants? As we see now,
this is related to a classical problem. A quasi-ordering is a
reflexive and transitive relation. A quasi-ordering $\le$ in $X$
is a \emph{well-quasi ordering} if for every infinite sequence
$x_1,x_2,\dots$ in $X$ there exist $i<j$ such that $x_i \le x_j$.
Now consider the set $X$ of minor-closed classes of graphs ordered
by inclusion. It is an open problem whether this is a well-quasi
ordering \cite{Diestel:wqo-minors}. Assuming this is the case, it
is clear that an infinite decreasing sequence $\gamma_1
> \gamma_2 > \cdots$ of growth constants cannot exist. Indeed, consider the corresponding sequence of
graph classes $\mathcal{G}_1 , \mathcal{G}_2,\dots$. For some $i<j$ we must have $\mathcal{G}_i
\subseteq \mathcal{G}_j$, but this implies $\gamma_i \le \gamma_j$.
\section{Conclusion: some open problems}\label{section:conclusion}
We close by listing some of the open questions which have arisen in this work.\\
1) We know that a class $\mathcal{G}$ has a growth constant
provided that all its excluded minors are 2-connected.
The condition that the excluded minors are 2-connected is certainly not necessary, as is seen by noting that the apex family of any class which has a growth constant also has a growth constant. It is easy to see that such an apex family is also minor-closed and that at least one of its excluded minors is disconnected.
Thus our first conjecture is that every minor-closed family
has a growth constant, that is, $\lim
\left(\frac{g_n}{n!}\right)^{1/n}$ exists for every minor-closed
class $\mathcal{G}$.
2) A minor-closed class is \emph{smooth} if $\lim
\frac{g_n}{ng_{n-1}}$ exists. It follows that this limit must be
the growth constant and that a random member of $\mathcal{G}$
will have expected number of isolated vertices converging to
$1/\gamma$. Our second conjecture is that if every excluded
minor of a minor-closed class is 2-connected then the class is
smooth.
If true, then it would follow that a random member of the
class would qualitatively exhibit all the Poisson type
behaviour exhibited by the random planar graph. However proving
smoothness for a class seems to be very difficult and the
only cases which we know to be smooth are when the
exponential generating function has been determined exactly.
3) We have shown that the intervals $(0,1)$ and $(1,\xi)$ are
"gaps" which contain no growth constant. We know of no other gap,
though if there is no infinite decreasing sequence of growth
constants they must exist. One particular question which we
have been unable to settle is whether $(\xi,2)$ is also a gap.
4) We have shown that for each nonnegative integer $k$,
$2^k$ is a growth constant. A natural question is whether any
other integer is a growth constant. More generally, is there any
algebraic number in $\Gamma$ besides the powers of 2?
5) All our results concern labelled graphs. In the unlabelled setting,
the most important question to settle is whether there is an
analogue of the theorem of Norine \emph{et al}. More precisely,
suppose $\mathcal{G}$ is a minor-closed class of graphs and that
$u_n$ denotes the number of unlabelled members of $\mathcal{G}_n$.
Does there exist a finite $d$ such that $u_n$ is bounded
above by $d^n$?
\paragraph{Acknowledgements.}
We are very grateful to Colin McDiarmid who suggested the
apex-construction, to Angelika Steger for useful discussions, and
to Norbert Sauer and Paul Seymour for information on well quasi
orders.
\end{document}
\begin{document}
\title[Localized energy estimates on Schwarzschild space-times]
{
Localized energy estimates for wave equations on high dimensional
Schwarzschild space-times
}
\thanks{
The second author was supported in part by the NSF through grant DMS0800678
}
\author{Parul Laul}
\author{Jason Metcalfe}
\address{Department of Mathematics, University of North Carolina,
Chapel Hill, NC 27599-3250}
\begin{abstract}
The localized energy estimate for the wave equation is known to be a
fairly robust measure of dispersion. Recent analogs on
the $(1+3)$-dimensional Schwarzschild space-time have played a key
role in a number of subsequent results, including a proof of Price's
law. In this article, we explore similar localized energy estimates
for wave equations on $(1+n)$-dimensional hyperspherical Schwarzschild
space-times.
\end{abstract}
\maketitle
\section{Introduction}
One of the more robust measures of dispersion for the wave equation
is the so called localized energy estimates. These estimates have
played a role in understanding scattering theory, as means of summing
local in time Strichartz estimates to obtain global in time
Strichartz estimates on a compact set, as an essential tool for
proving long time existence of quasilinear wave equations in exterior
domains, and as means to handle errors in certain parametrix
constructions.
Estimates of this form originated in \cite{M} where it was shown that
solutions to the constant coefficient, homogeneous wave equation
\[(\partial_t^2-\Delta)u=0,\quad u(0)=u_0,\quad \partial_t u(0)=u_1\]
on ${\mathbb R}_+\times {\mathbb R}^n$, $n\ge 3$ satisfy
\[\int_0^T \int_{{\mathbb R}^n} \frac{1}{|x|}|{\not\negmedspace\nabla} u|^2(t,x)\,dx\,dt\lesssim
\|\nabla u_0\|_2^2 + \|u_1\|_2^2\]
where ${\not\negmedspace\nabla}$ denotes the angular derivatives. Though not the original
method of proof, one can obtain this estimate by multiplying $\Box u$
by $\partial_r u +\frac{n-1}{2}\frac{u}{r}$, integrating over a
space-time slab, and integrating by parts. One can also obtain
control on $\partial_r$ and $\partial_t$ in a fixed dyadic annulus.
Attempting to sum over these dyadic annuli comes at the cost of a
logarithmic blow up in time. One may otherwise introduce an
additional component of the weight that permits summability. In this
case, it can be shown that for $u$ as above and $n\ge 4$
\begin{equation}\label{kss}
\|\langle x\rangle^{-1/2-} u'\|_{L^2_{t,x}([0,T]\times{\mathbb R}^n)} + \|\langle
x\rangle^{-3/2} u\|_{L^2_{t,x}([0,T]\times {\mathbb R}^n)} \lesssim \|\nabla u_0\|_2
+ \|u_1\|_2.
\end{equation}
Here $\langle x\rangle = \sqrt{1+|x|^2}$ denotes the Japanese bracket and
$u'=(\partial_t u,\nabla_xu)$ represents the space-time gradient. An
estimate akin to \eqref{kss} also holds for $n=3$ but in this case the
weight $\langle x\rangle^{-3/2}$ in the second term in the left side must be
replaced by $\langle x\rangle^{-3/2-}$. To prove \eqref{kss}, one multiplies
the equation instead by $f(r)\partial_r u +
\frac{n-1}{2}\frac{f(r)}{r}u$ where $f(r)=\frac{r}{r+2^j}$, $j\ge 0$
and integrates by parts. For fixed $j$, this yields an estimate for
the left side over $|x|\approx 2^j$ with weight $\langle x\rangle^{-1/2}$ in
the first term. Introducing the additional weight and summing over
$j$ produces \eqref{kss}. See, e.g., \cite{HY}, \cite{KSS}, \cite{KPV},
\cite{Met}, \cite{SmSo}, \cite{St}, \cite{Strauss}. Analogous
estimates have been shown for small, asymptotically flat, possibly time-dependent
perturbations of the d'Alembertian in \cite{Alinhac}, \cite{MS},
\cite{MT, MT2} as well as for time-independent, asymptotically flat, nontrapping
perturbations in e.g. \cite{BH}, \cite{Burq}, \cite{SW}.
A particular case of interest which does not fall into these latter
categories is the wave equation on $(1+3)$-dimensional Schwarzschild space-times. While
asymptotically flat and time independent, this metric is not
nontrapping. There is trapping which occurs on the so-called photon
sphere, and this necessitates a loss in the estimates as compared to those
for the Minkowski wave equation. Despite this complication, a number
of proofs of estimates akin to \eqref{kss} exist for the wave equation
on Schwarzschild space-times. See \cite{BS1}-\cite{BS5}, \cite{DR, DR2},
\cite{MMTT}. In \cite{TT}, these estimates have been extended to Kerr
space-times with small angular momenta. Here an additional
difficulty is encountered as \cite{Alinhac2} shows that the estimates
will not follow from an analog of the argument above with any first order
differential multiplier.
The goal of this article is to extend the known localized energy
estimates on $(1+3)$-dimensional Schwarzschild space-times to
$(1+n)$-dimensional Schwarzschild space-times for $n\ge 3$. There are
multiple notions of the Schwarzschild space-time for $n\ge 4$, and we
restrict our attention to the hyperspherical case. For a derivation
and discussion of such black hole space times, see
e.g. \cite{T}, \cite{MP}. An alternative notion of higher
dimensional Schwarzschild space-times is discussed e.g. in \cite{GL}.
We shall not explore these hypercylindrical Schwarzschild manifolds or
other notions of higher dimensional black holes here.
The exterior of a $(1+n)$-dimensional hyperspherical Schwarzschild
black hole $({\mathcal{M}},g)$ is described by the manifold ${\mathcal{M}}={\mathbb R}\times
(r_s,\infty)\times {\mathbb{S}}^{d+2}$ and the line element
\[ds^2 = -\weight dt^2 + \weight^{-1}dr^2 + r^2d\omega^2.\]
Here $d=n-3$, $r_s$ denotes the Schwarzschild radius (i.e. $r=r_s$ is
the event horizon), and $d\omega$ is
the surface measure on the sphere ${\mathbb{S}}^{d+2}={\mathbb{S}}^{n-1}$. Here, the
d'Alembertian is given by
\begin{align*}
\Box_g \phi &=\nabla^\alpha \partial_\alpha \phi\\
&= -\weight^{-1}\partial_t^2 \phi + r^{-(d+2)}\partial_r
\Bigl[r^{d+2}\weight\partial_r \phi\Bigr] + {\not\negmedspace\nabla}\cdot {\not\negmedspace\nabla} \phi.
\end{align*}
The Killing vector field $\partial_t$ yields the
conserved energy
\begin{multline*}
E[\phi](t)=\int_{{\mathbb{S}}^{d+2}}\int_{r\ge r_s}
\Bigl[\weight^{-1}(\partial_t\phi)^2(t,r,\omega) \\+ \weight
(\partial_r\phi)^2(t,r,\omega) + |{\not\negmedspace\nabla} \phi|^2(t,r,\omega)\Bigr]r^{d+2}\,dr\,d\omega.
\end{multline*}
That is, when $\Box_g\phi = 0$, $E[\phi](t)=E[\phi](0)$ for all $t$.
We now define our localized energy norm. To this end, we set
\begin{multline}\label{LEnorm}
\|\phi\|_{LE}^2 = \int_0^\infty \int_{{\mathbb{S}}^{d+2}}\int_{r\ge r_s} \Bigl[c_r(r) \weight
(\partial_r\phi)^2(t,r,\omega)
+ c_\omega(r) |{\not\negmedspace\nabla}\phi|^2(t,r,\omega) \\+
c_0(r)\phi^2(t,r,\omega)\Bigr] r^{d+2}\,dr\,d\omega\,dt
\end{multline}
where
\[
c_r=\frac{1}{r^{d+3}\Bigl(1-\log\Bigl(\frac{r-r_s}{r}\Bigr)\Bigr)^2},\quad
c_\omega=\frac{1}{r} \Bigl(\frac{r-r_{ps}}{r}\Bigr)^2,\]
\[c_0=\Bigl(\frac{r-r_s}{r}\Bigr)^{-1}\frac{1}{r^3\Bigl(1-\log\Bigl(\frac{r-r_s}{r}\Bigr)\Bigr)^4}. \]
Here $r_{ps}=\Bigl(\frac{d+3}{2}\Bigr)^{\frac{1}{d+1}}r_s$, which is
the location of the photon sphere.
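As a consistency check, which we include here though it is not needed in what follows, formally setting $d=0$ (i.e. $n=3$) in this formula recovers the familiar photon sphere of the $(1+3)$-dimensional case:
\[ r_{ps}=\Bigl(\frac{0+3}{2}\Bigr)^{\frac{1}{0+1}}r_s=\frac{3}{2}\,r_s, \]
that is, $r_{ps}=3M$ in the usual normalization $r_s=2M$.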
The main result of this article then states that a localized energy
estimate holds on ${\mathcal{M}}$.
\begin{theorem}\label{thm1.1}
Let $\phi$ solve the homogeneous wave equation $\Box_g\phi=0$ on a
$(1+n)$-dimensional hyperspherical Schwarzschild manifold with $n\ge
4$. Then we have
\begin{equation}\label{LEestimate}
\sup_{t\ge 0}E[\phi](t)+\|\phi\|_{LE}^2\lesssim E[\phi](0).\end{equation}
\end{theorem}
We note that, by modifying the lower order portion of the multiplier
which appears in the next section, it is straightforward to obtain an
estimate on $\partial_t\phi$ as well. Moreover, one can obtain an
analogous estimate for the inhomogeneous wave equation by placing the
forcing term in an appropriate dual norm. The necessary decay of the
coefficient $c_r$ at $\infty$ can also be significantly improved. It is relatively
simple to carry out these modifications, but for the sake of
clarity, we omit the details.
As in the $(1+3)$-dimensional case, the higher dimensional
hyperspherical Schwarzschild space-times have a photon sphere where
trapping occurs. Rays initially located on this sphere and moving
initially tangent to it will stay on the surface for all times. Such
trapping is an obstacle to many types of dispersive estimates, and the
vanishing in the coefficient $c_\omega$ at the trapped set is a loss,
when compared to the nontrapping Minkowski setting, which results from
this trapping.
The proof follows by constructing an appropriate differential
multiplier. This is carried out in the next section, and the
construction most closely resembles that which appears in
\cite{MMTT}. Some significant technicalities needed to be
resolved, however, in order to find a construction which works in all
dimensions $d\ge 1$.
The localized energy estimates on $(1+3)$-dimensional Schwarzschild
manifolds have played a key role in a number of subsequent results.
See, e.g., \cite{Blue}, \cite{BS1}-\cite{BS5},
\cite{BSt},\cite{DR4}-\cite{DR3}, \cite{Luk}, \cite{MMTT},
\cite{Tataru}, \cite{TT}. Theorem \ref{thm1.1} permits possible
generalizations of these studies to higher dimensions, and a portion
of these will be explored in the first author's upcoming doctoral
dissertation and other subsequent works.
\section{Construction of the multiplier}
Associated to $\Box_g$ is the energy-momentum tensor
\[Q_{\alpha\beta}[\phi]=\partial_\alpha\phi\partial_\beta\phi -
\frac{1}{2}g_{\alpha\beta}\partial^\gamma \phi \partial_\gamma \phi\]
whose most important property is the following divergence condition
\[\nabla^\alpha Q_{\alpha\beta}[\phi] = \partial_\beta\phi \Box_g
\phi.\]
Contracting $Q_{\alpha\beta}$ with the radial vector field
$X=f(r)\weight\partial_r$, we form the momentum density
$P_\alpha[\phi,X]=Q_{\alpha\beta}[\phi]X^\beta$.
Calculating the divergence of this quantity we have
\begin{multline*}
\nabla^\alpha P_\alpha[\phi,X] = \Box_g \phi\, X\phi +
f'(r)\weight^2(\partial_r\phi)^2
+
\Bigl(\frac{r^{d+1}-\frac{d+3}{2}r_s^{d+1}}{r^{d+1}}\Bigr)\frac{f(r)}{r}|{\not\negmedspace\nabla}
\phi|^2
\\-\frac{1}{2}\Bigl[\weight r^{-(d+2)}\partial_r
(r^{d+2}f(r))\Bigr]\partial^\gamma\phi\partial_\gamma \phi.
\end{multline*}
The last term involving the Lagrangian is not signed. In order to
eliminate it, we utilize a lower order term in the multiplier. To
this end, we modify the momentum density
\begin{multline*}
\tilde{P}_\alpha[\phi,X] = P_\alpha[\phi,X] + \frac{1}{2}\Bigl[\weight r^{-(d+2)}\partial_r
(r^{d+2}f(r))\Bigr]\phi\partial_\alpha \phi \\-
\frac{1}{4}\partial_\alpha\Bigl[\weight r^{-(d+2)}\partial_r
(r^{d+2}f(r))\Bigr] \phi^2,
\end{multline*}
and recompute the divergence
\begin{multline}\label{div}
\nabla^\alpha \tilde{P}_\alpha[\phi,X]=\Box_g\phi\Bigl[X\phi +
\frac{1}{2}\Bigl\{\weight
r^{-(d+2)}\partial_r(f(r)r^{d+2})\Bigr\}\phi\Bigr]
\\+\weight^2 f'(r)(\partial_r\phi)^2 + \Bigl(\frac{r^{d+1}
-r_{ps}^{d+1}}{r^{d+1}}\Bigr)\frac{f(r)}{r}|{\not\negmedspace\nabla} \phi|^2
\\-\frac{1}{4}\nabla^\alpha\partial_\alpha\Bigl[\weight
r^{-(d+2)}\partial_r(f(r)r^{d+2})\Bigr] \phi^2.
\end{multline}
Ideally one would choose $f(r)$ to be smooth, bounded, and so that the
coefficients in the last three terms on the right are all nonnegative.
Indeed, this is precisely what is done in, e.g., \cite{St} and
\cite{MS} when $r_s=0$, i.e. in Minkowski space-time. The localized
energy estimate then follows from an application of the divergence
theorem as well as an application of a Hardy inequality and the energy
estimate in order to handle the time boundary terms.
Unfortunately,
in the current setting it does not appear possible to construct such
an $f$. We shall instead construct $f$ so that the last term in the right can be
bounded below by a positive quantity minus a fractional multiple of
the second term in the right.
We let $g(r)=\frac{r^{d+2}-r_{ps}^{d+2}}{r^{d+2}}$ and
$h(r)=\ln\Bigl(\frac{r^{d+1}-r_s^{d+1}}{\frac{d+1}{2}r_s^{d+1}}\Bigr)$.
The multiplier will be defined piecewise, and
it will be convenient to parametrize in terms of the values of
$h(r)$. To this end, we fix the notation $r_\theta$ to denote the
value of $r$ so that $h(r_\theta)=\theta$. Note, e.g., that
$r_{-\infty}=r_s$ and $r_{ps}=r_0$. More explicitly, $r_\theta^{d+1}
=r_s^{d+1}\Bigl(\frac{d+1}{2}e^\theta + 1\Bigr)$. An approximate multiplier is
\[g(r)+\frac{d+2}{d+3}\frac{r_{ps}r_s^{d+1}}{r^{d+2}}h(r).\]
We must, however, smooth out the logarithmic blow up at $r=r_s$. We
must also smooth out $h(r)$ near $\infty$ in order to prevent a term
in $f'(r)$ which has an unfavorable sign.
In order to accomplish this,
set
\[a(x)=
\begin{cases}
-\frac{1}{\varepsilon} \frac{\varepsilon x + 1}{\delta(\varepsilon x+1)-1} -
\frac{1}{\varepsilon},&x\le -\frac{1}{\varepsilon}\\
x, &-\frac{1}{\varepsilon}\le x \le 0\\
x-\frac{2}{3\alpha^2} x^3 +\frac{1}{5\alpha^4}x^5, &0\le x\le
\alpha\\
\frac{8\alpha}{15},&x\ge \alpha.
\end{cases}
\]
Here $\alpha = 5-\delta_0$ for some $0<\delta_0\ll 1$. Then,
set
\[f(r)=g(r)+\frac{d+2}{d+3}\frac{r_{ps}r_s^{d+1}}{r^{d+2}}a(h(r)).\]
We notice that $f$ is $C^2$ with the exception of a jump in the second
derivative at $r_{-1/\varepsilon}$.
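To make this smoothness claim concrete, one may check the junctions directly (a verification we add here; it is not spelled out in the original). On $[0,\alpha]$,
\[ a'(x)=\Bigl(1-\frac{x^2}{\alpha^2}\Bigr)^2,\qquad a''(x)=-\frac{4x}{\alpha^2}\Bigl(1-\frac{x^2}{\alpha^2}\Bigr), \]
so $a(\alpha)=\frac{8\alpha}{15}$, $a'(\alpha)=a''(\alpha)=0$, matching the constant branch, while $a(0)=0$, $a'(0)=1$, $a''(0)=0$ matches the linear branch. On $x\le -\frac{1}{\varepsilon}$ one computes $a'(x)=(\delta(\varepsilon x+1)-1)^{-2}$, which equals $1$ at $x=-\frac{1}{\varepsilon}$, so only $a''$ jumps there.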
Using that $\Box_g\phi=0$, integrating \eqref{div} over $[0,T]\times
(r_s,\infty)\times {\mathbb{S}}^{d+2}$, and applying the divergence theorem, we have
\begin{multline}\label{base}
-\int\int f(r)\partial_t\phi \partial_r\phi\,
r^{d+2}\,dr\,d\omega\Bigl|_0^T - \frac{1}{2}\int\int
\frac{1}{r^{d+2}}\partial_r(f(r)r^{d+2}) \phi \partial_t\phi\,
r^{d+2}\,dr\,d\omega\Bigl|_0^T
\\=\int_0^T\int\int \Bigl\{ \weight^2 f'(r)(\partial_r\phi)^2 +
\Bigl(\frac{r^{d+1}
-r_{ps}^{d+1}}{r^{d+1}}\Bigr)\frac{f(r)}{r}|{\not\negmedspace\nabla} \phi|^2
\\+l(f)\phi^2\Bigr\}r^{d+2}\,dr\,d\omega\,dt
\\+\frac{1}{4}r_{-1/\varepsilon}^{d+2}\Bigl(1-\Bigl(\frac{r_s}{r_{-1/\varepsilon}}\Bigr)^{d+1}\Bigr)^2
(f''(r_{-1/\varepsilon}^-)-f''(r_{-1/\varepsilon}^+))\int_0^T\int \phi^2|_{r=r_{-1/\varepsilon}}\,d\omega\,dt
\end{multline}
where
\[
l(f)=-\frac{1}{4}r^{-(d+2)}\partial_r\Bigl[\weight r^{d+2}\partial_r\Bigl\{\weight
r^{-(d+2)}\partial_r(f(r)r^{d+2})\Bigr\}\Bigr].
\]
We first make a note about the boundary terms at
$r_{-1/\varepsilon}$. Indeed, an elementary calculation shows
\begin{align*}
f''(r_{-1/\varepsilon}^-)-f''(r_{-1/\varepsilon}^+) &=\frac{d+2}{d+3}\frac{r_{ps}r_s^{d+1}}{r_{-1/\varepsilon}^{d+2}}
a''(-1/\varepsilon^-)(h'(r_{-1/\varepsilon}))^2 \\&= 2\delta\varepsilon
\frac{d+2}{d+3}\frac{r_{ps}r_s^{d+1}}{r_{-1/\varepsilon}^{d+2}}
(h'(r_{-1/\varepsilon}))^2\approx \delta \varepsilon e^{2/\varepsilon}.\end{align*}
Of particular interest is that the coefficient of the resulting boundary term at
$r_{-1/\varepsilon}$ is $O(\varepsilon\delta)$.
We now proceed to showing that the sum of the first three terms in the
right of \eqref{base} produce a positive contribution. To do so, we will examine $f$
on a case by case basis and show
\begin{itemize}
\item $f'(r)>0$ for all $r>r_s$
\item $f(r)<0$ for $r<r_{ps}$ and $f(r)>0$ for $r>r_{ps}$.
\item $\int l(f)\phi^2 \,r^{d+2}\,dr\,d\omega\,dt$ is bounded
below by a positive term minus a fractional multiple of the
$(\partial_r\phi)^2$ term and a $r_{-1/\varepsilon}$ boundary
term.
\end{itemize}
By absorbing these latter pieces into those previously shown to
positively contribute, the estimate shall nearly be in hand. It will
only remain to examine the time boundary terms, and in particular,
establish the Hardy inequality which will permit a direct application
of conservation of energy.
\noindent{\bf\em Case 1:} $r_s\le r\le r_{-1/\varepsilon}$.
The multiplier in this region is constructed to smooth out the
logarithmic blow up at the event horizon. This is the only case in
which we shall not be able to just show that $l(f)\ge 0$.
Noting that $a(x)\le -1/\varepsilon$ for $x\le -1/\varepsilon$, we
immediately see that $f(r)<0$ on this range. We also compute
\begin{multline}\label{df}
f'(r)=\frac{(d+2)r_{ps}^{d+2}}{r^{d+3}}-\frac{(d+2)^2}{d+3}\frac{r_{ps}r_s^{d+1}}{r^{d+3}}a(h(r))
\\+ \frac{d+2}{d+3}\frac{r_{ps}r_s^{d+1}}{r^{d+2}}\frac{1}{(\delta\varepsilon
h(r)+\delta-1)^2}\frac{(d+1)r^d}{r^{d+1}-r_s^{d+1}}.
\end{multline}
Each of these summands is nonnegative on the given range, yielding the
desired sign for $f'(r)$.
It remains to show an appropriate lower bound for the $l(f)\phi^2$
term of \eqref{base}. Calculating $l(f)$ we find
\begin{multline}\label{l_f}
l(g(r))+l\Bigl(\frac{d+2}{d+3}\frac{r_{ps}r_s^{d+1}}{r^{d+2}}a(h(r))\Bigr)
= \frac{d+2}{4r^{2d+5}}\Bigl(d
r^{2d+2}+(d+3)r_s^{d+1}r^{d+1}-(d+2)^2r_s^{2d+2}\Bigr)
\\+\frac{(d+1)(d+2)}{2}\frac{r_{ps}r_s^{d+1}}{r^{2d+6}}(r_{ps}^{d+1}-r^{d+1})a'(h(r))
\\+\frac{(d+1)^2(d+2)(d+5)}{4(d+3)}\frac{r_{ps}r_s^{d+1}}{r^{d+5}}a''(h(r))
\\-\frac{(d+1)^3(d+2)}{4(d+3)}\frac{r_{ps}r_s^{d+1}}{r^4} \frac{1}{r^{d+1}-r_s^{d+1}}a'''(h(r)).
\end{multline}
As $a'(h(r))=(\delta\varepsilon h(r)+\delta -1)^{-2}$ and $r<r_{ps}$ in this regime,
the second term in the right has the desired sign. The third term in
the right of \eqref{l_f} also has the desired sign as
$a''(h(r))=-2\delta\varepsilon/(\varepsilon \delta h(r) +\delta -1)^3$
and $h(r)\le -1/\varepsilon$ here.
The key step is to control the contribution of the last term in the
right of \eqref{l_f}. We shall abbreviate $R(r)=\delta\varepsilon h(r)+\delta-1$.
By the Fundamental Theorem of Calculus, we observe
\[\int_{r_s}^{r_{-1/\varepsilon}} \partial_r\Bigl(\frac{2\delta\varepsilon
(d+1)^2(d+2)}{d+3}\frac{r_{ps}r_s^{d+1}}{r^2(R(r))^3}\phi^2\Bigr)\,dr
=-\frac{2\delta\varepsilon
(d+1)^2(d+2)}{d+3}\frac{r_{ps}r_s^{d+1}}{r_{-1/\varepsilon}^2}
\phi(r_{-1/\varepsilon})^2.\]
Evaluating the derivative in the left side yields
\begin{multline}\label{ineq1}
\int_{r_s}^{r_{-1/\varepsilon}}
\frac{6\delta^2\varepsilon^2(d+1)^3(d+2)}{d+3}\frac{r_{ps}r_s^{d+1}}{r^2(R(r))^4}\frac{r^d}{r^{d+1}-r_s^{d+1}}\phi^2
\,dr
\\=-\int_{r_s}^{r_{-1/\varepsilon}} \frac{4\delta\varepsilon(d+1)^2(d+2)}{
d+3}\frac{r_{ps}r_s^{d+1}}{r^3(R(r))^3}\phi^2\,dr
\\+\int_{r_s}^{r_{-1/\varepsilon}}
\frac{4\delta\varepsilon(d+1)^2(d+2)}{d+3}\frac{r_{ps}r_s^{d+1}}{r^2(R(r))^3}\phi\partial_r \phi\,dr
\\+\frac{2\delta \varepsilon
(d+1)^2(d+2)}{d+3}\frac{r_{ps}r_s^{d+1}}{r_{-1/\varepsilon}^2}
\phi(r_{-1/\varepsilon})^2.\end{multline}
To the second term in the right, we apply the Schwarz inequality to
obtain
\begin{multline}\label{ineq2}
\int_{r_s}^{r_{-1/\varepsilon}}
\frac{4\delta\varepsilon(d+1)^2(d+2)}{d+3}\frac{r_{ps}r_s^{d+1}}{r^2(R(r))^3}\phi\partial_r
\phi\,dr
\\\le \int_{r_s}^{r_{-1/\varepsilon}} \frac{3\delta^2\varepsilon^2(d+2)(d+1)^3}{
d+3} \frac{r_{ps}r_s^{d+1}r^{d-2}}{(R(r))^4(r^{d+1}-r_s^{d+1})}\phi^2\,dr
\\+\frac{4}{3}\int_{r_s}^{r_{-1/\varepsilon}} \frac{(d+2)(d+1)}{
d+3} \frac{r_{ps}r_s^{d+1}}{r^{d+2}}\frac{r^{d+1}-r_s^{d+1}}{(R(r))^2}(\partial_r \phi)^2\,dr.
\end{multline}
Plugging \eqref{ineq2} into \eqref{ineq1} yields
\begin{multline}\label{combined}
\int_{r_s}^{r_{-1/\varepsilon}}
\frac{3\delta^2\varepsilon^2(d+1)^3(d+2)}{d+3}
\frac{r_{ps}r_s^{d+1}}{r^2(R(r))^4}\frac{r^d}{r^{d+1}-r_s^{d+1}}\phi^2
\,dr
\\\le -\int_{r_s}^{r_{-1/\varepsilon}} \frac{4\delta\varepsilon(d+1)^2(d+2)}{
d+3}\frac{r_{ps}r_s^{d+1}}{r^3(R(r))^3}\phi^2\,dr
\\+\frac{4}{3}\int_{r_s}^{r_{-1/\varepsilon}} \frac{(d+2)(d+1)}{
d+3} \frac{r_{ps}r_s^{d+1}}{r^{d+2}}\frac{r^{d+1}-r_s^{d+1}}{(R(r))^2}(\partial_r \phi)^2\,dr
\\+\frac{2\delta \varepsilon
(d+1)^2(d+2)}{d+3}\frac{r_{ps}r_s^{d+1}}{r_{-1/\varepsilon}^2}
\phi(r_{-1/\varepsilon})^2.
\end{multline}
This shows that
\begin{multline}\label{rewritten}
\Bigl(\frac{1}{4}+\frac{1}{48}\Bigr)\int_{\{r\in
[r_s,r_{-1/\varepsilon}]\}}
\frac{(d+1)^3(d+2)}{d+3}\frac{r_{ps}r_s^{d+1}}{r^4}\frac{1}{r^{d+1}-r_s^{d+1}}a'''(h(r))\phi^2
\,r^{d+2}\,dr\,d\omega\,dt
\\\le \frac{13}{12}\int_{\{r\in [r_s,r_{-1/\varepsilon}]\}} \frac{(d+1)^2(d+2)}{
(d+3)}\frac{r_{ps}r_s^{d+1}}{r^{d+5}}a''(h(r))\phi^2\,r^{d+2}\,dr\,d\omega\,dt
\\+\frac{13}{18}\int_{\{r\in[r_s,r_{-1/\varepsilon}]\}} \frac{(d+2)(d+1)}{
(d+3)}
\frac{r_{ps}r_s^{d+1}}{r^{d+3}}\Bigl(1-\frac{r_s^{d+1}}{r^{d+1}}\Bigr)\frac{1}{(
R(r))^2}(\partial_r \phi)^2\,r^{d+2}\,dr\,d\omega\,dt
\\+\frac{13}{12}\frac{\delta \varepsilon
(d+1)^2(d+2)}{d+3}\frac{r_{ps}r_s^{d+1}}{r_{-1/\varepsilon}^2}
\int \phi^2|_{r=r_{-1/\varepsilon}}\,d\sigma\,dt.
\end{multline}
We now use \eqref{rewritten} and the fact that $\frac{13}{12}\le
\frac{d+5}{4}$ for $d\ge 1$ to account for the last term in
\eqref{l_f}. We then see that
\begin{multline}\label{final}
\int_{\{r\in [r_s,r_{-1/\varepsilon}]\}}
l(f)\phi^2\,r^{d+2}\,dr\,d\omega\,dt
\ge \int_{\{r\in [r_s,r_{-1/\varepsilon}]\}} l(g)\phi^2\,r^{d+2}\,dr\,d\omega\,dt
\\
+ \frac{1}{48}\int_{\{r\in
[r_s,r_{-1/\varepsilon}]\}}
\frac{(d+1)^3(d+2)}{d+3}\frac{r_{ps}r_s^{d+1}}{r^4}\frac{1}{r^{d+1}-r_s^{d+1}}a'''(h(r))\phi^2
\,r^{d+2}\,dr\,d\omega\,dt
\\-\frac{13}{18}\int_{\{r\in[r_s,r_{-1/\varepsilon}]\}}
\Bigl(1-\frac{r_s^{d+1}}{r^{d+1}}\Bigr)^2f'(r)(\partial_r \phi)^2\,r^{d+2}\,dr\,d\omega\,dt
\\-\frac{13}{12}\frac{\delta \varepsilon
(d+1)^2(d+2)}{d+3}\frac{r_{ps}r_s^{d+1}}{r_{-1/\varepsilon}^2}
\int \phi^2|_{r=r_{-1/\varepsilon}}\,d\sigma\,dt.
\end{multline}
Here we have also used \eqref{df}. As $l(g)$ remains bounded in the
relevant region and as the coefficient in the integrand of the second term in the right
of \eqref{final} is $\gtrsim \varepsilon^2 e^{1/\varepsilon}\gg 1$ on the
said region, the first term in the right can be controlled by a
fraction of the
second provided $\varepsilon$ is sufficiently small. This finally
yields
\begin{multline}\label{final2}
\int_{\{r\in [r_s,r_{-1/\varepsilon}]\}}
l(f)\phi^2\,r^{d+2}\,dr\,d\omega\,dt
\\\ge
\frac{1}{96}\int_{\{r\in
[r_s,r_{-1/\varepsilon}]\}}
\frac{(d+1)^3(d+2)}{d+3}\frac{r_{ps}r_s^{d+1}}{r^4}\frac{1}{r^{d+1}-r_s^{d+1}}a'''(h(r))\phi^2
\,r^{d+2}\,dr\,d\omega\,dt
\\-\frac{13}{18}\int_{\{r\in[r_s,r_{-1/\varepsilon}]\}}
\Bigl(1-\frac{r_s^{d+1}}{r^{d+1}}\Bigr)^2f'(r)(\partial_r \phi)^2\,r^{d+2}\,dr\,d\omega\,dt
\\-\frac{13}{12}\frac{\delta \varepsilon
(d+1)^2(d+2)}{d+3}\frac{r_{ps}r_s^{d+1}}{r_{-1/\varepsilon}^2}
\int \phi^2|_{r=r_{-1/\varepsilon}}\,d\sigma\,dt.
\end{multline}
The second term on the right can be bootstrapped into the positive
contribution provided by the first term in the right of \eqref{base}.
The remaining boundary term at $r_{-1/\varepsilon}$ will be controlled
at the end of this section using pieces from the subsequent case.
\noindent{\bf\em Case 2:}
$r_{-1/\varepsilon}
\le r\le r_{ps}$.
For $r$ in this range, we simply have
\[f(r)=\frac{r^{d+2}-r_{ps}^{d+2}}{r^{d+2}} +
\frac{d+2}{d+3}\frac{r_{ps}r_s^{d+1}}{r^{d+2}}\ln\Bigl(\frac{r^{d+1}-r_s^{d+1}}{\frac{d+1}{2}r_s^{d+1}}\Bigr)\]
which is negative, as is desired in order to guarantee a positive
contribution from the $|{\not\negmedspace\nabla} \phi|^2$ term of \eqref{base}. Moreover,
we have
\[f'(r) =
\frac{(d+2)r_{ps}^{d+2}}{r^{d+3}}+\frac{d+2}{d+3}\frac{r_{ps}r_s^{d+1}}{r^2}\frac{d+1}{r^{d+1}-r_s^{d+1}}
-\frac{(d+2)^2}{d+3}\frac{r_s^{d+1}r_{ps}}{r^{d+3}}\ln\Bigl(\frac{r^{d+1}-r_s^{d+1}}{\frac{d+1}{2}r_s^{d+1}}\Bigr)
\]
whose every term is positive for $r_s<r\le r_{ps}$.
It only remains to examine $l(f(r))$ for this region.
Here, we first note that
\begin{equation}\label{lg}l(g(r))=\frac{d+2}{4r^{2d+5}}\Bigl(dr^{2d+2}+(d+3)r_s^{d+1}r^{d+1}-(d+2)^2r_s^{2d+2}\Bigr)
\end{equation}
and
\begin{equation}\label{lh}l\Bigl(\frac{d+2}{d+3}\frac{r_{ps}r_s^{d+1}}{r^{d+2}}h(r)\Bigr)=
-
\frac{(d+2)(d+1)}{4}\frac{r_{ps}r_s^{d+1}}{r^{2d+6}}\bigl(2r^{d+1}-(d+3)r_s^{d+1}\bigr).\end{equation}
Since \eqref{lh} is nonnegative for $r\le r_{ps}$, we have
\[l\Bigl(\frac{d+2}{d+3}\frac{r_{ps}r_s^{d+1}}{r^{d+2}}h(r)\Bigr)\ge
-\frac{(d+2)(d+1)}{4}\frac{r_s^{d+1}}{r^{2d+5}}\bigl(2r^{d+1}-(d+3)r_s^{d+1}\bigr)\]
for $r\le r_{ps}$.
Summing this with \epsilonqref{lg}, we have
\[l(f(r))\ge
\frac{d+2}{4r^{2d+5}}\bigl(r^{d+1}-r_s^{d+1}\bigr)\bigl(dr^{d+1}+r_s^{d+1}\bigr),\]
which is clearly nonnegative for $r\ge r_s$.
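For the reader's convenience we record the algebra behind this sum (it is not displayed in the original): adding the lower bound above to \eqref{lg} gives
\[
\frac{d+2}{4r^{2d+5}}\Bigl(dr^{2d+2}+(1-d)r_s^{d+1}r^{d+1}-r_s^{2d+2}\Bigr)
=\frac{d+2}{4r^{2d+5}}\bigl(r^{d+1}-r_s^{d+1}\bigr)\bigl(dr^{d+1}+r_s^{d+1}\bigr).
\]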
\noindent{\bf\em Case 3:} $r_{ps}\le r\le r_\alpha$.
This region corresponds precisely to $h(r)\in [0,\alpha]$. We, thus,
see that
\[f(r)=\frac{r^{d+2}-r_{ps}^{d+2}}{r^{d+2}}+\frac{d+2}{d+3}\frac{r_{ps}r_s^{d+1}}{r^{d+2}}\frac{h(r)}{15\alpha^4}(3
h(r)^4 -10 h(r)^2\alpha^2 + 15\alpha^4)\]
which is easily seen to be positive. Moreover,
\[f'(r)=\frac{(d+2)r_{ps}^{d+2}}{r^{d+3}} -
\frac{(d+2)^2r_{ps}r_s^{d+1}}{(d+3)r^{d+3}}a(h(r))
+ \frac{d+2}{d+3}\frac{(d+1)r_{ps}r_s^{d+1}}{r^2(r^{d+1}-r_s^{d+1})}\frac{(h(r)^2-\alpha^2)^2}{\alpha^4}.\]
As $a(h(r))$ takes on a maximum value of $\frac{8\alpha}{15}$ and as
$\frac{(d+2)(d+3)}{2} - \frac{(d+2)^2}{d+3}\frac{8\alpha}{15} > 0$ for
$d\ge 1$ and $\alpha<5$, the sum of the first two terms is positive.
And as the last term is clearly positive, we see that $f'(r)>0$ as
desired.
It remains to verify that $l(f)$ is positive. We begin by calculating
\begin{multline*}
l(f)=\frac{d+2}{4r^{2d+5}}\Bigl(dr^{2d+2}+(d+3)r_s^{d+1}r^{d+1}-(d+2)^2
r_s^{2d+2}\Bigr)
\\+\frac{(d+2)(d+1)r_s^{d+1}r_{ps}}{4(d+3)r^{2d+6}(r^{d+1}-r_s^{d+1})}
\Bigl(-2(d+3)(r^{d+1}-r_{ps}^{d+1})(r^{d+1}-r_s^{d+1})a'(h(r))
\\+(d+1)(d+5)r^{d+1}(r^{d+1}-r_s^{d+1})a''(h(r))
\\-(d+1)^2 r^{2d+2}a'''(h(r))\Bigr).
\end{multline*}
Setting
\begin{align*}
p(r)&=r(dr^{2d+2}+(d+3)r_s^{d+1}r^{d+1}-(d+2)^2
r_s^{2d+2})\\
n_1(r)&=
-r_{ps}r_s^{d+1}(d+1)(2r^{d+1}-r_s^{d+1}(d+3))\frac{(h(r)^2-\alpha^2)^2}{\alpha^4}\\
n_2(r)&=r_{ps}r_s^{d+1}\frac{(d+1)^2(d+5)}{d+3}r^{d+1}\frac{4h(r)(h(r)^2-\alpha^2)}{\alpha^4}\\
n_3(r)&=r_{ps}r_s^{d+1}\frac{(d+1)^3}{d+3}\frac{r^{2d+2}}{r^{d+1}-r_s^{d+1}}\cdot 4\frac{\alpha^2-3h(r)^2}{\alpha^4},
\end{align*}
it remains to show that
\[p(r)+n_1(r)+n_2(r)+n_3(r)>0.\]
The dominant term is $p(r)$, and we shall show
\begin{align}
\frac{1}{3}p(r)+n_1(r)>0,\label{n1}\\
\frac{1}{2}p(r)+n_2(r)\ge 0,\label{n2}\\
\frac{1}{6}p(r)+n_3(r)\ge 0.\label{n3}
\end{align}
\noindent{\em Proof of \eqref{n1}:} Using that we are in the regime
$r\ge r_{ps}$ and that $(h(r)^2-\alpha^2)^2$ is maximized when
$h(r)=0$, we have
\begin{align*}
\frac{1}{3}p(r)+n_1(r)&\ge
\frac{1}{3}r_{ps}(dr^{2d+2}+(d+3)r_s^{d+1}r^{d+1}-(d+2)^2r_s^{2d+2})
\\&\qquad\qquad\qquad\qquad -
(d+1)r_{ps}r_s^{d+1}(2r^{d+1}-(d+3)r_s^{d+1})\\
&=\frac{1}{3}r_{ps}\Bigl(dr^{2d+2}-(5d+3)r_s^{d+1}r^{d+1}+(2d^2+8d+5)r_s^{2d+2}\Bigr)\\
&=\frac{1}{3}r_{ps}\Bigl[d\Bigl(r^{d+1}-\frac{5d+3}{2d}r_s^{d+1}\Bigr)^2
+ \frac{1}{4d}(d+1)^2(8d-9)r_s^{2d+2}\Bigr].
\end{align*}
The last quantity is clearly positive, as desired, for $d>1$.
For the case $d=1$,
\begin{align*}
\frac{1}{3}p(r)+n_1(r)&\ge
\frac{1}{3}r(r^4+4r_s^2r^2-9r_s^4)-2\sqrt{2}r_s^3(2r^2-4r_s^2)\\
&=\frac{1}{3}\Bigl(3\sqrt{2}r_s^5 - 13
r_s^4(r-\sqrt{2}r_s)+20\sqrt{2}r_s^3(r-\sqrt{2}r_s)^2 + 24 r_s^2
(r-\sqrt{2}r_s)^3
\\&\qquad\qquad\qquad\qquad+5\sqrt{2}r_s(r-\sqrt{2}r_s)^4 +
(r-\sqrt{2}r_s)^5\Bigr)\\
&\ge \frac{1}{3}\Bigl(3\sqrt{2}r_s^5 - 13
r_s^4(r-\sqrt{2}r_s)+20\sqrt{2}r_s^3(r-\sqrt{2}r_s)^2\Bigr)
\end{align*}
which is an everywhere positive quadratic.
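Indeed (a check we add for completeness), the final expression is a quadratic in $r-\sqrt{2}r_s$ with positive leading coefficient and negative discriminant,
\[
\bigl(13 r_s^4\bigr)^2-4\cdot 20\sqrt{2}r_s^3\cdot 3\sqrt{2}r_s^5=(169-480)\,r_s^8<0,
\]
so it is positive for every real value of $r$.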
\noindent{\em Proof of \eqref{n2}:}
Here, again, we use that we are studying the regime $r\ge r_{ps}$
and that $h^2-\alpha^2$ is minimized when $h(r)=0$. We thus
obtain
\begin{equation}\label{n2_1}\begin{split}
\frac{1}{2}p(r)+n_2(r)&\ge
\frac{1}{2}r_{ps}(dr^{2d+2}+(d+3)r_s^{d+1}r^{d+1}-(d+2)^2
r_s^{2d+2})\\
&\qquad\qquad\qquad\qquad
-\frac{4r_{ps}r_s^{d+1}}{\alpha^2}\frac{(d+1)^2(d+5)}{d+3}r^{d+1}h(r)\\
&=\frac{1}{2}r_{ps}\Bigl(d(r^{d+1}-r_s^{d+1})^2+3(d+1)r_s^{d+1}(r^{d+1}-r_s^{d+1})-(d+1)^2
r_s^{2d+2}\\
&\qquad\qquad\qquad\qquad
-\frac{8r_s^{d+1}}{\alpha^2}\frac{(d+1)^2(d+5)}{d+3}r^{d+1}h(r)\Bigr).
\end{split}
\end{equation}
Here, we make the change of variables $x=h(r)$. Thus,
$r^{d+1}-r_s^{d+1}=\frac{d+1}{2}r_s^{d+1}e^x$. The right side of
\eqref{n2_1} can be rewritten as
\[\frac{r_{ps}}{2}r_s^{2d+2}(d+1)^2\Bigl[\frac{d}{4} e^{2x} +
\frac{3}{2}e^x - 1 - \frac{4}{\alpha^2}\frac{(d+5)(d+1)}{d+3}xe^x
- \frac{8}{\alpha^2}\frac{d+5}{d+3}x\Bigr].\]
Setting
\[q(x)=\frac{d}{4} e^{2x} +
\frac{3}{2}e^x - 1 - \frac{4}{\alpha^2}\frac{(d+5)(d+1)}{d+3}xe^x
- \frac{8}{\alpha^2}\frac{d+5}{d+3}x\]
and noticing that $q(0)>0$, it will suffice to show that $q'(x)\ge 0$
for $x$ between $0$ and $\alpha$. We compute
\[q'(x)=\frac{1}{2}e^x(3+de^x)-\frac{4}{\alpha^2}\frac{d+5}{d+3}\Bigl[2+(1+d)e^x(1+x)\Bigr].\]
As $\frac{d+5}{d+3}\le \frac{3}{2}$ for $d\ge 1$ and as $1+x\le e^x$,
it follows that
\[q'(x)\ge \frac{1}{2}e^x(3+de^x)-\frac{6}{25}\Bigl(2e^x +
(d+1)e^{2x}\Bigr)>0,\quad d\ge 1,\,\alpha=5.\]
The latter inequality follows from the fact that $\frac{d}{2}\ge
\frac{6}{25}(d+1)$ provided $d\ge 1$. Since the above inequality
holds for $\alpha = 5$, by continuity, we have that it also
holds for $\alpha=5-\delta_0$ for some $\delta_0>0$, which completes the proof.
\noindent{\em Proof of \eqref{n3}:}
Using that we are examining the region $r\ge r_{ps}$ and rewriting
$l(g)$ as in the previous case, we have
\begin{multline*}
\frac{1}{6}p(r)+n_3(r)\ge\frac{1}{6}r_{ps}\Bigl[d(r^{d+1}-r_s^{d+1})^2+3(d+1)r_s^{d+1}(r^{d+1}-r_s^{d+1})
- (d+1)^2r_s^{2d+2}\\
+\frac{24
r_s^{d+1}}{\alpha^2}\frac{(d+1)^3}{d+3}\Bigl(1-\frac{3}{\alpha^2}h(r)^2\Bigr)\Bigl(
(r^{d+1}-r_s^{d+1}) + 2 r_s^{d+1} +
\frac{r_s^{2d+2}}{r^{d+1}-r_s^{d+1}}\Bigr)\Bigr].
\end{multline*}
Using that
\begin{align*}\frac{24r_s^{d+1}}{\alpha^2}\frac{(d+1)^3}{d+3}
\Bigl(1-\frac{3}{\alpha^2}(h(r))^2\Bigr)\frac{r_s^{2d+2}}{r^{d+1}-r_s^{d+1}}
&\ge
-\frac{72r_s^{d+1}}{\alpha^4}\frac{(d+1)^3}{d+3}(h(r))^2\frac{r_s^{2d+2}}{r^{d+1}-r_s^{d+1}}\\
&\ge -\frac{144}{\alpha^4} r_s^{2d+2}\frac{(d+1)^2}{d+3}(h(r))^2\end{align*}
when $r\ge r_{ps}$, we obtain
\begin{multline*}
\frac{1}{6}p(r)+n_3(r)\ge \frac{1}{6}r_{ps}\Bigl[d(r^{d+1}-r_s^{d+1})^2+3(d+1)r_s^{d+1}(r^{d+1}-r_s^{d+1})
- (d+1)^2r_s^{2d+2}\\
+\frac{24
r_s^{d+1}}{\alpha^2}\frac{(d+1)^3}{d+3}\Bigl(1-\frac{3}{\alpha^2}h(r)^2\Bigr)\Bigl(
(r^{d+1}-r_s^{d+1}) + 2 r_s^{d+1}\Bigr) \\
-\frac{144}{\alpha^4}r_s^{2d+2}\frac{(d+1)^2}{d+3}(h(r))^2\Bigr].
\end{multline*}
Proceeding as above with the change of variables $x=h(r)$, this is
\[= \frac{(d+1)^2}{6}r_{ps}r_s^{2d+2}\Bigl[\frac{d}{4}e^{2x}+\frac{3}{2}e^x
- 1+
\frac{24}{\alpha^2}\frac{d+1}{d+3}\Bigl(1-\frac{3}{\alpha^2}x^2\Bigr)\Bigl(
\frac{d+1}{2}e^x + 2\Bigr)
-\frac{144}{\alpha^4}\frac{1}{d+3}x^2\Bigr].\]
Setting
\[s(x)=\frac{d}{4}e^{2x}+\frac{3}{2}e^x
- 1+
\frac{24}{\alpha^2}\frac{d+1}{d+3}\Bigl(1-\frac{3}{\alpha^2}x^2\Bigr)\Bigl(
\frac{d+1}{2}e^x + 2\Bigr)
-\frac{144}{\alpha^4}\frac{1}{d+3}x^2,\]
we first note that $s(0)>0$. For $x\le 5$, we furthermore have
\begin{align*}
s'(x)&=\frac{1}{2\alpha^4(d+3)}\Bigl[24\alpha^2 (d+1)^2 e^x +
\alpha^4 (d+3)e^x (3+de^x) - 72 x(8(d+2) + (d+1)^2e^x(x+2))\Bigr]\\
&\ge \frac{1}{2\alpha^4 (d+3)}\Bigl[24\alpha^2 (d+1)^2 e^x + \alpha^4
(d+3)e^x (3+de^x) - 72 e^x (8(d+2)+7(d+1)^2e^x)\Bigr].
\end{align*}
For $\alpha=5$, this is
\[\frac{1}{1250(d+3)}e^x\Bigl(5073-504 e^x +
51d(49+17e^x)+d^2(600+121e^x)\Bigr),\]
which is easily seen to be positive for $d\ge 1$. By continuity,
positivity also follows for $\alpha=5-\delta_0$ provided $\delta_0$ is
sufficiently small.
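To make ``easily seen'' explicit (this check is ours and is not in the original), note that for $d=1$ the bracketed factor equals
\[
5073-504e^x+51(49+17e^x)+(600+121e^x)=8172+484e^x>0,
\]
and both the constant term $5073+2499d+600d^2$ and the coefficient $-504+867d+121d^2$ of $e^x$ increase with $d$, so positivity holds for all $d\ge 1$.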
\noindent{\bf\em Case 4:} $r\ge r_\alpha$.
In this regime,
\[f(r)=\frac{r^{d+2}-r_{ps}^{d+2}}{r^{d+2}}
+\frac{8\alpha}{15}\frac{d+2}{d+3}\frac{r_{ps}r_s^{d+1}}{r^{d+2}},\]
which is clearly positive. Moreover,
\[f'(r)=\frac{(d+2)r_s^{d+1}r_{ps}}{r^{d+3}}\Bigl(\frac{d+3}{2}-\frac{8\alpha}{15}\frac{d+2}{d+3}\Bigr)\]
which is also positive since $\alpha < 5 \le
\frac{15}{16}\frac{(d+3)^2}{d+2}$ for $d\ge 1$. Finally, we notice
that $l(f(r))=l(g(r))$ in this case. Thus, as in the proof of
\eqref{n2}, we have
\[l(f(r))=\frac{d+2}{4r^{2d+5}}\Bigl(d(r^{d+1}-r_s^{d+1})^2+3(d+1)r_s^{d+1}(r^{d+1}-r_s^{d+1})-(d+1)^2
r_s^{2d+2}\Bigr).\]
As $r^{d+1}-r_s^{d+1}\ge \frac{d+1}{2}r_s^{d+1}$ for $r\ge r_{ps}$, we
see that $l(f(r))>0$ as desired.
\noindent{\bf\em Boundary term at $r_{-1/\varepsilon}$:}
In order to finish showing that the right side of \eqref{base} is
nonnegative, it remains to examine the $r_{-1/\varepsilon}$ boundary
term in \eqref{base} as well as the subsequent contribution from
\eqref{final2}. Here, we simply utilize the Fundamental Theorem of
Calculus to control these terms via the positive contributions of the
first and third term in the right of \eqref{base} in the range
$[r_{-1/\varepsilon},r_{ps}]$. The scaling parameter $\delta$ ensures
the necessary smallness.
Fix a smooth cutoff $\beta$ which is identically one for, say, $r\le r_{-1}$
and which vanishes for $r\ge r_{ps}$. Then, for $r\le r_{-1}$, we
have
\[\phi(r)=-\int_r^{r_{ps}} \partial_s (\beta \phi)\,ds.\]
Using the Schwarz inequality, this yields
\[\phi^2(r)\lesssim \int_r^{r_{ps}}|\beta'| \phi^2\,ds -h(r)
\int_r^{r_{ps}}(s^{d+1}-r_s^{d+1}) \beta (\partial_r\phi)^2\,ds.\]
Applying this at $r_{-1/\varepsilon}$ yields
\[\varepsilon \phi^2(r_{-1/\varepsilon})\lesssim
\int_{r_{-1/\varepsilon}}^{r_{ps}}\nabla^\alpha\tilde{P}_\alpha[\phi,X]
r^{d+2}\,dr.\]
Multiplying both sides by $\delta$ and integrating over $[0,T]\times
{\mathbb{S}}^{d+2}$, we see that these boundary terms can be bootstrapped into
the contributions of {\em Case 2}.
\section{A Hardy inequality and the time boundary terms}
In the previous section, we constructed a multiplier so that the right
side of \eqref{base} provides a positive contribution. By inspection,
the coefficients are easily seen to correspond to those in
\eqref{LEnorm}. What remains is to control the left side of
\eqref{base} in terms of the initial energy. For the first term, this
is straightforward. For the second term in the left side of
\eqref{base}, a Hardy-type inequality is employed, which shall be
proved below.
For the first term in \eqref{base}, we need only apply the Schwarz
inequality to see that
\[\int
f(r)\partial_t\phi(t,\cdot)\partial_r\phi(t,\cdot)\,r^{d+2}\,dr\,d\omega
\lesssim E[\phi](t).\]
And thus, by conservation of energy, these terms are controlled by
$E[\phi](0)$ as desired.
For the second term in \eqref{base}, we again apply the Schwarz
inequality. It remains to show that
\begin{equation}\label{time}\int
\Bigl[\frac{1}{r^{d+2}}\partial_r(f(r)r^{d+2})\Bigr]^2\Bigl(1-\frac{r_s^{d+1}}{r^{d+1}}\Bigr)
\phi^2(t,\cdot)\,r^{d+2}\,dr\,d\omega \lesssim E[\phi](t)\end{equation}
as a subsequent application of conservation of energy will complete
the proof.
In order to show \eqref{time}, we shall prove a Hardy-type inequality
which is in the spirit of that which appears in \cite{DR}. Indeed, we
notice that the coefficient in the integrand in the left side of
\eqref{time} is $O((\log(r-r_s))^{-2}(r-r_s)^{-1})$ as $r\to r_s$ and
is $O(r^{-2})$ as $r\to \infty$. Thus, it will suffice to show that
\begin{equation}\label{hardy}
\int_{r_s}^\infty
\frac{1}{r^2}\frac{1}{\Bigl(1-\log\Bigl(\frac{r-r_s}{r}\Bigr)\Bigr)^2\Bigl(\frac{r-r_s}{r}\Bigr)}
\phi^2\, r^{d+2}\,dr \lesssim \int_{r_s}^\infty
\Bigl(\frac{r-r_s}{r}\Bigr) (\partial_r\phi)^2\,r^{d+2}\,dr.
\end{equation}
To this end, we set
\[\rho(r)=\int_{r_s}^r
\frac{x^d}{\Bigl(1-\log\Bigl(\frac{x-r_s}{x}\Bigr)\Bigr)^2\Bigl(\frac{x-r_s}{x}\Bigr)}\,dx.\]
Notice that $\rho(r)\sim r^{d+1}$ as $r\to \infty$ and $\rho(r)\sim
\Bigl[1-\log\Bigl(\frac{r-r_s}{r}\Bigr)\Bigr]^{-1}$ as $r \to r_s$.
Writing the left side of \eqref{hardy} as $\int \rho'(r)\phi^2\,dr$,
integrating by parts, and applying the Schwarz inequality, we have
\begin{align*}
\int \rho'(r)\phi^2\,dr &= -2\int \rho(r)\phi \partial_r\phi\,dr\\
&\lesssim \Bigl(\int
\frac{(\rho(r))^2}{\rho'(r)}(\partial_r\phi)^2\,dr\Bigr)^{1/2}
\Bigl(\int \rho'(r)\phi^2\,dr\Bigr)^{1/2}.
\end{align*}
This completes the proof of \eqref{hardy} as
$\frac{(\rho(r))^2}{\rho'(r)}\sim r^{d+2}$ as $r\to \infty$ and
$\frac{(\rho(r))^2}{\rho'(r)}\sim (r-r_s)$ as $r\to r_s$.
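We include the short asymptotic check behind this last statement; it is routine and not written out in the original. As $r\to\infty$, $\rho(r)$ is comparable to $\frac{r^{d+1}}{d+1}$ and $\rho'(r)$ to $r^d$, so $\frac{(\rho(r))^2}{\rho'(r)}$ is comparable to $r^{d+2}$. As $r\to r_s$, writing $v=1-\log\bigl(\frac{r-r_s}{r}\bigr)$ we have $\rho(r)$ comparable to $r_s^{d+1}v^{-1}$ and $\rho'(r)$ comparable to $\frac{r_s^{d+1}}{(r-r_s)v^{2}}$, so
\[ \frac{(\rho(r))^2}{\rho'(r)}\sim r_s^{d+1}\,(r-r_s). \]
Dividing the displayed chain of inequalities by $\bigl(\int \rho'(r)\phi^2\,dr\bigr)^{1/2}$ and squaring then yields \eqref{hardy}.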
\begin{thebibliography}{MA}
\bibitem{Alinhac} S. Alinhac: {\epsilonm On the Morawetz--Keel-Smith-Sogge inequality for the wave equation
on a curved background}. Publ. Res. Inst. Math. Sci. {\bf 42} (2006),
705--720.
\bibitem{Alinhac2} S. Alinhac: {\epsilonm Energy multipliers for
perturbations of the Schwarzschild metric}. Comm. Math. Phys. {\bf
288} (2009), 199--224.
\bibitem{Blue} P. Blue: {\epsilonm Decay of the Maxwell field on the
Schwarzschild manifold}. J. Hyperbolic Diff. Equ. {\bf 5} (2008), 807--856.
\bibitem{BS1} P. Blue and A. Soffer: {\epsilonm Semilinear wave equations on the Schwarzschild manifold I:
Local decay estimates}. Adv. Differential Equations {\bf 8} (2003), 595--614.
\bibitem{BS2} P. Blue and A. Soffer: {\epsilonm The wave equation on the Schwarzschild metric II: Local decay
for the spin-2 Regge-Wheeler equation}. J. Math. Phys. {\bf 46} (2005), 9pp.
\bibitem{BSerrata} P. Blue and A. Soffer: {\epsilonm Errata for ``Global existence and scatttering for the
nonlinear Schr\"odinger equation on Schwarzschild manifolds'', ``Semilinear wave equations on the
Schwarzschild manifold I: Local decay estimates'', and ``The wave equation on the Schwarzschild
metric II: Local decay for the spin 2 Regge Wheeler equation''},
preprint. (ArXiv: gr-qc/0608073)
\bibitem{BS3} P. Blue and A. Soffer: {\epsilonm Phase space analysis on some
black hole manifolds}. J. Funct. Anal. {\bf 256} (2009), 1--90.
\bibitem{BS4} P. Blue and A. Soffer: {\epsilonm Improved decay rates with small regularity loss for the wave
equation about a Schwarzschild black hole}, preprint. (ArXiv: math/0612168)
\bibitem{BS5} P. Blue and A. Soffer: {\epsilonm A space-time integral
estimate for a large data semi-linear wave equation on the
Schwarzschild manifold}. Lett. Math. Phys. {\bf 81} (2007), 227--238.
\bibitem{BSt} P. Blue and J. Sterbenz: {\epsilonm Uniform decay of local energy and the semi-linear wave
equation on Schwarzschild space}. Comm. Math. Phys. {\bf 268} (2006),
481--504.
\bibitem{BH} J.-F. Bony and D. H\"afner: {\epsilonm The semilinear wave
equation on asymptotically Euclidean manifolds}, preprint. (ArXiv: 0810.0464)
\bibitem{Burq} N. Burq: {\epsilonm Global Strichartz estimates for
nontrapping geometries: about an article by H. F. Smith and
C. D. Sogge ``Global Strichartz estimates for nontrapping
perturbations of the Laplacian''}. Comm. Partial Differential
Equations {\bf 28} (2003), 1675--1683.
\bibitem{DR4} M. Dafermos and I. Rodnianski: {\epsilonm Small-amplitude
nonlinear waves on a black hole background}. J. Math. Pures
Appl. {\bf 84} (2005), 1147--1172.
\bibitem{DR} M. Dafermos and I. Rodnianski: {\epsilonm The red-shift effect and radiation decay on black
hole spacetimes}. Comm. Pure Appl. Math. {\bf 62} (2009), 859--919.
\bibitem{DR2} M. Dafermos and I. Rodnianski: {\epsilonm A note on energy
currents and decay for the wave equation on a Schwarzschild
background}, preprint. (ArXiv: 0710.017)
\bibitem{DR3} M. Dafermos and I. Rodnianksi: {\epsilonm A new physical-space
approach to decay for the wave equation with applications to black
hole spacetimes}, preprint. (ArXiv: 0910.4957)
\bibitem{GL} R. Gregory and R. Laflamme: {\epsilonm Hypercylindrical black
holes}. Phys. Rev. D {\bf 37} (1988), 305--308.
\bibitem{HY} K. Hidano and K. Yokoyama: {\epsilonm A remark on the almost
global existence theorems of Keel, Smith, and Sogge}.
Funkcial. Ekvac. {\bf 48} (2005), 1--34.
\bibitem{KSS} M. Keel, H. Smith, and C. D. Sogge: {\epsilonm Almost global existence for some semilinear
wave equations}. J. Anal. Math. {\bf 87} (2002), 265--279.
\bibitem{KPV} C. E. Kenig, G. Ponce, and L. Vega: {\epsilonm On the Zakharov
and Zakharov-Schulman systems}. J. Funct. Anal. {\bf 127} (1995),
204--234.
\bibitem{Luk} J. Luk: {\epsilonm Improved decay for solutions to the linear
wave equation on a Schwarzschild black hole}, preprint. (ArXiv: 0906.5588)
\bibitem{MMTT} J. Marzuola, J. Metcalfe, D. Tataru, and M. Tohaneanu:
{\epsilonm Strichartz estimates on Schwarzschild black hole backgrounds}.
Comm. Math. Phys. {\bf 293} (2010), 37--83.
\bibitem{Met} J. Metcalfe: {\epsilonm Global existence for semilinear wave
equations exterior to nontrapping obstacles}. Houston J. Math. {\bf
30} (2004), 259--281.
\bibitem{MS} J. Metcalfe and C. D. Sogge: {\epsilonm Long-time existence of quasilinear wave equations
exterior to star-shaped obstacles via energy methods}. SIAM J. Math. Anal. {\bf 38} (2006), 391--420.
\bibitem{MT} J. Metcalfe and D. Tataru: {\epsilonm Global parametrices and dispersive estimates for variable
coefficient wave equations}. Math. Ann., to appear.
\bibitem{MT2} J. Metcalfe and D. Tataru: {\epsilonm Decay estimates for
variable coefficient wave equations in exterior domains}. Advances
in Phase Space Analysis of Partial Differential Equations, In Honor
of Ferruccio Colombini's 60th Birthday, Progress in Nonlinear
Differential Equations and Their Applications, Vol 78, 2009.
p. 201--217.
\bibitem{MP} R. C. Myers and M. J. Perry: {\epsilonm Black holes in higher
dimensional space-times}. Ann. Physics {\bf 172} (1986), 304--347.
\bibitem{M} C. Morawetz: {\epsilonm Time decay for the nonlinear Klein-Gordon equations}. Proc. Roy. Soc. Ser. A.
{\bf 306} (1968), 291--296.
\bibitem{SmSo} H. F. Smith and C. D. Sogge: {\epsilonm Global Strichartz estimates for nontrapping perturbations
of the Laplacian}. Comm. Partial Differential Equations {\bf 25} (2000), 2171--2183.
\bibitem{SW} C. D. Sogge and C. Wang: {\epsilonm Concerning the wave
equation on asymptotically Euclidean manifolds}, preprint. (ArXiv: 0901.0022)
\bibitem{St} J. Sterbenz: {\epsilonm Angular regularity and Strichartz estimates for the wave equation.} With
an appendix by I. Rodnianski. Int. Math. Res. Not. {\bf 2005}, 187--231.
\bibitem{Strauss} W. A. Strauss: {\epsilonm Dispersal of waves vanishing on the boundary of an exterior domain}. Comm.
Pure Appl. Math. {\bf 28} (1975), 265--278.
\bibitem{T} F. R. Tangherlini: {\epsilonm Schwarzschild field in $n$
dimensions and the dimensionality of space problem}. Nuovo Cimento
{\bf 27} (1963), 636--651.
\bibitem{Tataru} D. Tataru: {\epsilonm Local decay of waves on
asymptotically flat stationary space-times}, preprint. (ArXiv: 0910.5290)
\bibitem{TT} D. Tataru and M. Tohaneanu: {\epsilonm Local energy estimate on
Kerr black hole backgrounds}, preprint. (ArXiv: 0810.5766)
\epsilonnd{thebibliography}
\epsilonnd{document}
\begin{document}
\title{Relative to any non-hyperarithmetic set}
\today
\begin{abstract}
We prove that there is a structure, indeed a linear ordering, whose degree spectrum is the set of all non-hyperarithmetic degrees. We also show that degree spectra can distinguish measure from category.
\end{abstract}
\section{Introduction}
Slaman \cite{Sla98} and Wehner \cite{Weh98} independently proved the following theorem (with a third proof later provided by Hirschfeldt \cite{Hir06}).
\begin{theorem}\label{thm_Slaman_Wehner}
There is a countable structure $\MM$ such that for any set $X$, $X$ computes an isomorphic copy of $\MM$ if and only if $X$ is not computable.
\end{theorem}
Theorem \ref{thm_Slaman_Wehner} fits into the general research programme of classifying \emph{degree spectra} of countable structures. Recall that the (Turing) degree of a structure $\MM$ whose universe is a subset of the set of natural numbers $\omegaega$ is defined to be the degree of its atomic (equivalently, quantifier-free) diagram; and that the \emph{degree spectrum} $\mathcal Spec(\MM)$ of a countable structure is defined to be the collection of Turing degrees which compute an isomorphic copy of $\MM$. Classifying degree spectra of countable structures amounts to finding which computability-theoretic aspects of a set $X$ of natural numbers are reflected in the isomorphism types of countable structures which $X$ can effectively present. Equivalently, the question is how much of the richness of the structure of the power set of the reals (or the Turing degrees) is reflected in the substructure consisting only of the degree spectra of countable structures. In other words, we ask which properties of sets of Turing degrees can be discerned by restricting to nicely definable sets; in this instance, ``nicely definable'' means being a degree spectrum of a structure, which is a particularly nice $\mathbf{\mathcal Sigma}^1_1$ definition of a set. In the example above, the Slaman-Wehner theorem says that being noncomputable is a property which is so definable. Even though there is no noncomputable set of least Turing degree, there is a single countable structure whose isomorphism type captures what it means for a set to contain any noncomputable information.
Among properties of reals, being noncomputable is a ``large'' property, since the collection of noncomputable reals is co-countable. Recent efforts were devoted to understanding large degree spectra. Among the co-countable classes, Kalimullin \cite{Kalimullin-low,Kalimullin-ce,Kalimullin-restrict} investigated complements of lower cones: relativisations of the Slaman-Wehner theorem to nonzero degrees. He showed that such a relativisation holds to any low and any c.e.\ degree, but not to some $\Delta^0_3$ degree. Goncharov, Harizanov, Knight, McCoy, Miller and Solomon \cite{GHKMMS} showed that for any computable ordinal $\alphalpha$, the collection of non-$\low_\alphalpha$ degrees -- those degrees $\deltaega$ for which $\deltaega^{(\alphalpha)}>\mathbf{0}^{(\alphalpha)}$ -- is a degree spectrum. Among classes which are not co-countable, we can appeal to the notions of category and measure to obtain notions of largeness, namely being co-meagre or being co-null. For example, Csima and Kalimullin \cite{CsimaKalimullin} showed that the collection of hyperimmune degrees is a degree spectrum. This is a collection which is both co-null and co-meagre, but is not co-countable.
The largeness notions given by category and measure are not compatible: there is a meagre co-null class, and also a null co-meagre class. In Section \ref{sec_separating} we show that this incompatibility is reflected in degree spectra; namely that there is a null and co-meagre degree spectrum (Theorem \ref{thm_null_co_meagre}), and a meagre and co-null spectrum (Theorem \ref{thm_meagre_co_null}).
It is not difficult to see that there can be only countably many co-null or co-meagre degree spectra. In fact, Kalimullin and Nies, and independently the authors \cite{GMS} showed that any co-null degree spectrum must include the Turing degree of Kleene's ${\mathcal O}O$, the complete $\Pi^1_1$ set. One then wonders if this bound can be improved to be hyperarithmetic. Our main theorem shows, in a strong way, that it cannot.
\begin{theorem} \label{main_theorem}
There is a structure whose degree spectrum is the set of all non-hyperarithmetic degrees.
\end{theorem}
This result has implications for higher degree structures. In \cite{GMS} it is shown that the Slaman-Wehner theorem fails for the degrees of constructibility: under standard mild set-theoretic assumptions, if for any non-constructible set $X\subseteq \omega$, a countable structure $\MM$ has a copy constructible in $X$, then $\MM$ has a constructible copy. Theorem \ref{main_theorem}, though, shows that the analogue of the Slaman-Wehner theorem does hold in the hyperdegrees. In other words, there is a structure which is $\Delta^1_1(X)$-presentable in every nonhyperarithmetic set $X$, but has no hyperarithmetic copy; so the hyperspectrum of this structure consists of the nonzero hyperdegrees.
Two lines of continued research present themselves. Further restrictions on definability arise from looking at degree spectra of structures in particular classes. It is unknown, for example, if the Slaman-Wehner theorem can be witnessed by a linear ordering. We show (Theorem \ref{thm_LO}) that Theorem \ref{main_theorem} can: there is a countable linear ordering whose degree spectrum is the collection of non-hyperarithmetic degrees. On the other hand, Theorem \ref{main_theorem} cannot be witnessed by a countable model of an uncountably categorical theory. The argument is simple: if the degree spectrum of $\MM$ contains all non-hyperarithmetic degrees, then the theory of $\MM$ is hyperarithmetic; because the theory of $\MM$ is hyperarithmetic in any copy of $\MM$, and there is a minimal pair of hyperdegrees. Khisamiev \cite{Khisamiev} and Harrington \cite{Harrington_prime}, however, showed that if $T$ is a countable, uncountably categorical theory, then every countable model of $T$ has a copy computable in $T$. We wonder if the countable structures which witness Theorem \ref{main_theorem} fall on one side of some standard watershed of the stability spectrum.
We also ask which other complements of countable ideals of Turing degrees are degree spectra. There are only countably many such ideals. Is the ideal of arithmetic degrees a complement of a degree spectrum? Kalimullin's original problem -- for which degrees $\deltaega$ is there a structure $\MM$ whose degree spectrum is the collection of degrees $\deltaegb \nle \deltaega$ -- remains open. There are only countably many such degrees, but we do not know any upper bound on them. Are they all hyperarithmetic? The next natural test case is finding whether $\mathbf{0}''$ is such a degree. A hazardous guess, based on a proof that all c.e.\ degrees are such degrees, asks if these degrees coincide with the Turing degrees which contain ranked sets.
\section{Relative to any non-hyperarithmetic degree}
In this section we prove Theorem \ref{main_theorem}.
\subsection{Discussion}
Goncharov, Harizanov, Knight, McCoy, R. Miller and Solomon \cite{GHKMMS} showed that for any computable ordinal $\alpha$, the collection of non-$\low_\alpha$-degrees, namely those degrees $\degd$ such that $\degd^{(\alpha)}>\mathbf{0}^{(\alpha)}$, is the degree spectrum of a structure~$\MM_\alpha$. We observe that a degree $\degd$ is hyperarithmetic if and only if it is $\low_\alpha$ for some computable ordinal $\alpha$: certainly every $\low_\alpha$-degree is computable in $\mathbf{0}^{(\alpha)}$, and so is hyperarithmetic; and for all computable $\alpha$, $\mathbf{0}^{(\alpha)}$ is $\low_{\alpha\cdot \omega}$, as
\[ \left(\mathbf{0}^{(\alpha)}\right)^{(\alpha\cdot \omega)} = \mathbf{0}^{(\alpha+\alpha\cdot \omega)} = \mathbf{0}^{(\alpha\cdot \omega)}.\]
It follows that a degree $\degd$ is non-hyperarithmetic if and only if it computes a copy of $\MM_\alpha$ for every computable $\alpha$, and in fact, an examination of the proof shows that such a copy is computable uniformly from $\degd$ and $\alpha$. In other words, there is a Turing functional $\Phi$ such that for any non-hyperarithmetic set $D$ and every notation $a$ for an ordinal $\alpha$ in some $\Pi^1_1$ path in Kleene's $\mathcal{O}$, $\Phi(D,a)$ is a copy of $\MM_\alpha$.
It would then seem natural to try to prove Theorem \ref{main_theorem} by examining the disjoint union $\MM$ of all the structures $\MM_\alphalpha$. Certainly any degree computing a copy of $\MM$ must be non-hyperarithmetic, and a non-hyperarithmetic degree can compute every component of $\MM$. However, a non-hyperarithmetic degree which does not compute Kleene's ${\mathcal O}O$ will not know to apply $\Phi$ only to notations of computable ordinals. It will therefore fail to compute a copy of $\MM$.
To overcome this problem, we examine what happens when $\Phi$ is applied to notations for nonstandard ordinals. Restricting to a computable set of such notations, which is obtained from an overspill argument, overcomes the problem of working with the $\Pi^1_1$-set of notations. Essentially, we show that the structure $\Phi(D,a)$ for notations $a$ for a nonstandard ordinal is in fact computable, and does not depend on $D$ or on $a$. In some sense it is the ``ill-founded limit'' of the structures $\MM_\alphalpha$. Adding this limit as a component to $\MM$ results in the structure we are seeking.
\subsection{The proof}
The proof of Theorem \ref{main_theorem} relies on three ingredients: nonstandard ordinals, the Wehner graph, and jump inversion for graphs.
\subsubsection{Nonstandard ordinals}
H.~Friedman (see \cite[III,1.9]{Sacks}) used the Gandy hyperlow basis theorem to construct a countable $\omega$-model of set theory which omits the least non-computable ordinal $\omegack$. We fix such a model $H$. So $H$ is a model of $\ZFC$ (by which we mean, of course, that $H$ is a model of a finite fragment of $\ZFC+V=L$, sufficiently strong for our purposes). The well-founded part of $H$ extends $L_{\omegack}$ but has height $\omegack$. There are non-well-ordered computable orderings of $\omega$ which have no infinite descending sequences in $H$. These are the order-types of non-standard computable ordinals of $H$.
For all $Z\in 2^\omegaega\cap H$, and every $H$-notation $\alphalpha\in {\mathcal O}O^H$, the $\alphalpha\tth$ iteration $Z^{(\alphalpha)}$ of the Turing jump of $Z$ is a well-defined element of $2^\omegaega\cap H$. This is compatible with the standard definition: for a standard notation $\alphalpha$, $Z^{(\alphalpha)}$ as computed in $H$ agrees with the value computed in $V$. This follows by induction on $\alphalpha$, or simply by absoluteness of $\mathbf{\Delta}^1_1(\ZFC)$ facts between $H$ and $V$.
We fix an ill-founded computable ordinal $\deltaelta^*$ of $H$. We identify $\deltaelta^*$ with an element of ${\mathcal O}O^H$, the collection of $a\in {\mathbb N}$ which $H$ believes are notations for computable ordinals. From this notation, which we also call $\deltaelta^*$, we can obtain a computable linear ordering of $\omegaega$ which is isomorphic to $\deltaelta^*$; we call this linear ordering $\deltaelta^*$ as well. An ordering such as $\deltaelta^*$, which is ill-founded but has no infinite descending hyperarithmetic sequences, was first constructed by Harrison~\cite{Har68}, who also showed that his ordering supports a jump hierarchy. The maximal well-founded initial segment of~$\deltaelta^*$ is $\omegack$. We choose $\deltaelta^*$ so that in $H$, it is a limit ordinal which is closed under ordinal addition and multiplication; and we choose the linear ordering $\deltaelta^*$ so that the operations of ordinal addition and multiplication on the left by $\omegaega$ are computable on $\deltaelta^*$. This can be achieved, for example, by taking $\deltaelta^*$ to be a power of a power of $\omegaega$.
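To see why a power of a power of $\omegaega$ has the required closure properties, suppose that (in $H$) $\deltaelta^* = \omegaega^{\omegaega^{\rho}}$ for some ordinal $\rho$. If $\betaeta,\gammaamma<\deltaelta^*$ then $\betaeta<\omegaega^{\mu}$ and $\gammaamma<\omegaega^{\nu}$ for some $\mu,\nu<\omegaega^{\rho}$, and so
\[ \betaeta+\gammaamma \,<\, \omegaega^{\max(\mu,\nu)+1} \qquad\text{and}\qquad \betaeta\cdot\gammaamma \,<\, \omegaega^{\mu+\nu},\]
both of which are below $\omegaega^{\omegaega^{\rho}}$, since $\omegaega^{\rho}$ is closed under ordinal addition.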
\subsubsection{The relativised Wehner graph}
A relativisation of Wehner's proof of the Slaman-Wehner theorem (\ref{thm_Slaman_Wehner}) yields, for any set $X$ of natural numbers, a (simple undirected) graph $G_X$ whose degree spectrum does not contain $\deltaeg_\Tur(X)$ but does contain the degrees strictly above $\deltaeg_\Tur(X)$. The construction yields a \emph{total} operator: a Turing functional $\Phi$ such that $\Phi(X)$ is total (an element of $2^\omegaega$) for every oracle $X$.
\betaegin{theorem}\langlebel{thm_relativised_Wehner}
There is a total Turing functional $\Phi$ such that for all $X$ and $Y$ in $2^\omegaega$, $\Phi(Y,X)$ is a graph, and $\Phi(Y,X)\cong G_X$ if and only if $Y \nle_\Tur X$.
\end{theorem}
\subsubsection{Jump inversion for graphs} \langlebel{subsubsec_jump_inversion}
The structures $\MM_\alphalpha$ mentioned above were obtained in \cite{GHKMMS} by inverting the Wehner graph $G_{\emptyset^{(\alphalpha)}}$. Replacing the edges of the graph by pairs of linear orderings built from $\Int^\alphalpha$, the authors show an $\alphalpha$-jump inversion for graphs: an operation taking a graph $G$ to a structure $\NN$ such that for all $X\in 2^\omegaega$, $X$ computes a copy of $\NN$ if and only if $X^{(\alphalpha)}$ computes a copy of $G$.
We show that this jump inversion can be stretched to the nonstandard ordinals, in which case it produces a uniform computable structure. To state the following theorem in a uniform fashion for both finite and infinite computable ordinals, we need to shift the infinite indices by 1 (as is done in \cite{AK00}). We discuss this at greater length in Subsection \ref{subsubsec_corrected}. Let $Z\in 2^\omegaega$. For $n<\omegaega$, we let $Z_{(n)} = Z^{(n)}$. For $\alphalpha\in [\omegaega,\omegack)$, we let $Z_{(\alphalpha)} = Z^{(\alphalpha+1)}$. If $Z\in H$, then for all $\alphalpha\in (\omegack)^H\setminus \omegack$, we also let $Z_{(\alphalpha)}= Z^{(\alphalpha+1)}$.
\betaegin{theorem}\langlebel{thm_jump_inversion}
For any graph $G$ and $\alphalpha<\deltaelta^*$ there is a structure $G^{-\alphalpha}$ (in a fixed language ${\mathcal L}L$) with the following properties:
\betaegin{enumerate}
\item For $\alphalpha<\omegack$, for any $X\in 2^\omegaega$, if $X$ computes a copy of $G^{-\alphalpha}$, then $X_{(2\alphalpha)}$ computes a copy of $G$.\footnote{In fact, a set $X\in 2^\omegaega$ computes a copy of $G^{-\alphalpha}$ if and only if $X_{(2\alphalpha)}$ computes a copy of $G$. This follows from a relativisation of a lemma below; but this equivalence is not necessary for the proof of Theorem \ref{main_theorem}.}
\item For $\alphalpha,\betaeta \in \deltaelta^*\setminus \omegack$, and any graphs $G$ and $H$, $G^{-\alphalpha}\cong H^{-\betaeta}$.
\end{enumerate}
Moreover, for any total Turing functional $\Phi$ there is a Turing functional $\Psi$ such that for all $X\in 2^\omegaega$, $\alphalpha< \deltaelta^*$ and all graphs $G$, if $\Phi(X,\emptyset_{(2\alphalpha)})\cong G$ then $\Psi(X,\alphalpha)\cong G^{-\alphalpha}$.
\end{theorem}
The isomorphism type of $G^{-\alphalpha}$ depends only on $\alphalpha$ and the isomorphism type of~$G$. We fix a structure $G^{-\infty}$ such that for any graph $H$ and any $\alphalpha\in \deltaelta^*\setminus \omegack$, $G^{-\infty}\cong H^{-\alphalpha}$. We will see that $G^{-\infty}$ has a computable copy.
\subsubsection{The proof of Theorem \ref{main_theorem}}
Before we present the proofs of Theorems \ref{thm_relativised_Wehner} and \ref{thm_jump_inversion}, we show
how to use them to prove Theorem \ref{main_theorem}. The structure $\mathcal AAA$ whose degree spectrum is the
collection of non-hyperarithmetic degrees is the disjoint union of $G_{\emptyset_{(2\alphalpha)}}^{-\alphalpha}$ for
$\alphalpha<\deltaelta^*$. In other words, $\mathcal AAA$ is the disjoint union of $G_{\emptyset_{(2\alphalpha)}}^{-\alphalpha}$ for
$\alphalpha<\omegack$ and infinitely many copies of $G^{-\infty}$. Formally, the universe of $\mathcal AAA$ is the disjoint
union of the universes of the structures $\NN_\alphalpha = G_{\emptyset_{(2\alphalpha)}}^{-\alphalpha}$ for $\alphalpha<\deltaelta^*$,
with an added equivalence relation whose equivalence classes are the various $\NN_\alphalpha$'s.
We show that a set $X$ computes a copy of $\mathcal AAA$ if and only if it is not hyperarithmetic.
For $\alphalpha<\omegack$, if a set $X$ computes a copy of $\NN_\alphalpha$ then $X_{(2\alphalpha)}$ computes a copy of $G_{\emptyset_{(2\alphalpha)}}$, whence $X_{(2\alphalpha)} >_\Tur \emptyset_{(2\alphalpha)}$. As we argued above, if~$X$ is hyperarithmetic, then there is some computable ordinal $\betaeta$ such that for all computable ordinals $\gammaamma\gammae \betaeta$, $X_{(\gammaamma)} \equiv_\Tur \emptyset_{(\gammaamma)}$. It follows that if $X$ is hyperarithmetic, then there are computable ordinals $\gammaamma$ such that $X$ does not compute a copy of $\NN_\gammaamma$, and so $X$ does not compute~$\mathcal AAA$.
On the other hand, if $X$ is not hyperarithmetic, then for all $\alphalpha<\omegack$, $X\nle_\Tur \emptyset_{(2\alphalpha)}$, and so $\Phi(X,\emptyset_{(2\alphalpha)})\cong G_{\emptyset_{(2\alphalpha)}}$, where $\Phi$ is the functional guaranteed by Theorem \ref{thm_relativised_Wehner}. If $\Psi$ is the functional obtained from $\Phi$ by Theorem \ref{thm_jump_inversion}, then it follows that $\Psi(X,\alphalpha)\cong \NN_\alphalpha$. Certainly for $\alphalpha\in \deltaelta^*\setminus \omegack$, we have $\Psi(X,\alphalpha)\cong G^{-\infty}$. Hence the disjoint union of the structures $\Psi(X,\alphalpha)$ for $\alphalpha<\deltaelta^*$ is an $X$-computable structure isomorphic to $\mathcal AAA$.
This completes the proof of Theorem \ref{main_theorem}. We can think of this proof as follows. The component $\NN_\alphalpha$ of $\mathcal AAA$ can be thought of as the result of diagonalising $\mathcal AAA$ against the $\alphalpha\tth$ hyperarithmetic structure. As we shall shortly see, the graph $G_{\emptyset_{(2\alphalpha)}}$ is obtained by aggregating all possible ways that a finite initial segment of a set $X\nle_\Tur \emptyset_{(2\alphalpha)}$ codes into the graph the fact that it enumerates a set that $\emptyset_{(2\alphalpha)}$ cannot enumerate. This process ensures that the result does not depend on $X$. The jump-inversion operation then translates this information, coded directly into the atomic diagram of $G_{\emptyset_{(2\alphalpha)}}$, into the $\mathcal Sigma^0_{2\alphalpha}$-diagram of $\NN_\alphalpha$.
This translation involves a certain amount of ``overwriting''; the raw information present in the graph $G_{\emptyset_{(2\alphalpha)}}$ is homogenised, or diluted, so that only an iteration of the jump of length $2\alphalpha$, and no shorter, can recover it. If $\alphalpha$ is nonstandard, then this dilution process completely overwrites all the information in the graph produced by $X$ as it attempts to diagonalise against $\emptyset_{(2\alphalpha)}$. We have no control over this graph, as it is possible that $X$ is nonhyperarithmetic yet is computable from $\emptyset_{(2\alphalpha)}$. Overwriting all the information coded in this graph ensures that we produce $\NN_\alphalpha \cong G^{-\infty}$, which again doesn't depend on $X$.
Note that in our construction, it is unimportant that $\mathcal AAA$ is an unlabelled disjoint union of the components $\NN_\alphalpha$; we could have also built a labelled disjoint union, in which the component $\NN_\alphalpha$ is defined by a unary predicate indexed by $\alphalpha$. This shows that it is not important that for nonstandard $\alphalpha$, $G^{-\alphalpha}$ does not depend on $\alphalpha$; what is important is that it does not depend on $G$.
\
\subsection{The Wehner graph}
We prove Theorem \ref{thm_relativised_Wehner}. As we mentioned above, this is a relativisation of Wehner's proof of Theorem \ref{thm_Slaman_Wehner}.
For a set $X\in 2^\omegaega$, consider the following family of finite sets:
\[
\mathcal FF_X = \left\{ \{e\}\oplus F\,:\, e< \omega, F\subseteq \omega\text{ is finite, and } F\neq W_e^X\right\}.
\]
Recall that for a set $A\subseteq \omegaega^2$, for all $n<\omegaega$ we let $A^{[n]} = \left\{x \,:\, (n,x)\in A \right\}$. For any countable collection $\mathcal FF$ of subsets of $\omegaega$, we say that a set $A\subseteq \omega^2$ is an {\em enumeration} of $\mathcal FF$ if $\mathcal FF=\{A^{[n]}:n<\omega\}$. We say that a set $Y$ can {\em enumerate} $\mathcal FF$ if there is an enumeration of $\mathcal FF$ that is $Y$-c.e.
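For example, the family of all finite subsets of $\omegaega$ is enumerated by the computable set $A = \{(u,x)\,:\, x\in D_u\}$, where $D_u$ is the $u\tth$ finite set of natural numbers; note that the definition allows an enumeration to list the same member of $\mathcal FF$ in many different columns.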
\betaegin{proposition} \langlebel{prop_Wehner_family_diagonalizes}
No set $X$ can enumerate $\mathcal FF_X$.
\end{proposition}
\betaegin{proof}
If $X$ were able to enumerate $\mathcal FF_X$, then $X$ could compute a function $f$ such that for all $e$, $W_{f(e)}^X$ is a finite set different from $W_e^X$: given $e$, search an $X$-c.e.\ enumeration $A$ of $\mathcal FF_X$ for a column $A^{[n]}$ containing $2e$, and let $f(e)$ be an $X$-c.e.\ index for the finite set coded by the odd part of that column. This contradicts the relativised recursion theorem, which provides some $e$ with $W_{f(e)}^X = W_e^X$.
\end{proof}
\betaegin{proposition} \langlebel{prop_uniform_Wehner_family}
There is a c.e.\ operator $V$ such that for all $X$ and $Y\nle_\Tur X$, $V(Y,X)$ is an enumeration of $\mathcal FF_X$.
\end{proposition}
\betaegin{proof}
Let $X,Y\in 2^\omegaega$.
For each $n<\omegaega$ we enumerate $V(Y,X)^{[n]}$ as follows.
Suppose that $n= (e,u,s_0)$.
We start by letting
\[ V(Y,X)_{s_0}^{[n]} = \{e\}\oplus D_u = \{2e\}\cup \{2x+1: x\in D_u\};\] here $D_u$ is the $u\tth$ finite set of natural numbers.
At every stage $s>s_0$, if we see that $V(Y,X)_s^{[n]}=\{e\}\oplus W_{e,s}^X$, then we enumerate $2x+1$ into $V(Y,X)_{s+1}^{[n]}$, where $x$ is the least element of $Y\oplus \overline Y$ such that $2x+1\notin V(Y,X)_s^{[n]}$. Here $\overline Y = \omegaega\setminus Y$.
Let $\{e\}\oplus F$ be an element of $\mathcal FF_X$. There is a stage $s_0$ such that for all $s\gammae s_0$, $F\ne W_{e,s}^X$. If $F=D_u$ then $V(Y,X)^{[n]} = \{e\}\oplus F$ where $n = (e,u,s_0)$, as we set $V(Y,X)_{s_0}^{[n]} = \{e\}\oplus F$ and never enumerate other elements into $V(Y,X)^{[n]}$ at any later stage. So $\mathcal FF_X\subseteq \{V(Y,X)^{[n]}:n<\omega\}$.
Suppose that $Y\nle_\Tur X$. Pick any $n=(e,u,s_0)$; we claim that $V(Y,X)^{[n]}$ is finite, which, by the construction, implies that $V(Y,X)^{[n]}\in \mathcal FF_X$. The reason $V(Y,X)^{[n]}$ is finite is that $Y\oplus \overline Y$ is not c.e.\ in $X$. If there were infinitely many stages at which new elements were enumerated into $V(Y,X)^{[n]}$, then we would end up with
\[ \{e\} \oplus W_e^X \,=\, V(Y,X)^{[n]} \,=\, \{e\} \oplus (D_u \cup (Y\oplus \overline Y)),\]
whence $Y\oplus \overline Y$ would be c.e.\ in $X$, yielding $Y\le_\Tur X$.
\end{proof}
Now, following Knight (see \cite{AK00}) and Khoussainov \cite{Bakh_dasies}, we code families of sets in graphs.
\betaegin{definition}
Given a set $F\subseteq\omega$, we define the \emph{flower graph} $G(F)$ of $F$ as follows:
We start with a vertex $v$, and for each $n\in F$ we add a cycle of length $n+3$ starting and ending in $v$.
Given a family of sets $\mathcal FF$, we define the {\em bouquet graph $G(\mathcal FF)$} of $\mathcal FF$ to be the disjoint union of infinitely many copies of $G(F)$ for each $F\in \mathcal FF$, together with infinitely many isolated vertices.
\end{definition}
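For instance, $G(\{0,2\})$ consists of a vertex $v$ together with a cycle of length $3$ and a cycle of length $5$ through $v$; in general $F$ can be read off from $G(F)$, since the elements of $F$ are exactly the numbers $n$ such that $G(F)$ contains a cycle of length $n+3$. Note also that since each flower appears infinitely often in $G(\mathcal FF)$, the isomorphism type of $G(\mathcal FF)$ depends only on the family $\mathcal FF$, and not on how (or how often) its members are listed.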
\betaegin{lemma}[\cite{AK00,Bakh_dasies}] \langlebel{lem_flower_grpahs_and_enumerations}
For any countable collection $\mathcal FF$ of subsets of $\omegaega$, a set $Y$ can compute a copy of $G(\mathcal FF)$ if and only if $Y$ can enumerate $\mathcal FF$.
This equivalence is uniform: there is a total Turing functional ${\mathcal L}ambda$ such that for any set $Z$ and any index $e$, if $\mathcal FF_e^Z$ is the family enumerated by $W_e^Z$, then ${\mathcal L}ambda(Z,e)$ is a presentation of $G\left( \mathcal FF_e^Z\right)$.
\end{lemma}
For $X\in 2^\omegaega$, we let the \emph{Wehner graph} $G_X$ of $X$ be the bouquet graph $G\left(\mathcal FF_X \right)$ of the Wehner family $\mathcal FF_X$. Theorem \ref{thm_relativised_Wehner} now follows immediately from Propositions \ref{prop_Wehner_family_diagonalizes} and \ref{prop_uniform_Wehner_family}, and Lemma \ref{lem_flower_grpahs_and_enumerations}.
\
\subsection{Jump inversion for structures}
We set out to prove Theorem \ref{thm_jump_inversion}. The theorem actually follows from an application of an overspill argument to Ash's metatheorem (see \cite{AK00}); the relevant work here is by Ash and Knight, on pairs of structures \cite{AK90}. This work was used in \cite{GHKMMS} to invert the $\alphalpha$-jump for standard computable ordinals $\alphalpha$. Ash's theorem has a complicated proof, and its full power is not required to prove jump-inversion for structures. This is why we give a complete proof. The construction we present has its roots in work of Hirschfeldt and White \cite{HW02}. However, Hirschfeldt and White do not give sharp bounds for the pairs of trees they construct, whereas proving Theorem \ref{thm_jump_inversion} does require these sharp bounds. Possibly our construction is not new, but we have not been able to find it in the literature.
\
The main part of the argument proving Theorem \ref{thm_jump_inversion} is the construction of uniformly computable structures $\mathcal SSS_{\alphalpha,0}$ and $\mathcal SSS_{\alphalpha,1}$, for $\alphalpha<\deltaelta^*$, which for standard $\alphalpha < \omegack$ discern $\Delta^0_{2\alphalpha+1}$ predicates, and for nonstandard $\alphalpha\in \deltaelta^*\setminus \omegack$ have the same isomorphism type. This is the content of Propositions \ref{prop_jump_inversion_part_two}, \ref{prop_jump_inversion_part_one}, and \ref{prop_jump_inversion_part_three}. Here discerning $\Delta^0_{2\alphalpha+1}$ predicates means that the problem of detecting isomorphisms to either $\mathcal SSS_{\alphalpha,0}$ or $\mathcal SSS_{\alphalpha,1}$ is $\Delta^0_{2\alphalpha+1}$-complete. That is, given a structure $\NN$, $\emptyset^{(2\alphalpha+1)}$ can determine if $\NN$ is isomorphic to $\mathcal SSS_{\alphalpha,0}$ or $\mathcal SSS_{\alphalpha,1}$; and for any $\Delta^0_{2\alphalpha+1}$ set $X$, there is a uniformly computable sequence of structures $\seq{\NN_n}_{n<\omegaega}$ such that if $n\in X$, then $\NN_n\cong \mathcal SSS_{\alphalpha,1}$, and if $n\notin X$, then $\NN_n\cong \mathcal SSS_{\alphalpha,0}$.
The building blocks of the structures $\mathcal SSS_{\alphalpha,0}$ and $\mathcal SSS_{\alphalpha,1}$ are fat trees, which we now define.
\
\subsubsection{Fat trees} \langlebel{subsec_fat_trees}
We work with nonempty countable rooted trees of height at most~$\omegaega$. Usually, such trees are defined to be partial orderings $(T,<_T)$ in which, for all $y\in T$, the collection of predecessors $\{x\in T\,:\, x<_T y\}$ is linearly ordered and finite; \emph{rooted} means that $T$ has a $<_T$-least element, named $\rooot(T)$. However, such a presentation of a tree $T$ does not allow us to effectively compute the parent (the immediate predecessor) of a non-root element of $T$. Further, under this definition, a homomorphism $f\colon T\to S$ of trees does not need to take immediate successors to immediate successors, or the root to the root. To overcome this, when we consider them as structures, we add the parent function which maps every non-root element of $T$ to its parent, and the root to itself. Note that the ordering $<_T$ is ${\mathcal L}L_{\omegaega_1,\omegaega}$-definable from the parent function, and so we may omit it when specifying a tree.
Each such tree is isomorphic to a downward closed subset of $\omegaega^{<\omegaega}$ (the collection of all finite strings of natural numbers), with the ordering given by extension; in other words, the parent unary function is interpreted as the function which chops off the last bit of the string. However, it will be useful to use the general notion; in particular, we will use subsets of $\omegaega^{<\omegaega}$ which do not necessarily contain the empty string~$\seq{}$, but which are trees under the ordering of string extension.
Let $T\in H$ be a tree, and suppose that $H$ believes that $T$ is well-founded. That is, $H$ does not contain an infinite path of $T$. Then $H$ contains a unique \emph{rank function} for $T$: a function $r$ from $T$ to the countable ordinals of $H$, such that for all $x\in T$, $r(x)$ is the least upper bound (in $H$) of $r(y)+1$, for $y>_T x$. We let $\rk_T$ be this unique rank function, and let $\rk(T) = \rk_T(\rooot(T))$.\footnote{Note that this notation differs by 1 from the standard set-theoretic definition, which lets $\rk(T) = \rk(\rooot(T))+1$.} The range of $\rk_T$ contains every $\alphalpha<\rk(T)$, in other words, $\ranglenge \rk_T = \{ \betaeta\,:\, \betaeta\le \rk(T)\} = \rk(T)+1$.
For any tree $T$ and $x\in T$, let $T_x = \{ y\in T\,:\, y\gammae_T x\}$ with the partial ordering restricted from $<_T$; so $x = \rooot(T_x)$. We have $\rk_{T_x} = \rk_T \rest{T_x}$, so $\rk(T_x) = \rk_T(x)$. We also let $\hht_T(x)$, the \emph{height} of $x$ on $T$, be the size of the set $\{y\in T\,:\, y<_T x\}$. The height $\hht(T)$ of a tree $T$ is the supremum of the heights of its elements. If $T$ has finite height, then $\rk(T)= \hht(T)$. If the height of $T$ is $\omegaega$, then $\rk(T)$ is infinite. For any ranked tree $T$, for all finite $k\le \hht(T)$, $\rk(T) = \sup \{\rk_T(x)+k\,:\, \hht_T(x)=k\}$.
If $T$ is a hyperarithmetic well-founded tree, then the rank function of $T$ is hyperarithmetic (and the rank of $T$ is a computable ordinal). It follows that the rank function of $T$ (as computed in $V$) is an element of $H$; by $\Delta^1_1$ absoluteness between $H$ and $V$, we see that $\rk_T$ is the ``true'' rank function of $T$, and that $\rk(T)$ is computed correctly in $H$. In fact, for a hyperarithmetic tree $T$ which $H$ believes is well-founded, $\rk(T)<\omegack$ if and only if $T$ is well-founded, because if $\rk(T)<\omegack$, then the well-foundedness of $\rk(T)$ shows that $T$ is well-founded.
\betaegin{definition}\langlebel{def_fat_tree}
Let $T$ be a tree which $H$ believes is well-founded. The tree $T$ is \emph{fat} if for all $x\in T$, for all $\gammaamma < \rk_T(x)$, there are infinitely many children (immediate successors) $y$ of $x$ on $T$ such that $\rk_T(y) = \gammaamma$.
\end{definition}
If $T$ is truly well-founded, then the fatness of $T$ does not depend on the choice of $H$. The utility of fat trees is that they are universal for their rank.
\betaegin{proposition}\langlebel{prop_fat_trees_universal}
Let $S$ and $T$ be trees which $H$ believes are well-founded. If $\rk(S)\le \rk(T)$, and $T$ is fat, then $S$ is embeddable into $T$. If both $T$ and $S$ are fat and $\rk(S) = \rk(T)$, then $S$ and $T$ are isomorphic.
Furthermore, if both $T$ and $S$ are fat and are ill-founded, then they are isomorphic, regardless of their nonstandard rank.
\end{proposition}
\betaegin{proof}
An isomorphism $f\colon S\to T$ is built by a back-and-forth argument, building an isomorphism from the roots up. Inductively, we map an element $x\in S$ to $f(x)\in T$ such that $\rk_S(x)< \omegack$ if and only if $\rk_T(f(x))< \omegack$, and if so, then these ranks are equal. Fatness, together with the fact that $\omegack$ is the well-founded part of the ordinals of $H$, shows that the induction can always continue.
The embedding is similarly built, by a forth-only argument.
\end{proof}
Given any $\alphalpha<\deltaelta^*$, we can effectively obtain a computable (index for a) fat tree of rank $\alphalpha$. This is done using the \emph{fattening} operation on trees. For any tree $T$, let $\fat(T)$ be the subset of $\omegaega^{<\omegaega}$ which consists of the (nonempty) sequences of the form $\seq{\rooot(T),(x_1,n_1),(x_2,n_2),\deltaots, (x_k,n_k)}$, where $k<\omegaega$, $n_1,n_2,\deltaots, n_k\in {\mathbb N}$ and $\rooot(T)<_T x_1 <_T x_2 <_T \deltaots <_T x_k$. It is easy to see that the fattening operation induces a computable map on indices of computable trees.
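For example, if $T$ consists of a root $r$ and a single further node $a$, then $\fat(T)$ consists of the root $\seq{r}$ together with the leaves $\seq{r,(a,n)}$ for $n<\omegaega$: a root with infinitely many children of rank $0$, which is a fat tree of rank $1 = \rk(T)$, as the next lemma states in general.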
\betaegin{lemma}\langlebel{lem_fattening_makes_fat}
If $T$ is well-founded in $H$, then $\fat(T)$ is fat, and $\rk(\fat(T))= \rk(T)$.
\end{lemma}
\betaegin{proof}
The function mapping $\seq{\rooot(T),(x_1,n_1),(x_2,n_2),\deltaots, (x_k,n_k)}$ to $\rk_T(x_k)$ (and $\rooot(\fat(T)) = \seq{\rooot(T)}$ to $\rk(T)$) is the rank function for $\fat(T)$. The fact that for $x\in T$, the range of $\rk_{T_x}$ is $\rk_T(x)+1$ shows that $\fat(T)$ is fat.
\end{proof}
For $\alphalpha<\deltaelta^*$, let $S_{\alphalpha}$ be the \emph{tree of descending sequences} from $\alphalpha$;
\[ S_\alphalpha = \left\{\seq{\alphalpha_1,\alphalpha_2,\deltaots, \alphalpha_k} \,:\, \alphalpha>\alphalpha_1>\alphalpha_2 > \deltaots > \alphalpha_k \right\};\]
the root of $S_{\alphalpha}$ is the empty sequence $\seq{}$. It is easy to check that for all nonempty
$\seq{\alphalpha_1,\deltaots, \alphalpha_k}\in S_{\alphalpha}$,
$\rk_{S_\alphalpha}(\seq{\alphalpha_1,\deltaots, \alphalpha_k}) = \alphalpha_k$, and so that
$\rk(S_\alphalpha) = \alphalpha$; this is because for all $\alphalpha< \deltaelta^*$, $\alphalpha = \sup\{ \betaeta+1\,:\, \betaeta<\alphalpha\}$. We let $T_\alphalpha = \fat(S_\alphalpha)$; so $T_\alphalpha$ is a computable fat tree of rank $\alphalpha$,
and a computable index for $T_\alphalpha$ is effectively obtained from $\alphalpha$.
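For instance, $S_3$ consists of the eight sequences $\seq{}$, $\seq{0}$, $\seq{1}$, $\seq{2}$, $\seq{1,0}$, $\seq{2,0}$, $\seq{2,1}$ and $\seq{2,1,0}$; the rank of each nonempty sequence is its last entry, $\rk(S_3)=3$, and $T_3 = \fat(S_3)$ is a computable fat tree of rank $3$.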
We let $T_\infty$ be a fat, ill-founded computable tree. So for all $\alphalpha\in \deltaelta^*\setminus \omegack$, $T_\alphalpha\cong T_\infty$.
\subsubsection{The adjusted hyperarithmetic hierarchy} \langlebel{subsubsec_corrected}
Before we define the structures $\mathcal SSS_{\alphalpha,0}$ and $\mathcal SSS_{\alphalpha,1}$, we explain why we modified the iterated jump and used the sets $Z_{(\alphalpha)}$ rather than $Z^{(\alphalpha)}$ (Subsection \ref{subsubsec_jump_inversion}), that is, why we work with $Z^{(\alphalpha+1)}$ rather than $Z^{(\alphalpha)}$ if $\alphalpha$ is infinite, but not if it is finite.
Recall the following definition of the hyperarithmetic hierarchy. For each $\alphalpha<\omegack$, we fix an effective
listing $\seq{W_{e,\alphalpha}}_{e<\omegaega}$ of the $\mathcal Sigma^0_\alphalpha$ subsets of ${\mathbb N}$. We let $W_{e,1} = W_e$ be the
$e\tth$ c.e.\ set (or we could start with $\alphalpha=0$, if we like, by listing all the primitive recursive sets,
say). Given $W_{i,\betaeta}$ for all $i<\omegaega$ and $\betaeta<\alphalpha$, we let $W_{e,\alphalpha}$ be the union of the sets
$\overline{W_{i,\betaeta}}$, where $(i,\betaeta)\in W_e \cap (\omegaega\times \alphalpha)$.
Note that in fact, this definition can be pursued in $H$, as we did with ranks of $H$-well-founded trees; this gives us an effective listing, for each $\alphalpha<\deltaelta^*$, of what $H$ defines as the $\mathcal Sigma^0_\alphalpha$ sets. These definitions are relativised naturally to any oracle $Z\in H$, and for $\alphalpha<\omegack$, to any oracle $Z\in 2^\omegaega$.
Now there is a discrepancy between the finite and infinite levels of the hierarchy, in the relationship between $\mathcal Sigma^0_\alphalpha$ and $\emptyset^{(\alphalpha)}$. For nonzero $n<\omegaega$, $\emptyset^{(n)}$ is $\mathcal Sigma^0_{n}$-complete (for many-one reductions), but for computable successor ordinals $\alphalpha\gammae \omegaega$, $\emptyset^{(\alphalpha)}$ is merely $\mathcal Sigma^0_{\alphalpha-1}$-complete. This is because for limit ordinals $\alphalpha$, $\mathcal Sigma^0_\alphalpha$ sets are effective unions of $\Pi^0_\betaeta$ sets for $\betaeta<\alphalpha$ unbounded in $\alphalpha$. Similarly, for $n<\omegaega$, a set is $\mathcal Sigma^0_{n+1}$ if and only if it is c.e.\ in $\emptyset^{(n)}$; whereas for all $\alphalpha\gammae \omegaega$, a set is $\mathcal Sigma^{0}_{\alphalpha}$ if and only if it is c.e. in $\emptyset^{(\alphalpha)}$.
We thus use the modification employed by Ash and Knight in \cite{AK00} to overcome this split between the finite and infinite case. For all $Z\in H\cap 2^\omegaega$, for all $\alphalpha<\deltaelta^*$, a set is $\mathcal Sigma^0_{\alphalpha+1}(Z)$ if and only if it is c.e.\ in $Z_{(\alphalpha)}$, and $Z_{(\alphalpha)}$ is $\mathcal Sigma^0_\alphalpha(Z)$-complete. The same holds for $\alphalpha<\omegack$ for all $Z\in 2^\omegaega$.
All of these equivalences are uniform. For example, there is a c.e.\ operator $\Gamma$ such that for all $Z\in 2^\omegaega$, all $\alphalpha<\omegack$ and all $e<\omegaega$, $\Gamma(Z_{(\alphalpha)},\alphalpha,e) = W_{e,\alphalpha+1}(Z)$.
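To see the effect of the shift at the first infinite level: by the facts recalled above, a set is $\mathcal Sigma^0_{\omegaega+1}$ if and only if it is c.e.\ in $\emptyset^{(\omegaega+1)} = \emptyset_{(\omegaega)}$, and $\emptyset_{(\omegaega)}$ is $\mathcal Sigma^0_{\omegaega}$-complete; the unshifted set $\emptyset^{(\omegaega)}$ cannot play this role, as it is $\Delta^0_{\omegaega}$ and so not $\mathcal Sigma^0_{\omegaega}$-complete. With the shifted indexing, the relationship between $\mathcal Sigma^0_{\alphalpha+1}$, $Z_{(\alphalpha)}$ and relative computable enumerability is therefore the same at level $\omegaega$ as it is at the finite levels.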
\subsubsection{The structures $\mathcal SSS_{\alphalpha,0}$ and $\mathcal SSS_{\alphalpha,1}$}
We use the fat trees $T_\alphalpha$ to define the structures $\mathcal SSS_{\alphalpha,0}$ and $\mathcal SSS_{\alphalpha,1}$. Both will be \emph{pairs of trees}, or labelled disjoint unions of trees. For trees $S$ and $T$, the universe of the structure $(S,T)$ is the disjoint union of $S$ and $T$; the parent function is defined on both parts; and a unary predicate defines $S$ in the structure.
\betaegin{definition}\langlebel{def_alpha_complete_pair}
For nonzero $\alphalpha< \deltaelta^*$, let $\mathcal SSS_{\alphalpha,0} = (T_{\omegaega\alphalpha}, T_{\omegaega\alphalpha+1})$, and $\mathcal SSS_{\alphalpha,1} = (T_{\omegaega\alphalpha+1}, T_{\omegaega\alphalpha})$.
\end{definition}
\betaegin{proposition} \langlebel{prop_jump_inversion_part_two}
For $\alphalpha,\betaeta\in \deltaelta^*\setminus \omegack$, $\mathcal SSS_{\alphalpha,0}\cong \mathcal SSS_{\betaeta,0}\cong \mathcal SSS_{\betaeta,1}$.
\end{proposition}
\betaegin{proof}
All three structures are isomorphic to $(T_\infty,T_\infty)$.
\end{proof}
The following proposition states that the isomorphism problem for the pair $(\mathcal SSS_{\alphalpha,0},\mathcal SSS_{\alphalpha,1})$ is $\emptyset_{(2\alphalpha)}$-computable, uniformly in $\alphalpha$, in a relativisable way. Note that this problem is not trivial: for $\alphalpha<\omegack$, $\mathcal SSS_{\alphalpha,0}\ncong \mathcal SSS_{\alphalpha,1}$, because $T_{\omegaega\alphalpha}\ncong T_{\omegaega\alphalpha+1}$. This relies on the fact that a standard ordinal cannot be embedded into a smaller ordinal. This shows that the isomorphism between say $\mathcal SSS_{\alphalpha,0}$ and $\mathcal SSS_{\alphalpha,1}$ for nonstandard $\alphalpha$ cannot belong to $H$.
Let $\seq{\Phi_e}$ be an effective sequence of all Turing functionals.
\betaegin{proposition}\langlebel{prop_jump_inversion_part_one}
There is a Turing functional $\Theta$ such that for all $Z\in 2^\omegaega$, for all nonzero $\alphalpha<\omegack$, and all $e<\omegaega$,
\betaegin{itemize}
\item If $\Phi_e(Z)\cong \mathcal SSS_{\alphalpha,0}$, then $\Theta(Z_{(2\alphalpha)},\alphalpha,e) = 0$; and
\item If $\Phi_e(Z)\cong \mathcal SSS_{\alphalpha,1}$, then $\Theta(Z_{(2\alphalpha)},\alphalpha,e) = 1$.
\end{itemize}
\end{proposition}
\betaegin{proof}
We analyse the complexity, for $\alphalpha<\omegack$, of the class of well-founded trees whose rank is bounded by $\alphalpha$. We need to work in relativised form. For any $Z\in 2^\omegaega$ and $\alphalpha<\omegack$, let $R_\alphalpha(Z)$ be the collection of indices $e<\omegaega$ such that $\Phi_e(Z)$ is total and is a tree whose rank is smaller than $\alphalpha$.
We show that for all computable $\alphalpha\gammae 1$ and $Z\in 2^\omegaega$:
\betaegin{enumerate}
\item $R_{\omegaega\alphalpha}(Z)\le_\Tur Z_{(2\alphalpha)}$; and
\item for all $n<\omegaega$, $R_{\omegaega\alphalpha+n}(Z)$ is co-c.e.\ in $Z_{(2\alphalpha)}$.
\end{enumerate}
These are proved simultaneously by effective transfinite recursion on $\omegack$ (that is, on the well-founded initial segment of the computable linear ordering $\deltaelta^*$). That is, by recursion on $\alphalpha<\omegack$, we define Turing functionals which, given $Z$, $\alphalpha$ and $n$, produce a $Z_{(2\alphalpha)}$-index for $R_{\omegaega\alphalpha}(Z)$ (an index $e$ such that $\Phi_e(Z_{(2\alphalpha)}) = R_{\omegaega\alphalpha}(Z)$), and a $\Pi^0_{2\alphalpha+1}(Z)$-index for $R_{\omegaega\alphalpha+n}(Z)$ (an index $e$ such that ${W_{e,2\alphalpha+1}(Z)} = \omegaega\setminus R_{\omegaega\alphalpha+n}(Z)$), recalling that a set is co-c.e.\ in $Z_{(2\alphalpha)}$ if and only if it is $\Pi^0_{2\alphalpha+1}(Z)$.
Since a tree has a finite rank if and only if it has finite height, $R_\omegaega(Z)$ is $Z_{(2)} = Z''$-computable, as the condition that $\Phi_e(Z)$ is total is $\Pi^0_2(Z)$, and the condition that $\Phi_e(Z)$ has finite height is $\mathcal Sigma^0_2(Z)$. For $\alphalpha>1$, given (2) for $\betaeta<\alphalpha$, we in fact see that $R_{\omegaega\alphalpha}(Z)$ is $\mathcal Sigma^0_{2\alphalpha}(Z)$, because for any tree $T$, $\rk(T)<\omegaega\alphalpha$ if and only if there is some $\betaeta<\alphalpha$ and $n<\omegaega$ such that $\rk(T)< \omegaega\betaeta+n$. We then use the fact that $Z_{(2\alphalpha)}$ is $\mathcal Sigma^0_{2\alphalpha}(Z)$-complete to see that $Z_{(2\alphalpha)}$ computes $R_{\omegaega\alphalpha}(Z)$.
Now for any $\alphalpha\gammae 1$, we see that for any $n<\omegaega$, $R_{\omegaega\alphalpha+n}(Z)$ is co-c.e.\ in $R_{\omegaega\alphalpha}(Z)$, and then use (1). This is because for any tree $T$, $\rk(T)<\omegaega\alphalpha+n$ if and only if for every $x\in T$ of height $n$, $\rk(T_x)< \omegaega\alphalpha$.
Now we define $\Theta$ as follows. Given $Z\in 2^\omegaega$, $\alphalpha<\deltaelta^*$ and $e<\omegaega$, if $\Phi_e(Z) = (S,T)$, then we run the co-enumeration of $R_{\omegaega\alphalpha+1}(Z)$ with oracle $Z_{(2\alphalpha)}$. We let $\Theta(Z_{(2\alphalpha)},\alphalpha,e) = 1$ if we first notice that $\rk(S)>\omegaega\alphalpha$, and output 0 if we first notice that $\rk(T)> \omegaega\alphalpha$. Note, of course, that given $Z_{(2\alphalpha)}$ and $\alphalpha$, we can effectively find~$Z$; and that from a $Z$-index for $(S,T)$ we can effectively find $Z$-indices for $S$ and for~$T$.
\end{proof}
\subsubsection{Hardness of the isomorphism problem}
We want to code any $\Delta^0_{2\alphalpha+1}$ set into an isomorphism problem for the pair $\mathcal SSS_{\alphalpha,0}$ and $\mathcal SSS_{\alphalpha,1}$. We start with a recursive definition of the sets $\emptyset_{(\alphalpha)}$ for $\alphalpha<\deltaelta^*$.
\
Recall that we also consider $\deltaelta^*$ as a notation in ${\mathcal O}O^H$. This means that for limit $\alphalpha<\deltaelta^*$ we uniformly obtain a computable and increasing sequence $\seq{\alphalpha_s}_{s<\omegaega}$ which is cofinal in $\alphalpha$. We may assume that for all $s$, $\alphalpha_s$ is odd.
For a successor $\alphalpha<\deltaelta^*$, we let $\alphalpha_s = \alphalpha-1$ for all $s$.
\betaegin{lemma}\langlebel{lem_definition_of_H_sets}
There is a computable function $f$ such that for all $\alphalpha\in \deltaelta^*\setminus \{0,1\}$ and all $m\in {\mathbb N}$,
$m\in \emptyset_{(\alphalpha)}$ if and only if for some $s<\omegaega$, $f(\alphalpha, m, s) \notin \emptyset_{(\alphalpha_s)}.$
Furthermore, we may assume that if $f(\alphalpha,m,s)\notin \emptyset_{(\alphalpha_s)}$, then for all $t\gammae s$, $f(\alphalpha,m,t)\notin \emptyset_{(\alphalpha_t)}$; and that $f(\alphalpha,m,s)\gammae s$.
\end{lemma}
Hence, either $m\notin \emptyset_{(\alphalpha)}$, and for all $s$, $f(\alphalpha,m,s)\in \emptyset_{(\alphalpha_s)}$; or $m\in \emptyset_{(\alphalpha)}$, and for all but finitely many $s$, $f(\alphalpha,m,s)\notin \emptyset_{(\alphalpha_s)}$.
\betaegin{proof}
Let $\alphalpha\gammae 2$. Since $\emptyset_{(\alphalpha)}$ is $\mathcal Sigma^0_\alphalpha$, we can find, effectively in $\alphalpha$ and $m$, a sequence $\seq{e(t),\betaeta(t)}$ (where $\betaeta(t)<\alphalpha$) such that $m\in \emptyset_{(\alphalpha)}$ if and only if there is some $t<\omegaega$ such that $m\notin W_{e(t),\betaeta(t)}$. For all $t$ we can effectively find some $s=s(\alphalpha,m,t)$ such that $\alphalpha_s\gammae \betaeta(t')$ for all $t'\le t$; we may take $s(\alphalpha,m,\cdot)$ to be increasing. Since $\emptyset_{(\alphalpha_s)}$ is $\mathcal Sigma^0_{\alphalpha_s}$-complete, this means that we can find a function $f$ such that $f(\alphalpha,m,s)\in \emptyset_{(\alphalpha_s)}$ if and only if $m\in W_{e(t'),\betaeta(t')}$ for all $t'\le t$, where $t$ is largest with $s(\alphalpha,m,t)\le s$ (and the condition is taken to be vacuously true if there is no such $t$). We get $f(\alphalpha,m,s)\gammae s$ by using the padding lemma.
\end{proof}
Using the fattening operation we discussed in Subsection \ref{subsec_fat_trees}, we define two operations on countable sequences of trees, mapping (indices of) uniformly computable sequences of trees to (indices of) computable trees. Let $\seq{T_i}_{i<\omegaega}$ be a sequence of trees. We let
\betaegin{itemize}
\item $\supr_i T_i = \fat(S)$, where $S$ is obtained from the disjoint union of the trees $T_i$ by identifying their roots into a single root. We also let
\item $\mini_i T_i = \fat(S)$, where $S$ is the collection of all nonempty strings of the form
\[ \seq{ \seq{x_{0,0}}, \seq{x_{1,0}, x_{1,1}}, \seq{x_{2,0}, x_{2,1}, x_{2,2}}, \deltaots, \seq{x_{k,0}, x_{k,1}, \deltaots, x_{k,k}} },\]
where for each $j\le k$ and $i\le j$, $x_{j,i}\in T_i$, $x_{i,i} = \rooot(T_i)$, and $x_{i,i}<_{T_i} x_{i+1,i} <_{T_i} \deltaots <_{T_i} x_{k,i}$. The root of $S$ is $\seq{\seq{\rooot(T_0)}}$.
\end{itemize}
\betaegin{lemma}\langlebel{lem_supr_and_mini}
Suppose that $\seq{T_i}\in H$ and that every $T_i$ is well-founded in $H$. Then
\betaegin{enumerate}
\item $\rk(\supr_i T_i) = \sup_i \rk(T_i)$; and
\item $\rk(\mini_i T_i) = \min_i (\rk(T_i)+i)$.
\end{enumerate}
\end{lemma}
\betaegin{proof}
Work in $H$.
If $\supr_i T_i = \fat(S)$ with $S$ defined as above, then for all $i$ and $x\in T_i\setminus \{\rooot(T_i)\}$, $\rk_S(x) = \rk_{T_i}(x)$. So
\[ \rk(S) = \sup_{i<\omegaega} \sup_{x\in T_i\setminus \{\rooot(T_i)\}} (\rk_{T_i}(x)+1) = \sup_{i<\omegaega} \rk_{T_i}(\rooot(T_i)) = \sup_i \rk(T_i) .\]
Let $\mini_i T_i = \fat(S)$ with $S$ defined as above. Suppose that $\betaar \s = \seq{\s_0,\s_1,\deltaots, \s_k}\in S$. Let $\s_k = \seq{x_0,x_1,\deltaots, x_k}$; so $x_k = \rooot(T_k)$. Then $\rk_S(\betaar \s)\le \rk(T_k)$. To see this, consider the (non-injective) homomorphism $f$ from $S_{\betaar \s}$ to $T_k$, mapping $\seq{\s_0,\s_1,\deltaots, \s_{m-1},\seq{y_0,y_1,\deltaots, y_m}}$ to $y_k$. We see by induction on $\rk(\betaar \tau)$ that for all $\betaar \tau\in S_{\betaar \s}$, $\rk_S(\betaar \tau) \le \rk_{T_k}(f(\betaar \tau))$. So for all $\betaar \s\in S$ of height $k$, we have $\rk_S(\betaar \s)\le \rk(T_k)$. If $S$ contains elements of height $k$, this shows that $\rk(S) \le \rk(T_k)+k$. Otherwise, $\rk(S) \le k$, so certainly $\rk(S)\le \rk(T_k)+k$.
Now let $k<\omegaega$ be such that for all $i<\omegaega$, $\rk(T_k)+k\le \rk(T_i)+i$. We show that $T_k$ is embeddable into $S_{\betaar \s}$ for some $\betaar \s\in S$ of height $k$ in $S$, whence we get that $\rk(S)\gammae \rk(T_k) + k$; together with the previous paragraph, this gives $\rk(S) = \min_i (\rk(T_i)+i)$.
For $i<k$, since $\rk(T_i)\gammae \rk(T_k)+ (k-i)$, we can find a sequence $\rooot(T_i) = x_{i,i}<_{T_i} x_{i+1,i} <_{T_i} \deltaots <_{T_i} x_{k,i}$ such that $\rk_{T_i}(x_{k,i}) \gammae \rk(T_k)$. We let
\[ \betaar \s = \seq{ \seq{x_{0,0}}, \seq{x_{1,0}, x_{1,1}}, \seq{x_{2,0}, x_{2,1}, x_{2,2}}, \deltaots, \seq{x_{k,0}, x_{k,1}, \deltaots, x_{k,k-1}, \rooot(T_k)} },\]
and set $f(\rooot(T_k)) = \betaar \s$. By induction on the height of $y\in T_k$, we define $f(y)$ so that $f\colon T_k\to S_{\betaar \s}$ is an embedding. Let $x\in T_k$ have height $m$ in $T_k$. Suppose that $f(x)$ has been defined, and that $f(x) = \seq{\tau_0,\tau_1,\deltaots, \tau_{k+m}}$ with $\tau_{k+m} = \seq{x_0,x_1,\deltaots, x_{k+m}}$ such that $x_{i}\in T_i$, $\rk_{T_i}(x_i) \gammae \rk_{T_k}(x)$, and $x_k = x$. Let $y$ be a child of $x$ on $T_k$. For all $i\le k+m$, we can find some $y_i >_{T_i} x_i$ such that $\rk_{T_i}(y_i) \gammae \rk_{T_k}(y)$; we also choose $y_k=y$. For $i= k+m+1$, since $\rk(T_i) + (i-k)\gammae \rk(T_k)$, we have $\rk(T_k)\le \rk(T_i) + (m+1)$. Since the height of $y$ in $T_k$ is $m+1$, $\rk_{T_k}(y) + (m+1) \le \rk(T_k)$. Putting these together, we get $\rk_{T_k}(y)\le \rk(T_i)$, so $\rk_{T_i}(\rooot(T_i))\gammae \rk_{T_k}(y)$; we choose $y_{i} = \rooot(T_i)$. We then let $f(y) = \seq{\tau_0,\tau_1,\deltaots, \tau_{k+m}, \seq{y_0,y_1,\deltaots, y_{k+m+1}}}$.
\end{proof}
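To illustrate the two formulas, suppose that $\rk(T_0) = 5$, while for every $i\gammae 1$ the tree $T_i$ consists of a root alone. Then $\rk(\supr_i T_i) = 5$, while $\rk(\mini_i T_i) = \min(5+0,\, 0+1,\, 0+2,\deltaots) = 1$: in the tree $S$ underlying $\mini_i T_i$, the column corresponding to $T_1$ cannot move above $\rooot(T_1)$, so $S$ has no elements of height greater than $1$, while any child of $\rooot(T_0)$ does provide elements of height $1$.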
\
We define computable trees $T^\betaeta_\gammaamma(m)$ for all nonzero $\gammaamma<\deltaelta^*$, all $m\in {\mathbb N}$, and all $\betaeta < \deltaelta^*$.
An index for $T^\betaeta_\gammaamma(m)$ is obtained effectively from $\betaeta,\gammaamma$ and $m$. These trees are defined by working in $H$, performing effective transfinite recursion on $\gammaamma<\deltaelta^*$. There are three cases:
\betaegin{itemize}
\item $\gammaamma=1$: if $m\notin \emptyset'$, then we let $T^\betaeta_\gammaamma(m)$ consist of a root only. If at stage $s$ we see that $m\in \emptyset'$, then we let $T^\betaeta_\gammaamma(m)\cong T_\betaeta$, using elements with G\"odel numbers greater than $s$.
\item $\gammaamma>1$ is even: we let $T^\betaeta_\gammaamma(m) = \mini_{s\gammae m} T^\betaeta_{\gammaamma_s} (f(\gammaamma,m,s))$.
\item $\gammaamma>1$ is odd: we let $T^\betaeta_\gammaamma(m) = \supr_{s<\omegaega}T^{\betaeta}_{\gammaamma_s}(f(\gammaamma,m,s))$. Of course in this case, $\gammaamma_s = \gammaamma-1$.
\end{itemize}
Of course, here $f$ is the function guaranteed by Lemma \ref{lem_definition_of_H_sets}.
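To see how this recursion behaves at the bottom level, consider $\gammaamma = 2$ (so $\gammaamma_s = 1$ for all $s$) and $\betaeta\gammae \omegaega$. If $m\notin \emptyset_{(2)}$, then every $f(2,m,s)$ lies in $\emptyset'$, so every tree $T^\betaeta_1(f(2,m,s))$ is isomorphic to $T_\betaeta$, and by Lemma \ref{lem_supr_and_mini} (re-indexing from $m$), $\rk(T^\betaeta_2(m)) = \min_{s\gammae m}(\betaeta+(s-m)) = \betaeta$. If $m\in \emptyset_{(2)}$, then for some $t$ we have $f(2,m,s)\notin \emptyset'$ exactly when $s\gammae t$; the trees indexed by such $s$ are single roots, and so $\rk(T^\betaeta_2(m)) = \max\{t,m\}-m < \omegaega$. This is the case $\gammaamma=2$, $\deltaelta=1$ of the next proposition.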
Note that for all $\betaeta,\gammaamma$ and $m$, $T^\betaeta_\gammaamma(m)$ is fat. We calculate ranks.
\betaegin{proposition}\langlebel{prop_calculate_ranks}
Let $\gammaamma\in \deltaelta^*\setminus\{0\}$, and let $m\in {\mathbb N}$.
\betaegin{enumerate}
\item If $\gammaamma=2\deltaelta$ is even, then for all $\betaeta\gammae \omegaega\deltaelta$,
\betaegin{enumerate}
\item if $m\notin \emptyset_{(\gammaamma)}$, then $\rk(T^\betaeta_\gammaamma(m)) = \betaeta$;
\item if $m\in \emptyset_{(\gammaamma)}$, then $\rk(T^\betaeta_\gammaamma(m)) < \omegaega\deltaelta$.
\end{enumerate}
\item If $\gammaamma=2\deltaelta+1$ is odd, then for all $\betaeta\gammae \omegaega\deltaelta$,
\betaegin{enumerate}
\item if $m\notin \emptyset_{(\gammaamma)}$, then $\rk(T^\betaeta_\gammaamma(m)) = \omegaega\deltaelta$;
\item if $m\in \emptyset_{(\gammaamma)}$, then $\rk(T^\betaeta_\gammaamma(m)) = \betaeta$.
\end{enumerate}
\end{enumerate}
\end{proposition}
\betaegin{proof}
Working in $H$, we verify the proposition by induction on $\gammaamma$.
For $\gammaamma=1$, we have $\deltaelta=0$. If $m\notin \emptyset_{(1)} = \emptyset'$, then $T^\betaeta_\gammaamma(m)$ has only a root, and its rank is $0= \omegaega\deltaelta$. If $m\in \emptyset'$ then $T^\betaeta_\gammaamma(m)\cong T_\betaeta$, whose rank is $\betaeta$.
Let $\gammaamma = 2\deltaelta$ be even. For $s<\omegaega$, let $S_s = T^\betaeta_{\gammaamma_s}(f(\gammaamma,m,s))$; so $T^\betaeta_\gammaamma(m) = \mini_{s\gammae m} S_s$, whence $\rk(T^\betaeta_\gammaamma(m)) = \min_{s<\omegaega} (\rk(S_s)+(s-m))$. For each $s$, $\gammaamma_s$ is odd: if $\gammaamma$ is a limit, then we required that $\gammaamma_s$ be odd; and if $\gammaamma$ is a successor, then $\gammaamma_s = \gammaamma-1$ is odd. Let $\deltaelta_s = \integer{\gammaamma_s/2}$, so $\gammaamma_s = 2\deltaelta_s +1$. For all $s$, $\deltaelta_s<\deltaelta$ because $\gammaamma_s < \gammaamma$. Hence $\betaeta \gammae \omegaega\deltaelta_s$. So by induction, $\rk(S_s) = \omegaega\deltaelta_s$ if $f(\gammaamma,m,s)\notin \emptyset_{(\gammaamma_s)}$, and $\rk(S_s) = \betaeta$ if $f(\gammaamma,m,s)\in\emptyset_{(\gammaamma_s)}$.
If $m\notin \emptyset_{(\gammaamma)}$, then for all $s$, $f(\gammaamma,m,s)\in \emptyset_{(\gammaamma_s)}$, so for all $s$, $\rk(S_s) = \betaeta$. Then $\rk(T^\betaeta_\gammaamma(m)) = \min_{s\gammae m} (\betaeta + (s-m)) = \betaeta$. If $m\in \emptyset_{(\gammaamma)}$, then there is some $t<\omegaega$ such that for all $s<t$, $f(\gammaamma,m,s)\in \emptyset_{(\gammaamma_s)}$, and for all $s\gammae t$, $f(\gammaamma,m,s)\notin \emptyset_{(\gammaamma_s)}$. Hence, for all $s<t$, $\rk(S_s) = \betaeta$, and for all $s\gammae t$, $\rk(S_s) = \omegaega\deltaelta_s$. Since $\deltaelta_s<\deltaelta$, we have $\omegaega\deltaelta_s+s < \omegaega\deltaelta \le \betaeta$. It follows that $\rk(T^\betaeta_\gammaamma(m)) = \omegaega\deltaelta_{\max\{t,m\}} + \max\{t,m\} - m < \omegaega\deltaelta$.
Before we check the odd case, we note that $\rk(T^\betaeta_\gammaamma(m))\gammae \omegaega\deltaelta_m$; this, because $\seq{\deltaelta_s}$ is non-decreasing.
Now let $\gammaamma = 2\deltaelta+1$ be odd, so $\gammaamma-1 = 2\deltaelta$, and for all $s$, $\gammaamma_s = \gammaamma-1$. Again let $S_s = T^\betaeta_{\gammaamma_s}(f(\gammaamma,m,s))$. In this case $T^\betaeta_\gammaamma(m) = \supr_{s<\omegaega} S_s$, and so $\rk(T^\betaeta_\gammaamma(m)) = \sup_{s<\omegaega}\rk(S_s)$. Noting that $\gammaamma-1$ is even, induction shows that if $f(\gammaamma,m,s)\notin \emptyset_{(\gammaamma-1)}$, then $\rk(S_s) =\betaeta$, and if $f(\gammaamma,m,s)\in \emptyset_{(\gammaamma-1)}$, then $\rk(S_s) < \omegaega\deltaelta$.
If $m\in \emptyset_{(\gammaamma)}$, then for all but finitely many $s<\omegaega$ we have $f(\gammaamma,m,s)\notin \emptyset_{(\gammaamma-1)}$; so for all but finitely many $s$, $\rk(S_s) = \betaeta$; for other $s$, we have $\rk(S_s) < \omegaega\deltaelta \le \betaeta$. In this case, $\rk(T^\betaeta_\gammaamma(m)) = \sup_s \rk(S_s) = \betaeta$.
If $m\notin \emptyset_{(\gammaamma)}$, then for all $s$, $f(\gammaamma,m,s)\in \emptyset_{(\gammaamma-1)}$, so for all $s$, $\rk(S_s)< \omegaega\deltaelta$, so $\rk(T^\betaeta_\gammaamma(m)) \le \omegaega\deltaelta$. However, we checked that $\rk(S_s) \gammae \omegaega\deltaelta_{f(\gammaamma,m,s)}$, where $\deltaelta_t = \integer{(\gammaamma-1)_t /2}$. As $f(\gammaamma,m,s)\gammae s$, $\rk(S_s)\gammae \omegaega\deltaelta_s$. Since $\sup_s \deltaelta_s = \deltaelta$, we have $\rk(T^\betaeta_\gammaamma(m))\gammae \omegaega\deltaelta$.
\end{proof}
\
We can now finally show the $\Delta^0_{2\alphalpha+1}$-hardness of the isomorphism problem for the pair $(\mathcal SSS_{\alphalpha,0},\mathcal SSS_{\alphalpha,1})$.
\betaegin{lemma}\langlebel{lem_hardness_ce}
Let $\alphalpha\in \deltaelta^*\setminus\{0\}$. If $A\le_\Tur \emptyset_{(2\alphalpha)}$, then there is a uniformly computable sequence of structures $\seq{\NN_{n}}_{n<\omegaega}$ such that for all $n$,
\betaegin{itemize}
\item if $n\in A$, then $\NN_n\cong \mathcal SSS_{\alphalpha,1}$; and
\item if $n\notin A$, then $\NN_n\cong \mathcal SSS_{\alphalpha,0}$.
\end{itemize}
\end{lemma}
\betaegin{proof}
Let $g\colon \omegaega\to \omegaega$ be a many-one reduction of $A$ to $\emptyset_{(2\alphalpha+1)}$, and $h$ be a many-one reduction of $\omegaega\setminus A$ to $\emptyset_{(2\alphalpha+1)}$. We then let
\[ \NN_n = \left(T^{\omegaega\alphalpha+1}_{2\alphalpha+1}(g(n)) , T^{\omegaega\alphalpha+1}_{2\alphalpha+1}(h(n)) \right) . \]
By Proposition \ref{prop_calculate_ranks} (applied with $\gammaamma = 2\alphalpha+1$, $\deltaelta = \alphalpha$ and $\betaeta = \omegaega\alphalpha+1$), if $n\in A$ then the two components have ranks $\omegaega\alphalpha+1$ and $\omegaega\alphalpha$ respectively, while if $n\notin A$ the ranks are exchanged; since all the trees involved are fat, Proposition \ref{prop_fat_trees_universal} shows that $\NN_n\cong \mathcal SSS_{\alphalpha,1}$ in the first case and $\NN_n\cong \mathcal SSS_{\alphalpha,0}$ in the second. \qedhere
\end{proof}
In fact, this hardness is uniform, given $\alphalpha$ and the many-one reductions of $A$ and its complement to $\emptyset_{(2\alphalpha+1)}$. Moreover, it can be relativised.
\betaegin{proposition}\langlebel{prop_jump_inversion_part_three}
For any total Turing functional $\Phi$ there is a Turing functional $\Psi$ such that for any set $X$ and any nonzero $\alphalpha<\deltaelta^*$, for all $n$,
\betaegin{itemize}
\item if $\Phi(X,\emptyset_{(2\alphalpha)},n) = 1$, then $\Psi(X,\alphalpha,n)\cong \mathcal SSS_{\alphalpha,1}$; and
\item if $\Phi(X,\emptyset_{(2\alphalpha)},n) = 0$, then $\Psi(X,\alphalpha,n)\cong \mathcal SSS_{\alphalpha,0}$.
\end{itemize}
\end{proposition}
\betaegin{proof}
We may assume that for all $X$, $Y$ and $n$, $\Phi(X,Y,n)\in \{0,1\}$. The compactness of $2^\omegaega$ shows that we can regard $\Phi$ as a \emph{truth-table} functional (this is Nerode's theorem \cite{Nerode:57}). There is a computable sequence of pairs of (canonical indices for) clopen subsets $\CC_n$ and $\DD_n$ of $2^\omegaega$, such that for all $X$ and $Y$, $\Phi(X,Y,n)=1$ if and only if $X\in \CC_n$ and $Y\in \DD_n$; otherwise $\Phi(X,Y,n)=0$.
The set of $n<\omegaega$ such that $\emptyset_{(2\alphalpha)}\in \DD_n$ is computable in $\emptyset_{(2\alphalpha)}$, uniformly in~$\alphalpha$. By the uniform version of Lemma \ref{lem_hardness_ce}, there is a uniformly computable array $\seq{\NN_{\alphalpha,n}}_{\alphalpha<\deltaelta^*,n<\omegaega}$ of structures such that for all nonzero $\alphalpha<\deltaelta^*$ and all $n<\omegaega$,
\betaegin{itemize}
\item if $\emptyset_{(2\alphalpha)}\in \DD_n$, then $\NN_{\alphalpha,n}\cong \mathcal SSS_{\alphalpha,1}$; and
\item if $\emptyset_{(2\alphalpha)}\notin \DD_n$, then $\NN_{\alphalpha,n}\cong \mathcal SSS_{\alphalpha,0}$.
\end{itemize}
The functional $\Psi$ is now defined as follows: given $X$, $\alphalpha>0$ and $n$, if $X\notin \CC_n$, then we output $\mathcal SSS_{\alphalpha,0}$; if $X\in \CC_n$, then we output $\NN_{\alphalpha,n}$.
\end{proof}
\subsubsection{The proof of Theorem \ref{thm_jump_inversion}}
Let $\alphalpha<\deltaelta^*$, and let $G$ be a graph. If $\alphalpha=0$, then we let $G^{-0}=G$. If $\alphalpha>0$, then we let $G^{-\alphalpha}$ be the structure obtained from $G$ by replacing every edge by a copy of $\mathcal SSS_{\alphalpha,1}$, and every non-edge by a copy of $\mathcal SSS_{\alphalpha,0}$. As we mentioned above, a similar construction in \cite{GHKMMS} uses pairs built from linear orderings of the form $\Int^\alphalpha$ instead of $\mathcal SSS_{\alphalpha,0}$ and $\mathcal SSS_{\alphalpha,1}$.
Formally, a unary predicate $V$ defines in $G^{-\alphalpha}$ the set of vertices of $G$. A ternary predicate $D$ defines a partition of $G^{-\alphalpha}\setminus V$, the classes of which are indexed by pairs of elements of $V$. The language ${\mathcal L}L$ also contains the language of the structures $\mathcal SSS_{\alphalpha,i}$; for $a,b\in V$, $D(a,b,-)$ is the domain of a copy of either $\mathcal SSS_{\alphalpha,0}$ or $\mathcal SSS_{\alphalpha,1}$, depending on whether or not $(a,b)$ is an edge of $G$.
To prove part (1) of Theorem \ref{thm_jump_inversion}, we need to show, for all nonzero $\alphalpha<\omegack$, that if a set $X$ computes a copy of $G^{-\alphalpha}$, then $X_{(2\alphalpha)}$ computes a copy of $G$. The isomorphic copies of $G^{-\alphalpha}$ are the structures $H^{-\alphalpha}$ for $H\cong G$; so it suffices to show that if a set $X$ computes $G^{-\alphalpha}$, then $X_{(2\alphalpha)}$ computes $G$.
Say $X$ computes $G^{-\alphalpha}$. Then $X$ computes the set of vertices $V$ of $G$. To recover the edges of $G$, for $a,b\in V$, $X_{(2\alphalpha)}$ examines the structure $M_{a,b}$ in $G^{-\alphalpha}$ whose domain is $D(a,b,-)$; an $X$-computable index for $M_{a,b}$ is effectively obtained. Using the functional given by Proposition \ref{prop_jump_inversion_part_one}, $X_{(2\alphalpha)}$ can tell if $M_{a,b}$ is isomorphic to $\mathcal SSS_{\alphalpha,0}$ or $\mathcal SSS_{\alphalpha,1}$, and so decide if $(a,b)$ is an edge of $G$ or not.
We note that this process can be reversed; say $X_{(2\alphalpha)}$ computes $G$. Then using a relativisation of Lemma \ref{lem_hardness_ce} to $X$, which is possible for standard $\alphalpha<\omegack$, we see that we can indeed $X$-computably replace the edges of $G$ by the correct copies of either $\mathcal SSS_{\alphalpha,0}$ or $\mathcal SSS_{\alphalpha,1}$. We noted, though, that this direction is not needed for the proof of Theorem \ref{main_theorem}.
Part (2) of Theorem \ref{thm_jump_inversion} follows from Proposition \ref{prop_jump_inversion_part_two}. For any countable graphs $G$ and $H$, for any nonstandard $\alphalpha,\betaeta \in \deltaelta^*\setminus \omegack$, we have $G^{-\alphalpha} \cong H^{-\betaeta}$; this is because $\mathcal SSS_{\alphalpha,0}\cong \mathcal SSS_{\alphalpha,1}\cong \mathcal SSS_{\betaeta,0}\cong\mathcal SSS_{\betaeta,1}$. Note that this structure, $G^{-\infty}$, has a computable copy.
We turn to the last part of the theorem. Given a total Turing functional $\Phi$, we need to construct a Turing functional $\Psi$ such that for all $X\in 2^\omegaega$, $\alphalpha<\deltaelta^*$, and all graphs $G$, if $\Phi(X,\emptyset_{(2\alphalpha)})\cong G$ then $\Psi(X,\alphalpha)\cong G^{-\alphalpha}$. We use Proposition~\ref{prop_jump_inversion_part_three}. Given $X$ and $\alphalpha$, let $\seq{v_n}_{n<\omegaega}$ be an $X\oplus \emptyset_{(2\alphalpha)}$-computable enumeration of the vertices of $\Phi(X,\emptyset_{(2\alphalpha)})$. For any $n,m<\omegaega$ we can ask $X\oplus \emptyset_{(2\alphalpha)}$ whether the edge $(v_n,v_m)$ belongs to the graph $\Phi(X,\emptyset_{(2\alphalpha)})$; this is uniform in $X$, $\alphalpha$, $n$ and $m$, and by assumption we always get an answer. Proposition \ref{prop_jump_inversion_part_three} gives us a functional which, for all $n,m<\omegaega$, $X\in 2^\omegaega$ and nonzero $\alphalpha< \deltaelta^*$, outputs a copy of $\mathcal SSS_{\alphalpha,1}$ if the edge $(v_n,v_m)$ is in the graph $G=\Phi(X,\emptyset_{(2\alphalpha)})$, and otherwise outputs a copy of $\mathcal SSS_{\alphalpha,0}$. From this functional we can easily build a copy of $G^{-\alphalpha}$.
\section{A linear ordering}
We prove the following extension of Theorem \ref{main_theorem}:
\betaegin{theorem}\langlebel{thm_LO}
There is a countable linear ordering ${\mathcal L}L$ whose degree spectrum consists of the non-hyperarithmetic degrees.
\end{theorem}
Using the notation of the previous section, we make use of the following relativisation of a uniform version of a Theorem of Ash's (this is Theorem 18.15 of \cite{AK00}). Below, for $X\in 2^\omegaega$ and an $X$-computable ordinal $\deltaelta$, we again identify $\deltaelta$ with some $X$-notation for $\deltaelta$, from which we also derive an $X$-computable well-ordering of $\omegaega$ of order-type $\deltaelta$.
\betaegin{theorem} \langlebel{thm_Ash_LO}
Let $X\in 2^\omegaega$. For any linear ordering ${\mathcal L}L$ and any $\alphalpha<\omegaega_1^X$, ${\mathcal L}L$ has an $X_{(2\alphalpha)}$-computable copy if and only if $\omegaega^\alphalpha\cdot {\mathcal L}L$ has an $X$-computable copy.
This is uniform in $\alphalpha$: let $\deltaelta<\omegaega_1^X$ be an $X$-computable ordinal. Suppose that $\Phi$ is a Turing functional such that for all $\alphalpha<\deltaelta$, ${\mathcal L}L_\alphalpha = \Phi(X_{(2\alphalpha)},\alphalpha)$ is a linear ordering. Then there is a Turing functional $\Psi$ such that for all $\alphalpha<\deltaelta$, $\Psi(X,\alphalpha)$ is a linear ordering isomorphic to $\omegaega^{\alphalpha}\cdot {\mathcal L}L_\alphalpha$.
\end{theorem}
We also make use of a result of Frolov, Harizanov, Kalimullin, Kudinov and Miller from \cite{FHKKM}. They combined coding families of sets in a shuffle sum of linear orderings with a result of Wehner's (similar to the one we used above) to show that there is a linear ordering whose degree spectrum is the collection of nonlow$_2$ degrees.
\betaegin{theorem}\langlebel{thm_FHKKM}
For every set $Y$ there is a linear ordering ${\mathcal L}L_Y$ such that for all $X\gammae_\Tur Y$, $X$ computes a copy of ${\mathcal L}L_Y$ if and only if $X''>_\Tur Y''$.
This is uniform in $Y$: there is a Turing functional $\Theta$ such that for all $Y$ and all $X$ such that $X''>_\Tur Y''$, $\Theta(X,Y)\cong {\mathcal L}L_Y$.
\end{theorem}
Because adding an extremal point does not change the degree spectrum of a linear ordering, we may assume that every linear ordering ${\mathcal L}L_Y$ has a least element.
\
The ``ill-founded limit'' of the linear orderings $\omegaega^\alphalpha\cdot {\mathcal L}L$ is $\omegack\cdot \Rat$, the linear ordering which contains densely many copies of $\omegack$. To see this, we need to use a generalisation of ordinal exponentiation $\omegaega^\alphalpha$ to ill-founded linear orders. For any linear order ${\mathcal L}L$, the linear ordering $\omegaega^{{\mathcal L}L}$ is the collection of all functions $f\colon {\mathcal L}L\to \omegaega$ which take the value 0 on all but finitely many inputs, ordered as follows: $f<_{\omegaega^{{\mathcal L}L}}g$ if for the ${\mathcal L}L$-greatest $x$ such that $f(x)\ne g(x)$ we have $f(x)<g(x)$ (in $\omegaega$, of course). That this is indeed a generalisation of taking ordinal powers of $\omegaega$ can be seen by considering an inductive definition of a linear ordering (of order-type) $\omegaega^\betaeta$ for ordinals $\betaeta$, using a directed system of linear orderings. At the successor step we let $\omegaega^{\betaeta+1}= \omegaega^\betaeta\cdot \omegaega$, with the embedding taking $\omegaega^\betaeta$ to the leftmost copy $\omegaega^\betaeta\times \{0\}$ of $\omegaega^\betaeta$ in $\omegaega^{\betaeta}\cdot \omegaega$; at limit stages we take direct limits. The direct construction of the linear ordering $\omegaega^{{\mathcal L}L}$ shows that a power rule holds: for linear orderings ${\mathcal L}L$ and $\mathcal KK$, $\omegaega^{{\mathcal L}L+\mathcal KK}\cong \omegaega^{\mathcal L}L\cdot \omegaega^{\mathcal KK}$.
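For example, for ${\mathcal L}L = \omegaega$, the map sending $f$ to the ordinal $\omegaega^{x_1}\cdot f(x_1) + \cdots + \omegaega^{x_k}\cdot f(x_k)$, where $x_1 > \cdots > x_k$ lists the support of $f$, is an isomorphism between $\omegaega^{{\mathcal L}L}$ and the ordinal $\omegaega^{\omegaega}$; this is just the existence and uniqueness of Cantor normal forms.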
\betaegin{proposition}\langlebel{prop_ill_founded_LO}
Let $\deltaelta$ be an ill-founded linear ordering whose maximal well-founded initial segment has order-type $\omegack$. Let ${\mathcal L}L$ be a countable linear ordering. Then $\omegaega^\deltaelta\cdot {\mathcal L}L$ is isomorphic to $\omegack\cdot \Rat$.
\end{proposition}
\betaegin{proof}
Let $C$ be the maximal well-founded initial segment of $\deltaelta$; so $\deltaelta-C$ has no least element. This means that $\omegaega^{\deltaelta-C}$ is dense (with no endpoints). This, in turn, means that $\omegaega^{\deltaelta-C}\cdot {\mathcal L}L$ is also dense, and so is isomorphic to the rationals. Since $\omegack = \omegaega^{\omegack}$,
\[ \omegaega^\deltaelta\cdot {\mathcal L}L \,\cong\, \omegaega^{C}\cdot \omegaega^{\deltaelta-C}\cdot {\mathcal L}L \,\cong\, \omegaega^{\omegack}\cdot \Rat \,\cong\, \omegack\cdot\Rat. \qedhere\]
\end{proof}
Armed with Theorems \ref{thm_Ash_LO} and \ref{thm_FHKKM}, we give a proof of Theorem \ref{thm_LO}.
The linear ordering in question is the sum
\[ \mathcal KK = \sum_{\alphalpha<\omegack} \omegaega^\alphalpha\cdot {\mathcal L}L_{\emptyset_{(2\alphalpha)}} \,\, + \,\, \omegack\cdot\Rat.\]
Because ${\mathcal L}L_{\emptyset_{(2\alphalpha)}}$ has a least element, so does $\omegaega^\alphalpha\cdot {\mathcal L}L_{\emptyset_{(2\alphalpha)}}$. This means that if a set $X$ computes a copy of $\mathcal KK$, then it computes a copy of $\omegaega^\alphalpha\cdot {\mathcal L}L_{\emptyset_{(2\alphalpha)}}$ for all $\alphalpha<\omegack$. In turn, this means that for all $\alphalpha<\omegack$, $X_{(2\alphalpha)}$ computes a copy of ${\mathcal L}L_{\emptyset_{(2\alphalpha)}}$, which means that $X_{(2\alphalpha+2)}>_\Tur \emptyset_{(2\alphalpha+2)}$. As we have seen in the previous section, it follows that $X$ cannot be hyperarithmetic: if $X\le_\Tur \emptyset_{(\betaeta)}$, then for all $\alphalpha>\betaeta\cdot \omegaega$ we get $X_{(\alphalpha)} \equiv_\Tur \emptyset_{(\alphalpha)}$.
\
It remains to show, then, that any nonhyperarithmetic set can compute a copy of $\mathcal KK$. This is done slightly differently to the way we argued in the previous section. Here we will see that it is important that the sum $\mathcal KK$ is unlabelled. In the previous section it was not important that for nonstandard $\alphalpha$, the graph $G^{-\alphalpha}$ did not depend on $\alphalpha$; it was just important that it did not depend on $G$. Here it is important that for nonstandard $\alphalpha$, the linear ordering $\omegaega^{\alphalpha}\cdot {\mathcal L}L$ depends on neither ${\mathcal L}L$ nor $\alphalpha$.
The reason for this is that we cannot argue, as we did in the previous section, for all nonhyperarithmetic sets at once. We do not produce a single functional which outputs a copy of $\mathcal KK$ given any nonhyperarithmetic set. This is because we use a stronger form of overspill to stretch Ash's theorem beyond $\omegack$. We cannot obtain it for all sets $X$ at once, as that is a $\Pi^1_1$ statement. We stretch Ash's theorem for each $X$ separately, and this means that we have to treat two distinct cases: $\omegaega_1^X = \omegack$ and $\omegaega_1^X>\omegack$.
Let $X$ be a nonhyperarithmetic set. If $\omegaega_1^X = \omegack$, then an application of the overspill principle (equivalently, working in an ill-founded $\omegaega$-model of set theory $H$ which contains $X$ and omits $\omegack$) yields a nonstandard $X$-computable ordinal $\deltaelta^*$ whose maximal well-founded initial segment has order-type $\omegack$, which supports a jump hierarchy and for which Theorem \ref{thm_Ash_LO} holds:
\begin{itemize}
\item For all $Y\le_\Tur X$ there are sets $\seq{Y_{(\alpha)}}$ for $\alpha<\delta^*$ which obey the recursive definition of a transfinite iteration of the Turing jump; and
\item If $\Phi$ is a Turing functional such that for all $\alpha<\delta^*$, $\Phi(X_{(2\alpha)}, \alpha)$ is a linear ordering $\mathcal{L}_\alpha$, then there is a Turing functional $\Psi$ such that for all $\alpha<\delta^*$, $\Psi(X,\alpha)$ is isomorphic to $\omega^{\alpha}\cdot \mathcal{L}_\alpha$.
\end{itemize}
In our application, we use $\mathcal{L}_\alpha = \Theta(X_{(2\alpha)}, \emptyset_{(2\alpha)})$, where $\Theta$ is given by Theorem \ref{thm_FHKKM}. For standard $\alpha<\omega_1^{CK}$, because $X$ is not hyperarithmetic, $X_{(2\alpha+2)}>_\Tur \emptyset_{(2\alpha+2)}$, and so for standard $\alpha$ we get $\mathcal{L}_\alpha\cong \mathcal{L}_{\emptyset_{(2\alpha)}}$. Because $\emptyset_{(2\alpha)}$ is computable from $X_{(2\alpha)}$ (even for nonstandard $\alpha$), uniformly in $\alpha$, we see that there is indeed a Turing functional $\Phi$ such that for all $\alpha<\delta^*$, $\Phi(X_{(2\alpha)},\alpha) = \mathcal{L}_\alpha$. Using the functional $\Psi$ obtained from $\Phi$, we get, uniformly in $\alpha<\delta^*$ with oracle $X$, a copy $\Psi(X,\alpha)$ of $\omega^\alpha\cdot \mathcal{L}_\alpha$. In this way,
\[ \mathcal{K}(X) = \sum_{\alpha<\delta^*} \Psi(X,\alpha) \]
is computable from $X$. It is easy to show that $\mathcal{K}(X)$ is isomorphic to $\mathcal{K}$: for standard $\alpha<\omega_1^{CK}$ we have $\Psi(X,\alpha)\cong \omega^\alpha\cdot \mathcal{L}_{\emptyset_{(2\alpha)}}$, for nonstandard $\alpha \in \delta^*\setminus \omega_1^{CK}$ we have $\Psi(X,\alpha)\cong \omega_1^{CK}\cdot \Rat$ (Proposition \ref{prop_ill_founded_LO}), and any sum of copies of $\omega_1^{CK}\cdot\Rat$ is again isomorphic to $\omega_1^{CK}\cdot\Rat$.
Now suppose that $\omega_1^X>\omega_1^{CK}$. We can then fix an $X$-computable copy of $\omega_1^{CK}$, which we naturally also call $\omega_1^{CK}$. The argument is now simpler. Applying Theorem \ref{thm_Ash_LO} to $\omega_1^{CK}$, the argument above, using the fact that $X$ is not hyperarithmetic, gives us, uniformly in $\alpha<\omega_1^{CK}$, an $X$-computable copy of $\omega^\alpha\cdot \mathcal{L}_{\emptyset_{(2\alpha)}}$, and so $X$ computes a copy of $\sum_{\alpha<\omega_1^{CK}}\omega^\alpha\cdot\mathcal{L}_{\emptyset_{(2\alpha)}}$. Because $X$ computes a copy of $\omega_1^{CK}$ and of course computes $\Rat$, it also computes a copy of $\omega_1^{CK}\cdot\Rat$. Putting these together, we see that $X$ computes a copy of $\mathcal{K}$. This completes the proof of Theorem \ref{thm_LO}.
\
\
\section{Distinguishing category and measure} \label{sec_separating}
\begin{theorem}\label{thm_null_co_meagre}
There is a structure whose degree spectrum is null and co-meagre.
\end{theorem}
\begin{proof}
In \cite{ShinodaSlaman}, Shinoda and Slaman show that if $\mathcal C$ is a $\Pi^0_2(\mathbf{0}')$ co-meagre class defined as the intersection of uniformly $\Sigma^0_2$ dense open classes, then there is a co-meagre class $\mathcal D\subseteq \mathcal C$ such that the upward closure of $\mathcal D$ in the Turing degrees is a degree spectrum. In \cite{Downey_Jockusch_Stob:_array_recursive_2}, Downey, Jockusch and Stob show that there is a $\Pi^0_2(\mathbf{0}')$ co-meagre class as above: the collection of \emph{pb-generic} sets, whose upward closure in the Turing degrees is the collection of array noncomputable degrees. Hence there is a co-meagre degree spectrum contained in the array noncomputable degrees. The theorem follows from the fact that the collection of array noncomputable degrees is null.
\end{proof}
\
In the rest of this section, we prove the following:
\begin{theorem}\label{thm_meagre_co_null}
There is a structure whose degree spectrum is meagre and co-null.
\end{theorem}
Theorem \ref{thm_meagre_co_null} follows from applying Lemma \ref{lem_flower_grpahs_and_enumerations} to the family $\mathcal{S}$ given by the following theorem:
\begin{theorem}\label{thm_meagre_co_null_enumeration}
There is a countable family $\mathcal{S}$ of subsets of ${\mathbb N}$ such that every 1-random set can enumerate $\mathcal{S}$, but no 2-generic set can enumerate $\mathcal{S}$.
\end{theorem}
To prove Theorem \ref{thm_meagre_co_null_enumeration}, let $\seq{\epsilon_{e,\s}}_{e<\omega,\s\in 2^{<\omega}}$ be a computable array of positive rational numbers whose sum $\sum_{e,\s} \epsilon_{e,\s}$ is smaller than 1. For every $e<\omega$ and $\s\in 2^{<\omega}$ we define a countable family $\mathcal{S}_{e,\s}$ and, uniformly, a c.e.\ operator $\Lambda_{e,\s}$ and a $\Pi^0_1$ class $\PP_{e,\s}$ whose measure is at least $1-\epsilon_{e,\s}$ such that for all $X\in \PP_{e,\s}$, $\Lambda_{e,\s}(X)$ is an enumeration of $\mathcal{S}_{e,\s}$. We then let
\[ \mathcal{S} = \bigoplus_{e,\s} \mathcal{S}_{e,\s} = \big\{ \{(e,\s)\}\oplus A \,:\, A\in \mathcal{S}_{e,\s} \big\}.\]
The uniformity of the array $\Lambda_{e,\s}$ shows that every $X$ in the $\Pi^0_1$ class $\PP = \bigcap_{e,\s}\PP_{e,\s}$ can enumerate $\mathcal{S}$. The measure of $\PP$ is at least $1-\sum_{e,\s}\epsilon_{e,\s}$, and so $\PP$ is non-null. By a theorem of Ku\v{c}era \cite{Kucera:85}, every 1-random set is Turing equivalent to some element of~$\PP$, and so every 1-random set can enumerate $\mathcal{S}$.
Fix some $e$. We use the families $\mathcal{S}_{e,\s}$ to ensure that if $G$ is 2-generic, then $\Phi_e(G)$ is not an enumeration of $\mathcal{S}$; here $\Phi_e$ is the $e\tth$ c.e.\ operator. From $\Phi_e$ and $\s$ we can find a c.e.\ operator $\Psi_{e,\s}$ such that for any $X$, if $\Phi_e(X)$ is an enumeration of $\mathcal{S}$, then $\Psi_{e,\s}(X)$ is an element of $\mathcal{S}_{e,\s}$. We would like it to be the case that if $G$ is a generic set which extends $\s$, then $\Psi_{e,\s}(G)$ is not an element of $\mathcal{S}_{e,\s}$. We will not always be able to achieve this aim. We will be able to ensure that there is some $\tau$ extending $\s$ such that for a sufficiently generic set $G$ (1-generic will be enough), if $G$ extends $\tau$ then $\Psi_{e,\s}(G)\notin \mathcal{S}_{e,\s}$. As we do this for each $\s$, the collection of $\tau$ which force that $\Psi_{e,\s}(G)\notin \mathcal{S}_{e,\s}$ for \emph{some} $\s$ is dense, and so if $G$ is sufficiently generic (2-generic will suffice), $\Phi_e(G)$ cannot be an enumeration of $\mathcal{S}$. Hence the collection of oracles $X$ that can enumerate $\mathcal{S}$ is meagre.
Fixing $e$ and $\s$, let $\seq{\tau_k}_{k<\omega}$ be an enumeration of all extensions of $\s$. Let $\delta_k = \delta_k(e,\s)$ be positive binary rational numbers such that $\sum_{k<\omega} \delta_k < \epsilon_{e,\s}$; let $n_k = 1/\delta_k$, so $n_k$ is a power of 2. Partition $\omega$ computably into sets $I_k$ such that $|I_k| = {n_k}$.
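One concrete choice, recorded here only for definiteness (any choice with the stated properties will do), is $\delta_k = 2^{-(k+c)}$, where $c$ is least such that $2^{-c+1}<\epsilon_{e,\s}$; then $\sum_{k<\omega}\delta_k = 2^{-c+1}<\epsilon_{e,\s}$, and $n_k = 2^{k+c}$ is a power of 2, as required.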
At step $k$, working on $\tau_k$, we first tempt a generic set $G$ extending $\tau_k$ to enumerate an element of $I_k$ into $\Psi_{e,\s}(G)$. We will need to ensure that if $G$ does not oblige, that is, if we find no extension $\rho$ of $\tau_k$ for which there is some $x_k\in I_k\cap \Psi_{e,\s}(\rho)$, then we will have already ensured that $\Psi_{e,\s}(G)\notin \mathcal{S}_{e,\s}$. We do this by enumerating elements of $I_k$ into each set in $\mathcal{S}_{e,\s}$. If we can find such an extension for every $\tau_k$, then for a 1-generic set $G$ extending $\s$, the set $\Psi_{e,\s}(G)$ will contain $x_k$ for infinitely many $k$. We then ensure that no set in $\mathcal{S}_{e,\s}$ contains infinitely many of the numbers $x_k$. To ensure this, we will need to ``throw away'' sets $X$ which enumerate too many $x_k$'s into sets in $\Lambda_{e,\s}(X)$; we need to make sure that we do not get rid of too many (in the sense of measure) such sets $X$.
Note that while we are waiting for $x_k$ to be defined (which may not happen), that is, while we are waiting for an extension $\rho$ of $\tau_k$ which enumerates an element of $I_k$ into $\Psi_{e,\s}(\rho)$, we need to enumerate various elements of $I_k$, among them $x_k$, into sets in $\Lambda_{e,\s}(X)$. In advance, we cannot ensure that any particular element of $I_k$ is enumerated by only a few oracles, because if $x_k$ is never defined, we still need to ensure that $\Lambda_{e,\s}(X)$ enumerates the same family $\mathcal{S}_{e,\s}$ for most oracles $X$. Thus $x_k$ will be an element of some sets in $\mathcal{S}_{e,\s}$. But by passing to new sets (in a sense using some kind of priority between the ``old'' and the ``new'' parts of $\mathcal{S}_{e,\s}$) we can ensure that each element of $\mathcal{S}_{e,\s}$ contains only finitely many $x_k$.
So for $k<\omega$, let $x_k$ be the first number discovered such that there is some $\rho\supseteq \tau_k$ with $x_k\in I_k\cap \Psi_{e,\s}(\rho)$. If there is some $k$ for which $x_k$ is not defined (for all $\rho\supseteq \tau_k$, $I_k\cap \Psi_{e,\s}(\rho)$ is empty), let $k^* = k^*(e,\s)$ be the least such $k$. Otherwise, let $k^*=\omega$. We can now define $\mathcal{S}_{e,\s}$:
\begin{itemize}
\item If $k^* = \omega$, then $A\in \mathcal{S}_{e,\s}$ if for some $k<\omega$, $I_k\cap A$ is a singleton, and for all $j> k$, $A\cap I_j = I_j\setminus \{x_j\}$.
\item If $k^*< \omega$, then $A\in \mathcal{S}_{e,\s}$ if $A\cap I_{k^*}$ is a singleton, and for all $j>k^*$, $A\cap I_j$ is empty.
\end{itemize}
The family $\mathcal{S}_{e,\s}$ is indeed countable.
\begin{claim} \label{clm_kstar}\
\begin{enumerate}
\item If $k^*=\omega$ and $G\supset \s$ is 1-generic, then $\Psi_{e,\s}(G)\notin \mathcal{S}_{e,\s}$.
\item If $k^*<\omega$ and $G\supset \tau_{k^*}$, then $\Psi_{e,\s}(G)\notin \mathcal{S}_{e,\s}$.
\end{enumerate}
\end{claim}
\begin{proof}
If $k^*<\omega$, then for all $G\supset \tau_{k^*}$, $I_{k^*}\cap \Psi_{e,\s}(G)$ is empty. On the other hand, for all $A\in \mathcal{S}_{e,\s}$, $I_{k^*}\cap A$ is nonempty.
Suppose that $k^*=\omega$, and that $G\supset \s$ is 1-generic. Then for infinitely many $k$, $x_k\in \Psi_{e,\s}(G)$, whereas for all $A\in \mathcal{S}_{e,\s}$, $x_k\in A$ for only finitely many $k$.
\end{proof}
\begin{claim}
If $G$ is 2-generic, then $G$ cannot enumerate $\mathcal{S}$.
\end{claim}
\begin{proof}
Let $G$ be 2-generic, and let $e<\omega$. We show that there is some $\s$ such that $\Psi_{e,\s}(G)\notin \mathcal{S}_{e,\s}$, which shows that $\Phi_e(G)$ is not an enumeration of $\mathcal{S}$.
If there is some $\s\subset G$ such that $k^*(e,\s)=\omega$, then by Claim \ref{clm_kstar}(1), $\Psi_{e,\s}(G)\notin \mathcal{S}_{e,\s}$. Otherwise, consider the set $D$ of strings $\tau_{k^*(e,\s)}$ as $\s$ ranges over the strings such that $k^*(e,\s)<\omega$. Then $D$ is a $\Pi^0_1$ collection of strings which is dense around $G$, and so $G$ has some initial segment in $D$. By Claim \ref{clm_kstar}(2), $\Psi_{e,\s}(G)\notin \mathcal{S}_{e,\s}$ for some initial segment $\s$ of $G$.
\end{proof}
\
We turn to the definition of $\Lambda_{e,\s}$. For any $X\in 2^\omega$, we think of $\Lambda_{e,\s}(X)$ as enumerating sets $A_{k,x,F}(X)$, indexed by $k\le k^*$ (of course we mean $k<\omega$ if $k^* = \omega$), $x\in I_k$ and $F\subseteq \bigcup_{j< k}I_j$. For brevity, let $Q_k = I_k\times \PP(\bigcup_{j<k}I_j)$ be the set of pairs $(x,F)$ with $x\in I_k$ and $F\subseteq \bigcup_{j<k}I_j$. Let $\seq{s_k}_{k<k^*}$ be a computable increasing sequence of stages such that at stage $s_k$ we observe $x_k$; let $s_{-1}=0$. Then in practice, at stage $s_{k-1}$, we associate each triple $(k,x,F)$ for $(x,F)\in Q_k$ with a fresh index $n$, and will let $\Lambda_{e,\s}(X)^{[n]} = A_{k,x,F}(X)$. At stage $s$ which is not equal to $s_k$ for any $k$, if $\Lambda_{e,\s}(X)^{[s]}$ has not yet been associated with any set $A_{k,x,F}(X)$, then we declare that $\Lambda_{e,\s}(X)^{[s]} = \Lambda_{e,\s}(X)^{[s-1]}$. In this way we ensure that $\Lambda_{e,\s}(X)$ is an enumeration of the family $\GG_{e,\s}(X) = \left\{ A_{k,x,F}(X) \,:\, k\le k^* \text{ and } (x,F)\in Q_k \right\}$.
Let $k\ge 0$. At stage $s_{k-1}$ we do the following.
\begin{itemize}
\item For all $X\in 2^\omega$ and all $(x,F)\in Q_{k}$, enumerate $F\cup \{x\}$ into $A_{k,x,F}(X)$.
\item If $k\ge 1$, partition Cantor space $2^\omega$ into $n_{k}$ many clopen sets $\CC_{x,k}$ for $x\in I_{k}$, each of measure $\delta_{k}$. Let $x\in I_{k}$; for all $j\le k$, for all $(y,F)\in Q_j$, and all $X\in \CC_{x,k}$, enumerate $x$ into $A_{j,y,F}(X)$.
\item If $k\ge 2$, for all $j<k-1$ and all $(y,F)\in Q_j$, for all $X\in 2^\omega\setminus \CC_{x_{k-1},k-1}$, enumerate all of $I_{k-1}\setminus\{x_{k-1}\}$ into $A_{j,y,F}(X)$.
\end{itemize}
This defines the family $\GG_{e,\s}(X)$ of sets $A_{k,x,F}(X)$ for all $X$, and so the functional $\Lambda_{e,\s}$. For positive $k<k^*$, let $\PP_k = \PP_k(e,\s) = 2^\omega\setminus \CC_{x_k,k}$; let $\PP_{e,\s} = \bigcap_{k<k^*}\PP_k$. The class $\PP_{e,\s}$ is a $\Pi^0_1$ class (uniformly in $e$ and $\s$), and the measure of $\PP_{e,\s}$ is at least $1-\sum_{k<\omega}\delta_k \ge 1 - \epsilon_{e,\s}$. What remains is to show that for all $X\in \PP_{e,\s}$, $\Lambda_{e,\s}(X)$ is an enumeration of $\mathcal{S}_{e,\s}$, that is, that for each $X\in \PP_{e,\s}$ we have $\mathcal{S}_{e,\s} = \GG_{e,\s}(X)$.
\begin{claim} \label{clm_meagre_conull_main}
Let $X\in \PP_{e,\s}$. Let $k\le k^*$ and $(x,F)\in Q_k$.
\begin{enumerate}
\item If $k^* = \omega$, then $A_{k,x,F}(X) = F \cup \{x\} \cup\bigcup_{j> k} (I_j\setminus \{x_j\})$.
\item If $k^*< \omega$, then $A_{k,x,F}(X) = B\cup \{y\}$ where $(y,B)\in Q_{k^*}$. If $k=k^*$ then $A_{k^*,x,F}(X) = F\cup\{x\}$.
\end{enumerate}
\end{claim}
\begin{proof}
Suppose that $k^*=\omega$. At stage $s_{k-1}$, we enumerate $F\cup\{x\}$ into $A_{k,x,F}(X)$; at later stages we do not enumerate any other element of $\bigcup_{j\le k}I_j$ into $A_{k,x,F}(X)$. Let $j>k$. At stage $s_{j-1}$ we enumerate $y$ into $A_{k,x,F}(X)$, where $X\in \CC_{y,j}$. Since $X\notin \CC_{x_j,j}$, we have $y\ne x_j$. At stage $s_j$ we enumerate the rest of $I_j\setminus \{x_j\}$ into $A_{k,x,F}(X)$; at no later stage is $x_j$ enumerated into $A_{k,x,F}(X)$.
Now suppose that $k^*<\omega$. At stages $s_{j-1}$ for $j<k^*$, only numbers in $\bigcup_{i<k^*}I_i$ are enumerated into $A_{k,x,F}(X)$. If $k<k^*$, then at stage $s_{k^*-1}$ we enumerate $y$ into $A_{k,x,F}(X)$, where $X\in \CC_{y,k^*}$; no other element of $I_{k^*}$ is enumerated into $A_{k,x,F}(X)$. If $k=k^*$ then at stage $s_{k^*-1}$ we enumerate $F\cup \{x\}$ into $A_{k,x,F}(X)$. In either case, as $s_{k^*-1}$ is the last stage at which we enumerate any numbers into $A_{k,x,F}(X)$, we see that $I_{k^*}\cap A_{k,x,F}(X)$ is a singleton; and that for all $j>k^*$, no element of $I_j$ is ever enumerated into $A_{k,x,F}(X)$.
\end{proof}
A short examination of the definition of $\GG_{e,\s}$ now shows that for all $X\in \PP_{e,\s}$, $\GG_{e,\s}(X) = \mathcal{S}_{e,\s}$. This completes the proof of Theorem \ref{thm_meagre_co_null_enumeration}.
\bibliography{bftypes}
\bibliographystyle{alpha}
\end{document}
|
\begin{document}
\title{Separability Criteria for Arbitrary Quantum Systems}
\author{Chang-shui Yu\thanks{
[email protected]}, He-shan Song}
\affiliation{Department of Physics, Dalian University of Technology,\\
Dalian 116024, China}
\date{\today }
\begin{abstract}
The purpose of this paper is to obtain a sufficient and necessary condition
as a criterion to test whether an arbitrary multipartite state is entangled
or not. Based on the tensor expression of a multipartite pure state, the
paper shows that a state is separable iff $\left\vert \mathbf{C}(\rho
)\right\vert =0$ for pure states and iff $C(\rho )$ vanishes for mixed
states.
\end{abstract}
\pacs{03.67.-a ,03.65.-Ta}
\maketitle
\section{Introduction}
Entanglement is an essential ingredient in quantum information and the
central feature of quantum mechanics which distinguishes a quantum system from
its classical counterpart. As an important physical resource, it is also
widely applied to a lot of quantum information processing (QIP) tasks: quantum
computation [1], quantum cryptography [2], quantum teleportation [3],
quantum dense coding [4] and so on.
Entanglement arises only if there have been interactions between the
subsystems of a multipartite system, from the physical point of view, or only if the quantum
state is nonseparable (nonfactorizable), from the mathematical point of view. Even though a lot
of effort has been made on how to tell whether a given quantum state is
entangled (separable) or not, only bipartite entanglement measures
[6,7,8,9], used as separability criteria, have been for the most part well
understood. The separability criteria for pure states [10,11,12]
are versatile and complex, but there does not exist a unified one; a general
formulation for multipartite mixed states is relatively lacking and remains
an open problem.
Recently, Reference [5] has presented a sufficient and necessary condition
for separability of tripartite qubit systems by arranging $a_{ijk}$s of a
pure state $\left\vert \psi \right\rangle _{ABC}=\sum a_{ijk}\left\vert
i\right\rangle _{A}\left\vert j\right\rangle _{B}\left\vert k\right\rangle
_{C}$ as a three-order tensor in $2\times 2\times 2$ dimension. The
introduction of this new technique has provided an effective way to generalize
the criteria to tripartite states in arbitrary dimension and to multipartite
quantum systems.
In this Letter, we build on Ref.[5] to give a general formulation of a
separability criterion for arbitrary quantum systems. The paper is organized
as follows: Firstly, we present a sufficient and necessary condition of
separability for tripartite pure states. Secondly, we generalize the
condition to the case of mixed states. Lastly, we generalize our result to
multipartite quantum systems.
\section{Separability criterion for tripartite pure states}
We begin with the definition of our tensors. Unlike the previous definition
of tensors, for convenience, all the quantities with indices, such as $
T_{ij\cdot \cdot \cdot k}$ and so on, are called tensors here. The number of
the indices is called the order of the tensor. Therefore, the set of all
one-order tensors is the set of vectors, and the set of all two-order tensors
is the one of matrices. Three-order tensors $T_{ijk}$ are matrices (vectors)
if any one (two) of their three indices is (are) fixed.
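For instance (an added illustration), in the simplest case $n_{1}=n_{2}=n_{3}=2$, fixing the third index of $T_{ijk}$ yields the two matrix slices
\begin{equation*}
\left(
\begin{array}{cc}
T_{000} & T_{010} \\
T_{100} & T_{110}
\end{array}
\right) \text{ \ and \ }\left(
\begin{array}{cc}
T_{001} & T_{011} \\
T_{101} & T_{111}
\end{array}
\right) ,
\end{equation*}
while fixing two indices, say $j=k=0$, yields the vector $(T_{000},T_{100})$.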
The elements of a three-order tensor can be arranged at the nodes of a grid
in three-dimensional space, such as the tensor $T_{ijk}$ with $
i,j,k=0,1,2$ shown in figure 1. From geometry, every fixed index corresponds
to a group of parallel planes which are perpendicular to the vector which
the fixed index corresponds to. The planes corresponding to different fixed
indices are mutually perpendicular.
\begin{figure}
\caption{Three-order tensor of the coefficients of a tripartite pure state.}
\label{1}
\end{figure}
\ \textbf{Definition.-}Let $T_{ijk}$ be a three-order tensor in $
n_{1}\times n_{2}\times n_{3}$ dimension with $i=0,1,\cdot \cdot \cdot
,n_{1}-1$, $j=0,1,\cdot \cdot \cdot ,n_{2}-1$ and $k=0,1,\cdot \cdot \cdot
,n_{3}-1$. $T_{i^{\prime }j^{\prime }k^{\prime }}^{\prime }$ is any
sub-tensor in $m\times m\times m$ dimension obtained by selecting any $m$ planes from
every group of parallel ones corresponding to three different indices from $
T_{ijk}$. Then $M_{\alpha \beta \gamma }$=$f(T_{i^{\prime }j^{\prime
}k^{\prime }}^{\prime })$ is called the $m$th compound tensor denoted by $
C_{m}(T)$ defined in $\left(
\begin{array}{c}
n_{1} \\
m
\end{array}
\right) \times \left(
\begin{array}{c}
n_{2} \\
m
\end{array}
\right) \times \left(
\begin{array}{c}
n_{3} \\
m
\end{array}
\right) $ dimension, where $f(x)$ is a given function of the tensor $x$. For
a two-order tensor $x$, $f(x)=\det x$ is the determinant of the matrix $x.$
For higher-order tensors, what $f(x)$ denotes is still an open problem.
However, for the purposes of this paper, we can give an $f(x)$ for $
C_{2}(T_{ijk})$ which is effective enough to characterize the separability
of a tripartite quantum state. For $C_{2}(T)$, every sub-tensor $
T_{i^{\prime }j^{\prime }k^{\prime }}^{\prime }$ is defined in $2\times
2\times 2$ dimension which corresponds to a tensor cube [5]. So we can
employ
\begin{equation*}
f(T_{i^{\prime }j^{\prime }k^{\prime }}^{\prime })=\left\vert \mathbf{C}
\right\vert =\frac{1}{\sqrt{3}}\sqrt{\sum_{\alpha }(C^{\alpha })^{2}},
\end{equation*}
which is introduced in Ref.[5], where $C^{\alpha }=\left\vert \left(
T_{ijk}\right) ^{\prime }s^{\alpha }T_{ijk}\right\vert $ with $s^{1}=-\sigma
_{y}\otimes \sigma _{y}\otimes I$, $s^{2}=-\sigma _{y}\otimes I\otimes
\sigma _{y}$, $s^{3}=-I\otimes \sigma _{y}\otimes \sigma _{y}$, $
s^{4}=-Iv\otimes \sigma _{y}\otimes \sigma _{y}$, $s^{5}=-\sigma _{y}\otimes
Iv\otimes \sigma _{y}$, $s^{6}=-\sigma _{y}\otimes \sigma _{y}\otimes Iv$,
here $\sigma _{y}=\left(
\begin{array}{cc}
0 & -i \\
i & 0
\end{array}
\right) $, $I=\left(
\begin{array}{cc}
1 & 0 \\
0 & 1
\end{array}
\right) $, and $Iv=\left(
\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}
\right) $.
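As a quick illustrative check (an example added here for clarity rather than taken from Ref.[5]), consider the two extreme $2\times 2\times 2$ cases. For the fully separable state $\left\vert 000\right\rangle $ the coefficient tensor has a single nonzero entry, and since every $s^{\alpha }$ contains at least two factors $\sigma _{y}$, whose diagonal vanishes, all $C^{\alpha }=0$ and hence $\left\vert \mathbf{C}\right\vert =0$. For the GHZ state $(\left\vert 000\right\rangle +\left\vert 111\right\rangle )/\sqrt{2}$ a direct computation gives
\begin{equation*}
C^{1}=C^{2}=C^{3}=0,\text{ \ }C^{4}=C^{5}=C^{6}=1,\text{ \ so \ }\left\vert \mathbf{C}\right\vert =1,
\end{equation*}
so the quantity vanishes for the product state and detects the entanglement of the GHZ state.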
Analogously to Ref.[5], consider an arbitrary tripartite pure state $\left\vert
\psi \right\rangle _{ABC}=\sum a_{ijk}\left\vert i\right\rangle
_{A}\left\vert j\right\rangle _{B}\left\vert k\right\rangle _{C}$ with $
i=0,1,\cdot \cdot \cdot ,n_{1}-1$, $j=0,1,\cdot \cdot \cdot ,n_{2}-1$ and $
k=0,1,\cdot \cdot \cdot ,n_{3}-1$; if we arrange the coefficients of the
state as a tensor denoted by $A_{ijk}$ (every coefficient corresponds to a
node of the grid), every line in the grid corresponds to a vector and every
plane corresponds to a matrix. One can easily find that the tripartite
state is fully separable iff all the parallel vectors are pairwise linearly dependent.
A necessary and sufficient condition for this, which is easily proved, is that
the second compound tensor $C_{2}(A_{ijk})$ must be zero. Namely, every
element of the tensor must be zero. Consider the state $\left\vert \psi
\right\rangle _{ABC}$ written in vector notation $\left\vert \psi
\right\rangle _{ABC}$ $=$($a_{000},a_{001},\cdot \cdot \cdot
,a_{00n_{3}-1},a_{010},\cdot \cdot \cdot ,a_{n_{1}-1n_{2}-1n_{3}-1})^{\prime
}$, one can obtain an equivalent expression of above relation, i.e.
\begin{equation*}
\left\vert \mathbf{C}_{\alpha \beta \gamma }(\psi )\right\vert =\frac{1}{
\sqrt{3}}\sqrt{\sum_{p}(C_{\alpha \beta \gamma }^{p}(\psi ))^{2}}=0
\end{equation*}
holds for any $\alpha $, $\beta $ and $\gamma $, where $C_{\alpha \beta
\gamma }^{p}=\left\vert \left\langle \psi _{ABC}\right\vert s_{\alpha \beta
\gamma }^{p}\left\vert \psi _{ABC}\right\rangle \right\vert $ with $
s_{\alpha \beta \gamma }^{1}=-L_{\alpha }\otimes L_{\beta }\otimes I_{\gamma
}$, $s_{\alpha \beta \gamma }^{2}=-L_{\alpha }\otimes I_{\beta }\otimes
L_{\gamma }$, $s_{\alpha \beta \gamma }^{3}=-I_{\alpha }\otimes L_{\beta
}\otimes L_{\gamma }$, $s_{\alpha \beta \gamma }^{4}=-\left\vert L_{\alpha
}\right\vert \otimes L_{\beta }\otimes L_{\gamma }$, $s_{\alpha \beta \gamma
}^{5}=-L_{\alpha }\otimes \left\vert L_{\beta }\right\vert \otimes L_{\gamma
}$, $s_{\alpha \beta \gamma }^{6}=-L_{\alpha }\otimes L_{\beta }\otimes
\left\vert L_{\gamma }\right\vert $, here $L_{\alpha }$, $L_{\beta }$ and $
L_{\gamma }$ are the generators of $SO(n_{1})$, $SO(n_{2})$ and $SO(n_{3})$,
respectively; $I_{\alpha }$, $I_{\beta }$ and $I_{\gamma }$ are the unit
matrices in $n_{1}$, $n_{2}$ and $n_{3}$ dimension, respectively; with $
\alpha =1,2,\cdot \cdot \cdot ,\frac{n_{1}(n_{1}-1)}{2}$, $\beta =1,2,\cdot
\cdot \cdot ,\frac{n_{2}(n_{2}-1)}{2}$ and $\gamma $ $=1,2,\cdot \cdot \cdot
,\frac{n_{3}(n_{3}-1)}{2}$. $\left\vert M\right\vert $ denotes the modulus
of the elements of the matrix $M$.
Then we can construct a new vector $\mathbf{C}$=$\underset{\alpha \beta
\gamma }{\oplus }\mathbf{C}_{\alpha \beta \gamma }$ and employ the length of
the vector
\begin{equation*}
\left\vert \mathbf{C(}\psi )\right\vert =\sqrt{\underset{\alpha \beta \gamma
}{\sum }\left\vert \mathbf{C}_{\alpha \beta \gamma }\mathbf{(}\psi
)\right\vert ^{2}}=\frac{1}{\sqrt{3}}\sqrt{\sum_{p}\underset{\alpha \beta
\gamma }{\sum }(C_{\alpha \beta \gamma }^{p}\mathbf{(}\psi ))^{2}}
\end{equation*}
as the criterion of separability.
\section{Separability criterion for tripartite mixed states}
The tripartite mixed states $\rho =\sum\limits_{k=1}\omega _{k}\left\vert
\psi ^{k}\right\rangle \left\langle \psi ^{k}\right\vert $ can be written in
matrix notation as $\rho =\Psi W\Psi ^{\dagger }$, where $W$ is a diagonal
matrix with $W_{kk}=\omega _{k}$, and the columns of the matrix $\Psi $
correspond to the vectors $\psi ^{k}$. Consider the eigenvalue
decomposition, $\rho =\Phi M\Phi ^{\dagger }$, where $M$ is a diagonal
matrix whose diagonal elements are the eigenvalues of $\rho $, and $\Phi $
is a unitary matrix whose columns are the eigenvectors of $\rho $. From
Ref.[], one can get $\Psi W^{1/2}=\Phi M^{1/2}T$, where $T$ is a
right-unitary matrix. The tripartite mixed states are fully separable iff
there exists a decomposition such that $\psi ^{k}$ for every $k$ is fully
separable. The entanglement of formation can be defined as the
infimum of the average $\left\vert \boldsymbol{C}(\psi ^{k})\right\vert $.
Namely, $C(\rho )=\inf \sum\limits_{k}\omega _{k}\left\vert \boldsymbol{C}
(\psi ^{k})\right\vert $, if $C(\rho )$ is assigned as the entanglement
measure for tripartite mixed states. Therefore, for any decomposition
\begin{equation*}
\rho =\sum\limits_{k=1}\omega _{k}\left\vert \psi ^{k}\right\rangle
\left\langle \psi ^{k}\right\vert ,
\end{equation*}
one can get
\begin{eqnarray}
C(\rho ) &=&\inf \sum\limits_{k}\omega _{k}\left\vert \boldsymbol{C}(\psi
^{k})\right\vert \notag \\
&=&\inf \sum\limits_{k}\omega _{k}\frac{1}{\sqrt{3}}\sqrt{\sum_{p}\underset{
\alpha \beta \gamma }{\sum }\left\vert \left\langle (\psi ^{k})^{\ast
}\right\vert s_{\alpha \beta \gamma }^{p}\left\vert \psi ^{k}\right\rangle
\right\vert ^{2}}. \notag
\end{eqnarray}
According to the Minkowski inequality
\begin{equation}
\left( \sum\limits_{i=1}\left( \sum\limits_{k}x_{i}^{k}\right) ^{p}\right)
^{1/p}\leq \sum_{k}\left( \sum\limits_{i=1}\left( x_{i}^{k}\right)
^{p}\right) ^{1/p},\text{ }p>1,
\end{equation}
one can easily obtain
\begin{eqnarray}
C(\rho ) &\geq &\inf \frac{1}{\sqrt{3}}\sqrt{\sum_{p}\underset{\alpha \beta
\gamma }{\sum }\left( \sum\limits_{k}\omega _{k}\left\vert \left\langle
(\psi ^{k})^{\ast }\right\vert s_{\alpha \beta \gamma }^{p}\left\vert \psi
^{k}\right\rangle \right\vert \right) ^{2}} \notag \\
&=&\inf \frac{1}{\sqrt{3}}\sqrt{\sum_{p}\underset{\alpha \beta \gamma }{\sum
}\left( \sum\limits_{k}\left\vert \Psi ^{T}W^{1/2}s_{\alpha \beta \gamma
}^{p}W^{1/2}\Psi \right\vert _{kk}\right) ^{2}} \notag \\
&=&\underset{T}{\inf }\frac{1}{\sqrt{3}}\sqrt{\sum_{p}\underset{\alpha \beta
\gamma }{\sum }\left( \sum\limits_{k}\left\vert T^{T}M^{1/2}\Phi
^{T}s_{\alpha \beta \gamma }^{p}\Phi M^{1/2}T\right\vert _{kk}\right) ^{2}}
\notag \\
&\geq &\underset{T}{\inf }\frac{1}{\sqrt{3}}\sum\limits_{k}\left\vert
T^{T}\left( \sum_{p}\underset{\alpha \beta \gamma }{\sum }z_{\alpha \beta
\gamma }^{p}A_{\alpha \beta \gamma }^{p}\right) T\right\vert _{kk},
\end{eqnarray}
where $A_{\alpha \beta \gamma }^{p}=M^{1/2}\Phi ^{T}s_{\alpha \beta \gamma
}^{p}\Phi M^{1/2}$ for any $z_{\alpha \beta \gamma }^{p}=y_{\alpha \beta
\gamma }^{p}e^{i\phi }$ with $y_{\alpha \beta \gamma }^{p}>0$, $
\sum\limits_{\alpha }\left( y_{\alpha \beta \gamma }^{p}\right) ^{2}=1$, and
Cauchy-Schwarz inequality
\begin{equation}
\left( \sum\limits_{i}x_{i}^{2}\right) ^{1/2}\left(
\sum\limits_{i}y_{i}^{2}\right) ^{1/2}\geqslant \sum\limits_{i}x_{i}y_{i},
\end{equation}
are applied at the last step. The infimum of equation (3) is given by $
\underset{z\in \mathbf{C}}{max}\lambda _{1}(z)-\underset{i>1}{\sum }\lambda
_{i}(z)$, where the $\lambda _{i}(z)$ are the singular values, in decreasing
order, of the matrix $\underset{p}{\frac{1}{\sqrt{3}}\sum }\underset{\alpha
\beta \gamma }{\sum }z_{\alpha \beta \gamma }^{p}A_{\alpha \beta \gamma
}^{p} $. Therefore, we can express $C(\rho )$ as
\begin{equation}
C(\rho )=\max \{0,\underset{z\in \mathbf{C}}{max}\lambda _{1}(z)-\underset{
i>1}{\sum }\lambda _{i}(z)\}.
\end{equation}
It is not difficult to find that $C(\rho )=0$ is a sufficient and necessary
condition of separability for mixed states according to the whole procedure
of derivation.
\section{Separability criterion for multipartite systems}
A general $N$-partite pure states
\begin{eqnarray}
\left\vert \psi \right\rangle _{AB\cdot \cdot \cdot N}
&=&\sum\limits_{ij\cdot \cdot \cdot k}a_{ij\cdot \cdot \cdot k}\left\vert
ij\cdot \cdot \cdot k\right\rangle , \notag \\
i &\in &[0,n_{1}-1],j\in \lbrack 0,n_{2}-1],\cdot \cdot \cdot ,k\in \lbrack
0,n_{N}-1],
\end{eqnarray}
is separable iff $\left\vert \psi \right\rangle _{AB\cdot \cdot \cdot
N}=\sum\limits_{ij\cdot \cdot \cdot k}a_{ij\cdot \cdot \cdot k}\left\vert
i\right\rangle _{A}\otimes \left\vert j\right\rangle _{B}\otimes \cdot \cdot
\cdot \otimes \left\vert k\right\rangle _{N}$. Analogously, if the
coefficients $a_{ij\cdot \cdot \cdot k}$s are arranged as an $N$-order
tensor, one can easily find that, $\left\vert \psi \right\rangle _{AB\cdot
\cdot \cdot N}$ is separable iff all the vectors which are mutually parallel
are pairwise linearly dependent. In order to obtain a mathematically rigorous criterion,
we have to redefine matrices $s_{\underset{N}{\underbrace{\alpha \beta \cdot
\cdot \cdot \lambda }}}^{ij}$s as
\begin{eqnarray}
s_{\underset{N}{\underbrace{\alpha \beta \cdot \cdot \cdot \lambda }}}^{ij}
&=&L_{\alpha }\otimes L_{\beta }\otimes \underset{i}{\underbrace{\left\vert
L_{\gamma }\right\vert \otimes \cdot \cdot \cdot \otimes \left\vert
L_{\delta }\right\vert }}\otimes \underset{N-i-2}{\underbrace{I_{\rho
}\otimes ,\cdot \cdot \cdot ,\otimes I_{\lambda }}}, \\
i &=&0,1,\cdot \cdot \cdot ,N-2,\text{ \ }j=1,2,\cdot \cdot \cdot ,\binom{N}{
i}\times \binom{N-i}{2},
\end{eqnarray}
where $L_{x}$ denotes the generators of $SO(n_{p})$, $x=1,2,\cdot \cdot
\cdot ,\frac{n_{p}(n_{p}-1)}{2},$ with $p$ standing for the $p$th subsystem.
The index $i$ in equation (6) above indicates that there are $i$ absolute values of
generators. Note that the order of $L_{\alpha }$, $L_{\beta }$, $\left\vert
L_{\gamma }\right\vert $, $\cdot \cdot \cdot $, $\left\vert L_{\delta
}\right\vert $, $I_{\rho }$, $\cdot \cdot \cdot $, $I_{\lambda }$ in
equation (6) must cover all the permutations, with $j$ as an index labelling the
$j$th permutation.
Hence, the criterion for pure states can be expressed as follows: an $N$
-partite pure state is separable iff
\begin{equation*}
\left\vert \mathbf{C}(\psi )\right\vert =\sqrt{\sum (C_{\alpha \beta \cdot
\cdot \cdot \gamma }^{ij}(\psi ))^{2}}/\sqrt{\binom{N}{2}}=0,
\end{equation*}
where the sum is over all the indices and $1/\sqrt{\binom{N}{2}}$ is a
normalized factor.
According to the same procedure of the derivation to Section II, one can
easily obtain the criterion of separability for an $N$-partite mixed state
by testing whether $C(\rho )$ vanishes with
\begin{equation*}
C(\rho )=\max \{0,\underset{z\in \mathbf{C}}{max}\lambda _{1}(z)-\underset{
i>1}{\sum }\lambda _{i}(z)\},
\end{equation*}
where the $\lambda _{i}(z)$ are the singular values, in decreasing order, of
the matrix $\underset{ij}{\sum }\underset{\alpha \beta \cdot \cdot \cdot
\gamma }{\sum }z_{\alpha \beta \cdot \cdot \cdot \gamma }^{ij}A_{\alpha
\beta \cdot \cdot \cdot \gamma }^{ij}/\sqrt{\binom{N}{2}}$.
Let us finally note that our result reduces to Wootters' concurrence when
$N=2$ in $2\times 2$ dimension, and to the result in Ref.[5], since
$\sigma _{y}$ is the only generator of $SO(2)$. Therefore, the final
result presented in this Letter is a general one suitable for arbitrary
quantum systems.
\section{Conclusion}
As a summary, in this paper, we generalize the result presented in Ref.[5]
to arbitrary quantum systems. The result in this paper provides a sufficient
and necessary condition as a general criterion to test whether a given
multipartite system is separable or not. The criterion for pure states is
convenient and analytic; for mixed states one has to resort to a numerical
optimization, although a simple treatment similar to that of Ref.[5] is often
enough. Fruitful conclusions similar to those of [8] can be expected.
\end{document}
|
\begin{document}
\begin{abstract}
In this paper, I give a method to calculate the HOMFLY polynomials of two bridge knots by using a representation of the braid group $\mathbb{B}_4$ into a group of $3\times 3$ matrices. Also, I will give examples of a 2-bridge knot and a 3-bridge knot that have the same Jones polynomial, but different HOMFLY polynomials.
\end{abstract}
\title{On the HOMFLY Polynomial of $4$-plat presentations of knots}
\section{Introduction}
In 1985, Hoste-Ocneanu-Millett-Freyd-Lickorish-Yetter~\cite{3} discovered the HOMFLY polynomial, a 2-variable oriented link polynomial $P_L(a,m)$ motivated by the Jones polynomial. Also, Przytycki and Traczyk~\cite{7} independently did related work on the HOMFLY polynomial.
The calculation of the HOMFLY polynomial is based on the HOMFLY skein relations as follows.\\
\begin{enumerate}
\item $P(L)$ is an isotopy invariant.
\item $P$(unknot)=1.
\item $a \cdot P(L_+)+a^{-1}\cdot P(L_{-})+m\cdot P(L_0)=0$.
\end{enumerate}
\begin{figure}\label{p1}
\end{figure}
Lickorish and Millett~\cite{6} and Kanenobu and Sumi~\cite{8} gave a formula to calculate the HOMFLY polynomials of 2-bridge knots by using a representation of the continued fraction of a rational knot into a group of $2\times 2$ matrices. (See Proposition 14 of~\cite{6}.) \\
In this paper, I use the HOMFLY bracket polynomial and the plat presentation of knots to calculate the HOMFLY polynomials of rational knots by using a representation of the braid group $\mathbb{B}_4$ into a group of $3\times 3$ matrices. Also, we can extend the method to evaluate the HOMFLY polynomials of $2n$-plat presentations of knots.\\
Now, we define the plat presentation of a knot and a rational tangle.
Let $S^2$ be a sphere smoothly embedded in $S^3$ and let $K$ be a link transverse to $S^2$. The complement in $S^3$ of $S^2$ consists of two open balls, $B_1$ and $B_2$. We assume that $S^2$ is the $xz$-plane $\cup~\{\infty\}$. Now, consider the projection of $K$ onto the $xy$-plane.
Then, the projection onto the $xy$-plane of $S^2$ is the $x$-axis and $B_1$ projects to the upper half plane and $B_2$ projects to the lower half plane.
The projection gives us a {\emph{ link diagram}}, where we make note of over and undercrossings.
The diagram of the link $K$ is called a $\emph{plat on 2n-strings}$, denoted by $p_{2n}(w)$, if it is the union of a $2n$-braid $w$ and $2n$ unlinked and unknotted arcs which connect pairs of consecutive strings of the braid at the top and at the bottom endpoints and $S^2$ meets the top of the $2n$-braid. (See the first and second diagrams of Figure~\ref{p2}.) Any link $K$ in $S^3$ admits a plat presentation, that is $K$ is ambient isotopic to a plat (\cite{2}, Theorem 5.1). The bridge (plat) number $b(K)$ of $K$ is the smallest possible number $n$ such that there exists a plat presentation of $K$ on $2n$ strings.\\
We know that the braid group $\mathbb{B}_{4}$ is generated by $\sigma_1,\sigma_2,\sigma_{3}$, each of which twists two adjacent strings. For example, $w=\sigma_2^{-2}\sigma_1^{2}\sigma_2^{-1}\sigma_3^2\sigma_2^{-1}$ is the word for the
4 braid of the first diagram of Figure~\ref{p2}.\\
\begin{figure}\label{p2}
\end{figure}
Then we say that a plat presentation is $\emph{standard}$ if the $4$-braid $w$ of $p_{4}(w)$ involves only $\sigma_2,\sigma_3$.\\
A $\emph{2-tangle}$ is the disjoint union of $2$ properly embedded arcs in a 3-ball $B^3$.
Then we say that a $2$-$tangle$ $T=(B^3,\alpha_1 \cup\alpha_2)$ is $\emph{rational}$ if there exists a homeomorphism of pairs ${H}: (B^3,\alpha_1 \cup\alpha_2)\longrightarrow
(D^2\times I,\{p_1,p_2\}\times I)$, where $p_i$ are distinct points in $D^2$ and $I=[0,1]$. We have the two $2$-tangles $T_1=(B_1,K\cap B_1)$ and $T_2=(B_2,K\cap B_2)$. We note that $T_1$ and $T_2$ are rational if $K$ has a plat presentation. Let $T_w$ be the rational $2$-tangle in $B_2$ if $K$ has a plat presentation.\\
Now, we define a $\emph{plat presentation}$ for rational $2$-tangles $p_{4}(w)\cap B_2$ (Refer to~\cite{4}.) denoted by $q_{4}(w)$ if it is the union of a $4$-braid $w$ and $2$ unlinked and unknotted arcs which connect pairs of strings of the braid at the bottom endpoints with the same pattern as in a plat presentation for a knot and $\partial B_2$ meets the top of the $4$-braid.\\
We note that $q_{4}(w)$ is a rational $2$-tangle in $B_2$. (See~\cite{5}.)\\
We say that $\overline{q_{4}(w)}$ = $p_{4}(w)$ is the $\emph{plat closure}$ of $q_{4}(w)$.\\
The tangle diagrams with the circles in Figure~\ref{p3} give the diagrams of trivial rational $2$-tangles as in~\cite{1},~\cite{3},~\cite{4},~\cite{7}.
\begin{figure}\label{p3}
\end{figure}
We note that $q_4(w)$ is alternating if and only if $\overline{q_4(w)}$ is alternating.\\
A tangle $T$ is $\emph{reduced}$ alternating if $T$ is alternating and $T$ does not have a self-crossing which can be removed by a Type I Reidemeister move.
\begin{Thm}[\cite{5}]\label{T1}
If $K$ is a 2-bridge knot, then there exists a word $w$ in $\mathbb{B}_4$ so that the plat presentation $p_4(w)$ is reduced alternating, standard and represents a knot isotopic to $K$.
\end{Thm}
By Theorem~\ref{T1}, any 2-bridge knot $K$ can be represented by a word $w$ as a plat which involves only $\sigma_2$ and $\sigma_3^{-1}$ (or $\sigma_2^{-1}$ and $\sigma_3$). i.e., $w=\sigma_2^{\epsilon_1}\sigma_3^{-\epsilon_2}\cdot\cdot\cdot\sigma_2^{\epsilon_{2n-1}}$
for some positive (negative) integers $\epsilon_i$ for $1\leq i\leq 2n-1$. We notice that if $w=\sigma_2^{\epsilon_1}\sigma_3^{-\epsilon_2}\cdot\cdot\cdot\sigma_3^{-\epsilon_{2n}}$ for some positive (negative) integer $\epsilon_{2n}$ then it is not a reduced alternating form. i.e., we can twist the right unlinked and unknotted bottom arc to reduce some crossings to have fewer crossings for $K$.
So, in order to have a reduced alternating form, $w$ needs to start from $\sigma_2^{\pm 1}$ and end at $\sigma_2^{\pm 1}$.\\
In section 2, we introduce the HOMFLY polynomial of rational 2-tangles and give a formula to calculate the HOMFLY polynomial of 4-plat presentations of knots.\\
In section 3, we give a method to find the orientation of each crossing of a knot from a given orientation of the knot.\\
Then, we give some examples of knots for which we calculate the HOMFLY polynomials and especially give examples of a 2-bridge knot and a 3-bridge knot that have the same Jones polynomial, but different HOMFLY polynomials in section 4.
\section{HOMFLY bracket polynomial of rational 2-tangles and the main theorem}
Let $K$ be a 2-bridge knot. By Theorem~\ref{T1} there exists a plat presentation $p_4(w)$ which is reduced alternating, standard and represents a knot isotopic to $K$.\\
For a given orientation of $K$, we will give an induced orientation to the rational $2$-tangle $T_w=K\cap B_2$ such that $T_w$ has the same orientation with the oriented knot $\overrightarrow{K}$ in $B_2$. Let
$\overrightarrow{q_4(w)}$ be the plat presentation of $T_w$ with the induced orientation.\\
Now, we define the HOMFLY polynomial of an oriented plat presentation of a rational $2$-tangle $\overrightarrow{q_4(w)}$ in $B_2$ as $P(\overrightarrow{T_w})=f(a,m)<T_0>+g(a,m)<T_{\infty}>+h(a,m)<T_x>$, where the coefficients $f(a,m),g(a,m)$ and $h(a,m)$ are polynomials in $a,a^{-1}$ and $m$ that are obtained by starting with $\overrightarrow{T_w}$ and using the skein relations repeatedly until only the three tangles $T_0$, $T_{\infty}$ and $T_x$ remain in the expression for $T_w$.\\
\begin{figure}\label{p4}
\end{figure}
Let $n$ be the number of crossings of $w$.\\
We note that if we switch one of the alternative crossings of $w$ $(n>1)$ from positive (negative) to negative (positive) to obtain $K'$, then we have a plat presentation $p_4(w')$ for $K'$ such that $w'$ is reduced alternating and standard, and $w'$ has fewer crossings than $w$. (Refer to~\cite{5}.) So, by the skein relations, we can reduce the number of crossings of $w$. However, if we have an oriented rational $2$-tangle with $n=1$ then we cannot reduce the number of crossings by the skein relations. This is the reason that we need $T_x$.\\
We remark that polynomials $f(a,m),g(a,m)$ and $h(a,m)$ are invariant under isotopy of $\overrightarrow{q_4(w)}$.\\
Also, we note that even if we apply the skein relation to one of the crossings of ${q_4(w)}$, the orientations of the rest of the crossings will be preserved. That is, all the remaining crossings keep their directions for the given orientation while we are applying the skein relation to one of the crossings of $\overrightarrow{q_4(w)}$ to calculate the HOMFLY polynomial of ${q_4(w)}$.\\
Let $\mathcal{A}=<T_0>$, $\mathcal{B}=<T_{\infty}>$ and $\mathcal{C}=<T_x>$.\\
Recall that the plat presentation $p_4(w)$ of $K$ is reduced alternating and standard.\\
Since $p_4(w)$ is standard, we consider $\mathbb{B}_3$ instead of $\mathbb{B}_4$. Then let $\sigma_1$ and $\sigma_2$ be the two generators of $\mathbb{B}_3$. I want to emphasize here that we are changing from $\sigma_2$ and $\sigma_3$ to $\sigma_1$ and $\sigma_2$.\\
Suppose that $w=\sigma_1^{\alpha_1}\sigma_2^{-\alpha_2}\cdot\cdot\cdot\sigma_1^{\alpha_{2n-1}}$ for positive integers $\alpha_i$ ($1\leq i\leq 2n-1$).
Then, we will give an orientation to $w$ which is induced by $\overrightarrow{K}$. So, $\sigma_1$ and $\sigma_2^{-1}$ have four different cases $\sigma_{1j}$ and $\sigma^{-1}_{2j}$ for $j=1,2$ as in Figure~\ref{p5}.
\begin{figure}\label{p5}
\end{figure}
Actually, there are two possible directions for the orientation of $\overrightarrow{K}$. However, the skein relation does not depend on the choice of the direction.
Now, we consider the sub-directions which are induced by the orientation of $\overrightarrow{K}$ as in Figure~\ref{p5}. We note that there are eight other corresponding cases which are obtained from the given cases by taking the opposite arrows. However, we will not distinguish the corresponding cases since they play the same role as the given cases when we construct $3\times 3$ matrices for the calculation of the HOMFLY polynomial. \\
So, by considering the orientation, we can describe $w$ as $\sigma_{1~k_{1}}^{\alpha_1}\sigma_{2~k_{2}}^{-\alpha_2}\cdot\cdot\cdot \sigma_{1~k_{2n-1}}^{\alpha_{2n-1}}$ instead of $w=\sigma_1^{\alpha_1}\sigma_2^{-\alpha_2}\cdot\cdot\cdot\sigma_1^{\alpha_{2n-1}}$, where $k_{i}\in\{1,2\}$.\\
Let $A^1_1= \left[ \begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & -a^{-2} \\
0 & 1& -a^{-1}m\\
\end{array} \right]$, \hskip 50pt $A^1_2=\left[ \begin{array}{ccc}
1 & 0 & -am \\
0 & 0 & -a^2 \\
0 & 1 & 0\\
\end{array} \right],\\$
\vskip 20pt
and, $B^{-1}_1= \left[ \begin{array}{ccc}
0 & 0 & -a^2 \\
0 & 1 & 0 \\
1 & 0 & -am\\
\end{array} \right]$,\hskip 50pt $B^{-1}_2= \left[ \begin{array}{ccc}
0 & 0 & -a^{-2} \\
0 & 1 & -a^{-1}m \\
1 & 0 & 0\\
\end{array} \right].\\$
\vskip 10pt
Let $M=(A^1_{k_1})^{\alpha_1}(B^{-1}_{k_2})^{\alpha_2}\cdot\cdot\cdot(A^1_{k_{2n-1}})^{\alpha_{2n-1}}.$\\
\vskip 20pt
Then we have the main theorem to calculate the HOMFLY polynomial of $K$ as follows.\\
\begin{Thm}\label{T2}
Suppose that $q_4(w)$ is a plat presentation of a rational 2-tangle $T_w$ which is alternating and standard so that
$w=\sigma_1^{\epsilon_1}\sigma_2^{-\epsilon_2}\cdot\cdot\cdot\sigma_1^{\epsilon_{2n-1}}$ for some positive integers $\epsilon_i$ ($1\leq i\leq 2n-1$). Then
$P(T_w)=f(a,m)\mathcal{A}+g(a,m)\mathcal{B}+h(a,m)\mathcal{C}$, where $f(a,m), g(a,m)$ and $h(a,m)$ are obtained from
$[f(a,m)~g(a,m)~h(a,m)]=[0~1~0]M^t$. i.e., the second column of $M$.
Moreover, $P(K)=f(a,m)-g(a,m){(a+a^{-1})\over m}+h(a,m)$.
\end{Thm}
\begin{proof}
By Theorem~\ref{T1}, for a two bridge knot $K$, there exists a word $w$ in $\mathbb{B}_4$ so that the plat presentation $p_4(w)$ is alternating, standard and represents a link isotopic to $K$.\\
Then, by the argument above, we have $P(T_w)=f(a,m)\mathcal{A}+g(a,m)\mathcal{B}+h(a,m)\mathcal{C}$ for some 2-variable polynomials $f(a,m), g(a,m)$ and $h(a,m)$.\\
Since $w$ is standard, we consider two generators $\sigma_1$ and $\sigma_2$ for $\mathbb{B}_3$ as mentioned before.\\
Let $T_1'$ and $T_1''$ be the rational two tangles which are obtained from $T_w$ by adding $\sigma_1^{\mp 1}$ or $\sigma_2^{\pm 1}$ respectively to cancel the first $\sigma_i^{\pm 1}$ in $w$. So, we have a new word $v$ of smaller length than $w$ so that $w=\sigma_i^{\pm 1}v$ for the rational 2-tangles $T_1'$ and $T_1''$. Without loss of generality, we will consider the cases that $w=\sigma_1^{-1}v$ or $w=\sigma_2v$.\\
First, we consider the case that $w=\sigma_1^{-1}v$.
Then, we have that $P(T_1')=f'(a,m)\mathcal{A}+g'(a,m)\mathcal{B}+h'(a,m)\mathcal{C}$ for some 2-variable polynomials $f'(a,m),g'(a,m)$ and $h'(a,m)$.\\
Also, by Figure~\ref{p6}, we know that $P(T_1')=f(a,m)\mathcal{A}+g(a,m)<T_{x'}>+h(a,m)\mathcal{B}$, where $T_{x'}$ is the tangle which has a plat presentation $q_4(w)$ with $w=\sigma_1^{-1}$.\\
\begin{figure}\label{p6}
\end{figure}
By Figure~\ref{p6}, we also know that $P(T_{x'})=-a^2\mathcal{C}-am\mathcal{B}$ or $P(T_{x'})=-a^{-2}\mathcal{C}-a^{-1}m\mathcal{A}$.\\
Therefore, $P(T_1')=f(a,m)\mathcal{A}+g(a,m)<T_{x'}> +h(a,m)\mathcal{B}=f(a,m)\mathcal{A}+g(a,m)(-a^2\mathcal{C}-am\mathcal{B} ) +h(a,m)\mathcal{B}=f(a,m)\mathcal{A}+(h(a,m)-amg(a,m))\mathcal{B}-a^2g(a,m)\mathcal{C}$ or,\\
$P(T_1')=f(a,m)\mathcal{A}+g(a,m)<T_{x'}> +h(a,m)\mathcal{B}=f(a,m)\mathcal{A}+g(a,m)(-a^{-2}\mathcal{C}-a^{-1}m\mathcal{A} ) +h(a,m)\mathcal{B}=(f(a,m)-a^{-1}mg(a,m))\mathcal{A}+h(a,m)\mathcal{B}-a^{-2}g(a,m)\mathcal{C}$\\
Therefore, the following gives the operations.
$ \left[ \begin{array}{ccc}
1 & 0 & 0 \\
0 & -am & 1 \\
0 & -a^{2} & 0\\
\end{array} \right]\left[\begin{array}{c}
f(a,m)\\
g(a,m)\\
h(a,m)\\
\end{array}
\right]=\left[\begin{array}{c}
f'(a,m)\\
g'(a,m)\\
h'(a,m)\\
\end{array}
\right],
\left[ \begin{array}{ccc}
1 & -a^{-1}m & 0 \\
0 & 0 & 1 \\
0 & -a^{-2} & 0\\
\end{array} \right]\left[\begin{array}{c}
f(a,m)\\
g(a,m)\\
h(a,m)\\
\end{array}
\right]=\left[\begin{array}{c}
f'(a,m)\\
g'(a,m)\\
h'(a,m)\\
\end{array}
\right].$
\vskip 20pt
Now, we consider the case that $w=\sigma_2v$.\\
Then we have $P(T_1'')=f''(a,m)\mathcal{A}+g''(a,m)\mathcal{B}+h''(a,m)\mathcal{C}$.\\
By Figure~\ref{p7}, we also know that $P(T_{x'})=-a^{-2}\mathcal{C}-a^{-1}m\mathcal{A}$ or $P(T_{x'})=-a^2\mathcal{C}-am\mathcal{B}$.\\
Therefore, $P(T_1'')=f(a,m)<T_{x'}>+g(a,m)\mathcal{B}+h(a,m)\mathcal{A}=f(a,m)(-a^{-2}\mathcal{C}-a^{-1}m\mathcal{A} )+g(a,m)\mathcal{B} +h(a,m)\mathcal{A}=(h(a,m)-a^{-1}mf(a,m))\mathcal{A}+g(a,m)\mathcal{B}-a^{-2}f(a,m)\mathcal{C}$ or,\\
$P(T_1'')=f(a,m)<T_{x'}>+g(a,m)\mathcal{B}+h(a,m)\mathcal{A}=f(a,m)(-a^{2}\mathcal{C}-am\mathcal{B} )+g(a,m)\mathcal{B} +h(a,m)\mathcal{A}=h(a,m)\mathcal{A}+(g(a,m)-amf(a,m))\mathcal{B}-a^2f(a,m)\mathcal{C}$\\
\begin{figure}\label{p7}
\end{figure}
Therefore, the following gives the operations.\\
$ \left[ \begin{array}{ccc}
-a^{-1}m & 0 & 1 \\
0 & 1 & 0 \\
-a^{-2} & 0 & 0\\
\end{array} \right]\left[\begin{array}{c}
f(a,m)\\
g(a,m)\\
h(a,m)\\
\end{array}
\right]=\left[\begin{array}{c}
f''(a,m)\\
g''(a,m)\\
h''(a,m)\\
\end{array}
\right],
\left[ \begin{array}{ccc}
0 & 0 & 1 \\
-am & 1 & 0 \\
-a^{2} & 0 & 0\\
\end{array} \right]\left[\begin{array}{c}
f(a,m)\\
g(a,m)\\
h(a,m)\\
\end{array}
\right]=\left[\begin{array}{c}
f''(a,m)\\
g''(a,m)\\
h''(a,m)\\
\end{array}
\right].$\\
\vskip 15pt
Now, let $A^{-1}_1= \left[ \begin{array}{ccc}
1 & 0 & 0 \\
0 & -am & 1 \\
0 & -a^2 & 0\\
\end{array} \right],$ \hskip 50pt $A^{-1}_2= \left[ \begin{array}{ccc}
1 & -a^{-1}m & 0 \\
0 & 0 & 1 \\
0 & -a^{-2} & 0\\
\end{array} \right],\\$
\vskip 20pt
and, $B^{1}_1= \left[ \begin{array}{ccc}
-a^{-1}m & 0 & 1 \\
0 & 1 & 0 \\
-a^{-2} & 0 & 0\\
\end{array} \right]$,\hskip 50pt $B^{1}_2=\left[ \begin{array}{ccc}
0 & 0 & 1 \\
-am & 1 & 0 \\
-a^{2} & 0 & 0\\
\end{array} \right].\\$
\vskip 20pt
Now, recall that $A^1_1= \left[ \begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & -a^{-2} \\
0 & 1& -a^{-1}m\\
\end{array} \right],$ \hskip 50pt $A^1_2= \left[ \begin{array}{ccc}
1 & 0 & -am \\
0 & 0 & -a^2 \\
0 & 1 & 0\\
\end{array} \right],\\$
\vskip 20pt
and, $B^{-1}_1= \left[ \begin{array}{ccc}
0 & 0 & -a^2 \\
0 & 1 & 0 \\
1 & 0 & -am\\
\end{array} \right]$,\hskip 50pt $B^{-1}_2= \left[ \begin{array}{ccc}
0 & 0 & -a^{-2} \\
0 & 1 & -a^{-1}m \\
1 & 0 & 0\\
\end{array} \right].\\$
\vskip 20pt
We note that each $A^{\pm 1}_i$ is invertible and $A_i^1$ is actually the inverse of $A_i^{-1}$.\\
Also, we note that each $B^{\pm 1}_i$ is invertible and $B_i^1$ is actually the inverse of $B_i^{-1}$.\\
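For instance (a direct verification included here for the reader's convenience), a short computation gives\\
$A_1^{1}A_1^{-1}= \left[ \begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & -a^{-2} \\
0 & 1& -a^{-1}m\\
\end{array} \right]\left[ \begin{array}{ccc}
1 & 0 & 0 \\
0 & -am & 1 \\
0 & -a^{2} & 0\\
\end{array} \right]=\left[ \begin{array}{ccc}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1\\
\end{array} \right],$\\
and the identities $A_2^{1}A_2^{-1}=B_1^{1}B_1^{-1}=B_2^{1}B_2^{-1}=I$ are checked in the same way.\\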
Therefore, $A^{-1}\left[\begin{array}{c}
f(a,m)\\
g(a,m)\\
h(a,m)\\
\end{array}
\right]=\left[\begin{array}{c}
0\\
1\\
0\\
\end{array}
\right]$ since $P(T_{0})=0\cdot\mathcal{A}+1\cdot\mathcal{B}+0\cdot\mathcal{C}.$\\
This implies that $(f(a,m)~g(a,m)~h(a,m))=(0~1~0)A^t$.\\
We remark that $\sigma_1$ and $\sigma_2^{-1}$ correspond to $\{A_1^1, A_2^1\}$ and $\{B_1^{-1},B_2^{-1}\}$ respectively, depending on the given orientation of $w$.\\
Now, by attaching the three unlinked and unknotted arcs in $B_1$, we can calculate $P(K)=f(a,m)\cdot 1+g(a,m)\cdot({-a-a^{-1}\over m})+h(a,m)\cdot 1$ by Figure~\ref{p8}.\\
To see this, we need the fact that the HOMFLY polynomial of a link $L$ that is a split union of two links $L_1$ and $L_2$ is
given by $P(L)={-a-a^{-1}\over m}P(L_1)P(L_2)$.\\
So, we have $P(S^1 \dot{\cup} S^1)=-({a+a^{-1}\over m})$ for the disjoint union of two unknots.\\
Finally, we have $P(K)=f(a,m)-g(a,m)({a+a^{-1}\over m})+h(a,m).$
\\
\begin{figure}\label{p8}
\end{figure}
\end{proof}
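As a quick illustration of Theorem~\ref{T2} (a worked computation added here for the reader's convenience; the same knot is treated as example (a) in Section 4), take the trefoil word $w=\sigma_{12}\sigma_{21}^{-1}\sigma_{12}$, so that $M=A_2^1B_1^{-1}A_2^1$. A direct multiplication gives\\
$M= \left[ \begin{array}{ccc}
-am & a^2m^2-a^2 & a^2m^2 \\
-a^{2} & a^3m & a^3m \\
0 & 0 & -a^{2}\\
\end{array} \right],$\\
and the second column of $M$ gives $[f(a,m)~g(a,m)~h(a,m)]=[-a^2+a^2m^2~a^3m~0]$, in agreement with the value obtained in Section 4.\\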
We note that if we have a word $w$ which involves only $\sigma_1^{-1}$ and $\sigma_2$ then we can get a $3\times 3$ matrix $A$
which is a product of matrices from $\{A_1^{-1},A_2^{-1}\}$ and $\{B_1^1,B_2^1\}$ corresponding to $\sigma_1^{-1}$ and $\sigma_2$, depending on a given orientation of $w$.\\
We have the equality about the mirror image of $K$ as follows.\\
$P_K(a,m)=P_{Mirror~Image(K)}(a^{-1},m)$.
\section{A way to determine the $k_i$ for the orientation of a rational three tangle $T$ which has a knot $\overline{T}$}
First, assume that the projection onto the $xy$-plane of a 2-bridge knot $K$ has a standard plat presentation $p_4(w)$ with $w=\sigma_1^{\epsilon_1}\sigma_2^{-\epsilon_2}\cdot\cdot\cdot\sigma_1^{\epsilon_{2n-1}}$ for some positive (negative) integers $\epsilon_i$ ($1\leq i\leq 2n-1$).\\
Then we have the plat presentation $q_4(w)$ of the tangle $T=K\cap B_2$ so that $\overline{q_4(w)}=p_4(w)$.\\
Let $\mathcal{P}(\sigma_i^{\pm 1})$ be the $3\times 3$ matrix which is obtained by interchanging the $i$ and $i+1$ rows of $I$.\\
Then $\mathcal{P}$ extends to a homomorphism from $\mathbb{B}_3$ to $GL_3(\mathbb{Z})$.\\
For an element $w$ of $\mathbb{B}_3$, let 1,2,3 be the upper endpoints of the three strings for $\mathbb{B}_3$ from the left. Also, let $0$ be the upper endpoint of the leftmost string for $\mathbb{B}_4$. Let $u=[1,2,3]$. Then we assign the same number to the other endpoint of the three strings. Then we say that the new ordered sequence of numbers $w(u)$ is the $\emph{permutation induced by $w$}$.
\begin{Lem}\label{T3}
Suppose that $w$ is an element of $\mathbb{B}_3$ so that $w=\sigma_1^{\epsilon_1}\sigma_2^{-\epsilon_2}\cdot\cdot\cdot\sigma_1^{\epsilon_{2n-1}}$ for some positive (negative) integers $\epsilon_i$ ($1\leq i\leq 2n-1$).\\
Then $[1,2,3]\mathcal{P}(w)$ is the permutation which is induced by $w$.
\end{Lem}
\begin{proof}
This is proven by induction on $m=|\epsilon_1|+|\epsilon_2|+\cdot\cdot\cdot+|\epsilon_{2n-1}|$.
\end{proof}
Now, let $[p_w(1),p_w(2),p_w(3)]=[1,2,3]\mathcal{P}(w)$.\\
Without loss of generality, give the orientation (clockwise) to the trivial arc $\delta_1$ in $B_1$ with $\partial\delta_1=\{0,1\}$ from $1$ to $0$ along $\delta_1$. So, the initial point of $\delta_1$ is 1 and the terminal point of $\delta_1$ is 0 for the given orientation. Then, we can give the orientation to the other trivial arc $\delta_2$ in $B_1$ as follows, where $\partial \delta_2=\{2,3\}$.\\
Now, we consider $p_w^{-1}(3)$. Then we note that $p_w^{-1}(3)\neq 1$. If not, then $K$ is a link, not a knot.
\begin{Lem}\label{T4}
If $p_w^{-1}(3)=3$ then the trivial arc $\delta_2$ has the same direction (clockwise) as $\delta_1$ for the orientation. If $p_w^{-1}(3)=2$ then the trivial arc $\delta_2$ has the opposite direction (counter clockwise) as $\delta_1$ for the orientation.
\end{Lem}
\begin{proof}
If $p_w^{-1}(3)=3$ then $p_w(3)=3$. So, the direction of the orientation at $3$ is upward, and $\delta_2$ has the same direction as $\delta_1$.\\
If $p_w^{-1}(3)=2$ then $p_w(2)=3$. Then the direction of the orientation at $2$ is upward, and $\delta_2$ has the opposite direction to $\delta_1$.
\end{proof}
Recall the ordered sequence of numbers $u=[1,2,3]$. Now, we will define a new sequence of numbers $r=[r(1),r(2),r(3)]$.
For the given orientation, we replace the original number for the initial point of $\delta_2$ by 1 as follows.\\
$r=[r(1),r(2),r(3)]$ so that $r(1)=1$
and for $i>1$,
$r(i)=1$ if $p_w^{-1}(3)=i$ and $r(i)=i$ if $p_w^{-1}(3)\neq i$.\\
For the three strings of the braids $w$, we assign the number $r(k)$ to each string with the upper endpoint $k$ for $1\leq k\leq 3$.\\
Now, let $r_0=r$.\\
Let $r_i=[r_i(1),r_i(2),r_i(3)]=
r\mathcal{P}(\sigma_{1}^{\epsilon_1}\sigma_{2}^{-\epsilon_2}\sigma_1^{\epsilon_3}\cdot\cdot\cdot \sigma_{\delta}^{(-1)^{i-1} \epsilon_{i}})$ for $1\leq i\leq 2n$, where $\delta=1$ if $i$ is odd and $\delta=2$ if $i$ is even.\\
Let $k_i=\left\{\begin{array}{cl}
1 & \mbox{if}\hskip 10pt r_{i-1}(2)=r_{i-1}(3) \\
2 & \mbox{if}\hskip 10pt r_{i-1}(2)\neq r_{i-1}(3) \\
\end{array}\right.$.\\
\begin{Thm}\label{T5}
Suppose that the projection of a knot $K$ onto the $xy$-plane has a plat presentation $p_4(w)$ with $w=\sigma_1^{\epsilon_1}\sigma_2^{-\epsilon_2}\cdot\cdot\cdot\sigma_1^{\epsilon_{2n-1}}$
for some positive (negative) integers $\epsilon_i$ ($1\leq i\leq 2n-1$).\\
Then, for a given orientation of $K$, we have $w=\sigma_{1~k_{1}}^{\alpha_1}\sigma_{2~k_{2}}^{-\alpha_2}\cdot\cdot\cdot \sigma_{1~k_{2n-1}}^{\alpha_{2n-1}}$ with the $k_i$ defined above.
\end{Thm}
\begin{proof}
Without loss of generality, we give the orientation (clockwise) to $\delta_1$ from $1$ to $2$ along $\delta_1$. Then the direction of the orientation at 1 is up and the direction at 2 is down.
Then we know that the orientation at $i$ is up if $r(i)=1$ and is down if $r(i)=2$.\\
Fix a value $i$. \\
Case 1: Suppose that $r_{i-1}(2)=r_{i-1}(3)=1$.\\
Then the two strings for the $(\sum_{j=1}^{i-1}|\epsilon_j|+1)$-th crossing have the same direction of the orientation since $r_{i-1}(2)=r_{i-1}(3)=1$. So, the directions of the orientation are upward.\\
Then $k_i=1$. This is consistent with the $k_i$ that is defined above.\\
Case 2: Suppose that $r_{i-1}(2)\neq r_{i-1}(3)$.\\
Then the two strings for the ($\sum_{j=1}^{i-1}|\epsilon_j|+1)$-th crossing have different directions for the orientation since $r_{i-1}(2)\neq r_{i-1}(3)$.\\
Then $k_i=2$. This is consistent with the $k_i$ that is defined above.\\
\end{proof}
\section{The calculation of some examples}
First, we will calculate the HOMFLY polynomials of $3_1$ (trefoil knot), $5_1$ and $5_2$.\\
\begin{figure}\label{p9}
\end{figure}
\begin{enumerate}
\item[(a)] $3_1$ is represented by $w=\sigma_{12}\sigma_{21}^{-1}\sigma_{12}$. Then we have $A=A_2^1B_1^{-1}A_2^1$.\\
So, $[f(a,m)~g(a,m)~h(a,m)]=[-a^2+a^2m^2~a^3m~0]$.\\
Therefore, $P(3_1)=-a^2+a^2m^2-a^3m{a+a^{-1}\over m}=-a^2+a^2m^2-a^4-a^2=-2a^2+a^2m^2-a^4$.\\
\item[(b)] $5_1$ is represented by $w=\sigma_{12}\sigma_{21}^{-3}\sigma_{12}$. Then we have $A=A_2^1(B_1^{-1})^3A_2^1$.\\
So, $[f(a,m)~g(a,m)~h(a,m)]=[a^4-3a^4m^2+a^4m^4~a^5m(-2+m^2)~0]$.\\
Therefore, $P(5_1)=-a^6m^2+2a^6+a^4m^4-4a^4m^2+3a^4$.\\
\item[(c)] $5_2$ is represented by $w=\sigma_{11}^{-1}\sigma_{22}^{2}\sigma_{11}^{-2}$. Then we have $A=A_1^{-1}(B_2^{1})^2A_1^{-2}$.\\
So, $[f(a,m)~g(a,m)~h(a,m)]=[a^3m-a^3m^3-a^5m+a^5m^3~ a^4-a^4m^2+a^6m^2~0]$.\\
Therefore, $P(5_2)=a^6-a^2+a^2m^2+a^4-a^4m^2$.\\
\end{enumerate}
Now, we give a pair of knots $K_1$ and $K_2$ that have the same Jones polynomial but different HOMFLY polynomials.\\
Consider the two knots $K_1=8_9$ and $K_2=4_1\# 4_1$.\\
First, we can check that the Kauffman (Jones) polynomials of $K_1$ and $K_2$ are the same:\\
$X_{8_9}=X_{4_1\#4_1}=(a^{16}-a^{12}+a^8-a^4+1)^2/a^{16}$.
\begin{figure}\label{p10}
\end{figure}
We note that $8_9$ is represented by $w=(\sigma_{11}^{-1})^3(\sigma_{22})(\sigma_{12}^{-1})(\sigma_{21})^2(\sigma_{12}^{-1})$.\\
So, $[f(a,m)~g(a,m)~h(a,m)]=[0~(-m^7a+4am^5-5am^3+a^{-1}m^5-2a^{-1}m^3+a^{-1}m+a^4a^{-1}m^5-3a^3m^3+2a^3m+am)~(-m^6a^2+3a^2m^4+m^4-3a^2m^2-m^2+a^4m^4-2a^4m^2+a^4)].$\\
Therefore, $P(8_9)=-2a^2m^4+5a^2m^2-4m^4+6m^2-2+a^4m^2-a^4-3a^2+m^6-a^{-2}m^4+2a^{-2}m^2-a^{-2}$.\\
However, $4_1$ is represented by $w=(\sigma_{11}^{-1})(\sigma_{22})(\sigma_{12}^{-1})^2$.\\
So, $[f(a,m)~g(a,m)~h(a,m)]=[0~(-am^3+a^{-1}m+a^3m)~(-a^2m^2+a^4)]$.\\
Therefore, $P(4_1)=m^2-a^2-1-a^{-2}$.\\
This implies that $P(4_1\#4_1)=P(4_1)\cdot P(4_1)=(m^2-a^2-1-a^{-2})^2$.\\
We can check that $P(8_9)\neq P(4_1\#4_1)$.\\
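Indeed, expanding the square gives $P(4_1\#4_1)=m^4-2(a^2+1+a^{-2})m^2+a^4+2a^2+3+2a^{-2}+a^{-4}$, which differs from $P(8_9)$ already in its degree in $m$.\\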
\begin{figure}\label{p11}
\end{figure}
Now, consider the two knots $K_3$ and $K_4=4_1\#8_3$ as in Figure~\ref{p11}.\\
We can check that they have the same Kauffman (Jones) polynomial as follows.\\
$X_{K_3}=X_{K_4}=(a^{16}-a^{12}+a^8-a^4+1)(a^{32}-a^{28}+2a^{24}-3a^{20}+3a^{16}-3a^{12}+2a^8-a^4+1)/a^{24}$.\\
Then $K_3$ is represented by $w=(\sigma_{12}^{-1})^2(\sigma_{22}^1)^4(\sigma_{12}^{-1})^2(\sigma_{22}^1)^3(\sigma_{11}^{-1})$.\\
So, $[f(a,m)~g(a,m)~h(a,m)]=[(a^{10}-2a^8m^2+2a^6m^2+m^4a^6-2a^4m^4+a^2m^4+a^2m^2-m^2)/a^2,-m(a^2-1)(a^6-a^4m^2+a^2m^2-1)/a^3,0]$.\\
Therefore, $P(K_3)=(a^{12}-2a^{10}m^2+a^8m^2+a^8m^4-2m^4a^6+a^4m^4+2a^4m^2-2a^2m^2+a^{10}-a^6+a^6m^2-a^4+1)/a^4$.\\
$8_3$ is represented by $w=(\sigma_{12}^{-1})^4(\sigma_{22}^1)^3(\sigma_{11}^{-1})$.\\
So, $[f(a,m)~g(a,m)~h(a,m)]=[(a^6-a^4m^2+2a^2m^2-m^2)/a^2,m(a^2-1)/a^3,0]$.\\
Therefore, $P(8_3)=(a^8-a^6m^2+2a^4m^2-a^2m^2-a^4+1)/a^4$.\\
Now, we know that $P(4_1\#8_3)=(m^2-a^2-1-a^{-2})(a^8-a^6m^2+2a^4m^2-a^2m^2-a^4+1)/a^4$.\\
This implies that $P(K_3)\neq P(K_4)=P(4_1\#8_3)$.
\end{document}
\begin{document}
\title{The Truth About Torsion In The CM Case, II}
\author{Pete L. Clark}
\author{Paul Pollack}
\newcommand\leg{\genfrac(){.4pt}{}}
\begin{abstract}
Let $T_{\mathrm{CM}}(d)$ be the largest size of the torsion subgroup of an elliptic curve with complex multiplication (CM) defined
over a degree $d$ number field. Work of \cite{Breuer10} and \cite{CP15} showed $\limsup_{d \rightarrow \infty} \frac{T_{\mathrm{CM}}(d)}{d \log \log d} \in (0,\infty)$. Here we show that the above limit supremum is precisely $\frac{e^{\gamma} \pi}{\sqrt{3}}$. We also study -- in part, out of necessity -- the upper order of the size of the torsion subgroup of various restricted classes of CM elliptic curves over number fields.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
\subsection{Asymptotics of torsion subgroups of elliptic curves}
Let $E_{/F}$ be an elliptic curve over a number field. Then the torsion subgroup $E(F)[\operatorname{tors}]$ is finite, and it is a problem of
fundamental interest to study its size as a function of $F$ and also of $d = [F:\mathbb{Q}]$. For $d \in \mathbb{Z}^+$, we put
\[ T(d) = \sup \# E(F)[\operatorname{tors}], \]
the supremum ranging over all elliptic curves defined over all degree $d$ number fields. We know that $T(d) < \infty$
for all $d \in \mathbb{Z}^+$ \cite{Merel96}. Merel's work gives an
explicit upper bound on $T(d)$, but it is more than exponential. \\ \indent
In the other direction, it is known that $T(d)$ is \emph{not} bounded above by a linear function of $d$. This and related bounds can be obtained by the following seemingly
naive approach: start with \emph{any} number field $F_0$ and \emph{any} elliptic curve $E_{/F_0}$. For $n \in \mathbb{Z}^+$,
let $N_n$ be the product of the first $n$ prime numbers, and put
\[ F_n = F_0(E[N_n]), \quad d_n = [F_n:\mathbb{Q}]. \]
An analysis of this ``naive approach'' was given by F. Breuer \cite{Breuer10}, who showed
\begin{equation}
\label{BREUEREQ1}
\inf_n \frac{\# E(F_n)[\operatorname{tors}]}{\sqrt{d_n \log \log d_n}} > 0.
\end{equation}
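(In outline: $[F_n:F_0]$ divides $\#\operatorname{GL}_2(\mathbb{Z}/N_n\mathbb{Z}) = N_n^4 \prod_{p \mid N_n}(1-1/p)(1-1/p^2)$, while $E(F_n)$ acquires full $N_n$-torsion, so $\# E(F_n)[\operatorname{tors}] \geq N_n^2$; the factor $\log \log d_n$ then enters through the product $\prod_{p \mid N_n}(1-1/p)$ via Mertens' Theorem, much as in the proof of Theorem \ref{NONCMTHM} below.)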
An elliptic curve $E_{/F}$ has \textbf{complex multiplication (CM)} if \[\operatorname{End}(E) \coloneqq \operatorname{End}_{\overline{F}}(E) \supsetneq \mathbb{Z}, \] in which
case $\operatorname{End}(E)$ is an order $\mathcal{O}$ in an imaginary quadratic field $K$. Moreover, if $F \supset K$ we
have $\operatorname{End}_F(E) = \mathcal{O}$, whereas if $F \not \supset K$ we have $\operatorname{End}_F(E) = \mathbb{Z}$. If $E_{/F_0}$ has CM, then
Breuer shows by the same ``naive approach'' that
\begin{equation}
\label{BREUEREQ2}
\inf_n \frac{\# E(F_n)[\operatorname{tors}]}{d_n \log \log d_n} > 0,
\end{equation}
and thus $T(d)$ is not bounded above by a linear function of $d$. In view of these and other considerations, it is reasonable to define $T_{\mathrm{CM}}(d)$ as
for $T(d)$ but restricting to CM elliptic curves only and $T_{\neg \mathrm{CM}}(d)$ as for $T(d)$ but restricting to elliptic curves
\emph{without} CM. Then it follows from (\ref{BREUEREQ1}) that
\begin{equation*}
\limsup_{d \rightarrow \infty} \frac{T_{\neg \mathrm{CM}}(d)}{\sqrt{d \log \log d}} > 0
\end{equation*}
and it follows from (\ref{BREUEREQ2}) that
\begin{equation*}
\limsup_{d \rightarrow \infty} \frac{T_{\mathrm{CM}}(d)}{d \log \log d} > 0.
\end{equation*}
In a recent work \cite[Thm. 1]{CP15} we showed there is an effective $C > 0$ such that
\[ \forall d \geq 3, \quad T_{\mathrm{CM}}(d) \leq C d \log \log d \]
and thus we get an upper order result for $T_{\mathrm{CM}}(d)$:
\begin{equation}
\label{CPEQ}
\limsup_{d \rightarrow \infty} \frac{T_{\mathrm{CM}}(d)}{d \log \log d} \in (0,\infty).
\end{equation}
Other statistical behavior of $T_{\mathrm{CM}}(d)$ was studied in \cite{BCP,BP16,MPP16}; in particular, its average order is $d/(\log{d})^{1+o(1)}$ and its normal
order (in a slightly nonstandard sense made precise in \cite{BCP}) is bounded.
\\ \\
In the present work we will improve upon (\ref{CPEQ}), as follows:
\begin{thm}
\label{MAINTHM} $\displaystyle \limsup_{d\to\infty} \frac{T_{\mathrm{CM}}(d)}{d\log\log{d}} = \frac{e^{\gamma} \pi}{\sqrt{3}}. $
\end{thm}
\noindent
(The easier, lower bound half of Theorem \ref{MAINTHM} was noted in \cite[Remark 1.10]{MPP16}.) In $\S$1.4 we will deduce Theorem \ref{MAINTHM} from results stated later in the introduction.
\begin{remark}
As mentioned above, the constant $C$ appearing in \cite[Thm. 1]{CP15} is effective. This aspect is not addressed in Theorem \ref{MAINTHM} or anywhere in the present work. In fact, though $C$ is effectively \emph{computable}, we did not effectively
\emph{compute} it, and it is not a trivial matter to do so. It is our understanding that such a computation is in progress by some of our colleagues.
\end{remark}
\subsection{Refining the truth I}
As mentioned above, it is natural to distinguish between the cases in which the CM is or is not rationally defined over the
ground field. In this section we concentrate on the former case: let $T_{\mathrm{CM}}^{\bullet}(d)$ be as for $T_{\mathrm{CM}}(d)$ but restricting to CM elliptic curves $E_{/F}$ for number fields $F \supset K$.
\\ \\
We will also examine the dependence of the bound on the CM field and the CM order. Let $\mathscr{K}$ be a set of imaginary quadratic fields.
We define $T_{\mathrm{CM}(\mathscr{K})}(d)$ to be as for $T_{\mathrm{CM}}(d)$ but with the CM field restricted to lie in $\mathscr{K}$. When $\mathscr{K} = \{K\}$ we
write $T_{\mathrm{CM}(K)}(d)$ in place of $T_{\mathrm{CM}(\{K\})}(d)$. Once again we denote restriction to number fields $F \supset K$
by a superscripted $\bullet$.
\begin{thm}\label{lem:reducetofinite}
\label{BIGTHM3} Let $\epsilon > 0$. There is $\Delta_0 = \Delta_0(\epsilon) < 0$ such that: if $\mathscr{K}$ is the collection of imaginary quadratic fields with $\Delta_K < \Delta_0$, then
\begin{equation*}
\limsup_{d\to\infty} \frac{T_{\mathrm{CM}(\mathscr{K})}^{\bullet}(d)}{d\log\log{d}} < \epsilon.
\end{equation*}
\end{thm}
\noindent
We will prove Theorem \ref{BIGTHM3} in $\S$3. A key ingredient is a lower order result for Euler's totient
function across all imaginary quadratic fields which improves upon \cite[Thm. 8]{CP15}. This result is established
in $\S$2.
\\ \\
Theorem \ref{BIGTHM3} motivates us to concentrate on a fixed imaginary quadratic field as well as a fixed imaginary
quadratic order. The next two results address this.
\begin{thm}\label{lem:fixedK} \label{BIGTHM4} Fix an imaginary quadratic field $K$. For $d \in \mathbb{Z}^+$, let
$\mathfrak{f}T_{\mathrm{CM}(K)}^{\bullet}(d)$ denote the maximum value of $\mathfrak{f} \# E(F)[\operatorname{tors}]$ as
$F$ ranges over all degree $d$ number fields containing $K$ and $E_{/F}$ ranges over all elliptic curves
with CM by an order $\mathcal{O}$ of $K$: here $\mathfrak{f}$ is the conductor of $\mathcal{O}$. Then we have
\[ \limsup_{d\to\infty} \frac{\mathfrak{f} T_{\mathrm{CM}(K)}^{\bullet}(d)}{d\log\log{d}} \leq \frac{e^{\gamma} \pi}{\sqrt{|\Delta_K|}}. \]
\end{thm}
\begin{thm}
\label{BIGTHM5}
Let $\mathcal{O}$ be an order in the imaginary quadratic field $K$, with conductor $\mathfrak{f}$ and discriminant $\Delta = \mathfrak{f}^2 \Delta_K$. Let $T_{\mathcal{O}\text{-}\mathrm{CM}}(d)$ be the maximum value of $\#E(F)[\operatorname{tors}]$ as $F$ ranges over all degree $d$ number
fields containing $K$ and $E_{/F}$ ranges over all $\mathcal{O}$-CM elliptic curves. Then
\[ \limsup_{d \rightarrow \infty} \frac{ T_{\mathcal{O}\text{-}\mathrm{CM}}^{\bullet}(d)}{d \log \log d} = \frac{e^{\gamma} \pi}{\sqrt{|\Delta|}} = \frac{e^{\gamma} \pi}{\mathfrak{f} \sqrt{|\Delta_K|}}. \]
\end{thm}
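\noindent
For concreteness: when $\mathcal{O} = \mathbb{Z}[i]$ (so $\Delta = -4$) the limit in Theorem \ref{BIGTHM5} is $e^{\gamma}\pi/2 \approx 2.80$, while for the maximal order of $\mathbb{Q}(\sqrt{-3})$ (so $\Delta = -3$) it is $e^{\gamma}\pi/\sqrt{3} \approx 3.23$. Since $|\Delta| \geq 3$ for every imaginary quadratic order, the latter is the largest possible value, and it is the constant appearing in Theorem \ref{MAINTHM}.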
\noindent
We will prove Theorems \ref{BIGTHM4} and \ref{BIGTHM5} in $\S$4.
\subsection{Refining the truth II}
We turn now to upper order results for $\#E(F)[\operatorname{tors}]$ when the CM is \emph{not} defined over the ground field $F$: define $T_{\mathrm{CM}}^{\circ}(d)$ as for $T_{\mathrm{CM}}(d)$ but restricting to CM elliptic curves $E_{/F}$ for number fields $F \not \supset K$. As above, we will want
to impose this restriction along with restrictions on the CM field and CM order, and we denote restriction to number fields
$F \not \supset K$ by a superscripted $\circ$.
\\ \indent
If $E_{/F}$ is an elliptic curve defined over a number
field $F$ not containing $K$, then $\# E(F)[\operatorname{tors}] \leq \# E(FK)[\operatorname{tors}]$, and thus we have
\begin{equation}
\label{OBVIOUSEQ}
T_{\mathrm{CM}}^{\circ}(d) \leq T_{\mathrm{CM}}^\bullet(2d).
\end{equation}
\noindent
Although (\ref{OBVIOUSEQ}) will be of use to us, it is too crude to allow us to deduce Theorem \ref{MAINTHM}
from the results of the previous section. To overcome this we establish the following result, which \emph{almost}
computes the true upper order of $T_{\mathcal{O}\text{-}\mathrm{CM}}^{\circ}(d)$.
\begin{thm}
\label{BIGTHM6}
Let $\mathcal{O}$ be an order in an imaginary quadratic field $K$.
\begin{enumerate}
\item[a)] There is a constant $C(\mathcal{O})$ such that
\[\forall d \geq 3, \quad T_{\mathcal{O}\text{-}\mathrm{CM}}^{\circ}(d) \leq C(\mathcal{O}) \sqrt{d \log \log d}. \]
\item[b)] We have $\limsup_{d \rightarrow \infty} \frac{T_{\mathcal{O}\text{-}\mathrm{CM}}^{\circ}(d)}{\sqrt{d}} > 0$.
\end{enumerate}
\end{thm}
\noindent
We will prove Theorem \ref{BIGTHM6} in $\S$5.
\\ \\
We can now easily deduce:
\begin{thm}
\label{BIGTHM2}
\label{thm:nonrational} We have $\displaystyle\limsup_{d\to\infty} \frac{T_{\rm CM}^{\circ}(d)}{d\log\log{d}} = 0.$
\end{thm}
\begin{proof}
Step 1: Using (\ref{OBVIOUSEQ}) we immediately get versions of Theorems \ref{BIGTHM3} and \ref{BIGTHM4} for
$T_{\mathrm{CM}}(d)$: since replacing $\epsilon$ by $2 \epsilon$ is harmless,
Theorem \ref{BIGTHM3} holds verbatim with $T_{\mathrm{CM}}(d)$ in place of $T_{\mathrm{CM}}^{\bullet}(d)$, whereas for
any imaginary quadratic field $K$ we have
\begin{equation}
\label{BIGTHM2EQ1}
\limsup_{d\to\infty} \frac{\mathfrak{f} T_{\mathrm{CM}(K)}(d)}{d\log\log{d}} \leq \frac{2 e^{\gamma} \pi}{\sqrt{|\Delta_K|}}.
\end{equation}
Step 2: The above strengthened version of Theorem \ref{BIGTHM3} reduces us to finitely many quadratic fields, and then
the dependence on the conductor in (\ref{BIGTHM2EQ1}) reduces us to finitely many quadratic orders. Thus we may
treat one quadratic order at a time, and Theorem \ref{BIGTHM6} gives a much better bound than $o(d \log \log d)$ in that case.
\end{proof}
\subsection{Proof of Theorem \ref{MAINTHM}} Theorem \ref{MAINTHM} is a quick consequence of these refined results: by Theorem \ref{BIGTHM2}
we may restrict to the case in which the number field contains the CM field. Now we argue much as in the proof of
Theorem \ref{BIGTHM2}. By Theorem \ref{BIGTHM5}, applied with $\mathcal{O}$ the maximal order in $\mathbb{Q}(\sqrt{-3})$,
\[ \limsup_{d\to\infty} \frac{T_{\mathrm{CM}}(d)}{d\log\log d} \ge \frac{e^{\gamma} \pi}{\sqrt{3}}.\]
By Theorem \ref{BIGTHM3}, in any sequence $\{(E_n)_{/F_n}\}$ with $[F_n:\mathbb{Q}] \rightarrow \infty$ such that
\[ \lim_{n \rightarrow \infty} \frac{ \#E_n(F_n)[\operatorname{tors}]}{[F_n:\mathbb{Q}] \log \log [F_n:\mathbb{Q}]} \geq \frac{e^{\gamma} \pi}{\sqrt{3}}, \]
only finitely many quadratic fields intervene, and by Theorem \ref{BIGTHM4}
among orders with the same fraction field the conductors must be bounded. So we have reduced to working with
one imaginary quadratic order $\mathcal{O}$ at a time, and Theorem \ref{BIGTHM5} tells us that $T_{\mathcal{O}\text{-}\mathrm{CM}}^{\bullet}(d)$
is largest when the discriminant of $\mathcal{O}$ is smallest, i.e., when $\mathcal{O}$ is the ring of integers of $\mathbb{Q}(\sqrt{-3})$.
\subsection{Complements} In $\S$6.1 we compare our results to the asymptotic behavior of prime order torsion studied in
\cite{TORS1}. In $\S$6.2 we address -- but do not completely resolve -- the question of the upper order of $T_{\mathrm{CM}}^{\circ}(d)$.
In $\S$6.3 we revisit Breuer's work and give what is in a sense a non-CM analogue of Theorem \ref{BIGTHM5}: we study the asymptotic behavior of torsion one $j$-invariant at a time.
\section{Lower bounds on $\varphi_K(\mathfrak{a})$}
\noindent
For the classical Euler totient function, it is a well-known consequence of Mertens' Theorem \cite[Thm. 328]{HW} that $$\liminf_{n\to\infty} \frac{\varphi(n)}{n/\log\log{n}} = e^{-\gamma}.$$ As in \cite{CP15}, we require analogous results for $\varphi_K(\mathfrak{a})$, where $K$ is an imaginary quadratic field and $\mathfrak{a}$ is an ideal of $\mathcal{O}_K$.
When the field $K$ is fixed, this presents no difficulty. In that case, one can argue precisely as in \cite{HW}, using the number field analogue of the classical Mertens Theorem: e.g. \cite{Rosen99}. For any fixed number field $K$, let $\alpha_K$ be
the residue of the Dedekind zeta function $\zeta_K(s)$ at $s = 1$. Then one finds that
\[ \liminf_{|\mathfrak{a}|\to\infty} \frac{ \varphi_K(\mathfrak{a})}{|\mathfrak{a}|/\log \log |\mathfrak{a}|} = e^{-\gamma} \alpha_K^{-1}. \]
(Here and below, $|\mathfrak{a}|$ denotes the norm of the ideal $\mathfrak{a}$.)
When $K$ is imaginary quadratic, the class number formula (e.g. \cite[Theorem 61, p. 284]{FT93}) gives \[\alpha_K = \frac{2\pi h_K}{w_K \sqrt{|\Delta_K|}}, \]
so that
\begin{equation}\label{eq:landauK}
\liminf_{|\mathfrak{a}|\to\infty} \frac{ \varphi_K(\mathfrak{a})}{|\mathfrak{a}|/\log \log |\mathfrak{a}|} = \frac{e^{-\gamma} w_K \sqrt{|\Delta_K|}}{2\pi h_K}.\end{equation}
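For instance, for $K = \mathbb{Q}(i)$ one has $w_K = 4$, $h_K = 1$ and $|\Delta_K| = 4$, so the right-hand side of \eqref{eq:landauK} is $4e^{-\gamma}/\pi \approx 0.715$.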
When $K$ is allowed to vary, the situation becomes more delicate. In this section we will establish the following result.
\begin{thm}\label{thm:phibound}
There is a constant $c>0$ such that for all quadratic fields $K$ and all nonzero
ideals $\mathfrak{a}$ of $\mathcal{O}_K$ with $|\mathfrak{a}| \geq 3$, we have
\[ \varphi_K(\mathfrak{a}) \geq \frac{c}{\log|\Delta_K|}\cdot \frac{|\mathfrak{a}|}{\log\log|\mathfrak{a}|}. \]
\end{thm}
\noindent
Below, $\star$ denotes Dirichlet convolution, so that $(f\star g)(n) = \sum_{de=n} f(d) g(e)$.
\begin{lemma}\label{lemma:sylvester} There is a positive constant $C$ for which the following holds. Let $f$ be a nonnegative multiplicative function. Let $\chi$ be a nonprincipal Dirichlet character modulo $q$. For all $x \geq 2$,
\begin{equation}
\label{SYLVESTEREQ}
\left|\sum_{n \le x} \frac{(f\star\chi)(n)}{n}\right| \leq C\log{q} \cdot \prod_{p \le x} \left(1+\frac{f(p)}{p} + \frac{f(p^2)}{p^2} + \dots\right).
\end{equation}
\end{lemma}
\begin{proof} Write $\sum_{n \le x} \frac{(f\star\chi)(n)}{n} = \sum_{d \le x} \frac{f(d)}{d} \sum_{e \le x/d} \frac{\chi(e)}{e}$. The contribution to the inner sum from values of $e\le q$ is bounded in absolute value by $1+1/2+\dots+1/q\ll \log{q}$. Since the partial sums of $\chi$ are (crudely) bounded by $q$, Abel summation shows that the contribution to the inner sum from values of $e$ with $q < e \le x/d$ is $\ll 1 \ll \log{q}$. Now the triangle inequality and the nonnegativity of $f$ yield
\[ \left|\sum_{n \le x} \frac{(f\star\chi)(n)}{n}\right| \ll \log{q} \cdot \sum_{d \le x} \frac{f(d)}{d}. \]
The sum on $d$ is bounded by the Euler product appearing in (\ref{SYLVESTEREQ}).
\end{proof}
\begin{lemma}\label{lemma:lowerprod} There is a constant $c>0$ for which the following holds. Let $\chi$ be a quadratic character modulo $q$. For all $x \ge 2$,
\[ \prod_{p \le x}\left(1-\frac{\chi(p)}{p}\right) \geq \frac{c}{\log{q}}. \]
\end{lemma}
\begin{proof} Let $f$ be multiplicative such that $f(p^k) = 1-\chi(p)$ for every prime power $p^k$. Since $\chi$ is quadratic, $f$ assumes only nonnegative values. Moreover we have
$(f \star \chi)(n) = 1$ for all $n \in \mathbb{Z}^+$, so that
\[ \sum_{n \le x} \frac{(f \star \chi)(n)}{n} = \sum_{n \le x}\frac{1}{n} \gg \log{x}. \] Hence, by Lemma \ref{lemma:sylvester} and Mertens' Theorem \cite[Thm. 429]{HW},
\begin{align*} \log{x} \ll (\log{q}) \cdot \prod_{p \le x} \left(1 + \frac{1-\chi(p)}{p-1}\right) &= (\log{q})\cdot \prod_{p \le x}\left(1-\frac{1}{p}\right)^{-1} \cdot \prod_{p \le x}\left(1-\frac{\chi(p)}{p}\right)
\\&\ll (\log{q}) (\log{x}) \cdot \prod_{p \le x}\left(1-\frac{\chi(p)}{p}\right). \end{align*}
Rearranging gives the lemma.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:phibound}] We write $\varphi_K(\mathfrak{a}) = |\mathfrak{a}| \cdot \prod_{\mathfrak{p} \mid \mathfrak{a}} (1-1/|\mathfrak{p}|)$ and notice that the factors $1-1/|\mathfrak{p}|$ are increasing in $|\mathfrak{p}|$. So if $z\ge 2$ is such that $\prod_{|\mathfrak{p}| \le z} |\mathfrak{p}| \ge |\mathfrak{a}|$, then
\begin{equation}\label{eq:step0} \frac{\varphi_K(\mathfrak{a})}{|\mathfrak{a}|} \ge \prod_{|\mathfrak{p}| \le z} \left(1-\frac{1}{|\mathfrak{p}|}\right). \end{equation}
We first establish a lower bound on the right-hand side, as a function of $z$, and then we prove the theorem by making a convenient choice of $z$. We partition the prime ideals with $|\mathfrak{p}|\le z$ according to the splitting behavior of the rational prime $p$ lying below $\mathfrak{p}$. Noting that $p \le |\mathfrak{p}|$, Mertens' Theorem and Lemma \ref{lemma:lowerprod} yield
\begin{align} \prod_{|\mathfrak{p}| \le z} \left(1-\frac{1}{|\mathfrak{p}|}\right) &\ge \prod_{p \le z} \prod_{\mathfrak{p} \mid (p)} \left(1-\frac{1}{|\mathfrak{p}|}\right)
= \prod_{p \le z} \left(1-\frac{1}{p}\right) \left(1-\frac{\leg{\Delta_K}{p}}{p}\right) \notag\\
&\gg (\log{z})^{-1} \prod_{p \le z} \left(1-\frac{\leg{\Delta_K}{p}}{p}\right) \gg (\log{z})^{-1} \cdot (\log{|\Delta_K|})^{-1}. \label{eq:step1}\end{align}
With $C'$ a large absolute constant to be described momentarily, we set
\begin{equation}\label{eq:step2} z=(C'\log |\mathfrak{a}|)^2.\end{equation}
Let us check that $\prod_{|\mathfrak{p}| \le z}|\mathfrak{p}| \ge |\mathfrak{a}|$ with this choice of $z$. In fact, the Prime Number Theorem guarantees that \[ \prod_{|\mathfrak{p}| \le z}|\mathfrak{p}| \ge \prod_{p \le z^{1/2}} p \ge \prod_{p \le C'\log |\mathfrak{a}|}p \ge |\mathfrak{a}|,\] provided that $C'$ was chosen appropriately. Combining \eqref{eq:step0}, \eqref{eq:step1}, and \eqref{eq:step2} gives \[\varphi_K(\mathfrak{a})\gg |\mathfrak{a}| \cdot (\log{z})^{-1} \cdot (\log |\Delta_K|)^{-1}\gg (\log|\Delta_K|)^{-1} \cdot |\mathfrak{a}| \cdot (\log\log|\mathfrak{a}|)^{-1}. \qedhere \]
\end{proof}
\section{Proof of Theorem \ref{BIGTHM3}}
\noindent
Let $E$ be an elliptic curve over a degree $d$ number field $F$. Certainly we may, and shall, assume that $\# E(F)[{\rm tors}] \geq 3$.
Suppose that $E$ has CM by the imaginary quadratic field $K$ and that $F \supset K$. Let $\mathcal{O} = \operatorname{End} E$, an order in the imaginary quadratic field $K$. Let $\mathcal{O}_K$ be the
ring of integers of $K$. By \cite[Thm. 1.3]{BC17} there is an elliptic curve $E'_{/F}$ with $\operatorname{End} E' = \mathcal{O}_K$ such that
\[ \# E(F)[\operatorname{tors}] \mid \# E'(F)[\operatorname{tors}]. \]
Let $\mathfrak{a} \subset \mathcal{O}_K$
be the annihilator ideal of the $\mathcal{O}_K$-module $E'(F)[{\rm tors}]$. By \cite[Thm. 2.7]{BC17} we have
\[ E'(F)[{\rm tors}] \cong \mathcal{O}_K/\mathfrak{a}, \]
so
\[ \#E'(F)[{\rm tors}] = |\mathfrak{a}|. \]
By the First Main Theorem of Complex Multiplication \cite[Thm. II.5.6]{SilvermanII} we have
\begin{equation}
\label{PETEEQ1}
F \supset K^{\mathfrak{a}},
\end{equation}
while by \cite[Lemma 2.11]{BC17} we have
\begin{equation}
\label{PETEEQ2}
\frac{2 \varphi_K(\mathfrak{a}) h_K}{w_K} \mid [K^{\mathfrak{a}}:\mathbb{Q}]
\end{equation}
with equality in most cases (e.g. unless $\mathfrak{a} \mid 6 \mathcal{O}_K$). Combining (\ref{PETEEQ1}) and (\ref{PETEEQ2}), we get
\begin{equation*}
\varphi_K(\mathfrak{a}) \mid \frac{w_K}{2} \cdot \frac{d}{h_K}.
\end{equation*}
Combining the last inequality with the result of Theorem \ref{thm:phibound}, we get
\[ |\mathfrak{a}|/\log\log|\mathfrak{a}| \le \frac{w_K}{2c} \cdot \frac{\log|\Delta_K|}{h_K} \cdot d. \]
Siegel's Theorem (see \cite[Chapter 21]{Davenport00}) implies that $h_K \gg |\Delta_K|^{1/3}$. So we can choose $\Delta_0$ sufficiently large and negative so that when $\Delta_K < \Delta_0$, we have
\[ \frac{\log|\Delta_K|}{h_K} \le \frac{\epsilon}{6} c. \]
Working under this assumption on $\Delta_K$, we have
\[ |\mathfrak{a}|/\log\log|\mathfrak{a}| \le \frac{\epsilon}{2} d. \]
For all $d$ sufficiently large in terms of $\epsilon$, this implies that
\[ \#E(F)[\operatorname{tors}] \leq \#E'(F)[\operatorname{tors}] = |\mathfrak{a}| < \epsilon d\log\log{d}.\]
Thus,
\begin{equation*}
\limsup_{d \rightarrow \infty} \frac{T_{\mathrm{CM}(\mathscr{K})}^\bullet(d)}{d \log \log d} < \epsilon.
\end{equation*}
\section{Proofs of Theorems \ref{BIGTHM4} and \ref{BIGTHM5}}
\noindent
In this section we fix an imaginary quadratic field $K$.
\subsection{Proof of Theorem \ref{BIGTHM4}}
Let $F \supset K$ be a number field of degree $d$, and let $E_{/F}$ be an elliptic curve with CM by an order
$\mathcal{O}$, of conductor $\mathfrak{f}$, in $K$. Put $w = \# \mathcal{O}^{\times}$. We may and shall assume that $\# E(F)[\operatorname{tors}] \mathfrak{g}eq 5$. We may write
\[ E(F)[\operatorname{tors}] \cong \mathbb{Z}/a\mathbb{Z} \times \mathbb{Z}/ab \mathbb{Z} \]
for $a,b \in \mathbb{Z}^+$. Since $\# E(F)[\operatorname{tors}] \mathfrak{g}eq 5$, we have $ab \mathfrak{g}eq 3$. By \cite[Thm. 7]{CP15} there is a number field $L \supset F$ such that $F(E[ab]) \subset L$
and $[L:F] \leq b$. Let $\mathfrak{h}\colon E \ensuremath{\rightarrow} E/\operatorname{Aut}(E)$ be the Weber function for $E$, and put
\[ W(N,\mathcal{O}) = K(\mathfrak{f})(\mathfrak{h}(E[N])). \]
By \cite[Thm. 2.12]{BC17} we have $L \supset W(ab,\mathcal{O})$ and thus
\begin{equation}
\label{THM4EQ1}
[L:\mathbb{Q}] \mathfrak{g}eq [W(ab,\mathcal{O}):K^{(1)}][K^{(1)}:\mathbb{Q}] = 2h_K [W(ab,\mathcal{O}):K(\mathfrak{f})][K(\mathfrak{f}):K^{(1)}].
\end{equation}
We recall several formulas from \cite{BC17}: namely, we have \cite[Thm.4.8]{BC17}
\begin{equation}
\label{THM4EQ2}
[W(ab,\mathcal{O}):K(\mathfrak{f})] = \frac{ \#(\mathcal{O}/ab\mathcal{O})^{\times}}{w}
\end{equation}
and \cite[Lemma 2.3]{BC17}
\begin{equation}
\label{THM4EQ3}
\# (\mathcal{O}/ab\mathcal{O})^{\times} [K(\mathfrak{f}):K^{(1)}] = \varphi_K(ab\mathfrak{f}) \left( \frac{\varphi(ab)}{\varphi(ab \mathfrak{f})} \right) \frac{w}{w_K}. \end{equation}
Combining (\ref{THM4EQ1}), (\ref{THM4EQ2}) and (\ref{THM4EQ3}) gives
\[ [L:\mathbb{Q}] \mathfrak{g}eq \frac{2h_K}{w_K} \left( \frac{\varphi(ab)}{\varphi(ab\mathfrak{f})} \right) \varphi_K(ab \mathfrak{f}), \]
and thus
\[ d = [F:\mathbb{Q}] \mathfrak{g}eq \frac{[L:\mathbb{Q}]}{b} \mathfrak{g}eq \frac{2h_K}{b w_K} \left( \frac{\varphi(ab)}{\varphi(ab\mathfrak{f})} \right) \varphi_K(ab \mathfrak{f}). \]
Multiplying by $a^2b^2 \mathfrak{f}^2 = |ab\mathfrak{f} \mathcal{O}_K|$ and rearranging, we get
\begin{equation}
\label{THM4EQ4}
\mathfrak{f}^2 \# E(F)[\operatorname{tors}] = \mathfrak{f}^2 a^2 b \leq \frac{w_K}{2} \frac{d}{h_K} \frac{|ab\mathfrak{f} \mathcal{O}_K|}{\varphi_K(ab\mathfrak{f})}
\frac{\varphi(ab\mathfrak{f})}{\varphi(ab)}.
\end{equation}
We have $\frac{\varphi(ab\mathfrak{f})}{\varphi(ab)} \leq \mathfrak{f}$; replacing the factor of $\frac{\varphi(ab\mathfrak{f})}{\varphi(ab)}$ with $\mathfrak{f}$
in the right-hand side of (\ref{THM4EQ4}) and cancelling the $\mathfrak{f}$'s, we get
\[ \mathfrak{f} \# E(F)[\operatorname{tors}] \leq \frac{w_K}{2} \frac{d}{h_K} \frac{|ab\mathfrak{f}\mathcal{O}_K|}{\varphi_K(ab\mathfrak{f})}. \]
Now using (\ref{eq:landauK}) we get that, as $\mathfrak{f} \# E(F)[\operatorname{tors}] \ensuremath{\rightarrow} \infty$,
\[ \mathfrak{f} \# E(F)[\operatorname{tors}] \leq (1+o(1)) \frac{e^{\gamma} \pi}{\sqrt{|\Delta_K|}} d \log \log(a^2 b^2 \mathfrak{f}^2). \]
Since $\mathfrak{f} \#E(F)[\operatorname{tors}] = \mathfrak{f} a^2 b \le a^2 b^2 \mathfrak{f}^2 \le (\mathfrak{f} \#E(F)[\operatorname{tors}])^2$,
\[ \log\log(a^2 b^2 \mathfrak{f}^2) = \log\log(\mathfrak{f}\#E(F)[\operatorname{tors}]) + O(1), \]
so that
\[ \mathfrak{f} \# E(F)[\operatorname{tors}] \le (1 + o(1)) \frac{e^{\gamma} \pi}{\sqrt{|\Delta_K|}} d \log \log(\mathfrak{f} \# E(F)[\operatorname{tors}]). \]
Thus,
\[ \frac{\mathfrak{f} \# E(F)[\operatorname{tors}]}{\log\log \mathfrak{f} \#E(F)[\operatorname{tors}]} \le (1 + o(1)) \frac{e^{\gamma} \pi}{\sqrt{|\Delta_K|}} d, \]
which implies that
\[ \mathfrak{f} \# E(F)[\operatorname{tors}] \le (1+o(1)) \frac{e^{\gamma} \pi}{\sqrt{|\Delta_K|}} d\log\log{d}. \]
Thus, for any sequence of $K$-CM elliptic curves $E_{/F}$ (having $F\supset K$, and $[F:\mathbb{Q}]=d$) with $\mathfrak{f} \#E(F)[\operatorname{tors}]\to\infty$,
\[ \limsup \frac{\mathfrak{f} \# E(F)[\operatorname{tors}]}{d \log \log d} \leq \frac{e^{\gamma} \pi}{\sqrt{|\Delta_K|}}.\]
Theorem \ref{BIGTHM4} follows immediately.
\subsection{Proof of Theorem \ref{BIGTHM5}} Let $\mathcal{O}$ be the order of conductor $\mathfrak{f}$ in the imaginary
quadratic field $K$. Again we put $w = \# \mathcal{O}^{\times}$.
\\ \\
The inequality
\[ \limsup_{d \rightarrow \infty} \frac{T_{\mathcal{O}\text{-}\mathrm{CM}}^{\bullet}(d)}{d \log \log d} \leq \frac{e^{\gamma} \pi}{\mathfrak{f} \sqrt{|\Delta_K|}} \]
is immediate from Theorem \ref{BIGTHM4}, so it remains to prove the opposite inequality.
\\ \indent
Let $n \mathfrak{g}eq 2$, and let $N_n$ be the product of the primes not exceeding $n$. We assume that $n$ is large enough so that for all primes $\ell$,
if $\ell \mid \mathfrak{f}$ then $\ell \mid N_n$. By \cite[Thm. 1.1b)]{BC17} there is a number field $F \supset K$ and an $\mathcal{O}$-CM elliptic curve $E_{/F}$ such that \[[F:K(j(E))] =
\frac{ \# (\mathcal{O}/N_n\mathcal{O})^{\times}}{w} \]
and $(\mathbb{Z}/N_n\mathbb{Z})^2 \hookrightarrow E(F)$. Since $K(j(E)) = K(\mathfrak{f})$, using (\ref{THM4EQ3}) as above shows that
\[ [F:\mathbb{Q}] = \frac{2 h_K}{w_K} \left( \frac{\varphi(N_n)}{\varphi(N_n\mathfrak{f})} \right) \varphi_K(N_n\mathfrak{f}). \]
Because every prime dividing $\mathfrak{f}$ also divides $N_n$, we have $ \frac{\varphi(N_n)}{\varphi(N_n\mathfrak{f})} = \frac{1}{\mathfrak{f}}$,
so
\[ d = [F:\mathbb{Q}] = \frac{2h_K}{\mathfrak{f} w_K} \varphi_K(N_n \mathfrak{f}) = \frac{2 \mathfrak{f} h_K N_n^2}{w_K} \prod_{p \leq n} \bigg(1-\frac{1}{p}\bigg)\bigg(1- \leg{\Delta_K}{p} \frac{1}{p} \bigg). \]
It follows that
\[ \# E(F)[\operatorname{tors}] \mathfrak{g}eq N_n^2 = \frac{w_K}{2 \mathfrak{f} h_K} d \prod_{p \leq n} \bigg(1-\frac{1}{p} \bigg)^{-1} \prod_{p \leq n}
\bigg(1-\leg{\Delta_K}{p} \frac{1}{p} \bigg)^{-1}. \]
Mertens' Theorem gives $\prod_{p \le n}(1-1/p)^{-1} \sim e^{\mathfrak{g}amma}\log{n}$, as $n\to\infty$, while
\[\prod_{p \le n}\left(1-\leg{\Delta_K}{p}\frac{1}{p}\right)^{-1} \to L(1,\leg{\Delta_K}{\cdot}) = \frac{2 \pi h_K}{w_K \sqrt{|\Delta_K|}}. \]
Thus we find that as $n \ensuremath{\rightarrow} \infty$ we have
\[ \# E(F)[\operatorname{tors}] \geq (1 + o(1)) \frac{e^{\gamma} \pi}{\mathfrak{f} \sqrt{|\Delta_K|}} d \log n. \]
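(For instance, for $K = \mathbb{Q}(\sqrt{-3})$ and $\mathcal{O} = \mathcal{O}_K$ we have $w_K = 6$, $h_K = 1$, $|\Delta_K| = 3$ and $\mathfrak{f} = 1$, so $L(1,\leg{\Delta_K}{\cdot}) = \frac{2\pi}{6\sqrt{3}} = \frac{\pi}{3\sqrt{3}}$ and the constant above is $e^{\gamma}\pi/\sqrt{3}$, the value appearing in Theorem \ref{MAINTHM}.)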
Moreover, for sufficiently large $n$ we have
\[ d = \frac{2 h_K}{\mathfrak{f} w_K} \varphi_K(N_n \mathfrak{f}) = \frac{2 \mathfrak{f} h_K}{w_K} \varphi_K(N_n) \leq \frac{2 \mathfrak{f} h_K}{w_K}
N_n^2 \leq N_n^3, \]
so as $n \ensuremath{\rightarrow} \infty$ we have
\[ \log \log d \leq \log \log N_n + O(1). \]
By the Prime Number Theorem we have $N_n = e^{(1+o(1))n}$, so
\[ \log \log N_n \sim \log n, \]
and thus as $n \ensuremath{\rightarrow} \infty$ we have
\[ \log n \mathfrak{g}eq (1+o(1)) \log \log d. \]
We conclude that as $n \ensuremath{\rightarrow} \infty$,
\[ \# E(F)[\operatorname{tors}] \geq (1+o(1)) \frac{e^{\gamma} \pi}{\mathfrak{f} \sqrt{|\Delta_K|}} d \log \log d = (1+o(1))
\frac{e^{\gamma} \pi}{\sqrt{|\Delta|}} d \log \log d, \]
completing the proof of Theorem \ref{BIGTHM5}.
\section{Proof of Theorem \ref{BIGTHM6}}
\subsection{Proof of Theorem \ref{BIGTHM6}a)}
Let $\mathcal{O}$ be an order of conductor $\mathfrak{f}$ in the imaginary quadratic field $K$, let $F \not\supset K$ be a number field of degree $d$, and let $E_{/F}$ be an $\mathcal{O}$-CM elliptic curve.
By \cite[Prop. 25]{TORS1} there is a cyclic, degree $\mathfrak{f}$, $F$-rational isogeny $E \rightarrow E'$, with $E'_{/F}$ an $\mathcal{O}_K$-CM elliptic curve. It follows that
\begin{equation}
\label{JUSTKIDDINGEQ0}
\#E(F)[\operatorname{tors}] \leq \mathfrak{f} \# E'(F)[\operatorname{tors}].
\end{equation}
By \cite[Lemma 3.15]{BCS16} we have
\[ E'(F)[\operatorname{tors}] \cong \mathbb{Z}/a\mathbb{Z} \times \mathbb{Z}/ab \mathbb{Z} \]
with $a \in \{1,2\}$. Certainly there are $A,B \in \mathbb{Z}^+$ such that
\[ E'(FK)[\operatorname{tors}] \cong \mathbb{Z}/A\mathbb{Z} \times \mathbb{Z}/AB\mathbb{Z}.\]
Write $ab = c_1 c_2$ with $c_1$ divisible only by primes $p \nmid \Delta_K$ and $c_2$ divisible only by
primes $p \mid \Delta_K$. Then \cite[Thm. 4.8]{BCS16} gives $c_1 \mid A$. Let $\beta$ be the product of
the distinct prime divisors of $c_2$, and let $\mathfrak{b}$ be the product of the distinct prime divisors of
$\Delta_K$, so $\beta \mid \mathfrak{b}$. By \cite[$\S$6.3]{BC17} we have $\frac{c_2}{\beta} \mid A$. Since $\gcd(c_1,c_2) = 1$, this implies $\frac{ab}{\beta} \mid A$ and thus
\begin{equation}
\label{JUSTKIDDINGEQ2}
\# E'(FK)[\operatorname{tors}] = A^2 B \geq A^2 \geq \frac{a^2 b^2}{\beta^2}.
\end{equation}
Using (\ref{JUSTKIDDINGEQ0}) and (\ref{JUSTKIDDINGEQ2}) we get
\[ \#E(F)[\operatorname{tors}] \leq \mathfrak{f} \# E'(F)[\operatorname{tors}] \leq a \mathfrak{f} \beta \sqrt{ \# E'(FK)[\operatorname{tors}]} \leq 2 \mathfrak{f} \mathfrak{b} \sqrt{ \# E'(FK)[\operatorname{tors}]}. \]
Note that $\mathfrak{f}$ and $\mathfrak{b}$ depend only on $\mathcal{O}$. Moreover $\# E'(FK)[\operatorname{tors}] \ll_{\mathcal{O}} d\log\log d$, by Theorem \ref{BIGTHM5}. Thus, as claimed, we have
\[ \#E(F)[\operatorname{tors}] \ll_{\mathcal{O}} \sqrt{d\log\log{d}}. \]
\subsection{Proof of Theorem \ref{BIGTHM6}b)} First one rather innocuous preliminary result.
\begin{lemma}
\label{REAL1}
Let $F$ be a subfield of $\mathbb{R}$. Let $E_{/F}$ be an elliptic curve, and let
$N \in \mathbb{Z}^+$ have prime power decomposition $N= \prod_{i=1}^r \ell_i^{a_i}$. Then there is a point $P \in E(\mathbb{R})$ of order $N$ such that $[F(P):F] \leq \prod_{i=1}^r \ell_i^{2a_i-2}(\ell_i^2-1)$.
\end{lemma}
\begin{proof}
As for every elliptic curve defined over $\mathbb{R}$, we have $E(\mathbb{R}) \cong S^1$ or $E(\mathbb{R}) \cong S^1 \times \mathbb{Z}/2\mathbb{Z}$ (e.g. \cite[Cor. V.2.3.1]{SilvermanII}) . Thus there is $P \in E(\mathbb{R})$ of order $N$. Let $\overline{F}$ be the algebraic closure of $F$ viewed as a subfield of $\mathbb{C}$. Then $P \in E(\overline{F})$ and
the degree $[F(P):F]$ is the size of the $\operatorname{Aut}(\overline{F}/F)$-orbit on $P$. For all $\sigma \in \operatorname{Aut}(\overline{F}/F)$, $\sigma(P)$ is also a point of order $N$,
so the size of this orbit is no larger than the number of order $N = \prod_{i=1}^r \ell_i^{a_i}$ points in $E[N](\overline{F})
\cong (\mathbb{Z}/N\mathbb{Z})^2$, which is $\prod_{i=1}^r \ell_i^{2a_i-2}(\ell_i^2-1)$.
\end{proof}
\noindent
We now give the proof of Theorem \ref{BIGTHM6}b). Let $\mathcal{O}$ be an order in an imaginary quadratic field $K$. Let
$F_0 = \mathbb{Q}(j(\mathbb{C}/\mathcal{O}))$, so that $F_0$ is a subfield of $\mathbb{R}$ (forcing $F_0\not\supset K$) and $[F_0:\mathbb{Q}]=\#\operatorname{Pic}\mathcal{O}$. Let $E_{/F_0}$ be any $\mathcal{O}$-CM elliptic curve. Let $r \in \mathbb{Z}^+$ and let $N_r = p_1 \cdots p_r$ be the product of the first $r$ primes. \\ \indent
Applying Lemma \ref{REAL1} to $E_{/F_0}$ we get a number field $F_{N_r} \subset \mathbb{R}$ with \[d_r \coloneqq [F_{N_r}:\mathbb{Q}] \leq \# \operatorname{Pic} \mathcal{O} \prod_{i=1}^r (p_i^2-1)\] such that $E(F_{N_r})$ has a point of order $N_r$. So we have
\[ \limsup_{r \ensuremath{\rightarrow} \infty} \frac{d_r}{N_r^2} \leq \frac{\# \operatorname{Pic} \mathcal{O}}{\zeta(2)} = \frac{6 \# \operatorname{Pic} \mathcal{O}}{\pi^2} \]
and thus, since $\# E(F_{N_r})[\operatorname{tors}] \geq N_r$, we get, as $r\to\infty$,
\[ \frac{\# E(F_{N_r})[\operatorname{tors}]}{\sqrt{d_r}} \geq \sqrt{\frac{\pi^2}{6\# \operatorname{Pic} \mathcal{O}}} + o(1). \]
\section{Complements}
\noindent
For a number field $F$, let $\mathfrak{g}_F = \operatorname{Aut}(\overline{\mathbb{Q}}/F)$ denote the absolute Galois group of $F$.
\subsection{Comparison to prior work}
Fix an imaginary quadratic order $\mathcal{O}$ of discriminant $\Delta$. For all sufficiently large primes $p$, the least degree of a number field $F \supset K$ such that there is an $\mathcal{O}$-CM elliptic curve $E_{/F}$ with an $F$-rational point of order $p$ is at least $\left(\frac{2 \# \operatorname{Pic} \mathcal{O}}{\# \mathcal{O}^{\times}}\right) (p-1)$, with equality if $p$ splits in $\mathcal{O}$, and thus the upper order of the size of a prime order torsion point divided by the degree of the number field containing $K$ over which it is defined is $\frac{\# \mathcal{O}^{\times}}{2 \# \operatorname{Pic} \mathcal{O}}$. The maximum value of this quantity is $3$, occurring iff $\Delta = -3$; the next largest value
is $2$, occurring iff $\Delta = -4$, and these are indeed the largest two imaginary quadratic discriminants. But
the next largest value is $1$, occurring iff $\Delta \in \{-7,-8,-11,-12,-16,-19,-27,-28, -43,-67,-163\}$. In particular, both the
class number $h_K$ and the size of the unit group $\mathcal{O}^{\times}$ play a role in the asymptotic behavior of prime order
torsion but get cancelled out by the special value $L(1,\left(\frac{\Delta_K}{\cdot}\right))$
when we look at the size of the torsion subgroup as a whole.
\subsection{The truth about $T_{\mathbb{C}M}^{\circ}(d)$?}
\begin{prop} There is a sequence of $d$ tending to infinity along which $T^{\circ}_{\mathrm{CM}}(d) \ge d^{2/3+o(1)}$.
\end{prop}
\begin{proof} Let $\ell$ be an odd prime. By Corollary 7.5 in \cite{BCS16}, applied to the maximal order of $K=\mathbb{Q}(\sqrt{-\ell})$, there is a number field $F$ of degree $d=h_{\mathbb{Q}(\sqrt{-\ell})} \frac{\ell-1}{2}$ and an $\mathcal{O}_K$-CM elliptic curve $E_{/F}$ with an $F$-rational torsion point of order $\ell$. We restrict to $\ell\equiv 3\pmod{4}$ --- this has the effect of ensuring that $d$ is odd, and so $F\not\supset K$. Hence, $T_{\mathrm{CM}}^{\circ}(d) \ge \ell$. By Dirichlet's class number formula together with the elementary bound $L(1,\leg{-\ell}{\cdot}) \ll \log{\ell}$ (see, e.g., \cite[Thm. 8.18]{tenenbaum}), we have $h_{\mathbb{Q}(\sqrt{-\ell})} \ll \ell^{1/2}\log{\ell}$. Thus, $d \le \ell^{3/2+o(1)}$ (as $\ell\to\infty$), and so $$T_{\mathrm{CM}}^{\circ}(d) \ge \ell \ge d^{2/3+o(1)}.$$ As $\ell$ tends to infinity, so does $d$, and the proposition follows.
\end{proof}
\noindent
Define $T_{\mathrm{CM},\mathrm{max}}^{\circ}(d)$ in the same way as $T_{\mathrm{CM}}^{\circ}(d)$, but with the added restriction that we consider only curves $E_{/F}$ with CM by the maximal order $\mathcal{O}_K$. The preceding proof shows that $T_{\mathrm{CM},\mathrm{max}}^{\circ}(d) \ge d^{2/3+o(1)}$ on a sequence of $d$ tending to infinity.
\begin{thm}
\label{PAULSTHM}
For all $\epsilon > 0$, there is $C(\epsilon) > 0$ such that for all $d \in \mathbb{Z}^+$ we have
\[T^{\circ}_{\mathrm{CM},\mathrm{max}}(d) \leq C(\epsilon) d^{2/3+\epsilon}. \]
\end{thm}
\begin{proof}
Let $F \not \supset K$ be a number field of degree $d$, and let $E_{/F}$ be an $\mathcal{O}_K$-CM elliptic curve.
There are positive integers $a$ and $b$ with
\[ E(F)[{\operatorname{tors}}] \cong \mathbb{Z}/a\mathbb{Z} \times \mathbb{Z}/ab\mathbb{Z}. \]
By [BCS, Lemma 3.15], we have $a \in \{1,2\}$. The remainder of the proof takes two forms depending on the size of $|\Delta_K|$.
\noindent Case I: $|\Delta_K| \mathfrak{g}e d^{2/3}$.
\noindent Since $E(FK)$ contains a point of order $ab$, [BC, Theorem 5.3] shows that
\[ \varphi(ab) \le \frac{w_K}{2} \frac{2d}{h_K}. \]
By Siegel's Theorem, $h_K \mathfrak{g}g_{\epsilon} |\Delta_K|^{1/2-\epsilon} \mathfrak{g}e d^{1/3-2\epsilon/3}$ (as $d\to\infty)$, and so
\[ \varphi(ab) \ll_{\epsilon} d^{2/3+2\epsilon/3}. \]
Consequently,
\[ \#E(F)[{\operatorname{tors}}] = a(ab) \le 2ab \ll_{\epsilon} d^{2/3+\epsilon}. \]
\noindent Case II: $|\Delta_K| < d^{2/3}$.
\noindent We can and will assume that $d\mathfrak{g}e 3$ and that $\#E(F)[{\rm tors}]\mathfrak{g}e 3$. Write $ab = c_1 c_2$, where $(c_1,\Delta_K)=1$ and where every prime dividing $c_2$ divides $\Delta_K$. By [BCS, Theorem 4.8], $E(FK)$ has full $c_1$-torsion, so that $$ c_1^2 \mid \#E(FK)[\operatorname{tors}].$$ Let $\ell^{\alpha}$ be a prime power dividing $c_2$, and let $P$ be a point of $E(FK)$ of order $\ell^{\alpha}$. By [BC, \S6.3], the $\mathcal{O}_K$-submodule of $E(FK)[{\rm tors}]$ generated by $P$ is isomorphic, as a $\mathbb{Z}$-module, to either $\mathbb{Z}/\ell^{\alpha}\mathbb{Z} \oplus \mathbb{Z}/\ell^{\alpha}\mathbb{Z}$ or $\mathbb{Z}/\ell^{\alpha}\mathbb{Z} \oplus \mathbb{Z}/\ell^{\alpha-1}\mathbb{Z}$. It follows that if $r$ is the product of the distinct primes dividing $c_2$, then
\[ c_2^2 \mid r\cdot\#E(FK)[{\rm tors}].\]
Since $\gcd(c_1,c_2)=1$ and $r \mid \Delta_K$, we have
\begin{equation}\label{eq:c1c2square} c_1^2 c_2^2 \mid \Delta_K \cdot \#E(FK)[\operatorname{tors}]. \end{equation}
Applying the proof of Theorem \ref{BIGTHM3} to $FK$, we get that if $\mathfrak{a} \subset \mathcal{O}_K$ is the annihilator ideal of the $\mathcal{O}_K$-module $E(FK)[{\rm tors}]$, then
we have
\[ |\mathfrak{a}| = \#E(FK)[\operatorname{tors}], \]
and
\[ \varphi_K(\mathfrak{a}) \le \frac{w_K}{2} \cdot \frac{2d}{h_K}. \]
Hence, by Theorem \ref{thm:phibound},
\[ |\mathfrak{a}|/\log\log|\mathfrak{a}| \le \frac{w_K}{c} \cdot d \frac{\log |\Delta_K|}{h_K}. \]
Since $\frac{w_K}{c}, \frac{\log |\Delta_K|}{h_K}$ are bounded, this implies $|\mathfrak{a}|/\log\log|\mathfrak{a}| \ll d$, and hence $|\mathfrak{a}| \ll d\log\log{d}$. Hence, $\log\log|\mathfrak{a}| \ll \log\log{d}$, and
\[ \#E(FK)[{\rm tors}]= |\mathfrak{a}| \ll \frac{\log|\Delta_K|}{h_K}\cdot d\log\log{d}. \]
Now \eqref{eq:c1c2square} implies that
\[ \#E(F)[{\rm tors}]^2 = (a^2 b)^2 = a^2 \cdot a^2 b^2 \le 4 c_1^2 c_2^2 \ll \frac{|\Delta_K| \log|\Delta_K|}{h_K}\cdot d\log\log{d}. \]
By Siegel's Theorem, $h_K \mathfrak{g}g_{\epsilon} |\Delta_K|^{1/2-\epsilon}$. Thus (keeping in mind our upper bound on $|\Delta_K|$ in this case),
\[ \frac{|\Delta_K| \log|\Delta_K|}{h_K} \ll_{\epsilon} |\Delta_K|^{1/2+2\epsilon} \le d^{1/3 + 4\epsilon/3}, \]
so that
\[ \#E(F)[{\rm tors}]^2 \ll_{\epsilon} d^{4/3+2\epsilon}. \]
Hence, \[ \#E(F)[{\operatorname{tors}}] \ll_{\epsilon} d^{2/3+\epsilon}.\]
The result follows from combining Cases I and II.
\end{proof}
\noindent
The above results suggest to us that the upper order of $T_{\mathrm{CM}}^{\circ}(d)$ is $d^{2/3+o(1)}$, but we cannot yet prove this. When the CM is $F$-rationally defined, we were able to take advantage of the recent work \cite{BC17}. The authors of \cite{BC17} are pursuing analogous algebraic results when the CM is not rationally defined. In view of this, we hope to revisit the upper order of $T_{\mathrm{CM}}^{\circ}(d)$ later and present more definitive results.
\subsection{An analogue of Theorem \ref{BIGTHM5} in the non-CM case}
Our method of showing $\limsup_{d \rightarrow \infty} \frac{ T_{\mathcal{O}\text{-}\mathrm{CM}}^{\bullet}(d)}{d \log \log d} \geq \frac{e^{\gamma} \pi}{\sqrt{|\Delta|}}$ is very nearly the ``naive approach'' of starting with an $\mathcal{O}$-CM elliptic curve defined over $F_0 = K(\mathfrak{f})$
and extending the base to $\tilde{F}_n = F_0(E[N_n])$. In fact we pass to the Weber function field $F_n = F_0(\mathfrak{h}(E[N_n]))$ and then twist $E_{/F_n}$ to get full $N_n$-torsion. We know the degree $[F_n:\mathbb{Q}]$ exactly; the degree $[\tilde{F}_n:\mathbb{Q}]$ depends on the $F_0$-rational model, but in general is $\# \mathcal{O}^{\times}$ times as large, so the naive approach would give \[\limsup_{d \rightarrow \infty} \frac{ T_{\mathcal{O}\text{-}\mathrm{CM}}^{\bullet}(d)}{d \log \log d} \geq \frac{e^{\gamma} \pi}{\# \mathcal{O}^{\times} \sqrt{|\Delta|}}. \]
So the naive approach comes within a twist of giving the \emph{true upper order} of $T_{\mathrm{CM}}(d)$.
\\ \\
Observe that in the CM case, fixing the quadratic order $\mathcal{O}$ fixes the $\mathfrak{g}_{\mathbb{Q}}$-conjugacy class
of the $j$-invariant. This motivates the following definition: let $j \in \overline{\mathbb{Q}} \subset \mathbb{C}$ and let $F_0 = \mathbb{Q}(j)$. For
positive integers $d$ divisible by $[F_0:\mathbb{Q}]$, put
\[ T_j(d) = \sup \# E(F)[\operatorname{tors}], \]
the supremum ranging over number fields $F \subset \mathbb{C}$ with $[F:\mathbb{Q}] = d$ and elliptic curves $E_{/F}$ such that $j(E) = j$. Note
that we could equivalently range over all elliptic curves $E_{/F}$ such that $j(E)$ and $j$ are $\mathfrak{g}_\mathbb{Q}$-conjugates.
\begin{thm}[Breuer \cite{Breuer10}]\mbox{ }
\label{BREUERTHM}
\begin{enumerate}
\item[a)] If $j \in \overline{\mathbb{Q}}$ is a CM $j$-invariant, then
\[ \limsup_d \frac{T_j(d)}{d \log \log d} \in (0,\infty). \]
\item[b)] If $j \in \overline{\mathbb{Q}}$ is not a CM $j$-invariant, then
\[ \limsup_d \frac{T_j(d)}{\sqrt{d \log \log d}} \in (0,\infty). \]
\end{enumerate}
\end{thm}
\noindent
Breuer states his results for a fixed elliptic curve $E_{/F_0}$, but an immediate twisting argument gives the result for fixed $j$.
\\ \indent
From this perspective our Theorem \ref{BIGTHM5} can be viewed as sharpening Theorem \ref{BREUERTHM}a) by computing the value of $\limsup_d \frac{T_j(d)}{d \log \log d}$
for every CM $j$-invariant. We will now
give an analogous sharpening of Theorem \ref{BREUERTHM}b). For a non-CM elliptic curve $E$ defined over a number field $F$, we define the \textbf{reduced Galois representation}
\[ \overline{\rho}_N\colon \mathfrak{g}_F \ensuremath{\rightarrow} \operatorname{GL}_2(\mathbb{Z}/N\mathbb{Z})/\{\pm 1\} \]
as the composite of the mod $N$ Galois representation $\rho_N\colon \mathfrak{g}_F \ensuremath{\rightarrow} \operatorname{GL}_2(\mathbb{Z}/N\mathbb{Z})$ with the quotient map
$\operatorname{GL}_2(\mathbb{Z}/N\mathbb{Z}) \ensuremath{\rightarrow} \operatorname{GL}_2(\mathbb{Z}/N\mathbb{Z})/\{ \pm 1\}$. The point is that if $(E_1)_{/F}$ and ${(E_2)}_{/F}$ have $j(E_1) = j(E_2)$,
then their reduced Galois representations are the same (up to conjugacy in $\operatorname{GL}_2(\mathbb{Z}/N\mathbb{Z})/\{\pm 1\}$). We say that $j \in \overline{\mathbb{Q}}$
is \textbf{truly Serre} if for every $E_{/F_0}$ with $j(E) = j$, the reduced representation $\overline{\rho}_N$ is surjective for all $N \in \mathbb{Z}^+$.
\begin{thm}
\label{NONCMTHM}
Let $j \in \overline{\mathbb{Q}} \subset \mathbb{C}$ be a non-CM $j$-invariant, and let $F_0 = \mathbb{Q}(j)$.
\begin{enumerate}
\item[a)] We have \begin{equation}
\label{TWISTEDBREUER1}
\limsup_{d \rightarrow \infty} \frac{T_j(d)}{\sqrt{d \log \log d}} \geq \sqrt{\frac{\pi^2 e^{\gamma}}{3[F_0:\mathbb{Q}]}}.
\end{equation}
\item[b)] If $j$ is truly Serre, then
\begin{equation}
\label{TWISTEDBREUER2}
\limsup_{d \rightarrow \infty} \frac{T_j(d)}{\sqrt{d \log \log d}} = \sqrt{\frac{\pi^2 e^{\gamma}}{3[F_0:\mathbb{Q}]}}.
\end{equation}
\end{enumerate}
\end{thm}
\begin{proof}
a) Let ${(E_0)}_{/F_0}$ be an elliptic curve with $j(E_0) = j$. Let $r \mathfrak{g}eq 2$, let $N_r$ be the product of all primes $p \leq r$, and let $F_{N_r} = F_0(x(E_0[N_r]))$.
Then
\[ d_r \coloneqq [F_{N_r}:\mathbb{Q}] \mid [F_0:\mathbb{Q}]\frac{ \# \operatorname{GL}_2(\mathbb{Z}/N_r\mathbb{Z})}{2} = [F_0:\mathbb{Q}]\frac{\prod_{p \leq r} (p^2-1)(p^2-p)}{2}. \]
The mod $N_r$-Galois representation on $(E_0)_{/F_{N_r}}$ has image contained in $\{ \pm 1\}$ and thus is given by
a quadratic character $\chi\colon \operatorname{Aut}(\overline{F_{N_r}}/F_{N_r}) \ensuremath{\rightarrow} \{ \pm 1\}$. Let $(E_{N_r})_{/F_{N_r}}$ be the twist of
$(E_0)_{/F_{N_r}}$ by $\chi$, so that \[(\mathbb{Z}/N_r \mathbb{Z})^2 \hookrightarrow E_{N_r}(F_{N_r}) \]
and thus
\[ \# E_{N_r}(F_{N_r})[\operatorname{tors}] \mathfrak{g}eq N_r^2. \]
Arguments very similar to those made above -- using the Euler product for $\zeta(2)$, Mertens' Theorem and the Prime Number
Theorem -- give
\[ d_r \leq (1+ o(1)) \frac{3[F_0:\mathbb{Q}]}{\pi^2 e^{\gamma} \log r} N_r^4\]
and
\[ \# E(F_{N_r})[\operatorname{tors}]^2 \geq N_r^4 \geq (1+o(1)) \frac{\pi^2 e^{\gamma}}{3[F_0:\mathbb{Q}]} d_r \log \log d_r, \]
and thus (\ref{TWISTEDBREUER1}) follows. \\
b) Let $E_{/F}$ be an elliptic curve with $j(E) = j$ and $[F:\mathbb{Q}] = d$. We may and shall assume that $d\mathfrak{g}e 3$ and that $\# E(F)[\operatorname{tors}] \mathfrak{g}eq 5$. Write
\[ E(F)[\operatorname{tors}] \cong \mathbb{Z}/a\mathbb{Z} \times \mathbb{Z}/ab \mathbb{Z} \]
for $a,b \in \mathbb{Z}^+$, and note that we have $ab \mathfrak{g}eq 3$.
\noindent Step 1: We claim that there is a number field $L \supset F$ such that $F(E[ab]) \subset L$ and $[L:F] \leq b^2$. This is the non-CM analogue of \cite[Thm. 7]{CP15}. As in \emph{loc. cit.} primary decomposition and induction quickly reduce us to the consideration of the case $a = p^A$, $b = p$ for a prime number $p$ and $A \in \mathbb{N}$, and we must show that we have full $p^{A+1}$-torsion
in an extension of degree at most $p^2$. If $A = 0$, then we have an $F$-rational point $P$ of
order $p$. If $Q \in E[p] \setminus \langle P \rangle$, then the Galois orbit on $Q$ has size at most $p^2-p$, so we may
take $L = F(Q)$ and get $[L:F] \leq p^2-p$. If $A \mathfrak{g}eq 1$, then $E$ has full $p^A$-torsion defined over $F$ and an $F$-rational point
$P$ of order $p^{A+1}$. Let $Q \in
E[p^{A+1}]$ be such that $\langle P,Q \rangle = E[p^{A+1}]$. For $\sigma \in \mathfrak{g}_F$, we have $p \sigma(Q) = \sigma(pQ) =
pQ$, so there is $R_{\sigma} \in E[p]$ such that
\[ \sigma(Q) = Q + R_{\sigma}. \]
Thus there are at most $p^2$ possibilities for $\sigma(Q)$, so again we may take $L = F(Q)$.
\noindent
Step 2: Let $W(N) = F_0(x(E[N]))$. Then $L \supset W(ab)$, and thus
\[ [L:\mathbb{Q}] \mathfrak{g}eq [W(ab):F_0][F_0:\mathbb{Q}] = [F_0:\mathbb{Q}] \frac{\# \operatorname{GL}_2(\mathbb{Z}/ab\mathbb{Z})}{2}. \]
Hence,
\[ d = [F:\mathbb{Q}] \mathfrak{g}eq \frac{[L:\mathbb{Q}]}{b^2} \mathfrak{g}eq \frac{[F_0:\mathbb{Q}]}{2b^2} \# \operatorname{GL}_2(\mathbb{Z}/ab\mathbb{Z}). \]
Multiplying by $a^4b^4$ and rearranging, we get
\[ \# E(F)[\operatorname{tors}]^2 = a^4 b^2 \leq \frac{2 d}{[F_0:\mathbb{Q}]} \cdot \frac{(ab)^4}{\#\operatorname{GL}_2(\mathbb{Z}/ab\mathbb{Z})}. \]
Now
\begin{align*} \frac{(ab)^4}{\# \operatorname{GL}_2(\mathbb{Z}/ab\mathbb{Z})} = \prod_{p \mid ab} \frac{p^4}{(p^2-1)(p^2-p)} &= \prod_{p \mid ab} \left(1-\frac{1}{p^2}\right)^{-1} \prod_{p \mid ab}\left(1-\frac{1}{p}\right)^{-1} \\ &\le \frac{\pi^2}{6} \prod_{p \mid ab}\left(1-\frac{1}{p}\right)^{-1}. \end{align*}
Substituting this above,
\begin{equation}\label{eq:a4b2bound} \#E(F)[\operatorname{tors}]^2 = a^4 b^2 \le \frac{\pi^2}{3 [F_0:\mathbb{Q}]} d \prod_{p\mid ab}\left(1-\frac{1}{p}\right)^{-1}. \end{equation}
Rearranging,\[ d \mathfrak{g}e \frac{3[F_0:\mathbb{Q}]}{\pi^2} \cdot a^4 b^2 \cdot \prod_{p \mid ab}(1-1/p) = \frac{3[F_0:\mathbb{Q}]}{\pi^2} a^3 b \cdot \varphi(ab) > \frac{1}{2} ab. \]
(In the last step, we used the lower bound $\varphi(ab) \mathfrak{g}e 2$.) The product in \eqref{eq:a4b2bound} is only increased if $p$ is taken to run over the first $\omega$ primes, where $\omega$ is the number of distinct primes dividing $ab$. Since $ab < 2d$, the first $\omega$ primes all belong to the interval $[1,2\log{d}]$, once $d$ is large enough. Hence, as $d\to\infty$,
\[ \prod_{p\mid ab}\left(1-\frac{1}{p}\right)^{-1} \le \prod_{p \le 2\log{d}} \left(1-\frac{1}{p}\right)^{-1} = (1+o(1)) e^{\gamma} \log\log{d}. \]
Plugging this back into \eqref{eq:a4b2bound} and taking square roots yields the upper bound \[T_j(d) \le \left(\sqrt{\frac{\pi^2 e^{\gamma}}{3[F_0:\mathbb{Q}]}}+o(1)\right)\sqrt{d\log\log{d}}. \] Combining this with the lower bound from part a), the result follows.
\end{proof}
\begin{remark}\mbox{ }
\begin{enumerate}
\item[a)] If $E_{/F_0}$ is an elliptic curve over a number field with surjective adelic Galois representation
$\hat{\rho}\colon \mathfrak{g}_{F_0} \ensuremath{\rightarrow} \operatorname{GL}_2(\widehat{\mathbb{Z}})$,
then $j(E)$ is truly Serre. The converse also holds. Indeed, suppose $j$ is truly Serre, and let $E_{/F_0}$ be any elliptic curve
with $j(E) = j$, let $N \in \mathbb{Z}^+$, and let $\rho_N\colon \mathfrak{g}_{F_0} \ensuremath{\rightarrow} \operatorname{GL}_2(\mathbb{Z}/N\mathbb{Z})$ be the mod $N$ Galois representation. By definition
of truly Serre, we have $\langle \rho_N(\mathfrak{g}_{F_0}),-1\rangle = \operatorname{GL}_2(\mathbb{Z}/N\mathbb{Z})$. It follows \cite[p. 145]{Serre-MW} that $\rho_N(\mathfrak{g}_{F_0}) = \operatorname{GL}_2(\mathbb{Z}/N\mathbb{Z})$. Since this holds for all $N \in \mathbb{Z}^+$, it follows that $\hat{\rho}$ is surjective.
\item[b)] Greicius showed \cite[Thm. 1.2]{Greicius10} that if $E_{/F_0}$ is an elliptic curve over a number field with surjective
adelic Galois representation, then
$F_0 \cap \mathbb{Q}^{\operatorname{ab}} = \mathbb{Q}$ and $\sqrt{\Delta} \notin F_0 \mathbb{Q}^{\operatorname{ab}}$, where $\Delta$ is the discriminant of any Weierstrass model of $E$. Thus if
$[F_0:\mathbb{Q}] \leq 2$ the adelic Galois representation cannot be surjective. Greicius also exhibited an elliptic curve over a non-Galois cubic field with surjective adelic Galois representation \cite[Thm. 1.5]{Greicius10}. Zywina showed \cite{Zywina10}
that if $F_0 \supsetneq \mathbb{Q}$ is a number field such that $F_0 \cap \mathbb{Q}^{\operatorname{ab}} = \mathbb{Q}$ then there is an elliptic curve $E_{/F_0}$
with surjective adelic Galois representation. In fact he shows that ``most Weierstrass equations over
$F_0$'' define an elliptic curve with surjective adelic Galois representation. His work makes it plausible that when measured by height, ``most $j \in F_0$'' are truly Serre.
\end{enumerate}
\end{remark}
\section*{Acknowledgments}
\noindent
We thank Abbey Bourdon and Drew Sutherland for helpful and stimulating conversations on the subject matter of this paper. The second author is supported by NSF award DMS-1402268.
\end{document}
\begin{document}
\title[K-G-frames in Hilbert spaces]
{ $K$-$G$-frames and approximate $K$-$G$-duals in Hilbert spaces }
\author{Jahangir Cheshmavar}
\author{Maryam Rezaei Sarkhaei}
\email{j$_{_-}[email protected]}
\email{[email protected]}
\address{Department of Mathematics,
Payame Noor University, P.O.BOX 19395-3697, Tehran, IRAN.}
\thanks{}
\thanks{}
\thanks{\it 2010 Mathematics Subject Classification: 42C15; 42C40.}
\keywords{frame; $g$-frame; $K$-$g$-frame;
approximate $K$-$g$-duals; redundancy.} \dedicatory{} \commby{}
\begin{abstract}
Approximate duality of frame pairs has been investigated by
Christensen and Laugesen in (Sampl. Theory Signal Image Process.,
9, 2011, 77-90), motivated by important
applications in Gabor systems, wavelets and general frame theory.
In this paper we extend some of the known results on approximate
duality of frames to $K$-$g$-frames. We also obtain new
$K$-$g$-frames and approximate $K$-$g$-duals from a given $K$-$g$-frame
and an approximate $K$-$g$-dual. Finally, we give an equivalent
condition under which a subsequence of a $K$-$g$-frame is still
a $K$-$g$-frame.
\end{abstract}
\maketitle
\section{\bf Introduction and Preliminaries}\label{introduction}
The concept of frames in Hilbert spaces was introduced in the
paper ~\cite{Duffin.Schaeffer} by Duffin and Schaeffer to study
some problems in nonharmonic Fourier series, and was reintroduced by
Daubechies et al. in ~\cite{Daubechies.Grossmann.Meyer} to study
the connection with wavelet and Gabor systems. For special
applications, various generalizations of frames have been proposed, such
as frames of subspaces ~\cite{Cazassa.Kutyniok}, $g$-frames
~\cite{Sun}, and $K$-frames, introduced by G$\breve{a}$vruta ~\cite{Gavruta} to
study atomic systems with respect to a bounded linear operator
$K$ in Hilbert spaces. The concept of $K$-$g$-frames, which is more
general than that of ordinary $g$-frames, was considered by the authors of
~\cite{Asgari.Rahimi,Zhou.Zhu.1,Zhou.Zhu.2}. After that, some
properties of $K$-frames were extended to $K$-$g$-frames in
\cite{Hua.Huang} by Hua and Huang.
One of the main reasons for considering frames and any type of
generalization of frames is that they allow each element in the
space to be non-uniquely represented as a linear combination of
the frame elements, by using their duals; however, it is
usually complicated to calculate a dual frame explicitly. For
example, in practice, one has to invert the frame operator to obtain
the canonical dual frame, which is difficult when the space is
infinite-dimensional. One way to avoid this difficulty is to
consider approximate duals. The concept of approximately dual
frames has been studied since the work of Gilbert et al.
\cite{Gilbert.et.al} in the wavelet setting; see for example
Feichtinger et al. \cite{Feichtinger.Kaiblinger} for Gabor systems.
It was treated systematically by Christensen and Laugesen in
\cite{Christensen.Laugesen} for dual frame pairs.
In this paper, the advantage of $K$-$g$-frames in
comparison with $g$-frames is illustrated; with this motivation, we obtain new
$K$-$g$-frames and approximate $K$-$g$-duals and derive some results
on the approximate duality of $K$-$g$-frames and their redundancy.
The paper is organized as follows. In the rest of this section we review
some notions relating to frames, $K$-frames and $K$-$g$-frames.
New $K$-$g$-frames and the advantage of $K$-$g$-frames are found
in Section 2. In Section 3 we define approximate duality of
$K$-$g$-frames and obtain some important properties of approximate
$K$-$g$-duals; we also extend some results on approximate duality
of frames to $K$-$g$-frames based on the paper
\cite{Christensen.Laugesen}. Section 4 contains two results on
redundancy.
Throughout this paper, $J$ is a subset of integers $\mathbb{Z}$,
$\mathcal{H}$ is a separable Hilbert space,
$\{\mathcal{H}_j\}_{j\in J}$ is a sequence of closed subspaces of
$\mathcal{H}$. Let $\mathcal{B}(\mathcal{H},\mathcal{H}_j)$ be the
collection of all bounded linear operators from $\mathcal{H}$ into
$\mathcal{H}_j$, and $\mathcal{B}(\mathcal{H},\mathcal{H})$ is denoted
by $\mathcal{B}(\mathcal{H})$; for $K \in
\mathcal{B}(\mathcal{H})$, $\mathcal{R}(K)$ is the range of $K$
and also ${I}_{\mathcal{R}(K)}$ is the identity operator on
$\mathcal{R}(K)$. The space $l^2 \left( \{\mathcal{H}_j\}_{j\in J}
\right)$ is defined by
\begin{eqnarray}
l^2 \left ( \{\mathcal{H}_j\}_{j\in J} \right
)=\left\{\{f_j\}_{j\in J}:\,\ f_j \in \mathcal{H}_j,
\|\{f_j\}_{j\in J}\|^2=\sum_{j\in J}\|f_j\|^2 <+\infty \right \} ,
\end{eqnarray}
with inner product given by
\begin{eqnarray}
\langle \{f_j\}_{j\in J}, \{g_j\}_{j\in J} \rangle=\sum_{j\in
J}\langle f_j, g_j \rangle.
\end{eqnarray}
Then $l^2 \left( \{\mathcal{H}_j\}_{j\in J} \right)$ is a Hilbert
space with pointwise operations.\\
Next, we recall some terminology relating to Bessel and $g$-Bessel systems,
frames, $g$-frames, $K$-frames and related notions.
A sequence $\{f_j\}_{j\in J}$ contained in $\mathcal{H}$ is called
a Bessel system for $\mathcal{H}$, if there exists a positive
constant $B$ such that, for all $f \in \mathcal{H}$
\begin{eqnarray}
\sum_{j \in J}|\langle f, f_j \rangle|^2\leq B\|f\|^2.
\end{eqnarray}
The constant $B$ is called a Bessel bound of the system. If, in
addition, there exists a lower bound $A>0$ such that, for all $f
\in \mathcal{H}$
\begin{eqnarray}
A\|K^{\ast}f\|^2\leq \sum_{j \in J}|\langle f, f_j \rangle|^2,
\end{eqnarray}
the system is called a $K$-frame for $\mathcal{H}$. The constants
$A$ and $B$ are called $K$-frame bounds.\\
\noindent {\bf Remark 1:}
If $K=I_{\mathcal{H}}$, then $K$-frames are called the ordinary frames.
A sequence $\{\Lambda_j \in
\mathcal{B}(\mathcal{H},\mathcal{H}_j): j\in J\}$ is called a
$g$-Bessel system for $\mathcal{H}$ with respect to
$\mathcal{H}_{j}$ if there exists a positive constant $B$ such
that, for all $f \in \mathcal{H}$
\begin{eqnarray}
\sum_{j \in J}\|\Lambda_jf\|^2\leq B\|f\|^2.
\end{eqnarray}
The constant $B$ is called a $g$-Bessel bound of the system. If,
in addition, there exists a lower bound $A>0$ such that, for all
$f \in \mathcal{H}$
\begin{eqnarray}
A\|f\|^2\leq \sum_{j \in J}\|\Lambda_jf\|^2,
\end{eqnarray}
the system is called a $g$-frame for $\mathcal{H}$ with respect to
$\{\mathcal{H}_j\}_{j\in J}$. The constants $A$ and $B$ are called
$g$-frame bounds. If $A=B$, the $g$-frame is said to be tight
$g$-frame. For more information on frame theory,
basic properties of the $K$-frames and g-frames, we refer to \cite{Christensen.book, Gavruta,Sun}.
Now we state the basic definition of $K$-$g$-frames, which are
more general than ordinary $g$-frames, as
stated in \cite[Theorem (2.5)]{Asgari.Rahimi}.
\begin{defn}
Let $K \in \mathcal{B}(\mathcal{H})$ and $\Lambda_j \in
\mathcal{B}(\mathcal{H}, \mathcal{H}_j)$ for any $j\in J$. A
sequence $\{\Lambda_j\}_{j\in J}$ is called a K-g-frame for
$\mathcal{H}$ with respect to $\{\mathcal{H}_j\}_{j\in J}$, if there exist
constants $0<A \leq B<\infty$ such that
\begin{eqnarray}\label{Def:K-g-frame}
A\|K^{\ast}f\|^2\leq \sum_{j\in J} \|\Lambda_jf\|^2\leq
B\|f\|^2,\,\ \forall f\in \mathcal{H}.
\end{eqnarray}
The constants $A$ and $B$ are called the lower and upper bounds of
K-g-frame, respectively. A $K$-$g$-frame $\{\Lambda_j\}_{j\in J}$
is said to be tight if there exists a constant $A>0$ such that
\begin{eqnarray}
\sum_{j\in J}\|\Lambda_jf\|^2=A\|K^{\ast}f\|^2,\,\ \forall f\in
\mathcal{H}.
\end{eqnarray}
\end{defn}
\noindent {\bf Remark 2:}
If $K=I_{\mathcal{H}}$, then $K$-$g$-frames are just the ordinary $g$-frames.
\section{\bf New $K$-$g$-frames}
There is an advantage to studying $K$-$g$-frames in comparison with
$g$-frames. As we will see in the following
examples, given a $g$-Bessel sequence in $\mathcal{B}(\mathcal{H},
\mathcal{H}_j)$ which is not a $g$-frame, we can define a
suitable operator $K$ such that this sequence becomes a $K$-$g$-frame
for $\mathcal{H}$ with respect to $\{\mathcal{H}_j\}_{j\in J}$.
Also, we construct new $K$-$g$-frames by
considering a $g$-frame for $\{\mathcal{H}_j\}_{j\in J}$.\\
\noindent {\bf Example 1:}
\label{Exam.1}
Let $\{e_j\}_{j=1}^{\infty}$ be an orthonormal basis for
$\mathcal{H}$ and $\mathcal{H}_j=\overline{span}\{e_j, e_{j+1}\}$, \,\ $j=1,2,3,\cdots$.
Define the operator
$\Lambda_j:\mathcal{H}\rightarrow \mathcal{H}_j$ as
follows:
\begin{eqnarray*}
\Lambda_jf=\langle f, e_j+e_{j+1} \rangle(e_j+e_{j+1}),
\,\ \forall f\in \mathcal{H}.
\end{eqnarray*}
Then,
\begin{eqnarray*}
\sum_{j=1}^{\infty}\|\Lambda_j f\|^2 &=&
2\sum_{j=1}^{\infty}|\langle
f, e_j+e_{j+1} \rangle|^2\\
& \leq & 2\sum_{j=1}^{\infty}(|\langle f, e_j \rangle|+|\langle f, e_{j+1} \rangle|)^2\\
& \leq & 4 \sum_{j=1}^{\infty}|\langle f, e_j \rangle|^2+4
\sum_{j=1}^{\infty}|\langle f, e_{j+1} \rangle|^2\\
& \leq & 8 \|f\|^2.
\end{eqnarray*}
That is, $\{\Lambda_j\}_{j\in J}$ is a $g$-Bessel sequence.
However, $\{\Lambda_j\}_{j\in J}$ does not satisfy the lower
$g$-frame condition, because if we consider the vectors
$g_m:=\sum_{n=1}^m (-1)^{n+1}e_n,\,\ m\in \mathbb{N}$, then
$\|g_m\|^2=m$ for all $m \in \mathbb{N}$. Fixing $m\in \mathbb{N}$,
we see that
\begin{eqnarray*}
\langle g_m, e_j+e_{j+1}\rangle=\left\{
\begin{array}{ll}
0, & j>m \\
(-1)^{m+1}, & j=m \\
0, & j<m
\end{array} \right.
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
\sum_{j=1}^{\infty}\|\Lambda_jg_m\|^2=2\sum_{j=1}^{\infty}|\langle
g_m, e_j+e_{j+1}\rangle|^2=2=\frac{2}{m}\|g_m\|^2, \forall m\in
\mathbb{N},
\end{eqnarray*}
that is, $\{\Lambda_j\}_{j\in J}$ does not satisfy the lower
$g$-frame condition. Now define
\begin{eqnarray*}
K:\mathcal{H} \rightarrow \mathcal{H},\,\
Kf=\sum_{j=1}^{\infty}\langle f, e_j\rangle(e_j+e_{j+1}),\,\
\forall f\in \mathcal{H}.
\end{eqnarray*}
Then $\{\Lambda_j\}_{j\in J}$ is a $K$-$g$-frame for $\mathcal{H}$
with respect to $\{\mathcal{H}_j\}_{j\in J}$.\\
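Indeed, $\|K^{\ast}f\|^2=\sum_{j}|\langle f, e_j+e_{j+1}\rangle|^2$, so $\sum_{j}\|\Lambda_jf\|^2=2\|K^{\ast}f\|^2$ and the lower $K$-$g$-frame condition holds with $A=2$. This can also be checked numerically; the following sketch (Python with NumPy, not part of the original construction) truncates $\mathcal{H}$ to the span of the first $n$ basis vectors, where the truncation level and the random test vectors are our assumptions.
\begin{verbatim}
# Minimal numerical sketch of Example 1, truncated to dimension n (illustrative only).
import numpy as np

n = 50                                          # truncation level (assumption)
rng = np.random.default_rng(0)

# K e_j = e_j + e_{j+1}, so column j of the matrix of K is e_j + e_{j+1}.
K = np.eye(n) + np.diag(np.ones(n - 1), -1)

for _ in range(5):
    f = rng.standard_normal(n)
    f[-1] = 0.0                                 # avoid truncation edge effects
    # sum_j ||Lambda_j f||^2 = 2 * sum_j |<f, e_j + e_{j+1}>|^2
    lam = 2.0 * sum((f[j] + f[j + 1]) ** 2 for j in range(n - 1))
    kstar = np.linalg.norm(K.T @ f) ** 2        # ||K^* f||^2
    print(np.isclose(lam, 2.0 * kstar))         # lower K-g-frame bound A = 2 holds with equality
\end{verbatim}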
\noindent {\bf Example 2:}
\label{Exam.2}
Let $\{e_j\}_{j=1}^{\infty}$ be an
orthonormal basis for $\mathcal{H}$ and
$\mathcal{H}_j=\overline{span}\{e_{3j-2}, e_{3j-1}, e_{3j}\}$, \,\ $j=1,2,3,\cdots$.
Define the operator $\Lambda_j:\mathcal{H}\rightarrow \mathcal{H}_j$
as follows:\\
\begin{eqnarray*}
\Lambda_1f=\langle f, e_1\rangle e_1+\langle f, e_2\rangle e_2+\langle f, e_3\rangle e_3 \,\ \mbox{and}\,\ \Lambda_jf=0, \,\ \mbox{for}\,\
j\geq 2.
\end{eqnarray*}
A simple calculation shows that $\{\Lambda_j\}_{j\in J}$ is not a $g$-frame for
$\mathcal{H}$ with respect to $\{\mathcal{H}_j\}_{j\in J}$,
because, if we take $f=e_4$, then
\begin{eqnarray*}
\|f\|^2=1 \,\ \mbox{and}\,\ \sum_{j=1}^{\infty}\|\Lambda_jf\|^2=\|\Lambda_1e_4\|^2=0.
\end{eqnarray*}
Define now the operator $K:\mathcal{H} \rightarrow \mathcal{H}$ as follows:
\begin{eqnarray*}
Ke_1=e_1,\,\ Ke_2=e_2 \,\ \mbox{and}\,\ Ke_j=0,\,\ \mbox{for}\,\ j\geq 3.
\end{eqnarray*}
It is easy to see that,
$K^*e_1=e_1,\,\ K^*e_2=e_2$ and $K^*e_j=0, \mbox{for}\,\ j\geq 3$.
We show that $\{\Lambda_j\}_{j\in J}$ is a $K$-$g$-frame for
$\mathcal{H}$ with respect to $\mathcal{H}_j$. In fact, for any
$f\in \mathcal{H}$, we have
\begin{eqnarray*}
\|K^*f\|^2=\|\sum_{j=1}^{\infty}\langle f, e_j\rangle K^*e_j\|^2=
|\langle f, e_1\rangle|^2+|\langle f, e_2\rangle|^2,
\end{eqnarray*}
and
\begin{eqnarray*}
\sum_{j=1}^{\infty}\|\Lambda_jf\|^2&=&\|\Lambda_1f\|^2=
\|\langle f, e_1\rangle e_1+\langle f, e_2\rangle e_2+\langle f, e_3\rangle e_3\|^2\\
&=&|\langle f, e_1\rangle|^2+|\langle f, e_2\rangle|^2+|\langle f, e_3\rangle|^2 \geq \|K^*f\|^2.
\end{eqnarray*}
Therefore, for any $f\in \mathcal{H}$
\begin{eqnarray*}
\|K^*f\|^2\leq \sum_{j=1}^{\infty}\|\Lambda_jf\|^2 \leq \|f\|^2,
\end{eqnarray*}
as desired.
The next propositions read as follows:
\begin{prop}
Let $\{\Lambda_j \in \mathcal{B}(\mathcal{H}, \mathcal{H}_j):j\in
J\}$ be a $K$-g-frame for $\mathcal{H}$ with respect to
$\{\mathcal{H}_j\}_{j\in J}$. Then $\{\Lambda_j\}_{j\in J}$ is a
g-frame for $\mathcal{H}$ with respect to $\{\mathcal{H}_j\}_{j\in
J}$ if $K^{\ast}$ is bounded below.
\end{prop}
\begin{proof}
Since $K^{\ast}$ is bounded below, by definition, there exists a
constant $C>0$ such that,
\begin{eqnarray*}
\|K^{\ast}f\|\geq C\|f\|, \forall f\in \mathcal{H}.
\end{eqnarray*}
Therefore, for all $f\in \mathcal{H}$,
\begin{eqnarray*}
AC^2\|f\|^2\leq A\|K^{\ast}f\|^2\leq \sum_{j\in
J}\|\Lambda_{j}f\|^2 \leq B\|f\|^2,
\end{eqnarray*}
that is, $\{\Lambda_j\}_{j\in J}$ is a $g$-frame for $\mathcal{H}$
with respect to $\{\mathcal{H}_j\}_{j\in J}$.
\end{proof}
\begin{prop}
Let $\{\Lambda_j\}_{j\in J}$ be a tight $K$-$g$-frame for
$\mathcal{H}$ with respect to $\{\mathcal{H}_j\}_{j\in J}$ with
$K$-$g$-frame bound $A_1$. Then $\{\Lambda_j\}_{j\in J}$ is a
tight $g$-frame with $g$-frame bound $A_2$ if and only if
$KK^{\ast}=\frac{A_2}{A_1}I_{\mathcal{H}}$.
\end{prop}
\begin{proof}
Let $\{\Lambda_j\}_{j\in J}$ be a tight $g$-frame with bound
$A_2$. Then
\begin{eqnarray*}
\sum_{j \in J}\|\Lambda_jf\|^2=A_2\|f\|^2, \,\ \forall f\in
\mathcal{H}.
\end{eqnarray*}
Since $\{\Lambda_j\}_{j\in J}$ is a tight $K$-$g$-frame with bound
$A_1$, we have $A_1\|K^{\ast}f\|^2=A_2\|f\|^2, \,\ \forall f\in
\mathcal{H}$, that is,
\begin{eqnarray*}
\langle KK^{\ast}f, f\rangle=\langle \frac{A_2}{A_1}f, f\rangle,
\,\ \forall f\in \mathcal{H}.
\end{eqnarray*}
Since $KK^{\ast}-\frac{A_2}{A_1}I_{\mathcal{H}}$ is self-adjoint, this implies $KK^{\ast}=\frac{A_2}{A_1}I_{\mathcal{H}}$. The converse is straightforward.
\end{proof}
The following theorem is an analog
of ~\cite[Theorem (3.3)]{Casazza.Kutyniok.Li} and yields new
$K$-$g$-frames for $\mathcal{H}$:
\begin{thm}
Let $K \in \mathcal{B}(\mathcal{H})$ and $\Lambda_j \in
\mathcal{B}(\mathcal{H}, \mathcal{H}_j)$ for any $j\in J$. Let
$\{\Gamma_{ij} \in \mathcal{B}(\mathcal{H}_j,
\mathcal{H}_{ij}):i\in I_j\}$ be a $g$-frame for $\mathcal{H}_j$
with bounds $C_j$ and $D_j$, such that
$$0<C=\inf_{j\in J} C_j \leq \sup_{j\in J} D_j=D<\infty,$$ where
$\{\mathcal{H}_{ij}\}_{i\in I_j}$ is a sequence of closed
subspaces of $\mathcal{H}_j$, for all $j\in J$. Then the following
statements are equivalent:
\begin{enumerate}
\item[(i)]$\{\Lambda_j \in \mathcal{B}(\mathcal{H}, \mathcal{H}_j):j\in
J\}$ is a $K$-$g$-frame for $\mathcal{H}$
\item[(ii)]$\{\Gamma_{ij}\Lambda_j \in \mathcal{B}(\mathcal{H},
\mathcal{H}_{ij}): i\in I_j, j\in J\}$ is a $K$-$g$-frame
for $\mathcal{H}$.
\end{enumerate}
\end{thm}
\begin{proof}
$(i)\Rightarrow (ii)$ Let $\{\Lambda_j \in
\mathcal{B}(\mathcal{H}, \mathcal{H}_j):j\in J\}$ be a
$K$-$g$-frame for $\mathcal{H}$ with bounds $A_1, B_1$. Then for
all $f \in \mathcal{H}$ we have
\begin{align*}
\sum_{j\in J}\sum_{i\in I_j}\|\Gamma_{ij}\Lambda_jf\|^2 \leq
\sum_{j\in J}D_j\|\Lambda_jf\|^2\leq DB_1\|f\|^2, \\
\sum_{j\in J}\sum_{i\in I_j}\|\Gamma_{ij}\Lambda_jf\|^2 \geq
\sum_{j\in J}C_j\|\Lambda_jf\|^2 \geq CA_1\|K^{\ast}f\|^2.
\end{align*}
$(ii)\Rightarrow (i)$ Let $\{\Gamma_{ij}\Lambda_j \in
\mathcal{B}(\mathcal{H}, \mathcal{H}_{ij}): i\in I_j, j\in J\}$ be
a $K$-$g$-frame for $\mathcal{H}$ with bounds $A_2,B_2$. Then for all $f \in \mathcal{H}$ we have
\begin{align*}
\sum_{j\in J}\|\Lambda_jf\|^2 \leq \sum_{j\in
J}\frac{1}{C_j}\sum_{i\in I_j}\|\Gamma_{ij}\Lambda_jf\|^2 \leq
\frac{B_2}{C} \|f\|^2, \\ \sum_{j\in J}\|\Lambda_jf\|^2 \geq
\sum_{j\in J}\frac{1}{D_j}\sum_{i\in
I_j}\|\Gamma_{ij}\Lambda_jf\|^2 \geq \frac{A_2}{D}\|K^{\ast}f\|^2.
\end{align*}
\end{proof}
Recall that a sequence $\{w_j\}_j$ is called semi-normalized if
there are bounds $b\geq a> 0$, such that $a \leq |w_j|\leq b$.
\begin{prop}
\label{prop.:semi-normal} Suppose that $\{\Lambda_j\}_{j\in J}$ is
a K-g-frame for $\mathcal{H}$ with respect to
$\{\mathcal{H}_j\}_{j\in J}$ with bounds $A,B$ and $K$-g-dual
$\{\Theta_j\}_{j\in J}$. Let $\{w_j\}_{j\in J}$ be a
semi-normalized sequence with bounds $a, b$. Then
$\{w_j\Lambda_j\}_{j\in J}$ is a K-g-frame with bounds $a^2A$ and
$b^2B$. The sequence $\{w_j^{-1}\Theta_j\}_{j\in J}$ is a K-g-dual
of $\{w_j\Lambda_j\}_{j\in J}$.
\end{prop}
\begin{proof}
Since $\sum_{j\in J}\|w_j\Lambda_jf\|^2 = \sum_{j\in
J}|w_j|^2\|\Lambda_jf\|^2$, we have
\begin{eqnarray*}
a^2\sum_{j\in J}\|\Lambda_jf\|^2 \leq \sum_{j\in J}\|w_j\Lambda_jf\|^2 \leq b^2\sum_{j\in
J}\|\Lambda_jf\|^2,
\end{eqnarray*}
that is, $a^2A\|K^{\ast}f\|^2 \leq \sum_{j\in
J}\|w_j\Lambda_jf\|^2 \leq
b^2B\|f\|^2$.\\
We have $\sum_{j\in J}(w_j\Lambda_j)^{\ast}(w_j^{-1}\Theta_j)f=
\sum_{j\in J}\Lambda_j^{\ast}\Theta_jf=f$. Since $\{w_j^{-1}\}_j$
is bounded, $\{w_j^{-1}\Theta_j\}_j$ is a $g$-Bessel sequence.
Therefore, it is a $K$-$g$-dual of $\{w_j\Lambda_j\}_{j\in J}$.
\end{proof}
Suppose that $\{\Lambda_j\}_{j\in J}$ is a $K$-$g$-frame for
$\mathcal{H}$ with respect to $\{\mathcal{H}_j\}_{j\in J}$.
Obviously, it is a $g$-Bessel sequence, so we can define the
bounded linear operator
$T_{\Lambda}:\ell^2\left(\{\mathcal{H}_j\}_{j\in
J}\right)\rightarrow \mathcal{H}$ as follows:
\begin{eqnarray}
T_{\Lambda}(\{g_j\}_{j\in J})=\sum_{j\in J}\Lambda_j^{\ast}g_j,
\,\ \forall \{g_j\}_{j\in J}\in \ell^2
\left(\{\mathcal{H}_j\}_{j\in J}\right).
\end{eqnarray}
The operator $T_{\Lambda}$ is called the synthesis operator (or
pre-frame operator) for the $K$-$g$-frame $\{\Lambda_j\}_{j\in
J}$. The adjoint operator
\begin{eqnarray}
T_{\Lambda}^{\ast}: \mathcal{H}\rightarrow
l^2(\{\mathcal{H}_j\}_{j\in J}),\quad\
T_{\Lambda}^{\ast}f=\{\Lambda_jf\}_{j\in J}, \,\ \forall f\in
\mathcal{H},
\end{eqnarray}
is called the analysis operator for the $K$-$g$-frame
$\{\Lambda_j\}_{j\in J}$. Letting
$S_{\Lambda}=T_{\Lambda}T_{\Lambda}^{\ast}$, we obtain the frame
operator for the $K$-$g$-frame $\{\Lambda_j\}_{j\in J}$ as follows:
\begin{eqnarray}
S_{\Lambda}:\mathcal{H}\rightarrow \mathcal{H},\quad\
S_{\Lambda}f=\sum_{j\in J}\Lambda_j^{\ast}\Lambda_jf, \,\ \forall
f \in \mathcal{H}.
\end{eqnarray}
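In a finite-dimensional truncation these operators can be realized as matrices, which is convenient for experiments. The following sketch (Python with NumPy) assembles $T_{\Lambda}^{\ast}$, $T_{\Lambda}$ and $S_{\Lambda}$ from a finite family $\{\Lambda_j\}$ given as matrices; the dimensions and the randomly chosen $\Lambda_j$ are illustrative assumptions, not data from the paper.
\begin{verbatim}
# Sketch: synthesis, analysis and frame operators for a finite family {Lambda_j}.
import numpy as np

rng = np.random.default_rng(1)
dimH = 6                                   # dim of H (assumption)
dims = [2, 3, 2]                           # dim of each H_j (assumption)
Lambdas = [rng.standard_normal((d, dimH)) for d in dims]   # Lambda_j : H -> H_j

# Analysis operator T_Lambda^*: f -> {Lambda_j f}_j (stacked into one long vector).
T_star = np.vstack(Lambdas)                # shape (sum of dims, dimH)
# Synthesis operator T_Lambda: {g_j}_j -> sum_j Lambda_j^* g_j.
T = T_star.T
# Frame operator S_Lambda = T_Lambda T_Lambda^* = sum_j Lambda_j^* Lambda_j.
S = T @ T_star
assert np.allclose(S, sum(L.T @ L for L in Lambdas))
\end{verbatim}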
One of the main problems in $K$-$g$-frame theory is that the
interchangeability of two $g$-Bessel sequences with respect to a
$K$-$g$-frame differs from the $g$-frame case. The following
characterization of $K$-$g$-frames is given in \cite[Theorem
(2.5)]{Asgari.Rahimi}.
\begin{prop}
Let $K \in \mathcal{B}(\mathcal{H})$. Then the following are
equivalent:
\begin{enumerate}
\item[(i)] $\{\Lambda_j\}_{j\in J}$ is a $K$-$g$-frame for
$\mathcal{H}$ with respect to $\{\mathcal{H}_j\}_{j\in J}$;
\item[(ii)] $\{\Lambda_j\}_{j\in J}$ is a $g$-Bessel sequence for
$\mathcal{H}$ with respect to $\{\mathcal{H}_j\}_{j\in J}$
and there exists a $g$-Bessel sequence
$\{\Gamma_j\}_{j\in J}$ for $\mathcal{H}$ with respect to
$\{\mathcal{H}_j\}_{j\in J}$ such that
\begin{eqnarray}\label{eqn:g-frame.1}
Kf=\sum_{j\in J}\Lambda_j^{\ast}\Gamma_jf,\,\
\forall f\in \mathcal{H}.
\end{eqnarray}
\end{enumerate}
\end{prop}
The positions of the two $g$-Bessel sequences $\{\Lambda_j\}_{j\in J}$
and $\{\Gamma_j\}_{j\in J}$ in (\ref{eqn:g-frame.1}) are not
interchangeable in general, but there exists another type of dual
such that $\{\Lambda_j\}_{j\in J}$ and a sequence derived from
$\{\Gamma_j\}_{j\in J}$ are interchangeable on the subspace
$\mathcal{R}(K)$ of $\mathcal{H}$.
For $K\in \mathcal{B}(\mathcal{H})$, if $\mathcal{R}(K)$ is
closed, then the pseudo-inverse $K^{\dagger}$ of $K$ exists.
\begin{thm}\cite[Theorem (3.3)]{Hua.Huang}
\label{Thm.rep.1}
Suppose that $\{\Lambda_j\}_{j\in J}$ and
$\{\Gamma_j\}_{j\in J}$ are g-Bessel sequence as in
(\ref{eqn:g-frame.1}). Then there exists a sequence
$\{\Theta_j\}_{j\in J}$ such that
\begin{eqnarray}\label{eqn:g-frame.2}
f=\sum_{j\in J}\Lambda_j^{\ast}\Theta_jf, \,\ \forall f \in
\mathcal{R}(K),
\end{eqnarray}
where $\Theta_j=\Gamma_j(K^{\dagger}\mid_{\mathcal{R}(K)})$.
Moreover, $\{\Lambda_j\}_{j\in J}$ and $\{\Theta_j\}_{j\in J}$ are
interchangeable for any $f \in \mathcal{R}(K)$, that is,
\begin{eqnarray}\label{eqn:g-frame.3}
f=\sum_{j\in J}\Theta_j^{\ast}\Lambda_jf, \,\ \forall f \in
\mathcal{R}(K).
\end{eqnarray}
\end{thm}
\section{\bf Approximate $K$-$g$-duals}
Motivated by the concept of approximate dual
frames in ~\cite{Christensen.Laugesen}, we will define and focus
on the approximate dual $K$-$g$-frames for $\mathcal{H}$ with
respect to $\{\mathcal{H}_j\}_{j\in J}$.
By Theorem \ref{Thm.rep.1}, since
$K^{\dagger}\mid_{\mathcal{R}(K)}:\mathcal{R}(K)\rightarrow
\mathcal{H}$, we obtain $\Theta_j:\mathcal{R}(K)\rightarrow
\mathcal{H}_j$. For any $f \in \mathcal{R}(K)$, we have
\begin{eqnarray}
\sum_{j\in J}\|\Theta_jf\|^2= \sum_{j\in J}\|\Gamma_j
K^{\dagger}f\|^2\leq B\|K^{\dagger}f\|^2\leq
B\|K^{\dagger}\|^2\|f\|^2.
\end{eqnarray}
That is, $\{\Theta_j\}_{j\in J}$ is a $g$-Bessel sequence for
$\mathcal{R}(K)$ with respect to $\{\mathcal{H}_j\}_{j\in J}$. Let
$T_{\Theta}$ be the synthesis operator of $\{\Theta_j\}_{j\in J}$.
Consider two mixed operators $T_{\Lambda}T_{\Theta}^{\ast}$ and
$T_{\Theta}T_{\Lambda}^{\ast}$ as follows:
\begin{eqnarray*}\label{eqn:Appr.2}
T_{\Lambda}T_{\Theta}^{\ast}:\mathcal{R}(K)\rightarrow
\mathcal{H}, \,\
T_{\Lambda}T_{\Theta}^{\ast}f=\sum_{j\in J}\Lambda_j^{\ast}\Theta_jf,
\,\ \forall f \in \mathcal{R}(K),
\end{eqnarray*}
\begin{eqnarray*}
T_{\Theta}T_{\Lambda}^{\ast}:\mathcal{H}\rightarrow
\mathcal{R}(K), \,\
T_{\Theta}T_{\Lambda}^{\ast}f=\sum_{j\in J}\Theta_j^{\ast}\Lambda_jf,
\,\ \forall f \in \mathcal{H}.
\end{eqnarray*}
By Theorem \ref{Thm.rep.1}, and similarly to the definition of
duality and approximate duality stated in
~\cite{Christensen.Laugesen}, we have the following definition:
\begin{defn}
Consider two g-Bessel sequences $\{\Lambda_j \in
\mathcal{B}(\mathcal{H},\mathcal{H}_j): j\in J\}$ and $\{\Theta_j
\in \mathcal{B}(\mathcal{H},\mathcal{H}_j): j\in J\}$.
\begin{enumerate}
\item[(i)] The sequences $\{\Lambda_j\}_{j\in J}$ and
$\{\Theta_j\}_{j\in J}$ are dual K-g-frames, when
$T_{\Lambda}T_{\Theta}^{\ast}=I_{\mathcal{R}(K)}$ or
$T_{\Theta}T_{\Lambda}^{\ast}|_{\mathcal{R}(K)}=I_{\mathcal{R}(K)}$.
In this case, we say that $\{\Theta_j\}_{j\in J}$ is a
K-g-dual of $\{\Lambda_j\}_{j\in J}$,
\item[(ii)] The sequences $\{\Lambda_j\}_{j\in J}$ and
$\{\Theta_j\}_{j\in J}$ are approximately dual K-g-frames,
whenever $\|I_{\mathcal{R}(K)}-T_{\Lambda}T_{\Theta}^{\ast}\|<1$ or
$\|I_{\mathcal{R}(K)}-T_{\Theta}T_{\Lambda}^{\ast}|_{\mathcal{R}(K)}\|<1$.
In this case, we say that $\{\Theta_j\}_{j\in J}$ is an
approximate dual K-g-frame or approximate K-g-dual of
$\{\Lambda_j\}_{j\in J}$.
\end{enumerate}
\end{defn}
Note that the two conditions in each part are equivalent, by taking adjoints.
A well-known algorithm for finding the inverse of an operator is the
Neumann series.\\ Since
$\|I_{\mathcal{R}(K)}-T_{\Lambda}T_{\Theta}^{\ast}\|<1$,
$T_{\Lambda}T_{\Theta}^{\ast}$ is invertible with
$$(T_{\Lambda}T_{\Theta}^{\ast})^{-1}=
(I_{\mathcal{R}(K)}-(I_{\mathcal{R}(K)}-T_{\Lambda}T_{\Theta}^{\ast}))^{-1}
=\sum_{n=0}^{\infty}(I_{\mathcal{R}(K)}-T_{\Lambda}T_{\Theta}^{\ast})^n.$$
Therefore, every $f \in \mathcal{R}(K)$ can be reconstructed as
\begin{eqnarray}\label{eqn:Appr.3}
f=\sum_{n=0}^{\infty}(I_{\mathcal{R}(K)}-T_{\Lambda}T_{\Theta}^{\ast})^n
T_{\Lambda}T_{\Theta}^{\ast}f.
\end{eqnarray}
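In finite dimensions the reconstruction formula (\ref{eqn:Appr.3}) can be evaluated by summing the Neumann series up to a finite order. The sketch below (Python with NumPy) assumes the mixed operator $M=T_{\Lambda}T_{\Theta}^{\ast}$ is available as a matrix with $\|I-M\|<1$; the matrix used here is an illustrative assumption and does not come from a specific $K$-$g$-frame.
\begin{verbatim}
# Sketch: Neumann-series reconstruction f = sum_n (I - M)^n M f, with M = T_Lambda T_Theta^*.
import numpy as np

rng = np.random.default_rng(2)
d = 5
A = rng.standard_normal((d, d))
A = 0.5 * A / np.linalg.norm(A, 2)    # ||A|| = 0.5
M = np.eye(d) - A                     # so ||I - M|| = 0.5 < 1 (approximate dual condition)

f = rng.standard_normal(d)
approx, term = np.zeros(d), M @ f
for _ in range(100):                  # partial sums of the Neumann series
    approx += term
    term = (np.eye(d) - M) @ term
print(np.allclose(approx, f))         # recovers f up to truncation error
\end{verbatim}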
The following propositions are analogs of Prop. 3.4 and Prop.
4.1 in ~\cite{Christensen.Laugesen} and yield new approximate
$K$-$g$-duals.
\begin{prop}
Let $\{\Theta_j\}_{j\in J}$ be an approximate $K$-$g$-dual of
$\{\Lambda_j\}_{j\in J}$, then
$\{\Theta_j(T_{\Lambda}T_{\Theta}^{\ast})^{-1}\}$ is a K-g-dual of
$\{\Lambda_j\}_{j\in J}$.
\end{prop}
\begin{proof}
It is easy to see that
$\{\Theta_j(T_{\Lambda}T_{\Theta}^{\ast})^{-1}\}_{j\in J}$ is a
$g$-Bessel sequence and
\begin{eqnarray*}
f=(T_{\Lambda}T_{\Theta}^{\ast})(T_{\Lambda}T_{\Theta}^{\ast})^{-1}f
& = &
\sum_{j\in J}\Lambda_j^{\ast}\Theta_j(T_{\Lambda}T_{\Theta}^{\ast})^{-1}f\\
& = &
\sum_{j\in J}\Lambda_j^{\ast}(\Theta_j\sum_{n=0}^{\infty}
(I_{\mathcal{R}(K)}-T_{\Lambda}T_{\Theta}^{\ast})^nf).
\end{eqnarray*}
Therefore,
$\{\Theta_j(T_{\Lambda}T_{\Theta}^{\ast})^{-1}\}=\{\Theta_j\sum_{n=0}^{\infty}
(I_{\mathcal{R}(K)}-T_{\Lambda}T_{\Theta}^{\ast})^n\}_{j\in J}$ is
a $K$-$g$-dual of $\{\Lambda_j\}_{j\in J}$.
\end{proof}
For each $N\in \mathbb{N}$, define
$\gamma_j^{(N)}=\sum_{n=0}^N\Theta_j(I_{\mathcal{R}(K)}-T_{\Lambda}
T_{\Theta}^{\ast})^n$. Define $T_N:\mathcal{H}\rightarrow
\mathcal{H}$ by
$T_N=\sum_{n=0}^N(I_{\mathcal{R}(K)}-T_{\Lambda}T_{\Theta}^{\ast})^n$.
Then $\gamma_j^{(N)}=\Theta_jT_N, \,\ \forall j\in J$. The
sequence $\{\gamma_j^{(N)}\}_{j \in J}$ is obtained from the
$g$-Bessel sequence $\{\Theta_j\}_{j\in J}$ by a bounded operator,
therefore, it is a $g$-Bessel sequence. For each $f \in
\mathcal{R}(K)$,
\begin{eqnarray*}
T_{\Lambda}T_{\Theta}^{\ast}T_Nf=
\sum_{j\in J}\Lambda_j^{\ast}\Theta_jT_Nf=
\sum_{j\in J}\Lambda_j^{\ast}\gamma_j^{(N)}f=
T_{\Lambda}T_{\Gamma}^{\ast}f,
\end{eqnarray*}
where $T_{\Gamma}$ is the synthesis operator of
$\{\gamma_j^{(N)}\}_{j \in J}$. Thus,
\begin{align*}
T_{\Lambda}T_{\Gamma}^{\ast}f &=T_{\Lambda}T_{\Theta}^{\ast}T_Nf\\
&=[I_{\mathcal{R}(K)}-(I_{\mathcal{R}(K)}-T_{\Lambda}T_{\Theta}^{\ast})]\sum_{n=0}^N
(I_{\mathcal{R}(K)}-T_{\Lambda}T_{\Theta}^{\ast})^nf\\
&=[I_{\mathcal{R}(K)}-(I_{\mathcal{R}(K)}-T_{\Lambda}T_{\Theta}^{\ast})^{N+1}]f,
\end{align*}
by telescoping. Thus
\begin{align}\label{eqn:Appr.7}
\|I_{\mathcal{R}(K)}-T_{\Lambda}T_{\Gamma}^{\ast}\|&=\|(I_{\mathcal{R}(K)}-T_{\Lambda}
T_{\Theta}^{\ast})^{N+1}\|\\
& \leq\|I_{\mathcal{R}(K)}-T_{\Lambda}T_{\Theta}^{\ast}\|^{N+1}.
\end{align}
If $\{\Theta_j\}_{j\in J}$ is an approximate K-g-dual of
$\{\Lambda_j\}_{j\in J}$, then
\begin{align}\label{eqn:Appr.8}
\|I_{\mathcal{R}(K)}-T_{\Lambda}T_{\Theta}^{\ast}\|<1.
\end{align}
By (\ref{eqn:Appr.7}) and (\ref{eqn:Appr.8}), we obtain
\begin{align*}
\|I_{\mathcal{R}(K)}-T_{\Lambda}T_{\Gamma}^{\ast}\|<1,
\end{align*}
that is, $\{\gamma_j^{(N)}\}_{j\in J}$ is an approximate
$K$-$g$-dual of $\{\Lambda_j\}_{j\in J}$.\\
We now summarize what we have proved:
\begin{prop}
Let $\{\Theta_j\}_{j\in J}$ be an approximate K-g-dual of
$\{\Lambda_j\}_{j\in J}$. Then
\begin{align*}
\{\sum_{n=0}^{N}\Theta_j(I_{\mathcal{R}(K)}-T_{\Lambda}T_{\Theta}^{\ast})^n\}_{j\in
J}
\end{align*}
is an approximate K-g-dual of $\{\Lambda_j\}_{j\in J}$.
\end{prop}
\begin{thm}
Let $\{\Lambda_j \in \mathcal{B}(\mathcal{H},\mathcal{H}_j): j\in
J\}$ be a $K$-$g$-frame and $\{\Theta_j \in
\mathcal{B}(\mathcal{H},\mathcal{H}_j): j\in J\}$ be a $g$-Bessel
sequence. Let also $\{f_{i,j}\}_{i\in I_j}$ be a frame for
$\mathcal{H}_j$ with bounds $A_j$ and $B_j$ for every $j\in J$
such that, $0<A=\inf_{j\in J}A_j\leq \sup_{j\in J}B_j=B<\infty$.
Then $\{\Theta_j\}_{j\in J}$ is an approximate $K$-$g$-dual of
$\{\Lambda_j\}_{j\in J}$ if and only if
$E=\{\Theta_j^{\ast}f_{i,j}\}_{i\in I_j, j\in J}$ is an
approximate dual of
$F=\{\Lambda_j^{\ast}\widetilde{f}_{i,j}\}_{i\in I_j, j\in J}$,
where $\{\widetilde{f}_{i,j}\}_{i \in I_j}$ is the canonical dual
of $\{f_{i,j}\}_{i \in I_j}$.
\end{thm}
\begin{proof}
For each $f\in \mathcal{H}$ we have
\begin{align*}
\sum_{j\in J}\sum_{i\in I_j}|\langle f, \Theta_j^{\ast}f_{i,j}
\rangle|^2 & =\sum_{j\in J}\sum_{i\in I_j}|\langle \Theta_jf,
f_{i,j}
\rangle|^2\\
& \leq \sum_{j\in J}B_j\|\Theta_jf\|^2 \leq B\sum_{j\in
J}\|\Theta_jf\|^2.
\end{align*}
This implies that $E$ is a Bessel sequence for $\mathcal{H}$.
Similarly, $F$ is also a Bessel sequence for $\mathcal{H}$.
Moreover, for each $f\in \mathcal{H}$ we have
\begin{align*}
T_{\Theta}T_{\Lambda}^{\ast}f & =
\sum_{j\in J}\Theta_j^{\ast}\Lambda_jf\\
& =\sum_{j\in J}\Theta_j^{\ast}\sum_{i\in I_j}\langle \Lambda_jf,
\widetilde{f}_{i,j} \rangle f_{i,j}\\
& =\sum_{j\in J,i\in I_j}\langle f,
\Lambda_j^{\ast}\widetilde{f}_{i,j} \rangle
\Theta_j^{\ast}f_{i,j}\\
& =T_ET_F^{\ast}f.
\end{align*}
So $\|I-T_{\Theta}T_{\Lambda}^{\ast}\|<1$ if and only if
$\|I-T_ET_F^{\ast}\|<1$, and the result follows.
\end{proof}
\section{\bf Redundancy}
One of the important properties of frame theory is the possibility
of redundancy. For example, in ~\cite[Theorem
(3.2)]{Casazza.Kutyniok} the authors provided sufficient
conditions on the weights of a fusion frame under which it remains a fusion
frame after erasure of some elements, that is, some arbitrary
elements can be removed without destroying the
fusion frame property of the remaining set.\\
Our main result in this section provides an equivalent condition
under which a subsequence of a $K$-$g$-frame is still a
$K$-$g$-frame. In the theory of $K$-$g$-frames, if we have
information on the lower $K$-$g$-frame bound and the norm of the
$K$-$g$-frame elements, we can provide a criterion for how many
elements we can remove:
\begin{prop}
Let $K\in \mathcal{B}(\mathcal{H})$ be such that $K^{\ast}$ is
bounded below with constant $C>0$, and let $\{\Lambda_j \in
\mathcal{B}(\mathcal{H}, \mathcal{H}_j):j\in J\}$ be a
$K$-$g$-frame with lower bound $A>\frac{1}{C}$. Then for each
subset $I\subset J$ with $|I|<AC$ such that $\|\Lambda_j\|=1,
\forall j \in I$, the family $\{\Lambda_j\}_{j\in J\backslash I}$
is a $K$-$g$-frame for $\mathcal{H}$ with respect to
$\{\mathcal{H}_j\}_{j\in J}$ with lower $K$-$g$-frame bound
$AC^2-|I|$.
\end{prop}
\begin{proof}
Given $f \in \mathcal{H}$,
\begin{align*}
\sum_{j\in I}\|\Lambda_j f\|^2 \leq \sum_{j\in I}\|\Lambda_j\|^2
\|f\|^2=|I|\|f\|^2.
\end{align*}
Thus
\begin{align*}
\sum_{j\in J\backslash I}\|\Lambda_j f\|^2 & \geq
A\|K^{\ast}f\|^2-|I|\|f\|^2\\
& \geq AC^2\|f\|^2-|I|\|f\|^2\\
& =(AC^2-|I|)\|f\|^2.
\end{align*}
\end{proof}
\begin{thm}
Let $I\subset J$. Suppose that $\{\Lambda_j \in
\mathcal{B}(\mathcal{H}, \mathcal{H}_j):j\in J\}$ is a
$K$-$g$-frame with bounds $A, B$ and $K$-$g$-frame operator
$S_{\Lambda,J}$. Then the following statements are equivalent:
\begin{enumerate}
\item[(i)]
$I-S^{-1}_{\Lambda,J}S_{\Lambda,I}$ is boundedly invertible,
\item[(ii)] The sequence $\{\Lambda_j\}_{j\in J\backslash I}$ is a $K$-$g$-frame
for $\mathcal{H}$ with respect to $\{\mathcal{H}_j\}_{j\in J}$
with lower $K$-$g$-frame bound
$\frac{A}{\|(I-S^{-1}_{\Lambda,J}S_{\Lambda,I})^{-1}\|^2}$.
\end{enumerate}
\end{thm}
\begin{proof}
Denote the frame operator of $K$-$g$-frame $\{\Lambda_j\}_{j\in
J\backslash I}$ by $S_{\Lambda,J\backslash I}$. Since
$$S_{\Lambda,J\backslash I}=S_{\Lambda,J}-S_{\Lambda,I}=
S_{\Lambda,J}(I-S^{-1}_{\Lambda,J}S_{\Lambda,I}),$$ we have that
$\{\Lambda_j\}_{j\in J \backslash I}$ is a $K$-$g$-frame if and only
if $S_{\Lambda,J\backslash I}$ is boundedly invertible, which holds if
and only if $I-S^{-1}_{\Lambda,J}S_{\Lambda,I}$ is boundedly invertible.\\
Now, for the lower $K$-$g$-frame bound, assume that
$I-S^{-1}_{\Lambda,J}S_{\Lambda,I}$ is invertible. Since
$\{\Lambda_j\}_{j\in J}$ is a $K$-$g$-frame for $\mathcal{H}$ with
respect to $\{\mathcal{H}_j\}_{j\in J}$ with bounds $A$ and $B$,
for any $f \in \mathcal{H}$,
\begin{align*}
f & =S^{-1}_{\Lambda,J}S_{\Lambda,J}f\\
& =S^{-1}_{\Lambda,J}(\sum_{j\in I}\Lambda^{\ast}_j\Lambda_jf +
\sum_{j\in J\backslash
I}\Lambda^{\ast}_j\Lambda_jf)\\
& = S^{-1}_{\Lambda,J}S_{\Lambda,I}f+ \sum_{j\in J\backslash I}
S^{-1}_{\Lambda,J} \Lambda^{\ast}_j\Lambda_jf.
\end{align*}
Hence we have, $(I-S^{-1}_{\Lambda,J}S_{\Lambda,I})f= \sum_{j\in
J\backslash I} S^{-1}_{\Lambda,J} \Lambda^{\ast}_j\Lambda_jf$.
Therefore we obtain
\begin{align*}
\|(I-S^{-1}_{\Lambda,J}S_{\Lambda,I})f\| & =\|\sum_{j\in
J\backslash I} S^{-1}_{\Lambda,J}
\Lambda^{\ast}_j\Lambda_jf\|\\
& = \sup_{g\in \mathcal{H}, \|g\|=1}|\langle \sum_{j\in
J\backslash I} S^{-1}_{\Lambda,J}
\Lambda^{\ast}_j\Lambda_jf, g \rangle|\\
& = \sup_{g\in \mathcal{H}, \|g\|=1}|\sum_{j\in J\backslash I}
\langle\Lambda_jf, \Lambda_jS^{-1}_{\Lambda,J}g \rangle|\\
& \leq \sup_{g\in \mathcal{H}, \|g\|=1}\sum_{j\in J\backslash
I} \|\Lambda_jf\| \|\Lambda_jS^{-1}_{\Lambda,J}g\|\\
& \leq \sup_{g\in \mathcal{H}, \|g\|=1}(\sum_{j\in J\backslash I}
\|\Lambda_jf\|^2)^{\frac{1}{2}}(\sum_{j\in J\backslash I}
\|\Lambda_jS^{-1}_{\Lambda,J}g\|^2)^{\frac{1}{2}}\\
& \leq \sup_{g\in \mathcal{H}, \|g\|=1}(\sum_{j\in J\backslash I}
\|\Lambda_jf\|^2)^{\frac{1}{2}}(\langle
S_{\Lambda,J}(S^{-1}_{\Lambda,J}g),
S^{-1}_{\Lambda,J}g\rangle)^{\frac{1}{2}}.
\end{align*}
That is
\begin{align}\label{Noneqn:1}
\|(I-S^{-1}_{\Lambda,J}S_{\Lambda,I})f\| \leq
\sqrt{A^{-1}}(\sum_{j\in J\backslash I}
\|\Lambda_jf\|^2)^{\frac{1}{2}}.
\end{align}
It follows that $I-S^{-1}_{\Lambda,J}S_{\Lambda,I}$ is well
defined in $\mathcal{H}$. If $I-S^{-1}_{\Lambda,J}S_{\Lambda,I}$
is invertible on $\mathcal{H}$, then for any $f \in \mathcal{H}$
we have
\begin{align}\label{Noneqn:2}
\|K^{\ast}f\|\leq
\|K^{\ast}(I-S^{-1}_{\Lambda,J}S_{\Lambda,I})^{-1}\|\cdot\|(I-S^{-1}_{\Lambda,J}S_{\Lambda,I})f\|.
\end{align}
From (\ref{Noneqn:1}) and (\ref{Noneqn:2}) we have
\begin{align*}
\frac{A}{\|K^{\ast}(I-S^{-1}_{\Lambda,J}S_{\Lambda,I})^{-1}\|^2}\|K^{\ast}f\|^2
\leq \sum_{j\in J\backslash I} \|\Lambda_jf\|^2,\,\ \forall f \in
\mathcal{H}.
\end{align*}
This completes the proof.
\end{proof}
\end{document}
\begin{document}
\title{Combining dependent p-values resulting from multiple effect size
homogeneity tests in meta-analysis for binary outcomes}
\author{Osama Almalik\thanks{Researcher, Department of Mathematics and Computer Science, Eindhoven
University of Technology. E-mail: [email protected].}}
\maketitle
\begin{abstract}
\noindent Testing effect size homogeneity is an essential part of
conducting a meta-analysis. Comparative studies of effect size homogeneity
tests in the case of binary outcomes are found in the literature, but
no test has come out as an absolute winner. An alternative approach
would be to carry out multiple effect size homogeneity tests on the
same meta-analysis and combine the resulting dependent p-values. In
this article we applied the correlated Lancaster method for dependent
statistical tests. To investigate the proposed approach's performance,
we applied eight different effect size homogeneity tests on a case
study and on simulated datasets, and combined the resulting p-values.
The proposed method has similar performance to that of tests based
on the score function in the presence of an effect size when the number
of studies is small, but outperforms these tests as the number of
studies increases. However, the method's performance is sensitive
to the correlation coefficient value assumed between dependent tests,
and only performs well when this value is high. More research is needed
to investigate the method's assumptions on correlation in case of
effect size homogeneity tests, and to study the method's performance
in meta-analysis of continuous outcomes.
\end{abstract}
\section{Introduction}
Meta-analysts still carry out effect size homogeneity tests on a regular
basis \cite{key-1-1-1-1-1-1,key-1-1-1-1-1-2}. Several methods have
been developed to test effect size homogeneity in meta analysis with
multiple 2x2 contingency tables, and the performance of these methods
have been studied in the literature. Jones et al. (1989) studied the
performances of the Likelihood Ratio test, Pearson's chi square test,
Breslow-Day test and Tarone's adjustment to it, a conditional score
test and Liang and Self's normal approximation to the score test \cite{key-1-1-1-1-1}.
Gavaghan et al. (1999) compared the performance of the Peto statistic,
the Woolf statistic, the Q-statistic (applied to the estimates of
the risk difference), Liang and Self's normal approximation to the
score test, and the Breslow-Day test \cite{key-1-1-1-1-2}. Almalik
and van den Heuvel (2018) compared the performance of the fixed effects
logistic regression analysis, the random effects logistic regression
analysis, the Q-statistic, the Bliss statistic, the $I^{2}$, the
Breslow-Day test, the Zelen statistic, Liang and Self's $T_{2}$ and
$T_{R}$ statistics, and the Peto statistic \cite{key-1-1-1-5-2}.
A recent study focused on meta-analysis with rare binary events \cite{key-1-1-1-1-4-1}.
However, no one specific test could be presented as a universal winner
from these comparative studies. An alternative approach would be to
perform multiple tests of effect size homogeneity on the same meta-analysis,
and combine the resulting p-values.
\noindent The earliest method to combine p-values resulting from independent
statistical tests is Fisher's method \cite{key-1-1-1-1}. Since then
many methods have been proposed to combine p-values resulting from
independent statistical tests, see \cite{key-1-1-1-2,key-1-1-1-2-1}
for an overview. The p-values in our context result from different
statistical tests for effect size homogeneity testing the same null
hypothesis. Since these p-values result from the same meta-analysis
dataset, these p-values as correlated \cite{key-1-1-1-6-1}. Therefore,
we only consider methods that combine p-values resulting from dependent
tests, and we present a brief review of these methods here.
\noindent Brown (1975) \cite{key-1-1-1-3} extended Fisher\textquoteright s
method to the case where the p-values result from test statistics
having a multivariate normal distribution with a known covariance
matrix. Kost and McDermott (2002) \cite{key-1-1-1-4} extended Brown\textquoteright s
method analytically for unknown covariance matrices. Other methods
to calculate the covariance matrix have been proposed in the literature
\cite{key-1-1-1-5}. Brown's method and its improvements can only
be applied to the case where the test statistics follow a multivariate
normal distribution. Since most test statistics used to test effect
size homogeneity approximately follow the Chi Square distribution,
these methods cannot be applied.
\noindent Makambi (2003) modified the Fisher statistic, using weights
derived from the data, to accommodate correlation between the p-values
\cite{key-1-1-1-6-1}. However, the author applied methods developed
by Brown to derive the first two moments of the weighted distribution
and to estimate the correlation coefficients, which implies that the
same distributional assumptions made by Brown must hold here. Similar
modifications of the Fisher statistic can be found in the literature
\cite{key-1-1-1-6-2}. Yang (2010) introduced an approximation of
the null distribution of the Fisher statistic, based on the Lindeberg
Central Limit theorem. However, one condition, namely that the test statistics
are m-dependent, clearly does not hold here \cite{key-1-1-1-5-1}.
Another approximation introduced in Yang (2010) based on permutations
is said by the author to be numerically intensive \cite{key-1-1-1-5-1}.
Methods combining dependent p-values for very specific applications
have been proposed in the literature \cite{key-1-1-1-7-1,key-1-1-1-7-2,key-1-1-1-7-4},
and Bootstrap methods have been applied to this problem as well \cite{key-1-1-1-8-1}.
\noindent One method developed by Hartung (1999) considered dependent
test statistics testing one null hypothesis, each having a continuous
distribution under the null hypothesis \cite{key-1-1-1-6}. Using
the probability integral transformation \cite{key-1-1-1-8-1-1}, the
dependent p-values are transformed into standardized z values using
the probit function. Then the author proposed a formula for combining
the z values into one z value using weights for each z value and a
correlation coefficient between each two z values. This combined z
value is then used to test the original null hypothesis. Hartung's
method is only applicable for one-sided hypothesis tests, which makes
it applicable for combining p-values resulting from effect size homogeneity
tests based on random effects having a one-sided alternative hypothesis.
However, this approach is not promising since effect size homogeneity
tests using random effects have been shown to perform poorly \cite{key-1-1-1-5-2,key-1-1-1-1-4-1}.
\noindent Another possibility is a method developed by Dai et al.
\cite{key-1-1-1-8,key-1-1-1-2-1-1}. Dai's approach uses a combined
test statistic developed earlier by Lancaster \cite{key-1-1-1-7},
but adjusted to incorporate correlations between the p-values. Lancaster
presented a test statistic that transforms the independent p-values
into a Chi squared test statistic using the inverse cumulative distribution
function of the Gamma distribution. Dai et al. (2014) noted that after
introducing correlation between the p-values the Lancaster statistic
no longer follows the Chi square distribution, and provided five approaches
to approximate the distribution of the Lancaster test statistic under
correlation. The approximation is done using the observed test statistics
and their corresponding degrees of freedom. The basic approach presented
by the authors is the Satterthwaite approximation \cite{key-1-1-1-9},
and the other four approaches are based on the Satterthwaite approximation
as well. The authors recommended using the Satterthwaite approximation
as the standard procedure to adjust the Lancaster statistic. See Section
2.2 for a detailed description of the Satterthwaite approximation
of the correlated Lancaster test statistic.
\noindent This article is structured as follows. Section 2 presents
a short description of the tests for effect size homogeneity using
two-sided alternative hypothesis and of the correlated Lancaster procedure.
Section 3 describes the case study and the simulation model used.
The results are presented in section 4 and the discussion is relegated
to section 5.
\section{Statistical methods based on effect size homogeneity tests using
two-sided alternative hypothesis}
We first introduce notation that will be used throughout this section.
For a binary clinical outcome, let $X_{1i}$ and $X_{0i}$ be the
number of successes in the treatment group and the control group in
study $i$ (i$=1,2,...,m$) out of $n_{1i}$ and $n_{0i}$ trails,
respectively. Methods testing effect size homogeneity assume that
$E\left(X_{1i}\right)=n_{1i}\cdot p_{1i}$, with $p_{1i}$ the proportion
of success for the treatment group in study $i$. The Odds ratio is
calculated by $\hat{OR}_{i}=X_{1i}\left(n_{0i}-X_{0i}\right)/\left[\left(n_{1i}-X_{1i}\right)X_{0i}\right]$
and its standard error is given by $\hat{se}_{i}^{2}=\frac{1}{X_{1i}}+\frac{1}{n_{1i}-X_{1i}}+\frac{1}{X_{0i}}+\frac{1}{n_{0i}-X_{0i}}$.
All methods testing effect size homogeneity in this section assume
that the proportions $p_{ji}$, $j=0,1$, satisfy the following form:
$logit\left(p_{ji}\right)=\alpha_{i}+\left(\beta+\gamma_{i}\right)\cdot t_{ji}$,
with $\alpha_{i}$ an intercept for study $i$, $\beta$ the (mean)
effect size, $t_{ji}$ a treatment indicator variable for study $i$
with value 1 when $j=1$ and 0 otherwise, and $\gamma_{i}$ a study
treatment interaction effect for study $i$. Then effect size homogeneity
tests apply the following hypotheses
\noindent
\begin{equation}
H_{0}:\gamma_{1}=\gamma_{2}=\ldots=\gamma_{m}=0\,\,\,vs\,\,H_{1}:\gamma_{i}\neq\gamma_{i^{'}}\:for\,some\,i\neq i^{'}.\label{eq:null_fixed}
\end{equation}
\subsection{Tests of effect size homogeneity }
\noindent In this section the effect size homogeneity tests based
on fixed-effects are briefly described. Under the null hypothesis
in (\ref{eq:null_fixed}), all these tests have an approximately Chi
squared distributed test statistic with $m-1$ degrees of freedom
\cite{key-1-1-1-8-2,key-1-1-1-8-3,key-1-1-1-8-4,key-1-1-1-8-5,key-1-1-1-8-6,key-1-1-1-8-7,key-1-1-1-8-8,key-1-1-1-8-9}.
\subsubsection{The Likelihood ratio test }
Assuming a fixed-effects logistic regression model, and using the
notation presented above, the success probability $p_{ji}$ for a
subject in study $i$ receiving treatment $j$, $j=0,1$, is given
by\linebreak{}
$p_{ji}=\exp\left(\alpha_{i}+\left(\beta+\gamma_{i}\right)t_{ji}\right)/\left(1+\exp\left(\alpha_{i}+\left(\beta+\gamma_{i}\right)t_{ji}\right)\right)$.
The log Likelihood function can be constructed as follows
\begin{equation}
\begin{array}{lr}
l_{F}\left(\alpha_{i},\beta,\gamma_{i}\right)=\sum_{i=1}^{m}\left(X_{1i}\left(\alpha_{i}+\beta+\gamma_{i}\right)-n_{1i}\log\left(1+\exp\left(\alpha_{i}+\beta+\gamma_{i}\right)\right)\right)\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\quad\,+\sum_{i=1}^{m}\left(X_{0i}\alpha_{i}-n_{0i}\log\left(1+\exp\left(\alpha_{i}\right)\right)\right).
\end{array}\label{eq:Fixed logistic}
\end{equation}
\noindent The Maximum Likelihood estimates are obtained under the
Full and the Null models, denoted from now on by indexes F and N,
respectively. The Likelihood ratio test statistic is given by \\
$T_{1}=-2\left(l_{N}\left(\hat{\alpha}_{i\left(N\right)},\hat{\beta}_{\left(N\right)},0\right)-l_{F}\left(\hat{\alpha}_{i\left(F\right)},\hat{\beta}_{\left(F\right)},\hat{\gamma}_{i}\right)\right)$.
\subsubsection{Tests based on the Q-statistic }
\noindent Defining $\hat{\beta}_{i}=\log\left(\hat{OR}_{i}\right)$
as the primary effect size, the Q-statistic \cite{key-1-1-1-8-2}
is given by $T_{2}=\sum_{i=1}^{m}\left(\hat{\beta}_{i}-\bar{\beta}\right)^{2}/\hat{se}_{i}^{2}$,
with $\bar{\beta}$ a weighted average given by $\bar{\beta}=\sum_{i=1}^{m}\left(\hat{\beta_{i}}/\hat{se}_{i}^{2}\right)/\sum_{i=1}^{m}\left(1/\hat{se}_{i}^{2}\right)$.
Bliss's test statistic is given by
\[
T_{3}=\left(m-1\right)+\sqrt{\left(\bar{n}-4\right)/\left(\bar{n}-1\right)}\left\{ \left(\bar{n}-2\right)\cdot T_{2}/\bar{n}-\left(m-1\right)\right\}
\]
with $\bar{n}=\left(\sum_{i=1}^{m}\left(n_{1i}+n_{0i}-2\right)\right)/m$
\cite{key-1-1-1-8-3,key-1-1-1-5-2}.
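\noindent As an illustration, the statistics $T_{2}$ and $T_{3}$ can be computed
directly from the $2\times2$ tables. The following sketch (Python with NumPy and
SciPy) uses a small set of hypothetical study counts as placeholders; it is not
based on the case study data.
\begin{verbatim}
# Sketch: Q-statistic (T2) and Bliss statistic (T3) from 2x2 tables; hypothetical counts.
import numpy as np
from scipy.stats import chi2

# x1, n1: events/totals in the treatment groups; x0, n0: events/totals in the controls
x1 = np.array([12, 30, 8]);  n1 = np.array([50, 120, 40])
x0 = np.array([10, 20, 15]); n0 = np.array([55, 110, 45])
m = len(x1)

beta_i = np.log(x1 * (n0 - x0) / ((n1 - x1) * x0))      # log odds ratio per study
se2_i = 1/x1 + 1/(n1 - x1) + 1/x0 + 1/(n0 - x0)         # squared standard error
w = 1 / se2_i
beta_bar = np.sum(w * beta_i) / np.sum(w)               # weighted mean effect size

T2 = np.sum((beta_i - beta_bar) ** 2 / se2_i)           # Q-statistic
n_bar = np.sum(n1 + n0 - 2) / m
T3 = (m - 1) + np.sqrt((n_bar - 4) / (n_bar - 1)) * ((n_bar - 2) * T2 / n_bar - (m - 1))

p_Q, p_Bliss = chi2.sf(T2, df=m - 1), chi2.sf(T3, df=m - 1)   # both approx. chi2(m-1)
\end{verbatim}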
\subsubsection{Tests based on the score function}
\noindent The Breslow-Day approach \cite{key-1-1-1-8-4} adjusted
by Tarone \cite{key-1-1-1-8-6} can be described as follows. Firstly,
the Cochran-Mantel-Haenszel pooled odds ratio $OR_{C}$ is calculated
using \linebreak{}
$\hat{OR}_{C}=\sum_{i=1}^{m}\left(X_{1i}\left(n_{0i}-X_{0i}\right)/n_{i}\right)/\sum_{i=1}^{m}\left(X_{0i}\left(n_{1i}-X_{1i}\right)/n_{i}\right)$.
Define $E_{C}\left(X_{1i}\right)=E\left(X_{1i}|X_{i},\hat{OR}_{C}\right)$
as the expected value of $X_{1i}$ given $X_{i}=X_{0i}+X_{1i}$, where
$E_{C}\left(X_{1i}\right)$ is obtained by solving the following equation
\begin{equation}
\left(\hat{OR}_{C}-1\right)E_{C}\left(X_{1i}\right)^{2}-\left(\left(X_{i}+n_{1i}\right)\hat{OR}_{C}+\left(n_{0i}-X_{i}\right)\right)E_{C}\left(X_{1i}\right)+X_{i}n_{1i}\hat{OR}_{C}=0\label{eq:Exp value}
\end{equation}
\noindent The Breslow-Day approach adjusted by Tarone statistic is
given by
\noindent
\[
T_{4}=\sum_{i=1}^{m}\frac{\left(X_{1i}-E_{C}\left(X_{1i}\right)\right)^{2}}{var\left(X_{1i}|X_{i},\hat{OR}_{C}\right)}-\frac{\left(\sum_{i=1}^{m}X_{1i}-\sum_{i=1}^{m}E_{C}\left(X_{1i}\right)\right)^{2}}{\sum_{i=1}^{m}var\left(X_{1i}|X_{i},\hat{OR}_{C}\right)}.
\]
\noindent where $var\left(X_{1i}|X_{i},\hat{OR}_{C}\right)$ is given by
\begin{equation}
var\left(X_{1i}|X_{i},\hat{OR}_{C}\right)=\left(\frac{1}{E_{C}\left(X_{1i}\right)}+\frac{1}{X_{i}-E_{C}\left(X_{1i}\right)}+\frac{1}{n_{1i}-E_{C}\left(X_{1i}\right)}+\frac{1}{n_{0i}-X_{i}+E_{C}\left(X_{1i}\right)}\right)^{-1}.\label{eq:variance}
\end{equation}
\noindent Zelen's test statistic \cite{key-1-1-1-8-6}, later corrected
by Halperin et al. \cite{key-1-1-1-8-7}, can be described as follows.
Firstly, the odds ratio $\hat{OR}_{Z}$ is given by $\hat{OR}_{Z}=\exp\left(\hat{\beta}_{\left(N\right)}\right)$
with $\hat{\beta}_{\left(N\right)}$ the Maximum Likelihood estimator
of the Fixed effects logistic regression analysis (\ref{eq:Fixed logistic})
under the null hypothesis of effect size homogeneity. The corrected
Zelen statistic for testing effect size homogeneity is given by \linebreak{}
$T_{5}=\sum_{i=1}^{m}\left(X_{1i}-E_{Z}\left(X_{1i}\right)\right)^{2}/var\left(X_{1i}|X_{i},\hat{OR}_{Z}\right)$,
with $E_{Z}\left(X_{1i}\right)$ now obtained by solving equation
(\ref{eq:Exp value}) with $\hat{OR}_{C}$ replaced by $\hat{OR}_{Z}$,
and $var\left(X_{1i}|X_{i},\hat{OR}_{Z}\right)$ is obtained by equation
(\ref{eq:variance}) with $E_{C}\left(X_{1i}\right)$ replaced by
$E_{Z}\left(X_{1i}\right)$.
\noindent Liang and Self (1985) \cite{key-1-1-1-8-8} developed the
following test statistic using $\hat{OR}_{CL}=\exp\left(\hat{\beta}_{\left(CL\right)}\right)$,
with $\hat{\beta}_{\left(CL\right)}$ the Maximum Likelihood estimator
of the conditional Likelihood function given $X_{i}$ and $\gamma_{i}=0$,
$\forall i$. The test statistic is given by $T_{6}=\sum_{i=1}^{m}\left(X_{1i}-E_{CL}\left(X_{1i}\right)\right)^{2}/var\left(X_{1i}|X_{i},\hat{OR}_{CL}\right)$,
with $E_{CL}\left(X_{1i}\right)$ now obtained by solving equation
(\ref{eq:Exp value}) with $\hat{OR}_{C}$ replaced by $\hat{OR}_{CL}$,
and $var\left(X_{1i}|X_{i},\hat{OR}_{CL}\right)$ is obtained by equation
(\ref{eq:variance}) with $E_{C}\left(X_{1i}\right)$ replaced by
$E_{CL}\left(X_{1i}\right)$.
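\noindent For concreteness, the expected counts defined by (\ref{eq:Exp value}) and the
statistic $T_{4}$ can be computed as in the following sketch (Python with NumPy and
SciPy). The admissible root of the quadratic is selected per study; the counts are
hypothetical placeholders, as in the previous sketch.
\begin{verbatim}
# Sketch: Breslow-Day test with Tarone's adjustment (T4); hypothetical counts.
import numpy as np
from scipy.stats import chi2

x1 = np.array([12, 30, 8]);  n1 = np.array([50, 120, 40])
x0 = np.array([10, 20, 15]); n0 = np.array([55, 110, 45])
n = n1 + n0
X = x1 + x0

# Cochran-Mantel-Haenszel pooled odds ratio
OR = np.sum(x1 * (n0 - x0) / n) / np.sum(x0 * (n1 - x1) / n)

# Solve (OR-1)E^2 - ((X+n1)OR + (n0-X))E + X*n1*OR = 0 for the expected count E
a, b, c = OR - 1, -((X + n1) * OR + (n0 - X)), X * n1 * OR
disc = np.sqrt(b ** 2 - 4 * a * c)
roots = np.stack([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
valid = (roots >= np.maximum(0, X - n0)) & (roots <= np.minimum(X, n1))
E = np.where(valid[0], roots[0], roots[1])               # admissible root per study

V = 1 / (1/E + 1/(X - E) + 1/(n1 - E) + 1/(n0 - X + E))  # conditional variance
T4 = np.sum((x1 - E) ** 2 / V) - (np.sum(x1) - np.sum(E)) ** 2 / np.sum(V)
p_BDT = chi2.sf(T4, df=len(x1) - 1)
\end{verbatim}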
\subsubsection{Woolf statistic}
The Woolf statistic \cite{key-1-1-1-8-8-1} is given by $T_{7}=\sum_{i=1}^{m}\left(\hat{\beta_{i}}/\hat{se}_{i}\right)^{2}-\left(\sum_{i=1}^{m}\left(\hat{\beta_{i}}/\hat{se}_{i}^{2}\right)\right)^{2}/\sum_{i=1}^{m}\left(1/\hat{se}_{i}^{2}\right)$.
\subsubsection{Peto test}
\noindent The Peto statistic is given by \cite{key-1-1-1-8-9}
\noindent
\[
T_{8}=\sum_{i=1}^{m}\frac{\left(X_{1i}-X_{i}n_{1i}/n_{i}\right)^{2}}{V_{i}}-\frac{\left(\sum_{i=1}^{m}\left(X_{1i}-X_{i}n_{1i}/n_{i}\right)\right)^{2}}{\sum_{i=1}^{m}V_{i}}
\]
\noindent where $n_{i}=n_{1i}+n_{0i}$ and $V_{i}=\left(X_{i}\left(n_{i}-X_{i}\right)n_{1i}n_{0i}\right)/\left(n_{i}^{2}\left(n_{i}-1\right)\right)$.
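\noindent The Peto statistic is straightforward to evaluate; a sketch using the same
hypothetical counts as in the previous sketches is given below.
\begin{verbatim}
# Sketch: Peto statistic (T8); hypothetical counts as in the earlier sketches.
import numpy as np
from scipy.stats import chi2

x1 = np.array([12, 30, 8]);  n1 = np.array([50, 120, 40])
x0 = np.array([10, 20, 15]); n0 = np.array([55, 110, 45])
n = n1 + n0
X = x1 + x0

O_minus_E = x1 - X * n1 / n                               # observed minus expected events
V = X * (n - X) * n1 * n0 / (n ** 2 * (n - 1))            # hypergeometric variances
T8 = np.sum(O_minus_E ** 2 / V) - np.sum(O_minus_E) ** 2 / np.sum(V)
p_Peto = chi2.sf(T8, df=len(x1) - 1)
\end{verbatim}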
\subsection{Correlated Lancaster procedure for combining correlated p-values}
In this section we describe the correlated Lancaster procedure \cite{key-1-1-1-8}
for combining dependent p-values resulting from the above mentioned
eight effect size homogeneity tests. Lancaster's method assumes there
are $n$ statistical tests, each resulting in a test statistic $T_{i}$,
$i=1,\cdots,n$, with degrees of freedom $df_{i}$ and a p-value $p_{i}$.
Applying the probability integral transformation \cite{key-1-1-1-8-1-1},
it is noted that $1-p_{i}$ are uniformly distributed on $\left(0,1\right)$.
Lancaster showed that for $n$ independent p-values that $T=\sum\limits _{i=1}^{n}\gamma_{\left(df_{i}/2,2\right)}^{-1}\left(1-p_{i}\right)\sim\chi_{df}^{2}$
where $\gamma_{\left(df_{i}/2,2\right)}^{-1}$ is the inverse cumulative
distribution function of a Gamma distribution with a shape parameter
$df_{i}/2$ and a scale parameter 2, and $df=\sum_{i=1}^{n}df_{i}$
\cite{key-1-1-1-7}. Dai et al. (2014), noting that for correlated
p-values the $T$ statistic does not follow a $\chi_{df}^{2}$ anymore,
suggested five methods to approximate the distribution of $T$. The
authors recommended a method using the Satterthwaite approximation
\cite{key-1-1-1-9} as a standard procedure to adjust the $T$ statistic,
which can be described as follows. For a set of correlated p-values,
the authors noted that $E\left(T\right)=\sum\limits _{i=1}^{n}df_{i}=df$,
and $Var\left(T\right)=2\sum\limits _{i=1}^{n}df_{i}+2\sum\limits _{i<k}cov_{ik}$
with $cov_{ik}=cov\left(\gamma_{\left(df_{i}/2,2\right)}^{-1}\left(1-p_{i}\right),\gamma_{\left(df_{k}/2,2\right)}^{-1}\left(1-p_{k}\right)\right)$.
Next the authors defined the statistic $T_{A}=cT\thickapprox\chi_{\nu}^{2}$
where $c=\nu/E\left(T\right)$ and $\nu=2\left[E\left(T\right)\right]^{2}/Var\left(T\right)$,
where $c$ and $\nu$ are chosen so that the first and second moments
of the scaled chi-square distribution and the distribution of $T$
under the null are identical \cite{key-1-1-1-2-1,key-1-1-1-2-1-1}.
The statistic $T_{A}$ can be used for testing the null hypothesis
of effect size homogeneity. The authors presented several methods
to estimate the correlation coefficient $\rho_{ik}$ that enters $cov_{ik}$ \cite{key-1-1-1-8}.
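\noindent The Satterthwaite-adjusted combination described above can be sketched as
follows (Python with NumPy and SciPy). The transformation of the p-values via the
inverse Gamma cumulative distribution function and the moment-matching constants $c$
and $\nu$ follow the formulas above; the covariance term is approximated here as
$cov_{ik}=2\rho_{ik}\sqrt{df_{i}\,df_{k}}$, which is a simple illustrative choice and
not necessarily the estimator recommended by Dai et al.
\begin{verbatim}
# Sketch: correlated Lancaster combination with a Satterthwaite adjustment.
import numpy as np
from scipy.stats import gamma, chi2

def correlated_lancaster(pvals, dfs, rho):
    """Combine dependent p-values; rho is an assumed common correlation between
    the chi-square transformed test statistics (an assumption of this sketch)."""
    pvals, dfs = np.asarray(pvals, float), np.asarray(dfs, float)
    t = gamma.ppf(1 - pvals, a=dfs / 2, scale=2)   # Lancaster transform of each p-value
    T = t.sum()
    df_sum = dfs.sum()                             # E(T) under the null
    cov = 2 * rho * np.sqrt(np.outer(dfs, dfs))    # assumed cov_ik = 2*rho*sqrt(df_i*df_k)
    var_T = 2 * df_sum + (cov.sum() - np.trace(cov))
    nu = 2 * df_sum ** 2 / var_T                   # moment matching: nu = 2 E(T)^2 / Var(T)
    c = nu / df_sum                                # c = nu / E(T)
    return chi2.sf(c * T, df=nu)                   # p-value of T_A = c*T ~ chi2(nu)

# Example: the eight p-values of Table 2 (m = 7 studies, so 6 df per test)
p_comb = correlated_lancaster([0.007514, 3.283e-6, 0.007486, 0.004327, 0.00834,
                               3.122e-6, 0.020232, 0.007485], dfs=[6] * 8, rho=0.75)
\end{verbatim}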
\section{Case study and simulation model}
\subsection{Case study}
Bein et al. (2021) carried out a systematic review and meta-analysis
to investigate the risk of adverse pregnancy, perinatal and early
childhood outcomes among women with subclinical hypothyroidism treated
with Levothyroxince \cite{key-1-1-1-8-14}. Among the extensive study
was a meta-analysis of preterm delivery associated with levothyroxine
treatment versus no treatment among women with subclinical hypothyroidism
during pregnancy. This meta-analysis included seven studies, each
study having a group treated with Levothyroxine and a control group.
For each group the number of preterm delivery (events) was noted.
This meta-analysis was used here as a case study and the data are
shown in Table 1.
\begin{table}[H]
\caption{Meta-analysis studying the effect of Levothyroxine on preterm delivery
among women with subclinical hypothyroidism (Bein et al. 2021) }
\begin{centering}
\begin{tabular}{|l|c|c|c|c|}
\hline
\multirow{2}{*}{Study} & \multicolumn{2}{c|}{Levothyroxine} & \multicolumn{2}{c|}{Control}\tabularnewline
\cline{2-5}
& Events & Total & Events & Total\tabularnewline
\hline
\hline
Casey et al. (2017) & 40 & 339 & 47 & 338\tabularnewline
\hline
Maraka et al. (2016) & 4 & 82 & 30 & 284\tabularnewline
\hline
Maraka et al. (2017) & 60 & 843 & 236 & 4562\tabularnewline
\hline
Nazarpour et al. (2017) & 4 & 56 & 14 & 58\tabularnewline
\hline
Nazarpour et al. (2018) & 18 & 183 & 21 & 183\tabularnewline
\hline
Wang et al. (2012) & 0 & 28 & 9 & 168\tabularnewline
\hline
Zhao et al. (2018) & 7 & 62 & 6 & 31\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\end{table}
\subsection{Simulation model}
The simulation model applied can be described as follows \cite{key-1-1-1-8-13,key-1-1-1-5-2}.
In total $m$ studies are created, and for the $i^{th}$ study $n_{ji}$,
$j=0,1$, are independently drawn from a Poisson distribution with
parameter $\delta$. The random variables $X_{ji}$ are drawn independently
from the Binomial distribution $Bin\left(n_{ji},p_{ji}\right)$, with
$p_{0i}=\exp\left(\alpha_{i}\right)/\left(1+\exp\left(\alpha_{i}\right)\right)$
and $p_{1i}=\exp\left(\alpha_{i}+\beta+\gamma_{i}\right)/\left(1+\exp\left(\alpha_{i}+\beta+\gamma_{i}\right)\right)$.
Here $\alpha_{i}$ is a study-specific intercept, $\beta$ is a constant
effect size, and $\gamma_{i}$ is a study-specific effect size, with
$\alpha_{i}\sim N\left(\alpha,\sigma_{\alpha}^{2}\right)$ and $\gamma_{i}\sim N\left(0,\tau^{2}\right)$.
Two scenarios are considered: effect size homogeneity ($\tau^{2}=0$
) and effect size heterogeneity ($\tau^{2}>0$). The following parameter
values were used in the simulation study: $m=5,10,20,30$, $\delta=50$,
$\alpha=0$, $\beta=0,2$, $\sigma_{\alpha}^{2}=1$, $\tau^{2}=0,0.15,0.3,0.5$.
A number of 1000 simulation runs was carried out for each parameter
combination, and the average Type I error rate and the average statistical
power based on significance level of 0.05 were calculated and presented
in the results section.
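\noindent A single simulated meta-analysis under this model can be generated as in the
following sketch (Python with NumPy); the parameter values shown are taken from the
grid above, the study-specific intercepts are drawn from a normal distribution, as
for the interaction effects, and the function name is ours.
\begin{verbatim}
# Sketch: generate one simulated meta-analysis dataset under the model above.
import numpy as np

def simulate_meta(m=10, delta=50, alpha=0.0, beta=0.0, sigma2_alpha=1.0, tau2=0.0, rng=None):
    rng = rng or np.random.default_rng()
    alpha_i = rng.normal(alpha, np.sqrt(sigma2_alpha), size=m)  # study-specific intercepts
    gamma_i = rng.normal(0.0, np.sqrt(tau2), size=m) if tau2 > 0 else np.zeros(m)
    n0, n1 = rng.poisson(delta, size=m), rng.poisson(delta, size=m)   # group sizes
    p0 = 1 / (1 + np.exp(-alpha_i))                                   # inverse logit
    p1 = 1 / (1 + np.exp(-(alpha_i + beta + gamma_i)))
    x0, x1 = rng.binomial(n0, p0), rng.binomial(n1, p1)               # event counts
    return x1, n1, x0, n0

# e.g. one dataset from the heterogeneity scenario with m = 20 studies and beta = 2
x1, n1, x0, n0 = simulate_meta(m=20, beta=2.0, tau2=0.3, rng=np.random.default_rng(3))
\end{verbatim}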
\section{Results}
This section presents the results of the case-study and the simulation
study. For the correlated Lancaster method we used $\rho_{ik}=0.25,0.5,0.75$.
\subsection{Case study}
\noindent We applied the Breslow-Day test adjusted by Tarone (BDT),
the Bliss test, the Liang \& Self test, the Likelihood ratio test
(LRT), the Peto test, the Q-statistic (Q), the Woolf test and the
Zelen test to test the effect size homogeneity hypothesis for the
meta-analysis study in Bein et al. (2021). Subsequently, we used the
correlated Lancaster method (CORR. LANC.) to combine the eight resulting
p-values. The resulting p-values are shown in Table 2. All homogeneity
tests and the correlated Lancaster method (for all values of $\rho_{ik}$)
rejected the effect size homogeneity hypothesis (p<0.05). The effect
size homogeneity tests based on the score function and the Peto test
produced similar p-values. The Q-statistic and the Bliss statistic
produced substantially lower p-values than all other tests. For the
correlated Lancaster method the p-value increased as the $\rho_{ik}$
value increased.
\begin{table}[H]
\caption{p-values from the effect size homogeneity tests and the correlated
Lancaster method applied to meta-analysis from Bein et al. (2021)}
\centering{}
\begin{tabular}{|>{\raggedright}m{1cm}|>{\raggedright}m{1.6cm}|>{\raggedright}m{2cm}|}
\hline
\multicolumn{2}{|>{\raggedright}p{0.52cm}|}{Method} & p-value\tabularnewline
\hline
\multicolumn{2}{|>{\raggedright}p{1.5cm}|}{BDT} & 0.007514 \tabularnewline
\hline
\multicolumn{2}{|l|}{BLISS} & 0.000003283\tabularnewline
\hline
\multicolumn{2}{|l|}{LIANG \& SELF} & 0.007486 \tabularnewline
\hline
\multicolumn{2}{|l|}{LRT} & 0.004327 \tabularnewline
\hline
\multicolumn{2}{|l|}{PETO} & 0.008340 \tabularnewline
\hline
\multicolumn{2}{|>{\raggedright}p{1.5cm}|}{Q} & 0.000003122 \tabularnewline
\hline
\multicolumn{2}{|l|}{WOOLF} & 0.020232 \tabularnewline
\hline
\multicolumn{2}{|l|}{ZELEN} & 0.007485 \tabularnewline
\hline
\multirow{3}{1cm}{CORR. LANC.} & $\rho_{ik}=0.25$ & 0.000000353 \tabularnewline
\cline{2-3}
& $\rho_{ik}=0.5$ & 0.000043090\tabularnewline
\cline{2-3}
& $\rho_{ik}=0.75$ & 0.000371 \tabularnewline
\hline
\end{tabular}
\end{table}
\subsection{Results of simulation study}
Table 3 shows the average Type I error rates of the Breslow-Day test,
the Bliss test, the Liang \& Self test, the Likelihood ratio test, the Peto test,
the Q-statistic, the Woolf test, the Zelen test and the correlated Lancaster method.
For the correlated Lancaster method, it is noted that the value of
the correlation coefficient between the p-values resulting from effect
size homogeneity test statistics affects the Type I error values,
with the Type I error closest to the nominal value when $\rho_{ik}=0.75$.
In case of no effect size, all homogeneity tests are conservative,
while the correlated Lancaster method is liberal. This pattern is consistent
for the different number of studies included in a meta-analysis. In
case $\beta=2$, all effect size homogeneity tests with the exception
of the Liang \& Self, Likelihood ratio test and the Zelen tests, have
a Type I error value below the nominal value. The Likelihood ratio
test has a Type I error rate above the nominal level for all values
of $m$. The Liang \& Self test has a Type I error rate equal to the nominal
value when $m=5,10$. The Zelen test has a nominal Type I error value
when $m=5$ but a Type I error value above the nominal value when
$m=10$. As $m$ increases, the Type I error rates of the Liang \&
Self and the Zelen tests increase way above the nominal value. The
Q-statistic and the Bliss statistic have a Type I error rates lower
than the nominal value, and these rates decrease as $m$ increases.
When $\rho_{ik}=0.75$ and for all values of $m$, the correlated
Lancaster method has a Type I error either equal to or below the nominal
value.
\begin{table}[H]
\caption{Type I error for fixed-effects homogeneity tests and the correlated
Lancaster method ($\tau^{2}=0$)}
\centering{}
\begin{tabular}{|>{\raggedright}m{1cm}|>{\raggedright}m{1.6cm}|>{\raggedright}m{1cm}|>{\raggedright}m{1cm}|>{\raggedright}m{1cm}|>{\raggedright}m{1cm}|>{\raggedright}m{1cm}|>{\raggedright}m{1cm}|>{\raggedright}m{1cm}|>{\raggedright}m{1cm}|}
\hline
\multicolumn{2}{|>{\raggedright}p{0.5cm}|}{} & \multicolumn{2}{c|}{$m=5$} & \multicolumn{2}{c|}{$m=10$} & \multicolumn{2}{c|}{ $m=20$} & \multicolumn{2}{c|}{ $m=30$}\tabularnewline
\hline
\multicolumn{2}{|>{\raggedright}p{0.52cm}|}{} & $\beta=0$ & $\beta=2$ & $\beta=0$ & $\beta=2$ & $\beta=0$ & $\beta=2$ & $\beta=0$ & $\beta=2$\tabularnewline
\hline
\multicolumn{2}{|>{\raggedright}p{1.5cm}|}{BDT} & 0.044 & 0.045 & 0.049 & 0.045 & 0.049 & 0.051 & 0.040 & 0.068\tabularnewline
\hline
\multicolumn{2}{|l|}{BLISS} & 0.040 & 0.036 & 0.041 & 0.022 & 0.028 & 0.021 & 0.023 & 0.022\tabularnewline
\hline
\multicolumn{2}{|l|}{LIANG \& SELF} & 0.044 & 0.049 & 0.049 & 0.050 & 0.049 & 0.054 & 0.040 & 0.070\tabularnewline
\hline
\multicolumn{2}{|l|}{LRT} & 0.048 & 0.064 & 0.055 & 0.059 & 0.055 & 0.077 & 0.049 & 0.094\tabularnewline
\hline
\multicolumn{2}{|l|}{PETO} & 0.041 & 0.013 & 0.045 & 0.004 & 0.048 & 0.002 & 0.037 & 0.002\tabularnewline
\hline
\multicolumn{2}{|>{\raggedright}p{1.5cm}|}{Q} & 0.047 & 0.042 & 0.046 & 0.023 & 0.039 & 0.024 & 0.028 & 0.027\tabularnewline
\hline
\multicolumn{2}{|l|}{WOOLF} & 0.037 & 0.032 & 0.041 & 0.016 & 0.037 & 0.016 & 0.028 & 0.017\tabularnewline
\hline
\multicolumn{2}{|l|}{ZELEN} & 0.044 & 0.050 & 0.049 & 0.053 & 0.049 & 0.055 & 0.040 & 0.074\tabularnewline
\hline
\multirow{3}{1cm}{CORR. LANC.} & $\rho_{ik}=0.25$ & 0.135 & 0.099 & 0.148 & 0.107 & 0.144 & 0.094 & 0.138 & 0.102\tabularnewline
\cline{2-10}
& $\rho_{ik}=0.5$ & 0.089 & 0.069 & 0.103 & 0.062 & 0.090 & 0.063 & 0.077 & 0.070\tabularnewline
\cline{2-10}
& $\rho_{ik}=0.75$ & 0.057 & 0.050 & 0.071 & 0.036 & 0.060 & 0.044 & 0.054 & 0.050\tabularnewline
\hline
\end{tabular}
\end{table}
\noindent The statistical power is shown in figures 1, 2 and 3. Since
the Type I error was closest to the nominal value when $\rho_{ik}=0.75$,
the statistical power of the correlated Lancaster method is only shown
for the case $\rho_{ik}=0.75$. When there is no effect size, $\beta=0$,
and in the case of low heterogeneity, $\tau^{2}=0.15$, all methods
have a statistical power lower than 30\% when $m=5$, with the correlated
Lancaster method showing higher statistical power than other methods.
Of the other methods, the Likelihood ratio, Liang \& Self
and Breslow-Day tests have slightly higher statistical power, followed
by the Zelen, Peto, Q-statistic, Woolf and Bliss tests. The same pattern remains
as $m$ increases, with the statistical power of all methods increasing
but remaining below 80\%. In the case of moderate and high heterogeneity
($\tau^{2}=0.3,0.5$), the statistical power increases for all methods
when $m=5$, and all methods approach or surpass the nominal value
of 95\% as $m$ increases. The correlated Lancaster method still has
higher statistical power than other methods.
\noindent When $\beta=2$, $\tau^{2}=0.15$ and $m=5$, the Likelihood
ratio test has the highest statistical power. The Likelihood ratio
test is followed by the Zelen, Liang \& Self, BDT and correlated
Lancaster methods. The Q-statistic, Bliss, Woolf and Peto methods
have a clearly lower statistical power. All methods, however, have
a statistical power below 25\%. As $m$ increases, the statistical
power of all methods increases slightly, except for the Peto method
which remains below 10\%. As the heterogeneity level increases ($\tau^{2}=0.3, 0.5$),
the statistical power increases for all methods when $m=5$. The same
pattern persists, with the Likelihood ratio, Zelen, Liang \& Self and
Breslow-Day tests and the correlated Lancaster method still having
the highest power, although the statistical power remains below 50\%.
As $m$ increases, the statistical power increases for all methods.
When $\tau^{2}=0.5$ the Likelihood ratio test's statistical power
exceeds the nominal level, while the Zelen, Liang \& Self, Breslow-Day
and correlated Lancaster methods have statistical power close
to or equal to the nominal value when $m=30$.
\noindent
\begin{figure}
\caption{Statistical power in the case of weak effect size heterogeneity: $\tau^{2}=0.15$}
\end{figure}
\begin{figure}
\caption{Statistical power in the case of moderate effect size heterogeneity: $\tau^{2}=0.3$}
\end{figure}
\begin{figure}
\caption{Statistical power in the case of strong effect size heterogeneity: $\tau^{2}=0.5$}
\end{figure}
\section{Discussion}
The purpose of this article was to apply an approach for combining the p-values
of different effect size homogeneity tests applied to a meta-analysis
of binary outcomes. The proposed approach is an adjustment of a method
introduced in Lancaster (1961) to combine independent p-values into
a Chi squared test statistic. The Lancaster method was adjusted to
incorporate correlation between dependent p-values. The Satterthwaite
method was used to approximate the distribution of the correlated Lancaster test statistic
in case of dependent p-values. The method was originally developed
for aggregating effects in high-dimensional genetic data analysis
(Dai et al. 2012). To study the performance of the proposed method
we analyzed a real life meta-analysis, and we carried out a simulation
study with multiple scenarios including different number of studies,
different effect sizes and different levels of heterogeneity. We tested
the null hypothesis of effect size homogeneity using the Breslow-Day
test, the Bliss test, the Liang \& Self test, the Likelihood ratio
test, the Peto test, the Q-statistic, the Woolf test, and the Zelen
test for the case study and for the simulated datasets. Subsequently,
we combined the resulting p-values from these eight effect size homogeneity
tests using the correlated Lancaster method. For the case study we
compared the performance of the correlated Lancaster method to that
of the eight effect size homogeneity tests using the p-values. For
the simulation study we did the comparison using the average Type
I error rates and the average statistical power.
\noindent Some findings regarding the effect size homogeneity tests
have been established earlier in the literature. The Likelihood ratio
test is liberal, and tests based on the score function (Breslow-Day,
Liang \& Self and Zelen tests) perform well when the number of studies
is small \cite{key-1-1-1-1-1,key-1-1-1-5-2}. However, as the number
of studies increases, these score function tests tend to become liberal
\cite{key-1-1-1-5-2}. The Q-statistic and the Bliss statistic are
conservative and they get more conservative as the number of studies
increases \cite{key-1-1-1-1-1,key-1-1-1-5-2}.
\noindent The correlated Lancaster method is sensitive to the value
of the correlation coefficient between the test statistics, as the combined
p-value is positively correlated with the value of the correlation
coefficient. This can be explained by the fact that higher values
of the correlation coefficient result in a larger variance of the
correlated Lancaster statistic. This produces a smaller value of the
correction constant $c$ and in turn a smaller value of the correlated
Lancaster test statistic, and thereby a larger p-value. The correlated
Lancaster method performs best in the case of a high positive correlation
between the dependent test statistics, namely a correlation coefficient
value of 0.75. The correlated Lancaster method performs quite well
in the presence of an effect size, having a Type I error rate always
within the nominal value. Unlike all effect size homogeneity tests
considered here, the correlated Lancaster method is robust to the
number of studies in a meta-analysis. The statistical power of the
correlated Lancaster method is similar to that of the Breslow-Day,
Liang \& Self and the Zelen tests when the number of studies is small.
As the number of studies increases, these three tests have superior
statistical power to the correlated Lancaster method. This can be
explained by the inflated Type I error of the Breslow-Day, Liang \& Self
and Zelen tests when the number of studies increases.
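\noindent To make the correction explicit, the quantities computed in the appendix code can be summarized as follows; this is only a sketch, and the symbols $K$ (the number of combined homogeneity tests) and $d$ (the degrees of freedom of each test statistic, the number of studies minus one in the code) are introduced here purely for illustration. Each p-value is transformed into an upper-tail chi-square quantile and the transformed values are summed,
\begin{align*}
T=\sum_{k=1}^{K}F^{-1}_{\chi^{2}_{d}}(1-p_{k}),\qquad
\operatorname{E}(T)=Kd,\qquad
\operatorname{Var}(T)=2Kd+2\sum_{i<k}2\rho_{ik}d,
\end{align*}
and the Satterthwaite approximation sets
\begin{align*}
\nu=\frac{2\operatorname{E}(T)^{2}}{\operatorname{Var}(T)},\qquad
c=\frac{\nu}{\operatorname{E}(T)},
\end{align*}
after which $cT$ is referred to a $\chi^{2}_{\nu}$ distribution, giving the combined p-value $P(\chi^{2}_{\nu}>cT)$. These expressions correspond to the variables \texttt{T\_VALUE}, \texttt{EXP\_T}, \texttt{VAR\_T}, \texttt{V}, \texttt{C} and \texttt{T\_A} in the appendix code.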
\noindent The correlated Lancaster method performs well and is
easy to implement, but a few reservations need to be mentioned. The
assumption of positive correlation between the dependent test statistics
is intuitively a reasonable assumption. However, the method's performance
is sensitive to the value of the correlation coefficient, as the method
performs best when the value of the correlation coefficient is high.
More research is needed to investigate the correlation levels between
dependent effect size homogeneity tests. In addition, we only applied
the method to balanced meta-analyses of binary outcomes. Further research
is warranted to investigate the method's performance on unbalanced
meta-analyses, meta-analyses of continuous outcomes and meta-analyses
of rare binary events.
\section*{Conflict of interest}
The author has declared no conflict of interest.
\section{Appendix: code used for implementing the correlated Lancaster method}
\begin{verbatim}
/* Transform each p-value into an upper-tail chi-square quantile
   with DF = &NUMBER_OF_CENTERS - 1 degrees of freedom. */
DATA DATASET_CORR;
SET P_VALUES_INI;
DF = &NUMBER_OF_CENTERS - 1;
SHAPE = DF/2; SCALE = 2;
INV_CDF = SQUANTILE('GAMMA',P_VALUE,SHAPE,SCALE);
RUN;

PROC SORT DATA = DATASET_CORR;
BY SIM TEST;
RUN;

/* Lancaster statistic T_VALUE (sum of transformed values) and its
   total degrees of freedom SUM_DF, per simulation. */
PROC MEANS DATA = DATASET_CORR NOPRINT;
VAR INV_CDF DF;
OUTPUT OUT = T_VALUE SUM = T_VALUE SUM_DF;
BY SIM;
RUN;

/* Number of combined tests per simulation. */
PROC MEANS DATA = P_VALUES_INI NOPRINT;
VAR P_VALUE;
OUTPUT OUT = NUM_TESTS N = NUM_TESTS;
BY SIM;
RUN;

/* One record per pair of tests, each carrying the per-pair
   covariance term 2*rho*DF. */
DATA RHO;
SET NUM_TESTS(KEEP = SIM NUM_TESTS);
COMB_NR = NUM_TESTS*(NUM_TESTS - 1)/2;
RUN;

DATA RHO_S;
SET RHO;
DO COMB_NR = 1 TO COMB_NR;
RHO = &RHO*2*(&NUMBER_OF_CENTERS - 1);
OUTPUT;
END;
RUN;

PROC SORT DATA = RHO_S;
BY SIM;
RUN;

PROC MEANS DATA = RHO_S NOPRINT;
VAR RHO;
OUTPUT OUT = SUM_RHOS SUM = SUM_RHOS;
BY SIM;
RUN;

/* Uncorrected Lancaster method: T_VALUE referred to chi-square(SUM_DF). */
DATA LANC;
SET T_VALUE;
P_VALUE = 1 - PROBCHI(T_VALUE,SUM_DF);
IF P_VALUE <= &SIG_LEVEL THEN SIG = 1;
ELSE SIG = 0;
TEST = "LANCASTER";
RUN;

/* Correlated Lancaster method: Satterthwaite-adjusted statistic
   C*T_VALUE referred to chi-square(V). */
DATA LANC_CORR;
MERGE T_VALUE SUM_RHOS;
BY SIM;
EXP_T = SUM_DF;
VAR_T = 2*SUM_DF + 2*SUM_RHOS;
V = 2*(EXP_T**2)/VAR_T;
C = V/EXP_T;
T_A = C*T_VALUE;
P_VALUE = 1 - PROBCHI(T_A,V);
IF P_VALUE <= &SIG_LEVEL THEN SIG = 1;
ELSE SIG = 0;
TEST = "LANC_CORR_POS_CORR";
RUN;

/* Stack the uncorrected and correlated results. */
DATA LANC_TOTAL(KEEP = SIM TEST P_VALUE SIG);
SET LANC LANC_CORR;
BY SIM;
RUN;
\end{verbatim}
\end{document}
\begin{document}
\begin{abstract}
We propose and analyze an overlapping Schwarz preconditioner for the $p$ and $hp$ boundary element method
for the hypersingular integral equation in 3D.
We consider surface triangulations consisting of triangles.
The condition number is bounded uniformly in the mesh size $h$ and the polynomial order $p$.
The preconditioner handles adaptively refined meshes and is based on a local multilevel preconditioner for the
lowest order space. Numerical experiments on different geometries illustrate its robustness.
\end{abstract}
\begin{keyword}
$hp$-BEM, hypersingular integral equation, preconditioning, additive Schwarz method
\MSC 65N35, 65N38, 65N55
\end{keyword}
\title{Optimal additive Schwarz methods for the $hp$-BEM: the hypersingular integral operator in 3D on locally refined meshes}
\section{Introduction}
Many elliptic boundary value problems that are solved in practice are linear
and have constant (or at least piecewise constant) coefficients. In this setting,
the boundary element method (BEM, \cite{book_hsiao_wendland,book_sauter_schwab,book_steinbach,book_mclean})
has established itself as an effective alternative to the finite element method (FEM). Just as in the FEM applied
to this particular problem class, high order methods are very attractive since they
can produce rapidly convergent schemes on suitably chosen adaptive meshes.
The discretization leads to large systems of equations, and the use of
iterative solvers brings the question of preconditioning to the fore.
In the present work, we study high order Galerkin discretizations of the
hypersingular operator. This is an operator of order $1$, and we therefore have to
expect the condition number of the system matrix to increase as the mesh size $h$
decreases and the approximation order $p$ increases. We present an additive overlapping Schwarz
preconditioner that offsets this degradation and results in condition numbers that
are bounded independently of the mesh size and the approximation order. This is achieved
by combining the recent $H^{1/2}$-stable decomposition of spaces of piecewise polynomials of degree $p$
of \cite{melenk_appendix} and the multilevel diagonal scaling preconditioner of \cite{ffps,dissTF}
for the hypersingular operator discretized by piecewise linears.
Our additive Schwarz preconditioner is based on stably decomposing the approximation space of piecewise polynomials
into the lowest order space (i.e., piecewise linears) and spaces of higher order polynomials supported
by the vertex patches. Such stable localization procedures were first developed for the $hp$-FEM
in \cite{pavarino_94} for meshes consisting of quadrilaterals (or, more generally, tensor product elements).
The restriction to tensor product elements stems from the fact that the localization is achieved by exploiting
stability properties of the 1D-Gau{\ss}-Lobatto interpolation operator, which, when applied to polynomials,
is simultaneously stable in $L^2$ and $H^1$ (see, e.g., \cite[eqns. (13.27), (13.28)]{bernardi-maday97}).
This simultaneous stability raises the hope for $H^{1/2}$-stable localizations and was
pioneered in \cite{heuer_asm_indef_hyp} for the $hp$-BEM for the hypersingular operator on meshes consisting
of quadrilaterals. Returning to the $hp$-FEM, $H^1$-stable localizations on triangular/tetrahedral meshes
were not developed until \cite{schoeberl_asm_fem}. The techniques developed there were subsequently used in
\cite{melenk_appendix} to design $H^{1/2}$-stable decompositions on triangular meshes and thus paved the way
for condition number estimates that are uniform in the approximation order $p$ for overlapping Schwarz methods for
the $hp$-version BEM applied to the hypersingular operator. Non-overlapping additive Schwarz preconditioners
for high order discretizations of the hypersingular operator are also available in the literature,
\cite{ainsworth-guo00}; as it is typical of this class of preconditioners, the condition number
still grows polylogarithmically in $p$.
Our preconditioner is based on decomposing the approximation space into the space of piecewise linears
and spaces associated with the vertex patches. It is highly desirable to decompose the space of piecewise linears
further in a multilevel fashion. For sequences of uniformly refined meshes, the first such
multilevel space decomposition appears to be \cite{tran_stephan_asm_h_96} (see also \cite{oswald_99}).
For adaptive meshes, local multilevel diagonal scaling was first analyzed in \cite{amcl03}, where
for a sequence $\mathcal{T}_{\ell}$ of successively refined adaptive meshes a uniformly bounded condition number
for the preconditioned system is established. Formally, however, \cite{amcl03} requires
that $\mathcal{T}_{\ell} \cap \mathcal{T}_{\ell+1} \subset \mathcal{T}_{\ell+k}$ for all $\ell,k \in \mathbb{N}_0$, i.e., as soon as an element $K \in \mathcal{T}_{\ell}$ is not refined, it remains non-refined in all
succeeding triangulations. While this can be achieved implementationally, the recent works \cite{ffps,dissTF}
avoid such a restriction by considering sequences of meshes that are obtained
in typical $h$-adaptive environments with the aid of {\em newest vertex bisection} (NVB).
We finally note that the additive Schwarz decomposition on adaptively refined meshes is a subtle issue.
Hierarchical basis preconditioners (which are based on the new nodes only) lead to a growth of the
condition number with $\mathcal{O}(\left|\log{h_{min}}\right|^2)$; see \cite{tran_stephan_mund_hierarchical_prec}.
Global multilevel diagonal preconditioning (which is based on all nodes) leads to a growth $\mathcal{O}(\left|\log{h_{min}}\right|)$;
see \cite{maischak_multilevel_asm,ffps}.
The paper is organized as follows: In Section~\ref{sec:model_problem} we introduce the hypersingular equation
and the discretization by high order piecewise polynomial spaces.
Section~\ref{sec:properties-of-honehalftilde} collects properties of the
fractional Sobolev spaces including the scaling properties.
Section~\ref{sec:p_condition} studies
in detail the $p$-dependence of the condition number of the unpreconditioned system. The polynomial
basis on the reference triangle chosen by us is a hierarchical basis of the form first proposed by
Karniadakis \& Sherwin, \cite[Appendix~{D.1.1.2}]{karniadakis_sherwin}; the precise form is the one from
\cite[Section~{5.2.3}]{zaglmayr_diss}.
We prove bounds for the condition number of the stiffness matrix not only in the $H^{1/2}$-norm but also
in the norms of $L^2$ and $H^1$. This is also of interest for the $hp$-FEM and does not seem to be available in the literature.
Section~\ref{sec:main_results} develops several preconditioners. The first one (Theorem~\ref{thm:ppreconditioner}) is based on
decomposing the high order approximation space into the global space of piecewise linears and local
high order spaces of functions associated with the vertex patches. The second one (Theorem~\ref{thm:p_precond_with_multilevel}) is based on a further
multilevel decomposition of the global space of piecewise linears. The third one (Theorem~\ref{thm:hp_reference_solver_preconditioner})
exploits the observation that topologically, only a finite number of vertex patches can occur. Hence, significant
memory savings for the preconditioner are possible if the exact bilinear forms for the vertex patches are replaced
with scaled versions of simpler ones defined on a finite number of reference configurations.
Numerical experiments in Section~\ref{sec:numerics} illustrate that the proposed preconditioners are indeed robust
with respect to both $h$ and $p$.
We close with a remark on notation:
The expression $a \lesssim b$ signifies the existence of a constant $C>0$ such that $a \leq C \, b$. The constant $C$
does not depend on the mesh size $h$ and the approximation
order $p$, but may depend on the geometry and the shape regularity of the triangulation. We also write $a \sim b$ to
abbreviate $a \lesssim b \lesssim a$.
\section{$hp$-discretization of the hypersingular integral equation}
\label{sec:model_problem}
\subsection{Hypersingular integral equation}
Let $\Omega\subset\mathbb{R}^3$ be a bounded Lipschitz polyhedron with a connected boundary $\partial \Omega$,
and let $\Gamma \subseteq \partial \Omega$ be
an open, connected subset of $\partial \Omega$. If $\Gamma \ne \partial\Omega$, we assume it to be
a Lipschitz hypograph, \cite{book_mclean}; the key property needed is that
$\Gamma$ is such that the ellipticity condition (\ref{eq:ellipticity}) holds.
Furthermore, we will use affine, shape regular
triangulations of $\Gamma$, which further imposes conditions on $\Gamma$.
In this work, we are concerned with preconditioning high order discretizations of the hypersingular integral operator, which is given by
\begin{align}
\label{hypsing_operator_definition}
\left(D u \right)(x)&:= - \partial_{n_x }^{int} \int_{\Gamma}{ \partial_{n_y}^{int} G(x,y)u(y) \; ds_y} \quad \text{for } x \in \Gamma,
\end{align}
where $G(x,y):= \frac{1}{4\pi} \frac{1}{\left|x-y\right|}$ is the fundamental solution of the 3D-Laplacian and $\partial_{n_y}^{int}$ denotes the
(interior) normal derivative with respect to $y \in \Gamma$.
We will need some results from the theory of Sobolev and interpolation spaces, see \cite[Appendix B]{book_mclean}.
For an open subset $\omega \subset \partial \Omega$, let $L^2(\omega)$ and $H^1(\omega)$ denote the usual
Sobolev spaces. The space $\widetilde H^1(\omega)$ consists of those
functions whose zero extension to $\partial\Omega$ is in $H^1(\partial\Omega)$. (In particular, for
$\omega = \partial\Omega$, $H^1(\partial\Omega) = \widetilde H^1(\partial\Omega)$.)
When the surface measure of the set $\partial\Omega\setminus \omega$ is positive, we use the equivalent norm
$\left\|u\right\|_{\widetilde{H}^1(\omega)}^2:=\left\|\nabla_{\Gamma} u\right\|^2_{L^2(\omega)}$.
We will define fractional Sobolev norms by interpolation. The following Proposition~\ref{interpolation_theorem}
collects key properties of interpolation spaces that we will need; we refer to
\cite{tartar07,triebel95} for a comprehensive treatment.
For two Banach spaces $\left(X_0, \left\|\cdot\right\|_0\right)$ and $\left(X_1,\left\|\cdot\right\|_{1}\right)$,
with continuous inclusion $X_1 \subseteq X_0$ and a parameter $s \in (0,1)$ the interpolation norm is defined as
\begin{align*}
\left\|u\right\|^2_{[X_0,X_1]_s} &:= \int_{t=0}^\infty t^{-2s} \left( \inf_{v \in X_1} \left( \|u - v\|_{0} + t \|v\|_1\right)\right)^2 \frac{dt}{t}.
\end{align*}
The interpolation space is given by $\left[X_0,X_1\right]_{s}:=\left\{ u \in X_0 : \left\|u\right\|_{[X_0,X_1]_{s}} < \infty \right\}$.
An important result, which we use in this paper, is the following interpolation theorem:
\begin{proposition}
\label{interpolation_theorem}
Let $X_i$, $Y_i$, $i\in \{0,1\}$, be two pairs of Banach spaces with continuous inclusions $X_1 \subseteq X_0$ and $Y_1 \subseteq Y_0$. Let $s \in (0,1)$.
\begin{enumerate}[(i)]
\item
\label{item:interpolation_theorem:i}
If a linear operator $T$ is bounded as an operator $X_0 \to Y_0$ and $X_1 \to Y_1$, then it is also bounded
as an operator $[X_0,X_1]_{s} \to [Y_0,Y_1]_{s}$ with
\begin{align*}
\left\|T\right\|_{[X_0,X_1]_{s} \to [Y_0,Y_1]_{s}} \leq \left\|T\right\|_{X_0 \to Y_0}^{1-s} \left\|T\right\|_{X_1 \to Y_1}^s.
\end{align*}
\item
\label{item:interpolation_theorem:ii}
There exists a constant $C > 0$ such that for all $x \in X_1$: $\displaystyle \left\|x\right\|_{[X_0,X_1]_{s} } \leq C \left\|x\right\|_{X_0}^{1-s} \left\|x\right\|_{X_1}^s.$
\end{enumerate}
\end{proposition}
We define the fractional Sobolev spaces by interpolation. For $s \in (0,1)$, we set:
\begin{align*}
H^{s}(\omega)&:=\left[ L^2(\omega), H^1(\omega) \right]_s, \quad
\widetilde{H}^{s}(\omega):=\left[ L^2(\omega), \widetilde{H}^1(\omega) \right]_s.
\end{align*}
Here, we will only consider the case $s=1/2$.
We define $H^{-1/2}(\Gamma)$ as the dual space of $\widetilde{H}^{1/2}(\Gamma)$, where
duality is understood with respect to the (continuously) extended $L^2(\Gamma)$-scalar product and denoted by $\left<\cdot,\cdot \right>_{\Gamma}$.
An equivalent norm on $H^{1/2}(\Gamma)$ is given by ${\left\|u\right\|_{H^{1/2}(\Gamma)}^2 \sim \left\|u\right\|_{L^2(\Gamma)}^2 + \left|u\right|_{H^{1/2}(\Gamma)}^2}$,
where $\left|\cdot\right|_{H^{1/2}(\Gamma)}$ is given by the Sobolev-Slobodeckij seminorm (see \cite{book_sauter_schwab} for the exact definition).
We now state some important properties of the hypersingular operator $D$ from \eqref{hypsing_operator_definition}, see, e.g., \cite{book_sauter_schwab,book_mclean,book_hsiao_wendland,book_steinbach}.
First, the operator $D: \widetilde{H}^{1/2}(\Gamma) \to H^{-1/2}(\Gamma)$ is a bounded linear operator.
For open surfaces $\Gamma \subsetneqq \partial \Omega$ the operator is elliptic
\begin{align}
\label{eq:ellipticity}
\left<D u, u \right>_{\Gamma} &\geq c_{ell} \left\|u\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2 \quad \quad \forall u \in \widetilde{H}^{1/2}(\Gamma),
\end{align}
with some constant $c_{ell} > 0$ that only depends on $\Gamma$.
In the case of a closed surface, i.e., $\Gamma=\partial \Omega$, we note that $\widetilde{H}^{1/2}(\Gamma)=H^{1/2}(\Gamma)$ and the operator $D$ is still semi-elliptic, i.e.,
\begin{align*}
\left<D u,u \right>_{\Gamma} &\geq c_{ell} \left|u\right|_{H^{1/2}(\Gamma)}^2 \quad \quad \forall u \in \widetilde{H}^{1/2}(\Gamma).
\end{align*}
Moreover, the kernel of $D$ then consists of the constant functions only: $\operatorname{ker}(D)~=~\operatorname{span}(1)$.
To get unique solvability and strong ellipticity for the case of a closed surface, it is customary to introduce
a stabilized operator $\widetilde{D}$ given by the bilinear form
\begin{align}
\label{eq:def_stabilized_op}
\left<\widetilde{D} u ,v \right>_{\Gamma}:= \left<Du,v \right>_{\Gamma} + \alpha^2 \left<u,1 \right>_{\Gamma} \left<v,1 \right>_{\Gamma}, \quad \alpha > 0.
\end{align}
In order to avoid having to distinguish the two cases $\Gamma=\partial \Omega$ and $\Gamma \subsetneqq \partial \Omega$, we
will only work with the stabilized form on $\widetilde{H}^{1/2}(\Gamma)$ and just set $\alpha = 0$ in the
case of $\Gamma \subsetneqq \partial \Omega$.
The basic integral equation involving the hypersingular operator $D$ then reads:
For given $g \in H^{-1/2}(\Gamma)$, find $u \in \widetilde{H}^{1/2}(\Gamma)$ such that
\begin{align}
\label{eq:weakform}
\left<\widetilde{D} u ,v \right>_{\Gamma} &= \left<g,v \right>_{\Gamma} \quad \forall v \in \widetilde{H}^{1/2}(\Gamma).
\end{align}
We note that in the case of the closed surface $\Gamma=\partial \Omega$, the solution of the stabilized system above is equivalent
to the solution of $\left<Du,v \right>_{\Gamma}=\left<g,v \right>_{\Gamma}$ under the side constraint $\left<u,1 \right>_{\Gamma}=\frac{\left<g,1 \right>_{\Gamma}}{\alpha^2 \, \left|\Gamma\right|}$.
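This side constraint can be read off directly from \eqref{eq:weakform}: testing with $v \equiv 1$ and using the symmetry of $D$ together with $\operatorname{ker}(D)=\operatorname{span}(1)$ gives
\begin{align*}
\left<g,1 \right>_{\Gamma} = \left<\widetilde{D} u ,1 \right>_{\Gamma}
= \left<Du,1 \right>_{\Gamma} + \alpha^2 \left<u,1 \right>_{\Gamma}\left<1,1 \right>_{\Gamma}
= \alpha^2 \left|\Gamma\right| \left<u,1 \right>_{\Gamma},
\end{align*}
since $\left<Du,1 \right>_{\Gamma} = \left<u,D1 \right>_{\Gamma} = 0$.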
Moreover, it is well known that $\left<\widetilde{D} \cdot ,\cdot \right>_{\Gamma}$ is symmetric, elliptic and induces an equivalent norm on $\widetilde{H}^{1/2}(\Gamma)$, i.e.,
\begin{align*}
\left<\widetilde{D} u ,u \right>_{\Gamma} \sim \left\|u\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2 \quad \quad \forall u \in \widetilde{H}^{1/2}(\Gamma).
\end{align*}
\subsection{Discretization}
\label{sect:discretization}
Let $\mathcal{T} = \{K_1,\dots,K_N\}$ denote a regular (in the sense of Ciarlet) triangulation of the two-dimensional
manifold $\Gamma\subseteq\partial \Omega$ into compact, non-degenerate planar surface triangles.
We say that a triangulation is $\gamma$-shape regular, if there exists a constant $\gamma> 0$ such that
\begin{align}
\label{eq:shape-regularity}
\max_{K\in\mathcal{T}} \frac{\mathrm{diam}(K)^2}{|K|} \leq \gamma.
\end{align}
Let $\widehat{K}:=\operatorname{conv}\{(0,0),(1,0),(0,1)\}$ be the reference triangle.
With each element $K$ we associate an affine, bijective element map $F_K: \widehat{K} \to K$.
We will write $P^{p}(\widehat{K})$ for the space of polynomials of degree $p$ on $\widehat{K}$.
The space of piecewise polynomials on $\mathcal{T}$ is given by
\begin{align}
P^p(\mathcal{T}) := \left\{ u\in L^2(\Gamma) \,:\, u \circ F_K \in P^p(\widehat{K}) \text{ for all } K\in\mathcal{T} \right\}.
\end{align}
The elementwise constant mesh width
function $h:=h_\mathcal{T} \in P^0(\mathcal{T})$ is defined by $(h_\mathcal{T})|_K := \mathrm{diam}(K)$ for all $K\in\mathcal{T}$.
Let ${\mathcal{V}} = \{{\boldsymbol{z}}_1,\dots,{\boldsymbol{z}}_M\}$ denote the set of all vertices of the triangulation $\mathcal{T}$ that are not on the boundary of $\Gamma$.
We define the (vertex) patch $\omega_{\boldsymbol{z}}$ for a vertex ${\boldsymbol{z}} \in{\mathcal{V}}$ by
\begin{align}\label{def:patch}
\omega_{\boldsymbol{z}} := \operatorname*{interior} \left(\bigcup_{\left\{K\in\mathcal{T}: \;{\boldsymbol{z}} \in K \right\}} K \right),
\end{align}
where the interior is understood with respect to the topology of $\Gamma$.
For $p \geq 1$, define
\begin{align}
\widetilde{S}^p(\mathcal{T}) := P^p(\mathcal{T}) \cap \widetilde{H}^{1/2}(\Gamma).
\end{align}
Then, the Galerkin discretization of~\eqref{eq:weakform} consists in replacing $\widetilde{H}^{1/2}(\Gamma)$ with
the discrete subspace $\widetilde{S}^p(\mathcal{T})$, i.e.:
Find $u_h \in \widetilde{S}^p(\mathcal{T})$ such that
\begin{align}\label{eq:weakform:discrete}
\left<\widetilde{D} u_h ,v_h \right>_{\Gamma} = \left<g,v_h \right>_{\Gamma} \quad\text{for all } v_h \in \widetilde{S}^p(\mathcal{T}).
\end{align}
\begin{remark}
We employ the same polynomial degree for all elements. This is not essential and done for simplicity of
presentation. For details on the more general case, see \cite{melenk_appendix}.
\hbox{}
\rule{0.8ex}{0.8ex}
\end{remark}
After choosing a basis of $\widetilde{S}^p(\mathcal{T})$,
the problem (\ref{eq:weakform:discrete}) can be written as a linear system of equations, and we write
$\widetilde{D}^p_h$ for the resulting system matrix. Our goal is to construct a preconditioner for $\widetilde{D}^p_h$.
It is well-known that the condition number of $\widetilde{D}_h^p$ depends on the choice of the basis of
$\widetilde{S}^p(\mathcal{T})$, which we fix in Definition~\ref{definition_basis} below.
We remark in passing that the preconditioned system of
Section~\ref{sec:hpprecond} will no longer depend on the basis.
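For concreteness, if $\{\phi_{1},\dots,\phi_{N}\}$ denotes the chosen basis of $\widetilde{S}^p(\mathcal{T})$ (the symbols $\phi_{i}$ and $N$ are used only in this remark), then \eqref{eq:weakform:discrete} is equivalent to the linear system
\begin{align*}
\widetilde{D}^p_h \mathbf{u} = \mathbf{g}, \qquad
(\widetilde{D}^p_h)_{ij} = \left<\widetilde{D} \phi_j ,\phi_i \right>_{\Gamma}, \qquad
\mathbf{g}_{i} = \left<g,\phi_i \right>_{\Gamma},
\end{align*}
and the Galerkin solution is recovered as $u_h = \sum_{j=1}^{N} \mathbf{u}_{j}\phi_{j}$.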
\subsection{Polynomial basis on the reference element}
\label{sec:basis_reference_element}
For the matrix representation of the Galerkin formulation (\ref{eq:weakform:discrete}) we have to
specify a polynomial basis on the reference triangle $\widehat{K}$. We use a basis that relies
on a collapsed tensor product representation of the triangle and is given in \cite[Section 5.2.3]{zaglmayr_diss}.
This kind of basis was first proposed for the $hp$-FEM
by Karniadakis \& Sherwin, \cite[Appendix D.1.1.2]{karniadakis_sherwin}; closely related earlier works on polynomial
bases that rely on a collapsed tensor product representation of the triangle are \cite{koornwinder75,dubiner91}.
\begin{definition}[Jacobi polynomials]
\label{def:ortho_polys}
For coefficients $\alpha$, $\beta > -1$ the family of {\em Jacobi polynomials} on the interval $\left(-1,1\right)$ is denoted by $P_n^{(\alpha,\beta)}$, $n \in \mathbb{N}_0$.
They are orthogonal with respect to the $L^2(-1,1)$ inner product with weight $(1-x)^\alpha (1+x)^\beta$.
(See for example \cite[Appendix A]{karniadakis_sherwin} or \cite[Appendix A.3]{zaglmayr_diss} for the exact definitions and
a list of important properties).
The \emph{Legendre polynomials} are a special case of the Jacobi polynomials for $\alpha=\beta=0$ and denoted by $\ell_n(s):=P^{(0,0)}_{n}(s)$.
The \emph{integrated Legendre polynomials} $L_n$ and the \emph{scaled polynomials} are defined by
\begin{align}
\label{eq:def_legendre_poly}
L_n(s)&:=\int_{-1}^{s}{\ell_{n-1}(t) dt} \quad \text{for } n \in \mathbb{N},
&P_{n}^{\mathcal{S},(\alpha,\beta)}(s,t)&:=t^{n}P^{(\alpha,\beta)}_n(s/t),
&L_n^{\mathcal{S}}(s,t)&:=t^n L_n(s/t).
\end{align}
\end{definition}
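As a brief illustration of the scaling in \eqref{eq:def_legendre_poly}: since $\ell_1(s)=s$, we have
\begin{align*}
L_2(s)=\int_{-1}^{s}t\,dt=\tfrac{1}{2}(s^2-1), \qquad
L_2^{\mathcal{S}}(s,t)=t^2 L_2(s/t)=\tfrac{1}{2}(s^2-t^2),
\end{align*}
so the scaled polynomials are again polynomials in both arguments and can be evaluated at the barycentric expressions used below.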
On the reference triangle, our basis reads as follows:
\begin{definition}[polynomial basis on the reference triangle]
\label{definition_basis}
Let $p \in {\mathbb N}$ and let $\lambda_1,\lambda_2,\lambda_3$ be the barycentric coordinates on the reference triangle $\widehat{K}$.
Then the basis functions for the reference triangle consist of three vertex functions,
$p-1$ edge functions per edge, and $(p-1)(p-2)/2$ cell-based functions:
\begin{enumerate}[(a)]
\item for $i=1$, $2$, $3$ the vertex functions are:
\begin{align*}
\varphi^{\mathcal{V}}_i&:=\lambda_i;
\intertext{\item for $m=1$, $2$, $3$ and an edge $\mathcal{E}_m$ with edge vertices $e_1,e_2$,
the edge functions are given by:}
\varphi_{i}^{\mathcal{E}_m}&:=\sqrt{\frac{2i+3}{2}}\,L_{i+2}^{\mathcal{S}}(\lambda_{e_2}-\lambda_{e_1},\lambda_{e_1}+\lambda_{e_2}), \quad \quad 0\leq i\leq p-2;
\intertext{\item for $0 \leq i+j\leq p-3$ the cell based functions are:}
\varphi_{(i,j)}^{\mathcal{I}}&:=c_{ij} \lambda_1 \lambda_2 \lambda_3 P_{i}^{\mathcal{S},(2,2)}(\lambda_1-\lambda_2,\lambda_1+\lambda_2) \, P_j^{(2i+5,2)}(2\lambda_3-1).
\end{align*}
with $c_{ij}$ such that $\left\|\varphi_{(i,j)}^{\mathcal{I}}\right\|_{L^2(\widehat{K})}=1$.
\end{enumerate}
\end{definition}
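As a quick consistency check, the numbers of vertex, edge, and cell-based functions in Definition~\ref{definition_basis} add up to the dimension of the polynomial space,
\begin{align*}
3 + 3(p-1) + \frac{(p-1)(p-2)}{2} = \frac{(p+1)(p+2)}{2} = \dim P^p(\widehat{K}).
\end{align*}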
\begin{remark}
In order to get a basis of $\widetilde{S}^p(\mathcal{T})$ we take the composition with the element mappings $\varphi \circ F_K$.
To ensure continuity along edges we take an arbitrary orientation of the edges and observe that
the edge basis functions $\varphi_i^\mathcal{E}$ are symmetric under permutation of $\lambda_{e_1}$ and $\lambda_{e_2}$ up to a sign change $(-1)^{i}$.
\hbox{}
\rule{0.8ex}{0.8ex}
\end{remark}
\section{Properties of $\widetilde{H}^{1/2}(\Gamma)$}
\label{sec:properties-of-honehalftilde}
\subsection{Quasi-interpolation in $\widetilde{H}^{1/2}(\Gamma)$}
Several results of the present paper depend on results in \cite{melenk_appendix}.
Therefore we present a short summary of the main results of that paper in this section.
In \cite{melenk_appendix} the authors propose an $H^{1/2}$-stable space decomposition on meshes consisting of triangles.
It is based on quasi-interpolation operators constructed by local averaging on elements.
We introduce the following product spaces:
\begin{align*}
X_0&:=\prod_{\boldsymbol{z} \in {\mathcal{V}}} { L^2(\omega_{\boldsymbol{z}}) }, \quad \quad X_1:=\prod_{\boldsymbol{z}\in {\mathcal{V}}}{ \widetilde{H}^1(\omega_{\boldsymbol{z}})}.
\end{align*}
The spaces $L^2(\omega_{\boldsymbol{z}})$ and $\widetilde{H}^1(\omega_{\boldsymbol{z}})$ are endowed with the
$L^2$- and $\widetilde{H}^1$-norm, respectively.
\begin{proposition}[localization, \protect{\cite{melenk_appendix}}]
\label{prop:stable_space_decomposition}
There exists an operator $J: L^2(\Gamma) \to \left(\widetilde{S}^1(\mathcal{T}),\left\|\cdot\right\|_{L^2(\Gamma)}\right) \times X_0$ with the following properties:
\begin{enumerate}[(i)]
\item
\label{item:prop:stable_space_decomposition-i}
$J$ is linear and bounded.
\item
\label{item:prop:stable_space_decomposition-ii}
$J|_{\widetilde{H}^1(\Gamma)}$ is also bounded as an operator
$\widetilde{H}^1(\Gamma) \to \left(\widetilde{S}^1(\mathcal{T}),\left\|\cdot\right\|_{\widetilde{H}^1(\Gamma)}\right) \times X_1$.
\item
\label{item:prop:stable_space_decomposition-iii}
If $u \in \widetilde{S}^p(\mathcal{T})$ then each component of $J u$ is in $\widetilde{S}^p(\mathcal{T})$.
\item
\label{item:prop:stable_space_decomposition-iv}
If we write $J u=:(u_1,U)$, and furthermore $U_{\boldsymbol{z}}$ for the component of $U$ in $X_0$ corresponding
to the space $L^2(\omega_{\boldsymbol{z}})$, then $Ju$ represents an $\widetilde{H}^{1/2}(\Gamma)$-stable decomposition of $u$, i.e.,
\begin{align}
\label{eq:stable_space_decomposition_sum}
u&=u_1 + \sum_{\boldsymbol{z} \in {\mathcal{V}}}{U_{\boldsymbol{z}}} \quad \text{and} \quad
\left\|u_1\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2 + \sum_{\boldsymbol{z} \in {\mathcal{V}}}{\left\|U_{\boldsymbol{z}}\right\|_{\widetilde{H}^{1/2}(\omega_{\boldsymbol{z}})}^2}\leq
C \left\|u\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2.
\end{align}
\end{enumerate}
\vspace*{-\baselineskip}
The norms of $J$ in
(\ref{item:prop:stable_space_decomposition-i})--(\ref{item:prop:stable_space_decomposition-ii}) and the
constant $C > 0$ in (\ref{item:prop:stable_space_decomposition-iv})
depend only on $\Gamma$ and the shape regularity constant $\gamma$.
\begin{proof}[Sketch of proof: \nopunct]
The first component of $J$ (i.e., the mapping $u \mapsto u_1$) consists of the Scott-Zhang
projection operator, as modified in \cite[Section 3.2]{aff_hypsing}.
The local components (i.e., the functions $U_{\boldsymbol{z}}$, $z \in {\mathcal{V}}$) then are based
on a successive decomposition into vertex, edge and interior parts, similar to what is done in
\cite{schoeberl_asm_fem}. We give a flavor of the procedure. Set $u_2:= u - u_1$.
In order to define the vertex parts for a vertex $\boldsymbol{z}$,
we select an element $K \subset \omega_{\boldsymbol{z}}$ of the patch $\omega_{\boldsymbol{z}}$ and perform a
suitable local averaging of $u_2$ on that element; this averaged
function $u_{loc,K}$ is defined on $K$ in terms of $u_2|_K$ and vanishes on the edge opposite $\boldsymbol{z}$.
In order to extend $u_{loc,K}$
to the patch $\omega_{\boldsymbol{z}}$ and thus obtain the function $u_{\boldsymbol{z}}$, we define $u_{\boldsymbol{z}}$
by ``rotating'' $u_{loc,K}$ around the vertex $\boldsymbol{z}$. The averaging process can be done
in such a way that for continuous functions $u_2$ one has
$u_2(\boldsymbol{z}) = u_{\boldsymbol{z}}(\boldsymbol{z})$ and that one has appropriate
stability properties in $L^2$ and $H^1$. The edge contributions are constructed from the function
$u_3:= u_2 - \sum_{\boldsymbol{z} \in {\mathcal{V}}} u_{\boldsymbol{z}}$.
Let ${\mathcal E}(\mathcal{T})$ denote the set of interior edges of $\mathcal{T}$.
For an edge ${\mathcal E} \in {\mathcal E}(\mathcal{T})$ one selects
an element (of which ${\mathcal E}$ is an edge), averages there, and extends the obtained averaged function to
the edge patch by symmetry across the edge ${\mathcal E}$. In this way, the function
$u_{{\mathcal E}}$ is constructed for each edge ${\mathcal E}$.
It again holds for sufficiently smooth $u_3$ that
$u_3(x)=u_{{\mathcal E}}(x) \; \forall x \in {\mathcal E}$.
For $u \in \widetilde{H}^1(\Gamma)$ and in turn $u_2 \in \widetilde{H}^1(\Gamma)$, we have that
$u_4:= u_2- \sum_{ {\mathcal E} \in \mathcal{E}(\mathcal{T})}{u_{\mathcal E}} - \sum_{\boldsymbol{z} \in {\mathcal{V}}}{u_{\boldsymbol{z}}}$
vanishes on all edges; hence $u_4|_K \in \widetilde{H}^1(K)$ for all $K \in \mathcal{T}$.
The terms $u_{\boldsymbol{z}}$, $u_{\mathcal E}$, $u_4|_K$ can be rearranged to take the form of patch contributions
$U_{\boldsymbol{z}}$ as given in the statement of the proposition. (The decomposition is not unique.)
The $\widetilde{H}^{1/2}$ stability
is a direct consequence of the $L^2$ and $H^1$ stability and interpolation properties given in
Proposition~\ref{interpolation_theorem}, (\ref{item:interpolation_theorem:i}).
We finally mention that assertion (\ref{item:prop:stable_space_decomposition-iii}) follows from the fact
that the averaging operators employed at the various stages of the decomposition are polynomial preserving.
\end{proof}
\end{proposition}
\begin{remark}
Independently, a decomposition similar to Proposition~\ref{prop:stable_space_decomposition}
was presented in \cite{falk-winther13}.
\hbox{}
\rule{0.8ex}{0.8ex}
\end{remark}
The construction in Proposition~\ref{prop:stable_space_decomposition} can be modified and used
to relate the $\widetilde{H}^{1/2}(\omega_{\boldsymbol{z}})$-norm to the
$\widetilde{H}^{1/2}({\mathcal O})$-norm if ${\mathcal O} \supset \omega_{\boldsymbol{z}}$:
\begin{corollary}
\label{cor:restriction_extension}
Let $\boldsymbol{z} \in {\mathcal{V}}$ and $\mathcal{O}$ be the union of some triangles of $\mathcal{T}$ with $\omega_{\boldsymbol{z}} \subseteq \mathcal{O} \subseteq \Gamma$.
Then there exist constants $c_1,c_2$ that depend only on $\mathcal{O}$, $\Gamma$,
and the $\gamma$-shape regularity of $\mathcal{T}$ such that
for all $u \in \widetilde{H}^{1/2}(\mathcal{O})$ with $\operatorname{supp}(u)\subseteq \overline{\omega_{\boldsymbol{z}}}$,
we can estimate:
\begin{align*}
c_1\;\left\|u\right\|_{\widetilde{H}^{1/2}(\omega_{\boldsymbol{z}})}&\leq \left\|u\right\|_{\widetilde{H}^{1/2}(\mathcal{O})}\leq c_2 \left\|u\right\|_{\widetilde{H}^{1/2}(\omega_{\boldsymbol{z}})}.
\end{align*}
\begin{proof}
To see the second inequality, consider the extension operator $E$ that extends the function $u$
by $0$ outside of $\omega_{\boldsymbol{z}}$.
This operator is continuous $L^2(\omega_{\boldsymbol{z}}) \rightarrow L^2(\mathcal{O})$ and
$\widetilde{H}^1(\omega_{\boldsymbol{z}}) \rightarrow \widetilde{H}^1(\mathcal{O})$, both with constant $1$. Applying
Proposition~\ref{interpolation_theorem}, (\ref{item:interpolation_theorem:i}) to this extension operator $E$
gives the second inequality with $c_2=1$.
The first inequality is more involved. We start by noting that the stability assertion \eqref{eq:stable_space_decomposition_sum} of
Proposition~\ref{prop:stable_space_decomposition} gives
$u = u_1 + \sum_{\boldsymbol{z^\prime} \in {\mathcal{V}}} U_{\boldsymbol{z^\prime}}$ and
\begin{align}
\label{eq:foo-1001}
\left\|u_1\right\|^2_{\widetilde{H}^{1/2}(\mathcal{O})}
+ \sum_{\boldsymbol{z^\prime} \in {\mathcal{V}}}{\left\|U_{\boldsymbol{z^\prime}}\right\|^2_{\widetilde{H}^{1/2}(\omega_{\boldsymbol{z^\prime}})}}&\leq C \left\|u\right\|^2_{\widetilde{H}^{1/2}(\mathcal{O})}.
\end{align}
The constant $C$ depends only on the set ${\mathcal O}$ and the shape regularity of the triangulation,
when Proposition \ref{prop:stable_space_decomposition} is applied with $\Gamma$ replaced by $\mathcal{O}$.
The decomposition in Proposition~\ref{prop:stable_space_decomposition} is not unique, and we will now exploit
this by requiring more. Specifically, we assert that the operator $J$, which effects the decomposition, can be
chosen such that, for given $\omega_{\boldsymbol{z}}$, we have
$\operatorname*{supp} u_1 \subset \overline{\omega_{\boldsymbol{z}}}$ and $U_{\boldsymbol{z}^\prime} = 0$ for
$\boldsymbol{z}^\prime \ne \boldsymbol{z}$.
If this can be achieved, we get $u = u_1 + U_{\boldsymbol{z}}$.
Since \eqref{eq:foo-1001} contains a term $\left\|u_1\right\|_{\widetilde{H}^{1/2}(\mathcal{O})}$,
we also need to reinvestigate the stability proof. The decomposition is $L^2$- and $H^1$-stable, and maps to functions with
$\operatorname*{supp} u_1 \subset \overline{\omega_{\boldsymbol{z}}}$. Therefore, we can interpret the first component of $J$ as an operator mapping to
$\widetilde{H}^1(\omega_{\boldsymbol{z}})$ and apply Proposition \ref{interpolation_theorem} (\ref{item:interpolation_theorem:i}) to get:
$$
\left\|u_1\right\|_{\widetilde{H}^{1/2}(\omega_{\boldsymbol{z}})} + \left\|U_{\boldsymbol{z}}\right\|_{\widetilde{H}^{1/2}(\omega_{\boldsymbol{z}})}
\leq C \left\|u\right\|_{\widetilde{H}^{1/2}(\mathcal{O})}.
$$
The triangle inequality
$\left\|u\right\|_{\widetilde{H}^{1/2}(\omega_{\boldsymbol{z}})} \leq
\left\|u_1\right\|_{\widetilde{H}^{1/2}(\omega_{\boldsymbol{z}})} +
\left\|U_{\boldsymbol{z}}\right\|_{\widetilde{H}^{1/2}(\omega_{\boldsymbol{z}})}$ then concludes the proof.
It therefore remains to see that we can construct the operator $J$ of Proposition~\ref{prop:stable_space_decomposition}
with the additional property that $u = u_1 + U_{\boldsymbol{z}}$ if $u$ is such that
$\operatorname*{supp} u \subset \overline{\omega_{\boldsymbol{z}}}$. This follows by carefully selecting the
elements on which the local averaging is done, namely, whenever one has to choose an element on which to average,
one selects, if possible, an element that is {\em not} contained in $\omega_{\boldsymbol{z}}$.
For example for vertex contributions $\boldsymbol{z'} \in \partial \omega_{\boldsymbol{z}}$ we make sure to use
elements $K'$ which are not in $\omega_{\boldsymbol{z}}$. This implies that for
$\operatorname{supp}(u) \subseteq \overline{\omega_{\boldsymbol{z}}}$ we get $u_{\boldsymbol{z'}}=0$.
A similar choice is made when defining the edge contributions.
\end{proof}
\end{corollary}
The stable space decomposition of Proposition~\ref{prop:stable_space_decomposition} is one of several ingredients
of the proof that the interpolation space obtained by interpolating the space $\widetilde{S}^{p}(\mathcal{T})$ endowed
with the $L^2$-norm and the $H^1$-norm yields the space $\widetilde{S}^p(\mathcal{T})$ endowed with
the appropriate fractional Sobolev norm:
\begin{proposition}[\protect{\cite{melenk_appendix}}]
\label{prop:characterization_poly_interp}
Let $s \in (0,1)$ and let $\mathcal{T}$ be a shape regular triangulation of $\Gamma$.
Let $p \ge 1$. Then:
\begin{align*}
\left[ \left( \widetilde{S}^{p}(\mathcal{T}), \left\|\cdot\right\|_{L^2(\Gamma)}\right), \left( \widetilde{S}^{p}(\mathcal{T}), \left\|\cdot\right\|_{\widetilde{H}^1(\Gamma)}\right) \right]_s
&=\left( \widetilde{S}^{p}(\mathcal{T}), \left\|\cdot\right\|_{\widetilde{H}^s(\Gamma)}\right).
\end{align*}
The constants implied in the norm equivalence depend only on $\Gamma$, $s$, and the $\gamma$-shape regularity of $\mathcal{T}$.
\end{proposition}
We note that such a result is clearly valid for fixed $p \geq 1$ (see, e.g., \cite[Proof of Prop. 5]{aff_hypsing}), but the essential observation of
\cite{melenk_appendix} is that the norm equivalence constants do not depend on $p$.
\subsection{Geometry of vertex patches}
\begin{figure}
\caption{Reference patch configurations}
\label{fig:reference-patches}
\end{figure}
We recall that ${\mathcal{V}}$ is the set of all inner vertices, i.e., $\boldsymbol{z} \notin \partial\Gamma$ for all $\boldsymbol{z} \in {\mathcal{V}}$.
We define the patch size for a vertex $\boldsymbol{z} \in {\mathcal{V}}$ as $h_{\boldsymbol{z}}:=\operatorname{diam}(\omega_{\boldsymbol{z}})$ and
stress that $\gamma$-shape regularity implies $h_{\boldsymbol{z}} \sim h|_K$ for all elements $K \subseteq \omega_{\boldsymbol{z}}$.
Due to shape regularity, the number of elements meeting at a vertex is bounded.
The following definition allows us to transform the vertex patches $\omega_{\boldsymbol{z}}$ to a finite number of reference configurations.
\begin{definition}[reference patch]
\label{definition:reference-patches}
Let $\omega_{\boldsymbol{z}}$ be an interior patch consisting of $n$ triangles.
We may define a Lipschitz continuous bijective map $F_{\boldsymbol{z}}: \widehat \omega_{\boldsymbol{z}} \rightarrow \omega_{\boldsymbol{z}}$,
where $\widehat \omega_{\boldsymbol{z}} \subseteq \mathbb{R}^2$ is a regular polygon with $n$ edges,
see Fig.~\ref{fig:reference-patches}. The map $F_{\boldsymbol{z}}$ is piecewise defined
as a concatenation of affine maps from triangles comprising the regular polygon to the reference element $\widehat K$
with the element maps $F_K$. We note that $F_{\boldsymbol{z}}(\partial\widehat\omega_{\boldsymbol{z}}) = \partial \omega_{\boldsymbol{z}}$.
\end{definition}
The following lemma tells us how the hypersingular integral operator behaves under
the patch transformation.
\begin{lemma}
\label{item:lemma:reference-patch-hypsing_scaling}
Let $u \in \widetilde{H}^{1/2}(\Gamma)$ with $\operatorname{supp}(u) \subseteq \overline{\omega_{\boldsymbol{z}}}$.
Define $\widehat u:= u \circ F_{\boldsymbol{z}}$ and the integral operator $\widehat{D}$ as
$\widehat{D} \widehat{u}\,(x):= - \partial_{n_x }^{int} \int_{\widehat{\omega_{\boldsymbol{z}}}}{ \partial_{n_y}^{int} G(x,y) \widehat{u}(y) \; ds(y)}
\quad \text{for } x \in \widehat{\omega}_{\boldsymbol{z}}$, where we treat $\widehat{\omega_{\boldsymbol{z}}} \subseteq \mathbb{R}^2$ as a screen embedded in $\mathbb{R}^3$ and $\partial_{n_x}$
is the derivative in direction of the vector $(0,0,1)$.
Then the hypersingular operator scales like
$$
\left<D u,u \right>_{\omega_{\boldsymbol{z}}} \sim h_{\boldsymbol{z}} \left<\widehat{D} \widehat u,\widehat u \right>_{\widehat{\omega}_{\boldsymbol{z}}},
$$
where the implied constants depend only on $\Gamma$ and the $\gamma$-shape regularity of $\mathcal{T}$.
\end{lemma}
\begin{proof}
We will prove this in three steps:
\begin{enumerate}[(i)]
\item $\left<D u,u \right>_{\Gamma} \sim \left\|u\right\|_{\widetilde{H}^{1/2}(\omega_{\boldsymbol{z}})}^2$,
\item $\left\|u\right\|_{\widetilde{H}^{1/2}(\omega_{\boldsymbol{z}})}^2 \sim h_{\boldsymbol{z}} \left\|\widehat{u}\right\|_{\widetilde{H}^{1/2}(\widehat{\omega}_{\boldsymbol{z}})}^2$,
\item $\left\|\widehat{u}\right\|_{\widetilde{H}^{1/2}(\widehat{\omega}_{\boldsymbol{z}})}^2 \sim \left<\widehat{D} \widehat{u},\widehat{u}\right>_{\widehat{\omega}_{\boldsymbol{z}}}$.
\end{enumerate}
\emph{Proof of (i):} It is well-known that $D$ is continuous and elliptic on $\widetilde{H}^{1/2}(\omega_{\boldsymbol{z}})$. In
our case, the ellipticity constants can be chosen independently of the patch $\omega_{\boldsymbol{z}}$ and instead only depend on
$\Gamma$. To do so we embed the spaces $\widetilde{H}^{1/2}(\omega_{\boldsymbol{z}})$ into finitely
many larger spaces $\widetilde{H}^{1/2}(\mathcal{O}_j)$, where the
sub-surfaces $\mathcal{O}_j$ are {\em open} and for each $\boldsymbol{z}$ there is ${\mathcal O}_j$
such that $\omega_{\boldsymbol{z}} \subseteq \mathcal{O}_j\subseteq \Gamma$.
(For the screen problem we may use the single sub-surface $\mathcal{O}:=\Gamma$, for the case
of closed surfaces we can, for example, use $\mathcal{O}_j:=\Gamma \setminus F_j$, where $F_j$ is the $j$-th face
of the polyhedron $\Omega$ such that $\omega_{\boldsymbol{z}} \cap F_j = \emptyset$).
It is important that these surfaces are open, since for closed surfaces $\Gamma$ we do not have full ellipticity
of $D$ but only for $\widetilde{D}$, and the stabilization term has a different scaling behavior.
Since $u$ vanishes outside of $\omega_{\boldsymbol{z}}$ we can use the ellipticity on $\widetilde{H}^{1/2}(\mathcal{O}_j)$ to see
$\left<Du,u \right>_{\Gamma} \sim \left\|u\right\|_{\widetilde{H}^{1/2}(\mathcal{O}_j)}^2$.
By Corollary \ref{cor:restriction_extension} the norms on $\omega_{\boldsymbol{z}}$ and $\mathcal{O}_j$ are equivalent, which implies the statement (i).
\emph{Proof of (ii):}
The scalings of the $L^2$-norm and the $H^1$-seminorm (we can use the seminorm, since we are working on $\widetilde{H}^{1/2}$ of an open surface) are well known to be
\begin{align*}
\left\|u\right\|_{L^2(\omega_{\boldsymbol{z}})} &\sim h_{\boldsymbol{z}} \left\|\widehat{u}\right\|_{L^2(\widehat{\omega}_{\boldsymbol{z}})}, \quad
\left\|\nabla u\right\|_{L^2(\omega_{\boldsymbol{z}})} \sim \left\|\nabla \widehat{u}\right\|_{L^2(\widehat{\omega}_{\boldsymbol{z}})}.
\end{align*}
The interpolation theorem (Proposition~\ref{interpolation_theorem}, (\ref{item:interpolation_theorem:i}))
then proves part (ii).
\emph{Proof of (iii):} We again use ellipticity and continuity of $\widehat{D}$. Since there are only finitely many reference patches, the constants can be
chosen independently of the individual patches.
\end{proof}
\section{Condition number of the $hp$-Galerkin matrix}
\label{sec:p_condition}
In this section we investigate the condition number of the unpreconditioned Galerkin matrix to motivate the need
for good preconditioning.
We will work on the reference triangle $\widehat{K}=\operatorname{conv}\left\{(0,0),(1,0),(0,1)\right\}$ and
will need the following well-known inverse inequalities for polynomials on $\widehat{K}$:
\begin{proposition}[{Inverse inequalities, \cite[Theorem 4.76]{schwab_p_fem}}]
Let $\widehat{K}$ denote the reference triangle and let ${\mathcal E}$ be one of its edges.
There exists a constant $C$ such that for all $p \in \mathbb{N}$ and for all $v \in P^p(\widehat{K})$ the following
estimates hold:
\begin{align}
\left\|v\right\|_{L^{\infty}(\widehat{K})}&\leq C p^2 \left\|v\right\|_{L^2(\widehat{K})} \label{eqn:inv_est_infty_l2}, \\
\left\|v\right\|_{L^{\infty}(\widehat{K})}&\leq C \sqrt{\log(p+1)} \left\|v\right\|_{H^1(\widehat{K})} \label{eqn:inv_est_infty_h1}, \\
\left\|v\right\|_{H^1(\widehat{K})}&\leq C p^2 \left\|v\right\|_{L^2(\widehat{K})} \label{eqn:inv_est_h1_l2}, \\
\left\|v\right\|_{L^2({\mathcal E})}&\leq C p \left\|v\right\|_{L^2(\widehat{K})} \label{eqn_inv_est_trace}.
\end{align}
\qed
\end{proposition}
First we investigate the $L^2$ and $H^1$ conditioning of our basis on the reference triangle.
\begin{lemma}
\label{condition_upper_bound_l2}
Let $u \in P^p(\widehat{K})$ and let $\alpha^{\mathcal{V}}_{j}$, $\alpha_j^{\mathcal{E}_m}$, $\alpha^\mathcal{I}_{(ij)}$ be the coefficients with respect to
the basis in Definition~\ref{definition_basis}, i.e.,
we decompose $u= u_{\mathcal{V}} + u_{\mathcal{E}_1} + u_{\mathcal{E}_2} + u_{\mathcal{E}_3} + u_\mathcal{I}$ with
\begin{align}
\label{eq:condition_upper_bound_l2-10}
u_{\mathcal{V}}&=\sum_{j=1}^{3}{\alpha^{\mathcal{V}}_j \varphi^{\mathcal{V}}_j}, \quad
u_{\mathcal{E}_m}=\sum_{j=0}^{p-2}{\alpha^{\mathcal{E}_m}_j \varphi^{\mathcal{E}_m}_j}, \quad
u_{\mathcal{I}}=\sum_{i+j \leq p-3}{\alpha^\mathcal{I}_{(ij)} \varphi^{\mathcal{I}}_{(ij)} }.
\end{align}
Then for a constant $C > 0$ that does not depend on $u$ or $p$:
\begin{align}
\left\|u_{\mathcal{V}}\right\|_{L^2(\widehat{K})}^2 & \leq C \sum_{j=1}^{3}{\left|\alpha^{\mathcal{V}}_j\right|^2},
&\left\|u_{\mathcal{E}_m}\right\|_{L^2(\widehat{K})}^2 & \leq C \sum_{j=0}^{p-2}{\left|\alpha^{\mathcal{E}_m}_j\right|^2},
&\left\|u_{\mathcal{I}}\right\|_{L^2(\widehat{K})}^2 & = \sum_{i+j\leq p-3}{\left|\alpha^{\mathcal{I}}_{(ij)}\right|^2}. \label{eqn:inner_l2_eqality}
\end{align}
Combined this gives:
\begin{align}
\label{eqn:condition_upper_bound_l2_full}
\left\|u\right\|_{L^2(\widehat{K})}^2 &\leq C \Bigl( \sum_{j=1}^{3}{\left|\alpha^{\mathcal{V}}_{j}\right|^2} + \sum_{m=1}^{3}{\sum_{j=0}^{p-2}{ \left|\alpha_j^{\mathcal{E}_m}\right|}^2 }+ \sum_{i+j \leq p-3}{\left|\alpha^\mathcal{I}_{(ij)}\right|^2}\Bigr).
\end{align}
\begin{proof}
The estimate for $u_{\mathcal{V}}$ in (\ref{eqn:inner_l2_eqality}) is clear.
For the edge contributions in (\ref{eqn:inner_l2_eqality}), we restrict ourselves to the edge $(0,0)-(1,0)$, i.e., $m=1$ and drop the index $m$ in the notation. The other edges can be treated analogously.
\begin{align*}
\left\|u_{\mathcal{E}}\right\|_{L^2(\widehat{K})}^2 &= \left\|\sum_{j=0}^{p-2}{ \alpha^{\mathcal{E}}_j
\sqrt{\frac{2j+3}{2}} L^{\mathcal{S}}_{j+2}(\lambda_1 - \lambda_2, \lambda_1 + \lambda_2)}\right\|^2_{L^2(\widehat{K})} \\
&=
\sum_{i,j=0}^{p-2}{ \alpha^\mathcal{E}_j \alpha^\mathcal{E}_i
\int_{\widehat{K}}{ \sqrt{\frac{2i+3}{2}} L^{\mathcal{S}}_{i+2}(\lambda_1 - \lambda_2, \lambda_1 + \lambda_2)} \sqrt{\frac{2j+3}{2}}L^{\mathcal{S}}_{j+2}(\lambda_1 - \lambda_2, \lambda_1 + \lambda_2)} dx\\
&= \frac{1}{4} \sum_{i,j=0}^{p-2}{\alpha^\mathcal{E}_j \alpha^\mathcal{E}_i
\int_{-1}^{1}{ \int_{-1}^{1}{\sqrt{\frac{2i+3}{2}}
L_{i+2}(\xi) \sqrt{\frac{2j+3}{2}}L_{j+2}(\xi) \left(\frac{1-\eta}{2}\right)^{i+j+5} d\xi d\eta }}}.
\end{align*}
In the last step we transformed the reference triangle to $(-1,1)\times (-1,1)$
via the map
${(\xi,\eta)\mapsto(\frac{1}{4}(1+\xi)(1-\eta),\frac{1}{2}(1+\eta))}$.
It is well known (see \cite[p.~{65}]{schwab_p_fem}) that the 1D mass matrix $M$ of
the integrated Legendre polynomials is
pentadiagonal, and the non-zero entries satisfy $\left|M_{(ij)}\right| \sim \frac{1}{(i+1)\,(j+1)}$.
It is easy to check that $ \left|\int_{-1}^{1}{\left(\frac{1-\eta}{2}\right)^{i+j+5} d\eta}\right| \leq C (i+j+6)^{-1}$.
Together with a Cauchy-Schwarz estimate, we obtain:
\begin{align*}
\left\|u_{\mathcal{E}}\right\|_{L^2(\widehat{K})}^2 &
\lesssim \sum_{j=0}^{p-2}{\left|\alpha^{\mathcal{E}}_j\right|^2 \frac{1}{(j+1)^3}}
\lesssim \sum_{j=0}^{p-2}{\left|\alpha^{\mathcal{E}}_j\right|^2}.
\end{align*}
The bubble basis functions are chosen $L^2$-orthogonal. Thus, using our scaling of the
bubble basis functions,
\begin{align*}
\left\|u_{\mathcal{I}}\right\|_{L^2(\widehat{K})}^2 & =
\sum_{i+j \leq p-3}{ \left|\alpha^\mathcal{I}_{(ij)}\right|^2 \left\|\varphi^{\mathcal{I}}_{(ij)}\right\|_{L^2(\widehat{K})}^2} =
\sum_{i+j \leq p-3} \left|\alpha^\mathcal{I}_{(ij)}\right|^2 .
\end{align*}
Finally, we split the function $u$ into vertex, edge and inner components, apply the triangle inequality and get
\begin{align*}
\left\|u\right\|_{L^2(\widehat{K})}^2 &\leq 5 \left\|u_{\mathcal{V}}\right\|^2_{L^2(\widehat{K})} + 5 \sum_{m=1}^{3}{\left\|u_{\mathcal{E}_j}\right\|_{L^2(\widehat{K})}^2} + 5 \left\|u_\mathcal{I}\right\|_{L^2(\widehat{K})}^2,
\end{align*}
which is \eqref{eqn:condition_upper_bound_l2_full}.
\end{proof}
\end{lemma}
More interesting are the reverse estimates.
\begin{lemma}
\label{condition_lower_bound}
There is a constant $C > 0$ independent of $p$ such that for every $u \in P^p(\widehat{K})$ the coefficients of its
representation in the basis of Definition~\ref{definition_basis} as in Lemma~\ref{condition_upper_bound_l2} satisfy:
\begin{align*}
\mbox{vertex parts:} &\qquad \sum_{j=1}^{3}{\left|\alpha^{\mathcal{V}}_j\right|^2} \leq C p^{4} \left\|u\right\|_{L^2(\widehat{K})}^2 \quad \text{ as well as } \quad
\sum_{j=1}^{3}{\left|\alpha^{\mathcal{V}}_j\right|^2} \leq C \log(p+1) \left\|u\right\|_{H^1(\widehat{K})}^2,\\
\mbox{edge parts:} & \qquad
\sum_{j=0}^{p-2}{\left|\alpha^{\mathcal{E}_m}_j\right|^2} \leq C p^6 \left\|u\right\|_{L^2(\widehat{K})}^2 \quad \text{ and } \quad
\sum_{j=0}^{p-2}{\left|\alpha^{\mathcal{E}_m}_j\right|^2} \leq C p^2 \left\|u\right\|_{H^1(\widehat{K})}^2.
\end{align*}
Moreover, if $u$ vanishes on $\partial \widehat{K}$, then
\begin{align}
\label{eq:condition_inner_l2_equality}
\sum_{i+j\leq p-3}{\left|\alpha^{\mathcal{I}}_{(ij)}\right|^2}&= \left\|u\right\|_{L^2(\widehat{K})}^2.
\end{align}
\begin{proof}
Since $\alpha^{\mathcal{V}}_j=u(\boldsymbol{z}_j)$, where $\boldsymbol{z}_j$ denotes the $j$-th vertex,
we can use the $L^\infty$-inverse estimates \eqref{eqn:inv_est_infty_l2} and \eqref{eqn:inv_est_infty_h1} to get estimates
for the vertex part.
For the edge parts, we again only consider the bottom edge, ${\mathcal E} = {\mathcal E}_m$ with $m=1$.
First we assume that $u$ vanishes in all vertices. If we consider the restriction of $u$ to the edge ${\mathcal E}$
we only have contributions by the edge basis, i.e., we can write
\begin{align*}
u\left(x,0\right)&=\sum_{i=0}^{p-2}{\alpha^{\mathcal{E}}_i \sqrt{\frac{2i+3}{2}} L_{i+2}( 2x-1)}, \quad x \in (0,1), \\
\frac{\partial}{\partial x} u(x,0)&=\sum_{i=0}^{p-2}{\alpha^{\mathcal{E}}_i 2\;\sqrt{\frac{2 i+3}{2}} \ell_{i+1}(2x-1)}, \quad x \in (0,1).
\end{align*}
The factor was chosen to get an $L^2$-normalized basis, since we have $\left\|\ell_{i+1}\right\|_{L^2(-1,1)}^2=\frac{2}{2 i+3}$.
The Legendre polynomials are orthogonal on $(-1,1)$, and therefore simple calculations show
\begin{align*}
\left\|\frac{\partial u}{\partial x}\right\|_{L^2(\mathcal{E})}^2&=2\sum_{i=0}^{p-2}{\left|\alpha^{\mathcal{E}}_i\right|^2}.
\end{align*}
If we consider a general $u \in P^p(\widehat{K})$, we apply the previous estimate to $u_2:=u - I^1 u$ where $I^1$ denotes the
nodal interpolation operator to the linears. Then we get from the triangle inequality
\begin{align*}
\sum_{i=0}^{p-2}{\left|\alpha^{\mathcal{E}}_i\right|^2} &
\leq 2\left\|\frac{\partial u}{\partial x}\right\|_{L^2(\mathcal{E})}^2 +
2 \left\|\frac{\partial I^1u}{\partial x}\right\|_{L^2(\mathcal{E})}^2.
\end{align*}
We apply the trace estimate \eqref{eqn_inv_est_trace} to the first part
and the trace and norm equivalence for the second. We obtain:
\begin{align*}
\sum_{i=0}^{p-2}{\left|\alpha^{\mathcal{E}}_i\right|^2} &\lesssim p^2 \left\|\frac{\partial u}{\partial x}\right\|_{L^2(\widehat{K})}^2 + \left\|I^1u\right\|_{L^2(\widehat{K})}^2.
\end{align*}
The $H^1$ estimate then follows from the $L^\infty$ estimate
for the nodal interpolant \eqref{eqn:inv_est_infty_h1}.
For the $L^2$ estimate we then simply use the inverse estimate \eqref{eqn:inv_est_h1_l2}.
For the equality \eqref{eq:condition_inner_l2_equality} we note that if $u|_{\partial \widehat{K}} = 0$, then
$u=u_{\mathcal{I}}$ and thus we can use the equality in \eqref{eqn:inner_l2_eqality}.
\end{proof}
\end{lemma}
\begin{lemma}
\label{thm:condition_estimates_tref}
There exist constants $c_0$, $C_0$, $c_1$, $C_1 >0$ independent of $p$ such that
for every $u \in P^p(\widehat{K})$ its coefficients in the basis
of Definition~\ref{definition_basis} as in Lemma~\ref{condition_upper_bound_l2} satisfy:
\begin{align}
c_0 \left\|u\right\|_{L^2(\widehat{K})}^2 &\leq \sum_{j=1}^{3}{\left|\alpha^{\mathcal{V}}_{j}\right|^2} + \sum_{m=1}^{3}{\sum_{j=0}^{p-2}{ \left|\alpha_j^{\mathcal{E}_m}\right|}^2 }+
\sum_{i+j \leq p-3}{\left|\alpha^\mathcal{I}_{(ij)}\right|^2} \leq C_0 p^{6} \left\|u\right\|_{L^2(\widehat{K})}^2, \label{eqn:condition_estimates_tref_l2}\\
c_1 p^{-4} \left\|u\right\|_{H^1(\widehat{K})}^2 &
\leq \sum_{j=1}^{3}{\left|\alpha^{\mathcal{V}}_{j}\right|^2} + \sum_{m=1}^{3}{\sum_{j=0}^{p-2}{ \left|\alpha_j^{\mathcal{E}_m}\right|}^2 }+ \sum_{i+j \leq p-3}{\left|\alpha^\mathcal{I}_{(ij)}\right|^2}
\leq C_1 p^{2} \left\|u\right\|_{H^{1}(\widehat{K})}^2 \label{eqn:condition_estimates_tref_h1}.
\end{align}
\begin{proof}[Proof of \eqref{eqn:condition_estimates_tref_l2}: \nopunct]
The lower bound was already shown in Lemma~\ref{condition_upper_bound_l2}.
For the upper bound we apply the preceding Lemma~\ref{condition_lower_bound} to $u$ and see
$\sum_{j=1}^{3}{\left|\alpha^{\mathcal{V}}_j\right|^2} \lesssim p^{4} \left\|u\right\|_{L^2(\widehat{K})}^2$.
For any edge ${\mathcal E}_m$ we get
\begin{align*}
\sum_{j=0}^{p-2}{\left|\alpha^{\mathcal{E}_m}_j\right|^2} &\lesssim p^6 \left\|u\right\|_{L^2(\widehat{K})}^2.
\end{align*}
Next we set $u_2:=u- u_{\mathcal{V}} - u_{\mathcal{E}}$, where $u_{\mathcal{E}}$ is the sum of the edge contributions $u_{\mathcal{E}_m}$. This function vanishes on the boundary of $\widehat{K}$ and we can apply Lemma~\ref{condition_lower_bound} to get:
\begin{align*}
\sum_{i+j\leq p-3}{\left|\alpha^{\mathcal{I}}_{(ij)}\right|^2}&\lesssim \left\|u_2\right\|_{L^2(\widehat{K})}^2.
\end{align*}
Since we can always estimate the $L^2$ norms by the $\ell^2$ norms of the coefficients
(Lemma~\ref{condition_upper_bound_l2}), we obtain
\begin{align*}
\left\|u_2\right\|_{L^2(\widehat{K})}^2&\leq 3\left\|u\right\|_{L^2(\widehat{K})}^2 + 3\left\|u_{\mathcal{V}}\right\|_{L^2(\widehat{K})}^2 + 3\left\|u_{\mathcal{E}}\right\|_{L^2(\widehat{K})}^2
\lesssim \left\|u\right\|_{L^2(\widehat{K})}^2 + p^4 \left\|u\right\|_{L^2(\widehat{K})}^2 + p^6 \left\|u\right\|_{L^2(\widehat{K})}^2.
\end{align*}
{\em Proof of \eqref{eqn:condition_estimates_tref_h1}:} The proof for the upper $H^1$ estimate works along the same lines, but using the sharper $H^1$-estimates from
Lemma \ref{condition_lower_bound} for the vertex and edge parts.
For the lower estimate, we just make use of the inverse estimate \eqref{eqn:inv_est_h1_l2}
and the fact that the $L^2$-norm is uniformly bounded by the coefficients, to conclude the proof.
\end{proof}
\end{lemma}
\begin{figure}
\caption{Numerical computation of the extremal eigenvalues of the mass matrices and the sum of mass and $H^1$-stiffness matrix $M+S$
for the full system and different sub-blocks.}
\label{fig:numerical_comparison_tref_condition}
\end{figure}
\begin{example}
In Figure~\ref{fig:numerical_comparison_tref_condition} we compare our theoretical bounds on the reference element
from Lemma~\ref{thm:condition_estimates_tref} with a numerical experiment that studies the maximal and minimal
eigenvalues of the mass matrix $M$ and the stiffness matrix $S$ (corresponding to the bilinear form
$(\nabla \cdot,\nabla \cdot)_{L^2(\widehat{K})}$).
We focus on the full system and the sub-blocks that contribute the highest order in our theoretical
investigations, i.e., the edge blocks $M_{\mathcal{E}}$, $S_{\mathcal{E}}$, and the block of inner basis functions
$M_{\mathcal{I}}$, $S_{\mathcal{I}}$. We see that the estimates
on the full condition numbers are not overly pessimistic: the numerics show a decay of the minimal eigenvalue
like $\mathcal{O}(p^{-5.5})$ instead of $\mathcal{O}(p^{-6})$. If we focus solely on the edge contributions, we see that the
bound we used for the lower eigenvalue is not sharp there. This can partly be explained by the fact that if no inner basis functions are present,
it is possible to improve the estimate \eqref{eqn_inv_est_trace} by a factor of $p$. But since we also need to
include the coupling of inner and edge basis functions, this improvement in order is lost again when looking at the
full systems.
\hbox{}
\rule{0.8ex}{0.8ex}
\end{example}
The estimates on the reference triangle can now be transferred to the global space $\widetilde{S}^p(\mathcal{T})$ on quasiuniform meshes.
\begin{theorem}
\label{thm:condition_numbers_h1}
Let $\mathcal{T}$ be a quasiuniform triangulation with mesh size $h$. With the polynomial basis on the reference
triangle $\widehat{K}$ given by Definition~\ref{definition_basis}, let $\{\varphi_i\,|\, i=1,\ldots,N\}$ be
the basis of $\widetilde{S}^p(\mathcal{T})$.
Then there exist constants $c_0$, $c_{1/2}$, $c_1$, $C_0$, $C_{1/2}$, $C_1>0$ that depend only
on $\Gamma$ and the $\gamma$-shape regularity of $\mathcal{T}$, such that, for every
$\mathfrak{u} \in \mathbb{R}^N$ and $u =\sum_{j=1}^{N}{\mathfrak{u}_j \varphi_j} \in \widetilde S^p(\mathcal{T})$:
\begin{align}
c_0 \frac{1}{h^2} \left\|u\right\|_{L^2(\Gamma)}^2 &\leq \left\|\mathfrak{u}\right\|_{\ell^2}^2 \leq C_0 \frac{p^6}{h^2}\left\|u\right\|_{L^2(\Gamma)}^2, \label{eqn:condition_numbers_l2}\\
c_1 p^{-4} \left\|u\right\|_{H^1(\Gamma)}^2 &\leq \left\|\mathfrak{u}\right\|_{\ell^2}^2 \leq C_1 \left(p^2 + h^{-2}\right) \left\|u\right\|_{H^1(\Gamma)}^2, \label{eqn:condition_numbers_h1}\\
c_{1/2} h^{-1}\,p^{-2} \left\|u\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2 &\leq \left\|\mathfrak{u}\right\|_{\ell^2}^2 \leq C_{1/2} \left(\frac{p^4}{h} + h^{-2}\right)\left\|u\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2. \label{eqn:condition_numbers_honehalf}
\end{align}
\begin{proof}
The $L^2$-estimate (\ref{eqn:condition_numbers_l2}) can easily be shown by transforming to
the reference element and applying \eqref{eqn:condition_estimates_tref_l2}.
To prove the other estimates (\ref{eqn:condition_numbers_h1}), (\ref{eqn:condition_numbers_honehalf}),
we need the Scott-Zhang projection operator
$J_{h}:L^2(\Gamma) \rightarrow \widetilde S^1(\mathcal{T})$ as modified in \cite[Section 3.2]{aff_hypsing}.
It has the following important properties:
\begin{enumerate}
\item $J_{h}$ is a bounded linear operator from $L^2(\Gamma) $ to
$(\widetilde{S}^1(\mathcal{T}),\|\cdot\|_{L^2(\Gamma)})$.
\item For every $s \in [0,1]$ there holds $\left\|J_{h} v\right\|_{\widetilde{H}^{s}(\Gamma)} \leq C_{stab}(s) \left\|v\right\|_{\widetilde{H}^s(\Gamma)}$ $\forall v \in \widetilde{H}^s(\Gamma)$.
\item For every $K \in \mathcal{T}$ let $\omega_K:=\bigcup\left\{K' \in \mathcal{T}: K \cap K' \neq \emptyset\right\}$
denote the element patch, i.e., the union of all elements that touch $K$. Then, for all
$v \in \widetilde{H}^1(\Gamma)$
\begin{align}
\label{scott_zhang_local_l2_approx}
\left\|\left(1-J_h\right) v\right\|_{L^2(K)}&\leq C_{sz} h_K \left\|\nabla v\right\|_{L^2(\omega_K)}, \\
\label{scott_zhang_local_h1_stab}
\left\|\nabla \left(1-J_h\right) v\right\|_{L^2(K)}&\leq C_{sz} \left\|\nabla v\right\|_{L^2(\omega_K)}.
\end{align}
\end{enumerate}
The constant $C_{sz}$ depends only on the $\gamma$-shape regularity of $\mathcal{T}$, and $C_{stab}(s)$ additionally
depends on $\Gamma$ and $s$.
We will use the following notation:
For a function $u \in \widetilde{S}^p(\mathcal{T})$ we will write $\mathfrak{u} \in \mathbb{R}^{N}$ for its
representation in the basis $\{\varphi_i\,|\, i=1,\ldots,N\}$.
For an element $K \in \mathcal{T}$ we write $\mathfrak{u}|_K$ for
the part of the coefficient vector that belongs to basis functions whose support intersects the interior of $K$.
In addition to the function $u \in \widetilde{S}^p(\mathcal{T})$, we will employ the function $\tilde u:= u - J_h u$.
Its vector
representation will be denoted $\tilde {\mathfrak{u}} \in \mathbb{R}^{N}$. Finally, the vector representation
of $J_h u$ (again $u \in \widetilde{S}^p(\mathcal{T})$) will be $\mathfrak{J}_h \mathfrak{u} \in \mathbb{R}^{N}$.
{\em 1.~step:}
We claim the following stability estimates:
\begin{align}
\left\|\mathfrak{J}_h \mathfrak{u}\right\|_{\ell^2}^2&\lesssim h^{-2} \, \left\|J_h u\right\|_{L^2(\Gamma)}^2 \lesssim h^{-2} \left\|u\right\|_{L^2(\Gamma)}^2 \lesssim \left\|\mathfrak{u}\right\|_{\ell^2}^2,
\label{est_jhu_coeff}\\
\left\|J_h u\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2&\lesssim~ h^{-1} \left\|J_h u\right\|_{L^{2}(\Gamma)}^2,
\label{est_jhu_coeff-foo}\\
\left\|J_h u\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2&\lesssim h \left\|\mathfrak{u}\right\|_{\ell^2}^2 \label{est_jhu_honehalf}.
\end{align}
The inequalities (\ref{est_jhu_coeff}) are just a simple scaling argument combined with the $L^2$ stability
of the Scott-Zhang projection and \eqref{eqn:condition_numbers_l2}.
The inequality (\ref{est_jhu_coeff-foo}) follows from the inverse inequality (note that $J_h u$ has degree 1).
Finally, \eqref{est_jhu_honehalf} follows from combining (\ref{est_jhu_coeff-foo}) and \eqref{est_jhu_coeff}.
{\em 2.~step:}
Next, we investigate the function $\tilde{u}=u - J_h u$. We claim the following estimates:
\begin{align}
\left\|\tilde{u}\right\|_{L^2(\Gamma)}^2 &\lesssim h^2 \left\|\mathfrak{u}\right\|_{\ell^2}^2 \label{est_utilde_l2}, \\
\left\|\tilde{u}\right\|_{H^1(\Gamma)}^2 &\lesssim p^4 \left\|\mathfrak{u}\right\|_{\ell^2}^2 \label{est_utilde_h1}, \\
\left\|\tilde{u}\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2 &\lesssim p^2\,h \left\|\mathfrak{u}\right\|_{\ell^2}^2 \label{est_utilde_honehalf}.
\end{align}
The estimate \eqref{est_utilde_l2}
is a simple consequence of
\eqref{eqn:condition_numbers_l2} and the $L^2$-stability of the Scott-Zhang operator $J_h$.
For the proof of \eqref{est_utilde_h1}, we combine a simple scaling argument
with \eqref{eqn:condition_estimates_tref_h1} and the stability estimate \eqref{est_jhu_coeff} to get
\begin{align*}
\left\|\tilde{u}\right\|_{H^1(\Gamma)}^2
&\stackrel{~\eqref{eqn:condition_estimates_tref_h1}}{\lesssim}
p^4 \left\|\mathfrak{\tilde{u}}\right\|_{\ell^2}^2
\stackrel{~\eqref{est_jhu_coeff}}{\lesssim} p^4 \left\|\mathfrak{u}\right\|_{\ell^2}^2.
\end{align*}
The bound \eqref{est_utilde_honehalf} follows from the interpolation estimate of
Proposition~\ref{interpolation_theorem}, (\ref{item:interpolation_theorem:ii}) and the
estimates \eqref{est_utilde_l2}--\eqref{est_utilde_h1}.
{\em 3.~step:} We assert:
\begin{align}
\left\|\mathfrak{\tilde{u}}\right\|_{\ell^2}^2&\lesssim \frac{p^6}{h^2} \left\|u\right\|_{L^2(\Gamma)}^2 \label{est_utilde_coeff_l2},\\
\left\|\mathfrak{\tilde{u}}\right\|_{\ell^2}^2&\lesssim p^2 \left\|u\right\|_{H^1(\Gamma)}^2 \label{est_utilde_coeff},\\
\left\|\mathfrak{\tilde{u}}\right\|_{\ell^2}^2&\lesssim \frac{p^4}{h} \left\|u\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2 \label{est_utilde_coeff_honehalf}.
\end{align}
Again, \eqref{est_utilde_coeff_l2} is a simple consequence of
\eqref{eqn:condition_numbers_l2} and the $L^2$-stability of the Scott-Zhang operator $J_h$.
For the bound \eqref{est_utilde_coeff} we calculate,
using the equivalence \eqref{eqn:condition_estimates_tref_h1} of the coefficient vector and the $H^1$-norm
on the reference triangle, together with the scaling properties of the $H^1$- and $L^2$-norms,
\begin{align*}
\left\|\mathfrak{\tilde{u}}\right\|_{\ell^2}^2&\leq\sum_{K \in \mathcal{T}}{ \left\|\mathfrak{\tilde{u}}|_K \right\|_{\ell^2}^2}
\stackrel{~\eqref{eqn:condition_estimates_tref_h1}, \text{scaling}}{\lesssim} p^2 \sum_{K \in \mathcal{T}}{\left( h^{-2} \left\|\tilde{u}\right\|_{L^2(K)}^2 + \left|\tilde{u}\right|_{H^1(K)}^2\right)}.
\end{align*}
By applying the local $L^2$-interpolation estimate \eqref{scott_zhang_local_l2_approx} and $H^1$-stability \eqref{scott_zhang_local_h1_stab} we get
\begin{align*}
\left\|\mathfrak{\tilde{u}}\right\|_{\ell^2}^2&\lesssim p^2 \sum_{K \in \mathcal{T}}{ \left\|u\right\|_{H^1(\omega_K)}^2 }
\lesssim p^2 \left\|u\right\|_{H^1(\Gamma)}^2,
\end{align*}
where in the last step we used the fact that for shape-regular meshes each element is contained in at most $M$ different patches, where $M$ depends solely on the shape regularity constant $\gamma$.
We next prove \eqref{est_utilde_coeff_honehalf}.
We apply Proposition~\ref{interpolation_theorem}, (\ref{item:interpolation_theorem:i})
to the map $\widetilde{S}^p(\mathcal{T}) \to \ell^2:\; u \mapsto \mathfrak{\tilde{u}}$, where the space $\widetilde{S}^p(\mathcal{T})$ is
once equipped with the $L^2$- and once with the $H^1$-norm. By Proposition~\ref{prop:characterization_poly_interp},
interpolating between \eqref{est_utilde_coeff_l2} and~\eqref{est_utilde_coeff} yields
$\left\|\mathfrak{\tilde{u}}\right\|_{\ell^2}^2 \lesssim \left( p^6 h^{-2} \, p^2\right)^{1/2} \left\|u\right\|^2_{\widetilde{H}^{1/2}(\Gamma)} \lesssim p^4 h^{-1} \left\|u\right\|^2_{\widetilde{H}^{1/2}(\Gamma)}$.
{\em 4.~step:}
The above steps allow us to obtain the $H^1$ and $H^{1/2}$ estimates \eqref{eqn:condition_numbers_h1} and \eqref{eqn:condition_numbers_honehalf} of the theorem.
We decompose $u = \tilde{u} + J_h u$ and correspondingly
$\mathfrak{u}=\mathfrak{\tilde{u}}+ \mathfrak{J}_h \mathfrak{u}$. Then:
\begin{align*}
\left\|\mathfrak{u}\right\|_{\ell^2}^2
\lesssim \left\|\tilde {\mathfrak{u}}\right\|_{\ell^2}^2 + \left\|\mathfrak{J}_h \mathfrak{u}\right\|_{\ell^2}^2
&\stackrel{(\ref{est_utilde_coeff_honehalf}), (\ref{est_jhu_coeff})}{\lesssim} \frac{p^4}{h} \left\|u\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2 + \frac{1}{h^2} \left\|u\right\|_{L^2(\Gamma)}^2 \lesssim \frac{p^4\,h+1}{h^2} \left\|u\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2, \\
\left\|u\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2 \lesssim
\left\|\tilde u\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2 + \left\|J_h u\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2
&\stackrel{(\ref{est_utilde_honehalf}), (\ref{est_jhu_honehalf})}{\lesssim} p^2 \,h \left\|\mathfrak{u}\right\|_{\ell^2}^2 + h \left\|\mathfrak{u}\right\|_{\ell^2}^2 \lesssim h\,p^2 \left\|\mathfrak{u}\right\|_{\ell^2}^2.
\end{align*}
This shows (\ref{eqn:condition_numbers_honehalf}). The $H^1$ estimate (\ref{eqn:condition_numbers_h1})
follows along the same lines: An elementwise inverse estimate gives
$$
\left\|u\right\|^2_{H^1(\Gamma)} \lesssim \frac{p^4}{h^2} \left\|u\right\|^2_{L^2(\Gamma)}
\stackrel{~\eqref{eqn:condition_numbers_l2}}{\lesssim} p^4 \|\mathfrak{u}\|^2_{\ell^2},
$$
and the splitting $\mathfrak{u} = \tilde{\mathfrak{u}} + \mathfrak{J}_h\mathfrak{u}$ produces
\begin{align*}
\left\|\mathfrak{u}\right\|^2_{\ell^2} \lesssim
\left\|\tilde{\mathfrak{u}}\right\|^2_{\ell^2} +
\left\|\mathfrak{J}_h \mathfrak{u}\right\|^2_{\ell^2}
{\stackrel{(\ref{est_utilde_coeff}), (\ref{est_jhu_coeff})}\lesssim}
\left( p^2 + h^{-2} \right) \left\|u\right\|^2_{H^1(\Gamma)}.
\tag*{\qedhere}
\end{align*}
\end{proof}
\end{theorem}
\begin{corollary}
\label{thm:condition_numbers_D}
The spectral condition number of the unpreconditioned Galerkin matrix $\widetilde D^p_h$ can be bounded by
\begin{align*}
\kappa \left(\widetilde{D}^p_h\right) &\leq C \left(\frac{p^2}{h} + p^6 \right)
\end{align*}
with a constant $C>0$ that depends only on $\Gamma$ and the $\gamma$-shape regularity of $\mathcal{T}$.
\begin{proof}
The bilinear form induced by the stabilized hypersingular operator is elliptic and continuous with respect
to the $\widetilde{H}^{1/2}$-norm.
By applying the estimates \eqref{eqn:condition_numbers_honehalf} to the Rayleigh quotients we get the stated result.
\end{proof}
\end{corollary}
\begin{remark}
In this section we did not consider the effect of diagonal scaling. The numerical results in
Section~\ref{sec:numerics} suggest that it improves the $p$-dependence of the condition number significantly.
\hbox{}
\rule{0.8ex}{0.8ex}
\end{remark}
\subsection{Quadrilateral meshes}
The present paper focuses on meshes consisting of triangles. Nevertheless, in order to put the results
of Section~\ref{sec:p_condition} in perspective, we include a short section on quadrilateral meshes.
In \cite{maitre_pourquier}, estimates similar to those of Lemma~\ref{thm:condition_estimates_tref}
have been derived for the case of the Babu\v{s}ka-Szab\'o basis on the reference square $\widehat{S}:=[-1,1]^2$.
We give a brief summary of the definitions and results.
\begin{definition}[Babu\v{s}ka-Szab\'o basis]
\label{def:babuska_szabo_basis}
\begin{enumerate}[(i)]
\item
On the reference interval $\widehat{I}:=[-1,1]$, the basis functions are based on the integrated
Legendre polynomials $L_j$ as defined in \eqref{eq:def_legendre_poly}:
\begin{align*}
\varphi_0(x)&:=\frac{1}{2} (1-x), & \varphi_1(x)&:=\frac{1}{2}(1+x), &
\varphi_j(x)&:=\frac{1}{\left\|\ell_{j-1}\right\|_{L^2(\widehat{I})}} L_{j}(x) \quad \forall \, 2 \leq j \leq p.
\end{align*}
\item
On the reference square $\widehat S$, the basis of the ``tensor product space'' ${\mathcal Q}^p(\widehat S)$
is given by
the tensor product of the 1D basis functions: ${\{ \varphi_i \otimes \varphi_j : 0 \leq i,j \leq p \}}$.
\end{enumerate}
\end{definition}
For this basis, the following estimates hold:
\begin{proposition}[{\cite[Theorem 1]{maitre_pourquier}}, {\cite[Theorem 4.1]{hu_guo_katz_condition_bounds_p}}]
\label{prop:matire_pourquier}
Let $u \in \mathcal{Q}^p(\widehat{S})$ and let $\mathfrak{u}$ denote its coefficient vector with
respect to the basis of Definition~\ref{def:babuska_szabo_basis}. Then the following estimates hold:
\begin{align}
\left\|u\right\|_{L^2(\widehat{S})}^2 &\lesssim \left\|\mathfrak{u}\right\|_{\ell^2}^2 \lesssim p^8 \left\|u\right\|_{L^2(\widehat{S})}^2, & \quad
\left\|u\right\|_{H^1(\widehat{S})}^2 &\lesssim \left\|\mathfrak{u}\right\|_{\ell^2}^2 \lesssim p^4 \left\|u\right\|_{H^{1}(\widehat{S})}^2.
\end{align}
\end{proposition}
\begin{remark}
In \cite[Thm.~{1}]{maitre_pourquier}, the estimates were only shown for the inner degrees of freedom, i.e.,
if $u|_{\partial\widehat{S}} = 0$. This restriction is removed in \cite[Thm.~{4.1}]{hu_guo_katz_condition_bounds_p}.
\hbox{}
\rule{0.8ex}{0.8ex}
\end{remark}
\begin{theorem}
\label{thm:L2H1-quads}
Let $\mathcal{T}$ be a quasi-uniform, shape-regular affine mesh of quadrilaterals of size $h$,
and let $F_K: \widehat{S} \to K$ be the affine element map for $K \in \mathcal{T}$.
Let $u \in \widetilde{\mathcal{Q}}^p(\mathcal{T}):=\left\{ u \in \widetilde{H}^{1/2}(\Gamma): u|_K \circ F_K \in \mathcal{Q}^p(\widehat{S}) \;\; \forall K \in \mathcal{T} \right\}$,
and let $\mathfrak{u}$ denote its coefficient vector with respect to the basis of Definition~\ref{def:babuska_szabo_basis}.
Then there exist constants $c_0,c_1,C_0,C_1 > 0$ that depend only on $\Gamma$ and the $\gamma$-shape regularity of $\mathcal{T}$ such that:
\begin{align*}
c_0 h^{-2} \left\|u\right\|_{L^2(\Gamma)}^2 &\leq \left\|\mathfrak{u}\right\|_{\ell^2}^2 \leq C_0 h^{-2} p^8 \left\|u\right\|_{L^2(\Gamma)}^2, &
c_1 \left\|u\right\|_{H^1(\Gamma)}^2 &\leq \left\|\mathfrak{u}\right\|_{\ell^2}^2 \leq C_1 \left(p^4 + h^{-2}\right) \left\|u\right\|_{H^1(\Gamma)}^2.
\end{align*}
\begin{proof}
The proof is completely analogous to that of Theorem~\ref{thm:condition_numbers_h1}.
The only additional ingredient to
Proposition~\ref{prop:matire_pourquier} is an operator $J_h: L^2 \to \widetilde{\mathcal{Q}}^{1}(\mathcal{T})$
that is bounded with respect to the $L^2$ and the $H^1$-norm, reproduces homogeneous
Dirichlet boundary conditions for the case of open surfaces, and has the approximation property
$\displaystyle \left\|u-J_h u\right\|_{L^2(\Gamma)} \lesssim h \left|u\right|_{H^1(\Gamma)}.$
Such an operator was proposed, e.g., in \cite{bernardi_girault_regularization_operator}.
The important estimates that need to be shown are
(we again write $\widetilde{u}:=u-J_h u$):
\begin{align*}
\left\|u\right\|_{L^2(\Gamma)}^2 &\lesssim h^2 \left\|\mathfrak{u}\right\|_{\ell^2}^2, &
\left\|u\right\|_{H^{1}(\Gamma)}^2&\lesssim \left\| \mathfrak{u}\right\|_{\ell^2}^2 \\
\left\|\mathfrak{u}\right\|_{\ell^2}^2 &\lesssim \frac{p^8}{h^2} \left\|u\right\|_{L^2(\Gamma)}^2, &
\left\|\mathfrak{u}\right\|_{\ell^2}^2 &\lesssim \left\|\mathfrak{\widetilde{u}}\right\|_{\ell^2}^2 + \left\|\mathfrak{J}_h \mathfrak{u}\right\|_{\ell^2}^2
\lesssim p^4 \left\|u\right\|_{H^1(\Gamma)}^2 + h^{-2} \left\|u\right\|_{L^2(\Gamma)}^2. \qedhere
\end{align*}
\end{proof}
\end{theorem}
\begin{remark}
\label{rem:conditioning-babuska-szabo-quads}
In the case of triangular meshes, Proposition~\ref{prop:characterization_poly_interp} allowed us to infer
$\widetilde{H}^{1/2}$-condition number estimates in Corollary~\ref{thm:condition_numbers_D}
by interpolating the discrete norm
equivalences in $L^2$ and $H^1$ of Theorem~\ref{thm:condition_numbers_h1}. For meshes consisting of
quadrilaterals, the result corresponding to
Proposition~\ref{prop:characterization_poly_interp} is currently not available in the literature. If we conjecture
$\left[ \left( \widetilde{\mathcal{Q}}^{p}(\mathcal{T}), \left\|\cdot\right\|_{L^2(\Gamma)}\right),
\left( \widetilde{\mathcal{Q}}^{p}(\mathcal{T}), \left\|\cdot\right\|_{H^1(\Gamma)}\right) \right]_{1/2}
=\left( \widetilde{\mathcal{Q}}^{p}(\mathcal{T}), \left\|\cdot\right\|_{\widetilde{H}^{1/2}(\Gamma)}\right)$
with equivalent norms, then Theorem~\ref{thm:L2H1-quads} implies the following estimates:
\begin{align*}
h^{-1} \left\|u\right\|^2_{\widetilde{H}^{1/2}(\Gamma)} &\lesssim \left\|\mathfrak{u}\right\|^2_{\ell^2}
\lesssim \left(h^{-1} p^6 + h^{-2}\right) \left\|u\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2, \\
\kappa\left(\widetilde{D}_h^p\right) &\lesssim h^{-1} + p^6.
\tag*{\hbox{}
\rule{0.8ex}{0.8ex}}
\end{align*}
\end{remark}
\begin{remark}
The influence of diagonal preconditioning is also analyzed in \cite{maitre_pourquier}, where it is shown that diagonal scaling halves the exponents of $p$ in the condition number bounds. Although we did not make any
theoretical investigations in this direction for the $H^{1/2}$-case, our numerical experiments
in Examples~\ref{example_fichera_prec} and \ref{example_screen_prec} for triangular meshes show
that diagonal scaling improves the $p$-dependence of the condition number from
$\mathcal{O}(p^{5.5})$ to $\mathcal{O}(p^{2.5})$.
\hbox{}
\rule{0.8ex}{0.8ex}
\end{remark}
\begin{remark}
The Babu\v{s}ka-Szab\'o basis of Definition~\ref{def:babuska_szabo_basis} is not the only one used on
quadrilaterals or hexahedra. An important representative of other bases are the Lagrange interpolation polynomials
associated with the Gau{\ss}-Lobatto points.
This basis has the better $\mathcal{O}(p^3)$ conditioning for the stiffness matrix
and $\mathcal{O}(p^2)$ for the mass matrix (see \cite{melenk_gl},\cite[Sect.~6]{maitre_pourquier}).
Using the same arguments as in the proof of Theorem~\ref{thm:L2H1-quads}, we get for the global $H^1$-problem that the condition number behaves
like $\mathcal{O}(p\,h^{-2} + p^3)$. If the conjecture of Remark~\ref{rem:conditioning-babuska-szabo-quads} is valid,
then we obtain for this basis for the hypersingular integral operator the condition number
estimate $\kappa(\widetilde D_h^p) \lesssim p^{5/2} + p^{-1/2} h^{-1}$.
(See the Appendix for details.)
\hbox{}
\rule{0.8ex}{0.8ex}
\end{remark}
\section{$hp$-preconditioning}
\label{sec:main_results}
\subsection{Abstract additive Schwarz methods}
\label{section_abstract_asm}
Additive Schwarz preconditioners are based on decomposing a vector space $\mathds{V}$ into smaller subspaces $\mathds{V}_i$, $i=0,\dots, J$, on
which a local problem is solved. We recall some of the basic definitions and important results.
Details can be found in \cite[chapter 2]{toselli_widlund}.
Let $a(\cdot,\cdot): \mathds{V} \times \mathds{V} \to \mathbb{R}$ be a symmetric, positive definite bilinear form on the finite dimensional vector space $\mathds{V}$.
For a given $f \in \mathds{V}'$ consider the problem of finding $u \in \mathds{V}$ such that
\begin{align*}
a(u,v)&=f(v) \quad \forall v \in \mathds{V}.
\end{align*}
We will write $A$ for the corresponding Galerkin matrix.
Let $\mathds{V}_i \subset \mathds{V},\; i=0,\dots, J$, be finite dimensional vector spaces with corresponding
prolongation operators $R_i^T: \mathds{V}_i \to \mathds{V}$.
We will commit a slight abuse of notation and also denote the matrix representation of this operator by $R^T_i$;
its transpose is denoted by $R_i$.
We also assume that $\mathds{V}$ permits a (in general not direct) decomposition into
\begin{align*}
\mathds{V}&= R_0^T \mathds{V}_0 + \sum_{i=1}^{J}{R_i^T \mathds{V}_i}.
\end{align*}
We assume that for each subspace $\mathds{V}_i$ a symmetric and positive definite bilinear form
\begin{align*}
\widetilde{a}_i( \cdot, \cdot):\mathds{V}_i \times \mathds{V}_i \to \mathbb{R}, \quad i=0, \dots, J,
\end{align*}
is given.
We write $\widetilde{A}_i$ for the matrix representation of $\widetilde{a}_i(\cdot,\cdot)$.
Sometimes these bilinear forms are referred to as the ``local solvers'';
in the simplest case of ``exact local solvers'' they are just restrictions of $a(\cdot,\cdot)$, i.e.,
$ \widetilde{a}_i(u_i,v_i):=a\left(R_i^T u_i, R_i^T v_i\right)$ for all $u_i,v_i \in \mathds{V}_i.$
Then, the corresponding additive Schwarz preconditioner is given by
\begin{align*}
B^{-1}:=\sum_{i=0}^{J}{R_i^T \widetilde{A}^{-1}_i R_i }.
\end{align*}
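\begin{remark}
For orientation, we indicate how the action of $B^{-1}$ on a residual vector is typically realized in practice. The following sketch is written in Python with NumPy; the data layout (index sets realizing the $R_i$, pre-factorized local matrices $\widetilde{A}_i$) is an illustrative assumption and not part of the abstract theory.
\begin{verbatim}
import numpy as np

def apply_asm_preconditioner(residual, local_dofs, local_solvers):
    """Apply B^{-1} r = sum_i R_i^T Atilde_i^{-1} R_i r.

    residual      : global residual vector r
    local_dofs    : list of index arrays; local_dofs[i] realizes R_i
    local_solvers : list of callables r_i -> Atilde_i^{-1} r_i,
                    e.g. wrappers around pre-computed factorizations
    """
    z = np.zeros_like(residual)
    for dofs, solve_local in zip(local_dofs, local_solvers):
        r_i = residual[dofs]        # R_i r : restrict to the subspace
        u_i = solve_local(r_i)      # local solve with Atilde_i
        z[dofs] += u_i              # R_i^T u_i : prolongate and accumulate
    return z
\end{verbatim}
In practice such a routine is passed as a preconditioner to a Krylov method, e.g.\ via \texttt{scipy.sparse.linalg.LinearOperator} in a conjugate gradient iteration.
\hbox{}
\rule{0.8ex}{0.8ex}
\end{remark}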
The following proposition allows us to bound the condition number
of the preconditioned system $B^{-1}A$.
The first part is often referred to as the Lemma of Lions
(see \cite{zhang_multilevel_schwarz,lions_schwarz_alternating_1,nepomnyaschik_asm}).
\begin{proposition}
\label{thm_asm_estimate}
\begin{enumerate}[(a)]
\item Assume that there exists a constant $C_0 >0 $ such that every $u \in \mathds{V}$ admits a decomposition
$ u = \sum_{i=0}^{J}{ R_i^T \, u_i}$ with $u_i \in \mathds{V}_i$
such that
\begin{align*}
\sum_{i=0}^{J}{\widetilde{a}_i(u_i,u_i)}\leq C_0 \; a(u,u).
\end{align*}
Then, the minimal eigenvalue $\lambda_{min}(B^{-1}A)$ of $B^{-1} A$ satisfies
$ \lambda_{min}\left(B^{-1} A\right) \geq C_0^{-1}.$
\item Assume that there exists $C_1 >0$ such that for every decomposition
$u=\sum_{i=0}^{J}{R_i^T \,v_i}$ with $v_i \in \mathds{V}_i$ the following estimate holds:
\begin{align*}
a(u,u) \leq C_1 \sum_{i=0}^{J}{\widetilde{a}_i(v_i,v_i)}.
\end{align*}
Then, the maximal eigenvalue $\lambda_{max}(B^{-1} A)$ of $B^{-1} A$ satisfies
$\lambda_{max}\left(B^{-1} A\right) \leq C_1$.
\item These two estimates together give an estimate for the condition number of the preconditioned linear system:
\begin{align*}
\kappa \left(B^{-1} A\right)&:=\frac{\lambda_{max}}{\lambda_{min}}\leq C_0 C_1. \tag*{\qed}
\end{align*}
\end{enumerate}
\end{proposition}
\subsection{An $hp$-stable preconditioner}\label{sec:hpprecond}
In order to define an additive Schwarz preconditioner, we decompose the boundary element space $\mathds{V}:=\widetilde{S}^{p}(\mathcal{T})$ into
several overlapping subspaces.
We define $\mathds{V}_h^1:= \widetilde{S}^{1}(\mathcal{T})$ as the space of globally continuous and piecewise linear functions on $\mathcal{T}$ that
vanish on $\partial \Gamma$ and denote the corresponding canonical embedding operator by $R_{h}^T: \mathds{V}_h^1 \to \mathds{V}$.
We also define for each vertex $\boldsymbol{z} \in {\mathcal{V}}$ the local space
\begin{align*}
\mathds{V}_{\boldsymbol{z}}^p:=\{ u \in \widetilde{S}^{p}(\mathcal{T}) | \operatorname{supp}(u)
\subset \overline{\omega_{\boldsymbol{z}}} \}
\end{align*}
and denote the canonical embedding operators by $R_{\boldsymbol{z}}^T: \mathds{V}_{\boldsymbol{z}}^p \to \mathds{V}$.
The space decomposition then reads
\begin{align}
\label{abstract_space_splitting}
\mathds{V} = \mathds{V}_h^1 + \sum_{\boldsymbol{z} \in {\mathcal{V}} } {\mathds{V}_{\boldsymbol{z}}^p}.
\end{align}
We will denote the restriction of the Galerkin matrix $\widetilde{D}_h^p$ to the
subspaces $\mathds{V}_h^1$ and $\mathds{V}_{\boldsymbol{z}}^p$ as
$\widetilde{D}_h^1$ and $\widetilde{D}^p_{h,\boldsymbol{z}}$, respectively.
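\begin{remark}
In terms of the basis of Definition~\ref{definition_basis}, the local space $\mathds{V}_{\boldsymbol{z}}^p$ is spanned by the hat function of $\boldsymbol{z}$, the edge functions of the edges containing $\boldsymbol{z}$, and the interior functions of the elements containing $\boldsymbol{z}$; the matrix $\widetilde{D}^p_{h,\boldsymbol{z}}$ is the corresponding principal submatrix of $\widetilde{D}_h^p$. The following sketch illustrates this index bookkeeping; the mesh and DOF-map interface is a hypothetical placeholder rather than a fixed API, and, for brevity, the handling of vertices on $\partial\Gamma$ is omitted.
\begin{verbatim}
def vertex_patch_dofs(mesh, dof_map):
    """For every vertex z, collect the indices of all basis functions
    whose support lies in the closed vertex patch omega_z.
    DOFs attached to the boundary of Gamma are assumed to be excluded."""
    patches = {}
    for z in mesh.vertices:
        dofs = [dof_map.vertex_dof(z)]              # hat function of z
        for e in mesh.edges_at_vertex(z):           # edge functions touching z
            dofs.extend(dof_map.edge_dofs(e))
        for K in mesh.elements_at_vertex(z):        # interior (bubble) functions
            dofs.extend(dof_map.cell_dofs(K))
        patches[z] = sorted(set(dofs))
    return patches
\end{verbatim}
\hbox{}
\rule{0.8ex}{0.8ex}
\end{remark}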
\begin{lemma}
\label{lemma:ppreconditioner_stable_splitting}
There exist constants $c_1,c_2 > 0 $, which depend only on $\Gamma$ and the $\gamma$-shape regularity of $\mathcal{T}$, such that the following holds:
\begin{enumerate}[(a)]
\item
For every $u \in \widetilde{S}^p(\mathcal{T})$ there exists a decomposition $u= u_1 + \sum_{\boldsymbol{z} \in {\mathcal{V}}}{u_{\boldsymbol{z}}}$ with
$u_1 \in \mathds{V}_h^1$ and $u_{\boldsymbol{z}} \in \mathds{V}^p_{\boldsymbol{z}}$ and
\begin{align*}
\left\|u_1\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2 + \sum_{\boldsymbol{z} \in {\mathcal{V}}}{\left\|u_{\boldsymbol{z}}\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2} &\leq c_1 \; \left\|u\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2.
\end{align*}
\item Any decomposition $u=v_1 + \sum_{\boldsymbol{z} \in {\mathcal{V}}}{v_{\boldsymbol{z}}}$ with $v_1 \in \mathds{V}^1_h$ and $v_{\boldsymbol{z}} \in \mathds{V}^p_{\boldsymbol{z}}$ satisfies
\begin{align*}
\left\|u\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2 &\leq c_2 \left(\left\|v_1\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2 + \sum_{\boldsymbol{z} \in {\mathcal{V}}}{ \left\|v_{\boldsymbol{z}}\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2} \right).
\end{align*}
\end{enumerate}
\begin{proof}
The first estimate is the assertion of Proposition~\ref{prop:stable_space_decomposition}, (\ref{item:prop:stable_space_decomposition-iv}).
The second estimate can be shown by a so-called coloring argument, along the same lines as in \cite[Lemma 2]{heuer_asm_indef_hyp}. It is based on
the following estimate (see \cite[Lemma 4.1.49]{book_sauter_schwab} or \cite[Lemma 3.2]{petersdorff_rwp_elasticity}):
Let $w_j$, $j=1,\dots, n$ be functions in $\widetilde{H}^{s}(\Gamma)$ for $s \geq 0$ with pairwise disjoint support. Then it holds
\begin{align}
\label{eq:SS}
\left\|\sum_{i=1}^{n}{w_i}\right\|_{\widetilde{H}^{s}(\Gamma)}^2 &\leq C \sum_{i=1}^{n}{\left\|w_i\right\|^2_{\widetilde{H}^{s}(\Gamma)}},
\end{align}
where $C>0$ depends only on $\Gamma$.
By $\gamma$-shape regularity, the number of elements in any vertex patch, and therefore also the number of vertices in a patch, is uniformly bounded by some constant
$N_{c}$ which depends solely on $\gamma$.
Thus, we can divide the vertices into sets $J_1,\dots,J_{N_{c}}$ such that
$\bigcup_{i=1}^{N_c}{ J_i} = {\mathcal{V}}$ and $ \left|\omega_{\boldsymbol{z}} \cap \omega_{\boldsymbol{z'}} \right| = 0$ for all $\boldsymbol{z},\boldsymbol{z'}$ in the same index set $J_i$.
Repeated application of the triangle inequality together with \eqref{eq:SS} then gives:
\begin{align*}
\left\|u\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2 \leq 2\left\|v_1\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2 + 2 \left\|\sum_{\boldsymbol{z} \in {\mathcal{V}}}{ v_{\boldsymbol{z}}}\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2
&\leq 2 \left\|v_1\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2 + 2 N_c \sum_{i=1}^{N_c}{ \left\|\sum_{\boldsymbol{z} \in J_i}{v_{\boldsymbol{z}}}\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2 } \\
&\leq 2 \left\|v_1\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2 + 2\, N_c \,C \sum_{\boldsymbol{z} \in {\mathcal{V}}}{ \left\|v_{\boldsymbol{z}}\right\|_{\widetilde{H}^{1/2}(\Gamma)}^2 }.
\qedhere
\end{align*}
\end{proof}
\end{lemma}
The previous lemma only made statements about the $\widetilde{H}^{1/2}(\Gamma)$-norm.
\begin{theorem}
\label{thm:ppreconditioner}
Let $\mathcal{T}$ be a $\gamma$-shape regular triangulation of $\Gamma$. Then there is a constant $C > 0$ that depends
solely on $\Gamma$ and the $\gamma$-shape regularity of ${\mathcal{T}}$ such that the following is true:
The preconditioner
\begin{align*}
B^{-1}:= R^T_h \left(\widetilde{D}_h^1 \right)^{-1} R_h \; + \; \sum_{\boldsymbol{z} \in {\mathcal{V}}}{ R^T_{\boldsymbol{z}} \left( \widetilde{D}_{h,\boldsymbol{z}}^p\right)^{-1}} R_{\boldsymbol{z}},
\end{align*}
which is implied by the space decomposition~\eqref{abstract_space_splitting}, leads to the spectral condition
number estimate
\begin{align*}
\kappa(B^{-1} \widetilde{D}_h^p) &\leq C.
\end{align*}
\begin{proof}
The bilinear form $\left<\widetilde{D} \cdot ,\cdot \right>_{\Gamma}$ is equivalent to $\left\|{\cdot}\right\|^2_{\widetilde{H}^{1/2}(\Gamma)}$.
Hence, the combination of Lemma~\ref{lemma:ppreconditioner_stable_splitting} and
Proposition~\ref{thm_asm_estimate} give the boundedness of the condition number.
\end{proof}
\end{theorem}
\subsection{Multilevel preconditioning on adaptive meshes}\label{sec:lmld}
\label{section_adaptive_meshes}
The preconditioner of Theorem~\ref{thm:ppreconditioner} relies on the space decomposition
(\ref{abstract_space_splitting}). In this section, we discuss how the space $\widetilde{S}^1(\mathcal{T})$ of
piecewise linear functions can be further decomposed in a multilevel fashion. Our setting will be one
where $\mathcal{T}$ is the finest mesh of a sequence $(\mathcal{T}_\ell)_{\ell=0}^L$ of nested meshes that
are generated by newest vertex bisection (NVB); see Figure~\ref{fig:nvb:bisec} for a description. We
point the reader to \cite{stevenson,kpp} for a detailed discussion of NVB. A key feature of NVB is that it creates
sequences of meshes that are uniformly shape regular. We mention in passing that further properties of NVB
were instrumental in proving optimality of $h$-adaptive algorithms in both FEM~\cite{ckns,ffp_adaptive_fem}
and BEM~\cite{partOne,partTwo,fkmp13,gantumur}.
Before discussing the details of the multilevel space decomposition, we stress that
the preconditioner described in Section~\ref{sec:hpprecond} is independent of the chosen
refinement strategy (such as NVB) as long as it satisfies the assumptions in Section~\ref{sec:hpprecond}, whereas
the condition number estimates for the local multilevel preconditioner discussed in the present
Section~\ref{sec:lmld} depend on the fact that the underlying refinement strategy is based on NVB.
\begin{figure}
\caption{For each element $K\in\mathcal{T}$, the refinement patterns generated by newest vertex bisection (NVB).}
\label{fig:nvb:bisec}
\end{figure}
Adaptive algorithms create sequences of meshes $(\mathcal{T}_\ell)_{\ell=0}^L$. Typically, the procedure starts
with an \emph{initial} triangulation $\mathcal{T}_0$ and the further members of the sequences are created
inductively. That is, mesh $\mathcal{T}_\ell$ is obtained from $\mathcal{T}_{\ell-1}$ by
refining some elements of $\mathcal{T}_{\ell-1}$. In an adaptive environment, these elements are determined
by a marking criterion (``marked elements'') and a mesh closure condition.
Usually, the following assumptions on the mesh refinement are made:
\begin{itemize}
\item $\mathcal{T}_\ell$ is regular for all $\ell\in\mathbb{N}_0$, i.e., there exist no hanging nodes;
\item The meshes $\mathcal{T}_0,\mathcal{T}_1,\dots$ are uniformly $\gamma$-shape-regular, i.e.,
with $|K|$ denoting the surface area of an element $K\in\mathcal{T}_\ell$ and $\mathrm{diam}(K)$
the Euclidean diameter, we have
\begin{align}
\sup_{\ell\in\mathbb{N}_0} \max_{K\in\mathcal{T}_\ell} \frac{\mathrm{diam}(K)^2}{|K|} \leq \gamma.
\label{eq:refinement-shape-regularity}
\end{align}
\end{itemize}
We consider a sequence of triangulations $\mathcal{T}_0,\dots,\mathcal{T}_L$, which is created by iteratively applying NVB. The corresponding sets of vertices are denoted ${\mathcal{V}}_0,\dots,{\mathcal{V}}_L$.
For a vertex ${\boldsymbol{z}}\in{\mathcal{V}}_\ell$, the associated patch is denoted by $\omega_{\ell,{\boldsymbol{z}}}$.
In the construction of the $p$-preconditioner in Section~\ref{sec:hpprecond} we only considered a single mesh $\mathcal{T}$.
For the remainder of the paper, the $p$ part will always be constructed with respect to the finest mesh $\mathcal{T}_L$.
For a simpler presentation we set $\mathcal{T} := \mathcal{T}_L$ and ${\mathcal{V}} := {\mathcal{V}}_L$.
\subsubsection{A refined splitting for adaptive meshes}
\begin{figure}
\caption{Visualization of the definition~\eqref{eq:def:nodesTilde} of the vertex sets $\widetilde{\mathcal{V}}_\ell$.}
\label{fig:nodesTilde}
\end{figure}
The space decomposition from~\eqref{abstract_space_splitting} involves the global lowest-order space $\mathds{V}_h^1 =
\widetilde{S}^1(\mathcal{T}_L)$.
Therefore, the computation of the corresponding additive Schwarz operator needs the inversion of a global problem,
which is, in practice, very costly, and often even infeasible.
To overcome this disadvantage, we consider a refined splitting of the space $\mathds{V}_h^1$ that relies on the hierarchy of the
adaptively refined meshes $\mathcal{T}_0,\dots,\mathcal{T}_L$.
The corresponding local multilevel preconditioner was introduced and analyzed in~\cite{ffps,dissTF}.
See also~\cite{hiptwuzheng2012,wuchen06,xch10,xcn09} for local multilevel preconditioners for (adaptive) FEM,
\cite{tran_stephan_asm_h_96,hiptmair_mao_BIT_2012} for (uniform) BEM, and \cite{amcl03,maischak_multilevel_asm,tran_stephan_mund_hierarchical_prec} for
(restricted) approaches for adaptive BEM.
Set $\widetilde{\mathcal{V}}_0 := {\mathcal{V}}_0$ and define the local subsets
\begin{align}\label{eq:def:nodesTilde}
\widetilde{\mathcal{V}}_\ell := \left({\mathcal{V}}_\ell \backslash {\mathcal{V}}_{\ell-1}\right) \cup \{ {\boldsymbol{z}}\in{\mathcal{V}}_{\ell-1} \,:\,
\omega_{\ell,{\boldsymbol{z}}} \subsetneq \omega_{\ell-1,{\boldsymbol{z}}} \} \quad\text{for } \ell\geq 1
\end{align}
of newly created vertices plus some of their neighbors, see Figure~\ref{fig:nodesTilde} for a visualization.
Based on these sets, we consider the space decomposition
\begin{align}\label{eq:splitting:refined}
\mathds{V}_h^1 = \sum_{\ell=0}^L \sum_{{\boldsymbol{z}}\in\widetilde{\mathcal{V}}_\ell} \mathds{V}^1_{\ell,{\boldsymbol{z}}} \quad\text{with}\quad \mathds{V}^1_{\ell,{\boldsymbol{z}}} :=
\mathrm{span}\{\varphi_{\ell,{\boldsymbol{z}}}\},
\end{align}
where $\varphi_{\ell,{\boldsymbol{z}}}\in \widetilde{S}^1(\mathcal{T}_\ell)$ is the nodal hat function with $\varphi_{\ell,{\boldsymbol{z}}}({\boldsymbol{z}}) = 1$ and $\varphi_{\ell,{\boldsymbol{z}}}({\boldsymbol{z}}') =
0$ for all ${\boldsymbol{z}}'\in
{\mathcal{V}}_\ell \backslash \{{\boldsymbol{z}}\}$.
The basic idea of this splitting is that we do a diagonal scaling only in the regions where the meshes have been refined.
We will use local exact solvers, i.e.,
\begin{align*}
\widetilde a_{\ell,{\boldsymbol{z}}}(u_{\ell,{\boldsymbol{z}}},v_{\ell,{\boldsymbol{z}}}) := \left<\widetilde{D} (R_{\ell,{\boldsymbol{z}}})^T u_{\ell,{\boldsymbol{z}}} , (R_{\ell,{\boldsymbol{z}}})^T
v_{\ell,{\boldsymbol{z}}} \right>_{\Gamma}
\quad\text{for all } u_{\ell,{\boldsymbol{z}}},v_{\ell,{\boldsymbol{z}}} \in \mathds{V}^1_{\ell,{\boldsymbol{z}}},
\end{align*}
where $(R_{\ell,{\boldsymbol{z}}})^T : \mathds{V}^1_{\ell,{\boldsymbol{z}}} \to \mathds{V}_h^1$ denotes the canonical embedding operator.
Let $\widetilde{D}_h^1$ denote the Galerkin matrix of $\widetilde{D}$ with respect to the basis
$(\varphi_{L,{\boldsymbol{z}}})_{{\boldsymbol{z}}\in{\mathcal{V}}_L}$ of $\mathds{V}_h^1$ and define $\widetilde{D}_{\ell,{\boldsymbol{z}}}^1 := \widetilde
a_{\ell,{\boldsymbol{z}}} (\varphi_{\ell,{\boldsymbol{z}}},\varphi_{\ell,{\boldsymbol{z}}})$.
Then, the \emph{local multilevel diagonal (LMLD)} preconditioner associated to the
splitting~\eqref{eq:splitting:refined} reads
\begin{align}
(B_h^1)^{-1} := \sum_{\ell=0}^L \sum_{{\boldsymbol{z}}\in\widetilde{\mathcal{V}}_\ell} (R_{\ell,{\boldsymbol{z}}})^T \left(\widetilde{D}_{\ell,{\boldsymbol{z}}}^1\right)^{-1}
R_{\ell,{\boldsymbol{z}}}.
\end{align}
We stress that this preconditioner corresponds to a diagonal scaling with respect to the local subset of vertices
$\widetilde{\mathcal{V}}_\ell$ on each level $\ell = 0,\dots,L$.
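\begin{remark}
The action of $(B_h^1)^{-1}$ is therefore cheap: every summand is a rank-one update determined by the coefficient vector of the level-$\ell$ hat function $\varphi_{\ell,{\boldsymbol{z}}}$ in the fine-grid hat basis. A minimal sketch (Python/NumPy; the precomputation of these coefficient vectors and of the sets $\widetilde{\mathcal{V}}_\ell$ from~\eqref{eq:def:nodesTilde} is assumed and not shown) reads:
\begin{verbatim}
import numpy as np

def apply_lmld(residual, hat_vectors, D1):
    """Local multilevel diagonal preconditioner (B_h^1)^{-1} applied to a residual.

    residual    : residual vector w.r.t. the fine-grid hat basis of V_h^1
    hat_vectors : list of vectors p; each p holds the coefficients of one
                  hat function phi_{l,z}, z in Vtilde_l, in the fine-grid basis
    D1          : Galerkin matrix of the stabilized hypersingular operator on V_h^1
    """
    z = np.zeros_like(residual)
    for p in hat_vectors:
        d = p @ (D1 @ p)                 # 1x1 local Galerkin "matrix"
        z += p * ((p @ residual) / d)    # rank-one update per (level, vertex) pair
    return z
\end{verbatim}
\hbox{}
\rule{0.8ex}{0.8ex}
\end{remark}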
Further details and the proof of the following result are found in~\cite{ffps,dissTF}.
\begin{proposition}\label{thm:hpreconditioner}
The splitting~\eqref{eq:splitting:refined} together with $\widetilde a_{\ell,{\boldsymbol{z}}}(\cdot,\cdot)$
and the operators $R_{\ell,\boldsymbol{z}}^T$ satisfies the
requirements of Proposition~\ref{thm_asm_estimate} with constants depending only on $\Gamma$
and the initial triangulation $\mathcal{T}_0$.
For the additive Schwarz operator $P_h^1 := (B_h^1)^{-1} \widetilde{D}_h^1$, there holds in particular
\begin{align}
c \left<\widetilde{D} u_h ,u_h \right>_{\Gamma} \leq \left<\widetilde{D} P_h^1 u_h , u_h \right>_{\Gamma} \leq C \left<\widetilde{D} u_h ,u_h \right>_{\Gamma} \quad\text{for all } u_h\in \mathds{V}_h^1.
\end{align}
The constants $c$, $C>0$ depend only on $\Gamma$, the initial triangulation $\mathcal{T}_0$, and the use of NVB for refinement,
i.e., $\mathcal{T}_{\ell+1} = \operatorname{refine}(\mathcal{T}_{\ell}, \mathcal{M}_{\ell})$
with arbitrary set $\mathcal{M}_{\ell} \subseteq \mathcal{T}_{\ell}$ of marked elements.
\qed
\end{proposition}
We replace the space $\mathds{V}_h^1$ in~\eqref{abstract_space_splitting} by the refined splitting~\eqref{eq:splitting:refined}
and end up with the space decomposition
\begin{align}\label{eq:hpsplitting}
\mathds{V} = \sum_{\ell=0}^L \sum_{{\boldsymbol{z}}\in\widetilde{{\mathcal{V}}}_\ell} \mathds{V}^1_{\ell,{\boldsymbol{z}}} + \sum_{{\boldsymbol{z}}\in{\mathcal{V}}_L} \mathds{V}_{L,\boldsymbol{z}}^p.
\end{align}
The following Lemma \ref{lemma_replace_global_space} shows that the preconditioner resulting from the decomposition~\eqref{eq:hpsplitting} is
$hp$-stable.
The result formalizes the observation that the combination of stable subspace decompositions leads again to a stable
subspace decomposition.
It is a simple consequence of the well-known theory for additive Schwarz methods; see
Section~\ref{section_abstract_asm}. Therefore, details are left to the reader.
\begin{lemma}
\label{lemma_replace_global_space}
Let $\mathds{V}$ be a finite dimensional vector space, and let $\mathds{V}_j$, $R_{\mathds{V},j}^T$, and $\widetilde{a}_{\mathds{V},j}(\cdot,\cdot)$ for $j=0, \dots, J$
be a decomposition of $\mathds{V}$ in
the sense of Section \ref{section_abstract_asm} that satisfies the assumptions of Proposition~\ref{thm_asm_estimate} with constants $C_{0,\mathds{V}}$ and $C_{1,\mathds{V}}$.
Consider an additional decomposition $\mathds{W}_{\ell}$, $R_{\mathds{W},{\ell}}^T $
and $\widetilde{a}_{\mathds{W},\ell}(\cdot,\cdot)$ with $\ell=0, \dots, L$
of $\mathds{V}_0$
that also satisfies the requirements of Proposition~\ref{thm_asm_estimate} for the bilinear form $\widetilde{a}_{\mathds{V},0}(\cdot,\cdot)$
with constants $C_{0,\mathds{W}}$ and $C_{1,\mathds{W}}$.
Define a new additive Schwarz preconditioner as:
\begin{align*}
\widetilde{B}^{-1}:= R_{\mathds{V},0}^T \left(\sum_{\ell=0}^{L}{R_{\mathds{W},\ell}^T \widetilde{A}_{\mathds{W},\ell}^{-1} R_{\mathds{W},\ell} } \; \right)R_{\mathds{V},0} \;
+ \; \sum_{j=1}^{J}{R_{\mathds{V},j}^T \widetilde{A}_{\mathds{V},j}^{-1} R_{\mathds{V},j}}.
\end{align*}
This new preconditioner satisfies the assumptions of Proposition~\ref{thm_asm_estimate} with $C_0=\max\left(1,C_{0,\mathds{W}}\right) C_{0,\mathds{V}}$ and
$C_1=\max\left(1,C_{1,\mathds{W}}\right) C_{1,\mathds{V}}$.
\qed
\end{lemma}
\begin{theorem}
\label{thm:p_precond_with_multilevel}
Assume that $\mathcal{T}$ is generated from a regular and shape-regular initial triangulation $\mathcal{T}_0$ by successive application of NVB.
Based on the space decomposition~\eqref{eq:hpsplitting} define the preconditioner
\begin{align*}
B_2^{-1}:= R_h^T (B_h^1)^{-1} R_h \; + \; \sum_{\boldsymbol{z} \in {\mathcal{V}}_L}{R^T_{\boldsymbol{z}} \left(\widetilde{D}_{h,\boldsymbol{z}}^p\right)^{-1}} R_{\boldsymbol{z}}.
\end{align*}
Then, for constants $c$, $C>0$ that depend only on $\Gamma$, $\mathcal{T}_0$, and the use of NVB refinement, the
extremal eigenvalues of $B_2^{-1} \widetilde D_h^p$ satisfy
\begin{align*}
c\leq \lambda_\mathrm{min}(B_2^{-1} \widetilde D_h^p)
\leq \lambda_\mathrm{max} (B_2^{-1} \widetilde D_h^p) \leq C.
\end{align*}
In particular, the condition number $\kappa(B_2^{-1} \widetilde D_h^p)$ is bounded independently of $h$ and
$p$.
\begin{proof}
The proof follows from Lemma~\ref{lemma:ppreconditioner_stable_splitting}, Proposition~\ref{thm:hpreconditioner}, and
Lemma~\ref{lemma_replace_global_space}.
\end{proof}
\end{theorem}
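\begin{remark}
Combining the preceding sketches, a possible (illustrative, not prescriptive) realization of $B_2^{-1}$ applies the local multilevel scaling on the hat-function block and the dense patch solves on the remaining blocks:
\begin{verbatim}
import numpy as np

def apply_B2(residual, hat_dofs, hat_vectors, D1, patch_dofs, patch_solvers):
    """Sketch: B_2^{-1} r = R_h^T (B_h^1)^{-1} R_h r
                          + sum_z R_z^T (D_{h,z}^p)^{-1} R_z r.

    hat_dofs      : indices of the fine-grid hat functions in the global basis (R_h)
    hat_vectors,
    D1            : as in the LMLD sketch above
    patch_dofs    : index sets realizing R_z for the vertex patches of T_L
    patch_solvers : callables realizing the inverses of the local blocks D_{h,z}^p
    """
    z = np.zeros_like(residual)
    z[hat_dofs] += apply_lmld(residual[hat_dofs], hat_vectors, D1)  # multilevel h-part
    for dofs, solve_local in zip(patch_dofs, patch_solvers):        # local p-parts
        z[dofs] += solve_local(residual[dofs])
    return z
\end{verbatim}
Here \texttt{apply\_lmld} is the routine from the sketch in Section~\ref{sec:lmld}.
\hbox{}
\rule{0.8ex}{0.8ex}
\end{remark}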
\subsection{Spectrally equivalent local solvers}
\label{section_inexact_locals}
For each vertex patch, we need to store the dense matrix $\left( \widetilde{D}^p_{h,\boldsymbol{z}}\right)^{-1}$.
For higher polynomial orders, storing these blocks is a significant part of the memory consumption of the
preconditioner. To reduce these costs, we can make use of the fact that the abstract additive Schwarz theory
allows us to replace the local bilinear forms $a(R^T_i u_i,R^T_i v_i)$ with spectrally equivalent forms,
as long as they satisfy the conditions stated in Proposition~\ref{thm_asm_estimate}.
This is, for example, the case if the decomposition is stable for the exact local solvers and if there exist constants
$c_1$, $c_2 > 0$ such that
\begin{align*}
c_1 \, \widetilde{a}_i(u_i,u_i) &\leq a(R^T_i u_i,R^T_i u_i) \leq c_2 \widetilde{a}_i(u_i,u_i) \quad \forall u_i \in \mathds{V}_i.
\end{align*}
The new preconditioner will be based on a finite number of reference patches, for which the Galerkin matrix has to be inverted.
First we prove the simple fact that we can drop the stabilization term from (\ref{eq:weakform}) when assembling the local bilinear forms:
\begin{lemma}
There exists a constant $c_1>0$ that depends only on $\Gamma$ and the $\gamma$-shape regularity of $\mathcal{T}$
such that for any vertex patch $\omega_{\boldsymbol{z}}$ the following estimates
hold:
\begin{align*}
\left<Du,u \right>_{\Gamma} &\leq \left<\widetilde{D} u ,u \right>_{\Gamma} \leq c_1 \left<Du,u \right>_{\Gamma} \qquad \forall u \in \mathds{V}^p_{\boldsymbol{z}}.
\end{align*}
\begin{proof}
The first estimate is trivial, as $\widetilde{D}$ only adds an additional non-negative term.
For the second inequality, we note that the functions in $\mathds{V}^p_{\boldsymbol{z}}$ all vanish outside of $\omega_{\boldsymbol{z}}$
and therefore $\mathds{V}^p_{\boldsymbol{z}} \cap \operatorname{ker}(D) = \{0\}$.
We transform to the reference patch, use the fact that $\widehat{D}$ is elliptic on $\widetilde{H}^{1/2}(\widehat{\omega}_{\boldsymbol{z}})$,
and transform back by applying Lemma \ref{item:lemma:reference-patch-hypsing_scaling}:
\begin{align*}
\left\|u\right\|_{L^2(\omega_{\boldsymbol{z}})}^2 &\lesssim h_{\boldsymbol{z}}^2 \left\|\widehat{u}\right\|_{L^2\left(\widehat{\omega}_{\boldsymbol{z}}\right)}^2
\lesssim h_{\boldsymbol{z}}^2 \left\|\widehat{u}\right\|_{\widetilde{H}^{1/2}(\widehat{\omega}_{\boldsymbol{z}})}^2
\lesssim h_{\boldsymbol{z}}^2 \left<\widehat{D} \widehat{u},\widehat{u} \right>_{\widehat{\omega}_{\boldsymbol{z}}}
\lesssim h_{\boldsymbol{z}} \, \left<D u,u \right>_{\omega_{\boldsymbol{z}}}.
\end{align*}
Thus, we can simply estimate the stabilization:
\begin{align*}
\alpha^2 \left<u,\mathds{1} \right>_{\omega_{\boldsymbol{z}}}^2 &\leq \alpha^2 \left\|u\right\|^2_{L^2(\omega_{\boldsymbol{z}})} \left\|\mathds{1}\right\|^2_{L^2(\omega_{\boldsymbol{z}})}
\leq C \alpha^2 \left\|\mathds{1}\right\|_{L^2(\omega_{\boldsymbol{z}})}^2 \, h_{\boldsymbol{z}} \; \left<Du,u \right>_{\Gamma}.
\end{align*}
This gives the full estimate with the constant
$c_1:= \max\left(1,\alpha^2 \left\|\mathds{1}\right\|_{L^2(\omega_{\boldsymbol{z}})}^2 \, C h_{\boldsymbol{z}}\right)
\leq \max\left(1, C \alpha^2 h_{\boldsymbol{z}}^3 \right)$.
\end{proof}
\end{lemma}
\begin{remark}
The proof of the previous lemma shows that this modification does not significantly affect the stability of the preconditioner and its effect will even vanish with
$h$-refinement.
\hbox{}
\rule{0.8ex}{0.8ex}
\end{remark}
We are now able to define the new local bilinear forms as:
\begin{definition}
\label{definition_perturbed_local_forms}
Take $\boldsymbol{z} \in {\mathcal{V}}$ and let $F_{\boldsymbol{z}}: \widehat{\omega}_{\boldsymbol{z}} \to \omega_{\boldsymbol{z}}$ be the pullback mapping to the reference patch
as in Definition~\ref{definition:reference-patches}. Set
\begin{align*}
\widetilde{a}_{\boldsymbol{z}}(u,v):=h_{\boldsymbol{z}}
\left<\widehat{D} \left( \;u \circ F_{\boldsymbol{z}}\right), v \circ F_{\boldsymbol{z}} \right>_{\widehat{\omega}_{\boldsymbol{z}}}
\qquad \forall u,v \in \mathds{V}^p_{\boldsymbol{z}}.
\end{align*}
(see Lemma~\ref{item:lemma:reference-patch-hypsing_scaling} for the definition of $\widehat{D}$).
We denote the Galerkin matrix corresponding to the bilinear form $\widetilde{a}_{\boldsymbol{z}}$ on the reference patch by $\widehat{D}_{h,\operatorname{ref}(\boldsymbol{z})}^p$.
\end{definition}
The above definition only requires the evaluation of $\left<\widehat{D} \widehat{u},\widehat{v} \right>_{\widehat{\omega}_{\boldsymbol{z}}}$ on the reference patch.
Since the reference patch depends only on the number of elements belonging to the patch,
the number of blocks that need to be stored depends only on the shape regularity and is independent of the number of vertices in the triangulation $\mathcal{T}$.
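\begin{remark}
In an implementation this means that the factorized local blocks can be cached per reference-patch type, e.g.\ keyed by the number of elements in the patch. A minimal sketch (Python/NumPy; the assembly routine and the patch data structure are hypothetical placeholders, and the basis reordering discussed in the numerical realization below is omitted):
\begin{verbatim}
import numpy as np

def local_patch_solver(patch, cache, assemble_reference_block):
    """Return a callable realizing h_z^{-1} * (Dhat_{h,ref(z)}^p)^{-1}.

    Blocks are cached per reference-patch type, so the number of stored
    dense factorizations is independent of the number of vertices."""
    key = patch.num_elements        # the reference patch is determined by this
    if key not in cache:
        Dhat = assemble_reference_block(key)   # Galerkin matrix on the reference patch
        cache[key] = np.linalg.inv(Dhat)       # or better: a Cholesky factorization
    Dhat_inv = cache[key]
    return lambda r: (Dhat_inv @ r) / patch.h
\end{verbatim}
\hbox{}
\rule{0.8ex}{0.8ex}
\end{remark}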
\begin{theorem}
\label{thm:hp_reference_solver_preconditioner}
Assume that $\mathcal{T}$ is generated from a regular and shape-regular initial triangulation $\mathcal{T}_0$ by successive application of NVB.
The preconditioner using the local solvers from Definition \ref{definition_perturbed_local_forms} is optimal, i.e., for
\begin{align*}
B_3^{-1}:= R_h^T (B_h^1)^{-1} R_h \; +
\; \sum_{\boldsymbol{z} \in {\mathcal{V}}_L}{ h_{\boldsymbol{z}}^{-1} R^T_{\boldsymbol{z}} \left(\widehat{D}_{h,\operatorname{ref}(\boldsymbol{z})}^p\right)^{-1}} R_{\boldsymbol{z}},
\end{align*}
the condition number of the preconditioned system satisfies
\begin{align*}
\kappa(B_{3}^{-1} \tilde{D}_h^p) \leq C,
\end{align*}
where $C > 0$ depends only on $\Gamma$, $\mathcal{T}_0$ and the use of NVB refinement.
It is in particular independent of $h$ and $p$.
\begin{proof}
The scaling properties of $\left<D u,u \right>_{\Gamma}$ were stated in Lemma~\ref{item:lemma:reference-patch-hypsing_scaling}.
Therefore, we can conclude the argument by using the standard additive Schwarz theory.
\end{proof}
\end{theorem}
\subsubsection{Numerical realization}
\begin{figure}
\caption{The mapping to the reference patch described as a combination of element maps.}
\label{fig_patch_mapping}
\end{figure}
When implementing the preconditioner as defined above, it is important to note that for a basis function $\varphi_i$ on $\omega_{\boldsymbol{z}}$
the transformed function $\varphi_i \circ F_{\boldsymbol{z}}$ does not necessarily correspond to the $i$-th basis function on
$\widehat{\omega}_{\boldsymbol{z}}$. Depending on the chosen
basis we may run into orientation difficulties. This can be resolved in the following way:
Let $\boldsymbol{z} \in {\mathcal{V}}$ be fixed.
Choose a numbering for the vertices $\boldsymbol{z}_i$ and elements $K_i$ of $\omega_{\boldsymbol{z}}$ such that adjacent elements have adjacent numbers (for example, enumerate
clockwise or counter-clockwise). We also choose a similar enumeration on the reference patch and denote it as $\widehat{\boldsymbol{z}}_i$ and $\widehat{K}_i$.
The enumeration is such that the reference map $F_{\boldsymbol{z}}$ maps $\widehat{\boldsymbol{z}}_i$ to $\boldsymbol{z}_i$ and $\widehat{K}_i$ to $K_i$.
Let $N_{\boldsymbol{z}}$ be the number of vertices in the patch.
For elements $K \subset \omega_{\boldsymbol{z}}$ and $K' \subset \widehat{\omega}_{\boldsymbol{z}}$, the bases on $\omega_{\boldsymbol{z}}$ and on $\widehat{\omega}_{\boldsymbol{z}}$ are
locally defined by the pullback of polynomials on the reference triangle $\widehat{K}$.
We denote the element maps by $F_K: \widehat{K} \to K $ and $F_{K'}: \widehat{K} \to K'$, respectively.
The basis functions are then given as $\varphi_j:=\widehat{\varphi}_j \circ F_K^{-1}$ on $\omega_{\boldsymbol{z}}$ and
$\psi_j:=\widehat{\psi}_j \circ F_{K'}^{-1}$ on $\widehat{\omega}_{\boldsymbol{z}}$.
Corresponding local element maps do not necessarily map the same vertices of the reference element $\widehat K$ to vertices
with the same numbers in the local ordering.
Hence, we need to introduce another map $Q: \widehat{K} \to \widehat{K}$ that represents a vertex permutation.
Then, we can write the patch-pullback restricted to $K'$ as $F_{\boldsymbol{z}}|_{K'}=F_K \circ Q \circ F_{K'}^{-1}$ (see Figure~\ref{fig_patch_mapping}).
We observe:
\begin{enumerate}[i)]
\item For the hat function the mapping is trivial:
$\varphi_{\boldsymbol{z}} \circ F_{\boldsymbol{z}}=\psi_{\widehat{\boldsymbol{z}}}$.
\item For the edge basis, permuting the vertices on the reference element only changes the sign of the corresponding edge functions.
Thus, we have $\varphi^{{\mathcal E}_m}_{j} \circ F_{\boldsymbol{z}}= (-1)^j \psi^{{\mathcal E}_m}_{j}$, if the orientation of the edge in the global triangulation does not
match the orientation of the reference patch.
\item The transformation of the inner basis functions under $Q$ is not as simple. Since these basis functions have their support in a single
element, we can restrict our consideration to this element and assemble the necessary basis transformations for all $5$ non-trivial permutations of the vertices of
the reference triangle without losing the memory advantage of using the reference patch; a short sketch of this bookkeeping is given after the list.
\end{enumerate}
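\begin{remark}
The sign flips in ii) amount to a diagonal transformation per edge, and the permutation blocks in iii) can be tabulated once on the reference triangle. A minimal sketch of the edge part (Python/NumPy; purely illustrative):
\begin{verbatim}
import numpy as np

def edge_sign_flip(num_edge_dofs, orientation_matches):
    """Diagonal transformation for the edge functions of a single edge:
    the identity if the edge orientations match, otherwise (-1)^j on the
    j-th edge function (j = 0, ..., p-2)."""
    signs = np.ones(num_edge_dofs)
    if not orientation_matches:
        signs[1::2] = -1.0          # flip the functions with odd index j
    return np.diag(signs)
\end{verbatim}
The full patch transformation is then block diagonal, consisting of $1$ for the hat function, one such sign matrix per edge, and one tabulated permutation block per element for the interior functions.
\hbox{}
\rule{0.8ex}{0.8ex}
\end{remark}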
\begin{remark}
One could also exploit the symmetry (up to a sign change) of the permutation of $\lambda_1$ and $\lambda_2$ in the definition of the
inner basis functions to reduce the number of basis transformation matrices needed from 5 to 2.
\hbox{}
\rule{0.8ex}{0.8ex}
\end{remark}
\section{Numerical results}
\label{sec:numerics}
The following numerical experiments confirm that the proposed preconditioners
(Theorem~\ref{thm:ppreconditioner}, Theorem~\ref{thm:p_precond_with_multilevel},
and Theorem~\ref{thm:hp_reference_solver_preconditioner})
do indeed yield a system with a condition number that is bounded uniformly in $h$ and $p$, whereas the
condition number of the unpreconditioned system grows in $p$ with a rate slightly smaller than predicted in
Corollary~\ref{thm:condition_numbers_D}: We observe numerically $\kappa \sim \mathcal{O}(p^{5.5})$.
Diagonal preconditioning appears to reduce the condition number to $\mathcal{O}(p^{2.5})$.
All of the following experiments were performed using the BEM++ software library (\cite{bempp_preprint}; \url{www.bempp.org}) with the AHMED software library
for $\mathcal{H}$-matrix compression, \cite{bebendorf:2008}, \cite{ahmed_homepage}.
We used the polynomial basis described in Section~\ref{sect:discretization}.
\begin{figure}
\caption{Adaptive meshes on the Fichera cube and for a screen problem.}
\label{fig_meshes}
\end{figure}
\begin{example}[unpreconditioned $p$-dependence]
\label{example_unprec}
We consider a quadratic screen in $\mathbb{R}^3$ (see Figure~\ref{fig_meshes}, right).
We study the $p$-dependence
of the unpreconditioned system on different uniformly refined meshes.
In accordance with the estimates of Corollary~\ref{thm:condition_numbers_D},
Figure~\ref{fig:unpreconditioned_systems} shows that one has, depending on the mesh size $h$,
a preasymptotic phase in which the
$\mathcal{O}(h^{-1}p^2)$ term dominates, and an $h$-independent asymptotic
$\mathcal{O}(p^{5.5})$ behavior. The latter is slightly better than the prediction
of $\mathcal{O}(p^6)$ of Corollary~\ref{thm:condition_numbers_D}.
\hbox{}
\rule{0.8ex}{0.8ex}
\begin{figure}
\caption{Comparison of the condition number of $\widetilde{D}^p_h$ for different uniformly refined meshes.}
\label{fig:unpreconditioned_systems}
\end{figure}
\end{example}
\begin{example}[Fichera's cube]
\label{example_fichera_prec}
We compare the preconditioner that uses the local multilevel preconditioner for the $h$-part and the
inexact local solvers based on the reference patches to the unpreconditioned system and to simple diagonal
scaling.
We consider the problem on a closed surface, namely, the surface of the Fichera cube with side length $2$, and employ a stabilization parameter
$\alpha=0.2$.
To generate the adaptive meshes, we used NVB, where in each step, the set of marked elements originated from
a lowest order adaptive algorithm with a ZZ-type error estimator (as described in \cite{aff_hypsing}).
The left part of Figure~\ref{fig_meshes} shows an example of one of the meshes used.
Figure~\ref{fig_fichera_p} confirms that the condition number of the preconditioned system does not depend
on the polynomial degree of the discretization. Figure~\ref{fig_fichera_h} confirms the robustness of the
preconditioner with respect to the adaptive refinement level.
The unpreconditioned and the diagonally preconditioned systems do not deteriorate noticeably with respect to $h$,
probably because the condition number is already large for $p > 1$.
\hbox{}
\rule{0.8ex}{0.8ex}
\begin{figure}
\caption{Fichera cube, condition numbers for fixed uniform mesh with $70$ elements (Example~\ref{example_fichera_prec}).}
\label{fig_fichera_p}
\end{figure}
\begin{figure}
\caption{Fichera cube, adaptive $h$-refinement for $p=3$ (Example~\ref{example_fichera_prec}).}
\label{fig_fichera_h}
\end{figure}
\end{example}
\begin{example}[screen problem]
\label{example_screen_prec}
We consider the screen problem in $\mathbb{R}^3$ with a quadratic screen of side length~1 (see Figure~\ref{fig_meshes},
right),
which represents the case $\Gamma \neq \partial \Omega$ and $\alpha=0$ in~\eqref{eq:def_stabilized_op}, and
perform the same experiments as we did for Fichera's cube in Example~\ref{example_fichera_prec}.
In Figure~\ref{fig_screen_p} we again observe that the condition number is independent of the polynomial degree.
Figures~\ref{fig_screen_h}--\ref{fig_screen_h5} demonstrate the independence of the mesh size $h$.
\hbox{}
\rule{0.8ex}{0.8ex}
\begin{figure}
\caption{Screen problem, condition numbers for uniform mesh with $45$ elements (Example~\ref{example_screen_prec}).}
\label{fig_screen_p}
\end{figure}
\begin{figure}
\caption{Screen problem, uniform $h$-refinement for $p=4$ (Example~\ref{example_screen_prec}).}
\label{fig_screen_h}
\end{figure}
\begin{figure}
\caption{Screen problem, adaptive $h$-refinement for $p=1$ (Example~\ref{example_screen_prec}).}
\label{fig_screen_h3}
\end{figure}
\begin{figure}
\caption{Screen problem, adaptive $h$-refinement for $p=2$ (Example~\ref{example_screen_prec}).}
\label{fig_screen_h4}
\end{figure}
\begin{figure}
\caption{Screen problem, adaptive $h$-refinement for $p=3$ (Example~\ref{example_screen_prec}).}
\label{fig_screen_h5}
\end{figure}
\end{example}
\begin{example}[inexact local solvers]
\label{example_prec_compare}
We compare the different preconditioners proposed in this paper.
While the numerical experiments all show that the preconditioner is indeed robust in $h$ and $p$,
the constant differs if we use the different simplifications
described in the Sections~\ref{section_adaptive_meshes} and \ref{section_inexact_locals} to the preconditioner.
In Figures~\ref{fig_compare_preconditioners_p} and \ref{fig_compare_preconditioners_h}, we can observe
the different constants for the geometry given by Fichera's cube of Example~\ref{example_fichera_prec}.
\hbox{}
\rule{0.8ex}{0.8ex}
\begin{figure}
\caption{Comparison of the different proposed preconditioners for a fixed uniform mesh with $70$ elements on the Fichera cube (Example~\ref{example_prec_compare}).}
\label{fig_compare_preconditioners_p}
\end{figure}
\begin{figure}
\caption{Comparison of the different proposed preconditioners for adaptive mesh refinement on the Fichera cube with $p=2$.}
\label{fig_compare_preconditioners_h}
\end{figure}
\end{example}
\begin{example}[memory requirements]
\label{example_memory}
We continue with the geometry of Example~\ref{example_prec_compare}, i.e., Fichera's cube.
We motivated Section~\ref{section_inexact_locals} by stating the large memory requirement of the preconditioner when storing the dense local block inverses.
It can be seen in Table~\ref{table_memory_requirement} that the reference
patch based preconditioner resolves this issue: we present the memory requirements for the various approaches when
excluding the memory requirement for the treatment of the lowest order space $\mathds{V}_h^1$.
For comparison, we included the storage requirements for the full matrix $\widetilde{D}_h^p$
and the $\mathcal{H}$-matrix approximation with accuracy $10^{-8}$ which is denoted as $D^{p,\mathcal{H}}_h$.
While we still observe linear growth in the number of elements, due to some bookkeeping requirements
such as element orientations (which could, in principle, also be avoided),
the storage cost is reduced considerably.
For $p=3$ and $55,298$ degrees of freedom, the memory requirement is less than $2.5\%$ of the full block storage.
For $p=4$ and $393,218$ degrees of freedom the memory requirement is just $0.6\%$ and for higher polynomial orders, this ratio would become even smaller.
Comparing only the number of blocks that need to be stored,
we see that in this particular geometry we only need to store the inverse for $6$ reference blocks.
\hbox{}
\rule{0.8ex}{0.8ex}
\begin{table}[h!]
\begin{center}
\begin{tabular}{SSSSSS}
\toprule
\multicolumn{1}{c}{$p$}&\multicolumn{1}{c}{$N_{dof}$}&\multicolumn{1}{c}{$\operatorname{mem}\left(\widetilde{D}^p_h\right)/N_{dof}$}&\multicolumn{1}{c}{$\operatorname{mem}\left(\widetilde{D}^{p,\mathcal{H}}_h\right)/N_{dof}$}&\multicolumn{1}{c}{$\operatorname{mem}\left(B^{-1}\right)/N_{dof}$}&\multicolumn{1}{c}{$\operatorname{mem}\left(B_{3}^{-1}\right)/N_{dof}$}\\
& & [KB] & [KB] & [KB] & [KB] \\ \midrule
2&98&0.76562&0.76945&0.094547&0.039222\\%
2&298&2.3281&2.3334&0.098259&0.027318\\%
2&986&7.7031&7.1907&0.10025&0.020522\\%
2&3558&27.797&15.006&0.10068&0.018394\\%
2&8950&69.922&19.817&0.10078&0.017902\\%
\midrule
3&218&1.7031&1.7103&0.31314&0.083787\\%
3&668&5.2188&5.1523&0.32508&0.041261\\%
3&2216&17.312&10.921&0.332&0.017895\\%
3&5969&46.633&16.452&0.3343&0.011556\\%
3&16310&127.42&22.738&0.33274&0.0091824\\%
3&20135&157.3&24.04&0.33372&0.0089222\\%
\midrule
4&386&3.0156&3.0197&0.67086&0.16973\\%
4&1954&15.266&10.879&0.70233&0.04829\\%
4&5634&44.016&17.39&0.71085&0.019619\\%
4&14226&111.14&21.209&0.71364&0.010424\\%
4&35794&279.64&27.596&0.71428&0.0067908\\%
\midrule
5&602&4.7031&4.7083&1.1694&0.29324\\%
5&1852&14.469&14.476&1.2122&0.12945\\%
5&8802&68.766&68.772&1.2384&0.029459\\%
5&16577&129.51&129.4&1.2468&0.016961\\%
5&45302&353.92&353.79&1.2404&0.0079898\\%
5&55927&436.93&436.78&1.2443&0.0070062\\ \bottomrule
\end{tabular}
\end{center}
\caption{Comparison of the memory requirement relative to the number of degrees of freedom~$N_{dof}$ between storing the full block structure and the reference block based preconditioner from Section~\ref{section_inexact_locals} (Example~\ref{example_memory}).}
\label{table_memory_requirement}
\end{table}
\end{example}
\textbf{Acknowledgments:} The research was supported by the Austrian Science Fund (FWF) through the
doctoral school ``Dissipation and Dispersion in Nonlinear PDEs'' (project W1245, A.R.)
and ``Optimal Adaptivity for BEM and FEM-BEM Coupling'' (project P27005, T.F., D.P.).
T.F. furthermore acknowledges funding through the Innovative Projects Initiative of
Vienna University of Technology and the CONICYT project ``Preconditioned linear solvers
for nonconforming boundary elements'' (grant FONDECYT 3150012).
\appendix
\section*{Appendix}
From \cite[Prop.~{2.8}]{melenk_gl} for the stiffness matrix and from classical estimates for the quadrature weights of the Gau{\ss}-Lobatto quadrature, we have on the reference square $\widehat{S}$
\begin{align*}
p^{-2} \|{\mathfrak u}\|^2_{\ell^2} \lesssim \|u\|^2_{H^1(\widehat S)} \lesssim p \|{\mathfrak u}\|^2_{\ell^2},
\qquad
p^{-4} \|{\mathfrak u}\|^2_{\ell^2} \lesssim \|u\|^2_{L^2(\widehat S)} \lesssim p^{-2} \|{\mathfrak u}\|^2_{\ell^2}
\qquad \forall u \in {\mathcal Q}^p(\widehat S)
\end{align*}
For quasi-uniform meshes, we therefore obtain
\begin{align*}
h^{-2} p^2 \|u\|^2_{L^2(\Gamma)} & \lesssim \|{\mathfrak u}\|^2_{\ell^2} \lesssim h^{-2} p^4 \|u\|^2_{L^2(\Gamma)}.
\end{align*}
Furthermore, we have
\begin{align*}
\|{\mathfrak J}_h {\mathfrak u}\|^2_{\ell^2} &\lesssim h^{-2} \|J_h u\|^2_{L^2(\Gamma)}
\lesssim h^{-2} \|u\|^2_{L^2(\Gamma)} \lesssim p^{-2} \|{\mathfrak u}\|^2_{\ell^2},\\
\|{\mathfrak u} - {\mathfrak J}_h {\mathfrak u}\|^2_{\ell^2} &\lesssim
\sum_{K \in {\mathcal T}} \|({\mathfrak u} - {\mathfrak J}_h{\mathfrak u})|_K\|^2_{\ell^2}
\lesssim p^2 \sum_{K \in {\mathcal T}} \|\widehat u - \widehat {J_h u}\|^2_{H^1(\widehat S)}
\lesssim p^2 \sum_{K \in {\mathcal T}} | u - J_h u|^2_{H^1(K)} + h^{-2} \|u - J_h u\|^2_{L^2(K)} \\
&\lesssim p^2 \|u\|^2_{H^1(\Gamma)}.
\end{align*}
We obtain
\begin{align*}
\|u\|^2_{L^2(\Gamma)} \lesssim h^2 p^{-2} \|{\mathfrak u}\|^2_{\ell^2},
\qquad
\|u\|^2_{H^1(\Gamma)} = \|u\|^2_{L^2(\Gamma)} + |u|^2_{H^1(\Gamma)} \lesssim
h^2 p^{-2} \|{\mathfrak u}\|^2_{\ell^2} + p \|{\mathfrak u}\|^2_{\ell^2}
\lesssim p \|{\mathfrak u}\|^2_{\ell^2},
\end{align*}
so that interpolation yields
$$
\|u\|^2_{\widetilde H^{1/2}(\Gamma)} \lesssim h p^{-1/2} \|{\mathfrak u}\|^2_{\ell^2}.
$$
For the converse estimate, we observe
\begin{align*}
\|{\mathfrak u} - {\mathfrak J}_h {\mathfrak u}\|^2_{\ell^2} \lesssim p^2 \|u\|^2_{H^1(\Gamma)},
\qquad \qquad
\|{\mathfrak u} - {\mathfrak J}_h {\mathfrak u}\|^2_{\ell^2} \lesssim h^{-2} p^4 \|u\|^2_{L^2(\Gamma)} ,
\end{align*}
so that an interpolation argument (which we assume to be admissible!) produces
\begin{align*}
\|{\mathfrak u} - {\mathfrak J}_h {\mathfrak u}\|^2_{\ell^2} &\lesssim p^3 h^{-1} \|u\|^2_{\widetilde H^{1/2}(\Gamma)}.
\end{align*}
Hence,
\begin{align*}
\|{\mathfrak u}\|^2_{\ell^2} &\lesssim
\|{\mathfrak u} - {\mathfrak J}_h {\mathfrak u}\|^2_{\ell^2} +
\|{\mathfrak J}_h {\mathfrak u}\|^2_{\ell^2}
\lesssim p^3 h^{-1} \|u\|^2_{\widetilde H^{1/2}(\Gamma)} + h^{-2} \|u\|^2_{L^2(\Gamma)}
\lesssim \left( p^3 h^{-1} + h^{-2} \right) \|u\|^2_{\widetilde H^{1/2}(\Gamma)}.
\end{align*}
Putting things together, we get
$$
p^{1/2} h^{-1} \|u\|^2_{\widetilde H^{1/2}(\Gamma)} \lesssim \|{\mathfrak u}\|^2_{\ell^2}
\lesssim \left( p^3 h^{-1} + h^{-2} \right) \|u\|^2_{\widetilde H^{1/2}(\Gamma)},
$$
which in turn gives the condition number estimate
$$
\kappa(\widetilde D^p_h) \lesssim p^{5/2} + h^{-1} p^{-1/2}.
$$
For the $H^1$-condition number, we note the estimates
\begin{align*}
\|u\|^2_{H^1(\Gamma)} &=
|u|^2_{H^1(\Gamma)} +
\|u\|^2_{L^2(\Gamma)} \lesssim p \|{\mathfrak u}\|^2_{\ell^2} + h^2 p^{-2} \|{\mathfrak u}\|^2_{\ell^2}
\lesssim p \|{\mathfrak u}\|^2_{\ell^2}, \\
\|{\mathfrak u}\|^2_{\ell^2} &\lesssim
\|{\mathfrak u} - {\mathfrak J}_h {\mathfrak u}\|^2_{\ell^2} +
\|{\mathfrak J}_h {\mathfrak u}\|^2_{\ell^2} \lesssim p^2 \|u\|^2_{H^1(\Gamma)} + h^{-2} \|u\|^2_{L^2(\Gamma)}
\lesssim \left( p^2 + h^{-2} \right)\|u\|^2_{H^1(\Gamma)},
\end{align*}
so that we get
$$
p^{-1} \|u\|^2_{H^1(\Gamma)} \lesssim \|{\mathfrak u}\|^2_{\ell^2} \lesssim \left( p^2 + h^{-2}\right) \|u\|^2_{H^1(\Gamma)},
$$
which yields the $H^1$-condition number estimate $\kappa \lesssim p\left(p^2 + h^{-2}\right) = p^3 + p\, h^{-2}$.
\section*{References}
\end{document}
\begin{document}
\author{
Zhen Wang$^{1,*}$,
Weirui Kuang$^{1,*}$,
Ce Zhang$^{2}$,
Bolin Ding$^{1}$,
Yaliang Li$^{1,\dagger}$\\
\textsuperscript{\rm 1}Alibaba Group, \textsuperscript{\rm 2}ETH Z{\"u}rich\\
\{jones.wz, weirui.kwr, bolin.ding, yaliang.li\}@alibaba-inc.com, [email protected]
}
\renewcommand*{\thefootnote}{\fnsymbol{footnote}}
\footnotetext[1]{Co-first authors.}
\footnotetext[2]{Corresponding author.}
\renewcommand*{\thefootnote}{\arabic{footnote}}
\title{\ours: A Benchmark Suite for Federated Hyperparameter Optimization}
\begin{abstract}
Hyperparameter optimization (HPO) is crucial for machine learning algorithms to achieve satisfactory performance, whose progress has been boosted by related benchmarks. Nonetheless, existing efforts in benchmarking all focus on HPO for traditional centralized learning while ignoring federated learning (FL), a promising paradigm for collaboratively learning models from dispersed data. In this paper, we first identify some uniqueness of HPO for FL algorithms from various aspects. Due to this uniqueness, existing HPO benchmarks no longer satisfy the need to compare HPO methods in the FL setting. To facilitate the research of HPO in the FL setting, we propose and implement a benchmark suite \textsc{FedHPO-B}\xspace that incorporates comprehensive FL tasks, enables efficient function evaluations, and eases continuing extensions. We also conduct extensive experiments based on \textsc{FedHPO-B}\xspace to benchmark a few HPO methods. We open-source \textsc{FedHPO-B}\xspace at \href{https://github.com/alibaba/FederatedScope/tree/master/benchmark/FedHPOB}{https://github.com/alibaba/FederatedScope/tree/master/benchmark/FedHPOB}.
\end{abstract}
\section{Introduction}
\label{sec:intro}
Most machine learning algorithms expose many design choices, which can drastically impact the ultimate performance.
Hyperparameter optimization (HPO)~\cite{hpobook} aims at making the right choices without human intervention.
Formally, HPO can be described as the problem $\min_{\lambda\in\Lambda_1 \times \cdots \times \Lambda_K}f(\lambda)$, where each $\Lambda_k$ corresponds to the candidate choices of a specific hyperparameter, e.g., taking the learning rate from $\Lambda_1 = [0.01,1.0]$ and the batch size from $\Lambda_2 = \{16, 32, 64\}$.
For each specified $\lambda$, $f(\lambda)$ is the output result (e.g., validation loss) of executing the considered algorithm configured by $\lambda$.
Research in this line has been facilitated by HPO benchmarks~\cite{automlbenchmark,hpobench,hpob}, which encourage reproducible and fair comparisons between different HPO methods. To this end, their primary efforts are two-fold: One is to keep the results of the same function evaluation consistent across different runtime environments, e.g., by containerizing its execution; The other is to simplify the evaluations, e.g., by evaluating a function via querying a readily available lookup table or a fitted surrogate model.
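As a concrete illustration of this problem statement, the following sketch runs plain random search over the example search space (learning rate in $[0.01,1.0]$, batch size in $\{16,32,64\}$); the objective $f$ below is a synthetic stand-in for training a model and returning its validation loss.
\begin{verbatim}
import random

# Random search over Lambda_1 x Lambda_2 from the running example; f is a synthetic
# placeholder for "train with configuration lambda and return the validation loss".
def f(lam):
    return (lam["lr"] - 0.1) ** 2 + 0.001 * lam["batch_size"]

random.seed(0)
candidates = ({"lr": random.uniform(0.01, 1.0), "batch_size": random.choice([16, 32, 64])}
              for _ in range(20))
best = min(candidates, key=f)
print(best, round(f(best), 4))
\end{verbatim}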
However, existing HPO benchmarks all focus on traditional learning paradigms, where the functions to be optimized correspond to centralized learning tasks.
Federated learning (FL)~\cite{deffl,flsurvey}, as a privacy-preserving paradigm for collaboratively learning a model from distributed data, has not been considered.
Actually, along with the increasing privacy concerns of the whole society, FL has been gaining more attention from academia and industry.
Meanwhile, HPO for FL algorithms (denoted by FedHPO\xspace from now on) is identified as a critical and promising open problem in FL~\cite{flsurvey1}.
As an emerging topic, the community lacks a thorough understanding of how traditional HPO methods perform in the FL setting. Meanwhile, the recently proposed FedHPO\xspace methods have not been well benchmarked. Before attempting to fill this gap, it is helpful to gain some insights into the difference between FedHPO\xspace and traditional HPO. We elaborate on such differences from various aspects in Section~\ref{sec:background}, which essentially come from the distributed nature of FL and the heterogeneity among FL's participants.
In summary, the function to be optimized in FedHPO\xspace has an augmented domain that introduces new hyperparameter and fidelity dimensions, with intricate correlations among them; the FL setting poses both opportunities and challenges in concurrently exploring the search space under a stricter budget constraint.
Due to FedHPO\xspace's uniqueness, existing HPO benchmarks cannot standardize the comparisons between HPO methods regarding FL tasks.
Firstly, their integrated functions correspond to non-FL tasks, so the relative performances of compared methods may not reflect their actual performances in optimizing FL algorithms.
Moreover, recently proposed FedHPO\xspace methods need to be incorporated into the procedure of function evaluation itself and thus cannot be evaluated against existing benchmarks.
Motivated by FedHPO\xspace's uniqueness and the successes of previous HPO benchmarks, we summarize the desiderata of FedHPO\xspace benchmarks as follows.
\noindent\textbf{Comprehensiveness}. FL tasks are diverse in terms of data, model architecture, the level of heterogeneity among participants, etc. As their corresponding functions to be optimized by HPO methods are thus likely to be diverse, including a comprehensive collection of FL tasks is necessary for drawing an unbiased conclusion from comparisons.
\noindent\textbf{Efficiency}. As exact function evaluations are costly in the FL setting, an ideal benchmark is expected to provide tabular and surrogate modes for approximate but efficient function evaluations. When accurate results are required, the benchmark should enable simulated execution while reasonably estimating the corresponding deployment cost.
\noindent\textbf{Extensibility}. As a developing field, new FL tasks and novel FedHPO\xspace methods constantly emerge, and FL's best practice continuously evolves. Thus, what the community desires is more of a benchmarking tool that can effortlessly incorporate novel ingredients.
Towards these desiderata, we propose and implement \textsc{FedHPO-B}\xspace, a dedicated benchmark suite, to facilitate the research and application of FedHPO\xspace. \textsc{FedHPO-B}\xspace incorporates rich FL tasks from various domains with respective model architectures, providing realistic and, more importantly, comprehensive FedHPO\xspace problems for studying the related methods. In addition to the tabular and surrogate modes, \textsc{FedHPO-B}\xspace provides a configurable system model so that function evaluations can be efficiently executed via simulation while keeping the tracked time consumption meaningful. Last but not least, we build \textsc{FedHPO-B}\xspace upon a recently open-sourced FL platform FederatedScope (FS), which provides solid infrastructure and many off-the-shelf FL-related functionalities.
Thus, it is easy for the community to extend \textsc{FedHPO-B}\xspace with more and more tasks and FedHPO\xspace methods.
\section{Background and Motivations}
\label{sec:background}
We first give a brief introduction to the settings of HPO and its related benchmarks.
Then we present and explain the uniqueness of FedHPO\xspace to show the demand for dedicated FedHPO\xspace benchmarks.
\subsection{Problem Settings}
\label{subsec:problem}
As mentioned in Section~\ref{sec:intro}, HPO aims at solving $\min_{\lambda\in\Lambda_1 \times\cdots\times\Lambda_K}f(\lambda)$, where each $\Lambda_k$ corresponds to candidate choices of a specific hyperparameter, and their Cartesian product (denoted by $\times$) constitute the search space.
In practice, such $\Lambda_k$ is often bounded and can be continuous (e.g., an interval of real numbers) or discrete (e.g., a set of categories/integers).
Each function evaluation with a specified hyperparameter configuration $\lambda$ means to execute the corresponding algorithm accordingly, which results in $f(\lambda)$.
HPO methods, e.g., Bayesian optimization with a Gaussian process model, generally solve this problem with a series of function evaluations.
To save the time and energy consumed by full-fidelity function evaluations, multi-fidelity methods exploit low-fidelity function evaluations, e.g., training for fewer epochs~\cite{lowerrounds1,lowerrounds2} or on a subset of data~\cite{subset1,subset2,subset3}, to approximate the exact result.
Thus, it would be convenient to treat $f$ as $f(\lambda,b),\lambda\in\Lambda_1 \times \cdots \times \Lambda_K, b\in \mathcal{B}_1 \times \cdots \times \mathcal{B}_L$, where each $\mathcal{B}_l$ corresponds to the possible choices of a specific fidelity dimension, e.g., taking \textit{\#epoch} from $\{10,\ldots,50\}$.
For the purpose of benchmarking different HPO methods, it is necessary to integrate diverse HPO problems wherein the function to be optimized exhibits the same or at least similar characteristics as that in realistic applications.
To evaluate these functions, HPO benchmarks, e.g., HPOBench~\cite{hpobench}, often provide three modes:
(1) ``Raw'' means truly executing the corresponding algorithm;
(2) ``Tabular'' means querying a lookup table, where each entry corresponds to a specific $f(\lambda,b)$;
(3) ``Surrogate'' means querying a surrogate model that might be trained on the tabular data.
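These three modes can be pictured with the following schematic interface; it is illustrative only and does not reproduce the actual API of HPOBench or \textsc{FedHPO-B}\xspace.
\begin{verbatim}
# Schematic sketch of the three evaluation modes; not the API of HPOBench or FedHPO-B.
class Benchmark:
    def __init__(self, raw_fn, table=None, surrogate=None):
        self.raw_fn, self.table, self.surrogate = raw_fn, table or {}, surrogate

    def objective(self, lam, fidelity, mode="tabular"):
        if mode == "raw":                      # truly execute the algorithm
            return self.raw_fn(lam, fidelity)
        if mode == "tabular":                  # query a precomputed lookup table
            key = tuple(sorted(lam.items())) + tuple(sorted(fidelity.items()))
            return self.table[key]
        # surrogate: query a fitted regression model on the encoded (lambda, b)
        x = [v for _, v in sorted({**lam, **fidelity}.items())]
        return float(self.surrogate.predict([x])[0])

# Tabular usage: one precomputed entry for lambda = {lr: 0.1}, fidelity = {round: 250}
bench = Benchmark(raw_fn=lambda lam, b: 0.0,
                  table={(("lr", 0.1), ("round", 250)): 0.42})
print(bench.objective({"lr": 0.1}, {"round": 250}))
\end{verbatim}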
\subsection{Uniqueness of Federated Hyperparameter Optimization}
\label{subsec:fedhpo}
\noindent\textbf{Function evaluation in FL}.
Despite the various scenarios in FL literature, we restrict our discussion about FedHPO\xspace to one of the most general FL settings that has also been adopted in existing FedHPO\xspace works~\cite{fedex,fedhposys}.
Conceptually, there are $N$ clients, each of which has its specific data, and a server coordinates them to learn a model $\theta$ collaboratively.
Most FL algorithms are designed under this setting, including FedAvg~\cite{deffl} and FedOPT~\cite{fedopt}.
Such FL algorithms are iterative.
In the $t$-th round, the server broadcasts the global model $\theta^{(t)}$; then the clients make local updates and send the updates back; finally, the server aggregates the updates to produce $\theta^{(t+1)}$.
Obviously, this procedure consists of two subroutines---local updates and aggregation.
Thus, $\lambda$ can be divided into client-side and server-side hyperparameters according to which subroutine each hyperparameter influences.
After executing an algorithm configured by $\lambda$ for $T$ such rounds, what $\theta^{(T)}$ achieves on the validation set (e.g., its validation loss) is regarded as $f(\lambda)$.
The execution of an FL algorithm is essentially a distributed machine learning procedure while distinguishing from general non-FL cases by the heterogeneity among clients~\cite{fs}.
These characteristics make FedHPO\xspace unique against HPO for traditional machine learning algorithms.
We summarize the uniqueness from the following perspectives:
\noindent\textbf{Hyperparameter dimensions}.
In addition to the server-side hyperparameters newly introduced by FL algorithms (e.g., FedOPT), some client-side hyperparameters, e.g., \textit{\#local\_update\_step}, do not exist in the non-FL setting.
Moreover, these new hyperparameter dimensions bring in correlations that do not exist in HPO for traditional machine learning algorithms.
For example, \textit{\#local\_update\_step}, client-side \textit{learning\_rate}, and server-side \textit{learning\_rate} together determine the step size of each round's update.
Besides, their relationships are determined not only by the landscape of the aggregated objective function but also by the statistical heterogeneity of clients, which is a unique factor of FL.
\noindent\textbf{Fidelity dimensions}.
FedHPO\xspace introduces a new fidelity dimension---\textit{sample\_rate}, which determines the fraction of clients sampled for training in each round.
The larger \textit{sample\_rate} is, the smaller the variance of each aggregation is, and the more resource each round consumes.
Like existing fidelity dimensions, \textit{sample\_rate} allows trading accuracy for efficiency.
Moreover, it correlates with other fidelity dimensions, such as \textit{\#round} $T$, where, in general, aggregation with smaller variance is believed to need fewer rounds for convergence.
This correlation encourages people to balance these quantities w.r.t. their system conditions, e.g., choosing a large $T$ but a small \textit{sample\_rate} when the straggler issue is severe, to achieve more economical accuracy-efficiency trade-offs.
\noindent\textbf{Concurrent exploration}.
Unlike centralized learning, where each execution can only try a specific $\lambda$, some FedHPO\xspace works, such as FedEx~\cite{fedex}, concurrently explore different client-side configurations in each round and update a policy w.r.t. the feedback from all these clients.
FedEx regards this strategy as a FedHPO\xspace counterpart to the weight-sharing strategy in neural architecture search.
However, the heterogeneity among clients is likely to make them have different optimal configurations~\cite{adaptivelr}, where making decisions by the same policy would become unsatisfactory.
In the same spirit as personalized FL~\cite{fedbn,ditto}, a promising direction is to decide on personalized hyperparameters in FedHPO\xspace.
\noindent\textbf{One-shot optimization}.
As each round in an FL course corresponds to two communication passes among participants (i.e., downloading and uploading the model), the consumed resources, in terms of both time and carbon emission, are larger than those in centralized learning by orders of magnitude.
As a result, most traditional black-box optimizers that require more than one full-fidelity trial are impractical in the FL setting~\cite{robustadaptive}.
Thus, multi-fidelity methods, particularly those capable of one-shot optimization~\cite{fedex,flora}, are more in demand in FedHPO\xspace.
Due to the uniqueness mentioned above, existing HPO benchmarks are inappropriate for studying FedHPO\xspace.
FedHPO\xspace calls for dedicated benchmarks that incorporate functions corresponding to FL algorithms and respect realistic FL settings.
\section{Our Proposed Benchmark Suite: \textsc{FedHPO-B}\xspace}
\label{sec:ourmethod}
We present an overview of \textsc{FedHPO-B}\xspace in Figure~\ref{fig:overview}.
Conceptually, \textsc{FedHPO-B}\xspace encapsulates functions to be optimized and provides a unified interface for HPO methods to access.
As the incorporated functions correspond to FL tasks, we build \textsc{FedHPO-B}\xspace upon an FL platform---FederatedScope (FS)~\cite{fs}.
It offers many off-the-shelf and pluggable FL-related ingredients, which enable us to prepare a comprehensive collection of FL tasks (see Section~\ref{subsec:comprehensive}).
Besides, FS's event-driven framework and well-designed APIs allow us to easily incorporate more FL tasks and FedHPO\xspace methods into \textsc{FedHPO-B}\xspace, which is valuable for this nascent research direction (see Section~\ref{subsec:extensive}).
In \textsc{FedHPO-B}\xspace, function evaluations can be conducted in either of the three modes---``tabular'', ``surrogate'', and ``raw'', following the convention mentioned in Section~\ref{subsec:problem}.
To create the lookup table for tabular mode, we truly execute the corresponding FL algorithms with the grids of search space as their configurations.
These lookup tables are adopted as training data for the surrogate models, which are expected to approximate the functions of interest.
Meanwhile, we collect clients' execution time from these executions to form system statistics for our system model (see Section~\ref{subsec:efficient}).
As all our FL tasks and algorithms are implemented in FS, and FS has provided its docker images, we can containerize \textsc{FedHPO-B}\xspace effortlessly, i.e., the function evaluation in the ``raw'' mode is executed in an FS docker container.
\begin{figure}
\caption{Overview of \textsc{FedHPO-B}\xspace.}
\label{fig:overview}
\end{figure}
\subsection{Comprehensiveness}
\label{subsec:comprehensive}
There is no universally best HPO method~\cite{automlbenchmark}.
Therefore, it is necessary to compare related methods on multiple HPO problems that correspond to diverse functions and thus can comprehensively evaluate their performances.
To satisfy this need, we leverage FS to prepare various FL tasks, where their considered datasets and model architectures are quite different. Specifically, the data can be images, sentences, graphs, or tabular data. Some datasets are provided by existing FL benchmarks, including FEMNIST (from LEAF~\cite{leaf}) and split Cora (from FS-GNN~\cite{fsg}), which are readily distributed and thus conform to the FL setting. Some are centralized initially (e.g., those from OpenML~\cite{openml,automlbenchmark} and Hugging Face~\cite{nlpdataset}), which we partition by FS's splitters to construct their FL version with Non-IIDness among clients. All these datasets are publicly available and can be downloaded and preprocessed by our prepared scripts. The corresponding suitable neural network model is applied to handle each dataset. Thus, these FL tasks involve fully-connected networks, convolutional networks, and the latest attention-based model. For each such FL task, we employ two FL algorithms---FedAvg and FedOPT to handle it, respectively, where it is worth mentioning that FedOPT has server-side hyperparameters.
\begin{table}[bthp]
\centering
\caption{Summary of benchmarks in current \textsc{FedHPO-B}\xspace: \#Cont. and \#Disc. denote the number of hyperparameter dimensions corresponding to continuous and discrete candidate choices, respectively.}
\label{tab:probstat}
\begin{tabular}{c|c|c|c|c|c|c|c}
\toprule
\multicolumn{1}{c}{Model} & \multicolumn{1}{c}{\#Dataset} & \multicolumn{1}{c}{Domain} & \multicolumn{1}{c}{\#Client} & \multicolumn{1}{c}{\#FL Algo.} & \multicolumn{1}{c}{\#Cont.} & \multicolumn{1}{c}{\#Disc.} & \multicolumn{1}{c}{Opt. budget} \\
\hline
\multicolumn{1}{c}{CNN} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{CV} & \multicolumn{1}{c}{200} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{20 days} \\
\multicolumn{1}{c}{BERT~\cite{BERT}} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{NLP} & \multicolumn{1}{c}{5} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{20 days} \\
\multicolumn{1}{c}{GNN} & \multicolumn{1}{c}{3} & \multicolumn{1}{c}{Graph} & \multicolumn{1}{c}{5} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{1 day} \\
\multicolumn{1}{c}{LR} & \multicolumn{1}{c}{7} & \multicolumn{1}{c}{Tabular} & \multicolumn{1}{c}{5} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{3} & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{21,600 seconds} \\
\multicolumn{1}{c}{MLP} & \multicolumn{1}{c}{7} & \multicolumn{1}{c}{Tabular} & \multicolumn{1}{c}{5} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{3} & \multicolumn{1}{c}{43,200 seconds} \\
\bottomrule
\end{tabular}
\end{table}
Then the FedHPO\xspace problem is defined as optimizing the design choices of the FL algorithm on each specific FL task.
We are more interested in FL tasks' unique hyperparameter dimensions that are not involved in traditional centralized learning. Thus, client-side \textit{learning\_rate}, \textit{\#local\_update\_step}, and server-side \textit{learning\_rate} are optimized in all provided FedHPO\xspace problems. Besides, in addition to \textit{\#round}, the unique fidelity dimension, \textit{sample\_rate}, is adopted. We summarize our currently provided FedHPO\xspace problems in Table~\ref{tab:probstat}. More details can be found in Appendix~\ref{sec:datasets} and Appendix~\ref{sec:benchmarks}.
We study the empirical cumulative distribution function (ECDF) for each model type in \textsc{FedHPO-B}\xspace.
Specifically, in creating the lookup table for tabular mode, we have conducted function evaluations for the grid search space, resulting in a finite set $\{(\lambda, f(\lambda))\}$ for each benchmark.
Then we normalize the performances (i.e., $f(\lambda)$) and show their ECDF in Figure~\ref{fig:cdf_avg}, where these curves exhibit different shapes.
For example, the number of top-tier configurations for GNN on PubMed is remarkably smaller than on the other graph datasets, which might imply a less smooth landscape and difficulty in seeking the optimal configuration.
As the varying shapes of ECDF curves have been regarded as an indicator of the diversity of benchmarks~\cite{hpobench}, we can conclude from Figure~\ref{fig:cdf_avg} that \textsc{FedHPO-B}\xspace enables evaluating HPO methods comprehensively. We defer more studies about function landscape from the perspective of ECDF to Appendix~\ref{sec:more_results}.
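The normalized regrets and their ECDF can be computed from a lookup table as in the following minimal sketch; the $f(\lambda)$ values are hypothetical and serve only to illustrate the computation.
\begin{verbatim}
import numpy as np

# ECDF of normalized regrets over all evaluated configurations of one benchmark.
# The f(lambda) values below are hypothetical.
f_vals = np.array([0.42, 0.38, 0.55, 0.36, 0.61, 0.40, 0.37])
regret = (f_vals - f_vals.min()) / (f_vals.max() - f_vals.min())

xs = np.sort(regret)
ecdf = np.arange(1, len(xs) + 1) / len(xs)   # fraction of configs with regret <= xs[i]
for x, y in zip(xs, ecdf):
    print(f"regret <= {x:.2f}: {y:.2f}")
\end{verbatim}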
\begin{figure}
\caption{CNN}
\caption{BERT}
\caption{GNN}
\caption{LR}
\caption{MLP}
\caption{Empirical Cumulative Distribution Functions: The normalized regret is calculated for all evaluated configurations of the respective model on the respective FL task with FedAvg.}
\label{fig:cdf_avg}
\end{figure}
We are continuously integrating more and more benchmarks into \textsc{FedHPO-B}\xspace to improve its comprehensiveness. Notably, we will incorporate the emerging learning paradigms, including federated reinforcement learning~\cite{fedrl}, federated unsupervised representation learning~\cite{fedunsupervised}, and federated hetero-task~\cite{fedhetero}, whose HPO problems have not been studied by the community.
\subsection{Efficiency}
\label{subsec:efficient}
For efficient function evaluation, we implement the tabular mode of \textsc{FedHPO-B}\xspace by running the FL algorithms configured by the grid search space in advance. Each specific configuration $\lambda$ is repeated five times with different random seeds, and the resulting performances, including loss, accuracy and f1-score under train/validation/test splits, are averaged and adopted as the results of $f(\lambda)$. Besides, we provide not only the results of $f(\lambda)$ (i.e., that with full-fidelity) but also results of $f(\lambda,b)$, where $b$ is enumerated across different \textit{\#round} and different \textit{sample\_rate}. Since executing function evaluation is much more costly in FL than traditional centralized learning, such lookup tables are precious. In creating them, we spent about two months of computation time on six machines, each with four Nvidia V100 GPUs. Now we make them publicly accessible via the tabular mode of \textsc{FedHPO-B}\xspace.
As tabular mode has discretized the original search space and thus cannot respond to queries other than the grids, we train random forest models on these lookup tables, i.e., $\{(\lambda, b), f(\lambda,b))\}$. These models serve as a surrogate of the functions to be optimized and can answer any query $\lambda$ by simply making an inference. More details about implementing the tabular and surrogate modes of \textsc{FedHPO-B}\xspace are deferred to Appendix~\ref{sec:benchmarks}.
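A minimal sketch of such a surrogate is given below, assuming configurations and fidelities are encoded as numeric feature vectors; both the encoding and the synthetic training data are assumptions for illustration, not the surrogates shipped with the benchmark.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Fit a random forest on (lambda, b) -> f(lambda, b) pairs.  Features: log learning rate,
# #local_update_step, sample_rate, #round; both the encoding and the data are synthetic.
rng = np.random.default_rng(0)
X = rng.uniform([-5.0, 1.0, 0.2, 50.0], [0.0, 8.0, 1.0, 500.0], size=(200, 4))
y = 0.5 + 0.004 * X[:, 0] ** 2 - 0.0005 * X[:, 3] + 0.05 * rng.standard_normal(200)

surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
query = np.array([[-2.0, 4.0, 0.6, 300.0]])   # any (lambda, b), not only grid points
print("predicted validation loss:", surrogate.predict(query)[0])
\end{verbatim}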
When an HPO method interacts with \textsc{FedHPO-B}\xspace in raw mode, each function evaluation is to run the corresponding FL course, which can be conducted by actually executing it on a cluster of FL participants or by simulating this execution on a standalone machine. Simulation is preferred, as it can provide results consistent with running on a cluster while saving time and energy. However, the time consumed by simulation cannot reasonably reflect that of actual execution, which makes the HPO method fail to track the depleted budget. Hence, a system model that can estimate the time consumed by evaluating $f(\lambda, b)$ in realistic scenarios is indispensable. Meanwhile, such a system model should be configurable so that users with different system conditions can calibrate the model to their cases.
Therefore, we propose and implement a novel system model based on a basic one~\cite{fedoptimizationsurvey}.
Formally, the execution time for each FL round in our model is estimated as follows:
\begin{equation}
\label{eq:sysmodel}
\begin{split}
T(f,\lambda,b) &= T_{\text{comm}}(f,\lambda,b) + T_{\text{comp}}(f,\lambda,b),\\
T_{\text{comm}}(f,\lambda,b) &= \max(\frac{N\times S_{\text{down}}(f,\lambda)}{B_{\text{up}}^{(\text{server})}}, \frac{S_{\text{down}}(f,\lambda)}{B_{\text{down}}^{(\text{client})}}) + \frac{S_{\text{up}}(f,\lambda)}{B_{\text{up}}^{(\text{client})}},\\
T_{\text{comp}}(f,\lambda,b) &= \mathbb{E}_{T_{i}^{(\text{client})}\sim\text{Exp}(\cdot |\frac{1}{c(f,\lambda,b)}),i=1,\ldots,N}[\max(\{T_{i}^{(\text{client})}\})] + T^{(\text{server})}(f,\lambda,b),
\end{split}
\end{equation}
where $N$ denotes the number of clients sampled in this round, $S(f,\lambda)$ denotes the download/upload size, $B$ denotes the download/upload bandwidth of server/client, $T^{(\text{server})}$ is the time consumed by server-side computation, and $T_{i}^{(\text{client})}$ denotes the computation time consumed by $i$-th client, which is sampled from an exponential distribution with $c(f,\lambda,b)$ as its mean.
Compared with the existing basic model, one ingredient we add is to reflect the bottleneck issue of the server.
Specifically, the server broadcasts model parameters for $N$ clients in each round, which might become the bottleneck of the communication.
And $N$ is determined by the total number of clients in the considered FL task and \textit{sample\_rate} ($b$ specified).
Another ingredient is to consider the heterogeneity among clients' computational capacity, where the assumed exponential distribution has been widely adopted in system designs~\cite{fedoptimizationsurvey} and is consistent with real-world applications~\cite{papaya}.
As the local updates are not sent back simultaneously, there is no need to consider the bottleneck issue for the server twice.
To implement our system model, we use the following proposition to calculate Eq.~\eqref{eq:sysmodel} analytically.
\begin{proposition}
When the computation times of the clients are independently and identically distributed, following an exponential distribution $\text{Exp}(\cdot|\frac{1}{c})$, the expected time for the straggler of $N$ uniformly sampled clients is $\sum_{i=1}^{N}\frac{c}{i}$.
\label{prop:1}
\end{proposition}
Proof can be found in Appendix~\ref{sec:proof}.
We provide default parameters of our system model, including $c$, $B$, and $T^{(\text{server})}$, based on observations collected from the executions in Section~\ref{subsec:efficient}.
Users are allowed to specify these parameters according to their scenarios or other system statistic providers, e.g., estimating the computation time of stragglers by sampling from FedScale~\cite{fedscale}.
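The per-round estimate of Eq.~\eqref{eq:sysmodel} can be sketched as follows, using the closed form of Proposition~\ref{prop:1} for the expected straggler time; the parameter values are placeholders, not the calibrated defaults shipped with the benchmark.
\begin{verbatim}
# Sketch of the per-round time estimate of Eq. (eq:sysmodel): communication time plus
# computation time, with the expected straggler time of N i.i.d. Exp(1/c) clients equal
# to c * (1 + 1/2 + ... + 1/N) by Proposition 1.  All parameter values are placeholders.
def round_time(N, size_down, size_up, bw_up_server, bw_down_client, bw_up_client,
               c_client, t_server):
    t_comm = max(N * size_down / bw_up_server, size_down / bw_down_client) \
             + size_up / bw_up_client
    t_comp = c_client * sum(1.0 / i for i in range(1, N + 1)) + t_server
    return t_comm + t_comp

# e.g. 20 sampled clients, a 10 MB model, client compute time with mean 2 s
print(round_time(N=20, size_down=10, size_up=10, bw_up_server=100,
                 bw_down_client=20, bw_up_client=5, c_client=2.0, t_server=0.1))
\end{verbatim}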
\subsection{Extensibility}
\label{subsec:extensive}
\begin{figure}
\caption{A general algorithmic view for FedHPO\xspace methods: They are allowed to concurrently explore different client-side configurations in the same round of FL, but the clients are heterogeneous, i.e., corresponding to different functions $f_i(\cdot)$. Operators in brackets are optional.}
\label{fig:fedhpoalgoarch}
\end{figure}
Traditional HPO methods are decoupled from the procedure of function evaluation, with a well-defined interface for interaction (see Figure~\ref{fig:overview}). Thus, any novel method is readily applicable to optimizing the prepared functions and could be integrated into \textsc{FedHPO-B}\xspace without further development. However, FedHPO\xspace methods, including FTS~\cite{fedbo} and FedEx~\cite{fedex}, are often coupled with the FL procedure and thus need to be implemented in FS if we want to incorporate them into \textsc{FedHPO-B}\xspace, as the red ``FedHPO'' module in Figure~\ref{fig:overview} shows. As FedHPO\xspace is springing up, we must ease the development of novel FedHPO\xspace methods so that \textsc{FedHPO-B}\xspace is extensible.
We present a general algorithmic view in Figure~\ref{fig:fedhpoalgoarch}, which unifies several related methods and thus would benefit \textsc{FedHPO-B}\xspace's extensibility. In this view, FedHPO\xspace follows what an FL round is framed: (1) server broadcasts information; (2) clients make local updates and send feedback; (3) server aggregates feedback. At the server-side, we maintain the global policy for determining hyperparameter configurations. In addition to the model parameters, either the policy or configurations sampled from it are also broadcasted. If the $i$-th client receives the global policy, it will update its local policy w.r.t. the global one and then sample a configuration from its local policy. Either received or locally sampled, the configuration $\lambda_i$ is specified for the local update procedure, which results in updated local model parameters $\theta^{(t+1)}_i$. Then $\theta^{(t+1)}_i$ is evaluated, and its performance is regarded as the result of (client-specific) function evaluation on $\lambda_i$, i.e., $f_i(\lambda_i)$. Finally, both $\theta^{(t+1)}_i$ and $(\lambda_i, f_i(\lambda_i))$ are sent back to the server, which will be aggregated for updating the global model and policy, respectively.
We have implemented FedEx in FS with such a view, where $\lambda_i$ is independently sampled from the global policy, and the ``$\text{aggr}_{\text{p}}$'' operator is exponential gradient descent. Other FedHPO\xspace methods, e.g., FTS, can also be implemented with our view. In FTS, the broadcasted policy $\pi^{(t)}$ is the samples drawn from all clients' posterior beliefs. The ``$\text{sync}_{\text{p}}$'' operator can be regarded as mixing Gaussian process (GP) models.
``$\text{update}_{\text{p}}$'' operator corresponds to updating local GP model.
Then a sample drawn from local GP posterior belief is regarded as $\pi^{(t+1)}_i$ and sent back. The ``$\text{aggr}_{p}$'' operator corresponds to packing received samples together.
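This view can be made concrete with a deliberately simplified, runnable toy: a scalar ``model'', quadratic client objectives, and a categorical policy over candidate client-side learning rates updated by exponentiated gradient, in the spirit of FedEx. None of the names or interfaces below reflect FederatedScope's actual API; they are illustrative placeholders.
\begin{verbatim}
import math
import random

# Toy instance of the general view: broadcast (model, policy); each client samples a
# configuration, makes a local update, evaluates it, and sends both back; the server
# averages the models (FedAvg-style) and updates the policy by exponentiated gradient.
CANDIDATE_LRS = [0.01, 0.1, 1.0]

def fedhpo_round(theta, policy, client_means, eta=1.0):
    updates, feedback = [], []
    for mean in client_means:        # client i's objective is (theta - mean)^2
        idx = random.choices(range(len(CANDIDATE_LRS)), weights=policy)[0]
        new_theta = theta - CANDIDATE_LRS[idx] * 2.0 * (theta - mean)  # one local step
        feedback.append((idx, (new_theta - mean) ** 2))                # f_i(lambda_i)
        updates.append(new_theta)
    theta = sum(updates) / len(updates)                                # aggregate models
    loss_sum, count = [0.0] * len(CANDIDATE_LRS), [0] * len(CANDIDATE_LRS)
    for idx, loss in feedback:
        loss_sum[idx] += loss
        count[idx] += 1
    new_policy = [p * math.exp(-eta * (loss_sum[i] / count[i] if count[i] else 0.0))
                  for i, p in enumerate(policy)]
    z = sum(new_policy)
    return theta, [w / z for w in new_policy]

random.seed(0)
theta, policy = 0.0, [1.0 / 3] * 3
for _ in range(50):
    theta, policy = fedhpo_round(theta, policy, client_means=[0.8, 1.0, 1.2])
print("theta:", round(theta, 3), "policy:", [round(p, 2) for p in policy])
\end{verbatim}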
We choose to build \textsc{FedHPO-B}\xspace on FS as it allows developers to flexibly customize the message exchanged among FL participants. Meanwhile, the native procedures to handle a received message could be modularized. These features make it easy to express novel FedHPO\xspace methods with the above view. Last but not least, FS's rich off-the-shelf datasets, splitters, models, and trainers have almost eliminated the effort of introducing more FL tasks into \textsc{FedHPO-B}\xspace.
\section{Experiments}
\label{sec:exp}
We conduct extensive empirical studies with our proposed \textsc{FedHPO-B}\xspace. Basically, we exemplify the use of \textsc{FedHPO-B}\xspace in comparing HPO methods, which, in the meantime, can somewhat validate the correctness of \textsc{FedHPO-B}\xspace. Moreover, we aim to gain more insights into FedHPO\xspace, answering three research questions: \textbf{(RQ1)} How do traditional HPO methods perform in the FL setting? \textbf{(RQ2)} Do recently proposed methods that exploit ``concurrent exploration'' (see Section~\ref{sec:background}) significantly improve traditional methods? \textbf{(RQ3)} How can we leverage the new fidelity dimension of FedHPO\xspace?
All scripts concerning the studies here will be committed to \textsc{FedHPO-B}\xspace so that the community can quickly reproduce our established benchmarks.
\subsection{Studies about Applying Traditional HPO Methods in the FL Setting}
\label{subsec:exp1}
To answer RQ1, we largely follow the experiment conducted in HPOBench~\cite{hpobench} but focus on the FedHPO\xspace problems \textsc{FedHPO-B}\xspace provided.
\noindent\textbf{Protocol.}
We employ up to ten optimizers (i.e., HPO methods) from widely adopted libraries (see Table~\ref{tab:hpo_optimizers} for more details).
For black-box optimizers (\textit{BBO}), we consider random search (\textit{RS}), the evolutionary search approach of differential evolution (\textit{DE}~\cite{de1,de2}), and Bayesian optimization with a GP model ($\textit{BO}_{\textit{GP}}$), a random forest model ($\textit{BO}_{\textit{RF}}$~\cite{borf}), and a kernel density estimator ($\textit{BO}_{\textit{KDE}}$~\cite{kde}), respectively.
For multi-fidelity optimizers (\textit{MF}), we consider Hyperband (\textit{HB}~\cite{hyperband}), its model-based extensions with KDE-based model (\textit{BOHB}~\cite{bohb}), and differential evolution (\textit{DEHB}~\cite{dehb}), and Optuna's implementations of TPE with median stopping ($\textit{TPE}_{\textit{MD}}$) and TPE with Hyperband ($\textit{TPE}_{\textit{HB}}$)~\cite{optuna}.
We apply these optimizers to optimize the design choices of FedAvg and FedOPT on 20 FL tasks drawn from what \textsc{FedHPO-B}\xspace currently provides (see Table~\ref{tab:probstat}). These FL tasks involve five model types and four data domains. To compare the optimizers uniformly and fairly, we repeat each setting five times in the same runtime environment but with different random seeds. The best-seen validation loss is monitored for each optimizer (for multi-fidelity optimizers, higher fidelity results are preferred over lower ones). We sort the optimizers by their best-seen results and compare their mean ranks on these 20 FL tasks. Following HPOBench~\cite{hpobench}, we use sign tests to judge whether advanced methods outperform their baselines and whether multi-fidelity methods outperform their single-fidelity counterparts. We refer our readers to Appendix~\ref{sec:optimizers} for more details.
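For illustration, the ranking and the sign test can be sketched as follows; the loss matrix is hypothetical, and the plain one-sided binomial sign test below may differ in details from the exact procedure used in HPOBench.
\begin{verbatim}
import numpy as np
from math import comb

# Rank optimizers per task by best-seen validation loss (1 = best), then test
# "A beats B" with a one-sided binomial sign test over win/loss counts (ties dropped).
losses = np.array([[0.31, 0.29, 0.33],     # rows: tasks, columns: optimizers
                   [0.42, 0.44, 0.40],
                   [0.25, 0.22, 0.27],
                   [0.51, 0.48, 0.52]])
ranks = losses.argsort(axis=1).argsort(axis=1) + 1
print("mean ranks:", ranks.mean(axis=0))

def sign_test_pvalue(wins, defeats):
    n = wins + defeats                     # P[X >= wins], X ~ Binomial(n, 1/2)
    return sum(comb(n, k) for k in range(wins, n + 1)) / 2 ** n

print("p-value for 15 wins / 5 losses:", round(sign_test_pvalue(15, 5), 4))
\end{verbatim}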
\begin{figure}
\caption{\textit{BBO}}
\label{fig:BBO_All_avg}
\caption{\textit{MF}}
\label{fig:MF_All_avg}
\caption{All}
\label{fig:All_All_avg}
\caption{Mean rank over time on all FedHPO\xspace problems (with FedAvg).}
\label{fig:entire_all_tabular_avg_rank}
\end{figure}
\noindent\textbf{Results and Analysis.}
We show the results in Figure~\ref{fig:entire_all_tabular_avg_rank}. Overall, their eventual mean ranks do not deviate remarkably. For \textit{BBO}, the performances of optimizers are close at the beginning but become more distinguishable along with their exploration. Ultimately, $\textit{BO}_{\textit{GP}}$ has successfully sought better configurations than other optimizers. In contrast to \textit{BBO}, \textit{MF} optimizers perform pretty differently in the early stage, which might be rooted in the vast variance of low-fidelity function evaluations. Eventually, \textit{HB} and $\textit{BOHB}$ become superior to others while achieving a very close mean rank.
We consider optimizers' final performances on these 20 tasks, where, for each pair of optimizers, one may win, tie, or lose against the other. Then we can conduct sign tests to compare pairs of optimizers, where results are presented in Table~\ref{tab:p-value} and Table~\ref{tab:p-value2}. Comparing these advanced optimizers with their baselines, only $\textit{BO}_{\textit{GP}}$, $\textit{BO}_{\textit{RF}}$, and \textit{DE} win on more than half of the FL tasks but have no significant improvement. Meanwhile, no \textit{MF} optimizers show any advantage in exploiting experience. These observations differ from non-FL cases, where we presume the reason lies in the distribution of configurations' performances (see Figure~\ref{fig:cdf_avg}). From Table~\ref{tab:p-value2}, we see that \textit{MF} optimizers always outperform their corresponding single-fidelity version, which is consistent with non-FL settings.
\begin{table}[htbp]
\centering
\caption{P-value of a sign test for the hypothesis---these advanced methods surpass the baselines (\textit{RS} for \textit{BBO} and \textit{HB} for \textit{MF}).}
\label{tab:p-value}
\begin{tabular}{ccccc}
\toprule
& $\textit{BO}_{\textit{GP}}$ & $\textit{BO}_{\textit{RF}}$ & $\textit{BO}_{\textit{KDE}}$ & $\textit{DE}$ \\
p-value against \textit{RS} & 0.0637 & 0.2161 & 0.1649 & 0.7561 \\
win-tie-loss & 13 / 0 / 7 & 12 / 0 / 8 & 7 / 0 / 13 & 11 / 0 / 9 \\ \hline
& $\textit{BOHB}$ & $\textit{DEHB}$ & $\textit{TPE}_{\textit{MD}}$ & $\textit{TPE}_{\textit{HB}}$ \\
p-value against \textit{HB} & 0.4523 & 0.9854 & 0.2942 & 0.2454 \\
win-tie-loss & 7 / 0 / 13 & 9 / 0 / 11 & 9 / 0 / 11 & 9 / 0 / 11 \\ \bottomrule
\end{tabular}
\end{table}
\begin{table}[hbp]
\centering
\caption{P-value of a sign test for the hypothesis---\textit{MF} methods surpass corresponding \textit{BBO} methods.}
\label{tab:p-value2}
\begin{tabular}{cccc}
\toprule
& $\textit{HB}$ vs. $\textit{RS}$ & $\textit{DEHB}$ vs. $\textit{DE}$ & $\textit{BOHB}$ vs. $\textit{BO}_{\textit{KDE}}$ \\
p-value & 0.1139 & 0.2942 & \textbf{0.0106} \\
win-tie-loss & 13 / 0 / 7 & 13 / 0 / 7 & 16 / 0 / 4 \\ \bottomrule
\end{tabular}
\end{table}
\subsection{Studies about Concurrent Exploration}
\label{subsec:exp2}
As mentioned in Section~\ref{sec:background}, the cost of communication between FL participants has made acquiring multiple full-fidelity function evaluations unaffordable, posing a stricter budget constraint to HPO methods. Yet FL, at the same time, allows HPO methods to take advantage of concurrent exploration, which somewhat compensates for the number of function evaluations. We are interested in methods designed regarding these characteristics of FedHPO\xspace and design this experiment to see how much concurrent exploration contributes.
\noindent\textbf{Protocol.} We consider the FL tasks where FedAvg and FedOPT are applied to learn a 2-layer CNN on FEMNIST. As a full-fidelity function evaluation consumes 500 rounds on this dataset, we carefully specify \textit{RS} and the successive halving algorithm (\textit{SHA}) to limit their total budget to that of a one-shot optimization in terms of \textit{\#round} (see the accounting sketch below). Precisely, \textit{RS} consists of ten trials, each running for 50 rounds. \textit{SHA}, initialized with 27 candidate configurations, consists of three stages with budgets of 12, 13, and 19 rounds. Then we adopt \textit{RS}, \textit{SHA}, FedEx wrapped by \textit{RS} (\textit{RS+FedEx}), and FedEx wrapped by \textit{SHA} (\textit{SHA+FedEx}) to optimize the design choices of FedAvg and FedOPT, respectively. The wrapper is responsible for (1) determining the server-side \textit{learning\_rate} for FedOPT and (2) determining the arms for FedEx. We consider validation loss the metric of interest, and function evaluations are conducted in the raw mode. We repeat each method three times and report the averaged best-seen value at the end of each trial. Meanwhile, for each considered method, we entirely run the FL course with the optimal configuration it seeks. Their averaged test accuracies are compared.
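One plausible reading of the budget accounting above (an assumption on our part, not stated explicitly in the protocol) is that the rounds summed over all trials stay within a single 500-round FL course:
\begin{verbatim}
# Round accounting under the one-shot constraint (assumed interpretation): the rounds
# consumed by all trials together should not exceed one 500-round FL course.
rs_rounds = 10 * 50                          # RS: 10 trials x 50 rounds
sha_stages = [(27, 12), (9, 13), (3, 19)]    # SHA: (#surviving configs, rounds per stage)
sha_rounds = sum(n * r for n, r in sha_stages)
print(rs_rounds, sha_rounds)                 # 500 and 498
\end{verbatim}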
\noindent\textbf{Results and Analysis.} We present the results in Figure~\ref{fig:exp2} and Table~\ref{tab:exp2}. For FedAvg, the best-seen mean validation losses of all wrapped FedEx variants decrease more slowly than those of their corresponding wrappers. However, the generalization performances of their searched configurations are significantly better than those of their wrappers, which strongly confirms the effectiveness of concurrent exploration. As for FedOPT, all wrapped FedEx variants show better regrets than their corresponding wrappers. However, as the one-shot setting has drastically limited the number of initial configurations, none of the searched configurations leads to satisfactory performance. Notably, the most crucial hyperparameter, the server-side \textit{learning\_rate}, cannot be well specified.
\begin{table}[ht]
\begin{minipage}[b]{0.6\linewidth}
\centering
\captionsetup{justification=centering}
\centering
\begin{subfigure}{0.45\linewidth}
\centering
\includegraphics[width=0.98\linewidth]{materials/exp2/exp2_avg.pdf}
\end{subfigure}
\begin{subfigure}{0.45\linewidth}
\centering
\includegraphics[width=0.98\linewidth]{materials/exp2/exp2_opt.pdf}
\end{subfigure}
\captionof{figure}{Mean validation loss over time. \\ \textbf{Left}: FedAvg. \textbf{Right}: FedOPT.}
\label{fig:exp2}
\end{minipage}
\begin{minipage}[b]{0.4\linewidth}
\centering
\begin{tabular}{ll}
\toprule
Methods & Test Accuracy \\ \midrule
\textit{RS} & 67.14 ± 8.46 \\
\textit{SHA} & 75.15 ± 3.44 \\
\textit{RS+FedEx} & 71.25 ± 8.79 \\
\textit{SHA+FedEx} & \textbf{77.46 ± 1.78} \\ \bottomrule
\end{tabular}
\caption{Evaluation of the searched configurations: mean test accuracy (\%) ± standard deviation.}
\label{tab:exp2}
\end{minipage}
\end{table}
\subsection{Studies about the New Fidelity}
\label{subsec:exp3}
We simulate distinct system conditions by specifying different parameters for our system model. Then we show the performances of \textit{HB} with varying \textit{sample\_rates} in Figure~\ref{fig:exp3}, where the preferred \textit{sample\_rate} depends on the system condition. Such a phenomenon supports the idea of pursuing a more economical accuracy-efficiency trade-off by balancing \textit{sample\_rate} with \textit{\#rounds} w.r.t. the system condition. More details about this experiment are deferred to Appendix~\ref{sec:new_fidelity}.
\begin{figure}
\centering
\begin{subfigure}{0.45\linewidth}
\centering
\caption{With bad network status}
\label{fig:exp3_slow}
\end{subfigure}
\begin{subfigure}{0.45\linewidth}
\centering
\caption{With good network status}
\label{fig:exp3_quick}
\end{subfigure}
\caption{Performances of \textit{HB} with different \textit{sample\_rate}s under the two simulated system conditions.}
\label{fig:exp3}
\end{figure}
\section{Related Work}
\label{sec:related}
\textbf{Federated learning}.
\noindent\textbf{HPO methods}.
\noindent\textbf{HPO benchmarks}.
\section{Conclusion and Future Work}
\label{sec:conclusion}
In this paper, we first identify the uniqueness of FedHPO\xspace, which we ascribe to the distributed nature of FL and its heterogeneous clients. This uniqueness prevents FedHPO\xspace research from leveraging existing HPO benchmarks, which has led to inconsistent comparisons between some recently proposed methods. Hence, we suggest and implement a comprehensive, efficient, and extensible benchmark suite, \textsc{FedHPO-B}\xspace. We further conduct extensive HPO experiments on \textsc{FedHPO-B}\xspace, validating its correctness and applicability to comparing traditional and federated HPO methods. We have open-sourced \textsc{FedHPO-B}\xspace with an Apache-2.0 license and will actively maintain it in the future. We believe \textsc{FedHPO-B}\xspace can serve as the stepping stone to developing reproducible FedHPO\xspace works, which is indispensable for such a nascent direction.
As mentioned in Section~\ref{subsec:comprehensive}, tasks other than federated supervised learning will be incorporated. At the same time, we aim to extend \textsc{FedHPO-B}\xspace to include different FL settings, e.g., HPO for vertical FL~\cite{flora}. Another issue the current version has not touched on is the risk of privacy leakage caused by HPO methods~\cite{adaptivelr}, which we should provide related metrics and testbeds in the future.
\iffalse
\section*{Checklist}
\begin{enumerate}
\item For all authors...
\begin{enumerate}
\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
\answerYes{}
\item Did you describe the limitations of your work?
\answerYes{In Section \ref{sec:conclusion}.}
\item Did you discuss any potential negative societal impacts of your work?
\answerNo{}
\item Have you read the ethics review guidelines and ensured that your paper conforms to them?
\answerYes{}
\end{enumerate}
\item If you are including theoretical results...
\begin{enumerate}
\item Did you state the full set of assumptions of all theoretical results?
\answerYes{}
\item Did you include complete proofs of all theoretical results?
\answerYes{Included in supplementary materials due to space limitation.}
\end{enumerate}
\item If you ran experiments...
\begin{enumerate}
\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
\answerYes{}
\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerYes{Training details are given in Section~\ref{sec:exp} and supplemental materials.}
\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
\answerNo{We only report mean accuracy because the variance is relatively large and matters visualization.}
\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
\answerYes{Included in supplementary materials.}
\end{enumerate}
\item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
\begin{enumerate}
\item If your work uses existing assets, did you cite the creators?
\answerYes{We cite the creators of used datasets.}
\item Did you mention the license of the assets?
\answerNo{}
\item Did you include any new assets either in the supplemental material or as a URL?
\answerYes{We released our benchmark suite.}
\item Did you discuss whether and how consent was obtained from people whose data you're using/curating?
\answerYes{We mention that all used datasets are public in our experiments.}
\item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerYes{We mention that all used datasets are public in our experiments without personally identifiable information or offensive content.}
\end{enumerate}
\item If you used crowdsourcing or conducted research with human subjects...
\begin{enumerate}
\item Did you include the full text of instructions given to participants and screenshots, if applicable?
\answerNA{}
\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
\answerNA{}
\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
\answerNA{}
\end{enumerate}
\end{enumerate}
\fi
\appendix
\section{Maintenance of \textsc{FedHPO-B}\xspace}
\label{sec:app}
In this section, we present our plan for maintaining \textsc{FedHPO-B}\xspace following~\cite{hpobench}.
\begin{itemize}
\item \textbf{Who is maintaining the benchmarking library?} \textsc{FedHPO-B}\xspace is developed and maintained by the Data Analytics and Intelligence Lab (DAIL) of DAMO Academy.
\item \textbf{How can the maintainer of the dataset be contacted (e.g., email address)?} Users can reach the maintainer by creating issues with the \textsc{FedHPO-B}\xspace label on the GitHub repository at \url{https://github.com/alibaba/FederatedScope}.
\item \textbf{Is there an erratum?} No.
\item \textbf{Will the benchmarking library be updated?} Yes, as we discussed in Section~\ref{sec:conclusion}, we will add more FedHPO\xspace problems and introduce more FL tasks to the existing benchmark. We will track updates and GitHub releases on the \href{https://federatedscope.io/}{website} and in the README. In addition, we will fix potential issues regularly.
\item \textbf{Will older versions of the benchmarking library continue to be supported/hosted/maintained?} All older versions are available and maintained via GitHub releases, but limited support will be provided for older versions. Containers will be versioned and available via AliyunOSS.
\item \textbf{If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so?} Any contribution is welcome, and all commits to \textsc{FedHPO-B}\xspace must follow the guidance and regulations at \url{https://federatedscope.io/docs/contributor/}.
\end{itemize}
\section{HPO Methods}
\label{sec:optimizers}
As shown in Table~\ref{tab:hpo_optimizers}, we provide an overview of the optimizers (i.e., HPO methods) we use in this paper.
\begin{table}[htbp]
\centering
\caption{Overview of the optimizers from widely adopted libraries.}
\label{tab:hpo_optimizers}
\begin{tabular}{llll}
\toprule
Name & Model & Package & Version \\ \midrule
\textit{RS}~\cite{rs} & - & \href{https://github.com/automl/HpBandSter}{\textit{HPBandster}} & 0.7.4 \\
$\textit{BO}_{\textit{GP}}$~\cite{bogp1,bogp2} & \textit{GP} & \href{https://github.com/automl/SMAC3}{\textit{SMAC3}} & 1.3.3 \\
$\textit{BO}_{\textit{RF}}$~\cite{borf} & \textit{RF} & \href{https://github.com/automl/SMAC3}{\textit{SMAC3}} & 1.3.3 \\
$\textit{BO}_{\textit{KDE}}$~\cite{kde} & \textit{KDE} & \href{https://github.com/automl/HpBandSter}{\textit{HPBandster}} & 0.7.4 \\
\textit{DE}~\cite{de1,de2} & - & \href{https://github.com/automl/DEHB}{\textit{DEHB}} & git commit \\
\hline
\textit{HB}~\cite{hyperband} & - & \href{https://github.com/automl/HpBandSter}{\textit{HPBandster}} & 0.7.4 \\
\textit{BOHB}~\cite{bohb} & \textit{KDE} & \href{https://github.com/automl/HpBandSter}{\textit{HPBandster}} & 0.7.4 \\
\textit{DEHB}~\cite{dehb} & - & \href{https://github.com/automl/DEHB}{\textit{DEHB}} & git commit \\
$\textit{TPE}_{\textit{MD}}$~\cite{optuna} & \textit{TPE} & \href{https://github.com/optuna/optuna}{\textit{Optuna}} & 2.10.0 \\
$\textit{TPE}_{\textit{HB}}$~\cite{optuna} & \textit{TPE} & \href{https://github.com/optuna/optuna}{\textit{Optuna}} & 2.10.0 \\ \bottomrule
\end{tabular}
\end{table}
\subsection{Black-box Optimizers}
\label{subsec:bbo}
\noindent\textbf{\textit{RS}} (\textit{Random search}) is a prior-free HPO method, i.e., each step of the search does not exploit the already explored configurations. Random search finds configurations as good as or better than grid search within a small fraction of the computation time.
\noindent\textbf{$\textit{BO}_{\textit{GP}}$} is a Bayesian optimization with a Gaussian process model. $\textit{BO}_{\textit{GP}}$ uses a Matérn kernel for continuous hyperparameters and a Hamming kernel for categorical hyperparameters. In addition, the acquisition function is expected improvement (EI).
\noindent\textbf{$\textit{BO}_{\textit{RF}}$} is a Bayesian optimization with a random forest model. We set the hyperparameters of the random forest as follows: the number of trees is 10, the max depth of each tree is 20, and we use the default setting of the minimal samples split, which is 3.
\noindent\textbf{$\textit{BO}_{\textit{KDE}}$} is a Bayesian optimization with kernel density estimators (KDE), which is used in \textit{BOHB}~\cite{bohb}. It models the objective function via $\Pr(x\mid y_{\text{good}})$ and $\Pr(x\mid y_{\text{bad}})$. We set the hyperparameters for $\textit{BO}_{\textit{KDE}}$ as follows: the number of samples to optimize EI is 64, and $1/3$ of the configurations are sampled purely at random from the prior without the model; the bandwidth factor is 3 to encourage diversity, and the minimum bandwidth is 1e-3 to maintain diversity.
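To make the density-ratio idea concrete, the following is a minimal Python sketch of a KDE-based suggestion step; it only illustrates the idea and is not \textit{HPBandster}'s actual implementation (the split ratio \texttt{gamma} and the encoding of configurations into $[0,1]^D$ are assumptions here).
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

def kde_suggest(X, y, n_samples=64, gamma=0.15):
    """Suggest the next configuration via a TPE/BOHB-style density ratio.
    X: (N, D) observed configurations encoded in [0, 1]^D; y: (N,) losses.
    Needs enough observations to fit two non-degenerate KDEs (N > 2 * D)."""
    n_good = max(int(np.ceil(gamma * len(y))), 2)
    order = np.argsort(y)
    good, bad = X[order[:n_good]], X[order[n_good:]]
    kde_good, kde_bad = gaussian_kde(good.T), gaussian_kde(bad.T)
    # Sample candidates around the good configurations and rank them by the
    # ratio Pr(x | y_good) / Pr(x | y_bad); larger means more promising.
    cands = np.clip(kde_good.resample(n_samples).T, 0.0, 1.0)
    scores = kde_good(cands.T) / (kde_bad(cands.T) + 1e-32)
    return cands[np.argmax(scores)]
\end{verbatim}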
\noindent\textbf{\textit{DE}} uses the evolutionary search approach of Differential Evolution. We set the mutation strategy to \textit{rand1} and the crossover strategy to binomial (\textit{bin})~\footnote{Please refer to \url{https://github.com/automl/DEHB/blob/master/README.md} for details.}. In addition, we use the default settings for the other hyperparameters of \textit{DE}, where the mutation factor is 0.5, the crossover probability is 0.5, and the population size is 20.
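As a rough illustration of these two operators, the sketch below generates the trial vectors of one DE generation with \textit{rand/1} mutation and binomial crossover; it is not the \textit{DEHB} package's implementation, and the survivor selection step is only indicated in a comment.
\begin{verbatim}
import numpy as np

def de_rand1_bin_trials(pop, F=0.5, CR=0.5, rng=None):
    """Produce one generation of trial vectors for DE with rand/1 mutation
    and binomial crossover. pop: (N, D) configurations encoded in [0, 1]^D."""
    rng = np.random.default_rng() if rng is None else rng
    N, D = pop.shape
    trials = np.empty_like(pop)
    for i in range(N):
        # rand/1: three distinct members (other than i) define the mutant vector.
        a, b, c = rng.choice([j for j in range(N) if j != i], size=3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])
        # binomial crossover: each dimension comes from the mutant with prob. CR,
        # and at least one dimension is always taken from the mutant.
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True
        trials[i] = np.clip(np.where(cross, mutant, pop[i]), 0.0, 1.0)
    # Selection (not shown): evaluate each trial and keep it only if it does
    # not perform worse than pop[i]; the survivors form the next population.
    return trials
\end{verbatim}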
\subsection{Multi-fidelity Optimizers}
\label{subsec:mf}
\noindent\textbf{\textit{HB}} (\textit{Hyperband}) extends the successive halving algorithm for the pure-exploration nonstochastic infinite-armed bandit problem. Hyperband makes a trade-off between the number of hyperparameter configurations and the budget allocated to each hyperparameter configuration. We set $\eta$ to 3, which means only a fraction $1/\eta$ of the hyperparameter configurations advances to the next round.
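For intuition, the following sketch enumerates the bracket schedule implied by Hyperband's published pseudocode for a given maximum per-configuration budget $R$ and $\eta=3$; it mirrors the allocation logic only and is not \textit{HPBandster}'s code.
\begin{verbatim}
import math

def hyperband_schedule(R, eta=3):
    """Return, for each bracket s, the successive-halving stages as
    (number of configurations, per-configuration budget) pairs."""
    s_max = int(math.log(R, eta) + 1e-9)
    B = (s_max + 1) * R  # total budget assigned to each bracket
    brackets = []
    for s in range(s_max, -1, -1):
        n = math.ceil(B / R * eta ** s / (s + 1))  # initial number of configs
        stages = [(n // eta ** i, R / eta ** (s - i)) for i in range(s + 1)]
        brackets.append((s, stages))
    return brackets

# Example with a maximum per-configuration budget of R = 27 units (e.g., FL rounds):
for s, stages in hyperband_schedule(27, eta=3):
    print(f"bracket s={s}:",
          ", ".join(f"{n} configs x {r:g} units" for n, r in stages))
\end{verbatim}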
\noindent\textbf{\textit{BOHB}} combines \textit{HB} with the guidance and convergence guarantees of Bayesian optimization with kernel density estimators. We set the hyperparameters of the \textit{BO} components and the \textit{HB} components of \textit{BOHB} to be the same as $\textit{BO}_{\textit{KDE}}$ and \textit{HB} described above, respectively.
\noindent\textbf{\textit{DEHB}} combines the advantages of the bandit-based method \textit{HB} and the evolutionary search approach of \textit{DE}. The hyperparameters of its \textit{DE} and \textit{HB} components are set to be exactly the same as those of \textit{DE} and \textit{HB} described above, respectively.
\noindent\textbf{$\textit{TPE}_{\textit{MD}}$} is implemented in \textit{Optuna} and uses the Tree-structured Parzen Estimator (\textit{TPE}) as its sampling algorithm: on each trial, TPE fits two Gaussian Mixture models per hyperparameter, one to the values associated with the best objective values and the other to the remaining values. In addition, it uses the median stopping rule as a pruner, which means a trial is pruned if its best intermediate result is worse than the median (\textit{MD}) of the intermediate results of previous trials at the same step. We use the default settings for both \textit{TPE} and \textit{MD}.
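A minimal sketch of this stopping rule (for a loss that is minimized) could look as follows; it only illustrates the rule and is not Optuna's pruner implementation.
\begin{verbatim}
from statistics import median

def median_should_prune(step, values_so_far, earlier_trials):
    """Prune if the current trial's best intermediate value so far is worse
    (larger, for losses) than the median of earlier trials' best-so-far
    values at the same step. earlier_trials: list of dicts step -> best-so-far."""
    peers = [t[step] for t in earlier_trials if step in t]
    if not peers:
        return False
    return min(values_so_far) > median(peers)
\end{verbatim}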
\noindent\textbf{$\textit{TPE}_{\textit{HB}}$} is similar to $\textit{TPE}_{\textit{MD}}$ described above, which uses \textit{TPE} as a sampling algorithm and \textit{HB} as pruner. We set the reduction factor to 3 for \textit{HB} pruner, and all other settings use the default ones.
\section{Datasets}
\label{sec:datasets}
As shown in Table~\ref{tab:hpo_datasets}, we provide a detailed description of the datasets we use in current \textsc{FedHPO-B}\xspace. Following FS~\cite{fs}, FS-G~\cite{fsg}, and HPOBench~\cite{hpobench}, we use 14 FL datasets from 4 domains, including CV, NLP, graph, and tabular.
Some of them are inherently real-world FL datasets, while others are simulated FL datasets split by the splitter modules of FS.
Notably, the names of the datasets from OpenML are the IDs of the corresponding tasks.
\begin{table}[htbp]
\centering
\caption{Statistics of the datasets used in current \textsc{FedHPO-B}\xspace.}
\label{tab:hpo_datasets}
\begin{tabular}{lccccc}
\toprule
Name & \#Client & Subsample & \#Instance & \#Class & Split by \\ \midrule
FEMNIST & \num{3550} & 5\% & \num{805263} & 62 & Writer \\
CIFAR-10 & 5 & 100\% & \num{60000} & 10 & LDA \\ \hline
CoLA & 5 & 100\% & \num{10657} & 2 & LDA \\
SST-2 & 5 & 100\% & \num{70042} & 2 & LDA \\ \hline
Cora & 5 & 100\% & \num{2708} & 7 & Community \\
CiteSeer & 5 & 100\% & \num{4230} & 6 & Community \\
PubMed & 5 & 100\% & \num{19717} & 5 & Community \\ \hline
$31_{OpenML}$ & 5 & 100\% & \num{1000} & 2 & LDA \\
$53_{OpenML}$ & 5 & 100\% & 846 & 4 & LDA \\
$3917_{OpenML}$ & 5 & 100\% & \num{2109} & 2 & LDA \\
$10101_{OpenML}$ & 5 & 100\% & 748 & 2 & LDA \\
$146818_{OpenML}$ & 5 & 100\% & 690 & 2 & LDA \\
$146821_{OpenML}$ & 5 & 100\% & \num{1728} & 4 & LDA \\
$146822_{OpenML}$ & 5 & 100\% & \num{2310} & 7 & LDA \\
\bottomrule
\end{tabular}
\end{table}
\noindent\textbf{FEMNIST} is an FL image dataset from LEAF~\cite{leaf}, whose task is image classification. Following \cite{leaf}, we use a subsample of FEMNIST with 200 clients, which is around 5\% of the full dataset. We use the default train/valid/test splits for each client, where the ratio is $60\%:20\%:20\%$.
\noindent\textbf{CIFAR-10}~\cite{cifar10} is from the Tiny Images dataset and consists of \num{60000} $32\times32$ color images, whose task is image classification. We split the images into 5 clients by latent Dirichlet allocation (LDA) to produce statistical heterogeneity among these clients. We split the raw training set into training and validation sets with a ratio of $4:1$, so that the ratio of the final train/valid/test splits is 66.67\%:16.67\%:16.67\%.
\noindent\textbf{SST-2} is a dataset from GLUE~\cite{glue} benchmark, whose task is binary sentiment classification for sentences. We also split these sentences into 5 clients by LDA. In addition, we use the official train/valid/test splits for SST-2.
\noindent\textbf{CoLA} is also a dataset from GLUE benchmark, whose task is binary classification for sentences---whether it is a grammatical English sentence. We exactly follow the experimental setup in SST-2.
\noindent\textbf{Cora \& CiteSeer \& PubMed}~{\cite{citation1,citation2}} are three widely adopted graph datasets, whose task is node classification. Following FS-G~\cite{fsg}, a community splitter is applied to each graph to generate five subgraphs, one for each client. We also split the nodes into train/valid/test sets, where the ratio is 60\%:20\%:20\%.
\noindent\textbf{Tabular datasets} consist of 7 tabular datasets from OpenML~\cite{openml2}, whose task IDs (names of the source data) are 31 (\textbf{credit-g}), 53 (\textbf{vehicle}), 3917 (\textbf{kc1}), 10101 (\textbf{blood-transfusion-service-center}), 146818 (\textbf{Australian}), 146821 (\textbf{car}), and 146822 (\textbf{segment}). We split each dataset into 5 clients by LDA. In addition, we set the ratio of train/valid/test splits to 80\%:10\%:10\%.
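For readers unfamiliar with LDA-based splitting, the following is a minimal sketch of how label-distribution skew can be produced across clients; the concentration parameter \texttt{alpha} is an assumption here, and the actual splitting in our experiments is done by FS's splitter modules.
\begin{verbatim}
import numpy as np

def lda_partition(labels, n_clients=5, alpha=0.5, seed=0):
    """Split sample indices across clients with label-distribution skew:
    for every class, per-client proportions are drawn from Dirichlet(alpha)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(n_clients)]
    for cls in np.unique(labels):
        idx = rng.permutation(np.where(labels == cls)[0])
        proportions = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices
\end{verbatim}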
\section{Proof of Proposition 1}
\label{sec:proof}
What we need to calculate is the expected maximum of i.i.d. exponential random variables.
Proposition~\ref{prop:1} states that, for $N$ exponential random variables independently drawn from $\text{Exp}(\cdot|\frac{1}{c})$, the expectation of their maximum is $\sum_{i=1}^{N}\frac{c}{i}$.
There are many ways to prove this useful proposition, and we provide a proof starting from studying the minimum of the exponential random variables.
\begin{proof}
According to the \href{https://en.wikipedia.org/wiki/Exponential_distribution#Distribution_of_the_minimum_of_exponential_random_variables}{Wikipedia page about exponential distribution}, the minimum of them obeys $\text{Exp}(\cdot|\frac{N}{c})$.
Denoting the $i$-th minimum of them by $T_i$, $T_1 \sim \text{Exp}(\cdot|\frac{N}{c})$ and $T_N$ is what we are interested in.
Meanwhile, it is well known that the exponential distribution is memoryless, namely, $\Pr(X>s+t|X>s)=\Pr(X>t)$.
Thus, $T_2 - T_1$ obeys the same distribution as the minimum of $N-1$ such random variables, that is to say, $T_2 - T_1$ is a random variable drawn from $\text{Exp}(\cdot|\frac{N-1}{c})$.
Similarly, $T_{i+1} - T_i \sim \text{Exp}(\cdot|\frac{N-i}{c})$ for $i=1,\ldots,N-1$. Thus, we have:
\begin{equation}
\mathbb{E}[T_N] = \mathbb{E}[T_1 + \sum_{i=1}^{N-1}(T_{i+1}-T_i)] =\frac{c}{N} + \sum_{i=1}^{N-1}\frac{c}{N-i} = \sum_{i=1}^{N}\frac{c}{i},
\end{equation}
which concludes this proof.
\end{proof}
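The closed form can also be checked numerically; the snippet below is a quick Monte Carlo sanity check (not part of the benchmark itself).
\begin{verbatim}
import numpy as np

def mc_expected_max(N, c, n_draws=200_000, seed=0):
    """Monte Carlo estimate of E[max of N i.i.d. Exp(rate = 1/c) variables]."""
    rng = np.random.default_rng(seed)
    samples = rng.exponential(scale=c, size=(n_draws, N))  # scale c <=> rate 1/c
    return samples.max(axis=1).mean()

N, c = 5, 2.0
closed_form = sum(c / i for i in range(1, N + 1))  # c * (1 + 1/2 + ... + 1/N)
print(closed_form, mc_expected_max(N, c))          # both approximately 4.57
\end{verbatim}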
\section{Details of the Study about New Fidelity}
\label{sec:new_fidelity}
We compare the performance of \textit{HB} with different \textit{sample\_rate}s to learn a 2-layer CNN with 2,048 hidden units on FEMNIST.
To simulate a system condition with bad network status, we set the upload bandwidths $B_{\text{up}}^{(\text{server})}$ and $B_{\text{up}}^{(\text{client})}$ to 0.25MB/second and the download bandwidth $B_{\text{down}}^{(\text{client})}$ to 0.75MB/second~\cite{fedoptimizationsurvey}. As for good network status, we set the upload bandwidths $B_{\text{up}}^{(\text{server})}$ and $B_{\text{up}}^{(\text{client})}$ to 0.25GB/second and the download bandwidth $B_{\text{down}}^{(\text{server})}$ to 0.75GB/second.
In these two cases, we fix the computation overhead such that it is negligible and significant relative to the communication cost, respectively.
As for the remaining settings, we largely follow those in Section~\ref{subsec:exp1}.
In both cases, as shown in Figure~\ref{fig:exp3}, the severity of the straggler issue rises as the \textit{sample\_rate} increases.
Under a limited time budget, the FL procedure with a lower \textit{sample\_rate} achieves a more competitive result under the bad network status, while that with a higher \textit{sample\_rate} achieves a more competitive result under the good network status.
In conclusion, this study suggests a best practice that we should carefully balance these two fidelity dimensions, \textit{\#round} and \textit{sample\_rate}, w.r.t. the system condition. Better choices tend to achieve more economical accuracy-efficiency trade-offs for FedHPO\xspace.
\iffalse
We designed the total budget (time) for different scenarios where the former's budget is tens of times larger than the latter. The logarithm of the loss over time budgets are reported to better visualize the performance difference.
\fi
\section{Details on \textsc{FedHPO-B}\xspace Benchmarks}
\label{sec:benchmarks}
\textsc{FedHPO-B}\xspace consists of five categories of benchmarks on the different datasets (see Appendix~\ref{sec:datasets}) with three modes.
In this part, we provide more details about how we construct the FedHPO\xspace problems provided by current \textsc{FedHPO-B}\xspace and the three modes to interact with them.
\begin{table}[htbp]
\centering
\caption{The search space of our benchmarks, where continuous search spaces are discretized into several bins under the tabular mode.}
\label{tab:hpo_space}
\begin{tabular}{ccccccc}
\toprule
Benchmark & & Name & Type & Log & \#Bins & Range \\ \midrule
\multirow{9}{*}{CNN} & \multirow{5}{*}{Client} & batch\_size & int & $\times$ & - & \{16, 32, 64\} \\
& & weight\_decay & float & $\times$ & 4 & {[}0, 0.001{]} \\
& & dropout & float & $\times$ & 2 & {[}0, 0.5{]} \\
& & step\_size & int & $\times$ & 4 & {[}1, 4{]} \\
& & learning\_rate & float & $\checkmark$ & 10 & {[}0.01, 1.0{]} \\ \cline{2-7}
& \multirow{2}{*}{Server} & momentum & float & $\times$ & 2 & {[}0.0, 0.9{]} \\
& & learning\_rate & float & $\times$ & 3 & {[}0.1, 1.0{]} \\ \cline{2-7}
& \multirow{2}{*}{Fidelity} & sample\_rate & float & $\times$ & 5 & {[}0.2, 1.0{]} \\
& & round & int & $\times$ & 250 & {[}1, 500{]} \\ \hline
\multirow{9}{*}{BERT} & \multirow{5}{*}{Client} & batch\_size & int & $\times$ & - & \{8, 16, 32, 64, 128\} \\
& & weight\_decay & float & $\times$ & 4 & {[}0, 0.001{]} \\
& & dropout & float & $\times$ & 2 & {[}0, 0.5{]} \\
& & step\_size & int & $\times$ & 4 & {[}1, 4{]} \\
& & learning\_rate & float & $\checkmark$& 10 & {[}0.01, 1.0{]} \\ \cline{2-7}
& \multirow{2}{*}{Server} & momentum & float & $\times$ & 2 & {[}0.0, 0.9{]} \\
& & learning\_rate & float & $\times$ & 3 & {[}0.1, 1.0{]} \\ \cline{2-7}
& \multirow{2}{*}{Fidelity} & sample\_rate & float & $\times$ & 5 & {[}0.2, 1.0{]} \\
& & round & int & $\times$ & 40 & {[}1, 40{]} \\ \hline
\multirow{8}{*}{GNN} & \multirow{4}{*}{Client} & weight\_decay & float & $\times$ & 4 & {[}0, 0.001{]} \\
& & dropout & float & $\times$ & 2 & {[}0, 0.5{]} \\
& & step\_size & int & $\times$ & 8 & {[}1, 8{]} \\
& & learning\_rate & float & $\checkmark$ & 10 & {[}0.01, 1.0{]} \\ \cline{2-7}
& \multirow{2}{*}{Server} & momentum & float & $\times$ & 2 & {[}0.0, 0.9{]} \\
& & learning\_rate & float & $\times$ & 3 & {[}0.1, 1.0{]} \\ \cline{2-7}
& \multirow{2}{*}{Fidelity} & sample\_rate & float & $\times$ & 5 & {[}0.2, 1.0{]} \\
& & round & int & $\times$ & 500 & {[}1, 500{]} \\ \hline
\multirow{8}{*}{LR} & \multirow{4}{*}{Client} & batch\_size & int & $\checkmark$ & 7 & {[}4, 256{]} \\
& & weight\_decay & float & $\times$ & 4 & {[}0, 0.001{]} \\
& & step\_size & int & $\times$ & 4 & {[}1, 4{]} \\
& & learning\_rate & float & $\checkmark$ & 6 & {[}0.00001, 1.0{]} \\ \cline{2-7}
& \multirow{2}{*}{Server} & momentum & float & $\times$ & 2 & {[}0.0, 0.9{]} \\
& & learning\_rate & float & $\times$ & 3 & {[}0.1, 1.0{]} \\ \cline{2-7}
& \multirow{2}{*}{Fidelity} & sample\_rate & float & $\times$ & 5 & {[}0.2, 1.0{]} \\
& & round & int & $\times$ & 500 & {[}1, 500{]} \\ \hline
\multirow{10}{*}{MLP} & \multirow{6}{*}{Client} & batch\_size & int & $\checkmark$ & 7 & {[}4, 256{]} \\
& & weight\_decay & float & $\times$ & 4 & {[}0, 0.001{]} \\
& & step\_size & int & $\times$ & 4 & {[}1, 4{]} \\
& & learning\_rate & float & $\checkmark$ & 6 & {[}0.00001, 1.0{]} \\
& & depth & int & $\times$ & 3 & {[}1, 3{]} \\
& & width & int & $\checkmark$ & 7 & {[}16, 1024{]} \\
\cline{2-7}
& \multirow{2}{*}{Server} & momentum & float & $\times$ & 2 & {[}0.0, 0.9{]} \\
& & learning\_rate & float & $\times$ & 3 & {[}0.1, 1.0{]} \\ \cline{2-7}
& \multirow{2}{*}{Fidelity} & sample\_rate & float & $\times$ & 5 & {[}0.2, 1.0{]} \\
& & round & int & $\times$ & 500 & {[}1, 500{]} \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Category}
\label{subsec:bench_category}
We categorize our benchmarks by model type. Each benchmark is designed to solve specific FedHPO\xspace problems in its data domain: the CNN benchmark on CV, the BERT benchmark on NLP, the GNN benchmark on graphs, and the LR \& MLP benchmarks on tabular data. All benchmarks have several hyperparameters in their configuration space and two fidelity dimensions, namely the client sample rate and the number of FL rounds. The benchmarks support several FL algorithms, such as FedAvg and FedOPT.
\noindent\textbf{CNN} benchmark learns a two-layer CNN with 2048 hidden units on FEMNIST and 128 hidden units on CIFAR-10. Its configuration space has five hyperparameters that tune the batch size of the dataloader, the weight decay, the learning rate, the dropout of the CNN model, and the number of local training steps (\textit{step\_size}) each client performs in every FL communication round. The tabular and surrogate modes of the CNN benchmark only support FedAvg due to our limited computing resources for now, but we will update \textsc{FedHPO-B}\xspace with more results as soon as possible.
\noindent\textbf{BERT} benchmark fine-tunes a pre-trained language model, BERT-Tiny, which has two layers and 128 hidden units, on CoLA and SST-2. The BERT benchmark has the same five hyperparameters in its configuration space as the CNN benchmark. It supports FedAvg and FedOPT in all three modes.
\noindent\textbf{GNN} benchmark learns a two-layer GCN with 64 hidden units on Cora, CiteSeer, and PubMed. Its configuration space has four hyperparameters that tune the weight decay, the learning rate, the dropout of the GNN model, and the \textit{step\_size} of local training in each FL communication round. The GNN benchmark supports FedAvg and FedOPT in all three modes.
\noindent\textbf{LR} benchmark learns a logistic regression model on seven tasks from OpenML (see Appendix~\ref{sec:datasets} for details). Its configuration space has four hyperparameters that tune the batch size of the dataloader, the weight decay, the learning rate, and the \textit{step\_size} of local training in each FL communication round. The LR benchmark supports FedAvg and FedOPT in all three modes.
\noindent\textbf{MLP} benchmark shares the vast majority of its settings with the LR benchmark, but additionally adds the depth and width of the MLP to the search space in terms of model architecture. The MLP benchmark also supports FedAvg and FedOPT in all three modes.
\subsection{Mode}
\label{subsec:bench_mode}
Following HPOBench~\cite{hpobench}, \textsc{FedHPO-B}\xspace provides three different modes for function evaluation: the tabular mode, the surrogate mode, and the raw mode. The valid input hyperparameter configurations and the speed of acquiring feedback vary from mode to mode. Users can choose the desired mode according to the purposes of their experiments.
\noindent\textbf{Tabular mode}.
The idea is to evaluate the performance of many different hyperparameter configurations in advance so that users can acquire their results immediately. To this end, we first derive a grid search space from our original search space (see Table~\ref{tab:hpo_space}). For hyperparameters whose original search space is discrete, we simply preserve it. As for continuous ones, we discretize them into several bins (see Table~\ref{tab:hpo_space} for details). Then we evaluate each configuration in the grid search space. To ensure that the results are reproducible, we execute the FL procedures in Docker containers. Evaluation of each specific configuration is repeated three times with different random seeds. We record the averaged loss, accuracy, and F1 score for each train/valid/test split. Users can choose the desired metric as the output of the black-box function via \textsc{FedHPO-B}\xspace's APIs.
We plan to augment these tables (i.e., with a denser grid) in the next version of \textsc{FedHPO-B}\xspace.
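To illustrate what a lookup against such a table amounts to, here is a minimal sketch using a hypothetical pandas table; the file name and column names are placeholders, and this is not \textsc{FedHPO-B}\xspace's actual API.
\begin{verbatim}
import pandas as pd

# Hypothetical table: one row per (configuration, fidelity, seed) with recorded metrics.
table = pd.read_csv("cnn_femnist_fedavg.csv")

def tabular_query(config, fidelity, metric="val_avg_loss"):
    """Return the chosen metric averaged over the repetition seeds for a grid point."""
    mask = pd.Series(True, index=table.index)
    for name, value in {**config, **fidelity}.items():
        mask &= table[name] == value
    rows = table[mask]
    if rows.empty:
        raise KeyError("off-grid configuration; use the surrogate or raw mode instead")
    return rows[metric].mean()

# e.g., tabular_query({"learning_rate": 0.1, "batch_size": 64},
#                     {"round": 250, "sample_rate": 0.4})
\end{verbatim}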
\noindent\textbf{Surrogate mode}.
As the valid input configurations of tabular mode are those on the grid, the tabular mode cannot be used for comparing the optimization of real-valued hyperparameters. Thus, we train a surrogate model on the lookup table of tabular mode, which enables us to evaluate arbitrary configuration by model inference. Specifically, we conduct 10-fold cross-validation to train and evaluate random forest models (implemented in scikit-learn~\cite{sklearn}) on the tabular data. Meanwhile, we search for suitable hyperparameters for the random forest models with the number of trees in \{10, 20\} and the max depth in \{10, 15, 20\}. The mean absolute error (MAE) of the surrogate model w.r.t. the true value is within an acceptable threshold. For example, in predicting the true average loss on the CNN benchmark, the surrogate model has a training error of 0.0061 and a testing error of 0.0077. In addition to the off-the-shelf surrogate models we provide, \textsc{FedHPO-B}\xspace offers tools for users to build brand-new surrogate models.
Meanwhile, we notice the recent successes of neural network-based surrogate, e.g., \href{https://github.com/slds-lmu/yahpo_gym}{YAHPO Gym}~\cite{yahpo}, and we will also try it in the next version of \textsc{FedHPO-B}\xspace.
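The following sketch shows how such a surrogate can be fitted with scikit-learn, using the small hyperparameter grid and 10-fold cross-validation described above; the numerical encoding of configurations into the feature matrix is an assumption here.
\begin{verbatim}
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

def fit_surrogate(X, y):
    """Fit a random-forest surrogate on tabular benchmark data.
    X: numerically encoded (configuration, fidelity) rows; y: recorded metric."""
    search = GridSearchCV(
        RandomForestRegressor(random_state=0),
        param_grid={"n_estimators": [10, 20], "max_depth": [10, 15, 20]},
        cv=10,
        scoring="neg_mean_absolute_error",  # we track MAE w.r.t. the true value
    )
    search.fit(X, y)
    return search.best_estimator_, -search.best_score_  # surrogate and its CV MAE
\end{verbatim}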
\iffalse
The idea is to evaluate the performance of continuous hyperparameter beyond the grid configurations by model inference. We build the surrogate mode of the benchmarks from the tabular mode. As we envisioned, the input to surrogate mode can be continuous hyperparameters.
We employ ten-fold cross-validation of random forests as the surrogate model, which is trained on the tabular data provided by the benchmarks under tabular mode.
Also, in order to find the optimal random forest (implemented in scikit-learn~\cite{sklearn}) for different benchmarks, we search the architecture with the number of trees in \{10, 20\} and the max depth in \{10, 15, 20\}.
The MAE of the surrogate model with respect to the true value is within the acceptable error. For example, on the CNN benchmark, the surrogate model has a training error of 0.0061 and a testing error of 0.0077 on the true average loss.
In addition, we provide not only the pickled surrogate models for a quick start, but also users can build brand new surrogate models with the tools \textsc{FedHPO-B}\xspace provide.
\fi
\noindent\textbf{Raw mode}. Although both of the above modes respond quickly, they are limited to pre-designed search spaces.
Thus, we introduce raw mode to \textsc{FedHPO-B}\xspace, where user-defined search spaces are allowed.
Once \textsc{FedHPO-B}\xspace's APIs are called with specific hyperparameters, a containerized and standalone FL procedure (supported by FS) will be launched. It is worth noting that although we use standalone simulation to eliminate the communication cost, raw mode still consumes much more computation cost than tabular and surrogate modes.
\section{More Results}
\label{sec:more_results}
In this section, we show the detailed experimental results of the optimizers on \textsc{FedHPO-B}\xspace benchmarks under different modes.
We first report the averaged best-seen validation loss, from which the mean rank over time for all optimizers can be deduced.
Due to time and computing resource constraints, we do not have complete experimental results for the raw mode, which we will supplement as soon as possible.
\subsection{Tabular mode}
Following Section~\ref{subsec:exp1}, we show the overall mean rank over time on all FedHPO\xspace problems with FedOPT in Figure~\ref{fig:entire_all_tabular_opt_rank}, whose pattern is similar to that of FedAvg in Figure~\ref{fig:entire_all_tabular_avg_rank}.
Then, we report the final results with FedAvg and FedOPT in Tables~\ref{tab:final_results_tab_avg} and~\ref{tab:final_results_tab_opt}, respectively.
Finally, we report the mean rank over time in Figures~\ref{fig:entire_cnn_tabular_avg_rank}-\ref{fig:entire_MLP_tabular_opt_rank}.
Due to time and computing resource constraints, the results on the CNN benchmark are incomplete (lacking those with FedOPT), which we will supplement as soon as possible.
\begin{figure}
\centering
% Panels: BBO optimizers, MF optimizers, and all optimizers.
\caption{Mean rank over time on all FedHPO\xspace problems (with FedOPT).}
\label{fig:entire_all_tabular_opt_rank}
\end{figure}
\begin{table}[htbp]
\centering
\caption{Final results of the optimizers on tabular mode with FedAvg (lower is better).}
\label{tab:final_results_tab_avg}
\resizebox{\textwidth}{!}{
\begin{tabular}{lllllllllll}
\toprule
benchmark & \textit{RS} & $\textit{BO}_{\textit{GP}}$ & $\textit{BO}_{\textit{RF}}$ & $\textit{BO}_{\textit{KDE}}$ & \textit{DE} & \textit{HB} & \textit{BOHB} & \textit{DEHB} & $\textit{TPE}_{\textit{MD}}$ & $\textit{TPE}_{\textit{HB}}$ \\ \midrule
$\text{CNN}_{\text{FEMNIST}}$ & 0.4969 & 0.4879 & 0.4885 & 0.5004 & 0.4928 & 0.4926 & 0.4945 & 0.498 & 0.5163 & 0.5148 \\ \hline
$\text{BERT}_{\text{SST-2}}$ & 0.435 & 0.4276 & 0.4294 & 0.4334 & 0.437 & 0.4311 & 0.4504 & 0.4319 & 0.4341 & 0.4251 \\
$\text{BERT}_{\text{CoLA}}$ & 0.6151 & 0.6148 & 0.6141 & 0.6133 & 0.6143 & 0.6143 & 0.6168 & 0.6178 & 0.6158 & 0.6146 \\ \hline
$\text{GNN}_{\text{Cora}}$ & 0.3265 & 0.3258 & 0.326 & 0.3347 & 0.3267 & 0.3324 & 0.3288 & 0.3225 & 0.3241 & 0.3249 \\
$\text{GNN}_{\text{CiteSeer}}$ & 0.6469 & 0.6442 & 0.6499 & 0.6442 & 0.6453 & 0.6387 & 0.6425 & 0.6452 & 0.6324 & 0.6371 \\
$\text{GNN}_{\text{PubMed}}$ & 0.5262 & 0.5146 & 0.5169 & 0.5311 & 0.5001 & 0.5006 & 0.5194 & 0.4934 & 0.506 & 0.5044 \\ \hline
$\text{LR}_{31}$ & 0.6821 & 0.6308 & 0.6382 & 0.6385 & 0.667 & 0.6492 & 0.6461 & 0.6145 & 0.7228 & 0.758 \\
$\text{LR}_{53}$ & 1.6297 & 1.7288 & 1.6116 & 1.7142 & 1.6062 & 1.5765 & 1.5634 & 1.4755 & 1.5506 & 1.5506 \\
$\text{LR}_{3917}$ & 1.8892 & 1.7561 & 1.7186 & 2.4271 & 1.7519 & 3.948 & 1.6384 & 3.1183 & 2.1344 & 2.6576 \\
$\text{LR}_{10101}$ & 0.548 & 0.5483 & 0.5482 & 0.5487 & 0.5481 & 0.5504 & 0.5505 & 0.5516 & 0.5483 & 0.5487 \\
$\text{LR}_{146818}$ & 0.5294 & 0.5291 & 0.5295 & 0.5289 & 0.5291 & 0.5292 & 0.529 & 0.5293 & 0.5328 & 0.5387 \\
$\text{LR}_{146821}$ & 0.4733 & 0.464 & 0.4722 & 0.4843 & 0.4971 & 0.4678 & 0.4747 & 0.4707 & 0.4792 & 0.4688 \\
$\text{LR}_{146822}$ & 0.4581 & 0.4481 & 0.4505 & 0.4731 & 0.4587 & 0.4478 & 0.4446 & 0.4304 & 0.4376 & 0.4419 \\ \hline
$\text{MLP}_{31}$ & 0.5899 & 0.5891 & 0.5808 & 0.5904 & 0.5925 & 0.5921 & 0.5929 & 0.593 & 0.593 & 0.593 \\
$\text{MLP}_{53}$ & 0.7795 & 0.7373 & 0.7849 & 0.8215 & 0.8068 & 0.769 & 0.7577 & 0.8173 & 0.9491 & 1.0567 \\
$\text{MLP}_{3917}$ & 0.3863 & 0.3937 & 0.3858 & 0.3958 & 0.383 & 0.3895 & 0.3911 & 0.4084 & 0.3979 & 0.3988 \\
$\text{MLP}_{10101}$ & 0.4054 & 0.4217 & 0.4361 & 0.4162 & 0.418 & 0.4137 & 0.4152 & 0.4102 & 0.4522 & 0.4352 \\
$\text{MLP}_{146818}$ & 0.5089 & 0.4997 & 0.5125 & 0.5112 & 0.5138 & 0.5009 & 0.5199 & 0.5039 & 0.5392 & 0.54 \\
$\text{MLP}_{146821}$ & 0.184 & 0.1251 & 0.155 & 0.1769 & 0.1851 & 0.1561 & 0.1683 & 0.1572 & 0.1654 & 0.1761 \\
$\text{MLP}_{146822}$ & 0.2839 & 0.2892 & 0.317 & 0.3586 & 0.2928 & 0.2927 & 0.2823 & 0.2549 & 0.2745 & 0.2755 \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}[htbp]
\centering
\caption{Final results of the optimizers on tabular mode with FedOPT (lower is better).}
\label{tab:final_results_tab_opt}
\resizebox{\textwidth}{!}{
\begin{tabular}{lllllllllll}
\toprule
benchmark & \textit{RS} & $\textit{BO}_{\textit{GP}}$ & $\textit{BO}_{\textit{RF}}$ & $\textit{BO}_{\textit{KDE}}$ & \textit{DE} & \textit{HB} & \textit{BOHB} & \textit{DEHB} & $\textit{TPE}_{\textit{MD}}$ & $\textit{TPE}_{\textit{HB}}$ \\ \midrule
$\text{BERT}_{\text{SST-2}}$ & 0.441 & 0.4325 & 0.4301 & 0.4463 & 0.4351 & 0.4403 & 0.4295 & 0.4285 & 0.4293 & 0.4332 \\
$\text{BERT}_{\text{CoLA}}$ & 0.616 & 0.616 & 0.6141 & 0.6137 & 0.6159 & 0.6154 & 0.6157 & 0.6176 & 0.6172 & 0.6168 \\ \hline
$\text{GNN}_{\text{Cora}}$ & 0.3264 & 0.3235 & 0.3268 & 0.3322 & 0.3256 & 0.3245 & 0.3347 & 0.3254 & 0.3405 & 0.3361 \\
$\text{GNN}_{\text{CiteSeer}}$ & 0.6483 & 0.6517 & 0.6497 & 0.6535 & 0.6458 & 0.6442 & 0.6543 & 0.6463 & 0.6488 & 0.6495 \\
$\text{GNN}_{\text{PubMed}}$ & 0.4777 & 0.4426 & 0.4718 & 0.4943 & 0.4318 & 0.4559 & 0.4699 & 0.4318 & 0.4368 & 0.4402 \\ \hline
$\text{LR}_{31}$ & 0.7358 & 0.6831 & 0.6849 & 0.8152 & 0.7085 & 0.6772 & 0.6877 & 0.6385 & 0.8652 & 0.7044 \\
$\text{LR}_{53}$ & 1.7838 & 1.5609 & 1.5241 & 1.5116 & 1.6208 & 1.6045 & 1.7236 & 1.3488 & 1.6654 & 1.7978 \\
$\text{LR}_{3917}$ & 2.254 & 2.0316 & 2.3952 & 1.9788 & 2.6261 & 2.3472 & 2.5452 & 2.3144 & 3.2131 & 2.0291 \\
$\text{LR}_{10101}$ & 0.5533 & 0.55 & 0.5505 & 0.5509 & 0.549 & 0.5504 & 0.5476 & 0.5522 & 0.5612 & 0.8567 \\
$\text{LR}_{146818}$ & 0.511 & 0.506 & 0.5034 & 0.5133 & 0.5007 & 0.5032 & 0.5086 & 0.4974 & 0.4983 & 0.5104 \\
$\text{LR}_{146821}$ & 0.4017 & 0.3599 & 0.4121 & 0.4134 & 0.4079 & 0.395 & 0.398 & 0.3902 & 0.4447 & 0.4625 \\
$\text{LR}_{146822}$ & 0.3972 & 0.4211 & 0.4037 & 0.4442 & 0.4075 & 0.4131 & 0.4008 & 0.3916 & 0.3878 & 0.3871 \\ \hline
$\text{MLP}_{31}$ & 0.5912 & 0.5914 & 0.5912 & 0.5912 & 0.5918 & 0.5923 & 0.5921 & 0.5911 & 0.5921 & 0.5921 \\
$\text{MLP}_{53}$ & 0.9096 & 0.8166 & 0.8111 & 0.8872 & 0.8546 & 1.0163 & 0.8565 & 0.9849 & 1.1276 & 1.0952 \\
$\text{MLP}_{3917}$ & 0.3798 & 0.3937 & 0.3862 & 0.3871 & 0.3867 & 0.4109 & 0.4262 & 0.3812 & 0.4003 & 0.4003 \\
$\text{MLP}_{10101}$ & 0.4219 & 0.4141 & 0.4197 & 0.4111 & 0.4303 & 0.4145 & 0.4256 & 0.4215 & 0.4502 & 0.4502 \\
$\text{MLP}_{146818}$ & 0.4943 & 0.4913 & 0.5022 & 0.5023 & 0.4884 & 0.4995 & 0.5046 & 0.4921 & 0.4978 & 0.4861 \\
$\text{MLP}_{146821}$ & 0.1169 & 0.0836 & 0.0915 & 0.1674 & 0.1079 & 0.0891 & 0.1389 & 0.0838 & 0.1051 & 0.1194 \\
$\text{MLP}_{146822}$ & 0.2963 & 0.2914 & 0.2705 & 0.3025 & 0.2779 & 0.2759 & 0.2621 & 0.2549 & 0.257 & 0.2518 \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{figure}
\centering
% Panels: all, BBO, and MF optimizers on FEMNIST.
\caption{Mean rank over time on CNN benchmark (FedAvg).}
\label{fig:entire_cnn_tabular_avg_rank}
\end{figure}
\begin{figure}
\centering
% Panels: all, BBO, and MF optimizers, overall and on CoLA and SST-2.
\caption{Mean rank over time on BERT benchmark (FedAvg).}
\label{fig:entire_bert_tabular_avg_rank}
\end{figure}
\begin{figure}
\centering
% Panels: all, BBO, and MF optimizers, overall and on CoLA and SST-2.
\caption{Mean rank over time on BERT benchmark (FedOPT).}
\label{fig:entire_bert_tabular_opt_rank}
\end{figure}
\begin{figure}
\centering
% Panels: all, BBO, and MF optimizers, overall and on Cora, CiteSeer, and PubMed.
\caption{Mean rank over time on GNN benchmark (FedAvg).}
\label{fig:entire_gcn_tabular_avg_rank}
\end{figure}
\begin{figure}
\centering
% Panels: all, BBO, and MF optimizers, overall and on Cora, CiteSeer, and PubMed.
\caption{Mean rank over time on GNN benchmark (FedOPT).}
\label{fig:entire_gcn_tabular_opt_rank}
\end{figure}
\begin{figure}
\centering
% Panels: all, BBO, and MF optimizers, overall and on each OpenML task.
\caption{Mean rank over time on LR benchmark (FedAvg).}
\label{fig:entire_lr_tabular_avg_rank}
\end{figure}
\begin{figure}
\centering
% Panels: all, BBO, and MF optimizers, overall and on each OpenML task.
\caption{Mean rank over time on LR benchmark (FedOPT).}
\label{fig:entire_lr_tabular_opt_rank}
\end{figure}
\begin{figure}
\centering
% Panels: all, BBO, and MF optimizers, overall and on each OpenML task.
\caption{Mean rank over time on MLP benchmark (FedAvg).}
\label{fig:entire_MLP_tabular_avg_rank}
\end{figure}
\begin{figure}
\centering
% Panels: all, BBO, and MF optimizers, overall and on each OpenML task.
\caption{Mean rank over time on MLP benchmark (FedOPT).}
\label{fig:entire_MLP_tabular_opt_rank}
\end{figure}
\subsection{Surrogate mode}
We report the final results with FedAvg on the FEMNIST and BERT benchmarks in Table~\ref{tab:final_results_sur_avg}. Then we present the mean rank over time of the optimizers in Figure~\ref{fig:entire_cnn_surrogate_avg_rank} and Figure~\ref{fig:entire_bert_surrogate_avg_rank}.
\begin{figure}
\centering
% Panels: all, BBO, and MF optimizers on FEMNIST.
\caption{Mean rank over time on CNN benchmark under surrogate mode (FedAvg).}
\label{fig:entire_cnn_surrogate_avg_rank}
\end{figure}
\begin{figure}
\centering
% Panels: all, BBO, and MF optimizers, overall and on CoLA and SST-2.
\caption{Mean rank over time on BERT benchmark under surrogate mode (FedAvg).}
\label{fig:entire_bert_surrogate_avg_rank}
\end{figure}
\begin{table}[htbp]
\centering
\caption{Final results of the optimizers in surrogate mode (lower is better).}
\label{tab:final_results_sur_avg}
\resizebox{\textwidth}{!}{
\begin{tabular}{lllllllllll}
\toprule
benchmark & \textit{RS} & $\textit{BO}_{\textit{GP}}$ & $\textit{BO}_{\textit{RF}}$ & $\textit{BO}_{\textit{KDE}}$ & \textit{DE} & \textit{HB} & \textit{BOHB} & \textit{DEHB} & $\textit{TPE}_{\textit{MD}}$ & $\textit{TPE}_{\textit{HB}}$ \\ \midrule
$\text{CNN}_{\text{FEMNIST}}$ & 0.0508 & 0.0478 & 0.0514 & 0.0492 & 0.0503 & 0.0478 & 0.048 & 0.0469 & 0.0471 & 0.0458 \\ \hline
$\text{BERT}_{\text{SST-2}}$ & 0.4909 & 0.4908 & 0.4908 & 0.4908 & 0.4908 & 0.4908 & 0.4908 & 0.4917 & 0.4908 & 0.4908 \\
$\text{BERT}_{\text{CoLA}}$ & 0.5013 & 0.4371 & 0.4113 & 0.487 & 0.444 & 0.4621 & 0.4232 & 0.4204 & 0.3687 & 0.3955 \\
\bottomrule
\end{tabular}
}
\end{table}
\iffalse
\subsection{Raw Mode}
Rank
\begin{table}[htbp]
\centering
\caption{Final results of each optimizers on raw mode(lower is better).}
\label{tab:final_results_raw_avg}
\resizebox{\textwidth}{!}{
\begin{tabular}{lllllllllll}
\toprule
benchmark & \textit{RS} & $\textit{BO}_{\textit{GP}}$ & $\textit{BO}_{\textit{RF}}$ & $\textit{BO}_{\textit{KDE}}$ & \textit{DE} & \textit{HB} & \textit{BOHB} & \textit{DEHB} & $\textit{TPE}_{\textit{MD}}$ & $\textit{TPE}_{\textit{HB}}$ \\ \hline
$\text{GNN}_{\text{Cora}}$ & & & & & & & & & & \\
$\text{GNN}_{\text{CiteSeer}}$ & & & & & & & & & & \\
$\text{GNN}_{\text{PubMed}}$ & & & & & & & & & & \\
\bottomrule
\end{tabular}
}
\end{table}
\fi
\end{document}
\begin{document}
\title{Strong Completeness of Coalgebraic Modal Logics}
\author[DFKIUHB]{L. Schr{\"o}der}{Lutz
Schr{\"o}der}
\address[DFKIUHB]{DFKI Bremen and Department of Computer Science,
Universit\"at Bremen}
\email{[email protected]}
\author[IC]{D. Pattinson}{Dirk Pattinson}
\address[IC]{Department of Computing,
Imperial College London}
\email{[email protected]}
\thanks{Work of the first author performed as part of the DFG project
\emph{Generic Algorithms and Complexity Bounds in Coalgebraic Modal
Logic} (SCHR 1118/5-1). Work of the second author partially
supported by EPSRC grant EP/F031173/1}
\keywords{Logic in computer science, semantics, deduction, modal
logic, coalgebra}
\subjclass{F.4.1 [Mathematical Logic and Formal Languages]:
Mathematical Logic --- modal logic; I.2.4 [Artificial Intelligence]:
Knowledge Representation Formalisms and Methods --- modal logic,
representation languages}
\begin{abstract}
Canonical models are of central importance in modal logic, in
particular as they witness strong completeness and hence
compactness. While the canonical model construction is well
understood for Kripke semantics, non-normal modal logics often
present subtle difficulties -- up to the point that canonical models
may fail to exist, as is the case e.g.\ in most probabilistic
logics. Here, we present a generic canonical model construction in
the semantic framework of coalgebraic modal logic, which pinpoints
coherence conditions between syntax and semantics of modal logics
that guarantee strong completeness.
We apply this method to reconstruct canonical
model theorems that are either known or folklore, and moreover instantiate
our method to obtain new strong
completeness results. In particular, we prove strong completeness of
graded modal logic with finite multiplicities, and of the modal
logic of exact probabilities.
\end{abstract}
\maketitle
\noindent In modal logic, completeness proofs come in two flavours:
\emph{weak} completeness, i.e.\ derivability of all universally
valid formulas,
is often proved using \emph{finite model} constructions, and
\emph{strong} completeness, which additionally allows for a possibly
infinite set of assumptions. The latter entails recursive
enumerability of the set of consequences of a recursively enumerable
set of assumptions, and is usually established using (infinite)
\emph{canonical models}. The appeal of the first method is that it
typically entails decidability. The second method yields a stronger
result and has some advantages of its own. First, it applies in some
cases where finite models fail to exist, which often means that the
logic at hand is undecidable. In such cases, a completeness proof via
canonical models will at least salvage recursive
enumerability. Second, it allows for schematic axiomatisations, e.g.\
pertaining to the infinite evolution of a system or to observational
equivalence, i.e.\ statements to the effect that certain states cannot
be distinguished by any formula.
In the realm of Kripke semantics, canonical models exist for a large
variety of logics and are well understood, see e.g.\
\cite{BlackburnEA01}. But there is more to modal logic than Kripke
semantics, and indeed the natural semantic structures used to
interpret a large
class of modal logics go beyond pure relations. This includes e.g.\
the selection function semantics of conditional logics
\cite{Chellas80}, the semantics of probabilistic logics in terms of
probability distributions, and the game frame semantics of coalition
logic~\cite{Pauly02}.
To date, there is very little research that provides systematic
criteria, or at least a methodology, for establishing strong
completeness for logics not amenable to Kripke semantics. This is made worse
as the question of strong completeness crucially depends on the chosen
semantic domain, which as illustrated above may differ widely. It is
precisely this variety in semantics that makes it hard to employ the
strong-completeness-via-canonicity approach, as in many cases there is
no readily available notion of canonical model. The present work
improves on this situation by providing a widely applicable generic
canonical model construction. More precisely, we establish the
existence of quasi-canonical models, that is, models based on the
set of maximally consistent sets of formulas that satisfy the truth
lemma, as there may be no unique, or canonical, such model in our
more general case.
In order to cover the large span of semantic structures, we avoid a
commitment to a particular class of models, and instead work within
the framework of coalgebraic modal logic~\cite{Pattinson03} which
precisely provides us with a semantic umbrella for all of the examples
above. This is achieved by using coalgebras for an endofunctor $T$ as
the semantic domain for modal languages. As we illustrate in examples,
the semantics of particular logics is then obtained by particular
choices of~$T$. Coalgebraic modal logic serves in particular as a
general semantic framework for non-normal modal
logics. As such, it improves on neighbourhood semantics in that it
retains the full semantic structure of the original models
(neighbourhood semantics offers only very little actual semantic
structure, and in fact may be regarded as constructed from syntactic
material~\cite{SchroderPattinson07mcs}).
In this setting, our criterion can be formulated as a set of coherence
conditions that relate the syntactic component of a logic to its
coalgebraic semantics, together with a purely semantic condition
stating that the endofunctor~$T$ that defines the semantics needs to
preserve inverse limits weakly, and thus allows for a passage from the
finite to the infinite. We are initially concerned with the existence
of quasi-canonical models relative to the class of \emph{all}
$T$-coalgebras, that is, with logics that are axiomatisable by
formulas of modal depth uniformly equal to one \cite{Schroder07}. As
in the classical theory, the corresponding result for logics with
extra frame conditions requires that the logic is canonical, i.e. the
frame that underlies a quasi-canonical model satisfies the frame
conditions, which holds in most cases, but for the time being needs to
be established individually for each logic.
Our new criterion is then used to obtain both previously known and
novel strong completeness results. In addition to positive results, we
dissect a number of logics for which strong completeness fails and
show which assumption of our criterion is violated. In particular,
this provides a handle on adjusting either the syntax or the semantics
of the logic at hand to achieve strong completeness.
For example, we demonstrate that the failure of strong
completeness for probabilistic modal logic (witnessed e.g.\ by the set
of formulas assigning probability $\mge 1-1/n$ to an event for all $n$
but excluding probability $1$) disappears in the logic of exact
probabilities. Moreover, we show that graded modal logic, and more
generally any description logic~\cite{BaaderEA03} with qualified
number restrictions, role hierarchies, and reflexive, transitive, and
symmetric roles, is strongly complete over the multigraph model of
\cite{DAgostinoVisser02}, which admits infinite multiplicities. While
strong completeness fails for the naive restriction of this model to
multigraphs allowing only finite multiplicities, we show how to
salvage strong completeness using additive
\mbox{(finite-)integer}-valued measures. Finally, we prove strong
completeness of several conditional logics w.r.t.\ conditional frames
(also known as selection function models); for at least one of these
logics, strong completeness was previously unknown.
\section{Preliminaries and Notation}
\noindent Our treatment of strong completeness is parametric in both
the syntax and the semantics of a wide range of modal logics. On the
syntactic side, we fix a \emph{modal similarity type} $\Lambda$
consisting of modal operators with associated arities. Given a
similarity type $\Lambda$ and a countable set $P$ of
atomic propositions, the set $\mathcal{F}(\Lambda)$ of
\emph{$\Lambda$-formulas} is inductively defined by the grammar
\begin{equation*}
\FLang(\Lambda) \ni \phi, \psi ::= p \mid \bot \mid \neg\phi\mid\phi \wedge \psi \mid
L(\phi_1, \dots, \phi_n)
\end{equation*}
where $p \in P$ and $L \in \Lambda$ is $n$-ary; further boolean
operators ($\vee$, $\to$, $\leftrightarrow$, $\top$) are defined as
usual. Given any set $X$ (e.g.\ of formulas, atomic propositions, or
sets (!)), we write $\mathsf{Prop}(X)$ for the set of propositional formulas
over $X$ and $\Lambda(X) = \lbrace L(x_1, \dots, x_n) \mid L \in
\Lambda \mbox{ is $n$-ary}, x_1, \dots, x_n \in X \rbrace$ for the set
of formulas arising by applying exactly one operator to elements of
$X$. We instantiate our results to a variety of settings later with
the following similarity types:
\begin{exas} \label{expl:sim-types}
\begin{sparenumerate}
\item The similarity type $\Lambda_K$ of standard modal logic consists of a
single unary operator $\Box$.
\item Conditional logic \cite{Chellas80} is defined over the
similarity type $\Lambda_\mathrm{CL} = \lbrace \Rightarrow \rbrace$ where the binary
operator $\Rightarrow$ is read as a non-monotonic conditional (default,
relevant etc.), usually written in infix notation.
\item Graded modal operators~\cite{Fine72} appear in expressive
description logics~\cite{BaaderEA03} in the guise of so-called
qualified number restrictions; although we discuss only modal
aspects, we use mostly description logic notation and terminology
below. The operators of graded modal logic (GML) are $\Lambda_{\mathrm{GML}}
= \lbrace (\mge k) \mid k \in {\mathbb{N}} \rbrace$ with $(\mge k)$ unary.
We write $\mge k.\, \phi$ instead of $(\mge k) \phi$. A formula
$\mge k.\, \phi$ is read as `at least $k$ successor states satisfy
$\phi$', and we abbreviate $\Box \phi = \neg \mge 1. \neg \phi$.
\item The similarity type $\Lambda_\mathrm{PML}$ of probabilistic modal logic
(PML)~\cite{LarsenSkou91} contains the unary modal operators $L_p$
for $p \in \mathbb{Q} \cap [0, 1]$, read as `with probability at
least $p$, \dots'.
\end{sparenumerate}
\end{exas}
\noindent We split axiomatisations of modal logics into two parts: the
first group of axioms is responsible for axiomatising the logic
w.r.t. the class of \emph{all} (coalgebraic) models, whereas the
second consists of frame conditions that impose additional conditions
on models. As the class of all coalgebraic models, introduced below,
can always be axiomatised by formulas of \emph{rank $1$}, i.e.\
containing exactly one level of modal operators~\cite{Schroder07} (and
conversely, every collection of such axioms admits a complete
coalgebraic semantics~\cite{SchroderPattinson07mcs}), we restrict the
axioms in the first group accordingly. More formally:
\begin{defi}
A \emph{(modal) logic} is a triple $\mathcal{L}=(\Lambda, \mathcal{A}, \Theta)$
where $\Lambda$ is a similarity type, $\mathcal{A} \subseteq
\mathsf{Prop}(\Lambda(\mathsf{Prop}(P)))$ is a set of \emph{rank-1 axioms},
and $\Theta \subseteq \FLang(\Lambda)$ is a set of \emph{frame
conditions}. We say that $\mathcal{L}$ is a \emph{rank-1 logic} if
$\Theta=\emptyset$. If $\phi \in \FLang(\Lambda)$, we write
$\vdash_{\mathcal{L}} \phi$ if $\phi$ can be derived from $\mathcal{A} \cup
\Theta$ with the help of propositional reasoning, uniform
substitution, and the congruence rule: from $\phi_1 \leftrightarrow \psi_1,
\dots, \phi_n \leftrightarrow \psi_n$ infer $L(\phi_1, \dots, \phi_n)
\leftrightarrow L(\psi_1, \dots, \psi_n)$ whenever $L \in \Lambda$ is
$n$-ary. For a set $\Phi \subseteq \FLang(\Lambda)$ of assumptions,
we write $\Phi \vdash_{\mathcal{L}} \phi$ if $\vdash_{\mathcal{L}} \phi_1
\land \dots \land \phi_n \to \phi$ for (finitely many) $\phi_1,
\dots, \phi_n \in \Phi$. A set $\Phi$ is \emph{$\mathcal{L}$-inconsistent}
if $\Phi\vdash_\mathcal{L}\bot$, and otherwise
\emph{$\mathcal{L}$-consistent}.
\end{defi}
\begin{exas}\label{expl:axioms}
\begin{sparenumerate}
\item \label{item:ax-kripke} The modal logic $K$ comes about as the
  rank-1 logic $(\Lambda_K, \mathcal{A}_K, \emptyset)$ where $\mathcal{A}_K = \lbrace
\Box\top,\Box(p \to q) \to (\Box p \to \Box q) \rbrace$. The logics
$K4, S4, KB, \dots$ arise as $(\Lambda_K, \mathcal{A}_K, \Theta)$ where
$\Theta$ contains the additional axioms that define the respective
logic~\cite{BlackburnEA01}, e.g.\ $\Theta=\{\Box p\to\Box\Box p\}$
in the case of $K4$.
\item\label{item:ax-cond} For conditional logic, we take the
similarity type $\Lambda_\mathrm{CL}$ together with rank-1 axioms $r\Rightarrow
\top$, $r \Rightarrow (p \to q) \to ((r \Rightarrow p) \to (r \Rightarrow q))$ stating that
the binary conditional is normal in its second argument. Typical
additional rank-1 axioms are
\begin{equation*}
\begin{axarraycomment}
\textrm{(ID)} & $a\Rightarrow a$ &\emph{(identity)}\\
\textrm{(DIS)} & $(a\Rightarrow c) \wedge (b\Rightarrow c)\to ((a\vee b)\Rightarrow c)$&
\emph{(disjunction)}\\
\textrm{(CM)} & $(a\Rightarrow c) \wedge (a\Rightarrow b)\to ((a\wedge b)\Rightarrow c)$&
\emph{(cautious monotony)}\\
\end{axarraycomment}
\end{equation*}
which together form the so-called \emph{System C}, a modal version of the
well-known KLM (Kraus/Lehmann/Magidor) axioms of default reasoning
due to Burgess~\cite{Burgess81}.
\item\label{item:ax-gml} The axiomatisation of GML
given in~\cite{Fine72} consists of the rank-1 axioms
\begin{itemize}
\item[] $\Box (p \to q) \to (\Box p \to \Box q)$
\item[] $\mge k.\, p \to \mge l.\,p$ for $l < k$
\item[] $\mge k.\, p \leftrightarrow \bigvee_{i=0, \dots, k} \mge i.\, (p \land q)
    \land \mge(k-i).\, (p \land \neg q)$
\item[] $\Box (p \to q) \to (\mge k.\,p \to \mge k.\,q)$
\end{itemize}
Frame conditions of interest include e.g. reflexivity ($p\to\mge
1.\,p$), symmetry ($p\to\Box \,\mge 1. \,p$), and transitivity
($\mge 1.\,\mge n.\,p\to\mge n.\,p$).
\end{sparenumerate}
\end{exas}
\noindent To keep our results parametric also in the semantics of
modal logic, we work in the framework of \emph{coalgebraic modal
logic} in order to achieve a uniform and coherent presentation. In
this framework, the particular shape of models is encapsulated by an
endofunctor $T: \Cat{Set} \to \Cat{Set}$, the \emph{signature functor} (recall
that such a functor maps every set $X$ to a set $TX$, and every map
$f:X\to Y$ to a map $Tf:TX\to TY$ in such a way that composition and
identities are preserved), which may be thought of as a parametrised
data type. We fix the data $\Lambda$, $\mathcal{L}$, $T$ etc.\ throughout
the generic part of the development. The role of models is then played
by $T$-coalgebras:
\begin{defi}
A \emph{$T$-coalgebra} is a pair $\mathbb{C} = (C, \gamma)$ where $C$ is a set
(the \emph{state space} of $\mathbb{C}$)
and $\gamma: C \to T C$ is a
function, the transition structure of $\mathbb{C}$.
\end{defi}
\noindent We think of $TC$ as a type of successors,
polymorphic in $C$. The
transition structure $\gamma$
associates a structured collection of successors
$\gamma(c)$ to each state $c\in C$.
\noindent The following choices of signature functors give rise to the
semantics of the modal logics discussed in Expl.
\ref{expl:axioms}.
\begin{exas}\label{expl:coalgml}
\begin{sparenumerate}
\item \label{item:Kripke} Coalgebras for the covariant powerset
functor $\mathcal{P}$ defined on sets $X$ by $\mathcal{P}(X) = \lbrace A \mid A
\subseteq X \rbrace$ and on maps $f$ by $\mathcal{P}(f)(A)=f[A]$ are Kripke
frames, as relations $R \subseteq W \times W$ on a set $W$ of worlds
are in bijection with functions of type $W \to \mathcal{P}(W)$.
Restricting the powerset functor to \emph{finite} subsets,
i.e. putting $\mathcal{P}_\omega(X) = \lbrace A \subseteq X \mid A \mbox{
finite} \rbrace$, one obtains the class of image finite Kripke
frames as $\mathcal{P}_{\omega}$-coalgebras.
\item\label{item:cond} The semantics of conditional logic is captured
coalgebraically by the endofunctor $\mathcal{S}$ that maps a set $X$ to the
set $(\mathcal{P}(X) \to \mathcal{P}(X))$ of selection functions over $X$ (the
action of $\mathcal{S}$ on functions $f: X \to Y$ is given by
$\mathcal{S}(f)(s)(B) = f [ s(f^{-1}[B])]$). The ensuing $\mathcal{S}$-coalgebras
are precisely the conditional frames of \cite{Chellas80}.
\item\label{item:gml} The \emph{(infinite) multiset functor}
  $\mathsf{B}_\infty$ maps a set $X$ to the set $\mathsf{B}_\infty X$ of multisets
  over $X$, i.e.\ functions of type $X \to {\mathbb{N}} \cup \lbrace \infty
  \rbrace$.
  Accordingly, $\mathsf{B}_\infty$-coalgebras are \emph{multigraphs} (graphs
with edges annotated by multiplicities). Multigraphs provide an
alternative semantics for GML which is in many
respects more natural than the original Kripke
semantics~\cite{DAgostinoVisser02}, as also confirmed by new
results below.
\item\label{item:pml} Finally, if $\mathrm{supp}(\mu) = \lbrace x \in X \mid
\mu(x) \neq 0 \rbrace$ is the support of a function $\mu: X \to [0,
1]$ and $\mathcal{D}(X) = \lbrace \mu: X \to [0, 1] \mid \mathrm{supp}(\mu) \mbox{
finite}, \sum_{x \in X} \mu(x) = 1 \rbrace$ is the set of finitely
supported probability distributions on $X$, then $\mathcal{D}$-coalgebras
are probabilistic transition systems, the semantic domain of
PML.
\end{sparenumerate}
\end{exas}
\noindent The link between coalgebras and modal languages is provided
by predicate liftings~\cite{Pattinson03}, which are used to interpret
modal operators. Essentially, predicate liftings convert predicates on
the state space $X$ into predicates on the set $TX$ of structured
collections of states:
\begin{defi}\label{def:lifting}\cite{Pattinson03}
  An \emph{$n$-ary predicate lifting} ($n\in{\mathbb{N}}$) for $T$
  is a family of maps $\lambda_X:(\mathcal{P} X)^n \to\mathcal{P}(TX)$, where $X$
ranges over all sets, satisfying the \emph{naturality} condition
\begin{equation*}
\lambda_X(f^{-1}[A_1],\dots,f^{-1}[A_n])=(Tf)^{-1}[\lambda_Y(A_1,\dots,A_n)]
\end{equation*}
for all $f:X\to Y$, $A_1,\dots,A_n\in\mathcal{P}{Y}$. (For the
categorically minded, $\lambda$ is a natural transformation
$\mathcal{Q}^n\to\mathcal{Q}\circ T^{op}$, where $\mathcal{Q}$ denotes
contravariant powerset.) A \emph{structure} for a similarity type
$\Lambda$ over an endofunctor $T$ is the assignment of an $n$-ary
predicate lifting $\llbracket L \rrbracket$ to every $n$-ary modal operator $L
\in \Lambda$.
\end{defi}\noindent
\noindent Given a valuation $V: P \to \mathcal{P}(C)$ of the
propositional variables and a $T$-coalgebra $(C, \gamma)$, a structure
for $\Lambda$ allows us to define a satisfaction relation
$\models_{(C, \gamma,V)}$ between states of $C$ and formulas $\phi \in
\FLang(\Lambda)$ by stipulating that $c\models_{(C, \gamma,V)}p$ iff
$c\in V(p)$ and
\[
c\models_{(C, \gamma, V)} L (\phi_1, \dots, \phi_n)\;\textrm{ iff }\;
\gamma(c)\in \llbracket L \rrbracket_C (\llbracket \phi_1 \rrbracket,
\dots, \llbracket \phi_n \rrbracket),
\]
where $\llbracket \phi \rrbracket=\{c\in C\mid c\models_{(C, \gamma,V)}\phi\}$.
An \emph{$\mathcal{L}$-model} is now a \emph{model}, i.e.\ a triple $(C,
\gamma, V)$ as above, such that $c \models_{(C, \gamma,V)} \psi$ for
all $c\in C$ and all substitution instances $\psi$ of $\mathcal{A} \cup
\Theta$. An \emph{$\mathcal{L}$-frame} is a $T$-coalgebra $(C, \gamma)$ such that
$(C, \gamma, V)$ is an $\mathcal{L}$-model for all valuations~$V$. The
reader is invited to check that the following predicate liftings
induce the standard semantics for the modal languages introduced in
Expl. \ref{expl:sim-types}.
\begin{exas}\label{expl:structure}
\begin{sparenumerate}
\item A structure for $\Lambda_K$ over the covariant powerset functor
$\mathcal{P}$ is given by $\llbracket \Box \rrbracket_X (A) = \lbrace Y \in \mathcal{P}(X)
\mid Y \subseteq A \rbrace$. The frame classes defined by the frame
conditions mentioned in Expl.~\ref{expl:axioms}.\ref{item:ax-kripke}
are well-known; e.g.\ a Kripke frame $(X,R)$ is a $K4$-frame iff $R$
is transitive.
\item Putting $\llbracket \Rightarrow \rrbracket_X (A, B) = \lbrace f \in \mathcal{S}(X) \mid
f(A) \subseteq B \rbrace$ reconstructs the semantics of conditional
logic in a coalgebraic setting.
\item\label{item:gml-struct} A structure for GML over
  $\mathsf{B}_\infty$ is given by $\llbracket (\mge k) \rrbracket_X (A) = \lbrace f: X
\to {\mathbb{N}} \cup \lbrace \infty \rbrace \mid \sum_{x \in A} f(x) \geq
k \rbrace$. The frame conditions mentioned in
Expl.~\ref{expl:axioms}.\ref{item:ax-gml} correspond to conditions
on multigraphs that can be read off directly from the logical
axioms. E.g.\ a multigraph satisfies the transitivity axiom $\mge
1.\,\mge n.\,p\to\mge n.\,p$ iff whenever $x$ has non-zero
transition multiplicity to $y$ and $y$ has transition multiplicity
at least $n$ to $z$, then $x$ has transition multiplicity at least
$n$ to $z$.
\item The structure over $\mathcal{D}$ that captures PML coalgebraically is
given by the predicate lifting $\llbracket L_p \rrbracket_X (A) = \lbrace
\mu \in \mathcal{D}(X) \mid \sum_{x \in A} \mu(x) \geq p \rbrace$ for $p
\in [0, 1 ] \cap \mathbb{Q}$.
\end{sparenumerate}
\end{exas}
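\noindent For illustration, naturality (Def.~\ref{def:lifting}) of the
lifting $\llbracket \Box \rrbracket$ in the first item can be verified
directly: for $f: X \to Y$ and $A \subseteq Y$,
\begin{equation*}
  \llbracket \Box \rrbracket_X (f^{-1}[A])
  = \lbrace B \in \mathcal{P}(X) \mid B \subseteq f^{-1}[A] \rbrace
  = \lbrace B \in \mathcal{P}(X) \mid f[B] \subseteq A \rbrace
  = (\mathcal{P} f)^{-1}[\llbracket \Box \rrbracket_Y (A)],
\end{equation*}
and the remaining liftings can be checked analogously.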
\noindent From now on, \emph{fix a modal logic $\mathcal{L}=(\Lambda, \mathcal{A},
\Theta)$ and a structure for $\Lambda$ over a functor~$T$}.
We say that $\mathcal{L}$ is \emph{strongly complete} for some class of
models if every $\mathcal{L}$-consistent set of formulas is satisfiable in
some state of some model in that class. Restricting to \emph{finite}
sets $\Phi$ defines the notion of \emph{weak completeness}; many
coalgebraic modal logics are only weakly complete~\cite{Schroder07}.
\begin{defi}
Let $X$ be a set. If $\psi \in \mathcal{F}(\Lambda)$ and $\tau: P
\to \mathcal{P}(X)$ is a valuation, we write $\psi \tau$ for the result of
substituting $\tau(p)$ for $p$ in $\psi$, with propositional
subformulas evaluated according to the boolean algebra structure of
$\mathcal{P}(X)$. (Hence, $\psi\tau$ is a formula over the set $\mathcal{P}(X)$ of
atoms.) A formula $\phi \in \mathsf{Prop}(\Lambda(\mathcal{P}(X)))$ is
\emph{one-step $\mathcal{L}$-derivable}, denoted $\vdash^1_\mathcal{L} \phi$,
if $\phi$ is propositionally entailed by the set $ \lbrace \psi \tau
\mid \tau:P \to \mathcal{P}(X), \psi \in \mathcal{A} \rbrace$. A set $\Phi
\subseteq \mathsf{Prop}(\Lambda(\mathcal{P}(X)))$ is \emph{one-step
$\mathcal{L}$-consistent} if there do not exist formulas $\phi_1, \dots,
\phi_n \in \Phi$ such that $\vdash_\mathcal{L}^1
\neg(\phi_1\land\dots\land\phi_n)$. Dually, the \emph{one-step
semantics} $\llbracket \phi \rrbracket_X^1 \subseteq T X$ of a formula $\phi
\in \mathsf{Prop}(\Lambda(\mathcal{P}(X)))$ is defined inductively by $\llbracket L(A_1,
\dots, A_n) \rrbracket_X^1 = \llbracket L \rrbracket_X ( A_1, \dots, A_n)$ for
$A_1, \dots, A_n \subseteq X$. A set $\Phi \subseteq
\mathsf{Prop}(\Lambda(\mathcal{P}(X)))$ is \emph{one-step satisfiable} if
$\bigcap_{\phi \in \Phi} \llbracket \phi \rrbracket_X^1 \neq \emptyset$. We
say that $\mathcal{L}$ (or $\Lambda$) is \emph{separating} if $t\in TX$ is
uniquely determined by the set $\{\phi\in\Lambda(\mathcal{P}(X))\mid
t\in\Sem{\phi}^1_X\}$. We call $\mathcal{L}$ (or $\mathcal{A}$) \emph{one-step
sound} if every one-step derivable formula $\phi \in
\mathsf{Prop}(\Lambda(\mathcal{P}(X)))$ is one-step valid, i.e. $\llbracket \phi
\rrbracket_X^1 = TX$.
\end{defi}
\noindent \emph{Henceforth, we assume that $\mathcal{L}$ is one-step sound},
so that every $T$-coalgebra satisfies the rank-1 axioms; in the
absence of frame conditions ($\Theta = \emptyset$), this means in
particular that every $T$-coalgebra is an $\mathcal{L}$-frame. The above
notions of one-step satisfiability and one-step consistency are the
main concepts employed in the proof of strong completeness in the
following section.
Given a structure for $\Lambda$ over $T$, every set $\mathcal{B}$ of rank-1
axioms over $\Lambda$ defines a subfunctor $T_\mathcal{B}$ of $T$ with
$T_\mathcal{B}(X)=\bigcap\{\Sem{\phi\tau}^1_X\mid\phi\in\mathcal{B},\tau:P\to\mathcal{P}(X)\}\subseteq
TX$. This functor induces a structure for which $\mathcal{B}$ is one-step
sound.
\begin{exa}\label{expl:subfunctors}
The additional rank-1 axioms of
Expl.~\ref{expl:axioms}.\ref{item:ax-cond} induce subfunctors
$\mathcal{S}_\mathcal{B}$ of the functor $\mathcal{S}$ of
Expl.~\ref{expl:coalgml}.\ref{item:cond}. E.g.\ we have
\begin{align*}
\mathcal{S}_{\{\mi{ID}\}}X&=\{f\in\mathcal{S}(X)\mid \forall A\subseteq X.\,
f(A)\subseteq A\}\\
\mathcal{S}_{\{\mi{ID,DIS}\}}X&=\{f\in\mathcal{S}(X)\mid \forall A,B\subseteq X.\,
f(A)\subseteq A \wedge f(A\cup B)\subseteq
f(A)\cup f(B) \}\\
\mathcal{S}_{\{\mi{ID,DIS,CM}\}}X&=
\{f\in\mathcal{S}(X)\mid \forall A,B\subseteq X.\,
f(A)\subseteq A \wedge
(f(B)\subseteq A \Rightarrow f(A)\cap B\subseteq f(B))\}
\end{align*}
(it is an amusing exercise to verify the last claim).
\end{exa}
\section{Strong Completeness Via Quasi-Canonical Models}
\noindent
We wish to establish strong completeness of $\mathcal{L}$ by defining a
suitable $T$-coalgebra structure $\zeta$ on the set~$S$ of maximally
$\mathcal{L}$-consistent subsets of $\mathcal{F}(\Lambda)$, equipped with the
standard valuation $V(p)=\{\Gamma\in S\mid p\in\Gamma\}$. The crucial
property required is that $\zeta$ be \emph{coherent}, i.e.
\begin{equation*}
\zeta(\Gamma)\in \Sem{L}(\hat\phi_1,\dots,\hat\phi_n) \iff
L(\phi_1,\dots,\phi_n)\in \Gamma,
\end{equation*}
where $\hat\phi=\{\Delta\in S\mid \phi\in\Delta\}$, for $L\in\Lambda$
$n$-ary, $\Gamma\in S$, and $\phi_1,\dots,\phi_n\in\mathcal{F}(\Lambda)$,
as this allows proving, by a simple induction over the structure of
formulas,
\begin{lem}[Truth lemma]
If $\zeta$ is coherent, then for all formulas $\phi$,
$\Gamma\models_{(S,\zeta,V)}\phi$ iff $\phi\in\Gamma$.
\end{lem}
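\noindent (In the inductive step for a modal operator,
$\Gamma\models_{(S,\zeta,V)} L(\phi_1,\dots,\phi_n)$ iff
$\zeta(\Gamma)\in\Sem{L}(\Sem{\phi_1},\dots,\Sem{\phi_n})$, which by the
inductive hypothesis $\Sem{\phi_i}=\hat\phi_i$ and coherence is
equivalent to $L(\phi_1,\dots,\phi_n)\in\Gamma$; the boolean cases are
immediate.)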
\noindent We define a \emph{quasi-canonical model} to be a model
$(S,\zeta,V)$ with $\zeta$ coherent; the term quasi-canonical serves
to emphasise that the coherence condition does not determine the
transition structure $\zeta$ uniquely. By the truth lemma,
quasi-canonical models for $\mathcal{L}$ are $\mathcal{L}$-models, i.e.\ satisfy
all substitution instances of the frame conditions. The first question
is now under which circumstances quasi-canonical models exist; we
proceed to establish a widely applicable criterion. This criterion has
two main aspects: a \emph{local} form of strong completeness involving
only finite sets, and a preservation condition on the functor enabling
passage from finite sets to certain infinite sets. We begin with the
latter part:
\begin{defi}
A \emph{surjective $\omega$-cochain (of finite sets)} is a sequence
$(X_n)_{n\in{\mathbb{N}}}$ of (finite) sets equipped with surjective
functions $p_n:X_{n+1}\to X_n$ called \emph{projections}. The
\emph{inverse limit} $\varprojlim X_n$ of $(X_n)$ is the set
$\{(x_i)\in\prod_{i\in{\mathbb{N}}} X_i \mid \forall n.\,p_n(x_{n+1})=x_n\}$ of
\emph{coherent} families $(x_i)$. The \emph{limit projections} are
the maps $\pi_i((x_n)_{n\in{\mathbb{N}}})=x_i$, $i\in{\mathbb{N}}$; note that the
$\pi_i$ are surjective, i.e.\ every $x\in X_i$ can be extended to a
coherent family. Since all set functors preserve surjections,
$(TX_n)$ is a surjective $\omega$-cochain with projections
$Tp_n$. The functor $T$ \emph{weakly preserves inverse limits of
surjective $\omega$-cochains of finite sets} if for every
surjective $\omega$-cochain $(X_n)$ of finite sets, the canonical
map $T(\varprojlim X_n)\to\varprojlim TX_n$ is surjective, i.e.\ every
coherent family $(t_n)$ in $\prod TX_n$ is \emph{induced} by a (not
necessarily unique) $t\in T(\varprojlim X_n)$ in the sense that
$T\pi_n(t)=t_n$ for all $n$.
\end{defi}
\begin{exa}\label{exa:cochains}
Let $A$ be a finite alphabet; then the sets $A^n$, $n\in{\mathbb{N}}$, form
a surjective $\omega$-cochain of finite sets with projections
$p_n:A^{n+1}\to A^n$, $(a_1,\dots,a_{n+1})\mapsto
(a_1,\dots,a_n)$. The inverse limit $\varprojlim A^n$ is the set
$A^\omega$ of infinite sequences over $A$. The covariant powerset
functor $\mathcal{P}$ preserves this inverse limit weakly: given a coherent
family of subsets $B_n\subseteq A^n$, i.e.\ $p_n[B_{n+1}]=B_n$ for
all $n$, we define the set $B\subseteq A^\omega$ as the set of all
infinite sequences $(a_n)_{n\ge 1}$ such that $(a_1,\dots,a_n)\in
B_n$ for all $n$; it is easy to check that indeed $B$ induces the
$B_n$, i.e.\ $\pi_n[B]=B_n$. However, $B$ is by no means uniquely
determined by this property: Observe that $B$ as just defined is a
safety property. The intersection of $B$ with any liveness property
$C$, e.g.\ the set $C$ of all infinite sequences containing
infinitely many occurrences of a fixed letter in $A$, will also
satisfy $\pi_n[B\cap C]=B_n$ for all $n$.
\end{exa}
\noindent The second part of our criterion is an infinitary version of
a local completeness property called one-step completeness, which has
been used previously in \emph{weak} completeness
proofs~\cite{Pattinson03,Schroder07}.
\begin{defi}
We say that $\mathcal{L}$ is \emph{strongly one-step complete over finite
sets} if for finite $X$, every one-step consistent subset $\Phi$
of $\mathsf{Prop}(\Lambda(\mathcal{P}(X)))$ is one-step satisfiable.
\end{defi}
\noindent
The difference with plain one-step completeness is that $\Phi$ above
may be infinite. Consequently, strong and plain one-step completeness
coincide in case the modal similarity type $\Lambda$ is finite, since
in this case, $\mathsf{Prop}(\Lambda(\mathcal{P}(X)))$ is, for finite $X$, finite up
to propositional equivalence. The announced strong completeness
criterion is now the following.
\begin{thm}\label{thm:can-model}
If $\mathcal{L}$ is strongly one-step complete over finite sets and
separating, $\Lambda$ is countable, and $T$ weakly preserves inverse
limits of surjective $\omega$-cochains of finite sets, then $\mathcal{L}$
has a quasi-canonical model.
\end{thm}
\proof[Proof sketch]
The most natural argument is via the dual adjunction between sets
and boolean algebras that associates to a set the boolean algebra of
its subsets, and to a boolean algebra the set of its
ultrafilters. For economy of presentation, we outline a direct
proof instead: we prove that
\begin{itemize}
\item[($*$)] every maximally one-step consistent
$\Phi\subseteq\mathsf{Prop}(\Lambda(\mathfrak{A}))$ is one-step satisfiable,\\ where
$\mathfrak{A}=\{\hat\phi\mid\phi\in\mathcal{F}(\Lambda)\}\subseteq\mathcal{P}(S)$.
\end{itemize}
The existence of the required coherent coalgebra structure $\zeta$
on $S$ follows immediately, since the coherence requirement for
$\zeta(\Gamma)$, $\Gamma\in S$, amounts to one-step satisfaction of
a maximally one-step consistent subset of $\mathsf{Prop}(\Lambda(\mathfrak{A}))$.
To prove ($*$), let $\Lambda=\{L_n\mid n\in{\mathbb{N}}\}$, let
$P=\{p_n\mid n\in{\mathbb{N}}\}$, let $\mathcal{F}_n$ denote the set of
$\Lambda$-formulas of modal nesting depth at most $n$ that employ
only modal operators from $\Lambda_n=\{L_0,\dots,L_n\}$ and only the
atomic propositions $p_0,\dots,p_n$, and let $S_n$ be the set of
maximally consistent subsets of $\mathcal{F}_n$. Then $S$ is (isomorphic
to) the inverse limit $\varprojlim S_n$, where the projections
$S_{n+1}\to S_n$ and the limit projections $S\to S_n$ are just
intersection with $\mathcal{F}_n$. As the sets $S_n$ are finite, we
obtain by strong one-step completeness $t_n\in TS_n$ such that
$t_n\models^1_{S_n} \Phi\cap\mathsf{Prop}(\Lambda(\mathfrak{A}_n))$, where
$\mathfrak{A}_n=\{\hat\phi\cap S_n\mid\phi\in\mathcal{F}_n\}$. By separation,
$(t_n)_{n\in{\mathbb{N}}}$ is coherent, and hence is induced by some $t\in
TS$ by weak preservation of inverse limits; then,
$t\models^1_S\Phi$.\qed
\noindent
Together with the Lindenbaum Lemma we obtain strong completeness as
a corollary.
\begin{cor} \label{cor:str-comp}
Under the conditions of Thm.~\ref{thm:can-model},
$\mathcal{L}$
is strongly complete for $\mathcal{L}$-models.
\end{cor}
\noindent Both Thm.~\ref{thm:can-model} and
Cor.~\ref{cor:str-comp} do apply to the case that $\mathcal{L}$ has
frame conditions. When $\mathcal{L}$ is of rank~1 (i.e.\
$\Theta=\emptyset$), Cor.~\ref{cor:str-comp} implies that $\mathcal{L}$
is strongly complete for (models based on) $\mathcal{L}$-frames. In the
presence of frame conditions, the underlying frame of an $\mathcal{L}$-model
need not be an $\mathcal{L}$-frame, so that the question arises whether
$\mathcal{L}$ is also strongly complete for $\mathcal{L}$-frames. In applications,
positive answers to this question, usually referred to as the
canonicity problem, typically rely on a judicious choice of
quasi-canonical model to ensure that the latter is an $\mathcal{L}$-frame,
often the largest quasi-canonical model under some ordering on
$TS$. Detailed examples are given in Sec.~\ref{sec:examples}.
\begin{rem}
It is shown in~\cite{KurzRosicky} that $T$ admits a strongly
complete modal logic if $T$ weakly preserves (arbitrary) inverse
limits \emph{and preserves finite sets}. The essential contribution
of the above result is to remove the latter restriction, which fails
in important examples. Moreover, the observation that we need only
consider \emph{surjective} $\omega$-cochains is relevant in some
applications, see below.
\end{rem}
\begin{rem}
A last point that needs clearing up is whether strong completeness
of coalgebraic modal logics can be established by some more general
method than quasi-canonical models of the quite specific shape used
here. The answer is negative, at least in the case of rank-1 logics
$\mathcal{L}$: it has been shown in~\cite{KurzPattinson05} that every such
$\mathcal{L}$ admits models which consist of the maximally
\emph{satisfiable} sets of formulas and obey the truth lemma. Under
strong completeness, such models are quasi-canonical.
This seems to contradict the fact that some canonical model
constructions in the literature, notably the canonical Kripke models
for graded modal logics~\cite{Fine72,DeCaro88}, employ state spaces
which have multiple copies of maximally consistent sets. The above
argument indicates that such logics fail to be coalgebraic, and
indeed this is the case for GML with Kripke
semantics. As mentioned above, GML has an alternative
coalgebraic semantics over multigraphs, and we show below that this
semantics does admit quasi-canonical models in our sense.
\end{rem}
\section{Examples}\label{sec:examples}
\noindent We now show how the generic results of the previous section
can be applied to obtain canonical models and associated strong
completeness and compactness theorems for a large variety of
structurally different modal logics. We have included some negative
examples where canonical models necessarily fail to exist due to
non-compactness, and we analyse which conditions of
Thm.~\ref{thm:can-model} fail in each case. We emphasise that in
the positive examples, the verification of said conditions is entirely
stereotypical. Weak preservation of inverse limits of surjective
$\omega$-cochains usually holds without the finiteness assumption,
which is therefore typically omitted.
\begin{exa}[Strong completeness of Kripke semantics for $K$]
Recall from Expl.~\ref{expl:coalgml}.\ref{item:Kripke} that Kripke
frames are coalgebras for the powerset functor $TX = \mathcal{P}(X)$.
Strong completeness of $K$ with respect to Kripke semantics is, of
course, well known. We briefly illustrate how this can be derived
from our coalgebraic treatment. To see that $K$ is strongly
one-step complete over finite sets $X$, let
$\Phi\subseteq\mathsf{Prop}(\Lambda_K(\mathcal{P}(X)))$ be maximally one-step
consistent. It is easy to check that $\{x\in
X\mid\Diamond\{x\}\in\Phi\}$, where $\Diamond$ abbreviates
$\neg\Box\neg$, satisfies $\Phi$. To prove that the
powerset functor weakly preserves inverse limits, let $(X_n)$ be an
$\omega$-cochain, and let $(A_n\in\mathcal{P}(X_n))$ be a coherent
family. Then $(A_n)$ is itself a cochain, and the set $A=\varprojlim
A_n\subseteq\varprojlim X_n$ induces $(A_n)$ (w.r.t.\ the subset
ordering on $\mathcal{P}(X)$). Separation is clear. By
Thm.~\ref{thm:can-model}, there exists a quasi-canonical Kripke
model for all normal modal logics. In particular, the standard
canonical model~\cite{Chellas80} is quasi-canonical; it witnesses
strong completeness (w.r.t.\ frames) of all canonical logics such as
$K4$, $S4$, $S5$.
\end{exa}
\begin{exa}[Failure of strong completeness of $K$ over finitely
branching models]
As seen in Expl.~\ref{expl:coalgml}.\ref{item:Kripke}, finitely
branching Kripke frames are coalgebras for the finite powerset functor
$\mathcal{P}_\omega$. It is clear that quasi-canonical models fail to exist
in this case, as compactness fails over finitely branching frames: one
can easily construct formulas $\phi_n$ that force a state to have at
least $n$ different successors. The obstacle to the application of
Thm.~\ref{thm:can-model} is that the finite powerset functor fails to
preserve inverse limits weakly, as the inverse limit of an
$\omega$-cochain of finite sets may fail to be finite.
\end{exa}
\begin{exa}[Conditional logic]
Recall from Expl.~\ref{expl:coalgml}.\ref{item:cond} that the
conditional logic $\mathit{CK}$ is interpreted over the functor
$\mathcal{S}(X)=\mathcal{P}(X)\to\mathcal{P}(X)$. To prove strong one-step completeness
over finite sets $X$, let $\Phi\subseteq\mathsf{Prop}(\Lambda_\mathrm{CL}(\mathcal{P}(X)))$
be maximally one-step consistent. Define $f:\mathcal{P}(X)\to\mathcal{P}(X)$ by
$f(A)=\bigcap\{B\subseteq X\mid A\Rightarrow B\in\Phi\}$; it is
mechanical to check that $f\models^1\Phi$. To see that $\mathcal{S}$ weakly
preserves inverse limits, let $(X_n)$ be a surjective
$\omega$-cochain, let $X=\varprojlim X_n$, and let $(f_n\in\mathcal{S}(X_n))$
be coherent. Define $f:\mathcal{P}(X)\to\mathcal{P}(X)$ by letting $(x_n)\in f(A)$
for a coherent family $(x_n)\in X$ iff whenever $A=\pi_n^{-1}[B]$
for some $n$ and some $B\subseteq X_n$, then $x_n\in f_n(B)$. Using
surjectivity of the projections of $(X_n)$, it is straightforward to
prove that $f$ induces $(f_n)$. Finally, separation is clear. By
Thm.~\ref{thm:can-model}, it follows that the conditional logic
$\mathit{CK}$ has a quasi-canonical model, and hence that $\mathit{CK}$ is strongly
complete for conditional frames. In the case of the additional
rank-1 axioms mentioned in
Expl.~\ref{expl:axioms}.\ref{item:ax-cond} and the corresponding
subfunctors of $\mathcal{S}$ described in Expl.~\ref{expl:subfunctors}, the
situation is as follows.
\textbf{Identity:} The functor $\mathcal{S}_{\{\mi{ID}\}}$ weakly preserves
inverse limits of surjective $\omega$-cochains. In the notation
above, put $(x_n)\in f(A)$ iff the condition above holds and
$(x_n)\in A$.
\textbf{Identity and disjunction:} The functor
$\mathcal{S}_{\{\mi{ID},\mi{DIS}\}}$ weakly preserves inverse limits of
surjective $\omega$-cochains: put $(x_n)\in f(A)$ iff $(x_n)\in A$
and whenever $(x_n)\in\pi_m^{-1}[B]\subseteq A$, then $x_m\in f_m(B)$.
\textbf{System C:} It is open whether the functor
$\mathcal{S}_{\{\mi{ID},\mi{DIS},\mi{CM}\}}$ weakly preserves inverse
limits of surjective $\omega$-cochains, and whether System C is
strongly complete over conditional frames.
Indeed it appears to be an open problem to find \emph{any} semantics
for which System C is strongly complete, other than the generalised
neighbourhood semantics as described e.g.\
in~\cite{SchroderPattinson07mcs}, which is strongly complete for
very general reasons but provides little in the way of actual
semantic information. The classical preference semantics according
to Lewis is only known to be weakly
complete~\cite{Burgess81}. Friedman and
Halpern~\cite{FriedmanHalpern01} do silently prove strong
completeness of System C w.r.t.\ plausibility measures; however, on
close inspection the latter turn out to be essentially equivalent to
the above-mentioned generalised neighbourhood semantics. Moreover,
Segerberg~\cite{Segerberg89} proves strong completeness for a whole
range of conditional logics over \emph{general} conditional frames,
where, in analogy to corresponding terminology for Kripke frames, a
general conditional frame is equipped with a distinguished set of
\emph{admissible propositions} limiting both the range of valuations
and the domain of selection functions. In contrast, our method
yields full conditional frames in which the frame conditions hold
for \emph{any} valuation of the propositional variables. While in
the case of $\mathit{CK}$ and its extension by $\mi{ID}$ alone, these models
differ from Segerberg's only in that they insert default values for
the selection function on non-admissible propositions, the
canonical model for the extension of $\mathit{CK}$ by $\{\mi{ID},\mi{DIS}\}$
has non-trivial structure on non-admissible propositions, and we
believe that our strong completeness result for this logic is
genuinely new.
\end{exa}
\begin{exa}[Strong completeness of GML over
multigraphs]\label{expl:gml-infty}
Recall from Expl.~\ref{expl:coalgml}.\ref{item:gml} that graded
modal logic (GML) has a coalgebraic semantics in terms of the multiset
functor $\mathsf{B}_\infty$. To prove strong one-step completeness over
finite sets $X$, let $\Phi\subseteq\mathsf{Prop}(\Lambda_{\mathrm{GML}}(\mathcal{P}(X)))$ be
maximally one-step consistent. We define $B\in\mathsf{B}_\infty(X)$ by
$B(A)\ge n\iff\mge n.\,A\in\Phi$; it is easy to check that $B$ is
well-defined and additive. To prove weak preservation of inverse
limits, let $(X_n)$ be an $\omega$-cochain, let $X=\varprojlim X_n$, and
let $(B_n\in\mathsf{B}_\infty(X_n))$ be coherent. Then define
$B\in\mathsf{B}_\infty(X)$ pointwise by
\begin{equation*}
B((x_n))=\min_{n\in{\mathbb{N}}}B_n(x_n),
\end{equation*}
noting that the sequence $(B_n(x_n))$ is decreasing by coherence. A
straightforward computation shows that $B$ induces $(B_n)$. Separation
is clear.
By the above and Thm.~\ref{thm:can-model}, all extensions of GML have
quasi-canonical multigraph models. While the technical core of the
construction is implicit in the work of Fine~\cite{Fine72} and
de~Caro~\cite{DeCaro88}, these authors were not yet aware of multigraph
semantics, and hence our result that \emph{GML is strongly complete
over multigraphs} has not been obtained previously.
The standard frame conditions for reflexivity, symmetry, and
transitivity (Expls.~\ref{expl:axioms}.\ref{item:ax-gml}
and~\ref{expl:structure}.\ref{item:gml-struct}) and arbitrary
combinations thereof are easily seen to be satisfied in the
quasi-canonical model constructed above. We point out that this
contrasts with Kripke semantics in the case of the graded version of
$S4$, i.e.\ GML extended with the reflexivity and transitivity axioms
of Expl.~\ref{expl:axioms}.\ref{item:ax-gml}: as shown
in~\cite{FattorosiBarnabaCerrato88}, the complete axiomatisation of
graded modal logic over transitive reflexive Kripke frames includes
two rather strange combinatorial artefacts, which by the above
disappear in the multigraph semantics. The reason for the divergence
(which we regard as an argument in favour of multigraph semantics) is
that, while in many cases multigraph models are easily transformed
into equivalent Kripke models by just making copies of states, no such
translation exists in the transitive reflexive case (transitivity
alone is unproblematic).
Observe moreover that the above extends straightforwardly to
description logics $\mathcal{ALCQ}(\mathcal{R})$ with qualified number restrictions
and a role hierarchy $\mathcal{R}$ where roles may be distinguished as, in
any combination, transitive, reflexive, or symmetric. As shown
in~\cite{HorrocksEA99,KazakovEA07}, $\mathcal{ALCQ}(\mathcal{R})$ is undecidable for
many $\mathcal{R}$, even when only transitive roles are considered. For
undecidable logics, completeness is in some sense the `next best
thing', as it guarantees if not recursiveness then at least recursive
enumerability of all valid formulas, and hence enables automatic
reasoning. Essentially, our results show that the natural
axiomatisation of \emph{$\mathcal{ALCQ}(\mathcal{R})$ with transitive, symmetric and
reflexive roles is strongly complete over multigraphs}, a result
which fails for the standard Kripke semantics.
\end{exa}
\begin{exa}[Failure of strong completeness of image-finite GML]
\label{expl:gml-finite}
Similarly to the case of image-finite Kripke frames, one can model an
image-finite version of graded modal logic coalgebraically by
exchanging the functor $\mathsf{B}_\infty$ for the \emph{finite multiset
functor} $\mathsf{B}$, where $\mathsf{B}(X)$ consists of all maps $X\to{\mathbb{N}}$
with finite support. Of course, the resulting logic is non-compact and
hence fails to admit a canonical model. This is witnessed not only by
the same family of formulas as in the case of image-finite Kripke
semantics, which targets finiteness of the number of different
successors, but also by the set of formulas $\{\mge n.\,a\,\mid
n\in{\mathbb{N}}\}$, which targets finiteness of multiplicities. Analysing the
conditions of Thm.~\ref{thm:can-model}, we detect two violations: not
only does weak preservation of inverse limits fail, but there is also
no way to find an axiomatisation which is strongly one-step complete
over finite sets (again, consider sets $\{\mge n.\,\{x\}\mid
n\in{\mathbb{N}}\}$).
\end{exa}
\noindent Strong completeness of image-finite GML can be recovered by
slight adjustments to the syntax and semantics. We formulate a more
general approach, as follows.
\begin{exa}[Strong completeness of the logic of additive measures]
\label{expl:additive-measures}
We fix an at most countable commutative monoid~$M$ (e.g.\
$M={\mathbb{N}}$). We think of the elements of $M$ as describing the measure
of a set of elements. To ensure compactness, we have to allow some
sets to have undefined measure. That is, we work with coalgebras for
the endofunctor $T_M$ defined by
\[ T_M(X) = \lbrace (\mathfrak{A}, \mu) \mid \mathfrak{A} \subseteq \mathcal{P}(X) \mbox{
closed under disjoint unions}, \mu: \mathfrak{A} \to M \mbox{ additive}
\rbrace \]
The modal logic of additive $M$-valued measures is given by the
similarity type $\Lambda_M = \lbrace E_m \mid m \in M \rbrace$ where
$E_m \phi$ expresses that $\phi$ has measure $m$, i.e.\
\begin{equation*}
\Sem{E_m}_XB=\{(\mathfrak{A},\mu)\in T_M(X)\mid B\in\mathfrak{A},\mu(B)=m\}.
\end{equation*}
$\Lambda_M$ is clearly separating. The logic is axiomatised by the
following two axioms:
\begin{equation*}
E_ma\to\neg E_na\quad(n\neq m)
\quad\textrm{and}\quad
E_m(a\land b)\land E_n(a\land\neg b)\to E_{m+n}a.
\end{equation*}
These axioms are strongly one-step complete over finite sets $X$: if
$\Phi\subseteq\mathsf{Prop}(\Lambda_M(\mathcal{P}(X)))$ is maximally one-step
consistent, then $(\mathfrak{A},\mu)\models^1\Phi$ where $A\in\mathfrak{A}$ iff
$E_mA\in\Phi$ for some necessarily unique $m$, in which case
$\mu(A)=m$. Moreover, $T_M$ weakly preserves inverse limits
$X=\varprojlim X_n$, with finite $X_n$: a coherent family
$((\mathfrak{A}_n,\mu_n)\in T_M(X_n))$ is induced by
$(\mathfrak{A},\mu)\in T_M(X)$, where $\mathfrak{A}=\{\pi_n^{-1}[B]\mid
n\in{\mathbb{N}},B\in\mathfrak{A}_n\}$ and $\mu(\pi_n^{-1}[B])=\mu_n(B)$ is easily seen
to be well-defined and additive. Theorem~\ref{thm:can-model} now
guarantees existence of quasi-canonical models. A simple example is $M
= {\mathbb{Z}} / 2 {\mathbb{Z}}$, which induces a logic of even and odd.
For the case $M = {\mathbb{N}}$, we obtain a variant of graded modal logic
with finite multiplicities, where we code $\geq k. \phi$ as $\neg
\bigvee_{0 \leq i < k} E_i \phi$. However, it may still be the case that
a state has a family of successor sets of unbounded measure, so that
undefinedness of the measure of the entire state space just hides an
occurrence of infinity. This defect is repaired by insisting that the
measure of the whole state space is finite at the expense of
disallowing the modal operator $E_0$ in the language, as follows.
\end{exa}
\newcommand{\GMLm}{\mathrm{GML}^-}
\begin{exa}[Strong completeness of finitely branching
$\mathrm{GML}^-$]\label{expl:gmlm}
To force the entire state space to have finite measure, we
additionally introduce a \emph{measurability} operator~$E$,
interpreted by $\Sem{E}B=\{(\mathfrak{A},\mu)\mid B\in\mathfrak{A}\}$, and impose
obvious axioms guaranteeing that measures on $X$ are defined on
boolean subalgebras of $\mathcal{P}(X)$, in particular $E\top$ (i.e.\
$\mu(X)$ is finite), and $E_na\to Ea$. In order to achieve
compactness, we now leave a bolt hole on the syntactical side and
exclude the operator $E_0$. In other words, the syntax of $\GMLm$ is
given by the similarity type
\( \Lambda_{\GMLm} = \lbrace E \rbrace \cup \lbrace E_n \mid n > 0
\rbrace \),
and we interpret $\GMLm$ over coalgebras for the functor
$\mathsf{B}_M$ defined by
\[ \mathsf{B}_M(X) = \lbrace (\mathfrak{A}, \mu) \mid \mathfrak{A} \mbox{ boolean subalgebra
of $\mathcal{P}(X)$}, \mu: \mathfrak{A} \to {\mathbb{N}} \mbox{ additive} \rbrace.
\]
Separation is clear.
The axiomatisation of $\GMLm$ is given by the axiomatisation of the
modal logic of additive measures, the above-mentioned axioms on $E$,
and the additional axiom
\begin{gather*}
E_n a\land E b \to E_n (a\land b) \lor E_n(a\land\neg b) \lor
\textstyle\bigvee_{0<k<n} (E_k(a\land b)\land E_{n-k} (a\land\neg b))
\end{gather*}
which compensates for the absence of $E_0$. Strong one-step
completeness over finite sets and weak preservation of inverse limits
is shown analogously as in Expl.~\ref{expl:additive-measures}, so that
we obtain a \emph{strongly complete finitely branching graded modal
logic $\GMLm$}. The tradeoff is that the operator $\geq k. \phi$
is no longer expressible as $\neg \bigvee_{0 \leq i < k} E_i
\phi$ in $\GMLm$, which only allows one to formulate the implication
$\mge
1.\phi\to\mge n.\,\phi$.
\end{exa}
\begin{exa}[Failure of strong completeness for PML over finitely
supported probability distributions]
Like image-finite graded modal logic, probabilistic modal logic as
introduced in Expl.~\ref{expl:coalgml}.\ref{item:pml} fails to be
compact, and violates the conditions of Thm.~\ref{thm:can-model} on
two counts, namely weak preservation of inverse limits and strong
one-step completeness over finite sets. The first issue is related to
image-finiteness, while the second is rooted in the structure of the
real numbers: e.g.\ the set $\{L_{1/2-1/n}a\mid n\in{\mathbb{N}}\}\cup\{\neg
L_{1/2}a\}$ is finitely satisfiable but not satisfiable.
\end{exa}
\newcommand{\PMLe}{\mathrm{PML}_e}
\begin{exa}[Strong completeness of the logic of exact
probabilities]
In order to remove the above-mentioned failure of compactness, we
consider the fragment of probabilistic modal logic containing only
operators $E_p$ stating that a given event has probability exactly
$p$. (This is, of course, less expressive than the operators $L_p$ but
still allows reasonable statements such as that rolling a six on a die
happens with probability $1/6$.) Moreover, we require probabilities to
be rational and allow probabilities to be undefined, thus following
the additive measures approach as outlined above, where we consider a
subfunctor of $T_{\mathbb{Q}}$ defined by the requirement that the
whole set has measure~$1$. However, we are able to impose stronger
conditions on the domain $\mathfrak{A}\subseteq\mathcal{P}(X)$ of a probability
measure $P$ on $X$: we require that $X\in\mathfrak{A}$ and that $A,B\in\mathfrak{A}$,
$B\subseteq A$ imply $A-B\in \mathfrak{A}$, which is reflected in the additional
axioms $E_1\top$ and $E_pa\land E_q(a\land b)\to
E_{p-q}(a\land\neg b)$. It is natural that we cannot force closure
under intersection, as there is in general no way to infer the exact
probability of $A\cap B$ from the probabilities of $A$ and $B$.
Along the same lines as above, we now obtain quasi-canonical models,
and hence strong completeness and compactness, of the arising modal
logic of exact probabilities.
\end{exa}
\section{Conclusion}
\noindent We have laid out a systematic method of proving existence of
canonical models in a generic semantic framework encompassing a wide
range of structurally different modal logics. We have shown how this
method turns the construction of canonical models into an entirely
mechanical exercise where applicable, and points the way to obtaining
compact fragments of non-compact logics. As example applications, we
have reproved a number of known strong completeness results and
established several new results of this kind; specifically, the latter
includes strong completeness of the following logics.
\begin{sparitemize}
\item The modal logic of exact probabilities, with operators $E_p$
`with probability exactly $p$'.
\item Graded modal logic over transitive reflexive multigraphs, i.e.\
the natural graded version of $S4$, and more generally description
logic with role hierarchies including transitive, reflexive, and
symmetric roles and qualified number restrictions also on non-simple
(e.g.\ transitive) roles.
\item The conditional logic $CK+\{\mi{ID},\mi{DIS}\}$, i.e.\ with the
standard axioms of identity and disjunction, interpreted over
conditional frames.
\end{sparitemize}
A number of interesting open problems remain, e.g.\ to find further
strongly complete variants of probabilistic modal logic or to
establish strong completeness of the full set of standard axioms of
default logic, Burgess' System C~\cite{Burgess81}, over the
corresponding class of conditional frames.
\iffalse
\appendix
\section{Omitted Proofs}
\subsection*{Full Proof of Theorem~\ref{thm:can-model}}
\noindent For a set $\Phi \subseteq \mathsf{Prop}(\Lambda(\mathcal{P}(X)))$ and $t\in
TX$, we write $t \models^1_X \Phi$ if $t\in\bigcap_{\phi \in \Phi}
\llbracket \phi \rrbracket_X^1$.
To begin, we reduce the claim of the theorem to
\begin{itemize}
\item[($*$)] every maximally one-step consistent
$\Phi\subseteq\mathsf{Prop}(\Lambda(\mathfrak{A}))$ is one-step satisfiable,\\ where
$\mathfrak{A}=\{\hat\phi\mid\phi\in\mathcal{F}(\Lambda)\}\subseteq\mathcal{P}(S)$.
\end{itemize}
Let $\Gamma\in S$. We have to prove that there exists
$\zeta(\Gamma)\in TS$ such that
\begin{equation*}
\zeta(\Gamma)\in \Sem{L}(\hat\phi_1,\dots,\hat\phi_n) \iff
L(\phi_1,\dots,\phi_n)\in \Gamma,
\end{equation*}
for $L\in\Lambda$ $n$-ary, $\Gamma\in S$, and
$\phi_1,\dots,\phi_n\in\mathcal{F}(\Lambda)$. This means that
$\zeta(\Gamma)\models^1_S\Phi$, where the set
$\Phi\subseteq\mathsf{Prop}(\Lambda(\mathcal{P}(S)))$ is obtained as follows. Let
$V$ denote the set of propositional variables $a_\phi$, where $\phi$
ranges over $\mathcal{F}(\Lambda)$, let $\tau$ be the $\mathcal{P}(S)$-valuation
defined by $\tau(a_\phi)=\hat\phi$, and let $\sigma$ be the
substitution defined by $\sigma(a_\phi)=\phi$. Then $\Phi$ consists
of all formulas $\phi\tau\in\mathsf{Prop}(\Lambda(\mathcal{P}(S)))$ where
$\phi\in\mathsf{Prop}(\Lambda(V))$ is such that $\phi\sigma\in\Gamma$. As
shown in (the full version of)~\cite{SchroderPattinson07mcs},
$\Phi$ is (maximally) one-step consistent, hence satisfiable by
strong one-step completeness, so that $\zeta(\Gamma)$ exists as
required.
To prove ($*$), let $\Lambda=\{L_n\mid n\in{\mathbb{N}}\}$, let
$P=\{p_n\mid n\in{\mathbb{N}}\}$, let $\mathcal{F}_n$ denote the set of
$\Lambda$-formulas of modal nesting depth at most $n$ that employ
only modal operators from $\Lambda_n=\{L_1,\dots,L_n\}$ and only the
atomic propositions $p_1,\dots,p_n$, and let $S_n$ be the set of
maximally consistent subsets of $\mathcal{F}_n$. Then $S$ is (isomorphic
to) the inverse limit $\varprojlim S_n$, where the projections
$p_n:S_{n+1}\to S_n$ and the limit projections $\pi_n:S\to S_n$ are
just intersection with $\mathcal{F}_n$: it is clear that $\Gamma\in S$ is
uniquely determined by all its projections $\pi_n(\Gamma)=\Gamma\cap
\mathcal{F}_n$, and conversely, a coherent family $(\Gamma_n)$ for
$(S_n)$, i.e.\ $p_n(\Gamma_{n+1})=\Gamma_{n+1}\cap\mathcal{F}_n=\Gamma_n$
comes from $\Gamma\in S$ defined by $\Gamma=\bigcup\Gamma_n$:
$\Gamma\cap\mathcal{F}_m=\bigcup (\Gamma_n\cap\mathcal{F}_m) =\bigcup_{n\le
m}\Gamma_n\cup\bigcup_{n> m}(\Gamma_n\cap\mathcal{F}_m)=\Gamma_m$. As
the sets $S_n$ are finite, we obtain by strong one-step completeness
$t_n\in TS_n$ such that $t_n\models^1_{S_n}
\Phi\cap\mathsf{Prop}(\Lambda(\mathfrak{A}_n))$, where $\mathfrak{A}_n=\{\hat\phi\cap
S_n\mid\phi\in\mathcal{F}_n\}$. We use separation to show that
$(t_n)_{n\in{\mathbb{N}}}$ is coherent: It suffices to show that
\begin{equation*}\tag{$+$}
Tp_n(t_{n+1})\models^1_SL(A_1,\dots,A_k)\textrm{ iff } t_n\models^1_S
L(A_1,\dots,A_k)
\end{equation*}
for $L\in\Lambda$ $k$-ary and $A_1,\dots,A_k\subseteq S_n$. By
naturality of $\Sem{L}$, the left hand side is equivalent to
\begin{equation*}
t_{n+1}\models^1_SL(p_n^{-1}[A_1],\dots,p_n^{-1}[A_k]).
\end{equation*}
By finiteness of the $S_n$, there exists, for each $A_i$, a formula
$\phi_i\in\mathcal{F}_n$ such that $\hat\phi_i\cap S_n=A_i$. Then
$p_n^{-1}[A_i]=\hat\phi_i\cap S_{n+1}$. Thus, both sides of ($+$)
are equivalent to $L(\phi_1,\dots,\phi_k)\in\Phi$.
Since $T$ weakly preserves the inverse limit of $(S_n)$, it follows
that the family $(t_i)$ is induced by some $t\in TS$. Then,
$t\models^1_S\Phi$: we have to show
\begin{equation*}
t\models^1_SL(A_1,\dots,A_k)\textrm{ iff } L(A_1,\dots,A_k)\in\Phi
\end{equation*}
for $L\in\Lambda$ $k$-ary, $A_1,\dots,A_k\in\mathfrak{A}$. We have $n\in{\mathbb{N}}$
and $\phi_1,\dots,\phi_k\in\mathcal{F}_n$ such that $A_i=\hat\phi_i$,
$i=1,\dots,k$. Thus, $A_i=\pi_n^{-1}[\hat\phi_i\cap S_n]$ for all
$i$, so that the left hand side is by naturality of $\Sem{L}$
equivalent to
\begin{equation*}
t_n=T\pi_n(t)\models^1_{S_n}L(\hat\phi_1\cap S_n,\dots,\hat\phi_k\cap S_n),
\end{equation*}
which is equivalent to the right hand side by construction of
$t_n$. \qed
\fi
\end{document}
\begin{document}
\begin{center}
{\Large\bf MCMC Methods for Gaussian Process Models\\[-2pt]
Using Fast Approximations for the Likelihood\\ }
\begin{minipage}{0.45\linewidth}
\centering
Chunyi Wang\\
Department of Statistical Sciences\\
University of Toronto\\
\texttt{[email protected]}\\
~
\end{minipage}
\begin{minipage}{0.45\linewidth}
\centering
Radford M.\ Neal\\
Department of Statistical Sciences and \\
Department of Computer Science\\
University of Toronto\\
\texttt{[email protected]}
\end{minipage}
\\
9 May 2013
\end{center}
\begin{quotation}\noindent
Gaussian Process (GP) models are a powerful and flexible tool for
non-parametric regression and classification. Computation for GP
models is intensive, since computing the posterior density, $\pi$, for
covariance function parameters requires computation of the covariance
matrix, $C$, a $pn^2$ operation, where $p$ is the number of covariates
and $n$ is the number of training cases, and then inversion of $C$, an
$n^3$ operation. We introduce MCMC methods based on the ``temporary
mapping and caching'' framework, using a fast approximation, $\pi^*$,
as the distribution needed to construct the temporary space. We
propose two implementations under this scheme: ``mapping to a
discretizing chain'', and ``mapping with tempered transitions'', both
of which are exactly correct MCMC methods for sampling $\pi$, even
though their transitions are constructed using an approximation.
These methods are equivalent when their tuning parameters are set at the
simplest values, but differ in general. We compare how well these
methods work when using several approximations, finding on
synthetic datasets that a $\pi^*$ based on the ``Subset of Data''
(SOD) method is almost always more efficient than standard MCMC using
only $\pi$. On some datasets, a more sophisticated $\pi^*$ based on
the ``Nystr\"om-Cholesky'' method works better than SOD.
\end{quotation}
\begin{section}{Introduction}
Evaluating the posterior probability density function is the most
costly operation when Markov Chain Monte Carlo (MCMC) is applied to
many Bayesian inference problems. One example is the Gaussian Process
regression model (see Section \ref{sec:app} for a brief introduction),
for which the time required to evaluate the posterior probability density
increases with the cube of the sample size. However, several fast but
approximate methods for Gaussian Process models have been developed.
We show in this paper how such an approximation to the posterior
distribution for parameters of the covariance function in a Gaussian
process model can be used to speed up sampling, using either of two
schemes, based on ``mapping to a discretizing chain'' or ``mapping
with tempered transitions''. Both schemes produce an exactly correct
MCMC method, despite using an approximation to the posterior density
for some operations.
In the next section, we describe a general scheme for constructing
efficient MCMC methods using temporary mapping and caching techniques,
first introduced by \citet{Neal:2006}, which is the basis for both of
the schemes for using approximations that are introduced in this
paper.
One possibility for a space to temporarily map to is the space of
Markov chain realizations that leave a distribution $\pi^*$ invariant.
Our hope is that if we use such a space with a $\pi^*$ that is a good
approximation to $\pi$, but faster to compute, then MCMC with
temporary mapping and caching will be faster than MCMC methods using
only $\pi$.
We then consider how the tempered transition method due to
\citet{Neal:1996} can also be viewed as mapping temporarily to another
space. Using this view, we give a different proof that detailed
balance holds for tempered transitions. We then discuss how the
sequence of transitions
$\hat{T}_1,\hat{T}_2,...,\check{T}_2,\check{T}_1$ (which collectively
form the tempered transition) should be chosen when they are defined
using fast approximations, rather than (as in the original context for
tempered transitions) by modifying the original distribution, $\pi$, in
a way that does not reduce computation time.
We apply these two proposed schemes to Gaussian process regression
models that have a covariance function with unknown hyperparameters,
whose posterior distribution must be sampled using MCMC. We discuss
several fast GP approximation methods that can be used to construct an
approximate $\pi^*$. We conclude by presenting experiments on
synthetic datasets using the new methods that show that these methods
are indeed faster than standard methods using only $\pi$.
\end{section}
\begin{section}{MCMC with temporary mapping and caching} \label{sec:map-cache}
To start, we present two general ideas for improving MCMC ---
temporarily mapping to a different state space, and caching the
results of posterior density computations for possible later use.
\begin{subsection}{Creating Markov transitions using temporary mappings}
To obtain samples of a target distribution $\pi$ from space $\mathcal{X}$
using MCMC, we need to find a transition probability $T(x'|x)$, for which
\begin{equation}
\label{eq:inv}
\int \pi(x) T(x'|x) dx = \pi(x')
\end{equation}
i.e., $T(x'|x)$ leaves the target distribution $\pi$ invariant. There
are many ways to form such a transition. In the famous Metropolis
algorithm \citep{Metropolis:1953}, from a current state $x$, we
propose to move to a candidate state $x^*$ according to a
proposal distribution $S(x'|x)$ that is symmetric (i.e., $S(x'|x)=S(x|x')$),
and then accept this proposal with probability $\min(1,\pi(x^*)/\pi(x))$. If
this proposal is accepted, the new state is $x'=x^*$, otherwise
$x'=x$. It's easy to show that these transitions leave $\pi$ invariant
(in fact they satisfy the stronger ``detailed balance'' condition that
$\pi(x)T(x'|x)=\pi(x')T(x|x')$).
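For concreteness, the following is a minimal sketch of one such
Metropolis transition, working with the log of the (possibly
unnormalized) density and a symmetric Gaussian random-walk proposal; it
is not taken from the methods developed later, and \texttt{log\_pi} and
the proposal width \texttt{sigma} are placeholders:
\begin{verbatim}
import numpy as np

def metropolis_step(x, log_pi_x, log_pi, sigma, rng):
    # One Metropolis transition with a symmetric Gaussian proposal S(x*|x).
    # x is a 1-D numpy array; log_pi_x is the already-computed value of
    # log pi(x), carried along so it never has to be recomputed.
    x_star = x + sigma * rng.standard_normal(len(x))
    log_pi_star = log_pi(x_star)            # the only new density evaluation
    if np.log(rng.uniform()) < log_pi_star - log_pi_x:
        return x_star, log_pi_star          # accept: x* becomes the new state
    return x, log_pi_x                      # reject: keep x and its density value
\end{verbatim}
Carrying \texttt{log\_pi\_x} along from one transition to the next is
exactly the simple form of caching discussed in the next subsection.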
The temporary mapping technique \citep{Neal:2006} defines such a
transition via three other stochastic mappings, $\hat{T}$, $\bar{T}$
and $\check{T}$, as follows:
\begin{equation}
\label{eq:mapping}
x \stackrel{\hat{T}}{\longrightarrow} y \stackrel{\bar{T}}{\longrightarrow} y' \stackrel{\check{T}}{\longrightarrow} x'
\end{equation}
where $x,x' \in \mathcal{X}$ and $y,y' \in \mathcal{Y}$. Starting from $x$, we obtain a value $y$ in the temporary space $\mathcal{Y}$ by $\hat{T}(y|x)$. The target distribution for $y$ has probability mass/density function $\rho(y)$. We require that
\begin{equation}
\label{eq:mapup}
\int \pi(x)\hat{T}(y|x) dx = \rho(y)
\end{equation}
We then obtain another sample $y'$ using $\bar{T}(y'|y)$, which leaves $\rho$ invariant:
\begin{equation}
\label{eq:mapbar}
\int \rho(y)\bar{T}(y'|y) dy = \rho(y')
\end{equation}
Finally, we map back to $x'\in \mathcal{X}$ using $\check{T}(x'|y)$, which we require to satisfy
\begin{equation}
\label{eq:mapdown}
\int \rho(y') \check{T}(x'|y') dy' = \pi(x')
\end{equation}
It's easy to see that the combined transition $T(x'|x) = \int\int \hat{T}(y|x)\bar{T}(y'|y)\check{T}(x'|y') dydy'$ leaves $\pi$ invariant:
\begin{eqnarray}
\int \pi(x) T(x'|x) dx & = & \int\int\int \pi(x)\hat{T}(y|x)\bar{T}(y'|y)
\check{T}(x'|y') dydy'dx \\
& = & \int\int \rho(y)\bar{T}(y'|y)\check{T}(x'|y')dydy' \\
& = & \int \rho(y')\check{T}(x'|y')dy' \\
& = & \pi(x')
\end{eqnarray}
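Schematically, the combined transition is just the composition of the three
mappings. The following sketch (our own illustration; \texttt{T\_hat},
\texttt{T\_bar} and \texttt{T\_check} stand for user-supplied samplers
satisfying \eqref{eq:mapup}--\eqref{eq:mapdown}) makes this explicit:
\begin{verbatim}
def composite_transition(x, T_hat, T_bar, T_check, rng):
    # x -> y -> y' -> x', as in the mapping diagram above.
    y = T_hat(x, rng)             # map into the temporary space Y
    y_prime = T_bar(y, rng)       # update within Y, leaving rho invariant
    return T_check(y_prime, rng)  # map back to X
\end{verbatim}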
Quite a few existing methods can be viewed as mapping to temporary
spaces. For instance, the technique of temporarily introducing
auxiliary variables can be considered as mapping from $x$ to
$y=(x,z)$, where $z$ is a set of auxiliary variables.
\end{subsection}
\begin{subsection}{Caching values for future re-use}
Many MCMC transitions require evaluating the probability density of
$\pi$, up to a possibly unknown normalizing constant. For example,
each iteration of the Metropolis algorithm needs the probability
density values of both the current state $x$ and the candidate state
$x^*$. Since these evaluations typically dominate the MCMC computation
time, it may be desirable to save (`cache') computed values of
$\pi(x)$ so they can be re-used when the same state $x$ appears in the
chain again.
Caching is always useful for the Metropolis algorithm, since if we
reject a proposal $x^*$, we will need $\pi(x)$ for the next
transition, and if we instead accept $x^*$ then it becomes the current
state and we will need $\pi(x^*)$ for the next transition.
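As a small illustration of this point (a sketch of our own, not code from any
particular implementation; the callables for the log density and the proposal
are placeholders), a Metropolis update can carry the cached log density of the
current state along with the state, so that $\pi$ is evaluated only once per
transition:
\begin{verbatim}
import math, random

def metropolis_step(x, log_pi_x, log_pi, propose, rng=random):
    # One Metropolis update.  log_pi_x is the cached value of log pi(x),
    # so pi is evaluated only once here (at the proposed state).
    x_star = propose(x, rng)              # draw from a symmetric proposal
    log_pi_star = log_pi(x_star)
    if rng.random() < math.exp(min(0.0, log_pi_star - log_pi_x)):
        return x_star, log_pi_star        # accept: cache the new value
    return x, log_pi_x                    # reject: keep the cached value
\end{verbatim}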
When the proposal distribution is discrete (as it will always be when
the state space is discrete), the probability of proposing an $x^*$
that was previously proposed can be positive, so saving the computed
value of $\pi(x^*)$ may be beneficial even if $x^*$ is rejected. When
the state space is continuous, however, the proposal distributions
commonly used are also continuous, and we will have zero probability
of proposing the same $x^*$ again. But in this case, as we will see
next, caching can still be beneficial if we first map to another space
with a ``discretizing chain''. \end{subsection}
\end{section}
\begin{section}{Mapping to a discretizing chain}\label{sec:map}
To take full advantage of both mapping and caching, we propose a
temporary mapping scheme where the temporary space is continuous, but
is effectively discrete with regard to transitions $\bar{T}$.
Let $R(x'|x)$ be the transition probabilities for a Markov Chain which leaves $\pi^*$ invariant. Let $\tilde{R}(x|x')=R(x'|x)\pi^*(x)/\pi^*(x')$ be the reverse transition probabilities, which clearly also leave $\pi^*$ invariant.
We map from $\mathcal{X}$ to $\mathcal{Y}$, a space of realizations of this Markov Chain of length $K$, where one time step of this chain is ``marked''. To map $x\in \mathcal{X}$ to $y\in \mathcal{Y}$, we use a $\hat{T}$ that operates as follows:
\begin{itemize}
\item Choose $k$ uniformly from $0,...,K-1$.
\item Simulate $K-1-k$ forward transition steps using $R$ starting at $x_k=x$, producing states $x_{k+1},...,x_{K-1}$.
\item Simulate $k$ reverse transitions using $\tilde{R}$, starting at $x_k=x$, producing states $x_{k-1},...,x_0$.
\item Set the ``marked'' time step to $k$.
\end{itemize}
The transition $\bar{T}$ moves the mark along the chain from $k$ to
another time step $k' \in \{0,\ldots,K\!-\!1\}$, while keeping the current
chain realization, $(x_0,\ldots,x_{K-1})$, fixed. The transition
$\check{T}$ just takes the marked state, so $x'=x_{k'}$. The actual
implementation will not necessarily simulate all $K-1$ steps of the
discretizing chain --- a new step is simulated only when it is
needed. We can then let $K$ go to infinity, so that $\bar{T}$ can move
the mark any finite number of steps forward or backward.
\begin{figure}
\caption{Mapping to a discretizing chain and back.}
\label{fig:mapping}
\end{figure}
Figure \ref{fig:mapping} illustrates this scheme. Note that an element
$y \in \mathcal{Y}$ is a chain realization with a mark placed on the
time step $k$. We write $y=(k;x_0,...,x_{K-1})$. When we say we ``move
the mark from $k$ to $k'$'', we actually use a transition $\bar{T}$ to
move from $y=(k;x_0,...,x_{K-1})$ to $y'=(k';x_0,...,x_{K-1})$, where
$y$ and $y'$ share the same chain realization and differ only on the
marked position. We are free to choose the way $\bar{T}$ moves the
mark in any way that leaves $\rho$ invariant --- for instance, we can
pick a number $s$ and propose to move the mark from $k$ to $k+s$ or $k-s$
with equal probabilities. We can make $r$ such moves within each
mapping. The discretizing chain makes the state space effectively
discrete, even though the space $\mathcal{Y}$ is continuous, and
consequently, when we move the mark around the chain realization,
there is a positive probability of hitting a location that has been
visited before.
The transition $\bar{T}$ has to leave $\rho(y)$ invariant. We compute the ratio of $\rho(y')$ and $\rho(y)$ to see how we can construct such a $\bar{T}$. $\rho$ has been implicitly defined in \eqref{eq:mapup} as the distribution resulting from applying $\hat{T}$ to $x$ drawn from $\pi$. The probability of sampling $y$ is given by the simulation process described above (i.e.\ start from $x$, simulate $K-1-k$ forward steps using $R$ and $k$ backward steps using $\tilde{R}$); namely, if $y=(k;x_0,...,x_{K-1})$,
\begin{align}\notag
\rho(y) &= \pi(x_k)\frac{1}{K}R(x_{k+1}|x_k)\cdots R(x_{K-1}|x_{K-2})\times \tilde{R}(x_{k-1}|x_k)\cdots \tilde{R}(x_0|x_1)\\[3pt]
\label{eq:rho_k}
&=\frac{\pi(x_k)}{\pi^*(x_k)}\frac{1}{K} \underbrace{\pi^*(x_k)R(x_{k+1}|x_k)\cdots R(x_{K-1}|x_{K-2})\times \tilde{R}(x_{k-1}|x_k)\cdots \tilde{R}(x_0|x_1)}_{:=A}
\end{align}
An expression for $\rho(y')$ can be similarly obtained for $y'=(k';x_0,...,x_{K-1})$:
\begin{align}
\label{eq:rho_k'}
\rho(y') &=\frac{\pi(x_{k'})}{\pi^*(x_{k'})}\frac{1}{K} \underbrace{\pi^*(x_{k'})R(x_{k'+1}|x_{k'})\cdots R(x_{K-1}|x_{K-2})\times \tilde{R}(x_{k'-1}|x_{k'})\cdots \tilde{R}(x_0|x_1)}_{:=A'}
\end{align}
We take out a factor of the ratio of densities $\pi/\pi^*$ from both \eqref{eq:rho_k} and \eqref{eq:rho_k'}, and write the remaining term as $A$ or $A'$, as indicated in the respective equation. Since $R$ and $\tilde{R}$ are reverse transitions with respect to $\pi^*$, if $k'>k$, then
\begin{align} \notag
\lefteqn{\pi^*(x_k)R(x_{k+1}|x_k)\cdots R(x_{k'}|x_{k'-1})} \\
\notag \ \
&\quad = \tilde{R}(x_k|x_{k+1}) \pi^*(x_{k+1})R(x_{k+2}|x_{k+1})\cdots R(x_{k'}|x_{k'-1}) \\ \notag
&\qquad \vdots \\
&\quad=\tilde{R}(x_k|x_{k+1})...\tilde{R}(x_{k'-1}|x_{k'}) \pi^*(x_{k'})
\end{align}
It therefore follows that $A=A'$. A similar argument shows that $A=A'$ when
$k' \le k$. Thus the ratio of $\rho(y')$ and $\rho(y)$ is
\begin{align}
\frac{\rho(y')}{\rho(y)}
\label{eq:rho_kk'}
&=\frac{\pi(x_{k'})/\pi^*(x_{k'})}{\pi(x_k)/\pi^*(x_k)}
\end{align}
Equation \eqref{eq:rho_kk'} implies that to leave $\rho$ invariant we can use a Metropolis type transition, $\bar{T}$, that proposes to move the mark from $k$ to $k'$ and accepts the move with probability
\[ \min\left(1,\frac{\pi(x_{k'})/\pi^*(x_{k'})}{\pi(x_k)/\pi^*(x_k)}\right)\]
Note that if $\pi=\pi^*$, then the transition $\bar{T}$ will accept a
move of the mark to any other time step on the discretizing chain,
since the discretizing chain actually leaves the target
distribution $\pi^*$ invariant and therefore every time step of this
chain is a valid sample of $\pi$. If $\pi^*\ne \pi$, but is very
similar to $\pi$, we can hope the acceptance rate will be high. In
addition, if the evaluation of $\pi^*(x)$ takes much less time than that
of $\pi(x)$, mapping to the discretizing chain and then proposing
large moves of the mark can save computation time,
since it effectively replaces evaluations of $\pi$ with evaluations of $\pi^*$,
except for the acceptance decisions.
On the other hand, if $\pi^*$
is completely arbitrary, the acceptance rate will be low, and if the
evaluation of $\pi^*$ is not much faster than that of $\pi$, we will not save
computation time. These $\pi^*$'s are not useful. We need
$\pi^*$ to be a fast but good approximation to $\pi$. We will discuss this
in the context of GP models in a later section.
Every time we map into a temporary space, we can make multiple $\bar{T}$ updates (move the ``mark'' several times). This way we can take advantage of the ``caching'' idea, since sometimes the mark will be moved to a state where $\pi$ has already been computed, and therefore no new computation is needed. The number of updates is a tuning parameter, which we denote as ``$r$''. Another tuning parameter, which we denote as ``$s$'', is the number of steps of transition $R$ to ``jump'' when we try to move the mark. Note that although we only ``bring back'' (using $\check{T}$) the last updated sample as $x'$, all of the marked states are valid samples of $\pi(x)$, and can be used for computing expectations with respect to $\pi$ if desired.
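To make the scheme above concrete, the following Python sketch (our own
illustration; the callables for $\log\pi$, $\log\pi^*$ and for one forward or
reverse step of $R$ are hypothetical placeholders) implements one
temporary-mapping update with lazy simulation of the discretizing chain and
caching of the $\pi$ evaluations:
\begin{verbatim}
import math, random

def discretizing_chain_update(x, log_pi, log_pi_star, R, R_rev,
                              r=1, s=1, rng=random):
    # One update by mapping to a discretizing chain (a sketch).
    # r : number of attempted mark moves per mapping
    # s : number of R steps to jump per attempted move
    chain = {0: x}                 # lazily extended chain realization
    cache = {0: log_pi(x) - log_pi_star(x)}   # cached log(pi/pi*) values
    k = 0                          # current position of the mark
    for _ in range(r):
        kp = k + s if rng.random() < 0.5 else k - s
        lo, hi = min(chain), max(chain)
        while hi < kp:             # simulate forward steps of R as needed
            chain[hi + 1] = R(chain[hi], rng); hi += 1
        while lo > kp:             # simulate reverse steps of R~ as needed
            chain[lo - 1] = R_rev(chain[lo], rng); lo -= 1
        if kp not in cache:        # pi is evaluated at most once per state
            cache[kp] = log_pi(chain[kp]) - log_pi_star(chain[kp])
        if rng.random() < math.exp(min(0.0, cache[kp] - cache[k])):
            k = kp                 # accept the move of the mark
    return chain[k]                # T-check: return the marked state
\end{verbatim}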
\end{section}
\begin{section}{Tempered transitions} \label{sec:tmp}
The ``tempered transitions'' method of \citet{Neal:1996} can also be
viewed as mapping to a temporary space. This method aims to sample
from $\pi$ using a sequence of distributions $\pi=\pi_0,\ \pi_1,
\ldots,\ \pi_n$.
For $i=0,\ldots,n$, let $\hat{T}_i$ (called the ``up'' transition) and
$\check{T}_i$ (the ``down'' transition) be mutually reversible
transitions with respect to the density $\pi_i$ --- i.e. for any pair of
states $x_i$ and $x_i'$,
\begin{equation}
\pi_i(x_i)\hat{T}_i(x_i'|x_i) = \check{T}_i(x_i|x_i') \pi_i(x_i')
\label{eq:mutual_reversibility}
\end{equation}
This condition implies that both $\hat{T}_i$ and $\check{T}_i$ have
$\pi_i$ as their invariant distribution. If $\hat{T}_i = \check{T}_i$
then \eqref{eq:mutual_reversibility} reduces to the detailed balance
condition. If $\hat{T}_i=S_1S_2...S_k$ with all of $S_i$ being
reversible transitions, then $\check{T}_i=S_kS_{k-1}...S_1$ would
satisfy condition \eqref{eq:mutual_reversibility}.
We map from $x\in\mathcal{X}$ to $y\in\mathcal{Y}$, a space of realizations of tempered transitions, using a $\hat{T}$ that operates as follows:
\begin{quotation}
Generate $\hat{x}_1$ from $x$ using $\hat{T}_1$;
Generate $\hat{x}_2$ from $\hat{x}_1$ using $\hat{T}_2$;
\indent\indent\vdots
Generate $\bar{x}_n$ from $\hat{x}_{n-1}$ using $\hat{T}_n$.
Generate $\check{x}_{n-1}$ from $\bar{x}_n$ using $\check{T}_n$;
Generate $\check{x}_{n-2}$ from $\check{x}_{n-1}$ using $\check{T}_{n-1}$;
\indent\indent\vdots
Generate $x^*$ from $\check{x}_1$ using $\check{T}_1$.
\end{quotation}
An element $y\in\mathcal{Y}$ can be written as $y=(x,\hat{x}_1,...,\bar{x}_n,...,\check{x}_1,x^*)$.
$\bar{T}$ attempts to flip the order of $y$, accepting the flip with probability
\begin{equation}
\min\left(1,
\frac{\pi_1(\hat{x}_0)}{\pi_0(\hat{x}_0)}\cdots\frac{\pi_n(\hat{x}_{n-1})}{\pi_{n-1}(\hat{x}_{n-1})}\cdot
\frac{\pi_{n-1}(\check{x}_{n-1})}{\pi_n(\check{x}_{n-1})}\cdots\frac{\pi_0(\check{x}_0)}{\pi_1(\check{x}_0)}
\right)
\label{eq:temper_acc_prob}
\end{equation}
where $\hat{x}_0$ and $\check{x}_0$ are synonyms for
$x$ and $x^*$, respectively, to keep notations consistent.
In other words, with this probability, we set $y'$ to
$y^*=(x^*,\check{x}_1,...,\bar{x}_n,...,\hat{x}_1,x)$ (the order is
reversed); otherwise we set $y'=y$ (the order is preserved).
Finally, $\check{T}$ maps back to $x'\in\mathcal{X}$ by taking the
first coordinate of $y'$ (either the original $x$ or $x^*$, depending
on whether or not the flip was accepted).
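As an illustration (again a sketch of our own, with hypothetical interfaces
rather than the original implementation), one tempered transition can be
written as two passes through the sequence of distributions, accumulating the
log of the acceptance ratio in \eqref{eq:temper_acc_prob} along the way:
\begin{verbatim}
import math, random

def tempered_transition(x, log_pis, T_up, T_down, rng=random):
    # log_pis : [log pi_0, ..., log pi_n], with pi_0 the target pi
    # T_up[i-1], T_down[i-1] : mutually reversible transitions w.r.t. pi_i
    n = len(log_pis) - 1
    log_ratio, z = 0.0, x
    for i in range(1, n + 1):          # "up" pass
        log_ratio += log_pis[i](z) - log_pis[i - 1](z)
        z = T_up[i - 1](z, rng)
    for i in range(n, 0, -1):          # "down" pass
        z = T_down[i - 1](z, rng)
        log_ratio += log_pis[i - 1](z) - log_pis[i](z)
    if rng.random() < math.exp(min(0.0, log_ratio)):
        return z                       # flip accepted: return x*
    return x                           # flip rejected: keep x
\end{verbatim}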
Using the temporary mapping perspective, we can show that tempered
transitions are valid updates, leaving $\pi$ invariant, by defining
$\rho$ to be the result of applying $\hat T$ to a point drawn from
$\pi$, and then showing that $\bar T$ leaves $\rho$ invariant, and
that $\check T$ produces a point distributed as $\pi$ from a point
distributed as $\rho$.
The $\hat T$ mapping from $x=\hat{x}_0$ to
$y=(\hat{x}_0,\hat{x}_1,...,\bar{x}_n,...,\check{x}_1,\check{x}_0)$
involves a sequence of transitions:
\[ \hat{x}_0\stackrel{\hat{T}_1}{\longrightarrow}\hat{x}_1\stackrel{\hat{T}_2}{\longrightarrow}\hat{x}_2\longrightarrow\cdots\longrightarrow\hat{x}_{n-1}\stackrel{\hat{T}_n}{\longrightarrow}\bar{x}_n \stackrel{\check{T}_n}{\longrightarrow}\check{x}_{n-1}\stackrel{\check{T}_{n-1}}{\longrightarrow}\check{x}_{n-2}\longrightarrow\cdots\longrightarrow\check{x}_1\stackrel{\check{T}_1}{\longrightarrow}\check{x}_0 \]
The probability density, $\rho$, for $y$ can be computed from this as
\begin{equation}
\rho(y) = \pi_0(\hat{x}_0)\hat{T}_1(\hat{x}_1|\hat{x}_0)\cdots\hat{T}_n(\bar{x}_n|\hat{x}_{n-1})
\check{T}_n(\check{x}_{n-1}|\bar{x}_n)\cdots\check{T}_1(\check{x}_0|\check{x}_1) \label{eq:rhodef}
\end{equation}
Similarly,
\begin{equation}
\rho(y^*) = \pi_0(\check{x}_0)\hat{T}_1(\check{x}_1|\check{x}_0)\cdots\hat{T}_n(\bar{x}_n|\check{x}_{n-1})
\check{T}_n(\hat{x}_{n-1}|\bar{x}_n)\cdots\check{T}_1(\hat{x}_0|\hat{x}_1)
\end{equation}
Now we compute the ratio of probability densities of $y^*$ and $y$:
\begin{align}
\notag
\frac{\rho(y^*)}{\rho(y)} &= \frac{\pi_0(\check{x}_0)\hat{T}_1(\check{x}_1|\check{x}_0)\cdots\hat{T}_n(\bar{x}_n|\check{x}_{n-1}) \check{T}_n(\hat{x}_{n-1}|\bar{x}_n)\cdots\check{T}_1(\hat{x}_0|\hat{x}_1)}
{ \pi_0(\hat{x}_0)\hat{T}_1(\hat{x}_1|\hat{x}_0)\cdots\hat{T}_n(\bar{x}_n|\hat{x}_{n-1})
\check{T}_n(\check{x}_{n-1}|\bar{x}_n)\cdots\check{T}_1(\check{x}_0|\check{x}_1)} \\ \label{eq:reo1}
&=\pi_0(\check{x}_0)\cdot
\frac{\hat{T}_1(\check{x}_1|\check{x}_0)}{\check{T}_1(\check{x}_0|\check{x}_1)}
\cdots
\frac{\hat{T}_n(\bar{x}_n|\check{x}_{n-1}) \check{T}_n(\hat{x}_{n-1}|\bar{x}_n)}
{\check{T}_n(\check{x}_{n-1}|\bar{x}_n)\hat{T}_n(\bar{x}_n|\hat{x}_{n-1})}
\cdots
\frac{\check{T}_1(\hat{x}_0|\hat{x}_1)}{\hat{T}_1(\hat{x}_1|\hat{x}_0)}
\cdot
\frac{1}{\pi_0(\hat{x}_0)}\\
\label{eq:mut_rev}
&=\pi_0(\check{x}_0)\cdot
\frac{\pi_1(\check{x}_1)}{\pi_1(\check{x}_0)}\cdots
\frac{\pi_n(\bar{x}_n)}{\pi_n(\check{x}_{n-1})}\cdot
\frac{\pi_n(\hat{x}_{n-1})}{\pi_n(\bar{x}_n)}\cdots
\frac{\pi_1(\hat{x}_0)}{\pi_1(\hat{x}_1)}\cdot
\frac{1}{\pi_0(\hat{x}_0)}
\\\label{eq:reo2}
&=\frac{\pi_1(\hat{x}_0)}{\pi_0(\hat{x}_0)}\cdots\frac{\pi_n(\hat{x}_{n-1})}{\pi_{n-1}(\hat{x}_{n-1})}\cdot
\frac{\pi_{n-1}(\check{x}_{n-1})}{\pi_n(\check{x}_{n-1})}\cdots\frac{\pi_0(\check{x}_0)}{\pi_1(\check{x}_0)}
\end{align}
We obtain \eqref{eq:mut_rev} from the mutual reversibility property of the
transitions $\hat T_i$ and $\check T_i$,
and \eqref{eq:reo1} and \eqref{eq:reo2} simply by reordering terms.
From (\ref{eq:reo2}), we see that the probability of accepting the
flip from $y$ to $y^*$ given by (\ref{eq:temper_acc_prob}) is equal to
$\min(1,\rho(y^*)/\rho(y))$, and thus $\bar T$ satisfies detailed
balance with respect to $\rho$. It is also clear from (\ref{eq:rhodef})
that the marginal distribution under $\rho$ of the first component of $y$
is $\pi_0=\pi$, and thus $\check T$ maps from $\rho$ to $\pi$.
The original motivation of the tempered transition method described by
\citet{Neal:1996} is to move between isolated modes of multimodal
distributions. The distributions $\pi_1,...,\pi_n$ are typically of
the same class as $\pi$, but broader, making it easier to move between
modes of $\pi$ (typically, as $i$ gets larger, the distribution
$\pi_i$ gets broader, thus making it more likely that modes have
substantial overlap). Evaluating the densities for $\pi_1,...,\pi_n$
typically takes similar computation time as evaluating the density for
$\pi$. Our mapping-caching scheme, on the other hand, is designed to
reduce computation. Ideally, in our scheme the bigger $i$ is, the
faster is the evaluation of $\pi_i(x)$. One possibility for this is
that each $\pi_i$ is an approximation of $\pi$, and as $i$ increases
the computation of $\pi_i$ becomes cheaper (but worse).
The two methods we propose in this paper are equivalent if
the following are all true:
\begin{itemize}
\item For mapping to a discretizing chain:
\begin{enumerate}
\item The transition $R$ which leaves $\pi^*$ invariant is reversible.
\item $s=2k$, i.e. $\bar{T}$ always attempts to move the mark over an even number of $R$ updates.
\item $r=1$, i.e. $\bar{T}$ attempts to move the mark only once within each mapping.
\end{enumerate}
\item For mapping by tempered transitions:
\begin{enumerate}
\item $n=1$, i.e., there is only one additional distribution.
\item $\hat{T}_1 = \check{T}_1=R^{k}$, i.e.
these transitions consist of $k$ updates using $R$ (and hence $\pi_1=\pi^*$).
\end{enumerate}
\end{itemize}
When all above are true except that $n>1$, so more than one additional
distribution is used in the tempered transitions, we might expect
tempered transitions to perform better, as they propose a new point
through the guidance of these additional distributions, and
computations for these additional distributions should be negligible,
if they are faster and faster approximations. On the other hand, we
might think that $r>1$ will improve the performance when mapping to a
discretizing chain, since then caching could be exploited.
So each method may have its own advantages.
\end{section}
\begin{section}{Application to Gaussian process models}\label{sec:app}
We now show how these MCMC methods can be applied to Bayesian inference
for Gaussian process models.
\begin{subsection}{Introduction to Gaussian process models}
We start with a brief introduction to Gaussian process (GP) models to establish
notation. The problem is to model the
association between covariates $x$ and a response $y$ using $n$
observed pairs $(x_1,y_1),...,(x_n,y_n)$, and then make predictions of
$y$ for future items once their covariates, $x$, have been observed.
We can write such a model as
\begin{equation}
\label{reg}
y_i = f(x_i) + \epsilon_i
\end{equation}
where $x_i$ is a covariate vector of length $p$, and $y_i$ is the corresponding
scalar response. The $\epsilon_i$ are random residuals,
assumed to have Gaussian distributions with mean 0 and constant
variance $\sigma^2$.
Bayesian GP models assume that the noise-free function $f$ comes from
a Gaussian Process which has prior mean function zero and some
specified covariance function. Note that a zero mean prior is not a
requirement --- we could specify a non-zero prior mean function $m(x)$
if we have \textit{a priori} knowledge of the mean structure. Using a
zero mean prior just reflects prior knowledge that the function is
equally likely to be positive or negative; the posterior mean of the
function is typically not zero.
The covariance function could be fixed \textit{a priori}, but more
commonly is specified in terms of unknown hyperparameters, $\theta$, which are
then estimated from the data. Given the values of the hyperparameters,
the response $y$ follows a multivariate Gaussian distribution with
zero mean and a covariance matrix given by
\begin{align}
\label{covy}
\text{Cov}(y_i,y_j) &\ =\ K(x_i,x_j) + \text{Cov}(\epsilon_i,\epsilon_j)
\ =\ K(x_i,x_j) + \delta_{ij}\sigma^2
\end{align}
where $\delta_{ii} = 1$ and $\delta_{ij}=0$ when $i\ne j$, and
$K$ is the covariance function of $f$. Any covariance function that
always leads to a positive semi-definite covariance matrix can be used.
One example is the squared exponential
covariance function with isotropic length-scale (to which we add a constant
allowing the overall level of the function to be shifted from zero):
\begin{equation}
\label{eq:covSEiso}
K(x_i,x_j) = c^2 + \eta^2 \exp\left( -\frac{\|x_{i}-x_{j}\|^2}{\rho^2}\right)
\end{equation}
Here, $c$ is a fairly large constant (not excessively large, to avoid
numerical singularity), and $\eta$, $\sigma$, and $\rho$ are
hyperparameters --- $\eta$ controls the magnitude of variation of $f$,
$\sigma$ is the residual standard deviation, and $\rho$ is a length scale
parameter for the covariates. We can instead assign a different length scale
to each covariate, which leads to the squared exponential covariance
function with automatic relevance determination (ARD):
\begin{equation}
\label{covfun}
K(x_i,x_j) = c^2 + \eta^2 \exp\left( -\sum_{k=1}^p \frac{(x_{ik}-x_{jk})^2}{\rho_k^2}\right)
\end{equation}
Unless noted otherwise, we will use the squared exponential covariance functions \eqref{eq:covSEiso} or \eqref{covfun} throughout this paper.
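As a point of reference (a minimal sketch of our own; the function and
argument names are not taken from any existing package), the ARD covariance
matrix in \eqref{covfun}, with the noise term of \eqref{covy} optionally
added, can be computed as follows:
\begin{verbatim}
import numpy as np

def cov_matrix(X, c, eta, rho, sigma=None):
    # Squared exponential covariance for the rows of the (n, p) array X.
    # rho may be a scalar (isotropic) or a length-p vector (ARD).
    Z = X / np.asarray(rho)                       # scale by length scales
    sq = np.sum(Z**2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T, 0.0)
    K = c**2 + eta**2 * np.exp(-d2)
    if sigma is not None:                         # add residual variance
        K = K + sigma**2 * np.eye(X.shape[0])
    return K
\end{verbatim}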
When the values of the hyperparameters are known, the predictive
distribution for the response, $y_*$, of a test case with covariates
$x_*$, based on observed values $x=(x_1,...,x_n)$ and $(y_1,...,y_n)$,
is Gaussian with the following mean and variance:
\begin{equation}
\label{predmean}
E(y_*|x,y,x_*,\theta) = k^TC(\theta)^{-1}y
\end{equation}
\begin{equation}
\label{predvar}
\text{Var}(y_*|x,y,x_*,\theta) = v - k^TC(\theta)^{-1}k
\end{equation}
In the equations above, $k$ is the vector of covariances between $y_*$
and each of $y_i$, $C(\theta)$ is the covariance matrix of the
observed $y$, based on the known hyperparameters $\theta$, and $v$ is
the prior variance of $y_*$, which is $\text{Cov}(y_*,y_*)$ from \eqref{covy}.
When the values of the hyperparameters are
unknown, and therefore must be estimated from the data, we put a
prior, $p(\theta)$, on them (typically an independent Gaussian prior
on the logarithm of each hyper-parameter), and obtain the posterior
distribution $p(\theta|x,y) \propto \mathcal{N}(y|0,C(\theta))\,p(\theta)$.
The predictive mean of $y$ is then computed by integrating over the posterior
distribution of the hyperparameters:
\begin{equation}
\label{predmeanint}
E(y_*|x,y,x_*) = \int_{\Theta} k^TC(\theta)^{-1}y \cdot p(\theta|x,y)\, d\theta
\end{equation}
The predicted variance is given by
\begin{align}
\label{predvarint}
\mbox{Var}(y_*|x,y,x_*) & \ = \
E[\mbox{Var}(y_*|x,y,x_*,\theta)\,|\,x,y] \ + \
\mbox{Var}[E(y_*|x,y,x_*,\theta)\,|\,x,y]
\end{align}
Finding $C^{-1}$ directly takes time proportional to $n^3$, but we do
not have to find the inverse of $C$ explicitly. Instead we find the
Cholesky decomposition of $C$, denoted as $R=\mbox{chol}(C)$, for
which $R^TR=C$ and $R$ is an ``upper'' triangular matrix (also called
a ``right'' triangular matrix). This also takes time proportional to $n^3$,
but with a much smaller constant. We then solve $R^Tu = y$ for $u$
using a series of forward substitutions (taking time proportional to $n^2$).
From $R$ and $u$, we can compute the likelihood for $\theta$, which is
needed to compute the posterior density, by making use of the expressions
\begin{equation}
y^TC^{-1} y = y^T(R^TR)^{-1}y = y^TR^{-1} \left(R^{T}\right)^{-1}y = u^T u
\end{equation}
and
\begin{equation}
\det(C) = \det(R)^2 = \prod_{i=1}^n R_{ii}^2
\end{equation}
Similarly, equations (\ref{predmean}) and (\ref{predvar}) can be reformulated
to use $R$ rather than $C^{-1}$.
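For concreteness, a minimal numpy/scipy sketch of this computation (our own
illustration, not the code used for the experiments below) is:
\begin{verbatim}
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def gp_log_likelihood(y, C):
    # log N(y | 0, C) from the upper Cholesky factor R with R^T R = C,
    # using y^T C^{-1} y = u^T u and det C = prod_i R_ii^2.
    n = len(y)
    R = cholesky(C)                           # upper triangular factor
    u = solve_triangular(R, y, trans='T')     # solve R^T u = y
    return (-0.5 * (u @ u) - np.sum(np.log(np.diag(R)))
            - 0.5 * n * np.log(2.0 * np.pi))
\end{verbatim}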
\end{subsection}
\begin{subsection}{Approximating $\pi$ for GP models}
As discussed in Section \ref{sec:map}, using a poor $\pi^*$ for the
discretizing chains on $\mathcal{Y}$, or poor $\pi_i$ for tempered
transitions, can lead to a poor MCMC method which is not useful. We
would like to choose approximations to $\pi$ that are good, but that can
nevertheless be computed much faster than $\pi$. For GP regression
models, $\pi$ will be the posterior distribution of the
hyperparameters, $\theta$.
Quite a few efficient approximation methods for GP models have been
discussed from a different perspective. For example, \citet{QC:2007}
categorizes these approximations in terms of ``effective prior''. Most
of these methods are aimed at approximate training and prediction;
not all of them are suitable for forming a posterior approximation, $\pi^*$.
For example, a method that speeds up only the predictions does not help us form $\pi^*$.
\begin{subsubsection}{Subset of data (SOD)}
The most obvious approximation is to simply take a
subset of size $m$ from the $n$ observed pairs $(x_i,y_i)$ and use the
posterior distribution given only these observations as $\pi^*$:
\begin{align}
\pi^*(\theta) &\ =\ \mathcal{N}(y|0,\hat{C}_{(m)}(\theta))\,p(\theta)
\label{eq:pistar}
\end{align}
where $p(\theta)$ is the prior for $\theta$, the vector of
hyperparameters, and $\mathcal{N}(a|\mu,\Sigma)$ denotes the
probability density of a multivariate normal distribution
$N(\mu,\Sigma)$ evaluated at $a$. $\hat{C}_{(m)}(\theta)$ is computed based on
hyperparameters $\theta$ and the $m$ observations in the subset.
Even though the SOD method seems quite naive, it does speed up
computation of the Cholesky decomposition of $C$ from time
proportional to $n^3$ to time proportional to $m^3$.
If a small subset (say 10\% of the full dataset) is used to form
$\pi^*$, we can afford to do a lot of Markov chain updates for $\pi^*$,
since the time it takes to make these updates will be quite small
compared to a computation of $\pi$.
So a $\pi^*$ formed by this method might still be useful.
To form a $\pi^*$ using SOD, we need the following major computations, if
there are $p$ covariates:\\
\begin{tabular}{c|c}
\hline
Operation & Complexity \\ \hline
Compute $\hat{C}_{(m)}$ & $pm^2$ \\ \hline
Find chol($\hat{C}_{(m)}$) & $m^3$ \\ \hline
\end{tabular}
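A sketch of the resulting $\pi^*$ (our own illustration; the helpers for the
covariance matrix and the prior are assumed, not part of any standard package)
is simply the full log posterior restricted to the retained subset:
\begin{verbatim}
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def log_pi_star_sod(theta, X, y, subset, log_prior, cov_fn):
    # SOD approximation to the log posterior (up to an additive constant).
    # cov_fn(theta, Xs) returns the m x m matrix Chat_(m)(theta),
    # i.e. the covariance of the retained responses including noise.
    Xs, ys = X[subset], y[subset]
    R = cholesky(cov_fn(theta, Xs))           # R^T R = Chat_(m)
    u = solve_triangular(R, ys, trans='T')
    log_lik = (-0.5 * (u @ u) - np.sum(np.log(np.diag(R)))
               - 0.5 * len(ys) * np.log(2.0 * np.pi))
    return log_lik + log_prior(theta)
\end{verbatim}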
\end{subsubsection}
\begin{subsubsection}{Using low-rank plus diagonal matrices}
A covariance matrix in a GP model typically has the form $C \,=\,K + \sigma^2I$,
where $K$ is the noise-free covariance matrix, and $\sigma^2$ is the
residual variance. More generally, if the residual variance differs for
different observations, the covariance matrix will be $K$ plus a diagonal
matrix giving these residual variances. If we approximate $K$
by a matrix $\hat{K}$ with rank $m<n$, and let $\hat{C} = \hat{K}+\sigma^2I$,
then after writing $\hat{K} = B S B^T$, where $B$ is $n$ by $m$, we can
quickly find $\hat{C}^{-1}$ by taking advantage of the
matrix inversion lemma, which states that
\begin{equation}
(BSB^{T}+D)^{-1} = D^{-1} - D^{-1}B(S^{-1}+B^TD^{-1}B)^{-1}B^TD^{-1}
\label{eq:minv}
\end{equation}
This can be simplified as follows when $D=dI$, where $d$ is a scalar,
$B$ has orthonormal columns (so that $B^T B=I$), and $S$ is a diagonal matrix with
diagonal elements given by the vector $s$, denoted by $\text{diag}(s)$:
\begin{align}
(B\,\text{diag}(s)\,B^T+dI)^{-1}
&= d^{-1}I - d^{-1}I B (\text{diag}(s^{-1}) + B^Td^{-1}IB)^{-1} B^T d^{-1} I\\
&= d^{-1}I - d^{-2} B(\text{diag}(1/s)+ B^TB/d)^{-1}B^T \\
&= d^{-1}I - d^{-1} B(\text{diag}(d/s)+ I)^{-1} B^T \\
&= d^{-1}I - d^{-1} B(\text{diag}((s+d)/s))^{-1}B^T \\
&= d^{-1}I - B\,\text{diag}(s/(d(s+d)))\,B^T \label{eq:minvd}
\end{align}
Expressions above such as $1/s$ denote element-by-element arithmetic
on the vector operands.
We can use the matrix determinant lemma to compute the determinant of $\hat{C}$.
\begin{equation}
\text{det}(BSB^T+D) \ =\ \text{det}(S^{-1}+B^TD^{-1}B)\,\text{det}(D)\,\text{det}(S)
\end{equation}
When $D=dI$ with $d$ being a scalar, $\text{det}(D)=d^n$ is
trivial, and $\text{det}(S^{-1}+B^TD^{-1}B)$ can be
found from the Cholesky decomposition of $S^{-1}+B^TD^{-1}B$.
Once we obtain $\hat{C}^{-1}$ and $\text{det}(\hat{C})$, we can easily establish our $\pi^*$:
\begin{equation}
\pi^*(\theta) = \mathcal{N}(y|0,\hat{C})p(\theta)
\end{equation}
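In the orthonormal-column case treated above, both the quadratic form and the
determinant of $\hat{C}$ reduce to cheap vector operations. A sketch of our
own (assuming, as above, that $B$ has orthonormal columns, $S=\text{diag}(s)$
and $D=dI$):
\begin{verbatim}
import numpy as np

def log_mvn_lowrank(y, B, s, d):
    # log N(y | 0, B diag(s) B^T + d I), with B (n x m) orthonormal-column.
    # Quadratic form via the simplified inversion formula derived above;
    # log det via the matrix determinant lemma:
    #   det = d^(n-m) * prod_i (s_i + d).
    n, m = B.shape
    b = B.T @ y
    quad = (y @ y) / d - np.sum(s * b**2 / (d * (s + d)))
    log_det = (n - m) * np.log(d) + np.sum(np.log(s + d))
    return -0.5 * (quad + log_det + n * np.log(2.0 * np.pi))
\end{verbatim}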
\end{subsubsection}
\begin{subsubsection}{The Eigen-exact approximation}
Since the noise-free covariance matrix, $K$, is non-negative definite,
we can write it as $K = E\Lambda E^T=\sum_{i=1}^n \lambda_i e_i e_i^T$,
where $E$ has columns $e_1,e_2,...,e_n$, the eigenvectors of $K$, and
the diagonal matrix $\Lambda$ has the eigenvalues of $K$,
$\lambda_1\ge \lambda_2 \ge ...\ge \lambda_n$ on its diagonal. This is
known as the eigendecomposition. A natural choice of low-rank plus
diagonal approximation would be $\hat{C} = \hat{K} + \sigma^2I$ where
$\hat{K} = BSB^T$ where $B$ is an $n\times m$ matrix with columns
$e_1,...,e_m$, and $S$ is a diagonal matrix with diagonal entries
$\lambda_1,...,\lambda_m$. We expect this to be a good approximation
if $\lambda_{m+1}$ is close to zero.
With this approximation, $\hat{C}^{-1}$ can be computed rapidly from
$B$ and $S$ using (\ref{eq:minvd}). However, the time needed to find
the first $m$ eigenvalues and eigenvectors (and hence $B$ and $S$) is
proportional to $mn^2$, with a much larger constant factor than for
the $n^3$ computation of all eigenvalues and eigenvectors. In
practice, depending on the values of $m$ and $n$ and the software
implementation, a $\pi^*$ formed by this method could even be slower
than the original $\pi$. Since our experiments confirm this, we
mention it here only because it is a natural reference point.
\end{subsubsection}
\begin{subsubsection}{The Nytr\"om-Cholesky approximation}
In the Nystr\"om method, we take a random $m$ by $m$ submatrix of the
noise-free covariance matrix, $K$, which is equivalent to looking at the
noise-free covariance for a subset of the data of size $m$, and then
find its eigenvalues and eigenvectors. This takes time proportional
to $m^3$. We will denote the submatrix chosen by $K^{(m,m)}$, and its
eigenvalues and eigenvectors by $\lambda_1^{(m)},...,\lambda_m^{(m)}$
and $e_1^{(m)},...,e_m^{(m)}$.
We can then approximate the first $m$ eigenvalues and eigenvectors of the
full noise-free covariance matrix by
\begin{align}
\hat{\lambda}_i &= (n/m)\lambda_i^{(m)} \\
\hat{e}_i &= \frac{\sqrt{m/n}}{\lambda_i^{(m)}} K^{(n,m)} e_i^{(m)}
\end{align}
where $K^{(n,m)}$ is the $n$ by $m$ submatrix of $K$ with only the columns
corresponding to the $m$ cases in the random subset.
The covariance matrix $C$ can then be approximated in the same fashion
as Eigen-exact, with the exact eigenvalues and eigenvectors replaced
by the approximated eigenvalues $\hat{\lambda_1},...,\hat{\lambda}_m$
and eigenvectors $\hat{e}_1,...\hat{e}_m$. However, a more efficient
computational method for this approximation, requiring no eigenvalue/eigenvector
computations, is available as follows:
\begin{equation}
\hat{K} \,=\, K^{(n,m)} [K^{(m,m)}]^{-1} K^{(m,n)}
\label{eq:nystrom-cholesky}
\end{equation}
where $K^{(m,n)} = [K^{(n,m)}]^T$.
We can find the
Cholesky decomposition of $K^{(m,m)}$ as $R^TR$, in time proportional to $m^3$,
with a much smaller constant factor than finding the
eigenvalues and eigenvectors. Equation \eqref{eq:nystrom-cholesky} can then be
put in the form of $BSB^T$ by letting $B=K^{(n,m)}R^{-1}$ and $S=I$.
In practice, the noise free submatrix $K^{(m,m)}$ often has some
very small positive eigenvalues, which can appear to be negative due
to round-off error, making the Cholesky decomposition fail, a problem that
can be avoided by adding a small jitter to the diagonal \citep{Neal:1993}.
An alternative way of justifying the approximation in
(\ref{eq:nystrom-cholesky}) is by considering the covariance matrix
for the predictive distribution of all $n$ noise-free observations
from the random subset of $m$ noise-free observations, which (from a
generalization of (\ref{predvar})) is $K-K^{(n,m)} [K^{(m,m)}]^{-1}
K^{(m,n)}$. When this is close to zero (so these $m$ noise-free
observations are enough to almost determine the function),
$\hat{K}$ will be almost the same as $K$.
More sophisticated schemes for Nystr\"om-Cholesky have been
proposed. For instance, \citet{Drineas:2005} randomly select the $m$
columns used to construct $\hat{C}$ according to some
``judiciously chosen'' data-dependent probability
distribution rather than choosing the $m$ columns uniformly.
To form a $\pi^*$ using Nystr\"om-Cholesky, we need the following
major computations:
\begin{tabular}{c|c}
\hline
Operation & Complexity \\\hline
Compute $K^{(n,m)}$ & $pmn$ \\\hline
Find chol($K^{(m,m)}$) & $m^3$ \\\hline
\end{tabular}
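The following sketch (our own; the inputs are the cross-covariance and
submatrix defined above, and the jitter value is an arbitrary illustrative
choice) computes the low-rank factor $B=K^{(n,m)}R^{-1}$; note that this $B$
does not have orthonormal columns, so the general form \eqref{eq:minv} of the
inversion lemma, with $S=I$, is the one that applies:
\begin{verbatim}
import numpy as np

def nystrom_factor(K_nm, K_mm, jitter=1e-8):
    # Low-rank factor B with B B^T = K_nm [K_mm]^{-1} K_nm^T (so S = I).
    # A small jitter keeps the Cholesky of K_mm numerically stable.
    m = K_mm.shape[0]
    L = np.linalg.cholesky(K_mm + jitter * np.eye(m))   # K_mm = L L^T
    return np.linalg.solve(L, K_nm.T).T                 # B = K_nm L^{-T}
\end{verbatim}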
\end{subsubsection}
\end{subsection}
\end{section}
\begin{section}{Experiments} \label{sec:exp}
Here we report tests of the performance of the methods described in this paper
using synthetic datasets.
\begin{subsection}{Experimental setup}
The datasets we used in these experiments were randomly generated,
with all covariates drawn independently from uniform distributions on
the interval $[0,1]$, and responses then generated according to a
Gaussian process with specified hyperparameters.
We generated ten types of datasets in this way, with different combinations
of the following:
\begin{itemize}
\item Number of
observations: $n=300$ or $n=900$.
\item Number of
covariates: $p$=1 or $p=5$.
\item Type of covariance function: squared exponential
covariance function with a single length
scale (isotropic), or with multiple length scales
(Automatic Relevance Determination, ARD). Note that these
are identical when $p=1$.
\item Size of length scales: ``short'' indicates that
a dataset has small length scales, ``long'' that it has
large length scales.
\end{itemize}
The specific hyperparameter values that were used for each combination of
covariance function and length scale are shown in
Table \ref{tbl:datasets}.
\begin{table}[b]
\centering
\begin{tabular}{c|c|c|c}
\hline
Length scale size &Length scale type & $\eta$& $l$\\ \hline
short & isotropic &5 & $l=0.1$ \\\hline
short & ARD &5 & $l_i=0.1i $ \\\hline
long & isotropic &5 & $l=2$ \\\hline
long & ARD &5 & $l_i=2i$ \\\hline
\end{tabular}
\caption{Hyperparameter values used to generate the synthetic datasets.}
\label{tbl:datasets}
\end{table}
The efficiency of an MCMC method is usually measured by the
autocorrelation time, $\tau$, for the sequence of values produced by
the chain \citep[see][]{Neal:1993}:
\begin{equation}
\tau \ =\ 1 + 2\sum_{i=1}^{\infty} \rho_i
\end{equation}
where $\rho_i$ is the lag-$i$ autocorrelation for some function of interest.
In practice, with an
MCMC sample of size $M$, we can only find estimates, $\hat{\rho_i}$, of
autocorrelations up to lag $i=M-1$. To avoid excessive variance from
summing many noisy estimates, we typically estimate $\tau$ by
\begin{equation}
\hat{\tau} \ =\ 1 + 2\sum_{i=1}^{k} \hat{\rho_i}
\end{equation}
where $k$ is a point where for all $i>k$, $\hat{\rho_i}$ is not
significantly different from 0.
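A rough sketch of such an estimator (our own; the stopping rule below, based
on a $z/\sqrt{M}$ threshold, is one simple way of deciding when estimated
autocorrelations are no longer significant) is:
\begin{verbatim}
import numpy as np

def autocorrelation_time(x, z=2.0):
    # Estimate tau = 1 + 2 * sum of autocorrelations, truncating the sum
    # at the first lag whose estimate falls below z / sqrt(M).
    x = np.asarray(x, dtype=float)
    M = len(x)
    xc = x - x.mean()
    acf = (np.correlate(xc, xc, mode='full')[M - 1:]
           / (np.arange(M, 0, -1) * x.var()))
    tau = 1.0
    for i in range(1, M):
        if acf[i] < z / np.sqrt(M):
            break
        tau += 2.0 * acf[i]
    return tau
\end{verbatim}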
Below, we will compare methods with respect to the autocorrelation time of
the log likelihood. For a fair comparison, we multiply each method's
estimated autocorrelation time by the average CPU time it
needs to obtain a new sample point.
\end{subsection}
\begin{subsection}{Experiments with mapping to a discretizing chain}
For each dataset, we tried the method of mapping to a discretizing
chain using both a $\pi^*$ formed with SOD and a $\pi^*$ formed with
Nystr\"om-Cholesky. For comparison, we also ran a standard MCMC model.
All the Markov chains were started from the hyperparameter values that
were used to generate the data, so these tests assess only autocorrelation
time once the high-probability region of the posterior has been
reached, not time needed for convergence when starting at a
low-probability initial state. The adjustable parameters of each
method were chosen to give good performance. All chains were run for
2000 iterations, and autocorrelation times were then computed based on
the last two-thirds of the chain.
The standard MCMC method we used is a slice sampler \citep{Neal:2003},
specifically a univariate slice sampler with stepping-out and
shrinkage, updating parameters in sequence. For the discretizing
Markov chain, the transition $R(x'|x)$ uses the same slice sampler.
Although slice sampling has tuning parameters (the stepsize, $w$, and
the upper limit on number of steps, $M$), satisfactory results can be
obtained without extensive tuning (that is, the autocorrelation time
of a moderately-well-tuned chain will not be much bigger than for an
optimally-tuned chain). Because finding an optimal set of tuning
parameters is generally hard (requiring much time for trial runs), we
will accept the results using moderately-well-tuned chains.
We found that $r=s=1$ gives the best performance for the method of
mapping to a discretizing chain when the slice sampler is used for
$R(x'|x)$, at least if only fairly small values of $r$ and $s$ are
considered. Recall that $r$ is the number of $\bar{T}$ updates to do
in each temporary mapping, and $s$ is the number of steps of $R(x'|x)$
to propose to move the mark for each $\bar{T}$ update. Note that a
single slice sampling update will usually evaluate $\pi$ or $\pi^*$
more than once, since an evaluation is needed for each outward step
and each time a point is sampled from the interval found by stepping
out. Therefore if we didn't use a mapping method we would have to
compute $\pi(x)$ several times for each slice sampling update. When a
mapping method is used, $\pi(x)$ only needs to be evaluated once each
update, for the new state (its value at the previous state having been saved),
while $\pi^*(x)$ will be evaluated several times.
We tuned the remaining parameter $m$, the subset size for SOD, or the
number of random columns for Nystr\"om-Cholesky, by trial and
error. Generally speaking, $m$ should be between 10\% and 50\% of $n$,
depending on the problem. For Nystr\"om-Cholesky, quite good results
are obtained if such a value for $m$ makes $\pi^*$ be very close to
$\pi(x)$.
The results are in Table \ref{tbl:exp}, which shows CPU time per
iteration times autocorrelation time for the standard MCMC method, and
for other methods the ratio of this with the standard method.
Table \ref{tbl:exp2} shows actual autocorrelation time and CPU time
per iteration for each experimental run.
\begin{table}[p]
\centering
\footnotesize
\begin{tabular}{c|c|c|c|c|c|c|r|r|c|c|c}
\hline
\multirow{2}{*}{\#} & \multicolumn{2}{c|}{Length scale} &\multirow{2}{*}{$p$} &\multirow{2}{*}{$n$}
& \multicolumn{3}{c|}{$m$} & \multicolumn{4}{|c}{$\!\!$Autocorrelation time $\times$ CPU time per iteration$\!\!$} \\ \cline{2-3}\cline{6-12}
&size &type & & & SOD & NYS & TMP &$T_{\text{STD}}$ & $\!T_{\text{SOD}}/T_{\text{STD}}\!$ & $\!T_{\text{NYS}}/T_{\text{STD}}\!$ & $\!\!T_{\text{TMP}}/T_{\text{STD}}\!\!$ \\ \hline\hline
1 & small & isotropic & 1 & 300 & 40 & 30 & 40,\,20 & 0.76 & 0.45 & 0.51 & 1.05 \\ \hline
2 & small & isotropic & 5 & 300 & 150 & - & 100,\,50 & 1.62 & 0.81 & - & 0.14 \\ \hline
3 & small & ARD & 5 & 300 & 100 & - & 90,\,45 & 3.39 & 0.83 & - & 0.36 \\ \hline
4 & long & isotropic & 5 & 300 & 150 & 120 & 130,\,65 & 2.05 & 0.81 & 0.97 & 0.69 \\ \hline
5 & long & ARD & 5 & 300 & 90 & 80 & 100,\,50 & 5.23 & 0.66 & 0.85 & 0.51 \\ \hline
6 & small & isotropic & 1 & 900 & 60 & 90 & 60,\,30 & 9.06 & 0.27 & 0.23 & 0.28 \\ \hline
7 & small & isotropic & 5 & 900 & 300 & - & - & 18.17 & 0.51 & - & - \\ \hline
8 & small & ARD & 5 & 900 & 100 & - & - & 25.47 & 0.43 & - & - \\ \hline
9 & long & isotropic & 5 & 900 & 100 & 110 & - & 16.86 & 0.34 & 0.40 & - \\ \hline
10 & long & ARD & 5 & 900 & 300 & 90 & - & 47.46 & 0.67 & 0.34 & - \\ \hline
\end{tabular}
\caption{Results of experiments on the ten datasets.}
\label{tbl:exp}
\end{table}
\begin{table}[p]
\centering
\footnotesize
\begin{tabular}{c|c|c|r|r|r|r|r|r}
\hline
\multirow{2}{*}{\#} & \multicolumn{4}{c|}{CPU time (s) per iteration} & \multicolumn{4}{c}{Autocorrelation time} \\ \cline{2-9}
& STD & SOD & NYS & TMP & STD & SOD & NYS & TMP \\ \hline\hline
1 & 0.26 & 0.078 & 0.11 & 0.15 & 2.90 & 4.32 & 3.53 & 5.40 \\\hline
2 & 0.28 & 0.14 & - & 0.13 & 5.77 & 9.32 & - & 1.67 \\\hline
3 & 0.56 & 0.23 & - & 0.14 & 6.09 & 11.98 & - & 8.63 \\\hline
4 & 0.13 & 0.072 & 0.15 & 0.09 & 15.62 & 23.04 & 12.88 & 16.56 \\\hline
5 & 0.49 & 0.19 & 0.41 & 0.13 & 11.16 & 18.07 & 10.89 & 20.37 \\\hline
6 & 3.10 & 0.53 & 0.83 & 0.61 & 2.92 & 4.63 & 2.48 & 4.21 \\\hline
7 & 3.76 & 0.82 & - & - & 4.83 & 11.24 & - & - \\\hline
8 & 7.21 & 1.48 & - & - & 3.53 & 7.38 & - & - \\\hline
9 & 1.81 & 0.69 & 0.91 & - & 9.33 & 8.27 & 7.40 & - \\\hline
10 & 5.66 & 1.95 & 1.75 & - & 8.39 & 16.18 & 9.14 & - \\\hline
\end{tabular}
\caption{CPU time per iteration and autocorrelation time for each
run in Table \ref{tbl:exp}.}
\label{tbl:exp2}
\end{table}
From these results, we see that Subset of Data is overall the most
reliable method for forming a $\pi^*$. We can almost always find a SOD
type of $\pi^*$ that leads to more efficient MCMC than the standard
method. Depending on the problem, mapping to a discretizing chain using
such a $\pi^*$ can be two to four times faster than standard MCMC, for
the Gaussian Process regression problems we tested. The computational
savings go up when the size of the dataset increases. This is likely
because when $n$ is small, evaluation of $\pi$ is fast, so overhead
operations (especially those not related to $n$) are not trivial in
comparison. The computational saving of $\pi^*$ compared to $\pi$
will then be less than the $m^3$ to $n^3$ ratio we expect from SOD for
large $n$. Also when $n$ is small, time to compute $C$ (proportional
to $pn^2$) may be significant, which also reduces the computational
savings from a $\pi^*$ based on SOD.
For some datasets, we can find a Nystr\"om-Cholesky $\pi^*$ with a
small $m$ that can approximate $\pi$ well, in which case this method
works very nicely. However, for datasets with small length scales
with $p=5$, in order to find a working $\pi^*$ we have to set $m$ to
be around $95\%$ of $n$ or greater, making $\pi^*$ as slow as, or even
slower than $\pi$. This is due to the fact that when the length scale
parameters for the GP are small, the covariance declines rapidly as
the input variable changes, so $x$ and $x'$ that are even moderately
far apart have low covariance. As a result, we were not able to find an
efficient mapping method using Nystr\"om-Cholesky with performance
even close to standard MCMC (so no result is shown in the
table). On the other hand, when the length scale is large, a good
approximation can be had with a small $m$ (as small as $10\%$ of $n$).
For $n=900$ and $p=5$ with ARD covariance, Nystr\"om-Cholesky
substantially outperforms SOD.
\end{subsection}
\begin{subsection}{Experiments with tempered transitions}
We have seen in the previous section that the method of mapping to a
discretizing chain has a lot of tuning parameters, and finding the
optimal combination of these tuning parameters is not easy. The method
of tempered transitions actually has more tuning parameters. To start
with, we have to decide the number of ``layers'' (we call each of
$\hat{T}_i$ or $\check{T}_i$ a ``layer''). For each layer
(e.g.\ $\hat{x}_i
\stackrel{\hat{T}_{i+1}}{\longrightarrow}\hat{x}_{i+1}$), we have to
decide how many MCMC updates to simulate. This reduces the attraction
of tempered transitions, but in some situations it does improve
sampling efficiency.
In the experiments for the method of mapping to a discretizing chain,
the results given by both SOD and Nystr\"om-Cholesky for datasets with
$n=300, p=5$ are less satisfactory than for the other datasets. We tried
tempered transitions with these datasets. For simplicity, we used
just two layers, each of which uses SOD to form the transition. The
number of observations in each subset (denoted as $m_i$ for transition
$\hat{T}_i$ and $\check{T}_i$) is listed in Table \ref{tbl:exp} under
the column ``TMP'' and the time ratio results are under the column
``$T_\text{TMP}/T_\text{STD}$''. We can see that for all these
datasets, tempered transitions outperform the method of mapping to a
discretizing chain, sometimes substantially. The advantage of
tempered transitons is further illustrated n Figure
\ref{fig:comp_map_temper}, which shows the sample autocorrelation
plots of the log likelihood for both methods, on dataset \#2.
\begin{figure}
\caption{Comparison of autocorrelation times of the log likelihood for
MCMC runs using mapping to a
discretizing chain and using tempered transitions. Dataset \#2 is used
(with five covariates, small length scales, an isotropic covariance
function, and 300 observations).}
\label{fig:comp_map_temper}
\end{figure}
\end{subsection}
\end{section}
\begin{section}{Discussion and future work}\label{sec:disc}
We have introduced two classes of MCMC methods using the ``mapping and
caching'' framework: the method of mapping to a discretizing chain,
and the tempered transition method. Our experiments indicate that for
the method of mapping to a discretizing chain, when an appropriate $\pi^*$
is chosen (e.g. SOD approximation of $\pi$ with an appropriate $m$),
an efficient MCMC can be constructed by making ``local'' jumps (e.g.\
setting $r=s=1$). A good MCMC method can also be constructed using the
tempered transitions, with a small number of $\pi_i$, where each
$\hat{T}_i$ and $\check{T}_i$ makes only a small update.
These results are understandable. Though $\pi^*$ and the $\pi_i$ are
broader than $\pi$, making small adjustments a small number of times
has a good chance of keeping the chain in a high-probability region of
$\pi$. However, even though the acceptance rate is high, this strategy
of making small adjustments cannot bring us very far from the previous
state. On the other hand, if we make large jumps, for instance, by
using large values for $r$ and $s$ in the method of mapping to a
discretizing chain, the acceptance rate will be low, but when a
proposal is accepted, it will be much further away from the previous
state, which is favourable for an MCMC method. We haven't had much
success using this strategy so far, perhaps due to difficulty of
parameter tuning, but we believe this direction is worth pursuing.
The tempered transition method may be more suitable for this
direction, because moving from one state to another state further away
is somewhat similar to moving among modes --- the sequence of
$\hat{T}_i$ and $\check{T}_i$ should be able to ``guide'' the
transition back to a region with high probability under $\pi$.
\end{section}
\end{document}
\begin{document}
\title{Equivalent approaches to electromagnetic field quantization in a linear
dielectric}
\begin{abstract}
\noindent It is shown that the minimal coupling method is
equivalent to the Huttner-Barnett and phenomenological approaches
up to a canonical transformation.\\
{\bf Keywords: Field quantization, Damped Polarization Model, Phenomenological Model,
Minimal Coupling Model, Linear Dielectric}\\
{\bf PACS number(s): 12.20.Ds}
\end{abstract}
\section{Introduction}
One of the main developments in quantum optics has been the
study of processes, for example spontaneous emission, that take place
inside, or adjacent to, material bodies. The need to interpret
the experimental results has stimulated attempts to quantize the
electromagnetic (EM) field in materials with general properties
\cite{1}.
In an inhomogeneous nondissipative medium, quantization has been
achieved by Glauber and Lewenstein \cite{2}.
But for a dissipative
medium, the presence of absorption has the effect of coupling the EM
field to a reservoir. This feature is common to the different
quantization schemes \cite{3,4,5,6} since, in contrast to the classical
case, losses in quantum mechanics imply a coupling to a reservoir
whose degrees of freedom have to be added to the Hamiltonian. But in
different methods the coupling between the reservoir and the EM field has
been treated in different ways.
In the damped polarization model the EM field is coupled to a matter
field and the matter field is coupled to a reservoir. This model is
based on a Lagrangian, and equal-time commutation relations (ETCR)
can be written between canonical components. In this approach the
Hamiltonian is obtained from the Lagrangian and is diagonalized by
the Fano technique \cite{3}.
In the phenomenological method the reservoir acts as a noise source,
conveniently represented by a Langevin force acting
on the EM field. The EM field is written in terms of the noise
operators using the Green function of the Maxwell equations. The
equivalence of these two methods has been shown in several papers
\cite{4,5,6}.
Recently a new method for dealing with quantum dissipative systems
has been introduced in \cite{7,8} and has been extended to the EM field
in a dissipative medium. This method is based on a Hamiltonian and
the reservoir couples directly to the EM field. In this method the time
evolution of the EM field, in contrast to the damped polarization model, is
obtained by solving the Heisenberg equations. The extension of this
method to magnetizable media is straightforward \cite{9}.
The connection between this recent method and the other methods is
the subject of the present paper. Throughout the paper the SI
units are used.
\section{A brief review of different quantization methods}
In this section we review the main approaches to EM field
quantization in a dielectric medium and then compare them.
\subsection{The damped polarization model}
This model is based on the Hopfield model of a dielectric \cite{13}.
The matter is represented by a harmonic polarization field and a
coupling between the polarization field and another harmonic field
is considered. Following the standard approach in quantum
electrodynamics, the quantization procedure starts from a Lagrangian
density in real space
\begin{equation}\label{e1}
{\cal L}={\cal L}_{em}+{\cal L}_{mat}+{\cal L}_{res}+{\cal L}_{int},
\end{equation}
where
\begin{equation}\label{e2}
{\cal L}_{em}=\frac{\epsilon_{0}}{2}\textbf{E}^{2}-\frac{1}{2\mu_{0}}\textbf{B}^{2},
\end{equation}
is the electromagnetic part, which can be expressed in terms of the
vector potential $\textbf{A}$ and a scalar potential $U$
($\textbf{B}=\nabla\times \textbf{A}$,
$\textbf{E}=-\dot{\textbf{A}}-\nabla U$). The Lagrangian of the
matter (dielectric) is defined by
\begin{equation}\label{e3}
{\cal L}_{mat}=\frac{\rho}{2}\dot{\textbf{X}}^2-\frac{\rho\omega_{0}^2}{2}\textbf{X}^2,
\end{equation}
where the field $\textbf{X}$ is the polarization part, modeled by a
harmonic oscillator of frequency $\omega_{0}$. The Lagrangian
\begin{equation}\label{e4}
{\cal L}_{res}=\int_{0}^\infty d\omega
\left(\frac{\rho}{2}\dot{\textbf{Y}}_\omega^2-\frac{\rho\omega^2}{2}\textbf{Y}_\omega^2\right),
\end{equation}
is the reservoir part, modeled by a set of harmonic oscillators used
to model the losses. The interaction part
\begin{equation}\label{e5}
{\cal L}_{int}=-\alpha(\textbf{A}\cdot\dot{\textbf{X}}+U\nabla
\cdot\textbf{X})-\int_{0}^\infty d\omega\, v(\omega)\,\textbf{X}\cdot\dot{\textbf{Y}}_{\omega},
\end{equation}
includes the interaction between the light and the polarization
field, with coupling constant $\alpha$, and the interaction between
the polarization field and the other oscillators, with a frequency
dependent coupling $v(\omega)$. Taking
$\textbf{X}\cdot\dot{\textbf{Y}}_\omega$ as the dissipating term is
not essential but simplifies the calculations. The displacement
field $\textbf{D}(\textbf{r},t)$ is given by the following
combination of the electric field and the material polarization:
\begin{equation}\label{e6}
\textbf{D}(\textbf{r},t)=\epsilon_0\textbf{E}(\textbf{r},t)-\alpha
\textbf{X}(\textbf{r},t).
\end{equation}
Since $\dot U$ does not appear in the Lagrangian, $U$ is not a proper
dynamical variable and the Lagrangian can be written in
terms of the proper dynamical variables $\textbf{A}$, $\textbf{X}$ and
$\textbf{Y}_\omega$. The easiest way to do this is to go to the
reciprocal space and write all the fields in terms of spatial
Fourier transforms. For example the electric field can be written
as
\begin{equation}\label{e7}
\textbf{E}(\textbf{r},t)=\frac{1}{(2\pi)^\frac{3}{2}}
\int d^3\textbf{k}\hspace{0.1cm}\tilde{\textbf{E}}(\textbf{k},t)\hspace{0.1cm}
e^{i\textbf{k}\cdot\textbf{r}}.
\end{equation}
Since $\textbf{E}(\textbf{r},t)$ is a real field,
$\tilde{\textbf{E}}^*(\textbf{k},t)=\tilde{\textbf{E}}(-\textbf{k},t)$.
Therefore we can restrict the integration over $\textbf{k}$ to half the
space. The total Lagrangian is
\begin{equation}\label{e8}
L =\int' d^3\textbf{k}\,(\tilde{{\cal L}}_{em}+\tilde{{\cal
L}}_{mat}+ \tilde{{\cal L}}_{res}+\tilde{{\cal L}}_{int}),
\end{equation}
where the prime means that the integration is restricted to the half
of the reciprocal space and the Lagrangian densities in this space
are defined by
\begin{equation}\label{e9}
{\tilde{{\cal L}}_{em}}=\epsilon_0
({\tilde{\textbf{E}}^2}-c^2\tilde{\textbf{B}}^2),
\end{equation}
\begin{equation}\label{e10}
{\tilde{{\cal L}}_{mat}}=\rho
{\dot{\tilde{\textbf{X}}}}^2-\rho\omega_0^2\tilde{\textbf{X}}^2,
\end{equation}
\begin{equation}\label{e11}
\tilde{{\cal L}}_{res}=\int_0^\infty d\omega(\rho
\dot{\tilde{\textbf{Y}}}_\omega^2-\rho\omega^2\tilde{\textbf{Y}}_\omega^2),
\end{equation}
\begin{eqnarray}\label{e12}
\tilde{{\cal L}}_{int}&=&-\alpha[\tilde{\textbf{A}}^*\cdot \dot
{\tilde{\textbf{X}}}+\tilde{\textbf{A}}\cdot \dot
{\tilde{\textbf{X}}}^*+i\textbf{k}\cdot(\tilde{U}^*\tilde{\textbf{X}}-
\tilde{U}\tilde{\textbf{X}}^*)]\nonumber\\&-&\int_0^\infty d\omega
v(\omega)(\tilde{\textbf{X}}^*\cdot\dot{\tilde{\textbf{Y}}}_
\omega+\tilde{\textbf{X}}
\cdot\dot{\tilde{\textbf{Y}}}_\omega^*).
\end{eqnarray}
Using the Coulomb gauge $\textbf{k}\cdot
\tilde{\textbf{A}}(\textbf{k},t)=0$ and the Euler-Lagrange equation
for $\tilde{U}^*$, we find
\begin{equation}\label{e13}
\tilde{U}(\textbf{k},t)=i\frac{\alpha}{\epsilon_0}\left(\frac{\textbf{e}_3(\textbf{k})
\cdot{\tilde{\textbf{X}}}(\textbf{k},t)}{k}\right),
\end{equation}
where $\textbf{e}_3(\textbf{k})$ is the unit vector in the direction
of $\textbf{k}$. The matter fields $\tilde{\textbf{X}}$ and
$\tilde{\textbf{Y}}_\omega$ can be decomposed into transverse and
longitudinal parts. For example $\tilde{\textbf{X}}$ can be written
as
\begin{equation}\label{e14}
\tilde{\textbf{X}}(\textbf{k},t)=\tilde{\textbf{X}}^\parallel(\textbf{k},t)
+\tilde{\textbf{X}}^\perp(\textbf{k},t),
\end{equation}
where $\textbf{k}\cdot
\tilde{\textbf{X}}^\perp(\textbf{k},t)=\textbf{k} \times
\tilde{\textbf{X}}^\parallel(\textbf{k},t)=0$ and similarly for
$\tilde{\textbf{Y}}_\omega$. The total Lagrangian can then be
written as the sum of two independent parts. The transverse part
\begin{equation}\label{e15}
L^\perp=\int' d^3\textbf{k}({\tilde{{\cal
L}}_{em}^{\perp}}+{\tilde{{\cal L}}_{mat}^\perp} +{\tilde{{\cal
L}}_{res}^\perp}+{\tilde{{\cal L}}_{int}^\perp})
\end{equation}
where
\begin{equation}\label{e16}
{\tilde{{\cal L} }_{em}^\perp}=\epsilon_0
({\dot{\tilde{\textbf{A}}}^2}-c^2\textbf{k}^2\tilde{\textbf{A}}^2),
\end{equation}
\begin{equation}\label{e17}
{\tilde{{\cal L}}_{mat}^\perp}=(\rho
{\dot{\tilde{\textbf{X}}}}^{\perp2}-\rho\omega_0^2\tilde{\textbf{X}}^{\perp2}),
\end{equation}
\begin{equation}\label{e18}
\tilde{{\cal L}}_{res}^\perp=\int_0^\infty d\omega(\rho
\dot{\tilde{\textbf{Y}}}_\omega^{\perp2}-\rho\omega^2\tilde{\textbf{Y}}_\omega^{\perp2}),
\end{equation}
\begin{equation}\label{e19}
\tilde{{\cal L}}_{int}^\perp=-(\alpha{\tilde{\textbf{A}}}\cdot
\dot{\tilde{\textbf{X}}}^{\perp*}+\int_0^\infty d\omega
v(\omega)\tilde{\textbf{X}}^{\perp*}\cdot\dot{\tilde{\textbf{Y}}}_\omega^\perp+c.c.),
\end{equation}
and the longitudinal part
\begin{equation}\label{e20}
L^\parallel =\int' d^3k\,\tilde{{\cal L}}^\parallel,
\end{equation}
where
\begin{eqnarray}\label{e21}
\tilde{{\cal L}}^\parallel&=&(\rho\dot{\tilde{\textbf{X}}}^{\parallel2}-\rho
{\omega_L^2}{\tilde{\textbf{X}}}^{\parallel2}+\int_0^\infty
d\omega\,\rho\dot{\tilde{\textbf{Y}}}_\omega^{\parallel2}-\rho\omega^2
\tilde{\textbf{Y}}_\omega^{\parallel2})\nonumber\\&-&\int_0^\infty
d\omega(v(\omega)\tilde{\textbf{X}}^{\parallel*}\cdot
\tilde{\textbf{Y}}_\omega^{\parallel}+c.c.).
\end{eqnarray}
In (\ref{e21}), $\omega_L=\sqrt{\omega_0^2+\omega_c^2}$ is the
longitudinal frequency and
$\omega_c^2=\frac{\alpha^2}{\rho\epsilon_0}$. The link between these
two parts is given by the total electric field
\begin{equation}gin{equation}\label{e22}
\thetailde{\thetaextbf{E}}(\thetaextbf{k},t)=\thetailde{\thetaextbf{E}}^\parallelrtialerp(\thetaextbf{k},t)+
\thetailde{\thetaextbf{E}}^\parallelrtiala(\thetaextbf{k},t)=
-\dot{\thetailde{\thetaextbf{A}}}(\thetaextbf{k},t)+\frac{\alpha}{\epsilon_0}
\thetailde{\thetaextbf{X}}^\parallelrtiala \thetaextbf{e}_3(\thetaextbf{k}).
\end{equation}
Using (\ref{e13}) and the definition of the displacement field
$\thetaextbf{D}$ given in (\ref{e6}), we recover the fact that
$\thetaextbf{D}(\thetaextbf{r},t)$ is a purely transverse field as expected.
The vector potential in reciprocal space can be expanded in terms of
the unit vectors $\thetaextbf{e}_\lambda(\thetaextbf{k})$ as
\begin{equation}gin{equation}\label{e23}
\thetailde{\thetaextbf{A}}(\thetaextbf{k},t)=\sum\limits_{\lambda=1,2}
\thetailde{A}_\lambda(\thetaextbf{k},t)\thetaextbf{e}_\lambda(\thetaextbf{k}),
\end{equation}
where
$\thetaextbf{e}_\lambda(\thetaextbf{k})\cdot\thetaextbf{e}_\lambda'(\thetaextbf{k})=
\delta_{\lambda\lambda'}$ and
$\thetaextbf{e}_\lambda(\thetaextbf{k})\cdot\thetaextbf{e}(\thetaextbf{k})=0$ for
$\lambda=1,2$.
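For later reference, the transverse polarization vectors together with
$\textbf{e}_3(\textbf{k})$ form an orthonormal triad and satisfy the standard completeness relation
$\sum_{\lambda=1,2}e_{\lambda i}(\textbf{k})e_{\lambda j}(\textbf{k})
=\delta_{ij}-k_ik_j/k^2$,
which is the reciprocal-space kernel of the transverse delta function that appears later in (\ref{e76}).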
The Lagrangian ${\cal L}$ can now be used to obtain the components
of the conjugate variables
\begin{equation}\label{e24}
-\epsilon_0\tilde{E}_\lambda=\frac{\partial{\cal L}}
{\partial\dot{\tilde{A}}_\lambda^*}=\epsilon_0\dot{\tilde{A}}_\lambda,
\end{equation}
\begin{equation}\label{e25}
\tilde {P}_{\lambda}=\frac{\partial{\cal L}}
{\partial{\dot{\tilde{X}}_\lambda^*}}=
\rho\dot{\tilde{X}}_{\lambda}-\alpha\tilde{A}_\lambda,
\end{equation}
\begin{equation}\label{e26}
\tilde{Q}_{\omega\lambda}= \frac{\partial{\cal L}}{\partial
\dot{\tilde{Y}}_{\omega\lambda}^*}=\rho\dot
{\tilde{Y}}_{\omega\lambda}-v(\omega)\tilde{X}_{\lambda}.
\end{equation}
Using the Lagrangian (\ref{e15}) and the expressions for the
conjugate variables (\ref{e24})-(\ref{e26}), we obtain the
transverse part of the Hamiltonian as
\begin{equation}\label{e27}
H^\perp=\int' d^3\textbf{k}\,(\tilde{\cal H}^\perp_{em}+\tilde{\cal
H}^\perp_{mat}+\tilde{\cal H}^\perp_{int}),
\end{equation}
where
\begin{equation}\label{e28}
\tilde{{\cal H} }_{em}^\perp=\epsilon_0
\tilde{\textbf{E}}^{\perp2}+\epsilon_0c^2\tilde{k}^2\tilde{\textbf{A}}^2,
\end{equation}
is the energy density of the EM field, $\tilde{k}$ is defined by
$\tilde{k}=\sqrt{k^2+k_c^2}$ with
$k_c\equiv\frac{\omega_c}{c}=\sqrt{\frac{\alpha^2}{\rho
c^2\epsilon_0}}$, and
\begin{eqnarray}\label{e29}
\tilde{\cal H}^\perp _{mat} &=&
\frac{\tilde{\textbf{P}}^{\perp2}}{\rho}
+\rho\tilde{\omega}_0^2\tilde{\textbf{X}}^{\perp2}+\int_0^\infty
d\omega \,\Big(\frac{\tilde{\textbf{Q}}_\omega^{\perp2}}{\rho}+{\rho}
\omega ^2 \tilde{\textbf{Y}}_\omega^{\perp2}\Big)
\nonumber\\&+&\Big(\int_0^\infty {d\omega }\,
\frac{v(\omega)}{\rho}\,\tilde{\textbf{X}}^{\perp*}\cdot\tilde{\textbf{Q}}
^\perp_\omega + c.c.\Big),
\end{eqnarray}
is the energy density of the matter field, which includes the
interaction between the polarization and the reservoir. The
frequency
$\tilde{\omega}_0^2\equiv\omega_0^2+\int_0^\infty
d\omega\frac{v(\omega)^2}{\rho^2}$ is the renormalized frequency of
the polarization field, and
\begin{equation}\label{e30}
H_{int}^\perp=\frac{\alpha}{\rho}\int{d^3{\bf k}}\,[\tilde{{\bf
A}}^*\cdot\tilde{{\bf P}}^\perp+ c.c.],
\end{equation}
is the interaction between the EM field and the polarization.
By the same method we obtain the longitudinal part of the
Hamiltonian as
\begin{eqnarray}\label{e31}
\hat{\tilde{\cal H}}^\parallel
&=&\Big[\frac{\hat{\tilde{\textbf{P}}}^{\parallel2}}{\rho}
+\rho\tilde{\omega}_0^2\hat{\tilde{\textbf{X}}}^{\parallel2}+\int_0^\infty
d\omega \,\Big(\frac{\hat{\tilde{\textbf{Q}}}_\omega^{\parallel2}}{\rho}
+{\rho}\omega^2 \hat{\tilde{\textbf{Y}}}_\omega^{\parallel2}\Big)
\nonumber\\&+&\Big(\int_0^\infty {d\omega }\,
\frac{v(\omega)}{\rho}\,\hat{\tilde{\textbf{X}}}^{\parallel*}\cdot\hat{\tilde{\textbf{Q}}}
_\omega ^\parallel +
c.c.\Big)\Big]+\Big[\frac{\alpha^2}{\epsilon_0}\hat{\tilde{\textbf{X}}}^{\parallel2}\Big],
\end{eqnarray}
where we have separated the electric part of the Hamiltonian from
the matter part for later convenience.
The fields are quantized in the standard fashion by demanding equal-time
commutation relations (ETCR) between the variables and their conjugates.
For the EM field components we have
\begin{equation}\label{e32}
[\hat{\tilde{A}}_\lambda(\textbf{k},t),\hat{\tilde{E}}_{\lambda'}^{{*}}
(\textbf{k}',t)]=-i\frac{\hbar}{\epsilon_0}\delta
_{\lambda\lambda'}\delta(\textbf{k}-\textbf{k}'),
\end{equation}
and for the matter fields
\begin{equation}\label{e33}
[\hat{\tilde{X}}_{\lambda}(\textbf{k},t),\hat{\tilde{P}}_{\lambda'}^{*}
(\textbf{k}',t)] =i\hbar \delta_{\lambda\lambda'}\delta (\textbf{k}-
\textbf{k}'),
\end{equation}
\begin{equation}\label{e34}
[\hat{\tilde{Y}}_{\omega\lambda}(\textbf{k},t),\hat{\tilde{Q}}
_{\omega'\lambda'}^{{*}}(\textbf{k}',t)]=i\hbar\delta
_{\lambda\lambda'}\delta(\textbf{k}-\textbf{k}')\delta(\omega-\omega'),
\end{equation}
with all other equal-time commutators being zero. Indeed, it is
easily shown that the Heisenberg equations of motion based on the
Hamiltonian (\ref{e27}) and the ETCR (\ref{e32})-(\ref{e34}) are
identical to the Maxwell equations in a dielectric medium with the
displacement operator $\textbf{D}(\textbf{r},t)$ defined in
(\ref{e6}).
Equations (\ref{e27}) and (\ref{e6}), together with the
commutation relations (\ref{e32})$-$(\ref{e34}), complete the
quantization procedure for the transverse fields.
In order to extract useful information about the system, we need to
solve these equations. To facilitate the calculations, we introduce
the annihilation operators
\begin{equation}\label{e35}
\hat{a}_\lambda(\textbf{k},t) = \sqrt {\frac{\epsilon_0 }{{2\hbar
\tilde{k}c}}}\,(\tilde{k}c\,
\hat{\tilde{A}}_\lambda(\textbf{k},t)+i\hat
{\tilde{E}}_{\lambda}(\textbf{k},t)),
\end{equation}
\begin{equation}\label{e36}
\hat{b}_\lambda(\textbf{k},t) = \sqrt {\frac{\rho }{{2\hbar
\tilde{\omega}_0}}}\,(\tilde{\omega}_0\,
\hat{\tilde{X}}_\lambda(\textbf{k},t)+\frac{i}{\rho}\hat
{\tilde{P}}_{\lambda}(\textbf{k},t)),
\end{equation}
\begin{equation}\label{e37}
\hat{b}_{\omega\lambda}(\textbf{k},t)=\sqrt{\frac{\rho}{{2\hbar
\omega}}}\, (-i\omega\hat{\tilde{Y}}_{\omega\lambda}(\textbf{k},t)+
\frac{1}{\rho}\hat{\tilde{Q}}_{\omega\lambda}(\textbf{k},t)),
\end{equation}
where $\tilde{\omega}_0$ is defined below (\ref{e29}). The different
definitions for $\hat{b}$ and $\hat{b}_\omega$ only amount to a
change of phase and have been chosen for later simplicity. From
the ETCR for the fields, (\ref{e32})$-$(\ref{e34}), we obtain
\begin{equation}\label{e38}
[\hat{a}_\lambda(\textbf{k},t),\hat{a}_{\lambda'}^\dag(\textbf{k}',t)]=
\delta_{\lambda \lambda'}\delta(\textbf{k}-\textbf{k}'),
\end{equation}
\begin{equation}\label{e39}
[\hat{b}_\lambda(\textbf{k},t),\hat{b}_{\lambda'}^\dag(\textbf{k}',t)]=
\delta_{\lambda \lambda'}\delta(\textbf{k}-\textbf{k}'),
\end{equation}
\begin{equation}\label{e40}
[\hat{b}_{\omega\lambda}(\textbf{k},t),\hat{b}_{\omega'\lambda'}^\dag(
\textbf{k}',t)]=\delta_{\lambda\lambda'}
\delta(\textbf{k}-\textbf{k}')\delta(\omega-\omega').
\end{equation}
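For example, (\ref{e39}) follows directly by substituting the definition (\ref{e36}) into
(\ref{e33}):
\begin{eqnarray}
[\hat{b}_\lambda(\textbf{k},t),\hat{b}_{\lambda'}^\dag(\textbf{k}',t)]&=&
\frac{1}{2\hbar}\Big(-i[\hat{\tilde{X}}_{\lambda}(\textbf{k},t),
\hat{\tilde{P}}_{\lambda'}^{*}(\textbf{k}',t)]
+i[\hat{\tilde{P}}_{\lambda}(\textbf{k},t),
\hat{\tilde{X}}_{\lambda'}^{*}(\textbf{k}',t)]\Big)\nonumber\\
&=&\delta_{\lambda\lambda'}\delta(\textbf{k}-\textbf{k}'),
\end{eqnarray}
where the second commutator is the Hermitian conjugate of the first; the relations
(\ref{e38}) and (\ref{e40}) are checked in the same way.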
We emphasize that, in contrast to the previous ETCR between the
conjugate fields, (\ref{e32})-(\ref{e34}), which hold only in
half of the $\textbf{k}$ space, the relations (\ref{e38})$-$(\ref{e40}) are
valid in the whole reciprocal space. Inverting Eqs.
(\ref{e35})$-$(\ref{e37}) to express the field operators in terms of
the creation and annihilation operators, and inserting them into the
Hamiltonian (\ref{e27}), we obtain the normal-ordered transverse
Hamiltonian as
\begin{equation}\label{e41}
\hat{H}^\perp=\hat{H}_{em}^\perp+\hat{H}_{mat}^\perp+\hat{H}_{int}^\perp,
\end{equation}
\begin{equation}\label{e42}
\hat{H}_{em}^\perp=\int{d^3\textbf{k}\sum\limits_{\lambda=1,2}
{\hbar\tilde{\omega}_\textbf{k}}\,\hat{a}_\lambda^\dag
(\textbf{k},t)\hat{a}_\lambda(\textbf{k},t)},
\end{equation}
\begin{eqnarray}\label{e43}
\hat{H}_{mat}^\perp&=&\int d^3k\sum\limits_{\lambda=1,2}
[\hbar\tilde{\omega}_0\hat{b}_\lambda^\dagger(\textbf{k},t)
\hat{b}_\lambda(\textbf{k},t)+\int_0^\infty
d\omega\,\hbar\omega\,\hat{b}_{\omega\lambda}^\dagger(
\textbf{k},t)\hat{b}_{\omega\lambda}(\textbf{k},t)\nonumber\\
&+&\frac{\hbar}{2}\int_0^\infty
d\omega\,
V(\omega)[\hat{b}^\dagger_\lambda(-\textbf{k},t)+\hat{b}_\lambda
(\textbf{k},t)][\hat{b}_{\omega\lambda}^\dagger(-\textbf{k},t)
+\hat{b}_{\omega\lambda}(\textbf{k},t)]],\nonumber\\
\end{eqnarray}
\begin{equation}\label{e44}
\hat{H}_{int}^\perp = i\frac{\hbar}{2}\int {d^3 {\bf
k}}\sum_{\lambda=1,2} \Lambda(k)\, (\hat a_\lambda (\textbf{k}) + \hat
a_\lambda ^\dag ( - \textbf{k})) (\hat b_{\lambda } (\textbf{k}) -
\hat b_{\lambda } ^\dag ( - \textbf{k})) ,
\end{equation}
where $\tilde{\omega}_\textbf{k}\equiv c\tilde{k}$,
$V(\omega)\equiv\frac{v(\omega)}{\rho}\sqrt{\frac{\omega}{\tilde{\omega}_0}}$,
$\Lambda(k)\equiv\sqrt{\frac{\tilde{\omega}_0ck_c^2}{\tilde{k}}}$,
and the integration over $\textbf{k}$ has been extended to the whole
reciprocal space.
Now, instead of solving the Heisenberg equations directly, we use the Fano
technique to diagonalize the Hamiltonian \cite{11}. After
diagonalization, the field operators are written in terms of the
eigenoperators of the Hamiltonian.
The diagonalization is done in two steps. In the first step the
polarization and reservoir part of the Hamiltonian is diagonalized
and the polarization is written in terms of its eigenoperators. Then
the total Hamiltonian is diagonalized by the same method.
In the first step the diagonalized expression for
$\hat{H}^\perp_{mat}$ is obtained as (the calculations leading to
the diagonalization are lengthy and can be found in \cite{3}, so we
do not repeat them here)
\begin{equation}\label{e45}
\hat{H}_{mat}^\perp=\int_0^\infty{d\omega}
\int{d^3\textbf{k}\sum\limits_{\lambda=1,2}
{\hbar\omega}\,\hat{B}_\lambda^\dag
(\textbf{k},\omega)\hat{B}_\lambda(\textbf{k},\omega)},
\end{equation}
where $\hat{B}^\dagger _\lambda(\textbf{k},\omega)$ and
$\hat{B}_\lambda(\textbf{k},\omega)$ are the dressed matter-field creation
and annihilation operators and satisfy the usual ETCR
\begin{equation}\label{e46}
[\hat{B}_\lambda(\textbf{k},\omega,t),\hat{B}_{\lambda'}^\dag(
\textbf{k}',\omega',t)]=\delta
_{\lambda\lambda'}\delta(\textbf{k}-\textbf{k}')\delta(\omega-\omega').
\end{equation}
These operators can be expressed in terms of the initial creation
and annihilation operators as
\begin{equation}\label{e47}
\hat{B}_\lambda({\bf k},\omega ) = \alpha _0 (\omega )\hat{b}_\lambda({\bf k})
+ \beta _0(\omega )\hat{b}_\lambda^\dag (-{\bf k})
+ \int_0^\infty {d\omega' } \,[\alpha_1(\omega ,\omega ')\hat{b}_{\omega'\lambda}({\bf k})
+ \beta_1 (\omega ,\omega ')\hat{b}_{\omega'\lambda}^\dag (-{\bf k})],
\end{equation}
where all the coefficients $\alpha_0(\omega)$,
$\beta_0(\omega)$, $\alpha_1(\omega,\omega')$
and $\beta_1(\omega,\omega')$ can be obtained in terms of the microscopic
parameters. In \cite{3} the relation between $\alpha_0(\omega)$ and
$\beta_0(\omega)$ is obtained as
\begin{equation}\label{e48}
\beta _{0} (\omega) =
\frac{\omega-\tilde{\omega}_0}{\omega+\tilde{\omega}_0
}\,\alpha_{0}(\omega),
\end{equation}
and we will use this relation in the next section.
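In particular, (\ref{e48}) implies
$\alpha_0(\omega)-\beta_0(\omega)=\frac{2\tilde{\omega}_0}{\omega+\tilde{\omega}_0}\,\alpha_0(\omega)$
and
$\alpha_0(\omega)+\beta_0(\omega)=\frac{2\omega}{\omega+\tilde{\omega}_0}\,\alpha_0(\omega)$,
so that the two combinations $\alpha_0\pm\beta_0$ differ only by the ratio
$\tilde{\omega}_0/\omega$.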
Using the commutators of $\hat{b}$ with $\hat{B}$ and $\hat{B}^\dag$
together with (\ref{e47}), we find
\begin{equation}\label{e49}
\hat{b}_\lambda(\textbf{k})=\int_0^\infty{d\omega\,[\alpha^*_0
(\omega)}\hat{B}_\lambda(\textbf{k},\omega)-\beta_0(\omega)
\hat{B}_\lambda^\dag(-\textbf{k},\omega)].
\end{equation}
The consistency of the diagonalization procedure is checked by
verifying that the initial commutation relations between
$\hat{b}(\textbf{k})$ and $\hat{b}^\dagger(\textbf{k})$ are
preserved.
Using (\ref{e44}), (\ref{e45}), (\ref{e49}) and (\ref{e41}), the
total Hamiltonian can be written as
\begin{eqnarray}\label{e50}
\hat{H} &=& \int {d^3{\bf k}}\sum\limits_{\lambda=1,2}\{ \hbar\tilde{\omega}_{\bf k}
\hat{a}_\lambda ^\dag ({\bf k})\hat{a}_\lambda({\bf
k})+\int_0^\infty{d\omega\,\hbar\omega}\, \hat{B}_{\lambda }^\dag
({\bf k},\omega )\hat{B}_{\lambda }({\bf k},\omega )\nonumber\\
&+& \frac{\hbar}{2}\Lambda ({k})\int_0^\infty{d\omega}\,[ g(\omega
)\hat{B}_{\lambda }^\dag ({\bf k},\omega)(\hat{a}_\lambda({\bf
k})+\hat{a}_\lambda ^\dag(-{\bf k}))+ H.c.]\},
\end{eqnarray}
where $\Lambda ({k}) $ is defined in (\ref{e44}) and $g(\omega
)=i\alpha_0(\omega)+i\beta_{0}(\omega)$.
Now we take the second step and diagonalize the total
Hamiltonian (\ref{e50}) as
\begin{equation}\label{e51}
\hat{H}=\int d^3 {\bf k}\int_{0}^\infty d\omega\sum\limits_{\lambda}\hbar\omega\,
\hat{C}_\lambda^\dag({\bf k},\omega)\hat{C}_\lambda({\bf k},\omega),
\end{equation}
where the eigenoperators of the system can be written as
\begin{eqnarray}\label{e52}
\hat{C}_\lambda({\bf k},\omega ) &=& \tilde{\alpha} _0 (k,\omega
)\hat{a}_\lambda({\bf k}) + \tilde{\beta} _0 (k,\omega )\hat{a}_\lambda^\dag
(-{\bf k}) \nonumber\\&+& \int_0^\infty {d\omega' } \,[\tilde{\alpha}
(k,\omega ,\omega ')\hat{B}_\lambda({\bf k},\omega' ) +
\tilde{\beta} (k,\omega ,\omega ')\hat{B}_\lambda^\dag (-{\bf k},\omega'
)].
\end{eqnarray}
To calculate the time dependence of the EM field operators we
write them in terms of the eigenoperators of the system.
The vector potential is given by
\begin{equation}\label{e53}
\hat{\textbf{A}}(\textbf{r},t)=\frac{1}{(2\pi)^{\frac{3} {2}}}
\int
d^3\textbf{k}\sum_{\lambda=1,2}\sqrt{\frac{\hbar}{{2\epsilon_0\tilde{\omega}_\textbf{k}}}}
[\hat{a}_\lambda(\textbf{k},t)e^{i\textbf{k}
\cdot\textbf{r}}+H.c.]\textbf{e}_\lambda(\textbf{k}).
\end{equation}
By inverting (\ref{e52}) to write $\hat{a}$ in terms of $\hat{C}$,
and using the Hamiltonian (\ref{e51}) to calculate the time
dependence of $\hat{C}$, we obtain $\hat{a}(t)$ as
\begin{equation}\label{e54}
\hat{a}_\lambda({\bf k},t)=\int_0^\infty
{d\omega}\,[\tilde{\alpha}_0^*({k},\omega)\hat{C}_\lambda({\bf
k},\omega )e^{-i\omega t}-\tilde{\beta} _0 ({k},\omega
)\hat{C}_\lambda^\dag ({\bf k},\omega )e^{i\omega t}].
\end{equation}
Using (\ref{e53}) and (\ref{e54}) we obtain
\begin{eqnarray}\label{e55}
\hat{\textbf{A}}(\textbf{r},t)&=&(\frac{\hbar}{8\pi^4\epsilon_0})^{\frac{1}{2}}\int
{d^3 {\bf k}} \int_0^\infty {d\omega }\,
\frac{\omega\sqrt{Im\chi(\omega)}}{{\omega_{\bf k}^2-\omega
^2(1+\chi(\omega))}}\nonumber\\&\times&[\sum\limits_{\lambda=1,2}\hat
C_\lambda({\bf k},\omega )e^{i(\textbf{k}\cdot \textbf{r}-\omega
t)}\textbf{e}_\lambda(\textbf{k}) + H.c.],
\end{eqnarray}
where $\chi(\omega)$ is given by
\begin{eqnarray}\label{e56}
\chi(\omega)&=&\lim_{\varepsilon\to0^+}\frac{1}{2}\int_{-\infty}^{+\infty}{d\omega'}\,
\frac{{\left|{f(\omega')}\right|^2}}{{\omega'-\omega-
i\varepsilon}}\nonumber\\
&=& \frac{1}{2}P\int_{ - \infty }^{ + \infty }
\frac{{|f(\omega ')|^2 }}{{\omega ' - \omega }}\, d\omega ' +
\frac{i\pi}{2}|f(\omega )|^2,
\end{eqnarray}
and
$f(\omega)=\sqrt{\frac{\alpha^2\omega_0}{\rho\omega^2\epsilon_0}}\,g(\omega)$.
The longitudinal part of the EM field can be obtained by the same method, using
the Hamiltonian (\ref{e31}) and the definition of $\textbf{E}^\parallel$ in
(\ref{e22}), as
\begin{eqnarray}\label{e57}
\hat{ \textbf{E}}^\parallel(\textbf{r},t)&=&\left({\frac{\hbar}{8\pi ^4
\epsilon_0}}
\right)^{\frac{1}{2}}
\int_0^\infty d\omega\int d^3k\hspace{0.1cm}\frac{i\sqrt{Im\chi(\omega)}}
{1+\chi(\omega)}\nonumber\\&\times & (\hat{C}_3(\textbf{k},\omega)e^{i(\omega
t-\textbf{k}\cdot\textbf{r})}-H.c.)\hspace{0.1cm}\textbf{e}_3(\textbf{k}).
\end{eqnarray}
The polarization can be obtained by writing the matter field in
terms of $\hat{C}$ and $\hat{C}^\dagger$ as
\begin{equation}\label{e58}
{\hat{\textbf{P}}}({\bf r},t) = \int_0^\infty d\omega\,
\{[\epsilon_0\chi (\omega )\hat{\textbf{E}}({\bf r},\omega )
+{\hat{\textbf{P}}}_N ({\bf r},\omega )]e^{ - i\omega t}+H.c.\},
\end{equation}
where
\begin{equation}\label{e59}
\hat{P}_{N\lambda}({\textbf{r}},\omega)=\int d^3 \textbf{k}\,
\sqrt{2\hbar\epsilon_0 Im\chi(\omega)}\,\hat{C}_{\lambda}({\bf
k},\omega)e^{i\textbf{k}\cdot\textbf{r}}.
\end{equation}
Using (\ref{e59}) one can also show
that the resulting polarizability satisfies the Kramers-Kronig
relations.
Although the damped polarization model is based on a microscopic
model, as can be seen from (\ref{e55}) and (\ref{e57}) the final
results depend only on the macroscopic parameter $\chi (\omega)$.
Therefore this model can be used for any medium with known
susceptibility.
The second term in (\ref{e58}) has no classical counterpart and
represents a Langevin fluctuation term (noise operator), which is
characteristic of a dissipative medium. It is easy to show that this
noise operator satisfies the fluctuation-dissipation theorem
\cite{14}.
\subsection{Phenomenological model}
This model is based on the Maxwell equations, Kubo's formula and the
fluctuation-dissipation theorem. In accordance with the fluctuation-dissipation
theorem, since the medium is dissipative, we should add a noise field
to the Maxwell equations. This field is considered as the
source of the electromagnetic field. The electromagnetic field can be
written in terms of these noise operators by using the Green function
of the classical Maxwell equations.
In this method the EM field operators are separated into
positive and negative frequency parts in the usual way. For
example,
\begin{equation}\label{e60}
\hat{\textbf{E}}(\textbf{r},t)=\frac{1}{\sqrt{2\pi}}\int_0^\infty
d\omega[\hat{\textbf{E}}^+(\textbf{r},\omega)e^{-i\omega
t}+\hat{\textbf{E}}^-(\textbf{r},\omega)e^{i\omega t}],
\end{equation}
where the positive and negative frequency parts involve only the
annihilation and creation operators, respectively. Similar
decompositions hold for the other field operators. The field operators
satisfy the Maxwell equations, which in the frequency domain take the
form
\begin{equation}\label{e61}
\nabla\times\hat{\textbf{E}}^+(\textbf{r},\omega)=i\omega\hat{\textbf{B}}^+
(\textbf{r},\omega), \end{equation}
\begin{equation}\label{e62}
\nabla\times\hat{\textbf{B}}^+(\textbf{r},\omega)=
-i\omega\mu_0\hat{\textbf{D}}^+(\textbf{r},\omega),
\end{equation}
where the monochromatic electric and displacement fields are
related through
\begin{equation}\label{e63}
\hat{\textbf{D}}^+(\textbf{r},\omega)=
\epsilon_0\varepsilon(\omega)\hat{\textbf{E}}^+(\textbf{r},\omega)
+\hat{\textbf{P}}^+_N(\textbf{r},\omega).
\end{equation}
The parameters $\epsilon_0$ and $\mu_0$ are the permittivity and
permeability of free space, respectively, and
$\varepsilon(\omega)=1+\chi(\omega)$.
In this equation, $\hat{\textbf{P}}^+_N(\textbf{r},\omega)$ stands
for the noise polarization operator associated with the absorptive nature
of the medium. The fluctuation-dissipation theorem and Kubo's
formula require that the noise operator satisfy the following
commutation relations \cite{14}
\begin{eqnarray}\label{e64}
&&
[\hat{P}_{Ni}^{+}(\textbf{r},\omega),\hat{P}_{Nj}^-(\textbf{r}',\omega')]=
2\epsilon_0\hbar\, Im\chi(\omega)\,
\delta_{ij}\delta(\textbf{r}-\textbf{r}')\delta(\omega-\omega'),
\nonumber\\
&&
[\hat{P}_{Ni}^{+}(\textbf{r},\omega),\hat{P}_{Nj}^+(\textbf{r}',\omega')]
=[\hat{P}_{Ni}^-(\textbf{r},\omega),\hat{P}_{Nj}^{-}(\textbf{r}',\omega')]=0.\nonumber\\
\end{eqnarray}
In a gauge in which the scalar potential vanishes, we have
\begin{equation}\label{e65}
\hat{\textbf{E}}^+(\textbf{r},\omega)=i\omega\hat{\textbf{A}
}^{+}(\textbf{r},\omega), \end{equation}
\begin{equation}\label{e66}
\hat{\textbf{B}}^+(\textbf{r},\omega)=
\nabla\times\hat{\textbf{A}}^+(\textbf{r},\omega).\end{equation}
Combining equations (\ref{e62}), (\ref{e63}), (\ref{e65}) and
(\ref{e66}), one can easily show that the positive frequency part of
the vector potential operator satisfies
\begin{equation}\label{e67}
\nabla\times[\nabla\times\hat{\textbf{A}}^+(\textbf{r},\omega)]-\varepsilon
(\omega)
\frac{\omega^2}{c^2}\hat{\textbf{A}}^+(\textbf{r},\omega)=\frac{i\omega}{\epsilon_0c^2}
\hat{\textbf{P}}_N^+(\textbf{r},\omega).
\end{equation}
The differential equation (\ref{e67}) can be converted to an
algebraic equation using a Fourier transformation:
\begin{equation}\label{e68}
-\textbf{k}\times[\textbf{k}\times\hat{\textbf{A}}^+(\textbf{k},\omega)]
-\varepsilon(\omega)\frac{\omega^2}{c^2}\hat{\textbf{A}}^+
(\textbf{k},\omega)=\frac{i\omega}{\epsilon_0c^2}\hat{\textbf{P}}_N^+(\textbf{k},\omega).
\end{equation}
From (\ref{e68}), the positive frequency component of the vector
potential in reciprocal space can be written as
\begin{equation}\label{e69}
\hat{\textbf{A}}^+(\textbf{k},\omega)=
\frac{i\omega}{\epsilon_0}\Big[{\frac{I-[\frac{c^2}{\omega^2}\varepsilon(\omega)]\,\textbf{kk}}
{\textbf{k}^2c^2-\omega^2\varepsilon(\omega)}}\Big]\cdot\hat{\textbf{P}}_N^+(\textbf{k},
\omega),
\end{equation}
where $I$ is the unit Cartesian tensor and $\textbf{kk}$ is the
usual Cartesian dyadic. The expression in brackets is the Green
function of the Maxwell equations. We now decompose the vector potential into
its longitudinal and transverse parts as
\begin{equation}\label{e70}
\hat{\textbf{A}}^{+\perp}(\textbf{k},\omega)=\frac{i\omega}{\epsilon_0}
\Big[{\frac{I-\textbf{e}_3(\textbf{k})\textbf{e}_3(\textbf{k})}
{\textbf{k}^2c^2-\omega^2\varepsilon(\omega)}}\Big]\cdot\hat{\textbf{P}}_N^+(\textbf{k},
\omega),\end{equation} and
\begin{equation}\label{e71}
\hat{\textbf{A}}^{+\parallel}(\textbf{k},\omega)=\frac{i\omega}{\epsilon_0}\,
\frac{\textbf{e}_3(\textbf{k})\textbf{e}_3(\textbf{k})}
{{\omega^2\varepsilon(\omega)}}\cdot\hat{\textbf{P}}_N^+(\textbf{k},\omega).
\end{equation}
Now let us define a new set of bosonic operators as
\begin{equation}\label{e72}
\hat{{B}}_\lambda(\textbf{k},\omega)= \frac{\hat{{P}}_{N\lambda}^+
(\textbf{k},\omega)}{\sqrt{2\epsilon_0\hbar\, Im\chi(\omega)}}.
\end{equation}
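With this definition, the commutation relations (\ref{e64}), transformed to reciprocal space,
immediately give the bosonic relations
$[\hat{B}_\lambda(\textbf{k},\omega),\hat{B}_{\lambda'}^\dag(\textbf{k}',\omega')]=
\delta_{\lambda\lambda'}\delta(\textbf{k}-\textbf{k}')\delta(\omega-\omega')$,
since the normalization factor $\sqrt{2\epsilon_0\hbar Im\chi(\omega)}$ cancels
the factor $2\epsilon_0\hbar Im\chi(\omega)$ appearing on the right-hand side of (\ref{e64}).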
Using the inverse Fourier transform, the operators
$\hat{\textbf{A}}^\perp(\textbf{r},t)$ and
$\hat{\textbf{A}}^\parallel(\textbf{r},t)$ can be obtained from
equations (\ref{e70}) and (\ref{e71}) as
\begin{eqnarray}\label{e73}
\hat{\textbf{A}}^\perp(\textbf{r},t)&=& ({\frac{\hbar }{{8\pi ^4
\epsilon _0 }}})^{\frac{1}{2}} \int_0^\infty{d\omega\int
{d^3{\textbf{k}}\sum\limits_{\lambda=1,2} \omega \frac{\sqrt
{Im\chi(\omega)}}{\textbf{k}^2c^2-\omega^2
\varepsilon(\omega)}}}\nonumber\\
&\times& (\hat{B}_\lambda(\textbf{k},\omega) e^{i(\omega
t-\textbf{k}\cdot\textbf{r})}+
H.c.)\hspace{0.1cm}\textbf{e}_\lambda(\textbf{k}),
\end{eqnarray}
\begin{eqnarray}\label{e74}
\hat{\textbf{A}}^{\parallel}(\textbf{r},t)&=& \left({\frac{\hbar}{8\pi^4\epsilon_0}}
\right)^{\frac{1}{2}}\int_0^\infty{d\omega\int{d^3\textbf{k}\,\frac{\sqrt
{Im\chi
(\omega)}}{\omega\varepsilon(\omega)}}}\nonumber\\
&\times& (\hat{B}_3(\textbf{k},\omega)e^{i(\omega t-\textbf{k}
\cdot\textbf{r})}+
H.c.)\hspace{0.1cm}\textbf{e}_3(\textbf{k}),
\end{eqnarray}
and the electric field can be obtained from
\begin{equation}\label{e75}
\hat{\textbf{E}}(\textbf{r},t)=-\frac{\partial\hat{\textbf{A}}(\textbf{r},t)}{\partial
t}.
\end{equation}
The relations (\ref{e73})-(\ref{e75}) are equivalent to the results
(\ref{e55}) and (\ref{e57}) of the damped polarization method.
Therefore these two approaches are equivalent.
\subsection{Minimal coupling method}
In the minimal coupling method, quantum electrodynamics in a linear
polarizable medium is accomplished by modeling the medium with a
quantum field, namely a matter field interacting with the
electromagnetic field. This quantum field describes the
polarizability of the medium and interacts with the
displacement field $\hat{\textbf{D}}$ through a minimal coupling
term \cite{9}. (The method is called the minimal coupling method because
the matter field is coupled to the EM field in a minimal coupling way.)
The Heisenberg equations for the electromagnetic field and the matter
field lead to both the Maxwell and the constitutive equations. In
this method we use the Coulomb gauge
$\nabla\cdot{\hat{\textbf{A}}}=0$, and the canonical conjugate
momentum density of the electromagnetic field is the displacement vector
operator $\hat{\textbf{D}}(\textbf{r},t)$, which satisfies the
commutation relation
\begin{equation}\label{e76}
[\hat{{A}}_\lambda({\textbf{r},t}),\hat{{D}}_{\lambda'}
(\textbf{r}',t)]=
i\hbar\delta_{\lambda\lambda'}^\perp
({\textbf{r}-\textbf{r}'}),
\end{equation}
where $\delta_{\lambda\lambda'}^\perp$ is the transverse delta
function \cite{15}. The total Hamiltonian is written as
\begin{eqnarray}\label{e77}
\hat{H}&=&\int
d^3\textbf{r}\,({\frac{[\hat{\textbf{D}}(\textbf{r},t)-\hat{\textbf{P}}
(\textbf{r},t)]^2}{2\epsilon_0}}+ {\frac{(\nabla\times
\hat{\textbf{A}}(\textbf{r},t))^2}{2\mu_0}})\nonumber\\
&+&\sum_{\lambda=1,2,3}\int d^3\textbf{k}\int_0^\infty d\omega\,\hbar
\omega\,\hat{B}^\dagger_\lambda(\textbf{k},\omega)
\hat{B}_\lambda(\textbf{k},\omega),
\end{eqnarray}
where $\hat{B}_\lambda(\textbf{k},\omega)$ are matter field
operators which satisfy bosonic commutation relations
\begin{equation}\label{e78}
[\hat{B}_\lambda(\textbf{k},\omega),\hat{B}^{\dag}_{\lambda'}(\textbf{k}',\omega')]=
\delta_{\lambda\lambda'}\delta(\omega-\omega')\delta(\textbf{k}-\textbf{k}').
\end{equation}
In the Hamiltonian (\ref{e77}), $\hat{\textbf{P}}(\textbf{r},t)$ is the polarization
density operator of the medium, defined by
\begin{equation}\label{e79}
\hat{\textbf{P}}(\textbf{r},t)=\int
\frac{d^3\textbf{k}}{(2\pi)^{\frac{3}{2}}}\int_0^\infty{d\omega\sum
\limits_{\lambda= 1,2,3}
{[F(\omega)\hat{B}_\lambda
(\textbf{k},\omega,t)e^{i\textbf{k}\cdot\textbf{r}}+
H.c.]\textbf{e}_\lambda(\textbf{k})}},
\end{equation}
where the function $F(\omega)$ is the coupling function
between the electromagnetic field and the matter field.
The electric field in terms of the displacement vector
${\hat{\textbf{D}}}$ and the polarization vector $\hat{\textbf{P}}$
is defined as
\begin{equation}\label{e80}
\epsilon_0\hat{\textbf{E}}(\textbf{r},t)=\hat{\textbf{D}}(\textbf{r},t)-\hat{\textbf{P}}(\textbf{r},t).
\end{equation}
Applying the Heisenberg equation to the vector potential
$\hat{\textbf{A}}$ and using the commutation relation defined in
(\ref{e76}), we find that
\begin{equation}\label{e81}
\hat{\textbf{E}}^\perp(\textbf{r},t)=-\frac{\partial\hat{\textbf{A}}(\textbf{r},t)}
{\partial t},\end{equation} and the longitudinal part of the electric field
is obtained as
\begin{equation}\label{e82}
\hat{\textbf{E}}^\parallel(\textbf{r},t)=-\frac{\hat{\textbf{P}}^\parallel(\textbf{r},t)}{\epsilon_0}.
\end{equation}
Similarly, applying the Heisenberg equation to
the displacement vector $\hat{\textbf{D}}$, we obtain
\begin{equation}\label{e83}
\frac{\partial \hat{\textbf{D}}(\textbf{r},t)}{\partial
t}=\nabla\times \hat{\textbf{H}}(\textbf{r},t).
\end{equation}
Equations (\ref{e80}), (\ref{e81}) and (\ref{e83}) show that the
proposed Hamiltonian (\ref{e77}) gives the correct dynamical
equations (the Maxwell equations), so that interpreting $\hat{\textbf{P}}$ as
the polarization vector is justified.
Using the commutation relation (\ref{e78}), the Heisenberg
equation for $\hat{B}_\lambda(\textbf{k},\omega)$ can be obtained as follows
\begin{equation}\label{e84}
\dot {\hat{B}}_\lambda({\bf k},\omega ,t) = - i\omega
\hat{B}_\lambda({\bf k},\omega ,t) + \frac{1}{{\hbar(2\pi
)^{\frac{3}{2}}}}\int {d^3 {\bf r}'\hspace{0.1cm} F^* (\omega)
e^{ - i{\bf k}\cdot{\bf r}'}\hat{\bf E}({\bf r}',t)\cdot{\bf
e}_\lambda ({\bf k})}.
\end{equation}
This equation has the following formal solution
\begin{eqnarray}\label{e85}
\hat{B}_\lambda({\bf k},\omega ,t) &=& \hat{B}_\lambda({\bf
k},\omega )e^{ - i\omega t}\nonumber\\
&+& \frac{1}{{\hbar (2\pi )^{\frac{3}{2}}}}\int_0^t
dt'\,e^{-i\omega (t - t')} \int d^3{\bf r}'F^* (\omega )e^{ - i{\bf
k}\cdot{\bf r}'}\hat{\bf E}({\bf r}',t')\cdot{\bf e}_\lambda ({\bf
k}).\nonumber\\
\end{eqnarray}
Using this equation and (\ref{e79}), the polarization operator can
be obtained as
\begin{equation}\label{e86}
\hat{\textbf{P}}(\textbf{r},t)=\epsilon_0\int_0^{t}
dt'\,\chi(t-t')\hat{\textbf{E}}(\textbf{r},t')+\hat{\textbf{P}}_N(\textbf{r},t),
\end{equation}
where
\begin{equation}\label{e87}
\chi (t) = \frac{{2 }}{{\hbar \epsilon _0 }}\int_0^\infty d\omega\,
|F(\omega )|^2 \sin (\omega t),
\end{equation}
and $\hat{\textbf{P}}_N(\textbf{r},t)$ is the noise operator
of the medium, given by
\begin{equation}\label{e88}
\hat{\bf P}_N ({\bf r},t) = \int \frac{d^3{\bf
k}}{(2\pi)^{\frac{3}{2}}}\int_0^\infty d\omega \sum_{\lambda=1}^3
[F(\omega )\hat{B}_\lambda({\bf k},\omega )e^{ - i\omega t + i{\bf
k}\cdot{\bf r}}\,{\bf e}_\lambda({\bf k})+H.c.].
\end{equation}
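The susceptibility in the frequency domain is the half-range Fourier transform of (\ref{e87}),
$\chi(\omega)=\int_0^\infty dt\,\chi(t)e^{i\omega t}$; using
$\int_0^\infty dt\,\sin(\omega' t)e^{i\omega t}=
P\frac{\omega'}{\omega'^2-\omega^2}+\frac{i\pi}{2}\delta(\omega-\omega')$
for $\omega,\omega'>0$, one obtains the expressions quoted below.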
From (\ref{e87}), the imaginary and real parts of the Fourier
transform of the susceptibility can be written as
\begin{equation}\label{e89} Im
[\chi(\omega)]=\frac{{\pi}}{\hbar\epsilon_0}
|F(\omega)|^2,\end{equation}
\begin{equation}\label{e90}
Re[\chi(\omega)]=\frac{\pi}{\hbar\epsilon_0}P\int_{-\infty}^{+\infty}
d\omega'\,|F (\omega')|^2 \frac{\omega}{\omega^2-\omega'^2}.
\end{equation}
As can be seen from (\ref{e89}) and (\ref{e90}), $Im\chi(\omega)$
and $Re\chi(\omega)$ satisfy the Kramers-Kronig relations, which
justifies interpreting $\chi(t)$ as the susceptibility.
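Explicitly, the Kramers-Kronig relations read
$Re\chi(\omega)=\frac{1}{\pi}P\int_{-\infty}^{+\infty}d\omega'\,
\frac{Im\chi(\omega')}{\omega'-\omega}$ and
$Im\chi(\omega)=-\frac{1}{\pi}P\int_{-\infty}^{+\infty}d\omega'\,
\frac{Re\chi(\omega')}{\omega'-\omega}$, and they are automatically satisfied by the
Fourier transform of any causal response function such as (\ref{e87}).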
In (\ref{e88}), we can separate the positive and negative frequency components
of the noise operator $\hat{\bf P}_N ({\bf r},t)$, as defined in the
phenomenological method, and then, using (\ref{e89}) and (\ref{e78}),
we find
\begin{equation}\label{e91}
[\hat{P}_{Ni}^+(\textbf{r},\omega),\hat{P}_{Nj}^-(\textbf{r}',\omega')]=
2\epsilon_0\hbar\, Im\chi(\omega)\,\delta_{i
j}\delta(\omega-\omega')\delta(\textbf{r}-\textbf{r}'),
\end{equation}
which is the postulated relation (\ref{e64}) of the phenomenological
method.
In a physical situation where the electric susceptibility is known,
the coupling function can be obtained from (\ref{e89}) and the
Hamiltonian (\ref{e77}) can be constructed in terms of the electric
susceptibility. The equations of motion can then be obtained from
(\ref{e80}), (\ref{e81}) and (\ref{e83}) as
\begin{eqnarray}\label{e92}
-\nabla^2\hat{\textbf{A}}&+&\frac{1}{c^2}\frac{\partial^2\hat{\textbf{A}}}{\partial
t^2 }+\frac{1}{c^2}\frac{\partial}{\partial
t}\int_0^tdt'\,\chi(t-t')\frac{\partial
\hat{\textbf{A}}(\textbf{r},t')}{\partial
t'}\nonumber\\&=&\mu_0\frac{\partial
\hat{\textbf{P}}_N^\perp(\textbf{r},t)}{\partial t}.
\end{eqnarray}
If in this equation we change the lower limit of the integral from $0$
to $-\infty$ and take the time Fourier transform, we obtain the
transverse part of the postulated equation (\ref{e68}) of the
phenomenological method. Also, the time Fourier transform
of the longitudinal part of the electric field, $E^\parallel$, is equivalent
to the longitudinal part of the postulated equation of the
phenomenological method. Thus the phenomenological
method can be obtained from the minimal coupling method.
Equation (\ref{e92}) can be solved by going to reciprocal space.
The vector potential in reciprocal space can be written as
\begin{equation}\label{e93}
\hat{\textbf{A}}(\textbf{r},t)=\frac{1}{(2\pi)^{\frac{3}{2}}}\int
d^3\textbf{k}\,\hat{\tilde{\textbf{A}}}(\textbf{k},t)e^{i\textbf{k}\cdot\textbf{r}},
\end{equation}
where the Fourier component $\hat{\tilde{\textbf{A}}}$ can be written in
terms of creation and annihilation operators as
\begin{equation}\label{e94}
\hat{\tilde{\textbf{A}}}(\textbf{k},t)=\sum_{\lambda=1,2}\sqrt{\frac{\hbar}
{2\epsilon_0\omega_\textbf{k}}} [\hat{a}_\lambda(\textbf{k},t)
\textbf{e}_\lambda(\textbf{k})+\hat{a}^\dagger_\lambda(-\textbf{k},t)
\textbf{e}_\lambda(-\textbf{k})].
\end{equation}
In terms of the Fourier components, equation (\ref{e92}) is written
as
\begin{eqnarray}\label{e95}
\ddot{\hat{\tilde{\textbf{A}}}}&+&\omega_\textbf{k}^2\hat{\tilde{\textbf{A}}}+
\frac{\partial}{\partial
t}\int_0^tdt'\,\chi(t-t')\dot{\hat{\tilde{\textbf{A}}}}(\textbf{k},t')\nonumber\\&=&-
\frac{1}{\epsilon_0}
\sum_{\lambda=1,2}\int_0^\infty{d\omega}\,[\omega F(\omega)
\hat{B}_\lambda(\textbf{k},\omega)e^{-i\omega t}\textbf{e}_\lambda
(\textbf{k})+H.c.], \nonumber\\
\end{eqnarray}
where $\omega_\textbf{k}=c|\textbf{k}|$.
Equation (\ref{e95}) can be solved using the Laplace
transformation; the details can be found in reference \cite{9},
and the final result is
\begin{eqnarray}\label{e96}
\hat{\textbf{A}}(\textbf{r},t)&=&\int{d^3\textbf{k}}
\sum\limits_{\lambda=1,2}{\sqrt{\frac{\hbar}
{2(2\pi)^3\epsilon_0\omega_\textbf{k}}}
[z(\omega_\textbf{k},t)e^{i\textbf{k}\cdot\textbf{r}}
\hat{a}_\lambda{(\textbf{k}},0)}+H.c.]\textbf{e}_\lambda(\textbf{k})\nonumber \\
&+& \frac{1}{\epsilon_0}\sum\limits_{\lambda=1,2}
{\int\frac{d^3\textbf{k}}{(2\pi)^{\frac{3}{2}}}
\int_0^\infty{d\omega}\,[\xi(\omega,\omega_\textbf{k},t)
\hat{B}_\lambda(\textbf{k},\omega,0)}e^{i\textbf{k}\cdot\textbf{r}}+ H.c.]
\textbf{e}_\lambda(\textbf{k}),\nonumber\\
\end{eqnarray}
where
\begin{equation}\label{e97}
z(\omega _{\textbf{k}},t)=L^{-1}\{\frac{s+s\tilde\chi(s)-
i\omega_{\textbf{k}}}{s^2+\omega_\textbf{k}^2+s^2\tilde\chi(s)}\},
\end{equation}
\begin{equation}\label{e98}
\xi(\omega,\omega_\textbf{k},t)=F(\omega)L^{-1}\{
\frac{s}{(s+i\omega)[s^2+ \omega_\textbf{k} ^2 + s^2
\tilde{\chi}(s)]}\},
\end{equation}
and $L^{-1}\{\cdot\}$ denotes the inverse Laplace transform. The
transverse electric field is obtained as
\begin{equation}\label{e99}
\hat{\textbf{E}}^\perp(\textbf{r},t)=-\frac{\partial\hat{\textbf{A}}(\textbf{r},t)}{\partial
t},
\end{equation}
and the magnetic field as
\begin{equation}\label{e100}
\hat{\textbf{B}}(\textbf{r},t)=\nabla\times
\hat{\textbf{A}}(\textbf{r},t).
\end{equation}
From equations (\ref{e82}) and (\ref{e85}), the longitudinal part of
the electric field is
\begin{eqnarray}\label{e101}
&&\hat{\textbf{E}}^{\parallel}(\textbf{r},t)=-\frac{\hat{\textbf{P}}^{\parallel}(\textbf{r},t)}{{\epsilon
_0 }}\nonumber\\&=&-\frac{1}{\epsilon_0}{\int_0^\infty{d\omega }
\int {d^3\textbf{k}}\,[Q(\omega
,t)}F(\omega)\hat{B}_3(\textbf{k},\omega,0)e^{i\textbf{k}\cdot
\textbf{r}}
+ H.c.]\,\textbf{e}_3(\textbf{k}),\nonumber\\
\end{eqnarray}
where
\begin{equation}\label{e102}
Q(\omega, t)=L^{-1}\{ \frac{1}{(s+i\omega)(1+\tilde\chi(s))}\}.
\end{equation}
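As a simple check of (\ref{e101})-(\ref{e102}), note that for a nondispersive medium
with a constant susceptibility $\chi$ (so that $\tilde\chi(s)=\chi$) the inverse Laplace
transform reduces to $Q(\omega,t)=e^{-i\omega t}/(1+\chi)$, so that, up to normalization,
the longitudinal field in (\ref{e101}) is the longitudinal noise polarization divided by
$\epsilon_0(1+\chi)$.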
In section 4 we will show that these results,
(\ref{e96})$-$(\ref{e102}), are equivalent to the results obtained
in the previous methods.
\section{Derivation of the minimal coupling Hamiltonian from the
damped polarization model}
In this section we obtain the Hamiltonian of the minimal coupling
method from the Lagrangian of the damped polarization model. In
(\ref{e32}) the conjugate of $\tilde{\textbf{A}}$ is the transverse
electric field $-\epsilon_0\tilde{\textbf{E}}^\perp$, but in the minimal
coupling scheme, as given in (\ref{e76}), the conjugate of
$\tilde{\textbf{A}}$ is the displacement vector
$\tilde{\textbf{D}}$. We can change the conjugate of $\hat{\textbf{A}}$ from $\hat{\textbf{E}}$
to $\hat{\textbf{D}}$ by adding to the Lagrangian (\ref{e1}) the term
$\alpha\frac{\partial (\textbf{X}\cdot \textbf{A})}{\partial t}$, which is
a canonical transformation and does not affect the equations of motion.
This canonical transformation leads to a
$\dot{\bf A}\cdot\textbf{X}$ type of coupling, which gives the
displacement field $-\tilde{\textbf{D}}$ as the conjugate of
$\tilde{\textbf{A}}$. In this case, using the Fourier transform, the
transverse part of the canonically transformed Lagrangian
($\tilde{{\cal L}}'^\perp$) can be written as
\begin{equation}\label{e103}
\tilde{{\cal L}}'^\perp= {\tilde{{\cal
L}}_{em}^\perp}+{\tilde{{\cal L}}_{mat}^\perp} +{\tilde{\cal
L}_{res}^\perp}+\tilde{{\cal L}}_{int}'^\perp,
\end{equation}
where
\begin{equation}\label{e104}
\tilde{{\cal L}}_{int}'^\perp=(\alpha\dot{\tilde{\textbf{A}}}\cdot
\tilde{\textbf{X}}^{\perp*}+c.c.)-(\int_0^\infty d\omega\,
v(\omega)\tilde{\textbf{X}}^{\perp*}\cdot\dot{\tilde{\textbf{Y}}}_\omega^\perp+c.c.).
\end{equation}
The other transverse and longitudinal parts of the Lagrangian do not
change. Now ${\cal L}'$ is used to obtain the components of the
conjugate variables of the fields
\begin{equation}\label{e105}
-\tilde{D}_\lambda=\frac{\partial{\cal
L}'}{\partial\dot{\tilde{A}}_\lambda^*}
=\epsilon_0\dot{\tilde{A}}_\lambda+\alpha\tilde{X}_\lambda^\perp,
\end{equation}
\begin{equation}\label{e106}
\tilde {P}_\lambda^{\perp}=\frac{\partial{\cal L}'}
{\partial{\dot{\tilde{X}}^{\perp*}_\lambda}}=
\rho\dot{\tilde{X}}^{\perp}_{\lambda},
\end{equation}
\begin{equation}\label{e107}
\tilde{Q}_{\omega\lambda}^{\perp}= \frac{\partial{\cal L}'}{\partial
\dot{\tilde{Y}}_{\omega\lambda}^{\perp*}}=\rho\dot
{\tilde{Y}}_{\omega\lambda}^{\perp}-v(\omega)\tilde{X}_\lambda^{\perp},
\end{equation}
where (\ref{e104}) has been used to obtain (\ref{e105})$-$(\ref{e107}).
Using the Lagrangian (\ref{e103}) and the expressions for the
conjugate variables in (\ref{e105})$-$(\ref{e107}), we obtain the
Hamiltonian for the transverse fields as
\begin{equation}\label{e108}
H^\perp=\int' d^3\textbf{k}\,(\tilde{\cal H}_{em}^\perp+\tilde{\cal
H}_{mat}^\perp),
\end{equation}
where
\begin{equation}\label{e109}
\tilde{\cal H}_{em}^\perp=\frac{(\tilde{\textbf{D}}
+\alpha\tilde{\textbf{X}}^\perp)^2}{\epsilon_0} +\frac
{\textbf{k}^2\tilde{\textbf{A}}^2}{\mu_0},
\end{equation}
is the electromagnetic energy density. The interaction between the
electromagnetic field and the polarization field is now embodied in the
electromagnetic energy density. Using (\ref{e6}) we can rewrite
(\ref{e109}) as
\begin{equation}\label{e110}
\tilde{\cal H}_{em}^\perp=\epsilon_0\tilde{\textbf{E}}^2(\textbf{k})
+\epsilon_0c^2\tilde{\textbf{B}}^2(\textbf{k}).
\end{equation}
This Hamiltonian looks like that of the free EM field. However, since in
this Hamiltonian the electric field
$\tilde{\textbf{E}}(\textbf{k})$ is not the conjugate of the
vector potential $\tilde{\textbf{A}}(\textbf{k})$, we cannot
use it to separate the eigenmodes of the Hamiltonian and quantize
the EM field.
Again the fields are quantized by demanding the ETCR between the
components and their conjugates. We should therefore replace equation
(\ref{e32}) by
\begin{equation}\label{e111}
[\hat{\tilde{A}}_\lambda(\textbf{k},t),\hat{\tilde{D}}_{\lambda'}^{{*}}
(\textbf{k}',t)]=i\hbar\delta
_{\lambda\lambda'}\delta(\textbf{k}-\textbf{k}').
\end{equation}
The matter and reservoir parts of the Hamiltonian
do not change, so we can use the results of
subsection 2.1 to write
\begin{equation}\label{e112}
\hat{H}_{mat}^\perp=\int_0^\infty{d\omega}
\int{d^3\textbf{k}\sum\limits_{\lambda=1,2}
{\hbar\omega}\,\hat{B}_\lambda^\dag
(\textbf{k},\omega)\hat{B}_\lambda(\textbf{k},\omega)},
\end{equation}
where $\hat{B}_\lambda
(\textbf{k},\omega)$ and $\hat{B}_\lambda^\dag
(\textbf{k},\omega)$ are defined in (\ref{e47}).
The electromagnetic part of the Hamiltonian has
been written in reciprocal space and can be transformed back to real space
by using the inverse Fourier transform:
\begin{equation}\label{e113}
\hat{H}_{em}^\perp =\int{d^3{\bf r
}}\,\frac{(\hat{\textbf{D}}(\textbf{r},t)+\alpha\hat{\textbf{X}}^\perp
(\textbf{r},t))^2 }{2\epsilon_0}+\frac{(\nabla\times
{\hat{\textbf{A}}(\textbf{r},t)})^2}{2\mu _0},
\end{equation}
where $\hat{\textbf{X}}^\perp(\textbf{r},t) $ can be written in
terms of its Fourier component
$\hat{\tilde{\textbf{X}}}^\perp(\textbf{k},t) $ as
\begin{equation}\label{e114}
\hat{\textbf{X}}^\perp(\textbf{r},t)=\frac{1}{(2\pi)^\frac{3}{2}}
\int d^3{\bf k}\hspace{0.1cm}\hat{\tilde{\textbf{X}}}^\perp(\textbf{k},t)\hspace{0.1cm}
e^{i\textbf{k}\cdot\textbf{r}}.
\end{equation}
Now, using the definition of $\hat{b}_\lambda(\textbf{k},t)$ and
$\hat{b}_\lambda^\dagger(\textbf{k},t)$ in (\ref{e36}), we obtain
$\hat{\textbf{X}}^\perp(\textbf{r},t) $ as
\begin{equation}\label{e115}
\hat{\textbf{X}}^\perp(\textbf{r},t)=\frac{1}{(2\pi)^{\frac{3}{2}}}
\sqrt{\frac{\hbar}{{2\rho\tilde{\omega}_0}}} \int
d^3\textbf{k}\sum_{\lambda=1,2}[\hat{b}_\lambda(\textbf{k},t)
e^{i\textbf{k}\cdot\textbf{r}}+H.c.]\hspace{0.1cm}\textbf{e}_\lambda(\textbf{k}).
\end{equation}
Using (\ref{e49}) and (\ref{e115}) we have
\begin{equation}\label{e116}
\hat{\textbf{X}}^\perp(\textbf{r},t)=-\frac{1}{(2\pi)^{\frac{3}{2}}}\int_0^\infty
{d\omega}\int{d^3 \textbf{k}\sum\limits_{\lambda= 1,2}[
{\frac{F(\omega)}{\alpha}}\hat{B}_\lambda
(\textbf{k},\omega,t)e^{i\textbf{k}\cdot\textbf{r}}+
H.c.]\textbf{e}_\lambda(\textbf{k})},
\end{equation}
where
\begin{equation}\label{e117}
F(\omega)=-\sqrt{\frac{\hbar\alpha^2}{2\rho\tilde{\omega}_0}}\,(
{\alpha_0^{*}(\omega)-\beta_0^{*}(\omega)}).
\end{equation}
Now, using (\ref{e116}) and (\ref{e113}), the total transverse
Hamiltonian (\ref{e108}) can be written as
\begin{eqnarray}\label{e118}
\hat{H}^\perp&=&\hat{H}_{em}^\perp+\hat{H}_{mat}^\perp\nonumber\\
&=&\int{d^3\textbf{r}}\,
\frac{{\left({\hat{\textbf{D}}-\hat{\textbf{P}}^\perp}
\right)^2}}
{{2\epsilon_0}}
+\frac{(\nabla \times {\hat{\textbf{A}}})^2}{2\mu_0}\nonumber\\&+&
\int_0^\infty{d\omega}
\int{d^3\textbf{k}\sum\limits_{\lambda=1,2}
{\hbar\omega}\,\hat{B}_\lambda^\dag
(\textbf{k},\omega)\hat{B}_\lambda(\textbf{k},\omega)},
\end{eqnarray}
where
\begin{equation}\label{e119}
\hat{\textbf{P}}^\perp(\textbf{r},t)=\frac{1}{(2\pi)^{\frac{3}{2}}}\int_0^\infty{d\omega}
\int{d^3\textbf{k}\sum\limits_{\lambda=1,2}[F(\omega)\hat{B}_\lambda
(\textbf{k},\omega,t)e^{i\textbf{k}\cdot\textbf{r}}+H.c.]\textbf{e}_\lambda(\textbf{k})}.
\end{equation}
Using the same method, the longitudinal part of the matter field can
be written as
\begin{equation}\label{e120}
\hat{\textbf{X}}^\parallel(\textbf{r},t)=\frac{1}{(2\pi)^{\frac{3}{2}}}\int_0^\infty
{d\omega}\int{d^3\textbf{k}\,[\frac{F(\omega)}{\alpha}\hat{B}_3
(\textbf{k},\omega,t)e^{i\textbf{k}\cdot\textbf{r}}+
H.c.]}\hspace{0.1cm}\textbf{e}_3(\textbf{k}).
\end{equation}
The total longitudinal part of the Hamiltonian (\ref{e31}) can be
written as
\begin{eqnarray}\label{e121}
\hat{H}^\parallel&=&\hat{H}_{em}^\parallel+\hat{H}_{mat}^\parallel\nonumber\\
&=&
\int{d^3\textbf{r}}\,\frac{{\left({\frac{1}{(2\pi)^{\frac{3}{2}}}\int_0^\infty{d\omega}
\int{d^3\textbf{k}\,[F(\omega)\hat{B}_3
(\textbf{k},\omega)e^{i\textbf{k}\cdot\textbf{r}}+H.c.]\textbf{e}_3(\textbf{k})}}
\right)^2}}{{2\epsilon_0}}
\nonumber\\&+&\int_0^\infty{d\omega}
\int d^3\textbf{k}\hspace{0.1cm}
{\hbar\omega}\hspace{0.1cm}\hat{B}_3^\dag
(\textbf{k},\omega)\hat{B}_3(\textbf{k},\omega),
\end{eqnarray}
where $F(\omega)$ is defined in (\ref{e117}). Finally, the total
Hamiltonian can be written as the sum of the transverse part
(\ref{e118}) and the longitudinal part (\ref{e121}),
\begin{eqnarray}\label{e122}
\hat{H} &=& \hat{H}^\perp + \hat{H}^{\parallel}\nonumber\\
&=&\int {d^3\textbf{r}}\,
\frac{{\left( {\hat{\textbf{D}}-\hat{\textbf{P}}}\right)^2}}
{{2\epsilon _0}}+\frac{{(\nabla \times {\hat{\textbf{A}}})^2 }}
{{2\mu _0}}
\nonumber\\&+&\int_0^\infty{d\omega}\int{d^3\textbf{k}\sum
\limits_{\lambda=1,2,3}{\hbar\omega }\,
\hat{B}_\lambda^\dag(\textbf{k},\omega)\hat{B}_\lambda(\textbf{k},\omega)},\nonumber\\
\end{eqnarray}
where
\begin{equation}\label{e123}
\hat{\textbf{P}}=-\frac{1}{(2\pi)^{\frac{3}{2}}}
\int_0^\infty {d\omega }
\int {d^3\textbf{k}\sum\limits_{\lambda= 1,2,3} [ F(\omega)
\hat{B}_\lambda
(\textbf{k},\omega)e^{i\textbf{k}\cdot\textbf{r}}+H.c.]\hspace{0.1cm}
\textbf{e}_\lambda(\textbf{k})}.
\end{equation}
Therefore the minimal coupling Hamiltonian is obtained from the
Lagrangian (\ref{e1}) by a canonical transformation, and the
two Hamiltonians, (\ref{e27})$-$(\ref{e30}) and
(\ref{e77}), are equivalent.
We can check the consistency of the minimal coupling method with the damped
polarization method by comparing the susceptibilities obtained from
these two methods. In the damped polarization model the imaginary
part of the susceptibility, $Im\chi(\omega)$, is obtained as
\begin{equation}\label{1}
Im\chi(\omega)={\frac{\alpha^2\omega_0\pi}{2\rho\omega^2\epsilon_0}}
(\alpha_0(\omega)+\beta_0(\omega))^2,
\end{equation}
where we have used Eq. (\ref{e56}) and the definition of $g(\omega)$
in (\ref{e50}). In the minimal coupling method $Im\chi(\omega)$ can be
written as
\begin{equation}\label{2}
Im\chi(\omega)={\frac{\pi\alpha^2}{2\rho\omega_0\epsilon_0}}
(\alpha_0(\omega)-\beta_0(\omega))^2,
\end{equation}
where we have used (\ref{e89}) and (\ref{e117}). Now, from
(\ref{e48}), we can easily prove that
\begin{equation}\label{3}
(\alpha_0(\omega)-\beta_0(\omega))^2=\frac{\omega_0^2}{\omega^2}
(\alpha_0(\omega)+\beta_0(\omega))^2.
\end{equation}
Substituting (\ref{3}) in (\ref{2}), relation (\ref{1}) is recovered.
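Indeed, with (\ref{3}) the right-hand side of (\ref{2}) becomes
$\frac{\pi\alpha^2}{2\rho\omega_0\epsilon_0}\,
\frac{\omega_0^2}{\omega^2}(\alpha_0(\omega)+\beta_0(\omega))^2
=\frac{\pi\alpha^2\omega_0}{2\rho\omega^2\epsilon_0}
(\alpha_0(\omega)+\beta_0(\omega))^2$,
which is exactly (\ref{1}).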
\section{Comparing the results of the minimal coupling method with other methods}
In this section we complete the demonstration of the equivalence of the minimal
coupling method with the other methods and show that the results
obtained in the minimal coupling method, (\ref{e96})$-$(\ref{e102}),
are equivalent to those obtained in the other methods. In equations
(\ref{e96})$-$(\ref{e102}) the vector potential is obtained as a
combination of the free field operators and the reservoir field
operators, respectively,
\begin{equation}\label{e124}
\hat{\textbf{A}}(\textbf{r},t)=I+II,
\end{equation}
where, by setting the initial time at $-\infty$, we can write the
expressions for $I$ and $II$ as
\begin{eqnarray}\label{e125}
I=\lim\limits_{t'\to-\infty}\int{d^3\textbf{k}}
\sum\limits_{\lambda=1,2}{\sqrt{\frac{\hbar}{2(2\pi)^3
\epsilon_0\omega_\textbf{k}
}}}[z_+(\omega_{\textbf{k}},t-t')e^{i\textbf{k}\cdot\textbf{r}}\hat{a}_\lambda
(\textbf{k},t')+
H.c.]\textbf{e}_\lambda(\textbf{k}),\nonumber\\
\end{eqnarray}
and
\begin{equation}\label{e126}
II=\lim\limits_{t'\to-\infty}
\frac{1}{\epsilon_0}\sum\limits_{\lambda=1,2}\int\frac{d^3
\textbf{k}}{(2\pi)^{\frac{3}{2}}}\int_0^\infty{d\omega}\,[\xi(\omega
,\omega_\textbf{k},t-t')\hat{B}_\lambda(\textbf{k},\omega,t')
e^{i\textbf{k}\cdot\textbf{r}}+H.c.]\textbf{e}_\lambda(\textbf{k}).
\end{equation}
In other words, in Eqs. (\ref{e96}) and (\ref{e101}) we change the time
variable from $t$ to $t-t'$ and then let $t'\rightarrow -\infty$.
The explicit time dependence of $z(\omega_\textbf{k},t-t')$ and
$\xi(\omega_\textbf{k},\omega,t-t')$ can be obtained from the
inverse Laplace transform formula
\begin{equation}\label{e127}
f(t)=L^{-1}[f(s)]=\lim\limits_{\eta\to 0^+}\int_{-
\infty}^{+\infty}{\frac{f(i\omega
+\eta)}{2\pi}}e^{(i\omega+\eta)t}d\omega,
\end{equation}
where $f(t)$ is an arbitrary function. We then have
\begin{eqnarray}\label{e128}
&&\lim\limits_{t'\to-\infty}z_+(\omega_\textbf{k},t-t')
=\nonumber\\&&\lim\limits_{\eta\to 0^+}\lim \limits_{t'\to-
\infty}\int_{-\infty}^{+\infty}
{\frac{d\omega}{2\pi}\frac{(i\omega+\eta) [1+\chi
(-\omega)]-i\omega_\textbf{k}}{\omega_\textbf{k}^2+
(i\omega+\eta)^2[1+\chi(-\omega)]}}
\hspace{0.1cm} e^{(i\omega+\eta)(t-t')}.
\end{eqnarray}
In equation (\ref{e128}) the relation
$\tilde{\chi}(i\omega)=\chi(-\omega)$ has been used, where
$\tilde{\chi}(s)$ and $\chi(\omega)$ are the Laplace and Fourier
transforms of $\chi(t)$, respectively.
In equation (\ref{e128}) two situations arise: (i) The medium is
nondispersive and $\chi$ is a constant. In this case we use the
limit $t'\rightarrow 0$ for simplicity and evaluate
(\ref{e128}) by residue calculus, finding
\begin{equation}\label{e129}
z_+(\omega ,t )=(1+n)\hspace{0.1cm} e^{(\frac{i\omega_k}{n}t)} +(1-n)\hspace{0.1cm}
e^{-(\frac{i\omega_k}{n}t)},
\end{equation}
where $n=\sqrt{\varepsilon}$, which coincides with the result of
\cite{9}.
(ii) The medium is dissipative. In this case,
according to the Kramers$-$Kronig relations, $\chi(\omega)$ is a
complex function of $\omega$. Since $t-t'>0$, only the
poles with positive imaginary part contribute. Let there be $N$
poles with $\omega_i>0$; then
\begin{eqnarray}\label{e130}
{\lim\limits_{t'\to-\infty}}z_+(\omega _\textbf{k},t - t')
&=&\sum\limits_{n=1}^N{\lim
\limits_{t'\to-\infty}\alpha_n(\omega_k
)e^{i\omega_n(t-t')}},\nonumber\\&=&\sum\limits_{n=1}^N{\lim\limits_{t'\to-\infty}
[\alpha_n(\omega_k)e^{i\omega_{nr}(t-t')-\omega_{ni}t}]e^{\omega
_{ni}t'}},
\end{eqnarray}
where $\omega_n=\omega_{nr}+i\omega_{ni}$ and
\begin{equation}\label{e131}
\alpha_n(\omega_k)=\lim\limits_{\omega\to\omega_n}(\omega-\omega_n)
\{\frac{\omega[1-\chi(\omega)]-\omega_k}{\omega_k^2-\omega^2[1+\chi(-\omega)]}\}.
\end{equation}
From Eq. (\ref{e130}) it is clear that
\begin{equation}\label{e132}
\lim\limits_{t'\to-\infty}z_+(\omega_{\textbf{k}},t-t')=0.
\end{equation}
The same procedure can be used to obtain the explicit form of the
second term of equation (\ref{e124}), and the final result is
\begin{eqnarray}\label{e133}
&&\xi(\omega_{\textbf{k}},\omega,t)= F(\omega)\nonumber
\\&\times &\lim\limits_{\eta\to0^+}\lim\limits_{t'\to-\infty}
\int\limits_{-\infty}^{+\infty} {\frac{d\omega'}{2\pi}
\frac{(i\omega+\eta)
e^{(i\omega+\eta)(t-t')}}{\{i(\omega+\omega')+\eta\}\{\omega_{\textbf{k}}^2-
(\omega-i\eta)^2[1+\chi(-\omega)]\}}}.\nonumber\\
\end{eqnarray}
In equation (\ref{e133}) there is one real pole, while the other poles
have imaginary parts whose contributions tend to zero. Using the calculus of
residues we then obtain
\begin{eqnarray}\label{e134}
\hat{\textbf{A}}(\textbf{r},t)&=&
II=\frac{1}{\epsilon_0}\sum\limits_{\lambda=1,2}{\int_0^\infty
{d\omega}\int\frac{d^3\textbf{k}}{(2\pi)^{\frac{3}{2}}}\hspace{0.1cm}
F(\omega) \frac{\omega}
{(\omega_{\textbf{k}}^2-\omega^2\varepsilon(\omega))}}
\nonumber\\&\times &{ (\hat{B}_\lambda(\textbf{k},\omega)e^{i(\omega
t-\textbf{k}\cdot\textbf{r})}+
H.c.})\hspace{0.1cm}\textbf{e}_{\lambda}(\textbf{k}),
\end{eqnarray}
where we have defined $\hat{B}_\lambda(\textbf{k},\omega)$ as
\begin{equation}\label{e135}
\lim\limits_{t'\to-\infty}\hat{B}_\lambda
(\textbf{k},\omega,t')e^{i\omega
t'}=\hat{B}_\lambda(\textbf{k},\omega).
\end{equation}
Using (\ref{e89}) to write $F(\omega)$ in terms of $Im \chi(\omega)$,
equation (\ref{e134}) can be written as
\begin{eqnarray}\label{e136}
\hat{\textbf{A}}(\textbf{r},t)
&=& \left({\frac{\hbar}{8\pi^4\epsilon_0 }}
\right)^{\frac{1}{2}} \int_0^\infty{d\omega
\int{d^3{\textbf{k}}\sum\limits_{\lambda=1,2}
{\omega \frac{\sqrt {Im\chi(\omega)}}{\textbf{k}^2 c^2
-\omega^2\varepsilon(\omega)}}}}
\nonumber \\
&\times&(\hat{B}_\lambda(\textbf{k},\omega)e^{i(\omega t-\textbf{k}
\cdot\textbf{r})} +
H.c.)\hspace{0.1cm}\textbf{e}_\lambda(\textbf{k}).
\end{eqnarray}
This is the same expression as obtained in the phenomenological
and damped polarization models \cite{3,6}.
By the same method the longitudinal part of the electric field can be
obtained as
\begin{eqnarray}\label{e137}
\hat{ \textbf{E}}^\parallel(\textbf{r},t)&=&\left({\frac{\hbar}{8\pi ^4
\epsilon_0}}
\right)^{\frac{1}{2}}
\int_0^\infty d\omega\int d^3k\hspace{0.1cm}\frac{i\sqrt{Im\chi(\omega)}}
{1+\chi (\omega)}\nonumber \\
&\times& (\hat{B}_3(\textbf{k},\omega)e^{i(\omega
t-\textbf{k}\cdot\textbf{r})}-H.c.)\hspace{0.1cm}\textbf{e}_3(\textbf{k}),
\end{eqnarray}
which is $-\frac{\partial \hat{\textbf{A}}^\parallel}{\partial t}$ in
the phenomenological model and also the longitudinal part of the
electric field in the damped polarization model.
\section{Comparing the different methods}
In subsection 2.3 we showed that the phenomenological method can be
obtained from the minimal coupling method, and in section 3 we obtained
the minimal coupling Hamiltonian from the Lagrangian of the damped
polarization method. In addition, as can be seen from
(\ref{e55})-(\ref{e59}), (\ref{e73})-(\ref{e75}) and
(\ref{e96})-(\ref{e102}), these three methods lead to the same
results. The three models are therefore equivalent; in other words,
they are different techniques for solving the equations of motion of
the same Lagrangian.
Although these three methods are equivalent and lead to the same results,
each of them has its own advantages and disadvantages.
The merit of the damped polarization method with respect to the other
methods is that it is based on a Lagrangian, so the standard canonical
quantization scheme can be applied and the ETCR between the different
operators can be deduced from this Lagrangian. However, comparing the
solution techniques of the three methods shows that the technique used
in this model, which is based on the Fano technique and the
diagonalization of the Hamiltonian, is hard and lengthy. In addition,
the extension of this model to a nonisotropic and nonhomogeneous
polarizable and magnetizable medium has not been done.
The advantage of the phenomenological method is that its way of solving
the equations is based on the Green function of the classical Maxwell
equations. This correlation function is useful for calculating vacuum
effects such as spontaneous emission, the Casimir effect and the van
der Waals force \cite{16}$-$\cite{19}. In addition, the extension of
this model to a nonhomogeneous and nonisotropic medium is easy: one
only has to find the classical Green function of a medium with
nonhomogeneous and nonisotropic susceptibility \cite{20}. Also, the
extension of this model to a magnetizable medium is done by adding
new noise operators for the magnetization losses \cite{21}. The
disadvantage of this model is that it is not based on a Lagrangian,
so the commutation relation between the vector potential and the
electric field is postulated and has to be checked after quantization.
In addition, it is shown in \cite{22} that the spontaneous emission
problem can be solved more easily in the minimal coupling method than
in this model.
The advantage of the minimal coupling model is that in this model the
Maxwell equations are obtained from the Heisenberg equations. Also, as
mentioned in (\ref{e91}), the commutation relations between the noise
operators are obtained from the Heisenberg equations, which gives a
better understanding of the nature of the noise operators.
Another merit of this method is that it can be extended easily to a
nonhomogeneous, nonisotropic and nonlocal polarizable and
magnetizable medium \cite{9} and \cite{23}.
The disadvantage of this model is that, since it is not based
on a Lagrangian, the commutation relation between the vector potential
$\hat{A}$ and the displacement vector $\hat{D}$ is postulated.
Therefore, within the range of applicability of these three models, we
have shown that they are equivalent in a spatially homogeneous and
nonmagnetic medium.
\section{Conclusion}
In this paper we have shown that, in a nonmagnetic medium, the minimal
coupling method is equivalent to the Huttner-Barnett and
phenomenological approaches up to a canonical transformation. Contrary
to the other methods, the minimal coupling method also incorporates the
magnetic properties of the medium. Therefore, for a general comparison,
an extension of the Huttner-Barnett model to the case of a
magnetodielectric medium is needed; this extension is under consideration.
\begin{thebibliography}{99}
\bibitem {1} S. M. Barnett, R. Matloob and R. Loudon, J. Mod. Opt.
42, 1165 (1995)
\bibitem {2} R. J. Glauber and M. Lewenstein, Phys. Rev. A 43, 467 (1991)
\bibitem {3} B. Huttner and S. Barnett, Phys. Rev. A 46, 4306 (1992)
\bibitem {4} R. Matloob and R. Loudon, Phys. Rev. A 53, 4567 (1996)
\bibitem {5} R. Matloob and R. Loudon, Phys. Rev. A 52, 4623 (1995)
\bibitem {6} R. Matloob, Phys. Rev. A 60, 50 (1999)
\bibitem {7} F. Kheirandish and M. Amooshahi, Int. J. Theor. Phys.
45, No. 1 (2006)
\bibitem {8} F. Kheirandish and M. Amooshahi, Mod. Phys. Lett. A 29,
No 30 3025 (2005)
\bibitem {9} F. Kheirandish and M. Amooshahi, Phys. Rev. A 74,
042102 (2006)
\bibitem {11} U. Fano, Phys. Rev. 124, 1866 (1961)
\bibitem {12} S. M. Barnett and P. M. Radmore, Opt. Commun. 68, 304
(1998)
\bibitem {13} J. J. Hopfield, Phys. Rev. 112, 1555 (1958)
\bibitem {14}L. D. Landau and E. M. Lifshitz, \textit{Statistical Physics},
3rd ed. (Pergamon, Oxford, 1980), Part 1, Sec. 123
\bibitem {15}J. D. Jackson, \textit{Classical Electrodynamics}, 3rd ed. (Wiley, New York, 1999)
\bibitem {16}R. Matloob and H. Falinejad, Phys. Rev. A 64, 042102 (2001)
\bibitem {17}S. Scheel, L. Kn\"{o}ll and D.-G. Welsch, Phys. Rev. A
60, 4094 (1999)
\bibitem {18}C. Raabe and D.-G. Welsch, Phys. Rev. A 73, 063822 (2006)
\bibitem {19}S. Spagnolo, D. A. R. Dalvit and P. W. Milonni, Phys.
Rev. A 75, 052117 (2007)
\bibitem {20}R. Matloob, Phys. Rev. A 71, 062105 (2005)
\bibitem {21}R. Matloob, Phys. Rev. A 77, 062103 (2005)
\bibitem {22} M. Amooshahi and F. Kheirandish, Phys. Rev. A 76, 062103 (2007)
\bibitem {23} F. Kheirandish and M. Amooshahi, arXiv:0705.3942, to
be published in Mod. Phys. Lett. A
\end{thebibliography}
\end{document}
|
\begin{document}
\title{Active phase for activated random walks on $\mathbb{Z}^d$, $ d \geq 3$, \\
with density less than one and arbitrary sleeping rate }
\author{Lorenzo Taggi\thanks{Technische Universit\"at Darmstadt, Darmstadt, DE; [email protected]. Supported by German Research Foundation (DFG).}}
\date{}
\maketitle
\begin{abstract}
It has been conjectured that the critical density of the Activated Random Walk model is strictly less than one for any value of the sleeping rate.
We prove this conjecture on $\mathbb{Z}^d$ when $d \geq 3$ and, more generally, on graphs where the random walk is transient. Moreover, we establish the occurrence of a phase transition on non-amenable graphs,
extending previous results which require that the graph is amenable or a regular tree.
\newline
\newline
\emph{Keywords and phrases.} Interacting particle systems, Abelian networks, Absorbing-state phase transition,
self-organized criticality.\\
MSC 2010 \emph{subject classifications.}
Primary 82C22;
Secondary 60K35,
82C26.
\end{abstract}
\section{Introduction}\label{sec:intro}
The activated random walk model (ARW) is a system
of interacting particles on a graph $G=(V,E)$.
Together with Abelian and Stochastic Sandpiles,
it belongs to a class of systems
which have been introduced
in order to study a physical phenomenon known as
self-organized criticality.
Moreover, it can be interpreted
as a toy model for an epidemic spreading, with
infected individuals moving diffusively on a graph.
The model is defined as follows.
Every particle is in one of two states,
A (active) or S (inactive, sleeping).
Initially, the number of particles at each vertex of $G$
is an independent Poisson random variable with mean $\mu\in(0,\infty)$, usually called the \emph{particle density}, and all particles are of type A.
Active particles perform an independent, continuous time
random walk on $G$ with jump rate $1$, and
with each jump being to a uniformly random neighbour.
Moreover, every A-particle has a Poisson clock of rate $\lambda>0$
(called \emph{sleeping rate}).
When the clock of a particle rings,
if the particle does not share
the vertex with other particles,
the particle becomes of type S; otherwise nothing happens.
Each S-particle does not move and remains sleeping until another particle jumps into its location.
At such an instant, the S-particle is activated and turns into type A.
For any value of $\lambda$, a phase transition is expected to occur as $\mu$ varies.
When $\mu$ is small, there is a lot of free space between the particles. This allows every particle
to turn into type S eventually and never become active again.
When this happens, we say that ARW \textit{fixates}.
This is not expected to occur when $\mu$ is large, since the active particles will repetitively jump
on top of other particles, activating the ones that had turned into type S.
In this case, we say that ARW is \textit{active}.
In a seminal paper \cite{Rolla}, Rolla and Sidoravicius prove a 0-1 law (i.e., the process is either active or fixates with probability 1) and a monotonicity property with respect to $\mu$.
This leads to the existence of a critical curve $\mu_c = \mu_c(\lambda)$,
\begin{equation}
\mu_c=\mu_c\left(\lambda\right) := \inf\lrc{ \mu \geq 0 \, : \, \mathbb{P}(\text{ARW is active}) > 0 },
\label{eq:criticaldensity}
\end{equation}
which is such that, for any $\mu>\mu_c$ the system is almost surely active,
and for any $\mu<\mu_c$ the system fixates almost surely.
Though \cite{Rolla} is restricted to the case of $G$ being $\mathbb{Z}^d$,
the above properties hold for any vertex-transitive graph.
Throughout this paper we always consider that $G$ is an infinite simple graph that is locally-finite and vertex transitive, which ensures the existence of $\mu_c$.
In recent years considerable effort has been made to prove basic properties of the critical curve $\mu_c = \mu_c(\lambda)$
\cite{Amir, Basu, Rolla, Rolla3, Rolla2, Sidoravicius, Stauffer, Taggi}.
A quite natural bound for this curve
is $\mu_c \leq 1$ for any value of $\lambda \in (0,\infty)$, which was proved
in \cite{Amir, Rolla, Shellef}.
Indeed, one does not expect fixation when the average number of particles per vertex is more than one,
since a particle can be in the S-state only if it is alone on a given vertex
and, for this reason, there is not enough space for all the A-particles to turn to the S-state.
A more challenging question is whether $\mu_c$ is strictly less than one
for any value of $\lambda \in (0, \infty)$, which is expected to hold true under wide generality.
In other words, one expects that,
for any value of $\lambda \in (0,\infty)$, there exists a value of $\mu$ which is strictly less than
one such that, even though there is enough space for all the particles to turn into the S-state,
particle motion prevents this from happening, so the system does not fixate.
This question was asked by Rolla and Sidoravicius in their seminal paper \cite{Rolla}
and appears also in \cite{Basu, Dickman}.
Such a question received much attention in the last few years \cite{Rolla, Basu, Rolla2, Stauffer, Taggi} but,
despite much effort, a complete answer was provided only in two cases:
on vertex-transitive graphs where the random walk has a positive speed \cite{Stauffer}
and for a simplified model on $\mathbb{Z}^d$ where
the jump distribution of active particles is biased in a fixed direction \cite{Taggi}.
A partial answer which requires the assumption
that $\lambda$ is smaller than a finite constant $\lambda_0 < \infty$
was also provided in
\cite{Stauffer} when $G$ is vertex-transitive and transient
and in \cite{Basu} when $G = \mathbb{Z}$.
The first main result of this paper is the next theorem, which provides a positive
answer to this question for any $\lambda \in (0, \infty)$ on $\mathbb{Z}^d$, when $d \geq 3$, for the original model where active particles jump uniformly to nearest-neighbours. More generally, our result holds for any vertex-transitive amenable graph
where the random walk is transient.
As a byproduct of our method, we
also obtain that
$\mu_c (\lambda) \rightarrow 0$ as $\lambda \rightarrow 0$
with a better convergence rate than in \cite{Stauffer}.
\begin{theorem}
\label{theo1:transient graph}
If $G$ is vertex-transitive, amenable and transient, then $$\mu_c(\lambda) < 1~~~~~ \mbox{$\forall \lambda \in (0, \infty)$.}$$
Moreover,
$\lim\limits_{\lambda \rightarrow 0} \frac{\mu_c(\lambda)}{\lambda^{\frac{1}{2}}} < \infty$.
\end{theorem}
A second basic question concerning the behaviour of the critical curve $\mu_c=\mu_c(\lambda)$ is whether its value is positive.
A positive answer has been proved by Sidoravicius and Teixeira in \cite{Sidoravicius} when $G= \mathbb{Z}^d$ by means of renormalization techniques. A shorter proof
was also provided by Stauffer and Taggi in \cite{Stauffer} when $G$ is amenable and vertex-transitive
and when $G$ is a regular tree.
The proofs of \cite{Sidoravicius, Stauffer} crucially rely on the amenability
property of the graph or on the assumption that $G$ is a regular tree.
Our second main theorem provides a positive answer to this question
on vertex-transitive graphs that are non-amenable, establishing the occurrence of
a phase transition for this class of graphs and extending the previous results \cite{Sidoravicius, Stauffer}.
Moreover, we also obtain that $\lim_{\lambda \rightarrow \infty} \mu_c(\lambda)=1$.
\begin{theorem}
\label{theo: non amenable}
If $G$ is vertex-transitive and non-amenable, then $\mu_c(\lambda) > 0$ for any value of $\lambda \in (0,\infty)$. More specifically,
$$
\mu_c(\lambda) \geq \frac{\lambda}{1 + \lambda} ~~~~~ \mbox{$\forall \lambda \in (0, \infty)$.}
$$
\end{theorem}
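For instance, for $\lambda = 1$ the bound reads $\mu_c(1) \geq 1/2$. Moreover, since $\frac{\lambda}{1+\lambda} \rightarrow 1$ as $\lambda \rightarrow \infty$, combining this lower bound with the upper bound $\mu_c \leq 1$ recalled above immediately yields the limit $\lim_{\lambda \rightarrow \infty} \mu_c(\lambda) = 1$ mentioned before the theorem.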
Theorem \ref{theo: non amenable} and the results of \cite{Sidoravicius, Stauffer}
imply that $\mu_c(\lambda) >0$ for any $\lambda \in (0, \infty)$
and that $\lim_{\lambda \rightarrow \infty} \mu_c(\lambda) = 1$
on any vertex transitive graph.
Moreover, Theorem \ref{theo1:transient graph} and the results of \cite{Stauffer}
imply that
$\mu_c(\lambda) < 1$ for any $\lambda \in (0, \infty)$
and that $\lim_{\lambda \rightarrow 0} \mu_c(\lambda) = 0$
on any vertex-transitive graph where the random walk is transient.
\paragraph{Description of the proofs}
Our proofs are simple and rely on
a graphical representation,
which is called \textit{Diaconis-Fulton} and has been introduced in \cite{Rolla},
and on \textit{weak stabilization},
a procedure that has been introduced in \cite{Stauffer}
which consists of using the random instructions
of such a representation by following a certain strategy.
A fundamental quantity for the mathematical analysis
of the activated random walks is the number of times $m_{B_L}$
the origin is visited by a particle when the dynamics take place in a finite
ball of radius $L$, $B_L$, with particles being absorbed whenever they leave $B_L$.
As it was proved in \cite{Rolla}, activity for ARW is equivalent
to the limit $L \rightarrow \infty$ of this quantity being infinite almost surely.
A quantity that plays a central role in this paper
is the probability $Q(x,B_L)$ that an S-particle is at a vertex $x \in B_L$ when $B_L$ becomes stable.
This quantity is important since the values $\{ \, Q(x,B_L) \, \}_{x \in B_L}$ are related
to the expectation of $m_{B_L}$ by mass-conservation arguments.
Thus, one can deduce whether the system is active by estimating these values.
The proof of Theorem \ref{theo1:transient graph} consists of bounding away from one
the probabilities $\{ Q(x,B_L)\}_{x \in B_L}$ for any $\lambda \in (0, \infty)$ uniformly in $L$ and in $x \in B_L$.
This improves the upper bound that was provided in \cite{Stauffer},
where the probabilities $\{ Q(x,B_L)\}_{x \in B_L}$ were bounded away from one
only for $\lambda$ small enough.
Such an enhancement is obtained by introducing
a stabilization procedure that allows us to recover independence
from the sleep instructions at one vertex. This gained independence,
together with the fact that we count only the jump instructions rather
than the total number of instructions, yields an additional factor
in the upper bound for $Q(x,B_L)$ which prevents the bound
from blowing up as $\lambda$ becomes large.
Our upper bound on $Q(x,B_L)$ implies that for any $\lambda \in (0, \infty)$
one can find $\epsilon>0$ and set the value of $\mu$ such that $ 1 > \mu \geq
Q(x,B_L ) + \epsilon$ for all $L$ and $x \in B_L$.
This implies that a positive density $\epsilon$ of particles eventually leaves $B_L$
and, as it was proved in \cite{Rolla2}, that the system is active,
proving Theorem \ref{theo1:transient graph}.
Theorem \ref{theo: non amenable} extends to non-amenable graphs
the analogous result that was proved in \cite{Stauffer} for amenable graphs.
The idea of the proof that is presented in \cite{Stauffer} is that one assumes activity and uses this
assumption and the
weak stabilization procedure to show that for any $\epsilon>0$,
there exists a large enough constant $r_0=r_0(\epsilon)$ such that,
for any large enough $L$ and for any vertex $x \in B_L$
which has a distance at least $r_0$ from the boundary of $B_L$,
\begin{equation}\label{eq:intro}
Q(x,B_L) \geq \frac{\lambda}{1+\lambda}- \epsilon.
\end{equation}
This leads to the conclusion that the particle density after the stabilization of $B_L$
is at least $ \frac{\lambda}{1+\lambda}$.
The amenability assumption is crucial here, since the number of particles which start `close' to
the boundary, for which (\ref{eq:intro}) does not hold,
can be neglected only if the graph is amenable (i.e. their number is of order $o(|B_L|)$).
Since the initial particle density is $\mu$ and since the particle density cannot increase,
we conclude that $\mu \geq \frac{\lambda}{1+\lambda}$.
Since this is a consequence of activity, we obtain that $\mu_c \geq \frac{\lambda}{1+\lambda}$.
In this paper, we use a different strategy that allows us to extend this result to non-amenable graphs.
By assuming that the system is active and by using (\ref{eq:intro}),
one obtains that the particle density in a small ball
$B_{(1-\delta)L} \subset B_L$ after the stabilization of the larger ball $B_L$ is at least
$ \frac{\lambda}{1+\lambda}$, for some $\delta>0$ and all $L$ large enough.
Thus, if we set $\mu < \frac{\lambda}{1+\lambda}$, this means that the particle density
inside the smaller ball must have increased during the stabilization of the larger ball. Due to the conservation law,
the only way this might have happened
is if a large number of particles which started from $B_L \setminus B_{(1-\delta) L}$
turns into the S-state for the last time in $B_{(1-\delta) L}$.
We show that, if the graph is non-amenable, this cannot happen simply because, even though the number of the boundary
particles is not negligible if compared to $|B_{(1-\delta) L}|$, the bias towards the outside of the ball allows only a
few of them to penetrate inside the ball. So, the particle density in the smaller ball cannot increase
and this leads to the conclusion that $\mu \geq \frac{\lambda}{1+\lambda}$. Since this is a consequence of activity,
we obtain that $\mu_c \geq \frac{\lambda}{1+\lambda}$.
The remaining part of the paper is organized as follows.
In Section \ref{sec:Diaconis} we introduce the Diaconis-Fulton representation following
\cite{Rolla}, we recall the notion of weak stabilization
following \cite{Stauffer} and we fix the notation.
In Section \ref{sec:enforced stabilization} we provide an explicit upper bound for $Q(x,B_L)$, which is presented
in Theorem \ref{theo:boundsQ},
and we prove Theorem \ref{theo1:transient graph}.
Finally, Section \ref{sec: Fixation on non-amenable graphs} is dedicated to the proof of Theorem \ref{theo: non amenable}.
\section{Diaconis-Fulton representation and weak stabilization}
\label{sec:Diaconis}
In this section we describe the
Diaconis-Fulton graphical representation
for the dynamics of ARW, following~\cite{Rolla},
and we recall the notion of \textit{weak stabilization}, following \cite{Stauffer}.
Before starting, we fix the notation.
\textbf{Notation} A graph is denoted by $G=(V,E)$ and is always assumed to be simple, infinite and locally-finite.
The simple random walk measure is denoted by $P_x$, where $x$ is the starting vertex of the random walk.
The expectation with respect to $P_x$ is denoted by $E_x$.
For any set $Z \subset V$ and any pair of vertices $x, y \in V$, we let
$$
G_{Z}(x,y) = {E}_x \big ( \, \sum\limits_{t=0}^{\tau_Z-1} \mathbbm{1} \{ \, X(t) = y \, \} \, \big)
$$
be the expected number of times a discrete time random walk $X(t)$ starting from $x$ hits $y$ before reaching $Z$ (Green's function), where $\tau_Z$ is the hitting time of the set $Z$.
If $Z = \emptyset$, then we set $\tau_Z = \infty$ and we simply write $G(x,y)$.
We also denote by $\tau^+_Z$ the return time to $Z$.
The origin of the graph will be denoted by $0 \in V$.
We let $B_r = \{ y \in V \, \, : \, \, d(0,y) < r\}$ be the ball
of radius $r>0$ centred at the origin, where $d( \cdot, \cdot )$ is the graph distance,
and we let $B_r(x) = B_r + x$ be the ball of radius $r$ which is centred at $x \in V$.
\paragraph{Diaconis-Fulton representation}
For a graph $G=(V,E)$, the space of configurations is $\Omega=\{0,\rho,1,2,3,\ldots\}^V$, where a vertex being in state $\rho$ denotes that the vertex has
one S-particle, while being in state $i\in\{0,1,2,\ldots\}$ denotes that the vertex contains $i$ A-particles.
We employ the following order on the states of a vertex: $0 < \rho < 1<2<\cdots$.
In a configuration $\eta\in \Omega$,
a vertex $x \in V$ is called \textit{stable} if
$\eta(x) \in \{0, \rho \}$,
and it is called \textit{unstable} if $\eta(x) \geq 1$.
We fix an array of \textit{instructions}
$\tau = ( \tau^{x,j}: \, x \in V, \, j \in \mathbb{N})$
(in this paper we assume that $\mathbb{N}$ is the set of strictly positive integers),
where $\tau^{x,j}$ can either be of the form $\tau_{xy}$
or $\tau_{x\rho}$. We let $\tau_{xy}$ with $x,y\in V$ denote the instruction that a particle from $x$ jumps to vertex $y$, and $\tau_{x\rho}$ denote
the instruction that a particle from $x$ falls asleep.
Henceforth we call $\tau_{xy}$ a \emph{jump instruction} and $\tau_{x\rho}$ a \emph{sleep instruction}.
Therefore, given any configuration $\eta$, performing the instruction $\tau_{xy}$ in $\eta$ yields another configuration $\eta'$ such that
$\eta'(z)=\eta(z)$ for all $z\in V\setminus\{x,y\}$, $\eta'(x)=\eta(x)-\ind{\eta(x)\geq 1}$, and $\eta'(y)=\eta(y)+\ind{\eta(x)\geq 1}$. We use the convention that $1+\rho=2$.
Similarly, performing the instruction $\tau_{x\rho}$ to $\eta$ yields a configuration $\eta'$ such that
$\eta'(z)=\eta(z)$ for all $z\in V\setminus\{x\}$, and if $\eta(x)=1$ we have $\eta'(x)=\rho$, otherwise $\eta'(x)=\eta(x)$.
Let $h = ( h(x)\, : \, x \in V)$ count the number of
instructions used at each vertex.
We say that we \textit{use} an instruction
at $x$ (or that we \emph{topple} $x$) when we act on the current
particle configuration $\eta$ through the operator $\Phi_x$,
which is defined as,
\begin{equation}
\label{eq:Phioperator}
\Phi_x ( \eta, h) =
( \tau^{x, h(x) + 1} \, \eta, \, h + \delta_x),
\end{equation}
where $\delta_x(y)=1$ if $y=x$ and $\delta_x(y)=0$ otherwise.
The operation $\Phi_x$ is \textit{legal} for $\eta$ if $x$ is unstable in $\eta$,
otherwise it is \textit{illegal}.
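For readers who wish to experiment with this representation, the following minimal Python sketch (ours, not part of the original construction) implements the toppling operator $\Phi_x$ on the illustrative graph $G=\mathbb{Z}$; encoding the sleeping state $\rho$ by the number $0.5$, so that the order $0<\rho<1<2<\cdots$ is the usual order on the reals, is an assumption made only for this sketch.
\begin{verbatim}
RHO = 0.5   # sleeping state rho, so that 0 < RHO < 1 < 2 < ... as numbers

def is_unstable(eta, x):
    """A vertex is unstable when it carries at least one active particle."""
    return eta.get(x, 0) >= 1

def topple(eta, h, tau, x):
    """Apply Phi_x: use the next unused instruction at x (legal iff x is unstable)."""
    if not is_unstable(eta, x):
        raise ValueError("illegal toppling at a stable vertex")
    h[x] = h.get(x, 0) + 1
    instr = tau[(x, h[x])]
    if instr[0] == 'sleep':                  # tau_{x rho}
        if eta[x] == 1:                      # only a lone active particle falls asleep
            eta[x] = RHO
    else:                                    # jump instruction tau_{xy}: instr[1] is y
        y = instr[1]
        eta[x] -= 1
        eta[y] = eta.get(y, 0) + 1
        if eta[y] == 1 + RHO:                # convention 1 + rho = 2: the sleeper wakes up
            eta[y] = 2
    return eta, h

# Two active particles at the origin; the first instruction at 0 is a jump to 1.
eta, h = {0: 2}, {}
tau = {(0, 1): ('jump', 1), (0, 2): ('sleep',)}
print(topple(eta, h, tau, 0))   # ({0: 1, 1: 1}, {0: 1})
\end{verbatim}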
\noindent {\textbf{Properties}}
We now describe the properties of this representation.
Later we discuss how they are related to the stochastic dynamics of ARW.
For a sequence of vertices $\alpha = ( x_1, x_2, \ldots x_k)$,
we write $\Phi_{\alpha} = \Phi_{x_k} \Phi_{x_{k-1}}
\ldots \Phi_{x_1}$ and we say that $\Phi_{\alpha}$ is
\textit{legal} for $\eta$ if $\Phi_{x_\ell}$
is legal for $\Phi_{(x_{\ell-1}, \ldots, x_1)} (\eta,h) $
for all $\ell \in \{ 1, 2, \ldots k \}$.
Let $m_{\alpha} = ( m_{\alpha}(x) \, : \,x \in V )$
be given by,
$m_{\alpha}(x) \, = \, \sum_{\ell} \ind{x_\ell = x},$
the number of times the vertex $x$ appears in $\alpha$.
We write $m_{\alpha} \geq m_{\beta}$ if
$m_{\alpha} (x) \, \geq \, m_{\beta} (x) \, \, \, \forall x \in V$.
Analogously we write $\eta' \geq \eta$ if $\eta' (x) \, \geq \, \eta(x)$
for all $x \in V$. We also write $(\eta', h') \geq (\eta, h)$
if $\eta' \geq \eta$ and $h' = h$.
Let $\eta, \eta'$ be two configurations, $x$ be a vertex in $V$
and $\tau$ be a realization
of the array of instructions.
Let $V'$ be a finite subset of $V$. A configuration $\eta$ is said to be \textit{stable} in $V'$
if all the vertices $x \in V'$ are stable. We say that $\alpha$ is contained in $V'$
if all its elements are in $V'$, and we say that $\alpha$ \textit{stabilizes} $\eta$ in $V'$
if every $x \in V'$ is stable in $\Phi_\alpha \eta$.
The following lemmas give fundamental properties of the Diaconis-Fulton representation.
For the proof, we refer to \cite{Rolla}.
\begin{lemma}[Abelian Property]\label{prop:lemma2}
Given any $V'\subset V$,
if $\alpha$ and $\beta$ are both legal sequences for $\eta$
that are contained in $V'$ and stabilize $\eta$ in $V'$,
then $m_{\alpha} = m_{\beta}$. In particular, $\Phi_{\alpha} \eta = \Phi_{\beta} \eta$.
\end{lemma}
For any subset $V'\subset V$, any $x\in V$, any particle configuration $\eta$, and any array of instructions $\tau$, we denote by $m_{V^{\prime},\eta,\tau}(x)$ the number of times that $x$ is toppled in the stabilization of $V'$ starting from configuration $\eta$ and using the instructions in $\tau$. Note that by Lemma~\ref{prop:lemma2}, we have that $m_{V^{\prime},\eta,\tau}$ is well defined.
\begin{lemma}[Monotonicity]\label{prop:lemma3}
If $V' \subset V''\subset V$ and $\eta \leq \eta'$, then $m_{V', \eta, \tau} \leq m_{V'', \eta', \tau}$.
\end{lemma}
By monotonicity, given any growing sequence of subsets $V_1\subseteq V_2 \subseteq V_3\subseteq \cdots \subseteq V$ such that $\lim_{m\to\infty} V_m=V$,
the limit
$$
m_{\eta, \tau} = \lim\limits_{m\to \infty} m_{V_m, \eta, \tau},
$$
exists and does not depend
on the particular sequence $\{V_m\}_m$.
We now introduce a probability measure on the space of instructions and of particle configurations.
We denote by $\mathcal{P}$ the probability measure according to which,
for any $x \in V$ and any $j \in \mathbb{N}$,
$\mathcal{P} ( \tau^{x,j} = \tau_{x\rho} ) = \frac{\lambda}{1 + \lambda}$ and
$\mathcal{P} ( \tau^{x,j} = \tau_{xy} ) = \frac{1}{d(1 + \lambda)}$ for any $y\in V$ neighboring $x$,
where $d$ is the degree of each vertex of $G$ and the $\tau^{x,j}$ are independent across different values of $x$ or $j$.
Finally, we denote by $\mathcal{P}^\nu=\mathcal{P}\otimes \nu$ the joint law of
$\eta$ and $\tau$, where $\nu$ is a distribution on $\Omega$ giving the law of $\eta$.
Let $\mathbb{P}^\nu$ denote the probability measure induced by the ARW process when the initial distribution of particles is given by $\nu$.
We shall often omit the dependence on $\nu$ by writing $\mathcal{P}$ and $\mathbb{P}$ instead of $\mathcal{P}^\nu$ and $\mathbb{P}^\nu$.
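Before stating the lemma, we illustrate how these definitions fit together with a small, self-contained Python simulation (ours, not part of the paper) that estimates the probability $Q(x,K)$ that $x$ hosts an S-particle once the finite set $K$ is stable; the graph $G=\mathbb{Z}$ (recurrent, hence used only for illustration), the interval $K$, the values of $\mu$ and $\lambda$, and the encoding of $\rho$ as $0.5$ are all assumptions of the sketch. Since the entries of $\tau$ are i.i.d.\ and each one is used at most once, sampling instructions on demand is equivalent in law to fixing the whole array in advance.
\begin{verbatim}
import math
import random

RHO = 0.5   # sleeping state rho, with 0 < RHO < 1 < 2 < ...

def poisson(mu):
    """Knuth's algorithm for a Poisson(mu) sample (standard library only)."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def next_instruction(lam):
    """One instruction under P: sleep w.p. lam/(1+lam), else a uniform neighbour jump."""
    if random.random() < lam / (1.0 + lam):
        return ('sleep',)
    return ('jump', random.choice([-1, 1]))

def stabilize(eta, K, lam):
    """Topple unstable vertices of K until the configuration is stable in K.
       Particles leaving K are absorbed; the toppling order is irrelevant
       (Abelian property)."""
    eta = dict(eta)
    while True:
        unstable = [v for v in K if eta.get(v, 0) >= 1]
        if not unstable:
            return eta
        v = random.choice(unstable)
        instr = next_instruction(lam)
        if instr[0] == 'sleep':
            if eta[v] == 1:
                eta[v] = RHO
        else:
            eta[v] -= 1
            y = v + instr[1]
            if y in K:
                eta[y] = eta.get(y, 0) + 1
                if eta[y] == 1 + RHO:        # 1 + rho = 2
                    eta[y] = 2

def estimate_Q(x, K, mu, lam, samples=1000):
    """Monte Carlo estimate of Q(x,K) with i.i.d. Poisson(mu) initial A-particles."""
    hits = sum(stabilize({v: poisson(mu) for v in K}, K, lam).get(x, 0) == RHO
               for _ in range(samples))
    return hits / samples

if __name__ == '__main__':
    random.seed(1)
    print(estimate_Q(0, set(range(-20, 21)), mu=0.3, lam=1.0))
\end{verbatim}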
The following lemma relates the dynamics of ARW to the stability property of the representation.
\begin{lemma}[0-1 law]
\label{prop:lemma4}
Let $\nu$ be a translation-invariant, ergodic distribution with finite density.
Let $x\in V$ be any given vertex of $G$.
Then $\mathbb{P}^{\nu} (\text{ARW fixates} ) = \mathcal{P}^{\nu} ( m_{\eta, \tau} (x) < \infty ) \in \{0, 1 \}$.
\end{lemma}
Roughly speaking, the next lemma states that removing a sleep instruction cannot decrease the number of instructions
used at a given vertex for stabilization.
In order to state the lemma, consider an additional instruction $\iota$ besides $\tau_{xy}$ and $\tau_{x\rho}$. The effect of $\iota$ is to leave the configuration unchanged; i.e., $\iota \, \eta = \eta$,
so we will call this instruction \textit{neutral}.
Then given two arrays $\tau = \left( \tau^{x,j} \right)_{x ,\, j }$
and $\tilde{\tau} = \left( \tilde{\tau}^{x,j} \right)_{x, \, j }$,
we write $\tau \leq \tilde{\tau}$ if for every $x \in V$ and $j \in \mathbb{N}$,
we either have $\tilde{\tau}^{x,j} = {\tau}^{x,j}$ or we have $\tilde{\tau}^{x,j} = \iota$ and
${\tau}^{x,j} = \tau_{x\rho}$.
\begin{lemma}[Monotonicity with enforced activation]
\label{prop:lemma5}
Let $\tau$ and $\tilde{\tau}$ be two arrays of instructions such that $\tau \leq \tilde{\tau}$.
Then, for any finite subset $V' \subset V$ and configuration $\eta \in \Omega$, we have
$m_{V', \eta, \tau} \leq m_{V', \eta, \tilde{\tau}}.$
\end{lemma}
When we average over $\eta$ and $\tau$ using the measure $\mathcal{P}$,
we will simply write $m_{V'}$ instead of $m_{V',\eta,\tau}$
and we will do the same for the other quantities that will be introduced later.
\subsection{Weak stabilization}
We now recall the notion of weak stabilization following \cite{Stauffer}.
\begin{definition}[weakly stable configurations]\label{def:wstable}
We say that a configuration $\eta$ is \emph{weakly stable} in a subset $K\subset V$ with respect to a vertex $x\in K$
if $\eta(x)\leq 1$ and $\eta(y)\leq\rho$ for all $y\in K\setminus\{x\}$.
For conciseness, we just write that
$\eta$ is weakly stable for $(x, K)$.
\end{definition}
\begin{definition}[weak stabilization]
Given a subset $K\subset V$ and a vertex $x\in K$, the \emph{weak stabilization} of $(x,K)$ is a sequence of topplings of unstable vertices of $K\setminus\{x\}$ and of topplings of $x$ whenever $x$ has at least two active particles, until a weakly stable configuration for $(x,K)$ is obtained. The order of the topplings of a weak stabilization can be arbitrary.
\end{definition}
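In the same illustrative spirit as before, the weak stabilization of $(x,K)$ can be sketched in Python as follows (ours, on $G=\mathbb{Z}$ with the sleeping state encoded as $0.5$); the routine takes a callable producing the next unused instruction at a vertex, so that i.i.d.\ samples from $\mathcal{P}$, or any fixed array, can be plugged in.
\begin{verbatim}
import random

RHO = 0.5   # sleeping state

def weak_stabilize(eta, x, K, next_instruction):
    """Weak stabilization of (x, K): topple unstable vertices of K other than x,
       and topple x only while it carries at least two active particles, until
       the configuration is weakly stable for (x, K)."""
    eta = dict(eta)
    while True:
        # WS-unstable vertices: eta(y) >= 1 + delta_x(y).
        unstable = [y for y in K if eta.get(y, 0) >= (2 if y == x else 1)]
        if not unstable:
            return eta
        v = random.choice(unstable)           # the order is irrelevant (Abelian property)
        instr = next_instruction(v)
        if instr[0] == 'sleep':
            if eta[v] == 1:
                eta[v] = RHO                  # only a lone active particle falls asleep
        else:
            eta[v] -= 1
            y = v + instr[1]
            if y in K:                        # particles leaving K are absorbed
                eta[y] = eta.get(y, 0) + 1
                if eta[y] == 1 + RHO:         # 1 + rho = 2
                    eta[y] = 2

def sample_instruction(v, lam=1.0):
    """I.i.d. instructions distributed according to P on Z."""
    if random.random() < lam / (1 + lam):
        return ('sleep',)
    return ('jump', random.choice([-1, 1]))

random.seed(2)
print(weak_stabilize({0: 3, 1: 1}, 0, set(range(-2, 3)), sample_instruction))
\end{verbatim}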
The Abelian property (Lemma \ref{prop:lemma2}), the monotonicity property
(Lemma \ref{prop:lemma3}) and monotonicity with enforced activation
(Lemma \ref{prop:lemma5}) hold for weak stabilization as well.
Since the proof of these lemmas is the same as for stabilization, for the proofs
we refer to \cite{Rolla}.
For any given particle configuration $\eta$ and instruction array $\tau$, we let
$m^1_{(x,K), \eta, \tau}(y)$ be the number of instructions that are used at $y$
for the weak-stabilization of $(x,K)$.
By the Abelian property, this quantity is well defined.
We now formulate the Least Action Principle for weak stabilization of $(x, K)$.
In order to state the lemma, we need to extend the notion of unstable vertex and of legal operations
to weak stabilization of $(x,K)$.
We call a vertex $y$ \textit{WS-unstable} (that is, unstable for weak stabilization)
in $\eta \in \Omega$ if $\eta(y) \geq 1 + \delta_x(y)$,
where $\delta_x(y)=1$ if $x=y$ and $\delta_x(y)=0$ otherwise.
We call a vertex $y$ \textit{WS-stable} in $\eta \in \Omega$ if
it is not WS-unstable.
We call the operation $\Phi_y$ defined in (\ref{eq:Phioperator})
\textit{WS-legal} for $\eta$ if $y$ is WS-unstable in $\eta$.
Note that a WS-legal operation is always legal but a
legal operation is not necessarily WS-legal.
For a sequence of vertices $\alpha = ( x_1, x_2, \ldots x_k)$,
we say that $\Phi_{\alpha}$ is
{WS-legal} for $\eta$ if $\Phi_{x_\ell}$
is WS-legal for $\Phi_{(x_{\ell-1}, \ldots, x_1)} (\eta,h) $
for all $\ell \in \{ 1, 2, \ldots k \}$.
We say that $\alpha$ \textit{stabilizes} $\eta$ weakly in $(x,K)$
if every vertex of $K$ is WS-stable in $\Phi_\alpha \eta$.
\begin{lemma}[Least Action Principle for weak stabilization of $(x,K)$]\label{prop:lemma1bis}
If $\alpha$ and $\beta$ are sequences of topplings for $\eta$ such that
$\alpha$ is legal and stabilizes $\eta$ weakly in $(x,K)$ and $\beta$ is WS-legal and is contained in $K$,
then $m_{\beta} \leq m_{\alpha}$.
\end{lemma}
For the proof of the lemma, we refer to \cite{Stauffer}.
We now introduce a stabilization procedure of $K$ consisting of a sequence of weak stabilizations of $(x,K)$. This stabilization procedure is called \textit{stabilization via weak stabilization} and was used also in \cite{Stauffer}. From now on, we will omit the dependence of the quantities on $\eta$ and $\tau$, unless necessary, in order to lighten the notation.
\textbf{Stabilization via weak stabilization of $\boldsymbol{(x,K)}$.}
Let $\eta$ be the initial particle configuration.
\textit{First step.} We perform the weak stabilization of $(x,K)$.
Recall that $m^{1}_{(x,K)}(y)$ is the total number of instructions that are used at $y$ for the weak stabilization of $(x,K)$ and let $\eta_1$ be the resulting particle configuration.
Note that, by definition of weak stabilisation, $\eta_1$ is either stable in $K$ or it is stable in $K \setminus \{x\}$ and it has one active particle at $x$.
In the first case, the stabilisation procedure is complete.
In the second case
we move to the second step.
\textit{$i$th step, for $i \geq 2$.} We start by using the next instruction at $x$
and we distinguish between two cases.
If this instruction
is sleep, then we obtain a particle configuration which is stable in $K$. In this case the stabilisation procedure is completed; we call $\eta_i$ the particle configuration we obtain and define $m^i_{(x,K), \eta, \tau}(y)$, the total number of instructions which have been used at $y \in K$ up to this step, which by the Abelian property equals $m_{K, \eta, \tau}(y)$.
If this instruction is not sleep, after using this instruction we perform a new weak stabilisation of $(x,K)$. We call $\eta_i$ the particle configuration that we obtain and, for any $y \in K$, we let $m^{i}_{(x,K)}(y)$ be the number of instructions
that have been used at $y \in K$ up to this step. If $\eta_i$ is stable in $K$, then the procedure stops, otherwise we move to the next step and iterate.
Hence, we iterate the procedure until we obtain a stable configuration.
We let $T_{(x,K)}$ denote the number of iterations,
\begin{equation}\label{eq:numberiterations}
T_{(x,K)} := \min\{n \in \mathbb{N}_{0} \, \, : \, \, \eta_n \mbox{ is stable } \},
\end{equation}
where $\eta_0 = \eta$ is the initial particle configuration.
Note that if $\eta$ is unstable in $K$ then $T_{(x,K)}$ is strictly positive
and that, if $T_{(x,K)}=1$, then
the stable configuration $\eta_{T_{(x,K)}}$ hosts no particle at $x$.
For consistency, for any $i > T_{(x, K)}$, we let $\eta_i$ be the stable configuration obtained after stabilizing $K$ and, for any $y \in K$, we define $m^{i}_{(x, K)}(y)=m_K(y)$, which is the total number of instructions used
at $y$ for the complete stabilization of $K$. By the Abelian property, the quantities $T_{(x,K)}$ and $m^i_{(x,K)}$ are all well defined.
Note that the quantity $T_{(x,K)}$ is defined slightly differently than in \cite{Stauffer}. Sometimes we will make explicit the dependence of
$T_{(x,K)}$ on $\eta$ and $\tau$ by writing $T_{(x,K), \eta, \tau}$.
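To make the procedure concrete, here is a short, self-contained Python sketch (ours, again on the illustrative graph $G=\mathbb{Z}$, with $\rho$ encoded as $0.5$ and with instructions sampled i.i.d.\ from $\mathcal{P}$ on demand) which follows the steps described above and returns $T_{(x,K)}$ together with the final stable configuration.
\begin{verbatim}
import random

RHO = 0.5   # sleeping state

def sample_instruction(lam):
    if random.random() < lam / (1 + lam):
        return ('sleep',)
    return ('jump', random.choice([-1, 1]))

def apply_jump(eta, v, step, K):
    """A particle jumps from v to v + step; particles leaving K are absorbed."""
    eta[v] -= 1
    y = v + step
    if y in K:
        eta[y] = eta.get(y, 0) + 1
        if eta[y] == 1 + RHO:                 # 1 + rho = 2
            eta[y] = 2

def weak_stabilize(eta, x, K, lam):
    """Topple WS-unstable vertices (eta(y) >= 1 + delta_x(y)) until none remain."""
    while True:
        unstable = [y for y in K if eta.get(y, 0) >= (2 if y == x else 1)]
        if not unstable:
            return
        v = random.choice(unstable)
        instr = sample_instruction(lam)
        if instr[0] == 'sleep':
            if eta[v] == 1:
                eta[v] = RHO
        else:
            apply_jump(eta, v, instr[1], K)

def stabilize_via_weak_stabilization(eta, x, K, lam):
    """Return (T_{(x,K)}, stable configuration), following the procedure in the text."""
    eta = dict(eta)
    weak_stabilize(eta, x, K, lam)            # first step
    T = 1
    while eta.get(x, 0) == 1:                 # K not yet stable: one A-particle sits at x
        T += 1
        instr = sample_instruction(lam)       # first instruction of the i-th step, used at x
        if instr[0] == 'sleep':
            eta[x] = RHO                      # the configuration is now stable in K
        else:
            apply_jump(eta, x, instr[1], K)
            weak_stabilize(eta, x, K, lam)
    return T, eta

random.seed(3)
print(stabilize_via_weak_stabilization({0: 2, 1: 1}, 0, set(range(-5, 6)), lam=1.0))
\end{verbatim}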
In Section \ref{sec:enforced stabilization} we will show that the number of weak stabilizations of $(x,K)$ that it is necessary to perform in order to stabilize $K$ is related to the probability that the stabilization of $K$ ends with one particle at $x$,
which is an important quantity for the proof of Theorem \ref{theo1:transient graph}.
There, we will bound this probability from above by introducing
a new stabilization procedure which ignores the sleep instructions at $x$.
\section{Active phase on transient graphs}\label{sec:enforced stabilization}
In this section we prove Theorem \ref{theo1:transient graph}.
We first state Theorem \ref{theo:boundsQ}, where
the probability $Q(x,K)$ that the vertex $x \in K$ hosts an $S$-particle after the stabilization of the finite set $K \subset V$ is bounded away from one for any value of $\lambda \in (0, \infty)$.
In order for the next theorem to hold true the graph $G$ does not need to be vertex-transitive.
\begin{theorem}
\label{theo:boundsQ}
Let $G=(V, E)$ be a locally-finite graph.
Then, for any finite set $K \subset V$, any vertex $x \in K$,
and any positive integer $H$,
\begin{equation} \label{eq:ubound}
Q(x,K) \leq 1 - \Big( 1 - \frac{G_{K^c}(x,x)}{H+1} \Big)
\Big ( \frac{1}{1 + \lambda} \Big)^H.
\end{equation}
\end{theorem}
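To get a quantitative feel for the bound (\ref{eq:ubound}), the short computation below (an illustrative numerical check of ours, not part of the argument) evaluates its right-hand side, optimized over the free parameter $H$, taking $G_{K^c}(x,x)\approx 1.516$, which is approximately the Green's function $G(0,0)$ of the simple random walk on $\mathbb{Z}^3$; the chosen values of $\lambda$ are arbitrary.
\begin{verbatim}
def rhs(G, lam, H):
    """Right-hand side of the bound: 1 - (1 - G/(H+1)) * (1/(1+lam))**H."""
    return 1.0 - (1.0 - G / (H + 1)) * (1.0 + lam) ** (-H)

G = 1.516   # approximate Green's function G(0,0) of the simple random walk on Z^3
for lam in (0.1, 1.0, 10.0, 100.0):
    best = min(rhs(G, lam, H) for H in range(1, 2000))
    print("lambda = %6.1f   optimized upper bound on Q(x,K): %.4f" % (lam, best))
\end{verbatim}
In each case the optimized bound is strictly smaller than one, in agreement with the theorem, although it approaches one as $\lambda$ grows.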
Theorem \ref{theo1:transient graph} is proved at the end of this section
and will be a direct consequence of Theorem \ref{theo:boundsQ}.
We now introduce a new stabilization procedure that consists
of ignoring sleep instructions at one fixed vertex
and prove some auxiliary lemmas that are necessary for the proof of
Theorem \ref{theo:boundsQ}. After that, we prove Theorem \ref{theo:boundsQ}
and Theorem \ref{theo1:transient graph}.
We introduce the function $T^x$ that associates to any instruction
array $\tau$ a new instruction array $T^x(\tau)$ that is obtained from
$\tau$ by ignoring all sleep instructions at $x$.
More precisely, we define for any $y \in V$ and $j \in \mathbb{N}$,
$$
\Big ( \, T^x(\tau)\, \Big )^{y,j} : =\begin{cases}
\tau^{y,j} &\mbox{ if $y\neq x$ }\\
\tau^{y,j} &\mbox{ if $y = x$ and $\tau^{y,j} \neq \tau_{y\rho}$ } \\
\iota & \mbox{ if $y= x$ and $\tau^{y,j} = \tau_{y\rho}$},
\end{cases}
$$
recalling that $\iota$ denotes a neutral instruction.
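A short Python rendering of the map $T^x$ (ours, with instructions encoded as in the earlier sketches) may help to fix ideas.
\begin{verbatim}
def ignore_sleep_at(tau, x):
    """T^x: replace every sleep instruction at x by the neutral instruction iota."""
    return {(v, j): (('neutral',) if v == x and instr[0] == 'sleep' else instr)
            for (v, j), instr in tau.items()}

tau = {(0, 1): ('sleep',), (0, 2): ('jump', 1), (1, 1): ('sleep',)}
print(ignore_sleep_at(tau, 0))
# {(0, 1): ('neutral',), (0, 2): ('jump', 1), (1, 1): ('sleep',)}
\end{verbatim}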
Moreover, for any $y, x\in V$, we let
\begin{equation}\label{eq:defenforced}
m^e_{(x,K), \eta, \tau}(y) : = m_{K, \eta, T^x(\tau)}(y)
\end{equation}
be the number of instructions that are used at $y$ when we stabilize the set
$K$ by ignoring sleep instructions at $x$.
Moreover, recalling the definition of stabilisation via weak stabilisation, we define, for any $i \in \mathbb{N}$,
\begin{equation}
\label{eq:defenforcedstep}
m^{e,i}_{(x,K), \eta, \tau}(y) : = m^i_{K, \eta, T^x(\tau)}(y).
\end{equation}
These functions play an important role in this section.
For the proof of Theorem \ref{theo:boundsQ}, we will not count the total number of
instructions, but only the number of jump instructions. Thus, for any $y \in K$, we let
\begin{equation}\label{eq:defenforcedjump}
M^e_{(x,K), \eta, \tau}(y): = \Big | \, \Big \{ \tau^{y, j} \, \, : \, \, \, j \in [0, m^e_{(x,K), \eta, \tau}(y)], \, \, \, \tau^{y, j} \neq \tau_{y\rho} \Big \} \, \Big |
\end{equation}
be the number of jump instructions that are used at $y$ when we stabilize $K$ by ignoring
sleep instructions at $x$.
Similarly, we let $M_{K, \eta, \tau}(y)$ be the number of jump
instructions that are used at $y$ for the stabilization of $K$
and $M^1_{(x,K), \eta, \tau}(y)$ be the number of jump instructions
that are used at $y$ for the weak stabilization of $(x,K)$.
In the next lemma we state some simple but important relations between these quantities.
Recall the definition of the variable $T_{(x,K)}$, (\ref{eq:numberiterations}),
which counts the number
of weak stabilisations of $(x,K)$ that it is necessary to perform in order
to stabilise $K$.
\begin{lemma}\label{lemma:independence}
Let $\eta$ be an arbitrary particle configuration,
let $\tau$ be an arbitrary instruction array,
suppose that $T_{ (x,K), \eta, \tau } < \infty$,
let $\tilde \tau = T^x(\tau)$ be obtained from $\tau$ by turning all the sleep instructions
at $x$ into a neutral instruction.
Then, for any vertex $y \in K$,
for any $ i \in \{1, \ldots, T_{ (x,K), \eta, \tau } -1 \}$,
\begin{align}\label{eq:independ1}
m^i_{(x,K), \eta, \tau} (y) \, &= \, m^i_{(x,K), \eta, \tilde \tau} (y),
~~~~~
M^i_{(x,K), \eta, \tau} (y) \, = \, M^i_{(x,K), \eta, \tilde \tau} (y), \\
\label{eq:independ4}
m^e_{(x,K), \eta, \tau} (y) \, & \geq \, m_{K, \eta, \tau} (y),
~~~~~~~~~
M^e_{(x,K), \eta, \tau} (y) \, \geq \, M_{K, \eta, \tau} (y), \\
\label{eq:independ5}
&
~~~~~~~~~T_{(x,K), \eta, \tilde \tau} \, \geq T_{(x,K), \eta, \tau}.
\end{align}
Moreover, if the particle configuration which is obtained by stabilising $\eta$ in $K$ has no particle at $x$, then (\ref{eq:independ1}) holds also for $i = T_{(x,K), \eta, \tau}$.
\end{lemma}
\begin{proof}
Recall the definition of stabilisation via weak stabilisation. We perform a stabilisation via weak stabilisation and we check that
at every step (\ref{eq:independ1}) holds.
\textbf{Step $\boldsymbol{i=1}$. }The first step consists in the weak stabilisation of $(x,K)$.
By definition of weak stabilisation, when we perform the weak stabilization of $(x,K)$, we topple $x$ only if $x$ contains
at least two particles, so the sleep instructions at $x$ have no effect. Hence, we deduce that,
\begin{equation}\label{eq:check1}
m^{1}_{(x,K), \eta, \tau}(y) = m^1_{(x,K), \eta, \tilde \tau}(y),
\quad M^{1}_{(x,K), \eta, \tau}(y) = M^1_{(x,K), \eta, \tilde \tau}(y).
\end{equation}
If the configuration we obtain, $\eta_1$, is not stable, we go to the next step.
If the configuration we obtain is stable, this means that the particle configuration which is obtained by stabilising $\eta$ in $K$ has no particle at $x$
and that $T_{(x,K)} = 1$,
hence the proof is concluded in this case.
\textbf{Step $\boldsymbol{i \geq 2}$. }
The $i$th step starts by using the next instruction at $x$ and, if this instruction is not sleep, in performing a weak stabilisation of $(x,K)$ afterwards. We denote by $\eta_i$
the particle configuration that we obtain at the end of the $i$th step.
Note that in the previous steps we checked that,
\begin{equation}\label{eq:checkingsteps}
\forall j = 1, 2, \ldots, i-1, \quad m^{j}_{(x,K), \eta, \tau}(y) = m^{j}_{(x,K), \eta, \tilde \tau}(y),
\quad M^{j}_{(x,K), \eta, \tau}(y) = M^{j}_{(x,K), \eta, \tilde \tau}(y).
\end{equation}
We distinguish between three cases.
\textbf{Case (i):}
The first case is that the first instruction we use at $x$
is a sleep instruction. In this case the particle configuration we obtain, $\eta_i$, is stable, it hosts one sleeping particle at $x$, and $T_{(x,K), \eta, \tau} = i$, hence (\ref{eq:independ1}) is fulfilled for any $i=1, 2, \ldots, T_{(x,K), \eta, \tau}-1$
since we checked (\ref{eq:checkingsteps}) in the previous steps.
\textbf{Case (ii):} The second case is that the first instruction we use at $x$ is not a sleep instruction and that the weak stabilisation we perform afterwards ends with a stable configuration in $K$, i.e, $\eta_i$ is stable in $K$ and $T_{(x,K)} = i$. By definition of weak stabilisation of $(x,K)$, this can only happen if no particle jumps from a neighbour of $x$ to $x$ while performing the weak stabilisation, hence $\eta_i(x) = 0$.
Hence, in this case no sleep instruction
is used at $x$ during the $i$th weak stabilisation and for this reason and for the fact that in the previous steps we checked (\ref{eq:checkingsteps}) we deduce that
(\ref{eq:independ1}) holds for any $i=1, 2, \ldots, T_{(x,K), \eta, \tau}$.
\textbf{Case (iii):} The third case is that the first instruction we use at $x$ is not a sleep instruction and that the weak stabilisation we perform afterwards ends with a particle configuration which is unstable in $K$.
Observe that this necessarily means (by definition of weak stabilisation) that the configuration we obtain, $\eta_i$, is stable in $K \setminus \{x\}$ and that it hosts an active particle at $x$.
Since by definition of weak stabilisation $x$ is toppled only if it contains at least $2$ active particles, the sleep instructions used at $x$ have no effect. From this and from the fact that at the previous steps we checked that (\ref{eq:checkingsteps}) holds,
we deduce that,
$$
m^{i}_{(x,K), \eta, \tau}(y) = m^{i}_{(x,K), \eta, \tilde \tau}(y),
\quad M^{i}_{(x,K), \eta, \tau}(y) = M^{i}_{(x,K), \eta, \tilde \tau}(y).
$$
We now move to the step $i+1$ and iterate.
We iterate the procedure until the last step, $i = T_{(x,K)}$, which is the first step such that Case (i) or (ii) are fulfilled. Hence, we checked that (\ref{eq:checkingsteps}) holds up to the last step $i = T_{(x,K)}$ and that, if the procedure ends with Case (iii), then
(\ref{eq:independ1}) holds also for $i = T_{(x,K)}$. This proves (\ref{eq:independ1})
for any $i = 1, \ldots,T_{(x,K)}-1$ and also proves the last claim in the statement of the lemma.
The relations (\ref{eq:independ4}) follow from a direct application of monotonicity with enforced activation for stabilization (Lemma \ref{prop:lemma5}).
For (\ref{eq:independ5}), we compare the stabilisation via weak stabilisation procedure
for $\tau$ and $\tilde \tau$ simultaneously.
First of all note that, while stabilising via weak stabilisation using the instructions of $\tau$, all the sleep instructions which have been used at $x$ during the first $T_{(x,K), \eta, \tau}-1$ steps had no effect, hence ignoring them makes no difference up to this step.
Now consider the last step for the stabilisation via weak stabilisation which uses the instructions of $\tau$. If such last step starts with a sleep instruction at $x$ (as described in Case (i)), then the particle configuration gets stabilised, i.e, $\eta_{ T_{(x,K), \eta, \tau}}$ is stable in $K$. This however is not true for
$\eta_{ T_{(x,K), \eta, \tilde \tau}}$, since the array $\tilde \tau$ has neutral instruction at $x$ in place of sleep instructions, hence we deduce that the stabilisation via weak stabilisation
which uses the instructions of $\tilde \tau$ may perform further steps, i.e,
$T_{(x,K), \eta, \tilde \tau} \geq T_{(x,K), \eta, \tau}$.
Instead, if such last step starts with a jump instruction at $x$ (as described in Case (ii)),
this means that no sleep instruction of $\tau$ was ever used at $x$ during such last step, hence ignoring sleep instructions at $x$ makes no difference when we compare the stabilisation-via-weak stabilisation with $\tau$ and $\tilde \tau$ and for this reason
$
T_{(x,K), \eta, \tau} = T_{(x,K), \eta, \tilde \tau},
$
and
$\eta_{ T_{(x,K), \eta, \tau}} = \eta_{ T_{(x,K), \eta, \tilde \tau}}$
in this case.
Combining the two cases we deduce (\ref{eq:independ5}) and conclude the proof.
\end{proof}
For the next lemma we need to recall the notion
of stabilization via weak stabilization that has been introduced in Section \ref{sec:Diaconis}, recall also (\ref{eq:defenforcedjump}).
Let
\begin{equation}\label{eq:Afunction}
A_{(x,K), \eta, \tau} : = M^e_{(x,K), \eta, \tau} (x) - M^1_{(x,K), \eta, \tau} (x)
\end{equation}
be the total number of jump instructions that are used at $x$ when
we stabilize $K$ by ignoring sleep instructions at $x$ and that are not used for the weak stabilization of $(x,K)$.
\begin{lemma}\label{lemma:Qandenforced}
Let $G=(V,E)$ be an arbitrary locally-finite graph and let $K \subset V$ be a finite set.
Let $\eta^{\prime}$ be the particle configuration that is obtained after the stabilization of $K$.
Then, for any $x \in K$, for any integer $\ell \geq 2$,
\begin{equation}\label{eq:mainupperbound}
\mathcal{P} \big( \eta^{\prime}(x) = \rho, \, \, T_{(x,K)} = \ell \big) \leq \frac{\lambda}{1+\lambda} \, \, \big( \frac{1}{1+\lambda} \big)^{\ell-2} \, \, \mathcal{P} \big( A_{(x,K)} \geq \ell-2 \big)
\end{equation}
\end{lemma}
\begin{proof}
Recall the stabilization-via-weak-stabilization procedure that has been introduced in Section \ref{sec:Diaconis} and the definitions (\ref{eq:defenforced}),
(\ref{eq:defenforcedstep}),
(\ref{eq:defenforcedjump}).
First of all, note that for any integer $\ell \geq 2$,
\begin{align*}
\mathcal{P} \big( \, \eta^{\prime}(x) = \rho, \, \, T_{(x,K)} = \ell \, \big ) & =
\mathcal{P} \big( \, T_{(x,K)} \geq \ell, \, \, \tau^{x, m_{(x,K)}^{\ell-1}(x)+1} = \tau_{x \rho} \, \big ) \\
& = \frac{\lambda}{1+\lambda} \, \, \mathcal{P} \big( \, T_{(x,K)} \geq \ell \, \big ).
\end{align*}
In the previous display
$\tau^{x, m_{(x,K)}^{\ell-1}(x)+1}$
is the first instruction which is used at $x$ during the $\ell$th step
of the stabilisation via weak stabilisation of $(x,K)$.
The first equality holds true since, by definition of stabilisation via weak stabilisation, the event $\{ \eta^{\prime}(x) = \rho, T_{(x,K)}= \ell \}$
occurs if and only if
$T_{(x,K)} > \ell -1$
(i.e., the $(\ell-1)$th step does not end with a stable configuration)
and the first instruction
used at $x$ during the $\ell$-th step is sleep.
The second equality above follows from the independence of the instructions. Now note that,
\begin{align*}
\Big \{ T_{(x,K)} \geq \ell \Big \}
& = \Big \{ \forall i \in [1, \ell-2], \, \, \, \, \, \tau^{x, m^i_{(x,K)}(x)+1} \, \neq \tau_{x\rho} \mbox{ and }
m^{i+1}_{(x,K)}(x) > m^{i}_{(x,K)}(x) \Big \} \cap
\Big \{ T_{(x,K)} \geq \ell \Big \} \\
& =
\Big \{ \forall i \in [1, \ell-2], \, \, \, \, \, \tau^{x, m^{i,e}_{(x,K)}(x)+1} \, \neq \tau_{x\rho} \mbox{ and }
m^{i+1,e}_{(x,K)}(x) > m^{i,e}_{(x,K)}(x) \Big \} \cap
\Big \{ T_{(x,K)} \geq \ell \Big \} \\
& \subset
\Big \{ \forall i \in [1, \ell-2], \, \, \, \, \, \tau^{x, m^{i,e}_{(x,K)}(x)+1} \, \neq \tau_{x\rho} \mbox{ and }
m^{i+1,e}_{(x,K)}(x) > m^{i,e}_{(x,K)}(x) \Big \} \cap
\Big \{ T^e_{(x,K)} \geq \ell \Big \}.
\end{align*}
Here $T^e_{(x,K)} := T_{(x,K), \eta, T^x(\tau)}$ denotes the analogue of $T_{(x,K)}$ for the stabilisation which ignores the sleep instructions at $x$. The first identity holds true since, in order for the stabilization-via-weak-stabilization procedure to consist of at least $\ell \geq 2$ steps, it is necessary that
the first instruction used at $x$ during the steps $j=2, 3, \ldots,$ $\ell-1$
is not a sleep instruction, but a jump instruction.
The second identity and the inclusion follow from Lemma \ref{lemma:independence}.
From the fact that the function $T^e_{(x,K)}$ is independent from the
sleep instructions at $x$ and from the previous inclusion relation we deduce that,
\begin{align}
\begin{split}\label{eq:split}
& \mathcal{P} \Big ( T_{(x,K)} \geq \ell \Big ) \\
& \leq
\mathcal{P} \Big ( \big \{ \forall i \in [1, \ell-2], \, \, \, \, \, \tau^{x, m^{i,e}_{(x,K)}(x)+1} \, \neq \tau_{x\rho} \mbox{ and }
m^{i+1,e}_{(x,K)}(x) > m^{i,e}_{(x,K)}(x)\big \} \cap \big \{ T^e_{(x,K)} \geq \ell \big \} \Big ) \\
& \leq \big ( \frac{1}{1 + \lambda } \big )^{\ell - 2} \, \,
\mathcal{P} \Big ( \big \{ \forall i \in [1, \ell-2], \,
m^{i+1,e}_{(x,K)}(x) > m^{i,e}_{(x,K)}(x)\big \} \cap \{ T^e_{(x,K)} \geq \ell \big \} \Big ) \\
& = \big ( \frac{1}{1 + \lambda } \big )^{\ell - 2} \, \,
\mathcal{P} \big (
T^e_{(x,K)} \geq \ell \big ) \\
& \leq \big ( \frac{1}{1 + \lambda } \big )^{\ell - 2} \mathcal{P} \Big ( M^e_{(x,K)}(x) - M^1_{(x,K)}(x) \geq \ell -2 \Big ).
\end{split}
\end{align}
For the last inequality we used the fact that, if the stabilization via weak stabilization of $(x,K)$ consists of at least $\ell \geq 2$ steps,
then it is necessarily the case that at least $\ell - 2$ jump instructions are used at $x$
after the first step.
This concludes the proof.
\end{proof}
\begin{remark}
In \cite{Stauffer} the quantity in the left-hand side of (\ref{eq:mainupperbound}) is bounded from above by
the probability that at least $\ell-2$ instructions
are used at $x$ after the first weak stabilization, without distinguishing between jump and sleep instructions.
Our enhancement is obtained by counting only the jump instructions which are used for the stabilisation and by introducing
a stabilization procedure, (\ref{eq:defenforced}), that ignores sleep instructions at one vertex. This allows us to recover independence from sleep instructions and, thus, to split the upper bound in (\ref{eq:split}) into the product of two factors, which are then bounded from above separately.
\end{remark}
In the next lemma we will bound from above the expectation of $A_{(x,K)}$.
\begin{lemma}\label{lemma:upperboundA}
Let $G$ be a locally-finite graph and let $K \subset V$ be a finite set. Then, for any $x \in K$,
\begin{equation}\label{eq:upperboundA}
\boldsymbol{E} \Big ( A_{(x,K)} \Big ) \leq G_{K^c}(x, x),
\end{equation}
where $\boldsymbol{E}$ is the expectation with respect to $\mathcal{P}$.
\end{lemma}
\begin{proof}
Note that the expectation of $A_{(x,K)}$ can be written as follows,
\begin{equation}\label{eq:claim0}
\boldsymbol{E} \Big ( A_{(x,K)} \Big ) = \sum\limits_{k=0}^{\infty} \, Poi_{\mu}(k) \, \, \Big [
\boldsymbol{E}_k \big ( M^e_{(x,K)}(x) \big ) \, - \, \boldsymbol{E}_k \big ( M^1_{(x,K)}(x)\big ) \, \, \Big ]
\end{equation}
where $\boldsymbol{E}_k$ is the expectation $\boldsymbol{E}$ conditional on having precisely $k$ particles starting from $x$ at time $0$ and $Poi_{\mu}(k)$ is the probability that a Poisson random variable with mean $\mu$ has outcome $k$.
We claim that, for any $k \in \mathbb{N}$,
\begin{equation}
\label{eq:claim1}
\boldsymbol{E}_{k+1} \Big ( M^1_{(x,K)}(x) \Big) = \boldsymbol{E}_k \Big (M^e_{(x,K)}(x) \Big ),
\end{equation}
and that
\begin{equation}
\label{eq:claim3}
\boldsymbol{E}_{k+1} \Big ( M^1_{(x,K)}(x) \Big ) \leq \boldsymbol{E}_{k} \Big ( M^1_{(x,K)}(x) \Big ) \, + \, G_{K^c}(x,x).
\end{equation}
By using (\ref{eq:claim1}) and (\ref{eq:claim3}) we obtain from (\ref{eq:claim0}) that,
\begin{align*}
\boldsymbol{E} \Big ( A_{(x,K)} \Big ) & =
\sum\limits_{k=0}^{\infty} \, Poi_{\mu}(k) \, \, \Big [
\boldsymbol{E}_{k+1} \big ( M^1_{(x,K)}(x) \big ) \, - \, \boldsymbol{E}_k \big ( M^1_{(x,K)}(x)\big ) \, \, \Big ] \\
& \leq \sum\limits_{k=0}^{\infty} \, Poi_{\mu}(k)\, \, G_{K^c}(x, x) \\
& = G_{K^c}(x, x),
\end{align*}
obtaining the desired inequality (\ref{eq:upperboundA}).
So, in order to conclude the proof, it remains to prove (\ref{eq:claim1}) and (\ref{eq:claim3}).
The equality (\ref{eq:claim1}) holds true since adding one particle at $x$ and never moving that particle
is equivalent to stabilizing $K$
by ignoring all the sleep instructions at $x$.
For a formal proof, let $\eta^{k+1}$ be an arbitrary particle configuration with $k+1$ particles at $x$
and let $\eta^k$ be obtained from $\eta^{k+1}$ by removing
one of the particles at $x$.
Let $\tau$ be an arbitrary array and let $\tilde \tau$ be obtained from $\tau$ by turning
sleep instructions at $x$ into a neutral instruction. We use the instructions of $\tau$ for $\eta^{k+1}$ and
the instructions of $\tilde \tau$ for $\eta^k$ simultaneously.
More specifically, let $\alpha = (x_1, x_2, \ldots x_{|\alpha|})$ be a sequence
that stabilizes $\eta^k$ in $K$ by using the instructions of $\tilde \tau$.
Since any step of $\alpha$ is legal
for $\eta^{k}$ when we use $\tilde \tau$ (a neutral instruction is always legal), it is also WS-legal for $\eta^{k+1}$
when we use $\tau$.
Moreover, since $\Phi_{\alpha} \eta^k$ is stable
in $K$ and has no particle at $x$ when we use $\tilde \tau$, then
$\Phi_{\alpha} \eta^{k+1}$ is weakly stable in $(x,K)$
when we use $ \tau$.
Thus, the sequence $\alpha$ stabilizes $\eta^k$ in $K$
when we use the instructions of $\tilde \tau$ and stabilizes
$\eta^{k+1}$ weakly in $(x,K)$ when we use the instructions of $\tau$.
From the Abelian property we deduce that for any $y \in K$,
$
m^1_{(x,K), \eta^{k+1}, \tau}(y) = m^e_{(x,K), \eta^{k}, \tau}(y).
$
This implies (\ref{eq:claim1}).
We now prove (\ref{eq:claim3}), adapting the steps of a similar proof that appears in \cite{Stauffer}
to our setting.
Let $\eta$ be an arbitrary particle configuration with $k+1$ particles at $x$.
In the first step, we move one of the particles that is at $x$ until it leaves the set $K$, ignoring any sleep instruction.
During this step of the procedure we might use some instruction at $x$
that is WS-illegal (but legal).
The expected number of times a jump instruction is used at $x$ during this step
is $G_{K^c}(x,x)$.
In the second step, we perform the weak stabilization of $(x,K)$ with the remaining particles.
Let $M_{K, \eta, \tau}^{\prime}(x)$ be the total number of jump instructions that are used at $x$. We have that,
by monotonicity with enforced activation and by the least action principle
for weak stabilization,
$$
M^1_{(x,K), \eta, \tau}(x) \, \, \leq \, \, M_{K, \eta, \tau}^{\prime}(x) .
$$
Moreover, since in the second step we start from a configuration with $k$ particles at $x$ and instructions are independent,
$$
\boldsymbol{E}_{k+1} \big ( \, M_K^{\prime} (x) \, \big ) = \boldsymbol{E}_k \big ( \, M^1_{(x,K)}(x) \, \big ) + G_{K^c}(x,x) .
$$
By using the two previous relations we obtain (\ref{eq:claim3}).
\end{proof}
\subsection{Proof of Theorem \ref{theo:boundsQ}}
The proof of Theorem \ref{theo:boundsQ} is a direct consequence of
Lemma \ref{lemma:Qandenforced} and Lemma \ref{lemma:upperboundA}.
From Lemma \ref{lemma:Qandenforced} we obtain that,
\begin{align*}
Q(x,K) & =
\sum\limits_{ \ell=2}^{\infty} \, \mathcal{P} \big( \eta^{\prime}(x) = \rho, \, \, T_{(x,K)} = \ell \big) \\
& \leq \sum\limits_{ \ell=2}^{\infty} \frac{\lambda}{1+\lambda} \, \, \big( \frac{1}{1+\lambda} \big)^{\ell-2}
\, \, \mathcal{P} \big( A_{(x,K)} \geq \ell -2 \big),
\end{align*}
having used the fact that $\mathcal{P} \big( \eta^{\prime}(x) = \rho, \, \, T_{(x,K)} = 1 \big) =0$
and that $T_{(x,K)} > 0 $ almost surely.
We now perform simple calculations in order to prove the quantitative upper bound of Theorem \ref{theo:boundsQ}.
By using Markov's inequality and Lemma \ref{lemma:upperboundA}, we obtain that for any positive integer $H$,
\begin{align*}
\mathcal{P} \big( \eta^{\prime}(x) = \rho \big) & \leq
\sum\limits_{ \ell=0}^{H-1} \frac{\lambda}{1+\lambda} \, \, \big( \frac{1}{1+\lambda} \big)^{\ell}
\, +\, \sum\limits_{ \ell=H}^{\infty} \frac{\lambda}{1+\lambda} \, \, \big( \frac{1}{1+\lambda} \big)^{\ell}
\mathcal{P} \big( A_{(x,K)} \geq \ell \big) \\
& \leq \frac{\lambda}{1+\lambda} \, \, \Big [ \sum\limits_{ \ell=0}^{H-1} \, \big( \frac{1}{1+\lambda} \big)^{\ell}
\, +\, \, \frac{G_{K^c}(x,x)}{H+1} \, \sum\limits_{ \ell=H}^{\infty} \, \, \, \big( \frac{1}{1+\lambda} \big)^{\ell} \Big ] \\
& \leq \frac{\lambda}{1+\lambda} \, \, \Big [ \frac{1}{1 - \frac{1}{1 + \lambda}} \, \, - \, \, \Big( 1 - \frac{G_{K^c}(x,x)}{H+1} \Big)
\Big ( \frac{1}{1 + \lambda} \Big)^H \, \, \frac{1}{ \Big ( 1 - \frac{1}{1 + \lambda} \Big)}\, \, \Big ] \\
& = 1 - \Big( 1 - \frac{G_{K^c}(x,x)}{H+1} \Big)
\Big ( \frac{1}{1 + \lambda} \Big)^H.
\end{align*}
This concludes the proof of Theorem \ref{theo:boundsQ}.
\subsection{Proof of Theorem \ref{theo1:transient graph}}
Suppose that the graph is vertex-transitive and transient.
We have that for any set $K \subset V$ and any vertex $x \in K$,
$G_{K^c}(x,x) \leq G(0,0) < \infty$.
Thus, if we replace $G_{K^c}(x,x)$ by $G(0,0)$ in the right-hand side of (\ref{eq:ubound}),
the inequality is still true.
Moreover, if we set $H$ large enough, we have that the right-hand side of (\ref{eq:ubound})
is bounded away from one uniformly in $K$ and in $x \in K$ for any $\lambda >0$.
We can then find a function $g(\lambda)$ such that,
\begin{equation}\label{eq:bound2}
\forall K \subset V, \, \, \, \, \, \, \, \, \forall x \in K, \, \, \, \, \, \, \, \,Q(x,K) \leq g(\lambda)<1 .
\end{equation}
Moreover, by choosing $H^{*} := \lceil \sqrt{ \frac{G(0,0)}{\log(1 + \lambda)} } \rceil$,
we deduce that $g(\lambda)$ can be chosen such that
$\lim_{\lambda \rightarrow 0} \frac{g(\lambda)}{\lambda^{\frac{1}{2}}} < \infty$.
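One way to make such a choice of $g(\lambda)$ explicit (a rough computation, with no attempt to optimise constants): for $\lambda$ small enough that $G(0,0) \leq H^{*}+1$, using $\big( \frac{1}{1 + \lambda} \big)^{H^{*}} \geq 1 - H^{*} \log(1+\lambda)$ in the right-hand side of (\ref{eq:ubound}), with $G_{K^c}(x,x)$ replaced by $G(0,0)$ and $H = H^{*}$, gives
$$
1 - \Big( 1 - \frac{G(0,0)}{H^{*}+1} \Big) \Big ( \frac{1}{1 + \lambda} \Big)^{H^{*}}
\, \leq \, \frac{G(0,0)}{H^{*}+1} \, + \, H^{*} \log(1 + \lambda)
\, \leq \, 2 \, \sqrt{G(0,0) \, \log(1+\lambda)} \, + \, \log(1+\lambda),
$$
and the right-hand side is of order $\lambda^{\frac{1}{2}}$ as $\lambda \rightarrow 0$.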
Suppose now that $\mu > g(\lambda)$.
Since from (\ref{eq:bound2}) we have that the expected number of particles after the stabilization of $K$ is at most
$g(\lambda) \, |K|$, it follows that the expected number of particles leaving $K$ during the stabilization
of $K$ is at least $ ( \, \mu - g(\lambda) \, ) \, \, | K|$.
Since a positive density of particles leaves the set,
since the graph is amenable, and since $K$ is an arbitrary
finite set, we deduce from
\cite{Rolla2}[Proposition 2] that
the system is active. This implies that
$\mu_c(\lambda) \leq g(\lambda)$
for any $\lambda \in (0, \infty)$ and concludes the proof.
\section{Fixation on non-amenable graphs}
\label{sec: Fixation on non-amenable graphs}
In this section we prove Theorem \ref{theo: non amenable}.
We start with an auxiliary lemma, which provides an
upper bound for the expected number of times that the particles
which start from the vertices that are `close' to the boundary of a ball
visit the centre of that ball.
Afterwards, we use this lemma to prove Proposition \ref{prop:fixation non-amemable},
showing that the probabilities $\{Q(x,B_{L}) \}_{x \in B_L}$
must fulfil a certain condition.
Finally, we prove Theorem \ref{theo: non amenable}
by showing that, if one assumes that ARW is active
and that $\mu < \frac{\lambda}{1 + \lambda}$, then
such a condition is violated,
obtaining a contradiction.
\begin{lemma}\label{lemma:RWboundary}
Let $G$ be a vertex-transitive graph where the random walk has a positive speed.
There exists $C_1=C_1(G) < \infty$ such that, for any $\delta \in (0,1)$,
there exists an infinite increasing sequence of integers $\{L_n\}_{n \in \mathbb{N}}$
such that
$$
\sum\limits_{x \in B_{L_n} \setminus B_{(1 - \delta) \, L_n} } G_{B_{L_n}^c} \big (x, 0 \big ) \, \leq \, C_1 \, \delta \, L_n.
$$
\end{lemma}
\begin{proof}
For any pair of real numbers $r_2 > r_1$, let $$\Xi(r_1,r_2) : =
E_0 \Big ( \sum\limits_{t=0}^{\infty} \mathbbm{1}\{ X(t) \in B_{r_2}\setminus B_{r_1} \} \Big),$$
be the expected number of vertices in the ring $B_{r_2} \setminus B_{r_1}$
which are visited by the random walk,
with $\Xi(0,r_2) $ being the expected number of vertices in the ball $B_{r_2}$
which are visited by the random walk.
By regularity of the graph, for any integer $n$ and $x \in B_{n}$, we have that,
\begin{equation}\label{eq:regularity}
P_{x}\big ( \tau_0 < \tau_{B_n^c} \big ) \, = G_{B_n^c \cup \{0\}}(x,x) \, \, P_0 \big ( \tau_x < \tau_{ \{0\} \cup B_n^c}^+ \big).
\end{equation}
Then, for any $\delta^{\prime} \in (0,1)$,
\begin{align}
\sum\limits_{x \in B_n \setminus B_{(1 - \delta^{\prime}) \, n}} G_{B_n^c}(x,0) \, \, & = \, \, G_{B_n^c}(0,0) \sum\limits_{x \in B_n \setminus B_{(1 - \delta^{\prime}) \, n}} P_{x}\big ( \tau_0 < \tau_{B_n^c} \big ) \\
\label{eq:condtris}
& \leq G(0,0)^2 \, \, \sum\limits_{x \in B_n \setminus B_{(1 - \delta^{\prime}) \, n}} P_{0}\big ( \tau_x < \tau_{B_n^c} \big ) \\
\label{eq:cond4}
& \leq G(0,0)^2 \, \, \Xi \Big ( \, (1 - \delta^{\prime}) n , \, n \, \Big ),
\end{align}
where we used (\ref{eq:regularity}) and vertex-transitivity.
We have that,
\begin{equation}\label{eq:cond1}
\forall n \in \mathbb{N} ~~~~~ \Xi \big ( 0, n \big ) \geq \sum\limits_{\ell=1}^{ \lfloor \frac{1}{\delta^{\prime}} \rfloor } \, \Xi \big ( \, \, \delta^{\prime} \, n \, (\ell-1),\, \delta^{\prime} \, n \, \ell \, \, \big ) .
\end{equation}
Since the random walk has a positive speed, we have that there exists $K=K(G)$ such that,
\begin{equation}\label{eq:cond2}
\forall n \in \mathbb{N} ~~~~~ \Xi \big ( 0, n \big ) \, \leq \, K \, \, n,
\end{equation}
(see for example \cite{Stauffer}[eq. (5.16)] for a proof).
Conditions (\ref{eq:cond1}) and (\ref{eq:cond2}) imply that,
\begin{equation}\label{eq:cond3}
\forall n \in \mathbb{N} ~~~~~~ \exists \ell_n \in [\frac{1}{2 \delta^{\prime}} , \frac{1}{\delta^{\prime}}] ~~~~~~ \mbox{s.t.} ~~~~~~\Xi \big ( \, \delta^{\prime} \, n \, (\ell_n-1) ,\, \delta^{\prime} \, n \, \ell_n \, \big ) \leq 4 \, K \, \delta^{\prime} \, n.
\end{equation}
For any $n \in \mathbb{N}$, define now $L_n :=\lfloor \delta^{\prime} \, n \, \ell_n \rfloor$. From (\ref{eq:cond3}) we obtain that, for any large enough $n$,
\begin{multline}
\Xi \big ( \, L_n ( 1 \, - \, \frac{\delta^{\prime}}{2} ) , \, \, L_n \, \big ) \, \leq
\Xi \big (\, L_n ( \frac{\delta^{\prime} n \ell_n}{L_n} \, - \delta^{\prime}), L_n \big ) \leq
\Xi \big (\, L_n ( \frac{\delta^{\prime} n \ell_n}{L_n} \, - \frac{1}{\ell_n}) , L_n \big )
= \\
\Xi \big (\, \delta^{\prime} n \ell_n\, - \frac{L_n}{\ell_n} , L_n \big ) \leq
\Xi \big (\,\delta^{\prime} n (\ell_n\, - 1), \delta^{\prime} n \ell_n \big )
\leq \, 4 \, K \, \, \frac{\delta^{\prime} \, n \, \ell_n}{\ell_n} \leq
5 K \frac{L_n}{\ell_n} \leq 10 K \delta^{\prime} L_n.
\end{multline}
The proof follows by defining $\delta = \frac{\delta^{\prime}}{2}$,
$C_1 = 20 \, K \, G(0,0)^2$ and by selecting an infinite increasing subsequence
of $\{L_n\}_{n \in \mathbb{N}}$.
\end{proof}
\begin{proposition}\label{prop:fixation non-amemable}
Let $G$ be vertex-transitive and suppose that the random walk on $G$ has a positive speed.
Then, for any $\delta \in (0,1)$, there exists an infinite increasing sequence of integers $\{L_n\}_{n \in \mathbb{N}}$ such that
\begin{equation}\label{eq:constraintQ}
\sum\limits_{x \in B_{(1 - \delta) \, L_n}} \, \, \, G_{B_{L_n}^c} (x,0) \,
\, \, \big ( Q(x,B_{L_n}) \, - \, \mu \big ) \, \, \, \leq \mu \, \, C_1 \, \, \delta\, L_n,
\end{equation}
where $C_1=C_1(G)$ is the constant that has been defined in Lemma \ref{lemma:RWboundary}.
\end{proposition}
\begin{proof}
The expected number of particles visiting the origin is related to the
quantities $\{ Q(y,B_L)\}_{y \in B_L}$ by the following relation,
\begin{equation}\label{eq:relationQm}
\mathbb{E}_{B_L}\big ( M_{B_L}(0) \big) = \sum\limits_{y \in B_L} \, G_{B^c_L}(y, 0) \, \, \big ( \, \mu - Q(y, B_L) \, \big),
\end{equation}
where $M_{B_L}(y)$ is the total number of jump instructions which are used at $y \in B_L$ for the stabilization of $B_L$.
We will first prove (\ref{eq:relationQm}) and then use it to prove the proposition.
In order to prove (\ref{eq:relationQm}), we use the ghost explorer technique similarly
to \cite{Shellef, Stauffer}.
First, we let the particles move until the ball $B_L$ is stable.
This means that some particles leave $B_L$ being absorbed at the boundary
and other particles remain in $B_L$ after having turned into the S-state.
We now let a \textit{ghost particle} start an independent
simple random walk from every vertex that is occupied by an S-particle in $B_L$.
Ghost particles are `killed' whenever they visit $B_L^c$.
We let $R_{B_L}(x)$ be the total number of visits at $x \in B_L$ by ghost particles. We let
$W_{B_L}(x)$ be the total number of visits at $x$ by normal particles or by ghost particles.
Now we have that,
$$
M_{B_L}(x) = W_{B_L}(x) - R_{B_L}(x).
$$
As both particles and ghost particles stop only when they leave $B_L$ and as random walks
are independent, we have that,
$$
\tilde{E}_{B_L}[ W_{B_L}(x)] = \mu \, \sum\limits_{y \in B_L} \, G_{B^c_L}(y, x),
$$
where $\tilde {E}_{B_L}$ is the expectation in the enlarged probability space
of activated random walks and ghost particles.
Moreover, since precisely one ghost leaves from every vertex where an S-particle
is located,
$$
\tilde{E}_{B_L}[ R_{B_L}(x)] = \sum\limits_{y \in B_L} \, G_{B^c_L}(y, x) \, \, Q(y, B_L),
$$
by linearity of expectation.
The proof of (\ref{eq:relationQm}) is concluded again by using linearity of expectation.
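Explicitly, combining the last three displays and using linearity,
$$
\tilde{E}_{B_L} \big [ \, M_{B_L}(x) \, \big ] \, = \, \tilde{E}_{B_L} \big [ \, W_{B_L}(x) \, \big ] - \tilde{E}_{B_L} \big [ \, R_{B_L}(x) \, \big ]
\, = \, \sum\limits_{y \in B_L} \, G_{B^c_L}(y, x) \, \big ( \, \mu - Q(y, B_L) \, \big ),
$$
which, taken at $x=0$ and noting that $M_{B_L}(0)$ does not involve the ghost particles (so that $\tilde{E}_{B_L}$ and $\mathbb{E}_{B_L}$ agree on it), is exactly (\ref{eq:relationQm}).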
By using (\ref{eq:relationQm}) and
Lemma \ref{lemma:RWboundary}, we obtain that, for any $\delta \in (0,1)$, there exists an
infinite increasing sequence of integers $\{L_n\}_{n \in \mathbb{N}}$ such that,
$$
\mathbb{E}_{B_{L_n}}\big (\, M_{B_{L_n}}(0) \, \big ) \leq \, \, \sum\limits_{y \in B_{(1 - \delta) L_n }} \, \, \Big ( \, \mu - Q(y,B_{L_n}) \, \Big ) \, \, G_{B_{L_n}^c}(y,0) \, \, \, \, + \, \, \,\mu \, C_1 \, \delta \, L_n.
$$
Since the left-hand side of the previous inequality has to be non-negative for any $n$, we obtain (\ref{eq:constraintQ}),
concluding the proof.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{theo: non amenable}}]
We will show that, if $G$ is such that the random
walk has a positive speed, then condition (\ref{eq:constraintQ}) cannot be satisfied
for an infinite increasing sequence $\{L_n\}_{n \in \mathbb{N}}$
when $\mu < \frac{\lambda}{1+\lambda}$ and ARW is active,
obtaining a contradiction and concluding that ARW fixates
when $\mu < \frac{\lambda}{1+\lambda}$.
First of all, note that
\begin{equation}\label{eq:lowerboundQ}
Q(x, B_L) \geq \, \mathcal{P} \big ( \, m_{B_{ L}}(x) \geq 1 \, \big ) \, \frac{\lambda}{1+\lambda} .
\end{equation}
This inequality was proved in \cite{Stauffer} and follows from the next relation,
\begin{equation}\label{eq:relation1}
\{ m^1_{(x, B_{L})}(x) \geq 1\} \, \, \cap \, \, \{ \tau^{x, m_{(x,B_L)}^1(x) + 1} = \tau_{x \rho} \} \, \, \subset \, \, \{ \eta^{\prime}(x) = \rho \}.
\end{equation}
Indeed, if one concludes the weak stabilization of $(x,B_L)$ with one particle at $x$ and the next instruction at $x$ is sleep,
then the stabilization is completed with one particle at $x$.
Moreover, at least one instruction is used at $x$ during the stabilization of $B_L$ if and only if
at least one instruction is used at $x$ during the weak stabilization of $(x,B_L)$.
By independence of instructions, one obtains
(\ref{eq:lowerboundQ}).
Thus, assume that $\mu < \frac{\lambda}{1+\lambda}$ and that ARW is active
and let $D : = \frac{\frac{\lambda}{1+\lambda} - \mu}{2} >0$.
We have that, for any $\delta>0$ and for any $L$ large enough depending on $\delta$,
\begin{align}
\forall x \in B_{(1-\delta) L }, ~~~~~~~
Q(x, B_L) \,
& \geq \, \mathcal{P} \big ( \, m_{B_{\delta L} (x)}(x) \geq 1 \, \big ) \, \frac{\lambda}{1+\lambda}
\label{eq:step1bis} \\
& \geq \, \mathcal{P} \big ( \, m_{B_{\delta L}}(0) \geq 1 \, \big ) \, \frac{\lambda}{1+\lambda}
\label{eq:step1tris} \\
& \geq \mu + D,
\label{eq:step2}
\end{align}
where the first inequality follows from (\ref{eq:lowerboundQ})
and from monotonicity (Lemma \ref{prop:lemma3}), the second inequality uses vertex-transitivity, and
the third inequality uses the definition of activity and Lemma \ref{prop:lemma4}.
For any $\delta \in (0,1)$ and for any $L$ large enough the next inequality holds,
\begin{align}
\sum\limits_{y \in B_{ (1-\delta) \, L }} G_{B_{L}^c}(y,0)\Big ( Q(y,B_L ) \, - \, \mu \Big ) &
\geq D \sum\limits_{y \in B_{(1-\delta) \, L}} G_{B^c_L}(y,0) \\
& = \, \, D \, G_{B_{L }^c}(0,0) \, \, \mathbb{E}_0 \Big (
\sum\limits_{t=0}^{\tau_{B^c_L}-1} \mathbbm{1} \big\{ X(t) \in B_{(1-\delta)L} \big\} \Big ) \\
\label{eq:violation}
& \geq \, \, D \, G_{B_{L}^c}(0,0) \, \, p \, \, (1-\delta) \, L ,
\end{align}
where for the first inequality we used (\ref{eq:regularity}) and
for the second inequality we used conditional expectation and
we let $p = P_0 \big ( X(t) \neq 0 \, \, \forall t >0 \big ) >0$ be the probability that the random walk
does not return to its starting vertex, which is positive since the random walk has a positive speed
and is then transient.
Choose now $\delta \in (0,1)$ small enough such that, for any $L$ large enough,
\begin{align*}
\sum\limits_{y \in B_{ (1-\delta) \, L }} G_{B_{L}^c}(y,0)\Big ( Q(y,B_L ) \, - \, \mu \Big )
& \geq \, \, D \, G_{B_{L}^c}(0,0) \, \, p \, \, (1-\delta) \, L \\
& > C_1\, \delta \, L.
\end{align*}
Since the previous condition holds for any $L$ large enough, we deduce that
an infinite increasing sequence $\{L_n\}_{n \in \mathbb{N}}$ satisfying (\ref{eq:constraintQ})
cannot exist when $\mu < \frac{\lambda}{1+\lambda}$, obtaining the desired contradiction.
\end{proof}
\end{document}
\begin{document}
\title{
Discrete $H^1$-inequalities for spaces admitting M-decompositions}
\begin{abstract}
We find new discrete $H^{1}\!$- and Poincar\'e-Friedrichs inequalities by studying the invertibility of
the DG approximation of the flux for local spaces admitting M-decompositions. We then show \ber{how} to use
\ber{these inequalities} to define and analyze new, superconvergent HDG and mixed methods for which the stabilization
function is defined in such a way that the approximations satisfy new $H^1$-stability results with which
their error analysis is greatly simplified. We apply this approach to define a wide class of
energy-bounded, superconvergent HDG and mixed methods
for the incompressible Navier-Stokes equations defined on unstructured meshes using, in 2D, general polygonal elements and, in 3D, general, flat-faced tetrahedral, prismatic, pyramidal and hexahedral elements.
\end{abstract}
\begin{keywords}
discontinuous Galerkin, hybridization, stability, superconvergence, Navier-Stokes
\end{keywords}
\begin{AMS}
65N30, 65M60, 35L65
\end{AMS}
\thispagestyle{plain} \markboth{B. Cockburn, et~al.}{
Discrete $H^1$-inequalities and superconvergent HDG methods for Navier-Stokes}
\centerline{{\bf Version of \today}}
\section{Introduction}
\label{sec:intro}
\ber{In this paper, we obtain new discrete stability inequalities with which we
carry out the first a priori error analysis of a
wide class of hybridizable discontinuous Galerkin (HDG) and mixed methods for the Navier-Stokes
equations. The methods are defined on unstructured meshes using, in 2D, general polygonal
elements and, in 3D, general, flat-faced tetrahedral, prismatic, pyramidal and hexahedral elements. They are a direct extension of the
corresponding methods introduced for the Stokes flow in \cite{CockburnFuQiu16}. We prove optimal error estimates in all the unknowns as
well as superconvergence results for the approximate velocity. By this, we mean that a new approximation for the velocity can be obtained in an
elementwise manner which converges faster than the original velocity approximation.
}
\ber{The unifying feature of the above-mentioned class of methods is that they are defined by using
the theory of M-decompositions. Using this theory, superconvergent HDG and mixed methods have
been devised for diffusion \cite{CockburnFuSayas16,CockburnFuM2D,CockburnFuM3D}, for linear
incompressible flow \cite{CockburnFuQiu16}, and for linear elasticity \cite{CockburnFuElas}.
The theory of M-decompostions has also been used to obtain commuting de Rham sequences
\cite{CockburnFuCommuting}. Here, we use it to obtain the above-mentioned new discrete
inequalities.}
\ber{To better explain our results, we introduce the HDG and mixed methods for steady-state diffusion
\[
\mathrm{c}\boldsymbol{q} + \nabla u = 0, \quad
\nabla \cdot \boldsymbol{q} = f \text{ in }\Omega,
\quad
\text{ and }
\quad u = g \text{ on } \partial \Omega,
\]
and introduce the concept of an M-decomposition. We then describe the inequalities we want to obtain and, finally, describe how we are going to apply them to the analysis of HDG and mixed methods for the Navier-Stokes equations.}
\subsection*{HDG methods and M-decompositions} \ber{To define the HDG methods,
we follow \cite{CockburnGopalakrishnanLazarov09}.
Thus, we take the domain $\Omega\subset \mathbb R^d$ to be a polygon if $d=2$ and
a polyhedron if $d=3$. We triangulate it with a conforming mesh ${\mathcal{T}_h}:=\{K\}$ made of
shape-regular polygonal/polyhedral elements $K$.
We set $\partial {\mathcal{T}_h}:=\{\partial K: \, K\in {\mathcal{T}_h}\}$, and denote by $\mathcal{F}_h$ the set of faces $F$
of the elements $K \in {\mathcal{T}_h}$. We also denote by $\mathcal{F}(K)$ the set of faces $F$ of the element $K$.}
\ber{The HDG method seeks an approximation to $(u, \boldsymbol{q}, u|_{\mathcal{F}_h})$,
$(u_h, \boldsymbol{q}_h, \widehat{u}_h)$, in the {\color{blue}finite dimensional} space $W_h \times \boldsymbol{V}_h
\times M_h$, where
\begin{alignat*}{3}
\boldsymbol{V}_h:=&\;\{\boldsymbol{v}\in\boldsymbol{L}^2({\mathcal{T}_h}):&&\;\boldsymbol{v}|_K\in\boldsymbol{V}(K),&&\; K\in{\mathcal{T}_h}\},
\\
W_h:=&\;\{w\in{L}^2({\mathcal{T}_h}):&&\;w|_K\in W(K),&&\; K\in{\mathcal{T}_h}\},
\\
M_h:=&\;\{\widehat{w}\in{L}^2(\mathcal{F}_h):&&\;\widehat{w}|_F\in M(F),&&\; F\in\mathcal{F}_h\},
\end{alignat*}
and determines it as the only solution of the following weak formulation:
\begin{subequations}
\label{HDG equations}
\begin{alignat}{3}
\label{HDG equations a}
\bint{\mathrm{c}\,\boldsymbol{q}_h}{\boldsymbol{v}}&-\bint{u_h}{\nabla \cdot \boldsymbol{v}} && + \bintEh{\widehat{u}_h}{ \boldsymbol{v} \cdot \boldsymbol{n}} && = 0, \\
\label{HDG equations b}
& -\bint{\boldsymbol{q}_h}{\nabla w} && + \bintEh{\widehat{\boldsymbol{q}}_h \cdot \boldsymbol{n}}{w} && = \bint{f}{w}, \\
&\widehat{\boldsymbol{q}}_h\cdot \boldsymbol{n} = \boldsymbol{q}_h\cdot \boldsymbol{n} &&+ {\alpha (u_h - \widehat{u}_h)} &&\quad\text{ on }\quad \partial {\mathcal{T}_h},
\\
& && \quad \, \langle{\widehat{\boldsymbol{q}}_h \cdot \boldsymbol{n}},{\widehat{w}}\rangle_{\partial{\mathcal{T}_h}\setminus\partial\Omega} && = 0,\\
& && \quad \, \langle \widehat{u}_h,{\widehat{w}}\rangle_{\partial\Omega} && = \langle {g},{\widehat{w}}\rangle_{\partial\Omega},
\end{alignat}
for all $(w, \boldsymbol{v}, \widehat{w}) \in W_h \times \boldsymbol{V}_h \times M_h$.
\end{subequations}
Here we write
$\bint{\eta}{\zeta} := \sum_{K \in {\mathcal{T}_h}} (\eta, \zeta)_K,$
where $(\eta,\zeta)_D$ denotes the integral of $\eta\zeta$ over the domain $D \subset \mathbb{R}^n$. We also write
$\bintEh{\eta}{\zeta}:= \sum_{K \in {\mathcal{T}_h}} \langle \eta \,,\,\zeta \rangle_{\partial K},$
where $\langle \eta \,,\,\zeta \rangle_{D}$ denotes the integral of $\eta \zeta$ over the domain $D \subset \mathbb{R}^{n-1}$.
When vector-valued functions are involved, we use a similar notation.}
\ber{The different HDG methods are obtained by choosing
the local spaces $\boldsymbol{V}(K)$, $W(K)$ and
\[
M(\partial K):=\{\widehat{w}\in L^2(\partial K): \;\widehat{w}|_F\in M(F)\mbox{ for all }F\in \mathcal{F}(K)\},
\]
and the {\em linear local stabilization} function ${\alpha}$. It turns out \cite{CockburnFuSayas16} that if we can decompose $\boldsymbol{V}(K)\times W(K)$
in such a way that
\begin{alignat*}{1}
\boldsymbol{V}(K)&=\widetilde{\boldsymbol{V}}(K) \oplus \widetilde{\boldsymbol{V}}^\perp(K),
\\
{W}(K)&=\widetilde{W}(K) \oplus \widetilde{W}^\perp(K),
\\
M(\partial K)&=\widetilde{\boldsymbol{V}}^\perp(K)\cdot\boldsymbol{n}|_{\partial K} \oplus \widetilde{W}^\perp(K)|_{\partial K},
\end{alignat*}
and a couple of simple inclusion properties hold,
then it is possible to find a stabilization function $\alpha$ such that the resulting HDG ($\alpha\neq0$)
or mixed method ($\alpha=0$) is superconvergent. Since this decomposition is essentially induced by the space $M(\partial K)$, it is called an $M(\partial K)$-decomposition of the space $\boldsymbol{V}(K)\times W(K)$. The explicit construction of those spaces for general polygonal
elements was carried out in \cite{CockburnFuM2D} (see the main examples in Table \ref{table:examples2D}) and for flat-faced general pyramids, prisms, and \ber{hexahedral} elements in
\cite{CockburnFuM3D}.
}
\subsection*{Invertibility of the discrete gradient operator}
In this paper, we study the invertibility properties of the mapping
\begin{subequations}
\label{gc_hdg}
\begin{alignat}{3}
W(K)\times &M(\partial K) &&\longrightarrow && \;\bld{V}(K),
\\
(u_h,&\,\widehat{u}_h) &&\longmapsto && \;\bld{q}_h,
\end{alignat}
\vskip-.4truecm
\noindent where
\vskip-.6truecm
\begin{equation}
\label{gc_hdg_o}
(\mathrm{c}\,\bld{q}_h,\bld{v})_K=(u_h,\nabla\cdot\bld{v})_K-\langle\widehat{u}_h, \bld{n}\cdot\bld{v}\rangle_{\partial K}\quad\forall\; \bld{v}\in \bld{V}(K),
\end{equation}
\end{subequations}
for spaces $\bld{V}(K)\times W(K)$ admitting an $M(\partial K)$-decomposition \cite{CockburnFuSayas16}.
This mapping is a discrete version of the
{\em constitutive} equation relating a vector-valued function $\bld q$ and a scalar-valued function $u$:
\[
\mathrm{c}\,\bld{q} = - \nabla u,
\]
where $\mathrm{c}$ and $\mathrm{c}^{-1}$ are bounded, symmetric and uniformly positive definite matrix-valued functions, and
has been used in, arguably, {\em all} DG and hybridized versions of mixed methods. \ber{In particular, it captures the first equation defining the HDG method for steady-state diffusion.}
We present new discrete versions of the estimates
\begin{alignat*}{3}
\|\nabla u\|^2_K&=\| \mathrm{c}\,\bld{q} \|^2 _K &&\quad\mbox{{\rm (trivial)}},
\\
h_K^{-2}\| u-\overline{u}^K\|^2_K&\le C\, \| \mathrm{c}\,\bld{q} \|^2 _K&&\quad\mbox{{\rm (Poincar\'e-Friedrichs)}},
\end{alignat*}
where $\overline{\zeta}^{\,D}$ denotes the average of $\zeta$ \ber{on $D$} and $\|\cdot\|_D$ is the $L^2(D)$-norm.
They are expressed in terms of the (equivalent)
seminorms
\begin{subequations}
\label{localseminorms}
\begin{alignat}{3}
\label{localseminorm1}
| (u_h, \widehat{u}_h)|^2_{1,K}:=& \|\nabla u_h\|^2_K+ h^{-1}_K\, \|u_h-\widehat{u}_h\|^2_{\partial K},
\\
\label{localseminorm0}
| (u_h, \widehat{u}_h)|^2_{\mbox{{\rm \tiny PF}},K}:=&\| u_h-\overline{\widehat{u}_h}^{\;\partial K}\|^2_K+ h_K\, \|\widehat{u}_h-\overline{\widehat{u}_h}^{\;\partial K}\|^2_{\partial K},
\end{alignat}
\end{subequations}
and are, essentially, of the form
\begin{alignat*}{2}
|\,(u_h,\widehat{u}_h)\,|_{1,K}^2&\le C\,\left(\|\mathrm{c}\,\bld{q}_h\|_{K}^2 +
{h_K^{-1}\|P_{M_S}( u_h-\widehat{u}_h)\|_{{\partial K}}^2}\right) &&\quad\mbox{{\rm ($H^1$)},}
\\
h_K^{-2}\, |\,(u_h,\widehat{u}_h)\,|_{\mbox{{\rm \tiny PF}},K}^2&\le C\,\left( \|\mathrm{c}\,\bld{q}_h\|_{K}^2 +
h_K^{-1}\|{P_{M_S}(u_h-\widehat{u}_h)}\|_{{\partial K}}^2\right)&&\quad\mbox{{\rm (Poincar\'e-Friedrichs)},}
\end{alignat*}
where $M_S= M_S({\partial K})$,
referred to as the {\it stabilization} space, is an easy-to-compute subspace of the {space $M({\partial K})$} whose dimension is chosen to be {\em minimal},
and $P_{M_S}$ is its corresponding $L^2$-projection.
\ber{These inequalities, which are nothing but {\em stabilized}
versions of $\inf$-$\sup$ conditions \cite{BoffiBrezziFortin13},
are the key ingredients for our analysis of HDG and mixed methods.
They generalize, to {\em all} spaces admitting M-decompositions, the $H^1$-inequality
obtained with {$M_{S}=\emptyset$} in \cite[Proposition 3.2]{EggerSchoberl10},
for the well known Raviart-Thomas spaces for simplexes, and, for smaller spaces, in \cite[Theorem 3.2]{ChungEngquist09} with
$M_S$ equal to the restriction of $M({{\partial K}})$ onto an arbitrary face $F_K$
on which $\widehat{u}_h$ was set to coincide with $u_h$.}
\subsection*{Application to the Navier-Stokes equations} We show how to use these inequalities, not in the relatively simple case of convection-diffusion equations, but in the
more difficult case of the velocity gradient-velocity-pressure formulation of the
steady-state incompressible Navier-Stokes equations in two- and three-space dimensions:
\begin{subequations}
\label{ns-equation}
\begin{align}
\label{nsa}
\mathrm{L}=\nabla \bld u&\quad \text{in}\quad\Omega,\\
\label{nsb}
-\nu{\bld{\nabla\cdot}} \mathrm{L}+{\nabla\cdot}(\bld u\otimes\bld{u})+{\nabla} p=\bld{f}&\quad \text{in}\quad\Omega,\\
\label{nsc}
{\nabla\cdot}\bld{u}=0&\quad \text{in}\quad \Omega,\\
\label{nsd}
\bld{u}=\bld{0}&\quad \text{on}\quad \partial\Omega,\\
\label{nse}
\int_\Omega p=0&,
\end{align}
\end{subequations}
where $\mathrm{L}$ is the velocity gradient, $\bld{u}$ is the velocity, $p$ is the pressure, $\nu$ is the kinematic viscosity
and $\bld{f}\in {L}^2(\Omega)^d$
is the external body force.
\ber{Let us compare our results with those in \cite{CesmeliogluCockburnQiu17} where the only error analysis for HDG methods for the Navier-Stokes equations has been recently carried out. Let $(\bld{u}_h,\widehat{\bld{u}}_h)$ be an approximation of the velocity
$(\bld{u}|_\Omega, \bld{u}|_{\mathcal{F}_h})$, where $\mathcal{F}_h$ denotes the set of faces of the mesh ${\mathcal{T}_h}$ of the domain $\Omega$, and
let $\mathrm{L}_h$ be an approximation of
the velocity gradient $\mathrm{L}|_\Omega$. In \cite{CesmeliogluCockburnQiu17}, the authors considered unstructured meshes made of simplexes, spaces of polynomials of degree $k$, and a stabilization function
$\alpha$ such that
\[
\langle\alpha(\bld{u}_h-\widehat{\bld{u}}_h),\bld{u}_h-\widehat{\bld{u}}_h\rangle_{{\partial K}} = h^{-1}_K \|(\bld{u}_h-\widehat{\bld{u}}_h)\cdot\bld{n}\|_{{\partial K}}^2.
\]
For this HDG method, optimal convergence order for all unknowns as well as the superconvergence of the velocity was obtained by using
the novel upper bound
\[
\vertiii{(\bld{u}_h,\widehat{\bld{u}}_h)}_{1,{\mathcal{T}_h}}^2 \le C\,\sum_{K\in{\mathcal{T}_h}} ( \| \mathrm{L}_h\|^2_K + h^{-1}_K \|(\bld{u}_h-\widehat{\bld{u}}_h)\cdot\bld{n}\|_{{\partial K}}^2),
\]
\ber{where the discrete $H^1$-norm $\vertiii{\cdot}_{1,{\mathcal{T}_h}}$ is given by
\[
\vertiii{(u_h,\widehat{u}_h)}_{1,{\mathcal{T}_h}}:=(\sum_{K\in{\mathcal{T}_h}} |(u_h,\widehat{u}_h)|^2_{1,K})^{1/2}.
\]}
In contrast, in this paper, \ber{stronger results} are obtained for a wide class of HDG and mixed methods defined on a variety of element shapes: general polygonal elements in 2D, and tetrahedral, pyramidal, prismatic and hexahedral elements in 3D.} The local spaces defining these methods are those used for the corresponding methods for the Stokes equations of incompressible flow proposed in \cite{CockburnFuQiu16}; the stabilization function is not the same though. The spaces are constructed by using, as building blocks, the local spaces
$\bld{V}(K)\times W(K)$ admitting an $M(\partial K)$-decomposition introduced in \cite{CockburnFuSayas16} for steady-state diffusion.
\ber{To obtain the new discrete inequalities, we proceed in two steps. First, we show that for all these methods, we have the
discrete $\bld{H}^1$-inequality
\[
\vertiii{(\bld{u}_h,\widehat{\bld{u}}_h)}_{1,{\mathcal{T}_h}}^2 \le C\,\sum_{K\in{\mathcal{T}_h}} ( \| \mathrm{L}_h\|^2_K + h^{-1}_K {\|P_{M_S}( \bld{u}_h-\widehat{\bld{u}}_h) \|_{{\partial K}}^2}).
\]
We then show that if we {\em define} a stabilization function $\alpha$ such that
\[
\langle\alpha(\bld{u}_h-\widehat{\bld{u}}_h),\bld{u}_h-\widehat{\bld{u}}_h\rangle_{{\partial K}} = {h^{-1}_K \|P_{M_S}(\bld{u}_h-\widehat{\bld{u}}_h) \|_{{\partial K}}^2},
\]
we obtain new $\bld{H}^1$-boundedness results for the approximation, and new $\bld{H}^1$-stability inequalities, with which we can easily obtain the above-mentioned convergence properties.}
\subsection*{Organization of the paper}The rest of the paper is organized as follows. In Section
\ref{sec:dh1}, {we present the general properties of the local spaces admitting M-decompositions and those of the stabilization subspaces $M_S$;
specific choices of $M_S$ for the main spaces admitting M-decompositions are also provided. We then present and discuss our main result, namely,
the new discrete inequalities of Theorem \ref{thm:dh1} which we prove} in Section \ref{sec:proof-dh1}.
In Section \ref{sec:ns}, we define our HDG and mixed methods for the incompressible Navier-Stokes equations
and present their energy-boundedness and superconvergence properties; \ber{their proofs are provided} in Section \ref{sec:proof-ns}.
We end with some concluding remarks in Section \ref{sec:conclude}.
\section{The main result}
\label{sec:dh1}
In this Section, we present \ber{and discuss} our main result, namely,
the discrete $\bld{H}^1$- and Poincar\'e-Friedrichs inequalities
of Theorem \ref{thm:dh1}\ber{; their proof is postponed to Section 3}.
\ber{We first present the two ingredients
needed to obtain these inequalities, namely, the spaces admitting M-decompositions and a
stabilization subspace of the trace space $M(\partial K)$.
}
\subsection{Notation}
\label{sec:notation}
Given a domain $D\subset \mathbb{R}^n$, we denote by
$\EuScript{P}_k(D)$ and $\widetilde{\EuScript{P}}_k(D)$ the space of polynomials of degree no
greater than $k$, and the space of homogeneous polynomials of degree $k$, respectively, defined on the domain $D$.
When $D$ is a unit square with coordinates $(x,y)$, we denote by
$\EuScript{Q}_k(D):= \EuScript{P}_k(x)\otimes \EuScript{P}_k(y)$ and $\widetilde{\EuScript{Q}}_k(D):=\widetilde{\EuScript{P}}_k(x)\otimes \widetilde{\EuScript{P}}_k(y)$ the space of
tensor-product polynomials of degree no
greater than $k$, and the space of homogeneous tensor-product polynomials of degree $k$, respectively.
We use a
similar notation on tensor-product polynomial spaces on the unit cube. When $D:=B\otimes I$ is a unit prism having a triangular base $B$ with coordinates $(x,y)$ and a $z$-directional edge $I$, we denote by
$\EuScript{P}_{k|k}(D):= \EuScript{P}_k(x,y)\otimes \EuScript{P}_k(z)$ and $\widetilde{\EuScript{P}}_{k|k}(D):=\widetilde{\EuScript{P}}_k(x,y)\otimes \widetilde{\EuScript{P}}_k(z)$ the space of
tensor-product polynomials of degree no
greater than $k$, and the space of homogeneous tensor-product polynomials of degree $k$, respectively.
Vector-valued spaces are denoted with a superscript $d$ (the space dimension); for example,
$\EuScript{P}_k(K)^d$ is the space of vectors whose entries lie in $\EuScript{P}_k(K)$.
We denote by $\|\cdot\|_{W^{m,p}(D)}$ the standard $W^{m,p}$-Sobolev norm on the domain $D\subset \mathbb R^d$. For the Hilbert space
$H^m(D):=W^{m,2}(D)$, we simply write $\|\cdot\|_{m,D}$ instead of
$\|\cdot\|_{H^{m}(D)}$, and $\|\cdot\|_{D}$ instead of $\|\cdot\|_{0,D}$.
Similarly, when $p=\infty$, we write $\|\cdot\|_{m,\infty, D}$ instead of
$\|\cdot\|_{W^{m,\infty}(D)}$, and $\|\cdot\|_{\infty, D}$ instead of $\|\cdot\|_{0,\infty, D}$.
For a given second-order tensor $\mathrm{c}$, we denote by
$\|\cdot\|_{\mathrm{c},D}$ the $\mathrm{c}$-weighted $L^2$-norm on the domain $D$.
\ber{Finally, we denote by ${\lambda_{\mathrm{c}}^\mathrm{max}}(K)$ the $L^\infty(K)$-norm of the maximum eigenvalue of the tensor $\mathrm{c}$}.
\subsection{\!Examples of spaces $\bld{V}(K)\times W(K)$ admitting $M({\partial K})$-decompositions}
An M-decomposition relates the trace of the normal component of the space of approximate fluxes ${\boldsymbol V}(K)$
and
the trace of the space of approximate scalars
$W(K)$
with the space of approximate traces
$
M(\partial K).
$
To define it,
we need to consider the combined trace operator
\begin{alignat*}{6}
\mathrm{tr} : & {\boldsymbol V}(K)\times W(K) &\quad\longrightarrow\quad
& L^2(\partial K)\\
& (\bld{v},w) &\quad\longmapsto\quad &
(\bld{v}\cdot\boldsymbol{n} +w)|_{\partial K}
\end{alignat*}
\begin{definition} [The M-decomposition] We say that ${\boldsymbol V}(K)\times W(K)$ admits an M-decomposition when
\label{definition:m}
\begin{itemize}
\item[{\rm(a)}] $\mathrm{tr} ({\boldsymbol V}(K)\times W(K))\subset M(\partial K)$,
\end{itemize}
and there exists a subspace $\widetilde\VV(K)\times\widetilde W(K)$ of ${\boldsymbol V}(K)\times W(K)$ satisfying
\begin{itemize}
\item[{\rm(b)}] $\nabla W(K)\times\nabla\cdot{\boldsymbol V}(K)\subset \widetilde\VV(K)\times\widetilde W(K),$
\item[{\rm(c)}] $\mathrm{tr}: \widetilde\VV{}^\perp(K)\times \widetilde W^\perp(K)\rightarrow M(\partial K)$ is an isomorphism.
\end{itemize}
Here $\widetilde\VV{}^\perp(K)$ and $\widetilde W^\perp(K)$ are the {$L^2(K)$-}orthogonal complements of
$\widetilde\VV(K)$ in ${\boldsymbol V}(K)$, and of $\widetilde W(K)$ in $W(K)$, respectively.
\end{definition}
Local spaces ${\boldsymbol V}(K)\times W(K)$ admitting $M(\partial K)$-decompositions have been explicitly constructed in two-dimensions
for general polygonal elements $K$ (see some examples in Table \ref{table:examples2D}) in
\cite{CockburnFuM2D} and in three-dimensions for four types of polyhedral elements $K$, namely, tetrahedra, pyramids, prisms, and hexahedra
in \cite{CockburnFuM3D}. \ber{As pointed out in the Introduction, the} main interest of these spaces is that they generate superconvergent HDG and mixed methods, see \cite{CockburnFuSayas16}.
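As a minimal illustration of Definition \ref{definition:m} (a sketch of the lowest-order Raviart-Thomas case on a triangle, that is, the $k=0$ instance of the ${\mathbf{RT}_{k}}$ row in Table \ref{table:examples2D}), take $K$ a triangle, ${\boldsymbol V}(K)=\EuScript{P}_0(K)^2\oplus\boldsymbol{x}\,\widetilde{\EuScript{P}}_0(K)$, $W(K)=\EuScript{P}_0(K)$, and $M(\partial K)$ the space of functions which are constant on each of the three edges. The normal trace of any $\bld{v}\in{\boldsymbol V}(K)$ is constant on each edge, so condition (a) holds, and we can take $\widetilde\VV(K):=\{\bld 0\}$ and $\widetilde W(K):=W(K)$: condition (b) holds because $\nabla W(K)=\{\bld 0\}$ and $\nabla\cdot{\boldsymbol V}(K)=\EuScript{P}_0(K)=\widetilde W(K)$. Finally, $\widetilde\VV{}^\perp(K)={\boldsymbol V}(K)$ and $\widetilde W^\perp(K)=\{0\}$, so the trace map reduces to $\bld{v}\mapsto \bld{v}\cdot\boldsymbol{n}|_{\partial K}$, which sends the three-dimensional space ${\boldsymbol V}(K)$ bijectively onto the three-dimensional space $M(\partial K)$ (these are the classical Raviart-Thomas degrees of freedom), and condition (c) holds as well.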
\begin{table}[ht]
\caption{Spaces ${\boldsymbol V}(K)\times W(K)$ admitting an $M(\partial K)$-decomposition. {\rm \cite{CockburnFuSayas16}}}
\centering
\begin{tabular}{l c c }
\hline
\noalign{\smallskip}
\multicolumn{1}{c}{$\boldsymbol{V}(K)$} & $W(K)$ & method \\
\noalign{\smallskip}
\hline\hline
\noalign{\smallskip}
\multicolumn{3}{c}{$M({\partial K})=\EuScript{P}_{k}(\partial K)$, $K$ is a square.
}\\
\hline
\noalign{\smallskip}
$\boldsymbol{\qol}_k\oplus\bld{\mathrm{curl}}\;\mathrm{span}\{
x^{k+1} y , x\,y^{k+1}\}\oplus\;\mathrm{span}\{
\bld x\, x^{k} y^k\}$ & $\EuScript{Q}_{k}$& ${\mathbf{TNT}_{[k]}}$ \cite{CockburnQiuShi12}\\
$\boldsymbol{\qol}_{k}\oplus\bld{\mathrm{curl}}\;\mathrm{span}\{
x^{k+1} y , x\,y^{k+1}\}$ & $\EuScript{Q}_{k}$ & ${\mathbf{HDG}^Q_{[k]}}$\cite{CockburnQiuShi12} \\
$\boldsymbol{\qol}_k\oplus\bld{\mathrm{curl}}\;\mathrm{span}\{
x^{k+1} y , x\,y^{k+1}\}$& $\EuScript{Q}_{k}\setminus\{x^k\,y^k\}$ & ${\mathbf{BDM}_{[k]}}$\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{3}{c}{$M=\EuScript{P}_{k}(\partial K)$, $K$ is a triangle.
}\\
\hline
\noalign{\smallskip}
$\boldsymbol{\pol}_k\oplus\boldsymbol{x}\,\widetilde{\EuScript{P}}_k$ & $\EuScript{P}_{k}$& ${\mathbf{RT}_{k}}$ \cite{RaviartThomas77}\\
$\boldsymbol{\pol}_{k}$ & $\EuScript{P}_{k}$ & $\boldsymbol{\mathrm{HDG}}_k$\cite{CockburnQiuShi12}\\
$\boldsymbol{\pol}_k$& $\EuScript{P}_{k-1}$
& ${\mathbf{BDM}_{k}}$ \cite{BrezziDouglasMarini85}\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{3}{c}{$M=\EuScript{P}_{k}(\partial K)$, $K$ is a square.
}\\
\hline
\noalign{\smallskip}
$\boldsymbol{\pol}_k\oplus\bld{\mathrm{curl}}\;\mathrm{span}\{
x^{k+1} y , x\,y^{k+1}\}\oplus\;\boldsymbol{x}\,\widetilde{\EuScript{P}}_k$ & $\EuScript{P}_{k}$& \cite{CockburnFuM2D}\\
$\boldsymbol{\pol}_{k}\oplus\bld{\mathrm{curl}}\;\mathrm{span}\{
x^{k +1} y , x\,y^{k+1}\}$ & $\EuScript{P}_{k}$ &\cite{CockburnFuM2D}\\
$\boldsymbol{\pol}_k\oplus\bld{\mathrm{curl}}\;\mathrm{span}\{
x^{k+1} y , x\,y^{k+1}\}$& $\EuScript{P}_{k-1}$
& ${\mathbf{BDM}_{[k]}}$ \cite{BrezziDouglasMarini85}\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{3}{c}{$M=\EuScript{P}_{k}(\partial K)$, $K$ is a quadrilateral.
}\\
\hline
\noalign{\smallskip}
$\boldsymbol{\pol}_k\oplus_{i=1}^{ne}\bld{\mathrm{curl}} \,\mathrm{span}\{
\xi_4\,\lambda_3^k , \xi_4\,\lambda_4^k\}
\oplus
\boldsymbol{x}\,\widetilde{\EuScript{P}}_k
$ & $\EuScript{P}_{k}$ & \cite{CockburnFuM2D}\\
$\boldsymbol{\pol}_{k}\oplus_{i=1}^{ne}\bld{\mathrm{curl}} \,\mathrm{span}\{
\xi_4\,\lambda_3^k , \xi_4\,\lambda_4^k\}\
$ & $\EuScript{P}_{k}$& \cite{CockburnFuM2D}\\
$\boldsymbol{\pol}_k\oplus_{i=1}^{ne}\bld{\mathrm{curl}} \,\mathrm{span}\{
\xi_4\,\lambda_3^k , \xi_4\,\lambda_4^k\}\
$& $\EuScript{P}_{k-1}$ & \cite{CockburnFuM2D}\\
\noalign{\smallskip}
\hline
\end{tabular}
\label{table:examples2D}
\end{table}
Let us explain the notation used in the above table. By $\bld{\mathrm{curl}}\,p$ we mean the vector $(-p_y,p_x)$.
\ber{By $\{\boldsymbol{{\mathrm{v}}}_i\}_{i=1}^{4}$ (and $\boldsymbol{{\mathrm{v}}}_5:=\boldsymbol{{\mathrm{v}}}_1$), we mean the four vertices of a quadrilateral; the vertices are ordered in a counter-clockwise manner. We}
denote by $\boldsymbol{\mathsf{e}}_i$ the edge connecting the vertices $\boldsymbol{{\mathrm{v}}}_i$ and $\boldsymbol{{\mathrm{v}}}_{i+1}$. Then, we set
\begin{align*}
\xi_i := &\; \eta_{i-1}\frac{\lambda_{i-2}}{\lambda_{i-2}(\boldsymbol{{\mathrm{v}}}_i)}+
\eta_{i}\frac{\lambda_{i+1}}{\lambda_{i+1}(\boldsymbol{{\mathrm{v}}}_{i})}
\quad\mbox{ and }\quad
\eta_i :=\Pi_{\underset{ j\not=i}{j=1}}^{4} \frac{\lambda_j}{\lambda_j+\lambda_i},
\end{align*}
where $\lambda_i$ \ber{is} the linear function that
vanishes on \ber{the} edge $\boldsymbol{\mathsf{e}}_i$ and reaches the maximum value $1$ in the closure of $K$. For details, see \cite{CockburnFuSayas16,CockburnFuM2D}.
\subsection{The stabilization subspace $M_S(\partial K)$}
\label{subsec:dh1-stabilization}
We also need to introduce the stabilization space $M_S(\partial K)$.
\ber{This} is a subspace of $M({\partial K})$ satisfying the following two conditions, inspired by \cite[Proposition 3.2]{CockburnFuSayas16}:
\begin{subequations}
\label{condition-ms}
\begin{align}
\dim M_S({\partial K}) = \dim \widetilde{W}^\perp(K) = &\;\dim W(K)-\dim {\nabla\cdot} {\boldsymbol V}(K),\\
\|P_{M_S}(\cdot )\|_{{\partial K}}\text{ is a norm} & \text{ on the space }
\widetilde{W}^\perp(K).
\end{align}
\end{subequations}
Here, $P_{M_S}$ denotes the $L^2({\partial K})$-projection onto the space $M_S({\partial K})$.
\ber{Examples of $M_S(\partial K)$ for various element shapes are collected in the following proposition,
whose proof is given in Section 3.}
\begin{proposition}
\label{prop:ms}
Let the space ${\boldsymbol V}(K)\times W(K)$ admit an $M({\partial K})$-decomposition.
Then, conditions \eqref{condition-ms} are satisfied in each of the following cases:
\begin{itemize}
\item [\em{(1)}] If ${\nabla\cdot} {\boldsymbol V}(K)=W(K)$ and $M_S({\partial K})=\emptyset$.
\item [\em{(2)}] If ${\nabla\cdot} {\boldsymbol V}(K)=\EuScript{P}_{k-1}(K), \ber{W(K)=\EuScript{P}_k(K)}$
and
\vskip-.5truecm
\[
M_S({\partial K}):=\{\widehat{w}\in L^2({\partial K}):\;\;\widehat{w}|_{F^*}\in \EuScript{P}_k(F^*), \;\;\widehat{w}|_{{\partial K}\backslash F^*} = 0 \}.
\]
\vskip-.2truecm
Here $F^*$ is a fixed face of the element $K$ such that
$K$ \ber{lies} on one side of the hyperplane containing $F^*$.
\item [\em{(3)}] If $K$ is a square or cube,
${\nabla\cdot} {\boldsymbol V}(K)={\nabla\cdot} \EuScript{Q}_k(K)^d, \ber{W(K)=\EuScript{Q}_k(K)}$
and
\vskip-.5truecm
\[
M_S({\partial K}):=\{\widehat{w}\in L^2({\partial K}):\;\;\widehat{w}|_{F^*}\in \widetilde{\EuScript{Q}}_k(F^*),
\;\;\widehat{w}|_{{\partial K}\backslash F^*} = 0 \}.
\]
\vskip-.2truecm
Here $F^*$ is any fixed face of the square or cubic element $K$.
\item [\em{(4)}] If $K$ is a prism with tensor product structure,
${\nabla\cdot} {\boldsymbol V}(K)={\nabla\cdot} \EuScript{P}_{k|k}(K)^d, \linebreak \ber{W(K)=\EuScript{P}_{k|k}(K)}$,
and
\vskip-.5truecm
\[
M_S({\partial K}):=\{\widehat{w}\in M({\partial K}):\;\;\widehat{w}|_{F^*}\in \widetilde{\EuScript{P}}_k(F^*),
\;\;\widehat{w}|_{{\partial K}\backslash F^*} = 0 \}.
\]
\vskip-.2truecm
Here $F^*$ is a \ber{triangular base} of the prism $K$.
\end{itemize}
\end{proposition}
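For instance, in case {\rm(2)} with $k=1$ and $d=2$ (e.g., the $\boldsymbol{\mathrm{HDG}}_1$ spaces on a triangle in Table \ref{table:examples2D}, for which ${\nabla\cdot}{\boldsymbol V}(K)=\EuScript{P}_0(K)$ and $W(K)=\EuScript{P}_1(K)$), the count in the first of conditions \eqref{condition-ms} reads
\[
\dim M_S({\partial K}) \, = \, \dim W(K)-\dim {\nabla\cdot} {\boldsymbol V}(K) \, = \, 3-1 \, = \, 2 \, = \, \dim \EuScript{P}_1(F^*),
\]
so $M_S({\partial K})$ consists of the linear functions on a single, fixed edge $F^*$, extended by zero to the rest of ${\partial K}$.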
\subsection{Discrete $H^{1}$- and Poincar\'e-Friedrichs inequalities}
Our main result is the following.
\begin{theorem}[Local, discrete $H^1$- and Poincar\'e-Friedrichs inequalities]
\label{thm:dh1}
Let $K$ be any element of the mesh $\mathcal{T}_h$.
Consider the mapping $(u_h,\,\widehat{u}_h)\in W(K)\times M(\partial K) \longmapsto \;\bld{q}_h\in \bld{V}(K)
$ given by \eqref{gc_hdg}. Then, if $\bld{V}(K)\times W(K)$
admits an $M(\partial K)$-decomposition and we set
\[
\Theta_K:=\left( {\lambda_{\mathrm{c}}^\mathrm{max}}(K) \,\|\bld{q}_h\|_{\mathrm{c},K}^2 +
h_K^{-1}\|P_{M_S}(u_h-\widehat{u}_h)\|_{\partial K}^2\right),
\]
where
$M_{S}(\partial K)$ is any subspace of $M(\partial K)$
satisfying conditions \eqref{condition-ms}, we have the inequalities
\begin{alignat*}{2}
|\,(u_h,\widehat{u}_h)\,|_{1,K}^2&\le C\,\Theta_K
&&\quad\mbox{{\rm ($H^1$)},}
\\
h_K^{-2}\, |\,(u_h,\widehat{u}_h)\,|_{\mbox{{\rm \tiny PF}},K}^2&\le C\,\Theta_K&&\quad\mbox{{\rm (Poincar\'e-Friedrichs)},}
\end{alignat*}
where
the constant $C$ only depends on the finite element spaces ${\boldsymbol V}(K)$, $W(K)$ and $M_S({\partial K})$,
and on the shape-regularity properties of the element $K$.
\end{theorem}
\
A detailed proof of this result is given in the next section. Here, let us briefly discuss it:
\
(1). First, note that it is not very difficult to obtain these inequalities if the projection operator
$P_{M_S}$ is replaced by the identity. Indeed, if we {\em only} assume that $\nabla W(K)\subset \bld{V}(K)$, we can take $\bld{v}:=\nabla u_h$ in the equation defining $\bld{q}_h$, \eqref{gc_hdg_o}, to immediately obtain
\[
\|\nabla u_h\|^2_K \le \|\mathrm{c}\,\bld{q}_h\|^2 _K + C\,h^{-1}_K\,\|u_h-\widehat{u}_h\|^2_{\partial K}.
\]
The desired inequality now easily follows.
\ber{However, such a choice might degrade the accuracy of the HDG method, as is typical of DG methods, see, for example, \cite{CastilloCockburnPerugiaSchoetzau00}. To avoid this, we must
choose a {\it minimal} space $M_S$ such that the inequalities in Theorem \ref{thm:dh1} still hold.}
(2). The inequalities of the above result are nothing but {\em stabilized} versions of $\inf$-$\sup$ conditions for the bilinear form defining $\bld{q}_h$, see \eqref{gc_hdg}, since
\[
\| \mathrm{c}\,\bld{q}_h\|_K \ge \sup_{\bld{v}\in\bld{V}(K)\setminus\{\bld{0}\}}\frac{(u_h,\nabla\cdot\bld{v})_K-\langle\widehat{u}_h, \bld{v}\cdot\bld{n}\rangle_{\partial K}}{\| \bld{v}\|_K},
\]
see \cite[Section 6.3]{BoffiBrezziFortin13}. For this reason, the subspace $M_S({\partial K})$ is called a {\em stabilization} subspace.
\
(3). Let us argue that the dimension of the stabilization space $M_S({\partial K})$ is actually minimal. It is obvious {that the influence of $u_h$ on $\bld{q}_h$ is only through its
$L^2$-projection} into $\nabla\cdot\bld{V}(K)$.
As a consequence, the part of $u_h$ lying on the
$L^2(K)$-orthogonal complement of $\nabla\cdot\bld{V}(K)$ in $W(K)$ {\em cannot} be controlled by the size of $\bld{q}_h$. Since the dimension of such a space is
$\dim W(K)-\dim {\nabla\cdot} {\boldsymbol V}(K) $ and this number, by the first of conditions \eqref{condition-ms}, is equal to $\dim M_S({\partial K})$, we see that the dimension
of $M_S({\partial K})$ cannot be smaller for the inequalities under consideration to hold.
\
(4).
\ber{The above $H^1$-inequality has been explicitly obtained in the literature for two cases \cite{EggerSchoberl10,ChungEngquist09}.
The first \cite{EggerSchoberl10} is the
case of the Raviart-Thomas elements on a simplex in which the spaces, using our notation,
\begin{align*}
{\boldsymbol V}(K)=&\EuScript{P}_k(K)^d+ \bld{x}\,\EuScript{P}_k(K), \quad W(K):=\EuScript{P}_k(K),\\
M(\partial K):=&\{\mu\in L^2({\partial K}):\;\mu|_F\in\EuScript{P}_k(F) \;\forall\; F\in\mathcal{F}(K)\},
\quad M_S({\partial K})=\emptyset,
\end{align*}
see \cite[Proposition 3.2]{EggerSchoberl10};
the second \cite{ChungEngquist09} is the case for the
staggered DG method in which the spaces (defined on a simplex) are given as follows:
\begin{align*}
{\boldsymbol V}(K)=&\EuScript{P}_k(K)^d, \quad W(K):=\EuScript{P}_k(K),\\
M(\partial K):=&\{\mu\in L^2({\partial K}):\;\mu|_F\in\EuScript{P}_k(F) \;\forall\; F\in\mathcal{F}(K)\},\\
M_S({\partial K}):=&\{\mu\in M(\partial K): \mu=0 \mbox{ on }\partial K\setminus F_K\},
\end{align*}
where $F_K$ is a single face of the simplex $K$; see \cite[Theorem 3.2]{ChungEngquist09}.
}
\
(5).
\ber{Given data $\widehat u_h$ and $f$,
let $(\bld{q}_h,u_h)\in \bld{V}(K)\times W(K)$ be the solution to the local problem
\eqref{HDG equations a}--\eqref{HDG equations b},
with the space $\bld{V}(K)\times W(K)$ admitting an $M({\partial K})$-decomposition.
The following inequalities were obtained in \cite[Theorem 4.3]{CockburnFuSayas16}
\begin{alignat*}{1}
\|\boldsymbol{n}abla u_h\|^2_K &\le C\,\left( {\lambda_{\mathrm{c}}^\mathrm{max}}(K) \,\|\bld{q}_h\|_{\mathrm{c},K}^2 +\|P_{\widetilde{W}^\perp} f\|^2_K\right),
\\
h_K^{-1}\|u_h-\widehat{u}_h\|^2_{{\partial K}} &\le C\,\left( {\lambda_{\mathrm{c}}^\mathrm{max}}(K) \,\|\bld{q}_h\|_{\mathrm{c},K}^2 +\|P_{\widetilde{W}^\perp} f\|^2_K\right).
\end{alignat*}
Our result replaces the quantity $\|P_{\widetilde{W}^\perp} f\|^2_K$ on the above right-hand side with
\[h_K^{-1}\|P_{M_S}(u_h-\widehat{u}_h)\|_{\partial K}^2.\]
It is this small change that significantly facilitates the analysis of HDG schemes for the incompressible
Navier-Stokes equations considered in this paper.
}
\
\ber{(6). The dependence of the constant $C$ in the estimates on the local spaces
$\boldsymbol{V}(K), W(K), \text{ and } M_S(\partial K),$
and on the shape regularity of the element $K$ remains to be studied. It is reasonable to believe that $C$ can be uniformly bounded by
a function of the maximum degree of the polynomial functions belonging to the local spaces and by a suitable measure of the element shape-regularity.
}
\subsection{Choosing the stabilization function $\alpha$ to get $H^1$-stability}
We end this Section by illustrating the fact that the stabilization subspace $M_S({\partial K})$
can be actually used, when defining HDG methods, to obtain what we could call the {\em minimal}
stabilization function $\alpha$ needed to achieve a new $H^1$-stability result. Let us do that in the framework of HDG approximations for steady-state diffusion problems.
So, if $(\bld{q}_h,u_h)\in \bld{V}(K)\times W(K)$
is the solution of the local problem \eqref{HDG equations a}--\eqref{HDG equations b},
we have the discrete energy identity
\[
\mathsf{E}_K(\bld{q}_h;u_h,\widehat{u}_h)
=(f, u_h)_K -\langle \widehat{\boldsymbol{q}}_h\cdot\bld{n}, \widehat{u}_h\rangle_{{\partial K}},
\]
where
\vskip-1truecm
\[
\mathsf{E}_K(\bld{q}_h;u_h,\widehat{u}_h):=
(\mathrm{c}\,\bld{q}_{h}, \bld{q}_h)_K + \langle \alpha(u_h - \widehat{u}_h), u_h - \widehat{u}_h\rangle_{{\partial K}},
\]
is the {\em energy} associated to the element $K$. We immediately \ber{see} that
\[
\| \mathrm{c}\,\bld{q}_h\|^2 _K + h^{-1}_K\,\|P_{M_S}(u_h-\widehat{u}_h)\|^2_{\partial K}\le C\, \mathsf{E}_K(\bld{q}_h;u_h,\widehat{u}_h),
\]
if we pick the stabilization function $\alpha$ as
\begin{equation}
\label{alpha}
\alpha(\widehat{\omega}):={h_K^{-1}}\,P_{M_S}(\widehat{\omega})\;\;\forall\;\widehat{\omega}\in L^2({\partial K}),
\end{equation}
in which case we say that this stabilization function $\alpha$ is {\em minimal}.
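Indeed, since $P_{M_S}$ is the $L^2({\partial K})$-orthogonal projection onto $M_S({\partial K})$, with this choice we simply have
\[
\langle \alpha(u_h - \widehat{u}_h), u_h - \widehat{u}_h\rangle_{{\partial K}}
= h_K^{-1}\,\langle P_{M_S}(u_h - \widehat{u}_h), u_h - \widehat{u}_h\rangle_{{\partial K}}
= h_K^{-1}\,\| P_{M_S}(u_h - \widehat{u}_h)\|^2_{{\partial K}},
\]
so the energy $\mathsf{E}_K(\bld{q}_h;u_h,\widehat{u}_h)$ contains precisely the two terms appearing in $\Theta_K$ (up to the factor ${\lambda_{\mathrm{c}}^\mathrm{max}}(K)$ multiplying $\|\bld{q}_h\|_{\mathrm{c},K}^2$).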
Thus, by establishing this link between the HDG stabilization function $\alpha$ and
the stabilization subspace $M_S({\partial K})$, an estimate of the energy immediate implies
an estimate on the discrete seminorms under consideration, that is,
\[
{\max\{ h_K^{-2}|(u_h,\widehat{u}_h)|^2_{\mbox{{\rm \tiny PF}},K}, |(u_h,\widehat{u}_h)|^2_{1,K}\}}\le C \,\mathsf{E}_K(\bld{q}_h;u_h,\widehat{u}_h).
\]
\ber{Now, considering the full HDG scheme \eqref{HDG equations} for diffusion,
we easily obtain a discrete $H^1$-stability result for the approximation with respect to the data $f$ by summing
the above inequality over all elements:
\[
\vertiii{(u_h,\widehat{u}_h)}_{1,{\mathcal{T}_h}}^2
=\sum_{K\in{\mathcal{T}_h}} |(u_h,\widehat{u}_h)|^2_{1,K}
\le C \sum_{K\in{\mathcal{T}_h}} \mathsf{E}_K(\bld{q}_h;u_h,\widehat{u}_h)
= C\, (f,u_h)_{\mathcal{T}_h}.
\]
This stability result can be \ber{similarly} obtained for the HDG method for
the convection-diffusion equation in which convection is treated with the standard {\it upwinding} technique.
We use this approach in Section 4 to deal with the HDG and mixed methods for the Navier-Stokes equations.
}
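Let us sketch the boundedness this yields, under the simplifying assumption of homogeneous Dirichlet data and assuming a global discrete Poincar\'e inequality of the form $\|u_h\|_{\Omega}\le C_P\,\vertiii{(u_h,\widehat{u}_h)}_{1,{\mathcal{T}_h}}$ (with $C_P$ a constant introduced here only for this sketch): by the above display and the Cauchy-Schwarz inequality,
\[
\vertiii{(u_h,\widehat{u}_h)}_{1,{\mathcal{T}_h}}^2 \le C\,(f,u_h)_{\mathcal{T}_h} \le C\,\|f\|_{\Omega}\,\|u_h\|_{\Omega}
\le C\,C_P\,\|f\|_{\Omega}\,\vertiii{(u_h,\widehat{u}_h)}_{1,{\mathcal{T}_h}},
\]
so that $\vertiii{(u_h,\widehat{u}_h)}_{1,{\mathcal{T}_h}} \le C\,C_P\,\|f\|_{\Omega}$.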
\section{Proofs of the results of Section 2}
\label{sec:proof-dh1}
In this Section, we give a proof of the properties of the stabilization spaces $M_S({\partial K})$,
and then a proof of the discrete $H^1$- and the discrete Poincar\'e-Friedrichs inequalities.
\subsection{\ber{Proof of Proposition \ref{prop:ms}}}
\ber{Let us first prove Proposition \ref {prop:ms} on the properties of the stabilization spaces $M_S({\partial K})$.}
We just prove the second case since the proofs for the other three are similar and simpler.
For this case, we have ${\nabla\cdot} {\boldsymbol V}=\EuScript{P}_{k-1}(K)$, $W=\EuScript{P}_k(K)$ and
\[
M_S=\{\widehat{w}\in L^2({\partial K}):\;
\widehat{w}|_{F^*} \in \EuScript{P}_k(F^*), \;\widehat{w}|_{{\partial K}\backslash F^*}=0\},
\]
where $F^*$ is a face of the element $K$ such that $K$ \ber{lies} on one side of the hyperplane
containing $F^*$. Hence, we have
\begin{align*}
\dim M_S &\;= \dim \EuScript{P}_k(F^*)
= \dim \EuScript{P}_k(K) - \dim \EuScript{P}_{k-1}(K)\\
&\; = \dim W-\dim {\nabla\cdot} {\boldsymbol V}
= \dim W-\dim \widetilde W
= \dim \gamma(\widetilde W^\perp).
\end{align*}
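(For completeness, the middle identity is the elementary binomial count
\[
\dim \EuScript{P}_k(F^*) \, = \, \binom{k+d-1}{d-1} \, = \, \binom{k+d}{d}-\binom{k+d-1}{d} \, = \, \dim \EuScript{P}_k(K) - \dim \EuScript{P}_{k-1}(K),
\]
where $d$ is the space dimension, so that $F^*$ has dimension $d-1$.)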
This proves the first condition for $M_S$.
To prove the second condition, we only need to show that for any function
$\widehat{w} \in \gamma(\widetilde W^\perp)$, $P_{M_S}(\widehat{w})=0$ implies $\widehat{w}=0$.
Now, let $\widehat{w}$ be a function in $\gamma(\widetilde W^\perp)$ such that $P_{M_S}(\widehat{w})=0$.
By the definition of $\gamma(\widetilde W^\perp)$, there exists a function $w\in \widetilde W^\perp$ such that
$\gamma(w) = \widehat{w}$. Hence, $P_{M_S}(\gamma(w))=0$. By the definition of $M_S$ and $W$,
we have $w = \lambda \widetilde{w}$ where $\lambda\in \EuScript{P}_1(K)$ is the linear function vanishing on $F^*$ and
$\widetilde{w}\in\EuScript{P}_{k-1}(K)= \widetilde W$. By $L^2$-orthogonality of the spaces $\widetilde W$ and $\widetilde W^\perp$, we have
\[
(w,\widetilde{w})_K = (\lambda \widetilde{w},\widetilde{w})_K=0,
\]
which immediately implies $w = 0$ by the assumption on the face $F^*$. This completes the proof
of Proposition \ref{prop:ms}.
\subsection{\ber{Proof of Theorem \ref{thm:dh1}}}
\label{subsec:thm:dh1}
\ber{Here, we prove the inequalities of Theorem \ref{thm:dh1}.
Although it is enough to prove only one since the seminorms $|(\cdot,\cdot)|_{1,K}$ and $|(\cdot,\cdot)|_{\mbox{{\rm \tiny PF}},K}$ are equivalent,} we provide a different proof for each of them, as they bring out different \ber{properties} of the
M-decompositions.
\subsubsection{Proof of the first inequality} To prove the first inequality, it is convenient to first carry out a simple integration-by-parts in the equation defining $\bld{q}_h$, \eqref{gc_hdg_o}:
\begin{align*}
\ber{(\mathrm{c}\,\bld{q}_h, \bld{v})_K =- ({\nabla} u_h, \bld{v})_K + \bintK{u_h-\widehat{u}_h}{\bld{v}\cdot\boldsymbol{n}}}\quad\forall\;\bld{v}\in {\boldsymbol V}(K).
\end{align*}
By Property (b) of an M-decomposition, we can now set $\bld{v}:= {\nabla} u_h$
to get
\begin{align*}
\|{\nabla} u_h\|_{K}^2 = &\;
-(\mathrm{c}\,\bld{q}_h, \bld{v})_K + \bintK{u_h-\widehat{u}_h}{\bld{v}\cdot\boldsymbol{n}},
\end{align*}
and conclude that
\begin{alignat*}{1}
\|{\nabla} u_h\|_{K}&
\le ({\lambda_{\mathrm{c}}^\mathrm{max}})^{1/2} \|\bld{q}_h\|_{\mathrm{c}, K}+
C_{{\nabla} W}\,{\color{blue}h_K^{-1/2}}\,\|u_h-\widehat{u}_h\|_{{\partial K}},
\\
C_{{\nabla}W} := &\;
\sup_{\bld{v}\in {\nabla}W(K)\backslash\{0\}} \frac{h_K^{1/2}\|\bld{v}\cdot\boldsymbol{n}\|_{\partial K}}{\|\bld{v}\|_K}.
\end{alignat*}
Let us now estimate the jump $u_h-\widehat{u}_h\in M({\partial K})$.
By \ber{Property} (c) of an M-decomposition, we can write
that $u_h-\widehat{u}_h = P_{\gamma\Wperp}(u_h-\widehat{u}_h) + P_{\gamma\Vperp}(u_h-\widehat{u}_h)$. Now, by the second of conditions \bld{e}_{q}ref{condition-ms}, there is a constant $C_{M_S}$ such that
\begin{alignat*}{2}
\|P_{\gamma\widetilde W^\perp}(u_h-\widehat{u}_h)\|_{{\partial K}}
\le &\;C_{M_S} \|P_{M_S}\,\big(P_{\gamma\widetilde W^\perp}(u_h-\widehat{u}_h)\big)\|_{{\partial K}}\\
\le &\;C_{M_S}\big( \|P_{M_S}\,(u_h-\widehat{u}_h)\|_{{\partial K}}
+\|P_{M_S}\,\big(P_{\gamma\widetilde\VV{}^\perp}(u_h-\widehat{u}_h)\big)\|_{{\partial K}}
\big)\\
\le &\;C_{M_S}\big( \|P_{M_S}\,(u_h-\widehat{u}_h)\|_{{\partial K}}
+\|P_{\gamma\widetilde\VV{}^\perp}(u_h-\widehat{u}_h)\|_{{\partial K}}
\big).
\end{alignat*}
It remains to estimate $\|P_{\gamma\Vperp}(u_h-\widehat{u}_h)\|_{{\partial K}}$.
Taking $\bld{v}\in \widetilde\VV{}^\perp(K)$ such that $\bld{v}\cdot\boldsymbol{n}|_{{\partial K}} = P_{\gamma\widetilde\VV{}^\perp}(u_h-\widehat{u}_h)$
in the definition of $\bld{q}_h$, and using the fact that ${\nabla} u_h\in \widetilde\VV(K)$ is $L^2$-orthogonal to
$\bld{v}\in \widetilde\VV{}^\perp(K)$, we get
\begin{alignat*}{2}
\|P_{\gamma\widetilde\VV{}^\perp}(u_h-\widehat{u}_h)\|_{{\partial K}}^2 = &\;
(\mathrm{c}\,\bld{q}_h, \bld{v})_K,
\end{alignat*}
and conclude that
\begin{align*}
& \|P_{\gamma\widetilde\VV{}^\perp}(u_h-\widehat{u}_h)\|_{{\partial K}}\le
C_{\widetilde\VV{}^\perp}({\lambda_{\mathrm{c}}^\mathrm{max}})^{1/2}\,{\color{blue}h_K^{1/2}}\,\|\bld{q}_h\|_{\mathrm{c}, K},
\\
& C_{\widetilde\VV{}^\perp} :=
\sup_{\bld{v}\in \widetilde\VV{}^\perp(K)\backslash\{0\}} \frac{\|\bld{v}\|_K}{h_K^{1/2}\|\bld{v}\cdot\boldsymbol{n}\|_{\partial K}}.
\end{align*}
The first inequality now easily follows.
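In more detail, and only to record the routine combination of the three estimates above: since $u_h-\widehat{u}_h = P_{\gamma\widetilde W^\perp}(u_h-\widehat{u}_h) + P_{\gamma\widetilde\VV{}^\perp}(u_h-\widehat{u}_h)$, the last two bounds give
\[
h_K^{-1}\,\|u_h-\widehat{u}_h\|^2_{{\partial K}} \, \le \, C\left( h_K^{-1}\|P_{M_S}(u_h-\widehat{u}_h)\|^2_{{\partial K}} + {\lambda_{\mathrm{c}}^\mathrm{max}}(K)\,\|\bld{q}_h\|^2_{\mathrm{c},K}\right) \, = \, C\,\Theta_K,
\]
and inserting this into the bound for $\|{\nabla} u_h\|_{K}$ yields $\|{\nabla} u_h\|^2_{K}\le C\,\Theta_K$ as well; adding the two estimates gives $|(u_h,\widehat{u}_h)|^2_{1,K}\le C\,\Theta_K$.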
\subsubsection{Proof of the second inequality} To prove this inequality,
it is convenient \ber{to rewrite} the equation defining
$\bld{q}_h$, \eqref{gc_hdg_o}, as follows:
\begin{align*}
(\mathrm{c}\,\bld{q}_h, \bld{v})_K - (u_h-\overline{\widehat{u}_h}^{\,{\partial K}}, \nabla\cdot \bld{v})_K + \bintK{\widehat{u}_h-\overline{\widehat{u}_h}^{\,{\partial K}}}{\bld{v}\cdot\boldsymbol{n}} = 0\quad\forall\;\bld{v}\in {\boldsymbol V}(K).
\end{align*}
By \cite[Theorem 2.4]{CockburnFuSayas16}, since $\bld{V}(K)\times W(K)$ admits an $M({\partial K})$ decomposition, we have the identity
\begin{alignat}{1}
\label{Mdec}
\{\mu\in M({\partial K}):\;\langle \mu, 1\rangle_{{\partial K}} \textcolor{black}{ = 0 } \}
=\{\bld{v}\cdot\bld{n}|_{{\partial K}}:\;
\bld{v}\in \bld{V}(K), \;\nabla\cdot\bld{v}=0\}.
\end{alignat}
This means that there is a function $\bld{v}\in \bld{V}(K)$ such that $\bld{v}\cdot\bld{n}|_{{\partial K}}= \widehat{u}_h-\overline{\widehat{u}_h}^{\,{\partial K}}$ and $\nabla\cdot\bld{v}=0$. Using this function as test function, we get
\[
\|\widehat{u}_h-\overline{\widehat{u}_h}^{\,{\partial K}}\|^2_{{\partial K}}=-(\mathrm{c}\,\bld{q}_h, \bld{v})_K,
\]
and so,
\begin{alignat*}{1}
&\|\widehat{u}_h-\overline{\widehat{u}_h}^{\,{\partial K}}\|_{{\partial K}}\le ({\lambda_{\mathrm{c}}^\mathrm{max}}(K))^{1/2}\|\bld{q}_h\|_{c,K} C_{\bld{V}\cdot \bld{n}}\, h^{1/2}_K,
\\
&
C_{\bld{V}\cdot \bld{n}}:=\sup_{\footnotesize\begin{matrix}\mu\in M({\partial K})\\
\langle\mu,1\rangle_{{\partial K}}=0
\end{matrix}}
\inf_{\footnotesize\begin{matrix}\bld{v}\in \bld{V}(K)\setminus\{0\}\\
\nabla\cdot\bld{v}=0\\
\bld{v}\cdot\bld{n}=\mu
\end{matrix}}
\frac{\|\bld{v}\|_K}{h_K^{1/2}\|\bld{v}\cdot\boldsymbol{n}\|_{\partial K}}.
\end{alignat*}
\textcolor{black}{
It remains to estimate $\|u_h-\overline{\widehat{u}_h}^{\,{\partial K}}\|_K$. We define \ber{a} test function
$\bld{v}\in \bld{V}(K)$ such that $\nabla\cdot\bld{v}=P_{\nabla\cdot\bld{V}} (u_h-\overline{\widehat{u}_h}^{\,{\partial K}})${, which we can assume to be different from zero}.
Obviously, we get
\begin{align*}
\| P_{\nabla\cdot \bld{V}}(u_h - \overline{\widehat{u}_h}^{\,{\partial K}})\|^2_{K} = & (\mathrm{c}\,\bld{q}_h, \bld{v})_K
+ \bintK{\widehat{u}_h-\overline{\widehat{u}_h}^{\,{\partial K}}}{\bld{v}\cdot\boldsymbol{n}} ,
\end{align*}
and so,
{\begin{align*}
&\| P_{\nabla\cdot \bld{V}}(u_h - \overline{\widehat{u}_h}^{\,{\partial K}})\|_{K}
\leq \big(\Vert \mathrm{c} \bld{q}_{h} \Vert_{K}
+ h^{-1/2}_K\,\|\widehat{u}_h-\overline{\widehat{u}_h}^{\,{\partial K}}\|_{{\partial K}}\big)\,C_{\nabla\cdot\bld{V}}\,h_{K},
\\
&C_{\nabla\cdot\bld{V}}:=\sup_{g\in \nabla\cdot\bld{V}(K)\setminus\{0\}}
\inf_{\footnotesize\begin{matrix}\bld{v}\in \bld{V}(K)\\
\nabla\cdot\bld{v}=g
\end{matrix}}
\frac{(\|\bld{v}\|_K+h^{1/2}_K\|\bld{v}\cdot\bld{n}\|_{{\partial K}})}{h_K \|\nabla\cdot\bld{v}\|_K}.
\end{align*}}
}
{Finally, let us} estimate $(\mathrm{Id}-P_{\nabla\cdot \bld{V}})(u_h - \overline{\widehat{u}_h}^{\,{\partial K}})$. Since this function coincides with $P_{\widetilde{W}^\perp} (u_h - \overline{\widehat{u}_h}^{\,{\partial K}})$ because $\widetilde{W}(K)=\nabla\cdot\bld{V}(K)$, we get
\begin{alignat*}{2}
\| P_{\widetilde{W}^\perp} (u_h - \overline{\widehat{u}_h}^{\,{\partial K}})\|_K
\le &\; C_K\, h^{1/2}_K\,\|P_{\gamma\Wperp} (u_h - \overline{\widehat{u}_h}^{\,{\partial K}})\|_{{\partial K}}
\\
\le &\; C_M\,C_K\, h^{1/2}_K\,\|P_{M_S} (u_h - \overline{\widehat{u}_h}^{\,{\partial K}})\|_{{\partial K}}
\\
\le &\; C_M\,C_K\, h^{1/2}_K\,(
\|P_{M_S} (u_h - \widehat{u}_h)\|_{{\partial K}}
+\|P_{M_S} (\widehat{u}_h - \overline{\widehat{u}_h}^{\,{\partial K}})\|_{{\partial K}}),
\\
\le &\; C_M\,C_K\, h^{1/2}_K\,(
\|P_{M_S} (u_h - \widehat{u}_h)\|_{{\partial K}}
+\|\widehat{u}_h - \overline{\widehat{u}_h}^{\,{\partial K}}\|_{{\partial K}}),
\end{alignat*}
and the estimate follows.
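In more detail, the three bounds above give
\[
\| u_h-\overline{\widehat{u}_h}^{\,{\partial K}}\|_K \le \| P_{\nabla\cdot \bld{V}}(u_h - \overline{\widehat{u}_h}^{\,{\partial K}})\|_{K} + \| P_{\widetilde{W}^\perp} (u_h - \overline{\widehat{u}_h}^{\,{\partial K}})\|_K
\le C\, h_K\,\Theta_K^{1/2},
\]
which, together with $h_K^{1/2}\,\|\widehat{u}_h-\overline{\widehat{u}_h}^{\,{\partial K}}\|_{{\partial K}}\le C\, h_K\, \Theta_K^{1/2}$, yields $h_K^{-2}\,|(u_h,\widehat{u}_h)|^2_{\mbox{{\rm \tiny PF}},K}\le C\,\Theta_K$, that is, the Poincar\'e-Friedrichs inequality.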
This completes the proof of Theorem \ref{thm:dh1}.
\section{Application: HDG methods for the Navier-Stokes equations}
\label{sec:ns}
In this Section, we introduce and analyze
new HDG and mixed
methods for the steady-state incompressible Navier-Stokes equation with
velocity gradient-velocity-pressure formulation described by equations \eqref{ns-equation}.
We proceed as follows. After defining the methods, we show that their approximate solution exists, is unique and satisfies an
energy-boundedness property under \ber{a} {\em smallness} assumption \ber{on} the data. We then provide results on the convergence properties.
Some of the errors involving the velocities are measured in
the norms and seminorms defined as follows.
For any $(\bld{v},\widehat{\bld{v}}) \in \bld{V}_h\times \bld{M}_h$, we set
\begin{alignat*}{1}
\vertiii{(\bld{v},\widehat{\bld{v}})}_{\ell,{\mathcal{T}_h}}^2 := \sum_{i=1}^d \sum_{K\in{\mathcal{T}_h}} |(\bld{v}_i,\widehat{\bld{v}}_i)|^2_{\ell,K}
\quad\mbox{ for }\ell=0,1,{\mbox{\rm \tiny PF}},
\end{alignat*}
where $|(\cdot,\cdot)|_{1,K}$ and $|(\cdot,\cdot)|_{\mbox{\rm \tiny PF},K}$ are defined by \eqref{localseminorms},
and
\begin{alignat*}{1}
|(\bld{v}_i,\widehat{\bld{v}}_i)|^2_{0,K}:= \|\bld{v}_i\|^2_K+ h_K(\|\widehat{\bld{v}}_i\|^2_{{\partial K}}+\|\bld{v}_i-\widehat{\bld{v}}_i\|^2_{{\partial K}}).
\end{alignat*}
\subsection{Definition of the methods}
\label{subsec:ns-hdg}
\subsubsection{The general form of the methods}
The HDG and mixed methods for \eqref{ns-equation} seek an approximation to $(\mathrm{L}, \bld{u}, p, \bld{u}|_{\mathcal{F}_h})$,
$(\mathrm{L}_h, \boldsymbol{u}_h, p_h, \widehat{\bld{u}}_h)$, in the space
${\mathcal{G}}_h\times \boldsymbol{V}_h \times \mathring{{Q}_h} \times \bld {M}_h(0)$ given by
\begin{subequations}
\label{ns-hdg-space}
\begin{alignat}{3}
{\mathcal{G}}_h:=&\;\{\mathrm{G}\in{L}^2({\mathcal{T}_h})^{d\times d}:&&\;\mathrm{G}|_K\in{\mathcal{G}}(K),&&\; K\in{\mathcal{T}_h}\},
\\
\label{ns-hdg-space-v}
\boldsymbol{V}_h:=&\;\{\bld{v}\in{L}^2({\mathcal{T}_h})^d:&&\;\bld{v}|_K\in {\boldsymbol V}(K),&&\; K\in{\mathcal{T}_h}\},
\\
\label{ns-hdg-space-q}
\mathring{{Q}_h}:=&\;\{\ber{q\in{L}^2({\mathcal{T}_h})}:&&\;q|_K\in Q(K),&&\; K\in{\mathcal{T}_h}, (q,1)_\Omega = 0\},
\\
\label{ns-hdg-space-m}
\bld M_h:=&\;\{\widehat{\bld{v}}\in{L}^2(\mathcal{F}_h)^d:&&\;\widehat{\bld{v}}|_F\in \bld M(F),&&\; F\in\mathcal{F}_h\},
\\
\bld M_h(0):=&\;\{\widehat{\bld{v}} \in \bld M_h:&&\;\widehat{\bld{v}}|_{\partial\Omega}=0\}.
\end{alignat}
\end{subequations}
where the local spaces ${\mathcal{G}}(K), {\boldsymbol V}(K), Q(K),$ and $\bld M(F)$ are suitably defined finite dimensional spaces,
and determine it as the only solution of the following weak formulation:
\begin{subequations}
\label{ns-HDG-equations}
\begin{alignat}{3}
\label{ns-HDG-equations-1}
\bint{\nu\,\mathrm{L}_h}{\mathrm{G}}+\bint{\bld u_h}{\nu\,{\bld{\nabla\cdot}} \mathrm{G}} - \bintEh{\widehat{\bld{u}}_h}{ \nu\,\mathrm{G}\, \boldsymbol{n}} & = 0, \\
\label{ns-HDG-equations-2}
\bint{\nu\,\mathrm{L}_h}{{\bld{\nabla}} \bld{v}} \ber{+ \bintEh{-\nu\,\mathrm{L}_h\,\boldsymbol{n}
+\alpha_v(\bld u_h-\widehat{\bld{u}}_h) }{\bld{v} - \widehat{\bld{v}}}\;\;\hspace{0.6cm}} \nonumber&\\
- \bint{p_h}{{\nabla\cdot} \bld{v}} + \bintEh{p_h\,\boldsymbol{n}}{\bld{v} - \widehat{\bld{v}}}\;\;\hspace{3.3cm} \nonumber&\\
-\bint{\bld u_h\otimes \bld{\beta}}{{\bld{\nabla}} \bld{v}} + \bintEh{
\ber{( \bld{\beta}
\cdot\boldsymbol{n})}\,\widehat{\bld{u}}_h
+\alpha_c(\bld u_h-\widehat{\bld{u}}_h)
}{\bld{v} - \widehat{\bld{v}}} &= \bint{\bld f}{\bld{v}},\\
\label{ns-HDG-equations-3}
-\bint{\bld u_h}{{\nabla} q}+ \bintEh{\widehat{\bld{u}}_h\cdot\boldsymbol{n}}{ q} & \ber{= 0,}
\end{alignat}
\end{subequations}
for all $(\mathrm{G}, \bld{v}, q, \widehat{\bld{v}}) \in {\mathcal{G}}_h\times \boldsymbol{V}_h \times \mathring{{Q}_h}\times \bld M_h(0)$, where
\[\alpha_v: L^2({\partial K})^d\longrightarrow L^2({\partial K})^d\;\;\; \text{ and} \;\;\;
\alpha_c: L^2({\partial K})^d\longrightarrow L^2({\partial K})^d
\]
are the {\em local stabilization operators} related to the viscous and convective parts, respectively.
To complete the definition of the method, we have to define the local spaces, the divergence-free post-processed
velocity $\bld{\beta}$, and the stabilization operators. We do this next.
\subsubsection{{The local spaces}}
\label{subsec:ns-space}
\ber{The finite element spaces are the ones used in \cite{CockburnFuQiu16} for Stokes flow.}
\ber{Let the space}
${\boldsymbol V}D\times WD\times {M}^{\mathrm{D}}(\dK)$ be such that
${\boldsymbol V}D\times WD$ admits an ${M}^{\mathrm{D}}(\dK)$-decomposition, see Definition \ref{definition:m}. Moreover, we assume that
\begin{align}
\label{require-w}
WD \text{ is a polynomial space \ber{such that}}
\sum_{i=1}^d \partial_{i}WD\subset WD.
\end{align}
Then, the local spaces ${\mathcal{G}}(K)$, $ {\boldsymbol V}(K)$, and $Q(K)$,
and the local trace space $\bld{M}({\partial K})$
are defined as follows:
\begin{subequations}
\label{ns-local-spaces}
\begin{alignat}{2}
\label{ns-local-spaces-a}
{\mathcal{G}}_i(K)\times {\boldsymbol V}_i(K)\times \bld{M}_i(K)
:= &\;{\boldsymbol V}D\times WD\times {M}^{\mathrm{D}}(\dK) &&\quad i=1,\cdots, d,\\
\label{ns-local-spaces-b}
Q(K) := &\;WD.
\end{alignat}
\end{subequations}
\subsubsection{The post-processed velocity $\bld{\beta}$}
On the element $K$, the post-processed velocity $\bld{\beta}$ is taken in a \ber{finite dimensional} space ${{\boldsymbol V}}^*(K)$ satisfying the conditions
\begin{subequations}
\label{div-space}
\begin{align}
\label{div-space-1}
&{\boldsymbol V}D\subset {{\boldsymbol V}}^*(K), {\nabla\cdot}{{\boldsymbol V}}^*(K) = WD,\\
\label{div-space-2}
&{{\boldsymbol V}}^*(K)\times WD \text{ admits an ${M}^{\mathrm{D}}(\dK)$-decomposition}.
\end{align}
\end{subequations}
This vector-valued space can be easily constructed from ${\boldsymbol V}D$, as shown in
\cite[Proposition 5.3]{CockburnFuSayas16}.
On the element $K$,
the post-processed velocity $\bld{\beta}:= {\bld P}_{\!h}(\bld{u}_h,\widehat{\bld{u}}_h)\in {\boldsymbol V}^*_h $
is defined as the function in ${\boldsymbol V}^*(K)$ such that
\begin{subequations}
\label{post-process-defn}
\begin{alignat}{2}
\label{post-process-defn-1}
({\bld P}_{\!h}(\bld{u}_h,\widehat{\bld{u}}_h), \bld{v})_K = &\;
(\bld{u}_h, \bld{v})_K&&\quad \forall\; \bld{v}\in \widetilde{{\boldsymbol V}^*}(K),\\
\label{post-process-defn-2}
\bintK{{\bld P}_{\!h}(\bld{u}_h,\widehat{\bld{u}}_h)\cdot\boldsymbol{n}}{\widehat{v}}
= &\;
\bintK{\widehat{\bld{u}}_h\cdot\boldsymbol{n}}{\widehat{v}}
&&\quad\forall\; \widehat{v}\in {M}^{\mathrm{D}}(\dK).
\end{alignat}
\end{subequations}
Here $\widetilde{{\boldsymbol V}^*}(K):= {\nabla} WD\oplus\{\bld{v}\in{\boldsymbol V}^*(K):\;
{\nabla\cdot} \bld{v}=0,\;\bld{v}\cdot\boldsymbol{n}|_{{\partial K}} = 0\}$.
\ber{ We gather the main properties of this mapping in the next result which we prove in Appendix \ref{sec:A}.}
\begin{proposition}
\label{lemma:div-proj}
Let $(\bld v,\widehat{\bld{v}})\in \boldsymbol{V}_h\times \bld M_h$. Then, for any element $K\in{\mathcal{T}_h}$, we have
\begin{alignat*}{2}
\vertiii{({\bld P}_{\!h}{(\bld v,\widehat{\bld{v}}),\ave{{\bld P}_{\!h}{(\bld v,\widehat{\bld{v}})}})}}_{\ell,K}\le &\;
C\,\vertiii{(\bld v,\widehat{\bld{v}})}_{\ell,K}&&\quad
\text{ for } \ell = 0,1,\\
\|{{\bld P}_{\!h}{(\bld v,\widehat{\bld{v}})}}\|_{\infty,K}\le &\;
C\,\vertiii{(\bld v,\widehat{\bld{v}})}_{\infty,K},
\end{alignat*}
with a constant $C$ depending only on the space ${\boldsymbol V}(K)\times \bld M({\partial K})$ and the shape regularity of the element
$K$.
Moreover, if $(\bld u_h,\widehat{\bld{u}}_h)\in \boldsymbol{V}_h\times \bld M_h(0)$ satisfies the weak incompressibility condition given by equation
\eqref{ns-HDG-equations-3},
then
\begin{alignat*}{2}
{\bld P}_{\!h}(\bld u_h,\widehat{\bld{u}}_h)\in H(\mathrm{div}, \Omega) \text{ and }
{\nabla\cdot} {\bld P}_{\!h}(\bld u_h,\widehat{\bld{u}}_h)=0.
\end{alignat*}
\end{proposition}
\subsubsection{The stabilization operators}
\label{subsec:ns-stabilization}
\ber{For the convective stabilization operator, we take the choice leading to the classic upwinding:}
\begin{subequations}
\begin{alignat}{2}
\label{stabilization-n}
\alpha_c (\widehat{\bld{v}}) := \max\{\bld{\beta}\cdot \boldsymbol{n}, 0\}\,\widehat{\bld{v}} \quad \forall\; \widehat{\bld{v}} \in L^2({\partial K})^d,
\end{alignat}
where $\bld{\beta}={\bld P}_{\!h}(\bld{u}_h,\widehat{\bld{u}}_h)$ is given in \eqref{post-process-defn}.
For the viscous stabilization operator, we take
\begin{alignat}{2}
\label{stabilization-s}
\alpha_v (\widehat{\bld{v}}) := \frac{\nu}{h_K} P_{\bld{M_S}}(\widehat{\bld{v}}) \quad \forall\; \widehat{\bld{v}} \in L^2({\partial K})^d,
\end{alignat}
\end{subequations}
where $P_{\bld{M_S}}$ is the projection onto the space $\bld{M_S}({\partial K})$, whose
$i$-th component is taken to be $M_S^\mathrm{D}({\partial K})$.
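Let us illustrate why the choice \eqref{stabilization-n} indeed yields the classic upwinding; this is simply an unpacking of the definition. On a face of ${\partial K}$, the convective part of the numerical flux in \eqref{ns-HDG-equations-2} is $(\bld{\beta}\cdot\boldsymbol{n})\,\widehat{\bld{u}}_h+\alpha_c(\bld u_h-\widehat{\bld{u}}_h)$, and so
\[
(\bld{\beta}\cdot\boldsymbol{n})\,\widehat{\bld{u}}_h+\max\{\bld{\beta}\cdot\boldsymbol{n},0\}\,(\bld u_h-\widehat{\bld{u}}_h)
=\begin{cases}
(\bld{\beta}\cdot\boldsymbol{n})\,\bld u_h &\text{if } \bld{\beta}\cdot\boldsymbol{n}\ge 0\ \text{(outflow)},\\
(\bld{\beta}\cdot\boldsymbol{n})\,\widehat{\bld{u}}_h &\text{if } \bld{\beta}\cdot\boldsymbol{n}< 0\ \text{(inflow)},
\end{cases}
\]
that is, the flux takes the trace of $\bld u_h$ from inside $K$ on outflow faces and the single-valued trace $\widehat{\bld{u}}_h$ on inflow faces.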
\subsection{Existence, uniqueness and boundedness}
\label{subsec:ns-discrete-h1}
Now that we have completed the definition of the methods, we must ask ourselves if the approximate solutions actually exist and are unique.
The next result shows that this is the case under a standard {\em smallness} condition on the data.
\begin{theorem}[Existence, uniqueness and boundedness]
\label{thm:discrete-h1-ns}
If $\nu^{-2}\|\boldsymbol{f}\|_\Omega$ is small enough, then the HDG method \eqref{ns-HDG-equations} has a unique solution.
Furthermore, for the component
$(\bld{u}_h,\widehat{\bld{u}}_h)\in \boldsymbol{V}_h\times \bld{M}_h(0)$ of the approximate solution
the following stability bound is satisfied:
\begin{eqnarray*}
\vertiii{(\boldsymbol{u}_h,\widehat{\boldsymbol{u}}_h)}_{1,{\mathcal{T}_h}}\le C\nu^{-1}\,\|\boldsymbol{f}\|_\Omega,
\end{eqnarray*}
for a constant $C$ that depends only on the finite element
spaces, the shape-regularity of the mesh, and the domain.
\end{theorem}
\subsection{Convergence properties}
\label{subsec:ns-error}
Having shown that the approximate solutions are well defined, we next measure how well they approximate the exact solution by
comparing them with suitably chosen projections of the exact solution.
\subsubsection{Projections of the errors}
\label{subsubsec:projs_ns}
Let us define the projections we are going to use in our a priori error analysis.
We denote $P_{\mathcal{G}}$, $P_{\boldsymbol V}$, $P_{Q}$, $P_{\bld{M}}$ to be the $L^2$-projections onto
${\mathcal{G}}_h$, $\boldsymbol{V}_h$, $\mathring{{Q}_h}$, and $\bld{M}_h$. We also define the projection $\Pi_Ww$ into the space $\boldsymbol{V}_h$ as follows. On the element
$K$, $\Pi_Ww \bld u\in \bld{V}(K)$ is defined as follows:
\begin{subequations}
\label{bu-projection}
\begin{alignat}{2}
\label{bu-projection-1}
(\Pi_Ww \bld u, \bld{v})_K = &\; (\bld u,\bld{v})_K &&\;\;\forall\; \bld{v}\in {\bld{\nabla\cdot}} {\mathcal{G}}(K),\\
\label{bu-projection-2}
\bintK{\Pi_Ww \bld u}{\widehat{\bld{v}}} = &\; \bintK{\bld u}{\widehat{\bld{v}}} &&\;\;\forall\; \widehat{\bld{v}}\in \bld {M_S}.
\end{alignat}
\end{subequations}
Our strategy is to first estimate the size of the projection of the errors
\begin{alignat*}{3}
\boldsymbol{\mathsf{e}}g = P_\GG \mathrm{L} - \mathrm{L}_h,\;\;& e_{u}u = \Pi_Ww \bld u - \bld u_h, &&\;\;
{e}_{p} = P_{Q} p - p_h, && \;\;e_{u}uhat = P_Mm \bld u - \widehat{\bld{u}}_h,
\end{alignat*}
and then use the triangle inequality to estimate the size of the actual errors.
To do that, we need to use the well-known approximation properties
of the various $L^2$-projections. We also need the approximation properties of the
projection $\Pi_Ww$ which we show depend on the $L^2$-projection $P_{\boldsymbol V}$.
The following result, \ber{proven in Appendix \ref{sec:B},} is a direct consequence of
the assumption on the stabilization space $\bld{M_S}$.
\begin{proposition}
\label{lemma:projection-b}
For the projection $\Pi_Ww \bld u\in {\boldsymbol V}(K)$ defined above, we have
\begin{alignat*}{2}
\|\Pi_Ww \bld u - \bld u\|_K\le &\;C\, \left(\|P_{{\boldsymbol V}} \bld u - \bld u\|_K+h_K^{1/2}\|P_{{\boldsymbol V}} \bld u -\bld u\|_{\partial K} \right)\\
\|\Pi_Ww \bld u\|_{\infty,K}\le &\;C\, \|\bld u\|_{\infty,K},
\end{alignat*}
where the constant $C$ only depends on the spaces ${\boldsymbol V}(K)$ and $\bld{M_S}(K)$.
\end{proposition}
\subsubsection{A priori error estimates}
Next, we state our main convergence result.
\begin{theorem}
\label{thm:ns-error1}
Let $(\mathrm{L}_h,\bld u_h,p_h, \widehat{\bld{u}}_h)\in {\mathcal{G}}_h\times \boldsymbol{V}_h\times \mathring{{Q}_h}\times \bld{M}_h(0)$ be the numerical solution of
\eqref{ns-HDG-equations}. Assume that
\begin{alignat*}{2}
\EuScript{P}_k(K)^{d\times d}\times \EuScript{P}_k(K)^d\times \EuScript{P}_k(K)&\subset
{\mathcal{G}}(K)\times {\boldsymbol V}(K)\times Q(K)&&\quad\forall\;K\in {\mathcal{T}_h},
\\
\EuScript{P}_k(F)^d&\subset \bld M(F)&&\quad\forall\;F\in \mathcal{F}_h.
\end{alignat*}
Then, for $\nu^{-2}\|\bld{f}\|_{\Omega}$ and $\nu^{-1}\|\bld{u}\|_{\infty,\Omega}$ sufficiently small, we have
\begin{align}
\label{est-1}
\|\boldsymbol{\mathsf{e}}g\|_{{\mathcal{T}_h}} + \|e ^p\|_{{\mathcal{T}_h}}
+\vertiii{(e_{u}u,e_{u}uhat)}_{1,{\mathcal{T}_h}}
+h^{-1}\,\vertiii{(e_{u}u,e_{u}uhat)}_{\mbox{{\rm \tiny PF}},{\mathcal{T}_h}}
+\|e_{u}\|_{\mathcal{T}_h} \le &\;C\, h^{k+1}
\,
\Xi,
\end{align}
where
$
\Xi:=\|\mathrm{L}\|_{k+1}+
\nu^{-1}\, \|\bld{\beta}\|_{\infty,\Omega}\,\|\bld u\|_{k+1}+
\nu^{-1}\,\|p\|_{k+1}
$
and the constant $C$ only depends on the finite element
spaces, the shape-regularity of the mesh, and the domain $\Omega$.
Moreover, if
$
\nu^{-1}\|\nabla \bld{u}\|_{\Omega}$ is small enough, $\bld{u}\in \boldsymbol{W}^{1,\infty}(\Omega)$ and
the regularity estimate in {\rm \cite[(2.3) ]{CesmeliogluCockburnQiu17}} holds, then
\begin{align}
\label{est-2}
\|e_{u}\|_{\Omega}\leq C\, h^{k+2} \quad \forall k\geq 1.
\end{align}
Finally, if $\bld{u}_h^{*}\in H(\rm div,\Omega)$ is the post-processed approximate velocity
introduced in {\rm \cite [(2.9)]{CockburnFuQiu16}}, then we have
$\nabla \cdot \bld{u}_h^{*}=0$ in $\Omega$, and
\begin{align}
\label{est-3}
\|\bld{u}_h^{*}-\bld{u}\|_{\Omega}\leq C\,h^{k+2}\quad \forall k\geq 1.
\end{align}
\end{theorem}
Note that this result gives optimal convergence of the velocity gradient $\mathrm{L}_h$,
the velocity $\bld{u}_h$ and the pressure $p_h$ approximations. It also gives two superconvergence results. The first is the one of the projections of the error in the velocity, which are of order $k+1$ for $\vertiii{(e_{u}u,e_{u}uhat)}_{1,{\mathcal{T}_h}}$ and of order $k+2$ for $\vertiii{(e_{u}u,e_{u}uhat)}_{\mbox{{\rm \tiny PF}},{\mathcal{T}_h}}$. The second is also for the projection of the error in the velocity. The only difference is that the
first superconvergence estimate does not say anything about the convergence properties of the local averages, whereas the second does. Moreover, the second superconvergence result allows the local postprocessing of the velocity $\bld{u}_h^{*}$ to be an $\bld{H}(div)$, globally divergence-free approximation to the velocity converging faster than the original approximation {$\bld{u}_h$}.
\section{\ber{Proofs} of the results of Section 4}
\label{sec:proof-ns}
In this Section, we prove
Theorem \ref{thm:discrete-h1-ns} \ber{on the existence, uniqueness and boundedness of the approximate solution, and} the convergence properties of
Theorem \ref{thm:ns-error1}.
\ber{We would like to emphasize that, due to the existence of the discrete-$H^1$ stability results in Theorem 2.3,
the proofs in this section can be considered as a word-by-word ``translation'' of the corresponding
proofs in \cite{CesmeliogluCockburnQiu17}, where, for the first time, a superconvergent HDG method was analyzed for
the incompressible Navier-Stokes equations.}
To simplify the notation, we write
$
A\lesssim B
$
to indicate that $A\le C\, B$ with a constant $C$ that only depends on the
finite element spaces, the shape-regularity of the mesh and the domain.
\subsection{Preliminaries}
\subsection*{\ber{Rewriting the method in a compact form}}
To facilitate the analysis, we rewrite the formulation of the methods under consideration by using the bilinear form associated to the
Stokes system,
\begin{subequations}
\label{ns-forms}
\begin{alignat}{2}
B_h(\mathrm{L},\bld u,p, \bld{\widehat{u}};\; \mathrm{G}, \bld{v}, q, \widehat{\bld{v}})
:= &\;
\nu(\mathrm{L}, \mathrm{G})_{\mathcal{T}_h} \ber{+ \nu(\bld u,{\bld{\nabla\cdot}} \mathrm{G})_{\mathcal{T}_h} - \bintEh{\bld{\widehat{u}}}{\nu\,\mathrm{G}\,\boldsymbol{n}}}\nonumber
\\
& \;+(\nu\,\mathrm{L},{\bld{\nabla}} \bld{v})_{\mathcal{T}_h} + \bintEh{-\nu\,\mathrm{L}\, \boldsymbol{n}
+\alpha_v(\bld u-\bld{\widehat{u}})}{\bld{v}-\widehat{\bld{v}}}\nonumber\\
& \;-(p,{\nabla\cdot} \bld{v})_{\mathcal{T}_h} + \bintEh{p\,\boldsymbol{n}}{\bld{v}-\widehat{\bld{v}}}\nonumber\\
&\; -(\bld u, {\nabla} q)_{\mathcal{T}_h} + \bintEh{\bld{\widehat{u}}\cdot\boldsymbol{n}}{q},
\intertext{and the bilinear form associated to the convection,}
\mathcal{O}_h(\bld{\beta}; (\bld u,\bld{\widehat{u}}), (\bld{v}, \widehat{\bld{v}}))
:= &\; -(\bld u\otimes \bld{\beta}, {\bld{\nabla}} \bld{v})_{\mathcal{T}_h} \nonumber\\
&\; + \bintEh{\ber{(\bld{\beta}\cdot\boldsymbol{n})}\;\bld{\widehat{u}}+
\alpha_c(\bld u-\bld{\widehat{u}})}{\bld{v}-\widehat{\bld{v}}},
\end{alignat}
\end{subequations}
where $(\mathrm{L},\bld u,p, \bld{\widehat{u}})$ and $(\mathrm{G}, \bld{v}, q,\widehat{\bld{v}})$ lie in the space
$\left({H^1}({\mathcal{T}_h})^{d\times d}+{\mathcal{G}}_h\right) \times H^1({\mathcal{T}_h})^d\times H^1({\mathcal{T}_h})\times L^2(\mathcal{F}_h; 0)^d$, and
{\color{blue}$\bld{\beta}\in {\boldsymbol V}_{\!\!\beta}\cap {\boldsymbol V}^*_h$} { where
\begin{alignat*}{3}
{\boldsymbol V}_{\!\!\beta}:=&\;\{\bld{v}\in{H}(\mathrm{div},\Omega):&&\;{\nabla\cdot} \bld{v} = 0, \bld{v}\cdot\boldsymbol{n}|_{{\partial K}}\in L^2({\partial K}),\; K\in{\mathcal{T}_h}\},\\
{\boldsymbol V}^*_h:=&\;\{\bld{v}\in{L}^2({\mathcal{T}_h})^d:&&\;\bld{v}|_K\in {\boldsymbol V}^*(K),\; K\in{\mathcal{T}_h}\}.
\end{alignat*}
}
Now, the equations defining the HDG method \eqref{ns-HDG-equations} can be recast as
\begin{align}
\label{ns-form-equation}
B_h(\mathrm{L}_h,\bld u_h,p_h, \bld{\widehat{u}}_h;\; \mathrm{G},\bld{v},q,\widehat{\bld{v}}) +
\mathcal{O}_h(\bld\beta; (\bld u_h, \widehat{\bld{u}}_h), (\bld{v},\widehat{\bld{v}})) & = (\bld f,\bld{v})_{\mathcal{T}_h},
\end{align}
with $\bld{\beta}= {\bld P}_{\!h}(\bld{u}_h,\widehat{\bld{u}}_h)$ defined in \eqref{post-process-defn}.
Consistency of the HDG method \eqref{ns-HDG-equations} implies that,
for the exact solution $(\mathrm{L},\bld u, p)\in {H^1}(\Omega)^{d\times d}\times H^{2}(\Omega)^d
\times H^1(\Omega)$ of \eqref{ns-equation}
(assuming $H^2$-regularity),
\begin{align}
\label{ns-form-equation-ex}
B_h(\mathrm{L},\bld u,p, \bld{{u}};\; \mathrm{G},\bld{v},q,\widehat{\bld{v}}) +
\mathcal{O}_h(\bld u; (\bld u, \bld u), (\bld{v},\widehat{\bld{v}})) & = (\bld f,\bld{v})_{\mathcal{T}_h}
\end{align}
for all $(\mathrm{G},\bld{v},q,\widehat{\bld{v}})\in{\mathcal{G}}_h\times \boldsymbol{V}_h\times \mathring{{Q}_h}\times \bld{M}_h(0)$.
\subsection*{An inequality for the viscous energy}
Next, we obtain a key inequality for the viscous energy associated with the discrete Stokes operator of the HDG method \eqref{ns-HDG-equations},
namely,
\begin{align}
\label{ns-energy}
\mathsf{E}(\mathrm{L},\bld u, \bld{\widehat{u}}): = &\;
B_h(\mathrm{L},\bld u,p, \bld{\widehat{u}};\; \mathrm{L},\bld u,p, \bld{\widehat{u}})\boldsymbol{n}onumber\\
=&\;
\nu(\mathrm{L}, \mathrm{L})_{\mathcal{T}_h} +
\bintEh{\frac{\nu}{h_K}P_{\bld{M_S}}(\bld u-\bld{\widehat{u}})}{P_{\bld{M_S}}(\bld u-\bld{\widehat{u}})}.
\end{align}
\begin{lemma}
\label{lemma:ns-energy}
Let $(\mathrm{L}_h,\bld u_h,p_h, {\widehat{\bld{u}}_h})\in {\mathcal{G}}_h\times\boldsymbol{V}_h\times \mathring{{Q}_h}\times\bld{M}_h(0)$
be the numerical solution of the linear system
\eqref{ns-HDG-equations} with a prescribed velocity $\bld{\beta}\in {\boldsymbol V}_{\!\!\beta}$.
Then we have
\[
\mathsf{E}(\mathrm{L}_h,\bld u_h, \widehat{\bld{u}}_h)
\le (\bld f, \bld u_h)_{\mathcal{T}_h}.
\]
\end{lemma}
\begin{proof} By equation \eqref{ns-form-equation} with $(\mathrm{G},\bld v,q, \widehat{\bld{v}}):=(\mathrm{L}_h,\bld u_h,p_h, \widehat{\bld{u}}_h)$, we get
the energy identity
\[
\mathsf{E}(\mathrm{L}_h,\bld u_h, \widehat{\bld{u}}_h) + \mathcal{O}_h(\bld{\beta};(\bld u_h,\widehat{\bld{u}}_h),(\bld u_h,\widehat{\bld{u}}_h))
= (\bld f,\bld{u}_h)_{\mathcal{T}_h},
\]
and since
$
\mathcal{O}_h(\bld{\beta};(\bld u_h,\widehat{\bld{u}}_h),(\bld u_h,\widehat{\bld{u}}_h)) = \;
\frac{1}{2}\bintEh{|\bld{\beta}\cdot\boldsymbol{n}|(\bld u_h-\widehat{\bld{u}}_h)}{\bld u_h-\widehat{\bld{u}}_h}\ge 0,
$
the inequality follows. This completes the proof.
\end{proof}
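For the reader's convenience, here is a brief sketch of why the identity for $\mathcal{O}_h(\bld{\beta};(\bld u_h,\widehat{\bld{u}}_h),(\bld u_h,\widehat{\bld{u}}_h))$ used in the proof above holds; it only uses that ${\nabla\cdot}\bld{\beta}=0$, that $\bld{\beta}\cdot\boldsymbol{n}$ and $\widehat{\bld{u}}_h$ are single-valued on interior faces, and that $\widehat{\bld{u}}_h=0$ on $\partial\Omega$. Integrating by parts on each element,
\[
(\bld u_h\otimes\bld{\beta},{\bld{\nabla}}\bld u_h)_K=\tfrac{1}{2}\bintK{(\bld{\beta}\cdot\boldsymbol{n})\,\bld u_h}{\bld u_h},
\]
and, writing $\bld u_h=\widehat{\bld{u}}_h+(\bld u_h-\widehat{\bld{u}}_h)$ on ${\partial K}$, the integrand of the resulting boundary terms becomes
\[
-\tfrac{1}{2}(\bld{\beta}\cdot\boldsymbol{n})|\bld u_h|^2+(\bld{\beta}\cdot\boldsymbol{n})\,\widehat{\bld{u}}_h\cdot(\bld u_h-\widehat{\bld{u}}_h)+\max\{\bld{\beta}\cdot\boldsymbol{n},0\}\,|\bld u_h-\widehat{\bld{u}}_h|^2
=-\tfrac{1}{2}(\bld{\beta}\cdot\boldsymbol{n})|\widehat{\bld{u}}_h|^2+\tfrac{1}{2}|\bld{\beta}\cdot\boldsymbol{n}|\,|\bld u_h-\widehat{\bld{u}}_h|^2.
\]
When summing over all element boundaries, the first term on the right-hand side adds up to zero, and only the nonnegative upwind term remains.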
\subsection*{The new discrete inequalities} Next, we relate the viscous energy of the discrete Stokes operator with our new discrete inequalities of
Theorem \ref{thm:dh1}.
\begin{theorem}[Global, discrete $\bld{H}^1$- and Poincar\'e-Friedrichs inequalities]
\label{thm:dh1-global}
Let $(\mathrm{r}_h,\bld z_h,{\widehat{\bld z}_h})\in {\mathcal{G}}_h\times W_h\times M_h$ satisfy
\vskip-.5truecm
\[
(\mathrm{r}_h, \mathrm{G})_{\mathcal{T}_h} - (\bld z_h, {\bld{\nabla\cdot}}\mathrm{G})_{\mathcal{T}_h} + \bintEh{\widehat{\bld z}_h}{\mathrm{G}\,\boldsymbol{n}} = 0\quad \forall\;\mathrm{G}\in {\mathcal{G}}_h.
\]
Then,
\vskip-.5truecm
\begin{alignat*}{2}
\vertiii{(\bld{z}_h,\widehat{\bld{z}}_h)}_{1,{\mathcal{T}_h}}^2&\le C\,\Theta_h &&\quad\mbox{{\rm ($\bld{H}^1$)},}
\\
h^{-2}\, \vertiii{(\bld{z}_h,\widehat{\bld{z}}_h)}_{\mbox{{\rm \tiny PF}},{\mathcal{T}_h}}^2&\le C\,\Theta_h&&\quad\mbox{{\rm (Poincar\'e-Friedrichs)}},
\end{alignat*}
\vskip-.5truecm
where
\vskip-.5truecm
\[
\Theta_h:=\sum_{K\in{\mathcal{T}_h}} ( \|\mathrm{r}_h\|_{K}^2 +
h_K^{-1}\|P_{\bld{M}_S}(\bld{z}_h-\widehat{\bld{z}}_h)\|_{\partial K}^2)= \nu^{-1}\, \mathsf{E}(\mathrm{r}_h,\bld z_h, \widehat{\bld{z}}_h).
\]
\noindent Here, the constant $C$ only depends on the finite element spaces ${\boldsymbol V}(K)$, $W(K)$ and $M_S({\partial K})$,
and on the shape-regularity properties of the elements $K\in{\mathcal{T}_h}$.
\end{theorem}
\begin{proof} This result follows from the local discrete inequalities of Theorem \ref{thm:dh1}.
For $i=1,\dots,d$, let $\mathrm{\mathrm{r}}_i$ denote the $i$-th row of the matrix $\mathrm{r}$, and let $\bld{v}_i$ denote the $i$-th component of the vector $\bld{v}$. Then,
by the choice of the local spaces \eqref{ns-local-spaces-a}, we have that, on the element $K$,
\[
((\mathrm{r}_h)_i, (\bld z_h)_i, (\widehat{\bld z}_h)_i) \in
{\boldsymbol V}D\times WD\times {M}^{\mathrm{D}}(\dK),
\]
and since ${\boldsymbol V}D\times WD$ admits an ${M}^{\mathrm{D}}(\dK)$-decomposition,
we can apply Theorem \ref{thm:dh1} with $\mathrm{c}=\mathrm{Id}$ and
$(\bld{q}_h,u_h,\widehat{u}_h):=((\mathrm{r}_h)_i, (\bld z_h)_i, (\widehat{\bld z}_h)_i)$. The inequalities now follow by summing over all elements $K\in{\mathcal{T}_h}$ and then
over the components $i=1,\dots,d$. This completes the proof.
\end{proof}
\subsection*{Properties of the convective form $\mathcal{O}_h$}
In the next result, we gather some properties of the convective form $\mathcal{O}_h$.
\begin{lemma}[Properties of the nonlinear term $\mathcal{O}_h$ {\rm \cite[Proposition 3.4, Proposition 3.5]{CesmeliogluCockburnQiu17}}]
\label{lemma:OO}
For any $(\bld{v}_h,\bld{\widehat{v}}_h)\in \boldsymbol{V}_h\times \bld{M}_h(0)$, we have
\begin{subequations}
\begin{align}
\label{oh-lipschitz}
|\mathcal{O}_h(\bld{\beta};(\bld{u},\bld{\widehat{u}}), (\bld{v}_h,\widehat{\bld{v}}))|
\lesssim \vertiii{(\bld{\beta},\ave{\bld{\beta}})}_{1,{\mathcal{T}_h}}
\vertiii{(\bld{u},\bld{\widehat{u}})}_{1,{\mathcal{T}_h}}
\vertiii{(\bld{v}_h,\bld{\widehat{v}}_h)}_{1,{\mathcal{T}_h}},
\end{align}
for all $\bld{\beta}\in \boldsymbol{V}_h^*$ and
$(\bld{u},\bld{\widehat{u}})\in \boldsymbol{V}_h\times \bld{M}_h(0)$,
\begin{align}
\label{oh-linf-1}
|\mathcal{O}_h(\bld{\beta};(\bld{u},\bld{\widehat{u}}), (\bld{v}_h,\widehat{\bld{v}}))|
\lesssim \|\bld{\beta}\|_{\infty,\Omega}
\vertiii{(\bld{u},\bld{\widehat{u}})}_{0,{\mathcal{T}_h}}
\vertiii{(\bld{v}_h,\bld{\widehat{v}}_h)}_{1,{\mathcal{T}_h}},
\end{align}
for all $\bld{\beta}\in L^\infty(\Omega)^d\cap\boldsymbol{V}_h^*$ and
$(\bld{u},\bld{\widehat{u}})\in H^1({\mathcal{T}_h})^d\times L^2(\mathcal{F}_h,0)^d$,
and
\begin{align}
\label{oh-linf-2}
|\mathcal{O}_h(\bld{\beta};(\bld{u},\bld{\widehat{u}}), (\bld{v}_h,\widehat{\bld{v}}))
-\mathcal{O}_h(\bld{\gamma};(\bld{u},\bld{\widehat{u}}), (\bld{v}_h,\widehat{\bld{v}}))|\hspace{2cm}\nonumber\\
\lesssim \vertiii{(\bld{\beta}-\bld{\gamma},0)}_{0,{\mathcal{T}_h}}
\vertiii{(\bld{u},\bld{\widehat{u}})}_{\infty,{\mathcal{T}_h}}
\vertiii{(\bld{v}_h,\bld{\widehat{v}}_h)}_{1,{\mathcal{T}_h}},
\end{align}
for all $\bld{\beta}\in H^1({\mathcal{T}_h})^d+\boldsymbol{V}_h^*$ and
$(\bld{u},\widehat{\bld{u}})\in L^\infty({\mathcal{T}_h})^d\times L^\infty(\mathcal{F}_h)^d$.
\end{subequations}
\end{lemma}
\subsection{\ber{Proof of Theorem \ref{thm:discrete-h1-ns}}}
Now we are ready to prove
the existence and uniqueness of the approximation in Theorem \ref{thm:discrete-h1-ns}.
The proof is almost identical to that in \cite[Section 5]{CesmeliogluCockburnQiu17}.
We use a Banach fixed-point theorem by constructing a
contraction mapping $\mathcal{F}: Z_h\rightarrow Z_h$, where
\[
Z_h:=\{(\bld{v},\widehat{\bld{v}})\in \boldsymbol{V}_h\times \bld M_h(0):
(\bld{v}_h,{\nabla} q)_{\mathcal{T}_h}-\bintEh{\widehat{\bld{v}}\cdot\boldsymbol{n}}{q}=0\;\;\forall\; q\in \mathring{{Q}_h}\}.
\]
Let us show that \ber{there} is a ball $K_h$ inside $Z_h$ such that \ber{$\mathcal{F}$} maps $K_h$ into $K_h$.
For a pair $(\bld w_h,\widehat{\bld{w}}_h)\in Z_h$, the mapping is defined by
$\mathcal{F}(\bld w_h,\widehat{\bld{w}}_h):=(\bld{u}_h,\widehat{\bld{u}}_h)$
with $(\bld{u}_h,\widehat{\bld{u}}_h)$ being part of the numerical solution to the linear system \eqref{ns-HDG-equations} with
$\bld{\beta}={\bld P}_{\!h}(\bld w_h,\widehat{\bld{w}}_h)$.
By Proposition \ref{lemma:div-proj}, we have that $\bld{\beta}\in {\boldsymbol V}_{\!\!\beta}$.
Then,
\begin{alignat*}{2}
\vertiii{(\bld{u}_h,\widehat{\bld{u}}_h)}_{1,{\mathcal{T}_h}}^2
&\lesssim\, \nu^{-1} \mathsf{E}(\mathrm{L}_h,\bld u_h, \widehat{\bld{u}}_h)
&&\quad\mbox{ by Theorem \ref{thm:dh1-global},}
\\
&\lesssim\, \nu^{-1}(\bld{f}, \bld{u}_h)_{\mathcal{T}_h}
&&\quad\mbox{ by Lemma \ref{lemma:ns-energy},}
\\
&\lesssim\, \nu^{-1} \|\bld{f}\|_{\mathcal{T}_h}\, \|\bld{u}_h\|_{\mathcal{T}_h}
\\
&\lesssim\, \nu^{-1}\|\bld{f}\|_{\mathcal{T}_h}\,\vertiii{(\bld{u}_h,\widehat{\bld{u}}_h)}_{1,{\mathcal{T}_h}},
\end{alignat*}
and we get
\[
\vertiii{(\bld{u}_h,\widehat{\bld{u}}_h)}_{1,{\mathcal{T}_h}}\lesssim \nu^{-1}\|\bld f\|_{\mathcal{T}_h}.
\]
Then, defining
\[
K_h:=\{(\bld{v},\widehat{\bld{v}})\in Z_h:\;\;\vertiii{(\bld{v},\widehat{\bld{v}})}_{1,{\mathcal{T}_h}}\le C_{\mathrm{sm}}\,\nu^{-1}\|\bld f\|_{\mathcal{T}_h}\},
\]
with a positive constant $C_{\mathrm{sm}}$ {\color{blue} big} enough, we conclude that $\mathcal{F}$ maps $K_h$ into itself.
Now we only have to show that $\mathcal{F}$ is a contraction in $K_h$.
Set $(\bld{u}_h^1,\widehat{\bld{u}}_h^1):=\mathcal{F}(\bld{w}_h^1,\widehat{\bld{w}}_h^1)$ and
$(\bld{u}_h^2,\widehat{\bld{u}}_h^2):=\mathcal{F}(\bld{w}_h^2,\widehat{\bld{w}}_h^2)$ with
$(\bld{w}_h^i,\widehat{\bld{w}}_h^i)\in K_h$ for $i=1,2$.
Now, let $(\mathrm{L}_h^i, \bld{u}_h^i, p_h^i, \widehat{\bld{u}}_h^i)$ be the solution to \eqref{ns-HDG-equations} with
$\bld{\beta}^i: = {\bld P}_{\!h}(\bld w_h^i,\widehat{\bld w}_h^i)$. Using ${\mathrm{d}}_{L}:=\mathrm{L}_h^1-\mathrm{L}_h^2$ and similar definitions for
${\mathrm{d}}_{u}, {\mathrm{d}}_{p},{\mathrm{d}}_{u}hat, {\mathrm{d}}_{{\beta}}, {\mathrm{d}}_{w}$, and ${\mathrm{d}}_{w}hat$, and the fact that
equation \eqref{ns-form-equation} is satisfied for $i=1,2$,
we conclude that
\begin{alignat*}{2}
\mathsf{E}({\mathrm{d}}_{L}, {\mathrm{d}}_{u}, {\mathrm{d}}_{u}hat) = &\;
-\mathcal{O}_h(\bld{\beta}^1;(\bld{u}_h^1, \widehat{\bld{u}}_h^1),({\mathrm{d}}_{u},{\mathrm{d}}_{u}hat))
+\mathcal{O}_h(\bld{\beta}^2;(\bld{u}_h^2, \widehat{\bld{u}}_h^2),({\mathrm{d}}_{u},{\mathrm{d}}_{u}hat))
\\
= &\;
-\mathcal{O}_h({\mathrm{d}}_{{\beta}};(\bld{u}_h^1, \widehat{\bld{u}}_h^1),({\mathrm{d}}_{u},{\mathrm{d}}_{u}hat))
-\mathcal{O}_h(\bld{\beta}^2;({\mathrm{d}}_{u}, {\mathrm{d}}_{u}hat),({\mathrm{d}}_{u},{\mathrm{d}}_{u}hat))
\\
\le &\;
-\mathcal{O}_h({\mathrm{d}}_{{\beta}};(\bld{u}_h^1, \widehat{\bld{u}}_h^1),({\mathrm{d}}_{u},{\mathrm{d}}_{u}hat)).
\end{alignat*}
By Lemma \ref{lemma:OO}, we easily get that
\begin{alignat*}{2}
\mathsf{E}({\mathrm{d}}_{L}, {\mathrm{d}}_{u}, {\mathrm{d}}_{u}hat) \lesssim &\;
\vertiii{({\mathrm{d}}_{{\beta}}, \ave{{\mathrm{d}}_{{\beta}}})}_{1,{\mathcal{T}_h}}
\vertiii{(\bld{u}_h^1, \widehat{\bld{u}}_h^1)}_{1,{\mathcal{T}_h}}
\vertiii{({\mathrm{d}}_{u},{\mathrm{d}}_{u}hat)}_{1,{\mathcal{T}_h}}\\
\lesssim &\;
\vertiii{({\mathrm{d}}_{w}, {\mathrm{d}}_{w}hat)}_{1,{\mathcal{T}_h}}
\vertiii{(\bld{u}_h^1, \widehat{\bld{u}}_h^1)}_{1,{\mathcal{T}_h}}
\vertiii{({\mathrm{d}}_{u},{\mathrm{d}}_{u}hat)}_{1,{\mathcal{T}_h}}
&&\mbox{ by Proposition \ref{lemma:div-proj},}
\\
\lesssim&\;\nu^{-1} \|\bld f\|_{\mathcal{T}_h} \vertiii{({\mathrm{d}}_{w}, {\mathrm{d}}_{w}hat)}_{1,{\mathcal{T}_h}}
\vertiii{({\mathrm{d}}_{u},{\mathrm{d}}_{u}hat)}_{1,{\mathcal{T}_h}},
\end{alignat*}
by Theorem \ref{thm:discrete-h1-ns}. Combining this result with Theorem \ref{thm:dh1-global}, we immediately get
\[
\vertiii{({\mathrm{d}}_{u},{\mathrm{d}}_{u}hat)}_{1,{\mathcal{T}_h}} \lesssim\;\nu^{-2} \|\bld f\|_{\mathcal{T}_h} \vertiii{\ber{({\mathrm{d}}_{w}, {\mathrm{d}}_{w}hat)}}_{1,{\mathcal{T}_h}}.
\]
Hence, for $\nu^{-2} \|\bld f\|_{\mathcal{T}_h}$ sufficiently small, the mapping $\mathcal{F}$ is a contraction
in $K_h$. This completes the proof of Theorem \ref{thm:discrete-h1-ns}.
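Let us also note a practical consequence of the argument: since $\mathcal{F}$ is a contraction on $K_h$, the Banach fixed-point theorem guarantees that, starting from any $(\bld u_h^0,\widehat{\bld u}_h^0)\in K_h$, the Picard iterates
\[
(\bld u_h^{n+1},\widehat{\bld u}_h^{n+1}):=\mathcal{F}(\bld u_h^{n},\widehat{\bld u}_h^{n}),
\]
obtained by solving the linear system \eqref{ns-HDG-equations} with the convective velocity frozen at $\bld{\beta}={\bld P}_{\!h}(\bld u_h^{n},\widehat{\bld u}_h^{n})$, converge geometrically to the approximate solution. This remark is not needed for the analysis, but it indicates one way of computing the solution in practice.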
\subsection{Proof of estimate \eqref{est-1} in Theorem 4.4}
\ber{The energy estimate \eqref{est-1} in Theorem 4.4 follows directly from
Proposition \ref{lemma:projection-b}, the approximation properties of the finite element spaces, and
Theorem 5.4 below.
To simplify the notation, we introduce the following approximation errors:}
\begin{alignat*}{3}
\mathrm{\delta}_{L} := \mathrm{L} -P_\GG \mathrm{L}, \;\;& \delta_{u}u := \bld u - \Pi_Ww \bld u, &&\;\;
{\delta}_{p} := p -P_{Q} p, && \;\;\delta_{u}uhat :=\bld u - P_Mm \bld u.
\end{alignat*}
\begin{theorem}
\label{thm:ns-error}
Under the assumptions of Theorem \ref{thm:ns-error1},
we have
\begin{alignat*}{2}
\|\boldsymbol{\mathsf{e}}g\|_{{\mathcal{T}_h}}
+\vertiii{(e_{u}u,e_{u}uhat)}_{1,{\mathcal{T}_h}}
+h^{-1}\,\vertiii{(e_{u}u,e_{u}uhat)}_{0,{\mathcal{T}_h}}
+\|e_{u}\|_{\mathcal{T}_h} \le
C\,\nu^{-1}\,\Theta_{ns}^{1/2},
\end{alignat*}
where
\begin{align*}
\Theta_{ns} := &\;
\sum_{K\in {\mathcal{T}_h}}{h_K}\,(\|\nu\,\mathrm{\delta}_{L}\,\boldsymbol{n}\|_{\partial K}^2 + \|{\delta}_{p}\|_{\partial K}^2)+\|\bld{u}\|_{\infty,\Omega}^2\,\vertiii{(\delta_{u}u,\delta_{u}uhat)}_{0,{\mathcal{T}_h}}^2.
\end{align*}
Here, the constant $C$ depends only on the finite element
spaces, the shape-regularity of the mesh ${\mathcal{T}_h}$, and the domain $\Omega$.
\end{theorem}
\ber{The rest of this subsection is devoted to the proof of Theorem 5.4.
We need the following two auxiliary results.}
\begin{lemma}
\label{lemma:energy-ns}
We have
\begin{align*}
B_h(\boldsymbol{\mathsf{e}}g,e_{u}u,{e}_{p}, e_{u}uhat;\,\mathrm{G},\bld{v}, q,\widehat{\bld{v}})
= &\;
\bintEh{\nu\,\mathrm{\delta}_{L}\,\boldsymbol{n}-{\delta}_{p}\,\boldsymbol{n}}{\bld{v}-\widehat{\bld{v}}} \\
&\; + \mathcal{O}_h({\bld P}_{\!h}(\bld{u}_h,\widehat{\bld{u}}_h); (\bld u_h,\widehat{\bld{u}}_h),\,(\bld{v},\widehat{\bld{v}}))\nonumber\\
&\; - \mathcal{O}_h(\bld{u}; (\bld u,\bld{u}),\,(\bld{v},\widehat{\bld{v}}))\nonumber
\end{align*}
for all
$(\mathrm{G},\bld{v}, q,\widehat{\bld{v}})\in {\mathcal{G}}_h\times \boldsymbol{V}_h\times \mathring{{Q}_h}\times \bld{M}_h(0)$.
\end{lemma}
\begin{proof}
It is a direct consequence of the definition of the numerical method \eqref{ns-form-equation}, the consistency of the method \eqref{ns-form-equation-ex},
and the definition of the projections in Subsection \ref{subsubsec:projs_ns}.
\ber{In particular,} note that, by the definition of $\Pi_Ww$, \ber{there holds}
\[
\bintK{\frac{\nu}{h_K} P_{\bld{M_S}}(\delta_{u}u-\delta_{u}uhat)}{\bld{v}-\widehat{\bld{v}}} =
\frac{\nu}{h_K} \bintK{\delta_{u}u-\delta_{u}uhat}{P_{\bld{M_S}}(\bld{v}-\widehat{\bld{v}})} = 0.
\]
\end{proof}
\begin{lemma}
\label{lemma:oo-term}
We have
\begin{align*}
\mathcal{O}_h({\bld P}_{\!h}(\bld{u}_h,\widehat{\bld{u}}_h); (\bld u_h,\widehat{\bld{u}}_h),\,(e_{u}u,e_{u}uhat))
\\
- \mathcal{O}_h(\bld{u}; (\bld u,\bld{u}),\,(e_{u}u, e_{u}uhat))&\;\lesssim
\|\bld{u}\|_{\infty,\Omega}\,\Phi \,\vertiii{(e_{u}u,e_{u}uhat)}_{1,{\mathcal{T}_h}}.
\end{align*}
where
\[
\Phi:= \vertiii{(e_{u}u,e_{u}uhat)}_{0,{\mathcal{T}_h}} + \vertiii{(\delta_{u}u,\delta_{u}uhat)}_{0,{\mathcal{T}_h}}
+ \vertiii{({\bld P}_{\!h}(\Pi_Ww\bld{u}, P_Mm\bld{u})-\bld u, 0)}_{0,{\mathcal{T}_h}}.
\]
\end{lemma}
\ber{
For a proof, see Appendix \ref{sec:C}; see also \cite[Section 6]{CesmeliogluCockburnQiu17}.}
\ber{We are now ready to prove Theorem \ref{thm:ns-error}.
Since the following estimates hold}
\begin{alignat*}{2}
\|e_{u}\|_{\mathcal{T}_h}
&\le \vertiii{(e_{u}u,e_{u}uhat)}_{1,{\mathcal{T}_h}},&&\mbox{ by \cite[Theorem 2.1]{DiPietroDroniouErn10}},
\\
\vertiii{(e_{u}u,e_{u}uhat)}_{1,{\mathcal{T}_h}}
&\le C\, \nu^{-1}\, \mathsf{E}(\boldsymbol{\mathsf{e}}g,e_{u}u, e_{u}uhat),
&&\mbox{ by Theorem \ref{thm:dh1-global}},
\\
h^{-1}\,\vertiii{(e_{u}u,e_{u}uhat)}_{\mbox{{\rm \tiny PF}},{\mathcal{T}_h}}
&\le \vertiii{(e_{u}u,e_{u}uhat)}_{1,{\mathcal{T}_h}},
\end{alignat*}
\ber{
the left hand side of the inequality in Theorem 5.4 is smaller than
\[C\nu^{-1}\mathsf{E}(\boldsymbol{\mathsf{e}}g,e_{u}u, e_{u}uhat).\]
We next estimate this term by a standard energy argument.
}
\ber{To do that, we take \[(\mathrm{G},\bld{v},q, \widehat{\bld{v}}):=(\boldsymbol{\mathsf{e}}g, e_{u}u, {e}_{p},e_{u}uhat)\] in Lemma \ref{lemma:energy-ns}, to get}
\begin{align*}
\mathsf{E}(\boldsymbol{\mathsf{e}}g,e_{u}u, e_{u}uhat) =&\;
\bintEh{\nu\,\mathrm{\delta}_{L}\,\boldsymbol{n}-{\delta}_{p}\,\boldsymbol{n}}{e_{u}u-e_{u}uhat} \\
&\; + \mathcal{O}_h({\bld P}_{\!h}(\bld{u}_h,\widehat{\bld{u}}_h); (\bld u_h,\widehat{\bld{u}}_h),\,(e_{u}u,e_{u}uhat))\\
&\; - \mathcal{O}_h(\bld{u}; (\bld u,\bld{u}),\,(e_{u}u,e_{u}uhat)).
\end{align*}
\ber{Then, applying the \ber{Cauchy-Schwarz} inequality and using
Lemma \ref{lemma:OO}, we obtain}
\begin{align*}
\mathsf{E}(\boldsymbol{\mathsf{e}}g,e_{u}u,e_{u}uhat) \lesssim &\;
(\sum_{K\in{\mathcal{T}_h}} h_K\,\|\nu\,\mathrm{\delta}_{L}\,\boldsymbol{n}-{\delta}_{p}\,\boldsymbol{n}\|_{\partial K}^2)^{1/2}\,\vertiii{(e_{u}u, e_{u}uhat)}_{1,{\mathcal{T}_h}}\\
&\; + \|\bld{u}\|_{\infty,\Omega}
\vertiii{(\delta_{u}u,\delta_{u}uhat)}_{0,{\mathcal{T}_h}}\,\vertiii{(e_{u}u, e_{u}uhat)}_{1,{\mathcal{T}_h}}\\
&\; + \|\bld{u}\|_{\infty,\Omega}\,\vertiii{({\bld P}_{\!h}(\Pi_Ww \bld u, P_Mm \bld u)-\bld u,0)}_{0,{\mathcal{T}_h}}
\,\vertiii{(e_{u}u, e_{u}uhat)}_{1,{\mathcal{T}_h}}\\
&\; + \|\bld{u}\|_{\infty,\Omega}\,\vertiii{(e_{u}u, e_{u}uhat)}_{0,{\mathcal{T}_h}}
\,\vertiii{(e_{u}u, e_{u}uhat)}_{1,{\mathcal{T}_h}}.
\end{align*}
Now, assuming $\nu^{-1} \|\bld{u}\|_{\infty,\Omega}$ is sufficiently small that
\[
\|\bld{u}\|_{\infty,\Omega}\,\vertiii{(e_{u}u, e_{u}uhat)}_{0,{\mathcal{T}_h}}
\,\vertiii{(e_{u}u, e_{u}uhat)}_{1,{\mathcal{T}_h}}\le \frac{1}{2} \mathsf{E}(\boldsymbol{\mathsf{e}}g,e_{u}u,e_{u}uhat),
\]
we get
\begin{align*}
\mathsf{E}(\boldsymbol{\mathsf{e}}g,e_{u}u,e_{u}uhat) \lesssim &\;
\sum_{K\in{\mathcal{T}_h}} h_K\,\|\nu\,\mathrm{\delta}_{L}\,\boldsymbol{n}-{\delta}_{p}\,\boldsymbol{n}\|_{\partial K}^2+ \|\bld{u}\|_{\infty,\Omega}^2\,\vertiii{(\delta_{u}u,\delta_{u}uhat)}_{0,{\mathcal{T}_h}}^2
\end{align*}
since we have that
\[
\vertiii{({\bld P}_{\!h}(\Pi_Ww \bld u, P_Mm \bld u)-\bld u,0)}_{0,K}\lesssim
\vertiii{(\delta_{u}u,\delta_{u}uhat)}_{0,K},
\]
by the approximation properties of ${\bld P}_{\!h}$, see Proposition \ref{lemma:div-proj}.
This completes the proof of Theorem \ref{thm:ns-error}.
\ber{
\subsection{Proofs of estimates \eqref{est-2} and \eqref{est-3} in Theorem 4.4}
The superconvergent velocity estimates in the $L^2$-norm in \eqref{est-2} and \eqref{est-3}
follow from a standard duality argument. For a detailed proof,
we refer the interested reader to \cite{CesmeliogluCockburnQiu17}.
}
\section{Concluding remarks}
\label{sec:conclude}
As we pointed out in \S 2.5,
the application of our approach to the steady-state diffusion problem gives rise to the
first superconvergent HDG method, namely, the so-called SFH method proposed in
\cite{CockburnDongGuzman08} when its non-zero stabilization is taken to be of order $1/h$.
As shown in \cite{CockburnDongGuzman08}, the convergence properties of the SFH method remain
unchanged when the stabilization function increases. A similar phenomenon takes place for all the
methods considered here.
The extension of the techniques developed in this paper to other nonlinear partial differential equations constitutes the subject of ongoing work.
\
\ber{{\bf Acknowledgements}. The authors would like to thank the reviewers for their constructive comments leading to a better presentation of the material of this paper.}
\appendix
\section{\ber{Proof of Proposition \ref{lemma:div-proj}}}
\label{sec:A}
\ber{Here we give a proof of Proposition \ref{lemma:div-proj} on the
properties of the convective velocity ${\bld P}_{\!h}(\bld u_h,\widehat{\bld{u}}_h)$.}
The well-posedness of the projection ${\bld P}_{\!h}(\bld u_h,\widehat{\bld{u}}_h)\in {\boldsymbol V}^*(K)$ is due to properties \eqref{div-space}
on the space ${\boldsymbol V}^*(K)$ since we have
$
\gamma((\widetilde{{\boldsymbol V}^*}(K))^\perp) = {M}^{\mathrm{D}}(\dK)
$; see \cite[Proposition 6.4]{CockburnFuSayas16}.
Then, the first two estimates follow directly from scaling and norm-equivalence on finite dimensional spaces.
Now, \ber{assume} $(\bld{u}_h,\widehat{\bld{u}}_h)$ satisfies \eqref{ns-HDG-equations-3} for all
$q\in \mathring{{Q}_h}$.
By equation \eqref{post-process-defn-2} and property \eqref{div-space-2} on the space ${\boldsymbol V}^*(K)$, we immediately have
${\bld P}_{\!h}(\bld{u}_h,\widehat{\bld{u}}_h)\in H(\mathrm{div};\Omega)$.
\ber{Let us now prove that it is divergence-free. Obviously,} \eqref{ns-HDG-equations-3} is satisfied for the constant test function $q = 1$.
Hence, we have, on each element $K$,
\[
-(\bld u_h,{\nabla} q)_K + \bintK{\widehat{\bld{u}}_h\cdot\boldsymbol{n}}{q} = 0\quad \forall\; q\in Q(K)=WD.
\]
Next, by the definition of $\widetilde{{\boldsymbol V}^*}(K)$, we have ${\nabla} Q(K)= {\nabla} WD \subset \widetilde{{\boldsymbol V}^*}(K)$.
Hence, using the definition of ${\bld P}_{\!h}(\bld u_h,\widehat{\bld{u}}_h)$ in the above
equation, and integrating by parts, we get
\[
({\nabla\cdot} {\bld P}_{\!h}(\bld u_h,\widehat{\bld{u}}_h),q)_K=0 \quad \forall\; q\in WD.
\]
This implies ${\nabla\cdot} {\bld P}_{\!h}(\bld u_h,\widehat{\bld{u}}_h) = 0$ by \eqref{div-space-1}.
This concludes the proof of Proposition \ref{lemma:div-proj}.
\section{\ber{Proof of Proposition \ref{lemma:projection-b}}}
\label{sec:B}
\ber{Here, we give a proof of Proposition \ref{lemma:projection-b}
on the approximation properties of the projection $\Pi_Ww$.} By definition of $\Pi_Ww \bld u$, \eqref{bu-projection}, we have that, on
the element $K$, its $i$-th component, $(\Pi_Ww \bld u)_i$, is defined as the element of $W^{\mbox{\rm \tiny D}}(K)$ such that
\begin{subequations}\begin{alignat}{2}{}
\label{u-projection-1}
((\Pi_Ww \bld u)_i, w)_K = &\; (\bld u_i,w)_K &&\;\;\forall\; w\in W^{\mbox{\rm \tiny D}}(K),\\
\label{u-projection-2}
\bintK{(\Pi_Ww \bld u)_i}{\widehat{w}} = &\; \bintK{\bld u_i}{\widehat{w}} &&\;\;\forall\; \widehat{w}\in {M_S({\partial K})}.
\end{alignat}
\end{subequations}
Thus, if we set $\Pi_W \bld u_i:=(\Pi_Ww \bld u)_i$,
to prove our result, we only have to prove a similar result for the projection $\Pi_W$.
By \eqref{u-projection-1}, we have $\Pi_W u-P_{W}u \in \widetilde W^\perp(K)$.
By \eqref{u-projection-2}, we have
\begin{align*}
\bintK{\Pi_W u-P_{W} u}{\widehat{w}}
=\bintK{ u-P_{W}u}{\widehat{w}} \quad\forall\;\widehat{w}\in M_S.
\end{align*}
Hence,
\begin{align*}
\|\Pi_W u-P_W u\|_{{\partial K}}\le \,C_{M_S}
\|P_{M_S}( \Pi_W u-P_W u)\|_{{\partial K}}
\le {\color{red}C_{M_S}\,\|u-P_W u\|_{{\partial K}}},
\end{align*}
since the constant $C_{M_S}$ exists by condition \eqref{condition-ms}.
Then, the first estimate follows directly by scaling and norm-equivalence of
$\|w\|_K$ and $\|w\|_{{\partial K}}$ for functions $w\in\widetilde W^\perp(K)$.
Moreover, we have
\begin{align*}
\|P_W u -u\|_K +h_K^{1/2}\|P_W u - u\|_{{\partial K}}\le &\;
\|P_W u\|_K +\|u\|_K
+ h_K^{1/2}\|P_W u\|_{{\partial K}} +h_{K}^{1/2}\| u\|_{{\partial K}}\\
\le &\;
C\,\big( \|P_W u\|_K +\|u\|_K +h_{K}^{1/2}\| u\|_{{\partial K}}\big)\\
\le&\; C\,\big( \|u\|_K +h_{K}^{1/2}\| u\|_{{\partial K}}\big)\\
\le &\;C\,h_K^{d/2}\|u\|_{\infty,K}.
\end{align*}
The second estimate is obtained by
scaling, norm-equivalence of $h_K^{d/2}\|w\|_{\infty,K}$ and $\|w\|_{K}$ for the finite dimensional space $W(K)$,
the above estimate and the first estimate \ber{of Proposition \ref{lemma:projection-b}.}
This completes the proof of \ber{Proposition} \ref{lemma:projection-b}.
\section{\ber{Proof of Lemma \ref{lemma:oo-term}}}
\label{sec:C}
\ber{Here, we prove Lemma \ref{lemma:oo-term} on the properties of the convective term $\mathcal{O}_h$.}
The main idea is to first split the terms on the left hand side of the estimate in Lemma \ref{lemma:oo-term}
into the sum of the following four terms
\begin{align*}
T_1:=&\; \mathcal{O}_h({\bld P}_{\!h}(\bld{u}_h,\widehat{\bld{u}}_h); (\bld u_h,\widehat{\bld{u}}_h),\,(e_{u}u,e_{u}uhat)) \\
&\; -\mathcal{O}_h({\bld P}_{\!h}(\bld{u}_h,\widehat{\bld{u}}_h); (\Pi_Ww \bld u,P_Mm \bld u),\,(e_{u}u,e_{u}uhat)),\nonumber\\
T_2:=&\; \mathcal{O}_h({\bld P}_{\!h}(\bld{u}_h,\widehat{\bld{u}}_h); (\Pi_Ww \bld u,P_Mm \bld u),\,(e_{u}u,e_{u}uhat))\\
&\; -\mathcal{O}_h({\bld P}_{\!h}(\Pi_Ww \bld u,P_Mm \bld u); (\Pi_Ww \bld u,P_Mm \bld u),\,(e_{u}u,e_{u}uhat)),\nonumber\\
T_3:=&\; \mathcal{O}_h({\bld P}_{\!h}(\Pi_Ww \bld u,P_Mm \bld u); (\Pi_Ww \bld u,P_Mm \bld u),\,(e_{u}u,e_{u}uhat))\\
&\; - \mathcal{O}_h({\bld P}_{\!h}(\Pi_Ww \bld u,P_Mm \bld u); (\bld u, \bld u),\,(e_{u}u,e_{u}uhat)),\nonumber\\
T_4:=&\; \mathcal{O}_h({\bld P}_{\!h}(\Pi_Ww \bld u,P_Mm \bld u); (\bld u, \bld u),\,(e_{u}u,e_{u}uhat))\\
&\; - \mathcal{O}_h(\bld u; (\bld u, \bld u),\,(e_{u}u,e_{u}uhat)),\nonumber
\end{align*}
and then estimate each of them.
So, by \eqref{oh-linf-2}, we have that
$T_1 = - \mathcal{O}_h({\bld P}_{\!h}(\bld{u}_h,\widehat{\bld{u}}_h); (e_{u}u,e_{u}uhat),\,(e_{u}u,e_{u}uhat))\le 0.$
For the second term, we have
\begin{align*}
T_2 =&\; - \mathcal{O}_h({\bld P}_{\!h}(e_{u}u,e_{u}uhat); (\Pi_Ww \bld u,P_Mm \bld u),\,(e_{u}u,e_{u}uhat))\\
\lesssim&\;
\vertiii{({\bld P}_{\!h}(e_{u}u,e_{u}uhat),\ave{{\bld P}_{\!h}(e_{u}u,e_{u}uhat)})}_{0,{\mathcal{T}_h}}
\vertiii{(\Pi_Ww \bld u,P_Mm \bld u)}_{\infty,{\mathcal{T}_h}}
\vertiii{(e_{u}u,e_{u}uhat)}_{1,{\mathcal{T}_h}}\\
\lesssim&\;
\vertiii{(e_{u}u,e_{u}uhat)}_{0,{\mathcal{T}_h}}\,
\|\bld{u}\|_{\infty,\Omega}\,
\vertiii{(e_{u}u,e_{u}uhat)}_{1,{\mathcal{T}_h}},
\end{align*}
by Proposition \ref{lemma:div-proj}, and Proposition \ref{lemma:projection-b}.
For the third term, we have, by \eqref{oh-linf-1},
\begin{align*}
T_3 =&\; \mathcal{O}_h({\bld P}_{\!h}(\Pi_Ww\bld u,P_Mm \bld u); (\delta_{u}u,\delta_{u}uhat),\,(e_{u}u,e_{u}uhat))\\
\lesssim&\;
\|{\bld P}_{\!h}(\Pi_Ww\bld u,P_Mm \bld u)\|_{\infty,{\mathcal{T}_h}}
\vertiii{(\delta_{u}u,\delta_{u}uhat)}_{0,{\mathcal{T}_h}}
\vertiii{(e_{u}u,e_{u}uhat)}_{1,{\mathcal{T}_h}}\\
\lesssim&\;
\|\bld{u}\|_{\infty,\Omega}\,
\vertiii{(\delta_{u}u,\delta_{u}uhat)}_{0,{\mathcal{T}_h}}\,
\vertiii{(e_{u}u,e_{u}uhat)}_{1,{\mathcal{T}_h}},
\end{align*}
by Proposition \ref{lemma:div-proj}.
For the last term, we have
\begin{align*}
T_4
\lesssim&\;
\|\left({\bld P}_{\!h}(\Pi_Ww \bld u,P_Mm \bld u)-\bld u, 0\right)\|_{0,{\mathcal{T}_h}}
\|\bld{u}\|_{\infty,\Omega}
\vertiii{(e_{u}u,e_{u}uhat)}_{1,{\mathcal{T}_h}},
\end{align*}
by \eqref{oh-linf-2}.
This concludes the proof of Lemma \ref{lemma:oo-term}.
\end{document}
\begin{document}
\setcounter{page}{1} \thispagestyle{empty}
\begin{abstract}
\sloppy Harmonic, Geometric, Arithmetic, Heronian and Contra-harmonic means have been studied by many mathematicians. In 2003, H. Eves studied these means from a geometrical point of view and established some of the inequalities between them using a circle and its radius. In 1961, E. Beckenbach and R. Bellman introduced several inequalities corresponding to means. In this paper, we will introduce the concept of mean functions and integral means and give bounds on some of these mean functions and integral means.
\end{abstract}
\maketitle
\section{Introduction}
In their book on inequalities, Beckenbach and Bellman established several inequalities between arithmetic, harmonic and contra-harmonic means \cite{mbook}. These means are defined in the following paragraph, based on the original text by Eves \cite{main}.\\
Let $a,b>0$ and $a\neq b.$ Putting together results from the works of several mathematicians, Taneja in particular established in \cite{7means} and \cite{7means2} that $\
\max\{a,b\}>C>r>g>A>Hn>G>H>\min\{a,b\}$, where
$C=\frac{a^2+b^2}{a+b}$ is the contra-harmonic mean,$\
r=\sqrt{\frac{a^2+b^2}{2}}$ is the root square mean,
$g=\frac{2(a^2+ab+b^2)}{3(a+b)}$ is the gravitational mean (also called the centroidal mean),
$A=\frac{a+b}{2}$ is the arithmetic mean, $Hn=\frac{a+\sqrt{ab}+b}{3}$
is the Heronian mean, $G=\sqrt{ab}$ is the geometric mean and
$H=\frac{2ab}{a+b}$ is the harmonic mean of $a$ and $b$.\\In this
paper we introduce the notion of a mean function and utilize it to define some integral means of $a$ and $b$ and then we establish
some inequalities corresponding to those mean functions and integral means.
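As a quick numerical illustration of this chain of inequalities, take for instance $a=1$ and $b=2$; then
\[
C=\tfrac{5}{3}\approx1.667,\quad r=\sqrt{\tfrac{5}{2}}\approx1.581,\quad g=\tfrac{14}{9}\approx1.556,\quad A=\tfrac{3}{2},\quad Hn=\tfrac{3+\sqrt{2}}{3}\approx1.471,\quad G=\sqrt{2}\approx1.414,\quad H=\tfrac{4}{3}\approx1.333,
\]
so that indeed $\max\{1,2\}>C>r>g>A>Hn>G>H>\min\{1,2\}$.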
\section{Definitions and Main Theorems}
All the means that appear in this paper are functions $F$ satisfying the following two conditions:\\
a) $F:\mathbb{R}_{+}^2\to\mathbb{R}_{+},$\ \ where $\min\{x,y\}\leq
F(x,y)\leq\max\{x,y\}$ for all $(x,y)\in\mathbb{R}_{+}^2$,\\ b) $F(x,y)=F(y,x)$ for all $(x,y)\in\mathbb{R}_{+}^2.$ Consequently, $F(x,x)=x$ for all $x\in\mathbb{R}_{+}$.\\ We say that $F$ is a
mean function when the above two conditions are satisfied.\\Throughout the paper we assume that $a,b>0$ and, by symmetry, we may assume without loss of generality that
$b\geq a$.\\\\
\begin{dfn} \it Let $M$ be a mean of $a$ and
$b$. We define $M_A:=2A-M$ to be the $A$-complementary (arithmetic complementary) of $M$. \end{dfn}
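For instance, since $H+C=\frac{2ab}{a+b}+\frac{a^2+b^2}{a+b}=\frac{(a+b)^2}{a+b}=2A$, the arithmetic complementary of the harmonic mean is the contra-harmonic mean, that is, $H_A=C$ (and, symmetrically, $C_A=H$); this observation will be used below to compute $\mathcal{I}_C$ from $\mathcal{I}_H$.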
It is obvious that $M_A$ and $M_G$ are means of $a$ and $b$.
\begin{thm}
Let $M\in\mathcal{R}(\mathbb{R}_{+}^2)$ be a mean function.
Then\\
(i)\[\mathcal{I}_M:=\mathcal{I}_M(a,b):=\begin{cases}
\frac{1}{(b-a)^2}\int_a^b\int_a^b M(x,y)\ dxdy, &\text{if $a\neq b$}, \\
a, & \text{if $a=b$} \
\end{cases}\] \\
is a mean of $a$ and $b$,\\\\ (ii) $\mathcal{J}_M:=\mathcal{J}_M(a,b):=3\mathcal{I}_M(a,b)-2A(a,b)$ is a mean of $a$ and $b$ and finally \\\\(iii) $\frac{2}{3}A<\mathcal{I}_M<\frac{4}{3}A.$
\end{thm}
\begin{proof} Let
$b>a$.\\(i):\[\min\{a,b\}=a<\frac{2a+b}{3}=\frac{1}{(b-a)^2}\int_a^b\int_a^xy\
dydx+\frac{1}{(b-a)^2}\int_a^b\int_x^bx\
dydx=\]\[\frac{1}{(b-a)^2}\int_a^b\int_a^b\min\{x,y\}\
dxdy\leq\mathcal{I}_M(a,b)\leq\frac{1}{(b-a)^2}\int_a^b\int_a^b\max\{x,y\}\
dxdy\]\[=\frac{1}{(b-a)^2}\int_a^b\int_a^xx\
dydx+\frac{1}{(b-a)^2}\int_a^b\int_x^by\
dydx=\frac{a+2b}{3}<b=\max\{a,b\}.\]Also, it is obvious that$\
\mathcal{I}_M\ $is symmetric.\\(ii): By the proof of (i), we
have $\frac{2a+b}{3}\leq\mathcal{I}_M(a,b)\leq\frac{a+2b}{3}$.
So,\[a\leq3\mathcal{I}_M(a,b)-(a+b)=\mathcal{J}_M(a,b)\leq b.\]\\(iii):
\[\frac{2}{3}<\frac{2(2a+b)}{3(a+b)}\leq\frac{\mathcal{I}_M(a,b)}{A(a,b)}\leq\frac{2(a+2b)}{3(a+b)}<\frac{4}{3};\]
multiplying by $A(a,b)$ we get the result.
\end{proof}
\begin{prop}
Let $M$, $M_1$
and $M_2$ be mean functions and $\lambda\in\mathbb{R}$.
Then\\\\(i)
$M_1>M_2\Rightarrow\mathcal{I}_{M_{1}}(a,b)>\mathcal{I}_{M_{2}}(a,b)\
\ $and$\ \ \mathcal{J}_{M_{1}}(a,b)>\mathcal{J}_{M_{2}}(a,b),\ \
a\neq b,$\\\\(ii) $\mathcal{I}_{\lambda M_{1}+(1-\lambda)
M_{2}}=\lambda\mathcal{I}_{M_{1}}+(1-\lambda)\mathcal{I}_{M_{2}},\
$if$\ \lambda M_{1}+(1-\lambda) M_{2}\ $is a mean
function.\\\\In particular, $\mathcal{I}_{M_{A}}=(\mathcal{I}_M)_{A}\
$ and$\ \
\mathcal{I}_{\mathcal{J}_{M}}=\mathcal{J}_{\mathcal{I}_{M}}$.
\end{prop}
\begin{proof}
The proof follows by straightforward calculations.
\end{proof}
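To illustrate the last assertion, note that the identity $\mathcal{I}_{M_{A}}=(\mathcal{I}_M)_{A}$ follows from (ii) (applied with $\lambda=2$, $M_1=A$ and $M_2=M$) together with the fact, verified in the first example below, that $\mathcal{I}_A=A$:
\[
\mathcal{I}_{M_A}=\mathcal{I}_{2A-M}=2\,\mathcal{I}_{A}-\mathcal{I}_{M}=2A-\mathcal{I}_{M}=(\mathcal{I}_M)_{A}.
\]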
Here are some examples where the above proposition is used:\\
Let $ a\neq b.$\\
\begin{exm}
\[\mathcal{I}_A(a,b)=\frac{1}{(b-a)^2}\int_a^b\int_a^b\frac{1}{2}(x+y)\ dxdy=\frac{1}{2(b-a)^2}\int_a^b\left[\frac{x^2}{2}+xy\right]_a^b dy\]
\[=\frac{1}{4(b-a)^2}\left[y(b^2-a^2)+y^2(b-a)\right]_a^b=\frac{b+a}{2}=A(a,b).\]\end{exm}
\begin{exm}\[\mathcal{I}_G(a,b)=\frac{1}{(b-a)^2}\int_a^b\int_a^b\sqrt{xy}\ dxdy=\bigg(\frac{1}{b-a}\int_a^b\sqrt{t}\ dt\bigg)^2 =\bigg(\frac{2(b^{\frac{3}{2}}-a^{\frac{3}{2}})}{3(b-a)}\bigg)^2\]
\[=g^2(\sqrt{a},\sqrt{b}).\]\end{exm}
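As a quick numerical check, for $a=1$ and $b=2$ this gives $\mathcal{I}_G(1,2)=\frac{4(3+\sqrt{2})^2}{9(1+\sqrt{2})^2}\approx 1.486$, which indeed lies between $\min\{1,2\}$ and $\max\{1,2\}$ and satisfies $\mathcal{I}_G(1,2)<A(1,2)=\frac{3}{2}$.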
\begin{exm}\[\mathcal{I}_H(a,b)=\frac{2}{(b-a)^2}\int_a^b\int_a^b\frac{xy}{x+y}\ dxdy=\frac{2}{(b-a)^2}\int_a^b\left[xy-y^2\ln(x+y)\right]_a^b dy\]
\[=\frac{2}{3(b-a)^2}\left[y^2(b-a)+y(b^2-a^2)-(y^3+b^3)\ln(y+b)+(y^3+a^3)\ln(y+a)\right]_a^b=\]
\[\frac{4}{3}\bigg(2A(a,b)+\frac{1}{(b-a)^2}\bigg(a^3\ln\frac{A(a,b)}{a}+b^3\ln\frac{A(a,b)}{b}\bigg)\bigg),\ \ \ \ a\neq b.\]\end{exm}
\begin{exm}Let $a<b$, then:\\ \[\sqrt{2}(b-a)^2\mathcal{I}_{r}(a,b)=\int_a^b\int_a^b\sqrt{x^2+y^2}\ dxdy\]
\[=\int_{\tan^{-1}\tfrac{a}{b}}^{\tfrac{\pi}{4}}\int_{\tfrac{a}{\sin\theta}}^{\tfrac{b}{\cos\theta}}\rho^2\
d\rho d\theta\ +
\int_{\tfrac{\pi}{4}}^{\tan^{-1}\tfrac{b}{a}}\int_{\tfrac{a}{\cos\theta}}^{\tfrac{b}{\sin\theta}}\rho^2\
d\rho
d\theta.\]\\
Double integrals in the above expression are easily calculated and the final result is:
\[
\mathcal{I}_{r}(a,b)=\frac{1}{3\sqrt{2}(b-a)^2}\bigg(\big(\sqrt{2}+\ln(1+\sqrt{2})\big)(a^3+b^3)-a^3\ln\big(\frac{b+\sqrt{a^2+b^2}}{a}\big)-...\]\[
b^3\ln\big(\frac{a+\sqrt{a^2+b^2}}{b}\big)-2ab\sqrt{a^2+b^2}\bigg),\
\ \ \ a\neq b.\]\end{exm} Similarly, by Proposition 2.1 (ii), we will
have:\\
\begin{exm}Let $a\neq b
$, then\\
\[ \mathcal{I}_C(a,b)=(\mathcal{I}_H)_{A}(a,b)=-\frac{2}{3}\bigg(A(a,b)+\frac{2}{(b-a)^2}\bigg(a^3\ln\frac{A(a,b)}{a}+b^3\ln\frac{A(a,b)}{b}\bigg)\bigg)\]\end{exm}
\begin{exm}Let $a\neq b$, then\\
\[ \mathcal{I}_{g}(a,b)=\frac{4}{3}\mathcal{I}_A(a,b)-\frac{1}{3}\mathcal{I}_H(a,b)=\]
\[\frac{4}{9}\bigg(A(a,b)-\frac{1}{(b-a)^2}\bigg(a^3\ln\frac{A(a,b)}{a}+b^3\ln\frac{A(a,b)}{b}\bigg)\bigg).\]\end{exm}
\begin{exm}\[\hspace{2mm}\mathcal{I}_{Hn}(a,b)=\frac{2}{3}\mathcal{I}_A(a,b)+\frac{1}{3}\mathcal{I}_G(a,b)=\frac{1}{3}\big(2A(a,b)+g^2(\sqrt{a},\sqrt{b})\big)
\]\[=\frac{2(13A^2+13AG+G^2)}{27(A+G)}\]\end{exm}
and finally,
\begin{exm}\[\mathcal{I}_{\frac{A+G}{2}}(a,b)=\frac{1}{2}\mathcal{I}_A(a,b)+\frac{1}{2}\mathcal{I}_G(a,b)=\frac{1}{2}\big(A(a,b)+g^2(\sqrt{a},\sqrt{b})\big)\]\[
=\frac{17A^2+17AG+2G^2}{18(A+G)}.\]
\end{exm}
Because $\mathcal{I}_A=\mathcal{J}_A=A$, by Proposition 2.1 (i) we can propose the following:\\
\begin{prop} Let $M$ be a mean function
and $a\neq b$. Then\\\\(i)$\
M>(<)A\Rightarrow\mathcal{I}_M>(<)A,$\\\\(ii)$\
M>(<)A\Rightarrow\mathcal{J}_M>(<)A,$\\\\(iii)$\ \
\mathcal{I}_C>\mathcal{I}_r>\mathcal{I}_g>A>\mathcal{I}_{Hn}>\mathcal{I}_G>\mathcal{I}_H\
\ $and$\ \
\mathcal{J}_C>\mathcal{J}_r>\mathcal{J}_g>A>\mathcal{J}_{Hn}>\mathcal{J}_G>\mathcal{J}_H$.\end{prop}
By Proposition 2.1 (ii) and Proposition 2.2 (iii), we infer\[\ \mathcal{I}_r(a,b)>\frac{4}{3}A(a,b)-\frac{1}{3}\mathcal{I}_H(a,b)>A(a,b),
\ \ \ \ a\neq
b.\]Therefore,\[3\mathcal{I}_r(a,b)-2A(a,b)>2A(a,b)-\mathcal{I}_H(a,b)>A(a,b),\
\ \ \ a\neq b.\]In other words,\[\mathcal{J}_r(a,b)>\mathcal{I}_C(a,b)>A(a,b),\ \ \ \ a\neq
b.\]Similarly,\[A(a,b)>\mathcal{I}_G(a,b)>\mathcal{J}_G(a,b),\ \ \
\ a\neq b,\]\\\[\mathcal{I}_C(a,b)+\mathcal{I}_G(a,b)>2A(a,b)>\mathcal{I}_r(a,b)+\mathcal{I}_H(a,b),\
\ \ \ a\neq
b\]and\[\mathcal{J}_C(a,b)+\mathcal{J}_G(a,b)>2A(a,b)>\mathcal{J}_r(a,b)+\mathcal{J}_H(a,b),\
\ \ \ a\neq b.\]
\begin{prop}
If $a\neq b$,
then\\\\(i) $\ \frac{8}{9}A<\mathcal{I}_G<A$,\\\\(ii)$\
\frac{8(1-\ln2)}{3}A<\mathcal{I}_H<A$,\\\\(iii) $\
A<\mathcal{I}_C<\frac{2(-1+4\ln2)}{3}A$,\\\\(iv) $\
\frac{26}{27}A<\mathcal{I}_{Hn}<A$,\\\\(v) $\
A<\mathcal{I}_g<\frac{4(1+2\ln2)}{9}A$,\\\\(vi) $\
A<\mathcal{I}_r<\frac{2+\sqrt{2}\ln(1+\sqrt{2})}{3}A,$\\\\(vii) $\
\frac{2}{3}A<\mathcal{J}_G<A$,\\\\(viii)$\
2(3-4\ln2)A<\mathcal{J}_H<A$,\\\\(ix) $\
A<\mathcal{J}_C<4(-1+2\ln2)A$,\\\\(x) $\
\frac{8}{9}A<\mathcal{J}_{Hn}<A$,\\\\(xi) $\
A<\mathcal{J}_g<\frac{2(-1+4\ln2)}{3}A$,\\\\(xii) $\
A<\mathcal{J}_r<\sqrt{2}\big(\ln(1+\sqrt{2})\big)A,$\\\\where$\ \
1,\ \ \frac{8}{9},\ \ \frac{8(1-\ln2)}{3},\ \
\frac{2(-1+4\ln2)}{3},\ \ \frac{26}{27},\ \ \frac{4(1+2\ln2)}{9},\
\ \frac{2+\sqrt{2}\ln(1+\sqrt{2})}{3},\ \ \frac{2}{3},\ \
2(3-4\ln2),$\\$4(-1+2\ln2),\ \ $and$\ \ \sqrt{2}\ln(1+\sqrt{2})\ \
$are the best possible bounds we found for the inequalities between the integral means and the mean functions.\end{prop}
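For the reader's convenience, we also record approximate numerical values of these constants: $\frac{8}{9}\approx0.889$, $\frac{8(1-\ln2)}{3}\approx0.818$, $\frac{2(-1+4\ln2)}{3}\approx1.182$, $\frac{26}{27}\approx0.963$, $\frac{4(1+2\ln2)}{9}\approx1.061$, $\frac{2+\sqrt{2}\ln(1+\sqrt{2})}{3}\approx1.082$, $\frac{2}{3}\approx0.667$, $2(3-4\ln2)\approx0.455$, $4(-1+2\ln2)\approx1.545$ and $\sqrt{2}\ln(1+\sqrt{2})\approx1.246$.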
\begin{proof}
(i):\ If we
take $b=at^2,\ \ t>1,\ $\ then the following will be concluded:\\$\ \
f_1(t):=\frac{8(t^2+t+1)^2}{9(t^2+1)(t+1)^2}=\frac{\mathcal{I}_G(a,b)}{A(a,b)}.$\\Taking the derivative, we get:$f_1^{'}(t)=\frac{-16t(t^3-1)}{9(t^2+1)^2(t+1)^3}<0.\
\ $Therefore,$\ \ f_1\ \ $is strictly decreasing. So,$\ \
\lim_{t\to\infty}f_1(t)=\frac{8}{9}<f_1(t)<\lim_{t\to1^{+}}f_1(t)=1,\
\ t>1.$\\(ii):\ If we take $b=at,\ \ t>1,\ $\ then we will have$\
\ f_2(t):=\frac{8}{3}\big(1+\frac{(t^3+1)\ln\frac{t+1}{2}-t^3\ln
t}{(t-1)^2(t+1)}\big)=\frac{\mathcal{I}_H(a,b)}{A(a,b)}.$\\$f_2^{'}(t)=\frac{-8f_3(t)}{3(t^3-t^2-t+1)^2},\
\ $where$\ \ f_3(t):=(t+1)^3\ln\frac{t+1}{2}-(t^3+3t^2)\ln
t+t^3-t^2-t+1.$\\$f_3^{'}(t)=3(t+1)^2\ln\frac{t+1}{2}-3(t^2+2t)\ln
t+3t^2-3t.\ \ f_3^{''}(t)=6(t+1)\ln\frac{t+1}{2}-6(t+1)\ln
t+6t-6.\ \ f_3^{'''}(t)=6\ln\frac{t+1}{2}-6\ln t+6-\frac{6}{t}.\ \
f_3^{''''}(t)=\frac{6}{t^2(t+1)}>0.\ $Therefore, $f_3^{'''}$ is
strictly increasing, so
$f_3^{'''}(t)>\lim_{t\to1^{+}}f_3^{'''}(t)=0,\ \ t>1.\
$Consequently, $f_3^{''}$ is strictly increasing, hence
$f_3^{''}(t)>\lim_{t\to1^{+}}f_3^{''}(t)=0,\ \ t>1.\ $Therefore,
$f_3^{'}$ is strictly increasing, so
$f_3^{'}(t)>\lim_{t\to1^{+}}f_3^{'}(t)=0,\ \ t>1.\ $Thus, $f_3$ is
strictly increasing, hence $f_3(t)>\lim_{t\to1^{+}}f_3(t)=0,\ \
t>1.\ $Consequently,$\ \ f_2^{'}(t)<0,\ \ t>1.\ $Therefore, $f_2$
is strictly decreasing, so
$\frac{8(1-\ln2)}{3}=\lim_{t\to\infty}f_2(t)<f_2(t)<\lim_{t\to1^{+}}f_2(t)=1,\
\ t>1.$\\(iii):\ $\ a\neq
b\Rightarrow\frac{8(1-\ln2)}{3}<\frac{\mathcal{I}_H(a,b)}{A(a,b)}<1\Rightarrow$\\$\frac{2(-1+4\ln2)}{3}=2-\frac{8(1-\ln2)}{3}>2-\frac{\mathcal{I}_H(a,b)}{A(a,b)}=
\frac{\mathcal{I}_C(a,b)}{A(a,b)}>2-1=1.$\\(iv):\ $\ a\neq
b\Rightarrow\frac{8}{9}A(a,b)<\mathcal{I}_G(a,b)<A(a,b)\Rightarrow\frac{26A}{27}=\frac{2A}{3}+\frac{8A}{27}<\frac{2A}{3}+\frac{1}{3}\mathcal{I}_G(a,b)=\mathcal{I}_{Hn}(a,b)
<\frac{2A}{3}+\frac{A}{3}=A.$\\(v):\ $\ a\neq
b\Rightarrow\frac{8(1-\ln2)}{3}<\frac{\mathcal{I}_H(a,b)}{A(a,b)}<1\Rightarrow$\\$\frac{4(1+2\ln2)}{9}=\frac{4}{3}-\frac{8(1-\ln2)}{9}>\frac{4}{3}-\frac{\mathcal{I}_H(a,b)}{3A(a,b)}=
\frac{\mathcal{I}_g(a,b)}{A(a,b)}>\frac{4}{3}-\frac{1}{3}=1.$\\(vi):\
$$f_4(t):=\frac{\sqrt{2}}{3}\bigg(\frac{k(t^3+1)-t^3\ln\frac{1+\sqrt{1+t^2}}{t}-\ln(t+\sqrt{1+t^2})-2t\sqrt{1+t^2}}{(t+1)(t-1)^2}\bigg)=\frac{\mathcal{I}_r(a,b)}{A(a,b)},$$
where$\ a=bt,\ \ 0<t<1\ $and$\ k:=\sqrt{2}+\ln(1+\sqrt{2}).\ $Then we
have:\\$\ f_4^{'}(t)=\frac{\sqrt{2}f_5(t)}{3(t+1)^2(t-1)^3}$, where\[
f_5(t):=(3t^2+2t+3)\sqrt{1+t^2}+t^2(t+3)\ln\frac{1+\sqrt{1+t^2}}{t}+...\]\[(3t+1)\ln(t+\sqrt{1+t^2})-k(t+1)^3.\]\\
Therefore,
$$f_5^{'}(t)=(3t^2+6t)\ln\frac{1+\sqrt{1+t^2}}{t}+3\ln(t+\sqrt{1+t^2})+(9t+3)\sqrt{1+t^2}-3k(t+1)^2,$$
$$f_5^{''}(t)=(6t+6)\ln\frac{1+\sqrt{1+t^2}}{t}+\frac{(18t^2+6)}{\sqrt{1+t^2}}-6k(t+1),$$
$$f_5^{'''}(t)=6\bigg(\frac{3t^4-t^3+4t^2-t-1}{t(t^2+1)^{\frac{3}{2}}}+\ln\frac{1+\sqrt{1+t^2}}{t}-k\bigg)$$and
\[f_5^{''''}(t)=\frac{6(1-t+8t^2-t^3+t^4)}{t^2(t^2+1)^{\frac{5}{2}}}.\]Since$\ t\in(0,1),\ $we have$\ 1-t+8t^2-t^3+t^4=(1-t)+(8-t+t^2)t^2>0.\ $Hence,$\
f_5^{''''}>0\ $on$\ (0,1).$\\Consequently,$\ f_5^{'''}\ $will be
strictly increasing. Therefore,$\
f_5^{'''}(t)<\lim_{t\to1^{-}}f_5^{'''}(t)=6(\sqrt{2}+\ln(1+\sqrt{2})-k)=0,\
$for$\ 0<t<1.\ $Thus,$\ f_5^{''}\ $is strictly decreasing.
Hence,$\
f_5^{''}(t)>\lim_{t\to1^{-}}f_5^{''}(t)=12\ln(1+\sqrt{2})+12\sqrt{2}-12k=0,\
$for$\ 0<t<1.\ $So,$\ f_5^{'}\ $will be strictly increasing.
Therefore,$\
f_5^{'}(t)<\lim_{t\to1^{-}}f_5^{'}(t)=12\ln(1+\sqrt{2})+12\sqrt{2}-12k=0,\
$for$\ 0<t<1.\ $Hence,$\ f_5\ $is strictly decreasing. So,$\
f_5(t)>\lim_{t\to1^{-}}f_5(t)=8\ln(1+\sqrt{2})+8\sqrt{2}-8k=0,\
$on$\ (0,1)\ $and consequently$\ f_4^{'}<0\ $on$\ (0,1).\ $Thus,$\
f_4\ $will be strictly decreasing on$\ (0,1).\ $Therefore,$\
1=\lim_{t\to1^{-}}f_4(t)<f_4(t)=\frac{\mathcal{I}_r(a,b)}{A(a,b)}<\lim_{t\to0^{+}}f_4(t)=\frac{\sqrt{2}k}{3}=\frac{2+\sqrt{2}\ln(1+\sqrt{2})}{3},\
$for$\ 0<t<1.$
\\By
(i), (ii), (iii), (iv), (v) and (vi), parts (vii), (viii), (ix), (x),
(xi) and (xii) are straightforward.
\end{proof}
\begin{thm}
Let $\ M\in\mathcal{R}(\mathbb{R}_{+}^2)\ $ be a mean function
and $\ \varphi:[\alpha_0,\beta_0]\to[0,\infty) \ $be an integrable
function; that is, $\ \varphi\in\mathcal{R}([\alpha_0,\beta_0])$,
where$\ \alpha_0,\beta_0\in\mathbb{R}\ $and$\ \alpha_0<\beta_0.\
$Moreover, let$\ \psi:\mathbb{R}_{+}\to\mathbb{R}_{+}\ $be a Lipschitz
function with constant $1$; that
is,\[|\psi(x)-\psi(y)|\leq|x-y|,\ \ \ \
(x,y)\in\mathbb{R}_{+}^2.\]Then $\mathcal{S}_{M,\varphi,\psi}:=\mathcal{S}_{M,\varphi,\psi}(a,b)$, defined in the following way:\\
\[\mathcal{S}_{M,\varphi,\psi}:=-\bar{\varphi}+A(a,b)-A(\psi(a),\psi(b))+\frac{1}{\beta_0-\alpha_0}\int_{\alpha_0}^{\beta_0}M\big(\varphi(t)+\psi(a),\varphi(t)+\psi(b)\big)\
dt\]
is a mean of $a$ and $b$, where\\
\[\bar{\varphi}:=\frac{1}{\beta_0-\alpha_0}\int_{\alpha_0}^{\beta_0}\varphi(t)\
dt.\]\end{thm}
\begin{proof}
\[\min\{a,b\}=A(a,b)-\frac{|a-b|}{2}\leq A(a,b)-\frac{|\psi(a)-\psi(b)|}{2}=\]\[A(a,b)-A(\psi(a),\psi(b))+\min\{\psi(a),\psi(b)\}\leq
\mathcal{S}_{M,\varphi,\psi}\leq\]\[
A(a,b)-A(\psi(a),\psi(b))+\max\{\psi(a),\psi(b)\}\]\[=A(a,b)+\frac{|\psi(a)-\psi(b)|}{2}\leq
A(a,b)+\frac{|a-b|}{2}=\max\{a,b\}.\]Also, it is obvious that$\
\mathcal{S}_{M,\varphi,\psi}\ $is symmetric.
\end{proof}
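As a quick numerical illustration of the theorem, the following minimal Python sketch evaluates $\mathcal{S}_{M,\varphi,\psi}$ for one admissible choice of the data on $[\alpha_0,\beta_0]=[0,1]$, namely $M=G$, $\varphi(t)=t^2$ and $\psi(t)=\frac{t-\sin t}{2}$ (a $1$-Lipschitz self-map of $\mathbb{R}_{+}$); these choices and the sample pairs are ours and serve only as an example.
\begin{verbatim}
# Numerical illustration: S_{M,phi,psi} with M = G, phi(t) = t^2 on [0,1]
# and psi(t) = (t - sin t)/2 lies between min{a,b} and max{a,b}.
import math

def simpson(f, lo, hi, n=2000):
    # composite Simpson rule (n must be even)
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3.0

G   = lambda x, y: math.sqrt(x * y)
A   = lambda x, y: (x + y) / 2.0
phi = lambda t: t * t
psi = lambda t: (t - math.sin(t)) / 2.0

def S(a, b):
    phibar   = simpson(phi, 0.0, 1.0)
    integral = simpson(lambda t: G(phi(t) + psi(a), phi(t) + psi(b)), 0.0, 1.0)
    return -phibar + A(a, b) - A(psi(a), psi(b)) + integral

for (a, b) in [(0.5, 2.0), (1.0, 9.0), (3.0, 3.5)]:
    print(min(a, b), "<=", S(a, b), "<=", max(a, b))
\end{verbatim}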
\begin{remark}
Since$\
\varphi\in\mathcal{R}([\alpha_0,\beta_0])\Leftrightarrow\varphi\circ\eta\in\mathcal{R}([0,1]),\
$where$\ \eta(t):=(\beta_0-\alpha_0)t+\alpha_0,\ $we can, without
loss of generality, assume$\ \alpha_0=0\ $and$\
\beta_0=1.$
\end{remark}
\begin{remark} If$\ \ \psi(a)=\psi(b),\ \ $then$\ \
\mathcal{S}_{M,\varphi,\psi}(a,b)=A(a,b).$\end{remark}
\begin{remark}If$\
\varphi=c\geq0\ $($\ c$ is a constant),\ \
then\[\mathcal{S}_{M,c,\psi}=\mathcal{S}_{M,c,\psi}(a,b)=-c+M(c+\psi(a),c+\psi(b))-A(\psi(a),\psi(b))+A(a,b).\]\end{remark}
\begin{remark}\[(\mathcal{S}_{M,c,\psi})_{A}=(\mathcal{S}_{M,c,\psi})_{A}(a,b)=c-M(c+\psi(a),c+\psi(b))+A(\psi(a),\psi(b))+A(a,b).\]\end{remark}
\begin{prop}Let $M,M_1$ and $M_2$ be mean functions
and$\ \lambda\in\mathbb{R}.\ $ Then\\\\(i) $\
\mathcal{S}_{A,\varphi,\psi}=A,$\\\\(ii)
$M_1>M_2\Rightarrow\mathcal{S}_{M_{1},\varphi,\psi}>\mathcal{S}_{M_{2},\varphi,\psi},\
\ a\neq b,$\\\\In particular, $\
M>(<)A\Rightarrow\mathcal{S}_{M,\varphi,\psi}>(<)A,$\\\\(iii)$\
\mathcal{S}_{\lambda M_1+(1-\lambda)
M_2,\varphi,\psi}=\lambda\mathcal{S}_{M_1,\varphi,\psi}+(1-\lambda)\mathcal{S}_{M_2,\varphi,\psi},\
$if$\ \lambda M_{1}+(1-\lambda) M_{2}\ $is a mean
function.\\
In particular,$
\mathcal{S}_{M_{A},\varphi,\psi}=(\mathcal{S}_{M,\varphi,\psi})_A.$\end{prop}
\begin{proof}
It is straightforward by direct calculations.\end{proof}
\begin{exm}
Let$\ \ M=G,\ \ \varphi(t)=c\geq0\ $($\
c$ is a constant) and $\check{\psi}(t)=\frac{t-\sin t}{2},\
$then\[\mathcal{N}_c:=\mathcal{N}_c(a,b):=\mathcal{S}_{G,c,\check{\psi}}(a,b)=A(a,b)-\frac{1}{4}\bigg(\sqrt{2c+a-\sin
a}-\sqrt{2c+b-\sin
b}\bigg)^2.\]In particular,\[\mathcal{N}_0=\mathcal{N}_0(a,b)=A(a,b)-\frac{1}{4}\bigg(\sqrt{a-\sin
a}-\sqrt{b-\sin b}\bigg)^2.\]We can
see\[(\mathcal{N}_c)_{A}=(\mathcal{N}_c)_{A}(a,b)=A(a,b)+\frac{1}{4}\bigg(\sqrt{2c+a-\sin
a}-\sqrt{2c+b-\sin
b}\bigg)^2.\]In particular,\[(\mathcal{N}_0)_{A}=(\mathcal{N}_0)_{A}(a,b)=A(a,b)+\frac{1}{4}\bigg(\sqrt{a-\sin
a}-\sqrt{b-\sin b}\bigg)^2.\]\end{exm}
\begin{exm}
Let$\ \ M=G,\ \
\varphi(t)=c\geq0\ $($\ c$ is a constant) and
$\tilde{\psi}(t)=\ln(t^2+1),\
$then\[L_c:=L_c(a,b):=\mathcal{S}_{G,c,\tilde{\psi}}(a,b)=-c+A(a,b)-\ln(\sqrt{(a^2+1)(b^2+1)})+\sqrt{(c+\ln(a^2+1))(c+\ln(b^2+1))}.\]
In particular,\[L_0=L_0(a,b)=A(a,b)-\ln\big(\sqrt{(a^2+1)(b^2+1)}\big)+\sqrt{(\ln(a^2+1))(\ln(b^2+1))}.\]
Also we can see\[(L_c)_{A}=(L_c)_{A}(a,b)=c+A(a,b)+\ln(\sqrt{(a^2+1)(b^2+1)})-\sqrt{(c+\ln(a^2+1))(c+\ln(b^2+1))}.\]
In particular,\[
(L_0)_{A}=(L_0)_{A}(a,b)=A(a,b)+\ln(\sqrt{(a^2+1)(b^2+1)})-\sqrt{(\ln(a^2+1))(\ln(b^2+1))}.\]\end{exm}
\begin{exm} Let$\ \ M=H,\ \ \varphi(t)=Id(t)=t,\
$then\[J_{\psi}:=J_{\psi}(a,b):=\mathcal{S}_{H,Id,\psi}(a,b)=A(a,b)-\bigg(\frac{\psi(a)-\psi(b)}{2}\bigg)^2
\ln\bigg(1+\frac{1}{A(\psi(a),\psi(b))}\bigg).\]In particular,\begin{equation}
J_{Id}=J_{Id}(a,b)=A(a,b)-\bigg(\frac{a-b}{2}\bigg)^2
\ln\bigg(1+\frac{1}{A(a,b)}\bigg).
\end{equation}
We can
see\[(J_{\psi})_{A}=(J_{\psi})_{A}(a,b)=A(a,b)+\bigg(\frac{\psi(a)-\psi(b)}{2}\bigg)^2
\ln\bigg(1+\frac{1}{A(\psi(a),\psi(b))}\bigg).\]In particular,\[(J_{Id})_{A}=(J_{Id})_{A}(a,b)=A(a,b)+\bigg(\frac{a-b}{2}\bigg)^2
\ln\bigg(1+\frac{1}{A(a,b)}\bigg).\]
\end{exm}
\begin{exm} Let$\ M=G\ $
and $\ \psi(t)=Id(t)=t,\
$then\begin{equation}
I_{\varphi}:=I_{\varphi}(a,b):=\mathcal{S}_{G,\varphi,Id}(a,b)=-\bar{\varphi}+\int_0^1\sqrt{(\varphi(t)+a)(\varphi(t)+b)}dt.
\end{equation}In particular,\begin{equation}
I_{Id}=I_{Id}(a,b)=
-\frac{1}{2}+F_1(a+1,b+1)-F_1(a,b)-F_2(a+1,b+1)+F_2(a,b),
\end{equation}
where\begin{equation}
F_1(x,y):={\frac{1}{4}}(x+y)\sqrt{xy},\hskip2cm
F_2(x,y):={\frac{1}{4}}(x-y)^2\ln\frac{\sqrt{x}+\sqrt{y}}{\sqrt{2}}.
\end{equation}
We can
see\[(I_{\varphi})_{A}=(I_{\varphi})_{A}(a,b)=a+b+\bar{\varphi}-\int_0^1\sqrt{(\varphi(t)+a)(\varphi(t)+b)}\
dt.\]In particular,\[(I_{Id})_{A}=(I_{Id})_{A}(a,b)=
a+b+\frac{1}{2}-F_1(a+1,b+1)+F_1(a,b)+F_2(a+1,b+1)-F_2(a,b).\]If\
$a\neq b,\ $by proposition 4 (ii), we will
have\[\mathcal{N}_c<A,\hskip2cm L_c<A,\hskip2cm
J_{\psi}<A,\hskip2cm I_{\varphi}<A.\]
\end{exm}
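As a quick sanity check, the following minimal Python sketch compares the closed form of $I_{Id}$ built from $F_1$ and $F_2$ with a direct quadrature of the defining integral $-\frac{1}{2}+\int_0^1\sqrt{(t+a)(t+b)}\,dt$; the function names and sample pairs are chosen only for illustration.
\begin{verbatim}
# Compare the closed form of I_Id(a,b) with a direct quadrature of
# -1/2 + int_0^1 sqrt((t+a)(t+b)) dt.
import math

def simpson(f, lo, hi, n=2000):
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3.0

def F1(x, y):
    return (x + y) * math.sqrt(x * y) / 4.0

def F2(x, y):
    return (x - y) ** 2 / 4.0 * math.log((math.sqrt(x) + math.sqrt(y)) / math.sqrt(2.0))

def I_Id_closed(a, b):
    return -0.5 + F1(a + 1, b + 1) - F1(a, b) - F2(a + 1, b + 1) + F2(a, b)

def I_Id_quad(a, b):
    return -0.5 + simpson(lambda t: math.sqrt((t + a) * (t + b)), 0.0, 1.0)

for (a, b) in [(0.5, 1.0), (0.2, 3.0), (2.0, 7.0)]:
    print(a, b, I_Id_closed(a, b), I_Id_quad(a, b))
\end{verbatim}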
\begin{prop}
If$\ \ a\neq b,\ \ $then$\ \ I_{\varphi}>G\ \ $for every$\
\varphi\ $whose support has positive measure; that is, there
exists $\ S\subseteq[0,1]\ \ $with$\ \ |S|>0,\ \ $such that $\
\varphi(t)>0\ \ $for every $\ t\in S.$\end{prop}
\begin{proof} Let$\
a\neq
b.\ $We know that \[I_{\varphi}>G\]is equivalent to \[\int_0^1\bigg(\sqrt{(\varphi(t)+a)(\varphi(t)+b)}-\varphi(t)-G(a,b)\bigg)\
dt>0.\] Let us start our argument by working with the integrand: the inequality
\[\sqrt{(\varphi(t)+a)(\varphi(t)+b)}-\varphi(t)-G(a,b)>(\geq)\ 0\] is equivalent to \[(\sqrt{a}-\sqrt{b})^2\varphi(t)>(\geq)\ 0.\]Thus,
\begin{equation}\label{3}
\sqrt{(\varphi(t)+a)(\varphi(t)+b)}-\varphi(t)-G(a,b)\geq0,\ \ \ t\in[0,1],
\end{equation}
and
\begin{equation}\label{4}
\sqrt{(\varphi(t)+a)(\varphi(t)+b)}-\varphi(t)-G(a,b)>0,\ \ \ t\in S.
\end{equation}
By (\ref{3}) and (\ref{4}), we will
have\[\int_0^1\bigg(\sqrt{(\varphi(t)+a)(\varphi(t)+b)}-\varphi(t)-G(a,b)\bigg)\
dt>0.\]Hence, $ I_{\varphi}>G.$\end{proof}
\begin{prop}
If$\ a\neq b,\ $then\\\\(i)\ $J_{Id}>H,$\\\\(ii)\ neither $\
J_{Id}>G\ $nor$\ J_{Id}<G$,\\\\(iii)\ neither $\ L_{0}<G\ $nor$\
L_{0}>H$,\\\\(iv)\ neither $\ \mathcal{N}_{0}<G\ $nor$\
\mathcal{N}_{0}>H.$\end{prop}
\begin{proof}Let$\ a\neq b.$\\(i):$\
J_{Id}>H\Leftrightarrow\frac{(a-b)^2}{2(a+b)}>\frac{(a-b)^2}{4}\ln\big(1+\frac{2}{a+b}\big)\Leftrightarrow\frac{2}{a+b}>\ln\big(1+\frac{2}{a+b}\big),\ $which holds since$\ x>\ln(1+x)\ $for$\ x>0.$\\
(ii): One counterexample is$\ J_{Id}(0.5,1)<0.6971<0.7071<G(0.5,1).\ \ $On the other hand,$\
J_{Id}(0.5,0.2)>0.31962>0.31623>G(0.5,0.2).$\\(iii): One counterexample is$\
L_0(0.1,0.2)>0.14516>0.14143>G(0.1,0.2).\ $On the other hand, we have$\
L_0(4.1754412,4.175399)-H(4.1754412,4.175399)<-10^{-9}<0.$\\(iv): One counterexample is \[\mathcal{N}_0(0.5,0.2)>0.34713>0.31623>G(0.5,0.2).\] On the other hand,
$\mathcal{N}_0(4.1,4.100000001)-H(4.1,4.100000001)<-10^{-19}<0.$\end{proof}
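Some of the sample values used above are easy to recompute; the following minimal Python sketch evaluates $J_{Id}$, $L_0$, $\mathcal{N}_0$ and $G$ at a few of the points from the proof (the sample points are taken from the text, the function names are ours).
\begin{verbatim}
# Recompute some of the sample values used in the proof above.
import math

A = lambda a, b: (a + b) / 2.0
G = lambda a, b: math.sqrt(a * b)

def J_Id(a, b):
    return A(a, b) - ((a - b) / 2.0) ** 2 * math.log(1.0 + 1.0 / A(a, b))

def L0(a, b):
    return (A(a, b) - math.log(math.sqrt((a * a + 1) * (b * b + 1)))
            + math.sqrt(math.log(a * a + 1) * math.log(b * b + 1)))

def N0(a, b):
    return A(a, b) - (math.sqrt(a - math.sin(a)) - math.sqrt(b - math.sin(b))) ** 2 / 4.0

print(J_Id(0.5, 1.0), G(0.5, 1.0))    # J_Id(0.5,1) < G(0.5,1)
print(J_Id(0.5, 0.2), G(0.5, 0.2))    # J_Id(0.5,0.2) > G(0.5,0.2)
print(L0(0.1, 0.2), G(0.1, 0.2))      # L_0(0.1,0.2) > G(0.1,0.2)
print(N0(0.5, 0.2), G(0.5, 0.2))      # N_0(0.5,0.2) > G(0.5,0.2)
\end{verbatim}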
\begin{thm}Let $\ M\in\mathcal{R}(\mathbb{R}_{+}^2)\ $ be a
mean function and $\ \varphi:[\alpha_0,\beta_0]\to\mathbb{R}_{+} \
$be an integrable function; that is, $\
\varphi\in\mathcal{R}([\alpha_0,\beta_0])$, where$\
\alpha_0,\beta_0\in\mathbb{R}\ $and$\ \alpha_0<\beta_0.\ $Also, let$\
\psi:\mathbb{R}_{+}\to\mathbb{R}_{+}\ $be a Lipschitz function
with constant $1$.\\
Then $\mathcal{P}_{M,\varphi,\psi}:=\mathcal{P}_{M,\varphi,\psi}(a,b)$, defined in the following way:\\
\[\mathcal{P}_{M,\varphi,\psi}(a,b):= A(a,b)-A(\psi(a),\psi(b))+\frac{1}{(\beta_0-\alpha_0)\bar{\varphi}}\int_{\alpha_0}^{\beta_0}M\big(\varphi(t)\psi(a),\varphi(t)\psi(b)\big)\
dt\]
is a mean of $a$ and $b$.\end{thm}
\begin{proof}
\[\min\{a,b\}=A(a,b)-\frac{|a-b|}{2}\leq A(a,b)-\frac{|\psi(a)-\psi(b)|}{2}=\]\[A(a,b)-A(\psi(a),\psi(b))+\min\{\psi(a),\psi(b)\}\leq
\mathcal{P}_{M,\varphi,\psi}(a,b)\leq\]\[
A(a,b)-A(\psi(a),\psi(b))+\max\{\psi(a),\psi(b)\}=A(a,b)+\frac{|\psi(a)-\psi(b)|}{2}\]\[\leq
A(a,b)+\frac{|a-b|}{2}=\max\{a,b\}.\]Also, it is obvious that$\
\mathcal{P}_{M,\varphi,\psi}\ $is symmetric.\end{proof}
\begin{remark}
Since$\
\varphi\in\mathcal{R}([\alpha_0,\beta_0])\Leftrightarrow\varphi\circ\eta\in\mathcal{R}([0,1]),\
$where$\ \eta(t):=(\beta_0-\alpha_0)t+\alpha_0,\ $we can, without
loss of generality, assume$\ \alpha_0=0\ $and$\
\beta_0=1.$\end{remark}
\begin{remark}If$\ \ \psi(a)=\psi(b),\ \ $then$\ \
\mathcal{P}_{M,\varphi,\psi}(a,b)=A(a,b).$\end{remark}
\begin{remark}If$\
\varphi=c>0\ $($\ c$ is a constant),\ \
then\[\mathcal{P}_{M,c,\psi}=\mathcal{P}_{M,c,\psi}(a,b)={\frac{1}{c}}M(c\psi(a),c\psi(b))-A(\psi(a),\psi(b))+A(a,b).\]\end{remark}
\begin{remark}\[(\mathcal{P}_{M,c,\psi})_{A}=(\mathcal{P}_{M,c,\psi})_{A}(a,b)=-{\frac{1}{c}}M(c\psi(a),c\psi(b))+A(\psi(a),\psi(b))+A(a,b).\]\end{remark}
\begin{remark}
If$\ M\ $is a homogeneous function of order 1; that
is\[M(xz,yz)=zM(x,y),\hskip1cm\forall x,y,z\in\mathbb{R}_+\
,\]then$\ \
\mathcal{P}_{M,\varphi,\psi}(a,b)=A(a,b)-A(\psi(a),\psi(b))+M(\psi(a),\psi(b)).$\end{remark}
\begin{remark}$C,r,g,A,Hn,G\ $and$\ H\ $are homogeneous mean functions
of order 1.\end{remark}
\begin{prop} Let $M,M_1$ and $M_2$ be
mean functions and $\lambda\in\mathbb{R}.\ $Then\\\\
(i) The following three equations hold:\\
$\mathcal{P}_{A,\varphi,\psi}(a,b)=A(a,b),$\\$\mathcal{P}_{G,\varphi,\psi}(a,b)=A(a,b)-A(\psi(a),\psi(b))+G(\psi(a),\psi(b))=A(a,b)-{\frac{1}{2}}\big(\sqrt{\psi(a)}-\sqrt{\psi(b)}\big)^2$,\\
$(\mathcal{P}_{G,\varphi,\psi})_A(a,b)=A(a,b)+{\frac{1}{2}}
\big(\sqrt{\psi(a)}-\sqrt{\psi(b)}\big)^2,$\\\\(ii)
$M_1>M_2\Rightarrow\mathcal{P}_{M_{1},\varphi,\psi}>\mathcal{P}_{M_{2},\varphi,\psi},\
\ a\neq b,$\\\\In particular, $\
M>(<)A\Rightarrow\mathcal{P}_{M,\varphi,\psi}>(<)A,$\\\\(iii)$\
\mathcal{P}_{\lambda M_1+(1-\lambda)
M_2,\varphi,\psi}=\lambda\mathcal{P}_{M_1,\varphi,\psi}+(1-\lambda)\mathcal{P}_{M_2,\varphi,\psi},\
$if$\ \lambda M_{1}+(1-\lambda) M_{2}\ $is a mean
function.\\\\
In particular,$\
\mathcal{P}_{M_{A},\varphi,\psi}=(\mathcal{P}_{M,\varphi,\psi})_A.$\end{prop}
\begin{proof}It is straightforward.\end{proof}
Here are some examples where the above proposition is used: \begin{exm}Let$\ \ M=G\ $and $\
\check{\psi}(t)=\frac{t-\sin t}{2},\
$then\[\mathcal{P}_{G,\varphi,\check{\psi}}=\mathcal{N}_0.\]\end{exm}
\begin{exm}
Let$\ \ M=G\ $and$\ \tilde{\psi}(t)=\ln(t^2+1),\
$then\[\mathcal{P}_{G,\varphi,\tilde{\psi}}=L_0.\] \end{exm}
\begin{exm}
Let$\ \ M=H,\ \ \varphi(t)=Id(t)=t,\
$then\[\mathcal{P}_{H,Id,\psi}(a,b)=A(a,b)-A(\psi(a),\psi(b))+H(\psi(a),\psi(b))=A(a,b)-\frac{\big(\psi(a)-\psi(b)\big)^2}{2\big(\psi(a)+\psi(b)\big)}
.\]In particular,\[\mathcal{P}_{H,Id,Id}(a,b)=H(a,b).\]We can
see\[\mathcal{P}_{H,Id,\psi}(a,b)<\mathcal{S}_{H,Id,\psi}(a,b),\ \
\ \ \ \ a\neq b.\]
\end{exm}
Let$\ M\ $be a mean function. We define\[
\hat{S}_{M}:=\hat{S}_{M}(a,b):=\int_0^{\frac{\pi}{2}}M(a\sin\theta,b\cos\theta)\
d\theta.\]
\\ We can easily see$\ \
\hat{S}_{M}(a,b)=\hat{S}_{M}(b,a).\ $Also,\[
\hat{S}_{M}(a,b)\leq\int_0^{\frac{\pi}{2}}\big(\frac{a\sin\theta+b\cos\theta}{2}+\frac{|a\sin\theta-b\cos\theta|}{2}\big)\
d\theta=\]\[A(a,b)+{1\over2}\int_0^{\tan^{-1}{b\over
a}}(b\cos\theta-a\sin\theta)\
d\theta+{1\over2}\int_{\tan^{-1}{b\over
a}}^{\frac{\pi}{2}}(-b\cos\theta+a\sin\theta)\
d\theta=\]\[\sqrt{a^2+b^2}.\label{8}\]
\[
\hat{S}_{M}(a,b)\geq\int_0^{\frac{\pi}{2}}\big(\frac{a\sin\theta+b\cos\theta}{2}-\frac{|a\sin\theta-b\cos\theta|}{2}\big)\
d\theta=\]\[A(a,b)-{1\over2}\int_0^{\tan^{-1}{b\over
a}}(b\cos\theta-a\sin\theta)\
d\theta-{1\over2}\int_{\tan^{-1}{b\over
a}}^{\frac{\pi}{2}}(-b\cos\theta+a\sin\theta)\
d\theta=\]\[2A(a,b)-\sqrt{a^2+b^2}.\label{9}\] Thus, by (\ref{8}) and
(\ref{9}), we have\begin{equation}
2A(a,b)-\sqrt{a^2+b^2}\leq\hat{S}_M\leq\sqrt{a^2+b^2}.\label{10}\end{equation}
From (\ref{10}), if we take$\ \xi:=\xi(a,b)>0\ $and$\
\zeta:=\zeta(a,b),\ $such
that\begin{equation}\min\{a,b\}\leq\frac{(a+b)\xi}{\sqrt{a^2+b^2}}-\xi+\zeta\leq\frac{\xi}{\sqrt{a^2+b^2}}\hat{S}_M+\zeta\leq\xi+\zeta\leq\max\{a,b\},\label{eq11}\end{equation}
then we will
have\begin{equation}0<\xi\leq\frac{|a-b|\sqrt{a^2+b^2}}{2\sqrt{a^2+b^2}-(a+b)}\label{eq12}\end{equation}
and\begin{equation}\min\{a,b\}+(1-\frac{a+b}{\sqrt{a^2+b^2}})\xi\leq\zeta\leq\max\{a,b\}-\xi.\label{eq13}\end{equation}
By (\ref{eq11}), (\ref{eq12}) and (\ref{eq13}), we infer\begin{equation}
S_{M,\xi,\zeta}:=S_{M,\xi,\zeta}(a,b):=\frac{\xi(a,b)}{\sqrt{a^2+b^2}}\hat{S}_M(a,b)+\zeta(a,b)\label{14}\end{equation}
is a mean of $a$ and $b$.\\For example, if we take$\ \xi:=|a-b|\
$and$\
\zeta:={1\over2}\big(\min\{a,b\}+(1-\frac{a+b}{\sqrt{a^2+b^2}})|a-b|+\\\max\{a,b\}-|a-b|\big)=A(a,b)-\frac{|a^2-b^2|}{2\sqrt{a^2+b^2}},\
$then from (\ref{14})\begin{equation}
S_{M}:=S_{M}(a,b):=A(a,b)-\frac{|a^2-b^2|}{2\sqrt{a^2+b^2}}
+\frac{|a-b|}{\sqrt{a^2+b^2}}\int_0^{\frac{\pi}{2}}M(a\sin\theta,b\cos\theta)\
d\theta\label{15}\end{equation} is a mean of $a$ and $b$. Thus, we will have
the following theorem.\\\\{\bf Theorem 5.}{\it\ \ Let $M$ be a mean
function. Then$\ S_M,\ $which is defined by (\ref{15}), is a mean
function.}\\
\begin{prop}\label{11} \ \ Let$\ M,M_1\ $and$\ M_2\
$be mean functions and$\ \lambda\in\mathbb{R}.\ $Then\\\\(i)\
$S_{\lambda M_1+(1-\lambda)M_2}=\lambda
S_{M_1}+(1-\lambda)S_{M_2},\ $if$\ \lambda M_{1}+(1-\lambda)
M_{2}\ $is a mean function.\\\\In particular,\ $S_{M_{A}}=(S_{M})_A,$
\\\\(ii)\
$M_1>M_2\Rightarrow S_{M_1}>S_{M_2},\ \ a\neq b,$\\\\in particular,
$M>(<)A\Rightarrow S_M>(<)A.$\end{prop}
\begin{proof} It is
straightforward.\end{proof}
\begin{exm}\[\hskip1cm S_A=A,\hskip.7cm S_G(a,b)=A(a,b)+
\frac{G(a,b)|a-b|\Gamma^2({3\over4})}{\sqrt{\pi(a^2+b^2)}}
-\frac{|a^2-b^2|}{2\sqrt{a^2+b^2}},\ \ \ \
\]\[\Gamma(\tfrac{3}{4})\approx1.225416702.\]\end{exm}
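The closed form of $S_G$ can be checked against the defining integral (\ref{15}); the following minimal Python sketch does so with a composite Simpson rule (the sample pairs are chosen only for illustration).
\begin{verbatim}
# Compare the closed form of S_G with the defining integral (15).
import math

def simpson(f, lo, hi, n=4000):
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3.0

def S_from_definition(M, a, b):
    A = (a + b) / 2.0
    q = math.sqrt(a * a + b * b)
    I = simpson(lambda th: M(a * math.sin(th), b * math.cos(th)), 0.0, math.pi / 2.0)
    return A - abs(a * a - b * b) / (2.0 * q) + abs(a - b) / q * I

def S_G_closed(a, b):
    A = (a + b) / 2.0
    q = math.sqrt(a * a + b * b)
    return (A + math.sqrt(a * b) * abs(a - b) * math.gamma(0.75) ** 2
            / math.sqrt(math.pi * (a * a + b * b)) - abs(a * a - b * b) / (2.0 * q))

G = lambda x, y: math.sqrt(x * y)
for (a, b) in [(3.0, 4.0), (0.5, 2.0), (1.0, 7.0)]:
    print(a, b, S_from_definition(G, a, b), S_G_closed(a, b))
\end{verbatim}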
\begin{exm}\[
S_H(a,b)=A(a,b)-\frac{|a^2-b^2|(a^2+b^2-4ab)}{2(a^2+b^2)^{3\over2}}-\frac{4a^2b^2|a-b|}{(a^2+b^2)^{2}}
\ln\frac{a+b+\sqrt{a^2+b^2}}{\sqrt{2ab}}.\]
\end{exm}
\begin{exm}
By proposition \ref{11} (i),
\[\hskip2cm S_g=A(a,b)+\frac{|a^2-b^2|(a^2+b^2-4ab)}{6(a^2+b^2)^{3\over2}}+\frac{4a^2b^2|a-b|}{3(a^2+b^2)^{2}}
\ln\frac{a+b+\sqrt{a^2+b^2}}{\sqrt{2ab}}.\]\end{exm}
\begin{exm}Similarly,\[\hskip2cm S_C=A(a,b)+\frac{|a^2-b^2|(a^2+b^2-4ab)}{2(a^2+b^2)^{3\over2}}+\frac{4a^2b^2|a-b|}{(a^2+b^2)^{2}}
\ln\frac{a+b+\sqrt{a^2+b^2}}{\sqrt{2ab}}.\]
By proposition \ref{11} (ii),
for $a\neq
b$\\\[\bigg(\frac{\sqrt{2(a^2+b^2)}}{|a-b|}\bigg)S_C-\frac{a+b}{\sqrt{2}}\bigg(\frac{\sqrt{a^2+b^2}}{|a-b|}-1\bigg)
\]\[>\int_0^{\frac{\pi}{2}}\sqrt{a^2\sin^2\theta+b^2\cos^2\theta}\
d\theta>\]\[\bigg(\frac{\sqrt{2(a^2+b^2)}}{|a-b|}\bigg)S_g-\frac{a+b}{\sqrt{2}}\bigg(\frac{\sqrt{a^2+b^2}}{|a-b|}-1\bigg)\]and\\
\[S_C(a,b)>S_g(a,b)>A(a,b)>S_G(a,b)>S_H(a,b),\ \ \ \ a\neq b.\]In particular,$\ A(3,4)>S_G(3,4)>S_H(3,4),\ $from which we infer
\[\frac{7}{12}>\frac{\Gamma^2(\tfrac{3}{4})}{\sqrt{3\pi}}>\frac{140-48\ln\!6}{125}.\] \end{exm}
\begin{thm} \ Let$\ M_1,
M_2\in\mathcal{R}(\mathbb{R}_{+}^2)\ $be mean functions.\ Then\\\[\mathcal{T}_{M_{1},M_{2}}:=\mathcal{T}_{M_{1},M_{2}}(a,b):=\left\{
\begin{array}{ll}
\frac{1}{b-a}\int_a^bM_1\big(M_2(a,b),x\big)\ dx, & \hbox{$a\neq b,$} \\\\
a, & \hbox{$a=b$} \\
\end{array}
\right. \]is a mean of$\ a\ $and$\ b$.\end{thm}
\begin{proof}
Let$\ b>a\ $and$\ x\in[a,b].\ $We have\[
\frac{1}{b-a}\int_a^bM_1\big(M_2(a,b),x\big)\
dx\leq\frac{1}{b-a}\int_a^b\max\{M_2(a,b),x\}\
dx=\frac{1}{b-a}\int_a^{M_2(a,b)}M_2(a,b)\
dx+\frac{1}{b-a}\int_{M_2(a,b)}^bx\
dx\]\begin{equation}
=\frac{1}{2(b-a)}\big(b^2+M_2^2(a,b)-2aM_2(a,b)\big).\label{p1}
\end{equation}
If
we take$\ m_1(t):=b^2+t^2-2at,\ \ t\in[a,b],\ $then$\ m_1\ $will
be increasing on$\ [a,b].\ $So, $m_1(t)\leq m_1(b)=2b(b-a),\
$for$\ t\in[a,b].$ Hence, from (\ref{p1}), we will
get\[\frac{1}{b-a}\int_a^bM_1\big(M_2(a,b),x\big)\ dx\leq
b.\]Similarly,\[ \frac{1}{b-a}\int_a^bM_1\big(M_2(a,b),x\big)\
dx\geq\frac{1}{b-a}\int_a^b\min\{M_2(a,b),x\}\
dx=\frac{1}{b-a}\int_a^{M_2(a,b)}x\
dx+\frac{1}{b-a}\int_{M_2(a,b)}^bM_2(a,b)\
dx\]\begin{equation}
=\frac{1}{2(b-a)}\big(-a^2-M_2^2(a,b)+2bM_2(a,b)\big).\label{p2}
\end{equation}
If
we take$\ m_2(t):=-a^2-t^2+2bt,\ \ t\in[a,b],\ $then$\ m_2\ $will
be increasing on$\ [a,b].\ $ So, $m_2(t)\geq m_2(a)=2a(b-a),\
$for$\ t\in[a,b].$ Therefore, from (\ref{p2}), we will
get\[\frac{1}{b-a}\int_a^bM_1\big(M_2(a,b),x\big)\ dx\geq
a.\]Also, it is obvious that$\ \mathcal{T}_{M_{1},M_{2}}\ $ is
symmetric.\end{proof}
\begin{prop}
\label{12}\ Let$\
M_1,M_1^{'},M_2\ $and$\ M_2^{'}\ $be mean functions and
$\lambda\in\mathbb{R}$.\ Then\\\\(i)$\
M_1>M_1^{'}\Rightarrow\mathcal{T}_{M_{1},M_{2}}(a,b)>\mathcal{T}_{M_{1}^{'},M_{2}}(a,b),\
\ a\neq b,$\\\\(ii) If$\ M_1\ $is strictly increasing and$\
M_2>M_2^{'},\ \ $then \[
\mathcal{T}_{M_{1},M_{2}}(a,b)>\mathcal{T}_{M_{1},M_{2}^{'}}(a,b),\
\ a\neq b,\]\\\\(iii)\ $\mathcal{T}_{\lambda M_{1}+(1-\lambda)
M_{1}^{'},M_{2}}=\lambda\mathcal{T}_{M_{1},M_{2}}+(1-\lambda)\mathcal{T}_{M_{1}^{'},M_{2}},\
$\\if$\ \lambda M_{1}+(1-\lambda) M_1^{'}\ $is a mean
function.\\In particular,
$\mathcal{T}_{M_{1A},M_{2}}=2\mathcal{T}_{A,M_{2}}-\mathcal{T}_{M_{1},M_{2}}$.\end{prop}
\begin{proof} It is straightforward.\end{proof}
{\bf Some Examples} \ Let$\
M\ $be a mean
function.\[(1)\hskip6cm\mathcal{T}_{A,M}(a,b)=A\big(M(a,b),A(a,b)\big).\]
\[(2)\hskip5.3cm\mathcal{T}_{G,M}(a,b)=G\big(M(a,b),g^2(\sqrt{a},\sqrt{b})\big).\]
\[(3)\hskip1.3cm\mathcal{T}_{H,M}(a,b)=2M(a,b)\bigg(1-\frac{M(a,b)}{b-a}\ln\frac{b+M(a,b)}{a+M(a,b)}\bigg),\ \ a\neq
b.\]
\[(4)\hskip1.5cm\mathcal{T}_{r,M}(a,b)=\frac{1}{2\sqrt{2}}\bigg(\frac{(a+b)\big(a^2+b^2+M^2(a,b)\big)}{\big(a\sqrt{a^2+M^2(a,b)}+b\sqrt{b^2+M^2(a,b)}\big)}+\]\[
\frac{M^2(a,b)}{(b-a)}\ln\frac{b+\sqrt{b^2+M^2(a,b)}}{a+\sqrt{a^2+M^2(a,b)}}\bigg),\
\ a\neq b.\]
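Formulas (1)--(4) can be confirmed numerically; as an illustration, the following minimal Python sketch compares formula (3) for $\mathcal{T}_{H,M}$ with a direct quadrature of the defining integral, taking $M\in\{A,G\}$ as sample inner means (the helper names and sample pairs are ours).
\begin{verbatim}
# Compare formula (3) for T_{H,M} with a direct quadrature of
# (1/(b-a)) * int_a^b H(M(a,b), x) dx.
import math

def simpson(f, lo, hi, n=2000):
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3.0

H = lambda x, y: 2.0 * x * y / (x + y)
A = lambda x, y: (x + y) / 2.0
G = lambda x, y: math.sqrt(x * y)

def T(M1, M2, a, b):
    m = M2(a, b)
    return simpson(lambda x: M1(m, x), a, b) / (b - a)

def T_H_closed(M2, a, b):
    m = M2(a, b)
    return 2.0 * m * (1.0 - m / (b - a) * math.log((b + m) / (a + m)))

for M2 in (A, G):
    for (a, b) in [(1.0, 4.0), (0.3, 2.5)]:
        print(T(H, M2, a, b), T_H_closed(M2, a, b))
\end{verbatim}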
By proposition \ref{12}, we will get
\[\mathcal{T}_{r,M}(a,b)>\mathcal{T}_{A,M}(a,b)>\mathcal{T}_{G,M}(a,b)>\mathcal{T}_{H,M}(a,b),\ \ \
\ a\neq b.\]Also, by proposition \ref{12} (ii) and note 12, we will
have\[\mathcal{T}_{A,A}(a,b)>\mathcal{T}_{A,G}(a,b)>\mathcal{T}_{A,H}(a,b),\
\ \ \ a\neq
b,\]\[\mathcal{T}_{G,A}(a,b)>\mathcal{T}_{G,G}(a,b)>\mathcal{T}_{G,H}(a,b),\
\ \ \ a\neq
b\]and\[\mathcal{T}_{H,A}(a,b)>\mathcal{T}_{H,G}(a,b)>\mathcal{T}_{H,H}(a,b),\
\ \ \ a\neq b.\]Besides, by proposition \ref{12} (iii), we will have
\[\mathcal{T}_{H_n,M}=\tfrac{2}{3}\mathcal{T}_{A,M}+\tfrac{1}{3}\mathcal{T}_{G,M},\]
\[\mathcal{T}_{g,M}=\tfrac{4}{3}\mathcal{T}_{A,M}-\tfrac{1}{3}\mathcal{T}_{H,M}\]and
\[\mathcal{T}_{(M_1)_{A},M_2}=2\mathcal{T}_{A,M_2}-\mathcal{T}_{M_1,M_2},\]if$\ M,M_1\
$and$\ M_2\ $are mean functions.
\end{document}
\begin{document}
\begin{center}
\uppercase{\bf A new estimate on complexity of binary generalized pseudostandard words}
\vskip 20pt
{\bf Josef Florian}\\
{\smallit Department of Mathematics, Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Czech Republic}\\
\vskip 10pt
{\bf L\!'ubom\'ira Dvo\v r\'akov\'a (born Balkov\'a)}\\
{\smallit Department of Mathematics, Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Czech Republic}\\
{\tt [email protected]}\\
\end{center}
\vskip 30pt
\centerline{\smallit Received: , Revised: , Accepted: , Published: }
\vskip 30pt
\centerline{\bf Abstract}
\noindent Generalized pseudostandard words were introduced by de Luca and De Luca in~\cite{LuDeLu}. In comparison with the palindromic and pseudopalindromic closures, little is known about the generalized pseudopalindromic closure and the associated generalized pseudostandard words. We present a~counterexample to Conjecture 43 from a~paper by Blondin Mass\'e et al.~\cite{MaPa} that estimated the complexity of binary generalized pseudostandard words as ${\mathcal C}(n) \leq 4n$ for all sufficiently large $n$. We conjecture that ${\mathcal C}(n)<6n$ for all $n \in \mathbb N$.
\pagestyle{myheadings}
\markright{\smalltt INTEGERS: 16 (2016)
}
\thispagestyle{empty}
\baselineskip=12.875pt
\vskip 30pt
\section{Introduction}
This paper focuses on generalized pseudostandard words. Such words were defined by de Luca and De Luca in 2006~\cite{LuDeLu} who studied generalized standard episturmian words, called generalized pseudostandard words, by considering pseudopalindromic closure of an infinite sequence of involutory antimorphisms.
While standard episturmian and pseudostandard words have been studied intensively and a~lot of their properties are known (see for instance~\cite{BuLuZa,DrJuPi,Lu,LuDeLu}), little has been shown so far about the generalized pseudopalindromic closure that gives rise to generalized pseudostandard words. In~\cite{LuDeLu} the authors defined generalized pseudostandard words and proved that the famous Thue--Morse word is an example of such words. Jajcayov\'a et al.~\cite{JaPeSt} characterize generalized pseudostandard words in the class of generalized Thue--Morse words.
Jamet et al.~\cite{JaPaRiVu} deal with fixed points of the palindromic and pseudopalindromic closure and formulate an open problem concerning fixed points of the generalized pseudopalindromic closure. The authors of this paper provide a~necessary and sufficient condition on periodicity of binary and ternary generalized pseudostandard words in~\cite{BaFl}. The most detailed study of binary generalized pseudostandard words has been so far provided by Blondin Mass\'e et al.~\cite{MaPa}:
\begin{itemize}
\item A~so-called normalization is described that guarantees for generalized pseudostandard words that no pseudopalindromic prefix is missed during the construction.
\item An effective algorithm -- the generalized Justin's formula -- for generation of generalized pseudostandard words is presented.
\item The standard Rote words are proven to be generalized pseudostandard words and the infinite sequence of antimorphisms that generates such words is studied.
\item A~conjecture is stated saying that the complexity of an infinite binary generalized pseudostandard word $\mathbf u$, i.e., the map ${\mathcal C}: \mathbb N \to \mathbb N$ defined by ${\mathcal C}(n)=$ the number of factors of length $n$ of the infinite word $\mathbf u$, satisfies:
$${\mathcal C}(n)\leq 4n \quad \text{for sufficiently large $n$.}$$
\end{itemize}
In this paper, we provide a~counterexample to the above conjecture by construction of a~generalized pseudostandard word satisfying ${\mathcal C}(n)>4n$ for all $n\geq 10$. We moreover show that ${\mathcal C}(n)>4.5 \ n $ for infinitely many $n \in \mathbb N$.
The work is organized as follows. In Section~\ref{sec:CoW} we introduce basics from combinatorics on words. Section~\ref{sec:palindrome} deals with the palindromic closure and summarizes known results. Similarly, Section~\ref{sec:pseudopalindrome} is devoted to the pseudopalindromic closure and its properties. In Section~\ref{sec:generalized_pseudopalindrome}, the generalized pseudopalindromic closure is defined and the normalization process is described. A~counterexample to Conjecture 43 from~\cite{MaPa} is constructed and its complexity is estimated in Section~\ref{sec:Conjecture4n}. In Section~\ref{sec:open_problems} we summarize known facts about the complexity of binary generalized pseudostandard words and state a~new conjecture: ${\mathcal C}(n) <6n$ for all $n \in \mathbb N$.
\section{Basics from combinatorics on words}\label{sec:CoW}
We restrict ourselves to the binary \emph{alphabet} $\{0,1\}$, we call $0$ and $1$ \emph{letters}. A~\emph{(finite) word} $w$ over $\{0,1\}$ is any finite binary sequence. Its length $|w|$ is the number of letters $w$ contains. The empty word -- the neutral element for concatenation of words -- is denoted by $\varepsilon$ and its length is set $|\varepsilon|=0$.
The set of all finite binary words is denoted by ${\{0,1\}}^*$.
An \emph{infinite word} $\mathbf u$ over $\{0,1\}$ is any binary infinite sequence.
The set of all infinite words is denoted $\{0,1\}^{\mathbb N}$.
A finite word $w$ is a~\emph{factor} of the infinite word $\mathbf u=u_0u_1u_2\ldots$ with $u_i \in \{0,1\}$ if there exists an index $i\geq 0$ such that $w=u_iu_{i+1}\ldots u_{i+|w|-1}$. Such an index is called an \emph{occurrence} of $w$ in $\mathbf u$. The symbol ${\mathcal L}(\mathbf u)$ is used for the set of factors of $\mathbf u$ and is called the \emph{language} of $\mathbf u$, similarly ${\mathcal L}_n(\mathbf u)$ stands for the set of factors of $\mathbf u$ of length $n$.
A~\emph{left special factor} of a~binary infinite word $\mathbf{u}$ is any factor $v$ such that both $0v$ and $1v$ are factors of $\mathbf{u}$. A~\emph{right special factor} is defined analogously. Finally, a~factor of $\mathbf{u}$ that is both right and left special is called a~\emph{bispecial}. We distinguish the following types of bispecials over $\{0,1\}$:
\begin{itemize}
\item A~\emph{weak bispecial} $w$ satisfies that only $0w1$ and $1w0$, or only $0w0$ and $1w1$ are factors of $\mathbf u$.
\item A~\emph{strong bispecial} $w$ satisfies that all $0w0$, $0w1$, $1w0$ and $1w1$ are factors of $\mathbf{u}$.
\item We do not use a~special name for bispecials that are neither weak nor strong.
\end{itemize}
Let $w \in \mathcal{L}({\mathbf u})$. A~\emph{left extension} of $w$ is any word $aw \in \mathcal{L}(\mathbf{u})$, where $a \in \{0,1\}$, and a~\emph{right extension} is defined analogously. A~\emph{bilateral extension} of $w$ is then $awb \in \mathcal{L}({\mathbf u})$, where $a,b \in \{0,1\}$. The set of left (resp. right extensions) of $w$ is denoted $\mathrm{Lext}(w)$ (resp. $\mathrm{Rext}(w)$). The \emph{(factor) complexity} of $\mathbf{u}$ is the map $\mathcal{C}_{{\mathbf u}}: {\mathbb N} \rightarrow {\mathbb N}$ defined as $$\mathcal{C}_{{\mathbf u}}(n) =\text{the number of factors of ${\mathbf u}$ of length $n$.}$$
In order to determine the complexity of an infinite word $\mathbf u$, the well-known formula for the \emph{second difference of complexity}~\cite{Cas} may be useful:
\begin{equation}\label{complexity2diff}
\Delta^2\mathcal{C}_{{\mathbf u}}(n) = \Delta\mathcal{C}_{{\mathbf u}}(n+1) - \Delta\mathcal{C}_{{\mathbf u}}(n) = \sum_{w\in \mathcal{L}_n({\mathbf u})}B(w),
\end{equation}
where $$B(w) = \#\{awb \mid a, b \in \{0,1\}, awb \in \mathcal{L}({\mathbf u})\} - \#\mathrm{Rext}(w) - \#\mathrm{Lext}(w) + 1$$
and the \emph{first difference of complexity} is defined as $\Delta\mathcal{C}_{{\mathbf u}}(n)=\mathcal{C}_{{\mathbf u}}(n+1)-\mathcal{C}_{{\mathbf u}}(n)$.
It is readily seen that for any factor of a~binary infinite word ${\mathbf u}$ the following holds:
\begin{itemize}
\item $B(w) = 1$ if and only if $w$ is a~strong bispecial.
\item $B(w) = -1$ if and only if $w$ is a~weak bispecial.
\item $B(w) = 0$ otherwise.
\end{itemize}
An infinite word $\mathbf u$ is called \emph{recurrent} if each of its factors occurs infinitely many times in $\mathbf u$. It is said to be \emph{uniformly recurrent} if for every $n \in \mathbb N$ there exists a~length $r(n)$ such that every factor of length $r(n)$ of $\mathbf u$ contains all factors of length $n$ of $\mathbf u$.
We say that an infinite word ${\mathbf u}$ is \emph{eventually periodic} if there exist $v, w \in \{0,1\}^{*}$ with $v \neq \varepsilon$ such that ${\mathbf u}=wv^{\omega}$, where $\omega$ denotes an infinite repetition. If $w=\varepsilon$, we call $\mathbf u$ \emph{(purely) periodic}. If $\mathbf u$ is not eventually periodic, $\mathbf u$ is said to be \emph{aperiodic}.
It is not difficult to see that if an infinite word is recurrent and eventually periodic, then it is necessarily purely periodic.
A~fundamental result of Morse and Hedlund \cite{MoHe1} states that a
word $\mathbf u$ is eventually periodic if and only if for some $n$ its
complexity is less than or equal to $n$. Infinite words of complexity $n+1$ for all $n$ are called \emph{Sturmian words}, and hence they are aperiodic words of the
smallest complexity.
Among Sturmian words we distinguish the class of \emph{standard (or characteristic) Sturmian words} satisfying that their left special factors are their prefixes at the same time. The Fibonacci word from Example~\ref{Fibonacci} is a~standard Sturmian word.
The first systematic study of Sturmian words was by Morse and Hedlund in~\cite{MorHed1940}.
A \emph{morphism} is a~map $\varphi: \{0,1\}^* \rightarrow \{0,1\}^*$ such that for every $v, w \in \{0,1\}^*$ we have $\varphi(vw) = \varphi(v) \varphi(w)$. It is clear that in order to define a~morphism, it suffices to provide letter images. A morphism is
\emph{prolongable} on $a \in \{0,1\}$ if $|\varphi(a)|\geq 2$ and
$a$ is a prefix of $\varphi(a)$.
If $\varphi $ is prolongable on
$a$, then $\varphi^n(a)$ is a proper prefix of $\varphi^{n+1}(a)$
for all $n \in \mathbb{N}$. Therefore, the sequence
$(\varphi^n(a))_{n\geq 0}$ of words defines an infinite word $\mathbf u$
that is a~fixed point of $\varphi$. Such a word $\mathbf u$ is a~\emph{(pure) morphic word}.
\begin{example}\label{Fibonacci}
The most studied Sturmian word is the
so-called Fibonacci word
\[{\mathbf u}_F=01001010010010100101001001010010\ldots\]
fixed by the morphism $\varphi_F(0)=01$ and $\varphi_F(1)=0$.
\end{example}
\begin{example}\label{ThueMorse}
Another well-known morphic word that however does not belong to Sturmian words is the
Thue-Morse word
\[{\mathbf u}_{TM}=01101001100101101001011001101001\ldots\]
fixed by the morphism $\varphi_{TM}(0)=01$ and $\varphi_{TM}(1)=10$ (we start with the letter $0$ when generating ${\mathbf u}_{TM}$).
\end{example}
An \emph{involutory antimorphism} is a~map $\vartheta: \{0,1\}^* \rightarrow \{0,1\}^*$ such that for every $v, w \in \{0,1\}^*$ we have $\vartheta(vw) = \vartheta(w) \vartheta(v)$ and moreover $\vartheta^2$ equals identity. There are only two involutory antimorphisms over the alphabet $\{0,1\}$: the \emph{reversal (mirror) map} $R$ satisfying $R(0)=0, R(1)=1$, and the \emph{exchange antimorphism} $E$ given by $E(0)=1, E(1)=0$. We use the notation $\overline{0} = 1$ and $\overline{1} = 0$, $\overline{E} = R$ and $\overline{R} = E$. A finite word $w$ is a~\emph{palindrome} if $w = R(w)$, and $w$ is an $E$-\emph{palindrome} ({\em pseudopalindrome}) if $w = E(w)$.
\section{Palindromic closure}\label{sec:palindrome}
In this section we describe the construction of binary infinite words generated by the palindromic closure. Further on, we recall some properties of such infinite words. We use the papers~\cite{DrJuPi,Lu} as our source.
\begin{definition}
Let $w \in \{0,1\}^*$. The \emph{palindromic closure} $w^R$ of a~word $w$ is the shortest palindrome having $w$ as prefix.
\end{definition}
Consider for instance the word $w=0100$. Its palindromic closure $w^R$ equals $010010$. It is readily seen that $|w| \leq |w^R| \leq 2|w|-1$. For $w = 010$ we have $w^R = 010$ and for $w = 0001$ we obtain $w^R = 0001000$. It is worth noticing that the palindromic closure can be constructed in the following way: Find the longest palindromic suffix $s$ of $w$. Denote $w = ps$. Then $w^R = psR(p)$. For instance, for $w = 0100$ we have $s = 00$ and $p = 01$. Thus $w^R = 01\underline{00}10$.
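The construction just described is straightforward to implement; a minimal Python sketch follows (the function name is ours and the snippet serves only as an illustration).
\begin{verbatim}
# Palindromic closure: find the longest palindromic suffix s of w = ps
# and return w^R = p s R(p).
def pal_closure(w: str) -> str:
    for i in range(len(w)):
        s = w[i:]
        if s == s[::-1]:                 # longest palindromic suffix found
            return w + w[:i][::-1]
    return w                             # w is empty

print(pal_closure("0100"))  # 010010
print(pal_closure("0001"))  # 0001000
\end{verbatim}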
\begin{definition}
Let $\Delta = \delta_1 \delta_2 \ldots$, where $\delta_i \in \{0,1\}$ for all $i \in \mathbb{N}$. The infinite word $\mathbf{u}(\Delta)$ \emph{generated by the palindromic closure (or $R$-standard word)} is the word whose prefixes $w_n$ are obtained from the recurrence relation
$$w_{n+1} = (w_n \delta_{n+1})^{R},$$ $$w_0 = \varepsilon.$$ The sequence $\Delta$ is called the \emph{directive sequence} of the word $\mathbf{u}(\Delta)$.
\end{definition}
\noindent {\bf Properties of the $R$-standard word} $\mathbf{u}=\mathbf{u}(\Delta)\in \{0,1\}^{\mathbb N}$:
\begin{enumerate}
\item The sequence of prefixes $(w_k)_{k\geq 0}$ of $\mathbf{u}$ contains every palindromic prefix of~$\mathbf{u}$.
\item The language of $\mathbf u$ is closed under reversal, i.e., $w$ is a~factor of $\mathbf u$ $\Leftrightarrow$ $R(w)$ is a~factor of $\mathbf u$.
\item The word $\mathbf u$ is uniformly recurrent.
\item Every left special factor of $\mathbf u$ is a~prefix of $\mathbf u$.
\item If $w$ is a bispecial factor of $\mathbf u$, then $w=w_k$ for some $k$.
\item Since $\mathbf u$ is (uniformly) recurrent, it is either aperiodic or purely periodic.
\item The word $\mathbf u$ is standard Sturmian if and only if both $0$ and $1$ occur in the directive sequence $\Delta$ infinitely many times.
\item The word $\mathbf u$ is periodic if and only if $\Delta$ is of the form $v0^{\omega}$ or $v1^{\omega}$ for some $v \in \{0,1\}^*$.
\end{enumerate}
\begin{example}
The Fibonacci word $\mathbf{u_F}$ defined in Example~\ref{Fibonacci} is the most famous example of an infinite word generated by the palindromic closure. It is left as an exercise for the reader to show that $\mathbf{u_F} = \mathbf{u}((01)^{\omega})$. Let us form the first few prefixes $w_k$:
\begin{align}
w_1 =& \; 0 \nonumber \\
w_2 =& \; 010 \nonumber \\
w_3 =& \; 010010 \nonumber \\
w_4 =& \; 01001010010. \nonumber
\end{align}
\end{example}
\section{Pseudopalindromic closure}\label{sec:pseudopalindrome}
Let us recall here the definition of the pseudopalindromic closure and the construction of binary infinite words generated by the pseudopalindromic closure. Some of their properties are similar to those of words generated by the palindromic closure, but their complexity, in particular, is already slightly more involved. Pseudopalindromes and the pseudopalindromic closure have been studied for instance in~\cite{BuLuZa, LuDeLu}.
\begin{definition}
Let $w \in \{0,1\}^*$. The \emph{pseudopalindromic closure} $w^E$ of a~word $w$ is the shortest $E$-palindrome having $w$ as prefix.
\end{definition}
Consider $w=0010$ which has pseudopalindromic closure $w^E=001011$. The following inequalities hold: $|w| \leq |w^E| \leq 2|w|$. For instance for $w = 0101$ we have $w^E = 0101$, while for $w = 000$ we get $w^E = 000111$. Let us point out that the pseudopalindromic closure may be constructed in the following way: Find the longest pseudopalindromic suffix of~$w$. Denote it by $s$ and denote the remaining prefix by $p$, i.e., $w = ps$. Then $w^E = psE(p)$. For $w=0010$, we obtain $p = 00$ and $s = 10$, therefore $w^E = 00\underline{10}11$.
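The analogous computation of the pseudopalindromic closure only replaces the reversal by the antimorphism $E$; a minimal Python sketch (again an illustration with names of our choosing) mirrors the previous one.
\begin{verbatim}
# Pseudopalindromic closure: E reverses the word and exchanges 0 <-> 1;
# for w = ps with s the longest E-palindromic suffix, w^E = p s E(p).
def E(w: str) -> str:
    return w[::-1].translate(str.maketrans("01", "10"))

def E_closure(w: str) -> str:
    for i in range(len(w)):
        s = w[i:]
        if s == E(s):                    # longest E-palindromic suffix found
            return w + E(w[:i])
    return w + E(w)                      # only the empty suffix is an E-palindrome

print(E_closure("0010"))  # 001011
print(E_closure("000"))   # 000111
\end{verbatim}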
\begin{definition}
Let $\Delta = \delta_1 \delta_2 \ldots$, where $\delta_i \in \{0,1\}$ for all $i \in \mathbb{N}$. The infinite word $\mathbf{u}_E(\Delta)$ \emph{generated by the pseudopalindromic closure (or $E$-standard or pseudostandard word)} is the word whose prefixes $w_n$ are obtained from the recurrence relation
$$w_{n+1} = (w_n \delta_{n+1})^{E},$$
$$w_0 = \varepsilon.$$
The sequence $\Delta$ is called the \emph{directive sequence} of the word $\mathbf{u}_E(\Delta)$.
\end{definition}
\noindent {\bf Properties of the $E$-standard word} $\mathbf{u}=\mathbf{u}_E(\Delta) \in \{0,1\}^{\mathbb N}$:
\begin{enumerate}
\item The sequence of prefixes $(w_k)_{k\geq 0}$ of $\mathbf{u}$ contains every pseudopalindromic prefix of $\mathbf{u}$.
\item The language of $\mathbf u$ is closed under the exchange antimorphism, i.e., $w$ is a~factor of $\mathbf u$ $\Leftrightarrow$ $E(w)$ is a~factor of $\mathbf u$.
\item The word $\mathbf u$ is uniformly recurrent.
\item A close relation between $R$-standard and $E$-standard words has been revealed in Theorem 7.1 in~\cite{LuDeLu}:
Let $\Delta = \delta_1 \delta_2 \ldots$, where $\delta_i \in \{0,1\}$ for all $i \in \mathbb{N}$. Then $$\mathbf{u}_E(\Delta)=\varphi_{TM}({\mathbf u}(\Delta)).$$
In words, any $E$-standard word is the image by the Thue-Morse morphism $\varphi_{TM}$ of the $R$-standard word with the same directive sequence $\Delta$. Moreover, the set of pseudopalindromic prefixes of $\mathbf{u}_E(\Delta)$ equals the image by $\varphi_{TM}$ of the set of palindromic prefixes of $\mathbf{u}(\Delta)$.
\item If $\Delta$ contains both $0$ and $1$ infinitely many times, then every prefix of $\mathbf u$ is left special.
\item In contrast to infinite words generated by the palindromic closure, $\mathbf u$ can contain left special factors that are not prefixes. Nevertheless, such left special factors can be of length at most $2$.
\item If $w$ is a bispecial factor of $\mathbf u$ of length at least $3$, then $w=w_k$ for some $k$.
\item Since $\mathbf u$ is (uniformly) recurrent, it is either aperiodic or purely periodic.
\item The complexity of $\mathbf u$ satisfies ${\mathcal C}_{\mathbf u}(n+1)-{\mathcal C}_{\mathbf u}(n)=1$ for all $n \geq 3$ if and only if both $0$ and $1$ occur in the directive sequence $\Delta$ infinitely many times.
\item The word $\mathbf u$ is periodic if and only if $\Delta$ is of the form $v0^{\omega}$ or $v1^{\omega}$ for some $v \in \{0,1\}^*$.
\end{enumerate}
\begin{example}
Let us illustrate the construction of an infinite word generated by the pseudopalindromic closure for $\mathbf{u} = \mathbf{u}_E((01)^{\omega})$. Here are the first prefixes $w_k$:
\begin{align}
w_1 =& \; 01 \nonumber \\
w_2 =& \; 011001 \nonumber \\
w_3 =& \; 011001011001 \nonumber \\
w_4 =& \; 0110010110011001011001. \nonumber
\end{align}
Notice that $1$ and $10$ are left special factors that are not prefixes.
The reader can also check that $\mathbf u$ is the image by $\varphi_{TM}$ of the Fibonacci word, i.e., $\mathbf u=\varphi_{TM}({\mathbf u}_F)$.
\end{example}
\section{Generalized pseudopalindromic closure}\label{sec:generalized_pseudopalindrome}
Generalized pseudostandard words form a~generalization of infinite words generated by the palindromic (resp. pseudopalindromic) closure; such a~construction was first described and studied in~\cite{LuDeLu}.
Let us start with their definition and known properties; we use the papers~\cite{JaPaRiVu,LuDeLu,MaPa}.
\subsection{Definition of generalized pseudostandard words}
Let us underline that we again restrict ourselves only to the binary alphabet $\{0,1\}$.
\begin{definition}
Let $\Delta = \delta_1 \delta_2 \ldots$ and $\Theta = \vartheta_1 \vartheta_2 \ldots$, where $\delta_i \in \{0,1\}$ and $\vartheta_i \in \{E, R\}$ for all $i \in \mathbb{N}$. The infinite word $\mathbf{u}(\Delta, \Theta)$ \emph{generated by the generalized pseudopalindromic closure (or generalized pseudostandard word)} is the word whose prefixes $w_n$ are obtained from the recurrence relation
$$w_{n+1} = (w_n \delta_{n+1})^{\vartheta_{n+1}},$$ $$w_0 = \varepsilon.$$ The sequence $\Lambda = (\Delta, \Theta)$ is called the \emph{directive bi-sequence} of the word $\mathbf{u}(\Delta, \Theta)$.
\end{definition}
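The prefixes $w_n$ are easy to generate mechanically; the following minimal, self-contained Python sketch implements the closure for both antimorphisms and reproduces, for instance, the first prefixes of ${\mathbf u}((011)^{\omega},(EER)^{\omega})$ from Example~\ref{ex:norm} below (the function names are ours).
\begin{verbatim}
# Generate the prefixes w_n of u(Delta, Theta) by iterated closure.
def image(w: str, t: str) -> str:
    # theta-image of w: R reverses, E reverses and exchanges 0 <-> 1
    w = w[::-1]
    return w if t == "R" else w.translate(str.maketrans("01", "10"))

def closure(w: str, t: str) -> str:
    # shortest theta-palindrome having w as a prefix
    for i in range(len(w) + 1):
        if w[i:] == image(w[i:], t):
            return w + image(w[:i], t)

def prefixes(delta: str, theta: str):
    w, out = "", []
    for d, t in zip(delta, theta):
        w = closure(w + d, t)
        out.append(w)
    return out

print(prefixes("0110", "EERE"))  # first four letters of (011)^w and (EER)^w
# ['01', '011001', '01100110', '0110011001']
\end{verbatim}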
\noindent {\bf Properties of the generalized pseudostandard word} $\mathbf u=\mathbf{u}(\Delta, \Theta) \in \{0,1\}^{\mathbb N}$:
\begin{enumerate}
\item If $R$ (resp. $E$) is contained in $\Theta$ infinitely many times, then the language of $\mathbf u$ is closed under reversal (resp. under the exchange antimorphism).
\item The word $\mathbf u$ is uniformly recurrent.
\end{enumerate}
\subsection{Normalization}
In contrast to $E$- and $R$-standard words, the sequence $(w_k)_{k\geq 0}$ of prefixes of a~generalized pseudostandard word ${\mathbf u}(\Delta, \Theta)$ does not have to contain all $E$-palindromic and palindromic prefixes of ${\mathbf u}(\Delta, \Theta)$. Blondin Mass\'e et al.~\cite{MaPa} introduced the notion of normalization of the directive bi-sequence.
\begin{definition}
A~directive bi-sequence $\Lambda=(\Delta, \Theta)$ of a~generalized pseudostandard word $\mathbf{u}(\Delta, \Theta)$ is called \emph{normalized} if the sequence of prefixes $(w_k)_{k\geq 0}$ of $\mathbf{u}(\Delta, \Theta)$ contains all $E$-palindromic and palindromic prefixes of ${\mathbf u}(\Delta, \Theta)$.
\end{definition}
\begin{example} \label{ex:norm}
Let $\Lambda=(\Delta, \Theta) = ((011)^{\omega}, (EER)^{\omega})$. Let us write down the first prefixes of $\mathbf{u}(\Delta, \Theta)$:
\begin{align}
w_1 =& \;01 \nonumber \\
w_2 =& \;011001 \nonumber \\
w_3 =& \;01100110 \nonumber \\
w_4 =& \;0110011001. \nonumber
\end{align}
The sequence $w_k$ does not contain for instance the palindromic prefixes $0$ and $0110$ of $\mathbf{u}(\Delta, \Theta)$.
\end{example}
The authors of~\cite{MaPa} proved that every directive bi-sequence $\Lambda$ can be normalized, i.e., transformed to such a~form $\widetilde \Lambda$ that the new sequence $(\widetilde{w_k})_{k\geq 0}$ contains already every $E$-palindromic and palindromic prefix and $\widetilde \Lambda$ generates the same generalized pseudostandard word as~$\Lambda$.
\begin{theorem}[\cite{MaPa}]\label{thm:norm}
Let $\Lambda = (\Delta, \Theta)$ be a~directive bi-sequence. Then there exists a~normalized directive bi-sequence $\widetilde{\Lambda} = (\widetilde{\Delta}, \widetilde{\Theta})$ such that ${\mathbf u}(\Delta, \Theta) = {\mathbf u}(\widetilde{\Delta}, \widetilde{\Theta})$.
Moreover, in order to normalize the sequence $\Lambda$, it suffices firstly to execute the following changes of its prefix (if it is of the corresponding form):
\begin{itemize}
\item $(a\bar{a}, RR) \rightarrow (a\bar{a}a, RER)$,
\item $(a^i, R^{i-1}E) \rightarrow (a^i\bar{a}, R^iE)$ for $i \geq 1$,
\item $(a^i\bar{a}\bar{a}, R^iEE) \rightarrow (a^i\bar{a}\bar{a}a, R^iERE)$ for $i \geq 1$,
\end{itemize}
and secondly to replace step by step from left to right every factor of the form:
\begin{itemize}
\item $(ab\bar{b}, \vartheta\overline{\vartheta}\overline{\vartheta}) \rightarrow (ab\bar{b}b, \vartheta\overline{\vartheta}\vartheta\overline{\vartheta})$,
\end{itemize}
where $a, b \in \{0,1\}$ and $\vartheta \in \{E,R\}$.
\end{theorem}
\begin{example} \label{ex:norm2}
Let us normalize the directive bi-sequence $\Lambda = ((011)^{\omega}, (EER)^{\omega})$ from Example~\ref{ex:norm}.
According to the procedure from Theorem~\ref{thm:norm}, we transform first the prefix of $\Lambda$. We replace $(0,E)$ with $(01,RE)$ and get $\Lambda_1 = (01(110)^{\omega}, RE(ERE)^{\omega})$. The prefix of $\Lambda_1$ is still of a~forbidden form, we replace thus the prefix $(011,REE)$ with $(0110, RERE)$ and get $\Lambda_2 = (0110(101)^{\omega}, RERE(REE)^{\omega})$. The prefix of $\Lambda_2$ is now correct. It remains to replace from left to right the factors $(101, REE)$ with $(1010, RERE)$. Finally, we obtain $\widetilde{\Lambda} = (0110(1010)^{\omega}, RERE(RERE)^{\omega})=(01(10)^{\omega}, (RE)^{\omega})$, which is already normalized. Let us write down the first prefixes $(\widetilde{w_k})_{k\geq 0}$ of ${\mathbf u}(\widetilde{\Lambda})$:
\begin{align}
\widetilde{w_1} =& \;0 \nonumber \\
\widetilde{w_2} =& \;01 \nonumber \\
\widetilde{w_3} =& \;0110 \nonumber \\
\widetilde{w_4} =& \;011001. \nonumber
\end{align}
We can notice that the new sequence $(\widetilde{w_k})_{k\geq 0}$ now contains the palindromes $0$ and $0110$ that were skipped in Example~\ref{ex:norm}.
\end{example}
\section{Conjecture 4n}\label{sec:Conjecture4n}
As a~new result, we will construct a~counterexample to Conjecture $4n$ (stated as Conjecture $43$ in~\cite{MaPa}):
\begin{conjecture}[Conjecture 4n]
For every binary generalized pseudostandard word ${\mathbf u}$ there exists $n_0 \in {\mathbb N}$ such that $\mathcal{C}_{{\mathbf u}}(n) \leq 4n$ for all $n>n_0$.
\end{conjecture}
We have found a~counterexample ${\mathbf u_p} = {\mathbf u}(1^{\omega}, (EERR)^{\omega})$ that satisfies $\mathcal{C}_{{\mathbf u_p}}(n) > 4n$ for all $n \geq 10$. Moreover, we will show at the end of this section that $\mathbf u_p$ even satisfies ${\mathcal C}(n)\geq 4.577 \ n$ for infinitely many $n \in \mathbb N$. Let us write down the first prefixes $w_n$ of ${\mathbf u}_p$:
\begin{align}
w_1 =& \; 10 \nonumber \\
w_2 =& \; 1010 \nonumber \\
w_3 =& \; 10101 \nonumber \\
w_4 =& \; 1010110101 \nonumber \\
w_5 =& \; 1010110101100101001010 \nonumber \\
w_6 =& \; 1010110101100101001010110101100101001010 \nonumber
\end{align}
It is readily seen that $w_{4k+1}$ and $w_{4k+2}$ are $E$-palindromes, while $w_{4k+3}$ and $w_{4k+4}$ are palindromes for all $k \in \mathbb N$.
The aim of this section is to prove the following theorem.
\begin{theorem}\label{thm:counterexample}
The infinite word ${\mathbf u_p} = {\mathbf u}(1^{\omega}, (EERR)^{\omega})$ satisfies
$$\mathcal{C}_{{\mathbf u_p}}(n) > 4n \quad \text{for all $n \geq 10$.}$$
\end{theorem}
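The statement can also be tested experimentally: the following minimal Python sketch (an illustration, not a proof) generates a prefix of $\mathbf u_p$ by iterated closure and counts its distinct factors of small lengths. Counting factors of a finite prefix only yields a lower bound on $\mathcal{C}_{{\mathbf u_p}}(n)$, but by Lemma~\ref{lemma:vsechnyfakt} the prefix used below already contains all factors of the lengths considered.
\begin{verbatim}
# Count distinct factors of a long prefix of u_p = u(1^w, (EERR)^w).
def image(w, t):
    w = w[::-1]
    return w if t == "R" else w.translate(str.maketrans("01", "10"))

def closure(w, t):
    for i in range(len(w) + 1):
        if w[i:] == image(w[i:], t):
            return w + image(w[:i], t)

w = ""
for k in range(14):                      # 14 closure steps give a prefix of a few thousand letters
    w = closure(w + "1", "EERR"[k % 4])

for n in range(10, 16):
    count = len({w[i:i + n] for i in range(len(w) - n + 1)})
    print(n, count, 4 * n)               # the count exceeds 4n, in line with the theorem
\end{verbatim}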
In order to prove Theorem~\ref{thm:counterexample}, we have to describe all weak bispecial factors and find enough strong bispecial factors so that it provides us with a~lower bound on the second difference of complexity (see Equation~\eqref{complexity2diff}) that leads to the strict lower bound equal to $4n$ on the complexity of $\mathbf u_p$. The partial steps will be formulated in several lemmas and observations.
Let us start with a~description of the relation between the consecutive prefixes $w_k$ and $w_{k+1}$ that will turn out to be useful in many proofs. The knowledge of the normalized form of the directive bi-sequence is needed.
\begin{observation}\label{obs:normtvar}
The directive bi-sequence $\Lambda = (1^{\omega}, (EERR)^{\omega})$ has the normalized form $\widetilde{\Lambda} = (1010(1)^{\omega}, RERE(RREE)^{\omega})$.
\end{observation}
\begin{proof}
The normalized form is obtained using the algorithm from Theorem~\ref{thm:norm}.
\end{proof}
\begin{example}\label{ex:normtvar}
The prefixes $\widetilde{w_n}$ of ${\mathbf u}(\widetilde{\Lambda})$ satisfy:
\begin{align}
\widetilde{w_1} =& \; 1 \nonumber \\
\widetilde{w_2} =& \; 10 \nonumber \\
\widetilde{w_3} =& \; 101 \nonumber \\
\widetilde{w_n} =& \; w_{n-2} \quad \text{for all $n\geq 4$.} \nonumber
\end{align}
\end{example}
\begin{lemma} \label{lemma:consecutive_members}
For the infinite word ${\mathbf u_p} = {\mathbf u}(1^{\omega}, (EERR)^{\omega})$ and $k \in \mathbb{N}$, the following relations hold. For $z \leq 0$ we set $w_z = \varepsilon$.
\begin{align}
w_{4k+1} = & \;w_{4k}10E(w_{4k}) \nonumber \\
w_{4k+2} = & \;w_{4k+1}w_{4k-2}^{-1}w_{4k+1} \nonumber \\
w_{4k+3} = & \;w_{4k+2}(010)^{-1}R(w_{4k+2}) \nonumber \\
w_{4k+4} = & \;w_{4k+3}w_{4k}^{-1}w_{4k+3}. \nonumber
\end{align}
\end{lemma}
\begin{proof}
The statement follows from Theorem~29 in~\cite{MaPa}. However, we prefer to prove it here to keep the paper self-contained.
One can easily check that the statement holds for $w_1$, $w_2$, $w_3$ and $w_4$. Let $k\geq 1$.
\begin{itemize}
\item
In order to get the $E$-palindrome $w_{4k+1}$, it is necessary to find the longest $E$-palindromic suffix of $w_{4k}1$. In other words, it is necessary to find the longest $E$-palindromic suffix preceded by $0$ of the palindrome $w_{4k}$. Taking into account the normalized form of the directive bi-sequence $\widetilde{\Lambda}$ from Observation~\ref{obs:normtvar}, for every $E$-palindromic (resp. palindromic) prefix $p$ of $\mathbf{u_p}$ there exists $\ell \in \mathbb N$ such that $p = \widetilde{w_\ell}$. Therefore all $E$-palindromic suffixes of $w_{4k}$ are of the form $R(\widetilde{w_\ell})$, where $\widetilde{w_\ell} = E(\widetilde{w_\ell})$. However, we search only for the longest $E$-palindromic suffix of $w_{4k}$ preceded by $0$. If $0R(\widetilde{w_\ell})$ is a~suffix of $w_{4k}$, then $\widetilde{w_\ell}0$ has to be the prefix of $w_{4k}$. Using the normalized form $\widetilde{\Lambda}$ we nevertheless notice that no $\widetilde{w_\ell} = E(\widetilde{w_\ell})$ is followed by $0$. Consequently, $w_{4k+1} = w_{4k}10E(w_{4k})$.
\item
To obtain the $E$-palindrome $w_{4k+2}$, we look for the longest $E$-palindromic suffix of $w_{4k+1}1$. We proceed analogously as in the previous case, thus we search for the longest $E$-palindromic prefix $\widetilde{w_\ell}$ of $w_{4k+1}$ followed by $1$. Then $E(\widetilde{w_\ell}1) = 0\widetilde{w_\ell}$ is the longest $E$-palindromic suffix of $w_{4k+1}$ preceded by $0$. It follows from the form of $\widetilde{\Lambda}$ that every $E$-palindromic prefix $\widetilde{w_\ell}$ of $w_{4k+1}$ is followed by $1$.
Moreover, according to Example~\ref{ex:normtvar}, $E$-palindromes in the sequence $(w_k)_{k\geq 0}$ coincide with $E$-palindromes in the sequence $(\widetilde{w_k})_{k\geq 0}$, therefore the longest $E$-palindromic prefix $\widetilde{w_\ell}$ of $w_{4k + 1}$ followed by $1$ is $w_{4k-2}$. Consequently, $w_{4k+2} = w_{4k+1}w_{4k-2}^{-1}w_{4k+1}$.
\item The remaining two cases are similar. They are left as an exercise for the reader.
\end{itemize}
\end{proof}
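For instance, for $k=1$ the first two relations give $w_5=w_410E(w_4)=1010110101\cdot10\cdot0101001010$ and $w_6=w_5w_2^{-1}w_5=w_5\cdot110101100101001010$, in agreement with the prefixes of $\mathbf{u_p}$ listed above.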
It is not difficult to find strong bispecials among members of the sequence $(w_k)_{k\geq 0}$.
\begin{lemma}\label{lemma:strongBS}
Consider ${\mathbf u_p} = {\mathbf u}(1^{\omega}, (EERR)^{\omega})$ and let $k \in \mathbb N$. Then $w_{4k+1}$ and $w_{4k+3}$ are strong bispecials of ${\mathbf u_p}$. Moreover, $1w_{4k+1}0$ is a~central factor of $w_{4(k+1)+1}$ and $0w_{4k+3}0$ is a~central factor of $w_{4(k+1)+3}$.
\end{lemma}
\begin{proof}
Let us show the statement for the $E$-palindrome $w_{4k+1}$.
The proof for the palindrome $w_{4k+3}$ is similar.
Since $\Delta = 1^{\omega}$, the prefix $w_{4k+1}$ is followed by $1$. Consider now any $E$-palindrome $w_j$ such that $j>4k+1$. Since $w_j = E(w_j)$ and $w_{4k+1}1$ is a~prefix of $w_j$, the factor $0E(w_{4k+1})=0w_{4k+1}$ is a~suffix of $w_j$. The prefix $w_j$ is again followed by $1$, therefore $0w_{4k+1}1 \in \mathcal{L}(\mathbf{u_p})$. Consider further on any palindrome $w_\ell$ such that $\ell > 4k+1$. Since $w_{4k+1}1$ is again a~prefix of $w_\ell = R(w_\ell)$, the factor $1R(w_{4k+1})$ is a~suffix of $w_\ell$. The prefix $w_\ell$ is followed by $1$, thus $1R(w_{4k+1})1 \in \mathcal{L}(\mathbf{u_p})$. Since the language is closed under $R$ and $E$, we deduce that $1w_{4k+1}1, 0w_{4k+1}0 \in \mathcal{L}(\mathbf{u_p})$.
Let us find the missing bilateral extension $1w_{4k+1}0$ of the $E$-palindrome $w_{4k+1}$. We will show that $1w_{4k+1}0$ is a~central factor of $w_{4(k+1)+1}$. By Lemma~\ref{lemma:consecutive_members} we have $$w_{4(k+1)+1} = w_{4(k+1)}10E(w_{4(k+1)}).$$ The factor $w_{4k}1$ is a~prefix of the palindrome $w_{4(k+1)}$, therefore $1R(w_{4k}) = 1w_{4k}$ is a~suffix of $w_{4(k+1)}$. It implies moreover that $E(1w_{4k})=E(w_{4k})0$ is a~prefix of $E(w_{4(k+1)})$. Altogether we see that $$1w_{4k+1}0 = 1w_{4k}10E(w_{4k})0$$ is a~central factor of $w_{4(k+1)+1}$.
\end{proof}
Let us indicate how we managed to find weak bispecials. The factor $w_k$ has $w_{k-1}1$ as prefix. When constructing $w_k=\vartheta(w_k)$, one looks for the longest $\vartheta$-palindromic suffix of $w_{k-1}1$. In order to get a~weak bispecial, we look instead for the longest $\overline{\vartheta}$-palindromic suffix of $w_{k-1}1$. If this suffix is longer than the longest $\vartheta$-palindromic suffix, we check whether its bilateral extension is also a~$\overline{\vartheta}$-palindrome. If yes, we extend it and continue in the same way. When we arrive at the moment where it is not possible to extend it any more, we have a~bispecial factor: We get either a~factor of the form $ap\overline{a}$, where $p=R(p)$, and since the language is closed under reversal, $\overline{a}pa$ is a~factor of ${\mathbf u_p}$ too. Or we get a~factor of the form $apa$, where $p=E(p)$, and since the language is closed under the exchange antimorphism, $\overline{a}p\overline{a}$ is a~factor of ${\mathbf u_p}$ too.
\begin{example}\label{ex:shortest_weakBS}
Let us show how to obtain the shortest weak bispecials using the approach described above. The factor $w_5$ is an $E$-palindrome. The longest $E$-palindromic suffix of $w_4 1$ is $\varepsilon$, while the longest palindromic suffix is $1101011$. This palindrome may moreover be extended by $0$ on both sides to $011010110$. We have thus obtained the shortest weak bispecial. The second weak bispecial of the same length is $E(011010110)=100101001$.
\end{example}
Let us now provide a~formal description of weak bispecials.
\begin{lemma}\label{lemma:BSs_k}
Consider ${\mathbf u_p} = {\mathbf u}(1^{\omega}, (EERR)^{\omega})$.
Then for all $k \in \mathbb N, \ k \geq 1,$ the following factors of ${\mathbf u_p}$ are bispecials:
$$s_{4k+1} = R(w_{4(k-1)+1})w_{4(k-1)}^{-1}w_{4(k-1)+3}w_{4(k-1)}^{-1}w_{4(k-1)+1},$$
$$s_{4k+3} = E(w_{4(k-1)+3})w_{4k-2}^{-1}w_{4k+1}w_{4k-2}^{-1}w_{4(k-1)+3}.$$ Moreover, the palindrome $s_{4k+1}$ is contained in the prefix $w_{4k+1}$ and has the bilateral extensions $1s_{4k+1}0$ and $0s_{4k+1}1$, and the $E$-palindrome $s_{4k+3}$ is contained in the prefix $w_{4k+3}$ and has the bilateral extensions $0s_{4k+3}0$ and~$1s_{4k+3}1$.
\end{lemma}
\begin{proof}
Let us show the statement for $s_{4k+1}$. The proof for $s_{4k+3}$ is similar.
Using Lemma~\ref{lemma:consecutive_members} we can write $w_{4k+1} = w_{4k}10E(w_{4k}).$ The prefix $w_{4k}$ and the suffix $E(w_{4k})$ can be again rewritten as follows: $$w_{4k+1} = w_{4(k-1)+3}w_{4(k-1)}^{-1}w_{4(k-1)+3}10E(w_{4(k-1)+3}w_{4(k-1)}^{-1}w_{4(k-1)+3}).$$
Thus $w_{4k+1}$ has $w_{4(k-1)+3}$ as prefix. The factor $w_{4(k-1)+3}$ has certainly $w_{4(k-1)+1}1$ as prefix. Since the factor $w_{4(k-1)+3}$ is a~palindrome, the factor $1R(w_{4(k-1)+1})$ is its suffix. Using the above form of $w_{4k+1}$, we know $1R(w_{4(k-1)+1})w_{4(k-1)}^{-1}w_{4(k-1)+3}$ is a~factor of $w_{4k+1}$.
Thanks to Lemma~\ref{lemma:strongBS} we know that $1w_{4(k-1)+1}0$ is a~central factor of $w_{4k+1}$. Let us use again Lemma~\ref{lemma:consecutive_members} to rewrite $w_{4(k-1)+1}$: $$w_{4(k-1)+1} = w_{4(k-1)}10E(w_{4(k-1)}).$$ The already constructed factor $1R(w_{4(k-1)+1})w_{4(k-1)}^{-1}w_{4(k-1)+3}$ is therefore followed by $w_{4(k-1)}^{-1}w_{4(k-1)+1}0$. Consequently, $s_{4k+1}=R(w_{4(k-1)+1})w_{4(k-1)}^{-1}w_{4(k-1)+3}w_{4(k-1)}^{-1}w_{4(k-1)+1}$ is contained in the prefix $w_{4k+1}$, and it is easy to check that $s_{4k+1}$ is a~palindrome. We have so far found its bilateral extension $1s_{4k+1}0$. Using the fact that $\mathcal{L}(\mathbf{u_p})$ is closed under reversal, it follows that $0s_{4k+1}1 \in \mathcal{L}(\mathbf{u_p})$.
\end{proof}
\begin{example}
Let us write down the two shortest bispecials $s_\ell$ of $\mathbf{u_p}$:
$$s_5=011010110,$$
$$s_7=010101101011001010010101.$$
\end{example}
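For the smallest index $k=1$, the formulas of Lemma~\ref{lemma:BSs_k} can be checked directly. The following illustrative sketch (the helper \texttt{cancel}, a name of ours, implements the cancellation $xy^{-1}z$ for a word $y$ that is simultaneously a suffix of $x$ and a prefix of $z$; the prefixes $w_0,\dots,w_5$ are taken from the Appendix) reproduces $s_5$ and $s_7$:
\begin{verbatim}
# Check of Lemma BSs_k for k = 1, using the prefixes listed in the Appendix.
w = {0: "", 1: "10", 2: "1010", 3: "10101", 4: "1010110101",
     5: "1010110101100101001010"}
R = lambda u: u[::-1]
E = lambda u: u[::-1].translate(str.maketrans("01", "10"))

def cancel(x, y, z):
    # x y^{-1} z : y must be a suffix of x and a prefix of z
    assert x.endswith(y) and z.startswith(y)
    return x[:len(x) - len(y)] + z

s5 = cancel(cancel(R(w[1]), w[0], w[3]), w[0], w[1])
s7 = cancel(cancel(E(w[3]), w[2], w[5]), w[2], w[3])
print(s5)   # 011010110
print(s7)   # 010101101011001010010101
\end{verbatim}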
\begin{proposition}\label{proposition:weakBS}
The factors $s_{2k+1}$ are weak bispecials for all $k\geq 2$.
Moreover, there are no other weak bispecials in the language of ${\mathbf u_p}$ except for $s_{2k+1}$ and their $R$- and $E$-images.
\end{proposition}
Let us postpone the proof of Proposition~\ref{proposition:weakBS} to a~separate subsection since it is long and technical, and provide instead the remaining steps to the proof of Theorem~\ref{thm:counterexample}.
In order to estimate the second difference of complexity, we need to determine the relation of lengths of weak and strong bispecials.
\begin{observation} \label{obs:nerovn}
Consider ${\mathbf u_p} = {\mathbf u}(1^{\omega}, (EERR)^{\omega})$ and $k \in \mathbb{N}, \ k\geq 2$. Then $$|s_{2k+1}| < |w_{2k+1}| < |s_{2k+3}|.$$
\end{observation}
\begin{observation} \label{obs:comp}
Consider ${\mathbf u_p} = {\mathbf u}(1^{\omega}, (EERR)^{\omega})$. Then for all $n \in \mathbb N$ the following holds:
$$\Delta^2\mathcal{C}_{\mathbf{u_p}}(n) \geq \begin{cases}
2 & \text{if} \ n=|w_{2k+1}| \ \text{for some $k\geq 1$}; \\
-2 & \text{if} \ n=|s_{2k+1}| \ \text{for some $k \geq 2$}; \\
0 & \textrm{otherwise}.
\end{cases} $$
\end{observation}
\begin{proof}
We use Equation~\eqref{complexity2diff}.
For $n=|w_{2k+1}|$, writing $2k+1=4\ell+1$ (resp. $2k+1=4\ell+3$), we have at least two strong bispecials of $\mathbf{u_p}$ of length $n$: $w_{4\ell+1}$ and $R(w_{4\ell+1})$ (resp. $w_{4\ell+3}$ and $E(w_{4\ell+3})$) by Lemma~\ref{lemma:strongBS}. For $n =|s_{2k+1}|$, again with $2k+1=4\ell+1$ (resp. $2k+1=4\ell+3$), we have exactly two weak bispecials of $\mathbf{u_p}$ of length $n$: $s_{4\ell+1}$ and $E(s_{4\ell+1})$ (resp. $s_{4\ell+3}$ and $R(s_{4\ell+3})$) by Proposition~\ref{proposition:weakBS}. Moreover, Proposition~\ref{proposition:weakBS} states that all other bispecials have at least three bilateral extensions.
\end{proof}
\begin{lemma} \label{lemma:vsechnyfakt}
Consider ${\mathbf u_p} = {\mathbf u}(1^{\omega}, (EERR)^{\omega})$. Let $k \geq 5$. Then $w_k$ contains all factors of $\mathbf{u_p}$ of length less than or equal to $|w_{k-5}|$, except possibly for their images under the antimorphisms $E$ and $R$ and the morphism $ER$. Furthermore, $w_{k+2}$ contains all factors of $\mathbf{u_p}$ of length less than or equal to $|w_{k-5}|$.
\end{lemma}
\begin{proof}
We will prove the first statement. The second one is its direct consequence -- it suffices to take into account the form of the directive bi-sequence. We will show that $w_s$ for $s \geq k \geq 5$ does not contain (except for $E$-, $R$- and $ER$-images) factors of length less than or equal to $|w_{k-5}|$ other than those contained in $w_{k}$. To obtain a~contradiction, assume that $v$ is the first such factor and that $s$ is the smallest index such that $v$ is contained in $w_s$.
\begin{itemize}
\item
If $s = 4\ell$, then $w_s = w_{s-1}w_{s-4}^{-1}w_{s-1}$. The factor $v$ has to contain the central factor $1w_{s-4}1$ of $w_s$ (otherwise $v$ would be contained already in $w_{s-1}$), which is a~contradiction because $|1w_{s-4}1| > |v|$.
\item
If $s = 4\ell+1$, then $w_s = w_{s-1}10E(w_{s-1})$. By Lemma~\ref{lemma:strongBS}, the factor $w_{s-4}$ is a~central factor of $w_s$ and it has to contain the factor $v$ since $v$ is either a~suffix of $w_{s-1}1$ or a~prefix of $0E(w_{s-1})$ or $v$ contains the central factor $10$ (otherwise, $v$ or $E(v)$ would be contained already in $w_{s-1}$). It is however a~contradiction with the minimality of the index $s$.
\item The remaining two cases are analogous to the above ones.
\end{itemize}
\end{proof}
\begin{corollary}\label{coro:comp1diff}
Consider ${\mathbf u_p} = {\mathbf u}(1^{\omega}, (EERR)^{\omega})$. Then for all $n \geq 10$ the following holds:
$$\begin{array}{rl}
\Delta{\mathcal C}_{\mathbf u_p} (n) \geq 6 & \text{if $|w_{4i+1}|< n \leq |s_{4i+3}|$ or $|w_{4i+3}|< n \leq |s_{4i+5}|$ for some $i \geq 1$};\\
\Delta{\mathcal C}_{\mathbf u_p} (n) \geq 4 & \text{otherwise}.
\end{array}$$
\end{corollary}
\begin{proof}
Let us recall that $\Delta\mathcal{C}(n+1)=\Delta\mathcal{C}(n)+\Delta^2\mathcal{C}(n)$ for all $n \in \mathbb N$. Since $|w_4|=10$, all factors of length $10$ are, according to Lemma~\ref{lemma:vsechnyfakt}, contained in the prefix $w_{11}$ of length $1077$. Checking this prefix with Sage~\cite{St}, we determined $\Delta {\mathcal C}(9)=6$. The claim then follows by Observations~\ref{obs:nerovn} and~\ref{obs:comp}, taking into account that $|s_5|=9$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:counterexample}]
In order to get ${\mathcal C}(10)$ it suffices by Lemma~\ref{lemma:vsechnyfakt} to check the prefix $w_{11}$ of length $1077$ because $|w_4|=10$. Using the program Sage~\cite{St} we determined $\mathcal{C}(10) = 42$. It is then a~direct consequence of Corollary~\ref{coro:comp1diff} that ${\mathcal C}(n)>4n$ for all $n \geq 10$.
\end{proof}
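The two numerical facts used above, $\Delta{\mathcal C}(9)=6$ and ${\mathcal C}(10)=42$, can be reproduced independently; the following self-contained Python sketch (an illustration of ours, not the original Sage computation) counts distinct factors of the prefix $w_{11}$, which by Lemma~\ref{lemma:vsechnyfakt} contains all factors of $\mathbf{u_p}$ of length at most $10$:
\begin{verbatim}
# Recompute C(9), C(10) and Delta C(9) from the prefix w_11 (|w_11| = 1077).
def E(w): return w[::-1].translate(str.maketrans("01", "10"))
def R(w): return w[::-1]
def closure(u, th):
    for i in range(len(u) + 1):
        if th(u[i:]) == u[i:]:
            return u + th(u[:i])
def prefix(n):
    th, w = [E, E, R, R], ""
    for k in range(n):
        w = closure(w + "1", th[k % 4])
    return w

w11 = prefix(11)
count = lambda u, n: len({u[i:i + n] for i in range(len(u) - n + 1)})
C9, C10 = count(w11, 9), count(w11, 10)
print(len(w11), C9, C10, C10 - C9)   # 1077 36 42 6
\end{verbatim}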
\subsection{Proof of Proposition~\ref{proposition:weakBS}}
This subsection is devoted to quite a~long and technical proof of the fact that the only weak bispecials of $\mathbf{u_p}$ are $s_{4k+1}$ and $s_{4k+3}$ and their $E$- and $R$-images for all $k \in \mathbb N, \ k \geq 1$.
We will put together several lemmas and observations to finally obtain the proof.
\begin{lemma}\label{lemma:prefixBS}
Let $v$ be a~prefix of ${\mathbf u_p} = {\mathbf u}(1^{\omega}, (EERR)^{\omega})$. If $v$ is a~bispecial, then $v$ has at least three bilateral extensions, and $E(v), R(v), ER(v)$ have at least three bilateral extensions too.
\end{lemma}
\begin{proof}
Denote by $a$ the letter for which $va$ is a~prefix of $\mathbf{u_p}$. We can certainly find $k, \ell \in \mathbb N$ such that $va$ is a~prefix of $w_k=R(w_k)$ and $w_\ell=E(w_\ell)$. Then $aR(v)$ is a~suffix of $w_k$ and $\overline{a}E(v)$ is a~suffix of $w_\ell$. By the construction of $\mathbf{u_p}$, the words $aR(v)1$ and $\overline{a}E(v)1$ belong to the language of $\mathbf{u_p}$. Since the language is closed under $E$ and $R$, it follows that $1va$ and $0va$ are factors of $\mathbf{u_p}$ too. Since $v$ is a~bispecial, $v$ has to have a~bilateral extension $bv\overline{a}$ for some $b\in \{0,1\}$. Hence, $v$ has at least three bilateral extensions. The rest of the proof follows by application of the antimorphisms $E, R$ and the morphism $ER$.
\end{proof}
In order to detect all weak bispecial factors, we need to describe all occurrences of $w_k=\vartheta(w_k)$ and $\overline{\vartheta}(w_k)$ and of some of their bilateral extensions. To manage that task, we will distinguish between regular and irregular occurrences.
Let $v$ be a~factor of $\mathbf{u_p}$.
Every element of $\{v, E(v), R(v), ER(v)\}$ is called \emph{an image of} $v$.
Let us define \emph{occurrences (of the images of $v$) generated by a~particular occurrence} $i$ of $v$. Let $k$ be the minimal index such that $w_k$ contains the factor $v$ at the occurrence $i$. Since $w_{k}$ is a~$\vartheta$-palindrome, it contains $\vartheta(v)$ symmetrically with respect to the center of $w_k$. If the corresponding occurrence $j$ of $\vartheta(v)$ is larger than $i$, we say that the occurrence $j$ is generated by the occurrence $i$ of $v$. Assume $w_\ell$ contains occurrences $i_1, \ldots, i_s$ of the images of $v$ generated by the particular occurrence $i$ of $v$. In order to get all occurrences of the images of $v$ generated by the particular occurrence $i$ of $v$ in $w_{\ell+1}$, we proceed in the following way. The prefix $w_{\ell+1}$ is a~$\vartheta$-palindrome for some $\vartheta \in \{E, R\}$, and therefore contains symmetrically with respect to its center occurrences $j_1, \ldots, j_s$ of $v_1, \ldots, v_s$ that are $\vartheta$-images of images of $v$ at the occurrences $i_1, \ldots, i_s$. Putting all occurrences
$i_1, \ldots, i_s, j_1, \ldots, j_s$ together, we obtain all occurrences generated by the particular occurrence $i$ of $v$ in $w_{\ell +1}$.
We say that an occurrence of $v$ is \emph{regular} if it is generated by the very first occurrence of any image of $v$ in $\mathbf{u_p}$. Otherwise, we call the occurrence of $v$ \emph{irregular}.
\begin{example}\label{ex:occurrences}
Consider $v=110$. Its images are: $110, 100, 011, 001$.
The first occurrence of an image of $v$ is $i=3$ of $011$ in the palindrome $w_4= 101\underline{011}0101$.
Hence, the occurrence $i=4$ of $v=110$ in $w_4= 1010\underline{110}101$ is regular. (It is the $R$-image of $011$ in $w_4$.)
However, for instance the occurrence $i=9$ of $v=110$ in the $E$-palindrome $w_5=101011010\underline{110}0101001010$ is irregular. (It is not the $E$-image of any image of $v$ at a~regular occurrence in $w_4$.)
\end{example}
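The occurrences discussed in this example can also be listed mechanically; the following small illustrative sketch (positions are counted from $0$, as in the example) prints all occurrences of $110$ and of its images in $w_4$ and $w_5$:
\begin{verbatim}
# Occurrences of v = 110 and its images in w_4 and w_5 (0-based positions).
w4, w5 = "1010110101", "1010110101100101001010"   # from the Appendix
def occurrences(w, v):
    return [i for i in range(len(w) - len(v) + 1) if w[i:i + len(v)] == v]
for v in ["110", "011", "100", "001"]:
    print(v, occurrences(w4, v), occurrences(w5, v))
# In particular, 011 first occurs at position 3 of w_4, and 110 occurs
# in w_5 at positions 4 and 9, as in the example above.
\end{verbatim}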
\begin{observation} \label{obs:ekvivalence}
Let $v$ be a~factor of $\mathbf{u_p}$. Then $v$ has only regular occurrences in $\mathbf{u_p}$ if and only if any element of $\{v, E(v), R(v), ER(v)\}$ has only regular occurrences in $\mathbf{u_p}$.
\end{observation}
\begin{lemma} \label{lemma:1w41}
Consider ${\mathbf u_p} = {\mathbf u}(1^{\omega}, (EERR)^{\omega})$. Let $k \in \mathbb{N}$. Assume the factors $w_{4k}$ and $w_{4k+2}$ have only regular occurrences in $\mathbf{u_p}$. Then the following statements hold:
\begin{itemize}
\item All irregular occurrences of the factor $1w_{4k}1$ in $\mathbf{u_p}$ are generated by its occurrences as the suffix of the prefix $w_{4\ell}1$ for all $\ell > k$. Moreover, the first regular occurrence of $1w_{4k}1$ is as the central factor of the prefix $w_{4(k+1)}$.
\item All irregular occurrences of the factor $0w_{4k+2}1$ in $\mathbf{u_p}$ are generated by its occurrences as the suffix of the prefix $w_{4\ell+2}1$ for all $\ell > k$. Moreover, the first regular occurrence of $0w_{4k+2}1$ is as the central factor of the prefix $w_{4(k+1)+2}$.
\item All irregular occurrences of the factor $1w_{4k+1}0$ in $\mathbf{u_p}$ are generated by its occurrences as the central factor of the prefix $w_{4\ell+1}$ for all $\ell > k+1$. Moreover, the first regular occurrence of $1w_{4k+1}0$ is as the central factor of the prefix $w_{4(k+1)+1}$.
\item All irregular occurrences of the factor $0w_{4k+3}0$ in $\mathbf{u_p}$ are generated by its occurrences as the central factor of the prefix $w_{4\ell+3}$ for all $\ell > k+1$. Moreover, the first regular occurrence of $0w_{4k+3}0$ is as the central factor of the prefix $w_{4(k+1)+3}$.
\end{itemize}
\end{lemma}
\begin{proof}
We will prove two of the four statements.
\begin{itemize}
\item Let us show the statement for $1w_{4k}1$. The statement for $0w_{4k+2}1$ is analogous and we leave it to the reader. Using Lemma~\ref{lemma:consecutive_members} we know that $w_{4(k+1)} = w_{4k+3}w_{4k}^{-1}w_{4k+3}$. It is easy to see that the bilateral extension of the central factor $w_{4k}$ is $1w_{4k}1$. This bilateral extension occurs in $w_{4(k+1)}$ exactly once. Let us explain why: The factor $w_{4k}$ has only regular occurrences in $\mathbf{u_p}$, therefore $w_{4k+1}$ contains $w_{4k}1$ as a~prefix and $0E(w_{4k})$ as a~suffix. Further on, $w_{4k+2}$ contains moreover $0E(w_{4k})1$ and $0w_{4k}1$, and $w_{4k+3}$ contains in addition $1E(w_{4k})0$ and $1w_{4k}0$. Consequently, $0E(w_{4k})0$ is not contained in $w_{4(k+1)}$ and the first occurrence of $1w_{4k}1$ in $w_{4(k+1)}$ is necessarily regular.
Let us study occurrences of $1w_{4k}1$ in the whole word $\mathbf{u_p}$. All regular occurrences of $1w_{4k}1$ are generated by the first occurrence of $1w_{4k}1$ as the central factor of the prefix $w_{4(k+1)}$. We will show that all irregular occurrences of $1w_{4k}1$ are generated by the occurrences of $1w_{4k}1$ as the suffix of the prefix $w_{4\ell}1$ for all $\ell> k$. It is evident that $1w_{4k}1$ is a~suffix of the prefix $w_{4\ell}1$ and the factor $1w_{4k}1$ is here at an irregular occurrence.
For a~contradiction assume that $1w_{4k}1$ occurs at an irregular position that is not generated by the occurrence of $1w_{4k}1$ as the suffix of the prefix $w_{4\ell}1$ for any $\ell> k$. Such an irregular occurrence may as well be generated by an occurrence of $0E(w_{4k})0$. Let $w_s$ be the first prefix that contains such an irregular occurrence of $1w_{4k}1$ (resp. of $0E(w_{4k})0$). Let $m\geq k+1$. If $s = 4m+1$, then $w_s = w_{s-1}10E(w_{s-1})$ and according to Lemma~\ref{lemma:strongBS} the prefix $w_s$ has $1w_{4k+1}0$ as its central factor. The irregular occurrence of $1w_{4k}1$ (resp. of $0E(w_{4k})0$) has to be contained in this factor. But $1w_{4k+1}0$ contains $1w_{4k}1$ only as a~prefix and this occurrence corresponds at the same time to the suffix of $w_{4m}1$, which is a~contradiction. If $s = 4m+2$, then $w_s = w_{s-1}w_{s-4}^{-1}w_{s-1}$ and the irregular occurrence of $1w_{4k}1$ (resp. of $0E(w_{4k})0$) has to contain the central factor of $w_s$: $1w_{s-4}1$. However, $|1w_{s-4}1| > |1w_{4k}1|=|0E(w_{4k})0|$, which is a~contradiction. Let $s = 4m+3$, then $w_s = w_{s-1}(010)^{-1}R(w_{s-1})$. Using Lemma~\ref{lemma:strongBS} the prefix $w_s$ has $w_{4k+3}$ as its central factor. The irregular occurrence of $1w_{4k}1$ (resp. of $0E(w_{4k})0$) has to contain the central factor of $w_s$: $10101$. Consequently, $1w_{4k}1$ (resp. $0E(w_{4k})0$) has to be contained in $w_{4k+3}$, which is a~contradiction.
If $s = 4m+4$, then $w_s = w_{s-1}w_{s-4}^{-1}w_{s-1}$ and the irregular occurrence of $1w_{4k}1$ (resp. of $0E(w_{4k})0$) has to contain the central factor of $w_s$: $1w_{s-4}1$. However, $|1w_{s-4}1| > |1w_{4k}1|=|0E(w_{4k})0|$, which is a~contradiction.
\item Let us show the statement for $1w_{4k+1}0$. The fourth statement is analogous. The first, and thus regular, occurrence of $1w_{4k+1}0$ is, by Lemma~\ref{lemma:strongBS} and by the assumption on regular occurrences of $w_{4k}$, as the central factor of the prefix $w_{4(k+1)+1}$.
Firstly, let us show that for all $\ell>k$ every occurrence of $1w_{4k+1}0$ (resp. of $0R(w_{4k+1})1$) in the prefixes $w_{4\ell+2}, w_{4\ell+3}, w_{4\ell+4}$ is already generated by an occurrence of an image of $1w_{4k+1}0$ in the prefix $w_{4\ell+1}$.
By Lemma~\ref{lemma:consecutive_members} we can write $w_{4\ell+2}=w_{4\ell+1}w_{4\ell-2}^{-1}w_{4\ell+1}$ and $0w_{4\ell-2}1$ is its central factor. If $w_{4\ell+2}$ contains an occurrence of $1w_{4k+1}0$ (resp. of $0R(w_{4k+1})1$) that is not generated by an occurrence of an image of $1w_{4k+1}0$ in $w_{4\ell+1}$, then this occurrence has to contain $0w_{4\ell-2}1$. This is not possible because for all $\ell >k$ we have
$|0w_{4\ell-2}1| > |1w_{4k+1}0|=|0R(w_{4k+1})1|$. Next, $w_{4\ell+3} = w_{4\ell+2}(010)^{-1}R(w_{4\ell+2})$. The central factor is $10101$ and moreover we know by Lemma~\ref{lemma:strongBS} that $w_{4k+3}$ is also a~central factor of $w_{4\ell+3}$. If $w_{4\ell+3}$ contains an occurrence of $1w_{4k+1}0$ (resp. of $0R(w_{4k+1})1$) that is not generated by an occurrence of an image of $1w_{4k+1}0$ in $w_{4\ell+1}$, then this occurrence has to contain the factor $10101$. Then such an occurrence of $1w_{4k+1}0$ (resp. of $0R(w_{4k+1})1$) is necessarily contained in $w_{4k+3}$. This is not possible since the factor $1w_{4k+1}0$ occurs for the first time in $w_{4(k+1)+1}$ and its $R$-image even later. Finally we have $w_{4\ell+4} = w_{4\ell+3}w_{4\ell}^{-1}w_{4\ell+3}$ and $1w_{4\ell}1$ is its central factor. If $w_{4\ell+4}$ contains an occurrence of $1w_{4k+1}0$ (resp. of $0R(w_{4k+1})1$) that is not generated by an occurrence of an image of $1w_{4k+1}0$ in $w_{4\ell+1}$, then this occurrence has to contain the factor $1w_{4\ell}1$. This is again not possible because of lengths of those factors.
Secondly, let us show that the occurrence of the factor $1w_{4k+1}0$ as the central factor of the prefix $w_{4\ell+1}$ is the only occurrence of $1w_{4k+1}0$ (resp. of $0R(w_{4k+1})1$) in the prefix $w_{4\ell+1}$ that is not generated by any occurrence of an image of $1w_{4k+1}0$ in the prefix $w_{4\ell}$. We have $w_{4\ell+1} = w_{4\ell}10E(w_{4\ell})$. Using Lemma~\ref{lemma:strongBS} it follows that $1w_{4k+1}0$ is the central factor of $w_{4\ell+1}$ and this occurrence is not generated by any image of $1w_{4k+1}0$ contained in $w_{4\ell}$. In order to have another occurrence of $1w_{4k+1}0$ (resp. of $0R(w_{4k+1})1$) in the prefix $w_{4\ell+1}$ so that it is not generated by any image of $1w_{4k+1}0$ in the prefix $w_{4\ell}$, it has to be either a~suffix of $w_{4\ell}1$ or a~prefix of $0E(w_{4\ell})$ or it has to contain the central factor of $w_{4\ell+1}$: $10$. However such an occurrence of $1w_{4k+1}0$ (resp. of $0R(w_{4k+1})1$) has to be contained in the longer central factor of $w_{4\ell+1}$: $w_{4(k+1)+1}$.
This is not possible because $1w_{4k+1}0$ occurs in $w_{4(k+1)+1}$ exactly once as the central factor and this occurrence has been already discussed. Altogether we have described all occurrences of the factor $1w_{4k+1}0$ in $\mathbf{u_p}$. All irregular occurrences of $1w_{4k+1}0$ are thus generated by the occurrences of $1w_{4k+1}0$ as the central factor of $w_{4\ell+1}$, $\ell >k+1$.
\end{itemize}
\end{proof}
\begin{lemma} \label{lemma:vyskyt_w_k}
Consider ${\mathbf u_p} = {\mathbf u}(1^{\omega}, (EERR)^{\omega})$ and $k \in \mathbb N$.
\begin{enumerate}
\item All occurrences of $w_{4k}$ and $E(w_{4k})$ are regular for $k\geq 1$.
\item All occurrences of $w_{4k+2}$ and $R(w_{4k+2})$ are regular.
\item All irregular occurrences of $w_{4k+1}$ and $R(w_{4k+1})$ are generated by the occurrences of $w_{4k+1}$ as the central factor of the prefixes $w_{4\ell+1}$ for all $\ell>k$.
\item All irregular occurrences of $w_{4k+3}$ and $E(w_{4k+3})$ are generated by the occurrences of $w_{4k+3}$ as the central factor of the prefixes $w_{4\ell+3}$ for all $\ell>k$.
\end{enumerate}
\end{lemma}
\begin{proof}
We will prove only the first and the third statement. The other statements may be proved analogously.
Let us proceed by induction. Assume the first and the third statement hold for some $k \in \mathbb N$.
\begin{enumerate}
\item[1.]
We will first prove that $w_{4(k+1)}$ has only regular occurrences in $\mathbf{u_p}$. Putting together Lemma~\ref{lemma:1w41}, the induction assumption and the fact that $1w_{4k}1$ and $w_{4(k+1)}$ are both palindromes, it follows that
the occurrence of $w_{4(k+1)}$ is regular if and only if the occurrence of its central factor $1w_{4k}1$ is regular.
Therefore the factor $w_{4(k+1)}$ at an irregular occurrence has to have as its central factor $1w_{4k}1$ at an irregular occurrence, i.e., by Lemma~\ref{lemma:1w41} generated by an occurrence of $1w_{4k}1$ as the suffix of the prefix $w_{4\ell}1$ for some $\ell >k$.
Assume $w_{4(k+1)}$ is at such an occurrence that its central factor $1w_{4k}1$ is the suffix of the prefix $w_{4\ell}1$. By Lemma~\ref{lemma:strongBS} we know that $w_{4\ell+1}=w_{4\ell}10E(w_{4\ell})$ has the central factor $1w_{4k+1}0$. Therefore $w_{4(k+1)}$ having the suffix $1w_{4k}1$ of $w_{4\ell}1$ as its central factor has to contain $1w_{4k+1}0$. This is a~contradiction because using Lemma~\ref{lemma:1w41} and the induction assumption, one can see that the factor $1w_{4k+1}0$ occurs for the first time in $w_{4(k+1)+1}$.
By Observation~\ref{obs:ekvivalence} it follows that $E(w_{4(k+1)})$ has only regular occurrences in $\mathbf{u_p}$ too.
It remains to prove the base case $k=1$, that is, to show that $w_4$ and $E(w_4)$ have only regular occurrences in $\mathbf{u_p}$. It is easy to check that $w_8$ contains only regular occurrences of $w_4$ and $E(w_4)$. See the Appendix for the form of $w_8$.
Assume $s>8$ and $w_s$ contains the first irregular occurrence of $w_4$ (resp. of $E(w_4)$).
For $s=4m+1$, we have $w_s=w_{s-1}10E(w_{s-1})$. By Lemma~\ref{lemma:strongBS} the factor $w_5$ is a~central factor of $w_s$, hence the irregular occurrence of $w_4$ (resp. of $E(w_4)$) has to be contained in $w_5$, which is a~contradiction. If $s=4m+2$, then $w_s=w_{s-1}w_{s-4}^{-1}w_{s-1}$. The irregular occurrence of $w_4$ (resp. of $E(w_4)$) has to contain the central factor $0w_{s-4}1$, which is a~contradiction.
For $s=4m+3$, we have $w_s=w_{s-1}(010)^{-1}R(w_{s-1})$. The central factor of $w_s$ is $w_7$ by Lemma~\ref{lemma:strongBS}. Therefore the irregular occurrence of $w_4$ (resp. of $E(w_4)$) has to be contained in $w_7$, which is a~contradiction. Finally, for $s=4m+4$, the argument is similar to that for $s=4m+2$.
Consequently, $w_4$ and $E(w_4)$ have only regular occurrences in $\mathbf{u_p}$.
\item[3.] We will first prove that all irregular occurrences of $w_{4(k+1)+1}$ are generated by its occurrences as the central factor of the prefixes $w_{4\ell+1}$ for all $\ell >k+1$.
Since by Lemma~\ref{lemma:1w41} and by the induction assumption, the factor $1w_{4k+1}0$ occurs for the first time as the central factor of the prefix $w_{4(k+1)+1}$ and since both $1w_{4k+1}0$ and $w_{4(k+1)+1}$ are $E$-palindromes and $w_{4(k+1)+1}$ does not contain $0R(w_{4k+1})1$, it follows that the occurrence of $w_{4(k+1)+1}$ is regular if and only if the occurrence of its central factor $1w_{4k+1}0$ is regular.
We will thus consider irregular occurrences of $1w_{4k+1}0$. We know using Lemma~\ref{lemma:1w41} and the induction assumption that every irregular occurrence of $1w_{4k+1}0$ (resp. of $0R(w_{4k+1})1$) is generated by an occurrence of $1w_{4k+1}0$ as the central factor of $w_{4\ell+1}$ for $\ell>k+1$. It is then a~direct consequence that all irregular occurrences of $w_{4(k+1)+1}$ are generated by the occurrences of $w_{4(k+1)+1}$ as the central factor of the prefixes $w_{4\ell+1}$ for all $\ell>k+1$.
The statement for $R(w_{4(k+1)+1})$ follows using Observation~\ref{obs:ekvivalence}.
It remains to prove the statement for $k=0$.
We have to show that all irregular occurrences of $w_1$ in $\mathbf{u_p}$ are generated by the occurrences of $w_1$ as the central factor of the prefixes $w_{4\ell+1}$ for all $\ell\geq 1$. It is easy to show that the first irregular occurrence of $w_1$ is the occurrence as the central factor of the prefix $w_5$. Let $m > 5$ and let $w_m$ contain the first irregular occurrence of $w_1$ (resp. of $R(w_1)$) that is not generated by the occurrence of $w_1$ as the central factor of the prefix $w_5$. If $m = 4\ell+2$, then $w_m = w_{m-1}w_{m-4}^{-1}w_{m-1}$. Then the irregular occurrence of $w_1$ (resp. of $R(w_1)$) has to contain the central factor of $w_m$: $0w_{m-4}1$, which is not possible. If $m = 4\ell+3$, then $w_m = w_{m-1}(010)^{-1}R(w_{m-1})$. By Lemma~\ref{lemma:strongBS} the factor $w_3$ is a~central factor of $w_m$. Then the irregular occurrence of $w_1$ (resp. of $R(w_1)$) has to be contained in $w_3$, which is a~contradiction. If $m = 4\ell+4$, then $w_m = w_{m-1}w_{m-4}^{-1}w_{m-1}$. Then the irregular occurrence of $w_1$ (resp. of $R(w_1)$) has to contain the central factor of $w_m$: $1w_{m-4}1$, which is not possible. If $m = 4\ell+5$, then $w_m = w_{m-1}10E(w_{m-1})$. By Lemma~\ref{lemma:strongBS} the factor $w_5$ is a~central factor of $w_m$. Then the irregular occurrence of $w_1$ (resp. of $R(w_1)$) has to be contained in $w_5$. It follows that $w_1$ has to be the central factor of $w_{4\ell+5}, \ \ell \geq 1$.
\end{enumerate}
\end{proof}
In the proof of the last and essential lemma, we will make use of the following observation.
\begin{observation}\label{obs:p_k}
Consider ${\mathbf u_p} = {\mathbf u}(1^{\omega}, (EERR)^{\omega})$.
For all $k\in \mathbb N, \ k\geq 1$, let:
$$p_{4k+1} = w_{4(k-1)+3}w_{4(k-1)}^{-1}w_{4(k-1)+1},$$
$$p_{4k+3} = w_{4k+1}w_{4k-2}^{-1}w_{4(k-1)+3}.$$
Then the factor $p_{4k+1}$ is a~suffix of $s_{4k+1}$ and a~prefix of $w_{4k}$
and similarly the factor $p_{4k+3}$ is a~suffix of $s_{4k+3}$ and a~prefix of $w_{4k+2}$.
\end{observation}
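For $k=1$ the observation is easy to verify directly; the following illustrative sketch (with \texttt{cancel} implementing the cancellation $xy^{-1}z$ as above, and the prefixes taken from the Appendix) checks it for $p_5$ and $p_7$:
\begin{verbatim}
# Check of Observation obs:p_k for k = 1: p_5 and p_7.
w = {0: "", 1: "10", 2: "1010", 3: "10101", 4: "1010110101",
     5: "1010110101100101001010",
     6: "1010110101100101001010110101100101001010"}
s5, s7 = "011010110", "010101101011001010010101"
def cancel(x, y, z):
    assert x.endswith(y) and z.startswith(y)
    return x[:len(x) - len(y)] + z
p5 = cancel(w[3], w[0], w[1])      # p_5 = w_3 w_0^{-1} w_1
p7 = cancel(w[5], w[2], w[3])      # p_7 = w_5 w_2^{-1} w_3
print(s5.endswith(p5), w[4].startswith(p5))   # True True
print(s7.endswith(p7), w[6].startswith(p7))   # True True
\end{verbatim}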
\begin{lemma}\label{lemma:nonprefixBS}
Let $v$ be a~factor, but not an image of a~prefix of ${\mathbf u_p} = {\mathbf u}(1^{\omega}, (EERR)^{\omega})$.
The following statements hold:
\begin{itemize}
\item If $v$ is neither an $E$-palindrome, nor a~palindrome, then $v$ is not bispecial in $\mathbf u_p$.
\item If $v$ is an $E$-palindrome or a~palindrome, but different from $s_{4k+1}$, $E(s_{4k+1})$, $s_{4k+3}$, $R(s_{4k+3})$ for all $k\geq 1$, then $v$ is either not a~bispecial, or it is a~bispecial with three bilateral extensions.
\item If $v$ is equal to one of the bispecials $s_{4k+1}, \ E(s_{4k+1}), \ s_{4k+3}, \ R(s_{4k+3})$ for some $k\geq 1$, then $v$ is a~weak bispecial.
\end{itemize}
\end{lemma}
\begin{proof}
We will find the minimal index $k$ such that $w_k$ contains an image of the factor $v$. Without loss of generality, assume that this first occurrence of an image of $v$ is an occurrence of $v$ itself. Let us discuss the possible cases.
Note first that no such factor is contained in $w_3$; hence $k \geq 4$.
\begin{enumerate}
\item Let $k=4\ell, \ \ell \geq 1$. We have $w_{4\ell}=w_{4(\ell-1)+3}w_{4\ell-4}^{-1}w_{4(\ell-1)+3}$. The bilateral extension of the central factor is $1w_{4\ell-4}1$. According to Lemma~\ref{lemma:1w41} the factor $1w_{4\ell-4}1$ occurs for the first time as the central factor of $w_{4\ell}$ and moreover $w_{4\ell}$ does not contain $0E(w_{4\ell-4})0$. Since $v$ is not contained in $w_{4(\ell-1)+3}$, the factor $v$ has to contain the central factor of $w_{4\ell}$: $1w_{4\ell-4}1$. Hence $v$ occurs once in $w_{4\ell}$. Let us denote $avb$ the corresponding bilateral extension of $v$, where $a,b \in \{0,1\}$. If $v$ is a~$\vartheta$-palindrome, then thanks to the unique occurrence of the palindrome $1w_{4\ell-4}1$ and the absence of $0E(w_{4\ell-4})0$ in $w_{4\ell}$, the factor $v$ has to be the central palindrome of $w_{4\ell}$. In this case its bilateral extension is $ava$, i.e. $a=b$. Moreover, $v$ is distinct from $s_{4m+1}$ and $E(s_{4m+1})$ for all $m \in \mathbb N$ because these palindromes do not have the central factor $1w_{4\ell-4}1$.
We will now study irregular occurrences of $v$.
It is not difficult to see that regular occurrences of $1w_{4\ell-4}1$ are factors of regular occurrences of $v$ and $R(v)$. Therefore we have to look at irregular occurrences of $1w_{4\ell-4}1$.
The first such occurrence is as the suffix of the prefix $w_{4\ell}1$. Then $v$ cannot contain the central factor $1w_{4(\ell-1)+1}0$ of $w_{4\ell+1}=w_{4\ell}10E(w_{4\ell})$ since by Lemmas~\ref{lemma:1w41} and~\ref{lemma:vyskyt_w_k} the factor $1w_{4(\ell-1)+1}0$ occurs for the first time in $w_{4\ell+1}$ while $v$ occurs already in $w_{4\ell}$. Consequently, and since $v$ contains $1w_{4\ell-4}1$ once, $v$ has to be contained in the suffix $p_{4\ell+1}$ of $s_{4\ell+1}$ defined in Observation~\ref{obs:p_k}. If $v$ is not a~suffix of $p_{4\ell+1}$, then since $p_{4\ell+1}$ is a~prefix of $w_{4\ell}$, we do not get any new bilateral extension of $v$. If $v$ is a~suffix of $p_{4\ell+1}$ and thus of the bispecial $s_{4\ell+1}$, the word $\mathbf{u_p}$ contains the bilateral extension $av\overline{b}$ too. All other irregular occurrences of $1w_{4\ell-4}1$ are generated by its occurrences as the suffix of the prefix $w_{4m}1, \ m>\ell$. It is not difficult to see that such occurrences do not provide any new bilateral extension of~$v$. Altogether, for $v$ that is not an $R$-palindrome, we have found the bilateral extension $avb$ and possibly $av\overline{b}$; thus such a~factor $v$ is not bispecial. If $v$ is a~palindrome, then its bilateral extension is either only $ava$ and it is not a~bispecial, or its bilateral extensions are $ava, av\overline{a}$ and by the fact that the language is closed under reversal also $\overline{a}va$. Therefore $v$ is a~bispecial with three bilateral extensions.
\item Let $k=4\ell+1, \ \ell \geq 1$. The factor $v$ occurs for the first time in $w_{4\ell+1}=w_{4\ell}10E(w_{4\ell})$.
Therefore $v$ either contains the central factor $1w_{4(\ell-1)+1}0$ of $w_{4\ell+1}$ or $v$ has to contain at least the suffix $1w_{4(\ell-1)+3}1$ of $w_{4\ell}1$. Indeed, if $v$ contained neither $1w_{4(\ell-1)+1}0$ nor $1w_{4(\ell-1)+3}1$, then $v$ itself would be contained in $p_{4\ell+1}$ defined in Observation~\ref{obs:p_k}. However, $p_{4\ell+1}$ is a~prefix of $w_{4\ell}$, therefore $v$ would be contained already in $w_{4\ell}$.
If $v$ contains $1w_{4(\ell-1)+1}0$, then $v$ occurs in $w_{4\ell+1}$ once since $w_{4\ell+1}$ contains
$1w_{4(\ell-1)+1}0$ only once by Lemmas~\ref{lemma:1w41} and~\ref{lemma:vyskyt_w_k}. If $v$ contains $1w_{4(\ell-1)+3}1$, then $v$ occurs in $w_{4\ell+1}$ only once too, as one can easily check using Lemma~\ref{lemma:vyskyt_w_k}. Let $avb$ denote the corresponding bilateral extension of $v$. If $v$ is a~$\vartheta$-palindrome, then two cases are possible:
If $v$ contains $1w_{4(\ell-1)+1}0$, then $v$ is an $E$-palindromic central factor of $w_{4\ell+1}$. Then $v$ is not equal to $s_{4m+3}$ or $R(s_{4m+3})$ for any $m\in \mathbb N$ because neither $s_{4m+3}$ nor $R(s_{4m+3})$ is a~central factor of $w_{4\ell+1}$. The bilateral extension of $v$ is then $av\overline{a}$. If $v$ contains $1w_{4(\ell-1)+3}1$, then $v$ has to be a~palindrome with the central factor $1w_{4(\ell-1)+3}1$. The longest such palindrome in $w_{4\ell+1}$ is $s_{4\ell+1}$. If $v=s_{4\ell+1}$, then its bilateral extension is $av\overline{a}$. If $v$ is a~shorter palindrome, i.e., a~central factor of $s_{4\ell+1}$, then its bilateral extension is $ava$.
It is not difficult to see that no irregular occurrence of any image of $v$ is contained in $w_{4\ell+1}$. Consider irregular occurrences of images of $v$ first in $w_{4\ell+2}=w_{4\ell+1}w_{4\ell-2}^{-1}w_{4\ell+1}$. The first irregular occurrence of an image of $v$ has to contain the central factor $0w_{4\ell-2}1$ of $w_{4\ell+2}$, which is not possible because $0w_{4\ell-2}1$ occurs by Lemmas~\ref{lemma:1w41} and~\ref{lemma:vyskyt_w_k} for the first time in $w_{4\ell+2}$, and moreover $w_{4\ell+2}$ does not contain $1R(w_{4\ell-2})0$. Thus $w_{4\ell+2}$ does not contain any irregular occurrence of any image of $v$. Similarly, no irregular occurrence of an image of $v$ is contained in $w_{4\ell+3}=w_{4\ell+2}(010)^{-1}R(w_{4\ell+2})$. The image of $v$ cannot contain the central factor $0w_{4(\ell-1)+3}0$ because this factor occurs for the first time in $w_{4\ell+3}$ and its $E$-image even later. The image of $v$ cannot contain the suffix $0w_{4\ell-2}1$ of $w_{4\ell+2}1$ because $0w_{4\ell-2}1$ occurs for the first time in $w_{4\ell+2}$ and its $R$-image even later. This implies however that the image of $v$ is contained in $p_{4\ell+3}$ defined in Observation~\ref{obs:p_k}. And since $p_{4\ell+3}$ is a~prefix of $w_{4\ell+2}$, such occurrence of the image of $v$ is regular.
Suppose that an image of $v$ has an irregular occurrence in $w_{4\ell+4}$. Then the image of $v$ has to contain its central factor $1w_{4\ell}1$, which however occurs for the first time in $w_{4\ell+4}$, and its $E$-image even later. Therefore this is not possible.
No new irregular occurrences can appear in larger prefixes: for $s>\ell$, the prefixes $w_{4s+2}$ and $w_{4s+4}$ have central factors that are too long for $v$ to contain, while $w_{4s+1}$ and $w_{4s+3}$ have the central factors $w_{4\ell+1}$ (resp. $w_{4\ell+3}$), and these cases have already been discussed.
If $v$ is not a~$\vartheta$-palindrome, then the only bilateral extension of $v$ is $avb$, thus $v$ is not a~bispecial. If $v$ is an $E$-palindrome, then its only bilateral extension is $av\overline{a}$ and we do not get any new bilateral extension by application of $E$. Hence $v$ is not a~bispecial. If $v$ is a~palindrome, but distinct from $s_{4\ell+1}$, then its bilateral extension is $ava$ and we do not get any new bilateral extension by application of $R$. Finally, if $v=s_{4\ell+1}$, then its bilateral extension is $av\overline{a}$ and by application of $R$ we get $\overline{a}va$, thus $v$ is a~weak bispecial.
\item Let $k=4\ell+2$. This case is analogous to the first one.
\item Let $k=4\ell+3$. This case is analogous to the second one.
\end{enumerate}
\end{proof}
\begin{proof}[Proof of Proposition~\ref{proposition:weakBS}]
The result follows from Lemmas~\ref{lemma:prefixBS} and~\ref{lemma:nonprefixBS}.
\end{proof}
\subsection{Complexity of $\mathbf u_p$ is significantly larger than $4n$}
Let us prove that the infinite word $\mathbf u_p$ from the counterexample to Conjecture $4n$ not only satisfies ${\mathcal C}_{\mathbf u_p}(n)> 4n$ for all $n \geq 10$, but that its complexity is significantly larger than $4n$.
We start with a~simple observation concerning the lengths of weak bispecial factors.
\begin{observation}\label{obs:length_weakBS}
For all $i \in \mathbb N, \ i\geq 1$, we have:
$$\begin{array}{rclcl}
|s_{4i+5}|-|w_{4i+3}|&=&2|w_{4i+1}|-2|w_{4i}|&=&2|w_{4i}|+4,\\
|s_{4i+3}|-|w_{4i+1}|&=&2|w_{4i-1}|-2|w_{4i-2}|&=&2|w_{4i-2}|-6.
\end{array}$$
\end{observation}
\begin{proof}
It follows from the form of weak bispecials described in Lemma~\ref{lemma:BSs_k} that
\begin{equation}\label{eq:weakBS1}
\begin{array}{rcl}
|s_{4i+5}|&=&2|w_{4i+1}|+|w_{4i+3}|-2|w_{4i}|,\\
|s_{4i+3}|&=&2|w_{4i-1}|+|w_{4i+1}|-2|w_{4i-2}|.
\end{array}
\end{equation}
Applying then Lemma~\ref{lemma:consecutive_members}, we obtain the statement.
\end{proof}
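These identities, together with the interlacing of Observation~\ref{obs:nerovn}, are easy to confirm numerically. The following illustrative sketch iterates the length relations coming from Lemma~\ref{lemma:consecutive_members}, computes $|s_{4k+1}|$ and $|s_{4k+3}|$ from the formulas of Lemma~\ref{lemma:BSs_k}, and checks both statements for small indices:
\begin{verbatim}
# Numerical check of the identities above and of Observation obs:nerovn.
L = {0: 0, 1: 2, 2: 4, 3: 5, 4: 10}        # |w_0|, |w_1|, ..., |w_4|
for k in range(1, 6):                      # length relations from Lemma consecutive_members
    L[4*k+1] = 2*L[4*k] + 2
    L[4*k+2] = 2*L[4*k+1] - L[4*k-2]
    L[4*k+3] = 2*L[4*k+2] - 3
    L[4*k+4] = 2*L[4*k+3] - L[4*k]
S = {}
for k in range(1, 5):                      # |s_{4k+1}|, |s_{4k+3}| from Lemma BSs_k
    S[4*k+1] = 2*L[4*k-3] + L[4*k-1] - 2*L[4*k-4]
    S[4*k+3] = 2*L[4*k-1] + L[4*k+1] - 2*L[4*k-2]
for i in range(1, 4):
    assert S[4*i+5] - L[4*i+3] == 2*L[4*i] + 4
    assert S[4*i+3] - L[4*i+1] == 2*L[4*i-2] - 6
    assert S[4*i+1] < L[4*i+1] < S[4*i+3] < L[4*i+3] < S[4*i+5]
print("all checks passed")                 # e.g. |s_5| = 9 and |s_7| = 24
\end{verbatim}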
\begin{theorem}\label{limsup}
Let ${\mathbf u_p} = {\mathbf u}(1^{\omega}, (EERR)^{\omega})$. Then its complexity satisfies:
$$\limsup\frac{{\mathcal C}_{\mathbf u_p}(n)}{n} \geq 4.57735.$$
\end{theorem}
\begin{proof}
For $n\geq 10$ we have ${\mathcal C}(n)={\mathcal C}(10)+\sum_{\ell=10}^{n-1}\Delta {\mathcal C}(\ell)$,
where by Corollary~\ref{coro:comp1diff} we have for $\ell \geq 10$:
$$\begin{array}{rl}
\Delta{\mathcal C}(\ell) \geq 6 & \text{if $|w_{4i+1}|<\ell \leq |s_{4i+3}|$ or $|w_{4i+3}|<\ell \leq |s_{4i+5}|$ for some $i \geq 1$};\\
\Delta{\mathcal C}(\ell) \geq 4 & \text{otherwise}.
\end{array}$$
Let us set $n=|s_{4k+5}|+1$. Since ${\mathcal C}(10)=42$, we get the following expression:
$${\mathcal C}(n)\geq 42+4(n-10)+2\sum_{i=1}^k\left(|s_{4i+3}|-|w_{4i+1}|\right)+2\sum_{i=1}^k\left(|s_{4i+5}|-|w_{4i+3}|\right).$$
Inserting formulas from Observation~\ref{obs:length_weakBS}, we obtain:
\begin{equation}\label{eq:estimateC1}
{\mathcal C}(n)\geq 4n+2-4k+4\sum_{i=1}^k\left(|w_{4i-2}|+|w_{4i}|\right).
\end{equation}
Using~\eqref{eq:weakBS1} we have $|s_{4k+5}|=2|w_{4k+1}|+|w_{4k+3}|-2|w_{4k}|$ and applying Lemma~\ref{lemma:consecutive_members} we obtain
\begin{equation}\label{s4k+5}
|s_{4k+5}|=2|w_{4k}|+2|w_{4k+2}|+1.
\end{equation}
Therefore it suffices to deal with $|w_{m}|$ for even $m$.
By Lemma~\ref{lemma:consecutive_members} we easily deduce the following set of equations:
\begin{equation}\label{eq:set_for_w_2n}
\begin{array}{rcl}
|w_{4k+2}|&=&4|w_{4k}|-|w_{4k-2}|+4,\\
|w_{4k+4}|&=&4|w_{4k+2}|-|w_{4k}|-6,\\
|w_{4k}|&=&4|w_{4k-2}|-|w_{4k-4}|-6.
\end{array}
\end{equation}
We multiply the first equation by four and add the remaining two equations so that we get the following recurrence equation for $|w_{4k}|$:
$$|w_{4k+4}|=14|w_{4k}|-|w_{4k-4}|+4.$$
The initial values are $|w_0|=0$ and $|w_4|=10$.
The solution of the above recurrence equation reads
\begin{equation}\label{eq:solution}
|w_{4k}|=\frac{1+2\sqrt{3}}{6}\tau^k+\frac{1-2\sqrt{3}}{6}{(\tau')}^k-\frac{1}{3},
\end{equation}
where $\tau=7+4\sqrt{3}>1$ and $\tau'=7-4\sqrt{3} \in (0,1)$ are roots of the equation $x^2-14x+1=0$.
It follows using the last equation from~\eqref{eq:set_for_w_2n} that
\begin{equation}\label{eq:w_{4k-2}}
4|w_{4k-2}|=|w_{4k-4}|+|w_{4k}|+6=\frac{16}{3}+\frac{1+2\sqrt{3}}{6}
\tau^k(\tau'+1)+\frac{1-2\sqrt{3}}{6}{(\tau')}^k(\tau+1).\end{equation}
Consequently,
$$4\sum_{i=1}^k\left(|w_{4i-2}|+|w_{4i}|\right)=
4k+\frac{1+2\sqrt{3}}{6}(\tau'+5)\frac{\tau}{\tau-1}(\tau^k-1)+c_1(k),$$
where $c_1(k)$ is a~bounded sequence.
Inserting the previous formula into~\eqref{eq:estimateC1}, we have:
\begin{equation}\label{eq:complexity2}
{\mathcal C}(n)\geq 4n+2+\frac{1+2\sqrt{3}}{6}(\tau'+5)\frac{\tau}{\tau-1}(\tau^k-1)+c_1(k).
\end{equation}
In order to continue with the estimate on complexity, we have to express the relation between $k$ and $n$. We know that $n=|s_{4k+5}|+1$. Combining Equations~\eqref{s4k+5}, \eqref{eq:solution} and~\eqref{eq:w_{4k-2}}, we obtain:
$$|s_{4k+5}|=3+\frac{1+2\sqrt{3}}{6}\frac{5+\tau}{2}\tau^k+c_2(k),$$
where $c_2(k)$ is again a~bounded sequence.
Consequently,
$$n =|s_{4k+5}|+1=\frac{1+2\sqrt{3}}{6}\frac{5+\tau}{2}\tau^k+c_3(k),$$
where constants have been included in the bounded sequence $c_3(k)$.
Hence, we have:
$$\tau^k=\frac{6}{1+2\sqrt{3}}\frac{2}{5+\tau}n+c_4(k),$$
where $c_4(k)$ is a~bounded sequence.
Inserting the previous formula into~\eqref{eq:complexity2}, we obtain the following lower bound on complexity:
$${\mathcal C}(n)\geq 4n+n\frac{\tau'+5}{\tau+5}\frac{2\tau}{\tau-1}+c_5(k),$$
where $c_5(k)$ is a~bounded sequence, where all constants have been included.
Finally, we obtain:
$$\limsup\frac{{\mathcal C}(n)}{n}\geq 4+\frac{\tau'+5}{\tau+5}\frac{2\tau}{\tau-1}\doteq 4.57735.$$
\end{proof}
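As an illustrative numerical check (ours, not part of the argument), one may compare the closed form~\eqref{eq:solution} with the recurrence for $|w_{4k}|$ and evaluate the constant appearing in the statement of Theorem~\ref{limsup}:
\begin{verbatim}
from math import sqrt
tau, taup = 7 + 4*sqrt(3), 7 - 4*sqrt(3)        # roots of x^2 - 14x + 1 = 0
closed = lambda k: (1 + 2*sqrt(3))/6 * tau**k + (1 - 2*sqrt(3))/6 * taup**k - 1/3
vals = [0, 10]                                  # |w_0|, |w_4|
for k in range(2, 8):
    vals.append(14*vals[-1] - vals[-2] + 4)     # the recurrence for |w_{4k}|
print(all(round(closed(k)) == v for k, v in enumerate(vals)))        # True
print(4 + (taup + 5)/(tau + 5) * 2*tau/(tau - 1))                    # 4.57735...
\end{verbatim}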
\section{Open problems}\label{sec:open_problems}
It remains as an open problem to determine a~new upper bound on the complexity of binary generalized pseudostandard words.
\begin{itemize}
\item Let us start here with a~simple observation: If both $E$ and $R$ occur in the sequence $\Theta$ an infinite number of times, then since the language of $\mathbf u(\Delta, \Theta)$ is closed under the antimorphisms $E$ and $R$, the first difference of complexity has even values, i.e., $\Delta {\mathcal C}(n) \in \{2,4,6,\ldots \}$.
\item It might be helpful to illustrate the situation for sufficiently long left special factors of the Thue--Morse word and what the situation seems to be for the word $\mathbf u_p$. See Figure~\ref{fig:left_specials}.
In both cases, there are two infinite left special branches -- the infinite word itself and its $ER$-image, i.e., the word that arises when exchanging ones with zeroes. The weak bispecial factors form finite branches, and the common prefixes of the weak bispecials and the infinite left special branch correspond to strong bispecials. The first difference of complexity $\Delta {\mathcal C}(n)$ equals the number of left special factors of length $n$. In the case of $\mathbf u_p$, in contrast to $\mathbf u_{TM}$, the detached parts of finite branches may overlap, thus the first difference of complexity is larger. The question is whether it is possible to construct an example where there are even more overlapping detached parts of finite branches.
\begin{figure}
\caption{Infinite left special factors of $\mathbf u_p$ and $\mathbf u_{TM}$.}
\label{fig:left_specials}
\end{figure}
\item Let us state a~new conjecture based on our computer experiments:
\begin{conjecture}[Conjecture $6n$]
Let $\mathbf u$ be a~binary generalized pseudostandard word. Then its complexity satisfies $${\mathcal C}_{\mathbf u}(n)<6n \quad \text{for all $n \in \mathbb N$}.$$
\end{conjecture}
The arguments supporting our conjecture are as follows:
\begin{enumerate}
\item On one hand, in all our examples the first difference of complexity satisfies $\Delta {\mathcal C}(n) \leq 6$.
\item On the other hand, we have checked for $\mathbf u=\mathbf u(1^{\omega}, (RRRRREEEEE)^{\omega})$ that ${\mathcal C}_{\mathbf u}(n)>5n$ for some $n \in \mathbb N$.
\end{enumerate}
\end{itemize}
\section{Appendix}\label{sec:Appendix}
In this appendix, we list the members of the sequence $(w_k)_{k=1}^{9}$ for the infinite word $\mathbf{u_p} = {\mathbf u}(1^{\omega}, (EERR)^{\omega})$. For their generation the program Sage~\cite{St} was used.
We have moreover highlighted the first occurrences of weak bispecials $s_{2k+1}, \ k\geq 2$.
\begin{align}
w_1 =& \;10 \nonumber \\[1mm]
w_2 =& \;1010 \nonumber \\[1mm]
w_3 =& \;10101 \nonumber \\[1mm]
w_4 =& \;1010110101 \nonumber \\[1mm]
w_5 =& \;101\underbrace{011010110}_{s_5}0101001010 \nonumber \\[1mm]
w_6 =& \;1010110101100101001010110101100101001010 \nonumber \\[1mm]
w_7 =& \;10101101011001010\underbrace{010101101011001010010101}_{s_7}001010011010 \nonumber \\
& \;110101001010011010110101 \nonumber \\[1mm]
w_8 =& \;10101101011001010010101101011001010010101001010011010 \nonumber \\
& \;11010100101001101011010110010100101011010110010100101 \nonumber \\
& \;01001010011010110101001010011010110101 \nonumber \\[1mm]
w_9 =& \;10101101011001010010101101011001010010101001010011010 \nonumber \\
& \;11\underbrace{010100101001101011010110010100101011010110010100101} \nonumber \\
& \;\underbrace{01001010011010110101001010011010110101100101001010}_{s_9}011 \nonumber \\
& \;01011010100101001101011010101101011001010010101101011 \nonumber \\
& \;00101001010011010110101001010011010110101011010110010 \nonumber \\
& \;1001010110101100101001010 \nonumber
\end{align}
\end{document}
\begin{document}
\frontmatter
\author[Tye Lidman]{Tye Lidman}
\address {Department of Mathematics, North Carolina State University\\ Raleigh, NC 27607, USA}
\email {[email protected]}
\urladdr{http://www4.ncsu.edu/~tlidman/}
\author[Ciprian Manolescu]{Ciprian Manolescu}
\address {Department of Mathematics, UCLA, 520 Portola Plaza\\ Los Angeles, CA 90095, USA}
\email {[email protected]}
\urladdr{http://www.math.ucla.edu/~cm/}
\begin{abstract}
We show that monopole Floer homology (as defined by Kronheimer and Mrowka) is isomorphic to the $S^1$-equivariant homology of the Seiberg-Witten Floer spectrum constructed by the second author.
\end {abstract}
\begin{altabstract}
Dans ce volume nous montrons que l'homologie de Floer des monopoles (telle que d\'efinie par Kronheimer et Mrowka) est isomorphe \`a l'homologie $S^1$-\'equivariante du spectre de Seiberg-Witten Floer construit par le second auteur.
\end{altabstract}
\subjclass{57R58}
\keywords{Floer homology, Seiberg-Witten equations, monopoles, 3-manifolds, Morse homology, Conley index, Morse-Smale, Coulomb gauge}
\thanks{The first author was partially supported by NSF grants DMS-1128155 and DMS-1148490. The second author was partially supported by NSF grants DMS-1104406 and DMS-1402914.}
\maketitle
\tableofcontents
\mainmatter
\chapter {Introduction}
\section{Background}
The Seiberg-Witten (monopole) equations \cite{SW1, SW2} are an important tool for understanding the topology of smooth four-dimensional manifolds. A signed count of the solutions of these equations on a closed four-manifold yields the Seiberg-Witten invariant \cite{Witten}. On a four-manifold with boundary, instead of a numerical invariant one can define an element in a group associated to the boundary, called the Seiberg-Witten Floer homology. There are several different constructions of Seiberg-Witten Floer homology in the literature, \cite{MarcolliWang, Spectrum,KMbook, FroyshovSW}. The goal of this monograph is to prove that, for rational homology spheres, the definitions given by Kronheimer-Mrowka in \cite{KMbook} and by the second author in \cite{Spectrum} are equivalent.
The construction in \cite{KMbook} applies to an arbitrary three-manifold $Y$, equipped with a $\operatorname{Spin}^c$ structure $\mathfrak{s}$. Given this data, Kronheimer and Mrowka define an infinite dimensional analog of the Morse complex, with the underlying space being the blow-up of the configuration space of $\operatorname{Spin}^c$ connections and spinors (modulo gauge). The role of gradient flow lines is played by solutions to generic perturbations of the Seiberg-Witten equations on $\rr\times Y$. Their invariant, monopole Floer homology, is the homology of the resulting complex. This complex (and hence also its homology) comes with a $\zz[U]$-module structure. The applications of monopole Floer homology include the surgery characterization of the unknot \cite{KMOS} and Taubes' proof of the Weinstein conjecture in three dimensions \cite{TaubesWeinstein}.
In fact, there are three different versions of monopole Floer homology defined in \cite{KMbook}; they are denoted $\widecheck{\mathit{HM}}, \widehat{\mathit{HM}},$ and $\overline{\mathit{HM}}$. Yet another version, $\widetilde{\mathit{HM}}$, was constructed by Bloom in \cite{Bloom}: To define $\widetilde{\mathit{HM}}$, one considers the cone of the $U$ map on the complex that defines $\widecheck{\mathit{HM}}$, and then takes homology.
Compared with \cite{KMbook}, the construction in \cite{Spectrum} was originally done only for rational homology spheres; on the other hand, it yields something more than a homology group. By using finite dimensional approximation of the Seiberg-Witten equations, combined with Conley index theory, one obtains an invariant in the form of an equivariant suspension spectrum. Specifically, given a rational homology sphere $Y$ equipped with a $\operatorname{Spin}^c$ structure $\mathfrak{s}$, one can associate to it an $S^1$-equivariant spectrum $\operatorname{SWF}(Y, \mathfrak{s})$. (See also \cite{PFP, KhandhawitThesis, Sasahira} for extensions of this construction to the case $b_1 > 0.$)
The $S^1$-equivariant homology of $\operatorname{SWF}(Y, \mathfrak{s})$ can be viewed as a definition of Seiberg-Witten Floer homology. The advantage of having a Floer spectrum is that one can also apply other (equivariant) generalized homology functors to it. For example, by adding the conjugation symmetry, one can define a $\operatorname{Pin}(2)$-equivariant Seiberg-Witten Floer homology; this was instrumental in the disproof of the triangulation conjecture by the second author \cite{Triangulation}. For other applications of the Floer spectrum, see \cite{GluingBF, kg, LinKO, Covers}.
\section{Results} \label{sec:results} The bulk of this monograph is devoted to proving:
\begin{theorem}\label{thm:Main}
Let $Y$ be a rational homology sphere with a $\operatorname{Spin}^c$ structure $\mathfrak{s}$. There is an isomorphism of absolutely-graded $\mathbb{Z}[U]$-modules:
$$\widecheck{\mathit{HM}}_*(Y,\mathfrak{s}) \cong \widetilde{H}^{S^1}_*(\operatorname{SWF}(Y,\mathfrak{s})),$$
where $\widecheck{\mathit{HM}}$ is the ``to'' version of monopole Floer homology defined in \cite{KMbook}, and $\widetilde{H}^{S^1}_*$ denotes reduced equivariant (Borel) homology.\footnote{In this book, we grade Borel homology so that we simply have $\widetilde{H}^{S^1}_*(X) = \widetilde{H}_*(X \wedge_{S^1} ES^1_+)$. This differs from the grading conventions in \cite{GreenleesMay} or \cite{PFP} by one.}
\end{theorem}
From Theorem~\ref{thm:Main} we deduce that Bloom's homology $\widetilde{\mathit{HM}}$ can be identified with the ordinary (non-equivariant) homology of the Floer spectrum $\operatorname{SWF}$:
\begin{corollary}\label{cor:MainHat}
Let $Y$ be a rational homology sphere equipped with a $\operatorname{Spin}^c$ structure $\mathfrak{s}$. Then, $\widetilde{\mathit{HM}}_*(Y,\mathfrak{s}) \cong \widetilde{H}_*(\operatorname{SWF}(Y,\mathfrak{s}))$ as absolutely graded abelian groups.
\end{corollary}
From the absolute grading on monopole Floer homology one can extract a $\mathbb{Q}$-valued invariant, called the Fr{\o}yshov invariant; see \cite{FroyshovSW} or \cite[Section 39.1]{KMbook}. A similar numerical invariant, called $\delta$, was defined in \cite[Section 3.7]{Triangulation} using the Floer spectrum $\operatorname{SWF}$. (The definition there was only given for Spin structures, but it extends to the Spin$^c$ setting.) An immediate consequence of Theorem~\ref{thm:Main} is
\begin{corollary}
\label{cor:delta}
Let $Y$ be a rational homology sphere equipped with a Spin$^c$ structure $\mathfrak{s}$. Then, $\delta(Y,\mathfrak{s}) = - h(Y,\mathfrak{s})$, where $h$ is the Fr{\o}yshov invariant as defined in \cite[Section 39.1]{KMbook}.
\end{corollary}
Furthermore, we can combine Theorem~\ref{thm:Main} with the equivalence between monopole Floer homology and Heegaard Floer homology, established in work of Kutluhan-Lee-Taubes \cite{KLT1, KLT2, KLT3, KLT4, KLT5} and of Colin-Ghiggini-Honda \cite{CGH1, CGH2, CGH3} and Taubes \cite{Taubes12345}. In this way we obtain a relationship between the spectrum $\operatorname{SWF}(Y, \mathfrak{s})$ and Heegaard Floer theory. Precisely, we confirm a conjecture from \cite{PFP}, that the different flavors of Heegaard Floer homology are different equivariant homology theories applied to $\operatorname{SWF}(Y, \mathfrak{s})$:
\begin{corollary}
\label{cor:hfswf}
If $Y$ is a rational homology sphere and $\mathfrak{s}$ is a $\operatorname{Spin}^c$ structure on $Y$, we have the following isomorphisms of relatively graded $\zz[U]$-modules:
\begin{align*}
\mathit{HF}^+_*(Y, \mathfrak{s}) & \cong \widetilde{H}_*^{S^1}(\SWF(Y, \mathfrak{s})), \hskip1cm \widehat{\mathit{HF}}_*(Y, \mathfrak{s}) \cong \widetilde{H}_*(\SWF(Y, \mathfrak{s})), \\
\mathit{HF}^-_*(Y, \mathfrak{s}) & \cong c\widetilde{H}_{*}(\SWF(Y, \mathfrak{s})), \hskip1.07cm \mathit{HF}^{\infty}_*(Y, \mathfrak{s}) \cong t\widetilde{H}_*(\SWF(Y, \mathfrak{s})),
\end{align*}
where $c\widetilde{H}_*$ and $t\widetilde{H}_*$ denote co-Borel and Tate homology, respectively, as defined in \cite{GreenleesMay}.
\end{corollary}
The results above can be applied in two directions. On the one hand, there are numerous computational techniques available for Heegaard Floer homology, whereas the class of manifolds $Y$ for which one can compute $\operatorname{SWF}(Y, \mathfrak{s})$ is rather small. By making use of the isomorphisms in Corollary~\ref{cor:hfswf}, one can at least understand the (equivariant or non-equivariant) homologies of $\operatorname{SWF}(Y, \mathfrak{s})$. In particular:
\begin{itemize}
\item If $\widehat{\mathit{HF}}_*(Y, \mathfrak{s}) \cong \zz$, this suffices to determine the non-equivariant stable homotopy type of $\operatorname{SWF}(Y, \mathfrak{s})$: by the Hurewicz and Whitehead theorems, it has to be that of a (de-)suspension of the sphere spectrum. (However, it is unclear whether the equivariant stable homotopy type of $\operatorname{SWF}(Y, \mathfrak{s})$ is determined by this information.)
\item In the case when $\mathfrak{s}$ is a Spin structure, the Seiberg-Witten equations on $(Y, \mathfrak{s})$ have a $\operatorname{Pin}(2)$ symmetry, and one can define the $\operatorname{Pin}(2)$-equivariant homology of $\operatorname{SWF}(Y, \mathfrak{s})$. (This homology was used in \cite{Triangulation} to disprove the triangulation conjecture in high dimensions.) When $Y$ is Seifert fibered, the Heegaard Floer homology was calculated in \cite{Plumbed}, so Corollary~\ref{cor:hfswf} gives the $S^1$-equivariant homology of $\operatorname{SWF}(Y, \mathfrak{s})$. Together with knowledge of the monopoles (generators of the monopole Floer complex) from \cite{MOY}, this suffices to determine the $\operatorname{Pin}(2)$-equivariant homology of $\operatorname{SWF}(Y, \mathfrak{s})$; see \cite{Stoffregen} for details.
\end{itemize}
In the reverse direction, there are certain results that are easier to prove with Floer spectra, and one can use Theorem~\ref{thm:Main} and Corollary~\ref{cor:hfswf} to translate them into the settings of monopole Floer or Heegaard Floer homology. This is the case with the Smith inequality for the Floer homology of coverings, which is the subject of the sequel to this book \cite{Covers}. By combining the Smith inequality with the knot surgery formula in Heegaard Floer homology \cite{RatSurg}, we obtain restrictions on surgeries on a knot being regular covers over other surgeries on that knot, and over surgeries on other knots; see \cite{Covers}.
\section{Outline and organization of the book} Theorem~\ref{thm:Main} asserts that the monopole Floer homology of Kronheimer-Mrowka is isomorphic to the equivariant homology of the Conley index used for $\operatorname{SWF}(Y, \mathfrak{s})$. Let us describe the basic strategy for the proof. The monopole Floer complex is defined from the {\em perturbed} Seiberg-Witten equations in infinite dimensions, whereas the Conley index is a space associated to an {\em approximation} of the Seiberg-Witten equations in finite dimensions. On the Conley index side, we can also use an approximation of the perturbed Seiberg-Witten equations; by the continuation properties of the Conley index, this yields a space homotopy equivalent to the one from the unperturbed case. Thus, we start by fixing a suitable perturbation, so that we have regularity in infinite dimensions, and hence we can define the monopole Floer complex. We then show that for a large enough approximation, we also have regularity in finite dimensions, and hence the homology of the Conley index is given by a Morse complex. Further, by applying the inverse function theorem (for stationary points and trajectories), we conclude that the Morse complex in finite dimensions is isomorphic to the original monopole Floer complex, and this will complete the proof.
There are several technical difficulties that must be overcome to implement this strategy. To start with, the version of Morse homology that we need to use in finite dimensions is different from the standard one in three ways: we are on a non-compact manifold (a vector space), we have to define $S^1$-equivariant rather than ordinary homology, and the approximate Seiberg-Witten flow is not a gradient flow. Non-compactness is taken care of using the notion of Conley index; this goes back to the work of Floer \cite{FloerMorse}. Equivariance was dealt with by Kronheimer and Mrowka in \cite{KMbook}, using the real blow-up construction. One intriguing aspect is that the flow is not a gradient. This leads us to introduce the weaker notion of {\em quasi-gradient}, which is a particular case of the Morse-Smale flows that appear in the study of stability for dynamical systems. We will show that one can define Morse homology from quasi-gradients. All this is done in Chapter~\ref{sec:finite}; there, we recall the various notions from Morse theory and Conley index theory, the relation between the two theories, and then proceed to define (in several steps) equivariant Morse homology for quasi-gradients on non-compact manifolds.
In Chapter~\ref{sec:spectrum} we outline the construction of the Seiberg-Witten Floer spectrum from \cite{Spectrum}. The main idea is to approximate the Seiberg-Witten equations (in global Coulomb gauge) by a flow in finite dimensions, and then to take the Conley index associated to that flow.
In Chapter~\ref{sec:HM} we sketch the construction of monopole Floer homology by Kronheimer and Mrowka, following their book \cite{KMbook}. In particular, we describe the class of admissible perturbations of the Seiberg-Witten equations that can be used to define the Floer complexes.
In Chapter~\ref{sec:coulombgauge} we explain how the Kronheimer-Mrowka construction can be rephrased in terms of configurations in global Coulomb gauge. This is the first step in bringing it closer to the construction of the Floer spectrum. The section is rather lengthy, because there are several aspects of the theory that have to be translated into the new setting: gauge conditions (in particular, for four-dimensional configurations we introduce the concept of {\em pseudo-temporal gauge}), admissibility for perturbations, Hessians, regularity for stationary points, spaces of paths, regularity for trajectories, gradings, orientations, and the $\zz[U]$-module action.
Another step in relating the two theories is taken in Chapter~\ref{sec:finiteapproximations}. There, we show that the Floer spectrum can be defined from finite dimensional approximation of the {\em perturbed} Seiberg-Witten equations. We use an admissible perturbation of the kind considered in \cite{KMbook}.
In Chapter~\ref{sec:criticalpoints} we use the inverse function theorem to identify the stationary points in infinite dimensions (i.e., the generators of the monopole Floer complex) with the stationary points of the gradient flow in the finite dimensional approximations. (This identification is limited to a certain grading range.) Further, we show that if the stationary points are non-degenerate in infinite dimensions, then the corresponding stationary points in sufficiently large approximations are non-degenerate as well. Note that the approximations are described by eigenvalue cut-offs, and these range over the interval $(0, \infty]$, with $\infty$ giving the original equations (in infinite dimensions). When applying the inverse function theorem, one thing to be careful about is how to give a manifold-with-boundary structure to the interval $(0, \infty]$. We do so by identifying $(0, \infty]$ with $[0,1)$, via a homeomorphism that depends on the growth rate of the eigenvalues.
In Chapter~\ref{sec:quasigradient} we show that the approximate Seiberg-Witten flow in finite dimensions is a quasi-gradient. The main difficulty there is to perturb the Chern-Simons-Dirac functional in such a way so that it decreases along flow lines.
In Chapter~\ref{sec:gradings}, we study the gradings of the stationary points in the finite dimensional approximations and show that the correspondence with stationary points in infinite dimensions established in Chapter~\ref{sec:criticalpoints} preserves gradings.
In Chapter~\ref{sec:MorseSmale}, we show how to arrange so that the Morse-Smale condition is satisfied for the approximate flow.
Chapter~\ref{sec:appendix} contains the construction of some diffeomorphisms $\Xi_{\lambda}$ of the configuration space. These diffeomorphisms take a stationary point of the approximate Seiberg-Witten flow to the corresponding stationary point of the original flow. They allow us to identify the corresponding path spaces, so that we may directly relate infinite dimensional trajectories with approximate trajectories in Chapters~\ref{sec:trajectories1} and \ref{sec:trajectories2}.
In Chapter~\ref{sec:trajectories1} we prove various convergence results for trajectories of the approximate Seiberg-Witten flow, in the limit as the eigenvalue cut-off goes to infinity.
In Chapter~\ref{sec:trajectories2} we use the inverse function theorem to identify the flow trajectories in infinite dimensions with those in the approximations. Furthermore, we show that these identifications preserve orientations.
Finally, in Chapter~\ref{sec:equivalence}, we bring these results together to prove Theorem~\ref{thm:Main}, about the equivalence between monopole Floer homology and the equivariant homology of $\operatorname{SWF}(Y, \mathfrak{s})$. We also prove the three corollaries stated in Chapter~\ref{sec:results}.
\section{Conventions.} Throughout the book, when we talk about the flow associated to a vector field $v$, we mean the reverse flow generated by $v$, i.e., the flow whose trajectories satisfy:
$$\frac{d}{dt}\gamma(t) + v(\gamma(t)) =0.$$
Also, we depart from the terminology in \cite{KMbook}, where critical points of the CSD functional are also referred to as critical points of the Seiberg-Witten vector field. Here, we sometimes need to think of vector fields as maps to the tangent space, and the critical points of a map are the zeros of its derivative. To prevent confusion, we will use the term {\em stationary point} to refer to the zero of a vector field.
\section{Acknowledgements.} We thank Peter Kronheimer, Robert Lipshitz, Max Lipyanskiy, Tim Perutz, and Matthew Stoffregen for helpful conversations.
\chapter{Morse homology in finite dimensions}
\label{sec:finite}
In this chapter we describe several versions of Morse homology (all in finite dimensions), building up to the construction that will be needed in this monograph.
\section{The standard construction} \label{sec:standard}
Let $X$ be a closed, oriented Riemannian manifold equipped with a Morse function $f$. The Morse-Smale condition requires that for any two critical points $x$ and $y$, the unstable manifold $W^u_{x}$ and the stable manifold $W^s_{y}$ intersect transversely. Their intersection is then a smooth manifold $M(x,y)$, the moduli space of parameterized flows between $x$ and $y$, which has dimension $\operatorname{ind}(x) - \operatorname{ind}(y)$.
With this information, we can build the {\em Morse complex} \cite{WittenMorse, BottMorse, FloerMorse}. The chain groups in degree $i$, denoted $C_i$, are freely generated over $\zz$ by the critical points of $f$ of index $i$. We define the differential $\partial$ as follows. If $x$ and $y$ are critical points of index $i$ and $i-1$ respectively, then $\breve{M}(x,y) := M(x,y)/\mathbb{R}$ is a compact, oriented $0$-manifold. (See Section~\ref{sec:or1} below for the construction of orientations.) We let $n(x,y)$ denote the number of unparameterized flow lines of $\nabla f$ from $x$ to $y$, counted with sign. The Morse differential is
\begin{equation}
\label{eq:delx}
\partial x = \sum_{\{y \mid \operatorname{ind}(y)=\operatorname{ind}(x)-1\} } n(x,y) \cdot y.
\end{equation}
It turns out that $\partial^2 = 0$ and one can take the homology of this complex. The Morse homology is isomorphic to the singular homology, $H_*(X)$.
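For a concrete illustration, consider the round sphere $X = S^2 \subset \R^3$ with the height function $f(x_1, x_2, x_3) = x_3$. There are exactly two critical points: the maximum $N=(0,0,1)$, of index $2$, and the minimum $S=(0,0,-1)$, of index $0$. The Morse-Smale condition holds, since $W^u_N = S^2 - \{S\}$ and $W^s_S = S^2 - \{N\}$ are open. Thus $C_2 = \zz\langle N \rangle$, $C_0 = \zz\langle S \rangle$, and $C_1 = 0$, so the differential vanishes for degree reasons, and the Morse homology is $\zz$ in degrees $0$ and $2$, in agreement with $H_*(S^2)$.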
For future reference, we mention a (well-known) alternative way of expressing the Morse-Smale condition. Let $f: X \to \R$ be Morse, and $x$, $y$ be two critical points. Define $\P(x, y)$ to be the space of smooth paths $\gamma: \rr\to X$ with $\lim_{t \to -\infty} \gamma(t)=x$ and $\lim_{t \to +\infty} \gamma(t)=y$. The space $\P(x, y)$ has a well-defined $L^2_k$ completion $\P_k(x, y)$, which is a Banach manifold. (See, for example, \cite[Section 2.1]{Schwarz} for the case $k=1$.) For $0 \leq j \leq k$, we let $T_{j, \gamma}\P(x, y)$ be the $L^2_j$ completion of the tangent space to $\P_k(x, y)$ at $\gamma$. (If we fix $j$, these spaces are naturally identified for all $k \geq j$, so we do not include $k$ in the notation.) The moduli space $M(x, y)$ is the zero set of the section
$$ F: \P_k(x, y) \to T_{k-1}\P(x,y), \ \ \gamma \mapsto \frac{d\gamma}{dt} + (\nabla f)(\gamma).$$
Given a path $\gamma \in \P_k(x, y)$, we get the linearized operator
\begin{equation}
\label{eq:lgamma}
L_{\gamma}=(dF)_\gamma : T_{j, \gamma}\P(x, y) \to T_{j-1, \gamma}\P(x, y),
\end{equation}
for all $j$ with $1 \leq j \leq k$.
The definition of $L_{\gamma}$ involves taking derivatives of vector fields, i.e., covariant derivatives. To do this, we need to choose a connection on $TX$ in a neighborhood $U$ of the image of $\gamma$. This can be the Levi-Civita connection coming from the Riemannian metric or, as in \cite{AudinDamian}, the trivial connection induced from a trivialization of $TX$ over $U$. In either case, if we denote by $D$ the covariant derivative, and $w \in T_{j, \gamma}\P(x, y)$ is a vector field along $\gamma$, we have
\begin{equation}
\label{eq:Lgamma}
L_{\gamma}(w) = \frac{Dw}{dt} + D(\nabla f)(w).
\end{equation}
Note that if $D$ is the Levi-Civita connection, then the second term $D(\nabla f)(w)$ is the Hessian of $f$ applied to $w$ at $\gamma(t)$.
One can check that $L_{\gamma}$ is a Fredholm operator. When $\gamma \in M(x, y)$, the index of $L_{\gamma}$ is the expected dimension of the moduli space $M(x, y)$.
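To see the index in the simplest possible case, consider the model operator $K = \frac{d}{dt} + \tanh(t)$, acting on real-valued functions of $t$ (between the $L^2_1$ and $L^2$ completions). Its kernel is spanned by $w(t) = 1/\cosh(t)$, which decays at both ends, while the kernel of the formal adjoint $-\frac{d}{dt} + \tanh(t)$ is spanned by $\cosh(t)$, which is not square-integrable; hence the index of $K$ is $1$. This agrees with the count $\operatorname{ind}(x) - \operatorname{ind}(y)$: the limiting operators $A(-\infty) = -1$ and $A(+\infty) = 1$ have one, respectively zero, negative eigenvalues.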
\begin{lemma}
\label{lem:Connections}
Suppose $k \geq 2$ and $1 \leq j \leq k$. Let $D$ and $D'$ be two connections on $TX$ in an open set $U$ containing the image of $\gamma \in \P_k(x, y)$. Write $L_{\gamma}$ and $L'_{\gamma}$ for the operators \eqref{eq:lgamma} that correspond to $D$, resp. $D'$. Then:
\begin{enumerate}[(i)]
\item
The difference $L_{\gamma} - L'_{\gamma}$ is compact, so the operators $L_{\gamma}$ and $L'_{\gamma}$ have the same Fredholm index;
\item If $\gamma \in M(x, y)$, then $L_{\gamma} = L'_{\gamma}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $A=D - D' \in \Omega^1(U; \operatorname{End}(TX))$. Then
\begin{equation}
\label{eq:lgdiff}
L_{\gamma} - L'_{\gamma} = A\bigl (\frac{d\gamma}{dt} + (\nabla f)(\gamma) \bigr).
\end{equation}
Since $k \geq 2$ and $j \leq k$, we have a Sobolev multiplication $L^2_{j-1} \times L^2_{k-1} \to L^2_{j-1}$. From here we see that the difference $L_{\gamma} - L'_{\gamma}$ takes $T_{j-1, \gamma} \P(x, y)$ to itself.
Consider a sequence of smooth bump functions $\beta_n: \rr\mathfrak{t}o [0,1]$ such that $\beta_n$ is supported on $[-n-1, n+1]$ and identically $1$ on $[-n, n]$. The truncations
$$ \beta_n \cdot (L_{\gamma} - L'_{\gamma}) : T_{j, \gamma}\P(x, y) \to T_{j, \gamma}\P(x, y)$$
converge to $L_{\gamma} - L'_{\gamma}$ in operator norm. Furthermore, when precomposing these truncations with the inclusion of $T_{j, \gamma}\P(x, y)$ into $T_{j-1, \gamma}\P(x, y)$, we obtain compact operators. (Indeed, we can apply Rellich's lemma, because we restrict attention to a fixed compact interval.) Since the limit of compact operators is compact, we conclude that $L_{\gamma} - L'_{\gamma}$, as an operator from $T_{j, \gamma}\P(x, y)$ to $T_{j-1, \gamma}\P(x, y)$, is compact.
Part (ii) follows immediately from \eqref{eq:lgdiff}.
\end{proof}
\begin{lemma} \label{lem:MSlu}
Fix $j \geq 1$. Then, the function $f$ is Morse-Smale if and only if for any two critical points $x$ and $y$ of $f$, and for any gradient trajectory $\gamma \in M(x, y)$, the operator $L_{\gamma}$ from \eqref{eq:lgamma} is surjective.
\end{lemma}
\begin{proof}
See \cite[Theorem 10.1.5]{AudinDamian} for the case $j=1$. The main idea is that surjectivity of $L_{\gamma}$ is equivalent to injectivity of the formal adjoint $L^*_{\gamma}$ and, for any $t\in \R$, one can identify $\ker(L^*_{\gamma})$ with the orthogonal complement $(T_{\gamma(t)} W^u_x + T_{\gamma(t)}W^s_y)^{\perp}.$ Thus, $W^u_x$ and $W^s_y$ are transverse at $\gamma(t)$ if and only if $L^*_{\gamma}$ is injective.
The same proof works for any $j \geq 1$. In fact, the kernel of $L^*_{\gamma} : T_{j, \gamma}\P(x, y) \to T_{j-1, \gamma}\P(x, y)$ consists of smooth configurations (as can be seen by a standard bootstrapping argument), and hence the kernel is independent of $j$.
\end{proof}
\section{Orientations}
\label{sec:or1}
We now discuss the construction of orientations in Morse homology. The traditional way is to choose orientations for $W^u_x$ for all critical points $x$. These induce orientations on $W^s_x$, and hence on the moduli spaces $M(x, y)$. When $\operatorname{ind}(x)-\operatorname{ind}(y)=1$, a trajectory $u \in M(x, y)$ is counted in the differential $\del$ with a sign $\pm 1$ depending on whether the orientation of $M(x, y)$ at $u$ coincides with the canonical orientation of $\R$.
An alternative way of constructing orientations, closer to Floer theory, is described in \cite[Chapter 3]{Schwarz}. Let us sketch this construction here. First, note that to every Fredholm operator $L$ we can associate a one-dimensional vector space, the determinant line
$$ \det(L) = \Lambda^{\max} (\ker L) \otimes \Lambda^{\max}(\operatorname{coker} L)^*.$$
From the proof of Lemma~\ref{lem:MSlu} we see that the tangent bundle to $M(x, y)$ can be identified with the kernel of the operator $L_{\gamma}$ from \eqref{eq:Lgamma}, whereas the cokernel of $L_{\gamma}$ is trivial. Thus, orienting $M(x, y)$ is equivalent to orienting the determinant line bundle $\det(L) \to M(x, y)$, with fibers $\det(L_{\gamma})$ over $\gamma \in M(x, y)$.
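For instance, when $L$ is invertible, both $\ker L$ and $\operatorname{coker} L$ are zero, $\Lambda^{\max}$ of the zero vector space is canonically $\R$, and therefore $\det(L)$ is canonically isomorphic to $\R$. This observation is relevant below, when we consider the operator associated to a constant path at a critical point.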
Following \cite[Definition 2.1]{Schwarz}, let us equip $\overline{\R} = \rr\cup \{\pm \infty \}$ with the structure of a smooth manifold with boundary by requiring that the map $\overline{\R} \to [-1, 1], t \mapsto t/\sqrt{1+t^2}$ is a diffeomorphism. For every $x, y \in X$ (not necessarily critical points), we consider the space of smooth curves
$$ C^{\infty}_{x, y} = \{ \gamma \in C^{\infty}(\overline{\R}, X) \mid \gamma(-\infty)=x, \ \gamma(+\infty) = y\}.$$
For $\gamma \in C^{\infty}_{x, y}$, we let $\Sigma_{\gamma^*TX}$ be the space of Fredholm operators of the form
\begin{equation}
\label{eq:Kop}
K = \frac{D}{dt} + A(t) : T_{1,\gamma} \P(x, y) \to T_{0,\gamma} \P(x, y),
\end{equation}
where $A \in C^0(\rr; \operatorname{End}(\gamma^*TX))$ is a path of endomorphisms such that $K^{\pm} := A(\pm \infty)$ is non-degenerate and conjugated self-adjoint\footnote{In \cite{Schwarz}, for a real vector space $V$, an operator $A \in \operatorname{End}(V)$ is called conjugated self-adjoint if it is self-adjoint with respect to some scalar product. This condition is equivalent to $A$ being diagonalizable over $\R$.}. In particular, when $x$ and $y$ are critical points, the operator $L_{\gamma}$ from \eqref{eq:Lgamma} is of this form.
Consider the set of pairs $(\gamma, K)$, where $\gamma \in C^{\infty}(\overline{\R}, X)$ and $K \in \Sigma_{\gamma^*TX}$. Following \cite[Definition 3.7]{Schwarz}, two such pairs $(\gamma, K)$ and $(\zeta, L)$ are said to be equivalent if we have the asymptotic identities
$$ \gamma(\pm \infty) = \zeta(\pm \infty) \ \ \text{and} \ \ K^{\pm} = L^{\pm}.$$
Let $\mathcal{E}$ be the set of such equivalence classes. Thus, an element of $\mathcal{E}$ is determined by two points $x, y \in X$, together with two non-degenerate and conjugated self-adjoint operators $K^- \in \operatorname{End}(T_xX)$ and $K^+ \in \operatorname{End}(T_yX)$.
It is proved in \cite[Section 3.2.1]{Schwarz} that, if $(\gamma, K)$ is equivalent to $(\zeta, L)$, there is a natural identification of the determinant lines $\det(K)$ and $\det(L)$. Hence, we can write $\det([\gamma, K])$ for a class $[\gamma, K] \in \mathcal{E}$. As in \cite[Definition 3.15]{Schwarz}, we define a {\em coherent orientation} to be a map $\sigma$ that associates an orientation of $\det([\gamma, K])$ to any $[\gamma, K] \in \mathcal{E}$, in a way compatible with concatenation of paths and operators; that is,
\begin{equation}
\label{eq:gluingOr}
\sigma[\gamma, K] \# \sigma[\zeta, L] = \sigma[\gamma \# \zeta, K \# L],
\end{equation}
whenever $\gamma(+\infty) = \zeta(-\infty)$ and $K^+ = L^-$. We denote by $\#$ the natural concatenation operator.
Proposition 3.16 in \cite{Schwarz} shows that coherent orientations exist. Concretely, a coherent orientation can be specified by choosing a basepoint $x_0 \in X$, some non-degenerate and conjugated self-adjoint operator $A \in \operatorname{End}(T_{x_0}X)$, choosing the trivial orientation for the class $[x_0, K_0]$, where $x_0$ is the constant path at $x_0$ and $K_0 = \frac{d}{dt} + A$, and finally choosing arbitrary orientations $\sigma[\gamma, K]$ for all classes $[\gamma, K] \in \mathcal{E}$ with $[\gamma, K] \neq [x_0, K_0]$ and $\gamma(-\infty) = x_0$. Once this set of data is fixed, the orientations for the other classes in $\mathcal{E}$ are determined by the gluing condition \eqref{eq:gluingOr}.
In particular, a coherent orientation gives orientations of the determinant line bundles $\det(L) \mathfrak{t}o M(x, y)$, and hence of the moduli spaces $M(x, y)$ themselves. These orientations can be used to define the Morse differential as in \eqref{eq:delx}. Furthermore, it is proved in \cite[Appendix B]{Schwarz} that the coherent orientation can be chosen so that the orientations on $M(x, y)$ coincide with those coming from orienting the unstable manifolds. Therefore, the resulting Morse complex is the same as in the classical definition.
We now introduce a third way of defining orientations for the Morse complex. This is a variant of Schwarz's construction, but closer to what Kronheimer and Mrowka do for monopole Floer homology in \cite[Section 20]{KMbook}. Rather than considering all Fredholm operators of the form $\frac{D}{dt} + A(t)$, we focus on the operators $L_{\gamma}$ as in \eqref{eq:Lgamma}, but with $x, y \in X$ and $\gamma \in \P(x, y)$ arbitrary. However, note that $L_{\gamma}$ is only Fredholm when the Hessian $\operatorname{Hess}(f)=D(\nabla f)$ is non-degenerate at its endpoints $x, y$. This is not true in general, so let us instead consider compact intervals $I=[t_1, t_2] \subset \R$, spaces of paths
$$\P^I(x, y) = \{\gamma \in C^{\infty}(I, X) \mid \gamma(t_1)=x,\ \gamma(t_2)=y \},$$
and their $L^2_k$ completions $\P_k^I(x, y)$. At each $x \in X$, the Hessian $\operatorname{Hess}(f)_x$ is self-adjoint, and gives a decomposition of $T_x X$ into the span of the nonpositive and positive eigenspaces:
$$ T_x X = H^-_x \oplus H^+_x.$$
In other words, we are using the spectral decomposition of the non-degenerate self-adjoint operator $\operatorname{Hess}(f)_x - \varepsilon$, for $\varepsilon > 0$ small.
Let $\Pi^-_x$ and $\Pi^+_x$ be the orthogonal projections onto $H^-_x$ and $H^+_x$, respectively.
For any $\gamma \in \P^I_k(x, y)$, we define a Fredholm operator
\begin{align}
\label{eq:tLgamma}
\tilde{L}_{\gamma} &: T_{1,\gamma}\P^I(x, y) \to T_{0,\gamma} \P^I(x, y) \oplus H^+_x \oplus H^-_y \\
w &\mapsto \Bigl ( \frac{Dw}{dt} + \operatorname{Hess}(f)(w), -\Pi_x^+(w(t_1)), \Pi_y^-(w(t_2)) \Bigr).
\notag
\end{align}
One can turn $\tilde{L}_{\gamma}$ into an operator of the form \eqref{eq:Kop} as follows. Pick a smooth map $\varphi: I \to I$ such that $\varphi' \geq 0$ and
$$ \varphi(t_1+ s) = t_1, \ \ \varphi(t_2 - s) = t_2$$
for all $s \in [0, \delta]$, for some small $\delta > 0$. Then, consider the extended path
$$ \gamma^{\operatorname{ext}}: \overline{\R} \to X, \ \ \gamma^{\operatorname{ext}}(t) = \begin{cases}
x & \text{for} \ t < t_1,\\
\gamma(\varphi(t)) & \text{for} \ t \in I,\\
y & \text{for} \ t > t_2.
\end{cases} $$
Define a Fredholm operator $\tilde{L}^{\operatorname{ext}}_{\gamma} \in \Sigma_{(\gamma^{\operatorname{ext}})^*TX}$ by
$$ \tilde{L}^{\operatorname{ext}}_{\gamma}= \frac{D}{dt} + \operatorname{Hess}(f)_{\gamma^{\operatorname{ext}}(t)} - \varepsilon, $$
for $\varepsilon > 0$ small. Standard deformation and concatenation arguments show that the operators $\tilde{L}_{\gamma}$ and $\tilde{L}^{\operatorname{ext}}_{\gamma}$ have the same index, and that orienting $\det(\tilde{L}_{\gamma})$ is equivalent to orienting $\det(\tilde{L}^{\operatorname{ext}}_{\gamma})$.
Let $\Lambda_{\gamma}(x, y)$ be the two-element set consisting of the orientations of $\det(\tilde{L}_{\gamma})$ or, equivalently, of $\det(\tilde{L}^{\operatorname{ext}}_{\gamma})$. For fixed $x, y$ but different $\gamma \in \P_k(x, y)$, the pairs $(\gamma^{\operatorname{ext}}, \tilde{L}^{\operatorname{ext}}_{\gamma}) \in \mathcal{E}$ are equivalent. Hence, by the results of \cite[Section 3.2.1]{Schwarz}, we have a canonical identification between the different sets $\Lambda_{\gamma}(x, y)$. Consequently, we can drop $\gamma$ from the notation and write $\Lambda(x, y)$ for any $\Lambda_{\gamma}(x, y)$.
By considering concatenation of paths, we obtain a natural composition map
\begin{equation}
\label{eq:composeorientations}
\Lambda(x, y) \times \Lambda(y, z) \to \Lambda(x, z).
\end{equation}
\begin{definition}
\label{def:sco}
A {\em specialized coherent orientation} $o$ consists of choices of elements $o_{x, y} \in \Lambda(x, y)$, one for each $x, y \in X$, such that
$$o_{x, y} \cdot o_{y, z} = o_{x, z}$$
for all $x, y, z \in X$, where the multiplication is with respect to \eqref{eq:composeorientations}.
\end{definition}
A specialized coherent orientation can be constructed as follows. We choose a critical point $x_0$ of $f$, and consider the operator $\tilde{L}_{x_0}$ for the constant path at $x_0$. There is a canonical identification $\det(\tilde{L}_{x_0}) \cong \R$, and hence $\Lambda(x_0, x_0) \cong \{\pm 1\}$; we let $o_{x_0, x_0}$ be the element $+1$. Then, we choose arbitrary elements $o_{x_0, x} \in \Lambda(x_0, x)$ for all other $x \in X$, and we obtain the elements $o_{x, y}$ for the remaining pairs using the concatenation rule.
Using the identification between $\det(\tilde{L}_{\gamma})$ and $\det(\tilde{L}^{\operatorname{ext}}_{\gamma})$, we see that a coherent orientation gives rise to a specialized coherent orientation. Conversely, any specialized coherent orientation can be extended to a coherent orientation. To check this last statement, consider the construction of a specialized coherent orientation from the choices of $x_0$ and elements in $o_{x_0, x}$, as above. This is a subset of the data needed to give a coherent orientation. Indeed, letting $K_0 = \frac{d}{dt} + \operatorname{Hess}(f)_{x_0}$, we see that from the elements of $o_{x_0, x}$ we get orientations $\sigma[\gamma, K]$ for paths $\gamma$ starting at $x_0$ and ending at $x$, and for $K$ with $K^+ = \operatorname{Hess}(f)_x - \varepsilon$. Let us now also choose orientations $\sigma[\gamma, K]$ for $\gamma \in \P_k(x_0,x)$ but $K^+ \neq \operatorname{Hess}(f)_x - \varepsilon$. This gives the necessary data to construct a coherent orientation.
Finally, note that a specialized coherent orientation gives rise to orientations on the moduli spaces $M(x, y)$ in the following way. Given $\gamma \in M(x, y)$, pick $I=[t_1, t_2] \subset \R$ large enough so that the operators $\operatorname{Hess}(f)_{\gamma(t)}$ are non-degenerate for all $t \in (-\infty, t_1] \cup [t_2, \infty)$. Consider the restriction of $\gamma$ to the interval $I$, and the corresponding operator $\tilde{L}_{\gamma|_I}$. The orientation on $\det(\tilde{L}_{\gamma|_I})$ gives an orientation on $\det(L_{\gamma})$, by a concatenation process similar to the one in \cite[Section 20.4]{KMbook}. In turn, this orients the tangent space $T_{\gamma}M(x, y)$.
We conclude that a specialized coherent orientation $o$ can be used to give signs in the Morse complex. When $o$ is induced by a coherent orientation $\sigma$, the signs coming from $o$ are the same as those coming from $\sigma$. Thus, the three ways of producing orientations, which we have described in this section, all produce the same Morse complex.
\section{Quasi-gradient vector fields}
\label{sec:quasi}
The purpose of this subsection is to extend Morse-Smale theory to a class of vector fields that are not gradients. Note that among all smooth vector fields on a manifold, gradient vector fields are rather special---even when allowing both the metric and the function to vary. For example, the differential of a gradient vector field at a critical point has only real eigenvalues (because it is a symmetric operator with respect to the given metric). This is also true for the gradient-like (also called pseudo-gradient) vector fields that are sometimes used in Morse theory, e.g., in \cite{MilnorHCob, AudinDamian}. We want to relax this condition, and allow for the stationary points of our vector field to only be hyperbolic, in the following sense:
\begin{definition}
Let $v$ be a smooth vector field on a manifold $X$. A stationary point $x$ of $v$ is called {\em hyperbolic} if (the complexification of) the derivative $(dv)_x: T_xX \mathfrak{t}o T_xX$ has no purely imaginary eigenvalues.
\end{definition}
We now introduce the class of vector fields we want to work with.
\begin{definition}
\label{def:qg}
Let $X$ be a smooth manifold. A smooth vector field $v$ on $X$ is called {\em Morse quasi-gradient} if the following conditions are satisfied:
\begin{enumerate}[(a)]
\item All stationary points of $v$ are hyperbolic.
\item There exists a smooth function $f: X \to \R$ such that $df(v) \geq 0$ at all $x \in X$, with equality holding if and only if $x$ is a stationary point of $v$.
\end{enumerate}
\end{definition}
\begin{example}
The gradient vector field of a Morse function $f$ has hyperbolic stationary points, since the eigenvalues of $(dv)_x$ are real and non-zero; further, the Morse function $f$ can be used in condition (b).
\end{example}
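\begin{example}
Not every Morse quasi-gradient is a gradient. On $X = \R^2$, consider $v(x_1, x_2) = (x_1 - x_2, x_1 + x_2)$ together with $f(x_1, x_2) = \frac{1}{2}(x_1^2 + x_2^2)$. Then $df(v) = x_1(x_1 - x_2) + x_2(x_1 + x_2) = x_1^2 + x_2^2 \geq 0$, with equality exactly at the origin, which is the unique stationary point. The derivative $(dv)_0 = \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}$ has eigenvalues $1 \pm i$, so the origin is hyperbolic; since these eigenvalues are not real, $v$ cannot be a gradient vector field near the origin, for any choice of metric.
\end{example}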
For future reference, it is worth pointing out the following.
\begin{lemma}
\label{lem:qgCritical}
Let $v$ be a Morse quasi-gradient vector field on a smooth manifold $X$, and let $f$ be a function as in part (b) of Definition~\ref{def:qg}. Then, any stationary point of $v$ is a critical point of $f$.
\end{lemma}
\begin{proof}
Let $x$ be a stationary point of $v$. Since $x$ is a minimum of the function $df(v)$, we have $d(df(v))_x = 0$. For $y \in T_xX,$ we get
$$ 0= d(df(v))_x (y) = (d^2f)_x(v_x, y) + (df)_x((dv)_x(y)).$$
The first term in the last expression is zero, because $v_x = 0$. Further, since $x$ is hyperbolic, we have that $(dv)_x$ is an automorphism of $T_xX$, hence $(dv)_x(y)=z$ can take any value in $T_x X$. We conclude that $(df)_x(z)=0$ for all $z \in T_x X $, so $x$ is a critical point of $f$.
\end{proof}
For the rest of this subsection we will assume that $X$ is closed and oriented. Let $v$ be a Morse quasi-gradient vector field on $X$. We seek to establish several properties that $v$ has in common with gradients of Morse functions.
Let $\operatorname{Crit}$ be the set of stationary points of $v$. If $x \in \operatorname{Crit}$, hyperbolicity implies that we have a decomposition
$$ T_xX = T_x^sX \oplus T_x^uX,$$
where $T_x^sX$ and $T_x^uX$ are invariant subspaces for $L=(dv)_x$, the eigenvalues of $L|_{T_x^sX}$ have positive real part, and the eigenvalues of $L|_{T_x^uX}$ have negative real part. (See \cite[Proposition 2.8]{PalisMelo} for a proof.) We define the {\em index} of $x$ to be the dimension of $T_x^uX$.
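For example, if $v = \nabla f$ is the gradient of a Morse function, then at a stationary point $x$ we have $(dv)_x = \operatorname{Hess}(f)_x$, the subspace $T_x^uX$ is spanned by the eigenvectors with negative eigenvalues, and the index of $x$ is the usual Morse index of $f$ at $x$.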
The local structure of flows near hyperbolic singularities is well-understood; a good reference is \cite[Chapter 2]{PalisMelo}. Theorem 4.10 in \cite{PalisMelo} says that around every $x \in \operatorname{Crit}$ there is a local coordinate chart $\N_x$ such that the restriction to $\N_x$ of the flow of $v$ is conjugate to a linear flow (under a homeomorphism). This implies that $x$ is the only stationary point inside $\N_x$. Since $X$ is compact, we deduce that the set $\operatorname{Crit}$ is finite.
We can arrange so that the neighborhoods $\N_x$ are disjoint for different $x$. By part (b) of Definition~\ref{def:qg}, there is an $\varepsilon > 0$ such that
\begin{equation}
\label{eq:eeps}
df(v)(p) \geq \varepsilon, \ \ \forall \ p \in X - \bigcup_{x \in \operatorname{Crit}} \N_x.
\end{equation}
Next, consider the flow on $X$ associated to $v$. Let $\gamma: \rr\to X$ be a flow line, i.e., $d\gamma/dt + v(\gamma(t))=0.$ We have:
\begin{equation}
\label{eq:fgamma}
\frac{d}{dt}f(\gamma(t))= -df_{\gamma(t)} \bigl( v(\gamma(t)) \bigr) \leq 0,
\end{equation}
with equality only if $\gamma(t)$ is stationary. Therefore, $f$ decreases (strictly) along non-stationary flow lines.
\begin{lemma}
\label{lem:ao}
Let $v$ be a Morse quasi-gradient vector field on a closed, oriented, smooth manifold $X$. Let $\gamma: \rr\to X$ be a flow line of $v$. Then, $\lim_{t\to -\infty} \gamma(t)$ and $\lim_{t\to +\infty} \gamma(t)$ exist, and they are both stationary points of $v$.
\end{lemma}
\begin{proof}
This is similar to the Morse gradient case; cf. \cite[Proposition 3.19]{BanyagaHurtubise}. Let $\N_x$ be the neighborhoods of $x \in \operatorname{Crit}$ constructed above, and $\varepsilon > 0$ be such that \eqref{eq:eeps} holds.
Compactness of $X$ implies that $\gamma(t)$ is defined for all $t \in \R$, and $f \circ \gamma : \rr\to \R$ is bounded. (Here, $f$ is the function in part (b) of Definition~\ref{def:qg}.) From Equation~\eqref{eq:fgamma} we see that
$$ \lim_{t \to \pm \infty} -df(v)(\gamma(t))= \lim_{t \to \pm \infty} \frac{d}{dt}f(\gamma(t))=0.$$
In view of \eqref{eq:eeps}, this shows that for $t \ll 0$, the flow line $\gamma(t)$ is contained in the chart $\N_x$ for some $x$. Further, if $t_n \to -\infty$, then at any accumulation point of the sequence $\{\gamma(t_n)\} \subseteq X$ we must have $df(v)=0$. Hence, the only accumulation point (which exists by the compactness of $X$) must be the stationary point $x$. We deduce that $\lim_{t\to -\infty} \gamma(t)=x$. A similar argument applies to $\gamma(t)$ as $t\to +\infty$.
\end{proof}
Hyperbolicity of stationary points implies the existence of stable and unstable manifolds $W^s_x, W^u_x$, cf. the Stable Manifold Theorem \cite[Theorem 6.2]{PalisMelo}. These are the images of injective immersions
$$ E^s : T_x^sX \to W^s_x \subseteq X,$$
$$ E^u : T_x^uX \to W^u_x \subseteq X.$$
If we did not have condition (b) in Definition~\ref{def:qg}, the maps $E^s$ and $E^u$ may not be embeddings, and $W^s_x, W^u_x$ may not be manifolds. (See \cite[p.74]{PalisMelo} for an example.) However, for Morse quasi-gradients, $E^s$ and $E^u$ are homeomorphisms onto their images, and hence embeddings. The proof of this fact can be taken verbatim from the Morse gradient case; cf. Lemma 4.10 in \cite{BanyagaHurtubise}. Indeed, the main input in that proof is the existence of limits of flow lines at $\pm \infty$; this was established for quasi-gradients in Lemma~\ref{lem:ao}.
Thus, for every $x \in \operatorname{Crit}$, we have stable and unstable submanifolds $W^s_x, W^u_x \subseteq X$, of dimensions $\dim(X)-\operatorname{ind}(x)$ and $\operatorname{ind}(x)$, respectively. We can then formulate the Morse-Smale condition as in the gradient case:
\begin{definition}
\label{def:Mqv}
A Morse quasi-gradient vector field $v$ is called {\em Morse-Smale} if for all stationary points $x$ and $y$, the unstable manifold $W^u_{x}$ and the stable manifold $W^s_{y}$ intersect transversely.
\end{definition}
Morse-Smale quasi-gradients are a particular example of the {\em Morse-Smale vector fields}, which play an important role in the theory of dynamical systems. We recall their definition below. See \cite[Chapter 5]{PalisMelo} for an introduction to the subject.
\begin{definition}[\cite{PalisMelo}, p.118-119]
\label{def:MS}
Let $X$ be a closed, smooth manifold. A smooth vector field $v$ on $X$ is called {\em Morse-Smale} if the following conditions are satisfied:
\begin{enumerate}[(a)]
\item $v$ has a finite number of critical elements (stationary points and closed orbits), all of which are hyperbolic.\footnote{See \cite[p.95]{PalisMelo} for the definition of hyperbolicity for closed orbits. Stable and unstable manifolds for closed orbits are defined in the obvious way; see \cite[p.98]{PalisMelo}.}
\item If $\operatorname{sp}incigma_1$ and $\operatorname{sp}incigma_2$ are critical elements of $v$, then the unstable manifold of $\operatorname{sp}incigma_1$ is transverse to the stable manifold of $\operatorname{sp}incigma_2$;
\item The set of nonwandering points for $v$ is equal to the union of the critical elements of $v$.
\end{enumerate}
Here, $x \in X$ is called a {\em wandering point} for $v$ if there exists a neighborhood $U$ of $x$ and a number $t_0 > 0$ such that $\Phi_t(U) \cap U = \emptyset$ for $|t| > t_0$, where $\{\Phi_t\}_{t \in \R}$ is the flow generated by $v$. Otherwise, we say that $x$ is {\em nonwandering}.
\end{definition}
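For example, an irrational linear flow on the two-torus has no critical elements at all, so that conditions (a) and (b) hold vacuously, but every point is nonwandering (every orbit is dense); such a flow therefore fails condition (c) and is not Morse-Smale.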
Morse-Smale vector fields satisfy structural stability, that is, a small perturbation of a Morse-Smale dynamical system is conjugate to the original system via a homeomorphism.
\begin{lemma}
\label{lem:isMS}
A Morse-Smale quasi-gradient vector field, in the sense of Definition~\ref{def:Mqv}, is a Morse-Smale vector field in the sense of Definition~\ref{def:MS} and, furthermore, it has no closed orbits.
\end{lemma}
\begin{proof}
Closed orbits do not exist because $f$ decreases along flow lines. Parts (a) and (b) of Definition~\ref{def:MS} are immediate consequences of Definition~\ref{def:Mqv}. Part (c), the fact that nonstationary points are wandering, follows since $f$ decreases along flow lines.
\end{proof}
Suppose $v$ is a Morse-Smale quasi-gradient field. For every $x, y \in \operatorname{Crit}$, the intersection
$$M(x,y):= W^u_x \cap W^s_y$$ is transverse, and hence a manifold of dimension $\operatorname{ind}(x) - \operatorname{ind}(y)$. We can construct a Morse complex $(C_*, \del)$ as in the gradient case. The groups $C_*$ are generated by the stationary points, the grading is determined by the index, and the differential $\del$ is given by the formula \eqref{eq:delx}.
\begin{proposition}
\label{prop:MSquasi}
Let $v$ be a Morse-Smale quasi-gradient vector field on a closed, oriented, smooth manifold $X$. Let $(C_*, \del)$ be the corresponding Morse complex. Then $\del^2=0$, and the homology $H_*(C)$ is isomorphic to $H_*(X)$.
\end{proposition}
\begin{proof}
In the gradient case, one way to prove that Morse homology recovers singular homology is by using the Conley index; cf. \cite{FloerMorse, SalamonMorse, Franzosa, McCord} or \cite[Chapter 7]{BanyagaHurtubise}. (See Section~\ref{subsec:ConleyMorse} below for the definition of the Conley index.) The basic idea is to find a Morse decomposition of the flow into attractor-repeller pairs. This produces a filtration of $X$ by index pairs, such that the quotients are the Conley indices of the stationary points $x \in \mathcal{C}rit$. These quotients are homotopy equivalent to spheres of dimensions $\operatorname{ind}ex(x)$, and the connecting homomorphisms between their homology groups are given by the signed count of flow lines.
This proof can be adapted to the setting of Morse-Smale quasi-gradients. Indeed, one can construct Morse decompositions and connection matrices for general flows, as in the work of Franzosa \cite{Franzosa}. Further, Theorem 3.1 in \cite{McCord} describes the neighborhood of a flow line between two hyperbolic points in a Morse-Smale (not necessarily gradient) flow. This shows that the connection matrices are given by counts of flow lines, i.e., that each flow line contributes plus or minus $1$ to the boundary operator. Orientations can be fixed as in \cite{FloerMorse, SalamonMorse}.
\end{proof}
The rest of the discussion in Section~\ref{sec:standard} extends to quasi-gradients as well. The Morse-Smale condition can be phrased in terms of the surjectivity of the operator
$$ L_{\gamma}(w) = \frac{Dw}{dt} + D(v)(w), $$
as in Lemma~\ref{lem:MSlu}. Furthermore, the discussion from Section~\ref{sec:or1} carries over to this setting, so the moduli spaces $M(x,y)$ can be oriented using coherent orientations or specialized coherent orientations. The only caveat is that in the definition of coherent orientations, when we define the sets $\Sigma_{\gamma^*TX}$, we should allow the operators $K^{\pm}$ to be hyperbolic, rather than conjugated self-adjoint.
\section{Morse homology for isolated invariant sets}
\label{subsec:ConleyMorse}
Suppose $X$ is a (possibly non-compact) smooth, oriented manifold equipped with a Morse quasi-gradient vector field $v$. In general, there is no way of defining a Morse complex. Even if we assume the Morse-Smale condition on the unstable and stable manifolds, the resulting differential may not square to zero, because flow lines can escape to infinity.
Nevertheless, we have a Morse complex in a certain setting, described below. First, we recall the definitions of an isolated invariant set, index pair, and Conley index, following \cite{ConleyBook}. Fix a one-parameter subgroup $\{\Phi_t \}$ of diffeomorphisms of a manifold $X$. Given a compact subset $A \operatorname{sp}incubseteq X$, define the compact subset
\[
\operatorname{Inv} (A, \Phi) = \{x \in A \mid \Phi_t (x) \in A \mathfrak{t}ext{ for all } t \in \mathbb{R} \}.
\]
\begin{definition}
\label{def:iis}
A compact set $\mathscr{S} \subseteq X$ is called an {\em isolated invariant set} if there exists a compact $A \subseteq X$ with $\mathscr{S} = \operatorname{Inv}(A, \Phi) \subseteq \operatorname{int}(A)$. Such a set $A$ is called an {\em isolating neighborhood} for $\mathscr{S}$.
\end{definition}
\begin{definition}
\label{def:indexpair}
An {\em index pair} $(N,L)$ for $\mathscr{S}$ consists of compact sets $L \subseteq N \subseteq X$ such that \\
a) $\operatorname{Inv} (N - L, \Phi) = \mathscr{S} \subset \operatorname{int}(N - L)$, \\
b) for all $x \in N$, if there exists $t >0$ such that $\Phi_t(x)$ is not in $N$, there exists $0 \leq \tau < t$ with $\Phi_\tau(x) \in L$ $(L$ is an {\em exit set} for $N)$, \\
c) given $x \in L$, $t>0$, if $\Phi_s(x) \in N$ for all $0 \leq s \leq t$, then $\Phi_s(x)$ is in $L$ for $0 \leq s \leq t$ $(L$ is {\em positively invariant in $N$)}.
\end{definition}
Conley \cite{ConleyBook} showed that any isolated invariant set $\mathscr{S}$ admits an index pair. The {\em Conley index} for an isolated invariant set $\mathscr{S}$, denoted $I(\Phi, \mathscr{S})$, is defined to be the pointed space $(N/L,[L])$. Its pointed homotopy type is an invariant of the triple $(X,\Phi_t,\mathscr{S})$. In fact, the Conley index is invariant under continuous deformations of the flow, as long as $\mathscr{S}$ remains isolated in a suitable sense.
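\begin{example}
As a basic illustration, take $X = \R^n$ with $v = \nabla f$ for $f(x) = |x_+|^2 - |x_-|^2$, where $x = (x_-, x_+) \in \R^k \times \R^{n-k}$; the flow associated to $v$ expands the $x_-$ directions and contracts the $x_+$ directions, and $\mathscr{S} = \{0\}$ is an isolated invariant set. One checks that $(N, L) = (D^k \times D^{n-k}, \partial D^k \times D^{n-k})$ is an index pair: since $|x_+|$ does not increase along the flow, trajectories can only leave $N$ through the part of the boundary where $|x_-| = 1$. The Conley index $N/L$ is homotopy equivalent to the sphere $S^k$, whose reduced homology is $\zz$ in degree $k$; this matches the contribution of a single stationary point of index $k$ to the Morse complex.
\end{example}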
Going back to the case of a Morse-Smale quasi-gradient flow on $X$, suppose $\mathscr{S} \subseteq X$ is an isolated invariant set. Let $\operatorname{Crit}[\mathscr{S}]$ be the set of stationary points of $v$ that lie in $\mathscr{S}$. The set $\operatorname{Crit}[\mathscr{S}]$ is finite, because $\mathscr{S}$ is compact. For any $x, y \in \operatorname{Crit}[\mathscr{S}]$, if a point of $\mathscr{S}$ is on a flow line $\gamma \in \breve{M}(x,y)$, then all the points on that flow line must be in $\mathscr{S}$. Further, the subsets
$$\breve{M}[\mathscr{S}](x, y) = \{\gamma \in \breve{M}(x,y) \mid \gamma \subset \mathscr{S}\}$$
are both open and closed in $\breve{M}(x,y)$. We have
$$ \mathscr{S} = \bigcup_{\substack{x, y \in \operatorname{Crit}[\mathscr{S}] \\ \gamma \in \breve{M}[\mathscr{S}](x, y)}} \gamma.$$
We can define a Morse complex $C[\mathscr{S}]$ to be freely generated by the points in $\operatorname{Crit}[\mathscr{S}]$, with the differential given by counting only the flow lines in $\breve{M}[\mathscr{S}](x, y)$. We still have $\del^2=0$, and we obtain a Morse homology $H_*(C[\mathscr{S}])$.
The following theorem was proved by Floer in \cite{FloerMorse} in the setting of Morse-Smale gradient flows. The extension to the quasi-gradient case goes along the same lines as the proof of Proposition~\ref{prop:MSquasi}. (In fact, the results of Franzosa and McCord used in that proof were originally phrased for more general flows.)
\begin{theorem}[cf. \cite{FloerMorse}]
Given an isolated invariant set $\mathscr{S}$ in a Morse-Smale quasi-gradient flow $\phi$ on a manifold $X$, the Morse homology $H_*(C[\mathscr{S}])$ is isomorphic to the reduced homology of the Conley index of $\mathscr{S}$.
\end{theorem}
Thus, if $(N, L)$ is an index pair for $\mathscr{S}$ and $L \operatorname{sp}incubseteq N$ is a neighborhood deformation retract (which can easily be arranged), then the Morse homology is isomorphic to the relative homology $H_*(N, L)$.
\section{Morse homology for manifolds with boundary}
\label{subsec:morseboundary}
Let $X$ be a smooth, oriented, compact manifold with boundary. We proceed to describe the construction of a Morse complex, $(\check{C}(X),\check{\partial})$, whose homology is isomorphic to $H_*(X;\zz)$ (we will ignore the analogous constructions for $H_*(\partial X; \zz)$ or $H_*(X,\partial X; \zz)$). For gradient flows, this construction appeared in print in \cite[Section 2.4]{KMbook}, although it may have been known before. See also \cite{Laudenbach}, \cite{Ranicki} for related work. Our exposition follows \cite{KMbook} closely, except we use the more general quasi-gradient flows.
Fix a metric $g$ and a Morse quasi-gradient vector field $v$ on $X$ such that $g$ and $v$ are respectively the restrictions of a metric and a vector field on the double of $X$, which are invariant under the obvious involution. Among the stationary points of $v$, there are those which occur in the interior of $X$; their set is denoted $\operatorname{Crit}^o$. The others are stationary points on $\partial X$. If $x \in \partial X$ is such a point, note that the normal vector to the boundary $N_x$ is taken to its opposite under the involution, and the same holds for $dv(N_x)$. Hence, $N_x$ is an eigenvector of $dv$, so it must live in either $T_x^sX$ or $T_x^uX$. In the first case we say that $x$ is {\em boundary-stable}, and in the second that it is {\em boundary-unstable}. The set of boundary-stable stationary points is denoted $\operatorname{Crit}^s$, and the set of boundary-unstable stationary points is denoted $\operatorname{Crit}^u$.
We require that $v$ satisfy the usual Morse-Smale condition, except in the so-called {\em boundary-obstructed} case, when $x$ is boundary-stable and $y$ is boundary-unstable. In the boundary-obstructed case, $W^u_x$ and $W^s_y$ are subsets of $\partial X$, so transversality in $X$ is impossible. Therefore, we instead require that they intersect transversely in $\partial X$. When this happens, the dimension of $M(x,y)$ is in fact $\operatorname{ind}(x) - \operatorname{ind}(y) + 1$.
For any $x,y \in \operatorname{Crit}=\operatorname{Crit}^o \cup \operatorname{Crit}^s \cup \operatorname{Crit}^u$, we let $M^\partial(x,y) = M(x,y) \cap \partial X$ and $\breve{M}^\partial(x,y) = M^\partial(x,y)/\mathbb{R}$. One can induce the same orientations on the moduli spaces as in the usual closed setting, except in the boundary-obstructed case, where we must also make use of an outward normal vector.
We let $C^o$, $C^s$, and $C^u$ be the free Abelian groups generated by $\operatorname{Crit}^o$, $\operatorname{Crit}^s$, and $\operatorname{Crit}^u$ respectively. The chain groups we work with will be $\check{C}(X) = C^o \oplus C^s$. Although these chain groups do not incorporate the boundary-unstable stationary points, the differential does. The way that the flows are counted differs from the usual construction. We define two sets of maps
\[
\partial^\theta_\varpi ,\bar{\partial}^\theta_\varpi: C^\theta \to C^\varpi
\]
for various pairs of $\theta, \varpi \in \{o,u,s\}$.
If $(\theta, \varpi) \in \{(o,o), (o,s), (u,o),(u,s)\}$ and $x \in \operatorname{Crit}^\theta$, define
\[
\partial^\theta_\varpi x = \sum_{y \in \operatorname{Crit}^\varpi } n(x,y) y,
\]
where the summation is again over those $y$ with $\dim \breve{M}(x,y)=0$.
If $\theta, \varpi \in \{s,u\}$ and $x \in \operatorname{Crit}^\theta$, we instead define $\bar{\partial}^\theta_\varpi$ to count flows on $\partial X$:
\[
\bar{\partial}^\theta_\varpi x = \sum_{y \in \operatorname{Crit}^\varpi} \bar{n}(x,y) y,
\]
where $\bar{n}(x,y)$ is the signed count of points (unparameterized flows) in zero-dimensional moduli spaces $\breve{M}^\partial(x,y)$.
On $\check{C}$, we define $\check{\partial}$ by
\begin{equation}
\label{eq:delcheck}
\check{\partial} = \begin{bmatrix} \partial^o_o & - \partial^u_o \bar{\partial}^s_u \\ \partial^o_s & \bar{\partial}^s_s - \partial^u_s \bar{\partial}^s_u \end{bmatrix}.
\end{equation}
\begin {theorem}[Theorem 2.4.5 in \cite{KMbook}]
We still have $\check{\partial}^2 = 0$. Furthermore, $H_*(\check{C},\check{\partial}) \cong H_*(X)$.
\end{theorem}
\begin{remark}\label{rmk:noMred}
In this book, since $\bar{\partial}^u_s$ does not appear in $\check{\partial}$, we will not need the notation $M^\partial(x,y)$: except when $x$ is boundary-unstable and $y$ is boundary-stable, the space $M^\partial(x,y)$ either agrees with $M(x,y)$ or is empty. In \cite{KMbook}, the term $\bar{\partial}^u_s$ is needed for the proof that $\check{\partial}^2 = 0$. We keep the notation and discussion for consistency with \cite{KMbook}.
\end{remark}
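\begin{example}
As a simple illustration, let $X = [0, \pi]$, viewed as half of the double $S^1 = \R/2\pi\zz$ with the reflection involution $\theta \mapsto -\theta$, and let $v = \nabla f$ for the invariant function $f(\theta) = \cos\theta$. The only stationary points are the two boundary points: $\theta = 0$ is boundary-unstable of index $1$ (its unstable manifold is the half-open arc $[0, \pi)$), while $\theta = \pi$ is boundary-stable of index $0$. Thus $\operatorname{Crit}^o = \emptyset$ and $\check{C}(X) = C^s = \zz\langle \pi \rangle$. The single interior trajectory from $0$ to $\pi$ is counted by $\partial^u_s$, which enters $\check{\partial}$ only through the composition $\partial^u_s \bar{\partial}^s_u$; since the boundary is zero-dimensional, $\bar{\partial}^s_u = 0$, and likewise $\bar{\partial}^s_s = 0$. Hence $\check{\partial} = 0$, and $H_*(\check{C}, \check{\partial}) \cong \zz$, concentrated in degree $0$, which is indeed $H_*([0,\pi]; \zz)$.
\end{example}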
\section{Morse homology for manifolds with circle actions}
\label{subsec:CircleMorse}
A circle action is called {\em semifree} if it is free on the complement of the fixed points. For manifolds with semifree $S^1$-actions, Kronheimer and Mrowka showed that the methods in the previous subsection can be applied to obtain Morse theoretic approximations to the $S^1$-equivariant homology. We sketch their arguments here, following \cite[Sections 2.5--2.6]{KMbook}, but phrasing everything in terms of quasi-gradients rather than gradients.
Suppose that a closed Riemannian manifold $X$ has a smooth, semifree $S^1$-action by isometries, and let $Q$ denote the fixed point set. Let $N(Q)$ be the normal bundle of $Q$, and $N^1(Q)$ the unit normal bundle. Observe that the $S^1$-action gives $N(Q)$ the structure of a complex vector bundle. In order to make $X/S^1$ into a manifold, we resolve the singularity at $Q$ by considering the blow-up
\[
X^\sigma = (X - Q) \cup \bigl( N^1(Q) \times [0,\varepsilon) \bigr),
\]
where we have identified $N^1(Q) \times (0,\varepsilon)$ with $N(Q) - Q \subset X$. Note that $X^\sigma/S^1$ is a smooth manifold with boundary $N^1(Q)/S^1$.
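To illustrate the blow-up in a local model (ignoring the compactness assumption on $X$), take $X = \mathbb{C}^2$ with the diagonal $S^1$-action, so that $Q = \{0\}$ and $N^1(Q) = S^3$. Away from the fixed point, $(\mathbb{C}^2 - 0)/S^1 \cong S^2 \times (0, \infty)$ via the Hopf map and the radial coordinate, and the blow-up attaches the boundary $N^1(Q)/S^1 = S^3/S^1 \cong S^2$ at radius zero. Thus $X^{\sigma}/S^1 \cong S^2 \times [0, \infty)$ is a smooth manifold with boundary: the blow-up has replaced the image of the fixed point in $X/S^1$ (the cone point of the cone over $S^2$) by a copy of $S^2$.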
A smooth, $S^1$-invariant vector field $\tilde v$ on $X$ induces a vector field $v$ on $(X - Q)/S^1$. In turn, this extends naturally to a smooth vector field $v^\sigma$ on $X^\sigma/S^1$; cf. \cite[Lemma 2.5.2]{KMbook}. Note that $\tilde v$ must be tangent to $Q$, and hence $v^{\sigma}$ is tangent to the boundary.
Precisely, the dynamics of $v^{\sigma}$ on the boundary $N^1(Q)/S^1 = \del(X^\sigma/S^1)$ can be described as follows. A point on the boundary can be written as $(q,[\phi])$ with $q \in Q$ and $\phi \in N^1_q(Q)$. As discussed, $N_q(Q)$ has a complex structure; we let $\langle \phi \rangle^{\perp}$ be the complex orthogonal complement to $\phi$ in $N_q(Q)$. Decompose the tangent space to $X^{\sigma}/S^1$ at $(q, [\phi])$ as
\begin{equation}
\label{eq:decomposes}
T_qQ \oplus \langle \phi \rangle^{\perp} \oplus \R,
\end{equation}
where $\R$ is the direction normal to the boundary.
The covariant derivative $(\nabla \tilde v)_q : T_qX \to T_qX$ is $S^1$-equivariant, and hence takes the normal direction $N_qQ \subset T_qX$ to $N_qQ$. Let
\begin{equation}
\label{eq:lq}
L_q := (\nabla \tilde v)|_{N_qQ}.
\end{equation}
With respect to the decomposition \eqref{eq:decomposes}, let us write
$$
v^{\sigma}(q, [\phi])=( \tilde v(q), \mathbb{L}_q \phi, 0).
$$
Here, when $\tilde v(q)=0$, the second term equals
$$ \mathbb{L}_q \phi = L_q\phi - \operatorname{Re} \langle \phi, L_q\phi\rangle\phi.$$
Thus, the stationary points of $v^{\sigma}$ on the boundary are the pairs $(q, [\phi])$, where $q$ is a zero of $\tilde v|_Q$ and $\phi$ is an eigenvector of $L_q$.
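For instance, if $\phi$ is a unit eigenvector of $L_q$ with real eigenvalue $\lambda$, then $\operatorname{Re}\langle \phi, L_q\phi \rangle = \lambda$, so that $\mathbb{L}_q\phi = \lambda\phi - \lambda\phi = 0$ and $(q, [\phi])$ is indeed stationary.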
Furthermore, the flow associated to $v^{\sigma}$ on the boundary is given by the equations:
\begin{align}
\label{eq:lq0}
\frac{dq}{dt} + \tilde v(q(t)) &= 0,\\
(q^*\nabla)\phi(t) + \bigl( L_{q(t)}\phi(t) - \operatorname{Re} \langle \phi(t), L_{q(t)}\phi(t)\rangle\phi(t) \bigr)\, dt &=0, \label{eq:lqx}
\end{align}
where $|\phi(t)|=1$ for all $t$.
\begin{definition}
\label{def:eqgv}
A smooth, $S^1$-invariant vector field $\tilde v$ on $X$ is called a {\em Morse equivariant quasi-gradient} if the following conditions are satisfied:
\begin{enumerate}[(a)]
\item All stationary points of $v$ on $(X - Q)/S^1$ are hyperbolic.
\item All stationary points of $\tilde v|_Q$ are hyperbolic.
\item At each stationary point $q$ of $\tilde v|_Q$, the operator $L_q: N_qQ \to N_qQ$ is self-adjoint, and admits a basis of eigenvectors $\phi_1(q), \phi_2(q), \dots, \phi_n(q)$ with corresponding eigenvalues $\lambda_1(q), \lambda_2(q), \dots, \lambda_n(q)$ such that
$$ \lambda_1(q) < \lambda_2(q) < \dots < \lambda_n(q)$$
and $\lambda_i(q) \neq 0$ for any $i$.
\item There exists a smooth, $S^1$-invariant function $\tilde f: X \to \R$ such that $d\tilde f(\tilde v) \geq 0$ at all $x \in X$, with equality holding if and only if $\tilde v(x)=0$.
\end{enumerate}
\end{definition}
\begin{lemma}
\label{lem:MEquiv}
Parts (a), (b) and (c) in Definition~\ref{def:eqgv}, taken together, are equivalent to asking for all the stationary points of $v^{\sigma}$ on $X^{\sigma}/S^1$ to be hyperbolic, and for those on the boundary to give rise to operators $L_q$ that are self-adjoint.
Furthermore, if $\tilde v$ is as in Definition~\ref{def:eqgv}, and $(q, [\phi_i(q)])$ is a stationary point of $v^{\sigma}$ on the boundary, then its index is given by
\[
\operatorname{ind}(q, [\phi_i(q)]) =
\begin{cases}
\operatorname{ind}_Q(q) + 2i-2 & \text{ if } \lambda_i(q) > 0,\\
\operatorname{ind}_Q(q) + 2i-1 & \text{ if } \lambda_i(q) < 0.
\end{cases}
\]
\end{lemma}
\begin{proof}
Away from the boundary, a stationary point of $v^{\sigma}$ is just a stationary point of $v$, and the two notions of hyperbolicity (and index) correspond. With regard to stationary points on the boundary and their indices, see \cite[Proof of Lemma 2.5.5]{KMbook} for the case of gradient fields. The arguments there extend to the quasi-gradient setting without difficulty.
\end{proof}
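As a check of the index formula in the simplest local model (again ignoring compactness), let $Q$ be a single point and $X = \mathbb{C}$ near it, with the standard rotation action, and let $\tilde v = \nabla \tilde f$ for $\tilde f(z) = \frac{\lambda}{2}|z|^2$ with $\lambda \neq 0$. Then $n = 1$, $\lambda_1(q) = \lambda$, and $\operatorname{ind}_Q(q) = 0$. In the blow-up, $X^{\sigma}/S^1 \cong [0, \infty)$ with coordinate $s = |z|$, and the flow is $ds/dt = -\lambda s$. For $\lambda > 0$ the boundary stationary point attracts and has index $0 = 0 + 2 \cdot 1 - 2$, while for $\lambda < 0$ it repels and has index $1 = 0 + 2 \cdot 1 - 1$, as predicted by Lemma~\ref{lem:MEquiv}.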
\begin{remark}
In part (c) of Definition~\ref{def:eqgv}, the condition that $L_q$ is self-adjoint (and has real eigenvalues) is not strictly necessary. We could have only asked for $L_q$ to have a complex basis of eigenvectors $\phi_1(q), \phi_2(q), \dots, \phi_n(q)$ with corresponding eigenvalues $\lambda_1(q), \lambda_2(q), \dots, \lambda_n(q)$ such that
$$ \operatorname{Re} \lambda_1(q) < \operatorname{Re} \lambda_2(q) < \dots < \operatorname{Re} \lambda_n(q)$$
and $ \operatorname{Re} \lambda_i(q) \neq 0$ for any $i$. With this weaker requirement, the results of this section would still hold, but the proof of Lemma~\ref{lem:ao3} below would be more complicated. In the case of interest to us in this book, the $L_q$ are self-adjoint, so we decided to include this condition in the definition.
\end{remark}
\begin{lemma}
\label{lem:qgCritical2}
Let $\tilde v$ be a Morse equivariant quasi-gradient vector field, and let $\tilde f$ be a function as in part (d) of Definition~\ref{def:eqgv}. Then, any stationary point of $\tilde v$ is a critical point of $\tilde f$.
\end{lemma}
\begin{proof}
This is similar to the proof of Lemma~\ref{lem:qgCritical}.
\end{proof}
Given $x \in X$, we write $[x]$ for its $S^1$-equivalence class, i.e. its projection to the singular space $X/S^1$.
\begin{lemma}
\label{lem:ao2}
Let $\tilde v$ be a Morse equivariant quasi-gradient vector field on $X$, as in Definition~\ref{def:eqgv}. Let $\gamma: \rr\to X$ be a flow line of $\tilde v$. Then, $\lim_{t\to -\infty} [\gamma(t)]$ and $\lim_{t\to +\infty} [\gamma(t)]$ exist in $X/S^1$, and they are both projections of stationary points of $\tilde v$.
\end{lemma}
\begin{proof}
This is similar to the proof of Lemma~\ref{lem:ao}.
\end{proof}
Suppose $\tilde v$ is a Morse equivariant quasi-gradient vector field on $X$, with $\tilde f$ as in part (d) of Definition~\ref{def:eqgv}. Observe that $\tilde{f}$ induces a function $f:X/S^1 \to \mathbb{R}$ which is smooth except on $Q/S^1$. Using $f$ we see that $v^{\sigma}$ is a Morse quasi-gradient vector field on the interior of $X^{\sigma}/S^1$.
If we let $f^{\sigma} : X^{\sigma}/S^1 \to \R$ be the composition of the blow-down map with $f$, then $f^{\sigma}$ is smooth, and we have
\begin{equation}
\label{eq:dfsigma}
df^{\sigma}(v^{\sigma}) \geq 0.
\end{equation}
However, $f^{\sigma}$ is constant on the fibers $N_q^1(Q)/S^1$. Hence, equality happens in \eqref{eq:dfsigma} not only at the stationary points of $v^{\sigma}$, but also at all $(q, [\phi]) \in \del(X^{\sigma}/S^1)$ such that $\tilde v(q)=0$ (and $\phi$ does not have to be an eigenvector). Therefore, $v^{\sigma}$ is not naturally a Morse quasi-gradient on all of $X^\sigma/S^1$; or, at least, we cannot use $f^{\sigma}$ to argue that it is.
Nevertheless, we will still be able to do Morse homology using the flow of $v^{\sigma}$. To start with, observe that there are no closed orbits in this flow: In view of \eqref{eq:dfsigma}, the only such orbits would have to be contained in a fiber $N_q^1(Q)/S^1$, where $\tilde v(q)=0$. From \eqref{eq:lqx} we see that the flow on the projective space $N_q^1(Q)/S^1$ is the projection of a linear flow on $N_qQ$, which is in fact a gradient flow. This shows that there are no closed orbits in that fiber.
Furthermore, we have the analogue of Lemmas~\ref{lem:ao} and \ref{lem:ao2}:
\begin{lemma}
\label{lem:ao3}
Let $\tilde v$ be a Morse equivariant quasi-gradient vector field on $X$, as in Definition~\ref{def:eqgv}. Let $\gamma: \rr\to X^{\sigma}/S^1$ be a flow line of $v^{\sigma}$. Then, $\lim_{t\to -\infty} \gamma(t)$ and $\lim_{t\to +\infty} \gamma(t)$ exist in $X^{\sigma}/S^1$, and they are both stationary points of $v^{\sigma}$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:ao2}, we already know that the projection of $\gamma$ to the blow-down $X/S^1$ limits to two stationary points. The projection is one-to-one away from the boundary, so it suffices to study the case of a trajectory that (in the blow-down) limits to a stationary point $q_0$ in the fixed point set $Q$. The limit can be at either $-\infty$ or $+\infty$. Without loss of generality, we consider the case of $+\infty$, and focus on the half-trajectory
$$\gamma_+: [0, \infty) \to X^{\sigma}/S^1, \ \ \gamma_+(t) = \gamma(t).$$
The idea is that, in the blow-up, the flow of $v^{\sigma}$ near the boundary $N^1(Q)/S^1$ is approximated by the flow on the boundary, which is given by \eqref{eq:lq0}-\eqref{eq:lqx}, and whose behavior we understand.
A suitably small neighborhood $V \subset X^{\sigma}/S^1$ of the boundary $\partial(X^{\sigma}/S^1)=N^1(Q)/S^1$ can be identified with the normal bundle to the boundary. Hence, a point of $V$ can be written as a triple $(q, s, [\phi])$, with $q \in Q$, $s \geq 0$, and $\phi \in N^1_q(Q)$, normalized so that $|\phi|=1$. (Here, $s$ represents the distance to the boundary.) With $L_q$ as in \eqref{eq:lq}, we introduce the functional
$$ \Lambda: V \to \R, \ \ \Lambda(q, s, [\phi]) = \langle \phi, L_q \phi \rangle.$$
The stationary points of $v^{\sigma}$ on the boundary correspond to $s=0$ and $d\Lambda =0$.
Consider the fiber $F = N^1_{q_0}(Q)/S^1$ over $q_0$. The restriction of $v^{\sigma}$ to $F$ is the gradient of $\frac{1}{2} \Lambda|_F$. By part (c) of Definition~\ref{def:eqgv}, $v^{\sigma}|_F$ has $n$ stationary points, corresponding to the eigenvalues $\lambda_i = \lambda_i(q_0)$ of $L_{q_0}$, such that
$$ \lambda_1 < \lambda_2 < \dots < \lambda_n.$$
From our assumption, we know that given any neighborhood $U$ of $F$, there exists $t_0$ such that $\gamma(t) \in U$ for $t > t_0$. In particular, all the accumulation points of the half-trajectory $\gamma_+$ must be contained in $F$. We seek to show that $\gamma_+$ has a unique accumulation point, and that point is a stationary point of $v^{\operatorname{sp}incigma}$.
Note that if $x \in F$ is an accumulation point of $\gamma_+$, then all the points on the flow trajectory $\zeta$ through $x$ are also accumulation points. Indeed, if $\{\Phi_t\}$ denotes the flow of $v^{\sigma}$, and $\gamma(t_n) \to x = \zeta(0)$, then $\gamma(t_n + t) = \Phi_t(\gamma(t_n)) \to \Phi_t(x) = \zeta(t)$. The trajectory $\zeta$ is contained in $F$, where the flow is a gradient flow, and therefore $\zeta$ limits to two stationary points $x=(q_0, 0, [\phi_i])$ and $y=(q_0, 0, [\phi_j])$ at $\mp \infty$. Moreover, $x$ and $y$ have to be accumulation points for $\gamma_+$ as well, and we have $x=y$ if and only if $\zeta$ is stationary.
Therefore, we are left to show that $\gamma_+$ cannot have two different stationary points $x, y \in F$ as accumulation points. Suppose that were the case, and let $\lambda_i > \lambda_j$ be the eigenvalues corresponding to $x$ and $y$. These are also the values of $\Lambda$ at $x$ and $y$. Pick an intermediate value $\lambda \in (\lambda_j, \lambda_i)$, such that $\lambda \neq \lambda_k$ for any $k$. Since $v^{\sigma}$ restricts to be the gradient of $\frac{1}{2}\Lambda$ on $F$, we have
$$ d\Lambda(v^{\sigma}) > 0 \text{ on } \Lambda^{-1}(\lambda) \cap F.$$
Since $\Lambda^{-1}(\lambda) \cap F$ is compact, we can find a neighborhood $U\subset V$ of $F$ such that
\begin{equation}
\label{eq:dLa}
d\Lambda(v^{\sigma}) > 0 \text{ on } \Lambda^{-1}(\lambda) \cap U.
\end{equation}
Because $x$ and $y$ are accumulation points of $\gamma_+$, it must be that the half-trajectory $\gamma_+$ intersects the intermediate level set $\Lambda^{-1}(\lambda)$ infinitely many times. Furthermore, we know that after a certain time $t_0$, the trajectory $\gamma_+$ is contained in $U$. However, this contradicts \eqref{eq:dLa}, which says that $\Lambda$ decreases every time the trajectory goes through $\Lambda^{-1}(\lambda) \cap U$. The conclusion follows.
\end{proof}
Since the stationary points of $v^{\sigma}$ are hyperbolic, they admit stable and unstable manifolds; cf. the discussion in Section~\ref{sec:quasi}. Further, we can separate the stationary points on the boundary into stable and unstable, and then define the notion of boundary-obstructed trajectories, as in Section~\ref{subsec:morseboundary}.
\begin{definition}
\label{def:eMSqg}
A Morse equivariant quasi-gradient vector field $\tilde v$ on $X$ is called {\em Morse-Smale} if the induced vector field $v^{\sigma}$ on $X^{\sigma}/S^1$ satisfies the Morse-Smale condition for boundary-unobstructed trajectories; and the Morse-Smale condition inside $\partial(X^{\sigma}/S^1)$ for the boundary-obstructed trajectories.
\end{definition}
If we have a Morse-Smale equivariant quasi-gradient vector field, the constructions in Section~\ref{subsec:morseboundary} carry through using $v^{\sigma}$ (even though $v^{\sigma}$ is not a quasi-gradient vector field in a natural way). In particular, Lemma~\ref{lem:ao3} implies that $v^{\sigma}$ is a Morse-Smale vector field as in Definition~\ref{def:MS}; compare Lemma~\ref{lem:isMS}. Thus, we obtain a Morse complex $$(\check{C}(X^{\sigma}/S^1),\check{\partial})$$ which computes the homology of $X^{\sigma}/S^1$.
The space $X^{\sigma}/S^1$ can be viewed as an approximation to the homotopy quotient $X/ \!/ S^1 := X \times_{S^1} ES^1$. Precisely, let $n$ be the connectivity of the pair $(X, X-Q)$, that is, the largest $j$ such that $(X, X-Q)$ is $j$-connected. Note that if the real codimension of $Q$ in $X$ is $2c$, then $n \geq 2c-1$. Since $X^{\sigma}$ is $S^1$-equivariantly homotopy equivalent to $X - Q$, we have an isomorphism in homology
\begin{equation}
\label{eq:approx1}
H_j(X^{\sigma}/S^1) \cong H_j(X / \!/ S^1)
\end{equation}
for all $j \leq n-1$. The homology $H_*(X / \!/ S^1) \cong H_*^{S^1}(X)$ is the $S^1$-equivariant Borel homology of $X$.
\section{The $U$-action in Morse homology}
\label{subsec:UMorse}
We keep the same setting as in the previous subsection. The equivariant homology
$H_*^{S^1}(X)$ admits a natural $\zz[U]$-module structure, given by cap products with the elements of $H^*_{S^1}(pt) \cong \zz[U]$. (The action of $U$ decreases degree by two.) Our goal here is to explain how this module structure can be approximated in terms of Morse theory. The discussion is modeled on the infinite-dimensional case presented in \cite[Section 4.11]{KMOS}.
The circle action on $X^{\sigma}$ produces a natural complex line bundle $E^{\sigma}$ on $X^{\sigma}/S^1$. Let $\sect$ be a generic, smooth section of $E^{\sigma}$, such that $\sect$ is transverse to the $0$-section and, further, the restriction of $\sect$ to the boundary $N^1(Q)/S^1$ is transverse to the restriction of the $0$-section to the boundary. Note that $E^{\sigma}$, being a complex line bundle, carries a canonical orientation. Let $\Zs$ denote the zero set of $\sect$, which is a manifold with boundary $\partial\Zs= \Zs\cap (N^1(Q)/S^1)$. From the orientations of $E^{\sigma}$ and $X^{\sigma}/S^1$, we obtain an orientation on $\Zs$.
Notice that a trajectory $\gamma$ of $v^{\sigma}$ is determined by its value at time $t=0$. Thus, we can define cut-down moduli spaces by intersecting $M(x, y)$ with $\Zs$ at time $t=0$:
$$ M(x, y) \cap \Zs:= \{\gamma \in M(x, y) \mid \gamma(0) \in \Zs\}.$$
Given $(\theta, \varpi) \in \{(o,o), (o,s), (u,o),(u,s)\}$, we define maps of the form $m^\theta_\varpi : \check{C}^\theta_*(X^{\sigma}/S^1) \to \check{C}^\varpi_{*-2}(X^{\sigma}/S^1)$ by counting flows that intersect $\Zs$:
\[
m^\theta_\varpi(x) = \sum_{y \in \mathfrak{c}^\varpi} \# (M(x,y) \cap \Zs) \cdot y.
\]
Note that the conditions on the gradings of $x$ and $y$ guarantee that the cut-down moduli spaces being counted are 0-dimensional.
For $\theta, \varpi \in \{u, s\},$ we also define analogous maps that count intersections in the boundary of $X^{\sigma}/S^1$:
\[
\bar{m}^\theta_\varpi (x) = \sum_{y \in \mathfrak{c}^\varpi}\# (M(x,y) \cap \partial\Zs) \cdot y,
\]
where we only consider $y$ for which the relevant cut-down moduli space is $0$-dimensional. Further, recall the shift in gradings by one in the boundary-obstructed case.
We define a chain map $\check{m} : \check{C}_*(X^{\sigma}/S^1) \to \check{C}_{*-2}(X^{\sigma}/S^1)$ by
\begin{equation}\label{eqn:morsecap}
\check{m} = \begin{bmatrix} m^o_o & - m^u_o \bar{\partial}^s_u - \partial^u_o \bar{m}^s_u \\ m^o_s & \bar{m}^s_s - m^u_s \bar{\partial}^s_u - \partial^u_s \bar{m}^s_u \end{bmatrix}.
\end{equation}
The map induced by $\check{m}$ on the homology $H_*(X^{\sigma}/S^1)$ is exactly the cap product with $c_1(E^{\sigma})$. This recovers the $\zz[U]$-module structure on $H_*(X^{\sigma}/S^1) = H_*^{S^1}(X^{\sigma})$. Note that the isomorphism \eqref{eq:approx1} discussed in the previous subsection,
\begin{equation}
\label{eq:approx2}
H_{\leq n-1}(X^{\sigma}/S^1) \cong H_{\leq n-1}(X / \!/ S^1),
\end{equation}
is actually an isomorphism of $\zz[U]$-modules. Thus, Morse theory tells us the action of $\zz[U]$ on $H_*^{S^1}(X)$ in degrees up to $n-1$.
\section{Combined generalizations}
\label{sec:combinedMorse}
The versions of Morse homology described in Sections~\ref{subsec:ConleyMorse} and ~\ref{subsec:morseboundary} can be combined as follows. Let $X$ be a (possibly non-compact) manifold with boundary. Fix a metric $g$ and a Morse quasi-gradient vector field $v$ on $X$ as in Section~\ref{subsec:morseboundary}, and let $\mathscr{S} \subseteq X$ be an isolated invariant set of the resulting flow. Although our definitions in Section~\ref{subsec:ConleyMorse} were for flows on manifolds without boundary, Conley index theory easily extends to the boundary case, provided that the gradient vector field is tangent to the boundary; compare \cite[Section 3.1.2]{HellThesis}. The only caveat is that in Definitions~\ref{def:iis} and ~\ref{def:indexpair}, the interior of a subset $A \subseteq X$ should be defined as in point-set topology, without regard to the structure of $X$ as a manifold-with-boundary. With this in mind, the Conley index of $\mathscr{S}$ is defined as before, to be the quotient $N/L$ of an index pair $(N,L)$ for $\mathscr{S}$. Alternatively, we could consider the double $D(X)$ of $X$ with its $\zz/2$-action, and appeal to the equivariant Conley index theory developed in \cite{FloerConley, Pruszko}. This guarantees the existence of a $\zz/2$-equivariant Conley index of $D(\mathscr{S})$ on $D(X)$, well-defined up to $\zz/2$-equivariant homotopy equivalence. In particular, its quotient by $\zz/2$, which we take to be the Conley index of $\mathscr{S}$ on $X$, is well-defined up to homotopy equivalence. This is equivalent to \cite[Definition 3.1.32]{HellThesis}.
We now impose a Morse-Smale condition for the trajectories in $\mathscr{S}$ as in Section~\ref{subsec:morseboundary}, where in the boundary-obstructed case we only require transversality inside $\partial X$. We then construct a complex $\check{C}[\mathscr{S}]$ using the same formula \eqref{eq:delcheck}, but involving only the stationary points and the flow trajectories in $\mathscr{S}$. The homology of $\check{C}[\mathscr{S}]$ will be the reduced homology of the Conley index associated to $\mathscr{S}$.
Starting from this, we can also combine the construction in Section~\ref{subsec:ConleyMorse} with those in Sections~\ref{subsec:CircleMorse}-\ref{subsec:UMorse}. Suppose that $X$ is a (possibly non-compact) Riemannian manifold with a smooth, semifree $S^1$-action by isometries. (In our applications, $X$ will be a vector space of the form $\rr^m \oplus \cc^n$, with the linear $S^1$-action.) Let $\tilde v$ be a smooth vector field on $X$, and $\mathscr{S} \subseteq X$ be an $S^1$-invariant, isolated invariant set in the flow of $\tilde v$. Let $Q, X^{\sigma}, v^{\sigma}, L_q$ be constructed as in Section~\ref{subsec:CircleMorse}. Given a closed $S^1$-invariant subset $M \subset X$, we will denote by $M^{\sigma}$ the closure of the preimage of $M - Q$ in $X^{\sigma}$. We seek to form a complex $\check{C}(X^{\sigma}/S^1)[\mathscr{S}]$ using only trajectories in $\mathscr{S}^{\sigma}/S^1$. In order to do this, we require that we can find an $S^1$-invariant isolating neighborhood $A$ of $\mathscr{S}$ such that the restriction of $\tilde v$ to $A$ is a Morse-Smale equivariant quasi-gradient vector field in the sense of Definition~\ref{def:eMSqg}.
Under these assumptions, we obtain the desired Morse complex $\check{C}(X^{\sigma}/S^1)[\mathscr{S}]$ using trajectories in $A^{\sigma}/S^1$. Observe that $\mathscr{S}^{\sigma}/S^1$ is an isolated invariant set for the flow of $v^{\sigma}$ on $X^{\sigma}/S^1$; let $I(\mathscr{S}^{\sigma}/S^1)$ be its Conley index. We have:
\begin{equation}
\label{eq:ConleyMorse}
H_*(\check{C}(X^{\sigma}/S^1)[\mathscr{S}]) \cong \tilde{H}_*(I(\mathscr{S}^{\sigma}/S^1)).
\end{equation}
On the other hand, $\mathscr{S}$ is an isolated invariant set itself, and is fixed by the $S^1$-action. We may choose an index pair which is $S^1$-invariant \cite{FloerConley, Pruszko}, and we thus get an $S^1$-equivariant Conley index $I_{S^1}(\mathscr{S})$. Let $(I_{S^1}(\mathscr{S}))^{S^1}$ be its fixed point set. The Morse homology in \eqref{eq:ConleyMorse} approximates the reduced equivariant homology of $I_{S^1}(\mathscr{S})$, in the sense that:
\begin{equation}
\label{eq:EquivConleyMorse}
H_{\leq n-1}(\check{C}(X^{\sigma}/S^1)[\mathscr{S}]) \cong \tilde{H}_{\leq n-1}^{S^1}(I_{S^1}(\mathscr{S})),
\end{equation}
where $n$ is the connectivity of the pair $\bigl(I_{S^1}(\mathscr{S}), \bigl(I_{S^1}(\mathscr{S}) - (I_{S^1}(\mathscr{S}))^{S^1}\bigr ) \cup *\bigr)$, with $*$ denoting the basepoint in $I_{S^1}(\mathscr{S})$; compare \eqref{eq:approx2}. Moreover, after choosing a suitable section $\sect$ of $E^{\sigma}$ as in Section~\ref{subsec:UMorse}, the isomorphism in \eqref{eq:EquivConleyMorse} becomes one of $\zz[U]$-modules.
Finally, recall that in Sections~\ref{sec:standard} and \ref{sec:or1} we described alternative ways of stating the Morse-Smale condition and of constructing orientations. These descriptions apply equally well to the more general settings discussed here.
\chapter{The Seiberg-Witten Floer spectrum}\label{sec:spectrum}
We review here the construction of the Seiberg-Witten Floer spectrum $\operatorname{SWF}(Y, \mathfrak{s})$, following \cite{Spectrum}.
\section{The configuration space and the gauge group action}
\label{sec:configure}
We will be studying the Seiberg-Witten equations on a tuple $(Y,g,\mathfrak{s},\mathbb{S})$, where $Y$ is a rational homology three-sphere, $g$ is a metric on $Y$, $\mathfrak{s}$ is a $\operatorname{Spin}^c$ structure on $Y$, and $\mathbb{S}$ is a spinor bundle for $\mathfrak{s}$. We choose a flat $\operatorname{Spin}^c$ connection $A_0$ on $\mathbb{S}$ which gives an affine identification of $\Omega^1(Y; i \R)$ with $\operatorname{Spin}^c$ connections on $\mathbb{S}$.
We will be doing analysis on the configuration space
\[
\mathcal{C}(Y) = \Omega^1(Y; i\R) \oplus \Gamma(\mathbb{S}).
\]
Of course, $\mathcal{C}(Y)$ also depends on $\mathfrak{s}$, but we omit it from the notation. The gauge group $\mathcal{G}= \G(Y):=C^\infty(Y,S^1)$ acts on $\mathcal{C}(Y)$ by $u \cdot (a,\phi) = (a - u^{-1}du,u \cdot \phi)$. Since $b_1(Y)=0$, each $u\in \G$ can be written as $e^{f}$ for some $f: Y \to i\R$. We define the {\em normalized gauge group} $\Go$ to consist of those $u=e^{f} \in \G$ such that $\int_Y f = 0$.
For any integer $k$, following \cite{KMbook}, we let $H_{k}$ denote the completion of a subspace $H \subseteq \mathcal{C}(Y)$ with respect to the $L^2_k$ Sobolev norm. In particular, the completion of $W$ is denoted $W_k$. The Sobolev norm is defined in the standard way using the $L^2$ norms of iterated gradients $\nabla^j$, as in \cite[Section 5.1]{KMbook}. For $j \leq k$, define $\T_{j}$ as the $L^2_j$ completion of $T\mathcal{C}_k(Y)$. (In particular, $\T_k$ is $T\mathcal{C}_k(Y)$.)
\section{Coulomb slices}
\label{sec:coulombs}
We have a {\em global Coulomb slice}\footnote{The global Coulomb slice was denoted $V$ in \cite{Spectrum}. We switched to $W$ in order to avoid confusion with the spaces denoted $\V(Z)$ in \cite{KMbook}, which will also appear in this book.}:
$$W = \ker d^* \oplus \Gamma(\mathbb{S}) \subset \mathcal{C}(Y),$$
where $d^*$ is meant to act on imaginary $1$-forms. Given $(a,\phi) \in \mathcal{C}(Y)$, there is a unique element of $W$ which is obtained from $(a,\phi)$ by a normalized gauge transformation; this element is called the {\em global Coulomb projection} of $(a, \phi)$. Explicitly, the global Coulomb projection of $(a, \phi)$ is
\begin{equation}
\label{eq:gCoulomb}
\Pi^{\operatorname{gC}}(a, \phi) = (a - df, e^{f} \phi),
\end{equation}
where $f: Y \to i\rr$ is such that $d^*(a-df) =0$ and $\int_Y f= 0$; that is, $f = Gd^*a$, where $G$ is the Green's operator of $\Delta = d^*d$. For future reference, let us denote by $$\pi: \Omega^1(Y;i\R) \to \ker d^*$$ the $L^2$-orthogonal projection, given by $\pi(a) = a - df = a- dGd^*a$.
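As a quick check that \eqref{eq:gCoulomb} has the stated properties, note that $\int_Y d^*a = 0$, so $f = Gd^*a$ has mean zero and satisfies $\Delta f = d^*a$; hence $e^f \in \Go$ and
$$ d^*(a - df) = d^*a - \Delta G d^*a = 0,$$
so $\Pi^{\operatorname{gC}}(a, \phi) = e^{f}\cdot(a,\phi)$ indeed lies in $W$. For uniqueness, if $e^{f_1}\cdot(a,\phi)$ and $e^{f_2}\cdot(a,\phi)$ both lie in $W$ with $\int_Y f_1 = \int_Y f_2 = 0$, then $\Delta(f_1 - f_2)=0$, so $f_1 - f_2$ is a constant of mean zero, hence zero.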
The derivative of $\Pi^{\operatorname{gC}}$ is called the {\em infinitesimal global Coulomb projection}, given by
$$( \Pi^{\operatorname{gC}}_*)_{(a, \phi)}(b, \psi) = \bigl (b - dGd^*b, e^{Gd^*a}(\psi + (Gd^*b) \phi) \bigr).$$
In particular, if $(a, \phi)$ happens to be already in $W$, we have
\begin{equation}
\label{eq:icp}
( \Pi^{\operatorname{gC}}_*)_{(a, \phi)}(b, \psi)= (b- d\xi, \psi + \xi \phi) = (\pi(b), \psi + \xi \phi)\in T_{(a, \phi)} W,
\end{equation}
where $\xi=Gd^* b$.
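Both formulas can be obtained by differentiating \eqref{eq:gCoulomb} along the path $t \mapsto (a+tb, \phi+t\psi)$:
$$ \frac{d}{dt}\Big|_{t=0} \bigl(a+tb - dGd^*(a+tb),\, e^{Gd^*(a+tb)}(\phi+t\psi)\bigr) = \bigl(b - dGd^*b,\, e^{Gd^*a}(\psi + (Gd^*b) \phi)\bigr),$$
which is the general expression for $(\Pi^{\operatorname{gC}}_*)_{(a,\phi)}$ above. When $(a, \phi) \in W$ we have $d^*a = 0$, so $Gd^*a = 0$ and $e^{Gd^*a}=1$, and the expression reduces to \eqref{eq:icp} with $\xi = Gd^*b$.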
Analogous to the definition of $\T_{j}$, let $\T^{\operatorname{gC}}_{j}$ be the $L^2_j$ completion of the tangent bundle to $W_k$, namely the trivial vector bundle with fiber $W_{j}$ over $W_{k}$. We keep the notation $\T^{\operatorname{gC}}_j$ (rather than just $W_k \times W_j$) to emphasize the bundle structure; this will be convenient when we discuss bundle decompositions.
Let us mention the following lemma, which will be of use to us later:
\begin{lemma}
\label{lem:igc}
Let $k \geq 2$. View the infinitesimal global Coulomb projection as a section of the bundle $\operatorname{Hom}(T\mathcal{C}(Y), TW)$ over $W$, i.e., a map from $W$ to $\operatorname{Hom}(\mathcal{C}(Y), W)$, given by
$$ (a,\phi) \mapsto \bigl( (b, \psi) \mapsto (\Pi^{\operatorname{gC}}_*)_{(a,\phi)}(b,\psi) \bigr).$$
Then, this map extends to smooth maps between the Sobolev completions
$$ W_{k} \to \operatorname{Hom}(\mathcal{C}_j(Y), W_{j})$$
for all $-k \leq j \leq k$.
\end{lemma}
\begin{proof}
Recall the formula~\eqref{eq:icp}:
$$ (\Pi^{\operatorname{gC}}_*)_{(a, \phi)} (b, \psi) = (b - dGd^*b, \psi + (Gd^*b)\phi).$$
Observe that $d$ and $d^*$ decrease Sobolev indices by one, while the Green operator $G$ increases them by $2$. Since Sobolev multiplication $L^2_{k} \times L^2_j \to L^2_j$ induces a smooth map $L^2_{k} \to \operatorname{Hom}(L^2_j, L^2_j)$, the desired map is smooth.
\end{proof}
Infinitesimally, we can consider a different slice to the gauge action, the one perpendicular to the orbits in the $L^2$ metric. This is the {\em local Coulomb slice} at $(a, \phi) \in \mathcal{C}(Y)$, denoted $\K_{(a, \phi)}$, and consisting of those tangent vectors $(b, \psi) \in T_{(a, \phi)} \mathcal{C}(Y)$ such that
\begin{equation}
\label{eq:localCoulomb}
- d^* b + i \operatorname{Re} \langle i\phi, \psi \rangle = 0.
\end{equation}
Away from the reducibles, we see that $\T_{k}$ splits into a direct sum of two bundles:
$$ \T_k = \J_{k} \oplus \K_{k},$$ where $\J_{k}$ consists of the vectors tangent to the $\G_{k+1}$ orbits, and $\K_{k}$ is the completion of the local Coulomb slice. Note that the local Coulomb slice does not form a bundle over the entire configuration space, since the local Coulomb slice is ``bigger'' at reducibles.
Given any $(b, \psi) \in T_{(a, \phi)} \mathcal{C}(Y)$, we define the (infinitesimal) {\em local Coulomb projection} of $(b, \psi)$ to be
$$\Pi^{\operatorname{lC}}_{(a, \phi)} (b, \psi) := (b - d\zeta, \psi + \zeta \phi),$$ where $\zeta: Y \to i\R$ is, for $\phi \neq 0$, the unique function such that
\begin{equation}
\label{eq:zetaf}
-d^*(b-d\zeta) + i\operatorname{Re}\langle i\phi , \psi + \zeta \phi \rangle = 0.
\end{equation}
The existence and uniqueness of such a $\zeta$ follow from \cite[Proposition 9.3.4]{KMbook}. When $\phi = 0$, we again ask for \eqref{eq:zetaf} to be satisfied, but to guarantee uniqueness we also impose the condition $\int_Y \zeta=0$.
Note that both types of Coulomb slices are also mentioned by Kronheimer and Mrowka; see \cite[Sections 9.3 and 9.6]{KMbook}.
For our purposes, we will also need the {\em enlarged local Coulomb slice}, which consists of vectors that are only required to be perpendicular to the orbits of the normalized gauge group action.
We denote this by $\mathcal{K}^{\operatorname{e}}_{(a, \phi)}$. A vector $(b, \psi)\in T_{(a, \phi)} \mathcal{C}(Y)$ is in $\mathcal{K}^{\operatorname{e}}_{(a, \phi)}$ if and only if $-d^*b + i \operatorname{Re} \langle i\phi, \psi \rangle$ is a constant function. Equivalently, we can write this condition as
\begin{equation}
\label{eq:elocalCoulomb}
- d^* b + i \operatorname{Re} \langle i\phi, \psi \rangle^{\circ} = 0.
\end{equation}
Here, and later in the book, given a smooth function $f: Y \to \cc$, we denote by $\mu_Y(f) $ the average value of $f$ over the $3$-manifold $Y$:
\begin{equation}
\label{eq:muy}
\mu_Y(f)= \frac{1}{\operatorname{vol}(Y)}\int_Y f
\end{equation}
and set
\begin{equation}
\label{eq:intzero}
f^{\circ} = f - \mu_Y(f),
\end{equation}
so that $\int_Y f^{\circ} = 0$.
We remark that the reason why we did not add the superscript $\circ$ to $d^*b$ in \eqref{eq:elocalCoulomb} is because $\int_Y d^*b = 0$, so $(d^*b)^{\circ} = d^*b$. Note also that unlike the local Coulomb slice, the enlarged local Coulomb slices do produce a bundle over the configuration space.
Given any $(b, \psi) \in T_{(a, \phi)} \mathcal{C}(Y)$, we define the {\em enlarged local Coulomb projection} of $(b, \psi)$ to be
\begin{equation}
\label{eq:Pielc}
\Pi^{\operatorname{elC}}_{(a, \phi)} (b, \psi) := (b - d\zeta, \psi + \zeta \phi),
\end{equation}
where $\zeta: Y \to i\R$ is such that $\int_Y \zeta=0$ and
\begin{equation}
\label{eq:zetafirst}
-d^*(b-d\zeta) + i\operatorname{Re}\langle i\phi , \psi + \zeta \phi \rangle^{\circ} = 0.
\end{equation}
The fact that $\Pi^{\operatorname{elC}}$ is well-defined is established by the following lemma.
\begin{lemma}
\label{lem:elcUniqueness}
(a) Fix $k, j \in \zz$ with $k \geq 2$ and $-k < j \leq k$. Then, for any $x=(a, \phi) \in \mathcal{C}_k(Y)$ and $(b, \psi) \in \T_{j,x}$, there is a unique $\zeta \in L^2_{j+1}(Y; i\R)$ satisfying $\int_Y \zeta =0$ and \eqref{eq:zetafirst}. Further, if $d^*b=0$, then $\zeta \in L^2_{j+2}(Y; i\R)$.
(b) Suppose that $d^*b=0$, $k \geq 3$, $|j| \leq k-1$, and the $L^2_k$ norm of $\phi$ is bounded above by a constant $R$. Then there exists a constant $C(R) > 0$ such that
$$ \| \zeta\|_{L^2_{j+2}} \leq C(R) \cdot \|\psi \|_{L^2_j}.$$
\end{lemma}
\begin{proof}
Consider the direct sum decomposition
\begin{equation}
\label{eq:deco}
C^{\infty}(Y; i\R) = (\operatorname{im}d^*) \oplus \R
\end{equation}
into functions that integrate to zero and constant functions. This induces a similar decomposition on the $L^2_j$ Sobolev completions.
Let $\Delta=d^*d$ denote the (geometer's) Laplacian on imaginary-valued functions. Consider the linear operator between Sobolev completions
$$E_{\phi}: L^2_{j+1}(Y; i\R) \mathfrak{t}o L^2_{j-1}(Y; i\R) $$
given by
$$ E_{\phi}(\zeta) = \bigl( \Delta \zeta + (|\phi|^2 \zeta)^{\circ} \bigr) + \int_Y \zeta.$$
With respect to the decomposition \eqref{eq:deco} for $L^2_{j-1}$, note that the expression $\Delta \zeta + (|\phi|^2 \zeta)^{\circ}$ lands in the first summand, and $\int_Y \zeta$ in the second summand.
Equation~\eqref{eq:zetafirst} together with the condition $\int_Y \zeta = 0$ can be written as
$$ E_{\phi}(\zeta)=\bigl( d^*b - i\operatorname{Re}\langle i\phi , \psi \rangle + i\mu_Y( \operatorname{Re} \langle i\phi, \psi \rangle) \bigr) + 0.$$
Observe that the right hand side lives in $L^2_{j-1}(Y; i\rr)$ in general, and in $L^2_j(Y; i\rr)$ when $d^*b=0$.
We need to show that $E_{\phi}$ is invertible. Observe that $E_{\phi}$ is a compact deformation of the operator $E_0 = \Delta + \int_Y$. The latter is invertible, and in particular Fredholm of index zero. Hence, $E_{\phi}$ is also Fredholm of index zero. To show that it is invertible, it suffices to show that it has no kernel. Indeed, suppose $\zeta \in \ker(E_{\phi})$. Then, $\int_Y \zeta=0$ and
\begin{align}
0 &=\int_Y \langle \Delta \zeta + |\phi|^2 \zeta - \mu_Y(|\phi|^2 \zeta) , \zeta \rangle \\
&= \int_Y |d\zeta|^2 + \int_Y |\phi|^2 |\zeta|^2 - 0.
\end{align}
This implies $d\zeta=0$, and since $\int_Y \zeta=0$, we get $\zeta=0$. We conclude that $E_{\phi}$ is injective and hence invertible. This proves part (a).
For part (b), note that if $k \geq 3$, the map
$$ L^2_{k-1}(Y; \mathbb{S}) \mathfrak{t}o \operatorname{Hom}( L^2_{j+2}(Y; i\R), L^2_{j}(Y; i\R) ), \ \ \ \phi \mapsto E_{\phi}$$
is continuous (and lands in invertible operators) for $|j| \leq k-1$. Hence, the map $\phi \mapsto E_{\phi}^{-1}$ is also continuous. Since the ball of radius $R$ in $L^2_k$ is precompact in $L^2_{k-1}$, the resulting collection of $E_{\phi}^{-1}$ is also precompact, and hence bounded, in $\operatorname{Hom}( L^2_{j+2}(Y; i\R), L^2_{j}(Y; i\R) )$. This gives the desired bounds.
\end{proof}
Observe that, for $(a, \phi) \in W$, we can view the (restrictions of the) projections $(\Pi^{\operatorname{gC}}_*)_{(a, \phi)}$ and $\Pi^{\operatorname{elC}}_{(a, \phi)}$ as inverse maps relating the enlarged local Coulomb slice to the global Coulomb slice:
\begin{equation}
\label{eq:backandforth}
\xymatrixcolsep{5pc}\xymatrix{\mathcal{K}^{\operatorname{e}}_{(a, \phi)} \ar@/^/[r]^{(\Pi^{\operatorname{gC}}_*)_{(a,\phi)}}
& \T^{\operatorname{gC}}_{(a, \phi)}. \ar@/^/[l]^{\Pi^{\operatorname{elC}}_{(a, \phi)}}}
\end{equation}
To see that they are inverse to each other, it suffices to note that both are given by adding a uniquely determined vector that is tangent to the $\Go$-orbit.
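In more detail, differentiating the action of $e^{t\zeta} \in \Go$ (with $\int_Y \zeta = 0$) at $t=0$ shows that the tangent space to the $\Go$-orbit through $(a, \phi)$ consists of the vectors $(-d\zeta, \zeta\phi)$ with $\int_Y \zeta = 0$; the projection $(\Pi^{\operatorname{gC}}_*)_{(a,\phi)}$ adds such a vector with $\zeta = Gd^*b$, while $\Pi^{\operatorname{elC}}_{(a,\phi)}$ adds the one determined by \eqref{eq:zetafirst}. For instance, if $(b, \psi) \in \mathcal{K}^{\operatorname{e}}_{(a,\phi)}$, then
$$ \Pi^{\operatorname{elC}}_{(a, \phi)} \bigl( (\Pi^{\operatorname{gC}}_*)_{(a,\phi)}(b, \psi) \bigr) = \Pi^{\operatorname{elC}}_{(a, \phi)} \bigl( b - dGd^*b, \, \psi + (Gd^*b)\phi \bigr) = (b, \psi),$$
because the function $\zeta = -Gd^*b$ has mean zero and satisfies \eqref{eq:zetafirst} for the pair $(b - dGd^*b, \psi + (Gd^*b)\phi)$ (the condition reduces to \eqref{eq:elocalCoulomb} for $(b,\psi)$), so by the uniqueness in Lemma~\ref{lem:elcUniqueness} it is the function used by $\Pi^{\operatorname{elC}}_{(a,\phi)}$. The composition in the other order is checked in the same way.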
Here is the analogue of Lemma~\ref{lem:igc} for $\Pi^{\operatorname{elC}}$ instead of $(\Pi^{\operatorname{gC}}_*)$.
\begin{lemma}
\label{lem:elc}
Let $k \geq 2$. View the enlarged local Coulomb projection as a section of the bundle $\operatorname{Hom}(TW, T\mathcal{C}(Y))$ over $W$, i.e., a map from $W$ to $\operatorname{Hom}(W, \mathcal{C}(Y))$, given by
$$ (a,\phi) \mapsto \bigl( (b, \psi) \mapsto \Pi^{\operatorname{elC}}_{(a,\phi)}(b,\psi) \bigr).$$
Then, this map extends to smooth maps between the Sobolev completions
$$ W_{k} \to \operatorname{Hom}(W_j, \mathcal{C}_j(Y))$$
for all $-k \leq j \leq k$.
\end{lemma}
\begin{proof} The fact that the extension to Sobolev completions is well-defined was established in Lemma~\ref{lem:elcUniqueness}. Smoothness can be deduced from Lemma~\ref{lem:igc}, using the fact that $\Pi^{\operatorname{elC}}$ (restricted to $TW$) is the inverse to $\Pi^{\operatorname{gC}}_*$.
\end{proof}
\section{The Seiberg-Witten equations} \label{sec:SWe}
Let $(Y,g,\mathfrak{s},\mathbb{S})$ be as above. We let $\rho:TY \to \operatorname{End}(\mathbb{S})$ be the Clifford multiplication.
Further, for $a \in \Omega^1(Y; i\R)$, we will use $D_a : \Gamma(\mathbb{S}) \to \Gamma(\mathbb{S})$ to denote the Dirac operator corresponding to the connection $A_0 + a$, and $D$ for the case of $a = 0$.
Consider the Chern-Simons-Dirac (CSD) functional, $\mathscr{L}$, on $\mathcal{C}(Y)$:
\[
\mathscr{L}(a,\phi) = \frac{1}{2} \Bigl(\int_Y \langle \phi, D_a \phi \rangle - \int_Y a \wedge da \Bigr).
\]
We let $\mathcal{X}$ denote the $L^2$-gradient of CSD:
$$ \mathcal{X}(a, \phi) = (*da + \tau(\phi,\phi), D_a \phi),$$
where $\tau(\phi,\phi)=\rho^{-1}(\phi \phi^*)_0$ is a quadratic function coming from the Clifford multiplication.
It is not difficult to check that the CSD functional is gauge-invariant (since $b_1(Y) = 0$).
The critical points of $\mathscr{L}$ are the solutions to the {\em Seiberg-Witten equations},
$$\mathcal{X}(a, \phi)=0.$$ If the spinor $\phi$ is identically 0, then the solution $(a,\phi)$ is said to be {\em reducible}.
By measuring the length of the enlarged local Coulomb projections of tangent vectors to $W$, we obtain a Riemannian metric $\tilde{g}$ on $W$. Explicitly, for tangent vectors $(b, \psi)$, $(b', \psi')$ at $(a, \phi) \in W$, we set\footnote{The $L^2$ inner product is Hermitian, and thus the real part gives a real inner product. This is what we need to define a Riemannian metric.}
\begin{equation}
\label{eq:gtilde}
\langle (b, \psi), (b', \psi') \rangle_{\tilde{g}} = \operatorname{Re} \; \langle \Pi^{\operatorname{elC}}_{(a, \phi)} (b, \psi), \Pi^{\operatorname{elC}}_{(a, \phi)} (b', \psi') \rangle_{L^2}.
\end{equation}
For future reference, let us mention that, since $\Pi^{\operatorname{elC}}_{(a, \phi)}$ is an $L^2$-orthogonal projection, we can also write
\begin{equation}
\label{eq:gtilde2}
\langle (b, \psi), (b', \psi') \rangle_{\tilde{g}} = \operatorname{Re} \; \langle (b, \psi), \Pi^{\operatorname{elC}}_{(a, \phi)} (b', \psi') \rangle_{L^2}.
\end{equation}
The metric $\tilde g$ has the property that the trajectories of the gradient flow of $\mathscr{L}$ restricted to $W$ are precisely the global Coulomb projections of the original gradient flow trajectories in $\mathcal{C}(Y)$. Therefore, in the global Coulomb slice with the metric $\tilde{g}$, the (downward) gradient flow trajectories are given by
\begin{equation}
\label{eq:gradL}
\frac{d}{dt} \gamma(t) = - (\Pi^{\operatorname{gC}}_*)_{\gamma(t)} \mathcal{X} (\gamma(t)),
\end{equation}
where $\gamma(t)=(a(t), \phi(t)).$ Note that $\mathcal{X} = \operatorname{grad} \mathscr{L}$ is perpendicular to the level sets of $\mathscr{L}$ with respect to the $L^2$ metric, which contain the gauge orbits, so $\mathcal{X}$ is automatically contained in the local Coulomb slices. We can split the right hand side of \eqref{eq:gradL} into a linear part $l$ and a nonlinear part $c$, and re-write the flow equation as
\[
\frac{d}{d t} \gamma(t) = - (l + c)(\gamma(t)),
\]
where
\begin{eqnarray}
l(a,\phi) &=& (*da, D \phi) \label{eq:lmap} \\
c(a,\phi) &=& (\pi \circ \tau(\phi,\phi),\rho(a)\phi + \xi(\phi)\phi) \label{eq:cmap},
\end{eqnarray}
with $\xi(\phi): Y \to i\rr$ being characterized by $d\xi(\phi) = (1-\pi)\circ \tau(\phi, \phi)$ and $\int_Y \xi(\phi) =0.$ Recall also that $\pi$ denotes the orthogonal projection to $\ker d^*$.
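This splitting can be verified directly from \eqref{eq:icp}: for $(a, \phi) \in W$,
$$ (\Pi^{\operatorname{gC}}_*)_{(a, \phi)} \mathcal{X}(a, \phi) = \bigl( \pi(*da + \tau(\phi,\phi)), \, D_a\phi + Gd^*\bigl(*da + \tau(\phi,\phi)\bigr)\phi \bigr).$$
Since $d^*(*da)$ is a multiple of $*d(da) = 0$, we have $\pi(*da) = *da$ and $Gd^*(*da + \tau(\phi,\phi)) = Gd^*\tau(\phi,\phi) = \xi(\phi)$, the latter because $dGd^*\tau(\phi,\phi) = (1-\pi)\tau(\phi,\phi)$ and $Gd^*\tau(\phi,\phi)$ has mean zero. Writing $D_a\phi = D\phi + \rho(a)\phi$, the right hand side becomes $(*da, D\phi) + (\pi \circ \tau(\phi,\phi), \rho(a)\phi + \xi(\phi)\phi) = (l+c)(a,\phi)$.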
The $\tilde g$-gradient of the restriction $\mathscr{L}|_W$ extends to a map
$$\mathcal{X}^{\operatorname{gC}} = l + c: W_{k} \to W_{k-1},$$
such that $l$ is a linear Fredholm operator, and $c$ is quadratic.\footnote{We chose our conventions to be in agreement with \cite{KMbook}. In \cite{Spectrum}, the map $l+c$ went from $W_{k+1}$ to $W_k$.} The linear operator $l$ is self-adjoint with respect to the $L^2$ inner product, but not necessarily with respect to $\tilde{g}$. Further, the map $c$ is continuous as a map from $W_k$ to $W_k$, and thus compact as a map from $W_k$ to $W_{k-1}$. The corresponding flow lines are called {\em Seiberg-Witten trajectories} (in Coulomb gauge). Such a trajectory $\gamma=(a(t), \phi(t)): {\mathbb{R}}\to W$ is said to be {\em of finite type} if $\mathscr{L}(\gamma(t))$ and $\|\phi(t)\|_{C^0}$ are bounded in $t$.
\section{Finite-dimensional approximation} \label{sec:fdax}
For $\lambda > 1$, let us denote by $W^\lambda$ the finite-dimensional subspace of $W$ spanned by the eigenvectors of $l$ with eigenvalues in the interval $(-\lambda,\lambda)$.\footnote{In \cite{Spectrum}, the role of $W^\lambda$ was played by $W^{\mu}_{\lambda}$, the subspace spanned by eigenvectors with eigenvalues between $\lambda$ and $\mu$. In this book we restrict to $\mu = - \lambda$; this produces the same spectrum. The reason for the change is that in Section~\ref{sec:StabilityPoints} we will need to turn the parameter space for $\lambda$ into a manifold with boundary, and it is easier to do so with only one degree of freedom. Also, notice that in \cite[Section 4]{Spectrum}, the approximation $W^{\mu}_{\lambda}$ was initially defined as the span of eigenvectors with eigenvalues in $(\lambda, \mu]$, but later changed so that it is the image of the smoothed projection $p^{\mu}_{\lambda}$. This means using the open interval $(\lambda, \mu)$ instead of $(\lambda, \mu]$, and in fact it is easier to define the approximation as such from the beginning. In our setting we use $(-\lambda, \lambda)$.} The $L^2$ orthogonal projection from $W$ to $W^\lambda$ will be denoted $\tilde{p}^\lambda$. We modify this to make it smooth in $\lambda$, using the following preliminary definition:
\begin{equation}
\label{eq:plprel}
p^{\lambda}_{\operatorname{prel}} = \int^1_0 \beta(\theta) \tilde{p}^{\lambda - \theta}_{-\lambda + \theta} d \theta,
\end{equation}
where $\beta$ is a smooth, non-negative function that is non-zero exactly on $(0,1)$, and such that $\int_{\mathbb{R}}\beta(\theta) d\theta =1$.
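Concretely, on an eigenvector $e_\mu$ of $l$ with eigenvalue $\mu$, the operator $p^{\lambda}_{\operatorname{prel}}$ acts by the scalar
$$ p^{\lambda}_{\operatorname{prel}}\, e_\mu = \Bigl( \int_0^1 \beta(\theta)\, \mathbf{1}_{\{|\mu| < \lambda - \theta\}} \, d\theta \Bigr) e_\mu,$$
which equals $1$ for $|\mu| \leq \lambda - 1$, lies in $(0,1)$ for $\lambda - 1 < |\mu| < \lambda$, and equals $0$ for $|\mu| \geq \lambda$. In particular, the image of $p^{\lambda}_{\operatorname{prel}}$ is still $W^\lambda$, and $p^{\lambda}_{\operatorname{prel}}$ depends smoothly on $\lambda$.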
Our $p^{\lambda}_{\operatorname{prel}}$ was the one used in \cite{Spectrum}, where it was denoted $p^\lambda$. In this book, it will be convenient to arrange for the smoothed projection to be the actual projection $\tilde{p}^\lambda$ at an infinite sequence of $\lambda$'s. Let us fix such a sequence:
$$ \lambda^{\bullet}_1 < \lambda^{\bullet}_2 < \dots$$
such that $\lambda^{\bullet}_i \to \infty$ and none of the $\lambda^{\bullet}_i$ are eigenvalues of $l$. Fix also disjoint intervals $[\lambda^{\bullet}_i - \varepsilon_i, \lambda^{\bullet}_i + \varepsilon_i]$ that do not contain eigenvalues of $l$. Choose smooth bump functions $\beta_i: (0,\infty) \to [0,1]$ supported in $[\lambda^{\bullet}_i - \varepsilon_i, \lambda^{\bullet}_i + \varepsilon_i]$ and with $\beta_i(\lambda^{\bullet}_i)=1.$ Then set
\begin{equation}
\label{eq:pl}
p^\lambda = \sum_i \beta_i(\lambda) \tilde{p}^\lambda + \bigl(1-\sum_i \beta_i(\lambda) \bigr) p^{\lambda}_{\operatorname{prel}}.
\end{equation}
We now have that $p^\lambda$ is smooth in $\lambda$, and $p^\lambda=\tilde{p}^{\lambda}$ for $\lambda \in \{\lambda^{\bullet}_1, \lambda^{\bullet}_2, \dots \}.$ Moreover, for any $\lambda$, the image of $p^\lambda$ is the subspace $W^\lambda$.
As an aside, let us remark that in \cite{Spectrum} there is a different definition of the $L^2_k$ Sobolev norm on $W$, using $l$ instead of the covariant derivative $\nabla$. This produces a norm equivalent to the usual one. With the definition in \cite{Spectrum}, the $L^2$ projection $\tilde{p}^\lambda$ would have been the orthogonal projection to $W^\lambda$ with respect to the $L^2_k$ metric for any $k$, and we would have had $\| p^\lambda (x) \|_{L^2_k} \leq \|x \|_{L^2_k}.$ Given our choice of Sobolev norms, the same inequality holds up to a constant:
$$\| p^\lambda (x) \|_{L^2_k} \leq \Theta_k \|x \|_{L^2_k},$$
where $\Theta_k$ depends only on $k$ and the Riemannian manifold $Y$. We choose $\Theta_0 = 1$.
On $W^\lambda$, we consider the flow equation
\begin{equation}
\label{eq:approxgrad}
\frac{d}{d t} \gamma(t)=-(l + p^\lambda c)(\gamma(t)).
\end{equation}
We refer to solutions of \eqref{eq:approxgrad} as {\em approximate Seiberg-Witten trajectories}.
\begin{remark} \label{rem:notgradient}
If we consider the restriction of the CSD functional to $W^\lambda$, its gradient with respect to $\tilde g$ is
$$ \tilde{p}^{\lambda}_{\tilde g} (l + c) \tilde{p}^{\lambda}_{\tilde g}= l + \tilde{p}^{\lambda}_{\tilde g} c \tilde{p}^{\lambda}_{\tilde g},$$
where $\tilde{p}^{\lambda}_{\tilde g}$ denotes the $\tilde g$-orthogonal projection onto $W^{\lambda}$.
It would be rather cumbersome to work with these projections, so we replaced $\tilde{p}^{\lambda}_{\tilde g}$ with the $L^2$ orthogonal projection $\tilde{p}^{\lambda}$. When $\lambda$ is one of the cut-offs $\lambda^{\bullet}_i$, on $W^\lambda$ we have
$$ \tilde{p}^{\lambda} (l + c) \tilde{p}^{\lambda} = l + \tilde{p}^{\lambda} c = l +p^\lambda c.$$
However, even in this case (when $\lambda=\lambda^{\bullet}_i$), we expect that $ l +p^\lambda c$ is neither the $L^2$ nor the $\tilde g$ gradient of a function. We will show in Chapter~\ref{sec:quasigradient} that $l +p^\lambda c$ is a quasi-gradient, so it can still be used to do Morse theory.
\end{remark}
Fix a natural number $k \geq 5$. There exists a constant $R > 0$, such that all Seiberg-Witten trajectories $\gamma: {\mathbb{R}}\to W$ of finite type are contained in $B(R)$, the ball of radius $R$ in $W_k$. The following is a corresponding compactness result for approximate Seiberg-Witten trajectories:
\begin{proposition}[Proposition 3 in \cite{Spectrum}] \label{prop:proposition3}
For any $\lambda$ sufficiently large (compared to $R$), if $\gamma: \mathbb{R} \to W^\lambda$ is a trajectory of $(l + p^\lambda c)$, and $\gamma(t)$ is in $\overline{B(2R)}$ for all $t$, then in fact $\gamma(t)$ is contained in $B(R)$.
\end{proposition}
This was proved in \cite{Spectrum} with $p^{\lambda}_{\operatorname{prel}}$ instead of $p^\lambda$, but the same arguments work in our setting.
\section{The Conley index and the Seiberg-Witten Floer spectrum}\label{sec:conleyswf}
Recall the definition of the Conley index from Section~\ref{subsec:ConleyMorse}, and of the $S^1$-equivariant refinement $I_{S^1}$ mentioned at the end of Section~\ref{sec:combinedMorse}.
With this in mind, we are ready to define the Seiberg-Witten Floer spectrum. We fix $k$, $R$, and sufficiently large $\lambda$ such that Proposition~\ref{prop:proposition3} applies. We consider the vector field $u^\lambda(l + p^\lambda c)$ on $W^\lambda$, where $u^\lambda$ is a smooth, $S^1$-invariant, cut-off function on $W^\lambda$ that vanishes outside of $B(3R)$. This generates the flow $\phi^\lambda$ that we will work with. Denote by $S^\lambda$ the union of all trajectories of $\phi^\lambda$ inside $B(R)$. Recall from Proposition~\ref{prop:proposition3} that these are the same as the trajectories that stay in $\overline{B(2R)}$. This implies that $S^\lambda$ is an isolated invariant set.
Since everything is $S^1$-invariant, we can construct the equivariant Conley index $I^\lambda = I_{S^1}(\phi^\lambda,S^\lambda)$. We must de-suspend appropriately to make the stable homotopy type independent of $\lambda$:
\[
\operatorname{SWF}(Y,\mathfrak{s},g) = \Sigma^{-W^{(-\lambda,0)}} I^\lambda,
\]
where $W^{(-\lambda, 0)}$ denotes the direct sum of the eigenspaces of $l$ with eigenvalues in the interval $(-\lambda, 0)$. As we vary the metric $g$, the spectrum $\operatorname{SWF}(Y, \mathfrak{s}, g)$ varies by suspending (or de-suspending) with copies of the vector space $\cc$. In \cite{Spectrum}, this indeterminacy is fixed by introducing a quantity $n(Y,\mathfrak{s}, g) \in \qq$ (a linear combination of eta invariants), and setting
$$\operatorname{SWF}(Y, \mathfrak{s}) = \Sigma^{-n(Y, \mathfrak{s}, g) \cc} \operatorname{SWF}(Y, \mathfrak{s}, g),$$
where the de-suspension by rational numbers is defined formally. For the definition of $n(Y,\mathfrak{s},g)$, see \eqref{eq:ng} below. We have:
\begin{theorem}[Theorem 1 in \cite{Spectrum}]
The $S^1$-equivariant stable homotopy type of $\operatorname{SWF}(Y,\mathfrak{s})$ is an invariant of the pair $(Y, \mathfrak{s})$.
\end{theorem}
\begin{remark}
The construction of $\operatorname{SWF}(Y, \mathfrak{s})$ in \cite{Spectrum} used the smoothed projections $p^{\lambda}_{\operatorname{prel}}$. By interpolating linearly between $p^{\lambda}_{\operatorname{prel}}$ and $p^\lambda$, and using the homotopy invariance properties of the Conley index, we see that the definitions from $p^\lambda$ and $p^{\lambda}_{\operatorname{prel}}$ yield the same $\operatorname{SWF}(Y, \mathfrak{s})$.
\end{remark}
\chapter{Monopole Floer homology}
\label{sec:HM}
In this chapter we review the definition of monopole Floer homology given by Kronheimer and Mrowka in their book \cite{KMbook}. Recall that we are considering the case of a rational homology sphere $Y$ and Spin$^c$ structure $\mathfrak{s}$, which is necessarily torsion. In this case, all of the reducible solutions to the Seiberg-Witten equations are gauge equivalent. While we worked in the Coulomb gauge for the Floer spectrum, for now, we will return to the entire configuration space $$\mathcal{C}(Y) = \Omega^1(Y;i \R) \oplus \Gamma(\mathbb{S})$$ (where we have made this identification via a fixed flat Spin$^c$ connection).
Here are the main ideas in the construction of monopole Floer homology: One would like to proceed by analogy with Morse homology; that is, to build a chain complex whose generators are given by gauge-equivalence classes of critical points of the CSD functional and whose differential counts gradient trajectories between them. However, there are several technical issues that need to be addressed. One issue is that the gauge group does not act freely near the reducible solutions to the Seiberg-Witten equations, and these points cause a serious problem. This problem was solved by Kronheimer and Mrowka by blowing up the singular set, in a way similar to the construction of $S^1$-equivariant Morse homology in Section~\ref{subsec:CircleMorse}. The second problem is that even after performing a blow-up, transversality may still not be satisfied for the moduli spaces. For this reason, special perturbations to the CSD functional must be introduced.
\section{Seiberg-Witten equations on the blow-up} \label{sec:SWblowup}
We will often work on cylinders of the form $Z = I \times Y$, where $I$ is an interval (possibly $\mathbb{R}$). The Spin$^c$ structure on $Y$ induces a Spin$^c$ structure on $Z$ with unitary rank 4 bundle $\mathbb{S}^+ \oplus \mathbb{S}^-$ on $Z$; here, $\mathbb{S}^{\pm}$ are the $\mp 1$ eigenspaces of $\rho(d\operatorname{vol}_Z)$, where $\rho$ is the induced Clifford multiplication for differential forms on $Z$. Both $\mathbb{S}^+$ and $\mathbb{S}^-$ can be identified with the pull-backs of $\mathbb{S}$ under the projection $\mathrm{p}: Z \to Y$. Again identifying 1-forms with Spin$^c$ connections, we have
\[
\mathcal{C}(Z) = \{(a,\phi) \mid a \in \Omega^1(Z; i\R), \phi \in \Gamma(\mathbb{S}^+)\}.
\]
This is acted on by the gauge group $\G(Z) = C^{\infty}(Z, S^1).$ An element of $\mathcal{C}(Z)$ in temporal gauge (i.e., such that the $dt$ component of $a$ is zero) corresponds to a path $\gamma(t) = (a(t),\phi(t))$ in $\mathcal{C}(Y)$, and we will often not distinguish between these. Note that every element of $\mathcal{C}(Z)$ is gauge-equivalent to a configuration in temporal gauge.
We write down the four-dimensional Seiberg-Witten equations on $Z$ as $\F(a,\phi) = 0$, where
\[
\F(a,\phi) = (d^+a - \rho^{-1}((\phi\phi^*)_0),D^+_a \phi) \in \Omega^2_+(Z;i\R) \oplus \Gamma(\mathbb{S}^-),
\]
and $D^+_a : \Gamma(\mathbb{S}^+) \to \Gamma(\mathbb{S}^-)$ is the Dirac operator and $d^+ a$ is the self-dual part of $da$. If $\F(\gamma) = 0$, then $\gamma$ represents a {\em Seiberg-Witten trajectory}. If $\gamma$ is the constant path, then this gives a solution to the Seiberg-Witten equations on $Y$. In general, $\F(\gamma) = 0$ is equivalent to $\gamma$ being a downward gradient trajectory of $\mathscr{L}$. We will often go back and forth between this notion of trajectories of $\operatorname{grad} \mathscr{L} = \mathcal{X}$ on $Y$ and solutions to the Seiberg-Witten equations on $Z$. We will also think of $\F$ as a section of $\V(Z)$, the trivial bundle over $\mathcal{C}(Z)$ with fiber $\Gamma(Z; i \Lambda^2_+ T^*Z \oplus \mathbb{S}^-)$.
As discussed above, in order to deal with the reducible solution, we must blow-up our configuration spaces. We first consider
\[
\mathcal{C}^{\sigma}(Y) = \{(a,s,\phi) \mid s \geq 0, \| \phi \|_{L^2} = 1 \} \subset \Omega^1(Y; i\R) \times \mathbb{R}_{\geq 0 } \times \Gamma(\mathbb{S}).
\]
We can define $\mathcal{C}^{\sigma}(Z)$ similarly, as the space of triples $(a, s, \phi) \in \Omega^1(Z; i\R) \times \mathbb{R}_{\geq 0 } \times \Gamma(\mathbb{S}^+)$ with $\| \phi \|_{L^2(Z)} = 1$. The {\em blown-up Seiberg-Witten equations} on $\mathcal{C}^{\sigma}(Z)$ are given by $\Fsigma(a, s, \phi) = 0$, where $\Fsigma$ is a section of a bundle $\V^{\sigma}(Z)$ over $\mathcal{C}^{\sigma}(Z)$, defined to be the pullback of $\V(Z)$ under the blow-down from $\mathcal{C}^{\sigma}(Z)$ to $\mathcal{C}(Z)$; see \cite[p.115]{KMbook}. Explicitly, we have
\[
\Fsigma(a,s,\phi) = (d^+a - s^2 \rho^{-1}((\phi\phi^*)_0),D^+_a \phi).
\]
However, there is a variant of the four-dimensional blow-up that is more directly related to paths in $\mathcal{C}^{\sigma}(Y)$. This is the so-called {\em $\tau$ model} defined in \cite[Section 6.3]{KMbook}. Precisely, we let
$$ \mathcal{C}^\tau(Z) \subset \Omega^1(Z; i\R) \times C^\infty(I) \times C^\infty(Z; \mathbb{S}^+)$$
be the space of triples $(a, s, \phi)$ with $s(t) \geq 0$ and $\|\phi(t)\|_{L^2(Y)}=1$ for all $t \in I$. After moving it into temporal gauge, an element of $\mathcal{C}^\tau(Z)$ determines a path $\gamma^{\sigma}(t) = (a(t),s(t),\phi(t))$ in $\mathcal{C}^{\sigma}(Y)$.
In temporal gauge on $\mathcal{C}^\tau(Z)$, the blown-up Seiberg-Witten equations can be written as
\begin{align*}
& \frac{d}{dt} a(t) = - *da(t) - s(t)^2 \tau(\phi(t), \phi(t)) \\
& \frac{d}{dt} s(t) = - \Lambda(a(t),s(t),\phi(t)) s(t) \\
& \frac{d}{dt} \phi(t) = - D_{a(t)} \phi(t) + \Lambda(a(t),s(t),\phi(t)) \phi(t),
\end{align*}
where
\begin{equation}
\label{eq:LambdaNoq}
\Lambda(a,s,\phi) = \langle \phi, D_a \phi \rangle_{L^2}.
\end{equation}
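For $s > 0$, these equations amount to the unperturbed flow equation $\frac{d}{dt}(a, \Phi) = -\mathcal{X}(a, \Phi)$ rewritten for $\Phi = s\phi$ with $\|\phi(t)\|_{L^2}=1$. The first equation is the $1$-form part, using that $\tau$ is quadratic, so $\tau(s\phi, s\phi) = s^2 \tau(\phi,\phi)$. For the spinor part, expanding
$$ \frac{d}{dt}(s\phi) = \dot{s}\,\phi + s\dot{\phi} = -D_a(s\phi)$$
and taking the real $L^2$ inner product with $\phi$ (using $\operatorname{Re}\langle \phi, \dot{\phi}\rangle_{L^2} = \tfrac{1}{2}\tfrac{d}{dt}\|\phi\|^2_{L^2} = 0$) gives $\dot{s} = -\Lambda(a,s,\phi)\,s$; subtracting $\dot{s}\,\phi$ and dividing by $s$ then yields $\dot{\phi} = -D_a\phi + \Lambda(a,s,\phi)\phi$.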
Furthermore, the right hand sides of these equations induce a vector field, denoted $\mathcal{X}^{\sigma}$, on $\mathcal{C}^{\sigma}(Y)$ whose flow trajectories are precisely the solutions to the three equations above. The zeros of $\mathcal{X}^{\sigma}$ on $\mathcal{C}^{\sigma}(Y)$ are easily rephrased in terms of the zeros of $\mathcal{X}$ on $\mathcal{C}(Y)$:
\begin{proposition}[Proposition 6.2.3 in \cite{KMbook}]\label{prop:swblownup} If $s > 0$, then $(a,s,\phi) \in \mathcal{C}^{\sigma}(Y)$ is a zero of $\mathcal{X}^{\sigma}$ in $\mathcal{C}^{\sigma}(Y)$ if and only if $(a,s\phi)$ is a zero of $\mathcal{X}$ in $\mathcal{C}(Y)$. If $s = 0$, then $(a,s,\phi)$ is a zero if and only if $(a,0)$ is a zero of $\mathcal{X}$ and $\phi$ is an eigenvector of $D_a$.
\end{proposition}
\begin{remark}
\label{rem:zeroA}
Since $b_1(Y)=0$, we have that $(a,0)$ is a zero of $\mathcal{X}$ if and only if $a=0$.
\end{remark}
Throughout this section, we fix an integer $k \geq 2$. We may also blow up the Sobolev completions to obtain spaces $\mathcal{C}^{\sigma}_{k}(Y)$ such that $\mathcal{X}^{\sigma}$ extends. Similarly, we can complete the gauge group $\G$ to $\G_{k}$. This leads to the various blown-up configuration spaces mod gauge:
\[
\B(Y) = \mathcal{C}(Y)/\G, \; \; \B^{\sigma}(Y) = \mathcal{C}^{\sigma}(Y)/\G, \; \; \B_{k}(Y) = \mathcal{C}_{k}(Y)/\G_{k+1}, \; \; \B^{\sigma}_{k}(Y) = \mathcal{C}^{\sigma}_{k}(Y)/\G_{k+1}.
\]
Recall that for $j \leq k$, $\T_j$ denotes the $L^2_j$ completion of $T \mathcal{C}_k(Y)$. We have the analogous construction in the blow-up, $\T^{\sigma}_j$, which decomposes as
\begin{equation}
\label{eq:BlowUpDecompose}
\T^{\sigma}_j = \J^{\sigma}_{j} \oplus \K^{\sigma}_{j},
\end{equation}
where $\J^{\sigma}_{j}$ consists of the tangents to the gauge orbits. More explicitly, the $L^2_j$ completion of the tangent space at $x = (a,s,\phi) \in \mathcal{C}^{\sigma}_k(Y)$ is
$$
\T^{\sigma}_{j,x} = \{(b,r,\psi) \mid \operatorname{Re} \langle \phi, \psi \rangle_{L^2} = 0 \} \subset L^2_j(Y;iT^*Y) \oplus {\mathbb{R}}\oplus L^2_j(Y; \mathbb{S}).
$$
At $\phi = 0$, we also want to consider $\T^{\red}_{j}$, the completion of the tangent bundle to $L^2_k(\Omega^1(Y; i\R))$. We have an analogous splitting of $\T^{\red}_{k,a}$ into $\J^{\red}_{k,a} = L^2_k(\operatorname{im}d)$ and $\K^{\red}_{k,a} = L^2_k(\ker d^*)$.
In four dimensions, we can similarly divide the blown up configuration space $\mathcal{C}^{\sigma}(Z)$ by gauge to obtain a space $\B^{\sigma}(Z)$. In the $\tau$ model, we define the quotient configuration space $\B^\tau(Z) = \mathcal{C}^{\tau}(Z)/\G(Z)$. Also, starting from the bundle $\V(Z)$ over $\mathcal{C}(Z)$, we obtain a bundle $\V^{\tau}(Z)$ over $\mathcal{C}^{\tau}(Z)$. Explicitly, the fiber of $\V^\tau(Z)$ over $(a, s, \phi)$ is
\begin{align*}
\V^{\tau}(Z)_{(a, s, \phi)} &= \{(b, r, \psi) \mid \operatorname{Re} \langle \phi(t), \psi(t) \rangle_{L^2(Y)}=0, \ \forall t \} \\
& \subset C^\infty(Z; i\Lambda^2_+ T^*Z) \oplus C^\infty(\rr) \oplus C^{\infty}(Z; \mathbb{S}^-).
\end{align*}
In the $\tau$ model, the blown-up Seiberg-Witten equations on $Z$ can be written as the zeros of a section $\F^\tau$ of the bundle $\V^\tau(Z)$; see \cite[Equations (6.11)]{KMbook} for the exact formula. In temporal gauge, we simply have
$$ \F^\tau = \frac{d}{dt} + \mathcal{X}^{\sigma}.$$
When $Z$ is a compact cylinder, that is, $Z = [t_1, t_2] \times Y$, we define the Sobolev completions $\mathcal{C}_{k}(Z)$, $\B_k(Z)$, $\V_{k}(Z)$, and similarly with $\sigma$ or $\tau$ superscripts. We also have completed tangent bundles $\T_j(Z)$, $\T^{\sigma}_j(Z)$ and $\T^{\tau}_j(Z)$. We should note that $\mathcal{C}^{\sigma}_{k}(Y)$ and $\mathcal{C}^{\sigma}_k(Z)$ are Hilbert manifolds with boundary, so the tangent bundles make sense.
The $\tau$ model $\mathcal{C}^\tau_k(Z)$ is not a Hilbert manifold, even with boundary. Nevertheless, it is a closed subset of the Hilbert manifold $\tilde{\mathcal{C}}^\tau_k(Z)$, which is the $L^2_k$ completion of
\begin{equation}
\label{eq:tildeC}
\tilde{\mathcal{C}}^\tau(Z) = \{(a, s, \phi) \mid \|\phi(t)\|_{L^2(Y)}=1, \ \forall t\} \subset \Omega^1(Z; i\R) \times C^\infty([t_1,t_2]) \times C^\infty(Z; \mathbb{S}^+);
\end{equation}
see \cite[Section 9.2]{KMbook}. By the completed tangent bundle to $\mathcal{C}^\tau_k(Z)$ we simply mean the restriction of the completed tangent bundle to $\tilde{\mathcal{C}}^\tau_k(Z)$, i.e. the $L^2_j$ completion of the bundle $\T^\tau(Z)$ with fibers
\begin{align*}
\T^{\tau}(Z)_{(a, s, \phi)} &= \{(b, r, \psi) \mid \operatorname{Re} \langle \phi(t), \psi(t) \rangle_{L^2(Y)}=0, \ \forall t \} \\
& \subset C^\infty(Z; i T^*Z) \oplus C^\infty([t_1,t_2]) \oplus C^{\infty}(Z; \mathbb{S}^+).
\end{align*}
The bundle $\V^\tau(Z)$ extends naturally over $\tilde{\mathcal{C}}^\tau(Z)$ as well. When we discuss $\V^\tau(Z)$ below, we will mean this extension, and we will say so explicitly when the restriction to $\mathcal{C}^\tau(Z)$ is used. Finally, we will need to quotient the extended spaces $\tilde{\mathcal{C}}^\tau_k(Z)$ by gauge, which we write as $\tilde{\mathcal{B}}^\tau_k$.
When $Z$ is non-compact, for example $Z={\mathbb{R}}\times Y$, it is often more useful to consider the local Sobolev completions $L^2_{k, loc}$; see \cite[Section 13.1]{KMbook} for more details.
As mentioned in Chapter~\ref{sec:spectrum}, gauge transformations preserve the property of being a Seiberg-Witten trajectory; this extends through blow-ups and Sobolev completions. Monopole Floer homology will be defined as the Morse homology (for manifolds with boundary) of a perturbation of the Seiberg-Witten equations on $\B^{\sigma}_{k}(Y)$.
\section{Perturbed Seiberg-Witten equations}
\label{sec:perturbedSW}
Consider a function $f: \mathcal{C}(Y) \to \mathbb{R}$ which is gauge-invariant. A {\em perturbation} is a section $\q: \mathcal{C}(Y) \to \T_{0}$. We will call $\q$ the {\em formal gradient of $f$}, or $\operatorname{grad} f$ (even though it is not actually a gradient), if for all $\gamma \in C^\infty([0,1],\mathcal{C}(Y))$
\[
\int^1_0 \Bigl \langle \frac{d}{dt} \gamma(t),\q(\gamma(t)) \Bigr \rangle_{L^2} dt = f \circ \gamma(1) - f \circ \gamma(0).
\]
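For example, the CSD functional itself has formal gradient $\q = \mathcal{X}$: since $\mathcal{X}$ is the $L^2$-gradient of $\mathscr{L}$, the chain rule gives
$$ \int^1_0 \Bigl \langle \frac{d}{dt} \gamma(t), \mathcal{X}(\gamma(t)) \Bigr \rangle_{L^2} dt = \int^1_0 \frac{d}{dt}\, \mathscr{L}(\gamma(t))\, dt = \mathscr{L}(\gamma(1)) - \mathscr{L}(\gamma(0)).$$
The point of asking only for this integral identity along smooth paths is that it makes sense even when $f$ is merely continuous, as in Definition~\ref{def:tame} below.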
We will use the notation
$$\mathcal{X}_\q:=\mathcal{X} + \mathfrak{q}= (\operatorname{grad} \mathscr{L}) + \q.$$
We will also write $\mathscr{L}_\q$ for $\mathscr{L} + f$, so $\mathcal{X}_\q = \operatorname{grad} \mathscr{L}_\q$. Note that $\q$ is described by its components $\q^0$ to $\Omega^1(Y; i\R)$ and $\q^1$ to $\Gamma(\mathbb{S})$. More generally, from now on we will use the superscripts $0$ and $1$ to describe the $1$-form and spinorial parts of vectors in $\mathcal{C}(Y)$.
The perturbation $\q$ also induces a section $\qhat: \mathcal{C}(Z) \to \V_{0}(Z)$. When $Z=[t_1, t_2] \times Y$ is a compact cylinder, we will require that this actually extends to a smooth section $\mathcal{C}_{k}(Z) \to \V_{k}(Z)$.
The flow trajectories $(a(t),\phi(t))$ of $\mathcal{X}_\q$ are solutions to the equations
\begin{align*}
& \frac{d}{dt} a = - *da - \tau(\phi, \phi) - \q^0(a,\phi) \\
& \frac{d}{dt} \phi = - D_a \phi - \q^1(a,\phi).
\end{align*}
Recast on $Z$, these are the {\em perturbed Seiberg-Witten equations}, $\F_{\q} = \mathcal{F}+ \hat{\mathfrak{q}}= 0$.
We now move to the blow-up. The vector field $\mathcal{X}_\q$ induces a vector field $\mathcal{X}^{\sigma}_{\q}$ on $\mathcal{C}^{\sigma}(Y)$. Precisely, on $\mathcal{C}^{\sigma}(Y)$, we define
\[
\q^{\sigma}(a,s,\phi) = (\q^0(a,s\phi), \operatorname{Re} \langle \tilde{\q}^1(a,s,\phi),\phi \rangle_{L^2} \cdot s, \tilde{\q}^1(a,s,\phi)^\perp),
\]
where $\tilde{\q}^1(a,s,\phi) = \int^1_0 \D_{(a,st\phi)} \q^1(0,\phi) dt$ and $\perp$ means projection onto the real orthogonal complement of $\phi$. We can obtain $\hat{\q}^{\sigma}$ similarly, with components $(\hat{\q}^{0,\sigma},\hat{\q}^{1,\sigma})$. This leads to the flow equations for $\mathcal{X}^{\sigma}_{\q}$:
\begin{align*}
& \frac{d}{dt} a = - *da - s^2 \tau(\phi, \phi) - \q^0(a,s\phi), \\
& \frac{d}{dt} s = - \Lambda_\q(a,s,\phi)s, \\
& \frac{d}{dt} \phi = - D_a \phi - \tilde{\q}^1(a,s,\phi) + \Lambda_\q(a,s,\phi)\phi,
\end{align*}
where
\begin{equation}
\label{eq:Lambdaq}
\Lambda_\q(a,s,\phi) = \operatorname{Re}\langle \phi,D_a \phi + \tilde{\q}^1(a,s,\phi) \rangle_{L^2}.
\end{equation}
These are the solutions to the {\em perturbed Seiberg-Witten equations on the blow-up} in temporal gauge, or $\Fsigma_{\q} = 0$ for short. We call $\Lambda_\q(a,s,\phi)$ the {\em spinorial energy} of $(a,s,\phi)$. Note that for an irreducible stationary point $(a,s,\phi)$, we must have $\Lambda_\q(a,s,\phi) = 0$.
We can now state the perturbed analogue of Proposition~\ref{prop:swblownup}. First, we define the operator
\[
D_{\q,a}: L^2_k(Y; \mathbb{S}) \to L^2_{k-1}(Y; \mathbb{S}), \ \ \
D_{\q, a} (\phi) =D_a \phi + \D_{(a,0)} \q^1(0,\phi).
\]
\begin{proposition}[Proposition 10.3.1 of \cite{KMbook}]
The element $(a,s,\phi)$ in $\mathcal{C}^{\sigma}_{k}(Y)$ is a zero of $\mathcal{X}^{\sigma}_{\q}$ if and only if:
\begin{enumerate}[(a)]
\item $s \neq 0$ and $(a,s\phi)$ is a stationary point of $\mathcal{X}_\q$, or
\item $s = 0$ and $(a,0)$ is a stationary point of $\mathcal{X}_\q$ and $\phi$ is an eigenvector of $D_{\q,a}$.
\end{enumerate}
\end{proposition}
\begin{remark}
A stationary point of the form $(a,0)$ is called {\em reducible}. Recall from Remark~\ref{rem:zeroA} that the unperturbed Seiberg-Witten vector field $\mathcal{X}$ has a unique reducible stationary point mod gauge, namely $a=0$. We claim that, when $\q$ is small, the perturbed field $\mathcal{X}_\q$ also has a unique reducible stationary point. Indeed, notice that the linear map $*d: L^2_k(Y;iT^*Y) \to L^2_{k-1}(Y; iT^*Y)$ induces an invertible map from $(\ker d^*)_k$ to $(\ker d^*)_{k-1}$. Note also that $(\ker d^*)_j$ is the $L^2$ orthogonal complement in $L^2_j(Y; iT^*Y)$ of the tangents to the gauge orbits of connections. Therefore, by the inverse function theorem, for a small perturbation $\q$, there is a unique $a \in L^2_k(Y; iT^*Y)$ mod gauge, in a neighborhood of $0$, satisfying $*da + \q^0(a,0) = 0$. (Here we are using that $\q$ is $L^2$ orthogonal to the tangents of the gauge orbits, since it is the formal gradient of a gauge-invariant function.) By elliptic bootstrapping, $a$ is smooth. Further, by gauge-equivariance, we have $\q^1(a,0) = 0$ and we can conclude that $(a,0)$ provides the unique reducible solution to $\mathcal{X}_\q = 0$ near zero. Using the compactness properties of the perturbed Seiberg-Witten equations, it also follows that (for sufficiently small $\q$) there are no reducible solutions outside the fixed neighborhood of $0$.
We will not, however, use the uniqueness of reducible solutions for small perturbations in this book.
\end{remark}
The (perturbed, blown-up) Seiberg-Witten map $\Fsigma_\q$ can be viewed as a section of the bundle $\V^{\sigma}(Z)$ over $\mathcal{C}^{\sigma}(Z)$. In the $\tau$ model, there is a similar section $\F^{\tau}_\q$ of $\V^\tau(Z)$ over $\tilde{\mathcal{C}}^\tau(Z)$. In temporal gauge, we can simply write
$$ \F^{\tau}_{\q} = \frac{d}{dt} + \mathcal{X}^{\sigma}_{\q}.$$
\section{Tame perturbations}
\label{sec:tame}
We are interested in studying the moduli spaces of flows of $\mathcal{X}^{\sigma}_{\q}$ connecting stationary points. In order to obtain the desired compactification results for the moduli spaces, we require some conditions on $\q$.
Let $Z = [t_1, t_2] \times Y$ be a compact cylinder. For every one-form $a \in \Omega^1(Z; i\R)$, we can consider the Sobolev norm $L^2_{k, a}$ defined using as covariant derivative the connection corresponding to $a$, namely $\nabla_{A_0 + a}$. The $L^2_{k, a}$ norm is equivalent to the usual Sobolev norm, and gives rise to the same Sobolev completion. However, when we want to state global bounds it becomes important to specify the precise Sobolev norm. We will take the usual norm on $\mathcal{C}_{k}(Y), \mathcal{C}_{k}(Z)$. However, on the bundles $\V_{k}(Z)$ and $T\mathcal{C}_{k}(Z)$, in the fiber over $(a, \phi)$ we will take the $L^2_{k, a}$ norm; this turns them into gauge-invariant normed vector bundles.
\begin{definition}[Definition 10.5.1 in \cite{KMbook}]
\label{def:tame}
Fix an integer $k \geq 2$. Suppose that a section $\mathfrak{q}: \operatorname{sp}incC(Y) \mathfrak{t}o \T_{0}$ is the formal gradient of a continuous, $\G$-invariant function. We say $\q$ is a {\em $k$-tame perturbation} if, for any compact cylinder $Z = [t_1, t_2] \mathfrak{t}imes Y$, the following hold:
\begin{enumerate}[(i)]
\item $\qhat$ defines an element of $C^{\infty}(\mathcal{C}_{k}(Z), \V_{k}(Z))$;
\item $\qhat$ also defines an element of $C^0(\mathcal{C}_{j}(Z), \V_{j}(Z))$, for all integers $j \in [1,k]$;
\item The derivative $\mathcal{D}\hat{\mathfrak{q}}\in C^\infty(\operatorname{sp}incC_{k}(Z),\operatorname{Hom}(\T_{k}(Z),\V_{k}(Z)))$ extends to a smooth map into $\operatorname{Hom}(\T_{j}(Z),\V_{j}(Z))$, for all integers $j \in [-k,k]$;
\item There exists a constant $m$ such that
\[
\| \mathfrak{q}(a,\phi) \|_{L^2} \leq m (\| \phi \|_{L^2} + 1),
\]
for all $(a,\phi) \in \operatorname{sp}incC_{k}(Y)$;
\item For any $a_0 \in i \mathcal{O}mega^1(Z)$, there is a function $h:\mathbb{R} \mathfrak{t}o \mathbb{R}$ such that
\[
\| \qhat(a,\phi) \|_{L^2_{1,a}} \leq h(\| (a, \phi)\|_{L^2_{1,a_0}}),
\]
for all $(a,\phi) \in \operatorname{sp}incC_{k}(Z)$;
\item $\q$ extends to a $C^1$ section of $\operatorname{sp}incC_{1}(Y)$ into $\T_{0}$.
\end{enumerate}
We say that $\q$ is tame if it is $k$-tame for all $k \geq 2$.
\end{definition}
\begin{remark}
\label{rem:qhatq}
Conditions (i), (ii), (iii) and (v) are phrased in terms of the four-dimensional perturbation $\hat{\q}$, but they readily imply the analogous statements for $\q$. For example, from part (i) in Definition~\ref{def:tame}, we know that for a tame perturbation $\q$, $\hat{\q}$ gives a smooth map from $\mathcal{C}_{k}(Z)$ to $\V_{k}(Z)$, for all $k \geq 1$. For $x \in \mathcal{C}_{k}(Y)$, we can consider the element of $\mathcal{C}_{k}(Z)$ that is constantly $x$ in every slice $\{t\} \mathfrak{t}imes Y$. Its image under $\hat{\q}$ is constantly $\q(x)$ in every slice, so we conclude that $\q(x)$ is in $\V_{k}(Y)$.
\end{remark}
For a tame perturbation $\q$, the set of solutions to $\mathcal{X}_\mathfrak{q}= 0$, topologized as a subspace of $\B_{k}(Y)$, is compact. In this case, any trajectory of $\mathcal{X}qsigma$ between two stationary points in $\operatorname{sp}incC^\operatorname{sp}incigma_{k}(Y)$ is gauge-equivalent to a smooth trajectory, which thus lives in $\operatorname{sp}incC^\operatorname{sp}incigma(Y)$; no Sobolev completion is necessary.
Kronheimer and Mrowka construct explicitly a collection of tame perturbations which are the gradient of cylinder functions. We will not define these here. However, we will always need to work with a space of perturbations that does contain them.
\begin{definition}
A separable Banach space $\P$ is a {\em large Banach space of tame perturbations} if there exists a map $\mathcal{P}\mathfrak{t}o C^0(\operatorname{sp}incC(Y),\T_{0})$ which takes $x$ to $\q_x$ such that:
\begin{enumerate}[(i)]
\item the image of this map contains a countably infinite family of perturbations which are gradients of cylinder functions,
\item $\q_x$ is tame for all $x \in \P$,
\item for $k \geq 2$, the map $\mathcal{P}\mathfrak{t}imes \operatorname{sp}incC_{k}(I \mathfrak{t}imes Y) \mathfrak{t}o \V_{k}(I \mathfrak{t}imes Y)$ is smooth for any compact interval $I$ in $\mathbb{R}$,
\item the map $\mathcal{P}\mathfrak{t}imes \operatorname{sp}incC_{1}(Y) \mathfrak{t}o \T_{1}(Y)$ is continuous, and
\item there exist a constant $m$ and a map $\mu:\mathbb{R} \mathfrak{t}o \mathbb{R}$ such that $\mathcal{P}\mathfrak{t}imes \operatorname{sp}incC_{1}(Y) \mathfrak{t}o \T_{1}(Y)$ satisfies
\begin{align*}
\|\q_x(a,\phi)\|_{L^2} & \leq m \| x \| ( \|\phi\|_{L^2} + 1) \\
\|\q_x(a,\phi)\|_{L^2_{1,a_0}} & \leq \| x \| \mu(\|(a,\phi)\|_{L^2_{1,a_0}}).
\end{align*}
\end{enumerate}
\end{definition}
\section{Very tame perturbations}
\label{sec:verytame}
We now make a digression. For the purposes of doing finite dimensional approximation (as we shall do in this book), we need slightly stronger assumptions on the perturbation $\q$ than the ones in Definition~\ref{def:tame}. We give the relevant definitions here. These concepts are new material; they did not appear in \cite{KMbook}.
Let us recall that a linear operator between Banach spaces, $f: E \mathfrak{t}o F$, is continuous if and only if it is bounded, i.e., there is a constant $K$ such that $\|f(x) \| \leq K \|x\|$ for all $x$. This condition is also equivalent to requiring $f$ to take bounded sets to bounded sets. For nonlinear operators, we introduce the following terminology.
\begin{definition}
\label{def:fb}
Let $E$ and $F$ be Banach spaces. A map $f: E \mathfrak{t}o F$ is called {\em functionally bounded} if $f(B) \operatorname{sp}incubset F$ is bounded whenever $B \operatorname{sp}incubset E$ is bounded. In other words, there exists a function $h: \rr\mathfrak{t}o \R$ such that
$$ \| f(x) \| \leq h(\|x\|),$$
for all $x \in E$.
\end{definition}
We use the term {\em functionally bounded}, rather than {\em bounded}, to prevent confusion with
the usual notion of bounded functions in analysis, which requires that $\| f(x) \| \leq K$ for some $K$.
For non-linear operators, continuity does not imply functional boundedness; see \cite{StackExchange}. We will need both of these conditions, so let us denote by
$$ C^0_{\fb}(E, F)$$
the space of continuous and functionally bounded operators from $E$ to $F$. Further, for $m \geq 1$ (and also for $m=\infty$), we let
$$ C^m_{\fb}(E, F) = \{ f \in C^m(E, F) \mid \D^\ell f \in C^0_{\fb}(E, \operatorname{Hom}(E^{\mathfrak{t}imes \ell}, F)), \ \ell=0, \dots, m \}.$$
We can similarly define the space of $C^k_{\fb}$ sections of a bundle.
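The following elementary observation (ours, not taken from \cite{KMbook}) may help in parsing the definition. We may always take the function $h$ witnessing functional boundedness to be non-decreasing, by replacing $h(r)$ with $\sup_{\|x\| \leq r} \|f(x)\|$. With this convention, if $f \in C^0_{\fb}(E, F)$ and $g \in C^0_{\fb}(F, G)$, then
\[
\| g(f(x)) \| \leq h_g\bigl(\| f(x) \|\bigr) \leq h_g\bigl(h_f(\| x \|)\bigr),
\]
so $g \circ f \in C^0_{\fb}(E, G)$, with $h_{g \circ f} = h_g \circ h_f$. For a bounded linear operator $f$ one can simply take $h_f(r) = \|f\|\, r$, so in the linear case functional boundedness recovers the usual notion of boundedness.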
With this in mind, we present the following strengthening of Definition~\ref{def:tame}.
\begin{definition}
\label{def:verytame}
Fix $k \geq 2$. A $k$-tame perturbation $\q$ is called {\em very $k$-tame} if the following additional conditions are satisfied:
\begin{enumerate}[(i)]
\item $\qhat$ defines an element of $C^{\infty}_{\fb}(\mathcal{C}_{k}(Z), \V_{k}(Z))$;
\item $\qhat$ also defines an element of $C^0_{\fb}(\mathcal{C}_{j}(Z), \V_{j}(Z))$, for all integers $j \in [1,k]$;
\item $\mathcal{D}\qhat$ defines an element of $C^\infty_{\fb}(\operatorname{sp}incC_{k}(Z),\operatorname{Hom}(T\operatorname{sp}incC_{j}(Z),\V_{j}(Z)))$, for all integers $j \in [-k,k]$.
\end{enumerate}
We say that $\q$ is {\em very tame} if it is very $k$-tame for all $k \geq 2$.
\end{definition}
Luckily, the kind of perturbations considered by Kronheimer and Mrowka in \cite{KMbook} satisfy these additional conditions.
\begin{lemma}
\label{lem:verytame}
Let $\q$ be the gradient of a cylinder function, as in \cite[Section 11]{KMbook}. Then, $\q$ is very tame.
\end{lemma}
\begin{proof}
This is an immediate consequence of Proposition 11.4.1 and Lemma 11.4.4 in \cite{KMbook}, along the lines of the proof of Theorem 11.1.2 on p.190 of \cite{KMbook}.
\end{proof}
We will need to ensure both that the perturbations we use are very tame and that the space of perturbations is large enough to guarantee transversality. This motivates the following definition.
\begin{definition}\label{def:large-verytame}
A {\em large Banach space of very tame perturbations} is a large Banach space of tame perturbations where each perturbation is additionally very tame.
\end{definition}
Using Lemma~\ref{lem:verytame}, the proof of the existence of such a space follows as in the proof of Theorem 11.6.1 in \cite{KMbook}.
\section{Admissible perturbations} \label{sec:AdmPer}
For Morse homology, we need non-degeneracy of the Hessian at the critical points and the Morse-Smale condition. For monopole Floer homology, we must establish the analogues of these: non-degeneracy of stationary points and regularity of the moduli spaces. We first focus on non-degeneracy.
Fix a tame perturbation $\q$ and a Sobolev index $k$. Note that $\mathcal{X}qsigma$ takes $\mathcal{C}^\operatorname{sp}incigma_k(Y)$ to $\T^\operatorname{sp}incigma_{k-1}$. Note also that the stationary points of $\mathcal{X}qsigma$ cannot be isolated, as they would be in Morse theory, because the gauge group preserves stationary points. Therefore, non-degeneracy will be a Morse-Bott type condition, in the sense that the stationary points are only required to be non-degenerate in the directions transverse to the gauge orbits.
\begin{definition}
A stationary point $x$ of $\mathcal{X}qsigma$ is {\em non-degenerate} in $\operatorname{sp}incC^\operatorname{sp}incigma_{k}(Y)$ if $\mathcal{X}qsigma$ is transverse to the sub-bundle $\J^\operatorname{sp}incigma_{k-1}$ at $x$.
\end{definition}
Much like the existence of stationary points, the non-degeneracy of the stationary points of $\mathcal{X}qsigma$ can be described in terms of $\mathcal{X}_\q$ on $\operatorname{sp}incC_{k}(Y)$.
\begin{proposition}[Proposition 12.2.5 in \cite{KMbook}]\label{prop:nondegeneracycharacterized}
A stationary point $(a,s,\phi)$ with $s>0$ is a {\em non-degenerate} zero of $\mathcal{X}qsigma$ on $\operatorname{sp}incC^\operatorname{sp}incigma_{k}(Y)$ if and only if $\mathcal{X}_\q$ is transverse to $\J_{k-1}$ at $(a,s\phi)$. If $s = 0$, and $\phi$ is an eigenvector of $D_{\q,a}$ with eigenvalue $\lambda$, then $(a, 0, \phi)$ is non-degenerate if and only if the following three conditions are satisfied:
\begin{enumerate}
\item $\lambda \neq 0$,
\item $\lambda$ is a simple eigenvalue,
\item $\mathcal{X}_\q$ is transverse to $\J^{\red}_{k-1}$ at $a$.
\end{enumerate}
\end{proposition}
It turns out that one can always find many perturbations such that the stationary points are non-degenerate.
\begin{theorem}[Theorem 12.1.2 in \cite{KMbook}]
For any large Banach space of tame perturbations $\P$, there is a residual subset of $\P$ such that for each perturbation $\q$, all of the stationary points of $\mathcal{X}qsigma$ are non-degenerate.
\end{theorem}
Given such a perturbation $\q$, the gauge-equivalence classes of solutions to the perturbed Seiberg-Witten equations on $(Y,\mathfrak{s})$ are isolated (this comes from being transverse to the gauge orbits). Since these equivalence classes form a compact subset of $\operatorname{sp}incC_{k}(Y)/\G_{k+1}$ because of tameness, there are at most finitely many irreducible solutions in $\B^\operatorname{sp}incigma_{k}(Y)$.
For the rest of the subsection, we will assume that $\q$ has been chosen so as to be tame and that the stationary points are non-degenerate. In analogy with the finite-dimensional case we have
\begin{definition}
Let $[x] = [(a,s,\phi)]$ be a gauge-equivalence class of zeros of $\mathcal{X}qsigma$ in $\B^\operatorname{sp}incigma_{k}(Y)$. If $s \neq 0$, then we say that $[x]$ is {\em irreducible}. If $[x]$ is {\em reducible}, then $[x]$ is {\em boundary-stable} (respectively boundary-unstable) if $\mathscr{L}ambda_\q(x) >0$ (respectively $<0$).
\end{definition}
Define $\mathcal{C}rit^o$, $\mathcal{C}rit^s$, and $\mathcal{C}rit^u$ to be the sets of gauge-equivalence classes of irreducible, boundary-stable, and boundary-unstable stationary points of $\mathcal{X}qsigma$ on $\B^\operatorname{sp}incigma_{k}(Y)$. Set $\mathcal{C}rit = \mathcal{C}rit^o \cup \mathcal{C}rit^s \cup \mathcal{C}rit^u.$
Next, we discuss regularity for trajectories. Let $Z = {\mathbb{R}}\mathfrak{t}imes Y$. For $x, y \in \mathcal{C}^\operatorname{sp}incigma(Y)$, pick a smooth path $\gamma_0: {\mathbb{R}}\mathfrak{t}o \mathcal{C}^\operatorname{sp}incigma(Y)$ such that $\gamma_0(t)=x$ for $t \ll 0$ and $\gamma_0(t)=y$ for $t \gg 0$. Then, define
\begin{equation}
\label{eq:Cktau}
\operatorname{sp}incC^{\mathfrak{t}au}_k(x, y)= \{\gamma \in \mathcal{C}_{k, loc}^{\mathfrak{t}au}(Z) \mid \gamma - \gamma_0 \in L^2_k(Z; iT^*Z) \oplus L^2_k(\rr; \rr) \oplus L^2_k(Z; \mathbb{S}^+)\}.
\end{equation}
Note that here we are imposing $L^2_k$ and not $L^2_{k,loc}$ conditions on $\gamma - \gamma_0$. Thus, $\operatorname{sp}incC^{\mathfrak{t}au}_k(x, y)$ is equipped with a natural metric, $d(\gamma,\gamma') = \| \gamma - \gamma' \|_{L^2_k}$. We have an analogously defined space $\mathfrak{t}C^\mathfrak{t}au_k(x,y)$ where we remove the condition $s(t) \geq 0$, as for the definition of $\mathfrak{t}C^\mathfrak{t}au_k(Z)$.
We let $\B^{\mathfrak{t}au}_k(x, y)$ (respectively $\mathfrak{t}B^\mathfrak{t}au_k(x,y)$) be the quotient of $\operatorname{sp}incC^{\mathfrak{t}au}_k(x,y)$ (respectively $\mathfrak{t}C^\mathfrak{t}au_k(x,y)$) by the action of gauge transformations $u:Z\mathfrak{t}o S^1$ such that $1-u \in L^2_{k+1}(Z, \cc)$. The space $\B^{\mathfrak{t}au}_k(x, y)$ depends only on the classes $[x]$ and $[y]$, up to canonical diffeomorphism. Consequently, we can use the notation $\B^{\mathfrak{t}au}_k([x], [y])$. Similarly for $\mathfrak{t}B^\mathfrak{t}au_k([x],[y])$.
We are now interested in studying gauge-equivalence classes of trajectories between two stationary points. Define $\mathfrak{t}au_t : {\mathbb{R}}\mathfrak{t}imes Y \mathfrak{t}o {\mathbb{R}}\mathfrak{t}imes Y$ to be translation by $t$ and let $\gamma_x$ (respectively $\gamma_y$) denote the elements of $\mathcal{C}^\mathfrak{t}au_{k,loc}({\mathbb{R}}\mathfrak{t}imes Y)$ which are $x$ (respectively $y$) in each slice.
\begin{definition}\label{def:sw-moduli-space}
For $[x]$ and $[y]$ in $\mathcal{C}rit$, we define the {\em moduli space of trajectories from $[x]$ to $[y]$} as
\[
M([x],[y]) = \{[\gamma] \in \B^\mathfrak{t}au_{k,loc}({\mathbb{R}}\mathfrak{t}imes Y) \mid \F^\mathfrak{t}au_\q(\gamma) = 0, \lim_{t \mathfrak{t}o -\infty} [\mathfrak{t}au^*_t \gamma] = [\gamma_x], \lim_{t \mathfrak{t}o +\infty} [\mathfrak{t}au^*_t \gamma] = [\gamma_y] \},
\]
where the limits are taken in $\B^\mathfrak{t}au_{k,loc}({\mathbb{R}}\mathfrak{t}imes Y)$.
If $[x]$ is boundary-stable and $[y]$ is boundary-unstable, then we say we are in the {\em boundary-obstructed} case. Finally, we will decorate a moduli space by $\breve{M}$ if we want the result after quotienting by the usual $\mathbb{R}$-action.
\end{definition}
It is proved in \cite[Theorem 13.3.5]{KMbook} that every class in $M([x], [y])$ has a gauge representative in $\B^{\mathfrak{t}au}_k([x], [y])$. The proof uses exponential decay estimates for Seiberg-Witten trajectories.
\begin{remark}
As mentioned above, each (blown-up, perturbed) Seiberg-Witten trajectory on $\operatorname{sp}incC^\operatorname{sp}incigma_{k}(Y)$ is gauge-equivalent to a smooth trajectory on $\operatorname{sp}incC^\operatorname{sp}incigma(Y)$, for $k \geq 2$. In particular, the moduli spaces are homeomorphic for any $k \geq 2$.
\end{remark}
In order to show that the moduli spaces are smooth manifolds, we must show that they are (locally) the preimage of a regular value of a Fredholm map. Just as in the case of compact cylinders, one can define a bundle $\V_{k-1}^\mathfrak{t}au(Z)$ over $\mathfrak{t}C_{k}^\mathfrak{t}au(x,y)$, such that $\F_\q^\mathfrak{t}au$ provides a section. The zero set of this section, restricted to $\mathcal{C}_k^\mathfrak{t}au(x,y)$, thus describes the Seiberg-Witten trajectories asymptotic to $x$ and $y$. The advantage of working with $\mathfrak{t}C_{k}^\mathfrak{t}au(x,y)$ is that we can differentiate $\F_\q^\mathfrak{t}au$ in this setting.
Recall from \eqref{eq:muy} that $\mu_Y(f)$ denotes the average value of a function $f$ over the three-manifold $Y$. For $\gamma = (a,s,\phi) \in \mathfrak{t}C^\mathfrak{t}au_{k}(x,y)$ and $1 \leq j \leq k$, define the linear operator\footnote{In Equation~\eqref{eq:Qgamma}, $\D^{\mathfrak{t}au}$ is a covariant derivative on the space of paths, which was simply denoted $\D$ in \cite{KMbook}. See Section~\ref{sec:linearized} below for more details.}
\begin{equation}
\label{eq:Qgamma}
Q_{\gamma} = \D^{\mathfrak{t}au}_\gamma \F^\mathfrak{t}au_\mathfrak{q}\oplus \mathbf{d}^{\mathfrak{t}au,\dagger}_\gamma : \T^\mathfrak{t}au_{j,\gamma}(Z) \mathfrak{t}o \V^\mathfrak{t}au_{j-1,\gamma}(Z) \oplus L^2_{j-1}(Z;i\mathbb{R}),
\end{equation}
where
\begin{equation}
\label{eq:dtaudagger}
\mathbf{d}^{\mathfrak{t}au,\dagger}_\gamma(b,r,\psi) = -d^*b + is^2 \mathfrak{t}ext{Re}\langle i\phi,\psi \rangle + i |\phi|^2 \mathfrak{t}ext{Re} \ \mu_Y \langle i\phi,\psi \rangle
\end{equation}
is a variant of the formal adjoint to the infinitesimal (four-dimensional) gauge action. The condition $\mathbf{d}^{\mathfrak{t}au, \dagger}_\gamma = 0$ describes a four-dimensional local Coulomb slice $\K^\mathfrak{t}au_{j, \gamma} \operatorname{sp}incubset \T^\mathfrak{t}au_{j, \gamma}$. When $j = k$, the slice $\K^\mathfrak{t}au_{k,\gamma}$ can be viewed as the tangent space to $\mathfrak{t}B^{\mathfrak{t}au}_{k}([x], [y])$ at $[\gamma]$. In general, the bundle $\K^\mathfrak{t}au_j$ defines a gauge-equivariant bundle over $\mathfrak{t}C^\mathfrak{t}au_k(x,y)$ which descends to a bundle over $\mathfrak{t}B^\mathfrak{t}au_k([x],[y])$. If $Q_\gamma$ is surjective (or, equivalently, the restriction $\D^\mathfrak{t}au_\gamma \F^\mathfrak{t}au_\q|_{\K^\mathfrak{t}au_{j, \gamma}}$ surjects onto $\V^\mathfrak{t}au_{j-1,\gamma}$) for all $[\gamma]$ in $M([x],[y])$, then $M([x],[y])$ is a smooth manifold.
\begin{definition}
\label{def:regM}
The moduli space $M([x],[y])$ is {\em regular} if $Q_\gamma$ is surjective for all $[\gamma] \in M([x],[y])$, unless we are in the boundary-obstructed case. If $M([x],[y])$ is boundary-obstructed, regularity means that the cokernel of $Q_\gamma$ is dimension one for each $[\gamma] \in M([x],[y])$.
\end{definition}
Theorem 15.1.1 in \cite{KMbook} guarantees the existence of a tame perturbation $\q$ such that the moduli spaces $M([x],[y])$ are regular for all $[x]$ and $[y]$. By \cite[Proposition 14.5.7]{KMbook}, regularity implies that the moduli spaces $M([x],[y])$ are all smooth manifolds, even in the boundary-obstructed case. Moreover, we have $\operatorname{ind} Q_\gamma = \dim M([x],[y])$, except in the boundary-obstructed case, where $\operatorname{ind} Q_\gamma = \dim M([x],[y]) -1$.
If $[x]$ and $[y]$ are reducible and $M([x],[y])$ is regular, then define $M^{\red}([x],[y])$ to be the subset of $M([x],[y])$ consisting of reducible trajectories. Note that $M^{\red}([x],[y])$ is either empty (this is the case when one of $[x]$ or $[y]$ is irreducible) or all of $M([x],[y])$, except when $[x]$ is boundary-unstable and $[y]$ is boundary-stable, in which case it is $\partial M([x],[y])$.
\begin{definition}
\label{def:admi}
An {\em admissible} perturbation is a tame perturbation such that all the stationary points are non-degenerate and all the moduli spaces are regular.
\end{definition}
\begin{theorem}[Theorem 15.1.1 in \cite{KMbook}]\label{thm:admissibleperturbationsexist} For any large Banach space of tame perturbations $\mathcal{P}$, there exists an admissible perturbation $\mathfrak{q}\in \mathcal{P}$.
\end{theorem}
As discussed after Definition~\ref{def:large-verytame}, there exist large Banach spaces of very tame perturbations. Therefore, there exist perturbations which are both very tame and admissible.
\section{Orientations}
\label{sec:or2}
From now on we fix an admissible perturbation $\q$.
\begin{theorem}[Corollary 20.4.1 of \cite{KMbook}]
The moduli spaces $M([x],[y])$ are orientable manifolds.
\end{theorem}
Let us sketch the construction of orientations on $M([x], [y])$, following \cite[Section 20]{KMbook}. This is similar to the discussion of specialized coherent orientations in Morse theory (see Section~\ref{sec:or1}).
To orient $M([x], [y])$, we need to orient the determinant lines $\det(Q_{\gamma})$, where $Q_{\gamma}$ is as in \eqref{eq:Qgamma}. For arbitrary $x, y \in \mathcal{C}_k^{\operatorname{sp}incigma}(Y)$ (not necessarily stationary points), we consider instead a compact interval $I=[t_1, t_2]$, and a configuration $\gamma \in \mathcal{C}_k^{\mathfrak{t}au}(I \mathfrak{t}imes Y)$ whose restrictions to $\{t_1\} \mathfrak{t}imes Y$ and $\{t_2\} \mathfrak{t}imes Y$ are gauge equivalent to $x$, resp. $y$. To any such $\gamma$ we can associate an operator
$Q_{\gamma}$ by the same formula as for \eqref{eq:Qgamma}. To make it Fredholm, we need to add suitable boundary conditions. At the boundary component $\{t_1\} \mathfrak{t}imes Y$, consider the subspaces
$$ H_1^{\pm} = \K^{\pm}_{1/2, x} \oplus L^2_{1/2}(Y; i\R) \operatorname{sp}incubset \T^{\operatorname{sp}incigma}_{1/2, x}(Y) \oplus L^2_{1/2}(Y; i\R).$$
Here, we have decomposed the local Coulomb slice $\K^{\operatorname{sp}incigma}_{1/2, x} \operatorname{sp}incubset \T^{\operatorname{sp}incigma}_{1/2, x}(Y) $ as
$$\K^{\operatorname{sp}incigma}_{1/2, x} = \K^{-}_{1/2, x} \oplus \K^{+}_{1/2, x},$$
using the spectral subspaces (the direct sum of nonpositive, resp. positive eigenspaces) of the Hessian
$$ \operatorname{Hess}^{\operatorname{sp}incigma}_{\q, x} := \Pi_{\K^{\operatorname{sp}incigma}_{1/2, x}} \circ \D_{\gamma}\mathcal{X}q^{\operatorname{sp}incigma}|_{\K^{\operatorname{sp}incigma}_{1/2, x}}.$$
(See Section~\ref{sec:HessBlowUp} below for more details about the Hessian.)
We consider the analogous subspaces $\K_2^{\pm}$ and $H_2^{\pm}$ at the other boundary component $\{t_2\} \mathfrak{t}imes Y$.
Then, we define a Fredholm operator
\begin{equation}
\label{eq:Pgamma}
P_{\gamma} = \bigl( Q_{\gamma}, -\Pi_1^+, \Pi_2^- \bigr) : \T^{\mathfrak{t}au}_{1, \gamma}(I \mathfrak{t}imes Y) \mathfrak{t}o \V^{\mathfrak{t}au}_{0, \gamma} \oplus L^2(I \mathfrak{t}imes Y; i\R) \oplus H_1^+ \oplus H_2^-,
\end{equation}
where $\Pi_i^{\pm}$ denotes the composition of restriction to $\{t_i\} \mathfrak{t}imes Y$ with the projection to $H_i^{\pm}$.
Let $\mathscr{L}ambda(\gamma)$ be the set of orientations of $\det(P_{\gamma})$. It is proved in \cite[Proposition 20.3.4]{KMbook} that there are canonical identifications between the different $\mathscr{L}ambda(\gamma)$ when we fix the (gauge equivalence classes of the) endpoints $x$ and $y$ of $\gamma$. Thus, we can write $\mathscr{L}ambda([x], [y])$ for $\mathscr{L}ambda(\gamma)$. Further, there are natural composition maps
$$ \mathscr{L}ambda([x], [y]) \mathfrak{t}imes \mathscr{L}ambda([y], [z]) \mathfrak{t}o \mathscr{L}ambda([x], [z]).$$
Departing slightly from the terminology and conventions in \cite{KMbook}, we define an {\em orientation data set} $o$ for the admissible perturbation $\q$ to consist of elements $o_{[x], [y]} \in \mathscr{L}ambda([x], [y])$, one for each pair $([x], [y])$, such that we have the relations
$$ o_{[x], [y]} \cdot o_{[y], [z]} = o_{[x], [z]}.$$
An orientation data set can be constructed as follows. In \cite[p. 385-390]{KMbook} it is shown that when $[x]$ and $[y]$ are reducible, the set $\mathscr{L}ambda([x], [y])$ can be canonically identified with $\zz/2 = \{\pm 1\}$, in a way compatible with concatenation; we then choose $o_{[x], [y]}$ to be the element $+1$ in this case. Then, we fix a reducible $[x_0]$ and pick arbitrary elements $o_{[x_0], [x]}$ for all irreducibles $[x]$. This uniquely determines the data set $o$, using the concatenation property.
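For concreteness, let us unwind the last step; this is only a restatement of the concatenation property, not an additional result from \cite{KMbook}. Each $\mathscr{L}ambda([x], [y])$ is a two-element set (the two orientations of a determinant line), and composition with a fixed element is a bijection. Hence, once the elements $o_{[x_0], [x]}$ have been chosen, for any pair $([x], [y])$ the element $o_{[x], [y]}$ is the unique one satisfying
$$ o_{[x_0], [x]} \cdot o_{[x], [y]} = o_{[x_0], [y]}, $$
and the associativity of the composition maps then yields the required relations $o_{[x], [y]} \cdot o_{[y], [z]} = o_{[x], [z]}$ for all triples.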
Next, let $[x]$ and $[y]$ be stationary points, and consider a trajectory $[\gamma] \in M([x], [y])$. Let $\gamma_0$ be its restriction to a large compact interval $I=[t_1, t_2] \operatorname{sp}incubset \R$. It is proved in \cite[Section 20.4]{KMbook} that an orientation for $\det(P_{[\gamma_0]})$ determines one for $\det(Q_{\gamma})$. Therefore, an orientation data set for $\q$ produces orientations for all determinant lines $\det(Q_{\gamma})$, and hence for the moduli spaces $M([x], [y])$. From here we also get orientations on the quotients $\breve M([x], [y])$.
We can orient the moduli spaces $M^{\red}([x],[y])$ and $\breve{M}^{\red}([x],[y])$ in a similar manner.
\begin{remark}
In \cite[Section 20]{KMbook}, the discussion was more general, allowing the perturbation $\q$ to vary for $t \in [t_1, t_2]$. For the purposes of this book, it suffices to work with a single perturbation $\q$; we adjusted our discussion accordingly, for simplicity.
Furthermore, the notion of orientation data set did not appear in \cite{KMbook}. Whereas we trivialize the sets $\mathscr{L}ambda([x], [y])$ and then let the monopole Floer complex be generated by stationary points $[x]$ (cf. Section~\ref{sec:mfh} below), Kronheimer and Mrowka define the complex more canonically, as generated by orientations of $\mathscr{L}ambda([x_0], [x])$ (where $[x_0]$ is a fixed reducible basepoint), with two opposite orientations being set to be the negative of each other. The two definitions are readily seen to be equivalent.
\end{remark}
\section{Monopole Floer homology} \label{sec:mfh}
We will now define the monopole Floer chain complex $\widecheck{\mathit{CM}}(Y,\mathfrak{s},\q)$ analogous to the construction of Morse homology for manifolds with boundary in Section~\ref{subsec:morseboundary}.
The set $\mathcal{C}rit^o$ of gauge-equivalence classes of irreducible stationary points of $\mathcal{X}qsigma$ is finite. There are also countably many reducible stationary points, corresponding to the eigenvectors of $D_{\q, a}$. Indeed, the operator $D_{\q, a}$ is ASAFOE in the sense of \cite[Definition 12.2.1]{KMbook}, so its spectrum is discrete by \cite[Lemma 12.2.4]{KMbook}.
In \cite[Chapter V]{KMbook}, Kronheimer and Mrowka give an analysis of the compactifications of the moduli spaces via broken flow lines. Rather than state the general result, we simply point out that given an admissible perturbation, if $\breve{M}([x],[y])$ or $\breve{M}^{\red}([x],[y])$ is 0-dimensional, then this moduli space is compact.
Let the groups $C^\mathfrak{t}heta$ be freely generated over $\mathbb{Z}$ by $\mathcal{C}rit^\mathfrak{t}heta$ for $\mathfrak{t}heta \in \{o,s,u\}$. The monopole Floer chain groups are given by
\[
\widecheck{\mathit{CM}}(Y,\mathfrak{s},\q) = C^o \oplus C^s.
\]
Here, since $\mathfrak{s}$ is torsion, the relative $\mathbb{Z}$-grading on $\widecheck{\mathit{CM}}(Y,\mathfrak{s},\q)$ is given by the expected dimension of $M([x],[y])$, that is, by the index of $Q_\gamma$; note that for generators of $\widecheck{\mathit{CM}}$ the moduli space $M([x],[y])$ cannot be boundary-obstructed. Since $Y$ is a rational homology sphere, this number is well-defined (there is only one homotopy class of paths from $[x]$ to $[y]$). Recall that in the boundary-obstructed case the grading would be shifted by 1.
For $\mathfrak{t}heta, \varpi \in \{o, s, u\}$, define $\partial^\mathfrak{t}heta_\varpi$ and $\bar{\partial}^\mathfrak{t}heta_\varpi$ by
\begin{align}
\label{eqn:boundaryirred} & \partial^\mathfrak{t}heta_\varpi([x]) = \operatorname{sp}incum_{[y] \in \mathcal{C}rit^\varpi} \# \breve{M}([x],[y]) [y], \\
\label{eqn:boundaryred} & \bar{\partial}^\mathfrak{t}heta_\varpi([x]) = \operatorname{sp}incum_{[y] \in \mathcal{C}rit^\varpi} \# \breve{M}^{red}([x],[y]) [y],
\end{align}
for $[x] \in \mathcal{C}rit^\mathfrak{t}heta$, where we only sum over $[y]$ such that the relevant moduli spaces are 0-dimensional. Here, $\#$ means the signed count of points in this oriented, compact $0$-dimensional manifold. Also, we only allow $\mathfrak{t}heta, \varpi$ such that these counts make sense: for $\partial$, we want $\mathfrak{t}heta \in \{o,u\}$ and $\varpi \in \{ o,s\}$ while for $\bar{\partial}$, we ask that $\mathfrak{t}heta, \varpi \neq o$. The boundary operator on $\widecheck{\mathit{CM}}(Y,\mathfrak{s},\q)$ is given by
\begin{equation}\label{eqn:cmboundary}
\check{\partial} = \begin{bmatrix} \partial^o_o & - \partial^u_o \bar{\partial}^s_u \\ \partial^o_s & \bar{\partial}^s_s - \partial^u_s \bar{\partial}^s_u \end{bmatrix}.
\end{equation}
The admissibility of $\q$ guarantees that $\check{\partial}$ squares to zero, so we can take homology, obtaining $\widecheck{\mathit{HM}}(Y,\mathfrak{s},\q)$. As in Remark~\ref{rmk:noMred}, the moduli spaces $M^{\red}([x],[y])$ are either empty or equal to $M([x],[y])$, except in the case that $[x]$ is boundary-unstable and $[y]$ is boundary-stable; in this exceptional case, the counts of $\breve{M}^{\red}([x],[y])$ do not arise in $\check{\partial}$.
Similar to the Morse homology for circle actions, monopole Floer homology can be given the structure of a $\mathbb{Z}[U]$-module. This is defined in \cite{KMbook} in terms of evaluations of suitable \v{C}ech cochains on the $2$-dimensional moduli spaces of parameterized trajectories. For our purposes, it is more convenient to use an alternate, equivalent definition of the $U$ map, which is taken from \cite[Section 4.11]{KMOS}. The latter definition involves counting points in zero-dimensional spaces, and this will make it easier to prove a stability result for trajectories in Section~\ref{subsec:U-maps}.
Although the $U$ map on $\widecheck{\mathit{HM}}$ comes from a more general cobordism construction, we will restrict our attention to the specific case we need. Let $p \in {\mathbb{R}}\mathfrak{t}imes Y$ be a basepoint, and $B_p$ a standard ball neighborhood of $p$ in ${\mathbb{R}}\mathfrak{t}imes Y$. Let $\B^{\operatorname{sp}incigma}_{k}(B_p)$ be the blown-up configuration space of $L^2_k$ connections and spinors on $B_p$, modulo the gauge group $$\G_{k+1}(B_p) = \{u: B_p \mathfrak{t}o S^1 \mid u \in L^2_{k+1}\}.$$ Note that $\B^{\operatorname{sp}incigma}_k(B_p)$ is a Hilbert manifold with boundary, and is a free quotient by $\G_{k+1}(B_p)$.
For any $[x] \in \mathcal{C}rit^{\mathfrak{t}heta}, [y] \in \mathcal{C}rit^{\varpi}$, because of unique continuation, there is a well-defined restriction map
$$ r_p : M([x],[y]) \mathfrak{t}o \B^\operatorname{sp}incigma_k(B_p),$$
which is an embedding.
Pick a smooth section $\operatorname{sp}incect$ of $E_p^{\operatorname{sp}incigma}$ such that $\operatorname{sp}incect$ is transverse to the zero section, and the zero set $\Zs$ of $\operatorname{sp}incect$ intersects all the moduli spaces $M([x], [y])$ and $M^{\red}([x], [y])$ transversely. By analogy with the finite-dimensional case from Section~\ref{subsec:UMorse}, for $(\mathfrak{t}heta, \varpi) \in \{(o,o), (o,s), (u,o),(u,s)\}$, we define $m^\mathfrak{t}heta_\varpi: C^\mathfrak{t}heta \mathfrak{t}o C^\varpi$ by
\begin{equation}\label{eqn:mU}
m^\mathfrak{t}heta_\varpi([x]) = \operatorname{sp}incum_{[y] \in \mathcal{C}rit^\varpi} \# (M([x],[y]) \cap \Zs) \cdot [y], \mathfrak{t}ext{ for } [x] \in \mathcal{C}rit^\mathfrak{t}heta.
\end{equation}
Similarly, for the reducibles, we set
\begin{equation}\label{eqn:mUred}
\bar{m}^\mathfrak{t}heta_\varpi([x]) = \operatorname{sp}incum_{[y] \in \mathcal{C}rit^\varpi} \# (M^{\red}([x],[y]) \cap \partial\Zs) \cdot [y], \mathfrak{t}ext{ for } [x] \in \mathcal{C}rit^\mathfrak{t}heta.
\end{equation}
Note that we are using parameterized trajectories here, and we only include terms in the sums where the relevant moduli space is two-dimensional.
Finally, we can define $\check{m}: \widecheck{\mathit{CM}} \mathfrak{t}o \widecheck{\mathit{CM}}$ by
\begin{equation}\label{eqn:floercap}
\check{m} = \begin{bmatrix} m^o_o & - m^u_o \bar{\partial}^s_u - \partial^u_o \bar{m}^s_u \\ m^o_s & \bar{m}^s_s - m^u_s \bar{\partial}^s_u - \partial^u_s \bar{m}^s_u \end{bmatrix}.
\end{equation}
The map $\check{m}$ is a chain map. Therefore, we can define the $U$-action on $\widecheck{\mathit{HM}}$ to be the map induced by $\check{m}$ on homology. By construction, this map lowers the relative grading by 2.
\begin{theorem}[Kronheimer-Mrowka, Theorem 23.1.5 of \cite{KMbook}]
The relatively-graded $\mathbb{Z}[U]$-module $\widecheck{\mathit{HM}}(Y,\mathfrak{s},\q)$ is an invariant of the pair $(Y,\mathfrak{s})$.
\end{theorem}
Therefore, we just use the notation $\widecheck{\mathit{HM}}(Y,\mathfrak{s})$. From here we can easily define Bloom's variant of monopole Floer homology, $\widetilde{\mathit{HM}}$.
\begin{definition}[\cite{Bloom}]
The {\em tilde-flavor of monopole Floer homology}, $\widetilde{\mathit{HM}}(Y,\mathfrak{s})$, is the homology of the mapping cone of the map $\check{m}$ on $\widecheck{\mathit{CM}}(Y,\mathfrak{s},\q)$.
\end{definition}
\section{Gradings}\label{sec:modifications}
Recall that for two stationary points $x$ and $y$, the relative homological grading between $[x]$ and $[y]$ is defined by
\[
\operatorname{gr}([x], [y]) = \operatorname{ind} Q_{\gamma},
\]
where $\gamma \in \mathcal{C}^{\mathfrak{t}au}_k(x, y)$ is any path, and $Q_{\gamma}$ is the operator from \eqref{eq:Qgamma}. (See \cite[Definition 14.4.4]{KMbook}.)
Suppose that $[x]$ and $[y]$ are reducibles with the same blow-down projection $(a, 0)$, and correspond to eigenvalues $\mu$ and $\nu$ of $D_{\q, a}$ with $\mu \geq \nu$. Then, an alternate formula for the relative grading is given in \cite[Corollary 14.6.2]{KMbook}:
\begin{equation}
\label{eq:GradingRed}
\operatorname{gr}([x], [y]) = \begin{cases}
2i(\mu, \nu) & \mathfrak{t}ext{if $\mu$ and $\nu$ have the same sign,} \\
2i(\mu, \nu)-1 &\mathfrak{t}ext{otherwise.}
\end{cases}
\end{equation}
where $i(\mu, \nu)$ denotes the number of eigenvalues in the interval $(\nu, \mu].$
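As a quick illustration (a direct consequence of \eqref{eq:GradingRed}, not a separate statement from \cite{KMbook}), suppose $[x]$ and $[y]$ lie over the same reducible $(a,0)$ and correspond to eigenvalues $\mu > \nu$ of $D_{\q,a}$. If $\mu$ and $\nu$ are consecutive (simple) eigenvalues of the same sign, then $i(\mu, \nu) = 1$ and $\operatorname{gr}([x], [y]) = 2$. If instead $\mu$ is the smallest positive and $\nu$ the largest negative eigenvalue, then again $i(\mu, \nu) = 1$ (recall that $0$ is not an eigenvalue at a non-degenerate reducible), but now $\operatorname{gr}([x], [y]) = 2 \cdot 1 - 1 = 1$. Of course, $\operatorname{gr}([x], [x]) = 0$, since $i(\mu, \mu) = 0$.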
There is also an absolute rational grading $\operatorname{gr}^{\Q}$ defined in \cite[Section 28.3]{KMbook}. Given a cobordism $W$ from the round sphere $S^3$ to $Y$, we have
$$ \operatorname{gr}^{\Q}([x]) := - \operatorname{gr}_z([x_0], W, [x]) + \frac{c_1(\mathfrak{t})^2 - \operatorname{sp}incigma(W)}{4} - \iota(W),$$
where: $[x_0]$ is the reducible on $S^3$ with the lowest positive eigenvalue; $\mathfrak{t}$ is a Spin$^c$ structure on $W$ that restricts to $\operatorname{sp}inc$ on $Y$; $\operatorname{gr}_z([x_0], W, [x])$ is the relative grading between $x_0$ and $x$ for the monopole map associated to $(W, \mathfrak{t})$, as in \cite[Section 25]{KMbook}; the subscript $z$ refers to a connected component of the configuration space of $W$, and in our case $z$ is uniquely determined by the Spin$^c$ structure $\mathfrak{t}$; and $\iota(W) = (\chi(W) + \operatorname{sp}incigma(W))/2=b^+(W) - b_1(W)$.
For future reference, let us rephrase this definition in terms of a four-manifold $X$ with boundary $Y$. We can then take $W$ to be the complement of a four-ball in $X$. Using additivity of $\operatorname{gr}$ and a standard calculation on $B^4$ we can replace $W$ with $X$ and write
\begin{equation}
\label{eq:grQx}
\operatorname{gr}^{\Q}([x]) = -\operatorname{gr}_z(X, [x]) + \frac{c_1(\mathfrak{t})^2 - \operatorname{sp}incigma(X)}{4} - b^+(X) + b_1(X) -1,
\end{equation}
where $\operatorname{gr}_z(X,[x])$ is the expected dimension of the moduli space $M_z(X^*; [x])$ defined in \cite[Section 24]{KMbook}. Here, $X^*$ refers to $X$ after attaching to it a cylindrical end of the form $[0, \infty) \mathfrak{t}imes Y$.
\chapter{Reduction to the Coulomb gauge}\label{sec:coulombgauge}
In this chapter, we recast the constructions from Chapter~\ref{sec:HM} entirely in the global Coulomb slice
\[
W = \ker d^* \oplus \Gamma(\mathbb{S}) \operatorname{sp}incubset \operatorname{sp}incC(Y).
\]
This is needed in order to make contact with the construction of the Seiberg-Witten Floer spectrum from Chapter~\ref{sec:spectrum}, for which we used finite dimensional approximation on $W$. Throughout this chapter, $k$ will denote a fixed integer at least $2$.
\section{Bundle decompositions and projections}
\label{sec:decompositions}
Recall that in Section~\ref{sec:coulombs} we introduced the global Coulomb slice $W$, the global Coulomb projection $\Pi^{\operatorname{gC}}$, the infinitesimal global Coulomb projection $\Pi^{\operatorname{gC}}_*$, the local Coulomb slice $\K$, the (infinitesimal) local Coulomb projection $\Pi^{\operatorname{lC}}$, the enlarged local Coulomb slice $\mathcal{K}^{\operatorname{e}}$, the enlarged local Coulomb projection $\Pi^{\operatorname{elC}}$, and the metric $\mathfrak{t}ilde g$ on $W$. Further, in Section~\ref{sec:SWblowup} we introduced the bundle decomposition of $\T_k$ into the tangents to the gauge orbits $\J_k$ and the local Coulomb slice $\K_k$; we also mentioned a similar decomposition in the blow-up, $\T_k^{\operatorname{sp}incigma} = \J^{\operatorname{sp}incigma}_{k} \oplus \K^{\operatorname{sp}incigma}_{k}.$
In this section we explore these constructions further. In particular, we extend the gauge projections to the blow-up, and describe a few bundle decompositions that are related to global Coulomb gauge.
For $j \leq k$, recall that $\T^{\operatorname{gC}}_{j}$ is the trivial vector bundle with fiber $W_{j}$ over $W_{k}$. Unlike what we did for $\T_{j}$ in Section~\ref{sec:tame}, in the case of $\T^{\operatorname{gC}}_j$ there is no need to define the Sobolev norms using the varying covariant derivatives $\nabla_{A_0+a}$. The Sobolev norm on $W_j$ defined by $\nabla = \nabla_{A_0}$ is invariant under the residual gauge action by $S^1$, and this norm is what we shall use on each tangent space. Thus, $\T^{\operatorname{gC}}_j$ is exactly the trivial normed bundle $W_k \mathfrak{t}imes W_j$. Again, we keep the notation $\T^{\operatorname{gC}}_j$ (rather than just $W_k \mathfrak{t}imes W_j$) to emphasize the bundle structure. Note that these two Sobolev norms are equivalent in the following strong sense. For $x = (a,\phi) \in W_k$ with $\| a \|_{L^2_k} < R$, there exists a constant $C(R)$ such that
$$
\frac{1}{C(R)} \| v \|_{L^2_j} \leq \| v \|_{L^2_{j,a}} \leq C(R) \| v \|_{L^2_j}
$$
for all $v \in \T_{j,x}$ and $j \leq k$. Therefore, these Sobolev norms will also be equivalent when working over paths of three-dimensional configurations, and so even in the four-dimensional setting, we are content to use $\nabla$.
For $x = (a, \phi) \in W_k$, we let $\J^{\operatorname{gC}}_x \operatorname{sp}incubset \T^{\operatorname{gC}}_k$ be the (real) span of $(0, i\phi)$. This is the tangent space to the $S^1$-orbit at $x$. Since $\J^{\operatorname{gC}}_x$ is canonically isomorphic to $\mathbb{R}$, there is no need for a subscript $j$ for Sobolev regularity.
The intersection of a gauge orbit with global Coulomb gauge is depicted schematically in Figure~\ref{fig:orbits}.
\begin{figure}\label{fig:orbits}
\end{figure}
\begin{lemma}\label{lem:gaugeprojections}
Fix $j \geq 0$. For each $x =(a,\phi) \in W_k$ with $\phi \neq 0$, we have $\J_{j,x} \cap \T^{\operatorname{gC}}_{j,x} = (\Pi^{\operatorname{gC}}_*)_x(\J_{j,x}) = \J_x^{\operatorname{gC}}$.
\end{lemma}
\begin{proof}
Let $x = (a,\phi) \in W_{k}$. We recall from \cite[Page 140]{KMbook} that
\[
\J_{j,x} = \{(-d\zeta, \zeta \phi) \mid \zeta \in L^2_{j+1}(Y;i\mathbb{R})\}.
\]
First, we compute $\J_{j,x} \cap \T^{\operatorname{gC}}_{j,x}$. Suppose $(b,\psi) \in \J_{j,x} \cap \T^{\operatorname{gC}}_{j,x}$. If we write $(b,\psi) = (-d\zeta, \zeta\phi)$, we have $d^*d\zeta = 0$. Since $Y$ is a rational homology sphere, this implies that $\zeta$ is constant. Therefore,
\[
\J_{j,x} \cap \T^{\operatorname{gC}}_{j,x} = \{(0,it\phi) \mid t \in \mathbb{R} \} = \J_x^{\operatorname{gC}}.
\]
Now, we study $(\Pi^{\operatorname{gC}}_*)_x(\J_{j,x})$.
Let $(-d\zeta,\zeta\phi) \in \J_{j,x}$. Recall that
\[
(\Pi^{\operatorname{gC}}_*)_x(-d\zeta,\zeta\phi) = (-d\zeta - d\xi, \zeta \phi + \xi \phi),
\]
where $\xi:Y \mathfrak{t}o i\mathbb{R}$ satisfies $d^*(-d\zeta - d\xi) = 0$ and $\int_Y \xi = 0$. Again, since $Y$ is a rational homology sphere, this implies that $\zeta + \xi$ is a constant $it$ for some $t \in \mathbb{R}$. Therefore, we obtain
\[
(\Pi^{\operatorname{gC}}_*)_x(-d\zeta,\zeta\phi) = (0,it\phi).
\]
This implies that $
(\Pi^{\operatorname{gC}}_*)_x(\J_{j,x}) = \J_x^{\operatorname{gC}}$, as desired.
\end{proof}
Analogous to the splitting of $\mathcal{T}_{j}$ as $\J_j \oplus \K_j$, there exists a decomposition
\begin{equation}
\label{eq:JKcoulomb}
\mathcal{T}^{\operatorname{gC}}_{j} =\J^{\operatorname{gC}} \oplus \K^{\operatorname{agC}}_{j}.
\end{equation}
Here, $\K^{\operatorname{agC}}_{j}$ is defined to be the orthogonal complement of $\J^{\operatorname{gC}}$ with respect to the $\mathfrak{t}ilde{g}$-metric on $W$. For each $x = (a,\phi) \in W_k$ and $j \leq k$, this space can be written explicitly as
\begin{align*}
\K^{\operatorname{agC}}_{j,x} &= \{(b,\psi) \in \T_{j,x}^{\operatorname{gC}} \mid \langle (0, i\phi), (b, \psi) \rangle_{\mathfrak{t}ilde g} = 0 \} \\
&= \{(b,\psi) \in \T_{j, x} \mid d^*b = 0, \langle (0, i\phi), (b, \psi) \rangle_{\mathfrak{t}ilde g}= 0\}.
\end{align*}
We call $\K^{\operatorname{agC}}_{j,x}$ the {\em anticircular global Coulomb slice} at a point $x=(a,\phi) \in W_k$. Observe that, if $d^*b=0$, then the vector $(b, 0)$ is $L^2$-perpendicular to all $(-d\zeta, \zeta \phi) \in \J_{j,x}$, and hence lies in the local Coulomb slice $\K_{j,x} \operatorname{sp}incubset \mathcal{K}^{\operatorname{e}}_{j,x}$. Using the formula \eqref{eq:gtilde2} for the $\mathfrak{t}ilde g$-inner product, we get that
\begin{equation}
\label{eq:gtildezero}
\langle (0, i\phi), (b, 0) \rangle_{\mathfrak{t}ilde g} = \operatorname{Re} \langle (0, i\phi), \Pi^{\operatorname{elC}}(b, 0) \rangle_{L^2} = \operatorname{Re} \langle (0, i\phi), (b, 0) \rangle_{L^2} = 0.
\end{equation}
We deduce that the condition $\langle (0, i\phi), (b, \psi) \rangle_{\mathfrak{t}ilde g}= 0$ is equivalent to
$$ \langle (0, i\phi), (0, \psi) \rangle_{\mathfrak{t}ilde g}= 0.$$
For simplicity, we will write $ \langle \psi_1, \psi_2 \rangle_{\mathfrak{t}ilde g}$ and $\|\psi\|^2_{\mathfrak{t}ilde g}$ for expressions of the form $ \langle (0, \psi_1), (0, \psi_2) \rangle_{\mathfrak{t}ilde g}$ and $\|(0, \psi) \|_{\mathfrak{t}ilde g}^2$. With this in mind, we can write the anticircular global Coulomb slice as
\begin{align*}
\K^{\operatorname{agC}}_{j,x} &= \{(b,\psi) \in \T_{j,x}^{\operatorname{gC}} \mid \langle i\phi, \psi \rangle_{\mathfrak{t}ilde g} = 0 \} \\
&= \{(b,\psi) \in \T_{j, x} \mid d^*b = 0, \langle i\phi, \psi \rangle_{\mathfrak{t}ilde g}= 0\}.
\end{align*}
We define
$$\Pi^{\operatorname{agC}} = \Pi_{\K^{\operatorname{agC}}_j} \circ \Pi^{\operatorname{gC}}_* :\T_j \mathfrak{t}o \K_j^{\operatorname{agC}}$$ to be the composition of infinitesimal global Coulomb projection $\Pi^{\operatorname{gC}}_*: \T_j \mathfrak{t}o \T^{\operatorname{gC}}_j$ with $\mathfrak{t}ilde g$ orthogonal projection onto $\K^{\operatorname{agC}}_j$.
See Figure~\ref{fig:coulomb}.
\begin{figure}\label{fig:coulomb}
\end{figure}
\begin{remark}
The anticircular global Coulomb slice $\K^{\operatorname{agC}}_{j, x}$ contains, but does not equal, the intersection $\K_{j,x} \cap \T_j^{\operatorname{gC}}$. Indeed, the latter consists of vectors $(b,\psi)$ satisfying $d^*b = 0$ and $\operatorname{Re} \langle i \phi, \psi \rangle = 0$ (pointwise). These conditions imply
$$\langle (0, i\phi), (b,\psi) \rangle_{\mathfrak{t}ilde{g}} = \operatorname{Re} \langle (0,i\phi), \Pi^{\operatorname{elC}}_x (b,\psi) \rangle_{L^2} = \operatorname{Re} \langle (0, i\phi), (b,\psi) \rangle_{L^2} = 0,$$ and thus $(b,\psi) \in \K^{\operatorname{agC}}_{j, x}$. Here we are using that $\Pi^{\operatorname{elC}}$ is an $L^2$ projection, which can easily be verified.
\end{remark}
\begin{remark}
The subspaces $\J^{\operatorname{gC}}$ and $\K_j^{\operatorname{agC}}$ do not form Hilbert bundles over the whole of $W_k$, because $\J^{\operatorname{gC}}$ is smaller (and $\K_j^{\operatorname{agC}}$ larger) at reducibles, compared to irreducibles.
\end{remark}
Recall that on $W$ we have a natural metric $\mathfrak{t}ilde g$, defined by measuring the $L^2$ norm of the enlarged local Coulomb projections; cf. Equation \eqref{eq:gtilde}. At this point it is helpful to extend the $\mathfrak{t}ilde g$-inner product on $\T_j^{\operatorname{gC}}$ to the bigger bundle $\T_j$ (restricted to $W_k$). Consider the bundle decomposition over $W_k$,
\begin{equation}
\label{eq:decomposetilde1}
\T_j=\J^{\circ}_j \oplus \T^{\operatorname{gC}}_j,
\end{equation}
where $\J^{\circ}_j$ is the tangent to the orbit of the normalized gauge group $\Go$; that is, $\J^{\circ}_j \operatorname{sp}incubset \J_j$ consists of vectors $(-d\zeta, \zeta \phi)$ with $\int_Y \zeta=0$. Given $x \in W_k$ and vectors $v, w \in \T_{j,x}$, decompose them according to \eqref{eq:decomposetilde1} as
\begin{equation}
\label{eq:sums}
v = v^{\circ} + v^{\operatorname{gC}}, \ \ \ w = w^{\circ} + w^{\operatorname{gC}},
\end{equation}
where $v^{\operatorname{gC}} = (\Pi^{\operatorname{gC}}_*)_x(v)$ and $w^{\operatorname{gC}} = (\Pi^{\operatorname{gC}}_*)_x(w)$. Then, set
\begin{equation}
\label{eq:gtildefull}
\langle v, w\rangle_{\mathfrak{t}ilde g} = \operatorname{Re} \langle v^{\circ}, w^{\circ} \rangle_{L^2} + \langle v^{{\operatorname{gC}}}, w^{{\operatorname{gC}}} \rangle_{\mathfrak{t}ilde g},
\end{equation}
where in the last inner product we use the formula \eqref{eq:gtilde}. With this definition, the direct sum decompositions \eqref{eq:decomposetilde1} and
\begin{equation}
\label{eq:decomposetilde2}
\T_j=\J_j \oplus \K^{\operatorname{agC}}_j
\end{equation}
are orthogonal for $\mathfrak{t}ilde g$. Moreover, we can think of $\Pi^{\operatorname{gC}}_*$ and $\Pi^{\operatorname{agC}}$ as the $\mathfrak{t}ilde g$-orthogonal projections from $\T_j$ to $\T^{\operatorname{gC}}_j$ and $\K^{\operatorname{agC}}_j$, respectively.
Next, we consider the blow-up $\mathcal{C}^{\operatorname{sp}incigma}_k(Y)$ from Section~\ref{sec:SWblowup}. This has tangent bundle $\T^\operatorname{sp}incigma$ with Sobolev completions $\T^{\operatorname{sp}incigma}_j$. The tangents to the gauge orbits at $x=(a, s, \phi)$ form the subspace $ \J^{\operatorname{sp}incigma}_{x, j}$, consisting of vectors of the form $(-d\zeta, 0, \zeta \phi)$. We also have a local Coulomb slice in the blow-up, given by the following formula from \cite[Definition 9.3.6]{KMbook}:
\[
\K^\operatorname{sp}incigma_{j,x} = \{(b,t,\psi) \in \mathcal{T}^\operatorname{sp}incigma_{j,x} \mid -d^*b + i s^2 \operatorname{Re} \langle i \phi, \psi \rangle = 0, \operatorname{Re} \langle i \phi, \psi \rangle_{L^2} = 0 \}.
\]
The local Coulomb projection on the blow-up, $\Pi^{\operatorname{lC},\operatorname{sp}incigma}_{(a,s,\phi)}: \T_{j,x}^\operatorname{sp}incigma\mathfrak{t}o \K_{j,x}^\operatorname{sp}incigma$, is defined to be
\begin{equation}
\label{eq:piLCsigma}
\Pi^{\operatorname{lC},\operatorname{sp}incigma}_{(a,s,\phi)}(b,r,\psi) = (b-d\zeta,r, \psi + \zeta \phi),
\end{equation}
where $\zeta$ is such that $-d^*(b-d\zeta) + is^2 \operatorname{Re} \langle i \phi, \psi + \zeta \phi \rangle = 0$ and $\operatorname{Re} \langle i \phi, \psi+\zeta \phi \rangle_{L^2} = 0.$
On the blow-up $\mathcal{C}^{\operatorname{sp}incigma}(Y)$ there is a natural $L^2$ metric, obtained from the inclusion $\T^{\operatorname{sp}incigma}_j \operatorname{sp}incubset L^2_j(Y; iT^*Y) \oplus \rr\oplus L^2_j(Y; \mathbb{S})$. Note that the direct sum decomposition
$$ \T_j^{\operatorname{sp}incigma} = \J_j^\operatorname{sp}incigma \oplus \K_j^\operatorname{sp}incigma$$
is not orthogonal with respect to this $L^2$ metric. Rather, on the irreducible locus, the decomposition is orthogonal with respect to the pull-back of the $L^2$ metric on $\mathcal{C}(Y)$. On the other hand, this pull-back does not produce a non-degenerate metric on the whole $\mathcal{C}^{\operatorname{sp}incigma}(Y)$.
Now consider the residual gauge action of $S^1$ on $W$. It is convenient to blow up $W$ at its fixed locus, just as we did with the configuration space $\mathcal{C}(Y)$. The blow-up of $W$ is the space
$$ W^{\operatorname{sp}incigma}=\{(a, s, \phi) \mid d^*a =0, s \geq 0, \| \phi\|_{L^2}=1\} \operatorname{sp}incubset \mathcal{C}^{\operatorname{sp}incigma}(Y).$$
There is a natural identification of $W^\operatorname{sp}incigma/S^1$ with $\B^\operatorname{sp}incigma(Y)$, defined in Section~\ref{sec:SWblowup}. We have Sobolev completions $W^{\operatorname{sp}incigma}_k$ and tangent bundles $\T^{\operatorname{gC}, \operatorname{sp}incigma}_j$ for $j \leq k$.
The global Coulomb projection $\Pi^{\operatorname{gC}}: \mathcal{C}(Y) \mathfrak{t}o W$ induces a global Coulomb projection between the blow-ups:
\begin{equation}
\label{eq:piCS}
\Pi^{\operatorname{gC}, \operatorname{sp}incigma}: \mathcal{C}^{\operatorname{sp}incigma}(Y) \mathfrak{t}o W^{\operatorname{sp}incigma}, \ \ (a, s, \phi) \mapsto (a-df, s, e^{f}\phi),
\end{equation}
where $f = Gd^*a$. Furthermore, if $x = (a,s,\phi) \in W^{\operatorname{sp}incigma}$, then the differential of $\Pi^{\operatorname{gC},\operatorname{sp}incigma}$ is given by
\begin{equation}
\label{eq:piCoulSigma}
(\Pi^{\operatorname{gC}, \operatorname{sp}incigma}_*)_x : \T_x^\operatorname{sp}incigma \mathfrak{t}o \T_x^{\operatorname{gC},\operatorname{sp}incigma}, \ \ ((a,s, \phi),(b,r,\psi)) \mapsto (\pi(b),r,\psi + (Gd^*b)\phi).
\end{equation}
At a point $x=(a, s, \phi) \in W^{\operatorname{sp}incigma}$, we let $\J_x^{\operatorname{gC}, \operatorname{sp}incigma} = \R\langle (0, 0, i\phi) \rangle$ be the tangent to the residual $S^1$ gauge orbit. Lemma~\ref{lem:gaugeprojections} can be easily adapted to show that:
\begin{equation}\label{eq:PigCJ}
\J^{\operatorname{sp}incigma}_{j,x} \cap \T^{\operatorname{gC}, \operatorname{sp}incigma}_{j,x} = (\Pi^{\operatorname{gC}, \operatorname{sp}incigma}_*)_x(\J^{\operatorname{sp}incigma}_{j,x}) = \J_x^{\operatorname{gC}, \operatorname{sp}incigma}.
\end{equation}
Next, we define the anticircular global Coulomb slice in the blow-up. For $x = (a,s,\phi) \in W^\operatorname{sp}incigma_k$, let
\begin{align}
\label{eq:Kagcsigma}
\K^{\operatorname{agC},\operatorname{sp}incigma}_{j,x} &= \{(b,r,\psi) \in \mathcal{T}_{j,x}^{\operatorname{gC},\operatorname{sp}incigma} \mid \langle i\phi, \psi \rangle_{\mathfrak{t}ilde{g}} = 0 \} \\
&= \{(b,r,\psi) \in \mathcal{T}_{j,x}^\operatorname{sp}incigma \mid d^*b = 0, \langle i\phi, \psi \rangle_{\mathfrak{t}ilde{g}} = 0\}, \notag
\end{align}
Note that the condition $(b,r,\psi) \in \mathcal{T}_{j,x}^\operatorname{sp}incigma$ already implies that $\operatorname{Re} \langle \phi, \psi \rangle_{L^2} = 0$. Furthermore, here and later, by $\langle \psi, i\phi \rangle_{\mathfrak{t}ilde{g}}$ we implicitly mean that the inner product is taken in the blow-down projection; that is, we consider
$\langle i\phi, \psi \rangle_{\mathfrak{t}ilde{g}(a,s\phi)}.$ It is useful to compare $\langle i\phi, \psi \rangle_{\mathfrak{t}ilde{g}}$ with the result of taking the $\mathfrak{t}ilde{g}$-inner product of $(0,i\phi)$ with the image of $(b,r,\psi)$ in the blow-down: $(b,r\phi + s\psi)$. This yields
\begin{equation}\label{eq:tgs-blowdown-inner}
\langle (b,r\phi + s\psi), (0, i\phi) \rangle_{\mathfrak{t}ilde{g}} = s \langle (0,\psi), (0,i\phi) \rangle_{\mathfrak{t}ilde{g}} + r \langle (0,\phi), (0, i\phi) \rangle_{\mathfrak{t}ilde{g}} = s \langle \psi, i\phi \rangle_{\mathfrak{t}ilde{g}},
\end{equation}
since $(0,\phi) \in \K_{(a,s\phi)}$ and $\operatorname{Re} \langle \phi, i \phi \rangle_{L^2} = 0$. Note that if we tried to define $\K^{\operatorname{agC},\operatorname{sp}incigma}_{(a,s,\phi)}$ using the $\mathfrak{t}ilde{g}$-inner product of $(0,i\phi)$ with $(b,r\phi + s\psi)$, we would not obtain a bundle, since this would be larger at reducibles. On the other hand, for irreducibles, we have the following.
\begin{lemma}
At every irreducible $x=(a,s,\phi) \in W^{\operatorname{sp}incigma}_k$, the infinitesimal blow-down projection
$$ (b, r, \psi) \mapsto (b, r\phi + s\psi)$$
induces a linear isomorphism from $\K^{\operatorname{agC},\operatorname{sp}incigma}_{j, (a, s, \phi)}$ to $\K^{\operatorname{agC}}_{j,(a, s\phi)}$.
\end{lemma}
\begin{proof}
This follows from \eqref{eq:tgs-blowdown-inner}, since at an irreducible we have $s \neq 0$.
\begin{comment}
This boils down to showing that, assuming $d^*b=0$, the condition $\langle i\phi, \psi \rangle_{\mathfrak{t}ilde{g}} = 0$ is equivalent to
$$\langle (0, is\phi), (b,r\phi + s\psi) \rangle_{\mathfrak{t}ilde{g}} = 0.$$
Indeed, from \eqref{eq:gtildezero} we get that $\langle (0, is\phi), (b,0) \rangle_{\mathfrak{t}ilde{g}} = s\langle (0, i\phi), (b,0) \rangle_{\mathfrak{t}ilde{g}} = 0$. Similarly, we have $(0, r\phi) \in \mathcal{K}^{\operatorname{e}}_{j, (a,s\phi)}$ and $\operatorname{Re} \langle (0, i\phi), (0, r\phi) \rangle_{L^2}=0$, so we also get $\langle (0, is\phi), (0, r\phi) \rangle_{\mathfrak{t}ilde g}=0$. Since we are at an irreducible $(s \neq 0)$, the conclusion follows.
\end{comment}
\end{proof}
We have direct sum decompositions
\begin{equation}\label{eq:TsigmaJsigma}
\T_{j,x}^{\operatorname{sp}incigma} = \J_{j,x}^{\operatorname{sp}incigma} \oplus \K_{j,x}^{\operatorname{agC}, \operatorname{sp}incigma}
\end{equation}
and
\begin{equation}
\T_{j,x}^{\operatorname{gC}, \operatorname{sp}incigma} = \J_x^{\operatorname{gC}, \operatorname{sp}incigma} \oplus \K_{j,x}^{\operatorname{agC},\operatorname{sp}incigma}.
\end{equation}
We define the anticircular global Coulomb projection on the blow-up,
$$\Pi^{\operatorname{agC},\operatorname{sp}incigma}_x: \T_{j,x}^{\operatorname{sp}incigma} \mathfrak{t}o \K_{j,x}^{\operatorname{agC}, \operatorname{sp}incigma},$$
to be the projection with kernel $\J_{j,x}^{\operatorname{sp}incigma}$. It can be viewed as the composition of the map $(\Pi^{\operatorname{gC}, \operatorname{sp}incigma}_*)_x$ from \eqref{eq:piCoulSigma} with the projection with kernel $\J_x^{\operatorname{gC}, \operatorname{sp}incigma}$. This last projection, which is the restriction of $\Pi^{\operatorname{agC},\operatorname{sp}incigma}_x$ to $ \T_{j,x}^{\operatorname{gC}, \operatorname{sp}incigma}$, can be written explicitly as
\begin{equation}
\label{eq:antiproj}
(b,r, \psi) \mapsto (b,r,\psi) - \frac{\langle i\phi, \psi \rangle_{\mathfrak{t}ilde{g}}}{\| i\phi\|_{\mathfrak{t}ilde g}^2} \cdot (0,0,i\phi).
\end{equation}
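As a sanity check (immediate from the definitions, not a statement from \cite{KMbook}), the formula \eqref{eq:antiproj} does take values in $\K_{j,x}^{\operatorname{agC}, \operatorname{sp}incigma}$ and vanishes on $\J_x^{\operatorname{gC}, \operatorname{sp}incigma}$: the components $(b, r)$ are unchanged, so the condition $d^*b = 0$ is preserved, while by (real) bilinearity of the $\mathfrak{t}ilde g$-inner product
\[
\Bigl\langle i\phi, \ \psi - \frac{\langle i\phi, \psi \rangle_{\mathfrak{t}ilde{g}}}{\| i\phi\|_{\mathfrak{t}ilde g}^2} \, i\phi \Bigr\rangle_{\mathfrak{t}ilde g} = \langle i\phi, \psi \rangle_{\mathfrak{t}ilde{g}} - \frac{\langle i\phi, \psi \rangle_{\mathfrak{t}ilde{g}}}{\| i\phi\|_{\mathfrak{t}ilde g}^2} \, \| i\phi\|_{\mathfrak{t}ilde g}^2 = 0;
\]
moreover, applying \eqref{eq:antiproj} to the generator $(0,0,i\phi)$ of $\J_x^{\operatorname{gC}, \operatorname{sp}incigma}$ gives $0$, as it should.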
From \eqref{eq:TsigmaJsigma} we see that the anticircular global Coulomb slice is a true ``infinitesimal slice'' to the whole gauge group in $\mathcal{C}^\operatorname{sp}incigma(Y)$; i.e., a complement to the tangent space to the gauge orbits. Another such complement is the local Coulomb slice in the blow-up, $\K^{\operatorname{sp}incigma}_j$. The two slices are related as follows:
\begin{lemma}
\label{lem:bijection}
Let $x = (a, s, \phi) \in W^{\operatorname{sp}incigma}_k$. Then:
$(a)$ The local Coulomb projection $\Pi^{\operatorname{lC}, \operatorname{sp}incigma}_x$ induces a linear isomorphism between the slices $ \K_{j,x}^{\operatorname{agC}, \operatorname{sp}incigma}$ and $\K^{\operatorname{sp}incigma}_{j,x}$. Its inverse is the anticircular global Coulomb projection $\Pi^{\operatorname{agC},\operatorname{sp}incigma}|_{\K^{\operatorname{sp}incigma}_{j,x}}$.
$(b)$ If $s=0$, then $\K^{\operatorname{agC},\operatorname{sp}incigma}_{j,x} = \K^\operatorname{sp}incigma_{j,x}$, and $\Pi^{\operatorname{agC}, \operatorname{sp}incigma}_x|_{\K^{\operatorname{sp}incigma}_{j,x}} = (\Pi^{\operatorname{gC},\operatorname{sp}incigma}_*)_x|_{\K^{\operatorname{sp}incigma}_{j, x}} : \K^\operatorname{sp}incigma_{j,x} \mathfrak{t}o \K^{\operatorname{agC},\operatorname{sp}incigma}_{j,x}$ is the identity.
$(c)$ For any $x$, we have that $\Pi^{\operatorname{agC}, \operatorname{sp}incigma}_x|_{\K^{\operatorname{sp}incigma}_{j,x}} = (\Pi^{\operatorname{gC},\operatorname{sp}incigma}_*)_x|_{\K^{\operatorname{sp}incigma}_{j, x}}$.
\end{lemma}
\begin{proof}
$(a)$ As previously noted, both slices are complements to $\J^{\operatorname{sp}incigma}_{j,x}$. Further, both the local Coulomb projection and the anticircular global Coulomb projection act by adding to a given vector a suitable element of $\J^{\operatorname{sp}incigma}_{j,x}$, chosen so that the result lies in the target slice. This implies that the two maps are inverse to each other.
$(b)$ If $s=0$, we see from \eqref{eq:piLCsigma} and \eqref{eq:piCoulSigma} that $\Pi^{\operatorname{lC},\operatorname{sp}incigma}_x = (\Pi^{\operatorname{gC}, \operatorname{sp}incigma}_*)_x$. The conclusion follows since $(\Pi^{\operatorname{gC},\operatorname{sp}incigma}_*)_x$ is both idempotent and invertible.
$(c)$ The case $s=0$ was studied in part (b). Therefore, it now suffices to consider the case when $s \neq 0$, so that $x$ is irreducible. Then, the local slices are isomorphic to the corresponding ones in the blow-down, and the projections commute with the infinitesimal blow-down map, so we can simply work in the blow-down.
Let $\Phi = s \phi$ and $z = (a, \Phi)$. We need to check that if we have a vector $v \in \K_{j,z}$, then its projection $w = (\Pi^{\operatorname{gC}}_*)_z(v)$ lands in the anticircular global Coulomb slice. In other words, we know that $\operatorname{Re} \langle (0, i \Phi), v \rangle_{L^2}=0$, and we want to check that $\langle (0, i\Phi), w \rangle_{\mathfrak{t}ilde g} = 0$. Recall from \eqref{eq:backandforth} that $\Pi^{\operatorname{elC}}_z$ and $(\Pi^{\operatorname{gC}}_*)_z$ are inverse to each other; hence, $\Pi^{\operatorname{elC}}_z(w)=v$. Using \eqref{eq:gtilde2}, we get
$$\langle (0, i\Phi), w \rangle_{\mathfrak{t}ilde g} = \operatorname{Re} \langle (0, i\Phi), \Pi^{\operatorname{elC}}_z(w) \rangle_{L^2} = \operatorname{Re} \langle (0, i \Phi), v \rangle_{L^2}=0,$$
as desired.
\end{proof}
Recall that in Section~\ref{sec:coulombs} we defined an enlarged local Coulomb slice $\mathcal{K}^{\operatorname{e}}$, complementary to the orbit of the normalized gauge group $\Go$. There is a similar enlarged local Coulomb slice in the blow-up,
\[
\mathcal{K}^{\operatorname{e},\sigma}_{j,x} = \{(b,t,\psi) \in \mathcal{T}^\sigma_{j,x} \mid -d^*b + i s^2 \operatorname{Re} \langle i \phi, \psi \rangle^{\circ} =0 \}.
\]
Note that the condition $-d^*b + i s^2 \operatorname{Re} \langle i \phi, \psi \rangle^{\circ} =0$ simply means that $-d^*b + i s^2 \operatorname{Re} \langle i \phi, \psi \rangle$ is a constant function.
We define the enlarged local Coulomb projection on the blow-up, $\Pi^{\operatorname{elC},\sigma}_{(a,s,\phi)}: \T_{j,x}^\sigma\to \mathcal{K}^{\operatorname{e},\sigma}_{j,x}$, by
\begin{equation}
\label{eq:pieLCsigma}
\Pi^{\operatorname{elC},\operatorname{sp}incigma}_{(a,s,\phi)}(b,r,\psi) = (b-d\zeta,r, \psi + \zeta \phi),
\end{equation}
where $\zeta$ is such that $\int_Y \zeta =0$ and $-d^*(b-d\zeta) + is^2 \operatorname{Re} \langle i \phi, \psi + \zeta \phi \rangle^\circ =0$.
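For concreteness, unwinding this condition (and using that $\zeta$ is purely imaginary, so that $i\operatorname{Re}\langle i\phi, \zeta\phi\rangle = \zeta|\phi|^2$ pointwise), the correction $\zeta$ is the unique mean-zero solution of
$$ d^*d\zeta + s^2\bigl(\zeta|\phi|^2\bigr)^{\circ} = d^*b - is^2\operatorname{Re}\langle i\phi, \psi\rangle^{\circ},$$
which exists and depends linearly on $(b,\psi)$ because the operator on the left-hand side is strictly positive on functions of mean zero.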
Let
\begin{equation}
\label{eq:Jcircsigma}
\J^{\circ,\sigma}_{j,x} = \{(-d\xi, 0, \xi\phi) \mid \int_Y \xi =0\} \subset \J^{\sigma}_{j,x}
\end{equation}
be the tangent to the orbit of $\Go$ in the blow-up. We have direct sum decompositions
\begin{equation}
\label{eq:xavi}
\T^{\sigma}_{j,x} = \J^{\circ,\sigma}_{j,x} \oplus \T^{\operatorname{gC}, \sigma}_{j,x}
\end{equation}
and
\begin{equation}
\label{eq:xavi2}
\T^{\sigma}_{j,x} = \J^{\circ,\sigma}_{j,x} \oplus \mathcal{K}^{\operatorname{e},\sigma}_{j,x}.
\end{equation}
Thus, $\T^{\operatorname{gC}, \sigma}_{j,x}$ and $\mathcal{K}^{\operatorname{e},\sigma}_{j,x}$ are both infinitesimal slices for the action of $\Go$.
Here is the analogue of Lemma~\ref{lem:bijection}; see also \eqref{eq:backandforth} for the corresponding result in the blow-down.
\begin{lemma}
\label{lem:bijection2}
Let $x = (a, s, \phi) \in W^{\sigma}_k$. Then:
$(a)$ The enlarged local Coulomb projection $\Pi^{\operatorname{elC}, \sigma}_x$ induces a linear isomorphism between the slices $ \T_{j,x}^{\operatorname{gC}, \sigma}$ and $\mathcal{K}^{\operatorname{e},\sigma}_{j,x}$. Its inverse is the infinitesimal global Coulomb projection $\Pi^{\operatorname{gC},\sigma}_*|_{\mathcal{K}^{\operatorname{e},\sigma}_{j,x}}$.
$(b)$ If $s=0$, then $\T^{\operatorname{gC},\sigma}_{j,x} = \mathcal{K}^{\operatorname{e},\sigma}_{j,x}$, and $(\Pi^{\operatorname{gC},\sigma}_*)_x|_{\mathcal{K}^{\operatorname{e},\sigma}_{j, x}} : \mathcal{K}^{\operatorname{e},\sigma}_{j,x} \to \T^{\operatorname{gC},\sigma}_{j,x}$ is the identity.
\end{lemma}
Finally, at $x \in W^{\sigma}_k$, let us introduce the {\em shear map}
$$ S_x: \T_{j,x}^{\sigma} \to \T_{j,x}^{\sigma}$$
given by
\begin{equation}
\label{eq:shear}
\J^{\circ, \sigma}_{j,x} \oplus \mathcal{K}^{\operatorname{e},\sigma}_{j,x} \to \J^{\circ, \sigma}_{j,x} \oplus \T^{\operatorname{gC}, \sigma}_{j,x}, \ \ \ \
v \oplus w \mapsto v \oplus (\Pi^{\operatorname{gC}, \sigma}_*)_x(w).
\end{equation}
Its inverse is
$$
S_x^{-1}: \J^{\circ, \sigma}_{j,x} \oplus \T^{\operatorname{gC}, \sigma}_{j,x} \to \J^{\circ, \sigma}_{j,x} \oplus \mathcal{K}^{\operatorname{e},\sigma}_{j,x}, \ \ \ \
v \oplus w \mapsto v \oplus \Pi^{\operatorname{elC},\sigma}_x(w).
$$
We can write, in a more compressed form,
\begin{align*}
S_x(v) &= v - \Pi^{\operatorname{elC},\sigma}_x(v) + (\Pi^{\operatorname{gC}, \sigma}_*)_x(v), \\
S_x^{-1}(v) &= v + \Pi^{\operatorname{elC},\sigma}_x(v) - (\Pi^{\operatorname{gC}, \sigma}_*)_x(v).
\end{align*}
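As a quick consistency check between these compressed formulas and \eqref{eq:shear}: for $v \in \J^{\circ, \sigma}_{j,x}$ we have $\Pi^{\operatorname{elC},\sigma}_x(v) = 0$ and $(\Pi^{\operatorname{gC}, \sigma}_*)_x(v) = 0$ (both projections have kernel $\J^{\circ,\sigma}_{j,x}$, by \eqref{eq:xavi} and \eqref{eq:xavi2}), so $S_x(v) = v$; while for $w \in \mathcal{K}^{\operatorname{e},\sigma}_{j,x}$ we have $\Pi^{\operatorname{elC},\sigma}_x(w) = w$, so $S_x(w) = (\Pi^{\operatorname{gC}, \sigma}_*)_x(w)$, as in \eqref{eq:shear}.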
Putting together all the maps $S_x$, we obtain an automorphism $S$ of the bundle $\T_j^{\sigma}$ over $W_k^{\sigma}$.
\begin{remark}
\label{rem:shear}
With regard to the infinitesimal slices to the whole gauge action, observe that $S_x$ maps $\K^{\sigma}_{j,x}$ to $\K^{\operatorname{agC}, \sigma}_{j,x}$; compare with Lemma~\ref{lem:bijection}. Also, $S_x$ preserves (but does not act by the identity on) the infinitesimal orbit space $\J^{\sigma}_{j,x}$.
\end{remark}
\section{Choices of gauge on cylinders}
\label{sec:cylinders}
Let $Z = I \times Y$ be a cylinder. Recall that $\mathcal{C}(Z)$ consists of pairs $(a, \phi)$ with $a \in \Omega^1(Z; i\R)$ and $\phi \in \Gamma(\mathbb{S}^+)$. We write such a pair as a path
$$(a(t) + \alpha(t)dt, \phi(t)), \ \ t \in I,$$ where $a(t) \in \Omega^1(Y; i\R)$, $\alpha(t) \in C^{\infty}(Y; i\R)$, and $\phi(t) \in \Gamma(\mathbb{S})$.
Recall that if $\alpha(t)=0$ we say that $(a, \phi)$ is in temporal gauge and that any configuration can be put into temporal gauge using the action of $\G(Z)$.
We seek an analogue of $\mathcal{C}(Z)$ adapted to global Coulomb gauge. The first guess is to consider $W(Z)$, the subspace of $\mathcal{C}(Z)$ consisting of configurations that are in temporal gauge and also in (three-dimensional) Coulomb gauge at each slice $\{t\} \times Y$; that is,
$$ W(Z) = \{(a, \phi) \in \Gamma(Z, \mathrm{p}^*(iT^*Y \oplus \mathbb{S})) \mid a(t) \in \ker d^*,\ \forall t\},$$
where $\mathrm{p}: Z \to Y$ denotes the projection. Note that an arbitrary configuration $(a, \phi) \in \mathcal{C}(Z)$ cannot always be moved into $W(Z)$ by a four-dimensional gauge transformation; instead, we can move it into temporal gauge, and then Coulomb project in each three-dimensional slice. Further, on $W(Z)$ we could consider an action by the group of slicewise constant gauge transformations
$$ \G^{\operatorname{gC}}(Z) = C^{\infty}(I; S^1),$$
with $u \in \G^{\operatorname{gC}}(Z) $ acting by $(a(t), \phi(t)) \to (a(t), u(t)\phi(t))$.
As we saw in Section~\ref{sec:SWe}, the global Coulomb projections of Seiberg-Witten trajectories are the solutions of
\begin{equation}
\label{eq:swc}
\Bigl( \frac{d}{dt} + \mathcal{X}gc \Bigr) (a(t), \phi(t)) = 0,
\end{equation}
where $(a, \phi) \in W(Z)$.
Note that the equations \eqref{eq:swc} are invariant under the action of constant $u \in S^1$, but not under all of $\G^{\operatorname{gC}}(Z)$. This is similar to what happens in $\mathcal{C}(Z)$: the Seiberg-Witten equations are invariant under the whole group $\G(Z)$, but once we move to temporal gauge and write the equations as a gradient flow, we are only left with the action of $\G(Y)$, constant in $t$.
In view of this, a better analogue of $\mathcal{C}(Z)$ is defined using the following notion:
\begin{definition}
A configuration $(a(t) + \alpha(t)dt, \phi(t)) \in \Omega^1(Z; i\R) \oplus \Gamma(\mathbb{S}^+)$ is said to be in {\em pseudo-temporal gauge} if for each $t$, the component $\alpha(t)$ is constant as a function on $Y$.
\end{definition}
We let $\mathcal{C}^{\operatorname{gC}}(Z)$ consist of pairs $(a(t) + \alpha(t)dt, \phi(t))$ in pseudo-temporal gauge, and such that $(a(t), \phi(t))$ is in slicewise Coulomb gauge, i.e.:
$$d(\alpha(t)) =0, \ \ d^*(a(t)) = 0, \ \ \forall t \in I.$$
The elements of $\G^{\operatorname{gC}}(Z)$ act on $\mathcal{C}^{\operatorname{gC}}(Z)$ as usual gauge transformations:
$$ u: \bigl( a(t) + \alpha(t)dt, \phi(t)\bigr ) \mapsto \bigl(a(t) + (\alpha(t)-u(t)^{-1}\frac{du(t)}{dt})dt, u(t)\phi(t) \bigr).$$
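Note that this action preserves $\mathcal{C}^{\operatorname{gC}}(Z)$: since each $u(t)$ is a constant gauge transformation on $Y$, the term $u(t)^{-1}\frac{du(t)}{dt}$ is a constant function on $Y$, so the condition $d(\alpha(t))=0$ is preserved; and $a(t)$ is unchanged, while multiplying $\phi(t)$ by $u(t)$ does not affect $d^*(a(t))=0$.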
Consider the process of moving an arbitrary configuration in $\mathcal{C}(Z)$ into pseudo-temporal gauge by an element of $\G(Z)$, and then applying slicewise global Coulomb projection to land in $\mathcal{C}^{\operatorname{gC}}(Z)$. Under this process, the Seiberg-Witten equations turn into:
\begin{equation}
\label{eq:swC}
\Bigl( \frac{d}{dt} + \mathcal{X}gc \Bigr) (a(t), \phi(t)) + (0, \alpha(t)\phi(t)) = 0,
\end{equation}
for $(a(t) + \alpha(t)dt, \phi(t)) \in \mathcal{C}^{\operatorname{gC}}(Z)$. These equations are invariant under the action of $\G^{\operatorname{gC}}(Z)$.
Of course, every solution of \eqref{eq:swC} can be transformed into a solution of \eqref{eq:swc} by moving into temporal gauge. Most of the time we will work with solutions of \eqref{eq:swc}, i.e., trajectories of $\mathcal{X}gc$. However, when considering infinitesimal deformations of such trajectories (as we shall do in Section~\ref{sec:linearized}, for example), it is important to allow the more general pseudo-temporal gauge in order to obtain a good Fredholm problem.
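Explicitly, if $(a(t) + \alpha(t)dt, \phi(t))$ is a solution of \eqref{eq:swC}, one can take $u(t) = \exp\bigl(\int_{t_0}^{t} \alpha(t')\, dt'\bigr)$ for a fixed basepoint $t_0 \in I$; this lies in $\G^{\operatorname{gC}}(Z)$ because each $\alpha(t')$ is a constant, $i\R$-valued function on $Y$. Then $u(t)^{-1}\frac{du(t)}{dt} = \alpha(t)$, so acting by $u$ removes the $dt$-component and yields a solution of \eqref{eq:swc}.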
By analogy with the section $\mathcal{F}: \mathcal{C}(Z) \to \V(Z)$ from Section~\ref{sec:SWblowup}, we write $\F^{\gCoul}(a, \phi)$ for the left hand side of Equation~\eqref{eq:swC}. We view $\F^{\gCoul}$ as a section
$$\F^{\gCoul}: \mathcal{C}^{\operatorname{gC}}(Z) \to \V^{\operatorname{gC}}(Z),$$
where $\V^{\operatorname{gC}}(Z)$ is the trivial $W(Z)$ bundle over $\mathcal{C}^{\operatorname{gC}}(Z)$.
It is worth comparing $\V^{\operatorname{gC}}(Z)$ to the bundle $\V(Z)$ from Section~\ref{sec:SWblowup}, whose fibers were $\Gamma(Z; i\Lambda^2_+ T^*Z \oplus \mathbb{S}^-)$. In our setting, we identify self-dual imaginary two-forms on $Z$ with sections $a = (a(t))$ of $\mathrm{p}^*(iT^*Y)$, via sending a section $a$ to $*a + dt \wedge a$, where $*$ here denotes the Hodge star operator on $Y$. We then impose a Coulomb gauge condition $d^*a(t) = 0$ for all $t$. Further, the bundle $\mathbb{S}^-$ can be identified with $\mathrm{p}^* \mathbb{S}$. Thus, the fibers become $W(Z)$, as in our definition of $\V^{\operatorname{gC}}(Z)$.
If $Z$ is a compact cylinder, then starting from the space $W(Z)$ we can consider Sobolev completions $W_{k}(Z)$ and blow-ups $W_k^{\sigma}(Z)$ and $W_k^{\tau}(Z)$. The space $W_k^\tau(Z)$ is a subset of the Banach manifold $\widetilde{W}^\tau_k(Z)$, the latter of which is obtained by removing the condition $s(t) \geq 0$; compare with \eqref{eq:tildeC}. Similarly, we can define $\mathcal{C}^{\operatorname{gC}}_k(Z)$, its $\sigma$ and $\tau$ blow-ups, and a Banach manifold $\widetilde{\mathcal{C}}^{\operatorname{gC}, \tau}_k(Z)$.
The tangent space to $W_k(Z)$ can be completed to $T_jW(Z)$ (for $j \leq k$), which is a trivial bundle with fiber $W_j(Z)$. The tangent bundle to $\mathcal{C}^{\operatorname{gC}}_k(Z)$ is denoted $\T^{\operatorname{gC}}_{k}(Z)$, and has completions $\T^{\operatorname{gC}}_{j}(Z)$. There are blown-up analogues $\T^{\operatorname{gC}, \sigma}_j(Z)$ and $\T^{\operatorname{gC}, \tau}_j(Z)$, the latter of which we think of as a bundle over $\widetilde{\mathcal{C}}^{\operatorname{gC},\tau}_k(Z)$. Similar constructions can be done for the bundle $\V(Z)$.
If $Z$ is an infinite cylinder, we will also consider $L^2_{j, loc}$ completions. These are denoted $W_{k, loc}(Z), \mathcal{C}^{\operatorname{gC}}_{k, loc}(Z)$, and so on.
\section{Controlled Coulomb perturbations} \label{sec:ccp}
In Section~\ref{sec:perturbedSW} we discussed how one can perturb the Seiberg-Witten equations by a formal gradient $\q$. In Coulomb gauge, a perturbation is an $S^1$-equivariant vector field
$$ \eta : W \to \T^{\operatorname{gC}}_0.$$
We write $\eta^0$ and $\eta^1$ for the connection and spinor components of $\eta$.
We are interested in perturbations $\eta$ that are $\tilde{g}$-formal gradients of functions $f: W \to \mathbb{R}$. Given such an $\eta$, by applying it slicewise in $t$ on a cylinder $Z = I \times Y$, we
obtain a section
\begin{equation}
\label{eq:hateta}
\hat \eta: \mathcal{C}^{\operatorname{gC}}(Z) \to \V^{\operatorname{gC}}(Z), \ \ \
\hat \eta(a(t) + \alpha(t)dt, \phi(t)) = \eta(a(t), \phi(t)).
\end{equation}
We require that $\hat \eta$ preserves Sobolev regularity. We will also need properties of $\eta$ analogous to some of the very tameness properties of $\q$. (Compare Definitions~\ref{def:tame} and \ref{def:verytame}.)
\begin{definition}\label{def:controlled}
Suppose that $\eta=(\eta^0, \eta^1)$ is the $\tilde{g}$-formal gradient of an $S^1$-equivariant function $f: W \to \mathbb{R}$. Let $\hat{\eta}$ be the induced four-dimensional perturbation. We say $\eta$ is a {\em controlled Coulomb perturbation} if for all integers $k \geq 2$, and all compact cylinders $Z = [t_1, t_2] \times Y$,
\begin{enumerate}[(i)]
\item $\hat \eta$ defines an element of $C^{\infty}_{\fb}(\mathcal{C}^{\operatorname{gC}}_{k}(Z), \V^{\operatorname{gC}}_{k}(Z))$;
\item The first derivative $\mathcal{D}\hat\eta \in C^\infty_{\fb}(\mathcal{C}^{\operatorname{gC}}_{k}(Z),\operatorname{Hom}(\T^{\operatorname{gC}}_{k}(Z),\V^{\operatorname{gC}}_{k}(Z)))$ extends to a $C^\infty_{\fb}$ map into $\operatorname{Hom}(\T^{\operatorname{gC}}_{j}(Z),\V^{\operatorname{gC}}_{j}(Z))$, for all integers $j \in [-k,k]$;
\end{enumerate}
\end{definition}
\begin{remark}
In \cite{KMbook}, Properties (ii), (iv), (v), and (vi) in Definition~\ref{def:tame} are used to get bounds on the stationary points and trajectories of $\mathcal{X}q$. When we do finite-dimensional approximation, we have a priori bounds since we restrict to the ball $B(2R)$. Therefore, there is no need to include analogues of these properties in Definition~\ref{def:controlled} above.
\end{remark}
We recall that the infinitesimal global Coulomb projection of the Seiberg-Witten vector field $\mathcal{X}=\operatorname{grad} \mathscr{L}$ is given by $\mathcal{X}^{\operatorname{gC}}=l + c$, where $l$ and $c$ are defined by \eqref{eq:lmap} and \eqref{eq:cmap} respectively. Suppose that $\q$ is an abstract perturbation given by the formal gradient of a function $f$. Let
\begin{equation}
\label{eq:etaq}
\eta_{\q}(a, \phi) = (\Pi^{\operatorname{gC}}_*)_{(a,\phi)} \q(a, \phi)
\end{equation}
be the infinitesimal global Coulomb projection of $\q$. The infinitesimal global Coulomb projection of $\mathcal{X}q = \mathcal{X} + \q$ is then $$
\mathcal{X}qgc=\mathcal{X}^{\operatorname{gC}} + \eta_{\q},$$ which is the gradient of $(\mathscr{L} + f)|_W$ with respect to the metric $\tilde{g}$ introduced in \eqref{eq:gtilde}.
\begin{lemma}\label{lem:tamecoulomb}
Let $\q$ be a very tame perturbation. Then, $\eta_{\q}$ is a controlled Coulomb perturbation.
\end{lemma}
Before proving the claim, we will need to prove some additional properties of the infinitesimal global Coulomb projection.
\begin{lemma}\label{lem:igc-fb}
Fix $j \in [-k,k]$. Then,
\begin{enumerate}[(a)]
\item \label{igc-fb:a} the map $\Pi^{\operatorname{gC}}_* : \T_j \to \T^{\operatorname{gC}}_j$ is functionally bounded as a map from $\mathcal{C}_k(Y)$ to $\operatorname{Hom}(\mathcal{C}_j(Y), W_j)$;
\item \label{igc-fb:b} if $X \in C^n_{\fb}(W_k, \mathcal{C}_k(Y))$ satisfies that $\mathcal{D}X: W_k \to \operatorname{Hom}(W_k, \mathcal{C}_k(Y))$ extends to an element of $C^{n-1}_{\fb}(W_k, \operatorname{Hom}(W_j, \mathcal{C}_j(Y)))$, then $\mathcal{D}(\Pi^{\operatorname{gC}}_* \circ X) : W_k \to \operatorname{Hom}(W_j, W_j)$ is also in $C^{n-1}_{\fb}$;
\item \label{igc-fb:c}the analogues of \eqref{igc-fb:a} and \eqref{igc-fb:b} hold for $I \times Y$ as well, for $I \subseteq \mathbb{R}$ a closed interval (with $\Pi^{\operatorname{gC}}_*$ applied slicewise).
\end{enumerate}
\end{lemma}
\begin{proof}
\eqref{igc-fb:a} Let $(a,\phi) \in \mathcal{C}_k(Y)$. By definition,
$$
(\Pi^{\operatorname{gC}}_*)_{(a,\phi)}(b,\psi) = (\pi(b), e^{Gd^* a}((Gd^* b)\phi + \psi)).
$$
We are interested in showing that this expression has $L^2_j$ norm bounded in terms of the $L^2_k$ norm of $(a,\phi)$ and the $L^2_j$ norm of $(b,\psi)$. This is clear for the first term $\pi(b)$, because $\pi$ is a linear, continuous operator taking $L^2_j$ forms to $L^2_j$ forms. For the second term, the continuity and bilinearity of the Sobolev multiplication $L^2_j \times L^2_k \to L^2_j$ gives bounds
$$
\|(Gd^* b)\phi\|_{L^2_j} \leq C \| Gd^*b \|_{L^2_j} \| \phi \|_{L^2_k}.
$$
We then use the linearity and continuity of $Gd^*$ to bound $ \| Gd^*b \|_{L^2_j}$ in terms of $\|b \|_{L^2_j}$. The functional boundedness of the exponential map gives a bound on $e^{Gd^* a}$. Using the Sobolev multiplication again, we get the desired $L^2_j$ bounds on $e^{Gd^* a}((Gd^* b)\phi + \psi)$.
\eqref{igc-fb:b} By Lemma~\ref{lem:igc}, we have that $\Pi^{\operatorname{gC}}_*$ is smooth and therefore we are only interested in functional boundedness. Precisely, to show that $\mathcal{D}(\Pi^{\operatorname{gC}}_* \circ X)$ is in $C^{n-1}_{\fb}$, we need to check that $\D^m (\Pi^{\operatorname{gC}}_* \circ X)$ is functionally bounded for all $m=1, \dots, n$.
We first consider the case $m=1$. Let $(a, \phi) \in W_k$ and $(b,\psi) \in W_j$. It is straightforward to compute
\begin{align}\label{eq:igc-fb-d}
\D_{(a,\phi)} (\Pi^{\operatorname{gC}}_* \circ X) (b,\psi) = \Big(\pi(\D_{(a,\phi)} X^0(b,\psi)), \D_{(a,\phi)} X^1(b,\psi) + (G&d^* \D_{(a,\phi)} X^0(b,\psi)) \phi \\
&+ (Gd^* X^0(a,\phi))\psi \Big).\notag
\end{align}
Since $X$ and $\mathcal{D}X$ are functionally bounded (the latter as a map from $W_k$ to $\operatorname{Hom}(W_j, \mathcal{C}_j(Y))$), we get bounds on $X(a,\phi)$ and $\D_{(a,\phi)} X^0(b,\psi)$ in terms of the $L^2_k$ norm of $(a,\phi)$ and the $L^2_j$ norm of $(b,\psi)$. To obtain the desired $L^2_j$ bounds on \eqref{eq:igc-fb-d}, as above, we apply the continuity and bilinearity of the Sobolev multiplication $L^2_k \times L^2_j \to L^2_j$.
Finally, we give the argument for the second derivative (as this will illustrate the appropriate Sobolev norms) and allow the reader to complete the proof by induction. We consider the derivative of $\mathcal{D}(\Pi^{\operatorname{gC}}_* \circ X)$, which we think of as a map from $W_k$ to $\operatorname{Hom}(W_k \times W_j,W_j)$. Again, let $(a,\phi) \in W_k$, $(b,\psi) \in W_j$. We also denote an $L^2_k$ tangent vector to $(a,\phi)$ by $(\alpha, \zeta)$. Direct computation shows
\begin{align*}
\D^2_{(a,\phi)} (\Pi^{\operatorname{gC}}_* \circ X)((\alpha, \zeta), (b,\psi)) = \Big(\pi\bigl((\D^2_{(a,\phi)} X^0)((\alpha,\zeta), (b,\psi))\bigr)&, \\
(Gd^*(\D^2_{(a,\phi)} X^0)((\alpha,\zeta), (b,\psi))) \phi &+ Gd^*(\D_{(a,\phi)} X^0(b,\psi)) \zeta \\ &+ (\D^2_{(a,\phi)} X^1)((\alpha, \zeta),(b,\psi))\Big).
\end{align*}
Again, the functional boundedness of $X$ and $\mathcal{D}X$ together with Sobolev multiplication give the desired result.
\eqref{igc-fb:c} Similar arguments apply to establish the four-dimensional analogues.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:tamecoulomb}] First, note that if $\q$ is the $L^2$-formal gradient of $f: \mathcal{C}(Y) \to \rr$, then $\eta_{\q}$ is the $\tilde{g}$-formal gradient of the restriction of $f$ to $W$.
Since $\eta_\mathfrak{q}(a,\phi) = (\Pi^{\operatorname{gC}}_*)_{(a,\phi)} \q(a,\phi)$, the result follows by combining Lemmas~\ref{lem:igc} and \ref{lem:igc-fb} with the corresponding Properties (i) and (iii) of $\q$ in Definition~\ref{def:verytame}, since $\q$ is a very tame perturbation. \begin{comment}
For Property (ii), we observe that $\eta_{\q}^0(a,\phi) = \pi(\q^0(a,\phi))$, so the continuity follows from Property (ii) of $\q$ in Definition~\ref{def:verytame}, since $\pi : \mathcal{O}mega^1(Y; i\R) \mathfrak{t}o \ker d^*$ extends continuously to all $L^2_j$-completions. Finally, for Property (iv), we observe that there exists $m$ such that
\[
\| \eta_{\q}^0 (a,\phi) \|_{L^2} = \| \pi(\q^0(a,\phi)) \|_{L^2} \leq \| \q(a,\phi) \|_{L^2} \leq m(\| \phi \|_{L^2} + 1),
\]
where the last inequality follows from Property (iv) of Definition~\ref{def:tame} for $\q$.
\end{comment}
\end{proof}
\section{Trajectories in global Coulomb gauge} \label{sec:trajGC}
Let $\q$ be a very tame perturbation and $\eta_{\q}$ the induced controlled perturbation in Coulomb gauge.
Let us write $\mathcal{X}qgc = ((\mathcal{X}qgc)^0, (\mathcal{X}qgc)^1)$ where, as usual, the superscript $0$ denotes the form part and the superscript $1$ denotes the spinorial part. The vector field $\mathcal{X}qgc$ on $W$ induces a vector field $\mathcal{X}qgcsigma$ on $W^{\sigma}$, given by
\begin{equation}\label{eq:Xqgcsigmaformula}
\mathcal{X}qgcsigma (a, s, \phi) = ((\mathcal{X}qgc)^0(a, s\phi), \Lambda_\q(a, s, \phi)s, (\widetilde\mathcal{X}qgc)^1(a, s, \phi) - \Lambda_\q(a, s, \phi) \phi),
\end{equation}
where
\begin{equation}
\label{eq:widetildeX}
(\widetilde\mathcal{X}qgc)^1(a, s, \phi) = \int_0^1 \D_{(a, sr\phi)} (\mathcal{X}qgc)^1(0, \phi) dr,
\end{equation}
and
\begin{equation}
\label{eq:Lambdaqas}
\Lambda_\q(a, s, \phi) = \operatorname{Re}\langle \phi, (\widetilde \mathcal{X}qgc)^1(a, s, \phi) \rangle_{L^2}.
\end{equation}
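For concreteness, note that for $s \neq 0$ the integral in \eqref{eq:widetildeX} can be evaluated by the fundamental theorem of calculus: since $(\mathcal{X}qgc)^1(a, 0) = 0$ by $S^1$-equivariance, one gets
$$(\widetilde\mathcal{X}qgc)^1(a, s, \phi) = \frac{1}{s}\, (\mathcal{X}qgc)^1(a, s\phi).$$
Thus, away from the reducible locus, \eqref{eq:Xqgcsigmaformula} is the usual formula for the vector field induced on the blow-up; the advantage of the integral expression is that it makes sense at $s = 0$ as well.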
Since $\mathcal{X}qgc = \Pi^{\operatorname{gC}}_* \circ \mathcal{X}q$, we have that $\mathcal{X}qgcsigma = \Pi^{\operatorname{gC},\sigma}_* \circ \mathcal{X}qsigma$.
\begin{lemma}\label{lem:continuityxqsigma}
The vector field $\mathcal{X}qgcsigma : W^\sigma_k \to \mathcal{T}^{\operatorname{gC},\sigma}_{k-1}$ is smooth.
\end{lemma}
\begin{proof}
We write $\mathcal{X}qgc = \Pi^{\operatorname{gC}}_* \circ (\mathcal{X} + \q)$. Lemma~\ref{lem:igc} shows that $\Pi^{\operatorname{gC}}_*: \T_{k-1} \to \T^{\operatorname{gC}}_{k-1}$ is smooth. Since $\mathcal{X} = \operatorname{grad} \mathscr{L}$ and $\q$ are smooth as maps from $\mathcal{C}_k(Y)$ to $\T_{k-1}$, we get that $\mathcal{X}^{\operatorname{gC}}_\mathfrak{q}: W_k \to \T^{\operatorname{gC}}_{k-1}$ is smooth. Hence, the induced vector field on the blow-up, $\mathcal{X}qgcsigma$, is also smooth (compare \cite[Lemma 10.2.1]{KMbook}).
\end{proof}
We are interested in the dynamics of the vector field $\mathcal{X}qgcsigma$ on $W^{\sigma}$.
Every stationary point of $\mathcal{X}qsigma$ on $\mathcal{C}^\sigma(Y)$ can be moved into $W^\sigma$ by the global Coulomb projection. Conversely, every stationary point $x$ of $\mathcal{X}qgcsigma$ on $W^\sigma$ is also a stationary point of $\mathcal{X}qsigma$, since $\mathcal{X}qsigma$ lands in $\K_x^{\sigma}$ and infinitesimal global Coulomb projection induces an isomorphism from $\K_x^{\sigma}$ to $\K_x^{\operatorname{agC},\sigma}$. Thus, $\Pi^{\operatorname{gC}, \sigma}$ induces a bijection
\begin{equation}
\label{eq:EquivStat1}
\bigl\{ \text{stationary points of } \mathcal{X}qsigma \bigr\} / \mathcal{G}\ \xrightarrow{\mathmakebox[2em]{\cong}} \ \bigl\{ \text{stationary points of } \mathcal{X}qgcsigma \bigr\} / S^1.
\end{equation}
Note that the condition of being irreducible, boundary stable, or boundary unstable is preserved by this bijection. By contrast, a trajectory\footnote{From now on, by trajectory we will always mean a trajectory of finite type as in Section~\ref{sec:SWe}.} of $\mathcal{X}qgcsigma$ on $W^{\sigma}$ is {\em not} a trajectory of $\mathcal{X}qsigma$; still, we have:
\begin{proposition}
\label{prop:CPtraj}
The trajectories of $\mathcal{X}qgcsigma$ on $W^{\sigma}$ are precisely the global Coulomb projections of the trajectories of $\mathcal{X}qsigma$ on the blow-up $\mathcal{C}^\sigma(Y)$. In fact, global Coulomb projection induces a bijection
\begin{equation}
\label{eq:EquivTraj1}
\bigl\{ \text{trajectories of } \mathcal{X}qsigma \bigr\} / \mathcal{G}\ \xrightarrow{\mathmakebox[2em]{\cong}}\ \bigl\{ \text{trajectories of } \mathcal{X}qgcsigma \bigr\} / S^1,
\end{equation}
where $\G$ acts on trajectories $x(t)$ by three-dimensional gauge transformations, constant in $t$.
\end{proposition}
\begin{remark}
Because $\q$ is a very tame perturbation, every $L^2_k$ trajectory of $\mathcal{X}qsigma$ is gauge-equivalent to a smooth one. Furthermore, every trajectory of $\mathcal{X}qgcsigma$ is smooth.
\end{remark}
\begin{proof}[Proof of Proposition~\ref{prop:CPtraj}]
If $\gamma$ is a smooth trajectory of $\mathcal{X}qsigma$, its global Coulomb projection $\gamma^{\flat}$ is a trajectory of $\Pi^{\operatorname{gC}, \sigma}_* \circ \mathcal{X}qsigma$, which is exactly $\mathcal{X}qgcsigma$.
Conversely, given a trajectory $\gamma$ of $\mathcal{X}qgcsigma$, we can lift it to a smooth trajectory $\gamma^{\sharp}$ of $\mathcal{X}qsigma$ as follows. Consider the submanifold $\mathcal{O}(\gamma)_k \subset \mathcal{C}_k^{\sigma}(Y)$ consisting of all points that are on the gauge orbits of points on $\gamma$. Thus, any $x \in \mathcal{O}(\gamma)_k$ can be written as $x=u \cdot \gamma(t_0)$ where $u$ is an $L^2_{k+1}$ gauge transformation on $Y$ and $t_0 \in \rr$. The vector $v_x = \mathcal{X}qsigma(x)$ is in the local Coulomb slice $\K^\sigma_{x, k-1}$. Consider the push-forward $(u^{-1})_* v_x$, which is in the local Coulomb slice to $u^{-1} \cdot x = \Pi^{\operatorname{gC}}(x)$. Now, $u^{-1} \cdot x$ is part of the trajectory $\gamma$, and hence is smooth. Therefore, $(u^{-1})_* v_x= (\mathcal{X}qsigma)_{u^{-1} \cdot x}$ is also smooth. Pushing it back by the $L^2_{k+1}$ gauge transformation $u$ yields the vector $v_x$, which we now see must be in $L^2_k$. Thus, the vectors $v_x$ form a true vector field on the Banach manifold $\mathcal{O}(\gamma)_k$. By integrating this vector field starting at a smooth point $x \in \mathcal{O}(\gamma)_k$, we obtain a lift $\gamma^\sharp$ of $\gamma$. We can do this for any $k$, and obtain the same lift; hence, $\gamma^\sharp$ is smooth.
In the above construction, note that the lift $ \gamma^\sharp$ is unique up to transformation by an element in $\G$. Indeed, in four dimensions, after moving to temporal gauge, the only remaining gauge transformations are those constant in $t$. (Compare \cite[Proposition 7.2.1]{KMbook}.)
\end{proof}
Using the formula~\eqref{eq:piCS} for $(\Pi^{\operatorname{gC},\sigma}_*)_{(a, s, \phi)}$, we can describe the trajectories of $\mathcal{X}q^{\operatorname{gC}, \sigma}$ more explicitly, as the paths $(a(t),s(t),\phi(t))$ in slicewise Coulomb gauge that satisfy
\begin{align}
\nonumber & \frac{d}{dt} a = - *da - s^2\pi ( \tau(\phi, \phi)) - \pi(\q^0(a,s\phi)), \\
\label{eqn:xqgcsigma} & \frac{d}{dt} s = - \Lambda_\q(a,s,\phi)s, \\
\nonumber & \frac{d}{dt} \phi = - D_a \phi - \tilde{q}^1(a,s,\phi) + \Lambda_\q(a,s,\phi)\phi - s^2 Gd^*(\tau(\phi,\phi))\phi - Gd^*(\q^0(a,s\phi))\phi.
\end{align}
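Note that the equation for $s$ implies that the reducible locus $\{s = 0\}$ is invariant under the flow, and that the condition $s \geq 0$ is preserved: along a trajectory, $s$ satisfies the linear ODE $\frac{d}{dt}s = - \Lambda_\q(a,s,\phi)\, s$, so $s(t)$ is an exponential factor times its initial value and cannot change sign or reach zero unless it vanishes identically.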
Alternatively, we can write the trajectory equations as
\begin{align}
\nonumber \frac{d}{dt} a &= - *da - s^2\pi ( \tau(\phi, \phi)) - \pi(\q^0(a,s\phi)), \\
\label{eq:xqgcsigmaalternate} \frac{d}{dt} s &= - \Lambda_\q(a,s,\phi)s, \\
\nonumber \frac{d}{dt} \phi &= - D_a \phi - s^2 \xi(\phi)\phi - \widetilde{\eta}_{\q}^1(a,s,\phi) + \Lambda_\q(a,s,\phi)\phi \\
\nonumber & = -D \phi - \tilde{c}^1(a,s,\phi) - \widetilde{\eta}_{\q}^1(a,s,\phi) + \Lambda_\q(a,s,\phi)\phi,
\end{align}
where $\xi(\phi)$ is as in \eqref{eq:cmap}, and we use $\widetilde{\eta}_{\q}^1(a,s,\phi)$ to denote $ \int_0^1 \D_{(a, sr\phi)} {\eta}_{\q}^1(0,\phi) dr$.
By making use of $\G^{\operatorname{gC}}(Z)$-equivariance as in \eqref{eq:hateta}, the equations defining the flow of $\mathcal{X}q^{\operatorname{gC}, \sigma}$ define a section
\begin{equation}\label{eq:Fqgctau}
\F^{\operatorname{gC}, \tau}_{\q}: \mathcal{C}^{\operatorname{gC}, \tau}(Z) \to \V^{\operatorname{gC}, \tau}(Z),
\end{equation}
so that in temporal gauge we have
$$ \F^{\operatorname{gC}, \tau}_{\q} = \frac{d}{dt} + \mathcal{X}qgcsigma.$$
It follows that the space of trajectories of $\mathcal{X}qgcsigma$ modulo the action of $S^1$ is the zero set of $\F^{\operatorname{gC},\tau}_{\q}$ modulo the action of $\G^{\operatorname{gC}}(Z)$. As in Section~\ref{sec:AdmPer}, it will be useful to consider the extension of $\F^{\operatorname{gC},\tau}_{\q}$ to $\widetilde{\mathcal{C}}^{\operatorname{gC},\tau}(Z)$, so that we may differentiate this map.
\section{Hessians}
In Section 12.4 in \cite{KMbook}, Kronheimer and Mrowka study the derivative of the vector field $\mathcal{X}q^{\sigma}$ on $\mathcal{C}^{\sigma}_k(Y)$. At each $x \in \mathcal{C}^{\sigma}_{k}(Y)$, this is an operator between the local Coulomb slices:
$$ \operatorname{Hess}^{\sigma}_{\q, x} : \K^{\sigma}_{k,x} \to \K^{\sigma}_{k-1, x},$$
called the {\em Hessian} (in the blow-up). The Hessian is needed in Floer theory for several reasons. First, the non-degeneracy of a stationary point $x$ is expressed in terms of the surjectivity of $\operatorname{Hess}^{\sigma}_{\q, x}$. Second, we need to study Hessians at all points (not necessarily stationary) in order to construct the operator
\begin{equation}
\label{eq:dHess}
\frac{d}{dt} + \operatorname{Hess}^{\sigma}_{\q},
\end{equation}
which can be applied to vector fields along a path $\gamma$ in $\mathcal{C}^{\sigma}(Y)$. If $\gamma$ is a flow trajectory for $\mathcal{X}q^{\sigma}$, then the surjectivity of $d/dt + \operatorname{Hess}^{\sigma}_{\q}$ indicates that the moduli space of trajectories is regular at $\gamma$. Also, for arbitrary $\gamma$ (not necessarily a flow trajectory), we need the operator~\eqref{eq:dHess} to describe the relative grading of stationary points, and to construct orientations on the moduli spaces of trajectories.
We will do the same analysis in global Coulomb gauge. We will first define a $\tilde{g}$-Hessian before the blow-up, at irreducible points $(a, \phi)$ (that is, those with $\phi \neq 0$). We will then construct a $\tilde{g}$-Hessian on the blow-up, well-defined at all points.
\subsection{The Hessian at irreducibles} \label{sec:Hess}
We start by recalling the original Hessian at irreducibles, as defined in Section 12.3 of \cite{KMbook}:
\begin{equation}
\label{eq:HessIrr}
\operatorname{Hess}_{\q,x} = \Pi^{\operatorname{lC}}_x \circ \D_{x} \mathcal{X}q : \K_{k,x} \to \K_{k-1,x}.
\end{equation}
This formula applies to any $x=(a, \phi) \in \mathcal{C}_k(Y)$ with $\phi \neq 0$.
\begin{remark}
When $x$ is a stationary point, we can actually drop the projection $\Pi^{\operatorname{lC}}_x$ from the formula for $\operatorname{Hess}_{\q,x}$. This is because $\mathcal{X}q$ is a formal gradient, so it is orthogonal to the gauge orbits; therefore, it can be viewed as a section of $\K_{k-1}$, and its derivative at a zero point must automatically land in the local Coulomb slice.
\end{remark}
The $\tilde{g}$-Hessian is defined as:
\begin{equation}
\label{eq:HessGCoul}
\operatorname{Hess}^{\tilde{g}}_{\q,x} = \Pi^{\operatorname{agC}}_x \circ \Dg_x \mathcal{X}qgc : \K^{\operatorname{agC}}_{k,x} \to \K^{\operatorname{agC}}_{k-1,x}.
\end{equation}
Here, $\Dg$ denotes the connection on $\T^{\operatorname{gC}}$ induced by the $\tilde g$ metric on $W_k$. To put this in context, recall that the derivative of a vector field is well-defined at a stationary point $x$, but when $x$ is not stationary it depends on the choice of a connection in the tangent bundle. In \eqref{eq:HessIrr}, we simply viewed the vector field $\mathcal{X}q$ as a map into affine space, and took its derivative as such. Doing the same thing in global Coulomb gauge would mean using the connection $\D$ induced by the ordinary $L^2$ metric. However, $\mathcal{X}qgc$ is the $\tilde{g}$-gradient of $(\mathscr{L} + f)|_{W}$, and taking its ordinary derivative $\D_x \mathcal{X}qgc$ would yield an operator that is not symmetric. It is more natural to use the connection $\Dg$, which is explicitly given by the formula:
\begin{equation}
\label{eq:Dg}
\mathcal{D}^{\tilde g}(X) = \Pi^{\operatorname{gC}}_* \circ \mathcal{D}( \Pi^{\operatorname{elC}}(X)) \circ \Pi^{\operatorname{elC}}.
\end{equation}
Here, $X$ is a vector field on $W_k$, and $\Pi^{\operatorname{elC}}(X)$ is a section of $\T_k$ over $W_k$. We implicitly extend $\Pi^{\operatorname{elC}}(X)$ to a $\Go_{k+1}$-invariant vector field on the whole configuration space $\mathcal{C}_k(Y)$; this allows us to take its derivative in the direction $\Pi^{\operatorname{elC}}(Z)$, for some $Z$.
Let us check that the formula \eqref{eq:Dg} produces a connection compatible with $\tilde g$. If $X, Y$ and $Z$ are vector fields on $W_k$, we have:
\begin{align*}
Z \langle X, Y \rangle_{\tilde g} &= Z \operatorname{Re} \langle\Pi^{\operatorname{elC}}(X), \Pi^{\operatorname{elC}}(Y) \rangle_{L^2} \\
&= \Pi^{\operatorname{elC}}(Z) \operatorname{Re} \langle\Pi^{\operatorname{elC}}(X), \Pi^{\operatorname{elC}}(Y) \rangle_{L^2}\\
&= \operatorname{Re} \langle (\mathcal{D}( \Pi^{\operatorname{elC}}(X))) (\Pi^{\operatorname{elC}}(Z)), \Pi^{\operatorname{elC}}(Y) \rangle_{L^2} + \langle \Pi^{\operatorname{elC}}(X), (\mathcal{D}( \Pi^{\operatorname{elC}}(Y))) (\Pi^{\operatorname{elC}}(Z))\rangle_{L^2} \\
&= \operatorname{Re} \langle (\mathcal{D}^{\tilde g}X) (Z), \Pi^{\operatorname{elC}}(Y) \rangle_{L^2} + \langle \Pi^{\operatorname{elC}}(X), (\mathcal{D}^{\tilde g}Y) (Z)\rangle_{L^2} \\
&= \operatorname{Re} \langle \Pi^{\operatorname{elC}}( (\mathcal{D}^{\tilde g}X) (Z)), \Pi^{\operatorname{elC}}(Y) \rangle_{L^2} + \langle \Pi^{\operatorname{elC}}(X), \Pi^{\operatorname{elC}}((\mathcal{D}^{\tilde g}Y) (Z))\rangle_{L^2} \\
&= \operatorname{Re} \langle (\mathcal{D}^{\tilde g}X)(Z), Y \rangle_{\tilde g} + \langle X, (\mathcal{D}^{\tilde g}Y) (Z)\rangle_{\tilde g}.
\end{align*}
Here, we have implicitly extended $\Pi^{\operatorname{elC}}(X)$ and $\Pi^{\operatorname{elC}}(Y)$ to $\Go_{k+1}$-invariant vector fields on the configuration space. The second equality above is true because the function $$\operatorname{Re} \langle\Pi^{\operatorname{elC}}(X), \Pi^{\operatorname{elC}}(Y) \rangle_{L^2} $$ is $\Go_{k+1}$-invariant, and therefore its derivative in the direction $Z-\Pi^{\operatorname{elC}}(Z) \in \J^{\circ}_{k}$ vanishes. For some of the other equalities, we used repeatedly the fact that the $L^2$ inner product with something in the enlarged local Coulomb slice is unchanged by applying either $\Pi^{\operatorname{gC}}_*$ or $ \Pi^{\operatorname{elC}}$ to the other factor. This last fact is true because both of these operations consist in adding a vector in $\J^{\circ}_{k}$, and such vectors are $L^2$ perpendicular to the enlarged local Coulomb slice.
One can also check that the connection $\Dg$ is torsion-free:
\begin{align*}
(\mathcal{D}^{\mathfrak{t}ilde g}X)(Y) - (\mathcal{D}^{\mathfrak{t}ilde g}Y)(X) &= \Pi^{\operatorname{gC}}_* \circ \mathcal{D}( \Pi^{\operatorname{elC}}(X)) (\Pi^{\operatorname{elC}}(Y))- \Pi^{\operatorname{gC}}_* \circ \mathcal{D}( \Pi^{\operatorname{elC}}(Y)) (\Pi^{\operatorname{elC}}(X))\\
&= \Pi^{\operatorname{gC}}_* [\Pi^{\operatorname{elC}}(X), \Pi^{\operatorname{elC}}(Y)] \\
&=[X, Y].
\end{align*}
To see the last equality, it suffices to check how the respective vector fields act on $\Go_{k+1}$-invariant functions on the configuration space. On such functions $f$, the action of a vector field does not change if we apply $\Pi^{\operatorname{elC}}$ or $\Pi^{\operatorname{gC}}_*$ to that vector field (because this means changing it by a vector field in a direction tangent to $\Go_{k+1}$). The equality follows easily from this observation.
\begin{remark}
It follows from the above discussion that $\Dg$ is the exact analogue of the Levi-Civita connection on the infinite-dimensional manifold $W$ with respect to the $\tilde{g}$-metric. Therefore, when we compute the $\tilde{g}$-Hessian of a real-valued function on $W$, we will obtain a symmetric operator.
\end{remark}
From Section~\ref{sec:coulombs} we know that the projection $\Pi^{\operatorname{gC}}_*$ maps $\mathcal{K}^{\operatorname{e}}$ isomorphically onto $\T^{\operatorname{gC}}$, with inverse $\Pi^{\operatorname{elC}}$. Since $\mathcal{X}qgc = \Pi^{\operatorname{gC}}_* \mathcal{X}q$, we also have $\mathcal{X}q = \Pi^{\operatorname{elC}} \mathcal{X}qgc$. From here, in view of \eqref{eq:Dg}, we can obtain a simpler formula for the Hessian $\operatorname{Hess}^{\tilde{g}}_{\q,x}$:
\begin{equation}
\label{eq:altHess}
\operatorname{Hess}^{\tilde{g}}_{\q,x} = \Pi^{\operatorname{agC}}_x \circ \D_x \mathcal{X}q \circ \Pi^{\operatorname{elC}}_x : \K^{\operatorname{agC}}_{k,x} \to \K^{\operatorname{agC}}_{k-1,x}.
\end{equation}
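Indeed, substituting $\mathcal{X}q = \Pi^{\operatorname{elC}} \mathcal{X}qgc$ into \eqref{eq:Dg} gives
$$\Dg_x \mathcal{X}qgc = \Pi^{\operatorname{gC}}_* \circ \D_x \mathcal{X}q \circ \Pi^{\operatorname{elC}}_x,$$
and after composing with $\Pi^{\operatorname{agC}}_x$ the factor $\Pi^{\operatorname{gC}}_*$ can be dropped: $\Pi^{\operatorname{gC}}_*$ differs from the identity by a vector in $\J^{\circ}$, and such vectors are annihilated by $\Pi^{\operatorname{agC}}_x$.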
\begin{remark}
The vector field $\mathcal{X}qgc$ points $\tilde{g}$-orthogonally to the $S^1$-orbits on $W_{k-1}$. Hence, when $x$ is a stationary point of $\mathcal{X}qgc$, the image of $\D^{\tilde g}\mathcal{X}qgc$ must be contained in $\K_{k-1}^{\operatorname{agC}}$. Further, in this case the derivative of $\mathcal{X}qgc$ is independent of which connection we use. Therefore, when $x$ is stationary, we can write
\begin{equation}
\label{eq:statHess}
\operatorname{Hess}^{\tilde{g}}_{\q,x} = \D_x \mathcal{X}qgc = \Dg_x \mathcal{X}qgc : \K^{\operatorname{agC}}_{k,x} \to \K^{\operatorname{agC}}_{k-1,x}.
\end{equation}
\end{remark}
It is shown in \cite[Proposition 12.3.1]{KMbook} that $\operatorname{Hess}_{\q,x}$ is a Fredholm operator of index zero. Similarly, using the $\tilde{g}$-Hessian we have:
\begin{lemma}\label{lem:hessianfredholm}
For any $x=(a, \phi) \in W_k$ with $\phi \neq 0$, the operator $\operatorname{Hess}^{\tilde{g}}_{\q,x}: \K^{\operatorname{agC}}_{k,x} \to \K^{\operatorname{agC}}_{k-1,x}$ is Fredholm of index zero. Therefore, it is surjective if and only if it is injective.
\end{lemma}
\begin{proof} We see from \eqref{eq:altHess} that $\operatorname{Hess}^{\tilde{g}}_{\q,x}$ is the conjugate of $\operatorname{Hess}_{\q, x}$ by the isomorphism $\Pi^{\operatorname{agC}}_x: \K_{k,x} \to \K^{\operatorname{agC}}_{k,x}$, with inverse $\Pi^{\operatorname{elC}}_x$ (see Lemma~\ref{lem:bijection}). Since $\operatorname{Hess}_{\q, x}$ is Fredholm of index $0$, so is $\operatorname{Hess}^{\tilde{g}}_{\q, x}$.
\end{proof}
At this point it is helpful to recall the proof of \cite[Proposition 12.3.1]{KMbook}, which says that $\operatorname{Hess}_{\q,x}$ is a Fredholm operator of index zero. That proof will serve as a model for some of our later arguments.
We start by recalling \cite[Definition 12.2.1]{KMbook}:
\begin{definition}
An operator $L$ is called {\em $k$-almost self-adjoint first-order elliptic} ($k$-ASAFOE) if it is of the form
$$ L=L_0+h$$
where
\begin{itemize}
\item $L_0$ is a first-order, self-adjoint, elliptic differential operator (with smooth coefficients) acting on sections of a vector bundle $E \to Y$;\\
\item $h: C^\infty(Y; E) \to L^2(Y; E)$ is a linear operator on sections of $E$ that extends to a bounded map on $L^2_j(Y; E)$ for all $j$ with $|j| \leq k$.
\end{itemize}
\end{definition}
For $|j| \leq k$, note that a $k$-ASAFOE operator $L_0+h$ is Fredholm of index zero when viewed as a map $L^2_j(Y;E) \to L^2_{j-1}(Y; E)$. Indeed, this statement is true for $L_0$ (by ellipticity and self-adjointness), and adding $h$ is just a compact perturbation.
Note that $\operatorname{Hess}_{\q, x}$ is not $k$-ASAFOE, because it does not act on {\em all} sections of a vector bundle. The remedy in \cite{KMbook} was to introduce an extended Hessian
$$ \widehat{\operatorname{Hess}}_{\q, x} : \T_{k, x} \oplus L^2_k(Y; i\R) \to \T_{k-1, x} \oplus L^2_{k-1}(Y; i\R)$$
by the formula
\begin{equation}
\label{eq:ExtHess}
\widehat{\operatorname{Hess}}_{\q, x} = \begin{pmatrix}
\D_x \mathcal{X}q & \mathbf{d}_x \\
\mathbf{d}_x^* & 0
\end{pmatrix}.
\end{equation}
Here, $\mathbf{d}_x$ encodes the infinitesimal gauge action at $x=(a, \phi)$:
$$\mathbf{d}_x (\xi) = (-d\xi, \xi \phi)$$
and its adjoint
$$\mathbf{d}_x^*(b, \psi) = -d^* b + i \operatorname{Re}\langle i\phi, \psi\rangle$$
can be used to define the local Coulomb slice $\K_{k, x}$ by the condition $\mathbf{d}_x^* =0$.
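The notation is justified by the fact that $\mathbf{d}_x^*$ is the formal adjoint of $\mathbf{d}_x$ with respect to the real $L^2$ inner product: integrating by parts in the form component and checking pointwise that $\operatorname{Re}\langle \xi\phi, \psi\rangle = \operatorname{Re}\langle \xi, i\operatorname{Re}\langle i\phi, \psi\rangle\rangle$ for purely imaginary $\xi$, one finds
$$\operatorname{Re}\langle \mathbf{d}_x(\xi), (b,\psi)\rangle_{L^2} = \operatorname{Re}\langle \xi, \mathbf{d}_x^*(b,\psi)\rangle_{L^2}.$$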
The extended Hessian is a self-adjoint $k$-ASAFOE operator acting on sections of the bundle $iT^*Y \oplus \mathbb{S} \oplus i\R$. With respect to the decomposition $\T_{j, x}\oplus L^2_j(Y; i\R) = \J_{j, x} \oplus \K_{j, x} \oplus L^2_j(Y; i\R)$, with $j=k$ for the domain and $j=k-1$ for the target, we can write\footnote{In \cite[Equation (12.7)]{KMbook}, the term $h_1$ was inadvertently missing.}
$$
\widehat{\operatorname{Hess}}_{\q, x} = \begin{pmatrix}
h_1 & h_2 & \mathbf{d}_x \\
h_3 & \operatorname{Hess}_{\q, x} & 0 \\
\mathbf{d}_x^* & 0 & 0
\end{pmatrix}.
$$
Here, the $h_i$ terms preserve Sobolev regularity and hence are compact as maps $L^2_k \to L^2_{k-1}$. (As an aside, note also that the $h_i$ terms are zero at stationary points.) Thus, dropping $h_1, h_2$ and $h_3$ from the expression above yields a Fredholm operator of index $0$. In turn, since the $h_i$ are compact, this shows that $\operatorname{Hess}_{\q, x}$ must be Fredholm of index $0$.
\subsection{The Hessian on the blow-up} \label{sec:HessBlowUp}
We now define the analogous Hessian on the blow-up.
In Section 12.4 of \cite{KMbook}, the Hessian on the blow-up is defined by:
\[
\operatorname{Hess}^{\sigma}_{\q,x} = \Pi^{\operatorname{lC},\sigma}_x \circ \D^\sigma_x \mathcal{X}qsigma : \K^{\sigma}_{k,x} \to \K^{\sigma}_{k-1,x}.
\]
Note that since $\mathcal{C}^\sigma_k(Y)$ is only a submanifold of a vector space, the derivative of the section $\mathcal{X}qsigma$ need not land in its tangent bundle $\T^\sigma_{k-1}$. What we actually mean\footnote{In \cite{KMbook}, the notation $\D$ is used instead of $\D^\sigma$. We use $\D^\sigma$ to distinguish this derivative from the $L^2$ derivative $\D$ in the ambient vector space $L^2(Y;iT^*Y) \oplus \rr\oplus L^2(Y;\mathbb{S})$, which we will also work with.} by $\D^\sigma_x \mathcal{X}qsigma$ is the composition of the derivative in the larger vector space with the $L^2$ orthogonal projection onto $\T^\sigma_{k-1}$. To obtain the Hessian, we further project to the corresponding local Coulomb slice by $\Pi^{\operatorname{lC},\sigma}_x$.
To show that $\operatorname{Hess}^{\sigma}_{\q,x} $ is Fredholm of index zero, Kronheimer and Mrowka introduced the extended Hessian in the blow-up:
\begin{equation}
\label{eq:ExtHessBlowUp}
\widehat{\operatorname{Hess}}^{\sigma}_{\q, x} = \begin{pmatrix}
\D^\sigma_x \mathcal{X}qsigma & \mathbf{d}^{\sigma}_x \\
\mathbf{d}_x^{\sigma, \dagger} & 0
\end{pmatrix}.
\end{equation}
Here,
\begin{align}
\label{eq:dsigma}
\mathbf{d}^{\sigma}_x (\xi) &= (-d\xi, 0,\xi \phi), \\
\label{eq:dsigmadagger}
\mathbf{d}^{\sigma, \dagger}_x (b, r, \psi) &= -d^*b + is^2\operatorname{Re}\langle i\phi, \psi \rangle + i |\phi|^2 \operatorname{Re} \mu_Y(\langle i\phi, \psi\rangle),
\end{align}
so that $ \K_j^{\sigma} = \ker \mathbf{d}^{\sigma, \dagger}_x$, $\J_j^{\sigma} = \operatorname{im}\mathbf{d}^{\sigma}_x$ are the local Coulomb slice and the tangent to the gauge orbit, as in \eqref{eq:BlowUpDecompose}. Note that on the blow-up, $\mathbf{d}_x^{\sigma, \dagger}$ is not quite the adjoint to $\mathbf{d}_x^{\sigma}$, just as the decomposition $\T^{\sigma}_j = \J^{\sigma}_j \oplus \K^{\sigma}_j$ is not $L^2$ orthogonal. Therefore, unlike the extended Hessian $\widehat{\operatorname{Hess}}_{\q, x}$, the operator $\widehat{\operatorname{Hess}}^{\sigma}_{\q, x}$ is not symmetric. Moreover, $\widehat{\operatorname{Hess}}^{\sigma}_{\q,x}$ does not a priori act on sections of a vector bundle, but rather on the subspace of $ \T^{\sigma}_{j, x} \oplus L^2_j(Y; i\R)$, where we have the condition $\operatorname{Re} \langle \phi, \psi\rangle_{L^2} = 0$. Nevertheless, we can combine the $r$ and $\psi$ components of $(b, r, \psi) \in \T^{\sigma}_{j, x}$ into
$$ \boldsymbol {\psi}= \psi + r \phi,$$
and then we can think of $\widehat{\operatorname{Hess}}^{\sigma}_{\q, x}$ as acting on sections of $iT^*Y \oplus \mathbb{S} \oplus i\R$. It then becomes a $k$-ASAFOE operator (hence Fredholm of index zero), and starting from here one can show that $\operatorname{Hess}^{\sigma}_{\q,x} $ is Fredholm of index zero---much as in the proof of Lemma~\ref{lem:hessianfredholm}. Further, when $x$ is a non-degenerate stationary point, $\operatorname{Hess}^{\sigma}_{\q,x} $ is invertible and has real spectrum (even though it is not self-adjoint). See \cite[Section 12.4]{KMbook} for details.
Let us now move to global Coulomb gauge. Fix $x = (a, s, \phi) \in W^{\sigma}$. In this setting, we define a $\tilde{g}$-Hessian in the blow-up by:
\begin{equation}
\label{eq:HessianBlowupCG}
\operatorname{Hess}^{\tilde{g},\sigma}_{\q,x} = \Pi^{\operatorname{agC},\sigma}_x \circ \Dgs_x \mathcal{X}qgcsigma : \K^{\operatorname{agC},\sigma}_{k,x} \to \K^{\operatorname{agC},\sigma}_{k-1,x},
\end{equation}
where
\begin{equation}
\label{eq:DGS}
\mathcal{D}^{\tilde g, {\sigma}}X := \Pi^{\operatorname{gC}, \sigma}_* \circ \D^\sigma(\Pi^{\operatorname{elC}, \sigma}(X)) \circ \Pi^{\operatorname{elC}, \sigma}
\end{equation}
is the connection on $\T^{\operatorname{gC}, \sigma}$ defined by analogy with \eqref{eq:Dg}.
Since $\mathcal{X}qgcsigma = \Pi^{\operatorname{gC}, \sigma}_* \circ \mathcal{X}qsigma$, in view of Lemma~\ref{lem:bijection2}(a) we have $\mathcal{X}qsigma = \Pi^{\operatorname{elC}, \sigma} \circ \mathcal{X}qgcsigma$. Therefore, we can re-write the blown-up $\tilde{g}$-Hessian as
\begin{equation}
\label{eq:Hess2}
\operatorname{Hess}^{\tilde{g},\sigma}_{\q,x} = \Pi^{\operatorname{agC},\sigma}_x \circ \D^\sigma_x \mathcal{X}qsigma \circ \Pi^{\operatorname{elC}, \sigma}_x.
\end{equation}
\begin{remark}
\label{rem:HessEqual}
When $x$ is on the blow-up locus (i.e., $s=0$), we know from Lemma~\ref{lem:bijection} (b) that the anticircular global Coulomb slice coincides with the local Coulomb slice and $\Pi^{\operatorname{lC}, \sigma}_x = \Pi^{\operatorname{agC}, \sigma}_x$. We see from \eqref{eq:Hess2} that $\operatorname{Hess}^{\tilde{g},\sigma}_{\q,x}$ agrees with the ordinary blow-up Hessian $\operatorname{Hess}^{\sigma}_{\q,x}$.
\end{remark}
\begin{lemma}
\label{lem:HessB}
$(a)$ For any $x \in W^{\sigma}_k$, the operator $\operatorname{Hess}^{\tilde{g},\sigma}_{\q,x}$ is Fredholm of index zero.
$(b)$ When $x$ is a non-degenerate stationary point of $\mathcal{X}qgcsigma$, the operator $\operatorname{Hess}^{\tilde{g},\sigma}_{\q,x} $ is invertible and has real spectrum.
\end{lemma}
\begin{proof}
When $x$ is reducible, we have $\operatorname{Hess}^{\tilde{g},\sigma}_{\q,x} = \operatorname{Hess}^{\sigma}_{\q,x}$, and the corresponding results for $\operatorname{Hess}^{\sigma}_{\q,x}$ were established in \cite[Section 12.4]{KMbook}.
When $x$ is irreducible, $\operatorname{Hess}^{\tilde{g},\sigma}_{\q,x} $ is conjugate to $\operatorname{Hess}^{\tilde{g}}_{\q,x}$ via the blow-down map. Thus, part (a) is a consequence of Lemma~\ref{lem:hessianfredholm}. For part (b), since $x$ is stationary, by \eqref{eq:altHess}, we have that $\operatorname{Hess}^{\tilde{g}}_{\q,x} = \D^{\tilde g}_{x} \mathcal{X}qgc$, and the latter operator is invertible and self-adjoint (being a formal Hessian). The conclusion follows.
\end{proof}
Let us also mention:
\begin{lemma}\label{lem:hessiancontinuity}
The map $\operatorname{Hess}^{\tilde{g},\sigma}_{\q} : \K_k^{\operatorname{agC},\sigma} \to \K_{k-1}^{\operatorname{agC},\sigma}$ is a continuous bundle map.
\end{lemma}
\begin{proof}
This follows from the continuity of the factors in \eqref{eq:Hess2}. In particular, recall that $\Pi^{\operatorname{agC}, \sigma}$ is the composition of $\Pi^{\operatorname{gC},\sigma}_*$ with the projection \eqref{eq:antiproj} onto $\K^{\operatorname{agC}, \sigma}$. The continuity of $\Pi^{\operatorname{gC},\sigma}_*$ can be proved by an argument similar to that in Lemma~\ref{lem:igc}. As for the projection \eqref{eq:antiproj}, it can be seen from that formula that it is a continuous operation. Finally, $ \Pi^{\operatorname{elC}, \sigma}$ is continuous by the same arguments as in Lemma~\ref{lem:elc}.
\end{proof}
\subsection{The split extended Hessian on the blow-up}
\label{sec:ExtSpilt}
For the purposes of this book, it is helpful to work with a different extended Hessian than the one in \eqref{eq:ExtHessBlowUp}. Precisely, we replace \eqref{eq:dsigmadagger} with an operator
$$ \mathbf{d}^{\operatorname{sp}, \sigma, \dagger}_x : \T^{\sigma}_{j,x} \to L^2_{j-1}(Y; i\rr),$$
defined as follows. We decompose the domain $\T^{\sigma}_{j,x}= \K^\sigma_{j,x} \oplus \J^{\sigma}_{j,x} $ into three summands $\K^\sigma_{j,x} \oplus \J^{\operatorname{gC}, \sigma}_{j,x} \oplus \J^{\circ, \sigma}_{j,x}$, and also decompose the codomain $L^2_{j-1}(Y; i\R)$ into $(\operatorname{im}d^*)_{j-1} \oplus i\R$ (that is, into functions that integrate to zero and constant functions). With respect to these decompositions, we let
\begin{equation}
\label{eq:decomposing}
\mathbf{d}^{\operatorname{sp}, \sigma, \dagger}_x = \begin{pmatrix} 0 & 0 & -d^* \\
0 & i \operatorname{Re} \langle i\phi, \cdot \rangle_{L^2} & 0 \end{pmatrix},
\end{equation}
where $-d^*$ acts on the first component of $(-d\xi, 0, \xi\phi) \in \J^{\circ, \sigma}_{j,x}$, and $i \operatorname{Re} \langle i\phi, \cdot \rangle_{L^2}$ acts on the last component of $ (0, 0, it\phi) \in\J^{\operatorname{gC}, \sigma}_{j,x}$, $t\in \rr$. Note that $\| \phi \|_{L^2} = 1$, so $i \operatorname{Re} \langle i\phi, it\phi \rangle_{L^2}$ simply equals $it$.
One can also write a more compressed formula for $ \mathbf{d}^{\operatorname{sp}, \sigma, \dagger}_x$. Recall from
\eqref{eq:piLCsigma} that $\Pi^{\operatorname{lC},\sigma}_{(a,s,\phi)}(b,r,\psi) = (b-d\zeta,r, \psi + \zeta \phi),$
for some $\zeta = \zeta(x, b,r,\psi): Y \to i\rr$. Then:
\begin{align*}
\mathbf{d}^{\operatorname{sp}, \sigma, \dagger}_x (b,r,\psi) &= d^*d \zeta+ \mu_Y(\zeta).
\end{align*}
Note that the kernel of $ \mathbf{d}^{\operatorname{sp}, \sigma, \dagger}_x$ is the local Coulomb slice $\K^{\sigma}_{j,x}$, just as for $\mathbf{d}^{\sigma, \dagger}_x$.
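To verify this from the compressed formula: if $(b,r,\psi) \in \K^{\sigma}_{j,x}$ then $\zeta = 0$ and the expression vanishes; conversely, if $d^*d\zeta + \mu_Y(\zeta) = 0$, then pairing with $\zeta$ in $L^2$ gives $\|d\zeta\|_{L^2}^2 + c\,|\mu_Y(\zeta)|^2 = 0$ for a positive constant $c$ (interpreting $\mu_Y(\zeta)$ as the average value of $\zeta$), so $\zeta$ is a constant of mean zero, i.e.\ $\zeta = 0$, and $(b,r,\psi)$ is fixed by $\Pi^{\operatorname{lC},\sigma}_x$.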
We now define the {\em split extended Hessian in the blow-up} to be
\begin{equation}
\label{eq:SplitExtHessBlowUp}
\widehat{\operatorname{Hess}}^{\operatorname{sp}, \sigma}_{\q, x} = \begin{pmatrix}
\D^\sigma_x \mathcal{X}qsigma & \mathbf{d}^{\sigma}_x \\
\mathbf{d}_x^{\operatorname{sp}, \sigma, \dagger} & 0
\end{pmatrix}: \T^{\sigma}_{j,x} \oplus L^2_j(Y; i\rr) \to \T^{\sigma}_{j-1,x} \oplus L^2_{j-1}(Y; i\rr).
\end{equation}
At any $x$, by combining $\psi$ and $r$ into $\psis=\psi + r\phi$ as in the case of $ \widehat{\operatorname{Hess}}^{\sigma}_{\q, x}$, we see that $ \widehat{\operatorname{Hess}}^{\operatorname{sp}, \sigma}_{\q, x}$ is a $k$-ASAFOE operator. We also have:
\begin{lemma}
\label{lem:eHsplit}
If $x$ is a non-degenerate stationary point of $\mathcal{X}qgcsigma$, then $\widehat{\operatorname{Hess}}^{\operatorname{sp}, \sigma}_{\q, x}$ is invertible and has real spectrum.
\end{lemma}
\begin{proof}
The argument is similar to that in \cite[proof of Lemma 12.4.3]{KMbook}. The operator $ \widehat{\operatorname{Hess}}^{\operatorname{sp}, \sigma}_{\q, x}$ has a block form where one block is ${\operatorname{Hess}}^{\sigma}_{\q, x}$ and the other is
\begin{equation}\label{eq:block-hess-dsigma}
\begin{pmatrix}
0 &\mathbf{d}^{\sigma}_x \\
\mathbf{d}^{\operatorname{sp}, \sigma, {\dagger}}_x & 0
\end{pmatrix}.
\end{equation}
It is established in \cite[proof of Lemma 12.4.3]{KMbook} that the operator ${\operatorname{Hess}}^{\sigma}_{\q, x}$ is invertible and has real spectrum. To justify that \eqref{eq:block-hess-dsigma} is invertible and has real spectrum, it suffices to prove that the operator $\mathbf{d}^{\operatorname{sp}, \sigma, \dagger}_x \mathbf{d}^{\sigma}_x$ is self-adjoint and strictly positive.
To see this last fact, we compute
$$\mathbf{d}^{\operatorname{sp}, \sigma, \dagger}_x \mathbf{d}^{\sigma}_x (\xi) = \mathbf{d}^{\operatorname{sp}, \sigma, \dagger}_x (-d\xi, 0, \xi \phi) = \Delta \xi + \mu_Y(\xi).$$
With respect to the orthogonal decomposition $L^2_{j-1}(Y; i\R)=(\operatorname{im}d^*)_{j-1} \oplus i\R$, we have
$$ \mathbf{d}^{\operatorname{sp}, \sigma, \dagger}_x \mathbf{d}^{\sigma}_x = \begin{pmatrix}
\Delta & 0 \\
0 & 1
\end{pmatrix}.$$
Both diagonal entries, $\Delta$ and $1$, are self-adjoint and positive on the respective summands. This completes the proof.
\end{proof}
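For clarity, here is why strict positivity of $\mathbf{d}^{\operatorname{sp}, \sigma, \dagger}_x \mathbf{d}^{\sigma}_x$ gives the claim about \eqref{eq:block-hess-dsigma}: if that block operator, acting on $\J^{\sigma}_{j,x} \oplus L^2_j(Y;i\R)$, has an eigenvector $(v, \xi)$ with eigenvalue $\lambda$, then $\mathbf{d}^{\sigma}_x \xi = \lambda v$ and $\mathbf{d}^{\operatorname{sp},\sigma,\dagger}_x v = \lambda \xi$, so $\mathbf{d}^{\operatorname{sp},\sigma,\dagger}_x \mathbf{d}^{\sigma}_x \xi = \lambda^2 \xi$; since $\lambda^2$ then lies in the spectrum of a self-adjoint, strictly positive operator, $\lambda$ must be real and nonzero. (If $\xi = 0$, injectivity of $\mathbf{d}^{\operatorname{sp},\sigma,\dagger}_x$ on $\J^{\sigma}_{j,x} = \operatorname{im} \mathbf{d}^{\sigma}_x$ forces $v = 0$ as well.)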
\begin{remark}
We could have defined a split extended Hessian in the blow-down as well (at irreducibles), but we had no need for that construction.
\end{remark}
\subsection{The $\tilde g$--extended Hessian on the blow-up} \label{sec:ExtHessBlowUp}
We now construct another extended Hessian in the blow-up, using the $\tilde{g}$ metric. The definition is somewhat similar to that of the split extended Hessian in Section~\ref{sec:ExtSpilt} above. Precisely, we set
\begin{equation}
\label{eq:ExtHessBlowUpGC}
\widehat{\operatorname{Hess}}^{\tilde{g}, \sigma}_{\q, x} = \begin{pmatrix}
S_x \circ \D^\sigma_x \mathcal{X}q^{\sigma} \circ S_x^{-1} & \mathbf{d}^{\sigma}_x \\[.5em]
\mathbf{d}^{\sigma, \tilde{\dagger}}_x & 0
\end{pmatrix}.
\end{equation}
Here, $S_x$ is the shear map from \eqref{eq:shear}, and to define $\mathbf{d}^{\sigma,\tilde{\dagger}}_x: \T^{ \sigma}_{j,x} \to L^2_{j-1}(Y; i\R)$, we use the decompositions
$$\T^{\sigma}_{j,x}=\K^{\operatorname{agC}, \sigma}_{j,x} \oplus \J^{\operatorname{gC}, \sigma}_{j,x} \oplus \J^{\circ, \sigma}_{j,x}, \ \ \ \ \ L^2_{j-1}(Y; i\R) = (\operatorname{im}d^*)_{j-1} \oplus i\R,$$
and set
\begin{equation}
\label{eq:decomposingtilde}
\mathbf{d}^{\operatorname{sp}incigma, \mathfrak{t}ilde{\dagger}}_x = \begin{pmatrix} 0 & 0 & -d^* \\
0 & i\langle i\phi, \cdot \rangle_{\mathfrak{t}ilde g} & 0 \end{pmatrix}.
\end{equation}
Alternatively, we have the formula
\begin{equation}
\label{eq:dcsdagger}
\mathbf{d}^{\operatorname{sp}incigma, \mathfrak{t}ilde{\dagger}}_x(b, r, \psi) = -d^* b + i \langle i\phi, \psi + (Gd^*b) \phi \rangle_{\mathfrak{t}ilde{g}}.
\end{equation}
Observe that the kernel of $\mathbf{d}^{\operatorname{sp}incigma, \mathfrak{t}ilde{\dagger}}_x$ is the anticircular global Coulomb slice $\K^{\operatorname{agC}, \operatorname{sp}incigma}_{j,x}$, given by the conditions $d^*b=0$ and $\langle i\phi, \psi\rangle_{\mathfrak{t}ilde g}=0$.
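(As a quick check against \eqref{eq:dcsdagger}: the term $-d^*b$ has mean zero over $Y$, while $i \langle i\phi, \psi + (Gd^*b) \phi \rangle_{\mathfrak{t}ilde{g}}$ is a constant in $i\R$, so $\mathbf{d}^{\operatorname{sp}incigma, \mathfrak{t}ilde{\dagger}}_x(b, r, \psi) = 0$ forces
$$ d^*b = 0 \qquad \mbox{and} \qquad \langle i\phi, \psi + (Gd^*b) \phi \rangle_{\mathfrak{t}ilde g} = \langle i\phi, \psi \rangle_{\mathfrak{t}ilde g} = 0,$$
where the middle equality uses that $Gd^*b = 0$ once $d^*b = 0$.)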
At any $x$, by combining $\psi$ and $r$ into $\psis=\psi + r\phi$ as before, we would like to claim that $ \widehat{\operatorname{Hess}}^{\mathfrak{t}ilde{g}, \operatorname{sp}incigma}_{\q, x}$ is a $k$-ASAFOE operator, as a first step towards proving that $\widehat{\operatorname{Hess}}^{\mathfrak{t}ilde{g}, \operatorname{sp}incigma}_{\q,x}$ is invertible and has real spectrum at non-degenerate stationary points. It turns out that if $x$ is not a stationary point, then $\widehat{\operatorname{Hess}}^{\mathfrak{t}ilde{g}, \operatorname{sp}incigma}_{\q, x}$ is only $(k-1)$-ASAFOE, so we will have to be careful. To establish these properties, we will use a different operator, which agrees with $ \widehat{\operatorname{Hess}}^{\mathfrak{t}ilde{g}, \operatorname{sp}incigma}_{\q, x}$ at stationary points. This operator will also be useful in Section~\ref{sec:linearized}; it is described in the following lemma.
\begin{lemma}\label{lem:fakehessian}
Let $x \in W^\operatorname{sp}incigma_k$ and $1 \leq j \leq k$. Consider the operator $\mathcal{H}^\sigma_x: \T^\operatorname{sp}incigma_{j,x} \oplus L^2_j(Y;i\R) \mathfrak{t}o \T^\operatorname{sp}incigma_{j-1,x} \oplus L^2_{j-1}(Y;i\R) $ given in block form by
$$
\mathcal{H}^\sigma_x = \begin{pmatrix} (\D^\operatorname{sp}incigma_x \mathcal{X}qgcsigma) \circ \Pi^{\operatorname{gC},\operatorname{sp}incigma}_* & \mathbf{d}^\operatorname{sp}incigma_x \\ \mathbf{d}^{\operatorname{sp}incigma,\mathfrak{t}ilde{\dagger}}_x & 0 \end{pmatrix}.
$$
\begin{enumerate}[(a)]
\item\label{fakehessian-asafoe} Under the identification of $\T^\operatorname{sp}incigma_{j,x} \oplus L^2_j(Y;i\R)$ with $L^2_j(Y;iT^*Y \oplus \mathbb{S} \oplus i\R)$ and likewise for $j-1$, the operator $\mathcal{H}^\sigma_x$ is $(k-1)$-ASAFOE with linear part $$L_0 = \begin{pmatrix} *d & 0 & -d \\ 0 & D & 0 \\ -d^* & 0 & 0 \end{pmatrix};$$
\item \label{fakehessian-compact-asafoe} when $j=k$, the operator $\mathcal{H}^\sigma_x$ differs from $L_0$ by a compact operator from $L^2_k$ to $L^2_{k-1}$;
\item \label{fakehessian-k-asafoestationary} if $x$ is a stationary point of $\mathcal{X}qgcsigma$, then $\mathcal{H}^\sigma_x$ is $k$-ASAFOE;
\item \label{fakehessian-realstationary} if $x$ is a stationary point of $\mathcal{X}qgcsigma$, then $\mathcal{H}^\sigma_x = \widehat{\operatorname{Hess}}^{\mathfrak{t}ilde{g}, \operatorname{sp}incigma}_{\q, x}$.
\end{enumerate}
\end{lemma}
\begin{proof}
\eqref{fakehessian-asafoe} For notation, we write $L^\operatorname{sp}incigma:\T^\operatorname{sp}incigma_{j,x} \oplus L^2_j(Y;i\R) \mathfrak{t}o \T^\operatorname{sp}incigma_{j-1,x} \oplus L^2_{j-1}(Y;i\R)$ for the operator induced by $L_0$. Our goal is to show that $\mathcal{H}^\sigma_x$ differs from $L^\operatorname{sp}incigma$ by bounded operators from $L^2_{j}$ to $L^2_{j}$ for $1 \leq j \leq k-1$. (Technically, we must show that these are induced by operators from $C^\infty$ to $L^2$, but this will be clear from the explicit description.) We break up the analysis of $\mathcal{H}^\sigma_x$ into how it acts on $L^2_j(Y;i\R)$, $\T^{\operatorname{gC},\operatorname{sp}incigma}_{j,x}$ and $\J^{\circ,\operatorname{sp}incigma}_{j,x}$. In fact, we will show the difference is bounded from $L^2_j$ to $L^2_j$ even when $j = k$, except for one term.
First, consider $\beta \in L^2_j(Y;i\R)$. We have that $\mathcal{H}^\sigma_x(\beta) = ( - d\beta , 0, \beta \phi)$. Then, we have $\mathcal{H}^\sigma_x(\beta) - L^\operatorname{sp}incigma(\beta) = \beta \phi$, which is bounded as a linear map from $L^2_j$ to $L^2_{j}$ by Sobolev multiplication whenever $1 \leq j \leq k$ since $\phi \in L^2_k$.
Next, let $v = (-d\xi, 0, \xi \phi) \in \J^{\circ,\operatorname{sp}incigma}_{j,x}$. Then, since $v$ is in the kernel of the infinitesimal global Coulomb projection $\Pi^{\operatorname{gC},\operatorname{sp}incigma}_*$, we have
$$\mathcal{H}^\sigma_x(v) = (0, \mathbf{d}^{\operatorname{sp}incigma,\mathfrak{t}ilde{\dagger}}_x(v)) = (0, d^* d \xi) \in \T^\operatorname{sp}incigma_{j-1,x} \oplus L^2_{j-1}(Y;i\R).$$ In the second component, $\mathcal{H}^\sigma_x(v)$ and $L^\operatorname{sp}incigma(v)$
agree. Thus, it remains to show that the component of $L^\operatorname{sp}incigma(v)$ landing in $\T^{\operatorname{sp}incigma}_{j-1,x}$ is bounded from $L^2_{j}$ to $L^2_{j}$ for $1 \leq j \leq k-1$. We have
$$L^\operatorname{sp}incigma(v) - \mathcal{H}^\sigma_x(v) = (0, \operatorname{Re} \langle D(\xi \phi), \phi \rangle_{L^2}, D(\xi \phi) - \operatorname{Re} \langle D(\xi \phi), \phi \rangle_{L^2}\phi) \in \T^{\operatorname{sp}incigma}_{j-1,x}.$$
This differs from $v \mapsto (0,0,D(\xi\phi))$ by a bounded operator from $L^2_j$ to $L^2_k$, so we focus on $D(\xi\phi) = \rho(d \xi) \phi + \xi D\phi$. Since $v \in L^2_j$, we see that $d\xi \in L^2_j$ and thus $\rho(d\xi) \phi$ is bounded in $L^2_j$ by Sobolev multiplication. Therefore, $v \mapsto (0,0,\rho(d\xi) \phi)$ is bounded from $L^2_j$ to $L^2_{j}$ (even if $j = k$). For the term $\xi D\phi$, we note that $D\phi \in L^2_{k-1}$, so the map $(-d\xi, 0, \xi \phi) \mapsto \xi D\phi$ is bounded as a linear map from $L^2_{j}$ to $L^2_{j}$ as long as $j \leq k-1$. This establishes the desired form for $v \in \J^{\circ,\operatorname{sp}incigma}_{j,x}$.
It thus remains to compare $\mathcal{H}^\sigma_x$ and $L^\operatorname{sp}incigma$ on $\T^{\operatorname{gC},\operatorname{sp}incigma}_{j,x}$. Let $v = (b,r,\psi) \in \T^{\operatorname{gC},\operatorname{sp}incigma}_{j,x}$. First, note that the component of $\mathcal{H}^\sigma_x(v)$ landing in $L^2_{j-1}(Y;i\R)$ is given by $i \langle i \phi, \psi \rangle_{\mathfrak{t}ilde{g}}$, which is bounded as a map from $\T^{\operatorname{gC},\operatorname{sp}incigma}_{j,x}$ to $L^2_j(Y;i\R)$ for $j \leq k$; this is compatible with the fact that $L^\operatorname{sp}incigma(b,r,\psi)$ has no component landing in $L^2_{j-1}(Y;i\R)$ since $d^*b = 0$. Thus, it remains to focus on the component of $\mathcal{H}^\sigma_x(v)$ contained in $\T^{\operatorname{sp}incigma}_{j-1,x}$. Since $\widehat{\operatorname{Hess}}^\operatorname{sp}incigma_{\q,x}$ is $k$-ASAFOE with linear term also given by $L_0$, it suffices to show that $\D^\operatorname{sp}incigma_x \mathcal{X}qsigma$ differs from $\D^\operatorname{sp}incigma_x \mathcal{X}qgcsigma$ by bounded operators from $L^2_j$ to $L^2_j$. Direct computation shows
$$
\mathcal{X}qsigma (a,s,\phi)- \mathcal{X}qgcsigma(a,s,\phi) = \left( dGd^* \q^0(a,s\phi), 0, -(Gd^*\q^0(a,s\phi)) \phi \right).
$$
Recall that to compute $\D^\operatorname{sp}incigma$, we compute the $L^2$ derivative as a map into the affine space $L^2(Y;iT^*Y) \oplus \rr\oplus L^2(Y;\mathbb{S})$ and then apply $L^2$ projection to $\T^\operatorname{sp}incigma_{0,x}$. We first compute the affine derivative
\begin{align*}
\left( \D_{(a,s,\phi)} \mathcal{X}qsigma- \D_{(a,s,\phi)} \mathcal{X}qgcsigma \right)(b,r,\psi) = \big(&dGd^*\D_{(a,s\phi)} \q^0(b, r\phi + s\psi) ,0, \\
& - Gd^*\D_{(a,s\phi)} \q^0(b, r\phi + s\psi) \phi - Gd^*\q^0(a,s\phi) \psi \big).
\end{align*}
Projecting, we obtain
\begin{align}\label{eq:Dsigma-XqvsXqgc}
\left(\D^\operatorname{sp}incigma_{(a,s,\phi)} \mathcal{X}qsigma - \D^\operatorname{sp}incigma_{(a,s,\phi)} \mathcal{X}qgcsigma \right)(b,r,\psi)= \big(&dGd^*\D_{(a,s\phi)} \q^0(b, r\phi + s\psi) ,0, \\ \notag &- Gd^*\D_{(a,s\phi)} \q^0(b, r\phi + s\psi) \phi - Gd^*\q^0(a,s\phi) \psi + \\ \notag & \hspace{1.5in} \operatorname{Re} \langle Gd^*\q^0(a,s\phi) \psi , \phi \rangle_{L^2} \phi\big).
\end{align}
Here we are using that $Gd^*\D_{(a,s\phi)} \q^0(b, r\phi + s\psi) \phi$ is real $L^2$ orthogonal to $\phi$ since $\q^0$ is purely imaginary. This operator is seen to be bounded from $L^2_j$ to $L^2_j$ for $j \leq k$ since $\q$ is tame and because $G$ raises Sobolev regularity by 2. \\
\noindent \eqref{fakehessian-compact-asafoe} From the above argument, we saw that $\mathcal{H}^\sigma_x$ differs from $L^\operatorname{sp}incigma$ by a bounded map from $L^2_j$ to $L^2_j$ for all $1 \leq j \leq k$, except possibly for a single term: $(-d\xi, 0, \xi\phi) \mapsto \xi D\phi$ where $(-d\xi, 0, \xi\phi) \in \J^{\circ,\operatorname{sp}incigma}_{k,x}$. By postcomposing with the compact inclusion of $L^2_j$ into $L^2_{j-1}$, we have the desired result, except for this exceptional term. In this case, since $D\phi \in L^2_{k-1}$, we have that this operator is bounded as a linear map from $L^2_{k-1}$ to $L^2_{k-1}$. Therefore, we precompose with the compact inclusion from $L^2_{k}$ to $L^2_{k-1}$ to obtain the desired compactness. \\
\noindent \eqref{fakehessian-k-asafoestationary} As in the above case, we previously showed that $\mathcal{H}^\sigma_x$ differs from $L^\operatorname{sp}incigma$ by a bounded map from $L^2_j$ to $L^2_j$ for all $1 \leq j \leq k$, except for the term involving $D\phi$. If $x$ is a stationary point of $\mathcal{X}qgcsigma$, then we know that $\phi$ is actually in $L^2_{k+1}$ and thus $D\phi$ is contained in $L^2_k$, and the argument proceeds with $j = k$. \\
\noindent \eqref{fakehessian-realstationary} Let $x$ be a stationary point of $\mathcal{X}qgcsigma$ (and hence also of $\mathcal{X}qsigma$). By \eqref{eq:ExtHessBlowUpGC}, it suffices to show that $S_x \circ \D^\operatorname{sp}incigma_x \mathcal{X}qsigma \circ S^{-1}_x$ agrees with $\D^\operatorname{sp}incigma_x \mathcal{X}qgcsigma \circ \Pi^{\operatorname{gC},\operatorname{sp}incigma}_*$ on $\T^\operatorname{sp}incigma_{j,x}$. First, suppose $v \in \J^{\circ,\operatorname{sp}incigma}_{j,x}$. Then, we have that $S^{-1}_x(v) = v$. It then follows that $S_x \circ \D^\operatorname{sp}incigma_x \mathcal{X}qsigma \circ S^{-1}_x(v) = 0$, since $v$ is tangent to the gauge orbit at a stationary point. This agrees with the fact that the kernel of $\Pi^{\operatorname{gC},\operatorname{sp}incigma}_*$ is $\J^{\circ,\operatorname{sp}incigma}$.
Next, consider the case $v \in \T^{\operatorname{gC},\operatorname{sp}incigma}_{j,x}$. Since $S^{-1}_x(v) - v$ is contained in $\J^{\circ,\operatorname{sp}incigma}_{j,x}$, we have that $\D^\operatorname{sp}incigma_x \mathcal{X}qsigma \circ S^{-1}_x (v) = \D^\operatorname{sp}incigma_x \mathcal{X}qsigma(v) $. Further, since $x$ is a stationary point, we have that $\D^\operatorname{sp}incigma_x \mathcal{X}qsigma(v) \in \K^\operatorname{sp}incigma_{j,x}$, and therefore $S_x(\D^\operatorname{sp}incigma_x \mathcal{X}qsigma(v)) = \Pi^{\operatorname{gC},\operatorname{sp}incigma}_* \D^\operatorname{sp}incigma_x \mathcal{X}qsigma(v)$. Thus, it suffices to prove that
\begin{equation}\label{eq:almost-Dsigma-XqvsXqgc}
\Pi^{\operatorname{gC},\operatorname{sp}incigma}_* \D^\operatorname{sp}incigma_x \mathcal{X}qsigma(v) = \D^\operatorname{sp}incigma_x \mathcal{X}qgcsigma (v).
\end{equation}
Since $\D^\operatorname{sp}incigma_x \mathcal{X}qgcsigma(v) \in \T^{\operatorname{gC},\operatorname{sp}incigma}_{j,x}$, we have that \eqref{eq:almost-Dsigma-XqvsXqgc} is equivalent to
\begin{equation}\label{eq:Pigc-Dsigma-XqvsXqgc}
\Pi^{\operatorname{gC},\operatorname{sp}incigma}_* \D^\operatorname{sp}incigma_x \mathcal{X}qsigma(v) = \Pi^{\operatorname{gC},\operatorname{sp}incigma}_* \D^\operatorname{sp}incigma_x \mathcal{X}qgcsigma (v).
\end{equation}
Thus, we can establish \eqref{eq:Pigc-Dsigma-XqvsXqgc} if applying infinitesimal global Coulomb projection to \eqref{eq:Dsigma-XqvsXqgc} vanishes. We split \eqref{eq:Dsigma-XqvsXqgc} into two parts. The first term is
$$
(0, 0, Gd^*\q^0(a,s\phi) \psi + \operatorname{Re} \langle Gd^*\q^0(a,s\phi) \psi , \phi \rangle_{L^2} \phi).
$$
Since $x$ is a stationary point of $\mathcal{X}qsigma$, we have that $ -\q^0(a,s\phi) = *da$. Therefore, $Gd^*\q^0(a,s\phi) = -Gd^*(*da) = 0$ (as $d^*(*\,da) = \pm * d\, da = 0$), and the above expression vanishes. Thus, it suffices to show that infinitesimal global Coulomb projection vanishes on the remaining term
$$
(dGd^*\D_{(a,s\phi)} \q^0(b, r\phi + s\psi) ,0, - Gd^*\D_{(a,s\phi)} \q^0(b, r\phi + s\psi) \phi).
$$
This term is contained in $\J^\circ_{j,x}$, which is precisely the kernel of the infinitesimal global Coulomb projection. This completes the proof.
\end{proof}
With the above lemma, we can now show that $\widehat{\operatorname{Hess}}^{\mathfrak{t}ilde{g},\operatorname{sp}incigma}_{\q,x}$ shares two important properties with $\widehat{\operatorname{Hess}}^{\operatorname{sp}incigma}_{\q,x}$ and $\widehat{\operatorname{Hess}}^{\operatorname{sp}incp, \operatorname{sp}incigma}_{\q,x}$.
\begin{lemma}
\label{lem:eH}
If $x$ is a non-degenerate stationary point of $\mathcal{X}qgcsigma$, then $\widehat{\operatorname{Hess}}^{\mathfrak{t}ilde{g}, \operatorname{sp}incigma}_{\q, x}$ is invertible and has real spectrum.
\end{lemma}
\begin{proof}
At stationary points, with respect to the decomposition $\T^{\operatorname{sp}incigma}_{j,x}=\K^{\operatorname{sp}incigma}_{j,x} \oplus \J^{\operatorname{sp}incigma}_{j,x}$, the derivative $\D_x \mathcal{X}q^{\operatorname{sp}incigma}$ takes the form
$$ \begin{pmatrix}
\operatorname{Hess}^{\operatorname{sp}incigma}_{\q,x} & 0 \\ 0 & 0
\end{pmatrix}.$$
Recall from Remark~\ref{rem:shear} that $S_x$ maps $\K^{\operatorname{sp}incigma}_{j,x}$ to $\K^{\operatorname{agC}, \operatorname{sp}incigma}_{j,x}$ and preserves $\J^{\operatorname{sp}incigma}_{j,x}$. Therefore, after conjugating by the shear $S_x$, we have
$$
S_x \circ \D^\operatorname{sp}incigma_x \mathcal{X}q^{\operatorname{sp}incigma} \circ S_x^{-1} =
\begin{pmatrix}
\operatorname{Hess}^{\mathfrak{t}ilde g, \operatorname{sp}incigma}_{\q,x} & 0 \\ 0 & 0
\end{pmatrix},$$
with respect to the decomposition $\K^{\operatorname{agC}, \operatorname{sp}incigma}_{j,x} \oplus \J^{\operatorname{sp}incigma}_{j,x}$.
We deduce that the operator $ \widehat{\operatorname{Hess}}^{\mathfrak{t}ilde{g}, \operatorname{sp}incigma}_{\q, x}$ has a block form where one block is ${\operatorname{Hess}}^{\mathfrak{t}ilde{g}, \operatorname{sp}incigma}_{\q, x}$ and the other is
\begin{equation}\label{eq:block-hess-dsigma-tilde}
\begin{pmatrix}
0 &\mathbf{d}^{\operatorname{sp}incigma}_x \\
\mathbf{d}^{\operatorname{sp}incigma, \mathfrak{t}ilde{\dagger}}_x & 0
\end{pmatrix}.
\end{equation}
By Lemma~\ref{lem:HessB}, ${\operatorname{Hess}}^{\mathfrak{t}ilde{g}, \operatorname{sp}incigma}_{\q, x}$ is invertible and has real spectrum.
It remains to check that \eqref{eq:block-hess-dsigma-tilde} is invertible and has real spectrum. As in the proof of Lemma~\ref{lem:eHsplit}, we do this by checking that $\mathbf{d}^{\operatorname{sp}incigma, \mathfrak{t}ilde{\dagger}}_x \mathbf{d}^{\operatorname{sp}incigma}_x$ is self-adjoint and strictly positive.
Indeed, we have
$$ \mathbf{d}^{\operatorname{sp}incigma, \mathfrak{t}ilde{\dagger}}_x \mathbf{d}^{\operatorname{sp}incigma}_x (\xi) = \Delta \xi + \mu_Y(\xi) \|i \phi \|_{\mathfrak{t}ilde g}^2.$$
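In more detail, using \eqref{eq:dcsdagger}, the formula $\mathbf{d}^{\operatorname{sp}incigma}_x(\xi) = (-d\xi, 0, \xi \phi)$, and the identity $Gd^*d\xi = \xi - \mu_Y(\xi)$, one computes
$$ \mathbf{d}^{\operatorname{sp}incigma, \mathfrak{t}ilde{\dagger}}_x(-d\xi, 0, \xi \phi) = d^*d\xi + i \langle i\phi, (\xi - Gd^*d\xi) \phi \rangle_{\mathfrak{t}ilde g} = \Delta \xi + i \langle i\phi, \mu_Y(\xi) \phi \rangle_{\mathfrak{t}ilde g} = \Delta \xi + \mu_Y(\xi) \|i \phi \|_{\mathfrak{t}ilde g}^2,$$
where the last equality holds because $\mu_Y(\xi)$ is a purely imaginary constant.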
In block form with respect to the decomposition $L^2_{j-1}(Y; i\R)=(\operatorname{im}d^*)_{j-1} \oplus i\R$, we have
$$ \mathbf{d}^{\operatorname{sp}incigma, \mathfrak{t}ilde{\dagger}}_x \mathbf{d}^{\operatorname{sp}incigma}_x = \begin{pmatrix}
\Delta & 0 \\
0 & \|i \phi \|_{\mathfrak{t}ilde g}^2
\end{pmatrix}.$$
Both diagonal entries are self-adjoint and strictly positive. (Note that $\phi$ is nonzero, because it was normalized to have unit $L^2$ norm, so $\|i \phi \|_{\mathfrak{t}ilde g}^2 > 0$.)
\end{proof}
\operatorname{sp}incubsection{Interpolations} \label{sec:interpol}
In the proof of Proposition~\ref{prop:FredholmCoulomb} below, we will need to interpolate between the extended Hessians $\widehat{\operatorname{Hess}}^{\operatorname{sp}incigma}_{\q, x}$ and $\widehat{\operatorname{Hess}}^{\mathfrak{t}ilde{g}, \operatorname{sp}incigma}_{\q, x}$, by going through $k$-ASAFOE operators that are still invertible and have real spectrum.
We can do this in two steps. First, we interpolate linearly between $\widehat{\operatorname{Hess}}^{\operatorname{sp}incigma}_{\q, x}$ and the split extended Hessian $\widehat{\operatorname{Hess}}^{\operatorname{sp}incp,\operatorname{sp}incigma}_{\q, x}$.
\begin{lemma}
\label{lem:eHrho}
If $x$ is a non-degenerate stationary point of $\mathcal{X}qgcsigma$, then for any $\rho \in [0,1]$, we have that
$$(1-\rho) \cdot \widehat{\operatorname{Hess}}^{\operatorname{sp}incigma}_{\q, x} + \rho \cdot \widehat{\operatorname{Hess}}^{\operatorname{sp}incp, \operatorname{sp}incigma}_{\q, x}$$
is invertible and has real spectrum.
\end{lemma}
\begin{proof}
We use a block decomposition as in the proofs of Lemma 12.4.3 in \cite{KMbook} and of Lemma~\ref{lem:eHsplit} above. It suffices to check that
$$(1-\rho)\cdot \mathbf{d}^{\operatorname{sp}incigma, \dagger}_x \mathbf{d}^{\operatorname{sp}incigma}_x + \rho \cdot \mathbf{d}^{\operatorname{sp}incp, \operatorname{sp}incigma, \dagger}_x \mathbf{d}^{\operatorname{sp}incigma}_x$$
is self-adjoint and strictly positive. This is true because both terms are self-adjoint and strictly positive.
\end{proof}
For the second step, we interpolate between $\widehat{\operatorname{Hess}}^{\operatorname{sp}incp, \operatorname{sp}incigma}_{\q, x}$ and $\widehat{\operatorname{Hess}}^{\mathfrak{t}ilde{g}, \operatorname{sp}incigma}_{\q, x}$. We do this by considering the family of metrics on $\T_{j,x}$ given by
$$ g_{\rho}= (1-\rho) \cdot g_{L^2} + \rho \cdot \mathfrak{t}ilde g, \ \ \rho \in [0,1],$$
where $g_{L^2}$ denotes the $L^2$ metric. We consider the $g_{\rho}$-orthogonal complements to $\J_{j,x}$ and $\J^{\circ}_{j,x}$, which we denote by $\K^{\rho}_{j,x}$ and $\K^{\rho, \operatorname{e}}_{j,x}$, respectively. After blowing-up, we obtain a complement $\K^{\rho, \operatorname{sp}incigma}_{j,x}$ to $\J^{\operatorname{sp}incigma}_{j,x}$ and a complement $\K^{\rho, \operatorname{e}, \operatorname{sp}incigma}_{j,x}$ to $\J^{\circ, \operatorname{sp}incigma}_{j,x}$. We construct the shear map $S^{\rho}_x$ that takes $\K^{\operatorname{e},\operatorname{sp}incigma}_{j,x}$ into $\K^{\rho, \operatorname{e}, \operatorname{sp}incigma}_{j,x}$ and is the identity on $\J^{\circ, \operatorname{sp}incigma}_{j,x}$. Further, we let
\begin{equation}
\label{eq:decomposingrho}
\mathbf{d}^{\rho, \operatorname{sp}incigma, \dagger}_x = \begin{pmatrix} 0 & 0 & -d^* \\
0 & i\langle i\phi, \cdot \rangle_{g_{\rho}} & 0 \end{pmatrix} : \K^{\rho, \operatorname{sp}incigma}_{j,x} \oplus \J^{\operatorname{gC}, \operatorname{sp}incigma}_{j,x} \oplus \J^{\circ, \operatorname{sp}incigma}_{j,x} \mathfrak{t}o (\operatorname{im}d^*)_{j-1} \oplus i\R,
\end{equation}
so that $ \mathbf{d}^{0, \operatorname{sp}incigma, \dagger}_x= \mathbf{d}^{\operatorname{sp}incp, \operatorname{sp}incigma, \dagger}_x$ and $\mathbf{d}^{1, \operatorname{sp}incigma, \dagger}_x
= \mathbf{d}^{\operatorname{sp}incigma, \mathfrak{t}ilde{\dagger}}_x$. Finally, define
\begin{equation}
\label{eq:ExtHessBlowUpRho}
\widehat{\operatorname{Hess}}^{\rho, \operatorname{sp}incigma}_{\q, x} = \begin{pmatrix}
S^{\rho}_x \circ \D^\operatorname{sp}incigma_x \mathcal{X}q^{\operatorname{sp}incigma} \circ (S^{\rho}_x)^{-1} & \mathbf{d}^{\operatorname{sp}incigma}_x \\[.5em]
\mathbf{d}^{\rho, \operatorname{sp}incigma, {\dagger}}_x & 0
\end{pmatrix}.
\end{equation}
The same arguments as in Lemmas~\ref{lem:eH} and ~\ref{lem:eHrho} give the following:
\begin{lemma}
\label{lem:eHrho2}
If $x$ is a non-degenerate stationary point of $\mathcal{X}qgcsigma$, and $\rho \in [0,1]$, then $\widehat{\operatorname{Hess}}^{\rho, \operatorname{sp}incigma}_{\q, x}$ is invertible and has real spectrum.
\end{lemma}
\operatorname{sp}incection{Non-degeneracy of stationary points in Coulomb gauge} \label{sec:nondegCoulomb}
Recall that an irreducible stationary point $x = (a,\phi)$ of $\mathcal{X}q$ is non-degenerate if $\mathcal{X}q$ is transverse to the gauge orbit at $x$ or, equivalently, transverse to the subbundle $\J_{k-1}$ at $x$. We would like to rephrase this condition both in terms of Coulomb gauge and in terms of Hessians.
\begin{lemma}\label{lem:nondegeneracycoulomb}
Let $x \in W_k$ be an irreducible stationary point of $\mathcal{X}qgc$. The following are equivalent:
\begin{enumerate}[(i)]
\item $x$ is non-degenerate (i.e. $\mathcal{X}q$ is transverse to $\J_{k-1}$ at $x$),
\item $\operatorname{Hess}_{\q,x} : \K_{k,x} \mathfrak{t}o \K_{k-1,x}$ is surjective,
\item $\mathcal{X}qgc$ is transverse to $\J^{\operatorname{gC}}_{x}$,
\item $\operatorname{Hess}^{\mathfrak{t}ilde{g}}_{\q,x} : \K^{\operatorname{agC}}_{k,x} \mathfrak{t}o \K^{\operatorname{agC}}_{k-1,x}$ is surjective.
\end{enumerate}
\end{lemma}
\begin{proof}
The equivalence (i) $\iff$ (ii) is proved in \cite[Lemma 12.4.1]{KMbook}. It is a consequence of the fact that at a critical point, with respect to the decompositions $\T_j = \J_j \oplus \K_j$ (with $j=k$ for the domain and $j=k-1$ for the image), the derivative $\D_x \mathcal{X}q$ has the block form
$$\begin{pmatrix}
0 & 0 \\
0 & \operatorname{Hess}_{\q, x}
\end{pmatrix}.$$
For the equivalence (iii) $\iff$ (iv), we apply a similar reasoning: $\Dg_x \mathcal{X}qgc$ vanishes on $\J^{\operatorname{gC}}_{x}$, the tangents to the $S^1$-orbits in $W_k$; therefore, $\Dg_x \mathcal{X}qgc$ has a block form with respect to the decomposition $\T^{\operatorname{gC}}_j = \J^{\operatorname{gC}} \oplus \K^{\operatorname{gC}}_j$ from \eqref{eq:JKcoulomb}; and the only nonzero entry in this block form is $\operatorname{Hess}^{\mathfrak{t}ilde{g}}_{\q,x}$. (Since we are at a stationary point, note that in fact $\Dg_x \mathcal{X}qgc = \D_x \mathcal{X}qgc.$)
Finally, for the equivalence (ii) $\iff$ (iv), recall from \eqref{eq:altHess} and the proof of Lemma~\ref{lem:hessianfredholm} that $\operatorname{Hess}^{\mathfrak{t}ilde{g}}_{\q,x}$ is the conjugate of $\operatorname{Hess}_{\q, x}$ by the isomorphism $\Pi^{\operatorname{agC}}_x$, with inverse $\Pi^{\operatorname{elC}}_x$. Hence, one Hessian is surjective if and only if the other one is.
\end{proof}
In the blow-up, we can also rephrase non-degeneracy of stationary points in terms of the Hessian. Rather than stating the exact analogue of Lemma~\ref{lem:nondegeneracycoulomb}, let us emphasize the following result:
\begin{lemma}\label{lem:nondegeneracycoulombblowup}
Let $x$ be a stationary point of $\mathcal{X}qgcsigma$. Then, $x$ is non-degenerate $\iff\operatorname{Hess}^{\mathfrak{t}ilde{g},\operatorname{sp}incigma}_{\q,x}$ is injective $\iff \operatorname{Hess}^{\mathfrak{t}ilde{g},\operatorname{sp}incigma}_{\q,x}$ is bijective.
\end{lemma}
\begin{proof}
If $x$ is irreducible, $\operatorname{Hess}^{\mathfrak{t}ilde{g},\operatorname{sp}incigma}_{\q,x}$ is conjugate to $\operatorname{Hess}^{\mathfrak{t}ilde{g}}_{\q,x}$ via the blow-down (which identifies $\K^{\operatorname{agC},\operatorname{sp}incigma}$ with $\K^{\operatorname{agC}}$) and we may apply Lemmas~\ref{lem:hessianfredholm} and ~\ref{lem:nondegeneracycoulomb}.
If $x$ is a reducible stationary point in the blow-up, we have $\operatorname{Hess}^{\mathfrak{t}ilde{g},\operatorname{sp}incigma}_{\q,x} = \operatorname{Hess}^\operatorname{sp}incigma_{\q,x}$; see Remark~\ref{rem:HessEqual}. The claim then follows from the analogous statement for $\operatorname{Hess}^\operatorname{sp}incigma_{\q,x}$ in \cite[Section 12.4]{KMbook}.
\end{proof}
So far we have only worked with $W^\operatorname{sp}incigma$. The above constructions can also be phrased in terms of the quotient $W^\operatorname{sp}incigma/S^1$. Given $x \in W^{\operatorname{sp}incigma}$, we will write $[x]$ for its class in $W^\operatorname{sp}incigma/S^1$. Note that the bundles $\K^{\operatorname{agC},\operatorname{sp}incigma}_j$ are $S^1$-invariant and therefore the space $\K^{\operatorname{agC},\operatorname{sp}incigma}_{j,x}$ is canonically identified with the tangent space at $[x]$ in the $L^2_j$ completion of the tangent bundle to $W^\operatorname{sp}incigma/S^1$. The vector field $\mathcal{X}qgcsigma$ on $W_k^\operatorname{sp}incigma$ is $S^1$-invariant and takes values in $\K^{\operatorname{agC},\operatorname{sp}incigma}_{k-1}$. Thus, it descends to a vector field, denoted $\mathcal{X}qagcsigma$, on $W_k^\operatorname{sp}incigma/S^1$.
\begin{lemma}
\label{lem:idCoulomb}
We have the following identifications, given by composing global Coulomb projection with projection to the quotient by $S^1$:
\begin{equation}
\label{eq:EquivStat2}
\bigl\{ \mathfrak{t}ext{stationary points of } \mathcal{X}qsigma \bigr\} / \G_{k+1} \ \xrightarrow{\mathmakebox[2em]{\cong}} \ \bigl\{ \mathfrak{t}ext{stationary points of } \mathcal{X}qagcsigma \bigr\}
\end{equation}
and
\begin{equation}
\label{eq:EquivTraj2}
\bigl\{ \mathfrak{t}ext{trajectories of } \mathcal{X}qsigma \bigr\} / \G_{k+1} \ \xrightarrow{\mathmakebox[2em]{\cong}} \ \bigl\{ \mathfrak{t}ext{trajectories of } \mathcal{X}qagcsigma \bigr\}.
\end{equation}
\end{lemma}
\begin{proof}
This is immediate from \eqref{eq:EquivStat1} and \eqref{eq:EquivTraj1}.
\end{proof}
Let $x$ be a stationary point of $\mathcal{X}qgcsigma$, so that $[x]$ is a stationary point of $\mathcal{X}qagcsigma$. Since $\mathcal{X}qagcsigma$ is a section of $\K^{\operatorname{agC},\operatorname{sp}incigma}_{k-1}$, under the identification of $\K^{\operatorname{agC},\operatorname{sp}incigma}_{k-1,x}$ with the $L^2_{k-1}$-completion of $T_{[x]} (W^\operatorname{sp}incigma_k/S^1)$, the derivative $\D^\operatorname{sp}incigma_{[x]}\mathcal{X}qagcsigma := \D^\operatorname{sp}incigma_x \mathcal{X}qgcsigma = \D^{\mathfrak{t}ilde{g},\operatorname{sp}incigma}_x \mathcal{X}qgcsigma$ takes values in $\K^{\operatorname{agC},\operatorname{sp}incigma}_{k-1, x}$. In view of Equation~\eqref{eq:Hess2}, we have
$$ \operatorname{Hess}^{\mathfrak{t}ilde{g},\operatorname{sp}incigma}_{\q,x} = \D^\operatorname{sp}incigma_{[x]}\mathcal{X}qagcsigma.$$
The following is then a direct consequence of Lemma~\ref{lem:nondegeneracycoulombblowup}:
\begin{lemma}
\label{lem:rephraseStat}
In terms of the identification \eqref{eq:EquivStat2}, non-degeneracy of a stationary point $x$ of $\mathcal{X}qsigma$ is equivalent to the injectivity (or bijectivity) of $\D^\operatorname{sp}incigma\mathcal{X}qagcsigma$ at the corresponding point $[\Pi^{\operatorname{gC}, \operatorname{sp}incigma}(x)] \in W^\operatorname{sp}incigma/S^1$.
\end{lemma}
Just as in Lemma~\ref{lem:rephraseStat} we rephrased the non-degeneracy of stationary points, our next goal will be to rephrase the regularity condition on the moduli spaces of trajectories of $\mathcal{X}qsigma$ in terms of global Coulomb gauge; that is, re-write it as a condition on the moduli spaces of trajectories of $\mathcal{X}qagcsigma$. This will be accomplished in Section~\ref{sec:NDTcoulomb}. Before that, as preliminary steps, we will:
\begin{itemize}
\item Embed the moduli space of trajectories of $\mathcal{X}qagcsigma$ into a larger space of paths in Coulomb gauge (in Section~\ref{sec:path});
\item Describe the tangent bundle to this path space (in Section~\ref{sec:4Dcoulomb});
\item Linearize the equations that define the moduli space in Coulomb gauge (in Section~\ref{sec:linearized}).
\end{itemize}
\operatorname{sp}incection{Path spaces}
\label{sec:path}
Fix points $x,y \in W^{\operatorname{sp}incigma}$, and a smooth path $\gamma_0$ in $W^{\operatorname{sp}incigma}$ from $x$ to $y$, such that $\gamma_0(t)$ agrees with $x$ near $-\infty$ and agrees with $y$ near $+\infty$. Let $Z = {\mathbb{R}}\mathfrak{t}imes Y$. By analogy with the definition of $\mathcal{C}_k^\mathfrak{t}au(x, y)$ in \eqref{eq:Cktau}, we define the space of four-dimensional configurations
\[
\mathcal{C}_k^{\operatorname{gC}, \mathfrak{t}au}(x, y) = \{\gamma \in \mathcal{C}^{\operatorname{gC}, \mathfrak{t}au}_{k, loc}({\mathbb{R}}\mathfrak{t}imes Y) \mid \gamma - \gamma_0 \in L^2_k(Z; iT^*Z) \oplus L^2_k(\rr; \rr) \oplus L^2_k(Z; \mathbb{S}^+)\}.
\]
We can write $\gamma$ as a path
$$\gamma(t) = (a(t)+\alpha(t)dt, s(t), \phi(t)).$$
Recall from Section~\ref{sec:cylinders} that the condition $\gamma \in \mathcal{C}_{k, loc}^{\operatorname{gC},\mathfrak{t}au}({\mathbb{R}}\mathfrak{t}imes Y)$ means that $\gamma$ is in pseudo-temporal gauge ($\alpha(t)$ is constant on each slice), and that the one-form component $a(t)$ is in the kernel of $d^*$, for all $t$. Moreover, since we are in the $\mathfrak{t}au$ model, we must have $\|\phi(t)\|_{L^2(Y)}=1$ and $s(t) \geq 0$ for all $t$.
The space $\mathcal{C}_k^{\operatorname{gC}, \mathfrak{t}au}(x, y)$ embeds in a Hilbert manifold $\widetilde{\mathcal{C}}_k^{\operatorname{gC},\mathfrak{t}au}(x,y)$, defined as above but using $\widetilde{\mathcal{C}}^{\operatorname{gC},\mathfrak{t}au}_{k,loc}({\mathbb{R}}\mathfrak{t}imes Y)$; that is, dropping the condition $s(t) \geq 0$.
Further, in the spirit of Section~\ref{sec:cylinders}, we let $W_k^{\mathfrak{t}au}(x, y)$ (respectively $\widetilde{W}_k^{\mathfrak{t}au}(x,y)$) denote the subset of $\mathcal{C}_k^{\operatorname{gC},\mathfrak{t}au}(x, y)$ (respectively $\widetilde{\mathcal{C}}_k^{\operatorname{gC},\mathfrak{t}au}(x,y)$) consisting of configurations with $\alpha(t)=0$, i.e., in temporal gauge.
The gauge group $$\G_{k+1}^{\operatorname{gC}}(Z) := \{u: \rr\mathfrak{t}o S^1 \mid 1-u \in L^2_{k+1}(\R; \cc) \}$$
acts on $\mathcal{C}_k^{\operatorname{gC},\mathfrak{t}au}(x, y)$ and $\widetilde{\mathcal{C}}_k^{\operatorname{gC},\mathfrak{t}au}(x,y)$.
Let $\mathcal{B}^{\gCoul, \tau}_k(x, y)$ denote the quotient of $\mathcal{C}_k^{\operatorname{gC},\mathfrak{t}au}(x, y)$ by $\G_{k+1}^{\operatorname{gC}}(Z)$. Note that $\mathcal{B}^{\gCoul, \tau}_k(x, y)$ only depends on the classes $[x]$ and $[y]$ in $W^\operatorname{sp}incigma/S^1$, up to canonical diffeomorphism. Therefore, we will use the notation $\mathcal{B}^{\gCoul, \tau}_k([x], [y])$. The quotient of $\widetilde{\mathcal{C}}_k^{\operatorname{gC},\mathfrak{t}au}(x,y)$ by the same gauge action is denoted $\widetilde{\B}^{\operatorname{gC}, \mathfrak{t}au}_k([x],[y])$. One can check that $\mathcal{B}^{\gCoul, \tau}_k([x], [y])$ and $\widetilde{\B}^{\operatorname{gC}, \mathfrak{t}au}_k([x],[y])$ are Hausdorff in the quotient topology; compare \cite[Proposition 13.3.4]{KMbook}.
\begin{remark}\label{rmk:no-temporal}
It is important to note that given an element $\gamma \in \mathcal{C}^{\operatorname{gC},\mathfrak{t}au}_k(x,y)$, it cannot necessarily be moved to be in temporal gauge by an element of $\G_{k+1}^{\operatorname{gC}}(Z)$. However, we can act by a four-dimensional gauge transformation in $\G_{k+1,loc}^{\operatorname{gC}}(Z)$ to move $\gamma$ to Coulomb gauge, but the result will land in $\mathcal{C}^{\operatorname{gC},\mathfrak{t}au}_k(x, u y)$, for some $u \in S^1$. Since $\B^{\operatorname{gC},\mathfrak{t}au}_k(x,y)$ is canonically identified with $\B^{\operatorname{gC},\mathfrak{t}au}_k(x, uy)$, we can still think of $[\gamma]$ as having a representative in temporal gauge.
\end{remark}
These constructions are similar to those in Section~\ref{sec:AdmPer}, where we had a space $\B^{\mathfrak{t}au}_k([x], [y])$ of configurations in $\mathcal{C}_k^\mathfrak{t}au(x, y)$ modulo four-dimensional gauge transformations. One can also consider the corresponding Hilbert manifolds $\widetilde \B^{\mathfrak{t}au}_k([x], [y])$ and $\widetilde{\mathcal{C}}_k^{\mathfrak{t}au}(x, y)$.
We would like to relate $\widetilde \B^\mathfrak{t}au_k([x],[y])$ to $\widetilde{\B}^{\operatorname{gC}, \mathfrak{t}au}_k([x],[y])$. We first define a map
$$ \Pi^{\operatorname{gC}, \mathfrak{t}au}: \widetilde{\mathcal{C}}^{\mathfrak{t}au}_k(x, y) \mathfrak{t}o \widetilde{\mathcal{C}}_k^{\operatorname{gC},\mathfrak{t}au}(\Pi^{\operatorname{gC}, \operatorname{sp}incigma}(x), \Pi^{\operatorname{gC}, \operatorname{sp}incigma}(y))$$
by the formula
\begin{equation}
\label{eq:zebra}
(a(t)+\alpha(t)dt, s(t), \phi(t)) \mapsto \Pi^{\operatorname{gC}, \operatorname{sp}incigma}(a(t), s(t), \phi(t)) + (\mu_Y(\alpha(t))dt, 0, 0).
\end{equation}
\begin{lemma}
There is a well-defined, continuous map
\begin{equation}
\label{eq:Pigctau}
\Pi^{[\operatorname{gC}], \mathfrak{t}au}: \widetilde \B^{\mathfrak{t}au}_k([x], [y]) \mathfrak{t}o \widetilde{\B}^{\operatorname{gC}, \mathfrak{t}au}_k([x], [y]), \ \ [\gamma] \mathfrak{t}o [\Pi^{\operatorname{gC}, \mathfrak{t}au} (\gamma)].
\end{equation}
This sends $\B^{\mathfrak{t}au}_k([x], [y])$ to $\mathcal{B}^{\gCoul, \tau}_k([x], [y])$.
\end{lemma}
\begin{proof}
Let us first check that $[\Pi^{\operatorname{gC}, \mathfrak{t}au}(\gamma)]$ does not depend on the choice of representative $\gamma$ for the class $[\gamma]$. Indeed, suppose we change $\gamma$ by a four-dimensional gauge transformation of the form $u: Z \mathfrak{t}o S^1$. If we ignore the $\alpha(t)dt$ component and write $x(t)=(a(t), s(t), \phi(t))$, we find that $u$ acts on each $x(t)$ as the three-dimensional gauge transformation $u(t) := u|_{\{t\} \mathfrak{t}imes Y}$. Write $u(t) = e^{f(t)}$ with $f(t): Y \mathfrak{t}o i\R$. Using \eqref{eq:piCS} and the fact that $Gd^*df = f - \mu_Y(f)$, we see that
$$\Pi^{\operatorname{gC}, \operatorname{sp}incigma} (e^{f(t)} \cdot x(t)) = e^{\mu_Y(f(t))} \Pi^{\operatorname{gC}, \operatorname{sp}incigma}(x(t)).$$
From here we get $$ \Pi^{[\operatorname{gC}], \mathfrak{t}au}(e^{f} \cdot \gamma) = e^{\mu_Y(f)} \cdot \Pi^{[\operatorname{gC}], \mathfrak{t}au}(\gamma),$$
where the gauge transformation $e^f$ on the left is a four-dimensional gauge transformation, and $e^{\mu_Y(f)}$ is the slicewise constant gauge transformation obtained by taking the average of $f$ in each slice. Thus, the class $[\Pi^{\operatorname{gC}, \mathfrak{t}au}(\gamma)]$ is unchanged.
The fact that slicewise application of $\Pi^{\operatorname{gC}, \operatorname{sp}incigma}$ preserves the four-dimensional $L^2_k$ condition can be seen from the formula \eqref{eq:piCS}, together with the Sobolev multiplication rule on infinite cylinders \cite[Theorem 13.2.2]{KMbook}. Further, averaging $\alpha(t)$ slicewise preserves the $L^2_k$ condition by the Cauchy-Schwarz inequality. Continuity of the map $\Pi^{[\operatorname{gC}], \mathfrak{t}au}$ follows from similar arguments.
Finally, since $s(t)$ (and hence the condition $s(t) \geq 0$) is preserved by global Coulomb projection, we have that $\B^{\mathfrak{t}au}_k([x], [y])$ is mapped to $\mathcal{B}^{\gCoul, \tau}_k([x], [y])$.
\end{proof}
Observe that the map \eqref{eq:Pigctau} is surjective but not injective. For example, suppose we have two configurations $\gamma_1$ and $\gamma_2$ that are in temporal gauge and that differ in each slice by a three-dimensional gauge transformation $u: Y \mathfrak{t}o S^1$ (non-constant in $t$). Then $\Pi^{[\operatorname{gC}], \mathfrak{t}au}([\gamma_1]) = \Pi^{[\operatorname{gC}], \mathfrak{t}au}([\gamma_2])$, but $\gamma_1$ and $\gamma_2$ are typically not gauge equivalent as four-dimensional configurations.
Now suppose that $[x]$ and $[y]$ are stationary points of $\mathcal{X}qagcsigma$. Define $M^{\operatorname{agC}}([x],[y])$ to be the moduli space of trajectories of $\mathcal{X}qagcsigma$, considered as a subspace of $\mathcal{B}^{\gCoul, \tau}_k([x],[y])$. We can also define $M^{\operatorname{agC},\red}([x],[y])$ similarly.
\begin{proposition}\label{prop:gccorrespondence}
Every trajectory of $\mathcal{X}qagcsigma$ on $W^\operatorname{sp}incigma/S^1$ (connecting two stationary points $x$ and $y$ as in Definition~\ref{def:sw-moduli-space}) is actually in $\mathcal{B}^{\gCoul, \tau}_k([x], [y])$. Further, the map $\Pi^{[\operatorname{gC}], \mathfrak{t}au}$ produces a homeomorphism between the moduli space $M([x], [y])$ (consisting of gauge equivalence classes of trajectories of $\mathcal{X}qsigma$) and $M^{\operatorname{agC}}([x],[y])$.
\end{proposition}
\begin{proof}
The equivalence between trajectories of $\mathcal{X}qsigma$ and $\mathcal{X}qagcsigma$ (without the $L^2_k$ conditions) was already established in Lemma~\ref{lem:idCoulomb}.
Furthermore, as noted in Section~\ref{sec:AdmPer}, it is proved in \cite[Theorem 13.3.5]{KMbook} that every trajectory of $\mathcal{X}qsigma$ connecting $x$ and $y$ is gauge equivalent to a trajectory in $\mathcal{C}^{\mathfrak{t}au}_k(x, y)$. Thus, if we have a trajectory $\gamma$ of $\mathcal{X}qagcsigma$ connecting two stationary points, we can lift it to one of $\mathcal{X}qsigma$, and apply a gauge transformation to obtain a trajectory in $\mathcal{C}^{\mathfrak{t}au}_k(x, y)$. Since the map $\Pi^{[\operatorname{gC}], \mathfrak{t}au}$ preserves the $L^2_k$ condition, we deduce that $\gamma$ is in $\mathcal{B}^{\gCoul, \tau}_k([x], [y])$.
\end{proof}
\begin{remark}\label{rmk:MagcFgc}
It also follows from the proof of Proposition~\ref{prop:gccorrespondence} that $M^{\operatorname{agC}}([x],[y])$ is identified with the space of trajectories of $\mathcal{X}qgcsigma$ from $x$ to $y$ modulo the $S^1$ action, and equivalently, the zero set of $\F^{\operatorname{gC},\mathfrak{t}au}_\q$, restricted to $\mathcal{C}^{\operatorname{gC},\mathfrak{t}au}_k(x,y)$ modulo the action of $\G^{\operatorname{gC}}_{k+1}(Z)$.
\end{remark}
\operatorname{sp}incection{Four-dimensional Coulomb slices}
\label{sec:4Dcoulomb}
Fix $x, y \in W^{\operatorname{sp}incigma}$. We aim to prove that $\widetilde{\B}^{\operatorname{gC}, \mathfrak{t}au}_k([x],[y])$ is a Hilbert manifold, and to identify its tangent space. The discussion here will be modelled on the corresponding one for the space $\widetilde{\B}^\mathfrak{t}au_k([x],[y])$, following \cite[Section 14.3]{KMbook}.
Let us first review the analysis for $\widetilde{\B}^\mathfrak{t}au_k([x],[y])$. This space is the quotient of $\widetilde{\mathcal{C}}^{\mathfrak{t}au}_k(x, y)$ by the gauge action. The $L^2_j$ completion of the tangent space to $ \widetilde{\mathcal{C}}^{\mathfrak{t}au}_k(x, y)$ at $\gamma=(a, s, \phi)$ is
\begin{align}
\label{eq:fir}
\T^{\mathfrak{t}au}_{j, \gamma} &= \{(b, r, \psi) \mid \operatorname{Re} \langle \phi(t), \psi(t) \rangle_{L^2(Y)}=0, \ \forall t \} \\
& \operatorname{sp}incubset L^2_j(Z; i T^*Z) \oplus L^2_j(\rr; \rr) \oplus L^2_j(Z; \mathbb{S}^+). \notag
\end{align}
The derivative of the gauge group action on $\widetilde{\mathcal{C}}^{\mathfrak{t}au}_k(x, y)$ is given by
$$
\mathbf{d}^{\mathfrak{t}au}_{\gamma}: L^2_{j+1}(Z; i\R) \mathfrak{t}o \T^{\mathfrak{t}au}_{j, \gamma}, \ \ \
\mathbf{d}^\mathfrak{t}au_{\gamma}(\xi) = (-d\xi, 0, \xi \phi).$$
Kronheimer and Mrowka define a local slice for the gauge action, $\mathcal{S}^{\mathfrak{t}au}_{k, \gamma} \operatorname{sp}incubset \widetilde{\mathcal{C}}^{\mathfrak{t}au}_k(x, y)$, by the equation:
\begin{equation}
\label{eq:sliceC}
-d^*b + isr \operatorname{Re} \langle i\phi, \psi \rangle + i |\phi|^2 \operatorname{Re} \mu_Y ( \langle i\phi, \psi \rangle ) = 0.
\end{equation}
By linearizing this equation, they obtain the four-dimensional local Coulomb slice which was previously mentioned in Section~\ref{sec:AdmPer}:
$$\K^\mathfrak{t}au_{k, \gamma} = \ker \mathbf{d}^{\mathfrak{t}au, \dagger}_{\gamma} \operatorname{sp}incubset \T^\mathfrak{t}au_{k, \gamma},$$
where
\begin{equation}
\label{eq:dtdagger}
\mathbf{d}^{\mathfrak{t}au,\dagger}_\gamma(b,r,\psi) = -d^*b + is^2 \mathfrak{t}ext{Re}\langle i\phi,\psi \rangle + i |\phi|^2 \operatorname{Re}\ \mu_Y (\langle i\phi,\psi \rangle).
\end{equation}
There are also $L^2_j$ completions $\K^\mathfrak{t}au_{j, \gamma}$ for $j \leq k$. Let $\J^{\mathfrak{t}au}_{j, \gamma}$ denote the image of $\mathbf{d}^{\mathfrak{t}au}_{\gamma}$. In \cite[Proposition 14.3.2]{KMbook}, Kronheimer and Mrowka prove that $\mathbf{d}^\mathfrak{t}au_{\gamma}$ is injective with closed range, $ \mathbf{d}^{\mathfrak{t}au,\dagger}_\gamma$ is surjective, and there is a bundle decomposition
$$ \T^\mathfrak{t}au_j = \J^{\mathfrak{t}au}_j \oplus \K^{\mathfrak{t}au}_j.$$
In turn, this implies that the slices $\mathcal{S}^{\mathfrak{t}au}_{k, \gamma}$ are Hilbert submanifolds of $\widetilde{\mathcal{C}}^{\mathfrak{t}au}_k(x, y)$. Since they provide local models for the quotient $\widetilde{\B}^\mathfrak{t}au_k([x],[y])$, they deduce that this quotient is a Hilbert manifold. Furthermore, its tangent space at $[\gamma]$ can be identified with $\K^\mathfrak{t}au_{k, \gamma}$.
We now turn to a similar discussion in global Coulomb gauge.
The $L^2_j$ completion of the tangent space to $ \widetilde{\mathcal{C}}^{\operatorname{gC}, \mathfrak{t}au}_k(x, y)$ at $\gamma=(a, s, \phi)$ is
\begin{align*}
\T^{\operatorname{gC}, \mathfrak{t}au}_{j, \gamma} &= \{(b, r, \psi) \mid d\beta(t)=0, \ d^*(b(t))=0, \ \operatorname{Re} \langle \phi(t), \psi(t) \rangle_{L^2(Y)}=0, \ \forall t \} \\
& \operatorname{sp}incubset L^2_j(Z; i \mathrm{p}^*(T^*Z)) \oplus L^2_j(\rr; \rr) \oplus L^2_j(Z; \mathbb{S}^+),
\end{align*}
where we write $b$ as $b(t) + \beta(t)dt$.
The derivative of the action of $\G_{k+1}^{\operatorname{gC}}(Z)$ on $\widetilde{\mathcal{C}}^{\operatorname{gC}, \mathfrak{t}au}_k(x, y)$ is
$$
\mathbf{d}^{\operatorname{gC}, \mathfrak{t}au}_{\gamma}: L^2_{j+1}(\R; i\R) \mathfrak{t}o \T^{\operatorname{gC}, \mathfrak{t}au}_{j, \gamma}, \ \ \
\mathbf{d}^{\operatorname{gC}, \mathfrak{t}au}_{\gamma}(\xi) = (-\frac{d\xi}{dt} dt, 0, \xi \phi).$$
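Note that the image of $\mathbf{d}^{\operatorname{gC}, \mathfrak{t}au}_{\gamma}$ indeed lies in $\T^{\operatorname{gC}, \mathfrak{t}au}_{j, \gamma}$: since $\xi$ depends only on $t$ and is purely imaginary, the resulting tangent vector has one-form part $b(t) + \beta(t) dt$ with $b(t) = 0$ and $\beta(t) = -\frac{d\xi}{dt}(t)$ constant on each slice, so
$$ d\beta(t) = 0, \qquad d^*(b(t)) = 0, \qquad \operatorname{Re} \langle \phi(t), \xi(t) \phi(t) \rangle_{L^2(Y)} = 0.$$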
A suitable local slice for this action is $\mathcal{S}^{\operatorname{gC}, \mathfrak{t}au}_{k, \gamma} \operatorname{sp}incubset \widetilde{\mathcal{C}}^{\operatorname{gC}, \mathfrak{t}au}_k(x, y)$, defined by:
\begin{equation}
\label{eq:Remu}
\frac{d\beta}{dt} + i \langle i \phi, \psi \rangle_{\mathfrak{t}ilde{g}} = 0.
\end{equation}
This is already linear, so the same equation defines the corresponding linearized local slice
$$\K^{\operatorname{gC},\mathfrak{t}au}_{k, \gamma} = \ker \mathbf{d}^{\operatorname{gC}, \mathfrak{t}au, \mathfrak{t}ilde{\dagger}}_{\gamma} \operatorname{sp}incubset \T^{\operatorname{gC}, \mathfrak{t}au}_{k, \gamma},$$
where
\begin{equation}
\label{eq:dgct}
\mathbf{d}^{\operatorname{gC}, \mathfrak{t}au, \mathfrak{t}ilde{\dagger}}_\gamma(b,r,\psi) = \frac{d\beta}{dt} + i \langle i \phi, \psi \rangle_{\mathfrak{t}ilde{g}}.
\end{equation}
We have $L^2_j$ completions $\K^{\operatorname{gC}, \mathfrak{t}au}_{j, \gamma}$ for $j \leq k$, and we denote the image of $\mathbf{d}^{\operatorname{gC}, \mathfrak{t}au}_{\gamma}$ by $\J^{\operatorname{gC}, \mathfrak{t}au}_{j, \gamma}$.
\begin{lemma}
\label{lem:DecomposeGC}
The operator $\mathbf{d}^{\operatorname{gC}, \mathfrak{t}au}_{\gamma}$ is injective with closed range, $ \mathbf{d}^{\operatorname{gC}, \mathfrak{t}au, \mathfrak{t}ilde{\dagger}}_\gamma$ is surjective, and there is a bundle decomposition
$$ \T^{\operatorname{gC}, \mathfrak{t}au}_j = \J^{\operatorname{gC}, \mathfrak{t}au}_j \oplus \K^{\operatorname{gC}, \mathfrak{t}au}_j.$$
Further,
\[
\J^{\operatorname{gC},\mathfrak{t}au}_j = \J^{\operatorname{gC},\mathfrak{t}au}_0 \cap \T^\mathfrak{t}au_j.
\]
\end{lemma}
The proof of Lemma~\ref{lem:DecomposeGC} is similar to \cite[Proposition 14.3.2]{KMbook}, so we omit it.
We deduce that $\widetilde{\B}^{\operatorname{gC}, \mathfrak{t}au}_k([x],[y])$ is a Hilbert manifold, locally modelled on the slices $\mathcal{S}^{\operatorname{gC}, \mathfrak{t}au}_{k, \gamma}$. The tangent space to $\widetilde{\B}^{\operatorname{gC}, \mathfrak{t}au}_k([x],[y])$ at $[\gamma]$ can be identified with $\K^{\operatorname{gC},\mathfrak{t}au}_{k, \gamma}$.
Let us now take a quick look at the map $\Pi^{[\operatorname{gC}], \mathfrak{t}au}$ from \eqref{eq:Pigctau}. With respect to the smooth structures we just defined, this map is continuously differentiable. Its derivative
\begin{equation}
\label{eq:Pigctau2}
(\Pi^{[\operatorname{gC}], \mathfrak{t}au}_* )_{[\gamma]}: \K^{\mathfrak{t}au}_{j, [\gamma]} \longrightarrow \K_{j, \Pi^{[\operatorname{gC}], \mathfrak{t}au}([\gamma])}^{\operatorname{gC}, \mathfrak{t}au}
\end{equation}
is given by the formula
$$(b(t)+\beta(t)dt, r(t), \psi(t)) \mapsto \Pi_{\K^{\operatorname{gC}, \mathfrak{t}au}_j} \Bigl( (\Pi^{\operatorname{gC}, \operatorname{sp}incigma}_*)_{(a(t), s(t), \phi(t))}(b(t), r(t), \psi(t)) + (\mu_Y(\beta(t))dt, 0, 0) \Bigr),$$
where $\Pi_{\K^{\operatorname{gC}, \mathfrak{t}au}_j}$ denotes the projection onto $\K^{\operatorname{gC}, \mathfrak{t}au}_j$ with kernel $ \J^{\operatorname{gC}, \mathfrak{t}au}_j$.
\operatorname{sp}incection{The linearized equations}
\label{sec:linearized}
Let $x, y$ be non-degenerate stationary points of $\mathcal{X}qsigma$. Recall from Section~\ref{sec:HM} that the moduli space $M(x, y)$ of perturbed Seiberg-Witten trajectories can be described as the zero set of the section
$$ \F^{\mathfrak{t}au}_{\q} : \mathcal{C}^{\mathfrak{t}au}_k(x, y) \mathfrak{t}o \V^{\mathfrak{t}au}_{k-1}(Z),$$
modulo gauge. In temporal gauge, we have $ \F^{\mathfrak{t}au}_{\q} = \frac{d}{dt} + \mathcal{X}qsigma$.
Recall that $\V^{\mathfrak{t}au}_{k-1}(Z)$ is a bundle over the Hilbert manifold $\widetilde{\mathcal{C}}^{\mathfrak{t}au}_k(x, y)$, and we have extended the section $\F^{\mathfrak{t}au}_{\q}$. In order to understand the local structure of $M(x, y)$, one needs to study the derivative $\D^{\mathfrak{t}au}_{\gamma} \F^{\mathfrak{t}au}_{\q}$ at paths $\gamma \in M(x, y)$. Further, to be able to define gradings and orientations later, one needs to understand $\D^\mathfrak{t}au_{\gamma} \F^{\mathfrak{t}au}_{\q}$ when $\gamma$ is not a trajectory in $M(x, y)$. Much like using the notation $\D^\operatorname{sp}incigma$ to clarify derivatives in the blow-up, we write $\D^\mathfrak{t}au$ to mean that the derivatives are taken with respect to four-dimensional configurations (as opposed to three-dimensional derivatives slicewise). The relevant properties of $\D^\mathfrak{t}au_{\gamma} \F^{\mathfrak{t}au}_{\q}$ are analyzed by Kronheimer and Mrowka in \cite[Section 14.4]{KMbook}. We will sketch their results, and then do a similar analysis in global Coulomb gauge, with an eye towards the local structure of the moduli spaces $M^{\operatorname{agC}}([x], [y])$.
Fix $\gamma \in \widetilde{\mathcal{C}}^{\mathfrak{t}au}_k(x, y)$, and assume that $\gamma$ is in temporal gauge. Recall the definition of the tangent space $\T^{\mathfrak{t}au}_{j, \gamma}$ from \eqref{eq:fir}. Following \cite[Section 14.4]{KMbook}, we write an element of $\T^{\mathfrak{t}au}_{j, \gamma}$ as $(V, \beta)$, where $V(t) = (b(t), r(t), \psi(t))$ is a path in (a completion of) $\T^{\operatorname{sp}incigma}(Y)$, and $\beta= (\beta(t))$ is the path in $L^2(Y; i\R)$ that gives the $dt$ component of the connection. Vectors in $\T^{\operatorname{sp}incigma}(Y)$ can be differentiated along paths using the covariant derivative
$$ \frac{D^\operatorname{sp}incigma}{dt} V = \Bigl( \frac{db}{dt}, \frac{dr}{dt}, \Pi^{\perp}_{\phi(t)} \frac{d\psi}{dt} \Bigr),$$
where $\Pi^{\perp}_{\phi(t)}$ denotes the $L^2$ projection to the orthogonal complement of $\phi(t)$.
We can then write the derivative $\D^\mathfrak{t}au_{\gamma} \F^{\mathfrak{t}au}_{\q}$ explicitly:
$$ \D^\mathfrak{t}au_{\gamma} \F^{\mathfrak{t}au}_{\q}(V, \beta) = \frac{D^\operatorname{sp}incigma}{dt} V + \D^\operatorname{sp}incigma_{\gamma(t)}\mathcal{X}^{\operatorname{sp}incigma}_{\q}(V) + \mathbf{d}^{\operatorname{sp}incigma}_{\gamma(t)} \beta,$$
where $\mathbf{d}^{\operatorname{sp}incigma}$ is as in \eqref{eq:dsigma}.
Let us also recall from Section~\ref{sec:4Dcoulomb} that the $L^2_j$ completion of the tangent space to $\widetilde{\B}^{\mathfrak{t}au}_k([x], [y])$ at $\gamma$ is the space $\K^{\mathfrak{t}au}_{j,\gamma} = \ker \mathbf{d}^{\mathfrak{t}au, \dagger}_\gamma$. With respect to the decomposition $(V, \beta)$, the map $\mathbf{d}^{\mathfrak{t}au, \dagger}_{\gamma}: \T^{\mathfrak{t}au}_{j, \gamma} \mathfrak{t}o L^2_{j-1}(Z; i\R)$ is given by
$$ \mathbf{d}^{\mathfrak{t}au, \dagger}_{\gamma}(V, \beta) = \frac{d\beta}{dt} + \mathbf{d}^{\operatorname{sp}incigma, \dagger}_{\gamma(t)}(V),$$
with $\mathbf{d}^{\operatorname{sp}incigma, \dagger}$ as in \eqref{eq:dsigmadagger}.
The local structure of the moduli space $M([x], [y])$ is governed by the operator:
$$ (\D^\mathfrak{t}au_{\gamma} \F^{\mathfrak{t}au}_{\q})|_{\K^{\mathfrak{t}au}_{j,\gamma}} : \K^{\mathfrak{t}au}_{j,\gamma} \mathfrak{t}o
\V^{\mathfrak{t}au}_{j-1,\gamma}(Z).$$
Kronheimer and Mrowka establish the following:
\begin{proposition}[Proposition 14.4.3 in \cite{KMbook}]
\label{prop:Dfredholm} For $1 \leq j \leq k$, the operator $(\D^\mathfrak{t}au_{\gamma} \F^{\mathfrak{t}au}_{\q})|_{\K^{\mathfrak{t}au}_{j,\gamma}}$ is Fredholm and the index is independent of $j$.
\end{proposition}
\begin{proof}[Sketch of proof] The main tool is Proposition 14.2.1 in \cite{KMbook}, which gives a Fredholmness criterion for differential operators on infinite cylinders. Specifically, it deals with operators of the form
\begin{equation}
\label{eq:Q}
Q= \frac{d}{dt} + L_0 + h_t : L^2_1(Z; \mathrm{p}^*E) \mathfrak{t}o L^2(Z; \mathrm{p}^*E),
\end{equation}
where $E \mathfrak{t}o Y$ is a vector bundle, $L_0$ is a first order, self-adjoint elliptic operator acting on sections of $E$, and $h_t$ is a time-dependent bounded operator on $L^2(Y; E)$, varying continuously in the operator norm topology, and assumed to be constant $h_{\pm}$ near the ends of the cylinder $Z$. The proposition says that if $L_0 + h_{\pm}$ are hyperbolic (i.e., their spectrum is disjoint from the imaginary axis), then $Q$ is Fredholm, with index given by the spectral flow of the family $\{L_0 + h_t\}$.
Proposition 14.2.1 in \cite{KMbook} cannot be applied directly to the operator $( \D^\mathfrak{t}au_{\gamma} \F^{\mathfrak{t}au}_{\q})|_{\K^{\mathfrak{t}au}_{j,\gamma}}$, because its domain is the local Coulomb slice, rather than the space of all sections of a vector bundle. The remedy is to enlarge the operator, much as in the proof of Lemma~\ref{lem:hessianfredholm}. We take the direct sum
\begin{equation}
\label{eq:Qg}
Q_{\gamma} = \D^\mathfrak{t}au_{\gamma} \F^{\mathfrak{t}au}_{\q} \oplus \mathbf{d}^{\mathfrak{t}au, \dagger}_{\gamma} : \T^{\mathfrak{t}au}_{j,\gamma} \mathfrak{t}o \V^{\mathfrak{t}au}_{j-1,\gamma} \oplus L^2_{j-1}(Z; i\R),
\end{equation}
previously considered in \eqref{eq:Qgamma}. We can write
\begin{equation}
\label{eq:Qg2}
Q_{\gamma} = \frac{D^\operatorname{sp}incigma}{dt} + \begin{pmatrix} \D^\operatorname{sp}incigma_{\gamma(t)} \mathcal{X}_{\q}^{\operatorname{sp}incigma} & \mathbf{d}^{\operatorname{sp}incigma}_{\gamma(t)} \\ \mathbf{d}^{\operatorname{sp}incigma, \dagger}_{\gamma(t)} & 0 \end{pmatrix}.
\end{equation}
Further, to deal with the condition $\operatorname{Re} \langle \phi(t), \psi(t)\rangle_{L^2(Y)} = 0$ that defines $\T^{\mathfrak{t}au}_{j,\gamma}$, we proceed as in Section~\ref{sec:HessBlowUp}: We combine the $r$ and $\psi$ components into $$ \psis(t) = \psi(t) + r(t) \phi(t).$$
Now $Q_{\gamma}$ is of the desired form \eqref{eq:Q}, with $E$ being the bundle $iT^*Y \oplus \mathbb{S} \oplus i\R$ and $L_0$ having the block form
\begin{equation}
\label{eq:L0}
L_0 = \begin{pmatrix}
*d & 0 & -d \\
0 & D & 0\\
-d^* & 0 & 0
\end{pmatrix}.
\end{equation}
In fact, $L_0 + h_t$ is a slight variant of the extended Hessian from \eqref{eq:ExtHessBlowUp}. On the two ends of the cylinder $Z$ we see exactly the extended Hessians at the non-degenerate stationary points $x$ and $y$. Recall that at a stationary point (say $x=(a, s, \phi)$), the extended Hessian is the operator
\[
\widehat{\operatorname{Hess}}^{\operatorname{sp}incigma}_{\q, x} =\begin{pmatrix}
0 & 0 & \mathbf{d}^{\operatorname{sp}incigma}_x \\
0 & \operatorname{Hess}^{\operatorname{sp}incigma}_{\q, x} & 0 \\
\mathbf{d}^{\operatorname{sp}incigma, \dagger}_x & 0 & 0
\end{pmatrix}.
\]
This is written in block form with respect to the decomposition $L^2_j(Y;E)= \J_j^{\operatorname{sp}incigma} \oplus \K_j^{\operatorname{sp}incigma} \oplus L^2_j(Y; i\R)$; compare Sections~\ref{sec:Hess} and ~\ref{sec:HessBlowUp}.
The extended Hessians at $x$ and $y$ are invertible by the non-degeneracy assumption; moreover, if we had not used the blow-up, they would be self-adjoint and therefore have real spectrum. Given the use of $\psis$ imposed from the blow-up construction, the limiting operators are no longer self-adjoint. Nevertheless, they have real spectrum by \cite[Lemma 12.4.3]{KMbook}. It follows that they are hyperbolic, and the hypotheses of \cite[Proposition 14.2.1]{KMbook} apply. We obtain that $Q_{\gamma}$ is Fredholm as an operator from $L^2_1$ to $L^2$. Further, if $L_0 + h_{\pm}$ are $k$-ASAFOE and $h_t$ is compact as a map from $L^2_j$ to $L^2_{j-1}$ for paths constant near the endpoints, then using elliptic estimates, the result is extended to show that $Q_{\gamma}$ as a map
from $L^2_j$ to $L^2_{j-1}$ is also Fredholm with the same index. As discussed, strictly speaking, we chose the path $\gamma$ to be constant near the ends. That case implies the general case by the argument at the end of \cite[proof of Theorem 14.4.2]{KMbook}. Finally, Fredholmness of $Q_{\gamma}$ implies that $( \D^\mathfrak{t}au_{[\gamma]} \F^{\mathfrak{t}au}_{\q})|_{\K^{\mathfrak{t}au}_{j,\gamma}}$ is also Fredholm, of the same index; this follows from the decomposition \eqref{eq:Qg}, together with the surjectivity of $ \mathbf{d}^{\mathfrak{t}au, \dagger}_{\gamma}$ (cf. Proposition 14.3.2 in \cite{KMbook}).
\end{proof}
We now move to Coulomb gauge. Consider the moduli space $M^{\operatorname{agC}}([x],
[y]) \operatorname{sp}incubset {\widetilde \B}_k^{\operatorname{gC}, \mathfrak{t}au}([x], [y])$, as defined before
Proposition~\ref{prop:gccorrespondence}. Recall from Remark~\ref{rmk:MagcFgc}
that we can describe $M^{\operatorname{agC}}([x], [y])$ as the zero set of
$$\F^{\operatorname{gC}, \mathfrak{t}au}_{\q}: \widetilde{\mathcal{C}}_k^{\operatorname{gC}, \mathfrak{t}au}(x, y) \mathfrak{t}o \V^{\operatorname{gC}, \mathfrak{t}au}_{k-1}(Z),$$
restricted to $\mathcal{C}_k^{\operatorname{gC},\mathfrak{t}au}(x,y)$ and modulo the action of $\G_{k+1}^{\operatorname{gC}}(Z)$.
Fix $\gamma \in \mathcal{C}^{\operatorname{gC}, \mathfrak{t}au}_k(x, y)$ in temporal gauge (that is, in $W^{\mathfrak{t}au}_k(x, y)$, cf. Section~\ref{sec:path}). To understand the local structure of $M^{\operatorname{agC}}([x], [y])$, we need to study the derivative of the section $\F^{\operatorname{gC}, \mathfrak{t}au}_{\q}$. There are different ways of doing this, depending on what covariant derivative we choose. The most natural choice is to use a covariant derivative that involves the $\mathfrak{t}ilde{g}$-metric. Specifically, for $1 \leq j \leq k$, we write
$$\D^{\mathfrak{t}ilde{g},\mathfrak{t}au}_{\gamma} \F^{\operatorname{gC}, \mathfrak{t}au}_{\q} : \T^{\operatorname{gC},\mathfrak{t}au}_{j,\gamma} \mathfrak{t}o \V^{\operatorname{gC},\mathfrak{t}au}_{j-1,\gamma},$$
\begin{equation}
\label{eq:tangerine}
\D^{\mathfrak{t}ilde{g},\mathfrak{t}au}_{\gamma} \F^{\operatorname{gC}, \mathfrak{t}au}_{\q}(V, \beta) = \frac{\dgs}{dt} V + (\Dgs_{\gamma(t)} \mathcal{X}^{\operatorname{gC}, \operatorname{sp}incigma}_{\q})(V) + \mathbf{d}^{\operatorname{gC}, \operatorname{sp}incigma}_{\gamma(t)} \beta,
\end{equation}
where
\begin{equation}
\label{eq:dde}
\mathbf{d}^{\operatorname{gC}, \operatorname{sp}incigma}_x : i\rr \mathfrak{t}o \T^{\operatorname{gC}, \operatorname{sp}incigma}_{j, x}, \ \ \mathbf{d}^{\operatorname{gC}, \operatorname{sp}incigma}_x (\xi) = (0, 0, \xi \phi)
\end{equation}
is the restriction of the operator $\mathbf{d}^{\operatorname{sp}incigma}_x : L^2_j(Y;i\R) \mathfrak{t}o \T^{\operatorname{sp}incigma}_{j, x}$ from \eqref{eq:dsigma} to constant functions. Also, $ \frac{\dgs}{dt}$ represents the covariant derivative (along a path) for the connection $\Dgs$ from \eqref{eq:DGS}, that is,
\begin{equation}
\label{eq:dgs}
\frac{\dgs}{dt} V = \Pi^{\operatorname{gC}, \operatorname{sp}incigma}_* \circ \D^\operatorname{sp}incigma_{\Pi^{\operatorname{elC}, \operatorname{sp}incigma}(\mathfrak{t}frac{d\gamma}{dt})} (\Pi^{\operatorname{elC}, \operatorname{sp}incigma}(V)).
\end{equation}
We could also use the covariant derivative coming from the $L^2$ metric. Define
$$\D^\mathfrak{t}au_{\gamma} \F^{\operatorname{gC}, \mathfrak{t}au}_{\q} : \T^{\operatorname{gC},\mathfrak{t}au}_{j,\gamma} \mathfrak{t}o \V^{\operatorname{gC},\mathfrak{t}au}_{j-1,\gamma},$$
by
\begin{equation}
\label{eq:tangerine2}
\D^\mathfrak{t}au_{\gamma} \F^{\operatorname{gC}, \mathfrak{t}au}_{\q}(V, \beta) = \frac{D^\operatorname{sp}incigma}{dt} V + (\D^\operatorname{sp}incigma_{\gamma(t)} \mathcal{X}^{\operatorname{gC}, \operatorname{sp}incigma}_{\q})(V) + \mathbf{d}^{\operatorname{gC}, \operatorname{sp}incigma}_{\gamma(t)} \beta.
\end{equation}
While the operator $ \D^{\mathfrak{t}ilde{g},\mathfrak{t}au}_{\gamma} \F^{\operatorname{gC}, \mathfrak{t}au}_{\q}$ seems more natural, it will be easier to work with $\D^\mathfrak{t}au_{\gamma} \F^{\operatorname{gC}, \mathfrak{t}au}_{\q}$ later on, so we will focus on the latter. When $\gamma$ is a flow trajectory, the two operators coincide.
As discussed in Section~\ref{sec:4Dcoulomb}, the tangent space to $\widetilde{\B}^{\operatorname{gC}, \mathfrak{t}au}([x], [y])$ has completions $\K^{\operatorname{gC}, \mathfrak{t}au}_{j, \gamma} = \ker \mathbf{d}_{\gamma}^{\operatorname{gC}, \mathfrak{t}au, \mathfrak{t}ilde{\dagger}}$. From \eqref{eq:dgct} we can write
$$ \mathbf{d}_{\gamma}^{\operatorname{gC}, \mathfrak{t}au, \mathfrak{t}ilde{\dagger}} = \frac{d\beta}{dt} + \mathbf{d}^{\operatorname{gC}, \operatorname{sp}incigma, \mathfrak{t}ilde{\dagger}}_{\gamma(t)}(V),$$
where, at $x=(a, s, \phi) \in W^{\operatorname{sp}incigma}$,
\begin{equation}
\label{eq:ddedagger}
\mathbf{d}^{\operatorname{gC}, \sigma, \tilde{\dagger}}_x: \T^{\operatorname{gC}, \sigma}_{j, x} \to i\R, \ \ \ \
\mathbf{d}^{\operatorname{gC}, \sigma, \tilde{\dagger}}_x(b, r, \psi) = i \langle i \phi, \psi \rangle_{\tilde{g}}.
\end{equation}
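The formula \eqref{eq:ddedagger} is what one expects if $\tilde{\dagger}$ denotes the formal adjoint with respect to the $\tilde{g}$ inner products. As a quick formal check (a sketch, assuming real inner products throughout and the standard pairing on $i\R$), for $\xi = it \in i\R$ we have
$$ \langle \mathbf{d}^{\operatorname{gC}, \sigma}_x (\xi), (b, r, \psi) \rangle_{\tilde{g}} = \langle it \phi, \psi \rangle_{\tilde{g}} = t \, \langle i\phi, \psi \rangle_{\tilde{g}} = \langle \xi, \, i \langle i \phi, \psi \rangle_{\tilde{g}} \rangle_{i\R} = \langle \xi, \mathbf{d}^{\operatorname{gC}, \sigma, \tilde{\dagger}}_x(b, r, \psi) \rangle_{i\R}. $$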
Note that $ \mathbf{d}^{\operatorname{gC}, \sigma, \tilde{\dagger}}$ appears as part of the operator $\mathbf{d}^{\sigma, \tilde{\dagger}}: \T^{\sigma}_{j, x} \to L^2_{j-1}(Y; i\R)$ from \eqref{eq:decomposingtilde}. Precisely, we decompose the domain of $\mathbf{d}^{\sigma, \tilde{\dagger}}$ as follows:
$$\T^{\sigma}_{j, x}=\K^{\operatorname{agC}, \sigma}_{j, x} \oplus \J^{\operatorname{gC}, \sigma}_{j,x} \oplus \J^{\circ, \sigma}_{j,x}=\T^{\operatorname{gC}, \sigma}_{j, x} \oplus \J^{\circ, \sigma}_{j,x}.$$
Then, with respect to the last decomposition (where we combine the first two summands into one), we have
\begin{equation}
\label{eq:Decomposed}
\mathbf{d}^{\sigma, \tilde{\dagger}} = \begin{pmatrix} 0 & -d^* \\
\mathbf{d}^{\operatorname{gC}, \sigma, \tilde{\dagger}} & 0 \end{pmatrix} : \T^{\operatorname{gC}, \sigma}_{j, x} \oplus \J^{\circ, \sigma}_{j,x} \to (\operatorname{im}d^*)_{j-1} \oplus i\R.
\end{equation}
Returning to our operator $\D^\tau_{\gamma} \F^{\operatorname{gC},\tau}_{\q}$, we consider its restriction to $\K^{\operatorname{gC}, \tau}_{j, \gamma}$. The surjectivity of this restriction would imply that $M^{\operatorname{agC}}([x], [y])$ is a smooth manifold near $[\gamma]$, and the dimension of its kernel would give the dimension of $M^{\operatorname{agC}}([x], [y])$. As a preliminary step towards this discussion, we establish the following:
\begin{proposition}
\label{prop:FredholmCoulomb}
Let $x, y \in W^\sigma_k$ be non-degenerate stationary points of $\mathcal{X}^{\sigma}_{\q}$ (and hence of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q}$). Pick a path $\gamma \in W^{\tau}_k(x, y)$. Then, for $j \leq k$, the operator
$$(\D^\tau_{\gamma} \F^{\operatorname{gC},\tau}_{\q})|_{\K^{\operatorname{gC},\tau}_{j,\gamma}} : \K^{\operatorname{gC},\tau}_{j,\gamma} \to \V^{\operatorname{gC},\tau}_{j-1,\gamma}$$
is Fredholm. Moreover, the Fredholm index is the same as that of the operator
\[
( \D^\tau_{\gamma} \F^{\tau}_{\q})|_{\K^{\tau}_{j,\gamma}} : \K^{\tau}_{j,\gamma} \to \V^\tau_{j-1,\gamma}.
\]
\end{proposition}
\begin{proof}
To establish the Fredholm property, the arguments are similar to those in the proof of Proposition~\ref{prop:Dfredholm}, with some modifications.
We extend our operator $\D^\tau_{\gamma} \F^{\operatorname{gC},\tau}_{\q}$ so that it acts on sections of a vector bundle. First, define
\begin{equation}
\label{eq:Qggc}
Q_{\gamma}^{\operatorname{gC}} = \D^\tau_{\gamma} \F^{\operatorname{gC},\tau}_{\q} \oplus \mathbf{d}^{\operatorname{gC}, \tau, \tilde{\dagger}}_{\gamma} : \T^{\operatorname{gC}, \tau}_{j, \gamma} \to \V^{\operatorname{gC},\tau}_{j-1,\gamma} \oplus L^2_{j-1}(\R; i\R).
\end{equation}
The codomain of $Q_{\gamma}^{\operatorname{gC}}$ can be identified with $ \T^{\operatorname{gC}, \tau}_{j-1, \gamma}.$
However, even after writing $ \psis(t) = \psi(t) + r(t) \phi(t)$ as before, $\T^{\operatorname{gC}, \tau}_{j, \gamma}$ is still not the space of all sections of a vector bundle. Indeed, for $(b(t) + \beta(t)dt, \psis(t)) \in \T^{\operatorname{gC}, \tau}_{j, \gamma}$, we still have the conditions $d(\beta(t))=0$ and $d^* (b(t))=0$.
Thus, we need to extend the operator once more. Consider the linear operator
\begin{equation}
\label{eq:RR}
R = \frac{d}{dt} + \begin{pmatrix} 0 & -d \\ -d^* & 0 \end{pmatrix}: ( \operatorname{im}d_{(0)} \oplus \operatorname{im}d^*_{(1)} ) \to ( \operatorname{im}d_{(0)} \oplus \operatorname{im}d^*_{(1)})
\end{equation}
where the subscript $(p)$ with $p \in \{0,1\}$ denotes the imaginary $p$-forms on which the respective operator acts, on each slice $\{t\} \times Y$. Precisely, $\operatorname{im}d^*_{(1)}$ is the subset of $L^2_{j}(Z; i\R)$ consisting of functions that integrate to zero slicewise, and $\operatorname{im}d_{(0)} = \ker d_{(1)}$, since $Y$ is a rational homology sphere.
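For the reader's convenience, here are the slicewise Hodge decompositions behind these identifications (standard facts, using only that $Y$ is closed and $b_1(Y)=0$):
$$ L^2_{j}(Y; i\R) = i\R \oplus \operatorname{im} d^*_{(1)}, \qquad \ker d_{(1)} = \operatorname{im} d_{(0)} \oplus \mathcal{H}^1(Y; i\R) = \operatorname{im} d_{(0)}, $$
since harmonic $1$-forms vanish when $b_1(Y) = 0$; in particular, the functions in $\operatorname{im} d^*_{(1)}$ are exactly those of zero average on each slice, being $L^2$-orthogonal to the constants.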
Next, decompose $\T^{\tau}_{j, \gamma}$ as
\begin{equation}
\label{eq:Ttau}
\T^{\tau}_{j, \gamma} = \T^{\operatorname{gC}, \tau}_{j, \gamma} \oplus ( \J^{\circ, \tau}_{j, \gamma} \oplus \operatorname{im}d^*_{(1)}),
\end{equation}
where $\J^{\circ, \tau}_{j, \gamma}$ consists of time-dependent elements of the spaces $\J^{\circ, \sigma}_{j, \gamma(t)}$ from \eqref{eq:Jcircsigma}. Note that there is a natural identification
$$ \Psi: \operatorname{im}d_{(0)} \to \J^{\circ, \tau}_{j, \gamma}$$
given at each time $t$ by the formula
$$ \Psi(-d\xi, 0, 0) =(-d\xi, 0, \xi \cdot \phi(t)),$$
where $\phi(t)$ is the spinor component of $\gamma(t)$. If we conjugate the operator $R$ by $\Psi$ on the $\operatorname{im}d_{(0)}$ summand, we obtain an operator
\begin{equation}
\label{eq:RRtilde}
\widehat{R} = \Psi \circ \frac{d}{dt}\circ \Psi^{-1} + \begin{pmatrix} 0 & \mathbf{d}^{\sigma} \\ -d^* & 0 \end{pmatrix}: (\J^{\circ, \tau}_{j, \gamma} \oplus \operatorname{im}d^*_{(1)} ) \to (\J^{\circ, \tau}_{j, \gamma} \oplus \operatorname{im}d^*_{(1)} ).
\end{equation}
Observe that $\Psi \circ \frac{d}{dt}\circ \Psi^{-1} ( -d\xi, 0, \xi \phi) = (-d\tfrac{d\xi}{dt}, 0, \tfrac{d\xi}{dt} \phi)$.
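In more detail, writing $\dot{\xi} = \tfrac{d\xi}{dt}$ and using that the operator $d$ does not depend on $t$, the computation behind this observation is
\begin{align*}
\Psi^{-1}(-d\xi, 0, \xi \phi) &= (-d\xi, 0, 0), \\
\frac{d}{dt}\,(-d\xi(t), 0, 0) &= (-d\dot{\xi}, 0, 0), \\
\Psi(-d\dot{\xi}, 0, 0) &= (-d\dot{\xi}, 0, \dot{\xi}\, \phi(t)).
\end{align*}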
Define the new extension of $Q_{\gamma}^{\operatorname{gC}} $ to be
\begin{equation}
\label{eq:newextension}
\Qhat_{\gamma}^{\operatorname{gC}} =\begin{pmatrix} Q_{\gamma}^{\operatorname{gC}} & 0 \\ 0 & \widehat{R} \end{pmatrix}: \T^{\tau}_{j, \gamma} \to \T^{\tau}_{j-1, \gamma},
\end{equation}
with respect to the decomposition \eqref{eq:Ttau}.
Instead of \eqref{eq:Ttau} we could consider the decomposition $\T^\tau_{j, \gamma} = \V^\tau_{j, \gamma} \oplus L^2_j(Z; i\mathbb{R})$, where we recall that $\V^\tau$ is the space of four-dimensional configurations with trivial $dt$ component that are slicewise in $\T^{\sigma}$. With respect to this new decomposition, we can write
\begin{equation}
\label{eq:Qhgc}
\Qhat_{\gamma}^{\operatorname{gC}} = \frac{D^\sigma}{dt} +\begin{pmatrix} M & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} H & \mathbf{d}^{\sigma}_{\gamma(t)} \\ \mathbf{d}^{\sigma, \tilde{\dagger}}_{\gamma(t)} & 0 \end{pmatrix}.
\end{equation}
Here, for a fixed time $t$, the operator $M$ acts by zero on $ \T^{\operatorname{gC}, \sigma}_{j, \gamma(t)} \subset \T^\sigma_{j, \gamma(t)} $ and equals the difference $\Psi \circ \frac{d}{dt}\circ \Psi^{-1} - \frac{D^\sigma}{dt}$ when applied to elements of $\J^{\circ, \sigma}_{j, \gamma(t)} \subset \T^\sigma_{j, \gamma(t)}$; that is,
$$ M(-d\xi, 0, \xi \phi) = \Pi^{\perp}_{\phi} (0, 0, \xi \tfrac{d\phi}{dt}).$$
Furthermore, with respect to the decomposition $\T^\sigma_{j, \gamma(t)} = \T^{\operatorname{gC}, \sigma}_{j, \gamma(t)} \oplus \J^{\circ, \sigma}_{j, \gamma(t)}$, the operator $H$ from \eqref{eq:Qhgc} is given by
\begin{equation}\label{eq:Qgamma-Tgc-Jcirc}
H = \begin{pmatrix} \D^\sigma_{\gamma(t)} \mathcal{X}^{\operatorname{gC}, \sigma}_{\q} & 0 \\ 0 & 0 \end{pmatrix}.
\end{equation}
Note that the third term in \eqref{eq:Qhgc} is exactly $\mathcal{H}^\sigma_{\gamma(t)}$ from Lemma~\ref{lem:fakehessian}.
\begin{comment}
Let us also consider the difference
$$ \mathcal{O}mega := \frac{\dgs}{dt} - \frac{D}{dt}.$$
This can be viewed as a family of endomorphisms $\mathcal{O}mega(t) \in \mathcal{E}nd(\T^{\operatorname{sp}incigma}_{j, \gamma(t)}).$ Since the covariant derivative along a path is the covariant derivative in the direction of $d\gamma/dt$, and $d\gamma/dt \mathfrak{t}o 0$ as $t \mathfrak{t}o \pm \infty$, we see that that $\mathcal{O}mega(t)$ converges to zero as $t \mathfrak{t}o \pm \infty$. Further, $\mathcal{O}mega$ is a compact operator. (Compare Lemmas~\ref{lem:Connections} and \ref{lem:TwoD}.)
\end{comment}
Using Lemma~\ref{lem:fakehessian} and the arguments in the proof of Proposition~\ref{prop:Dfredholm}, we obtain that $\Qhat_{\gamma}^{\operatorname{gC}} $ is of the form \eqref{eq:Q}, with the bundle $E = iT^*Y \oplus \mathbb{S} \oplus i\R$ just as for $Q_{\gamma}$. Furthermore, the differential part $L_0$ in $\Qhat_{\gamma}^{\operatorname{gC}} $ is the same $L_0$ that appeared in \eqref{eq:L0} for $Q_{\gamma}$. We write
\begin{equation}
\label{eq:tQg}
\Qhat_{\gamma}^{\operatorname{gC}} = \frac{d}{dt} + L_0 + \hat{h}_t^{\operatorname{gC}}.
\end{equation}
It is important to note that $L_0 + \hat{h}^{\operatorname{gC}}_t$ is not exactly the operator $\mathcal{H}^\sigma_x$ arising from Lemma~\ref{lem:fakehessian}, as $\hat{h}^{\operatorname{gC}}_t$ also has terms coming from the time derivative of $\phi$. In the limit, as $t \to \pm \infty$ (i.e., as we approach the stationary points $x$ and $y$), we do have from \eqref{eq:Qgamma-Tgc-Jcirc} and Lemma~\ref{lem:fakehessian} that
\begin{equation}\label{eq:4D-endpoint-Hess}
L_0 + \hat{h}_{\pm \infty}^{\operatorname{gC}}
= \widehat{\operatorname{Hess}}^{\tilde{g}, \sigma}_{\q, \pm},
\end{equation}
where $ \widehat{\operatorname{Hess}}^{\tilde{g}, \sigma}_{\q}$ is the extended Hessian from \eqref{eq:ExtHessBlowUpGC}. The extended Hessian is hyperbolic at non-degenerate stationary points; indeed, it has real spectrum by Lemma~\ref{lem:eH}. Using the properties of $\mathcal{H}^\sigma_x$ established in Lemma~\ref{lem:fakehessian}, we can apply Proposition 14.2.1 in \cite{KMbook} together with the arguments in \cite[p.256]{KMbook} (mentioned in the proof of Proposition~\ref{prop:Dfredholm}) and deduce that $\Qhat_{\gamma}^{\operatorname{gC}} $ is Fredholm. As before, first we establish Fredholmness as a map from $L^2_1$ to $L^2$ for paths $\gamma$ which are constant near the endpoints, and then we extend this to operators from $L^2_j$ to $L^2_{j-1}$ for $1 \leq j \leq k$ and for all paths. Again the index is independent of $j$.
We claim that $\Qhat_{\gamma}^{\operatorname{gC}} $ has the same Fredholm index as the operator $Q_{\gamma}$ from \eqref{eq:Qg}. Indeed, we can relate them by a continuous family of Fredholm operators, in two steps, along the lines of Section~\ref{sec:interpol}. First, we interpolate linearly from $Q_{\gamma}$ to the operator
$$Q_{\gamma}^{\operatorname{sp}} := \frac{D^\sigma}{dt} + \begin{pmatrix} \D^\sigma_{\gamma(t)} \mathcal{X}_{\q}^{\sigma} & \mathbf{d}^{\sigma}_{\gamma(t)} \\ \mathbf{d}^{\operatorname{sp},\sigma, \dagger}_{\gamma(t)} & 0 \end{pmatrix},$$
which differs from $Q_{\gamma}$ by having $ \mathbf{d}^{\operatorname{sp},\sigma, \dagger}_{\gamma(t)}$ instead of $\mathbf{d}^{\sigma, \dagger}_{\gamma(t)}$. Second, we use the family of metrics $g_{\rho}$ from Section~\ref{sec:interpol} to define operators $ \Qhat_{\gamma}^{\rho} $ similar to $\Qhat_{\gamma}^{\operatorname{gC}}$, such that at $\rho=0$ we have
$$\Qhat_{\gamma}^{0} = Q_{\gamma}^{\operatorname{sp}} $$
and at $\rho=1$ we have
$$ \Qhat_{\gamma}^{1} =\Qhat_{\gamma}^{\operatorname{gC}}.$$
To see that the operators considered during these two interpolations are Fredholm and have the same index, observe that they are all of the form \eqref{eq:Q}, with the same differential part $L_0$. Further, at the endpoints they limit to the interpolations between extended Hessians that appeared in Lemmas~\ref{lem:eHrho} and \ref{lem:eHrho2}, and which were shown there to be invertible with real spectrum.
We have now shown that $Q_{\gamma}$ and $\Qhat_{\gamma}^{\operatorname{gC}}$ have the same index. To go back from $\Qhat_{\gamma}^{\operatorname{gC}} $ to $Q_{\gamma}^{\operatorname{gC}}$, note that the operator $\widehat{R}$ defined in \eqref{eq:RRtilde} is bijective. Indeed, it is conjugate to the operator $R$ from \eqref{eq:RR}, so it suffices to check that $R$ is bijective. We have $R= \frac{d}{dt} + A,$ where
$$A = \begin{pmatrix} 0 & -d \\ -d^* & 0 \end{pmatrix}$$
is a self-adjoint operator acting slicewise. Observe that $A$ is invertible because $b_1(Y)=0$, and we can find a complete orthonormal system of eigenvectors $\{a_n\}$, with $A(a_n) = \lambda_n a_n$, $\lambda_n \neq 0$. An element in the kernel of $R$ would be of the form
$$\sum_n c_n e^{-\lambda_n t} a_n, \ \ c_n \in \R.$$
However, any such nonzero element grows exponentially at one of the two ends of the cylinder $Z$, and hence is not in $L^2_j$. Thus, $\ker (R) = 0$; since $A$ is self-adjoint and independent of $t$, the same argument applied to the formal adjoint $-\frac{d}{dt} + A$ shows that $\coker(R)=0$ as well.
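To spell this out, if some coefficient $c_{n_0} \neq 0$, then the slicewise $L^2$ norm satisfies
$$ \Bigl\| \sum_n c_n e^{-\lambda_n t} a_n \Bigr\|_{L^2(\{t\} \times Y)}^2 = \sum_n c_n^2\, e^{-2\lambda_n t} \ \geq \ c_{n_0}^2\, e^{-2\lambda_{n_0} t}, $$
which is unbounded as $t \to -\infty$ if $\lambda_{n_0} > 0$ and as $t \to +\infty$ if $\lambda_{n_0} < 0$; in either case the element cannot lie in $L^2$ over the cylinder.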
Since $\widehat{R}$ is bijective, from \eqref{eq:newextension} we deduce that $Q_{\gamma}^{\operatorname{gC}} $ is Fredholm, of the same index as $\Qhat_{\gamma}^{\operatorname{gC}} $ (and hence as $Q_{\gamma}$). In the proof of Proposition~\ref{prop:Dfredholm} we saw that $Q_{\gamma}$ has the same index as $(\D^\tau_{\gamma} \F^{\tau}_{\q})|_{\K^{\tau}_{j,\gamma}}$. In a similar fashion, since $\mathbf{d}^{\operatorname{gC}, \tau, \tilde{\dagger}}_{\gamma}$ is surjective by Lemma~\ref{lem:DecomposeGC}, we see that $(\D^\tau_{\gamma} \F^{\operatorname{gC},\tau}_{\q})|_{\K^{\operatorname{gC},\tau}_{j,\gamma}}$ is Fredholm and of the same index as $Q_{\gamma}^{\operatorname{gC}} $ by the same arguments as in \cite[Proposition 14.4.3]{KMbook}. The conclusion follows.
\end{proof}
\begin{comment}
For future reference, let us define\footnote{In \eqref{eq:QggcL2}, it would have been more natural to use an operator $\mathbf{d}^{\operatorname{gC}, \mathfrak{t}au, {\dagger}}_{\gamma}$ defined with respect to the $L^2$ metric instead of $\mathfrak{t}ilde{g}$. However, this makes no difference for our purposes, so it is easier to use the operator $\mathbf{d}^{\operatorname{gC}, \mathfrak{t}au, \mathfrak{t}ilde{\dagger}}_{\gamma}$, which has already been defined.}
\begin{equation}
\label{eq:QggcL2}
Q_{\gamma}^{\operatorname{gC}} = \D^\mathfrak{t}au_{\gamma} \mathcal{F}_{\q}gctau \oplus \mathbf{d}^{\operatorname{gC}, \mathfrak{t}au, \mathfrak{t}ilde{\dagger}}_{\gamma} : \T^{\operatorname{gC}, \mathfrak{t}au}_{j, \gamma} \mathfrak{t}o \V^{\operatorname{gC},\mathfrak{t}au}_{j-1,\gamma} \oplus L^2_{j-1}(\R; i\R)
\end{equation}
and
\begin{equation}
\label{eq:newextensionL2}
\Qhat_{\gamma}^{\operatorname{gC}} =\begin{pmatrix} Q_{\gamma}^{\operatorname{gC}} & 0 \\ 0 & \widehat{R} \end{pmatrix}: \T^{\mathfrak{t}au}_{j, \gamma} \mathfrak{t}o \T^{\mathfrak{t}au}_{j-1, \gamma}.
\end{equation}
Alternatively, in the proof of Proposition~\ref{prop:FredholmCoulomb} we could have used $\D^\mathfrak{t}au_{\gamma} \mathcal{F}_{\q}gctau$ instead of $\D^{\mathfrak{t}ilde{g},\mathfrak{t}au}_{\gamma} \mathcal{F}_{\q}gctau$ by using $Q_{\gamma}^{\operatorname{gC}}$ and $\Qhat_{\gamma}^{\operatorname{gC}}$ in place of the $\mathfrak{t}ilde{g}$ counterparts.
\end{comment}
We summarize the results obtained from the proof of Proposition~\ref{prop:FredholmCoulomb}.
\begin{lemma}
\label{lem:allsurjective}
Under the hypotheses of Proposition~\ref{prop:FredholmCoulomb},
\begin{enumerate}[(a)]
\item the operators
$$ (\D^\tau_{\gamma} \F^{\operatorname{gC},\tau}_{\q})|_{\K^{\operatorname{gC},\tau}_{j,\gamma}}, \ Q_{\gamma}^{\operatorname{gC}}, \ \Qhat_{\gamma}^{\operatorname{gC}} $$
are all Fredholm of the same index;
\begin{comment}
\item
one of the operators
$$ (\Dgs_{\gamma} \mathcal{F}_{\q}gctau)|_{\K^{\operatorname{gC},\mathfrak{t}au}_{j,\gamma}}, \ Q_{\gamma}^{\operatorname{gC}, \gts}, \ \Qhat_{\gamma}^{\operatorname{gC}, \mathfrak{t}ilde g, \operatorname{sp}incigma} $$
is surjective if and only if any of the two others is surjective;
\end{comment}
\item
one of the operators
$$ (\D^\tau_{\gamma} \F^{\operatorname{gC},\tau}_{\q})|_{\K^{\operatorname{gC},\tau}_{j,\gamma}}, \ Q_{\gamma}^{\operatorname{gC}}, \ \Qhat_{\gamma}^{\operatorname{gC}} $$
is surjective if and only if the other two are.
\end{enumerate}
\end{lemma}
\begin{proof}
To establish the relation between $ (\D^\tau_{\gamma} \F^{\operatorname{gC},\tau}_{\q})|_{\K^{\operatorname{gC},\tau}_{j,\gamma}}$ and $Q_{\gamma}^{\operatorname{gC}} $ we use Lemma~\ref{lem:DecomposeGC}, which gives the surjectivity of $\mathbf{d}^{\operatorname{gC}, \tau, \tilde{\dagger}}_{\gamma}$. To establish the relation between $Q_{\gamma}^{\operatorname{gC}} $ and $\Qhat_{\gamma}^{\operatorname{gC}} $, we use the block form \eqref{eq:newextension} and the bijectivity of $\widehat{R}$.
\end{proof}
\begin{comment}
\begin{remark}
In general, the surjectivity of the operators in part (b) of Lemma~\ref{lem:allsurjective} is unrelated to the surjectivity of the operators in part (c). However, when $\gamma$ is a trajectory of $\mathcal{X}qgcsigma$, then $\D_{\gamma} \mathcal{F}_{\q}gctau = \Dgs_{\gamma} \mathcal{F}_{\q}gctau$ (cf. Lemma~\ref{lem:TwoD}), so in that case the surjectivity of any of the six operators in part (a) is equivalent to the surjectivity of any other.
\end{remark}
\end{comment}
\section{Non-degeneracy of trajectories in Coulomb gauge}
\label{sec:NDTcoulomb}
Let us now suppose that $\gamma$ is a trajectory of $\mathcal{X}^{\sigma}_{\q}$ on $\mathcal{C}_k^\sigma(Y)$ between two non-degenerate stationary points $x, y \in \mathcal{C}^\sigma(Y)$. Recall from Definition~\ref{def:regM} that the moduli space $M(x, y)$ is regular at $\gamma$ if $Q_\gamma$ is surjective. By \cite[Propositions 14.3.2 and 14.4.3]{KMbook}, this is equivalent to the operator
\[
(\D^\tau_{\gamma} \F^{\tau}_{\q})|_{\K^{\tau}_{k,\gamma}} :\K^{\tau}_{k,\gamma} \to \V^\tau_{k-1,\gamma}
\]
being surjective. (Compare the proof of Proposition~\ref{prop:Dfredholm}.) We rephrase this condition in Coulomb gauge:
\begin{proposition}\label{prop:Qgammasurjective}
Consider a path ${\gamma} \in \mathcal{C}^{\tau}_k(x, y)$ in temporal gauge. Write $x^\flat = \Pi^{\operatorname{gC}, \sigma}(x)$, $y^{\flat} =\Pi^{\operatorname{gC}, \sigma}(y)$ and $\gamma^\flat= \Pi^{\operatorname{gC}, \tau} (\gamma)$. Then:
$(a)$ The operators $(\D^\tau_{\gamma} \F^{\tau}_{\q})|_{\K^{\tau}_{k,\gamma}} : \K^{\tau}_{k, \gamma} \to \V^\tau_{k-1,\gamma}$ and $(\D^\tau_{\gamma^{\flat}} \F^{\operatorname{gC},\tau}_{\q})|_{\K^{\operatorname{gC},\tau}_{k,\gamma^{\flat}}} : \K^{\operatorname{gC},\tau}_{k,\gamma^{\flat}} \to \V^{\operatorname{gC},\tau}_{k-1,\gamma^{\flat}}$ have the same Fredholm index.
$(b)$ Suppose that ${\gamma}$ is a trajectory of $\mathcal{X}^{\sigma}_{\q}$, so that $[\gamma^{\flat}] \in \B^{\operatorname{gC}, \tau}_k([x^\flat], [y^\flat])$ is a trajectory of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q}$. If $(\D^\tau_{\gamma} \F^{\tau}_{\q})|_{\K^{\tau}_{k,\gamma}}$ is surjective, then so is $(\D^\tau_{\gamma^{\flat}} \F^{\operatorname{gC},\tau}_{\q})|_{\K^{\operatorname{gC},\tau}_{k,\gamma^{\flat}}}$.
\end{proposition}
\begin{proof}
$(a)$ We can interpolate between the paths $\gamma$ and $\gamma^{\flat}$ by a continuous family of paths $\{\gamma_s\}_{s \in [0,1]}$, with the endpoints of $\gamma_s$ varying on the gauge orbits of $x$ and $y$. Thus, the operators $Q_{\gamma}$ and $Q_{\gamma^{\flat}}$ are part of a family of Fredholm operators $Q_{\gamma_s}, \ s\in [0,1]$. All Fredholm operators in a continuous family must have the same index. Combining this with Proposition~\ref{prop:FredholmCoulomb}, we obtain $$ \operatorname{ind}((\D^\tau_{\gamma^\flat} \F^{\operatorname{gC},\tau}_{\q})|_{\K^{\operatorname{gC},\tau}_{k,\gamma^{\flat}}}) = \operatorname{ind} ((\D^\tau_{\gamma^{\flat}} \mathcal{F}_{\q}^\tau)|_{\K^{\tau}_{k,\gamma^\flat}}) = \operatorname{ind} ((\D^\tau_{\gamma} \F^{\tau}_{\q})|_{\K^{\tau}_{k,\gamma}}),$$
as desired.
$(b)$ Recall that in Section~\ref{sec:path} we defined the map
$$ \Pi^{\operatorname{gC}, \tau}: \widetilde{\mathcal{C}}^{\tau}_k(x, y) \to \widetilde{\mathcal{C}}_k^{\operatorname{gC},\tau}(x^\flat, y^\flat)$$
and that after dividing by the gauge groups, this induces a map between the respective quotients:
$$ \Pi^{[\operatorname{gC}], \tau}: \widetilde \B^{\tau}_k([x], [y]) \to \widetilde \B^{\operatorname{gC},\tau}_k([x], [y]).$$
Note that when we work with gauge equivalence classes in $\mathcal{C}^\sigma(Y)$, we have $[x]=[x^\flat]$ and $[y]=[y^\flat].$
The derivative of $ \Pi^{[\operatorname{gC}], \tau}$, the map
$$ (\Pi^{[\operatorname{gC}], \tau}_* )_{[\gamma]}: \K^{\tau}_{k, \gamma} \longrightarrow \K_{k, \gamma^\flat}^{\operatorname{gC}, \tau},$$
was mentioned in \eqref{eq:Pigctau2}.
For $j \leq k$, observe that inside the completed tangent bundle $\T^{\tau}_j$ we have the subbundle $\V^{\tau}_j$ consisting of paths $(b(t), r(t), \psi(t))$ in temporal gauge. The derivative $ (\Pi^{\operatorname{gC}, \tau}_*)_{\gamma}$ maps $\V^{\tau}_j$ to $\V^{\operatorname{gC}, \tau}_j$ by applying infinitesimal global Coulomb projection slicewise:
$$(\Pi^{\operatorname{gC}, \tau}_*)_{\gamma} (b, r, \psi) (t) = (\Pi^{\operatorname{gC}, \sigma}_*)_{\gamma(t)} (b(t), r(t), \psi(t)).$$
Consider the diagram
\begin{equation}
\label{eq:diagram}
\xymatrixcolsep{4pc}
\xymatrix{
\widetilde \B^\tau_k([x],[y]) \ar[r]^{\mathcal{F}_{\q}^\tau} \ar[d]_{\Pi^{[\operatorname{gC}],\tau}} & \V^\tau_{k-1} \ar[d]^{\Pi^{\operatorname{gC},\tau}_*} \\
\widetilde \B^{\operatorname{gC},\tau}_k([x^\flat],[y^\flat]) \ar[r]^{\ \ \ \F^{\operatorname{agC},\tau}_{\q}} & \V_{k-1}^{\operatorname{gC},\tau}.
}
\end{equation}
We claim that this diagram commutes. This can be seen as follows. Choose a representative $\gamma$ of $[\gamma] \in \widetilde \B^\tau_k([x],[y])$ in temporal gauge.
Since the elements of $\V^\tau_j$ are in temporal gauge, we can write $\mathcal{F}_{\q}^\tau = \frac{d}{dt} + \mathcal{X}^{\sigma}_{\q}$ and $\F^{\operatorname{gC},\tau}_{\q} = \frac{d}{dt} + \mathcal{X}^{\operatorname{gC},\sigma}_{\q}$.
Let us also write $\gamma^\flat(t)=\Pi^{\operatorname{gC},\sigma}(\gamma(t))$ as $g(t)\gamma(t)$, where $g(t)$ is a path of gauge transformations on $Y$. Since the (perturbed) CSD functional is gauge-invariant, we have that $\mathcal{X}^{\sigma}_{\q}$ is gauge-equivariant and thus:
\begin{equation}\label{eq:g(t)}
\mathcal{X}^{\sigma}_{\q} (\gamma^{\flat}(t)) = \mathcal{X}^{\sigma}_{\q}(g(t)\gamma(t)) = g(t)_* \mathcal{X}^{\sigma}_{\q}(\gamma(t)).
\end{equation}
Moreover, direct computation shows that
\begin{equation}\label{eq:g_*}
(\Pi^{\operatorname{gC},\sigma}_*)_{\gamma^{\flat}(t)} \circ g(t)_* = (\Pi^{\operatorname{gC},\sigma}_*)_{\gamma(t)}.
\end{equation}
Since $\mathcal{X}^{\operatorname{gC},\sigma}_{\q} = \Pi^{\operatorname{gC},\sigma}_* \circ \mathcal{X}^{\sigma}_{\q}$, applying $(\Pi^{\operatorname{gC},\sigma}_*)_{\gamma^{\flat}(t)}$ to \eqref{eq:g(t)} and using \eqref{eq:g_*} shows that
\begin{equation}\label{eq:equivariance}
\mathcal{X}^{\operatorname{gC},\sigma}_{\q}(\gamma^{\flat}(t)) = (\Pi^{\operatorname{gC},\sigma}_*)_{\gamma^{\flat}(t)} \mathcal{X}^{\sigma}_{\q} (\gamma^{\flat}(t)) = (\Pi^{\operatorname{gC},\sigma}_*)_{\gamma(t)} \mathcal{X}^{\sigma}_{\q}(\gamma(t)).
\end{equation}
By the chain rule, we have
$$\frac{d\gamma^{\flat}(t)}{dt} = \frac{d}{dt}(\Pi^{\operatorname{gC},\sigma} \circ \gamma(t)) = (\Pi^{\operatorname{gC},\sigma}_*)_{\gamma(t)} \frac{d\gamma(t)}{dt},$$
or, for short,
\begin{equation}\label{eq:dtequivariance}
\frac{d}{dt} (\gamma^{\flat}) = \Pi^{\operatorname{gC},\tau}_* \Bigl(\frac{d}{dt} \gamma \Bigr).
\end{equation}
Using \eqref{eq:equivariance} and \eqref{eq:dtequivariance} we compute
\begin{align*}
\F^{\operatorname{agC},\tau}_{\q} \circ \Pi^{[\operatorname{gC}],\tau}(\gamma) &= \frac{d}{dt} (\gamma^{\flat}) + \mathcal{X}^{\operatorname{gC},\sigma}_{\q} (\gamma^{\flat}) \\
&= \Pi^{\operatorname{gC},\tau}_* \Bigl(\frac{d}{dt} \gamma \Bigr) + \Pi^{\operatorname{gC},\tau}_* (\mathcal{X}^{\sigma}_{\q} \gamma) \\
&= \Pi^{\operatorname{gC},\tau}_* (\mathcal{F}_{\q}^\tau (\gamma)).
\end{align*}
Thus, the diagram \eqref{eq:diagram} commutes.
Taking derivatives in \eqref{eq:diagram} and using the fact that $\gamma$ and $\gamma^\flat$ are trajectories of the respective vector fields, we obtain the commutative diagram
\begin{equation}\label{eq:commutingderivatives}
\xymatrixcolsep{5pc}
\xymatrix{
\K^\tau_{k,\gamma} \ar[r]^{\D^\tau_{\gamma}\mathcal{F}_{\q}^\tau} \ar[d]_{\Pi^{[\operatorname{gC}],\tau}_* } & \V^\tau_{k-1,\gamma} \ar[d]^{\Pi^{\operatorname{gC},\tau}_*} \\
\K^{\operatorname{gC},\tau}_{k,\gamma^{\flat}} \ar[r]^{\D^\tau_{\gamma^{\flat}} \F^{\operatorname{agC},\tau}_{\q}} & \V_{k-1,\gamma^{\flat}}^{\operatorname{gC},\tau}.
}
\end{equation}
Since the infinitesimal global Coulomb projection $\Pi^{\operatorname{gC}, \sigma}_*$ is surjective, we see that the right vertical arrow $\Pi^{\operatorname{gC},\tau}_*$ in \eqref{eq:commutingderivatives} is also surjective. Using the commutativity of \eqref{eq:commutingderivatives}, we get that if $(\D^\tau_{\gamma} \F^{\tau}_{\q})|_{\K^{\tau}_{k,\gamma}}$ is surjective, then so is $(\D^\tau_{\gamma^{\flat}} \F^{\operatorname{gC},\tau}_{\q})|_{\K^{\operatorname{gC},\tau}_{k,\gamma^{\flat}}}$.
\end{proof}
Recall that Proposition~\ref{prop:gccorrespondence} guaranteed that the moduli spaces $M([x],[y])$ and $M^{\operatorname{agC}}([x],[y])$ are homeomorphic. Proposition~\ref{prop:Qgammasurjective} says more: if $M([x],[y])$ is cut out smoothly and transversely by $\mathcal{F}_{\q}^\tau$, then $M^{\operatorname{agC}}([x],[y])$ is also cut out smoothly and transversely by $\F^{\operatorname{agC},\tau}_{\q}$.
So far we have only discussed the usual moduli spaces $M([x], [y])$. Similar arguments apply to the moduli spaces $M^{\red}([x], [y])$ between reducibles. The end result is that we can identify them with the corresponding moduli spaces $M^{\operatorname{agC},\red}([x], [y])$ in Coulomb gauge, and that regularity of the former implies regularity of the latter.
\section{Gradings}
\label{sec:GradingsCoulomb}
From now on we will assume that we have chosen an admissible perturbation $\q$, so that all the stationary points are non-degenerate, and the moduli spaces $M([x], [y])$ and $M^{\red}([x], [y])$ are regular.
Let $x, y$ be stationary points in $W^\sigma$. In view of Lemma~\ref{lem:allsurjective}, Proposition~\ref{prop:Qgammasurjective} and the definition of relative gradings in Section~\ref{sec:modifications}, we have
\begin{align}\label{eq:gradingscoulomb}
\operatorname{gr}(x, y) & = \operatorname{ind} \ (\D^\tau_{\gamma} \mathcal{F}_{\q}^\tau)|_{\K^{\tau}_{k,\gamma}} = \operatorname{ind} Q_{\gamma} \\
&= \operatorname{ind} \ (\D^\tau_{\gamma^{\flat}} \F^{\operatorname{agC},\tau}_{\q})|_{\K^{\operatorname{gC},\tau}_{k,\gamma^{\flat}}} = \operatorname{ind} Q^{\operatorname{gC}}_{\gamma^\flat}, \notag
\end{align}
where $\gamma$ is any path from $x$ to $y$ (not necessarily a trajectory of $\mathcal{X}^{\sigma}_{\q}$).
Thus, the relative gradings can be calculated directly in Coulomb gauge.
Recall from \eqref{eq:GradingRed} that the relative gradings of reducible stationary points of $\mathcal{X}_{\q}$ with the same connection component can be computed in terms of the spectrum of the operator $D_{\q,a}$; here, an eigenvector $\phi$ of $D_{\q,a}$ has eigenvalue given by the spinorial energy of $(a,0,\phi)$. It is thus natural to ask how this relates to the analogous setup in Coulomb gauge. The following tells us that spinorial energy and ``Coulomb gauge spinorial energy'' are equal.
\begin{lemma} \label{lem:coulomb-gauge-spinorial-same}
Let $x = (a,s,\phi) \in W_k^\sigma$. Then,
\begin{equation}\label{eq:spinorials-agree}
\operatorname{Re} \langle \widetilde{\mathcal{X}}^{\operatorname{gC},1}_{\q}(a,s,\phi), \phi \rangle_{L^2} = \operatorname{Re} \langle \widetilde{\mathcal{X}}^{1}_{\q}(a,s,\phi), \phi \rangle_{L^2}.
\end{equation}
\end{lemma}
\begin{proof}
By continuity, it suffices to establish \eqref{eq:spinorials-agree} for irreducibles. We have
\begin{align*}
\operatorname{Re} \langle \widetilde{\mathcal{X}}^{\operatorname{gC},1}_{\q}(a,s,\phi), \phi \rangle_{L^2} &= \frac{1}{s} \operatorname{Re} \langle (\mathcal{X}^{\operatorname{gC}}_{\q})^1(a,s\phi), \phi \rangle_{L^2} \\
&= \frac{1}{s} \operatorname{Re} \langle \mathcal{X}^1_{\q}(a,s\phi) + Gd^*(\mathcal{X}^0_{\q}(a,s\phi))\phi, \phi \rangle_{L^2} \\
&= \frac{1}{s} \operatorname{Re} \langle \mathcal{X}^1_{\q}(a,s\phi), \phi \rangle_{L^2} \\
&= \operatorname{Re} \langle \widetilde{\mathcal{X}}^{1}_{\q}(a,s,\phi), \phi \rangle_{L^2},
\end{align*}
where in the penultimate equality we use that $\operatorname{Re} \langle Gd^*(\mathcal{X}^0_{\q}(a,s\phi)) \phi, \phi \rangle_{L^2} = 0$, since $Gd^*(\mathcal{X}^0_{\q}(a,s\phi))$ is purely imaginary.
\end{proof}
In light of Lemma~\ref{lem:coulomb-gauge-spinorial-same}, we do not define a separate spinorial energy in Coulomb gauge.
Consider a reducible stationary point $x=(a, 0, \phi)$ of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q}$. Then $\phi$ is an eigenvector of the operator $D^{\operatorname{gC}}_{\q, a}$, defined by
$$ D^{\operatorname{gC}}_{\q, a} (\psi) := \D_{(a, 0)} (\mathcal{X}^{\operatorname{gC}}_{\q})^1(0, \psi) = \widetilde{\mathcal{X}}^{\operatorname{gC},1}_{\q}(a, 0, \psi).$$
Let $\mu$ be the corresponding eigenvalue. Since $x$ is necessarily a stationary point of $\mathcal{X}^{\sigma}_{\q}$ as well, $\phi$ is an eigenvector of $D_{\q,a}$ and $(a,0)$ is a stationary point of $\mathcal{X}_{\q}$. In particular, $\D_{(a,0)} \mathcal{X}_{\q}(0,\phi)$ is in Coulomb gauge, and thus
$$
\mu \phi = D_{\q,a}(\phi) = D^{\operatorname{gC}}_{\q,a} (\phi).
$$
It follows that
\begin{equation}\label{eq:LambdaRed}
\Lambda_\q(x) = \langle \widetilde{\mathcal{X}}^{\operatorname{gC},1}_{\q}(a,0,\phi), \phi \rangle_{L^2}= \langle \mu\phi , \phi \rangle_{L^2} = \mu,
\end{equation}
which agrees with Lemma~\ref{lem:coulomb-gauge-spinorial-same}. Note that here we do not need to take real parts, since the relevant operators are self-adjoint at stationary points.
\begin{lemma}
\label{lem:grsp}
Fix a reducible stationary point $(a,0)$ of $\mathcal{X}^{\operatorname{gC}}_{\q}$. For each $N \in \mathbb{N}$, there exist $\omega_1 , \omega_2 > 0$ such that the (finitely many) reducible stationary points of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q}$ which agree with $(a,0)$ in the blow-down and have grading in the interval $[-N,N]$ are precisely the reducible stationary points with spinorial energy in the interval $[-\omega_1,\omega_2]$.
\end{lemma}
\begin{proof}
This follows from \eqref{eq:GradingRed} and \eqref{eq:LambdaRed}.
\end{proof}
\section{The cut-down moduli spaces in Coulomb gauge}
\label{sec:UCoulomb}
Recall that in Section~\ref{sec:mfh}, the $U$-map on monopole Floer homology was defined by intersecting the moduli spaces $M([x], [y])$ and $M^{\red}([x], [y])$ with the zero set $\Zs$ of a transverse section $\sect$ of a complex line bundle $E^{\sigma}$ over $\B^{\sigma}_k(B_p)$. The bundle $E^{\sigma}$ was associated to the map $\G_{k+1}(B_p) \to S^1, \ u \mapsto u(p)$, where $p=(t, q)$ is a point in ${\mathbb{R}}\times Y$. Without loss of generality, let us assume that $t=0$.
We will need to modify this definition so that we can relate it to Coulomb gauge. First, whereas in Section~\ref{sec:mfh} we followed \cite{KMOS} and used the restriction of configurations to a standard ball $B_p$ around $p$, we could just as well restrict to any closed neighborhood $N_p$ of $p$ that is a manifold with boundary. It is convenient to take $N_p = [-1, 1] \times Y$. Consider the restriction $r: \B_k(N_p) \to \B_k(B_p)$. On the blow-up, this induces a map
$$ r^{\sigma} : \mathscr{N} \to \B^{\sigma}_k(B_p),$$
$$ r^{\sigma}(a, s, \phi) = \bigl (a, s \cdot \|\phi\|_{L^2(B_p)}, {\phi}/{ \|\phi\|_{L^2(B_p)}} \bigr),$$
well-defined on the open subset $\mathscr{N} \subset \B^{\sigma}_k(N_p)$ consisting of configurations $(a, s, \phi)$ such that $\phi$ does not vanish identically on $B_p$. Note that, because of the unique continuation principle, the moduli spaces $M([x], [y])$ and $M^{\red}([x], [y])$ are contained in $\mathscr{N}$. (Here, we identify trajectories with their restrictions to $N_p$, again using unique continuation.)
Let us pull back $\sect$ under $r^\sigma$ and obtain a section $(r^\sigma)^*\sect$ of the bundle
$$(E^{\sigma})' := (r^\sigma)^*E^{\sigma}$$ over $\mathscr{N}$. Intersecting the moduli spaces with the zero set of $(r^\sigma)^*\sect$ is the same as intersecting them with $\Zs$. Furthermore, we can consider any other transverse section of $(E^{\sigma})'$, and let $\Zs'$ be its zero set. If we define the $U$-maps using intersections with $\Zs'$, standard continuation arguments in Floer theory show that they are chain homotopic to the original ones.
Note that $(E^{\sigma})'$ is associated to the map $\G_{k+1}(N_p) \to S^1, \ u \mapsto u(0,q)$. For our next modification, let $(E^{\sigma})''$ be the complex line bundle over $\mathscr{N} \subset \B^{\sigma}_k(N_p)$ associated to the map
\begin{equation}
\label{eq:mueq}
\G_{k+1}(N_p) \to S^1, \ e^{f} \mapsto e^{\mu_Y(f(0, \cdot))},
\end{equation}
where $f : N_p \to i\R$ and $\mu_Y(f(0, \cdot))$ is the average value of $f$ on the slice $\{0\} \times Y$.
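Let us also note why \eqref{eq:mueq} is well-defined; this is a quick check, using that (since $b_1(Y)=0$, so $H^1(Y; \zz) = 0$) every element of $\G_{k+1}(N_p)$ can be written as $e^f$, with $f$ unique up to adding a constant in $2\pi i \zz$. If $e^{f'} = e^{f}$, then $f' = f + 2\pi i n$ for some integer $n$, and since $\mu_Y$ fixes constants,
$$ e^{\mu_Y(f'(0, \cdot))} = e^{\mu_Y(f(0, \cdot)) + 2\pi i n} = e^{\mu_Y(f(0, \cdot))}. $$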
We can construct a family of bundles interpolating between $(E^{\sigma})'$ and $(E^{\sigma})''$, by considering the maps
$$ \G_{k+1}(N_p) \to S^1, \ e^{f} \mapsto e^{\lambda \mu_Y(f(0, \cdot)) + (1-\lambda)f(0,q)}.$$
Thus, instead of considering sections of $(E^{\sigma})'$, we could define the $U$-maps using sections of $(E^{\sigma})''$, and the results will be chain homotopic to the originals.
The third modification consists in moving from the $\sigma$ model to the $\tau$ model. As explained in \cite[Section 6.3]{KMbook}, the two models are equivalent in the following sense. Consider the open subset $\mathcal{U} \subset \B^{\sigma}_k(N_p)$ consisting of configurations $(a, s, \phi)$ with $\phi|_{\{t\} \times Y} \not \equiv 0$ for all $t \in [0,1]$. As noted in \cite[p.463]{KMbook}, there is a natural map
$$\varrho: \mathcal{U} \to \B^{\tau}_k(N_p).$$
By unique continuation, the moduli spaces $M([x], [y])$ and $M^{\red}([x], [y])$ restricted to $N_p$ yield configurations in $\mathcal{U}$, which map homeomorphically onto their image under $\varrho$.
There is a complex line bundle $E^{\tau}$ over $\B^{\tau}_k(N_p)$ associated to the same map \eqref{eq:mueq} as before. When we pull it back under $\varrho$ and then further restrict to $\mathcal{U} \cap \mathscr{N}$, we obtain the restriction of the bundle $(E^{\sigma})''$ to $\mathcal{U} \cap \mathscr{N}$. Thus, we can equivalently define the $U$-maps using the $\tau$ model and intersecting with the zero set $\Zs^{\tau}$ of a transverse section of $E^{\tau}$.
We are now ready to make the connection with configurations in Coulomb gauge. This is done via the global Coulomb projection $\Pi^{[\operatorname{gC}],\tau}$. In \eqref{eq:Pigctau} the map $\Pi^{[\operatorname{gC}],\tau}$ was defined for configurations on ${\mathbb{R}}\times Y$ with fixed asymptotics, but the same formula \eqref{eq:zebra} can be applied to a configuration on $N_p = [-1,1] \times Y$. By a slight abuse of notation, we still write $\Pi^{[\operatorname{gC}],\tau}$ for the resulting map $\B^{\tau}_k(N_p) \to \B^{\operatorname{gC}, \tau}_k(N_p)$. Here, $\B^{\operatorname{gC}, \tau}_k(N_p)$ stands for the quotient of the space $\mathcal{C}^{\operatorname{gC}, \tau}_k(N_p)$ by the gauge group $\G^{\operatorname{gC}}_{k+1}(N_p)$; cf. Section~\ref{sec:cylinders}.
Note that there is a natural map
\begin{equation}
\label{eq:BW}
\B^{\operatorname{gC}, \tau}_k(N_p) \to W^\sigma_{k-1/2}/S^1, \ \ [\gamma] \mapsto [\gamma(0)],
\end{equation}
where the representative $\gamma$ is chosen to be in temporal gauge.
Consider the complex line bundle $E^{\operatorname{agC}, \sigma}$ over $W^\sigma_{k-1/2}/S^1$, associated to the $S^1$-bundle $W^\sigma_{k-1/2}$. Pick a section $\sect^{\operatorname{agC}}$ transverse to the zero section, and such that the zero set $\Zs^{\operatorname{agC}}$ of $\sect^{\operatorname{agC}}$ intersects all the moduli spaces $M^{\operatorname{agC}}([x], [y])$ and $M^{\operatorname{agC}, \red}([x], [y])$ transversely. We obtain cut-down moduli spaces in Coulomb gauge:
$$M^{\operatorname{agC}}([x], [y]) \cap \Zs^{\operatorname{agC}} \ \ \text{and} \ \ M^{\operatorname{agC}, \red}([x], [y]) \cap \Zs^{\operatorname{agC}}.$$
Here, we identified $M^{\operatorname{agC}}([x], [y])$ and $M^{\operatorname{agC}, \red}([x], [y])$ with their images in $W^\sigma_k/S^1 \subset W^\sigma_{k-1/2}/S^1$ at time $t=0$, for simplicity. (See \cite[Proposition 7.2.1]{KMbook} for the model unique continuation result for this case.) Alternatively, we could identify them with their images in $\B^{\operatorname{gC}, \tau}_k(N_p)$ under restriction (using unique continuation). Consider the line bundle $E^{\operatorname{agC}, \tau}$ over $\B^{\operatorname{gC}, \tau}_k(N_p)$, pulled back from $E^{\operatorname{agC}, \sigma}$ under the map \eqref{eq:BW}. From $\sect^{\operatorname{agC}}$ we obtain a section of $E^{\operatorname{agC}, \tau}$, and intersections of its zero set with the moduli spaces correspond to intersections of $\Zs^{\operatorname{agC}}$ with those moduli spaces.
Observe that the pull-back of $E^{\operatorname{agC}, \tau}$ under $\Pi^{[\operatorname{gC}],\tau}$ is exactly the bundle $E^{\tau}$ over $\B^{\tau}_k(N_p)$. Thus, we can pull back the section of $E^{\operatorname{agC}, \tau}$ and obtain a section of $E^{\tau}$, which we can then take to be the one defining the $U$-maps in the $\tau$ model. We have a commutative diagram:
\[
\xymatrixcolsep{4pc}
\xymatrix{
\ \ M([x],[y]) \ \ \ar[r] \ar[d]_{\Pi^{[\operatorname{gC}],\tau}} & \B^\tau_k(N_p) \ \ar[d]^{\Pi^{[\operatorname{gC}],\tau}} \\
M^{\operatorname{agC}}([x],[y]) \ar[r] & \B^{\operatorname{gC}, \tau}_k(N_p),
}
\]
where the horizontal maps are given by restriction, and are one-to-one (by unique continuation). Since we have established that $\Pi^{[\operatorname{gC}],\tau} : M([x],[y]) \to M^{\operatorname{agC}}([x],[y])$ is a homeomorphism, we obtain an identification of the cut-down moduli spaces:
\begin{equation}
\label{eq:M1Z}
M([x], [y]) \cap \Zs^{\tau} \ \cong \ M^{\operatorname{agC}}([x], [y]) \cap \Zs^{\operatorname{agC}}
\end{equation}
and
\begin{equation}
\label{eq:M2Z}
M^{\red}([x], [y]) \cap \Zs^{\tau} \ \cong \ M^{\operatorname{agC}, \red}([x], [y]) \cap \Zs^{\operatorname{agC}}.
\end{equation}
\section{Orientations}
\label{sec:OrientCoulomb}
In Section~\ref{sec:or2} we explained how the moduli spaces $M([x], [y])$ can be oriented using an orientation data set. That discussion can be adapted to global Coulomb gauge. To orient the spaces $M^{\operatorname{agC}}([x], [y])$, we need to trivialize the determinant lines $\det(Q_{\gamma}^{\operatorname{gC}} )$. For arbitrary $x, y \in W^{\sigma}$, consider compact intervals $I=[t_1, t_2]$, paths $\gamma \in W^{\tau}(I \times Y)$ restricting to $x$ and $y$ on the two boundary components, and Fredholm operators of the form
\begin{equation}
\label{eq:Pgammagc}
P_{\gamma}^{\operatorname{gC}} = \bigl( Q_{\gamma}^{\operatorname{gC}} , -\Pi_1^{\operatorname{gC},+}, \Pi_2^{\operatorname{gC},-} \bigr) : \T^{\operatorname{gC},\tau}_{1, \gamma}(I \times Y) \to \V^{\operatorname{gC},\tau}_{0, \gamma} \oplus L^2(I; i\R) \oplus H_1^{\operatorname{gC}, +} \oplus H_2^{\operatorname{gC}, -}.
\end{equation}
Here, $Q_{\gamma}^{\operatorname{gC}} $ has the same expression as in \eqref{eq:Qggc}, and $\Pi_1^{\operatorname{gC},+}$ is the composition of restriction to the boundary with spectral projection onto
$$ H_1^{\operatorname{gC}, +} = \K^{\operatorname{agC}, +}_{1/2, x} \oplus i\R \subset \T^{\operatorname{gC}, \sigma}_{1/2, x}(Y) \oplus i\R,$$
with $\K^{\operatorname{agC}, +}_{1/2, x} \subset \K^{\operatorname{agC}, \sigma}_{1/2, x}$ being the direct sum of all positive eigenspaces of the Hessian $\operatorname{Hess}^{\tilde{g}, \sigma}_{\q, x} - \varepsilon$ for small, positive $\varepsilon$. The space $H_2^{\operatorname{gC}, -}$ and the projection $\Pi_2^{\operatorname{gC},-}$ are defined similarly, using the nonpositive eigenspaces of $\operatorname{Hess}^{\tilde{g}, \sigma}_{\q, y} - \varepsilon$. Note that since $\operatorname{Hess}^{\tilde{g},\sigma}_{\q,x}$ is invertible with real spectrum and is a compact perturbation of a self-adjoint and invertible operator, we see that $\K^{\operatorname{agC},+}_{1/2,x} \oplus \K^{\operatorname{agC},-}_{1/2,x} = \K^{\operatorname{agC},\sigma}_{1/2,x}$ (cf. \cite[p.313]{KMbook}).
We define an {\em orientation data set in Coulomb gauge} $o^{\operatorname{gC}}$ to consist of orientations $o^{\operatorname{gC}}_{[x], [y]}$ for $\det(P_{\gamma}^{\operatorname{gC}} )$ (for any $\gamma$), satisfying the compatibility condition
$$ o^{\operatorname{gC}}_{[x], [y]} \cdot o^{\operatorname{gC}}_{[y], [z]} = o^{\operatorname{gC}}_{[x], [z]}.$$
An orientation data set in Coulomb gauge produces orientations on the moduli spaces $M^{\operatorname{agC}}([x], [y])$ and $M^{\operatorname{agC}, \red}([x], [y]).$
\begin{proposition}
\label{prop:orientationsgc}
An orientation data set $o$ (as in Section~\ref{sec:or2}) naturally induces an orientation data set $o^{\operatorname{gC}}$ in Coulomb gauge, such that the homeomorphisms constructed in Proposition~\ref{prop:gccorrespondence},
$$ M([x], [y]) \xrightarrow{\cong} M^{\operatorname{agC}}([x], [y]),$$
are orientation-preserving, and so are the homeomorphisms
$$ M^{\red}([x], [y]) \xrightarrow{\cong} M^{\operatorname{agC}, \red}([x], [y]).$$
\end{proposition}
\begin{proof}
Fix an orientation data set $o$. This trivializes the determinant lines $\det(P_{\gamma})$, for the operators $P_{\gamma}$ from \eqref{eq:Pgamma}. Let us focus on trajectories $\gamma \in W^{\tau}(I \times Y)$. We seek to trivialize the corresponding operators $P_{\gamma}^{\operatorname{gC}}$. To go between $P_{\gamma}$ and $P_{\gamma}^{\operatorname{gC}}$, we follow the steps in the proof of Proposition~\ref{prop:FredholmCoulomb}, but considering operators defined on compact cylinders, and with spectral projections added at the boundary. Specifically, we can deform $P_{\gamma}$ into an operator of the form
$$\widehat{P}_{\gamma}^{\operatorname{gC}} =\begin{pmatrix} P_{\gamma}^{\operatorname{gC}} & 0 \\ 0 & \widehat{J} \end{pmatrix},$$
where $\widehat{J}$ is the analogue of $\widehat{R}$ from \eqref{eq:RRtilde}, with spectral projections added at the boundary. One can check that $\widehat{J}$ is bijective. Hence, a trivialization of $\det(P_{\gamma})$ gives one of $\det(\widehat{P}_{\gamma}^{\operatorname{gC}} )$ and then one of $\det(P_{\gamma}^{\operatorname{gC}})$. The resulting trivializations are compatible with concatenation, and hence combine into an orientation data set in Coulomb gauge.
The fact that the homeomorphisms are orientation-preserving is immediate from the construction.
\end{proof}
Observe that Proposition~\ref{prop:orientationsgc} also implies that the identifications \eqref{eq:M1Z} and \eqref{eq:M2Z}, between the cut-down moduli spaces, are orientation-preserving.
\section{Monopole Floer homology in Coulomb gauge}\label{sec:CoulombSummary}
The work done in this chapter allows us to rephrase the definition of monopole Floer homology in terms of configurations in Coulomb gauge.
We fix an admissible perturbation $\q$. The generators of $\widecheck{\mathit{CM}}$ can be taken to be some of the stationary points $[x]$ of the vector field $\mathcal{X}^{\operatorname{agC}, \sigma}_{\q}$ on $W^\sigma/S^1$; precisely, those that are either in the interior of $W^\sigma/S^1$, or on the boundary and stable. Indeed, by \eqref{eq:EquivStat2}, these are in one-to-one correspondence with the generators $\operatorname{Crit}^o \cup \operatorname{Crit}^s$ considered in Section~\ref{sec:mfh}. Further, since the original generators are non-degenerate, so are the ones in Coulomb gauge, in the sense that $\ker (\D^\sigma_x\mathcal{X}^{\operatorname{agC}, \sigma}_{\q}) = 0$ at each stationary point $x$; see Lemma~\ref{lem:rephraseStat}.
To define the differential on $\widecheck{\mathit{CM}}$, we can use the moduli spaces $M^{\operatorname{agC}}([x], [y])$ and $M^{\operatorname{agC}, \red}([x], [y])$, consisting of trajectories of $\mathcal{X}^{\operatorname{agC}, \sigma}_{\q}$. By \eqref{eq:EquivTraj2} and Proposition~\ref{prop:gccorrespondence}, these are in one-to-one correspondence with the moduli spaces of monopoles considered in Section~\ref{sec:mfh}. Moreover, by Proposition~\ref{prop:Qgammasurjective}, since the original moduli spaces are regular, so are the ones in Coulomb gauge. Specifically, this means that the operators $(\D^\tau_{\gamma} \F^{\operatorname{gC},\tau}_{\q})|_{\K^{\operatorname{gC},\tau}_{k,\gamma}}$ (or, equivalently, $Q^{\operatorname{gC}}_{\gamma}$) are surjective for all $[\gamma] \in M^{\operatorname{agC}}([x], [y])$, except in the boundary-obstructed case, where the cokernel has dimension 1.
As shown in \eqref{eq:gradingscoulomb}, we can define the relative gradings between generators $[x]$ and $[y]$ in Coulomb gauge by the index of the operator $Q^{\operatorname{gC}}_{\gamma}$, where $\gamma$ is a path from $x$ to $y$ in $W^\sigma$; these are the same relative gradings as in $\widecheck{\mathit{CM}}$.
Moreover, we can orient the moduli spaces $M^{\operatorname{agC}}([x], [y])$ and $M^{\operatorname{agC}, \red}([x], [y])$ using an orientation data set in Coulomb gauge, as in Section~\ref{sec:OrientCoulomb}; see Proposition~\ref{prop:orientationsgc}.
With this in mind, we define the differential $\check\del$ on $\widecheck{\mathit{CM}}$ by the same formulas as \eqref{eqn:boundaryirred}, \eqref{eqn:boundaryred}, and \eqref{eqn:cmboundary}, but using the moduli spaces $M^{\operatorname{agC}}([x], [y])$, $M^{\operatorname{agC}, \red}([x], [y])$, instead of $M([x], [y]), M^{\red}([x], [y])$.
Finally, the $\zz[U]$-module structure on $\widecheck{\mathit{CM}}$ can also be described in Coulomb gauge. We apply the equivalences \eqref{eq:M1Z} and \eqref{eq:M2Z} established in Section~\ref{sec:UCoulomb}. Thus, the $U$-map is given by formulas similar to \eqref{eqn:mU}, \eqref{eqn:mUred}, and \eqref{eqn:floercap}. For the new formulas, we instead intersect the moduli spaces $M^{\operatorname{agC}}([x], [y])$, $M^{\operatorname{agC}, \red}([x], [y])$ with the zero set $\Zs^{\operatorname{agC}}$ of a generic section of the complex line bundle $E^{\operatorname{agC},\sigma}$ over $W^{\sigma}/S^1$.
\chapter[Finite-dimensional approximations]
{Finite-dimensional approximations with tame perturbations} \label{sec:finiteapproximations}
\section{Very compactness}\label{sec:verycompact}
Since monopole Floer homology is defined using a perturbation $\q$, we must define an analogue of the Floer spectrum using this perturbation as well. In order to do this, we must recall a more general setting in which the spectrum can be defined.
\begin{definition}
\label{def:vc}
Let $Y$ be a closed oriented Riemannian three-manifold with $b_1(Y)=0$, with a spinor bundle $\mathbb{S}$. Let $W = \ker d^* \oplus \Gamma(\mathbb{S}) \subset \mathcal{C}(Y)$ be the global Coulomb slice from Section~\ref{sec:coulombs}.
A smooth map $\eta:W \to W$ is called {\em very compact} if for all integers $k \geq 5$ and compact cylinders $Z = I \times Y$, the following two conditions are satisfied:
\noindent $(a)$ The map $\eta$ induces a continuous, functionally bounded map
\[
\hat{\eta}: W_{k-1}(Z) \to W_{k-1}(Z), \ \ \text{and}
\]
$(b)$ We can extend the differentials
\[
W \times \prod^m_{i = 1} W \to W, \ (x; v_1, \dots, v_m) \mapsto (\D^m\eta)_x(v_1, \dots, v_m)
\]
to continuous maps
\[
W_{k}(Z) \times W_{k-1-i_1}(Z) \times \ldots \times W_{k-1-i_m}(Z) \to W_{k-1-\sum i_s}(Z).
\]
\end{definition}
By the argument mentioned in Remark~\ref{rem:qhatq} (that is, working with configurations which are constant in the $\R$ direction), we obtain the analogous regularity statements on $Y$ as well. We call such a map $\eta$ very compact because condition (a), applied with $k$ instead of $k-1$, guarantees that the induced map $\eta : W_k \to W_{k}$ is functionally bounded, and therefore $\eta : W_k \to W_{k-1}$ is compact (in the sense that it takes bounded sets to precompact sets).
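Schematically, the compactness statement is just the composition
$$ W_k \xrightarrow{\ \eta\ } W_k \hookrightarrow W_{k-1}, $$
where the first map takes bounded sets to bounded sets by functional boundedness, and the inclusion of Sobolev completions is compact because $Y$ is closed (Rellich); we record this only as a reading aid, since it is implicit in the definition above.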
One example of a very compact map is the map $c: W \mathfrak{t}o W$ from \eqref{eq:cmap}.
\begin{proposition}\label{prop:proposition3perturbed}
Let $\eta: W \to W$ be a very compact map. Fix $k \geq 5$. Suppose that there exists a closed, bounded subset $N$ of $W_{k}$ such that the finite type trajectories of $l + \eta$ on $W_{k}$ that are contained in $N$ are actually contained in $W \cap U$ for an open set $U \subset N$. Then: \\
$(i)$ For $\lambda \gg 0$, trajectories of $l+p^\lambda \eta$ contained in $N$ must be contained in $U$. \\
$(ii)$ We can define the Floer spectrum $\Sigma^{-W^{(-\lambda,0)}}I^\lambda$ as in Chapter~\ref{sec:spectrum}. This is independent of $\lambda$ up to stable equivalence. \\
$(iii)$ Furthermore, if $\eta$ is $S^1$-equivariant, then the Floer spectrum can be constructed equivariantly, and is an invariant up to $S^1$-equivariant stable equivalence.
\end{proposition}
We omit the proof of Proposition~\ref{prop:proposition3perturbed}, as it is analogous to that of \cite[Theorem 1]{Spectrum}. In particular, part (i) corresponds to \cite[Proposition 3]{Spectrum}. The results in \cite{Spectrum} were for the special case where $\eta = c$, $N = \overline{B(2R)}$, and $U = B(R)$, as in Chapter~\ref{sec:spectrum}. The properties of $c$ used in those proofs were exactly those listed in Definition~\ref{def:vc}.
\begin{remark}
Very compactness was defined in \cite[Definition 4]{GluingBF}. There, condition (a) only required that $\eta:W_k \to W_{k-1}$ be a compact map. Proposition 5 in \cite{GluingBF} claimed that Proposition~\ref{prop:proposition3perturbed} is true under this weaker hypothesis. However, this is in fact not sufficient. In \cite{Spectrum}, Step 1 in the proof of Proposition 3 requires functional boundedness of $\eta$ to bound the derivatives of trajectories, and Step 3 requires continuity of $\hat{\eta}$ in order to do elliptic bootstrapping on $I \times Y$.
\end{remark}
For future reference, we state a key lemma that is needed for the proof of Proposition~\ref{prop:proposition3perturbed}(i). Its analogue is contained in the proof of Proposition 3 in \cite{Spectrum}.
\begin{lemma}\label{lem:convergencenoblowup}
Let $I \subseteq \R$ be a closed interval (possibly $\R$). Under the hypotheses of Proposition~\ref{prop:proposition3perturbed}, suppose we have a sequence of eigenvalues $\lambda_n \to \infty$ and a sequence of trajectories $\gamma_n: I \to W$ of $l + p^{\lambda_n} \eta$, such that $\gamma_n(t) \in N$ for all $t \in I$. Then there exists a subsequence of $\gamma_n$ for which the restrictions to any compact subinterval\footnote{For closed intervals $I, I' \subseteq \R$, we write $I' \Subset I$ if $I'$ is compact and contained in the interior of $I$.} $I' \Subset I$ converge in the $C^{\infty}$ topology of $W(I' \times Y)$ to a trajectory of $l + \eta$.
\end{lemma}
The following will allow us to define a Floer spectrum which incorporates the perturbations used in defining monopole Floer homology.
\begin{proposition}\label{prop:verycompact}
Let $\q$ be a very tame perturbation. Then the map $\eta_{\q} = \Pi^{\operatorname{gC}}_* \q: W \to W$ from \eqref{eq:etaq} is very compact.
\end{proposition}
\begin{proof}
Recall from Lemma~\ref{lem:tamecoulomb} that $\eta_{\q}$ is a controlled perturbation, in the sense of Definition~\ref{def:controlled}.
Thus, to prove Property (a) in the definition of very compactness, we simply apply part (i) of Definition~\ref{def:controlled}, with $k-1$ instead of $k$. This says that $\eta_{\q}$ extends to a continuous, functionally bounded map from $W_{k-1}(Z)$ to $W_{k-1}(Z)$.
For Property (b), without loss of generality, assume that $i_m$ is the largest of the integers $i_s$. Let us study the regularity properties of the derivatives of $\q$. We apply \cite[Proposition 11.4.1(ii)]{KMbook}, which states that for $p \geq 2$, the map
\[
\D^m \hat{\q} : (x,v_1,\ldots,v_m) \mapsto \D^m_x \hat{\q}(v_1,\ldots,v_m),
\]
extends to a continuous map from $\mathcal{C}_p(Z) \times \ldots \times \mathcal{C}_p(Z) \times \mathcal{C}_j(Z)$ to $\mathcal{C}_j(Z)$, for $0 \leq j \leq p$. For clarity, this product is written to be compatible with the formula above: the copy of $\mathcal{C}_j(Z)$ in the domain is a tangent-vector slot, and the left-most $\mathcal{C}_p(Z)$ records the point at which the higher derivative is computed. We let $j = k - 1 - i_m$ and
$$p = \begin{cases}
k-1 & \mathfrak{t}ext{ if } m=1,\\
k-1-\max \{i_s \mid s \neq m \} & \mathfrak{t}ext{ if } m \geq 2.
\end{cases}
$$ Note that since $k - 1 \geq 4$, $k - 1 - \sum i_s \geq 0$, and $i_m = \max i_s$, we have that $k - 1 - i_s \geq 2$ for any $s \neq m$ (indeed, $2 i_s \leq i_s + i_m \leq \sum_t i_t \leq k-1$, so $k - 1 - i_s \geq (k-1)/2 \geq 2$), so $p \geq 2$. Thus,
\[
\D^m \hat{\q}: \mathcal{C}_p(Z) \times \ldots \times \mathcal{C}_p(Z) \times \mathcal{C}_{k-1-i_m}(Z) \to \mathcal{C}_{k-1-i_m}(Z)
\]
is continuous. By pre-composing with the continuous inclusions of $\mathcal{C}_{k}(Z)$ and $\mathcal{C}_{k-1-i_s}(Z)$ into $\mathcal{C}_{p}(Z)$ for $s \neq m$ and post-composing with the continuous inclusion of $\mathcal{C}_{k-1-i_m}(Z)$ into $\mathcal{C}_{k-1-\sum i_s}(Z)$, we get that $\D^m \hat{\q}$ extends to a map
\[
\mathcal{C}_{k}(Z) \times \mathcal{C}_{k-1-i_1}(Z) \times \ldots \times \mathcal{C}_{k-1-i_m}(Z) \to \mathcal{C}_{k-1- \sum i_s}(Z).
\]
The requirement (b) in the definition of very compactness now follows from Lemma~\ref{lem:igc}.
\end{proof}
We want to apply Proposition~\ref{prop:proposition3perturbed} to $l$ and $$c_{\q} := c+\eta_{\q}.$$ Since both of the terms $c$ and $\eta_{\q}$ are very compact, so is $c_{\q}$. However, observe that very compactness is only one of the requirements needed to do finite dimensional approximation. The others are smoothness and boundedness for flow trajectories, as in the statement of Proposition~\ref{prop:proposition3perturbed}. (The containment in $W \cap U \subset W$ in the statement guarantees smoothness.) In our setting, we know from \cite[Proposition 13.1.2 (i)]{KMbook} that for a tame perturbation, the perturbed Seiberg-Witten trajectories, and thus their global Coulomb projections, are smooth. With regard to boundedness, in Proposition~\ref{prop:proposition3perturbed}, we want to take $U$ to be the open ball of some radius $R \gg 0$ in $W_{k}$, and $N$ to be the closed ball of radius $2R$. This is exactly what was done for the unperturbed Seiberg-Witten trajectories in \cite{Spectrum}. It follows from Proposition 1 of that paper that Seiberg-Witten trajectories inside $\overline{B(2R)}$ live inside $B(R)$, provided $R$ was chosen large enough. The same proof works for perturbed Seiberg-Witten trajectories, since they satisfy similar compactness properties, as detailed in \cite[Section 10.7]{KMbook}.
Therefore, we are now able to define an analogue of the Floer spectrum with the finite-dimensional approximations of $l + c_\q$ instead of $l + c$. We will denote this new spectrum by $\operatorname{SWF}_\q(Y,\mathfrak{s})$.
\begin{proposition}\label{prop:perturbedspectrum}
Let $\q$ be a very tame perturbation. Then $\operatorname{SWF}_\q(Y,\mathfrak{s})$ and $\operatorname{SWF}(Y,\mathfrak{s})$ are $S^1$-equivariantly stably homotopy equivalent.
\end{proposition}
\begin{proof}
This is similar to the proof that $\operatorname{SWF}(Y, \mathfrak{s})$ is independent of the Riemannian metric on $Y$; see \cite[Section 7]{Spectrum}. The key ingredients are the fact that we can interpolate linearly between $\q$ and $0$ such that the hypotheses of Proposition~\ref{prop:proposition3perturbed} are satisfied, and the fact that the homotopy type of the Conley index is invariant under perturbations.
\end{proof}
\begin{remark}
It is worth comparing very compactness with the condition of being controlled, as in Definition~\ref{def:controlled}. If $\q$ is very tame, both of these conditions are satisfied by $\eta_{\q}$; see Lemma~\ref{lem:tamecoulomb} and Proposition~\ref{prop:verycompact}. There is some overlap between the two conditions: for example, part (a) in the definition of very compactness is implied by part (i) in the definition of a controlled perturbation. However, neither condition is stronger than the other: part (b) in the definition of very compactness has no analog in the controlled condition; and part (ii) in the controlled condition has no analogue in very compactness due to the constraints on functional boundedness.
When working with the perturbation $\eta_{\q}$, we will need to use both very compactness and the controlled condition. Very compactness is needed to do finite dimensional approximation, and the controlled compactness is necessary to study the properties of stationary points and trajectories in these approximations.
\end{remark}
From now on, we always assume that our perturbation $\q$ is both very tame (Definition~\ref{def:verytame}) and admissible (Definition~\ref{def:admi}). For the existence of such $\q$, see Sections~\ref{sec:verytame} and \ref{sec:AdmPer}.
\section[Strategy for the proof]{Strategy for the proof of Theorem~\ref{thm:Main}} \label{sec:strategy}
In order to relate monopole Floer homology to the spectrum, it suffices to instead work with $\operatorname{SWF}_\q(Y,\mathfrak{s})$ by Proposition~\ref{prop:perturbedspectrum}. This latter invariant is clearly closer to monopole Floer homology due to the presence of the perturbation. However, we still need to relate the vector field $l + c_\q$ used for monopole Floer homology to $l + p^\lambda c_\q$ on $W^\lambda$. We will consider a vector field on $W_k$ defined by taking finite-dimensional approximations of the non-linear part of $\mathcal{X}q$:
\begin{equation}
\label{eq:Xqml}
\mathcal{X}qmlgc := l + p^\lambda c_\mathfrak{q}= l + c + \eta_{\q}^{\lambda},
\end{equation}
where $\eta_{\q}^{\lambda} := p^\lambda c_\mathfrak{q}- c.$ We will see that $\eta_{\q}^{\lambda}$ is very compact.
Further, $\mathcal{X}qmlgc$ induces a vector field $\mathcal{X}qmlgcsigma$ on the blow-up $W^\sigma$ and thus a vector field $\mathcal{X}qmlagcsigma$ on the quotient $W^\sigma/S^1$.
At this point we give an outline of how the proof of Theorem~\ref{thm:Main} is going to go.
For $N > 0$, let $\mathcal{C}rit_{[-N, N]}$ be the set of stationary points of $\mathcal{X}qagcsigma$ of grading in $[-N, N]$. Their $S^1$-orbits form sets of stationary points of $\mathcal{X}qgcsigma$ on $W^{\sigma}$. Let $\mathcal{O}rb_{[-N, N]}$ denote the union of these orbits. We fix $N$ sufficiently large so that the projection of $\mathcal{O}rb_{[-N, N]}$ to the blow-down $W$ contains all the stationary points of $\mathcal{X}qgc$; in particular, $\mathcal{C}rit_{[-N, N]}$ should contain all the irreducibles. Further, we assume that no reducible stationary point that is boundary-stable has grading less than $-N$. This ensures that the truncated chain complex $\widecheck{\mathit{CM}}_{\leq N}(Y, \mathfrak{s}, \q)$ is generated by $\mathcal{C}rit_{[-N, N]}$. Note that the homology of $\widecheck{\mathit{CM}}_{\leq N}(Y, \mathfrak{s}, \q)$ agrees with $\widecheck{\mathit{HM}}(Y, \mathfrak{s}, \q)$ in degrees $\leq N-1$.
Let
\begin{equation}
\label{eq:Nclosed}
\mathcal{N} = \{ x\in W^{\sigma}_k \mid d_{L^2_k}(x, \mathcal{O}rb_{[-N, N]}) \leq 2\delta \},
\end{equation}
where $d_{L^2_k}$ denotes $L^2_k$ distance, and $\delta > 0$ is chosen sufficiently small such that the only stationary points of $\mathcal{X}qgcsigma$ that are contained in $\N$ are those in $\mathcal{O}rb_{[-N, N]}$. Similarly, let
\begin{equation}
\label{eq:Uopen}
\mathcal{U} = \{ x\in W^{\sigma}_k \mid d_{L^2_k}(x, \mathcal{O}rb_{[-N, N]}) < \delta \} \subset \N.
\end{equation}
Thus, $\N/S^1$ and $\mathcal{U}/S^1$ are closed, resp. open, neighborhoods of $\mathcal{C}rit_{[-N,N]}$ in $W^\sigma_k/S^1$.
Granted this, we will construct a chain complex $\check{C}^{\lambda}$ determined by $\mathcal{X}qmlagcsigma$, and which will be identified with $\widecheck{\mathit{CM}}_{\leq N}(Y,\mathfrak{s},\q)$. The chain groups of $\check{C}^{\lambda}$ will be generated by the stationary points of $\mathcal{X}qmlagcsigma$ that live in $\N/S^1$ (and hence in $\mathcal{U}/S^1$); in particular, this includes all irreducibles $[(a, s, \phi)]$ such that $(a, s\phi) \in \overline{B(2R)}.$ We will see that these stationary points will necessarily be contained in the finite dimensional approximation $(W^\lambda)^\sigma/S^1$. The differential on $\check{C}^{\lambda}$ will be defined analogously to monopole Floer homology (only counting trajectories that are contained entirely in $(W^\lambda)^\sigma/S^1$). That this will actually be a chain complex will come from a Morse-Smale stability condition: since we have non-degeneracy of the stationary points and regularity of the moduli spaces for $\mathcal{X}qagcsigma$, we will show that this holds for $\mathcal{X}qmlagcsigma$ as well, provided that $\lambda$ is sufficiently large. Using the inverse function theorem, we will find a correspondence between the stationary points and isolated trajectories of $\mathcal{X}qmlagcsigma$, on the one hand, and those of $\mathcal{X}qagcsigma$ on the other. This will give an explicit identification between $\check{C}^\lambda$ and $\widecheck{\mathit{CM}}_{\leq N}(Y,\mathfrak{s},\q)$. Here we are using the work of Chapter~\ref{sec:coulombgauge}, where we rephrased $\widecheck{\mathit{CM}}$ in Coulomb gauge. By the setup, we will be able to relate the respective orientations of the moduli spaces, gradings, and $U$-actions as well.
However, $\check{C}^\lambda$ can also be identified with a truncation of the Morse complex (for manifolds with $S^1$ actions) for $B(2R) \cap W^\lambda$ as in Section~\ref{sec:combinedMorse}. It follows from Equation~\eqref{eq:EquivConleyMorse} that the homology of this Morse complex is isomorphic to $\widetilde{H}^{S^1}_{\leq M}(\operatorname{SWF}_{\q}(Y,\mathfrak{s}))$, for some $M > 0$. We can assume that $M > N$, and we get that
$$\widecheck{\mathit{HM}}_{\leq N-1}(Y,\mathfrak{s},\q) \cong \widetilde{H}^{S^1}_{\leq N-1}(\operatorname{SWF}_{\q}(Y,\mathfrak{s})).$$
By letting $N$ tend to infinity and applying Proposition~\ref{prop:perturbedspectrum}, we obtain the desired isomorphism in Theorem~\ref{thm:Main}.
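For the reader's convenience, the chain of identifications sketched above can be summarized as follows, writing $H_{\leq N-1}(\check{C}^{\lambda})$ as a shorthand (used only here) for the homology of $\check{C}^{\lambda}$ in degrees $\leq N-1$:
$$\widecheck{\mathit{HM}}_{\leq N-1}(Y,\mathfrak{s},\q) \;\cong\; H_{\leq N-1}(\check{C}^{\lambda}) \;\cong\; \widetilde{H}^{S^1}_{\leq N-1}(\operatorname{SWF}_{\q}(Y,\mathfrak{s})) \;\cong\; \widetilde{H}^{S^1}_{\leq N-1}(\operatorname{SWF}(Y,\mathfrak{s})).$$
The first isomorphism comes from the identification of $\check{C}^{\lambda}$ with $\widecheck{\mathit{CM}}_{\leq N}(Y,\mathfrak{s},\q)$, the second from the comparison with the Morse complex and \eqref{eq:EquivConleyMorse}, and the third from Proposition~\ref{prop:perturbedspectrum}; the theorem then follows by letting $N \to \infty$.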
With the above strategy in mind, it will suffice to do most of our analysis (i.e. compactness and non-degeneracy of stationary points and trajectories) in $W^\sigma$ rather than in the quotient $W^\sigma/S^1$. We are able to do so due to the compactness of the residual gauge group $S^1$.
\section{Finite-dimensional approximations of perturbations}
In order to carry out the strategy mentioned above, we need to understand the properties of the perturbation
\[
\eta_{\q}^{\lambda}= p^\lambda c_\mathfrak{q} - c = (p^\lambda c - c) + p^\lambda \eta_{\q} ,
\]
and in particular of the term $p^\lambda \eta_{\q}$. For notation, given a sequence $\lambda_n \to \infty$, we write $\pi_n$ for $p^{\lambda_n}$. Recall that $\Theta_j$ is a constant such that $\| p^\lambda x \|_{L^2_j} \leq \Theta_j \| x \|_{L^2_j}$ for all $x \in W_j$ and we take $\Theta_0 = 1$.
\begin{fact}\label{fact:pml}
$(a)$ For all $\lambda$, the smoothed projection $p^\lambda$ extends to a continuous, linear map $p^\lambda: W_{j} \to W_{j}$ such that $\| p^\lambda \| \leq \Theta_j$ for all $j$. (Here, $\| p^\lambda\|$ denotes the norm of $p^\lambda$ as an operator on $W_j$.)
$(b)$ If $\lambda_n \to \infty$, then $\pi_n = p^{\lambda_n} \to 1$ in the strong operator topology on $W_{j}$ for all $j$.
\end{fact}
\begin{lemma}\label{lem:finitecoulombtame}
If $\q$ is a very tame perturbation, then $p^\lambda \eta_{\q} $ is a controlled Coulomb perturbation for all $ \lambda$.
\end{lemma}
\begin{proof}
Lemma~\ref{lem:tamecoulomb} says that $\eta_{\q}$ is controlled. The claim now follows from Fact~\ref{fact:pml}(a).
\end{proof}
\begin{lemma}
\label{lem:finitecoulombvc}
The maps $c_\q$, $p^\lambda \eta_{\q} $, $p^\lambda c_\mathfrak{q}$ and $\eta_{\q}^{\lambda}$ are all very compact.
\end{lemma}
\begin{proof}
This follows from the very compactness of $c$, together with the very compactness of $\eta_{\q}$ (cf. Proposition~\ref{prop:verycompact}), and Fact~\ref{fact:pml}(a).
\end{proof}
We also need the following result about $c_{\q}$, which is not subsumed in very compactness.
\begin{lemma}
\label{lem:Dcq}
Let $Z = I \times Y$ for compact $I$. For $k \geq 3$ and $0 \leq j \leq k$, the map
$$ \mathcal{D}c_{\q}: W_k(Z) \to \operatorname{Hom}(W_j(Z), W_j(Z))$$
is in $C^\infty_{\fb}$. Therefore, extending by four-dimensional gauge, we have that
$$ \mathcal{D}c_{\q}: \mathcal{C}^{\operatorname{gC}}_k(Z) \to \operatorname{Hom}(\T^{\operatorname{gC}}_j(Z), \V^{\operatorname{gC}}_j(Z))$$
is in $C^\infty_{\fb}$.
\end{lemma}
\begin{proof}
We write $c_\mathfrak{q}= c + \eta_\q$. Since $\eta_\q$ is a controlled Coulomb perturbation by Lemma~\ref{lem:tamecoulomb}, we have that $\mathcal{D}\eta_{\q} \in C^\infty_{\fb}(W_k(Z), \operatorname{Hom}(W_j(Z), W_j(Z)))$. Therefore, it suffices to show that $\mathcal{D}c$ is in $C^\infty_{\fb}$ as well. Write $x = (a,\phi)$.
By \eqref{eq:cmap}, we see that $c$ is the composition of $\Pi^{\operatorname{gC}}_*$ with the vector field
\begin{equation}\label{eq:quadraticvf}
X: W_k(Z) \to \mathcal{C}_k(Z), \ (a,\phi) \mapsto (\tau(\phi, \phi), \rho(a) \cdot \phi).
\end{equation}
We can explicitly compute
\begin{align*}
(\D_{(a,\phi)} X)(b, \psi) &= (\tau(\phi, \psi) + \tau(\psi, \phi), \rho(b) \cdot \phi + \rho(a) \cdot \psi) \\
(\D^2_{(a,\phi)} X)((\alpha, \zeta),(b,\psi)) &= (\tau(\zeta, \psi) + \tau(\psi, \zeta), \rho(b) \cdot \zeta + \rho(\alpha) \cdot \psi) \\
(\D^3_{(a,\phi)} X) &\equiv 0.
\end{align*}
From this, it is straightforward to apply the Sobolev multiplication $L^2_k \times L^2_j \to L^2_j$ to see that $X$ satisfies the conditions of Lemma~\ref{lem:igc-fb} with $n = \infty$. Therefore, we see that $\mathcal{D}c$ is in $C^\infty_{\fb}$.
\end{proof}
The vector field $\mathcal{X}qmlgc = \mathcal{X}gc + \eta_{\q}^{\lambda}$ from \eqref{eq:Xqml} induces a vector field $\mathcal{X}qmlgcsigma$ on the blow-up. Similar to \eqref{eq:xqgcsigmaalternate}, this is given by the formula
\begin{align}
\nonumber \mathcal{X}qmlgcsigma(a,s,\phi) & = ({\mathcal{X}}_{\q^\lambda}^0(a,s\phi), \Lambda_{\q^\lambda}(a,s,\phi)s, D\phi + \widetilde{p^\lambda c_\mathfrak{q}}^1(a,s,\phi) - \Lambda_{\q^\lambda}(a,s,\phi)\phi) \\
\label{eq:Xqmlgcsigmaformula} & = \Bigl (*da + (p^\lambda c_\mathfrak{q})^0(a,s\phi), \Lambda_{\q^\lambda}(a,s,\phi)s, \\
\nonumber & \qquad \qquad \qquad D\phi + \int^1_0 \D_{(a,st\phi)} (p^\lambda c_\mathfrak{q})^1 (0,\phi)dt - \Lambda_{\q^\lambda}(a,s,\phi)\phi \Bigr),
\end{align}
where
\begin{equation}
\label{eq:lambda-spinorial}
\Lambda_{\q^\lambda} = \operatorname{Re} \langle \phi, (\widetilde{\mathcal{X}qmlgc})^1(a,s,\phi) \rangle_{L^2}.
\end{equation}
For $(a,s,\phi) \in W^\sigma$, we call $\Lambda_{\q^\lambda}(a,s,\phi)$ the {\em $\lambda$-spinorial energy}. As in the case of the spinorial energy, the $\lambda$-spinorial energy of an irreducible stationary point of $\mathcal{X}qmlgcsigma$ is zero. Analogous to the definition of $\mathcal{F}_{\q}gctau$ in \eqref{eq:Fqgctau}, we can use $\G^{\operatorname{gC}}(Z)$-equivariance to extend the equations defining the flow in \eqref{eq:Xqmlgcsigmaformula} to define a section
\begin{equation}
\label{eq:Fqmlgctau}
\mathcal{F}_{\qml}gctau: \mathcal{C}^{\operatorname{gC}, \tau}(Z) \to \V^{\operatorname{gC}, \tau}(Z).
\end{equation}
In the rest of this section we will describe several results about the dynamics of $\mathcal{X}qmlgcsigma$. These will be put to use, for instance, in Chapters~\ref{sec:criticalpoints}, \ref{sec:gradings}, \ref{sec:trajectories1} and \ref{sec:trajectories2}, where we will relate stationary points and trajectories of $\mathcal{X}qmlgcsigma$ to those of $\mathcal{X}qgcsigma$.
Note that being a reducible stationary point $(a,0)$ of $\mathcal{X}qmlgc$ is equivalent to solving
\[
*da + (p^\lambda \eta_\mathfrak{q})^0(a,0) = 0, \quad (p^\lambda \eta_\mathfrak{q})^1(a,0) = 0, \]
since $p^\lambda c (a,0) = 0$.
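For concreteness, the vanishing $p^\lambda c(a,0) = 0$ can be read off from \eqref{eq:cmap} and \eqref{eq:quadraticvf}: since $X(a,0) = (\tau(0,0), \rho(a) \cdot 0) = 0$ and $c$ is the composition of $\Pi^{\operatorname{gC}}_*$ with $X$, we get
\[
c(a,0) = \Pi^{\operatorname{gC}}_*\bigl(X(a,0)\bigr) = 0, \qquad \text{and hence} \qquad p^\lambda c(a,0) = 0.
\]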
A reducible stationary point $(a,0,\phi)$ of $\mathcal{X}qmlgcsigma$ is determined by a reducible stationary point $(a,0)$ of $\mathcal{X}qmlgc$ and an $L^2$-unit length eigenvector $\phi$ of the linear operator $D_{\q^\lambda,a}$, where
\begin{equation}\label{eq:Dqmlaphi}
D_{\q^\lambda,a}(\phi) = \D_{(a,0)} (\mathcal{X}qmlgc)^1(0,\phi) = D \phi + (p^\lambda)^1(\D_{(a,0)} c_\q(0,\phi)).
\end{equation}
If $(a,0,\phi)$ is a reducible stationary point of $\mathcal{X}qmlgcsigma$, then the $\lambda$-spinorial energy is simply the eigenvalue of $D_{\q^\lambda,a}$ corresponding to $\phi$.
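Concretely, setting $s = 0$ in \eqref{eq:Xqmlgcsigmaformula} and comparing with \eqref{eq:Dqmlaphi}, the vanishing of the spinorial component of $\mathcal{X}qmlgcsigma$ at $(a,0,\phi)$ reads
\[
D_{\q^\lambda,a}(\phi) - \Lambda_{\q^\lambda}(a,0,\phi)\,\phi = 0,
\]
exhibiting $\phi$ as an eigenvector of $D_{\q^\lambda,a}$ whose eigenvalue is exactly the $\lambda$-spinorial energy.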
We now establish some important analytic properties of the perturbed equations.
\begin{lemma}
\label{lem:fam}
Fix $k \geq 3$.
\begin{enumerate}[(a)]
\item \label{fam:a} For a bounded subset $K \subset W_k$ and a bounded subset $J \subset W_j$ (with $0 \leq j \leq k$), the set
$$\{p^\lambda (\D_{(a,\phi)} c_\q(b,\psi)) \mid (a,\phi) \in K, (b,\psi) \in J, \lambda \geq 0\}$$ is bounded in $L^2_j$. These uniform bounds also hold if we include $\lambda = \infty$.
\item \label{fam:b} For a bounded subset $K^\sigma \subset W_k^{\sigma} \subset (\ker d^*)_k \oplus \rr \oplus L^2_k(Y; \mathbb{S})$ and a bounded subset $J^\sigma \subset \T_j^{\operatorname{gC}, \sigma}|_{K^\sigma}$ (with $0 \leq j \leq k$), the set
$$\{ \D^\sigma_{x} (p^\lambda c_\q)^{\sigma}(v) \mid x \in K^\sigma, \ (x,v) \in J^\sigma, \lambda \geq 0\}$$ is bounded in $L^2_j$.
These uniform bounds also hold if we include $\lambda = \infty$.
\item\label{fam:c} If $x_n \to x$ in $W^\sigma_k$ and $\lambda_n \to \lambda$ (possibly $\infty$), then $\widetilde{p^{\lambda_n} c_\mathfrak{q}}^1(x_n) \to \widetilde{p^{\lambda} c_\mathfrak{q}}^1(x)$ in $L^2_k(Y;\mathbb{S})$. Furthermore, $\widetilde{\mathcal{X}qmlngc}^1(x_n) \to \widetilde{\mathcal{X}qmlgc}^1(x)$ in $L^2_{k-1}(Y;\mathbb{S})$. In this case, we also have $\Lambda_{\q^{\lambda_n}}(x_n) \to \Lambda_{\q^{\lambda}}(x)$ and thus $\mathcal{X}qmlngcsigma(x_n) \to \mathcal{X}qmlgcsigma(x)$ in $\T^{\operatorname{gC},\sigma}_{k-1}$.
\item\label{fam:d} Same as \eqref{fam:c}, but for four-dimensional configurations on compact cylinders.
\end{enumerate}
\end{lemma}
\begin{proof}
\eqref{fam:a} We have $c_\mathfrak{q}= c + \eta_\q$. In the proof of Lemma~\ref{lem:Dcq}, it was shown that $\mathcal{D}c_\mathfrak{q}\in C^0_{\fb}(W_k,\operatorname{Hom}(W_j, W_j))$.
We thus have $L^2_j$ bounds on the set
$$ \{\D_{(a,\phi)} c_\q(b,\psi) \mid (a,\phi) \in K, (b,\psi) \in J\}.$$
Fact~\ref{fact:pml}(a) now gives uniform $L^2_j$ bounds on
$$ \{p^\lambda (\D_{(a,\phi)} c_\q(b,\psi)) \mid (a,\phi) \in K, (b,\psi) \in J, \lambda \geq 0\}.$$
\noindent \eqref{fam:b} Recall that $\D^\sigma (p^\lambda c_\q)^\sigma$ is computed in two steps. First, we differentiate $(p^\lambda c_\q)^\sigma$, thought of as a map to $L^2(Y; i \rr) \times {\mathbb{R}} \times L^2(Y;\mathbb{S})$, as opposed to a section of $\T^{\operatorname{gC}}_{k}$. Then, we apply $L^2$ orthogonal projection to $\T^{\operatorname{gC},\sigma}_k$. We also have the analogous construction when extending to $L^2_j$ completions. Since $\Pi_{\T^{\operatorname{gC},\sigma}_{j, (a,s,\phi)}}(b,r,\psi) = (b,r,\psi - \operatorname{Re} \langle \psi, \phi \rangle_{L^2} \phi)$, we see that $\Pi_{\T^{\operatorname{gC},\sigma}_j}$ is $L^2_j$ bounded in terms of the $L^2_k$ norm of $(a,s,\phi)$ and $L^2_j$ norm of $(b,r,\psi)$. Therefore, we focus on the boundedness of the derivative of $(p^\lambda c_\q)^\sigma$ as a map to the extended space $L^2_k(Y; i \rr) \times {\mathbb{R}} \times L^2_k(Y;\mathbb{S})$. From the definitions we compute
\begin{align}\label{eq:Dhat-finite-compute}
\D_{(a,s,\phi)} (p^\lambda c_\q)^\sigma (b,r,\psi) &= \Big( \D_{(a,s\phi)} (p^\lambda c_\q)^0(b,r\phi + s\psi), \\
\nonumber \operatorname{Re} \langle \widetilde{p^\lambda c_\q}^1(a,s,\phi), & \phi \rangle_{L^2} r + \operatorname{Re} \langle \widetilde{p^\lambda c_\q}^1(a,s,\phi), \psi \rangle_{L^2} s + \operatorname{Re} \langle \D_{(a,s,\phi)} \widetilde{p^\lambda c_\q}^1(b,r,\psi), \phi \rangle_{L^2} s, \\
\nonumber \operatorname{Re} \langle \D_{(a,s,\phi)} \widetilde{p^\lambda c_\q}^1&(b,r,\psi), \phi \rangle_{L^2} \phi + \operatorname{Re} \langle \widetilde{p^\lambda c_\q}^1(a,s,\phi), \psi \rangle_{L^2} \phi + \operatorname{Re} \langle \widetilde{p^\lambda c_\q}^1(a,s,\phi), \phi \rangle_{L^2} \psi\Big),
\end{align}
where $\widetilde{p^\lambda c_\q}^1(a,s,\phi) = \int^1_0 \D_{(a,sr\phi)} (p^\lambda c_\q)^1(0,\phi) dr$. It follows from Lemma~\ref{lem:Dcq} and Fact~\ref{fact:pml}(a) that
$$\widetilde{p^\lambda c_\q}^1(a,s,\phi), \ \D_{(a,s,\phi)} \widetilde{p^\lambda c_\q}^1(b,r,\psi)$$
are $L^2_j$ bounded in terms of the $L^2_k$ bounds on $(a,s,\phi)$ and the $L^2_j$ bounds on $(b,r,\psi)$. Sobolev multiplication again provides the desired bounds on \eqref{eq:Dhat-finite-compute}. \\
\noindent \eqref{fam:c} A smooth section over $W_k$ of the $L^2_j$ completion of $TW_k$, with $j \leq k$, induces a smooth vector field on the blow-up with the same regularity. By Proposition~\ref{prop:verycompact} (which applies for any Sobolev coefficient at least three), we see that if $x_n \to x$ in $W^\sigma_k$, we have ${\widetilde{c_\q}}^1(x_n) \to {\widetilde{c_\q}}^1(x)$ in $L^2_k$. Applying Fact~\ref{fact:pml} gives $\widetilde{p^{\lambda_n} c_\mathfrak{q}}(x_n) \to \widetilde{p^{\lambda} c_\mathfrak{q}}(x)$ in $L^2_k$ as well. Since $\mathcal{X}qmlgc = l + p^\lambda c_\mathfrak{q}$ and $l : W_k \to W_{k-1}$ are smooth for $k \geq 2$, we obtain the desired convergence results by similar arguments. \\
\noindent \eqref{fam:d} The argument is similar to the one for \eqref{fam:c}.
\end{proof}
It turns out that we can give more explicit bounds on $(p^\lambda c_\mathfrak{q})^1$:
\begin{lemma}\label{lem:linearbounds}
There exists a constant $C_{k, j,M}$, depending only on $0 \leq j \leq k$ and $M$, such that if $x = (a,\phi) \in W_k$ satisfies $\| x \|_{L^2_k} \leq M$ then $\| (p^\lambda c_\mathfrak{q})^1(x) \|_{L^2_j} \leq C_{k, j,M} \| \phi \|_{L^2_j}$ for any $\lambda$.
\end{lemma}
Note that $ \| \phi \|_{L^2_j} \leq \| x \|_{L^2_k} \leq M$, so Lemma~\ref{lem:linearbounds} implies that $\| (p^\lambda c_\mathfrak{q})^1(x) \|_{L^2_j}$ is bounded above uniformly by a constant $C'_{k, j, M}$. We already knew this, owing to the very compactness of $p^\lambda c_\mathfrak{q}$. The new content of Lemma~\ref{lem:linearbounds} is that it gives a stronger (linear) bound when $ \| \phi \|_{L^2_j}$ is small.
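For instance (this is how the lemma will be used in the proof of Lemma~\ref{lem:compactness} below), applying it to a configuration of the form $(a, s\phi)$ with $\|(a,s\phi)\|_{L^2_k} \leq M$ gives
\[
\| (p^\lambda c_\mathfrak{q})^1(a, s\phi) \|_{L^2_j} \;\leq\; C_{k,j,M}\, |s|\, \|\phi\|_{L^2_j},
\]
so the spinorial component of $p^\lambda c_\mathfrak{q}$ vanishes linearly as $s \to 0$, uniformly in $\lambda$.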
\begin{proof}[Proof of Lemma~\ref{lem:linearbounds}]
Since $\| p^\lambda \| \leq \Theta_j$, it suffices to find a constant $C$ such that $\| c^1_\q(x) \|_{L^2_j} \leq C \| \phi \|_{L^2_j}$. First, we bound $c^1$.
Recall that
\[
c^1 (a,\phi) = \rho(a) \cdot \phi - Gd^* \tau(\phi,\phi) \phi.
\]
Since $a$ is $L^2_k$-bounded, Sobolev multiplication implies
\[
\|\rho(a) \cdot \phi\|_{L^2_j} \leq C_1 \| \phi \|_{L^2_j}.
\]
Because $\phi$ is $L^2_k$-bounded, so is $Gd^* \tau(\phi,\phi)$. Again, by Sobolev multiplication, we obtain
\[
\| Gd^* \tau(\phi,\phi) \phi \|_{L^2_j} \leq C_2 \| \phi \|_{L^2_j}.
\]
Therefore, it remains to find bounds on $\eta_{\q}^1(a,\phi)$. We have
\[
\eta_{\q}^1(a,\phi) = \q^1(a,\phi) - Gd^* \q^0(a,\phi) \phi.
\]
Since $\q$ is a very tame perturbation, we obtain $L^2_k$ bounds on $Gd^* \q^0(a,\phi)$ and thus linear bounds on the second term. Therefore, it remains to give linear bounds on $\|\q^1(a,\phi)\|_{L^2_j}$ in terms of $\| \phi \|_{L^2_j}$. We may write $\q^1(a,\phi) = \int^1_0 \D_{(a,r\phi)} \q^1(0,\phi) dr$, because the $S^1$-equivariance of $\q^1$ implies $\q^1(a,0) = 0$. Since the $(a,r\phi)$ are $L^2_k$ bounded and $\D_{(a,r\phi)}\q^1$ is linear in $\phi$, the very tameness of $\q$ implies linear $L^2_j$-bounds on $\D_{(a,r\phi)} \q^1(0,\phi)$, depending only on $\|\phi\|_{L^2_j}$. Integrating gives linear bounds on $\| \q^1(a,\phi)\|_{L^2_j}$ in terms of $\| \phi \|_{L^2_j}$. This now completes the proof.
\end{proof}
In order to make further connection with the finite-dimensional setting of the spectrum, we must ensure that the trajectories of $\mathcal{X}qmlgcsigma$ are actually contained in the finite-dimensional blow-up of $W^\lambda$, assuming control over the $\lambda$-spinorial energy. We begin with the non-blown-up case.
\begin{lemma}\label{lem:trajectoriesinvml}
If $\gamma(t)$ is a trajectory of $\mathcal{X}qmlgc$ contained in $B(2R)$, then $\gamma(t)$ is contained in $W^\lambda$. In particular, $\gamma(t)$ is smooth for each $t$.
\end{lemma}
\begin{proof}
Let $\gamma(t) = (a(t),\phi(t))$. Fix $\kappa$ with $|\kappa| \geq \lambda$ and let $\gamma_\kappa(t)$ be the $L^2$ projection of $\gamma(t)$ to the $\kappa$-eigenspace of $l$. It suffices to show that $\gamma_\kappa(t) = 0$. Since $\gamma(t)$ is a trajectory of $\mathcal{X}qmlgc$, we have
\[ -\frac{d}{dt}\gamma(t) = l(\gamma(t)) + (p^\lambda c_\mathfrak{q})(\gamma(t)). \]
Since $|\kappa| \geq \lambda$, we see $-\frac{d}{dt}{\gamma_\kappa}(t) = l(\gamma_\kappa(t)) = \kappa \gamma_\kappa(t)$. Thus, $\gamma_\kappa(t) = e^{-\kappa t} \gamma_\kappa(0)$. If $\gamma_\kappa(t) \not\equiv 0$, then $\gamma_\kappa(t)$, and thus $\gamma(t)$, are unbounded in the $L^2_k$-norm on $Y$ as $|t|$ increases (as $t \to -\infty$ when $\kappa > 0$, and as $t \to +\infty$ when $\kappa < 0$). This contradicts $\gamma(t) \in B(2R)$ for all $t$. Therefore, $\gamma_\kappa(t) \equiv 0$ and $\gamma(t)$ is contained in $W^\lambda$.
\end{proof}
We seek an analogous result to Lemma~\ref{lem:trajectoriesinvml}, but in the blow-up. Apart from the $2R$ bound on the projections of trajectories in the blow-down, we will also need to assume a bound $\omega$ on the absolute values of spinorial energies $\Lambda_{\q}$. With $N$, $\N$ and $\mathcal{U}$ as in Section~\ref{sec:strategy}, note that $\Lambda_{\q}$ is bounded on the compact set $\mathcal{O}rb_{[-N, N]}$, and hence it is bounded on its neighborhood $\N$. The boundedness can be explained as follows. Since $\q$ is controlled and our choice of $k$ guarantees that $k - 1 \geq 2$, we have that $\Lambda_\q$ is continuous on $W^\sigma_{k-1}$. Therefore, $\Lambda_\q$ is bounded on $\N$, since the closure of $\N$ in $W^\sigma_{k-1}$ is compact. We shall choose $\omega > 0$ so that any point $ x \in \N$ has
\begin{equation}
\label{eq:omega}
|\Lambda_{\q}(x) | < \omega.
\end{equation}
\begin{lemma}\label{lem:blowuptrajectoriesinvml}
For sufficiently large $\lambda$, the following is true. If $\gamma(t) = (a(t),s(t),\phi(t))$ is a trajectory of $\mathcal{X}qmlgcsigma$ contained in $B(2R)^\sigma$ and the $\lambda$-spinorial energy of $\gamma(t)$ is in $[-\omega,\omega]$ for all $t$, then $(a(t),\phi(t)) \in W^\lambda$ for all $t \in \mathbb{R}$. In particular, $(a(t),\phi(t))$ is smooth for all $t \in \mathbb{R}$.
\end{lemma}
\begin{proof}
Fix $\lambda \gg \omega$. Fix $\kappa$ with $|\kappa| \geq \lambda$ and let $(a_\kappa(t), \phi_\kappa(t))$ be the projection of $(a(t),\phi(t))$ to the $\kappa$-eigenspace of $l$. Since $(a(t),s(t)\phi(t))$ is contained in $W^\lambda$ by Lemma~\ref{lem:trajectoriesinvml}, we have that $a_\kappa(t) \equiv 0$. It suffices to show that $\phi_\kappa(t) \equiv 0$. Let $\nu(t) = \Lambda_{\q^\lambda}(a(t),s(t),\phi(t))$. Since $\gamma(t)$ is a trajectory of $\mathcal{X}qmlgcsigma$, we have by \eqref{eq:Xqmlgcsigmaformula} that
\begin{align*}
-\frac{d}{dt}{\phi}(t) &= D\phi(t) + \widetilde{p^\lambda c_\mathfrak{q}}^1(a(t),s(t),\phi(t)) - \nu(t) \phi(t) \\
&= D\phi(t) + \int^1_0 \D_{(a(t),rs(t)\phi(t))} (p^\lambda c_\mathfrak{q})^1(0,\phi(t)) dr - \nu(t) \phi(t).
\end{align*}
Since $p^\lambda c_\mathfrak{q}(x) \in W^{\lambda}$ for all $x \in W_k$, we have that the projection of $\widetilde{p^\lambda c_\mathfrak{q}}^1(a(t),s(t),\phi(t))$ to the $\kappa$-eigenspace of $D$ is trivial. Therefore, we have
\[ -\frac{d}{dt}{\phi_\kappa}(t) = D\phi_\kappa(t) - \nu(t) \phi_\kappa(t) = (\kappa - \nu(t)) \phi_\kappa(t). \]
By assumption, $\kappa - \nu(t) \gg 0$ for all $t$. If $\phi_\kappa(t) \not \equiv 0$, then $\|\phi_\kappa(t)\|_{L^2(Y)}$ is unbounded. However, $\| \phi_\kappa(t) \|_{L^2(Y)} \leq \| \phi(t) \|_{L^2(Y)} \equiv 1$, which is thus a contradiction.
\end{proof}
For future reference, we will write $L^2_k(Y)$ (resp. $L^2_k(Z)$) to refer to the $L^2_k$-norm of any relevant object, e.g. a connection, spinor, etc., on $Y$ (resp. $Z$). If $Z = I \times Y$, we note that $L^2_{j+1}(Z)$-convergence implies pointwise $L^2_j(Y)$-convergence, uniform on compact sets of $I$. We will also write $x$, $a$, $s$, $\phi$ (without $t$) if we want to treat this object as a section of a bundle over $I \times Y$ instead of as a path of sections over $Y$.
\begin{comment}
\begin{lemma}\label{lem:boundedconnection}
There is a ball in $(\ker d^*)_k \oplus 0 \subset W_k$ that contains the reducible trajectories of $\mathcal{X}qmlgc$ for all $0 < \lambda \leq \infty$. (Here, $\q^{\infty}$ denotes $\q$.)
\end{lemma}
\begin{proof}
Let $(a(t),0)$ be a reducible trajectory of $\mathcal{X}qmlgc$. We will use $(a(t),0)$ for pointwise statements in $W$ and $(a,0)$ to represent the corresponding element in $W(Z_I)$, where $Z_I = I \mathfrak{t}imes Y$, for a compact interval $I$. We will obtain uniform bounds on $\|(a(t),0)\|_{L^2_k(Y)}$, independent of $a$ and $t$. First, since $(a(t),0)$ is a reducible trajectory, we have
\[
-(\frac{d}{dt}+ *d)a(t) = (p^\lambda c_{\q} )^0(a(t),0) = (p^\lambda \eta_{\q} )^0(a(t),0),
\]
since $c^0(a(t),0) = 0$.
Since $\eta_{\q}$ is a controlled Coulomb perturbation, Condition (iv) in Definition~\ref{def:controlled} implies that there exists a constant $C$ such that
\[
\| \eta_{\q}^0(b,0) \|_{L^2(Y)} \leq C,
\]
for all $b \in (\ker d^*)_k$. Therefore, $\| (p^\lambda \eta_{\q})^0(a(t),0) \|_{L^2(Y)} \leq C$ for all $t$, where $C$ is independent of $a$ and $t$. We thus have that
$$\|(\frac{d}{dt}+ *d)a(t)\|_{L^2(Z_I)}= \|(p^\lambda \eta_{\q} )^0(a,0)\|_{L^2(Z_I)}$$ is uniformly bounded. If we view the path $a(t)$ as a one-form $a$ on $Z_I$ (in temporal gauge, and thus having Neumann boundary conditions), then $(\frac{d}{dt}+ *d)a(t)$ is $d^*a$.
Note that we may apply Lemma 5.1.2 in \cite{KMbook}
(the elliptic estimates for $\frac{d}{dt} + *d$) to see that $(a,0)$ is bounded in $L^2_1(Z_{I'})$, for any $I' \subseteq \operatorname{int}(I)$. Note that this bound is still independent of $a$. By Condition (ii) of Definition~\ref{def:controlled}, we obtain $L^2_1(Z_{I'})$-bounds on $(p^\lambda \hat{\eta}_\mathfrak{q})^0(a,0)$. Again, we may apply elliptic estimates to obtain uniform $L^2_2(Z_{I''})$-bounds on $(a,0)$, for smaller intervals $I'' \subset \operatorname{int}(I')$. We can continue to bootstrap using Condition (ii) of Definition~\ref{def:controlled} to obtain $L^2_{k+1}(Z_{J})$-bounds on $(a,0)$, for any compact interval $J$. In particular, this gives bounds on $a(t)$ in $(\ker d^*)_k$, independent of $a$ and independent of $t \in J$, for a fixed compact interval $J$. Since these bounds are independent of $a$, translating trajectories shows that we in fact have bounds independent of $t \in \mathbb{R}$. This completes the proof.
\end{proof}
\end{comment}
\chapter{Stationary points}\label{sec:criticalpoints}
Throughout Chapters~\ref{sec:criticalpoints}--\ref{sec:trajectories2} we fix the following:
\begin{itemize}
\item a very tame, admissible perturbation $\q$; this means that $\q$ is very tame in the sense of Definition~\ref{def:verytame}, the stationary points of $\mathcal{X}qgcsigma$ are non-degenerate, and the associated moduli spaces of trajectories are regular; we will impose additional conditions on $\q$ in Proposition~\ref{prop:ND2} and in Proposition~\ref{prop:MorseSmales};
\item a Sobolev index $k \geq 5$;
\item a bound $R > 0$ such that all the stationary points and finite type trajectories of $\mathcal{X}qgc$ are contained in $B(2R) \subset W_k$;
\item a value $N>0$ specifying a grading range $[-N, N]$, a closed neighborhood $\N$ and an open neighborhood $\mathcal{U} \subset \N$ of the set of stationary points of $\mathcal{X}qgcsigma$ in that grading range, as in Section~\ref{sec:strategy}; we also assume that the projection of $\N$ to the blow-down is contained in $B(2R)$ and that $N$ is chosen large enough that $\N$ contains each reducible stationary point $(a,0,\phi)$ of $\mathcal{X}qgcsigma$ where $\phi$ is an eigenvector of $D_{\q,a}$ with smallest positive eigenvalue;
\item a strict bound $\omega$ on the absolute values of the spinorial energies of points in $\N$, as in \eqref{eq:omega}.
\end{itemize}
In analogy with the finite-dimensional setting, for $M \subset W$, we will use $M^\sigma$ to denote the closure in $W^\sigma$ of the preimage of $W - (\ker d^* \oplus 0)$ under the projection from $W^\sigma$ to $W$; and similarly for Sobolev completions. In particular, we can talk about $B(2R)^{\sigma}$. Note that $\N$ is a subset of $B(2R)^{\sigma}$.
Recall that it is our goal to identify some of the stationary points and trajectories of $\mathcal{X}qagcsigma$ with those of $\mathcal{X}qmlagcsigma$, for $\lambda$ sufficiently large. In this section, we deal with stationary points. First, in Section~\ref{sec:convergence} we show that the stationary points of $\mathcal{X}qmlagcsigma$ contained in $\N/S^1$ are close to stationary points of $\mathcal{X}qagcsigma$ in $\N/S^1$, for $\lambda$ sufficiently large. Then, in Section~\ref{sec:StabilityPoints} an inverse function theorem argument shows that inside $\N/S^1$, the stationary points of $\mathcal{X}qmlagcsigma$ are in one-to-one correspondence with those of $\mathcal{X}qagcsigma$. In Section~\ref{sec:nondegeneratestationarypoints} we prove that the nearby approximate stationary points are non-degenerate. (This will be needed later, when we define a Morse complex for $\mathcal{X}qmlagcsigma$.) In Section~\ref{sec:stationarypointsoutsideN}, we study stationary points outside of $\N/S^1$. We rephrase these results with gradings of stationary points in Section~\ref{sec:gradingsstationarypoints}.
As before, if we have a sequence $\lambda_n \to \infty$, we write $\pi_n$ to denote $p^{\lambda_n}$.
\section{Convergence}\label{sec:convergence}
We first point out a convergence result for stationary points in $W$ (not in the blow-up).
\begin{lemma}\label{lem:compactnessnoblowup}
Suppose that $x_n$ is a sequence of stationary points for $\mathcal{X}qmlngc$ in $B(2R)$ where $\lambda_n \to \infty$. Then, there is a subsequence that converges in $W_{k}$ to $x$, a stationary point for $\mathcal{X}qgc$. Further, if $x_n$ are reducible, so is $x$.
\end{lemma}
\begin{proof}
This follows from applying Lemma~\ref{lem:convergencenoblowup} to constant trajectories, noting that $c_\q$ is a very compact map by Proposition~\ref{prop:verycompact}. If $x_n = (a_n,0) \in B(2R)$ is a sequence of reducibles, then clearly the limit must be of the form $x = (a,0)$.
\end{proof}
We now need an analogous compactness result on the blow-up. It turns out that very compactness of $c_\q$ is not sufficient; we will use some of the other properties of controlled Coulomb perturbations to do this.
\begin{lemma}\label{lem:compactness}
Fix $\varepsilon >0$. There exists $b \gg 0$ such that for all $\lambda >b$ the following is true. If $x \in \mathcal{N} \subset W^{\sigma}$ is a zero of $\mathcal{X}qmlgcsigma$, then there exists $x' \in \N$ such that $\mathcal{X}qgcsigma(x') = 0$ and $x, x'$ have $L^2_{k}$-distance at most $\varepsilon$ in $L^2_k(Y; i T^*Y) \oplus \mathbb{R} \oplus L^2_k(Y;\mathbb{S})$.
\end{lemma}
\begin{proof}
Suppose this is not true. Then, we can find a sequence $\lambda_n \to \infty$ and corresponding zeros $x_n= (a_n,s_n,\phi_n)$ of $\mathcal{X}qmlngcsigma$ in $\N$, none of which are within $L^2_k$-distance $\varepsilon$ of a stationary point of $\mathcal{X}qgcsigma$ in $\N$. We will contradict this by finding a subsequence converging to such a stationary point.
Since the $x_n$ are in $\N$, which is $L^2_k$-bounded, we can extract a subsequence that converges to some $x=(a, s, \phi)$ in $L^2_{k-1}$. Further, after passing to another subsequence, we can assume the $x_n$ are either all reducible or all irreducible.
First, suppose the $x_n$ are reducible, that is, $s_n = 0$. By Lemma~\ref{lem:compactnessnoblowup}, the convergence $a_n \mathfrak{t}o a$ is in the stronger $L^2_k$ norm. Moreover, $a \in (\ker d^*)_k$ is such that $(a,0)$ is a stationary point of $\mathcal{X}qgc$. Recall that $\phi_n$ are eigenvectors for $D_{\q^{\lambda_n}, a_n}$ with $L^2$-norm equal to $1$. We claim that $\phi$ is an eigenvector of $D_{\q,a} = D + \D_{(a,0)} (c_\q)^1(0,\cdot)$, and that the convergence $\phi_n \mathfrak{t}o \phi$ is also in $L^2_k$.
Let $\kappa_n$ be the associated eigenvalues for $\phi_n$, i.e., the $\lambda_n$-spinorial energies of $x_n$. Since $x_n \to x$ in $L^2_{k-1}$, by Lemma~\ref{lem:fam} (c) we have $|\Lambda_{\q}(x)| \leq \omega.$ By applying Lemma~\ref{lem:fam} (c) again, this time to the $\lambda_n$-spinorial energies of $x_n$, we see that $\Lambda_{\q^{\lambda_n}}(x_n) \to \Lambda_{\q}(x)$. Therefore, we also have a bound $\omega' \geq \omega$ on $|\Lambda_{\q^{\lambda_n}}(x_n)|=|\kappa_n|$. After passing to a subsequence, we can assume that the $\kappa_n$ converge to some value $\kappa$. Then, we have
\begin{align}
\nonumber \kappa_n \phi_n &= D_{\q^{\lambda_n},a_n}(\phi_n) \\
\label{eqn:approx-ev} &= D \phi_n + \D_{(a_n,0)} (\pi_n c_\q)^1(0,\phi_n) \\
\nonumber &= D \phi_n + \pi_n(\D_{(a_n,0)} c_\q(0,\phi_n))^1.
\end{align}
Now, since $(a_n,0)$ converges to $(a,0)$ in $L^2_k$ and $(0,\phi_n)$ converge to $(0,\phi)$ in $L^2_{k-1}$, the very compactness of $c_\q$ guarantees that $\D_{(a_n,0)} c_\q(0,\phi_n)$ converges to $\D_{(a,0)} c_\q(0,\phi)$
in $L^2_{k-1}$. As $\pi_n \to 1$ in the strong operator topology on $W_{k-1}$, we must also have
\[
\pi_n \bigl( \D_{(a_n,0)} c_\q(0,\phi_n) \bigr)^1 \to \bigl( \D_{(a,0)} c_\mathfrak{q}(0,\phi)\bigr)^1 \text{ in $L^2_{k-1}$}.
\]
On the other hand, we have
\[
\kappa_n \phi_n \to \kappa \phi \text{ in $L^2_{k-1}$}.
\]
Finally, we have that $D \phi_n \to D \phi$ in $L^2_{k-2}$, because $\phi_n \to \phi$ in $L^2_{k-1}$. Since the convergence of $\pi_n \D_{(a_n,0)} (c_\q)^1(0,\phi_n)$ and $\kappa_n \phi_n$ is in $L^2_{k-1}$, we must in fact have that $D \phi_n \to D \phi$ in $L^2_{k-1}$ by \eqref{eqn:approx-ev}. Thus, $\phi_n$ converges to $\phi$ in $L^2_k$ and $\kappa_n \phi_n = D_{\q^{\lambda_n}, a_n}(\phi_n) \to D_{\q, a}(\phi) = \kappa \phi$ in $L^2_k$. Thus, $\phi$ is an eigenvector of $D_{\q, a}$, so $x=(a, 0, \phi)$ is a reducible stationary point of $\mathcal{X}qgcsigma$. Since $\N$ is closed in the $L^2_k$ norm and the convergence $x_n \to x$ is in $L^2_k$, we get that $x \in \N$, providing the contradiction.
We now assume that our sequence $(a_n,s_n,\phi_n)$ consists of irreducibles. If $s_n \geq \delta > 0$ for all $n$, Lemma~\ref{lem:compactnessnoblowup} guarantees that $(a_n,s_n,\phi_n)$ will converge in $W^\sigma_k$ to an irreducible stationary point $(a,s,\phi)$ of $\mathcal{X}qgcsigma$, since $\mathcal{X}qmlngcsigma$ is conjugate to $\mathcal{X}qmlngc$ via the blow-down. This is again a contradiction.
The final case is when $(a_n,s_n,\phi_n)$ is a sequence of irreducible stationary points in $B(2R)^\sigma$ with $s_n \to 0$. By Lemma~\ref{lem:compactnessnoblowup}, in the blow-down we have $(a_n, s_n \phi_n) \to (a, 0)$ in $L^2_k$. Since $x_n \in \N$, we can find an upper bound on $\|\phi_n\|_{L^2_k}$. By Lemma~\ref{lem:linearbounds}, we have a constant $C$ such that
\[
|s_n| \cdot \| D\phi_n \|_{L^2_k} = \|D (s_n \phi_n) \|_{L^2_k} = \| (\pi_n c_\q)^1(a_n,s_n\phi_n)\|_{L^2_k} \leq C |s_n|.
\]
Thus, we obtain $L^2_{k+1}$-bounds on $\phi_n$. After passing to a subsequence, we get that $(a_n,s_n,\phi_n)$ converges in $L^2_k$ to $(a,0,\phi)$. By Lemma~\ref{lem:fam} (c), we have that $\mathcal{X}qmlngcsigma(a_n,s_n,\phi_n) \to \mathcal{X}qgcsigma(a,0,\phi)$, and thus $(a,0,\phi)$ is a reducible stationary point. Moreover, since $\Lambda_{\q^{\lambda_n}}(a_n,s_n,\phi_n)=0$, by taking the limit $n \to \infty$ we obtain $ \Lambda_\q(a,0,\phi)=0$, i.e. $\phi$ is in the kernel of $D_{\q, a}$. This is impossible, because the non-degeneracy condition for reducibles requires that $0$ is not in the spectrum of $D_{\q, a}$.
\end{proof}
\begin{remark}
\label{rem:type}
The proof of Lemma~\ref{lem:compactness} shows that if the zero $x$ of $\mathcal{X}qmlgcsigma$ is reducible, then we can choose the nearby zero $x'$ of $\mathcal{X}qgcsigma$ to be reducible as well. Furthermore, since $\Lambda_{\q^\lambda}(x)$ is close to $\Lambda_{\q}(x')$, we see that $x'$ can be chosen to be stable (resp. unstable) when $x$ is stable (resp. unstable).
\end{remark}
\begin{corollary}
\label{cor:NisU}
For $\lambda \gg 0$, if $x$ is a zero of $\mathcal{X}qmlgcsigma$ in $\N$, then $x$ is actually in $\mathcal{U}$ and $|\Lambda_{\q^\lambda}(x)| < \omega.$
\end{corollary}
\begin{proof}
We get that $x \in \mathcal{U}$ by choosing $\varepsilon$ sufficiently small in Lemma~\ref{lem:compactness}. If we had a sequence $\lambda_n \to \infty$ and corresponding $x_n \in \N$ with $\mathcal{X}qmlngcsigma(x_n)=0$ and $|\Lambda_{\q^{\lambda_n}}(x_n)| \geq \omega$, then after extracting a subsequence, and making use of Lemma~\ref{lem:compactness}, in the limit we would get a zero of $\mathcal{X}qgcsigma$ inside $\N$ with $|\Lambda_{\q}|\geq \omega$. This is a contradiction.
\end{proof}
\begin{corollary}
\label{cor:IsFinite}
For $\lambda \gg 0$, all the stationary points of $\mathcal{X}qmlgcsigma$ in $\N$ live inside the finite-dimensional blow-up $(W^{\lambda})^{\sigma}$.
\end{corollary}
\begin{proof}
This follows from Corollary~\ref{cor:NisU} and Lemma~\ref{lem:blowuptrajectoriesinvml}.
\end{proof}
Observe that the results of this subsection also apply to the stationary points of $\mathcal{X}qmlagcsigma$ and $\mathcal{X}qagcsigma$, in the quotient $W^\sigma/S^1$.
\section{Stability} \label{sec:StabilityPoints}
In the previous section we showed that, for $\lambda$ large, the zeros of $\mathcal{X}qmlagcsigma$ in $\N$ are close to the zeros of $\mathcal{X}qagcsigma$. Our next goal is to show that near each stationary point of $\mathcal{X}qagcsigma$ there is exactly one stationary point of $\mathcal{X}qmlagcsigma$, again for $\lambda$ large. The proof will be an application of the implicit function theorem, using the fact that we have chosen the perturbation $\q$ so that the zeros of $\mathcal{X}qagcsigma$ are nondegenerate.
Recall that $p^{\lambda}$ is (roughly) the orthogonal projection to $W^{\lambda}$, modified so that it becomes a smooth function of $\lambda$ for $\lambda \in (0, \infty).$ In order to be able to apply the implicit function theorem at $\lambda = \infty$, we need to find a suitable identification of the interval $(0, \infty]$ with $[0,1)$, so that after this identification we get differentiability at zero.
For the present subsection, let us index (with multiplicity) the eigenvalues of $l$ as $(\lambda_n)_{n \geq 0}$, ordered so that $|\lambda_n| \leq |\lambda_{n+1}|$ for all $n$; note that $\lim_{n \to \infty} |\lambda_n |= \infty.$ Recall that we write
$$\mathcal{X}qgc = l + c_\q.$$
Pick a homeomorphism $f: (0, \infty] \to [0,1) $ with the following properties:
\begin{itemize}
\item The restriction of $f$ to $(0,\infty)$ is a strictly decreasing diffeomorphism onto $(0, 1)$;
\item $\lim_{n \to \infty} |\lambda_n|^2 f(|\lambda_{n+1}|) = \infty.$
\end{itemize}
The second property means that $f$ does not decrease too fast near infinity. We can achieve it, for example, by requiring that $f(|\lambda_{n+1}|) = \frac{1}{|\lambda_n|} + \frac{1}{n}$ for large $n$.
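Indeed, for this choice of $f$ and large $n$ we get
\[
|\lambda_n|^2\, f(|\lambda_{n+1}|) \;=\; |\lambda_n| + \frac{|\lambda_n|^2}{n} \;\geq\; |\lambda_n| \longrightarrow \infty,
\]
since $|\lambda_n| \to \infty$.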
\begin{lemma}
\label{lem:homeo}
The map
$$h: W_{k} \times (-1,1) \to W_{k-1}, \ \ \ h(x, r) = x- p^{f^{-1}(|r|)}(x)$$
is continuously differentiable, with $\mathcal{D}h_{(x, 0)}(0,1) =0$ for all $x$.
\end{lemma}
\begin{proof}
Continuous differentiability away from $r = 0$ is standard, taking into account that $p^{\lambda}$ are smoothed projections as in \eqref{eq:pl}.
Since $h(x,0) = 0$, it suffices to show that
$$ \lim_{r \to 0}\frac{h(x,r)}{r} = 0,$$
or, equivalently
\begin{equation}
\label{eq:qlim}
\lim_{\lambda \to \infty} \frac{\|x-p^{\lambda}(x)\|_{L^2_{k-1}}}{f(\lambda)} = 0.
\end{equation}
The eigenspaces of $l$ are orthogonal in $L^2$. Pick an $L^2$-orthonormal sequence of eigenvectors $w_n$ for $\lambda_n$. Let us write
$$ x = \sum_n x_n w_n,$$
for a sequence of real numbers $(x_n)_{n \geq 0}.$ We have $\|x\|^2_{L^2} = \sum_n |x_n|^2.$
Note that
$$ l^{k-1} (x) = \sum_n \lambda_n^{k-1} x_n w_n.$$
Since the $L^2_{k-1}$ norm is equivalent to the one defined using $l$ as a differential operator instead of $\nabla$, we can write
$$ \| x\|^2_{L^2_{k-1}} \approx \sum_n \sum_{j=0}^{k-1} |\lambda_n|^{2j} |x_n|^2 \approx \sum_n |\lambda_n|^{2k-2} |x_n|^2.$$
In fact, we have $x \in W_{k}$, so
$$\|x\|_{L^2_k}^2 \approx \sum_n |\lambda_n|^{2k} |x_n|^2 < \infty.$$
We have
$$ \|x- p^{|\lambda_n|}(x) \|^2_{L^2_{k-1}} \approx \sum_{m > n} |\lambda_m|^{2k-2} |x_m|^2 < \frac{1}{|\lambda_n|^2} \sum_{m > n} |\lambda_m|^{2k} |x_m|^2 \leq C \frac{\|x\|_{L^2_k}^2}{|\lambda_n|^2}$$
for some constant $C$ independent of $x$ and $\lambda_n$.
Recall that we chose $f$ so that $\lim_{n \to \infty} |\lambda_n|^2 f(|\lambda_{n+1}|) = \infty.$ From here we get:
$$ \lim_{n \to \infty} \frac{\|x- p^{|\lambda_n|}(x)\|_{L^2_{k-1}}}{f(|\lambda_{n+1}|)} = 0.$$
The claim \eqref{eq:qlim} follows: For any $\lambda \gg 0$, we choose $n$ such that $\lambda \in [|\lambda_n|, |\lambda_{n+1}|]$, and then we use the fact that both the numerator and the denominator of the fraction in \eqref{eq:qlim} are nonincreasing functions of $\lambda$.
\end{proof}
Let $[x_0] \in W^\sigma/S^1$ be a non-degenerate, irreducible zero of $\mathcal{X}qagcsigma$. By Lemma~\ref{lem:rephraseStat}, non-degeneracy means that the linearization $$\D^\sigma_{[x_0]}(\mathcal{X}qagcsigma) : \K^{\operatorname{agC},\sigma}_{k,[x_0]} \to \K^{\operatorname{agC},\sigma}_{k-1,[x_0]} $$ is an invertible linear operator.
\begin{proposition} \label{prop:nearby}
Let $[x] \in W_k^\sigma/S^1$ be a non-degenerate stationary point of $\mathcal{X}qagcsigma$. Then, for any sufficiently small neighborhood $U_{[x]}$ of $[x]$ in $W_k^\sigma/S^1$, for $\lambda \gg 0$ (depending on $U_{[x]}$) there is a unique $[x_\lambda] \in U_{[x]}$ satisfying $\mathcal{X}qmlagcsigma([x_\lambda])=0$.
\end{proposition}
\begin{proof}
Consider the vector field $S: (W^\sigma_{k}/S^1) \times (-1,1) \to \K^{\operatorname{agC},\sigma}_{k-1} \times \rr$ given by
$$ S([x], r)=\bigl( \mathcal{X}_{\q^{f^{-1}(|r|)}}^{\operatorname{agC},\sigma}([x]), r \bigr).$$
If we choose a representative $x \in W_k^\sigma$ for $[x]$, we can write
$$ S([x],r) = \bigl(\bigl[ \bigl ( \Pi^{\operatorname{agC},\sigma} \circ (l + p^{f^{-1}(|r|)} c_\q)^\sigma(x) \bigr) \bigr], r \bigr).$$
Using Lemma~\ref{lem:hessiancontinuity} and its adaptation to $\mathcal{X}qmlgcsigma$ together with Lemma~\ref{lem:homeo}, we deduce that $S$ is continuously differentiable. Furthermore, from Lemma~\ref{lem:homeo}, the derivative of $S$ at $([x],0)$ is the same as the one obtained without the factor $p^{f^{-1}(|r|)}$; that is, it equals $\begin{pmatrix} \D^\sigma_{[x]}(\mathcal{X}qagcsigma) & 0 \\ 0 & 1 \end{pmatrix}$, which is invertible by hypothesis.
Note that $(W_k^\sigma/S^1) \times (-1,1)$ is a Banach manifold with boundary. Let $\mathscr{D}(W_k^\sigma/S^1 \times (-1,1))$ be its double, equipped with the associated involution $\iota$. Since $S$ is tangent to the boundary of $(W_k^\sigma/S^1) \times (-1,1)$, $S$ extends to the double in an $\iota$-invariant way. We now apply the inverse function theorem to $S$ (on the double) at $([x],0)$. We get that there is a unique solution of $S([y], r)= (0, r)$ near $([x],0)$, for small $r > 0$. Define $\lambda = f^{-1}(r)$, and write $[x_\lambda] = [y]$.
Note that if $[x]$ was irreducible (i.e. in the interior of $(W_k^\sigma/S^1) \times (-1,1)$), then the nearby solution, $[x_\lambda]$, is an irreducible stationary point of $\mathcal{X}qmlagcsigma$. If $[x]$ was reducible (i.e. on the boundary of $(W_k^\sigma/S^1) \times (-1,1)$), by $\iota$-invariance, the nearby $[x_\lambda]$ is also on the boundary (i.e., it is a reducible stationary point of $\mathcal{X}qmlagcsigma$).
\end{proof}
Recall that in Section~\ref{sec:strategy} we denoted by $\mathcal{C}rit_{[-N, N]}$ the set of stationary points of $\mathcal{X}qagcsigma$ with grading in the interval $[-N, N]$ (or, equivalently, those inside $\N/S^1$). For simplicity, we write $\mathcal{C}rit_{\N}$ for $\mathcal{C}rit_{[-N, N]}$. Moreover, we denote by $\mathcal{C}rit^{\lambda}_{\N}$ the set of stationary points of $\mathcal{X}qmlagcsigma$ that live in $\N/S^1$.
Combining the results of this subsection and the previous one, we have the following:
\begin{corollary}
\label{cor:corresp}
For $\lambda \gg 0$, there is a one-to-one correspondence
$$\Xi_{\lambda}: \mathcal{C}rit^{\lambda}_{\N} \to \mathcal{C}rit_{\N}.$$
This correspondence preserves the type of stationary point (irreducible, stable, unstable).
\end{corollary}
\begin{proof}
There are only finitely many stationary points in $\mathcal{C}rit_{\N}$. Thus, we can find $\varepsilon > 0$ such that the neighborhoods chosen in Proposition~\ref{prop:nearby} around each $[x] \in \mathcal{C}rit_{\N}$ are the balls of $L^2_k$ radius $\varepsilon$ around those points. We let $\Xi_{\lambda}^{-1}([x])$ be the unique zeros of $\mathcal{X}qmlagcsigma$ inside those balls. Since the points in $\mathcal{C}rit_{\N}$ are actually inside the smaller open set $\mathcal{U}/S^1 \subset \N/S^1$, we can assume that $\Xi_{\lambda}^{-1}([x])$ are in $\N/S^1$ as well. By Lemma~\ref{lem:compactness}, we get that any stationary point of $\mathcal{X}qmlagcsigma$ from $\N/S^1$ must be of the form $\Xi_{\lambda}^{-1}([x])$ for some $[x] \in \mathcal{C}rit_{\N}$. Thus, $\Xi_{\lambda}$ is a one-to-one correspondence.
The claim about preserving type follows from Remark~\ref{rem:type} and the arguments at the end of the proof of Proposition~\ref{prop:nearby}.
\end{proof}
From now on, we will generally use $[x_\infty]$ to denote a stationary point of $\mathcal{X}qagcsigma$ (corresponding to $\lambda = \infty$). If we fix $[x_\infty] \in \mathcal{C}rit_{\N}$, observe that $\Xi_{\lambda}^{-1}([x_\infty])$, which is a stationary point of $\mathcal{X}qmlagcsigma$, is smooth as a function of $\lambda$; this is guaranteed by the implicit function theorem. We will write $[x_\lambda]$ for $ \Xi_{\lambda}^{-1}([x_\infty])$.
Finally, note that since the reducible stationary points of $\mathcal{X}qgcsigma$ all correspond to non-zero eigenvalues, the same holds for the corresponding reducible stationary points of $\mathcal{X}qmlagcsigma$, assuming $\lambda \gg 0$.
It turns out that we can also obtain analogous results for reducible stationary points in the blow-down. This will be useful when analyzing the reducible stationary points which are in $(B(2R) \cap W^\lambda)^\sigma$, but not in $\N$; see Proposition~\ref{prop:GradingBounds} below.
\begin{lemma}\label{lem:implicitfunctionreducible}
Fix $\varepsilon > 0$. For $\lambda \gg 0$, there is a one-to-one correspondence in $B(2R)$ between reducible stationary points $x_\infty$ of $\mathcal{X}qgc$ and reducible stationary points $x_\lambda$ of $\mathcal{X}qmlgc$; further, $x_\lambda$ is $\varepsilon$-close to $x_\infty$.
\end{lemma}
\begin{proof}
It is straightforward to verify that since the stationary points of $\mathcal{X}qagcsigma$ are non-degenerate, so are the reducible stationary points of $\mathcal{X}qgc$ when restricting to the reducible locus in $W_k$. By the implicit function theorem, as used in the proof of Proposition~\ref{prop:nearby}, we obtain that near each reducible stationary point $x_\infty$ of $\mathcal{X}qgc$, there is a unique nearby reducible stationary point $x_\lambda$ of $\mathcal{X}qmlgc$. (This is in fact easier than Proposition~\ref{prop:nearby}, since there is no need to blow-up or quotient by $S^1$.) On the other hand, by Lemma~\ref{lem:compactnessnoblowup}, reducible stationary points of $\mathcal{X}qmlgc$ are necessarily nearby to reducible stationary points of $\mathcal{X}qgc$. This establishes the desired correspondence. Finally, Lemma~\ref{lem:trajectoriesinvml} implies that $x_\lambda$ is also in $W^\lambda$.
\end{proof}
\section{Hyperbolicity}\label{sec:nondegeneratestationarypoints}
Recall that we chose $\q$ such that any stationary point $x$ of $\mathcal{X}qgcsigma$ is non-degenerate, that is, the Hessian $\operatorname{Hess}^{\tilde{g},\sigma}_{\q,x}$ is invertible. By Lemma~\ref{lem:HessB} (b), $\operatorname{Hess}^{\tilde{g},\sigma}_{\q,x}$ has real spectrum, and hence is a hyperbolic operator (i.e., the spectrum of its complexification is disjoint from the imaginary axis).
For the approximate vector field $\mathcal{X}qmlgcsigma$, we define the $\tilde{g}$-Hessian in the blow-up by analogy with \eqref{eq:HessianBlowupCG}:
\begin{equation}
\label{eq:HessianBlowupCGlambda}
\operatorname{Hess}^{\tilde{g},\sigma}_{\q^\lambda,x} = \Pi^{\operatorname{agC},\sigma}_x \circ \Dgs_x \mathcal{X}qmlgcsigma : \K^{\operatorname{agC},\sigma}_{k,x} \to \K^{\operatorname{agC},\sigma}_{k-1,x}.
\end{equation}
As before, if $x$ is a stationary point of $\mathcal{X}qmlgcsigma$, then it does not matter which connection we use to differentiate $\mathcal{X}qmlgcsigma$ at $x$. Therefore, we can simply write
$$\operatorname{Hess}^{\tilde{g},\sigma}_{\q^\lambda,x} = \Pi^{\operatorname{agC},\sigma}_x \circ \D^\sigma_x \mathcal{X}qmlgcsigma.$$
We will say that $x$ is a {\em hyperbolic} stationary point if $\operatorname{Hess}^{\tilde{g},\sigma}_{\q^\lambda,x}$ is hyperbolic. (Generally, it will no longer be the case that $\operatorname{Hess}^{\tilde{g},\sigma}_{\q^\lambda,x}$ has real spectrum.)
If $x$ is in $\N$, then from Corollary~\ref{cor:IsFinite} we know that $x$ lives in the finite dimensional blow-up $(W^\lambda)^{\sigma}$. Furthermore, since $\mathcal{X}qmlgc = l+p^\lambda c_{\q}$ maps $W^\lambda$ to $W^\lambda$, $x$ being hyperbolic (as above, in infinite dimensions) implies that $[x]$ is also hyperbolic as a stationary point of $\mathcal{X}qmlagcsigma$ restricted to the finite dimensional approximation $(W^\lambda)^\sigma/S^1$. Proving hyperbolicity for these stationary points will be the first step towards defining Morse homology with $\mathcal{X}qmlagcsigma$ on $(W^\lambda)^\sigma/S^1$ as in Section~\ref{sec:combinedMorse}.
Let us start by analyzing the Hessian on the blow-up more carefully.
Writing $x = (a,s,\phi) \in W_k^\sigma$, we have
\begin{equation}\label{eqn:lsigma}
l^\sigma(a,s,\phi) = (*da, \langle D\phi, \phi \rangle_{L^2} s, D\phi - \langle D\phi, \phi \rangle_{L^2} \phi).
\end{equation}
Thus, we obtain for $v = (b,r,\psi) \in \T^{\operatorname{gC},\sigma}_{j,x}$
\begin{align}\label{eqn:lsigma-derivative}
\nonumber \D^\sigma_{x} l^\sigma(v) &= (*db, \langle D\phi, \phi \rangle_{L^2} r + 2\operatorname{Re} \langle D \psi, \phi \rangle_{L^2} s, \Pi^\perp_{\phi}(D \psi - \langle D\phi, \phi \rangle_{L^2} \psi - 2 \operatorname{Re} \langle D\psi, \phi \rangle_{L^2} \phi)) \\
& = (*db, \langle D\phi, \phi \rangle_{L^2} r + 2\operatorname{Re} \langle D \psi, \phi \rangle_{L^2} s, \Pi^\perp_{\phi}(D \psi) - \langle D\phi, \phi \rangle_{L^2} \psi) \\
\nonumber &= (*db, \langle D\phi, \phi \rangle_{L^2} r + 2\operatorname{Re} \langle D \psi, \phi \rangle_{L^2} s, D \psi - \operatorname{Re} \langle D \psi, \phi \rangle_{L^2} \phi - \langle D\phi, \phi \rangle_{L^2} \psi)
\end{align}
where $\Pi^\perp_{\phi}$ denotes $L^2$ orthogonal projection onto $\{\psi' \in L^2_{j-1}(Y; \mathbb{S}) \mid \operatorname{Re} \langle \phi, \psi' \rangle_{L^2} = 0 \}$. Here we are using that $\Pi^\perp_{\phi}(\psi) = \psi$ since $(b,r,\psi) \in \T^\sigma_{j,x}$. Again, note that here we are taking derivatives with respect to the $L^2$ metric.
\begin{comment}
We have
\begin{align*}\label{eqn:lsigma-proj}
\Pi^\perp_\phi(D \psi - \langle D\phi,\phi \rangle_{L^2} \psi) &= D \psi - \langle D\phi,\phi \rangle_{L^2} \psi - \operatorname{Re} \langle D\psi - \langle D\phi, \phi \rangle_{L^2} \psi, \phi \rangle_{L^2} \phi \\
& = D \psi - \langle D\phi,\phi \rangle_{L^2} \psi - \operatorname{Re} \langle D\psi, \phi \rangle_{L^2} \phi.
\end{align*}
For notation, we write $\Pi_\phi^{\mathfrak{t}op}=1-\Pi_\phi^{\perp}$, so that
\begin{equation}
\label{eq:veew}
\Pi^{\mathfrak{t}op}_\phi(D \psi - \langle D \phi, \phi \rangle_{L^2} \psi)=\operatorname{Re} \langle D\psi, \phi \rangle_{L^2} \phi.
\end{equation}
\end{comment}
Observe that $L^2_k$ bounds on $(a,s,\phi)$ and $L^2_j$ bounds on $(b,r,\psi)$ easily give bounds on the $r$-component of \eqref{eqn:lsigma-derivative}.
Recall from Lemma~\ref{lem:HessB} that $\operatorname{Hess}^{\tilde{g},\sigma}_{\q,x}$ has Fredholm index 0, regardless of whether $x$ is a stationary point. Further, we have that $\operatorname{Hess}^{\tilde{g},\sigma}_{\q^\lambda,x}$ is a compact perturbation of $\operatorname{Hess}^{\tilde{g},\sigma}_{\q,x}$ (roughly since both $p^\lambda c_\q$ and $c_\q$ are compact as maps from $W_k$ to $W_{k-1}$), and thus has Fredholm index 0. Further, since the inclusion $\K^{\operatorname{agC},\sigma}_{k,x} \to \K^{\operatorname{agC},\sigma}_{k-1,x}$ is compact, we see that for any $z \in \mathbb{C}$, after complexifying,
\begin{equation}\label{eqn:Hesskappa}
\operatorname{ind}(\operatorname{Hess}^{\tilde{g},\sigma}_{\q^\lambda,x} - z I) = 0.
\end{equation}
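For the reader's convenience, the bookkeeping behind \eqref{eqn:Hesskappa} can be spelled out as follows (this is only a restatement of the compactness remarks above). Writing $\iota$ for the inclusion $\K^{\operatorname{agC},\sigma}_{k,x} \to \K^{\operatorname{agC},\sigma}_{k-1,x}$, which plays the role of $I$ in \eqref{eqn:Hesskappa}, we have
$$ \operatorname{Hess}^{\tilde{g},\sigma}_{\q^\lambda,x} - z \iota = \operatorname{Hess}^{\tilde{g},\sigma}_{\q,x} + \Pi^{\operatorname{agC},\sigma}_x \circ \D^\sigma_{x} (p^{\lambda} c_{\q} - c_{\q})^\sigma - z\,\iota, $$
and the last two terms are compact operators from $\K^{\operatorname{agC},\sigma}_{k,x}$ to $\K^{\operatorname{agC},\sigma}_{k-1,x}$. Hence the left hand side is a compact perturbation of the index zero Fredholm operator $\operatorname{Hess}^{\tilde{g},\sigma}_{\q,x}$, and therefore has index zero itself.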
\begin{proposition}
\label{prop:ND}
For $\lambda$ sufficiently large, the stationary points of $\mathcal{X}qmlgcsigma$ inside $\N$ are hyperbolic. As a consequence, among the stationary points of the restriction of $\mathcal{X}qmlagcsigma$ to the finite dimensional space $(W^\lambda)^\sigma/S^1$, all those in $\N/S^1$ are hyperbolic (and hence non-degenerate).
\end{proposition}
\begin{proof}
Recall from Section~\ref{sec:fdax} that we can use an equivalent $L^2_k$ metric on $W$ (and hence $W^\sigma$), where we use the operator $l$ in place of the covariant derivatives. The statement is independent of which metric we use, so we opt for $l$ in this proof.
Suppose the claim is false. Consider a sequence $x_n = (a_n,s_n,\phi_n) \in \N$ of stationary points of $\mathcal{X}^{\operatorname{gC}, \sigma}_{\q^{\lambda_n}}$ which are non-hyperbolic, where $\lambda_n \to \infty$. Note that we have $L^2_k$-bounds on each component of $x_n$. Moreover, by Lemma~\ref{lem:compactness}, there exists a subsequence of $x_n$ which converges in $W^\sigma_k$ to $x$, a stationary point of $\mathcal{X}qgcsigma$. By our assumption on $\q$, the stationary point $x$ is hyperbolic. Since each $x_n$ is non-hyperbolic, we may find a sequence of real numbers $\kappa_n$ such that, after complexifying, $\operatorname{Hess}^{\tilde{g},\sigma}_{\q^{\lambda_n},x_n} - i\kappa_n I$ is not invertible. Since $\operatorname{ind}(\operatorname{Hess}^{\tilde{g},\sigma}_{\q^{\lambda_n},x_n} - i\kappa_n I) = 0$ by \eqref{eqn:Hesskappa}, these operators have non-trivial kernel, and thus we may find non-zero $v_n \in \K^{\operatorname{agC},\sigma}_{k,x_n} \otimes \mathbb{C}$ such that $\operatorname{Hess}^{\tilde{g},\sigma}_{\q^{\lambda_n},x_n}(v_n) = i \kappa_n v_n$. For the rest of the proof, we drop the complexified notation for simplicity.
After rescaling, we can assume that $\|v_n\|_{L^2_k} =1$. We will show that there exists a subsequence which converges in $\K^{\operatorname{agC},\sigma}_{k}$ to a non-trivial element $v$ of $\K^{\operatorname{agC},\sigma}_{k,x}$ for which $\operatorname{Hess}^{\tilde{g},\sigma}_{\q,x}(v) = i \kappa v$, for some $\kappa \in \mathbb{R}$. This will contradict the hyperbolicity of $x$.
We write $v_n = (b_n,r_n,\psi_n)$. Our first step is to prove that the $\kappa_n$ are bounded, as are the $L^2_{k+1}$-norms of $v_n$.
By Lemma~\ref{lem:fam}(b), we get a uniform bound on the $L^2_k$ norms of $\Pi^{\operatorname{agC},\sigma} \circ \D^\sigma_{x_n} (p^{\lambda_n} c_\q)^\sigma(v_n)$. Further, since $x_n$ is a stationary point of $\mathcal{X}qmlngcsigma$, we can write
\begin{align}
\label{eq:hesszero}
i\kappa_n v_n &= \operatorname{Hess}^{\tilde{g},\sigma}_{\q^{\lambda_n},x_n}(v_n) \\
&= \Pi^{\operatorname{agC},\sigma} \circ \D^\sigma_{x_n} \mathcal{X}^{\operatorname{gC},\sigma}_{\q^{\lambda_n}}(v_n) \notag \\
&= \Pi^{\operatorname{agC},\sigma} \circ \D^\sigma_{x_n} l^\sigma(v_n) + \Pi^{\operatorname{agC},\sigma} \circ \D^\sigma_{x_n} (p^{\lambda_n} c_\q)^\sigma(v_n). \notag
\end{align}
By definition, $\Pi^{\operatorname{agC},\sigma} \circ \D^\sigma_{x_n} l^\sigma(v_n) = \D^\sigma_{x_n} l^\sigma(v_n) - \alpha_n (0,0,i\phi_n)$ for some sequence of real numbers $\alpha_n$. The $L^2_k$-bounds on $x_n$ and $v_n$ give $L^2_{k-1}$-bounds on $\D^\sigma_{x_n} l^\sigma(v_n)$ by \eqref{eqn:lsigma-derivative}. By continuity, $\Pi^{\operatorname{agC},\sigma} \circ \D^\sigma_{x_n} l^\sigma(v_n)$ is $L^2_{k-1}$-bounded. Because $\| \phi_n \|_{L^2_{k-1}} \geq 1$, the sequence $\alpha_n$ is bounded. Thus, $\alpha_n (0,0,i\phi_n)$ is $L^2_k$-bounded.
Now note that $\langle D\psi_n, \phi_n \rangle_{L^2} \phi_n$ is $L^2_k$ bounded. By \eqref{eqn:lsigma-derivative}, we then have that $\D^\sigma_{x_n} l^\sigma(v_n) - (*db_n, 0, D\psi_n - \langle D\phi_n, \phi_n \rangle_{L^2} \psi_n)$ is $L^2_k$ bounded; here we are using the observation that the $L^2_k$ bounds on $x_n$ and $v_n$ guarantee bounds on the $r$-component of $\D^\sigma_{x_n} l^\sigma(v_n)$. Combining this with the above discussion, we have that
\[
\Pi^{\operatorname{agC},\sigma} \circ \D^\sigma_{x_n} l^\sigma(v_n) + \Pi^{\operatorname{agC},\sigma} \circ \D^\sigma_{x_n} (p^{\lambda_n} c_\q)^\sigma(v_n) - (*db_n, 0 , D \psi_n - \langle D \phi_n, \phi_n \rangle_{L^2} \psi_n)
\]
is $L^2_k$-bounded. In view of \eqref{eq:hesszero}, it follows that
\begin{equation}\label{eqn:brpsi}
i\kappa_n (b_n,r_n,\psi_n) - (*db_n,r_n, D\psi_n - \langle D \phi_n, \phi_n \rangle_{L^2} \psi_n)
\end{equation}
is $L^2_k$-bounded. However, note that the operator $(b,r,\psi) \mapsto (*db,r, D \psi - \langle D \phi_n, \phi_n \rangle_{L^2} \psi)$ is $L^2$ self-adjoint. It follows from here that $i \kappa_n v_n$ and $(*db_n,r_n, D \psi_n - \langle D \phi_n, \phi_n \rangle_{L^2} \psi_n)$ are (real) orthogonal in the complexification, with respect to the $L^2$ inner product. Given that we are working with the $L^2_k$ inner products that are defined using $l$ as the derivative, we see that the orthogonality also holds with respect to $L^2_k$. Thus, the $L^2_k$ bounds on the quantity in \eqref{eqn:brpsi} imply that $\kappa_n v_n$ is $L^2_k$-bounded, and thus the $\kappa_n$ are bounded. On the other hand, we also obtain $L^2_k$-bounds on $(*db_n, r_n, D\psi_n)$ and thus the $v_n$ are bounded in $L^2_{k+1}$ by the ellipticity of $*d$ and $D$.
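To make the orthogonality step above explicit, let $A_n$ denote the $L^2$ self-adjoint operator $(b,r,\psi) \mapsto (*db,r, D \psi - \langle D \phi_n, \phi_n \rangle_{L^2} \psi)$ (this letter is used only in this aside). Since $\langle v_n, A_n v_n \rangle_{L^2}$ is real for a self-adjoint operator, the quantity $\langle i\kappa_n v_n, A_n v_n \rangle_{L^2} = \pm\, i \kappa_n \langle v_n, A_n v_n \rangle_{L^2}$ is purely imaginary (the sign depending on which slot of the Hermitian product is conjugate-linear), so
$$ \operatorname{Re} \langle i\kappa_n v_n, A_n v_n \rangle_{L^2} = 0, $$
which is the (real) orthogonality used above.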
Thus, we can find a subsequence of the $v_n$ that converges in $\K^{\operatorname{agC},\sigma}_{k}$ to an element $v$, which necessarily has $\| v \|_{L^2_k} = 1$. Because $\K^{\operatorname{agC},\sigma}_{k}$ is closed in the tangent bundle to $W^\sigma_k$, we have $v \in \K^{\operatorname{agC},\sigma}_k$ as well. After passing to a further subsequence, $\kappa_n \to \kappa$, for some $\kappa \in \mathbb{R}$. We have
$$ \operatorname{Hess}^{\tilde{g},\sigma}_{\q^{\lambda_n},x_n}(v_n) - \operatorname{Hess}^{\tilde{g},\sigma}_{\q,x_n}(v_n) = \Pi^{\operatorname{agC},\sigma} \circ \D^\sigma_{x_n} (p^{\lambda_n} c_{\q} - c_{\q})^\sigma(v_n) \to 0 \text{ in } L^2_{k-1}.$$
By assumption, the sequence $\operatorname{Hess}^{\tilde{g},\sigma}_{\q^{\lambda_n},x_n}(v_n)$ is the sequence $i \kappa_n v_n$. Moreover, $ \operatorname{Hess}^{\tilde{g},\sigma}_{\q,x_n}(v_n)$ converges to $\operatorname{Hess}^{\tilde{g},\sigma}_{\q,x}(v)$ in $L^2_{k-1}$ by the continuity of the Hessian (cf. Lemma~\ref{lem:hessiancontinuity}). Thus, there exists a non-zero $v \in \K^{\operatorname{agC},\sigma}_k$ such that $\operatorname{Hess}^{\tilde{g},\sigma}_{\q,x}(v) = i \kappa v$. This contradicts the hyperbolicity of $x$.
\end{proof}
\section{Other stationary points}\label{sec:stationarypointsoutsideN}
Proposition~\ref{prop:ND} was only about the stationary points in $\N$. Recall that $\N$ is a subset of $B(2R)^{\sigma} \subset W^{\sigma}_k$. We do not have any control over the stationary points of $\mathcal{X}qmlgcsigma$ outside $B(2R)^{\sigma}$, but we can say a bit more about the ones in $B(2R)^{\sigma}$ (and not necessarily in $\N$).
First, recall from Corollary~\ref{cor:IsFinite} that, for $\lambda \gg 0$, all the stationary points of $\mathcal{X}qmlgcsigma$ in $\N$ are actually inside the finite-dimensional blow-up $(W^{\lambda})^{\sigma}$.
Second, by applying Proposition~\ref{prop:proposition3perturbed} to $N=\overline{B(2R)}$ and $U$ being the blow-down of $\mathcal{U}$, we see that for $\lambda \gg 0$, all the irreducible stationary points of $\mathcal{X}qmlgcsigma$ in $B(2R)^{\sigma}$ are actually in $\mathcal{U} \subset \N$.
Some of the reducible stationary points of $\mathcal{X}qmlgcsigma$ are in $\N$, and hence close to reducible zeros of $\mathcal{X}qgcsigma$ with grading in $[-N, N]$. However, there will be other reducibles in $B(2R)^\sigma$ which may not be in $\N$. We now study these other reducibles. A reducible $(a, 0, \phi)$ has to satisfy:
$$-*da = (p^{\lambda}c_{\q})^0(a, 0)$$
and
$$D_{\q^\lambda,a}(\phi) = D \phi + (p^\lambda)^1(\D_{(a,0)} c_\q(0,\phi)) = \kappa \phi,$$
for some $\kappa \in \rr$. Note that $(a, 0) \in W^{\lambda}$, but $(a, 0, \phi)$ may or may not be in $(W^{\lambda})^{\sigma}$.
Observe that $D$, $(p^\lambda)^1$ and $\D_{(a,0)} c_\q(0,\cdot )$ are all $L^2$ self-adjoint maps on spinors. (We are using here that the $\tilde g$ metric agrees with the $L^2$ metric at reducibles.) Nevertheless, the product of $(p^\lambda)^1$ and $\D_{(a,0)} c_\q(0,\cdot )$, and hence the operator $D_{\q^\lambda,a}$, may not be self-adjoint. On the other hand, for the real numbers $\lambda^{\bullet}_i$ defined in Section~\ref{sec:fdax}, the restriction of $D_{\q^{\lambda^{\bullet}_i},a}$ to (the spinorial part of) $W^{\lambda}$ is self-adjoint. This is because for $\lambda = \lambda^{\bullet}_i$, the map $p^\lambda$ is the honest $L^2$ projection onto $W^\lambda$; therefore, for $(a, 0, \phi) \in (W^{\lambda})^{\sigma}$ we can write
$$ D + (p^\lambda)^1 \D_{(a,0)} c_\q(0,\cdot ) = D + (p^\lambda)^1 \D_{(a,0)} c_\q(0,\cdot ) (p^\lambda)^1,$$
and the right hand side is self-adjoint.
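Indeed, writing $p = (p^\lambda)^1$ for the $L^2$ orthogonal projection and $A$ for any $L^2$ self-adjoint map on spinors (these letters are used only in this one-line verification), we have
$$ \langle p A p\, \phi_1, \phi_2 \rangle_{L^2} = \langle A p\, \phi_1, p\, \phi_2 \rangle_{L^2} = \langle p\, \phi_1, A p\, \phi_2 \rangle_{L^2} = \langle \phi_1, p A p\, \phi_2 \rangle_{L^2}, $$
so $pAp$ is again $L^2$ self-adjoint, and adding the self-adjoint operator $D$ preserves this property.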
We will focus on the reducible stationary points of $\mathcal{X}qmlgcsigma$ that are in $(B(2R) \cap W^{\lambda})^{\sigma}$ for $\lambda = \lambda^{\bullet}_i$. We then have the following strengthening of Proposition~\ref{prop:ND}:
\begin{proposition}
\label{prop:ND2}
We can choose the admissible perturbation $\q$ such that for any $\lambda \in \{\lambda^{\bullet}_1, \lambda^{\bullet}_2, \dots\}$ sufficiently large, the restriction of $\mathcal{X}qmlgcsigma$ to $(B(2R) \cap W^{\lambda})^{\sigma}$ has only hyperbolic (and hence non-degenerate) stationary points.
\end{proposition}
\begin{proof}
As part of the proof of existence of admissible perturbations, Kronheimer and Mrowka showed in \cite[Section 12.6]{KMbook} that for a residual (and hence nonempty) set of tame perturbations $\q$, the reducible stationary points of $\mathcal{X}q$ are non-degenerate. The key point is in the proof of Lemma 12.6.2 in \cite{KMbook}: There is a large enough space of tame (in fact, very tame) perturbations $\q^{\perp}$ (given by cylinder functions) that vanish at the reducible locus, such that in the tangent space at any $\q^{\perp}$ we can find a $\delta \q^{\perp} = \operatorname{grad} \delta f$ for which the Hessian of $\delta f|_V$ is any chosen $S^1$-equivariant self-adjoint endomorphism of $V = \ker(D_{\q^{\perp}, a})$.
We can adapt this proof to $\mathcal{X}qmlgcsigma$, and pick $\q$ such that all the reducibles are non-degenerate for a given $\lambda$. Indeed, we now need to find a $\delta \q^{\perp} = \operatorname{grad} \delta f$ such that the $\tilde{g}$-Hessian of $\delta f|_V$ is any $S^1$-equivariant self-adjoint endomorphism of $V = \ker(D_{(\q^{\perp})^{\lambda}, a}) \subset W^\lambda$. Since we are at a reducible, on the spinorial part we have that the $\tilde{g}$-Hessian is the same as the usual Hessian. Further, because $\lambda= \lambda^{\bullet}_i$, we have that $p^\lambda$ is the $L^2$ orthogonal projection to $W^\lambda$. Hence, we have
$$ \operatorname{Hess} (\delta f|_{W^\lambda}) = p^\lambda \circ \operatorname{Hess}(\delta f)|_{W^\lambda},$$
and we can arrange so that this equals any chosen $S^1$-equivariant self-adjoint endomorphism of the spinors in $W^\lambda$. From here we get the same freedom in choosing $\operatorname{Hess} (\delta f|_V)$, for any subspace $V$ of spinors in $W^\lambda$. The other arguments from \cite[Section 12.6]{KMbook} can then be easily adapted to our setting.
Since $\lambda$ is part of a countable collection $\{\lambda^{\bullet}_n\}$, we can find $\q$ such that the reducibles are non-degenerate for all such $\lambda$. Note that non-degeneracy implies hyperbolicity for reducibles, because the relevant operators are self-adjoint.
Since the irreducible stationary points of $\mathcal{X}qmlgcsigma$ in $B(2R)^{\sigma}$ are actually in $\mathcal{U} \subset \N$, by applying Proposition~\ref{prop:ND} we can arrange so that these irreducibles are hyperbolic as well.
\end{proof}
Proposition~\ref{prop:ND2} will be useful in showing that the vector field $\mathcal{X}qmlgcsigma$ in $(B(2R) \cap W^{\lambda})^{\sigma}$ is a Morse-Smale equivariant quasi-gradient. We can then construct a Morse homology group from it, and show that it is the same as Morse homology in $\mathcal{N}\cap (W^{\lambda})^{\sigma}$, in a certain grading range $[-N, N]$. Indeed, we will show that all the other reducible points in $(B(2R) \cap W^{\lambda})^{\sigma}$ cannot be in this grading range; see Proposition~\ref{prop:GradingBounds} below. Before we can discuss and define gradings on stationary points as in Section~\ref{sec:finite}, we must first establish that $\mathcal{X}qmlgc$ is indeed a Morse quasi-gradient. This is the subject of the following section. We return to discuss gradings on stationary points of $\mathcal{X}qmlgcsigma$ in Chapter~\ref{sec:gradings} and the Morse-Smale condition in Chapter~\ref{sec:MorseSmale}.
\chapter[The approximate flow as a quasi-gradient]{The approximate flow as a Morse equivariant quasi-gradient}
\label{sec:quasigradient}
Throughout this chapter we assume that the eigenvalue cut-off $\lambda$ is of the form $\lambda^{\bullet}_i$ for $i \gg 0$.
Note that $\mathcal{X}qgc=l+c_{\q}$ is the gradient of the $\mathscr{L}_{\q}$ functional with respect to the $\tilde g$ metric. However, the maps $p^\lambda$ are defined in terms of projections with respect to the usual $L^2$ metric. As discussed in Remark~\ref{rem:notgradient}, the vector field
$$\mathcal{X}qmlgc=l+p^\lambda c_{\q}$$ on $W^\lambda$ is neither the $L^2$ nor the $\tilde g$ gradient of the restriction of $\mathscr{L}_{\q}$ to $W^\lambda$. In fact, there is no reason for the derivative of $\mathcal{X}qmlgc$ at stationary points to have real spectrum (as would happen for a gradient vector field, with respect to any metric).
Nevertheless, in this chapter we will be able to prove the following.
\begin{proposition}
\label{prop:AllMS}
We can choose the admissible perturbation $\q$ such that for all $\lambda = \lambda^{\bullet}_i$ with $i \gg 0$, the vector field $\mathcal{X}qmlgc$ on $W^\lambda \cap B(2R)$ is a Morse equivariant quasi-gradient, in the sense of Definition~\ref{def:eqgv}.
\end{proposition}
As discussed in Section~\ref{sec:finite}, having a Morse-Smale equivariant quasi-gradient suffices in order to construct (equivariant) Morse homology; the first step towards this is establishing the Morse condition (cf. Definition~\ref{def:eqgv}). The additional Morse-Smale condition on trajectories will be shown in Chapter~\ref{sec:MorseSmale}.
In view of Lemma~\ref{lem:MEquiv}, to check that $\mathcal{X}qmlgc$ is a Morse equivariant quasi-gradient we need three things:
\begin{itemize}
\item that the stationary points of $\mathcal{X}qmlgcsigma$ are hyperbolic;
\item that the operators $\D_x\mathcal{X}qmlgc$ at reducible stationary points $x$ are self-adjoint;
\item part (d) of Definition~\ref{def:eqgv}.
\end{itemize}
Hyperbolicity of the stationary points was already checked in Proposition~\ref{prop:ND2}. Self-adjointness of the operators $\D_x\mathcal{X}qmlgc = l + p^\lambda \D_x c_{\q}$ at reducibles follows from the fact that the metrics $\tilde g$ and $L^2$ coincide there.
We are left to verify part (d) of Definition~\ref{def:eqgv}. Sections~\ref{sec:controlaway}-\ref{sec:consequences} below are devoted to proving this.
\begin{proposition}\label{prop:LqQuasi}
For each $\lambda \gg 0$, there exists a smooth function $$F_{\lambda}: W^\lambda \cap B(2R) \to \R$$ such that
\begin{equation}
\label{eq:dFlambda}
\frac{1}{4} \| \mathcal{X}qmlgc \|^2_{\tilde{g}} \leq dF_{\lambda}(\mathcal{X}qmlgc) \leq 4 \| \mathcal{X}qmlgc \|^2_{\tilde{g}}.
\end{equation}
In particular, $dF_{\lambda}(\mathcal{X}qmlgc) \geq 0$, with equality only at the stationary points of $\mathcal{X}qmlgc$.
\end{proposition}
\begin{remark}
In the statement of Proposition~\ref{prop:LqQuasi}, the constants $\frac{1}{4}$ and $4$ are quite arbitrary. They could be replaced (at the expense of increasing $\lambda$) by $\frac{1}{C}$ and $C$, for any $C > 1$.
\end{remark}
\section{Control away from the stationary points} \label{sec:controlaway}
Since we know that $\mathcal{X}qgc$ is the $\tilde g$-gradient of the perturbed CSD functional $\mathscr{L}q$, the first guess is to take $F_{\lambda}$ to be $\mathscr{L}q$. Then, the desired condition holds away from neighborhoods of the stationary points:
\begin{lemma}
\label{lem:cond1}
Fix $\varepsilon > 0$. Then, for all $\lambda \gg 0$, we have
\begin{equation}
\label{eq:dlq}
\frac{1}{4} \| \mathcal{X}qmlgc \|^2_{\tilde{g}} < d\mathscr{L}_{\q} (\mathcal{X}qmlgc) < 4 \| \mathcal{X}qmlgc \|^2_{\tilde{g}}
\end{equation}
at any point in $W^\lambda \cap B(2R)$ which is at $L^2_{k-1}$ distance at least $\varepsilon$ from all stationary points of $\mathcal{X}qmlgc$ in $B(2R)$.
\end{lemma}
\begin{proof}
We suppose this is not true. Then there exist sequences $\lambda_n \to \infty$ and $x_n \in W^\lambdan \cap B(2R)$ such that $x_n$ is at $L^2_{k-1}$ distance at least $\varepsilon$ from each stationary point of $\mathcal{X}qmlngc$ and $(d\mathscr{L}_{\q})_{x_n}(\mathcal{X}qmlngc)$ violates \eqref{eq:dlq}. Without loss of generality, we assume that the first inequality in \eqref{eq:dlq} is violated. (The case of the second inequality is similar.) Since the $x_n$ are $L^2_k$ bounded, we can extract a subsequence which converges in $L^2_{k-1}$ to some element $x$. We see that
$$ (d\mathscr{L}_{\q})_{x_n}(\mathcal{X}qmlngc) \mathfrak{t}o (d\mathscr{L}_{\q})_{x}(\mathcal{X}qgc).$$
Since $d\mathscr{L}_{\q}(\mathcal{X}qgc) = \| \operatorname{grad} \mathscr{L}_{\q} \|_{\tilde{g}}^2 \geq 0$, we see that $\| \mathcal{X}qgc(x) \|^2_{\tilde{g}} \leq \frac{1}{4}\| \mathcal{X}qgc(x) \|^2_{\tilde{g}}$. Therefore, $x$ is a stationary point of $\mathcal{X}qgc$. This implies that $x$ is in $B(2R)$. For $\lambda \gg 0$, by the work of Section~\ref{sec:StabilityPoints}, we have that $x$ has $L^2_k$ distance (and thus $L^2_{k-1}$ distance) at most $\varepsilon/2$ from a stationary point $x_\lambda$ of $\mathcal{X}qmlgc$. Since the $x_n$ converge to $x$ in $L^2_{k-1}$, they are eventually within $L^2_{k-1}$ distance $\varepsilon$ of $x_{\lambda_n}$ for $n \gg 0$. Since $x_{\lambda_n}$ is a stationary point of $\mathcal{X}qmlngc$, this is a contradiction.
\end{proof}
However, $\mathscr{L}_\q$ does not satisfy \eqref{eq:dlq} in the neighborhoods of stationary points. If it did, then by Lemma~\ref{lem:qgCritical2}, any stationary point of $\mathcal{X}qmlgc$, i.e., zero of $l+ p^\lambda c_{\q}$, would be a critical point of $\mathscr{L}_{\q}|_{W^\lambda}$. We can write the $\tilde g$-gradient of $\mathscr{L}_{\q}|_{W^\lambda}$ as $l + p^\lambda_{\tilde g} c_{\q}$, where $p^\lambda_{\tilde g}$ is the $\tilde g$-orthogonal projection from $W$ to $W^\lambda$. (Compare Remark~\ref{rem:notgradient}.) However, in general, the condition $(l+p^\lambda c_{\q})(x)=0$ does not imply $(l + p^\lambda_{\tilde g} c_{\q})(x)=0.$
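For the record, here is the short verification of the formula for this gradient, with $p^\lambda_{\tilde g}$ understood as the $\tilde g$-orthogonal projection taken at the point in question: since $l$ preserves $W^\lambda$, for every $x \in W^\lambda$ and $v \in W^\lambda$ we have
$$ (d\mathscr{L}_{\q})_x(v) = \langle l(x) + c_{\q}(x), v \rangle_{\tilde g(x)} = \langle l(x) + p^\lambda_{\tilde g}\, c_{\q}(x), v \rangle_{\tilde g(x)}, $$
the second equality holding because $c_{\q}(x) - p^\lambda_{\tilde g}\, c_{\q}(x)$ is $\tilde g(x)$-orthogonal to $W^\lambda$.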
\section{The function $F_\lambda$} \label{sec:Flambda}
To construct the desired function $F_{\lambda}$ as in Proposition~\ref{prop:LqQuasi}, we need to alter $\mathscr{L}q$ near the stationary points of $\mathcal{X}qmlgc$.
Let us first introduce some notation. Given a point $x \in W$, its $S^1$-orbit can be either a point or a circle. In particular, the stationary points of $\mathcal{X}qgc$ come in finitely many such orbits, which we denote by $\mathcal{O}^{1}, \dots, \mathcal{O}^{m}.$
Throughout the rest of Chapter~\ref{sec:quasigradient}, we will fix some $\varepsilon > 0$ sufficiently small such that it satisfies the following.
\begin{assumption}
\label{as:1} $ $ \\
\begin{enumerate}[(a)]
\item The $L^2_{k-1}$ distance between any two orbits $\mathcal{O}^{j}, \mathcal{O}^{j'}$ ($j \neq j'$) is at least $7\varepsilon$;
\item If an orbit $\mathcal{O}^{j}$ consists of irreducibles, then the $L^2$ norm of a point in $\mathcal{O}^{j}$ is at least $4\varepsilon$.
\end{enumerate}
\end{assumption}
In Section~\ref{sec:controlnearby} we will add another assumption on $\varepsilon$. However, Assumption~\ref{as:1} above suffices for the results in the current subsection.
With $\varepsilon$ fixed, we will state our results for $\lambda$ being sufficiently large. Of course, how large $\lambda$ is may depend on $\varepsilon$.
It follows from Proposition~\ref{prop:nearby} and Lemma~\ref{lem:implicitfunctionreducible} that, for $\lambda \gg 0$, there is a one-to-one correspondence between the orbits of stationary points of $\mathcal{X}qgc$, on the one hand, and the orbits of stationary points of $\mathcal{X}qmlgc$ inside $B(2R)$, on the other hand. Further, this correspondence preserves the type of orbits (reducible or irreducible). Let $\mathcal{O}^{1}_{\lambda}, \dots, \mathcal{O}^{m}_{\lambda}$ be the latter set of orbits, with $\mathcal{O}^{j}_{\lambda}$ corresponding to $\mathcal{O}^{j}$. By choosing $\lambda$ sufficiently large, we can arrange so that, for all $j$, the orbits $\mathcal{O}^{j}$ and $\mathcal{O}^{j}_{\lambda}$ are within $L^2_{k-1}$ distance $\varepsilon$ of each other. In view of part (a) in Assumption~\ref{as:1}, this ensures that $\mathcal{O}^{j}_{\lambda}$ and $\mathcal{O}^{j'}_{\lambda}$ are at least $L^2_{k-1}$ distance $5\varepsilon$ apart, for $j\neq j'$.
Next, consider the neighborhoods $\nu_{2\varepsilon}(\mathcal{O}^{j}_{\lambda})$ of $\mathcal{O}^{j}_{\lambda}$, consisting of points at $L^2_{k-1}$ distance at most $2\varepsilon$ from these orbits. Because of our choice of $\varepsilon$, all these neighborhoods are disjoint from each other. As an aside, note also that these neighborhoods may well go outside of $B(2R)$, since the latter ball is taken in the $L^2_k$ metric.
Pick a point $x_{\lambda}^j$ on each orbit $\mathcal{O}^{j}_{\lambda}$. We define functions
$$ \omega_\lambda^j : \nu_{2\varepsilon}(\mathcal{O}^{j}_{\lambda}) \to S^1$$
as follows. If $x_{\lambda}^j$ is reducible, so that $\mathcal{O}^{j}_{\lambda} = \{ x_{\lambda}^j \}$, we simply set $\omega_\lambda^j=1$. If $x_{\lambda}^j$ is irreducible, so that $\mathcal{O}^{j}_{\lambda}$ is a circle, we ask that $\omega_\lambda^j(x) \cdot x_{\lambda}^j$ be the point on $\mathcal{O}^{j}_{\lambda}$ that is at minimal $L^2$ distance from $x$. To make sure that $\omega_\lambda^j$ is well-defined, we need to check that this point is unique. The closest point is not unique only for points in the $L^2$-orthogonal complement to the plane $\operatorname{Span}(\mathcal{O}^{j}_{\lambda})$. However, if $x \in \nu_{2\varepsilon}(\mathcal{O}^{j}_{\lambda})$, then there is some $x' \in \mathcal{O}^{j}_{\lambda}$ within $L^2_{k-1}$ (and hence $L^2$) distance $2 \varepsilon$ from $x$. By part (b) in Assumption~\ref{as:1}, together with the fact that $\mathcal{O}^j_{\lambda}$ and $\mathcal{O}^j$ are within $L^2_{k-1}$ (and hence $L^2$) distance $\varepsilon$ from each other, we see that the $L^2$ norm of $x'$ is at least $3\varepsilon$. This shows that $x'$ cannot be perpendicular to $x$, and the claim about the uniqueness of the $L^2$-closest point to $x$ follows.
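For completeness, the quantitative estimate behind the last assertion is the following (using only Cauchy-Schwarz): with $x'$ as above,
$$ \operatorname{Re} \langle x, x' \rangle_{L^2} \geq \| x' \|_{L^2}^2 - \| x - x' \|_{L^2} \, \| x' \|_{L^2} \geq \| x' \|_{L^2} \bigl( \| x' \|_{L^2} - 2\varepsilon \bigr) \geq 3\varepsilon \cdot \varepsilon > 0, $$
so $x$ is not $L^2$-perpendicular to the plane $\operatorname{Span}(\mathcal{O}^{j}_{\lambda})$.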
Explicitly, when $x_{\lambda}^j$ is irreducible, we can write
\begin{equation}\label{eq:omegalambda}
\omega_{\lambda}^j(x)= \frac{\operatorname{Re} \langle x, x^{j}_{\lambda} \rangle_{L^2} + i\operatorname{Re} \langle x, ix^{j}_{\lambda} \rangle_{L^2} }{ \bigl( (\operatorname{Re} \langle x, x^{j}_{\lambda} \rangle_{L^2})^2 + (\operatorname{Re} \langle x, ix^{j}_{\lambda} \rangle_{L^2})^2 \bigr)^{1/2}}.
\end{equation}
Note that the original orbit $\mathcal{O}^{j}$ is at $L^2_{k-1}$ distance at most $\varepsilon$ from $\mathcal{O}^{j}_{\lambda}$, and hence is contained in $\nu_{2\varepsilon}(\mathcal{O}^{j}_{\lambda})$. Let $x_{\infty}^j$ be the point in $\mathcal{O}^{j}$ that is closest in $L^2$ distance to the chosen basepoint $x_{\lambda}^j \in \mathcal{O}^{j}_{\lambda}$. Then, also $x_{\lambda}^j$ is the $L^2$-closest point to $x_{\infty}^j$ in $\mathcal{O}^{j}_{\lambda}$; in other words, we have
$$ \omega_{\lambda}^j( x_{\infty}^j)=1.$$
Let $h: [0, \infty) \to \R$ be a smooth, non-increasing function such that $h(x)=1$ for $x \leq 1$ and $h(x)=0$ for $x \geq 2$. Set
$$
H^j_{\lambda} : W^\lambda \to [0,1], \ \ \
H^j_{\lambda}(x) = h(\varepsilon^{-1} d_{L^2_{k-1}} (x, \mathcal{O}^{j}_{\lambda}))$$
where $d_{L^2_{k-1}}$ denotes $L^2_{k-1}$ distance. Note that $ H^j_{\lambda}$ is identically $0$ outside of $\nu_{2\varepsilon}(\mathcal{O}^{j}_{\lambda})$, and is identically $1$ in the smaller neighborhood $\nu_{\varepsilon}(\mathcal{O}^{j}_{\lambda})$.
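For concreteness, one standard choice of such a cutoff (any other choice with the stated properties would do equally well) is
$$ h(x) = \frac{\rho(2-x)}{\rho(2-x)+\rho(x-1)}, \qquad \rho(t) = \begin{cases} e^{-1/t}, & t > 0, \\ 0, & t \leq 0, \end{cases} $$
which is smooth, non-increasing, equal to $1$ on $[0,1]$ and equal to $0$ on $[2,\infty)$.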
We now define
$$ T_{\lambda} : W^\lambda \to W$$
by
\begin{equation}\label{eq:Tlambda}
T_{\lambda}(x) = x + \sum_{j=1}^m H^j_{\lambda}(x) \cdot \omega^j_{\lambda}(x) \cdot (x_{\infty}^j- x_{\lambda}^j)
\end{equation}
and finally set
\begin{equation}\label{eq:Flambda}
F_{\lambda} : W^\lambda \to \R, \ \ \ F_{\lambda}= \mathscr{L}q \circ T_{\lambda}.
\end{equation}
The function $F_\lambda$ is the one we will use to prove Proposition~\ref{prop:LqQuasi}. Before analyzing this function, we give a more qualitative description for the benefit of the reader.
Observe that $T_{\lambda}$ is $S^1$-equivariant and $F_{\lambda}$ is $S^1$-invariant, by construction. In the smaller neighborhood $\nu_{\varepsilon}(\mathcal{O}^{j}_{\lambda})$, if we restrict to the affine space perpendicular to $\mathcal{O}^{j}_{\lambda}$ at $x^j_{\lambda}$, we have $ \omega^j_{\lambda} \equiv 1$ there, and hence the map $T_{\lambda}$ is given by translation by $x_{\infty}^j- x_{\lambda}^j$. In particular, $T_{\lambda}(x_{\lambda}^j) = x_{\infty}^j.$ More generally, $T_{\lambda}$ takes the orbit $\mathcal{O}^{j}_{\lambda}$ to $\mathcal{O}^j$. In fact, we can view $\nu_{\varepsilon}(\mathcal{O}^{j}_{\lambda})$ as a disk bundle over $\mathcal{O}^{j}_{\lambda}$, with the projection map given by taking the $L^2$ closest point on the orbit. Then, we can say that inside $\nu_{\varepsilon}(\mathcal{O}^{j}_{\lambda})$, the map $T_{\lambda}$ consists of fiberwise translations, arranged so that $\mathcal{O}^{j}_{\lambda}$ is taken to $\mathcal{O}^j$. Further, $T_{\lambda}$ is the identity outside $\nu_{2\varepsilon}(\mathcal{O}^{j}_{\lambda})$, and in the intermediate region $\nu_{2\varepsilon}(\mathcal{O}^{j}_{\lambda}) \setminus \nu_{\varepsilon}(\mathcal{O}^{j}_{\lambda})$, it is given by some interpolation between fiberwise translation and the identity.
The resulting function $F_{\lambda} : W^\lambda \to \R$ agrees with the perturbed CSD functional $\mathscr{L}q$ outside the neighborhoods $\nu_{2\varepsilon}(\mathcal{O}^{j}_{\lambda})$, whereas near $\mathcal{O}^{j}_{\lambda}$ we have arranged so that the points of $\mathcal{O}^{j}_{\lambda}$ become critical points of $F_{\lambda}$. Effectively, this was accomplished by translating $\mathcal{O}^{j}_{\lambda}$ to $\mathcal{O}^j$, and using the fact that $\mathcal{O}^j$ consists of stationary points of $\mathcal{X}qgc$, i.e., critical points of $\mathscr{L}q$.
\section{Control in the intermediate region}
Since $F_\lambda$ agrees with $\mathscr{L}_\q$ at any point with $L^2_{k-1}$ distance at least $2\varepsilon$ from a stationary point of $\mathcal{X}qmlgc$, Lemma~\ref{lem:cond1} implies that $dF_\lambda(\mathcal{X}qmlgc) > 0$ in this region. In the current subsection, we will be able to use the same arguments to show that $dF_\lambda(\mathcal{X}qmlgc) > 0$ as long as the distance is at least $\varepsilon$, thus gaining control in the intermediate region $\nu_{2\varepsilon}(\mathcal{O}^{j}_{\lambda}) \setminus \nu_{\varepsilon}(\mathcal{O}^{j}_{\lambda})$. For notation, we will sometimes write $\tilde g(x)$ for the inner product given by $\tilde g$ on $T_xW$. We begin with a technical lemma.
\begin{lemma}\label{lem:approximate-quasi-convergence}
Consider a sequence $x_n$ in $B(2R)$ which converges in $L^2_{k-1}$ to some $x \in W_{k-1}$. Then if $\lambda_n \to \infty$, $(dF_{\lambda_n})_{x_n}(\mathcal{X}qmlngc) \to (d\mathscr{L}_\q)_x(\mathcal{X}qgc)$ in $\mathbb{R}$.
\end{lemma}
\begin{proof}
We first compute that for any $x'$,
\begin{align*}
(dF_\lambda)_{x'}(v) &= (d\mathscr{L}_\q)_{T_\lambda(x')}(\D_{x'} T_\lambda)(v) \\
&= \langle \mathcal{X}qgc (T_\lambda(x')), \D_{x'} T_\lambda (v) \rangle_{\tilde{g}(T_\lambda(x'))}.
\end{align*}
We claim it suffices to show that $T_{\lambda_n}(x_n) \to x$ and $(\D_{x_n} T_{\lambda_n})(\mathcal{X}qmlngc(x_n)) \to \mathcal{X}qgc(x)$, each in $L^2_{k-2}$. Indeed, since the $\tilde{g}(T_{\lambda_n}(x_n))$-metrics converge to $\tilde{g}(x)$, this will imply that $(dF_{\lambda_n})_{x_n}(\mathcal{X}qmlngc)$ converges to $\langle \mathcal{X}qgc(x), \mathcal{X}qgc(x) \rangle_{\tilde{g}(x)}$, which is exactly $(d\mathscr{L}_\q)_x(\mathcal{X}qgc)$.
We begin by analyzing the continuity of $T_\lambda$, using \eqref{eq:Tlambda}. Note that $|H^j_\lambda|$ and $|\omega^j_\lambda|$ are bounded above by 1.
Since $\lambda_n \to \infty$, we have $x^j_\infty - x^j_{\lambda_n} \to 0$ in $L^2_{k-1}$ by the discussion after Corollary~\ref{cor:corresp}. Therefore, we have $T_{\lambda_n}(x_n) \to x$ in $L^2_{k-1}$.
Thus, it remains to analyze $\D_{x} T_{\lambda}$. We have that
\[
(\D_x T_{\lambda}) (v) = v + \sum_{j} (dH^j_\lambda)_x(v) \cdot \omega^j_\lambda(x) \cdot (x^j_\infty - x^j_\lambda) + \sum_{j} H^j_\lambda(x) \cdot \D_x \omega^j_\lambda(v) \cdot (x^j_\infty - x^j_\lambda).
\]
Note that $|(dH^j_\lambda)_x(v)| \leq (C/\varepsilon) \| v \|_{L^2_{k-1}}$ for a constant $C$ independent of $x, j,$ and $\lambda$. (More precisely, $C$ is the $C^0$-norm of $h'$.) Also, $\omega^j_\lambda$ is a $C^1$-function when restricted to $\nu_{2\varepsilon}(\mathcal{O}^j_\lambda)$ whose denominator in \eqref{eq:omegalambda} is bounded below by $3\varepsilon^2$, since any point in $\mathcal{O}^j_\lambda$ has $L^2$ norm at least $3\varepsilon$. From this, it is easy to obtain bounds
\begin{equation}\label{eq:Dxomega-bounds}
|\D_x \omega^j_\lambda(v)| \leq C' \| v \|_{L^2} \leq C' \| v \|_{L^2_{k-1}}
\end{equation}
independent of $x \in B(2R), j,$ and $\lambda$. Therefore, $(\D_{x_n} T_{\lambda_n})(v_n) \to v$ in $L^2_{k-2}$ for any sequence $v_n$ which converges to $v$ in $L^2_{k-2}$ and is $L^2_{k-1}$ bounded. Since $\mathcal{X}qmlngc(x_n) \to \mathcal{X}qgc(x)$ in $L^2_{k-2}$ and $\mathcal{X}qmlngc(x_n)$ is $L^2_{k-1}$ bounded (because the $x_n$ are $L^2_k$ bounded), we have that $(\D_{x_n} T_{\lambda_n})(\mathcal{X}qmlngc(x_n)) \to \mathcal{X}qgc(x)$ in $L^2_{k-2}$. This suffices to complete the proof.
\end{proof}
With the above lemma, we now establish the analogue of Lemma~\ref{lem:cond1} for $F_\lambda$.
\begin{proposition}
\label{prop:intermediate}
Fix $\varepsilon > 0$ satisfying Assumption~\ref{as:1}. For $\lambda \gg 0$, we have
\begin{equation}\label{eq:dFlambda-gradient-inequality}
\frac{1}{4} \| \mathcal{X}qmlgc \|^2_{\tilde{g}} < dF_\lambda(\mathcal{X}qmlgc) < 4 \| \mathcal{X}qmlgc \|^2_{\tilde{g}}
\end{equation}
at any point in $W^\lambda \cap B(2R)$ which is at $L^2_{k-1}$ distance at least $\varepsilon$ from any stationary point of $\mathcal{X}qmlgc$.
\end{proposition}
\begin{proof}
Suppose that the conclusion is not true. Then, there exist sequences $\lambda_n \to \infty$ and $x_n \in W^\lambdan \cap B(2R)$ such that each $x_n$ is at $L^2_{k-1}$ distance at least $\varepsilon$ from any stationary point of $\mathcal{X}qmlngc$ and $(dF_{\lambda_n})_{x_n}(\mathcal{X}qmlngc)$ violates \eqref{eq:dFlambda-gradient-inequality}. Then, there exists a subsequence of the $x_n$ which converges in $L^2_{k-1}$ to an element $x \in W_{k-1}$. By Lemma~\ref{lem:approximate-quasi-convergence}, we see that $(d\mathscr{L}_\q)_x(\mathcal{X}qgc)$ must be at most $\frac{1}{4} \| \mathcal{X}qgc(x) \|^2_{\tilde{g}}$ or at least $4 \| \mathcal{X}qgc(x) \|^2_{\tilde{g}}$. Since $(d\mathscr{L}_\q)(\mathcal{X}qgc) = \| \operatorname{grad} \mathscr{L}_\q\|^2_{\tilde{g}}$, we get that $x$ is a stationary point of $\mathcal{X}qgc$, and thus $x \in B(2R) \subset W_k$. Since the $x_n$ converge to $x$ in $L^2_{k-1}$, they are eventually within $L^2_{k-1}$ distance $\varepsilon/2$ of a stationary point of $\mathcal{X}qgc$. For $\lambda \gg 0$, there is a stationary point of $\mathcal{X}qmlgc$ within $L^2_{k-1}$ distance $\varepsilon/2$ of $x$. This contradicts the fact that the $x_n$ are not within $L^2_{k-1}$ distance $\varepsilon$ of a stationary point of $\mathcal{X}qmlngc$.
\end{proof}
\section{The $L^2$ and $\tilde g$ metrics}
A more detailed analysis will be needed to prove the inequality \eqref{eq:dFlambda} in neighborhoods of the stationary points. This will be done in Section~\ref{sec:controlnearby}. As a preliminary step, since $\mathcal{X}qmlgc$ is an $L^2$ approximation to the $\tilde g$-gradient of $\mathscr{L}q|_{W^\lambda}$, we will prove a few results relating the $L^2$ and $\tilde g$ metrics.
Let us recall the definition of $\tilde g$ from Section~\ref{sec:SWe}. For $x=(a, \phi) \in W$ and $(b, \psi), (b', \psi') \in T_{(a, \phi)} W$, we have
$$ \langle (b, \psi), (b', \psi') \rangle_{\tilde g} = \langle \Pi^{\operatorname{elC}}_{(a, \phi)} (b, \psi), \Pi^{\operatorname{elC}}_{(a, \phi)} (b', \psi')\rangle_{L^2},$$
where $\Pi^{\operatorname{elC}}$ is the enlarged local Coulomb projection. Let us also recall the formula for this projection:
\begin{equation}
\label{eq:pilco}
\Pi^{\operatorname{elC}}_{(a, \phi)} (b, \psi) := (b - d\zeta, \psi + \zeta \phi),
\end{equation}
where $\zeta: Y \to i\R$ is determined (for $\phi \neq 0$) by the conditions $\int_Y \zeta=0$ and
\begin{equation}
\label{eq:zetaphi}
\Delta\zeta + |\phi|^2 \zeta - \mu_Y(|\phi|^2\zeta)=- i\operatorname{Re}\langle i\phi , \psi \rangle + i\mu_Y(\operatorname{Re}\langle i\phi , \psi \rangle).
\end{equation}
The last equality is Equation \eqref{eq:zetafirst}, where we used the fact that $d^*b=0$ for $(b, \psi) \in T_xW$.
\begin{lemma}
\label{lem:Zeta}
There is a constant $K > 0$ such that, for all $x=(a, \phi) \in B(2R) \subset W_k$ and $(b, \psi) \in \T_{0,x}^{\operatorname{gC}} \cong W_0$, if $\zeta$ is the function in \eqref{eq:pilco}, then
\begin{equation}
\label{eq:L21}
\| \zeta \|_{L^2_1} \leq K \cdot \| \psi \|_{L^2_{-1}}.
\end{equation}
\end{lemma}
\begin{proof}
This follows from Lemma~\ref{lem:elcUniqueness} by considering the operator $E_{\phi}$ from $L^2_1$ to $L^2_{-1}$. Note that $K$ can be taken to be a constant independent of $\phi$, because the $L^2_{k}$ norm of $\phi$ is bounded.
\end{proof}
Since $\| \psi \|_{L^2_{-1}} \leq \| \psi \|_{L^2}$, Lemma~\ref{lem:Zeta} also gives $L^2_1$ bounds on $\zeta$ in terms of $L^2$ bounds on $\psi$.
Our next result is about the equivalence between the $L^2$ and $\tilde g$ metrics.
\begin{proposition}
\label{prop:EquivalentMetrics}
There is a constant $C_0 > 0$ such that, for all $x=(a, \phi) \in B(2R) \subset W_k$ and $(b, \psi) \in \T_{0,x}^{\operatorname{gC}} \cong W_0$, we have
\begin{equation}
\label{eq:EquivalentMetrics}
{C_0}^{-1} \cdot \| (b, \psi) \|_{L^2} \leq \| (b, \psi) \|_{\tilde g(x)} \leq C_0 \cdot \| (b, \psi) \|_{L^2}.
\end{equation}
\end{proposition}
\begin{proof}
We begin with the second inequality in \eqref{eq:EquivalentMetrics}.
Then, with $\zeta$ as in \eqref{eq:pilco}, we have
$$ \| (b, \psi) \|_{\tilde g(x)} \leq \| (b, \psi) \|_{L^2} + \| (d\zeta, \zeta \phi) \|_{L^2}.$$
We claim that the right hand side is bounded by a constant times $\| (b, \psi) \|_{L^2}$. This follows by the $L^2_1$ control on $\zeta$ from Lemma~\ref{lem:Zeta} and the fact that the condition $x \in B(2R)$ gives $L^2_k$ (and hence $C^0$) bounds on $\phi$.
To prove the first inequality, note that if we view $\Pi^{\operatorname{elC}}_x$ as an isomorphism from the global Coulomb slice to the extended local Coulomb slice, then its inverse is the infinitesimal global Coulomb projection, $(\Pi^{\operatorname{gC}}_*)_x$, from \eqref{eq:icp}. Thus, if we switch notation and now let $(b, \psi)$ be a vector in the extended local Coulomb slice, the first inequality in \eqref{eq:EquivalentMetrics} can be re-written as
$$ \| (\Pi^{\operatorname{gC}}_*)_x(b, \psi) \|_{L^2} \leq C_0 \cdot \| (b, \psi) \|_{L^2}.$$
This holds by Lemma~\ref{lem:igc-fb}.
\end{proof}
Our next goal is to compare the $L^2$- to $\tilde g$-orthogonal projections from $W$ to $W^\lambda$. Since $\lambda$ is of the form $\lambda^{\bullet}_i$, we have that $p^\lambda$ is the $L^2$-orthogonal projection to $W^\lambda$, and we write $p^\lambda_{\tilde g(x)}$ for the $\tilde g(x)$-orthogonal projection to $W^\lambda$. (Compare Remark~\ref{rem:notgradient}.)
\begin{proposition}
\label{prop:pgpl}
There is a constant $C_1 > 0$ such that, for all $x \in B(2R) \subset W_k$, we have
$$ \| p^\lambda_{\tilde g(x)} - p^\lambda \| \leq \frac{C_1}{\lambda},$$
where $p^\lambda_{\tilde g(x)}$ and $p^\lambda$ are viewed as operators from the $L^2$ completion $W_0$ of $W$ to itself.
\end{proposition}
We first need a refinement of Lemma~\ref{lem:Zeta}. Let us denote by $(W^\lambda)^{\perp}_0$ the $L^2$-orthogonal complement to $W^\lambda$ inside $W_0$. In other words, $(W^\lambda)^{\perp}_0$ is the $L^2$ span of the eigenvectors of $l$ with eigenvalues at least $\lambda$ in absolute value.
\begin{lemma}
\label{lem:EstimateZeta}
There is a constant $C_2 > 0$ such that, for all $x=(a, \phi) \in B(2R) \subset W_k$ and $(b, \psi) \in (W^\lambda)^{\perp}_0$, if $\zeta$ is the function in \eqref{eq:pilco}, then
$$ \| \zeta \|_{L^2_1} \leq \frac{C_2}{\lambda} \|\psi \|_{L^2}.$$
\end{lemma}
\begin{proof}
Write $\psi = \sum \psi_\kappa$, where $\psi_\kappa$ are eigenvectors of the Dirac operator $D$, with eigenvalues $\kappa$ such that $|\kappa| > \lambda$. Set
\begin{equation}
\label{eq:D-1}
D^{-1}(\psi) := \sum \kappa^{-1} \psi_\kappa.
\end{equation}
Note that $D(D^{-1}(\psi))=\psi$. Since $D$ is continuous from $L^2$ spinors to $L^2_{-1}$ spinors, we obtain
$$ \| \psi \|_{L^2_{-1}} \leq K' \| D^{-1}\psi \|_{L^2} \leq \frac{K'}{\lambda} \|\psi\|_{L^2},$$
for some constant $K'$. Together with Lemma~\ref{lem:Zeta}, this gives the desired inequality.
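For completeness, the second inequality above, $\| D^{-1}\psi \|_{L^2} \leq \lambda^{-1} \| \psi \|_{L^2}$, is just the spectral decomposition: since the $\psi_\kappa$ are $L^2$-orthogonal and $|\kappa| > \lambda$ for each eigenvalue appearing in the decomposition of $\psi$, we have
$$ \| D^{-1} \psi \|^2_{L^2} = \sum |\kappa|^{-2} \, \| \psi_\kappa \|^2_{L^2} \leq \lambda^{-2} \sum \| \psi_\kappa \|^2_{L^2} = \lambda^{-2} \, \| \psi \|^2_{L^2}. $$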
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:pgpl}]
We need to check that, for all $v \in W_0$,
$$ \| p^\lambda_{\tilde g(x)}(v) - p^\lambda(v) \|_{L^2} \leq \frac{C_1}{\lambda} \| v \|_{L^2},$$
where $C_1$ is independent of $x, v,$ and $\lambda$.
First, if $v \in W^\lambda$, then note that $p^\lambda_{\tilde{g}(x)}(v) = p^\lambda(v)$ and the claim is trivial. Therefore, we can assume that $v \in (W^\lambda)^\perp_{0}$. Thus, we would like to bound
$$
\| p^\lambda_{\tilde{g}(x)} (v) - p^\lambda(v) \|_{L^2} = \| p^\lambda_{\tilde{g}(x)}(v)\|_{L^2}.
$$
We write $x = (a,\phi)$ and $$w = p^\lambda_{\tilde{g}(x)}(v) \in W^\lambda.$$ Recall from Proposition~\ref{prop:EquivalentMetrics} that there exists a constant $C_0$, independent of $x$, such that $\| v \|_{L^2} \leq C_0 \| v \|_{\tilde{g}(x)}$ and similarly for $w$. In this case, we have that
$$ \| p^\lambda_{\tilde{g}(x)}(v) \|^2_{L^2} \leq C^2_0 \| p^\lambda_{\tilde{g}(x)}(v) \|_{\tilde{g}(x)}^2 = C^2_0 \langle v, p^\lambda_{\tilde{g}(x)}(v) \rangle_{\tilde{g}(x)}.$$
Therefore, to obtain the desired bounds in the proposition, it suffices to prove that there exists a constant $K_0 > 0$, independent of $x \in B(2R)$ and $v \in (W^\lambda)^\perp_{0}$, such that
\begin{equation}\label{eq:g-tilde-projection-bounds}
| \langle v, w \rangle_{\tilde{g}(x)} | \leq \frac{K_0}{\lambda} \| v \|_{L^2} \| w \|_{L^2}.
\end{equation}
We now focus on proving this inequality.
For notation, let $v = (b,\psi)$ and $w = (b', \psi')$. We write $\Pi^{\operatorname{elC}}_x(v) = (b - d\zeta, \psi + \zeta \phi)$ and $\Pi^{\operatorname{elC}}_x(w) = (b' - d \zeta', \psi' + \zeta' \phi)$. Since $v \in (W^\lambda)^\perp_{0}$ and $w \in W^\lambda$, we have that $v$ and $w$ are $L^2$ orthogonal. From this, we obtain
\begin{align}
\langle v, w \rangle_{\tilde{g}(x)} &= \langle \Pi^{\operatorname{elC}}_x(v), \Pi^{\operatorname{elC}}_x(w) \rangle_{L^2} \\
&= -\langle d \zeta, b' \rangle_{L^2} - \langle b, d\zeta' \rangle_{L^2} + \langle d\zeta, d\zeta' \rangle_{L^2} + \langle \zeta \phi , \psi' \rangle_{L^2} \notag \\
& \hskip1cm + \langle \psi, \zeta' \phi \rangle_{L^2} + \langle \zeta \phi , \zeta' \phi \rangle_{L^2} \notag \\
\label{eq:PilCoul-inner-expand} &= \langle d\zeta, d\zeta' \rangle_{L^2} + \langle \zeta \phi , \psi' \rangle_{L^2} + \langle \psi, \zeta' \phi \rangle_{L^2} + \langle \zeta \phi , \zeta' \phi \rangle_{L^2},
\end{align}
where the third equality comes from the fact that $d^* b = d^* b' = 0$. Let us now collect the relevant bounds on $\zeta$ and $\zeta'$. By Lemma~\ref{lem:EstimateZeta}, we have that $\| \zeta\|_{L^2}$ and $\| d\zeta \|_{L^2}$ are bounded above by $\frac{C_2}{\lambda} \| \psi \|_{L^2}$ for some constant $C_2 > 0$. We do not have $L^2$ bounds on $\zeta'$ or $d\zeta'$ in terms of $\frac{1}{\lambda}$, since $w$ is not an element of $(W^\lambda)^\perp_0$. However, Lemma~\ref{lem:Zeta} still guarantees $\| \zeta' \|_{L^2}$ and $\| d\zeta'\|_{L^2}$ are bounded above by $K \cdot \|\psi' \|_{L^2}$, independent of $\lambda$ and $x \in B(2R)$. This gives:
\begin{equation}\label{eq:dzeta-dzeta'}
|\langle d\zeta, d\zeta' \rangle_{L^2}| \leq \frac{C_2 \cdot K}{\lambda} \| \psi \|_{L^2} \| \psi' \|_{L^2} \leq \frac{C_2 \cdot K}{\lambda} \| v \|_{L^2} \| w \|_{L^2}.
\end{equation}
To establish \eqref{eq:g-tilde-projection-bounds}, it remains to bound the other three terms in \eqref{eq:PilCoul-inner-expand}.
Since $x \in B(2R) \operatorname{sp}incubset W_k$, we have a uniform bound on $\| \phi \|_{C^0}$ independent of $x$, denoted $K_2$. Therefore, using a similar argument as for \eqref{eq:dzeta-dzeta'}, we obtain bounds
\begin{align*}
|\langle \zeta \phi, \psi' \rangle_{L^2}| &\leq \frac{C_2 \cdot K_2}{\lambda} \| v \|_{L^2} \| w \|_{L^2},\\
|\langle \zeta \phi, \zeta' \phi \rangle_{L^2}| & \leq \frac{C_2 \cdot K \cdot K_2}{\lambda} \| v \|_{L^2} \| w \|_{L^2}.
\end{align*}
Thus, the proof will be complete if we obtain similar bounds for $\langle \psi, \zeta' \phi \rangle_{L^2}$. We must be careful here, since we do not have bounds on $\zeta'$ in terms of $\frac{1}{\lambda}$. To handle this, we write
\begin{align*}
\langle \psi, \zeta' \phi \rangle_{L^2} &= \langle D(D^{-1} \psi), \zeta' \phi \rangle_{L^2} \\
&= \langle D^{-1} \psi, D(\zeta' \phi) \rangle_{L^2},
\end{align*}
where $D^{-1}$ is defined as in \eqref{eq:D-1}. Since $(b, \psi) \in (W^\lambda)^\perp_0$, we have that $\| D^{-1} \psi \|_{L^2} \leq \frac{1}{\lambda} \| \psi \|_{L^2}$. Finally, since $\zeta'$ is $L^2_1$-bounded in terms of $\|\psi'\|_{L^2}$, we obtain $L^2_1$-bounds on $\zeta' \phi$ (independent of $x$) by Sobolev multiplication. In other words, there exists a constant $K_3$ such that $\| D( \zeta' \phi) \|_{L^2} \leq K_3 \| \psi' \|_{L^2}$, since $D$ is continuous from $L^2_1$ to $L^2$. Thus, we conclude that
$$ |\langle \psi, \zeta' \phi \rangle_{L^2}| \leq \frac{K_3}{\lambda} \| v \|_{L^2} \| w \|_{L^2}, $$
which completes the proof.
\end{proof}
\section{Control near the stationary points}\label{sec:control-near-stationary}
\label{sec:controlnearby}
We are now ready to prove \eqref{eq:dFlambda} in the neighborhoods $\nu_{\varepsilon}(\mathcal{O}^j_\lambda)$ of the orbits of stationary points.
\begin{comment}
Note that the stationary points of $\mathcal{X}qmlgc$ come in $S^1$-orbits, which can be points (for reducibles) or circles (for irreducibles). In either case, a point $x \in W^\lambda$ is within $L^2_{k-1}$ distance $\varepsilonilon$ from a stationary orbit $\mathcal{O}$ if and only if it can be written as
$$ x = x_{\lambda} + y,$$
where $x_{\lambda}=(a_{\lambda}, \phi_{\lambda}) \in \mathcal{O}$ and $y=(b, \psi) \in W^\lambda$ satisfies
\begin{equation}
\label{eq:conditionsy}
\operatorname{Re} \langle i\phi, \psi\rangle_{L^2}=0, \ \ \|y\|_{L^2_{k-1}} \leq \varepsilonilon.
\end{equation}
Indeed, we can take $x_{\lambda}$ to be the closest point to $x$ in $\mathcal{O}$ in terms of $L^2$ distance, so that $x-x_{\lambda}=y$ is $L^2$-perpendicular to the orbit, i.e., to the vector $i\phi$. To see that $\|y\|_{L^2_{k-1}} \leq \varepsilonilon$, let $x'_{\lambda}$ be the closest point to $x$ in $\mathcal{O}$ in terms of $L^2_{k-1}$ distance, and note that
$$ \| x-x_{\lambda} \|_{L^2} \leq \| x-x'_{\lambda} \|_{L^2} \leq \| x-x'_{\lambda} \|_{L^2_{k-1}} \leq \varepsilonilon.$$
\end{comment}
Consider the stationary point $x^j_{\infty}$ of $\mathcal{X}qgc$. Since $x^j_{\infty}$ is non-degenerate, we have that when restricted to the anticircular global Coulomb slice, $l+\D_{x^j_{\infty}} c_{\q}$ is an invertible, self-adjoint operator with respect to the metric $\tilde g$. We will use this to show that $(l + p^\lambda \D_{x^j_\lambda} c_\q)(y)$ grows at least linearly in $y$ when $y$ is $L^2$ orthogonal to the $S^1$-orbit of $x^j_\lambda = (a^j_\lambda,\phi^j_\lambda)$; this will be key for establishing the inequalities \eqref{eq:dFlambda} near $x^j_\lambda$.
\begin{lemma}\label{lem:approximate-eigenvalue-bounds}
There exists a constant $\mu_j > 0$ with the following property. For $\lambda \gg 0$ and any $y \in W_1$ with $\operatorname{Re} \langle y , (0, i\phi^j_\lambda) \rangle_{L^2} = 0$, we have
\begin{equation}\label{eq:muj}
\| (l + p^\lambda \D_{x^j_\lambda} c_\q)(y) \|_{L^2} \geq \mu_j \| y \|_{L^2_1}.
\end{equation}
\end{lemma}
Of course, this lemma implies that we have bounds $\| (l + p^\lambda \D_{x^j_\lambda} c_\q)(y) \|_{L^2} \geq \mu_j \| y \|_{L^2}$ as well.
\begin{proof}
For notational convenience, we omit the index $j$ from the argument. We suppose that this is not true. Then, we can find a sequence $y_n \in W^\lambdan$ with $\| y_n\|_{L^2_1} = 1$ and $\operatorname{Re} \langle y_n, (0,i \phi_{\lambda_n}) \rangle_{L^2} = 0$ and a sequence $\lambda_n \to \infty$ such that $\| (l + p^\lambdan \D_{x_{\lambda_n}} c_\q)(y_n) \|_{L^2} \to 0$. Extract a subsequence for which $y_n$ converges in $L^2$ to some $y \in W_0$. Since $x_{\lambda_n} \to x_\infty$ in $L^2_k$, the continuity of $\mathcal{D}c_\q$ (Lemma~\ref{lem:Dcq}) implies that
$$
p^\lambdan \D_{x_{\lambda_n}} c_\q(y_n) \to \D_{x_\infty} c_\q(y) \text{ in } L^2.
$$
We see that $l(y_n)$ converges in $L^2$. Because the $L^2_{-1}$ limit of $l(y_n)$ is $l(y)$, we in fact have that $l(y_n)$ converges to $l(y)$ in $L^2$ and thus $y_n \to y$ in $L^2_1$, and $y \neq 0$. However, since $(l + p^\lambdan \D_{x_{\lambda_n}} c_\q)(y_n)$ converges to 0, we see that $(l + \D_{x_\infty} c_\q)(y) = 0$. Bootstrapping further shows that $y$ is actually an element of $W_k$.
First, suppose that $x_\infty$ is reducible. In this case, we have contradicted the non-degeneracy of $x_\infty$, as $l + \D_{x_\infty} c_\q:W_k \to W_{k-1}$ is invertible (where we are using non-degeneracy in the blow-down). Now, suppose that $x_\infty$ is an irreducible stationary point of $\mathcal{X}qgc$. In this case, we can only say that $l + \D_{x_\infty} c_\q$ is injective on the $\tilde{g}(x_\infty)$-orthogonal complement of the $S^1$ orbit of $x_\infty$. Thus, we have that $y$ must be a non-zero multiple of $(0,i\phi_\infty)$, the vector tangent to the $S^1$ orbit of $x_\infty$. Because $\operatorname{Re} \langle y_n, (0,i\phi_{\lambda_n}) \rangle_{L^2} = 0$ for all $n$, we see that $\operatorname{Re} \langle y, (0,i\phi_\infty) \rangle_{L^2} = 0$ as well. This is a contradiction.
\begin{comment}
If $l + p^\lambda \D_{x_\lambda} c_\q$ were self-adjoint, we could obtain such bounds by studying the eigenvalues of this operator. However, as we have discussed, this operator is not self-adjoint with respect to $L^2$ or $\mathfrak{t}ilde{g}$. Therefore, we will approximate this operator by a self-adjoint one and then bound the eigenvalues of the new operator. Since $x_\infty$ is a stationary point of $l + c_\q$, we have that $\D_{x_\infty} (l + c_\q) = l + \D_{x_\infty} c_\q$ is $\mathfrak{t}ilde{g}(x_\infty)$-self-adjoint. Therefore, $l + p^\lambda_{\mathfrak{t}ilde{g}(x_\infty)} \D_{x_\infty} c_\mathfrak{q}p^\lambda_{\mathfrak{t}ilde{g}(x_\infty)}$ is also $\mathfrak{t}ilde{g}(x_\infty)$-self-adjoint as a map from $W_0$ to $W_0$. We can restate this as $l + p^\lambda_{\mathfrak{t}ilde{g}(x_\infty)} \D_{x_\infty} c_\q$ is self-adjoint as a map from $W^\lambda$ to $W^\lambda$. This is the operator we will use to approximate $l + p^\lambda \D_{x_\lambda} c_\q$.
As operators from $W^\lambda$ to $W^\lambda$, each equipped with the $L^2$ norm, we have the bounds
\begin{align}
\label{eq:eigenvalue-Dxlambda-bounds} \| (l + p^\lambda \D_{x_\lambda} c_\q) - (l + p^\lambda_{\mathfrak{t}ilde{g}(x_\infty)} \D_{x_\infty} c_\mathfrak{q}) \| &\leq \| p^\lambda \D_{x_\lambda} c_\mathfrak{q}- p^\lambda \D_{x_\infty} c_\mathfrak{q}\| + \| (p^\lambda - p^\lambda_{\mathfrak{t}ilde{g}(x_\infty)} ) \D_{x_\infty} c_\mathfrak{q}\| \\
\label{eq:eigenvalue-Dxlambda-bounds2}
& \leq \| \D_{x_\lambda} c_\mathfrak{q}- \D_{x_\infty} c_\mathfrak{q}\| + \| p^\lambda - p^\lambda_{\mathfrak{t}ilde{g}(x_\infty)} \| \cdot \| \D_{x_\infty} c_\mathfrak{q}\|.
\end{align}
First, $\| p^\lambda - p^\lambda_{\mathfrak{t}ilde{g}(x_\infty)} \| \mathfrak{t}o 0$ as $\lambda \mathfrak{t}o \infty$ by Proposition~\ref{prop:pgpl}. Next, $\| \D_{x_\lambda} c_\mathfrak{q}- \D_{x_\infty} c_\mathfrak{q}\| \mathfrak{t}o 0$ since $\mathcal{D}c_\q$ is continuous as a map from $W_k$ to $\operatorname{Hom}(W_0,W_0)$. Therefore, we may make \eqref{eq:eigenvalue-Dxlambda-bounds2} arbitrarily small. It now suffices to instead prove that for $\lambda \gg 0$,
$$\| (l + p^\lambda_{\mathfrak{t}ilde{g}(x_\infty)} \D_{x_\infty} c_\q)(y) \|_{L^2} \geq \frac{3\mu}{4} \| y \|_{L^2}.$$
Because $l + p^\lambda_{\mathfrak{t}ilde{g}(x_\infty)} \D_{x_\infty} c_\q: W^\lambda \mathfrak{t}o W^\lambda$ is self-adjoint, it suffices to show that for $\lambda \gg 0$, the eigenvalues of $l + p^\lambda_{\mathfrak{t}ilde{g}(x_\infty)} \D_{x_\lambda} c_\q$ are at least $\frac{3\mu}{4}$ in absolute value.
Suppose this is false. In this case, we can find a sequence of eigenvectors $y_n \in W^\lambdan$ for $l + p^\lambdan_{\mathfrak{t}ilde{g}(x_\infty)} \D_{x_\infty} c_\q$ with eigenvalues $\mu_n$ satisfying $|\mu_n| < \frac{3\mu}{4}$. Normalize these vectors so that $\| y_n \|_{L^2_{k-1}}=1$ and pick a subsequence converging to some $y$ in $L^2_{k-2}$, and such that $\mu_n$ converges to some $\mu_*$ with $|\mu_*| \leq \frac{3\mu}{4}$. Since the $y_n$ are eigenvectors and $p^\lambdan_{\mathfrak{t}ilde{g}(x_\infty)} \D_{x_\infty} c_\q(y_n) \mathfrak{t}o \D_{x_\infty} c_\q(y)$ by Proposition~\ref{prop:pgpl}, we get that $l(y_n) \mathfrak{t}o l(y)$ in $L^2_{k-2}$, so actually $y_n \mathfrak{t}o y$ in $L^2_{k-1}$, and hence $y \neq 0$. Therefore, $y$ is an eigenvector of $l+ \D_{x_\infty} c_\q$ for an eigenvalue $\mu_*$ with $|\mu_*| \leq \frac{3\mu}{4}$. Contradiction.
\end{comment}
\end{proof}
Recall that $\varepsilon > 0$ was chosen in Section~\ref{sec:Flambda} to be sufficiently small, depending on two requirements from Assumption~\ref{as:1}. We need an additional requirement. For each $j=1, \dots, m$, let $\mu_j$ be the constant as in Lemma~\ref{lem:approximate-eigenvalue-bounds}.
\begin{assumption}
\label{as:2}
For each $j=1, \dots, m$ and $x \in \nu_{3\varepsilon}(\mathcal{O}^j)$, we have
$$ \| \D_x c_{\q} - \D_{x^j_{\infty}} c_{\q} \| \leq \frac{\mu_j}{40 C_0^2}.$$
Here, the operator norm is taken in $\operatorname{Hom}(W_0, W_0)$, and $C_0$ is the constant from Proposition~\ref{prop:EquivalentMetrics}.
\end{assumption}
Note that the existence of such an $\varepsilon$ is guaranteed by the continuity of $\mathcal{D}c_{\q}$, as Lemma~\ref{lem:Dcq}, applied for Sobolev index $k-1$ instead of $k$, implies that
$$\mathcal{D}c_{\q} : W_{k-1} \to \operatorname{Hom}(W_0, W_0)$$
is continuous.
\begin{proposition}
\label{prop:cond2}
Fix $\varepsilon > 0$ satisfying Assumptions~\ref{as:1} and \ref{as:2}. Then, for all $\lambda \gg 0$, we have
\begin{equation}\label{eq:Xqmlgc-gradient-inequality}
\frac{1}{4} \| \mathcal{X}qmlgc \|^2_{\tilde{g}} \leq dF_\lambda (\mathcal{X}qmlgc) \leq 4 \| \mathcal{X}qmlgc \|^2_{\tilde{g}}
\end{equation}
at any point $x \in W^\lambda \cap B(2R)$ which is at $L^2_{k-1}$ distance at most $\varepsilon$ from a stationary point of $\mathcal{X}qmlgc$.
\end{proposition}
\begin{proof}
In Section~\ref{sec:Flambda} we observed that, because of Assumption~\ref{as:1} (a) on $\varepsilon$, the $S^1$-orbits $\mathcal{O}_{\lambda}^j$ of stationary points $x^j_\lambda$ are at least $L^2_{k-1}$ distance $5\varepsilon$ apart from each other. Hence, their neighborhoods $\nu_{2\varepsilon}(\mathcal{O}_{\lambda}^j)$ are disjoint.
Let $x$ be in $\nu_{\varepsilon}(\mathcal{O}_{\lambda}^j)$ for some $j$. Then, the only contribution to the summation in Equation~\eqref{eq:Tlambda} is from that $j$ and, moreover, we have $H^j_{\lambda}(x)=1$. For simplicity, we will omit the index $j$ from $x^j_\lambda$, $x^j_\infty$, $\mathcal{O}^j$, $\mathcal{O}_{\lambda}^j$, and $\omega^j_\lambda$ for the rest of this section. Thus,
$$T_{\lambda}(x) = x + \omega_{\lambda}(x) \cdot (x_{\infty}- x_{\lambda}).$$
We now expand $dF_\lambda(\mathcal{X}qmlgc)$:
\begin{align*}
(dF_\lambda)_{x}(\mathcal{X}qmlgc(x)) &= (d\mathscr{L}_\q)_{T_\lambda(x)} (\D_x T_\lambda)(\mathcal{X}qmlgc(x)) \\
&= \langle (l + c_\q)(T_\lambda(x)), (\D_x T_\lambda)(l + p^\lambda c_\q)(x) \rangle_{\tilde{g}(T_\lambda(x))} \\
&= \langle (l + c_\q)(T_\lambda(x)), (l + p^\lambda c_\q)(x) + (\D_x \omega_\lambda)(l + p^\lambda c_\q)(x) \cdot (x_\infty - x_\lambda) \rangle_{\tilde{g}(T_\lambda(x))}.
\end{align*}
By the $S^1$-equivariance of $\mathcal{X}qmlgc$ and the $S^1$-invariance of $F_\lambda$, it suffices to show that \eqref{eq:Xqmlgc-gradient-inequality} holds when $\omega_\lambda(x) = 1$, that is, when the $L^2$-closest point to $x$ on $\mathcal{O}_{\lambda}$ is exactly $x_\lambda$. (Of course, we automatically have $\omega_\lambda(x) = 1$ if $x_\lambda$ is reducible.) Further, note that if $x_\lambda$ is reducible, then $\D_x \omega_\lambda = 0$.
We will analyze $dF_\lambda (\mathcal{X}qmlgc)$ by linearizing some of the terms in the inner product above about $x_\lambda$. We begin by linearizing $(l + p^\lambda c_\q)(x)$. Let $y = x - x_\lambda$. Since $\omega_\lambda(x) = 1$, we have that $y$ is orthogonal to the $S^1$ orbit of $x_\lambda$. Since $x_\lambda$ is a stationary point, we have $(l + p^\lambda c_\q)(x_\lambda) = 0$. Therefore, the difference between $(l + p^\lambda c_\q)(x)$ and $(l + p^\lambda \D_{x_\lambda}c_\q)(y)$ is given by
\begin{align}
(l + p^\lambda c_\q)(x_\lambda + y) - (l + p^\lambda \D_{x_{\lambda}} c_\q)(y) &= l(x_\lambda) + p^\lambda(c_\q(x_\lambda + y)) - p^\lambda (\D_{x_{\lambda}} c_\q(y)) \\
\nonumber &= p^\lambda \left( c_\q(x_\lambda + y) - c_\q(x_\lambda) - \D_{x_{\lambda}} c_\q(y) \right) \\
\label{eq:Stildelambda} &= \int^1_0 p^\lambda(\D_{x_\lambda + ty} c_\q - \D_{x_{\lambda}} c_\q)(y) dt.
\end{align}
We denote the term in \eqref{eq:Stildelambda} by $\widetilde{S}_\lambda(y)$.
Similarly, we would like to linearize $(l + c_\q)(T_\lambda(x))$. We first recall that $T_\lambda(x_\lambda) = x_\infty$, which is a stationary point of $l + c_\q$. Second, since $\omega_\lambda(x) = 1$, we have that $x - x_\lambda$ is real $L^2$ orthogonal to $i x_\lambda$, and hence $(\D_{x_\lambda + ty} T_\lambda)(y) = y$ for any $t \in [0,1]$. Therefore, we can write
\begin{align*}
(l + c_\q)(T_\lambda(x_\lambda + y)) &= (l + c_\q)(T_\lambda(x_\lambda + y))- (l+c_\q)(T_\lambda(x_\lambda)) \\
&= \int^1_0 ( l + \D_{T_\lambda(x_\lambda + ty)} c_\q) \circ (\D_{x_\lambda + ty}T_{\lambda})(y) dt \\
&= \int^1_0 ( l + \D_{T_\lambda(x_\lambda + ty)} c_\q) (y) dt\\
&= \int^1_0 ( l + \D_{x_{\infty} + ty} c_\q) (y) dt.
\end{align*}
We obtain
\begin{equation}
\label{eq:Rtildelambda}
(l + c_\q)(T_\lambda(x_\lambda + y)) - (l + \D_{x_{\infty}} c_\q)(y) = \int^1_0 (\D_{x_\infty + ty} c_\q - \D_{x_\infty} c_\q)(y) dt.
\end{equation}
We let $\widetilde{R}_\lambda(y)$ denote the term on the right hand side of \eqref{eq:Rtildelambda}.
Using these two linearizations and the fact that $\langle (1 - p^\lambda_{\tilde{g}(x')})(u), v \rangle_{\tilde{g}(x')} = 0$ for any $v \in W^\lambda$ and $u, x' \in W_0$, we obtain
\begin{align}\label{eq:dFlambda-linearized-pre}
(dF_\lambda)_{x}(\mathcal{X}qmlgc(x)) = & \langle (l + p^\lambda_{\tilde{g}(T_\lambda(x))} \D_{x_\infty} c_\q)(y) + \widetilde{R}_\lambda(y), \\
& (l + p^\lambda \D_{x_\lambda} c_\q)(y) + \widetilde{S}_\lambda(y) + (\D_x \omega_\lambda)(l + p^\lambda c_\q)(x) \cdot (x_\infty - x_\lambda)\rangle_{\tilde{g}(T_\lambda(x))}, \notag
\end{align}
where $\widetilde{R}_\lambda(y)$ and $\widetilde{S}_\lambda(y)$ are as defined above. At this point, it is still too difficult to compare the linear terms to understand this inner product. Therefore, we will alter our expressions so that the leading terms align. Write
\begin{equation}\label{eq:dFlambda-linearized}
(dF_\lambda)_{x}(\mathcal{X}qmlgc(x)) = \langle (l + p^\lambda \D_{x_\lambda} c_\q)(y) + R_\lambda(y) ,
(l + p^\lambda \D_{x_\lambda} c_\q)(y) + S_\lambda(y) \rangle_{\tilde{g}(T_\lambda(x))},
\end{equation}
where
\begin{align}
\label{eq:Rlambda}
R_\lambda(y) &= \widetilde{R}_\lambda(y) + (p^\lambda_{\tilde{g}(T_\lambda(x))} \D_{x_\infty}c_\q - p^\lambda \D_{x_\lambda} c_\q)(y), \\
\label{eq:Slambda}
S_\lambda(y) &= \widetilde{S}_\lambda(y) + (\D_x \omega_\lambda)(l + p^\lambda c_\q)(x) \cdot (x_\infty - x_\lambda).
\end{align}
In Lemmas~\ref{lem:RStildelambda-bounds}, \ref{lem:Rlambda}, and \ref{lem:Slambda} below, we will prove that for our choice of $\varepsilon$ as in Assumption~\ref{as:2} and for $\lambda \gg 0$,
\begin{align}
\label{eq:Rtildelambda-bounds}\| \widetilde{R}_\lambda(y) \|_{L^2} &\leq \frac{1}{20 C^2_0} \| (l + p^\lambda \D_{x_\lambda} c_\q)(y) \|_{L^2} \\
\label{eq:Stildelambda-bounds}\| \widetilde{S}_\lambda(y) \|_{L^2} &\leq \frac{1}{20 C^2_0} \| (l + p^\lambda \D_{x_\lambda} c_\q)(y) \|_{L^2} \\
\label{eq:Rlambda-bounds}
\| R_\lambda(y) \|_{L^2} &\leq \frac{1}{10C_0^2} \| (l + p^\lambda \D_{x_\lambda} c_\q) (y) \|_{L^2} \\
\label{eq:Slambda-bounds}
\| S_\lambda(y) \|_{L^2} &\leq \frac{1}{10 C_0^2} \| (l + p^\lambda \D_{x_\lambda} c_\q) (y) \|_{L^2}
\end{align}
where $C_0$ is the constant of equivalency between the $\tilde{g}$- and $L^2$-metrics from Proposition~\ref{prop:EquivalentMetrics}. This will imply the analogous inequalities with $L^2$ replaced by $\tilde{g}(T_\lambda(x))$ and without the $C_0^2$ term. Let us see why this will complete the proof of the proposition.
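(To spell out this conversion, with $C_0$ as in Proposition~\ref{prop:EquivalentMetrics}, so that $C_0^{-1} \| v \|_{L^2} \leq \| v \|_{\tilde{g}(x')} \leq C_0 \| v \|_{L^2}$ for all $v$ and $x'$: for instance, \eqref{eq:Rlambda-bounds} gives
$$
\| R_\lambda(y) \|_{\tilde{g}(T_\lambda(x))} \leq C_0 \| R_\lambda(y) \|_{L^2} \leq \frac{1}{10 C_0} \| (l + p^\lambda \D_{x_\lambda} c_\q)(y) \|_{L^2} \leq \frac{1}{10} \| (l + p^\lambda \D_{x_\lambda} c_\q)(y) \|_{\tilde{g}(T_\lambda(x))},
$$
and the other three inequalities convert in the same way.)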
Note that in an inner product space with vectors $u, v, w, \tilde{w}$ such that
\begin{equation}\label{eqn:uvw}
\| v \| \leq \frac{1}{10} \| u\|, \ \| w \| \leq \frac{1}{10} \| u\|, \ \| \tilde{w} \| \leq \frac{1}{20} \| u\|,
\end{equation}
we have the inequalities
\begin{align*}
\frac{19}{20} \| u \| & \leq \| u + \tilde{w} \| \leq \frac{21}{20} \| u \| \\
\left(\frac{9}{10} \right)^2 \| u \|^2 &\leq \langle u + v, u + w \rangle \leq \left(\frac{11}{10}\right)^2 \| u \|^2,
\end{align*}
from which we can deduce
$$
\frac{1}{2} \| u + \tilde{w} \|^2 \leq \langle u + v, u + w \rangle \leq 2 \| u + \tilde{w} \|^2.
$$
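Explicitly, the last display follows from the two preceding ones together with the numerical comparisons $\frac{1}{2} \left(\frac{21}{20}\right)^2 = \frac{441}{800} \leq \left(\frac{9}{10}\right)^2$ and $\left(\frac{11}{10}\right)^2 = \frac{121}{100} \leq 2\left(\frac{19}{20}\right)^2 = \frac{361}{200}$.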
For our case, we take
$$
u = (l + p^\lambda \D_{x_\lambda} c_\q)(y), \ v = R_\lambda(y), \ w = S_\lambda(y), \ \tilde{w} = \widetilde{S}_\lambda(y),
$$
and $\tilde{g}(T_\lambda(x))$ as the inner product. Note that $u + \tilde{w} = \mathcal{X}qmlgc(x)$. The inequalities \eqref{eq:Stildelambda-bounds}-\eqref{eq:Slambda-bounds} imply that \eqref{eqn:uvw} holds and thus
$$
\frac{1}{2} \| \mathcal{X}qmlgc \|^2_{\tilde{g}(T_\lambda(x))} \leq dF_\lambda (\mathcal{X}qmlgc) \leq 2 \| \mathcal{X}qmlgc \|^2_{\tilde{g}(T_\lambda(x))}.
$$
Since $T_\lambda(x) \to x$, the constant of equivalency between the $\tilde{g}(T_\lambda(x))$- and $\tilde{g}(x)$-metrics is at most $\sqrt{2}$ for $\lambda \gg 0$; in particular, $\frac{1}{2}\| v \|^2_{\tilde{g}(x)} \leq \| v \|^2_{\tilde{g}(T_\lambda(x))} \leq 2 \| v \|^2_{\tilde{g}(x)}$ for every $v$. Combining this with the previous display, we obtain
$$
\frac{1}{4} \| \mathcal{X}qmlgc \|^2_{\tilde{g}} \leq dF_\lambda (\mathcal{X}qmlgc) \leq 4 \| \mathcal{X}qmlgc \|^2_{\tilde{g}},
$$
completing the proof.
\end{proof}
The rest of the section is now devoted to proving \eqref{eq:Rtildelambda-bounds}- \eqref{eq:Slambda-bounds}. We do this by a series of lemmas.
\begin{lemma}\label{lem:RStildelambda-bounds}
For $\lambda \gg 0$ and any $y \in W_1$ orthogonal to the $S^1$ orbit of $x_\lambda$, we have the following inequalities
\begin{align*}
\| \widetilde{R}_\lambda(y) \|_{L^2} &\leq \frac{1}{20 C^2_0} \| (l + p^\lambda \D_{x_\lambda} c_\q)(y) \|_{L^2} \\
\| \widetilde{S}_\lambda(y) \|_{L^2} &\leq \frac{1}{20 C^2_0} \| (l + p^\lambda \D_{x_\lambda} c_\q)(y) \|_{L^2}.
\end{align*}
\end{lemma}
\begin{proof}
We will prove the desired inequality for the case of $\widetilde{S}_\lambda$. The case of $\widetilde{R}_\lambda$ is similar. By Lemma~\ref{lem:approximate-eigenvalue-bounds}, $\| (l + p^\lambda \D_{x_\lambda} c_\q)(y) \|_{L^2} \geq \mu\| y \|_{L^2}$. Therefore, it suffices to establish the upper bound
\begin{equation}\label{eq:Slambda-eigenvalue-bound}
\| \widetilde{S}_\lambda (y) \|_{L^2} \leq \frac{\mu}{20 C^2_0}\| y \|_{L^2}.
\end{equation}
Since $\| y \|_{L^2_{k-1}} \leq \varepsilon$ and $\| x_{\lambda} - x_\infty \|_{L^2_{k-1}} \leq \varepsilon$, we get that $x_{\lambda} + ty$ is in $\nu_{3\varepsilon}(\mathcal{O})$ for all $t \in [0,1]$. Applying Assumption~\ref{as:2} to both $x_\lambda + ty$ and $x_\lambda$, and using the triangle inequality through $\D_{x_\infty} c_\q$, we get $\| \D_{x_\lambda + ty} c_\q - \D_{x_{\lambda}} c_\q \| \leq 2 \cdot \frac{\mu}{40 C^2_0} = \frac{\mu}{20 C^2_0}$. We obtain
\begin{align*}
\| \int^1_0 p^\lambda( \D_{x_\lambda + ty} c_\q - \D_{x_\lambda} c_\q) (y) dt \|_{L^2} &\leq \left ( \int^1_0 \| p^\lambda \| \cdot \| \D_{x_\lambda + ty} c_\q - \D_{x_\lambda} c_\q \| dt \right ) \cdot \| y \|_{L^2} \\
&\leq \frac{\mu}{20 C^2_0} \|y \|_{L^2},
\end{align*}
which establishes \eqref{eq:Slambda-eigenvalue-bound}.
\end{proof}
\begin{lemma}\label{lem:Rlambda}
For $\lambda \gg 0$, the inequality \eqref{eq:Rlambda-bounds} holds.
\end{lemma}
\begin{proof}
Using Lemma~\ref{lem:RStildelambda-bounds} and the definition of $R_\lambda$ in \eqref{eq:Rlambda}, it suffices to prove that for $\lambda \gg 0$,
$$
\| (p^\lambda_{\mathfrak{t}ilde{g}(T_\lambda(x))} \D_{x_\infty} c_\q)(y) - (p^\lambda \D_{x_\lambda} c_\q)(y) \|_{L^2} \leq \frac{1}{20 C^2_0} \| (l + p^\lambda \D_{x_\lambda} c_\q) (y) \|_{L^2}.
$$
The proof will be similar to that of Lemma~\ref{lem:RStildelambda-bounds}. By Lemma~\ref{lem:approximate-eigenvalue-bounds}, $\| (l + p^\lambda \D_{x_\lambda} c_\q)(y) \|_{L^2} \geq \mu \| y \|_{L^2}$. Therefore, it suffices to establish the upper bound
\begin{equation}
\label{eq:forty}
\| p^\lambda_{\tilde{g}(T_\lambda(x))} \D_{x_\infty} c_\q - p^\lambda \D_{x_\lambda} c_\q \|\leq \frac{\mu}{20 C^2_0},
\end{equation}
where the norm is computed as an operator from $W_0$ to $W_0$. In view of Proposition~\ref{prop:EquivalentMetrics}, we have that as an operator from $L^2$ to $L^2$, $\| p^\lambda_{\tilde{g}(T_\lambda(x))} \| \leq C^2_0$, as $p^\lambda_{\tilde{g}}$ is a $\tilde{g}$-orthogonal projection. Therefore,
\begin{align*}
\| p^\lambda_{\tilde{g}(T_\lambda(x))} \D_{x_\infty} c_\q - p^\lambda \D_{x_\lambda} c_\q \| &\leq
\| p^\lambda_{\tilde{g}(T_\lambda(x))} (\D_{x_\infty} c_\q - \D_{x_\lambda} c_\q)\|+ \|p^\lambda_{\tilde{g}(T_\lambda(x))} \D_{x_\lambda} c_\q - p^\lambda \D_{x_\lambda} c_\q\| \\
&\leq C^2_0 \| \D_{x_\infty} c_\q - \D_{x_\lambda} c_\q \| + \| p^\lambda_{\tilde{g}(T_\lambda(x))} - p^\lambda \| \cdot \| \D_{x_\lambda} c_\q \|.
\end{align*}
Since $x_\lambda \to x_\infty$ in $L^2_k$ norm, the continuity of $\mathcal{D}c_\q$ implies that we have uniform bounds on $\| \D_{x_\lambda} c_\q \|$ independent of $\lambda$; further, this implies that $\| \D_{x_\infty} c_\q - \D_{x_\lambda} c_\q \| \to 0$ as $\lambda \to \infty$. Finally, by Proposition~\ref{prop:pgpl}, $\| p^\lambda_{\tilde{g}(T_\lambda(x))} - p^\lambda \| \to 0$ as $\lambda \to \infty$. Thus, we conclude that for $\lambda \gg 0$, the operator norm of $p^\lambda_{\tilde{g}(T_\lambda(x))} \D_{x_\infty} c_\q - p^\lambda \D_{x_\lambda} c_\q$ is at most $\frac{\mu}{40 C_0^2}$. This proves \eqref{eq:forty}, which we saw earlier was sufficient to complete the proof.
\end{proof}
\begin{lemma}\label{lem:Slambda}
For $\lambda \gg 0$, the inequality \eqref{eq:Slambda-bounds} holds.
\end{lemma}
\begin{proof}
Using Lemma~\ref{lem:RStildelambda-bounds} and the definition of $S_\lambda$ in \eqref{eq:Slambda}, it suffices to prove that for $\lambda \gg 0$,
\begin{equation}\label{eq:Dxomega-Xqmlgc}
\| (\D_x \omega_\lambda)(l + p^\lambda c_\q)(x) \cdot (x_\infty - x_\lambda) \|_{L^2} \leq \frac{1}{20 C^2_0}\| (l + p^\lambda \D_{x_\lambda} c_\q)(y) \|_{L^2}.
\end{equation}
By the definition of $\widetilde{S}_\lambda$, we see that
$$
(\D_x \omega_\lambda)(l + p^\lambda c_\q)(x) = (\D_x \omega_\lambda)\bigl ( ( l + p^\lambda \D_{x_\lambda} c_\q)(y) + \widetilde{S}_\lambda(y) \bigr ).
$$
Recall from \eqref{eq:Dxomega-bounds}, $|(\D_x \omega_\lambda)(v)| \leq C' \| v \|_{L^2}$, for a constant $C'$ independent of $x \in B(2R)$. Combining this inequality with Lemma~\ref{lem:RStildelambda-bounds}, we have
\begin{align*}
|(\D_x \omega_\lambda)(l + p^\lambda c_\q)(x)| \leq C'(1 + \frac{1}{20 C^2_0}) \| (l + p^\lambda \D_{x_\lambda} c_\q)(y) \|_{L^2}.
\end{align*}
Since $\| x_\infty - x_\lambda \|_{L^2} \to 0$, we can choose $\lambda \gg 0$ such that
$$
|(\D_x \omega_\lambda)(l + p^\lambda c_\q)(x)| \cdot \| x_\infty - x_\lambda \|_{L^2} \leq \frac{1}{20 C^2_0} \| (l + p^\lambda \D_{x_\lambda} c_\q)(y) \|_{L^2}.
$$
This establishes \eqref{eq:Dxomega-Xqmlgc}, and the proof is complete.
\end{proof}
\section[Consequences]{Proposition~\ref{prop:LqQuasi} and its consequences} \label{sec:consequences}
Proposition~\ref{prop:LqQuasi} now follows by combining Propositions~\ref{prop:intermediate} and~\ref{prop:cond2}. Let us also give a few corollaries, which will prove useful in the next sections.
From Proposition~\ref{prop:LqQuasi}, together with the equivalence between the $\tilde{g}$ and $L^2$ metrics (Proposition~\ref{prop:EquivalentMetrics}), we obtain
\begin{corollary}\label{cor:Flambda-gtilde-bounds}
There exists $C_0 > 0$ such that for $\lambda \gg 0$,
\begin{equation}\label{eq:Flambda-gtilde-bounds}
\frac{1}{4C_0} \| \mathcal{X}qmlgc \|^2_{L^2} \leq dF_\lambda(\mathcal{X}qmlgc) \leq 4C_0 \| \mathcal{X}qmlgc \|^2_{L^2}
\end{equation}
at any $x \in W^\lambda \cap B(2R)$.
\end{corollary}
We have shown that $\mathcal{X}qmlgc$ is a Morse equivariant quasi-gradient vector field as in Definition~\ref{def:eqgv} (on a non-compact manifold, as discussed in Section~\ref{sec:combinedMorse}). Lemma~\ref{lem:ao2} implies the following:
\begin{corollary}
\label{cor:endpointsBlowDown}
Let $\gamma: \rr \to W^\lambda \cap B(2R)$ be a flow line of $\mathcal{X}qmlgc$, for $\lambda \gg 0$. Then, $\lim_{t\to -\infty} [\gamma(t)]$ and $\lim_{t\to +\infty} [\gamma(t)]$ exist in $(W^\lambda \cap B(2R))/S^1$, and they are both projections of stationary points of $\mathcal{X}qmlgc$.
\end{corollary}
There is also the similar result in the blow-up, which is a consequence of Lemma~\ref{lem:ao3}:
\begin{corollary}
\label{cor:endpointsBlowUp}
Let $[\gamma]: \rr \to (W^\lambda \cap B(2R))^{\sigma}/S^1$ be a flow line of $\mathcal{X}qmlgcsigma$, for $\lambda \gg 0$. Then, $\lim_{t\to -\infty} [\gamma(t)]$ and $\lim_{t\to +\infty} [\gamma(t)]$ exist in $(W^\lambda \cap B(2R))^{\sigma}/S^1$, and they are both projections of stationary points of $\mathcal{X}qmlgcsigma$.
\end{corollary}
In fact, one can say more about the limiting behavior of trajectories of $\mathcal{X}qmlgcsigma$. Recall that in classical Morse theory, trajectories converge to stationary points with exponential decay (see for example \cite[Section 10.2.b]{AudinDamian}). A similar argument works for Morse equivariant quasi-gradients as well. In particular, one obtains that trajectories $[\gamma]: \rr \to (W^\lambda \cap B(2R))^{\sigma}/S^1$ of $\mathcal{X}qmlagcsigma$ from $[x_\lambda]$ to $[y_\lambda]$ must be contained in $\B^{\operatorname{gC},\tau}_k([x_\lambda], [y_\lambda])$. In Chapter~\ref{sec:trajectories1} we will compute this exponential decay in terms of $F_\lambda$ more explicitly and obtain bounds uniform in $\lambda$.
If $I \subset \R$ is an interval and $\gamma: I \to W^\lambda \cap B(2R)$ is a trajectory of $\mathcal{X}qmlgc$, its energy is defined\footnote{Our definition of energy differs from that of \cite{KMbook} by a factor of two.} to be
\begin{equation}
\label{eq:energylambda}
\mathcal{E}(\gamma) = \int_I \left\|\frac{d\gamma}{dt} \right \|_{L^2(Y)}^2 \! \!\!dt \ =\int_I \| \mathcal{X}qmlgc(\gamma(t)) \|_{L^2(Y)}^2 dt.
\end{equation}
Note that our Corollary~\ref{cor:Flambda-gtilde-bounds} gives a quantitative version of the quasi-gradient condition $dF_{\lambda}(\mathcal{X}qmlgc) \geq 0$. A consequence is the following result, which will be useful to us in Chapter~\ref{sec:trajectories1} when we establish exponential decay results for trajectories. It says that the energy of an approximate trajectory is commensurable with the drop in $F_{\lambda}$:
\begin{corollary}
\label{cor:trajectory-Flambda}
There exists $C_0 > 0$ such that, for any $\lambda \gg 0$ and any closed interval $[t_1, t_2] \subset \R$, the following holds. If $\gamma: [t_1, t_2] \to W^\lambda \cap B(2R)$ is a trajectory of $\mathcal{X}qmlgc$, then
\begin{equation}
\label{eq:trajectory-Flambda}
\frac{1}{4C_0} \mathcal{E}(\gamma) \leq F_\lambda(\gamma(t_2)) - F_\lambda(\gamma(t_1)) \leq 4C_0 \mathcal{E}(\gamma).
\end{equation}
\end{corollary}
\begin{proof}
Note that
$$F_\lambda(\gamma(t_2)) - F_\lambda(\gamma(t_1))= \int_{t_1}^{t_2} dF_\lambda\Bigl(\frac{d}{dt} \gamma(t)\Bigr ) dt = \int_{t_1}^{t_2} dF_\lambda\bigl(\mathcal{X}qmlgc( \gamma(t)) \bigr ) dt .$$
Therefore, the result follows from \eqref{eq:energylambda} and Corollary~\ref{cor:Flambda-gtilde-bounds}.
\end{proof}
\chapter{Gradings}\label{sec:gradings}
Having established in the previous chapter that $\mathcal{X}qmlgc$ is a Morse equivariant quasi-gradient for $\lambda = \lambda^{\bullet}_i \gg 0$, we are able to define the chain groups of a Morse complex for $\mathcal{X}qmlagcsigma$ on $(B(2R) \cap W^\lambda)^\sigma/S^1$ as described in Section~\ref{subsec:CircleMorse}. (After establishing the Morse-Smale condition in Chapter~\ref{sec:MorseSmale}, we will see this indeed gives a complex.) In this chapter, we relate the gradings of stationary points of $\mathcal{X}qmlagcsigma$ with the gradings of the stationary points of $\mathcal{X}qagcsigma$. An important subtlety here is that stationary points of $\mathcal{X}qmlagcsigma$ can be thought of as living in the infinite-dimensional manifold $W^\sigma/S^1$ like those of $\mathcal{X}qagcsigma$, or in the smaller finite-dimensional manifold $(B(2R) \cap W^\lambda)^\sigma/S^1$, and thus there are two ways to define relative gradings. In Section~\ref{sec:gradingsstationarypoints}, we equate these two relative gradings and relate them to the relative gradings for stationary points of $\mathcal{X}qagcsigma$ in an appropriate grading range. In Section~\ref{sec:absgradings}, we define an absolute grading on stationary points of $\mathcal{X}qmlagcsigma$ (a shift of the Morse index) which we relate to the absolute grading $\operatorname{gr}_{\mathbb{Q}}$ on the stationary points of $\mathcal{X}qagcsigma$.
Thus, the work of Chapter~\ref{sec:criticalpoints} together with the claimed grading correspondences will establish an identification of the chain groups of the Morse complex for $\mathcal{X}qmlagcsigma$ with the monopole Floer complex in an appropriate grading range. This is summarized for the reader's benefit in Section~\ref{sec:conclusions}.
\section{Relative gradings of stationary points}\label{sec:gradingsstationarypoints}
We would like to define relative gradings on stationary points of $\mathcal{X}qmlagcsigma$ in $\N/S^1$ using the analogs of the discussions in Section~\ref{sec:modifications} and Section~\ref{sec:GradingsCoulomb}.
First, recall from \eqref{eq:Fqmlgctau} that we have a section
$$ \F^{\operatorname{gC}, \tau}_{\q^\lambda}: \tilde{\mathcal{C}}^{\operatorname{gC}, \tau}(Z) \to \V^{\operatorname{gC}, \tau}(Z).$$
We can take the covariant derivatives of this section, $\D^{\tilde{g},\tau}_{\gamma} \F^{\operatorname{gC},\tau}_{\q^\lambda}$ and $\D^\tau_{\gamma} \F^{\operatorname{gC},\tau}_{\q^\lambda}$, just as in \eqref{eq:tangerine} and \eqref{eq:tangerine2}. These agree in the particular case where $\gamma$ is a trajectory of $\mathcal{X}qmlgcsigma$. As discussed earlier, it will be easier to work with $\D^\tau_{\gamma} \F^{\operatorname{gC},\tau}_{\q^\lambda}$, so we focus on this operator.
\begin{lemma}
\label{lem:GrPres}
For $1 \leq j \leq k$ and $\lambda \gg 0$, the following is true: for any $[x_\infty],[y_\infty] \in \mathcal{C}rit_{\N}$ and each path $[\gamma_\lambda] \in \B^{\operatorname{gC},\tau}_k([x_\lambda],[y_\lambda])$ with representative $\gamma_\lambda \in \mathcal{C}^{\operatorname{gC},\tau}_k(x_\lambda, y_\lambda)$, the operator
$$(\D^\tau_{\gamma_\lambda} \F^{\operatorname{gC},\tau}_{\q^\lambda})|_{\K^{\operatorname{gC},\tau}_{j,\gamma_\lambda}} : \K^{\operatorname{gC},\tau}_{j,\gamma_{\lambda}} \to \V^{\operatorname{gC},\tau}_{j-1,\gamma_\lambda}$$
is Fredholm, with index equal to $\operatorname{gr}([x_\infty],[y_\infty])$.
\end{lemma}
Note that here we do not need the assumption that $\lambda = \lambda^{\bullet}_i$ for the nondegeneracy of $[x_\lambda], [y_\lambda]$, since we have that $[x_\lambda], [y_\lambda]$ are contained in $\N/S^1$ (cf. Proposition~\ref{prop:ND}).
\begin{proof}
Recall from Proposition~\ref{prop:FredholmCoulomb} that the relative grading between $[x_\infty]$ and $[y_\infty]$ is given by the index of $(\D^\tau_{\gamma} \F^{\operatorname{agC},\tau}_{\q})|_{\K^{\operatorname{gC},\tau}_{j,\gamma}}$. Recall further from Lemma~\ref{lem:allsurjective} that the index of $(\D^\tau_{\gamma} \F^{\operatorname{agC},\tau}_{\q})|_{\K^{\operatorname{gC},\tau}_{j,\gamma}}$ is the same as that of the operator
$$\Qhat_{\gamma}^{\operatorname{gC}} : \T^{\tau}_{j, \gamma} \to \T^{\tau}_{j-1, \gamma}$$
from \eqref{eq:newextension}. We can define a similar operator
$$\Qhat_{\gamma_\lambda, \q^\lambda}^{\operatorname{gC}}: \T^{\tau}_{j, \gamma_\lambda} \to \T^{\tau}_{j-1, \gamma_\lambda},$$ by using the perturbation $\q^\lambda$ instead of $\q$, and the path $\gamma_\lambda$ instead of $\gamma$. The same arguments as in the proof of Proposition~\ref{prop:FredholmCoulomb} show that $\Qhat_{\gamma_\lambda, \q^\lambda}^{\operatorname{gC}}$ is Fredholm, hence so is $(\D^\tau_{\gamma_\lambda} \F^{\operatorname{gC},\tau}_{\q^\lambda})|_{\K^{\operatorname{gC},\tau}_{j,\gamma_\lambda}}$, and they have the same index. Thus, it remains to show that the operators $\Qhat_{\gamma_\lambda, \q^\lambda}^{\operatorname{gC}}$ and $\Qhat_{\gamma}^{\operatorname{gC}}$ have the same index.
Standard arguments show that the index of the operator $\Qhat^{\operatorname{gC}}_\gamma$ is independent of the choice of $\gamma$, for $\gamma$ a smooth path which is asymptotic to $x_\infty$ and $y_\infty$. Similarly, it suffices to look at the index of $\Qhat^{\operatorname{gC}}_{\gamma_\lambda, \q^\lambda}$ for one choice of $\gamma_\lambda$. We choose $\gamma$ and $\gamma_\lambda$ to be constant outside of the interval $[-T,T]$. As in \eqref{eq:tQg}, we write
\begin{align*}
\Qhat^{\operatorname{gC}}_\gamma &= \frac{d}{dt} + L_0 + \hat{h}^{\operatorname{gC}}_t, \\
\Qhat^{\operatorname{gC}}_{\gamma_\lambda, \q^\lambda} &= \frac{d}{dt} + L_0 + \hat{h}^{\operatorname{gC},\lambda}_t.
\end{align*}
We will define an intermediate operator $\Qhat^{\operatorname{int}, \lambda}$ interpolating between these two operators. Choose $\beta(t)$ a smooth bump function which is 0 outside of $[-T-2,T+2]$ and is identically 1 in $[-T-1,T+1]$. Define
$$
\Qhat^{\operatorname{int},\lambda} = (1-\beta(t)) \cdot \Qhat^{\operatorname{gC}}_{\gamma_\lambda, \q^\lambda} + \beta(t) \cdot \Qhat^{\operatorname{gC}}_{\gamma}.
$$
We will compare $\Qhat^{\operatorname{int},\lambda}$ to $\Qhat^{\operatorname{gC}}_{\gamma_\lambda, \q^\lambda}$ and $\Qhat^{\operatorname{gC}}_\gamma$.
We begin by comparing $\Qhat^{\operatorname{int},\lambda}$ to $\Qhat^{\operatorname{gC}}_{\gamma_\lambda, \q^\lambda}$. We will show that they differ by a compact operator. Notice that
$$
\Qhat^{\operatorname{gC}}_{\gamma_\lambda, \q^\lambda} - \Qhat^{\operatorname{int},\lambda} = \beta(t) (\Qhat^{\operatorname{gC}}_{\gamma_\lambda, \q^\lambda} - \Qhat^{\operatorname{gC}}_\gamma) = \beta(t) (\hat{h}^{\operatorname{gC},\lambda}_t - \hat{h}^{\operatorname{gC}}_t).
$$
For each $t$, $\hat{h}^{\operatorname{gC},\lambda}_t - \hat{h}^{\operatorname{gC}}_t$ is continuous as a self-map of $L^2_n(Y; iT^*Y \oplus \mathbb{S} \oplus i\R)$ for each $1 \leq n \leq j$; therefore, it is compact as a map from $L^2_n(Y)$ to $L^2_{n-1}(Y)$. Since $\beta(t)$ is smooth and the terms $\hat{h}^{\operatorname{gC},\lambda}_t$ and $\hat{h}^{\operatorname{gC}}_t$ vary smoothly in $t$, we have that the four-dimensional map $\beta(t) (\hat{h}^{\operatorname{gC},\lambda}_t - \hat{h}^{\operatorname{gC}}_t)$ from $L^2_n(I \times Y)$ to $L^2_{n-1}(I \times Y)$ is compact whenever $I$ is a compact interval. While the inclusion of $L^2_n(\rr \times Y)$ into $L^2_{n-1}(\rr \times Y)$ is not compact since $\rr \times Y$ is unbounded, we have that $\beta(t) (\hat{h}^{\operatorname{gC},\lambda}_t - \hat{h}^{\operatorname{gC}}_t)$ is compact as an operator from $L^2_n(\rr \times Y)$ to $L^2_{n-1}(\rr \times Y)$ because $\beta$ is compactly supported. Therefore, $\Qhat^{\operatorname{int},\lambda}$ and $\Qhat^{\operatorname{gC}}_{\gamma_\lambda, \q^\lambda}$ differ by a compact operator, and hence have the same index.
We now compare $\Qhat^{\operatorname{int},\lambda}$ to $\Qhat^{\operatorname{gC}}_{\gamma}$. We write
$$\Qhat^{\operatorname{int},\lambda} - \Qhat^{\operatorname{gC}}_{\gamma} = (1 - \beta(t))(\Qhat^{\operatorname{gC}}_{\gamma_\lambda, \q^\lambda} - \Qhat^{\operatorname{gC}}_{\gamma}). $$
Let us analyze this difference more carefully. When this operator is non-zero (i.e., when $|t| >T + 1$), we have that $\Qhat^{\operatorname{int},\lambda} - \Qhat^{\operatorname{gC}}_{\gamma}$ is given by $(1-\beta(t)) (\hat{h}^{\operatorname{gC},\lambda}_{-\infty} - \hat{h}^{\operatorname{gC}}_{-\infty})$ when $t < -T-1$ and by $(1-\beta(t)) (\hat{h}^{\operatorname{gC},\lambda}_{+\infty} - \hat{h}^{\operatorname{gC}}_{+\infty})$ when $t > T + 1$. We will argue that $(1-\beta(t)) (\hat{h}^{\operatorname{gC},\lambda}_{-\infty} - \hat{h}^{\operatorname{gC}}_{-\infty})$, considered as an operator from $L^2_j(\rr \times Y)$ to $L^2_j(\rr \times Y)$, converges to 0 in operator norm as $\lambda \to \infty$. The same argument will apply for $\hat{h}^{\operatorname{gC},\lambda}_{+\infty} - \hat{h}^{\operatorname{gC}}_{+\infty}$, and this will imply that $\Qhat^{\operatorname{int},\lambda} - \Qhat^{\operatorname{gC}}_{\gamma}$ converges to $0$ in operator norm as $\lambda \to \infty$. In turn, this will imply that for $\lambda \gg 0$, the operators $\Qhat^{\operatorname{gC}}_{\gamma}$ and $\Qhat^{\operatorname{int},\lambda}$ must have the same index, completing the proof.
Because $\gamma$ and $\gamma_\lambda$ limit to stationary points of $\mathcal{X}qgcsigma$ and $\mathcal{X}qmlgcsigma$ respectively, \eqref{eq:4D-endpoint-Hess} yields that for $t < -T - 1$,
\begin{equation}\label{eq:hlambdagCoul}
(1-\beta(t)) (\hat{h}^{\operatorname{gC},\lambda}_{-\infty} - \hat{h}^{\operatorname{gC}}_{-\infty}) = (1-\beta(t)) (\widehat{\operatorname{Hess}}^{\tilde{g},\sigma}_{\q^\lambda,x_\lambda} - \widehat{\operatorname{Hess}}^{\tilde{g},\sigma}_{\q,x_\infty})
\end{equation}
and similarly for $t > T + 1$.
Recall that $\operatorname{Hess}^\sigma_{\q,x_\infty} = \mathcal{H}^\sigma_{x_\infty}$ by Lemma~\ref{lem:fakehessian}\eqref{fakehessian-realstationary}. Combining the arguments of Lemma~\ref{lem:fakehessian} with the formula for $\operatorname{Hess}^\sigma_{\q,x_\infty}$ in \cite[p.209]{KMbook}, the operator norm (in three dimensions) of the difference
$$
\widehat{\operatorname{Hess}}^{\tilde{g},\sigma}_{\q^\lambda,x_\lambda} - \widehat{\operatorname{Hess}}^{\tilde{g},\sigma}_{\q,x_\infty}
$$
from $L^2_n$ to $L^2_{n-1}$ can be bounded in terms of the $L^2_n$ norms of $x_\lambda - x_\infty$, $\D_{\pi(x_\infty)} (\Pi^{\operatorname{elC}} \circ c_\q) - \D_{\pi(x_\lambda)} (\Pi^{\operatorname{elC}} \circ p^\lambda c_\q)$, and $\D_{x_\infty} \widetilde{\Pi^{\operatorname{elC}} \circ c_\q}^1 - \D_{x_\lambda} \widetilde{ \Pi^{\operatorname{elC}} \circ p^\lambda c_\q}^1$, where $\pi(x_\infty), \pi(x_\lambda)$ denote the images in the blow-down. Note that since the two extended Hessians have the same first order term, $L_0$, the difference necessarily lands in $L^2_j(Y)$; more generally this difference induces a bounded operator from $L^2_n(Y)$ to $L^2_n(Y)$ for any $1 \leq n \leq j$. For each $1 \leq n \leq j$, the terms mentioned above converge to 0 in $L^2_n(Y)$ as $\lambda \to \infty$: $x_\lambda - x_\infty$ converges to 0 by the implicit function theorem, while the other two terms converge to zero by combining the arguments of Lemma~\ref{lem:fam} together with the continuity of $\Pi^{\operatorname{elC}}$ and its derivatives (Lemma~\ref{lem:elc}). It follows that the norm of
$$A_\lambda := \hat{h}^{\operatorname{gC},\lambda}_{-\infty} - \hat{h}^{\operatorname{gC}}_{-\infty},$$ thought of as an operator from $L^2_n(Y)$ to $L^2_{n-1}(Y)$, converges to zero as $\lambda \to \infty$. We must extend this to obtain the analogous convergence on $\rr \times Y$.
Precisely, we seek to show that the norm of $(1-\beta(t))A_\lambda$, thought of as an operator from $L^2_j(\rr \times Y)$ to $L^2_{j-1}(\rr \times Y)$, converges to zero as $\lambda \to \infty$. Since $\beta(t)$ is smooth and compactly supported, it suffices to consider $A_\lambda$ itself. Using the analysis on the operator norms from $L^2_n(Y)$ to $L^2_{n-1}(Y)$ above, we have that for $\eta \in L^2_j(\rr \times Y)$,
\begin{align*}
\|A_\lambda (\eta(t)) \|^2_{L^2_{j-1}(\rr \times Y)} &= \sum^{j-1}_{n = 0} \int_\rr \| A_\lambda (\eta^{(n)}(t)) \|^2_{L^2_{j-n-1}(Y)} dt \\
& \leq \sum^j_{n=0} \int_\rr C_\lambda \| \eta^{(n)}(t) \|^2_{L^2_{j-n}(Y)} dt \\
& = C_\lambda \| \eta \|^2_{L^2_{j}(\rr \times Y)},
\end{align*}
where $C_\lambda \to 0$ as $\lambda \to \infty$. This gives the desired convergence in operator norm and completes the proof.
\end{proof}
\begin{remark}
We expect that, with more work, one could show that $\Qhat_{\gamma_\lambda, \q^\lambda}^{\operatorname{gC}}$ converges to $\Qhat_{\gamma}^{\operatorname{gC}} $ in operator norm; and hence the two operators have the same index. This would give an alternate proof of the above lemma, without using the intermediate operator $\Qhat^{\operatorname{int},\lambda}$.
\end{remark}
We define a relative grading on the stationary points of $\mathcal{X}qmlagcsigma$ in $\N/S^1$ by
\begin{equation}\label{eq:approx-rel-grading}
\operatorname{gr}([x_\lambda], [y_\lambda]) = \operatorname{ind}(\D^\tau_{\gamma} \F^{\operatorname{gC},\tau}_{\q^\lambda}|_{\K^{\operatorname{gC},\tau}_{k,\gamma}}),
\end{equation}
for any path $\gamma$ from $x_\lambda$ to $y_\lambda$.
Lemma~\ref{lem:GrPres} immediately implies the following
\begin{corollary}
\label{cor:GrPres}
The correspondence $\Xi_{\lambda}: \mathcal{C}rit_{\N}^{\lambda} \to \mathcal{C}rit_{\N}$ from Corollary~\ref{cor:corresp} preserves relative gradings.
\end{corollary}
Recall from Corollary~\ref{cor:IsFinite} that for $\lambda \gg 0$, the stationary points of $\mathcal{X}qmlagcsigma$ contained in $\N/S^1$ are contained in $(W^\lambda)^\sigma$, so we are able to treat these stationary points as living in finite dimensions. We would like to relate the above grading computations to the Morse indices in finite dimensions. We remind the reader that although $\mathcal{X}qmlgc$ is a Morse equivariant quasi-gradient, $\mathcal{X}qmlagcsigma$ is not a Morse quasi-gradient itself in any obvious way. However, as discussed in Section~\ref{subsec:CircleMorse}, we can still do the analogous constructions in Morse homology.
\begin{proposition}
\label{prop:RelationFinite}
Let $[x_\lambda], [y_\lambda] \in \N/S^1$ be stationary points of $\mathcal{X}qmlagcsigma$. Then, $\operatorname{gr}([x_\lambda],[y_\lambda])$, where this grading is computed in infinite dimensions, is equal to the difference in gradings of $[x_\lambda]$ and $[y_\lambda]$ thought of as stationary points of $\mathcal{X}qmlagcsigma$ restricted to $(B(2R) \cap W^\lambda)^\sigma/S^1$.
\end{proposition}
\begin{proof}
We begin by studying the relevant operators. Let
$$\gamma: {\mathbb{R}} \to (B(2R) \cap W^\lambda)^{\sigma}, \ \gamma(t)=(a(t), s(t), \phi(t))$$ be a path between stationary points $x_\lambda, y_\lambda \in (B(2R) \cap W^\lambda)^\sigma$. Let also $Z = \rr \times Y$.
On the one hand, we have the Fredholm operator
\begin{equation}
\label{eq:Qgammagc}
Q_{\gamma}^{\operatorname{gC}} = \D^\tau_{\gamma} \F^{\operatorname{gC},\tau}_{\q^\lambda} \oplus \mathbf{d}^{\operatorname{gC}, \tau, \tilde{\dagger}}_{\gamma} : \T^{\operatorname{gC}, \tau}_{j, \gamma}(x_\lambda,y_\lambda) \to \V^{\operatorname{gC},\tau}_{j-1,\gamma}(Z) \oplus L^2_{j-1}(\R; i\R).
\end{equation}
This is similar to the operator $Q_{\gamma}^{\operatorname{gC}} $ from \eqref{eq:Qggc}, but uses the perturbation $\q^\lambda$ instead of $\q$. (For simplicity, we do not include the perturbation in the notation.)
We now restrict $Q_{\gamma}^{\operatorname{gC}} $ to
\begin{equation}
\label{eq:Tglambda}
\T^{\operatorname{gC}, \lambda, \tau}_{j, \gamma} = \{ (b, r, \psi) \in \T^{\operatorname{gC},\tau}_{j,\gamma}(x_\lambda, y_\lambda) \mid (b(t), \psi(t)) \in W^\lambda \text{ for all } t \}.
\end{equation}
After defining $\V^{\operatorname{gC},\lambda, \tau}_{j-1,\gamma}(Z)$ similarly, we get an operator
\begin{equation}
\label{eq:Qgg}
Q_{\gamma}^{\operatorname{gC}, \lambda}= \D^\tau_{\gamma} \F^{\operatorname{gC},\tau}_{\q^\lambda} \oplus \mathbf{d}^{\operatorname{gC}, \tau, \tilde{\dagger}}_{\gamma} : \T^{\operatorname{gC}, \lambda, \tau}_{j, \gamma}(x_\lambda, y_\lambda) \to \V^{\operatorname{gC},\lambda, \tau}_{j-1,\gamma}(Z) \oplus L^2_{j-1}(\R; i\R).
\end{equation}
Recall that the index of $Q_{\gamma}^{\operatorname{gC}}$ computes the relative gradings between stationary points of $\mathcal{X}qmlgcsigma$ in infinite dimensions. We will show below in Lemma~\ref{lem:infinite-finite-same-index} that $Q_{\gamma}^{\operatorname{gC}}$ and $Q_{\gamma}^{\operatorname{gC}, \lambda}$ have the same index. Therefore, it remains to show that the index of $Q_{\gamma}^{\operatorname{gC}, \lambda}$ gives the relative gradings in the Morse complex on $(B(2R) \cap W^\lambda)^\sigma/S^1$ in finite dimensions.
Note that the domain of $Q_{\gamma}^{\operatorname{gC}, \lambda}$ consists of paths $(V, \beta): \rr \to T(W^\lambda)^\sigma \times i\R$, where $\beta$ comes from the $dt$ component of the connection. Similarly, the codomain also consists of paths $(V, \beta)$ of this form. We can also view $T_{(a,s,\phi)} (W^\lambda)^\sigma$ as the subspace of $W^\lambda \times \mathbb{R}$ consisting of three-dimensional configurations $((b,\psi),r)$ such that $\operatorname{Re} \langle \phi, \psi \rangle_{L^2(Y)} = 0$.
The first observation is that the $L^2_j$ norm of $V$, as a four-dimensional configuration over $Z$, is equivalent to its $L^2_j$ norm as a map from $\R$ to the finite-dimensional space $W^\lambda \times \mathbb{R}$. Indeed, in principle the former norm takes into account more derivatives, corresponding to the directions along $Y$, whereas the latter uses only derivatives in the $t$ direction. However, since we are in $W^\lambda$,
the $j$th derivatives of $V(t)$ in the $Y$ directions are bounded (in $L^2$) by a constant (of the order of $\lambda^j$) times the $L^2(Y)$ norm of $V(t)$ itself. This shows that the two Sobolev norms are equivalent. From now on we will use the latter norm for $V$, i.e., view $V$ as a map from $\R$ to $W^\lambda \times \mathbb{R}$.
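To illustrate the kind of estimate being used here (a sketch, under the assumption, as in this setting, that $l$ is a first-order self-adjoint elliptic operator and that $W^\lambda$ is spanned by eigenvectors of $l$ with eigenvalues in $(-\lambda, \lambda)$): if $l e = \kappa e$ with $|\kappa| < \lambda$, then
$$
\| l^j e \|_{L^2(Y)} = |\kappa|^j \, \| e \|_{L^2(Y)} \leq \lambda^j \, \| e \|_{L^2(Y)},
$$
and expanding the $W^\lambda$-component of $V(t)$ in such an eigenbasis, the elliptic estimate comparing the $L^2_j(Y)$ norm with $\sum_{n \leq j} \| l^n \cdot \|_{L^2(Y)}$ bounds the $Y$-derivatives of $V(t)$ of order up to $j$ by a constant of order $\lambda^j$ times $\| V(t) \|_{L^2(Y)}$.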
By Proposition~\ref{prop:ND}, the stationary points of $\mathcal{X}qmlagcsigma$ inside $\N/S^1$ are hyperbolic (where $\mathcal{X}qmlagcsigma$ is considered as a vector field in either infinite or finite dimensions). Thus, the relative Morse index between these stationary points is well-defined.
We aim to compare $Q_{\gamma}^{\operatorname{gC}, \lambda}$ with the operator
\begin{equation}
\label{eq:HessFinite}
\Pi^{\operatorname{agC},\sigma} \circ ( \frac{D^\sigma}{dt} + \D^\sigma \mathcal{X}qmlgcsigma) : T_{j, \gamma} \P(x_\lambda, y_\lambda) \to T_{j-1, \gamma} \P(x_\lambda, y_\lambda).
\end{equation}
Here, $T_{j, \gamma} \P(x_\lambda, y_\lambda)$ is the subspace of $L^2_j(\R, T(W^\lambda)^\sigma)$ consisting of the paths $V(t)=(b(t), r(t), \psi(t))$ such that $\langle \psi(t), i\phi(t) \rangle_{\tilde{g}} = 0$ for all $t$, as in \eqref{eq:Kagcsigma}.
Note that the operator \eqref{eq:HessFinite} can be used to define the Morse index in $(B(2R) \cap W^\lambda)^{\sigma}/S^1$. Indeed, it is an operator of the form~\eqref{eq:lgamma}, which uses the connection $\Pi^{\operatorname{agC},\sigma} \circ \D^\sigma$ on the tangent bundle $T((W^\lambda)^{\sigma}/S^1) \cong \K^{\operatorname{agC}, \sigma}.$
Let us write
\begin{equation}
\label{eq:decomposeTlambda}
\T^{\operatorname{gC}, \lambda, \tau}_{j, \gamma} = T_{j, \gamma} \P(x_\lambda, y_\lambda) \oplus L^2_{j}(\R; i\R) \oplus L^2_{j}(\R; i\R)
\end{equation}
where the first $L^2_{j}(\R; i\R)$ summand corresponds to the tangent to the $S^1$-orbit (that is, real multiples of $\phi(t)$) and the last summand corresponds to $\beta(t)$. The codomain of $Q_{\gamma}^{\operatorname{gC}, \lambda}$ can also be identified with \eqref{eq:decomposeTlambda}, except we use the Sobolev index $j-1$.
With respect to the decomposition $\T^{\operatorname{gC},\lambda,\tau}_{j,\gamma} = \V^{\operatorname{gC},\lambda,\tau}_{j,\gamma} \oplus L^2_j(\R;i\R)$, we have
\begin{equation}
\renewcommand*{\arraystretch}{1.3}
\label{eq:QQ}
Q_{\gamma}^{\operatorname{gC}, \lambda}= \frac{D^\sigma}{dt} + \begin{pmatrix}
\D^\sigma \mathcal{X}qmlgcsigma & \mathbf{d}^{\operatorname{gC}, \sigma}_{\gamma} \\
\mathbf{d}^{\operatorname{gC}, \sigma,{\tilde \dagger}}_{\gamma} & 0
\end{pmatrix},
\end{equation}
where the operators $\mathbf{d}^{\operatorname{gC}, \sigma}_{\gamma}$ and $\mathbf{d}^{\operatorname{gC}, \sigma,\tilde{ \dagger}}_{\gamma}$ are the ones defined in \eqref{eq:dde} and \eqref{eq:ddedagger}. We can further write the entry $\D^\sigma \mathcal{X}qmlgcsigma$ with respect to the decomposition of $T_j(W^\lambda)^{\sigma}$ into the anticircular Coulomb slice and the tangent to the $S^1$-direction, as
\begin{equation}
\label{eq:Kis}
\begin{pmatrix}
\Pi^{\operatorname{agC},\sigma} \circ \D^\sigma \mathcal{X}qmlgcsigma & K_2 \\
K_1 & K_3
\end{pmatrix},
\end{equation}
where the $K_i$ terms are compact and $K_2 = K_3 = 0$ at the endpoints $x_\lambda, y_\lambda$. (Because we use $L^2$ projections to $W^\lambda$ in $\mathcal{X}qmlgc$ instead of $\tilde{g}$-projections, $\mathcal{X}qmlgc$ is not a gradient vector field. Therefore, $\mathcal{D}\mathcal{X}qmlgc$ is not symmetric, and we cannot conclude that the term $K_1$ vanishes even at stationary points.) However, the tangent to the $S^1$-direction depends on time, and hence so does this decomposition of $T_j(W^\lambda)^{\sigma}$. Therefore, adding $\frac{D^\sigma}{dt}$ to $\D^\sigma \mathcal{X}qmlgcsigma$ does not simply result in adding $\frac{D^\sigma}{dt}$ terms on the diagonal of \eqref{eq:Kis}.
Let us identify the tangent to the $S^1$-direction with $\rr$, such that the generator $(0, 0, i\phi(t))$ corresponds to $1$. We would like to understand the difference
\begin{equation}
\renewcommand*{\arraystretch}{1.3}
\label{eq:DiffDt}
\frac{D^\sigma}{dt} - \begin{pmatrix} \Pi^{\operatorname{agC}, \sigma} \circ \tfrac{D^\sigma}{dt} & 0 \\ 0 & \tfrac{d}{dt} \end{pmatrix}.
\end{equation}
If we write an element $V \in \V^{\operatorname{gC},\lambda,\tau}_{j,\gamma}$ as a path
$$ V(t)=V^{\operatorname{agC}}(t) + f(t) \cdot (0, 0, i\phi(t)),$$
then the expression \eqref{eq:DiffDt} applied to $V$ becomes
\begin{equation}
\label{eq:DiffDt2}
\Pi^{\operatorname{gC}, \circ} \left(\frac{D^\sigma}{dt} V^{\operatorname{agC}}(t)\right) + f(t) \cdot \bigl(0, 0, \Pi^\perp_{\phi(t)} i \frac{d\phi(t)}{dt} \bigr),
\end{equation}
where $\Pi^{\operatorname{gC}, \circ}$ is the projection to the span of $(0, 0, i\phi(t))$, with kernel $\K^{\operatorname{agC}, \sigma}$. In other words, if $\psi^{\operatorname{agC}}(t)$ is the spinorial component of $V^{\operatorname{agC}}(t)$, then
$$ \Pi^{\operatorname{gC}, \circ} \left(\frac{D^\sigma}{dt} V^{\operatorname{agC}}(t) \right) = \frac{\Big \langle \frac{d}{dt} \psi^{\operatorname{agC}}(t), i \phi(t) \Big \rangle_{\tilde g} } {\|i \phi(t) \|^2_{\tilde{g}} } \cdot (0, 0, i\phi(t)).$$
By the Leibniz rule, we have
\begin{align*}
0 &= \frac{d}{dt} \langle \psi^{\operatorname{agC}}(t), i\phi(t) \rangle_{\tilde g} \\
&= \frac{d}{dt} \operatorname{Re} \langle (0,\psi^{\operatorname{agC}}(t)), \Pi^{\operatorname{elC}} (0,i\phi(t)) \rangle_{L^2(Y)} \\
&= \operatorname{Re} \langle (0,\frac{d}{dt}\psi^{\operatorname{agC}}(t)), \Pi^{\operatorname{elC}} (0,i\phi(t)) \rangle_{L^2(Y)} + \operatorname{Re} \langle (0, \psi^{\operatorname{agC}}(t)), \frac{d}{dt}\Pi^{\operatorname{elC}} (0,i\phi(t)) \rangle_{L^2(Y)} \\
&= \langle \frac{d}{dt}\psi^{\operatorname{agC}}(t), i\phi(t) \rangle_{\tilde g} + \operatorname{Re} \langle (0,\psi^{\operatorname{agC}}(t)), \frac{d}{dt}\Pi^{\operatorname{elC}} (0,i\phi(t)) \rangle_{L^2(Y)}.
\end{align*}
Therefore, we can write \eqref{eq:DiffDt2} as
$$ - \frac{\operatorname{Re} \langle (0,\psi^{\operatorname{agC}}(t)), \frac{d}{dt}\Pi^{\operatorname{elC}} (0,i\phi(t)) \rangle_{L^2(Y)}}{ \| i\phi(t) \|_{\tilde{g}}^2} \cdot (0,0,i\phi(t))+ f(t) \cdot \bigl(0, 0, \Pi^\perp_{\phi(t)} i \frac{d\phi(t)}{dt} \bigr).$$
Both of these terms involve the time derivatives of $\phi(t)$, and preserve Sobolev regularity $L^2_j \to L^2_j$ as long as $j < k$. We deduce that the difference \eqref{eq:DiffDt} is compact as a map from $L^2_j$ to $L^2_{j-1}$ for $j < k$.
We would like to compare $Q^{\operatorname{gC},\lambda}_\gamma$ to the operator
\begin{equation}
\renewcommand*{\arraystretch}{1.3}
\label{eq:QQ2}
Q_{\gamma}^{\operatorname{gC}, \operatorname{sp}, \lambda}= \begin{pmatrix}
\Pi^{\operatorname{agC},\sigma} \circ ( \tfrac{D^\sigma}{dt} + \D^\sigma \mathcal{X}qmlgcsigma) & 0 & 0 \\
0 & \tfrac{d}{dt} & \mathbf{d}^{\operatorname{gC}, \sigma}_{\gamma} \\
0 & \mathbf{d}^{\operatorname{gC}, \sigma,{\tilde \dagger}}_{\gamma} & \tfrac{d}{dt}
\end{pmatrix},
\end{equation}
written with respect to the decomposition \eqref{eq:decomposeTlambda}. We will study the linear interpolations $\alpha Q^{\operatorname{gC},\lambda}_\gamma + (1-\alpha) Q^{\operatorname{gC},\operatorname{sp}, \lambda}_\gamma$. With respect to the same decomposition, this linear interpolation can be computed at the endpoints to have the form
\begin{equation}
\label{eq:QQGC2}
\renewcommand*{\arraystretch}{1.3}
\begin{pmatrix}
\Pi^{\operatorname{agC},\sigma} \circ \D^\sigma \mathcal{X}qmlgcsigma & 0 & 0 \\
\alpha K_1 & 0 & \mathbf{d}^{\operatorname{gC}, \sigma}_{\gamma} \\
0 & \mathbf{d}^{\operatorname{gC}, \sigma,{\tilde \dagger}}_{\gamma} & 0
\end{pmatrix},
\end{equation}
where $K_1$ is as in \eqref{eq:Kis}. First, notice that $\Pi^{\operatorname{agC},\sigma} \circ \D^\sigma \mathcal{X}qmlgcsigma$ as a map from $\K^{\operatorname{agC},\sigma}_j$ to $\K^{\operatorname{agC},\sigma}_{j-1}$ is hyperbolic at the endpoints, because the endpoints are hyperbolic stationary points. (It is easy to see that the hyperbolicity for stationary points extends to $j < k$ as well.) We also have that $\begin{pmatrix} 0 & \mathbf{d}^{\operatorname{gC}, \sigma}_{\gamma} \\ \mathbf{d}^{\operatorname{gC}, \sigma,{\tilde \dagger}}_{\gamma} & 0 \end{pmatrix}$ is hyperbolic by the proof of Lemma~\ref{lem:eH}. It is straightforward to deduce from here that \eqref{eq:QQGC2} is hyperbolic at the endpoints, for all $\alpha$. It follows that $\alpha Q^{\operatorname{gC},\lambda}_\gamma + (1-\alpha) Q^{\operatorname{gC},\operatorname{sp}, \lambda}_\gamma$ is Fredholm as a map from $L^2_j$ to $L^2_{j-1}$ for all $\alpha$ and all $j < k$.
This implies that they are Fredholm for $j=k$ as well. Indeed, the kernel of the map from $L^2_k$ to $L^2_{k-1}$ is necessarily contained in the kernel from $L^2_{k-1}$ to $L^2_{k-2}$ and thus is finite-dimensional. A similar argument with the adjoint applies for the cokernel as well. Therefore, $Q_{\gamma}^{\operatorname{gC}, \lambda}$ and $Q_{\gamma}^{\operatorname{gC}, \operatorname{sp}, \lambda}$ have the same Fredholm index, being related by a continuous family of Fredholm operators.
For fixed $t$, both $\mathbf{d}^{\operatorname{gC}, \sigma}_{\gamma(t)}$ and $\mathbf{d}^{\operatorname{gC}, \sigma,\tilde{ \dagger}}_{\gamma(t)}$ are just given by multiplication by nonzero constants. Thus, we can write the $2 \times 2$ block at the bottom right of \eqref{eq:QQ2} as
\begin{equation}
\label{eq:dAt}
\frac{d}{dt} + A(t),
\end{equation}
where $A(t)$ is invertible and has real spectrum. Thus, $\{A(t)\}$ has zero spectral flow, and the $2 \times 2$ block has index zero. In fact, given the form of $\{A(t)\}$, the operator \eqref{eq:dAt} itself is invertible.
Hence, $Q_{\gamma}^{\operatorname{gC}, \operatorname{sp}, \lambda}$ has the same Fredholm index as the top left entry in \eqref{eq:QQ2}, which is the operator \eqref{eq:HessFinite}. This completes the proof, modulo the claim that $Q_{\gamma}^{\operatorname{gC}}$ and $Q_{\gamma}^{\operatorname{gC}, \lambda}$ have the same index, which we prove below.
\end{proof}
\begin{lemma}\label{lem:infinite-finite-same-index}
For $\lambda = \lambda^{\bullet}_i \gg 0$, the index of $Q_{\gamma}^{\operatorname{gC}} $ is equal to that of $Q_{\gamma}^{\operatorname{gC}, \lambda}$.
\end{lemma}
\begin{proof}
Since $\lambda = \lambda^{\bullet}_i$, we have that $p^\lambda$ is a projection defined with respect to the $L^2$ (not $\tilde g$) metric.
We begin with some further discussion of the relevant spaces. Define
\[
\mathcal{R} = \{ (b(t), 0, \psi(t)) \in \T^{\operatorname{gC},\tau}_{\gamma} \mid (b(t), \psi(t)) \in (W^\lambda)^\perp\}
\]
where $(W^\lambda)^\perp$ is the $L^2$-orthogonal complement of $W^\lambda$ in $W$. In other words, an element of $(W^\lambda)^\perp$ can be decomposed as a (possibly infinite) sum of eigenvectors of $l$ in $W$ with associated eigenvalues outside of the interval $(-\lambda, \lambda)$. Note that $\mathcal{R}$ does not depend on $\gamma$: since $\gamma(t) \in W^\lambda$, the condition $(b(t), \psi(t)) \in (W^\lambda)^\perp$ automatically implies that $\psi(t)$ is orthogonal to $\gamma(t)$. Because of this, we have a canonical identification of $\mathcal{R}$ with the space of smooth paths in $(W^\lambda)^\perp$, which comes from simply ignoring the middle component. It follows that $\mathcal{R} \oplus \T^{\operatorname{gC},\lambda,\tau}_{\gamma} = \T^{\operatorname{gC},\tau}_\gamma$. Note that elements of $\mathcal{R}$ have no $dt$ component. We define $\mathcal{R}_j \subset \T^{\operatorname{gC},\tau}_{j,\gamma}$ as the Sobolev completion of $\mathcal{R}$ with respect to the four-dimensional $L^2_j$-norm, so that
$$ \mathcal{R}_j \oplus \T^{\operatorname{gC},\lambda,\tau}_{j,\gamma} = \T^{\operatorname{gC},\tau}_{j,\gamma}.$$
There is an analogous decomposition
\[
\V^{\operatorname{gC},\tau}_{j-1,\gamma} \oplus L^2_{j-1}(\mathbb{R};i\mathbb{R})= \mathcal{R}_{j-1} \oplus \left(\V^{\operatorname{gC},\lambda,\tau}_{j-1,\gamma} \oplus L^2_{j-1}(\mathbb{R};i\mathbb{R})\right).
\]
With respect to these splittings, we can decompose $Q_{\gamma}^{\operatorname{gC}}$ as
\begin{equation}
\label{eq:Qblock}
\begin{pmatrix}
\Pi_{\mathcal{R}_{j-1}} \circ \D^\tau_\gamma(\F^{\operatorname{gC},\tau}_{\q^\lambda})|_{\mathcal{R}_j} & 0 \\
* & Q_{\gamma}^{\operatorname{gC},\lambda}
\end{pmatrix},
\end{equation}
where the top-right entry of this matrix is zero because $\mathcal{X}qmlgc$ has image in $W^\lambda$, and $\Pi_{\mathcal{R}_{j-1}}$ denotes the $L^2$-orthogonal projection to $\mathcal{R}_{j-1}$. Therefore, to show that $Q_{\gamma}^{\operatorname{gC},\lambda}$ has the same index as $Q_{\gamma}^{\operatorname{gC}}$, it suffices to show that $\Pi_{\mathcal{R}_{j-1}} \circ \D^\tau_\gamma(\F^{\operatorname{gC},\tau}_{\q^\lambda})|_{\mathcal{R}_j}$ is invertible. We now compute this operator explicitly.
For $(b(t),0,\psi(t)) \in\mathcal{R}_j$, it follows from \eqref{eq:tangerine} that
\begin{align*}
& (\D^\tau_\gamma \F^{\operatorname{gC},\tau}_{\q^\lambda})(b(t),0,\psi(t))
= \frac{D^\sigma}{dt}(b(t),0,\psi(t)) + \D^\sigma_{\gamma(t)} \mathcal{X}qmlgcsigma (b(t),0,\psi(t)) \\
&= (\frac{d}{dt}b(t),0,\Pi^\perp_{\phi(t)}\frac{d}{dt}\psi(t)) + \D^\sigma_{\gamma(t)}l^\sigma(b(t),0,\psi(t)) + \D^\sigma_{\gamma(t)}(p^\lambda c_\q)^\sigma(b(t),0,\psi(t)).
\end{align*}
First, note that $\frac{d}{dt}\psi(t)$ and $\phi(t)$ are $L^2$-orthogonal. Next, since $p^\lambda c_\q$ has image in $W^\lambda$, and $\mathcal{R}_{j-1}$ consists of paths of configurations orthogonal to $W^\lambda$, it is straightforward to verify that $\Pi_{\mathcal{R}_{j-1}} \circ \D^\sigma_{\gamma(t)}(p^\lambda c_\q)^\sigma(b(t),0,\psi(t)) = 0$. Therefore, we have
$$\Pi_{\mathcal{R}_{j-1}} \circ \D^\tau_\gamma(\F^{\operatorname{gC},\tau}_{\q^\lambda})|_{\mathcal{R}_j}=\frac{d}{dt} + \D^\sigma_{\gamma(t)}l^\sigma.$$
Using \eqref{eqn:lsigma-derivative} and the fact that $\langle \psi(t), D\phi(t) \rangle_{L^2(Y)} = 0$, one can compute directly that
$$(\D^\sigma_{\gamma(t)}l^\sigma)(b(t),0,\psi(t)) = (*db(t), 0, D\psi(t) - \langle \phi(t), D\phi(t) \rangle_{L^2(Y)} \psi(t)). $$
For notational convenience, let us simply ignore the middle component of $(b(t),0,\psi(t))$ in $\mathcal{R}_j$. Define $$h_t: \mathcal{R}_j \to \mathcal{R}_j, \ \ h_t(b(t), \psi(t)) := (0, - \langle \phi(t), D\phi(t) \rangle_{L^2(Y)} \psi(t)).$$ Showing that $\Pi_{\mathcal{R}_{j-1}} \circ \D^\tau_\gamma(\F^{\operatorname{gC},\tau}_{\q^\lambda})|_{\mathcal{R}_j}$ is invertible is equivalent to showing the invertibility of
\[
\frac{d}{dt} + l + h_t: \mathcal{R}_j \to \mathcal{R}_{j-1}.
\]
First, we prove that $\frac{d}{dt} + l + h_t$ is injective. Since $\gamma(t) \in (W^\lambda)^{\sigma}$ for all $t$ and $\|\phi(t)\|_{L^2(Y)}=1$ for all $t$, it follows that there exists $\varepsilon > 0$ such that $|\langle \phi(t), D\phi(t) \rangle_{L^2(Y)}| \leq \lambda - \varepsilon$, independent of $t$. Suppose that $(b(t),\psi(t))$ is a nonzero element of the kernel of $\frac{d}{dt} + l + h_t$ and write $(b(t),\psi(t)) = \sum_{|\kappa| \geq \lambda} (b_\kappa(t),\psi_\kappa(t))$, where we are summing according to the eigenspace decomposition of $l$. Note that $(b_\kappa(t), \psi_\kappa(t))$ is in the kernel of $\frac{d}{dt} + l + h_t$ for each $\kappa$. However, it is straightforward to verify, as in the proofs of Lemmas~\ref{lem:trajectoriesinvml} and~\ref{lem:blowuptrajectoriesinvml}, that since $|\kappa| \geq \lambda$, any nonzero $(b_\kappa(t), \psi_\kappa(t))$ must be unbounded either as $t \to \infty$ or as $t \to -\infty$. This contradicts $(b_\kappa(t), \psi_\kappa(t))$ being an $L^2_j$-path.
It remains to see that $\frac{d}{dt} + l + h_t$ is surjective. Note that $\frac{d}{dt} + l + h_t$ naturally extends to an operator on sections from $\rr \times Y$ to $p^*(iT^*Y \oplus \mathbb{S})$. The formal adjoint of this operator is $-\frac{d}{dt} + l + h_t$, which is injective by the same argument as above. Therefore, the extension of $\frac{d}{dt} + l + h_t$ is surjective. Since the formal adjoint preserves the condition of paths being in $(W^\lambda)^\perp$, we see that $\frac{d}{dt} + l + h_t$, as defined on $\mathcal{R}_j$, must be surjective.
\end{proof}
So far we have only discussed the relative grading between stationary points of $\mathcal{X}qmlagcsigma$ that live in $\N/S^1$. Let us end with a discussion about the reducible stationary points that are in $(B(2R) \cap W^\lambda)^\sigma/S^1$, but not necessarily in $\N/S^1$.
For the rest of the subsection we fix $\lambda = \lambda^{\bullet}_i$ sufficiently large, and a reducible stationary point $(a, 0)$ of $\mathcal{X}qmlgc$ in $B(2R)$. Consider the reducible stationary points of $\mathcal{X}qmlagcsigma$ inside $(B(2R) \cap W^\lambda)^\sigma/S^1$ that are of the form $[(a,0,\phi)]$. We write $\kappa(\phi)$ for the associated eigenvalues. By Proposition~\ref{prop:ND2}, any such $[(a,0,\phi)]$ is hyperbolic, when thought of as a stationary point on the finite-dimensional manifold $(B(2R) \cap W^\lambda)^\sigma/S^1$. Since $\mathcal{X}qmlgc$ is a Morse equivariant quasi-gradient, we can compute the relative gradings (in finite dimensions) between these points.
\begin{lemma}
\label{lem:GradingsReduciblesNotInN}
Let $[(a,0,\phi)]$ and $[(a,0,\phi')]$ be stationary points of $\mathcal{X}qmlagcsigma$ as above. Assume that $\kappa(\phi) > \kappa(\phi')$. Then, the relative grading between these points, as computed from $\mathcal{X}qmlagcsigma$ restricted to the finite-dimensional manifold $(B(2R) \cap W^\lambda)^\sigma/S^1$, is given by
\begin{equation}\label{eq:approxreduciblegradings}
\operatorname{gr}([(a,0,\phi)], [(a,0,\phi')]) = \begin{cases}
2i(\kappa(\phi), \kappa(\phi')) & \text{if $\kappa(\phi)$ and $\kappa(\phi')$ have the same sign,} \\
2i(\kappa(\phi),\kappa(\phi'))-1 &\text{otherwise.}
\end{cases}
\end{equation}
\end{lemma}
\begin{proof}
This follows from Lemma~\ref{lem:MEquiv}.
\end{proof}
\begin{lemma}
\label{lem:kappas}
Suppose $[(a,0,\phi)]$ is a stationary point of $\mathcal{X}qmlagcsigma$ that is contained in $(B(2R) \cap W^\lambda)^\sigma/S^1$ but not in $\N/S^1$.
(a) If $\kappa(\phi) > 0$, then for all stationary points of $\mathcal{X}qmlagcsigma$ of the form $[(a,0,\phi')]$ that are contained in $\N/S^1$, we have $\operatorname{gr}([(a,0,\phi)], [(a,0,\phi')])\geq 2.$
(b) If $\kappa(\phi) < 0$, then for all stationary points of $\mathcal{X}qmlagcsigma$ of the form $[(a,0,\phi')]$ that are contained in $\N/S^1$, we have $\operatorname{gr}([(a,0,\phi)], [(a,0,\phi')])\leq -2.$
\end{lemma}
\begin{proof}
Consider a pair of reducible stationary points $[x]$ and $[y]$ of $\mathcal{X}qagcsigma$ in $\N/S^1$ with the same connection component. It follows from Corollary~\ref{cor:GrPres}, Proposition~\ref{prop:RelationFinite}, and Lemma~\ref{lem:GradingsReduciblesNotInN}, that the relative grading between $[x]$ and $[y]$ is the same as the relative grading between $[x_\lambda]$ and $[y_\lambda]$, considered as stationary points of $\mathcal{X}qmlagcsigma$ on $(B(2R) \cap W^\lambda)^\operatorname{sp}incigma/S^1$. Further, the spinorial energies of $[x]$ and $[y]$ are necessarily close to the $\lambda$-spinorial energies of $[x_\lambda]$ and $[y_\lambda]$ respectively. Recall that $[x_\lambda]$ and $[y_\lambda]$ are necessarily contained in $\N/S^1$. Equation~\eqref{eq:GradingRed} and Lemma~\ref{lem:GradingsReduciblesNotInN} give that in each case, the relative gradings are computed in terms of the orderings by eigenvalues, which correspond to ($\lambda$-)spinorial energy. In particular, this implies that if $[(a,0,\phi)]$ is a reducible stationary point of $\mathcal{X}qmlagcsigma$ not in $\N/S^1$, then its associated eigenvalue cannot sit between those of $[x_\lambda]$ and $[y_\lambda]$ for any pair $[x_\lambda]$ and $[y_\lambda]$. The result now follows.
\end{proof}
The following is an immediate consequence of the proof of Lemma~\ref{lem:kappas}.
\begin{corollary}\label{cor:lowest-reducible-eigenvalue}
Let $[x]$ denote the reducible stationary point of $\mathcal{X}qagcsigma$ which has lowest eigenvalue among all reducible stationary points with the same connection component. Then, $[x_\lambda]$ is the reducible stationary point of $\mathcal{X}qmlagcsigma$ with the lowest positive eigenvalue among those in $(B(2R) \cap W^\lambda)^\sigma/S^1$ with the same connection component.
\end{corollary}
\section{Absolute gradings}\label{sec:absgradings}
Recall that Theorem~\ref{thm:Main} asserts an isomorphism of $\widetilde{H}^{S^1}_*(\operatorname{SWF}(Y,\mathfrak{s}))$ with $\overline{\mathit{HM}}to(Y,\mathfrak{s})$ which respects the absolute gradings. As our current strategy for the proof of this isomorphism is to identify each of these modules in a certain grading range with the Morse homology of $\mathcal{X}qmlagcsigma$ on $(B(2R) \cap W^\lambda)^\sigma/S^1$, we need to define an absolute grading on the stationary points of $\mathcal{X}qmlagcsigma$ which lines up with the gradings coming from $\operatorname{SWF}$ and from $\overline{\mathit{HM}}to$.
In Chapter~\ref{sec:quasigradient}, we showed that $\mathcal{X}qmlgc$ is a Morse equivariant quasi-gradient. From this, \eqref{eq:EquivConleyMorse} implies that the Morse complex for $\mathcal{X}qmlagcsigma$ computes the reduced $S^1$-equivariant homology of the Conley index $I^\lambda$ from Chapter~\ref{sec:spectrum} in a certain grading range.
Since $\operatorname{SWF}(Y,\mathfrak{s}) = \Sigma^{-n(Y,\mathfrak{s},g) \mathbb{C}} \Sigma^{-W^{(-\lambda, 0)}} I^\lambda$, to have a complex whose homology agrees with that of the reduced $S^1$-equivariant homology of the spectrum, we must shift the gradings accordingly. Therefore, for $[x_\lambda]$ a stationary point of $\mathcal{X}qmlagcsigma$, define
\begin{equation}\label{eq:gr-lambda}
\operatorname{gr}_{\lambda}^{\SWF}([x_\lambda]) := \operatorname{ind}\bigl([x_{\lambda}] \text{ in } (W^\lambda)^{\sigma}/S^1 \bigr) - \dim W^{(-\lambda, 0)} - 2n(Y, \mathfrak{s}, g),
\end{equation}
where $n(Y, \mathfrak{s}, g)$ is the quantity mentioned at the end of Chapter~\ref{sec:spectrum}. Therefore, the Morse complex for $\mathcal{X}qmlagcsigma$ with absolute grading given instead by $\operatorname{gr}_{\lambda}^{\SWF}$ computes $\widetilde{H}^{S^1}_*(\operatorname{SWF}(Y,\mathfrak{s}))$ in the appropriate grading range by the discussion in Section~\ref{sec:combinedMorse}.
Thus, to connect the gradings on the Floer spectrum with monopole Floer homology, we will need to relate the absolute grading $\operatorname{gr}_{\lambda}^{\SWF}$ on $\mathcal{C}rit^\lambda_\N$ with the absolute grading $\operatorname{gr}^{\Q}$ from \eqref{eq:grQx} defined on $\mathcal{C}rit_\N$. This is the subject of the following proposition.
\begin{proposition}\label{prop:absolute-gradings-agree}
For any $\lambda = \lambda^{\bullet}_i \gg 0$ and $[x] \in \mathcal{C}rit$, we have
\begin{equation}\label{eq:finite-infinite-absolute-gradings}
\operatorname{gr}^{\SWF}_{\lambda}([x_\lambda])= \operatorname{gr}^{\Q}([x]).
\end{equation}
\end{proposition}
In order to prove \eqref{eq:finite-infinite-absolute-gradings}, let us now recall the precise definition of $n(Y, \mathfrak{s}, g)$ from \cite[Equation (6)]{Spectrum}:
\begin{equation}
\label{eq:ng}
n(Y, \mathfrak{s}, g) = \operatorname{ind}_{\cc} (D^+) - \frac{c_1(\mathfrak{t})^2 - \sigma(X)}{8}.
\end{equation}
Here: $X$ is a simply connected, oriented, compact Riemannian four-manifold with boundary $Y$ (such that the metric is a product near the boundary); $\mathfrak{t}$ is a $\operatorname{Spin}^c$ structure on $X$ such that $\mathfrak{t}|_Y = \mathfrak{s}$; $D^+$ is the Dirac operator on $(X, \mathfrak{t})$ with Atiyah-Patodi-Singer boundary conditions, and associated to a connection extending $A_0$ on $Y$; and $\operatorname{ind}_{\cc}$ denotes the index of $D^+$ as a complex operator, so that the corresponding real index is $\operatorname{ind}_\rr = 2\operatorname{ind}_{\cc}$. The Atiyah-Patodi-Singer boundary conditions mean that the domain of $D^+$ consists of spinors whose restrictions to $Y$ project trivially to the negative eigenspaces of the three-dimensional Dirac operator $D$.
\begin{proof}[Proof of Proposition~\ref{prop:absolute-gradings-agree}]
Let $[x] \in \mathcal{C}rit^s$ be a reducible generator corresponding to the lowest positive eigenvalue of the operator $D_{\q, a}$, where $[x] = [(a,0,\phi)]$, and let $[y]$ be any other stationary point. Corollary~\ref{cor:GrPres} and Proposition~\ref{prop:RelationFinite} imply that
\begin{equation}\label{eq:rel-gr-lambda}
\operatorname{gr}_{\lambda}^{\SWF}([x_\lambda]) - \operatorname{gr}_{\lambda}^{\SWF}([y_\lambda]) = \operatorname{gr}([x_\infty], [y_\infty]),
\end{equation}
so it suffices to prove the relation for $[x]$.
Pick $(X, \mathfrak{t})$ as in the definition of $n(Y, \mathfrak{s}, g)$ above. Then, recall from Section~\ref{sec:modifications} that
$$ \operatorname{gr}^{\Q}([x]) = -\operatorname{gr}_z(X, [x]) + \frac{c_1(\mathfrak{t})^2 - \operatorname{sp}incigma(X)}{4} - b^+(X) -1.$$
We seek to show that
\begin{equation}
\label{eq:grind}
\operatorname{gr}_z(X, [x]) = \operatorname{ind}_{\mathbb{R}}(D^+) - b^+(X) -1 - \operatorname{ind} \bigl ( [x_{\lambda}] \text{ in } (W^\lambda)^{\sigma}/S^1 \bigr ) + \dim W^{(-\lambda, 0)}.
\end{equation}
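Granting \eqref{eq:grind} for the moment, let us record why it gives \eqref{eq:finite-infinite-absolute-gradings}: substituting \eqref{eq:grind} into the expression for $\operatorname{gr}^{\Q}([x])$ above and then using \eqref{eq:ng}, \eqref{eq:gr-lambda}, and $\operatorname{ind}_{\mathbb{R}}(D^+) = 2\operatorname{ind}_{\cc}(D^+)$, we obtain
\begin{align*}
\operatorname{gr}^{\Q}([x]) &= \operatorname{ind} \bigl ( [x_{\lambda}] \text{ in } (W^\lambda)^{\sigma}/S^1 \bigr ) - \dim W^{(-\lambda, 0)} - \operatorname{ind}_{\mathbb{R}}(D^+) + \frac{c_1(\mathfrak{t})^2 - \sigma(X)}{4} \\
&= \operatorname{ind} \bigl ( [x_{\lambda}] \text{ in } (W^\lambda)^{\sigma}/S^1 \bigr ) - \dim W^{(-\lambda, 0)} - 2n(Y, \mathfrak{s}, g) = \operatorname{gr}^{\SWF}_{\lambda}([x_\lambda]).
\end{align*}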
The quantity $\operatorname{gr}_z(X, [x])$ is the virtual dimension of the Seiberg-Witten moduli space on $X$ (with an added cylindrical end) with asymptotics given by $x$. Following the proof of Lemma 28.3.2 in \cite{KMbook}, we can compute $\operatorname{gr}_z(X, [x])$ by using a reducible configuration on $X$. It then becomes the index of an operator with two parts: one is a perturbed signature operator, and the other is a perturbed Dirac operator. The former would have index $-b^+(X)-1$ if the perturbation $\q$ were zero, but in general it differs from this by the index of a signature operator on the cylinder $[0,1]\mathfrak{t}imes Y$, with boundary data $(0,0)$ and $(\q,a)$. This index can be computed as the spectral flow of the family
$$ \begin{pmatrix}0 & -d^* \\ -d & *d + 2t\D_{(ta,0)}\q^0 \end{pmatrix} : \Omega^0(Y; i\rr) \oplus \Omega^1(Y; i\rr) \to \Omega^0(Y; i\rr) \oplus \Omega^1(Y; i\rr) ,\ \ t \in [0,1].$$
By a compact perturbation that keeps the endpoints fixed, we can change this family of operators into
\begin{equation}
\label{eq:familynew}
\begin{pmatrix}0 & -d^* \\ -d & *d + 2t\D_{(a,0)}\q^0 \end{pmatrix} ,\ \ t \in [0,1].
\end{equation}
Since $(a,0)$ is a stationary point, we have that
$$\D_{(a,0)}\q^0 = \begin{pmatrix}0 & 0 \\ 0 & \D_{(a,0)}\eta_{\q}^0 \end{pmatrix}$$
with respect to the decomposition of imaginary one-forms into $\ker d \oplus \ker d^*$. Hence, \eqref{eq:familynew} decomposes into a $3 \times 3$ block form, where one block is constantly $\left( \begin{smallmatrix}0 & -d^* \\ -d & 0 \end{smallmatrix} \right)$ and the other is
$$ *d+ 2t \D_{(a,0)}\eta_\q^0 : \ker d^* \mathfrak{t}o \ker d^*,\ \ t \in [0,1].$$
Since $\left( \begin{smallmatrix}0 & -d^* \\ -d & 0 \end{smallmatrix} \right)$ has no spectral flow, we have reduced the computation to the spectral flow of this last family. Let us denote the spectral flow by $\operatorname{SF}(\q)^0.$
The second contribution to $\operatorname{gr}_z(X, [x])$ comes from the index of a perturbed Dirac operator $D^+_{\q,a} - \lambda_0$, where $D^+_{\q,a}$ is an APS operator but with $D_{\q,a}$ on the boundary, unlike $D^+=D^+_{0,0}$. Here $\lambda_0$ is the eigenvalue corresponding to $x$, and the domain of $D^+_{\q,a} - \lambda_0$ consists of spinors whose restrictions to $Y$ project trivially to the eigenspaces of $D_{\q,a}$ for eigenvalues $< \lambda_0$. Since $\lambda_0$ is the lowest positive eigenvalue, the domain is the same as the one we considered for $D^+$ in \eqref{eq:ng}. The two operators $D^+_{\q,a}$ and $D^+_{\q,a} - \lambda_0$ differ by a constant (hence compact) term, and hence have the same index. Note that
$$ \operatorname{ind}_{\mathbb{R}}(D^+_{\q,a})- \operatorname{ind}_{\mathbb{R}}(D^+) = \operatorname{SF}(\q)^1,$$
where $\operatorname{SF}(\q)^1$ is the spectral flow of the perturbed Dirac operators on $Y$ as we move from $(0,0)$ to $(\q, a)$. Note that at a reducible stationary point $(a,0)$, we have that $D_a \psi + \D_{(a,0)} \q^1(0,\psi) = D \psi + \D_{(a,0)} (c_\q)^1(0,\psi)$, or in short, $D_{\q,a}^{\operatorname{gC}} = D_{\q,a}$. In particular, we can compute this spectral flow in Coulomb gauge.
Therefore, we have
$$ \operatorname{gr}_z(X, [x])= \operatorname{ind}_{\mathbb{R}}(D^+) - b^+(X) -1 + \operatorname{SF}(\q)^0 + \operatorname{SF}(\q)^1.$$
To obtain \eqref{eq:grind}, it remains to show that
\begin{equation}
\label{eq:grind2}
\dim W^{(-\lambda, 0)} - \operatorname{ind} \bigl([x_\lambda] \text{ in } (W^\lambda)^{\sigma}/S^1 \bigr) = \operatorname{SF}(\q)^0 + \operatorname{SF}(\q)^1.
\end{equation}
We now analyze the terms on the left-hand side of \eqref{eq:grind2}. The first term, $ \dim W^{(-\lambda, 0)}$, is the number of negative eigenvalues of $l$ (counted with multiplicity) between $-\lambda$ and $0$. The second term requires a more careful analysis. Write $[x_{\lambda}] = [(a_\lambda,0,\phi_\lambda)]$. By Corollary~\ref{cor:lowest-reducible-eigenvalue}, $[x_{\lambda}]$ has lowest positive eigenvalue among stationary points in $(B(2R) \cap W^\lambda)^\sigma/S^1$ of $\mathcal{X}qmlagcsigma$ with connection component $a_\lambda$ (and not just among those in $\N/S^1$). By Lemma~\ref{lem:MEquiv}, $\operatorname{ind} \bigl([x_{\lambda}] \text{ in } (W^\lambda)^{\sigma}/S^1 \bigr )$ is the sum of two parts. The first is the number of negative eigenvalues of the linearization of $l + p^\lambda c_{\q}$ restricted to the connection summand of $W^\lambda$, i.e. $*d + \D_{(a_\lambda,0)} (p^\lambda c_\q)^0(\cdot,0)$.
The second part is the number of negative eigenvalues of the linearization of $l+p^{\lambda} c_{\q}$, restricted to the spinorial summand of $W^\lambda$, at $(a_\lambda, 0)$, i.e. $D + \D_{(a_\lambda,0)} (p^\lambda c_{\q})^1(0, \cdot)$.
Putting it all together, we find that the left-hand side of \eqref{eq:grind2} is the spectral flow from $l$ to $l + p^\lambda A_\lambda$, where
$$
A_\lambda(b,\psi) = (\D_{(a_\lambda,0)} c_\q^0(b,0), \D_{(a_\lambda,0)}c_\q^1(0,\psi)),
$$
when considered as operators on $W^\lambda$. However, since $\lambda$ is of the form $\lambda^{\bullet}_i$, this is the same as the spectral flow from $l$ to $l + p^\lambda A_\lambda p^\lambda$, considered as operators from $W_k$ to $W_{k-1}$. Thus, to establish \eqref{eq:grind2}, it remains to show that there is no spectral flow from $l + p^\lambda A_\lambda p^\lambda $ to $l + A_\infty$, as operators from $W_k$ to $W_{k-1}$, where $A_\infty(b,\psi) = (\D_{(a,0)} c_\q^0(b,0), \D_{(a,0)}c_\q^1(0,\psi))$ is the analogous operator at the stationary point $(a,0)$. For the rest of the discussion, we will only consider operators from $W_k$ to $W_{k-1}$. Further, all these operators will have index zero (being compact perturbations of $l$), and hence for them injectivity or surjectivity is equivalent to invertibility. Due to the block form of these operators, this fact remains true if we restrict the operators to either their connection or spinorial components.
Since $[x_\infty]$ is a non-degenerate stationary point of $\mathcal{X}qagcsigma$, we have that $l + A_\infty$ is injective. (This follows from Proposition~\ref{prop:nondegeneracycharacterized} and that this operator is index 0.) By Proposition~\ref{prop:ND2}, we have that $[x_\lambda]$ is a non-degenerate reducible stationary point of $\mathcal{X}qmlagcsigma$, and from this it follows that $l + p^\lambda A_\lambda p^\lambda$ is injective. Since $A_\infty$ and $p^\lambda A_\lambda p^\lambda$ are $L^2$ self-adjoint, it suffices to show that for $\lambda \gg 0$ and $t \in [0,1]$, $l + h_t^{\lambda}$ is injective, where $h^\lambda_t$ is the compact operator
\[
h^\lambda_t = t p^\lambda A_\lambda p^\lambda + (1-t) A_\infty.
\]
Note that for any sequence $v_n$ which converges to $v$ weakly in $W_k$, any sequence $t_n \in [0,1]$, and $\lambda_n \to \infty$, we have
\[
h^{\lambda_n}_{t_n}(v_n) \to A_\infty(v) \text{ in } W_{k-1}.
\]
Now suppose that $l + h_{t_n}^{\lambda_n}$ is not injective for some sequences $t_n \in [0,1]$ and $\lambda_n \to \infty$. Let $v_n \in W_k$ with $\| v_n \|_{L^2_k} = 1$ be such that $(l + h^{\lambda_n}_{t_n})(v_n) = 0$. After passing to a subsequence, the $v_n$ converge weakly in $W_k$ to some $v$ and, as discussed, $h^{\lambda_n}_{t_n}(v_n)$ converges in $W_{k-1}$ to $A_\infty(v)$. Thus, we see that $l(v_n)$ converges in $W_{k-1}$ to $l(v)$. In particular, using the elliptic estimate $\|u\|_{L^2_k} \leq C(\|l u\|_{L^2_{k-1}} + \|u\|_{L^2_{k-1}})$ together with the compactness of the inclusion $W_k \hookrightarrow W_{k-1}$, we see that $v_n$ converges in $W_k$ to $v$, which consequently has $\| v \|_{L^2_k} = 1$, and $(l + A_\infty)(v) = 0$. This contradicts the injectivity of $l + A_\infty$. The relation \eqref{eq:grind2} follows.
\end{proof}
Having established Proposition~\ref{prop:absolute-gradings-agree}, we can rephrase Lemma~\ref{lem:kappas} as the following.
\begin{proposition}\label{prop:GradingBounds}
Any reducible stationary point of $\mathcal{X}qmlagcsigma$ in $(B(2R) \cap W^\lambda)^\sigma/S^1$ which is in the grading range $[-N,N]$ is contained in $\N/S^1$, assuming $\lambda = \lambda^{\bullet}_i$ for some $i \gg 0$.
\end{proposition}
\section{Conclusions}\label{sec:conclusions}
Recall, from the discussion at the beginning of Section~\ref{sec:stationarypointsoutsideN}, that approximate irreducible stationary points are necessarily in $\N/S^1$. Therefore, using the fact that $\Xi_{\lambda}$ is grading preserving and the fact that irreducible stationary points have vanishing ($\lambda$-)spinorial energy, it follows that there exists $N > 0$ such that all irreducible stationary points of $\mathcal{X}qmlagcsigma$ have grading in $[-N,N]$ for all $\lambda = \lambda^{\bullet}_i \gg 0$. Here, recall that we grade the stationary points of $\mathcal{X}qmlagcsigma$ using $\operatorname{gr}^{\operatorname{SWF}}_\lambda$, defined in \eqref{eq:gr-lambda}. We can now summarize the results of this section in the following.
\begin{proposition}\label{prop:stationarycorrespondence}
Let $\q$ be a very tame, admissible perturbation, and fix $N > 0$ such that all irreducible stationary points of $\mathcal{X}qmlagcsigma$ have grading in $[-N,N]$ for all $\lambda = \lambda^{\bullet}_i \gg 0$. For all $\lambda = \lambda^{\bullet}_i \gg 0$, there is a one-to-one correspondence $\Xi_{\lambda}$ between:
\begin{itemize}
\item the stationary points of $\mathcal{X}qmlagcsigma$ in $(B(2R) \cap W^\lambda)^\sigma/S^1$ with grading in $[-N, N]$, including all irreducibles, and
\item
the stationary points of $\mathcal{X}qagcsigma$ with grading in $[-N, N]$, including all irreducibles.
\end{itemize}
This correspondence preserves the grading, as well as the type of stationary point (irreducible, stable, unstable). Furthermore, all the stationary points of $\mathcal{X}qmlagcsigma$ in $(B(2R) \cap W^\lambda)^\sigma/S^1$ with grading in $[-N, N]$ are hyperbolic.
\end{proposition}
\begin{proof}
The conclusion follows from Corollary~\ref{cor:corresp}, Proposition~\ref{prop:ND}, Corollary~\ref{cor:GrPres} and Proposition~\ref{prop:GradingBounds}. \end{proof}
\chapter{The Morse-Smale condition for the approximate flow}\label{sec:MorseSmale}
Recall that in Chapter~\ref{sec:quasigradient}, we established that $\mathcal{X}qmlgc$ is a Morse equivariant quasi-gradient vector field on $W^\lambda \cap B(2R)$. In this chapter, we show that it is also Morse-Smale, in the sense of Definition~\ref{def:eMSqg}. Recall, from Lemma~\ref{lem:MSlu} and the discussion at the end of Section~\ref{sec:combinedMorse} that the Morse-Smale condition can be rephrased in terms of the surjectivity of a linear operator. In our setting, let
$$\gamma: \rr\to (W^\lambda \cap B(2R))^{\sigma} \subset (W^\lambda)^{\sigma},$$
be a trajectory of $\mathcal{X}qmlgcsigma$, going between two stationary points $x$ and $y$. Regularity of the moduli space at $[\gamma]$ is equivalent to the surjectivity of the operator
\begin{equation}
\label{eq:Op0}
\Pi^{\operatorname{agC},\sigma} \circ ( \frac{D^{\sigma}}{dt} + \D^\sigma \mathcal{X}qmlgcsigma) : T_{j, \gamma} \P(x, y) \to T_{j-1, \gamma} \P(x, y)
\end{equation}
which has already made an appearance in \eqref{eq:HessFinite}. Again, while it may seem like a more natural choice to work with an operator where the derivatives are given by a connection coming from the $\tilde{g}$-metric, the choice of connection does not matter at a trajectory.
Alternatively, we can view $\gamma$ as a path in the infinite dimensional space $W^{\sigma}$. The corresponding linearized operator is
\begin{equation}
\label{eq:Op2}
\D^\tau_{\gamma} \mathcal{F}_{\qml}gctau : \T_{k,\gamma}^{\operatorname{gC}, \tau}(x, y) \to \V^{\operatorname{gC}, \tau}_{k-1,\gamma}(Z).
\end{equation}
\begin{lemma}
\label{lem:equivOp}
The operator \eqref{eq:Op0} is surjective if and only if the operator \eqref{eq:Op2} is surjective.
\end{lemma}
\begin{proof}
We use the notation and the results from Lemma~\ref{lem:allsurjective} and the proof of Proposition~\ref{prop:RelationFinite}.
By the analogue of Lemma~\ref{lem:allsurjective} with $\q^\lambda$ instead of $\q$, surjectivity of \eqref{eq:Op2} is equivalent to that of the operator $Q_{\gamma}^{\operatorname{gC}}$ from \eqref{eq:Qgammagc}. In the proof of Lemma~\ref{lem:infinite-finite-same-index} we gave a block form \eqref{eq:Qblock} for $Q_{\gamma}^{\operatorname{gC}}$, and we also showed that the top left block is invertible. This implies that $Q_{\gamma}^{\operatorname{gC}}$ is surjective if and only if the operator $Q_{\gamma}^{\operatorname{gC}, \lambda}$ from \eqref{eq:Qgg} is surjective. Finally, $ Q_{\gamma}^{\operatorname{gC}, \lambda}$ can be related to \eqref{eq:Op0} through \eqref{eq:QQ2}, using the invertibility of the operator \eqref{eq:dAt}.
\end{proof}
Thus, we can work with the operators $ \D^\tau_{\gamma} \mathcal{F}_{\qml}gctau$. Note that we should only ask for their surjectivity when $\gamma$ is boundary-unobstructed. When $\gamma$ is boundary-obstructed, we require surjectivity of its summand $(\D^\tau_{\gamma} \mathcal{F}_{\qml}gctau)^{\del}$, which acts on the spaces of paths in the boundary of $(W^\lambda)^{\sigma}$. Similar arguments as above apply for the boundary-obstructed case as well.
\begin{proposition}\label{prop:MorseSmales}
We can choose the admissible perturbation $\q$ such that for any $\lambda \in \{\lambda^{\bullet}_1, \lambda^{\bullet}_2, \dots\}$ sufficiently large, the following holds. Given any flow trajectory $\gamma$ for the restriction of $\mathcal{X}qmlgcsigma$ to $(B(2R) \cap W^{\lambda})^{\sigma}$, we have that:
\begin{enumerate}[(i)]
\item If $\gamma$ is boundary-unobstructed, then $\D^\tau_{\gamma} \mathcal{F}_{\qml}gctau$ is surjective;
\item If $\gamma$ is boundary-obstructed, then $(\D^\tau_{\gamma} \mathcal{F}_{\qml}gctau)^{\del}$ is surjective.
\end{enumerate}
\end{proposition}
\begin{proof}
This is similar to the proof of transversality for moduli spaces of trajectories in \cite[Section 15]{KMbook}.
So far, we have chosen an admissible perturbation $\q_0$ so that the stationary points of both $\mathcal{X}qogcsigma$ and $\mathcal{X}qomlgcsigma$ inside $B(2R)^{\operatorname{sp}incigma}$ are non-degenerate; cf. Proposition~\ref{prop:ND2}. Consider the blow-down projections of the stationary points of $\mathcal{X}qogcsigma$; these come in a finite number of $S^1$-orbits. Pick disjoint open neighborhoods of those orbits, and let $U$ be the union of these neighborhoods. By Lemma~\ref{lem:implicitfunctionreducible}, for $\lambda$ large, the blow-down projections of the stationary points of $\mathcal{X}qomlgcsigma$ from $B(2R)^{\operatorname{sp}incigma}$ land inside $U$. Consider the set of perturbations
$$ \P_U = \{ \mathfrak{q}\in \mathcal{P}\mid \q|_{U} = \q_0|_{U} \}.$$
We can find an open neighborhood $\nu(\q_0)$ of $\q_0$ in $\P_U$ such that for all $\q$ in this neighborhood, we have the same set of stationary points for $\mathcal{X}qgcsigma$ and $\mathcal{X}qmlgcsigma$ as for $\q_0$, and therefore we still have nondegeneracy for them.
We now claim that for a residual set of perturbations $\q$ in $\nu(\q_0) \subset \P_U$, the desired surjectivity conditions hold. Since we work with a countable set of $\lambda$, it suffices to prove this for some fixed $\lambda$, sufficiently large. We define a parametrized map
$$ \mathfrak{M}: \mathfrak{t}C_k^{\operatorname{gC}, \tau}(x, y) \times \nu(\q_0) \to \V^{\operatorname{gC}, \tau}_{k-1}(Z), \ \ \ \mathfrak{M}(\gamma, \q) = \mathcal{F}_{\qml}gctau(\gamma).$$
When $\gamma$ is reducible (i.e., contained in the reducible locus), there is a similar map $(\mathfrak{M})^{\del}$ acting on the space of paths in the boundary.
To prove our claim, it is enough to check that the derivative of $\mathfrak{M}$ is surjective at all points $(\gamma, \q)$ in $\mathfrak{M}^{-1}(0)$ when $\gamma$ is irreducible; and that the derivative of $(\mathfrak{M})^{\del}$ is surjective at reducibles. The proof of these facts is entirely similar to the proof of Proposition 15.1.3 in \cite{KMbook}.
\end{proof}
Let us now put the results of Chapters~\ref{sec:criticalpoints}, ~\ref{sec:quasigradient}, ~\ref{sec:gradings}, and the current one in context. Recall that it is our goal to establish an isomorphism between $\overline{\mathit{HM}}to(Y,\mathfrak{s})$ and the (singular) equivariant homology $\widetilde{H}^{S^1}_*(\operatorname{SWF}(Y,\mathfrak{s}))$. To do this, for each $N \gg 0$, we will establish an isomorphism between the truncations $\overline{\mathit{HM}}to_{\leq N}(Y,\mathfrak{s})$ and $\widetilde{H}^{S^1}_{\leq N}(\operatorname{SWF}(Y,\mathfrak{s}))$ by passing through an intermediate group, the $S^1$-equivariant Morse homology of $\mathcal{X}qmlagcsigma$ on $(B(2R) \cap W^\lambda)^\operatorname{sp}incigma/S^1$ as defined in Section~\ref{sec:finite}. This group can be defined by Propositions~\ref{prop:AllMS} and ~\ref{prop:MorseSmales} for $\lambda = \lambda^{\bullet}_i$ with $i \gg 0$. We have established in Section~\ref{sec:finite} that the homology of this Morse complex in an appropriate grading range will be isomorphic to the (singular) Borel homology $\widetilde{H}^{S^1}_{\leq N}(\operatorname{SWF}(Y,\mathfrak{s}))$. Thus, it remains to identify the $S^1$-equivariant Morse homology of $\mathcal{X}qmlagcsigma$ with monopole Floer homology in the corresponding grading range.
At this point, for $\lambda = \lambda^{\bullet}_i$ with $i \gg 0$, we have established in Proposition~\ref{prop:stationarycorrespondence} a correspondence between the stationary points of $\mathcal{X}qagcsigma$ and $\mathcal{X}qmlagcsigma$ with grading in the interval $[-N,N]$. Thus, using the results of Chapter~\ref{sec:coulombgauge}, we have an isomorphism on the level of graded chain groups (but not yet on homology) between $\widecheck{\mathit{CM}}_{\leq N}(Y,\mathfrak{s},\q)$ and the $S^1$-equivariant Morse complex for $\mathcal{X}qmlagcsigma$.
This leaves us with one major step, which is to construct a chain complex isomorphism between the $S^1$-equivariant Morse complex and $\widecheck{\mathit{CM}}_{\leq N}(Y,\mathfrak{s},\q)$ by relating the trajectories of $\mathcal{X}qagcsigma$ to those in finite dimensions; this will be analogous to the correspondence on the level of stationary points we have established in this section. This is the focus of Chapter~\ref{sec:trajectories1} and Chapter~\ref{sec:trajectories2}. Before doing so, in the next section we do some technical work which will allow us to relate paths between stationary points of $\mathcal{X}qmlgcsigma$ to paths between stationary points of $\mathcal{X}qgcsigma$.
\chapter{Self-diffeomorphisms of configuration spaces}\label{sec:appendix}
In Section~\ref{sec:StabilityPoints}, we established a correspondence $\Xi_\lambda : \mathcal{C}rit^\lambda_{\N} \to \mathcal{C}rit_{\N}$ between stationary points of $\mathcal{X}qmlagcsigma$ and $\mathcal{X}qagcsigma$. Our goal is to be able to do this for trajectories as well. In this case, the trajectories live in different path spaces: trajectories of $\mathcal{X}qagcsigma$ live in $\B_k^{\operatorname{gC},\tau}([x_\infty],[y_\infty])$ while we will see in Chapter~\ref{sec:trajectories1} that trajectories of $\mathcal{X}qmlagcsigma$ are in $\B_k^{\operatorname{gC},\tau}([x_\lambda],[y_\lambda])$. Therefore, we need a way to relate these different spaces. In this section, we will extend the correspondence from Corollary~\ref{cor:corresp} first to a family of $S^1$-equivariant self-diffeomorphisms of $W^\sigma$, and then to self-diffeomorphisms of $W^\tau(I \times Y)$ for $I \subset \mathbb{R}$ and to other path spaces. This will be needed in Proposition~\ref{prop:L2k} and Proposition~\ref{prop:stabilitynearby}. The construction of these maps will use a setup similar to that of the function $T_\lambda$ defined in Section~\ref{sec:Flambda}.
Before stating the first result, we need some preliminaries. In this section, in order to make statements about the smoothness of functions with respect to $\lambda$ as a parameter, we do {\em not} restrict to the case that $\lambda = \lambda^{\bullet}_i$. We will not use the results of Chapter~\ref{sec:quasigradient} or Chapter~\ref{sec:gradings}, so this will not be a problem. For each stationary point $x_\infty$ of $\mathcal{X}qgcsigma$ in $\N$ let $x_\lambda$ denote the stationary point of $\mathcal{X}qmlgcsigma$ which is $L^2$ closest to $x_\infty$. (For an explanation of the well-definedness of $x_\lambda$ see Section~\ref{sec:Flambda}.) In what follows, we will abuse notation and say that a function $G_\lambda$, which depends on $\lambda$, is {\em smooth in $\lambda$ at and near infinity} if there exists $\varepsilon > 0$ such that $G_{f^{-1}(r)}$ depends smoothly on $r \in [0, \varepsilon)$, where $f: (0, \infty] \to [0,1)$ is the homeomorphism from Section~\ref{sec:StabilityPoints}.
\begin{lemma}\label{lem:Xixylambda}
For $\lambda \gg 0$, there exists an $S^1$-equivariant diffeomorphism $\Xi_{\lambda} : W^\sigma_{0} \to W^\sigma_{0}$ satisfying:
\begin{enumerate}[(i)]
\item\label{xi:xlambda} $\Xi_{\lambda}$ sends $x_\lambda$ to $x_\infty$ for each stationary point $x_\infty \in \N$,
\item\label{xi:sobolev} $\Xi_{\lambda}$ restricts to a self-diffeomorphism of $W^\sigma_j$ for any $1 \leq j \leq k$,
\item\label{xi:smooth-f} Let $\Xi_{\infty}$ be the identity. Then, for $0 \leq j \leq k$, $\Xi_{\lambda}: W^\sigma_j \to W^\sigma_j$ and all its derivatives are smooth in $\lambda$ at and near infinity.
\end{enumerate}
Further, $\Xi_\lambda$ extends to the double $\mathfrak{t}W^\sigma_0$ and the analogous properties hold.
\end{lemma}
Note that since $W^\sigma_j$ is not an affine space, we do not have a natural notion of higher derivatives. However, we can think of $W^\sigma_j$ as naturally being embedded as a submanifold of the linear space $\widehat{W}_j$, where we remove the conditions $s \geq 0$ and $\| \phi \|_{L^2} = 1$. (The same is true for the double $\mathfrak{t}W^\sigma_j$.) Of course, $\widehat{W}_j \cong W_j \times \mathbb{R}$. We will then treat $\Xi_\lambda$ as a map from $W^\sigma_j$ to $\widehat{W}_j$. After defining $\Xi_\lambda$, it will be clear that this extends naturally to an $L^2_j$ neighborhood $\nu(W^\sigma_j)$ of $W^\sigma_j$ in $\widehat{W}_j$. We can make sense of higher derivatives of $\Xi_\lambda$ on $\nu(W^\sigma_j)$, since this is an open submanifold of a linear space, and these are the derivatives we will talk about (and measure in norm). Further, since we may consider elements of $W^\sigma_j$ as elements of the linear space $\widehat{W}_j$, it makes sense to take differences of elements there and take their $L^2_j$ norms.
We will give the proof of Lemma~\ref{lem:Xixylambda} in Sections~\ref{sec:define-xi} and \ref{sec:xi-3d-bounds} below. We will then use this to obtain a number of important technical consequences for diffeomorphisms of path spaces. Note that given a path $\gamma$ in $W^\sigma_j$, we may apply $\Xi_\lambda$ slicewise to obtain a new path $\Xi_\lambda(\gamma)$ in $W^\sigma_j$. We will study the regularity of $\Xi_\lambda$ on four-dimensional configurations. Before stating the result, we need some discussion and terminology.
In parts (ii)-(iv) of the following proposition, we will discuss the smoothness of $\Xi_\lambda : W^\tau_j(x_\lambda, y_\lambda) \to W^\tau_j(x_\infty, y_\infty)$ in $\lambda$. This is a priori not defined, as the domain space changes with $\lambda$. For expediency, we postpone to Section~\ref{xi:non-compact} (given after the statement of Proposition~\ref{xi:half-cylinder}) the discussion of what we mean by this.
\begin{proposition}\label{prop:Xixy-paths}
Let $\mathcal{X}i_\lambda$ be as above. Fix stationary points $x_\infty$ and $y_\infty$ of $\mathcal{X}qgcsigma$ in $\N$ and let $x_\lambda$ and $y_\lambda$ be the stationary points of $\mathcal{X}qmlgcsigma$ which minimize $L^2$ distance to $x_\infty$ and $y_\infty$ respectively. Then for $\lambda \gg 0$ and $ 1 \leq j \leq k$, we have the following:
\begin{enumerate}[(i)]
\item\label{xi:paths} for a compact interval $I \subseteq \mathbb{R}$, the map $\Xi_{\lambda}$ induces an $S^1$-equivariant diffeomorphism of $\mathfrak{t}W^\tau_{j}(I \times Y)$ which is smooth in $\lambda$ at and near $\infty$ and preserves $W^\tau_{j}(I \times Y)$,
\item\label{xi:paths-smooth-f} $\Xi_\lambda$ induces diffeomorphisms from $\mathfrak{t}W^\tau_j(x_\lambda,y_\lambda)$ to $\mathfrak{t}W^\tau_j(x_\infty,y_\infty)$, which vary smoothly in $\lambda$ at and near $\infty$,
\item \label{xi:Bgc} $\Xi_\lambda$ induces a diffeomorphism from $\mathfrak{t}B^{\operatorname{gC},\tau}_j([x_\lambda],[y_\lambda])$ to $\mathfrak{t}B^{\operatorname{gC},\tau}_j([x_\infty],[y_\infty])$,
\item \label{xi:Vgc} the diffeomorphisms $\Xi_\lambda: \mathfrak{t}B^{\operatorname{gC},\tau}_j([x_\lambda],[y_\lambda]) \to \mathfrak{t}B^{\operatorname{gC},\tau}_j([x_\infty],[y_\infty])$ from the above item lift to smooth (in domain and also with respect to $\lambda$ at and near $\infty$) bundle maps
$$
\xymatrix{
\V^{\operatorname{gC},\tau}_j \ar[r]^{(\Xi_\lambda)_*} \ar[d] & \V^{\operatorname{gC},\tau}_j \ar[d] \\
\mathfrak{t}B^{\operatorname{gC},\tau}_j([x_\lambda], [y_\lambda]) \ar[r]^{\Xi_\lambda} & \mathfrak{t}B^{\operatorname{gC},\tau}_j([x_\infty], [y_\infty]).
}
$$
If $[x_\infty] \neq [y_\infty]$, the analogous statement also holds for $\mathfrak{t}B^{\operatorname{gC},\tau}_j([x_\infty],[y_\infty])/\mathbb{R}$.
\end{enumerate}
\end{proposition}
As for the case of $W^\sigma_j$, there exists a similar way to discuss the derivatives of $\Xi_\lambda$ in four dimensions. We will think of $\mathfrak{t}W^\tau_j(I \times Y)$ as naturally being embedded as a submanifold of the linear space $\widehat{W}_j(I \times Y)$, where we remove the conditions $s(t) \geq 0$ and $\| \phi(t) \|_{L^2(Y)} = 1$. Following the discussion above, it turns out that $\Xi_\lambda$ will extend to a neighborhood of $\mathfrak{t}W^\tau_j(I \times Y)$ in $\widehat{W}_j(I \times Y)$. For simplicity, we will work in the larger affine space to compute derivatives and measure the distance between configurations. This will not affect any statements about smoothness or regularity.
Before giving the proofs of Lemma~\ref{lem:Xixylambda} and Proposition~\ref{prop:Xixy-paths}, let us state two immediate corollaries of the latter. The first follows from part \eqref{xi:paths}, by continuity:
\begin{corollary}
\label{cor:Xixy-paths}
For $1 \leq j \leq k$, if a sequence $\gamma_n \in W^\tau_{j,loc}(I \times Y)$ converges to some $\gamma_\infty$, then $\Xi_{\lambda_n}(\gamma_n) \to \gamma_\infty$ for any sequence $\lambda_n \to \infty$.
\end{corollary}
The second corollary follows from part \eqref{xi:paths-smooth-f} of Proposition~\ref{prop:Xixy-paths}:
\begin{corollary}
\label{cor:path-distance} For $1 \leq j \leq k$, $\gamma_0 \in W^{\tau}_j(x_{\infty}, y_{\infty})$, if a sequence $\gamma_n \in W^\tau_j(x_{\lambda_n}, y_{\lambda_n})$ with $\lambda_n \to \infty$ satisfies
$$\| \gamma_n - \Xi^{-1}_{\lambda_n}(\gamma_0) \|_{L^2_j({\mathbb{R}}\times Y)} \to 0,$$
then
$$ \| \Xi_{\lambda_n}(\gamma_n) - \gamma_0 \|_{L^2_j({\mathbb{R}}\times Y)} \to 0.$$
\end{corollary}
We now give the organization of the rest of this section. In Section~\ref{sec:define-xi}, we give the construction of $\Xi_\lambda$. In Section~\ref{sec:xi-3d-bounds}, we prove the desired properties of $\Xi_\lambda$ in Lemma~\ref{lem:Xixylambda}. In Section~\ref{sec:xi-4d-bounds}, we prove parts (i) and (ii) of Proposition~\ref{prop:Xixy-paths}. Finally, in Section~\ref{sec:xi-ext} we prove parts (iii) and (iv).
\section{The construction of $\Xi_\lambda$}\label{sec:define-xi}
\begin{lemma}\label{lem:xiconstruct}
For $\lambda \gg 0$, there exists an $S^1$-equivariant diffeomorphism $\Xi_{\lambda} : W^\sigma_{0} \to W^\sigma_{0}$ which sends $x_\lambda$ to $x_\infty$ for each stationary point $x_\infty$ of $\mathcal{X}qgcsigma$ in $\N$.
\end{lemma}
The construction will be similar to that of the diffeomorphism $T_\lambda$ used in Section~\ref{sec:Flambda} to define $F_\lambda$. Since the blow-up is not a linear space, there is no notion of translation as in the definition of $T_\lambda$, so we will work in charts (in the directions orthogonal to the $S^1$-orbits). The reader may note that here we are working with $W^\sigma_0$, while in the definitions of $T_\lambda$ and $F_\lambda$ from Chapter~\ref{sec:quasigradient}, we only worked with $W_{k-1}$. The explanation for this is that the function $F_\lambda$ incorporated $c$, which is not well-behaved as a map from $W_0$ to $W_0$ (since it contains quadratic terms), so we would not have been able to analyze $F_\lambda$ in lower Sobolev regularity.
A more minor distinction is the following. To construct $T_\lambda(x)$, we used the function $\omega_\lambda$ defined in \eqref{eq:omegalambda} to find which element of the $S^1$-orbit of an approximate stationary point $x_\lambda$ the point $x$ is closest to, and then translate $x$ by $\omega_\lambda(x)(x_\infty - x_\lambda)$, where $x_\infty$ minimizes $L^2$ distance to $x_\lambda$. To define $\mathcal{X}i_\lambda(x)$, we will compare $x$ to $x_\infty$ instead of $x_\lambda$, and then translate. The reason for this is that we will work with local charts, and it is simplest to work in charts around the same point ($x_\infty$) for every $\lambda$, as opposed to defining $\mathcal{X}i_\lambda$ in terms of different charts for each $\lambda$ and tracking the changes.
For notation, we choose representatives $x^1_\infty,\ldots, x^m_\infty$ for the $m$ orbits of stationary points of $\mathcal{X}qagcsigma$ in $\N$.
\begin{proof}[Proof of Lemma~\ref{lem:xiconstruct}]
We will first define $\Xi_\lambda$ on an $S^1$-invariant neighborhood of $x^i_\infty$ in $W^\sigma_0$ for a fixed $i$; for now, we omit the index $i$. Write $x_\infty = (a_\infty,s_\infty,\phi_\infty)$. Consider the submanifold
$$
U_{x_\infty} = \{ (a,s,\phi) \in W^\sigma_0 \mid \operatorname{Re} \langle \phi, i \phi_\infty \rangle_{L^2} = 0, \langle \phi, \phi_\infty \rangle_{L^2} > 0 \}.
$$
Note that $U_{x_\infty}$ is not open, as it only consists of configurations which are (real) orthogonal to the $S^1$-orbit of $x_{\infty}$. We will define an $S^1$-equivariant self-diffeomorphism of $S^1 \cdot U_{x_\infty}$ which takes $x_\lambda$ to $x_\infty$ and is the identity outside of a smaller $S^1$-invariant neighborhood of $x_\infty$. We will then obtain the desired diffeomorphism $\Xi_\lambda$ by repeating this construction for each orbit of stationary points of $\mathcal{X}qgcsigma$, and taking $\Xi_\lambda$ to be the identity outside of these neighborhoods.
First, we observe that there exists a diffeomorphism $G_{x_\infty}$ from $U_{x_\infty}$ to the Hilbert manifold with boundary
$$
V_{x_\infty} = \{ (a,s,\phi) \in (\ker d^*)_0 \oplus \mathbb{R} \oplus L^2(Y;\mathbb{S}) \mid s \geq 0, \ \langle \phi, \phi_\infty \rangle_{L^2} = 0 \},
$$
given by
$$
G_{x_\infty} : (a,s,\phi) \mapsto \Bigl(a, s, \frac{\phi}{\langle \phi, \phi_\infty \rangle_{L^2}} - \phi_\infty \Bigr).
$$
We remark that $V_{x_\infty}$ is a submanifold of $\T^{\operatorname{gC},\sigma}_{0, x_\infty}$. We have that $G_{x_\infty}(x_\infty) = (a_\infty,s_\infty,0)$. Denote this vector by $z_\infty$. Note also that in the definition of $G_{x_\infty}$, we have that $\operatorname{Re} \langle \phi, \phi_\infty \rangle_{L^2} = \langle \phi, \phi_\infty \rangle_{L^2}$, since $\phi$ is orthogonal to the $S^1$-orbit of $\phi_\infty$. Observe also that the inverse of $G_{x_\infty}$ is given by the formula
$$ G_{x_\infty}^{-1}: (a,s,\phi) \mapsto \Bigl(a, s, \frac{\phi + \phi_\infty}{\| \phi + \phi_\infty \|_{L^2}}\Bigr).$$
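As a quick check that these formulas are indeed mutually inverse, note that for $(a,s,\phi) \in U_{x_\infty}$ we have $\|\phi\|_{L^2} = 1$ and $\langle \phi, \phi_\infty \rangle_{L^2} > 0$, so
$$ G_{x_\infty}^{-1}\bigl(G_{x_\infty}(a,s,\phi)\bigr) = \Bigl(a, s, \frac{\phi/\langle \phi, \phi_\infty \rangle_{L^2}}{\bigl\| \phi/\langle \phi, \phi_\infty \rangle_{L^2} \bigr\|_{L^2}}\Bigr) = (a,s,\phi).$$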
Since $x_\lambda= (a_\lambda,s_\lambda,\phi_\lambda)$ and $x_{\infty}$ minimize the $L^2$ distance between their $S^1$-orbits, we deduce that $x_\infty- x_\lambda$ is necessarily orthogonal to the $S^1$-orbit of $x_\infty$. Further, for $\lambda \gg 0$, $\| x_\infty - x_\lambda \|_{L^2}$ is arbitrarily small, and thus $\langle \phi_\lambda, \phi_\infty \rangle_{L^2} > 0$, so $x_\lambda$ is contained in $U_{x_\infty}$. Let $z_\lambda$ denote the image of $x_\lambda$ under $G_{x_\infty}$. We will construct an interpolation between translation by $z_\infty - z_\lambda$ (which may not be defined near the boundary of $V_{x_\infty}$ if $x_\infty$ is irreducible) and the identity.
Pick $0 < \delta \ll \frac{1}{2}$ (independent of $\lambda$), such that an $L^2$ ball of size $\delta$ around the origin in $V_{x_\infty}$ has the following properties for $\lambda \gg 0$. First, this ball must contain $G_{x_\infty}(x_\lambda)$ and be disjoint from a $\delta$-ball around $G_{x_\infty}(x^n_\infty)$ or $G_{x_\infty}(x^n_\lambda)$, should they be defined, for any $n \neq i$ in $\N$. Further, if $x_\infty$ is irreducible, we choose $\delta$ such that the $\delta$ ball is contained in the interior of $V_{x_\infty}$. By our choice of $\delta$, if $(a,s,\phi) \in U_{x_\infty}$ satisfies $\langle \phi, \phi_\infty \rangle_{L^2} < \frac{1}{2}$, then $G_{x_\infty}(a,s,\phi)$ is not in this ball. It is in this ball where we will interpolate between the identity and the translation from $x_\lambda$ to $x_\infty$.
Let $\alpha: [0,\infty) \to [0,1]$ be a smooth bump function which is 1 on $[0,\delta/2]$ and 0 on $[\delta,\infty)$. We can now define a map $\Upsilon_{\lambda}$ from $V_{x_\infty}$ to $V_{x_\infty}$ given by
\begin{equation}\label{xi:association}
\Upsilon_{\lambda}: z \mapsto (1-\alpha(\|\psi\|_{L^2}))\, z + \alpha(\| \psi \|_{L^2}) (z + z_\infty - z_\lambda),
\end{equation}
where we write $z = (b,r,\psi)$.
By the choice of $\delta$, we have that this induces a well-defined map on $V_{x_\infty}$, since in the case that ${x_\infty}$ is reducible, the $s$-component is unchanged, and when ${x_\infty}$ is irreducible, we have chosen the $\delta$-ball such that translation does not happen near the boundary. Again, we point out that this map is translation by $z_\infty - z_\lambda$ inside of an $L^2$ ball of size $\delta/2$ in $V_{x_\infty}$ and is the identity outside of a $\delta$ ball. In particular, we see that $z_\lambda$ is taken to $z_\infty$.
By the work of Chapter~\ref{sec:criticalpoints}, we have that
\begin{equation}\label{xi:xlambdaxinfty}
x_\lambda \to x_\infty \text{ in } L^2_j \text{ for all } j \text{ as } \lambda \to \infty
\end{equation}
and thus
\begin{equation}\label{xi:vlambdavinfty}
z_\lambda \to z_\infty \text{ in } L^2_j \text{ for all } j \text{ as } \lambda \to \infty.
\end{equation}
We claim that for $\lambda \gg 0$, the map $\Upsilon_{\lambda}$ in \eqref{xi:association} induces a diffeomorphism of $V_{x_\infty}$. Indeed, since $z_\lambda \to z_\infty$, for $\lambda \gg 0$, the derivative of $\Upsilon_{\lambda}$ is close to the identity, and thus is an isomorphism at each point. The map is thus a local diffeomorphism, which is the identity outside of a ball. This is necessarily a self-covering map of a simply-connected space, and so we deduce that this is a global diffeomorphism.
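To quantify the claim about the derivative, write $z = (b,r,\psi)$ and let $w = (b_w, r_w, \psi_w)$ be a tangent vector. Wherever $\alpha$ is not locally constant (in particular $\psi \neq 0$ there), differentiating \eqref{xi:association} gives
$$(\D_z \Upsilon_{\lambda})(w) = w + \alpha'(\|\psi\|_{L^2})\, \frac{\operatorname{Re}\langle \psi_w, \psi \rangle_{L^2}}{\|\psi\|_{L^2}}\, (z_\infty - z_\lambda),$$
while $\D_z \Upsilon_{\lambda} = \operatorname{id}$ where $\alpha$ is locally constant. Hence $\| \D_z \Upsilon_{\lambda} - \operatorname{id}\| \leq \|\alpha'\|_{C^0}\, \|z_\infty - z_\lambda\|_{L^2}$, which tends to $0$ as $\lambda \to \infty$ by \eqref{xi:vlambdavinfty}.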
We can now use $G_{x_\infty}$ to define the self-diffeomorphism $\Xi_\lambda$ on $U_{x_\infty}$. Since $G_{x_\infty}(x_\lambda) = z_\lambda$ and $G_{x_\infty}(x_\infty) = z_\infty$, we see that
$$\Xi_{\lambda} := G_{x_\infty}^{-1} \circ \Upsilon_{\lambda} \circ G_{x_\infty} $$
takes $x_\lambda$ to $x_\infty$.
In particular, $\Xi_{\lambda}^{-1}$ takes $x_\infty$ back to $x_\lambda$. We then extend $\Xi_\lambda$ to a diffeomorphism on the $S^1$ orbit of $U_{x_\infty}$ as follows. Define a function $\omega_\infty: S^1 \cdot U_{x_\infty} \to S^1$, similar to $\omega^j_\lambda$ in \eqref{eq:omegalambda}, by
\begin{equation}
\label{eq:omegainfinity}
\omega_\infty(x)= \frac{\operatorname{Re} \langle \phi, \phi_{\infty} \rangle_{L^2} + i\operatorname{Re} \langle \phi, i\phi_{\infty} \rangle_{L^2} }{ \bigl( (\operatorname{Re} \langle \phi, \phi_\infty \rangle_{L^2})^2 + (\operatorname{Re} \langle \phi, i\phi_\infty \rangle_{L^2})^2 \bigr)^{1/2}}.
\end{equation}
Note that $\omega_\infty$ has the property that if $x \in S^1 \cdot U_{x_\infty}$, then $\overline{\omega_{\infty}(x)} \cdot x \in U_{x_\infty}$. Therefore, we extend $\Xi_\lambda$ to the $S^1$ orbit of $U_{x_\infty}$ by conjugating by $\omega_{\infty}$:
$$
x \mapsto {\omega}_{\infty}(x) \cdot \Xi_\lambda(\overline{\omega_{\infty}(x)} \cdot x).
$$
By construction this extension is $S^1$-equivariant. By repeating the above construction for each stationary point $x^i_\infty$, we extend $\Xi_\lambda$ to neighborhoods of every stationary point of $\N$. Note that $\Xi_\lambda$ is the identity near the boundary of each such neighborhood. We finally extend $\Xi_\lambda$ to a diffeomorphism of all of $W^\sigma_0$ by the identity outside of these neighborhoods.
It is now clear from the construction that $\Xi_\lambda$ is an $S^1$-equivariant diffeomorphism of $W^\sigma_0$.
\end{proof}
For future reference, we describe an explicit formula for $\Xi_\lambda$. Let us first introduce some notation to help express $\Xi_\lambda$ more compactly. For each $i = 1,\ldots, m$, define a function $\beta_i : W^\sigma_0 \to [0,1]$ by
$$
\beta_i(x) = \begin{cases}
0 & \text{ if } x=(a,s,\phi) \notin S^1 \cdot U_{x^i_\infty} \\
\alpha(\| \frac{\phi}{\langle \phi, \omega^i_\infty(x) \phi^i_\infty \rangle_{L^2}} - {\omega^i_\infty}(x) \phi^i_\infty \|_{L^2}) & \text{ if } x=(a,s,\phi) \in S^1 \cdot U_{x^i_\infty}.
\end{cases}
$$
Further write $x^i_\lambda = (a^i_\lambda, s^i_\lambda, \phi^i_\lambda)$, where $x^i_\lambda$ is the stationary point of $\mathcal{X}qmlgcsigma$ corresponding to $x^i_\infty$. We also write $v^i_\lambda = \frac{\phi^i_\lambda}{\langle \phi^i_\lambda, \phi^i_\infty\rangle_{L^2}} - \phi^i_\infty$. Note that $G_{x^i_\infty}(x^i_\lambda) = (a^i_\lambda, s^i_\lambda, v^i_\lambda)$. We get
\begin{align}
\label{eq:Xilambda-Ux} \Xi_\lambda& : W^\sigma_0 \to W^\sigma_0 \\
\nonumber (a,s,\phi) &\mapsto \Big(a + \sum_i \beta_i(x) (a^i_\infty - a^i_\lambda),\ s + \sum_i \beta_i(x)(s^i_\infty - s^i_\lambda), \\
\nonumber &\hspace{2.1in}\left.\frac{\phi- \sum_i \beta_i(x) \langle \phi, \phi^i_\infty \rangle_{L^2} \cdot v^i_\lambda }{\| \phi- \sum_i \beta_i(x) \langle \phi, \phi^i_\infty \rangle_{L^2} \cdot v^i_\lambda \|_{L^2}}\right).
\end{align}
Here we adopt the notational convention that the terms involving $\omega^i_{\infty}$ are interpreted as zero whenever $\beta_i$ vanishes (where $\omega^i_\infty$ may not be defined). Observe that for each $x$, $\beta_i(x)$ is non-zero for at most one $i$.
We now discuss the extension of $\Xi_\lambda$ to the double as claimed in the lemma. Recall that a stationary point $x_\infty$ is reducible if and only if the approximate stationary point $x_\lambda$ is. From \eqref{eq:Xilambda-Ux}, we therefore see that for $x \in W^\sigma_0$ with $s \ll 1$ (i.e. $s$ much smaller than the smallest value of $s_\infty$ for $x_\infty$ irreducible), the map $\Xi_\lambda$ preserves the $s$-component. In particular, $\Xi_\lambda$ preserves and is tangent to the reducible locus. It follows that $\Xi_\lambda$ extends to the double $\mathfrak{t}W^\sigma_0$ as claimed. It will be clear that the arguments below establishing the desired properties of $\Xi_\lambda$ for $W^\sigma_0$ extend to $\mathfrak{t}W^\sigma_0$.
\begin{remark}
It is worth noting that almost all of the pieces of $\Xi_\lambda$ are determined solely by the spinorial component of $x$. Further, in the end result, the formula for $\Xi_\lambda$ only uses the function $\omega^i_\infty$ inside the bump-like functions $\beta_i$.
\end{remark}
\section{Three-dimensional properties of $\Xi_{\lambda}$}\label{sec:xi-3d-bounds}
We now show that $\Xi_{\lambda}$ has the stated properties in Lemma~\ref{lem:Xixylambda}. After that, we will establish some additional bounds that will be useful for Proposition~\ref{prop:Xixy-paths}.
We begin with the first item in Lemma~\ref{lem:Xixylambda}.
\begin{proof}[Proof of Lemma~\ref{lem:Xixylambda}\eqref{xi:xlambda}]
It is clear from the construction of $\Xi_\lambda$ that each $x^i_\lambda$ is mapped to $x^i_\infty$. The $S^1$-equivariance then implies that $x_\lambda$ is mapped to $x_\infty$ for any stationary point $x_\lambda$ of $\mathcal{X}qmlgcsigma$ in $\N$.
\end{proof}
\subsection{Smoothness properties}
\begin{proof}[Proof of Lemma~\ref{lem:Xixylambda}\eqref{xi:sobolev}]
First, note that the functions $\beta_i$, $\omega^i_{\infty}$, and $\phi \mapsto \langle \phi, \phi^i_\infty \rangle_{L^2}$ are smooth on $L^2_j$ for any $1 \leq j \leq k$, since the $L^2$ inner-product is, and $x^i_\infty \in L^2_k$. At each step of the construction of $\Xi_\lambda$, where $\Xi_\lambda$ is not the identity (or in other words, some $\beta_i$ is non-zero), we are simply multiplying by the non-zero scalar $\omega^i_{\infty}$, or adding a multiple of $\beta_i$ or $\langle \phi, \phi^i_\infty \rangle_{L^2}$ times $x^i_\infty$ or $\phi^i_\infty$. Since $x^i_\lambda, x^i_\infty$ are in $L^2_k$, these (invertible) operations necessarily preserve the condition of being an $L^2_j$ configuration for $1 \leq j \leq k$, and we have the desired result.
\end{proof}
At the beginning of this section, $\Xi_\lambda$ was claimed to extend to a neighborhood of $W^\sigma_j$ in $\widehat{W}_j$. This is clear from the explicit description of $\Xi_\lambda$ given in \eqref{eq:Xilambda-Ux} since wherever $\Xi_\lambda$ disagrees with the identity, we have that $\beta_i \neq 0$ for precisely one $i$, and in this case $\langle \phi , \omega^i_\infty \phi^i_\infty \rangle _{L^2}$ is real and positive (in fact, at least 1/2); from this it is easy to see that $\Xi_\lambda$ extends to a neighborhood of this point in $\widehat{W}_j$, when we expand the target to $\widehat{W}_j$ as well.
The following completes the proof of Lemma~\ref{lem:Xixylambda}.
\begin{proof}[Proof of Lemma~\ref{lem:Xixylambda}\eqref{xi:smooth-f}]
Since $\Xi_\lambda$ is defined by extending a self-diffeomorphism of $U_{x^i_\infty}$ by $S^1$, and the space $U_{x^i_\infty}$ is defined independently of $\lambda$, it suffices to show that $\Xi_{f^{-1}(r)} |_{U_{x^i_\infty}}$ is smooth at and near $r = 0$. Recall from Corollary~\ref{cor:corresp} that the correspondence $r \mapsto [x_{f^{-1}(r)}]$ is differentiable at and near $r=0$. It is easy to check that this lifts to a smoothness statement without quotienting by $S^1$, due to the condition that $\operatorname{Re} \langle x_{f^{-1}(r)}, i x^i_\infty \rangle_{L^2} = 0$. Using \eqref{eq:Xilambda-Ux}, we see that the only terms in $\Xi_{f^{-1}(r)}$ or its derivatives which depend on $r$ are smooth functions of $x_{f^{-1}(r)}$, and thus differentiable at and near $r = 0$. This gives the desired smoothness statement.
\end{proof}
\subsection{Some three-dimensional bounds on $\Xi_{\lambda}$}
Before moving on to the four-dimensional properties of $\Xi_\lambda$ in Proposition~\ref{prop:Xixy-paths}, we study a few additional properties of $\Xi_\lambda$. For notational simplicity, we work in a neighborhood $S^1 \cdot U_{x^i_\infty}$ for some $i$. For the rest of this subsection, we again omit the index $i$ from the notation.
It is clear from \eqref{eq:Xilambda-Ux} that the complication in $\Xi_\lambda$ is in the spinorial component. We begin by simplifying the notation and then establishing a few key bounds.
We write
\begin{equation}
\label{eq:Xiflambda} f_\lambda(x) = \phi- \langle \phi, \phi_\infty \rangle_{L^2} \cdot \beta(x) \cdot v_\lambda
\end{equation}
and thus we can express the spinorial component of $\Xi_\lambda$ as $\frac{f_\lambda}{\| f_\lambda\|_{L^2}}$ for $x \in S^1 \cdot U_{x_\infty}$. Therefore, $\Xi_\lambda$ can be written a bit more succinctly as
\begin{equation}\label{xi:simplified}
\Xi_\lambda(a,s,\phi) = \left(a + \beta(x) (a_\infty - a_\lambda),\ s + \beta(x)(s_\infty - s_\lambda), \frac{f_\lambda(x)}{\| f_\lambda(x) \|_{L^2}} \right).
\end{equation}
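For computations with this expression, it is convenient to record the elementary chain-rule formula for the normalized spinorial component (valid wherever $f_\lambda(x) \neq 0$, which for $\lambda \gg 0$ is guaranteed by \eqref{xi:flambda-L2j} below): for a tangent vector $w$,
$$ \D_x\Bigl(\frac{f_\lambda}{\|f_\lambda\|_{L^2}}\Bigr)(w) = \frac{(\D_x f_\lambda)(w)}{\|f_\lambda(x)\|_{L^2}} - \frac{\operatorname{Re}\langle (\D_x f_\lambda)(w), f_\lambda(x) \rangle_{L^2}}{\|f_\lambda(x)\|_{L^2}^3}\, f_\lambda(x).$$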
We point out some bounds that will be useful for us when proving Proposition~\ref{prop:Xixy-paths}. By the work of Chapter~\ref{sec:criticalpoints}, we have that $\| x_\lambda - x_\infty \|_{L^2_k} \to 0$. It is then easy to see that for $\lambda \gg 0$, since $\| \phi_\infty \|_{L^2} = 1$, we have
\begin{equation}\label{xi:phi-lambda-infty}
\| v_\lambda \|_{L^2} \to 0.
\end{equation}
From \eqref{xi:phi-lambda-infty}, using that the spinorial component of an element of $W^\sigma$ has unit $L^2$-norm, we can also deduce that for $\lambda \gg 0$:
\begin{equation}
\label{xi:flambda-L2j} \frac{1}{2} \leq \| f_\lambda(x) \|_{L^2} \leq \frac{3}{2}.
\end{equation}
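Indeed, by the triangle inequality and Cauchy-Schwarz (using $\|\phi\|_{L^2} = \|\phi_\infty\|_{L^2} = 1$ and $0 \leq \beta \leq 1$),
$$ \bigl|\, \| f_\lambda(x) \|_{L^2} - 1 \,\bigr| \leq |\langle \phi, \phi_\infty \rangle_{L^2}|\, \beta(x)\, \| v_\lambda \|_{L^2} \leq \| v_\lambda \|_{L^2}, $$
so \eqref{xi:flambda-L2j} holds as soon as $\| v_\lambda \|_{L^2} \leq \frac{1}{2}$, which \eqref{xi:phi-lambda-infty} guarantees for $\lambda \gg 0$.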
\section{Four-dimensional properties of $\Xi_{\lambda}$}\label{sec:xi-4d-bounds}
In this section, we prove Proposition~\ref{prop:Xixy-paths}. For the rest of this section, we let $j$ be an integer such that $1 \leq j \leq k$. Before proceeding, we remind the reader that the $L^2_j$-norm of a four-dimensional configuration $\gamma$ on $I \times Y$, for $I = [a,b]$, is expressed as
$$
\| \gamma \|^2_{L^2_j(I \times Y)} = \sum^j_{n=0} \int^b_a \Bigl\| \frac{d^n}{dt^n}\gamma(t) \Bigr\|^2_{L^2_{j - n}(Y)} dt.
$$
\operatorname{sp}incubsection{Sobolev multiplication and superposition operators}
In order to study how $\mathcal{X}i_\lambda$ acts on paths in $W^\tau_{j}(I \times Y)$, we will need some elementary variants of Sobolev multiplication and superposition operators. Many of these are well-known or easily deduced from the definitions and Sobolev multiplication. We include the proofs so as to reference them for analogous results later on that are less standard.
\begin{lemma}\label{xi:inner-product-path}
Let $I \subseteq \rr$. Then, taking inner products induces smooth maps
\begin{align}
&\label{inner-product:4d} \widehat{W}_j(I \times Y) \times \widehat{W}_j(I \times Y) \to L^2_j(I; \rr), \ (\gamma(t), \eta(t)) \mapsto \langle \gamma(t), \eta(t) \rangle_{L^2(Y)}, \\
\label{inner-product:3d}&\widehat{W}_j(I \times Y) \times \widehat{W}_j \to L^2_j(I; \rr), \ (\gamma,x) \mapsto \langle \gamma, x \rangle_{L^2(Y)}.
\end{align}
\end{lemma}
\begin{proof}
While the smoothness of \eqref{inner-product:4d} is a special case of Sobolev multiplication, we provide a proof, since it makes the second claim easier to justify and we will use similar techniques throughout the rest of the section.
We compute
\begin{align*}
\| \langle \gamma , \eta \rangle_{L^2(Y)} \|^2_{L^2_j(I)} &= \sum^j_{n=0} \Bigl\| \frac{d^n}{dt^n} \langle \gamma, \eta \rangle_{L^2(Y)} \Bigr\|^2_{L^2(I)} \\
& = \sum^j_{n=0} \int_I \Bigl|\sum^n_{i = 0} {n \choose i} \langle \frac{d^i}{dt^i} \gamma , \frac{d^{n-i}}{dt^{n-i}} \eta \rangle_{L^2(Y)} \Bigr |^2 dt \\
& \leq \sum^j_{n=0} \sum^n_{i_1, i_2 = 0} {n \choose i_1}{n \choose i_2} \int_I \Bigl| \langle \frac{d^{i_1}}{dt^{i_1}} \gamma, \frac{d^{n-i_1}}{dt^{n-i_1}} \eta \rangle_{L^2(Y)} \cdot \langle \frac{d^{i_2}}{dt^{i_2}} \gamma, \frac{d^{n-i_2}}{dt^{n-i_2}} \eta \rangle_{L^2(Y)}\Bigr| dt \\
& \leq \sum^j_{n=0} \sum^n_{i_1, i_2 = 0} {n \choose i_1}{n \choose i_2} \int_I \Bigl\| \frac{d^{i_1}}{dt^{i_1}} \gamma\Bigr\|_{L^2(Y)} \Bigl\| \frac{d^{n-i_1}}{dt^{n-i_1}} \eta\Bigr \|_{L^2(Y)}\Bigl \| \frac{d^{i_2}}{dt^{i_2}} \gamma\Bigr\|_{L^2(Y)} \Bigl\| \frac{d^{n-i_2}}{dt^{n-i_2}} \eta\Bigr \|_{L^2(Y)} dt.
\end{align*}
Let us focus on a single summand in the final expression above, i.e., we fix values of $n, i_1, i_2$. Note that $n \leq j$, so one of $n - i_1$ or $i_1$ is bounded above by $\frac{j}{2}$. Without loss of generality, it is $i_1$. Since $j \geq 1$, we have that
$$ j - i_1 \geq \frac{j}{2} \geq \frac{1}{2}.$$
(In fact, $j-i_1 \geq 1$, because $j$ and $i_1$ are integers.) We deduce that the composition of $\frac{d^{i_1}}{dt^{i_1}}: \widehat{W}_{j}(I \times Y) \to \widehat{W}_{j-i_1}(I \times Y)$ with the restriction map to a slice $\widehat{W}_{j-i_1}(I \times Y) \to L^2(\{t\} \times Y)$ is continuous. From here we see that there exists a constant $C_{i_1}$, depending only on $Y$ and $I$, such that
$$\| \frac{d^{i_1}}{dt^{i_1}} \gamma(t) \|_{L^2(Y)} \leq C_{i_1} \| \gamma \|_{L^2_j(I \times Y)}, \ \text{ for all } t \in I.$$
We can repeat the same argument for $i_2$ or $n - i_2$. Again, without loss of generality we assume that $i_2 \leq \frac{j}{2}$, and we obtain bounds
$$\| \frac{d^{i_2}}{dt^{i_2}} \gamma(t) \|_{L^2(Y)} \leq C_{i_2} \| \gamma \|_{L^2_j(I \times Y)}, \ \text{ for all } t \in I.$$
Therefore, we may write
\begin{align*}
\| \langle \gamma , \eta \rangle_{L^2(Y)} \|^2_{L^2_j(I)} &\leq \sum^j_{n=0} \sum^n_{i_1, i_2 = 0} {n \choose i_1}{n \choose i_2} \int_I \Big\| \frac{d^{i_1}}{dt^{i_1}} \gamma\Big\|_{L^2(Y)} \Big\| \frac{d^{n-i_1}}{dt^{n-i_1}} \eta\Big \|_{L^2(Y)} \Big\| \frac{d^{i_2}}{dt^{i_2}} \gamma\Big\|_{L^2(Y)}\Big \| \frac{d^{n-i_2}}{dt^{n-i_2}} \eta \Big\|_{L^2(Y)} dt \\
& \leq \sum^j_{n=0} \sum^n_{i_1, i_2 = 0} {n \choose i_1}{n \choose i_2} C_{i_1} C_{i_2} \| \gamma \|^2_{L^2_j(I \times Y)} \int_I \Big\| \frac{d^{n-i_1}}{dt^{n-i_1}} \eta \Big\|_{L^2(Y)} \Big \| \frac{d^{n-i_2}}{dt^{n-i_2}} \eta \Big\|_{L^2(Y)} dt \\
& \leq \sum^j_{n=0} \sum^n_{i_1, i_2 = 0} {n \choose i_1} {n \choose i_2} C_{i_1} C_{i_2} \| \gamma \|^2_{L^2_j(I \times Y)} \Big\| \frac{d^{n-i_1}}{dt^{n-i_1}} \eta\Big \|_{L^2(I \times Y)} \Big\| \frac{d^{n-i_2}}{dt^{n-i_2}} \eta \Big\|_{L^2(I \times Y)} \\
& \leq \sum^j_{n=0} \sum^n_{i_1, i_2 = 0} {n \choose i_1} {n \choose i_2} C_{i_1} C_{i_2} \| \gamma \|^2_{L^2_j(I \times Y)} \| \eta \|^2_{L^2_j(I \times Y)},
\end{align*}
where in the penultimate inequality we have used Cauchy-Schwarz. An analogous inequality holds if we instead replace $i_1$ or $i_2$ with $n - i_1$ or $n - i_2$ respectively, depending on their values.
By summing over the relevant constants, we obtain a bound
$$
\| \langle \gamma , \eta \rangle_{L^2(Y)} \|_{L^2_j(I)} \leq C\| \gamma \|_{L^2_j(I \mathfrak{t}imes Y)} \| \eta \|_{L^2_j(I \mathfrak{t}imes Y)},
$$
where $C$ is independent of $\gamma, \eta$. It follows that slicewise inner products define a {\em bounded}, bilinear map. Such a map is necessarily smooth.
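To justify the last assertion, recall that for a bounded bilinear map $B$ one computes directly from the definition that
$$
(\D B)_{(\gamma, \eta)}(\xi, \zeta) = B(\xi, \eta) + B(\gamma, \zeta),
$$
so the first derivative is again a bounded (linear) map, the second derivative is constant, and all higher derivatives vanish. This standard observation is the smoothness criterion we appeal to here and in Lemma~\ref{xi:pointwise-mult} below.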
For \eqref{inner-product:3d}, we can repeat the proof above. Note that an element of $\widehat{W}_j$ does not determine an element of $\widehat{W}_j(I \mathfrak{t}imes Y)$ if $I$ is not compact. However, for each term analyzed in the above argument, we only needed that one of the terms involved (either a derivative of $\gamma$ or of $\eta$) be $L^2(Y)$ bounded uniformly in time, with the other term square-integrable in time. In this case, the three-dimensional configuration in $\widehat{W}_j$ is constant in time, and has bounded $L^2(Y)$ norm (since $j \geq 0$), so we have the desired result.
\end{proof}
\begin{lemma}\label{xi:pointwise-mult}
Let $I \subseteq \rr$. Then, pointwise multiplication induces smooth maps
\begin{align}
&L^2_j(I; \rr) \times \widehat{W}_j(I \times Y) \to \widehat{W}_j(I \times Y), \ (g, \gamma) \mapsto g \gamma, \\
&L^2_j(I; \rr) \times \widehat{W}_j \to \widehat{W}_j(I \times Y), \ (g, \gamma) \mapsto g \gamma.
\end{align}
\end{lemma}
\begin{proof}
The argument is similar to that of the above lemma. We begin with the first map. Recall that for any $\gamma \in \widehat{W}_j(I \times Y)$, we have a norm for $\gamma(t,y)$, defined for any $(t,y) \in I \times Y$; we write this function as $|\gamma|$. We compute
\begin{align*}
\| g \gamma \|^2_{L^2_j(I \times Y)} &= \sum^j_{n=0} \int_{I \times Y} \Big| \sum^n_{i = 0} {n \choose i} \frac{d^i}{dt^i}(g) \nabla^{n-i} \gamma \Big|^2 \\
& \leq \sum^j_{n=0} \sum^n_{i_1, i_2 = 0} {n \choose i_1} \cdot {n \choose i_2} \int_{I \times Y} \Big| \frac{d^{i_1}}{dt^{i_1}}(g) \nabla^{n-i_1} \gamma\Big| \cdot \Big| \frac{d^{i_2}}{dt^{i_2}}(g) \nabla^{n-i_2} \gamma \Big|.
\end{align*}
We can now apply the same argument as in the first part of Lemma~\ref{xi:inner-product-path} to obtain a bound
$$
\| g \gamma \|_{L^2_j(I \mathfrak{t}imes Y)} \leq K \| g\|_{L^2_j(I)} \| \gamma \|_{L^2_j(I \mathfrak{t}imes Y)},
$$
where $K$ is a constant independent of $g$ and $\gamma$. This shows that pointwise multiplication is continuous. Smoothness again follows by bilinearity.
The second equation can be obtained by applying the same modifications to the proof as in Lemma~\ref{xi:inner-product-path} to establish \eqref{inner-product:3d}.
\end{proof}
The following lemma is standard. We include a proof for completeness.
\begin{lemma}
\label{lem:composeSobolev}
Let $I \subseteq \rr$ be an interval, and $h: {\mathbb{R}}\to \rr$ a smooth function. Consider the transformation of $C^\infty(\mathbb{R})$ given by
$$ H : g \mapsto h \circ g.$$
\begin{enumerate}[(a)]
\item If $I$ is compact, then $H$ induces a smooth map from $L^2_j(I; \rr)$ to $L^2_j(I; \rr)$.
\item The same holds true for arbitrary $I$ (not necessarily compact), provided that $h(0)=0$.
\end{enumerate}
\end{lemma}
\begin{proof}
$(a)$ We use Fa{\`a} di Bruno's formula for the higher derivatives of a composition:
\begin{equation}
\label{eq:bruno}
\frac{d^n}{dt^n} h(g(t)) = \sum C(m_1, \dots, m_n) \cdot h^{(m_1 + \dots + m_n)}(g(t)) \cdot \prod_{i=1}^n \bigl( g^{(i)}(t)\bigr)^{m_i}.
\end{equation}
Here, the sum is taken over all $n$-tuples of nonnegative integers $(m_1, \dots, m_n)$ with
$$ 1 \cdot m_1 + 2 \cdot m_2 + \dots + n \cdot m_n = n,$$
and the constants $C(m_1, \dots, m_n)$ are
$$ C(m_1, \dots, m_n) = \frac{n!}{m_1! 1!^{m_1} m_2! 2!^{m_2} \dots m_n! n!^{m_n}}.$$
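For instance, for $n = 2$ the admissible tuples are $(m_1, m_2) = (2,0)$ and $(0,1)$, both with constant $C = 1$, and \eqref{eq:bruno} recovers the familiar identity
$$
\frac{d^2}{dt^2} h(g(t)) = h''(g(t)) \bigl(g'(t)\bigr)^2 + h'(g(t))\, g''(t).
$$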
Let $g \in L^2_j(I; \rr)$. We first want to show that $h \circ g$ is also in $L^2_j(I; \rr)$, that is, the $n$th derivative $\tfrac{d^n}{dt^n} h(g(t))$ is in $L^2(I; \rr)$, for all $0 \leq n \leq j$.
We claim that each summand in the expression \eqref{eq:bruno} is in $L^2$. Indeed, since $g$ is in $L^2_j$ and $j \geq 1$, we have that $g$ is continuous. Further, since $h$ is smooth, we see that the expression $h^{(m_1 + \dots + m_n)}(g(t))$ is continuous in $t$, and therefore bounded (because $I$ is compact). Moreover, unless $i=n$, the factors $g^{(i)}(t)$ are in $L^2_1$; hence, by Sobolev multiplication, their product is also in $L^2_1$, and thus in $L^2$. It follows that
\begin{equation}
\label{eq:hmm}
h^{(m_1 + \dots + m_n)}(g(t))\cdot \prod_{i=1}^n \bigl( g^{(i)}(t)\bigr)^{m_i} \in L^2,
\end{equation}
as long as $m_n =0$, since $h^{(m_1 + \dots + m_n)} \circ g$ is bounded. In the special case $m_n > 0$, we see that $m_n =1$ and all the other $m_i$ must be zero, so $\bigl( g^{(i)}(t)\bigr)^{m_i} = g^{(n)}(t)$ is still in $L^2$, and we get the same conclusion.
We have shown that $H$ maps $L^2_j(I; \rr)$ to $L^2_j(I; \rr)$. To see that $H$ is smooth, we compute its derivatives:
\begin{equation}
\label{eq:drh}
(\D^r H)_{g}(\xi_1, \dots, \xi_r) = (h^{(r)} \circ g) \cdot \xi_1 \cdot \ldots \cdot \xi_r.
\end{equation}
When $g, \xi_1, \dots, \xi_r \in L^2_j(I; \rr)$, by applying the fact we just proved with $h^{(r)}$ instead of $h$, and making use of the Sobolev multiplication $L^2_j \times L^2_j \to L^2_j$, we get that $(\D^r H)_{g}(\xi_1, \dots, \xi_r)$ is in $L^2_j$, as desired.
$(b)$ If $I$ is non-compact, we still have that $h^{(m_1 + \dots + m_n)}$ and $g$ are continuous. Further, by the Sobolev embedding theorem, $g \in L^2_j$ implies that $g$ is bounded. Therefore, it is still the case that $h^{(m_1 + \dots + m_n)}\circ g$ is bounded. We also have the Sobolev multiplication $L^2_j \times L^2_j \to L^2_j$ for $j \geq 1$, so the same arguments as before show that \eqref{eq:hmm} holds, provided that not all $m_i$ are zero.
When $n=0$ and all $m_i$ are zero, the constant function $1 = \prod_{i=1}^n \bigl( g^{(i)}(t)\bigr)^{m_i}$ is not in $L^2$. Nevertheless, we claim that $h^{(m_1 + \dots + m_n)}(g(t)) \cdot 1 = h(g(t))$ is in $L^2$. To see this, we make use of the hypothesis $h(0)=0$. The derivative of $h$ at $0$ is
\begin{equation}
\label{eq:hzero}
h'(0) = \lim_{t \mathfrak{t}o 0} \frac{h(t)}{t} \in \rr.
\end{equation}
Recall that $g$ is bounded, so there is some $K > 0$ such that $|g(t)| \leq K$ for all $t$. Let $M$ be the supremum of $|h(s)/s|$ over $s \in [-K, K] \setminus \{0\}$. From \eqref{eq:hzero}, together with the continuity of $h(s)/s$ away from $s = 0$, we see that this supremum is finite. Therefore, we have
$$ |h(g(t)) | \leq M |g(t)|,$$
for all $t \in I$. Since $g \in L^2$, this easily implies that $h \circ g$ is in $L^2$.
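Spelling this last step out: $\| h \circ g \|^2_{L^2(I)} = \int_I |h(g(t))|^2 \, dt \leq M^2 \int_I |g(t)|^2 \, dt = M^2 \| g \|^2_{L^2(I)} < \infty$.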
We have now shown that $H$ maps $L^2_j(I; \rr)$ to $L^2_j(I; \rr)$. To check that $H$ is smooth, it remains to show that the expression \eqref{eq:drh} is in $L^2_j(I; \rr)$, when $g, \xi_1, \dots, \xi_r \in L^2_j(I; \rr)$.
By Sobolev multiplication, the product $\xi = \xi_1 \cdot \ldots \cdot \xi_r$ is in $L^2_j$. For $0 \leq n \leq j$, we need to verify that
\begin{equation}
\label{eq:productderivatives}
\frac{d^n}{dt^n} \bigl( (h^{(r)} \circ g) \cdot \xi \bigr ) = \sum_{s=0}^n {n \choose s } \frac{d^s}{dt^s} (h^{(r)} \circ g) \cdot \frac{d^{n-s}}{dt^{n-s}} \xi
\end{equation}
is in $L^2$.
For $s \neq 0$, the arguments above (based on Fa{\`a} di Bruno's formula), applied to $h^{(r)}$ instead of $h$, show that the $s^{\operatorname{th}}$ derivative of $h^{(r)} \circ g$ is in $L^2$. Further, since $n-s < n \leq j$, the $(n-s)^{\operatorname{th}}$ derivative of $\xi$ is in $L^2_1$. It follows that \eqref{eq:productderivatives} is in $L^2$.
In the special case $s=0$, the expression $h^{(r)} \circ g$ is bounded, and $\frac{d^{n}}{dt^{n}} \xi \in L^2$, so we again get that their product is in $L^2$. This concludes the proof.
\end{proof}
Let $I \subseteq \rr$ and $\varepsilon > 0$. Consider the subset $L^2_j(I;\rr_{>\varepsilon})$ inside $L^2_j(I;\rr)$ consisting of functions with values in $(\varepsilon,\infty)$. Note that for this subset to be non-empty, the interval $I$ must be compact. Further, since $j \geq 1$, $L^2_j(I;\rr_{> \varepsilon})$ is an open subset of $L^2_j(I;\rr)$.
\begin{lemma}\label{lem:xi-inversion}
Let $I \subseteq \rr$. The map $x \mapsto \frac{1}{x}$ induces a smooth map $H: L^2_j(I;\rr_{> \varepsilon}) \to L^2_j(I; \rr)$, given by $H(g)(t) = \frac{1}{g(t)}$. A similar statement applies to $x \mapsto \sqrt{x}$.
\end{lemma}
\begin{proof}
Let $h: {\mathbb{R}}\to \rr$ be any smooth function such that $h(x) = 1/x$ for $x \geq \varepsilon$. Then, the result follows from Lemma~\ref{lem:composeSobolev}(a), applied to $h$. The same argument works for $\sqrt{x}$.
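For concreteness, one possible (non-canonical) choice is $h(x) = \chi(x)/x$, where $\chi: \rr \to [0,1]$ is a smooth cutoff with $\chi \equiv 0$ on $(-\infty, \varepsilon/2]$ and $\chi \equiv 1$ on $[\varepsilon, \infty)$; for the square root one may likewise take $h(x) = \chi(x)\sqrt{x}$ for $x > 0$ and $h(x) = 0$ for $x \leq 0$. Only the existence of some such smooth extension is used.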
\end{proof}
With this, we can study the regularity of the $L^2$ norm as well.
\begin{lemma}\label{lem:xi-norm}
Let $I \subseteq \rr$. Let $\gamma \in \widehat{W}_j(I \times Y)$ have $\| \gamma (t) \|_{L^2(Y)} \geq \varepsilon > 0$ for all $t$. On a neighborhood of $\gamma$ in $\widehat{W}_j(I \times Y)$, the association $\gamma \mapsto \| \gamma \|_{L^2(Y)}$ is a smooth map to $L^2_j(I;\rr)$.
\end{lemma}
\begin{proof}
Writing $\| \gamma \|_{L^2(Y)} = \langle \gamma, \gamma \rangle_{L^2(Y)}^{\frac 1 2}$, the result follows from Lemma~\ref{xi:inner-product-path} and the second statement in Lemma~\ref{lem:xi-inversion}.
\end{proof}
Finally, we have
\begin{lemma}\label{xi:alpha}
Let $I \subset \rr$ be compact. Postcomposition with the bump function $\alpha$ induces a smooth map $L^2_j(I;\rr_{\geq 0}) \to L^2_j(I;\rr_{\geq 0})$ given by $g \mapsto \alpha \circ g$.
\end{lemma}
\begin{proof}
This is an immediate consequence of Lemma~\ref{lem:composeSobolev}(a).
\end{proof}
\subsection{On $L^2_{j}$ regularity of $\mathcal{X}i_{\lambda}$ for compact cylinders}\label{xi:compact}
With the above technical lemmas established, we can now prove the first part of Proposition~\ref{prop:Xixy-paths}, which concerns the $L^2_{j}$ regularity of $\mathcal{X}i_\lambda$ on compact cylinders. First, recall from Lemma~\ref{lem:Xixylambda} that $\mathcal{X}i_\lambda$ extends to $\widetilde{W}^\sigma_j$. There is an obvious involution $\iota$ on $\widetilde{W}^\sigma_j$ given by $(a,s,\phi) \mapsto (a,-s,\phi)$, such that $\mathcal{X}i_\lambda(a,-s,\phi) = \mathcal{X}i_\lambda(\iota(a,s,\phi)) = \iota (\mathcal{X}i_\lambda(a,s,\phi))$. When discussing the terms involved in $\mathcal{X}i_\lambda$, we will suppress this from the notation; for example, if $(a,s,\phi)$ has $s < 0$, we will write $\beta_i(a,s,\phi)$ instead of $\beta_i(\iota(a,s,\phi))$. Since $\mathcal{X}i_\lambda$ is $\iota$-equivariant, it is easy to verify that this will not affect any smoothness or regularity.
\begin{proof}[Proof of Proposition~\ref{prop:Xixy-paths}\eqref{xi:paths}]
Let $I$ be a compact interval. The idea is to apply the technical lemmas proved above on Sobolev multiplication and superposition operators to the explicit formula for $\mathcal{X}i_\lambda$ in \eqref{eq:Xilambda-Ux}. Fix $\gamma \in \widetilde{W}^\tau_j(I \times Y)$, where we write $\gamma(t) = (a(t),s(t), \phi(t))$. Recall that $\| \phi(t) \|_{L^2(Y)} = 1$ for all $t \in I$. We break $\mathcal{X}i_\lambda$ into elementary pieces and study their regularity.
First, observe that $\langle \phi(t), \phi^i_\infty \rangle_{L^2(Y)} \in L^2_j(I;\rr)$ by Lemma~\ref{xi:inner-product-path}. By Lemma~\ref{xi:pointwise-mult}, we have that $\langle \phi(t), \phi^i_\infty \rangle_{L^2(Y)} \cdot v^i_\lambda$ is in $L^2_j$.
Note that whenever $\gamma(t) \in S^1 \cdot U_{x^i_\infty}$, we have $|\langle \phi(t), \phi^i_\infty \rangle_{L^2(Y)}| \geq \frac{1}{2}$ by our choice of the $\delta$-balls in the definition of $\mathcal{X}i_\lambda$. (Here we are using that there exists $u(t) \in S^1$ such that $u(t) \cdot \phi(t) \in U_{x^i_\infty}$.) Using Lemma~\ref{lem:xi-inversion}, formula \eqref{eq:omegainfinity}, and the Sobolev multiplication $L^2_j(I;\rr) \mathfrak{t}imes L^2_j(I;\rr) \mathfrak{t}o L^2_j(I;\rr)$, we can deduce that $\omega^i_\infty(\gamma) \in L^2_j(I; \cc)$. Consequently, it follows from Lemmas~\ref{xi:pointwise-mult}, \ref{lem:xi-inversion}, and \ref{xi:alpha} that $\beta_i(\gamma) \in L^2_j(I;\rr)$. Therefore, we see by Lemma~\ref{xi:pointwise-mult} that
$$
\left(a + \beta_i(\gamma) (a^i_\infty - a^i_\lambda),\ s + \beta_i(\gamma)(s^i_\infty - s^i_\lambda),f^i_\lambda(\gamma)\right)
$$
is in $\widehat{W}_j(I \times Y)$. The slicewise $L^2$ norm of the spinorial component of this path is bounded below by $1/2$ by \eqref{xi:flambda-L2j}. Therefore, we see that
$$
\left(a + \beta_i(\gamma) (a^i_\infty - a^i_\lambda),\ s + \beta_i(\gamma)(s^i_\infty - s^i_\lambda),\frac{f^i_\lambda(\gamma)}{\| f^i_\lambda(\gamma) \|_{L^2(Y)}} \right),
$$
is contained in $\widehat{W}_j(I \times Y)$ by Lemmas~\ref{xi:pointwise-mult}, \ref{lem:xi-inversion}, and \ref{lem:xi-norm}. This expression is $\mathcal{X}i_\lambda(\gamma)$ by \eqref{xi:simplified}, as $\beta_n(\gamma) = 0$ for any $n \neq i$, when $\beta_i(\gamma) \neq 0$.
The smoothness of $\mathcal{X}i_\lambda$ follows from the relevant smoothness established in the sequence of technical lemmas above about Sobolev multiplication and superposition operators. To see that $\mathcal{X}i_\lambda$ is a diffeomorphism, we can apply similar arguments to show that $\mathcal{X}i^{-1}_\lambda$ satisfies the same regularity and smoothness properties.
We now show that $\mathcal{X}i_{\lambda}$ depends smoothly on $\lambda$, at and near infinity.
If we change the definition of $\mathcal{X}i_\lambda$ by varying $x^i_\lambda$ smoothly, the arguments above (i.e., the application of Lemmas~\ref{xi:inner-product-path}-\ref{xi:alpha}) show that we will vary the induced diffeomorphism in a smooth way. This only applies for small variations of $x^i_\lambda$ whose spinorial component stays slicewise in the $L^2$ ball of size $\delta$ around $\phi^i_\infty$, but that is all we will need. By Corollary~\ref{cor:corresp}, we see that $x_{f^{-1}(r)}$ is smooth at and near $r = 0$, and thus we obtain that $\mathcal{X}i_{f^{-1}(r)}$ is smooth at and near $r = 0$.
It remains to verify that $\mathcal{X}i_\lambda$ preserves $W^\tau_{j}(I \times Y)$ (i.e.\ $\mathcal{X}i_\lambda$ preserves the condition $s \geq 0$). This is trivial since $\mathcal{X}i_\lambda$ takes $W^\sigma_0$ to $W^\sigma_0$.
\end{proof}
\subsection{On $L^2_j$ regularity of $\mathcal{X}i_{\lambda}$}\label{xi:non-compact}
Before studying further regularity properties of $\mathcal{X}i_\lambda$, we need to introduce some further notation, generalizing the construction of $W^\tau_j(x_\lambda, y_\lambda)$. Let $I \subseteq \rr$ be an interval. Given a path $\zeta \in \widehat{W}_{j, loc}(I \times Y)$ (analogous to the smooth reference path $\gamma_0$ in defining $W^\tau_j(x_\lambda, y_\lambda)$), we define
$$
\widehat{W}_j(I \times Y, \zeta)= \{ \gamma \in \widehat{W}_{j, loc}(I \times Y) \mid \gamma - \zeta \in \widehat{W}_{j}(I \times Y) \}.
$$
We equip this space with the $L^2_j$ metric (not $L^2_{j,loc}$). Note that many different functions can induce the same space. Further, note that $ \widehat{W}_j(I \times Y, \zeta)$ is a Banach manifold, since we have an affine identification with $\widehat{W}_{j}(I \times Y)$.
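For instance, taking $\zeta$ to be the constant path at some $x \in \widehat{W}_j$ (in the sense described below) yields the space $\widehat{W}_j(I \times Y, x)$ appearing in Proposition~\ref{xi:half-cylinder} and Lemma~\ref{lem:xi-inversion-global}, and, as an instance of the preceding remark, replacing $\zeta$ by any $\zeta'$ with $\zeta' - \zeta \in \widehat{W}_j(I \times Y)$ induces the same space.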
Towards studying the map $\mathcal{X}i_\lambda$ on $\mathfrak{t}W^\mathfrak{t}au_j(x_\lambda, y_\lambda)$, we will begin with a regularity result for paths in $\mathfrak{t}W^\operatorname{sp}incigma_0$ that are contained entirely in some neighborhood of an orbit of stationary points (or their involutes under $\iota$). As usual, for an element $x \in \widehat{W}_j$, we will abusively write $x$ for the induced constant path in $\widehat{W}_{j,loc}(I \mathfrak{t}imes Y)$.
\begin{proposition}\label{xi:half-cylinder}
Fix a stationary point $x^i_\infty$ of $\mathcal{X}qagcsigma$ as in the construction of $\mathcal{X}i_\lambda$, and let $x^i_{\lambda}$ be the corresponding stationary point of $\mathcal{X}qmlagcsigma$. Let $T > 0$. Let $\gamma \in \widetilde{W}^\tau_j([T,\infty) \times Y, x^i_\lambda)$ be such that $\beta_i(\gamma) \equiv 1$. Then,
\begin{enumerate}[(a)]
\item in a neighborhood of $\gamma$ in $\widehat{W}_j([T,\infty) \times Y, x^i_\lambda)$, $\mathcal{X}i_\lambda$ induces a smooth map to $\widehat{W}_j([T,\infty) \times Y, x^i_\infty)$, and
\item this family of maps is smooth in $\lambda$ at and near infinity.
\end{enumerate}
An analogous result applies for the half-cylinder $(-\infty, -T] \times Y$.
\end{proposition}
We postpone the proof of Proposition~\ref{xi:half-cylinder}, but do explain what we mean by smoothness of $\mathcal{X}i_\lambda$ in $\lambda$, since the domain of the function is changing. Fix a reference path $\gamma_0$ in $W^\mathfrak{t}au_{j,loc}([T,\infty) \mathfrak{t}imes Y)$ which agrees with $x_\infty$ for $t \gg 0$. Note that $\mathcal{X}i^{-1}_\lambda(\gamma_0)$ provides an $L^2_{j,loc}$ reference path which agrees with $x_\lambda$ for $t \gg 0$ by the work of Section~\ref{xi:compact}. Then, $\mathcal{X}i_\lambda$ induces a map,
$$
\widehat{W}_j([T,\infty) \times Y) \to \widehat{W}_j([T,\infty) \times Y), \ \gamma - \mathcal{X}i^{-1}_\lambda(\gamma_0) \mapsto \mathcal{X}i_\lambda(\gamma) - \gamma_0,
$$
defined in a neighborhood of $\gamma - \mathcal{X}i^{-1}_\lambda(\gamma_0)$, where $\gamma \in \mathfrak{t}W^\mathfrak{t}au_{j,loc}([T,\infty) \mathfrak{t}imes Y)$ satisfies $\gamma - \mathcal{X}i^{-1}_\lambda(\gamma_0) \in \widehat{W}_j([T,\infty) \mathfrak{t}imes Y)$. Since the domain and target of this map are constant in $\lambda$, we can make sense of smoothness in $\lambda$ at and near $\infty$. There is an analogous notion of smoothness in $\lambda$ for maps from $\mathfrak{t}W^\mathfrak{t}au_j(x_\lambda, y_\lambda)$ to $\mathfrak{t}W^\mathfrak{t}au_j(x_\infty, y_\infty)$ (assuming they extend to neighborhoods in the relevant larger affine space), which is what is meant in Proposition~\ref{prop:Xixy-paths}\eqref{xi:paths-smooth-f} and Proposition~\ref{xi:half-cylinder}. More generally, given $f : \widehat{W}_{j,loc}(I\mathfrak{t}imes Y) \mathfrak{t}o \widehat{W}_{j,loc}(I \mathfrak{t}imes Y)$, one can analogously define smoothness in $\gamma$ of the family of maps $f: \widehat{W}_j(I \mathfrak{t}imes Y, \gamma) \mathfrak{t}o \widehat{W}_j(I \mathfrak{t}imes Y, f(\gamma))$. One can check that these notions are independent of the choice of reference path.
Before proving Proposition~\ref{xi:half-cylinder}, we use it to complete the proof of Proposition~\ref{prop:Xixy-paths}\eqref{xi:paths-smooth-f}.
\begin{proof}[Proof of Proposition~\ref{prop:Xixy-paths}\eqref{xi:paths-smooth-f}]
Let $\gamma \in \mathfrak{t}W^\mathfrak{t}au_j(x_\infty, y_\infty)$. First, we establish that $\mathcal{X}i_\lambda(\gamma) \in \mathfrak{t}W^\mathfrak{t}au_j(x_\infty, y_\infty)$. Fix a reference path $\gamma_0$ from $x_\infty$ to $y_\infty$ which is constant outside of $[-T,T]$. Write $x^i_\infty$ (respectively $x^n_\infty$) for the indexed stationary point from the construction of $\mathcal{X}i_\lambda$ that is in the orbit of $x_\infty$ (respectively $y_\infty$).
We have that $\mathcal{X}i_\lambda(\gamma) \in \mathfrak{t}W^\mathfrak{t}au_{j,loc}({\mathbb{R}}\mathfrak{t}imes Y)$ by Proposition~\ref{prop:Xixy-paths}\eqref{xi:paths}. Since $\gamma \in \mathfrak{t}W^\mathfrak{t}au_j(x_\infty, y_\infty)$, we have that $\gamma - \gamma_0$ has finite $L^2_j({\mathbb{R}}\mathfrak{t}imes Y)$ norm, with $j \geq 1$. Thus, we see that for $t \gg 0$, we must have that $\| \gamma(t) - y_\infty\|_{L^2(Y)}$ is sufficiently small so that $\gamma(t)$ is contained in $S^1 \cdot U_{x^n_\infty}$ and $\beta_n(\gamma(t)) = 1$. A similar statement applies for $t \ll 0$, where we see that $\beta_i(\gamma(t)) = 1$.
By Proposition~\ref{xi:half-cylinder}, we see that
$$\mathcal{X}i_\lambda(\gamma|_{[T, \infty)}) \in \widehat{W}_j([T,\infty) \times Y, \mathcal{X}i_\lambda(\gamma_0|_{[T,\infty)})).$$ A similar statement applies for $(-\infty, -T] \times Y$ as well. Since $\gamma|_{[-T,T]} \in \widetilde{W}^\tau_{j}([-T,T] \times Y)$, Proposition~\ref{prop:Xixy-paths}\eqref{xi:paths} shows that the same holds for $\mathcal{X}i_\lambda(\gamma|_{[-T,T]})$, and we can put these three pieces together to see that $\mathcal{X}i_\lambda(\gamma) \in \widetilde{W}^\tau_{j}(x_\infty, y_\infty)$.
We will use a similar argument for smoothness. We provide the argument for the existence of the first derivative of $\mathcal{X}i_\lambda$; the higher derivatives are similar. Let $\eta \in T_{\gamma} \widehat{W}_j({\mathbb{R}}\mathfrak{t}imes Y, \gamma_0) \cong \widehat{W}_j({\mathbb{R}}\mathfrak{t}imes Y)$. We would like to see that
\begin{equation}\label{xi:L2j-deriv-exists}
\lim_{h \mathfrak{t}o 0} \Big\| \frac{\mathcal{X}i_\lambda(\gamma + h \cdot \eta) - \mathcal{X}i_\lambda(\gamma)}{h} \Big\|_{L^2_j({\mathbb{R}}\mathfrak{t}imes Y)}
\end{equation}
exists. This limit needs to be taken with respect to $L^2_j$, and not $L^2_{j,loc}$. As before, we have that $\| \eta(t) \|_{L^2(Y)}$ is uniformly bounded in $t$, because of the $L^2_j({\mathbb{R}}\mathfrak{t}imes Y)$ bounds on $\eta$. Therefore, for $h$ sufficiently small, we have that $\gamma + h \cdot \eta(t)$ is in $S^1 \cdot U_{x^n_\infty}$ for $t \geq T$, where we can choose $T$ independent of $h$. A similar statement applies for $t \leq -T$. Therefore, by Proposition~\ref{xi:half-cylinder} the above limit exists in the $L^2_j$ topology when we replace ${\mathbb{R}}\mathfrak{t}imes Y$ with $[T,\infty) \mathfrak{t}imes Y$ and with $(-\infty, -T] \mathfrak{t}imes Y$. By Proposition~\ref{prop:Xixy-paths}\eqref{xi:paths}, we have that the limit in \eqref{xi:L2j-deriv-exists} exists when we replace $\rr$ with $[-T,T]$. This establishes the existence of the limit on $L^2_j({\mathbb{R}}\mathfrak{t}imes Y)$.
The smoothness of $\mathcal{X}i_\lambda$ in $\lambda$ at and near infinity can again be deduced by a similar argument, applying Proposition~\ref{prop:Xixy-paths}\eqref{xi:paths} for a fixed compact interval and Proposition~\ref{xi:half-cylinder} outside of this interval.
\end{proof}
Before moving on to Propositions~\ref{prop:Xixy-paths}\eqref{xi:Bgc} and \eqref{xi:Vgc} in the final subsection, we will give the promised proof of Proposition~\ref{xi:half-cylinder}. This will be proved using similar techniques as for Proposition~\ref{prop:Xixy-paths}\eqref{xi:paths}; however, since we work in a region where some $\beta_n \equiv 1$ (and all other $\beta_{n'} \equiv 0$), we do not need to worry about $\omega^n_\infty$ and thus our job will be easier.
We need one more technical lemma about superposition operators before giving the proof.
\begin{lemma}\label{lem:xi-inversion-global}
Fix $x\in \widehat{W}_j$ non-zero and $I \subseteq \rr$. There is a smooth map
$$
\widehat{W}_j(I \times Y, x) \to \widehat{W}_j\Bigl(I \times Y, \frac{x}{\| x \|_{L^2(Y)}}\Bigr), \ \gamma \mapsto \frac{\gamma}{\| \gamma \|_{L^2(Y)}},
$$
defined in a neighborhood of any $\gamma \in \widehat{W}_j(I \times Y, x)$ with $ \| \gamma(t)\|_{L^2(Y)} \geq \varepsilon > 0$ for all $t \in I$. Further, this family of maps is smooth in $x$.
\end{lemma}
\begin{proof}
Without loss of generality, we assume that $\|x\|_{L^2(Y)}=1$. (We can reduce to this case by dividing both $\gamma$ and $x$ by $\|x\|_{L^2(Y)}$.)
Let
$$\zeta := \gamma - x \in \widehat{W}_j(I \mathfrak{t}imes Y).$$
We are interested in showing that the following quantity is in $\widehat{W}_j(I \mathfrak{t}imes Y)$:
\begin{align*}
\frac{\gamma}{\| \gamma \|_{L^2(Y)}} - x &= \frac{\zeta + x}{\| \zeta + x \|_{L^2(Y)}} - x \\
&= \zeta \cdot (\|\zeta + x \|_{L^2(Y)}^{-1} - 1) + x \cdot (\|\zeta + x \|_{L^2(Y)}^{-1} - 1) + \zeta.
\end{align*}
Since $\zeta \in \widehat{W}_j(I \mathfrak{t}imes Y)$ and $x \in \widehat{W}_j$, in view of Lemma~\ref{xi:pointwise-mult}, it suffices to show that
\begin{equation}
\label{eq:Zeta}
\|\zeta + x \|_{L^2(Y)}^{-1} - 1 \in L^2_j(I; \rr).
\end{equation}
Set
$$ g:= \|\zeta + x \|_{L^2(Y)}^{2} - 1 = \langle \zeta,\zeta \rangle_{L^2(Y)} + 2\langle \zeta, x \rangle_{L^2(Y)}.$$
By applying Lemma~\ref{xi:inner-product-path}, we see that $g \in L^2_j(I; \rr)$. The expression in \eqref{eq:Zeta} can be written as $h \circ g$, where $h(y)=(y+1)^{-1/2} - 1.$ Note that $h(0)=0$. Thus, we can apply Lemma~\ref{lem:composeSobolev}(b), and deduce that $h \circ g \in L^2_j(I; \rr)$, as desired.
Smoothness with respect to $\gamma$ follows from the smoothness statements in Lemmas~\ref{xi:inner-product-path}, ~\ref{xi:pointwise-mult} and ~\ref{lem:composeSobolev}(b).
Similar arguments can be used to prove smoothness with respect to $x$.
\end{proof}
With the above lemma, we can prove Proposition~\ref{xi:half-cylinder}.
\begin{proof}[Proof of Proposition~\ref{xi:half-cylinder}]
Recall that by our assumptions on the path $\gamma$, we have that $\beta_i(\gamma) \equiv 1$ for some index $i$, and $\beta_n(\gamma) \equiv 0$ for all $n \neq i$. Writing $\gamma(t) = (a(t), s(t), \phi(t))$, we have
$$
\mathcal{X}i_\lambda(\gamma) = \left (a(t) + a^i_\infty - a^i_\lambda, s(t) + s^i_\infty - s^i_\lambda, \frac{\phi(t) - \langle \phi(t), \phi^i_\infty \rangle_{L^2(Y)}
\cdot v^i_\lambda}{\| \phi(t) - \langle \phi(t), \phi^i_\infty \rangle_{L^2(Y)} \cdot v^i_\lambda \|_{L^2(Y)}} \right).
$$
Recall that the spinorial component of $\mathcal{X}i_\lambda(\gamma)$ in this case is written as $\frac{f_\lambda(\gamma(t))}{\| f_\lambda(\gamma(t))\|_{L^2(Y)}}$. The result now follows from applying Lemmas~\ref{xi:inner-product-path}, \ref{xi:pointwise-mult} and \ref{lem:xi-inversion-global} using the lower bounds given in \eqref{xi:flambda-L2j}.
\end{proof}
\section{Extensions of $\mathcal{X}i_{\lambda}$ to other path spaces}
\label{sec:xi-ext}
With the main technical results established, we are easily able to extend $\mathcal{X}i_\lambda$ to larger path spaces to complete the proof of Proposition~\ref{prop:Xixy-paths} by proving parts \eqref{xi:Bgc} and \eqref{xi:Vgc}.
\begin{proof}[Proof of Proposition~\ref{prop:Xixy-paths}\eqref{xi:Bgc}]
Recall from Section~\ref{sec:path} the space $\widetilde{\mathcal{C}}^{\operatorname{gC},\tau}_j(x,y)$. This space is larger than $\widetilde{W}^\tau_j(x,y)$ due to the condition of pseudo-temporal gauge as opposed to temporal gauge. Since $\widetilde{\mathcal{B}}^{\operatorname{gC},\tau}_j([x_\lambda],[y_\lambda])$ is the quotient of $\widetilde{\mathcal{C}}^{\operatorname{gC},\tau}_j(x_\lambda, y_\lambda)$ by the gauge group $\G^{\operatorname{gC}}_{j+1}({\mathbb{R}}\times Y)$, we will extend $\mathcal{X}i_\lambda$ to a diffeomorphism
$$
\mathcal{X}i_\lambda : \widetilde{\mathcal{C}}^{\operatorname{gC},\tau}_j(x_\lambda,y_\lambda) \to \widetilde{\mathcal{C}}^{\operatorname{gC},\tau}_j(x_\infty, y_\infty),
$$
which commutes with the action by $\G^{\operatorname{gC}}_{j+1}({\mathbb{R}}\times Y)$, and this will induce the desired diffeomorphism for $\widetilde{\mathcal{B}}^{\operatorname{gC},\tau}_j$.
Let $(a(t) + \alpha(t)dt, s(t), \phi(t)) \in \widetilde{\mathcal{C}}^{\operatorname{gC},\tau}_j(x_\lambda, y_\lambda)$, where $(a(t), s(t), \phi(t)) \in \widetilde{W}^\tau_j(x_\lambda, y_\lambda)$. By Proposition~\ref{prop:Xixy-paths}\eqref{xi:paths-smooth-f}, we have that $\mathcal{X}i_\lambda(a(t), s(t), \phi(t)) \in \widetilde{\mathcal{C}}^{\operatorname{gC},\tau}_j(x_\infty, y_\infty)$. We now define
$$
\mathcal{X}i_\lambda(a(t) + \alpha(t)dt, s(t), \phi(t)) = \mathcal{X}i_\lambda(a(t), s(t), \phi(t)) + (\alpha(t)dt,0,0).
$$
Since $\mathcal{X}i_\lambda$ induces a diffeomorphism from $\widetilde{W}^\tau_j(x_\lambda,y_\lambda)$ to $\widetilde{W}^\tau_j(x_\infty, y_\infty)$, it is clear that $\mathcal{X}i_\lambda$ induces a diffeomorphism from $\widetilde{\mathcal{C}}^{\operatorname{gC},\tau}_j(x_\lambda,y_\lambda)$ to $\widetilde{\mathcal{C}}^{\operatorname{gC},\tau}_j(x_\infty, y_\infty)$. Thus, it remains to see that this induced map respects the (four-dimensional) gauge action. Let $u \in \G^{\operatorname{gC},\tau}_{j+1}({\mathbb{R}}\times Y)$. For notation, we will write $u(t) \cdot V(t)$ to mean the path obtained by applying $u(t)$ pointwise.
Using the $S^1$-equivariance of $\mathcal{X}i_\lambda :\widetilde{W}^\sigma_j \to \widetilde{W}^\sigma_j$, we have
\begin{align*}
u \cdot \mathcal{X}i_\lambda(a(t) + \alpha(t)dt, s(t), \phi(t)) &= u \cdot (\mathcal{X}i_\lambda(a(t), s(t), \phi(t)) + (\alpha(t)dt,0,0)) \\
&= (-u^{-1}\frac{du}{dt} dt, 0, 0) + u(t) \cdot \mathcal{X}i_\lambda(a(t),s(t),\phi(t)) + (\alpha(t)dt,0,0) \\
&= (-u^{-1}\frac{du}{dt} dt, 0, 0) + \mathcal{X}i_\lambda(a(t),s(t), u(t) \cdot \phi(t)) + (\alpha(t)dt,0,0) \\
&= \mathcal{X}i_\lambda((-u^{-1}\frac{du}{dt} dt + \alpha(t)dt + a(t),s(t), u(t) \cdot \phi(t))) \\
&= \mathcal{X}i_\lambda( u \cdot (a(t) + \alpha(t)dt,s(t),\phi(t))). \\
\end{align*}
It follows that $\mathcal{X}i_\lambda$ induces a diffeomorphism from $\widetilde{\mathcal{B}}^{\operatorname{gC},\tau}_j([x_\lambda], [y_\lambda])$ to $\widetilde{\mathcal{B}}^{\operatorname{gC},\tau}_j([x_\infty], [y_\infty])$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:Xixy-paths}\eqref{xi:Vgc}]
We would like to show that $\mathcal{X}i_\lambda$ induces a bundle map
$$
\xymatrix{
\V^{\operatorname{gC},\tau}_j \ar[r] \ar[d] & \V^{\operatorname{gC},\tau}_j \ar[d] \\
\widetilde{\mathcal{B}}^{\operatorname{gC},\tau}_j([x_\lambda], [y_\lambda]) \ar[r]^{\mathcal{X}i_\lambda} & \widetilde{\mathcal{B}}^{\operatorname{gC},\tau}_j([x_\infty], [y_\infty]),
}
$$
which is a diffeomorphism on the fibers.
Here, we recall that the bundle $\V^{\operatorname{gC},\tau}_j({\mathbb{R}}\times Y)$ over $\widetilde{\mathcal{B}}^{\operatorname{gC},\tau}_j([x],[y])$ comes from quotienting the bundle $\V^{\operatorname{gC},\tau}_j({\mathbb{R}}\times Y)$ over $\widetilde{\mathcal{C}}^{\operatorname{gC},\tau}_j(x,y)$ by the gauge action. Therefore, it suffices to show that $\mathcal{X}i_\lambda$ induces a $\G^{\operatorname{gC},\tau}_{j+1}({\mathbb{R}}\times Y)$-equivariant bundle map
$$
\xymatrix{
\V^{\operatorname{gC},\tau}_j \ar[r]^{\mathcal{X}i_\lambda} \ar[d]& \V^{\operatorname{gC},\tau}_j \ar[d] \\
\widetilde{\mathcal{C}}^{\operatorname{gC},\tau}_j(x_\lambda, y_\lambda) \ar[r]^{\mathcal{X}i_\lambda} & \widetilde{\mathcal{C}}^{\operatorname{gC},\tau}_j(x_\infty, y_\infty),
}
$$
which is a diffeomorphism on the fibers.
Recall that the bundle $\V^{\operatorname{gC},\tau}_j$ over $\widetilde{\mathcal{C}}^{\operatorname{gC},\tau}_j(x,y)$ has fiber over $(a(t) + \alpha(t)dt,s(t),\phi(t))$ consisting of paths $(b(t), r(t), \psi(t))$ satisfying $\operatorname{Re} \langle \phi(t), \psi(t) \rangle_{L^2(Y)} = 0$ for all $t$. In other words, if we write $\gamma(t) = (a(t), s(t), \phi(t))$, then the fiber over $(a(t) + \alpha(t)dt,s(t),\phi(t))$ consists of paths $\eta$ such that $\eta(t) \in \T^{\operatorname{gC},\sigma}_{j, \gamma(t)}$ for all $t$.
Since $\mathcal{X}i_\lambda : \widetilde{W}^\sigma_j \to \widetilde{W}^\sigma_j$ is a diffeomorphism, for any $x \in \widetilde{W}^\sigma_j$ we have an induced linear isomorphism
\begin{equation}\label{eq:Dx-Xi-lambda}
\D_x\mathcal{X}i_\lambda: \T^{\operatorname{gC},\sigma}_{j,x} \stackrel{\cong}{\to} \T^{\operatorname{gC},\sigma}_{j,\mathcal{X}i_\lambda(x)}.
\end{equation}
Therefore, for an element $\eta \in \V^{\operatorname{gC},\mathfrak{t}au}_j$ which sits over $(a(t) + \alpha(t)dt,s(t),\phi(t))$, we can define $\mathcal{X}i_\lambda(\eta)$ by pushforward. More precisely, $\mathcal{X}i_\lambda(\eta)$ is the path in $\V^{\operatorname{gC},\mathfrak{t}au}_j$ given by $$\bigl(\D_{(a(t),s(t),\phi(t))} \mathcal{X}i_\lambda\bigr)(\eta(t)).$$
By construction $\mathcal{X}i_\lambda(\eta)$ sits over $\mathcal{X}i_\lambda(a(t) + \alpha(t) dt, s(t), \phi(t))$, and thus we have a bundle map. It is not difficult to check that $(\mathcal{X}i_\lambda)_*$ induces a diffeomorphism of the fiber over $(a(t) + \alpha(t) dt, s(t), \phi(t))$, as $(\mathcal{X}i^{-1}_\lambda)_*$ provides the inverse. The gauge-equivariance follows from the gauge-equivariance of $\mathcal{X}i_\lambda$ on $\mathcal{C}^{\operatorname{gC},\mathfrak{t}au}$.
The analogous result for $\mathfrak{t}B^{\operatorname{gC},\mathfrak{t}au}_j([x_\infty], [y_\infty])/\mathbb{R}$ now follows from the above arguments together with the fact that if $[x_\infty] \neq [y_\infty]$, then the $\mathbb{R}$-action is free on $\mathfrak{t}B^{\operatorname{gC},\mathfrak{t}au}_j([x_\infty], [y_\infty])$ and $\mathfrak{t}B^{\operatorname{gC},\mathfrak{t}au}_j([x_\lambda], [y_\lambda])$.
\end{proof}
\chapter{Convergence of approximate trajectories}\label{sec:trajectories1}
In this chapter and the subsequent one, we establish the analogues of the results of Chapter~\ref{sec:criticalpoints} for flow trajectories instead of stationary points. In this chapter, we focus on results related to the convergence of approximate trajectories to honest trajectories as $\lambda \to \infty$. We now return to the case that $\lambda = \lambda^{\bullet}_i \gg 0$; we will often omit this assumption from the discussion.
\section{Convergence downstairs}\label{sec:convergence-downstairs}
We start by discussing trajectories of $\mathcal{X}qmlgc$ in the blow-down. There are two kinds of convergence results that one expects. The simpler one is $C^{\infty}_{loc}$ convergence of parameterized trajectories, and the more refined one is convergence of unparameterized trajectories to a broken trajectory. We already know that the former kind holds:
\begin{proposition}\label{prop:convergencenoblowup}
Let $I \subseteq \R$ be a closed interval, and $\gamma_n : I \to W$ be a sequence of trajectories of $\mathcal{X}qmlngc$ contained in $B(2R)$, where $\lambda_n \to \infty$. Then, there exists a subsequence of $\gamma_n$ for which the restrictions to any subinterval $I' \Subset I$ converge in the $C^{\infty}$ topology of $W(I' \times Y)$ to $\gamma$, a trajectory of $\mathcal{X}qgc$.
\end{proposition}
\begin{proof}
As discussed in Section~\ref{sec:verycompact}, $l + c_\q$ satisfies the hypotheses of Proposition~\ref{prop:proposition3perturbed}. Thus, the result follows from Lemma~\ref{lem:convergencenoblowup}.
\end{proof}
In particular, when $I=\R$, the conclusion of Proposition~\ref{prop:convergencenoblowup} is convergence in the $C^{\infty}_{loc}$ topology of $W(\rr\mathfrak{t}imes Y)$. We denote the resulting topological space by $W_{loc}(\rr\mathfrak{t}imes Y)$.
We now seek to show that a sequence of unparameterized trajectories for the approximate equations converge, in a certain sense, to a broken trajectory of $\mathcal{X}qgc$. It will be convenient to work in the quotient $W/S^1$. Given $x \in W$, we write $[x]$ for its projection to $W/S^1$. If $x$ is a stationary point (of $\mathcal{X}qgc$ or $\mathcal{X}qmlgc$), we will say that $[x]$ is a {\em stationary point class}.
Furthermore, given an interval $I \subset \R$ and a trajectory $\gamma: I \to W$ of $\mathcal{X}qgc$ or $\mathcal{X}qmlgc$, we consider the associated {\em parameterized trajectory class}
$$[\gamma] \in W(I \times Y)/S^1,$$
where $S^1$ acts by constant gauge transformations. If $I=\R$, we can further divide by reparameterizations (translations in the domain $\R$), and obtain the {\em unparameterized trajectory class}
$$[\breve{\gamma}] \in W(\rr\times Y)/(S^1 \times \R).$$
When $I=\R$, Lemma 16.2.4 in \cite{KMbook}, translated into slicewise Coulomb gauge, shows that any parameterized trajectory class $[\gamma]$ of $\mathcal{X}qgc$ with finite energy admits limit points $[x]$ and $[y]$ at $\pm \infty$, with $[x]$ and $[y]$ being stationary point classes of $\mathcal{X}qgc$. Corollary~\ref{cor:endpointsBlowDown} gives the analogous result for trajectory classes of $\mathcal{X}qmlgc$, provided these come from trajectories contained in $B(2R)$. Since the limit points are unchanged by reparameterizations, we can also talk about the limit points of unparameterized trajectory classes.
\begin{definition}
\label{def:broken}
Fix $[x]$ and $[y]$, stationary point classes of $\mathcal{X}qgc$. An {\em (unparameterized) broken trajectory class} of $\mathcal{X}qgc$ from $[x]$ to $[y]$ consists of
\begin{itemize}
\item an integer $m \geq 0$, the number of components;
\item an $(m+1)$-tuple of stationary point classes of $\mathcal{X}qgc$: $[x] = [x_0], \ldots, [x_m] = [y]$
\item an unparameterized trajectory class $[\breve{\gamma}_i]$ of $\mathcal{X}qgc$ from $[x_{i-1}]$ to $[x_{i}]$, for every $i = 1,\ldots,m$.
\end{itemize}
We will represent broken trajectory classes by the tuple $[\breve{\gammas}] = ([\breve{\gamma}_1],\ldots,[\breve{\gamma}_m])$.
\end{definition}
Next, we want to say what it means for unparameterized trajectory classes of $\mathcal{X}qmlngc$ to converge to a broken trajectory class of $\mathcal{X}qgc$, as $\lambda_n \to \infty$. This is done in Definition~\ref{def:brokenconvergencedownstairs} below, which is inspired by the construction of the topology on the space of broken trajectories of $\mathcal{X}q$, in \cite[Section 16.1]{KMbook}. Given a trajectory $\gamma: \rr\to W$ and $s \in \R$, recall that we write $\tau_s \gamma$ for the translate, $(\tau_s\gamma)(t) = \gamma(s+t)$.
From Proposition~\ref{prop:nearby} and Lemma~\ref{lem:implicitfunctionreducible}, for any stationary point class $[x]$ of $\mathcal{X}qgc$, there is a corresponding (nearby) stationary point class of $\mathcal{X}qmlgc$, which we denote by $[x_{\lambda}]$, and similarly in the blow-up. We also recall from the discussion between Corollaries~\ref{cor:endpointsBlowUp} and \ref{cor:trajectory-Flambda} that for $\lambda \gg 0$, trajectories $[\gamma_\lambda]$ of $\mathcal{X}qmlagcsigma$ in $(W^\lambda \cap B(2R))^\sigma/S^1$ connecting stationary points $[x_\lambda]$ and $[y_\lambda]$ are contained in $\B^{\operatorname{gC},\tau}_k([x_\lambda],[y_\lambda])$. In particular, such trajectories satisfy $\lim_{t \to -\infty} [\tau_t^* \gamma_\lambda] = [\gamma_{x_\lambda}]$ and $\lim_{t \to +\infty} [\tau_t^* \gamma_\lambda] = [\gamma_{y_\lambda}]$ in $\B^{\operatorname{gC},\tau}_{k,loc}(\rr\times Y)$, analogous to Definition~\ref{def:sw-moduli-space}. We have the analogous result in the blow-down as well. We will use these facts implicitly throughout the next two sections.
\begin{definition}
\label{def:brokenconvergencedownstairs}
Fix $[x]$ and $[y]$, stationary point classes of $\mathcal{X}qgc$. For $\lambda_n \to \infty$, consider a sequence of unparameterized trajectory classes $[\breve{\gamma}_n]$ of $\mathcal{X}qmlngc$, coming from trajectories
$$\gamma_n: \rr\to W^{\lambda_n} \cap B(2R)$$
and such that the endpoints of $[\breve{\gamma}_n]$ are the stationary point classes $[x_{\lambda_n}]$ (respectively $[y_{\lambda_n}]$) of $\mathcal{X}qmlngc$ that correspond to $[x]$ (respectively $[y]$).
We say that $[\breve{\gamma}_n]$ converges to the broken trajectory class $[\breve{\gammas}_\infty] = ([\breve{\gamma}_{\infty,1}],\ldots,[\breve{\gamma}_{\infty,m}])$ if there exist sequences of real numbers $(s_{n, i})_{n \geq 0}$, for $i=1, \dots, m$, with
$$ s_{n, 1} < s_{n,2} < \dots < s_{n, m}$$
and
$$ s_{n, i} - s_{n, i-1} \to \infty \ \text{ as } \ n \to \infty \quad \text{for } i = 2, \dots, m,$$
such that, for each $i$, the translates $[\tau_{s_{n,i}} \gamma_n]$ converge to some representative $[\gamma_{\infty, i}]$ of $[\breve{\gamma}_{\infty, i}]$ in the quotient topology of $W_{loc}(\rr\times Y)/S^1$.
\end{definition}
\begin{proposition}\label{prop:brokenconvergencedownstairs}
Fix $[x]$ and $[y]$, stationary point classes of $\mathcal{X}qgc$. Fix a sequence $\lambda_n \to \infty$ and a sequence of unparameterized trajectory classes $[\breve{\gamma}_n]$ of $\mathcal{X}qmlngc$, going from $[x_{\lambda_n}]$ to $[y_{\lambda_n}]$, and such that the representatives $\gamma_n$ of $[\breve{\gamma}_n]$ are contained in $ W^{\lambda_n} \cap B(2R)$. Then, there exists a subsequence of $[\breve{\gamma}_n]$ that converges to a broken trajectory class $[\breve{\gammas}_{\infty}]$ of $\mathcal{X}qgc$, in the sense of Definition~\ref{def:brokenconvergencedownstairs}.
\end{proposition}
\begin{proof}
The proof is essentially the same as \cite[Proposition 16.2.1]{KMbook}. We provide an outline.
Since the energy $\mathcal{E}(\gamma)$ of a trajectory $\gamma$ of $\mathcal{X}qmlgc$ is unchanged by constant gauge transformations, we can talk about the energy $\mathcal{E}([\gamma])$ of the respective parameterized trajectory class. Similarly, since the functions $F_{\lambda}$ constructed in Chapter~\ref{sec:quasigradient} are $S^1$-invariant, we can talk about the drop in $F_{\lambda}$ for a parameterized trajectory class. In view of Corollary~\ref{cor:trajectory-Flambda}, the energy and the drop in $F_{\lambda}$ are commensurable.
Fix a compact interval $I$, constant $C > 0$, and a neighborhood $U_{[x]}$ of each stationary point class $[x]$ of $\mathcal{X}qgc$, where $[x]$ is thought of as a constant trajectory class in $W(I \times Y)/S^1$. Using Proposition~\ref{prop:convergencenoblowup}, for any other compact interval $I'$, we can find $\varepsilon > 0$ independent of $\lambda$ such that if $[\gamma]$ is a trajectory class of $\mathcal{X}qmlgc$ with energy at most $C$, and with energy at most $\varepsilon$ when restricted to $I'$, then $[\gamma]|_{I \times Y}$ is contained in some $U_{[x]}$. (Compare with Lemma 16.2.2 in \cite{KMbook}.)
Next, observe that, since the trajectory classes $[\gamma_n]$ go from $[x_{\lambda_n}]$ to $[y_{\lambda_n}]$, and
$$[x_{\lambda_n}] \to [x], \ \ [y_{\lambda_n}] \to [y], \ \ F_{\lambda_n} \to \mathscr{L}_{\q} \ \text{ as } \ n \to \infty,$$
we have that the drop in $F_{\lambda_n}$ along $[\gamma_n]$ is bounded. Hence, the energy of $[\gamma_n]$ is bounded, by a constant $K$ independent of $n$. With $\varepsilon > 0$ chosen as in the previous paragraph, we find that, for each $n$, there are at most $2K/\varepsilon$ integers $p$ such that
$$\mathcal{E}( [\tau_p \gamma_n]|_{[-1,1]}) = \mathcal{E}([\gamma_n]|_{[p-1,p+1]}) > \varepsilon.$$
Therefore, for all other $p$, we must have $[\tau_p \gamma_n] \in U_{[x]}$ for some stationary point class $[x]$ of $\mathcal{X}qgc$.
Starting from here, for each $n \gg 0$, we decompose $\R$ into finitely many intervals $I_i^n = [a_i^n, b_i^n]$ of fixed length, and intervals
$$J_0^n=(-\infty, a_1^n], \ J_i^n=[b_i^n, a_{i+1}^n], \ J_m^n=[b_m^n, \infty),$$ with the length of each $J_i^n$ going to infinity as $n \to \infty$, and the number $m$ of intervals being independent of $n$. The restriction of $[\gamma_n]$ to each $J_i^n$ lies near a stationary point class of $\mathcal{X}qgc$, and these point classes provide the breaking points of the limiting broken trajectory class. By applying Proposition~\ref{prop:convergencenoblowup} to the restrictions of $[\gamma_n]$ to $I_i^n$, we can arrange that they are convergent in $C^{\infty}_{loc}$. This gives the required convergence to a broken trajectory class.
\end{proof}
\section{Convergence of parameterized trajectories in the blow-up}
We now move to the blow-up $W^{\sigma}$. We are interested in showing that the trajectories of $\mathcal{X}qmlgcsigma$ are close to those of $\mathcal{X}qgcsigma$, given appropriate control on the $\lambda$-spinorial energy. The goal of this section is to establish the following convergence result for parameterized trajectories. Before doing so, a quick notational remark. Sometimes we will be interested in studying the image of a path in $W^\sigma/S^1$ in the blow-down. When doing so, we will use the notation $\gamma^\tau$ for the path upstairs and $\gamma$ for the blow-down.
\begin{proposition}\label{prop:convergence1}
Fix $\omega > 0$ and a compact interval $I = [t_1,t_2] \subset \R$. Consider a smaller interval $I_{\varepsilon}=[t_1 + \varepsilon, t_2 - \varepsilon]$ for $\varepsilon > 0$. Suppose that $$\gamma^\tau_n: I \to (W^{\lambda_n} \cap B(2R))^{\sigma}$$ is a sequence of trajectories of $\mathcal{X}qmlngcsigma$, where $\lambda_n \to \infty$. Furthermore, suppose that at the ends of $I_{\varepsilon}$ we have
$$\Lambda_{\q^{\lambda_n}}(\gamma^\tau_n(t_1+\varepsilon)) \leq \omega, \ \ \ \Lambda_{\q^{\lambda_n}}(\gamma^\tau_n(t_2-\varepsilon)) \geq -\omega,$$ for all $n$. Then, there exists a subsequence of $\gamma^{\tau}_n$ for which the restrictions to any $I' \Subset I_{\varepsilon}$ converge in the $C^{\infty}$ topology of $W^\tau(I' \times Y)$ to $\gamma^\tau$, a trajectory of $\mathcal{X}qgcsigma$.
\end{proposition}
\begin{remark}\label{rmk:compactnessdifferences}
The analogous compactness result for trajectories of $\mathcal{X}qsigma$ is Theorem 10.9.2 in \cite{KMbook}. However, there are a few differences between the statement of Proposition~\ref{prop:convergence1} and \cite[Theorem 10.9.2]{KMbook}. Of course, the main one is that our result deals with trajectories of the vector fields $\mathcal{X}qmlngcsigma$, which approximate the vector field $\mathcal{X}qgcsigma$ in Coulomb gauge. Another difference is that in \cite[Theorem 10.9.2]{KMbook} one requires a bound on the energy of trajectories, that is, on the drop of the perturbed Chern-Simons-Dirac functional $\mathscr{L}q$; this is unnecessary in our setting since our trajectories are assumed to be in $B(2R)^{\operatorname{sp}incigma}$, which automatically gives bounds on $\mathscr{L}q$. Also, in \cite[Theorem 10.9.2]{KMbook}, the assumption is that $\q$ is a $k$-tame perturbation, and the conclusion is convergence in $L^2_{k+1}$. In our setting, $\q$ is tame (for all $k$), and we will use bootstrapping and a diagonalization argument for subsequences to get convergence in $C^{\infty}$. Finally, in \cite[Theorem 10.9.2]{KMbook}, the conclusion is convergence after gauge transformations. In our setting trajectories start in temporal and slicewise global Coulomb gauge; the remaining gauge consists only of constant transformations in $S^1$. Since $S^1$ is compact, there is no need to change trajectories by gauge before they converge. Compare the remark after Corollary 5.1.8 on \cite[p.110]{KMbook}.
\end{remark}
Before proving Proposition~\ref{prop:convergence1}, we need a couple of lemmas. The first is the following unique continuation result, analogous to \cite[Proposition 10.8.1]{KMbook}:
\begin{lemma}\label{lem:uniquecontinuation}
Let $\gamma(t) = (a(t),\phi(t))$ be a trajectory of $\mathcal{X}qmlgc$ in $B(2R)$ for some $\lambda \in (0,\infty]$. If $\phi(t) = 0$ for some $t$, then $\phi(t) \equiv 0$.
\end{lemma}
\begin{proof}
Since $\gamma(t)$ is a trajectory, we have
\[-\frac{d}{dt}\phi(t) = D \phi(t) + (p^\lambda c_\q)^1(\gamma(t)).\]
Furthermore, $\gamma(t)$ is continuous and thus $(p^\lambda c_\mathfrak{q})^1(\gamma(t))$ is a continuous path in $L^2_k(Y;\mathbb{S})$ as well. By \cite[Lemma 7.1.3]{KMbook}, it suffices to show that $\| (p^\lambda c_\q)^1(\gamma(t)) \|_{L^2} \leq C \| \phi(t) \|_{L^2}$ for all $t$. This follows from Lemma~\ref{lem:linearbounds}.
\end{proof}
The second result we need is the analogue of \cite[Lemma 10.9.1]{KMbook}:
\begin{lemma}
\label{lemma:1091}
There is a constant $C > 0$ such that, for any $\lambda \gg 0$, any interval $[t_1, t_2] \subseteq \R$, any trajectory $\gamma^{\tau}: [t_1, t_2] \to B(2R)^{\sigma}$ of $\mathcal{X}qmlgcsigma$, and any $t \in [t_1, t_2]$, we have
$$\frac{d}{dt} \Lambda_{\q^\lambda}(\gamma^{\tau}(t)) \leq C \cdot \|\mathcal{X}qmlgc(\gamma(t))\|_{L^2_k(Y)},$$
where $\gamma$ is the projection of $\gamma^{\mathfrak{t}au}$ in the blow-down.
\end{lemma}
\begin{proof}
This is similar to the proof of Lemma 10.9.1 in \cite{KMbook}. Note that the bound in \cite{KMbook} involved a function $\zeta(\gamma(t))$; in our case we can take this to be a constant, because we assumed that $\gamma(t) \operatorname{sp}incubset B(2R)$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:convergence1}]
The argument follows that of Theorem 8.1.1 and Theorem 10.9.2 in \cite{KMbook} nearly verbatim. We give the argument for completeness.
Write $\gamma^\mathfrak{t}au_n(t) = (a_n(t), s_n(t), \phi_n(t))$. First, by Proposition~\ref{prop:convergencenoblowup} we can find a subsequence of the blown-down sequence $\gamma_n(t) = (a_n(t), s_n(t) \phi_n(t))$ for which the restriction to any $I' \Subset I$ converges in all $L^2_j(I' \mathfrak{t}imes Y)$ norms. Choose $I'$ such that $I_\varepsilonilon \Subset I'$. In particular, we have bounds (and convergence) for all $L^2_j(I' \mathfrak{t}imes Y)$ norms on $a(t)$. Therefore, we will now focus on the convergence of $s_n(t)$ and $\phi_n(t)$.
By Lemma~\ref{lem:uniquecontinuation}, each trajectory is either slicewise reducible or irreducible. After passing to a further subsequence, we can assume that $\gamma^\mathfrak{t}au_n$ are all of the same type. We first consider the case that $\gamma^\mathfrak{t}au_n(t)$ are irreducible for all $n$ and $t$. If the limit of $\gamma_n$ is irreducible, we are done using the result in the blow-down. Therefore, we assume that the limit is reducible for all $t$. It will be useful to simultaneously think of $s_n(t) \phi_n(t)$ as a four-dimensional spinor $z_n \Phi_n$, where $z_n = \| s_n \phi_n \|_{L^2(I_\varepsilonilon \mathfrak{t}imes Y)}$ and $\| \Phi_n \|_{L^2(I_\varepsilonilon \mathfrak{t}imes Y)} = 1$, so $z_n \Phi_n = s_n \phi_n$. (We have $z_n \neq 0$ since $s_n (t) \neq 0$ for {\em every} $t \in I'$.) The subtle change from $I'$ to $I_\varepsilonilon$ when computing $L^2$ norms will be used shortly.
Since $\gamma_n$ is an approximate trajectory in the blow-down, so is $e^{i \theta} \gamma_n$, and therefore we see that
$$
- D^+(e^{i\theta} z_n\Phi_n) = (p^{\lambda_n} c_\q)^1(a_n, e^{i\theta} z_n\Phi_n),
$$
for all $\theta$. (Here $D^+$ denotes the Dirac operator in four dimensions for the trivial connection induced by $A_0$.) By differentiating with respect to $\theta$ and evaluating at $\theta = 0$, we obtain
$$
- D^+(z_n \Phi_n) = \D_{\gamma_n} (p^{\lambda_n} c_\q)^1(0,z_n\Phi_n),
$$
and thus by complex linearity,
$$
- D^+(\Phi_n) = \D_{\gamma_n} (p^{\lambda_n} c_\q)^1(0,\Phi_n).
$$
By Lemma~\ref{lem:fam}, the $L^2(I' \times Y)$ bounds on $\Phi_n$ and $L^2_k(I' \times Y)$ bounds on $\gamma_n$ give that the right-hand side is $L^2(I' \times Y)$ bounded. By ellipticity, we see that $\Phi_n$ is $L^2_1$-bounded on any interior cylinder. We can do further bootstrapping to obtain $L^2_{j}(I_\varepsilon \times Y)$ bounds on the four-dimensional spinor $\Phi_n$ for all $j$. In particular, we can arrange for a further subsequence for which $\Phi_n$ converges in all $L^2_j(I_\varepsilon \times Y)$ norms to a spinor $\Phi$ with $\| \Phi \|_{L^2(I_\varepsilon \times Y)} = 1$.
By Lemma~\ref{lemma:1091} and the bounds on $\Lambda_{\q^{\lambda_n}}$ at the endpoints of $I_\varepsilon$, we see that $\Lambda_{\q^{\lambda_n}}(\gamma^\tau_n(t))$ is uniformly bounded in $n$ and $t \in I_\varepsilon$. Since $-\Lambda_{\q^{\lambda_n}}(\gamma^\tau_n(t)) = \frac{d}{dt} \log s_n(t)$, it follows that there exists a constant $K > 0$ independent of $n$ and $t \in I_\varepsilon$ such that
$$ s_n(t) \geq K \| s_n \|_{L^2(I_\varepsilon)} = K \| s_n \phi_n \|_{L^2(I_\varepsilon \times Y)} = K z_n.$$
(We used here that $\|\phi_n(t)\|_{L^2(Y)}=1$ for all $t$.) We deduce that
$$
\| \Phi_n(t) \|_{L^2(Y)} \geq K > 0
$$
for all $t$. Since $\Phi_n$ converges to $\Phi$ in all $L^2_j(I_\varepsilon \times Y)$ norms (and thus uniformly in all $L^2_j(Y)$ norms), we see that $\phi_n(t) = \frac{\Phi_n(t)}{\| \Phi_n(t) \|_{L^2(Y)}}$ is bounded in all $L^2_j(I \times Y)$ norms by Lemma~\ref{lem:xi-inversion-global}. It follows that we have the desired convergence for the $\phi_n$.
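For completeness, here is one way to obtain a constant $K$ as above (a sketch; we write $\omega'$ for the uniform bound on $|\Lambda_{\q^{\lambda_n}}(\gamma^\tau_n(t))|$ over $I_\varepsilon$ obtained above, and $|I_\varepsilon|$ for the length of $I_\varepsilon$, notation introduced only here). Since $\frac{d}{dt} \log s_n(t) = -\Lambda_{\q^{\lambda_n}}(\gamma^\tau_n(t))$, for any $t, t' \in I_\varepsilon$ we have $s_n(t') \leq e^{\omega' |I_\varepsilon|} s_n(t)$; squaring and integrating in $t'$ over $I_\varepsilon$ gives
$$ \| s_n \|^2_{L^2(I_\varepsilon)} \leq |I_\varepsilon| \, e^{2\omega' |I_\varepsilon|} \, s_n(t)^2, \qquad \text{so one may take } K = |I_\varepsilon|^{-1/2} e^{-\omega' |I_\varepsilon|}.$$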
We now focus on the convergence for $s_n$. Of course, rather than obtaining bounds on $L^2_j(I_\varepsilon \times Y)$ norms as above, we could have used any intermediate cylinder with $I_\varepsilon \subset I'' \Subset I'$. We in fact opt for $I''$, since we will need to bootstrap again to get $L^2_j(I_\varepsilon)$ bounds on $s_n$. By \eqref{eq:Xqmlgcsigmaformula} and \eqref{eq:lambda-spinorial}, since both $\gamma_n(t)$ and $\phi_n(t)$ are bounded in all $L^2_j(I'' \times Y)$ norms, we see that $\Lambda_{\q^{\lambda_n}}(\gamma^\tau_n(t))$ is bounded in all $L^2_j(I'')$ norms. Since $\gamma^\tau_n(t)$ is a trajectory of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q^{\lambda_n}}$, \eqref{eq:Xqmlgcsigmaformula} shows that
\begin{equation}\label{eq:bootstrap-lambdaq}
-\frac{d}{dt} s_n(t) = \Lambda_{\q^{\lambda_n}}(\gamma^\tau_n(t))\, s_n(t).
\end{equation}
Note that for all $n$ and $t \in I'$, $0 < s_n(t) \leq 2R$, so we obtain uniform bounds on $\frac{d}{dt} s_n(t)$ by the bounds on $s_n(t)$ and $\Lambda_{\q^{\lambda_n}}(\gamma^\tau_n(t))$. We may continue to bootstrap using \eqref{eq:bootstrap-lambdaq} together with Sobolev multiplication to obtain bounds on $s_n$ in all $L^2_j(I_\varepsilon)$ norms, and consequently convergence in $C^\infty$. This completes the proof in the case that the $\gamma^\tau_n$ are irreducible.
Now, we consider the case that the $\gamma^\tau_n(t)$ are reducible. As in the irreducible case, we have $L^2_1(I')$ bounds on $\Lambda_{\q^{\lambda_n}}(\gamma^\tau_n(t))$, by Lemma~\ref{lemma:1091} and the bounds at the endpoints. In this case, choose $s^*_n(t)$ to solve
$$- \frac{d}{dt} s^*_n(t) = \Lambda_{\q^{\lambda_n}}(a_n(t),0,\phi_n(t))\, s^*_n(t), $$
where we ask that $0 < s^*_n (t) < M$ for all $t \in I$ and $n$. Since this is a one-dimensional ODE defined on a compact interval, we can easily arrange for such a solution. It is straightforward to verify that
$$
-\left(\frac{d}{dt} + \mathcal{X}^{\operatorname{gC}}_{\q^{\lambda_n}} \right)^1(a_n(t), s^*_n(t) \phi_n(t)) = \D_{(a_n(t),0)} (p^{\lambda_n} c_\q)^1(0,\phi_n(t)).
$$
We can now repeat the same arguments as above to obtain the desired convergence of $\phi_n(t)$.
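For concreteness, one admissible choice of $s^*_n$ above is the explicit solution (a sketch; $t_0 \in I$ denotes an arbitrary fixed base point, introduced only for this formula)
$$ s^*_n(t) = c_n \exp\left( - \int_{t_0}^{t} \Lambda_{\q^{\lambda_n}}(a_n(s),0,\phi_n(s)) \, ds \right), \qquad c_n > 0. $$
Since the integrand is bounded on the compact interval, the exponential factor is bounded above and below by positive constants, so taking $c_n$ small enough ensures $0 < s^*_n(t) < M$ for all $t \in I$.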
\end{proof}
\begin{corollary}\label{cor:convergence}
Fix $\omega > 0$ and a closed (possibly non-compact) interval $I \subseteq \R$. Suppose that $\gamma^\tau_n: I \to (W^{\lambda_n} \cap B(2R))^{\sigma}$ is a sequence of trajectories of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q^{\lambda_n}}$, where $\lambda_n \to \infty$. Furthermore, suppose that $|\Lambda_{\q^{\lambda_n}}(\gamma^\tau_n(t))| \leq \omega$ for all $n$ and $t \in I$. Then, there exists a subsequence of $\gamma^{\tau}_n$ for which the restrictions to any subinterval $I' \Subset I$ converge in the $C^{\infty}$ topology of $W^\tau(I' \times Y)$ to $\gamma^\tau$, a trajectory of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q}$.
\end{corollary}
\begin{proof}
This follows from Proposition~\ref{prop:convergence1}, by a diagonalization argument.
\end{proof}
When $I=\R$, we let $W^{\tau}_{loc}(\rr\times Y)$ be $W^{\tau}(\rr\times Y)$ with the $C^{\infty}_{loc}$ topology. Then, the conclusion of Corollary~\ref{cor:convergence} is convergence in $W^{\tau}_{loc}(\rr\times Y)$.
\section{Near-constant approximate trajectories}
This subsection contains several technical results, analogous to the ones in Sections 13.4 and 13.5 of \cite{KMbook}. We will use these to study the moduli spaces of broken trajectories of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q^\lambda}$, leading up to Propositions~\ref{prop:brokenconvergence} and \ref{prop:L2k} below.
We start by establishing the analogue of Lemma 13.4.4 in \cite{KMbook}, which gives bounds on the distance of a trajectory from a constant trajectory on $[t_1, t_2] \times Y$ in terms of $\mathscr{L}_\q$. Write $I = [t_1,t_2]$. We follow the notation of Sections~\ref{sec:Flambda} and \ref{sec:control-near-stationary}. More precisely, given a stationary point $x_\infty = (a_\infty, \phi_\infty)$ of $\mathcal{X}^{\operatorname{gC}}_{\q}$, there is an associated orbit $\mathcal{O}_\lambda$ of stationary points of $\mathcal{X}^{\operatorname{gC}}_{\q^\lambda}$. We define $x_\lambda = (a_\lambda, \phi_\lambda)$ to be the point in $\mathcal{O}_\lambda$ which is $L^2$-closest to $x_\infty$.
\begin{lemma}\label{lem:L21-bounds-Flambda}
Fix a stationary point $x_\infty = (a_\infty, \phi_\infty)$ of $\mathcal{X}^{\operatorname{gC}}_{\q}$, which we treat as a (constant) trajectory of $\mathcal{X}^{\operatorname{gC}}_{\q}$ on $I \times Y$. There exists an $S^1$-invariant neighborhood $U$ of $x_\infty$ in $W_{1}(I \times Y)$ and a constant $C$ independent of $\lambda \gg 0$ satisfying the following. If $\gamma \in U$ is a trajectory of $\mathcal{X}^{\operatorname{gC}}_{\q^\lambda}$, there exists a constant gauge transformation $u \in S^1$ such that
$$
\| u \cdot \gamma - x_\lambda \|_{L^2_1(I \times Y)} \leq C (F_\lambda(\gamma(t_2)) - F_\lambda(\gamma(t_1))).
$$
\end{lemma}
\begin{proof}
In \cite[Lemma 13.4.4]{KMbook}, Kronheimer and Mrowka choose a gauge transformation of $\gamma$ that moves it into the Coulomb-Neumann slice through $x_\infty$. In our context, since we prefer to work with $\gamma$ in slicewise global Coulomb gauge, we choose $u \in S^1$ so that
$$\operatorname{Re} \langle u \cdot \gamma(t_1), (0,i\phi_\lambda) \rangle_{L^2(Y)} = 0.$$
To suppress $u$ from the notation, for the rest of the proof, we assume $\operatorname{Re} \langle \gamma(t_1), (0,i\phi_\lambda) \rangle_{L^2(Y)} = 0$, so that $u = 1$.
We have
\begin{equation}\label{eq:distance-from-constant}
\| \gamma - x_\lambda \|^2_{L^2_1(I \times Y)} = \int^{t_2}_{t_1} \| \gamma - x_\lambda \|^2_{L^2_1(Y)} dt + \int^{t_2}_{t_1} \| \frac{d}{dt} \gamma \|^2_{L^2(Y)} dt.
\end{equation}
In view of \eqref{eq:trajectory-Flambda}, it suffices to bound the right-hand side by a constant times $\int^{t_2}_{t_1} \| \frac{d}{dt} \gamma \|^2_{L^2(Y)} dt$, or equivalently, $\int^{t_2}_{t_1} \| \mathcal{X}^{\operatorname{gC}}_{\q^\lambda}(\gamma(t)) \|_{L^2(Y)}^2 dt$. Since the second term in \eqref{eq:distance-from-constant} is already of this form, it suffices to bound the first term, i.e., to find a constant $C$ such that
$$
\int^{t_2}_{t_1} \| \gamma - x_\lambda \|^2_{L^2_1(Y)} dt \leq C \cdot \int^{t_2}_{t_1} \| \frac{d}{dt} \gamma \|_{L^2(Y)}^2 dt = C \cdot \int^{t_2}_{t_1} \| \mathcal{X}^{\operatorname{gC}}_{\q^\lambda}(\gamma(t)) \|_{L^2(Y)}^2 dt.
$$
By our choice of $\q$, we have that $x_\infty$ and $x_\lambda$ are non-degenerate stationary points of $\mathcal{X}^{\operatorname{gC}}_{\q}$ and $\mathcal{X}^{\operatorname{gC}}_{\q^\lambda}$, respectively. There exist constants $C'>0$ and $\delta > 0$ (independent of $\lambda$) such that for $\| x - x_\lambda \|_{L^2_1(Y)} < \delta$, we have
\begin{equation}\label{eq:nearby-L21-traj-bounds}
\| x - x_\lambda \|^2_{L^2_1(Y)} \leq C' \left(\|\mathcal{X}^{\operatorname{gC}}_{\q^\lambda}(x) \|^2_{L^2(Y)} + |\operatorname{Re} \langle x, (0,i\phi_\lambda) \rangle_{L^2(Y)}|^2\right),
\end{equation}
by an argument similar to that in Lemma~\ref{lem:approximate-eigenvalue-bounds}, where we do not include the second term on the right-hand side if $x_\lambda$ is reducible. Indeed, if $x$ is real $L^2$-orthogonal to $(0,i\phi_\lambda)$, then the inequality follows directly from the non-degeneracy of $x_\lambda$ as a stationary point of $\mathcal{X}^{\operatorname{gC}}_{\q^\lambda}$. More generally, we use
\begin{align*}
\| x - x_\lambda \|_{L^2_1(Y)} &\leq \| e^{i\theta}x - x_\lambda \|_{L^2_1(Y)} + \| x - e^{i\theta} x \|_{L^2_1(Y)} \\
&\leq C' \| \mathcal{X}^{\operatorname{gC}}_{\q^\lambda}(x) \|_{L^2(Y)} + |(1-e^{i\theta})| \| x \|_{L^2_1(Y)} \\
&\leq C' \| \mathcal{X}^{\operatorname{gC}}_{\q^\lambda}(x) \|_{L^2(Y)} + C' | \operatorname{Re} \langle x, (0,i\phi_\lambda) \rangle_{L^2(Y)} |,
\end{align*}
where $\theta$ is the angle between $x$ and $(0,i\phi_\lambda)$, i.e., $\sin \theta = \frac{\operatorname{Re} \langle x, (0,i\phi_\lambda) \rangle_{L^2(Y)}}{\| x\|_{L^2(Y)} \| i \phi_\lambda \|_{L^2(Y)}}$. To see the last inequality, we use that $x$ is $L^2_1$-bounded and that, for irreducibles, $\phi_\lambda$ is uniformly $L^2$-bounded above and below.
We therefore choose the neighborhood $U$ in the statement of the lemma by extending a $\delta$-neighborhood of $x_\lambda$ in $L^2_1(I \times Y)$ to be gauge invariant.
By \eqref{eq:nearby-L21-traj-bounds}, for a trajectory $\gamma \in U$, we have the desired bounds in the case that $x_\lambda$ is reducible. If $x_\lambda$ is irreducible, it remains to bound
$$
\int^{t_2}_{t_1} |\operatorname{Re} \langle \gamma(t) , (0,i\phi_\lambda) \rangle_{L^2(Y)}|^2 dt,
$$
in terms of $\int^{t_2}_{t_1} \| \frac{d}{dt} \gamma \|^2_{L^2(Y)}dt $.
We write
\begin{align*}
\int^{t_2}_{t_1} |\operatorname{Re} \langle \gamma(t), (0,i\phi_\lambda) \rangle_{L^2(Y)}|^2 dt &= \int^{t_2}_{t_1} |\operatorname{Re} \langle \gamma(t) - \gamma(t_1), (0,i\phi_\lambda) \rangle_{L^2(Y)}|^2 dt \\
&\leq C' \int^{t_2}_{t_1} \| \gamma(t) - \gamma(t_1) \|_{L^2(Y)}^2 dt \\
&\leq C' \int^{t_2}_{t_1} \| \int^{t}_{t_1} (\frac{d}{ds}\gamma) ds \|^2_{L^2(Y)} dt \\
&\leq C' \int^{t_2}_{t_1} (t - t_1) \int^t_{t_1} \| \frac{d}{ds} \gamma \|^2_{L^2(Y)} ds dt \\
& \leq C' (t_2 - t_1) \int^{t_2}_{t_1} \int^{t_2}_{t_1} \| \frac{d}{ds} \gamma \|^2_{L^2(Y)} ds dt \\
& = C' (t_2 - t_1)^2 \int^{t_2}_{t_1} \| \frac{d}{ds} \gamma \|^2_{L^2(Y)} ds,
\end{align*}
where the fourth line follows from Cauchy-Schwarz. This completes the proof.
\end{proof}
In \cite[Sections 13.4 and 13.5]{KMbook}, Lemma 13.4.4 is the starting point for a sequence of results about trajectories in neighborhoods of stationary points. In our setting, Lemma~\ref{lem:L21-bounds-Flambda} above gives analogous results, by essentially the same arguments. We state the main results below, but omit the proofs.
First, by bootstrapping, we obtain the following from Lemma~\ref{lem:L21-bounds-Flambda}, analogous to \cite[Proposition 13.4.7]{KMbook}.
\begin{lemma}\label{lem:L2k-bounds-Flambda}
Fix a stationary point $x_\infty$ of $\mathcal{X}^{\operatorname{gC}}_{\q}$, which we treat as a (constant) trajectory of $\mathcal{X}^{\operatorname{gC}}_{\q}$ on $I \times Y$. There exists an $S^1$-invariant neighborhood $U$ of $x_\infty$ in $W_{k}(I \times Y)$ and a constant $C$ independent of $\lambda \gg 0$ satisfying the following. If $\gamma \in U$ is a trajectory of $\mathcal{X}^{\operatorname{gC}}_{\q^\lambda}$, there exists a constant gauge transformation $u \in S^1$ such that
$$
\| u \cdot \gamma - x_\lambda \|_{L^2_{k+1}(I' \times Y)} \leq C (F_\lambda(\gamma(t_2)) - F_\lambda(\gamma(t_1))),
$$
for any compact interval $I' \Subset I$.
\end{lemma}
Next, we have two results about trajectories in the blow-up. These are the analogues of Proposition 13.4.1 and Corollary 13.4.8 in \cite{KMbook} respectively.
\begin{proposition}\label{prop:13.4.1-analogue}
Let $x_\infty \in B(2R)^\sigma$ be a stationary point of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q}$, $x_\lambda$ a nearby stationary point of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q^\lambda}$, and $I' \Subset I = [t_1,t_2]$. Then, there exists a constant $C$ and a gauge-invariant neighborhood $U$ of the constant trajectory $x_\infty$ in $W^\tau_k(I \times Y)$, independent of $\lambda$, such that for every trajectory $\gamma^\tau: I \to (W^\lambda \cap B(2R))^\sigma$ of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q^\lambda}$ which belongs to $U$, there is a gauge transformation $u \in S^1$ such that:
\begin{enumerate}[(i)]
\item if $x_\infty$ is irreducible, then $$\| u \cdot \gamma^\tau - x_\lambda \|_{L^2_{k+1}(I' \times Y)} \leq C \left ( F_\lambda(\gamma(t_1)) - F_\lambda(\gamma(t_2))\right),$$
\item if $x_\infty$ is reducible, then $$\| u \cdot \gamma^\tau - x_\lambda \|_{L^2_{k+1}(I' \times Y)} \leq C \left ( \Lambda_{\q^\lambda}(\gamma(t_1)) - \Lambda_{\q^\lambda}(\gamma(t_2)) + (F_\lambda(\gamma(t_1)) - F_\lambda(\gamma(t_2)))^{\frac{1}{2}} \right ).$$
\end{enumerate}
\end{proposition}
\begin{proposition}\label{prop:13.4.8-analogue}
Let $x_\infty \in B(2R)^\sigma$ be a stationary point of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q}$ and $x_\lambda$ the corresponding stationary point of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q^\lambda}$. There is a constant $C$ and a gauge-invariant neighborhood $U$ of the constant trajectory in $W_k([-1,1] \times Y)$ obtained from blowing down $x_\infty$, independent of $\lambda \gg 0$, with the following property. If $\gamma^\tau : [-1,1] \to (W^\lambda \cap B(2R))^{\sigma}$ is a trajectory of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q^\lambda}$ whose blow-down $\gamma$ is in $U$, then
$$
\frac{d}{dt} \Lambda_{\q^\lambda}(\gamma^{\tau}(t))\Big|_{t = 0} \leq C (F_\lambda(\gamma(-1)) - F_\lambda(\gamma(1)))^{\frac{1}{2}}.
$$
\end{proposition}
For trajectories that converge to stationary points at the ends, we are able to obtain exponential decay of the value of $F_\lambda$ near stationary points in the blow-up, analogous to that in \cite[Proposition 13.5.1]{KMbook}. Since $F_\lambda: W^\lambda \cap B(2R) \to \mathbb{R}$ is $S^1$-invariant, we obtain an induced map from $(W^\lambda \cap B(2R))^\sigma/S^1$ to $\mathbb{R}$. By a slight abuse of notation we will use $F_\lambda$ for this induced map as well. Recall from Chapter~\ref{sec:criticalpoints} that, for every stationary point $[x_\infty]$ of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q}$ with grading in $[-N,N]$, we have a unique corresponding stationary point $[x_\lambda]$ of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q^\lambda}$ for $\lambda \gg 0$.
\begin{proposition}\label{prop:exponential-decay-blowup}
Let $[x_\infty]$ be a stationary point of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q}$ with grading in $[-N,N]$. There exists $\delta > 0$ such that for $\lambda \gg 0$, and for every trajectory $[\gamma]: [0,\infty) \to (W^\lambda \cap B(2R))^\sigma/S^1$ of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q^\lambda}$ with $\lim_{t \to \infty} [\tau^*_t \gamma] = [x_\lambda]$ in $L^2_{k,loc}$, there exists $t_0$ such that for $t \geq t_0$
$$
F_\lambda([\gamma(t)]) - F_\lambda([x_\lambda]) \leq C e^{-\delta t},
$$
where $C = F_\lambda([\gamma(t_0)]) - F_\lambda([x_\lambda])$.
\end{proposition}
Here is a related result in the blow-down, which is the analogue of \cite[Proposition 13.5.2]{KMbook}.
\begin{proposition}\label{prop:exponential-decay}
Let $x_\infty$ be a stationary point of $\mathcal{X}^{\operatorname{gC}}_{\q}$. There exists a neighborhood $U$ of $[x_\infty]$ in $B(2R)/S^1$ and a constant $\delta > 0$ such that for $\lambda \gg 0$ and any trajectory $\gamma:[t_1,t_2] \to W^\lambda \cap U$ of $\mathcal{X}^{\operatorname{gC}}_{\q^\lambda}$ in $L^2_k([t_1,t_2] \times Y)$, we have the inequalities
$$
-C_2 e^{\delta(t-t_2)} \leq F_\lambda(\gamma(t)) - F_\lambda([x_\lambda]) \leq C_1 e^{-\delta(t-t_1)},
$$
where
$$ C_1 = | F_\lambda(\gamma(t_1)) - F_\lambda([x_\lambda])|, \ C_2 = | F_\lambda(\gamma(t_2)) - F_\lambda([x_\lambda])|.
$$
\end{proposition}
For a trajectory $\gamma$ of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q^\lambda}$, let us introduce the quantities
$$
K_\lambda(\gamma) = \int_{\mathbb{R}} \left| \frac{d\Lambda_{\q^\lambda}(\gamma)}{dt} \right| dt, \ \ \ \ K_{\lambda,+}(\gamma) = \int_{\mathbb{R}} \left (\frac{d\Lambda_{\q^\lambda}(\gamma)}{dt} \right)^+ dt,
$$
which a priori may be infinite. Here, $f^+$ denotes $\max\{0,f\}$. Note that $K_\lambda$ is finite if and only if $K_{\lambda,+}$ is finite. Further, if these quantities are finite and $\gamma$ is a trajectory from $x_\lambda$ to $y_\lambda$, then
\begin{equation}\label{eq:Lambda-K-relation}
\Lambda_{\q^\lambda}(x_\lambda) - \Lambda_{\q^\lambda}(y_\lambda) = K_\lambda(\gamma) - 2K_{\lambda,+}(\gamma).
\end{equation}
We will also use $K^I_\lambda$ and $K^I_{\lambda,+}$ to restrict the domain of integration to any subinterval $I \subset \mathbb{R}$.
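One way to verify \eqref{eq:Lambda-K-relation} is the following: writing $f = \frac{d}{dt}\Lambda_{\q^\lambda}(\gamma(t))$ and $f^- = \max\{0,-f\}$, so that $|f| = f^+ + f^-$, we have
$$ \Lambda_{\q^\lambda}(y_\lambda) - \Lambda_{\q^\lambda}(x_\lambda) = \int_{\mathbb{R}} f \, dt = \int_{\mathbb{R}} (f^+ - f^-) \, dt = 2K_{\lambda,+}(\gamma) - K_\lambda(\gamma). $$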
The exponential decay bound from Proposition~\ref{prop:exponential-decay}, combined with Lemmas~\ref{lemma:1091} and \ref{lem:L2k-bounds-Flambda}, gives the following analogue of \cite[Corollary 13.5.3]{KMbook}.
\begin{corollary}\label{cor:Klambda-J-bounds}
Fix $x_\infty$ a stationary point of $\mathcal{X}^{\operatorname{gC}}_{\q}$. Given a constant $\eta$, there is a gauge-invariant neighborhood $U \subset W_k([-1,1] \times Y)$ of the constant trajectory $x_\infty$ with the following property for $\lambda \gg 0$. Let $J \subset \mathbb{R}$ be any interval and $J' = J + [-1,1]$. If we have a trajectory $\gamma^\tau: J' \to (W^\lambda \cap B(2R))^\sigma$ of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q^\lambda}$ such that the $\tau_t \gamma$ are contained in $U$ for all $t \in J$, then $K^J_{\lambda,+}(\gamma^\tau) \leq \eta$.
\end{corollary}
\section{Convergence of unparameterized trajectories in the blow-up}
\label{sec:UnparameterizedBlowup}
Our goal in this subsection is to prove analogues of the results in Section~\ref{sec:convergence-downstairs} in the blow-up. Similarly to the stationary point classes in the singular space $W/S^1$ that appeared in Section~\ref{sec:convergence-downstairs}, we will now consider stationary points $[x] \in W^\sigma/S^1$ of the vector fields $\mathcal{X}^{\operatorname{agC},\sigma}_{\q}$ or $\mathcal{X}^{\operatorname{agC},\sigma}_{\q^\lambda}$. We have {\em parameterized trajectories} $$[\gamma] \in W^{\tau}(I \times Y)/S^1$$ of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q}$ or $\mathcal{X}^{\operatorname{agC},\sigma}_{\q^\lambda}$, and also, if $I=\R$, {\em unparameterized trajectories} $$[\breve{\gamma}] \in W^{\tau}(\rr\times Y)/(S^1 \times \mathbb{R}).$$ Unlike in the blow-down, we omit the word ``class'' since the relevant objects arise from actual vector fields on the (smooth) quotients by $S^1$.
We will only consider trajectories of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q}$ that limit to two stationary points.
(Lemma 16.3.3 in \cite{KMbook}, translated into slicewise Coulomb gauge, gives conditions for this to happen---but we will not need it here.) For trajectories of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q^\lambda}$, Corollary~\ref{cor:endpointsBlowUp} shows the existence of limiting points, provided that the trajectories are contained in $(W^\lambda \cap B(2R))^{\sigma}/S^1$.
Finally, analogous to Definitions~\ref{def:broken} and \ref{def:brokenconvergencedownstairs}, we have the notions of unparameterized broken trajectories and convergence to them. In particular, convergence to a broken trajectory is defined in terms of the convergence of some parameterized representatives in $W^{\tau}_{loc}(\rr\times Y)/S^1$.
The main focus of this subsection is the following.
\begin{proposition}\label{prop:brokenconvergence}
Fix $[x]$ and $[y]$, stationary points of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q}$ with grading in $[-N, N]$. Fix $\lambda_n \to \infty$, and a sequence of unparameterized trajectories $[\breve{\gamma}_n]$ of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q^{\lambda_n}}$, going from $[x_{\lambda_n}]$ to $[y_{\lambda_n}]$, and such that the representatives $\gamma_n$ of $[{\breve{\gamma}}_n]$ are contained in $(W^{\lambda_n} \cap B(2R))^\sigma$. Then, there exists a subsequence of $[{\breve{\gamma}}_n]$ that converges to an unparameterized broken trajectory $[{\breve{\gammas}}_\infty]$ of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q}$.
\end{proposition}
\begin{lemma}\label{lem:Kbounds}
Fix $x$ and $y$ stationary points of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q}$. There exists $C > 0$ such that for all $\lambda \gg 0$ and trajectories $\gamma^{\tau}$ from $x_\lambda$ to $y_\lambda$, we have
$K_\lambda(\gamma^{\tau}) \leq C$.
\end{lemma}
\begin{proof}
This is analogous to \cite[Lemma 16.3.1]{KMbook}. We outline the proof. We will show that any sequence $\gamma^{\tau}_n$ of trajectories from $x_{\lambda_n}$ to $y_{\lambda_n}$ has $K_{\lambda_n}(\gamma^{\tau}_n)$ bounded, as long as $\lambda_n \gg 0$. Note that since $x_{\lambda_n}$ (respectively $y_{\lambda_n}$) are $L^2_k$-close to $x$ (respectively $y$), it suffices to obtain uniform bounds on $K_{\lambda_n,+}(\gamma^\tau_n)$ by \eqref{eq:Lambda-K-relation}, since we have bounds on the $\lambda_n$-spinorial energies of $x_{\lambda_n}$ and $y_{\lambda_n}$. Consider the projections $\gamma_n$ of the trajectories $\gamma^{\tau}_n$ to the blow-down. Following the proof of Proposition~\ref{prop:brokenconvergencedownstairs}, we find a decomposition of $\mathbb{R}$ into a finite number, independent of $n$, of intervals of two types: $J^i_n$, on which $\gamma_n$ is close to a constant trajectory as in Corollary~\ref{cor:Klambda-J-bounds}, and $I^i_n$, which have a fixed length. Corollary~\ref{cor:Klambda-J-bounds} gives uniform bounds on $K^{J^i_n}_{\lambda_n,+}(\gamma^{\tau}_n)$. For the intervals $I^i_n$ of fixed length, we can apply Lemma~\ref{lemma:1091} to give uniform upper bounds on $\frac{d}{dt} \Lambda_{\q^{\lambda_n}}(\gamma^{\tau}_n(t))$. Since the lengths of the $I^i_n$ are fixed, the values $K^{I^i_n}_{\lambda_n,+}$ are bounded. Thus, we obtain the desired bounds on $K_{\lambda_n}(\gamma^{\tau}_n)$.
\end{proof}
Lemma~\ref{lem:Kbounds} in fact gives uniform bounds on $\Lambda_{\q^\lambda}$ along approximate trajectories, which we now establish.
\begin{lemma}\label{lem:spinorialenergybound}
There exists a constant $C>0$ independent of $\lambda \gg 0$ with the following property. If $\gamma$ is a trajectory of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q^\lambda}$ contained in $(W^\lambda \cap B(2R))^\sigma$, and going between two stationary points in the grading range $[-N,N]$, then $|\Lambda_{\q^\lambda}(\gamma(t))| \leq C$ for all $t$.
\end{lemma}
\begin{proof}
Since there are only finitely many stationary points of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q^\lambda}$ in the grading range $[-N,N]$, it suffices to establish the result for trajectories $\gamma$ of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q^\lambda}$ between $x_\lambda$ and $y_\lambda$ for a fixed pair of stationary points. We have from \eqref{eq:Lambda-K-relation} that
\begin{align*}
|\Lambda_{\q^\lambda}(x_\lambda) - \Lambda_{\q^\lambda}(\gamma(t))| &= |K_\lambda^{(-\infty,t]}(\gamma) - 2K_{\lambda,+}^{(-\infty,t]}(\gamma)| \\
&\leq 3 K_\lambda(\gamma) \\
&\leq 3C,
\end{align*}
where $C$ is the constant from Lemma~\ref{lem:Kbounds}. Since the $x_\lambda$ are all $L^2_k$-close to the $S^1$-orbit of a fixed stationary point $x$ of $\mathcal{X}^{\operatorname{gC}}_{\q}$, we have that $|\Lambda_{\q^\lambda}(x_\lambda)|$ is bounded independent of $\lambda \gg 0$.
\end{proof}
The following is the analogue of \cite[Lemma 16.3.2]{KMbook}, and has a similar proof. It is a consequence of the convergence of parameterized trajectories in the blow-up, Corollary~\ref{cor:convergence}.
\begin{lemma}
\label{lem:nearbyblowup}
For each stationary point $x$ of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q}$, choose a gauge-invariant neighborhood $U_{x} \subset W^\tau_k(I \times Y)$ of the associated constant trajectory. Further, let $I'$ be any other interval with non-zero length. Then, there exists $\varepsilon > 0$ such that for $\lambda \gg 0$, if $\gamma \in M(y_\lambda, y'_\lambda)$ for stationary points $y_\lambda, y'_\lambda$ of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q^\lambda}$ in the grading range $[-N,N]$ satisfies $K^{I'}(\gamma) \leq \varepsilon$, then $\gamma|_{I \times Y} \in U_x$ for some $x$.
\end{lemma}
\begin{proof}[Proof of Proposition~\ref{prop:brokenconvergence}]
Lemma~\ref{lem:spinorialenergybound} guarantees that $\Lambda_{\q^{\lambda_n}}$ is bounded along all trajectories between $[x_{\lambda_n}]$ and $[y_{\lambda_n}]$ contained in $(W^{\lambda_n} \cap B(2R))^\sigma$. Therefore, we may repeat the argument of Proposition~\ref{prop:brokenconvergencedownstairs}, where we now apply Corollary~\ref{cor:convergence} and Lemma~\ref{lem:nearbyblowup} in place of Proposition~\ref{prop:convergencenoblowup}, and where, wherever bounds involving terms of the form $\mathcal{E}^I(\gamma)$ were used, we use the analogous bounds on $K^I(\gamma)$.
\end{proof}
\begin{corollary}\label{cor:index1convergence}
Fix $\varepsilon > 0$. For $\lambda \gg 0$, the following is true. Suppose that $[\gamma_\lambda]$ is a trajectory of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q^\lambda}$ from $[x_\lambda]$ to $[y_\lambda]$, such that either
\begin{enumerate}
\item $[\gamma_\lambda]$ is not boundary-obstructed and $\operatorname{gr}([x_\lambda],[y_\lambda]) = 1$ or
\item $[\gamma_\lambda]$ is boundary-obstructed and $\operatorname{gr}([x_\lambda],[y_\lambda]) =0$.
\end{enumerate}
Furthermore, suppose that the gradings of $[x_\lambda]$ and $[y_\lambda]$ are in $[-N,N]$. Then, $[\gamma_\lambda]$ is $\varepsilon$-close in $W_{k,loc}(\rr\times Y)/S^1$ to $[\gamma]$, a trajectory of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q}$ with grading in $[-N,N]$.
\end{corollary}
\begin{proof}
We argue by contradiction. Suppose there exists a sequence of trajectories $[\gamma_{\lambda_n}]$, where $\lambda_n \to \infty$, which are always at least distance $\varepsilon$ from trajectories of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q}$. Without loss of generality, none of the trajectories are boundary-obstructed; the index 0, boundary-obstructed case is handled similarly. By Proposition~\ref{prop:brokenconvergence}, there exists a subsequence such that the unparameterized trajectories converge to a broken trajectory $[\breve{\gammas}_\infty]$ from $[x_\infty]$ to $[y_\infty]$, which is necessarily index 1 by Proposition~\ref{prop:stationarycorrespondence}. It remains to show that this trajectory is in fact unbroken.
We suppose that $[\breve{\gammas}_\infty] = ([\breve{\gamma}_{\infty,1}],\ldots,[\breve{\gamma}_{\infty,m}])$ with $m \geq 2$. By our non-degeneracy assumption, one $[\breve{\gamma}_{\infty,i}]$ must be index 1, while the other trajectories must be index 0. Note that an index 0 trajectory must be boundary-obstructed (or else the moduli space is necessarily empty), and thus there can be at most one such trajectory. In particular, we see that $[\breve{\gammas}_\infty] = ([\breve{\gamma}_{\infty,1}], [\breve{\gamma}_{\infty,2}])$. Since we are assuming that the trajectories $[\gamma_{\lambda_n}]$ are not boundary-obstructed, it follows that one of $[\breve{\gamma}_{\infty,1}]$ or $[\breve{\gamma}_{\infty,2}]$ goes from boundary-unstable to boundary-stable, or one of $[x_\infty]$ or $[y_\infty]$ is irreducible. In either case, we have that one of the components of $[\breve{\gammas}_{\infty}]$ is an irreducible trajectory (see \cite[Proposition 14.5.7]{KMbook} or the discussion at the end of Section~\ref{sec:AdmPer}). This implies that $M([x_\infty], [y_\infty])$ must contain an irreducible trajectory. This contradicts \cite[Corollary 16.5.4]{KMbook}, which implies that if $\breve{M}([x_\infty], [y_\infty])$ contains a once-broken trajectory in its compactification and also contains an irreducible trajectory, then no component of the broken trajectory can be boundary-obstructed.
\end{proof}
\section{Convergence in $L^2_k$}
In Chapter~\ref{sec:criticalpoints}, we used the inverse function theorem to find approximate stationary points near stationary points of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q}$. There we defined identifications $\Xi_{\lambda}$ between $\mathcal{C}rit^\lambda_{\N}$ and $\mathcal{C}rit_{\N}$, which we extended to self-diffeomorphisms of $W^\sigma_j$ in Lemma~\ref{lem:Xixylambda} and then to self-diffeomorphisms of path spaces in Proposition~\ref{prop:Xixy-paths}. We now use these self-diffeomorphisms to improve the $L^2_{k,loc}$ convergence from Proposition~\ref{prop:brokenconvergence} to $L^2_k$ convergence.
\begin{proposition}
\label{prop:L2k}
Let $[\gamma_n]: \mathbb{R} \to (W^{\lambda_n} \cap B(2R))^\sigma/S^1$ be a sequence of trajectories of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q^{\lambda_n}}$ between stationary points $[x_{\lambda_n}]$ and $[y_{\lambda_n}]$. Further, suppose that the unparameterized trajectory $[\breve{\gamma}_n]$ converges to $[\breve{\gamma}_{\infty}]$, an unbroken trajectory of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q}$ from $[x_\infty]$ to $[y_\infty]$. Then, after possible reparameterization, $\Xi_{\lambda_n}([\gamma_n])$ converges to $[\gamma_{\infty}]$ in $\B_k^{\operatorname{gC},\tau}([x_\infty],[y_\infty])$, where $[\gamma_{\infty}]$ is a representative of $[\breve{\gamma}_{\infty}]$.
\end{proposition}
\begin{proof}
We choose representative trajectories $\gamma_n$ of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q^{\lambda_n}}$ and $\gamma_\infty$ of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q}$ of the trajectory classes such that $\gamma_n$ converges to $\gamma_\infty$. (This is easy to arrange, since these are only $S^1$-equivalence classes.) This means there exists a sequence $s_n \in \mathbb{R}$ such that $\tau_{s_n} \gamma_n \to \gamma_\infty$ in $L^2_{k,loc}$. Since we may reparameterize $\Xi_{\lambda_n}([\gamma_n])$, we assume that no translations are necessary. Therefore, we are interested in showing that there exists a sequence of gauge transformations $u_n: \rr\to S^1$ such that
$$
\| u_n \cdot \Xi_{\lambda_n}(\gamma_n) -\gamma_\infty \|_{L^2_k(\mathbb{R} \times Y)} \to 0.
$$
Fix $\varepsilon > 0$. We will show that $\| \Xi_{\lambda_n}(\gamma_n) -\gamma_\infty \|_{L^2_k(\mathbb{R} \times Y)} < \varepsilon$ for $n \gg 0$. Write $x_\infty$ and $y_\infty$ for the limit points of $\gamma_\infty$.
First, since $\gamma_n \to \gamma_\infty$ in $L^2_{k,loc}(\mathbb{R} \times Y)$, we also have that $\Xi_{\lambda_n}(\gamma_n) \to \gamma_\infty$ in $L^2_{k,loc}(\mathbb{R} \times Y)$ by Corollary~\ref{cor:Xixy-paths}. In particular, for any $T > 0$, we have for $n \gg 0$,
$$
\| \Xi_{\lambda_n}(\gamma_n) - \gamma_\infty \|_{L^2_k([-T,T] \times Y)} < \varepsilon/2.
$$
Therefore, it suffices to show that we can find $T > 0$ such that for $n \gg 0$,
\begin{equation}\label{eq:L2k-afterxilambda-convergence}
\| \Xi_{\lambda_n}(\gamma_n) - \gamma_\infty \|_{L^2_k([T,\infty) \times Y)} < \varepsilon/4,
\end{equation}
and likewise on $(-\infty,-T] \times Y$. We will establish the bounds for $[T,\infty) \times Y$; the case of $(-\infty, -T] \times Y$ follows in the same way.
We first claim that there exists $T > 0$ such that for $n \gg 0$,
\begin{equation}\label{eq:L2k-endpoints-afterxilambda}
\| \Xi_{\lambda_n}(\gamma_n) - y_\infty \|_{L^2_k([T, \infty) \times Y)} < \varepsilon/8.
\end{equation}
By Corollary~\ref{cor:path-distance}, we have that it suffices to establish
\begin{equation}\label{eq:L2k-endpoints-beforexilambda}
\| \gamma_n - y_{\lambda_n} \|_{L^2_{k}([T,\infty) \times Y)} \to 0, \ n \to \infty.
\end{equation}
This is an analogue of \cite[Theorem 13.3.5]{KMbook}, which implies that there exists a gauge transformation of $\gamma_\infty$ which lands in $\B^{\operatorname{gC},\tau}_k([x_\infty],[y_\infty])$. However, we will explicitly keep track of the $L^2_{k}$ bounds and the necessary gauge transformation. First, by Proposition~\ref{prop:13.4.1-analogue}, there exists a constant gauge transformation $u^i_n \in S^1$ and a constant $C$, independent of $i$ and $\lambda$, such that if $y_\infty$ is reducible, then
\begin{equation}\label{eq:trajectory-constant-bounds}
\| u^i_n \cdot \gamma_n - y_{\lambda_n} \|_{L^2_{k}([i-2,i+2] \times Y)} \leq C \left ( (\Lambda_{\q^{\lambda_n}}(i - 3) - \Lambda_{\q^{\lambda_n}}(i + 3)) + (F_{\lambda_n}(i-3) - F_{\lambda_n}(i+3))^{\frac{1}{2}}\right),
\end{equation}
and if $y_\infty$ is irreducible, then
\begin{equation}\label{eq:trajectory-constant-bounds-irred}
\| u^i_n \cdot \gamma_n - y_{\lambda_n} \|_{L^2_{k}([i-2,i+2] \times Y)} \leq C (F_{\lambda_n}(i-3) - F_{\lambda_n}(i+3)).
\end{equation}
In Proposition~\ref{prop:exponential-decay} we have established the exponential decay of $F_{\lambda_n}(\gamma_n(t))$ independent of $n$. Further, by Proposition~\ref{prop:13.4.8-analogue} and Proposition~\ref{prop:exponential-decay}, we obtain uniform bounds on $\Lambda_{\q^{\lambda_n}}(i - 3) - \Lambda_{\q^{\lambda_n}}(i+3)$ which decay exponentially as $i \to \infty$. We thus see that for any $T > 0$,
$$
\sum_{i \geq T} \| u^i_n \cdot \gamma_n - y_{\lambda_n} \|_{L^2_{k}([i-2, i+2] \times Y)} \leq C(T)
$$
where the constant $C(T) > 0$ depends only on $T$ (i.e., independent of $n$) and converges to $0$ as $T \to \infty$.
To get the desired $L^2_{k}([T,\infty) \times Y)$ bounds on $\gamma_n - y_{\lambda_n}$, it suffices to prove that
$$
\sum_{i \geq T} \| u^i_n \cdot y_{\lambda_n} - u^{i+1}_n \cdot y_{\lambda_n} \|_{L^2_{k}([i-1/2,i+1/2] \times Y)} \leq C'(T),
$$
for some constant $C'(T)$ that converges to 0 as $T \to \infty$. Without loss of generality, assume $y_\infty$ is reducible (the irreducible case is similar). By \eqref{eq:trajectory-constant-bounds}, we have
\begin{align*}
\| (u^i_n)^{-1} \cdot y_{\lambda_n} - &
(u^{i+1}_n)^{-1} \cdot y_{\lambda_n} \|_{L^2_{k}([i-1/2,i+1/2] \times Y)} \\
& \leq \| (u^i_n)^{-1} \cdot y_{\lambda_n} - \gamma_n \|_{L^2_{k}([i-1/2,i+1/2] \times Y)} + \|(u^{i+1}_n)^{-1} \cdot y_{\lambda_n} - \gamma_n\|_{L^2_{k}([i-1/2,i+1/2] \times Y)} \\
& = \| u^i_n \cdot \gamma_n - y_{\lambda_n} \|_{L^2_{k}([i-1/2,i+1/2] \times Y)} + \| u^{i+1}_n \cdot \gamma_n - y_{\lambda_n}\|_{L^2_{k}([i-1/2,i+1/2] \times Y)} \\
&\leq \| u^i_n \cdot \gamma_n - y_{\lambda_n} \|_{L^2_{k}([i-2,i+2] \times Y)} + \|u^{i+1}_n \cdot \gamma_n - y_{\lambda_n} \|_{L^2_{k}([i-1,i+3] \times Y)} \\
& \leq C \Big( (\Lambda_{\q^{\lambda_n}}(i - 3) - \Lambda_{\q^{\lambda_n}}(i +3)) + (\Lambda_{\q^{\lambda_n}}(i -2) - \Lambda_{\q^{\lambda_n}}(i + 4)) \\
& + (F_{\lambda_n}(i-3) - F_{\lambda_n}(i+3))^{\frac{1}{2}} + (F_{\lambda_n}(i-2) - F_{\lambda_n}(i+4))^{\frac{1}{2}} \Big).
\end{align*}
The bounds from Proposition~\ref{prop:13.4.8-analogue} and exponential decay of Proposition~\ref{prop:exponential-decay} show that
$$\sum_{i \geq T} \| (u^i_n)^{-1} \cdot y_{\lambda_n} - (u^{i+1}_n)^{-1} \cdot y_{\lambda_n} \|_{L^2_{k}([i-1/2,i+1/2] \times Y)} \leq C'(T),$$ with $C'(T) \to 0$ as $T \to \infty$. We would like an analogous inequality without the inverses. Note that if $u, u'$ are in $S^1$, and $y= (a,s,\phi)$, we have that
$$u\cdot y - u'\cdot y = (0, 0, (u - u')\phi).$$
Since $| u - u'| = |u^{-1} - u'^{-1}|$, we see that
$$
\| (u^i_n)^{-1} \cdot y_{\lambda_n} - (u^{i+1}_n)^{-1} \cdot y_{\lambda_n}\|_{L^2_{k}([i-1/2,i+1/2] \times Y)} = \| u^i_n \cdot y_{\lambda_n} - u^{i+1}_n \cdot y_{\lambda_n} \|_{L^2_{k}([i-1/2,i+1/2] \times Y)}.
$$
Therefore, we have
$$\sum_{i \geq T} \| u^i_n \cdot y_{\lambda_n} - u^{i+1}_n \cdot y_{\lambda_n} \|_{L^2_{k}([i-1/2,i+1/2] \times Y)} \leq C'(T).$$
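To spell out the next step: writing $y_{\lambda_n} = (a_n, 0, \phi_n)$ in blow-up coordinates (we are in the reducible case), with $\|\phi_n\|_{L^2(Y)} = 1$, the identity above for the $S^1$-action gives (a sketch, under the convention that the $L^2_k$ norm dominates the $L^2$ norm)
$$ \| u^i_n \cdot y_{\lambda_n} - u^{i+1}_n \cdot y_{\lambda_n} \|_{L^2_{k}([i-1/2,i+1/2] \times Y)} = |u^i_n - u^{i+1}_n| \cdot \| \phi_n \|_{L^2_{k}([i-1/2,i+1/2] \times Y)} \geq |u^i_n - u^{i+1}_n|.$$
Hence $\sum_{i \geq T} |u^i_n - u^{i+1}_n| \leq C'(T)$, with $C'(T)$ independent of $n$.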
In particular, this implies that as $i \to \infty$, $u^i_n$ converges to some $u_n \in S^1$ (uniformly in $n$), and we see that
$$
\sum_{i \geq T} \| u^i_n \cdot y_{\lambda_n} - u_n \cdot y_{\lambda_n} \|_{L^2_{k}([i-1/2,i+1/2] \times Y)} \leq C'(T).
$$
Therefore, it remains to establish that $u_n = 1$ for each $n$. This follows since, by the construction of $u^i_n$ in Lemma~\ref{lem:L21-bounds-Flambda}, we have that $\operatorname{Re} \langle u^i_n \cdot \gamma_n(i-2), (0,i\phi_n) \rangle_{L^2(Y)} = 0$ and $\lim_{t \to +\infty} \gamma_n(t) = y_{\lambda_n} = (a_{n},\phi_{n})$. We have now established \eqref{eq:L2k-endpoints-beforexilambda} and thus \eqref{eq:L2k-endpoints-afterxilambda}, as claimed.
As mentioned above, \cite[Theorem 13.3.5]{KMbook} gives $u : \rr\to S^1$ such that $u \cdot \gamma_\infty - y_\infty$ is in $L^2_k([T,\infty) \times Y)$. In the above argument, we obtained an analogous result where we did not need the four-dimensional gauge transformation. By using the analogues of the results of this section for trajectories of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q}$ in place of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q^\lambda}$, we can repeat the above arguments to obtain the stronger statement for $\gamma_\infty$ as well, i.e., that $\| \gamma_\infty - y_\infty \|_{L^2_k([T,\infty) \times Y)} < \infty$. (As explained at the beginning of the proof of Lemma~\ref{lem:L21-bounds-Flambda}, we work in a different gauge slice instead of the Coulomb-Neumann slice, which allows the arguments of this section to pin down the gauge transformation precisely.) Therefore, for $T \gg 0$, we have that
$$
\| \gamma_\infty - y_\infty \|_{L^2_k([T,\infty) \times Y)} < \varepsilon/8.
$$
Combining this inequality with \eqref{eq:L2k-endpoints-afterxilambda}, we obtain \eqref{eq:L2k-afterxilambda-convergence}, which completes the proof.
\end{proof}
We can use Proposition~\ref{prop:brokenconvergence} to make more refined statements in the case of an approximate trajectory with small index.
\begin{lemma}
Fix $\varepsilon, N > 0$. For $\lambda \gg 0$, the following is true. Let $[\gamma_\lambda]$ be a trajectory from $[x_\lambda]$ to $[y_\lambda]$ such that either
\begin{enumerate}
\item $[\gamma_\lambda]$ is not boundary-obstructed and $\operatorname{gr}([x_\lambda], [y_\lambda]) = 1$ or
\item $[\gamma_\lambda]$ is boundary-obstructed and $\operatorname{gr}([x_\lambda], [y_\lambda]) =0$.
\end{enumerate}
Furthermore, suppose that the gradings of $[x_\lambda]$ and $[y_\lambda]$ are in $[-N,N]$. Then $\Xi_{\lambda}([\gamma_\lambda])$ is $\varepsilon$-close in $L^2_{k}(\rr\times Y)$ to $[\gamma]$, a trajectory of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q}$ from $[x_\infty]$ to $[y_\infty]$ with grading in $[-N,N]$.
\end{lemma}
\begin{proof}
The argument is the same as for Corollary~\ref{cor:index1convergence}, except that after applying Proposition~\ref{prop:brokenconvergence} to extract a convergent subsequence in $L^2_{k,loc}$, we improve this to $L^2_k$ convergence (after composition with $\Xi_{\lambda}$) using Proposition~\ref{prop:L2k}.
\end{proof}
\chapter{Characterization of approximate trajectories}\label{sec:trajectories2}
In this chapter, using the inverse function theorem, we will identify the moduli spaces of isolated approximate trajectories (in a fixed grading range) with those of actual trajectories. We will also produce similar identifications for the cut moduli spaces used to define the $U$-actions in Morse and Floer homology. Furthermore, we will show that all these identifications can be chosen to preserve orientations.
\section{Stability}
Let $[x_{\infty}]$ and $[y_{\infty}]$ be two stationary points of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q}$, with grading in $[-N, N]$; in other words, we have $[x_{\infty}], [y_{\infty}] \in \mathcal{C}rit_{\N}$. We assume that their grading difference is one and that we are not in the boundary-obstructed case. (Recall that boundary-obstructed means that $[x_{\infty}]$ is boundary-stable and $[y_{\infty}]$ is boundary-unstable.) Since $\q$ is an admissible perturbation, the moduli space
$$ \Mbreve^{\operatorname{agC}}([x_{\infty}], [y_{\infty}])= M^{\operatorname{agC}}([x_{\infty}], [y_{\infty}]) / \R$$
is a finite set of points. We can view $ \Mbreve^{\operatorname{agC}}([x_{\infty}], [y_{\infty}])$ as the zero set of the section induced by $\mathcal{F}^{\operatorname{gC},\tau}_{\q}$ on the bundle $\V^{\operatorname{gC}, \tau}(\rr\times Y)$ over $\widetilde{\B}^{\operatorname{gC}, \tau}([x_{\infty}], [y_{\infty}])/\R$, restricted to $\B^{\operatorname{gC},\tau}([x_\infty],[y_\infty])/\R$.
Consider the nearby stationary points of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q^\lambda}$
$$ [x_{\lambda}] = \Xi_{\lambda}^{-1}([x_{\infty}]), \ \ [y_{\lambda}] = \Xi_{\lambda}^{-1}([y_{\infty}]).$$
We also have a moduli space of approximate Seiberg-Witten trajectories in $(B(2R) \cap W^\lambda)^{\sigma}$:
$$ \Mbreve^{\operatorname{agC}}([x_{\lambda}], [y_{\lambda}])=M^{\operatorname{agC}}([x_{\lambda}], [y_{\lambda}])/\R.$$
By Corollary~\ref{cor:GrPres}, the relative grading between $[x_{\lambda}]$ and $[y_{\lambda}]$ is also one. Further, by Proposition~\ref{prop:AllMS} and \ref{prop:MorseSmales}, for $\lambda = \lambda^{\bullet}_i$ with $i \gg 0$, $ \Mbreve^{\operatorname{agC}}([x_{\lambda}], [y_{\lambda}])$ is regular and consists of finitely many points.
Our main goal in this subsection is to prove:
\begin{proposition}
\label{prop:EquivalenceTrajectories}
For $\lambda = \lambda^{\bullet}_i \gg 0$, there is a one-to-one correspondence between the moduli spaces $ \Mbreve^{\operatorname{agC}}([x_{\infty}], [y_{\infty}])$ and $ \Mbreve^{\operatorname{agC}}([x_{\lambda}], [y_{\lambda}])$.
\end{proposition}
The proof has two parts. First, we show that in a fixed neighborhood of any $[\gamma] \in \Mbreve^{\operatorname{agC}}([x_{\infty}], [y_{\infty}])$, there is a unique approximate trajectory. Second, we show that no other approximate trajectories between $[x_{\lambda}]$ and $[y_{\lambda}]$ exist.
We start with the first step. This is a stability result, similar to what we established in Proposition~\ref{prop:nearby} for stationary points.
\begin{proposition}
\label{prop:stabilitynearby}
Suppose $[x_{\infty}], [y_{\infty}] \in \mathcal{C}rit_{\N}$ have $\operatorname{gr}([x_{\infty}], [y_{\infty}])=1$, and we are not in the boundary-obstructed case. Fix a trajectory $[\gamma_\infty] \in \Mbreve^{\operatorname{agC}}([x_{\infty}], [y_{\infty}])$, and a small neighborhood $U$ of $[\gamma_\infty]$ in $\B^{\operatorname{gC}, \tau}_k([x_{\infty}], [y_{\infty}])/\R$.
Then, for $\lambda \gg 0$, there exists a unique trajectory $[\gamma_\lambda] \in \Mbreve^{\operatorname{agC}}([x_{\lambda}], [y_{\lambda}])$ such that $\Xi_{\lambda}([\gamma_\lambda]) \in U$.
\end{proposition}
\begin{proof}
Recall that, because we do the blow-up construction in each time slice, the space $\B^{\operatorname{gC}, \tau}_k([x_{\infty}], [y_{\infty}])$ is not even a Banach manifold with boundary. However, it is a subspace of the Banach manifold $\widetilde{\B}_k^{\operatorname{gC}, \tau}([x_{\infty}], [y_{\infty}])$, which consists of (equivalence classes of) paths $(a(t)+\alpha(t)dt, s(t), \phi(t))$, with $s(t)$ allowed to vary in all of $\R$.
By Proposition~\ref{prop:Xixy-paths}\eqref{xi:Vgc}, the map $\Xi_{\lambda}$ gives rise to a bundle map $(\Xi_{\lambda})_*$ that is part of a commutative diagram
$$\xymatrix{
\V_k^{\operatorname{gC}, \tau}(\rr\times Y) \ar[d] \ar[r]^{(\Xi_{\lambda})_*} & \V_k^{\operatorname{gC}, \tau}(\rr\times Y)\ar[d] \\
\widetilde{\B}^{\operatorname{gC}, \tau}([x_{\lambda}], [y_{\lambda}])/\rr \ar[r]^{\Xi_{\lambda}} & \widetilde{\B}^{\operatorname{gC}, \tau}_k([x_{\infty}], [y_{\infty}])/\R.
}$$
We define a map
$$ S: \widetilde{\B}^{\operatorname{gC}, \tau}_k([x_{\infty}], [y_{\infty}]) \times (-1,1) \to \V^{\operatorname{gC}, \tau}_k(\rr\times Y) \times \R$$
by the formula
$$ S([\gamma], r) = \Bigl( (\Xi_{\lambda})_* \circ \mathcal{F}^{\operatorname{gC},\tau}_{\q^\lambda} \circ \Xi_{\lambda}^{-1} ([\gamma]), \ r \Bigr),$$
where $\mathcal{F}^{\operatorname{gC},\tau}_{\q^\lambda}$ is as in \eqref{eq:Fqmlgctau} and we wrote $\lambda = f^{-1}(|r|)$, with $f: (0,\infty] \to [0,1)$ being the homeomorphism from Section~\ref{sec:StabilityPoints}.
By Proposition~\ref{prop:Xixy-paths}, we get that the section $S$ is differentiable. Observe also that
$$S([\gamma], 0) = (\mathcal{F}^{\operatorname{gC},\tau}_{\q}([\gamma]), 0),$$ because $\Xi_{\infty}$ is the identity.
Therefore, the derivative of $S$ at $([\gamma_{\infty}], 0)$ can be written in block form as
$$
\begin{pmatrix}
\D^\tau_{[\gamma_{\infty}]} \mathcal{F}^{\operatorname{gC},\tau}_{\q} & * \\
0 & I
\end{pmatrix}.
$$
By our assumptions on $\q$ and Proposition~\ref{prop:Qgammasurjective}, we get that $\D^\tau_{[\gamma_{\infty}]} \mathcal{F}^{\operatorname{gC},\tau}_{\q}$ is surjective. It has Fredholm index one when viewed as a map with domain $\K^{\tau}_{k, \gamma_{\infty}}$, the tangent space to $\widetilde{\B}^{\operatorname{gC}, \tau}_k([x_{\infty}], [y_{\infty}])$ at $[\gamma_\infty]$, and range $\K^{\operatorname{gC},\tau}_{k-1, \gamma_\infty}$. In our setting we divided by $\R$, so it becomes a map of index zero. Since it is surjective, it must be invertible. Given the block form above, it follows that $\D^\tau_{([\gamma_{\infty}], 0)} S$ is also invertible.
We now apply the inverse function theorem. For $r > 0$ small (that is, for $\lambda \gg 0$), we obtain a unique solution $[\gamma] \in U$ to the equation $S([\gamma], r) = (0, r)$. We then let
$$ [\gamma_{\lambda}] = \mathcal{X}i_{\lambda}^{-1} ([\gamma]).$$
Because $\Xi_\lambda$ is a diffeomorphism, we see that $\mathcal{F}^{\operatorname{gC},\tau}_{\q^\lambda}([\gamma_\lambda]) = 0$. Since $[\gamma_{\lambda}]$ has a temporal gauge representative in $\mathcal{C}^{\operatorname{gC},\tau}_k(x_\lambda, y_\lambda)$ for some representatives $x_\lambda, y_\lambda$ (see Remark~\ref{rmk:no-temporal}), we see that $[\gamma_\lambda]$ gives the desired trajectory from $[x_\lambda]$ to $[y_\lambda]$.
Let $\gamma_{\lambda}$ be a temporal gauge representative of $[\gamma_{\lambda}]$. We need to check that it actually takes values in $(B(2R) \cap W^\lambda)^{\sigma}$. We can repeat the argument above with the Sobolev coefficient $k+1$ instead of $k$. The resulting $\gamma_\lambda$ in this case must be the same, due to the uniqueness guaranteed by the inverse function theorem. Since we use the Sobolev coefficient $k+1$, we have that $\Xi_\lambda([\gamma_\lambda])$ must lie in a small $L^2_{k+1}({\mathbb{R}}\times Y)$ neighborhood of $[\gamma]$ for $\lambda \gg 0$. Since any temporal gauge representative of $\gamma$ is contained in $B(R)^\sigma$, we see that $\Xi_\lambda(\gamma_\lambda)$ is contained in $B(3R/2)^\sigma$ for $\lambda \gg 0$. It is easy to see from the construction of $\Xi_\lambda$ in Chapter~\ref{sec:appendix} that $\gamma_\lambda$ must be contained in $B(2R)^\sigma$. Further, we see that the values of $\Lambda_{\q^\lambda}(\gamma_\lambda)$ are uniformly close to those of $\Lambda_\q(\gamma)$, and thus uniformly bounded. By Lemma~\ref{lem:blowuptrajectoriesinvml}, we see that $\gamma_\lambda$ is contained in $(W^\lambda)^\sigma$ for $\lambda \gg 0$, which completes the proof.
\end{proof}
\begin{remark}
If the trajectory $[\gamma_{\infty}]$ from Proposition~\ref{prop:stabilitynearby} is contained in the reducible locus, then so is the nearby approximate trajectory $[\gamma_\lambda]$. This follows from the invariance of the constructions in the proof under the obvious involutions on $\widetilde{\B}^{\operatorname{gC}, \tau}([x_{\lambda}], [y_{\lambda}])$ and $\V_k^{\operatorname{gC}, \tau}(\rr\times Y)$.
\end{remark}
We are now ready to complete the proof of Proposition~\ref{prop:EquivalenceTrajectories} with the second step, which is to show that there are no other approximate trajectories between $[x_\lambda]$ and $[y_\lambda]$.
\begin{proof}[Proof of Proposition~\ref{prop:EquivalenceTrajectories}]
Let
$$ \Mbreve^{\operatorname{agC}}([x_{\infty}], [y_{\infty}]) = \{[\gamma_{\infty}^1], \dots, [\gamma_{\infty}^m]\}.$$
For each $\ell=1, \dots, m$, Proposition~\ref{prop:stabilitynearby} guarantees the existence of a neighborhood $U^\ell$ of $[\gamma_{\infty}^\ell]$ and a unique approximate trajectory $[\gamma_{\lambda}^\ell]$ with $\Xi_{\lambda}([\gamma^\ell_\lambda]) \in U^\ell$.
We need to check that there are no other approximate trajectories in $\Mbreve^{\operatorname{agC}}([x_{\lambda}], [y_{\lambda}])$. Assume we had sequences $\lambda_n \to \infty$ and $[\gamma_n]\in \Mbreve^{\operatorname{agC}}([x_{\lambda_n}], [y_{\lambda_n}])$ such that $[\gamma_n]$ is not of the form
$[\gamma_{\lambda_n}^\ell]$ for any $n$ and $\ell$. By assumption, each $\lambda_n$ is some $\lambda^{\bullet}_i$. By applying Proposition~\ref{prop:L2k} we can find a subsequence of the $[\gamma_n]$ such that $\Xi_{\lambda_n}([\gamma_n])$ converge to some $[\gamma_{\infty}^\ell]$ in $L^2_k$. (We require that the $\lambda_n$ be of the form $\lambda^{\bullet}_i$ since we are invoking results of Chapter~\ref{sec:trajectories1}, which used this assumption throughout.) This means that the $\Xi_{\lambda_n}([\gamma_n])$ live inside $U^\ell$ for large $n$, and we obtain a contradiction with the uniqueness statement in Proposition~\ref{prop:stabilitynearby}.
\end{proof}
We have an analogous result for the boundary-obstructed case.
\begin{proposition}
\label{prop:EquivalenceTrajectoriesObstructed}
Let $[x_{\infty}], [y_{\infty}] \in \mathcal{C}rit_{\N}$ be in the boundary-obstructed case, and suppose $\operatorname{gr}([x_{\infty}], [y_{\infty}])=0$. For $\lambda = \lambda^{\bullet}_i \gg 0$, there is a one-to-one correspondence between $ \Mbreve^{\operatorname{agC}, \red}([x_{\infty}], [y_{\infty}])$ and $ \Mbreve^{\operatorname{agC}, \red}([x_{\lambda}], [y_{\lambda}])$.
\end{proposition}
\begin{proof}
The proof is similar to that of Proposition~\ref{prop:EquivalenceTrajectories}, but with an application of the inverse function theorem in the subspace of $\widetilde{\B}_k^{\operatorname{gC}, \tau}([x_{\infty}], [y_{\infty}])/\R$ consisting of reducible paths. In this subspace, the corresponding linear operator will again be index zero, and in fact invertible.
\end{proof}
\section{The $U$-maps}\label{subsec:U-maps}
In Section~\ref{sec:mfh} we gave the definition of the $U$-maps in monopole Floer homology, in terms of cut-down moduli spaces $M([x], [y]) \cap \Zs$ and $M^{\red}([x], [y]) \cap \Zs$; cf. Equations \eqref{eqn:mU} and \eqref{eqn:mUred}. Moreover, in Section~\ref{sec:UCoulomb} we identified these cut-down moduli spaces with similar ones in Coulomb gauge:
\begin{equation}
\label{eq:cutdownmoduli}
M^{\operatorname{agC}}([x], [y]) \cap \Zs^{\operatorname{agC}} \ \text{and} \ M^{\operatorname{agC}, \red}([x], [y]) \cap \Zs^{\operatorname{agC}}.
\end{equation}
Here, $\Zs^{\operatorname{agC}}$ is the zero set of a section $\zeta^{\operatorname{agC}}$ of the natural complex line bundle $E^{\operatorname{agC}, \sigma}$ over $W_{k}^{\sigma}/S^1 \subset W_{k-1/2}^\sigma/S^1$. The section is chosen so that the intersections in \eqref{eq:cutdownmoduli} are transverse.
We would like to further identify the moduli spaces from \eqref{eq:cutdownmoduli} with those consisting of approximate trajectories that appear in the construction of equivariant Morse homology in finite dimensions; see Section~\ref{subsec:UMorse}. To obtain the cut-down moduli spaces in the approximations $(W^\lambda)^{\sigma}/S^1$, we simply consider the restriction of $\zeta^{\operatorname{agC}}$ to those spaces.
Recall from Chapter~\ref{sec:MorseSmale} that we chose the perturbation $\q$ such that, for all $\lambda\in \{\lambda^{\bullet}_1, \lambda^{\bullet}_2, \dots \}$ sufficiently large, all the moduli spaces of flow lines for $\mathcal{X}^{\operatorname{gC},\sigma}_{\q^\lambda}$ on $(B(2R) \cap W^\lambda)^{\sigma}/S^1$ are regular. Since there is a countable number of such moduli spaces, we can choose the section $\zeta^{\operatorname{agC}}$ such that it intersects all these moduli spaces (including the non-approximate ones) transversely. This implies that we can define the $U$-maps on the corresponding Morse homology groups as in Section~\ref{subsec:UMorse}.
We now focus on the cut-down moduli spaces of the form
$$M^{\operatorname{agC}}([x_{\lambda}], [y_{\lambda}]) \cap \Zs^{\operatorname{agC}} \ \text{and} \ M^{\operatorname{agC}, \red}([x_{\lambda}], [y_{\lambda}]) \cap \Zs^{\operatorname{agC}},$$
for $[x_{\infty}], [y_{\infty}] \in \mathcal{C}rit_{\N}$. These moduli spaces suffice to determine the $U$-action in gradings from $-N$ to $N$.
\begin{proposition}\label{prop:Uagree}
$(a)$ Suppose $[x_{\infty}], [y_{\infty}] \in \mathcal{C}rit_{\N}$ have $\operatorname{gr}([x_{\infty}], [y_{\infty}])=2$ and are not boundary obstructed. For $\lambda = \lambda^{\bullet}_i \gg 0$, there is a one-to-one correspondence between $ M^{\operatorname{agC}}([x_{\infty}], [y_{\infty}]) \cap \Zs^{\operatorname{agC}}$ and $ M^{\operatorname{agC}}([x_{\lambda}], [y_{\lambda}]) \cap \Zs^{\operatorname{agC}}$.
$(b)$ Let $[x_{\infty}], [y_{\infty}] \in \mathcal{C}rit_{\N}$ be reducible stationary points with $\operatorname{gr}([x_{\infty}], [y_{\infty}])=1$ and boundary obstructed. For $\lambda = \lambda^{\bullet}_i \gg 0$, there is a one-to-one correspondence between $ M^{\operatorname{agC}, \red}([x_{\infty}], [y_{\infty}]) \cap \Zs^{\operatorname{agC}}$ and $M^{\operatorname{agC}, \red}([x_{\lambda}], [y_{\lambda}])\cap \Zs^{\operatorname{agC}}$.
\end{proposition}
\begin{proof}
Part (a) is the analogue of Proposition~\ref{prop:EquivalenceTrajectories}, and its proof is similar. Since the grading difference between $[x_{\infty}]$ and $[y_{\infty}]$ (and hence also between $[x_{\lambda}]$ and $[y_{\lambda}]$) is two, the cut-down moduli spaces in question are zero-dimensional. Consider a trajectory $$[\gamma] \in M^{\operatorname{agC}}([x_{\infty}], [y_{\infty}]) \cap \Zs^{\operatorname{agC}}.$$
Recall that the intersection is taken at time $t=0$, that is, we have $[\gamma(0)] \in \Zs^{\operatorname{agC}}$. The fact that the moduli space $M^{\operatorname{agC}}([x_{\infty}], [y_{\infty}]) \cap \Zs^{\operatorname{agC}}$ is cut out transversely at $[\gamma]$ can be rephrased in terms of the invertibility of the linear operator
$$ \D^\tau_{[\gamma]}\mathcal{F}^{\operatorname{gC},\tau}_{\q} \oplus \D_{[\gamma(0)]} \zeta^{\operatorname{agC}}: \T_{k,[\gamma]}^{\operatorname{gC}, \tau}([x_{\infty}], [y_{\infty}]) \to \V^{\operatorname{gC}, \tau}_{k-1,[\gamma]}(Z) \oplus E^{\operatorname{agC}, \sigma}_{[\gamma(0)]}.$$
An application of the inverse function theorem shows that in a neighborhood of $[\gamma]$ there is a unique element of $M^{\operatorname{agC}}([x_{\lambda}], [y_{\lambda}]) \cap \Zs^{\operatorname{agC}}$ (after applying the corresponding diffeomorphism $\Xi_{\lambda}$). This produces at least as many elements of $M^{\operatorname{agC}}([x_{\lambda}], [y_{\lambda}]) \cap \Zs^{\operatorname{agC}}$ as there are in $M^{\operatorname{agC}}([x_{\infty}], [y_{\infty}]) \cap \Zs^{\operatorname{agC}}$. To see that there are no more, notice that, given a sequence $[\gamma_n] \in M^{\operatorname{agC}}([x_{\lambda_n}], [y_{\lambda_n}]) \cap \Zs^{\operatorname{agC}}$ with $\lambda_n \to \infty$, the sequence $\Xi_{\lambda_n}([\gamma_n])$ admits a convergent subsequence, and the limit must be an element of $M^{\operatorname{agC}}([x_{\infty}], [y_{\infty}]) \cap \Zs^{\operatorname{agC}}$.
Part (b) is the analogue of Proposition~\ref{prop:EquivalenceTrajectoriesObstructed}, and again the proof is similar.
\end{proof}
\operatorname{sp}incection{Orientations}
Recall that in Section~\ref{sec:OrientCoulomb} we oriented the moduli spaces of trajectories of $\mathcal{X}qgcsigma$, using an orientation data system $o^{\operatorname{gC}}$ in Coulomb gauge. This was based on trivializing the determinant lines $\det(P_{\gamma}^{\operatorname{gC}} )$ for the operators $P_{\gamma}^{\operatorname{gC}} $ from \eqref{eq:Pgammagc}. The same procedure, but using operators $P_{\gamma}^{\operatorname{gC}} $ defined with the perturbation $\q^{\lambda}$ instead of $\q$, can be used to orient the moduli spaces of trajectories of $\mathcal{X}qmlgcsigma$ (in infinite dimensions).
We would like to relate the orientations of the moduli spaces of trajectories of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q^\lambda}$ to the orientations in (finite dimensional) Morse homology and to the orientations of the moduli spaces of trajectories of $\mathcal{X}^{\operatorname{gC},\sigma}_{\q}$ used in the definition of monopole Floer homology (recast in Coulomb gauge). For the moduli spaces of approximate Seiberg-Witten trajectories in the setting of finite dimensional Morse homology, we use the third construction of orientations discussed in Section~\ref{sec:or1}, in terms of a specialized coherent orientation (cf. Definition~\ref{def:sco}). This is based on trivializing determinant lines $\det(\tilde{L}_{\gamma})$, for the operators $\tilde{L}_{\gamma}$ defined in \eqref{eq:tLgamma}.
To relate the operators $P_{\gamma}^{\operatorname{gC}} $ (with perturbation $\q^\lambda$) and $\tilde{L}_{\gamma}$, we proceed as in the proof of Proposition~\ref{prop:RelationFinite}. Indeed, $P_{\gamma}^{\operatorname{gC}} $ is the analogue of the operator $Q_{\gamma}^{\operatorname{gC}} $ considered in that proposition, and $\tilde{L}_{\gamma}$ is the analogue of \eqref{eq:HessFinite}; in both cases, what is different here is that we work on compact cylinders and add spectral projections on the boundaries. Recall from the proof of Proposition~\ref{prop:RelationFinite} that we have an orthogonal splitting $Q_{\gamma}^{\operatorname{gC}} = \tilde{L}_{\gamma} \oplus F_\gamma$, where $F_\gamma : L^2_j(\R; (W^\lambda)^\perp) \to L^2_{j-1}(\R; (W^\lambda)^\perp)$ is defined by
$$F_\gamma(b(t), \psi(t)) = (\frac{d}{dt} b(t) + *d(b(t)), \frac{d}{dt} \psi(t) + D\psi(t) - \langle \phi(t), D \phi(t) \rangle_{L^2} \psi(t)).$$
Write $\widetilde{F}_\gamma$ for the analogous operator on the compact cylinder coupled with spectral projections. Note that while the operator $\widetilde{F}_\gamma$ depends on the path, the domain and target do not. We also write $\widetilde{F}_0$ for the analogous operator without the term $\langle \phi(t), D \phi(t) \rangle_{L^2} \psi(t)$.
We seek to show that there are trivializations of $\det(\widetilde{F}_\gamma)$ which respect concatenations of paths. First, note that the spectral decomposition of $(W^\lambda)^\perp$ corresponding to $
\widetilde{F}_\gamma$ is independent of $\gamma$; indeed, since $\phi(t) \in W^\lambda$ and $\psi(t) \in (W^\lambda)^\perp$, we have that $\| \langle \phi(t), D \phi(t) \rangle_{L^2} \psi(t)\|_{L^2(Y)} < \lambda \| \psi(t) \|_{L^2(Y)}$, while $\|D \psi(t)\|_{L^2(Y)} \geq \lambda \| \psi(t) \|_{L^2(Y)}$, and the claim follows. Further, we have the same spectral decomposition for any convex combination of $\widetilde{F}_\gamma$ and $\widetilde{F}_0$. Note that $(1-r)\widetilde{F}_0 + r \widetilde{F}_\gamma$ differs from $\widetilde{F}_0$ by a compact operator. Thus, we have a homotopy between $\widetilde{F}_\gamma$ and $\widetilde{F}_0$ through Fredholm operators, which induces an identification between $\det(\widetilde{F}_\gamma)$ and $\det(\widetilde{F}_0)$. This identification can easily be seen to respect concatenations. Therefore, we have an identification of $\det(\widetilde{F}_\gamma)$, for each $\gamma$, with a fixed determinant line bundle (independent of $\gamma$) which respects concatenation. Thus, we can find compatible trivializations of $\det(\widetilde{F}_\gamma)$ by simply fixing a trivialization of $\det(\widetilde{F}_0)$.
The above discussion shows that a trivialization of $\det(P_{\gamma}^{\operatorname{gC}} )$ is equivalent to a trivialization of $\det(\tilde{L}_{\gamma})$ in a way which respects concatenation of paths. It follows that an orientation data system in Coulomb gauge gives rise to a specialized coherent orientation on the finite dimensional approximation $(B(2R) \cap W^\lambda)^{\sigma}/S^1$. Once we fix these orientations, we obtain the following.
\begin{proposition}
The bijective correspondences described in Propositions~\ref{prop:EquivalenceTrajectories}, \ref{prop:EquivalenceTrajectoriesObstructed} and \ref{prop:Uagree} are orientation-preserving.
\end{proposition}
\begin{proof}
From the above construction, we see that, on the moduli spaces of approximate Seiberg-Witten trajectories, the signs defined using $P_{\gamma}^{\operatorname{gC}} $ (with $\q^\lambda$) coincide with those defined using $\tilde{L}_{\gamma}$.
Further, as $\lambda$ varies on an interval $[\lambda_0, \infty]$ (including infinity), the moduli spaces $\Mbreve^{\operatorname{agC}}([x_{\lambda}], [y_{\lambda}])$ form a continuous family, carrying determinant line bundles given by $\det(P_{\gamma}^{\operatorname{gC}} )$. These bundles vary continuously, so the trivialization at any finite $\lambda$ agrees with the one at $\infty$, under the correspondence from Proposition~\ref{prop:EquivalenceTrajectories}.
The same argument applies to the other two correspondences.
\end{proof}
\chapter {The equivalence of the homology theories}
\label{sec:equivalence}
We are now ready to prove the results advertised in the Introduction.
\begin{proof}[Proof of Theorem~\ref{thm:Main}]
Recall that the goal is to establish an absolutely graded isomorphism between $\widetilde{H}^{S^1}_*(\operatorname{SWF}(Y,\mathfrak{s}))$ and $\widecheck{\mathit{HM}}_*(Y,\mathfrak{s})$. First, by Proposition~\ref{prop:perturbedspectrum}, we have that
\begin{equation}\label{eq:swf-swfq-homology}
\widetilde{H}^{S^1}_*(\operatorname{SWF}(Y,\mathfrak{s})) \cong \widetilde{H}^{S^1}_*(\operatorname{SWF}_\q(Y,\mathfrak{s})),
\end{equation}
where $\operatorname{SWF}_\q(Y,\mathfrak{s})$, defined in Section~\ref{sec:verycompact}, is the analogous construction of the Floer spectrum using $\mathcal{X}^{\operatorname{gC}}_{\q^\lambda} = l + p^\lambda c_\q$ instead of $l + p^\lambda c$. Here, we require that $\q$ is a very tame, admissible perturbation satisfying the conclusions of Proposition~\ref{prop:ND2}, Proposition~\ref{prop:AllMS}, and Proposition~\ref{prop:MorseSmales}. Recall that
$$
\operatorname{SWF}_\q(Y,\mathfrak{s}) = \Sigma^{-n(Y,\mathfrak{s},g)\mathbb{C}} \Sigma^{-W^{(-\lambda,0)}} I^\lambda_\q,
$$
where $I^\lambda_\q$ is the equivariant Conley index constructed from the flow of $l + p^\lambda c_\q$, analogous to $I^\lambda$ constructed in Section~\ref{sec:conleyswf}.
In Propositions~\ref{prop:AllMS} and~\ref{prop:MorseSmales} we showed that for $\lambda = \lambda^{\bullet}_i \gg 0$, $\mathcal{X}^{\operatorname{gC}}_{\q^\lambda}$ is a Morse-Smale equivariant quasi-gradient on $W^\lambda \cap B(2R)$. Thus, we can construct a Morse complex $(\check{C}_\lambda, \check{\partial}_\lambda)$ for $\mathcal{X}^{\operatorname{agC},\sigma}_{\q^\lambda}$ on $(W^\lambda \cap B(2R))^{\sigma}/S^1$ as defined in Sections~\ref{subsec:morseboundary} and~\ref{subsec:CircleMorse}. By \eqref{eq:EquivConleyMorse}, we have that
\begin{equation}\label{eq:Ilambda-morse}
\widetilde{H}^{S^1}_i(I^\lambda_\q) \cong H_{i}(\check{C}_\lambda, \check{\partial}_\lambda) ,
\end{equation}
for $0 \leq i \leq n_\lambda - 1$, where $n_\lambda$ is the connectivity of the pair $\bigl(I^\lambda_\q, \bigl( I^\lambda_\q - (I^\lambda_\q)^{S^1} \bigr) \cup *\bigr)$. Note that this isomorphism respects the $\mathbb{Z}[U]$-module structures on each of the homologies.
By shifting gradings, we can rephrase \eqref{eq:Ilambda-morse} as
\begin{equation}
\label{eq:equalityM}
\widetilde{H}^{S^1}_i(\operatorname{SWF}_\q(Y,\mathfrak{s})) \cong H_{i + \dim W^{(-\lambda,0)} + 2n(Y,\mathfrak{s},g)}(\check{C}_\lambda, \check{\partial}_\lambda),
\end{equation}
for $- \dim W^{(-\lambda,0)} - 2n(Y,\mathfrak{s},g) \leq i \leq n_\lambda - 1 - \dim W^{(-\lambda,0)} - 2n(Y,\mathfrak{s},g)$. Set
$$ M_{\lambda} := \min \ \{ \dim W^{(-\lambda,0)} + 2n(Y,\mathfrak{s},g), \ n_\lambda - 1 - \dim W^{(-\lambda,0)} - 2n(Y,\mathfrak{s},g) \},$$
so that the isomorphism in \eqref{eq:equalityM} holds in the grading range $[-M_{\lambda}, M_{\lambda}]$.
We claim that $M_{\lambda} \to \infty$ as $\lambda = \lambda^{\bullet}_i \to \infty$. It is clear that $\dim W^{(-\lambda,0)} \to \infty$, so what we need to check is that
\begin{equation}
\label{eq:MlambdaEstimate}
n_\lambda - \dim W^{(-\lambda,0)} \to \infty.
\end{equation}
Let us investigate how the connectivity of the pair $\bigl(I^\lambda_\q, \bigl( I^\lambda_\q - (I^\lambda_\q)^{S^1} \bigr) \cup *\bigr)$ changes as we increase $\lambda$. Fix a sufficiently large eigenvalue cut-off $\mu = \lambda^{\bullet}_{i}$. (Recall that this implies that neither $\mu$ nor $-\mu$ are eigenvalues.) For $\lambda=\lambda^{\bullet}_{j} \geq \mu$, it is proved in \cite[Section 7]{Spectrum} (for $\q=0$, but the same argument works for any $\q$) that
\begin{equation}
\label{eq:changeConley}
I^\lambda_\q \simeq I^{\mu}_\q \wedge I(l)^{(\mu, \lambda)}.
\end{equation}
Here, $I(l)^{(\mu, \lambda)}$ is the Conley index associated to the isolated invariant set $\{0\}$ in the linear flow induced by $l$ on the complementary subspace to $W^{\mu}$ in $W^\lambda$, that is, on $W^{(-\lambda, -\mu)} \oplus W^{(\mu, \lambda)}$. Let us decompose this complementary subspace according to the sign of the eigenvalues, and also according to the type of eigenvectors (connections or spinors). Specifically, let
\begin{align*}
a^{\mu, \lambda}_+ &= \dim (W^{(\mu, \lambda)} \cap \ker d^*),& \ \ \ b^{\mu, \lambda}_+ &= \dim (W^{(\mu, \lambda)}\cap \Gamma(\mathbb{S})) \\
a^{\mu, \lambda}_- &= \dim (W^{(-\lambda, -\mu)} \cap \ker d^*),& \ \ \ b^{\mu, \lambda}_- &= \dim (W^{(-\lambda, -\mu)}\cap \Gamma(\mathbb{S})),
\end{align*}
where $\dim$ denotes the real dimension. Then, the Conley index of the linear flow is
$$ I(l)^{(\mu, \lambda)} \simeq D(\rr^{a^{\mu, \lambda}_+})_+ \wedge D(\cc^{b^{\mu, \lambda}_+})_+ \wedge (\rr^{a^{\mu, \lambda}_-})^+ \wedge (\cc^{b^{\mu, \lambda}_-})^+.$$
We used here the standard notation in homotopy theory: If $V$ is a vector space, then $D(V)_+$ is the union of the unit disk in $V$ with one disjoint basepoint, and $V^+$ is the one-point compactification of $V$.
Using \eqref{eq:changeConley}, and keeping track of the fixed point sets in each Conley index, we obtain
$$(I^\lambda_\q, (I^\lambda_\q)^{S^1}) \operatorname{sp}incimeq \Bigl( D(\rr^{a^{\mu, \lambda}_+})_+ \wedge D(\cc^{b^{\mu, \lambda}_+})_+ \wedge \Sigma^{a^{\mu, \lambda}_- \rr} \Sigma^{b^{\mu, \lambda}_- \cc} (I^\mu_\q), \ D(\rr^{a^{\mu, \lambda}_+})_+ \wedge \Sigma^{a^{\mu, \lambda}_- \rr}
(I^\mu_\q)^{S^1}\Bigr).$$
Observe that when we change a pair $(X, Y)$ into $(D(\rr)_+ \wedge X, D(\rr)_+ \wedge Y)$, $(D(\cc)_+ \wedge X, Y)$, $(\Sigma^{\rr} X, \Sigma^{\rr}Y)$, or $(\Sigma^{\cc} X, Y)$, then the connectivity of the pair $\bigl(X, (X-Y) \cup *\bigr)$ increases by $0$, $2$, $1$ and $2$, respectively. Therefore,
$$ n_{\lambda} = n_{\mu} + 2b^{\mu, \lambda}_+ + a^{\mu, \lambda}_- + 2b^{\mu, \lambda}_-.$$
Since $\dim W^{(-\lambda,0)} = \dim W^{(-\mu,0)} + a^{\mu, \lambda}_- + 2b^{\mu, \lambda}_-$, we see that
$$n_\lambda - \dim W^{(-\lambda,0)}= n_{\mu} - \dim W^{(-\mu,0)}+2b^{\mu, \lambda}_+ \to \infty \ \ \text{as } \lambda= \lambda^{\bullet}_i \to \infty.$$
This proves \eqref{eq:MlambdaEstimate}, and we conclude that $M_{\lambda} \to \infty$. Thus, the isomorphism \eqref{eq:equalityM} holds in a grading range that gets larger as $\lambda = \lambda^{\bullet}_i \to \infty$.
While the chain groups of $\check{C}_\lambda$ consist of stationary points of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q^\lambda}$, the natural gradings from Morse homology are not the absolute gradings on stationary points that we have been working with throughout this monograph. Recall that the absolute grading $\operatorname{gr}^{\operatorname{SWF}}_\lambda$ on stationary points of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q^\lambda}$ defined in \eqref{eq:gr-lambda} includes a shift by $\dim W^{(-\lambda,0)} + 2n(Y,\mathfrak{s},g)$. Therefore, we define the complex $\widecheck{\mathit{CM}}^\lambda(Y,\mathfrak{s},\q)$ by taking the Morse complex $(\check{C}_\lambda, \check{\partial}_\lambda)$ and shifting gradings down by $\dim W^{(-\lambda,0)} + 2n(Y,\mathfrak{s},g)$, so stationary points have grading given by $\operatorname{gr}^{\operatorname{SWF}}_\lambda$. We denote the homology of $\widecheck{\mathit{CM}}^\lambda$ by $\widecheck{\mathit{HM}}^\lambda$. With this definition, we have
\begin{equation}\label{eq:swf-hmtolambda}
\widetilde{H}^{S^1}_i(\operatorname{SWF}_\q(Y,\mathfrak{s})) \cong \widecheck{\mathit{HM}}^\lambda_{i}(Y,\mathfrak{s},\q)
\end{equation}
for $i \in [-M_{\lambda}, M_{\lambda}]$. Therefore, it remains to establish an isomorphism between $\widecheck{\mathit{HM}}^\lambda_{i}(Y,\mathfrak{s},\q)$ and $\widecheck{\mathit{HM}}_i(Y,\mathfrak{s},\q)$. In other words, we must relate the Morse homology defined in terms of stationary points and trajectories of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q^\lambda}$ versus the Floer homology defined in terms of stationary points and trajectories of $\mathcal{X}^{\sigma}_{\q}$.
First, by the work of Chapter~\ref{sec:coulombgauge} (more precisely described in Section~\ref{sec:CoulombSummary}), we can compute $\widecheck{\mathit{CM}}(Y,\mathfrak{s},\q)$ using stationary points and trajectories of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q}$ instead of $\mathcal{X}^{\sigma}_{\q}$. In a fixed grading range $[-N,N]$, Proposition~\ref{prop:stationarycorrespondence} shows that for $\lambda = \lambda^{\bullet}_i \gg 0$, there is a grading-preserving identification between the stationary points of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q}$ and the stationary points of $\mathcal{X}^{\operatorname{agC},\sigma}_{\q^\lambda}$ in $(W^\lambda \cap B(2R))^{\sigma}/S^1$, and thus we have an identification between the chain groups $\widecheck{\mathit{CM}}^\lambda(Y,\mathfrak{s},\q)$ and $\widecheck{\mathit{CM}}(Y,\mathfrak{s},\q)$ in the grading range $[-N,N]$.
By the work of Chapter~\ref{sec:trajectories2}, we have established an identification between the signed counts of index one (respectively boundary-obstructed index two) trajectories for $\mathcal{X}^{\operatorname{agC},\sigma}_{\q}$ and for $\mathcal{X}^{\operatorname{agC},\sigma}_{\q^\lambda}$, in the given grading range. Since these are the moduli spaces that are counted in the differentials for $\widecheck{\mathit{CM}}^\lambda$ and $\widecheck{\mathit{CM}}$ (defined in \eqref{eq:delcheck} and \eqref{eqn:cmboundary} respectively), we obtain a chain complex isomorphism in this grading range. Therefore, we have that
\begin{equation}\label{eq:approximate-monopole-isomorphism}
\widecheck{\mathit{HM}}^\lambda_i(Y,\mathfrak{s}) \cong \widecheck{\mathit{HM}}_i(Y,\mathfrak{s})
\end{equation}
for $i \in [-N+1,N-1]$.
By Proposition~\ref{prop:Uagree}, the $U$-maps on these chain complexes, defined in \eqref{eqn:morsecap} and \eqref{eqn:floercap} for $\widecheck{\mathit{CM}}^\lambda(Y,\mathfrak{s},\q)$ and $\widecheck{\mathit{CM}}(Y,\mathfrak{s},\q)$ respectively, must agree in the given grading range. Thus, the isomorphism in \eqref{eq:approximate-monopole-isomorphism} respects the $U$-action.
Therefore, we have established an isomorphism of $\mathbb{Z}[U]$-modules between $\widecheck{\mathit{HM}}^\lambda_i(Y,\mathfrak{s})$ and $\widecheck{\mathit{HM}}_i(Y,\mathfrak{s})$ for $i \in [-N+1,N-1]$. Take $\lambda \gg 0$ so that $M_{\lambda} \geq N-1$. Combining this last isomorphism with \eqref{eq:swf-swfq-homology} and \eqref{eq:swf-hmtolambda}, we see that $$\widetilde{H}^{S^1}_i(\operatorname{SWF}(Y,\mathfrak{s})) \cong \widecheck{\mathit{HM}}_i(Y,\mathfrak{s}),$$ for $i \in [-N+1,N-1].$ Again, this isomorphism respects the $U$-action. Since $N$ was arbitrary, we obtain the desired result.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:MainHat}]
Recall that Bloom's variant of monopole Floer homology, $\widetilde{\mathit{HM}}(Y,\mathfrak{s})$, is defined to be the homology of the mapping cone (with some grading shifts) of the $U$-map on $\widecheck{\mathit{CM}}(Y,\mathfrak{s},\q)$.
If $F$ is a field, the long exact sequence induced by the $U$-map on homology shows that $\widetilde{\mathit{HM}}(Y,\mathfrak{s};F)$ is isomorphic to the homology of the mapping cone of the $U$-map on $\widecheck{\mathit{HM}}(Y,\mathfrak{s};F)$, thought of as a complex equipped with the trivial differential. By Theorem~\ref{thm:Main}, $\widetilde{\mathit{HM}}(Y,\mathfrak{s};F)$ is therefore calculated as the homology of the mapping cone of the $U$-map on $\widetilde{H}^{S^1}_*(\operatorname{SWF}(Y,\mathfrak{s}); F)$. Applying the following lemma with $\mathbb{Z}/p^k\mathbb{Z}$- and $\mathbb{Q}$-coefficients, together with the universal coefficient theorem, completes the proof.
\end{proof}
\begin{lemma}
Let $X$ be a based, finite $S^1$-CW complex. Then, for any field $F$, we have a graded isomorphism
\[
\widetilde{H}^n(X;F) \cong \widetilde{H}^n\bigl([\widetilde{H}^{*}_{S^1}(X;F)]_1 \xrightarrow{U} \widetilde{H}^{*-2}_{S^1}(X;F)\bigr).
\]
Here, $[C]_1$ means we are shifting the degree of $C$ by 1.
\end{lemma}
\begin{proof}
Since the diagonal action of $S^1$ is free on $X \wedge ES^1_+$, we have that the orbit map $X \wedge ES^1_+ \to X \wedge_{S^1} ES^1_+ := (X \wedge ES^1_+)/S^1$ is a principal $S^1$-bundle (away from the basepoint). This bundle is isomorphic to the pullback bundle of the map $\pi: X \wedge_{S^1} ES^1_+ \to BS^1$. There is an associated Gysin sequence:
\[
\ldots \to \widetilde{H}^n(X \wedge ES^1_+;F) \to \widetilde{H}^{n-1}(X \wedge_{S^1} ES^1_+;F) \xrightarrow{e \cup} \widetilde{H}^{n+1}(X \wedge_{S^1} ES^1_+;F) \to \ldots,
\]
where $e$ is the Euler class of this $S^1$-bundle. Since the Euler class agrees with the first Chern class, $e$ is by definition $\pi^*(U)$, where $U$ is the generator of $H^2(BS^1;F)$. Because $ES^1$ is contractible, we can rewrite the Gysin sequence as
\[
\ldots \to \widetilde{H}^n(X;F) \to \widetilde{H}_{S^1}^{n-1}(X;F) \xrightarrow{\pi^*(U) \cup} \widetilde{H}_{S^1}^{n+1}(X;F) \to \ldots
\]
However, cup product with $\pi^*(U)$ corresponds precisely to the $U$-action in the $F[U]$-module structure on $H_{S^1}^*(X;F)$. For each $n$, the Gysin sequence therefore gives a short exact sequence
$$0 \to \operatorname{coker}\bigl(U: \widetilde{H}_{S^1}^{n-2}(X;F) \to \widetilde{H}_{S^1}^{n}(X;F)\bigr) \to \widetilde{H}^n(X;F) \to \ker\bigl(U: \widetilde{H}_{S^1}^{n-1}(X;F) \to \widetilde{H}_{S^1}^{n+1}(X;F)\bigr) \to 0,$$
which splits since $F$ is a field; this is the desired isomorphism.\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:delta}] This follows from Theorem~\ref{thm:Main}, given that $\delta$ and $-h$ are obtained in the same way from $\widecheck{\mathit{HM}}$ resp. $\widetilde{H}^{S^1}_*(\SWF)$; they are both half of the grading of the lowest nontrivial element in the infinite $U$-tail of the corresponding homology group with $\qq$ coefficients.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:hfswf}] By the Seiberg-\!Witten / Heegaard Floer equivalence \cite{KLT1, CGH1, CGH3}, the plus and hat flavors of Heegaard Floer homology are isomorphic to $\widecheck{\mathit{HM}}$ resp. $\widetilde{\mathit{HM}}$. Thus, the first two isomorphisms in Corollary~\ref{cor:hfswf} are consequences of Theorem~\ref{thm:Main} and Corollary~\ref{cor:MainHat}.
To get the isomorphism between $\mathit{HF}^-$ and co-Borel homology, we can apply the isomorphism between $\mathit{HF}^+$ and Borel homology to $Y$ with the orientation reversed. Indeed, there is a duality between $\mathit{HF}^-(-Y, \mathfrak{s})$ and the cohomology version of $\mathit{HF}^+(Y, \mathfrak{s})$; see \cite[Proposition 2.5]{HolDiskTwo}. Similarly, co-Borel homology is isomorphic (up to a reversal of degrees) to the Borel cohomology of the dual spectrum, and $\SWF(Y, \mathfrak{s})$ is dual to $\SWF(-Y, \mathfrak{s})$ by \cite[\S 9, Remark 2]{Spectrum}. Cohomology theories can be related back to homologies by the universal coefficient theorem.
To get the isomorphism between $\mathit{HF}^{\infty}$ and Tate homology, note that $\mathit{HF}^{\infty}$ can be obtained from $\mathit{HF}^-$ by inverting the variable $U$. Similarly, Tate homology is obtained from co-Borel homology by inverting $U$.
\end{proof}
\backmatter
\end{document}
\begin{document}
\title{Matrix integrals and generating functions for enumerating rooted
hypermaps by vertices, edges and faces for a given number of darts}
\author{Jacob P Dyer}
\maketitle
\lyxaddress{[email protected]}
\lyxaddress{Department of Mathematics, University of York, York YO10 5DD, UK}
\begin{abstract}
A recursive method is given for finding generating functions which
enumerate rooted hypermaps by number of vertices, edges and faces
for any given number of darts. It makes use of matrix-integral expressions
arising from the study of bipartite quantum systems. Direct evaluation
of these generating functions is then demonstrated through the enumeration
of all rooted hypermaps with up to 13 darts.\end{abstract}
\begin{description}
\item [{Keywords:}] enumeration, rooted hypermap, bipartite quantum system,
matrix integral, generating function, divergent power series
\end{description}
\section{Introduction}
This paper is an extension of work we carried out in a previous paper
\cite{Dyer2014a}. In that paper we showed how the mean value of traces
of integer powers of the reduced density operator of a finite-dimensional
bipartite quantum system are proportional to generating functions
for enumerating one-face rooted hypermaps. We then used this relation
to derive a matrix integral expression for these generating functions,
and found closed form expressions for them.
Matrix integral expressions derived from finding the average of a
function of the reduced density operator have been studied for some
time \cite{Lloyd1988,Page1993,Foong1994,Sanchez-Ruiz1995,Sen1996},
so numerous methods for evaluating them have been described. In particular,
Lloyd and Pagels were able to reduce the matrix integral to an integral
over the space of eigenvalues with the density function \cite{Lloyd1988}
\[
P(p_{1},\ldots,p_{m})dp_{1}\ldots dp_{m}\propto\delta\left(1-\sum_{i=1}^{m}p_{i}\right)\Delta^{2}(p_{1},\ldots,p_{m})\prod_{k=1}^{m}p_{k}^{n-m}dp_{k}
\]
where $\Delta(p_{1},\ldots,p_{m})$ is the Vandermonde determinant
of the eigenvalues of the reduced density operator. Using this, in
conjunction with the work of Sen \cite{Sen1996}, we were able to
evaluate a closed-form expression for the one-face generating functions.
In this paper we extend these methods to derive expressions for generating
functions which enumerate all rooted hypermaps by number of vertices,
edges and faces for a given number of darts (we give a definition
of rooted hypermaps in Section \ref{sec:Representing-rooted-hypermaps}).
These generating functions are defined recursively in terms of another
expression called $F(m,n,\lambda;x)$, which we define in Section
\ref{sec:Additional-generating-functions} and is itself evaluated
using a matrix integral as above.
Previous work already exists on enumeration of hypermaps \cite{Arques1987,Mednykh2010,Walsh2012},
and in particular Walsh managed to enumerate all rooted hypermaps
with up to 12 darts by number of edges, vertices and darts, and genus
\cite{Walsh2012}. But as far as we are aware this is the first time
that generating functions for enumerating all rooted hypermaps by
these properties have been found without direct computation of the
hypermaps themselves. By avoiding having to generate the hypermaps
individually, we are able to vastly speed up the process of enumeration
(there are more than $r!$ hypermaps with $r$ darts \cite{Dyer2014a},
so generating them all is a very slow process).
We will give an overview of our previous work in Section \ref{sub:One-face-hypermaps},
before showing how best to generalise it to multiple faces in Section
\ref{sub:Multiple-faces}. We will then use this to study the global
generating function for rooted hypermaps $H(m,n,\lambda;x)$ in Sections
\ref{sec:Additional-generating-functions} and \ref{sec:Hypermap-generating-functions},
before looking at the process of evaluating these functions and extracting
hypermap counts from them in Section \ref{sec:Evaluating}.
\section{Representing rooted hypermaps\label{sec:Representing-rooted-hypermaps}}
A thorough discussion of hypermaps can be found in \cite{Lando2004}.
A \emph{hypermap} is a generalisation of a map (a graph embedded on
an orientable surface so that its complement consists only of regions
which are homeomorphic to the unit disc) in which the edges are capable
of having any positive number of connections to vertices instead of
the usual two. Hypermaps can be thought of as equivalent to bipartite
bicoloured maps on the same surface (with the two colours of vertices
in the map representing the vertices and edges of the hypermap) \cite{Walsh1975}.
Each edge-vertex connection (the edges in the equivalent bipartite
map) is called a \emph{dart}, and a \emph{rooted hypermap} is a hypermap
where one of the darts has been labelled as the \emph{root}, making
it distinct from the others.
The embedding of a hypergraph (the analogue of a graph) on an orientable
surface to produce a hypermap can be represented in other ways which
do not require explicit consideration of the surface involved. These
are called \emph{combinatorial embeddings}, and one such method uses
an object called a 3-constellation:
{}
\begin{defn}
A 3-constellation is an ordered triple $\{\xi,\eta,\chi\}$ of permutations
acting on some set $R$, satisfying the following two properties:
\begin{enumerate}
\item The group generated by $\{\xi,\eta,\chi\}$ acts transitively on $R$.
\item The product $\xi\eta\chi$ equals the identity.
\end{enumerate}
\end{defn}
{}
A hypermap $H$ with $r$ darts can be expressed using a 3-constellation
on the set $R=[1\ldots r]$. If the elements of $R$ are associated
with the darts in $H$, then the actions of $\xi$, $\eta$ and $\chi$
are to cycle the darts around their adjacent faces, edges and vertices
respectively. For our purposes here, the important result is that
the number of faces, edges and vertices in a hypermap $H\equiv\{\xi,\eta,\chi\}$
are given by the number of cycles in $\xi$, $\eta$ and $\chi$ respectively
\cite[p 43]{Lando2004}.
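
As a concrete illustration of this encoding (not part of the original construction), the following Python sketch builds the third permutation from $\xi$ and $\eta$ via $\chi=(\xi\eta)^{-1}$ and reads off the face, edge and vertex counts as cycle counts; dart labels are taken to be $0,\ldots,r-1$ rather than $1,\ldots,r$, and all function names are ours.

\begin{verbatim}
def mul(a, b):
    # group product a*b of permutations stored as tuples,
    # acting as "apply b first, then a": (a*b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(len(a)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def cycle_count(p):
    seen, count = set(), 0
    for i in range(len(p)):
        if i not in seen:
            count += 1
            while i not in seen:
                seen.add(i)
                i = p[i]
    return count

# Example with r = 3 darts: xi is the 3-cycle 0 -> 1 -> 2 -> 0 and
# eta swaps darts 0 and 1; chi is forced by xi * eta * chi = identity.
xi = (1, 2, 0)
eta = (1, 0, 2)
chi = inverse(mul(xi, eta))
faces, edges, vertices = cycle_count(xi), cycle_count(eta), cycle_count(chi)
# faces == 1, edges == 2, vertices == 2
\end{verbatim}
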
Two 3-constellations $\{\xi,\eta,\chi\}$ and $\{\xi',\eta',\chi'\}$
are isomorphic to each other if they are related by the bijection
\begin{equation}
\tau\,:\,\{\xi,\eta,\chi\}\rightarrow\{\xi',\eta',\chi'\}=\{\tau\xi\tau^{-1},\tau\eta\tau^{-1},\tau\chi\tau^{-1}\}\label{eq:tau-mapping}
\end{equation}
for some permutation $\tau$ \cite[p 8]{Lando2004} (the action of $\tau$
on the hypermap as given above simply involves a reordering of the
darts in the set $R$ without changing the connectivity). With this
representation of hypermap isomorphism established, we define rooted
hypermaps as hypermaps with the additional property that two of them
are considered equivalent under the action of $\tau$ only when
$\tau(1)=1$ (i.e. we choose the root dart to have the label $1$).
\subsection{One-face hypermaps\label{sub:One-face-hypermaps}}
\begin{figure}
\caption{A ladder diagram representing a rooted hypermap with one face.\label{fig:one-face-diag}}
\end{figure}
In our previous paper, we used the 3-constellation representation
of hypermap embedding to define a diagrammatic representation of rooted
hypermaps \cite{Dyer2014a} (see Figure \ref{fig:one-face-diag}).
If we define
\[
\xi=(12\ldots r),
\]
then the set of nonisomorphic rooted hypermaps with one face and
$r$ darts is equivalent to the set of all permutations $\eta$ on
$[1\ldots r]$ (i.e. the symmetric group $Sym_{r}$) through the
bijection
\[
\eta\rightarrow H_{\eta}\equiv\{\xi,\eta,\eta^{-1}\xi^{-1}\}
\]
(as $\xi$ is fixed, no two choices of $\eta$ will result in equivalent
rooted hypermaps). The diagrammatic representation in Figure \ref{fig:one-face-diag}
(here referred to as a ladder diagram) allows us to quickly count
the number of vertices and edges in $H_{\eta}$ by counting closed
loops (the numbers of which are equal to the numbers of cycles in
$\eta$ and $\xi\eta$).
We showed that these diagrams also arise in the evaluation of the
function
\begin{equation}
P_{r}(m,n)=\left.\frac{\partial}{\partial\alpha_{a_{1}b_{1}}}\ldots\frac{\partial}{\partial\alpha_{a_{r}b_{r}}}(\alpha_{a_{1}b_{2}}\ldots\alpha_{a_{r}b_{1}})\right|_{\alpha=0},\label{eq:P_r}
\end{equation}
where $\alpha$ is an $m\times n$ real matrix: when the multiderivative
in (\ref{eq:P_r}) is fully expanded out, it has the form
\[
P_{r}(m,n)=\sum_{\eta\in Sym_{r}}\prod_{i=1}^{r}\delta[a_{i},a_{\eta(i)}]\delta[b_{i},b_{\xi\eta(i)}]=\sum_{\eta\in Sym_{r}}m^{cyc(\eta)}n^{cyc(\xi\eta)}
\]
where $cyc(\sigma)$ is the number of cycles in the permutation $\sigma$.
As $cyc(\eta)$ and $cyc(\xi\eta)$ are respectively the number of
edges and vertices in the rooted hypermap $H_{\eta}\equiv\{\xi,\eta,\eta^{-1}\xi^{-1}\}$,
and the number of faces in $H_{\eta}$ is $cyc(\xi)=1$, $P_{r}$
is therefore the generating function for enumerating one-face rooted
hypermaps with $r$ darts by number of edges and vertices. $P_{r}$
can also be computed using ladder diagrams as above, with each diagram
contributing a single $m^{cyc(\eta)}n^{cyc(\xi\eta)}$ term.
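
For small $r$, the sum defining $P_{r}$ can be evaluated directly on a computer. The following Python sketch (ours, and only practical for small $r$ since it iterates over the whole of $Sym_{r}$) computes $P_{r}(m,n)=\sum_{\eta\in Sym_{r}}m^{cyc(\eta)}n^{cyc(\xi\eta)}$ at given integer arguments.

\begin{verbatim}
from itertools import permutations

def cycle_count(p):
    seen, count = set(), 0
    for i in range(len(p)):
        if i not in seen:
            count += 1
            while i not in seen:
                seen.add(i)
                i = p[i]
    return count

def P(r, m, n):
    """Brute-force P_r(m, n), with xi the single r-cycle (0 1 ... r-1)."""
    xi = tuple((i + 1) % r for i in range(r))
    total = 0
    for eta in permutations(range(r)):
        xi_eta = tuple(xi[eta[i]] for i in range(r))   # the product xi * eta
        total += m ** cycle_count(eta) * n ** cycle_count(xi_eta)
    return total

# Sanity checks: P(1, m, n) == m*n and P(2, m, n) == m**2*n + m*n**2
# for any integers m and n.
\end{verbatim}
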
We also showed, through Gaussian integration, that
\begin{eqnarray}
P_{r}(m,n) & = & \int_{\mathbb{C}^{mn}}d^{2mn}xe^{-x\cdot x}x_{a_{1}b_{1}}x_{a_{1}b_{2}}^{*}\ldots x_{a_{r}b_{r}}x_{a_{r}b_{1}}^{*}\nonumber \\
& = & \frac{\Gamma(mn+r)}{\Gamma(mn)}\langle\text{Tr}[(\hat{\rho}^{A})^{r}]\rangle,\label{eq:P_r-quantum}
\end{eqnarray}
where $\hat{\rho}^{A}$ is the reduced density operator of an $m$-dimensional
subsystem of an $mn$-dimensional bipartite quantum system, and the
mean is being taken over all possible pure states of the overall bipartite
system. What this means is explained in more detail in \cite{Dyer2014a},
but the facts of most relevance here are that (\ref{eq:P_r-quantum})
is symmetric in $m$ and $n$, and, when $n\ge m$, the mean can be
represented as an integral over the eigenvalues $(p_{1},\ldots,p_{m})$
of $\hat{\rho}^{A}$ with the density function \cite{Lloyd1988,Dyer2014}
\[
P(p_{1},\ldots,p_{m})dp_{1}\ldots dp_{m}\propto\delta\left(1-\sum_{i=1}^{m}p_{i}\right)\Delta^{2}(p_{1},\ldots,p_{m})\prod_{k=1}^{m}p_{k}^{n-m}dp_{k},
\]
where $\Delta^{2}(p_{1},\ldots,p_{m})$ is the Vandermonde discriminant
of the eigenvalues, giving
\[
\langle\text{Tr}[(\hat{\rho}^{A})^{r}]\rangle\propto\int\delta\left(1-\sum_{i=1}^{m}p_{i}\right)\Delta^{2}(p_{1},\ldots,p_{m})\prod_{k=1}^{m}(p_{k}^{n-m}dp_{k})\sum_{j=1}^{m}p_{j}^{r}.
\]
Using a co-ordinate substitution given in \cite{Page1993}, we multiply
this by the factor
\[
\frac{1}{\Gamma(mn+r)}\int_{0}^{\infty}\lambda^{mn+r-1}e^{-\lambda}d\lambda
\]
and define $q_{i}=\lambda p_{i}$, integrating over $\lambda$ in
order to remove the $\delta$ function, giving
\[
\langle\text{Tr}[(\hat{\rho}^{A})^{r}]\rangle\propto\frac{1}{\Gamma(mn+r)}\int\Delta^{2}(q_{1},\ldots,q_{m})\prod_{k=1}^{m}(e^{-q_{k}}q_{k}^{n-m}dq_{k})\sum_{j=1}^{m}q_{j}^{r}.
\]
Finally, we normalise this by using the fact that, as $n\ge m$, $\langle\text{Tr}[(\hat{\rho}^{A})^{0}]\rangle=\langle\text{Tr}[I_{m}]\rangle=m$,
giving
\[
\langle\text{Tr}[(\hat{\rho}^{A})^{r}]\rangle=\frac{\Gamma(mn)}{\Lambda_{mn}\Gamma(mn+r)}\int\Delta^{2}(q_{1},\ldots,q_{m})\prod_{k=1}^{m}(e^{-q_{k}}q_{k}^{n-m}dq_{k})\sum_{j=1}^{m}q_{j}^{r}
\]
where
\[
\Lambda_{mn}=\int\Delta^{2}(q_{1},\ldots,q_{m})\prod_{k=1}^{m}(e^{-q_{k}}q_{k}^{n-m}dq_{k}).
\]
We now need to generalise both the concept of ladder diagrams and
the closely connected $P_{r}(m,n)$ functions in order to enumerate
hypermaps with more than one face, and we will define these generalisations
in the next section.
\subsection{Multiple faces\label{sub:Multiple-faces}}
\begin{figure}
\caption{A ladder diagram with three loops, corresponding to a rooted hypermap with three faces.\label{fig:three-face-diag}}
\end{figure}
In Figure \ref{fig:one-face-diag}, there was just one loop consisting
only of single lines (i.e. single solid lines and single dotted lines,
but not the solid/dotted paired lines), which corresponded to the
single cycle in $\xi$, and therefore to the face in the associated
rooted hypermap. It follows that a hypermap with multiple faces would
have a diagram with multiple such loops (i.e. $\xi$ has multiple
cycles, one for each face). An example of such a diagram is shown
in Figure \ref{fig:three-face-diag}.
In these diagrams, the solid lines in combination with the dotted
lines defined by $\xi$ can be thought of as a fixed backbone, on
which the double lines given by $\eta$ are superimposed. We described
in Section \ref{sub:One-face-hypermaps} how, when $\xi=(12\ldots r)$,
we can sum over all possible $\eta$ and in each case count the solid
and dotted loops in order to get a generating function for enumerating
rooted one-face hypermaps with $r$ darts. We also showed that this
function was equivalent to (\ref{eq:P_r}) and (\ref{eq:P_r-quantum}).
We can apply the same procedure to diagrams with other backbones.
Looking at (\ref{eq:P_r}) and (\ref{eq:P_r-quantum}), we can see
that the single cycle of length $r$ in $\xi$ corresponds to a term
$\text{Tr}[(\hat{\rho}^{A})^{r}]$ in the quantum expression for $P_{r}$.
By extension it follows that if $\xi$ consists of $N$ cycles with lengths
$r_{1},r_{2},\ldots,r_{N}$ (e.g. the $\xi$ used in Figure (\ref{fig:three-face-diag})
corresponds to $N=3$, $\{r_{1},r_{2},r_{3}\}=\{2,4,1\}$), summing
over all ladder diagrams with such a backbone and following the same
procedure as in section \ref{sub:One-face-hypermaps}, we get the
function
\begin{eqnarray}
P_{r_{1}r_{2}\ldots r_{N}}(m,n) & = & \frac{\Gamma(mn+\Sigma_{i=1}^{N}r_{i})}{\Gamma(mn)}\left\langle \prod_{j=1}^{N}\text{Tr}[(\hat{\rho}^{A})^{r_{j}}]\right\rangle \nonumber \\
& = & \frac{1}{\Lambda_{mn}}\int\Delta^{2}(q_{1},\ldots,q_{m})\prod_{k=1}^{m}(e^{-q_{k}}q_{k}^{n-m}dq_{k})\nonumber \\
& & \qquad\times\prod_{i=1}^{N}\sum_{j=1}^{m}q_{j}^{r_{i}},\label{eq:P_mult}
\end{eqnarray}
again valid when $m\le n$.
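
The same brute-force cycle counting extends to an arbitrary backbone. The Python sketch below (ours, again only feasible for small numbers of darts) builds $\xi$ from cycles of lengths $r_{1},\ldots,r_{N}$ and sums $m^{cyc(\eta)}n^{cyc(\xi\eta)}$ over $Sym_{r}$, giving a direct way of tabulating small instances of $P_{r_{1}\ldots r_{N}}(m,n)$ against which the later recursions can be checked.

\begin{verbatim}
from itertools import permutations

def cycle_count(p):
    seen, count = set(), 0
    for i in range(len(p)):
        if i not in seen:
            count += 1
            while i not in seen:
                seen.add(i)
                i = p[i]
    return count

def P_multi(loops, m, n):
    """P_{r_1...r_N}(m, n): xi is built from disjoint cycles of the given lengths."""
    r = sum(loops)
    xi = list(range(r))
    start = 0
    for length in loops:
        for i in range(length):
            xi[start + i] = start + (i + 1) % length
        start += length
    xi = tuple(xi)
    total = 0
    for eta in permutations(range(r)):
        xi_eta = tuple(xi[eta[i]] for i in range(r))
        total += m ** cycle_count(eta) * n ** cycle_count(xi_eta)
    return total

# P_multi((r,), m, n) reproduces the one-face P_r(m, n),
# and P_multi((), m, n) == 1.
\end{verbatim}
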
These functions are not yet useful generating functions, however,
for two reasons: the sum over diagrams used to calculate them can
include disconnected diagrams (hypermaps are necessarily connected,
so cannot correspond to disconnected ladder diagrams), and any two
hypermaps which are related through cyclic permutation of one of the
cycles in $\xi$ are equivalent, producing a degeneracy. We will overcome
these issues in the following sections; first we will define some
additional functions in terms of the various $P_{r_{1}\ldots}$ in
Section \ref{sec:Additional-generating-functions} which will account
for the presence of disconnected diagrams, and then we will use these
to construct global generating functions for counting rooted hypermaps
in Section \ref{sec:Hypermap-generating-functions}.
\section{Connected diagrams\label{sec:Additional-generating-functions}}
As stated in the previous section, the functions $P_{r_{1}\ldots}$
defined in (\ref{eq:P_mult}) are generating functions each of which
count over a set of ladder diagrams. As defined, however, they include
disconnected diagrams in this count, whereas we require generating
functions which count only over connected diagrams. In this section
we will define such functions.
For any given $P_{r_{1}\ldots r_{N}}$, define $\bar{P}_{r_{1}\ldots r_{N}}$
to be a generating function defined as a summation over the same set
of diagrams as $P_{r_{1}\ldots}$ except with any disconnected diagrams
excluded. In the one-loop case, $P_{r}=\bar{P}_{r}$ as all one-loop
ladder diagrams are connected. When there is more than one loop present,
$P_{r_{1}\ldots}$ may be factorised in terms of $\bar{P}_{r_{1}\ldots}$
using the fact that any disconnected ladder diagram can be split into
a number of disjoint connected subdiagrams. We write this factorisation
\begin{equation}
P_{rr_{1}\ldots r_{N}}=\bar{P}_{rr_{1}\ldots r_{N}}+\bar{P}_{r}P_{r_{1}\ldots r_{N}}+\sum_{u\cup v=\{r_{1}\ldots r_{N}\}}^{u,v\ne\emptyset}\bar{P}_{ru_{1}\ldots}P_{v_{1}\ldots},\label{eq:Pbar-full}
\end{equation}
where the summation is over all partitions of the ordered multiset
$\{r_{1}\ldots r_{N}\}$ into two disjoint non-empty subfamilies.
This factorisation works by breaking up each diagram in the summation
into its disjoint connected subdiagrams and considering which subdiagram
the loop of length $r$ is in. This loop is factored out in a $\bar{P}$
term. As an example, when $N=3$,
\begin{eqnarray*}
P_{rabc} & = & \bar{P}_{rabc}+\bar{P}_{r}P_{abc}+\bar{P}_{ra}P_{bc}+\bar{P}_{rb}P_{ac}+\bar{P}_{rc}P_{ab}\\
& & +\bar{P}_{rab}P_{c}+\bar{P}_{rac}P_{b}+\bar{P}_{rbc}P_{a}.
\end{eqnarray*}
If the definition of $P_{r\ldots}$ is extended to include
\[
P(m,n)=1,
\]
which is consistent with (\ref{eq:P_mult}), then (\ref{eq:Pbar-full})
can be written more simply as
\begin{equation}
P_{rr_{1}\ldots r_{N}}=\sum_{u\cup v=\{r_{1}\ldots r_{N}\}}\bar{P}_{ru_{1}\ldots}P_{v_{1}\ldots},\label{eq:Pbar-simplified}
\end{equation}
where $u$ and $v$ are now allowed to be empty.
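
In computational terms, (\ref{eq:Pbar-simplified}) determines each $\bar{P}$ from the $P$'s by a recursion over subsets. A minimal Python sketch of this recursion is given below (ours); the argument \texttt{P} is any callable returning $P$ for a tuple of loop lengths, with \texttt{P(()) == 1} --- for instance the brute-force \texttt{P\_multi} sketched earlier, partially applied at fixed integer $m$ and $n$. No memoisation is attempted, so it is only suitable for small $N$.

\begin{verbatim}
from itertools import combinations

def P_bar(P, r, rest):
    """bar{P}_{r, r_1...r_N}: start from P((r,) + rest) and subtract every
    splitting in which the loop of length r lies in a connected part
    bar{P}_{r, u}, with u a proper subset of rest and the complementary
    loops contributing a factor P(v)."""
    rest = tuple(rest)
    n = len(rest)
    total = P((r,) + rest)
    for k in range(n):             # proper subsets u (the empty one included)
        for u in combinations(range(n), k):
            v = tuple(rest[i] for i in range(n) if i not in u)
            total -= P_bar(P, r, tuple(rest[i] for i in u)) * P(v)
    return total
\end{verbatim}

For example, for two loops of length one this gives $\bar{P}_{11}(m,n)=P_{11}(m,n)-P_{1}(m,n)^{2}=m^{2}n^{2}+mn-(mn)^{2}=mn$.
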
(\ref{eq:Pbar-full}) and (\ref{eq:P_mult}) can be used recursively
to construct integral expressions for any given $\bar{P}_{r\ldots}$.
However, when constructing the global generating function in Section
\ref{sec:Hypermap-generating-functions}, it will be more useful to
work with the functions
\begin{eqnarray}
\Pi_{r}^{(N)}(m,n;x) & = & \sum_{r_{1}=1}^{\infty}\frac{x^{r_{1}}}{r_{1}}\ldots\sum_{r_{N}=1}^{\infty}\frac{x^{r_{N}}}{r_{N}}P_{rr_{1}\ldots r_{N}}(m,n)\label{eq:Pi}\\
\bar{\Pi}_{r}^{(N)}(m,n;x) & = & \sum_{r_{1}=1}^{\infty}\frac{x^{r_{1}}}{r_{1}}\ldots\sum_{r_{N}=1}^{\infty}\frac{x^{r_{N}}}{r_{N}}\bar{P}_{rr_{1}\ldots r_{N}}(m,n)\label{eq:Pi_bar}\\
\Sigma^{(N)}(m,n;x) & = & \sum_{r_{1}=1}^{\infty}\frac{x^{r_{1}}}{r_{1}}\ldots\sum_{r_{N}=1}^{\infty}\frac{x^{r_{N}}}{r_{N}}P_{r_{1}\ldots r_{N}}(m,n).\label{eq:Sigma}
\end{eqnarray}
These are specifically defined as formal power series in $x$; in
general these series will be divergent if treated as functions of
a finite parameter $x$. Noting the special cases $\Pi_{r}^{(0)}(m,n;x)=\bar{\Pi}_{r}^{(0)}(m,n;x)=P_{r}(m,n)$
and $\Sigma^{(0)}(m,n;x)=1$, we use (\ref{eq:Pbar-simplified}) to
derive the recursion relation
\begin{eqnarray}
\Pi_{r}^{(N)}(m,n;x) & = & \sum_{r_{1}=1}^{\infty}\frac{x^{r_{1}}}{r_{1}}\ldots\sum_{r_{N}=1}^{\infty}\frac{x^{r_{N}}}{r_{N}}P_{rr_{1}\ldots r_{N}}(m,n)\nonumber \\
& = & \sum_{r_{1}=1}^{\infty}\frac{x^{r_{1}}}{r_{1}}\ldots\sum_{r_{N}=1}^{\infty}\frac{x^{r_{N}}}{r_{N}}\sum_{u\cup v=\{r_{1}\ldots r_{N}\}}\bar{P}_{ru_{1}\ldots}(m,n)P_{v_{1}\ldots}(m,n)\nonumber \\
& = & \sum_{k=0}^{N}\binom{N}{k}\bar{\Pi}_{r}^{(k)}(m,n;x)\Sigma^{(N-k)}(m,n;x),\label{eq:Pi-recurse}
\end{eqnarray}
with the sum over partitions in (\ref{eq:Pbar-simplified}) becoming
a sum over the different possible sizes of the partitions instead.
In addition to these three sets of series, we will need one more series
to be defined:
\begin{equation}
F(m,n,\lambda;x)=\sum_{N=0}^{\infty}\frac{\lambda^{N}}{N!}\Sigma^{(N)}(m,n;x).\label{eq:F}
\end{equation}
This series' derivative satisfies
\begin{eqnarray}
x\frac{\partial}{\partial x}F(m,n,\lambda;x) & = & x\frac{\partial}{\partial x}\sum_{N=0}^{\infty}\frac{\lambda^{N}}{N!}\sum_{r_{1}=1}^{\infty}\frac{x^{r_{1}}}{r_{1}}\ldots\sum_{r_{N}=1}^{\infty}\frac{x^{r_{N}}}{r_{N}}P_{r_{1}\ldots r_{N}}(m,n)\nonumber \\
 & = & \sum_{N=0}^{\infty}\frac{\lambda^{N}}{N!}\cdot N\sum_{r_{1}=1}^{\infty}x^{r_{1}}\sum_{r_{2}=1}^{\infty}\frac{x^{r_{2}}}{r_{2}}\ldots\sum_{r_{N}=1}^{\infty}\frac{x^{r_{N}}}{r_{N}}P_{r_{1}\ldots r_{N}}(m,n)\nonumber \\
& = & \sum_{N=1}^{\infty}\frac{\lambda^{N}}{(N-1)!}\sum_{r=1}^{\infty}x^{r}\Pi_{r}^{(N-1)}(m,n;x)\nonumber \\
& = & \sum_{N=0}^{\infty}\frac{\lambda^{N+1}}{N!}\sum_{r=1}^{\infty}x^{r}\Pi_{r}^{(N)}(m,n;x),\label{eq:F-derivative}
\end{eqnarray}
and when $x$ is set to zero,
\begin{eqnarray}
F(m,n,\lambda;0) & = & \sum_{N=0}^{\infty}\frac{\lambda^{N}}{N!}\Sigma^{(N)}(m,n;0)\nonumber \\
& = & \Sigma^{(0)}(m,n;0)\nonumber \\
& = & 1.\label{eq:F_0}
\end{eqnarray}
With these various functions and series defined, we can proceed to
define the global generating function for enumerating rooted hypermaps
in terms $F$. After doing this in the next section, we will return
to $F$ in Section \ref{sec:Evaluating} and discuss methods for evaluating
it.
\section{Hypermap generating functions\label{sec:Hypermap-generating-functions}}
Let us define $H(m,n,\lambda;x)$ as the generating function for enumerating
all rooted hypermaps in the form
\begin{equation}
H(m,n,\lambda;x)=\sum_{e,v,f,r}H_{vefr}m^{v}n^{e}\lambda^{f}x^{r},\label{eq:counting-basic}
\end{equation}
where $H_{vefr}$ is the number of rooted hypermaps with $v$ vertices,
$e$ edges, $f$ faces and $r$ darts. As with the expressions
used in Section \ref{sec:Additional-generating-functions}, this generating
function is strictly speaking a formal power series in $x$ which
will be divergent in general. However, if we write
\[
H(m,n,\lambda;x)=\sum_{r=0}^{\infty}H_{r}(m,n,\lambda)x^{r},
\]
then the individual $H_{r}$ will be well-behaved polynomial functions
enumerating all rooted hypermaps with $r$ darts. Ultimately our aim
will be to compute these.
It is worth noting the symmetry properties of these functions:
{}
\begin{thm}
\label{thm:symmetric}Each $H_{r}$ is completely symmetric in its
three parameters, or, equivalently,
\[
H_{r}(m,n,\lambda)=H_{r}(n,m,\lambda)=H_{r}(m,\lambda,n).
\]
\end{thm}
\begin{proof}
This result follows easily from considering a rooted hypermap as a
3-constellation $\{\xi,\eta,\chi\}$. The mapping
\[
T_{ef}\,:\,\{\xi,\eta,\chi\}\rightarrow\{\eta^{-1},\xi^{-1},\chi^{-1}\}
\]
maps rooted hypermaps with $r$ darts onto each other, and specifically
maps a rooted hypermap with $v$ vertices, $e$ edges and $f$ faces
onto one with $v$ vertices, $f$ edges and $e$ faces. As $T_{ef}$
is bijective (it is its own inverse), this means that $H_{vefr}=H_{vfer}$,
and so
\[
H_{r}(m,n,\lambda)=\sum_{v,e,f}H_{vefr}m^{v}n^{e}\lambda^{f}=H_{r}(m,\lambda,n).
\]
Similarly, the mapping
\[
T_{ve}\,:\,\{\xi,\eta,\chi\}\rightarrow\{\xi^{-1},\chi^{-1},\eta^{-1}\}
\]
is a bijection which swaps the number of edges and vertices in each
rooted hypermap, meaning $H_{vefr}=H_{evfr}$ and
\[
H_{r}(m,n,\lambda)=H_{r}(n,m,\lambda).
\]
\end{proof}
{}
\begin{figure}
\caption{A cyclic permutation of the nodes of one of the loops of a ladder diagram.\label{fig:A-cyclic-permutation}}
\end{figure}
While we cannot evaluate $H$ directly, we are able to define it in
relation to the series $F$ defined previously:
{}
\begin{thm}
\label{thm:P-given-F}The generating function $H$ satisfies the relation
\[
H(m,n,\lambda;x)F(m,n,\lambda;x)=x\frac{\partial}{\partial x}F(m,n,\lambda;x).
\]
\end{thm}
\begin{proof}
Each $\bar{P}_{r_{1}\ldots r_{N}}$ is a generating function for a
set of rooted hypermaps with $N$ faces (one for each of the loops
in the associated ladder diagrams), and if all possible $\bar{P}_{r_{1}\ldots r_{N}}$
for fixed $r_{1}+\ldots+r_{N}=r$ are summed over, then the resulting
function will include terms for every rooted hypermap with $N$ faces
and $r$ darts, as any such hypermap has at least one associated ladder
diagram which contributes a term to one of the $\bar{P}_{r_{1}\ldots r_{N}}$.
However, each such hypermap will have a total of $(N-1)!r_{2}r_{3}\ldots r_{N}$
such ladder diagrams (the $N-1$ loops of length $r_{2}$ through
$r_{N}$ can be put in any order to get distinct diagrams, giving a
degeneracy of $(N-1)!$, and each of these loops can have its nodes
permuted cyclically -- see Figure \ref{fig:A-cyclic-permutation}
-- giving a degeneracy of $r_{2}r_{3}\ldots r_{N}$; in both cases
the $r_{1}$ loop is fixed because it is associated with the root),
so in order to get a generating function which only counts each rooted
hypermap once, each $\bar{P}_{r_{1}\ldots r_{N}}$ must be divided
by this degeneracy.
We therefore write out $H$ explicitly by summing over all $P_{r_{1}\ldots r_{N}}$,
dividing each by $(N-1)!r_{2}r_{3}\ldots r_{N}$, and multiplying
each by $\lambda^{N}x^{r_{1}+\ldots+r_{N}}$ in order to index the
enumeration by number of faces and darts as well. The resulting expression
is
\begin{eqnarray*}
H(m,n,\lambda;x) & = & \sum_{N=1}^{\infty}\frac{\lambda^{N}}{(N-1)!}\sum_{r_{1}=1}^{\infty}x^{r_{1}}\sum_{r_{2}=1}^{\infty}\frac{x^{r_{2}}}{r_{2}}\ldots\sum_{r_{N}=1}^{\infty}\frac{x^{r_{N}}}{r_{N}}\bar{P}_{r_{1}r_{2}\ldots r_{N}}(m,n)\\
 & = & \sum_{N=0}^{\infty}\frac{\lambda^{N+1}}{N!}\sum_{r=1}^{\infty}x^{r}\sum_{r_{1}=1}^{\infty}\frac{x^{r_{1}}}{r_{1}}\ldots\sum_{r_{N}=1}^{\infty}\frac{x^{r_{N}}}{r_{N}}\bar{P}_{rr_{1}\ldots r_{N}}(m,n).
\end{eqnarray*}
We simplify this by substituting in (\ref{eq:Pi_bar}):
\begin{equation}
H(m,n,\lambda;x)=\sum_{N=0}^{\infty}\frac{\lambda^{N+1}}{N!}\sum_{r=1}^{\infty}x^{r}\bar{\Pi}_{r}^{(N)}(m,n;x).\label{eq:P-by-Pi}
\end{equation}
Now, multiplying this by $F$ as defined in (\ref{eq:F}), we get
\begin{eqnarray*}
H(m,n,\lambda;x)F(m,n,\lambda;x) & = & \sum_{N=0}^{\infty}\frac{\lambda^{N+1}}{N!}\sum_{r=1}^{\infty}x^{r}\bar{\Pi}_{r}^{(N)}(m,n;x)\sum_{k=0}^{\infty}\frac{\lambda^{k}}{k!}\Sigma^{(k)}(m,n;x)\\
& = & \sum_{r=1}^{\infty}x^{r}\sum_{k=0}^{\infty}\sum_{N=0}^{\infty}\frac{\lambda^{N+k+1}}{N!k!}\bar{\Pi}_{r}^{(N)}(m,n;x)\Sigma^{(k)}(m,n;x)\\
& = & \sum_{r=1}^{\infty}x^{r}\sum_{k=0}^{\infty}\sum_{N=k}^{\infty}\frac{\lambda^{N+1}}{(N-k)!k!}\bar{\Pi}_{r}^{(N-k)}(m,n;x)\Sigma^{(k)}(m,n;x)\\
& = & \sum_{r=1}^{\infty}x^{r}\sum_{N=0}^{\infty}\sum_{k=0}^{N}\frac{\lambda^{N+1}}{(N-k)!k!}\bar{\Pi}_{r}^{(N-k)}(m,n;x)\Sigma^{(k)}(m,n;x)\\
& = & \sum_{N=0}^{\infty}\frac{\lambda^{N+1}}{N!}\sum_{r=1}^{\infty}x^{r}\sum_{k=0}^{N}\binom{N}{k}\bar{\Pi}_{r}^{(N-k)}(m,n;x)\Sigma^{(k)}(m,n;x).
\end{eqnarray*}
This has a clear similarity to (\ref{eq:Pi-recurse}), so we substitute
in (\ref{eq:Pi-recurse}) and (\ref{eq:F-derivative}), giving
\begin{eqnarray}
H(m,n,\lambda;x)F(m,n,\lambda;x) & = & \sum_{N=0}^{\infty}\frac{\lambda^{N+1}}{N!}\sum_{r=1}^{\infty}x^{r}\Pi_{r}^{(N)}(m,n;x)\nonumber \\
& = & x\frac{\partial}{\partial x}F(m,n,\lambda;x).\label{eq:P-F-relation}
\end{eqnarray}
\end{proof}
{}
If $F$ and $H$ were both well-behaved functions, this expression
would be sufficient to evaluate $H$ given $F$. As both are formal
power series, however, it is only meaningful to interpret this relation
term by term. Defining the functions $F_{r}(m,n,\lambda)$
such that
\[
F(m,n,\lambda;x)=\sum_{r=0}^{\infty}F_{r}(m,n,\lambda)x^{r},
\]
(\ref{eq:P-F-relation}) becomes
\begin{equation}
\sum_{k=0}^{r}H_{r-k}(m,n,\lambda)F_{k}(m,n,\lambda)=rF_{r}(m,n,\lambda).\label{eq:generate-recurse}
\end{equation}
When $r=0$ this simply gives
\[
H_{0}(m,n,\lambda)=0,
\]
and then we can recursively construct other $H_{r}$ for $r>0$. For
example, the first few are
\begin{eqnarray*}
H_{1}(m,n,\lambda) & = & F_{1}(m,n,\lambda)\\
H_{2}(m,n,\lambda) & = & 2F_{2}(m,n,\lambda)-[F_{1}(m,n,\lambda)]^{2}\\
H_{3}(m,n,\lambda) & = & 3F_{3}(m,n,\lambda)-3F_{1}(m,n,\lambda)F_{2}(m,n,\lambda)+[F_{1}(m,n,\lambda)]^{3},
\end{eqnarray*}
where we have made use of the fact that $F_{0}(m,n,\lambda)=F(m,n,\lambda;0)=1$
as shown in (\ref{eq:F_0}).
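
The recursion (\ref{eq:generate-recurse}) is straightforward to implement once the $F_{r}$ are known. A short Python sketch (ours) is given below; the entries of \texttt{F} may be exact integers, rationals or symbolic polynomials, since only addition, subtraction and multiplication are used.

\begin{verbatim}
def H_from_F(F):
    """Given F = [F_0, F_1, ..., F_R] with F[0] == 1, return [H_0, ..., H_R]
    from sum_{k=0}^{r} H_{r-k} F_k = r F_r, i.e. eq. (generate-recurse)."""
    R = len(F) - 1
    H = [0] * (R + 1)                        # H_0 = 0
    for r in range(1, R + 1):
        H[r] = r * F[r] - sum(H[r - k] * F[k] for k in range(1, r + 1))
    return H

# With F[0] = 1 this reproduces H_1 = F_1, H_2 = 2*F_2 - F_1**2 and
# H_3 = 3*F_3 - 3*F_1*F_2 + F_1**3, as above.
\end{verbatim}
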
All that remains, then, is to evaluate the various $F_{r}$. We will
do this in the next section.
\section{Evaluating $F_{r}$\label{sec:Evaluating}}
We now have the generating function $H$ defined in terms of the series
$F$. The problem of evaluating terms in the $x$-series expansion
of $H$ is therefore equivalent to the problem of evaluating the
terms in $F$. In this section we will establish an integral representation
of $F$ and then discuss the use of this to explicitly evaluate the
terms $F_{r}$ in $F$.
\begin{thm}
\label{thm:F-integral}The series $F$ has the integral representation
\begin{equation}
F(m,n,\lambda;x)=\frac{1}{\Lambda_{mn}}\int\Delta^{2}(q_{1},\ldots,q_{m})\prod_{k=1}^{m}\left(e^{-q_{k}}q_{k}^{n-m}dq_{k}\sum_{a=0}^{\infty}\frac{\Gamma(\lambda+a)}{a!\Gamma(\lambda)}q_{k}^{a}x^{a}\right)\label{eq:F-integral}
\end{equation}
for positive integers $m$, $n$ and $\lambda$ satisfying $m\le n$, with $x$ a formal variable,
where $\Delta(q_{1},\ldots,q_{m})$ is the Vandermonde determinant,
the integral is over the range $0\le q_{k}<\infty$ for all $1\le k\le m$,
\[
\Lambda_{mn}=\int\Delta^{2}(q_{1},\ldots,q_{m})\prod_{k=1}^{m}e^{-q_{k}}q_{k}^{n-m}dq_{k}.
\]
\end{thm}
\begin{proof}
From (\ref{eq:F}) and (\ref{eq:Sigma}) we have that
\begin{equation}
F(m,n,\lambda;x)=\sum_{N=0}^{\infty}\frac{\lambda^{N}}{N!}\sum_{r_{1}=1}^{\infty}\frac{x^{r_{1}}}{r_{1}}\ldots\sum_{r_{N}=1}^{\infty}\frac{x^{r_{N}}}{r_{N}}P_{r_{1}\ldots r_{N}}(m,n).\label{eq:F-expand}
\end{equation}
If we then substitute (\ref{eq:P_mult}) into this, we get that, when
$m\le n$,
\begin{eqnarray}
F(m,n,\lambda;x) & = & \frac{1}{\Lambda_{mn}}\sum_{N=0}^{\infty}\frac{\lambda^{N}}{N!}\sum_{r_{1}=1}^{\infty}\frac{x^{r_{1}}}{r_{1}}\ldots\sum_{r_{N}=1}^{\infty}\frac{x^{r_{N}}}{r_{N}}\int\Delta^{2}(q_{1},\ldots,q_{m})\nonumber \\
& & \qquad\times\prod_{k=1}^{m}(e^{-q_{k}}q_{k}^{n-m}dq_{k})\prod_{i=1}^{N}\sum_{j=1}^{m}q_{j}^{r_{i}}\nonumber \\
& = & \frac{1}{\Lambda_{mn}}\sum_{N=0}^{\infty}\frac{\lambda^{N}}{N!}\int\Delta^{2}(q_{1},\ldots,q_{m})\prod_{k=1}^{m}(e^{-q_{k}}q_{k}^{n-m}dq_{k})\nonumber \\
& & \qquad\times\prod_{i=1}^{N}\sum_{j=1}^{m}\sum_{r_{i}=1}^{\infty}\frac{q_{j}^{r_{i}}x^{r_{i}}}{r_{i}}\nonumber \\
& = & \frac{1}{\Lambda_{mn}}\int\Delta^{2}(q_{1},\ldots,q_{m})\prod_{k=1}^{m}(e^{-q_{k}}q_{k}^{n-m}dq_{k})\nonumber \\
& & \qquad\times\sum_{N=0}^{\infty}\frac{\lambda^{N}}{N!}\left(\sum_{j=1}^{m}\sum_{r=1}^{\infty}\frac{q_{j}^{r}x^{r}}{r}\right)^{N}.\label{eq:F-nested}
\end{eqnarray}
This expression is divergent for any given non-zero $x$, as almost
all of the domain of integration has at least one $q_{j}$ such that
$|q_{j}x|>1$, making
\[
\sum_{r=1}^{\infty}\frac{q_{j}^{r}x^{r}}{r}
\]
diverge. However, we are still able to make more progress by considering
(\ref{eq:F-nested}) as a formal power series in $x$ again. We have
the identity
\[
\sum_{N=0}^{\infty}\frac{\lambda^{N}}{N!}\left(\sum_{j=1}^{m}\sum_{r=1}^{\infty}\frac{q_{j}^{r}x^{r}}{r}\right)^{N}=\prod_{j=1}^{m}\sum_{a_{j}=0}^{\infty}\frac{\Gamma(\lambda+a_{j})}{a_{j}!\Gamma(\lambda)}q_{j}^{a_{j}}x^{a_{j}}
\]
for $\lambda>0$ (see Theorem \ref{thm:Nested-series} in Appendix
1), so we rewrite (\ref{eq:F-nested}) as
\[
F(m,n,\lambda;x)=\frac{1}{\Lambda_{mn}}\int\Delta^{2}(q_{1},\ldots,q_{m})\prod_{k=1}^{m}\left(e^{-q_{k}}q_{k}^{n-m}dq_{k}\sum_{a_{k}=0}^{\infty}\frac{\Gamma(\lambda+a_{k})}{a_{k}!\Gamma(\lambda)}q_{k}^{a_{k}}x^{a_{k}}\right).
\]
\end{proof}
{}
This expression still bears similarities to expressions used in past
work \cite{Page1993,Foong1994,Sanchez-Ruiz1995,Sen1996,Dyer2014a}.
To evaluate the integral, we will use a method similar to that used
by Foong \cite{Foong1994}.
{}
\begin{thm}
~
\begin{equation}
F(m,n,\lambda;x)=\sum_{a_{0}=0}^{\infty}\cdots\sum_{a_{m-1}=0}^{\infty}\prod_{0\le i<j<m}\left(\frac{a_{i}-a_{j}}{j-i}+1\right)\prod_{s=0}^{m-1}\frac{\Gamma(\lambda+a_{s})}{\Gamma(\lambda)}\frac{\Gamma(n-s+a_{s})}{\Gamma(n-s)}\frac{x^{a_{s}}}{a_{s}!}\label{eq:F-eval}
\end{equation}
for positive integers $m$, $n$ and $\lambda$ satisfying $m\le n$, with $x$ a formal variable.\end{thm}
\begin{proof}
From Theorem \ref{thm:F-integral} we have
\[
F(m,n,\lambda;x)=\frac{1}{\Lambda_{mn}}\int\Delta^{2}(q_{1},\ldots,q_{m})\prod_{k=1}^{m}\left(e^{-q_{k}}q_{k}^{n-m}dq_{k}\sum_{a_{k}=0}^{\infty}\frac{\Gamma(\lambda+a_{k})}{a_{k}!\Gamma(\lambda)}q_{k}^{a_{k}}x^{a_{k}}\right).
\]
As in \cite{Foong1994}, we multiply the integrand by a ``damping
factor'' $\exp(-\sum_{k=1}^{m}\epsilon_{k}q_{k})$, and then replace
the $q_{i}$ in the Vandermonde discriminant $\Delta^{2}(q_{1},\ldots,q_{m})$
by $D_{i}=-\partial/\partial\epsilon_{i}$:
\begin{eqnarray}
F(m,n,\lambda;x) & = & \lim_{\epsilon\rightarrow0}\frac{\Delta^{2}(D_{1},\ldots,D_{m})}{\Lambda_{mn}}\prod_{k=1}^{m}\int_{0}^{\infty}e^{-(1+\epsilon_{k})q_{k}}q_{k}^{n-m}dq_{k}\nonumber \\
& & \left.\qquad\times\sum_{a_{k}=0}^{\infty}\frac{\Gamma(\lambda+a_{k})}{a_{k}!\Gamma(\lambda)}q_{k}^{a_{k}}x^{a_{k}}\right|_{\epsilon_{i}=\epsilon}.\label{eq:F-D}
\end{eqnarray}
Foong then notes that
\[
\Delta^{2}(D_{1},\ldots,D_{m})=|\mathcal{DD}^{T}|,
\]
where
\[
\mathcal{D}=\left[\begin{array}{cccc}
1 & 1 & \cdots & 1\\
D_{1} & D_{2} & \cdots & D_{m}\\
\vdots & \vdots & \ddots & \vdots\\
D_{1}^{m-1} & D_{2}^{m-1} & \cdots & D_{m}^{m-1}
\end{array}\right],
\]
and that
\[
|\mathcal{DD}^{T}|\, f(\{\epsilon_{i}\})|_{\epsilon_{i}=\epsilon}=m!|\mathcal{D}|D_{2}D_{3}^{2}\cdots D_{m}^{m-1}f(\{\epsilon_{i}\})|_{\epsilon_{i}=\epsilon}
\]
when $f(\{\epsilon_{i}\})$ is a symmetric function of $\epsilon_{i}$
\cite{Foong1994}. Given this, we rewrite (\ref{eq:F-D}) as
\begin{eqnarray}
F(m,n,\lambda;x) & = & \lim_{\epsilon\rightarrow0}\frac{m!}{\Lambda_{mn}}|\mathcal{D}|\prod_{k=1}^{m}\int_{0}^{\infty}D_{k}^{k-1}e^{-(1+\epsilon_{k})q_{k}}q_{k}^{n-m}dq_{k}\nonumber \\
& & \left.\qquad\times\sum_{a_{k}=0}^{\infty}\frac{\Gamma(\lambda+a_{k})}{a_{k}!\Gamma(\lambda)}q_{k}^{a_{k}}x^{a_{k}}\right|_{\epsilon_{i}=\epsilon}\nonumber \\
& = & \lim_{\epsilon\rightarrow0}\frac{m!}{\Lambda_{mn}}|\mathcal{D}|\prod_{k=1}^{m}\int_{0}^{\infty}e^{-(1+\epsilon_{k})q_{k}}q_{k}^{n-m+k-1}dq_{k}\nonumber \\
& & \left.\qquad\times\sum_{a_{k}=0}^{\infty}\frac{\Gamma(\lambda+a_{k})}{a_{k}!\Gamma(\lambda)}q_{k}^{a_{k}}x^{a_{k}}\right|_{\epsilon_{i}=\epsilon}\nonumber \\
& = & \lim_{\epsilon\rightarrow0}\left.\frac{m!}{\Lambda_{mn}}|\mathcal{D}|\prod_{k=1}^{m}\sum_{a_{k}=0}^{\infty}\frac{\Gamma(\lambda+a_{k})}{a_{k}!\Gamma(\lambda)}\frac{\Gamma(n-m+k+a_{k})}{(1+\epsilon_{k})^{n-m+k+a_{k}}}x^{a_{k}}\right|_{\epsilon_{i}=\epsilon}\nonumber \\
& = & \frac{m!}{\Lambda_{mn}}\sum_{a_{1}=0}^{\infty}\frac{\Gamma(\lambda+a_{1})}{a_{1}!\Gamma(\lambda)}x^{a_{1}}\cdots\sum_{a_{m}=0}^{\infty}\frac{\Gamma(\lambda+a_{m})}{a_{m}!\Gamma(\lambda)}x^{a_{m}}\nonumber \\
& & \qquad\times\lim_{\epsilon\rightarrow0}\left.|\mathcal{D}|\prod_{k=1}^{m}\frac{\Gamma(n-m+k+a_{k})}{(1+\epsilon_{k})^{n-m+k+a_{k}}}\right|_{\epsilon_{i}=\epsilon}.\label{eq:F-det}
\end{eqnarray}
Let
\[
Q=\lim_{\epsilon\rightarrow0}\left.|\mathcal{D}|\prod_{k=1}^{m}\frac{\Gamma(n-m+k+a_{k})}{(1+\epsilon_{k})^{n-m+k+a_{k}}}\right|_{\epsilon_{i}=\epsilon}.
\]
Expanding the determinant out explicitly in terms of the Levi-Civita
symbol, we get
\begin{eqnarray*}
Q & = & \lim_{\epsilon\rightarrow0}\left.\varepsilon_{i_{1}\ldots i_{m}}\prod_{k=1}^{m}D_{k}^{i_{k}-1}\frac{\Gamma(n-m+k+a_{k})}{(1+\epsilon_{k})^{n-m+k+a_{k}}}\right|_{\epsilon_{i}=\epsilon}\\
& = & \lim_{\epsilon\rightarrow0}\left.\varepsilon_{i_{1}\ldots i_{m}}\prod_{k=1}^{m}\frac{\Gamma(n-m+k+i_{k}-1+a_{k})}{(1+\epsilon_{k})^{n-m+k+i_{k}-1+a_{k}}}\right|_{\epsilon_{i}=\epsilon}\\
& = & \varepsilon_{i_{1}\ldots i_{m}}\prod_{k=1}^{m}\Gamma(n-m+k+i_{k}-1+a_{k})\\
& = & \left|\begin{array}{cccc}
\Gamma(n-m+1+a_{1}) & \Gamma(n-m+2+a_{2}) & \cdots & \Gamma(n+a_{m})\\
\Gamma(n-m+2+a_{1}) & \Gamma(n-m+3+a_{2}) & \cdots & \Gamma(n+1+a_{m})\\
\Gamma(n-m+3+a_{1}) & \Gamma(n-m+4+a_{2}) & \cdots & \Gamma(n+2+a_{m})\\
\vdots & \vdots & \ddots & \vdots\\
\Gamma(n+a_{1}) & \Gamma(n+1+a_{2}) & \cdots & \Gamma(n+m-1+a_{m})
\end{array}\right|.
\end{eqnarray*}
We then simplify this determinant by a process of subtracting multiples
of different rows of the matrix from each other as follows:
\begin{enumerate}
\item Subtract $(n-m)$ times the first row from the second, $(n-m+1)$
times the second from the third etc. to give
\[
Q=\left|\tiny{\begin{array}{cccc}
\Gamma(n-m+1+a_{1}) & \Gamma(n-m+2+a_{2}) & \cdots & \Gamma(n+a_{m})\\
(a_{1}+1)\Gamma(n-m+1+a_{1}) & (a_{2}+2)\Gamma(n-m+2+a_{2}) & \cdots & (a_{m}+m)\Gamma(n+a_{m})\\
(a_{1}+1)\Gamma(n-m+2+a_{1}) & (a_{2}+2)\Gamma(n-m+3+a_{2}) & \cdots & (a_{m}+m)\Gamma(n+1+a_{m})\\
\vdots & \vdots & \ddots & \vdots\\
(a_{1}+1)\Gamma(n-1+a_{1}) & (a_{2}+2)\Gamma(n+a_{2}) & \cdots & (a_{m}+m)\Gamma(n+m-2+a_{m})
\end{array}}\right|.
\]
\item Subtract $(n-m)$ times the second row from the third, $(n-m+1)$
times the third row from the fourth etc. to give
\[
Q=\left|\tiny{\begin{array}{cccc}
\Gamma(n-m+1+a_{1}) & \Gamma(n-m+2+a_{2}) & \cdots & \Gamma(n+a_{m})\\
(a_{1}+1)\Gamma(n-m+1+a_{1}) & (a_{2}+2)\Gamma(n-m+2+a_{2}) & \cdots & (a_{m}+m)\Gamma(n+a_{m})\\
(a_{1}+1)^{2}\Gamma(n-m+1+a_{1}) & (a_{2}+2)^{2}\Gamma(n-m+2+a_{2}) & \cdots & (a_{m}+m)^{2}\Gamma(n+a_{m})\\
\vdots & \vdots & \ddots & \vdots\\
(a_{1}+1)^{2}\Gamma(n-2+a_{1}) & (a_{2}+2)^{2}\Gamma(n-1+a_{2}) & \cdots & (a_{m}+m)^{2}\Gamma(n+m-3+a_{m})
\end{array}}\right|.
\]
\item Continue to repeat this process, starting a row further down each
time, until
\begin{eqnarray*}
Q & = & \left|\tiny{\begin{array}{cccc}
\Gamma(n-m+1+a_{1}) & \Gamma(n-m+2+a_{2}) & \cdots & \Gamma(n+a_{m})\\
(a_{1}+1)\Gamma(n-m+1+a_{1}) & (a_{2}+2)\Gamma(n-m+2+a_{2}) & \cdots & (a_{m}+m)\Gamma(n+a_{m})\\
(a_{1}+1)^{2}\Gamma(n-m+1+a_{1}) & (a_{2}+2)^{2}\Gamma(n-m+2+a_{2}) & \cdots & (a_{m}+m)^{2}\Gamma(n+a_{m})\\
\vdots & \vdots & \ddots & \vdots\\
(a_{1}+1)^{m-1}\Gamma(n-m+1+a_{1}) & (a_{2}+2)^{m-1}\Gamma(n-m+2+a_{2}) & \cdots & (a_{m}+m)^{m-1}\Gamma(n+a_{m})
\end{array}}\right|\\
& = & \Delta(a_{1}+1,\ldots,a_{m}+m)\prod_{k=1}^{m}\Gamma(n-m+a_{k}+k)\\
& = & \prod_{0<i<j\le m}(a_{j}-a_{i}+j-i)\prod_{k=1}^{m}\Gamma(n-m+a_{k}+k).
\end{eqnarray*}
\end{enumerate}
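The reduction above can be checked numerically for small parameter values before it is substituted back into (\ref{eq:F-det}). The following minimal sketch does so; the helper names are our own, and we assume integers $n\ge m\ge1$ and non-negative integers $a_{k}$, so that every Gamma argument is a positive integer.
\begin{verbatim}
# A small numerical check (not part of the derivation) that the matrix of Gamma
# values above and the final Vandermonde-type product give the same Q.  We use
# Gamma(p) = (p-1)! for positive integer arguments.
from math import factorial
from itertools import permutations

def gamma_int(p):                  # Gamma(p) for a positive integer p
    return factorial(p - 1)

def det_exact(M):                  # exact determinant by Leibniz expansion (small m)
    m = len(M)
    total = 0
    for perm in permutations(range(m)):
        sign = 1
        for i in range(m):         # sign = (-1)^(number of inversions)
            for j in range(i + 1, m):
                if perm[i] > perm[j]:
                    sign = -sign
        term = sign
        for i in range(m):
            term *= M[i][perm[i]]
        total += term
    return total

def Q_determinant(n, m, a):        # entry (i,k) is Gamma(n-m+k+i-1+a_k), 1-based i,k
    M = [[gamma_int(n - m + (k + 1) + i + a[k]) for k in range(m)] for i in range(m)]
    return det_exact(M)

def Q_product(n, m, a):            # prod (a_j-a_i+j-i) * prod Gamma(n-m+a_k+k)
    vdm = 1
    for i in range(m):
        for j in range(i + 1, m):
            vdm *= a[j] - a[i] + (j - i)
    gam = 1
    for k in range(m):
        gam *= gamma_int(n - m + a[k] + (k + 1))
    return vdm * gam

assert Q_determinant(5, 3, [2, 0, 1]) == Q_product(5, 3, [2, 0, 1])
\end{verbatim}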
We then substitute this expression into (\ref{eq:F-det}):
\begin{eqnarray*}
F(m,n,\lambda;x) & = & \frac{m!}{\Lambda_{mn}}\sum_{a_{1}=0}^{\infty}\frac{\Gamma(\lambda+a_{1})}{a_{1}!\Gamma(\lambda)}x^{a_{1}}\cdots\sum_{a_{m}=0}^{\infty}\frac{\Gamma(\lambda+a_{m})}{a_{m}!\Gamma(\lambda)}x^{a_{m}}\\
& & \qquad\times\lim_{\epsilon\rightarrow0}\left.|\mathcal{D}|\prod_{k=1}^{m}\frac{\Gamma(n-m+k+a_{k})}{(1+\epsilon_{k})^{n-m+k+a_{k}}}\right|_{\epsilon_{i}=\epsilon}\\
& = & \frac{m!}{\Lambda_{mn}}\sum_{a_{1}=0}^{\infty}\frac{\Gamma(\lambda+a_{1})}{a_{1}!\Gamma(\lambda)}x^{a_{1}}\cdots\sum_{a_{m}=0}^{\infty}\frac{\Gamma(\lambda+a_{m})}{a_{m}!\Gamma(\lambda)}x^{a_{m}}\\
& & \qquad\times\prod_{0<i<j\le m}(a_{j}-a_{i}+j-i)\prod_{k=1}^{m}\Gamma(n-m+a_{k}+k),
\end{eqnarray*}
and simplify this by making the substitutions $i\rightarrow m-i$,
$j\rightarrow m-j$, $k\rightarrow m-k$ and $a_{s}\rightarrow a_{m-s}$
such that
\begin{eqnarray*}
F(m,n,\lambda;x) & = & \frac{m!}{\Lambda_{mn}}\sum_{a_{0}=0}^{\infty}\frac{\Gamma(\lambda+a_{0})}{a_{0}!\Gamma(\lambda)}x^{a_{0}}\cdots\sum_{a_{m-1}=0}^{\infty}\frac{\Gamma(\lambda+a_{m-1})}{a_{m-1}!\Gamma(\lambda)}x^{a_{m-1}}\\
& & \qquad\times\prod_{0\le i<j<m}(a_{i}-a_{j}+j-i)\prod_{s=0}^{m-1}\Gamma(n-s+a_{s}).
\end{eqnarray*}
We know from (\ref{eq:F_0}) that $F(m,n,\lambda;0)=1$, which means
we can now fix the value of the normalisation constant $\Lambda_{mn}$,
as
\[
F(m,n,\lambda;0)=\frac{m!}{\Lambda_{mn}}\prod_{0\le i<j<m}(j-i)\prod_{s=0}^{m-1}\Gamma(n-s)=1.
\]
Therefore,
\[
F(m,n,\lambda;x)=\sum_{a_{0}=0}^{\infty}\cdots\sum_{a_{m-1}=0}^{\infty}\prod_{0\le i<j<m}\left(\frac{a_{i}-a_{j}}{j-i}+1\right)\prod_{s=0}^{m-1}\frac{\Gamma(\lambda+a_{s})}{\Gamma(\lambda)}\frac{\Gamma(n-s+a_{s})}{\Gamma(n-s)}\frac{x^{a_{s}}}{a_{s}!}.
\]
\end{proof}
{}
This expression can now be used to evaluate any given $F_{r}$, by
summing only over the cases $\{a_{0},\ldots,a_{m-1}\}$ for which
$a_{0}+\ldots+a_{m-1}=r$. Unlike the closed-form expressions we derived
previously in \cite{Dyer2014a}, however, this expression cannot be
used directly to find polynomial expansions for $F_{r}(m,n,\lambda)$,
due to the dependence on $m$ of the number of summations and the
ranges of the products.
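For concreteness, the evaluation of $F_{r}(m,n,\lambda)$ described above can be sketched as follows. This is a naive direct implementation of (\ref{eq:F-eval}); the function names and the use of exact rational arithmetic are our own choices, made so that the interpolation step described below can be carried out exactly.
\begin{verbatim}
# A minimal sketch of evaluating F_r(m, n, lam) from (eq:F-eval): sum over all
# tuples (a_0, ..., a_{m-1}) of non-negative integers with a_0+...+a_{m-1} = r.
from fractions import Fraction
from math import factorial

def rising(x, a):
    """Pochhammer symbol Gamma(x+a)/Gamma(x) = x (x+1) ... (x+a-1)."""
    out = 1
    for i in range(a):
        out *= x + i
    return out

def compositions(r, m):
    """All tuples of m non-negative integers summing to r."""
    if m == 1:
        yield (r,)
        return
    for first in range(r + 1):
        for rest in compositions(r - first, m - 1):
            yield (first,) + rest

def F_r(r, m, n, lam):             # integer parameters m, n, lam assumed
    total = Fraction(0)
    for a in compositions(r, m):
        term = Fraction(1)
        for i in range(m):
            for j in range(i + 1, m):
                term *= Fraction(a[i] - a[j], j - i) + 1
        for s in range(m):
            term *= Fraction(rising(lam, a[s]) * rising(n - s, a[s]), factorial(a[s]))
        total += term
    return total

assert F_r(0, 3, 4, 5) == 1        # F(m, n, lam; 0) = 1, cf. (eq:F_0)
\end{verbatim}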
However, we know that each $H_{r}$ must be a symmetric polynomial
(Theorem \ref{thm:symmetric}) of order at most $r$ in each of its parameters
(a hypermap with $r$ darts can have at most $r$ edges), and that
$H_{r}(m,n,\lambda)=0$ if any of its parameters are zero (all hypermaps
must have at least one each of vertices, edges and faces). Given that
$F_{0}(m,n,\lambda)=1$, it follows from (\ref{eq:generate-recurse})
that $F_{r}(m,n,\lambda)$ for any $r>0$ is also symmetric, of order at most
$r$ in each parameter, and zero when $m$, $n$ or $\lambda$ are
zero. Therefore, we can compute the polynomial coefficients of $H_{r}$
(and therefore enumerate rooted hypermaps) by evaluating $F_{r}(m,n,\lambda)$
-- and by extension $H_{r}(m,n,\lambda)$ -- using (\ref{eq:F-eval})
and (\ref{eq:generate-recurse}) at all $1\le m\le n\le\lambda\le r$
and using polynomial interpolation.
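Continuing the sketch above, the pointwise evaluation of $H_{r}$ needed for the interpolation can be organised as a simple memoised recursion. Equation (\ref{eq:generate-recurse}) is not reproduced in this section; here we assume it takes the same form as the rearrangement used in the proof of (\ref{eq:P_2_eval}) below, namely $H_{r}=rF_{r}-\sum_{k=1}^{r-1}F_{k}H_{r-k}$.
\begin{verbatim}
# A sketch of the evaluation step: H_r(m, n, lam) via the assumed recursion
#     H_r = r * F_r - sum_{k=1}^{r-1} F_k * H_{r-k},
# with F_r taken from the previous sketch.  The values at integer points
# 1 <= m <= n <= lam <= r can then be passed to an exact multivariate
# polynomial interpolation routine to recover the coefficients of H_r.
def H_r(r, m, n, lam, _cache={}):
    key = (r, m, n, lam)
    if key not in _cache:
        value = r * F_r(r, m, n, lam)
        for k in range(1, r):
            value -= F_r(k, m, n, lam) * H_r(r - k, m, n, lam)
        _cache[key] = value
    return _cache[key]

# H_1(m, n, lam) = m*n*lam: there is a single rooted hypermap with one dart.
assert H_r(1, 2, 3, 5) == 2 * 3 * 5
\end{verbatim}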
We used this method to compute the coefficients of $H_{r}$ for all
$1\le r\le13$. Some of the output is given in Appendix 2, and the
results agree exactly with past computations, in particular Walsh's
enumeration of all rooted hypermaps up to $r=12$ \cite{Walsh2012}.
Running on a 2012 Dell XPS 12, these calculations took 107 minutes,
in comparison to the few days taken by Walsh's algorithm.
\subsection{Special cases}
While no simple closed-form polynomial expressions are available for
$H_{r}(m,n,\lambda)$, there are a few special cases in which we can
get more useful results.
Consider the function
\begin{equation}
H_{r}(1,m,n)=\sum_{v,e,f}H_{vefr}m^{e}n^{f}.\label{eq:P_2}
\end{equation}
This is the generating function for enumerating rooted hypermaps with
$r$ darts by number of edges and faces (with all possible numbers
of vertices summed over). By the symmetry of $H_{r}$, (\ref{eq:P_2})
could also be used to enumerate by number of vertices and edges, summing
over all numbers of faces etc.
{}
\begin{thm}
For all $r>0$,
\begin{equation}
H_{r}(1,m,n)=\frac{1}{(r-1)!}\frac{\Gamma(m+r)}{\Gamma(m)}\frac{\Gamma(n+r)}{\Gamma(n)}-\sum_{k=1}^{r-1}\frac{1}{k!}\frac{\Gamma(m+k)}{\Gamma(m)}\frac{\Gamma(n+k)}{\Gamma(n)}H_{r-k}(1,m,n).\label{eq:P_2_eval}
\end{equation}
\end{thm}
\begin{proof}
From (\ref{eq:F-eval}) we have that
\[
F(1,m,n;x)=\sum_{a=0}^{\infty}\frac{\Gamma(m+a)}{\Gamma(m)}\frac{\Gamma(n+a)}{\Gamma(n)}\frac{x^{a}}{a!},
\]
so
\[
F_{r}(1,m,n)=\frac{1}{r!}\frac{\Gamma(m+r)}{\Gamma(m)}\frac{\Gamma(n+r)}{\Gamma(n)}.
\]
We substitute this into (\ref{eq:generate-recurse}) and rearrange
to get
\begin{eqnarray*}
H_{r}(1,m,n) & = & rF_{r}(1,m,n)-\sum_{k=1}^{r-1}F_{k}(1,m,n)H_{r-k}(1,m,n)\\
& = & \frac{1}{(r-1)!}\frac{\Gamma(m+r)}{\Gamma(m)}\frac{\Gamma(n+r)}{\Gamma(n)}\\
& & -\sum_{k=1}^{r-1}\frac{1}{k!}\frac{\Gamma(m+k)}{\Gamma(m)}\frac{\Gamma(n+k)}{\Gamma(n)}H_{r-k}(1,m,n).
\end{eqnarray*}
\end{proof}
{}
In contrast to expressions such as (\ref{eq:F-integral}) and (\ref{eq:F-eval}),
this expression manifestly gives rise to symmetric polynomial functions.
Given (\ref{eq:P_2_eval}), the following two results follow immediately:
\begin{cor}
For all $r>0$,
\[
H_{r}(1,1,m)=r\frac{\Gamma(m+r)}{\Gamma(m)}-\sum_{k=1}^{r-1}\frac{\Gamma(m+k)}{\Gamma(m)}H_{r-k}(1,1,m).
\]
\end{cor}
\begin{cor}
For all $r>0$,
\[
H_{r}(1,1,1)=r\cdot r!-\sum_{k=1}^{r-1}k!H_{r-k}(1,1,1).
\]
\end{cor}
{}
The second corollary in particular allows us to count the total number of rooted
hypermaps with $r$ darts. The first few values are 1, 3,
13, 71, 461, \ldots
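As a quick illustration (a standalone sketch, not needed for any of the results above), the last corollary can be iterated directly to reproduce these totals.
\begin{verbatim}
# Total number of rooted hypermaps with r darts, via
#     H_r(1,1,1) = r * r! - sum_{k=1}^{r-1} k! * H_{r-k}(1,1,1).
from math import factorial

def total_rooted_hypermaps(r_max):
    H = {}
    for r in range(1, r_max + 1):
        H[r] = r * factorial(r) - sum(factorial(k) * H[r - k] for k in range(1, r))
    return [H[r] for r in range(1, r_max + 1)]

print(total_rooted_hypermaps(5))   # [1, 3, 13, 71, 461]
\end{verbatim}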
\section{Conclusions}
We have demonstrated a method for computing generating functions to
enumerate rooted hypermaps by number of vertices, edges and faces
for any given number of darts. This is an extension of previous work
where we derived closed-form generating functions for enumerating
rooted hypermaps with one face \cite{Dyer2014a}, but in contrast
to that case the method shown here defines the generating function
$H_{r}$ for $r$ darts recursively in terms of $H_{1},\ldots,H_{r-1}$,
and it only allows $H_{r}$ to be evaluated numerically, not expanded
directly as a polynomial. We were still able to obtain a polynomial
expansion, however, by using polynomial interpolation.
This work is a further demonstration of the use of matrix integration
as a tool for finding generating functions for enumerating sets of
combinatorial objects. It specifically demonstrates the link, first
discussed in \cite{Dyer2014a}, between rooted hypermaps and the ensemble
of reduced density operators on random states of a bipartite quantum
system.
We also discussed a number of related results. First we showed the
symmetry of the generating functions $H_{r}$, arising from the symmetry
of 3-constellations, and used this to speed up computation of $H_{r}$
by reducing the range over which $H_{r}$ needed to be evaluated to
fix the polynomial expansion. Then we looked at cases where one or
more parameters in $H_{r}$ were set to unity, giving generating functions
for enumerating larger sets of rooted hypermaps (such as all those
with $r$ darts and $f$ faces, summing over all possible numbers
of edges and vertices). In particular, this allowed us to easily count
all rooted hypermaps with $r$ darts and any number of edges, vertices
and faces.
\section*{Appendix 1}
\begin{thm}
\label{thm:Nested-series}For a positive integer $m$ and $\lambda>0$, we have
the formal power series identity
\[
\sum_{N=0}^{\infty}\frac{\lambda^{N}}{N!}\left(\sum_{j=1}^{m}\sum_{r=1}^{\infty}\frac{q_{j}^{r}x^{r}}{r}\right)^{N}=\prod_{j=1}^{m}\sum_{a_{j}=0}^{\infty}\frac{\Gamma(\lambda+a_{j})}{a_{j}!\Gamma(\lambda)}q_{j}^{a_{j}}x^{a_{j}},
\]
where $q_{j}$ are components of an $m$-dimensional real vector.\end{thm}
\begin{proof}
Let
\begin{equation}
L_{\lambda,q}(x)=\sum_{N=0}^{\infty}\frac{\lambda^{N}}{N!}\left(\sum_{j=1}^{m}\sum_{r=1}^{\infty}\frac{q_{j}^{r}x^{r}}{r}\right)^{N}.\label{eq:series-1}
\end{equation}
For any given positive integer $a$, we see by inspection that this
series contains only a finite number of terms of order $x^{a}$, as
such terms can only come from cases where $1\le N\le a$. In addition,
there is only one constant term: the $N=0$ case which equals unity.
Therefore, $L_{\lambda,q}(x)$ can be written in the form
\begin{equation}
L_{\lambda,q}(x)=\sum_{a=0}^{\infty}f_{a}(\lambda,q)x^{a}\label{eq:Taylor-general}
\end{equation}
where each $f_{a}(\lambda,q)$ is a polynomial in $\lambda$ and $q$.
$L_{\lambda,q}(x)$ converges when $|q_{j}x|<1$ for all $j$ to
\begin{eqnarray*}
L_{\lambda,q}(x) & = & \sum_{N=0}^{\infty}\frac{\lambda^{N}}{N!}\left(-\sum_{j=1}^{m}\ln(1-q_{j}x)\right)^{N}\\
 & = & \exp\left[-\lambda\sum_{j=1}^{m}\ln(1-q_{j}x)\right]\\
 & = & \prod_{j=1}^{m}\frac{1}{(1-q_{j}x)^{\lambda}}.
\end{eqnarray*}
This has a series expansion in $x$, also valid when $|q_{j}x|<1$
for all $j$, of
\begin{equation}
\prod_{j=1}^{m}\sum_{a_{j}=0}^{\infty}\frac{\Gamma(\lambda+a_{j})}{a_{j}!\Gamma(\lambda)}q_{j}^{a_{j}}x^{a_{j}}.\label{eq:series-2}
\end{equation}
This can also be rearranged into the form (\ref{eq:Taylor-general}).
(\ref{eq:series-1}) and (\ref{eq:series-2}) are therefore both Taylor
series with the same radius of convergence, and they are equal to
each other everywhere within it, so it follows from the uniqueness
of Taylor series expansions of smooth functions that they are equivalent,
i.e.
\[
\sum_{N=0}^{\infty}\frac{\lambda^{N}}{N!}\left(\sum_{j=1}^{m}\sum_{r=1}^{\infty}\frac{q_{j}^{r}x^{r}}{r}\right)^{N}=\prod_{j=1}^{m}\sum_{a_{j}=0}^{\infty}\frac{\Gamma(\lambda+a_{j})}{a_{j}!\Gamma(\lambda)}q_{j}^{a_{j}}x^{a_{j}}.
\]
\end{proof}
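The identity can also be checked to any finite order by expanding both sides as polynomials in $x$ and comparing coefficients. A minimal sketch follows; the truncation order and the rational values of $\lambda$ and the $q_{j}$ are our own illustrative choices.
\begin{verbatim}
# A finite-order check of the identity (not a proof): both sides are expanded as
# polynomials in x up to degree K and compared coefficientwise.
from fractions import Fraction
from math import factorial

K = 8
lam = Fraction(3, 2)
q = [Fraction(1, 2), Fraction(-1, 3), Fraction(2, 5)]     # m = 3

def mul(p1, p2):                   # product of coefficient lists, truncated at K
    out = [Fraction(0)] * (K + 1)
    for i, c1 in enumerate(p1):
        for j, c2 in enumerate(p2):
            if i + j <= K:
                out[i + j] += c1 * c2
    return out

def rising(x, a):                  # Gamma(x+a)/Gamma(x)
    out = Fraction(1)
    for i in range(a):
        out *= x + i
    return out

# Left-hand side: sum_N lam^N/N! * (sum_j sum_r q_j^r x^r / r)^N.
inner = [Fraction(0)] * (K + 1)
for qj in q:
    for r in range(1, K + 1):
        inner[r] += qj ** r / r
lhs = [Fraction(0)] * (K + 1)
power, fact = [Fraction(1)] + [Fraction(0)] * K, 1
for N in range(K + 1):
    if N > 0:
        power, fact = mul(power, inner), fact * N
    for i in range(K + 1):
        lhs[i] += lam ** N / fact * power[i]

# Right-hand side: prod_j sum_a Gamma(lam+a)/(a! Gamma(lam)) q_j^a x^a.
rhs = [Fraction(1)] + [Fraction(0)] * K
for qj in q:
    rhs = mul(rhs, [rising(lam, a) / factorial(a) * qj ** a for a in range(K + 1)])

assert lhs == rhs
\end{verbatim}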
\pagebreak{}
\section*{Appendix 2}
Numbers of rooted hypermaps with $v$ vertices, $e$ edges, $f$ faces
and $r$ darts, calculated by computing the generating functions $H_{r}$.
Only the cases with $v\le e\le f$ are given, as the rest follow from
the symmetry of $H_{r}$. The cases $1\le r\le7$ are included for
comparison with Walsh's previous computation \cite{Walsh2012}; all
cases up to $r=12$ agree with his results. The new case
$r=13$ is also shown.
\begin{minipage}[t]{0.5\textwidth}
\tiny
$\boldsymbol{r=1:}$\\
\begin{tabular}{ccc|c}
$v$ & $e$ & $f$ & $N$\tabularnewline
\hline
1 & 1 & 1 & 1\tabularnewline
\end{tabular}
{}
$\boldsymbol{r=2:}$\\
\begin{tabular}{ccc|c}
$v$ & $e$ & $f$ & $N$\tabularnewline
\hline
1 & 1 & 2 & 1\tabularnewline
\end{tabular}
{}
$\boldsymbol{r=3:}$\\
\begin{tabular}{ccc|c}
$v$ & $e$ & $f$ & $N$\tabularnewline
\hline
1 & 1 & 1 & 1\tabularnewline
1 & 2 & 2 & 3\tabularnewline
1 & 1 & 3 & 1\tabularnewline
\end{tabular}
{}
$\boldsymbol{r=4:}$\\
\begin{tabular}{ccc|c}
$v$ & $e$ & $f$ & $N$\tabularnewline
\hline
1 & 1 & 2 & 5\tabularnewline
2 & 2 & 2 & 17\tabularnewline
1 & 2 & 3 & 6\tabularnewline
1 & 1 & 4 & 1\tabularnewline
\end{tabular}
{}
$\boldsymbol{r=5:}$\\
\begin{tabular}{ccc|c}
$v$ & $e$ & $f$ & $N$\tabularnewline
\hline
1 & 1 & 1 & 8\tabularnewline
1 & 2 & 2 & 40\tabularnewline
1 & 1 & 3 & 15\tabularnewline
2 & 2 & 3 & 55\tabularnewline
1 & 3 & 3 & 20\tabularnewline
1 & 2 & 4 & 10\tabularnewline
1 & 1 & 5 & 1\tabularnewline
\end{tabular}
{}
$\boldsymbol{r=6:}$\\
\begin{tabular}{ccc|c}
$v$ & $e$ & $f$ & $N$\tabularnewline
\hline
1 & 1 & 2 & 84\tabularnewline
2 & 2 & 2 & 456\tabularnewline
1 & 2 & 3 & 175\tabularnewline
2 & 3 & 3 & 262\tabularnewline
1 & 1 & 4 & 35\tabularnewline
2 & 2 & 4 & 135\tabularnewline
1 & 3 & 4 & 50\tabularnewline
1 & 2 & 5 & 15\tabularnewline
1 & 1 & 6 & 1\tabularnewline
\end{tabular}
{}
$\boldsymbol{r=7:}$\\
\begin{tabular}{ccc|c}
$v$ & $e$ & $f$ & $N$\tabularnewline
\hline
1 & 1 & 1 & 180\tabularnewline
1 & 2 & 2 & 1183\tabularnewline
1 & 1 & 3 & 469\tabularnewline
2 & 2 & 3 & 2695\tabularnewline
1 & 3 & 3 & 1050\tabularnewline
3 & 3 & 3 & 1694\tabularnewline
1 & 2 & 4 & 560\tabularnewline
2 & 3 & 4 & 889\tabularnewline
1 & 4 & 4 & 175\tabularnewline
1 & 1 & 5 & 70\tabularnewline
2 & 2 & 5 & 280\tabularnewline
1 & 3 & 5 & 105\tabularnewline
1 & 2 & 6 & 21\tabularnewline
1 & 1 & 7 & 1\tabularnewline
\end{tabular}
\normalsize
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\tiny
$\boldsymbol{r=13:}$\\
\begin{tabular}{ccc|c}
$v$ & $e$ & $f$ & $N$\tabularnewline
\hline
1 & 1 & 1 & 68428800\tabularnewline
1 & 2 & 2 & 686597184\tabularnewline
1 & 1 & 3 & 292271616\tabularnewline
2 & 2 & 3 & 2820651496\tabularnewline
1 & 3 & 3 & 1194737544\tabularnewline
3 & 3 & 3 & 4623070842\tabularnewline
1 & 2 & 4 & 687238552\tabularnewline
2 & 3 & 4 & 2646424729\tabularnewline
1 & 4 & 4 & 636184120\tabularnewline
3 & 4 & 4 & 2239280420\tabularnewline
1 & 1 & 5 & 109425316\tabularnewline
2 & 2 & 5 & 988043771\tabularnewline
1 & 3 & 5 & 414918075\tabularnewline
3 & 3 & 5 & 1453414846\tabularnewline
2 & 4 & 5 & 824962502\tabularnewline
4 & 4 & 5 & 582408775\tabularnewline
1 & 5 & 5 & 125855730\tabularnewline
3 & 5 & 5 & 374805834\tabularnewline
5 & 5 & 5 & 64013222\tabularnewline
1 & 2 & 6 & 108452916\tabularnewline
2 & 3 & 6 & 374127663\tabularnewline
1 & 4 & 6 & 87933846\tabularnewline
3 & 4 & 6 & 260619268\tabularnewline
2 & 5 & 6 & 93880696\tabularnewline
4 & 5 & 6 & 44136820\tabularnewline
1 & 6 & 6 & 9513504\tabularnewline
3 & 6 & 6 & 19315114\tabularnewline
1 & 1 & 7 & 8691683\tabularnewline
2 & 2 & 7 & 70367479\tabularnewline
1 & 3 & 7 & 29135106\tabularnewline
3 & 3 & 7 & 85050784\tabularnewline
2 & 4 & 7 & 47604648\tabularnewline
4 & 4 & 7 & 22089600\tabularnewline
1 & 5 & 7 & 6936930\tabularnewline
3 & 5 & 7 & 14019928\tabularnewline
2 & 6 & 7 & 3356522\tabularnewline
1 & 7 & 7 & 226512\tabularnewline
1 & 2 & 8 & 4114110\tabularnewline
2 & 3 & 8 & 11674663\tabularnewline
1 & 4 & 8 & 2642640\tabularnewline
3 & 4 & 8 & 5264545\tabularnewline
2 & 5 & 8 & 1827683\tabularnewline
1 & 6 & 8 & 169884\tabularnewline
1 & 1 & 9 & 183183\tabularnewline
2 & 2 & 9 & 1225653\tabularnewline
1 & 3 & 9 & 495495\tabularnewline
3 & 3 & 9 & 960960\tabularnewline
2 & 4 & 9 & 525525\tabularnewline
1 & 5 & 9 & 70785\tabularnewline
1 & 2 & 10 & 40040\tabularnewline
2 & 3 & 10 & 74217\tabularnewline
1 & 4 & 10 & 15730\tabularnewline
1 & 1 & 11 & 1001\tabularnewline
2 & 2 & 11 & 4433\tabularnewline
1 & 3 & 11 & 1716\tabularnewline
1 & 2 & 12 & 78\tabularnewline
1 & 1 & 13 & 1\tabularnewline
\end{tabular}
\normalsize
\end{minipage}
\end{document}
\begin{document}
\title{\bf Exact bipartite Tur\'an numbers of large even cycles}
\date{}
\author{Binlong Li\thanks{Department of Applied Mathematics,
Northwestern Polytechnical University, Xi'an, Shaanxi 710072,
P.R.~China. Email: [email protected]. Partially supported
by the NSFC grant (No.\ 11601429).}~~~ Bo
Ning\thanks{Corresponding author. Center for Applied
Mathematics, Tianjin University, Tianjin 300072, P.R.
China. Email: [email protected]. Partially supported
by the NSFC grant (No.\ 11601379) and
the Seed Foundation of Tianjin University (2018XRG-0025).}}
\maketitle
\begin{center}
\begin{minipage}{140mm}
\small\noindent{\bf Abstract:} Let the bipartite Tur\'an number
$ex(m,n,H)$ of a graph $H$ be the maximum number of edges in an
$H$-free bipartite graph with two parts of sizes $m$ and $n$,
respectively. In this paper, we prove that $ex(m,n,C_{2t})=(t-1)n+m-t+1$
for any positive integers $m,n,t$ with $n\geq m\geq t\geq \frac{m}{2}+1$.
This confirms the rest of a conjecture of Gy\"{o}ri \cite{G97} (in a
stronger form), and improves the upper bound of $ex(m,n,C_{2t})$
obtained by Jiang and Ma \cite{JM18} for this range. We also prove a
tight edge condition for consecutive even cycles in bipartite
graphs, which settles a conjecture in \cite{A09}. As a main tool,
for a longest cycle $C$ in a bipartite graph, we obtain an upper
bound on the number of edges which are incident to at
most one vertex in $C$. Our two results generalize or sharpen a
classical theorem due to Jackson \cite{J85} in different ways.
\noindent{\bf Keywords:} bipartite Tur\'an number; even cycle;
bipartite graph; Gy\"{o}ri's conjecture
\end{minipage}
\end{center}
\section{Introduction}
We only consider simple graphs, which are undirected and finite.
The study of cycles in bipartite graphs has a rich history.
There are many results in the literature which show that a bipartite
graph with high degree contains a long cycle; see \cite{MM63,BC76,J81,J85,JL94}.
Moreover, \cite{KV05,LK18} reveal that results on cycles in bipartite
graphs play important roles in investigating
cycles in hypergraphs. For a lot of references from the view of
extremal graph theory, we refer to the survey \cite{FS13}.
Perhaps one of the best known extremal results involving long cycles
in bipartite graphs is the following one, proved more than
30 years ago.
\begin{thm}[Jackson \cite{J85}]\label{Thm-Jackson}
Let $t$ be an integer and $G=(X,Y;E)$ be a bipartite graph. Suppose
that $|X|=n$, $|Y|=m$, where $n\geq m\geq t\geq 2$.
Suppose that
$$e(G)>\left\{\begin{array}{ll}
(n-1)(t-1)+m, & if~m\leq 2t-2;\\
(m+n-2t+3)(t-1), & if~m\geq 2t-2.
\end{array}\right.$$
Then $G$ contains a cycle of length at least $2t$.
\end{thm}
One question naturally arises: Can we find
exact edge number conditions for cycles of given lengths? As we
shall see later, we indeed have the following significant
strengthening of Jackson's theorem.
\begin{thm}\label{Thm-SharpenJackson}
Let $t$ be an integer and $G=(X,Y;E)$ be a bipartite graph
with $|X|=m$, $|Y|=n$. Suppose that $n\geq m$ and
$t\leq m\leq 2t-2$. If $e(G)>(t-1)(n-1)+m$, then $G$ contains a
cycle of length $2t$.
\end{thm}
The above theorem in fact gives us exact information on the
``bipartite Tur\'an number'' of large even cycles.
Following F\"{u}redi \cite{F96}, we define the bipartite
Tur\'an number $ex(m,n,H)$ of a graph
$H$ to be the maximum number of edges in an $H$-free bipartite graph
with two parts of sizes $m$ and $n$.
In this paper, we mainly focus on the exact formula of $ex(m,n,C_{2t})$ for some range.
For a similar problem on paths, Gy\'{a}rf\'{a}s, Rousseau, and
Schelp \cite{GRS84} completely determined the function $ex(m,n,P_t)$.
When restricting to cycles, the situation turns out to be much more difficult.
Let us recall the classical result that
$ex(n,n,C_4)=(1+o(1))n^{\frac{3}{2}}$ due to K\H{o}v\'{a}ri,
S\'{o}s, and Tur\'an \cite{KST54}. For the function $ex(m,n,C_6)$,
it is closely related to a number-theoretical problem on product
representations of squares, which was studied by Erd\H{o}s,
S\'{a}rk\"{o}zy, and S\'{o}s in \cite{ESS95}. They conjectured that:
(a) $ex(m,n,C_6)<c(mn)^{\frac{2}{3}}$ when $n> m\geq
n^{\frac{1}{2}}$; and (b) $ex(m,n,C_6)<2n+c(mn)^{\frac{2}{3}}$ when
$n\geq m^2$. Part (a) of this conjecture and a weaker version of
part (b) were confirmed by S\'{a}rk\"{o}zy \cite{S95}. Part
(b) was finally settled by Gy\"{o}ri \cite{G97}. Interestingly,
motivated by the extremal result on short cycles, Gy\"{o}ri
\cite{G97} suggested a general conjecture on longer cycles.
\begin{conjectureA}[{\rm Gy\"{o}ri \cite[p.373]{G97}, see also \cite{BGMV07}}]\label{Conj-Gyori}
Suppose that $m,n,t$ are integers, where $n\geq m^2$ and
$m\geq t\geq 3$. Then
$$
ex(m,n,C_{2t})\leq (t-1)n+m-t+1.
$$
\end{conjectureA}
Using an estimate on the total weights of triangle-free multi-hypergraphs,
Gy\"{o}ri himself \cite{G06} disproved Conjecture \ref{Conj-Gyori}
for the case $t=3$. Balbuena, Garc\'{i}a-V\'{a}zquez, Marcote,
and Valenzuela \cite{BGMV07} further disproved it when $t\leq \frac{m+1}{2}$.
As far as we know, this conjecture remains open when $t\geq \frac{m}{2}+1$.
The following conjecture sharpens the remaining part of Gy\"{o}ri's conjecture.
\begin{conjectureA}\label{Conj-Cons}
Suppose that $m,n,t$ are positive integers, where $n\geq m\geq t\geq \frac{m}{2}+1$. Then
$$ex(m,n,C_{2t})=(t-1)n+m-t+1.$$
\end{conjectureA}
For general results on $ex(m,n,C_{2t})$, an upper bound was obtained by Naor and Verstra\"{e}te \cite{NV05},
who proved that for $m\leq n$ and $t\geq 2$,
$$ex(m,n,C_{2t})\leq\left\{\begin{array}{ll}
(2t-3)\cdot [(mn)^{\frac{t+1}{2t}}+m+n], & \mbox{if }t\mbox{ is odd};\\
(2t-3)\cdot [m^{\frac{t+2}{2t}}n^{\frac{1}{2}}+m+n], & \mbox{if }t\mbox{ is even}.
\end{array}\right.$$
Gy\"{o}ri \cite{G97} proved that there exists some
$c_t>0$ such that for $n\geq m^2$,
$$
ex(m,n,C_{2t})\leq (t-1)n+c_t\cdot m^2.
$$
Very recently, Jiang and Ma \cite{JM18} proved the following new bound:
\begin{thm}[Jiang and Ma, Proposition 5.5 in \cite{JM18}]
There exists a constant $d_t>0$ such that for any positive integers $n\geq m\geq 2$,
$$
ex(m,n,C_{2t})\leq (t-1)n+d_t\cdot m^{1+\frac{1}{[t/2]}}.
$$
\end{thm}
So, if Conjecture \ref{Conj-Cons} is true, then it improves
Jiang and Ma's result for the range $t\geq \frac{m}{2}+1$.
In this paper, we aim to solve the aforementioned conjectures.
In fact, we prove the following stronger result, which also confirms a
conjecture in \cite{A09} (see Conjecture 1 in \cite[p.30]{A09}).
\begin{thm}\label{Thm-general}
Let $G=(X,Y;E)$ be a bipartite graph with $|X|= m$ and $|Y|=n$.
Suppose that $n\geq m\geq 2k+2$ for some $k\in\mathbb{N}$. If
$e(G)\geq n(m-k-1)+k+2$, then $G$ contains cycles of all even
lengths from 4 up to $2m-2k$.
\end{thm}
Set $t=m-k$ in Conjecture \ref{Conj-Cons}. Let $G$ be the graph obtained by
identifying one vertex in the $n$-set of $K_{m-k-1,n}$ with the
vertex in the $1$-set of $K_{1,k+1}$. Then a longest cycle
in $G$ is of length $2m-2k-2$ and $e(G)=(m-k-1)n+k+1$.
This tells us $ex(m,n,C_{2t})\geq (m-k-1)n+k+1$. An
immediate consequence of Theorem \ref{Thm-general} is that
$ex(m,n,C_{2t})\leq (m-k-1)n+k+1$. Thus, we have the following
result, which is equivalent to Theorem \ref{Thm-SharpenJackson} and
confirms Conjecture \ref{Conj-Cons}.
\begin{cor}
For any positive integers $m,n,t$, if $n\geq m\geq t\geq
\frac{m}{2}+1$, then
$$ex(m,n,C_{2t})=(t-1)n+m-t+1.$$
\end{cor}
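As a sanity check (not used anywhere in the proofs), the lower-bound construction above can be verified by brute force for small parameters; the helper functions and the chosen parameters below are our own and only meant as an illustration.
\begin{verbatim}
# A brute-force check of the construction above: the graph glued from
# K_{m-k-1,n} and K_{1,k+1} has (m-k-1)n + k + 1 edges and its longest cycle
# has length 2(m-k-1), so it contains no C_{2t} with t = m-k.
def build_graph(m, n, k):
    X1 = [('x', i) for i in range(m - k - 1)]          # (m-k-1)-set of K_{m-k-1,n}
    Z = [('x', m - k - 1 + i) for i in range(k + 1)]   # (k+1)-set of K_{1,k+1}
    Y = [('y', j) for j in range(n)]
    edges = {frozenset((x, y)) for x in X1 for y in Y}
    edges |= {frozenset((z, Y[0])) for z in Z}         # glue K_{1,k+1} at y_0 = Y[0]
    return X1 + Z + Y, edges

def longest_cycle(vertices, edges):
    adj = {v: [u for u in vertices if frozenset((u, v)) in edges] for v in vertices}
    best = 0
    def dfs(start, v, visited):
        nonlocal best
        for u in adj[v]:
            if u == start and len(visited) >= 3:
                best = max(best, len(visited))
            elif u not in visited:
                dfs(start, u, visited | {u})
    for v in vertices:
        dfs(v, v, {v})
    return best

m, n, k = 5, 5, 1                                      # t = m - k = 4
V, E = build_graph(m, n, k)
assert len(E) == (m - k - 1) * n + k + 1
assert longest_cycle(V, E) == 2 * (m - k - 1)          # in particular, no C_{2t}
\end{verbatim}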
The base case of the proof of Theorem \ref{Thm-general} is
the case where the bipartite graph is balanced.
For this special case, a slightly stronger theorem will be proved.
\begin{thm}\label{Thm-balanced}
Let $G$ be a balanced bipartite graph of order $2n$, where $n\geq2k+2$,
$k\in \mathbb{N}$. If $e(G)\geq(n-k-1)n+k+2$,
then there hold:\\
(i) The circumference $c(G)\geq 2n-2k$; and
(ii) $G$ contains cycles of all even lengths from 4 to $c(G)$.
\end{thm}
Our proof of Theorem \ref{Thm-balanced} is motivated
by a theorem of Bondy \cite{B71} stated as follows,
which extends the celebrated Erd\H{o}s-Gallai Theorem \cite{EG59}
on cycles.
\begin{thm}[Bondy \cite{B71}]\label{Thm-Bondy}
Let $G$ be a graph on $n$ vertices and $C$ a longest cycle of $G$
with order $c$. Then
$$e(G-C)+e(G-C,C)\leq \frac{(n-c)c}{2}.$$
\end{thm}
Since $G[C]$ contains at most $\binom{c}{2}$ edges, this implies the Erd\H{o}s-Gallai
Theorem. In fact, Bondy's theorem and its variants have turned out to be powerful tools
for tackling many problems on long cycles. For example, it actually plays an
important role in Bollob\'{a}s and Thomason's almost proof \cite{BT99} of Brandt's
conjecture \cite{Br97}, which says that every non-bipartite graph on $n$ vertices
is weakly pancyclic if $e(G)\geq\lfloor\frac{n^2}{4}\rfloor-n+5$. The other example
is that, Bondy's theorem is related to a conjecture of Woodall \cite{W76} in 1976.
Ma and one of the present authors \cite{MN} recently proved a stability version of Bondy's
theorem, which is one step towards obtaining a stability version of Woodall's
conjecture \cite{W76}.
Most importantly for us, Theorem \ref{Thm-Bondy} was used ingeniously in Bondy's proof of
the Tur\'an numbers of large cycles \cite{B71}.
We shall prove a bipartite analog of Theorem \ref{Thm-Bondy} and use it to prove
Theorem \ref{Thm-balanced}.
\begin{thm}\label{Thm-BipartiteBondy}
Let $G=(X,Y;E)$ be a bipartite graph and $C$ a longest cycle of
$G$. Suppose that $|X|=n$, $|Y|=m$, and $|V(C)|=2t$, where $n\geq m\geq t$.
Then there hold:\\
(1) If $m\leq 2t$, then $e(G-C)+e(G-C,C)\leq t(n-1-t)+m$.\\
(2) If $m\geq 2t$, then $e(G-C)+e(G-C,C)\leq t(m+n+1-3t)$.
\end{thm}
The bounds in Theorem \ref{Thm-BipartiteBondy} are tight.
We postpone the discussion to Section 2. Moreover,
Theorem \ref{Thm-BipartiteBondy} generalizes
Theorem \ref{Thm-Jackson}
in the other direction.
Let us fix some notation and terminology. Let $G=(X,Y;E)$ be a
bipartite graph, where $X,Y$ are two bipartite sets and $E$ is the
edge set of $G$. We say that $G$ is \emph{balanced} if $|X|=|Y|$.
Let $P$ be a path of $G$. We say that $P$ is an $(x,y)$-path if
$x,y$ are two end-vertices of $P$; and $P$ is an $x$-path if $x$ is
one end-vertex of $P$. A graph $G$ is called \emph{weakly pancyclic}
if $G$ contains all cycles of lengths from $g(G)$ to $c(G)$, where
$g(G)$ and $c(G)$ are its girth and circumference, respectively. A
balanced bipartite graph $G$ is called \emph{bipancyclic}, if $G$
contains all cycles of even lengths from 4 to $2|X|$. For a subgraph
$H$ of $G$, we set $X_H=X\cap V(H)$ and $Y_H=Y\cap V(H)$. We use
$|H|$ to denote the order of $H$, that is, $|H|:=|V(H)|$. Let
$S\subseteq V(G)$. We use $G[S]$ to denote the subgraph induced by
$S$, and $G-S$ the subgraph induced by $V(G)\backslash S$. In particular, when
there is no danger of ambiguity, we sometimes write $G-H$ instead of $G-V(H)$.
For $V_1,V_2\subseteq V(G)$ with $V_1\cap V_2=\emptyset$, we let $e(V_1,V_2)$
denote the number of edges $v_1v_2\in E(G)$ with $v_1\in V_1$ and $v_2\in
V_2$. If $H_1,H_2$ are two disjoint subgraphs of $G$, then we set
$e(H_1,H_2)=e(V(H_1),V(H_2))$.
The remainder of this paper is organized as follows.
In Section 2, we aim to prove Theorem \ref{Thm-BipartiteBondy}.
In Subsection 2.1, we prove several technical lemmas and
list useful theorems. In Subsection 2.2, we prove Theorem
\ref{Thm-BipartiteBondy}. In Section 3, we prove
Theorem \ref{Thm-balanced} and Theorem \ref{Thm-general}.
\section{Proof of Theorem \ref{Thm-BipartiteBondy}}
\subsection{Preliminaries for proving Theorem \ref{Thm-BipartiteBondy}}
In this subsection, we collect and establish several lemmas to be
used later. Let $G$ be a graph and $P$ be a path of $G$ with the
origin $x$ and terminus $y$. The path $P$ is called a \emph{maximal
path} of $G$, if $N_G(x)\cup N_G(y)\subseteq V(P)$. We say that $P$
is a \emph{maximal $x$-path} of $G$ if $N_G(y)\subseteq V(P)$.
\begin{lem}\label{Lem-specivertexpath}
Let $G=(X,Y;E)$ be a connected bipartite graph and $d(x)\geq d$ for every
vertex $x\in X$.\\
(1) If $|X|\geq |Y|$, then for every vertex $y_0\in Y$, $G$ has a maximal
$y_0$-path with the terminus in $X$ and of order at least $2d$.\\
(2) If $|X|>|Y|$, then for every vertex $x_0\in X$, $G$ has a
maximal $x_0$-path with the terminus in $X$ and of order at least
$2d+1$.
\end{lem}
\begin{proof}
We first show the existence of a maximal $x_0$- or $y_0$-path with
the terminus in $X$. We use induction on $n:=|V(G)|$. The assertion is trivial if
$n=1,2$. Suppose that $n\geq 3$.
First assume $|X|\geq |Y|$ and let $y_0\in Y$.
Thus, $|X|>|Y\backslash\{y_0\}|$. It follows that
there is a component $H$ of $G-y_0$ (possibly $H=G-y_0$) such that
$|X_H|>|Y_H|$. Since $G$ is connected, $y_0$ has a neighbor $x_0\in
X_H$. By the induction hypothesis, $H$ has a maximal $x_0$-path
$P_0$ with the terminus in $X_H\subseteq X$. Thus, $P=y_0x_0P_0$ is a
maximal $y_0$-path with the terminus in $X$.
Now assume $|X|>|Y|$ and let $x_0\in X$. Thus, $|X\backslash\{x_0\}|\geq|Y|$.
It follows that there is a component $H$ of $G-x_0$ (possibly $H=G-x_0$)
such that $|X_H|\geq|Y_H|$. Since $G$ is connected, $x_0$ has a neighbor
$y_0\in Y_H$. By the induction hypothesis, $H$ has a maximal
$y_0$-path $P_0$ with the terminus in $X_H\subseteq X$. Thus,
$P=x_0y_0P_0$ is a maximal $x_0$-path with the terminus in $X$.
It remains to show that the path $P$ is of order at least $2d$
(if $P$ originates at $y_0$), or at least $2d+1$ (if $P$ originates at $x_0$).
Let $x_1\in X$ be the terminus of $P$ other than $x_0$ or $y_0$. Notice that
$d(x_1)\geq d$ and $N(x_1)\subseteq V(P)$. We have $|Y\cap V(P)|\geq d$,
implying that $P$ has order at least $2d$ when $P$ originates at $y_0$, and at
least $2d+1$ when $P$ originates at $x_0$. This proves Lemma \ref{Lem-specivertexpath}.
\end{proof}
For a graph $G$ and $S\subseteq V(G)$, we denote by $\rho_G(S)$
the number of edges in $G$ which are incident to at least one vertex in $S$, that is,
$$\rho_G(S):=e(G[S])+e_G(S,V(G)\backslash S).$$ From this definition, one
can see $d(v)=\rho_G(\{v\})$ for any vertex $v\in V(G)$. When there
is no danger of ambiguity, we use $\rho(u,v)$ and $\rho(S)$ instead
of $\rho_G(\{u,v\})$ and $\rho_G(S)$, respectively. An
$\{s,s'\}$-\emph{disjoint path pair} of $G$ (or shortly, an
$\{s,s'\}$-\emph{DPP}), is the union of an $s$-path and an $s'$-path
which are vertex-disjoint. Let $D$ be an $\{s,s'\}$-DPP, and $t,t'$
be the termini of the two paths in $D$. We say that $D$ is a
\emph{maximal $\{s,s'\}$-DPP} in $G$ if $N_G(t)\cup N_G(t')\subseteq
V(D)$. Clearly, $D$ is a maximal $\{s,s'\}$-DPP of $G$, if and only
if $D+ss'$ is a maximal path of $G+ss'$. For a special case that $G$
is bipartite, we say that $D$ is \emph{detached} if $t$ and $t'$ are
in distinct partition sets of $G$.
Next we shall prove two lemmas on degree conditions
for detached maximal DPPs in bipartite graphs.
\begin{lem}\label{Lem-goodmaximalpair}
Let $G=(X,Y;E)$ be a connected balanced bipartite graph. If
$\rho(x,y)\geq \rho$ for every $(x,y)\in(X,Y)$, then for any
$(x_0,y_0)\in(X,Y)$, $G$ has a detached maximal $\{x_0,y_0\}$-DPP of
order at least $\rho+1$.
\end{lem}
\begin{proof}
We first show the existence of the detached maximal
$\{x_0,y_0\}$-DPP by induction on $n:=|V(G)|$. The assertion is
trivial if $n=2$. So assume that $n\geq 4$. Let $x_0\in X,y_0\in
Y$ and let $G':=G-\{x_0,y_0\}$.
First assume that there is a balanced component $H$ of $G'$ that is
incident to both $x_0$ and $y_0$. Let $x_1,y_1\in V(H)$ be the
neighbors of $y_0$ and $x_0$, respectively. By the induction
hypothesis, $H$ has a detached maximal $\{x_1,y_1\}$-DPP, say $D_0$.
Thus, $D=D_0\cup\{x_0y_1,x_1y_0\}$ is a detached maximal
$\{x_0,y_0\}$-DPP of $G$.
Now assume that every balanced component of $G'$ is incident to
either $x_0$ or $y_0$ but not both. Let $\mathcal{H}_1$ be the set
of components $H$ of $G'$ such that either $|X_H|>|Y_H|$, or $H$ is
balanced and incident to $x_0$. Let $\mathcal{H}_2$ be the set of
components $H$ of $G'$ such that either $|Y_H|>|X_H|$, or $H$ is
balanced and incident to $y_0$.
If $y_0$ is not incident to any component of $G'$, then every
component of $G'$ is incident to $x_0$. This fact implies that
$\mathcal{H}_1\neq\emptyset$. Let $H\in\mathcal{H}_1$ and $y_1\in
N_H(x_0)$. By Lemma \ref{Lem-specivertexpath}(1), $H$ has a maximal
$y_1$-path $P_1$ with the terminus in $X_H\subseteq X$. Thus, the union
of the path $P_x=x_0y_1P_1$ and the trivial path $P_y=y_0$ is a
detached maximal $\{x_0,y_0\}$-DPP of $G$, and we are done.
In the following, we assume $y_0$ is incident to at least one
component of $G'$; and similarly, by symmetry, $x_0$ is incident
to at least one component of $G'$.
If $\mathcal{H}_2=\emptyset$, then every component of $G'$ is
balanced, and it follows that $y_0$ is not incident to any component
of $G'$, a contradiction. So $\mathcal{H}_2\neq\emptyset$, and
similarly, $\mathcal{H}_1\neq\emptyset$.
It follows that there exist $H_1\in\mathcal{H}_1$ and
$H_2\in\mathcal{H}_2$ such that: $x_0$ is incident to one of
$H_1$ and $H_2$, and $y_0$ is incident to the other. If $x_0$ is incident
to $H_1$ and $y_0$ is incident to $H_2$, then let $y_1\in
N_{H_1}(x_0)$ and $x_1\in N_{H_2}(y_0)$. Recall that
$|X_{H_1}|\geq|Y_{H_1}|$ and $|Y_{H_2}|\geq|X_{H_2}|$. By Lemma
\ref{Lem-specivertexpath}, $H_1$ has a maximal $y_1$-path $P_1$ with the
terminus in $X_{H_1}\subseteq X$ and $H_2$ has a maximal $x_1$-path
$P_2$ with the terminus in $Y_{H_2}\subseteq Y$. Thus, the union of the
two paths $P_x=x_0y_1P_1$ and $P_y=y_0x_1P_2$ is a detached maximal
$\{x_0,y_0\}$-DPP of $G$. If $x_0$ is incident to $H_2$ and $y_0$ is
incident to $H_1$, then let $y_1\in N_{H_2}(x_0)$ and $x_1\in
N_{H_1}(y_0)$. Note that in this case $|X_{H_1}|>|Y_{H_1}|$ and
$|Y_{H_2}|>|X_{H_2}|$. By Lemma \ref{Lem-specivertexpath}, $H_2$ has
a maximal $y_1$-path $P_1$ with the terminus in $Y_{H_2}\subseteq Y$, and
$H_1$ has a maximal $x_1$-path $P_2$ with the terminus in
$X_{H_1}\subseteq X$. Thus, the union of the two paths
$P_x=x_0y_1P_1$ and $P_y=y_0x_1P_2$ is a detached maximal
$\{x_0,y_0\}$-DPP of $G$. This proves the existence of the detached
maximal $\{x_0,y_0\}$-DPP of $G$.
Now let $D$ be a detached maximal $\{x_0,y_0\}$-DPP of $G$. We will
show that $D$ has order at least $\rho+1$. Let $x_1\in X,y_1\in Y$
be the termini of the two paths in $D$. Obviously, we have
$$\rho(x_1,y_1)=\left\{\begin{array}{ll}
d(x_1)+d(y_1), & x_1y_1\notin E(G);\\
d(x_1)+d(y_1)-1, & x_1y_1\in E(G).
\end{array}\right.$$
If $x_1y_1\notin E(G)$, then $|V(D)|\geq d(x_1)+d(y_1)+2\geq\rho+1$;
if $x_1y_1\in E(G)$, then $|V(D)|\geq d(x_1)+d(y_1)\geq\rho+1$. This
proves Lemma \ref{Lem-goodmaximalpair}.
\end{proof}
Let $G$ be a graph with connectivity 1, and $u,v\in V(G)$. We call
$\{u,v\}$ a \emph{good pair} of $G$, if there is an end-block $B$ of
$G$ such that exactly one of $u,v$ is an inner-vertex of $B$.
\begin{lem}\label{LeGoodPair}
Let $G=(X,Y;E)$ be a balanced bipartite graph with connectivity 1,
and $\{x_0,x'_0\}\subseteq X$ be a good pair of $G$. Suppose
$\rho(x,y)\geq \rho$ for every $(x,y)\in (X,Y)$. Then $G$ has a
detached maximal $\{x_0,x'_0\}$-DPP of order at least $\rho+1$.
\end{lem}
\begin{proof}
We prove the assertion by induction on $n:=|V(G)|$. It is easy to check
that the assertion is true for $n=4$. Now assume that $n\geq 6$. If
$G$ has a detached maximal $\{x_0,x'_0\}$-DPP, say $D$, then $D$ has
order at least $\rho+1$ (see the last paragraph of the proof of
Lemma \ref{Lem-goodmaximalpair}), and the statement holds. Now we
assume that $G$ has no detached maximal $\{x_0,x'_0\}$-DPP. Let $B$
be an end-block of $G$ that contains one vertex, say $x'_0$, as an
inner-vertex, and let $u$ be the cut-vertex of $G$ contained in $B$.
If $x_0$ is also an inner-vertex of an end-block, say $B_0$, then
we assume without loss of generality that $|V(B)|\leq|V(B_0)|$.
Suppose first that $x_0$ is a cut-vertex of $G$. Let $H$ be the
component of $G-x_0$ containing $x'_0$.
If $H$ is balanced, then let $y_0$ be a neighbor of $x_0$ in $H$. By
Lemma \ref{Lem-goodmaximalpair}, $H$ contains a detached maximal
$(x'_0,y_0)$-DPP, say $D$. It follows that $D\cup\{x_0y_0\}$ is a
detached maximal $(x_0,x'_0)$-DPP of $G$, a contradiction. So we
assume that $H$ is not balanced.
If $|X_H|>|Y_H|$, then $|X_{G-H}|<|Y_{G-H}|$. By Lemma
\ref{Lem-specivertexpath}, $H$ has a maximal $x'_0$-path with
terminus in $X_H$ and $G-H$ has a maximal $x_0$-path with terminus
in $Y_{G-H}$. Thus the union of these two paths forms a detached
maximal $(x_0,x'_0)$-DPP of $G$, a contradiction. If $|X_H|<|Y_H|$,
then $|X_{G-H}|>|Y_{G-H}|$. By Lemma \ref{Lem-specivertexpath}, $H$
has a maximal $x'_0$-path with terminus in $Y_H$ and $G-H$ has a
maximal $x_0$-path with terminus in $X_{G-H}$. Thus the union of
these two paths forms a detached maximal $(x_0,x'_0)$-DPP of $G$, also
a contradiction.
Now we assume that $G'=G-x_0$ is connected. In particular, we have
$x_0\in V(G-B)$. We first deal with the case that $N(x_0)=\{u\}$. In
this case $x_0$ is an inner-vertex of the end-block $B_0$ with
$V(B_0)=\{x_0,u\}$. It follows that $V(B)=\{x'_0,u\}$. By Lemma
\ref{Lem-specivertexpath}, $G-\{x_0,x'_0\}$ has a maximal $u$-path
$P$ with terminus in $Y$. Now the two paths $P_1=x_0uP$ and
$P_2=x'_0$ form a detached maximal $(x_0,x'_0)$-DPP of $G$, a
contradiction. So we assume that $x_0$ has a neighbor $y_0\in
V(G-B)$. We choose $y_0$ such that the distance between $y_0$ and
$u$ in $G$ is as large as possible. It follows that $B$ is an
end-block of $G'$ as well.
Let $H$ be the component of $G'-y_0$ containing $x'_0$. So $B$ is
contained in $H$.
We claim that $B$ is an end-block of $H$ as well. Suppose not. Then
$B=H$. This implies that $N_H(y_0)=\{u\}$, specially $u\in X$. If
$x_0$ has a second neighbor $y_1$, then the distance between $u$ and
$y_1$ is larger than that between $u$ and $y_0$, a contradiction. It
follows that $N(x_0)=\{y_0\}$. Now $x_0$ is an inner-vertex of the
end-block $B_0$ with $V(B_0)=\{x_0,y_0\}$, which contradicts our
choice of $B$. Thus as we claimed, $B$ is an end-block of $H$ as
well.
If $H$ is balanced, then let $x_1\in N_H(y_0)$. Since $y_0\in V(G-B)$,
we have that $\{x_1,x'_0\}$ is a good pair of $H$. By induction, $H$
has a detached maximal $(x_1,x'_0)$-DPP $D$. Now
$D\cup\{x_0y_0,y_0x_1\}$ is a detached maximal $(x_0,x'_0)$-DPP of
$G$, a contradiction. So we assume that $H$ is not balanced.
If $|X_H|>|Y_H|$, then $|X_{G-H}|<|Y_{G-H}|$. By Lemma
\ref{Lem-specivertexpath}, $H$ has a maximal $x'_0$-path $P_1$ with
terminus in $X_H$ and $G-H$ has a maximal $y_0$-path $P_2$ with
terminus in $Y_{G-H}$. If $|X_H|<|Y_H|$, then
$|X_{G-H}|\geq|Y_{G-H}|$. By Lemma \ref{Lem-specivertexpath}, $H$
has a maximal $x'_0$-path $P_1$ with terminus in $Y_H$ and $G-H$ has
a maximal $y_0$-path with terminus in $X_{G-H}$. In both cases, the
two paths $P_1$ and $x_0y_0P_2$ form a detached maximal
$(x_0,x'_0)$-DPP of $G$, a contradiction.
\end{proof}
\begin{lem}[Jackson \cite{J85}]\label{ThJa}
Let $G=(X,Y,E)$ be a 2-connected bipartite graph, where
$|X|\geq|Y|$. If each vertex of $X$ has degree at least $k$, and
each vertex of $Y$ has degree at least $l$, then $G$ contains a
cycle of length at least $2\min\{|Y|,k+l-1,2k-2\}$. Moreover, if
$k=l$ and $|X|=|Y|$, then $G$ contains a cycle of length at least
$2\min\{|Y|,2k-1\}$.
\end{lem}
\begin{lem}[Bagga, Varma \cite{BaVa}]\label{ThBaVa}
Let $G=(X,Y,E)$ be a balanced bipartite graph of order $2n$. If
$d(x)+d(y)\geq n+2$ for every $(x,y)\in(X,Y)$, then $G$ is
Hamilton-biconnected.
\end{lem}
\begin{lem}\label{LePathOrder}
Let $G$ be a 2-connected balanced bipartite graph such that
$\rho(x,y)\geq \rho$ for every $(x,y)\in (X,Y)$. Then for any two
vertices $x_1,x_2\in X$, $G$ has an $(x_1,x_2)$-path of order at
least $\rho$.
\end{lem}
\begin{proof}
The assertion can be checked easily when $|X|=2$. So we assume that
$|X|\geq 3$.
Set $k=\min\{d(x): x\in X\}$ and $l=\min\{d(y): y\in Y\}$. It
follows that $k+l=d(x)+d(y)\geq\rho(x,y)\geq\rho$ for some
$(x,y)\in(X,Y)$. If $k\neq l$, then $2k-2\geq\rho-1$ or
$2l-2\geq\rho-1$; if $k=l$, then we have $2k-1\geq\rho-1$. Notice
that $|X|=|Y|$. By Lemma \ref{ThJa}, $G$ has a cycle of length at
least $2\min\{|X|,\rho-1\}$.
Suppose first that $|X|\geq\rho-1$. It follows that $G$ has a cycle
$C$ of length at least $2\rho-2$. Since $G$ is 2-connected, there
are two disjoint paths $P_1,P_2$ from $x_1$ and $x_2$, respectively,
to $C$. Let $u_1,u_2\in V(C)$ be the termini of $P_1,P_2$
(possibly $x_i=u_i$ for some $i=1,2$). Now one of the two paths
$\overrightarrow{C}[u_1,u_2]$ and $\overleftarrow{C}[u_1,u_2]$ has
order at least $\rho$. Together with $P_1,P_2$, we can get an
$(x_1,x_2)$-path of order at least $\rho$.
Secondly, we suppose that $|X|=\rho-2$. It follows that $G$ has a
Hamiltonian cycle $C$. If one of the two paths
$P_1=\overrightarrow{C}[x_1,x_2]$ and
$P_2=\overleftarrow{C}[x_1,x_2]$ has order at least $\rho$, then
there is nothing to prove. So we assume that both $P_1$ and $P_2$
have order exactly $\rho-1$. If there is an edge, say $u_1u_2$, with
$u_i\in V(P_i)\backslash\{x_1,x_2\}$, then one of the paths
$x_1P_1u_1u_2P_2x_2$ and $x_1P_2u_2u_1P_1x_2$ has order at least
$\rho$ (notice that the sum of the orders of such two paths is
$2\rho$), and we are done. Now we assume that there are no edges
between $V(P_1)\backslash\{x_1,x_2\}$ and
$V(P_2)\backslash\{x_1,x_2\}$. It follows that for any two vertices
$(x,y)\in(X\backslash\{x_1,x_2\},Y)$, $d(x)+d(y)\leq\rho-1$, a
contradiction.
Lastly, we suppose that $|X|\leq\rho-3$. Let $y_1\in N(x_1)$, $y_2\in
Y\backslash\{y_1\}$, and set $G'=G-\{x_1,y_2\}$. Now $G'$ is a
balanced bipartite graph of order $2(|X|-1)$ and for every
$(x,y)\in(X_{G'},Y_{G'})$,
$d_{G'}(x)+d_{G'}(y)\geq\rho_{G'}(x,y)\geq\rho-2\geq(|X|-1)+2$. By Lemma
\ref{ThBaVa}, $G'$ is Hamilton-biconnected. Let $P'$ be a
Hamiltonian $(y_1,x_2)$-path of $G'$. Then $P=x_1y_1P'$ is an
$(x_1,x_2)$-path of order $|V(G)|-1$. Notice that if $G$ is complete
bipartite, then $\rho\leq|V(G)|-1$; otherwise
$\rho\leq|V(G)|-2$. It follows that $P$ is an $(x_1,x_2)$-path of
order at least $\rho$.
\end{proof}
A subgraph $F$ of a graph $G$ is called an \emph{$(x,L)$-fan} if $F$
has the following decomposition $F=\cup_{i=1}^kP_i$, where
\begin{itemize}
\item $k\geq 2$;
\item each $P_i$ is a path with two end-vertices $x$ and $y_i\in V(L)$;
\item $V(P_i)\cap V(L)=\{y_i\}$ and $V(P_i)\cap V(P_j)=\{x\}$.
\end{itemize}
The following lemma on the fan structure is a corollary of a theorem on weighted
graphs, which was proved by Fujisawa, Yoshimoto, and Zhang (see \cite[Lemma~1]{FYZ05}).
We need this refined version of the Fan Lemma to find a long cycle.
\begin{lem}\label{LemFYZ}
Let $G$ be a 2-connected graph, $C$ a longest cycle of $G$, and $H$ a
component of $G-C$. If $d(v)\geq d$ for every $v\in V(H)$, then for
every vertex $x\in V(H)$, there is an $(x,C)$-fan $F$ with $e(F)\geq
d$.
\end{lem}
We also need the following two results on long cycles in bipartite graphs
due to Jackson \cite{J81,J85}.
\begin{lem}\rm ({Jackson \cite[Lemma~5]{J85}})\label{Lem-Jacksonmaximalpath}
Let $G=(X,Y;E)$ be a 2-connected bipartite graph. Let $P$ be a maximal
path in $G$ with two end-vertices $u$ and $v$.\\
(i) If $u\in X$ and $v\in Y$, then $G$ contains a cycle of length at
least $\min\{|V(P)|,2(d(u)+d(v)-1)\}$.\\
(ii) If $u,v\in X$ then $G$ contains a cycle of length at least
$\min\{|V(P)|-1,2(d(u)+d(v)-2)\}$.
\end{lem}
The final lemma was originally conjectured by Sheehan (see \cite[pp.332]{J81}).
\begin{lem}{\rm (Jackson \cite[Theorem~1]{J81})}\label{Lem_Cyclecontaining}
Let $G=(X,Y;E)$ be a bipartite graph with $|X|\leq |Y|$. If $d(x)\geq\max\{|X|,|Y|/2+1\}$ for every
vertex $x\in X$, then $G$ has a cycle containing all vertices in $X$.
\end{lem}
\subsection{Proof of Theorem \ref{Thm-BipartiteBondy}}
We first discuss on the extremal graphs of Theorem \ref{Thm-BipartiteBondy}.
Before the discussion, let us introduce some notations.
Let $a$, $b$, and $c$ be three positive integers. Let $\mathcal{B}_{a,b}$ be the set of bipartite
graphs with two partition sets of size $a$ and $b$, respectively. We define a graph $L_{a,b}^c$
as follows. (When the parameter $c$ need not be emphasized, we write $L_{a,b}$ instead.)
\noindent
$\bullet$ If $a\leq c$ or $b\leq c$, then let $L_{a,b}=K_{a,b}$.\\
$\bullet$ If $c<b\leq \max\{a,2c\}$, then let $L_{a,b}$ be the
graph obtained by identifying one vertex from the $a$-set of $K_{a,c}$ and
the other one from the $1$-set of $K_{1,b-c}$.\\
$\bullet$ If $c< a\leq \max\{b,2c\}$, then let $L_{a,b}$ be the
graph obtained by identifying one vertex from the $b$-set of $K_{b,c}$ and
the other vertex from the $1$-set of $K_{1,a-c}$.\\
$\bullet$ If $2c<\max\{a,b\}$, then let $L_{a,b}$ be the
graph obtained by identifying one vertex from the $c$-set of $K_{c,b-c}$ and
the other vertex from the $(a-c+1)$-set of $K_{a-c+1,c}$.
By Theorem \ref{Thm-Jackson}, $L_{a,b}^c$ is a graph in $\mathcal{B}_{a,b}$ with the maximum number
of edges among those without cycles of length more than $2c$ (see Jackson \cite{J85}). One
can see the graph $L_{a,b}^c$ shows that the bounds in Theorem \ref{Thm-BipartiteBondy}
are tight for each case.
We define $$\varrho(a,b)=e(L_{a,b})-c^2.$$ Notice that if $C$ is a
longest cycle of a graph $G=L_{a,b}^c$, $c\leq b\leq a$, then
$\varrho(a,b)=\rho(G-C)$.
Armed with the necessary additional notations, let us restate
Theorem \ref{Thm-BipartiteBondy} as follows.
\noindent
{\bf Theorem $1.7^{'}$.} Let $G=(X,Y;E)$ be a bipartite graph,
where $|X|=a\geq b=|Y|$. Let $C$ be a longest cycle of
$G$ of length $2c$.
Then $\rho(G-C)\leq\varrho(a,b)$, i.e.,\\
(1) if $b\leq 2c$, then $\rho(G-C)\leq c(a-1-c)+b$; and\\
(2) if $b\geq 2c$, then $\rho(G-C)\leq c(a+b+1-3c)$.
Now we give a proof of Theorem $1.7^{'}$.
\noindent
{\bf Proof of Theorem $1.7^{'}$.}
We prove the theorem by contradiction. Let $G$ be a counterexample
to Theorem $1.7^{'}$ such that:\\
(i) $|G|$ is minimum; and\\
(ii) subject to (i), $e(G)$ is maximum.
\begin{claim}\label{Claim2-connected}
$G$ is 2-connected.
\end{claim}
\begin{proof}
If $G$ is disconnected, then let $G_1$ be a connected bipartite
graph obtained from $G$ by adding $\omega(G)-1$ edges such that each
new edge is between distinct partition sets, where $\omega(G)$
denotes the number of components in $G$. Since the added edges are
cut-edges of $G_1$, $C$ is a longest cycle of $G_1$ as well and all
the added edges are outside $C$. So $\rho_G(G-C)<\rho_{G_1}(G_1-C)$,
but $e(G_1)>e(G)$, a contradiction to (ii). This implies $G$ is
connected. Suppose now that $G$ has connectivity 1.
Let $B$ be an end-block of $G$ with smallest order
among those not containing $C$. We choose $G$
such that:\\
(iii) subject to (i),(ii), $|B|$ is minimum.
Let $u_0$ be the cut-vertex of $G$ contained in $B$. Set
$$\theta_{X}=\left\{\begin{array}{ll}
1, &u_0\in X;\\
0, &u_0\in Y.
\end{array}\right. \mbox{ and~~~ } \theta_{Y}=\left\{\begin{array}{ll}
0, & u_0\in X;\\
1, & u_0\in Y.
\end{array}\right.$$
We first claim that $|X_B|\geq 2$ and $|Y_B|\geq 2$. Suppose not.
Since $B$ is bipartite and non-separable, we deduce $B\cong K_2$.
Set $V(B)=\{u_0,v_0\}$. So $v_0$ is of degree 1. Let $G_2:=G-v_0$.
By the choice of $G$, we get
$$\rho(G-C)=\rho_{G_2}(G_2-C)+1\leq\varrho(a-\theta_X,b-\theta_Y)+1\leq\varrho(a,b),$$
a contradiction. Thus, $|X_B|\geq 2$ and $|Y_B|\geq 2$.
Let $u_3\in V(C)$ be such that $u_0,u_3$ are in the same partition set
(possibly $u_0=u_3$). Let
$$G_3:=(G-E(u_0,B-u_0))\cup\{u_3v: v\in V(B), u_0v\in E(G)\}.$$
Clearly, $C$ is a longest cycle of $G_3$ as well, and
$\rho(G-C)=\rho_{G_3}(G_3-C)$. Hence $G_3$ is also a counterexample
satisfying (i)(ii)(iii).
We use $B_3$ to denote the end-block of $G_3$ with the vertex set
$(V(B)\backslash\{u_0\})\cup \{u_3\}$. Set $D_3=G_3-(B_3-u_3)$. So
$G_3$ consists of $B_3$ and $D_3$. We have
$|X_{B_3}|+|X_{D_3}|=a+\theta_X$, and
$|Y_{B_3}|+|Y_{D_3}|=b+\theta_Y$. Since $D_3$ contains the cycle
$C$, $|X_{D_3}|\geq c$ and $|Y_{D_3}|\geq c$.
Let $B_4$ be a graph on $V(B_3)$ isomorphic to
$L_{|X_{B_3}|,|Y_{B_3}|}$ (recall the definition), $D_4$ a graph on $V(D_3)$ isomorphic
to $L_{|X_{D_3}|,|Y_{D_3}|}$, and $G_4$ the union of $B_4$ and
$D_4$. Clearly, the longest cycle of $D_4$ is of length $2c$. We
choose $D_4$ such that $C$ is a (longest) cycle in $D_4$ as well.
Also note that $B_3$ has no cycle of length more than $2c$. By
the choice of $G$,
$$\rho_{D_3}(D_3-C)\leq\varrho(|X_{D_3}|,|Y_{D_3}|)=\rho_{D_4}(D_4-C),$$
and furthermore, by Theorem \ref{Thm-Jackson},
$$e(B_3)\leq e(L_{|X_{B_3}|,|Y_{B_3}|})=e(B_4).$$
Thus, we have
$$\rho_{G_3}(G_3-C)=\rho_{D_3}(D_3-C)+e(B_3)\leq\rho_{D_4}(D_4-C)+e(B_4)=\rho_{G_4}(G_4-C).$$
Since $e(G_4[C])=c^2$ and $e(G_4)\geq e(G_3)$, we can see that $G_4$
is a counterexample satisfying (i)(ii)(iii).
If $B_4$ is separable, then we have a contradiction to (iii). So
$B_4$ is 2-connected, i.e., $B_4$ is an end-block of $G_4$. By the
definition of $B_4\cong L_{|X_{B_3}|,|Y_{B_3}|}$, we infer
$\min\{|X_{B_4}|,|Y_{B_4}|\}\leq c$ and $B_4\cong
K_{|X_{B_3}|,|Y_{B_3}|}$. Recall that $u_3$ is the cut-vertex of
$G_4$ contained in $B_4$. Let $u_4$ be a vertex in
$V(B_4)\backslash\{u_3\}$ with $d_{B_4}(u_4)\leq c$ (recall that
$|X_{B_4}|\geq 2$ and $|Y_{B_4}|\geq 2$). Set
$$\vartheta_x=\left\{\begin{array}{ll}
1, & u_4\in X;\\
0, & u_4\in Y.
\end{array}\right. \mbox{ and } \vartheta_y=\left\{\begin{array}{ll}
0, & u_4\in X;\\
1, & u_4\in Y.
\end{array}\right.$$
We will show that $\min\{|X_{D_4}|,|Y_{D_4}|\}=c$. Suppose that
$\min\{|X_{D_4}|,|Y_{D_4}|\}>c$. Without loss of generality, we assume
$|X_{D_4}|\geq|Y_{D_4}|$ (the other case can be dealt with
similarly). If $c<|Y_{D_4}|\leq 2c$, then $D_4$ has a pendant edge,
a contradiction to (iii). So $|Y_{D_4}|>2c$,
and $D_4$ consists of $A_x\cong K_{c,|Y_{D_4}|-c}$
and $A_y\cong K_{|X_{D_4}|+1-c,c}$, with a common vertex in
$X_{D_4}$. Let $G_5$ be the graph obtained from $G_4-E(u_4,B_4)$ by
adding edges from $u_4$ to all vertices in $X_{A_x}$ (if $u_4\in
Y_{B_4}$) or $Y_{A_y}$ (if $u_4\in X_{B_4}$). In each case, the
vertex $u_4$ is of degree $c$ in $G_5$. That is, $G_5=B_5\cup
D_5$ where
$$B_5=L_{|X_{B_4}|-\vartheta_x,|Y_{B_4}|-\vartheta_y},
D_5=L_{|X_{D_4}|+\vartheta_x,|Y_{D_4}|+\vartheta_y}.$$ Obviously, $C$
is a longest cycle of $G_5$ as well, and $\rho_{G_4}(G_4-C)\leq\rho_{G_5}(G_5-C)$.
Note that the end-block $B_5$ of $G_5$ has order less than that of $B$, a
contradiction to (iii). Thus, $\min\{|X_{D_4}|,|Y_{D_4}|\}=c$.
By symmetry, we assume that $|X_{D_4}|=c$. If $|X_{B_4}|\leq c$,
then we can get a contradiction
similarly as above. So assume that $|X_{B_4}|>c$, which implies
that $|Y_{B_4}|\leq c$, since $\min\{|X_{B_4}|,|Y_{B_4}|\}\leq c$.
We claim that $|Y_{B_4}|=c$. Suppose that $|Y_{B_4}|<c$. If
$|Y_{D_4}|=c$ as well, then we can get a contradiction similarly as
above. So $|Y_{D_4}|>c$. Let $u_6$ be a vertex in
$Y_{D_4-C}$, and let $G_6$ be the graph obtained from
$G_4-E(u_6,D_4)$ by adding edges from $u_6$ to all vertices in
$X_{B_4}$. Clearly $e(G_4)<e(G_6)$. One can see that
$G_6$ has no cycle of length more than $2c$. Thus, $C$ is a longest
cycle of $G_6$ as well, and $\rho_{G_4}(G_4-C)\leq\rho_{G_6}(G_6-C)$,
a contradiction to (ii). Hence we conclude $|Y_{B_4}|=c$.
This implies that $G_4$ is isomorphic to $L_{a,b}$ or $L_{b,a}$. In
any case, we have $\rho_{G_4}(G_4-C)=\varrho(a,b)$, a contradiction.
This proves Claim \ref{Claim2-connected}.
\end{proof}
Next we distinguish the following cases and derive a contradiction for each one.
\begin{case}
$2c<b\leq a$.
\end{case}
For this case, $\varrho(a,b)=\varrho(a-1,b)+c=\varrho(a,b-1)+c$. If
there is a vertex in $G-C$ with degree at most $c$, then by the
choice of $G$, $\rho_{G}(G-C)\leq\varrho(a,b)$. Thus, we assume that
every vertex in $G-C$ has degree at least $c+1$. Let $H$ be a
component of $G-C$. By Lemma \ref{LemFYZ}, for each vertex $v\in H$,
there is an $(v,C)$-fan $F$ with $e(F)\geq c+1$. This implies
$|V(C)|\geq 2c+2$, a contradiction.
\begin{case}
$c\leq b\leq 2c<a$.
\end{case}
For this case, $\varrho(a,b)=\varrho(a-1,b)+c$. If there is a vertex
in $X_{G-C}$ with degree at most $c$, then we can get a
contradiction by the choice of $G$. So assume that every vertex in
$X_{G-C}$ has degree at least $c+1$. Let $X'\subseteq X_{G-C}$ with
$|X'|=c+1$, and $G'=G[X'\cup Y]$. Observe that for every $x\in X'$,
$d_{G'}(x)\geq c+1=\max\{|X'|,|Y|/2+1\}$. By Lemma
\ref{Lem_Cyclecontaining}, $G'$ has a cycle containing all vertices
in $X'$, i.e., $G$ has a cycle of length $2c+2$, a contradiction.
\begin{case}
$c\leq b<a\leq 2c$.
\end{case}
For this case, $\varrho(a,b)=\varrho(a-1,b)+c$. So, as in the previous case, we
may assume that every vertex in
$X_{G-C}$ has degree at least $c+1$. Since $a>b$, we can choose $H$
as a component of $G-C$ with $|X_H|>|Y_H|$. Let
$N_C(H)=\{u_1,u_2,\cdots,u_\alpha\}$, where $u_1,\cdots,u_\alpha$
appear in this order along $C$. Let $v_1\in N_H(u_1)$. By Lemma
\ref{Lem-specivertexpath}, $H$ has a maximal $v_1$-path $P_1$ with
terminus in $X_H$. Let $s\in X_H$ be the terminus of $P_1$. We
extend the path $sP_1v_1u_1\overleftarrow{C}[u_1,u_2]$ to a
maximal $s$-path, say $P$. Thus $P$ is a maximal path of $G$. Let
$t$ be the end-vertex of $P$ other than $s$. Since $d(s)\geq c+1$,
we have $|V(P)|\geq 2c+2$ and $d(s)+d(t)\geq c+3$. By Lemma
\ref{Lem-Jacksonmaximalpath}, $G$ has a cycle of length more than
$2c$, a contradiction.
\begin{case}
$c\leq b=a\leq 2c$.
\end{case}
For this case, $\varrho(a,b)=\varrho(a-1,b-1)+c+1$. If there is a
pair of vertices $(x,y)\in(X_{G-C},Y_{G-C})$ with $\rho(x,y)\leq
c+1$, then we are done by the choice of $G$. So assume that for
every $(x,y)\in(X_{G-C},Y_{G-C})$, $\rho(x,y)\geq c+2$. Recall that
$$\rho(x,y)=\left\{\begin{array}{ll}
d(x)+d(y), & xy\notin E(G);\\
d(x)+d(y)-1, & xy\in E(G).
\end{array}\right.$$
\begin{subcase}
$G-C$ is disconnected.
\end{subcase}
First assume that there is a balanced component $H$ of $G-C$. Then
both $G_1=G[V(C)\cup V(H)]$ and $G_2=G-H$ are balanced. Clearly, $C$
is a longest cycle of $G_1$ and $G_2$ as well. Since $|G_1|<|G|$, we
get
$$\rho_{G_1}(G_1-C)\leq c(|X_{G_1}|-1-c)+|Y_{G_1}|=(c+1)|X_H|,$$ and similarly,
$$\rho_{G_2}(G_2-C)\leq(c+1)|X_{G-C-H}|.$$ Thus
$$\rho(G-C)=\rho(H)+\rho(G-C-H)=\rho_{G_1}(G_1-C)+\rho_{G_2}(G_2-C)\leq(c+1)|X_{G-C}|=\varrho(a,b).$$
Now we assume that every component of $G-C$ is not balanced. Let
$H_1, H_2$ be two components of $G-C$ such that
$|X_{H_1}|>|Y_{H_1}|$, $|X_{H_2}|<|Y_{H_2}|$. Let $N_C(H_1)\cup
N_C(H_2)=\{u_1,u_2,\ldots,u_\alpha\}$, where the vertices appear in
this order along $C$. Set $\beta_1=\min\{d_{H_1}(x): x\in X_{H_1}\}$
and $\beta_2=\min\{d_{H_2}(y): y\in Y_{H_2}\}$. Thus
$\alpha+\beta_1+\beta_2\geq c+2$.
Without loss of generality, we suppose $u_1\in N_C(H_1)$ and $u_2\in
N_C(H_2)$. Let $v_1\in N_{H_1}(u_1)$, $v_2\in N_{H_2}(u_2)$. By
Lemma \ref{Lem-specivertexpath}, $H_1$ has a maximal $v_1$-path
$P_1$ with terminus in $X_{H_1}$ and of order at least $2\beta_1$,
$H_2$ has a maximal $v_2$-path $P_2$ with terminus in $Y_{H_2}$ and
of order at least $2\beta_2$. Thus,
$P=P_1v_1\overleftarrow{C}[u_1,u_2]v_2P_2$ is a maximal path of $G$.
Let $s,t$ be the ends of $P$. Recall that $d(s)+d(t)\geq c+2$. If
$|V(P)|\geq 2c+1$, then by Lemma \ref{Lem-Jacksonmaximalpath}, $G$
has a cycle of length more than $2c$, a contradiction. So assume
that $|V(P)|\leq 2c$. This implies that
$\ell(\overrightarrow{C}[u_1,u_2])\geq 2$. Therefore, we infer
$\ell(\overrightarrow{C}[u_i,u_{i+1}])\geq 2$ for all
$i=1,2,\cdots,\alpha$ (the indices are taken modulo $\alpha$).
Moreover,
$$2c\geq|V(P_1)|+|V(P_2)|+\ell(\overleftarrow{C}[u_1,u_2])+1\geq
2\beta_1+2\beta_2+2c+1-\ell(\overrightarrow{C}[u_1,u_2]).$$ Thus we
have $\ell(\overrightarrow{C}[u_1,u_2])\geq 2\beta_1+2\beta_2+1$. So
$$\ell(C)=\sum_{i=1}^\alpha\ell(\overrightarrow{C}[u_i,u_{i+1}])\geq
2\beta_1+2\beta_2+1+2(\alpha-1)>2c,$$ a contradiction.
\begin{subcase}
$G-C$ is connected.
\end{subcase}
Let $H=G-C$, $N_C(H)=\{u_1,u_2,\ldots,u_\alpha\}$, and
$\beta=\min\{\rho_H(x,y): x\in X_H, y\in Y_H\}$. So
$\alpha+\beta\geq c+2$. Note that $H$ is balanced.
\begin{subsubcase}
$N_C(H)\cap X_C\neq\emptyset$ and $N_C(H)\cap Y_C\neq\emptyset$.
\end{subsubcase}
Without loss of generality, we assume that $u_1\in X_C$, $u_2\in
Y_C$. Let $v_1\in N_H(u_1)$, $v_2\in N_H(u_2)$. By Lemma
\ref{Lem-goodmaximalpair}, $H$ has a detached maximal
$(v_1,v_2)$-DPP $D$ with order at least $\beta+1$. Thus $P=D\cup
v_1u_1\overleftarrow{C}u_2v_2$ is a maximal path of $G$. Let $s,t$
be the two end-vertices of $P$. So $d(s)+d(t)\geq c+2$. If
$|V(P)|\geq 2c+1$, then by Lemma \ref{Lem-Jacksonmaximalpath}, $G$
has a cycle of length more than $2c$, a contradiction. So we derive
$|V(P)|\leq 2c$. Therefore,
$$2c\geq|V(D)|+\ell(\overleftarrow{C}[u_1,u_2])+1\geq
\beta+1+2c+1-\ell(\overrightarrow{C}[u_1,u_2]).$$ This implies
$\ell(\overrightarrow{C}[u_1,u_2])\geq \beta+2$.
Note that there also exists some subscript $i$ such that $u_i\in
Y_C$ and $u_{i+1}\in X_C$. We can get
$\ell(\overrightarrow{C}[u_i,u_{i+1}])\geq\beta+2$ as above. So
$$\ell(C)=\sum_{i=1}^\alpha\ell(C[u_i,u_{i+1}])\geq
2(\beta+2)+2(\alpha-2)>2c,$$ a contradiction.
\begin{subsubcase}
$N_C(H)\cap X_C=\emptyset$ or $N_C(H)\cap Y_C=\emptyset$.
\end{subsubcase}
Without loss of generality, we assume that $N_C(H)\subseteq Y_C$.
Recall that $N_C(H)=\{u_1,u_2,\ldots,u_\alpha\}$, and
$\beta=\min\{\rho_H(x,y): x\in X_H, y\in Y_H\}$. Since $G$ is
2-connected, $\alpha\geq 2$ and $|N_H(C)|\geq 2$. In particular,
$|X_H|=|Y_H|\geq 2$.
First assume that $H$ is 2-connected. Then at least two of $u_i$ have
the property that $u_i,u_{i+1}$ are adjacent to two distinct
vertices in $H$. If $u_i,u_{i+1}$ are adjacent to two distinct
vertices in $H$, then by Lemma \ref{LePathOrder}, there is a
$(u_i,u_{i+1})$-path of length at least $\beta+1$ with all internal
vertices in $H$. It follows that
$\ell(\overrightarrow{C}[u_i,u_{i+1}])\geq\beta+1$. Recall that
there are at least two pairs $\{u_i,u_{i+1}\}$ that have two
distinct neighbors in $H$. Thus
$$\ell(C)=\sum_{i=1}^\alpha\ell(C[u_i,u_{i+1}])\geq
2(\beta+1)+2(\alpha-2)>2c,$$ a contradiction.
Now we assume that $H$ has connectivity 1. Then at least two of
$u_i$ have the property that $u_i,u_{i+1}$ are adjacent to a good
pair of $H$. If $u_i,u_{i+1}$ are adjacent to a good pair of $H$,
say $\{x_i,x_{i+1}\}$, then by Lemma \ref{LeGoodPair}, $H$ has a
detached maximal $\{x_i,x_{i+1}\}$-DPP $D$ of order at least
$\beta+1$. Let $s,t$ be the termini of the two paths of $D$. It
follows that $P=D\cup(x_iu_i\overleftarrow{C}u_{i+1}x_{i+1})$ is a maximal
path of $G$. If $|V(P)|\geq 2c+1$, then $G$ has a cycle of length
more than $2c$ by Lemma \ref{Lem-Jacksonmaximalpath}. So assume that
$|V(P)|\leq 2c$. That is,
$$2c\geq|V(P)|\geq\beta+1+\ell(\overleftarrow{C}[u_i,u_{i+1}])+1=\beta+2c+2-\ell(\overrightarrow{C}[u_i,u_{i+1}]).$$
This implies $\ell(\overrightarrow{C}[u_i,u_{i+1}])\geq\beta+2$.
Recall that there are at least two pairs $\{u_i,u_{i+1}\}$ that
are adjacent to a good pair of $H$. It follows that
$$\ell(C)=\sum_{i=1}^\alpha\ell(C[u_i,u_{i+1}])\geq
2(\beta+2)+2(\alpha-2)>2c,$$ a contradiction. The proof is complete.
\rule{4pt}{7pt}
\section{Proofs of Theorems \ref{Thm-general} and \ref{Thm-balanced}}
In this section, we prove Theorem \ref{Thm-balanced}
and then prove Theorem \ref{Thm-general}.
We first give a sketch of the proof of Theorem \ref{Thm-balanced}.
We shall show that $G$ is weakly bipancyclic with girth 4 and prove
it by contradiction. Suppose not. Then for a longest cycle $C$,
the hamiltonian graph $G[C]$ is not bipancyclic. By a theorem of
Entringer and Schmeichel \cite{ES88}, we can get an upper bound
of $e(G[C])$. Notice that Theorem \ref{Thm-BipartiteBondy} gives
an upper bound of $e(G-C)+e(G-C,C)$. Combining these bounds yields an
upper bound of $e(G)$, which contradicts the edge number condition.
The proof of Theorem \ref{Thm-balanced} needs the following result.
\begin{theoremA}[Entringer and Schmeichel \cite{ES88}]\label{ThES}
Let $G$ be a hamiltonian bipartite graph of order $2n\geq 8$.
If $e(G)>\frac{n^2}{2}$, then $G$ is bipancyclic.
\end{theoremA}
We are now ready to give the proof of Theorem \ref{Thm-balanced}.
\noindent
{\bf Proof of Theorem \ref{Thm-balanced}.} (i) Set $n=m$ and $t=n-k$
in Theorem \ref{Thm-Jackson}. Since $n\geq 2k+2$, we have $n\leq
2t-2$. By computation, we get $e(G)\geq n(n-k-1)+k+2>n+(n-1)(t-1)$.
By Theorem \ref{Thm-Jackson}, $c(G)\geq 2t=2(n-k)$. The proof is
complete.
\noindent
(ii) Suppose that $G$ is not weakly bipancyclic
with girth 4. Let $C$ be a longest cycle in $G$, let $2c$ be its length,
and set $G'=G[C]$. Notice that $G'$ is hamiltonian. Since $G$ is not weakly bipancyclic
with girth 4, $G'$ is not bipancyclic. By Theorem \ref{ThES}, we have $e(G')\leq \frac{c^2-1}{2}$.
By Theorem \ref{Thm-balanced}(i), we know $c\geq n-k\geq k+2$ and $2c-n\geq n-2k>0$.
By Theorem \ref{Thm-BipartiteBondy},
\begin{align*}
e(G)&=e(G')+e(G-C,C)+e(G-C)\leq -\frac{c^2}{2}+c(n-1)+n-\frac{1}{2}.
\end{align*}
Recall that $n-k\leq c\leq n$. Define the function $f(x)=-\frac{x^2}{2}+x(n-1)+n-\frac{1}{2}$
for $n-k\leq x\leq n$. Its axis of symmetry is $x=n-1$, so
\begin{align*}
f(x)\leq f(n-1)=\frac{(n-1)^2}{2}+n-\frac{1}{2}=\frac{n^2}{2}\quad\text{for all }n-k\leq x\leq n.
\end{align*}
Moreover, the quantity $n(n-k-1)+k+2-\frac{n^2}{2}=\frac{n^2}{2}-(k+1)n+k+2$ is increasing in $n$
for $n\geq k+1$ and equals $k+2>0$ at $n=2(k+1)$; since $n\geq 2(k+1)$, it follows that
$\frac{n^2}{2}<n(n-k-1)+k+2$. Hence
\begin{align*}
e(G)\leq f(c)\leq \frac{n^2}{2}<n(n-k-1)+k+2\leq e(G),
\end{align*}
a contradiction. The proof is complete.
\rule{4pt}{7pt}
Using Theorem \ref{Thm-balanced}, we can prove Theorem \ref{Thm-general}
now.
\noindent
{\bf Proof of Theorem \ref{Thm-general}.}
We prove the theorem by contradiction. Let $G$ be a counterexample such that
the order $n+m$ is smallest and, subject to that, the number of edges is smallest.
By Theorem \ref{Thm-balanced}, we know $n\geq m+1$. If there is a vertex,
say $y\in Y$, such that $d_X(y)\leq m-k-1$, then let $G':=G-y$.
We can see $e(G')=e(G)-d_X(y)\geq (n-1)(m-k-1)+k+2$. By the choice of $G$,
$G'$ is not a counterexample, and hence neither is $G$, a contradiction.
Thus, $d_X(y)\geq m-k$ for each vertex $y\in Y$. Hence $e(G)\geq n(m-k)>(n-1)(m-k-1)+k+2$,
since $m\geq 2k+2$ and $n\geq 2$. Now we can delete an edge in $G$ and
find a smaller counterexample, a contradiction. This contradiction completes
the proof.
\rule{4pt}{7pt}
\end{document}
\begin{document}
\title{Generating higher order quantum dissipation from lower order parametric processes}
\author{S.O. Mundhada}
\author{A. Grimm}
\author{S. Touzard}
\author{U. Vool}
\author{S. Shankar}
\author{M.H. Devoret}
\affiliation{Department of Applied Physics, Yale University, New Haven, Connecticut 06520, USA}
\author{M. Mirrahimi}
\affiliation{QUANTIC team, INRIA de Paris, 2 Rue Simone Iff, 75012 Paris, France}
\affiliation{Yale Quantum Institute, Yale University, New Haven, Connecticut 06520, USA}
\date{\today}
\begin{abstract}
Stabilization of quantum manifolds is at the heart of error-protected quantum information storage and manipulation. Nonlinear driven-dissipative processes achieve such stabilization in a hardware efficient manner. Josephson circuits with parametric pump drives implement these nonlinear interactions. In this article, we propose a scheme to engineer a four-photon drive and dissipation on a harmonic oscillator by cascading experimentally demonstrated two-photon processes. This would stabilize a four-dimensional degenerate manifold in a superconducting resonator. We analyze the performance of the scheme using numerical simulations of a realizable system with experimentally achievable parameters.
\end{abstract}
\maketitle
\section{Introduction}
\begin{figure*}
\caption{Basic principle behind cascading two-photon exchange processes. (a) A high-Q cavity at frequency $\omega_a$ coupled to a Josephson junction mode at frequency $\omega_b$ and anharmonicity $\chi_{bb}$.}
\label{fig:basic_principle_a}
\label{fig:basic_principle_b}
\label{fig:basic_principle_c}
\label{fig:basic_principle}
\end{figure*}
In order to achieve a robust encoding and processing of quantum information, it is important to stabilize not only individual quantum states, but the entire manifold spanned by their superpositions. This requires synthesizing artificial interactions with desirable properties which are impossible to find in natural systems. Particularly, in the case of quantum superconducting circuits, Josephson junctions together with parametric pumping methods provide powerful hardware elements for the design of such Hamiltonians. Here we extend the design toolkit, by using ideas borrowed from Raman processes to achieve Hamiltonians of high-order nonlinearity. More precisely, we introduce a novel nonlinear driven-dissipative process stabilizing a four-dimensional degenerate manifold.
In general, a quantum system interacting with its environment will decohere through the entanglement between the environmental and the system degrees of freedom. However, in certain cases, a driven system with a properly tailored interaction with an environment can remain in a pure excited state or even a manifold of excited states. The simplest example is a driven harmonic oscillator with an ordinary dissipation, i.e. a frictional force proportional to velocity. In the underdamped quantum regime, this friction corresponds to the harmonic oscillator undergoing a single-photon loss process. Such a driven-dissipative process, in the rotating frame of the harmonic oscillator, can be modeled by the master equation
$$
\frac{\mathrm{d}}{\mathrm{d}t}\rho=-i[\epsilon_d \hat{a}^\dag+\epsilon_d^* \hat{a},\rho]+\kappa\mathcal{D}\left[\hat{a}\right]\rho,
$$
where $\rho$ is the density operator, $\hat a$ is the harmonic oscillator annihilation operator, $\epsilon_d$ represents the complex amplitude of the resonant drive, $\kappa$ is the dissipation rate of the harmonic oscillator and
$$
\mathcal{D}\left[\hat{L}\right]\rho=\hat{L}\rho\hat{L}^\dagger - \frac{1}{2}\hat{L}^\dagger \hat{L} \rho - \frac{1}{2} \rho \hat{L}^\dagger \hat{L}
$$
is the Lindblad super-operator. The system admits a pure steady state, which is a coherent state denoted by $\ket{\alpha}$ where $\alpha=-2i\epsilon_d/\kappa$. Note also that the right-hand side of the above master equation can be simply written as $\kappa\mathcal{D}\left[\hat{a}-\alpha\right]\rho$. The fact that $\ket{\alpha}$ is the steady state of the process follows from $(\hat{a}-\alpha)\ket{\alpha}=0$. This idea can be generalized to a non-linear dissipation of the form $\kappa\mathcal{D}[a^n-\alpha^n]\rho$ which admits as steady states the $n$ coherent states $\{\ket{\alpha e^{2im\pi/n}}\}_{m=0}^{n-1}$. Indeed, all these coherent states and their superpositions are in the kernel of the dissipation operator $(a^n-\alpha^n)$. Therefore, this process stabilizes the whole $n$-dimensional manifold spanned by the above coherent states.
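To make this concrete, the following minimal numerical sketch (an illustration assuming the Python library QuTiP, not the simulation code used for the figures below; the values of $\epsilon_d$ and $\kappa$ are arbitrary) checks that the steady state of the above master equation is indeed the coherent state $\ket{\alpha}$ with $\alpha=-2i\epsilon_d/\kappa$.
\begin{verbatim}
# Minimal illustrative sketch (assumes QuTiP): the steady state of the driven,
# damped oscillator d rho/dt = -i[eps_d a^dag + eps_d* a, rho] + kappa D[a] rho
# is the coherent state |alpha> with alpha = -2i eps_d / kappa.
import numpy as np
import qutip as qt

N = 30                                    # Fock-space truncation
a = qt.destroy(N)
eps_d, kappa = 0.5, 1.0                   # arbitrary illustrative values

H = eps_d * a.dag() + np.conj(eps_d) * a  # drive Hamiltonian (hbar = 1)
c_ops = [np.sqrt(kappa) * a]              # single-photon loss D[a]

rho_ss = qt.steadystate(H, c_ops)
alpha = -2j * eps_d / kappa
print(qt.fidelity(rho_ss, qt.coherent(N, alpha)))  # expected to be close to 1
\end{verbatim}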
The case with $n=2$ has been proposed in~\citep{Wolinsky1988,Gilles1994,HachIII1994,Mirrahimi2014} and experimentally realized in~\citep{Leghtas2015}. The idea consists of mediating a coupling between a high-Q cavity mode (resonance frequency $\omega_a$) and a low-Q resonator (resonance frequency $\omega_b$) through a Josephson junction. Applying a strong microwave drive at frequency $\omega_{\mathrm{pump}}=2\omega_a-\omega_b$ and a weaker drive at frequency $\omega_b$, we achieve an effective interaction Hamiltonian of the form
$$
\frac{H_{\mathrm{2ph}}}{\hbar}=(g_{\mathrm{2ph}}^*\hat a^{\dag~2}\hat b+g_{\mathrm{2ph}}\hat a^{2}\hat b^\dag)-(\epsilon_d^* \hat b+\epsilon_d \hat b^\dag).
$$
Combining this interaction with a strong dissipation $\Gamma\mathcal{D}[\hat{b}]$ at the rate $\Gamma\gg|g_{\mathrm{2ph}}|$ translates to an effective dissipation of the form $\kappa_{\mathrm{2ph}}\mathcal{D}[\hat a^2-\alpha^2]\rho$, where $\kappa_{\mathrm{2ph}}=4|g_{\mathrm{2ph}}|^2/\Gamma$ and $\alpha=\sqrt{\epsilon_d/g_{\mathrm{2ph}}}$. Here we go beyond this by exploring a scheme which enables non-linear dissipations of higher-order. More precisely, we propose a method to achieve a four-photon interaction Hamiltonian without significantly increasing the required hardware complexity. The idea consists of using a Raman-type process~\citep{Gardiner2015}, exploiting virtual transitions, to cascade two $H_{\mathrm{2ph}}$ interactions.
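The stabilization of a two-dimensional manifold by the effective dissipator $\kappa_{\mathrm{2ph}}\mathcal{D}[\hat a^2-\alpha^2]$ can be illustrated in the same spirit (again a minimal sketch assuming QuTiP, with arbitrary illustrative parameters): the coherent states $\ket{\pm\alpha}$ are annihilated by the jump operator and are therefore steady, while the vacuum relaxes into the stabilized manifold.
\begin{verbatim}
# Minimal illustrative sketch (assumes QuTiP): the effective two-photon process
# kappa_2ph D[a^2 - alpha^2] leaves span{|alpha>, |-alpha>} invariant.
import numpy as np
import qutip as qt

N, alpha, kappa_2ph = 40, 2.0, 1.0
a = qt.destroy(N)
jump = np.sqrt(kappa_2ph) * (a**2 - alpha**2 * qt.qeye(N))

# |+alpha> and |-alpha> are annihilated by (a^2 - alpha^2): exact steady states.
for s in (alpha, -alpha):
    print(abs(qt.expect(jump.dag() * jump, qt.coherent(N, s))))  # ~0

# Starting from the vacuum, the state relaxes into the stabilized manifold.
tlist = np.linspace(0.0, 10.0 / kappa_2ph, 201)
result = qt.mesolve(0 * a, qt.fock(N, 0), tlist, [jump])
print(qt.expect(jump.dag() * jump, result.states[-1]))  # decays towards 0
\end{verbatim}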
An important application of such a manifold stabilization is error-protected quantum information encoding and processing. As suggested in~\citep{Mirrahimi2014}, a four-photon driven-dissipative process enables a protected encoding of quantum information in two steady states of the same photon-number parity. Continuous monitoring of the photon-number parity observable then enables a protection against the dominant decay channel of the harmonic oscillator corresponding to the natural single-photon loss~\cite{Ofek2016}.
Section~\ref{sec:cascading} describes the scheme to achieve a four-photon exchange Hamiltonian. In Section~\ref{sec:including_dissipation}, we study the dynamics in presence of dissipation of the low-Q mode along with possible improvements. In the Appendices~\ref{appendix:rwa_with_dissipation} and~\ref{appendix:optimizing_gamma} we discuss the derivation of the effective master equation and the accuracy of approximations used in the analytical calculations.
\section{Cascading nonlinear processes}
\label{sec:cascading}
Similar to the ideas presented in the last section, in order to have four-photon dissipation, we need to build a process that exchanges four cavity photons with an excitation in a dissipative mode. Following the example of the two-photon process, this could be realized by engineering an interaction Hamiltonian of the form $H_{\mathrm{int}}/\hbar=g_{4\mathrm{ph}} \hat{a}^4 \hat{b}^\dagger + g_{4\mathrm{ph}}^* \hat{a}^{\dagger 4}\hat{b}$, which exchanges four photons of the cavity mode with a single excitation of the low-Q mode. We need the strength of the interaction $|g_{4\mathrm{ph}}|$ to significantly exceed the decay rate of the storage cavity mode $\hat{a}$.
The Hamiltonian of a Josephson junction provides us with a six-wave mixing process which combined with an off-resonant pump at frequency $\omega_p=4\omega_a-\omega_b$ could in principle produce such an interaction. However, this six-wave mixing process comes along with other nonlinear terms in the Hamiltonian which could be of the same or higher magnitude. In particular, as it has been explained in~\citep{Mirrahimi2014}, with the currently achievable experimental parameters, the cavity self-Kerr effect would be at least an order of magnitude larger than $|g_{4\mathrm{ph}}|$.
Reference~\citep{Mirrahimi2014} proposes a more elaborate Josephson circuit to realize a purer interaction Hamiltonian. This, however, comes at the expense of significant hardware development and might encounter other unknown experimental limitations. Here we propose an alternative approach, which is based on cascading two-photon exchange processes. This leads to significant hardware simplifications and could in principle be realized with current experimental setups~\citep{Leghtas2015}. In the next subsection, we give a schematic representation of the proposed protocol, which uses higher energy levels of the junction mode and a cascading based on Raman transition~\citep{Gardiner2015}. In Subsection~\ref{sec:rwa_without_dissipation} we sketch a mathematical analysis based on the second order rotating wave approximation (RWA). This is supplemented by numerical simulations comparing the exact and the approximate Hamiltonians.
\subsection{Four-photon exchange scheme}
\label{sec:basic_principle}
In order to combine a pair of two-photon exchange processes, we take advantage of the junction mode being a multilevel anharmonic system. The basic principle of our scheme is illustrated in Fig.~\ref{fig:basic_principle}. More precisely, we exchange four cavity photons with two excitations of the junction mode. This could be done in a sequential manner by exchanging, twice, two cavity photons with an excitation of the junction mode, once from $g$ to $e$ and then from $e$ to $f$. However, as will be seen later, populating the $e$ level of the junction mode leads to undesired decoherence channels for the cavity mode. Therefore we perform this cascading using a virtual transition through the $e$ level by detuning the two-photon exchange pumps. This is similar to a Raman transition in a three level system.
Consequently, we apply two pumps at frequencies, $\omega_{p1}=2\omega_a-\omega_b-\Delta$ and $\omega_{p2}=2\omega_a-(\omega_b-\chi_{bb})+\Delta$ as shown in Fig.~\ref{fig:basic_principle_a}. Note that here we are considering the $\hat{b}$ mode to be a junction mode with frequency $\omega_b$ and anharmonicity $\chi_{bb}$. For this protocol to work we require $\chi_{bb}\gg\Delta$, as we will justify in the next section. Starting in the state $\ket{g,n}$ these pumps make a transition to the state $\ket{f,n-4}$, passing virtually through the state $\ket{e,n-2}$ (see Fig.~\ref{fig:basic_principle_b}). As we will see in Section~\ref{sec:including_dissipation}, in order to achieve four-photon dissipation, we also require the junction mode to dissipate from the $f$ to the $g$ state.
\subsection{Analytical derivation using second-order RWA}
\label{sec:rwa_without_dissipation}
\begin{figure}
\caption{Numerical simulation of the four-photon exchange process. (a) We compare the effective dynamics (ED) given by the Hamiltonian \eqref{eq:H_effective} with the full dynamics given by \eqref{eq:H_sys}. (b) Population leakage to the $\ket{e,2}$ state.}
\label{fig:full_effective_comparison_H}
\end{figure}
Here, starting from the full Hamiltonian of the junction-cavity system, we provide a mathematical analysis of the proposed scheme. We also include an additional drive in our calculations, which will address a two-photon transition between the $g$ and $f$ levels of the junction mode. The importance of this drive will be clear in Section~\ref{sec:including_dissipation} when we talk about a four-photon driven-dissipative process. The starting Hamiltonian is given by~\citep{Nigg2012}
\begin{align}
\frac{H(t)}{\hbar} =& \omega_a \hat{a}^\dagger \hat{a} + \omega_b \hat{b}^\dagger \hat{b} - \frac{E_J}{\hbar} \left[\cos{\left(\hat{\varphi}\right)}+\frac{\hat{\varphi}^2}{2!}\right] \nonumber\\
&+ \sum_{k=1}^3 \epsilon_{pk}(t) \left(\hat{b}+\hat{b}^\dagger\right), \label{eq:basic-hamiltonian}\nonumber
\end{align}
where $E_J$ is the Josephson energy and $\hat{\varphi}=\phi_a (\hat{a}+\hat{a}^\dagger)+\phi_b (\hat{b}+\hat{b}^\dagger)$. Here $\phi_{a(b)}=\phi_{\mathrm{ZPF},a(b)}/\phi_0$, where $\phi_{\mathrm{ZPF},a(b)}$ corresponds to the zero-point fluctuations of the two modes as seen by the junction and $\phi_0=\hbar/2e$ is the reduced superconducting flux quantum. The drive fields $\epsilon_{pk}(t)= 2\epsilon_{pk} \cos\left(\omega_{pk} t + \theta_{k}\right)$ represent the off-resonant pump terms. The pump frequencies are selected to be
\begin{align}
\omega_{p1}&=2\tilde\omega_a-\tilde\omega_b-\Delta+\delta \nonumber \\
\omega_{p2}&=2\tilde\omega_a-(\tilde\omega_b-\chi_{bb})+\Delta+\delta\nonumber\\
\omega_{p3}&= \tilde\omega_b - \frac{\chi_{bb}}{2} -\frac{\delta}{2}
\end{align}
where $\tilde{\omega}_{a}$ and $\tilde{\omega}_{b}$ are Lamb and Stark shifted cavity and junction mode frequencies. The additional detuning $\delta\ll \Delta$ will be selected to compensate for higher order frequency shifts.
Following the supplementary material of~\citep{Leghtas2015}, we go into a displaced frame absorbing the pump terms in the cosine. This leads to the Hamiltonian
\begin{align}
\frac{H'(t)}{\hbar} =& \omega_a \hat{a}^\dagger \hat{a} + \omega_b \hat{b}^\dagger \hat{b} - \frac{E_J}{\hbar} \left[\cos{\left(\hat{\Phi}(t)\right)}+\frac{\hat{\Phi}^2(t)}{2!}\right], \nonumber
\end{align}
where
\begin{align}
\hat{\Phi}(t)= \phi_a \hat{a}+\phi_b \hat{b} +\phi_b \sum_{k=1}^3\xi_{k}\exp(-i\omega_{pk}t)+ \mathrm{h.c.}.\nonumber
\end{align}
Here $\mathrm{h.c.}$ stands for Hermitian conjugate and $\xi_{k}$ are complex coefficients related to phases and amplitudes of the pumps.
Expanding the cosine up to fourth-order terms and keeping only the diagonal and the two-photon exchange terms, we get a Hamiltonian of the form
\begin{align}
\frac{H_{\mathrm{sys}}(t)}{\hbar}=&\tilde\omega_a \hat{a}^\dagger \hat{a} + \tilde\omega_b \hat{b}^\dagger \hat{b} \nonumber\\
& - \frac{\chi_{aa}}{2} \hat{a}^{\dagger 2} \hat{a}^2 - \frac{\chi_{bb}}{2} \hat{b}^{\dagger 2} \hat{b}^2 - \chi_{ab} \hat{a}^\dagger \hat{a} \hat{b}^\dagger \hat{b} \nonumber\\
&+\sum_{k=1,2} \left(g_k \exp\left(-i\omega_{pk}t\right) \hat{a}^{\dagger 2} \hat{b} + \mathrm{h.c.}\right)\nonumber\\
&-\left(g_3 \exp\left(2i\omega_{p3}t\right) \hat{b}^2 + \mathrm{h.c.}\right).\label{eq:H_sys}
\end{align}
Here we have ignored all the other terms assuming a sufficiently large frequency difference, $|\tilde{\omega}_a-\tilde{\omega}_b|$, between the two modes. Indeed, in the rotating frame of $\tilde\omega_a \hat{a}^\dagger \hat{a} + \tilde\omega_b \hat{b}^\dagger \hat{b}$ these terms will be oscillating at significantly higher frequencies. In the above Hamiltonian, $\chi_{aa}$ and $\chi_{bb}$ are the self-Kerr couplings of the cavity and junction modes, respectively, and $\chi_{ab}$ is the cross-Kerr coupling between them. Furthermore, $\tilde\omega_a$ and $\tilde\omega_b$ are given by
\begin{align}
\tilde\omega_a &= \omega_a-\chi_{aa}-\frac{\chi_{ab}}{2}-\chi_{ab}\sum_{k=1}^3|\xi_k|^2\nonumber\\
\tilde\omega_b &= \omega_b-\chi_{bb}-\frac{\chi_{ab}}{2}-2\chi_{bb}\sum_{k=1}^3|\xi_k|^2. \nonumber
\end{align}
Finally, the two photon exchange strengths $g_k$ are given by
\begin{align}
g_{1/2} &= -\frac{\chi_{ab}}{2} \xi_{1/2} \hspace{0.5cm}\mathrm{and}\hspace{0.5cm} g_3 = \frac{\chi_{bb}}{2}\xi_3^{*2}. \nonumber
\end{align}
Going into rotating frame with respect to $H_0/\hbar = \tilde\omega_a \hat{a}^\dagger \hat{a} + (\tilde\omega_b-\delta) \hat{b}^\dagger \hat{b} -\frac{\chi_{bb}}{2} \hat{b}^{\dagger 2} \hat{b}^2$, the Hamiltonian becomes
\begin{align}
\frac{ H_{\mathrm{I}}(t)}{\hbar}=& \delta \hat{b}^\dagger \hat{b} - \frac{\chi_{aa}}{2} \hat{a}^{\dagger 2} \hat{a}^2- \chi_{ab} \hat{a}^\dagger \hat{a} \hat{b}^\dagger \hat{b} \nonumber\\
&+ g_1 \exp{\left[i\left(\chi_{bb}\hat{b}^\dagger \hat{b}+\Delta\right)t\right]} \hat{a}^{\dagger 2} \hat{b} + \mathrm{h.c.}\nonumber\\
&+ g_2 \exp{\left[i\left(\chi_{bb}\left(\hat{b}^\dagger \hat{b}-1\right)-\Delta\right)t\right]} \hat{a}^{\dagger 2} \hat{b} + \mathrm{h.c.}\nonumber\\
&- g_3 \exp{\left[2i\chi_{bb}\hat{b}^\dagger\hat{b}\,t\right]}\hat{b}^2+ \mathrm{h.c.}\nonumber
\end{align}
As outlined in~\citep{Mirrahimi2015}, we perform second order RWA to get
\begin{align}
H_{\mathrm{eff}} =& \overline{H_{\mathrm{I}}(t)}- i \overline{\left(H_{\mathrm{I}}(t)-\overline{H_{\mathrm{I}}(t)}\right)\int \mathrm{d}t\left(H_{\mathrm{I}}(t)-\overline{H_{\mathrm{I}}(t)}\right)} \nonumber
\end{align}
where $\overline{A(t)}=\lim_{T\to\infty}\frac{1}{T}\int_0^T A(t)\mathrm{d}t$. Using the expression for $H_{\mathrm{I}}(t)$, we get
\begin{multline}
\frac{H_{\mathrm{eff}}}{\hbar}= \left(g_{4\mathrm{ph}} \hat{a}^{\dagger 4} -\epsilon_{4\mathrm{ph}}\right) \hat\sigma_{fg} + \mathrm{h.c.}\\
+ \left(\zeta_{gaa} \hat\sigma_{gg} + \zeta_{eaa} \hat\sigma_{ee} + \zeta_{faa}\hat\sigma_{ff}-\frac{\chi_{aa}}{2}\right) \hat{a}^{\dagger 2} \hat{a}^2 \\
+ \left((\chi_{ea}-\chi_{ab}) \hat\sigma_{ee} + (\chi_{fa}-2\chi_{ab}) \hat\sigma_{ff} \right) \hat{a}^\dagger \hat{a} \\
+ \left(\delta+\frac{\chi_{ea}}{2}-\frac{3|g_3|^2}{\chi_{bb}}\right)\hat\sigma_{ee} +\left(2\delta+\frac{\chi_{fa}}{2}\right) \hat\sigma_{ff} \label{eq:H_effective}
\end{multline}
where we have only considered the first three energy levels $g$, $e$ and $f$ of the junction mode. The other energy levels of this mode are never populated in this scheme. The transition operators $\hat\sigma_{jk}$ are given by $\ket{k}\bra{j}$. The first row of \eqref{eq:H_effective} is the four-photon exchange term and the two-photon $g\leftrightarrow f$ drive on the junction mode with
\begin{align*}
g_{4\mathrm{ph}} = \sqrt{2} g_1 g_2\left(\frac{1}{\Delta}-\frac{1}{\chi_{bb}+\Delta}\right) \mbox{ and }\epsilon_{4\mathrm{ph}} = \sqrt{2}g_3.
\end{align*}
In addition to this, the pumping also modifies the cross-Kerr terms by
\begin{align*}
\chi_{ea} =& \frac{4|g_2|^2}{\chi_{bb}-\Delta} - \frac{4|g_1|^2}{\Delta},\nonumber\\
\chi_{fa} =& \frac{8|g_2|^2}{\Delta}- \frac{8|g_1|^2}{\chi_{bb}+\Delta}
\end{align*}
and produces higher order interactions
\begin{align*}
\zeta_{gaa} =& \left(\frac{|g_1|^2}{\Delta}-\frac{|g_2|^2}{\chi_{bb}+\Delta}\right),\nonumber\\
\zeta_{eaa} =& \left(-\frac{|g_1|^2(\chi_{bb}-\Delta)}{\Delta(\chi_{bb}+\Delta)}-\frac{|g_2|^2(2\chi_{bb}+\Delta)}{\Delta(\chi_{bb}+\Delta)}\right),\nonumber\\
\zeta_{faa} =& \left(\frac{|g_2|^2(2\chi_{bb}+\Delta)}{\Delta (\chi_{bb}-\Delta)}-\frac{|g_1|^2}{2\chi_{bb}+\Delta}\right) .
\end{align*}
In order to show the correctness of the effective dynamics, let us consider the oscillations between the states $\ket{f,0}$ and $\ket{g,4}$. Note that the population of the state $\ket{e,n-2}$ will remain small. The terms $\left(\zeta_{gaa}\hat{\sigma}_{gg}-\chi_{aa}/2\right)\hat{a}^{\dagger 2}\hat{a}^2$ and $(2\delta+\chi_{fa}/2)\hat{\sigma}_{ff}$ produce additional frequency shifts between $\ket{g,4}$ and $\ket{f,0}$, thus hindering the oscillations. We counter the effect of these terms by selecting parameters such that
\begin{align}
\zeta_{gaa}=\frac{\chi_{aa}}{2} \mbox{ and } \delta=-\frac{\chi_{fa}}{4}. \label{eq:zeta_gg_constraint}
\end{align}
The dynamics given by Hamiltonian \eqref{eq:H_sys} (simulated in the rotating frame of $\tilde{\omega}_a\hat{a}^\dagger \hat{a}+\tilde{\omega}_b\hat{b}^\dagger \hat{b}$) is compared with the effective dynamics given by Hamiltonian \eqref{eq:H_effective} in Fig.~\ref{fig:full_effective_comparison_H}a. The system parameters are $\chi_{aa}/(2\pi)=\SI{312}{\hertz}$, $\chi_{bb}/(2\pi)=\SI{200}{\mega\hertz}$, $\chi_{ab}/(2\pi)=\SI{0.5}{\mega\hertz}$ satisfying $\chi_{ab}=2\sqrt{\chi_{aa}\chi_{bb}}$~\citep{Nigg2012}. The values $\Delta/(2\pi)=\SI{50}{\mega\hertz}$, $\delta=\SI{153}{\kilo\hertz}$, $g_1/(2\pi)=\SI{899}{\kilo\hertz}$ and $g_2/(2\pi)=\SI{2}{\mega\hertz}$ are selected to satisfy \eqref{eq:zeta_gg_constraint}. The third drive $g_3$ is set to zero in this simulation. Dynamics given by both, \eqref{eq:H_sys} and \eqref{eq:H_effective}, show the required oscillations. The slight mismatch between the oscillation frequencies is due to a higher order effect induced by the occupation of the state $\ket{e,2}$. Figure~\ref{fig:full_effective_comparison_H}b shows the population leakage to the $\ket{e,2}$ state. This leakage leads to an important limitation of the protocol (see Subsection~\ref{sec:effective_master_equation}).
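As a rough consistency check of these parameter choices (a short Python calculation included purely for illustration; all quoted values are read as $X/(2\pi)$ in MHz, so the $2\pi$ factors cancel in every ratio, and small discrepancies reflect the rounding of the quoted numbers):
\begin{verbatim}
# Rough consistency check of the quoted parameters (illustration only).
# All quantities are X/(2*pi) in MHz; only ratios enter, so 2*pi cancels.
chi_aa, chi_bb = 312e-6, 200.0            # MHz
Delta, g1, g2 = 50.0, 0.899, 2.0          # MHz

zeta_gaa = g1**2 / Delta - g2**2 / (chi_bb + Delta)
chi_fa = 8 * g2**2 / Delta - 8 * g1**2 / (chi_bb + Delta)
g_4ph = 2**0.5 * g1 * g2 * (1 / Delta - 1 / (chi_bb + Delta))

print(zeta_gaa * 1e6, chi_aa / 2 * 1e6)   # ~164 Hz vs 156 Hz: constraint zeta_gaa = chi_aa/2
print(abs(chi_fa) / 4 * 1e3)              # ~153 kHz, the magnitude of the quoted delta
print(g_4ph * 1e3)                        # ~41 kHz, consistent with the value in the appendix
\end{verbatim}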
\section{Four-photon driven-dissipative process}
\label{sec:including_dissipation}
We have shown in the last section that we get a four-photon exchange Hamiltonian \eqref{eq:H_effective} by cascading two-photon exchange processes. In this section we combine this idea with the dissipation of the junction mode to achieve a four-photon driven-dissipative dynamics on the cavity mode. In Subsection~\ref{sec:effective_master_equation}, we present the effective master equation governing the dynamics of the cavity. In particular, we observe that, as an undesired effect of population leakage towards the $e$ state, we introduce a two-photon dissipation on the cavity mode. This problem is addressed in Subsection~\ref{sec:two_photon_dissipation_error} by engineering the noise spectral density seen by the junction mode. Additionally, we analyze the performance of the proposed schemes through numerical simulations of the full and effective master equations.
\subsection{Effective master equation}
\label{sec:effective_master_equation}
\begin{figure*}
\caption{Simulation results for the full dynamics (FD, solid lines) given by the master equation \eqref{eq:full_eom_gamma12} and the effective dynamics (dashed lines) given by \eqref{eq:effective_eom}; see main text for parameters. (a), (b) Overlap of the cavity state with the target cat state $\ket{\mathcal{C}_\alpha^{(0\mathrm{mod}4)}}$. (c) Wigner function of the resulting steady state.}
\label{fig:fidelity_purity}
\label{fig:wigners_vs_time}
\label{fig:simulation_results}
\end{figure*}
We consider the junction mode to be coupled to a cold bath, leading to the master equation
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t}\rho = -\frac{i}{\hbar}\left[H_{\mathrm{sys}}(t), \rho\right]+\Gamma_1\mathcal{D}[\hat{b}]\rho. \label{eq:full_eom}
\end{align}
where the Hamiltonian $H_{\mathrm{sys}}$ is given by~\eqref{eq:H_sys}. Note that this master equation implicitly assumes a white noise spectrum for the bath degrees of freedom. In Appendix~\ref{appendix:rwa_with_dissipation}, we will provide a more general analysis considering an arbitrary noise spectrum. Indeed, in this appendix, we perform RWA under such general assumptions, arriving at a time-independent master equation for the junction-cavity system. Under the assumption of strong dissipation, we can also eliminate the junction degrees of freedom, resulting in an effective master equation for the cavity mode (see Appendix~\ref{appendix:optimizing_gamma}):
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t}\rho_{\mathrm{cav}}=& -i\left[\left(\zeta_{gaa}-\chi_{aa}\right)\hat{a}^{\dagger 2}\hat{a}^2,\rho_{\mathrm{cav}}\right] \nonumber\\
&+ \kappa_{4\mathrm{ph}} \mathcal{D}[\hat{a}^4-\alpha^4]\rho_{\mathrm{cav}} + \kappa_{2\mathrm{ph}} \mathcal{D}[\hat{a}^2]\rho_{\mathrm{cav}} \label{eq:effective_eom}
\end{align}
with
\begin{align}
\kappa_{4\mathrm{ph}} =& \frac{2 |g_{4\mathrm{ph}}|^2}{\Gamma_1},\nonumber\\
\kappa_{2\mathrm{ph}} =& \left(\frac{|g_1|^2}{\Delta^2}+\frac{|g_2|^2}{(\Delta+\chi_{bb})^2}\right)\Gamma_1,\nonumber \\
\alpha =& \left(\frac{\epsilon_{4\mathrm{ph}}}{g_{4\mathrm{ph}}}\right)^{1/4}. \label{eq:kappa_4ph_kappa_2ph}
\end{align}
While we get the expected four-photon driven-dissipative term $\kappa_{4\mathrm{ph}}\mathcal{D}[\hat{a}^4-\alpha^4]$, we also inherit an undesired two-photon dissipation $\kappa_{2\mathrm{ph}}\mathcal{D}[\hat{a}^2]$. Such two photon dissipation corresponds to jumps between states with same photon number parities thus effectively introducing bit-flip errors in the logical code-space \citep{Mirrahimi2014}. In the next subsection, we will remedy this problem by engineering the dissipation of the junction mode.
To establish the validity of \eqref{eq:effective_eom}, we numerically compare the dynamics of the two master equations~\eqref{eq:full_eom} and~\eqref{eq:effective_eom}. The blue curves in Fig.~\ref{fig:simulation_results}a and~\ref{fig:simulation_results}b correspond to these simulations. We initialize the system in its ground state and plot the overlap with the cat state $\ket{\mathcal{C}_\alpha^{(0\mathrm{mod}4)}}=\mathcal{N}\left(\ket{\alpha}+\ket{-\alpha}+\ket{i\alpha}+\ket{-i\alpha}\right)$ where $\mathcal{N}$ is a normalization factor. Note that as the cavity is initialized in the vacuum state, we expect the four-photon driven-dissipative process to steer the state towards $\ket{\mathcal{C}_\alpha^{(0\mathrm{mod}4)}}$~\citep{Mirrahimi2014}. The chosen system parameters are the same as in the last section. The additional dissipation parameter $\Gamma_1/(2\pi) = \SI{2}{\mega\hertz}$. We also select $g_3/(2\pi)=\SI{460}{\kilo\hertz}$ to achieve a cat amplitude of $\alpha=2$. These parameters give $1/\kappa_{4\mathrm{ph}}\sim \SI{96}{\micro\second}$ and $1/\kappa_{2\mathrm{ph}}=\SI{205}{\micro\second}$. The maximum achieved overlap with the target state ($\ket{\mathcal{C}_\alpha^{(0\mathrm{mod}4)}}$) is merely above $50\%$. This is expected, since the two-photon dissipation rate is not much smaller than the four-photon dissipation rate. More precisely, the resulting steady state is a mixture of two even parity states represented by the Wigner function in Fig.~\ref{fig:simulation_results}c. Note that while this is a mixed state, the conservation of the photon number parity leads to negative values in the Wigner function.
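The quoted time scales follow directly from \eqref{eq:kappa_4ph_kappa_2ph}; for completeness, a short numerical evaluation (an illustration, with the quoted values interpreted as $X/(2\pi)$ and converted to angular rates) reads:
\begin{verbatim}
# Evaluation of 1/kappa_4ph and 1/kappa_2ph from eq. (kappa_4ph_kappa_2ph)
# (illustration only; quoted values are X/(2*pi), converted to angular rates).
import math

two_pi = 2 * math.pi
g1, g2 = two_pi * 0.899e6, two_pi * 2.0e6          # rad/s
Delta, chi_bb = two_pi * 50e6, two_pi * 200e6      # rad/s
Gamma_1 = two_pi * 2e6                             # rad/s
g_4ph = math.sqrt(2) * g1 * g2 * (1 / Delta - 1 / (chi_bb + Delta))

kappa_4ph = 2 * g_4ph**2 / Gamma_1
kappa_2ph = (g1**2 / Delta**2 + g2**2 / (Delta + chi_bb)**2) * Gamma_1

print(1 / kappa_4ph * 1e6)   # ~96 microseconds
print(1 / kappa_2ph * 1e6)   # ~205 microseconds
\end{verbatim}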
\subsection{Mitigation of two-photon dissipation error}
\label{sec:two_photon_dissipation_error}
As mentioned in the previous subsection, the inherited two-photon dissipation can be seen as a bit-flip error channel in the code space. Its rate has to be compared to the rate of other errors that are not corrected by the four-component cat code. Indeed, this code can only correct for a single photon loss in the time interval $\delta t$ between two error syndrome (photon-number parity) measurements. The probability
of two single-photon losses during $\delta t$ is given by $p_{1}^{2\mathrm{ph}}=(|\alpha|^2\kappa_{1\mathrm{ph}}\delta t)^2/2$, whereas the probability for a direct two-photon loss due to the $\hat a^2$ dissipation is $p_{2}^{2\mathrm{ph}}=|\alpha|^4\kappa_{2\mathrm{ph}}\delta t$. Hence, we require the induced error probability $p_2^{2\mathrm{ph}}$ to be of the same order as or smaller than $p_1^{2\mathrm{ph}}$. Therefore, we need to reduce $\kappa_{2\mathrm{ph}}$ to a value smaller than $\kappa_{1\mathrm{ph}}^2\delta t/2$. In this subsection, we propose a simple modification of the above scheme, which, with currently achievable experimental parameters, should reduce $\kappa_{2\mathrm{ph}}/\kappa_{1\mathrm{ph}}$ to less than $0.01$.
One such approach is to use a dynamically engineered coupling of the junction mode to the bath. More precisely, we start with a high-Q junction mode corresponding to a small $\Gamma_1$ with respect to the Hamiltonian parameters. By dispersively coupling this mode to a low-Q resonator in the photon-number resolved regime~\citep{Schuster2007}, one can engineer a dynamical cooling protocol similar to DDROP~\citep{Geerlings2013}, or parametric sideband cooling~\citep{Pechal2014,Narla2016}. While these experiments correspond to a dynamical cooling from $e$ to $g$, one can easily modify them to achieve a direct dissipation from $f$ to $g$. Here we model this engineered dissipation by adding a Lindblad term of the form $\Gamma_{fg}^{\mathrm{eng}}\mathcal{D}[\hat \sigma_{fg}]\rho$. This leads to the new dissipation rate
\begin{equation}
\kappa_{4\mathrm{ph}} = \frac{4 |g_{4\mathrm{ph}}|^2}{\Gamma_{fg}^{\mathrm{eng}}+2\Gamma_1},\label{eq:improved_scheme_rates}
\end{equation}
while the two-photon dissipation rate $\kappa_{2\mathrm{ph}}$ remains unchanged, as given by~\eqref{eq:kappa_4ph_kappa_2ph}.
The green curves in Fig.~\ref{fig:simulation_results}a and~\ref{fig:simulation_results}b illustrate the numerical simulations of this modified scheme. The solid curves correspond to the simulation of the master equation
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t}\rho = -\frac{i}{\hbar}\left[H_{\mathrm{sys}}(t), \rho\right]+\Gamma_1\mathcal{D}[\hat{b}]\rho + \Gamma_{fg}^{\mathrm{eng}}\mathcal{D}[\hat{\sigma}_{fg}]\rho, \label{eq:full_eom_gamma12}
\end{align}
with $\Gamma_1/(2\pi)=\SI{3}{\kilo\hertz}$ and $\Gamma_{fg}^{\mathrm{eng}}/(2\pi)=\SI{4}{\mega\hertz}$. This value is a compromise between the strength of $\kappa_{4\mathrm{ph}}$ and the validity of the adiabatic elimination, as shown in Appendix~\ref{appendix:optimizing_gamma}. Similarly, the dashed green curves correspond to the simulation of \eqref{eq:effective_eom} with $\kappa_{4\mathrm{ph}}$ now given by \eqref{eq:improved_scheme_rates}. Indeed, with these parameters, we achieve $1/\kappa_{4\mathrm{ph}}\sim \SI{96}{\micro\second}$ and $1/\kappa_{2\mathrm{ph}}=\SI{136}{\milli\second}$. In Appendix~\ref{appendix:optimizing_gamma}, we will provide an alternative approach based on using band-pass Purcell filters~\citep{Reed2010}, shaping the noise spectrum seen by the junction mode.
\section{Conclusion}
\label{sec:conclusion}
We have presented a theoretical proposal for the implementation of a controlled four-photon driven-dissipative process on a harmonic oscillator. By stabilizing the manifold span$\{\ket{\pm\alpha},\ket{\pm i\alpha}\}$, this process provides a means to realize an error-corrected logical qubit~\citep{Mirrahimi2014}. Our proposal relies on cascading two-photon exchange processes, which have already been experimentally demonstrated~\citep{Leghtas2015}.
While the required hardware complexity is similar to the existing system, the parameters need to be carefully chosen to avoid undesired interactions.
The technique of cascading nonlinear processes through Raman-like virtual transitions can be used to engineer other highly nonlinear interactions. In particular, a Hamiltonian of the form $g_{\mathrm{12}}\hat{a}_1^{2\dagger}\hat{a}_2^2+g^*_{12}\hat{a}_1^2 \hat{a}_2^{2\dagger}$ could entangle two logical qubits encoded in two high-Q cavities $\hat{a}_1$ and $\hat{a}_2$~\citep{Mirrahimi2014}. Such an interaction can be generated by coupling the cavities through a Josephson junction mode $\hat{b}$ and applying two off-resonant pumps at frequencies $\omega_{p1} = 2\tilde\omega_{a1}-\tilde\omega_b-\Delta$ and $\omega_{p2}=2\tilde\omega_{a2}-\tilde\omega_b-\Delta$. This entangling gate constitutes another important step towards fault-tolerant universal quantum computation with cat-qubits.
\section{RWA in presence of dissipation}
\label{appendix:rwa_with_dissipation}
We start by considering the Hamiltonian of a junction-cavity system where the junction mode is dissipative. This dissipation is typically modeled by a linear coupling to a continuum of infinitely many non-dissipative modes~\citep{Caldeira1983}. Here, instead, we model this dissipation by a linear coupling of the junction mode to infinitely many harmonic oscillators with finite frequency spacing and finite bandwidths. Indeed, for an under-coupled system (weak dissipation), we can use LCR elements~\citep{Nigg2012,Solgun2014} to represent the dissipation in terms of such dissipative oscillators (see Fig.~\ref{fig:foster}). Such a discretization could also be explained taking into account experimental considerations where the dissipation is mediated by various filters which could themselves be seen as lossy resonators. More precisely, we consider a Hamiltonian
\begin{align}
\frac{H_{\mathrm{tot}}}{\hbar} =& \frac{H_{\mathrm{sys}}}{\hbar}+ \sum_k \omega_k \hat{c}^{\dagger}[\omega_k]\hat{c}[\omega_k]\nonumber\\
&+ \sum_k \left(\Omega[\omega_k] \hat{b}\hat{c}^{\dagger}[\omega_k]+\Omega^*[\omega_k] \hat{b}^\dagger\hat{c}[\omega_k] \right), \nonumber
\end{align}
where $H_{\mathrm{sys}}$ is the system Hamiltonian given in \eqref{eq:H_sys} and the modes $\hat{c}[\omega_k]$ have decay rates $\gamma[\omega_k]$. We perform the second-order RWA on the associated master equation by going into the rotating frame of $\widetilde{H}_0 = \tilde\omega_a \hat{a}^\dagger \hat{a} + \tilde\omega_b \hat{b}^\dagger \hat{b} -\frac{\chi_{bb}}{2} \hat{b}^{\dagger 2} \hat{b}^2 + \sum_k \hbar\omega_k \hat{c}^\dagger[\omega_k] \hat{c}[\omega_k]$. The effective master equation becomes
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t}{\rho} =& -\frac{i}{\hbar}\left[H_{\mathrm{eff,bath}},\rho\right]+\sum_k (1+n_{\mathrm{th}}[\omega_k])\gamma[\omega_k]\mathcal{D}\left[\hat{c}[\omega_k]\right]\rho \nonumber\\
&+\sum_k n_{\mathrm{th}}[\omega_k]\gamma[\omega_k]\mathcal{D}\left[\hat{c}^\dagger[\omega_k]\right]\rho \nonumber
\end{align}
\begin{figure}
\caption{Modelling dissipation using LCR elements~\cite{Solgun2014}.}
\label{fig:foster}
\end{figure}
where $n_{\mathrm{th}}[\omega_k]$ denotes the thermal population of the $k$th mode. The Hamiltonian $H_{\mathrm{eff,bath}}$ is given by
\begin{widetext}
\begin{align}
\frac{H_{\mathrm{eff,bath}}}{\hbar}\approx&\frac{H_{\mathrm{eff}}}{\hbar}+\sum_{n=0}^{\infty} \left(\Omega[\tilde\omega_b-(n-1)\chi_{bb}] \hat{c}^\dagger[\tilde\omega_b-(n-1)\chi_{bb}]\sqrt{n}\hat\sigma_{n,n-1} +\mathrm{h.c.}\right)\nonumber\\
&+\left(g_1^* \Omega[\omega_{+\Delta,0}]\hat{c}^\dagger[\omega_{+\Delta,0}] \hat{a}^2 + \mathrm{h.c.}\right)\sum_{n=0}^{\infty}\frac{\chi_{bb}-\Delta}{(n\chi_{bb}+\Delta)((n-1)\chi_{bb}+\Delta)}\hat\sigma_{n,n} \nonumber\\
&+\left(g_2^* \Omega[\omega_{-\Delta,1}]\hat{c}^\dagger[\omega_{-\Delta,1}] \hat{a}^2 + \mathrm{h.c.}\right)\sum_{n=0}^{\infty}\frac{2\chi_{bb}+\Delta}{((n-2)\chi_{bb}-\Delta)((n-1)\chi_{bb}-\Delta)}\hat\sigma_{n,n} \nonumber\\
&+\sum_{n=0}^{\infty} \left(\frac{g_1\chi_{bb}\Omega[\omega_{+\Delta,2n}]}{(n\chi_{bb}+\Delta)((n+1)\chi_{bb}+\Delta)}\hat{c}^\dagger[\omega_{+\Delta,2n}]\hat{a}^{\dagger 2} \sqrt{(n+1)(n+2)}\hat{\sigma}_{n+2,n}+\mathrm{h.c.}\right)\nonumber\\
&+\sum_{n=0}^{\infty} \left(\frac{g_2\chi_{bb}\Omega[\omega_{-\Delta,2n+1}]}{(n\chi_{bb}-\Delta)((n-1)\chi_{bb}-\Delta)}\hat{c}^\dagger[\omega_{-\Delta,2n+1}] \hat{a}^{\dagger 2} \sqrt{(n+1)(n+2)}\hat{\sigma}_{n+2,n}+\mathrm{h.c.}\right).\label{eq:RWA_full_result}
\end{align}
\end{widetext}
Here $n$ indicates the number states of the junction mode (specifically $n=0,1,2$ correspond to $g,e,f$ levels in the main text). The frequencies $\omega_{\pm\Delta,n}$ are defined as $\tilde\omega_b\pm\Delta-n\chi_{bb}$. Along with the terms presented in \eqref{eq:RWA_full_result}, we also obtain terms of the form $\hat{\sigma}_{nn}\hat{c}^\dagger[\omega_k]\hat{c}[\omega_k]$ and $\hat{\sigma}_{n+2,n}\hat{c}^\dagger[\omega_k]\hat{c}^\dagger[\omega_m]+\mathrm{h.c.}$. The first type of terms corresponds to the dispersive coupling of the junction mode and the bath modes. For non-zero bath temperature, they contribute to the dephasing of the junction mode states. The latter terms become resonant when $\hbar(\omega_k+\omega_m)$ equals the energy difference between the states $n$ and $n+2$ of the junction mode and give rise to a direct two-photon dissipation between the two states. As stated in Section~\ref{sec:two_photon_dissipation_error} such a direct dissipation from the $f$ to $g$ state actually enhances the performance of the protocol. However, without additional engineering, the magnitude of such interactions is negligible compared to the regular single-photon dissipation terms in \eqref{eq:RWA_full_result}.
\begin{figure*}
\caption{Panel (a) compares the system dynamics before~\eqref{eq:before_elimination} and after~\eqref{eq:effective_eom} adiabatic elimination of the junction mode. Panels (b) and (c) show the overlap with the target cat state and the time needed to reach $90\%$ fidelity as functions of $\Gamma_{fg}^{\mathrm{eng}}$.}
\label{fig:adiabatic_validity}
\end{figure*}
Next, by adiabatically eliminating the highly dissipative bath modes, we obtain the master equation
\begin{widetext}
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t}\rho =& -\frac{i}{\hbar}\left[H_{\mathrm{eff}},\rho\right]+\sum_{n=0}^{\infty} \left(n\Gamma_{\downarrow}[\tilde\omega_b-(n-1)\chi_{bb}]\mathcal{D}[\hat\sigma_{n,n-1}]+ n\Gamma_{\uparrow}[\tilde\omega_b-(n-1)\chi_{bb}]\mathcal{D}[\hat\sigma_{n-1,n}]\right)\rho \nonumber \\
&+\sum_{n=0}^{\infty}\left(\frac{|g_1|(\chi_{bb}-\Delta)}{(n\chi_{bb}+\Delta)((n-1)\chi_{bb}+\Delta)}\right)^2 \left(\Gamma_{\downarrow}[\omega_{+\Delta,0}]\mathcal{D}[\hat{a}^2\hat\sigma_{n,n}]+\Gamma_{\uparrow}[\omega_{+\Delta,0}]\mathcal{D}[\hat{a}^{\dagger 2}\hat\sigma_{n,n}]\right)\rho\nonumber\\
&+\sum_{n=0}^{\infty}\left(\frac{|g_2|(2\chi_{bb}+\Delta)}{((n-2)\chi_{bb}-\Delta)((n-1)\chi_{bb}-\Delta)}\right)^2 \left(\Gamma_{\downarrow}[\omega_{-\Delta,1}]\mathcal{D}[\hat{a}^2\hat\sigma_{n,n}]+\Gamma_{\uparrow}[\omega_{-\Delta,1}]\mathcal{D}[\hat{a}^{\dagger 2}\hat\sigma_{n,n}]\right)\rho\nonumber\\
&+\sum_{n=0}^{\infty}\left(\frac{\sqrt{(n+1)(n+2)}|g_1|\chi_{bb}}{(n\chi_{bb}+\Delta)((n+1)\chi_{bb}+\Delta)}\right)^2 \left(\Gamma_{\downarrow}[\omega_{+\Delta,2n}]\mathcal{D}[\hat{a}^{\dagger 2}\hat\sigma_{n+2,n}]+\Gamma_{\uparrow}[\omega_{+\Delta,2n}]\mathcal{D}[\hat{a}^2\hat\sigma_{n,n+2}]\right)\rho\nonumber\\
&+\sum_{n=0}^{\infty}\left(\frac{\sqrt{(n+1)(n+2)}|g_2|\chi_{bb}}{(n\chi_{bb}-\Delta)((n-1)\chi_{bb}-\Delta)}\right)^2 \left(\Gamma_{\downarrow}[\omega_{-\Delta,2n+1}]\mathcal{D}[\hat{a}^{\dagger 2}\hat\sigma_{n+2,n}]+\Gamma_{\uparrow}[\omega_{-\Delta,2n+1}]\mathcal{D}[\hat{a}^2\hat\sigma_{n,n+2}]\right)\rho \label{eq:after_RWA_ME}
\end{align}
\end{widetext}
where
\begin{align*}
\Gamma_{\downarrow}[\omega]&= \frac{4(1+n_{\mathrm{th}}[\omega])|\Omega[\omega]|^2}{\gamma[\omega]}\\
\Gamma_{\uparrow}[\omega]&= \frac{4 n_{\mathrm{th}}[\omega]|\Omega[\omega]|^2}{\gamma[\omega]}.
\end{align*}
In the next appendix we study the adiabatic elimination of the junction mode.
\section{Validity of adiabatic elimination}
\label{appendix:optimizing_gamma}
Here we limit ourselves to the lowest three levels ($\ket{g}$, $\ket{e}$ and $\ket{f}$) of the junction mode and furthermore we assume the bath to be at zero temperature. Additionally, following the discussion in Section~\ref{sec:two_photon_dissipation_error}, we consider the case of an engineered bath potentially leading to a strong direct dissipation from $f$ to $g$. The master equation is given by
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t}\rho &=-\frac{i}{\hbar}\left[H_{\mathrm{eff}},\rho\right] +\left(\Gamma_\downarrow[\tilde{\omega}_b]\mathcal{D}[\hat\sigma_{eg}]+ 2\Gamma_\downarrow[\tilde{\omega}_b-\chi_{bb}]\mathcal{D}[\hat\sigma_{fe}]\right)\rho \nonumber\\
&+\left(\kappa_{2,gg}\mathcal{D}[\hat{a}^2\hat\sigma_{gg}]+\kappa_{2,ee}\mathcal{D}[\hat{a}^2\hat\sigma_{ee}]+\kappa_{2,ff}\mathcal{D}[\hat{a}^2\hat\sigma_{ff}]\right)\rho\nonumber\\
&+\Gamma_{fg}^\mathrm{eng}\mathcal{D}[\hat\sigma_{fg}]\rho+\kappa_{2,fg}\mathcal{D}[\hat{a}^{\dagger 2}\hat\sigma_{fg}]\rho \label{eq:before_elimination}
\end{align}
where the decay rates $\kappa_{2,gg}$, $\kappa_{2,ee}$, $\kappa_{2,ff}$ and $\kappa_{2,fg}$ can be inferred from \eqref{eq:after_RWA_ME}. The rate $\Gamma_{fg}^{\mathrm{eng}}$ corresponds to the engineered direct dissipation from $f$ to $g$. Assuming $2\Gamma_{\downarrow}[\tilde{\omega}_b-\chi_{bb}]+\Gamma_{fg}^{\mathrm{eng}}\gg \|H_{\mathrm{eff}}/\hbar\|$, we adiabatically eliminate the junction mode to obtain the master equation in \eqref{eq:effective_eom}. Note, however, that for this general noise spectrum, the effective dissipation rates are given by
\begin{align}
\kappa_{4\mathrm{ph}} =& \frac{4 |g_{4\mathrm{ph}}|^2}{2\Gamma_{\downarrow}[\tilde\omega_b-\chi_{bb}]+\Gamma_{fg}^{\mathrm{eng}}},\nonumber\\
\kappa_{2\mathrm{ph}} =& \frac{|g_1|^2}{\Delta^2}\Gamma_{\downarrow}[\tilde\omega_b+\Delta]+\frac{|g_2|^2}{(\Delta+\chi_{bb})^2}\Gamma_{\downarrow}[\tilde\omega_b-\Delta-\chi_{bb}].\label{eq:diss_gen_spec}
\end{align}
The rates given in~\eqref{eq:kappa_4ph_kappa_2ph} and~\eqref{eq:improved_scheme_rates} correspond to the white noise case where $\Gamma_{\downarrow}[\omega_k]=\Gamma_1$ for all $k$.
The above general result provides another possible approach to mitigate the problem of the undesired two-photon dissipation. The two dissipation rates $\kappa_{4\mathrm{ph}}$ and $\kappa_{2\mathrm{ph}}$ from \eqref{eq:diss_gen_spec} are sensitive to noise at different frequencies. While $\kappa_{4\mathrm{ph}}$ involves the noise at frequency $\tilde\omega_b-\chi_{bb}$, the undesired $\kappa_{2\mathrm{ph}}$ involves the noise at frequencies $\omega_1 = \tilde\omega_b+\Delta$ and $\omega_2 = \tilde\omega_b-\Delta-\chi_{bb}$. It is possible to engineer the coupling of the system to an electromagnetic bath such that $\Gamma_{\downarrow}[\tilde\omega_b], \Gamma_{\downarrow}[\tilde\omega_b-\chi_{bb}]\gg \Gamma_{\downarrow}[\omega_1],\Gamma_{\downarrow}[\omega_2]$. Indeed, one can mediate the coupling between the system and the bath through a band-pass filter. This is a more elaborate version of the Purcell filter realized in~\citep{Reed2010}. The frequencies $\tilde\omega_b$ and $\tilde\omega_b-\chi_{bb}$ have to be in the pass band, whereas the frequencies $\omega_1$ and $\omega_2$ have to be in the cut-off.
The rest of this appendix is devoted to checking the validity of this adiabatic elimination through numerical simulations. In Fig.\ref{fig:adiabatic_validity}a, we compare the dynamics given by \eqref{eq:before_elimination} and \eqref{eq:effective_eom}, using the same parameters as in Fig.\ref{fig:simulation_results} (corresponding to $|\alpha|^2=4$ and $g_{4\mathrm{ph}}/2\pi=\SI{41}{\kilo\hertz}$), and taking $\Gamma_{fg}^{\mathrm{eng}}/2\pi=\SI{0.6}{\mega\hertz}$ (black) and $\Gamma_{fg}^{\mathrm{eng}}/2\pi=\SI{6}{\mega\hertz}$ (magenta). For the considered amplitude $|\alpha|^2=4$, the choice of $\Gamma_{fg}^{\mathrm{eng}}/2\pi=\SI{6}{\mega\hertz}$ satisfies the above separation of time-scales, leading to a good agreement between the dashed and solid magenta lines. The choice of $\Gamma_{fg}^{\mathrm{eng}}/2\pi=\SI{0.6}{\mega\hertz}$ leads to a disagreement with the reduced dynamics. Note that the dynamics still converges towards the expected state albeit at a slower rate. To choose the optimum working point, we perform simulations of~\eqref{eq:before_elimination}, sweeping $\Gamma_{fg}^{\mathrm{eng}}/2\pi$ from $\SI{0.1}{\mega\hertz}$ to $\SI{10}{\mega\hertz}$. In Fig.\ref{fig:adiabatic_validity}b we plot the overlap with the $\ket{\mathcal{C}_\alpha^{(0\mathrm{mod}4)}}$ cat state as a function of time and $\Gamma_{fg}^{\mathrm{eng}}$. From this, we extract the time taken to achieve 90\% fidelity as a function of $\Gamma_{fg}^{\mathrm{eng}}$. This corresponds to the green curve in~Fig.\ref{fig:adiabatic_validity}c. As illustrated by the green dot, the optimum working point is given by $\Gamma_{fg}^{\mathrm{eng}}/2\pi=\SI{2}{\mega\hertz} $. Note that the working point of $\Gamma_{fg}=\SI{4}{\mega\hertz}$ used in Fig.~\ref{fig:simulation_results} is selected to be well in the region of adiabatic validity while still getting a strong four-photon dissipation ($\kappa_{4\mathrm{ph}}$). The same simulations for $|\alpha|^2=3$ and $5$ give rise to different working points at $\SI{1}{\mega\hertz}$ and $\SI{3.7}{\mega\hertz}$ respectively. Indeed, the norm $\|H_{\mathrm{eff}}\|$, in the assumption $2\Gamma_{\downarrow}[\tilde{\omega}_b-\chi_{bb}]+\Gamma_{fg}^{\mathrm{eng}}\gg \|H_{\mathrm{eff}}/\hbar\|$, corresponds to the norm of the Hamiltonian when confined to the code space $\textrm{span}\{\ket{\pm\alpha},\ket{\pm i \alpha}\}$. This implies a separation of time-scales which depends on the amplitude $|\alpha|$ of the cat state, therefore leading to different optimum working points.
\end{document}
\begin{document}
\title{Breaking graph symmetries by edge colourings}
\begin{abstract}
The distinguishing index $D'(G)$ of a graph $G$ is the least number of colours needed in an edge colouring which is not preserved by any non-trivial automorphism. Broere and Pil\'sniak conjectured that if every non-trivial automorphism of a countable graph $G$ moves infinitely many edges, then $D'(G) \leq 2$. We prove this conjecture.
\end{abstract}
\section{Introduction}
A colouring of the vertices or edges of a graph $G$ is called distinguishing if the only automorphism which preserves it is the identity. Originally inspired by a recreational mathematics problem, Albertson and Collins~\cite{MR1394549} first introduced the notion formally in 1996. Despite (or maybe because of) its recreational origin, the concept quickly received a lot of attention leading to numerous papers on distinguishing colourings of graphs and other combinatorial structures.
One interesting line of research is the connection between motion (i.e.\ the minimal number of elements moved by a non-trivial automorphism) and the least number of colours needed in a distinguishing colouring. Intuitively, the more elements are moved by every non-trivial automorphism, the easier it should be to find a colouring with few colours which isn't preserved by any of them. Russell and Sundaram~\cite{MR1617449} were the first to make this intuition precise. They showed that if the motion of a finite graph is at least $2 \cdot \log_2 |\operatorname{Aut} G|$, then there is a distinguishing $2$-colouring. The same is true for infinite graphs whose automorphism group is finite. Tucker~\cite{zbMATH05902980} conjectured that an analogous result holds for locally finite graphs with infinite automorphism group.
\begin{con}[Infinite motion conjecture~\cite{zbMATH05902980}]
\label{con:imc}
Let $G$ be a locally finite, connected graph and assume that every automorphism of $G$ moves infinitely many vertices. Then there is a distinguishing $2$-vertex colouring.
\end{con}
Note that if the motion of such a graph is infinite, then it must be $\aleph_0$, and that $2^{\aleph_0}$ is a trivial upper bound for the size of the automorphism group.
While Tucker's conjecture is still wide open, there are many partial results towards it, see~\cite{zbMATH06351222,MR2302543,zbMATH06405771,zbMATH06261159,growth,zbMATH06045720,MR2302536}. Broere and Pil\'sniak~\cite{zbMATH06428682} noticed that most of these partial results can be generalised to edge colourings. Consequently, they conjectured that an analogous statement to Conjecture~\ref{con:imc} should hold in the realm of edge colourings. In fact their conjecture for edge colourings is even stronger as it doesn't require the graph to be locally finite.
\begin{con}[Infinite edge motion conjecture~\cite{zbMATH06428682}]
\label{con:iemc}
Let $G$ be a countable, connected graph and assume that every automorphism of $G$ moves infinitely many edges. Then there is a distinguishing $2$-edge colouring.
\end{con}
The two conjectures are closely related. In~\cite{zbMATH03228459}, a generalisation of Whitney's theorem is proved, stating that for connected graphs on more than $4$ vertices (and in particular for infinite graphs) there is a natural group isomorphism between $\operatorname{Aut} G$ and $\operatorname{Aut} L(G)$, where $L(G)$ denotes the line graph of $G$. Hence, a distinguishing vertex colouring of $L(G)$ translates into a distinguishing edge colouring of $G$ and vice versa. In particular, Conjecture~\ref{con:imc} implies the special case of Conjecture~\ref{con:iemc} where the graph is assumed to be locally finite.
Furthermore, if the generalisation of Conjecture~\ref{con:imc} to countable graphs were true, then this would immediately imply Conjecture~\ref{con:iemc}. However, in ~\cite{zbMATH06502805} a counterexample for this generalisation is constructed, making it somewhat counterintuitive that Conjecture~\ref{con:iemc} holds in full generality.
Nevertheless, in the present paper we prove Conjecture~\ref{con:iemc}. We also attempt to give some intuition why this is not as surprising as it may seem at first glance. For this purpose, in Section~\ref{sec:edgevertex} we compare distinguishing edge and vertex colourings. We show that if there is a distinguishing vertex colouring with $k$ colours, then there is a distinguishing edge colouring using at most $k+1$ colours. This is true for arbitrary graphs. One possible interpretation of this result is that finding a distinguishing edge colouring with few colours should generally be easier (or at least not harder) than finding such a vertex colouring and consequently that Conjecture~\ref{con:iemc} should be weaker than its vertex colouring counterpart.
\section{Notions and notations}
\label{sec:notions}
We will follow the terminology of~\cite{MR2159259} for all graph theoretical notions which are not explicitly defined. Let $G = (V,E)$ be a graph and let $\operatorname{Aut} G$ denote its automorphism group. A \emph{vertex colouring} of $G$ with colours in $C$ is a map $c \colon V \to C$. Analogously define an \emph{edge colouring}. We say that $\gamma \in \operatorname{Aut} G$ \emph{preserves} the (vertex or edge) colouring $c$ if $c \circ \gamma = c$. Two colourings $c$ and $d$ are called \emph{isomorphic}, if there is $\gamma \in \operatorname{Aut} G$ such that $c \circ \gamma = d$.
Call a colouring of $G$ \emph{distinguishing} if the identity is the only automorphism which preserves it. The \emph{distinguishing number} of $G$, denoted by $D(G)$, is the least number of colours in a distinguishing vertex colouring. The \emph{distinguishing index} of $G$, denoted by $D'(G)$, is the analogous concept for edge colourings.
The \emph{motion} of a graph $G$ is the least number of vertices moved by a non-trivial automorphism of $G$. The \emph{edge motion} is the least number of edges moved by a non-trivial automorphism.
\section{Infinite motion and 2-distinguishability}
\label{sec:iemc}
In this section we prove Conjecture~\ref{con:iemc}. The following lemma will be useful.
\begin{lem}
\label{lem:infcomp}
Let $G$ be a graph with infinite edge motion and let $\gamma \in \operatorname{Aut} G$. Denote by $V_{\text{move}}$ the set of vertices of $G$ which are not fixed by $\gamma$. Let $C$ be the vertex set of a component of the subgraph of $G$ induced by $V_{\text{move}}$. If $C$ is finite, then it must contain a vertex of infinite degree.
\end{lem}
\begin{proof}
Assume for a contradiction that $C$ is finite and contains no vertex of infinite degree. Then $\gamma$ moves $C$ to some component $C'$ of $G[V_{\text{move}}]$. Denote by $\partial C$ the set of vertices outside of $C$ with a neighbour in $C$. Each vertex of $\partial C$ is fixed by $\gamma$, since any vertex moved by $\gamma$ and adjacent to $C$ would already lie in $C$. If $C=C'$ we hence get the following automorphism:
\[
\gamma'(v) =
\begin{cases}
\gamma(v) & \text{if }v \in C,\\
v & \text{if }v \notin C.
\end{cases}
\]
Now $\gamma'$ only moves finitely many vertices all of which have finite degree. Hence it is an automorphism of $G$ with finite edge motion contradicting the fact that $G$ has infinite edge motion.
If $C \neq C'$ then define the following automorphism:
\[
\gamma'(v) =
\begin{cases}
\gamma(v) & \text{if }v \in C,\\
\gamma^{-1}(v) & \text{if }v \in C',\\
v & \text{otherwise}.
\end{cases}
\]
Again this is an automorphism with finite edge motion contradicting the fact that $G$ has infinite edge motion.
\end{proof}
\begin{thm}
Every countable graph with infinite edge motion has $2^{\aleph_0}$ non-isomorphic distinguishing $2$-edge colourings.
\end{thm}
\begin{proof}
We first show that there is a distinguishing edge colouring by giving an explicit construction and then argue that within this construction we can make sufficiently many choices to obtain $2^{\aleph_0}$ non-isomorphic colourings.
For the construction of the colouring we start with a colouring where all edges are coloured white and describe an inductive procedure to decide on edges whose colour will be changed to black. Since we change the colour of every edge at most once, we get a limit colouring which we will show to be distinguishing.
First we consider only edges incident to vertices of infinite degree. For this purpose choose an enumeration $(v_n^\infty)_{n \in \mathbb N}$ of these vertices and a strictly increasing sequence $(d_n)_{n \in \mathbb N}$ of natural numbers. Note that $d_n \geq n$ because the sequence is strictly increasing.
Now inductively recolour edges incident to $v_n^{\infty}$ such that this vertex is incident to exactly $d_n$ black edges. We can do so without recolouring any edges incident to any vertex appearing earlier in the enumeration because there are at most $n-1$ such edges incident to $v_n^\infty$. In particular, since $d_n \geq n$ there can't be more than $d_n$ black edges incident to $v_n^\infty$ before step $n$.
With the colouring described above, no matter how we colour the remaining edges, every colour preserving automorphism must fix every vertex of infinite degree. This is because vertices of infinite degree must be mapped to vertices of infinite degree, and all of them have different degrees in the graph spanned by the black edges.
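The effect of giving the special vertices pairwise different black degrees can be illustrated on a small finite example (a toy Python sketch assuming the networkx package; it is of course not the construction above, which concerns infinite graphs): after such a colouring, every colour preserving automorphism fixes each of these vertices.
\begin{verbatim}
# Toy finite illustration (assumes networkx): if the vertices of an automorphism-
# invariant set receive pairwise different numbers of black incident edges, then
# every colour preserving automorphism fixes each of them.
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

G = nx.complete_bipartite_graph(2, 4)   # hubs 0,1 play the role of the special vertices
hubs = [0, 1]

# Hub 0 gets one black edge, hub 1 gets two (pairwise different black degrees).
black = {frozenset((0, 2)), frozenset((1, 2)), frozenset((1, 3))}
colour = {frozenset(e): frozenset(e) in black for e in G.edges()}

def preserves(phi):
    # phi preserves the colouring if it maps every edge to an edge of the same colour
    return all(colour[frozenset((phi[u], phi[v]))] == colour[frozenset((u, v))]
               for u, v in G.edges())

autos = list(GraphMatcher(G, G).isomorphisms_iter())
good = [phi for phi in autos if preserves(phi)]
print(len(autos), len(good))                            # 48 automorphisms, 2 survive
print(all(phi[h] == h for phi in good for h in hubs))   # both hubs are fixed: True
\end{verbatim}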
Now denote by $G_{\not\infty}$ the graph obtained from $G$ by deleting all vertices of infinite degree. Since all vertices of infinite degree must be fixed by every colour preserving automorphism it follows from Lemma~\ref{lem:infcomp} that no such automorphism of $G$ moves any vertex contained in a finite component of $G_{\not \infty}$.
Hence we only need to take care of automorphisms moving vertices in infinite components of $G_{\not \infty}$. For this purpose let $(C_k)_{k \in \mathbb N}$ be an enumeration of these components and let $l_n$ be a strictly increasing sequence of natural numbers with $l_1 > 1$. We will now recolour the edges of each $C_k$ in such a way that
\begin{enumerate}[label=(\alph*)]
\item \label{itm:disjointpaths} the subgraph of $C_k$ spanned by the black edges is a vertex disjoint union of paths of lengths $1, l_k, l_{k+1}, l_{k+2}, \ldots$, and
\item \label{itm:breaksetstab} there is no colour preserving automorphism of $G$ stabilising $C_k$ setwise but not pointwise.
\end{enumerate}
Note that property~\ref{itm:disjointpaths} ensures that there is no colour preserving automorphism which moves one infinite component to another, because if $k_1 < k_2$ then $C_{k_1}$ contains a black path of length $l_{k_1}$ while $C_{k_2}$ doesn't. Property~\ref{itm:breaksetstab} makes sure that there is no colour preserving automorphism mapping $C_k$ non-trivially to itself. Combined, these two properties ensure that every colour preserving automorphism must fix every vertex in each $C_k$. Since we already established that each such automorphism must also fix all other vertices, this means that we have found a distinguishing edge colouring.
We shall now construct a colouring of the edges of $C_k$ with properties~\ref{itm:disjointpaths} and \ref{itm:breaksetstab}. First we pick an edge $e$ of $C_k$ and colour it black. Define
\[
S_i = \{v \in V(C_k) \mid d(v,e) = i\}
\]
where $d(v,e)$ denotes the minimal distance (measured in $C_k$) of $v$ to one of the two endvertices of $e$. Observe that $S_i$ must be finite for every $i$ since $C_k$ is locally finite.
Throughout the construction the edge $e$ will remain the only black edge which is not incident to any other black edge. Hence every colour preserving automorphism which maps $C_k$ onto itself must fix the edge $e$ and thus lie in the setwise stabiliser of every $S_i$. In particular, if such an automorphism acts non-trivially on $C_k$ then there is some $S_i$ on which it acts non-trivially.
Furthermore, throughout the construction it will be true that every colour preserving automorphism fixes each vertex in $V(C_k) \setminus S_0$ which is incident to a black edge. This fact will be useful to make sure that the paths we colour black are indeed disjoint.
In what follows we will only consider colour preserving automorphisms which stabilise $C_k$ setwise but not pointwise. We will denote the set of such automorphisms by $\Gamma$. Note that $\Gamma$ changes in each recolouring step. Furthermore, every $\gamma \in \Gamma$ must fix every vertex of infinite degree, because it is assumed to preserve the colouring constructed thus far. In particular we can without loss of generality assume that such an automorphism acts trivially outside of $C_k$.
Let $i \in \mathbb N$ be minimal with the property that there is $\gamma \in \Gamma$ which acts non-trivially on $S_i$. Choose $v \in S_i$ and $\gamma \in \Gamma$ such that $\gamma(v) \neq v$. Since all vertices moved by $\gamma$ have finite degree, Lemma~\ref{lem:infcomp} tells us that the component of $G[V_{\text{move}}]$ containing $v$ is infinite; being connected and locally finite, it contains a ray starting in $v$ which consists only of vertices which are moved by $\gamma$. Furthermore we can without loss of generality assume that the ray starts in $S_i$ and otherwise only contains vertices in $S_j$ for $j > i$. This can be achieved by moving to a suitable subray since $\bigcup_{j \leq i} S_j$ only contains finitely many vertices. Since every vertex of the ray is moved by $\gamma$ and $\gamma$ is colour preserving, we infer that the ray contains no vertex which is incident to a black edge.
Now let $l_n$ be the smallest length in the sequence $l_k, l_{k+1},\ldots$ which has not been used yet. We colour an initial piece $P$ of length $l_n$ of our ray black and leave the rest of the colouring as it is.
Clearly $P$ is vertex disjoint from all other black paths constructed so far. Furthermore, since there is no other black path of length $l_n$ in $C_k$, each automorphism in $\Gamma$ must fix $P$ setwise (note that by recolouring $P$ we changed $\Gamma$). Finally every such automorphism must fix $P$ pointwise because only one endpoint of $P$ lies in the set $S_i$.
Since $|S_i|$ is an upper bound on the number of vertex disjoint paths starting at $S_i$ we end up with no $\gamma \in \Gamma$ acting non-trivially on $S_i$ after finitely many steps. Iterate the procedure with the next (larger) $i$.
If after finitely many steps of this iteration we end up with $\Gamma = \{id\}$, then we put disjoint black paths of the remaining lengths anywhere in the (infinite) white part of $C_k$; otherwise we continue inductively forever. This ensures that the colouring satisfies~\ref{itm:disjointpaths}, which will be convenient when showing that there are continuum many non-isomorphic distinguishing colourings.
In the limit we get a colouring with infinitely many black paths of different lengths. An element of $\Gamma$ which preserves this limit colouring hence must preserve all of those paths setwise, in particular it must be a colour preserving automorphism of the colourings obtained in every single step of the construction. However, such an automorphism must act non-trivially on some $S_i$ which implies that there is some step for which it does not preserve the colouring. Hence the limit colouring satisfies properties~\ref{itm:disjointpaths} and \ref{itm:breaksetstab}. If we carry out this construction for every infinite component we thus get a distinguishing $2$-edge colouring.
It remains to show that we still have enough freedom in the construction to obtain $2^{\aleph_0}$ non-isomorphic such colourings.
Firstly, if there are infinitely many vertices of infinite degree then each of the $2^{\aleph_0}$ choices for the sequence $(d_n)_{n \in \mathbb N}$ will deliver a colouring which is not isomorphic to any of the other colourings. The reason for this is that vertices of infinite degree must be mapped to vertices of infinite degree while preserving the number of black edges incident to them.
Secondly, if there is an infinite component of $G_{\not \infty}$ then each of the $2^{\aleph_0}$ choices for the sequence $(l_n)_{n \in \mathbb N}$ will give a colouring which is not isomorphic to any of the other colourings. Here the reason is that black paths of length $l$ must be mapped to black paths of length $l$.
The only remaining case is that there are only finitely many vertices of infinite degree and all components of $G_{\not \infty}$ are finite. In this case we can colour the edges in the finite components any way we want (each choice giving a colouring which is not isomorphic to any of the others). Hence, if there are infinitely many such edges we are done. The only way this could fail is if all but finitely many of the components are singletons, so in particular $G_{\not \infty}$ must have infinitely many isolated vertices. But then there must be two such vertices which have the same neighbours in $G$, because there are only finitely many vertices of infinite degree. The transposition of two such vertices would be an automorphism of $G$ with finite edge motion, a contradiction to the assumption that $G$ has infinite edge motion.
\end{proof}
As a corollary to the above theorem we obtain another partial result towards Conjecture~\ref{con:imc}: we show that it is true for line graphs; the proof works even without the requirement of local finiteness.
\begin{cor}
Conjecture~\ref{con:imc} is true for line graphs.
\end{cor}
\begin{proof}
Let $L(G)$ be a countable, connected line graph with infinite motion, where $G$ denotes the underlying graph. Then $G$ has only one component which contains edges, so without loss of generality we may assume that $G$ is connected. By~\cite{zbMATH03228459} the automorphism groups of $G$ and $L(G)$ are isomorphic by means of the obvious map from $\operatorname{Aut} G$ to $\operatorname{Aut} L(G)$. This implies that $G$ has infinite edge motion and that every distinguishing edge colouring of $G$ translates into a distinguishing vertex colouring of $L(G)$.
\end{proof}
\section{Edge colourings vs.\ vertex colourings}
\label{sec:edgevertex}
The purpose of this section is to compare distinguishing vertex and edge colourings. We will show how to construct from a distinguishing vertex colouring a distinguishing edge colouring using at most one more colour. The following construction will be our starting point.
Let $G$ be a graph and let $c\colon V \to C$ be a colouring of the vertex set of $G$ with colours in $C$. Without loss of generality assume that $C$ carries the additional structure of an Abelian group. Then we can obtain a colouring of the edge set by $e \mapsto c(u) + c(v)$ for $e = uv$. We will call such an edge colouring a \emph{canonical} edge colouring. We now derive some useful properties of canonical edge colourings.
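The construction is easy to state in code. The following minimal sketch is our own illustration (names are purely illustrative) and computes a canonical edge colouring when the vertex colours are taken in the cyclic group $\mathbb{Z}_k$.
\begin{verbatim}
# Canonical edge colouring: an edge uv receives c(u) + c(v) in Z_k.
def canonical_edge_colouring(edges, vertex_colour, k):
    """edges: iterable of pairs (u, v); vertex_colour: dict vertex -> 0..k-1."""
    return {frozenset((u, v)): (vertex_colour[u] + vertex_colour[v]) % k
            for (u, v) in edges}

# Example: the path 0-1-2 with vertex colours 0, 1, 1 in Z_2
# gets edge colours {0,1} -> 1 and {1,2} -> 0.
print(canonical_edge_colouring([(0, 1), (1, 2)], {0: 0, 1: 1, 2: 1}, 2))
\end{verbatim}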
\begin{lem}
\label{lem:nofixedpoint}
Let $G$ be a connected graph and let $c'$ be a canonical edge colouring of $G$ which comes from a distinguishing vertex colouring $c$. If a non-trivial automorphism preserves $c'$, then it does not preserve the colour $c(v)$ of any vertex $v$. In particular such an automorphism cannot fix any vertex.
\end{lem}
\begin{proof}
Let $\gamma$ be a non-trivial automorphism preserving $c'$. Since $\gamma$ preserves $c'$ we know that for every edge $e = uv$ it holds that
\[
c(u) + c(v) = c'(e) = c'(\gamma e) = c(\gamma u) + c(\gamma v).
\]
Now assume that there was a vertex $v_0$ such that $c(v_0) = c(\gamma v_0)$. Then for every neighbour $v$ of $v_0$ we have
\[
c(v_0) + c(v) = c(\gamma v_0) + c(\gamma v) = c(v_0) + c(\gamma v),
\]
and thus $c(v) = c(\gamma v)$.
By induction on the distance between $v$ and $v_0$ we obtain that $c(v) = c(\gamma v)$ for every vertex $v$ of $G$. Hence $\gamma$ preserves the colouring $c$ and thus it must be the identity.
\end{proof}
\begin{cor}
Let $G$ be a connected graph and let $c'$ be a canonical edge colouring corresponding to a distinguishing vertex colouring $c$ with colours in $C$. Let $\Gamma$ be the stabiliser of $c'$ in $\operatorname{Aut} G$. Then $|\Gamma| \leq |C|$.
\end{cor}
\begin{proof}
By Lemma~\ref{lem:nofixedpoint} no vertex is fixed by any non-trivial $\gamma \in \Gamma$. Hence the size of $\Gamma$ is bounded from above by the size of the orbits on the vertex set. If the size of some orbit were $> |C|$, then there would be two different vertices with the same colour in this orbit. Hence there would be a non-trivial automorphism in $\Gamma$ mapping some vertex to a vertex with the same colour, contradicting Lemma~\ref{lem:nofixedpoint}.
\end{proof}
The above results show that a canonical edge colouring corresponding to a distinguishing vertex colouring is not far from being distinguishing. We now show how to modify it in order to obtain a distinguishing edge colouring using only one additional colour.
We note that a finite version of the following theorem has been proved (using an entirely different approach) in \cite{zbMATH06381902}. A proof for infinite graphs following essentially the same lines as the finite proof has been announced by Imrich et al.~\cite{distindex-compare}. It is also worth mentioning that the bound is known to be tight as there is a family of finite trees for which equality holds, see \cite{zbMATH06381902}.
\begin{thm}
\label{thm:differenceone}
Let $G$ be a connected graph. Then $D'(G) \leq D(G)+1$.
\end{thm}
\begin{proof}
Let $c$ be a distinguishing vertex colouring with $k=D(G)$ colours, which we regard as elements of an Abelian group, and let $c'$ be the corresponding canonical edge colouring.
If there are two incident edges with the same colour, then take two such edges $e$ and $f$ and colour both of them with a new colour $x$. An automorphism which preserves the resulting colouring either fixes both edges or swaps them. In both cases it is easy to verify that such an automorphism preserves $c'$. But since $e$ and $f$ are the only edges with colour $x$, such an automorphism must fix the vertex at which $e$ and $f$ meet. Thus it has a fixed point and by Lemma~\ref{lem:nofixedpoint} it must be the identity.
So assume that no two incident edges receive the same colour in $c'$. Take an arbitrary edge $e$ and colour it with a new colour $x$. Then recolour an edge $f$ incident to $e$ with colour $c'(e)$. An automorphism which preserves the resulting colouring must fix the edge $e$ because it is the only edge with colour $x$. It must also fix the edge $f$ because it is the only edge incident to $e$ which has colour $c'(e)$. Since all the other edges have the same colours as in $c'$, the automorphism in question must preserve $c'$. But it also has to fix the vertex where $e$ and $f$ meet and thus by Lemma~\ref{lem:nofixedpoint} it is the identity.
\end{proof}
\begin{cor}
\label{cor:infinite_equal}
If $D(G)$ is infinite then $D'(G) \leq D(G)$.
\end{cor}
\begin{proof}
For an infinite cardinal $\alpha$ we have $1+ \alpha = \alpha$.
\end{proof}
\end{document}
\begin{document}
\title{On Scottish Book Problem 157}
\author{Kevin Beanland, Paul Humke, Trevor Richards}
\address{Department of Mathematics, Washington and Lee University, Lexington, VA 24450.}
\email{[email protected]}
\email{[email protected]}
\email{[email protected]}
\thanks{}
\thanks{2010 \textit{Mathematics Subject Classification}. Primary: }
\thanks{\textit{Key words}: Scottish book, approximately continuous}
\maketitle
\begin{abstract}
This paper describes our hunt for the solver of Problem 157 in the Scottish Book, a problem originally posed by A.~J. (Gus) Ward in 1937. We first make the observation that a theorem of Richard O'Malley from 1975 yields an immediate positive solution. A further look at O'Malley's references revealed a 1970 paper by Donald Ornstein that we now believe contains the first solution of {\em SB 157}. We isolate the common elements in the machinery used by both Ornstein and O'Malley and discuss several consequences. We also examine an example function given by Ornstein. There are some difficulties with this function but we provide a fix, and show moreover that functions of that kind are typical in the sense of the Baire category theorem.
\end{abstract}
\section{The Solution in Brief}
On March 23rd, 1937 A.J. Ward asked the following problem which is recorded as Problem 157 in the famous {\em Scottish Book}.\footnote[1]{The prize for the solution to this problem is lunch at the ``The Dorothy" in Cambridge which the authors now offer to purchase for Richard O'Malley and Donald Ornstein; transportation costs are another matter!}
\begin{sb157}
Suppose $f$ is approximately continuous and at each point $x_0$ the quantity $$\limsup_{h \to 0^+} \frac{f(x_0+h)-f(x_0)}{h},$$ neglecting any set of $h$ which has zero density at $h=0$,
is positive. Is $f(x)$ monotone increasing?
\end{sb157}
We became aware of this problem only recently when one of us\footnote[2]{Humke} was discussing the upcoming new edition of the {\it Scottish Book} with its editor, Dan Mauldin. Problem 157 was one of the problems marked as unresolved. Upon returning to campus, the authors decided to have a look at {\bf Scottish Book Problem 157} and a natural approach to its resolution soon brought them to Richard O'Malley's paper on approximate maxima, \cite{OM}. In that paper, O'Malley proves \cite[Theorem 1]{OM} from which the following result is an immediate corollary.
\begin{theoremom}\label{mom}
Let $f:[0,1]\to\mathbb{R}$ be approximately continuous but not strictly increasing. Then $f$ attains an approximate maximum at some point $x_0\in [0,1)$.
\end{theoremom}
A follow-up conversation with O'Malley then led us to the following theorem by Donald Ornstein, \cite{O}.
\begin{theoremorn}
Let $f(x)$ be a real--valued function of a real variable satisfying the following:
\begin{enumerate}
\item[(a)] $f(x)$ is approximately continuous,
\item[(b)] For each $x_0$, let $E$ be the set of $x$, such that $f(x)-f(x_0)\ge 0$. Then
\[
\limsup_{h\to 0^+}\lambda\left( E\cap (x_0,x_0+h)\right)/h\not=0.
\]
\end{enumerate}
Then $f$ is monotone increasing and continuous.
\end{theoremorn}
This is clearly the solution to {\bf Scottish Book 157}, and as far as we can tell it is the first. There is a certain commonality to the machinery used in the proofs of O'Malley and Ornstein, and we'll try to isolate that common thread in the next section. Our proofs (largely reformulating those of O'Malley and Ornstein) will provide the slightly stronger result that the function under consideration is in fact strictly increasing (rather than just monotonically increasing). We'll also list some elementary consequences and state all the relevant background.
In Section~\ref{sect: Ornstein's example.}, we'll examine an example given in~\cite{O}. We will show that this example needs amending and supply that amendment. Finally, we will also show in that section that functions satisfying the properties of Ornstein's example are typical in the sense of the Baire category theorem.
But for those familiar with the definitions, here is the solution to~{\bf Scottish Book 157} which we have drawn from O'Malley's work and which is a trivial consequence of Theorem~1$^*$.
\begin{thm}
Suppose that $f:[0,1]\to\mathbb{R}$ is approximately continuous and that, for each $x_0\in[0,1)$ and each set of $h$ values $E\subset\mathbb{R}$ having zero density at $0$, the quantity $$\displaystyle\limsup_{h\to0^+,h\notin E}\dfrac{f(x_0+h)-f(x_0)}{h}>0.$$ Then $f$ is strictly increasing.
\end{thm}
\begin{proof}
Given any points $0\leq x_1<x_2\leq1$, if $f(x_1)\geq f(x_2)$, then applying O'Malley's~Theorem to the function $f$ restricted to $[x_1,x_2]$, we obtain that $f$ attains an approximate maximum (relative to $[x_1,x_2]$) at some point $x_0\in~[x_1,x_2)$. That is, the set $E_0=\{h>0: x_0+h\in[x_1,x_2],\ f(x_0+h)>f(x_0)\}$ has zero density at $0$.
We conclude therefore that $$\displaystyle\limsup_{h\to0^+,h\notin E_0}\dfrac{f(x_0+h)-f(x_0)}{h}\leq0.$$ This contradicts the assumption made on $f$. We conclude that $f(x_1)<f(x_2)$, and that therefore $f$ is strictly increasing on $[0,1]$.
\end{proof}
\section{The Rest of the Story}
All sets and functions considered here will be assumed to be measurable with respect to $\lambda$, Lebesgue measure on $\mathbb{R}$. Suppose $E\subset \mathbb{R}$ and $I\subset \mathbb{R}$ is an interval. Then the density of $E$ in $I$ is defined as $\Delta(E,I)=\lambda(E\cap I)/\lambda(I).$ Now, if $x\in\mathbb{R}$, then the upper density of $E$ at $x$ is defined as $\overline\Delta(E,x)=\limsup_{r \to 0} \Delta(E,(x-r,x+r))$. The lower density at $x$, $\underline\Delta(E,x)$ is defined similarly where $\liminf$ replaces $\limsup$ and if these two are equal at $x$, their common value is called the density of $E$ at $x$ and is denoted $\Delta(E,x)$.
Now suppose a function $f:\mathbb{R}\to\mathbb{R}$ is given. Then $f$ is approximately continuous at $x_0$ if there is a set $E$
with zero density at $x_0$ such that the limit of $f$ as $x\to x_0$ along $\mathbb{R}\setminus E$ is $f(x_0)$. A function $f$ has an approximate maximum at $x_0$ if $\Delta(H_{f(x_0)},x_0)=0$, where we define $H_y\equiv H_y(f)=\{x:f(x)>y\}$.
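As a simple illustration of these notions (our own example, not taken from the papers under discussion), the set
\[
E=\bigcup_{n\geq 2}\left[\tfrac{1}{n+1},\tfrac{1}{n+1}+\tfrac{1}{n^3}\right]
\]
has density zero at $0$: for every $n\geq 2$ we have $\lambda(E\cap[0,1/n])\leq\sum_{m\geq n}m^{-3}\leq\tfrac{1}{2}(n-1)^{-2}$, which is small compared to $1/n$. Consequently the indicator function of $E$ is approximately continuous at $0$ (its value there is $0$), although it is not continuous at $0$.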
For the purpose of exposition, we isolate the following remarks concerning density and approximately continuous functions.
\begin{rem}
Although the definitions above of upper and lower density at a point $x$ are given in terms of intervals which are symmetric around $x$ only, it is easy to show that if $E$ and $F$ are sets having density zero and one at $x$ respectively, then for any $\epsilon>0$ there is a $\delta>0$ small enough so that if $I$ is any interval containing $x$ in its closure with $\lambda(I)<\delta$, then $\Delta(E,I)<\epsilon$ and $\Delta(F,I)>1-\epsilon$.
\label{rem: Non-symmetric density.}
\end{rem}
\begin{rem}
Suppose $y \in \mathbb{R}$, $f$ is approximately continuous at $x$ and $\{I_n\}$ is a nested sequence of closed intervals such that $\{x\}= \bigcap_n I_n$ and $\Delta(H_y,I_n)>\eta>0$ for all $n \in \mathbb{N}$. Then $\overline{\Delta}(H_y,x)>\eta/2$.
\label{ACcon}
\end{rem}
\begin{rem}
If $f$ is approximately continuous at $x$, $z<y$ and $\overline{\Delta}(H_y,x)>0$ then $f(x)\geqslant y$ and $\Delta(H_z,x)=1$.
\label{ACgreater}
\end{rem}
An important first step is to see that there is an approximately continuous function with no relative extrema. This is, perhaps, not particularly surprising, but the fact that this function can be a derivative is a good introduction to the real nature of derivatives. This example can be found in Andy Bruckner's classic introduction, \cite{AMB}.
\begin{theoremb}
There is a bounded approximately continuous derivative which achieves no local maximum
and no local minimum.
\end{theoremb}
To introduce O'Malley's machinery, consider a measurable set $H$ and an interval $I$ so that $\lambda(H\cap I)>0$ and let $\varepsilon>0$ be given. We define
$$\mathcal{J}(H,I,\varepsilon)=\{J: J \subset I\text{ is an open interval with } \Delta(H,J)>\varepsilon\}$$ and we let $G_\varepsilon(H,I)=\bigcup\mathcal{J}(H,I,\varepsilon) $.
O'Malley proves the following lemma, \cite[Lemma 1]{OM}, concerning components of
$G_\varepsilon(H,I)$.
\begin{lem}\label{L-one}
Suppose $H\subset [0,1]$ is measurable and fix $(a_0,b_0) \subset (0,1)$ with $\lambda(H \cap (a_0,b_0))>0$ and let $\varepsilon>0$. Let $(a_1,b_1)$ be a component of $G_\varepsilon(H,(a_0,b_0))$. Then
\begin{enumerate}
\item $\Delta(H,(a_1,b_1))\geqslant \varepsilon/2$, and
\item If $I \subset (a_0,b_0)$ is an open interval with ${I}\cap\{a_1,b_1\}\not=\emptyset$ then $\Delta(H,I)\leqslant \varepsilon$.
\end{enumerate}
In particular, $\lambda(G_\varepsilon(H,(a_0,b_0)))\leqslant 2 \lambda(H\cap (a_0,b_0))/\varepsilon$.
\end{lem}
The first item holds since each component is comprised of intervals with density at least $\varepsilon$. The second part of the lemma follows from the definition of $G_\varepsilon(H,(a_0,b_0))$: if $I\subset (a_0,b_0)$ is an open interval that meets a component of $G_\varepsilon(H,(a_0,b_0))$ without being contained in it, then $I \not\in \mathcal{J}(H,(a_0,b_0),\varepsilon)$ and so $\Delta(H,I)\leqslant \varepsilon$.
The next lemma has been extracted from the proof of Theorem~1$^*$ in~\cite{OM} and is the main workhorse of the proof of our main result. We postpone the proof to Section~\ref{sect: Proof of O'Malley's Lemma.}.
\begin{lemmaom}
Suppose $f:[a,b] \to \mathbb{R}$ is approximately continuous satisfying $\lambda(f^{-1}(y))=0$ for all $y\in\mathbb{R}$. Let $[a_0,b_0] \subset [a,b]$ be such that $s_0:=\sup f[a_0,b_0]>\max\{f(a_0),f(b_0)\}$. Then for each $\varepsilon>0$ and $y_0 <s_0$ there is a $y_1\in(y_0,s_0)$ and a component $(a_1,b_1)$ of $G_\varepsilon(H_{y_1},(a_0,b_0))$ satisfying:
\begin{enumerate}
\item $[a_1,b_1] \subset (a_0,b_0)$
\item $(b_1-a_1)<1/2(b_0-a_0)$
\item $\max\{f(a_0),f(b_0),y_0\} < y_1$
\item $\max\{f(a_1),f(b_1)\}\leqslant y_1$
\item $\Delta(H_{y_0},(a_1,b_1))>1/2$
\item If $I\subset(a_0,b_0)$ is an open interval with ${I}\cap\{a_1,b_1\}\neq\emptyset$, then $\Delta(H_{y_1},I)\leqslant\varepsilon$.
\end{enumerate}
\end{lemmaom}
Using the above we give the proof of O'Malley's Theorem \ref{mom}$^*$.
\begin{proof}[Proof of O'Malley's Theorem \ref{mom}$^*$]
We first note that if the condition $\lambda(f^{-1}(y))=0$ found in the statement of O'Malley's Lemma fails for some $y\in\mathbb{R}$, then the set $f^{-1}(y)$ will have a density point $x_0\in[0,1)$. It is easy to see that $f$ will achieve an approximate maximum at $x_0$, and we are done. Therefore we assume that $\lambda(f^{-1}(y))=0$ for all $y\in\mathbb{R}$. Note also that as $f$ is not strictly increasing, there is a $b\in(0,1]$ so that $f(b)\not=\sup f[0,b]$. Therefore without loss of generality we assume that $f(1) \not= \sup f[0,1]$. If $f(0)=\sup f[0,1]$ we are done, setting $x_0=0$. Therefore we assume further that $\sup f[0,1]>\max\{f(0),f(1)\}$. Define $[a_0,b_0]=[0,1]$, $s_0=\sup f[0,1]$, and let $y_0<s_0$ be chosen arbitrarily. We apply O'Malley's Lemma iteratively with $\varepsilon=1/k$ at the $k^{\text{th}}$ stage to obtain a strictly increasing sequence of real numbers $\{y_k\}$ and a strictly nested sequence of intervals $\{(a_k,b_k)\}$ such that for each $k\in\mathbb{N}$, $(a_{k+1},b_{k+1})$ is a component of $G_{1/k}(H_{y_{k+1}},(a_k,b_k))$ and the following items hold.
\begin{enumerate}
\item[(i)] $[a_{k+1},b_{k+1}] \subset (a_k,b_k)$
\item[(ii)] $(b_{k+1}-a_{k+1})<1/2(b_k-a_k)$
\item[(iii)] $\max\{f(a_k),f(b_k),y_k\} < y_{k+1}$
\item[(iv)] $\max\{f(a_{k+1}),f(b_{k+1})\}\leqslant y_{k+1}$
\item[(v)] $\Delta(H_{y_k},(a_{k+1},b_{k+1}))>1/2$
\item[(vi)] If $I\subset(a_k,b_k)$ is an open interval with ${I}\cap\{a_{k+1},b_{k+1}\}\neq\emptyset$, then $\Delta(H_{y_{k+1}},I)\leqslant1/k$.
\end{enumerate}
In order to justify this recursive construction (i.e.\ to ensure that at the $k^{\text{th}}$ stage the function $f$ and the interval $(a_k,b_k)$ satisfy the assumption made on $f$ and $(a_0,b_0)$ in the statement of O'Malley's Lemma), observe that since $(a_{k+1},b_{k+1})$ is a component of $G_{1/k}(H_{y_{k+1}},(a_k,b_k))$, $H_{y_{k+1}}\cap(a_k,b_k)\neq\emptyset$. Therefore setting $s_k=\sup f[a_k,b_k]$, we have $s_k>y_{k+1}>\max\{f(a_k),f(b_k)\}$.
Items~(i) and~(ii) above yield that the intersection of the intervals so obtained consists of a single point, $\{x_0\} = \bigcap_{k=0}^\infty (a_k,b_k)$. We have two claims: $f(x_0) \geqslant y_n$ for each $n \in \mathbb{N}$ and $\Delta(H_{f(x_0)},x_0)=0$. This final claim yields that $f$ has an approximate maximum at $x_0\in[0,1)$, as desired.
Fix some positive integer $n$. Then for each positive integer $k$, $y_n < y_{n+k}$. Using this inequality and item (v) above
$$\Delta(H_{y_n},(a_{n+k+1},b_{n+k+1}))\geqslant \Delta(H_{y_{n+k}},(a_{n+k+1},b_{n+k+1}))>1/2.$$
\noindent Using Remark~\ref{ACcon}, $\overline{\Delta}(H_{y_n},x_0)>1/4$ and so, by Remark~\ref{ACgreater}, $f(x_0) \geqslant y_n$.
It remains to observe that $\Delta(H_{f(x_0)},x_0)=0$. Suppose by way of contradiction that $\overline{\Delta}(H_{f(x_0)},x_0)=\eta>0$. Choose some $m\in\mathbb{N}$ with $1/m<\eta/2$. Since $\overline{\Delta}(H_{f(x_0)},x_0)=\eta$, we can find some $r>0$ small enough so that the interval $I=(x_0-r,x_0+r)$ satisfies
$$I\subset (a_m,b_m),\text{ and }\Delta(H_{f(x_0)},I)>\eta/2.$$
Choose $k>0$ to be the smallest positive integer such that $I\not\subset(a_{m+k+1},b_{m+k+1})$. Then $$I\subset(a_{m+k},b_{m+k}),\text{ and }I\cap\{a_{m+k+1},b_{m+k+1}\}\neq\emptyset,$$ so we obtain the contradiction
$$\dfrac{\eta}{2}<\Delta(H_{f(x_0)},I)\leqslant\Delta(H_{y_{m+k+1}},I)\leqslant\dfrac{1}{m+k}\leq\dfrac{1}{m}<\dfrac{\eta}{2},$$
where the second inequality follows from the fact that $f(x_0)\geqslant y_{m+k+1}$, and the third inequality follows from item~(vi). This is the desired contradiction.
\end{proof}
Several old standards can be immediately generalized to approximately continuous functions
using the theorem above. For example.
\begin{theoremrolle}
Let $f:[0,1]\to \mathbb{R}$ be approximately continuous and approximately differentiable on $(0,1)$
with $f(0)=f(1)$. Then there is a point $x_0\in (0,1)$ at which $f^\prime_{ap}(x_0)=0$.
\end{theoremrolle}
And hence also its immediate consequence.
\begin{theoremmvt}
Let $f:[a,b]\to\mathbb{R}$ be approximately continuous and approximately differentiable on $(a,b)$.
Then there is a point $x_0\in (a,b)$ at which $f^\prime_{ap}(x_0)=\frac{f(b)-f(a)}{b-a}$.
\end{theoremmvt}
Mean Value Theorems for the approximate derivative are well known, and in much greater generality; in fact, O'Malley showed in~\cite{OM2} that $x_0$ can be chosen so that
$f^\prime(x_0)=\frac{f(b)-f(a)}{b-a}$.
\section{Proof of O'Malley's Lemma}\label{sect: Proof of O'Malley's Lemma.}
We need the following easy remark.
\begin{rem}\label{remark: Mini.}
Suppose $f:[a,b]\to\mathbb{R}$ is approximately continuous, so that for all $y\in\mathbb{R}$, $\lambda(f^{-1}(y))=0$. Then, setting $s=\sup(f[a,b])$, $\displaystyle\lim_{y\to s}\lambda(H_y)=0$. Therefore by Lemma~\ref{L-one}, for such a function $f$ and any $\varepsilon>0$, $\lambda(G_\varepsilon(H_y,(a,b)))\to0$ as $y\to s$.
\end{rem}
We restate O'Malley's Lemma for reference.
\begin{lemmaom}
Suppose $f:[a,b] \to \mathbb{R}$ is approximately continuous satisfying $\lambda(f^{-1}(y))=0$ for all $y\in\mathbb{R}$. Let $[a_0,b_0] \subset [a,b]$ be such that $s_0:=\sup f[a_0,b_0]>\max\{f(a_0),f(b_0)\}$. Then for each $\varepsilon>0$ and $y_0 <s_0$ there is a $y_1\in(y_0,s_0)$ and a component $(a_1,b_1)$ of $G_\varepsilon(H_{y_1},(a_0,b_0))$ satisfying:
\begin{enumerate}
\item\label{item: Subset.} $[a_1,b_1] \subset (a_0,b_0)$
\item\label{item: Shrinking intervals.} $(b_1-a_1)<1/2(b_0-a_0)$
\item\label{item: r_1 greater.} $\max\{f(a_0),f(b_0),y_0\} < y_1$
\item\label{item: r_1 greater equal.} $\max\{f(a_1),f(b_1)\}\leqslant y_1$
\item\label{item: High density interval.} $\Delta(H_{y_0},(a_1,b_1))>1/2$
\item\label{item: Low density endpoints.} If $I\subset(a_0,b_0)$ is an open interval with ${I}\cap\{a_1,b_1\}\neq\emptyset$, then $\Delta(H_{y_1},I)\leqslant\varepsilon$.
\end{enumerate}
\end{lemmaom}
The reader will note a marked similarity between the proof given above of O'Malley's Theorem~1$^*$ and the following proof of O'Malley's~Lemma, both depending on a recursive construction of an increasing sequence of real numbers and a nested sequence of intervals. The thing to notice is that, in the following proof of O'Malley's~Lemma, the two sequences are chosen so that the corresponding members satisfy the first four items in the statement of O'Malley's~Lemma, and then one real number and interval is chosen which satisfies items~(5) and~(6) as well. By contrast, each of the real numbers and corresponding intervals found in the proof of O'Malley's~Theorem~1$^*$ satisfies all six items from the statement of O'Malley's~Lemma. It was then shown that the intersection of all the intervals consists of a single point, which turns out to be the point we were looking for (at which the function achieves an approximate maximum).
\begin{proof}
Fix $f,(a,b),(a_0,b_0),s_0,y_0$ and $\varepsilon>0$ as in the hypotheses. Find $\alpha>0$ so that $\max\{f(a_0),f(b_0),y_0\} < \alpha < s_0$. Since $f$ is approximately continuous there is a $\delta>0$ so that if $I$ is an interval in $[a,b]$ with either $a_0$ or $b_0$ as an endpoint, and with $\lambda(I)<\delta$,
\begin{equation}
\Delta(H_\alpha,I)<\varepsilon/2.
\label{smallish}
\end{equation}
In order to choose our number $y_1$ and interval $(a_1,b_1)$, we will first construct a strictly increasing sequence of real numbers $\{r_k\}$ and nested intervals $\{(c_k,d_k)\}$. To initialize this construction, we set $r_0=y_0$ and $(c_0,d_0)=(a_0,b_0)$.
By Remark~\ref{remark: Mini.}, we may find some $r_1$ with $\alpha < r_1 < s_0$ so that $\lambda(G_\varepsilon(H_{r_1},(a_0,b_0)))< \min\{\delta,1/2(d_0-c_0)\}$. Let $(c_1,d_1)$ be any component of $G_\varepsilon(H_{r_1},(a_0,b_0))$. Assuming that $(c_1,d_1)$ shares an endpoint with $(c_0,d_0)$ yields the contradiction
$$\varepsilon/2\leqslant \Delta(H_{r_1},(c_1,d_1)) \leqslant \Delta(H_{\alpha},(c_1,d_1)) <\varepsilon/2.$$
The first inequality comes from Lemma \ref{L-one}(1), the second from $r_1>\alpha$, and the third from (\ref{smallish}). Thus $[c_1,d_1]\subset (c_0,d_0)$.
So far (1), (2) and (3) are satisfied for $(c_1,d_1)$. To see (4), suppose, by way of contradiction, that $r_1 <\max\{f(c_1),f(d_1)\}$, and without loss of generality that $r_1< f(c_1)$. The approximate continuity of $f$ at $c_1$ then yields that $\Delta(H_{r_1},c_1)=1$. This, in turn, implies that there is an open interval $I$ contained in $(c_0,d_0)$ and containing $c_1$ with $\Delta(H_{r_1},I) > \varepsilon$. This contradicts Lemma~\ref{L-one}(2).
At this point we have that the interval $(c_1,d_1)$ and the value $r_1$ satisfy items~(\ref{item: Subset.})--(\ref{item: r_1 greater equal.}) in the statement of the lemma (replacing $(a_1,b_1)$ with $(c_1,d_1)$ and $y_1$ with $r_1$). The next step is to iterate this construction, obtaining a strictly nested sequence of intervals $(c_k,d_k)$ and a strictly increasing sequence of numbers $r_1<r_2<\cdots<s_0$ such that, for each $k\geqslant 1$, $(c_{k+1},d_{k+1})$ is a component of $G_\varepsilon(H_{r_{k+1}},(c_k,d_k))$ satisfying
\begin{enumerate}
\item[(i)] $[c_{k+1},d_{k+1}]\subset(c_k,d_k)$
\item[(ii)] $(d_{k+1}-c_{k+1})<1/2(d_k-c_k)$.
\item[(iii)] $\max\{f(c_k),f(d_k),r_k\}< r_{k+1}$
\item[(iv)] $\max\{f(c_{k+1}),f(d_{k+1})\}\leqslant r_{k+1}$
\end{enumerate}
In order to justify this recursive construction (i.e.\ to ensure that at the $k^{\text{th}}$ stage the function $f$ and the interval $(c_k,d_k)$ satisfy the assumption made on $f$ and $(a_0,b_0)$ in the statement of the lemma), observe that since $(c_{k+1},d_{k+1})$ is a component of $G_\varepsilon(H_{r_{k+1}},(c_k,d_k))$, $H_{r_{k+1}}\cap(c_k,d_k)\neq\emptyset$. Therefore setting $s_k=\sup f[c_k,d_k]$, we have $s_k>r_{k+1}>\max\{f(c_k),f(d_k)\}$.
Items~(i) and~(ii) above guarantee that the intersection of the intervals $[c_k,d_k]$ consists of a single point, $\{x_0\} = \bigcap_k[c_k,d_k]$.
For each $k\geqslant1$ we have $r_1\leqslant r_{k}$, so $\Delta(H_{r_1},(c_k,d_k)) \geqslant \Delta(H_{r_{k}},(c_k,d_k)) \geqslant \varepsilon/2$ (by Lemma~\ref{L-one}(1)). Therefore using Remark~\ref{ACcon} we obtain $\overline{\Delta}(H_{r_1},x_0) \geqslant \varepsilon/4$. Remark~\ref{ACgreater} now yields that $f(x_0)\geqslant r_1$ and since $r_1>y_0$, $\Delta(H_{y_0},x_0)=1$. Therefore using Remark~\ref{rem: Non-symmetric density.}, we can find some $n$ so that $\Delta(H_{y_0},(c_n,d_n))> 1/2$. Define $y_1=r_n$ and $(a_1,b_1)=(c_n,d_n)$.
For this choice of $(a_1,b_1)$ and $y_1$ it is easy to verify items~(\ref{item: Subset.})--(\ref{item: r_1 greater equal.}) from the statement of the lemma. Item~(\ref{item: High density interval.}) follows immediately from the choice of $n$.
It remains to establish item~(\ref{item: Low density endpoints.}). To this end, let $(c,d)\subset(a_0,b_0)$ be any open interval with $(c,d)\cap\{a_1,b_1\}\neq\emptyset$. Now, $(c,d)\subset(c_0,d_0)=(a_0,b_0)$, and $(c,d)\not\subset(c_n,d_n)=(a_1,b_1)$, so we may choose the least $m\in\{1,\ldots,n\}$ such that $(c,d)\not\subset(c_m,d_m)$ (and thus $(c,d)\subset(c_{m-1},d_{m-1})$). Moreover, $(c,d)$ intersects $(a_1,b_1)$, which is in turn contained in $(c_m,d_m)$, so we have that $(c,d)\cap\{c_m,d_m\}\neq\emptyset$. Therefore, since $(c_m,d_m)$ is a component of $G_\varepsilon(H_{r_m},(c_{m-1},d_{m-1}))$, it follows from Lemma~\ref{L-one}(2) that $\Delta(H_{r_m},(c,d))\leq\varepsilon$. Since $y_1\geqslant r_m$, it follows that $\Delta(H_{y_1},(c,d))\leqslant\varepsilon$ as required.
\end{proof}
\section{An example}\label{sect: Ornstein's example.}
In order to show that item (b) in Ornstein's Theorem may not be significantly weakened without losing the result of the theorem, in~\cite{O} a continuous function $f:[0,1]\to\mathbb{R}$ is described which satisfies the following weaker assumption (b'), but which is not monotonic on $[0,1]$.
(b') For each point $x_0\in[0,1]$, the set $E=\left\{x:\frac{f(x)-f(x_0)}{x-x_0}\geq0\right\}$ does not have zero density at $x_0$.
In this section we identify a problem with this example and provide a fix.
In the last section we show that the typical continuous function satisfies condition~(b')
but is not monotonically increasing.
\subsection{The function and the problem.}
Begin by choosing the eight points $p_i=i/7$ for $i=0,1,\dots,7$ in $[0,1]$ (the points $\{p_i\}$ have been chosen explicitly for the sake of concreteness, but with any other choice of these points the same problem would occur) and defining a function $g$ at each of those points as follows:
\begin{align}\label{e0}
g(p_0)&=1,\hspace{5pt}g(p_1)=\frac{4}{3},\hspace{5pt}g(p_2)=\frac{1}{3},\hspace{5pt}g(p_3)=\frac{4}{3}, \notag \\
g(p_4)&=\frac{-1}{3},\hspace{5pt}g(p_5)=\frac{2}{3},\hspace{5pt}g(p_6)=\frac{-1}{3},\hspace{5pt}g(p_7)=0.
\end{align}
Extend $g$ linearly on the intervening intervals and let
\begin{equation}\label{den1}
E_{x_0}=\left\{x\in[0,1]:\dfrac{g(x)-g(x_0)}{x-x_0}\geq0\right\}.
\end{equation}
Then $\phi(x)=\Delta(E_{x},[0,1])$
is continuous and, by inspection, positive at each point $x\in[0,1]$. Hence there is
an $\alpha>0$ such that $\phi(x)>\alpha$ for each $x\in[0,1]$.
A sequence of functions $g_n$ is defined inductively by first setting $g_0=g$.
\begin{quote}\label{insert}
Assuming $g_n$ has been defined, $g_{n+1}$ is obtained by replacing each decreasing segment of $g_n$
with a suitable affine copy of $g$. \hspace*{23mm} ($\star$)
\end{quote}
We refer to the process described in $(\star)$ as the {\em insertion of $g$ into $g_n$.} Specifically, this entails that if $[a,b]$ denotes a maximal interval on which $g_n$ is decreasing, then
$g_{n+1}(x)=S\circ g\circ T(x)$ for each $x\in [a,b]$, where
\begin{equation}\label{e1}
T(x)=\frac{x-a}{b-a}\text{ and }S(x)=xg_n(a)+(1-x)g_n(b).
\end{equation}
The claim is that the sequence $g_n$ converges pointwise to a continuous function, and that this limit function $f$ satisfies the condition (b').
Unfortunately, $\{g_n\}$ does not converge to a continuous function. To see this, we will show that there is a sequence of points $\{x_n\}\subset[0,1]$ such that for each $n$ the sequence $\{g_k(x_n)\}_k$ is eventually constant, and these constants approach $\infty$ as $n\to\infty$. Since a continuous function on the compact interval $[0,1]$ is bounded, this implies that the functions $g_n$ do not converge to a continuous function.
To that end, we define a nested sequence $I_1\supset I_2\supset I_3\supset\cdots$ of sub-intervals of $[0,1]$ inductively as follows. We set $I_1=[3/7,4/7]$. Suppose that $I_k$ has been defined for all $k<n$, and $I_{n-1}=[a_{n-1},b_{n-1}]$. Then we define
$$I_n=\left[a_{n-1}+\frac{3}{7^n},a_{n-1}+\frac{4}{7^n}\right].$$
The construction of $\{g_n\}$ above immediately implies that for each $n$, $I_n$ is a maximal interval of decrease for $g_n$. Let us also define a sequence $\{\Delta y_n\}$ by $\Delta y_n=g_n(b_n)-g_n(a_n)$, the net change in $g_n$ on $I_n$. Since $g(0)-g(1)=1$, and since the vertical scaling factor used when $g_{n}$ is defined on $I_{n-1}$ is $|\Delta y_{n-1}|$, an easy induction argument shows that
$$\Delta y_n=\Delta y_{0}\cdot|\Delta y_{n-1}|=-\left|\Delta y_0\right|^{n+1}=-\left(\frac{5}{3}\right)^{n+1}.
$$
Using the same reasoning, it may also be shown that
$$g_n(a_n)=1+\displaystyle\sum_{i=0}^n\frac{1}{3}\cdot\left(\frac{5}{3}\right)^i.$$
Of course this sequence $\{g_n(a_n)\}\to\infty$, and we note that since $I_n$ is a maximal interval of decrease of $g_n$, for any $m>n$ we have $g_m(a_n)=g_n(a_n)$. Thus, putting $x_n=a_n$, the sequence has the properties described above, and we conclude that the sequence $g_n$ does not converge to a continuous function.
In actuality, with a little more computation, it is not hard to see that $\{g_n(\frac{1}{2})\}\to +\infty$. The purpose of this note, however, is simply to point out that the example needs repair, and in the following subsection we show how this can be done in a rather straightforward manner.
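The divergence is also easy to observe numerically. The following short sketch is our own illustration (it is not part of Ornstein's construction): it stores a piecewise linear function as its list of breakpoints and carries out the insertion $(\star)$, replacing every decreasing segment by an affine copy of the seed $g$; the printed maxima of $g_n$ grow without bound.
\begin{verbatim}
# Numerical sketch of the insertion (*) applied to Ornstein's g.
SEED_X = [i / 7 for i in range(8)]
SEED_Y = [1, 4/3, 1/3, 4/3, -1/3, 2/3, -1/3, 0]   # g(p_0), ..., g(p_7)

def insert(breakpoints, seed_x, seed_y):
    """Replace every decreasing segment by an affine copy of the seed,
    whose value is 1 at its left endpoint and 0 at its right endpoint."""
    new = [breakpoints[0]]
    for (a, ya), (b, yb) in zip(breakpoints, breakpoints[1:]):
        if yb < ya:
            for t, s in zip(seed_x[1:], seed_y[1:]):
                new.append((a + t * (b - a) / seed_x[-1], yb + (ya - yb) * s))
        else:
            new.append((b, yb))
    return new

g = list(zip(SEED_X, SEED_Y))                      # g_0 = g
for n in range(1, 9):
    g = insert(g, SEED_X, SEED_Y)
    print(n, max(y for _, y in g))                 # maxima of g_n grow without bound
\end{verbatim}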
\subsection{A fix}
In this subsection we adapt Ornstein's function so that the change in $y$ values (as in the $\Delta y_n$ from our discussion of Ornstein's functions) on each decreasing interval is strictly less than $1$. We will use $h$'s here, rather than $g$'s, to avoid confusion.
Define $h$ to be the continuous function on $[0,13]$ with the following prescribed values at the integers, and linear on the intervening intervals.
\begin{center}
\begin{tabular}{ccccccccc}
$h(0)$&=&4/4&$h(1)$&=&6/4&$h(2)$&=&3/4\\
$h(3)$&=&5/4&$h(4)$&=&2/4&$h(5)$&=&4/4\\
$h(6)$&=&1/4&$h(7)$&=&3/4&$h(8)$&=&0/4\\
$h(9)$&=&2/4&$h(10)$&=&-1/4&$h(11)$&=&1/4\\
$h(12)$&=&-2/4&$h(13)$&=&0/4&
\end{tabular}
\end{center}
The function $h$ has been chosen so that at each point $x_0$, $h_0$ takes a smaller value to the left of $x_0$ or a larger value to the right of $x_0$, thus again ensuring that the difference quotient is positive on a set of density at least $\alpha$ in $[0,13]$ (for some $\alpha>0$ independent of $x_0$). If we choose $\alpha>0$ a bit smaller, we can say a bit more: for each $x_0\in[0,13]$, the set
\[
E_{x_0}=\left\{x\in[0,13]:\dfrac{h_0(x)-h_0(x_0)}{x-x_0}\geq0\right\},
\]
where we disregard all $x$-values on which $h_0$ is decreasing, has density at least $\alpha$ in $[0,13]$. This will be of use to us when we show that our final function $h_\infty$ satisfies property (b').
Let $h_n$ be the sequence of functions with domain $[0,13]$ defined recursively in
precisely the same manner as Ornstein's function was defined, but using $h=h_0$ as
our ``seed function'', rather than Ornstein's $g$.
That is,
\begin{quote}
$\dots$ to get $h_{n+1}$ we simply replace each line segment of the graph of $h_n$
having negative slope with an affine copy of $h$.
\end{quote}
We will now show that the sequence $h_n$ produced by this process converges uniformly on $[0,13]$. If $h_0$ is decreasing on an interval $[a,b]\subset[0,13]$, then
\[
h_0(b)-h_0(a)\geq-1\cdot\dfrac{3}{4}.
\]
Recursively, if $h_n$ is decreasing on an interval $[a,b]\subset[0,13]$, then
\[
h_n(b)-h_n(a)\geq-1\cdot\left(\dfrac{3}{4}\right)^n.
\]
Therefore the difference between $h_{n+1}$ and $h_n$ on $[a,b]$ is at most $(3/4)^n$ times the maximal difference between $h_0$ and the line $y=1-x/13$ on the interval $[0,13]$. It follows that the sequence $\{h_n\}$ converges uniformly.
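This geometric decay can also be checked numerically, in the same way as in the sketch given for Ornstein's $g$ above (again our own illustration, not part of the argument): the largest drop over a decreasing segment of $h_n$ equals $(3/4)^{n+1}$, since every decreasing segment of the seed $h$ drops by $3/4$, while $h$ falls by $1$ in total over $[0,13]$.
\begin{verbatim}
# Largest decrease of h_n over a decreasing segment, for the fixed seed h.
H_X = list(range(14))
H_Y = [1, 1.5, .75, 1.25, .5, 1, .25, .75, 0, .5, -.25, .25, -.5, 0]

def insert(bps):                      # replace decreasing segments by affine copies of h
    new = [bps[0]]
    for (a, ya), (b, yb) in zip(bps, bps[1:]):
        if yb < ya:
            new.extend((a + t * (b - a) / 13, yb + (ya - yb) * s)
                       for t, s in zip(H_X[1:], H_Y[1:]))
        else:
            new.append((b, yb))
    return new

def max_drop(bps):
    return max(ya - yb for (_, ya), (_, yb) in zip(bps, bps[1:]) if yb < ya)

h = list(zip(map(float, H_X), map(float, H_Y)))
for n in range(5):
    print(n, max_drop(h))             # 0.75, 0.5625, 0.421875, ...
    h = insert(h)
\end{verbatim}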
\subsection{Show that $h_\infty$ satisfies (b').}
If $x_0\in[0,13]$ is contained in one of the intervals on which some one of the $h_n$'s is increasing, then the desired result holds immediately, because the function values of all later $h_{n+k}$'s will not change on that interval.
Suppose that $x_0$ is not in any such interval. Define $I_0=[0,13]$, and for each $n>0$, let $I_n\subset[0,13]$ denote the interval on which $h_{n-1}$ is decreasing which contains $x_0$. (That is, $I_n$ is the interval containing $x_0$ on which $h_{n-1}$ is changed to form $h_n$.) It is easy to see that $m(I_n)=\left(\dfrac{1}{13}\right)^{n-1}$.
Moreover, if we set $$E_n=\left\{x\in I_n:\dfrac{h_\infty(x)-h_\infty(x_0)}{x-x_0}\geq0\right\},$$ (again including only the $x$ values at which $h_n$ is increasing) then we will show that $\Delta(E_n,I_n)\geq1/26$. Since the intervals $I_n\to\{x_0\}$, this will immediately imply that (b') holds.
We first show that, if $I_1=[1,2]$, then $$\Delta(E_0,I_0)=\Delta(E_0,[0,13])\geq1/26.$$ The idea is that, regardless of the value of $h_\infty(x_0)$, the union $[0,.5]\cup[2.5,3]$ (i.e., the left half of the interval of increase to the left of $I_1$ and the right half of the interval of increase to the right of $I_1$) contains a portion of $E_0$ of measure at least $1/2$. Put $y_0=h_\infty(x_0)$. We proceed by cases.
\begin{case}
$y_0\geq1.25$.
\end{case}
By inspection, $E_0$ contains the interval $[0,0.5]$. Thus $\Delta(E_0,[0,13])\geq1/26$.
\begin{case}
$y_0\leq1$.
\end{case}
By inspection, $E_0$ contains the interval $[2.5,3]$. Thus $\Delta(E_0,[0,13])\geq1/26$.
\begin{case}
$1\leq y_0\leq1.25$.
\end{case}
This is the interesting case. Consider the portions of the graph with $x$-values in $[0,.5]$ and $[2.5,3]$. $h_\infty$ has constant slope on these intervals, and $$h_\infty(0)=h_\infty(2.5)=1\text{, and }h_\infty(.5)=h_\infty(3)=1.25.$$ Therefore if we choose $\epsilon\in(0,1/2)$ so that $h_\infty(\epsilon)=y_0$, then $E_0$ contains the union $[0,\epsilon]\cup[2.5+\epsilon,3]$.
But $m([0,\epsilon]\cup[2.5+\epsilon,3])=.5$, so we conclude that $\Delta(E_0,[0,13])\geq1/26$.
This argument extends immediately to show that $\Delta(E_0,[0,13])\geq1/26$ for every other choice of $I_1$, by always finding a portion of $E_0$ of measure at least $1/2$ in the intervals of increase of $h_0$ directly to the left and right of $I_1$.
Let us now consider the second interval, $I_1$. We wish to show that $\Delta(E_1,I_1)\geq1/26$. The first (i.e., from left to right) possibility for $I_2$ is the interval $[14/13,15/13]$.
Here the argument is the same, with the $y$ values for the cases being: Case 1) $y_0\geq27/16$, Case 2) $y_0\leq1.5$, and Case 3) $1.5\leq y_0\leq27/16$, and the intervals where we will be finding points in $E_1$ are $[26/26,27/26]$ (the first half of the interval of increase of $h_1$ to the left of $I_2$) and $[31/26,32/26]$ (the second half of the interval of increase of $h_1$ to the right of $I_2$). The conclusion is that $m(E_1)$ is greater than or equal to the length of one of these intervals, so $m(E_1)\geq1/26$, while $m(I_1)=1$. Thus $\Delta(E_1,I_1)\geq1/26$. This iterates nicely (we always pick up a factor of $1/13$ in both the numerator and the denominator of our density calculation, which cancel), thus $\Delta(E_n,I_n)\geq1/26$, and thus the upper density of the set
\[
E=\left\{x\in[0,13]:\dfrac{h_\infty(x)-h_\infty(x_0)}{x-x_0}\geq0\right\}
\]
at $x_0$ is greater than or equal to $1/26>0$, proving that $h_\infty$ satisfies (b').
\section{Counterexamples are Typical}
The goal of this final section is to show that continuous functions with Property (b')
are ubiquitous in the complete space $C[0,1]$ of all continuous functions on [0,1] endowed with the $\sup$ metric. However, ``ubiquitous'' can be defined in several ways.
A property is {\em typical} in a complete metric space of functions, such as $C[0,1]$, if the set of functions
enjoying that property is residual (the complement of a set of the first Baire
category) in the space. A well known method of establishing
whether a given set $A$ is residual or not is the so-called Banach-Mazur Game, which we describe
briefly here. See \cite{Z} for more details and generalizations.
This is a two player game in which the players take turns selecting open balls from $C[0,1]$.
Suppose $A\subset C[0,1]$ is fixed. Player $P_1$ selects a ball $B_1$, then player two, $P_2$, selects a ball
$B_2\subset B_1$, and so on, so that the game produces a nested sequence of balls
$\{B_n:n\in\mathbb N\}$. $P_2$ wins the game if $A\cap\bigcap_{n=1}^\infty B_n \not=\emptyset$;
otherwise $P_1$ wins. $P_2$ has a winning strategy provided it can always win the game, independently of the balls $P_1$ selects.
\begin{theorembm}
$P_2$ has a winning strategy iff $A$ is residual.
\end{theorembm}
The Banach-Mazur Game is a convenient way to see why the next result is true. The proof uses the
following notation. If $p_i=(x_i,y_i)\in\mathbb R^2,\ i=1,2$ we define
\[
DQ(p_1,p_2)=\frac{y_2-y_1}{x_2-x_1}.
\]
\begin{theorem}
The typical continuous function has Property (b').
\end{theorem}
\begin{proof}
Suppose $P_1$ has chosen the ball $B_n\equiv B_\epsilon(f)\subset C[0,1]$ at the $n^{th}$ stage of play.
We describe the strategy for $P_2$.
\begin{enumerate}
\item First partition $[0,1]$ into sufficiently small intervals such that the insertion (see
$(\star)$ on page \pageref{insert}) of $h$ into any partition interval lies within the $\epsilon/2$ ball
about $f$. Let $g:[0,1]\to\mathbb R$ be the concatenation of all such insertions and define
\[
E_{x_0}=\left\{x: \frac{g(x)-g(x_0)}{x-x_0}>0\right\}.
\]
Then for every partition interval $J$ and every $x_0\in J$,
$\Delta(E_{x_0},J)>\alpha$. This function $g$ is the center of the ball $P_2$ will respond with.
\item To determine the response radius, first fix a partition interval $J$ and an $x_0\in J$. There exists
$\eta(x_0)>0$ such that $\Delta(E_{x_0}\backslash B_{\eta(x_0)}(x_0),J)>\alpha$. Hence by compactness
there is a $\eta>0$ such that for every $x\in [0,1]$ and every partition interval $J$ containing $x$,
\[
\Delta(E_{x}\backslash B_\eta(x),J)>\alpha.
\]
Hence, again by compactness, there is a $0<\delta<\epsilon/2$ such that $\delta<\eta$ and such that, whenever $x_0\in[0,1]$ and
$x_1\in E_{x_0}\backslash B_{\eta}(x_0)$,
\begin{equation}\label{nest}
p_0\in B_\delta\big((x_0,g(x_0))\big)\text{ and }
p_1\in B_\delta\big((x_1,g(x_1))\big)\text{ imply }DQ(p_0,p_1)>0.
\end{equation}
\end{enumerate}
\noindent Player $P_2$ returns the ball $B_\delta(g)$
where $\delta$ is the radius just determined above.
Now, any sequence of plays converges uniformly in the sense that if $f_n$ is any choice of a function in $B_n$, then the sequence of functions $\{f_n\}$ converges uniformly. Due to
(\ref{nest}), at each $x\in[0,1]$ the density of the set of points for which the difference quotient is
positive at $x$ exceeds $\alpha$ at the scale of each play of $P_2$. That is, the limit function
satisfies (b'); this completes the proof.
\end{proof}
Since the set of functions which are monotone on some subinterval is well known to be of the first category in $C[0,1]$, the following is an immediate corollary.
\begin{cor}
The typical continuous function satisfies property~(b') but is not monotonically increasing on any interval.
\end{cor}
\end{document}
\begin{document}
\title{Sum rules and large deviations for spectral matrix measures in the Jacobi ensemble}
\author{{\small Fabrice Gamboa}\footnote{ Universit\'e Paul Sabatier, Institut de Math\'ematiques de Toulouse, 31062-Toulouse Cedex 9, France,
[email protected]}
\and{\small Jan Nagel}\footnote{Eindhoven University of Technology, Department of Technology and Computer Science, 5600 MB Eindhoven, Netherlands,
e-mail: [email protected]}
\and{\small Alain Rouault}\footnote
{Laboratoire de Math\'ematiques de Versailles, UVSQ, CNRS, Universit\'e Paris-Saclay, 78035-Versailles Cedex France, e-mail: [email protected]}}
\maketitle
\begin{abstract}
We continue to explore the connections between large deviations for objects coming from random matrix theory and sum rules.
This connection was established in \cite{magicrules} for spectral measures of classical ensembles (Gauss-Hermite, Laguerre, Jacobi) and it was extended to spectral matrix measures of the Hermite and Laguerre ensembles in \cite{GaNaRomat}. In this paper, we consider the remaining case of spectral matrix measures of the Jacobi ensemble. Our main results are a large deviation principle for such measures and a sum rule for matrix measures with reference measure the Kesten-McKay law.
As an important intermediate step, we derive the distribution of canonical moments of the matrix Jacobi ensemble.
\end{abstract}
\section{Introduction}
A probability measure on a compact subset of $\mathbb R$ or on the unit circle may be encoded by the sequence of its moments or by the coefficients of the recursion satisfied by the corresponding orthogonal polynomials. It is however not easy to relate information on the measure (for example on its support) with information on the recursion coefficients.
Sum rules give a way to translate between these two languages.
Indeed, a sum rule is an identity relating a functional of the probability measure, usually in the form of a relative entropy, and a functional of its recursion coefficients. The ``{\it measure side}'' of the identity gives the discrepancy between the measure and a reference measure, and the ``{\it coefficient side}'' gives the discrepancy between the corresponding sequence of recursion coefficients.
One of the most classical examples of such a sum rule is the Szeg{\H o}-Verblunsky theorem for measures on the unit circle $\mathbb T$, see Chapter 1 of \cite{simon2}. Here, the reference measure is the uniform measure on $\mathbb{T}$ and the coefficient side involves a sum of functions of the Verblunsky coefficients. The most famous sum rule for measures on the real line is the Killip-Simon sum rule \cite{KS03} (see also \cite{simon2} Section 3.5). In this case, the reference measure is the semicircle distribution. In \cite{magicrules}, we gave a probabilistic interpretation of the Killip-Simon sum rule (KS-SR) and a general strategy to construct and prove new sum rules. The starting point is an $N\times N$ random matrix $X_N$ chosen according to the Gaussian unitarily invariant ensemble. The random spectral measure $\mu_N$ of this random matrix is then defined through its moments, by the relation
\begin{align*}
\int x^k d\mu_N = (X_N^k)_{1,1} .
\end{align*}
It was shown in \cite{gamboacanonical} that, as $N$ tends to infinity, the sequence $(\mu_N)_N$ satisfies a large deviation principle (LDP). The rate function $\mathcal{I}_c$ is a functional of the recursion coefficients. Surprisingly, this functional is exactly
the coefficient side of the KS-SR. Later, in \cite{magicrules}, we gave an alternative proof of this LDP,
with a rate function $\mathcal{I}_m$ that is
exactly the measure side of KS-SR.
Since a large deviation rate function is unique, this
implies the sum rule identity $\mathcal{I}_c=\mathcal{I}_m$.
Working with a random matrix from one of the other two classical ensembles, the Laguerre or the Jacobi ensemble, this method leads to new sum rules.
Here the reference measures are the Marchenko-Pastur law and the Kesten-McKay law, respectively \cite{magicrules}.
We also refer to recent interesting
developments of the method explored in \cite{BSZ} and \cite{breuer2018large}.
One of the ingredients to prove
the LDP
in terms of the coefficients
is the fact that these coefficients are independent and have explicit distributions. To be more precise, it has been shown in \cite{dumede2002} that in the Gaussian case the coefficients are independent random variables with normal or gamma distributions. The Laguerre case has also been considered in \cite{dumede2002}. In this last frame,
the convenient encoding is not directly by
the
recursion coefficients, but by
a decomposition of them into independent variables. In \cite{Killip1}, a further decomposition is
shown
for the Jacobi ensemble. Actually these variables are the Verblunsky coefficients of the measure lifted to the unit circle, which are sometimes also called canonical moments; see the monograph \cite{dette1997theory}.
A natural extension of scalar measures is given by measures with values
in the space of Hermitian nonnegative definite matrices. There is a rich theory of polynomials orthogonal with respect to such a matrix measure, and we refer the interested reader to \cite{sinap1996orthogonal}, \cite{duran1996orthogonal}, \cite{duran1995orthogonal} or \cite{damanik2008analytic} and references therein. Surprisingly,
sum rule identities also hold in the matrix frame.
In \cite{damal},
a matricial version of KS-SR is proved
(see also Section 4.6 of \cite{simon2}). In \cite{GaNaRomat}, we have extended our probabilistic method to the matrix case as well, and have proved an LDP
for random matrix valued spectral measures. This $p\times p$ measure $\Sigma_N$ is now defined by
its matrix moments
\begin{align} \label{defmatrixspectral}
\int x^k d\Sigma_N(x) = (X_N^k)_{i,j=1,\dots ,p} ,\qquad k \geq 1,
\end{align}
where $X_N$ is as before a random $N\times N$ matrix and $N\geq p$. Using the explicit construction of random matrices of the Gaussian and Laguerre ensemble, it is possible to derive the distribution of the recursion coefficients of $\Sigma_N$, which are now $p\times p$ matrices, and prove an LDP
for them, generalizing the results of \cite{dumede2002} and \cite{gamboacanonical}. Collecting these two LDPs
and different representations of the rate function, we obtain the matrix sum rule both for Gaussian and Laguerre cases.
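For concreteness, here is a small numerical illustration (our own, not part of the argument) of definition~(\ref{defmatrixspectral}): the $k$-th matrix moment of $\Sigma_N$ is simply the top-left $p\times p$ block of $X_N^k$, and writing $X_N=U\,\mathrm{diag}(\lambda)\,U^*$ it equals $\sum_j \lambda_j^k\, u_j u_j^*$, where $u_j$ collects the first $p$ entries of the $j$-th eigenvector.
\begin{verbatim}
# Matrix moments of the spectral matrix measure of a Hermitian matrix.
import numpy as np

rng = np.random.default_rng(0)
N, p = 6, 2
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
X = (A + A.conj().T) / 2                  # a Hermitian N x N matrix

def matrix_moment(X, k, p):
    return np.linalg.matrix_power(X, k)[:p, :p]

lam, U = np.linalg.eigh(X)                # X = U diag(lam) U^*
M2_alt = sum(l**2 * np.outer(U[:p, j], U[:p, j].conj())
             for j, l in enumerate(lam))
assert np.allclose(matrix_moment(X, 2, p), M2_alt)
\end{verbatim}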
A large deviation principle for the coefficients in the matricial Jacobi case, and consequently a new sum rule, has been open so far.
In this paper, we complete the trio of matrix
measures of classical ensembles by addressing
the Jacobi case. We prove an LDP
for the spectral matrix measure in Theorem \ref{thm:LDPcoefficient}, which then implies the
new matrix
sum rule stated in Theorem \ref{LDPSR}.
A crucial ingredient for the proof of Theorem \ref{thm:LDPcoefficient}
is Theorem \ref{thm:distributioncanonical},
where we derive the distribution of matricial canonical moments of $\Sigma_N$.
To the best of our knowledge, this result is new. Actually, we have to consider for our probabilistic approach certain Hermitian versions of the canonical moments and we show that these versions are independent and each distributed as a $p\times p$ matrix of the Jacobi ensemble, thereby generalizing the results of \cite{Killip1} to the matrix case.
An additional difficulty is that the measures we need to consider are finitely supported and hence not nontrivial. In this case, many arguments
used in the scalar case cannot be extended directly.
The fact that there is still a one-to-one correspondence between the spectral measure $\Sigma_N$ and its canonical moments might therefore be of independent interest.
Let us explain the main obstacle
that has so far impeded a large deviation analysis of the coefficients in the
matrix Jacobi case. For
the Gaussian or Laguerre ensemble, the distribution of recursion coefficients can be derived through repeated Householder reflections applied to the full matrix $X_N$. In the Jacobi case, it seems impossible to control the effect of these transformations on the different subblocks of $X_N$. Instead, looking at the scalar case, there are two potential strategies. First, by identifying the canonical moments as variables appearing in the CS-decomposition of $X_N$. In the scalar case, this goes back to \cite{sutton2009computing} and \cite{edelman2008beta}. Any effort to generalize this to a block-CS-decomposition seems to fail due to non-commutativity of the blocks. The other possible strategy is to follow the path of \cite{Killip1} applying
the (inverse) Szeg{\H o} mapping. This yields a symmetric measure on the unit circle $\mathbb{T}$. Then apply the Householder algorithm to the corresponding unitary matrix. Unfortunately, in the matrix case, the Szeg{\H o} mapping does not give a good symmetric measure on $\mathbb{T}$ in the matrix case. We refer to Section 4.2
for a discussion of this difficulty.
In the present paper, we obtain the distribution of canonical moments by directly
computing the Jacobian of a compound map. The first application maps
support points and weights of $\Sigma_N$ to the recursion coefficients.
Then the recursion coefficients are mapped
to a suitable Hermitian version of the canonical moments. We give two different ways to compute this distribution. One proof follows by
direct calculation. The other one is
more subtle.
It uses the relation between the canonical coefficients and the
matrix Verblunsky coefficients.
This paper is
organized as follows.
In Section \ref{sec:matrixmeasures}, we first give notations and explain the different representations for the
matrix measures. We also
discuss finitely supported matrix measures.
In Section \ref{sec:sumrule}, we
give our new sum rule.
Section \ref{sec:randommatrices} is devoted to the setup of the probability distributions of the matrix models and of the canonical moments. This leads in Section \ref{sec:largedeviations} to an LDP for the coefficient side.
Section \ref{sec:mainproofs} contains the proof of our three main results, subject to technical lemmas,
whose proofs are postponed to
Section \ref{sec:proofs}.
\section{Matrix measures and representation of coefficients}
\label{sec:matrixmeasures}
All along this paper, $p$ will be a fixed integer. A $p\times p$ matrix measure $\Sigma$ on $\mathbb R$ is a matrix of complex valued Borel measures on $\mathbb R$ such that for every Borel set $A \subset \mathbb R$ the matrix $\Sigma(A)$ is nonnegative definite, i.e. $\Sigma (A) \geq 0$.
When its $k$-th moment is finite, it is denoted by
\[M_k(\Sigma) = \int x^k \mathrm d \Sigma (x), \qquad k \geq 1,\]
writing $M_k$ for $M_k(\Sigma)$ if the measure is clear from context.
We keep, as much as possible, the notations close to those of \cite{GaNaRomat}. All matrix measures in this paper will be of size $p\times p$.
Let ${\bf 1}$ be the $p \times p$ identity matrix and ${\bf 0}$ be the $p \times p$ null matrix.
For every integer $n$, $I_n$ denotes the $np\times np$ identity matrix. The set of all matrix measures with support in some set $A$ is denoted by $\mathcal{M}_p(A)$, and we write $\mathcal{M}_{p,1}(A) := \{ \Sigma \in \mathcal{M}_p(A):\, \Sigma(A)={\bf 1} \}$
for the set of normalized measures.
For the remainder of this section, let $\Sigma\in \mathcal{M}_{p,1}(\mathbb{R})$
have compact support. Such a measure $\Sigma$ can be uniquely described by its sequence of moments $(M_1(\Sigma),M_2(\Sigma),\dots)$. Another particularly convenient set of parameters characterizing the measure is given by the coefficients in the recursion of orthogonal matrix polynomials,
introduced in the following subsection. We will largely follow
the exposition developed in \cite{damanik2008analytic}. For matrix measures supported by $[0,1]$, there exists, just as in the scalar case, a remarkable decomposition of the recursion coefficients into a set of so-called canonical moments. The parametrization of $\Sigma$ in terms of these canonical moments is one of the main tools
for our probabilistic results.
\subsection{Orthogonal matrix polynomials}
The (right) inner product of two matrix polynomials ${\bf f} , {\bf g}$, i.e., polynomials whose coefficients are complex $p\times p$ matrices, is defined by
\[\langle\langle {\bf f} , {\bf g} \rangle\rangle = \int {\bf f}(x)^\dagger d\Sigma(x)\!\ {\bf g}(x)\,.\]
A matrix measure is called nontrivial, if for any non zero polynomial $P$ we have
\begin{align} \label{nontrivial}
\mathrm{tr} \langle\langle P,P \rangle\rangle >0 ,
\end{align}
see Lemma 2.1 of \cite{damanik2008analytic} for equivalent characterizations of nontriviality.
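For instance, a matrix measure whose support consists of finitely many points $x_1,\dots ,x_m$ is not nontrivial: the nonzero polynomial $P(x)=(x-x_1)\cdots (x-x_m)\,{\bf 1}$ vanishes on the support, so that $\langle\langle P,P\rangle\rangle ={\bf 0}$. Such finitely supported measures are discussed in Section \ref{sec:finitelysupported}.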
Let us first suppose that $\Sigma$ is nontrivial. Lemma 2.3 of \cite{damanik2008analytic} shows that then $\langle\langle Q,Q\rangle\rangle $ is positive definite for any monic polynomial $Q$ (with leading coefficient ${\bf 1}$).
We may then apply the Gram-Schmidt procedure to $\{\boldsymbol{1}, x\boldsymbol{1}, \dots\}$ and obtain a sequence of monic matrix polynomials $P_n, n \geq 0$, where $P_n$ has degree $n$ and which are orthogonal with respect to $\Sigma$, that is, $ \langle\langle P_n , P_m \rangle\rangle={\bf 0}$ if $m\neq n$.
The polynomials satisfy the recurrence
\begin{equation}
\label{recursion1}
xP_n = P_{n+1} + P_n u_n + P_{n-1} v_n , \qquad n \geq 0,
\end{equation}
where, setting
\begin{align}
\gamma_n := \langle\langle P_n, P_n\rangle\rangle\,,
\end{align}
$\gamma_n$ is Hermitian and positive definite, and for $n\geq 1$
\begin{align} \label{hidden1}
u_n = \gamma_n^{-1} \langle\langle P_n, xP_n\rangle\rangle , \quad
v_n = \gamma_{n-1}^{-1}\gamma_n ,
\end{align}
with
$v_0={\bf 0}$.
This defines a one-to-one correspondence between the sequences $(u_n)_{n\geq 0}$, $(v_n)_{n\geq 1}$ and the measure $\Sigma$.
From the matrix coefficients $u_n,v_n$, we can then define a sequence of very useful Hermitian matrices.
We first define the matrices related to
the recursion for the orthonormal polynomials. For $n\geq 0$, let
\begin{align}
\label{defAAn}
\tilde{\mathcal{A}}_{n+1} &:= \gamma_{n}^{1/2} v_{n+1}\gamma_{n+1}^{-1/2} =\gamma_{n}^{-1/2}\gamma_{n+1}^{1/2} , \\
\ \mathcal{B}_{n+1} &:= \gamma_n^{1/2}u_n \gamma_n^{-1/2} = \gamma_{n}^{-1/2} \langle\langle P_n, x P_n \rangle\rangle \gamma_n^{-1/2} \label{defBn}.
\end{align}
Obviously, setting
\[{\bf p}_n = P_n \gamma_n^{-1/2}, \qquad n\geq 0,\]
defines a sequence of matrix orthonormal polynomials. These polynomials
satisfy
the recursion
\begin{align}
\label{recursion2}
x {\bf p}_n = {\bf p}_{n+1}\tilde{\mathcal A}_{n+1}^\dagger +{\bf p}_n \mathcal B_{n+1} + {\bf p}_{n-1}\tilde{\mathcal A}_{n}, \qquad n\geq 0,
\end{align}
taking ${\bf p}_{-1}={\bf 0}$. The matrices $\tilde{\mathcal{A}}_{n}$ and $\mathcal{B}_n$ play the role of matrix Jacobi coefficients in the following sense. Define the infinite block-tridiagonal matrix
\begin{align} \label{jacobimatrix}
J =
\begin{pmatrix}
\mathcal B_1 & \tilde{\mathcal A}_1 & \\
\tilde{\mathcal A}_1^\dagger & \mathcal B_2 & \ddots \\
& \ddots & \ddots
\end{pmatrix} .
\end{align}
On the space of matrix polynomials, the map $f \mapsto (x \mapsto xf(x))$ is a right homomorphism, represented in the (right-module) basis ${\bf p}_0, {\bf p}_1, \dots$ by the matrix $J$.
Moreover, the measure $\Sigma$ is nothing more than the spectral measure of the matrix $J$ defined through its moments by
\begin{align*}
e_i^* \int x^k\, d\Sigma(x)e_j = e^*_iJ^ke_j, \qquad i,j =1, \dots, p.
\end{align*}
(See for example Theorem 2.11 of \cite{damanik2008analytic}).
The matrix $\mathcal{B}_n$ is Hermitian and we define the Hermitian square of $\tilde{\mathcal{A}}_n$ by
\begin{equation} \label{defAn}
\mathcal A_n = \tilde{\mathcal A}_n\tilde{\mathcal A}_n^\dagger= \gamma_{n-1}^{-1/2} \gamma_n \gamma_{n-1}^{-1/2}\,.
\end{equation}
Note that $\mathcal A_n$ is Hermitian positive definite.
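For orientation, in the scalar case $p=1$ these quantities reduce to the classical Jacobi parameters: $\mathcal B_{n+1}=u_n$ is the diagonal recursion coefficient, $\tilde{\mathcal A}_n=\sqrt{\gamma_n/\gamma_{n-1}}$ is the positive off-diagonal coefficient, and $\mathcal A_n=v_n$ is its square.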
\subsection{Measures on $[0,1]$}
Now suppose that $\Sigma$ is a nontrivial matrix measure supported by a subset of
$[0,1]$. We present two (equivalent) ways to parametrize $\Sigma$, extending
the corresponding parametrization of the scalar case. The first one uses the canonical moments, the second one uses the Szeg{\H o} mapping and Verblunsky coefficients.
\subsubsection{Encoding via canonical coefficients}
Dette and Studden \cite{destu02} proved the following matrix version of Favard's Theorem for measures on $[0,1]$: If $\Sigma$ has support in $[0,1]$, there exist matrices $U_n$, $n\geq 1$, such that the recursion coefficients
defined in \eqref{recursion1} may be decomposed as
\begin{align} \label{decomposition}
u_n =\zeta_{2n+1}+\zeta_{2n}, \qquad v_n = \zeta_{2n-1}\zeta_{2n} , \qquad n\geq 1,
\end{align}
where $\zeta_0={\bf 0}, \zeta_1=U_1$ and for $n>1$
\begin{align}\label{recursion3}
\zeta_n = ({\bf 1}-U_{n-1})U_n\,.
\end{align}
Moreover, $U_n$ has the following geometric interpretation. Suppose $M_1,\dots ,M_{n-1}$ are the first $n-1$ matrix moments of some nontrivial matrix probability measure on $[0,1]$. Then there exist Hermitian matrices $M_n^-$, $M_n^+$, which are lower and upper bounds for the $n$-th matrix moment.
More precisely, $M_1,\dots ,M_n$ are the first $n$ moments of some nontrivial measure with support in $[0,1]$, if and only if
\begin{align} \label{momentineq}
M_n^-< M_n < M_n^+ .
\end{align}
Here we use the partial
Loewner ordering, that is, $A> B$ ($A\geq B$) for Hermitian matrices $A,B$, if and only if $A-B$ is positive (non-negative) definite.
Then, if $M_n$ are the moments of a nontrivial measure,
the following representation holds:
\begin{align} \label{canonicalmoment}
U_n = (M_n^+ - M_n^-)^{-1} (M_n - M_n^-)\,.
\end{align}
Thus,
$U_n$ describes the relative position of $M_n$ within the set of all possible $n$-th matrix moments, given the matrix moments of lower order. For this reason, $U_n$ is also called a \emph{canonical moment}. Let us define
\begin{align} \label{range}
R_n = M_n^+-M_n^-, \qquad H_n = M_n - M_n^-,
\end{align}
so that $U_n = R_n^{-1} H_n$.
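As a quick illustration: for a normalized measure the first moment always satisfies ${\bf 0}\leq M_1\leq {\bf 1}$, so $M_1^-={\bf 0}$ and $M_1^+={\bf 1}$, and hence $R_1={\bf 1}$, $H_1=M_1$ and $U_1=M_1$.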
A Hermitian version of the canonical moments can be defined by
\begin{align}\label{canonicalmoment2}
\mathcal{U}_n = R_n^{1/2} U_n R_n^{-1/2} = R_n^{-1/2} H_n R_n^{-1/2} .
\end{align}
The matrices $\mathcal{U}_n$ have been considered previously in \cite{dette2012matrix}, to study asymptotics
in the random matrix moment problem.
Note that $U_n$ and $\mathcal{U}_n$ are similar and
\[ {\bf 0} < \mathcal U_n < {\bf 1}\,.\]
Finally, we remark that $M_n^-,M_n^+$ are continuous functions of $M_1,\dots , M_{n-1}$, and that
\begin{align}
\label{hgamma}
H_{2n} = \gamma_n .
\end{align}
\subsubsection{Encoding via Szeg{\H o} mapping}
\label{susec:Szego}
The Szeg{\H o} mapping is the two-to-one map from $\mathbb T = \{z \in\mathbb C : |z| =1\}$ onto $[-2, 2]$ defined by
\begin{align}
\label{Smap}
z \in \mathbb T\mapsto z+ z^{-1} \in [-2,2]\,.\end{align}
This induces a bijection $\Sigma \mapsto \hbox{Sz}(\Sigma)$ between matrix probability measures on $\mathbb T$ invariant by $z \mapsto \bar z$ and matrix probability measures on $[-2, 2]$.
On $\mathbb T$, a matrix measure is characterized by the system of its matricial Verblunsky coefficients, ruling the recursion of (right) orthogonal polynomials. When the measure is invariant, the
Verblunsky coefficients $(\boldsymbol{\alpha}_n)_{n\geq 0}$ are Hermitian (\cite{damanik2008analytic} Lemma 4.1) and satisfy ${\bf 0} < \boldsymbol{\alpha}_n^2 < \boldsymbol{1}$ for every $n$.
The Verblunsky coefficients of such a matrix probability measure on $\mathbb T$ and the Jacobi coefficients of the corresponding matrix measure on $[-2, 2]$ are connected by the Geronimus relations (\cite{damanik2008analytic} Theorem 4.2). It is more convenient here
to consider the matrix measure on $[0,1]$ denoted by $\widetilde{\operatorname{Sz}}(\Sigma)$, obtained by
pushing forward Sz$(\Sigma)$ by the affine mapping $x \mapsto (2-x)/4$.
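Concretely, a point $z=e^{i\theta}\in\mathbb T$ is first mapped to $z+z^{-1}=2\cos\theta\in[-2,2]$, and then by the affine map to $(2-2\cos\theta)/4=\sin^2(\theta/2)\in[0,1]$.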
For $n \geq 0$, let
$\boldsymbol{\alpha}_n$ be the Verblunsky coefficient of $\Sigma$ and $\mathcal{U}_{n+1}$ the Hermitian
canonical moment of $\widetilde{\operatorname{Sz}}(\Sigma)$. Then, the following equality
holds:
\begin{align}
\label{DeWag}
\boldsymbol{\alpha}_n = 2 \mathcal U_{n+1} - {\bf 1}\,.
\end{align}
The correspondence between the two above encodings is proven in \cite{dewag09}, Theorem 4.3, for real-valued matrix measures. The general complex case is considered in \cite{jensdiss}.
\begin{rem}
In the scalar case, the canonical parameters $U_n$
can be identified in the CS decomposition (see Edelman-Sutton \cite{edelman2008beta}). In the matrix case, this approach does not seem to work, due to the lack of commutativity.
\end{rem}
\subsection{Finitely supported measures}
\label{sec:finitelysupported}
When the support of $\Sigma$ consists of $N= np$ distinct points, then \eqref{nontrivial} cannot be satisfied for all non zero polynomials and $\Sigma$ is not nontrivial. However, if \eqref{nontrivial} is satisfied for all non zero polynomials of degree at most $n-1$, then actually $\langle\langle Q,Q\rangle\rangle$ is positive definite for all monic polynomials of degree at most $n-1$, see Lemma 2.3 in \cite{damanik2008analytic}. This implies that we can use
the Gram-Schmidt method to define monic orthogonal polynomials up to degree $n$. Further,
$\gamma_k = \langle\langle P_k, P_k\rangle\rangle$ is positive definite for $k\leq n-1$. Therefore, the orthogonal polynomials
also allow us to define the recursion coefficients $u_0, \dots, u_{n-1}; v_1, \dots, v_{n-1}$. Hence,
we can construct $\tilde{\mathcal A}_1, \dots, \tilde{\mathcal A}_{n-1}; \mathcal B_1, \dots, \mathcal B_n$ as well, with $\tilde{\mathcal{A}}_k$ nonsingular for $k=1, \dots, n-1$.
Let us denote by $J_{n}$ the $np\times np$ Hermitian block matrix of Jacobi coefficients
\begin{equation}
\label{jacmatrix0}
J_{n}
= \begin{pmatrix}
\mathcal B_1 & \tilde{\mathcal A}_1 & & \\
\tilde{\mathcal A}_1^\dagger &\mathcal B_2 & \ddots & \\
& \ddots & \ddots & \tilde{\mathcal A}_{n-1} \\
& & \tilde{\mathcal A}_{n-1}^\dagger &\mathcal B_n
\end{pmatrix} .
\end{equation}
Let $\Sigma^{J_n}$ denote the spectral measure of $J_n$, as defined by \eqref{defmatrixspectral}. The same calculation as in the scalar case shows that the first $2n-1$ moments of $\Sigma^{J_n}$ coincide with those of $\Sigma$. Since these matrix moments determine uniquely the recursion coefficients of monic orthogonal polynomials, the entries of the matrix \eqref{jacmatrix0} are then also the recursion coefficients of orthonormal polynomials
for $\Sigma^{J_n}$.
Now, suppose that the support points of $\Sigma$ lie in $[0,1]$. The existence of the canonical moments is tackled in the following lemma, proved in
Section \ref{sec:proofs}.
It requires an additional assumption and is not obvious.
\begin{lem} \label{lem:existencecanonical}
Suppose $\Sigma \in \mathcal{M}_{p,1}([0,1])$ is such that $\mathrm{tr} \langle\langle P,P\rangle\rangle >0$ for all non zero polynomials of degree at most $n-1$. Suppose further
$\Sigma(\{0\})=\Sigma(\{1\})={\bf 0}$. Then, the matrices $M_k^-,M_k^+$ for $k\leq 2n-1$ still exist
and
they satisfy $M_k^-<M_k<M_k^+$ for $k\leq 2n-1$. Moreover, the matrices
\begin{align}
U_k = (M_k^+-M_k^-)^{-1}(M_k-M_k^-), \qquad 1\leq k \leq 2n-1,
\end{align}
are related to the recursion coefficients $u_0, \dots, u_{n-1}; v_1, \dots v_{n-1}$ of $\Sigma$ as in
\eqref{decomposition} and
\eqref{recursion3}.
\end{lem}
Lemma \ref{lem:existencecanonical} implies that we may still define the Hermitian variables $\mathcal U_1, \dots, \mathcal U_{2n-1}$, if the measure $\Sigma$ is sufficiently nontrivial. In conclusion, for any measure satisfying the assumptions of Lemma \ref{lem:existencecanonical}, we have a one-to-one correspondence between:
\begin{itemize}
\item matrix moments $M_1,\dots ,M_{2n-1}$, with $M_k^-<M_k<M_k^+$ for $k=1,\dots ,2n-1$,
\item recursion coefficients $\mathcal{B}_1,\dots ,\mathcal{B}_{n}$ as in \eqref{defBn} and positive definite $\mathcal{A}_1,\dots ,\mathcal{A}_{n-1}$ as in \eqref{defAn},
\item canonical moments $\mathcal{U}_1,\dots ,\mathcal{U}_{2n-1}$ as in \eqref{canonicalmoment2}, with ${\bf 0} < \mathcal{U}_k < {\bf 1}$ for $k\leq 2n-1$.
\end{itemize}
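As a sanity check, in the smallest case $n=1$ all three parametrizations reduce to the first moment: since $P_0={\bf 1}$ and $\gamma_0=\Sigma([0,1])={\bf 1}$, one has $\mathcal B_1=\langle\langle P_0,xP_0\rangle\rangle =M_1$, while $M_1^-={\bf 0}$ and $M_1^+={\bf 1}$ give $U_1=\mathcal U_1=M_1$.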
\section{The Jacobi sum rule}
\label{sec:sumrule}
The reference measure for the sum rule in the Jacobi case is the matricial version of the
Kesten-McKay law.
In the scalar case, this measure is defined for parameters $\kappa_1,\kappa_2\geq 0$ by
\begin{align*}
\hbox{KMK}(\kappa_1,\kappa_2)(dx) = \frac{2+ \kappa_1 + \kappa_2}{2\pi}\frac{\sqrt{(u^+ -x)(x- u^-)}}{x(1-x)} \ \mathbbm{1}_{(u^-, u^+)}(x) dx\, ,
\end{align*}
where
\begin{align}
\label{upm}
u^\pm := \frac{1}{2} + \frac{\kappa_1^2 - \kappa_2^2 \pm 4 \sqrt{(1+\kappa_1)(1+\kappa_2)(1+\kappa_1+\kappa_2)}}{2(2+\kappa_1+\kappa_2)^2} .
\end{align}
It appears (sometimes in other parametrizations) as a limit law for spectral measures of regular graphs (see \cite{mckay1981expected}), as the asymptotic eigenvalue distribution of the Jacobi ensemble (see \cite{dette2009some}), or in the study of random moment problems (see \cite{dette2018universality}). For $\kappa_1=\kappa_2=0 $, it reduces to the arcsine law.
The matrix version is then denoted by
\begin{align}
\Sigma_{\operatorname{KMK}(\kappa_1, \kappa_2)} := \operatorname{KMK}(\kappa_1, \kappa_2)\cdot \boldsymbol{1} .
\end{align}
The canonical moments of $\Sigma_{\operatorname{KMK}(\kappa_1, \kappa_2)}$ of even/odd order are given by
\begin{align} \label{KMKcanonical}
U_{2k} = U_e := \frac{1}{2+\kappa_1+\kappa_2} \cdot \boldsymbol{1} ,
\qquad U_{2k-1} = U_o := \frac{1+\kappa_1}{2+\kappa_1+\kappa_2}\cdot \boldsymbol{1}\,.
\end{align}
(See \cite{gamboa2011large} Sect. 6
for the scalar case, which can obviously be extended
to the matrix case.)
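For instance, for $\kappa_1=\kappa_2=0$ formula \eqref{upm} gives $u^-=0$ and $u^+=1$, $\operatorname{KMK}(0,0)$ is the arcsine law on $[0,1]$, and \eqref{KMKcanonical} yields $U_e=U_o=\tfrac12\,\boldsymbol{1}$, in accordance with the classical fact that all canonical moments of the arcsine law are equal to $1/2$.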
Both sides of
our sum rule (Theorem \ref{LDPSR}) will only be finite for measures satisfying a certain condition on their support,
related to the Kesten-McKay law. Let $I=[u^-,u^+]$. We define $\mathcal{S}_p = \mathcal{S}_p(u^-,u^+)$
as the set of all bounded nonnegative matrix
measures $\Sigma \in\mathcal{M}_p(\mathbb{R})$
that can be written as
\begin{align}\label{muinS0}
\Sigma = \Sigma_{I} + \sum_{i=1}^{N^+} \Gamma_i^+ \delta_{\lambda_i^+} + \sum_{i=1}^{N^-} \Gamma_i^- \delta_{\lambda_i^-},
\end{align}
where $\operatorname{supp}(\Sigma_I)\subset I$, $N^-,N^+\in\mathbb{N}_0\cup\{\infty\}$, $\Gamma_i^\pm$ are rank $1$ Hermitian matrices
and
\begin{align*}
0 \leq\lambda_1^-\leq\lambda_2^-\leq\dots <u^- \quad \text{and} \quad 1 \geq \lambda_1^+\geq\lambda_2^+\geq\dots >u^+\, .
\end{align*}
We assume that
$\lambda_j^-$ converges towards $u^-$ (resp. $\lambda_j^+$ converges to $u^+$) whenever $N^-$ (resp. $N^+$) is not finite.
An atom outside $[u^-,u^+]$ may appear several times in the decomposition. Its multiplicity is the rank of the total matrix weight, which is decomposed into a sum of rank $1$ matrices.
We also define \[\mathcal{S}_{p,1}=\mathcal{S}_{p,1}(u^-,u^+):=\{\Sigma \in \mathcal{S}_p(u^-,u^+) |\, \Sigma(\mathbb{R})={\bf 1}\}\,.\]
Furthermore, the spectral side of the sum rule of Theorem \ref{LDPSR} involves
the relative entropy with respect to the central measure. If $\Sigma$ has the Lebesgue decomposition
\begin{align}
\label{gamboa}\Sigma(dx) = h(x) \Sigma_{\operatorname{KMK}}(dx) + \Sigma^s (dx) ,
\end{align}
with $h$ positive $p\times p$ Hermitian and $\Sigma^s$ singular with respect to $\Sigma_{\operatorname{KMK}}$, then we
define the Kullback-Leibler distance of $\Sigma_{\operatorname{KMK}}$ with respect to $\Sigma$ as
\[\mathcal K(\Sigma_{\operatorname{KMK}} \, | \, \Sigma) = - \int \log \det h(x) \Sigma_{\operatorname{KMK}}(dx)\,.\]
Let us remark that if $\mathcal K(\Sigma_{\operatorname{KMK}} \, | \, \Sigma)$ is finite, then $h$ is positive definite almost everywhere on $I$, which implies that $\Sigma$ is nontrivial. Conversely, if $\Sigma $ is trivial, then $\mathcal K(\Sigma_{\operatorname{KMK}} \, | \, \Sigma)$ is infinite.
Finally, for the contribution of the outlying support points, we define two functionals
\begin{align} \label{outlierF+}
{\mathcal F}_J^+(x) = \begin{cases} \ \displaystyle \int_{u^+}^x \frac{\sqrt{(t - u^+)(t - u^-)}}
{t(1-t)}\!\ dt & \mbox{ if} \ u^+ \leq x \leq 1, \\
\ \infty & \mbox{ otherwise.}
\end{cases}
\end{align}
Similarly, let
\begin{align} \label{outlierF-}
{\mathcal F}_J^-(x) = \begin{cases}\ \displaystyle \int_x^{u^-} \frac{\sqrt{(u^--t)(u^+ -t)}}
{t(1-t)}\!\ dt & \mbox{ if} \ 0 \leq x \leq u^-,\\
\ \infty & \mbox{ otherwise.}
\end{cases}
\end{align}
We are now able to formulate our main result, a sum rule for the matrix Jacobi case.
\begin{thm}
\label{LDPSR}
For $\Sigma \in \mathcal S_{p, 1}(u^-,u^+)$ a nontrivial measure with canonical moments $(U_k)_{k\geq 1}$, we have
\begin{align}
\label{sumrule}
\mathcal K(\Sigma_{\operatorname{KMK}}\, |\, \Sigma) + \sum_{i=1}^{N^+} \mathcal F^+_J ( \lambda_i^+) + \sum_{i=1}^{N^-} \mathcal F^-_J (\lambda_i^-) & =
\sum_{k=1}^\infty \mathcal H_o ( U_{2k-1}) + \mathcal H_e (U_{2k})
\end{align}
where, for a matrix $U$ satisfying ${\bf 0} \leq U \leq {\bf 1}$,
\begin{align} \label{Hoddeven}
\begin{split}
\mathcal H_e(U) &:= - (\log \det U - \log \det U_e) -(1+\kappa_1+\kappa_2) \left(\log \det ({\bf 1}-U) - \log \det ({\bf 1} - U_e)\right) , \\
\mathcal H_o (U) &:= -(1+ \kappa_1) \left(\log \det U - \log \det U_o\right) -(1+\kappa_2) \left(\log \det({\bf 1}-U)-\log \det ({\bf 1} -U_o)\right) ,
\end{split}
\end{align}
and where both sides may be infinite simultaneously. If $\Sigma \notin \mathcal S_{p, 1}(u^-, u^+)$, the right hand side equals $+\infty$.
\end{thm}
\begin{rem} \label{rem:sumrulearguments}
The arguments on the right hand side of the sum rule are the canonical moments as they appear in the decomposition of recursion coefficients in \eqref{decomposition} and
\eqref{recursion3}. For some applications, it might be more convenient to work with the Hermitian version as defined in \eqref{canonicalmoment2}. Indeed, since $\mathcal{H}_e, \mathcal{H}_o$ are invariant under similarity transforms, the value of the right hand side does not change when the Hermitian canonical moments $\mathcal{U}_k$ are considered.
We also
point out that for trivial measures, $U_k$ or $1-U_k$ will be singular for some $k$ and then the right hand side equals $+\infty$ (see also Theorem \ref{thm:LDPcoefficient}). Since in this case the Kullback-Leibler divergence equals $+\infty$ as well, the equality in Theorem \ref{LDPSR} is also true for trivial matrix measures.
\end{rem}
As in previous papers, an important consequence of this sum rule is
a system of equivalent conditions for the finiteness of both sides.
It is a {\it gem}, as defined by Simon in \cite{simon2} p.19. The following statement is the {\it gem} implied by Theorem \ref{LDPSR}.
We give equivalent conditions on the matrices $\mathcal U_k$ and the spectral measure, which characterize the finiteness of either side in the sum rule identity. The following corollary is the matrix counterpart of Corollary 2.6 in \cite{magicrules}. It follows immediately from Theorem \ref{LDPSR}, since
\[\mathcal F^\pm_J (u^\pm \pm h) = \frac{2\sqrt{u^+-u^-}}{3u^\pm (1-u^\pm)}h^{3/2} + o(h^{3/2}) \ \ \ (h \rightarrow 0^+)\]
and, for $H$ similar to a Hermitian matrix,
\begin{align*}
\mathcal H_e(U_e+H) &= \frac{(2+\kappa_1+\kappa_2)^3}{2(1+ \kappa_1+\kappa_2)} \mathrm{tr} H^2 + o(||H||^2), \\
\mathcal H_o (U_o + H) &= \frac{(2+\kappa_1+\kappa_2)^3}{2(1+ \kappa_1)(1 +\kappa_2)} \mathrm{tr} H^2 + o(||H||^2),
\end{align*}
as $||H|| \rightarrow 0$, where $||\cdot||$ is any matrix norm.
\begin{cor} \label{semigemL}
Let $\Sigma$ be a nontrivial matrix probability measure on $[0,1]$ with canonical moments $(U_k)_{k\geq 1}$. Then for any $\kappa_1, \kappa_2 \geq 0$,
\begin{align}
\label{zl2}
\sum_{k=1}^\infty\left[\mathrm{tr}(U_{2k-1} - U_o)^2 + \mathrm{tr}(U_{2k} -U_e)^2 \right] < \infty
\end{align}
if and only if the three following conditions hold:
\begin{enumerate}
\item
$\Sigma \in \mathcal S_{p,1} (u^-, u^+)$
\item $\sum_{i=1}^{N^+} (\lambda_i^+ - u^+)^{3/2} + \sum_{i=1}^{N^-} (u^- - \lambda_i^- )^{3/2} < \infty$ and additionally, if $N^->0$, then $\lambda_1^- > 0$ and if $N^+>0$, then $\lambda_1^+<1$.
\item Writing the Lebesgue decomposition of $\Sigma$ as in (\ref{gamboa}), then
\begin{align*}
\int_{u^-}^{u^+} \frac{\sqrt{(u^+-x)(x-u^-)}}{x(1-x)} \log \det (h(x)) dx >-\infty .
\end{align*}
\end{enumerate}
\end{cor}
\section{Randomization: Classical random matrix ensembles and their spectral measures}
\label{sec:randommatrices}
To prove the sum rule of Theorem \ref{LDPSR} by our probabilistic method, we start from some
random Hermitian matrix $X_N$ of size $N=np$. The random spectral measure $\Sigma_N$ associated with $(X_N; e_1,\dots ,e_p)$,
is defined through its matrix moments:
\begin{align}\label{defspectralmeasure}
M_k(\Sigma_N)_{i,j} = e_i^\dagger X_N^k e_j , \qquad k\geq 0,\ 1 \leq i, j\leq p,
\end{align}
where $e_1,\dots ,e_N$ is the canonical basis of $\mathbb C^N$. From the spectral decomposition of $X_N$, we see that the matrix measure $\Sigma_N$ is
\begin{align} \label{defspectralmeasure2}
\Sigma_N = \sum_{j=1}^{N} \v_j \v_j^\dagger \delta_{\lambda_j} ,
\end{align}
where the support is given by the eigenvalues of $X_N$ and $\v_j$ is the projection of a unit eigenvector corresponding to the eigenvalue $\lambda_j$ on the subspace generated by $e_1, \dots, e_p$.
A sum rule is then a consequence of two LDPs for the sequence $(\Sigma_N)_n$,
the first one when the measure is encoded by its support and the weight, as in \eqref{defspectralmeasure2}, and the second one when the measure is encoded by its recursion coefficients.
The two following questions are therefore crucial:
\begin{itemize}
\item What is the joint distribution of $(\lambda_1, \dots, \lambda_N; \v_1, \dots, \v_N)$?
\item What is the distribution of the matricial recursion or canonical coefficients?
\end{itemize}
The answer to the first question is now classical (see \cite{mehta2004random} or \cite{agz}), when $X_N$ is
chosen according to a density (the joint density of all real entries, up to the symmetry constraint) proportional to
\begin{align} \label{generalpotential}
\exp \big( -N \mathrm{tr} V(X) \big) ,
\end{align}
for some potential $V$. In this case, the eigenvalues follow a log-gas distribution and independently, the eigenvector matrix is Haar distributed on the unitary group. In \cite{GaNaRomat}, the authors considered such general potentials and proved an LDP
using the encoding by eigenvalues and weights.
For $X_N$ distributed according to the Hermite and Laguerre ensemble, it is also possible to answer the second question and derive the LDPs in both
encodings. Remarkably, the recursion coefficients in the Hermite case are independent and are $p\times p$ matrices of the Hermite and Laguerre ensembles. In the Laguerre case, Hermitian versions of the matrices $\zeta_k$ as in \eqref{decomposition} are Laguerre-distributed.
In this section, we give the answer to the second question, when $X_N$ is a matrix of the Jacobi ensemble. We first introduce all classical ensembles.
\subsection{The classical ensembles: GUE, LUE, JUE}
We denote by $\mathcal N(0, \sigma^2)$ the centered Gaussian distribution with variance $\sigma^2>0$.
A random variable $X$ taking values in $\mathcal H_N$, the set of all Hermitian $N \times N$ matrices, is distributed according to the Gaussian unitary ensemble $\operatorname{GUE}_N$, if all real diagonal entries are
distributed as $\mathcal N(0, 1)$
and the real and imaginary parts of off-diagonal variables are independent and $\mathcal N(0, 1/2)$ distributed (also called complex standard normal distribution).
All entries are assumed to be independent up to symmetry and conjugation.
The random matrix $X$ has then a density as in \eqref{generalpotential} with $V(x) = \tfrac{1}{2}x^2$. The joint density of the (real) eigenvalues $\lambda = (\lambda_1,\dots ,\lambda_N)$ of $X$
is
\begin{align} \label{evg}
g_G(\lambda) = c_N^G \Delta(\lambda)^2 \prod_{i=1}^N e^{- \lambda_i^2/2},
\end{align}
where
\[\Delta(\lambda) = \prod_{1\leq i < j\leq N} |\lambda_i - \lambda_j|\]
is the Vandermonde determinant.
By analogy with
the scalar $\chi^2$ distribution, the Laguerre ensemble is the distribution of the ``square'' of Gaussian matrices.
More precisely, if $a$ is a nonnegative integer and if
$G$ denotes an $N \times (N+ a)$ matrix with independent complex standard normal entries, then $X= GG^\dagger$
is said to be distributed according to the Laguerre ensemble $ \operatorname{LUE}_N(N+ a)$. Its density (on the set $\mathcal H_N^+$ of positive definite Hermitian matrices) is proportional to
\[(\det X)^a \exp \big( -\tfrac{1}{2}\mathrm{tr}\!\ X\big) \,.\]
The eigenvalues density in this case is
\begin{align} \label{evl}
g_L(\lambda) = c_{N,a}^L \Delta(\lambda)^2 \prod_{i=1}^N \lambda_i^{a} e^{-\lambda_i} \mathbbm{1}_{\{ \lambda_i>0\} }.
\end{align}
For $a,b$ nonnegative integers,
let $L_1$ and $L_2$ be independent matrices distributed according to $\operatorname{LUE}_N (N+a)$ and $\operatorname{LUE}_N (N+ b)$, respectively.
Then the Jacobi ensemble $\operatorname{JUE}_N(a,b)$ is the distribution of
\begin{align} \label{defjac}
X = ( L_1 + L_2)^{-1/2} L_1 (L_1 + L_2)^{-1/2} .
\end{align}
Its density on the set of Hermitian $N\times N$ matrices satisfying $0 < X < I_N$ is proportional to
\begin{align} \label{defjac2}
\det X^a \det (I_N-X)^b .
\end{align}
The density of the eigenvalues $(\lambda_1, \dots, \lambda_N)$ is then given by
\begin{align} \label{evj}
g_J(\lambda) = c_{N,a,b}^J | \Delta (\lambda )|^{2} \prod_{i=1}^N \lambda_i^{a} (1-\lambda_i)^{b} \mathbbm{1}_{\{ 0<\lambda_i<1 \} } .
\end{align}
By extension we will say that
$X$ is distributed according to $\operatorname{JUE}_N(a,b)$, if it has density \eqref{defjac2}, for general real parameters $a,b\geq 0$.
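Note that for $p=N=1$ the density \eqref{defjac2} is proportional to $x^a(1-x)^b$ on $(0,1)$, so that $\operatorname{JUE}_1(a,b)$ is simply the Beta$(a+1,b+1)$ distribution.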
As mentioned above, in all three cases the eigenvector matrix is independent of the eigenvalues and Haar distributed on the group of unitary matrices. As a consequence, the matrix weights in
the spectral measure (see \eqref{defspectralmeasure2}) have a distribution which is a matricial generalization of the Dirichlet distribution. Let us denote the distribution of $(\v_1 \v_1^\dagger, \dots ,\v_N \v_N^\dagger )$ by $\mathbb{D}_{N,p}$. It was shown in \cite{FGARop} that this distribution may be obtained as follows:
Let $z_1,\dots ,z_N$ be random vectors in $\mathbb{C}^p$, with all coordinates independent complex standard normal distributed, and set $H =z_1z_1^\dagger + \dots + z_Nz_N^\dagger$. Then we have the equality in distribution
\begin{align}
\big( \v_1 \v_1^\dagger, \dots ,\v_N \v_N^\dagger \big) \overset{d}{=} \big( H^{-1/2}z_1 z_1^\dagger H^{-1/2}, \dots ,H^{-1/2}z_N z_N^\dagger H^{-1/2} \big) .
\end{align}
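In the scalar case $p=1$, this representation recovers the well-known fact that the weights of the spectral measure are uniformly distributed on the simplex, i.e. follow the Dirichlet$(1,\dots ,1)$ distribution, since the $|z_k|^2$ are i.i.d. exponential and the $k$-th weight equals $|z_k|^2/(|z_1|^2+\dots +|z_N|^2)$ in distribution.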
Using this representation, we can prove the following useful lemma, which shows that although our random spectral measures are finitely supported and thus not nontrivial, it is still possible to define the first recursion coefficients or canonical moments.
\begin{lem} \label{lem:nontrivial}
Let $N=np$ and $\Sigma_N$ be a random spectral measure as in \eqref{defspectralmeasure2}. We assume that
there are almost surely $N$ distinct support points and that the weights are $\mathbb{D}_{N,p}$ distributed and independent of the support points. Then, with probability one, for all nonzero matrix polynomials $P$ of degree at most $n-1$,
\begin{align*}
\mathrm{tr} \langle\langle P,P\rangle\rangle >0 .
\end{align*}
\end{lem}
\subsection{Distribution of coefficients}
In the following, let $N=np$. If $\Sigma_N$ is a spectral matrix measure of a matrix $X_N\sim\operatorname{GUE}_N$, then, almost surely, the $N$ support points of $\Sigma_N$ are distinct and none of them equal 0 or 1. By Lemma \ref{lem:nontrivial} and the discussion in Section \ref{sec:finitelysupported}, $\Sigma_N$ may be encoded by its first $2n-1$ coefficients in the polynomial recursion. It is known that then the random matrices $ \mathcal B_1, \dots, \mathcal B_n,\mathcal A_1, \dots, \mathcal A_{n-1}$ are independent and
\[\mathcal A_k \sim \operatorname{LUE}_p((N-k)p), \qquad \mathcal B_k \sim \operatorname{GUE}_p .\]
For the Laguerre ensemble, the spectral measure is supported by $[0, \infty)$ and then a decomposition as in \eqref{decomposition} still holds, where now Hermitian versions of
$\zeta_1,\dots ,\zeta_{2n-1}$
are distributed according to the Laguerre ensemble of dimension $p$ with appropriate parameter.
These results may be seen in \cite{GaNaRomat}, Lemmas 6.1 and 6.2. They are extensions
of the scalar results of Dumitriu-Edelman \cite{dumede2002} and their proofs are in \cite{GaNaRomas}. Since therein they are formulated in a slightly different way, we clarify the arguments in the Hermite case when we prove Theorem \ref{thm:distributioncanonical} below. It is one of our main results, and shows that in the Jacobi case, the matricial canonical moments are independent and again distributed as matrices of the Jacobi ensemble.
\begin{thm} \label{thm:distributioncanonical}
Let $\Sigma_N$ be the random spectral matrix measure associated with the $\operatorname{JUE}_N(a,b)$ distribution. Then,
the Hermitian canonical moments $\mathcal U_1,\dots ,\mathcal{U}_{2n-1}$ are independent and for $k=1, 2, \dots, n-1$,
\begin{align}
\mathcal U_{2k-1}\sim \operatorname{JUE}_p(p(n-k) + a, p(n-k) + b),\quad \mathcal U_{2k} \sim \operatorname{JUE}_p(p(n-k-1), p(n-k) +a+b)
\end{align}
and $\mathcal U_{2n-1}\sim \operatorname{JUE}_p(a, b)$.
\end{thm}
The Jacobi scalar case was solved by Killip and Nenciu \cite{Killip1}. They used the inverse Szeg{\H o} mapping and actually considered the symmetric random measure on $\mathbb T$ as the spectral measure of $(U; e_1)$ where $U$ is an element of
$\mathbb S\mathbb O(2N)$ and $e_1$ is the first vector of the canonical basis. This measure
may be written as
\[\mu= \sum_{k=1}^N\w_k \left(\delta_{e^{i\theta_k}} +\delta_{e^{-i\theta_k}}\right)\,.\]
Under the Haar measure,
the support points (or eigenvalues) have
the joint density
proportional to
\[\Delta(\cos \theta_1, \dots, \cos \theta_N)^2\]
and the weights are Dirichlet distributed. This induces for the
pushed forward eigenvalues a density proportional to
\[\Delta(\lambda)^2 \prod_{i=1}^N\lambda_i^{-1/2} (1- \lambda_i)^{-1/2}\,.\]
Then they used a ``{\it magic relation}'' to get rid of the factor $\prod \lambda_i^{a-1/2} (1-\lambda_i)^{b-1/2}$.
If we consider the matricial case, i.e. if we sample $U$ according to the Haar measure on $\mathbb S\mathbb O(2Np)$ with $p \geq 2$, the matrix spectral measure of $(U; e_1, \dots, e_p)$ is now
\[\Sigma =\sum_{k=1}^{N} \left(\w_k \delta_{e^{i\theta_k}} + \bar\w_k \delta_{e^{-i\theta_k}}\right) ,\]
the eigenvectors of conjugate eigenvalues being conjugate of each other.
Unfortunately, this measure is symmetric (i.e. invariant by $z \mapsto \bar z$) only in the scalar case $p=1$, which prohibits the use of the Szeg{\H o} mapping. To find the distribution of the canonical moments, we have to follow another strategy.
First, we will use
the explicit relation between the distribution of eigenvalues and weights and the distribution of the recursion coefficients, when sampling the matrix in the Gaussian ensemble. Then we will
compute the Jacobian of the mapping from recursion coefficients to canonical moments using the representation
in terms of moments as in \eqref{canonicalmoment2}.
\section{Large deviations}
\label{sec:largedeviations}
In order to be self-contained, let us recall the definition of a large deviation principle. For a general reference on large deviation statements we refer to the book
\cite{demboz98} or to the Appendix D of \cite{agz}.
Let $E$ be a topological Hausdorff space with Borel $\sigma$-algebra $\mathcal{B}(E)$. We say that a sequence $(P_{n})$ of probability measures on $(E,\mathcal{B}(E))$ satisfies the large deviation principle (LDP) with speed $a_n$ and
rate function $\mathcal{I} : E \rightarrow [0, \infty]$ if:
\begin{itemize}
\item [(i)] $\mathcal I$ is lower semicontinuous.
\item[(ii)] For all closed sets $F \subset E$: $\displaystyle \qquad
\limsup_{n\rightarrow\infty} \frac{1}{a_n} \log P_{n}(F)\leq -\inf_{x\in F}\mathcal{I}(x) $
\item[(iii)] For all open sets $O \subset E$: $\displaystyle \qquad
\liminf_{n\rightarrow\infty} \frac{1}{a_n} \log P_{n}(O)\geq -\inf_{x\in O}\mathcal{I}(x) $
\end{itemize}
The rate function $\mathcal{I}$ is good if its level sets
$\{x\in E |\ \mathcal{I}(x)\leq a\}$ are compact for all $a\geq 0$.
We say that a sequence of $E$-valued random variables satisfies an LDP if their distributions satisfy an LDP.
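As a classical illustration (Cram\'er's theorem), if $\bar X_n$ denotes the mean of $n$ i.i.d. standard Gaussian random variables, then $(\bar X_n)_n$ satisfies an LDP on $\mathbb{R}$ with speed $n$ and good rate function $\mathcal{I}(x)=x^2/2$, so that for $x>0$ the probability $P(\bar X_n\geq x)$ decays like $e^{-nx^2/2}$ to first exponential order.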
It was shown in Theorem 3.2 of \cite{GaNaRomat} that the sequence of matrix spectral measures $\Sigma_N$ of the Jacobi ensemble $\operatorname{JUE}_N(\kappa_1N,\kappa_2N)$ satisfies an LDP with speed $N$
and good rate function equal to the left hand side of the sum rule in Theorem \ref{LDPSR}. The LDP for the coefficient side is given in the following theorem. Its proof is independent of the one given
in \cite{GaNaRomat}.
\begin{thm} \label{thm:LDPcoefficient}
Let $\Sigma_N$ be a random spectral matrix measure of the Jacobi ensemble $\operatorname{JUE}_N(\kappa_1 N, \kappa_2 N)$, with $\kappa_1,\kappa_2\geq 0$ and $N=pn$. Then the sequence $(\Sigma_N)_N$ satisfies the LDP in $\mathcal{M}_{p,1}([0,1])$, with speed $N$ and good rate function
\begin{align}
\mathcal{I}_J(\Sigma) =\sum_{k=1}^\infty \mathcal H_o (U_{2k-1}) + \mathcal H_e (U_{2k})
\end{align}
for nontrivial $\Sigma$, where $\mathcal H_o$ and $\mathcal H_e$ are defined in \eqref{Hoddeven} and $U_k, k\geq 1$ are the canonical moments of $\Sigma$. If $\Sigma$ is trivial, then $\mathcal{I}_J(\Sigma)=+\infty$.
\end{thm}
The following lemma shows an LDP for the Jacobi ensemble of fixed size. It is crucial in proving the LDP for the canonical moments and consequently Theorem \ref{thm:LDPcoefficient}.
\begin{lem} \label{prop:LDPsingleU}
For $\alpha, \alpha' > 0$ suppose that $X_n \sim \operatorname{JUE}_p (\alpha n + a , \alpha' n + b)$. Then $(X_n)_n$ satisfies the LDP in the set of Hermitian $p\times p$ matrices, with speed $n$ and good rate function $I_{\alpha,\alpha'}$
where
\begin{align}
\label{defIalpha}
I_{\alpha,\alpha'}(X) = - \alpha \log \det X - \alpha' \log \det(\boldsymbol{1} -X) + p\alpha \log \frac{\alpha}{\alpha + \alpha'} + p \alpha' \log \frac{\alpha'}{\alpha + \alpha'}
\end{align}
for ${\bf 0}<X<\boldsymbol{1}$ and $I_{\alpha,\alpha'}(X)=\infty$ otherwise.
\end{lem}
The proof of Lemma \ref{prop:LDPsingleU} makes use of the explicit density and follows as Proposition 6.6 in \cite{GNROPUC}.
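Note that $I_{\alpha,\alpha'}$ is nonnegative and vanishes exactly at $X=\frac{\alpha}{\alpha+\alpha'}\,\boldsymbol{1}$; for the parameters appearing in the proof of Theorem \ref{thm:LDPcoefficient} below, this minimizer is precisely the corresponding canonical moment $U_o$ or $U_e$ of the Kesten-McKay law in \eqref{KMKcanonical}.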
\section{Proof of the main results}
\label{sec:mainproofs}
In this section we prove our three main results in the order of their dependence. First, Theorem \ref{thm:distributioncanonical}
provides the distribution of the canonical moments
for the Jacobi ensemble, then Theorem \ref{thm:LDPcoefficient} shows the LDP
for the spectral measure of the Jacobi ensemble, and finally Theorem \ref{LDPSR}
establishes the sum rule for the Jacobi case. For these three proofs, we
use the results of all our technical lemmas, whose proofs are postponed
to Section \ref{sec:proofs}.
\subsection{Proof of Theorem \ref{thm:distributioncanonical}}
The starting point is the spectral measure
\begin{align} \label{recallspectralmeasure}
\Sigma_N = \sum_{i=1}^N \v_i \v_i^\dagger \delta_{\lambda_i} ,
\end{align}
when the distribution of $(\lambda,\v) = (\lambda_1,\dots ,\lambda_{np},\v_1,\dots ,\v_{np})$ is the probability measure proportional to
\begin{align} \label{ev+weights}
\left(\Delta(\lambda)^2 \prod_{i=1}^N \lambda_i^a (1-\lambda_i)^b \mathbbm{1}_{\{0<\lambda_i<1\} } d\lambda_i\right) d\mathbb D_{N,p}(\v) .
\end{align}
We need to calculate the pushforward of this measure under the mapping $({\la^-}bdambda, \v) \mapsto \mathcal{U}=(\mathcal{U}_1,\dots ,\mathcal{U}_{2n-1})$ to the Hermitian canonical moments. By Lemma \ref{lem:nontrivial} and Lemma \ref{lem:existencecanonical} this is well-defined and the canonical moments satisfy ${\bf 0} <\mathcal{U}_k <{\bf 1}$. The first step will be
the computation of
the pushforward under the mapping $(\lambda, \v)\mapsto (\mathcal{A},\mathcal{B})$, where $(\mathcal A, \mathcal B ) := (\mathcal A_1, \dots, \mathcal A_{n-1}, \mathcal B_1, \dots, \mathcal B_n)$ are the Hermitian recursion coefficients as defined in \eqref{defBn} and \eqref{defAn}. This can be done by considering the corresponding change of measure in the Gaussian case, that is, when $\Sigma_N$ is the spectral measure of a $\operatorname{GUE}_N$-distributed matrix with distribution proportional to
\begin{align} \label{ev+weightsG}
\left(\Delta(\lambda)^2 \prod_{i=1}^N e^{-\tfrac{1}{2}\lambda_i^2} d\lambda_i\right) d\mathbb D_{N,p}(\v) .
\end{align}
As mentioned in Section \ref{sec:randommatrices}, the correspondence in the Gaussian case was investigated in \cite{GaNaRomat}. Lemma 6.1 therein shows that the spectral matrix measure $\Sigma_N$ is also the spectral matrix measure of the block-tridiagonal matrix
\begin{align}
\label{jacmatrix1}
\hat J_{n}
= \begin{pmatrix}
D_1 & C_1 & & \\
C_1 & D_2 & \ddots & \\
& \ddots & \ddots & C_{n-1} \\
& & C_{n-1} &D_n
\end{pmatrix} ,
\end{align}
where $C_k,D_k$ are Hermitian and independent, with $D_k\sim \operatorname{GUE}_p$ and $C_k$ is positive definite with $C_k^2\sim \operatorname{LUE}_p(p(n-k))$. This implies that the Hermitian recursion coefficients $\mathcal{B}_k$ and $\mathcal{A}_k$ are given by $\mathcal{B}_k = D_k$ and $\mathcal{A}_k = C_k^2$, respectively. That is, the pushforward of the measure \eqref{ev+weightsG} under the mapping $(\lambda, \v)\mapsto (\mathcal{A},\mathcal{B})$ is the measure proportional to
\begin{align}
\left(\prod_{k=1}^n \exp \left( - \frac{1}{2} \mathrm{tr} \mathcal B_k^2\right)\ d\mathcal B_k\right) \left(\prod_{k=1}^{n-1} (\det \mathcal A_k)^{p(n-k-1)} \exp \left(- \frac{1}{2} \mathrm{tr} \mathcal A_k\right)\ d\mathcal A_k\right).
\end{align}
Here and in the following, $dM$
denotes the Lebesgue measure in each of the functionally independent real entries of a Hermitian matrix $M$.
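Explicitly, for $M\in\mathcal{H}_p$ this means $dM=\prod_{i=1}^p dM_{ii}\prod_{i<j} d\operatorname{Re}(M_{ij})\, d\operatorname{Im}(M_{ij})$, the Lebesgue measure on the $p^2$ real parameters of $M$.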
Since
\[\mathrm{tr} \hat J_n^2 = \sum_{k=1}^n \mathrm{tr} \mathcal B_k^2 + \sum_{k=1}^{n-1} \mathrm{tr} \mathcal A_k = \sum_{j=1}^N\lambda_j^2 , \]
we conclude
that the pushforward of the measure
\begin{align}
\label{pushm}
\left(\Delta(\lambda)^2 \prod_{i=1}^N \mathbbm{1}_{\{0<\lambda_i<1\} } d\lambda_i\right) d\mathbb D_{N,p}(\v)
\end{align}
by the mapping $(\lambda, \v) \mapsto (\mathcal A, \mathcal B)$ is, up to a multiplicative constant, the measure
\begin{align}
\label{lawAB}
\left(\prod_{k=1}^{n-1} (\det \mathcal A_k)^{p(n-k-1)} d\mathcal A_k\right) \prod_{k=1}^n d\mathcal B_k\,.
\end{align}
Note that an indicator function is omitted in \eqref{lawAB} (ensuring that
the spectral measure is supported by $(0,1)$). This indicator function will appear
in the condition ${\bf 0} < \mathcal{U}_k < {\bf 1}$, but it does not play a role in the following arguments.
Now
two steps remain. First, we need to compute the pushforward of \eqref{lawAB} under the mapping $(\mathcal A , \mathcal B) \mapsto \mathcal U := (\mathcal U_1, \dots, \mathcal U_{2n-1})$.
Second, we need to express the prefactor $\prod_{i=1}^{np} \lambda_i^a(1-\lambda_i)^b$ in \eqref{ev+weights} in terms of $\mathcal U$. This is summarized in the two following technical lemmas, whose proofs are given in Sections \ref{appendix0} and \ref{appendix}, respectively.
\begin{lem}
\label{push2}
The pushforward of the measure \eqref{lawAB}
by the mapping $(\mathcal{A},\mathcal{B}) \mapsto \mathcal U$ is, up to a multiplicative constant, the measure
\begin{align}
\left(\prod_{k=1}^{n-1} \det((\boldsymbol{1}-\mathcal{U}_{2k-1})\mathcal{U}_{2k-1})^{p(n-k)} d\mathcal U_{2k-1}\right) \left(
\prod_{k=1}^{n-1} \det(\boldsymbol{1}-\mathcal{U}_{2k})^{p(n-k)} \det(\mathcal{U}_{2k})^{p(n-k)} d \mathcal U_{2k}\right)
\end{align}
\end{lem}
\begin{lem}
\label{crucial}
\begin{align}
\prod_{i=1}^{np} (1 - \lambda_i) = \prod_{k=1}^{2n-1} \det (\boldsymbol{1} - \mathcal U_k) , \qquad
\prod_{i=1}^{np} \lambda_i = \prod_{k=1}^n \det \mathcal U_{2k-1} \prod_{k=1}^{n-1} \det (\boldsymbol{1} -\mathcal U_{2k})\,.
\end{align}
\end{lem}
Gathering these results we see that the pushforward of the measure \eqref{ev+weights} by the mapping $(\lambda, \v) \mapsto \mathcal U$ is, again up to a multiplicative constant,
\begin{align}
\prod_{k=1}^{n} \det (\mathcal{U}_{2k-1})^{p(n-k)+a}\det(\boldsymbol{1}-\mathcal{U}_{2k-1})^{p(n-k)+b}
\prod_{k=1}^{n-1} \det (\mathcal{U}_{2k})^{p(n-k-1)}\det(\boldsymbol{1}-\mathcal{U}_{2k})^{p(n-k)+a+b}
\prod_{k=1}^{2n-1} d\mathcal{U}_k .
\end{align}
That is, the canonical moments are independent and
\[\mathcal{U}_{2k-1}\sim \operatorname{JUE}_p (p(n-k)+a,p(n-k)+b) , \qquad \mathcal{U}_{2k}\sim \operatorname{JUE}_p (p(n-k-1),p(n-k)+a+b)\,.\]
This ends the proof.
$ \Box$
\subsection{Proof of Theorem \ref{thm:LDPcoefficient}}
Let $\Sigma_N$ be the spectral measure of a $\operatorname{JUE}_N(\kappa_1N,\kappa_2N)$ distributed matrix, with $N=np$ and $\kappa_1, \kappa_2 \geq 0$. By Lemma \ref{lem:nontrivial} and Lemma \ref{lem:existencecanonical}, the first $2n-1$ canonical moments $U^{(N)}_k$, $1\leq k\leq 2n-1$, and their Hermitian versions $\mathcal{U}^{(N)}_k$, $1\leq k\leq 2n-1$, are well-defined. They are elements of the space
\begin{align}
\mathcal{Q}_j= \big\{ (H_1,\dots ,H_{2j-1}) |\, H_k\in\mathcal{H}_p \text{ and } {\bf 0}\leq H_k\leq {\bf 1} \text{ for all } k \big\} .
\end{align}
Let us define the sequence
\begin{align} \label{canonicalsequence}
\mathcal{U}^{(N)} = \big( \mathcal{U}^{(N)}_{1},\dots ,\mathcal{U}^{(N)}_{2n-1},{\bf 0},\dots \big) ,
\end{align}
as a random element of
\begin{align}
\mathcal{Q}_\infty = \big\{ (H_1,H_2,\dots ) |\, H_j\in\mathcal{H}_p \text{ and } {\bf 0}\leq H_j\leq {\bf 1} \text{ for all } j \big\},
\end{align}
which we endow with the product topology. By Theorem \ref{thm:distributioncanonical},
\begin{align*}
\mathcal{U}^{(N)}_{2k-1}\sim \operatorname{JUE}_p(p(n-k)+\kappa_1np,p(n-k)+\kappa_2np)
\end{align*}
for $1\leq k\leq n$, and then we apply Lemma \ref{prop:LDPsingleU}, to conclude that the sequence $(\mathcal{U}^{(N)}_{2k-1})_{n}$ satisfies the LDP in $\mathcal{Q}_1$ with speed $n$ and good rate function $I_{p+p\kappa_1,p+p\kappa_2}$. If we instead consider the LDP at speed $N$, the rate function becomes
\begin{align*}
p^{-1}I_{p+p\kappa_1,p+p\kappa_2}(\mathcal{U}) & = - (1+\kappa_1) \log \det (\mathcal{U}) - (1+\kappa_2) \log \det(\boldsymbol{1} -\mathcal{U}) \\
& \quad + p(1+\kappa_1) \log \frac{1+\kappa_1}{2+\kappa_1+\kappa_2} + p (1+\kappa_2) \log \frac{1+\kappa_2}{2+\kappa_1+\kappa_2} ,
\end{align*}
where the right hand side is interpreted as $+\infty$, if we do not have ${\bf 0}< \mathcal{U}< {\bf 1}$. Recalling \eqref{KMKcanonical} and \eqref{Hoddeven}, we see that $p^{-1}I_{p+p\kappa_1,p+p\kappa_2}=\mathcal{H}_o$.
Turning to the canonical moments of even index, Theorem \ref{thm:distributioncanonical} gives,
\begin{align*}
\mathcal{U}^{(N)}_{2k}\sim \operatorname{JUE}_p(p(n-k-1),p(n-k)+\kappa_1np+\kappa_2np)
\end{align*}
for $1\leq k\leq n-1$. Then Lemma \ref{prop:LDPsingleU} yields the LDP for $(\mathcal{U}^{(N)}_{2k})_{n}$ in $\mathcal{Q}_1$ with speed $N$ and good rate function $p^{-1}I_{p,p+p\kappa_1+p\kappa_2}$, satisfying
\begin{align*}
p^{-1}I_{p,p+p\kappa_1+p\kappa_2}(\mathcal{U}) & = - \log \det (\mathcal{U}) - (1+\kappa_1+\kappa_2) \log \det(\boldsymbol{1} -\mathcal{U}) \\
& \quad + p \log \frac{1}{2+\kappa_1+\kappa_2} + p (1+\kappa_1+\kappa_2) \log \frac{1+\kappa_1+\kappa_2}{2+\kappa_1+\kappa_2} \\
& = \mathcal{H}_e(\mathcal{U}) .
\end{align*}
Since the canonical moments are independent, we get for any $j\geq 1$, that $( \mathcal{U}^{(N)}_{1},\dots , \mathcal{U}^{(N)}_{2j-1})_{n\geq j}$ satisfies the LDP in $\mathcal{Q}_j$ with speed $N$ and good rate function
\begin{align*}
\mathcal{I}^{(j)}(\mathcal{U}_1,\dots ,\mathcal{U}_{2j-1}) = \mathcal{H}_o(\mathcal{U}_1)+\mathcal{H}_e(\mathcal{U}_2)+\dots +\mathcal{H}_o(\mathcal{U}_{2j-1}) .
\end{align*}
We can now apply the projective method of the Dawson-G\"artner Theorem (see Theorem 4.6.1 in \cite{demboz98}). It yields the LDP for the full sequence $\mathcal{U}^{(N)}$ in $\mathcal{Q}_\infty$, with speed $N$ and good rate function
\begin{align}
\mathcal{I}_\infty (\mathcal{U}_1,\mathcal{U}_2,\dots ) & = \sup_{j\geq 1} \mathcal{I}^{(j)}(\mathcal{U}_1,\dots ,\mathcal{U}_{2j-1})
= \sum_{k=1}^\infty \mathcal{H}_o(\mathcal{U}_{2k-1})+\mathcal{H}_e(\mathcal{U}_{2k}) .
\end{align}
This rate function is finite only if ${\bf 0}<\mathcal{U}_k <{\bf 1}$ for all $k$. In particular, the set where it is finite is a subset of the space
\begin{align}
\widehat{\mathcal{Q}}_\infty = \{ H|\, {\bf 0}< H <{\bf 1} \}^{\mathbb{N}} \cup \bigcup_{j=1}^\infty \Big( \{ H|\, {\bf 0}< H <{\bf 1} \}^{2j-1} \times \{ {\bf 0} \}^{\mathbb{N}} \Big).
\end{align}
We also have $\mathcal{U}^{(N)}\in \widehat{\mathcal{Q}}_\infty$ for all $n$, see \eqref{canonicalsequence}. It follows from Lemma 4.1.5 in \cite{demboz98}, that $\mathcal{U}^{(N)}$ also satisfies the LDP in $\widehat{\mathcal{Q}}_\infty$, with speed $N$ and good rate function the restriction of $\mathcal{I}_\infty$ to this space.
Then, we define the mapping $\psi:\widehat{\mathcal{Q}}_\infty\to \mathcal{M}_{p,1}([0,1])$ as follows. If $\mathcal{U} \in \widehat{\mathcal{Q}}_\infty$ is such that ${\bf 0}<\mathcal{U}_k<{\bf 1}$ for all $k$, there is a unique nontrivial $\Sigma\in \mathcal{M}_{p,1}([0,1])$, such that $\Sigma$ has Hermitian canonical moments $\mathcal{U}$, and we define $\psi(\mathcal{U})=\Sigma$. If $\mathcal{U}$ is such that ${\bf 0}<\mathcal{U}_{k}< {\bf 1}$ for $k\leq 2j-1$, but $\mathcal{U}_{k}={\bf 0}$ for $k> 2j-1$, we use the correspondence from Section \ref{sec:finitelysupported}: then there are moments $M_1,\dots ,M_{2j-1}$ with $M_{k}^-<M_k<M_k^+$ for $k\leq 2j-1$, and we define $\psi(\mathcal{U})$ as the spectral measure of the block Jacobi matrix $J_j$ as in \eqref{jacmatrix0}, constructed with these moments. That is, $\psi(\mathcal U)$ is the unique spectral measure of such a Jacobi matrix with first canonical moments $\mathcal{U}_1,\dots ,\mathcal{U}_{2j-1}$. Then $\mathcal{U}_n\to \mathcal{U}$ implies that the block-Jacobi matrix of $\psi(\mathcal{U}_n)$ converges entrywise to the block-Jacobi matrix of $\psi(\mathcal{U})$, where the latter one is extended by zeros if $\mathcal{U}$ has fewer nonzero matrix entries than $\mathcal{U}_n$. This implies that the moments of $\psi(\mathcal{U}_n)$ converge to the moments of $\psi(\mathcal{U})$. Since the convergence of moments of matrix measures on the compact set $[0,1]$ implies weak convergence, the mapping $\psi$ is continuous.
To
end the proof, we now apply the contraction principle (Theorem 4.2.1 in \cite{demboz98}). We have $\psi(\mathcal{U}^{(N)}) = \Sigma_N$, and as $\psi$ is continuous, the sequence $(\Sigma_N)_n$ satisfies the LDP in $\mathcal{M}_{p,1}([0,1])$ with speed $N$ and good rate function
\begin{align}
\mathcal{I}_J(\Sigma) = \inf_{\mathcal{U}:\psi(\mathcal{U})=\Sigma} \mathcal{I}_\infty(\mathcal{U}) .
\end{align}
This infimum is infinite, unless $\Sigma$ is nontrivial, and in this case it is given by $\mathcal{I}_\infty$ evaluated at the unique sequence of canonical moments of $\Sigma$.
$\Box$
\subsection{Proof of Theorem \ref{LDPSR}}
Let $\Sigma_N$ be the random spectral matrix measure of a matrix with distribution $\operatorname{JUE}_N(\kappa_1N,\kappa_2N)$, with $\kappa_1,\kappa_2 \geq 0$,
and suppose $N=np$. This distribution corresponds to a random matrix with potential
\begin{align} \label{Jacobipotential}
V(x) = -\kappa_1\log(x)-\kappa_2 \log(1-x) ,
\end{align}
see \eqref{generalpotential}. In the scalar case $p=1$, the equilibrium measure (the minimizer of the Voiculescu entropy or the limit of $\Sigma_N$) is given by $\operatorname{KMK} (\kappa_1,\kappa_2)$, see \cite{magicrules}, p. 515. For this potential, the assumptions (A1), (A2) and (A3) in \cite{GaNaRomat} are satisfied, with
matrix equilibrium measure $\Sigma_V=\Sigma_{\operatorname{KMK} (\kappa_1,\kappa_2)}$ and then by Theorem 3.2 of that paper,
the sequence $(\Sigma_N)_n$ satisfies the LDP in $\mathcal{M}_{p,1}(\mathbb{R})$ with speed $N$ and good rate function
\begin{align}
\mathcal{I}_{V}(\Sigma) = \mathcal K(\Sigma_{\operatorname{KMK}(\kappa_1,\kappa_2)}\!\ |\!\ \Sigma) + \sum_{i=1}^{N^+} \mathcal F^+_V ( \lambda_i^+) + \sum_{i=1}^{N^-} \mathcal F_V^- (\lambda_i^-)
\end{align}
for $\Sigma \in \mathcal{S}_{p,1}(u^-,u^+)$, and $\mathcal{I}_{V}(\Sigma)=+\infty$ otherwise. Here, the functions $\mathcal{F}^\pm_V$ are given by
\begin{align}
\label{rate0}
\mathcal{F}_V^+(x) & = \begin{cases}
\mathcal{J}_V(x) - \inf_{\xi \in \mathbb{R}} \mathcal{J}_V(\xi) & \text{ if } u^+\leq x \leq 1, \\
\infty & \text{ otherwise, }
\end{cases} \\ \label{rate0b}
\mathcal{F}_V^-(x) & = \begin{cases}
\mathcal{J}_V(x) - \inf_{\xi \in \mathbb{R}} \mathcal{J}_V(\xi) & \text{ if } 0\leq x \leq u^-, \\
\infty & \text{ otherwise, }
\end{cases}
\end{align}
where $\mathcal{J}_V$ is the effective potential
\begin{align*}
V(x) - 2\int \log |x-\xi|\!\ d \operatorname{KMK} (\kappa_1,\kappa_2)(\xi) .
\end{align*}
On the one hand, as discussed in Proposition 3.2 of \cite{magicrules} (see also the references therein),
for $V$
in \eqref{Jacobipotential}, we have $\mathcal{F}_V^\pm=\mathcal{F}_J^\pm$,
(see \eqref{outlierF-} and \eqref{outlierF+}). That is, the rate function $\mathcal{I}_V$ is precisely the left hand side of the sum rule in Theorem \ref{LDPSR}.
On the other hand, as shown in Theorem \ref{thm:LDPcoefficient}, the sequence $(\Sigma_N)_n$
satisfies the LDP with speed $N$ and good rate function $\mathcal{I}_J$. Since a large deviation rate function is unique, we get for any $\Sigma \in \mathcal{M}_{p,1}([0,1])$ the identity
\begin{align*}
\mathcal{I}_V(\Sigma) = \mathcal{I}_J(\Sigma)\,,
\end{align*}
which is the sum rule of Theorem \ref{LDPSR}.
$\Box$
\section{Proof of the technical lemmas}
\label{sec:proofs}
\subsection{Proof of Lemma \ref{lem:existencecanonical}}
The following statements are true for general nonnegative matrix measures $\Sigma\in \mathcal{M}_p([0,1])$ that are not necessarily normalized. Let us denote the $n$-th moment space of nonnegative matrix measures on $[0,1]$
by
\begin{align}
\mathfrak{M}_{p,n} = \big\{ (M_0(\Sigma),\dots ,M_n(\Sigma)) |\, \Sigma \in \mathcal{M}_{p}([0,1])\, \big\} \subset \mathcal{H}_p^{n+1}\,.
\end{align}
A comprehensive study of this
matrix moment space and the relation between canonical moments and recursion coefficients
has been addressed in \cite{destu02}.
Indeed, Theorem 2.7 therein shows that if $(M_0,\dots ,M_{2n-1})$
lies in the interior of $\mathfrak{M}_{p,2n-1}$, then the lower and upper bounds for $M_k$ satisfy $M_k^-<M_k<M_k^+$ for $1\leq k\leq 2n-1$, and the canonical moments
\begin{align}
U_k = (M_k^+-M_k^-)^{-1}(M_k-M_k^-), \qquad 1\leq k \leq 2n-1
\end{align}
are well
defined. Theorem 4.1 of \cite{destu02} shows that the recursion coefficients $u_0, \dots, u_{n-1}; v_1, \dots v_{n-1}$ of $\Sigma$ satisfy the decomposition as in
\eqref{decomposition} and
\eqref{recursion3}. Therefore, the statement of Lemma \ref{lem:existencecanonical} follows once we show that for a measure $\Sigma$ satisfying the assumption of the lemma, $(M_0,\dots ,M_{2n-1})$ is in the interior of the moment space $\mathfrak{M}_{p,2n-1}$. Since this result may be of independent interest, we formulate it as a lemma.
\begin{lem} {\la^-}bdabel{lem:nontrivialmomentspace}
Let $\Sigma \in \mathcal{M}_{p}([0,1])$ be such that
\begin{align} \label{polynomialnorm}
\mathrm{tr} \langle\langle P,P\rangle\rangle >0
\end{align}
for all matrix polynomials $P$ of degree at most $n-1$. Then $(M_0,\dots ,M_{2n-3})$
is in the interior of the moment space $\mathfrak{M}_{p,2n-3}$. If additionally $\Sigma(\{0\})=\Sigma(\{1\})={\bf 0}$, then $(M_0,\dots ,M_{2n-1})$
is in the interior of the moment space $\mathfrak{M}_{p,2n-1}$.
\end{lem}
By the above lemma, there are two sufficient conditions for the existence of the first $2n-1$ canonical moments: either \eqref{polynomialnorm} is satisfied for all polynomials up to degree $n$, or it holds for polynomials up to degree $n-1$ and the additional assumption $\Sigma(\{0,1\})={\bf 0}$ is satisfied. If the condition \eqref{polynomialnorm} fails for some polynomial of degree $n$, then atoms at the boundary can indeed cause the moments to be more ``extremal''. This can be made more precise in the scalar case, for which we refer to \cite{DeSt97}, Theorem 1.2.5 and Definition 1.2.10. If $\mu$ is a scalar measure on $[0,1]$ with $n$ support points, then any nonzero polynomial with degree less than $n$ has positive $L^2(\mu)$-norm, but there is a polynomial of degree $n$ with vanishing norm. Then the first $2n-3$ moments will be in the interior of the moment space. On the other hand, the fact that $(M_0,\dots ,M_{2n-1})$
lies in the boundary of the moment space is actually equivalent to the fact that $\{0,1\}$ has
positive mass. If both 0 and 1 are in the support of $\mu$, then already
$(M_0,\dots ,M_{2n-2})$ lies at the boundary of the moment space. If exactly one support point is equal to 0 or 1, then the first $2n-2$ moments are interior, but the first $2n-1$ ones are not. If the support contains 0, then $M_{2n-1}=M_{2n-1}^-$, whereas 1 in the support implies $M_{2n-1}=M_{2n-1}^+$. The two versions of $\mu$ are then called the lower and upper principal representation of $(M_0,\dots,M_{2n-2})$, respectively. In the matrix case, the boundary of $\mathfrak{M}_{p,2n-1}$ has a more complicated structure and there is no such equivalence.
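As a simple scalar illustration of the definition of the canonical moments (taking $p=1$, $M_0=1$ and $0<M_1<1$), the bounds and the first two canonical moments read
\begin{align*}
M_1^-=0,\quad M_1^+=1,\quad U_1=M_1, \qquad\qquad M_2^-=M_1^2,\quad M_2^+=M_1,\quad U_2=\frac{M_2-M_1^2}{M_1-M_1^2}\,.
\end{align*}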
\textbf{Proof of Lemma \ref{lem:nontrivialmomentspace}:}
We again refer to
\cite{destu02}.
Lemma 2.3
says that $(M_0,\dots ,M_{m})$ is an element of $\mathfrak{M}_{p,m}$ if and only if, for all matrices $A_0,\dots, A_m$, such that $Q(x) = A_{m} x^m +\dots + A_0$ is nonnegative definite for all $x\in [0,1]$, we have
\begin{align}\label{inmomentspace}
\mathrm{tr} \sum_{k=0}^{m} A_k M_k \geq 0 .
\end{align}
Note that the case $A_m={\bf 0}$ is also included. Furthermore,
$(M_0,\dots ,M_{m})$ is an interior point of $\mathfrak{M}_{p,m}$ if and only if, for all $A_0,\dots ,A_m$ for which such $Q$ is nonnegative definite on $[0,1]$ and nonzero, we have
\begin{align}\label{inmomentspace2}
\mathrm{tr} \sum_{k=0}^{m} A_k M_k > 0 .
\end{align}
Theorem 2.5 of \cite{destu02} shows that if the degree of $Q$ is even, say $2\ell$, then such a polynomial can be written as
\begin{align} \label{polynomialdecomp}
Q(x) = B_1(x)B_1(x)^\dagger + x(1-x)B_2(x)B_2(x)^\dagger ,
\end{align}
where $B_1$ and $B_2$ are matrix polynomials of degree $\ell$ and $\ell-1$, respectively. If the degree of $Q$ is equal to $2\ell-1$, then
\begin{align} \label{polynomialdecomp2}
Q(x) = xB_1(x)B_1(x)^\dagger + (1-x)B_2(x)B_2(x)^\dagger ,
\end{align}
with $B_1,B_2$ of degree $\ell-1$.
Let $\Sigma\in \mathcal{M}_p([0,1])$ with $M_k$ the $k$-th moment of $\Sigma$. If $m=2\ell$ and $A_m\neq {\bf 0}$, then, using the decomposition \eqref{polynomialdecomp},
\begin{align} \label{polynomialpositive}
\mathrm{tr} \sum_{k=0}^m A_k M_k & = \mathrm{tr} \int Q(x) d\Sigma(x) \notag \\
& = \mathrm{tr} \int B_1(x)B_1(x)^\dagger d\Sigma(x) + \mathrm{tr} \int x(1-x)B_2(x)B_2(x)^\dagger d\Sigma(x) \notag \\
& = \mathrm{tr} \int B_1(x)^\dagger d\Sigma(x)B_1(x) + \mathrm{tr} \int x(1-x)B_2(x)^\dagger d\Sigma(x)B_2(x) \notag \\
& = \mathrm{tr}\, \langle\langle B_1, B_1\rangle\rangle + \mathrm{tr}\, \langle\langle p B_2 , B_2\rangle\rangle ,
\end{align}
where $p(x) = x(1-x)$.
A similar calculation can be made if $m=2\ell-1$ and $A_m\neq {\bf 0}$. Together with the characterizations of the moment space by \eqref{inmomentspace} and \eqref{inmomentspace2}, this implies that for any $\Sigma \in \mathcal{M}_p([0,1])$ and matrix polynomial $B$
\begin{align} \label{polcriterion}
\mathrm{tr}\, \langle\langle qB, B\rangle\rangle \geq 0,
\end{align}
when $q(x)$ is the scalar polynomial
$1,x,1-x$ or $x(1-x)$.
Furthermore, the first $2m-1$ moments of $\Sigma$ are in the interior of the moment space $\mathfrak{M}_{p,2m-1}$, if
\begin{align} \label{polcriterion2}
\mathrm{tr}\, \langle\langle qB, B\rangle\rangle > 0,
\end{align}
whenever $B$ is nonzero and such that the degree of $q(x)B(x)B(x)^\dagger$ is at most $2m-1$. We remark that this is actually equivalent to the criterion given in \cite{destu02} and stated in terms of Hankel matrices.
Now suppose that $\Sigma $ is such that $\mathrm{tr} \langle\langle P,P\rangle\rangle >0$ for all nonzero polynomials $P$ of degree at most $n-1$. We show that then \eqref{polcriterion2} is satisfied whenever the degree of $qBB^\dagger$
is at most $2n-3$. For $q(x)=1$ this is trivially true. In the other cases,
\begin{align} \label{polynomialpositive2}
\mathrm{tr}\, \langle\langle qB , B\rangle\rangle = \mathrm{tr}\, \langle\langle qB ,qB\rangle\rangle + \mathrm{tr}\, \langle\langle qB,(1-q)B\rangle\rangle .
\end{align}
Since $qB$ has degree at most $n-1$, the first inner product on the right hand side of \eqref{polynomialpositive2} is
positive by assumption. The second one is nonnegative by \eqref{inmomentspace}, since $q(1-q)BB^\dagger$
is nonnegative definite on $[0,1]$. This proves that $(M_0,\dots ,M_{2n-3})$ is in the interior of $\mathfrak{M}_{p,2n-3}$.
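Here, the nonnegativity of $q(1-q)$ on $[0,1]$ can be checked directly for the admissible choices of $q$; for instance,
\begin{align*}
q(x)=x:\quad q(1-q)=x(1-x)\geq 0\,, \qquad q(x)=x(1-x):\quad q(1-q)=x(1-x)\big(1-x(1-x)\big)\geq 0\,,
\end{align*}
the latter because $0\leq x(1-x)\leq \tfrac14$ on $[0,1]$.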
Now assume additionally that
$\Sigma(\{0,1\})={\bf 0}$; we show that then \eqref{polcriterion2} is satisfied whenever $qBB^\dagger$
has degree at most $2n-1$ and $B$ is nonzero. In this case, $B$ is of degree at most $n-1$, and $\mathrm{tr} \langle\langle B, B\rangle\rangle$
is
positive. Using that $\Sigma$ has no mass at $0,1$,
\begin{align}
\mathrm{tr}\, \langle\langle B ,B\rangle\rangle = \lim_{\varepsilon \to 0}\, \mathrm{tr} \int_{\varepsilon}^{1-\varepsilon} B(x)^\dagger d\Sigma(x) B(x) ,
\end{align}
and then there exists an $\varepsilon>0$ such that the integral on the right hand side is positive. Since $q(x)\geq \varepsilon(1-\varepsilon)$ on $[\varepsilon,1-\varepsilon]$, and $\int_A B(x)^\dagger d\Sigma(x) B(x)$ is always nonnegative definite,
\begin{align}
\mathrm{tr}\, \langle\langle qB, B\rangle\rangle \geq \mathrm{tr} \int_{\varepsilon}^{1-\varepsilon} q(x)B(x)^\dagger d\Sigma(x) B(x)\geq
\varepsilon(1-\varepsilon)\, \mathrm{tr} \int_{\varepsilon}^{1-\varepsilon} B(x)^\dagger d\Sigma(x) B(x) ,
\end{align}
which gives a
positive lower bound.
$ \Box $
\subsection{Proof of Lemma \ref{lem:nontrivial}}
Let us begin by noting that if $z_1,\dots ,z_N$ are random vectors in $\mathbb{C}^p$, independent and complex standard normal distributed, then almost surely, any $p$ of these vectors span $\mathbb{C}^p$. This implies that almost surely, $H=z_1 z_1^\dagger+ \dots + z_N z_N^\dagger$ has full rank. Consider such a realization and let
\begin{align*}
P(x)=C_{n-1}x^{n-1} + \dots + C_1 x + C_0
\end{align*}
be a matrix polynomial of degree at most $n-1$. We have
\begin{align*}
\mathrm{tr} \langle\langle P,P\rangle\rangle = \mathrm{tr} \sum_{i=1}^N P(\lambda_i)^\dagger \v_i\v_i^\dagger P(\lambda_i) = \sum_{i=1}^N \v_i^\dagger P(\lambda_i)^\dagger P(\lambda_i) \v_i = \sum_{i=1}^N ||P(\lambda_i)\v_i||^2.
\end{align*}
Suppose that $\mathrm{tr} \langle\langle P,P\rangle\rangle=0$; then the above calculation shows that for all $i$, $\v_i$ is in the kernel of $P(\lambda_i)$.
We may rewrite this in matrix form by saying that
\begin{align} \label{nontrivial}
\mathbf{W} \mathbf{P} = \mathbf{0} ,
\end{align}
where $\mathbf{P}$ is $np\times p$ with $\mathbf{P}^\dagger = (C_0,\dots ,C_{n-1})$, and $\mathbf{W}$ is $np\times np$ with
\begin{align*}
\mathbf{W} =
\begin{pmatrix}
\v_1^\dagger & \v_1^\dagger \lambda_1 & \cdots & \v_1^\dagger\lambda_1^{n-1} \\
\v_2^\dagger & \v_2^\dagger \lambda_2 & \cdots & \v_2^\dagger \lambda_2^{n-1} \\
\vdots & \vdots & \ddots & \vdots \\
\v_{np}^\dagger & \v_{np}^\dagger \lambda_{np} & \cdots & \v_{np}^\dagger \lambda_{np}^{n-1}
\end{pmatrix} .
\end{align*}
Now, we show that $\mathbf{W}$ is nonsingular, so that the only solution to \eqref{nontrivial} is $\mathbf{P}=0$, that is, $P$ is the zero polynomial.
Let $\mathbf{H}$ be the $np\times np$ block-diagonal matrix with blocks $H^{1/2}$ on the diagonal, then $\mathbf{H}$ is nonsingular. The matrix
$\mathbf{Z}=\mathbf{W}\mathbf{H}$
has the same structure as $\mathbf{W}$, except that $\v_i$ is replaced by $z_i$.
We use an argument similar to what has been done
in the proof of Lemma 2.2 in \cite{FGARop}. Conditionally on the eigenvalues, the determinant of $\mathbf{Z}$ is a polynomial in the $np^2$ entries of $z_1,\dots ,z_{np}$. Since these entries are independent standard Gaussians, they have a joint density, and hence either $\det(\mathbf{Z})$ vanishes with probability 0 or it is identically zero as a polynomial. Let us fix $z_{kp+i}=e_i$ for $k=0,\dots, n-1$, $i=1,\dots,p$, where $e_1,\dots ,e_p$ is the
canonical basis of $\mathbb{C}^p$. In this case,
\begin{align}
\mathbf{Z} =
\begin{pmatrix}
e_1^\dagger & e_1^\dagger \lambda_1 & \cdots & e_1^\dagger\lambda_1^{n-1} \\
e_2^\dagger & e_2^\dagger \lambda_2 & \cdots & e_2^\dagger \lambda_2^{n-1} \\
\vdots & \vdots & \ddots & \vdots \\
e_p^\dagger & e_p^\dagger \lambda_p & \cdots & e_p^\dagger \lambda_p^{n-1} \\
e_1^\dagger & e_1^\dagger \lambda_{p+1} & \cdots & e_1^\dagger \lambda_{p+1}^{n-1} \\
\vdots & \vdots & \ddots & \vdots \\
e_{p}^\dagger & e_{p}^\dagger \lambda_{np} & \cdots & e_{p}^\dagger \lambda_{np}^{n-1}
\end{pmatrix} .
\end{align}
By reordering rows and columns, this matrix may be transformed into the block diagonal matrix $\widetilde{\mathbf{Z}}$ with $n\times n$ Vandermonde-blocks,
\begin{align}
\widetilde{\mathbf{Z}} =
\begin{pmatrix}
1 & \lambda_1 & \cdots & \lambda_1^{n-1} & & & & & \\
1 & \lambda_{p+1} & \cdots & \lambda_{p+1}^{n-1} & & & & & \\
\vdots & \vdots & & \vdots & & & & & \\
1 & \lambda_{(n-1)p+1} & \cdots & \lambda_{(n-1)p+1}^{n-1} & & & & & \\
& & & & \ddots & & & & \\
& & & & & 1& \lambda_p & \cdots & \lambda_p^{n-1} \\
& & & & & 1& \lambda_{2p} & \cdots & \lambda_{2p}^{n-1} \\
& & & & & \vdots & \vdots & & \vdots \\
& & & & & 1 & \lambda_{np} & \cdots & \lambda_{np}^{n-1} \\
\end{pmatrix} ,
\end{align}
which has determinant
\begin{align}
\det(\widetilde{\mathbf{Z}}) = \prod_{k=1}^p \prod_{0\leq i<j\leq n-1} (\lambda_{jp+k}- \lambda_{ip+k} ) .
\end{align}
Since the $\lambda_i$ are almost surely pairwise distinct, the matrix $\widetilde{\mathbf{Z}}$ is almost surely nonsingular, which implies that $\mathbf{W}$ is almost surely nonsingular.
$ \Box$
\subsection{Proof of Lemma \ref{push2}}
\label{appendix0}
We have to compute the Jacobian determinant of the mapping $(\mathcal A, \mathcal B) \mapsto \mathcal U$.
We will do this by using the moments as intermediate variables.
Let us begin by noting that $u_{n-1},U_{2n-1}$ depend on $M_1,\dots ,M_{2n-1}$, but not on any higher moments, and $v_n,U_{2n}$ depend only on $M_1,\dots ,M_{2n}$. Since for the similarity transforms in Section \ref{sec:matrixmeasures} we used only matrices depending on moments of strictly lower order, the same statements can be made for the Hermitian versions, where $\mathcal{B}_n,\mathcal{U}_{2n-1}$ depend on $M_1,\dots,M_{2n-1}$ and $\mathcal{A}_n,\mathcal{U}_{2n}$ depend on $M_1,\dots ,M_{2n}$.
We have in particular
\begin{align}\label{diffmoments1}
\frac{\partial(\mathcal{B},\mathcal{A})}{\partial M}:=\frac{\partial (\mathcal{B}_1,\mathcal{A}_1,\dots ,\mathcal{B}_{n})}{\partial(M_1,\dots ,M_{2n-1})}
= \frac{\partial \mathcal{B}_1}{\partial M_1}\times \frac{\partial \mathcal{A}_1}{\partial M_2}\times \cdots \times \frac{\partial \mathcal{B}_{n}}{\partial M_{2n-1}} .
\end{align}
Here, we denote by $\frac{\partial F(M)}{\partial M}$ the Jacobian determinant of the mapping $F:\mathcal{H}_p\to \mathcal{H}_p$, seen as a mapping of all the $p^2$ functionally independent real entries of a matrix in $\mathcal{H}_p$, and with the straightforward generalization to mappings with several such matricial coordinates, see \cite{jacobian}. In particular, Theorem 3.5 in \cite{jacobian} shows that for nonsingular $A$,
\begin{align} \label{cnetraljacobian}
\frac{\partial (AMA^\dagger)}{\partial M} = \det(A)^{2p} .
\end{align}
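As a minimal sanity check of \eqref{cnetraljacobian}, consider the scalar case $p=1$ with a real $a\neq 0$: the map $M\mapsto aMa=a^{2}M$ of the single real coordinate $M$ has Jacobian
\begin{align*}
\frac{\partial (aMa)}{\partial M} = a^{2} = \det(a)^{2p} \qquad (p=1)\,.
\end{align*}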
Recall that
\begin{align}
\label{recall}
\mathcal A_k = H_{2k-2}^{-1/2}H_{2k}H_{2k-2}^{-1/2} , \qquad \mathcal{B}_{k} = H_{2k-2}^{-1/2}\langle\langle x P_{k-1}, P_{k-1}\rangle\rangle H_{2k-2}^{-1/2}
\end{align}
and $H_{2k} = M_{2k}-M_{2k}^-$ depends only on $M_1, \dots, M_{2k}$ (see \eqref{hgamma}). Then by \eqref{cnetraljacobian},
\begin{align} \label{diffA}
\frac{\partial \mathcal{A}_k}{\partial M_{2k}}
= \det( H_{2k-2})^{-p} \ , \ \frac{\partial \mathcal{B}_{k}}{\partial M_{2k-1}}
= \det (H_{2k-2})^{-p}\,.
\end{align}
Putting these together, we get that \eqref{diffmoments1} is given by
\begin{align}\label{diffmoments1result}
\frac{\partial(\mathcal{B},\mathcal{A})}{\partial M} = \det(H_{2n-2})^{-p}\prod_{k=1}^{n-1} \det(H_{2k-2})^{-2p} .
\end{align}
To end this first step,
we need to evaluate
\begin{align}\label{diffmoments3}
\frac{\partial \mathcal{U}}{\partial M} := \frac{\partial (\mathcal{U}_1,\dots ,\mathcal{U}_{2n-1})}{\partial(M_1,\dots ,M_{2n-1})}
= \frac{\partial \mathcal{U}_1}{\partial M_1}\times \cdots \times \frac{\partial \mathcal{U}_{2n-1}}{\partial M_{2n-1}} ,
\end{align}
where we have by \eqref{canonicalmoment2}
\begin{align}\label{diffU}
\frac{\partial \mathcal{U}_k}{\partial M_{k}} = \frac{\partial \left(R_{k}^{-1/2} H_{k}R_{k}^{-1/2}\right) }{\partial M_{k}} = \det( R_{k})^{-p}
\end{align}
and then
\begin{align}\label{diffmoments3result}
\frac{\partial \mathcal{U}}{\partial M} = \prod_{k=1}^{2n-1} \det (R_k)^{-p} \,.
\end{align}
Putting together
\eqref{diffmoments1result} and \eqref{diffmoments3result},
we have shown that
\begin{align}
\label{interm}
\frac{\partial (\mathcal{B},\mathcal{A})}{\partial \mathcal{U}} = \det(H_{2n-2})^{-p} \prod_{k=1}^{n-1} \det(H_{2k-2})^{-2p} \prod_{k=1}^{2n-1} \det (R_k)^{p} .
\end{align}
To express this in terms of the canonical moments, we use
\[R_k=R_{k-1}(\boldsymbol{1} -U_{k-1})U_{k-1} , \qquad H_{k}=R_{k}U_{k}\,,\]
(see \cite{destu02} formulas (2.19) and (2.16)).
Taking determinants, we obtain
\begin{align}
\label{taking}
\det R_k = \prod_{j=1}^{k-1} \det (\boldsymbol{1} -\mathcal U_{j}) \det \mathcal U_{j} , \qquad \det H_{2k-2}= \det R_{2k-2} \det \mathcal U_{2k-2}\,.
\end{align}
We gather \eqref{interm} and \eqref{taking}, to obtain that the pushforward of the measure \eqref{lawAB} by the mapping $(\mathcal{A}, \mathcal{B}) \mapsto \mathcal U$ has, up to a multiplicative constant,
the density
\begin{align} \label{jacobianABtoU}
\prod_{k=1}^{n-1} &\det(\mathcal{A}_k)^{p(n-k-1)}
\det(H_{2n-2})^{-p} \prod_{k=1}^{n-1} \det(H_{2k-2})^{-2p} \prod_{k=1}^{2n-1} \det (R_k)^{p} \notag \\
& = \prod_{k=1}^{n-1} \det(H_{2k-2})^{-p(n-k-1)} \det(H_{2k})^{p(n-k-1)}
\det(H_{2n-2})^{-p} \prod_{k=1}^{n-1} \det(H_{2k-2})^{-2p} \prod_{k=1}^{2n-1} \det (R_k)^{p} \notag \\
&= \prod_{k=1}^{n-1} \det(H_{2k-2})^{-p(n-k)} \det(H_{2k})^{p(n-k-1)}
\prod_{k=1}^{n} \det(H_{2k-2})^{-p} \prod_{k=1}^{2n-1} \det (R_k)^{p}\notag \\
&= \prod_{k=1}^{n} \det(H_{2k-2})^{-p} \prod_{k=1}^{2n-1} \det (R_k)^{p} ,
\end{align}
where for the second line we used \eqref{recall};
the product of the determinants of the $H_{2k}$ then telescopes.
It remains to express \eqref{jacobianABtoU} in terms of the canonical moments. Using \eqref{taking}, we get
\begin{align} \label{jacobianABtoU2}
\prod_{k=1}^{n} \det(H_{2k-2})^{-p} \prod_{k=1}^{2n-1} \det (R_k)^{p} & = \prod_{k=1}^{n-1} \det(R_{2k})^{-p}\det(\mathcal{U}_{2k})^{-p} \prod_{k=1}^{2n-1} \det (R_k)^{p} \notag \\
& = \prod_{k=1}^{n-1} \det(\mathcal{U}_{2k})^{-p} \prod_{k=1}^{n} \det(R_{2k-1})^{p} \notag \\
& = \prod_{k=1}^{n-1} \det(\mathcal{U}_{2k})^{-p} \prod_{k=1}^{n} \prod_{i=1}^{2k-2} \det(\boldsymbol{1}-\mathcal{U}_i)^p \det (\mathcal{U}_i)^p \notag \\
& = \prod_{k=1}^{n-1} \det(\mathcal{U}_{2k})^{-p} \prod_{k=1}^{n-1} \det((\boldsymbol{1} -\mathcal{U}_{2k-1})\mathcal{U}_{2k-1}(\boldsymbol{1} -\mathcal{U}_{2k})\mathcal{U}_{2k})^{p(n-k)} \notag \\
& = \prod_{k=1}^{n-1} \det((\boldsymbol{1} -\mathcal{U}_{2k-1})\mathcal{U}_{2k-1})^{p(n-k)}
\prod_{k=1}^{n-1} \det(\boldsymbol{1} -\mathcal{U}_{2k})^{p(n-k)} \det(\mathcal{U}_{2k})^{p(n-k-1)} .
\end{align}
This ends the proof of Lemma \ref{push2}.
$ \Box$
\subsection{Two proofs of Lemma \ref{crucial}}
\label{appendix}
It follows from Lemma 2.1 of Duran, Lopez-Rodriguez \cite{duran1996orthogonal}, that the eigenvalues of $J_n$ are precisely the zeros of the $n$-th polynomial orthogonal with respect to $\Sigma$. The quadrature formula of Sinap, van Assche \cite{sinap1996orthogonal} implies that the zeros of this polynomial are equal to the support of the spectral measure. As a consequence,
\begin{align}
\label{detprod}
\det (J_n) = \prod_{i=1}^{np} \lambda_i , \qquad \det (I_n-J_n)=\prod_{i=1}^{np} (1-\lambda_i).
\end{align}
In view of (\ref{detprod}) we have to prove that
\begin{align} \label{finalchange}
\det (I_n - J_n) = \prod_{k=1}^{2n-1} \det (\boldsymbol{1} - \mathcal U_k) , \qquad \det J_n = \left(\prod_{k=1}^n \det \mathcal U_{2k-1}\right) \left(\prod_{k=1}^{n-1} \det (\boldsymbol{1} -\mathcal U_{2k})\right).
\end{align}
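As a quick sanity check of \eqref{finalchange}, consider the smallest case $n=1$ in the scalar setting $p=1$, where the Hermitian and non-Hermitian quantities coincide: there $J_1=\mathcal{B}_1=u_0=\zeta_1=\mathcal{U}_1$, so that
\begin{align*}
\det(I_1-J_1)=1-\mathcal{U}_1\,, \qquad \det J_1=\mathcal{U}_1\,,
\end{align*}
in agreement with the right hand sides of \eqref{finalchange}, the second product there being empty.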
We give two proofs. The first one is matricial, using a recursion of Schur complements and the second one is based on the Szeg{\H o} mapping and matrix polynomials on the unit circle.
\subsubsection{First proof}
Using the Schur complement formula (see Theorem 1.1 in \cite{hornbasic}),
\begin{align} \label{expansion}
\det(I_n-J_n) & = \det(I_{n-1}-J_{n-1})\det\big(\boldsymbol{1}-\mathcal{B}_n-(0,\dots ,0,\tilde{\mathcal{A}}_{n-1}^\dagger)(I_{n-1}-J_{n-1})^{-1} (0,\dots ,\tilde{\mathcal{A}}_{n-1}^\dagger)^\dagger \big) \notag \\
& = \det(I_{n-1}-J_{n-1}) \det\left( \boldsymbol{1} -\mathcal{B}_n-\tilde{\mathcal{A}}_{n-1}^\dagger [(I_{n-1}-J_{n-1})^{-1}]_{n-1,n-1}\tilde{\mathcal{A}}_{n-1}\right) \notag \\
& = \det(I_{n-1}-J_{n-1}) \det(\varphi_n) ,
\end{align}
where we wrote $[A]_{i,j}$ for the $p\times p$ sub-block in position $i,j$ and we define
\begin{align}
\varphi_n & = \gamma_n^{-1/2}\left(\boldsymbol{1}-\mathcal{B}_n-\tilde{\mathcal{A}}_{n-1}^\dagger [(I_{n-1}-J_{n-1})^{-1}]_{n-1,n-1}\tilde{\mathcal{A}}_{n-1}\right)\gamma_n^{1/2} \notag \\
& = \gamma_n^{-1/2}\left(\boldsymbol{1} -\gamma_n^{1/2}u_{n-1}\gamma_n^{-1/2}-\gamma_{n}^{1/2}\gamma_{n-1}^{-1/2} [(I_{n-1}-J_{n-1})^{-1}]_{n-1,n-1}\gamma_{n-1}^{-1/2}\gamma_n^{1/2}\right)\gamma_n^{1/2} \notag \\
& = \left( \boldsymbol{1} -u_{n-1}-\gamma_{n-1}^{-1/2} [(I_{n-1}-J_{n-1})^{-1}]_{n-1,n-1}\gamma_{n-1}^{-1/2}\gamma_n\right) \notag \\
& = \left(\boldsymbol{1} -u_{n-1}-\gamma_{n-1}^{-1/2} [(I_{n-1}-J_{n-1})^{-1}]_{n-1,n-1}\gamma_{n-1}^{1/2}v_{n-1}\right) .
\end{align}
Recall that the non-Hermitian recursion coefficients $u_n, v_n$ were defined in \eqref{recursion1} and \eqref{hidden1}. Using again the formula of Schur complements (see Theorem 1.2 in \cite{hornbasic}),
\begin{align}
[(I_{n-1}-J_{n-1})^{-1}]_{n-1,n-1} & = \left(\boldsymbol{1} -\mathcal{B}_{n-1}-(0,\dots ,0,\tilde{\mathcal{A}}_{n-2}^\dagger)(I_{n-2}-J_{n-2})^{-1} (0,\dots ,\tilde{\mathcal{A}}_{n-2}^\dagger)^\dagger\right)^{-1} \notag \\
& = \left(\boldsymbol{1} -\mathcal{B}_{n-1}-\tilde{\mathcal{A}}_{n-2}^\dagger [(I_{n-2}-J_{n-2})^{-1}]_{n-2,n-2}\tilde{\mathcal{A}}_{n-2}\right)^{-1}\notag\\
&= \gamma_{n-1}^{1/2} \varphi_{n-1}^{-1} \gamma_{n-1}^{-1/2} .
\end{align}
We see that $\varphi_n$ satisfies
the recursion
\begin{align}\label{recursion}
\varphi_1= \boldsymbol{1} -u_0 , \qquad \varphi_n = \boldsymbol{1} -u_{n-1}-\varphi_{n-1}^{-1} v_{n-1}, \qquad n \geq 2.
\end{align}
Let us write $V_k=\boldsymbol{1} -U_k$. Then we claim that the solution to this recursion is given by
\begin{align} \label{recursionsolution}
\varphi_n
= V_{2n-2}V_{2n-1} .
\end{align}
We prove \eqref{recursionsolution} by induction. For $n=1$, we have by \eqref{decomposition}
\begin{align}
\varphi_1 = \boldsymbol{1} - u_0 = \boldsymbol{1} -\zeta_1 = \boldsymbol{1} -U_1=V_1 ,
\end{align}
which agrees with \eqref{recursionsolution} since $V_0=\boldsymbol{1}$. Then,
\begin{align}
\varphi_{n+1} & = \boldsymbol{1} -u_{n}-\varphi_n^{-1} v_n \notag \\
& = \varphi_n^{-1}\left[ \varphi_n -\varphi_n (\zeta_{2n}+\zeta_{2n+1}) - \zeta_{2n-1}\zeta_{2n}\right] \notag \\
& = \varphi_n^{-1}\left[ V_{2n-2}V_{2n-1} - V_{2n-2}V_{2n-1} (V_{2n-1}U_{2n}+V_{2n}U_{2n+1}) - V_{2n-2}U_{2n-1}V_{2n-1}U_{2n}\right] \notag \\
& = \varphi_n^{-1}V_{2n-2} \left[ V_{2n-1} - V_{2n-1}^2U_{2n}-V_{2n-1}V_{2n}U_{2n+1} - U_{2n-1}V_{2n-1}U_{2n}\right] .
\end{align}
In the last line, we write $U_{2n-1}V_{2n-1}U_{2n}=V_{2n-1}U_{2n}-V_{2n-1}^2U_{2n}$ for the last term, which then cancels the second term in the brackets and leads to
\begin{align}
\varphi_{n+1}
& = \varphi_n^{-1}V_{2n-2} \left[ V_{2n-1} -V_{2n-1}V_{2n}U_{2n+1} - V_{2n-1}U_{2n}\right] \notag \\
& = \varphi_n^{-1}V_{2n-2} V_{2n-1}\left[ \boldsymbol{1} -V_{2n}U_{2n+1} - U_{2n}\right] \notag \\
& = \varphi_n^{-1}\varphi_n \left[ V_{2n} - V_{2n}U_{2n+1}\right] \notag \\
& = V_{2n}V_{2n+1}.
\end{align}
This proves \eqref{recursionsolution}. We may then calculate recursively for \eqref{expansion}
\[\det(I_n-J_n) = \det(I_{n-1}-J_{n-1})\det \varphi_n = \det(\varphi_1\dots \varphi_n) , \]
so that
\begin{align} \label{expansionsolution}
\det(I_n-J_n)
= \prod_{k=1}^{2n-1}\det V_k = \prod_{k=1}^{2n-1}\det ({\bf 1}-U_k) = \prod_{k=1}^{2n-1}\det ({\bf 1}-\mathcal U_k) \,.
\end{align}
For the computation of $
\det J_n$, we make use of a decomposition proven in Lemma 2.1 of \cite{GaNaRomat}. There exists a block bi-diagonal matrix $Z_n$, such that $J_n = Z_nZ_n^\dagger$ and (see the proof in \cite{GaNaRomas}), the block $D_k$ in position $k,k$ of $Z_n$ satisfies
\begin{align}
\det(D_k) = \det(\zeta_{2k-1})^{1/2} .
\end{align}
Then, this implies
\begin{align}
\det J_n &= (\det Z_n)^2 = \prod_{k=1}^n (\det D_k)^2
= \prod_{k=1}^n \det \zeta_{2k-1} \notag\\
&= (\det U_1) (\det V_2)(\det U_3)\cdots (\det U_{2n-3}) (\det V_{2n-2}) (\det U_{2n-1}) ,
\end{align}
which
gives the second identity in \eqref{finalchange}.
$\Box $
\subsubsection{Second proof of Lemma \ref{crucial}
via Szeg{\H o}'s mapping}
It is natural to extend to the matrix case the method used in the scalar case for the Jacobi ensemble.
The main steps use successively:
\begin{itemize}
\item
the inverse Szeg{\H o} mapping to turn the problem on $[0,1]$ into a problem on the unit circle,
\item the correspondence between orthogonal polynomials on the unit circle and on the real line,
\item the Szeg{\H o} recursion for polynomials on the unit circle.
\end{itemize}
To begin with, we transfer the measure on $[0,1]$ to a measure on $[-2, 2]$ by the mapping $x \mapsto 2 -4x$. The new Jacobi matrix $\hat J_n$ is deduced from the original matrix $J_n$ by
\begin{align}
\label{Omega}
\hat J_n =2I_n - 4 \Omega J_n \Omega\,,
\end{align}
where $\Omega$ is a diagonal matrix with alternating blocks $\pm\bf 1$'s on the diagonal.
Let $\hat P_0,\dots ,\hat P_n$ be the monic orthogonal polynomials associated with $\hat J_n$.
From \cite{damanik2008analytic} Section 2.9 (with reference in particular to \cite{duran1996orthogonal} and \cite{sinap1996orthogonal})
\begin{align}
\det \hat P_n(z) = \det (zI_n- \hat J_n) ,
\end{align}
so that
\begin{align}
\det \hat P_n(z) = \det\left((z-2) I_n + 4 \Omega J_n\Omega\right)= 4^{np} \det\left( \frac{z-2}{4} I_n + J_n\right)
\end{align}
and in particular,
\begin{align}
\label{detPbold}
\det J_n = 4^{-np} \det \hat P_n (2) , \qquad \det (I_n -J_n) = (-4)^{-np} \det \hat P_n (-2)\,.
\end{align}
We refer to the definition of the Szeg{\H o} mapping given in Section \ref{susec:Szego}. In this Section, we write $\Sigma_{\mathbb{R}}$ for a matrix measure on the real line and denote by $\Sigma_{\mathbb{T}} = \tilde{\operatorname{Sz}}^{-1}(\Sigma_{\mathbb{R}})$ the preimage under the Szeg{\H o} mapping.
The correspondence between polynomials orthogonal with respect to $\Sigma_{\mathbb T}$ and with respect to $\Sigma_{\mathbb R}$
is ruled by the following theorem (see Proposition 1 in \cite{Y-M}). It is the
matrix version of a famous theorem due to Szeg{\H o} \cite{szego1939orthogonal}. Since the notations are slightly different from the usual ones, we rewrite the proof in Section
\ref{YMproof}.
\begin{thm}[Yakhlef-Marcell\'an]
\label{Y-M}
Let $\Sigma_{\mathbb R}\in \mathcal{M}_{p,1}([-2,2])$ be a nontrivial matrix measure and denote by $\Sigma_{\mathbb T}=\tilde{\operatorname{Sz}}^{-1}(\Sigma_{\mathbb{R}})$ the symmetric measure on $\mathbb T$ obtained by the inverse Szeg{\H o} mapping.
If $\hat P_n$ is the $n$-th right monic orthogonal polynomial for $\Sigma_{\mathbb R}$
and $\boldsymbol{\Phi}_{2n}$ the $2n$-th right monic orthogonal polynomial\footnote{The right monic OP for $\Sigma_{\mathbb{T}}$ are obtained by applying Gram-Schmidt to $\{\boldsymbol{1}, z\boldsymbol{1}, \dots\}$.} for $\Sigma_{\mathbb T}$, then
\begin{align}
\hat P_n (z + z^{-1})= \left[z^{-n} \boldsymbol{\Phi}_{2n} (z) + z^n \boldsymbol{\Phi}_{2n}(z^{-1})\right] \boldsymbol{\tau}^{-1}_n\,,
\end{align}
where
\begin{align}
\label{defto}
\boldsymbol{\tau}_n := {\bf 1} + \boldsymbol{\Phi}_{2n}(0) = {\bf 1}- \boldsymbol{\kappa}_{2n-1} \boldsymbol{\alpha}_{2n-1} (\boldsymbol{\kappa}_{2n-1})^{-1}
\end{align}
with
\[\boldsymbol{\kappa}_k = \left(\boldsymbol{\rho}_0\dots\boldsymbol{\rho}_{k-1}\right)^{-1} , \qquad \boldsymbol{\rho}_j = ({\bf 1} - \boldsymbol{\alpha}_j^2)^{1/2}\,.\]
\end{thm}
Taking $z= \pm 1$ in Theorem \ref{Y-M}, we deduce
\begin{align}
\hat P_n(\pm 2) = 2
(\pm 1)^n \boldsymbol{\Phi}_{2n}(\pm 1)\boldsymbol{\tau}_n^{-1}\,,
\end{align}
hence, using \eqref{defto},
\begin{align}
\label{detPbold2}
\det \hat P_n(\pm 2) = 2^p \det ({\bf 1} - \boldsymbol{\alpha}_{2n-1})^{-1} (\pm 1)^{np}\det \boldsymbol{\Phi}_{2n}(\pm 1)\,.
\end{align}
Recall that the recursion formula
expressed for the monic polynomials on the unit circle, in this particular case, is
\begin{align}
\label{recursionMOPUC2}
z\boldsymbol{\Phi}_k(z) - \boldsymbol{\Phi}_{k+1}(z) = z^k \boldsymbol{\Phi}_k(z^{-1}) \boldsymbol{\kappa}_k \boldsymbol{\alpha}_k \boldsymbol{\kappa}_k^{-1}
\end{align}
(see (3.11) in \cite{damanik2008analytic}),
so that
\[\boldsymbol{\Phi}_{2n}(1) = \prod_{j=0}^{2n-1} \left({\bf 1} - \boldsymbol{\kappa}_j \boldsymbol{\alpha}_j \boldsymbol{\kappa}_j^{-1}\right)
, \qquad \boldsymbol{\Phi}_{2n}(-1) = \prod_{j=0}^{2n-1} \left({\bf 1} + (-1)^j \boldsymbol{\kappa}_j \boldsymbol{\alpha}_j \boldsymbol{\kappa}_j^{-1}\right)\]
and then
\begin{align}
\det \boldsymbol{\Phi}_{2n}(1) = \prod_{j=0}^{2n-1} \det \left({\bf 1} - \boldsymbol{\alpha}_j \right) , \qquad \det \boldsymbol{\Phi}_{2n}(-1) = \prod_{j=0}^{2n-1} \det \left({\bf 1} + (-1)^j \boldsymbol{\alpha}_j \right)\,.
\end{align}
These relations are the matrix extension of Lemma 5.2 of \cite{Killip1}.
Coming back to \eqref{detPbold} and \eqref{detPbold2}, we get
\begin{align}
\det (J_n ) = 2^{-(2n-1)p} \prod_{j=0}^{2n-2}\det({\bf 1} -\boldsymbol{\alpha}_j) , \qquad \det (I_n -J_n) = 2^{-(2n-1)p} \prod_{j=0}^{2n-2} \det({\bf 1} + (-1)^j \boldsymbol{\alpha}_j) .
\end{align}
The connection with the canonical moments follows then from \eqref{DeWag}. Note that this identity
still holds if $\Sigma$ is not nontrivial, as long as ${\bf 0} <\mathcal{U}_k <{\bf 1}$, or equivalently $-{\bf 1} <\boldsymbol{\alpha}_{k-1} <{\bf 1}$.
\subsubsection{Proof of Theorem \ref{Y-M}}
\label{YMproof}
In the scalar case, the proof is given in \cite{simon1} Theorem 13.1.5 or in \cite{simon2} Theorem 1.9.1, with references therein. In the matrix case, one can follow the same scheme.
Since $\Sigma_{\mathbb T}$ is invariant, the Verblunsky coefficients are Hermitian (see Lemma 4.1 in \cite{damanik2008analytic}).
The matrix Laurent polynomial
$z^{-n}\boldsymbol{\Phi}_{2n}(z)+ z^n \boldsymbol{\Phi}_{2n}(z^{-1})$
is invariant by $z \mapsto z^{-1}$. Hence there exists a matrix polynomial $\tilde Q_n$ of degree $n$, such that
\begin{align}\label{zbarz}
z^{-n}\boldsymbol{\Phi}_{2n}(z)+ z^n \boldsymbol{\Phi}_{2n}(z^{-1})
= \tilde Q_n (z + z^{-1})\,,\end{align}
(see for instance Lemma 13.4.2 in \cite{simon2}).
Collecting terms with highest degrees, we
have
\[\tilde Q_n (z+z^{-1}) =
\left(z^n + z^{-n}\right) \boldsymbol{\tau}_n + \cdots\]
and then
\begin{align}
\label{defQn}
\tilde Q_n (z + z^{-1}) \boldsymbol{\tau}_n^{-1} = Q_n (z + z^{-1})
\end{align}
where now $Q_n(x)$ is a monic polynomial of degree $n$.
Now, let us check that the $\tilde Q_k$ (hence $Q_k$) are orthogonal polynomials for $\Sigma_{\mathbb{R}}$.
First notice that
\[\tilde Q_k(z+z^{-1}) = z^{-k} \left(\boldsymbol{\Phi}_{2k}(z) + z^{2k} \boldsymbol{\Phi}_{2k}( z^{-1})\right)\,.\]
From
the Szeg{\H o} mapping and (\ref{defQn}), orthogonality of $\tilde Q_n$ and $\tilde Q_r$ (for $n\not= r$) with respect to $\Sigma_{\mathbb{R}}$ is equivalent to orthogonality (with respect to $\Sigma_{\mathbb T}$) of
$\boldsymbol{\Phi}_{2n}(z) + z^{2n} \boldsymbol{\Phi}_{2n}(z^{-1})$
and
$H$ where
\[H(z) = z^{n-r}\left[\boldsymbol{\Phi}_{2r}(z) + z^{2r} \boldsymbol{\Phi}_{2r}( z^{-1})\right]\,,\]
which is a polynomial of degree $n+r$ without constant term.
By definition, $\boldsymbol{\Phi}_{2n}$ is orthogonal to $ z^j{\bf 1}$ for all $j=0, \dots, 2n-1$. Besides,
$z^{2n} \boldsymbol{\Phi}_{2n}(z^{-1})$ is (right) orthogonal to
$z^j{\bf 1}$ for $j =1, \dots, 2n$. Indeed,
\begin{align*}
\int \left[z^{2n} \boldsymbol{\Phi}_{2n}(\bar z)\right]^\dagger d\Sigma_{\mathbb{T}}(z) z^j &= \int \boldsymbol{\Phi}_{2n}(\bar z)^\dagger d\Sigma_{\mathbb{T}}(z) z^{j-2n}\\
&= \int \boldsymbol{\Phi}_{2n}(z)^\dagger d\Sigma_{\mathbb{T}}(z) z^{2n-j}
\end{align*}
(by invariance of $\Sigma_{\mathbb{T}}$)
and this last integral is ${ \bf 0}$ for $1 \leq j \leq 2n$ due to the orthogonality of $\boldsymbol{\Phi}_{2n}$ with polynomials of degree at most $2n-1$.
One can then conclude that
$\boldsymbol{\Phi}_{2n}(z) + z^{2n} \boldsymbol{\Phi}_{2n}(z^{-1})$
is orthogonal to $z^k$ for $1\leq k\leq 2n-1$,
and so to $H$. Summarizing,
the $Q_n$'s are the monic polynomials orthogonal with respect to $\Sigma_{\mathbb{R}}$, and then
$\hat P_n = Q_n$ for every $n$,
or in other words, by (\ref{zbarz}) and (\ref{defQn})
\begin{align}
\hat P_n (z+z^{-1}) =
\left[z^{-n}\boldsymbol{\Phi}_{2n}(z) + z^{n} \boldsymbol{\Phi}_{2n}(z^{-1})\right]\boldsymbol{\tau}_n^{-1}\,.
\end{align}
$ \Box $
\end{document}
\begin{document}
\title{Reexamining Larmor precession in a spin-rotator: Testable correction and its ramifications}
\author{Dipankar Home}
\altaffiliation{[email protected]}
\affiliation{CAPSS, Department of Physics, Bose Institute, Salt Lake, Kolkata-700091, India}
\author{Alok Kumar Pan}
\altaffiliation{[email protected]}
\affiliation{Graduate School of Information Science, Nagoya University, Chikusa-ku, Nagoya 464-8601, Japan}
\author{Arka Banerjee}
\altaffiliation{[email protected]}
\affiliation{Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai-400005, India}
\begin{abstract}
For a spin-polarized plane wave passing through a spin-rotator containing uniform magnetic field, we provide a detailed analysis for solving the appropriate Schr\"{o}dinger equation. A modified expression for spin precession is obtained which reduces to the standard Larmor precession relation when kinetic energy is very large compared to the spin-magnetic field interaction. We show that there are experimentally verifiable regimes of departure from the standard Larmor precession formula. The treatment is then extended to the case of a spin-polarized wave packet passing through a uniform magnetic field. The results based on the standard expression for Larmor precession and that obtained from the modified formula are compared in various regimes of the experimental parameters.
\end{abstract}
\pacs{03.65.Ta}
\maketitle
\section{Introduction}
If a spin-1/2 particle passes through a region of uniform magnetic field, it is well known that the time evolution of the spin of the particle undergoes what is commonly known as Larmor precession. For example, if the particle with an initial spin orientation along the $+\hat x$ axis passes through a magnetic field oriented along the +$\hat z$ axis, the spin precesses in the x-y plane with a frequency determined by the strength of the magnetic field and the magnetic moment of the particle. This frequency is known as the Larmor frequency - a commonly discussed topic that has wide applications\cite{mezei1,zei,mezei2,alefeld,heb,otake,hino,rek,mar}, particularly in the analysis of experiments involving neutron, electron and atomic interferometry and in calculating the tunneling time \cite{baz67a,ryb,buttiker82,buttiker83,falck88} through a potential barrier.
The usual treatments of the standard Larmor precession essentially consider a situation where the particle is stationary, trapped within a region containing a uniform magnetic field \cite{landau,sakurai,merz,cohen,grif,greiner,bohmbook}, thereby ignoring the spatial part of the wave function; the time evolution of the wave function is then considered only in terms of the potential energy arising out of the spin-magnetic field interaction. The argument for ignoring the kinetic energy term in the Hamiltonian seems to be that this term is much smaller in magnitude than the spin-magnetic field potential energy term. In our treatment, the incident spin-1/2 particles are considered to be passing through a spatial region within which a constant magnetic field is confined, so that in treating the time evolution both the kinetic energy term and the spin-magnetic field potential energy term are taken into account. Interestingly, the Larmor precession relation is recovered when the spin-magnetic field interaction energy is much smaller than the kinetic energy term. In this case, the path and spin degrees of freedom can be treated independently.
We would also like to note that usual treatments of calculating tunneling time through a barrier based on Larmor clock \cite{baz67a,ryb,buttiker82,buttiker83,falck88} pertain to a spatial region within which a constant magnetic field and an external potential $V_{0}$ are both confined so that both spin-up and spin-down particles effectively see potential barriers of different heights. On the other hand, in our treatment, we do not consider any external potential so that while a spin-up particle sees a potential barrier, a spin-down particle sees a well, and eventually a path-spin entangled state is generated. To the best of our knowledge, it is only in the Appendix of the treatment given by Buttiker \cite{buttiker83} that the question of when the Larmor precession relation is valid in the absence of any external potential barrier is briefly discussed. Here we give a detailed analysis of this issue, providing a clear delineation of the regime of deviation from the standard Larmor precession relation.
We begin by considering a wave function whose space part is a plane wave and is spin polarized in the $+x$ direction. By examining closely the time evolution of the entire wave function, that is, both the spin and the space parts, caused by the interaction of the spin of the particle with the magnetic field, we find an interesting feature. Due to the specifics of the spin-magnetic field interaction, it is possible to derive the time evolution of the entire wave function by solving the time-independent Schrodinger equation for only the spatial part. Then, from the time-evolved entire wave function, one can find the change in the spin part of the wave function; this is shown in Section II. Subsequently, in Section III, the limit in which the result of our treatment matches the result of the standard Larmor precession as well as the limit in which there is an appreciable departure from Larmor precession are discussed. This treatment reveals that it is, in fact, the limit where the kinetic energy is much higher than the potential energy due to the spin-field interaction that the standard expression for Larmor precession holds true. In Section III numerical estimates of departure from the standard Larmor precession are presented. In Section IV we generalize the treatment given above to the case of an incident wave packet.
\section{Spin-rotator containing a uniform magnetic field}
We consider particles passing through a spin-rotator containing a constant magnetic field directed along the $+z$-axis in a region between $x=0$ and $x=a$. The total incident wave function of a particle is represented by $\Psi_{i}=\psi_{0}\otimes\chi$, where $\psi_0=A e^{i k x}$ is the spatial part taken to be a plane wave with wave number $k$, and $\chi=\frac{1}{\sqrt{2}}\left(\left|\uparrow \right\rangle_{z}+ \left|\downarrow\right\rangle_{z}\right)$ is the spin state polarized in the $+x$ direction, where $|\uparrow\rangle_{z}$ and $|\downarrow\rangle_{z}$ are the eigenstates of $\widehat{\sigma}_{z}$. Hence, our setup is different from the case where the particle is trapped within the region containing the uniform magnetic field.
The interaction Hamiltonian is $H_{int}=-\mu_n\vec {\sigma}.\textbf{B}$ where $\mu_n$ is the magnetic moment of the neutron, $\textbf{B}= B\widehat{z}$ is the homogeneous magnetic field and $\vec{\sigma}$ is the Pauli spin vector. Since $\mu_n$ is known to be a negative quantity, it is convenient to define for further calculations a quantity $\mu=-\mu_n$. Here note that the magnetic field has an implicit position dependence as it is confined between $x=0$ and $x=a$. In this case, the two-component Pauli equation can be written as the following two coupled
equations for the time evolution of the spatial parts $\psi^+$ and $\psi^-$, corresponding to the spin $|\uparrow\rangle_{z}$ and $|\downarrow\rangle_{z}$ components respectively
\begin{equation}
\label{pauli1}
i\hbar\frac{\partial\psi^{+}}{\partial t}=-\frac{\hbar^{2}}{2m}\nabla^{2}\psi^{+}+\mu B\psi^{+}
\end{equation}
\begin{equation}
\label{pauli2}
i\hbar\frac{\partial\psi^{-}}{\partial t}=-\frac{\hbar^{2}}{2m}\nabla^{2}\psi^{-} - \mu B\psi^{-}
\end{equation}
Eqs.(\ref{pauli1}) and (\ref{pauli2}) imply that while a neutron having spin-up interacts with the spin rotator containing constant magnetic field, its associated spatial wave function $(\psi^{+})$ evolves under a \emph{potential barrier} that has been generated due to the spin-magnetic field interaction; on the other hand, the associated spatial wave function $(\psi^{-})$ for a spin-down neutron evolves under a \emph{potential well}.
\begin{figure}
\caption{\label{fig.1}}
\end{figure}
Then the time evolved total wave function at $t=\tau$ after the interaction of
spins with the uniform magnetic field is given by
\begin{eqnarray}
\label{entstate}
\nonumber
\Psi\left(\textbf{x},\tau\right) &=& \exp({-\frac{iH\tau}{\hbar}})\Psi(\textbf{x},0)\\
&=&\frac{1}{\sqrt{2}}\left[\psi^{+}(\textbf{x},\tau)\otimes\left|\uparrow\right\rangle_{z}+\psi^{-}(\textbf{x}, \tau)\otimes\left|\downarrow\right\rangle_{z}\right]
\label{timeevolved}
\end{eqnarray}
where $\psi^{+}\left({\bf x},\tau\right)$ and
$\psi^{-}\left({\bf x},\tau\right)$
are the two components of the spinor
$\psi=\left(\begin{array}{c}\begin{array}{c}
\psi^{+}\\ \psi^{-}\end{array}\end{array}\right)$ which satisfies the
Pauli equation. The homogeneous magnetic field is written as
${\bf B}=B \widehat{z}$.
Note that the entanglement between the spin and the spatial parts of the wave function embodied in Eq.(\ref{entstate}) results from the application of the Pauli equation to this problem, using the spin-magnetic field interaction term which has an implicit spatial dependence arising from the confinement of the uniform magnetic field within the bounded region of the spin rotator. It is this feature, taken into account in the treatment given in this paper, that leads to a nontrivial modification of the standard formula for Larmor precession.
Before proceeding to focus on the exact solutions of Eqs. (\ref{pauli1}) and (\ref{pauli2}), we revisit in the next section the usual treatment of this problem used to derive the Larmor precession relation for the rotated spin state after the spin-magnetic field interaction within the spin-rotator.
\subsection{The usual treatment of Larmor precession}
We may note here again that the usual treatments pertain essentially to a stationary particle within a spin-rotator. The behavior of the wave function after it starts interacting with the spin rotator is usually described by taking into account \textit {only} the spin part of the wave function, while the space part is completely left out of the analysis\cite{landau,sakurai,merz,cohen,grif,greiner,bohmbook}. Neglecting the kinetic energy, the Hamiltonian of the system inside the spin rotator is taken to be $H= \mu\vec{\sigma}.\bf{B}$. The spin up and spin down parts of the wave function evolve according to the Schroedinger equation in the following way
\begin{equation}
\label{so+}
i\hbar\frac{\partial \psi^{+}}{\partial t}= \mu B \psi^{+}
\end{equation}
\vskip -0.5cm
\begin{equation}
\label{so-}
i\hbar\frac{\partial \psi^{-}}{\partial t}= -\mu B \psi^{-}
\end{equation}
The solutions of the above two first order differential equations are $\psi^{\pm}=\psi_{0}e^{\mp i \omega \tau/2}$ where $\omega= 2\mu B/\hbar$, and $\tau$ is the time over which the spin-magnetic field interaction takes place. Putting these solutions in Eq.(\ref{entstate}), we finally obtain
\begin{eqnarray}
\label{psitauusual}
\Psi(x,\tau)=\psi_{0} \frac{1}{\sqrt{2}}\left(e^{- i \omega \tau/2}\left|\uparrow\right\rangle+ e^{i \omega \tau/2}\left|\downarrow\right\rangle\right)
\end{eqnarray}
Eq.(\ref{psitauusual}) can be written as
\begin{eqnarray}
\Psi(x,\phi)=\psi_{0}\frac{e^{-i\phi/2}}{\sqrt{2}}\left(\left|\uparrow\right\rangle_{z}+ e^{i\phi}\left|\downarrow\right\rangle_{z}\right)\equiv e^{- i\phi/2}\psi_{0} \chi(\phi)
\end{eqnarray}
where
\begin{equation}
\phi=\omega \tau
\end{equation}
is the well-known Larmor precession relation, $\omega$ is the Larmor frequency, and $e^{- i\phi/2}$ is a global phase. The spin state after the interaction is given by $\chi(\phi)=\frac{1}{\sqrt{2}}\left(\left|\uparrow\right\rangle+ e^{i\phi}\left|\downarrow\right\rangle\right)$.
In the above treatment $\tau$ is taken to be the transit time through the spin rotator, given by $\tau=\frac{a}{v}$ where $a$ is the spatial extension of spin-rotator containing the uniform magnetic field, and $v=\frac{\hbar k}{m}$ is the initial velocity of particle.
The detailed analysis of this problem for particles passing through the spin rotator should include the evolutions of both the spin and space parts of the total incident wave function. In the following section we provide such a treatment.
\subsection{The detailed analysis for particles passing through a spin-rotator}
We first consider the spatial part of the total incident wave function to be a plane wave corresponding to a single wave vector. Later, we shall consider the spatial part as a wave packet with superposition of plane waves.
Here we begin by recalling that Eqs. (\ref{pauli1}) and (\ref{pauli2}) imply that in this problem, we effectively have a situation where the spin up part of the wave function faces a \emph{potential barrier} while the spin down part of the wave function faces a \emph{potential well}. This, in turn, entails that the information about the spin part of the wave function enters the space part of the wave function through this potential, and thus solving the Schroedinger equation for only the space part of the wave function suffices to get complete information about the time evolution of the combined spin-space wave function. Therefore, instead of equations (\ref{so+}) and (\ref{so-}), one needs to solve Eqs.(\ref{pauli1}) and (\ref{pauli2}) explicitly.
The solutions to these equations, given our incident state, consist of a reflected part traveling in the $-x$-direction and a transmitted part of the wave function, traveling in the $+x$-direction. We should note here that the reflected part of the wave function exists \textit {only} to the left of the spin rotator, and the transmitted part exists \textit {only} to the right of the spin rotator. Our ultimate objective is to calculate the observable rotation of the spin part of the wave function caused by evolution of the state due to the spin-magnetic field interaction within the spin rotator.
For this, we need to look only at that part of the wave function which pertains to those neutrons which have actually passed through the spin rotator, or in other words we focus only on the transmitted part of the wave function. Therefore, using the solutions of equations (\ref{pauli1}) and (\ref{pauli2}), we will finally end up with the following state
\begin{equation}
\label{fitra}
\left|\Psi_{f}\right\rangle=\frac{N}{\sqrt{2}} ( \psi^{+}_T\left|\uparrow\right\rangle + \psi^{-}_{T}\left|\downarrow\right\rangle)
\end{equation}
where $N$ is the normalization constant, fixed by the condition $\frac{N^{2}}{2}\int_{v}\left(|\psi^{+}_T|^{2} + |\psi^{-}_T|^{2}\right) dv=1$.\\
Note that Eq.(\ref{fitra}) is an entangled state between the spin and spatial degrees of freedom of the transmitted part. The expressions for the reflected and transmitted parts of a wave function at a potential well or barrier are well known. However, from an empirical perspective, we note that in the above setup, even if we use low energy or ultra-cold neutrons, having kinetic energy of the order of $5 \times 10^{-7}\, eV$, for the potential energy term ($|\mu B|$) to exceed the kinetic energy term ($E$), we will need a field of the order of $10\, T$. Magnetic fields of such high intensity are difficult to produce in laboratory conditions, and therefore for all practical purposes, we should consider the situation where $E>\mu B$.
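A rough numerical check of this estimate, using the magnitude of the neutron magnetic moment $|\mu_n|\approx 6\times 10^{-8}\, eV/T$, gives for the field at which $|\mu B|$ reaches the ultra-cold kinetic energy
\[
B \sim \frac{E}{\mu} \approx \frac{5\times 10^{-7}\, eV}{6\times 10^{-8}\, eV/T} \approx 8\, T ,
\]
consistent with the order of magnitude quoted above.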
Now, since $\psi^{-}$ evolves under a potential well confined between $x=0$ and $x=a$, the reflected and transmitted parts are respectively given by
\begin{equation}
\label{psi-r}
\psi^{-}_{R}= A e^{- i k x}\frac{(k^2- k_1^2)(1-e^{2 i k_1 a})}{(k + k_1)^2- (k- k_1)^2 e^{2 i k_1 a} }; \quad x<0
\end{equation}
\begin{equation}
\label{psi-t}
\psi^{-}_{T}= A e^{i k x}\frac{4 k k_1 e^{- i k a} e^{i k_1 a}}{(k + k_1)^2- (k- k_1)^2 e^{2 i k_1 a} }; \quad x>a
\end{equation}
where $k=\frac{\sqrt{2 m E}}{\hbar}$ , $k_1= \frac{\sqrt{2 m (E+\mu B)}}{\hbar}$ and $a$ is the width of the spin rotator arrangement which contains the uniform magnetic field.
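For orientation (using the neutron mass $m\approx 1.67\times 10^{-27}\, kg$), the kinetic energies corresponding to the velocities considered later are
\[
E=\frac{1}{2}mv^{2}\approx 2\times 10^{-2}\, eV \ \ (v=2000\, m/s), \qquad E\approx 5\times 10^{-7}\, eV \ \ (v=10\, m/s),
\]
i.e.\ thermal and ultra-cold neutrons, respectively.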
The wave function $\psi^{+}$ evolves under a potential barrier; the expressions for its reflected and transmitted parts are obtained by replacing all the $k_1$'s in Eqs.(\ref{psi-r}) and (\ref{psi-t}) by $k_2$, where $k_2=\frac{\sqrt{2 m (E-\mu B)}}{\hbar}$.
\begin{equation}
\psi^{+}_{R}= A e^ {-i k x}\frac{(k^2- k_2^2)(1-e^{2 i k_2 a})}{(k + k_2)^2- (k- k_2)^2 e^{2 i k_2 a} }; \quad x<0
\end{equation}
\begin{equation}
\psi^{+}_{T}= A e^{i k x}\frac{4 k k_2 e^{- i k a} e^{i k_2 a}}{(k + k_2)^2- (k- k_2)^2 e^{2 i k_2 a} }; \quad x>a
\end{equation}
Then, in the regime $E>\mu B$, we rewrite Eq.(\ref{fitra}) in the following form, which is the modified formula for Larmor precession obtained from the explicit time-evolved solutions of the spatial parts $\psi^{+}$ and $\psi^{-}$
\begin{equation}
\label{finalwfn}
\left|\Psi_{f}\right\rangle= \frac{A e^{i k x}}{\sqrt{2}} (c e^{i \phi_{1}}\left|\uparrow\right\rangle + b e^{i \phi_{2}}\left|\downarrow\right\rangle)\equiv \psi_{0}\chi(\phi)
\end{equation}
Here
\begin{equation}
\label{c}
c=\sqrt{Re(\psi^{+}_{T})^{2}+Im(\psi^{+}_{T})^{2}}
\end{equation}
\begin{equation}
\label{d}
b=\sqrt{Re(\psi^{-}_{T})^{2}+Im(\psi^{-}_{T})^{2}}
\end{equation}
\begin{equation}
\label{e}
\phi_{1}= \tan^{-1}\frac{Im(\psi^{+}_{T})}{Re(\psi^{+}_{T})}
\end{equation}
\begin{equation}
\label{f}
\phi_{2}= \tan^{-1}\frac{Im(\psi^{-}_{T})}{Re(\psi^{-}_{T})}
\end{equation}
where, by using equations (\ref{psi-t}) and (13), we find that
\begin{eqnarray}
\label{repsi-t}
&&Re(\psi^{-}_{T})= \\
\nonumber
&&\frac{8k k_1(k^2+k_1^2)\sin(k a)\sin(k_1 a)+16k^2 k_1^2\cos(k a)\cos(k_1 a)}{(k+k_1)^4+(k-k_1)^4-2(k+k_1)^2(k-k_1)^2 \cos(2 k_1 a)}
\end{eqnarray}
\begin{eqnarray}
\label{impsi-t}
&&Im(\psi^{-}_{T})=\\
\nonumber
&&\frac{8k k_1(k^2+k_1^2)\cos(k a)\sin(k_1 a)-16k^2 k_1^2\sin(k a)\cos(k_1 a)}{(k+k_1)^4+(k-k_1)^4-2(k+k_1)^2(k-k_1)^2 \cos(2 k_1 a)}
\end{eqnarray}
\begin{eqnarray}
\label{repsi+t}
&&Re(\psi^{+}_{T})= \\
\nonumber
&&\frac{8k k_2(k^2+k_2^2)\sin(k a)\sin(k_2 a)+16k^2 k_2^2\cos(k a)\cos(k_2 a)}{(k+k_2)^4+(k-k_2)^4-2(k+k_2)^2(k-k_2)^2 \cos(2 k_2 a)}
\end{eqnarray}
\begin{eqnarray}
\label{impsi+t}
&&Im(\psi^{+}_{T})=\\
\nonumber
&&\frac{8k k_2(k^2+k_2^2)\cos(k a)\sin(k_2 a)-16k^2 k_2^2\sin(k a)\cos(k_2 a)}{(k+k_2)^4+(k-k_2)^4-2(k+k_2)^2(k-k_2)^2 \cos(2 k_2 a)}
\end{eqnarray}
\section{Limits of validity of the standard formula for Larmor precession}
Let us now examine in what limit the above expressions do indeed reduce to the standard expressions for Larmor precession. As already mentioned, we are working in the range $E>\mu B$. Let us now consider the stronger limit where $E\gg\mu B$. In this limit, the kinetic energy term of the Hamiltonian is appreciably larger than the potential energy term, and then, effectively, the time evolution of the entire wave function occurs due to a very shallow well and a very low barrier. This situation would correspond to the entire wave being transmitted, but picking up a phase. From the expressions for $k, k_1, k_2$, in the limit $E\gg\mu B$, we find that $k\approx k_1\approx k_2$.\\
In order to get the standard expression for Larmor precession, we will first set $k=k_1=k_2$ in Eqs.(\ref{repsi-t}),(\ref{impsi-t}), (\ref{repsi+t}), and (\ref{impsi+t}) except when they appear inside sine or cosine functions, since the latter terms are much more sensitive to the differences in values of $k, k_1, k_2$. Eqs. (\ref{repsi-t}) and (\ref{impsi-t}) then simplify to
\begin{equation}
Re(\psi^{-}_{T})= \cos(k_1a-k a)
\end{equation}
\begin{equation}
Im(\psi^{-}_{T})=\sin(k_1 a - k a)
\end{equation}
Using the above expressions in Eqs. (\ref{d}) and (\ref{f}), we find that $b=1$ and $\phi_2=(k_1-k)a$. Similarly, rewriting equations (\ref{repsi+t}) and (\ref{impsi+t}) and using them in Eqs.(\ref{c}) and (\ref{e}), we get $c=1$ and $\phi_1=(k_2-k)a$. Therefore Eq.(14) now has the form
\begin{equation}
\label{psiwfn1}
\left|\Psi_{f}\right\rangle= \frac{Ae^{i k x}}{\sqrt{2}} ( e^{i (k_2-k)a}\left|\uparrow\right\rangle + e^{i (k_1-k)a}\left|\downarrow\right\rangle)
\end{equation}
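For clarity, the expansion used in the next step is, to first order in $\mu B/E$,
\[
k_{1}=k\sqrt{1+\frac{\mu B}{E}}\approx k\left(1+\frac{\mu B}{2E}\right), \qquad
k_{2}=k\sqrt{1-\frac{\mu B}{E}}\approx k\left(1-\frac{\mu B}{2E}\right).
\]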
Since we have already stipulated the condition $k\approx k_1\approx k_2$, we can binomially expand $k_1$ and $k_2$ around $k$ and keep terms to the order of $\frac{\mu B}{E}$. Then $(k_1-k)a\approx k\frac{\mu B}{2 E}a$. Using the relations $k=\frac{\sqrt{2 m E}}{\hbar}$ and $v=\frac{\hbar k}{m}$, we can write $(k_1-k)a= \frac{\mu B}{\hbar}\frac{a}{v}= \frac{\omega \tau}{2}$. Similarly, $(k_2-k)a= - \frac{\mu B}{\hbar}\frac{a}{v}= -\frac{\omega \tau}{2}$. Therefore, we can write Eq.(\ref{psiwfn1}) as
\begin{eqnarray}
\left|\Psi_{f}\right\rangle&=& \frac{A e ^{i k x}}{\sqrt{2}} ( e^{- i \omega \tau/2}\left|\uparrow\right\rangle + e^{i \omega \tau/2}\left|\downarrow\right\rangle)\\
\nonumber
&&=\psi_{0}\frac{e^{- i\phi/2}}{\sqrt{2}}\left(\left|\uparrow\right\rangle_{z}+ e^{i\phi}\left|\downarrow\right\rangle_{z}\right)\equiv e^{- i\phi/2}\psi_{0} \chi(\phi)
\end{eqnarray}
which is exactly the equation we get from the spin-only treatment of the problem.
The above treatment brings out the curious feature that while the standard treatment ignores the kinetic energy term of the Hamiltonian, on solving the problem in a more complete manner, the same expression can be derived in the other extreme limit where the kinetic energy term is much higher than the spin-magnetic field interaction energy term.
\section{Quantitative estimates of the departure from the Larmor formula}
In the previous section we have shown that it is only when the kinetic energy associated with the wave function is much larger than the potential energy term in the Hamiltonian (in other words, the height of the potential barrier, or the depth of the potential well) that we get back the standard expression for Larmor precession. However, when the kinetic energy term becomes comparable to the potential energy term due to the spin-magnetic field interaction, the standard expression no longer holds. In this section we calculate the effect of this deviation in terms of an observable, given by, in our case, the number of particles measured to be along $\left |\uparrow \right \rangle_{\theta}$ when the state emerging from the region of the magnetic field is passed through a Stern-Gerlach arrangement which is oriented at an angle $\theta$ with respect to the $+\hat x$ axis.
The state $\left |\uparrow \right \rangle_{\theta}$ is defined as $|\uparrow \rangle_{\theta}=\frac{1}{\sqrt{2}}\left(|\uparrow {\rangle}_z + e^{i \theta}|\downarrow {\rangle}_z \right)$.
The spin part of our original wave function is given by $\chi (0)=1/{\sqrt{2}}~ \left(|\uparrow {\rangle}_z +|\downarrow {\rangle}_z \right)$. According to the standard treatment of Larmor precession, the final spin state is given by $\chi (\phi)=1/{\sqrt{2}}~\left(|\uparrow {\rangle}_z + e^{i \phi}
|\downarrow {\rangle}_z \right)$. Therefore the probability of getting the particles with $\left |\uparrow \right \rangle_{\theta}$ is given by
\begin{equation}
\label{pplus}
p_{+}(\theta)=|{}_{\theta}\langle \uparrow|\chi(\phi)\rangle|^2=\cos^2\left(\frac{\theta-\phi}{2}\right)
\end{equation}
However, in the light of the complete treatment presented in the last section, the final spin part of the wave function is given by Eq. (14) to be $\chi^{\prime} (\phi)=1/{\sqrt{2}}~\left(c e^{i \phi_1}|\uparrow {\rangle}_z + b e^{i \phi_2}
|\downarrow {\rangle}_z \right)$. Consequently, the probability of getting particles with $\left |\uparrow \right \rangle_{\theta}$ is modified to
\begin{equation}
\label{pplusprime}
p^{\prime}_{+}(\theta)=\frac{1}{4}\left(c^2+b^2+2 c b \cos(\phi_1-\phi_2+\theta)\right)
\end{equation}
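As a consistency check, in the limit discussed in Section III, where $c\approx b\approx 1$ and $\phi_1-\phi_2\approx -\phi$, Eq.(\ref{pplusprime}) reduces to Eq.(\ref{pplus}):
\[
p^{\prime}_{+}(\theta)\approx \frac{1}{4}\left(2+2\cos(\theta-\phi)\right)=\cos^2\left(\frac{\theta-\phi}{2}\right)=p_{+}(\theta).
\]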
Away from this limit, however, Eqs.(\ref{pplus}) and (\ref{pplusprime}) differ, and we shall study the conditions under which they coincide. In the table given below, we compare the values of $p_{+}(\theta)$ and $p^{\prime}_{+}(\theta)$ for $\theta=0$ for different regimes of the velocity of the incident neutrons and the strength of the applied magnetic field. We can see clearly from the results given in Tables 1 and 2 that when the incident velocity of the neutrons, or equivalently their kinetic energy, is large and the magnetic field is weak, $p_{+}(\theta)$ and $p^{\prime}_{+}(\theta)$ are the same. However, on increasing the strength of the magnetic field, or decreasing the velocity of the incident neutrons, there is an empirically verifiable difference between $p_{+}(\theta)$ and $p^{\prime}_{+}(\theta)$.
\begin{table}
\begin{tabular}{|c|c|c|}
\hline
$v(m/sec)$ & $p_{+}(\theta) $ & $p^{\prime}_{+}(\theta)$\\
\hline
\hline
$2000$ &0.40725&0.40725\\
\hline
$200$&0.645427&0.645464\\
\hline
$50$&0.690242& 0.653428\\
\hline
$10$&0.964184&0.855380\\
\hline
\end{tabular}
\caption{ \footnotesize Table 1: This table shows the numerical values of $p_{+}(\theta)$ and $p^{\prime}_{+}(\theta)$ for a fixed magnetic field, in this case, $2T$, while decreasing the velocity of the incident neutrons. For thermal neutrons, we see that the two are the same, while differences start arising for cold neutrons and this difference is appreciable for ultra-cold neutrons.}
\end{table}
\begin{table}
\begin{tabular}{|c|c|c|}
\hline
$B(Tesla)$ & $p_{+}(\theta) $ & $p^{\prime}_{+}(\theta)$\\
\hline
\hline
$0.5$ &0.997736&0.912933\\
\hline
$0.1$&0.645427&0.660003\\
\hline
$0.01$&0.407245& 0.407230\\
\hline
$0.001$&0.949661&0.949661\\
\hline
\end{tabular}
\caption{\footnotesize Table 2: This table shows the numerical values of $p_{+}(\theta)$ and $p^{\prime}_{+}(\theta)$ for a fixed incident velocity of neutrons, in this case, in the ultra-cold neutron regime of $10\, m/s$, while decreasing the applied magnetic field. For a strong magnetic field, there is an appreciable difference between the two columns, which decreases as we decrease the magnetic field, till at low values of the field, the two are essentially the same.}
\end{table}
\section{Generalization of the Larmor precession treatment for calculating the spin distribution for a wave packet}
We will now use the results derived above to analyze the spin distribution which arises due to spin-magnetic field interaction when a wave packet passes through the spin rotator arrangement. For this purpose, we use a Gaussian wave packet as the space part of our incident wave function, instead of a plane wave as was used in Sections II and III. We choose the spin-polarization of the incident wave function to be in the $+x$ direction, and the magnetic field to be pointing along the $+z$ direction. Thus our incident wave function is given by
\begin{eqnarray}
\left|\Psi_i\right\rangle=\frac{1}{(2 \pi \delta ^2)^{\frac{1}{4}}}e^{-\frac{(x-x_0)^2}{4 \delta ^2}}e^{i k_0 x}\frac{1}{\sqrt{2}}\left(\left|\uparrow\right\rangle+\left|\downarrow\right\rangle\right)
\end{eqnarray}
where $x_0$ is the initial peak of the wave packet, $k_0$ is the peak wave-number and $\delta$ is the width of the incident wave packet. In the previous section we have seen that the precession of spin caused by interaction with the magnetic field within a spin rotator of given parameters is a function only of $k$. Therefore, while dealing with the Gaussian wave packet, it is convenient to use the Fourier transform of the incident wave function, given by
\begin{equation}
\left|\Psi_i\right\rangle=\left(\frac{2 \delta^2}{\pi}\right)^{\frac{1}{4}}e^{-\delta ^2(k-k_0)^2}e^{i k x_0}\frac{1}{\sqrt 2}\left(\left|\uparrow\right\rangle+\left|\downarrow\right\rangle\right)
\end{equation}
Using results from the previous section, we can then write the final wave function in the Fourier basis to be
\begin{eqnarray}
\left|\Psi_f\right\rangle&=&\left(\frac{2 \delta^2}{\pi}\right)^{\frac{1}{4}}e^{-\delta ^2(k-k_0)^2}e^{i k x_0}\\
\nonumber
&&\times\frac{1}{\sqrt{2}}\left(c(k)e^{i\phi_1(k)}\left|\uparrow\right\rangle+b(k)e^{i\phi_2(k)}\left|\downarrow\right\rangle\right)
\end{eqnarray}
where $c, b, \phi_1, \phi_2$ are as defined in Eqs.~(15)--(18) and hence depend on $k$. From the above equation, it becomes clear that we have a spin distribution which arises as a result of the spin-magnetic field interaction of the different wave-number components of the original wave packet.
We will now find the distribution of spins along $\left|\chi\right\rangle=\frac{1}{\sqrt{2}}\left(\left|\uparrow\right\rangle+\left|\downarrow\right\rangle\right)$ or the $+x$ direction. The projection of the spin of the final wave function along this direction is given by
\begin{eqnarray}
\langle\chi\left|\Psi_f\right\rangle&=&\left(\frac{2 \delta^2}{\pi}\right)^{\frac{1}{4}}e^{-\delta ^2(k-k_0)^2}e^{i k x_0}\\
\nonumber
&&\times\frac{1}{2}\left(c(k)e^{i\phi_1(k)}+b(k)e^{i\phi_2(k)}\right)
\end{eqnarray}
Therefore, the probability of finding spins along the $+x$ direction will be given by
\begin{eqnarray}
\left|\langle\chi\left|\Psi_f\right\rangle\right|^2&=&\left(\frac{2 \delta^2}{\pi}\right)^{\frac{1}{2}}e^{-2 \delta ^2(k-k_0)^2}\\
\nonumber
&&\times\frac{1}{4}\left(c(k)e^{i\phi_1(k)}+b(k)e^{i\phi_2(k)}\right)\\
\nonumber
&&\times\left(c^{*}(k)e^{-i\phi_1(k)}+b^{*}(k)e^{-i\phi_2(k)}\right)
\end{eqnarray}
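Assuming, as in Eq.~(\ref{pplusprime}), that the amplitudes $c(k)$ and $b(k)$ are real, the product of the last two factors can be multiplied out, giving the more explicit form
\[
\left|\langle\chi|\Psi_f\rangle\right|^2=\left(\frac{2 \delta^2}{\pi}\right)^{\frac{1}{2}}e^{-2 \delta ^2(k-k_0)^2}\;\frac{1}{4}\left[c^{2}(k)+b^{2}(k)+2\,c(k)\,b(k)\cos\left(\phi_1(k)-\phi_2(k)\right)\right],
\]
which, apart from the Gaussian weight in $k$, has the same structure as Eq.~(\ref{pplusprime}) evaluated at $\theta=0$.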
Such a spin probability function can be measured by a Stern-Gerlach arrangement for the particles emerging from the spin rotator. A crucial feature to be stressed here is that the above spin probability function given by Eq. (33) has been obtained from our complete treatment of the problem of the time evolution of the spin of a particle passing through a region of uniform magnetic field. On the other hand, the counterpart of such a spin probability distribution can also be obtained from the standard treatment of Larmor precession. A comparison between the results obtained from these two different approaches for various values of the magnetic field and velocity of the incident neutrons is illustrated in Figures 1 and 2. Similar to the case of the plane wave, we notice that the distributions from the two approaches overlap when the incident velocity of the neutrons is high and the magnetic field is weak, whereas a noticeable divergence appears upon either decreasing the incident velocity or increasing the strength of the magnetic field.
\begin{figure}
\caption{\footnotesize (Color online) Here in the successive plots, we show the resultant probability distribution of spins emerging from the spin rotator, calculated according to the standard Larmor precession formula (lighter red curve) and the modified formula (darker black curve) given in this paper, while varying the constant magnetic field applied in the region of the spin rotator. The incident velocity is fixed at $10m/s$, while the applied magnetic field takes the values $0.001T$, $0.03T$ and $0.15T$. The observable effect of the departure from the standard expression becomes more pronounced as the magnetic field is increased.}
\end{figure}
\begin{figure}
\caption{\footnotesize (Color online) Here in the successive plots, we show the resultant probability distribution of spins emerging from the spin rotator, calculated according to the standard Larmor precession formula (lighter red curve) and the modified formula (darker black curve) given in this paper, while varying the incident velocity of the neutrons. The value of the applied magnetic field is held at $0.15T$, while the incident velocities are taken to be $100m/s$, $25m/s$ and $10m/s$. The observable effect of the departure from the standard expression becomes more pronounced when the incident velocity of the neutrons is decreased.}
\end{figure}
\section{Conclusion and outlook}
In a nutshell, the central result of the present paper is that the standard expression for Larmor precession holds true only in the regime where the kinetic energy of the system is much greater than the potential energy term arising out of the interaction between the spin of the particle and the magnetic field. It is essentially when these two terms become comparable that the departure from the standard expression for Larmor precession becomes appreciable to the extent of being empirically observable; such a departure can indeed be tested by choosing appropriate conditions of high magnetic field and low velocity of incident neutrons, for example, in experiments using cold neutrons.
An interesting application of the above treatment could be in enabling the construction of an effective transit time distribution for a spin-polarized wave packet passing through a spin-rotator containing a uniform magnetic field, subject of course to the constraint of choosing the relevant parameters appropriately so that the rotation of spin pertaining to any wave-number component of the wave packet does not exceed $\pi$. A detailed derivation of the spin probability distribution emerging from a spin-rotator following our treatment, which identifies precisely the regime in which the standard formula of Larmor precession is valid, would thus be a crucial ingredient for using a spin-rotator as a quantum clock \cite{pan}. Such a clock may be particularly useful for measuring arrival/transit time distributions, and for making a quantitative study of the possible differences in the predictions obtained from the different quantum mechanical schemes suggested for computing the arrival/transit time distributions \cite{mugarev,mugabook}. Further studies along this line using the exact formula for Larmor precession derived in this paper are called for, the results of which could be compared with other models for quantum clocks suggested in the literature \cite{wigner}.
Among other possible uses of the exact formula for Larmor precession in the light of the recent significant experiments, here we may mention, for example, the neutron interferometric experiment \cite{yuji} testing single particle Bell's inequality \cite{basu} involving entanglement between the path and the spin degrees of freedom of a spin-1/2 particle. In such an experiment, the spin flipper that is placed in one of the two paths of the interferometer plays a crucial role in generating the path-spin entangled state, since the spin flipper is ideally required to flip spins of all the neutrons passing through it, so that the flipped state and the unflipped spin state in the respective two paths are completely orthogonal. In order to ensure this condition, the choice of the relevant parameters has to be carefully made on the basis of an appropriate formula for Larmor precession. Usually this is done by using the standard Larmor formula, as in the above mentioned experiment \cite{yuji}. It is in such context that the exact formula for Larmor precession obtained in this paper could also be useful.
{\it Acknowledgements:} We are grateful to H. Rauch, A. Sudbery, V. Scarani, Y. Hasegawa, A. Matzkin and T. S. Mahesh for useful discussions based on an earlier version of this work. DH acknowledges support from the DST Project SR/S2/PU-16/2007. DH also thanks the Centre for Science, Kolkata for support. AKP acknowledges the support from the JSPS Postdoctoral Fellowship for Foreign Researchers and Grant-in-Aid for JSPS fellows No. 24-02320.
\end{document}
\begin{document}
\title{Upper bound for the first non-zero eigenvalue of the $p$-Laplacian}
\begin{abstract}
Let $M$ be a closed hypersurface in $\mathbb{R}^{n}$ and $\Omega$ be a bounded domain such that $M= \partial\Omega$. In this article, we obtain an upper bound for the first non-zero eigenvalue of the following problems.
\begin{itemize}
\item Closed eigenvalue problem:
\begin{align*}
\Delta_p u = \lambda_{p} \ |u|^{p-2} \ u \qquad \mbox{ on } \quad {M}.
\end{align*}
\item Steklov eigenvalue problem:
\begin{align*}
\begin{array}{rcll}
\Delta_{p}u &=& 0 & \mbox{ in } \Omega ,\\
|\nabla u|^{p-2} \frac{\partial u}{\partial \nu} &=& \mu_{p} \ |u|^{p-2} \ u &\mbox{ on } M .
\end{array}
\end{align*}
\end{itemize}
\end{abstract}
\textbf{Keywords:} p-Laplacian, Closed eigenvalue problem, Steklov eigenvalue problem, Center-of-mass.\\
\textbf{Mathematics subject classification:} 35P15, 58J50.\\
\section{Introduction}
The $p$-Laplace operator, defined as $ \Delta_{p} u := - \text{div}\left( {|\nabla u|^{p-2}} \nabla u \right)$, is the nonlinear generalization of the usual Laplace operator.
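For instance, with the sign convention adopted here, the case $p=2$ recovers the (positive) Laplace operator:
\[
\Delta_{2} u = - \text{div}\left( \nabla u \right).
\]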
Many interesting results providing sharp upper bounds for the first non-zero eigenvalue of the usual Laplacian $\left( p=2\right)$ have been obtained. In \cite{BW}, Bleecker and Weiner obtained a sharp upper bound for the first non-zero eigenvalue of the Laplacian in terms of the second fundamental form of a hypersurface $M$ in $\mathbb{R}^n$. In \cite{R}, Reilly gave an upper bound for the first non-zero eigenvalue in terms of higher order mean curvatures for a compact $n$-dimensional manifold isometrically immersed in $\mathbb{R}^{n+p}$, which improves the earlier estimate. This result was later extended to submanifolds of simply connected space forms in various ways (see \cite{EH, JFG}). These upper bounds are extrinsic in the sense that they depend either on the length of the second fundamental form or on the higher order mean curvatures of $M$.
Let $M$ be a hypersurface in a rank-$1$ symmetric space. In \cite{GS2}, an upper bound for the first non-zero eigenvalue of $M$ was obtained in terms of the integral of the first non-zero eigenvalue of the geodesic spheres centered at the centre of gravity of $M$.
For a closed hypersurface $M$ contained in a ball of radius less than
$\frac{i(\mathbb{M}(k))}{4}$ and bounding a convex domain $\Omega$ such that $\partial\Omega=M$ in the simply connected space form
$\mathbb{M}(k)$, $k=0$ or $1$, Santhanam \cite{GS1} proved that
\begin{align*}
\frac{\lambda_{1}(M)}{\lambda_{1}(S(R))} \leq \frac{\text{Vol}(M)}{\text{Vol}(S(R))},
\end{align*}
where $S(R)\left( = \partial B(R)\right) $ is the geodesic sphere of radius $R>0$ such that Vol$(B(R))$ = Vol$(\Omega)$. A similar result was also obtained for $k= -1$.
In this article, we extend the results in \cite{GS1} to $p$-Laplacian for a closed hypersurface $M \subset \mathbb{R}^n$. In particular, we consider the closed eigenvalue problem
\begin{align}
\label{closedeigenvalueproblem}
\Delta_{p}u = \lambda_{p}\, |u|^{p-2}\,u \quad \mbox{ on } M,
\end{align}
where $M$ is a closed hypersurface in $\mathbb{R}^n$, and we find an upper bound for the first non-zero eigenvalue of this problem.
Let ${M}$ be a closed hypersurface in $\mathbb{R}^n$ and $\Omega$ be the bounded domain such that ${M} = \partial \Omega$. Consider the following problem
\begin{align*}
\begin{array}{rcll}
\Delta f &=& 0 & \mbox{ in } \Omega ,\\
\frac{\partial f}{\partial \nu} &=& \mu f &\mbox{ on } \partial \Omega,
\end{array}
\end{align*}
where $\nu$ is the outward unit normal on the boundary $\partial \Omega$ and $\mu$ is a real number.
This problem is known as the Steklov eigenvalue problem and was introduced by Steklov \cite{S} for bounded domains in the plane in $1902$. This problem is important as the set of eigenvalues of the Steklov problem is the same as the set of eigenvalues of the well-known Dirichlet-to-Neumann map.
There are several results which estimate the first non-zero eigenvalue $\mu_1$ of the Steklov eigenvalue problem \cite{P, E1, E2, E3, BGS}.
The first isoperimetric upper bound for $\mu_{1}$ was given by Weinstock \cite{W} in $1954$. He proved that among all simply connected planar domains with analytic boundary of fixed perimeter, the circle maximizes $\mu_{1}$. In \cite{P}, Payne obtained a two-sided bound for the first non-zero Steklov eigenvalue on a convex planar domain in terms of the minimum and maximum curvature. The lower bound in \cite{P} has been generalized by Escobar \cite{E1} to $2$-dimensional compact manifolds with non-negative Gaussian curvature. Using the Weinstock inequality, Escobar \cite{E2} proved that, for a fixed volume, among all bounded simply connected domains in $2$-dimensional simply connected space forms, geodesic balls maximize the first non-zero Steklov eigenvalue. This result has been extended to non-compact rank-$1$ symmetric spaces in \cite{BGS}. We prove a similar result for the first non-zero eigenvalue of the eigenvalue problem
\begin{align} \label{stekloveigenvalueproblem}
\begin{array} {rcll}
\Delta_{p}u &=& 0 & \mbox{ in } \Omega, \\
|\nabla u|^{p-2} \frac{\partial u}{\partial \nu} &=& \mu_{p} \ |u|^{p-2} \ u &\mbox{ on } M,
\end{array}
\end{align}
where $\Omega$ is a bounded domain in $\mathbb{R}^n$ such that ${M} = \partial \Omega$ and $\nu$ is outward unit normal on $M$.
In Section $2$, we state our main results. In Section $3$, we recall some basic facts about the first non-zero eigenvalues of problems \eqref{closedeigenvalueproblem} and \eqref{stekloveigenvalueproblem}, and prove some results which will be required later. In Sections $4$, $5$ and $6$, we provide the proofs of the results stated in Section $2$.
\section{Statement of the results}
We state a variation of the centre-of-mass theorem, which is crucial for the proofs of our main results.
\begin{thm}
\label{thm:test function}
Let $\Omega$ be a bounded domain in $\mathbb{R}^n$ and $M = \partial \Omega$. Then for every real number $1<p< \infty$, there exists a point $t \in \overline{\Omega} $ depending on $p$ and a normal coordinate system $\left(X_1,X_2,\ldots,X_n \right)$ centered at $t$ such that for $ 1 \leq i \leq n$,
\begin{align*}
\int_M{|X_i|^{p-2} X_i} = 0.
\end{align*}
\end{thm}
Now we state our main results.
The following theorem provides an upper bound for the first non-zero eigenvalue $\lambda_{1,p}$ of the closed eigenvalue problem \eqref{closedeigenvalueproblem}.
\begin{thm}
\label{thm:closedfe}
Let $M$ be a closed hypersurface in $\mathbb{R}^n$ bounding a bounded domain $\Omega$. Let $R>0$ be such that $\text{Vol }(\Omega)=\text{Vol }(B(R))$, where $B(R)$ is a ball of radius $R$. Then the first non-zero eigenvalue $\lambda_{1,p}$ of the closed eigenvalue problem~\eqref{closedeigenvalueproblem} satisfies
\begin{align}
\label{eqn:closedfe1}
\lambda_{1,p} \leq {n}^\frac{\vert p-2 \vert}{2} \, {\lambda_1(S(R))}^\frac{p}{2} \, \left(\frac{\text{ Vol }(M)}{\text{ Vol }(S(R))}\right).
\end{align}
Furthermore, for $p=2$, the upper bound (\ref{eqn:closedfe1}) is sharp and the equality holds if and only if $M$ is a geodesic sphere of radius $R$ (see \cite{GS1}).
If equality holds in \eqref{eqn:closedfe1} then $M$ is a geodesic sphere and $p=2$.
\end{thm}
In case of Steklov eigenvalue problem, we have the following upper bound for the first non-zero eigenvalue $\mu_{1,p}$.
\begin{thm}
\label{thm:steklovfe}
Let $\Omega$ be a bounded domain in $\mathbb{R}^n$ with smooth boundary $M$ and $R>0$ be such that $\text{ Vol }(\Omega)=\text{ Vol }(B(R))$, where $B(R)$ is a ball of radius $R$. Then the first non-zero eigenvalue $\mu_{1,p}$ of problem \eqref{stekloveigenvalueproblem} satisfies the following inequality.
\begin{itemize}
\item For $ 1<p<2$,
\begin{align}
\label{eqn:steklovfe2}
\mu_{1,p} \leq \frac{1}{R^{p-1}}.
\end{align}
\item For $p \geq 2$,
\begin{align}
\label{eqn:steklovfe1}
\mu_{1,p} \leq \frac{{n}^{p-2}}{R^{p-1}}.
\end{align}
\end{itemize}
Furthermore, for $p=2$, equality holds in (\ref{eqn:steklovfe2}) and (\ref{eqn:steklovfe1}) iff $M$ is a geodesic sphere of radius $R$ (see \cite{BGS}).
If equality holds in \eqref{eqn:steklovfe2} and \eqref{eqn:steklovfe1} then $M$ is a geodesic sphere of radius $R$ and $p=2$.
\end{thm}
\section{Preliminaries}
In this section, we state some basic facts about the first non-zero eigenvalue of the eigenvalue problems \eqref{closedeigenvalueproblem} and \eqref{stekloveigenvalueproblem}. We will also prove some results that are needed in subsequent sections.
Let $u_1$ be an eigenfunction corresponding to the eigenvalue $\lambda_p$ of closed eigenvalue problem \eqref{closedeigenvalueproblem} and $u_2$ be an eigenfunction corresponding to the eigenvalue $\mu_{p}$ of the Steklov eigenvalue problem \eqref{stekloveigenvalueproblem}. Then $\lambda_p$ and $\mu_{p}$ satisfy
\begin{align*}
\lambda_{p} \int_M {|{u_1}|^p} = \int_M \|\nabla^{M} u_1\|^p,
\end{align*}
\begin{align*}
\mu_{p} \int_M {|{u_2}|^p} = \int_\Omega \|\nabla u_2\|^p.
\end{align*}
This shows that all eigenvalues of problems \eqref{closedeigenvalueproblem} and \eqref{stekloveigenvalueproblem} are non-negative.
Let $\lambda_{1,p}$ and $\mu_{1,p}$ be the first non-zero eigenvalues of the closed and Steklov eigenvalue problems, respectively. Then the variational characterization for $\lambda_{1,p}$ and $\mu_{1,p}$ is given by
\begin{align*}
\lambda_{1,p}= \inf \left\lbrace \frac{\int_{M}{\|\nabla^{M} u\|^p}}{\int_M{|u|^p}} : \int_M{|u|^{p-2} u} =0, u(\neq 0) \in C^1(M)\right\rbrace,
\end{align*}
\begin{align*}
\mu_{1,p}= \inf \left\lbrace \frac{\int_\Omega{\|\nabla u\|^p}}{\int_M{|u|^p}} : \int_M{|u|^{p-2} u} =0, u(\neq 0) \in C^1(\Omega)\right\rbrace.
\end{align*}
\begin{rmk} If $p=2$, then the condition $\int_M{|u|^{p-2} u} = \int_M{u} = 0 $ is equivalent to saying that the test function must be orthogonal to the constant functions with respect to the $L^2$-inner product.
\end{rmk}
Let $M$ be a closed hypersurface in $\mathbb{R}^n$ and $\Omega$ be a bounded domain in $\mathbb{R}^n$ such that $M = \partial\Omega$. Fix a point $q \in \Omega$. Then for every point $s \in M$, the line joining $q$ and $s$ may intersect $M$ at some points other than $s$. For every point $s \in M$, let $r(s):=d(q,s)$ and for every $u \in {\mathbb{S}^{n-1}}$, let
$\beta(u):= \text{ max } \left\lbrace \beta >0 | \, {q+\beta u} \in M , \, \beta \in \mathbb{R}\right\rbrace$. Let $A := \left\lbrace {q+ \beta(u)u}| \, u \in {\mathbb{S}^{n-1}} \right\rbrace$. Then $A \subseteq M$.
\begin{lemma}
\label{lem:lemma1}
Let $\Omega$ be a bounded domain in $\mathbb{R}^n$ with smooth boundary $M$ and $R>0$ be such that $\text{ Vol }(\Omega)=\text{Vol }(B(R))$, where $B(R)$ is a ball of radius $R$. Fix a point $q\in \overline{\Omega}$. Then
\begin{align}
\label{eqn:lemma1}
\int_M {r^{p}(s)} \, ds \geq {R^{p}} \text{ Vol }(S(R)).
\end{align}
Further, equality holds in \eqref{eqn:lemma1} iff $M$ is a geodesic sphere of radius $R$ centered at $q$.
\end{lemma}
\begin{proof} For a point $s \in A $, let $\gamma_s$ be the unique unit speed geodesic joining $q$ and $s$ with $\gamma_s(0)=q$. Let $u= \gamma'_s(0)$ and $t_s(u) = d(q,s) $.
Let $\theta(s)$ be the angle between the outward unit normal $\nu(s)$ to $M$ and the radial vector $\partial r(s)$. Let $du$ be the spherical volume density of the unit sphere ${\mathbb{S}^{n-1}}$. Then
\begin{align*}
\int_M {r^{p}(s)} \ ds &\geq \int_A {r^{p}(s)} \ ds \\
&= \int_{\mathbb{S}^{n-1}} \left(t_s(u)\right) ^{p} \sec \theta(s) \left(t_s(u)\right) ^{n-1} du \\
& \geq \int_{\mathbb{S}^{n-1}} \left(t_s(u)\right) ^{n+p-1} du \\
&= (n+p-1) \int_{\mathbb{S}^{n-1}} \int_{0} ^ {t_s(u)} {r}^{n+p-2} dr \ du \\
& \geq (n+p-1) \int_\Omega r^{p-1} dV
\end{align*}
and
\begin{align} \nonumber
\int_\Omega r^{p-1} dV &= \int_{\Omega\cap B(R)}r^{p-1} dV + \int_{\Omega\setminus{\Omega\cap B(R)}}r^{p-1} dV \\ \nonumber
&= \int_{ B(R)}r^{p-1} dV - \int_{B(R)\setminus{\Omega\cap B(R)}}r^{p-1} dV + \int_{\Omega\setminus{\Omega\cap B(R)}}r^{p-1} dV \\ \label{inequalityonr}
&\geq \int_{ B(R)}r^{p-1} dV - \int_{B(R)\setminus{\Omega\cap B(R)}}r^{p-1} dV + \int_{\Omega\setminus{\Omega\cap B(R)}}R^{p-1} dV \\ \nonumber
&= \int_{ B(R)}r^{p-1} dV + \int_{B(R)\setminus{\Omega\cap B(R)}}(R^{p-1}-r^{p-1})dV \\ \nonumber
& \geq \int_{ B(R)}r^{p-1} dV \\ \nonumber
&= \int_{\mathbb{S}^{n-1}} \int_{0} ^{R} r^{n+p-2} dr \ du \\ \nonumber
&= \int_{\mathbb{S}^{n-1}}\frac{R^{n+p-1}}{n+p-1} du \\ \nonumber
&= \frac{R^{p}}{n+p-1} \text{ Vol }(S(R)).
\end{align}
We have used the fact that $R \leq r \text{ in } \left( \Omega\setminus{\Omega\cap B(R)}\right)$ in \eqref{inequalityonr}.
Further, equality holds in \eqref{eqn:lemma1} iff $\sec \theta(s)=1 \text{ for all points } s \in M $ and $\text{ Vol} \left( {B(R)\setminus{\Omega\cap B(R)}}\right) =0 $. Note that $\sec \theta(s)=1$ iff $ \theta(s)=0 \text{ for all points } s \in M $. Therefore outward unit normal $\nu(s)= \partial r(s)\, \text{ for all points } s \in M $. This shows that $ \Omega= B(q,R)$ and $M$ is a geodesic sphere of radius $R$.\end{proof}
The above lemma is a generalization of Lemma~1 in \cite{GS1}.
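As a quick illustration of the equality case, if $M$ is itself the geodesic sphere $S(q,R)$ (so that $\Omega=B(q,R)$ and the volume normalization holds automatically), then $r(s)=R$ for every $s\in M$ and
\[
\int_M r^{p}(s)\, ds = R^{p}\, \text{Vol}(S(R)),
\]
so the bound \eqref{eqn:lemma1} is attained.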
\begin{lemma}
\label{lem:lemma2}
Let $n \in \mathbb{N}$ and $y_1, y_2,\ldots, y_n $ be non-negative real numbers. Then for every real number $\gamma \geq 1$, the following inequality holds.
\begin{align}
\label{eqn:lemma2}
\left( y_1 + y_2 + \cdots + y_n \right)^ {\gamma} \geq y_{1}^{\gamma} + y_{2}^{\gamma} + \cdots + y_{n}^{\gamma} .
\end{align}
\end{lemma}
\begin{proof}
Let $n \in \mathbb{N}$ and $y_1, y_2,\ldots, y_n $ be non-negative real numbers. Let $\gamma \geq 1$. Then inequality~\eqref{eqn:lemma2} can be written as
$$
\left( \frac{y_1}{ y_1 + y_2 + \cdots + y_n }\right) ^{\gamma} + \left( \frac{y_2}{ y_1 + y_2 + \cdots + y_n }\right) ^{\gamma} + \cdots + \left( \frac{y_n}{ y_1 + y_2 + \cdots + y_n }\right) ^{\gamma} \leq 1 .$$
Therefore, it is enough to show that
$a_{1}^{\gamma} + a_{2}^{\gamma}+\cdots + a_{n}^{\gamma} \leq 1$ for non-negative real numbers $ a_i$ such that $a_1+a_2+\cdots +a_n =1$.
Since $0\leq a_i \leq 1$ and $\gamma \geq 1$, we have $a_{i}^{\gamma}\leq a_i$. Therefore,
$ a_{1}^{\gamma} + a_{2}^{\gamma}+\cdots + a_{n}^{\gamma} \leq a_1 +a_2 +\cdots+a_n =1$. This proves the lemma.
\end{proof}
Next we estimate $ {\sum_{i=1}^{n}} \|{\nabla ^M {x_i}}\|^{2} $.
\begin{lemma}
\label{lem:lemma3}
Let ${M}$ be a closed hypersurface in $\mathbb{R}^n$ and $\Omega$ be a bounded domain such that ${M} = \partial \Omega$. For a fixed point $t\in \overline{\Omega}$, let $(x_1, x_2,\ldots, x_n)$ be the normal coordinate system centered at $t$. Then
$$
{\sum_{i=1}^{n}} \|{\nabla ^M {x_i}}\|^{2} = (n-1).
$$
\end{lemma}
\begin{proof} Observe that $\|{\nabla {x_i}(p)}\| = 1$ for $1 \leq i \leq n$ and every point $p \in \mathbb{R}^n$. Let $\nu$ be the outward unit normal on $M$. Then
\begin{align*}
{\sum_{i=1}^{n}} \|{\nabla ^M {x_i}}\|^{2} &= {\sum_{i=1}^{n}} \left( \|{\nabla {x_i}}\|^{2} - {\langle \nabla {x_i} , \nu \rangle}^2 \right) \\
&= {\sum_{i=1}^{n}} \|{\nabla {x_i}}\|^{2} - \|\nu\|^2 \\
&= n-1.
\qedhere
\end{align*}
\end{proof}
For a Riemannian geometric proof of the above lemma, see \cite{EH}.
\section{Proof of Theorem \ref{thm:test function}}
\begin{proof} Given a point $x \in \mathbb{R}^n$, we write $\left(x_1,\ldots,x_n \right)$ for the standard Euclidean coordinate system centered at the origin. For $1<p<\infty$, define a function $f: \overline{\Omega} \to \mathbb{R}$ by
$$
f\left( t_1, \ldots,t_n\right) =\frac{1}{p} \int_M{\sum_{i=1}^{n} |x_i - t_i|^{p}} \, ds.
$$
The function $f$ is non-negative on $\overline{\Omega}$. Let $\alpha$ be its infimum. Then there exists a sequence $\left(t_{1}^{j}, \ldots,t_{n}^{j} \right) $ in $\overline{\Omega}$ such that
\begin{align}
\label{eqn:test function1}
\frac{1}{p} \int_M{\sum_{i=1}^{n} |x_i - t_{i}^{j}|^{p}} \longrightarrow \alpha \quad \text{ in } \mathbb{R} \quad \text{ as } j \longrightarrow \infty.
\end{align}
Observe that the sequence $\left( t_{1}^{j}, \ldots, t_{n}^{j}\right) $ is bounded. Therefore it has a convergent subsequence, which, without loss of generality, we again denote by $\left( t_{1}^{j}, \ldots, t_{n}^{j}\right) $, converging to some $t = \left( t_1,\ldots,t_n\right) \in \mathbb{R}^n$. Since $\overline{\Omega}$ is closed, $t \in \overline{\Omega}$. Then
\begin{align*}
\sum_{i=1}^{n} |x_i - t_{i}^{j}|^{p} &\longrightarrow \sum_{i=1}^{n} |x_i - t_{i}|^{p} \qquad \text{as} \quad j \longrightarrow \infty \\
\mbox{ and } \qquad \frac{1}{p} \int_M {\sum_{i=1}^{n} |x_i - t_{i}^{j}|^{p}} &\longrightarrow \frac{1}{p} \int_M{\sum_{i=1}^{n} |x_i - t_{i}|^{p}} \qquad \text{as} \quad j \longrightarrow \infty.
\end{align*}
Therefore,
$$\frac{1}{p} \int_M {\sum_{i=1}^{n} |x_i - t_{i}|^{p}} = \alpha \qquad \mbox{ and } \qquad
f\left(t_1,\ldots,t_n \right) = \alpha.$$
Since $f$ attains its minimum at $t= \left(t_1,\ldots,t_n \right)$, we have $\left(\nabla f \right)_{t} = 0$. Therefore for each $1\leq i\leq n$,
\begin{align*}
\langle \nabla f, e_i \rangle_{(t_1,\ldots,t_n)} = -\int_{M} |X_i|^{p-2} X_i =0,
\end{align*}
where $\left\lbrace e_i, 1 \leq i \leq n\right\rbrace $ is the standard orthonormal basis of $\mathbb{R}^n$ and $X_i:= (x_i - t_i), 1 \leq i \leq n$.
This proves the theorem.
\end{proof}
We will use the above theorem to show the existence of a point ${t}=\left( t_1,\ldots,t_n\right)\in \overline{\Omega} $, such that the coordinate functions with respect to $t$ are test functions for the eigenvalue problems \eqref{closedeigenvalueproblem} and \eqref{stekloveigenvalueproblem}.
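As an illustration (not needed in the proofs), for $p=2$ the defining condition of Theorem~\ref{thm:test function} linearizes:
\[
\int_M (x_i - t_i)\, ds = 0
\quad\Longleftrightarrow\quad
t_i = \frac{1}{\text{Vol}(M)}\int_M x_i\, ds, \qquad 1\leq i\leq n,
\]
so in this case $t$ is the usual centre of mass of $M$, which explains the name centre-of-mass theorem; for general $p$ the condition is nonlinear in $t$.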
\section{Proof of Theorem \ref{thm:closedfe}}
\begin{proof} Let ${M}$ be a closed hypersurface in $\mathbb{R}^n$ and $\Omega$ be the bounded domain such that ${M}=\partial \Omega$. Let $R>0$ be such that $\text{Vol }(\Omega)=\text{ Vol }(B(R))$.
The variational characterization for $\lambda_{1,p}$ is given by
\begin{align*}
\lambda_{1,p}= \text{ inf } \left\lbrace \frac{\int_{M}{\|{\nabla ^M {u}}\|^p}}{\int_M{|u|^p}} : \int_M{|u|^{p-2} u =0, u (\neq 0)\in C^1(M)}\right\rbrace.
\end{align*}
By Theorem \ref{thm:test function}, there exists a point $t \in \overline{\Omega}$ such that
\begin{align*}
\int_M{|x_i|^{p-2} x_i} = 0 \qquad \text{ for } 1 \leq i\leq n,
\end{align*}
where $\left(x_1,\ldots,x_n \right)$ denotes the normal coordinate system centered at $t$.
Therefore, taking each $x_i$ as a test function and summing over $i$, we get, for all $p>1$,
\begin{align}
\label{eqn:closedfep1}
\lambda_{1,p} \int_M {\sum_{i=1}^{n}|{x_i}|^p} \leq \int_M \sum_{i=1}^{n} \|{\nabla ^M {x_i}}\|^p.
\end{align}
Now, we divide the proof of the theorem into the following two cases.\\
$\mathbf{Case \, 1}$. $1 < p \leq 2$. \\
Since $\vert \frac{x_i}{r}\vert \leq 1$, it follows that
\begin{align}
\label{eqn:closedfep4}
|x_i|^p = {r^p \left|\frac{x_i}{r}\right|^p} \geq {r^p \left|{\frac{x_i}{r}}\right|^2 } \quad \text{ for } 1 \leq i\leq n.
\end{align}
Therefore,
\begin{align*}
r^{p} = r^{p}\sum_{i=1}^{n} \left|\frac{x_i}{r}\right|^2 \leq r^{p}\sum_{i=1}^{n} \left|\frac{x_i}{r}\right|^{p} = \sum_{i=1}^{n} {|x_i|^p}.
\end{align*}
For $1 < p < 2$, using H\"{o}lder's inequality, we obtain
\begin{align*}
\sum_{i=1}^{n} {\|{\nabla ^M {x_i}}\|^p} \leq \left( {\sum_{i=1}^{n} \|{\nabla ^M {x_i}}\|^2 }\right) ^ \frac{p}{2} n^\frac{2-p}{2}.
\end{align*}
This combining with Lemma \ref{lem:lemma3} gives
$$
\sum_{i=1}^{n} {\|{\nabla ^M {x_i}}\|^p}\leq (n-1)^{\frac{p}{2}} n^\frac{2-p}{2}.
$$
Observe that the above inequality is also true for $p=2$.
By substituting the above values in inequality \eqref{eqn:closedfep1}, we get
\begin{align}
\label{eqn:closedfep5}
\lambda_{1,p} {\int_M {r^p}} \leq (n-1)^{\frac{p}{2}} \ {n^\frac{2-p}{2}} \, \text{Vol}(M).
\end{align}
By substituting ${\int_M {r^{p}}} \geq {R^p} \,\text{Vol}(S(R)) $ from Lemma \ref{lem:lemma1} in the above inequality, we get
$$
\lambda_{1,p} \, {R^p} \, \text{Vol}(S(R)) \leq (n-1)^{\frac{p}{2}} \, {n^\frac{2-p}{2}} \, \text{Vol}(M). $$
As a consequence, since $\lambda_1(S(R))=\frac{n-1}{R^{2}}$, we have
$$
\lambda_{1,p} \leq {n}^\frac{2-p} {2}\, {\lambda_1(S(R))}^\frac{p}{2} \, \left(\frac{\text{Vol}(M)}{\text{Vol}(S(R))}\right).
$$
This proves Theorem \ref{thm:closedfe} for $1 < p \leq 2$.
Equality in (\ref{eqn:closedfe1}) implies equality in Lemma \ref{lem:lemma1} and equality in (\ref{eqn:closedfep4}), which implies that $M$ is a geodesic sphere of radius $R$ and $p=2$. \\
$\mathbf{Case \, 2}$. $p \geq 2$. \\
For $p > 2$,
by H\"{o}lder's inequality, we have
\begin{align*}
\sum_{i=1}^{n} {|x_i|^2} &\leq \left( {\sum_{i=1}^{n} \left( |x_i|^2 \right) ^ \frac{p}{2}}\right) ^ \frac{2}{p} n^\frac{p-2}{p}.
\end{align*}
Therefore,
\begin{align}
\label{eqn:closedfep2}
n^\frac{2-p}{2} {r}^p &\leq \sum_{i=1}^{n} {|x_i|^p}.
\end{align}
Observe that equality holds in the above inequality for $p=2$, so (\ref{eqn:closedfep2}) holds for $p \geq 2$.
Now we estimate $\sum_{i=1}^{n} \|{\nabla ^M {x_i}}\|^p$. Since $\frac{p}{2} \geq 1$ and $\|{\nabla ^M {x_i}}\|^2 \geq 0$, for each $1 \leq i \leq n$, it follows from Lemma \ref{lem:lemma2} that
\begin{align} \nonumber
\sum_{i=1}^{n} \|{\nabla ^M {x_i}}\|^p &= {\sum_{i=1}^{n} \left( \|{\nabla ^M {x_i}}\|^2\right) ^\frac{p}{2}} \\ \nonumber
&\leq \left( \sum_{i=1}^{n} \|{\nabla ^M {x_i}}\|^2\right) ^\frac{p}{2} \\ \label{eqn:closedfep6}
&= {(n-1)}^{\frac{p}{2}}.
\end{align}
The last equality follows from Lemma \ref{lem:lemma3}. By substituting values from (\ref{eqn:closedfep2}) and (\ref{eqn:closedfep6}) in (\ref{eqn:closedfep1}), we get
\begin{equation}
\label{eqn:closedfep3}
\lambda_{1,p}\,{n^\frac{2-p}{2}}{\int_M {r^p}} \leq (n-1)^\frac{p}{2} \, \text{Vol}(M).
\end{equation}
By substituting ${\int_M {r^{p}}} \geq {R^p} \,\text{Vol}(S(R)) $ from Lemma \ref{lem:lemma1} in the above inequality, we have
$$
\lambda_{1,p}\,{n^\frac{2-p}{2}}\,{R^p} \,\text{Vol}(S(R)) \leq (n-1)^\frac{p}{2} \, \text{Vol}(M).$$
Therefore, again using $\lambda_1(S(R))=\frac{n-1}{R^{2}}$,
$$
\lambda_{1,p} \leq {n}^\frac{p-2} {2}\, {\lambda_1(S(R))}^\frac{p}{2} \, \left(\frac{\text{Vol}(M)}{\text{Vol}(S(R))}\right).
$$
This proves Theorem \ref{thm:closedfe} for $p \geq 2 $.
If equality holds in (\ref{eqn:closedfe1}), then equality holds in \eqref{eqn:lemma1} and also in \eqref{eqn:closedfep2}. Equality in \eqref{eqn:lemma1} implies that $M$ is a geodesic sphere of radius $R$. Equality in \eqref{eqn:closedfep2} forces $p=2$: if $p > 2$, equality in \eqref{eqn:closedfep2} would imply that
$|x_i| = c$ on $M$, for some constant $c$ and all $1\leq i\leq n$,
so that each point of $M$ would be of the form $(\pm c,\pm c,\ldots,\pm c)$. This contradicts our assumption that $M$ is the boundary of a bounded domain $\Omega$.
\qedhere
\end{proof}
\section{Proof of Theorem \ref{thm:steklovfe}}
\begin{proof} Let $\Omega$ be a bounded domain in $\mathbb{R}^n$ with smooth boundary $\partial \Omega = M$ and $R>0$ be such that $\text{Vol}(\Omega)=\text{Vol}(B(R))$, where $B(R)$ is a ball of radius $R$. The variational characterization for $\mu_{1,p}$ is given by
\begin{align*}
\mu_{1,p}= \text{ inf } \left\lbrace \frac{\int_\Omega{\|\nabla u\|^p}}{\int_M{|u|^p}} : \int_M{|u|^{p-2} u =0, u(\neq 0) \in C^1(\Omega)}\right\rbrace.
\end{align*}
By Theorem \ref{thm:test function}, there exists a point $t \in \overline{\Omega}$ such that
\begin{align*}
\int_M{|x_i|^{p-2} x_i} = 0, \qquad \text{for all} \quad 1 \leq i \leq n,
\end{align*}
where $\left(x_1,x_2,x_3,\ldots,x_n \right)$ denotes the normal coordinate system centered at $t$.
By considering each ${x_i}$ as a test function, we have
\begin{align}
\label{eqn:steklovfep1}
\mu_{1,p} \int_M {\sum_{i=1}^{n}|{x_i}|^p} \leq \int_\Omega \sum_{i=1}^{n} \|\nabla x_i\|^p.
\end{align}
Now we consider the following two cases to prove the theorem. \\
$\mathbf{Case \, 1}$. $1 < p \leq 2$. \\
By a similar argument as in (\ref{eqn:closedfep4}), we get
\begin{align*}
r^p &\leq \sum_{i=1}^{n} {|x_i|^p}.
\end{align*}
By H\"{o}lder's inequality,
\begin{align*}
\sum_{i=1}^{n} {\|\nabla {x_i}\|^p} &\leq \left( {\sum_{i=1}^{n} \|\nabla{x_i}\|^2 }\right) ^ \frac{p}{2} n^\frac{2-p}{2} = n.
\end{align*}
By substituting the above values in \eqref{eqn:steklovfep1}, we get
\begin{align*}
\mu_{1,p} \int_{M} {r}^p \leq n \ \text{ Vol}(\Omega).
\end{align*}
By substituting ${\int_M {r^{p}}} \geq {R^p} \,\text{Vol}(S(R)) $ from Lemma \ref{lem:lemma1}, we have
\begin{align*}
\mu_{1,p} \ {R}^p \, \text{Vol}(S(R)) \leq n \, \text{Vol}(\Omega).
\end{align*}
Since $ \text{Vol}(\Omega)= \text{Vol}(B(R))$ and $\frac{\text{ Vol}(B(R))}{\text{ Vol}(S(R))} = \frac{R}{n}$, we get
\begin{align*}
\mu_{1,p} \leq \frac{1}{R^{p-1}}.
\end{align*}
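For completeness, the ratio used above follows from the standard volume formulas for Euclidean balls and spheres:
\[
\text{Vol}(B(R))=\int_{0}^{R} \text{Vol}(S(r))\, dr=\int_{0}^{R} r^{\,n-1}\,\text{Vol}(\mathbb{S}^{n-1})\, dr=\frac{R}{n}\,\text{Vol}(S(R)).
\]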
$\mathbf{Case \, 2}$. $p \geq 2$. \\
From (\ref{eqn:closedfep2}), we have
\begin{align*}
n^\frac{2-p}{2} {r}^p &\leq \sum_{i=1}^{n} {|x_i|^p} \quad \text{ for all } \, p \geq 2.
\end{align*}
By Lemma \ref{lem:lemma2}, we have
\begin{align*}
\sum_{i=1}^{n} \|\nabla x_i\|^p &= {\sum_{i=1}^{n} \left( \|\nabla x_i\|^2\right) ^\frac{p}{2}} \\
&\leq \left( \sum_{i=1}^{n} \|\nabla x_i\|^2\right)^\frac{p}{2} \\
&= {n}^{\frac{p}{2}}.
\end{align*}
By substituting the above values in (\ref{eqn:steklovfep1}), we get
\begin{align*}
\mu_{1,p} \ n^\frac{2-p}{2} \int_{M} {r}^p \leq n^\frac{p}{2} \ \text{ Vol }(\Omega).
\end{align*}
We use Lemma \ref{lem:lemma1} again to get
\begin{align*}
\mu_{1,p} \ n^\frac{2-p}{2} R^p \ \text{Vol}(S(R)) \leq n^\frac{p}{2} \ \text{Vol}(\Omega).
\end{align*}
Since $\text{Vol}(\Omega) = \text{Vol}(B(R))$ and $\frac{\text{Vol}(B(R))}{\text{Vol}(S(R))} = \frac{R}{n}$, the above inequality becomes
\begin{align*}
\mu_{1,p} \leq \frac{n^{p-2}}{R^{p-1}}.
\end{align*}
The equality case follows in the same way as in Theorem \ref{thm:closedfe}.
This completes the proof.
\end{proof}
\section*{Acknowledgment}
I am very grateful to Prof. G. Santhanam for his support and constructive suggestions which led to improvements in the article.
\end{document}
\begin{document}
\title{Deriving quantum constraints and tight uncertainty relations}
\author{Arun Sehrawat}
\email[Email: ]{[email protected]}
\affiliation{Department of Physical Sciences,
Indian Institute of Science Education \& Research Mohali,
Sector 81 SAS Nagar, Manauli PO 140306,
Punjab, India}
\altaffiliation[Current address: ]
{Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211019, India.}
\begin{abstract}
We present a systematic procedure to obtain all necessary and sufficient (quantum) constraints on the expectation values for any set of qudit's operators.
These constraints arise from the Hermiticity, normalization, and
positivity of a statistical operator through Born's rule, and they analytically define an allowed region.
A point outside the admissible region does not correspond to any quantum state, whereas every point in it comes from a quantum state.
For a set of observables, the allowed region is a compact and convex set in a real space, and all its extreme points come from pure quantum states.
By defining appropriate concave functions on the permitted region and
then finding their absolute minimum at the extreme points, we obtain different tight uncertainty relations for qubit and spin observables.
In addition, quantum constraints are explicitly given for the Weyl operators and the spin
observables.
\end{abstract}
\maketitle
\section{Introduction}\label{sec:Intro}
Von Neumann described a
state for a quantum system with
a density (statistical) operator on the system's Hilbert space
\cite{von-Neumann27,von-Neumann55,Fano57}.
A valid density operator must be \emph{Hermitian}, \emph{positive semi-definite}, and of \emph{unit trace}.
Born provided a rule \cite{Born26,Wheeler83} to compute the expectation values for any set of operators from a given statistical operator.
Naturally, all necessary and sufficient constraints---called \emph{quantum constraints} (QCs)---on the expectation values emerge from the three conditions on a density operator.
In Sec.~\ref{sec:QC}, a systematic procedure to derive the QCs is presented, where a result from \cite{Kimura03,Byrd03} is used for the positivity of a statistical operator (or simply a state).
To transfer the conditions from a state onto
the expectation values, one needs
the Born rule and
an operator-basis to represent operators.
One can choose any basis; the procedure in Sec.~\ref{sec:QC} is basis independent.
In~\cite{Kimura03,Byrd03}, generators of the special unitary group (which, together with the identity operator, constitute an orthogonal operator-basis) are utilized, and the QCs on their average values are achieved by applying the Lie algebra.
Alternatively, one can start with an orthonormal basis of
the system's Hilbert space, and with all possible ``ket-bra'' pairs one can
assemble a \emph{standard} operator-basis.
Then, one can exploit the matrix mechanics---developed by Heisenberg, Born, Jordan, and Dirac \cite{Heisenberg25,Born25,Born25-b,Dirac25,Waerden68}---to reach the QCs as demonstrated in Sec.~\ref{sec:QC}.
The QCs and uncertainty relations (URs) are two main strands of this paper.
Heisenberg pioneered the first UR \cite{Heisenberg27,Wheeler83}
for the position and momentum operators.
A general version of Heisenberg's relation for a pair of operators was introduced by Robertson \cite{Robertson29} and then improved by
Schr\"{o}dinger \cite{Schrodinger32}.
Deutsch \cite{Deutsch83}, Kraus \cite{Kraus87}, and Maassen and
Uffink \cite{Maassen88} formulated URs by employing entropy, rather than the standard deviation used in \cite{Robertson29,Schrodinger32}, as a measure of uncertainty.
For an overview, we point to \cite{Wehner10,Bialynicki11,Coles17} for entropic URs, while \cite{Folland97,Busch07,Busch14} are more in the spirit of Heisenberg's UR.
Throughout the article, we are considering a $d$-level quantum system (qudit).
For a set of \textsc{n} observables (Hermitian operators),
the QCs bound an \emph{allowed} region $\mathcal{E}$ of the expectation values
in the real space $\mathbb{R}^{\textsc{n}}$.
If one defines a suitable concave function on $\mathcal{E}$
to measure a combined uncertainty as described in Sec.~\ref{sec:QC}, then
creating a \emph{tight} UR becomes an optimization problem where at most ${2(d-1)}$ parameters are involved (for example, see \cite{Sehrawat17,Riccardi17}).
A UR is called tight if there exists a quantum state that
saturates it.
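As a generic illustration of this optimization viewpoint (the concrete uncertainty measures are specified later; nothing here depends on the particular choice), consider the sum of variances of two observables written entirely in terms of expectation values,
\[
\Delta^2 A+\Delta^2 B=\langle A^2\rangle+\langle B^2\rangle-\langle A\rangle^2-\langle B\rangle^2 .
\]
Viewed as a function of the point $\left(\langle A\rangle,\langle B\rangle,\langle A^2\rangle,\langle B^2\rangle\right)$ in the corresponding allowed region, it is concave, being a linear term plus the concave term $-\langle A\rangle^2-\langle B\rangle^2$; its absolute minimum over that compact convex region is therefore attained at an extreme point, that is, at a point coming from a pure state.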
With this, we close Sec.~\ref{sec:QC} and try its results in the subsequent sections.
In Sec.~\ref{sec:unitary-basis}, we apply the
general methodology of Sec.~\ref{sec:QC} to the \emph{unitary} operator basis, which is known due to Weyl and Schwinger~\cite{Weyl32,Schwinger60}.
In the case of a prime (power) dimension $d$, the unitary-basis can be divided into ${d+1}$ disjoint subsets such that all the operators in each subset possess a common eigenbasis \cite{Bandyopadhyay02,Englert01}.
These ${d+1}$ eigenbases form a maximal set of mutually unbiased bases (MUBs) \cite{Ivanovic81,Wootters89,Durt10} of the Hilbert space.
In Sec.~\ref{sec:unitary-basis}, QCs
for the Weyl operators as well as for MUBs are presented.
There we arrive at the same \emph{quadratic} QC that is conceived in \cite{Larsen90,Ivanovic92,Klappenecker05}.
Using the quadratic QC, tight URs for the MUBs are achieved in \cite{Sanchez-Ruiz95,Ballester07}, and their
minimum uncertainty states are reported in \cite{Wootters07,Appleby14}.
In the case of ${d\geq3}$, there also exists a \emph{cubic} QC.
In Sec.~\ref{sec:qutrit}, ${d=3}$, QCs
are explicitly given for the Weyl operators of a qutrit and for a set of spin-1 operators.
In addition, a number of tight URs and certainty relations (CRs) are delivered for the spin operators.
By the way, the QCs for the spin-1 operators can also be achieved from \cite{Kimura03,Byrd03}.
In Sec.~\ref{sec:spin-j}, tight URs and CRs are obtained
for the angular momentum operators ${J_x,J_y,}$ and $J_z$, where the quantum number $\mathsf{j}$
can be
${\tfrac{1}{2}, 1,\tfrac{3}{2},2,\cdots\,}$.
The paper is concluded in Sec.~\ref{sec:conclusion}, where a list of our main contributions is prepared.
Appendix~\ref{sec:qubit}
offers a comprehensive analysis for a qubit ${(d=2)}$ that
includes the Schr\"{o}dinger UR \cite{Schrodinger32}, and URs for the
symmetric informationally complete positive operator valued measure (SIC-POVM) \cite{Rehacek04,Appleby09} are presented there.
In the case of a qubit, it is a known result that $\mathcal{E}$ will be an ellipsoidal region for any number of observables (measurement settings) \cite{Kaniewski14}, and it is also manifested here.
Appendixes~\ref{subsec:2settings} and \ref{subsec:3settings} separately deal with two and three measurement settings.
In the case of two settings, the ellipsoid transfigures into an ellipse,
which also appears in \cite{Lenard72,Larsen90,Kaniewski14,Abbott16,Sehrawat17}.
It is revealed in \cite{Sehrawat17} that several tight CRs and URs known from \cite{Larsen90,Busch14-b,Garrett90,Sanchez-Ruiz98,Ghirardi03,Bosyk12,Vicente05,Zozor13,Deutsch83,Maassen88,Rastegin12}
can be achieved by exploiting the ellipse.
In this article, we deal with Hilbert space $\mathscr{H}_d$ of kets
and Hilbert-Schmidt space $\mathscr{B}(\mathscr{H}_d)$ of operators,
and their bases are differently symbolized by $\mathcal{B}$ and $\mathfrak{B}$, respectively, to avoid any confusion.
\section{Quantum constraints, allowed region, and uncertainty measures}\label{sec:QC}
Quantum state for a qudit can be described by a statistical operator $\rho$ \cite{von-Neumann27,von-Neumann55,Fano57,Bengtsson06}, on the system's Hilbert space $\mathscr{H}_d$, such that
\begin{eqnarray}
\label{Herm-rho}
\rho&=&\rho^\dagger\quad\ \; \text{(Hermiticity)}\,,\\
\label{norm-rho}
\text{tr}(\rho)&=&1\qquad \text{(normalization)}\,,\qquad\text{and}\\
\label{pos-rho}
0&\leq& \rho\qquad\text{(positivity)}\,.
\end{eqnarray}
The dagger $\dagger$ denotes the adjoint.
It has been shown in \cite{Kimura03,Byrd03} that
an operator $\rho$ fulfills \eqref{pos-rho} if and only if it obeys
\begin{eqnarray}
\label{0<=S_n}
0&\leq&S_n \quad \mbox{for all} \quad
1\leq n\leq d\,,
\qquad\quad\text{where}\\
\label{S_n}
S_n&=&\tfrac{1}{n}
\sum_{m=1}^{n}(-1)^{m-1}\,
\text{tr}(\rho^{m})\,S_{n-m}
\end{eqnarray}
commencing with
${S_1=\text{tr}(\rho),}$ and ${S_0:=1}$.
It is advantageous to use inequalities \eqref{0<=S_n} between real numbers
rather than the single operator-inequality \eqref{pos-rho}; see also \cite{+veMat}.
Due to normalization~\eqref{norm-rho}, the first condition
${0\leq S_1=1}$ holds naturally.
In a nutshell, an operator $\rho$ on $\mathscr{H}_d$ represents a legitimate quantum state if and only if it complies with
\eqref{Herm-rho}, \eqref{norm-rho}, and \eqref{0<=S_n} for ${2\leq n\leq d}$.
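For concreteness, writing out the recursion \eqref{S_n} with ${S_0=1}$ and ${S_1=\text{tr}(\rho)=1}$, the first two nontrivial positivity conditions read
\[
S_2=\tfrac{1}{2}\left[1-\text{tr}(\rho^2)\right]\geq 0
\qquad\text{and}\qquad
S_3=\tfrac{1}{6}\left[1-3\,\text{tr}(\rho^2)+2\,\text{tr}(\rho^3)\right]\geq 0\,.
\]
In particular, ${0\leq S_2}$ is just the purity bound ${\text{tr}(\rho^2)\leq 1}$, and for ${d=2}$ it is the only condition that needs to be imposed beyond \eqref{Herm-rho} and \eqref{norm-rho}.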
The set of all bounded operators on $\mathscr{H}_d$ forms a $d^2$-dimensional Hilbert-Schmidt space $\mathscr{B}(\mathscr{H}_d)$ endowed with
the inner product
\begin{equation}
\label{HS-inner-pro}
\lgroup A,B\,\rgroup_\textsc{hs}=
\text{tr}(A^\dagger B)\,,
\quad\mbox{where}\quad
A,B\in\mathscr{B}(\mathscr{H}_d)\,.
\end{equation}
Suppose
\begin{equation}
\label{HS-basis}
\mathfrak{B}:=
\big\{\Gamma_\gamma\big\}
_{\gamma=1}^{d^2}\,,\qquad
\lgroup \Gamma_{\gamma'},\Gamma_\gamma\,\rgroup_\textsc{hs}=
\delta_{\gamma,\gamma'}\,,
\end{equation}
is an orthonormal basis of $\mathscr{B}(\mathscr{H}_d)$ \cite{Schwinger60,Kimura03,Byrd03}, where
${\delta_{\gamma,\gamma'}}$ is the Kronecker delta function.
Now we can resolve every operator ${A\in\mathscr{B}(\mathscr{H}_d)}$
in the basis $\mathfrak{B}$ as \cite{Fano57}
\begin{equation}
\label{A-resolution}
A=\sum_{\gamma=1}^{d^2}
\texttt{a}_\gamma\,\Gamma_\gamma\,,
\quad\mbox{where}\quad
\texttt{a}_\gamma=\lgroup \Gamma_\gamma,A\,\rgroup_\textsc{hs}
\end{equation}
are complex numbers.
In this way, we also have the resolution of
\begin{equation}
\label{rho-resolution}
\rho=\sum_{\gamma=1}^{d^2}
\texttt{r}_\gamma\,\Gamma_\gamma\,,
\quad\mbox{where}\quad
\texttt{r}_\gamma=\lgroup \Gamma_\gamma,\rho\,\rgroup_\textsc{hs}\,.
\end{equation}
Born introduced the rule \cite{Born26} (see also \cite{von-Neumann55})
\begin{eqnarray}
\label{Born rule-1}
\langle A\rangle_{\rho}&=&
\text{tr}(\rho\, A)=\lgroup A^\dagger,\rho\rgroup_\textsc{hs}\\
\label{Born rule-2}
&=&\lgroup\rho,A\rgroup_\textsc{hs}
\end{eqnarray}
to calculate the average value of an operator $A$ by taking the statistical operator $\rho$.
Definition~\eqref{HS-inner-pro} of the inner product
is exploited to reach the last term in~\eqref{Born rule-1}, and through Hermiticity~\eqref{Herm-rho}, we get~\eqref{Born rule-2}.
By the rule \eqref{Born rule-1}, one finds
\begin{equation}
\label{r}
\texttt{r}_\gamma=
\langle \Gamma_\gamma^{\,\dagger} \rangle_\rho
=\overline{\langle \Gamma_\gamma \rangle}_\rho\,,
\end{equation}
where the last equality is due to the conjugate symmetry
${\lgroup \Gamma,\rho\rgroup_\textsc{hs}=
\overline{\lgroup \rho,\Gamma\rgroup}_\textsc{hs}}$
and \eqref{Born rule-2}.
The overline designates the complex conjugation.
The set of equations ${\langle \Gamma_\gamma^{\,\dagger} \rangle=\overline{\langle \Gamma_\gamma \rangle}}$ for every $\gamma$,
or ${\langle A^{\dagger} \rangle=\overline{\langle A \rangle}}$
for every ${A\in\mathscr{B}(\mathscr{H}_d)}$,
is equivalent to Hermiticity~\eqref{Herm-rho} of $\rho$.
Using \eqref{A-resolution} and \eqref{rho-resolution}, we can express \eqref{Born rule-2} as
the standard inner product
\begin{equation}
\label{<A>-1}
\langle A\rangle_\rho
=\sum_{\gamma=1}^{d^2}\,
\overline{\texttt{r}}_{\gamma}\;\texttt{a}_\gamma
=\texttt{R}^\dagger\texttt{A}
\end{equation}
between
${\texttt{R}:=(\texttt{r}_1,\cdots,\texttt{r}_{d^2})^\intercal}$
and
${\texttt{A}:=(\texttt{a}_1,\cdots,\texttt{a}_{d^2})^\intercal}$ \cite{Fano57,von-Neumann27}, where $\intercal$ stands for the transpose.
The column vectors ${\texttt{A},\texttt{R}\in\mathbb{C}^{d^2}}$
are the numerical representations of ${A,\rho\in\mathscr{B}(\mathscr{H}_d)}$
in basis~\eqref{HS-basis}, whereas expectation value \eqref{<A>-1} does not depend on the basis $\mathfrak{B}$ \cite{unitary-eq}.
Suppose ${A=A^\dagger}$ (depicts an observable) is a Hermitian operator, and
\begin{equation}
\label{A}
A=\sum_{l=1}^{d}
a_l\,
|a_l\rangle\langle a_l|
\end{equation}
is its spectral decomposition.
Its expectation value [via \eqref{Born rule-1}]
\begin{equation}
\label{<A>-pro}
\langle A\rangle_\rho=
\sum_{l=1}^{d}a_l\,p_l\,,
\qquad
p_l=
\big\langle\,|a_l\rangle\langle a_l|\,\big\rangle_\rho\,,
\end{equation}
can be estimated by performing measurements in its eigenbasis
${\{|a_l\rangle\}_{l=1}^{d}}$.
$p_l$ is the probability of getting the outcome, eigenvalue, $a_l$.
Due to \eqref{norm-rho} and \eqref{pos-rho}, one can realize
\begin{eqnarray}
\label{p-const1}
1&=&\sum_{l=1}^{d}
p_l=\text{tr}(\rho)\quad \mbox{and} \\
\label{p-const2}
0&\leq&p_l=
\langle a_l|\,\rho\,|a_l\rangle
\quad \mbox{for all} \quad
1\leq l\leq d\,.
\end{eqnarray}
In \eqref{p-const1}, the completeness relation
${\textstyle\sum\nolimits_{l=1}^{d}|a_l\rangle\langle a_l|=I}$ plays a role, where $I$ is the identity operator.
The set of all probability vectors
${\vec{p}:=(p_1,\cdots,p_d)}$
constitutes a probability space $\Omega_a$, which is the standard ${(d-1)}$-simplex in the $d$-dimensional real vector space $\mathbb{R}^d$ defined by \eqref{p-const1} and \eqref{p-const2} \cite{Bengtsson06,Sehrawat17}.
One can perceive ${\langle A\rangle_\rho}$ in \eqref{<A>-pro} as a linear function from $\Omega_a$ into $\mathbb{R}$ and then can recognize
\begin{equation}
\label{<A>in}
\langle A\rangle\in[a_\text{min}\,,\,a_\text{max}]\,,
\end{equation}
where endpoints of the interval are the smallest $a_\text{min}$
and the largest $a_\text{max}$ eigenvalues of $A$.
Every classical (discrete) probability distribution also follows
\eqref{p-const1} and \eqref{p-const2} \cite{Bengtsson06}.
The QCs become evident when we take two or more \emph{incompatible} observables (measurements); see below.
It is one of the most striking features of quantum physics, one with no classical analog, that physically distinct measurements do exist, and one cannot estimate all the expectation values listed in $\overline{\texttt{R}}$ in \eqref{expt-values} by using a single setting for projective measurements \cite{SIC-POVM}.
One requires at least ${d+1}$ settings.
Moreover, two measurement settings can be so different that if one always gets a definite outcome in one setting, (s)he can get totally random results in the other setting \cite{Ivanovic81,Wootters89}.
Such settings correspond to \emph{complementary} operators \cite{Schwinger60,Kraus87} that are building blocks of the unitary-basis presented in Sec.~\ref{sec:unitary-basis}.
Now let us take \textsc{n} number of operators: ${A,B,\cdots,C}$.
We can build a single matrix equation
\begin{equation}
\label{expt-values}
\underbrace{\begin{pmatrix}
\langle A\rangle_\rho\\
\langle B\rangle_\rho\\
\vdots\\
\langle C\rangle_\rho\\
\end{pmatrix}}_{\displaystyle\textsf{E}}
=
\underbrace{\begin{pmatrix}
\texttt{a}_{\scriptscriptstyle 1} &
\texttt{a}_{\scriptscriptstyle 2} &
\cdots &
\texttt{a}_{\scriptscriptstyle d^2} \\
\texttt{b}_{\scriptscriptstyle 1} &
\texttt{b}_{\scriptscriptstyle 2} &
\cdots &
\texttt{b}_{\scriptscriptstyle d^2} \\
\vdots & \vdots & \ddots & \vdots \\
\texttt{c}_{\scriptscriptstyle 1} &
\texttt{c}_{\scriptscriptstyle 2} &
\cdots &
\texttt{c}_{\scriptscriptstyle d^2} \\
\end{pmatrix}}_{\displaystyle \textbf{M} }
\underbrace{\begin{pmatrix}
\overline{\texttt{r}}_{\scriptscriptstyle 1} \\
\overline{\texttt{r}}_{\scriptscriptstyle 2} \\
\vdots\\
\overline{\texttt{r}}_{\scriptscriptstyle d^2} \\
\end{pmatrix}}_{\displaystyle\overline{\texttt{R}}}
\end{equation}
by combining equations such as \eqref{<A>-1}.
Equation \eqref{expt-values}
is nothing but the numerical representation of Born's rule~\eqref{Born rule-2} in basis~\eqref{HS-basis}.
We present this article by keeping the experimental scenario,
\begin{equation}
\label{expt-situ}
\parbox{0.85\columnwidth}
{
a finite number of independent qudits are identically prepared in a quantum state $\rho$, and then individual qudits are measured using different settings for ${A,B,\cdots,C}$,
}
\end{equation}
in mind, where every expectation value is drawn from the same $\rho$.
Thus the subscript $\rho$ is omitted from
$\langle \ \rangle_{\rho}$ at some places for simplicity of notation.
In other experimental situations---(\textit{i}) where one wants to entangle the qudit of interest to an ancillary system and then wants to perform a joint measurement or (\textit{ii}) where one desires to execute sequential measurements on the same qudit \cite{Busch14}---one can also adopt the above formalism.
There one may need to keep track of how the initial qudit's state gets transformed after an entangling operation or a measurement.
At each stage of an experiment, a $\rho$ must respect \eqref{Herm-rho}, \eqref{norm-rho}, and \eqref{0<=S_n}, and the mean values can be obtained by \eqref{expt-values}.
Matrix equation~\eqref{expt-values} has three parts $\displaystyle\overline{\texttt{R}}$, \textbf{M}, and \textsf{E}:
\begin{itemize}
\item
Conditions~\eqref{Herm-rho}, \eqref{norm-rho}, and \eqref{0<=S_n} on a density operator $\rho$ enter through $\displaystyle\overline{\texttt{R}}$ and emerge as the QCs on the expectation values listed in \textsf{E}.
In experimental situation~\eqref{expt-situ}, all the knowledge about the state preparation goes into the column $\displaystyle\overline{\texttt{R}}$.
\item
From top to bottom, rows in the ${\textsc{n}\times d^{\,2}}$ matrix
\textbf{M} completely specify ${A,B,\cdots,C}$. So \textbf{M} holds all, and only, the information about measurement settings.
\item
Conditions \eqref{Herm-rho}, \eqref{norm-rho}, and \eqref{0<=S_n} as well as the mean values in \textsf{E} do not depend on the choice of basis \cite{unitary-eq}.
Therefore, the QCs on $\langle A\rangle,\langle B\rangle,\cdots,\langle C\rangle$ will be independent of the basis $\mathfrak{B}$.
So one can adopt any basis that suits him or her best.
A basis only facilitates the transfer of constraints from a
quantum state $\rho$ onto the expectation values in \textsf{E}.
\end{itemize}
Basically, one can achieve the QCs via a two-step procedure:
\begin{enumerate}
\item We need to express
conditions \eqref{Herm-rho}, \eqref{norm-rho}, and \eqref{0<=S_n} for ${2\leq n\leq d}$
in terms of
${\{\overline{\texttt{r}}_\gamma\}_{\gamma=1}^{d^2}}$.
This delivers the QCs on mean values~\eqref{r} of the basis elements.
\item
Then, we acquire the QCs on
$\langle A\rangle,\langle B\rangle,\cdots,\langle C\rangle$
by matrix equation~\eqref{expt-values}.
\end{enumerate}
Let us focus on Step~1.
We already have condition~\eqref{Herm-rho}
in terms of
${\langle\Gamma_\gamma\rangle_\rho}$, see \eqref{r}.
To write the remaining conditions \eqref{norm-rho} and \eqref{0<=S_n} for ${2\leq n\leq d}$ in
${\langle\Gamma_\gamma\rangle_\rho}$
terms, we need to compute
\begin{equation}
\label{tr(rho^m)}
\text{tr}(\rho^m)=
\sum_{\gamma_1}\cdots
\sum_{\gamma_m}\,
\texttt{r}_{\gamma_1}\cdots\texttt{r}_{\gamma_m}
\text{tr}\left(\,\Gamma_{\gamma_1}\cdots\Gamma_{\gamma_m}\right)
\end{equation}
for every ${1\leq m\leq d}$.
One can view ${\text{tr}(\rho^m)}$ as a homogeneous polynomial of degree $m$, where average values~\eqref{r} are variables, and the constants
${\text{tr}\left(\,\Gamma_{\gamma_1}\cdots\Gamma_{\gamma_m}\right)}$ are determined by basis~\eqref{HS-basis} only.
Hence $S_n$ of \eqref{S_n} is an $n$-degree polynomial, and
${0\leq S_n}$ [see \eqref{0<=S_n}] leads to an $n$-degree QC.
In \cite{Kimura03,Byrd03}, generators of the special unitary group
$SU(d)$---that with the identity operator compose an orthogonal basis of $\mathscr{B}(\mathscr{H}_d)$---are taken, and $\text{tr}(\rho^m)$ is obtained by using the Lie algebra of $SU(d)$.
The generators are ${d^{\,2}-1}$ traceless Hermitian operators, thus we call this basis the \emph{Hermitian}-basis [for ${d=2,3}$, see Appendix~\ref{sec:qubit} and Sec.~\ref{sec:qutrit}].
If all the \textsc{n} operators $A,B,\cdots,C$ are Hermitian operators, then it is better
to choose a Hermitian-basis because every number in \eqref{expt-values} will be a real number.
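A minimal illustration for ${d=2}$ (a qubit; see Appendix~\ref{sec:qubit} for the comprehensive treatment): taking the Hermitian-basis built from the identity and the three Pauli operators, every state can be written as ${\rho=\tfrac{1}{2}\bigl(I+\sum_{\gamma}\langle\sigma_\gamma\rangle\,\sigma_\gamma\bigr)}$ with ${\gamma\in\{x,y,z\}}$, and the single nontrivial condition ${0\leq S_2}$ turns into the Bloch-ball constraint
\[
\langle\sigma_x\rangle^2+\langle\sigma_y\rangle^2+\langle\sigma_z\rangle^2\leq 1\,,
\]
which is the complete set of QCs on these three mean values.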
Since the state space
\begin{equation}
\label{set-of-states}
\mathcal{S}=\big\{\rho\in\mathscr{B}(\mathscr{H}_d)\ |\
\rho\text{ obeys \eqref{Herm-rho}, \eqref{norm-rho}, and
\eqref{0<=S_n}} \big\}
\end{equation}
is a compact and convex set \cite{Bengtsson06}, the corresponding collection
of ${\texttt{R}=\overline{\texttt{R}}}$
forms a compact and convex set in $\mathbb{R}^{d^2-1}$ as the mapping
${\rho\leftrightarrow\texttt{R}}$ is a homeomorphism \cite{Rudin91}.
Every qudit's state $\rho$ is completely specified by ${d^{\,2}-1}$ real numbers in ${\texttt{R}}$ \cite{Kimura03}, where one of its components is fixed by normalization condition~\eqref{norm-rho}, that is, $\textstyle\sum\nolimits_{\gamma=1}^{d^2}
\texttt{r}_\gamma\,\text{tr}(\Gamma_\gamma)=1$.
Next one can
view \eqref{expt-values} as a linear transformation from $\mathbb{R}^{d^2-1}$ to $\mathbb{R}^{\textsc{n}}$.
Such a transformation is always continuous, and it maps a compact and convex set in $\mathbb{R}^{d^2-1}$ to a compact and convex set in $\mathbb{R}^{\textsc{n}}$
\cite{Rudin76,con-to-con}.
Therefore, for \textsc{n} observables (Hermitian operators), the set of expectation values
\begin{equation}
\label{set-of-expt}
\mathcal{E}:=\big\{\,\textsf{E}\ |\
\rho\in\mathcal{S}\, \big\}
\end{equation}
will be a \emph{compact} and \emph{convex} set
[for example, see Figs. \ref{fig:region-2-pojs}, \ref{fig:regions-MUBs}, \ref{fig:regions}, \ref{fig:ellip-SUR}, and \ref{fig:regions-SIC-POVM}] in
a hyperrectangle
\begin{equation}
\label{hyperrectangle}
\mathcal{H}:=
[a_\text{min}, a_\text{max}]\times
[b_\text{min}, b_\text{max}]\times\cdots\times
[c_\text{min}, c_\text{max}]\subset\mathbb{R}^{\textsc{n}}
\end{equation}
described by the Cartesian product of the closed intervals,
whose endpoints are the minimum and maximum eigenvalues of the operators.
$\mathcal{E}$ is also known as the
quantum convex support \cite{Weis11}.
Furthermore, each extreme point of $\mathcal{E}$ corresponds to a pure state that is an extreme point of $\mathcal{S}$.
Note that
Eq.~\eqref{expt-values} maps $\mathcal{S}$ \emph{onto} $\mathcal{E}$ (via $\rho\leftrightarrow\texttt{R}\rightarrow \textsf{E}$) but does not provide
a one-to-one correspondence between the state space $\mathcal{S}$ and $\mathcal{E}$ unless there are $d^{\,2}$ linearly independent operators in the set ${\{A,B,\cdots,C,I\}}$.
In summary, $\mathcal{S}$ is an abstract set; we observe its \emph{image} $\mathcal{E}$
through an experimental scheme such as \eqref{expt-situ}.
The QCs---originate from \eqref{Herm-rho}, \eqref{norm-rho}, and \eqref{0<=S_n} via matrix equation~\eqref{expt-values}---bound the region $\mathcal{E}$.
As the QCs are necessary and sufficient restrictions on the expectation values, any point outside $\mathcal{E}$ does not come from a quantum state, whereas every point in $\mathcal{E}$ corresponds to at least one quantum state. So as a whole $\mathcal{E}$ is the only \emph{allowed} region
in the space of expectation values. Obviously,
one cannot achieve a region smaller than $\mathcal{E}$ without sacrificing a subset of quantum states.
Now we present
all the above material by taking a \emph{standard} operator-basis.
With an orthonormal basis $\mathcal{B}$ of the Hilbert space $\mathscr{H}_d$, where
\begin{eqnarray}
\label{B_i}
\mathcal{B}&:=&
\big\{|j\rangle\,:\,j\in\mathbb{Z}_d\big\}\,,\\
\label{Z_d}
\mathbb{Z}_d&:=&\{\,j\,\}_{j=0}^{d-1}\,,
\qquad\mbox{and}\qquad\\
\label{orth-|j>}
\text{tr}\big(|j\rangle\langle k|\big)&=&
\langle k|j\rangle=\delta_{j,k}\,,
\end{eqnarray}
one can construct the standard operator-basis
\begin{equation}
\label{Stnd-Basis}
\mathfrak{B}_\text{st}:=\big\{|j\rangle\langle k|\,:\,j,k\in\mathbb{Z}_d\big\}
\end{equation}
of $\mathscr{B}(\mathscr{H}_d)$.
Instead of a single index $\gamma$ that runs from $1$ to ${d^{\,2}}$,
here we have two indices $j$ and $k$ for a basis element, each of which runs from $0$ to ${d-1}$. The orthonormality condition
\begin{equation}
\label{orth-|j><k|}
\big\lgroup\, |j'\rangle\langle k'|\,,\,|j\rangle\langle k|\,\big\rgroup_\textsc{hs}
=\langle j'|j\rangle\langle k|k'\rangle
=\delta_{j,j'}\,\delta_{k,k'}
\end{equation}
for $\mathfrak{B}_\text{st}$ is ensured by orthonormality relation \eqref{orth-|j>} of $\mathcal{B}$.
In basis \eqref{Stnd-Basis}, the resolutions of an operator $A$ and of a qudit's state $\rho$ are
\begin{eqnarray}
\label{A-in-stnd-basis}
A&=&
\sum_{j,k\,\in\,\mathbb{Z}_d}
\texttt{a}_{jk}\,|j\rangle\langle k|
\quad\mbox{with}\quad
\texttt{a}_{jk}=\langle j|\,A\,|k\rangle\quad\mbox{and}\qquad\\
\label{rho-in-stnd-basis}
\rho&=&
\sum_{j,k\,\in\,\mathbb{Z}_d}
\texttt{r}_{jk}\,|j\rangle\langle k|
\quad\mbox{with}\quad
\texttt{r}_{jk}=\langle j|\,\rho\,|k\rangle\,,
\end{eqnarray}
respectively.
The above coefficients $\texttt{a}$ and $\texttt{r}$ are obtained through \eqref{A-resolution} and \eqref{rho-resolution}, respectively.
Numerical representation~\eqref{<A>-1} of Born's rule now
becomes
\begin{equation}
\label{<A>-in-stnd-basis-2}
\langle A\rangle
=
\sum_{j,k}\,
\overline{\texttt{r}}_{jk}\,\texttt{a}_{jk}
=
\sum_{j,k}\,
\texttt{r}_{kj}\,\texttt{a}_{jk}\,,
\end{equation}
where the second equality is due to the Hermiticity:
\begin{equation}
\label{r_kj}
\texttt{r}_{jk}=
\big\langle\, |k\rangle\langle j|\, \big\rangle_\rho=
\overline{\texttt{r}}_{kj}
\quad\mbox{for all}\quad
j,k\in\mathbb{Z}_d
\end{equation}
is a manifestation of \eqref{r}.
In standard basis~\eqref{Stnd-Basis}, matrix equation~\eqref{expt-values} takes the form
\begin{equation}
\label{expt-values-std}
\underbrace{\begin{pmatrix}
\langle A\rangle_{\rho}\\
\langle B\rangle_{\rho}\\
\vdots\\
\langle C\rangle_{\rho}
\end{pmatrix}}_{\displaystyle\textsf{E}}
=
\underbrace{\begin{pmatrix}
\texttt{a}_{\scriptscriptstyle 0,0} &
\texttt{a}_{\scriptscriptstyle 0,1} & \cdots &
\texttt{a}_{\scriptscriptstyle d-1,d-1} \\
\texttt{b}_{\scriptscriptstyle 0,0} &
\texttt{b}_{\scriptscriptstyle 0,1} & \cdots &
\texttt{b}_{\scriptscriptstyle d-1,d-1} \\
\vdots & \vdots & \ddots & \vdots \\
\texttt{c}_{\scriptscriptstyle 0,0} &
\texttt{c}_{\scriptscriptstyle 0,1} & \cdots &
\texttt{c}_{\scriptscriptstyle d-1,d-1} \\
\end{pmatrix}}_{\displaystyle \textbf{M} }
\underbrace{\begin{pmatrix}
\overline{\texttt{r}}_{\scriptscriptstyle 0,0} \\
\overline{\texttt{r}}_{\scriptscriptstyle 0,1}\\
\vdots\\
\overline{\texttt{r}}_{\scriptscriptstyle d-1,d-1}
\end{pmatrix}}_{\overline{\displaystyle\texttt{R}}_d}.\qquad
\end{equation}
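As a cross-check of this matrix equation, the following is a minimal numerical sketch (assuming Python with \texttt{numpy}; the helper name \texttt{expectation\_vector} and the random state are illustrative, not part of the scheme itself). It builds $\textbf{M}$ row by row from the standard-basis coefficients and confirms that $\textbf{M}\,\overline{\texttt{R}}_d$ reproduces $\text{tr}(\rho A)$ for each operator.
\begin{verbatim}
import numpy as np

def expectation_vector(ops, rho):
    # Rows of M hold the standard-basis coefficients a_{jk} of each operator,
    # and the column holds the conjugated state coefficients r_{jk}.
    M = np.array([A.flatten() for A in ops])
    R_bar = np.conj(rho).flatten()
    return M @ R_bar                      # equals [tr(rho A) for A in ops]

d = 3
rng = np.random.default_rng(0)
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = G @ G.conj().T
rho /= np.trace(rho)                      # a random but valid density matrix
ops = [(A + A.conj().T) / 2 for A in rng.normal(size=(2, d, d))]  # Hermitian observables
print(expectation_vector(ops, rho).real)
print([np.trace(rho @ A).real for A in ops])   # the two lines agree
\end{verbatim}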
Next, to express conditions \eqref{norm-rho} and \eqref{0<=S_n} for ${2\leq n\leq d}$ in
terms of ${\texttt{r}_{jk}}$, we need to represent
$\text{tr}(\rho^m)$ for every ${1\leq m\leq d}$ as a function of
$\{\texttt{r}_{jk}\}$.
Orthonormality relation~\eqref{orth-|j>} also yields the rule for composition
\begin{equation}
\label{comp}
|j'\rangle\langle k'|\,|j\rangle\langle k|=
\delta_{j,k'}\,|j'\rangle\langle k|\,,
\end{equation}
which gives rise to matrix multiplication
in the matrix mechanics \cite{Heisenberg25,Born25,Born25-b,Dirac25,Waerden68}.
Particularly here it is very easy to obtain
\begin{equation}
\label{rho^m-stnd}
\rho^m=
\sum_{j_1}\cdots
\sum_{j_{m+1}}
\texttt{r}_{j_1j_2}\,\texttt{r}_{j_2j_3}\cdots\texttt{r}_{j_mj_{m+1}}
\,|j_1\rangle\langle j_{m+1}|\,.
\end{equation}
Then, through \eqref{orth-|j>} and the linearity of the trace, we obtain
\begin{equation}
\label{tr(rho^m)-stnd}
\text{tr}(\rho^m)=
\sum_{j_1}\cdots
\sum_{j_m}\,
\texttt{r}_{j_1j_2}\,\texttt{r}_{j_2j_3}\cdots\texttt{r}_{j_mj_{1}}\,.
\end{equation}
One can compare \eqref{tr(rho^m)-stnd} with its general form~\eqref{tr(rho^m)}.
Let us explicitly write conditions \eqref{norm-rho} and \eqref{0<=S_n} for ${n=2,3,4}$ \cite{Kimura03,Fano57}:
\begin{eqnarray}
\label{1=S_1}
\sum_{j}
\texttt{r}_{jj}=\text{tr}(\rho)&=& 1\,,\\
\label{0<=S_2}
\texttt{R}^\dagger \texttt{R}=
\sum_{jk}
|\texttt{r}_{jk}|^2=\text{tr}(\rho^2)&\leq& 1\,,\\
\label{0<=S_3}
3\,\text{tr}(\rho^2)-2\,\text{tr}(\rho^3)&\leq& 1\,,\\
\label{0<=S_4}
6\,\text{tr}(\rho^2)-8\,\text{tr}(\rho^3)-3\,\big(\text{tr}(\rho^2)\big)^2+6\,\text{tr}(\rho^4)&\leq& 1\qquad
\end{eqnarray}
These deliver linear, quadratic, cubic, and quartic QCs, respectively.
In \eqref{1=S_1} and \eqref{0<=S_2}, \eqref{tr(rho^m)-stnd}
and the column vector \texttt{R} from \eqref{expt-values-std} are used.
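For a quick numerical illustration (a sketch assuming \texttt{numpy}; any valid density matrix may be used), one can verify that conditions \eqref{1=S_1}--\eqref{0<=S_4} indeed hold for a randomly generated state:
\begin{verbatim}
import numpy as np

def traces(rho, kmax=4):
    # tr(rho^m) for m = 1..kmax
    return [np.trace(np.linalg.matrix_power(rho, m)).real for m in range(1, kmax + 1)]

d = 4
rng = np.random.default_rng(1)
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = G @ G.conj().T
rho /= np.trace(rho)                      # a random valid state, so every QC must hold

t1, t2, t3, t4 = traces(rho)
print(abs(t1 - 1) < 1e-12)                                    # linear QC
print(t2 <= 1 + 1e-12)                                        # quadratic QC
print(3 * t2 - 2 * t3 <= 1 + 1e-12)                           # cubic QC
print(6 * t2 - 8 * t3 - 3 * t2 ** 2 + 6 * t4 <= 1 + 1e-12)    # quartic QC
\end{verbatim}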
As a pure state ${\rho=\rho^2}$ is an extreme point of the state space
$\mathcal{S}$ [defined in \eqref{set-of-states}], it saturates inequalities \eqref{0<=S_n} for all ${n=2,\cdots,d}$ \cite{Byrd03}.
A pure state corresponds to a ket, and a qudit's ket can be parametrized by a set of ${2(d-1)}$ real numbers by ignoring an overall phase factor
(for example, see \cite{Arvind97}):
\begin{eqnarray}
\label{|psi>}
|\psi\rangle&=&
|0\rangle \cos\theta_0+\nonumber\\
&&|1\rangle \sin\theta_0\cos\theta_1\,e^{\text{i}\phi_1}+\nonumber\\
&&|2\rangle \sin\theta_0\sin\theta_1\cos\theta_2\,e^{\text{i}\phi_2}+\nonumber\\
&&\qquad\cdots+\nonumber\\
&&|d-2\rangle \sin\theta_0\sin\theta_1\cdots\cos\theta_{d-2}\,e^{\text{i}\phi_{d-2}}+
\nonumber\\
&&|d-1\rangle \sin\theta_0\sin\theta_1\cdots\sin\theta_{d-2}\,e^{\text{i}\phi_{d-1}}\,,
\end{eqnarray}
where ${\text{i}=\sqrt{-1}}$, $\theta_l\in[0,\tfrac{\pi}{2}]$ for all ${l=0,\cdots,d-2}$, and $\phi_{l'}\in[0,2\pi)$ for every ${l'=1,\cdots,d-1}$.
Thus the pure state ${\rho_\text{pure}=|\psi\rangle\langle\psi|}$ and the corresponding column vector ${\texttt{R}^{(\text{pure})}_d}$ [see \eqref{expt-values-std} for its complex conjugate] are specified by the ${2(d-1)}$ real numbers \cite{Bengtsson06},
for instance,
\begin{eqnarray}
\label{R-pure-2}
&\texttt{R}^{(\text{pure})}_2=\begin{pmatrix}
(\cos\theta_0)^2 \\
\cos\theta_0\sin\theta_0\,e^{-\text{i}\phi_1}\\
\cos\theta_0\sin\theta_0\,e^{\text{i}\phi_1}\\
(\sin\theta_0)^2
\end{pmatrix}&
\mbox{and}\qquad\quad
\\
\label{R-pure-3}
&\texttt{R}^{(\text{pure})}_3=\begin{pmatrix}
(\cos\theta_0)^2 \\
\cos\theta_0\sin\theta_0\cos\theta_1\,e^{-\text{i}\phi_1}\\
\cos\theta_0\sin\theta_0\sin\theta_1\,e^{-\text{i}\phi_2}\\
\cos\theta_0\sin\theta_0\cos\theta_1\,e^{\text{i}\phi_1}\\
{(\sin\theta_0\cos\theta_1)}^2\\
(\sin\theta_0)^2\cos\theta_1\sin\theta_1\,e^{\text{i}(\phi_1-\phi_2)}\\
\cos\theta_0\sin\theta_0\sin\theta_1\,e^{\text{i}\phi_2}\\
(\sin\theta_0)^2\cos\theta_1\sin\theta_1\,e^{-\text{i}(\phi_1-\phi_2)}\\
{(\sin\theta_0\sin\theta_1)}^2
\end{pmatrix}.&
\end{eqnarray}
By plugging ${\texttt{R}^{(\text{pure})}_d}$ in Eq.~\eqref{expt-values-std}, one can reach all those points
in $\mathcal{E}$ [defined in \eqref{set-of-expt}] that correspond to pure states in $\mathcal{S}$.
All the extreme points of $\mathcal{E}$ will be a subset of these points.
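The parametrization \eqref{|psi>} is easy to implement; the following sketch (assuming \texttt{numpy}; the helper \texttt{pure\_ket} is ours) builds a qutrit ket from the angles, forms $\rho_\text{pure}$, whose flattened entries reproduce ${\texttt{R}^{(\text{pure})}_3}$ of \eqref{R-pure-3}, and confirms ${\text{tr}(\rho_\text{pure}^{\,m})=1}$:
\begin{verbatim}
import numpy as np

def pure_ket(thetas, phis):
    # the ket of Eq. (|psi>): thetas = (theta_0,...,theta_{d-2}), phis = (phi_1,...,phi_{d-1})
    d = len(thetas) + 1
    c = np.zeros(d, dtype=complex)
    sines = 1.0
    for l in range(d - 1):
        phase = 1.0 if l == 0 else np.exp(1j * phis[l - 1])
        c[l] = sines * np.cos(thetas[l]) * phase
        sines *= np.sin(thetas[l])
    c[d - 1] = sines * np.exp(1j * phis[d - 2])
    return c

rng = np.random.default_rng(2)
thetas, phis = rng.uniform(0, np.pi / 2, 2), rng.uniform(0, 2 * np.pi, 2)  # a random qutrit
psi = pure_ket(thetas, phis)
rho_pure = np.outer(psi, psi.conj())      # rho_pure.flatten() is R^(pure)_3 of Eq. (R-pure-3)
print([np.trace(np.linalg.matrix_power(rho_pure, m)).real for m in (1, 2, 3)])  # all ~1
\end{verbatim}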
In the following, we demonstrate a procedure to build a combined uncertainty measure on $\mathcal{E}$ for Hermitian operators ${A,B,\cdots,C}$.
In the case of a non-Hermitian operator, considering \cite{Adagger}, one can talk about
uncertainty measures for the two Hermitian operators
$A^{\scriptscriptstyle(+)}=\tfrac{1}{2}(A+A^\dagger)$ and $A^{\scriptscriptstyle(-)}=\tfrac{1}{2\text{i}}(A-A^\dagger)$. Note that $A^{\scriptscriptstyle(+)}$ and $A^{\scriptscriptstyle(-)}$ commute if and only if $A$ commutes with $A^\dagger$, that is, if and only if $A$ is a normal operator.
The standard deviation
\begin{eqnarray}
\label{std-dev}
\Delta A&=&\sqrt{\langle A^2\rangle-\langle A\rangle^2}\nonumber\\
&=&\sqrt{\sum_{l=1}^{d}a_l^2\,p_l
-\left(\sum_{l=1}^{d}a_l\,p_l\right)^2}
\end{eqnarray}
can be viewed---through the first equality---as a concave function on the allowed region for $\{A,A^2\}$, which is
the convex hull of ${\{(a_l,a_l^2)\}_{l=1}^d}$, where $a_l$ is an eigenvalue of $A$ [see \eqref{A}].
By finding the absolute minimum of ${\Delta A+\Delta B+\cdots+\Delta C}$ on
the permitted region for $\{A,A^2,B,B^2,\cdots,C,C^2\}$, one can have a tight UR
based on the standard deviations
[for example, see \eqref{std-J 9}--\eqref{aq-std-J}].
If one wants to build a UR in the case of two projective measurements
described by ${\{|a_l\rangle\langle a_l|\}_{l=1}^d}$
and ${\{|b_k\rangle\langle b_k|\}_{k=1}^d}$, then one can consider the
permissible region of the two probability vectors ${\vec{p}=(p_1,\cdots,p_d)}$ and
${\vec{q}=(q_1,\cdots,q_d)}$, where
${p_l=\big\langle |a_l\rangle\langle a_l|\big\rangle_\rho}$
[see \eqref{<A>-pro}] and
${q_k=\big\langle |b_k\rangle\langle b_k|\big\rangle_\rho}$.
There are many uncertainty measures for $\vec{p}$ (and $\vec{q}\,$)---thanks to Shannon \cite{Shannon48}, R\'{e}nyi \cite{Renyi61}, and
Tsallis \cite{Tsallis88}---and many associated URs \cite{Coles17,Maassen88}.
Moreover, with the probability vector $\vec{p}$, we can calculate the expectation value of any function of the Hermitian operator ${A}$ [given in \eqref{A}] as well as its standard deviation \eqref{std-dev}.
Now suppose we have no access to the individual probabilities $p_l$ but only to the expectation value
${\langle A\rangle}$; then we can construct uncertainty or certainty measures as follows.
Let us recall from \eqref{<A>in} that
${\langle A\rangle\in[a_\text{min}\,,\,a_\text{max}]}$, and we are interested in the case ${a_\text{min}\neq\,a_\text{max}}$.
We call $\rho$ an \emph{eigenstate} corresponding to an eigenvalue $a$ of $A$
if and only if ${A\rho=a\rho=\rho A}$.
If ${\langle A\rangle_\rho=a_\text{min}}$ then we can say for sure: ($i$) qudits are prepared in a minimum-eigenvalue-state of ${A}$ and ($ii$) every outcome
${a_l\neq a_\text{min}}$ will never occur in a future projective measurement
${\{|a_l\rangle\langle a_l|\}_{l=1}^d}$
for $A$.
So, only in the two cases ${\langle A\rangle_\rho=a_\text{min},a_\text{max}}$, we have a minimum possible uncertainty
about $\rho$ (if it is unknown) in which the individual qudits are identically prepared in \eqref{expt-situ}
and about the results of a future measurement for $A$.
Therefore, for an uncertainty measure, we require a continuous function on the interval $[a_\text{min}\,,\,a_\text{max}]$
that reaches its absolute minimum at both the endpoints.
Furthermore, mixing states, ${w\rho+(1-w)\rho'=\rho_{\text{mix}}}$ with ${0\leq w\leq1}$,
yields the convex sum ${w\langle A\rangle_\rho+(1-w)\langle A\rangle_{\rho'}=\langle A\rangle_{\rho_{\text{mix}}}}$, and it does not decrease uncertainty (or increase certainty).
A suitable concave (convex) function can be taken as a measure of uncertainty (certainty) because it does not decrease (increase) under such mixing.
The two positive semi-definite operators
\begin{equation}
\label{Adot}
\dot{A}:=\frac{a_\text{max}\,I-A}{a_\text{max}-a_\text{min}}
\quad\text{and}\quad
\mathring{A}:=\frac{A-a_\text{min}\,I}{a_\text{max}-a_\text{min}}\,,
\end{equation}
are such that ${\dot{A}+\mathring{A}}$ is the identity operator ${I\in\mathscr{B}(\mathscr{H}_d)}$, and we only need ${\langle A\rangle}$ to compute both
${\langle \dot{A}\rangle,\langle \mathring{A}\rangle\in[0,1]}$.
Now we can define concave and convex functions of ${\langle A\rangle}$
that fulfill the above requirements:
\begin{eqnarray}
\label{H(A)}
H(\langle A\rangle)&=&-(\langle \dot{A}\rangle\ln\,\langle \dot{A}\rangle+\langle \mathring{A}\rangle\ln\,\langle \mathring{A}\rangle)\,,
\\
\label{u(A)}
u_\kappa(\langle A\rangle)&=&{\langle \dot{A}\rangle}^\kappa+{\langle \mathring{A}\rangle}^\kappa\,,\quad 0<\kappa<\infty\,,
\quad\text{and}\qquad\\
\label{umax(A)}
u_\text{max}(\langle A\rangle)&=&\max\,\{\,
\langle \dot{A}\rangle\,,\,\langle \mathring{A}\rangle\,\}\,.
\end{eqnarray}
One can easily show that $H$ and $u_\kappa$ for all ${0<\kappa<1}$ are concave functions, whereas $u_\kappa$ for all ${1<\kappa<\infty}$ and $u_\text{max}$ are convex functions.
For ${\kappa=1}$, ${u_\kappa(\langle A\rangle)=1}$ for every $\langle A\rangle$, and thus it is neither a genuine measure of uncertainty nor of certainty.
With $u_\kappa$ one can create quantities like R\'{e}nyi's and
Tsallis' entropies, and $H$ of \eqref{H(A)} is like the Shannon entropy
but, in general, it is different from ${-\sum_{l=1}^{d}p_l\ln p_l}$.
If $A$ only has two distinct eigenvalues, then $\dot{A}$ and $\mathring{A}$ become
mutually orthogonal projectors, and
\eqref{H(A)} turns into the standard form of Shannon entropy [for example, see Appendix~\ref{sec:qubit}].
Note that Shannon's and Tsallis' entropies are concave functions but not all
R\'{e}nyi's entropies are.
The ranges of the above functions are ${H\in[0,\ln2]}$, ${u_\kappa\in[1,2^{1-\kappa}]}$ for ${0<\kappa<1}$
and ${u_\kappa\in[2^{1-\kappa},1]}$ for ${1<\kappa<\infty}$, and
${u_\text{max}\in[\tfrac{1}{2},1]}$.
As desired, all the above concave (convex) functions reach their absolute minimum (maximum) when
${\langle A\rangle=a_\text{min},a_\text{max}}$.
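A minimal sketch of these measures (assuming \texttt{numpy}; the end points $a_\text{min}=-1$ and $a_\text{max}=+1$ are only an illustrative choice) recovers the quoted ranges and extreme points by scanning ${\langle A\rangle}$ over its interval:
\begin{verbatim}
import numpy as np

def measures(mean_A, a_min, a_max, kappa=0.5):
    # <A-dot>, <A-ring> of Eq. (Adot) and the measures of Eqs. (H(A))-(umax(A))
    dot = (a_max - mean_A) / (a_max - a_min)
    ring = (mean_A - a_min) / (a_max - a_min)
    xlogx = lambda x: 0.0 if x == 0 else x * np.log(x)
    H = -(xlogx(dot) + xlogx(ring))
    return H, dot ** kappa + ring ** kappa, max(dot, ring)

vals = np.array([measures(x, -1.0, 1.0) for x in np.linspace(-1, 1, 2001)])
print(vals[:, 0].min(), vals[:, 0].max())   # 0 and ln 2
print(vals[:, 1].min(), vals[:, 1].max())   # 1 and 2^{1/2}  (kappa = 1/2)
print(vals[:, 2].min(), vals[:, 2].max())   # 1/2 and 1
\end{verbatim}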
In the case of a non-degenerate eigenvalue $a_\text{min}$, we will be even more certain that there is only one (pure) eigenstate
${|a_\text{min}\rangle\langle a_\text{min}|}$ that can provide
${\langle A\rangle=a_\text{min}}$, and similarly for a non-degenerate $a_\text{max}$.
Like the standard deviation \eqref{std-dev},
all the concave (convex) functions in \eqref{H(A)}--\eqref{umax(A)}
attain their absolute maximum (minimum) when
${\langle A\rangle=\tfrac{1}{2}(a_\text{min}+a_\text{max})}$.
Both a ket ${\tfrac{1}{\sqrt{2}}(|a_\text{min}\rangle+e^{\text{i}\phi}|a_\text{max}\rangle)}$, where ${\phi}$ is a real number, and a state that is the equal mixture of $|a_\text{min}\rangle\langle a_\text{min}|$ and $|a_\text{max}\rangle\langle a_\text{max}|$ provide the expectation value ${\langle A\rangle=\tfrac{1}{2}(a_\text{min}+a_\text{max})}$.
Since the equal superposition ket gives the maximum standard deviation of $A$, the ket plays an important role in quantum metrology \cite{Giovannetti06}
and in determining a fundamental limit on the speed of unitary evolution generated by $A$ \cite{Mandelstam45,Margolus98,Levitin09}.
The sum of concave functions is a concave function, for example,
\begin{equation}
\label{Habc}
H(\textsf{E}):=
H(\langle A\rangle)+H(\langle B\rangle)+\cdots+H(\langle C\rangle)\,,
\end{equation}
where every $H$ is defined according to \eqref{H(A)}.
One can view \eqref{Habc} as a measure of combined uncertainty on the allowed region $\mathcal{E}$.
Its global minimum, say, $\mathfrak{h}$
will occur at the extreme points of $\mathcal{E}$ (see Theorem~${3.4.7}$ and Appendix~A.3 in \cite{Niculescu93}).
As every extreme point of $\mathcal{E}$ is related to a
pure state, one can find the minimum by changing at most ${2(d-1)}$ parameters that
appear in \eqref{|psi>} and then can enjoy the \emph{tight} UR
${\mathfrak{h}\leq H(\textsf{E})}$.
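In practice, the constant $\mathfrak{h}$ can be estimated numerically. The following sketch (assuming \texttt{numpy} and \texttt{scipy}, with the Hermitian operators supplied as arrays; instead of the angles of \eqref{|psi>} it uses an equivalent, overcomplete real parametrization of the ket) minimizes $H(\textsf{E})$ over pure states with several random restarts:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def entropy_term(mean, lo, hi):
    x = np.clip([(hi - mean) / (hi - lo), (mean - lo) / (hi - lo)], 1e-15, 1.0)
    return -float(np.sum(x * np.log(x)))

def H_of_E(x, ops):
    # combined measure H(E) of Eq. (Habc) on the pure state built from the real vector x
    d = ops[0].shape[0]
    psi = x[:d] + 1j * x[d:]
    psi = psi / np.linalg.norm(psi)
    total = 0.0
    for A in ops:                         # ops: Hermitian operators as numpy arrays
        w = np.linalg.eigvalsh(A)
        mean = float(np.real(psi.conj() @ A @ psi))
        total += entropy_term(mean, w[0], w[-1])
    return total

def bound_h(ops, trials=50, seed=3):
    # estimate the tight constant h by minimizing H(E) over pure states (several restarts)
    d = ops[0].shape[0]
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(trials):
        res = minimize(H_of_E, rng.normal(size=2 * d), args=(ops,), method="Nelder-Mead")
        best = min(best, res.fun)
    return best
\end{verbatim}
For instance, supplying the nine spin-1 observables of Sec.~\ref{sec:qutrit} as \texttt{ops} should return a value close to the bound quoted in \eqref{H-J}.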
Only if a vertex of hyperrectangle~\eqref{hyperrectangle} is a part of $\mathcal{E}$
does the lower bound $\mathfrak{h}$ become trivial (zero).
It only happens when there exists a ket ${|\text{e}\rangle}$ that is a maximum- or
minimum-eigenvalue-ket of every operator in ${\{A,B,\cdots,C\}}$.
There are examples in \cite{Sehrawat18} where all ${A,B,\cdots,C}$ share a common eigenket, thus usual URs---based on probabilities associated with projective measurements for ${A,B,\cdots,C}$ or based on the standard deviations ${\Delta A,\Delta B,\cdots,\Delta C}$---become trivial while ${0<\mathfrak{h}\leq H(\textsf{E})}$.
Like \eqref{Habc}, one can build combined uncertainty or certainty measures (and relations) by picking concave or convex functions from \eqref{u(A)} and \eqref{umax(A)}.
If one chooses
a measure that is neither a concave nor convex function then its absolute extremum
can occur inside $\mathcal{E}$.
The above technique is applied to derive tight URs and CRs in \cite{Riccardi17,Sehrawat17} and in the subsequent sections.
Apart from a few exceptions, it is not clear to us whether we can interpret a QC as a bound on a combined uncertainty or certainty.
On the other hand, a UR puts a lower limit on a combined uncertainty, and it can also be perceived as
a constraint on the mean values, since every uncertainty measure is a function of them (though not necessarily a concave or convex one).
Suppose we identify a region in hyperrectangle~\eqref{hyperrectangle} with a UR, for example,
\begin{equation}
\label{R_H}
\mathcal{R}_H:=
\big\{ (\textbf{a},\cdots,\textbf{c})\in\mathcal{H}
\ |\
{\mathfrak{h}\leq H(\textbf{a})+\cdots+H(\textbf{c})}
\big\}\,,
\end{equation}
where $H(\textbf{a})$ is obtained by replacing ${\langle A\rangle}$
with \textbf{a} in ${\langle \dot{A}\rangle}$, ${\langle \mathring{A}\rangle}$, and
then in \eqref{H(A)}; likewise, ${H(\textbf{c})}$ has the same functional form as
$H(\langle C\rangle)$.
One can easily prove that $\mathcal{R}_H$ is a convex set.
Obviously, ${\mathcal{E}}$ will be contained in $\mathcal{R}_H$.
Denoting the relative complement of $\mathcal{E}$ in a region $\mathcal{R}$ by $\mathcal{R}\setminus\mathcal{E}$, there is no $\rho$ for ${(\textbf{a},\cdots,\textbf{c})\in\mathcal{R}_H\setminus\mathcal{E}}$ such that
${(\textbf{a},\cdots,\textbf{c})=(\langle A\rangle_{\rho},\cdots,\langle C\rangle_{\rho})}$
holds, and such points cannot be realized experimentally in scheme~\eqref{expt-situ}.
One can also observe that
if ${(\textbf{a},\textbf{b},\cdots,\textbf{c})}$ belongs to $\mathcal{R}_H$
then ${(\textbf{a}',\textbf{b},\cdots,\textbf{c})}$, where ${\textbf{a}'=a_{\text{min}}+a_{\text{max}}-\textbf{a}}$, will also belong to $\mathcal{R}_H$ because ${H(\textbf{a})=H(\textbf{a}')}$.
In the case of ${\textbf{a}'\neq\textbf{a}}$, only one of the two points
can be allowed, because a single quantum state cannot provide two different expectation values of $A$.
Through a few examples in this paper,
the gap $\mathcal{R}\setminus\mathcal{E}$ between the two regions is exhibited in Figs.~\ref{fig:region-2-pojs}, \ref{fig:regions-MUBs}, \ref{fig:regions}, \ref{fig:ellip-SUR}, and \ref{fig:regions-SIC-POVM}.
\begin{figure}
\caption{The permitted region $\mathcal{E}$ for the two rank-1 projectors $P$ and $Q$ of \eqref{PQ-mat}; the point $(0.8,0.8)$ lies outside $\mathcal{E}$.}
\label{fig:region-2-pojs}
\end{figure}
Suppose we have to provide a yes/no answer to a question such as:
can ${0.8}$ and ${0.8}$ be the expectation values ${\langle P\rangle_\rho}$
and ${\langle Q\rangle_\rho}$, where $P$ and $Q$ are rank-1 projectors represented by
\begin{equation}
\label{PQ-mat}
\begin{pmatrix}
\frac{1}{75} & -\frac{\text{i}}{15} & \frac{7}{75}\\[0.4em]
\frac{\text{i}}{15} & \hphantom{-}\frac{1}{3} & \frac{7\,\text{i}}{15}\\[0.4em]
\frac{7}{75} & -\frac{7\,\text{i}}{15} & \frac{49}{75}
\end{pmatrix}
\quad\mbox{and}\quad
\frac{1}{9}\begin{pmatrix}
\hphantom{-}4 & \hphantom{-}2 & -4\\[0.4em]
\hphantom{-}2 & \hphantom{-}1 & -2\\[0.4em]
-4 & -2 & \hphantom{-}4
\end{pmatrix},
\end{equation}
respectively, in some orthonormal basis of $\mathscr{H}_3$?
A \emph{clear} answer can then be given with the allowed region.
Suppose
${P=|a\rangle\langle a|}$
and
${Q=|b\rangle\langle b|}$ are two rank-1 projectors on a $d$-dimensional Hilbert space $\mathscr{H}_d$
such that ${0<|\langle a|b\rangle|<1}$ (non-commuting).
For ${d=2}$, their allowed region $\mathcal{E}$
is determined by
\begin{eqnarray}
\label{ellipse-PQ}
&&
\textsf{E}^\intercal\, \textbf{G}^{-1}\textsf{E} \leq 1\,,
\quad \mbox{where} \quad
\textsf{E}=\begin{pmatrix}
2\,\langle P\rangle-1\\
2\,\langle Q\rangle-1
\end{pmatrix}
\ \mbox{and}\qquad
\\
&&
\label{G-PQ}
\textbf{G}=
\begin{pmatrix}
1 & {\scriptstyle 2|\langle a|b\rangle|^2-1}\\
{\scriptstyle 2|\langle a|b\rangle|^2-1}& 1
\end{pmatrix}.
\qquad
\end{eqnarray}
One can see through \eqref{angle-bt-axis} that \eqref{ellipse-PQ} and \eqref{ellipse}
are the same for a qubit.
In the case of ${d>2}$, the allowed region will be the convex hull of the elliptic region specified by the inequality in \eqref{ellipse-PQ} and the point $(0,0)$
\cite{Lenard72}; see also \cite{Sehrawat17}.
This point is given by all those states that lie in the orthogonal complement of the span of
${\{|a\rangle,|b\rangle\}}$;
these states are the common eigenstates of $P$ and $Q$.
Incidentally, a UR becomes a trivial statement in this case.
The answer to the above question is ``no'' because the point ${(0.8,0.8)}$ falls outside the allowed region, as shown in Fig.~\ref{fig:region-2-pojs}.
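The same conclusion can be reached numerically; a minimal sketch (assuming \texttt{numpy}, and using the projectors of \eqref{PQ-mat}) evaluates the quadratic form of \eqref{ellipse-PQ} at ${(\langle P\rangle,\langle Q\rangle)=(0.8,0.8)}$. Since the value exceeds 1 and the point does not lie between the elliptic region and $(0,0)$, it falls outside the convex hull that forms $\mathcal{E}$:
\begin{verbatim}
import numpy as np

P = np.array([[1/75, -1j/15,  7/75],
              [1j/15,  1/3,  7j/15],
              [7/75, -7j/15, 49/75]])          # the rank-1 projectors of Eq. (PQ-mat)
Q = np.array([[4, 2, -4], [2, 1, -2], [-4, -2, 4]]) / 9

t = np.trace(P @ Q).real                       # |<a|b>|^2
E = np.array([2 * 0.8 - 1, 2 * 0.8 - 1])       # candidate pair (0.8, 0.8)
G = np.array([[1, 2 * t - 1], [2 * t - 1, 1]])
print(E @ np.linalg.solve(G, E))               # ~1.44 > 1: Eq. (ellipse-PQ) is violated
\end{verbatim}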
If one asks a similar question for a set of commuting operators
${\{A,B,\cdots,C\}}$, then the permitted region will be the convex hull of
$\{(\langle \text{e}_l|A|\text{e}_l\rangle,
\langle \text{e}_l|B|\text{e}_l\rangle,\cdots,
\langle \text{e}_l|C|\text{e}_l\rangle)\}_{l=1}^{d}$,
where ${\{|\text{e}_l\rangle \}_{l=1}^{d}}$
is their common eigenbasis.
\section{The unitary operator basis}\label{sec:unitary-basis}
With orthonormal basis \eqref{B_i}
of the Hilbert space $\mathscr{H}_d$,
we can build a pair of (complementary) unitary
operators
\begin{eqnarray}
\label{X_i}
X&:=&\sum_{j\,\in\, \mathbb{Z}_d}
|j+1\rangle\langle j|
\qquad\quad\qquad(X^d=I)\quad\mbox{and}\qquad\\
\label{Z_i}
Z&:=&\sum_{j\,\in\, \mathbb{Z}_d}
\omega^{\,j}\,|j\rangle\langle j|
\qquad\quad\qquad\ (Z^d=I)
\end{eqnarray}
thanks to Weyl \cite{Weyl32} and Schwinger~\cite{Schwinger60},
where ${j+1}$ is the modulo-$d$ addition,
${\omega=\exp(\text{i}\tfrac{2\pi}{d})}$, and
$\mathbb{Z}_d$ is defined in \eqref{Z_d}.
Under the operator multiplication, $X$ and $Z$ generate the discrete Heisenberg-Weyl group \cite{Weyl32,Durt10}.
The group members follow the Weyl commutation relation \cite{Weyl32}
\begin{equation}
\label{Weyl commutation}
Z^zX^x=
\omega^{\,xz}\, X^xZ^z \quad\text{for every}\quad x,z\in\mathbb{Z}_d\,,
\end{equation}
and the property
\begin{equation}
\label{traceless}
\text{tr}(X^xZ^z)= d\,\delta_{x,0}\delta_{z,0}\,.
\end{equation}
A subset of the Weyl group
\begin{equation}
\label{Unitary-Basis}
\mathfrak{B}_{\text{uni}}:=\big\{X^xZ^z:x,z\in\mathbb{Z}_d\big\}
\end{equation}
forms an orthogonal basis of $\mathscr{B}(\mathscr{H}_d)$, where the orthogonality relation
\begin{equation}
\label{orth-XZ}
\big\lgroup X^{x'}Z^{z'},X^xZ^z\big\rgroup_\textsc{hs}=
\text{tr}\big(X^{x-x'}Z^{z-z'}\big)=
d\,\delta_{x,x'}\delta_{z,z'}\qquad
\end{equation}
is a consequence of \eqref{traceless} \cite{Schwinger60}.
All the elements in basis~\eqref{Unitary-Basis} are unitary operators and traceless [see \eqref{traceless}] except the identity operator that corresponds to ${x=0=z}$.
Basis~\eqref{Unitary-Basis} is called the \emph{unitary}-basis.
According to \eqref{rho-resolution} and \eqref{r}, a
statistical operator can be represented as
\begin{equation}
\label{rho-in-XZ}
\rho=\tfrac{1}{d}\sum_{x,z\,\in\,\mathbb{Z}_d}
\overline{\langle X^xZ^z\rangle}_\rho\ X^xZ^z
\end{equation}
in the basis $\mathfrak{B}_{\text{uni}}$. Here, the conditions for normalization \eqref{norm-rho} and for Hermiticity \eqref{r} become
${\langle X^0Z^0\rangle=1}$ and
\begin{equation}
\label{<XxZz>}
\overline{\langle X^xZ^z\rangle}=\langle(X^xZ^z)^\dagger\rangle=
\omega^{\,xz}\langle X^{-x}Z^{-z}\rangle\,,
\end{equation}
respectively.
The second equality in \eqref{<XxZz>} is obtained by virtue of \eqref{Weyl commutation}.
The inverse of a basis element, ${(X^xZ^z)^\dagger}$,
does not always belong to basis~\eqref{Unitary-Basis} but
to the Weyl group.
In contrast, both ${X^xZ^z}$ and ${X^{-x}Z^{-z}}$ are members of
$\mathfrak{B}_{\text{uni}}$, and their mean values are related through~\eqref{<XxZz>} (in this regard, see also \cite{Adagger}).
Taking the general form, \eqref{tr(rho^m)},
one can easily express $\text{tr}(\rho^m)$ in the unitary-basis
by using \eqref{Weyl commutation}, \eqref{traceless}, and \eqref{<XxZz>},
for example, \begin{eqnarray}
\label{tr_rho2-exp}
\text{tr}(\rho^2)
&=&\tfrac{1}{d}\sum_{x,z}
{|\langle X^xZ^z \rangle|}^{\,2}\quad\mbox{and}\\
\label{tr_rho3-exp}
\text{tr}(\rho^3)&=&\tfrac{1}{d^2}
\sum_{x_1,z_1}\sum\limits_{x_2,z_2}
\langle X^{-x_1}Z^{-z_1} \rangle\,
\langle X^{-x_2}Z^{-z_2} \rangle\times
\nonumber\\
&&\qquad
\langle X^{ x_1+x_2}Z^{ z_1+z_2} \rangle\;
\omega^{z_1(x_1+x_2)+z_2x_2}\,.\quad\quad
\end{eqnarray}
Then, one can draw QCs on the expectation values of the Weyl operators
from~\eqref{0<=S_n}.
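As an illustration, the quadratic relation \eqref{tr_rho2-exp} is easy to test numerically; the following sketch (assuming \texttt{numpy}; the dimension and the random state are arbitrary) constructs $X$ and $Z$ and compares the two sides:
\begin{verbatim}
import numpy as np

d = 5
w = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)              # X|j> = |j+1 mod d>, Eq. (X_i)
Z = np.diag(w ** np.arange(d))                 # Z|j> = w^j |j>,     Eq. (Z_i)

rng = np.random.default_rng(4)
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = G @ G.conj().T
rho /= np.trace(rho)

lhs = sum(abs(np.trace(rho @ np.linalg.matrix_power(X, x)
                           @ np.linalg.matrix_power(Z, z))) ** 2
          for x in range(d) for z in range(d)) / d
print(lhs, np.trace(rho @ rho).real)           # the two numbers agree, and both are <= 1
\end{verbatim}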
In the case of a prime dimension $d$,
the basis $\mathfrak{B}_{\text{uni}}$---without the identity operator---can be divided into ${d+1}$ disjoint subsets
\begin{eqnarray}
\label{d+1 subsets}
&&\big\{\mathcal{C}^{(1,z)}\,|\,z\in\mathbb{Z}_d\big\}
\cup\big\{\mathcal{C}^{(0,1)}\big\}
\,,\quad \mbox{where} \\
\label{Cxz}
&&\mathcal{C}^{(x,z)}:=
\big\{ X^{kx} Z^{kz}\,|\,
k\in\mathbb{Z}_d\ \ \mbox{and}\ \ k\neq0\big\}
\end{eqnarray}
and each subset carries ${d-1}$ pairwise commuting operators \cite{Bandyopadhyay02,Englert01}.
Hence, one can find a common eigenbasis of the operators in $\mathcal{C}^{(x,z)}$.
In fact, there exists a complete set of ${d+1}$ MUBs
of $\mathscr{H}_{d}$ \cite{Ivanovic81,Wootters89,Bandyopadhyay02}:
\begin{equation}
\label{d+1 bases}
\big\{\mathcal{B}^{\scriptscriptstyle(z)}\,|\,z\in\mathbb{Z}_d\big\}
\cup\big\{\,\mathcal{B}\,\big\}
\end{equation}
whose members are eigenbases for the subsets in
\eqref{d+1 subsets}.
Our original basis $\mathcal{B}$ in \eqref{B_i} is an eigenbasis of ${Z\in\mathcal{C}^{(0,1)}}$ [see \eqref{Z_i}].
Let us define the remaining bases as \cite{Bandyopadhyay02,Englert01}
\begin{equation}
\label{B^z}
\mathcal{B}^{\scriptscriptstyle(z)}:=\big\{\,|z,j\rangle
\,|\,j\in\mathbb{Z}_d\big\}\,,
\ \mbox{where}\ \
XZ^z|z,j\rangle=\omega^j\,|z,j\rangle\,.
\end{equation}
Eigenvalues of every non-identity ${X^xZ^z}$ are distinct powers of $\omega$ \cite{Englert01,Schwinger60}.
With an integral power [obtained by repeatedly using~\eqref{Weyl commutation}]
\begin{equation}
\label{(X^xZ^z)^k}
(XZ^z)^k=
\omega^{\frac{k(k-1)}{2}z} X^{k}Z^{kz}
\end{equation}
and the eigenvalue equation in~\eqref{B^z},
one can arrive at the spectral decomposition
\begin{equation}
\label{X^{k}Z^{kz}}
X^{k}Z^{kz}=\omega^{-\frac{k(k-1)}{2}z}
\sum_{j\,\in\,\mathbb{Z}_d}\,
\omega^{kj}\,
|z,j\rangle\langle z,j|
\end{equation}
of every operator in the subset $\mathcal{C}^{(1,z)}$.
Now taking \eqref{X^{k}Z^{kz}} and \eqref{Z_i}, we can express the average
values as
\begin{eqnarray}
\label{<X^{k}Z^{kz}>}
\langle X^{k}Z^{kz}\rangle_\rho&=&
\omega^{-\frac{k(k-1)}{2}z}
\sum_{j\,\in\,\mathbb{Z}_d}\,
\omega^{kj}\,
p^{\scriptscriptstyle(z)}_j
\quad\mbox{and}\\
\label{<Z^{k}>}
\langle Z^k\rangle_\rho&=&
\sum_{j\,\in\,\mathbb{Z}_d}\,
\omega^{kj}\,p_j\,,
\quad\mbox{where}\\
\label{pj}
p^{\scriptscriptstyle(z)}_j&=&\langle z, j|\,\rho\,|z,j\rangle
\quad\mbox{and}\quad
p_j=\langle j|\,\rho\,|j\rangle\qquad
\end{eqnarray}
are the probabilities for projective measurements in ${d+1}$ MUBs~\eqref{d+1 bases}.
Next, we can rewrite \eqref{tr_rho2-exp} as
\begin{eqnarray}
\label{tr_rho2-exp2}
\text{tr}(\rho^2)
&=&\tfrac{1}{d}\Big[1+\sum\limits_{z\,\in\,\mathbb{Z}_d}
\underbrace{\sum\limits_{k=1}^{d-1}
{|\langle X^{k}Z^{kz} \rangle|}^{\,2}}
_{\textstyle d \sum_{j}\big(p^{\scriptscriptstyle(z)}_j\big)^2-1}+
\underbrace{\sum\limits_{k=1}^{d-1}
{|\langle Z^k \rangle|}^{2}}
_{\textstyle d\sum_{j}(p_j)^2-1}
\Big]\quad
\nonumber\\
&=&
\label{tr_rho2-p}
\sum_{z\,\in\,\mathbb{Z}_d}\,
\sum_{j\,\in\,\mathbb{Z}_d}
\big(p^{\scriptscriptstyle(z)}_j\big)^2
+
\sum_{j\,\in\,\mathbb{Z}_d}
(p_j)^2-1\,.
\end{eqnarray}
Expression \eqref{tr_rho2-p}
is achieved with the help of \eqref{<X^{k}Z^{kz}>}--\eqref{pj},
\begin{equation}
\label{sum pj=1}
\sum_{j\,\in\,\mathbb{Z}_d}
p^{\scriptscriptstyle(z)}_j=1=
\sum_{j\,\in\,\mathbb{Z}_d}
p_j
\end{equation}
[due to \eqref{p-const1}] for every $z$, and
${\textstyle\sum\nolimits_{k=0}^{d-1} \omega^{k(j-j')}=d\,\delta_{j,j'}}$.
Owing to ${\text{tr}(\rho^2)\leq1}$ [see \eqref{0<=S_2}], we reach the \emph{quadratic} QC for the Weyl operators in~\eqref{tr_rho2-exp} and thus
\begin{equation}
\label{p^2<=1}
\sum_{z\,\in\,\mathbb{Z}_d}\,
\sum_{j\,\in\,\mathbb{Z}_d}
\big(p^{\scriptscriptstyle(z)}_j\big)^2
+
\sum_{j\,\in\,\mathbb{Z}_d}
(p_j)^2\,\leq2
\end{equation}
for the MUB-probabilities.
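A numerical sketch of \eqref{p^2<=1} for a qutrit (assuming \texttt{numpy}; the MUBs are obtained here simply as numerical eigenbases of $XZ^z$, which suffices since the eigenvalues are non-degenerate) also confirms the equality with ${\text{tr}(\rho^2)+1}$ of \eqref{tr_rho2-p}:
\begin{verbatim}
import numpy as np

d = 3                                           # a prime dimension
w = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)
Z = np.diag(w ** np.arange(d))

rng = np.random.default_rng(5)
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = G @ G.conj().T
rho /= np.trace(rho)

total = np.sum(np.diag(rho).real ** 2)          # computational-basis term: sum_j p_j^2
for z in range(d):
    _, V = np.linalg.eig(X @ np.linalg.matrix_power(Z, z))   # columns: the MUB B^(z)
    probs = np.real(np.diag(V.conj().T @ rho @ V))
    total += np.sum(probs ** 2)
print(total, np.trace(rho @ rho).real + 1)      # equal, per Eq. (tr_rho2-p), and <= 2
\end{verbatim}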
In \cite{Larsen90,Ivanovic92}, inequality~\eqref{p^2<=1} is achieved from ${\text{tr}(\rho^2)\leq1}$ via a different method (see also \cite{Klappenecker05}).
Using their result, that is \eqref{p^2<=1}, two tight URs are obtained in
\cite{Sanchez-Ruiz95,Ballester07} for ${d+1}$ MUBs.
In the case of ${d=2}$, these relations become \eqref{H-UR-JxJyJz} and \eqref{H2-UR-JxJyJz}.
For the cubic QC due to~\eqref{0<=S_3}, we need to express \eqref{tr_rho3-exp}
in terms of the probabilities.
In the next section, \eqref{tr_rho3-exp} is explicitly given for a qutrit.
Higher degree QCs for the Weyl operators and for the MUBs can be achieved---from~\eqref{0<=S_n}---by adopting the general formalism of Sec.~\ref{sec:QC}
like above.
The Weyl group exists for every $d$ \cite{Weyl32,Durt10,Englert06}, whereas a maximal set of ${d+1}$ MUBs is only known for a prime power dimension \cite{Wootters89,Bandyopadhyay02,Durt10}.
MUBs are \emph{optimal} for the quantum state estimation \cite{Ivanovic81,Wootters89}, where the QCs can be employed for
the validation of an estimated state.
\section{Qutrit and spin-1 system}\label{sec:qutrit}
In the case of ${d\geq3}$, there is a
\emph{cubic} QC as a result of \eqref{0<=S_3}.
For a qutrit (${d=3}$), let us first express $\text{tr}(\rho^m)$ of \eqref{tr(rho^m)-stnd} for ${m=1,2,3}$:
\begin{eqnarray}
\label{tr(rho)}
\text{tr}(\rho)&=&
\texttt{r}_{00}+\texttt{r}_{11}+\texttt{r}_{22}\,,\\
\label{tr(rho^2)}
\text{tr}(\rho^2)&=&
{\texttt{r}_{00}}^2+{\texttt{r}_{11}}^2+{\texttt{r}_{22}}^2+\nonumber\\
&& 2\,\big(\,|\texttt{r}_{01}|^2+|\texttt{r}_{02}|^2+|\texttt{r}_{12}|^2\,\big)\,,\quad\mbox{and}\quad
\\
\label{tr(rho^3)}
\text{tr}(\rho^3)&=&
{\texttt{r}_{00}}^3+{\texttt{r}_{11}}^3+{\texttt{r}_{22}}^3+\nonumber\\
&& 3\,\texttt{r}_{00}\,\big(\,|\texttt{r}_{01}|^2+|\texttt{r}_{02}|^2\,\big)+\nonumber\\
&& 3\,\texttt{r}_{11}\,\big(\,|\texttt{r}_{01}|^2+|\texttt{r}_{12}|^2\,\big)+\nonumber\\
&& 3\,\texttt{r}_{22}\,\big(\,|\texttt{r}_{02}|^2+|\texttt{r}_{12}|^2\,\big)+\nonumber\\
&& 3\,\big(\,\texttt{r}_{01}\,\texttt{r}_{12}\,\texttt{r}_{20}+ \overline{\texttt{r}}_{01}\,\overline{\texttt{r}}_{12}\,\overline{\texttt{r}}_{20}\,\big)
\end{eqnarray}
[for $\texttt{r}_{jk}$, see \eqref{r_kj}].
Here we consider two sets of operators: set~\eqref{Unitary-Basis} of the Weyl operators for a qutrit and a set of spin-1 operators.
In the following, we demonstrate
how to obtain $\text{tr}(\rho^m)$, straight from~\eqref{tr(rho)}--\eqref{tr(rho^3)},
in terms of the expectation values of operators
in a given set without exploiting their algebraic properties.
Then, one gains automatically all the QCs from~\eqref{1=S_1}--\eqref{0<=S_3}.
In \eqref{X_i} and \eqref{Z_i}, the Weyl operators are expressed as linear combinations of operators belonging to standard basis~\eqref{Stnd-Basis}.
Now we write
\begin{eqnarray}
\label{proj-op}
|j\rangle\langle k|
&=&X^j\,|0\rangle\langle 0|\,X^{-k}
=X^j\left[\tfrac{1}{d}
\sum_{z\,\in\,\mathbb{Z}_d} Z^z\,
\right ] X^{-k}\nonumber\\
&=& \tfrac{1}{d}
\sum_{z\,\in\,\mathbb{Z}_d}\, \omega^{-kz}\, X^{j-k}\,Z^z
\end{eqnarray}
by using \eqref{X_i}, \eqref{Z_i}, and \eqref{Weyl commutation}; see
also \cite{Durt10}.
According to Born's rule~\eqref{Born rule-1}, the mean value is a linear function of an operator, so we obtain every $\texttt{r}_{kj}$ of \eqref{r_kj} as a linear combination of ${\langle X^xZ^z\rangle_\rho}$ through \eqref{proj-op}.
This constitutes a matrix equation such as \eqref{expt-values-std}.
By substituting $\texttt{r}_{kj}$ with
the associated linear combination in \eqref{tr(rho)}--\eqref{tr(rho^3)}, one can achieve~$\text{tr}(\rho^m)$
in terms of ${\langle X^x\,Z^z\rangle}$ for a qutrit:
\begin{eqnarray}
\label{tr_rho3-exp-qutrit}
&&\text{tr}(\rho^3)=\nonumber\\
&&\tfrac{1}{9}\,\big[\,
1+\langle X\rangle^3+ \langle X^2\rangle^3+\langle XZ\rangle^3+
\langle X^2Z^2\,\rangle^3+\nonumber\\
&&
\qquad\quad
\langle XZ^2\rangle^3+ \langle X^2Z\rangle^3+\langle Z\rangle^3+
\langle Z^2\,\rangle^3+\nonumber\\
&&\qquad
6\,\big(\,|\langle X\rangle|^2+|\langle XZ\rangle|^2
+|\langle XZ^2\rangle|^2+|\langle Z\rangle|^2\,\big)
\nonumber\\
&&
-\,3\,\big(
\langle X\rangle\langle XZ\rangle\langle XZ^2\rangle+
\langle X^2\rangle\langle X^2Z\rangle\langle X^2Z^2\rangle+\nonumber\\
&&\qquad
\langle Z\rangle\langle XZ\rangle\langle X^2Z\rangle+
\langle Z^2\rangle\langle XZ^2\rangle\langle X^2Z^2\rangle+\nonumber\\
&&\qquad
\omega\,\langle Z\rangle\langle X^2\rangle\langle XZ^2\rangle+
\omega\,\langle Z^2\rangle\langle X\rangle\langle X^2Z\rangle+\nonumber\\
&&\qquad
\omega^2\langle Z\rangle\langle X\rangle\langle X^2Z^2\rangle+
\omega^2\langle Z^2\rangle\langle X^2\rangle\langle XZ\rangle
\big)
\big]\,,\qquad\quad
\end{eqnarray}
where $\omega=\exp(\text{i}\tfrac{2\pi}{3})$, and the term $6(\cdots)$ is
$3(3\text{tr}(\rho^2)-1)$.
In Sec.~\ref{sec:unitary-basis}, we get \eqref{tr_rho2-exp} and \eqref{tr_rho3-exp} from \eqref{tr(rho^m)}
by exploiting algebraic properties~\eqref{Weyl commutation} and \eqref{traceless}.
One can check that both methods deliver the same result.
For the next example, note that a spin-1 particle is a ${d=3}$-level quantum system (qutrit) if we consider only the spin degree of freedom.
Here we take a set of three Hermitian operators from Chap.~7 in \cite{Peres95}:
\begin{eqnarray}
\label{Jx}
J_x&:=&-\text{i}\big(|0\rangle\langle 1|-|1\rangle\langle 0|\big)\,,
\\
\label{Jy}
J_y&:=&-\text{i}\big(|0\rangle\langle 2|-|2\rangle\langle 0|\big)\,,
\quad\mbox{and}
\\
\label{Jz}
J_z&:=&-\text{i}\big(|1\rangle\langle 2|-|2\rangle\langle 1|\big)\,.
\end{eqnarray}
They obey the commutation relation ${J_xJ_y-J_yJ_x=\text{i}J_z}$ plus those obtained by the cyclic permutations of ${x,y,z}$, and thus they represent ${\text{spin-}1}$ observables.
One can check that
${J_x, J_y,J_z}$ with ${J_x^{\,2},J_y^{\,2},J_z^{\,2}}$ and the
anticommutators
\begin{equation}
\label{anti-commutators}
K_{xy}=J_xJ_y+J_yJ_x\,,\quad
K_{yz}\quad\mbox{and}\quad\,K_{zx}
\end{equation}
(obtained by the cyclic permutations)
constitute a set of nine linearly independent operators; hence they form a Hermitian-basis of $\mathscr{B}(\mathscr{H}_3)$.
It is, however, not an orthonormal basis with respect to inner product~\eqref{HS-inner-pro}.
One can recognize that ${J_x,J_y,J_z}$ and ${K_{xy},K_{yz},K_{zx}}$
are the Gell-Mann operators \cite{Gell-Mann61},
but ${J_x^{\,2},J_y^{\,2},J_z^{\,2}}$ are not.
We want to emphasize that the QCs on their average values
can be derived from \cite{Kimura03,Byrd03}.
So the following analysis is merely an
alternative procedure that does not require the Lie algebra of $SU(3)$.
After expressing the elements of standard basis~\eqref{Stnd-Basis}
in terms of the spin operators, we can write the average values as
\begin{eqnarray}
\label{<J>}
\texttt{r}_{00}&=&\tfrac{1}{2}\left(\hphantom{-}\langle J_x^{\,2}\rangle+
\langle J_y^{\,2}\rangle-\langle J_z^{\,2}\rangle\right)\,,
\nonumber\\
\texttt{r}_{11}&=&\tfrac{1}{2}\left(\hphantom{-}\langle J_x^{\,2}\rangle-
\langle J_y^{\,2}\rangle+\langle J_z^{\,2}\rangle\right)\,,
\nonumber\\
\texttt{r}_{22}&=&\tfrac{1}{2}\left(-\langle J_x^{\,2}\rangle+
\langle J_y^{\,2}\rangle+\langle J_z^{\,2}\rangle\right)\,,
\\
\texttt{r}_{01}&=&\tfrac{1}{2}\left(\hphantom{-}\langle K_{yz}\rangle-
\text{i}\,\langle J_x\rangle\right)=\overline{\texttt{r}}_{10}\,,
\nonumber\\
\texttt{r}_{02}&=&\tfrac{1}{2}\left(-\langle K_{zx}\rangle-
\text{i}\,\langle J_y\rangle\right)=\overline{\texttt{r}}_{20}\,,
\quad\mbox{and}
\nonumber\\
\texttt{r}_{12}&=&\tfrac{1}{2}\left(\hphantom{-}\langle K_{xy}\rangle-
\text{i}\,\langle J_z\rangle\right)=\overline{\texttt{r}}_{21}\,.
\nonumber
\end{eqnarray}
This set of equations frames a
matrix equation of the kind in \eqref{expt-values-std}.
Employing Eqs.~\eqref{<J>}, we can rephrase \eqref{tr(rho)}--\eqref{tr(rho^3)}
as
\begin{eqnarray}
\label{tr(rho)-J}
\text{tr}(\rho)&=&\tfrac{1}{2}\left(\langle J_x^{\,2}\rangle+
\langle J_y^{\,2}\rangle+\langle J_z^{\,2}\rangle\right)\,,
\\
\label{tr(rho^2)-J}
\text{tr}(\rho^2)&=&-1+
\langle J_x^{\,2}\rangle^2+
\langle J_y^{\,2}\rangle^2+\langle J_z^{\,2}\rangle^2+\nonumber\\
&&\quad
\tfrac{1}{2}\big(\langle J_x\rangle^2+\langle J_y\rangle^2+
\langle J_z\rangle^2+\nonumber\\
&&\qquad
\langle K_{xy}\rangle^2+\langle K_{yz}\rangle^2+\langle K_{zx}\rangle^2
\big)\,,\ \mbox{and}\qquad\
\\
\label{tr(rho^3)-J}
\text{tr}(\rho^3)&=&1-
3\,\langle J_x^{\,2}\rangle\langle J_y^{\,2}\rangle\langle J_z^{\,2}\rangle+
\nonumber\\
&&\
\tfrac{3}{4}\,\big[\,
\left(\langle J_x\rangle^2+\langle K_{yz}\rangle^2\right)\langle J_x^{\,2}\rangle+
\nonumber\\
&&\quad\ \
\left(\langle J_y\rangle^2+\langle K_{zx}\rangle^2\right)\langle J_y^{\,2}\rangle+
\nonumber\\
&&\quad\ \
\left(\langle J_z\rangle^2+\langle K_{xy}\rangle^2\right)\langle J_z^{\,2}\rangle
\nonumber\\
&&\
-\,\langle K_{xy}\rangle\langle K_{yz}\rangle\langle K_{zx}\rangle+
\langle J_{x}\rangle\langle K_{xy}\rangle\langle J_{y}\rangle
\nonumber\\
&&\quad
+\,\langle J_{y}\rangle\langle K_{yz}\rangle\langle J_{z}\rangle+
\langle J_{z}\rangle\langle K_{zx}\rangle\langle J_{x}\rangle
\,\big]\,.
\end{eqnarray}
Here, in each example, one can clearly perceive $\text{tr}(\rho^2)$ and $\text{tr}(\rho^3)$ as quadratic and cubic polynomials of the mean values.
Plugging \eqref{tr(rho)-J}--\eqref{tr(rho^3)-J} in \eqref{1=S_1}--\eqref{0<=S_3}, one captures all the---linear, quadratic, and cubic---QCs for the spin-1 operators.
The linear constraint ${\langle J_x^{\,2}+J_y^{\,2}+J_z^{\,2}\rangle=\langle 2I\rangle}$
is used to get \eqref{tr(rho^2)-J} and \eqref{tr(rho^3)-J}
in the above forms.
Now, let us denote
$J_x,J_y,J_z,K_{yz},K_{zx},K_{xy},
J_x^{\,2},J_y^{\,2},J_z^{\,2}$
by ${A_1,\cdots,A_9}$, respectively.
In this case, every pure state $\rho_\text{pure}=|\psi\rangle\langle\psi|$ [for ${|\psi\rangle}$, see \eqref{|psi>}] of a qutrit delivers an extreme point of
the allowed region $\mathcal{E}$, and the extreme points can be parameterized as
\begin{eqnarray}
\label{J-pure}
\langle A_1\rangle_{\rho_\text{pure}}&=&
\sin 2\theta_0 \cos\theta_1 \sin\phi_1\,,
\nonumber\\
\langle A_2\rangle_{\rho_\text{pure}}&=&
\sin 2\theta_0 \sin\theta_1 \sin\phi_2\,,
\nonumber\\
\langle A_3\rangle_{\rho_\text{pure}}&=&
-(\sin\theta_0)^2 \sin 2\theta_1 \sin(\phi_1-\phi_2)\,,
\nonumber\\
\langle A_4\rangle_{\rho_\text{pure}}&=&
\sin 2\theta_0 \cos\theta_1 \cos\phi_1\,,
\nonumber\\
\langle A_5\rangle_{\rho_\text{pure}}&=&
-\sin 2\theta_0 \sin\theta_1 \cos\phi_2\,,
\\
\langle A_6\rangle_{\rho_\text{pure}}&=&
(\sin\theta_0)^2 \sin 2\theta_1 \cos(\phi_1-\phi_2)\,,
\nonumber\\
\langle A_7\rangle_{\rho_\text{pure}}&=&
(\cos\theta_0)^2+(\sin\theta_0)^2(\cos\theta_1)^2\,,
\nonumber\\
\langle A_8\rangle_{\rho_\text{pure}}&=&
(\cos\theta_0)^2+(\sin\theta_0)^2(\sin\theta_1)^2\,,
\quad\mbox{and}\qquad
\nonumber\\
\langle A_9\rangle_{\rho_\text{pure}}&=&
(\sin\theta_0)^2\,,
\nonumber
\end{eqnarray}
where ${\theta_0,\theta_1\in[0,\tfrac{\pi}{2}]}$ and ${\phi_1,\phi_2\in[0,2\pi)}$.
By putting expectation values \eqref{J-pure} in \eqref{tr(rho)-J}--\eqref{tr(rho^3)-J}, one can verify that ${\text{tr}(\rho_\text{pure}^{\,m})=1}$ for all $m=1,2,$ and 3.
The minimum and maximum eigenvalues of every operator in ${\{A_1,\cdots,A_6\}}$ are $-1$ and $+1$, and of every operator in
${\{A_7,A_8,A_9\}}$ they are 0 and $1$, respectively. Taking \eqref{Adot}--\eqref{umax(A)}, we formulate uncertainty or certainty measures for $\{A_i\}_{i=1}^9$, and a few combined measures appear in the following relations:
\begin{eqnarray}
\label{H-J}
6\ln 2 &\leq& \sum_{i=1}^{9}H(\langle A_i\rangle)\,,
\\
\label{u1/2-J}
3 + 6 \sqrt{2} &\leq& \sum_{i=1}^{9}u_{\sfrac{1}{2}}(\langle A_i\rangle)\,,
\\
\label{u2-J}
&& \sum_{i=1}^{9}u_{2}(\langle A_i\rangle)\leq 6\,,\quad\mbox{and}
\\
\label{umax-J}
&& \sum_{i=1}^{9}u_{\text{max}}(\langle A_i\rangle)\leq 6.51702\,.
\end{eqnarray}
As described in Sec.~\ref{sec:QC}, we find the absolute minimum of a concave function and
maximum of a convex function by putting \eqref{J-pure} in the above functions
and changing the four parameters $\theta$'s and $\phi$'s.
As a result, we achieve tight URs \eqref{H-J} and \eqref{u1/2-J} and CRs \eqref{u2-J} and \eqref{umax-J} for the nine spin-1 observables.
The basis ${\mathcal{B}=\{|0\rangle,|1\rangle,|2\rangle\}}$ in \eqref{B_i} is a common eigenbasis of ${\{A_7,A_8,A_9\}}$; a qutrit's state
$\rho=|j\rangle\langle j|$ that corresponds to a ket in $\mathcal{B}$ saturates inequalities \eqref{H-J}--\eqref{u2-J}.
For one pure state that saturates CR \eqref{umax-J}, the corresponding parameters are
\begin{eqnarray}
\label{umax-ang}
\theta_0 = 0.482720\,,&&\quad
\theta_1 = 0.785398\,,\nonumber\\
\phi_1 = 2.520428\,,&&\quad
\phi_2 = 3.762757\,.
\end{eqnarray}
Since the square of every operator in the set ${\{A_i\}_{i=1}^9}$ lies in the set,
\begin{eqnarray}
\label{A^2}
(A_1)^2&=&(A_4)^2=(A_7)^2=A_7\,,\nonumber\\
(A_2)^2&=&(A_5)^2=(A_8)^2=A_8\,,\quad\mbox{and}\\
(A_3)^2&=&(A_6)^2=(A_9)^2=A_9\,,\nonumber
\end{eqnarray}
a sum of (the square of) the standard deviations
${\Delta A_i}$ [see \eqref{std-dev}] acts
as a concave function on the allowed region for the set.
As above we reach the global minima and thus establish the tight URs
\begin{eqnarray}
\label{std-J 9}
4 &\leq& \sum_{i=1}^{9}\Delta A_i\,,\\
\label{std-J 6}
1 + 2\sqrt{2} &\leq& \sum_{i=1}^{6}\Delta A_i\,,
\\
\label{aq-std-J}
\tfrac{10}{3} &\leq& \sum_{i=1}^{9}\big(\Delta A_i\big)^2\,,
\quad\mbox{and}\quad
\tfrac{8}{3} \leq \sum_{i=1}^{6}\big(\Delta A_i\big)^2\,.
\qquad\quad
\end{eqnarray}
URs \eqref{std-J 9} and \eqref{std-J 6} are saturated by
the eigenstates of $A_i$, $i=1,\cdots,6$,
associated with 0 and the non-zero eigenvalues, respectively.
The null-space (eigenspace associated with 0) of $A_i$
is the linear span of a ket in $\mathcal{B}$.
The equal superposition kets
${\tfrac{1}{\sqrt{3}}(|0\rangle+e^{\text{i}\phi_1}|1\rangle+e^{\text{i}\phi_2}|2\rangle)}$ provide the minimum uncertainty (pure) states for both the URs in \eqref{aq-std-J}.
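These bounds are straightforward to probe numerically; a minimal sketch (assuming \texttt{numpy}; the random search and the helper \texttt{total\_std} are only illustrative) evaluates ${\sum_{i=1}^{9}\Delta A_i}$ on random kets and on $|0\rangle\langle 0|$, which saturates \eqref{std-J 9}:
\begin{verbatim}
import numpy as np

Jx = -1j * np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 0]])     # Eq. (Jx)
Jy = -1j * np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]])     # Eq. (Jy)
Jz = -1j * np.array([[0, 0, 0], [0, 0, 1], [0, -1, 0]])     # Eq. (Jz)
K = lambda A, B: A @ B + B @ A                              # Eq. (anti-commutators)
ops = [Jx, Jy, Jz, K(Jy, Jz), K(Jz, Jx), K(Jx, Jy), Jx @ Jx, Jy @ Jy, Jz @ Jz]

def total_std(psi):
    psi = psi / np.linalg.norm(psi)
    devs = []
    for A in ops:
        m1 = np.real(psi.conj() @ A @ psi)
        m2 = np.real(psi.conj() @ A @ A @ psi)
        devs.append(np.sqrt(max(m2 - m1 ** 2, 0.0)))
    return sum(devs)

rng = np.random.default_rng(6)
samples = [total_std(rng.normal(size=3) + 1j * rng.normal(size=3))
           for _ in range(10000)]
print(min(samples))                        # stays above the bound 4
print(total_std(np.array([1.0, 0, 0])))    # exactly 4: |0><0| saturates Eq. (std-J 9)
\end{verbatim}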
\section{Spin-$\mathsf{j}$ operators}\label{sec:spin-j}
A spin-\textsf{j} particle is a quantum system of ${d=2\,\mathsf{j}+1}$ levels provided we consider only the spin degree of freedom, and \textsf{j} can be
${\tfrac{1}{2}, 1,\tfrac{3}{2},2,\cdots\,}$.
Let us take the spin-\textsf{j} operators
${J_x=\tfrac{1}{2}(J_++J_-)}$,
${J_y=\tfrac{1}{2\text{i}}(J_+-J_-)}$, and $J_z$
whose actions on the eigenbasis
${\{|\mathsf{m}\rangle : \mathsf{m=j,j}-1,\cdots,-\mathsf{j}\}}$ of $J_z$
are described as
\begin{eqnarray}
\label{JpmJz}
J_\pm\,|\mathsf{m}\rangle&=&
\sqrt{\mathsf{(j\mp m)(j\pm m}+1)}\;
|\mathsf{m}\pm 1\rangle\quad\mbox{and}\quad\\
J_z\,|\mathsf{m}\rangle&=&\mathsf{m}\,|\mathsf{m}\rangle\,.
\end{eqnarray}
For ${\mathsf{j}=\tfrac{1}{2}}$, the vector operator ${\vec{J}:=(J_x,J_y,J_z)}$
is the same as the Pauli vector operator ${\vec{\sigma}:=(X,Y,Z)}$ of Appendix~\ref{sec:qubit}
up to a factor of $\tfrac{1}{2}$.
In \eqref{Jx}--\eqref{Jz}, the spin-1 operators are represented
in the common eigenbasis $\mathcal{B}$ of
${\{J_x^{\,2},J_y^{\,2},J_z^{\,2}\}}$.
The permitted region $\mathcal{E}$ for the three spin-observables is bounded by the QC
\begin{equation}
\label{JxJyJz,QC}
{\langle J_x\rangle}^2+{\langle J_y\rangle}^2+{\langle J_z\rangle}^2\leq \mathsf{j}^{\,2}\,,
\end{equation}
which says that the length of the vector
${(\langle J_x\rangle,\langle J_y\rangle,\langle J_z\rangle)}$
cannot be more than $\mathsf{j}$ \cite{Atkins71}.
So $\mathcal{E}$ is the closed ball of radius $\mathsf{j}$ in hyperrectangle~\eqref{hyperrectangle} that is the cube $[-\mathsf{j},\mathsf{j}]^{\times 3}$ here.
Note that, except ${\mathsf{j}=\tfrac{1}{2}}$, an interior point of $\mathcal{E}$ corresponds to \emph{not one but many} (pure as well as mixed) quantum states.
However, every extreme point of $\mathcal{E}$ comes from a unique pure state
${\chi(\alpha,\beta)=|\alpha,\beta\rangle\langle\alpha,\beta|}$, where
\begin{equation}
\label{bloch-ket}
|\alpha,\beta\rangle=\sum_{\mathsf{m=-j}}^{\mathsf{j}}
{\scriptstyle\sqrt{\tfrac{(2\mathsf{j})!}{(\mathsf{j+m})!\,(\mathsf{j-m})!}}
\left(\cos\tfrac{\alpha}{2}\right)^{\mathsf{j+m}}
\left(\sin\tfrac{\alpha}{2}\right)^{\mathsf{j-m}}
e^{-\text{i}\mathsf{m}\beta}}|\mathsf{m}\rangle
\end{equation}
is known as the angular momentum (or atomic) coherent state-vector \cite{Atkins71,Arecchi72}.
With ${J_x^{\,2}+J_y^{\,2}+J_z^{\,2}=\mathsf{j(j}+1)I}$, QC~\eqref{JxJyJz,QC} can be turned into a tight UR
\begin{equation}
\label{JxJyJz,UR}
\mathsf{j}\leq
(\Delta J_x)^2+(\Delta J_y)^2+(\Delta J_z)^2\,,
\end{equation}
for which all the coherent states are the minimum uncertainty states (see Chap.~10 in \cite{Peres95}).
UR~\eqref{JxJyJz,UR} is also captured in \cite{Larsen90,Abbott16,Hofmann03,Dammeier15}.
In fact, \eqref{JxJyJz,QC} can also be interpreted as a CR because its left-hand side
is a convex function of the expectation values.
In \cite{Dammeier15},
$(\Delta\, \widehat{\eta}.\vec{J}\,)^2$ is studied as a function of the unit vector ${\widehat{\eta}\in\mathbb{R}^3}$ for a fixed state $\rho$, and then the uncertainty regions of
${((\Delta J_x)^2,(\Delta J_y)^2,(\Delta J_z)^2)}$ are plotted by taking all $\rho$'s. Various URs are also obtained there for the three operators ${J_x,J_y,}$ and $J_z$.
Our regions $\mathcal{E}$ and $\mathcal{R}$'s are different from the uncertainty regions: $\mathcal{E}$ and $\mathcal{R}$
are in the space of expectation values, and both are convex sets.
We can parametrize the extreme points of $\mathcal{E}$ as
\begin{eqnarray}
\label{JxJyJz-para}
\langle\alpha,\beta|J_x|\alpha,\beta\rangle&=&\mathsf{j}\,\sin\alpha\cos\beta\,,
\nonumber\\
\langle\alpha,\beta|J_y|\alpha,\beta\rangle&=&\mathsf{j}\,\sin\alpha\sin\beta\,,
\\
\langle\alpha,\beta|J_z|\alpha,\beta\rangle&=&\mathsf{j}\,\cos\alpha\,,
\nonumber
\end{eqnarray}
where ${\alpha\in[0,\pi]}$ and ${\beta\in[0,2\pi)}$, and can define different uncertainty or certainty measures on $\mathcal{E}$ using \eqref{Adot}--\eqref{umax(A)}.
Since the minimum and maximum eigenvalues of $J_i$ for every ${i=x,y,z}$ are $-\mathsf{j}$ and $+\mathsf{j}$, respectively,
\begin{equation}
\label{Jdot}
\langle\dot{J}_i\,\rangle=
\tfrac{1}{2}\big(1-\tfrac{\langle J_i\rangle}{\mathsf{j}}\big)
\quad\mbox{and}\quad
\langle\mathring{J}_i\,\rangle=
\tfrac{1}{2}\big(1+\tfrac{\langle J_i\rangle}{\mathsf{j}}\big)\,,
\end{equation}
which
are functions of $\alpha$ and $\beta$ on the sphere specified by \eqref{JxJyJz-para}.
By varying the two angles we reach the tight lower and upper bounds of the uncertainty and certainty measures presented as follows
\begin{eqnarray}
\label{H-UR-JxJyJz}
2\ln 2&\leq&\sum_{i=x,y,z}H(\langle J_i\rangle)\,,\\
\label{H2-UR-JxJyJz}
3\ln(\tfrac{3}{2})&\leq&\sum_{i=x,y,z}H_2(\langle J_i\rangle)\,,\\
\label{u-UR-JxJyJz}
1+2\sqrt{2}&\leq& \sum_{i=x,y,z}
u_{\sfrac{1}{2}}(\langle J_i\rangle)\,,\\
\label{u2-UR-JxJyJz}
&& \sum_{i=x,y,z}u_2(\langle J_i\rangle)\leq 2 \,,
\quad \mbox{and}\qquad\\
\label{umax-UR-JxJyJz}
&& \sum_{i=x,y,z}u_\text{max}(\langle J_i\rangle)\leq
\tfrac{1}{2}(3+\sqrt{3}) \,,
\end{eqnarray}
where ${H_2=-\ln(u_2)}$ is like the R\'{e}nyi entropy \cite{Renyi61}
of order~2.
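Since these measures depend only on ${\langle J_i\rangle/\mathsf{j}}$, and their extrema over $\mathcal{E}$ are attained on the extreme points \eqref{JxJyJz-para}, the bounds \eqref{H-UR-JxJyJz}--\eqref{umax-UR-JxJyJz} can be recovered by a simple scan over $(\alpha,\beta)$; a sketch (assuming \texttt{numpy}) is:
\begin{verbatim}
import numpy as np

alpha = np.linspace(0, np.pi, 601)[:, None]
beta = np.linspace(0, 2 * np.pi, 1201)[None, :]
n = np.stack([np.sin(alpha) * np.cos(beta),        # <J_x>/j, Eq. (JxJyJz-para)
              np.sin(alpha) * np.sin(beta),        # <J_y>/j
              np.cos(alpha) * np.ones_like(beta)]) # <J_z>/j

dot, ring = (1 - n) / 2, (1 + n) / 2               # Eq. (Jdot)
xlogx = lambda x: x * np.log(np.clip(x, 1e-300, None))

H = -(xlogx(dot) + xlogx(ring)).sum(axis=0)
H2 = -np.log(dot ** 2 + ring ** 2).sum(axis=0)
u_half = (np.sqrt(dot) + np.sqrt(ring)).sum(axis=0)
u_2 = (dot ** 2 + ring ** 2).sum(axis=0)
u_max = np.maximum(dot, ring).sum(axis=0)

print(H.min(), 2 * np.log(2))               # Eq. (H-UR-JxJyJz)
print(H2.min(), 3 * np.log(1.5))            # Eq. (H2-UR-JxJyJz)
print(u_half.min(), 1 + 2 * np.sqrt(2))     # Eq. (u-UR-JxJyJz)
print(u_2.max(), 2)                         # Eq. (u2-UR-JxJyJz)
print(u_max.max(), (3 + np.sqrt(3)) / 2)    # Eq. (umax-UR-JxJyJz)
\end{verbatim}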
\begin{figure}
\caption{From top-left to bottom-right, along the rows, the first region is the permissible region $\mathcal{E}$ for $\{J_x,J_y,J_z\}$ with ${\mathsf{j}=2}$; the other regions are $\mathcal{R}^{\text{spin}}_H$, $\mathcal{R}^{\text{spin}}_{H_2}$, and $\mathcal{R}^{\text{spin}}_{u_{\text{max}}}$.}
\label{fig:regions-MUBs}
\end{figure}
All \eqref{H-UR-JxJyJz}--\eqref{umax-UR-JxJyJz}
hold for every ${\mathsf{j}=\tfrac{1}{2}, 1,\tfrac{3}{2},2,\cdots\,}$
and hence in every dimension ${d=2\,\mathsf{j}+1}$, and they are saturated by some
angular momentum coherent states $\chi(\alpha,\beta)$.
Like \eqref{R_H}, the regions characterized by URs \eqref{H-UR-JxJyJz}, \eqref{H2-UR-JxJyJz}, and CR \eqref{umax-UR-JxJyJz} are denoted here by $\mathcal{R}^{\text{spin}}_H$, $\mathcal{R}^{\text{spin}}_{H_2}$, and $\mathcal{R}^{\text{spin}}_{u_{\text{max}}}$, respectively.
Along with $\mathcal{E}$, they are
displayed in Fig.~\ref{fig:regions-MUBs} for ${\mathsf{j}=2}$.
$\mathcal{E}$ resides in every $\mathcal{R}$, and one can also perceive that ${\mathcal{R}^{\text{spin}}_{H_2}\subset\mathcal{R}^{\text{spin}}_{u_{\text{max}}}}$.
We cannot say right away which of the tight URs, \eqref{H-UR-JxJyJz} or \eqref{H2-UR-JxJyJz}, is superior because neither
$\mathcal{R}^{\text{spin}}_H$ is completely contained in $\mathcal{R}^{\text{spin}}_{H_2}$ nor vice versa.
Similarly, it is difficult to compare \eqref{H-UR-JxJyJz} and \eqref{umax-UR-JxJyJz}
as ${\mathcal{R}^{\text{spin}}_H\not\subset\mathcal{R}^{\text{spin}}_{u_{\text{max}}}}$
and
${\mathcal{R}^{\text{spin}}_H\not\supset\mathcal{R}^{\text{spin}}_{u_{\text{max}}}}$.
If one region is not a subset of the other, then one can take the area of a region as a figure of merit to compare different CRs and/or URs.
However, in this paper, mostly those cases are reported where one region is completely contained in another.
Since \eqref{JxJyJz,QC} and \eqref{u2-UR-JxJyJz} are the same, every
angular momentum coherent state saturates \eqref{u2-UR-JxJyJz}.
$\mathcal{E}$ touches the periphery of $\mathcal{R}^{\text{spin}}_H$
at six different points that are related to eigenstates of $J_x,J_y,J_z$ corresponding to their extreme-eigenvalues $\pm\,\mathsf{j}$.
These six pure states are the only minimum uncertainty states for UR~\eqref{H-UR-JxJyJz} as well as
for UR~\eqref{u-UR-JxJyJz}.
The eight coherent states ${\chi(\alpha,\beta)}$---for which $\alpha=\arccos(\tfrac{1}{\sqrt{3}})$ and
${\beta=\tfrac{\pi}{4},\tfrac{3\pi}{4},\tfrac{5\pi}{4},\tfrac{7\pi}{4}}$, and the remaining four can be obtained by changing $\alpha$ into ${\pi-\alpha}$ and
$\beta$ into ${\pi+\beta\, (\text{mod}\, 2\pi)}$---saturate inequalities \eqref{H2-UR-JxJyJz} and \eqref{umax-UR-JxJyJz}.
The permitted region $\mathcal{E}$ touches the boundary of
$\mathcal{R}^{\text{spin}}_{H_2}$ and $\mathcal{R}^{\text{spin}}_{u_{\text{max}}}$
at the associated eight points.
The six cross sections in
$\mathcal{R}^{\text{spin}}_{H_2}$ and $\mathcal{R}^{\text{spin}}_{u_{\text{max}}}$ are due to ${-\mathsf{j}\leq\langle J_i\rangle\leq\mathsf{j}}$ required for every ${i=x,y,z}$.
In the case of ${\mathsf{j}=\tfrac{1}{2}}$, \eqref{JxJyJz,QC} and \eqref{|r|^2<=1}
are equal, $\mathcal{E}$ is the Bloch ball, and all the coherent states
become qubit's pure states.
Corresponding to the eight minimum uncertainty states for UR~\eqref{H2-UR-JxJyJz}, the Bloch vectors are
${\{\pm\widehat{a}_i\}_{i=1}^{4}}$ \cite{Wootters07,Appleby14},
where
\begin{equation}
\label{a1a2a3-v1v2v3}
\begin{pmatrix}
\widehat{a}_1\\
\widehat{a}_2\\
\widehat{a}_3
\end{pmatrix}
= \tfrac{1}{\sqrt{3}}
\begin{pmatrix}
\hphantom{-}1 & -1 & -1 \\
-1 & \hphantom{-}1 & -1 \\
-1 & -1 & \hphantom{-}1
\end{pmatrix}
\begin{pmatrix}
\widehat{v}_1\ \\
\widehat{v}_2\ \\
\widehat{v}_3\
\end{pmatrix}
\end{equation}
and ${\widehat{a}_4=
-\textstyle\sum\nolimits_{i=1}^{3}\widehat{a}_i}$
are given in the $v$-coordinate system [see Appendix].
One can easily deduce that both ${\{\widehat{\mathsf{a}}_i\}_{i=1}^{3}}$ presented in Appendix~\ref{subsec:3settings} and ${\{\widehat{a}_i\}_{i=1}^{3}}$
share the same Gram matrix, \eqref{SIC-POVM-Gram-matrix}.
The two sets of vectors are related by an invertible linear transformation that can be obtained by $\textbf{M}$ in
\eqref{M^-1 SIC} and the square matrix in~\eqref{a1a2a3-v1v2v3}.
Each of ${\{\widehat{\mathsf{a}}_i\}_{i=1}^{4}}$ and ${\{\widehat{a}_i\}_{i=1}^{4}}$ constitutes a SIC-POVM via \eqref{Pi} for a qubit \cite{Rehacek04},
and the latter one is known as the Weyl-Heisenberg covariant SIC-POVM \cite{Appleby09}.
\section{Conclusion and outlook}\label{sec:conclusion}
There are three primary contributions from this article.
First, we provided a basis-independent systematic procedure to
obtain the QCs for any set of operators that act on a qudit's Hilbert space.
The QCs are necessary and sufficient restrictions
that analytically specify the permitted region $\mathcal{E}$ of
the expectation values.
Second, we showed how to define uncertainty and certainty measures
on the allowed region $\mathcal{E}$ and discussed their properties.
With a straightforward mechanism---that is also employed in \cite{Riccardi17,Sehrawat17}---we
achieved tight CRs and URs.
Third, we bounded regions $\mathcal{R}$ by tight CRs or URs in the space of expectation values and exhibited the gap $\mathcal{R}\setminus\mathcal{E}$ between
$\mathcal{R}$ and the allowed region $\mathcal{E}$ through figures.
Our additional contributions are:
(\textit{i}) the QCs for the Weyl operators and the spin observables are reported.
(\textit{ii}) Various tight URs and CRs are obtained for the spin-1 observables
as well as for ${\{J_x,J_y,J_z\}}$ in the case of an arbitrary spin
${\mathsf{j}=\tfrac{1}{2}, 1,\tfrac{3}{2},2,\cdots\,}$.
Since all the extreme points of the permissible region
for ${\{J_x,J_y,J_z\}}$ come from the angular momentum coherent states,
a coherent state is always a minimum uncertainty state for a UR
formulated for the three observables.
(\textit{iii}) The case of a single qubit is thoroughly investigated in Appendix~\ref{sec:qubit}, which includes Schr\"{o}dinger's UR, and tight URs and CRs are presented there for the SIC-POVM.
The choice of an uncertainty measure to get a UR is up to the user.
We have not yet found a single certainty or uncertainty measure
that is better than others in the sense that it always provides
a smaller region $\mathcal{R}$.
In some examples one measure behaves better, whereas in other examples
another does.
To compare different CRs and/or URs, the area (or volume) of $\mathcal{R}$
can be a figure of merit, particularly when one region is not contained in another.
It is, however, not easy to compute such an area.
Naturally, $\mathcal{E}$ lies in all such $\mathcal{R}$'s; however, the primary objective of a UR is to put a constraint not on the mean values but on a combined uncertainty.
To draw a comparison between the QCs and URs, first, we have to put them on an equal footing.
That may or may not be possible because
a QC is primarily a bound on expectation values not, generally, on a combined uncertainty.
URs play very important roles in different branches of physics and mathematics; recently they have been applied in the field of quantum information
(see Sec.~VI in \cite{Coles17}).
One can employ the QCs for those purposes as well as for quantum state estimation \cite{Paris04}, where one can directly use the QCs for the validation of an estimated state.
\begin{acknowledgments}
I am very grateful to Titas Chanda for crosschecking the numerical results.
\end{acknowledgments}
\appendix
\section{Qubit}\label{sec:qubit}
For a qubit (${d=2}$),
the Pauli operators ${X,Y,Z}$ \cite{Pauli27} with
the identity operator $I$
constitute the Hermitian-basis of $\mathscr{B}(\mathscr{H}_2)$
\cite{Kimura03,Byrd03}.
The operators $X$ and $Z$ are defined in \eqref{X_i} and \eqref{Z_i}, respectively, and ${Y=\text{i}XZ}$.
In this basis, we can express a qubit's state as
\begin{equation}
\label{rho-in-IXYZ}
\rho=\tfrac{1}{2}\big(I+\langle X\rangle\, X +\langle Y\rangle\, Y
+\langle Z\rangle\, Z\big)\,,
\end{equation}
where ${\vec{r}:=(\langle X\rangle, \langle Y\rangle,\langle Z\rangle)\in\mathbb{R}^3}$ is the well-known Bloch vector \cite{Bloch46,Bengtsson06} that is the mean value of the Pauli vector operator ${\vec{\sigma}=(X,Y,Z)}$.
Conditions \eqref{1=S_1} and \eqref{0<=S_2} now become
$\langle I\rangle_\rho=1$ and
\begin{equation}
\label{|r|^2<=1}
r^2:=|\vec{r}\,|^2=\langle X\rangle_\rho^2+\langle Y\rangle_\rho^2+\langle Z\rangle_\rho^2
\leq 1\,,
\end{equation}
respectively.
A projective measurement on a qubit can be completely specified by a three-component real unit vector \cite{A}.
So, we begin with three linearly independent unit vectors
${\widehat{a},\widehat{b},\widehat{c}\in\mathbb{R}^3}$, and define three Hermitian operators
\begin{eqnarray}
\label{A-qubit}
A&\,:=\,&\widehat{a}\cdot\vec{\sigma}\,:=\,2\,|a\rangle\langle a|-I\,,
\\
\label{B-qubit}
B&\,:=\,&\widehat{b}\cdot\vec{\sigma}\,:=\,2\,|b\rangle\langle b|-I\,,
\quad\mbox{and}
\\
\label{C-qubit}
C&\,:=\,&\widehat{c}\cdot\vec{\sigma}\,:=\,2\,|c\rangle\langle c|-I\,.
\end{eqnarray}
One can check that ${A^2=I}$ [with \eqref{AB}], hence its eigenvalues are $\pm1$, and ${\langle A\rangle_\rho\in[-1,1]}$ then follows from \eqref{<A>in}.
By definition~\eqref{A-qubit}, ${|a\rangle}$ and
${|a^\perp\rangle}$ (such that ${\langle a|a^\perp\rangle=0}$) are eigenkets of $A$ corresponding to the eigenvalues ${+1}$ and ${-1}$, respectively, and similarly for $B$ and $C$.
One can verify that the inner product between a pair of such operators is
\begin{equation}
\label{angle-bt-axis}
\lgroup A,B\,\rgroup_\textsc{hs}=
2\;\widehat{a}\cdot\widehat{b}=
4\;{|\langle a|b\rangle|}^2-2
\end{equation}
by using
\begin{eqnarray}
\label{AB}
&&AB=(\widehat{a}\cdot\vec{\sigma})(\widehat{b}\cdot\vec{\sigma})
=(\widehat{a}\cdot\widehat{b})\,I+\text{i}(\widehat{a}\times\widehat{b})
\cdot\vec{\sigma}\,,\quad\\
\label{tr(pauli)}
&&\text{tr}(X)=\text{tr}(Y)=\text{tr}(Z)=0\,,
\quad\mbox{and}\quad
\text{tr}(I)=d=2\,,\qquad
\end{eqnarray}
where ${\widehat{a}\cdot\widehat{b}}$ and ${\widehat{a}\times\widehat{b}}$ are the dot and cross products.
Taking the statistical operator from~\eqref{rho-in-IXYZ} and applying \eqref{angle-bt-axis} and \eqref{tr(pauli)} to the Born rule, \eqref{Born rule-1},
one can get the mean values
\begin{eqnarray}
\label{<A>-qubit}
\langle A\rangle_\rho&=&\widehat{a}\cdot\vec{r}=2p-1\,,
\\
\label{<B>-qubit}
\langle B\rangle_\rho&=&\widehat{b}\cdot\vec{r}=2q-1\,,
\quad\mbox{and}
\\
\label{<C>-qubit}
\langle C\rangle_\rho&=&\widehat{c}\cdot\vec{r}=2s-1\,,
\end{eqnarray}
where
\begin{equation}
\label{pqs}
p=\langle a|\rho|a\rangle\,,\quad
q=\langle b|\rho|b\rangle\,,\quad\mbox{and}\quad
s=\langle c|\rho|c\rangle\,\quad
\end{equation}
are the probabilities [see \eqref{<A>-pro} and \eqref{p-const2}] associated (with $+1$ eigenvalue) to the three projective measurements.
The probabilities $p,q,$ and $s$ are the mean values of three rank-1 projectors
\begin{equation}
\label{proj}
|a\rangle\langle a|=:P\,,\quad
|b\rangle\langle b|=:Q\,,\quad\mbox{and}\quad
|c\rangle\langle c|\,.\quad
\end{equation}
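As a brief check of \eqref{<A>-qubit} (the intermediate steps are ours; only \eqref{rho-in-IXYZ}, \eqref{A-qubit}, \eqref{AB}, and \eqref{tr(pauli)} are used):
\begin{equation*}
\langle A\rangle_\rho=\text{tr}(\rho\,\widehat{a}\cdot\vec{\sigma})
=\tfrac{1}{2}\,\text{tr}\big[\widehat{a}\cdot\vec{\sigma}
+(\vec{r}\cdot\vec{\sigma})(\widehat{a}\cdot\vec{\sigma})\big]
=\tfrac{1}{2}\big[0+2\,\widehat{a}\cdot\vec{r}\,\big]
=\widehat{a}\cdot\vec{r}\,,
\end{equation*}
and ${\langle A\rangle_\rho=2\,\langle a|\rho|a\rangle-1=2p-1}$ follows from ${A=2\,|a\rangle\langle a|-I}$ and ${\langle I\rangle_\rho=1}$; the expressions for $\langle B\rangle_\rho$ and $\langle C\rangle_\rho$ follow in the same way.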
\begin{figure}
\caption{(\textsf{i}) The linearly independent unit vectors $\widehat{a},\widehat{b},\widehat{c}$ together with the orthonormal set $\{\widehat{v}_1,\widehat{v}_2,\widehat{v}_3\}$ obtained from them. (\textsf{ii}) The Bloch sphere in the $v$-coordinate system, with the great circle in the $v_1v_2$-plane.}
\label{fig:abc-vectors}
\end{figure}
By applying the Gram-Schmidt orthonormalization process, we can turn the
linearly independent set ${\{\widehat{a},\widehat{b},\widehat{c}\,\}}$
into an orthonormal set ${\{\widehat{v}_1,\widehat{v}_2,\widehat{v}_3\}}$ of vectors; they are portrayed in Fig.~\ref{fig:abc-vectors}~(\textsf{i}).
The two sets are related through the transformation
\begin{equation}
\label{abc-v1v2v3}
\begin{pmatrix}
\widehat{v}_1\\
\widehat{v}_2\\
\widehat{v}_3
\end{pmatrix}
=
\underbrace{\begin{pmatrix}
1 & 0 & 0 \\
-\tfrac{\widehat{a}.\widehat{b}}
{\scriptstyle\sqrt{1-(\widehat{a}.\widehat{b})^2}}&
\tfrac{1}
{\scriptstyle\sqrt{1-(\widehat{a}.\widehat{b})^2}} & 0 \\
-\tfrac{e}{g} & -\tfrac{f}{g} & \tfrac{1}{g} \\
\end{pmatrix}}_{\displaystyle\textbf{M}^{-1}}
\begin{pmatrix}
\widehat{a}\\
\widehat{b}\\
\widehat{c}
\end{pmatrix}\,,
\end{equation}
where
\begin{eqnarray}
\label{e}
e&=&\tfrac{\widehat{a}.\widehat{c}-
(\widehat{a}.\widehat{b})(\widehat{b}.\widehat{c})}
{1-(\widehat{a}.\widehat{b})^2}\,,
\\
\label{f}
f&=&\tfrac{\widehat{b}.\widehat{c}-
(\widehat{a}.\widehat{b})(\widehat{a}.\widehat{c})}
{1-(\widehat{a}.\widehat{b})^2}\,,\quad\mbox{and}
\\
\label{g}
g&=&\sqrt{
\tfrac{1-(\widehat{a}.\widehat{b})^2-(\widehat{a}.\widehat{c})^2-
(\widehat{b}.\widehat{c})^2+
2\,(\widehat{a}.\widehat{b})(\widehat{a}.\widehat{c})
(\widehat{b}.\widehat{c})}
{1-(\widehat{a}.\widehat{b})^2}}\,.
\end{eqnarray}
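For orientation, a minimal sketch of the Gram-Schmidt construction behind \eqref{abc-v1v2v3} (the intermediate expressions are ours), taking ${\widehat{v}_1:=\widehat{a}}$:
\begin{equation*}
\widehat{v}_1=\widehat{a}\,,\qquad
\widehat{v}_2=\frac{\widehat{b}-(\widehat{a}.\widehat{b})\,\widehat{a}}{\sqrt{1-(\widehat{a}.\widehat{b})^2}}\,,\qquad
\widehat{v}_3=\frac{\widehat{c}-e\,\widehat{a}-f\,\widehat{b}}{g}\,,
\end{equation*}
which reproduces the rows of $\textbf{M}^{-1}$; the coefficients $e$ and $f$ of \eqref{e} and \eqref{f} arise from orthogonalizing $\widehat{c}$ against $\widehat{v}_1$ and $\widehat{v}_2$, and $g$ of \eqref{g} is the resulting normalization.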
We can convert~\eqref{abc-v1v2v3} into
\begin{equation}
\label{ABC-v1v2v3}
\underbrace{\begin{pmatrix}
\langle A\rangle\\
\langle B\rangle\\
\langle C\rangle
\end{pmatrix}}_{\displaystyle\textsf{E}}
=
\underbrace{\begin{pmatrix}
1 & 0 & 0 \\
{\scriptstyle\widehat{a}.\widehat{b}}&
{\scriptstyle\sqrt{1-(\widehat{a}.\widehat{b})^2}} & 0 \\
e+f({\scriptstyle\widehat{a}.\widehat{b}}) &
f{\scriptstyle\sqrt{1-(\widehat{a}.\widehat{b})^2}} &
g \\
\end{pmatrix}}_{\displaystyle\textbf{M}}
\underbrace{\begin{pmatrix}
\widehat{v}_1.\vec{r}\ \\
\widehat{v}_2.\vec{r}\ \\
\widehat{v}_3.\vec{r}\
\end{pmatrix}}_{\displaystyle\texttt{R}}\,,
\end{equation}
which is like Eq.~\eqref{expt-values}.
One can perceive that $\texttt{R}$ is real and it is the representation of Bloch vector $\vec{r}$ in the $v$-coordinate system (made of ${\widehat{v}_1,\widehat{v}_2,\widehat{v}_3}$) [see Fig.~\ref{fig:abc-vectors} (\textsf{ii})].
From top to bottom, the rows in $\textbf{M}$ are the representations of $\widehat{a}$, $\widehat{b}$, and $\widehat{c}$ in the $v$-coordinate system.
Next, one can verify that
\begin{equation}
\label{Gram-matrix}
\textbf{M}\,\textbf{M}^\intercal=
\begin{pmatrix}
1 & {\scriptstyle\widehat{a}.\widehat{b}} & {\scriptstyle\widehat{a}.\widehat{c}}
\\
{\scriptstyle\widehat{a}.\widehat{b}}&
1 & {\scriptstyle\widehat{b}.\widehat{c}} \\
{\scriptstyle\widehat{a}.\widehat{c}} & {\scriptstyle\widehat{b}.\widehat{c}} & 1 \\
\end{pmatrix}
=:\textbf{G}
\end{equation}
is the Gram matrix. Recall that $\intercal$ symbolizes the transpose.
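As a spot check of \eqref{Gram-matrix} (this particular entry is not written out above), the $(2,3)$ element is
\begin{equation*}
(\textbf{M}\,\textbf{M}^\intercal)_{23}
=(\widehat{a}.\widehat{b})\big[e+f(\widehat{a}.\widehat{b})\big]
+f\big[1-(\widehat{a}.\widehat{b})^2\big]
=e\,(\widehat{a}.\widehat{b})+f
=\widehat{b}.\widehat{c}\,,
\end{equation*}
where the last equality follows by inserting \eqref{e} and \eqref{f}.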
After associating the Pauli operators with the orthonormal vectors as
\begin{equation}
\label{v-sigma}
\widehat{v}_1\cdot\vec{\sigma}:=X\,,\quad
\widehat{v}_2\cdot\vec{\sigma}:=Y\,,\quad\mbox{and}\quad
\widehat{v}_3\cdot\vec{\sigma}:=Z\,,
\end{equation}
condition \eqref{|r|^2<=1} emerges as
\begin{equation}
\label{|r|^2<=1-V}
r^2=
(\widehat{v}_1\cdot\vec{r}\,)^2+
(\widehat{v}_2\cdot\vec{r}\,)^2+(\widehat{v}_3\cdot\vec{r}\,)^2
=\texttt{R}^\intercal\texttt{R}\leq 1\,.
\end{equation}
And, with the matrix equation
${\textbf{M}^{-1}\textsf{E}=\texttt{R}}$---gained from \eqref{abc-v1v2v3} or \eqref{ABC-v1v2v3}---we achieve the \emph{quadratic} QC
\begin{eqnarray}
\label{EGE<=1}
\textsf{E}^\intercal
\underbrace{(\textbf{M}^{-1})^\intercal\,\textbf{M}^{-1}}_{\displaystyle\textbf{G}^{-1}}
\textsf{E}
=\texttt{R}^\intercal\texttt{R}\leq 1\,,
\end{eqnarray}
where
\begin{equation}
\label{G^-1}
\textbf{G}^{-1}=
\tfrac{1}{\text{det}(\textbf{G})}
\begin{pmatrix}
{\scriptstyle 1\,-\,(\widehat{b}.\widehat{c})^2} & {\scriptstyle(\widehat{a}.\widehat{c})(\widehat{b}.\widehat{c})-
\widehat{a}.\widehat{b}}&
{\scriptstyle(\widehat{a}.\widehat{b})(\widehat{b}.\widehat{c})-\widehat{a}.\widehat{c}}
\\
{\scriptstyle(\widehat{a}.\widehat{c})(\widehat{b}.\widehat{c})
-\widehat{a}.\widehat{b}}&
{\scriptstyle 1\,-\,(\widehat{a}.\widehat{c})^2} & {\scriptstyle(\widehat{a}.\widehat{b})(\widehat{a}.\widehat{c})\,-\,\widehat{b}.\widehat{c}}
\\
{\scriptstyle(\widehat{a}.\widehat{b})(\widehat{b}.\widehat{c})-\widehat{a}.\widehat{c}} &
{\scriptstyle(\widehat{a}.\widehat{b})(\widehat{a}.\widehat{c})\,-\,\widehat{b}.\widehat{c}} &
{\scriptstyle 1\,-\,(\widehat{a}.\widehat{b})^2}
\end{pmatrix}
\end{equation}
and ${\text{det}(\textbf{G})=({\scriptstyle 1\,-\,(\widehat{a}.\widehat{b})^2})\,g^2}$ [for $g$, see \eqref{g}].
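Expanding the determinant directly (a check added here for completeness),
\begin{equation*}
\text{det}(\textbf{G})=
1-(\widehat{a}.\widehat{b})^2-(\widehat{a}.\widehat{c})^2-(\widehat{b}.\widehat{c})^2
+2\,(\widehat{a}.\widehat{b})(\widehat{a}.\widehat{c})(\widehat{b}.\widehat{c})
=\big(1-(\widehat{a}.\widehat{b})^2\big)\,g^2\,,
\end{equation*}
where the second equality is just the definition of $g$ in \eqref{g}.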
$\textbf{G}^{-1}$ exists whenever the vectors ${\widehat{a},\widehat{b},\widehat{c}}$ are linearly independent; otherwise, see
Appendix~\ref{subsec:2settings}.
One can observe that the matrices \textbf{M} and \textbf{G} are independent of $\rho$ and only depend on the three operators (measurement settings).
The quadratic QC in \eqref{EGE<=1} characterizes the
permissible region $\mathcal{E}$ [defined in \eqref{set-of-expt}] of expectation values~\eqref{<A>-qubit}--\eqref{<C>-qubit}.
The linear transformation in~\eqref{ABC-v1v2v3} maps the Bloch sphere identified by the equality in \eqref{|r|^2<=1-V} onto an ellipsoid \cite{Meyer00}.
So, for a qubit, the allowed region $\mathcal{E}$ will always be
an ellipsoid with its interior \cite{Kaniewski14}.
We want to emphasize that all the material between \eqref{abc-v1v2v3} and \eqref{G^-1} is given in a general form in \cite{Kaniewski14,Kaniewski,Meyer00}.
It is shown in \cite{Kaniewski14} that there is a one-to-one correspondence between a qubit's state $\rho\in\mathcal{S}$ [defined in \eqref{set-of-states}] and a point in $\mathcal{E}$ as long as \textbf{M} is full rank. That can be witnessed through Eq.~\eqref{ABC-v1v2v3}.
The ellipsoid can be parametrized by putting
\begin{equation}
\label{Rpure}
\texttt{R}^\intercal_{\text{pure}}=(\sin2\theta\cos\phi\,,\
\sin2\theta\sin\phi\,,\
\cos2\theta)
\end{equation}
in \eqref{ABC-v1v2v3}, where ${\theta\in[0,\tfrac{\pi}{2}]}$ and ${\phi\in[0,2\pi)}$.
If we put ${r\texttt{R}^\intercal_{\text{pure}}}$ in \eqref{ABC-v1v2v3}---where ${r\in[0,1]}$ is given in \eqref{|r|^2<=1-V}---then we can also reach its interior points.
The column vector $\texttt{R}_{\text{pure}}$ is associated with
$\texttt{R}^{(\text{pure})}_2$ of
\eqref{R-pure-2}.
For this section,
the subscripts of ${\theta_0}$ and ${\phi_1}$ are dropped.
The real symmetric matrix $\textbf{G}$ can be diagonalized with an orthogonal matrix \textbf{O}, hence
${\textbf{O}^\intercal \textbf{G}\, \textbf{O}}$ will be a diagonal matrix with entries $\lambda_1$, $\lambda_2$, and $\lambda_3$ at its main diagonal, which are the eigenvalues of \textbf{G}.
The same \textbf{O} also diagonalizes $\textbf{G}^{-1}$, and $\lambda^{-1}_l$ (${l=1,2,3}$) will be its eigenvalues.
With the orthogonal matrix, we can recast condition~\eqref{EGE<=1}
as
\begin{eqnarray}
\label{ellipsoid-f}
&&
\qquad\quad\frac{{t_1}^2}{\lambda_1}+\frac{{t_2}^2}{\lambda_2}+
\frac{{t_3}^2}{\lambda_3}\leq 1\,,\quad\mbox{where}\\
&&
\label{Ogf}
\textbf{O}^\intercal\,\textsf{E}=
\begin{pmatrix}
t_1\\
t_2\\
t_3
\end{pmatrix}
:=\begin{pmatrix}
\sqrt{\lambda_1}\,\sin\mu\,\cos\nu\\
\sqrt{\lambda_2}\,\sin\mu\,\sin\nu\\
\sqrt{\lambda_3}\,\cos\mu
\end{pmatrix}.
\end{eqnarray}
Through the last equality in \eqref{Ogf}, one can enjoy an alternative parameterization of the ellipsoid, where the parameters ${\mu\in[0,\pi]}$ and ${\nu\in[0,2\pi)}$.
By this technique one can easily find the orientation of the ellipsoid \cite{Meyer00}:
the eigenvectors (that are columns in \textbf{O}) and the eigenvalues $\lambda_i$ of \textbf{G} characterize the semi-principal axes of the ellipsoid.
\subsection{Two measurement settings}\label{subsec:2settings}
In the above investigation, we assumed that ${\{\widehat{a},\widehat{b},\widehat{c}\,\}}$ is a set of linearly independent vectors.
Now suppose $\widehat{c}$ is linearly dependent on $\widehat{a}$ and $\widehat{b}$, say $\widehat{c}=\vartheta_a \widehat{a} +
\vartheta_b \widehat{b} $,
whereas $\widehat{a}$ and $\widehat{b}$ are still linearly independent.
Then, we can discard all the items
related to $\widehat{c}$ in \eqref{ABC-v1v2v3}, and thus achieve an elliptic region $\mathcal{E}$ identified by
\begin{eqnarray}
&&\label{AB-v1v2}
\begin{pmatrix}
2p-1\\
2q-1
\end{pmatrix}=
\underbrace{\begin{pmatrix}
\langle A\rangle\\
\langle B\rangle
\end{pmatrix}}_{\displaystyle\textsf{E}}
=
\begin{pmatrix}
1 & 0 & 0 \\
{\scriptstyle\widehat{a}.\widehat{b}}&
{\scriptstyle\sqrt{1-(\widehat{a}.\widehat{b})^2}} & 0 \\
\end{pmatrix}
\underbrace{\begin{pmatrix}
\widehat{v}_1.\vec{r}\ \\
\widehat{v}_2.\vec{r}\ \\
\widehat{v}_3.\vec{r}\
\end{pmatrix}}_{\displaystyle\texttt{R}}\qquad\\
&&
\label{v1v2<1}
\mbox{with}\quad (\widehat{v}_1.\vec{r}\,)^2+(\widehat{v}_2.\vec{r}\,)^2\leq1 \quad \mbox{or}\\
\label{ellipse}
&&\textsf{E}^\intercal\, \textbf{G}^{-1}\textsf{E} \leq 1
\quad \mbox{with}\quad
\textbf{G}=
\begin{pmatrix}
1 & {\scriptstyle\widehat{a}.\widehat{b}}\\
{\scriptstyle\widehat{a}.\widehat{b}}& 1
\end{pmatrix}.
\end{eqnarray}
We owe \eqref{v1v2<1} and \eqref{ellipse} to \eqref{|r|^2<=1-V}
and \eqref{EGE<=1}, respectively.
The average value ${\langle C\rangle=\vartheta_a \langle A\rangle + \vartheta_b \langle B\rangle}$ is now just a linear function of the other two averages,
and the QC, presented by \eqref{AB-v1v2}--\eqref{ellipse}, carries no contribution from $C$.
To present the QCs, it is sufficient to consider only (linearly) independent
operators \cite{Adagger}.
So we are ignoring $C$ until Appendix~\ref{subsec:3settings}.
One can notice two things with Eq.~\eqref{AB-v1v2}.
First, a whole line segment---that is in the Bloch sphere and parallel to $\widehat{v}_3$ [displayed in Fig.~\ref{fig:abc-vectors}~(\textsf{ii})]---gets mapped onto a single point in $\mathcal{E}$ under the transformation in \eqref{AB-v1v2}.
Second, extreme points---that constitute the ellipse---of $\mathcal{E}$
come from the pure states that lie on the great circle [illustrated in Fig.~\ref{fig:abc-vectors}~(\textsf{ii})] of the Bloch sphere in the $v_1v_2$-plane.
\begin{figure}
\caption{Moving horizontally from top-left to the bottom, the (blue) shaded regions are
$\mathcal{R}_\Delta$, $\mathcal{R}_H$, $\mathcal{R}_{u_{\sfrac{1}{2}}}$, $\mathcal{R}_{u_2}$, and $\mathcal{R}_{u_\text{max}}$, bounded by the tight relations \eqref{std-UR}--\eqref{umax-CR} with ${\epsilon=\tfrac{3}{4}}$, together with the allowed region $\mathcal{E}$ of \eqref{ellipse-pq}.}
\label{fig:regions}
\end{figure}
Equivalently, one can take the projectors $P$ and $Q$ from \eqref{proj} at the places of $A$ and $B$ and then present everything in terms of the probabilities $p$ and $q$ given in \eqref{<A>-qubit}, \eqref{<B>-qubit}, and \eqref{pqs}.
In the case of projectors, hyperrectangle~\eqref{hyperrectangle} becomes the square
${[0,1]^{\times 2}}$, and the allowed region can be described as
\begin{equation}
\label{ellipse-pq}
\mathcal{E}=\{(p,q)\ |\ 0\leq p,q\leq1\ \ \text{obeys}\ \
\eqref{ellipse}\}\,.
\end{equation}
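Written out in terms of the probabilities $p$ and $q$ (a direct substitution of \eqref{<A>-qubit} and \eqref{<B>-qubit} into \eqref{ellipse}; the explicit form is ours), the ellipse constraint reads
\begin{equation*}
(2p-1)^2+(2q-1)^2-2\,(\widehat{a}\cdot\widehat{b})(2p-1)(2q-1)\leq 1-(\widehat{a}\cdot\widehat{b})^2\,.
\end{equation*}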
One can check that ${P=\mathring{A}}$ here [for $\mathring{A}$, see \eqref{Adot}], thus ${H(p)=H(\langle A\rangle)}$, which is in fact true for all the uncertainty and certainty measures in~\eqref{H(A)}--\eqref{umax(A)}.
It is shown in \cite{Sehrawat17} that many tight CRs and URs known from \cite{Larsen90,Busch14-b,Garrett90,Sanchez-Ruiz98,Ghirardi03,Bosyk12,Vicente05,Zozor13,Deutsch83,Maassen88,Rastegin12} can be derived by using
ellipse \eqref{AB-v1v2}, and the same ellipse emerges in \cite{Lenard72,Larsen90,Kaniewski14,Abbott16,Sehrawat17} through different methods.
A few such relations are
\begin{eqnarray}
\label{std-UR}
\sqrt{1-(2\epsilon-1)^2}&\leq&
\Delta P+\Delta Q\,,\\
\label{entropy-UR}
\mbox{if}\ 0.7\leq\epsilon\ \mbox{then}\quad
2\,h(\tfrac{1+\sqrt{\epsilon}}{2})&\leq&H(p)+H(q)\,,
\\
\label{u-UR}
1+\sqrt{\epsilon}+\sqrt{1-\epsilon}
&\leq&u_{\sfrac{1}{2}}(p)+
u_{\sfrac{1}{2}}(q)\,,\\
\label{u2-CR}
\max\{2-\epsilon,1+\epsilon\}&\geq&u_2(p)+u_2(q)\,,
\ \mbox{and}\quad \\
\label{umax-CR}
\max\big\{1+\sqrt{1-\epsilon}\,,\,1+\sqrt{\epsilon}\,\big\}
&\geq&u_{\textrm{max}}(p)+u_{\textrm{max}}(q)\,,\qquad\quad
\end{eqnarray}
where
all the above functions are defined according to \eqref{std-dev}--\eqref{umax(A)} for $P$ and $Q$,
\begin{equation}
\label{entropy}
h(p):=-(p\ln p+({1-p})\ln(1-p))\,,\\
\end{equation}
and
${\widehat{a}\cdot\widehat{b}=2\epsilon-1}$.
The standard deviation $\Delta$, the Shannon entropy $H$ \cite{Shannon48}, and $u_{\sfrac{1}{2}}$ are concave functions that provide tight URs \eqref{std-UR}--\eqref{u-UR}, whereas
the convex functions $u_2$ and $u_{\textrm{max}}$ give tight CRs~\eqref{u2-CR} and \eqref{umax-CR} \cite{Sehrawat17}.
Following \eqref{R_H}, one can define a region
\begin{eqnarray}
\label{region-std-UR}
\mathcal{R}_\Delta=\{(p,q)\ |\ 0\leq p,q\leq1\ \ \text{obeys}\ \
\eqref{std-UR}\}\qquad
\end{eqnarray}
that is limited by UR \eqref{std-UR}.
Similarly, one can bound $\mathcal{R}_H$, $\mathcal{R}_{u_{\sfrac{1}{2}}}$, $\mathcal{R}_{u_2}$, and $\mathcal{R}_{u_\text{max}}$ by tight relations \eqref{entropy-UR}--\eqref{umax-CR}.
Taking ${\epsilon=\tfrac{3}{4}}$, we display these regions
in Fig.~\ref{fig:regions} and realize that
${\mathcal{R}_\Delta\subset\mathcal{R}_{u_{\sfrac{1}{2}}}}$ and
${\mathcal{R}_H\subset \mathcal{R}_{u_2}\subset
\mathcal{R}_{u_\text{max}}}$, which may or may not hold for other $\epsilon$'s.
In contrast, $\mathcal{R}_\Delta$ is not a subset of $\mathcal{R}_H$, nor vice versa.
One can also observe that ${(\epsilon,1)\in\mathcal{E}}$ while ${(1-\epsilon,1)\notin\mathcal{E}}$ in Fig.~\ref{fig:regions}.
At these points, $\epsilon$ and ${1-\epsilon}$ are associated with
the two distinct probability-vectors ${\vec{p}=(\epsilon,1-\epsilon)}$ and ${\vec{p}\,'=(1-\epsilon,\epsilon)}$, respectively.
Under the permutation, ${\vec{p}}$ turns into
$\vec{p}\,'$, which is forbidden.
It is a distinguishing feature of a \emph{quantum} probability
${p_l=\langle a_l|\rho |a_l\rangle}$ [see \eqref{<A>-pro} and \eqref{p-const2}]
that ${p_l}$
is associated not only with the measurement setting $a$ but also with the label $l$ of an outcome.
\subsection{Three measurement settings}\label{subsec:3settings}
Let us start with Schr\"{o}dinger's UR~\cite{Schrodinger32}
\begin{eqnarray}
\label{Schrodinger-UR}
&&
{\scriptstyle 0\,\leq\, \left(\langle A^2\rangle-\langle A\rangle^2\right)\left(
\langle B^2\rangle-\langle B\rangle^2\right)
-|\langle \widetilde{C}\rangle|^2
-|\langle \widetilde{D}\rangle-\langle A\rangle\langle B\rangle|^2}\,,
\nonumber\\
&&
\mbox{where}\quad
{\scriptstyle\widetilde{C}\,:=\frac{AB-BA}{2\,\text{i}}\quad\mbox{and}\quad
\widetilde{D}\,:=\frac{AB+BA}{2}}\quad
\end{eqnarray}
are related to the commutator and the anticommutator, respectively, of $A$ and $B$.
For the qubit operators~\eqref{A-qubit} and \eqref{B-qubit}, one can realize through~\eqref{AB} that
\begin{equation}
\label{c=a cros b}
\widetilde{C}=(\widehat{a}\times\widehat{b})\cdot\vec{\sigma}=
|\widehat{a}\times\widehat{b}|
\underbrace{\widehat{c}\cdot\vec{\sigma}}_{C}\,,\quad
\widehat{c}=\frac{\widehat{a}\times\widehat{b}}{|\widehat{a}\times\widehat{b}|}\,,
\end{equation}
${\widetilde{D}=(\widehat{a}\cdot\widehat{b})\,I}$, and ${A^2=I=B^2}$.
Considering these and
${\scriptstyle|\widehat{a}\times\widehat{b}|=\sqrt{1-(\widehat{a}\cdot\widehat{b})^2}}$,
we can rewrite Schr\"{o}dinger's UR for a qubit as
\begin{equation}
\label{Schrodinger-UR-qubit}
0\leq (1-\langle A\rangle^2)(
1-\langle B\rangle^2)
-({\scriptstyle 1-(\widehat{a}\cdot\widehat{b})^2})\langle C\rangle^2
-({\scriptstyle\widehat{a}\cdot\widehat{b}}\,-\langle
A\rangle\langle B\rangle)^2.
\end{equation}
To test \eqref{Schrodinger-UR-qubit} in experimental scenario~\eqref{expt-situ}, one requires three measurement settings
${\widehat{a},\widehat{b},\widehat{c}}$.
One can choose $\widehat{a}$ and $\widehat{b}$, and then $\widehat{c}$
is fixed by the cross product in~\eqref{c=a cros b}.
If one takes $\widehat{a}$ and $\widehat{b}$ collinear, then
\eqref{Schrodinger-UR-qubit} turns into the trivial statement ${0=0}$.
So we are taking $\widehat{a}$ and $\widehat{b}$ linearly independent.
\begin{figure}
\caption{The region of ${\langle A\rangle,\langle B\rangle,\langle C\rangle\in[-1,1]}$ allowed by Schr\"{o}dinger's UR \eqref{Schrodinger-UR-qubit}, equivalently by QC \eqref{EGE<=1} with the Gram matrix \eqref{SR-Gram-matrix}, for ${\widehat{a}\cdot\widehat{b}=\tfrac{1}{2}}$: an ellipsoid together with its interior.}
\label{fig:ellip-SUR}
\end{figure}
One can check that Schr\"{o}dinger's UR~\eqref{Schrodinger-UR-qubit}
and QC \eqref{EGE<=1} with the Gram matrix
\begin{equation}
\label{SR-Gram-matrix}
\textbf{G}_{\text{Sch}}=
\begin{pmatrix}
1 & {\scriptstyle\widehat{a}\cdot\widehat{b}} & 0 \\[1mm]
{\scriptstyle\widehat{a}\cdot\widehat{b}} & 1 & 0 \\[1mm]
0 &0 & 1 \\
\end{pmatrix}
\end{equation}
are the same thing, and the UR is saturated by every pure state for a qubit.
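A minimal verification of this equivalence (our rearrangement), writing ${c:=\widehat{a}\cdot\widehat{b}}$: since $\textbf{G}_{\text{Sch}}$ is block diagonal, QC \eqref{EGE<=1} becomes
\begin{eqnarray*}
&&\frac{\langle A\rangle^2+\langle B\rangle^2-2\,c\,\langle A\rangle\langle B\rangle}{1-c^2}
+\langle C\rangle^2\leq 1\\
&\Longleftrightarrow&
0\leq(1-\langle A\rangle^2)(1-\langle B\rangle^2)
-(1-c^2)\,\langle C\rangle^2-(c-\langle A\rangle\langle B\rangle)^2\,,
\end{eqnarray*}
which is exactly \eqref{Schrodinger-UR-qubit}.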
Without the last term in \eqref{Schrodinger-UR-qubit}, Schr\"{o}dinger's UR becomes Robertson's UR \cite{Robertson29}, which yields a bigger region
than the allowed region characterized here by \eqref{Schrodinger-UR-qubit}.
Taking ${\widehat{a}\cdot\widehat{b}=\tfrac{1}{2}}$,
the ellipsoid is displayed in Fig.~\ref{fig:ellip-SUR}.
Orthogonal projection of the ellipsoid onto the
${\langle A\rangle\langle B\rangle}$--plane produces the same elliptic region that is identified by~\eqref{ellipse} and shown in Fig.~\ref{fig:regions}.
The parametric forms [obtained via~\eqref{Ogf} and \eqref{ABC-v1v2v3} with \eqref{Rpure}] of the ellipsoid are
\begin{eqnarray}
\label{S-UR-ellipsoid-para}
\underbrace{\begin{pmatrix}
\langle A\rangle\\
\langle B\rangle\\
\langle C\rangle
\end{pmatrix}}_{\displaystyle\textsf{E}}
&=&
\underbrace{\begin{pmatrix}
\tfrac{1}{\sqrt{2}} & \hphantom{-}\tfrac{1}{\sqrt{2}} & 0 \\[1mm]
\tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{2}} & 0 \\[1mm]
0 & \hphantom{-}0 & 1 \\
\end{pmatrix}}_{\displaystyle\textbf{O}}
\begin{pmatrix}
{\scriptstyle\sqrt{1+\widehat{a}\cdot\widehat{b}}}\,\sin\mu\,\cos\nu\\
{\scriptstyle\sqrt{1-\widehat{a}\cdot\widehat{b}}}\,\sin\mu\,\sin\nu\\
\quad \cos\mu
\end{pmatrix}\qquad\\
&=&
\underbrace{\begin{pmatrix}
1 & 0 & 0 \\[1mm]
{\scriptstyle\widehat{a}\cdot\widehat{b}} & {\scriptstyle\sqrt{1-(\widehat{a}\cdot\widehat{b})^2}} & 0 \\[1mm]
0 & 0 & 1 \\
\end{pmatrix}}_{\displaystyle\textbf{M}}
\underbrace{\begin{pmatrix}
\sin2\theta\,\cos\phi\\
\sin2\theta\,\sin\phi\\
\cos2\theta
\end{pmatrix}}_{\displaystyle\texttt{R}_\text{pure}}\ .
\end{eqnarray}
One can easily recognize the semi-principal axes in Fig.~\ref{fig:ellip-SUR} with \eqref{S-UR-ellipsoid-para}.
For the next example, we consider three linearly-independent unit vectors
${\widehat{\mathsf{a}}_1,\widehat{\mathsf{a}}_2,\widehat{\mathsf{a}}_3}$ such that their
Gram matrix is
\begin{equation}
\label{SIC-POVM-Gram-matrix}
\textbf{G}_{\textsc{sic}}=
\begin{pmatrix}
\hphantom{-}1 & -\tfrac{1}{3} & -\tfrac{1}{3} \\[1mm]
-\tfrac{1}{3} & \hphantom{-}1 & -\tfrac{1}{3} \\[1mm]
-\tfrac{1}{3} & -\tfrac{1}{3} & \hphantom{-}1 \\
\end{pmatrix}\,.
\end{equation}
It implies that there is an equal angle, $\arccos(-\tfrac{1}{3})$, between every pair of the vectors. There exists one more such unit vector ${\widehat{\mathsf{a}}_4=-\textstyle\sum\nolimits_{i=1}^{3}
\widehat{\mathsf{a}}_i}$.
The set of four vectors
${\{\widehat{\mathsf{a}}_i\}_{i=1}^{4}}$
yields a SIC-POVM for a qubit \cite{Rehacek04,Appleby09},
whose elements are the positive semi-definite operators
\begin{equation}
\label{Pi}
\varPi_i=\tfrac{1}{4}(I+\widehat{\mathsf{a}}_i\cdot\vec{\sigma})\,,
\quad\mbox{and}\quad
\sum_{i=1}^{4}\varPi_i=I
\end{equation}
is because $\textstyle\sum\nolimits_{i=1}^{4}\widehat{\mathsf{a}}_i$ is a null vector.
\begin{figure}
\caption{From top-left to bottom-right, moving horizontally, the first one is the allowed (ellipsoidal) region $\mathcal{E}$ identified by the QCs \eqref{Pi=1} and \eqref{Pi<=1/3}; the remaining regions are bounded by the tight URs \eqref{std-SIC}--\eqref{u-SIC}. All of them lie within the cube ${[0,\tfrac{1}{2}]^{\times 3}}$ of ${(\mathsf{p}_1,\mathsf{p}_2,\mathsf{p}_3)}$.}
\label{fig:regions-SIC-POVM}
\end{figure}
Since the eigenvalues of every $\varPi_i$ are $0$ and $\tfrac{1}{2}$,
its mean value
${\langle \varPi_i\rangle:=\mathsf{p}_i}$ lies in $[0,\tfrac{1}{2}]$ according to \eqref{<A>in}.
Moreover, for ${(\varPi_1,\varPi_2,\varPi_3)}$, hyperrectangle~\eqref{hyperrectangle} is the cube ${[0,\tfrac{1}{2}]^{\times 3}}$ in which regions are exhibited in Fig.~\ref{fig:regions-SIC-POVM}.
The linear QC
\begin{equation}
\label{Pi=1}
\sum_{i=1}^{4}\,
\langle\widehat{\mathsf{a}}_i\cdot\vec{\sigma}\rangle=0
\quad\Leftrightarrow\quad
\sum_{i=1}^{4}\mathsf{p}_i=1
\end{equation}
is due to normalization~\eqref{norm-rho} of a state, that is, ${\langle I\rangle=1}$, where the identity operator is given in \eqref{Pi}.
To estimate ${\langle\widehat{\mathsf{a}}_i\cdot\vec{\sigma}\rangle=4\,\mathsf{p}_i-1}$, ${i=1,\cdots,4}$,
one can either choose three projective measurements along $\widehat{\mathsf{a}}_1, \widehat{\mathsf{a}}_2$, and $\widehat{\mathsf{a}}_3$ or the single POVM ${\{\varPi_i\}_{i=1}^{4}}$ that can be realized by the scheme in \cite{Rehacek04}.
In either case, the permitted region $\mathcal{E}$
is identified by \eqref{Pi=1}
and the quadratic QC
\begin{equation}
\label{Pi<=1/3}
\sum_{i=1}^{4}\,
\langle\widehat{\mathsf{a}}_i\cdot\vec{\sigma}\rangle^2\leq\tfrac{4}{3}
\quad\Leftrightarrow\quad
\sum_{i=1}^{4}\mathsf{p}_i^2
\leq\tfrac{1}{3}\,.
\end{equation}
The right-hand-side inequality is given in \cite{Rehacek04}, and it can also be derived from~\eqref{EGE<=1} by using Gram matrix \eqref{SIC-POVM-Gram-matrix}.
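Alternatively (a minimal sketch we add here), using ${\langle\widehat{\mathsf{a}}_i\cdot\vec{\sigma}\rangle=\widehat{\mathsf{a}}_i\cdot\vec{r}}$ [as in \eqref{<A>-qubit}] and the tight-frame identity $\sum_{i=1}^{4}\widehat{\mathsf{a}}_i\,\widehat{\mathsf{a}}_i^{\,\intercal}=\tfrac{4}{3}$ times the $3\times3$ identity (which one can verify from \eqref{SIC-POVM-Gram-matrix} and ${\widehat{\mathsf{a}}_4=-\sum_{i=1}^{3}\widehat{\mathsf{a}}_i}$), one gets
\begin{equation*}
\sum_{i=1}^{4}\langle\widehat{\mathsf{a}}_i\cdot\vec{\sigma}\rangle^2
=\sum_{i=1}^{4}(\widehat{\mathsf{a}}_i\cdot\vec{r}\,)^2
=\vec{r}^{\,\intercal}\Big(\sum_{i=1}^{4}\widehat{\mathsf{a}}_i\,\widehat{\mathsf{a}}_i^{\,\intercal}\Big)\vec{r}
=\tfrac{4}{3}\,r^2\leq\tfrac{4}{3}
\end{equation*}
by \eqref{|r|^2<=1}, with equality for every pure state.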
Employing~\eqref{Ogf} and \eqref{ABC-v1v2v3} with \eqref{Rpure}, we can have parametric forms
\begin{eqnarray}
\label{SIC-POVM-ellipsoid-para-O}
\underbrace{\begin{pmatrix}
4\,\mathsf{p}_1-1\\
4\,\mathsf{p}_2-1\\
4\,\mathsf{p}_3-1
\end{pmatrix}}_{\displaystyle\textsf{E}}
&=&
\underbrace{\begin{pmatrix}
\tfrac{1}{\sqrt{3}} & \hphantom{-}\tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{6}} \\[1mm]
\tfrac{1}{\sqrt{3}} & -\tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{6}} \\[1mm]
\tfrac{1}{\sqrt{3}} & \hphantom{-}0 & \hphantom{-}\tfrac{2}{\sqrt{6}} \\
\end{pmatrix}}_{\displaystyle\textbf{O}}
\begin{pmatrix}
\tfrac{1}{\sqrt{3}}\,\sin\mu\,\cos\nu\\
\tfrac{2}{\sqrt{3}}\,\sin\mu\,\sin\nu\\
\tfrac{2}{\sqrt{3}}\,\cos\mu
\end{pmatrix}\qquad\ \ \\
\label{SIC-POVM-ellipsoid-para-M}
&=&
\label{M^-1 SIC}
\underbrace{\begin{pmatrix}
\hphantom{-}1 & \hphantom{-}0 & 0 \\[1mm]
-\tfrac{1}{3} & \hphantom{-}\tfrac{2\sqrt{2}}{3} & 0 \\[1mm]
-\tfrac{1}{3} & -\tfrac{\sqrt{2}}{3} & \tfrac{\sqrt{2}}{\sqrt{3}} \\
\end{pmatrix}}_{\displaystyle\textbf{M}}
\underbrace{\begin{pmatrix}
\sin2\theta\,\cos\phi\\
\sin2\theta\,\sin\phi\\
\cos2\theta
\end{pmatrix}}_{\displaystyle\texttt{R}_\text{pure}}.
\end{eqnarray}
of the ellipsoid that is the boundary of $\mathcal{E}$.
Taking \eqref{SIC-POVM-ellipsoid-para-O}, one can check orientation of the ellipsoid exhibited in Fig.~\ref{fig:regions-SIC-POVM} (top-left).
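As a consistency check of the semi-principal axes (this remark is ours): $\textbf{G}_{\textsc{sic}}$ in \eqref{SIC-POVM-Gram-matrix} equals $\tfrac{4}{3}$ times the $3\times3$ identity minus $\tfrac{1}{3}$ times the all-ones matrix, so its eigenvalues are
\begin{equation*}
\lambda_1=\tfrac{4}{3}-\tfrac{1}{3}\cdot 3=\tfrac{1}{3}\,,\qquad
\lambda_2=\lambda_3=\tfrac{4}{3}\,,
\end{equation*}
and the semi-principal axes have lengths $\sqrt{\lambda_l}$, namely $\tfrac{1}{\sqrt{3}},\tfrac{2}{\sqrt{3}},\tfrac{2}{\sqrt{3}}$, matching the factors in the column vector of \eqref{SIC-POVM-ellipsoid-para-O}.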
\begin{figure}
\caption{Contour plot of the entropy $-\textstyle\sum\nolimits_{i=1}^{4}\mathsf{p}_i\ln\mathsf{p}_i$ as a function of the parameters $\theta$ and $\phi$ of \eqref{SIC-POVM-ellipsoid-para-M}. Its absolute minimum $\ln 3$ is attained at the four points listed in Table~\ref{tab:a-theta-phi}.}
\label{fig:entropy-plot-SIC}
\end{figure}
\begin{table}[H]
\centering
\caption{The
values of $\theta$ and $\phi$ for ${\{-\widehat{\mathsf{a}}_i\}_{i=1}^{4}}$ are drawn according to
\eqref{Rpure}, which parameterizes every unit vector in $\mathbb{R}^3$.
By replacing $2\theta$ and $\phi$ with ${\pi-2\theta}$ and ${\pi+\phi}$, respectively, one can get the values for the antipodal vectors ${\{\widehat{\mathsf{a}}_i\}_{i=1}^{4}}$.
}
\label{tab:a-theta-phi}
\begin{tabular}{c@{\hspace{2mm}} | @{\hspace{2mm}}c@{\hspace{3mm}} c }
\hline\hline\rule{0pt}{2ex}
& $2\theta$ & $\phi$ \\
\hline\rule{0pt}{3ex}
$-\,\widehat{\mathsf{a}}_1$ & $\tfrac{\pi}{2}$ & $\pi$
\\[1mm]
$-\,\widehat{\mathsf{a}}_2$ &
$\tfrac{\pi}{2}$ & $\pi+\arccos(-\tfrac{1}{3})$
\\[1mm]
$-\,\widehat{\mathsf{a}}_3$ &
$\pi-\arccos(\tfrac{\sqrt{2}}{\sqrt{3}})$ & $\hphantom{\pi+}\arccos(\tfrac{1}{\sqrt{3}})$
\\[1mm]
$-\,\widehat{\mathsf{a}}_4$ & $\hphantom{\pi-}\arccos(\tfrac{\sqrt{2}}{\sqrt{3}})$ & $\hphantom{\pi+}\arccos(\tfrac{1}{\sqrt{3}})$ \\[1.2mm]
\hline\hline
\end{tabular}
\label{tab:theta-phi-for(-a)}
\end{table}
To measure a combined uncertainty,
if one picks a suitable concave function of ${\{\mathsf{p}_i\}_{i=1}^{4}}$,
for example, the standard Shannon entropy $-\textstyle\sum\nolimits_{i=1}^{4}\mathsf{p}_i\ln\mathsf{p}_i$,
then its absolute minimum will occur on the ellipsoid
parametrized by $\theta$ and $\phi$ in \eqref{SIC-POVM-ellipsoid-para-M}.
By plotting the entropy as a function
of ${\theta\in[0,\tfrac{\pi}{2}]}$ and ${\phi\in[0,2\pi)}$ in Fig.~\ref{fig:entropy-plot-SIC},
we observe that the entropy reaches its absolute minimum
${\ln3}$ when the Bloch vector ${\vec{r}\in\{-\widehat{\mathsf{a}}_i\}_{i=1}^{4}}$;
the corresponding $\theta$'s and $\phi$'s are registered in
Table~\ref{tab:a-theta-phi}.
In this way, we establish three tight URs
\begin{eqnarray}
\label{std-SIC}
2\sqrt{2}&\,\leq\,&
\sum_{i=1}^{4}
\sqrt{1-\langle\widehat{\mathsf{a}}_i\cdot\vec{\sigma}\rangle^2}
=\sum_{i=1}^{4}\sqrt{1-(4\mathsf{p}_i-1)^2}\,,\qquad\ \ \\
\label{entropy-SIC}
\ln3&\,\leq\,&
-\sum_{i=1}^{4}\mathsf{p}_i\ln\mathsf{p}_i =
h(\mathsf{p}_1,\cdots,\mathsf{p}_4)\,,
\quad\mbox{and}\\
\label{u-SIC}
\sqrt{3}&\,\leq\,&
\sum_{i=1}^{4}\sqrt{\mathsf{p}_i} =:
\mathsf{u}_{\sfrac{1}{2}}(\mathsf{p}_1,\cdots,\mathsf{p}_4)\,.
\end{eqnarray}
The right-hand side of UR~\eqref{std-SIC} is the sum of standard deviations; this UR is saturated by eight pure states,
whose Bloch vectors obey ${\vec{r}\in\{\pm\widehat{\mathsf{a}}_i\}_{i=1}^{4}}$.
In contrast, both URs~\eqref{entropy-SIC} and \eqref{u-SIC} are saturated by the four pure states that are related to ${\{-\widehat{\mathsf{a}}_i\}_{i=1}^{4}}$.
Note that the uncertainty measures $h$ and $\mathsf{u}_{\sfrac{1}{2}}$ in \eqref{entropy-SIC} and \eqref{u-SIC} are different from \eqref{H(A)} and \eqref{u(A)}.
Since
${\textstyle\sum\nolimits_{i=1}^{4}\mathsf{p}_i^2}$ is a convex function
\cite{Sehrawat17}, the right-hand-side inequality in \eqref{Pi<=1/3} can be seen as a tight CR.
The left-hand-side inequality delivers a tight UR for the sum of the squared standard deviations ${1-\langle\widehat{\mathsf{a}}_i\cdot\vec{\sigma}\rangle^2}$, which is bounded from below by $\tfrac{8}{3}$.
Both these relations are saturated by every pure state.
As before, we can restrict a set of ${(\mathsf{p}_1,\cdots,\mathsf{p}_3)}$ by
one of the above URs, for instance,
\begin{eqnarray}
\label{R-std-SIC}
&&\mathcal{R}^{\textsc{sic}}_\Delta:=\{(\mathsf{p}_1,\cdots,\mathsf{p}_3)\ |\ 0\leq \mathsf{p}_1,\cdots,\mathsf{p}_4\leq \tfrac{1}{2}\ \nonumber\\
&&
\qquad\qquad\qquad\qquad\qquad \text{obey}\
\eqref{Pi=1}\ \text{and}\ \eqref{std-SIC}\}\,.\qquad
\end{eqnarray}
Replacing UR~\eqref{std-SIC} in \eqref{R-std-SIC} by \eqref{entropy-SIC}
and \eqref{u-SIC}, we define the regions $\mathcal{R}^{\textsc{sic}}_h$ and
$\mathcal{R}^{\textsc{sic}}_{\mathsf{u}_{\sfrac{1}{2}}}$, respectively.
One can see cross sections in $\mathcal{R}^{\textsc{sic}}_h$ and
$\mathcal{R}^{\textsc{sic}}_{\mathsf{u}_{\sfrac{1}{2}}}$ caused by ${\mathsf{p}_i=\tfrac{1}{2}}$ ${(i=1,\cdots,4)}$, which
shows the significance of \eqref{<A>in}.
\end{document}
\begin{document}
\title {Macroscopic non-contextuality as a principle for Almost Quantum Correlations}
\author{Joe Henson, Ana Bel\'en Sainz\\[0.5em]
{\it\small H.H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol, BS8 1TL, U.K.}}
\date{13 April 2015}
\maketitle
\begin{abstract}
Quantum mechanics allows only certain sets of experimental results (or ``probabilistic models'') for Bell-type quantum non-locality experiments. A derivation of this set from simple physical or information theoretic principles would represent an important step forward in our understanding of quantum mechanics, and this problem has been intensely investigated in recent years. ``Macroscopic locality'', which requires the recovery of locality in the limit of large numbers of trials, is one of several principles discussed in the literature that place a bound on the set of quantum probabilistic models.
A similar question can also be asked about probabilistic models for the more general class of quantum contextuality experiments. Here, we extend the Macroscopic Locality principle to this more general setting, using the hypergraph approach of Ac\'in, Fritz, Leverrier and Sainz [Comm. Math. Phys. 334(2), 533-628 (2015)], which provides a framework to study both phenomena of nonlocality and contextuality in a unified manner. We find that the set of probabilistic models allowed by our Macroscopic Non-Contextuality principle is equivalent to an important and previously studied set in this formalism, which is slightly larger than the quantum set. In the particular case of Bell Scenarios, this set is equivalent to the set of ``Almost Quantum'' models, which is of particular interest since the latter was recently shown to satisfy all but one of the principles that have been proposed to bound quantum probabilistic models, without being implied by any of them (or even their conjunction). Our condition is the first characterisation of the almost quantum set from a simple physical principle.
\end{abstract}
\vskip 1cm
\section*{Introduction}
Nonlocality \cite{Bell} and contextuality \cite{KS} \cite{RevBell, exp1, exp2, contexp0, contexp1, contexp2} are arguably the two phenomena that most starkly reveal the difference between quantum and classical mechanics \cite{RevBell, LSW, conttheo1}. With regard to the first of these, Bell showed that quantum mechanics makes predictions for the strength of correlations between spacelike separated measurements that are incompatible with Bell's local causality condition, a seemingly natural formalisation of the idea that there is no superluminal causal influence \cite{Bell}. For the second, the Kochen-Specker theorem \cite{KS} states that quantum mechanics is incompatible with the assumption that measurement outcomes are determined by physical properties that do not depend on the measurement context. Quantum theory successfully explains both phenomena, but it predicts that only some sets of experimental probabilities for nonlocality and contextuality experiments can be attained, and it remains an open challenge to characterise this set of physically attainable results with simple, natural physical or information-theoretic principles. The search for this deeper understanding of quantum nonlocality and contextuality is motivated by the possibility of reformulating and/or generalising quantum theory (\textit{e.g.}~for the purposes of formulating a theory of quantum gravity) as well as finding new ways to prove results in quantum information theory directly from simple principles, without having to invoke the whole structure of quantum theory.
The principle that there are no superluminal signals is not enough to characterise quantum correlations for nonlocality experiments in this sense \cite{tsirelson, PR}. Hence, stronger principles are needed. Some proposals are non-trivial communication complexity \cite{vd}, information causality (IC) \cite{IC}, local orthogonality \cite{FSA} and macroscopic locality \cite{ML}. In this work we focus on the macroscopic locality (ML) principle. Essentially ML states that, for a certain macroscopic extension of a Bell experiment, Bell's local causality will hold, or in other words quantum nonlocality will no longer be detectable in this macroscopic limit.
The problem of characterising quantum correlations in Bell scenarios from basic principles is however far from being solved. All the principles proposed so far, except IC, have been shown to be satisfied by some supra-quantum correlations (and the same is suspected to be true of IC) \cite{AQ}. Indeed, there exists a set of correlations called ``almost quantum'' that is slightly larger than the quantum set and yet satisfies these principles, presenting a curious barrier to a full characterisation of the quantum set \cite{AQ}. However, even a characterisation of the almost quantum set from basic principles is still missing.
Moving on to contextuality scenarios, the problem of characterising quantum models from basic principles has not been so intensely studied. The ``almost quantum'' set of non-local correlations generalises in this case to a set called $\mathcal{Q}_1$, which is strictly larger than the quantum set. The most relevant proposals to describe the latter are the \textit{Exclusivity} principle \cite{cab2} and \textit{Consistent Exclusivity} (CE), which (when defined as in definition 7.1.1 of \cite{AFLS}) impose the same constraints (compare CE to the definition of the E principle in e.g. \cite{cab2}). This E/CE principle has been applied in many different ways to contextuality scenarios \cite{AFLS, cab2, cab1}, but these applications were never strong enough to single out quantum models in the sense of \cite{AFLS}. Moreover, several of the most powerful results rely on auxiliary assumptions, some simpler and more physically compelling than others. For instance, when assuming both CE and that all quantum models are \textit{inside} the physically allowed set of models, it can be shown that all models violating $\mathcal{Q}_1$ are \textit{outside} the physically allowed set of models \cite{cab1, AFLS}.
In this work we propose a generalisation of ML to arbitrary contextuality scenarios, which we call \textit{macroscopic non-contextuality} (MNC). We use the hypergraph approach to nonlocality and contextuality developed in \cite{AFLS} to represent such scenarios, which we briefly review below. We find that MNC characterises the particular set $\mathcal{Q}_1$ of probabilistic models, which includes supraquantum models. For Bell scenarios, this strengthens the original ML principle, because the set $\mathcal{Q}_1$ is equivalent to the set of almost quantum correlations \cite{AQ} in that case. Thus, we provide the first characterisation of almost quantum correlations from basic physical principles.
In section \ref{se:contsce} below, the hypergraph approach to contextuality is reviewed and the relevant sets of probabilistic models (quantum, $\mathcal{Q}_1$ and non-contextual) are defined. In section \ref{se:MNC} macroscopic noncontextuality is defined in analogy to macroscopic locality, and shown to be equivalent to $\mathcal{Q}_1$. A discussion of the physical motivation of the principle and some other details follow in section \ref{se:discussion}.
\section{Contextuality scenarios}\label{se:contsce}
In this paper we represent general contextuality scenarios, including Bell scenarios, as in the hypergraph approach to contextuality of \cite{AFLS}. This section provides a brief review of the notation as well as the sets of correlations which are relevant for our result. For more details on the formalism, the reader can consult \cite{AFLS}.
A contextuality scenario \cite{AFLS} is defined as a hypergraph $H = (V, E)$ whose vertices $v \in V$ correspond to the events in the scenario. Each \textit{event} represents an outcome obtained from a device after it receives some input or ``measurement choice''. The hyperedges $e \in E$ are sets of events representing all the possible outcomes given a particular measurement choice. The hypergraph approach assumes that every such measurement set is complete, in the sense that if the measurement corresponding to $e$ is performed, exactly one of the outcomes corresponding to $v \in e$ is obtained. Note that measurement sets may have non-trivial intersection; when an event appears in more than one hyperedge, this represents the idea that the two different operational outcomes should be thought of as equivalent, in a sense that will be specified further below.
A \textit{probabilistic model} on a contextuality scenario is an assignment of a number to each of the events, $p\,:\, V \, \to \, [0,1]$, which denotes the probability with which that event occurs when a measurement $e \ni v$ is performed. By defining probabilistic models in this way (rather than by a function $p_e(v)$ depending on the measurement $e$ performed), we are assuming that in the set of experimental protocols that we are interested in, the probability for a given outcome is independent of the measurement that is performed.\footnotemark~Because the measurements are complete, every probabilistic model $p$ over the contextuality scenario $H$ satisfies the normalisation condition $\sum_{v \in e} p(v) = 1$ for every $e \in E$.
\footnotetext{In standard discussions of quantum contextuality, ``the set of experimental protocols that we are interested in'' means carrying out a fixed set of measurements on a quantum system. In this case, two outcomes always have the same probabilities if they correspond to the same measurement operator acting on the same Hilbert space. When discussing contextuality more generally, it is often (explicitly or implicitly) assumed that some naturally defined set of experiments will still be available, with respect to which outcomes can be identified in a similar way; in some cases this can be justified by appeal to general principles, especially lack of signalling between parties.}
Bell scenarios (see Ap. \ref{se:corr-meas}) are naturally incorporated in the hypergraph approach as a type of product of several contextuality scenarios, one for each local party. Specifically, in an $(n,m,d)$ Bell scenario the ``global'' events $v \in V$ can be associated with a list of ``local'' outcomes for each party: $v = (a_1 \ldots a_n | x_1 \ldots x_n)$. The hyperedge set $E$ however does not have such a simple representation: it includes \textit{simultaneous measurements} as well as \textit{correlated measurements}, also denoted as \textit{branching measurements} \cite{joe-hist} or \textit{one-way LOCC measurements} \cite{LOCCandreas} (see Ap. \ref{se:corr-meas} for a fuller explanation).
In what follows we revisit the definitions of \textit{classical}, \textit{quantum} and $\mathcal{Q}_1$ probabilistic models. For other interesting sets of models we refer the reader to \cite{AFLS}.
\begin{defn}\label{qmdef} \textbf{Quantum models} [\cite{AFLS}, 5.1.1]\\
Let $H$ be a contextuality scenario. An assignment of probabilities $p: V(H)\to [0,1]$ is a \emph{quantum model} if there exist a Hilbert space $\mathcal{H}$, a quantum state $\rho\in\mathcal{B}_{+,1}(\mathcal{H})$ and a projection operator $P_v\in\mathcal{B}(\mathcal{H})$ associated to every $v\in V$ which constitute projective measurements in the sense that
\begin{equation}
\label{qmeas}
\sum_{v\in e} P_v = \mathbbm{1}_{\mathcal{H}} \quad\forall e\in E(H) ,
\end{equation}
and reproduce the given probabilities,
\begin{equation}
\label{qrep}
p(v) = \mathrm{tr}\left( \rho P_v \right) \quad\forall v\in V(H) .
\end{equation}
The set of all quantum models is the \emph{quantum set} $\mathcal{Q}(H)$.
\end{defn}
Later some comments will be made on the meaning and consequences of generalising this definition to POVMs.
Ac\'in, Fritz, Leverrier and Sainz \cite{AFLS} prove that, in the case of Bell scenarios, Def.~\ref{qmdef} accords with the usual definition of quantum correlations (see Def.~\ref{qcorrvan} in Ap. \ref{se:corr-meas}), meaning that each global measurement represented by the projectors $\{P_v\}_{v \in e}$ can consistently be expressed as a product of local projectors, one for each party, such that the projectors for different parties commute (and sum up to the identity). For instance, in the bipartite case $P_{ab | xy} = P_{a|x} P_{b|y}$, where $[P_{a|x}, P_{b|y}]=0$ for all $a,b,x,y$ and $\sum_{a} P_{a|x} = \mathbbm{1}_{\mathcal{H}}$ (similarly $\sum_{b} P_{b|y} = \mathbbm{1}_{\mathcal{H}}$).
The following set of correlations will be important in the argument below.
\begin{defn}\label{q1mdef} $\mathcal{Q}_1$ \textbf{models} [\cite{AFLS}, 6.1.2]\\
Let $H$ be a contextuality scenario. An assignment of probabilities $p: V(H)\to [0,1]$ is a $\mathcal{Q}_1$ \emph{model} if there exists a ``$\mathcal{Q}_1$ certificate'': a p.s.d.~matrix $M$ ranging over all $v \in V(H)$, with a special column (and row) labelled $1$, such that for all $e \in E(H)$,
\begin{enumerate}
\item\label{q1a} $\sum_{u\in e} M_{uv} = M_{1v}$ and $\sum_{v\in e} M_{1v} = M_{11}$ ;
\item\label{q1b} $(u,v \in e$ and $ u \neq v) \, \Rightarrow M_{uv} = 0$;
\item\label{q1c} $M_{vv} = p(v)$;
\end{enumerate}
The set of all these models is denoted $\mathcal{Q}_1(H)$.
\end{defn}
This set $\mathcal{Q}_1$ arises in \cite{AFLS} as the first level of a hierarchy of relaxations that converges to the quantum set. In addition, it is shown in Corollary 6.4.2 of \cite{AFLS} that when the contextuality scenario is a Bell scenario, the set $\mathcal{Q}_1$ coincides with the Almost Quantum set of correlations \cite{AQ}.
An equivalent characterisation of $\mathcal{Q}_1$ models is useful in the main proof of section \ref{se:MNC}:
\begin{lemma}\label{q1mdef2}
Given a scenario $H$, a matrix $M$ is a $\mathcal{Q}_1$ certificate for a given behaviour $P$ iff it is a p.s.d.~matrix ranging over all $v \in V(H)$, with a special column (and row) labelled $1$, such that
\begin{enumerate}
\item\label{q12a} $\sum_{u\in e} M_{uv} = P(v)$ for all $v\in V(H)$ and all $e\in E(H)$ ;
\item\label{q12b} $(u,v \in e$ and $ u \neq v) \, \Rightarrow M_{uv} = 0$;
\item\label{q12c} $M_{vv} = P(v)$;
\item\label{q12d} $M_{1v} = P(v)$ and $M_{11} = 1$;
\end{enumerate}
\end{lemma}
\begin{proof}
Condition (\ref{q1b}) of Def.~\ref{q1mdef} is equivalent to (\ref{q12b}) of Lemma~\ref{q1mdef2}, and condition (\ref{q1c}) of Def.~\ref{q1mdef} to (\ref{q12c}) of Lemma~\ref{q1mdef2}.
Conditions (\ref{q1a}), (\ref{q1b}) and (\ref{q1c}) of Def.~\ref{q1mdef} easily imply (\ref{q12d}) of Lemma~\ref{q1mdef2}. Assuming (\ref{q12d}) of Lemma~\ref{q1mdef2}, (\ref{q1a}) of Def.~\ref{q1mdef} is equivalent to (\ref{q12a}) of Lemma~\ref{q1mdef2}.
\end{proof}
Finally, classical probabilistic models are defined as follows:
\begin{defn}\label{classical} \textbf{Classical models} [\cite{AFLS}, 4.1.1]\\
Let $H$ be a contextuality scenario. An assignment of probabilities $p: V(H)\to [0,1]$ is a \emph{classical model} if it can be written as
\begin{equation}\label{eq:clas}
p(v) = \sum_\lambda q_\lambda p_\lambda(v),
\end{equation}
where the weights $q_\lambda$ satisfy $\sum_\lambda q_\lambda = 1$, and $p_\lambda$ are deterministic probabilistic models, that is, normalised models such that $p_\lambda(v)\in\{0,1\} \quad \forall \, v,\,\lambda$.
The set of all these models is denoted $\mathcal{C}(H)$.
\end{defn}
Expressed in this language, the most famous result in this field, the Kochen-Specker theorem \cite{KS}, is that there exist scenarios that admit quantum models but no classical models, implying that there exist scenarios $H$ such that $\mathcal{C}(H) \subsetneq \mathcal{Q}(H)$. This phenomenon is referred to as \textit{contextuality}, and the set of classical models is also referred to as the set of \textit{noncontextual models}.
Besides Bell scenarios, another special kind of scenario will be of particular relevance below when we come to discuss macroscopic versions of microscopic scenarios. They are sometimes called ``marginal scenarios'' \cite{AB} or ``joint measurement scenarios''. Here, we imagine some list of constituent experiments labelled $m \in X = \{1,...,k\}$, each with an outcome in the set $O=\{1,...,d\}$. Some subsets of the constituent experiments are ``jointly measurable''\footnote{In the sense that there exists a physically implementable protocol to measure them at the same time.}, and these subsets of $X$ are collected in the set $\mathcal{M} \subset 2^X$ ($2^X$ is the set of all subsets of $X$). For each $C \in \mathcal{M}$, experimental probabilities are then assigned to each specification of a value for each of the constituent measurements: $\mathcal{P}^{ex}_C(\{ a_m \}_{m\in C})$ where $a_m \in O$. Marginal scenarios can be represented in the hypergraph approach to contextuality, as explained in appendix \ref{a:joint_measurements}. As is also explained in that appendix, for this type of scenario the definition of classical models given above can be rewritten with (\ref{eq:clas}) becoming
\begin{equation}\label{e:nc_joint}
\mathcal{P}^{ex}_C(\{a_m \}_{m\in C}) =
\sum_{\{a_m\}_{m \in X \backslash C}} \mathcal{P}_{\text{NC}}( \{ a_m \}_{m \in X} ) ,
\end{equation}
where $\mathcal{P}^{ex}_C(\{a_m \}_{m\in C})$ is the experimental probability of obtaining outcomes $\{a_m \}_{m\in C}$ given that the joint measurement $C$ was performed, and where $\backslash$ is set difference.
In this form, the following interpretation of non-contextuality for marginal scenarios is brought out: the probabilities are such that the results of the constituent experiments are consistent with an outcome for every observable having been predetermined before the measurement is performed, and the experiment ``merely revealing'' the results for the ones that are measured. It should be noted that unlike the most general scenarios that can be represented in the hypergraph approach, all the (non-empty) joint measurement scenarios have a non-empty set of classical models.
\section{Macroscopic Non-Contextuality}\label{se:MNC}
In \cite{ML} Navascu\'es and Wunderlich identified an interesting property of quantum correlations which they termed \textit{macroscopic locality} (ML), and proposed that this be thought of as a simple physical principle to bound the set of correlations. Essentially, macroscopic locality requires that a certain ``macroscopic limit'' of a Bell-type experiment has a local explanation in the sense of Bell. In \cite{ML} the principle was applied to bipartite Bell scenarios, and shown to be equivalent to the first level of the NPA hierarchy \cite{NPA0, NPA}. Hence, the set of correlations which satisfies the principle is strictly larger than the set of quantum correlations. In this section we extend this kind of reasoning to general contextuality scenarios in the hypergraph approach, including multipartite Bell scenarios as special cases, as described above. We prove that the set of probabilistic models satisfying this principle is, again, strictly larger than the quantum set $\mathcal{Q}$, but that the principle is stronger than Navascu\'es and Wunderlich's ML when specialised to Bell scenarios.
Consider a physical system $s$ and a set of measurements $E$, from which we choose one to perform on $s$. As reviewed above, in the hypergraph approach to contextuality such a scenario is represented by a hypergraph $H=(V,E)$; the (normalised) probability $p(v)$ of obtaining an outcome $v \in V$ given that a measurement $e \ni v$ is performed, for all outcomes, defines a probabilistic model on $H$. An experiment of this type is depicted in Fig.~\ref{micro-exp}, and we refer to it as a \textit{microscopic experiment}. Now we want to define a macroscopic version of such an experiment, which we call its ``macroscopic extension''. Suppose now that the source produces $N$ independent copies of this system $s$, and that these $N$ systems reach the measurement device (see Fig.~\ref{macro-exp}). Now we assume that we are no longer able to distinguish individual outcomes, and can register only the fraction of instances (or ``intensity'') of each outcome $v$ given a measurement $e$. The experimental results for a particular measurement in the macroscopic experiment are thus described by a probability distribution $\mathcal{P}_e(\{I^v\}_{v \in e})$ where $I^v$ denotes the intensity for outcome $v$.
This can be described as a joint measurement scenario in which the constituent experiments are the measurements of the intensity $I^v$ for each $v$.\footnote{Although here we must allow continuous values for the intensities, the generalisation does not change anything important for our purposes.} The probabilities for the macroscopic extension are determined by the microscopic probabilistic model $p(v)$, in a way that we will make explicit below.
\begin{figure}
\caption{Microscopic experiment. A source S prepares a system $s$, which is sent to the measurement device M. There, an interaction between the measurement apparatus and the system sends the system towards one of a set of detectors, where its presence can be observed as a ``detector click''. The clicking of detector D$_k$ corresponds to obtaining outcome $k$.}
\label{micro-exp}
\end{figure}
\begin{figure}
\caption{Macroscopic experiment. A source S prepares $N$ independent copies of a system $s$, which are sent to the measurement device M. There, for each system (independently of the others), an interaction between the measurement apparatus and the system sends the system towards one of a set of detectors. However, in this case, rather than a single click, there is a distribution of `clicks' over the detectors according to the probabilities for each outcome in the microscopic experiment. Hence, the `output' of this macroscopic experiment is the collection of intensities $I^v_e$ registered at the detectors.}
\label{macro-exp}
\end{figure}
Generalising ML \cite{ML}, our principle will be that in the limit of large $N$ there exists a non-contextual model for this experiment: the probabilities are such that the intensities for \textit{all} of the outputs $v$ could have been predetermined before the measurement is performed, and the experiment ``merely reveals'' the intensities that are measured.
\begin{defn}\textbf{Macroscopic Non-Contextuality } \textsl{(MNC)}\\
The probabilistic model $p(v)$ obeys macroscopic non-contextuality if, in the limit $N \rightarrow \infty$, there exists a probability distribution $\mathcal{P}_{\text{NC}}$ over a set of intensities $\{ I^v \}_{v \in V(H)}$, such that the experimental probabilities for the macroscopic extension of $p(v)$, $\mathcal{P}_e(\{I^v\}_{v \in e})$, can be obtained as marginals from $\mathcal{P}_{\text{NC}}$:
\begin{equation}\label{e:nc_prob_dist}
\mathcal{P}_e(\{I^v\}_{v \in e}) =
\int \Bigl ( \prod_{v \in V(H) \backslash e} dI^v \Bigr ) \; \mathcal{P}_{\text{NC}}( \{ I^v \}_{v \in V(H)} ) ,
\end{equation}
where $\backslash$ is set difference.
\end{defn}
Equation (\ref{e:nc_prob_dist}) is the analogue of Eq.~(\ref{e:nc_joint}) for the macroscopic experiment. Note that no matter what the scenario $H$ is for the microscopic experiment (at least as long as it supports any probabilistic models at all), this condition can always be satisfied by some probability distributions. The original $H$ may even constitute a proof of Kochen-Specker, but the corresponding macroscopic experiment is always represented by a marginal scenario, which is never of that type.
We will now investigate how to characterise the set of probabilistic models $p(v)$ that satisfy MNC. It is useful to discuss this question in terms of random variables. In general, the results of the macroscopic experiment are described by the probability distributions $\mathcal{P}_e(\{I^v\}_{v \in e})$.
With a slight abuse of notation, we denote by $I^v_e$ the random variable associated to variable $I^v$ in the distribution $\mathcal{P}_e$. This is done because the random variables derived from different distributions are distinct, and those corresponding to the same outcome would share the same symbol without the added subscript. Note that the random variables $I^u_e$ and $I^v_f$ for $e \neq f$ are defined from different distributions and so it is meaningless to ask about correlations between them.
A macroscopic experiment is defined from $N$ ``runs'' of the microscopic experiment. Similarly to \cite{ML}, define $d^v_{i\,e}$ as a random variable that is $1$ if $v$ is obtained in the $i$th run of experiment $e$ and $0$ otherwise. The intensity of outcome $v$ given measurement $e$, $I^v_e$, is then proportional to $\sum_{i=1}^N d^v_{i\,e}$, and its deviation from the mean value is expressed as:
\begin{equation}
\label{e:def_I}
\bar{I}^v_e = \sum_{i=1}^N \frac{\bar{d}^v_{i\,e}}{\sqrt{N}} = \sum_{i=1}^N \frac{d^v_{i\,e}-p(v)}{\sqrt{N}},
\end{equation}
where the normalisation has been chosen to be $\sqrt{N}$ for reasons that will hopefully become clear below.
There are some constraints on the random variables $\bar{I}^v_e$ that follow from the consistency of the probabilistic model for the microscopic experiment. The first simply comes from the fact that the sum of the number of hits for all outcomes over all runs must be N, so that
\begin{equation}
\label{e:overall_norm}
\sum_{v \in e} \bar{I}^v_e=0 \quad \forall\,e.
\end{equation}
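Explicitly (a small step we spell out here, using the facts that $\sum_{v\in e} d^v_{i\,e}=1$ in every run and $\sum_{v\in e} p(v)=1$),
\begin{equation*}
\sum_{v \in e} \bar{I}^v_e
=\frac{1}{\sqrt{N}}\sum_{i=1}^N\Big(\sum_{v \in e} d^v_{i\,e}-\sum_{v \in e} p(v)\Big)
=\frac{1}{\sqrt{N}}\sum_{i=1}^N(1-1)=0\,.
\end{equation*}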
The second only holds in the limit. The central limit theorem \cite{CLT} implies that, when $N \to \infty$, the probability distribution over the intensity fluctuations for each experiment converges to a multivariate Gaussian distribution. In this case the covariance matrix $\gamma^e$ for the experiment $e$ will be given by the following, defined for all $u,v \in e$,
\begin{equation}
\label{e:central_lim}
\gamma^e_{uv} = \langle \bar{I}^u_e \bar{I}^v_e \rangle = \langle \bar{d}^u_{1 \,e} \bar{d}^v_{1 \,e} \rangle= \delta_{u v} p(v) - p(u)p(v).
\end{equation}
Note that the value of $\gamma^e_{uv}$ is the same for fixed $u$ and $v$, for any value of $e$. This is because the marginal distribution $\mathcal{P}^{ex}_e(I^u,I^v)$ for some measurement containing $u,v$ as outcomes is the same no matter what $e$ is (this in turn follows from the consistency of the probabilistic model for the microscopic experiment and the definition of the intensities).
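For completeness, a minimal derivation of (\ref{e:central_lim}) (the intermediate steps are ours), assuming that the $N$ runs are independent and identically distributed:
\begin{equation*}
\langle \bar{I}^u_e \bar{I}^v_e \rangle
=\frac{1}{N}\sum_{i,j=1}^{N}\langle \bar{d}^u_{i\,e}\,\bar{d}^v_{j\,e}\rangle
=\langle \bar{d}^u_{1\,e}\,\bar{d}^v_{1\,e}\rangle
=\langle d^u_{1\,e}\,d^v_{1\,e}\rangle-p(u)p(v)
=\delta_{uv}\,p(v)-p(u)p(v)\,,
\end{equation*}
where the cross terms with $i\neq j$ vanish because each $\bar{d}$ has zero mean, and $d^u_{1\,e}\,d^v_{1\,e}=\delta_{uv}\,d^v_{1\,e}$ because exactly one outcome of $e$ occurs in a given run.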
So far these observations hold in general (\textit{i.e.}~without any constraint being placed on the microscopic model beyond consistency). Now, if MNC holds, then in the limit $N \rightarrow \infty$ there exists a joint probability distribution over the set of intensities for \textit{all} outcomes, such that the experimental distributions can be recovered as marginals as in (\ref{e:nc_prob_dist}). In terms of random variables we can now define $I^v$, without any subscript denoting a measurement, from $\mathcal{P}_{\text{NC}}(\{ I^v \}_{v \in V(H)} )$. These $I^v$ are all derived from the same distribution and so MNC implies that there must exist a bigger matrix $\gamma_{uv}$ defined for \textit{all} $u,v \in V(H)$ that has the properties of a covariance matrix for this probability distribution; in particular it is a positive semi-definite matrix. Furthermore, from (\ref{e:nc_prob_dist}) this $\gamma_{uv}$ must reduce to (\ref{e:central_lim}) when $u,v$ are restricted to $e$. A further constraint on $\gamma_{uv}$ implied by (\ref{e:nc_prob_dist}) and (\ref{e:overall_norm}) is that, even for $u$ not in the same measurement as $v$,
\begin{equation}
\sum_{u\in e} \gamma_{uv} = \langle (\sum_{u\in e} \bar{I}^u) \bar{I}^v \rangle = 0.
\end{equation}
Hence, the microscopic probabilistic models which are consistent with MNC may be characterised as follows:
\begin{defn}
\label{d:mnc}
A probabilistic model $p$ on scenario $H$ is macroscopically non-contextual if there exists a ``macroscopic non-contextuality certificate'': a p.s.d.~matrix $\gamma$ ranging over all $v \in V(H)$ such that
\begin{itemize}
\item $\sum_{u\in e} \gamma_{uv} = 0$;
\item $(u,v \in e \text{ and } u \neq v) \, \Rightarrow \gamma_{uv} = -p(u)p(v)$;
\item $\gamma_{vv} = p(v)-p(v)^2$;
\end{itemize}
\end{defn}
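Checking this definition for a given probabilistic model is, in practice, a semidefinite feasibility problem: the listed conditions fix every entry of $\gamma$ whose two indices share a measurement, the remaining entries are free, and one asks whether the resulting partial matrix admits a positive semi-definite completion obeying the normalisation condition. A minimal sketch of such a check (ours, assuming the \texttt{cvxpy} package with the SCS solver; the function name and the toy scenario are our own) is the following.
\begin{verbatim}
import cvxpy as cp

def has_mnc_certificate(V, E, p):
    """Search for a p.s.d. matrix gamma satisfying the conditions of the definition."""
    idx = {v: i for i, v in enumerate(V)}
    G = cp.Variable((len(V), len(V)), PSD=True)
    cons = [G[idx[v], idx[v]] == p[v] - p[v] ** 2 for v in V]
    for e in E:
        cons += [G[idx[u], idx[v]] == -p[u] * p[v]
                 for u in e for v in e if u != v]
        cons += [sum(G[idx[u], idx[v]] for u in e) == 0 for v in V]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status in ("optimal", "optimal_inaccurate")

# Toy scenario: two disjoint two-outcome measurements, uniform probabilities.
V = ["a0", "a1", "b0", "b1"]
E = [("a0", "a1"), ("b0", "b1")]
print(has_mnc_certificate(V, E, {v: 0.5 for v in V}))
\end{verbatim}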
Our main result is that these microscopic probabilistic models are equivalent to the $\mathcal{Q}_1$ set in the hierarchy of probabilistic models defined in the hypergraph approach \cite{AFLS}.
\begin{thm}
A behaviour is macroscopically non-contextual iff it is in $\mathcal{Q}_1$.
\end{thm}
\begin{proof}
The proof is very similar to the one in \cite{ML} for ML. From Schur's theorem \cite{Horn}, because $M_{11}=1>0$, the positivity of $M$ is equivalent to the positivity of $\gamma_{uv}= M_{uv} - M_{1v}M_{1u}=M_{uv} - p(u)p(v)$.
With this definition, it is easy to check that (\ref{q1mdef2}.\ref{q12a}) is equivalent to $\sum_{u\in e} \gamma_{uv} =0$, (\ref{q1mdef2}.\ref{q12b}) is equivalent to $(u,v \in e \text{ and } u \neq v) \, \Rightarrow \gamma_{uv} = -p(u)p(v)$ and (\ref{q1mdef2}.\ref{q12c}) is equivalent to $\gamma_{vv} = p(v)-p(v)^2$. Given the probabilistic model, the values of $M_{1v}$ and $M_{11}$ are determined. Thus one can derive a macroscopic non-contextuality certificate $\gamma$ given that there exists a $\mathcal{Q}_1$ certificate, and \textit{vice-versa}.
\end{proof}
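The Schur-complement step can also be illustrated numerically: padding any p.s.d.\ candidate $\gamma$ with a first row and column built from the probabilities gives a matrix $M$ with $M_{11}=1$ and $M_{1v}=p(v)$ that is p.s.d.\ exactly when $\gamma$ is. A quick \texttt{numpy} check (ours; the numbers are arbitrary) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.1, 0.4, 0.5])
A = rng.normal(size=(3, 3))
gamma = 1e-2 * A @ A.T                  # some p.s.d. candidate certificate
M = np.empty((4, 4))
M[0, 0] = 1.0
M[0, 1:] = M[1:, 0] = p                 # M_{1v} = p(v)
M[1:, 1:] = gamma + np.outer(p, p)      # invert gamma_{uv} = M_{uv} - p(u) p(v)
print(np.linalg.eigvalsh(gamma).min() >= -1e-10,
      np.linalg.eigvalsh(M).min() >= -1e-10)
\end{verbatim}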
Since quantum models are included within the $\mathcal{Q}_1$ set \cite{AFLS}, quantum theory satisfies MNC for any contextuality scenario.
\section{Discussion}\label{se:discussion}
In the particular case of Bell Scenarios, the set $\mathcal{Q}_1$ is equivalent to the set of almost quantum correlations (see \cite{AFLS} theorem 6.4.1, and similarly, \cite{AQ} lemma 3). Hence, while the original version of ML allows a strictly larger set of correlations than the almost quantum set \cite{AQ}, when Bell scenarios are defined as in the hypergraph approach a stronger version of the principle arises, which allows exactly the almost quantum correlations. This is the first time that a simple physical principle has been shown to limit correlations to the almost quantum set\footnote{In \cite{joe-hist} the set SPJQM$_b$ is shown to be equivalent to the almost quantum set. However, in \cite{joe-hist} the motivation is slightly different, looking for a natural and useful generalisation of quantum mechanics, rather than a derivation of properties of QM from principles of the type discussed in the present work. It is difficult to call SPJQM$_b$ a simple physical principle in itself, since in \cite{joe-hist} the positive-semidefiniteness is ``added by hand'' rather than derived. Nonetheless, it is suggestive and intriguing for both programs that the same set of correlations has been arrived at from these different starting points.}.
The essential differences that lead to this strengthening are the consideration of global outcomes rather than local ones, and the inclusion of the \textit{correlated measurements} in the definition. However, the gedanken experiment used above to motivate MNC was not of the form of a physical Bell experiment and so some care is needed here when it comes to the motivation for applying the condition. These issues are discussed in detail in appendix \ref{se:corr-meas}. The question of how to motivate correlated measurements in general nonlocality scenarios however goes beyond the scope of this manuscript and is deferred to future work \cite{fuwo}.
This situation is somewhat similar to the difference between the sets SPJQM and SPJQM$_b$ in the histories approach to quantum non-locality \cite{joe-hist}. There, including correlated (or ``branching'') measurements allowed the authors to recover almost quantum correlations from a condition that otherwise has only been shown to imply the first level of the NPA hierarchy \cite{NPA0, NPA}. Also, the almost quantum set is much more naturally defined on the hypergraph approach version of Bell scenarios than the first NPA set, while the latter is the more natural condition when correlated measurements are not considered, and instead it is directly imposed that all mathematical objects associated to local outcomes are independent of the distant measurement settings. This suggests that, in order to characterise the almost quantum set, it is necessary to bring in considerations of correlated measurements (although it is of course possible that a different way to motivate the same strengthening may be found).
A \textit{wiring} is a classical operation by which a new probabilistic model $p$ is constructed from a set of models $\{p_1, \ldots, p_r\}$. For example, in a tripartite Bell scenario a
wiring may consist of Alice communicating her outcome $a$ to Bob, who uses this outcome as his choice of measurement and obtains an outcome $b$. One can then define a new probabilistic model $p$ from the original tripartite one upon identifying Alice and Bob with a new joint party with joint measurement choice $x$ and joint outcome $b$. This type of classical operation can increase the violation of a Bell inequality \cite{Brun}. However, there is very strong motivation to assume that the set of probabilistic models that arises within a physical theory is closed under these classical operations \cite{Brun}. This is indeed the case for the set of MNC probabilistic models. When considering a general contextuality scenario the possible classical operations include choosing one measurement from many via a probability distribution, or in considering many devices (\textit{i.e.}~systems) ``in parallel'' as one larger device. In this regard, it is shown in \cite{AFLS} that the set $\mathcal{Q}_1$ is both convex and closed under tensor products. However, for the particular case of Bell scenarios there exists a larger set of classical operations one could consider. This is studied in \cite{AQ}, where it is proven that the set of Almost quantum correlations is closed under post-selection, grouping of parties and composition. Hence, whenever a collection of microscopic probabilistic models satisfy MNC, the result of any such wiring operation among them will satisfy the principle as well.
Another question that is suggested by the above result is how far the principle can be pushed. One could argue that the most general measurement in quantum mechanics is given by a POVM, and so Def.~\ref{qmdef} should be generalised, substituting positive operators for projectors. Will the MNC principle still be true for all ``quantum correlations'' when we allow this? And if the principle fails in these cases, should that not cast doubt on the claim that the principle is a physically reasonable restriction, undermining the motivations discussed above?
In fact the principle \textit{does} fail in this case, but our view is that this only highlights how problematic it is to generalise Def.~\ref{qmdef} to POVMs. Indeed, one can take any general probabilistic model $p(v)$ on the contextuality scenario $H$ (that is, any model that satisfies the normalisation constraints), and define the positive operators $P(v):= p(v) \, \mathbbm{1}$, where $ \mathbbm{1}$ is the identity on $\mathcal{H}$. These operators will always satisfy the conditions of Def.~\ref{qmdef} generalised to POVMs. Since $p(v)$ can be any probabilistic model, if we allow general POVMs rather than projective measurements then \textit{no} principle that places a non-trivial restriction on correlations will be respected. Thus, this kind of ``quantum model'' is clearly pathological. Furthermore, there is nothing special to quantum theory about this: one could apply an analogous generalisation to classical models with similar results. In this case, the analogue of POVMs would be to incorporate classical randomness (``noise'') into the measurements. But this would allow the unmysterious form of contextuality in which identified outcomes do not in fact correspond to the same property of the system in any meaningful sense. As already noted by Spekkens \textit{et al.} \cite{Spe, LSW}, the relation of POVMs to contextuality demands more careful consideration (see also \cite{Kunjwal:2014} for further considerations along these lines).
Finally, a note on the meaning of the $N \rightarrow \infty$ limit being used here is in order. We have assumed that (\textit{a}) we cannot look for correlations between individual runs of the experiment but only between the proportions of outcomes over all runs, and (\textit{b}) that at large $N$ the experimenter has the ability to resolve the fluctuations described by the CLT but not the deviations from this due to the finiteness of $N$. In effect the experimenter must have a resolution that can pick out fluctuations of order $\sqrt{N}$. This is stronger than the resolution necessary to measure the mean values of the intensities but weaker than would be necessary to resolve the microscopic structure in the stronger sense of seeing finite $N$ effects. It is very intriguing that quantum mechanics turns out to be non-contextual in this natural limit.
\section{Conclusions}
In this work we have proposed a strengthening of Macroscopic Locality, called Macroscopic Non-Contextuality, as a new principle to bound Quantum models on general contextuality scenarios. We have used the hypergraph approach to nonlocality and contextuality to represent such scenarios, and proven that our principle is equivalent to the first level ($\mathcal{Q}_1$) in the hierarchy of probabilistic models defined in the hypergraph approach.
In the hypergraph approach representation of Bell scenarios, the $\mathcal{Q}_1$ set corresponds to Almost Quantum correlations, hence our approach provides a natural characterisation of this set. The inclusion of one-way LOCC measurements as feasible actions in a Bell scenario seems to be the key ingredient that allows the strengthening of the original ML principle. One-way LOCC measurements are important in information theoretic tasks, such as local distinguishability of quantum states \cite{oneway1}. Some questions remain open surrounding the physical motivation for considering this type of correlated measurement in nonlocality and contextuality scenarios, which will be deferred to future work \cite{fuwo}.
There are related programs which also treat the problem of characterising the set $\mathcal{Q}_1$. The exclusivity principle characterises $\mathcal{Q}_1$ when it is assumed in addition that quantum models are all included in the physically allowed set of models (the inclusion of this assumption defines \textit{Extended Consistent Exclusivity} (ECE) in the nomenclature of the hypergraph approach \cite{AFLS}). One of the main differences between this approach and ours is that our approach does not need the extra assumption. The other main difference involves the application of the principle to Bell scenarios. To derive the bound from the ECE principle for a Bell scenario, it is necessary to assume that some non-Bell scenarios can be realised and must also obey the same set of assumptions; the proof does not go through if considerations are restricted to nonlocality scenarios alone. In contrast, the derivation of the $\mathcal{Q}_1$ bound from MNC for a particular scenario involves no considerations of other scenarios at all. Thus, MNC may be used as a principle to characterise correlations in Bell scenarios solely, and successfully recovers the almost quantum set.
In view of the fact that the Almost Quantum set satisfies (or at least has not been shown to violate) all of the principles proposed so far \cite{AQ}, the most important outstanding question is how to formulate a principle that gets closer to quantum models. In the present work, similarly to the original ML paper \cite{ML}, we focus on experiments where only one of many possible measurements is performed on the system. However, another physically relevant experiment could be defined by applying sequences of measurements. The formulation of an MNC-like principle for such experimental scenarios is an open problem, whose solution we believe may shed light on the important question of how to distinguish quantum from almost quantum correlations from basic natural principles.
A short discussion on the Almost quantum set is still in order. Even though there exist several mathematical characterisations of this set, the relation of the Almost Quantum behaviours to physical theories is still unclear and deserving of further research. For instance, no toy theory is available that produces this set of behaviours. It is also not yet known how to express in a physical form a simple property of quantum mechanics that rules out the supra-quantum almost quantum correlations. We believe that the understanding of these open problems will shed light on the characterisation of quantum theory.
\section{Marginal Scenarios and non-contextuality in the hypergraph approach.}\label{a:joint_measurements}
There is a special kind of scenario called ``marginal scenarios'' (or ``joint measurement scenarios'') which often appear in discussions of contextuality. These are the central structure of an alternative formalism for contextuality called the ``observable-based'' approach \cite{AB}, which can be shown to be essentially equivalent to the hypergraph approach in the sense that one can be derived from the other (although additional constraints must be added to the observable-based formalism to recover the hypergraph-based formalism). These joint measurement scenarios appear in the main argument of this paper as the representation of macroscopic experiments.
In appendix D of \cite{AFLS} the relationship between the joint measurement scenarios and the hypergraph approach to contextuality is explained. In this appendix, we briefly reprise the relevant points made there, and make explicit the relationship between classical non-contextual models in the hypergraph approach and the natural condition for non-contextuality that we apply to marginal scenarios in the main text. Here we will use a slightly less abstract notation than that in appendix D of \cite{AFLS}, but otherwise the terminology will be the same.
Consider some list of constituent or ``basic'' experiments, which we will call ``observables'', labelled $m \in X = \{1,...,k\}$, each of which has outcomes in the set $O=\{1,...,d\}$ (here we have assumed that all observables are valued in the same set). Some subsets of the constituent experiments are ``jointly measurable'' and these subsets of $X$ are collected in the set of ``measurement contexts'' $\mathcal{M} \subset 2^X$. Here we will only be interested in ``maximal'' measurement contexts which cannot be extended by adding more observables, and so we assume that, for any $C,C' \in \mathcal{M}$, if $C \subseteq C'$ then $C=C'$. We also assume that for every $m\in X$ there exists a $C \in \mathcal{M}$ that contains it. As is common practice in such cases we represent the marginal scenario defined by $(X,\mathcal{M},O)$ just by $X$ when the context makes it clear what is meant.
Some auxiliary definitions are necessary before we can define a hypergraph approach scenario $H[X]$ that is equivalent to this. In the hypergraph approach each event $v \in V(H)$ represents a full (``global'') specification of the experimental outcome. Here, given a set of jointly measurable observables $C \in \mathcal{M}$, a possible outcome is specified by $\{ a_m \}_{m \in C}$ where $a_m \in O$. These outcomes have to be distinguished for every $C \in \mathcal{M}$, and therefore we set
\begin{equation}
V(H[X]) := \bigl\{ (C, \{ a_m \}_{m \in C} ) : C \in \mathcal{M}, \{ a_m \}_{m \in C} \in O^{C} \bigr\}.
\end{equation}
That is, we have a disjoint set of ``global'' outcomes for every (maximal) measurement context. The definition of the measurement set $E(H[X])$ requires more care. As well as simply measuring all observables in a measurement context, we could measure each observable in turn, and choose which observable to measure next depending on the results obtained so far, until we had an outcome for each observable in one of the maximal sets of jointly measurable observables. For such a measurement, the full list of alternative global outcomes is not just a list of all possible combinations of local outcomes for one fixed measurement context. We will define these ``measurement protocols'' recursively.
Measuring an observable limits our options for what we can measure next. Given the first measured observable $m$, the remaining possibilities can be represented as an ``induced'' marginal scenario $(X\{m\},\mathcal{M}\{m\},O )$, for which
\begin{align}
X\{m\} &:= \bigl\{ m' : m\neq m', \exists C \in \mathcal{M} \text{ s.t. } \{m,m'\} \subseteq C \bigr\}, \\
\mathcal{M}\{m\} &:= \bigl\{ C \backslash \{m\} : C \in \mathcal{M} \text{ and } m \in C \bigr\}.
\end{align}
A measurement protocol $T(X)$ on the marginal scenario $X$ can now be defined in the following way: $T=\emptyset$ if $X=\emptyset$ and otherwise $T=(m,f)$ where $m\in X$ is an observable and $f: O \rightarrow T(X\{m\})$ is a function from outcomes to measurement protocols on $X\{m\}$. In words, we first choose an observable $m$ to measure, and then decide between all protocols for choosing subsequent observables to measure based on the outcome. We continue making these choices until we can no longer find a jointly measurable observable to add to the set that we have already measured. The set of all possible outcomes for a protocol $T=(m,f)$ can be defined in a similar way by including the outcome of the observable in the recursive structure:
\begin{equation}
\text{Out}(T) := \bigl \{ (m,a_m,\alpha') : a_m \in O, \alpha' \in \text{Out}(f(a_m)) \bigr \}.
\end{equation}
Using these recursion relations, each outcome $\alpha\in \text{Out}(T)$ uniquely specifies an event $v \in V(H[X])$. That is, $v_\alpha=(C_\alpha, \{ a_m \}_{m \in C_\alpha})$ where, if $\alpha=(m,a,\alpha')$ as in the above equation, then $C_\alpha= \{m\} \cup C_{\alpha'}$ on $X$, and $\{ a_m \}_{m \in C_\alpha}$ is the set of outcomes associated to $\alpha$ in the obvious way. Finally, the measurement sets are
\begin{equation}
E(H[X]) := \{e_T : T \in \text{MP}(X) \}
\end{equation}
where $ \text{MP}(X) $ is the set of all measurement protocols on $X$ and
\begin{equation}
e_T := \bigl \{ (C_\alpha,\{ a_m \}_{m \in C_\alpha}) : \alpha \in \text{Out}(T) \}.
\end{equation}
In \cite{AFLS} the relationship between the most general consistent probability structures in the two formalisms (``empirical models'' on marginal scenarios and ``probabilistic models'' for the hypergraph approach) is established, and it is noted that there is a similar relationship between the definitions of classical noncontextual models and quantum models in the two cases as well. Here it is necessary to make the former connection explicit: non-contextuality for marginal scenarios is used to define macroscopic non-contextuality in the main text, while otherwise the hypergraph approach has been employed, and so this raises the question of the connection between the two.
Let us consider the possible deterministic probabilistic models on a scenario $H[X]$. If the only measurements included in our considerations were the ones corresponding to maximal measurement contexts $C \in \mathcal{M}$, then, because these measurements correspond to disjoint sets of outcomes, for any choice of an outcome for every $C$ there would be a deterministic probabilistic model that assigned probability 1 to those outcomes only. However, this would allow the implied outcome for a particular observable to depend on the overall context $C$ in which it was measured. The definition of $E(H[X])$ given above, including all measurement protocols, prevents this, as we explain in the following.
Consider a pair of outcomes for a pair of measurement contexts which imply different outcomes for some particular observable $m^* \in X$. Formally, this pair is some $u=(C,\{ a_m \}_{m \in C})$, $v=(C',\{ a'_m \}_{m \in C'})\in V(H[X])$ such that there exists an observable $m^* \in X$ with $m^* \in C$ and $m^* \in C'$ but with $a_{m^*} \neq a'_{m^*}$. Any such pair is contained in some measurement $e \in E(H[X])$ (as can be seen by making $m^*$ the first measurement in some measurement protocol $T=(m^*,f)$ and choosing the function $f$ appropriately so that $u,v \in \text{Out}(T)$). Considering this, in a deterministic probabilistic model on $H[X]$, no such pair can both be assigned probability 1, or in other words, the probability of obtaining a particular value for an observable cannot depend on the context $C$. It is not difficult to see that there are no other constraints on deterministic probabilistic models on $H[X]$. Any such model is thus completely specified by assigning an output to every observable, $\{a_m\}_{m \in X}$.
In the light of this, consider the definition of a classical model given in section \ref{se:contsce}. For marginal scenarios, for an event $u=(C, \{\hat{a}_m \}_{m \in C})$, the definition of classical models (\ref{eq:clas}) now takes the form
\begin{equation}\label{e:nc_joint_deltas}
\mathcal{P}^{ex}_C(\{ \hat{a}_m \}_{m \in C}) := P \bigl ( (C, \{ \hat{a}_m \}_{m \in C} ) \bigr) =
\sum_{\{ a_m \}_{m \in X}} \mathcal{P}_{NC}( \{ a_m \}_{m \in X} ) q_{\{ a_m \}_{m \in X}}(\{\hat{a}_m \}_{m\in C}),
\end{equation}
where $a_m \in O$, $\mathcal{P}_{NC}( \{ a_m \}_{m \in X} )$ is a probability distribution over the global assignments $\{ a_m \}_{m \in X}$, and
\begin{equation}
q_{\{ a_m \}_{m \in X}}(\{\hat{a}_m \}_{m\in C}) = \prod_{m \in C} \delta_{\hat{a}_m \, a_m},
\end{equation}
that is, $q_{\{ a_m \}_{m \in X}}(\{\hat{a}_m \}_{m\in C})$ is 1 if $\hat{a}_m=a_m$ for all $m\in C$ and 0 otherwise. To compare to eq.(\ref{eq:clas}), here $\{ a_m \}_{m \in X}$ corresponds to $\lambda$, $\mathcal{P}_{NC}$ is the ``weight'' and $q$ is the deterministic model. Equation (\ref{e:nc_joint_deltas}) can then be easily rewritten as (\ref{e:nc_joint}), given in the main text. This establishes the connection between the two ways of expressing classicality/non-contextuality, for scenarios in the hypergraph approach and marginal scenarios.
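In computational terms, equation (\ref{e:nc_joint_deltas}) says that the classical models on $H[X]$ are exactly the convex combinations of the deterministic assignments $\{a_m\}_{m\in X}$, so classicality of a given empirical model can be tested by linear programming. A schematic sketch (ours, assuming \texttt{scipy}; the encoding of events as pairs of a context and an outcome tuple is our own choice) is:
\begin{verbatim}
import numpy as np
from itertools import product
from scipy.optimize import linprog

def is_classical(X, M, O, P):
    """P maps (context C, outcome tuple in sorted(C) order) to its probability."""
    Xs = sorted(X)
    events = [(C, assign) for C in M for assign in product(O, repeat=len(C))]
    cols = []
    for glob in product(O, repeat=len(Xs)):          # deterministic assignments
        g = dict(zip(Xs, glob))
        cols.append([1.0 if all(g[m] == a for m, a in zip(sorted(C), assign)) else 0.0
                     for (C, assign) in events])
    A_eq = np.array(cols).T
    b_eq = np.array([P[(C, assign)] for (C, assign) in events])
    res = linprog(np.zeros(A_eq.shape[1]), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * A_eq.shape[1], method="highs")
    return res.success
\end{verbatim}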
\section{Bell scenarios, their hypergraph-approach version and their macroscopic experiments}\label{se:corr-meas}
Mathematically, the MNC condition is a fairly straightforward generalisation of the Macroscopic Locality condition for nonlocality to general contextuality scenarios. But physically the gedankenexperiment used to motivate it concerned a single system passing through a ``beam splitter,'' and this is different from the motivation given in \cite{ML}. On examination this opens a number of issues of physical motivation, many of which will be relevant for other applications of physical principles to contextuality scenarios.
One of the main claims above was that, when specialised to Bell scenarios, the MNC principle constrains probabilistic models to the almost quantum set. Below, the way in which Bell scenarios are represented in the hypergraph approach is briefly reviewed, and we present a way to interpret the macroscopic version of a Bell scenario in the hypergraph approach.
\subsection{Bell scenarios}
A typical Bell-type experiment consists of $n$ separated parties which each have access to a physical system. The ``local'' measurements carried out by these parties are arranged so that they define spacelike separated events. In each run of the experiment, each party can subject their local system to their choice of one of $m$ local measurements, each with $d$ possible outcomes. The measurement choices are usually denoted by $x_k$ and the measurement outcomes by $a_k$, where $k$ labels the parties. Such a Bell scenario is thus characterised by the numbers $(n,m,d)$. If the parties take note of the outcomes in each run of the experiment and gather statistics, they will eventually obtain a conditional probability distribution $P(a_1 \ldots a_n | x_1 \ldots x_n)$ (also referred to as \textit{correlations}). Usually only probability distributions that obey the well-known ``no-signalling'' principle are of interest, meaning that marginalising over a local outcome $a_i$ will give a probability distribution that is independent of the local measurement setting $x_i$. As well as the ``simultaneous'' measurements defined by a choice of measurement for each party, ``correlated measurements'' can be defined, which are important for the representation of Bell scenarios in the AFLS formalism.
\begin{figure}
\caption{Construction of the CHSH scenario $B_{2,2,2}$.}
\label{CHSH_sce}
\end{figure}
A correlated measurement in a Bell scenario is defined as follows. One party performs a local measurement $x_{i_1}$ and obtains an outcome $a_{i_1}$ which is communicated to the remaining parties. The second party in the protocol chooses a measurement $x_{i_2}$, which may depend on $a_{i_1}$. An outcome $a_{i_2}$ is obtained and communicated to the remaining parties. The protocol proceeds similarly for all parties, so that each party's measurement may depend on the previous parties' outcomes. The order in which the parties measure may also be defined dynamically throughout the protocol (see \cite{AFLS}, Def. 3.3.4). This kind of protocol is often referred to as an example of a ``wiring protocol'' between parties. These measurements are included in the hypergraph approach definition of Bell scenarios, which are denoted $B_{n,m,d}$. See figure \ref{CHSH_sce} for the example of $B_{2,2,2}$, also known as the CHSH scenario\footnote{\label{f:bell-marginal}In the hypergraph approach, Bell scenarios are a special case of the marginal scenarios discussed in the previous appendix. There are $n$ ``local'' sets of observables each containing $m$ observables, with $d$ outcomes each, such that the maximal jointly measurable sets are all sets composed of one observable from each of the local sets \cite{AFLS}. In this case, the set of all measurement protocols defined in the previous appendix is the same as the set of all simultaneous and correlated measurements discussed here \cite{AFLS}.}.
As commented in \cite{AFLS}, the motivation for including the correlated measurements is mainly mathematical. The simple hypergraph based framework allows the application of powerful graph-theoretic methods, and so it is useful to be able to treat Bell scenarios as a special case. Most commonly in discussions of Bell scenarios, it is directly imposed that a \textit{local} measurement outcome at the $A$ wing given a setting in the $B$ wing should be identified with the same local outcome at $A$ given a different setting at $B$ (and similarly with $A$ and $B$ reversed). This, without further restrictions, implies no-signalling, and is essentially what is done in \cite{ML} for instance. But this is not directly representable in the hypergraph approach, which deals directly with global outcomes and only allows these to be identified with each other. To impose the no-signalling principle directly on these scenarios would require either an ad-hoc restriction or a more complicated general formalism (\textit{e.g.}~allowing the identification of sets of global outcomes as well as individual outcomes). Adding correlated measurements circumvents this problem because, when they are present, only probabilistic models that satisfy the no-signalling principle are consistent. Similarly, their inclusion ensures that the definition of quantum models for general contextuality scenarios specialises to the usual definition of quantum correlations for Bell scenarios, which is as follows.
\begin{defn}\label{qcorrvan}\textbf{Quantum Correlations}\\
Let $(n,m,d)$ be a Bell Scenario. A conditional probability distribution $P(a_1 \ldots a_n | x_1 \ldots x_n)$ is quantum if there exists a Hilbert space $\mathcal{H}$, a state $\rho\in\mathcal{B}_{+,1}(\mathcal{H})$ and $m$ projective measurements $\{P^k_{a_k|x_k}\}_{a_k=1 \ldots d}$ for each party $k$ such that:
\begin{enumerate}
\item $\sum_{a_k=1}^d P^k_{a_k|x_k} = \mathbbm{1}_{\mathcal{H}}$ for all $x_k = 1 \ldots m$,
\item $[P^k_{a_k|x_k},P^{k^\prime}_{a_{k^\prime}|x_{k^\prime}}]=0$ for all $k \neq {k^\prime}$,
\item $P(a_1 \ldots a_n | x_1 \ldots x_n) = \mathrm{tr} (P^1_{a_1|x_1} \, \ldots \, P^n_{a_n|x_n} \, \rho)$.
\end{enumerate}
\end{defn}
Similarly non-contextuality, when applied to Bell scenarios, is equivalent to locality.
Thus, correlated measurements play a useful role mathematically. Physically, however, if the scenario is meant to represent a choice of measurement for $n$ spacelike-separated parties, the protocol for correlated measurements cannot actually be carried out, and so we seem to have a contradiction between the most important application of Bell scenarios and the inclusion of correlated measurements. It is interesting to consider how this affects the motivation of principles applied to Bell scenarios in general. Below, we only consider motivations for applying the MNC principle to the results of Bell experiments.
\subsection{Bell scenarios and the MNC principle}
As mentioned in the main text, applying the MNC condition to bipartite Bell scenarios results in a condition that is stronger than the original Macroscopic Locality condition \cite{ML}, which applies exclusively to this type of scenario. However, the motivations for imposing the two conditions in this case differ substantially.
In section \ref{se:MNC} we gave a physical picture to motivate the MNC condition, of a single system passing through a measurement device after which it ends up hitting one of many detectors; the macroscopic version of the experiment simply consisted of many systems passing through a similar apparatus. For physical experiments of this form, the motivation for applying the MNC condition has the most clarity. Some such gedankenexperiments do indeed correspond to the $B_{n,m,d}$ scenarios discussed above, and in this sense these motivating comments are valid for these scenarios. However, this easy answer has little to do with the special status of Bell scenarios; because it takes place at one location in spacetime, such an experiment cannot (directly, at least) invoke any motivations stemming from locality or relativity. It is the standard Bell experiment, involving spacelike separated parties, that is the physical experiment of interest which, to a large extent, motivates the study of these scenarios in the first place.
In the original argument for ML, the gedankenexperiment discussed corresponds to this standard case: one run of the experiment involves a pair of particles, each passing through a measurement device at distant locations. This is important, because the separation of the two measurement devices is what motivates the application of the no-signalling and locality conditions used in the argument. The main strength of this kind of motivation, in contrast to that for the MNC condition, is that it does not depend on any assumptions about the experimental protocols under consideration apart from the separation of the parties (``device independence'').
It is difficult to directly extend this sort of motivation to the MNC condition applied to Bell scenarios. To begin with, MNC concerns ``intensities'' corresponding to the \textit{global} measurement outcomes, but in the ML gedankenexperiment only the counts of local outcomes are available, not counts of how many \textit{pairs} of particles hit a particular \textit{pair} of detectors. Secondly, it is not immediately clear how to motivate the inclusion of correlated measurements. Thus it is not clear if MNC implies any constraints on experimental results for the spacelike-separated version of the Bell experiment\footnote{One might wonder if the global intensities and the inclusion of correlated measurements could be discarded without affecting the strength of the MNC condition. But these actually constitute the difference between the ML and MNC conditions, and these must therefore account for the increased strength of the MNC condition over the ML condition. For example, from the constraint in equation (\ref{e:central_lim}) in the main text, when we have two outcomes that are alternatives in one measurement, the corresponding entry in the covariance matrix is determined. Thus, the more measurements are given, the more constraints there are on this matrix, and so considering the correlated measurements changes the constraints.}.
Given a single system experiment, on the other hand, these problems are avoided. To apply MNC to the ``true'' Bell experiment, therefore, we need to relate this experiment to a single system experiment. To examine this issue, let us consider the relatively familiar case of quantum theory.
Let us first consider the usual Bell experiment, which will be called ``experiment 1'', with two separated parties. The experimental results are a conditional probability distribution, and the mathematical model of experiment 1 is as in Def. \ref{qcorrvan} given in the previous section, \textit{i.e.}~a Hilbert space, state, and projectors with certain properties. In quantum mechanics, it is always (in principle) possible to set up a single system experiment that can be described by the same quantum model, and thus has corresponding experimental results. Call this ``experiment 2''. Furthermore, in experiment 2 it is (in principle) possible to add measurements corresponding to all the correlated measurements for the appropriate Bell scenario, making experiment 2 a realisation of a quantum model on a Bell scenario in the hypergraph approach. Thus, in quantum theory at least, there is a strong relation between any given ``true'' Bell experiment and some single system experiment which can be described by a Bell scenario in the hypergraph approach.
To apply MNC to the separated Bell experiment in general, we need this property to remain true in whatever theoretical framework is being applied. That is, for any possible experimental results for a given Bell experiment, there should exist (in principle) a single system experiment that can be described by the corresponding Bell scenario in the hypergraph approach, and which gives the corresponding experimental results. If this was true, then any constraint implied by a principle for the single-system experiment would also apply to the separated Bell scenario. In this case MNC would indeed apply to the separated Bell experiment and the almost quantum bound would be respected.
This is a rather abstract assumption on the relation of theory to experiment. It does provide at least one way to apply MNC to experiments with separated parties, even if the motivation is not as simple as merely invoking relativistic causality. Furthermore it might be argued that, whenever nonlocality is considered as a subcategory of contextuality for all intents and purposes, some assumptions of this nature are always implicitly made.
\small
\end{document}
|
\begin{document}
\maketitle
\section{The conjecture}
We state a conjecture on the stability of Betti diagrams of powers of monomial ideals. Boij and S\"oderberg \cite{AE_BS} conjectured that Betti diagrams can be decomposed into pure diagrams, and that was proved by Eisenbud and Schreyer \cite{AE_ES}. We don't cover the basics of Boij-S\"oderberg theory in this extended abstract; see Fl\o ystad \cite{AE_F} for a survey.
Up to scaling a pure diagram is determined by its non-zero positions. In our setting the top left corner is always non-zero and we normalise by assigning it the value one. For higher powers of ideals we need taller pure diagrams in a sequential way. A \emph{translation of a pure diagram} for $k=0,1,2,\ldots$ is a sequence of pure diagrams of the form
\[
\begin{array}{rcl}
\pi(k) & = &
\begin{array}{r|l}
& 0 \,\,\,\,\, 1 \,\,\,\,\, 2 \,\,\,\,\, \cdots \,\,\,\,\, \\
\hline
0 & 1 \\
\vdots \\
l(k) & \,\,\,\,\, \,\,\,\,\, \textrm{A fixed shape for} \\
& \,\,\,\,\, \,\,\,\,\, \textrm{non-zero entries} \\
\end{array}
\end{array}
\]
where $l(k)$ is a linear function.
According to Boij-S\"oderberg theory there is for every ideal $I$ in $S$ a decomposition of the Betti diagram $\beta(S/I)=w_1\pi_1+\cdots +w_m\pi_m$ where each $w_i$ is a non-negative real number and each $\pi_i$ is a pure diagram. Usually there are many choices of weights, and when considering algorithms to find decompositions there is a point to finding a particular one. But the number of choices is also a measure of the complexity of the Betti diagram, and one might notice that for ideals whose invariants are known for large powers, the complexity in this sense is quite low. For example, if all powers are linear, then there are no choices at all.
For any $\beta(S/I)$ there is a finite set of pure diagrams that can be included in a decomposition with positive weight. We call the set of possible weight vectors for $\beta(S/I)$ the \emph{polytope of Betti diagram decompositions}.
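Computationally, once a finite list of candidate pure diagrams is fixed and each diagram is flattened into a vector, the polytope of Betti diagram decompositions is the set $\{w\geq 0 : Pw=\beta\}$, where the columns of $P$ are the pure diagrams and $\beta$ is the flattened Betti diagram. The sketch below (ours, assuming \texttt{scipy}) only tests whether this polytope is non-empty; enumerating its vertices, as the conjecture requires, needs an additional polyhedral computation.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def decomposition_weights_exist(P, beta):
    """P: matrix whose columns are flattened pure diagrams; beta: flattened Betti diagram.
    The polytope of decompositions is {w >= 0 : P w = beta}."""
    m = P.shape[1]
    res = linprog(np.zeros(m), A_eq=P, b_eq=beta,
                  bounds=[(0, None)] * m, method="highs")
    return res.success
\end{verbatim}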
We conjecture that for high powers the polytope of Betti diagram decompositions stabilises.
\begin{conjecture}
Let $I$ be a monomial ideal in $S$ with all generators of the same degree. Then there is a $k_0$ such that for all $k>k_0,$
\begin{itemize}
\item[1.] For some translations of pure diagrams $\pi_1(k),\ldots,\pi_m(k)$ any decomposition of $\beta(S/I^k)$ is a weighted sum of the form $w_1\pi_1(k)+\cdots+w_m\pi_m(k).$ Denote the polytope of Betti diagram decompositions of $\beta(S/I^k)$ in $\mathbb{R}^m$ by $P_k.$
\item[2.] All $P_k$ are of the same combinatorial type as a polytope $P_I$. For any vertex $v$ of $P_I$ there is a function $h_v(k)\in \mathbb{R}^m$, which is rational in each coordinate, such that the vertex corresponding to $v$ in $P_k$ is $h_v(k).$
\end{itemize}
\end{conjecture}
The conjecture is true for ideals whose large enough powers are all linear: the polytope is a point. This follows from the fact that the column sums of the Betti diagrams stabilise to polynomials for large powers, according to Kodiyalam \cite{AE_K}, and from the procedure for deriving the unique decomposition of linear diagrams. The conjecture holds for many small examples that the author has calculated. There is unfortunately no abundance of ideals in the literature for which the Betti diagrams of all powers are given explicitly, since these concepts are fairly new. But there are many interesting tools accessible, for example from algebraic and topological combinatorics, that should make serious attempts to derive them fruitful.
\section{An example}
In this section we give an example of an ideal satisfying the conjecture.
Engstr\"om and Nor\'en \cite{AE_EN} constructed explicit cellular minimal resolutions of $S/I^k$ for all $k$ and $n,$ where
\[ S=\mathbf{k}[x_1,x_2,\ldots,x_n] \,\,\, \textrm{and} \,\,\, I=\langle x_1x_2,x_2x_3,\ldots,x_{n-1}x_n \rangle, \]
and calculated the Betti numbers:
\[
\beta_{i,j}(S/I^k)={n+3k-j-2 \choose 2j-3i-3k+3}{n+4k+2i-2j-4 \choose 2k+2i-j-2}{j-i-k \choose k-1}.
\]
The Betti diagram of $\mathbf{k}[x_1,x_2,x_3,x_4,x_5,x_6] / \langle x_1x_2,x_2x_3,x_3x_4,x_4x_5,x_5x_6 \rangle^k$ is
\[
\begin{array}{c|cccccc}
& 0 & 1 & 2 & 3 & 4 & 5 \\
\hline
0 & 1 \\
\vdots \\
2k-1 && {k+4 \choose 4} & 4{k+3 \choose 4} & 6{k+2 \choose 4} & 4{k+1 \choose 4} & {k \choose 4} \\
&&& k(k+2) & 2k(k+1) & k^2 & \\
\end{array}
\]
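As a sanity check (ours; we read each binomial coefficient as zero outside its usual range), the closed formula reproduces the two non-trivial rows of the displayed table, for instance for $k=3$:
\begin{verbatim}
from math import comb

def c(a, b):
    # guarded binomial coefficient: zero outside the usual range
    return comb(a, b) if 0 <= b <= a else 0

def betti(i, j, n, k):
    return (c(n + 3*k - j - 2, 2*j - 3*i - 3*k + 3)
            * c(n + 4*k + 2*i - 2*j - 4, 2*k + 2*i - j - 2)
            * c(j - i - k, k - 1))

n, k = 6, 3
for row in (2*k - 1, 2*k):           # the two displayed rows of the Betti table
    print(row, [betti(i, row + i, n, k) for i in range(6)])
\end{verbatim}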
The translations of pure diagrams:
\[\begin{array}{cc}
\begin{array}{rcl}
\pi_1(k) & = &
\begin{array}{c|cccc}
& 0 & 1 & 2 & 3 \\
\hline
0 & 1 \\
\vdots \\
2k-1 & & \ast & \ast & \ast \\
\end{array}
\end{array}
&
\begin{array}{rcl}
\pi_2(k) & = &
\begin{array}{r|cccc}
& 0 & 1 & 2 & 3 \\
\hline
0 & 1 \\
\vdots \\
2k-1 & & \ast & \ast & \\
&&&& \ast \\
\end{array}
\end{array} \\
\, \\
\begin{array}{rcl}
\pi_3(k) & = &
\begin{array}{c|cccc}
& 0 & 1 & 2 & 3 \\
\hline
0 & 1 \\
\vdots \\
2k-1 & & \ast && \\
&&& \ast & \ast \\
\end{array}
\end{array} &
\begin{array}{rcl}
\pi_4(k) & = &
\begin{array}{c|ccccc}
& 0 & 1 & 2 & 3 & 4 \\
\hline
0 & 1 \\
\vdots \\
2k-1 & & \ast & \ast & \ast & \ast\\
\end{array}
\end{array} \\
\, \\
\begin{array}{rcl}
\pi_5(k) & = &
\begin{array}{r|ccccc}
& 0 & 1 & 2 & 3 & 4 \\
\hline
0 & 1 \\
\vdots \\
2k-1 & & \ast & \ast & \ast \\
&&&&& \ast
\end{array}
\end{array} &
\begin{array}{rcl}
\pi_6(k) & = &
\begin{array}{c|ccccc}
& 0 & 1 & 2 & 3 & 4 \\
\hline
0 & 1 \\
\vdots \\
2k-1 & & \ast & \ast & \\
&&&& \ast & \ast
\end{array}
\end{array} \\
\, \\
\begin{array}{rcl}
\pi_7(k) & = &
\begin{array}{r|ccccc}
& 0 & 1 & 2 & 3 & 4 \\
\hline
0 & 1 \\
\vdots \\
2k-1 & & \ast & \\
&&& \ast & \ast & \ast
\end{array}
\end{array} &
\begin{array}{rcl}
\pi_8(k) & = &
\begin{array}{r|cccccc}
& 0 & 1 & 2 & 3 & 4 & 5\\
\hline
0 & 1 \\
\vdots \\
2k-1 & & \ast & \ast & \ast & \ast & \ast \\
\end{array}
\end{array}
\end{array}\]
The polytope $P_k$ is a triangle whose vertices have the coordinates $h_1(k),$ $h_2(k)$ and $h_3(k).$
\[h_1(k)=\left(
0,0, \frac{k+2}{2k+3}, w_4(k),
\frac{(2k+5)(k-1)}{(2k+1)(k+2)(k+1)}, \frac{(4k+5)(k+1)}{(2k+3)(2k+1)(k+2)},0,w_8(k)
\right) \]
\[h_2(k)=\left(
0,\frac{2(k+2)(k+2)}{(2k+3)(2k+1)},0, w_4(k),
\frac{(2k+5)(k-1)}{(2k+1)(k+2)(k+1)}, \frac{(k+1)(k-1)}{(2k+3)(2k+1)(k+2)},\frac{1}{2k+3},w_8(k)
\right) \]
\[h_3(k)=\left(
\frac{k+2}{2k+1},0,0, w_4(k),
\frac{k^2-7}{(2k+1)(k+2)(k+1)}, \frac{k+1}{(2k+3)(2k+1)(k+2)},\frac{1}{2k+3},w_8(k)
\right) \]
\[w_4(k)= \frac{(7k+5)(k-1)(k-2)}{4(2k+3)(2k+1)(k+1)} \]
\[w_8(k)= \frac{(k-1)(k-2)(k-3)}{4(2k+3)(2k+1)(k+1)} \]
\end{document}
|
\begin{document}
\title[The Slope Conjecture for a Family of Montesinos Knots]
{The Slope Conjecture for a Family of Montesinos Knots}
\author{Xudong Leng$^{*}$}
\address{School of Mathematical Sciences, Dalian University of
Technology, Dalian 116024, P. R. China} \email{[email protected]}
\thanks{$^{*}$ Corresponding author}
\author{Zhiqing Yang$^{\dag}$}
\address{School of Mathematical Sciences, Dalian University of
Technology, Dalian 116024, P. R. China} \email{[email protected]}
\thanks{$^{\dag}$ Supported by the NSFC (No. 11271058)}
\author{Xinmin Liu$^{\ddag}$}
\address{School of Mathematical Sciences, Dalian University of
Technology, Dalian 116024, P. R. China }
\email{[email protected]}
\thanks{$^{\ddag}$ Supported by the NSFC (No. 10371076 and 11431009) }
\subjclass[2010]{57N10, 57M25 }
\keywords{Slope Conjecture, Colored Jones polynomial, Quadratic integer programming, Boundary slope, Incompressible surface}
\begin{abstract}
The Slope Conjecture relates the degree of the colored Jones polynomial to the boundary slopes of a knot. We verify the Slope Conjecture and the Strong Slope Conjecture for Montesinos knots $M(\frac{1}{r},\frac{1}{s-\frac{1}{u}},\frac{1}{t} )$ with $r,u,t$ odd, $s$ even and $u\leq-1$, $r<-1<1<s,t$.
\end{abstract}
\maketitle
\section{Introduction}
Soon after V. Jones discovered the famous Jones polynomial~\cite{J}, E. Witten found an intrinsic explanation~\cite{W} through a TQFT approach, which led to the colored Jones polynomial. As a generalization of the Jones polynomial, the colored Jones polynomial reveals many deep connections between quantum algebra and three-dimensional topology. One example is the Volume Conjecture, which relates the asymptotic behavior of the colored Jones polynomial of a knot to the hyperbolic volume of its complement. Another connection, the Slope Conjecture proposed by S. Garoufalidis~\cite{Gar11b}, predicts that the growth of the maximal degree of the colored Jones polynomial of a knot determines some boundary slopes of the knot complement. So far, the Slope Conjecture has been verified for knots with up to 10 crossings~\cite{Gar11b}, adequate knots~\cite{FKP11}, 2-fusion knots~\cite{GvdV} and some pretzel knots~\cite{LV}. In~\cite{MT15}, K. Motegi and T. Takata prove that the conjecture is closed under taking connected sums and is true for graph knots. In~\cite{KT15}, E. Kalfagianni and A. T. Tran show that the conjecture is closed under taking $(p,q)$-cables, under certain conditions on the colored Jones polynomial, and formulate the Strong Slope Conjecture (see Conjecture 2.2b).
Inspired by Lee and van der Veen~\cite{LV}, in this article we prove the Slope Conjecture and the Strong Slope Conjecture for a family of Montesinos knots, $M(\frac{1}{r},\frac{1}{s-\frac{1}{u}},\frac{1}{t} )$ with $r,u,t$ odd, $s$ even and $u\leq -1$, $r<-1<1<s,t$ (see Section 2 and Figure 1). In particular, when $u=-1$, the knots $M(\frac{1}{r},\frac{1}{s+1},\frac{1}{t} )$ are just the pretzel knots in~\cite{LV}, and our results coincide with those of~\cite{LV} in this case. The strategy of the proof is to compare the maximal degree of the colored Jones polynomial with certain boundary slopes of the corresponding knot. To compute the colored Jones polynomial, we use the notion of knotted trivalent graphs~\cite{LV,vdV09,Thu02}, which is a convenient version of the skein method~\cite{MV} for Montesinos knots, and to compute boundary slopes we apply Hatcher and Oertel's edgepath system~\cite{HO89}.
\section{The Slope Conjecture}
Let $K$ denote a knot in $S^3$ and $N(K)$ denote its tubular neighbourhood. A surface $S$ properly embedded in the knot exterior $E(K)=S^3-N(K)$ is called \textit{essential} if it is incompressible, $\partial$-incompressible, and not $\partial$-parallel. A fraction $\frac{p}{q}\in \mathbb{Q}\cup \{\infty\}$ is a \textit{boundary slope} of $K$ if $pm+ql$ represents the homology class of $\partial S$ in the torus $\partial N(K)$ for some essential surface $S$, where $m$ and $l$ are the canonical meridian and longitude basis of $H_1(\partial N(K))$. The \textit{number of sheets} of $S$, denoted by $\sharp S$, is the minimal number of points at which the meridional circle of $\partial N(K)$ and $\partial S$ intersect.
For the colored Jones polynomial, we use the convention of~\cite{LV}, where the unnormalized \textit{$n$-colored Jones polynomial} is denoted by $J_{K}(n;v)$; see Section 3. Its value on the unknot is $[n]=\frac{v^{2n}-v^{-2n}}{v^2-v^{-2}}$ and the variable $v$ satisfies $v= A^{-1}$, where $A$ is the $A$-variable of the Kauffman bracket. The maximal degree of $J_{K}(n)$ in $v$ is denoted by $d_{+}J_{K}(n)$.
A fundamental result due to S. Garoufalidis and T. Q. T. Le~\cite{GL05} states that colored Jones polynomial is \textit{q}-holonomic. Furthermore, the degree of a colored Jones polynomial is a quadratic quasi-polynomial~\cite{Gar11a}, which can be formulated as follows.
\begin{thm}~\cite{Gar11a}
For any knot $K$, there exist an integer $p_{K}\in \mathbb{N}$ and quadratic polynomials $Q_{K,1},\ldots, Q_{K,p_K} \in \mathbb{Q}[x]$ such that $d_{+} J_{K}(n)=Q_{K,j}(n)$ if $n\equiv j \pmod{p_{K}}$ for $n$ sufficiently large.
\end{thm}
Now we can state the Slope Conjecture and the Strong Slope Conjecture:
\begin{conj}
In the context of the above theorem, set $Q_{K,j}=a_{j}x^2+ 2b_{j}x+ c_j$, then for each $j$ there exists an essential surface $S_j \subset S^3-K$, such that:
a.(Slope Conjecture~\cite{Gar11b}) $a_j$ is a boundary slope of $S_j$,
b.(Strong Slope Conjecture~\cite{KT15}) $ b_j=\frac{\chi(S_j)}{\sharp S_j}$, where $\chi(S_j)$ is the Euler characteristic of $S_j$.
\end{conj}
\begin{figure}
\caption{The Montesinos knot $M(\frac{1}{r},\frac{1}{s-\frac{1}{u}},\frac{1}{t})$.}
\end{figure}
A Montesinos knot is defined as a knot obtained by putting rational tangles together in a circle (See figure 1). A Montesinos knot obtained from rational tangles $R_{1}$, $R_{2}$, \ldots $R_{N}$ is denoted by $ M(R_{1}, R_{2},..., R_{N})$. Properties about Montesinos knots are omitted here, for details see~\cite{BZ}. It is known that a Montesinos knot is always semi-adequate~\cite{LT}, and the Slope Conjecture has been proved for adequate knots~\cite{FKP11}. So we focus on a family of A-adequate and non-B adequate knots $M(\frac{1}{r},\frac{1}{s-\frac{1}{u}},\frac{1}{t} ) $ with $r,u,t$ odd, $s$ even and $u\leq-1$, $r<-1<1<s,t$ and the maximal degrees of their colored Jones polynomials. The following is our main theorem.
\begin{thm}
The Slope Conjecture and the Strong Slope Conjecture are true for the Montesinos knots $M(\frac{1}{r},\frac{1}{s-\frac{1}{u}},\frac{1}{t} ) $, where $r,u,t$ are odd and $s$ is even and $u\leq -1$, $r<-1<1<s,t$.
\end{thm}
The above theorem is proved directly from the following two theorems. The first one is about the degree of the colored Jones polynomial and the second is about essential surfaces.
\begin{thm}
Let $K=M(\frac{1}{r},\frac{1}{s-\frac{1}{u}},\frac{1}{t} )$, where $r,u,t$ are odd, $s$ is even and $u\leq-1$, $r<-1<1<s,t$, and set $A=-\frac{r+s+1}{2}, B=-(r+1), C=-\frac{r+t}{2}, \Delta=4AC-B^2 $. \\
(1)If $ A\geq0 $ or $ C\geq0 $, or $A,C<0$ and $\Delta<0$, then
\[d_{+} J_{K}(n)= Q_{K,j}=[\frac{2(t-1)^{2}}{s+t-1}-2(r+t)]n^{2}+2(r+u+3)n+c_{j},\]
where $c_j$ is defined as follows. Let $0\leq j < \frac{s+t-1}{2}$ be such that $n\equiv j \pmod{\frac{s+t-1}{2}}$, and set $v_j$ to be the odd number nearest to $\frac{2(t-1)j}{s+t-1}$. Then $c_j=-\frac{s+t-1}{2}\beta^{2}_{j}-(s+t-1)\beta_j-2(u+2)$, $\beta_j=v_j-1-\frac{2(t-1)}{s+t-1}j$. $p_K=\frac{s+t-1}{2}$ is a period of $d_{+} J_{K}(n)$ but may not be the least one. \\
(2)If $A,C< 0$ and $\Delta\geq0$, then
\[d_{+} J_{K}(n)= 2u(n-1).\]
\end{thm}
\begin{thm}
Under the same assumptions as the previous theorem, \\
(1)When $ A\geq0 $ or $ C\geq0 $, or $A,C<0$ and $\Delta<0$, there exists an essential surface $S$ with boundary slope $\frac{2(t-1)^{2}}{s+t-1}-2(r+t)$, and $\frac{\chi(S)}{\sharp S}=r+u+3$.\\
(2)When $A,C< 0$ and $\Delta\geq0$, there exists an essential surface $S_0$ with boundary slope $0$, and $\frac{\chi(S_0)}{\sharp S_0}= u$.
\end{thm}
\section{The Colored Jones Polynomial}
To compute the colored Jones polynomial of Montesinos knots, we use the notion of \textit{knotted trivalent graphs} (KTG) introduced in~\cite{LV} (See also~\cite{vdV09,Thu02}.), which is a natural generalization of knots and links.
\begin{figure}
\caption{Operations on KTG: framing change $F$ and unzip $U$ applied to an edge $e$, and the triangle move $A^{\omega}$.}
\end{figure}
\begin{dfn}
~\cite{LV}
(1)A \textit{framed graph} is a one-dimensional simplicial complex $\Gamma$ together with an embedding $\Gamma \rightarrow \Sigma$ of $\Gamma$ into a surface with boundary $\Sigma$ as a spine.
(2)A \textit{coloring} of $\Gamma$ is a map $\sigma: E(\Gamma)\rightarrow\mathbb{N}$, where $E(\Gamma)$ is the set of edges of $\Gamma$.
(3)A \textit{knotted trivalent graph} (KTG) is a trivalent framed graph embedded as a surface into $\mathbb{R}^3$, considered up to isotopy.
\end{dfn}
The advantage of KTGs over knots or links is that they support many operations. There are three types of operations we will need in this paper, the \textit{framing change} denoted by $F^{e}_{\pm}$, the \textit{unzip} denoted by $U^e$, and the \textit{triangle move} denoted by $A^{\omega}$, as illustrated in Figure 2.
The important thing is that these three types of operations are sufficient to produce any KTG from the $\Theta$ graph.
\begin{thm}
~\cite{vdV09,Thu02} Any KTG can be generated from $\Theta$ by repeatedly applying the operations $F_{\pm}$, $U$ and $A$ defined above.
\end{thm}
Following this result, we can define the colored Jones polynomial of any KTG once we fix the value of any colored $\Theta$ graph and describe how it changes under the operations described above.
\begin{dfn}
~\cite{LV} The colored Jones polynomial of a KTG $\Gamma $ with coloring $\sigma$, denoted by $\langle\Gamma,\sigma\rangle$, is defined by the four equations below.
\[
\langle\Theta; a, b, c\rangle= O^{\frac{a+b+c}{2}}\left[
\begin{matrix}
&\frac{a+b+c}{2}& &\\
\frac{-a+b+c}{2}&\frac{a-b+c}{2}&\frac{a+b-c}{2}&
\end{matrix}
\right]
\]
\[
\langle F_{\pm}^{e}(\Gamma), \sigma\rangle= f(\sigma(e))^{\pm 1}\langle\Gamma, \sigma\rangle
\]
\[
\langle U^{e}(\Gamma), \sigma\rangle= \langle \Gamma, \sigma\rangle \sum_{\sigma(e)}\frac{O^{\sigma(e)}}{\langle\Theta; \sigma(e), \sigma(b), \sigma(d)\rangle}
\]
\[
\langle A^{\omega}(\Gamma), \sigma\rangle= \langle \Gamma, \sigma\rangle \Delta(a, b, c, \alpha, \beta, \gamma)
\]
\end{dfn}
In particular, a knot is considered as a $0$-framed KTG without vertices, so we define its colored Jones polynomial as $J_{K}(n+1)=(-1)^{n}\langle K, n \rangle$, where $n$ denotes the color of the single edge of the knot, and the $(-1)^n$ term is there to normalize the unknot as $J_{O}(n)=[n]$.
In the above formulas, the quantum integer is $[k]=\frac{v^{2k}-v^{-2k}}{v^2-v^{-2}}$, $[k]!=[k][k-1]\cdots[1]$, and the symmetric multinomial coefficient is defined as:
\[\left[
\begin{matrix}
a_1+ a_2+ \ldots a_r\\
a_1,a_2,\ldots, a_r
\end{matrix}
\right]=\frac{[a_{1}+a_{2}+...+a_{r}]!}{[a_{1}]!\ldots[a_{r}]!}.
\]
The value of the $k$-colored unknot is $O^k=(-1)^{k}[k+1]=\langle O,k \rangle $.
The $f$ of the framing change is defined as:
$f(a)=(\sqrt{-1})^{-a} v^{-\frac{1}{2}a(a+2)}$.
The summation in the equation of unzip is taken over all admissible colorings of the edge $e$ that has been unzipped.
$\Delta$ is the quotient of the $6j$-symbol and the $\Theta$, \[\Delta(a,b,c,\alpha,\beta,\gamma)=\Sigma\frac{(-1)^z}{(-1)^{\frac{a+b+c}{2}}}\left[
\begin{matrix}
z+1 \\
\frac{a+b+c}{2}
\end{matrix}
\right]\left[\begin{matrix}
\frac{-a+b+c}{2}\\
z-\frac{a+\beta+\gamma}{2}
\end{matrix}
\right]\left[\begin{matrix}
\frac{a-b+c}{2}\\
z-\frac{\alpha+b+\gamma}{2}
\end{matrix}
\right]\left[\begin{matrix}
\frac{a+b-c}{2}\\
z-\frac{\alpha+\beta+c}{2}
\end{matrix}
\right].
\]
The summation range of $\Delta$ is indicated by the binomials. Note that this $\Delta$ is not the one in Theorem 2.4; we simply keep the traditional notation for both, and the context will always make clear which one is meant.
The above definition agrees with the integer normalization used in~\cite{Cos}, which shows that $\langle \Gamma, \sigma\rangle$ is a Laurent polynomial in $v$ and is independent of the choice of operations to produce the KTG.
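The ingredients above are straightforward to implement symbolically. The following sketch (ours, assuming the \texttt{sympy} package; the function names are our own) computes the quantum integers, the symmetric multinomial coefficient, $\langle\Theta; a, b, c\rangle$, $O^k$ and $f(a)$ as expressions in $v$.
\begin{verbatim}
import sympy as sp

v = sp.symbols('v')

def qint(k):                        # [k] = (v^{2k} - v^{-2k}) / (v^2 - v^{-2})
    return sp.cancel((v**(2*k) - v**(-2*k)) / (v**2 - v**(-2)))

def qfact(k):                       # [k]! = [k][k-1]...[1]
    out = sp.Integer(1)
    for i in range(1, k + 1):
        out *= qint(i)
    return out

def qmultinom(parts):               # symmetric multinomial coefficient
    den = sp.Integer(1)
    for a in parts:
        den *= qfact(a)
    return sp.cancel(qfact(sum(parts)) / den)

def O(k):                           # k-coloured unknot, O^k = (-1)^k [k+1]
    return (-1)**k * qint(k + 1)

def theta(a, b, c):                 # <Theta; a, b, c>
    return O((a + b + c)//2) * qmultinom([(-a + b + c)//2, (a - b + c)//2, (a + b - c)//2])

def f(a):                           # framing factor f(a) = (sqrt(-1))^{-a} v^{-a(a+2)/2}
    return sp.I**(-a) * v**sp.Rational(-a*(a + 2), 2)

print(sp.expand(qint(3)))           # v**4 + 1 + v**(-4)
print(sp.expand(theta(2, 2, 2)))
\end{verbatim}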
As illustrated in Figure 3, we obtain the colored Jones polynomial of the knot $K=M(\frac{1}{r},\frac{1}{s-\frac{1}{u}},\frac{1}{t})$ as follows. Starting from a $\Theta$, we first apply three $A$ moves, then four $F$ moves on the edges $a$, $b$, $c$, $d$, and then unzip these four twisted edges. Finally, to get the $0$-framed colored Jones polynomial we need to multiply by the factor $f(n)^{-2(r+s+u+t)-2\cdot \mathrm{writhe}}$ (in this case $f(n)^{-4u}$) to cancel the framing produced by the operations and the writhe of the knot.
\begin{figure}
\caption{The operations to produce the knot $K$ from a $\Theta$. For simplicity, in this example we set $r=u=-3, s=2, t=3$. }
\end{figure}
\begin{lem}
The colored Jones polynomial of the Montesinos knot $K=M(\frac{1}{r},\frac{1}{s-\frac{1}{u}},\frac{1}{t} ) $ is
\[
\begin{split}
J_{K}(n+1)&=(-1)^{n}f^{-4u}(n)\sum_{a, b, c, d\in D_n}^{}
\langle\Theta;a, b, c\rangle\Delta^{2}(a, b, c, n, n, n)\Delta(b, n, n, d, n, n)f^{r}(a)f^{s}(b)\\&f^{t}(c)f^{u}(d)O^{a}O^{b}O^{c}O^{d}
\langle\Theta;a, n, n\rangle^{-1}\langle\Theta;b,n,n\rangle^{-1}\langle\Theta;c, n, n\rangle^{-1}\langle\Theta;d, n, n\rangle^{-1},
\end{split}
\]
where the domain $D_n$ is defined such that $a$,$b$,$c$,$d$ are all even with $0\leq a, b, c,d \leq 2n$, and $a$,$b$,$c$ satisfy the triangle inequality.
\end{lem}
To calculate the degree of the colored Jones polynomial, we need to analyze the factors of the summands. The following lemma is from~\cite{LV}.
\begin{lem}~\cite{LV}
\[
d_{+}\langle\Theta; a, b, c\rangle= a(1-a)+b(1-b)+c(1-c)+\frac{(a+b+c)^{2}}{2},
\]
\[
d_{+}\langle F_{\pm}^{e}(\Gamma), \sigma\rangle= \pm d_{+}f(\sigma(e))\langle\Gamma, \sigma\rangle,
\]
\[
d_{+}\langle U^{e}(\Gamma), \sigma\rangle\geq d_{+}\langle\Gamma, \sigma\rangle+max_{\sigma(e)}(d_{+}O^{\sigma(e)}-d_{+}\langle\Theta; \sigma(e), \sigma(b), \sigma(d)\rangle),
\]
\[
d_{+}\langle A^{\omega}(\Gamma), \sigma\rangle= d_{+}\langle \Gamma, \sigma\rangle+ d_{+}\Delta(a, b, c, \alpha, \beta, \gamma).
\]
\[
\begin{split}
d_{+}\Delta(a, b, c, \alpha, \beta, \gamma)&= g(m+1,\frac{a+b+c}{2}+1)+ g(\frac{-a+b+c}{2},m-\frac{a+\beta+\gamma}{2})\\
&+g(\frac{a-b+c}{2},m-\frac{\alpha+b+\gamma}{2})+g(\frac{a+b-c}{2},m-\frac{\alpha+\beta+c}{2}),\\
\end{split}
\]
where $g(n,k)=2k(n-k)$ and $2m=a+b+c+\alpha+\beta+\gamma-\max(a+\alpha,b+\beta,c+\gamma)$.
\end{lem}
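These degree formulas are easy to transcribe into code; the sketch below (ours) is a direct transcription and could be used to machine-check the maximisation carried out in the proof of Theorem 2.4.
\begin{verbatim}
def d_theta(a, b, c):               # d_+ <Theta; a, b, c>
    return a*(1 - a) + b*(1 - b) + c*(1 - c) + (a + b + c)**2 / 2

def g(n, k):
    return 2*k*(n - k)

def d_Delta(a, b, c, al, be, ga):   # d_+ Delta(a, b, c, alpha, beta, gamma)
    m = (a + b + c + al + be + ga - max(a + al, b + be, c + ga)) / 2
    return (g(m + 1, (a + b + c)/2 + 1)
            + g((-a + b + c)/2, m - (a + be + ga)/2)
            + g((a - b + c)/2, m - (al + b + ga)/2)
            + g((a + b - c)/2, m - (al + be + c)/2))
\end{verbatim}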
Now we are ready to prove Theorem 2.4.
\begin{proof}(of Theorem 2.4)
The maximal degree of $J_{K}(n+1)$ satisfies the inequality below.
\[d_{+}J_{K}(n+1)\leq \max_{a, b, c, d\in D_{n}}\Phi(a, b, c, d),\]
where
$\Phi(a, b, c, d)= d_{+}\langle\Theta; a, b, c\rangle+ 2d_{+}\Delta(a, b, c, n, n, n)+d_{+}\Delta(b, n, n, d, n, n)+rd_{+}f(a)+sd_{+}f(b)+td_{+}f(c)+ud_{+}f(d)
+d_{+}O^{a}+d_{+}O^{b}+d_{+}O^{c}+d_{+}O^{d}-d_{+}\langle\Theta; a, n, n\rangle-d_{+}\langle\Theta; b, n, n\rangle-d_{+}\langle\Theta; c,n,n\rangle-d_{+}\langle\Theta; d, n, n\rangle-4ud_{+}f(n)$.
Generally, solving $\max_{a,b,c,d\in D_{n}}\Phi(a,b,c,d)$ is a problem of quadratic integer programming, which is quite an involved topic~\cite{GvdV}. In this case, however, it can be solved by examining the monotonicity of $\Phi$.
Note that the feasible region $D_{n}$ is an even integer lattice in a convex polytope in $ \mathbb{R}^4$ and can be divided into 6 subregions corresponding to 6 different forms of $\Phi$, that is, $D_{n}=D_{n}^{a,b+d}\cup D_{n}^{b,b+d}\cup D_{n}^{c,b+d}\cup D_{n}^{a,2n}\cup D_{n}^{b,2n}\cup D_{n}^{c,2n}$, where $D_{n}^{a,b+d}$, $D_{n}^{b,b+d}$, $D_{n}^{c,b+d}$, $D_{n}^{a,2n}$, $D_{n}^{b,2n}$ and $D_{n}^{c,2n}$ are defined to be $D_{n}\cap \{(a,b,c,d)\mid a\geq b,c; \ b+d\geq 2n\}$, $D_{n}\cap \{(a,b,c,d)\mid b\geq a,c;\ b+d\geq 2n\}$, $D_{n}\cap \{(a,b,c,d)\mid c\geq b,a;\ b+d\geq 2n\}$, $D_{n}\cap \{(a,b,c,d)\mid a\geq b,c; \ 2n\geq b+d\}$, $D_{n}\cap \{(a,b,c,d)\mid b\geq a,c;\ 2n\geq b+d\}$ and $D_{n}\cap \{(a,b,c,d)\mid c\geq b,a;\ 2n\geq b+d\}$, respectively. Then we calculate the partial derivatives of the real function $\Phi$, and find that on each of the 6 regions we have $\partial_{d}\Phi >0$, $\partial_{a}\Phi >0$, and in $D_{n}^{a,b+d}$ we have $\partial_{b}\Phi<0$. So any maximum of $\Phi$ on $D_{n}$ must occur when $d=2n$, $a=b+c$. Note that in this paper it is more convenient to work with the real partial derivatives of these even-integer functions when considering their monotonicity; for example, we use $\partial_{d}\Phi >0$ rather than $\Phi(a,b,c,d+2)>\Phi(a,b,c,d)$, the latter obviously being implied by the former.
Now we focus on the following 2-variable function $R(b,c)$ in the domain $T_{n}=\{(b,c)\mid b,c\geq 0, b+c\leq 2n\}$.
\begin{equation}
\begin{split}
R(b, c)&=\Phi(b+c, b, c, 2n)\\
&=-\frac{r+s+1}{2}b^{2}-(r+s-1)b-(r+1)bc-\frac{r+t}{2}c^{2}-(r+t-2)c+2un.\\
\end{split}
\end{equation}
Set $A=-\frac{r+s+1}{2}, B=-(r+1), C=-\frac{r+t}{2}, \Delta=4AC-B^2 $. \\
(1) If $A\geq0$ or $C\geq0$, we have $\partial_{b}R>0$ or $\partial_{c}R>0$. Then the maxima must lie on the line $b+c=2n$. Setting $Q(b)=R(b,2n-b)$, we have
\[
\begin{split}
\ Q(b)&=R(b,2n-b)\\
&=-\frac{s+t-1}{2}b^{2}+[2(t-1)n-s+t-1]b-2(r+t)n^2-2(r-u+t-2)n.
\end{split}
\]
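For instance, the leading coefficient of $Q$ is obtained by collecting the $b^{2}$-terms of $R(b,2n-b)$:
\[
-\frac{r+s+1}{2}+(r+1)-\frac{r+t}{2}=\frac{-(r+s+1)+2(r+1)-(r+t)}{2}=-\frac{s+t-1}{2}.
\]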
$Q(b)$ is a quadratic function in $b$ with negative leading coefficient, and its real maximum is at $b_{m}=\frac{2(t-1)n-s+t-1}{s+t-1}$, $b_{m}\in (0,2n)$ for $n$ sufficiently large.
Set $n+1=N=p(\frac{s+t-1}{2})+j$, $0\leq j< \frac{s+t-1}{2}$, then $b_{m}=(t-1)p-1+\frac{2(t-1)j}{s+t-1}$.
Let $b_{0}$ be the even number nearest to $b_{m}$, then we have $b_{0}=(t-1)p-1+v_{j}$, where $v_{j}$ is the odd number nearest to $\frac{2(t-1)j}{s+t-1}$,
so $b_{0}=\frac{2(t-1)}{s+t-1}N-\frac{2(t-1)}{s+t-1}j+v_{j}-1$. Then we have
\[\max_{a,b,c,d\in D_{n}}\Phi(a,b,c,d)= Q(b_{0})=[\frac{2(t-1)^{2}}{s+t-1}-2(r+t)]N^{2}+2(r+u+3)N+c_{j},\]
where $c_j=-\frac{s+t-1}{2}\beta^{2}_{j}-(s+t-1)\beta_j-2(u+2)$, $\beta_j=v_j-1-\frac{2(t-1)}{s+t-1}j$.
Finally, when $b_{m}$ is not an odd integer, the maximum is unique. Otherwise, $\Phi$ has exactly two maxima, and we have to consider the possibility that the coefficients of the two maximal degree terms may cancel out. It is easy to see that for the leading coefficients of the terms of the summation, the $O$ terms contribute $(-1)^{a+b+c+d}=1$, the $f$ terms contribute $(-1)^{\frac{1}{2} (ar+bs+ct+du)}$, the $\Theta$ terms contribute $(-1)^{a+b+c+4n+\frac{1}{2}d}=(-1)^{\frac{1}{2}d}$, and the $\Delta$ terms contribute $(-1)^{\frac{1}{2}b+n}$. Multiplied by the $(-1)^{n}$ in front of the summation, the leading coefficients are $(-1)^{\frac{1}{2}[ra+(s+1)b+tc+(u+1)d]}$. Note that $r,u,t$ are odd integers and $s$ is an even integer, and any maximum must occur on $a=b+c=2n$. So if there are two maximal degree terms, their coefficients are both $1$, and no cancellation will happen.
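Explicitly, on $a=b+c=2n$ and $d=2n$ the exponent equals $\frac{1}{2}[(r+s+1)b+(r+t)c]+(u+1)n$; since $r+s+1$, $r+t$ and $u+1$ are even while $b$ and $c$ are even, this exponent is even, so each maximal degree term indeed has coefficient $+1$.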
(2) When $A<0$ and $C<0$, for any fixed $c_0$, $R(b,c_{0})$ is a quadratic function in $b$ with negative leading coefficient, whose axis of symmetry (in the plane $c=c_0$ of the $bcR$-coordinates) intersects the line $\partial_{b}R=0$ and is perpendicular to the $bc$-plane, so we consider the real values of $R(b,c)$ on the line $\partial_{b}R=0$:
\begin{equation*}
\left\{
\begin{array}{l}
R(b,c)=-\frac{r+s+1}{2}b^{2}-(r+s-1)b-(r+1)bc-\frac{r+t}{2}c^2-(r+t-2)c+2un \\
\partial_{b}R=-(r+s+1)b-(r+s-1)-(r+1)c =0.\\
\end{array}
\right.
\end{equation*}
Then we have
\begin{equation}
\begin{split}
&R\mid _{\partial _b R=0}= R(b,-\frac{r+s+1}{r+1}b-\frac{r+s-1}{r+1})\\
&=[-\frac{r+s+1}{2(r+1)^{2}}\Delta]b^{2}+\frac{r+s+1}{(r+1)^{2}}[(r+1)(r+t-2)-(r+t)(r+s-1)]b+ const.
\end{split}
\end{equation}
If $\Delta \neq 0$, $R\mid _{\partial _b R=0}$ is a quadratic function in $b$ whose axis of symmetry is perpendicular to the $bc$-plane at the point
\[P=(b_p, c_p)=(\frac{(r\!+\!1)(r\!+t\!-\!2)\!-\!(r\!+\!t)(r\!+\!s\!-\!1)}{\Delta},\frac{(r\!+\!1)(r\!+\!s\!-\!1)\!-\!(r\!+\!s\!+\!1)(\!r+\!t\!-\!2)}{\Delta}).\]($P$ is actually the intersection of $\partial_{b}R=0$ and $\partial_{c}R=0$).
\begin{figure}
\caption{$R(b,c)$ is restricted to the triangle domain $T_n$: $(0,0)$-$(0,2n)$-$(2n,0)$, and the arrows indicate its increasing direction; $\ell$ denotes the line $\partial_{b}R=0$.}
\end{figure}
(2.1) If $A,C<0$ and $\Delta<0$, by Equation (3.2), $R\mid _{\partial _b R=0}$ is a quadratic function in $b$ with positive leading coefficient, and we have $b_p\leq 0$, $c_p\leq 0$. See Figure 4(a). The arrows indicate the increasing direction. For sufficiently large $n$, any maximum must be on the segment $[Q,(0,2n)]$, and then the argument is the same as that of case (1).
(2.2) If $A,C<0$ and $\Delta>0$, $R\mid _{\partial _b R=0}$ is a quadratic function in $b$ with negative leading coefficient, and we have $b_p\geq 0$, $c_p\geq 0$. Any maximum must occur on $OR$. See Figure 4(b). Note that
$R(0,c)=-\frac{r+t}{2}c^{2}-(r+t-2)c+2un$ decreases in $[0,+\infty)$, so the maximum must occur at $O=(0,0)$, and therefore
\[d_{+}J_{K}(n+1)=R(0,0)=2un, \ d_{+}J_{K}(n)=2u(n-1).\]
(2.3) If $A,C<0$, $\Delta = 0$, and $(r+s-1)^2+(r+t-2)^2\neq 0$, then $R\mid _{\partial _b R=0}$ is a decreasing linear function in $b$, and any maximum must occur on $OS$. See Figure 4(c). Since $R(0,c)$ decreases in $[0,+\infty)$, the maximum must be at $O=(0,0)$, so
\[ d_{+}J_{K}(n)=2u(n-1).\]
(2.4) If $A,C<0$, $\Delta =0$, and $(r+s-1)^{2}+(r+t-2)^{2}=0$, then we immediately have $r=-3$, $s=4$, $t=5$ (indeed, $r+s-1=r+t-2=0$ forces $s=1-r$ and $t=2-r$, whence $A=C=-1$, and then $\Delta=4-(r+1)^{2}=0$ together with $r<-1$ gives $r=-3$), and $R(b,c)=-(b-c)^{2}+2un$. The maxima are $R(0,0)=R(2,2)=\ldots =R(k,k)$, where $k=n$ when $n$ is even, and $k=n-1$ when $n$ is odd. See Figure 4(d). By a similar argument to that in the last paragraph of (1), we conclude that there are no cancellations between the highest-degree coefficients, so
\[d_{+}J_{K}(n)=2u(n-1).\]
\end{proof}
\section{Boundary Slope and Euler Characteristic }
The main idea of the Hatcher-Oertel edgepath system, based on the work of~\cite{HT85}, is to deal with properly embedded surfaces in a Montesinos knot complement combinatorially. For details see~\cite{HO89, IM07}. Briefly speaking, the so-called \textit{candidate surfaces} are associated to \textit{admissible edgepath systems} in a diagram $\mathcal{D}$ in the $uv$-plane. The vertices of $\mathcal{D}$ correspond to projective curve systems $[a, b, c]$ on the 4-punctured sphere carried by the train track in Figure 5(a) via $u=\frac{b}{a+b}$, $v=\frac{c}{a+b}$.
\begin{figure}
\caption{(a) The train track in a 4-punctured sphere. (b) The diagram $\mathcal{D}$.}
\end{figure}
There are three types of vertices of $\mathcal{D}$:
(1)the arcs with slope $\frac{p}{q}$ denoted by $\langle\frac{p}{q}\rangle$, corresponding to the projective curve systems $[1, q-1, p]$, with the \textit{uv}-coordinates $(\frac{q-1}{q}, \frac{p}{q})$,
(2)the circles with slope $\frac{p}{q}$ denoted by $\langle\frac{p}{q}\rangle ^{\circ}$, corresponding to the projective curve systems $[0, p, q]$, with the \textit{uv}-coordinates $(1, \frac{p}{q})$,
(3)the arcs with slope $\infty$ denoted by $\langle \infty \rangle$, with the \textit{uv}-coordinates $(-1,0)$.
And there are 6 types of edges in $\mathcal{D}$:
(1)the \textit{non-horizontal } edges connecting the vertex $\frac{p}{q}$ to the vertex $\frac{r}{s}$ with $\mid ps- qr\mid =1$, denoted by $[\langle \frac{r}{s}\rangle , \langle \frac{p}{q}\rangle ]$,
(2)the \textit{horizontal } edges connecting $\langle\frac{p}{q}\rangle ^{\circ}$ to $\langle\frac{p}{q}\rangle $, denoted by $[ \langle \frac{p}{q}\rangle ,\langle\frac{p}{q}\rangle ^{\circ} ]$,
(3)the \textit{vertical } edges connecting $\langle z\rangle$ to $\langle z\pm 1\rangle$ , denoted by $[z, z\pm 1]$, here $z\in $ $\mathbb{Z}$,
(4)the \textit{infinity} edges connecting $\langle z\rangle$ to $\langle \infty \rangle$ denoted by $[\infty, z]$,
(5)the \textit{constant} edges which are points on the horizontal edge $[ \langle \frac{p}{q}\rangle ,\langle\frac{p}{q}\rangle ^{\circ} ]$ with the form $ \frac{k}{m} \langle \frac{p}{q}\rangle + \frac{m-k}{m}\langle\frac{p}{q}\rangle ^{\circ} $,
(6)the \textit{partial} edges which are parts of non-horizontal edges $[\langle \frac{r}{s}\rangle , \langle \frac{p}{q}\rangle ]$ with the form $[\frac{k}{m}\langle \frac{r}{s}\rangle + \frac{m-k}{m} \langle \frac{p}{q}\rangle, \langle \frac{p}{q}\rangle ] $.
An \textit{edgepath} denoted by $\gamma$ in $\mathcal{D}$ is a piecewise linear path beginning and ending at rational points of $\mathcal{D}$. An admissible edgepath system denoted by $\Gamma =(\gamma_{1},\gamma_{2},\ldots, \gamma_{n})$ is an \textit{n}-tuple of edgepaths in $\mathcal{D}$ with the following properties.\\
(E1)The starting point of $\gamma_i$ is on the horizontal edge $[\langle \frac{p_i}{q_i} \rangle, \langle \frac{p_i}{q_i} \rangle^{\circ}]$, and if it is not the vertex $\langle \frac{p_i}{q_i}\rangle$, $\gamma_i$ is constant.\\
(E2)$\gamma_i$ is minimal, that is, it never stops or retraces itself, nor does it ever go along two sides of the same triangle of $\mathcal{D}$ in succession.\\
(E3)The ending points of $\gamma_i$'s are rational points of $\mathcal{D}$ with their $\textit{u}$-coordinates equal and $\textit{v}$-coordinates adding up to zero.\\
(E4)$\gamma_i$ proceeds monotonically from right to left, ``monotonically'' in the weak sense that motion along vertical edges is permitted.
In~\cite{HO89}, a finite number of candidate surfaces are associated to each admissible edgepath system, and every essential surface in the knot complement with non-empty boundary of finite slope is isotopic to one of the candidate surfaces. Finally, to rule out the inessential surfaces, the notion of \textit{r}-value is developed in~\cite{HO89}; in our case, however, we only need the following convenient criterion from~\cite{Ich}.
\begin{lem}~\cite{Ich}
For an admissible edgepath system having ending points with positive $u$-coordinate, if all the last edges of the edgepaths travel in the same direction (upward or downward) from right to left, then all the candidate surfaces associated to the edgepath system are essential.
\end{lem}
The boundary slope of an essential surface $S$ is computed by $\tau(S)- \tau(S_0)$, where $\tau(S)$ is the \textit{total number of twists} (or twist for short) of an essential surface $S$, and $S_0$ is the Seifert surface. For a candidate surface $S$ associated to the admissible edgepath system $\Gamma$, we have~\cite{IM07}
\begin{equation}
\tau(S)=\sum_{\gamma_{i}\in \Gamma_{non-const}} \sum_{e_{i,j}\in \gamma_{i}}-2\sigma(e_{i,j})|e_{i,j}|.
\end{equation}
In the above formula, $|e|$ is the \textit{length} of an edge $e$, which is defined to be $0$, $1$, or $\frac{k}{m}$ for a constant edge, a complete edge or a partial edge $[\frac{k}{m}\langle \frac{r}{s}\rangle + \frac{m-k}{m} \langle \frac{p}{q}\rangle, \langle \frac{p}{q}\rangle ] $, respectively. And $\sigma(e)$ is the \textit{sign} of a non-constant edge $e$, which is defined to be $+1$ or $-1$ according to whether the edge is increasing or decreasing (from right to left in the \textit{uv}-plane), respectively, for a non-$\infty$ edge; for an $\infty$ edge the sign is defined to be $0$.
Now we are ready to prove Theorem 2.5.
\begin{proof}(of Theorem 2.5)
By the method of~\cite{HO89} (p.~461), when $r,u,t$ are odd and $s$ is even with $u\leq-1$, $r<-1<1<s,t$, we directly find the edgepath system of the Seifert surface $S_0$:
\[
\begin{split}
&\delta_{1}:[\langle0\rangle,\langle\frac{1}{r}\rangle],\\
&\delta_{2}:[\langle0\rangle, \langle\frac{1}{s+1}\rangle, \ldots,\langle\frac{-u-i}{s(-u-i)+1}\rangle, \ldots, \langle\frac{-u}{-su+1}\rangle],\\
&\delta_{3}:[\langle0\rangle, \langle\frac{1}{t}\rangle],\\
\end{split}
\]
where $0\leq i \leq -u-1$.
$S_0$ has boundary slope 0, and by the formula (3.4) from~\cite{IM07},
\[-\frac{\chi(S_0)}{\sharp S_0}=2-u-2,\]
\[\frac{\chi(S_0)}{\sharp S_0}=u.\]
By now we have proved (2).
For (1), with direct calculations we always have $\Delta<0$, and we claim that there exists an admissible edgepath system having ending points with $\textit{u}$-coordinate $u_{0}=\frac{(t-1)s}{ts+t-1}$ in the $\textit{uv}$-plane. In fact, $u_0$ is just the solution of the equation $v_{1}(u)+v_{2}(u)+ v_{3}(u)=0$, where the linear functions $v=v_{1}(u)$, $v_{2}(u)$ and $v_{3}(u)$ are determined by the lines through the edges $[\langle0\rangle,\langle\frac{1}{t}\rangle]$, $[\langle0\rangle,\langle\frac{1}{s+1}\rangle]$ and $[\langle -1\rangle,\langle -\frac{1}{2}\rangle,\ldots,\langle\frac{1}{r}\rangle]$, respectively. Denote by $u_t$, $u_{s+1}$ and $u_r$ the $\textit{u}$-coordinates of $\langle \frac{1}{t}\rangle$, $\langle \frac{1}{s+1}\rangle$ and $\langle \frac{1}{r}\rangle$ respectively. With direct calculations we have
\[u_0-u_t=-\frac{(t-1)^2}{t(st+t-1)}<0,\]
\[u_0-u_{s+1}=-\frac{s^2}{(s+1)(st+t-1)}<0,\]
\[u_0-u_r=-\frac{\Delta}{r(st+t-1)}<0,\]
so $u_0$ must lie to the left of $u_t$, $u_{s+1}$ and $u_r$. Suppose the edgepath of the $\frac{1}{r}$-tangle ends on the edge $[\frac{1}{r+k+1}, \frac{1}{r+k}]$, where $0\leq k\leq -r-2$; then $u_0$ must be the $\textit{u}$-coordinate of the ending points of the admissible edgepath system $\Gamma$ below:
\[
\begin{split}
&\gamma_{1}:[(\frac{(t\!-\!1\!)^{2}}{s+t-\!1}\!-\!r\!-\!t-\!k)\langle\frac{1}{r\!+\!k\!+\!1\!}\!\rangle\!+\!(\!1\!+\!r\!+\!t\!+\!k-\!\frac{(\!t\!-\!1)^{2}}{s\!+\!t\!-\!1})\!\langle\frac{1}{r+k}\rangle,
\langle\frac{1}{r\!+\!k}\!\rangle, \!\langle\frac{1}{r\!+\!k\!-1}\!\rangle, \ldots, \!\langle\frac{1}{r}\!\rangle], \\
&\gamma_{2}:[\frac{s}{s+t-1}\langle0\rangle+\frac{t-1}{s+t-1}\langle\frac{1}{s+1}\rangle, \langle\frac{1}{s+1}\rangle, \ldots, \langle\frac{-u-i}{s(-u-i)+1}\rangle,
\ldots, \langle\frac{-u}{-su+1}\rangle],\\
&\gamma_{3}:[\frac{t-1}{s+t-1}\langle0\rangle+\frac{s}{s+t-1}\langle\frac{1}{t}\rangle, \langle\frac{1}{t}\rangle],\\
\end{split}
\]
where $0\leq i \leq -u-1$, and the lengths of the partial edges are calculated via $u_0$ by formula (3.1) from~\cite{IM07}. By Lemma 4.1, any candidate surface associated to $\Gamma$ must be essential.
By formula (4.1), the twist of the surface $S$ associated to the above edgepath system is
\[
\begin{split}
\tau(S)&=\sum_{\gamma_{i}\in \Gamma_{non-const}} \sum_{e_{i,j}\in \gamma_{i}}-2\sigma(e_{i,j})|e_{i,j}|
\\&=2[\frac{(t-1)^{2}}{s+t-1}-r-t-k]+2k+\frac{2s}{s+t-1}+2(-u-1)+\frac{2(t-1)}{s+t-1}\\
&=\frac{2(t-1)^{2}}{s+t-1}-2(u+r+t).
\end{split}
\]
The twist of the Seifert surface $S_0$ is
\[\tau (S_0)=-2-2u+2=-2u.\]
So the boundary slope of $S$ is
\[\tau(S)-\tau(S_0)=\frac{2(t-1)^{2}}{s+t-1}-2(r+t).\]
Finally, by the formula (3.5) from~\cite{IM07} we have
\[
\begin{split}
-\frac{\chi(S)}{\sharp S}&=\sum_{\gamma_{i}\in \Gamma_{non-const}}|\gamma_{i}|+N_{const}-N+(N-2-\sum_{\gamma_{i}\in \Gamma_{const}}\frac{1}{q_{i}})\frac{1}{1-u_{0}}\\
&=\sum_{\gamma_{i}\in\Gamma_{non-const}}|\gamma_{i}|-3+\frac{1}{1-u_{0}}\\
&=\frac{(t-1)^{2}}{s+t-1}-(u+r+t)-3+\frac{ts+t-1}{s+t-1}\\
&=-(u+r+3).\\
\end{split}
\]
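Here the last equality uses $\frac{(t-1)^{2}}{s+t-1}+\frac{ts+t-1}{s+t-1}=\frac{t^{2}-t+ts}{s+t-1}=\frac{t(s+t-1)}{s+t-1}=t$, so that $t-(u+r+t)-3=-(u+r+3)$.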
\[\frac{\chi(S)}{\sharp S}=u+r+3.\]
\end{proof}
\end{document}
\begin{document}
\title[Quasi-optimal nonconforming methods I]
{Quasi-optimal nonconforming methods for symmetric elliptic problems.
I -- Abstract theory }
\author[A.~Veeser]{Andreas Veeser}
\author[P.~Zanotti]{Pietro Zanotti}
\begin{abstract}
We consider nonconforming methods for symmetric elliptic problems and characterize their quasi-optimality in terms of suitable notions of stability and consistency. The quasi-optimality constant is determined and the possible impact of nonconformity on its size is quantified by means of two alternative consistency measures. Identifying the structure of quasi-optimal methods, we show that their construction reduces to the choice of suitable linear operators mapping discrete functions to conforming ones. Such smoothing operators are devised in the forthcoming parts of this work for various finite element spaces.
\end{abstract}
\maketitle
\section{Introduction}
Consider an elliptic boundary value problem, which can be cast in the abstract form
\begin{equation}
\label{sym-ell-prob}
\text{find } u \in V \text{ such that }
\forall v \in V \;\; a(u,v) = \langle \ell,v \rangle,
\end{equation}
where the bilinear form $a$ is a scalar product on the linear function space $V$. The Ritz-Galerkin method defines an approximation to $u$ as the solution $U$ of the problem where the infinite-dimensional space $V$ is replaced by a finite-dimensional subspace $S \subseteq V$. C\'ea's lemma~\cite{Cea:64} reveals that $U$ is the best approximation to $u$ in $S$ with respect to the norm induced by $a$. Remarkably, this holds irrespective of the regularity of the exact solution $u$. In other words: the Ritz-Galerkin method is always optimal in $S$ with respect to the energy norm.
There are various generalizations of C\'ea's lemma. For Petrov-Galerkin methods applied to well-posed problems, Babu\v{s}ka \cite{Babuska:70} has shown the quasi-optimality property
\begin{equation}
\label{qo}
\forall u \text{ solutions}
\quad
\norm{u-U}
\leq
C_{\mathrm{qopt}} \inf_{s \in S} \norm{u-s}
\end{equation}
and, recently, Tantardini and Veeser \cite{Tantardini.Veeser:16} have shown that the best constant is
\begin{equation*}
C_{\mathrm{qopt}}
=
\sup_{\sigma \in \Sigma}
\frac{ \sup_{\norm{v} = 1} b(v,\sigma) }{ \sup_{\norm{s}=1} b(s,\sigma) },
\end{equation*}
where $b$ is the underlying bilinear form, $v$, $s$, and $\sigma$ vary, respectively, in the continuous trial space, the discrete trial space and the discrete test space. This provides a rather general but still very strong result when the discrete spaces are conforming, that is, are subspaces of their continuous counterparts.
For classical nonconforming finite element methods (NCFEM) like the Crouzeix-Raviart or the Morley method and for Discontinuous Galerkin (DG) methods, such a strong result is not available, to our best knowledge. Here the so-called second Strang lemma \cite{Berger.Scott.Strang:72} or variants serve as a replacement for C\'ea's lemma and the bound of the term associated with the consistency error is problematic. It involves extra regularity,
\begin{itemize}
\item either of the solution $u$, which has to be taken from a strict compact subset of $V$, see, e.g., Brenner/Scott \cite{Brenner.Scott:08} and Di~Pietro/Ern \cite{Ern.DiPietro:12},
\item or, in the medius analysis initiated by Gudi \cite{Gudi:10}, of the load term $\ell$, which has to be taken from a strict compact subset of $V'$; see Brenner \cite{Brenner:15}.
\end{itemize}
This extra regularity then obstructs a further bound by the best approximation error with respect to the energy norm in order to conclude quasi-optimality.
However, nonconforming discrete spaces are of interest because the `rigidity' of their conforming counterparts may cause problems in approximation, see, e.g., de~Boor/DeVore \cite{DeBoor.DeVore:83} and Babu\v{s}ka/Suri \cite{Babuska.Suri:92}, in stability, see Scott/Vogelius \cite{Scott.Vogelius:85}, or in accommodating structural properties like conservation.
This article is the first in a project to close the gap of missing quasi-optimality for nonconforming methods. Here we consider continuous problems of the form \eqref{sym-ell-prob}, together with a rather big class of nonconforming methods. This class contains in particular classical NCFEM, DG and other interior penalty methods.
Our first main result states that quasi-optimality as in \eqref{qo} is equivalent to full algebraic consistency and full stability. Full algebraic consistency means that, whenever the exact solution happens to be in the discrete space, it is also the discrete solution. Notice that this is a quite weak property if the conforming part $S \cap V$ of the discrete space is small. Full stability means that the discrete problem is stable for all loads, irrespective of their regularity. Moreover, we show that full stability holds if and only if the discrete problem reads
\begin{equation*}
\text{find } U \in S \text{ such that }
\forall \sigma \in S \;\; b(U,\sigma) = \langle \ell, E\sigma \rangle
\end{equation*}
where $b$ is the discrete bilinear form and $E:S \to V$ is a linear map, called smoother, and defined on the whole discrete space $S$. Notice that, usually, nonconforming methods are used without a smoother and so full stability does not hold. It is thus not a surprise that previous results did not establish quasi-optimality with respect to the energy norm. Nonconforming methods with smoothing can be found in Arnold and Brezzi \cite{Arnold.Brezzi:85}, which observes increased stability, Brenner and Sung \cite{Brenner.Sung:05}, which presents fully stable methods, and Badia et al.\ \cite{Badia.Codina.Gudi.Guzman:14}, which contains also a partial quasi-optimality result.
As a second main result, we determine the quasi-optimality constant, i.e.\ the best constant in \eqref{qo}, for a quasi-optimal nonconforming method:
\begin{equation*}
C_{\mathrm{qopt}}
=
\sup_{\sigma \in S}
\frac{ \sup_{\norm{v+s} = 1} a(v,E\sigma)+b(s,\sigma) }{ \sup_{\norm{s}=1} b(s,\sigma) }.
\end{equation*}
Notice that the numerator handles the nonconformity by an extension interweaving data from the continuous and the discrete problem. Moreover, we can determine $C_{\mathrm{qopt}}$ by two consistency measures generalizing algebraic consistency: one incorporating stability, one essentially independent of stability.
These results reduce the construction of quasi-optimal nonconforming methods for \eqref{sym-ell-prob} to devising suitable smoothers $E$. This is established for various nonconforming finite element spaces in our forthcoming works \cite{Veeser.Zanotti:17p2,Veeser.Zanotti:17p3}.
\section{Setting, stability and consistency}
\label{S:setup}
This section sets up the notations and notions for our analysis, individuating concepts of stability and consistency that are necessary for quasi-optimality.
\subsection{Symmetric elliptic problems and nonconforming methods}
\label{S:setting}
We introduce the abstract boundary value problem and then a class of nonconforming methods, sufficiently large to host our discussion.
Let $V$ be an infinite-dimensional Hilbert space with scalar product $a(\cdot, \cdot)$ and \emph{energy norm} $\norm{\cdot} = \sqrt{a(\cdot,\cdot)}$. Moreover, let $V'$ be the topological dual space of $V$, denote by $\left\langle \cdot, \cdot\right\rangle$ the pairing of $V$ and $V'$ and endow $V'$ with the \emph{dual energy norm}
$\norm{\ell}_{V'} := \sup_{v\in V, \norm{v} = 1} \langle\ell,v\rangle$.
We consider the following \emph{`continuous' problem}: given $\ell \in V'$, find $u\in V$ such that
\begin{equation}
\label{ex-prob}
\forall v \in V
\quad
a(u, v)
=
\langle \ell, v \rangle.
\end{equation}
In view of the Riesz representation theorem, this problem is well-posed in the sense of Hadamard and well-conditioned. In fact, if $A:V \to V'$, $v \mapsto a(v, \cdot)$ is the Riesz isometry of $V$, we have $u=A^{-1}\ell$ with
\begin{equation}
\label{isometry}
\norm{u} = \norm{\ell}_{V'}.
\end{equation}
Given a generic functional $\ell \in V'$, we are interested in `computable' approximations of the solution $u$ in \eqref{ex-prob}. In other words, we are interested in approximating the linear operator $A^{-1}$ suitably. Since $A^{-1}$ is bounded, one may want to approximate it by linear operators that are bounded, too. However, in order to embed also existing methods in our setting, we consider more general linear operators $M$, possibly unbounded, with finite-dimensional range $R(M)$ and domain $D(M)$ that is dense in $V'$. We say that $M$ is \emph{entire} whenever it can be directly applied to every instance of the continuous problem: $D(M) = V'$.
We shall analyze methods that build upon the variational structure of \eqref{ex-prob} in the following manner. Let $S$ be a nontrivial, finite-dimensional linear space, which will play the role of $V$.
We write $\left\langle \cdot, \cdot\right\rangle$ also for the pairing of $S$ and $S'$. Notice that we do not require $S \subseteq V$. As a consequence, $\langle\ell,\sigma\rangle$ and $a(s,\sigma)$ may be not defined for some $\ell\in V'$ and $s,\sigma\in S$. We therefore introduce an operator $L: D(L) \subseteq V' \to S'$ and a counterpart $b : S \times S \to \mathbb{R}$ of $a$ and require:
\begin{itemize}
\item $L$ is linear, (possibly) unbounded, and densely defined,
\item $b$ is bilinear and nondegenerate in that, for any $s\in S$, the property
$b(s,\sigma) = 0$ for all $\sigma \in S$ entails $s = 0$.
\end{itemize}
A method $M$ with domain $D(M) = D(L)$ is then defined by the following \emph{discrete problem}: given $\ell \in D(M)$, find $M\ell \in S$ such that
\begin{equation}
\label{disc-prob}
\forall \sigma \in S
\quad
b(M \ell, \sigma)
=
\langle L\ell, \sigma \rangle.
\end{equation}
\begin{remark}[Computing discrete solutions]
\label{R:ComputingDiscreteSolutions}
If $\varphi_1,\dots,\varphi_n$ is some basis of $S$, \eqref{disc-prob} can be reformulated as a uniquely solvable linear system for the coefficients of $M\ell$ with respect to $\varphi_1,\dots,\varphi_n$. Consequently,
$M\ell$ is computable, whenever $b(\varphi_j,\varphi_i)$ and $\langle L\ell,\varphi_i\rangle$ can be evaluated for $i,j=1,\dots,n$. Of course, it is desirable that the number of operations to compute $M\ell$ is of optimal order $O(n)$. A necessary condition for this is that the total number of operations for the aforementioned evaluations is of order $O(n)$.
\end{remark}
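As a purely illustrative sketch of this reformulation (in Python/NumPy; the callables \texttt{b\_form} and \texttt{L\_ell} are hypothetical placeholders for $b(\cdot,\cdot)$ and $\langle L\ell,\cdot\rangle$), the linear system can be assembled and solved as follows.
\begin{verbatim}
import numpy as np

def solve_discrete_problem(basis, b_form, L_ell):
    """Assemble and solve the linear system described above.

    basis  : list of discrete basis functions phi_1, ..., phi_n
    b_form : callable with b_form(s, sigma) = b(s, sigma)
    L_ell  : callable with L_ell(sigma) = <L ell, sigma>
    Returns the coefficient vector x of M ell = sum_j x[j] * basis[j].
    """
    # B[i, j] = b(phi_j, phi_i), rhs[i] = <L ell, phi_i>
    B = np.array([[b_form(phi_j, phi_i) for phi_j in basis] for phi_i in basis])
    rhs = np.array([L_ell(phi_i) for phi_i in basis])
    # b nondegenerate on the finite-dimensional S  =>  B is invertible
    return np.linalg.solve(B, rhs)
\end{verbatim}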
Methods $M$ with the discrete problem \eqref{disc-prob} are given by the triplet $(S,b,L)$, whence we shall write also $M=(S,b,L)$. They may be called \emph{nonconforming linear variational methods} or, shortly, \emph{nonconforming methods}. An important subclass are the \emph{conforming} ones, where the discrete space is contained in the continuous one: $S \subseteq V$. (As for the common usage of `unbounded' and `bounded' in operator theory, our usage of `nonconforming' and `conforming' is slightly inconsistent in that a conforming method is also nonconforming.) Conformity allows choosing $b$ and $L$ by means of simple restriction:
\begin{equation}
\label{ConformingGalerkin}
b = a_{|S \times S}
\quad\text{and}\quad
\forall \ell \in V' \;\; L\ell = \ell_{|S}.
\end{equation}
In this case \eqref{disc-prob} is a (conforming) \emph{Galerkin method}. Truly nonconforming examples are DG methods and classical NCFEM.
Introducing the invertible map $B:S \to S'$, $s \mapsto b(s,\cdot)$, the method $M$ is represented by the composition
\begin{equation}
\label{M=}
M = B^{-1} L.
\end{equation}
Although the target function $u$ is usually unknown, the approximation operator
\begin{equation}
\label{Def-P}
P := M A = B^{-1} L A
\end{equation}
with domain $D(P) := A^{-1} D(M)$ in $V$ will turn out to be a useful tool. Figure \ref{F:EntireNonconformingMethods} illustrates our setting in a commutative diagram for the special case of an entire method.
\begin{remark}[$S$ and surjectivity of $L$]
\label{R:surjectivity-of-L}
If $L$ is a linear, unbounded, densely defined operator from $V'$ to $S'$, we have $R(M) \subseteq S$, with equality if and only if $L$ is surjective. In addition, if $R(M)$ is a proper subset of $S$, elementary linear algebra allows one to reformulate $M$ as a method over $R(M)$. Consequently, there is some ambiguity in the choice of $S$ if $L$ is not surjective and a slight abuse of notation in writing $M=(S,b,L)$.
\end{remark}
\begin{figure}
\caption{Commutative diagram illustrating the setting for an entire nonconforming method.}
\label{F:EntireNonconformingMethods}
\end{figure}
\subsection{Defining quasi-optimality, stability and consistency}
\label{S:stab-cons}
We now define the key notions of our analysis for nonconforming methods.
For each $\ell\in V'$, a nonconforming variational method $M=(S,b,L)$ chooses an element of $S$ in order to approximate $u=A^{-1}\ell$. To assess the quality of this choice, we assume that $a$ can be extended to a scalar product $\widetilde{a}$ on ${\widetilde{V}}:=V+S$ and consider the \emph{extended energy norm}
\begin{equation*}
\norm{\cdot} := \sqrt{\widetilde{a}(\cdot, \cdot)}
\quad\text{on }{\widetilde{V}},
\end{equation*}
with the same notation as for the original one. Observe that $V$ and $S$ are closed subspaces of ${\widetilde{V}}$.
The best approximation error within $S$ to some function $v \in V$ is then given by $\inf_{s\in S} \norm{v-s}$. Of course, it is desirable that a method is uniformly close to this benchmark, i.e.\ there holds an inequality that essentially reverses
\begin{equation*}
\forall u \in D(P)
\qquad
\inf_{s\in S} \norm{u-s}
\leq
\norm{ u - P u }.
\end{equation*}
\begin{definition}[Quasi-optimality]
\label{D:qopt}
A nonconforming variational method $M$ with discrete space $S$ and approximation operator $P$ is \emph{quasi-optimal} whenever there exists a constant $C\geq 1$ such that
\[
\forall u \in D(P)
\qquad
\norm{ u - P u }
\leq
C \inf_{s\in S} \norm{u-s}.
\]
The quasi-optimality constant $C_{\mathrm{qopt}}$ of $M$ is then the smallest constant with this property.
\end{definition}
C\'ea's lemma \cite{Cea:64} shows that conforming Galerkin methods for \eqref{ex-prob} are quasi-optimal with $C_{\mathrm{qopt}}=1$ and that the associated approximation operator $P=M A$ is the bounded linear $a$-orthogonal projection (or idempotent) onto $S$: in fact, we have the celebrated Galerkin orthogonality
\begin{equation}
\label{Galerkin-orthogonality}
\forall u \in V, \sigma\in S \subseteq V
\qquad
a(u-P u,\sigma) = 0.
\end{equation}
Before analyzing which of these properties still hold in the general case, let us discuss some necessary conditions for quasi-optimality and their consequences.
\begin{remark}[Quasi-optimal needs entire]
\label{R:qopt->entire}
Let $P$ be the approximation operator of a quasi-optimal method $M$. Observe that the best error $\inf_{s\in S} \norm{\cdot-s}$ is a Lipschitz continuous function on $V$. Therefore, quasi-optimality implies that also $\mathrm{Id}_V-P$ and $P$ are Lipschitz continuous. Since $D(P)$ is dense in $V$ and $S$ is complete, the operator $P$ extends to $V$ in a continuous and unique manner. As a consequence, $M$ extends to $V'$ in a continuous and unique manner. In other words: ignoring the aspect of computability, only entire methods can be quasi-optimal.
\end{remark}
Notice that most classical NCFEM and DG methods are not defined as entire. Consequently, the simple observation in Remark \ref{R:qopt->entire} questions that these methods can be quasi-optimal. This doubt will be confirmed in Remark~\ref{R:failure-of-idS} below.
Generally speaking, stability is associated with the property that small input perturbations result in small output perturbations. The form of the discrete problem \eqref{disc-prob} suggests adopting the viewpoint that input is taken from a subset of $V'$. Since \eqref{disc-prob} is linear, stability then amounts to some operator norm of $M$. Notice that this differs from the common viewpoint that stability is connected solely with an operator norm of $B^{-1}$, i.e.\ taking input from $S'$. In the following definition, we consider perturbations and measure them as suggested by the setting of the continuous problem.
\begin{definition}[Full stability]
\label{D:stab}
We say that $M$ is \emph{fully stable} whenever $D(M)=V'$ and, for some constant $C\geq0$, we have
\begin{equation*}
\forall \ell \in V'
\qquad
\norm{M \ell}
\leq
C \norm{\ell}_{V'}.
\end{equation*}
The smallest such constant is the stability constant $C_{\mathrm{stab}}$ of $M$.
\end{definition}
Full stability may go beyond the need for practical computations, but it relates to the previous notions in the following manner.
\begin{remark}[Fully stable, quasi-optimal and entire]
\label{R:FullStability}
The approximation operator $P$ of a quasi-optimal method satisfies
\begin{equation*}
\norm{Pu}
\leq
\norm{u} + \norm{Pu-u}
\leq
(1+C_{\mathrm{qopt}}) \norm{u}
=
(1+C_{\mathrm{qopt}}) \norm{Au}_{V'}
\end{equation*}
for all $u\in V$, using $0\in S$, \eqref{isometry} and Remark \ref{R:qopt->entire}. In view of \eqref{Def-P}, full stability is thus necessary for quasi-optimality. Furthermore, full stability itself requires that the method is entire in the vein of Remark \ref{R:qopt->entire}.
\end{remark}
Roughly speaking, consistency measures to what extent the exact solution verifies the discrete problem.
To this end, one usually substitutes in the discrete problem the discrete solution by the exact one and investigates a possible defect. Here nonconformity entails that the forms $b$ and $L$ cannot be defined by simple restriction and so creates the following issues concerning trial and test space:
\begin{itemize}
\item In which sense can we plug a generic exact solution $u$ into the discrete problem? Does this require an extension of $b$ or a representative of $u$ in $S$?
\item How do we relate the condition associated with a nonconforming test function $\sigma \in S\setminus V$ in \eqref{disc-prob} to the conditions given by the continuous test functions in \eqref{ex-prob}?
\end{itemize}
These issues are usually tackled with the help of regularity assumptions on the exact solution, see, e.g., Arnold et al.\ \cite{Arnold.Brezzi.Cockburn.Marini:02}, or only on data, see Gudi \cite{Gudi:10}. The following definition takes a different approach within our non-asymptotic setting.
\begin{definition}[Full algebraic consistency]
\label{D:cons}
The method $M$ is \emph{fully algebraically consistent} whenever $D(M)=V'$ and
\begin{equation}
\label{Consistency}
\forall u\in V \cap S, \sigma \in S
\quad
b(u,\sigma) = \langle LAu,\sigma \rangle.
\end{equation}
\end{definition}
Conforming Galerkin \eqref{ConformingGalerkin} methods are fully algebraically consistent. Let us discuss further aspects of full algebraic consistency.
\begin{remark}[Full algebraic consistency and approximation operator]
\label{R:reformulations-of-consistency}
In view of the discrete problem \eqref{disc-prob} and the definition \eqref{Def-P} of the approximation operator, \eqref{Consistency} is equivalent to $b(u - P u,\sigma) = 0$ for all $u\in V \cap S, \sigma \in S$.
Since $b$ is nondegenerate, the consistency condition \eqref{Consistency} is therefore equivalent to
\begin{equation}
\label{P|V cap S}
\forall u\in V \cap S
\quad
P u = u.
\end{equation}
In other words: full algebraic consistency means that whenever the exact solution is discrete, it is the discrete solution. The advantage of \eqref{Consistency} is that it is directly formulated in terms of the originally given data $A$, $S$, $b$ and $L$. In Lemma \ref{L:ConsistencyWithExtension} and Theorem \ref{T:qopt-smoothing} below, we will present further equivalent formulations.
\end{remark}
\begin{remark}[Quasi-optimal needs fully algebraically consistent]
\label{R:QuasiOptRequiresFullConsistency}
In light of Remark~\ref{R:qopt->entire}, a quasi-optimal method $M$ is entire and so its approximation operator $P$ is defined on all $V$. For any $u \in V\cap S$, the best error in $S$ vanishes and so $Pu=u$. Consequently, $M$ is fully algebraically consistent.
\end{remark}
Definition \ref{D:cons} involves only exact solutions from the discrete space $S$, which may be a quite small set. Indeed, for example, when applying the Morley method to the biharmonic problem, the intersection $S \cap V$ has poor approximation properties for certain mesh families; see \cite[Theorem 3]{DeBoor.DeVore:83} and \cite[Remark~3.11]{Veeser.Zanotti:17p2}. Other consistency notions of algebraic type involving more exact solutions may thus appear stronger than Definition \ref{D:cons}. The following lemma sheds a different light on this.
\begin{lemma}[Full algebraic consistency with extension]
\label{L:ConsistencyWithExtension}
Let the method $M$ be fully algebraically consistent and set ${\widetilde{V}} := V + S$. Then there exists a unique bilinear form $\widetilde{b}$ that extends $b$ as well as $\langle LA\cdot,\cdot\rangle$ on ${\widetilde{V}} \times S$.
\end{lemma}
\begin{proof}
Observe that the left-hand side of \eqref{Consistency} is defined for all $u \in S$, while its right-hand side is defined in particular for all $u \in V$. We exploit this in order to extend $b$. Given $\widetilde{v} \in {\widetilde{V}}$ and $\sigma \in S$, we write $\widetilde{v} = v + s$ with $v \in V$ and $s \in S$ and set
\begin{equation}
\label{bext}
\widetilde{b}(\widetilde{v},\sigma)
:=
\langle LAv,\sigma \rangle + b(s,\sigma).
\end{equation}
Thanks to \eqref{Consistency}, $\widetilde{b}$ is well-defined. Indeed, if $v_1 + s_1 = v_2 + s_2$ with $v_1, v_2 \in V$ and $s_1, s_2 \in S$, we have $v_1-v_2 = s_2 - s_1 \in V \cap S$ and therefore \eqref{Consistency} yields $\langle LA(v_1-v_2),\sigma \rangle = - b(s_1-s_2,\sigma)$, which in turn ensures
\begin{equation*}
\langle LA v_1, \sigma \rangle + b(s_1,\sigma)
=
\langle LA v_2, \sigma \rangle + b(s_2,\sigma).
\end{equation*}
To show uniqueness of the extension, let $\widetilde{\beta}$ be another common extension of $b$ and $\langle LA\cdot, \cdot \rangle$. Given $\widetilde{v} \in {\widetilde{V}}$ and $\sigma \in S$, we write $\widetilde{v} = v + s$ with $v \in V$ and $s \in S$ as before and infer
\begin{equation*}
\widetilde{\beta}(\widetilde{v},\sigma)
=
\widetilde{\beta}(v,\sigma) + \widetilde{\beta}(s,\sigma)
=
\langle LAv,\sigma \rangle + b(s,\sigma)
=
\widetilde{b}(\widetilde{v},\sigma)
\end{equation*}
and the proof is complete.
\end{proof}
Notice that full algebraic consistency differs from the usual notion of consistency, as used, e.g., in Arnold \cite{Arnold:15}, also in the following aspects: on the one hand, it is stronger in that it requires an algebraic identity instead of a limit. On the other hand, it does not involve approximation properties of the underlying discrete space. In fact, our purpose here is to identify the part of consistency that is necessary for quasi-optimality. As a consequence, algebraic consistency and stability alone are not sufficient for convergence.
Let us conclude this section by introducing a subclass of natural candidates for fully algebraically consistent methods. A method $M = (S,b,L)$ is a \emph{nonconforming Galerkin method} whenever
\begin{equation}
\label{NonConformingGalerkinMethod}
b_{|S_C \times S_C} = a_{|S_C \times S_C}
\quad\text{and}\quad
\forall \ell \in D(L) \;\;
L\ell_{|S_C} = \ell_{|S_C},
\end{equation}
where $S_C = S \cap V$ is the conforming subspace of the discrete space $S$. Thus, a nonconforming Galerkin method is constrained by restriction where applicable. Notice that:
\begin{itemize}
\item In contrast to conforming Galerkin methods, nonconforming ones are not completely determined by the continuous problem and the discrete space.
\item The condition \eqref{NonConformingGalerkinMethod} readily yields
\begin{equation*}
\forall u, \sigma \in S \cap V
\quad
b(u, \sigma) = \langle LAu, \sigma \rangle,
\end{equation*}
which is weaker than full algebraic consistency in that less test functions are involved.
\end{itemize}
For example, classical NCFEM, DG and $C^0$ interior penalty methods are nonconforming Galerkin methods.
\section{Characterizing quasi-optimality}
\label{S:qopt}
The purpose of this section is twofold. First, we show that full algebraic consistency and full stability are not only necessary but also sufficient for quasi-optimality. Second, we assess the possible impact of nonconformity on the quasi-optimality constant.
\subsection{Quasi-optimality and extended approximation operator}
\label{S:to-qopt}
To show that full algebraic consistency and full stability imply quasi-optimality,
we start with the following short proof of a `partial' quasi-optimality, which motivates a new tool for the analysis of nonconforming methods.
Assume that $P$ is the approximation operator of a fully algebraically consistent and a fully stable method. Rewriting \eqref{P|V cap S} as
\begin{equation}
\label{I-P-and-ScapV}
\forall v\in V, s \in S \cap V
\quad
v-Pv
=
(\mathrm{Id}_V-P)(v-s)
\end{equation}
and exploiting that full stability entails the boundedness of $P$, we can deduce quasi-optimality with respect to the conforming part $S\cap V$ of the discrete space $S$:
\begin{equation*}
\norm{v-Pv}
\leq
\opnorm{\mathrm{Id}_V-P}{V}{S} \inf_{s \in S\cap V} \norm{v-s}.
\end{equation*}
Note that we do not obtain quasi-optimality with respect to the whole discrete space, just because $P s = s$ is not available for general $s\in S$; indeed, $P s$ is not even defined for general $s \in S$. We therefore explore an appropriate extension of $P$.
For this purpose, we use the following facts on linear projections; cf., e.g., Buckholtz \cite{Buckholtz:00}. Let $K$ and $R$ be subspaces of a Hilbert space $H$ with scalar product $(\cdot,\cdot)_H$ and induced norm $\norm{\cdot}_H$. The spaces $K$ and $R$ provide a direct decomposition of $H$, $H = K \oplus R$, if and only if there exists a unique linear projection $Q$ on $H$ with kernel $N(Q)=K$ and range $R(Q)=R$. Then $\mathrm{Id}_H-Q$ is the linear projection with kernel $R$ and range $K$. As a consequence of the closed graph theorem, $R$ and $K$ are closed if and only if $Q$ is bounded if and only if $\mathrm{Id}_H - Q$ is bounded.
\begin{lemma}[Extended approximation operator]
\label{L:Pext}
Assume that the approximation operator $P$ verifies $P|_{S \cap V} = \mathrm{Id}_{S \cap V}$ and is bounded. Then there exists a unique bounded linear projection $Pext$ from ${\widetilde{V}}$ onto $S$ satisfying $Pext_{|V} = P$.
\end{lemma}
\begin{proof}
First, we observe that $Pext$ has to satisfy
\begin{equation}
\label{properties-of-Pext}
Pext:{\widetilde{V}} \to S \text{ linear},
\quad
Pext_{|V} = P
\quad\text{and}\quad
Pext_{|S} = \mathrm{Id}_S.
\end{equation}
Since ${\widetilde{V}} = V + S$, linear extension entails that there is at most one operator satisfying \eqref{properties-of-Pext} and we are thus led to consider the following definition: given $\widetilde{v}\in{\widetilde{V}}$, choose $v \in V$ and $s \in S$ such that $\widetilde{v} = v+s$ and set
\begin{equation}
\label{Pext}
Pext \widetilde{v}
:=
P v + s.
\end{equation}
The assumption $P_{|S \cap V} = \mathrm{Id}_{S \cap V}$ means that the two identities in \eqref{properties-of-Pext} are compatible and so guarantees that $Pext$ is well-defined; compare with the definition of $\widetilde{b}$ in the proof of Lemma~\ref{L:ConsistencyWithExtension}.
In order to show the boundedness of $Pext$, we represent it in terms of $P$ and the following operators, corresponding to an appropriate choice of $v$ and $s$ in \eqref{Pext}. Let $\Pi_Y$ be the $\widetilde{a}$-orthogonal projection onto $Y := (S\cap V)^\perp$ and let $Q$ be the linear projection on $Y$ with range $V\cap Y$ and kernel $S \cap Y$. We then have
\begin{equation*}
Pext
=
P Q \Pi_Y
+
(\mathrm{Id}_Y - Q) \Pi_Y
+
(\mathrm{Id}_{{\widetilde{V}}} - \Pi_Y)
=
P Q \Pi_Y
+
\mathrm{Id}_{{\widetilde{V}}} - Q \Pi_Y.
\end{equation*}
Since the subspaces $S$, $V$, and $Y$ are closed, the projections $\Pi_Y$ and $Q$ are bounded. Consequently, the boundedness of $P$ implies the boundedness of its extension $Pext$.
\end{proof}
Using the extended approximation operator $Pext$, the proof of the announced characterization of quasi-optimality is quite simple. Notice also that the quantitative aspect of our first main result highlights the importance of $Pext$.
\begin{theorem}[Characterization of quasi-optimality]
\label{T:qopt}
A nonconforming method is quasi-optimal if and only if it is fully algebraically consistent and fully stable.
Moreover, for any quasi-optimal method, we have
\begin{equation*}
C_{\mathrm{qopt}}
=
\opnorm{Pext}{{\widetilde{V}}}{{\widetilde{V}}}
\end{equation*}
where $Pext$ is the extended approximation operator from Lemma \ref{L:Pext}.
\end{theorem}
\begin{proof}
Remarks \ref{R:FullStability} and \ref{R:QuasiOptRequiresFullConsistency} show that quasi-optimality implies full algebraic consistency and full stability.
To show the converse, consider any fully algebraically consistent and fully stable nonconforming method. We simply follow the lines of the corresponding part of the proof of Tantardini/Veeser \cite[Theorem~2.1]{Tantardini.Veeser:16}, replacing $P$ by $Pext$ and exploiting the following generalization of \eqref{I-P-and-ScapV}:
\begin{equation}
\label{I-Appext}
\forall v\in V, s\in S
\quad
(\mathrm{Id}_{\widetilde{V}} - Pext) (v-s)
=
(\mathrm{Id}_V - P) v.
\end{equation} Given arbitrary $v\in V$ and $s\in S$, we thus derive
\begin{equation*}
\norm{v-P v}
=
\norm{(v-s) - Pext(v-s)}
\leq
\opnorm{\mathrm{Id}_{{\widetilde{V}}} - Pext}{{\widetilde{V}}}{{\widetilde{V}}} \norm{v-s}.
\end{equation*}
Taking the infimum over all $s\in S$ and then the supremum over all $v\in V$, we obtain
\begin{equation}
\label{Copt<=}
C_{\mathrm{qopt}}
\leq
\opnorm{\mathrm{Id}_{{\widetilde{V}}} - Pext}{{\widetilde{V}}}{{\widetilde{V}}}
\end{equation}
and see that $M$ is quasi-optimal because $Pext$ is bounded.
To verify the identity for $C_{\mathrm{qopt}}$, let us first see that \eqref{Copt<=} is actually an equality. In fact, for $v \in V$ and $s \in S$, we derive
\begin{equation*}
\norm{(\mathrm{Id}_{{\widetilde{V}}} - Pext)(v+s)}
=
\norm{v-P v}
\leq
C_{\mathrm{qopt}} \inf \limits_{\hat{s} \in S}\norm{v-\hat{s}}
\leq
C_{\mathrm{qopt}}
\norm{v+s}
\end{equation*}
using \eqref{I-Appext} again. We thus obtain the converse to \eqref{Copt<=} by taking the supremum over all $v\in V$ and $s \in S$.
Moreover, since $\{0\}\subsetneq S\subsetneq {\widetilde{V}}$, the extended approximation operator $Pext$ is a bounded linear idempotent with $0 \neq Pext = Pext^2 \neq \mathrm{Id}_{{\widetilde{V}}}$ on the Hilbert space ${\widetilde{V}}$. We therefore can apply Buckholtz \cite[Theorem 2]{Buckholtz:00} or Xu/Zikatanov \cite[Lemma~5]{Xu.Zikatanov:03}
and conclude
\begin{equation}
\label{Cqopt=}
C_{\mathrm{qopt}}
=
\opnorm{\mathrm{Id}_{{\widetilde{V}}} - Pext}{{\widetilde{V}}}{{\widetilde{V}}}
=
\opnorm{Pext}{{\widetilde{V}}}{{\widetilde{V}}}. \qedhere
\end{equation}
\end{proof}
Formula \eqref{Cqopt=} allows for the following geometric interpretation of the quasi-optimality constant.
\begin{remark}[Geometry of quasi-optimality constant]
\label{R:geometry}
Buckholtz \cite{Buckholtz:00} shows that the operator norm of a bounded projection $Q$ on a Hilbert space $H$ satisfies
\begin{equation*}
\opnorm{Q}{H}{H}
=
\frac{1}{\sin\theta}
=
\opnorm{\mathrm{Id}_H - Q}{H}{H},
\end{equation*}
where $\theta$ is the angle between $K=N(Q)$ and $R=R(Q)$, that is, $\theta\in(0,\pi/2]$ and its cosine equals $\sup\{|\langle k,r \rangle_H| \mid k\in K, r\in R, \norm{k}_H=1, \norm{r}_H=1 \}$. Notice that $N(Pext) = R(\mathrm{Id}_{\widetilde{V}}-Pext) = R(\mathrm{Id}_V-P)$, where the last identity follows from \eqref{I-Appext}. Combining these two facts, we deduce
\begin{equation}
\label{|Pext|}
C_{\mathrm{qopt}}
=
\opnorm{Pext}{{\widetilde{V}}}{{\widetilde{V}}}
=
\frac{1}{\sin\alpha}
\end{equation}
where $\alpha$ is the angle between the discrete space $S$ and the range $R(\mathrm{Id}_V-P)$.
\end{remark}
Theorem \ref{T:qopt} reveals that the possibly weak full algebraic consistency is still enough consistency to ensure, together with stability, quasi-optimality. However, it does not control the size of the quasi-optimality constant.
\subsection{The quasi-optimality constant and two consistency measures}
\label{S:Cqopt}
Let $P$ be the approximation operator of a quasi-optimal method. The fact that $Pext$ is an extension of $P$ readily yields
\begin{equation*}
C_{\mathrm{qopt}}
=
\opnorm{Pext}{{\widetilde{V}}}{{\widetilde{V}}}
\geq
\opnorm{P}{V}{S}
=
C_{\mathrm{stab}},
\end{equation*}
where the last identity is due to isometry \eqref{isometry} of $A$. The possible enlargement of $C_{\mathrm{qopt}}$ with respect to $C_{\mathrm{stab}}$ is a new feature triggered by nonconformity. It is the purpose of the section to quantify this phenomenon.
Our key tool will be the following elementary lemma.
\begin{lemma}[Operator norm and restrictions]
\label{L:restrictions}
Let $T\in\mathcal{L}(H)$ be a bounded linear operator on a Hilbert space $H$ with scalar product $\langle\cdot,\cdot\rangle_H$ and induced norm $\norm{\cdot}_H$. If $Y$ is a linear closed subspace of $H$ and $Y^\perp$ is its orthogonal complement, we have
\begin{equation*}
\max\{ C,\delta \}
\leq
\opnorm{T}{H}{H}
\leq
\sqrt{C^2+\delta^2}
\end{equation*}
with
\begin{equation*}
C = \opnorm{T_{|Y}}{Y}{H}
\quad\text{and}\quad
\delta = \opnorm{T_{|Y^\perp}}{Y^\perp}{H}.
\end{equation*}
\end{lemma}
\begin{proof}
The lower bound immediately follows from the definition of the operator norm $\opnorm{T}{H}{H} = \sup_{\norm{x}_H=1} \norm{Tx}_H$. To verify the upper bound, let $x \in H$ be arbitrary and denote by $\pi_Y$ the orthogonal projection onto $Y$. We have
\begin{equation}
\label{|Tx|<=;1}
\begin{aligned}
\norm{Tx}_H^2
&=
\norm{T\pi_Yx}_H^2
+ 2 \big\langle T\pi_Yx, T(x-\pi_Yx) \big\rangle_H
+ \norm{T(x-\pi_Yx)}_H^2
\\
&\leq
C^2 \norm{\pi_Yx}_H^2
+ 2 C \delta\norm{\pi_Yx}_H \norm{x-\pi_Yx}_H
+ \delta^2 \norm{x-\pi_Yx}_H^2
\end{aligned}
\end{equation}
in view of the bilinearity of the scalar product, the Cauchy-Schwarz inequality and the definitions of $C$ and $\delta$. Notice that
\begin{equation*}
\norm{\pi_Yx}_H^2 + \norm{x-\pi_Yx}_H^2 = \norm{x}_H^2
\end{equation*}
thanks to the orthogonality of $\pi_Y$. Thus, assuming without loss of generality that $\norm{x}_H = 1$ and writing $\alpha = \norm{\pi_Yx}_H$, \eqref{|Tx|<=;1} becomes
\begin{equation*}
\norm{Tx}_H^2
\leq
h(\alpha)^2
\quad\text{with}\quad
h(\alpha)
:=
C\alpha + \delta \sqrt{1-\alpha^2},
\end{equation*}
which implies
\begin{equation*}
\opnorm{T}{H}{H}
\leq
\max_{\clsint{0}{1}} h.
\end{equation*}
A straight-forward discussion of the function $h$ yields $\max_{\clsint{0}{1}} h = \sqrt{C^2 + \delta^2}$ and the upper bound is established, too.
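Indeed, by the Cauchy-Schwarz inequality in $\mathbb{R}^{2}$, $h(\alpha)=C\alpha+\delta\sqrt{1-\alpha^{2}}\leq\sqrt{C^{2}+\delta^{2}}\sqrt{\alpha^{2}+(1-\alpha^{2})}=\sqrt{C^{2}+\delta^{2}}$, with equality at $\alpha=C/\sqrt{C^{2}+\delta^{2}}$ whenever $(C,\delta)\neq(0,0)$; the case $C=\delta=0$ is trivial.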
\end{proof}
\begin{remark}[Sharpness of bounds via restrictions]
\label{R:sharpness-bounds-restriction}
Since
\begin{equation*}
\max \{C,\delta \}
\leq
\sqrt{C^2 + \delta^2}
\leq
\sqrt{2}\max\{ C,\delta \},
\end{equation*}
the bounds in Lemma \ref{L:restrictions} miss an equality at most by the factor $\sqrt{2}$. Let us see with two simple examples that, without additional information on $T$ and $Y$, we cannot improve on this.
First, consider $H = \mathbb{R}^2$, $T_1 = \mathrm{Id}_{\mathbb{R}^2}$ and let $Y$ be any 1-dimensional subspace of $\mathbb{R}^2$. Obviously, we then have $\opnorm{ T_1 }{H}{H} = \opnorm{T_1{}_{|Y}}{Y}{H} = \opnorm{T_1{}_{|Y^\perp}}{Y^\perp}{H} = 1 $ and so
the lower bound becomes an equality, while the upper bound is strict.
Second, consider $H = \mathbb{R}^2$ and let $T_2$ be the linear operator which is represented in the canonical basis of $\mathbb{R}^2$ by the Matlab matrix \texttt{1/2*ones(2)}. The operator $T_2$ is the orthogonal projection onto the diagonal $\{ (t,t) \mid t \in \mathbb{R} \}$, whence $\opnorm{ T_2 }{H}{H}=1$. Finally, let $Y = \{ (0,t) \mid t \in \mathbb{R} \}$ be the ordinate. Then the operator norms of $T_2$ restricted to $Y$ and $Y^\perp$ correspond to the Euclidean norms of the columns of the aforementioned matrix: $\opnorm{T_2{}_{|Y}}{Y}{H} = \opnorm{T_2{}_{|Y^\perp}}{Y^\perp}{H} = 1/\sqrt{2}$. Consequently, here the upper bound is an equality, while the lower bound is strict.
\end{remark}
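These two examples can also be checked numerically; the following sketch (in Python/NumPy, purely illustrative) reproduces the restricted operator norms for $T_2$ and confirms that the upper bound $\sqrt{C^2+\delta^2}$ is attained.
\begin{verbatim}
import numpy as np

T2 = 0.5 * np.ones((2, 2))          # orthogonal projection onto the diagonal
C = np.linalg.norm(T2[:, 1])        # restriction to Y = span{e_2}: second column
delta = np.linalg.norm(T2[:, 0])    # restriction to Y^perp = span{e_1}: first column

print(np.linalg.norm(T2, 2))        # spectral norm of T2: 1.0
print(np.hypot(C, delta))           # sqrt(C^2 + delta^2): 1.0
\end{verbatim}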
The fact that the extended approximation operator $Pext$ is given on $S$ by the identity and on $V$ by $P$ suggests two options for applying Lemma \ref{L:restrictions}: $Y=S$ and $Y=V$. We start with the first option, which leads to a consistency measure in the spirit of the second Strang lemma.
\begin{proposition}[Consistency mixed with stability]
\label{P:consistency-with-stability}
Let $\Pi_S$ be the $\widetilde{a}$-orthogonal projection onto $S$ and $\delta_V \geq 0$ be the smallest constant such that
\begin{equation*}
\forall v \in V
\quad
\norm{\Pi_S v - P v}
\leq
\delta_V
\norm{v - \Pi_S v}.
\end{equation*}
Then the quasi-optimality constant is given by
$
C_{\mathrm{qopt}} = \sqrt{1+\delta_V^2}.
$
\end{proposition}
\begin{proof}
Owing to Theorem \ref{T:qopt}, we may show the claimed identity by verifying $\opnorm{Pext}{{\widetilde{V}}}{{\widetilde{V}}} = \sqrt{1+\delta_V^2}$.
Applying Lemma \ref{L:restrictions} with $H={\widetilde{V}}$, $T=Pext$ and $Y=S$, we obtain
\begin{equation*}
\opnorm{Pext}{{\widetilde{V}}}{{\widetilde{V}}} \leq \sqrt{1+\delta^2}
\end{equation*}
with $\delta = \opnorm{Pext}{S^\perp}{{\widetilde{V}}}$. Given $s^\perp \in S^\perp$, we write $s^\perp = v + s$ with $v \in V$ and $s \in S$ and observe that
\begin{equation*}
s^\perp = s^\perp - \Pi_S s^\perp = v - \Pi_S v
\quad\text{and}\quad
Pext s^\perp = P v - \Pi_S v.
\end{equation*}
Hence $\delta = \delta_V$ and
\begin{equation}
\label{Pext>delta_V}
\opnorm{Pext}{{\widetilde{V}}}{{\widetilde{V}}} \leq \sqrt{1+\delta_V^2}.
\end{equation} To show that this is actually an equality, note that, for any $v \in V$,
\begin{equation}
\label{orth}
\norm{v - \Pi_S v}^2 + \norm{\Pi_S v - P v}^2
=
\norm{v - P v}^2
\leq
\opnorm{Pext}{{\widetilde{V}}}{{\widetilde{V}}}^2 \norm{v - \Pi_S v}^2,
\end{equation}
where we first combined the orthogonality of $\Pi_S$ with $\Pi_S v - P v \in S$ and then used Theorem \ref{T:qopt}. Rearranging terms, we see that $\delta_V^2 \leq \opnorm{Pext}{{\widetilde{V}}}{{\widetilde{V}}}^2 - 1$, yielding the desired inequality $\sqrt{1+\delta_V^2} \leq \opnorm{Pext}{{\widetilde{V}}}{{\widetilde{V}}}$.
\end{proof}
The following two remarks discuss the nature of $\delta_V$.
\begin{remark}[$\delta_V$ and (non)conforming consistency]
\label{R:delta_V-consistency}
In the conforming case $S \subseteq V$, without assuming the quasi-optimality of the underlying method, the existence of $\delta_V$ is equivalent to full algebraic consistency. Therefore, $\delta_V$ can be seen as a quantitative generalization of full algebraic consistency to the nonconforming case. It measures, in relative manner, how much the method deviates from the best approximation $\Pi_S$. Thus, Proposition \ref{P:consistency-with-stability} is a specification of the second Strang lemma, where the exploitation of the nonconforming direction is compared with the best approximation error. Let us illustrate this in the purely nonconforming case $V \cap S = \{0\}$. The best case corresponds to $P = \Pi_S$, yielding $\delta_V = 0$ and $C_{\mathrm{qopt}} = 1$. Instead, $P = 0$ is quasi-optimal with $\delta_V = ( \inf_{\norm{s}=1} \inf_{\Pi_S v = s} \norm{s-v} )^{-1}$, which becomes infinity as the distance between $S$ and $V$ tends to $0$.
\end{remark}
\begin{remark}[$\delta_V$ and stability]
\label{R:delta_V-stability}
The size of $\delta_V$ is in general affected by stability. Indeed, using \eqref{Pext>delta_V}, we readily derive
\begin{equation*}
\delta_V
\geq
\sqrt{\opnorm{Pext}{{\widetilde{V}}}{{\widetilde{V}}}^2 - 1}
\geq
\sqrt{\opnorm{P}{V}{S}^2 - 1}
=
\sqrt{ C_{\mathrm{stab}}^2 - 1}
\end{equation*}
and notice in particular that, if a sequence of methods becomes unstable, the corresponding $\delta_V$'s become unbounded.
\end{remark}
We now turn to the second option of applying Lemma \ref{L:restrictions}. Interestingly, it provides an alternative consistency measure which is essentially independent of stability.
\begin{proposition}[Consistency without stability]
\label{P:consistency-without-stability}
Let $\Pi_V$ be the $\widetilde{a}$-orthogonal projection onto $V$ and $\delta_S \geq 0$ be the smallest constant such that
\begin{equation*}
\forall s \in S
\quad
\norm{s - P\Pi_V s}
\leq
\delta_S
\norm{s - \Pi_V s}.
\end{equation*}
Then the quasi-optimality constant satisfies
\begin{equation}
\label{deltaS-bounds}
\max \{ C_{\mathrm{stab}}, \delta_S \}
\leq
C_{\mathrm{qopt}}
\leq
\sqrt{ C_{\mathrm{stab}}^2 + \delta_S^2}.
\end{equation}
\end{proposition}
\begin{proof}
Thanks to Theorem \ref{T:qopt}, it suffices to apply Lemma \ref{L:restrictions} with $H = {\widetilde{V}}$, $T = Pext$ and $Y = V$ and to observe the following identities: given $v^\perp \in V^\perp$, $v \in V$, $s \in S$ such that $v^\perp = v + s$, we have
\begin{equation*}
v^\perp = v^\perp - \Pi_V v^\perp = s - \Pi_V s
\quad\text{and}\quad
Pext v^\perp = s - P\Pi_V s.
\qedhere
\end{equation*}
\end{proof}
We now discuss also the nature of $\delta_S$, elaborating its differences from the first consistency measure $\delta_V$.
\begin{remark}[$\delta_S$ and (non)conforming consistency]
\label{R:delta_S-consistency}
As for $\delta_V$, the existence of $\delta_S$ is equivalent to full algebraic consistency in the conforming case $S \subseteq V$. Correspondingly, it can be seen as an alternative, quantitative generalization of full algebraic consistency to the nonconforming case. The alternative $\delta_S$ is however not comparing with the best approximation $\Pi_S$. In particular, we have that $\delta_S=0$ implies
\begin{equation*}
C_{\mathrm{qopt}}
=
\opnorm{Pext}{{\widetilde{V}}}{{\widetilde{V}}}
=
\opnorm{P}{V}{S}
=
C_{\mathrm{stab}},
\end{equation*}
which is an interesting property not involving the best approximation $\Pi_S$.
Let us illustrate how the difference is expressed in measuring the exploitation of the nonconforming directions by considering, as in Remark \ref{R:delta_V-consistency}, the purely nonconforming case $V \cap S = \{ 0\}$. Here the best choice $P = \Pi_S$ leads to $\delta_S<1$, while $P = 0$ gives $\delta_S = (\inf_{\norm{s}=1} \norm{s-\Pi_V s})^{-1}$. In the latter case, $\delta_S$, like $\delta_V$, becomes infinite as the distance between $S$ and $V$ tends to $0$, although in a (possibly) different manner.
\end{remark}
\begin{remark}[$\delta_S$ and stability]
\label{R:delta_S-stability}
We illustrate that the quantities $\delta_S$ and $C_{\mathrm{stab}}$ are essentially independent. In order to make sure that this is not affected by a possible lack of approximability, we consider the following setting with a sequence of discrete spaces:
\begin{gather*}
{\widetilde{V}} = \ell_2(\mathbb{R})
\text{ with canonical basis }(e_i)_{i=0}^\infty,
\quad
\widetilde{a}(v,w) = \sum_{i=0}^\infty v_i w_i,
\intertext{where we identify $v = \sum_{i=0}^\infty v_i e_i$ with $(v_i)_{i=0}^\infty$, etc., and}
V = \overline{\text{span}\,\{ e_i \mid i \geq 1 \}},
\quad
S_n = \text{span}\,\{ e_i \mid i=1,\dots,n-1 \} + \text{span}\,\{ \alpha_n e_0 + e_n \},
\end{gather*}
where $n \geq 1$ and $(\alpha_n)_n \subseteq \mathbb{R}_+$ is some sequence of positive reals. Here only $\alpha_n e_0 + e_n$ is nonconforming and thus not involved in full algebraic consistency. If $\lim_{n\to\infty} \alpha_n = 0$, this direction approaches a conforming one, while for $\lim_{n\to\infty} \alpha_n = \infty$, it becomes orthogonal to $V$. In any case, we have
\begin{equation*}
S_n \cap V = \text{span}\,\{e_i \mid i = 1,\dots, n-1 \}
\quad\text{and}\quad
V = \overline{ \bigcup_{n \geq 1} S_n }.
\end{equation*}
Moreover, straightforward computations reveal that the orthogonal projections onto $S_n$ and $V$ are given by
\begin{gather*}
\Pi_{S_n} v
=
\sum_{i=1}^{n-1} v_i e_i + \frac{v_n}{1+\alpha_n^2} (\alpha_n e_0 + e_n)
\text{ for } v \in V,
\quad
\Pi_V s = \sum_{i=1}^{n} s_i e_i
\text{ for } s \in S_n.
\end{gather*}
One possibility to deal with the nonconforming direction $\alpha_n e_0 + e_n$ is to ignore it, e.g., by choosing methods with the approximation operators
\begin{equation*}
P_{1,n} v = \sum_{i=1}^{n-1} v_i e_i
\quad\text{for}\quad
v \in V.
\end{equation*}
Each approximation operator $P_{1,n}$ is fully algebraically consistent and fully stable with $\opnorm{P_{1,n}}{V}{S} = 1$. Furthermore, $\Pi_V(\alpha_n e_0 + e_n) = e_n$ and $P_{1,n} e_n = 0$ yield
\begin{equation*}
\delta_{S_n}
\geq
\frac{ \norm{\overline{s}_n - P_{1,n} \Pi_V \overline{s}_n} }
{ \norm{\overline{s}_n - \Pi_V \overline{s}_n} }
=
\frac{\norm{\overline{s}_n}}{\alpha_n \norm{e_0}}
=
\frac{\sqrt{1+\alpha_n^2}}{\alpha_n}
\geq
\frac{1}{\alpha_n}
\end{equation*}
with $\overline{s}_n := \alpha_n e_0 + e_n$. Consequently, letting $\alpha_n\to 0$ shows that $\delta_S$ can become arbitrarily large, while the stability constant attains its minimal value for the case $S_n\cap V \neq \{0\}$.
Given a sequence $(\beta_n)_n \subseteq \mathbb{R}_+$ of positive reals, the approximation operators
\begin{equation*}
P_{2,n} v
:=
\sum_{i=1}^{n-1} v_i e_i
+ \left( v_n + \frac{\beta_n}{1+\alpha_n^2} v_{n+1}\right) (\alpha_n e_0 + e_n)
\quad\text{for}\quad v \in V
\end{equation*}
exploit the nonconforming direction $\alpha_n e_0 + e_n$. Again, each $P_{2,n}$ is fully algebraically consistent and fully stable. Here, since $P_{2,n} \Pi_V s = s$ for all $s \in S_n$, we have $\delta_{S_n} = 0$, while
\begin{equation*}
\opnorm{P_{2,n}}{V}{S}
\geq
\frac{ \norm{ P_{2,n} e_{n+1} } }{ \norm{e_{n+1}} }
\geq
\frac{ \beta_n }{ \sqrt{ 1 + \alpha_n^2 } }.
\end{equation*}
Thus, $ \beta_n / \sqrt{ 1 + \alpha_n^2 } \to \infty $ shows that the stability constant can become arbitrarily large, while $\delta_S$ attains its minimal value $0$.
\end{remark}
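The elementary computations in the preceding remark can also be checked numerically. The following is a minimal NumPy sketch (not part of the original argument; the truncation dimension and the value of $\alpha_n$ are illustrative choices) confirming the lower bound $\sqrt{1+\alpha_n^2}/\alpha_n$ obtained for $P_{1,n}$:
\begin{verbatim}
import numpy as np

# truncate to R^{n+2}; n and alpha are illustrative choices
n, alpha = 3, 0.1
e = np.eye(n + 2)                                # basis e_0, ..., e_{n+1}
s_bar = alpha * e[0] + e[n]                      # nonconforming direction in S_n
Pi_V = lambda x: x - x[0] * e[0]                 # projection onto V = span{e_1, e_2, ...}
P1 = lambda v: sum(v[i] * e[i] for i in range(1, n))   # P_{1,n} ignores e_n, e_{n+1}
num = np.linalg.norm(s_bar - P1(Pi_V(s_bar)))
den = np.linalg.norm(s_bar - Pi_V(s_bar))
print(num / den, np.sqrt(1 + alpha**2) / alpha)  # both equal sqrt(1+alpha^2)/alpha
\end{verbatim}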
\begin{remark}[Asymptotic consistency]
\label{R:asym-consistency}
The preceding remark exemplifies that the exploitation of the nonconforming direction measured by $\delta_V$ and $\delta_S$ is also relevant `in the limit' for sequences of discrete spaces and can be controlled via the uniform boundedness of the consistency measures.
\end{remark}
We conclude this section with slight generalizations of Propositions~\ref{P:consistency-with-stability} and \ref{P:consistency-without-stability}.
\begin{remark}[Consistency measures and non-quasi-optimality]
\label{R:consistency-infty}
If the method underlying $P$ is not quasi-optimal, we may set $C_{\mathrm{qopt}}=\infty$. Similarly, if $\delta_V$ (or $\delta_S$) does not exist, we set $\delta_V = \infty$ (or $\delta_S=\infty$). Then
\begin{equation*}
\delta_V = \infty \iff C_{\mathrm{qopt}} = \infty
\qquad\text{and}\qquad
\delta_S=\infty \implies C_{\mathrm{qopt}} = \infty
\end{equation*}
and, using standard conventions for $\infty$, the formulas in Propositions~\ref{P:consistency-with-stability} and \ref{P:consistency-without-stability} hold irrespective of quasi-optimality.
\end{remark}
\section{The structure of quasi-optimal methods}
\label{S:Structure}
As explained in the introduction, there is great interest in devising quasi-optimal nonconforming methods. To this end, it is useful to determine the structure of nonconforming methods that are quasi-optimal. This is the task of this section, which, in light of Theorem~\ref{T:qopt}, reduces to determining the structure of full stability and full algebraic consistency.
\subsection{Extended approximation operator and extended bilinear form}
\label{S:extensions}
Our analysis of quasi-optimality in \S\ref{S:qopt} has been centered around
the extended approximation operator $Pext$. In this subsection we relate this key tool to the extended bilinear form $\widetilde{b}$ from Lemma \ref{L:ConsistencyWithExtension} and, thus, more closely to the data $(a,S,b,L)$ defining problem and method.
\begin{lemma}[Extensions of approximation operator and bilinear forms]
\label{L:Pext_and_bext}
The approximation operator $P$ extends to a bounded linear projection $Pext$ from ${\widetilde{V}}$ onto $S$ if and only if there exists a bounded common extension $\widetilde{b}$ of $b$ and $\langle LA\cdot,\cdot\rangle$ to ${\widetilde{V}}\times S$.
If one of the two extensions exists, we have the following generalization of the Galerkin orthogonality:
\begin{equation*}
\forall \widetilde{v} \in {\widetilde{V}}, \sigma \in S
\quad
\widetilde{b}(\widetilde{v} - Pext\widetilde{v},\sigma) = 0.
\end{equation*}
\end{lemma}
\begin{proof}
Assume $Pext$ is a bounded linear projection from ${\widetilde{V}}$ onto $S$ extending $P$. Then
\begin{equation}
\label{def_bext}
\widetilde{b}(\widetilde{v},\sigma)
:=
b(Pext\widetilde{v},\sigma)
\end{equation}
defines a bounded bilinear form on ${\widetilde{V}}\times S$. Since $Pext$ is a projection onto $S$, $\widetilde{b}$ is an extension of $b$. Furthermore, if $v\in V$ and $\sigma \in S$, then $Pext_{|V} = P$ yields $\widetilde{b}(v,\sigma) = b(P v, \sigma) = \langle LAv,\sigma \rangle$. Consequently, $\widetilde{b}$ is also an extension of $\langle LA\cdot,\cdot \rangle$.
Conversely, assume that $\widetilde{b}$ is a bounded common extension of $b$ and $\langle LA\cdot,\cdot \rangle$ on ${\widetilde{V}} \times S$. Given $\widetilde{v}\in{\widetilde{V}}$, define $Pext\widetilde{v}$ by
\begin{equation}
\label{def_Appext}
Pext\widetilde{v} \in S
\quad\text{such that}\quad
\forall \sigma \in S
\;\;
b(Pext\widetilde{v},\sigma) = \widetilde{b}(\widetilde{v},\sigma).
\end{equation}
Since $b$ is a nondegenerate bilinear form on $S\times S$, the element $Pext\widetilde{v}$ exists, is unique and depends on $\widetilde{v}$ linearly. The uniqueness and $\widetilde{b}=b$ on $S \times S$ give $Pext_{|S} = \mathrm{Id}_S$. Using $\widetilde{b} = \langle LA\cdot,\cdot \rangle = b(P\cdot,\cdot)$ on $V\times S$, we obtain $Pext_{|V} = P$. Finally, the boundedness of $\widetilde{b}$ entails the boundedness of $Pext$ and the claimed equivalence is
verified.
It remains to verify the generalized Galerkin orthogonality. If one of the two extensions exists, then the other one is given either by \eqref{def_bext} or by \eqref{def_Appext}, which both just restate the claimed generalization.
\end{proof}
The close relationship between the two extensions $Pext$ and $\widetilde{b}$ suggests that the operator norm $\opnorm{Pext}{{\widetilde{V}}}{{\widetilde{V}}}$ can be reformulated in terms of $\widetilde{b}$. To this end, the following lemma will be very useful; it in turn exploits the following fact from linear functional analysis; see, e.g., Brezis \cite{Brezis:11}.
If $X$ and $Y$ are normed linear spaces, $T:X \to Y$ is a linear operator and $T^\star$ stands for its adjoint, then
\begin{equation}
\label{adjoint-for-bdd-op}
T \text{ is bounded}
\implies
D(T^\star) = Y'
\text{ with }
\opnorm{T^\star}{Y'}{X'} = \opnorm{T}{X}{Y}.
\end{equation}
\begin{lemma}[$b$-duality for energy norm on $S$]
\label{L:b-duality}
The nondegenerate bilinear form $b$ induces a norm on $S$ by
\begin{equation*}
\norm{\sigma}_{b}
:=
\norm{b(\cdot,\sigma)}_{S'}
=
\sup_{s \in S, \norm{s} = 1} b(s,\sigma),
\quad
\sigma \in S,
\end{equation*}
satisfying
\begin{equation*}
\norm{s} = \sup_{\sigma\in S} \frac{b(s,\sigma)}{\norm{\sigma}_b}.
\end{equation*}
\end{lemma}
\begin{proof}
Obviously, $\norm{\cdot}_{b}$ is a seminorm and definite thanks to the nondegeneracy of $b$. To verify the claimed identity, we observe
\begin{equation}
\label{b;bddness}
\sup_{s\in S}\sup_{\sigma\in S}
\frac{b(s,\sigma)}{\norm{s}\norm{\sigma}_{b}}
=
\sup_{\sigma\in S} \sup_{s\in S}
\frac{b(s,\sigma)}{\norm{s}\norm{\sigma}_{b}}
=
1
\end{equation}
and
\begin{equation}
\label{b;infsup}
\infimum_{s \in S} \sup_{\sigma \in S}
\frac{b(s,\sigma)}{\norm{s}\norm{\sigma}_{b}}
=
\infimum_{\sigma \in S} \sup_{s \in S}
\frac{b(s,\sigma)}{\norm{s}\norm{\sigma}_{b}} = 1,
\end{equation}
where the `=1's follow from the definition of $\norm{\cdot}_b$ and the first equality in \eqref{b;infsup} follows from \eqref{adjoint-for-bdd-op} applied to the inverse of $B$, the linear operator representing $b$. Combining \eqref{b;bddness} and \eqref{b;infsup}, we see that
\begin{equation*}
\sup_{\sigma \in S}
\frac{b(s,\sigma)}{\norm{s}\norm{\sigma}_{b}}
=
1
\end{equation*}
for all $s\in S$ and the claimed identity is verified.
\end{proof}
\begin{lemma}[Norms of extensions]
\label{L:Pext=bext}
If one of the extensions in Lemma \ref{L:Pext_and_bext} exists, we have
\begin{equation*}
\label{||Pext||=bext}
\opnorm{Pext}{{\widetilde{V}}}{{\widetilde{V}}}
=
\sup_{\sigma\in S}
\frac{\|\widetilde{b}(\cdot,\sigma)\|_{{\widetilde{V}}'}}{\norm{b(\cdot,\sigma)}_{S'}}
\end{equation*}
with the `extended' dual norm
$
\norm{\ell}_{{\widetilde{V}}'}
:=
\sup_{\widetilde{v}\in{\widetilde{V}}, \norm{\widetilde{v}} = 1} \langle \ell,\widetilde{v} \rangle.
$
\end{lemma}
\begin{proof}
Applying Lemma \ref{L:b-duality}, the generalized Galerkin orthogonality of Lemma~\ref{L:Pext_and_bext} and the definition of the extended dual norm, we infer
\begin{align*}
\opnorm{Pext}{{\widetilde{V}}}{{\widetilde{V}}}
&=
\sup_{\widetilde{v}\in{\widetilde{V}}} \frac{\norm{Pext\widetilde{v}}}{\norm{\widetilde{v}}}
=
\sup_{\widetilde{v}\in{\widetilde{V}},\sigma\in S}
\frac{b(Pext\widetilde{v},\sigma)}{\norm{\widetilde{v}}\norm{\sigma}_b}
=
\sup_{\widetilde{v}\in{\widetilde{V}},\sigma\in S}
\frac{\widetilde{b}(\widetilde{v},\sigma)}{\norm{\widetilde{v}}\norm{\sigma}_b}
\\
&=
\sup_{\sigma\in S}
\frac{\norm{\widetilde{b}(\cdot,\sigma)}_{{\widetilde{V}}'}}{\norm{\sigma}_b}
=
\sup_{\sigma\in S}
\frac{\norm{\widetilde{b}(\cdot,\sigma)}_{{\widetilde{V}}'}}{\norm{b(\cdot,\sigma)}_{S'}}.
\qedhere
\end{align*}
\end{proof}
Before closing this subsection, two remarks are in order.
\begin{remark}[Alternative proof and formula]
\label{R:alternative-formula}
An alternative proof of Lemma \ref{L:Pext=bext} may be based on a continuous counterpart of $\norm{\cdot}_b$ from Lemma \ref{L:b-duality}; see Tantardini and Veeser \cite[Theorem 2.1]{Tantardini.Veeser:16}. Using that approach, one also derives
\begin{equation*}
\opnorm{Pext}{{\widetilde{V}}}{{\widetilde{V}}}
=
\sup_{s\in S, \norm{s}=1} \; \infimum_{\sigma \in S, \norm{\sigma}=1}
\frac{\|\widetilde{b}(\cdot,\sigma)\|_{{\widetilde{V}}'}}{|b(s,\sigma)|}
\end{equation*}
by duality.
\end{remark}
\begin{remark}[Reformulations of quasi-optimality]
\label{R:qopt}
Remarks \ref{R:FullStability} and \ref{R:QuasiOptRequiresFullConsistency}, Lem\-ma\-ta~\ref{L:Pext} and \ref{L:Pext_and_bext} as well as Theorem \ref{T:qopt} show that the following statements are equivalent reformulations of quasi-optimality for a nonconforming method $M=(S,b,L)$ with approximation operator $P$:
\begin{subequations}
\label{char-qopt}
\begin{align}
\label{char:fs-fc}
&\text{$M$ is fully algebraically consistent and fully stable.}
\\ \label{char:P}
&\text{$Ps=s$ for all $s \in S\cap V$ and $P$ is bounded.}
\\ \label{char:Pext}
&\text{$P$ extends to a linear projection $Pext$ from ${\widetilde{V}}$ onto $S$ that is bounded.}
\\ \label{char:bext}
&\text{$b$ and $\langle LA\cdot,\cdot\rangle$ have a common extension $\widetilde{b}$ that is bounded.}
\\
&\text{$P$ is bounded and $b,P$ have extensions $\widetilde{b},Pext$ such that $\widetilde{b}(\widetilde{v}-Pext\widetilde{v},\sigma)=0$}
\\ \nonumber &\text{for all $\widetilde{v}\in{\widetilde{V}}$ and $\sigma \in S$.}
\end{align}
\end{subequations}
It is worth observing that no additional regularity beyond the natural one in \eqref{ex-prob} is involved. All this illustrates that extensions, as developed in our approach, are a well-tuned tool in the analysis of the quasi-optimality of nonconforming methods.
\end{remark}
\subsection{The structure of full stability}
\label{S:stab-smooting}
In this subsection we determine the structure of nonconforming methods that are fully stable.
To this end, \eqref{adjoint-for-bdd-op} and the following facts of linear functional analysis will be basic: if $X$ and $Y$ are normed linear spaces and $T:X\to Y$ is linear, then
\begin{gather}
\label{dim-and-bdd}
\dim X < \infty \iff \text{all linear operators $X \to Y$ are bounded},
\\
\label{T*surjective-iff-Tinjective}
\text{if $\dim X < \infty$, then }
T^\star \text{ surjective} \iff T \text{ injective},
\end{gather}
see, e.g., \cite{Brezis:11} and \cite[p.\ 1418]{Buckholtz:00}.
Let $M=(S,b,L)$ be a nonconforming method and recall that $M$ is fully stable if and only if the operator $M:V' \to S$ is bounded, where $V'$ and $S$ are equipped, respectively, with the dual and extended energy norm.
We claim that the full stability of $M$ hinges on the boundedness of $L$. In light of Remark~\ref{R:FullStability}, we may assume that $D(M)=D(L)=V'$. The equivalence \eqref{dim-and-bdd} yields the following two consequences. First, the boundedness of $M:V'\to S$ is a true requirement, because its domain $V'$ has infinite dimension. Second, the critical operator in the composition $M = B^{-1} L$ from \eqref{M=} is $L$. In fact, its domain $V'$ has infinite dimension, while the domain $S'$ of $B^{-1}$ has finite dimension. Consequently, a method $M$ is fully stable if and only if it is entire and the operator $L:V' \to S'$ is bounded.
Next, we characterize the class of bounded linear operators from $V'$ to $S'$ and first derive a necessary condition. Let $L:V' \to S'$ be linear and bounded. Owing to \eqref{adjoint-for-bdd-op}, its adjoint $L^\star$ is a bounded linear operator from $S''$ to $V''$. Since the spaces $S$ and $V$ are reflexive, we thus deduce the existence of a linear operator $E:S\to V$ such that
\begin{equation}
\label{L=E*}
\forall \ell \in V', \sigma \in S
\quad
\left\langle L\ell, \sigma\right\rangle
=
\left\langle \ell, E\sigma\right\rangle .
\end{equation}
Conversely, if $E:S\to V$ is a linear operator satisfying \eqref{L=E*}, then $L$ is bounded on $V'$ with $\opnorm{L}{V'}{S'} = \opnorm{E}{S}{V}$ by \eqref{adjoint-for-bdd-op} and \eqref{dim-and-bdd}.
\begin{remark}[Smoothing of $E$]
\label{R:smoothing}
Usually, the nonconformity $S \not\subseteq V$ arises from a lack of smoothness, e.g., across interelement boundaries in the case of finite element methods. The operator $E:S \to V$ may then be viewed as a smoothing operator.
\end{remark}
The above observations prepare the following result, which is our first step towards the structure of quasi-optimal methods.
\begin{theorem}[Full stability and smoothing]
\label{T:fstab-smoothing}
A nonconforming method $M = (S,b,L)$ for \eqref{ex-prob} is fully stable if and only if $L$ is the adjoint of a linear smoothing operator $E:S \to V$.
The discrete problem for $\ell \in V'$ then reads
\begin{equation}
\label{disc-prob-with-smoothing}
\forall \sigma \in S
\quad
b(M\ell, \sigma)
=
\left\langle \ell, E\sigma\right\rangle
\end{equation}
and the stability constant satisfies
\begin{equation}
\label{Cstab-with-smoothing}
C_{\mathrm{stab}}
=
\opnorm{M}{V'}{S}
=
\sup_{\sigma\in S} \, \frac{\norm{E\sigma}}{\norm{b(\cdot,\sigma)}_{S'}}.
\end{equation}
Moreover, the range of $M$ is $S$ if and only if $E$ is injective.
\end{theorem}
\begin{proof}
The observations preceding Theorem \ref{T:fstab-smoothing} show that $M$ is fully stable if and only if $L$ is the adjoint of a linear smoothing operator $E:S\to V$. Moreover, they provide the claimed form of the discrete problem via \eqref{L=E*}. The second equivalence readily follows from \eqref{T*surjective-iff-Tinjective} and Remark \ref{R:surjectivity-of-L}.
To verify \eqref{Cstab-with-smoothing}, we combine Lemma \ref{L:b-duality} with $\norm{v} = \sup_{\ell\in V', \norm{\ell}_{V'}=1} \langle \ell,v \rangle$, see, e.g., Brezis \cite[Corollary 1.4]{Brezis:11}:
\begin{align*}
C_{\mathrm{stab}}
&=
\opnorm{M}{V'}{S}
=
\sup_{\ell\in V'} \frac{\norm{M\ell}}{\norm{\ell}_{V'}}
=
\sup_{\ell \in V', \sigma \in S}
\frac{b(M\ell,\sigma)}{\norm{\ell}_{V'}\norm{\sigma}_b}
\\
&=
\sup_{\sigma \in S, \ell \in V'}
\frac{\langle\ell,E\sigma\rangle}{\norm{\ell}_{V'}\norm{\sigma}_b}
=
\sup_{\sigma \in S}
\frac{\norm{E\sigma}}{\norm{\sigma}_b}
=
\sup_{\sigma \in S}
\frac{\norm{E\sigma}}{\norm{b(\cdot,\sigma)}_{S'}}.
\qedhere
\end{align*}
\end{proof}
Let us start the discussion of this result by considering a canonical choice for the smoother $E$.
\begin{remark}[Trivial smoothing for conforming methods]
\label{R:trivial-smoothing-cG}
Assume that the discrete space $S\subseteq V$ is conforming and consider the simplest choice $E=\mathrm{Id}_S$. For this classical case, \eqref{Cstab-with-smoothing} reduces to the well-known identity
\begin{equation*}
C_{\mathrm{stab}}
=
\sup_{\sigma \in S} \frac{\norm{\sigma}}{\norm{b(\cdot,\sigma)}_{S'}}
=
\left(
\infimum_{\sigma \in S} \sup_{s \in S} \frac{b(s,\sigma)}{\norm{s}\norm{\sigma}}
\right)^{-1}
=
\left(
\infimum_{s \in S} \sup_{\sigma \in S} \frac{b(s,\sigma)}{\norm{s}\norm{\sigma}}
\right)^{-1}.
\end{equation*}
\end{remark}
\begin{remark}[Failure of $\mathrm{Id}_S$]
\label{R:failure-of-idS}
Let $S$ be a nonconforming discrete space with $S \not\subseteq V$. Then the choice $E=\mathrm{Id}_S$ is not compatible with full stability and so, in view of Theorem \ref{T:qopt}, not with quasi-optimality. Indeed, Theorem \ref{T:fstab-smoothing} shows that $E(S) \subseteq V$ is necessary for full stability. Consequently, the condition $Es=s$ entails $s \in S\cap V$ and thus produces a contradiction for any $s\in S\setminus V$. We therefore need to define $Es$ for $s\in S\setminus V$ differently, which, in view of the nature of $S$ and $V$ in applications, typically amounts to some kind of smoothing.
\end{remark}
Most DG methods and classical NCFEM rely on the simple choice $E=\mathrm{Id}_S$, requiring that the load term $\ell$ in \eqref{ex-prob} has some additional regularity. Remark \ref{R:failure-of-idS} implies that these methods are not fully stable and so, in view of Theorem~\ref{T:qopt}, not quasi-optimal. This provides an alternative to falsifying quasi-optimality by means of Remark~\ref{R:qopt->entire}.
We end this subsection by considering first alternatives to $E=\mathrm{Id}_S$ and illustrating that the choice of $E$ is in general a delicate matter.
\begin{remark}[Previous uses of smoothing]
\label{R:Previous-smoothers}
Advantages of suitable smoothing have been previously observed. An obvious one is that the method can be made entire and this has been pointed out, e.g., in the DG context by Di Pietro and Ern \cite{Ern.DiPietro:12}.
Comparing the Hellan-Hermann-Johnson method with the Morley method, Ar\-nold and Brezzi \cite{Arnold.Brezzi:85} showed that a particular smoothing in the Morley method leads to an a~priori error estimate requiring less regularity of the underlying load term. This corresponds to an increased stability thanks to the employed smoothing.
Also in the context of fourth order problems, Brenner and Sung \cite{Brenner.Sung:05} proposed $C^0$ interior penalty methods and proved a~priori error estimates also for nonsmooth loads. Furthermore, the involved regularity is minimal from the viewpoint of approximation.
Finally, Badia et al.\ \cite{Badia.Codina.Gudi.Guzman:14} used a rather involved smoother, which is related to our construction in \cite{Veeser.Zanotti:17p2}, to show a partial quasi-optimality result.
\end{remark}
\begin{remark}[Smoothers into $S\cap V$]
\label{R:qo-Lsurjective}
It may look natural to use smoothers $E$ that map into the conforming part $S\cap V$ of the discrete space. In view of Remark \ref{R:surjectivity-of-L}, the range $R(M)$ of the corresponding method is a proper subspace of $S$, whenever $S \setminus V \neq \emptyset$. Quasi-optimality is then not ruled out, but it hinges on the validity of results like Corollary 1 in Veeser \cite{Veeser:16} and requires in particular that $S \cap V$ is not small.
\end{remark}
\begin{remark}[Optimal smoothing]
\label{R:Optimal-smoothing}
The structure of full stability does not principally exclude methods that are optimal from the viewpoint of approximation. Consequently, the variational crime of nonconformity does not necessarily result in some consistency error.
To see this, consider the discrete bilinear form $b = \widetilde{a}_{|S\times S}$. Since
\begin{equation*}
\forall v \in V, \sigma \in S
\quad
\widetilde{a}(P v - v, \sigma)
=
\widetilde{a}(v, E\sigma - \sigma),
\end{equation*}
we have
\begin{equation*}
P = \Pi_S
\iff
E = \Pi_V.
\end{equation*}
In other words: a nonconforming method $(S,\widetilde{a}_{S\times S}, E^\star)$ provides the best approximation if and only if the smoother $E$ is the $\widetilde{a}$-orthogonal projection onto $V$. This smoother is however not feasible in the sense of the following remark.
\end{remark}
\begin{remark}[Feasible smoothing]
\label{R:comput-E}
Adopt the notation of Remark \ref{R:ComputingDiscreteSolutions} and let $\varphi_1,\dots,\varphi_n$ be a computationally convenient basis for the discrete bilinear form $b$. In order to compute $M\ell$ by \eqref{disc-prob-with-smoothing} with optimal complexity, the total number of operations for evaluating $\langle \ell, E\varphi_i\rangle$ for all $i=1,\dots,n$ has to be of order $O(n)$. A sufficient condition for this is that, for each $i=1,\dots,n$, the function $E\varphi_i$ is locally supported so that $\langle \ell, E\varphi_i\rangle$ can be evaluated at cost $O(1)$.
\end{remark}
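To make the operation count in the preceding remark concrete, the following Python sketch (with hypothetical data structures; none of the names come from the text) assumes that each $E\varphi_i$ is stored as a sparse combination of $O(1)$ conforming functions $\psi_j$ whose moments $\langle \ell, \psi_j\rangle$ are available at cost $O(1)$ each:
\begin{verbatim}
def assemble_load(E_coeffs, ell_moments):
    # E_coeffs[i]: list of pairs (j, c_ij) with E(phi_i) = sum_j c_ij * psi_j,
    #              assumed to contain O(1) entries per i (local support)
    # ell_moments[j]: the precomputed moment <ell, psi_j>
    rhs = [0.0] * len(E_coeffs)
    for i, terms in enumerate(E_coeffs):
        for j, c in terms:
            rhs[i] += c * ell_moments[j]   # <ell, E phi_i> = sum_j c_ij <ell, psi_j>
    return rhs                             # O(n) operations in total
\end{verbatim}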
\subsection{The structure of quasi-optimality}
\label{S:qopt-smoothing}
We are finally ready for the main results of our abstract analysis about the quasi-optimality of nonconforming methods.
\begin{theorem}[Quasi-optimality and smoothing]
\label{T:qopt-smoothing}
A nonconforming method $M=(S,b,L)$ for \eqref{ex-prob} is quasi-optimal if and only if there exists a linear smoothing operator $E:S \to V$ such that the discrete problem reads
\begin{equation*}
\forall \sigma \in S
\quad
b(M\ell,\sigma) = \langle \ell, E\sigma \rangle
\end{equation*}
for any $\ell \in V'$ and
\begin{equation}
\label{fa-consistency-with-E}
\forall u \in S\cap V, \sigma \in S
\quad
b(u, \sigma)
=
a(u, E\sigma).
\end{equation}
Its quasi-optimality constant is given by
\begin{equation}
\label{Cqopt=;smoothing}
C_{\mathrm{qopt}}
=
\sup_{\sigma \in S} \,
\frac{ \sup_{\norm{v+s} = 1} \big( a(v,E\sigma) + b(s,\sigma) \big) }
{\sup_{\norm{s} = 1} b(s,\sigma)},
\end{equation}
where $v$ varies in $V$ and $s$ in $S$.
\end{theorem}
\begin{proof}
We first check the claimed equivalence. The form of the discrete problem means that $L$ is the adjoint of $E$ and, in view of Theorem \ref{T:fstab-smoothing}, that $M$ is fully stable. Moreover, since
\begin{equation}
\label{LA-e-E}
\langle LAu, \sigma \rangle
=
\langle Au,E\sigma \rangle
=
a(u, E\sigma)
\end{equation}
for all $u\in V$ and $\sigma \in S$, \eqref{fa-consistency-with-E} is equivalent to \eqref{Consistency}, i.e. full algebraic consistency. Consequently, the claimed equivalence follows from Theorem~\ref{T:qopt}.
To show the identity for the quasi-optimality constant, we observe that the extension $\widetilde{b}$ exists and satisfies, for $\widetilde{v}\in{\widetilde{V}}$, $v \in V$, $s,\sigma \in S$ such that $\widetilde{v} = v + s$,
\begin{equation*}
\widetilde{b}(\widetilde{v}, \sigma)
=
\langle LAv,\sigma \rangle + b(s,\sigma)
=
a(v, E\sigma) + b(s,\sigma)
\end{equation*}
thanks to \eqref{LA-e-E}. Therefore, the formula for $C_{\mathrm{qopt}}$ follows from Theorem \ref{T:qopt} and Lemma \ref{L:Pext=bext}.
\end{proof}
We start the discussion of Theorem \ref{T:qopt-smoothing} by a remark about the notion of Galerkin methods.
\begin{remark}[Galerkin methods]
\label{R:nG}
Assume first that the discrete space $S\subseteq V$ is conforming. Then trivial smoothing $E=\mathrm{Id}_S$ in \eqref{fa-consistency-with-E} yields $b = a_{|S \times S}$. In other words: conforming Galerkin methods are the only quasi-optimal methods with the simplest choice $E=\mathrm{Id}_S$ for smoothing.
Next, consider a general nonconforming discrete space $S$, together with the simplest choice for smoothing in the conforming part $S\cap V$, i.e.\ with $E_{|S\cap V} = \mathrm{Id}_{S\cap V}$. Here \eqref{fa-consistency-with-E} yields $b_{|S_C \times S_C} = a_{|S_C \times S_C}$ with $S_C = S \cap V$. Thus, nonconforming Galerkin methods are the only candidates for quasi-optimal methods with $E_{|S\cap V} = \mathrm{Id}_{S\cap V}$. In this context, the following observation is useful in constructing $E$ with $E_{|S\cap V} = \mathrm{Id}_{S\cap V}$. If $E$ maps some $s \in S\setminus V$ into $S\cap V$, then $E$ is not injective and, in view of Theorem \ref{T:fstab-smoothing}, the range of the method is a strict subspace of $S$.
\end{remark}
\begin{remark}[Comparison with second Strang lemma]
\label{R:2nd-Strang}
For conforming Galerkin methods, Theorem \ref{T:qopt-smoothing} reduces to the well-known C\'ea lemma, with $C_{\mathrm{qopt}}=1$. C\'ea's lemma is a basic building block in the analysis of the energy norm error for conforming methods. In the context of nonconforming methods, the second Strang lemma is often used as a replacement. Theorem \ref{T:qopt} provides a specialization revealing the structure of quasi-optimal methods and so lays the groundwork for their design.
\end{remark}
\begin{remark}[Comparison with conforming Petrov-Galerkin methods]
\label{R:Petrov-Galerkin}
Our setting of \S\ref{S:setting} includes the application of Petrov-Galerkin methods to \eqref{ex-prob}. It is therefore of interest to compare formula \eqref{Cqopt=;smoothing} with its conforming counterpart in Theorem~2.1 of Tantardini and Veeser \cite{Tantardini.Veeser:16}:
\begin{equation*}
C_{\mathrm{qopt}}
=
\sup_{\sigma \in S} \,
\frac{ \sup_{\norm{v} = 1} b(v,\sigma) }{\sup_{\norm{s} = 1} b(s,\sigma)},
\end{equation*}
where here $b$ stands for the continuous (and discrete) bilinear form, $v$, $s$, and $\sigma$ vary, respectively, in the continuous trial space, in the discrete trial space and in the discrete test space. We see that \eqref{Cqopt=;smoothing} generalizes this formula, replacing the continuous bilinear form by the extended one, which interweaves discrete and continuous problems.
\end{remark}
\begin{remark}[`Classical' bound for quasi-optimality constant]
\label{R:Cqopt}
A consequence of the formula for the quasi-optimality constant in Theorem \ref{T:qopt-smoothing} and \eqref{adjoint-for-bdd-op} is the following upper bound:
\begin{equation}
\label{Cqopt<=b(ext)}
C_{\mathrm{qopt}}
\leq
\frac{C_{\widetilde{b}}}{\beta}
\end{equation}
with the continuity and inf-sup constants
\begin{equation*}
C_{\widetilde{b}} := \sup_{\norm{v+s}=1, \norm{\sigma} = 1} \big( a(v,E\sigma) + b(s,\sigma) \big),
\qquad
\beta := \infimum_{\norm{s} = 1} \sup_{\norm{\sigma} = 1} b(s,\sigma),
\end{equation*}
where $v$ varies in $V$ and $s$ and $\sigma$ in $S$. This upper bound has the classical form of constants appearing in quasi-optimality results, apart from the slight difference that the continuity constant of the numerator involves the extended bilinear form; see also Remark \ref{R:Petrov-Galerkin}.
It is worth mentioning that the right-hand side of \eqref{Cqopt<=b(ext)} can become arbitrarily large, while its left-hand side remains bounded; see \cite[Remark~2.7]{Veeser.Zanotti:17p2}.
\end{remark}
Let us now assess what determines the size of the quasi-optimality constant.
\begin{theorem}[Size of quasi-optimality constant]
\label{T:Cqopt}
Assume $M=(S,b,L)$ is a quasi-optimal nonconforming method with linear smoother $E:S \to V$ and stability constant $C_{\mathrm{stab}}$. The consistency measure $\delta_V$ of Proposition \ref{P:consistency-with-stability} is finite and given by
\begin{equation}
\label{deltaV=}
\delta_V
=
\sup_{v \in V, \Pi_S v \neq v} \, \sup_{\sigma \in S} \,
\frac{ b(\Pi_S v, \sigma) - a(v, E\sigma)}
{\norm{ \Pi_S v - v} \norm{b(\cdot,\sigma)}_{S'}}.
\end{equation}
Similarly, the consistency measure $\delta_S$ of Proposition \ref{P:consistency-without-stability} is finite and is the smallest nonnegative constant such that
\begin{equation*}
\forall s \in S
\quad
\sup_{\sigma \in S}
\frac{ b(s,\sigma) - a(\Pi_V s, E\sigma) }{ \norm{b(\cdot,\sigma)}_{S'}}
\leq
\delta_S
\norm{ s - \Pi_V s}.
\end{equation*}
Then the quasi-optimality constant of $M$ satisfies
\begin{equation*}
\max \{C_{\mathrm{stab}}, \delta_S \}
\leq
C_{\mathrm{qopt}}
=
\sqrt{1+\delta_V^2}
\leq
\sqrt{C_{\mathrm{stab}}^2+\delta_S^2}.
\end{equation*}
\end{theorem}
\begin{proof}
Lemma \ref{L:b-duality} readily yields the identities
\begin{equation*}
\norm{\Pi_S v - P v}
=
\sup_{\sigma \in S} \frac{ b(\Pi_S v - P v, \sigma) }{ \norm{\sigma}_b }
\quad\text{and}\quad
\norm{ s - P\Pi_V s}
=
\sup_{\sigma \in S} \frac{ b(s - P\Pi_V s, \sigma) }{ \norm{\sigma}_b }.
\end{equation*}
Notice also $b(P v, \sigma) = b(M Av, \sigma) = \langle LAv, \sigma \rangle = a(v,E\sigma)$ and $\norm{\sigma}_b = \norm{b(\cdot,\sigma)}_{S'}$ for $v \in V$ and $\sigma \in S$ as well as $V \setminus S \neq \emptyset$. Therefore, $\delta_V$ and $\delta_S$ coincide with the corresponding quantities in Propositions \ref{P:consistency-with-stability} and \ref{P:consistency-without-stability} and Theorem \ref{T:Cqopt} just restates their conclusions.
\end{proof}
We refer to \S\ref{S:Cqopt} for a discussion of the relationship between $C_{\mathrm{qopt}}$ and $C_{\mathrm{stab}}$ and in particular the consistency measures $\delta_V$ and $\delta_S$. Let us further connect the expression of $\delta_V$ in this theorem with classical consistency.
\begin{remark}[$\delta_V$ and classical consistency error]
\label{R:consistency error}
The numerator of \eqref{deltaV=} represents the action of a linear functional on $S$, namely
\begin{equation*}
b(\Pi_S v, \sigma) - a(v, E \sigma)
=
\left\langle B \Pi_S v - LA v, \sigma \right\rangle
=:
\left\langle \rho, \sigma \right\rangle.
\end{equation*}
Let us recall that $LAv$ is the discrete load associated to $v$ in problem \eqref{disc-prob} and $B\Pi_S v$ is the linear functional obtained from the representative $\Pi_S v$ of $v$ in $S$, through the isomorphism $B$. Introducing the norm $\norm{\cdot}_{S',b} := \sup_{\norm{b(\cdot, \sigma)}_{S'} =1} \left\langle \cdot, \sigma \right\rangle$, the quantity $\norm{\rho}_{S', b}$ is a consistency error in the sense of Arnold \cite{Arnold:15}. The measure $\delta_V$ compares this quantity with the natural benchmark in the context of quasi-optimality, i.e.\ the best error $\norm{v-\Pi_S v}$.
\end{remark}
Given $S$ and $b$, Theorem \ref{T:qopt-smoothing} reduces the construction of quasi-optimal nonconforming methods to the choice of a computationally feasible linear smoother $E$ and Theorem \ref{T:Cqopt} shows how the smoother $E$ affects the size of the quasi-optimality constant. In the follow-ups \cite{Veeser.Zanotti:17p2,Veeser.Zanotti:17p3} of this work, we devise such smoothers for various nonconforming finite element spaces. Modifying classical NCFEM (like the Crouzeix-Raviart method), we can obtain $\delta_S=0$ and so $C_{\mathrm{qopt}} = C_{\mathrm{stab}}$, as for conforming Galerkin methods. Also DG and $C^0$ interior penalty methods can be modified to be quasi-optimal. Remarkably, additional terms not affecting full algebraic consistency entail $\delta_S > 0$ for the employed smoothing.
\end{document}
\begin{document}
\title{Rapid Numerical Approximation Method for Integrated Covariance Functions Over Irregular Data Regions}
\begin{abstract}
In many practical applications, spatial data are often collected at areal levels (i.e., block data) and the inferences and predictions about the variable at points or blocks different from those at which it has been observed typically depend on integrals of the underlying continuous spatial process. In this paper we describe a method based on the \textit{Fourier transform} by which multiple integrals of covariance functions over irregular data regions may be numerically approximated with the same level of accuracy as traditional methods, but at a greatly reduced computational expense.
\end{abstract}
\keywords{Change of Support \and Continuous Spatial Process \and Integrated Covariance Functions \and Fourier Transform}
\section{Introduction}\label{sec:intro}
\subsection{Motivation}\label{subsec:motive}
Due to advances in science and technology, spatial data from remotely sensed observations, surveys, and censuses are being gathered at a rapid pace (cf., \cite{Brad:2015a, Brad:2015b, Brad:2016, Nguyen:2012, Nguyen:2014}) and subsequently, this has created the opportunity to quantify spatial dependence and make predictions for many different kinds of processes and variables. Often these data are in the form of spatial averages over irregular and possibly overlapping regions. This feature makes it difficult to apply standard methods of spatial analysis which are typically built for point-referenced data. These kinds of spatial problems are termed \emph{change of support} problems (see \cite{Cressie:1993}) and our primary concern is to make statistical inferences about the values of a variable at spatial scales different from those at which it has been observed (cf.,~\cite{Gotway:2002}). For example, in the context of remote sensing and continuous geophysical variables, satellite data products can be reported as averages over a set of regions (e.g., intervals, areas, or volumes), but the interest is in a spatial field varying over a continuum. When the underlying spatial field can be represented as a Gaussian process and the observational data are linear functionals of the field, a standard statistical framework can be applied to make inferences about the spatial process. This approach depends on evaluating the covariance matrix among the observations. In the case of linear functionals being integrals over spatial regions, multi-dimensional integrals involving the covariance function may not have a closed analytical form and so a numerical method is required for such computations. Efficient numerical methods in the literature are typically either tied to particular region geometries, require user intervention to allocate quadrature points, or achieve efficiency at a cost in accuracy (see \cite{Journel:2003, Martin:1994}). This work was motivated by the lack of accurate numerical strategies to handle large change of support problems. We present a new method of numerically approximating covariance integrals over irregular regions when the underlying covariance model is assumed to be (second-order) stationary. This method is efficient because it uses the discrete Fourier transform (DFT) and so can handle a large number of quadrature points to gain accuracy. In addition, our approximation is based on a coherent representation of the spatial process and so has a useful interpretation. Finally, the proposed method deals with a single discretized process and discretized integrals for which the observation covariances and subsequent statistical inferences are exact.
\subsection{Model}\label{subsec:model}
Consider a random field $Y(\bld s)$ with mean function $\mu_Y(\bld s)$ and covariance function $c_Y(\bld s,\bld s^{\prime})$, for $\bld s,\bld s^{\prime} \in \bld R^d$. We also assume $Y(\bld s)$ to be defined over a domain ${\cal{D}}\subseteq \bld R^d$. Let $B_1,\ldots,B_n$ be regions in ${\cal{D}}$ and suppose we have observation functionals given by $\bld z_i =\frac{1}{|B_i|} \int_{B_i} Y(\bld s)d\bld s$, where $|B_i|=\int_{B_{i}}d\bld s$ for $i=1,\ldots,n$. If $Y(\bld s)$ is a Gaussian process, then $\bld z$ follows a multivariate normal distribution (\cite{Song:2008}, \cite{Cressie:1993}). To simplify the exposition, in this paper we focus only on two-dimensional spatial domains with polygons serving as the regions of interest, but the proposed method can operate in arbitrary dimensions. Moreover, the regions of interest need not be polygonal, only well-approximated by indicator functions discretized over regular grids. See Section~\ref{algo} for details.\\
\vskip-1em
\noindent Define the mean of the observation functionals as
\begin{eqnarray}
\bld \mu_i = E(\bld z_i) = \frac{1}{|B_i|} \int_{B_i} \mu_Y(\bld s)d\bld s,\ \mbox{for}\ i=1,\ldots,n.
\label{eq:target0}
\end{eqnarray}
\noindent and the covariance matrix, $\bld K$, as
\begin{eqnarray}
\bld K_{i,j}= cov(\bld z_i,\bld z_j)= \frac{1}{|B_i|} \frac{1}{|B_j|} \int_{B_i} \int_{B_j} c_Y(\bld u,\bld v) d\bld v d\bld u\ \mbox{for}\ i,j=1,\ldots,n. \label{eq:target}
\end{eqnarray}
Note that, in Eqn.~(\ref{eq:target}), the integrand is bounded and therefore, the interchange of expectation and integration is permissible.
In general, one might also consider an additional weight function in the integrand but we will omit this extension to simplify exposition. \\
\vskip-1em
\noindent Generally, two basic components of a spatial data analysis are prediction of the spatial field to locations that are not observed and estimating statistical parameters in the covariance functions and the observational model. Here we emphasize the aspects of these operations that are challenging with change of support data.
Typically, one includes an additional measurement error component, so the complete observational model is
\begin{equation}
\bld Z_i = \bld z_i + \bld \epsilon_i,
\end{equation}
with $\bld \epsilon$ being mean zero, Gaussian white noise with variance $ \tau^2 $ and independent of the process $Y$.
Under the assumption that $\bld \mu$, $\mu_Y(\bld s)$, $\tau^2$ and any additional parameters in $c_Y$ are known, we have the standard Kriging prediction for
$Y(\bld s)$ given as
\begin{equation}
\label{eq:target00}
\widehat{Y}(\bld s) = \mu_Y(\bld s) + \bld k(\bld s)^T ( \bld K + \tau^2 \bld I ) ^{-1}(\bld Z - \bld \mu)
\end{equation}
\vskip-1em
\noindent where,
\begin{equation}
\label{eq:target1}
\bld k(\bld s)_i = COV(Y(\bld s), \bld Z_i)= \frac{1}{|B_i|}\int_{B_i} c_Y(\bld s,\bld u) d\bld u,
\end{equation}
and $\bld I$ is the identity matrix. Thus we see that prediction will involve evaluation of $\bld K$ and also an additional integral for every prediction location. Our proposed method gives accurate approximations to this vector and makes spatial prediction on a large and dense grid of locations feasible. \\
\vskip-1em
\noindent Another important aspect of a spatial analysis is estimating unknown statistical parameters. Following a maximum likelihood approach or nested within a Bayesian model, one requires evaluation of the negative log likelihood
\begin{equation}
\label{nloglike}
-\ell( \bld Z) =\frac{1}{2} (\bld Z - \bld \mu)^T ( \bld K + \tau^2 \bld I ) ^{-1}(\bld Z - \bld \mu) + \frac{1}{2} \log | \bld K + \tau^2 \bld I | +C,
\end{equation}
where $C$ is a constant and where both $\bld \mu$ and $\bld K$ may depend on other statistical parameters. From Eqn.~(\ref{nloglike}) we see that mean and covariance parameters for the underlying process can be deduced through the observational covariance matrix. However, maximization of this likelihood will require re-evaluating the integral expressions for $\bld K$ and $\bld \mu$ for any parameters that enter Eqns.~(\ref{eq:target0}) or (\ref{eq:target}) in a nonlinear manner. Our proposed method significantly reduces the computational burden associated with evaluation of $\bld K$ and $\bld \mu$ and so makes maximum likelihood methods, or related Bayesian inference, feasible.
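For concreteness, the evaluation of Eqn.~(\ref{nloglike}) is typically carried out via a Cholesky factorization; the following short NumPy sketch (our illustration, not part of the proposed method) computes $-\ell(\bld Z)$ up to the additive constant $C$:
\begin{verbatim}
import numpy as np

def neg_log_lik(Z, mu, K, tau2):
    # negative log likelihood up to the constant C, cf. Eqn. (nloglike)
    A = K + tau2 * np.eye(len(Z))
    L = np.linalg.cholesky(A)              # A = L L^T
    r = np.linalg.solve(L, Z - mu)         # r = L^{-1} (Z - mu)
    return 0.5 * r @ r + np.sum(np.log(np.diag(L)))
\end{verbatim}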
\section{Algorithms }
\label{algo}
In this section we review some of the previous work and describe our new method. Our new approach exploits the computational efficiency of the DFT and we refer to this as Fourier Approximations of Integrals over Regions (FAIR). For certain families of covariance functions and regular shapes, such as rectangles and triangles, it is possible to obtain closed form expressions for the covariance matrix $\bld K$. However, a core set of spatial statistical applications typically involves at least two dimensions and covariance functions based on radial distances. Therefore, a closed form expression for $\bld K$ often does not exist. For instance, for the widely used Mat\'ern family the closed form expressions are not available, even for rectangular regions.
\subsection{Direct quadrature}\label{subsec:dq}
A direct approach to approximate the key integrals given in Eqns. (\ref{eq:target0}),~(\ref{eq:target}) and~(\ref{eq:target1}) deals with a sum, up to a scaling factor, over a grid of locations restricted to the regions, i.e., a discrete, Riemann approximation to the continuous integral.
In particular, let $G$ be a regular grid of locations that covers the spatial domain and let $\bld s_i^G = \{\bld s: \bld s\in B_{i}\cap G\},\ i=1,\ldots,n$, with $L_i^G$ the number of points in $\bld s_i^G$. Then
$\bld K_{i,j}$ can be approximated by
\begin{eqnarray}
\frac{1}{L_i^G L_j^G}\sum_ {\bld s_{k}\in \bld s_i^G,\bld s_{\ell}\in \bld s_j^G} c_Y(\bld s_k, \bld s_\ell) \label{eqRieman}
\end{eqnarray}
This approximation has the advantage that the discretized process on the grid is the stochastic basis for the statistical model. In fact the analysis will be exact if one replaces the integral expressions for the observation functionals by discrete sums implied by this Riemann approximation.
Although this approximation is straightforward as a formula with a useful interpretation, many authors have pointed out the computational burden of evaluating this double sum over a multidimensional grid. Unfortunately, for most problems, and in particular for large spatial data sets, the computations are prohibitive with single-threaded codes. \\
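To fix ideas, a direct implementation of Eqn.~(\ref{eqRieman}) might look as follows (a NumPy sketch with our own naming; the covariance function is assumed to be vectorized over broadcastable arrays of locations):
\begin{verbatim}
import numpy as np

def riemann_block_cov(pts_i, pts_j, cov_fun):
    # pts_i, pts_j: arrays of shape (L_i, 2) and (L_j, 2) holding the grid
    # points of G that fall inside B_i and B_j; cov_fun(u, v) evaluates c_Y
    C = cov_fun(pts_i[:, None, :], pts_j[None, :, :])   # (L_i, L_j) covariances
    return C.mean()                                     # (1/(L_i L_j)) * double sum
\end{verbatim}
The quadratic growth of the $(L_i, L_j)$ array with the grid resolution is precisely the computational burden noted above.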
\vskip-1em
\noindent To overcome the computational challenges inherent in Eqn.~(\ref{eqRieman}), \cite{Journel:2003} (henceforth [JH]) propose a Riemann approximation described as ``centered regular discrete approximation with uniform weighting" that is particularly well-suited for semi-variogram (or covariance) approximation when the regions of interest are parallelograms or parallelepipeds. The key idea is to use many small, distinct grids tailored to each region.
More precisely, given grids $G_{i}$ and $G_{j}$ tailored to regions $B_{i}$ and $B_{j}$ respectively, with $\bld s_i^{G_i} = \{\bld s: \bld s\in B_{i}\cap G_i\},\ i=1,\ldots,n$, with $L_i^{G_i}$ the number of points in $\bld s_i^{G_i}$, then the covariance entry $\bld K_{i,j}$ is approximated by
\begin{eqnarray}
\frac{1}{L_i^{G_i} L_j^{G_j}} \sum_ {\bld s_{k}\in \bld s_i^{G_{i}},\bld s_{\ell}\in \bld s_j^{G_{j}}} c_Y(\bld s_k, \bld s_\ell).
\label{eqJR}
\end{eqnarray}
\noindent The recommended maximum grid densities to use for the grids $G_{i}, i=1,\ldots,n$ with the [JH] approach are 10 points for a one-dimensional domain, $6 \times 6$ for a two-dimensional domain, and $4 \times 4 \times 4$ for a three-dimensional domain. This approach is particularly well-suited to parallelogram regions, as the quadrature point locations can be centered in each unit of a regular partitioning of the overall region, avoiding the potential for bias that arises when more geometry-agnostic regular grid overlay methods are applied.\\
\vskip-1em
\noindent The [JH] approach, however, has several disadvantages for large data sets. The number of points in this method is limited to avoid generating large sums of covariance evaluations in Eqn.~(\ref{eqJR}). This will be an issue for large numbers of locations because in practice evaluating transcendental functions for the covariance kernel will take appreciable computation time. Restricting the number of quadrature points, however, results in limited accuracy for irregular shapes. To better represent the integral, the [JH] method also proposes quadrature points aligned with the extent and boundaries of the regions. This makes coding the method complicated. Finally, we note that because the set of quadrature points is adapted to each region's shape and location, it is difficult to describe an underlying discrete process that will be the basis for prediction. \\
\vskip-1em
\noindent Our approach revisits the Riemann double sum over a large fixed grid. However, by taking advantage of the fast Fourier transform and stationarity of $c_Y$, we are able to reduce the double sum over the grid to a single sum in frequency space. In addition, there is the possibility of even more rapid evaluation of the covariance function if it has a closed form Fourier transform, such as the Mat\'ern family.
\subsection{FAIR algorithm }\label{subsec:FAIRalgo}
We seek to evaluate integrals of the type shown in Eqn.~(\ref{eq:target}) using Fourier representations. Given this strategy, it is useful to review some background. Assume a stationary
covariance model $c_Y(\bld s_i,\bld s_j) = c(\bld s_i - \bld s_j)$. Let $\ast$ denote the (multi-dimensional) convolution operator,
overlines denote complex conjugates, and ${\cal{F}} \left[g \right] (\bld \omega)$ denote the (multi-dimensional) Fourier transform
of the function $g(\bld s)$. We first cast the computation of $\bld K_{i,j}$ in terms of this continuous transform and then
approximate this representation using the DFT. \\
\vskip-1em
\noindent We make use of the convolution theorem,
\begin{equation}
f = g \ast h \Leftrightarrow {\cal{F}} \left[f \right] ={\cal{F}} \left[g \right] {\cal{F}} \left[h \right]. \nonumber
\end{equation}
\noindent Also, by Plancherel's Theorem,
\begin{equation}
\int_{\bld R^d} f(\bld s) \overline{g(\bld s)} d\bld s =
\int_{\bld R^d} {\cal{F}} \left[f \right] (\bld \omega) \overline{{\cal{F}} \left[g \right] (\bld \omega)} d\bld \omega. \nonumber
\end{equation}
\vskip-1em
\noindent With these results we have,
\begin{equation}
\begin{aligned}
\int_{B_i} \int_{B_j} c_Y(\bld u,\bld v) d\bld u d\bld v & =
\int_{\bld R^d} I_{B_i}(\bld u) ( I_{B_j} \ast c ) (\bld u) d\bld u
\\ & = \int_{\bld R^d} {\cal{F}} \left[I_{B_i} \right] (\bld \omega)
\overline{ {\cal{F}} \left[ I_{B_j} \ast c \right] (\bld \omega) } d\bld \omega
\\ & = \int_{\bld R^d} {\cal{F}} \left[ I_{B_i} \right] (\bld \omega)
\overline{ {\cal{F}} \left[ I_{B_j} \right] (\bld \omega) }
{\cal{F}} \left[ c \right] (\bld \omega) d\bld \omega,
\label{eq:planch}
\end{aligned}
\end{equation}
\noindent where, $I_{A}(\bld x)=1\ \mbox{if}\ \bld x\in A\ \mbox{and}\ 0,\ \mbox{otherwise}$. Furthermore, the representation given in Eqn.~(\ref{eq:planch}) is exact. \\
\vskip-1em
\noindent We now make the following two approximations to the final expression in Eqn.~(\ref{eq:planch}).
\begin{itemize}
\item[(a)] The first approximation is the restriction of the integral to a finite domain. Note that, for functions $c(\bld s)$ such that $c(\bld s)\rightarrow 0$ as $||\bld s||\rightarrow \infty$, $( I_{B_i} \ast c )$ will also decay to zero away from $B_i$; thus, we select $S \subset \bld R^d$, a $d$-dimensional, rectangular region (e.g., an interval in $\bld R^1$, a rectangle in $\bld R^2$, a rectangular prism in $\bld R^3$, etc.), where $B_i,B_j$ are well contained in $S$ (i.e., the distance between $B_i,B_j$ and the boundaries of $S$ is large, relative to the decay range of $c(\cdot)$). Accordingly this approximation will restrict the integral in Eqn.~(\ref{eq:planch}) to the domain $S$.
\item[(b)] The second approximation is a discretization.
Let $G$ be a regular grid of points that covers $S$ and with spacing $\Delta_j$ in dimension $j$. For a function, say $g$, evaluated on this grid, let $DFT[ g]$ denote its DFT. Let $\omega^G$ be the mirror grid in the frequency domain. Then we are led to the approximation:
\begin{eqnarray}
\label{eq:disc_approx}
\int_{B_i} \int_{B_j} c_Y(\bld u,\bld v) d\bld u d\bld v \approx \sum_k DFT[{\mathcal{I}_{B_{i}}}](\omega_k^G) \overline{ DFT[{ \mathcal{I}_{B_{j}}}](\omega_k^G) } DFT[{c}](\omega_k^G) \Delta
\end{eqnarray}
where $\Delta = \prod_{i=1}^{d}\Delta_i$, and $\mathcal{I}_{B_{i}}$ is a weighted version of the indicator function $I_{B_i}$. Each grid point can be associated with a surrounding grid box, and $\mathcal{I}_{B}$ at a grid point is equal to the fraction of the area of the associated grid box that is contained in $B$. Values for $\mathcal{I}_{B}$ will be mostly zero or one, with fractional values for grid boxes on the boundary of $B$. In two dimensions one can evaluate an approximate version of $\mathcal{I}_{B}$ rapidly; an example is illustrated in Figure~\ref{fig:frac_ind}, and a short code sketch follows this list.
\end{itemize}
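The weighted indicator $\mathcal{I}_{B}$ in approximation (b) can be evaluated approximately by sub-sampling each grid cell, as in the following sketch (our own helper; the point-in-polygon test from matplotlib and the sub-sampling level are illustrative choices):
\begin{verbatim}
import numpy as np
from matplotlib.path import Path

def weighted_indicator(poly_xy, xg, yg, sub=4):
    # approximate I_B at the cell centers (xg x yg) by the fraction of
    # sub*sub sample points per cell that fall inside the polygon poly_xy
    path = Path(poly_xy)
    dx, dy = xg[1] - xg[0], yg[1] - yg[0]
    off = (np.arange(sub) + 0.5) / sub - 0.5     # sample offsets within a cell
    frac = np.zeros((len(xg), len(yg)))
    for a in off:
        for b in off:
            X, Y = np.meshgrid(xg + a * dx, yg + b * dy, indexing="ij")
            pts = np.column_stack([X.ravel(), Y.ravel()])
            frac += path.contains_points(pts).reshape(X.shape)
    return frac / sub**2
\end{verbatim}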
\begin{figure}
\caption{Example output of the weighted indicator $\mathcal{I}_{B}$ on a regular grid.}
\label{fig:frac_ind}
\end{figure}
\subsection*{Implementation}
Our method approximates the integrals shown in Eqn.~(\ref{eq:planch}) with discretized sums over a finite domain as shown in Eqn.~(\ref{eq:disc_approx}), exploiting the fast Fourier transform (FFT) algorithm to compute the DFT. The algorithm to determine an entry of $\bld K$ is as follows; a code sketch is given after the list. Throughout this algorithm we will use $\theta_x$ to refer to the distance at which the correlation function decreases to $x$.
\begin{enumerate}
\item A regular grid $G$ is constructed that extends beyond the regions ${B_1,...,B_n}$. The overall extent of the grid is constrained to be at least twice $\theta_{0.05}$ and also to extend beyond all observation regions by at least $\theta_{0.25}$.
The grid size is chosen to be highly composite (dyadic) to facilitate an efficient FFT.
\item $\widehat{\bld B} ^i = DFT[\mathcal{I}_{B_i }] $ is found by FFT for $i=1,\dots,n$.
\item $\widehat{\bld c} = DFT[c]$ is found for $c(\cdot)$ by FFT.
\item $\widehat{|B_i|}=\sum \mathcal{I}_{B_{i}}(G) \Delta$ is found for $i=1,\dots,n$.
\item $\bld K_{i,j} = \frac{1}{\widehat{|B_{i} |}} \frac{1}{\widehat{|B_{j} |}} \sum_k \widehat{\bld B}^i_k \overline{\widehat{\bld B}^j_k} \widehat{\bld c}_k \Delta $ are found with the sum being over the discretized Fourier frequencies prescribed by the DFT.
\end{enumerate}
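The steps above can be sketched in NumPy as follows (our own illustration of steps 2--5; the grid and the weighted indicators are assumed to have been built as in step 1). The normalization is written so that the grid-cell area cancels between the Plancherel-type sum and the $\widehat{|B_i|}$ factors:
\begin{verbatim}
import numpy as np

def fair_cov_matrix(ind_list, cov_fun, dx, dy):
    # ind_list[i]: 2-D array with the weighted indicator of B_i on the grid
    # cov_fun(hx, hy): vectorized stationary covariance c(h)
    nx, ny = ind_list[0].shape
    # lags in FFT ordering, so that c acts as a circular convolution kernel
    hx = np.fft.fftfreq(nx, d=1.0 / (nx * dx))
    hy = np.fft.fftfreq(ny, d=1.0 / (ny * dy))
    HX, HY = np.meshgrid(hx, hy, indexing="ij")
    c_hat = np.fft.fft2(cov_fun(HX, HY))            # step 3
    B_hat = [np.fft.fft2(I) for I in ind_list]      # step 2
    w = [I.sum() for I in ind_list]                 # |B_i| up to the cell area (step 4)
    n, N = len(ind_list), nx * ny
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):                       # step 5
            s = np.sum(B_hat[i] * np.conj(B_hat[j]) * c_hat).real
            K[i, j] = K[j, i] = s / (N * w[i] * w[j])
    return K
\end{verbatim}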
\noindent The grid extent constraints are motivated by two factors. The discrete, finite domain approximation to the full Fourier transform of $c$ degrades significantly when the grid is constructed over a domain which is small relative to the decay rate of the covariance function. This is due to the well-known property of the DFT of enforcing a periodic representation of the covariance function.
Extending the grid to be at least twice $\theta_{0.05}$ avoids this issue. Also, the discrete, finite domain approximation degrades significantly without a `buffer zone' around the regions ${B_1,...,B_n}$ based on correlation range. This is due to the periodic wrapping of the approximation to the covariance whereby regions on one edge of the domain become (spuriously) correlated with regions on the opposite edge. Experiments show that extending the grid with a buffer of $\theta_{0.25}$ avoids this artifact. Thus, if the spatial domain that contains the regions ${B_1,...,B_n}$ is a square with side length $L$, a conservative grid for computation should have an extent of at least $\max( L + 2 \theta_{0.25}, 2 \theta_{0.05})$. The larger the extent of the grid, the more these effects are minimized, but larger extents at a fixed grid resolution also result in coarser evaluations of the functions in question.\\
\vskip-1em
\noindent The accuracy of this method is also a function of the resolution of the grid. In Section~\ref{subsubsec:acc_method} we provide a numerical study to suggest the relationship between grid spacing ($\Delta$), extent, and accuracy.
\subsection{Surface Prediction} \label{subsec:surf_pred}
\noindent The proposed FAIR algorithm also provides intermediate quantities to make surface prediction (see Eqn.~(\ref{eq:target00})) efficient. In particular, the fine grid $G$ (containing $L$ points), used for the computation of the change of support integrals, can be reused for creating a predicted surface. To simplify the exposition we will assume throughout this section that $\bld \mu$, $\mu_{Y}(\cdot)$, $\tau^{2}$ and any additional parameters in $c_{Y}$ are known. Now, motivated by Eqn.~(\ref{eq:target00}) one can define the vector
\[ \bld \eta = ( \bld K + \tau^2 \bld I ) ^{-1}(\bld Z - \bld \mu). \]
Now focusing on the second term of Eqn.~(\ref{eq:target1}) and an arbitrary $i$-th member of the grid, denoted by $\bld u_{i}$, we have:
\begin{eqnarray*}
\bld k(\bld u_{i})^T\bld \eta &=& \sum_{\ell=1}^n \left( \frac{1}{|B_{\ell}|}\int_{B_\ell} c_Y(\bld u_{i},\bld u) d\bld u \right) \bld \eta_{\ell}
= \int \sum_{\ell=1}^n \left( \frac{1}{|B_{\ell}|} I _{B_\ell}(\bld u) \bld \eta_\ell \right) c_Y(\bld u_{i},\bld u) d\bld u\\
&=& \int \phi(\bld u) c( \bld u_{i} - \bld u) d\bld u = (\phi \ast c)[ \bld u_{i}],
\end{eqnarray*}
where $\phi( \bld u) = \sum_{\ell=1}^n \frac{ \bld \eta_\ell}{|B_{\ell}|} I _{B_\ell} (\bld u) $.
Note that the last expression is a convolution of $\phi$ with $c$ evaluated at $\bld u_i$; using the same discrete approximation described above we are led to
$$ \widehat{\bld Y}^G = \mu_{Y}(G) + DFT^{-1} [ DFT[\phi] DFT[ c] ], $$
where $DFT[\phi]$ and $DFT[c]$ have support on the mirror grid in the frequency domain, and $\mu_{Y}(G)=\{\mu_{Y}(\bld u_i) \}_{i=1}^{L}$. Note that the $i^{th}$ component of $\widehat{\bld Y}^G$ is the predicted value at $\bld u_i$. Thus, evaluation of the predicted surface on the entire grid $G$ is $\widehat{\bld Y}^G$, obtained in a single step using the inverse DFT. Additionally, note that $DFT[\phi] = \sum_{\ell=1}^n \frac{ \bld \eta_\ell}{|B_{\ell}|} DFT[I _{B_\ell}] \approx \sum_{\ell=1}^n \frac{ \bld \eta_\ell}{\widehat{|B_{\ell}|}} DFT[\mathcal{I}_{B_\ell }] $, and, from the evaluation of $\bld \eta$, the quantities $\widehat{|B_\ell|}$ and $DFT[\mathcal{I}_{B_\ell }]$ for each $B_\ell$, as well as $DFT[c]$, have already been computed. \\
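Reusing the transformed indicators and covariance from the covariance computation, the gridded prediction amounts to a single inverse FFT; the following sketch (our own helper, consistent in notation with the sketch after the algorithm above) takes the indicator weights, the transformed covariance, and the vector $\bld \eta$ as inputs:
\begin{verbatim}
import numpy as np

def fair_predict_surface(ind_list, eta, w, c_hat, mu_grid):
    # eta = (K + tau^2 I)^{-1} (Z - mu); w[i] = ind_list[i].sum() as before;
    # c_hat is the FFT of the covariance on the lag grid; mu_grid is mu_Y on G
    phi_hat = sum((eta[l] / w[l]) * np.fft.fft2(I) for l, I in enumerate(ind_list))
    return mu_grid + np.fft.ifft2(phi_hat * c_hat).real
\end{verbatim}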
\section{Numerical Studies}\label{sec:numstudy}
In this section, we consider a zero-mean spatial process $Y(\bld s)$ with covariance function $c_{Y}(\cdot)$. In general, integrals of the form in Eqn.~(\ref{eq:target}) are not subject to analytical solution over irregular regions $B_i,B_j$ and common covariance functions $c_Y$, but there are exceptions. In Section~\ref{subsubsec:acc_method}, we consider a special case of conveniently selected regions and the Gaussian covariance function $c_Y(\mathbf{h})=\exp{[-\|\mathbf{h}\|^2/2]}$ that allows a ``ground truth'' to be obtained to double precision floating point accuracy. We then use this ground truth to determine the accuracy of FAIR relative to changes in grid resolution and distance between regions. In Section~\ref{subsec:mat_method}, we consider a Mat\'ern covariance function (with marginal variance $\sigma^{2} = 1$, range $\theta = 0.5$, and smoothness $\nu = 1.5$) to show that for a commonly used covariance model over a modest number of irregular regions of interest, the estimates generated by FAIR are of equal quality to those generated by a simple Riemann sum approach (for a given grid resolution), but as the grid resolution increases, the computational expense of FAIR scales more favorably than the expense of the Riemann sum approach. Finally, in Section~\ref{subsec:census}, we perform an example estimation and prediction process on census block data using both FAIR and Riemann approaches to provide a comparison in the context of an application.
\subsection{Gaussian Covariance Accuracy Study}\label{subsubsec:acc_method}
In this section, we consider the underlying process $Y(\bld s)$ being averaged over two rectangular regions with sides parallel to the coordinate axes. In particular, the regions are $A=[ax_{1},ax_{2}]\times [ay_{1},ay_{2}]$ and $B=[bx_{1},bx_{2}]\times [by_{1},by_{2}]$, respectively. Then straightforward calculation yields,
\begin{multline}
\int_A \int_B c_Y(\mathbf{u},\mathbf{v}) d\mathbf{v} d\mathbf{u}= \\ 2\pi \left ( - \int_{bx_2-ax_1}^{bx_2-ax_2} \Phi(w_1) dw_1 + \int_{bx_1-ax_1}^{bx_1-ax_2} \Phi(w_2) dw_2 \right ) \left ( - \int_{by_2-ay_1}^{by_2-ay_2} \Phi(w_3) dw_3 + \int_{by_1-ay_1}^{by_1-ay_2} \Phi(w_4) dw_4 \right ), \label{eq:intermed_accuracy}
\end{multline}
where $\Phi(.)$ denotes the cumulative distribution function of a Normal(0,1) random variable. Note that the integrals in Eqn.~(\ref{eq:intermed_accuracy}) are of the same general type and can be computed using integration by parts.\\
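Since $\int \Phi(w)\,dw = w\,\Phi(w) + \phi(w)$, with $\phi$ the standard normal density, the ground truth can be evaluated in closed form. The following SciPy sketch (our own helper names, for axis-aligned rectangles) implements Eqn.~(\ref{eq:intermed_accuracy}):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def int_Phi(a, b):
    # closed-form \int_a^b Phi(w) dw, using \int Phi(w) dw = w Phi(w) + phi(w)
    F = lambda w: w * norm.cdf(w) + norm.pdf(w)
    return F(b) - F(a)

def gaussian_block_integral(ax, ay, bx, by):
    # \int_A \int_B exp(-|u - v|^2 / 2) dv du for rectangles
    # A = [ax[0], ax[1]] x [ay[0], ay[1]] and B = [bx[0], bx[1]] x [by[0], by[1]]
    fx = -int_Phi(bx[1] - ax[0], bx[1] - ax[1]) + int_Phi(bx[0] - ax[0], bx[0] - ax[1])
    fy = -int_Phi(by[1] - ay[0], by[1] - ay[1]) + int_Phi(by[0] - ay[0], by[0] - ay[1])
    return 2 * np.pi * fx * fy
\end{verbatim}
Dividing by $|A|\,|B|$ gives the covariance between the two block averages, and normalizing by the corresponding variances gives their correlation.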
\vskip-1em
\noindent For our accuracy studies, we consider pairs of unit squares $A = [0,1]^{2}$ and $B_{i} = [\delta_{i},1+\delta_{i}]^{2}$, where the offset $\delta_{i}$ is a separation distance between the regions.
For each $\delta_{i}$, the ground truth covariance matrix for the two averaged regional values was computed. The ground truth correlation between regions varies with their separation distance; some representative values are shown in Table \ref{tab:acc_study_true_corr}. \\
\begin{table}[!h]
\begin{center}
\begin{tabular}{|c|ccccc|}
\hline
$\delta_{i}$ & 0.3 & 0.9 & 1.5 & 2.1 & 2.7 \\ \hline
$Cor(z_{A},z_{B_i})$ & 0.926 & 0.501 & 0.147 & 0.023 & 0.002 \\ \hline
\end{tabular}
\caption{Gaussian accuracy studies: ground truth correlations by separation distance.}
\label{tab:acc_study_true_corr}
\end{center}
\end{table}
\vskip-1em
\noindent For each offset, we employ FAIR to approximate the covariance matrix using varying grids (in each case, using the default image radius and region padding selection method as described in Section~\ref{subsec:FAIRalgo}). The grid resolution was varied over integer powers of two: the coarsest grid consisted of $2^3 \times 2^3 = 8 \times 8 = 64$ total grid points, while the finest grid had $2^{11} \times 2^{11} = 2048 \times 2048 \approx$ 4 million points. The approximated covariance matrices are used to generate correlations, which are then compared against the ground truth values. The absolute errors observed during these studies are shown in Figure~\ref{fig:acc_results_default}.\\
\begin{figure}
\caption{Accuracy for Gaussian covariance function with default grid extent.}
\label{fig:acc_results_default}
\end{figure}
\vskip-1em
\noindent Overall, our method does an excellent job in terms of accuracy and, as one would expect, accuracy improves with increased grid resolution until hitting a plateau. Note that shape pairs with smaller offsets reach this plateau at lower resolutions than pairs with larger offsets. This plateau represents the transition from accuracy being constrained by grid resolution to being constrained by grid extent. When the study is repeated with larger grid extents (in this instance, increasing the default extent constraints by a single unit, i.e., extent set to $\max( L + 2 \theta_{0.25}+1, 2 \theta_{0.05}+1)$), the plateau appears at lower error values and higher grid resolutions, as shown in Figure~\ref{fig:acc_results_extended}, but there is also a decrease in accuracy at lower resolutions for most of the offset distances. This loss of accuracy is due to coarser grid spacing (spreading the same number of grid points over a larger extent), which leads to coarser evaluations of the functions involved.
\begin{figure}
\caption{Accuracy for Gaussian covariance function with expanded grid extent.}
\label{fig:acc_results_extended}
\end{figure}
\subsection{Mat\'ern Covariance Timing and Consistency Study} \label{subsec:mat_method}
\subsubsection{Study Design} \label{subsubsec:tim_method}
In this section, we demonstrate the degree to which the computational cost of our method scales more favorably than direct quadrature, and in a more realistic context. We consider regions that have a random polygon shape and the Mat\'ern family of covariance functions. We fix a square spatial domain with dimensions $[-11,11]^2$. We generate one hundred random polygons with approximate centers drawn from a uniform distribution on $[-10,10]^2$. The number of sides, sizes and irregularity of the sides are all randomly chosen with self-intersecting polygons being omitted. The realized polygons are shown in Figure~\ref{fig:tim_study_regions}.\\
\begin{figure}
\caption{Randomly-generated irregular polygonal regions of interest employed in Mat\'ern covariance timing study.}
\label{fig:tim_study_regions}
\end{figure}
\vskip-1em
\noindent For an underlying process with Mat\'ern covariance (with range 0.5, smoothness 1.5, and marginal variance 1.0), direct quadrature and FAIR were used to estimate the $100\times100$ covariance matrix associated with these regions using grids with different resolutions covering the domain $[-12.5, 12.5]^{2}$. We consider grid resolutions $2^{m},m=7,8,9,10$. For each resolution considered, measures of the consistency between the estimates were collected, as well as timing results for each case.\\
\vskip-1em
\noindent Note that in the direct quadrature case, each entry of the target matrix, computed as in Eqn.~(\ref{eqRieman}), effectively requires the evaluation of the covariance function over a matrix of distances that is $m \times n$, where $m,n$ are the numbers of grid points at which the two regional indicator functions $ I_{B_k}( \bld s_k^G)$ and $I_{B_\ell}( \bld s_{\ell}^G)$ are nonzero. For very large grids, these matrices may exceed what can be stored in memory, and incur additional cost from piecemeal computation. However, for the resolutions considered in this study, the largest of these matrices was approximately $8000\times 8000$ and thus was not affected by storage issues.\\
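To make the cost comparison concrete, a minimal sketch of one entry of the direct (Riemann-sum) approximation is given below; it is not the exact implementation behind Eqn.~(\ref{eqRieman}), and the argument names are placeholders, but it exhibits the $m \times n$ distance matrix that drives the cost.
\begin{verbatim}
import numpy as np

def direct_entry(pts_k, pts_l, cov):
    # pts_k: (m, 2) grid points inside B_k; pts_l: (n, 2) grid points inside B_l
    # cov:   maps an array of distances to covariance values
    diff = pts_k[:, None, :] - pts_l[None, :, :]   # (m, n, 2)
    dists = np.sqrt((diff ** 2).sum(axis=-1))      # the (m, n) distance matrix
    # the average covariance over all point pairs approximates
    # Cov( mean of Y over B_k, mean of Y over B_l )
    return cov(dists).mean()
\end{verbatim}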
\vskip-1em
\noindent Additionally, it is worth noting that because the regions of interest are polygons with a small number of sides, one may obtain more accurate integrations by a triangularization of the regions and numerical quadrature. The goal of this study is not to suggest that either method examined here is the very best possible for this particular problem; rather, it is to compare the two against each other in a setting more complex than simple regular rectangles. Neither of the algorithms tested here (namely, FAIR and direct quadrature) requires triangularization of the regions; both require only that the regions of interest be reasonably representable by indicator functions on a regular grid.\\
\vskip-1em
\noindent We perform the entire timing study using an i7-7700 @ 3.60GHz (with 16GB RAM). Because no analytic ground truth benchmark is available in this case, absolute accuracy measures cannot be reported, so several comparative measures are provided instead.
\subsubsection{Results}
\label{subsubsec:time_results}
\par We consider the following three criteria to assess the agreement between approximated covariance matrices $\widehat{\Sigma}_{FAIR}$ and $\widehat{\Sigma}_{Direct}$, generated by the two methods, FAIR and Direct, respectively, at each resolution.
\begin{itemize}
\item[(a)] \emph{The root mean squared entry-wise difference (RMSED):}~ \\ $RMSED(\widehat{\Sigma}_{FAIR}, \widehat{\Sigma}_{Direct}) = \frac{1}{n}\sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n} (\widehat{\Sigma}_{FAIR;~i,j}-\widehat{\Sigma}_{Direct;~i,j})^2}$.
\item[(b)] \emph{The maximum absolute entry-wise difference (MAED):}~ \\ $MAED(\widehat{\Sigma}_{FAIR}, \widehat{\Sigma}_{Direct}) = \max_{i,j \in \{1\ldots n\}} \left| \widehat{\Sigma}_{FAIR;~i,j}-\widehat{\Sigma}_{Direct;~i,j} \right|$.
\item[(c)] \emph{The Kullback-Leibler (KL) divergence:}~ \\ $D_{KL}(\widehat{\Sigma}_{FAIR}, \widehat{\Sigma}_{Direct})=\frac{1}{2}\left(\mbox{Tr}[\widehat{\Sigma}_{FAIR}^{-1}\widehat{\Sigma}_{Direct}]-n+\log{\left(\frac{\det{\widehat{\Sigma}_{FAIR}}}{\det{\widehat{\Sigma}_{Direct}}}\right)}\right).$
\end{itemize}
\noindent In all of the above, $n$ is the number of regions ($n=100$ in this study). The $RMSED$ and $MAED$ measures are chosen to provide a sense of how the two estimates differ on an entry-by-entry basis, while $D_{KL}$ provides a sense of how they may differ in the context of a likelihood evaluation. For all three measures, smaller values indicate more similar matrices.\\
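For completeness, a small Python helper (ours, with placeholder names) evaluating the three criteria for a pair of estimated covariance matrices is sketched below:
\begin{verbatim}
import numpy as np

def compare(S_fair, S_direct):
    n = S_fair.shape[0]
    diff = S_fair - S_direct
    rmsed = np.sqrt((diff ** 2).sum()) / n      # criterion (a)
    maed = np.abs(diff).max()                   # criterion (b)
    # criterion (c): KL divergence between zero-mean Gaussians
    # with these two covariance matrices
    _, logdet_f = np.linalg.slogdet(S_fair)
    _, logdet_d = np.linalg.slogdet(S_direct)
    kl = 0.5 * (np.trace(np.linalg.solve(S_fair, S_direct)) - n
                + logdet_f - logdet_d)
    return rmsed, maed, kl
\end{verbatim}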
\vskip-1em
\noindent Timing results and difference metrics at each resolution considered are shown in Table~\ref{tab:tim_study_tim_sim}. The results show that the approximations generated by the two methods become more similar as resolutions increase. The results also demonstrate the efficient scaling of the FAIR method; note that as the grid resolution doubles, execution times for the FAIR method increase by a factor of approximately 2.5; contrast that with a factor of approximately 16 for the direct method.\\
\begin{table}[!h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Resolution & $\Delta x$ & $RMSED(\widehat{\Sigma}_{F}, \widehat{\Sigma}_{D})$ & $MAED(\widehat{\Sigma}_{F}, \widehat{\Sigma}_{D})$ & $D_{KL}(\widehat{\Sigma}_{F}, \widehat{\Sigma}_{D})$ & Time: FAIR & Time: Direct \\ \hline
$2^7 \times 2^7$ & 0.194 & 8.168e-03 & 8.511e-02 & 1.701e+01 & 11.4s & \textbf{8.2s} \\ \hline
$2^8 \times 2^8$ & 0.097 & 4.023e-03 & 4.284e-02 & 3.078e+00 & \textbf{27s} & 128s \\ \hline
$2^9 \times 2^9$ & 0.048 & 2.026e-03 & 2.169e-02 & 7.107e-01 & \textbf{66s} & 2099s \\ \hline
$2^{10} \times 2^{10}$ & 0.024 & 1.039e-03 & 1.124e-02 & 1.792e-01 & \textbf{191s} & 33755s \\ \hline
\end{tabular}
\caption{Observed timing and approximation similarity measures in the Mat\'ern covariance timing study, comparing $\widehat{\Sigma}_{FAIR}$ to $\widehat{\Sigma}_{Direct}$ at each grid resolution (denoted $\widehat{\Sigma}_{F}$ and $\widehat{\Sigma}_{D}$ respectively).}
\label{tab:tim_study_tim_sim}
\end{center}
\end{table}
\vskip-1em
\noindent The estimates from each method at each resolution are also compared to the direct method at the highest resolution. These results are shown in Tables ~\ref{tab:tim_study_hr_RMSED}, \ref{tab:tim_study_hr_MAED}, and \ref{tab:tim_study_hr_DKL}.\\
\begin{table}[!h]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Grid Resolution & $RMSED(\widehat{\Sigma}_{F}, \widehat{\Sigma}_{HR})$ & $RMSED(\widehat{\Sigma}_{D}, \widehat{\Sigma}_{HR})$ \\ \hline
$2^7 \times 2^7$ & 8.170e-03 & 1.158e-03 \\ \hline
$2^8 \times 2^8$ & 4.032e-03 & 3.368e-04 \\ \hline
$2^9 \times 2^9$ & 2.027e-03 & 1.331e-04 \\ \hline
$2^{10} \times 2^{10}$ & 1.039e-03 & - \\ \hline
\end{tabular}
\caption{Observed $RMSED$ values in the Mat\'ern covariance timing study, comparing $\widehat{\Sigma}_{FAIR}$ and $\widehat{\Sigma}_{Direct}$ at each grid resolution to $\widehat{\Sigma}_{Direct}$ at the highest grid resolution (denoted $\widehat{\Sigma}_{HR}$).}
\label{tab:tim_study_hr_RMSED}
\end{center}
\end{table}
\begin{table}[!h]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Grid Resolution & $MAED(\widehat{\Sigma}_{F}, \widehat{\Sigma}_{HR})$ & $MAED(\widehat{\Sigma}_{D}, \widehat{\Sigma}_{HR})$ \\ \hline
$2^7 \times 2^7$ & 8.494e-02 & 1.978e-02 \\ \hline
$2^8 \times 2^8$ & 4.321e-02 & 6.720e-03 \\ \hline
$2^9 \times 2^9$ & 2.160e-02 & 2.487e-03 \\ \hline
$2^{10} \times 2^{10}$ & 1.124e-02 & - \\ \hline
\end{tabular}
\caption{Observed $MAED$ values in the Mat\'ern covariance timing study.}
\label{tab:tim_study_hr_MAED}
\end{center}
\end{table}
\begin{table}[!h]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Grid Resolution & $D_{KL}(\widehat{\Sigma}_{F}, \widehat{\Sigma}_{HR})$ & $D_{KL}(\widehat{\Sigma}_{D}, \widehat{\Sigma}_{HR})$ \\ \hline
$2^7 \times 2^7$ & 1.675e+01 & 1.488e-01 \\ \hline
$2^8 \times 2^8$ & 3.091e+00 & 2.119e-02 \\ \hline
$2^9 \times 2^9$ & 7.145e-01 & 1.766e-03 \\ \hline
$2^{10} \times 2^{10}$ & 1.792e-01 & - \\ \hline
\end{tabular}
\caption{Observed $D_{KL}$ values in the Mat\'ern covariance timing study.}
\label{tab:tim_study_hr_DKL}
\end{center}
\end{table}
\vskip-1em
\noindent In this experiment, more than half of the total execution time for the FAIR algorithm was taken up by the formation of the weighted indicator functions. Our implementation of these functions is presently at the proof-of-concept stage, and we expect that significant timing improvements are still possible in this portion of the algorithm. In some contexts (e.g. numerical maximum likelihood estimation), this cost (as well as the calculation of the Fourier transforms of these functions) may be considered a one-time expense, as varying only the covariance kernel (using a fixed set of regions and a fixed grid) does not require recomputation of these functions, and in Section \ref{subsec:census}, these costs are handled in just such a fashion. Additionally, if very high resolution grids are used, then the accuracy benefits of using an improved indicator function on such grids may be marginal, and the more typical ``in or out'' functions may be used at some savings.
\subsection{Dataset Example} \label{subsec:census}
\noindent In this section, we demonstrate the performance of the algorithm with a spatial dataset, specifically a spatial estimation and prediction problem using real-world data and region geometry.\\
\vskip-1em
\noindent The region geometry selected for this study consists of the 84 US Census blocks entirely contained in the region that extends from 104.85 to 104.75 West Longitude, and from 38.8 to 38.9 North Latitude (a region covering some of Colorado Springs, CO), as indicated in Figure~\ref{fig:co_blocks}. The data chosen are the block-level counts of traffic intersections (an indicator of economic development). The intersection data were made available by Patricia Romero Lankao of the National Renewable Energy Laboratory and aligned with the 2016 census block group shape files, obtained from \url{https://www2.census.gov/geo/tiger/} as GENZ2016 2016 Cartographic Boundary Files (cb\_2016\_08\_bg\_500k)\footnote{This data set is compiled by The Center for Neighborhood Technology (\url{cnt.org}) and distributed in support of the Housing and Transportation (H+T) Affordability Index. State-by-state files in \texttt{csv} format are available for download from \url{https://htaindex.cnt.org/}}.\\
\begin{figure}
\caption{Census block geometry for Colorado state, with dataset example subset indicated.}
\label{fig:co_blocks}
\end{figure}
\vskip-1em
\noindent We shift and rescale the region geometry to occupy a unit square at $[0,1]^2$, transform the count data to densities by dividing by block area, and then normalize the density data using the observed sample mean and standard deviation. The transformed data are shown in Figure~\ref{fig:cs_norm_data}.\\
\begin{figure}
\caption{Rescaled, normalized block geometry and intersection density data employed in dataset example.}
\label{fig:cs_norm_data}
\end{figure}
\vskip-1em
\noindent We model the normalized data as block averages of an underlying Gaussian process with a Mat\'ern covariance. We assume a smoothness of 1.5, and estimate the marginal variance, range, and (post-aggregation) nugget variance using maximum likelihood methods. To facilitate the timing comparison, we concentrate the marginal variance and nugget variance into a ratio term, and perform a simple grid search over (range parameter, variance ratio) candidate pairs: 9 distinct range candidates were considered between 0.0625 and 0.50, with 45 distinct variance ratio candidates between 0.005 and 2.0. FAIR was implemented with a $2^{10} \times 2^{10}$ grid extending approximately to $[-2,3]^2$. Riemann quadrature points were subsampled from the FAIR grid, ranging from 163 points in the smallest region up to 2593 points in the largest region, with a mean of 414 points per region.\\
\vskip-1em
\noindent
Each range parameter candidate investigated requires the recalculation of the regional covariance matrix. For each candidate, these calculations were repeated using both the FAIR algorithm and the standard Riemann one. Because the region geometry does not vary across these calculations, some quantities can be precomputed (the Fourier transforms of the fractional indicator functions in the case of FAIR, and the coordinates of the quadrature points in each region in the case of Riemann). For larger range parameter candidates, some matrices returned by FAIR were not quite positive definite, and were thus coerced to the nearest positive definite matrix using \texttt{Matrix::nearPD} in \texttt{R}. Additionally, the geometry of these regions (while still polygonal) is more complex than in our previous studies, with a mean side count of 12. These factors contribute to differing cost ratios in this study, relative to those observed in the study described in Section~\ref{subsec:mat_method}. Timing results for the major phases of each algorithm for this estimation task are provided in Tables \ref{tab:FAIR_pred} and \ref{tab:Riemann_pred}. \\
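The structure of this estimation loop is sketched below in Python; the covariance-matrix builder \texttt{fair\_cov} stands in for the FAIR construction described earlier and is an assumption of the sketch, as is the simplification of fixing the marginal variance at one rather than profiling it out. The point of the sketch is only that the indicator transforms are computed once and reused across all candidate parameters:
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def grid_search(z, ranges, ratios, fair_cov, ind_fft):
    # ind_fft: Fourier transforms of the fractional indicators, computed once
    best = (None, -np.inf)
    for r in ranges:
        S = fair_cov(r, ind_fft)              # new covariance kernel only
        for ratio in ratios:                  # nugget-to-marginal variance ratio
            cov = S + ratio * np.eye(len(z))
            ll = multivariate_normal.logpdf(z, mean=np.zeros(len(z)), cov=cov)
            if ll > best[1]:
                best = ((r, ratio), ll)
    return best
\end{verbatim}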
\begin{table}[!h]
\begin{center}
\begin{tabular}{|l|l|}
\hline
\multicolumn{2}{|c|}{\textbf{Performance - FAIR Algorithm}} \\ \hline
Setup Grid & 0.14s \\ \hline
Precompute Indicator Functions and Fourier Transforms & 33.39s \\ \hline
Maximum Likelihood Estimation & 1022.44s \\ \hline
\textbf{Total Execution Time} & 1055.97s \\ \hline
\end{tabular}
\caption{Timing results observed for FAIR algorithm during estimation task for dataset example.}
\label{tab:FAIR_pred}
\end{center}
\end{table}
\begin{table}[!h]
\begin{center}
\begin{tabular}{|l|l|}
\hline
\multicolumn{2}{|c|}{\textbf{Performance - Riemann Algorithm}} \\ \hline
Setup Quadrature Points & 3.73s \\ \hline
Maximum Likelihood Estimation & 2516.25s \\ \hline
\textbf{Total Execution Time} & 2519.98s \\ \hline
\end{tabular}
\caption{Timing results observed for Riemann algorithm during estimation task for dataset example.}
\label{tab:Riemann_pred}
\end{center}
\end{table}
\vskip-1em
\noindent With covariance parameter estimates found, we use the values computed by the FAIR algorithm during the estimation process to predict the underlying surface on the default grid. As noted in Section~\ref{subsec:surf_pred}, this involves only a weighted sum of the Fourier-transformed indicator functions to form $DFT[\phi]$, an entry-wise product of the $DFT[\phi]$ and $DFT[c]$ terms, and an inverse FFT, and it incurs only 2.02s of additional compute time. A subset of the predicted surface is shown in Figure~\ref{fig:cs_pred_data} (only a subset is provided, as away from the observation regions the prediction reverts to the mean). \\
\begin{figure}
\caption{Surface prediction generated using FAIR algorithm products in dataset example.}
\label{fig:cs_pred_data}
\end{figure}
\vskip-1em
\noindent
We note that this model does not provide high predictive power in this case, with both algorithms finding parameter MLEs at the boundary of the search grid. We reiterate that the goal of this study is not to produce a high-quality model for this particular dataset, but rather to employ the competing algorithms in a typical spatial estimation task for a realistic comparison of relative performance, and to demonstrate the low cost of surface prediction once an estimation task has been performed with FAIR.
\section*{Discussion}\label{sec:discussion}
\par In this work, we propose a DFT-based approach to evaluate integrals that are intrinsic to spatial prediction and inference when observations are integrals over irregular regions. We show that for a stationary covariance function and correlation ranges that are a modest fraction of the spatial domain, the FAIR algorithm can be an order of magnitude or more faster than a direct quadrature approach. Moreover, the FAIR algorithm provides, as an intermediate quantity, the matrix multiplication needed for spatial prediction on a grid. In either case, the efficiency is due to replacing the double sum over grid points in the direct quadrature with a single sum and an FFT. In addition, the FAIR algorithm can reduce the number of evaluations of the covariance function and therefore avoid many extra transcendental and Bessel function evaluations. The application of change of support models to larger spatial data sets has been limited by the computational burden of using direct quadrature. We believe the speedup afforded by the FAIR algorithm now makes it feasible to tackle larger problems. \\
\vskip-1em
\noindent One disadvantage of this algorithm arises when the correlation range is large. Often a low order polynomial in the spatial coordinates, termed a spatial drift, is a useful component that provides an interpretable base model and also reduces large scale correlation. Adding a low order polynomial to the spatial model can often reduce the correlation range in the stochastic component and so make the FAIR approximation accurate. This strategy raises the practical modeling question of whether large scale structure in a spatial process is best represented as a low dimensional basis expansion or as a covariance function with long range dependence. Typically either model will be effective, and the low dimensional basis has the advantage of supporting the FAIR algorithm and often simplifying the dependence structure. \\
\vskip-1em
\noindent When the number of regions is moderate it is efficient to compute the DFTs of the regions, $\widehat{\bld B} ^i = DFT(\mathcal{I}_{B_i }) $, once and store the results. For large grids and many regions this may not be possible due to limitations in memory.
A two-dimensional DFT at a resolution of $2^{10}\times 2^{10}$ has a memory footprint of 16MB (double complex precision). A standard computational node (for example, a node on the NCAR supercomputer Cheyenne) will have no difficulty storing several hundred regions, but will be unable to manage this task across, for example, $10,000$ regions. The alternative approach is to compute the transforms in smaller batches and recompute as needed. Of course there is also the shortcut of considering single precision and a real-valued DFT to further reduce storage. We believe this will still be more efficient than a direct quadrature approach, noting that for a large number of regions the direct approach will involve many additional evaluations of the covariance function in the double sum. Finally, for regions that are translations of a single shape, such as the regular footprints from remote sensing platforms, one may be able to obtain $\widehat{\bld B} ^i$ directly using the fact that the translation operator is just a multiplication by complex exponentials in the transform space. \\
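For reference, the 16MB figure quoted above is just the cost of one double-precision complex value per grid point,
\[ 2^{10}\times 2^{10} \ \hbox{points} \times 16 \ \hbox{bytes} = 2^{24}\ \hbox{bytes} = 16\ \hbox{MB}, \]
so several hundred stored transforms occupy a few gigabytes, while $10,000$ regions at this resolution would require roughly 160 GB. \\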
\vskip-1em
\noindent Large spatial domains tend to exhibit heterogeneity due to changing processes and conditions, and so it is natural to consider nonstationary covariances to approximate variation in the process. In this case we propose to extend the FAIR strategy to a fixed rank approach that uses many compactly supported basis functions and a sparse precision matrix for the basis coefficients (e.g., the LatticeKrig model proposed by \cite{nychka:2015}). Explicitly, if
\[ Y(\bld s) = \sum_j \psi_j(\bld s) c_j, \]
where $\{ \psi_j \}$ is a basis and $\bld c$ is multivariate Gaussian,
then a mean zero observational model is
\begin{equation}
\label{LKrig}
\bld Z_i = \int_{B_i} Y(\bld s) ds + \bld \epsilon_i = \sum_j X_{i,j} c_j +\bld \epsilon_i
\end{equation}
with
\[ X_{i,j}= \int_{B_i} \psi_j(\bld s) ds. \]
\noindent If the basis functions are built from translations of a single template then computing $X_{i,j}$ can be rephrased as a convolution of the indicator of region $i$ with the template translated to location $j$, and computed efficiently by modifying the FAIR algorithm. We note that the LatticeKrig \texttt{R} package supports nonstationary covariances and already implements the model given in Eqn.~(\ref{LKrig}) provided $\bld c$ follows a Gaussian Markov random field, $\bld X$ is sparse, and $\bld X$ is computed externally to the package code. However, implementing nonstationary covariance models is very much an active area of research and the extension to change of support observations is challenging. \\
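To make the connection with the algorithm explicit, here is a sketch of how the modification might proceed, under the assumption that every $\psi_j$ is a shift of one template: if $\psi_j(\bld s) = \psi(\bld s - \bld u_j)$ with $\bld u_j$ a grid node, then
\[ X_{i,j} = \int_{B_i} \psi(\bld s - \bld u_j)\, d\bld s = \big( I_{B_i} * \tilde\psi \big)(\bld u_j), \qquad \tilde\psi(\bld s) := \psi(-\bld s), \]
so the entire row $\{X_{i,j}\}_j$ can be evaluated with one DFT of the indicator, one DFT of the reflected template, an entry-wise product, and an inverse DFT, in the same way the covariance convolutions are handled above. \\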
\vskip-1em
\noindent In summary, providing an efficient algorithm for handling large change of support problems
will support spatial analysis of many new remotely-sensed and demographic data sets. We also hope this will open up more foundational research as to the value and potential limitations of considering aggregated spatial observations.
\end{document}
\begin{document}
\font\titlebf=cmbx10 scaled \magstep2
\centerline{\titlebf Towers of function fields with extremal properties}
\centerline{{\bf Vinay Deolalikar} \footnote[1]{ This work forms part of the author's doctoral research and was supervised by the late Prof. Dennis Estes.}}
\centerline{\sc Dedicated to the late Prof. Dennis Estes}
\section{Introduction}
For $F/K$ an algebraic function field in one variable over a finite field of constants $K$ (\mbox{\em i.e.}, $F$ is a finite algebraic extension of $K(x)$ where $x \in F$ is transcendental over $K$), let $N(F)$ and $g(F)$ denote the number of places of degree one and the genus, respectively, of $F$.
Let ${\cal F} = (F_1,F_2,F_3,\ldots)$ be a tower of function fields, each defined over $K$. Further, we will assume that $F_1 \subseteq F_2 \subseteq F_3 \ldots$, where $F_{i+1}/F_i$ is a finite separable extension and $g(F_i) > 1$ for some $i \geq 1$. This follows the conventions of \cite{GarSti2}.
In this paper, the techniques developed in \cite{DeoEst1} and \cite{DeoEst2} are applied to splitting rational places in towers of function fields. While the basic ideas are the same, it has to be kept in mind that what is optimal at one stage of the tower may lead to complications at later stages.
Let ${\cal F}$ be as above. It is known that the sequence $(N(F_i)/g(F_i))$ converges as $i \rightarrow \infty$ \cite{GarSti2}. Let $\lambda({\cal F}) := \lim_{i \rightarrow \infty} N(F_i)/g(F_i)$.
There are known bounds on the behaviour of function fields over a finite field \mbox{${\fam\msbfam \tenmsb F}_q\,$}. Let $N_q(g) := \max\{N(F) \,|\, F$ a function field over ${\fam\msbfam \tenmsb F}_q$ of genus $g(F) = g\}$. Also, let
\begin{equation}
A(q) := \limsup_{g \rightarrow \infty} N_q(g)/g,
\end{equation}
then the Drinfeld-Vladut bound \cite{DriVla1} says that
\begin{equation}
A(q) \leq \sqrt{q} - 1.
\end{equation}
Ihara \cite{Iha1}, and Tsfasman, Vladut and Zink \cite{TsfVlaZin1} showed that this bound can be met in the case where $q$ is a square. It is not known what the value of $A(q)$ is for non-square $q$, though there are results by Serre \cite{Ser1,Ser2,Ser3} and Schoof \cite{Sch1} in this direction.
Clearly, for a tower of function fields ${\cal F} = (F_1,F_2,\ldots)$, $F_i/\mbox{${\fam\msbfam \tenmsb F}_q\,$}$, we have that
\begin{equation}
0 \leq \lambda({\cal F}) \leq A(q).
\end{equation}
Garcia and Stichtenoth \cite{GarSti1, GarSti2} gave two explicitly constructed towers of function fields over a field of square cardinality that meet the Drinfeld-Vladut bound. In \cite{GarSti3}, they gave more explicit descriptions of towers of function fields over \mbox{${\fam\msbfam \tenmsb F}_q\,$}, with $\lambda({\cal F}) > 0$. These also meet the Drinfeld-Vladut bound in some cases where the underlying field of constants is of square cardinality.
Elkies, in \cite{Elk1}, gave eight explicit iterated equations for towers of modular curves that also attain the Drinfeld-Vladut bound over certain fields, and showed that the examples presented in \cite{GarSti1} and \cite{GarSti3} were also modular. He then conjectured that all asymptotically optimal towers would, similarly, be modular.
In \cite{DeoEst1}, the author used the notion of symmetry of functions to describe explicitly constructed extensions of function fields in which all rational places except one split completely. In \cite{DeoEst2}, it was shown that on generalizing the notion of symmetry to include the so-called ``quasi-symmetric'' functions, one could actually split all the rational places in an extension of function fields. Furthermore, in both these cases, infinite families of extensions with such properties were obtained.
In this paper, techniques developed in \cite{DeoEst1} and \cite{DeoEst2} are applied to the problem of splitting rational places in a tower of function fields. Towards that end, infinite families of towers in which all the rational places split completely throughout the tower are described. Infinite families of towers in which all rational places, except one, split completely throughout the tower are also described. It is observed that in spite of such splitting behaviour at the rational places, all these towers have $\lambda({\cal F}) = 0$. In that sense, the main accent here is not so much on obtaining a high value for $\lambda({\cal F})$, as it is to show the existence of certain explicitly constructed families of towers in which all rational places split completely throughout the tower. In addition, it is hoped that these examples will lead to a better general understanding of what makes $\lambda({\cal F}) > 0$. Two examples of towers with $\lambda({\cal F}) > 0$ presented in \cite{GarSti3} are also generalized, resulting in infinite families of such towers. Subfamilies of these attain the Drinfeld-Vladut bound.
\section{Notation}
For symmetric polynomials:
\begin{tabbing}
\mbox{${\fam\frakfam \teneufm S}_n\,$} \hspace{1.6cm} \= the symmetric group on $n$ characters \\
${\bf s}_{n,i}(X)$ \> the \mbox{$i^{th}\;$} elementary symmetric polynomial on $n$ variables \\
$q$ \> a power of a prime $p$ \\
${\fam\msbfam \tenmsb F}_l$ \> the finite field of cardinality $l$ \\
$s_{n,i}(t)$ \> the \mbox{$i^{th}\;$} $(n,q)$-elementary symmetric polynomial
\end{tabbing}
For function fields and their symmetric subfields:
\begin{tabbing}
$K$ \hspace{1.6cm} \= the finite field of cardinality $q^n$, where $n>1$ \\
$F/K$ \> an algebraic function field in one variable whose full field of constants is $K$ \\
$F_s$ \> the subfield of $F$ comprising $(n,q)$-symmetric functions \\
$F_s^\phi$ \> the subfield of $F_s$ comprising functions whose coefficients are from \mbox{${\fam\msbfam \tenmsb F}_q\,$} \\
$F_{qs}$ \> the subfield of $F$ comprising $(n,q)$-quasi-symmetric functions \\
$F^{\phi}_{qs}$ \> the subfield of $F_{qs}$ comprising functions whose coefficients are from \mbox{${\fam\msbfam \tenmsb F}_q\,$} \\
$E$ \> a finite separable extension of $F$, $E=F(y)$ where $\varphi(y) = 0$
for some irreducible \\
\> polynomial $\varphi(T) \in F[T]$
\end{tabbing}
For a generic function field $F$:
\begin{tabbing}
${\fam\msbfam \tenmsb P}(F)$ \hspace{1.3cm} \= the set of places of $F$ \\
$N(F)$ \> the number of places of degree one in $F$ \\
$g(F)$ \> the genus of $F$ \\
$P$ \> a generic place in $F$ \\
$v_P$ \> the normalized discrete valuation associated with the place $P$ \\
\mbox{${\cal O}_P\,$} \> the valuation ring of the place $P$ \\
$P'$ \> a generic place lying above $P$ in a finite separable extension of $F$\\
$e(P'|P)$ \> the ramification index for $P'$ over $P$ \\
$d(P'|P)$ \> the different exponent for $P'$ over $P$
\end{tabbing}
For the rational function field $K(x)$:
\begin{tabbing}
$P_\alpha$ \hspace{1.6cm} \= the place in $K(x)$ that is the unique zero of $x-\alpha, \; \alpha \in K$ \\
$P_\infty$ \> the place in $K(x)$ that is the unique pole of $x$
\end{tabbing}
For towers of function fields:
\begin{tabbing}
${\cal F}$ \hspace{1.8cm} \= a tower of function fields $F_1 \subseteq F_2 \subseteq F_3 \ldots$ \\
$\lambda(\cal F)$ \> $\displaystyle\lim_{i \rightarrow \infty}(N(F_i)/g(F_i))$
\end{tabbing}
\section{Preliminaries}
In this section we state some preliminary results. For detailed proofs of these, please refer to \cite{DeoEst1} and \cite{DeoEst2}.
\begin{proposition} \label{proposition:Art-Sch}
Let $F/K$ be an algebraic function field, where $K=\mbox{${\fam\msbfam \tenmsb F}_{q^n}\,$}$ is algebraically closed in $F$. Let $w \in F$ and assume that there exists a place $P \in \mbox{${\fam\msbfam \tenmsb P}(F)\,$}$ such that
$$ v_P(w) = -m, \, m > 0 \mbox{ and } \gcd(m,q) = 1. $$
Then the polynomial $l(T)-w = a_{n-1}T^{q^{n-1}} + a_{n-2}T^{q^{n-2}}+ \ldots + a_0T - w \in F[T] $ is absolutely irreducible. Further, let $l(T)$ split into linear factors over $K$. Let $E=F(y)$ with
$$ a_{n-1}y^{q^{n-1}} + a_{n-2}y^{q^{n-2}}+ \ldots + a_0y = w. $$
Then the following hold:
\begin{enumerate}
\item $E/F$ is a Galois extension, with degree $[E:F] = q^{n-1}$. $\mbox{\rm Gal}(E/F) = \{\sigma_\beta:y \rightarrow y + \beta\}_{l(\beta)=0}$.
\item $K$ is algebraically closed in $E$.
\item The place $P$ is totally ramified in $E$. Let the unique place of $E$ that lies above $P$ be $P'$. Then the different exponent $d(P'|P)$ in the extension $E/F$ is given by
$$ d(P'|P) = (q^{n-1}-1)(m+1).$$
\item Let $R \in \mbox{${\fam\msbfam \tenmsb P}(F)\,$}$, and $v_R(w) \geq 0$. Then $R$ is unramified in $E$.
\item If $a_{n-1}=\ldots=a_0=1$ and $Q\in \mbox{${\fam\msbfam \tenmsb P}(F)\,$}$ is a zero of $w-\gamma$ with $\gamma \in \mbox{${\fam\msbfam \tenmsb F}_q\,$}$, then $Q$ splits completely in $E$.
\end{enumerate}
\end{proposition}
{\bf Proof}. For (i)--(iv), please refer to \cite{Sul1}. For (v), notice that under the hypotheses the left-hand side is the trace map from $K$ to \mbox{${\fam\msbfam \tenmsb F}_q\,$}, which is \mbox{${\fam\msbfam \tenmsb F}_q\,$}-linear and surjective with kernel of size $q^{n-1}$, so the equation $ T^{q^{n-1}} +T^{q^{n-2}}+ \ldots + T = \gamma $ has $q^{n-1}$ distinct roots in $K$.
$\Box$
For many of the extensions that we will describe, there exists no place where the hypothesis of Proposition~\ref{proposition:Art-Sch} is satisfied, namely, that the valuation of $w$ at the place is negative and coprime to the characteristic. In particular, we need a criterion for determining the irreducibility of the equations that we will need to use. We provide such a criterion in Proposition~\ref{proposition:irreducibility} and Corollary~\ref{corollary:irreducibility}.
\begin{proposition} \label{proposition:irreducibility}
Let $V$ be a finite subgroup of the additive group of \mbox{${\overline {\fam\msbfam \tenmsb F}}_p\,$}. Then $V$ is a \mbox{${\fam\msbfam \tenmsb F}_p\,$}-vector space. Define $L_V(T) = \prod_{v \in V}(T-v)$. Thus, $L_V(T)$ is a separable \mbox{${\fam\msbfam \tenmsb F}_p\,$}-linear polynomial whose degree is the cardinality of $V$. Now let $h(T,x) = L_V(T) - f(x)$, where $f(x) \in \mbox{${\overline {\fam\msbfam \tenmsb F}}_p\,$}[x]$. Then, $h(T,x)=L_V(T) - f(x)$ is reducible over $\mbox{${\overline {\fam\msbfam \tenmsb F}}_p\,$}[T,x]$ iff there exists a polynomial $g(x) \in \mbox{${\overline {\fam\msbfam \tenmsb F}}_p\,$}[x]$ and a proper additive subgroup $W$ of $V$ such that $f(x) = L_{W'}(g(x))$, where $W' = L_W(V)$.
\end{proposition}
For a proof of this proposition, please refer to \cite{DeoThe} or \cite{DeoEst1}.
\begin{definition}
For $f(x) \in \mbox{${\overline {\fam\msbfam \tenmsb F}}_p\,$}[x]$, a coprime term of $f$ is a term with non-zero coefficient in $f$ whose degree is coprime to $p$. The coprime degree of $f$ is the degree of the coprime term of $f$ having the largest degree.
\end{definition}
\begin{corollary} \label{corollary:irreducibility}
Let $f(x) \in \mbox{${\overline {\fam\msbfam \tenmsb F}}_p\,$}[x]$. Let there be a coprime term in $f(x)$ of degree $d$, such that there are no terms of degree $dp^i$ for $i>0$ in $f(x)$. Then $L_V(T) -f(x)$ is irreducible for any subgroup $V \subset \mbox{${\overline {\fam\msbfam \tenmsb F}}_p\,$}$.
\end{corollary}
{\bf Proof}. Suppose $f(x)$ is the image of some $g(x)$ under a linear polynomial $\sum a_nT^{p^n}$. Then the coprime term of $f$ can only occur in the image of the term $a_0T$. But then, the images of the coprime term under $a_nT^{p^n}$, for $n>0$, will have degrees that contradict the hypothesis.
$\Box$
\begin{lemma} \label{lemma:subextensions}
Let $F = K(x)$, where $K=\mbox{${\fam\msbfam \tenmsb F}_{q^n}\,$}, q = p^m, r=m(n-1),$ and $E = F(y)$, where $y$ satisfies the following equation:
$$ y^{q^{n-1}} + y^{q^{n-2}} + \ldots + y = f(x), $$
and $f(x) \in F$ is not the image of any element in $F$ under a linear polynomial.
Then the following hold:
\begin{enumerate}
\item $E/F$ is a Galois extension of degree $[E:F]=q^{n-1}$. $\mbox{\rm Gal}(E/F) = \{\sigma_\beta: y \rightarrow y + \beta \}_{s_{n,1}(\beta)=0}$ can be identified with the set of elements in \mbox{${\fam\msbfam \tenmsb F}_{q^n}\,$} whose trace to \mbox{${\fam\msbfam \tenmsb F}_q\,$} is zero, via $\sigma_\beta \leftrightarrow \beta$. This gives it the structure of an $r$-dimensional \mbox{${\fam\msbfam \tenmsb F}_p\,$}-vector space.
\item There exists a (non-unique) tower of subextensions
$$ F=E^0 \subset E^1 \subset \ldots \subset E^{r} = E, $$
such that for $0 \leq i \leq r-1$, $E^{i+1}/E^i$ is a Galois extension of degree $p$.
\item Let $\{b_i\}_{1 \leq i \leq r}$ be a \mbox{${\fam\msbfam \tenmsb F}_p\,$}-basis for $\mbox{\rm Gal}(E/F)$. Then we can build one tower of subextensions as in {\rm (ii)} as follows. We set $E^j$ to be the fixed field of the subgroup of the Galois group that corresponds to the \mbox{${\fam\msbfam \tenmsb F}_p\,$}-subspace generated by $\{b_1,b_2,\ldots, b_{r-j}\}$. Then, the generators of $E^{j}$ are $\{y_1,y_2,\ldots,y_j\}$, where $y_1,y_2,\ldots,y_{r}=y$ satisfy the following relations:
\begin{eqnarray*}
y^p - B_{r}^{p-1} y &=& y_{r-1}, \\
y_{r-1}^p - B_{r-1}^{p-1} y_{r-1} &=& y_{r-2}, \\
\vdots & & \vdots \\
y_1^p - B_{1}^{p-1}y_1 &=& f(x),
\end{eqnarray*}
where,
$$
\begin{array}{rcll}
\beta_{r,j} &=& b_{r-j+1}, & B_{r} = \beta_{r,r}, \\
\beta_{r-1,j} &=& \beta_{r,j}^p - B_{r}^{p-1}\beta_{r,j}, & B_{r-1} = \beta_{r-1,r-1},\\
\vdots & & \vdots &\vdots \\
\beta_{1,j} &=& \beta_{2,j}^p - B_2^{p-1}\beta_{2,j}, & B_1 = \beta_{1,1}.
\end{array}
$$
\end{enumerate}
\end{lemma}
For a proof of this lemma, please refer to \cite{DeoThe} or \cite{DeoEst1}.
Next, we introduce the notions of symmetric and quasi-symmetric functions. For a systematic development of these, please refer to \cite{DeoEst1} and \cite{DeoEst2}.
Let $R$ be an integral domain and $\overline{R}$ its field of fractions. Consider the polynomial ring in $n$ variables over $R$, given by $R\,[X]= R\,[x_1,x_2,\ldots,x_n]$. The symmetric group \mbox{${\fam\frakfam \teneufm S}_n\,$} acts in a natural way on this ring by permuting the variables.
\begin{definition}
A polynomial ${\bf f}(X) \in R\,[X]$ is said to be symmetric if it is fixed under the action of \mbox{${\fam\frakfam \teneufm S}_n\,$}. If \mbox{${\fam\frakfam \teneufm S}_n\,$} is allowed to act on $\overline{R}(X)$ in the natural way, its fixed points will be called symmetric rational functions, or simply, symmetric functions. These form a subfield $\overline{R}(X)_s$ of $\overline{R}(X)$. Furthermore, $\overline{R}(X)_s$ is generated by the $n$ elementary symmetric functions given by
\begin{eqnarray*}
{\bf s}_{n,1}(X) &=& \sum_{i=1}^{n} x_i, \\
{\bf s}_{n,2}(X) &=& \sum_{i<j \atop 1 \leq i,j \leq n} x_ix_j, \\
\vdots & & \vdots \\
{\bf s}_{n,n}(X) &=& x_1x_2\ldots x_n.
\end{eqnarray*}
\end{definition}
\begin{definition}
For the extension ${\fam\msbfam \tenmsb F}_{q^n}/{\fam\msbfam \tenmsb F}_q$, we will evaluate the elementary symmetric polynomials (resp. symmetric functions) in ${\fam\msbfam \tenmsb F}_{q^n}(X)$ at $(X)=(t,\phi(t),\ldots,\phi^{n-1}(t))=(t,t^q,\ldots,t^{q^{n-1}})$. These will be called the $(n,q)$-elementary symmetric polynomials (resp. $(n,q)$-symmetric functions). For ${\bf f}(X) \in \mbox{${\fam\msbfam \tenmsb F}_{q^n}\,$}\!(X)$, we will denote ${\bf f}(t,t^q,\ldots,t^{q^{n-1}})$ by $f(t)$, or, when the context is clear, by $f$.
\end{definition}
Thus the $(n,q)$-elementary symmetric polynomials are the following:
\begin{eqnarray*}
s_{n,1}(t) &=& \sum_{0 \leq i \leq n-1} t^{q^i}, \\
s_{n,2}(t) &=& \sum_{i<j \atop 0 \leq i,j \leq n-1}t^{q^i}t^{q^j}, \\
\vdots & & \vdots \\
s_{n,n}(t) &=& t^{1+q+q^2+\ldots+q^{n-1}}.
\end{eqnarray*}
See \cite{DeoEst1} for a demonstration of the use of $(n,q)$-symmetric functions in splitting places of degree one in extensions of algebraic function fields.
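For orientation, we record the standard fact that the first and last of these are the relative trace and norm:
$$ s_{n,1}(t) = {\rm Tr}_{{\fam\msbfam \tenmsb F}_{q^n}/{\fam\msbfam \tenmsb F}_q}(t), \qquad s_{n,n}(t) = t^{(q^n-1)/(q-1)} = {\rm N}_{{\fam\msbfam \tenmsb F}_{q^n}/{\fam\msbfam \tenmsb F}_q}(t), $$
and, like every $(n,q)$-symmetric function with coefficients in \mbox{${\fam\msbfam \tenmsb F}_q\,$}, both map \mbox{${\fam\msbfam \tenmsb F}_{q^n}\,$} into \mbox{${\fam\msbfam \tenmsb F}_q\,$}.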
We now extend the notion of symmetry to get a larger class of functions that can be very effectively used to split all places of degree one in extensions of function fields. These functions are called ``$(n,q)$-quasi-symmetric.''
\begin{definition} A polynomial ${\bf f}(X)$ in $R[X]$ will be called quasi-symmetric if it is fixed by the cycle $\varepsilon = (1\; 2\;\ldots\; n) \in {\fam\frakfam \teneufm S}_n$. If $\varepsilon$ is allowed to act on $\overline{R}(X)$ in the natural way, its fixed points will be called quasi-symmetric rational functions, or simply, quasi-symmetric functions. These form a subfield $\overline{R}(X)_{qs}$ of $\overline{R}(X)$.
\end{definition}
\begin{lemma} For $n>2$, there always exist quasi-symmetric functions that are not symmetric.
\end{lemma}
{\bf Proof}. $\langle{\varepsilon}\rangle$ has index $(n-1)!$ in $\mbox{${\fam\frakfam \teneufm S}_n\,$}$. Thus for $n >2$, the set of functions fixed by \mbox{${\fam\frakfam \teneufm S}_n\,$} is strictly smaller than the set fixed by $\langle\varepsilon\rangle$. For $n=2$, $\mbox{${\fam\frakfam \teneufm S}_n\,$} = \langle\varepsilon\rangle$, so that the notions of symmetric and quasi-symmetric coincide.
$\Box$
\begin{example} \rm \label{example:quasi-symmetric-n=3}
\rm $(n=3)$ A family of quasi-symmetric functions in three variables is given below:
$${\bf f}(x_1,x_2,x_3) = x_1{x_2}^i + x_2{x_3}^i + x_3{x_1}^i. $$
Note that for $i \neq 0 \mbox{ or }1$, these are not symmetric.
\end{example}
\begin{definition}
Consider the extension ${\fam\msbfam \tenmsb F}_{q^n}/{\fam\msbfam \tenmsb F}_q$ of finite fields. We will evaluate the quasi-symmetric polynomials (resp. quasi-symmetric functions) in ${\fam\msbfam \tenmsb F}_{q^n}(X)$ at $(X)=(t,\phi(t),\ldots,\phi^{n-1}(t))=(t,t^q,\ldots,t^{q^{n-1}})$. These will be called $(n,q)$-quasi-symmetric polynomials (resp. $(n,q)$-quasi-symmetric functions).
\end{definition}
\begin{example}\rm
Using the three-variable quasi-symmetric functions of Example~$\ref{example:quasi-symmetric-n=3}$, we can obtain the following $(3,q)$-quasi-symmetric functions:
$$f(t) = {\bf f}(t,t^q,t^{q^2}) = t^{1+iq} + t^{q+iq^2} + t^{q^2 + i}.$$
Again, these are not $(3,q)$-symmetric for $i \neq 0 \mbox{ or }1$.
\end{example}
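As a quick check of the property that is used repeatedly below (namely, that such functions take values in \mbox{${\fam\msbfam \tenmsb F}_q\,$} at \mbox{${\fam\msbfam \tenmsb F}_{q^n}\,$}-rational points), note that for $t \in {\fam\msbfam \tenmsb F}_{q^3}$ one has $t^{q^3}=t$, so
$$ f(t)^q = t^{q+iq^2} + t^{q^2+iq^3} + t^{q^3+iq} = t^{q+iq^2} + t^{q^2+i} + t^{1+iq} = f(t), $$
and hence $f(t) \in \mbox{${\fam\msbfam \tenmsb F}_q\,$}$.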
\begin{lemma} \label{lemma:qs-no-zeros}
There exist $(n,q)$-quasi-symmetric functions that have no zeros in ${\fam\msbfam \tenmsb
F}_{q^n}$.
\end{lemma}
A simple method to obtain such functions is to compose irreducible polynomials over \mbox{${\fam\msbfam \tenmsb F}_q\,$} with $(n,q)$-quasi-symmetric functions.
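A worked instance of this construction, for odd $q$, is the one that reappears in the examples below: compose the irreducible polynomial $x^2 - \alpha$ over \mbox{${\fam\msbfam \tenmsb F}_q\,$}, with $\alpha \in \mbox{${\fam\msbfam \tenmsb F}_q\,$}$ not a square, with the trace. The resulting function
$$ h(t) = \left(t^{q^{n-1}}+t^{q^{n-2}}+\cdots+t\right)^2 - \alpha $$
has no zeros in \mbox{${\fam\msbfam \tenmsb F}_{q^n}\,$}: for $t \in \mbox{${\fam\msbfam \tenmsb F}_{q^n}\,$}$ the inner sum lies in \mbox{${\fam\msbfam \tenmsb F}_q\,$}, and $\alpha$ is not the square of any element of \mbox{${\fam\msbfam \tenmsb F}_q\,$}.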
\section{Towers where almost all rational places split completely}
In this section we construct families of towers of function fields with very good splitting behaviour. In some of the families, all rational places split completely throughout the tower, and in others, all rational places, except one, split completely throughout the tower.
\subsection{Towers of Artin-Schreier extensions}
First we begin with a tower of function fields in which all rational places split completely throughout the tower. We will denote the subfield of $F_i$ comprising $(n,q)$-quasi-symmetric functions in $x_j$ by $F_{j,qs}$ and the subfield comprising the $(n,q)$-quasi-symmetric functions of $x_j$ with \mbox{${\fam\msbfam \tenmsb F}_q\,$} coefficients by $F_{j,qs}^\phi$. In particular, in $F_i$, $F_{i,qs}^\phi$ will denote the subfield of $(n,q)$-quasi-symmetric functions of $x_i$ with \mbox{${\fam\msbfam \tenmsb F}_q\,$} coefficients.
\begin{theorem} \label{theorem:all-rational-split-1}
Consider the tower of function fields ${\cal F} = (F_1,F_2,\ldots)$ where $F_1 = {\fam\msbfam \tenmsb F}_{q^n}(x_1)$ and for $i \geq 1$, $F_{i+1} = F_{i}(x_{i+1})$, where $x_{i+1}$ satisfies the equation
\begin{equation}
x_{i+1}^{q^{n-1}} + x_{i+1}^{q^{n-2}} + \ldots + x_{i+1} = \frac{g(x_i)}{h(x_i)},
\end{equation}
where $g(x_i), h(x_i) \in F_{i,qs}^\phi$, $ \frac{g(x_i)}{h(x_i)}$ is not the image of a rational function under a linear polynomial, and $h(x)$ has no zeros in \mbox{${\fam\msbfam \tenmsb F}_{q^n}\,$}. Also, $deg(g(x_i)) \leq deg(h(x_i))$. Then the following hold:
\begin{enumerate}
\item All the rational places of $F_1$ split completely in all steps of the tower.
\item For every place $P$ in $F_i$ that is ramified in $F_{i+1}$, the place $P'$ in $F_{i+1}$ that lies above $P$ is unramified in $F_{i+2}$. Thus, ramification at a place cannot ``continue'' up the tower.
\end{enumerate}
\end{theorem}
{\bf Proof}. $P_\infty$ splits completely because of the condition on the degrees of $g$ and $h$. Also, the RHS is in the valuation ring at every rational place, since $h$ has no zeros in \mbox{${\fam\msbfam \tenmsb F}_{q^n}\,$} and $deg(g) \leq deg(h)$. Also, its class in the residue class field is in \mbox{${\fam\msbfam \tenmsb F}_q\,$} at each of these places, since the RHS is in $F_{i,qs}^\phi$. Then Proposition~\ref{proposition:Art-Sch} tells us that every rational place in $F_i$ splits completely in $F_{i+1}$. For (ii), note that if $P \in \mbox{${\fam\msbfam \tenmsb P}(F_i)\,$}$ is ramified in $F_{i+1}$, and $P'$ is a place lying above it in $F_{i+1}$, then the RHS of the equation for $x_{i+2}$ has a zero at $P'$, because of the condition on the degrees of $h$ and $g$. Thus $P'$ will be unramified in $F_{i+2}$.
$\Box$
\begin{example} \rm
Consider the tower of function fields ${\cal F} = (F_1,F_2,\ldots)$ where $F_1 = {\fam\msbfam \tenmsb F}_{q^3}(x_1)$, $q$ is not a power of $2$, and for $i \geq 1$, $F_{i+1} = F_{i}(x_{i+1})$, where $x_{i+1}$ satisfies the equation
\begin{equation}
x_{i+1}^{q^2} + x_{i+1}^q + x_{i+1} = \frac{x_i^{2q^2 + 2q + 2}}{(x_i^{q^2} + x_i^q + x_i)^2 - \alpha_i},
\end{equation}
where $\alpha_i \in \mbox{${\fam\msbfam \tenmsb F}_q\,$}$ is not a square. All rational places split completely throughout the tower. Let the place $P$ of $F_1$ be a simple pole of the RHS in $F_1$ (\mbox{\em i.e.}, for the case $i=1$). Then the place $P^{(i)}$ of $F_i$, where $i \geq 2$, which divides $P$, is a pole of $x_i$ of order $2^{i-2} \bmod q$. Also notice that there will always exist such places if we look at the equation over \mbox{${\overline {\fam\msbfam \tenmsb F}}_p\,$}. Thus the equation is absolutely irreducible at each stage.
\end{example}
\begin{theorem} \label{theorem:abelian-tower}
Consider the tower of function fields ${\cal F} = (F_1,F_2,\ldots)$ where $F_1 = {\fam\msbfam \tenmsb F}_{q^n}(x_1)$, $p\neq 2$, and for $i \geq 1$, $F_{i+1} = F_{i}(x_{i+1})$, where $x_{i+1}$ satisfies the equation
\begin{equation}
x_{i+1}^{q^{n-1}} + x_{i+1}^{q^{n-2}} + \ldots + x_{i+1} = \frac{1}{(x_{i}^{q^{n-1}} + x_{i}^{q^{n-2}} + \ldots + x_{i})^2 - \alpha},
\end{equation}
where $\alpha \in \mbox{${\fam\msbfam \tenmsb F}_q\,$}$ is not a square. Then the following hold:
\begin{enumerate}
\item $F_i/F_1$ is an Abelian extension for $i \geq 2$.
\item All rational places split completely throughout this tower.
\item When a (non-rational) place $P \in \mbox{${\fam\msbfam \tenmsb P}(F_i)\,$}$ is ramified in $F_{i+1}$, from then on it behaves like a rational place for splitting, and therefore splits completely throughout the rest of the tower.
\end{enumerate}
\end{theorem}
{\bf Proof}. First we note that the equations defining the tower at each stage are indeed irreducible. For this, note that if $P$ is a place in $F_i$ that is a zero of $ (x_{i}^{q^{n-1}} + x_{i}^{q^{n-2}} + \ldots + x_{i})^2 - \alpha$ in $F_i$, the zero has multiplicity at most two. This can be seen as follows. Let $\sqrt{\alpha}$ be one of the square roots of $\alpha$. Then,
\begin{eqnarray*}
x_{i}^{q^{n-1}} + x_{i}^{q^{n-2}} + \ldots + x_{i} - \sqrt{\alpha} & = & \frac{1}{(x_{i-1}^{q^{n-1}} + x_{i-1}^{q^{n-2}} + \ldots + x_{i-1})^2 - \alpha} - \sqrt{\alpha}, \\
& = & \frac{1 - \sqrt{\alpha}\left((x_{i-1}^{q^{n-1}} + x_{i-1}^{q^{n-2}} + \ldots + x_{i-1})^2 - \alpha\right)}{(x_{i-1}^{q^{n-1}} + x_{i-1}^{q^{n-2}} + \ldots + x_{i-1})^2 - \alpha}.
\end{eqnarray*}
Now note that the second derivative of the numerator of the RHS with respect to $x_{i-1}$ is constant. The denominator is a unit at this place. Thus the zeros of the RHS can occur to at most multiplicity two. Since a similar argument holds at each stage, the valuation of the RHS at $P$ must be a power of two, which is coprime to the characteristic. Irreducibility then follows from Proposition~\ref{proposition:Art-Sch}.
For (i), notice that the automorphisms of $F_{i+1}/F_i$ in the tower leave $x_{i+2}$ fixed, for $i \geq 1$. Further, $F_{i+1}/F_i$ is Abelian. For (ii), note that the class of the RHS in the residue field at any rational place is in \mbox{${\fam\msbfam \tenmsb F}_q\,$} at any stage of the tower, and thus the defining equation splits into linear factors over the residue class field.
$\Box$
\begin{theorem} \label{theorem:wildly-ramified-all-split}
There exist wildly ramified extensions of the rational function field over non-prime fields of cardinality $> 4$ of degree equal to any power of the characteristic in which all the rational places split completely.
\end{theorem}
{\bf Proof}. For finite separable extensions, which are not necessarily Galois, refer to Theorem~\ref{theorem:all-rational-split-1}. Each extension $F_{i+1}/F_i$ has subextensions of degree equal to any arbitrary power of $p$. By an appropriate resolution of the tower, we can get the desired result.
$\Box$
\begin{theorem}
There exist Abelian extensions over non-prime fields of odd characteristic of degree equal to any power of the characteristic in which all the rational places split completely.
\end{theorem}
For Abelian extensions, Theorem~\ref{theorem:abelian-tower} says that the Galois group of the extension $F_i/F_1$ is an elementary Abelian group of exponent $p$, for $i \geq 1$. Thus, it will have normal subgroups of all indices that are powers of $p$. The result then follows by considering the fixed fields of these subgroups.
\begin{example} \rm
Consider the tower of function fields ${\cal F} = (F_1,F_2,\ldots)$ where $F_1 = {\fam\msbfam \tenmsb F}_{q^3}(x_1)$, $q$ is not a power of $2$, and for $i \geq 1$, $F_{i+1} = F_{i}(x_{i+1})$, where $x_{i+1}$ satisfies the equation
\begin{equation}
x_{i+1}^{q^2} + x_{i+1}^q + x_{i+1} = \frac{1}{(x_i^{q^2} + x_i^q + x_i)^2 - \alpha},
\end{equation}
where $\alpha \in \mbox{${\fam\msbfam \tenmsb F}_q\,$}$ is not a square. In this example, all rational places split completely at all steps of the tower. Furthermore, when a (non-rational) place $P \in \mbox{${\fam\msbfam \tenmsb P}(F_i)\,$}$ is ramified in $F_{i+1}$, from then on it behaves like a rational place for splitting, and therefore splits completely throughout the rest of the tower.
\end{example}
\begin{theorem}
Consider the tower of function fields ${\cal F} = (F_1,F_2,\ldots)$ where $F_1 = {\fam\msbfam \tenmsb F}_{q^n}(x_1)$ and for $i \geq 1$, $F_{i+1} = F_{i}(x_{i+1})$, where $x_{i+1}$ satisfies the equation
\begin{equation}
x_{i+1}^{q^{n-1}} + x_{i+1}^{q^{n-2}} + \ldots + x_{i+1} = \frac{g(x_i)}{h(x_i)},
\end{equation}
where $g(x_i), h(x_i) \in F_{i,qs}^\phi$, $ \frac{g(x_i)}{h(x_i)}$ is not the image of a rational function under a linear polynomial, and $h(x)$ has no zeros in \mbox{${\fam\msbfam \tenmsb F}_{q^n}\,$}. Also, $deg(g(x_i)) > deg(h(x_i))$. Then all the rational places of $F_1$, except $P_\infty$, split completely in all steps of the tower.
If, in addition, we have that $deg(g(x_i)) = deg(h(x_i)) + 1$, then the pole order of $x_i$ at the unique place lying above $P_\infty$ in $F_i$ remains one for all $i \geq 1$.
\end{theorem}
\begin{example}\rm
Consider the tower of function fields ${\cal F} = (F_1,F_2,\ldots)$ where $F_1 = {\fam\msbfam \tenmsb F}_{q^3}(x_1)$, $q$ is not a power of $2$, and for $i \geq 1$, $F_{i+1} = F_{i}(x_{i+1})$, where $x_{i+1}$ satisfies the equation
\begin{equation}
x_{i+1}^{q^2} + x_{i+1}^q + x_{i+1} = \frac{x_i^{2q^2 + 1} + x_i^{2+q} +
x_i^{2q + q^2}}{(x_i^{q^2} + x_i^q + x_i)^2 - \alpha},
\end{equation}
where $\alpha \in \mbox{${\fam\msbfam \tenmsb F}_q\,$}$ is not a square. Here, except for the unique pole $P_\infty$ of $x_1$ in $F_1$, all other rational places split completely throughout the tower. Furthermore, let $P$ be any pole of $x_2$ in $F_2$, and let $P^{(n)}$ denote the unique place in $F_n$ lying above it. Then the pole order of $x_n$ at $P^{(n)}$ remains constant for $n \geq 2$.
\end{example}
\begin{example} \rm
Consider the tower of function fields ${\cal F} = (F_1,F_2,\ldots)$, where $F_1 = {\fam\msbfam \tenmsb F}_{q^n}(x_1)$, that is obtained as follows. $F_2 = F_1(x_2)$, where
$$ x_{2}^{q^{n-1}} + x_{2}^{q^{n-2}} + \ldots + x_{2} = \frac{1}{(x_{1}^{q^{n-1}} + x_{1}^{q^{n-2}} + \ldots + x_{1})^m - \alpha},
$$
where $\alpha$ is not an $m^{th}$ power in \mbox{${\fam\msbfam \tenmsb F}_q\,$}. And for $i \geq 2$, $F_{i+1} = F_i(x_{i+1})$, where $x_{i+1}$ satisfies the equation
$$ x_{i+1}^{q^{n-1}} + x_{i+1}^{q^{n-2}} + \ldots + x_{i+1} = \frac{h(x_i)}{g(x_i)},$$
where $h(x_i),g(x_i) \in F_{i,qs}^\phi$, and $deg(h(x_i))=deg(g(x_i))+1$. Note that we are guaranteed the existence of such polynomials $h$ and $g$ by the following construction. Take any two functions $f_1$ and $f_2$ in $F_{i,qs}^\phi$ with coprime degrees $d_1$ and $d_2$ respectively (in particular, trace and norm will do). Then there exist
integers $m,n$ such that $md_1 + nd_2 = 1$. Without loss of generality, let $m$ be positive and $n$ negative. Then let $h(x_i) = i_1(f_1(x_i))$ and $g(x_i) = i_2(f_2(x_i))$, where $i_1$ and $i_2$ are irreducible polynomials over \mbox{${\fam\msbfam \tenmsb F}_q\,$} of degrees $m$ and $-n$ respectively. Let $P$ be any place in $F_1$ such that $v_P({(x_{1}^{q^{n-1}} + x_{1}^{q^{n-2}} + \ldots + x_{1})^m - \alpha}) = 1$. Then $P^{(i)}$, which is the unique place in $F_i$ dividing $P$, remains a simple pole of $x_i$ for $ i \geq 2$, ensuring irreducibility of the defining equation at each stage of the tower.
\end{example}
\subsection{Towers of Kummer extensions}
\begin{theorem} \label{theorem:tamely-ramified-all-split}
Consider the tower of function fields ${\cal F} = (F_1,F_2,\ldots)$ where $F_1 = {\fam\msbfam \tenmsb F}_{q^n}(x_1)$ and for $i \geq 1$, $F_{i+1} = F_{i}(x_{i+1})$, where $x_{i+1}$ satisfies the equation
\begin{equation}
x_{i+1}^{\frac{q^n -1}{q-1}} = \frac{g(x_i)}{h(x_i)},
\end{equation}
where $g(x_i), h(x_i) \in F_{i,qs}^\phi$, $ \frac{g(x_i)}{h(x_i)} \neq w^{\frac{q^n -1}{q-1}}, \; \forall w \in F_i$, and $g,h$ have no zeros in \mbox{${\fam\msbfam \tenmsb F}_{q^n}\,$}. Also, $deg(g(x_i)) = deg(h(x_i))$. Then all the rational places split completely throughout the tower.
\end{theorem}
{\bf Proof}. The RHS is in the valuation ring at every rational place of $F_i$ for all $i$, and its class in the residue class field is in $\mbox{${\fam\msbfam \tenmsb F}_q\,$} \setminus \{0\}$, since $g,h$ have no zeros in \mbox{${\fam\msbfam \tenmsb F}_{q^n}\,$} and the RHS is $(n,q)$-quasi-symmetric. Then every rational place in $F_i$ splits completely in $F_{i+1}$ for all $i$.
$\Box$
\begin{example}\rm
Consider the tower of function fields ${\cal F} = (F_1,F_2,\ldots)$ where $F_1 = {\fam\msbfam \tenmsb F}_{q^3}(x_1)$, $q$ is not a power of $2$, and for $i \geq 1$, $F_{i+1} = F_{i}(x_{i+1})$, where $x_{i+1}$ satisfies the equation
\begin{equation}
x_{i+1}^{q^2 + q + 1} = \frac{(x_i^{q^2} + x_i^q + x_i)^2 - \beta}{(x_i^{q^2} + x_i^q + x_i)^2 - \alpha},
\end{equation}
where $\alpha, \beta \in \mbox{${\fam\msbfam \tenmsb F}_q\,$}$ are not squares. All rational places split completely throughout the tower.
\end{example}
\begin{theorem}
Consider the tower of function fields ${\cal F} = (F_1,F_2,\ldots)$ where $F_1 = {\fam\msbfam \tenmsb F}_{q^n}(x_1)$ and for $i \geq 1$, $F_{i+1} = F_{i}(x_{i+1})$, where $x_{i+1}$ satisfies the equation
\begin{equation}
x_{i+1}^{\frac{q^n -1}{q-1}} = \frac{g(x_i)}{h(x_i)},
\end{equation}
where $g(x_i), h(x_i)$ are two $(n,q)$-quasi-symmetric polynomials, $ \frac{g(x_i)}{h(x_i)} \neq w^{\frac{q^n -1}{q-1}}, \forall w \in F_i$, and $g,h$ have no zeros in \mbox{${\fam\msbfam \tenmsb F}_{q^n}\,$}. Also, $deg(g(x_i)) \neq deg(h(x_i))$. Then all the rational places, except possibly $P_\infty$, split completely throughout the tower.
\end{theorem}
{\bf Proof}. The RHS is in the valuation ring at every rational place of $F_i$ for all $i$, except possibly at those dividing $P_\infty$ of $F_1$, and its class in the residue class field is in $\mbox{${\fam\msbfam \tenmsb F}_q\,$} \setminus \{0\}$, since $g,h$ have no zeros in \mbox{${\fam\msbfam \tenmsb F}_{q^n}\,$} and the RHS is $(n,q)$-quasi-symmetric. Then every such rational place in $F_i$ splits completely in $F_{i+1}$.
$\Box$
\begin{theorem} There exist tamely ramified extensions of arbitrarily high degree of the rational function field over any non-prime field of cardinality greater than $4$, in which all the rational places split completely.
\end{theorem}
{\bf Proof}. Consider Theorem~\ref{theorem:tamely-ramified-all-split}. We can guarantee that such $(n,q)$-quasi-symmetric functions exist, for $q>2$. Then, in the tower described in the theorem, one can go up the tower to get arbitrarily high degree extensions of the rational function field. These will not be Galois, in general.
$\Box$
For the towers ${\cal F}$ described in this paper in which all, or all except one, rational places split completely throughout the tower,
$\lambda({\cal F}) = 0$. This is because while the ramification in the rational places is nil, or minimal, that in the non-rational places rises quite fast, leading to a fast rise in the genus. Indeed, it seems from the known examples of towers ${\cal F}$ with $\lambda({\cal F}) > 0$ that it might be necessary to have a certain amount of ramification in the rational places, in order to have $\lambda({\cal F}) > 0$. Or at least it seems that it is not easy to control ramification in the non-rational places, and so it is better to restrict it to a few rational places alone\footnote{These statements are for towers whose first stage is a function field of genus zero.}.
\noindent{\bf Note:} In all the constructions given above, the property of $(n,q)$-symmetric and $(n,q)$-quasi-symmetric functions that is crucial is that they map \mbox{${\fam\msbfam \tenmsb F}_{q^n}\,$} to \mbox{${\fam\msbfam \tenmsb F}_q\,$}. Thus, these functions may be replaced by any other functions with this property in all the constructions. However, for such functions not to be $(n,q)$-quasi-symmetric, they must have degree at least $q^n$ \cite{DeoEst2}.
Also, in most of the examples that appear above, we have composed the trace/norm polynomials with the irreducible polynomial $x^2 - \alpha$, where $\alpha \in \mbox{${\fam\msbfam \tenmsb F}_q\,$}$ is not a square. However, we could get infinite families of further examples by using the composition $i(q(x))$, where $i(x) \in \mbox{${\fam\msbfam \tenmsb F}_q\,$}[x]$ has no zeros in \mbox{${\fam\msbfam \tenmsb F}_q\,$}, and $q(x)$ is an $(n,q)$-quasi-symmetric function with \mbox{${\fam\msbfam \tenmsb F}_q\,$} coefficients.
\section{Towers with $\lambda({\cal F}) > 0$}
In this section, we generalize two examples of towers with $\lambda({\cal F}) > 0$ from \cite{GarSti3} to obtain two infinite families of such towers. Subfamilies attain the Drinfeld-Vladut bound.
\begin{theorem} \label{theorem:good-family-1}
Let $q=p^n$ and $m|n, m \neq n$. Let $k_m = (p^n - 1)/(p^m -1)$. Consider a tower of function fields in the family given by ${\cal T} = (T_1, T_2, \ldots)$, where $T_1 = \mbox{${\fam\msbfam \tenmsb F}_q\,$}(x_1)$ and for $i \geq 1$, $T_{i+1} = T_i(x_{i+1})$, where $x_{i+1}$ satisfies
\begin{eqnarray*}
x_{i+1}^{k_m} + z_i^{k_m} = b_i^{k_m}, \\
z_i = a_ix_i^{r_i} + b_i,
\end{eqnarray*}
where $a_i,b_i \in {\fam\msbfam \tenmsb F}_{p^m} \setminus \{0\}$ for $i \geq 1$. Also $r_i$ is a power of $p, \;\forall i$. Then the following hold:
\begin{enumerate}
\item $P_\infty$ splits completely throughout the tower.
\item Every ramified place in the tower lies above a rational place in $T_1$.
\item $\lambda({\cal T}) \geq \frac{2}{q-2}$, and hence this family attains the Drinfeld-Vladut bound for $n=2, m=1$ and $q=4$.
\end{enumerate}
\end{theorem}
{\bf Proof}.
Firstly, we verify that under the hypothesis, we do indeed get a tower of function fields. Notice that at one of the places dividing $x_1$ in $T_2$, we get a zero of $x_2$ of order not divisible by $k_m$. This implies that the RHS, for $i=1$, is not of the form $w^{k_m}$, for $w \in T_1$. Further, one of the places dividing $x_2$ in $T_3$ exhibits the same behaviour, and so on up the tower. Thus, each equation is irreducible and gives us an extension.
(i) follows from the basic theory of Kummer extensions cf. \cite{Sti1}, Ch. III.7. It is important to note that linear transformations fix the place at infinity, so that it splits at each stage of the tower.
For (ii), working with residue classes, note that for ramification to take place at the $i^{th}$ step of the tower, the norm of $z_i$ should be an element of ${\fam\msbfam \tenmsb F}_{p^m}$. Thus $z_i$ must be in \mbox{${\fam\msbfam \tenmsb F}_q\,$}. Since $z_i$ is obtained by a linear transformation with \mbox{${\fam\msbfam \tenmsb F}_q\,$} coefficients of a characteristic power of $x_i$, it follows that $x_i$ must be in \mbox{${\fam\msbfam \tenmsb F}_q\,$}. But the relations between the variables $x_i$ and $z_{i-1}$ at the previous step of the tower then force $z_{i-1}$, and therefore $x_{i-1}$, to be in \mbox{${\fam\msbfam \tenmsb F}_q\,$}. Proceeding this way to the first step of the tower, we get that $x_1 \in \mbox{${\fam\msbfam \tenmsb F}_q\,$}$. Thus every ramified place in $T_i$ divides a rational place ($\neq P_\infty$) in $T_1$.
To get (iii), notice that
$$ N(T_j) \geq k_m^{j-1}, \mbox{ for } j \geq 1, $$
since $P_\infty$ splits completely and $[T_j:T_1] = k_m^{j-1}$.
Also, the degree of the different at the \mbox{$j^{th}\;$} stage of the tower is always less than the value it would have if all $q$ finite rational places were ramified from the second stage of the tower onwards. Now, using the transitivity of the different, we can say that
\begin{eqnarray*}
\mbox{\rm degDiff}(T_j/T_1) &<& q(k_m -1)[1 + k_m + \ldots + k_m^{j-2}], \\
& = & q (k_m^{j-1} -1).
\end{eqnarray*}
Now using the Hurwitz-genus formula, it follows that
$$ g(T_j) < \frac{(q-2)(k_m^{j-1} -1)}{2}. $$
This gives us
$$\lim_{j \rightarrow \infty} N(T_j)/g(T_j) \geq \frac{2}{q-2}. $$
$\Box$
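For the reader's convenience, we record the {H}urwitz computation behind the genus bound above (using $g(T_1)=0$, since $T_1$ is a rational function field, and $[T_j:T_1]=k_m^{j-1}$):
\begin{eqnarray*}
2g(T_j) - 2 &=& k_m^{j-1}\bigl(2g(T_1) - 2\bigr) + \mbox{\rm degDiff}(T_j/T_1) \\
&<& -2k_m^{j-1} + q (k_m^{j-1} -1),
\end{eqnarray*}
so that $2g(T_j) < (q-2)(k_m^{j-1} -1)$, as used above.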
This tower, for the case of $m=r_i=1; \, z_i = x_i+1$, first appeared in \cite{GarSti3}.
\begin{theorem} \label{theorem:good-family-2}
Let $q=p^n > 4$ and $m|n$. Let $l_m = (p^m -1)$. Consider a tower of function fields in the family given by ${\cal T} = (T_1, T_2, \ldots)$, where $T_1 = \mbox{${\fam\msbfam \tenmsb F}_q\,$}(x_1)$ and for $i \geq 1$, $T_{i+1} = T_i(x_{i+1})$, where $x_{i+1}$ satisfies
\begin{eqnarray*}
x_{i+1}^{l_m} + z_i^{l_m} = 1, \\
z_i = a_ix_i^{s_i} + b_i,
\end{eqnarray*}
where $a_i,b_i \in {\fam\msbfam \tenmsb F}_{p^m} \setminus \{0\}$ for $i \geq 1$. Also $s_i$ is a power of $p$, $\forall i$. Then the following hold:
\begin{enumerate}
\item $P_\infty$ splits completely throughout the tower.
\item Every ramified place in the tower lies above a rational place in $T_1$ of the form $P_\gamma$, with $\gamma \in {\fam\msbfam \tenmsb F}_{p^m}$.
\item $\lambda({\cal T}) \geq \frac{2}{l_m - 1}$, and hence this family attains the Drinfeld-Vladut bound for $n=2, m=1$ and $q=9$.
\end{enumerate}
\end{theorem}
{\bf Proof}. First we verify as in the proof of Theorem~\ref{theorem:good-family-1} that we do indeed get a tower of function fields. For this, note that $b_i^{l_m} = 1$. Again (i) follows from the basic theory of Kummer extensions. For (ii), we note that to have ramification at the $i^{th}$ stage of the tower, we must have that $z_i^{l_m} = 1$, implying that $z_i \in {\fam\msbfam \tenmsb F}_{p^m} \setminus \{0\}$. Then by similar reasoning as in the proof of Theorem~\ref{theorem:good-family-1}, it follows that such a ramified place would divide a rational place in $T_1$ of the form $P_\gamma$, with $\gamma \in {\fam\msbfam \tenmsb F}_{p^m}$. Using the Hurwitz genus formula and the transitivity property of the different along similar lines as in the proof of Theorem~\ref{theorem:good-family-1}, we get (iii).
$\Box$
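For completeness, the analogous estimates read as follows (a sketch, with the same counting as in the proof of Theorem~\ref{theorem:good-family-1}): the ramified places lie above the at most $p^m$ rational places $P_\gamma$, so
\begin{eqnarray*}
\mbox{\rm degDiff}(T_j/T_1) &<& p^m(l_m -1)[1 + l_m + \ldots + l_m^{j-2}] \\
& = & (l_m+1)(l_m^{j-1} -1),
\end{eqnarray*}
while $N(T_j) \geq l_m^{j-1}$ since $P_\infty$ splits completely. The Hurwitz genus formula then gives $g(T_j) < \frac{(l_m -1)(l_m^{j-1} -1)}{2}$, and hence $\lambda({\cal T}) \geq \frac{2}{l_m -1}$.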
This tower, for the case of $s_i=m=1$, also first appeared in \cite{GarSti3}.
Following the conjecture of Elkies, it is very likely that many of these towers are modular. In that case, there seems to be a definite relation between some modular towers and certain symmetric towers (the other optimal constructions from \cite{GarSti1} and \cite{GarSti2} are also symmetric, and are modular as shown in \cite{Elk1}). An interesting study would be to understand under what conditions a modular tower can be written down in terms of symmetric equations.
\end{document}
|
\begin{document}
\title{\textbf{Influence of different kind of thin sets in the theory of convergence}}
\author{Manoranjan Singha, Ujjal Kumar Hom}
\date{}
\maketitle
{\let\thefootnote\relax\footnotetext{{MSC: Primary 40A35, Secondary 54A20.\\Department of Mathematics, University of North Bengal, Raja Rammohunpur, Darjeeling-734013, West Bengal, India.\\ Email address: [email protected], rs\[email protected]}}}
\begin{abstract}
A class of subsets of the natural numbers, designated as very thin sets, is studied, and it is shown that the theory of convergence may be redeveloped if very thin sets are given the main role instead of thin or finite sets; this removes some drawbacks of statistical convergence. While developing the theory of very thin sets, the concepts of super thin, very very thin and super super thin sets arise naturally.
\end{abstract}
\noindent
\section{Introduction}
Let's begin with the well-known definition of asymptotic density \textbf{\cite{u12}} of subsets of the set of natural numbers $\omega$. For any $A\subset \omega$, $|A|$ denotes the cardinality of $A$ and $A(n)=|\{m\in \omega:m\in A\cap\{1,2,...,n\} \}|$. The numbers
\begin{center}
$\underline{d}(A) = \displaystyle{\liminf_{n\rightarrow\infty}} \frac{A(n)}{n}$ and $\overline{d}(A) = \displaystyle{\limsup_{n\rightarrow\infty}} \frac{A(n)}{n}$
\end{center}
are called the lower and upper asymptotic density of $A$, respectively. If $\underline{d}(A) = \overline{d}(A)$, then $d(A)=\overline{d}(A)$ is called the asymptotic density of $A$. As in \textbf{\cite{u13}}, $A$ is called a thin subset of $\omega$ if $d(A)=0$; otherwise $A$ is nonthin.
The concept of statistical convergence \textbf{\cite{u4}} of real sequences, a generalization of usual convergence, is based on the notion of asymptotic density, where thin subsets of $\omega$ play an important role. A sequence $(x_n)_{n\in \omega}$ of real numbers is statistically convergent to a real number $a$ if for any $\epsilon>0$ the set $\{n\in \omega : \mid x_n-a \mid\geqslant\epsilon\}$ is thin.
Consider a real sequence $(x_n)_{n\in \omega}$ where
\begin{center}
$x_n=\begin{cases}-1,& \text{if }n=2^k+j,k\in \omega \text{ and } 0\leq j\leq k-1\\ 1, & \text{otherwise } \end{cases} $
\end{center}
In this sequence, $-1$ is repeated $k$ times in a row, from the $(2^k)^{th}$ term to the $(2^k+(k-1))^{th}$ term, for every natural number $k$. As $k$ tends to infinity, the number of consecutive repetitions of $-1$ also tends to infinity.\\
Let's consider another real sequence $(y_n)_{n\in \omega}$ where
\[y_n=\begin{cases}-1,& \text{if }n=2^k,k\in \omega\\ 1, & \text{otherwise } \end{cases} \]
In this sequence, $-1$ appears only at the ${2^k}^{th}$ place for each $k\in\omega$. As $k$ tends to infinity, the gap between two consecutive appearances of $-1$ also tends to infinity. In the existing literature both $(x_n)_{n\in \omega}$ and $(y_n)_{n\in \omega}$ are statistically convergent. This article distinguishes between these kinds of sequences with regard to convergence.
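For definiteness (a quick check): the set $E_x=\{n\in \omega : x_n=-1\}$ satisfies $E_x(n)\leqslant 1+2+...+K\leqslant K^2$ whenever $2^K\leqslant n<2^{K+1}$, so $\frac{E_x(n)}{n}\leqslant \frac{K^2}{2^K}\rightarrow 0$; similarly the set $E_y=\{2^k:k\in \omega\}$ satisfies $\frac{E_y(n)}{n}\leqslant \frac{K+1}{2^K}\rightarrow 0$. Hence both sequences are statistically convergent to $1$.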
\section{Definitions of different kind of thin subsets of $\omega$}\label{sec1}
Let $A=\{n_{1}<n_{2}<n_{3}<...\}$ be an infinite subset of $\omega$ such that $b_k=(n_{k+1}-n_k)>m$ eventually for any $m\in \omega$. Then $\displaystyle{\lim_{k \to \infty}} \frac{\frac{1}{b_1}+\frac{1}{b_2}+...+\frac{1}{b_k}}{k} = 0$, since $\frac{1}{b_k}\to 0$ and hence so do its Ces\`{a}ro means. Now
\begin{center}
$(b_1+b_2+...+b_k)(\frac{1}{b_1}+\frac{1}{b_2}+...+\frac{1}{b_k})\geqslant k^{2} \Rightarrow\frac{\frac{1}{b_1}+\frac{1}{b_2}+...+\frac{1}{b_k}}{k}\geqslant\frac{k}{b_1+b_2+...+b_k}\geqslant\frac{k}{n_{k+1}}.$
\end{center}
Hence $\displaystyle{\lim_{k \to \infty}} \frac{k+1}{n_{k+1}} = 0$ and so $A$ is thin (indeed, $\frac{A(n)}{n}\leqslant \frac{k}{n_k}$ whenever $n_k\leqslant n<n_{k+1}$).
\begin{example}\label{example1}
Let $A_1=\{2^k:k\in \omega\}$ and $A_2=\{2^k+1:k\in \omega\}$. Let $\mathbb{A}=A_1\cup A_2=\{n_{1}<n_{2}<n_{3}<...\}$. Then $\mathbb{A}$ is thin but $(n_{k+1}-n_{k})=1$ frequently.
\end{example}
Suppose $A\subset \omega$ and $M\in \omega$. Define \newline
$(A)_M$=$\{1\}$ $\cup$ $\{n$: $n>1$ and there exist $n$ consecutive elements of $A$ such that the difference between any two consecutive ones among them is less than or equal to $M$$\}$.
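For instance (an illustration of this definition), if $A=\{2,3,5,9\}$ and $M=2$ then $(A)_2=\{1,2,3\}$: the three consecutive elements $2,3,5$ of $A$ have consecutive differences $1$ and $2$, while any four consecutive elements of $A$ involve the difference $9-5=4>2$.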
\begin{definition}
A subset $A$ of $\omega$ is super thin if there exists a sub-collection $\{A_n: n\in \omega\}$ of finite subsets of $\omega$ such that $\max(A\backslash A_n)_n=1$ for all $n$.
\end{definition}
\begin{proposition}
A subset $A$ of $\omega$ is super thin if and only if $A$ is finite or $\displaystyle{\lim_{k \to \infty}} (n_{k+1}-n_{k}) = \infty$ where $A=\{n_{1}<n_{2}<n_{3}<...\}$.
\end{proposition}
Every super thin subset of $\omega$ is thin, and from Example \ref{example1} we see that a thin set may not be super thin; moreover, a finite union of super thin sets may not be super thin. However, there exists a sub-collection $\{B_n: n\in \omega\}$ of finite subsets of $\omega$ such that $\max(\mathbb{A}\backslash B_n)_n\leqslant 2$ for all $n$, and using this fact from the Example we set up the next definition.
\begin{definition}
A subset $A$ of $\omega$ is very thin if there exist a sub-collection $\{A_n: n\in \omega\}$ of finite subsets of $\omega$ and $M\in \omega$ such that $\max(A\backslash A_n)_n\leqslant M$ for all $n$.
\end{definition}
\begin{proposition}\label{Prop1}
A subset $A$ of $\omega$ is very thin if and only if $A$ is finite or $A$ can be written as follows:\newline $(V1)$ $A=\displaystyle{\bigcup_{k\in\omega}}\mathcal{A}_k$ where $1\leqslant |\mathcal{A}_k|\leqslant \mathcal{M} $ for some $\mathcal{M}\in\omega$ for all $k\in\omega$ ,\newline $(V2)$ $\min(\mathcal{A}_{k+1}) - \max(\mathcal{A}_k) >0$ for all $k\in \omega$,\newline $(V3)$ $\displaystyle{\lim_{k \to \infty}}(\min(\mathcal{A}_{k+1}) - \max(\mathcal{A}_k))=\infty$.
\end{proposition}
\begin{proof}
For any finite subset $A$ of $\omega$, $\max(A)_n\leqslant |A|$ for all $n$. So finite subsets of $\omega$ are very thin.\\
Let $A$ be an infinite very thin subset of $\omega$. Then we can get a sub-collection $\{A_n: n\in \omega\}$ of non-empty finite subsets of $\omega$ and $M\in \omega$ such that $(\max(A_{n+1})-\max(A_n))>2^n$ and $\max(A\backslash A_n)_{t_n}\leqslant M$ for all $n$, where $\{t_{1}<t_{2}<t_{3}<...\}$ is an infinite subset of $\omega$. If $B_k=A\cap \{\max(A_k)+1,...,\max(A_{k+1})\}$ satisfies $|B_k|>1$, then $B_k$ can be decomposed as \begin{center}
$B_k=\displaystyle{\bigcup_{i=1}^{j_k}}B_{ki}$
\end{center}such that $1\leqslant |B_{ki}|\leqslant M$, $1\leqslant i\leqslant j_k$ and
\begin{center}
$\min(B_{k(i+1)}) - \max(B_{ki}) >t_k$, $1\leqslant i\leqslant j_k-1$.
\end{center}
Thus $A$ can be expressed in such a way that $A$ satisfies $(V1)$, $(V2)$ and $(V3)$ where $\mathcal{M}=\max\{M,\max{A_1}\}$.\\
Conversely, let $A$ be an infinite subset of $\omega$ so that $A$ satisfies $(V1)$, $(V2)$ and $(V3)$. Then there exists an infinite subset $\{n_1<n_2<n_3<...\}$ of $\omega$ such that
\begin{center}
$(\min(\mathcal{A}_{m+1}) - \max(\mathcal{A}_m))>k$ for all $m>n_k$.
\end{center}
Let $A_k=\{1,...,\min(\mathcal{A}_{n_k+1})\}$, $k\geqslant 1$. Then $\max(A\backslash A_k)_k\leqslant \mathcal{M}$ for all $k$.
\end{proof}
\begin{example}
Let $A=\displaystyle{\bigcup_{k\in \omega}A_k}$ where $A_k=\{2^k,2^k+1,...,2^k+k\}$. Then $\frac{A(n)}{n}\leqslant \frac{(k+1)(k+2)}{2^k}$ for $2^k\leqslant n<2^{k+1}$. So, $A$ is thin. If $A=\displaystyle{\bigcup_{k\in \omega}B_k}$ where $1\leqslant |B_k|\leqslant M $ for all $k\in\omega$ for some $M\in\omega$ and $\min(B_{k+1}) - \max(B_k) >0$ for all $k\in \omega$ then there exists a subset $\{n_1<n_2<n_3<...\}$ of $\omega$ such that $\displaystyle{\lim_{k \to \infty}}(\min(B_{n_{k}+1}) - \max(B_{n_k}))=1$. Therefore, $A$ is not very thin.
\end{example}
The Prime number theorem implies that the set of prime numbers is thin (see \textbf{\cite{u9}}). Now we will see whether the set of prime numbers is very thin or not. As in \textbf{\cite{u8}}, a set $\mathcal{D}=\{d_1,...,d_k\}$ consisting of non-negative integers is called an admissible set if for any prime $p$, there is an integer $b_{p}$ such that $b_p\not\equiv d \pmod{p}$ for all $d\in \mathcal{D}$.
The statement of Prime k-tuple conjecture is given as in \textbf{\cite{u8}}:
\begin{conjecture}[\textbf{Prime k-tuple conjecture}]
Let $\mathcal{D}=\{d_1,...,d_k\}$ be an admissible set. Then there are infinitely many integers $h$ such that $\{h+d_1,...,h+d_{k}\}$ is a set of primes.
\end{conjecture}
\begin{result}
Assuming the Prime $k$-tuple conjecture, the set of prime numbers is not very thin.
\end{result}
\begin{proof}
Let $P$ be the set of all primes and let $P=\displaystyle{\bigcup_{k\in\omega}}A_k$ where $1\leqslant |A_k|\leqslant M $ for all $k\in\omega$ for some $M\in\omega$ and $\min(A_{k+1}) - \max(A_k) >0$ for all $k\in \omega$. Then there exists an admissible set $\{d_1,...,d_{M+1}\}$. \\Let $G=\max\{d_{i+1}-d_i:i=1,...,M\}$. By the Prime $k$-tuple conjecture there are infinitely many $(M+1)$-tuples of primes $(p+d_1,...,p+d_{M+1})$. Therefore $(\min(A_{k+1}) - \max(A_k))\leq G$ for infinitely many $k\in \omega$. So $P$ is not very thin.
\end{proof}
Now the question is whether a finite union of very thin sets is very thin or not. We will get the answer in the next section. Before that, we give two more definitions, namely super super thin and very very thin sets, which are obtained from super thin and very thin sets, respectively, by strengthening the gap conditions.
\begin{definition}
A subset $A$ of $\omega$ is super super thin if $A$ is finite or $\displaystyle{\sum_{k=1}^{\infty}}\frac{1}{(n_{k+1}-n_k)}<\infty$ if $A=\{n_{1}<n_{2}<n_{3}<...\}$.
\end{definition}
\begin{definition}
A subset $A$ of $\omega$ is very very thin if $A$ is finite or $A$ can be written as follows:\newline (i) $A=\displaystyle{\bigcup_{k\in\omega}}\mathcal{A}_k$ where $1\leqslant|\mathcal{A}_k|\leqslant \mathcal{M} $ for some $\mathcal{M}\in\omega$ for all $k\in\omega$,\newline (ii) $\min(\mathcal{A}_{k+1}) - \max(\mathcal{A}_k) >0$ for all $k\in \omega$,\newline (iii)$\displaystyle{\sum_{k=1}^{\infty}}\frac{1}{(\min(\mathcal{A}_{k+1}) - \max(\mathcal{A}_k))}<\infty$.
\end{definition}
\begin{example}
$\displaystyle{\bigcup_{k\in \omega}\{2^k,2^k+k\}}$ is super thin and very very thin but not super super thin.
\end{example}
\begin{example}
Let $X=\{b_1,b_2,b_3,...\}$ and $Y=\{b_1,b_1+1,b_2,b_3,b_3+1,b_4,...\}$ where $b_k=1+...+k$. Let $X=\displaystyle{\bigcup_{k\in\omega}}X_k$ where $1\leqslant |X_k|\leqslant M $ for all $k\in\omega$ for some $M\in\omega$. Then $(\min(X_{k+1}) - \max(X_k))\leq (b_{l+1}-b_l)=l+1$ for some $l\leq kM$. Hence for all $k\geqslant 1$,
\begin{center}
$\frac{1}{k+1}\leq \frac{M}{\min(X_{k+1}) - \max(X_k)}$.
\end{center}
Therefore $X$ is not very very thin but super thin. Since $X\subset Y$, $Y$ is not very very thin but very thin.
\end{example}
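Indeed (spelling out the positive claims in the above example), the consecutive gaps of $X$ are $b_{k+1}-b_k=k+1$, which tend to infinity, so $X$ is super thin; and writing $Y=\displaystyle{\bigcup_{k\geqslant 1}}\mathcal{A}_k$ with $\mathcal{A}_{2j-1}=\{b_{2j-1},b_{2j-1}+1\}$ and $\mathcal{A}_{2j}=\{b_{2j}\}$ for $j\geqslant 1$, Proposition \ref{Prop1} shows that $Y$ is very thin.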
\section{Characterization of very thin sets and its relation with thin and uniformly thin sets}\label{sec2}
\begin{lemma}\label{lemma1}
Union of two super thin sets is very thin.
\end{lemma}
\begin{proof}
Suppose $S=\{s_1<s_2<s_3<...\}$ and $T=\{t_1<t_2<t_3<...\}$ are super thin subsets of $\omega$. For each $i\in \omega$ we construct a set containing $s_i$ and the smallest number $t$ of $T$ such that $s_i\leqslant t\leqslant \frac{s_i+s_{i+1}}{2}$ if such a number $t$ exists and the largest number $t$ of $T$ such that $\frac{s_{i-1}+s_{i}}{2}\leqslant t\leqslant s_i$ if such a number $t$ exists. Leave all remaining elements of $T$ as singletons. Then $S\cup T$ can be decomposed into the sets $\mathcal{A}_k$ such that $S\cup T=\displaystyle{\bigcup_{k\in\omega}}\mathcal{A}_k$ where $1\leqslant |\mathcal{A}_k|\leqslant 3 $ for all $k\in\omega$ and $(\mathcal{A}_k)_{k\in \omega}$ satisfies $(V2)$ and $(V3)$ given in Proposition \ref{Prop1}.
\end{proof}
\begin{corollary}\label{co1}
Union of two super super thin sets is very very thin.
\end{corollary}
\begin{lemma}\label{lemma2}
Union of super thin and very thin subsets of $\omega$ is very thin.
\end{lemma}
\begin{proof}
Let $S$ be a very thin and $T=\{t_1<t_2<t_3<...\}$ a super thin subset of $\omega$. Let $S=\displaystyle{\bigcup_{k\in\omega}}A_k$ where $1\leqslant |A_k|\leqslant M $ for all $k\in\omega$ for some $M\in\omega$ and $\min(A_{k+1}) - \max(A_k) >0$ for all $k\in \omega$ with $\displaystyle{\lim_{k \to \infty}}(\min(A_{k+1}) - \max(A_k))=\infty$.\newline
For every $i\in \omega$, we construct a set $B_i$ containing $A_i$ and the smallest number $t$ of $T$ such that $\max(A_i)\leqslant t\leqslant \frac{\max(A_i)+\min(A_{i+1})}{2}$ if such a number $t$ exists and the largest number $t$ of $T$ such that $\frac{\min(A_{i-1})+\max(A_{i})}{2}\leqslant t\leqslant \min(A_i)$ if such a number $t$ exists and elements $t$ of $T$ such that $\min(A_i)<t<\max(A_i)$. Now each $B_i$ can be decomposed as \begin{center}
$B_i=\displaystyle{\bigcup_{k=1}^{j(i)}}B_{ik}$
\end{center}such that
\begin{center}
$\min(B_{i(k+1)}) - \max(B_{ik}) >0$, $1\leqslant k\leqslant j(i)-1$,\\$\max(B_{ik})$ $\in T$ for $1\leqslant k< j(i)$ and $\min(B_{ik})$ $\in T$ for $1< k\leqslant j(i)$
\end{center}
and there is no $r\in \omega$ so that $t_r,t_{r+1}\in B_{ik}$ if there does not exist any $s\in A_i$ satisfying $t_r<s<t_{r+1}$. Leaving all remaining elements of $T$ as singletons, one can decompose $S\cup T$ into the sets $\mathcal{A}_k$ such that $S\cup T=\displaystyle{\bigcup_{k\in\omega}}\mathcal{A}_k$ where $1\leqslant |\mathcal{A}_k|\leqslant 2M+1 $ for all $k\in\omega$ and $(\mathcal{A}_k)_{k\in \omega}$ satisfies $(V2)$ and $(V3)$ given in Proposition \ref{Prop1}.
\end{proof}
\begin{corollary}\label{co2}
Union of super super thin and very very thin subsets of $\omega$ is very very thin.
\end{corollary}
\begin{theorem}\label{thverythin}
A subset $A$ of $\omega$ is very thin if and only if $A$ can be expressed as a finite union of super thin subsets of $\omega$.
\end{theorem}
\begin{proof}
Suppose A is a very thin subset of $\omega$ such that\newline (i) $A=\displaystyle{\bigcup_{k\in\omega}}A_k$ where $1\leqslant |A_k|\leqslant M $ for all $k\in\omega$ for some $M\in\omega$,\newline (ii) $\min(A_{k+1}) - \max(A_k) >0$ for all $k\in \omega$,\newline (iii)$\displaystyle{\lim_{k \to \infty}}(\min(A_{k+1}) - \max(A_k))=\infty$.\newline
Let $A_k=\{a_{k1}\leqslant a_{k2}\leqslant ...\leqslant a_{kM}\}, k\in \omega$. Define $B_i=\{a_{ki}:k\in \omega\}, 1\leqslant i\leqslant M$. Then $A=\displaystyle{\bigcup_{i=1}^{M}} B_i$ and
\begin{align*}
(a_{(k+1)i} - a_{ki})&\geqslant (a_{(k+1)1} - a_{kM}) = (\min(A_{k+1}) - \max(A_k)).
\end{align*}
Therefore $B_i$ is super thin, $1\leqslant i\leqslant M$.\newline The converse part follows directly from Lemma \ref{lemma1} and Lemma \ref{lemma2}.
\end{proof}
\begin{corollary}
Finite union of very thin subsets of $\omega$ is very thin.
\end{corollary}
\begin{corollary}\label{cor}
Very thin subsets of $\omega$ are thin.
\end{corollary}
\begin{corollary}
Any very thin set can be expressed as a finite intersection of thin but non very thin sets.
\end{corollary}
\begin{proof}
Suppose $S=\{t_1<t_2<t_3<...\}$ is a super thin subset of $\omega$. Let $n_1\in\omega$ be such that $t_{n_1+1}-t_{n_1}>2$. Then we get a strictly increasing sequence of natural numbers $(n_k)_{k\geqslant 1}$ such that $t_{n_k}>2t_{n_{k-1}}$ and $t_{n_k+1}-t_{n_k}>2k$ for $k\geq2$.\newline
Define $A=\displaystyle{\bigcup_{k\geqslant 1}\{t_{n_k},...,t_{n_k}+k\}}$ and $B=\displaystyle{\bigcup_{k\geqslant 1}\{t_{n_k+1}-k,...,t_{n_k+1}\}}$.\\Then $A$ and $B$ are both thin but not very thin. Let $A'=A\cup S$ and $B'=B\cup S$. Since $S$ is super thin, $A'$ and $B'$ are thin but not very thin with $A'\cap B'=S$. Hence any super thin set can be expressed as an intersection of two thin but non very thin sets. From Theorem~\ref{thverythin} it follows that any very thin set can be expressed as a finite intersection of thin but non very thin sets.
\end{proof}
\begin{theorem}
A subset $A$ of $\omega$ is very very thin if and only if $A$ can be expressed as a finite union of super super thin
subsets of $\omega$.
\end{theorem}
\begin{proof}
If $A$ is very very thin then each $B_i$ defined in Theorem \ref{thverythin} becomes super super thin.\\
Corollary \ref{co1} and Corollary \ref{co2} together imply that a finite union of super super thin sets is very very thin.
\end{proof}
\begin{corollary}
Finite union of very very thin subsets of $\omega$ is very very thin.
\end{corollary}
From Corollary \ref{cor} it follows that very thin sets are thin. Since subsets of $\omega$ with zero uniform density \textbf{(\cite{u1,u11})} are thin, it is natural to ask whether very thin sets have uniform density zero, or whether sets with zero uniform density are very thin.
Let $B\subset \omega$. For $h\geqslant 0$ and $k\geqslant 1$, let $A(h+1,h+k)=|\{n\in B:h+1\leqslant n\leqslant h+k\}|$. The existence of the following limits is proved in \textbf{\cite{u11}}:
\begin{center}
$ \underline{u}(B) = \displaystyle{\lim_{k\to \infty}\frac{1}{k}\liminf_{h\to \infty}A(h+1,h+k)}$ , $ \overline{u}(B) = \displaystyle{\lim_{k\to \infty}\frac{1}{k}\limsup_{h\to \infty}A(h+1,h+k)}$
\end{center}
$\underline{u}(B)$ and $\overline{u}(B)$ are called the lower and upper uniform density of $B$ respectively. If $\underline{u}(B)=\overline{u}(B)$ then $u(B)=\overline{u}(B)=\underline{u}(B)$ is called the uniform density of $B$. From now on, we call $B$ uniformly thin if $u(B)=0$. Since $\underline{u}(B)\leqslant \underline{d}(B)\leqslant \overline{d}(B)\leqslant \overline{u}(B)$, $B$ is thin if $B$ is uniformly thin but the converse is not true (see \textbf{\cite{u11}}).\newline
Now we will see the relation between very thin and uniformly thin subsets of $\omega$.
\begin{theorem}
Any very thin subset of $\omega$ is uniformly thin.
\end{theorem}
\begin{proof}
Suppose $A$ is a very thin subset of $\omega$ such that $A=\displaystyle{\bigcup_{k\in\omega}}A_k$ where $1\leqslant |A_k|\leqslant M $ for all $k\in\omega$ for some $M\in\omega$, $\min(A_{k+1}) - \max(A_k) >0$ for all $k\in \omega$ and $\displaystyle{\lim_{k \to \infty}}(\min(A_{k+1}) - \max(A_k))=\infty$. Let $n_k=\min(A_{k+1}) - \max(A_k)$, $k\in \omega$.\newline
Let $k_1$ = least element of $\omega$ such that $n_{k_1}\leqslant n_k$ for all $k\in \omega$ and let\newline
$k_{r+1}$ = least element of $\omega-\{k_1,...,k_r\}$ such that $n_{k_{r+1}}\leqslant n_k$ for all $k\in \omega-\{k_1,...,k_r\}$, $r>1$.\newline Then $\displaystyle{\lim_{r\to \infty}n_{k_r}}=\infty$ and so $\displaystyle{\lim_{r\to \infty}\frac{r}{n_{k_1}+...+n_{k_r}}}=0$. \newline
By induction it can be shown that for any $l\geqslant 1$ and for any $l$ distinct elements $m_1,...,m_l$ of $\omega$,
\begin{center}
$n_{k_1}+...+n_{k_l}\leqslant n_{m_1}+...+n_{m_l}$.
\end{center}
Let $s_l=n_{k_1}+...+n_{k_l}$. Then $\{n\in A:t+1\leqslant n\leqslant t+s_l+1\}$ can intersect at most $l$ consecutive $A_k$, $k\in \omega$. Therefore
\begin{center}
$\displaystyle{\max_{t\geqslant 0 }|\{n\in A:t+1\leqslant n\leqslant t+s_l+1\}|}\leqslant Ml$
\end{center}
and so $\overline{u}(A)\leqslant \displaystyle{\lim_{l\to \infty}}\frac{Ml}{s_l+1}=0$, that is, $u(A)=0$.
\end{proof}
\begin{example}\label{ex6}
Let $a_1=1$ and let $a_p=a_{p-1}+2(1^3+...+(p-1)^3)+1,p\geqslant 2$. \newline
Define $A_p=\{a_p,a_p+1^3,a_p+1^3+2^3,...,a_p+1^3+2^3+...+p^3\}$, $p\geqslant 1$. Let
$A=\displaystyle{\bigcup_{p=1}^{\infty}}A_p$. Then $A$ is not very thin.\newline
Let $b_n=1^3+2^3+...+n^3$ and $s_m=|\{r\in A:m+1\leqslant r\leqslant m+b_n+1\}|$, $n\geqslant 1$, $m\geqslant 0$.\newline
Then
\begin{center}
$s_m\leqslant (n+1)$ if $a_n\leqslant m$.
\end{center}
Now $s_m\leqslant |A_1|+...+|A_n|\leqslant \frac{(n+1)(n+2)}{2}$ if $0\leqslant m<a_n$.\newline
Hence $A$ is uniformly thin.
\end{example}
\begin{figure}\label{fig:figure333m}
\end{figure}
Example \ref{ex6} shows that a uniformly thin set may not be very thin. A diagram is given in Figure~\ref{fig:figure333m} based on the examples given in Section \ref{sec1} and Section \ref{sec2}.
\section{Fin-BW and BW properties}
First, we recall some properties and propositions related to ideal and ideal convergence given in \textbf{\cite{u1,u5}}:\newline
An ideal $\mathcal{I}$ on $\omega$ has the Fin-BW property if for any bounded sequence $(x_n)_{n\in \omega}$ of real numbers there is $A\notin \mathcal{I}$ such that $(x_n)_{n\in A}$ is convergent, and the BW property if for any bounded sequence $(x_n)_{n\in \omega}$ of real numbers there is $A\notin \mathcal{I}$ such that $(x_n)_{n\in A}$ is $\mathcal{I}$-convergent. If an ideal $\mathcal{I}$ has the Fin-BW property then it also has the BW property.
Recall that $2^\omega,2^{<\omega}$ and $2^n$ denote the set of all infinite sequences of zeros and ones, the set of all finite sequences of zeros and ones, and the set of sequences of zeros and ones of length $n$, respectively. If $s\in 2^n$, then $s\string^i$ denotes the sequence of length $n+1$ which extends $s$ by $i$ for $i\in \{0,1\}$. If $x\in 2^{\omega}$ then $x\upharpoonright n=(x(0),x(1),...,x(n-1))$ for $n\in \omega$.
\begin{proposition}[\textbf{\cite{u1}}]\label{prop1}
An ideal has the BW property (the Fin-BW property) if and only if for every family of sets $\{A_s:s\in 2^{<\omega}
\}$ satisfying the following conditions \newline
$(S1) A_\emptyset=\omega,\newline
(S2) A_s=A_{s\string^0} \cup A_{s\string^1},\newline
(S3) A_{s\string^0} \cap A_{s\string^1}=\emptyset$,\newline
there exist $x\in {2^\omega}$ and $B\subset \omega$, $B\notin \mathcal{I}$ such that $B\backslash A_{x\upharpoonright n}\in \mathcal{I}$ ($B\backslash A_{x\upharpoonright n}$ is finite, respectively) for all $n$.
\end{proposition}
$\mathcal{I}_d$ = the ideal of all thin subsets of $\omega$ and $\mathcal{I}_u$ = the ideal of all uniformly thin subsets of $\omega$ do not have the BW property, and hence not the Fin-BW property (see Example 4 in \textbf{\cite{u3}} and Corollary 1 in \textbf{\cite{u1}}). Now the following example shows that $\mathcal{I}_v$ = the ideal of very thin subsets of $\omega$ does not have the Fin-BW property.
\begin{example}
Define a family of sets $\{A_s:s\in 2^{<\omega}\}$ as follows:
\begin{center}
$A_\emptyset=\omega$,
\end{center}
\begin{center}
$A_{(0)}=2\omega$, $A_{(1)}=2\omega-1$,
\end{center}
\begin{center}
$A_{(0,0)}=2^2\omega$, $A_{(0,1)}=2^2\omega-2$, $A_{(1,0)}=2^2\omega-1$ ,$A_{(1,1)}=2^2\omega-3$,
\end{center}
\begin{center}
$A_{(0,0,0)}=2^3\omega$, $A_{(0,0,1)}=2^3\omega-4$, $A_{(0,1,0)}=2^3\omega-2$, $A_{(0,1,1)}=2^3\omega-6$, \newline $A_{(1,0,0)}=2^3\omega-1$, $A_{(1,0,1)}=2^3\omega-5$, $A_{(1,1,0)}=2^3\omega-3$, $A_{(1,1,1)}=2^3\omega-7$
\end{center}
and so on.\newline
So if $s\in 2^n$ then $A_s=2^n\omega-i$, $0\leqslant i\leqslant 2^n-1$ and
\begin{center}
$A_{s\string^0}=2^n(2\omega)-i$, $A_{s\string^1}=2^n(2\omega-1)-i$.
\end{center}
Let $x\in 2^{\omega}$. Then $\displaystyle{\bigcap_{n\in \omega}}A_{x\upharpoonright n}$=$\emptyset$ and $\{A_{x\upharpoonright n}\backslash A_{x\upharpoonright {n+1}}:n\in \omega\}$ is a collection of mutually disjoint sets such that the numbers in each $A_{x\upharpoonright n}\backslash A_{x\upharpoonright {n+1}}$ form an arithmetic progression.\newline
Let $N=\{n_0<n_1<n_2<n_3<...\}$ be an infinite subset of $\omega$ and let $a_{n_k}$ be the $n_k^{th}$ element in the arithmetic progression formed by elements of $A_{x\upharpoonright k}\backslash A_{x\upharpoonright {k+1}}$, $k\geqslant 0$.\newline
Let $Ar_N=\displaystyle{\bigcup_{k\geqslant 0}}$\{first $n_k$ numbers in the arithmetic progression of elements of $A_{x\upharpoonright k}\backslash A_{x\upharpoonright {k+1}}$\}. Then $Ar_N$ is super thin.\newline
Suppose $B\subset \omega$ such that $B\backslash A_{x\upharpoonright n}$ is finite for all $n$. Since $\displaystyle{\bigcap_{n\in \omega}}A_{x\upharpoonright n}$=$\emptyset$, $B\subset Ar_N$ for some infinite subset $N$ of $\omega$ and so $B$ is super thin. Hence there is no $B\notin \mathcal{I}_v$ with $B\backslash A_{x\upharpoonright n}$ finite for all $n$, and therefore, by Proposition \ref{prop1}, $\mathcal{I}_v$ does not have the Fin-BW property.
\end{example}
\begin{theorem}
\label{thbw}
$\mathcal{I}_v$ = the ideal of very thin subsets of $\omega$ has the BW property.
\end{theorem}
\begin{proof}
Let $A$ be a subset of $\omega$ and $M\in \omega$. Recall that\newline
$(A)_M$=$\{1\}$$\cup$$\{n\in \omega$: $n>1$ and there exist $n$ consecutive elements of $A$ such that the difference between any two consecutive ones among them is less than or equal to $M$$\}$.
Then $A$ being very thin implies that $(A)_M$ is finite for all $M\in \omega$.\newline
Let $\{A_s:s\in 2^{<\omega}\}$ be a family of sets satisfying the three conditions in Proposition~\ref{prop1}. This theorem can be proved in the following cases.\newline
\newline
$\textbf{Case 1:}$ Suppose there exist $x\in 2^{\omega}$ and $M\in \omega$ such that $(A_{x\upharpoonright n})_M$ is infinite for all $n\in \omega$.\newline
Let $B_0$= $\{1\}$ and define $B_{n+1}$ to be a set of $(n+1)$ consecutive elements of $A_{x\upharpoonright{ n+1}}$ such that the difference between any two consecutive ones among them is $\leqslant M$ and $\max(B_n)<\min(B_{n+1})$, $n\in \omega$.\newline
Let $B=\displaystyle{\bigcup_{n\in \omega}}B_n$. Then $B$ is not very thin and $B\backslash A_{x\upharpoonright n}\subset\displaystyle{\bigcup_{i=0}^{n-1}}B_i$ is finite for $n>0$.\newline
\newline
$\textbf{Case 2:}$ Suppose that for any $x\in 2^{\omega}$ and $M\in \omega$, if $A_{x\upharpoonright {n+1}}$ is not very thin for all $n\in \omega$, then there exists $n_M\in \omega$ such that $(A_{x\upharpoonright {n_M}})_M$ is finite.\newline
So there exist $k_0\in \omega$ and $s_0\in 2^{k_0}$ such that $A_{s_0}$ is not very thin and $(A_{s_0})_1$ is finite and let $M_0$=$\max(A_{s_0})_1$.\newline
As $\max(A_{s_0})_1$=$M_0$, due to the hypothesis
there exist a $k_1\in \omega$ with $k_0<k_1$ and $s_1\in 2^{k_1}$ such that\newline
(i) $A_{s_0}$ and $A_{s_1}$ are disjoint and
$(A_{s_1})_{M_0+1}$ is finite,\newline
(ii) there is an infinite subset $\mathcal{A}_1$ of $A_{s_0}$ such that if $p\in \mathcal{A}_1$ then $p+i\in A_{s_1}$ for some $i$, $1\leqslant i\leqslant M_0$ and if $p<q<p+i$ then $q\in A_{s_0}$,\newline(iii) $\max(A_{s_0}\cup A_{s_1})_1\leqslant (M_0+1)M_1+M_0$ = $N_1$ where $M_1$= $\max(A_{s_1})_{M_0+1}$.\newline
Let $B_1=\{a_1^0,a_1^1\}$ where $a_1^i\in A_{s_i}$, $i\in \{0,1\}$ and $(a_1^1-a_1^0)\leqslant M_0$.\newline\newline
Continuing in this way we will get an infinite subset $\{k_0<k_1<k_2<...\}$ of $\omega$, a sequence $(s_i)_{i\in \omega}$ such that $s_i\in 2^{k_i}$, mutually disjoint sets $\{A_{s_i}:i\in \omega\}$, a collection of infinite sets $\{A_{s_0}=\mathcal{A}_{0}\supset \mathcal{A}_{1}\supset \mathcal{A}_{2}\supset \mathcal{A}_{3}\supset ...\}$, a collection of finite sets $\{B_i:i\geqslant 1\}$, a sequence $(M_i)_{i\in \omega}$ and an infinite subset $\{M_0=N_0<N_1<N_2<...\}$ of $\omega$ such that for $n\in \omega$ \newline
(i) $M_{n+1}$= $\max(A_{s_{n+1}})_{N_n+1}$,\newline \newline
(ii) $\mathcal{A}_{n+1}$ infinite subset of $\mathcal{A}_{n}$ such that if $p\in \mathcal{A}_{n+1}$ then $p+i_0+i_1+...+i_k\in A_{s_{k+1}}$ for some $i_k$ with $k+1\leqslant i_0+i_1+...+i_k\leqslant N_k$, $0\leqslant k\leqslant n$ so that if $p<q<p+i_0$ then $q\in A_{s_0}$ and if $p+i_0+i_1+...+i_{k-1}< q< p+i_0+i_1+...+i_k$ then $q\in A_{s_0}\cup A_{s_1}\cup ...\cup A_{s_k}$ where $1\leqslant k\leqslant n$,\newline \newline
(iii) $\max(A_{s_0}\cup A_{s_1}\cup ...\cup A_{s_{n+1}})_1\leqslant (N_n+1)M_{n+1}+N_n$ $=N_{n+1}$ and\newline \newline
(iv) $B_{n+1}=\{a_{n+1}^0,a_{n+1}^1,...,a_{n+1}^{n+1}\}$ where $a_{n+1}^i\in A_{s_i}$ and $(a_{n+1}^{i+1}-a_{n+1}^i)\leqslant N_i$ for $0\leqslant i\leqslant n$ and for $n\geqslant 1$, $a_{n+1}^0-a_{n}^n>2^{n}$.
\begin{figure}\label{fig:figure222}
\end{figure}\newline
Let $B=\displaystyle{\bigcup_{n=1}^{\infty}}B_n$. Then $B$ is not very thin and $B\cap A_{s_i}$ is super thin for all $i\in \omega$. Moreover if $M$ is an infinite subset of $\{s_i:i\in \omega\}$ then $B_M=\displaystyle{\bigcup_{s\in M}}B\cap A_s$ is not very thin (see Figure~\ref{fig:figure222}).\newline
Define $x\in 2^{\omega}$ such that there are infinitely many $i\in \omega$ so that $A_{s_i}\subset A_{x\upharpoonright n}$, $n\in \omega$.
If there exists an $m\in \omega$ such that there is no $i\in \omega$ so that $A_{s_i}\subset A_{x\upharpoonright n}\backslash A_{x\upharpoonright {n+1}}$ for all $n\geqslant m$ then we simply take $M=\{s_i:A_{s_i}\subset A_{x\upharpoonright {m+1}}\}$. If there does not exist such $m$, construct $M$ by taking the least $s_i$ such that $A_{s_i}\subset A_{x\upharpoonright n}\backslash A_{x\upharpoonright {n+1}}$ (if such $s_i$ exists) for $n\in \omega$. In both cases $M$ is infinite, and so $B_M$ is not very thin while $B_M\backslash A_{x\upharpoonright n}$ is very thin for all $n$.
\end{proof}
\begin{remark}
Let
\begin{center}
$r_n=0+1+...+n$, $n\in \omega$,\\ $p_n=N_0+N_1+...+N_{r_n}$, $n\in \omega$ and\\$q_0=N_0$ and $q_n=q_{n-1}+p_n$, $n\geqslant 1$
\end{center}
Define a set $D=\displaystyle{\bigcup_{i=0}^{\infty}}D_{i}$ as follows:
\begin{center}
$D_0$=$\displaystyle{\bigcup_{i=1}^{q_0}}B_{i}$
\end{center}
and for $n\geqslant 1$,
\begin{center}
$D_n$=$\displaystyle{\bigcup_{i=q_{n-1}+1}^{q_n}}B_{i}$$\bigg \backslash$ $\displaystyle{\bigcup_{i=0}^{n-1}}A_{s_i}$
\end{center}
Then $D$ is not very very thin and also $D\cap A_{s_i}$ is finite for all $i\in \omega$. Similarly one can construct a non very very thin set $D_M$ such that $D_M\cap A_{s}$ is finite for all $s\in M$ and $D_M\subset B_M$ where $M$ is an infinite subset of $\{s_i:i\in \omega\}$.\newline
Replacing $B_M$ by $D_M$ and defining the same $x\in 2^{\omega}$ as in the last case of Theorem~\ref{thbw}, it can be shown that $\mathcal{I}_{vv}$ = the ideal of very very thin subsets of $\omega$ has the Fin-BW property. \newline\newline
\end{remark}
\eject
\end{document}
|
\begin{document}
\title{Nonlinear diffusion equations \\ with {R}obin boundary conditions}
\maketitle
\begin{abstract}
The condition imposed on the nonlinear term of a nonlinear diffusion equation with a {R}obin boundary condition is the main focus of this paper.
The degenerate parabolic equations, such as the {S}tefan problem, the {H}ele--{S}haw problem, the porous medium equation and the fast diffusion equation, are included in this class.
By characterizing this class of equations as an asymptotic limit of the {C}ahn--{H}illiard systems, the growth condition of the nonlinear term can be improved.
In this paper, the existence and uniqueness of the solution are proved.
From the physical viewpoint, it is natural to treat the {C}ahn--{H}illiard system under the homogeneous {N}eumann boundary condition.
Therefore, the {C}ahn--{H}illiard system subject to the {R}obin boundary condition may look pointless.
However, at some level of approximation, it makes sense for characterizing the nonlinear diffusion equations.
\noindent \textbf{Key words:}~~
{C}ahn--{H}illiard system, degenerate parabolic equation, {R}obin boundary condition, growth condition.
\noindent \textbf{AMS (MOS) subject classification:} 35K61, 35K65, 35K25, 35D30, 80A22.
\end{abstract}
\section{Introduction}
\setcounter{equation}{0}
We consider the initial boundary value problem of a nonlinear diffusion system {\rm (P)}, comprising a parabolic partial differential equation with a {R}obin boundary condition:
\begin{equation*}
{\rm (P)} \quad
\begin{cases}
\displaystyle \frac{\partial u}{\partial t}-\Delta \xi =g,
\quad \xi \in \beta (u) \quad {\rm in~}Q:=\Omega \times (0, T), \\
\partial_{\boldsymbol{\nu} }\xi +\kappa \xi =h \quad
{\rm on~}\Sigma :=\Gamma \times (0, T), \\
u(0)=u_{0} \quad {\rm in~}\Omega,
\end{cases}
\end{equation*}
where $\kappa $ is a positive constant.
In an asymptotic form, it is characterized as the limit of the {C}ahn--{H}illiard system with a {R}obin boundary condition,
\begin{equation*}
{\rm (P)}_{\varepsilon }
\quad
\begin{cases}
\displaystyle \frac{\partial u_\varepsilon }{\partial t}-\Delta \mu_\varepsilon =0 \quad {\rm in~}Q, \\
\mu_\varepsilon =-\varepsilon \Delta u_\varepsilon
+\xi_\varepsilon +\pi_{\varepsilon }(u_\varepsilon )-f, \quad \xi_\varepsilon \in \beta (u_\varepsilon )
\quad {\rm in~}Q, \\
\partial_{\boldsymbol{\nu} }u_\varepsilon +\kappa u_\varepsilon =0,
\quad \partial_{\boldsymbol{\nu} }\mu_\varepsilon +\kappa \mu_\varepsilon =0
\quad {\rm on~}\Sigma,
\\
u_\varepsilon (0)=u_{0\varepsilon } \quad {\rm in~}\Omega
\end{cases}
\end{equation*}
as $\varepsilon \searrow 0$ with $\xi :=\mu +f$, where $0< T< +\infty$, $\Omega $ is a bounded domain of $\mathbb{R}^{d}$ $(d=2,3)$ with smooth boundary $\Gamma :=\partial \Omega $, $\partial_{\boldsymbol{\nu}}$ denotes the outward normal derivative on $\Gamma $, $\Delta $ is the {L}aplacian.
Functions $g: Q\to \mathbb{R}$, $h: \Sigma \to \mathbb{R}$, $u_{0}: \Omega \to \mathbb{R}$ and $u_{0\varepsilon }: \Omega \to \mathbb{R}$ are the given source, boundary and initial data.
$f: Q\to \mathbb{R}$ is constructed from $g$ and $h$ later.
Moreover, in the nonlinear diffusion term, $\beta $ is a maximal monotone graph and $\pi_{\varepsilon }$ is an anti-monotone function that tends to 0 as $\varepsilon \searrow 0$.
It is well known that the {C}ahn--{H}illiard system is characterized by the nonlinear term $\beta +\pi_{\varepsilon }$, a simple example being $\beta (r)=r^{3}$ and $\pi_{\varepsilon }(r)=-\varepsilon r$ for all $r\in \mathbb{R}$.
In this way, we choose a suitable $\pi_{\varepsilon }$ that depends on the definition of $\beta $ yielding the structure of the {C}ahn--{H}illiard system for ({\rm P})$_\varepsilon $.
Alternatively, by choosing a suitable $\beta $, the degenerate parabolic equation {\rm (P)} characterizes various types of nonlinear problems, such as the {S}tefan problem, {H}ele-{S}haw problem, porous medium equation, and fast diffusion equation (see, e.g., \cite[pp.6935--6937, Examples]{CF16}).
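For instance (one typical normalization, with notation introduced here only for illustration; see the examples recalled in \cite{CF16}), the two-phase {S}tefan problem corresponds to the choice
\begin{equation*}
\beta (r)=
\begin{cases}
k_{s}r & {\rm if~}r<0, \\
0 & {\rm if~}0\leq r\leq L, \\
k_{\ell }(r-L) & {\rm if~}r>L,
\end{cases}
\end{equation*}
where $L>0$ plays the role of the latent heat and $k_{s}, k_{\ell }>0$ are conductivity constants, while the porous medium equation corresponds to $\beta (r)=|r|^{q-1}r$ with $q>1$.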
In analyzing the well-posedness of system {\rm (P)}, there are two standard approaches, the ``$L^1$-approach'' and the ``{H}ilbert space approach'' (see, e.g., \cite{Aka09}, \cite[Chapter~5]{Bar10}).
With respect to the ``{H}ilbert space approach'', the pioneering result \cite{Dam77} concerns the enthalpy formulation for the {S}tefan problem with a {D}irichlet--{R}obin boundary condition, essentially of {R}obin-type.
Afterwards, the {N}eumann boundary condition was treated in \cite{Ken90, KL05} and the dynamic boundary condition in \cite{Aik93, Aik95, Aik96, FM17}.
See also \cite{KY17, KY17b, FKY17} for a more general space setting.
For all of these results, the growth condition for $\beta $ is a very important assumption, such as
\begin{equation*}
\widehat{\beta }(r)\geq c_{1}r^{2}-c_{2} \quad {\rm for~all~}r\in \mathbb{R},
\end{equation*}
where $c_{1}$ and $c_{2}$ are positive constants,
and $\widehat{\beta }$ is a proper lower semicontinuous convex function
satisfying the subdifferential form $\partial_{\mathbb{R}}\widehat{\beta }=\beta $.
However, it is too restrictive with regard to applications;
indeed, fast diffusion and nonlinear diffusion of {P}enrose--{F}ife type are excluded.
A drawback of the ``{H}ilbert space approach'' compared with the ``$L^1$-approach'' is detailed in \cite[Chapter~5]{Bar10}.
With that as motivation,
the improvement of the growth condition subject to the {R}obin boundary condition
was studied in \cite{DK99} using a certain technique called the ``lower semicontinuous convex extension''.
For the porous medium equation, that is, $\beta (r)=|r|^{q-1}r$ $(q > 1)$,
a different approach to the doubly nonlinear evolution equation was studied in \cite{Aka09}.
Regarding a recent result,
the characterization of the nonlinear diffusion equation as an asymptotic limit of the {C}ahn--{H}illiard system with dynamic boundary conditions was introduced in \cite{Fuk16, Fuk16b},
and the same problem subject to the {N}eumann boundary condition was given in \cite{CF16}.
In these instances, we do not need any growth condition;
see \cite[Chapter~6]{CF16} or \cite{Fuk16b}.
This is one of the big advantages of this approach;
indeed, we do not need techniques such as the lower semicontinuous convex extension by \cite{DK99}.
The main objective of this paper is to improve on the pioneering results given in \cite{Dam77, DK99}, more precisely, an improvement of the growth condition subject to the {R}obin boundary condition without using the lower semicontinuous convex extension.
Up to a certain level of approximation, we consider the {C}ahn--{H}illiard system subject to the {R}obin boundary condition (see, e.g., \cite{Mil17}).
From a physical perspective,
the {C}ahn--{H}illiard system is more naturally treated under the homogeneous {N}eumann boundary condition.
Therefore the {C}ahn--{H}illiard system with the {R}obin boundary condition imposed may appear to be pointless.
However, up to a given level of approximation, characterizing the nonlinear diffusion equations with the {R}obin boundary condition imposed does make sense.
Moreover, we obtain the order of convergence of the solutions of {\rm (P)} with the {R}obin boundary condition to those of {\rm (P)} with the {N}eumann boundary condition as $\kappa $ tends to $0$, where $\kappa $ is the constant in the boundary condition.
The outline of the paper is as follows.
In Section~2, the main theorems are stated.
For this purpose, we present the notation used in this paper and define a suitable duality map and the $H^{1}$-norm equivalent to the standard norm.
Next, we introduce the definition of a weak solution of {\rm (P)} and {\rm (P)}$_{\varepsilon }$;
the principal theorems are then given.
In Section~3, to prove the convergence theorem, we deduce the uniform estimates of the approximate solution of {\rm (P)}$_{\varepsilon }$.
We use the {M}oreau--{Y}osida regularization of $\widehat{\beta }$, employing the second approximation parameter $\lambda $.
In Section~4, to obtain the weak solution of {\rm (P)}$_{\varepsilon }$, we first pass to the limit $\lambda \searrow 0$.
Second, we prove the existence of weak solutions by passing to the limit $\varepsilon \searrow 0$.
We also discuss the uniqueness of solutions.
In Section~5, we improve the assumption for $\beta $ subject to a strong assumption for the heat source $f$.
From this result, we can avoid the growth condition for $\beta $.
In Section~6, we obtain the order of convergence related to the {N}eumann problem from the {R}obin problem as $\kappa \searrow 0$.
Table of contents:
\begin{itemize}
\item[1.] Introduction
\item[2.] Main results
\begin{itemize}
\item[2.1.] Notation
\item[2.2.] Definition of the solution and main theorem
\end{itemize}
\item[3.] Approximate problem and uniform estimates
\begin{itemize}
\item[3.1.] Approximate problem for {\rm (P)}$_{\varepsilon }$
\item[3.2.] Uniform estimates
\end{itemize}
\item[4.] Proof of convergence theorem
\begin{itemize}
\item [4.1.] Passage to the limit $\lambda \searrow 0$
\item [4.2.] Passage to the limit $\varepsilon \searrow 0$
\end{itemize}
\item [5.] Improvement of the results
\item [6.] Asymptotic limits to solutions of {N}eumann problem
\item [] Appendix
\end{itemize}
\section{Main results}
\setcounter{equation}{0}
In this section, we state the main theorem. Hereafter, $\kappa $ is a positive constant.
\subsection{Notation}
We employ spaces $H:=L^{2}(\Omega )$ and $H_{\Gamma }:=L^{2}(\Gamma )$,
with standard norms $|\cdot |_{H}$ and $|\cdot |_{H_{\Gamma }}$,
along with inner products $(\cdot, \cdot )_{H}$ and $(\cdot, \cdot )_{H_{\Gamma }}$, respectively.
We also use the space $V:=H^{1}(\Omega )$ with norm $|\cdot |_{V}$ and inner product $(\cdot, \cdot )_{V}$,
\begin{equation*}
|z|_{V}:=\bigl(|\nabla z|_{H^{d}}^{2}
+ \kappa |z|_{H_{\Gamma }}^{2}\bigr)^{\frac{1}{2}}
\quad {\rm for~all~} z \in V.
\end{equation*}
By virtue of the {P}oincar\'e inequality and the trace theorem, there exist positive constants $c_{\rm P}, c_{\rm P}'$ with $c_{\rm P} < c_{\rm P}'$ such that
\begin{equation*}
c_{\rm P} \| z \|_{V}^2
\le | z |_{V}^2
\le c_{\rm P}' \| z \|_{V}^2
\quad {\rm for~all~} z \in V,
\end{equation*}
where $\| \cdot \|_V$ stands for the standard $H^{1}$-norm. Moreover, we set
\begin{equation*}
W:=\bigl\{ z\in H^{2}(\Omega ) \ : \ \partial_{\nu }z+\kappa z=0
\quad {\rm a.e.~on~}\Gamma \bigr\}.
\end{equation*}
The symbol $V^{*}$ denotes the dual space of $V$; the duality pairing between $V^{*}$ and $V$ is denoted by $\langle \cdot, \cdot \rangle_{V^{*}, V}$.
Also, for all $z\in V$, let $F: V\to V^{*}$ be the duality mapping defined by
\begin{equation*}
\langle Fz,
\tilde{z}
\rangle_{V^{*}, V}
:=\int_{\Omega }\nabla z\cdot \nabla \tilde{z}dx+\kappa \int_{\Gamma }z\tilde{z}d\Gamma
\quad {\rm for~all~}
z,\tilde{z}\in V.
\end{equation*}
Moreover, we define an inner product of $V^{*}$ by
\begin{equation*}
(z^{*}, \tilde{z}^{*})_{V^{*}}
:=\langle z^{*}, F^{-1}\tilde{z}^{*}
\rangle_{V^{*}, V}
\quad
{\rm for~all~}
z^{*}, \tilde{z}^{*}\in V^{*}.
\end{equation*}
Then, the dense and compact embeddings
$V\mathop{\hookrightarrow} \mathop{\hookrightarrow}
H\mathop{\hookrightarrow} \mathop{\hookrightarrow} V^*$
hold; that is, $(V, H, V^*)$ is a standard {H}ilbert triplet.
As a remark, for all of these settings, $\kappa >0$ is essential.
For a {N}eumann boundary condition, $\kappa =0$, see \cite{CF16}.
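A simple observation, recorded here for later use: if $z\in W$, then integration by parts and the boundary condition $\partial_{\nu }z+\kappa z=0$ give
\begin{equation*}
\langle Fz, \tilde{z} \rangle_{V^{*}, V}
=\int_{\Omega }(-\Delta z)\tilde{z}dx
\quad {\rm for~all~}\tilde{z}\in V,
\end{equation*}
so that, on $W$, the duality mapping $F$ acts as $-\Delta $ subject to the {R}obin boundary condition; this is consistent with the operator $A$ introduced in Section~3.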
\subsection{Definition of the solution and main theorem}
We next define our solution for {\rm (P)} and then state the main theorem.
\paragraph{Definition 2.1.}
{\it The pair $(u, \xi )$ is called the weak solution of {\rm (P)} if
\begin{gather*}
u\in H^{1}(0, T;V^{*})\cap L^{\infty }(0, T;H), \quad \xi \in L^{2}(0, T;V), \label{muxi} \\
\xi \in \beta (u) \quad {\it a.e.~in~}Q, \label{2.20}
\end{gather*}
and they satisfy}
\begin{gather}
\bigl\langle u'(t), z\bigr\rangle_{V^{*}, V}
+\int_{\Omega }\nabla \xi (t)\cdot \nabla zdx
+\kappa \int_{\Gamma }\xi(t) zd\Gamma =
\int_{\Omega }^{} g(t)z dx +\int_{\Gamma }^{} h(t)z d\Gamma \nonumber \\
\quad {\it for~all~}z\in V, \quad
{\it for~a.a.\ } t \in (0,T), \label{2.19'} \\
u(0)=u_{0} \quad {\it a.e.~in}~\Omega. \label{ic}
\end{gather}
In this definition, the {R}obin boundary condition for $\xi$ is hidden in the weak formulation \eqref{2.19'}.
The strategy behind the proof of the main theorem is the characterization of our nonlinear diffusion equation {\rm (P)} as an asymptotic limit of the {C}ahn--{H}illiard system.
Therefore, for each $\varepsilon \in (0,1]$, we define the approximate problem of {C}ahn--{H}illiard type with {R}obin boundary condition as follows:
\paragraph{Definition 2.2.}
{\it The triplet $(u_{\varepsilon }, \mu_{\varepsilon }, \xi_{\varepsilon })$ is called the weak solution of {\rm (P)}$_{\varepsilon }$ if
\begin{gather}
u_{\varepsilon }\in H^{1}(0, T;V^{*})\cap L^{\infty }(0, T;V) \cap L^2(0,T;W), \label{3.1}\\
\mu_{\varepsilon }\in L^{2}(0, T;V), \quad \xi_{\varepsilon }\in L^{2}(0, T;H), \nonumber \\
\xi_{\varepsilon }\in \beta (u_{\varepsilon }) \quad {\it a.e.~in~}Q \nonumber
\end{gather}
and they satisfy
\begin{gather}
\bigl\langle u_{\varepsilon }'(t), z\bigr\rangle_{V^{*}, V}
+\int_{\Omega }\nabla \mu_{\varepsilon }(t)\cdot \nabla zdx
+\kappa \int_{\Gamma }\mu_{\varepsilon } (t)zd\Gamma =0 \quad {\it for~all~}z\in V, \label{2.13}\\
\mu_{\varepsilon }(t)
=-\varepsilon \Delta u_{\varepsilon }(t)
+\xi_{\varepsilon }(t)+\pi_{\varepsilon } \bigl( u_{\varepsilon }(t) \bigr)
-f(t) \quad {\it in~}H, \label{2.14}
\end{gather}
for a.a.\ $t\in (0, T)$, with}
\begin{equation*}
u_\varepsilon (0)=u_{0\varepsilon } \quad {\it a.e.~in}~\Omega.
\end{equation*}
The {R}obin boundary condition for $u_\varepsilon $ is stated with regard to the class of $W$, that is, the regularity \eqref{3.1};
that for $\mu_\varepsilon $ is hidden in the weak formulation \eqref{2.13}.
The {C}ahn--{H}illiard structure is characterized by the nonlinear term $\beta+\pi_\varepsilon $.
The conditions for these terms are given as assumptions:
\begin{enumerate}
\item[(A1)]
$\beta :\mathbb{R}\to 2^{\mathbb{R}}$ is a maximal monotone graph, which is
the subdifferential $\beta =\partial_{\mathbb{R}}\widehat{\beta }$
of some proper lower semicontinuous convex function
$\widehat{\beta }: \mathbb{R}\to [0, +\infty ]$
satisfying $\widehat{\beta }(0)=0$ with the effective domain $D(\beta ):=\{ r \in \mathbb{R} : \beta (r) \ne \emptyset \}$;
\item[(A2)]
there exist positive constants $c_{1}$, $c_{2}$ such that
$\widehat{\beta }(r)\geq c_{1}r^{2}-c_{2}$ for all $r\in \mathbb{R}$;
\item[(A3)]
$\pi_{\varepsilon }: \mathbb{R}\to \mathbb{R}$ is a Lipschitz continuous function for all $\varepsilon \in (0, 1]$.
Moreover, there exist a positive constant $c_{3}$ and strictly increasing continuous function $\sigma : [0, 1]\to [0, 1]$ such that $\sigma (0)=0$, $\sigma (1)=1$, and
\begin{equation}
\bigl|\pi_{\varepsilon }(0)\bigr|+
|\pi_{\varepsilon }'|_{L^{\infty }(\mathbb{R})}\leq c_{3}\sigma (\varepsilon )
\quad {\rm for~all~}\varepsilon \in (0, 1]. \label{2.8}
\end{equation}
\end{enumerate}
In particular, {\rm (A1)} yields $0\in \beta (0)$.
Assumption {\rm (A2)} is improved in Section~5.
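As an illustration (a simple example only), the prototypical choice $\beta (r)=r^{3}$, $\pi_{\varepsilon }(r)=-\varepsilon r$ mentioned in the introduction satisfies {\rm (A1)}--{\rm (A3)}: here $\widehat{\beta }(r)=\frac{1}{4}r^{4}\geq r^{2}-1$ for all $r\in \mathbb{R}$, so {\rm (A2)} holds with $c_{1}=c_{2}=1$, and $|\pi_{\varepsilon }(0)|+|\pi_{\varepsilon }'|_{L^{\infty }(\mathbb{R})}=\varepsilon $, so {\rm (A3)} holds with $c_{3}=1$ and $\sigma (\varepsilon )=\varepsilon $ (and also with $\sigma (\varepsilon )=\varepsilon ^{1/2}$, since $\varepsilon \leq \varepsilon ^{1/2}$ on $(0, 1]$).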
The assumptions pertaining to the given data are as follows:
\begin{enumerate}
\item[(A4)]
$g\in L^{2}(0, T; H)$, $h\in L^{2}(0, T; H_{\Gamma })$;
\item[(A5)]
$u_{0}\in H$ with $\widehat{\beta }(u_{0})\in L^{1}(\Omega )$. Moreover, let
$u_{0\varepsilon }\in V$; then there exists a positive constant
$c_{4}$ such that, for all $\varepsilon \in (0, 1]$,
\begin{equation}\label{2.9}
|u_{0\varepsilon }|_{H}^{2}\leq c_{4},
\quad \int_{\Omega }\widehat{\beta }(u_{0\varepsilon })dx\leq c_{4},
\quad \varepsilon |\nabla u_{0\varepsilon }|_{H^{d}}^{2}\leq c_{4},
\quad \varepsilon |u_{0\varepsilon }|_{H_{\Gamma }}^{2}\leq c_{4}.
\end{equation}
In addition, $u_{0\varepsilon }\to u_{0}$ strongly in $H$ as $\varepsilon \searrow 0$ (cf.\ \cite[Lemma~A.1]{CF16}).
\end{enumerate}
From assumption {\rm (A4)}, we see from the {L}ax--{M}ilgram theorem that there exists a unique function $f\in L^{2}(0, T; V)$ such that
\begin{equation*}
\int_{\Omega }\nabla f(t)\cdot \nabla zdx
+\kappa \int_{\Gamma }f(t)zd\Gamma
=\int_{\Omega }g(t)zdx+\int_{\Gamma }h(t)zd\Gamma
\end{equation*}
for all $z\in V$ and for a.a.\ $t\in (0, T)$.
Therefore, introducing the new variable $\mu :=\xi -f \in L^2(0,T;V)$, we rewrite the weak formulation \eqref{2.19'} as follows:
\begin{equation}
\bigl\langle u'(t), z\bigr\rangle_{V^{*}, V}
+\int_{\Omega }\nabla \mu (t)\cdot \nabla zdx
+\kappa \int_{\Gamma }\mu(t) zd\Gamma =0
\quad {\rm for~all~}z\in V, \label{2.19}
\end{equation}
for a.a.\ $t \in (0,T)$.
The proof of the main theorem follows that in \cite{Fuk16, CF16} for the {R}obin boundary condition.
The characterization of the nonlinear diffusion equation from the asymptotic limits of
{C}ahn--{H}illiard system \cite{Fuk16, Fuk16b, CF16, FKY17} furnishes a big advantage
in regard to the growth condition for $\beta $.
Because of this, we can improve the result of \cite{Dam77} and widen the setting for
$\beta $ beyond the different approach described in \cite{DK99}, which starts from the lower semicontinuous convex extension.
To do so, we replace assumption {\rm (A4)} by the following {\rm (A6)}:
\begin{enumerate}
\item [(A6)]
$g\in L^{2}(0, T; H)$, $h=0$ a.e.\ on $\Sigma $.
\end{enumerate}
Then, we see that there exists a unique function $f\in L^{2}(0, T; V)$ such that,
\begin{equation*}
\int_{\Omega }\nabla f(t)\cdot \nabla zdx
+\kappa \int_{\Gamma }f(t)zd\Gamma =\int_{\Omega }g(t)zdx
\quad {\rm for~all~} z \in V,
\end{equation*}
for a.a.\ $t\in (0, T)$.
Now, taking a test function $z \in \mathcal{D}(\Omega )$, we have $-\Delta f(t)=g(t)$ in $\mathcal{D}'(\Omega )$. Specifically, by comparison, $-\Delta f(t)=g(t)$ in $H$.
This yields
\begin{equation*}
\partial_{\boldsymbol{\nu }}f(t)+\kappa f(t)=0 \quad {\rm a.e.~on~}\Gamma,
\end{equation*}
for a.a.\ $t\in (0, T)$.
Therefore, under assumption {\rm (A6)}, we have $-\Delta f\in L^{2}(0, T; H)$ and $\partial_{\boldsymbol{\nu }}f\in L^{2}(0, T; H_{\Gamma })$.
These higher regularities are essential to improve the growth condition {\rm (A2)}.
The well-posedness of the {C}ahn--{H}illiard system has been treated in many studies (see, e.g., \cite{Mil17}).
In regard to the abstract theorem of the evolution equation, we refer the reader to \cite{KNP95, Kub12}.
Based on these results, we obtain the following proposition:
\paragraph{Proposition 2.1.}{\it Under assumptions {\rm (A1)}--{\rm (A5)}, or {\rm (A1)}, {\rm (A3)} with $\sigma (\varepsilon )=\varepsilon ^{1/2}$, {\rm (A5)} and {\rm (A6)}, for each $\varepsilon \in (0, 1]$ there exists a unique weak solution of {\rm (P)}$_\varepsilon $. } \\
This proposition implies that the growth condition {\rm (A2)} is not essential for the well-posedness of the {C}ahn--{H}illiard system; it can be replaced by the strong assumption {\rm (A6)}.
The proof of this proposition is given in Section~4. Indeed, we can prove this proposition by considering the approximate problem given in Proposition~3.1.
Our main theorem is now given.
\paragraph{Theorem 2.1.}
{\it With assumptions {\rm (A1)}--{\rm (A5)}, for each $\varepsilon \in (0, 1]$, let $(u_{\varepsilon }, \mu_{\varepsilon }, \xi_{\varepsilon })$ be the weak solution of {\rm (P)}$_\varepsilon $ obtained in Proposition~2.1.
Then, there exists a weak solution $(u,\xi )$ of {\rm (P)}, and $(u, \xi)$ is characterized by $(u_{\varepsilon }, \mu_{\varepsilon }, \xi_{\varepsilon })$ in the following sense:
\begin{gather*}
u_\varepsilon \to u \quad {\it strongly~in~} C\bigl( [0,T];V^* \bigr)
\quad {\it and~weakly~star~in~}H^1(0,T;V^*) \cap L^\infty (0,T;H),
\\
\xi_\varepsilon \to \xi
\quad {\it weakly~in~}L^2(0,T;H), \\
\mu_\varepsilon \to \mu:=\xi -f
\quad {\it weakly~in~}L^2(0,T;V)
\end{gather*}
as $\varepsilon \searrow 0$.
Moreover, the component $u$ of the solution of {\rm (P)} is uniquely determined. Also, if $\beta $ is single-valued, then the component $\xi $ of the solution of {\rm (P)} is also unique.
}\\
The second theorem relates to improving the well-posedness result of \cite{Dam77, DK99}.
\paragraph{Theorem 2.2.}
{\it Given assumptions {\rm (A1)}, {\rm (A3)} with $\sigma (\varepsilon )=\varepsilon ^{1/2}$, {\rm (A5)}, {\rm (A6)}, the same statement as in Theorem~2.1 holds.
}
\section{Approximate problem and uniform estimates}
\setcounter{equation}{0}
The proof of the main theorems exploits the characterization of the nonlinear diffusion equation through the asymptotic limits of the {C}ahn--{H}illiard system \cite{Fuk16, CF16}.
To apply it, we consider the second-order approximation of the nonlinear term $\beta $.
At this level of approximation, we obtain a uniform estimate independent of the approximation parameters.
\subsection{Approximate problem for (P)$_{\varepsilon }$}
We consider an approximate problem to show the well-posedness of (P)$_{\varepsilon }$.
For each $\lambda \in (0,1]$, we define $\beta_{\lambda }: \mathbb{R}\to \mathbb{R}$ by
\begin{equation*}
\beta_{\lambda }(r)
:=\frac{1}{\lambda }\bigl(r-J_{\lambda }(r)\bigr) \quad {\rm for~all~}r\in \mathbb{R},
\end{equation*}
where the resolvent operator $J_{\lambda }: \mathbb{R}\to \mathbb{R}$ is given by
\begin{equation*}
J_{\lambda }(r):=(I+\lambda \beta )^{-1}(r) \quad {\rm for~all~}r\in \mathbb{R}.
\end{equation*}
Also, we define the {M}oreau--{Y}osida regularization $\widehat{\beta }_{\lambda }$ of $\widehat{\beta }: \mathbb{R}\to \mathbb{R}$ by
\begin{equation*}
\widehat{\beta }_{\lambda }(r)
:=\inf_{s\in \mathbb{R}}\left\{\frac{1}{2\lambda }|r-s|^{2}+\widehat{\beta }(s)\right\}
=\frac{1}{2\lambda }\bigl| r-J_{\lambda }(r) \bigr|^{2}
+ \widehat{\beta } \bigl( J_{\lambda }(r) \bigr)
\quad {\rm for~all~} r\in \mathbb{R}.
\end{equation*}
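For orientation, the following minimal numerical sketch (ours, purely illustrative and not part of the analysis) makes these definitions concrete for one particular maximal monotone graph, namely $\beta =\partial I_{[-1,1]}$, the subdifferential of the indicator function of $[-1,1]$; for this choice the resolvent $J_{\lambda }$ is the projection onto $[-1,1]$ and $\widehat{\beta }=I_{[-1,1]}$. The script checks the second expression for $\widehat{\beta }_{\lambda }$ against a direct minimization over a grid; the value of $\lambda $, the sample points, and the tolerances are arbitrary.
\begin{verbatim}
# Illustrative sketch (not from the paper): Yosida approximation and
# Moreau-Yosida regularization for the concrete choice beta = dI_{[-1,1]}.
# For this graph, J_lambda(r) = (I + lambda*beta)^{-1}(r) is the projection
# of r onto [-1,1].
import numpy as np

lam = 0.1                               # lambda in (0,1], arbitrary

def J(r):                               # resolvent = projection onto [-1,1]
    return np.clip(r, -1.0, 1.0)

def beta_lam(r):                        # Yosida approximation (r - J(r))/lambda
    return (r - J(r)) / lam

def beta_hat_lam(r):                    # closed form: (1/2lam)|r-J(r)|^2 + hat{beta}(J(r)),
    return (r - J(r)) ** 2 / (2 * lam)  # and hat{beta} = I_{[-1,1]} vanishes at J(r)

# check hat{beta}_lam(r) = inf_s { |r-s|^2/(2 lam) + hat{beta}(s) } by brute force
s = np.linspace(-1.0, 1.0, 20001)       # grid of admissible s (hat{beta}(s) = 0 there)
for r in [-3.0, -0.4, 0.0, 0.7, 2.5]:
    inf_val = np.min((r - s) ** 2 / (2 * lam))
    assert abs(inf_val - beta_hat_lam(r)) < 1e-6
    # beta_lam is nondecreasing and (1/lambda)-Lipschitz, as used below
    assert abs(beta_lam(r + 1e-3) - beta_lam(r)) <= 1e-3 / lam + 1e-12
\end{verbatim}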
Now, we consider the problem (P)$_{\varepsilon, \lambda }$ for the viscous {C}ahn--{H}illiard like system:
\begin{equation*}
{\rm (P)}_{\varepsilon, \lambda }
\quad
\begin{cases}
\displaystyle \frac{\partial u_{\varepsilon, \lambda }}{\partial t}
-\Delta \mu_{\varepsilon, \lambda }=0 \quad {\rm a.e.\ in~}Q, \\
\displaystyle \mu_{\varepsilon, \lambda }
=\lambda \frac{\partial u_{\varepsilon, \lambda }}{\partial t}
-\varepsilon \Delta u_{\varepsilon, \lambda }
+\beta_{\lambda }(u_{\varepsilon, \lambda })
+\pi_{\varepsilon }(u_{\varepsilon, \lambda })-f \quad {\rm a.e.\ in~}Q, \\
\partial_{\boldsymbol{\nu} }u_{\varepsilon, \lambda }
+\kappa u_{\varepsilon, \lambda }
=0, \quad \partial_{\boldsymbol{\nu} }\mu_{\varepsilon, \lambda }
+\kappa \mu_{\varepsilon, \lambda }=0 \quad {\rm a.e.\ on~}\Sigma,\\
u_{\varepsilon, \lambda }(0)=u_{0\varepsilon } \quad {\rm a.e.\ in~}\Omega.
\end{cases}
\end{equation*}
Define $A:D(A) \to H$ by $A u =-\Delta u$ in $H$ with $D(A)=W$; the treatment of $A$ is given in the Appendix.
From the well-known abstract theory of doubly nonlinear evolution equations \cite{CV90},
we obtain the following well-posedness result (see also \cite{CF15, KNP95, Kub12}):
\paragraph{Proposition 3.1.}{\it
For each $\lambda \in (0, 1]$, there exists a unique
\begin{equation*}
u_{\varepsilon, \lambda }\in H^{1}(0, T; H)\cap L^{\infty }(0, T; V)\cap L^{2}(0, T; W)
\end{equation*}
such that $u_{\varepsilon, \lambda }$ satisfies the following {C}auchy problem}
\begin{align}
& (\lambda I+F^{-1})u'_{\varepsilon, \lambda }(t)
+ \varepsilon A
u_{\varepsilon, \lambda }(t) \nonumber \\
& \quad =-\beta_{\lambda }\bigl( u_{\varepsilon, \lambda }(t) \bigr)
-\pi_{\varepsilon } \bigl( u_{\varepsilon, \lambda }(t) \bigr)
+f(t) \quad {\it in~}H, \quad {\it for~a.a.~} t\in (0, T), \label{1} \\
& u_{\varepsilon, \lambda }(0)=u_{0\varepsilon } \quad {\it in~}H. \nonumber
\end{align}
At this level of abstraction, the {C}ahn--{H}illiard system with the {R}obin boundary condition is essentially the same as in previous studies; therefore, we omit the proof of this proposition. \\
Now, for each $\lambda \in (0, 1]$, we put
\begin{equation}
\mu_{\varepsilon, \lambda }(t)
:=\lambda u_{\varepsilon, \lambda }'(t)
+\varepsilon A u_{\varepsilon, \lambda }(t)
+\beta_{\lambda } \bigl( u_{\varepsilon, \lambda }(t) \bigr)
+\pi_{\varepsilon } \bigl( u_{\varepsilon, \lambda }(t) \bigr)
-f(t) \quad {\rm in}~H,
\label{9}
\end{equation}
for a.a.\ $t\in (0, T)$.
Then, we can rewrite the evolution equation \eqref{1} as
\begin{equation}
F^{-1}u'_{\varepsilon, \lambda }(t)+\mu_{\varepsilon, \lambda }(t)=0 \quad {\rm in~}V,
\label{10}
\end{equation}
for a.a.\ $t\in (0, T)$.
We remark here that, thanks to the {R}obin boundary condition, no projection of $\mu_{\varepsilon, \lambda}$ is needed (cf.\ \cite{CF15, KNP95, Kub12});
this is different from the case of the {N}eumann boundary condition.
\subsection{Uniform estimates}
To prove the convergence theorem, we now obtain the uniform estimates independent of $\varepsilon, \lambda $.
\paragraph{Lemma 3.1.}
{\it There exist a positive constant $M_{1}$ and two constants $\bar{\lambda }, \bar{\varepsilon }\in (0, 1]$, depending only on the data, such that }
\begin{gather}
\int_{0}^{t}
\bigl|u'_{\varepsilon, \lambda }(s)
\bigr|_{V^{*}}^{2}ds
+2\lambda \int_{0}^{t}
\bigl|u'_{\varepsilon, \lambda }(s)\bigr|_{H}^{2}ds
+\varepsilon \bigl| u_{\varepsilon, \lambda }(t) \bigr|_{V}^{2} \nonumber \\
{}+\bigl| \widehat{\beta }_{\lambda } \bigl( u_{\varepsilon, \lambda }(t) \bigr)
\bigr|_{L^{1}(\Omega )}
+ \frac{c_{1}}{4} \bigl| u_{\varepsilon, \lambda }(t) \bigr |_{H}^{2} \leq M_{1}, \label{m1} \\
\int_{0}^{t}\bigl| \mu_{\varepsilon, \lambda }(s) \bigr|_{V}^{2}ds\leq M_{1} \label{m2}
\end{gather}
{\it for all $t\in [0, T], \lambda \in (0, \bar{\lambda }]$ and $\varepsilon \in (0, \bar{\varepsilon }]$. }
\paragraph{Proof}
We test \eqref{1} at time $s\in (0, T)$ by $u_{\varepsilon, \lambda }'(s)\in H$.
Then, we see that
\begin{align}
& \lambda \bigl|u_{\varepsilon, \lambda }'(s)
\bigr|_{H}^{2}+\bigl|u_{\varepsilon, \lambda }'(s)
\bigr|_{V^{*}}^{2}
- \varepsilon
\bigl( \Delta u_{\varepsilon, \lambda }(s), u_{\varepsilon, \lambda }'(s)\bigr)_{H}
\nonumber \\
& \quad {}
+\frac{d}{ds} \int_{\Omega } \widehat{\beta }_{\lambda }
\bigl( u_{\varepsilon, \lambda }(s) \bigr)dx
=-\bigl(\pi_{\varepsilon } \bigl( u_{\varepsilon, \lambda }(s) \bigr), u_{\varepsilon, \lambda }'(s)\bigr)_{H}
+\bigl(f(s), u_{\varepsilon, \lambda }'(s)\bigr)_{H} \label{m1-1}
\end{align}
for a.a.\ $s\in (0, T)$.
We have now from the boundary condition of $u_{\varepsilon, \lambda }(s)$
\begin{align*}
- \bigl(
\Delta u_{\varepsilon, \lambda }(s), u_{\varepsilon, \lambda }'(s)
\bigr)_{H}
&= - \int_{\Gamma }\partial_{\boldsymbol{\nu }}u_{\varepsilon, \lambda }(s)u_{\varepsilon, \lambda }'(s)d\Gamma
+ \int_{\Omega }\nabla u_{\varepsilon, \lambda }(s)\cdot \nabla u_{\varepsilon, \lambda }'(s)dx \\
&= \kappa \int_{\Gamma }u_{\varepsilon, \lambda }(s)u_{\varepsilon, \lambda }'(s)d\Gamma
+\int_{\Omega }\nabla u_{\varepsilon, \lambda }(s)\cdot \nabla u_{\varepsilon, \lambda }'(s)dx \\
&= \frac{\kappa }{2}
\frac{d}{ds}\int_{\Gamma } \bigl|u_{\varepsilon, \lambda }(s) \bigr|^{2}d\Gamma
+\frac{1}{2}
\frac{d}{ds}\int_{\Omega } \bigl|\nabla u_{\varepsilon, \lambda }(s) \bigr|^{2}dx
\end{align*}
for a.a.\ $s\in (0, T)$.
Integrating \eqref{m1-1} with respect to $s$ over the interval $[0, t]$, and using the above, we infer that
\begin{align}
&\int_{0}^{t}
\bigl|u_{\varepsilon, \lambda }'(s)\bigr|_{V^{*}}^{2}
ds
+\lambda \int_{0}^{t}
\bigl|u_{\varepsilon, \lambda }'(s)\bigr|_{H}^{2}
ds
+\frac{\varepsilon }{2}
\bigl|u_{\varepsilon, \lambda }(t)
\bigr|_{V}^{2}+\int_{\Omega }\widehat{\beta }_{\lambda }
\bigl( u_{\varepsilon, \lambda }(t) \bigr)dx \nonumber \\
&=
\frac{\varepsilon }{2}|u_{0\varepsilon }|_{V}^{2}
+\int_{\Omega }\widehat{\beta }_{\lambda }(u_{0\varepsilon })dx
+\int_{\Omega }\widehat{\pi }_{\varepsilon }(u_{0\varepsilon })dx
-\int_{\Omega }\widehat{\pi }_{\varepsilon }
\bigl(u_{\varepsilon, \lambda }(t) \bigr)dx \nonumber \\
&\quad {}+\int_{0}^{t}\bigl\langle u_{\varepsilon, \lambda }'(s), f(s)
\bigr\rangle_{V^{*}, V}ds \label{3.6}
\end{align}
for all $t\in [0, T]$, where $\widehat{\pi }_{\varepsilon }$ is the primitive of $\pi_{\varepsilon }$ given by $\widehat{\pi }_{\varepsilon }(r):=\int_{0}^{r} \pi_{\varepsilon }(\tau)d\tau \ {\rm for~all~} r\in \mathbb{R}$.
Now, aided by assumption {\rm (A2)}, we have
\begin{align*}
\widehat{\beta }_{\lambda }(r)
& = \frac{1}{2\lambda }
\bigl|r-J_{\lambda }(r)
\bigr|^{2}
+\widehat{\beta }\bigl(J_{\lambda }(r)\bigr) \\
& \geq
\frac{1}{2\lambda }
\bigl|r-J_{\lambda }(r)\bigr|^{2}+c_{1}
\bigl|J_{\lambda }(r)\bigr|^{2}-c_{2} \quad {\rm for~all~}r\in \mathbb{R}.
\end{align*}
Hence, putting $\bar{\lambda }:=\min \{1, 1/(2c_{1})\}$, we see that $c_{1}/2\leq 1/(4\bar{\lambda })$.
It follows that
\begin{align}
\int_{\Omega }
\widehat{\beta }_{\lambda }
\bigl( u_{\varepsilon, \lambda }(s) \bigr)dx
&\geq
\frac{1}{2}
\int_{\Omega }\widehat{\beta }_{\lambda } \bigl( u_{\varepsilon, \lambda }(s) \bigr)dx
+\frac{1}{4\bar{\lambda }}
\bigl|u_{\varepsilon, \lambda }(s)-J_{\lambda }
\bigl(
u_{\varepsilon, \lambda }(s)
\bigr)
\bigr|_{H}^{2} \nonumber \\
&\quad {} +\frac{c_{1}}{2}
\bigl|
J_{\lambda } \bigl(
u_{\varepsilon, \lambda }(s) \bigr) \bigr|_{H}^{2}-\frac{c_{2}}{2}|\Omega | \nonumber \\
&\geq \frac{1}{2}
\int_{\Omega }\widehat{\beta }_{\lambda }
\bigl(
u_{\varepsilon, \lambda }(s) \bigr) dx+\frac{c_{1}}{4}\bigl|u_{\varepsilon, \lambda }(s)\bigr|_{H}^{2}-\frac{c_{2}}{2}|\Omega | \label{3.7}
\end{align}
for a.a.\ $s \in (0, T)$.
Next, we use the {M}aclaurin expansion and {\rm (A3)} to obtain
\begin{equation*}
\bigl| \widehat{\pi }_{\varepsilon }(r) \bigr|
\leq
\bigl| \pi_{\varepsilon }(0) \bigr|
|r|
+ \frac{1}{2}
|\pi_{\varepsilon }'|_{L^{\infty }(\mathbb{R})}
r^{2}
\leq c_{3} \sigma (\varepsilon )(1+r^{2})
\end{equation*}
for all $r\in \mathbb{R}$.
Also, with assumption {\rm (A3)}, there exists $\bar{\varepsilon }\in (0, 1]$ such that
\begin{equation*}
\sigma (\varepsilon )\leq \frac{c_{1}}{8c_{3}(1+|\Omega |)}
\end{equation*}
for all $\varepsilon \in (0, \bar{\varepsilon }]$; that is, we deduce that
\begin{align}
- \int_{\Omega }
\widehat{\pi }_{\varepsilon }
\bigl( u_{\varepsilon, \lambda }(t) \bigr)dx
&\leq c_{3}\sigma (\varepsilon )\int_{\Omega }
\bigl(1+\bigl|u_{\varepsilon, \lambda }(t)\bigr|^{2}\bigr)dx \nonumber \\
&\leq \frac{c_{1}}{8}\bigl(1+\bigl|u_{\varepsilon, \lambda }(t)\bigr|_{H}^{2}\bigr)
\label{s6}
\end{align}
for all $t \in [0, T]$.
Hence, applying the {Y}oung inequality to \eqref{3.6} and collecting all of the above, we see that there exists a positive constant $M_{1}$ depending only on $c_{1}$, $c_{2}$, $c_{4}$, $|\Omega |$, and $|f|_{L^{2}(0, T; V)}$, independent of $\varepsilon \in (0, \bar{\varepsilon }], \lambda \in (0, \bar{\lambda }]$ such that \eqref{m1} holds. Finally, we have from \eqref{10}
\begin{equation*}
\int_{0}^{t} \bigl| \mu_{\varepsilon, \lambda }(s) \bigr|_{V}^{2}ds
= \int_{0}^{t} \bigl| u'_{\varepsilon, \lambda }(s) \bigr|_{V^*}^{2}ds\leq M_{1}
\end{equation*}
for all $t \in [0,T]$.
$\Box $
\paragraph{Lemma 3.2.}
{\it There exist positive constants $M_{2}$ and $M_{3}$, independent of $\varepsilon \in (0, \bar{\varepsilon }]$ and $\lambda \in (0, \bar{\lambda }]$, such that }
\begin{gather}
\int_{0}^{t} \bigl|\beta_{\lambda }
\bigl( u_{\varepsilon, \lambda }(s) \bigr) \bigr|_{H}^2ds \leq M_{2},
\label{4.23}
\\
\int_{0}^{t}
\bigl|\varepsilon \Delta u_{\varepsilon, \lambda }(s)
\bigr|_{H}^{2}ds\leq M_{3}
\label{lemma33b}
\end{gather}
{\it for all} $t\in [0, T]$.
\paragraph{Proof}
We test \eqref{9} at time $s\in (0, T)$ by $\beta_{\lambda }(u_{\varepsilon, \lambda }(s))\in H $. Then, we see that
\begin{gather*}
\bigl( \mu_{\varepsilon, \lambda }(s),
\beta_{\lambda }\bigl( u_{\varepsilon, \lambda }(s) \bigr)
\bigr)_{H}
= - \varepsilon \bigl( \Delta u_{\varepsilon, \lambda }(s),
\beta_{\lambda }\bigl(u_{\varepsilon, \lambda }( s )\bigr)
\bigr)_{H}
+ \bigl| \beta_{\lambda } \bigl( u_{\varepsilon, \lambda }(s) \bigr)
\bigr|_{H}^2 \\
{} + \bigl(\lambda u_{\varepsilon, \lambda }'(s)
+ \pi_{\varepsilon } \bigl(
u_{\varepsilon, \lambda }(s)
\bigr) - f(s),
\beta_{\lambda } \bigl( u_{\varepsilon, \lambda }(s) \bigr)
\bigr)_{H}
\end{gather*}
for a.a.\ $s\in (0, T)$.
Calculating the first term on the right-hand side of the above equation, we infer from the boundary condition of $u_{\varepsilon, \lambda }(s)$ that
\begin{align*}
-\varepsilon \bigl(\Delta u_{\varepsilon, \lambda }(s),
\beta_{\lambda }
\bigl( u_{\varepsilon, \lambda }(s) \bigr)\bigr)
& =
\varepsilon \int_{\Omega }
\beta_{\lambda }' \bigl( u_{\varepsilon, \lambda }(s) \bigr)
\bigl|\nabla u_{\varepsilon, \lambda }(s)\bigr|^{2}dx
+\kappa \varepsilon
\int_{\Gamma }
u_{\varepsilon, \lambda }(s)
\beta_{\lambda }\bigl( u_{\varepsilon, \lambda }(s) \bigr)d\Gamma
\\
& \ge 0
\end{align*}
for a.a.\ $s\in (0, T)$, by the monotonicity of $\beta_{\lambda }$. Hence, applying the {Y}oung inequality, we obtain
\begin{align*}
&\bigl|\beta_{\lambda } \bigl( u_{\varepsilon, \lambda }(s) \bigr)\bigr|_{H}^{2} \\
&\quad \leq \bigl( \mu_{\varepsilon, \lambda }(s)
- \lambda u_{\varepsilon, \lambda }'(s)
- \pi_{\varepsilon } \bigl( u_{\varepsilon, \lambda }(s) \bigr)
+ f(s), \beta_{\lambda }\bigl( u_{\varepsilon, \lambda }(s) \bigr) \bigr)_{H} \\
&\quad \leq \frac{1}{2}
\bigl|\beta_{\lambda } \bigl( u_{\varepsilon, \lambda }(s) \bigr)
\bigr|_{H}^{2}
+ 2\left(
\bigl| \mu_{\varepsilon, \lambda }(s) \bigr|_{H}^{2}
+\lambda ^{2}\bigl| u_{\varepsilon, \lambda }'(s) \bigr|_{H}^{2}
+\bigl|\pi_{\varepsilon } \bigl( u_{\varepsilon, \lambda }(s) \bigr)
\bigr|_{H}^{2}
+\bigl| f(s) \bigr|_{H}^{2} \right)
\end{align*}
for a.a.\ $s\in (0, T)$.
Integrating with respect to $s$ over $[0, t]$ and applying Lemma~3.1, we see that there exists a positive constant $M_{2}$ depending only on $M_{1}$, $c_3$, and $|f|_{L^{2}(0, T; H)}$ such that estimate \eqref{4.23} holds.
Next, by comparing with \eqref{9}, we obtain \eqref{lemma33b} for some positive constant $M_{3}$.
$\Box $
\section{Proof of convergence theorem}
We next show the existence of a solution of (P)$_{\varepsilon }$ by passing to the limit in the approximate problem (P)$_{\varepsilon, \lambda }$.
\subsection{Passage to the limit $\lambda \searrow 0$}
\paragraph{Proof of Proposition 2.1.}
\indent
From previous estimates established in Lemmas~3.1 and 3.2,
we see that there exists a subsequence $\{\lambda_{k}\}_{k\in \mathbb{N}}$ with $\lambda_{k}\searrow 0$
as $k\nearrow +\infty $ and some limit functions
$u_{\varepsilon }\in H^{1}(0, T; V^{*})\cap L^{\infty }(0, T; V)\cap L^2(0,T;W)$, $\mu_{\varepsilon }\in L^{2}(0, T; V)$, and $\xi_{\varepsilon }\in L^{2}(0, T; H)$ such that
\begin{gather}
\label{4.27}
u_{\varepsilon, \lambda_{k}}\to u_{\varepsilon }
\quad {\rm weakly~star~in~} H^{1}(0, T; V^{*})\cap L^{\infty }(0, T; V),
\\ \label{4.28}
\lambda_{k}u_{\varepsilon, \lambda_{k}}\to 0
\quad {\rm strongly~in~} H^{1}(0, T; H),
\\
\label{4.29}
\mu_{\varepsilon, \lambda_{k}}\to \mu_{\varepsilon }
\quad {\rm weakly~in~} L^{2}(0, T; V),
\\
\label{4.30}
\beta_{\lambda_{k}}(u_{\varepsilon, \lambda_{k}})\to \xi_{\varepsilon }
\quad {\rm weakly~in~} L^{2}(0, T; H),
\\
\label{4.28b}
\varepsilon \Delta u_{\varepsilon, \lambda_{k}} \to \varepsilon \Delta u_{\varepsilon }
\quad {\rm weakly~in~} L^{2}(0, T; H)
\end{gather}
as $k\nearrow +\infty $. From \eqref{4.27} and well-known compactness results (see, e.g., \cite{Sim87}), we obtain
\begin{equation}
\label{4.33}
u_{\varepsilon, \lambda_{k}}\to u_{\varepsilon }
\quad {\rm strongly~in~} C \bigl( [0, T]; H \bigr)
\end{equation}
as $k\nearrow +\infty $.
From \eqref{4.33}, we deduce that $u_{\varepsilon}(0)=u_{0\varepsilon }$ a.e.\ in $\Omega $.
Moreover, from \eqref{4.33} and the {L}ipschitz continuity of $\pi_{\varepsilon }$, we deduce that
\begin{equation}
\label{pi}
\pi_{\varepsilon }(u_{\varepsilon, \lambda_k })
\to \pi_{\varepsilon }(u_{\varepsilon })
\quad {\rm strongly~in~} C \bigl( [0, T]; H \bigr)
\end{equation}
as $k\nearrow +\infty $.
Also, applying \eqref{4.30}, \eqref{4.33}, and the monotonicity of $\beta $,
we obtain $\xi_{\varepsilon}\in \beta (u_{\varepsilon })$ a.e.\ in $Q$.
Finally, writing \eqref{9}--\eqref{10} in weak form at this level of approximation,
taking the limit as $k\nearrow +\infty $, and using \eqref{4.27}--\eqref{pi},
we find that $(u_\varepsilon, \mu_\varepsilon, \xi_\varepsilon )$ satisfies \eqref{2.13}--\eqref{2.14}.
$\Box$
\subsection{Passage to the limit $\varepsilon \searrow 0$}
\paragraph{Proof of Theorem 2.1.}
\indent
From the weak and strong convergences \eqref{4.27}--\eqref{4.28b}, the same kind of uniform estimates as in Lemmas~3.1 and 3.2 hold for $(u_{\varepsilon }, \mu_{\varepsilon }, \xi_{\varepsilon })$, that is,
\begin{gather*}
\int_{0}^{t}
\bigl|u'_{\varepsilon}(s)
\bigr|_{V^{*}}^{2}ds
+\varepsilon \bigl| u_{\varepsilon}(t) \bigr|_{V}^{2}
+ \frac{c_{1}}{4} \bigl| u_{\varepsilon}(t) \bigr |_{H}^{2} \leq M_{1},\\
\int_{0}^{t}\bigl| \mu_{\varepsilon}(s) \bigr|_{V}^{2}ds\leq M_{1},\\
\int_{0}^{t} \bigl|\xi_{\varepsilon}(s) \bigr|_{H}^2ds \leq M_{2}, \\
\int_{0}^{t}
\bigl|\varepsilon \Delta u_{\varepsilon}(s)
\bigr|_{H}^{2}ds \leq M_{3}
\end{gather*}
for all $t \in [0,T]$.
Hence, there exists a subsequence $\{\varepsilon_{k}\}_{k\in \mathbb{N}}$ with $\varepsilon _{k}\searrow 0$ as $k\nearrow +\infty $ and some limit functions $u\in H^{1}(0, T; V^{*})\cap L^{\infty }(0, T; H)$, $\mu \in L^{2}(0, T; V)$, $\xi \in L^{2}(0, T; H)$ such that
\begin{gather}
\label{4.1}
u_{\varepsilon_{k}}\to u \quad {\rm weakly~star~in~} H^{1}(0, T; V^{*})\cap L^{\infty }(0, T; H),
\\
\label{4.2}
\varepsilon_{k}u_{\varepsilon_{k}}\to 0 \quad {\rm strongly~in~} L^{\infty }(0, T; V),
\\
\label{4.3}
\mu_{\varepsilon_{k}}\to \mu \quad {\rm weakly~in~} L^{2}(0, T; V),
\\
\label{4.4}
\xi_{\varepsilon_{k}}\to \xi \quad {\rm weakly~in~} L^{2}(0, T; H)
\end{gather}
as $k\nearrow + \infty $.
From \eqref{4.1} and the well-known {A}scoli--{A}rzel\`a theorem (see, e.g., \cite{Sim87}), we obtain
\begin{equation}
\label{4.5}
u_{\varepsilon_{k}} \to u \quad {\rm strongly~in~} C \bigl( [0, T]; V^{*} \bigr).
\end{equation}
Moreover, by virtue of \eqref{4.2},
\begin{equation}
\varepsilon_{k}\Delta u_{\varepsilon_{k}}
\to 0 \quad {\rm weakly~in~} L^{2}(0, T; H)
\label{lap}
\end{equation}
as $k\nearrow +\infty $.
Aided by assumption {\rm (A3)}, we see that
\begin{equation*}
\bigl|\pi_{\varepsilon_{k}}(u_{\varepsilon_{k}})\bigr|
\leq |\pi_{\varepsilon_{k} }'|_{L^{\infty }(\mathbb{R})}
|u_{\varepsilon_{k}}|
+ \bigl| \pi_{\varepsilon _{k}}(0) \bigr|
\leq c_{3}\sigma (\varepsilon _{k})\bigl( 1+|u_{\varepsilon_{k}}|\bigr)
\quad {\rm a.e.\ in~}Q.
\end{equation*}
Therefore, we obtain
\begin{equation}
\label{4.6}
\pi_{\varepsilon_{k}}(u_{\varepsilon_{k}})
\to 0 \quad {\rm strongly~in~} L^{\infty }(0, T; H)
\end{equation}
as $k\nearrow +\infty $.
Now, from \eqref{2.13}, we have
\begin{equation*}
\bigl\langle u_{\varepsilon }'(t), z\bigr\rangle_{V^{*}, V}
+\bigl\langle F z,\mu_{\varepsilon }(t) \bigr\rangle_{V^*,V}
= 0 \quad {\rm for~all~}z\in V,
\end{equation*}
for a.a.\ $t\in (0, T)$.
Therefore, using \eqref{4.1}, \eqref{4.3} and \eqref{4.5}, and passing to the limit in the above, we obtain \eqref{2.19}, that is, \eqref{2.19'}.
On the other hand, in \eqref{2.14},
using \eqref{4.3}, \eqref{4.4}, \eqref{lap}, \eqref{4.6} and performing a comparison,
we obtain $\mu =\xi -f$ in $L^2(0,T;H)$.
Moreover, by comparison, this gives us the additional regularity
$\xi =\mu +f \in L^2(0,T;V)$ because $\mu, f \in L^2(0,T;V)$.
We now have $u\in C([0, T]; V^{*})\cap L^{\infty }(0, T; H)$;
the function $u$ is thus weakly continuous from $[0, T]$ to $H$, that is, \eqref{ic} holds.
Finally, it remains to show that $\xi \in \beta (u)$ a.e.\ in $Q$.
For this purpose, we show that
\begin{equation}
\label{4.7}
\limsup_{k \to +\infty }\int_{0}^{T}
\bigl(\xi_{\varepsilon_{k}}(t), u_{\varepsilon_{k}}(t)\bigr)_{H}dt
\leq \int_{0}^{T} \bigl(\xi (t), u(t) \bigr)_{H}dt.
\end{equation}
To this end, testing \eqref{2.14} by $u_{\varepsilon_{k}}(t)$ and integrating over $[0, T]$, we deduce from the boundary condition that
\begin{align*}
\int_{0}^{T} \bigl(\xi_{\varepsilon_{k}}(t),
u_{\varepsilon_{k}}(t)\bigr)_{H}dt
& = \int_{0}^{T}
\bigl(\mu_{\varepsilon_{k}}(t)+f(t), u_{\varepsilon_{k}}(t)
\bigr)_{H}dt
+\varepsilon_{k}
\int_{0}^{T}
\bigl(\Delta u_{\varepsilon_{k}}(t), u_{\varepsilon_{k}}(t)\bigr)_{H}dt \\
& \quad {} - \int_{0}^{T} \bigl(
\pi_{\varepsilon_{k}}\bigl(
u_{\varepsilon_{k}}(t)
\bigr), u_{\varepsilon_{k}} ( t )
\bigr)_{H}dt \\
& = \int_{0}^{T}
\bigl\langle u_{\varepsilon_{k}}(t), \mu_{\varepsilon_{k}}(t)+f(t) \bigr \rangle_{V^*,V} dt
-\varepsilon_{k}
\int_{0}^{T}\bigl|\nabla u_{\varepsilon_{k}}(t)\bigr|_{H^{d}}^{2}dt \\
& \quad {}
-
\varepsilon_{k} \kappa
\int_{0}^{T} \bigl| u_{\varepsilon_{k}}(t) \bigr|_{H_\Gamma }^2 dt
-\int_{0}^{T} \bigl(\pi_{\varepsilon_{k}} \bigl(
u_{\varepsilon_{k}}(t)
\bigr), u_{\varepsilon_{k}}(t)\bigr)_{H}dt \\
& \leq
\int_{0}^{T}\bigl\langle u_{\varepsilon_{k}}(t),
\mu_{\varepsilon_{k}}(t)+f(t)
\bigr\rangle_{V^*,V} dt
-\int_{0}^{T}\bigl(\pi_{\varepsilon_{k}}
\bigl( u_{\varepsilon_{k}}(t) \bigr), u_{\varepsilon_{k}}(t)\bigr)_{H}dt.
\end{align*}
From \eqref{4.3} and \eqref{4.5}, we see that
\begin{align*}
\lim_{k\to \infty }\int_{0}^{T}
\bigl\langle u_{\varepsilon_{k}}(t), \mu_{\varepsilon_{k}}(t) + f(t)\bigr\rangle_{V^{*}, V}dt
&= \int_{0}^{T} \bigl\langle u(t), \mu (t)+f(t)\bigr\rangle_{V^{*}, V}dt \\
&= \int_{0}^{T} \bigl(u(t), \mu (t)+f(t)\bigr)_{H}dt \\
&= \int_{0}^{T} \bigl(\xi (t), u(t)\bigr)_{H}dt.
\end{align*}
Moreover, from \eqref{4.1} and \eqref{4.6}, we have
\begin{equation*}
\lim_{k\to \infty }\int_{0}^{T}
\bigl(\pi_{\varepsilon_{k}}
\bigl(
u_{\varepsilon_{k}}(t)
\bigr), u_{\varepsilon_{k}}(t)\bigr)_{H}dt
=0.
\end{equation*}
Therefore, \eqref{4.7} holds.
Applying the closedness of the maximal monotone graph $\beta $ with respect to weak--weak convergence (see, e.g.,\ \cite[Lemma~2.3]{Bar10}), we conclude that $\xi \in \beta (u)$ a.e.\ in $Q$.
Thus, the proof of existence stated in Theorem~2.1 is complete.
Next, we show that the component $u$ of the weak solution of (P) is unique.
Now, for $i=1, 2$, let $(u_{i}, \xi _{i})$ be weak solutions of (P), with $\mu_{i}:=\xi_{i}-f$.
Then, from \eqref{2.19}, we have
\begin{align*}
& \bigl
\langle u_{1}'(t)-u_{2}'(t),
z\bigr\rangle_{V^{*}, V}
+\int_{\Omega }
\nabla \bigl(\mu_{1}(t)-\mu_{2}(t)\bigr)\cdot \nabla zdx \\
& \quad {} + \kappa \int_{\Gamma }
\bigl( \mu_{1}(t)-\mu_{2}(t) \bigr) z d\Gamma =0 \quad {\rm for~all~}z\in V.
\end{align*}
Here, putting $z:=F^{-1}(u_{1}(t)-u_{2}(t))$ in $V$ at time $t\in (0, T)$, using the monotonicity of $\beta $, and $\mu_{1}-\mu_{2}=\xi_{1}-\xi_{2}$ a.e.\ in $Q$, we infer that
\begin{align*}
& \int_{\Omega }
\nabla \bigl(\mu_{1}(t)-\mu_{2}(t)\bigr)\cdot \nabla F^{-1}\bigl( u_{1}(t)-u_{2}(t) \bigr)dx
+ \kappa \int_{\Gamma }
\bigl( \mu_{1}(t)-\mu_{2}(t) \bigr) F^{-1}\bigl( u_{1}(t)-u_{2}(t) \bigr) d\Gamma \\
& \quad = \bigl\langle FF^{-1} \bigl( u_{1}(t)-u_{2}(t)\bigr), \xi_{1}(t)-\xi_{2}(t)\bigr\rangle_{V^{*}, V} \\
& \quad = \bigl\langle u_{1}(t)-u_{2}(t), \xi_{1}(t)-\xi_{2}(t) \bigr \rangle_{V^*,V} \\
& \quad = \bigl(\xi_{1}(t)-\xi_{2}(t),u_{1}(t)-u_{2}(t) \bigr)_H \\
& \quad \geq 0.
\end{align*}
Therefore, for all $t\in [0, T]$, we deduce that
\begin{equation*}
\frac{1}{2}
\bigl|u_{1}(t)-u_{2}(t)\bigr|_{V^{*}}^{2}
+\int_{0}^{t}\bigl(\xi_{1}(s)-\xi_{2}(s),u_{1}(s)-u_{2}(s) \bigr)_H ds \le 0,
\end{equation*}
which implies that the component $u$ is unique.
Thus, we obtain our result.
$\Box $
\paragraph{Remark 4.1.} {\rm (i)}
Since the maximal monotone graph $\beta $ is allowed to be multi-valued, the component $\xi $ of the solution is not necessarily uniquely determined.
However, if $\beta $ is single-valued, then $\xi =\beta (u)$ is also unique. \\
{\rm (ii)} The argument for the proof of Theorem~2.1 is essentially the same as in \cite{CF16}.
Moreover, we can obtain error estimates under a slight reinforcement of the assumptions.
Indeed, suppose that \eqref{2.8} in {\rm (A3)} holds with $\sigma (\varepsilon ):=\varepsilon ^{1/2}$,
and that, in addition to {\rm (A5)}, there exists $c_{4}>0$ such that
\begin{equation}
|u_{0\varepsilon }-u_{0}|_{V^{*}}
\leq c_{4}\varepsilon ^{\frac{1}{4}} \quad {\rm for~all~}\varepsilon \in (0, 1].
\label{ee1}
\end{equation}
Then we obtain the error estimate
\begin{equation}
|u_{\varepsilon }
-u|_{C([0, T]; V^{*})}^{2}
+\int_{0}^{T}\bigl(\xi_{\varepsilon }(t)-\xi (t),
u_{\varepsilon }(t)-u(t)\bigr)_{H}dt
\leq C^{*}
\varepsilon ^{\frac{1}{3}}
\label{ee2}
\end{equation}
for all $\varepsilon \in (0, \bar{\varepsilon }]$, where $C^*$ is a positive constant depending only on the data (see, \cite[Theorem~5.1]{CF16}).
\section{Improvement of the results}
\indent
As mentioned in the Introduction, the growth condition {\rm (A2)} for $\beta $ is a key assumption in all of the results obtained by the ``{H}ilbert space approach'' to the nonlinear diffusion equation. Nevertheless, it is too restrictive from the perspective of applications. For this reason, relaxing the growth condition was studied in \cite{DK99} using the ``lower semicontinuous convex extension''; see also the different approach given in \cite{Aka09}.
In this section, we show how to dispense with the growth condition {\rm (A2)} on $\beta $.
Hereafter, we assume {\rm (A1)}, {\rm (A3)} with $\sigma (\varepsilon )=\varepsilon ^{1/2}$, {\rm (A5)}, and {\rm (A6)}; that is, assumption {\rm (A2)} is not imposed.
\paragraph{Proof of Theorem 2.2.}
Recall \eqref{10} in the following form
\begin{equation*}
u_{\varepsilon, \lambda }'(s)+F \mu_{\varepsilon, \lambda}(s)=0
\quad {\rm in~} V^*,
\end{equation*}
for a.a.\ $s \in (0,T)$.
Testing the above by $u_{\varepsilon, \lambda }(s)$, we see that
\begin{equation}
\label{6.1}
\bigl\langle u_{\varepsilon, \lambda }'(s), u_{\varepsilon, \lambda }(s)
\bigr\rangle_{V^{*}, V}
+\int_{\Omega }\nabla \mu_{\varepsilon, \lambda }(s)\cdot
\nabla u_{\varepsilon, \lambda }(s)dx
+\kappa \int_{\Gamma }\mu_{\varepsilon, \lambda }(s)u_{\varepsilon, \lambda }(s)d\Gamma =0
\end{equation}
for a.a.\ $s\in (0, T)$.
Moreover, testing \eqref{9} with $-\Delta u_{\varepsilon, \lambda }(s)$
and using boundary conditions and assumption {\rm (A6)}, we deduce that
\begin{align}
&\int_{\Omega }\nabla \mu_{\varepsilon, \lambda }(s)
\cdot \nabla u_{\varepsilon, \lambda }(s)dx
+\kappa \int_{\Gamma }\mu_{\varepsilon, \lambda }(s)u_{\varepsilon, \lambda }(s)d\Gamma
\nonumber \\
&=\frac{\lambda }{2}\frac{d}{ds}
\bigl|\nabla u_{\varepsilon, \lambda }(s)\bigr|_{H^{d}}^{2}
+\frac{\kappa \lambda }{2}\frac{d}{ds}\bigl|u_{\varepsilon, \lambda }(s)\bigr|_{H_{\Gamma }}^{2}
+\varepsilon \bigl|\Delta u_{\varepsilon, \lambda }(s)\bigr|_{H}^2
+\int_{\Omega }\beta '_{\lambda }
\bigl( u_{\varepsilon, \lambda }(s) \bigr)
\bigl|
\nabla u_{\varepsilon, \lambda }(s)
\bigl|^{2}dx \nonumber \\
&\quad
{}
+ \kappa
\int_{\Gamma }
u_{\varepsilon, \lambda }(s)
\beta_{\lambda } \bigl(
u_{\varepsilon, \lambda }(s) \bigr)
d\Gamma
- \bigl(\pi_{\varepsilon } \bigl(
u_{\varepsilon, \lambda }(s) \bigr),
\Delta u_{\varepsilon, \lambda }(s)\bigr)_{H}
+\bigl(\Delta f(s), u_{\varepsilon, \lambda }(s)\bigr)_{H} \label{6.2}
\end{align}
for a.a.\ $s\in (0, T)$. Here, we used
\begin{align*}
\bigl(
f(s), \Delta u_{\varepsilon, \lambda }(s)
\bigr)_{H}
&=-\int_{\Omega }\nabla f(s)\cdot \nabla u_{\varepsilon, \lambda }(s)dx
+\int_{\Gamma }f(s) \partial_{\boldsymbol{\nu }} u_{\varepsilon, \lambda }(s)d\Gamma \\
&=-\int_{\Omega }\nabla f(s)\cdot \nabla u_{\varepsilon, \lambda }(s)dx
-\kappa \int_{\Gamma } f(s)u_{\varepsilon, \lambda }(s)d\Gamma \\
&=-\int_{\Omega }\nabla f(s)\cdot \nabla u_{\varepsilon, \lambda }(s)dx
+\int_{\Gamma }\partial_{\boldsymbol{\nu }}f(s)u_{\varepsilon, \lambda }(s)d\Gamma \\
&=\bigl(\Delta f(s), u_{\varepsilon, \lambda }(s)\bigr)_{H}
\end{align*}
for a.a.\ $s\in (0, T)$.
Here, using {\rm (A3)} and the {Y}oung inequality, we infer that
\begin{align}
\bigl( \pi_{\varepsilon } \bigl( u_{\varepsilon, \lambda }(s) \bigr),
\Delta u_{\varepsilon, \lambda }(s)\bigr)_{H}
&\leq \int_{\Omega }
\left(
|\pi_{\varepsilon }'|_{L^{\infty }(\mathbb{R})}
\bigl| u_{\varepsilon, \lambda }(s) \bigr|
+\bigl|\pi_{\varepsilon }(0) \bigr|
\right)
\bigl|
\Delta u_{\varepsilon, \lambda }(s)
\bigr|dx \nonumber \\
&\leq \int_{\Omega }c_{3}
\varepsilon ^{\frac{1}{2}}
\left(1+\bigl|u_{\varepsilon, \lambda }(s)\bigr| \right)
\bigl|\Delta u_{\varepsilon, \lambda }(s)\bigr|dx \nonumber \\
&\leq \frac{\varepsilon }{2}
\bigl| \Delta u_{\varepsilon, \lambda }(s) \bigr|_{H}^{2}
+c_{3}^{2}
\left( |\Omega |+ \bigl|u_{\varepsilon, \lambda }(s)
\bigr|_{H}^{2} \right) \label{6.3}
\end{align}
for a.a.\ $s\in (0, T)$.
Then, combining (\ref{6.1})--(\ref{6.3})
and integrating the resultant with respect to $s$ over interval $[0, t]$, we obtain
\begin{align}
& \frac{1}{2}\bigl|u_{\varepsilon, \lambda }(t)
\bigr|_{H}^{2}
+\frac{\lambda }{2}\bigl|\nabla u_{\varepsilon, \lambda }(t)\bigr|_{H^{d}}^{2}
+\frac{\kappa \lambda }{2}\bigl|u_{\varepsilon, \lambda }(t)\bigr|_{H_{\Gamma }}^{2}
+\frac{\varepsilon }{2}
\int_{0}^{t}\bigl|\Delta u_{\varepsilon, \lambda }(s)\bigr|_{H}^2ds \nonumber \\
& \quad \leq \frac{1}{2}|u_{0\varepsilon }|_{H}^{2}
+\frac{\lambda }{2}|\nabla u_{0\varepsilon }|_{H^{d}}^{2}
+\frac{\kappa \lambda }{2}
|u_{0\varepsilon }|_{H_{\Gamma }}^{2} \nonumber \\
& \quad \quad {} +c_{3}^{2}
\int_{0}^{t}
\left(
|\Omega | +
\bigl|u_{\varepsilon, \lambda }(s)\bigr|_{H}^{2}
\right)ds
+\frac{1}{2}\int_{0}^{t}
\bigl|
\Delta f(s)
\bigr|_{H}^{2}ds
+\frac{1}{2}
\int_{0}^{t}\bigl|u_{\varepsilon, \lambda }(s)\bigr|_{H}^{2}
ds \label{6.5}
\end{align}
for all $t\in [0, T]$. Now, note that we can take $\lambda \leq \varepsilon $, because $\lambda $ tends to $0$ with fixed $\varepsilon $. Also, from \eqref{2.9} of assumption {\rm (A5)}, we have
\begin{equation}
\frac{1}{2}|u_{0\varepsilon }|_{H}^{2}
+\frac{\lambda }{2}|\nabla u_{0\varepsilon }|_{H^{d}}^{2}
+\frac{\kappa \lambda }{2}|u_{0\varepsilon }|_{H_{\Gamma }}^{2}\leq \frac{3}{2}(1+\kappa)c_{4}.
\label{6.5b}
\end{equation}
Then, using \eqref{6.5}, \eqref{6.5b}, and the {G}ronwall inequality, it follows that
\begin{align*}
&
\bigl|u_{\varepsilon, \lambda }(t)\bigr|_{H}^{2}
+\lambda \bigl|\nabla u_{\varepsilon, \lambda }(t)\bigr|_{H^{d}}^{2}
+\kappa \lambda \bigl|u_{\varepsilon, \lambda }(t) \bigr|_{H_{\Gamma }}^{2} \nonumber \\
& \quad \leq \left\{
3(1+\kappa )c_{4}+ \int_{0}^{T} \bigl| \Delta f(s) \bigr|_H^{2}ds
+2c_{3}^{2}|\Omega |T \right\} \exp \bigl\{ (2c_{3}^{2}+1)T \bigr\}=:M_4
\label{6.6}
\end{align*}
for all $t\in [0, T]$. Moreover,
\begin{equation*}
\varepsilon
\int_{0}^{t}\bigl|\Delta u_{\varepsilon, \lambda }(s)\bigr|_{H}^2ds
\leq M_4(1+2c_3^2 T+ T)
\end{equation*}
for all $t \in [0,T]$.
Hence, according to Lemma~3.1, using \eqref{3.6} and \eqref{s6} without \eqref{3.7}, we obtain
\begin{align*}
&\frac{1}{2}\int_{0}^{t}
\bigl|u_{\varepsilon, \lambda }'(s)
\bigr|_{V^{*}}^{2}ds
+\lambda \int_{0}^{t}
\bigl|u_{\varepsilon, \lambda }'(s)\bigr|_{H}^{2}ds
+\frac{\varepsilon }{2}\bigl|u_{\varepsilon, \lambda }(t)\bigr|_{V}^{2}
+\bigl|
\widehat{\beta }_{\lambda }
\bigl(
u_{\varepsilon, \lambda }(t)
\bigr)
\bigr|_{L^{1}(\Omega )} \nonumber \\
& \quad \leq
\frac{3}{2}c_{4}+\frac{c_1}{8}(1+c_4)+\frac{c_1}{8}(1+M_4)
+\frac{1}{2}|f|_{L^{2}(0, T; V)}^{2}
\end{align*}
for all $t\in [0, T]$.
Therefore, we obtain the same kind of uniform estimates \eqref{m1} and \eqref{m2} in Lemma~3.1 without the growth condition {\rm (A2)}.
The rest of the proof of Theorem 2.2 is the same as that of Theorem~2.1.
$\Box $
\paragraph{Remark 5.1.}
As stated in Remark~4.1, we can also improve the error estimate by assuming {\rm (A1)}, {\rm (A3)} with $\sigma (\varepsilon ):=\varepsilon ^{1/2}$, {\rm (A5)}, and \eqref{ee1}.
Then, \eqref{ee2} is improved to
\begin{equation*}
|u_{\varepsilon }
-u|_{C([0, T]; V^{*})}^{2}
+\int_{0}^{T}\bigl(\xi_{\varepsilon }(t)-\xi (t),
u_{\varepsilon }(t)-u(t)\bigr)_{H}dt
\leq C^{*}
\varepsilon ^{\frac{1}{2}}
\end{equation*}
for all $\varepsilon \in (0, \bar{\varepsilon }]$ (see, \cite[Theorem~6.1]{CF16}).
\section{Asymptotic limits to solutions of {N}eumann problem}
In this section, we establish the order of convergence between the {R}obin problem and the {N}eumann problem related to {\rm (P)}.
We rewrite our problem for {\rm (P)}, hereafter denoted {\rm (P)}$_{\rm R}$, as follows:
\begin{gather*}
{\rm (P)}_{\rm R}
\quad
\begin{cases}
\displaystyle \frac{\partial u_{\kappa }}{\partial t}
-\Delta \xi_{\kappa }=g_{\kappa }, \quad \xi_{\kappa }\in \beta (u_{\kappa })
\quad {\rm in~}Q, \\
\partial_{\boldsymbol{\nu }}\xi_{\kappa }+\kappa \xi_{\kappa }
=h_{\kappa } \quad {\rm on~}\Sigma, \\
u_{\kappa }(0)=u_{0\kappa } \quad {\rm in~}\Omega
\end{cases}
\end{gather*}
for $\kappa >0$, where $g_{\kappa }: Q\to \mathbb{R}$, $h_{\kappa }: \Sigma \to \mathbb{R}$, and $u_{0\kappa }: \Omega \to \mathbb{R}$ are given data, and $\beta $ is the same maximal monotone graph as in {\rm (A1)}.
Moreover, we recall the previous result \cite{CF16} for problem {\rm (P)} subject to the {N}eumann boundary condition,
\begin{gather*}
{\rm (P)}_{\rm N}
\quad
\begin{cases}
\displaystyle
\frac{\partial u}{\partial t}
-\Delta \xi =g, \quad \xi \in \beta (u) \quad {\rm in~}Q, \\
\partial_{\boldsymbol{\nu }}\xi
=h \quad {\rm on~}\Sigma, \\
u(0)=u_{0} \quad {\rm in~}\Omega
\end{cases}
\end{gather*}
which is denoted {\rm (P)}$_{\rm N}$, where $g: Q\to \mathbb{R}$, $h: \Sigma \to \mathbb{R}$, and $u_{0}: \Omega \to \mathbb{R}$ are given data.
The well-posedness for {\rm (P)}$_{\rm N}$ has already been discussed in \cite[Theorem~2.3]{CF16}; specifically, there exists a pair $(u, \xi )$ such that $u\in H^{1}(0, T;V^{*})\cap L^{\infty }(0, T;H)$, $\xi \in L^{2}(0, T;V)$ with $\xi \in \beta (u)$ a.e.\ in $Q$, and they satisfy
\begin{gather}
\bigl\langle u'(t), z\bigr\rangle_{V^{*}, V}
+\int_{\Omega }\nabla \xi (t)\cdot \nabla zdx
=
\int_{\Omega }^{} g(t)z dx +\int_{\Gamma }^{} h(t)z d\Gamma \nonumber \\
\quad {\rm for~all~}z\in V, \label{wfn}\quad
{\rm for~a.a.\ } t \in (0,T), \\
u(0)=u_{0} \quad {\rm a.e.~in}~\Omega. \nonumber
\end{gather}
For each $\kappa>0$, we assume in addition that,
\begin{enumerate}
\item[(A7)] $g$, $g_{\kappa }\in L^{2}(0, T; H)$, $h$, $h_{\kappa }\in L^{2}(0, T; H_{\Gamma })$ and $u_{0}$, $u_{0\kappa }\in H$ with $\widehat{\beta}({u_0}), \widehat{\beta }(u_{0\kappa}) \in L^1(\Omega )$. Moreover, there exists a positive constant $c_{5}$ such that
\begin{gather}
|g_{\kappa }-g|_{L^2(0,T;H)} \leq \kappa c_{5},
\quad
|h_{\kappa }-h|_{L^2(0,T;H_{\Gamma })}\leq \kappa c_{5},
\quad
|u_{0\kappa }-u_{0}|_{V^{*}}\leq \kappa c_{5}.
\label{7-1}
\end{gather}
\end{enumerate}
Then, we obtain the order of convergence between the {R}obin problem {\rm (P)}$_{\rm R}$
and the {N}eumann problem {\rm (P)}$_{\rm N}$ as $\kappa \searrow 0$:
\paragraph{Theorem 6.1.}
{\it Assume either {\rm (A1)}, {\rm (A2)}, {\rm (A7)}, or {\rm (A1)}, {\rm (A7)} with $h=h_\kappa \equiv 0$.
Let $(u, \xi )$ be the weak solution of {\rm (P)}$_{\rm N}$ and $(u_{\kappa }, \xi_{\kappa })$ be the weak solution of {\rm (P)}$_{\rm R}$.
Then, there exists a positive constant $M^{\star }$, depending only on the data, such that
\begin{equation}
|u_{\kappa }-u|_{C([0, T]; V^{*})}^{2}+2\int_{0}^{T}
\bigl(\xi_{\kappa }(s)-\xi (s), u_{\kappa }(s)-u(s)
\bigr)_{H}ds\leq M^{\star }\kappa ^{2} \label{pnpr}
\end{equation}
for all $\kappa >0$. Moreover, if $\beta $ is {L}ipschitz continuous, then it follows that
\begin{equation*}
\int_{0}^{T}
\bigl|
\xi_{\kappa }(s)-\xi (s) \bigr|_{H}^{2}ds\leq C_{\beta }M^{\star }\kappa ^{2}
\end{equation*}
for all $\kappa >0$, where $C_{\beta }$ is the {L}ipschitz constant for $\beta $. }
\paragraph{Proof}
We take the difference between \eqref{2.19'} for $u_\kappa $
and \eqref{wfn} for $u$ at $t=s$
and test it setting $z:=F^{-1}(u_{\kappa }(s)-u(s))$ at time $s\in (0, T)$. Then, we have
\begin{align*}
&
\frac{1}{2}\frac{d}{ds}
\bigl|u_{\kappa }(s)-u(s)\bigr|_{V^{*}}^{2}
+\int_{\Omega }^{}
\nabla \bigl(
\xi_{\kappa }(s)-\xi (s) \bigr) \cdot \nabla F^{-1}\bigl( u_{\kappa }(s)-u(s) \bigr) dx \\
&\quad {}+\kappa \int_{\Gamma }^{} \xi_{\kappa }(s)F^{-1}\bigl( u_{\kappa }(s)-u(s) \bigr) d\Gamma \\
& =\int_{\Omega }^{} \bigl( g_{\kappa }(s)-g(s) \bigr)
F^{-1} \bigl( u_{\kappa }(s)-u(s) \bigr) dx
+ \int_{\Gamma }^{} \bigl(h_{\kappa }(s)-h(s) \bigr)
F^{-1}\bigl(u_{\kappa }(s)-u(s) \bigr) d\Gamma.
\end{align*}
Now, note that
\begin{align*}
&\int_{\Omega }^{} \nabla \bigl(
\xi_{\kappa }(s)-\xi (s) \bigr)
\cdot \nabla F^{-1}\bigl( u_{\kappa }(s)-u(s) \bigr) dx
+\kappa \int_{\Gamma }^{} \xi_{\kappa }(s)F^{-1} \bigl( u_{\kappa }(s)-u(s) \bigr) d\Gamma\\
& =\bigl\langle F F^{-1}\bigl( u_{\kappa }(s)-u(s) \bigr), \xi_{\kappa }(s)-\xi (s)\bigr\rangle_{V^{*}, V}
+\kappa \bigl(\xi (s), F^{-1}\bigl(u_{\kappa }(s)-u(s)\bigr)\bigr)_{H_{\Gamma }} \\
& =\bigl(u_{\kappa }(s)-u(s), \xi_{\kappa }(s)-\xi (s) \bigr)_{H}
+\kappa \bigl(\xi (s), F^{-1}\bigl(u_{\kappa }(s)-u(s)\bigr) \bigr)_{H_{\Gamma }}.
\end{align*}
Therefore, using the {Y}oung inequality and the trace theorem
$|\cdot |_{H_\Gamma }^2\le c_{\rm P}'|\cdot |_V^2$, we deduce that
\begin{align}
&\frac{1}{2}\frac{d}{ds}\bigl|u_{\kappa }(s)-u(s)\bigr|_{V^{*}}^{2}
+\bigl(\xi_{\kappa }(s)-\xi (s), u_{\kappa }(s)-u(s)\bigr)_{H} \nonumber \\
&=-\kappa \bigl(\xi (s), F^{-1} \bigl( u_{\kappa }(s)-u(s) \bigr)\bigr)_{H_{\Gamma }}
+\bigl(g_{\kappa }(s)-g(s), F^{-1} \bigl(u_{\kappa }(s)-u(s) \bigr)\bigr)_{H}
\nonumber \\
&\quad \quad
+\bigl(h_{\kappa }(s)-h(s), F^{-1} \bigl(u_{\kappa }(s)-u(s) \bigr)\bigr)_{H_{\Gamma }}
\nonumber \\
&\leq \frac{3}{2}\kappa ^{2}c_{\rm P}'\bigl|\xi (s)\bigr|_{H_{\Gamma }}^{2}
+\frac{1}{6c_{\rm P}'}\bigl| F^{-1}\bigl( u_{\kappa }(s)-u(s) \bigr)\bigr|_{H_{\Gamma }}^{2}
+\frac{3}{2} \bigl| g_{\kappa }(s)-g(s)\bigr|_{H}^{2}\nonumber \\
& \quad {}
+\frac{1}{6} \bigl| F^{-1}\bigl( u_{\kappa }(s)-u(s) \bigr)\bigr|_{H}^{2}
+\frac{3}{2}c_{\rm P}' \bigl|h_{\kappa }(s)-h(s)\bigr|_{H_{\Gamma }}^{2}
+\frac{1}{6c_{\rm P}'} \bigl| F^{-1}\bigl( u_{\kappa }(s)-u(s) \bigr)\bigr|_{H_{\Gamma }}^{2} \nonumber \\
&\leq \frac{3}{2}\kappa ^{2}c_{\rm P}'^2
\bigl| \xi (s) \bigr|_{V}^{2}
+\frac{3}{2} \bigl| g_{\kappa }(s)-g(s)\bigr|_{H}^{2}
+\frac{3}{2}c_{\rm P}'\bigl|h_{\kappa }(s)-h(s)\bigr|_{H_{\Gamma }}^{2}
+\frac{1}{2} \bigl|u_{\kappa }(s)-u(s)\bigr|_{V^{*}}^{2}
\label{pnpr1}
\end{align}
for a.a.\ $s\in (0, T)$.
Here, using \eqref{7-1} of {\rm (A7)} and the {G}ronwall inequality, we infer that
\begin{align*}
& \bigl| u_{\kappa }(t)-u(t) \bigr|_{V^{*}}^{2} \nonumber \\
&\leq \left\{
|u_{0\kappa }-u_{0}|_{V^{*}}^{2}
+3c_{\rm P}'^2 \kappa ^{2}
| \xi |_{L^2(0,T;V)}^{2}
+ 3 |g_{\kappa }-g|_{L^2(0,T;H)}^{2}
+ 3c_{\rm P}' |h_{\kappa }-h|_{L^2(0,T;H_\Gamma)}^{2}
\right\}\exp (T) \nonumber \\
&\leq \kappa ^2 \bigl\{c_{5}^{2}
+3c_{\rm P}'^2| \xi |_{L^2(0,T;V)}^{2} +3c_5^2 + 3 c_{\rm P}'c_5^2\bigr\}\exp (T)
\end{align*}
for all $t\in [0, T]$.
Moreover, by integrating \eqref{pnpr1} over $[0, T]$ with respect to $s$, there exists a positive constant $M^{\star }$ such that the estimate \eqref{pnpr} holds.
If $\beta $ is {L}ipschitz continuous, we infer from its monotonicity that
\begin{align*}
\int_{0}^{T}
\bigl|\xi_{\kappa }(s)-\xi (s)\bigr|_{H}^{2}ds
&\leq \int_{0}^{T}\! \!
\int_{\Omega }C_{\beta }
\bigl|u_{\kappa }(s)-u(s)\bigr| \bigl|\xi_{\kappa }(s)-\xi (s)\bigr|dxds \\
&=C_{\beta }\int_{0}^{T}\bigl(\xi_{\kappa }(s)-\xi (s), u_{\kappa }(s)-u(s)\bigr)_{H}ds \\
&\leq C_{\beta }M^{\star }\kappa ^{2}.
\end{align*}
This completes the proof. $\Box $
\section*{Acknowledgments}
We thank Richard Haase, Ph.D, from Edanz Group (www.edanzediting.com/ac) for editing a draft of this manuscript.
\section*{Appendix}
We use the same notation as in the previous sections.
To characterize the operator $A$ as in Section~3, we employ the maximal monotone theory related to the subdifferential.
To do so, we define a proper lower semicontinuous convex functional $\varphi : H\to [0, +\infty ]$,
\begin{equation*}
\label{varp}
\varphi (z):=\begin{cases}
\displaystyle \frac{1}{2}\int_{\Omega }|\nabla z|^{2}dx
+\frac{\kappa }{2}\int_{\Gamma }|z|^{2}d\Gamma
& {\rm if}
\quad z\in V, \\
+\infty & {\rm otherwise}. \\
\end{cases}
\end{equation*}
We now present the representation of the subdifferential operator $\partial \varphi$ (see, e.g., \cite[pp.62--65, Proposition~2.9]{Bar10}).
\paragraph{Lemma A.}
{\it The subdifferential $\partial \varphi $ on $H$ is characterized by}
$$\partial \varphi (z)=-\Delta z \quad {\it in~}H \quad {\it for~all~} z\in D(\partial \varphi )=W. $$
\paragraph{Proof}
Let $z\in D(\partial \varphi )\subset D(\varphi )=V$ and let $z^{*}\in \partial \varphi (z)$ in $H$. From the definition of the subdifferential $\partial \varphi $, we have
\begin{equation*}
(z^{*}, \tilde{z})_{H}
=(\nabla z, \nabla \tilde{z})_{H^{d}}
+\kappa (z, \tilde{z})_{H_{\Gamma }}
\quad {\rm for~all~} \tilde{z}\in V.
\end{equation*}
Now, taking $\tilde{z}\in \mathcal{D}(\Omega )$ as the test function, we deduce $z^{*}=-\Delta z$ in $\mathcal{D}'(\Omega )$, that is,
$z^* = -\Delta z$ in $H$ by comparison.
Hence, for all $\tilde{z}\in V$, we have
\begin{equation*}
(z^{*}, \tilde{z})_{H}
=(-\Delta z, \tilde{z})_{H}
+ \langle \partial_{\boldsymbol{\nu} } z, \tilde{z} \rangle
+ \kappa\int_{\Gamma }^{} z \tilde{z} d\Gamma,
\end{equation*}
specifically, by comparison, $\partial_{\boldsymbol{\nu }} z = -\kappa z$ in $H^{1/2}(\Gamma)$.
Therefore, from the theory of elliptic regularity for the {N}eumann problem, it follows that $z \in H^2(\Omega )$.
Thus, we infer that $\partial \varphi (z)$ is a singleton equal to $-\Delta z$, with $D(\partial \varphi )=\{z\in H^{2}(\Omega ) : \partial_{\boldsymbol{\nu} }z+\kappa z=0 \ {\rm a.e.~on~}\Gamma \}$.
$\Box $ \\
Finally, we can define $A$ by $\partial \varphi $ and then find the same abstract structure for the evolution equation \eqref{1}, which has also been treated in previous works \cite{CF15, KNP95, Kub12}.
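As an illustrative aside (ours, not part of the proof), the statement of Lemma~A can also be checked in a one-dimensional finite-difference setting: the gradient of a discretization of $\varphi $ is a tridiagonal operator that discretizes $-z''$ in the interior, with the {R}obin condition $\partial_{\boldsymbol{\nu }}z+\kappa z=0$ encoded in the boundary rows. The grid size, the value of $\kappa $, and the tolerances below are ad hoc choices.
\begin{verbatim}
# 1D finite-difference sanity check of Lemma A (illustrative sketch only).
# Discretize varphi(z) = (1/2)\int_0^1 |z'|^2 dx + (kappa/2)(z(0)^2 + z(1)^2)
# on a uniform grid and compare the exact gradient of the discrete energy
# with a numerical (central-difference) gradient.
import numpy as np

N, kappa = 50, 2.0
h = 1.0 / (N - 1)

def phi(z):
    return 0.5 * np.sum(np.diff(z) ** 2) / h + 0.5 * kappa * (z[0] ** 2 + z[-1] ** 2)

def grad_phi(z):
    # exact gradient: h times the three-point stencil for -z'' at interior nodes,
    # and Robin rows  -z'(0)+kappa z(0)  and  z'(1)+kappa z(1)  at the endpoints
    g = np.empty_like(z)
    g[1:-1] = (2.0 * z[1:-1] - z[:-2] - z[2:]) / h
    g[0] = (z[0] - z[1]) / h + kappa * z[0]
    g[-1] = (z[-1] - z[-2]) / h + kappa * z[-1]
    return g

def grad_phi_numeric(z, eps=1e-6):
    # coordinate-wise central differences of the discrete energy
    g = np.zeros_like(z)
    for i in range(len(z)):
        e = np.zeros_like(z); e[i] = eps
        g[i] = (phi(z + e) - phi(z - e)) / (2.0 * eps)
    return g

z = np.random.default_rng(0).standard_normal(N)
print(np.max(np.abs(grad_phi(z) - grad_phi_numeric(z))))   # small: the two agree
\end{verbatim}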
\end{document}
\begin{document}
\begin{center}
{\large \bf Distributions of several infinite families of mesh patterns}
\end{center}
\begin{center}
Sergey Kitaev$^{a}$, Philip B. Zhang$^{b}$, Xutong Zhang$^{c}$\\[6pt]
$^{a}$Department of Computer and Information Sciences \\
University of Strathclyde, 26 Richmond Street, Glasgow G1 1XH, UK\\[6pt]
$^{b,c}$College of Mathematical Science \\
Tianjin Normal University, Tianjin 300387, China\\[6pt]
Email: $^{a}${\tt [email protected]},
$^{b}${\tt [email protected]},
$^{c}${\tt [email protected]}
\end{center}
\noindent\textbf{Abstract.}
Br\"and\'en and Claesson introduced mesh patterns to provide explicit expansions for certain permutation statistics as linear combinations of (classical) permutation patterns. The first systematic study of the avoidance of mesh patterns was conducted by Hilmarsson et al., while the first systematic study of the distribution of mesh patterns was conducted by the first two authors.
In this paper, we provide far-reaching generalizations for 8 known distribution results and 5 known avoidance results related to mesh patterns by giving distribution or avoidance formulas for certain infinite families of mesh patterns in terms of distribution or avoidance formulas for smaller patterns. Moreover, as a corollary to a general result, we find the distribution of one more mesh pattern of length 2. \\[5pt]
\noindent {\bf Keywords:} mesh pattern, distribution, avoidance \\[5pt]
\noindent {\bf AMS Subject Classifications:} 05A15
\section{Introduction}\label{intro}
Patterns in permutations and words have attracted much attention in the literature (see~\cite{Kit} and references therein), and this area of research continues to grow rapidly.
The notion of a {\em mesh pattern}, generalizing several classes of patterns, was introduced by Br\"and\'en and Claesson \cite{BrCl} to provide explicit expansions for certain permutation statistics as, possibly infinite, linear combinations of (classical) permutation patterns. Several papers are dedicated to the study of mesh patterns and their generalizations \cite{AKV,Borie,JKR,KL,KR1,KRT,T1,T2}.
Let $\dbrac{0,k}$ denote the interval of the integers from $0$ to $k$. A pair $(\tau,R)$, where $\tau$ is a permutation of length $k$ written in one-line notation and $R$ is a subset of $\dbrac{0,k} \times \dbrac{0,k}$, is a
\emph{mesh pattern} of length $k$. Let $\boks{i}{j}$ denote the box whose corners have coordinates $(i,j), (i,j+1),
(i+1,j)$, and $(i+1,j+1)$. Let the horizontal lines represent the values, and the vertical lines denote the positions in the pattern. Mesh patterns can be drawn by shading the boxes in $R$. For example, the picture
\[
\pattern{scale=1}{3}{1/2,2/3,3/1}{1/2, 2/1}
\]
represents the mesh pattern with $\tau=231$ and $R = \{\boks{1}{2},\boks{2}{1}\}$. A mesh pattern $(\tau,R)$ of length $k\geq 2$ is {\em irreducible} if the permutation $\tau=\tau_1\tau_2\cdots\tau_k$ is irreducible, that is, if there exists no $i$, where $2\leq i\leq k$, such that $\tau_j<\tau_i$ for all $1\leq j<i$. For convenience, in this paper we assume that if $\tau$ is of length 1 then it is {\em not} irreducible, even though normally such a $\tau$ is assumed to be irreducible. All mesh patterns of interest in this paper can be found in Tables~\ref{tab-1}--\ref{tab-3}, where patterns' numbers $<66$ are coming from \cite{Hilmarsson2015Wilf,SZ}. Also, we let $Z:= \pattern{scale = 0.8}{1}{1/1}{0/0,1/1}$.
A subsequence $\pi'=\pi_{i_1}\pi_{i_2}\cdots\pi_{i_k}$ of a permutation $\pi=\pi_1\pi_2\cdots \pi_n$ is an occurrence of a mesh pattern $(\tau,R)$ if (a) $\pi'$ is order-isomorphic to $\tau$, and (b) the shaded squares given by $R$ do not contain any elements of $\pi$ not appearing in $\pi'$. For example, the mesh pattern of length 3 drawn above appears three times in the permutation $24531$ (as the subsequences 241, 453, and 231). Note that even though the subsequences 251 and 451 are order-isomorphic to 231 (the $\tau$ in the drawn pattern), they are not occurrences of the pattern because of the elements 4 and 3, respectively, lying in the shaded squares. See \cite{Hilmarsson2015Wilf} for more examples of occurrences of mesh patterns in permutations.
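To make the definition concrete, the following brute-force script (ours, for illustration only) checks every subsequence of $24531$ that is order-isomorphic to $231$ against condition (b) for the pattern drawn above; the function works for any mesh pattern given as a permutation together with a set of shaded boxes.
\begin{verbatim}
# Brute-force occurrences of a mesh pattern (illustrative script).
from itertools import combinations

def occurrences(pi, tau, shaded):
    # pi and tau are one-line notations; shaded is a set of boxes (i, j)
    # with 0 <= i, j <= len(tau).
    n, k = len(pi), len(tau)
    found = []
    for pos in combinations(range(1, n + 1), k):          # positions i_1 < ... < i_k
        vals = [pi[p - 1] for p in pos]
        rank = {v: r + 1 for r, v in enumerate(sorted(vals))}
        if tuple(rank[v] for v in vals) != tuple(tau):     # (a) order-isomorphism
            continue
        cols = (0,) + pos + (n + 1,)                       # vertical grid lines
        rows = (0,) + tuple(sorted(vals)) + (n + 1,)       # horizontal grid lines
        empty = all(not any(rows[b] < pi[x - 1] < rows[b + 1]
                            for x in range(cols[a] + 1, cols[a + 1]))
                    for (a, b) in shaded)                  # (b) shaded boxes empty
        if empty:
            found.append(tuple(vals))
    return found

print(occurrences((2, 4, 5, 3, 1), (2, 3, 1), {(1, 2), (2, 1)}))
# -> [(2, 4, 1), (2, 3, 1), (4, 5, 3)]; the subsequences 251 and 451 are
#    order-isomorphic to 231 but are ruled out by the elements 4 and 3.
\end{verbatim}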
Let $S_n$ be the set of permutations of length $n$.
Given a permutation $\pi$, denote by $p(\pi)$ the number of occurrences of the pattern $p$ in $\pi$. Denote by $S(p)$ the set of permutations avoiding $p$.
We let $A_p(x)$ be the generating function for $S(p)$ and let
$$F_p(x,q):=\sum_{n\geq 0}x^n\sum_{\pi\in S_n}q^{p(\pi)}.$$
In this paper we provide various generalizations of results in~\cite{SZ}. The main idea of this paper is to consider a mesh pattern $p$, and to replace some of its unshaded boxes by mesh patterns. To illustrate this idea, consider the mesh pattern
\begin{center}
$P=$ \begin{tikzpicture}[scale=1, baseline=(current bounding box.center)]
\foreach \x/\y in {0/0,0/1,1/0}
\fill[gray!20] (\x,\y) rectangle +(1,1);
\draw (0.01,0.01) grid (1+0.99,1+0.99);
\filldraw (1,1) circle (3pt) node[below left]{\quad};
\filldraw (1.2,1.7) circle (0pt) node[below right] {$p_1$};
\end{tikzpicture}
\end{center}
obtained from the smaller mesh pattern $Y = \pattern{global scale = 0.8}{1}{1/1}{0/0,0/1,1/0}$ by inserting a mesh pattern $p_1$ in the box $(1,1)$. We can then find the distribution of $P$ in terms of the distribution of $p_1$, which not only allows us to obtain three results in~\cite{SZ} at the same time (distributions of the patterns Nr.\ 12, 13, and 17; see Section~\ref{sec2}) but also to derive the previously unknown distribution of the pattern
\begin{center}
Nr.\ 66\ =\ \begin{tikzpicture}[scale=0.6, baseline=(current bounding box.center)]
\foreach \x/\y in {0/0,0/1,2/0,1/0,0/2,1/1}
\fill[gray!20] (\x,\y) rectangle +(1,1);
\draw (0.01,0.01) grid (2+0.99,2+0.99);
\filldraw (1,1) circle (4pt) ;
\filldraw (2,2) circle (4pt);
\end{tikzpicture}
\end{center}
(the pattern's number is introduced by us in this paper), which is not equivalent to any of the patterns in~\cite{Hilmarsson2015Wilf}.
\begin{table}[htbp]
{
\renewcommand{\arraystretch}{1.7}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
{Nr.\ } & {Repr.\ $p$} & {Generalization} & {\ $p_1$ } & {\ $p_2$} & {\ $p_3$} & {Distribution}
\\[8pt]
\hline \hline
$X$ &
$\pattern{global scale = 0.8}{1}{1/1}{0/1,1/0}$ &
\raisebox{0.6ex}{\begin{tikzpicture}[scale = 0.4, baseline=(current bounding box.center)]
\foreach \x/\y in {0/1,1/0}
\fill[gray!20] (\x,\y) rectangle +(1,1);
\draw (0.01,0.01) grid (1+0.99,1+0.99);
\node at (1.6,1.6) {\tiny $p_1$};
\filldraw (1,1) circle (4pt) node[above left] {};
\end{tikzpicture}}
& {Irreducible} & - & - & Theorem~\ref{th:pattern X}
\\[8pt]
\hline
$Y$ & $\pattern{global scale = 0.8}{1}{1/1}{0/0,0/1,1/0}$ &
\raisebox{0.3ex}{\begin{tikzpicture}[scale = 0.4, baseline=(current bounding box.center)]
\foreach \x/\y in {0/1,1/0,0/0}
\fill[gray!20] (\x,\y) rectangle +(1,1);
\draw (0.01,0.01) grid (1+0.99,1+0.99);
\node at (1.6,1.6) {\tiny $p_1$};
\filldraw (1,1) circle (4pt) node[above left] {};
\end{tikzpicture}}
& {Any} & - & - & Theorem~\ref{th:pattern Y}
\\[8pt]
\hline
12 & $\pattern{global scale = 0.6}{2}{1/1,2/2}{0/0,0/1,0/2,1/0,2/0}$ & - & - & -
& - & Corollary~\ref{cor1}
\\[8pt]
\hline
13 & $\pattern{global scale = 0.6}{2}{1/1,2/2}{0/0,0/1,0/2,1/0,2/0,2/1,2/2,1/2}$
& {\begin{tikzpicture}[scale = 0.3, baseline=(current bounding box.center)]
\foreach \x/\y in {0/0,0/1,0/2,1/0,2/0,2/1,2/2,1/2}
\fill[gray!20] (\x,\y) rectangle +(1,1);
\draw (0.01,0.01) grid (2+0.99,2+0.99);
\node at (1.6,1.6) {\tiny $p_1$};
\filldraw (1,1) circle (4pt) node[above left] {};
\filldraw (2,2) circle (4pt) node[above left] {};
\end{tikzpicture}}
& {Any} & - & - & Theorem ~\ref{th:pattern 13}
\\[-10pt]
& & & & & & Corollary~\ref{cor2} \\
\hline
17 & $\pattern{global scale = 0.6}{2}{1/1,2/2}{0/0,0/1,0/2,1/0,2/0,2/1,1/2}$ & - & - & - & - & Corollary~\ref{cor3}
\\[8pt]
\hline
19 & $\pattern{global scale = 0.6}{2}{1/1,2/2}{0/1,0/2,1/1,1/2,2/0,2/2}$ &
\raisebox{0.3ex}{\begin{tikzpicture}[scale = 0.3, baseline=(current bounding box.center)]
\foreach \x/\y in {0/1,0/2,1/1,1/2,2/0,2/2}
\fill[gray!20] (\x,\y) rectangle +(1,1);
\draw (0.01,0.01) grid (2+0.99,2+0.99);
\node at (2.6,1.6) {\tiny $p_1$};
\filldraw (1,1) circle (4pt) node[above left] {};
\filldraw (2,2) circle (4pt) node[above left] {};
\end{tikzpicture}}
& {Any} & - & - & Theorem~\ref{th:pattern 19}
\\[8pt]
\hline
20 & $\pattern{global scale = 0.6}{2}{1/1,2/2}{0/0,0/1,0/2,1/1,1/2,2/1,2/0}$ &
\raisebox{0.6ex}{\begin{tikzpicture}[scale = 0.3, baseline=(current bounding box.center)]
\foreach \x/\y in {0/0,0/1,0/2,1/1,1/2,2/1,2/0}
\fill[gray!20] (\x,\y) rectangle +(1,1);
\draw (0.01,0.01) grid (2+0.99,2+0.99);
\node at (2.6,2.6) {\tiny $p_1$};
\node at (1.6,0.5) {\tiny $p_2$};
\filldraw (1,1) circle (4pt) node[above left] {};
\filldraw (2,2) circle (4pt) node[above left] {};
\end{tikzpicture}}
& {Any} & {Any} & - & Theorem~\ref{th:pattern 20}
\\[8pt]
\hline
22 & $\pattern{global scale = 0.6}{2}{1/1,2/2}{0/0,0/1,1/1,1/2,2/0,2/2}$ &
\raisebox{0.6ex}{\begin{tikzpicture}[scale = 0.3, baseline=(current bounding box.center)]
\foreach \x/\y in {0/0,0/1,1/1,1/2,2/0,2/2}
\fill[gray!20] (\x,\y) rectangle +(1,1);
\draw (0.01,0.01) grid (2+0.99,2+0.99);
\node at (0.6,2.6) {\tiny $p_1$};
\node at (1.6,0.5) {\tiny $p_2$};
\node at (2.6,1.6) {\tiny $p_3$};
\filldraw (1,1) circle (4pt) node[above left] {};
\filldraw (2,2) circle (4pt) node[above left] {};
\end{tikzpicture}}
& {Any} & {Any} & {Any} & Theorem~\ref{th:pattern 22}
\\[8pt]
\hline
28 & $\pattern{global scale = 0.6}{2}{1/1,2/2}{0/0,0/1,1/0,1/2,2/1,2/2}$ &
{\begin{tikzpicture}[scale = 0.3, baseline=(current bounding box.center)]
\foreach \x/\y in {0/0,0/1,1/0,1/2,2/1,2/2}
\fill[gray!20] (\x,\y) rectangle +(1,1);
\draw (0.01,0.01) grid (2+0.99,2+0.99);
\node at (1.6,1.6) {\tiny $p_1$};
\filldraw (1,1) circle (4pt) node[above left] {};
\filldraw (2,2) circle (4pt) node[above left] {};
\end{tikzpicture}}
& {Any} & - & - & Theorem~\ref{th:pattern 28}
\\[8pt]
\hline
33 & $\pattern{global scale = 0.6}{2}{1/1,2/2}{0/1,0/2,1/0,2/0,2/1,1/2}$ &
\raisebox{0.6ex}{\begin{tikzpicture}[scale = 0.3, baseline=(current bounding box.center)]
\foreach \x/\y in {0/1,0/2,1/0,2/0,2/1,1/2}
\fill[gray!20] (\x,\y) rectangle +(1,1);
\draw (0.01,0.01) grid (2+0.99,2+0.99);
\node at (2.6,2.6) {\tiny $p_1$};
\filldraw (1,1) circle (4pt) node[above left] {};
\filldraw (2,2) circle (4pt) node[above left] {};
\end{tikzpicture}} & {Irreducible} & - & - & Theorem~\ref{th:pattern 33}
\\[8pt]
\hline
66 & $\pattern{global scale = 0.6}{2}{1/1,2/2}{0/0,0/1,0/2,1/0,2/0,1/1}$ & - & - & - & - & Corollary~\ref{cor4}
\\[8pt]
\hline
\cline{1-6}
\end{tabular}
\end{center} }
\caption{Distributions of generalizations of short mesh patterns. Note that pattern $X$ is essentially equivalent to pattern $Z$ in the sense of distribution.}
\label{tab-1}
\end{table}
\begin{table}[htbp]
{
\renewcommand{\arraystretch}{1.6}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline {Nr.\ } & {Repr.\ $p$} & {Generalization} & {\ $p_1$ } & {\ $p_2$} & {Avoidance}
\\[5pt]
\hline \hline
27 & $\pattern{scale = 0.6}{2}{1/1,2/2}{0/1,0/2,1/0,1/1,2/0,2/2}$ &
\raisebox{0.3ex}{\begin{tikzpicture}[scale = 0.3, baseline=(current bounding box.center)]
\foreach \x/\y in {0/1,0/2,1/0,1/1,2/0,2/2}
\fill[gray!20] (\x,\y) rectangle +(1,1);
\draw (0.01,0.01) grid (2+0.99,2+0.99);
\node at (0.6,0.6) {\tiny $p_1$};
\node at (2.6,1.6) {\tiny $p_2$};
\filldraw (1,1) circle (4pt) node[above left] {};
\filldraw (2,2) circle (4pt) node[above left] {};
\end{tikzpicture}} & {Any} & {Any}
& Theorem~\ref{th:pattern 27}
\\[5pt]
\hline
28 & $\pattern{scale = 0.6}{2}{1/1,2/2}{0/0,0/1,1/0,1/2,2/1,2/2}$ &
\raisebox{0.3ex}{\begin{tikzpicture}[scale = 0.3, baseline=(current bounding box.center)]
\foreach \x/\y in {0/0,0/1,1/0,1/2,2/1,2/2}
\fill[gray!20] (\x,\y) rectangle +(1,1);
\draw (0.01,0.01) grid (2+0.99,2+0.99);
\node at (1.6,1.6) {\tiny $p_1$};
\node at (2.6,0.6) {\tiny $p_2$};
\filldraw (1,1) circle (4pt) node[above left] {};
\filldraw (2,2) circle (4pt) node[above left] {};
\end{tikzpicture}} & {Any} & {Any}
& Theorem~\ref{th:pattern 28-2}
\\[5pt]
\hline
30 & $\pattern{scale = 0.6}{2}{1/1,2/2}{0/1,0/2,1/0,1/1,1/2,2/0,2/1}$ & \raisebox{0.6ex}{\begin{tikzpicture}[scale = 0.3, baseline=(current bounding box.center)]
\foreach \x/\y in {0/1,0/2,1/0,1/1,1/2,2/0,2/1}
\fill[gray!20] (\x,\y) rectangle +(1,1);
\draw (0.01,0.01) grid (2+0.99,2+0.99);
\node at (2.6,2.6) {\tiny $p_1$};
\filldraw (1,1) circle (4pt) node[above left] {};
\filldraw (2,2) circle (4pt) node[above left] {};
\end{tikzpicture}} &
{Two general classes} & - & Theorem~\ref{th:pattern 30}
\\[5pt]
\cline{1-6}
\end{tabular}
\end{center}
}
\caption{Avoidance for generalizations of short mesh patterns.}
\label{tab-2}
\end{table}
For a more sophisticated example illustrating the power of our results in this paper, suppose that one wants to find the distribution of the mesh pattern in Figure~\ref{sophist-ex-mesh}. Approaching this problem directly is probably not doable. However, one can see that the elements $a$ and $b$ give a mesh pattern of the form in Figure~\ref{pic-thm-pat-19}, so that Theorem~\ref{th:pattern 19} can be
applied with $p_1$ there given by the elements $c$ and $d$, which give a mesh
pattern of the form in Figure~\ref{pic-thm-pat-28}, so that Theorem~\ref{th:pattern 28} can be applied. Finally, $p_1$ in Theorem~\ref{th:pattern 28} in our example is nothing else but the mesh pattern Nr.\ 66, so Corollary~\ref{cor3} can be applied. This will result in the distribution of the mesh pattern in Figure~\ref{sophist-ex-mesh} be
\begin{small}
$$F(x)\left(1+x(F(x)-1)\left(\frac{1}{1+x^2F(x)\left(F(x)-1-qx\sum_{n=1}^{\infty}\prod_{i=1}^{n-1}(q+i)x^n \right)}-1\right)\right), $$
\end{small}
where $F(x):=\sum_{n\geq 0}n!x^n$.
\begin{figure}
\caption{A mesh pattern of length 6}
\label{sophist-ex-mesh}
\end{figure}
In Tables~\ref{tab-1} and~\ref{tab-2} we give references to our enumerative results related to distribution and avoidance, respectively. Moreover, in Table~\ref{tab-3} we give references to our distribution and avoidance results on certain generalizations of short mesh patterns. We note that the patterns $p_1$, $p_2$, $p_3$ in Tables~\ref{tab-1}--\ref{tab-3} can be empty, in which case one needs to substitute the generating functions $A_{p_i}(x)$ and $F_{p_i}(x,q)$ by 0 and $qF(x)$, respectively, in our results. Indeed, one can assume that any permutation contains exactly one occurrence of the empty pattern, which makes the substitutions work. Also, in the case of empty $p_1$, one needs to set $k=1$ in our results related to the pattern Nr.\ 34 (in Theorems~\ref{thm-pat-34} and \ref{th:pattern 34-2}). In this way, one can obtain any previous results in \cite{Hilmarsson2015Wilf,SZ} related to the patterns appearing in this paper.
\begin{table}[htbp]
{
\renewcommand{\arraystretch}{1.7}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline {Nr.\ } & {Repr.\ $p$} & {Generalization} & {\ $p_1$ } & {\ $p_2$} & {Reference}
\\[5pt]
\hline \hline
33 & $\pattern{scale = 0.6}{2}{1/1,2/2}{0/1,0/2,1/0,2/0,2/1,1/2}$ & \raisebox{0.6ex}{\begin{tikzpicture}[scale = 0.3, baseline=(current bounding box.center)]
\foreach \x/\y in {0/1,0/2,1/0,2/0,2/1,1/2}
\fill[gray!20] (\x,\y) rectangle +(1,1);
\draw (0.01,0.01) grid (2+0.99,2+0.99);
\node at (2.6,2.6) {\tiny $p_1$};
\node at (1.5,1.65) {\tiny $\iddots$};
\filldraw (1,1) circle (4pt) node[above left] {};
\filldraw (2,2) circle (4pt) node[above left] {};
\end{tikzpicture}} & {Irreducible} & - & Theorem~\ref{th:pattern 33-2}
\\[5pt]
\hline
34 & $\pattern{scale = 0.6}{2}{1/1,2/2}{0/0,0/1,1/0,1/1,1/2,2/1,2/2}$ &
\raisebox{0.6ex}{\begin{tikzpicture}[scale = 0.3, baseline=(current bounding box.center)]
\foreach \x/\y in {0/0,0/1,1/0,1/1,1/2,2/1,2/2}
\fill[gray!20] (\x,\y) rectangle +(1,1);
\draw (0.01,0.01) grid (2+0.99,2+0.99);
\node at (1.5,1.5) {\tiny $p_1$};
\filldraw (1,1) circle (4pt) node[above left] {};
\end{tikzpicture}}
& {Any} & - & Theorem~\ref{thm-pat-34}
\\[5pt]
\hline
34 & $\pattern{scale = 0.6}{2}{1/1,2/2}{0/0,0/1,1/0,1/1,1/2,2/1,2/2}$ & \raisebox{0.3ex}{\begin{tikzpicture}[scale = 0.3, baseline=(current bounding box.center)]
\foreach \x/\y in {0/0,0/1,1/0,1/1,1/2,2/1,2/2}
\fill[gray!20] (\x,\y) rectangle +(1,1);
\draw (0.01,0.01) grid (2+0.99,2+0.99);
\node at (1.5,1.5) {\tiny $p_1$};
\node at (2.5,0.5) {\tiny $p_2$};
\filldraw (1,1) circle (4pt) node[above left] {};
\end{tikzpicture}}
& {Any} & {Any} & Theorem~\ref{th:pattern 34-2}
\\[5pt]
\cline{1-6}
\end{tabular}
\end{center}
}
\caption{Generalizations of short mesh patterns allowing replacement of $\tau=12$ either by an increasing permutation or by a permutation beginning with the smallest element with all boxes shaded.
The distributions are given for Nr.\ 33, and 34 with $p_1$ but without $p_2$, and the avoidance for Nr.\ 34 with $p_1$ and $p_2$.}
\label{tab-3}
\end{table}
In this paper, we need the following result.
\begin{thm}[{\cite[Theorem 1.1]{SZ}}]\label{thm-length-1} Let
$$F(x,q) =\sum_{n\geq 0}x^n\sum_{\pi\in S_n}q^{\pattern{scale=0.5}{1}{1/1}{0/1,1/0}(\pi)}=\sum_{n\geq 0}x^n\sum_{\pi\in S_n}q^{\pattern{scale=0.5}{1}{1/1}{0/0,1/1}(\pi)}$$ and let $A(x)$ be the generating function for $S(\pattern{scale=0.5}{1}{1/1}{0/1,1/0}\ )=S(\pattern{scale=0.5}{1}{1/1}{0/0,1/1}\ )$. Then,
$$A(x)=\frac{F(x)}{1+xF(x)},\ \ \ \ \ F(x,q)=\frac{F(x)}{1+x(1-q)F(x)}.$$
\end{thm}
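For small lengths, Theorem~\ref{thm-length-1} can be checked by brute force. The following Python sketch is only an illustration (it is not part of the original argument); it reads an occurrence of the first pattern in Theorem~\ref{thm-length-1} as an element that is simultaneously a left-to-right maximum and a right-to-left minimum, and compares the avoidance counts with the series coefficients of $F(x)/(1+xF(x))$.
\begin{verbatim}
from itertools import permutations
from math import factorial

N = 7

def avoids(pi):
    # avoid the length-1 pattern with boxes (0,1) and (1,0) shaded:
    # no element may be both a left-to-right maximum and a
    # right-to-left minimum
    n = len(pi)
    return not any(all(pi[j] < pi[i] for j in range(i)) and
                   all(pi[j] > pi[i] for j in range(i + 1, n))
                   for i in range(n))

# truncated series of A(x) = F(x)/(1 + x F(x)), with F(x) = sum n! x^n
F = [factorial(n) for n in range(N + 1)]
H = [1] + [factorial(k - 1) for k in range(1, N + 1)]  # 1 + x F(x)
A = []
for n in range(N + 1):
    A.append(F[n] - sum(H[k] * A[n - k] for k in range(1, n + 1)))

counts = [sum(avoids(pi) for pi in permutations(range(1, n + 1)))
          for n in range(N + 1)]
assert counts == A, (counts, A)
print("avoidance counts match F/(1+xF) up to n =", N)
\end{verbatim}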
This paper is organized as follows. In Section~\ref{sec2} we present distribution and avoidance results for certain mesh patterns derived from the pattern $Y$. In Section~\ref{sec3} we study the distribution and avoidance for certain patterns derived from the patterns Nr.\ 13, 19, 20, 22, and 28. Section~\ref{sec4} is dedicated to the distribution and avoidance for certain mesh patterns derived from the pattern $X$. In Section~\ref{sec5} we deal with avoidance results for certain mesh patterns derived from the patterns Nr.\ 27, 28, 30, and 34. Finally, we provide some concluding remarks in Section~\ref{final-sec}.
\section{A generalization of the pattern $Y$}\label{sec2}
In this section, we consider a generalization of the pattern $Y = \pattern{scale = 0.8}{1}{1/1}{0/0,0/1,1/0}$. As an application of our general results, we will find the distributions of the following patterns:\\
Nr.~12 = $\pattern{scale = 0.6}{2}{1/1,2/2}{0/0,0/1,0/2,1/0,2/0}$, \quad
Nr.~13 = $\pattern{scale = 0.6}{2}{1/1,2/2}{0/0,0/1,0/2,1/0,2/0,2/1,2/2,1/2}$, \quad
Nr.~17 = $\pattern{scale = 0.6}{2}{1/1,2/2}{0/0,0/1,0/2,1/0,2/0,2/1,1/2}$, \quad
Nr.~66 = $\pattern{scale=0.6}{2}{1/1,2/2}{0/0,0/1,0/2,1/0,2/0,1/1}$.\\
\begin{thm}\label{th:pattern Y}
Suppose that $p$ is the pattern shown in Figure~\ref{pic-thm-pat-Y}, where $p_1$ is any mesh pattern, and the label $a$ is to be ignored. Then,
\begin{align}
A_p(x)& =(1-x)F(x)+xA_{p_1}(x),\label{avoidance-patt-Y-2}\\[6pt]
F_p(x,q)& =(1-x)F(x)+xF_{p_1}(x,q).\label{dis-pattern-Y-2}
\end{align}
\end{thm}
\begin{figure}
\caption{Related to the proof of Theorem~\ref{th:pattern Y}}
\label{pic-thm-pat-Y}
\end{figure}
\begin{proof}
Any permutation counted by $F(x)$ either avoids $p$, which is counted by $A_p(x)$, or contains at least one occurrence of $p$. The generating function for the latter case is $x(F(x)-A_{p_1}(x))$. Indeed, for any occurrence of $p$, the element $a$ in Figure~\ref{pic-thm-pat-Y} is the same. Now the North East box in the figure must contain at least one occurrence of $p_1$, which is counted by $F(x)-A_{p_1}(x)$, and the element $a$ contributes the factor of $x$. This leads to
\begin{equation}\label{av-pattern-Y}
A_p(x) +x(F(x)-A_{p_1}(x))=F(x).
\end{equation}
For the distribution, we have the following functional equation:
\begin{equation}\label{dis-pattern-Y}
A_p(x) +x(F_{p_1}(x,q)-A_{p_1}(x))=F_p(x,q).
\end{equation}
Our proof of \eqref{dis-pattern-Y} is essentially the same as that in the avoidance case.
In particular, the contribution of the North East box is $F_{p_1}(x,q)-A_{p_1}(x)$, since every occurrence of $p_1$ there, along with the element $a$, will give an occurrence of $p$, and all occurrences of $p$ are obtained in this way.
The formulas \eqref{avoidance-patt-Y-2} and \eqref{dis-pattern-Y-2} now follow from the formulas \eqref{av-pattern-Y} and \eqref{dis-pattern-Y}, respectively. This completes the proof.
\end{proof}
The following results follow from Theorem~\ref{th:pattern Y}.
\begin{coro}[{\cite[Theorem 2.4]{SZ}}]\label{cor1} For the pattern Nr.~$12$ = $p=\pattern{scale=0.6}{2}{1/1,2/2}{0/0,0/1,0/2,1/0,2/0}$, we have
\begin{align*}
A_p(x) & =(1-x)F(x)+x, \\[6pt]
F_p(x,q) & =(1-x)F(x)+xF(qx).
\end{align*}
\end{coro}
\begin{proof} The mesh pattern $p$ is obtained from the pattern $Y$ by inserting the pattern $p_1=\pattern{scale=0.8}{1}{1/1}{}$. It is easy to see that $A_{p_1}(x)=1$ and $F_{p_1}(x,q)=F(qx)$.
After substituting these into \eqref{avoidance-patt-Y-2} and \eqref{dis-pattern-Y-2}, we obtain the desired result. \end{proof}
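For concreteness, the avoidance part of Corollary~\ref{cor1} can be verified by brute force for small lengths. The Python sketch below is an illustration only; it hard-codes the classical part $12$ and the shaded boxes of the pattern Nr.~12, and tests box-emptiness in the standard way (a shaded box of an occurrence may not contain any element of the permutation).
\begin{verbatim}
from itertools import permutations, combinations
from math import factorial

def contains(pi, shaded):
    # does pi contain the mesh pattern with classical part 12 and the
    # given set of shaded boxes (column, row), 0 <= column, row <= 2?
    n = len(pi)
    for i, j in combinations(range(n), 2):
        if pi[i] >= pi[j]:
            continue
        ok = True
        for k in range(n):
            if k in (i, j):
                continue
            col = 0 if k < i else (1 if k < j else 2)
            v = pi[k]
            row = 0 if v < pi[i] else (1 if v < pi[j] else 2)
            if (col, row) in shaded:
                ok = False
                break
        if ok:
            return True
    return False

nr12 = {(0, 0), (0, 1), (0, 2), (1, 0), (2, 0)}
for n in range(8):
    avoiders = sum(not contains(pi, nr12)
                   for pi in permutations(range(1, n + 1)))
    predicted = 1 if n <= 1 else factorial(n) - factorial(n - 1)
    assert avoiders == predicted, (n, avoiders, predicted)
print("avoidance of Nr. 12 matches (1-x)F(x)+x for n <= 7")
\end{verbatim}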
\begin{coro}[{\cite[Theorem 2.5]{SZ}}]\label{cor2} For the pattern Nr.~$13$ = $p=\pattern{scale=0.6}{2}{1/1,2/2}{0/0,0/1,2/0,2/2,1/0,0/2,1/2,2/1}$, we have
\begin{align*}
A_p(x)& =(1-x^2)F(x),\\[6pt]
F_p(x,q)& =(1-x^2+qx^2)F(x).
\end{align*}
\end{coro}
\begin{proof} The mesh pattern $p$ is obtained from the pattern $Y$ by inserting the pattern $p_1=\pattern{scale=0.8}{1}{1/1}{0/1,1/0,1/1}$. A permutation contains an occurrence of $p_1$ exactly when it ends with its largest element, so $A_{p_1}(x)=F(x)-xF(x)$ and $F_{p_1}(x,q)=F(x)-xF(x)+ qxF(x)$.
Substituting these into \eqref{avoidance-patt-Y-2} and \eqref{dis-pattern-Y-2}, we obtain the desired result. \end{proof}
\begin{coro}[{\cite[Theorem 3.2]{SZ}}]\label{cor3} For the pattern Nr.~$17$ = $p=\pattern{scale = 0.6}{2}{1/1,2/2}{0/1,1/2,0/0,2/0,1/0,0/2,2/1}$, we have
$$F_p(x,q)=\left(1-x + \frac{x}{1+x(1-q)F(x)}\right)F(x).$$
Also, $A_p(x)=F_p(x,0)$.
\end{coro}
\begin{proof} The mesh pattern $p$ is obtained from the pattern $Y$ by inserting the pattern $p_1=\pattern{scale=0.8}{1}{1/1}{0/1,1/0}$. By Theorem~\ref{thm-length-1}, we have $F_{p_1}(x,q)=\frac{F(x)}{1+x(1-q)F(x)}$. After substituting it into~\eqref{dis-pattern-Y-2}, we obtain the desired result.\end{proof}
The following result is new.
\begin{coro}\label{cor4} For the pattern Nr.~$66$ = $p=\pattern{scale=0.6}{2}{1/1,2/2}{0/0,0/1,2/0,1/0,0/2,1/1}$, we have
\begin{align*}
A_p(x) & =(1-x)F(x)+x, \\
F_p(x,q) & =(1-x)F(x)+x+x\sum_{n=1}^{\infty}\prod_{i=0}^{n-1}(q+i)x^n.
\end{align*}
\end{coro}
\begin{proof} The mesh pattern $p$ is obtained from the pattern $Y$ by inserting the pattern $p_1=\pattern{scale=0.8}{1}{1/1}{0/0}$. Note that $A_{p_1}(x)=1$ and the distribution of $p_1$ is given by the {\em unsigned Stirling numbers of the first kind} \cite[p. 19]{Stanley}:
\begin{align*}
F_{p_1}(x,q)=1+\sum_{n=1}^{\infty}\prod_{i=0}^{n-1}(q+i)x^n.
\end{align*} Substituting these into \eqref{avoidance-patt-Y-2} and \eqref{dis-pattern-Y-2} gives the desired result. \end{proof}
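Since the distribution of $p_1$ drives Corollary~\ref{cor4}, a quick sanity check may be helpful: an occurrence of $p_1=\pattern{scale=0.8}{1}{1/1}{0/0}$ is a left-to-right minimum, and the number of permutations of length $n$ with $k$ left-to-right minima is the coefficient of $q^k$ in $\prod_{i=0}^{n-1}(q+i)$. The following Python sketch (an illustration only) confirms this for small $n$.
\begin{verbatim}
from itertools import permutations

def lr_minima(pi):
    # occurrences of the length-1 pattern with box (0,0) shaded
    # are exactly the left-to-right minima of pi
    m, cnt = float('inf'), 0
    for v in pi:
        if v < m:
            m, cnt = v, cnt + 1
    return cnt

def stirling_poly(n):
    # coefficients of prod_{i=0}^{n-1} (q+i) as [c_0, ..., c_n]
    coeffs = [1]
    for i in range(n):
        new = [0] * (len(coeffs) + 1)
        for k, c in enumerate(coeffs):
            new[k] += c * i        # multiply by i
            new[k + 1] += c        # multiply by q
        coeffs = new
    return coeffs

for n in range(1, 8):
    counts = [0] * (n + 1)
    for pi in permutations(range(1, n + 1)):
        counts[lr_minima(pi)] += 1
    assert counts == stirling_poly(n), (n, counts)
print("left-to-right minima follow the unsigned Stirling numbers")
\end{verbatim}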
\section{Distributions of several mesh patterns}\label{sec3}
In this section, we generalize known distribution results for the patterns
\begin{center}
Nr. 13 = $\pattern{scale = 0.6}{2}{1/1,2/2}{0/0,0/1,1/0,0/2,2/0,2/2,2/1,1/2}$, \quad
Nr. 19 = $\pattern{scale = 0.6}{2}{1/1,2/2}{0/1,0/2,1/1,1/2,2/0,2/2}$, \quad
Nr. 20 = $\pattern{scale = 0.6}{2}{1/1,2/2}{0/0,0/1,0/2,1/1,1/2,2/1,2/0}$, \\[6pt]
Nr. 22 = $\pattern{scale = 0.6}{2}{1/1,2/2}{0/0,0/1,1/1,1/2,2/0,2/2}$, \quad
Nr. 28 = $\pattern{scale=0.6}{2}{1/1,2/2}{0/0,0/1,1/0,1/2,2/1,2/2}$.
\end{center}
The theorems proved in this section are not so difficult (and somewhat similar), but they prepare the reader for the upcoming more involved distribution or avoidance results.
\subsection{The pattern Nr. 13}
We first consider the generalization of the pattern Nr. 13 = $\pattern{scale = 0.6}{2}{1/1,2/2}{0/0,0/1,1/0,0/2,2/0,2/2,2/1,1/2}$.
\begin{thm}\label{th:pattern 13}
Suppose that $p$ is the pattern shown in Figure~\ref{pic-thm-pat-13}, where $p_1$ is any mesh pattern, and the labels $a$ and $b$ are to be ignored. Then, \begin{align*}
A_p(x) & =F(x)-x^2(F(x)-A_{p_1}(x)),\\[6pt]
F_p(x,q) & = (1-x^2)F(x)+x^2 F_{p_1}(x,q).
\end{align*}
\end{thm}
\begin{figure}
\caption{Related to the proof of Theorem~\ref{th:pattern 13}}
\label{pic-thm-pat-13}
\end{figure}
\begin{proof}
We have the following functional equation:
\begin{equation}\label{av-pattern-13}
A_p(x) +x^2(F(x)-A_{p_1}(x))=F(x).
\end{equation}
Indeed, any permutation counted by $F(x)$ either avoids $p$, which is counted by $A_p(x)$, or contains at least one occurrence of $p$. The generating function for the latter case is $x^2(F(x)-A_{p_1}(x))$. Indeed, an occurrence of $p$ implies at least one occurrence of $p_1$ in the central box in Figure~\ref{pic-thm-pat-13}, which contributes the factor of $F(x)-A_{p_1}(x)$. Besides, the factor of $x^2$ is contributed by $ab$.
For the distribution, we have the following functional equation:
\begin{equation}\label{dis-pattern-13}
A_p(x) +x^2(F_{p_1}(x,q)-A_{p_1}(x))=F_p(x,q).
\end{equation}
The proof of \eqref{dis-pattern-13} is essentially the same as that in the avoidance case. In particular, the factor of $F_{p_1}(x,q)-A_{p_1}(x)$ comes from the fact that the elements $a$ and $b$ in Figure~\ref{pic-thm-pat-13} are the same in any occurrence of $p$.
The desired result follows from \eqref{av-pattern-13} and \eqref{dis-pattern-13}.
This completes the proof.
\end{proof}
\subsection{The pattern Nr. 19}
Now we generalize the pattern Nr. 19 = $\pattern{scale = 0.6}{2}{1/1,2/2}{0/1,0/2,1/1,1/2,2/0,2/2}$.
\begin{thm}\label{th:pattern 19}
Suppose that $p$ is the pattern shown in Figure~\ref{pic-thm-pat-19}, where $p_1$ is any mesh pattern, and the labels $a$, $b$, $A$, and $B$ are to be ignored. Then, \begin{align*}
A_p(x)&=F(x)-x(F(x)-1)(F(x)-A_{p_1}(x)),\\[6pt]
F_p(x,q) &= F(x)+x(F(x)-1)(F_{p_1}(x,q)-F(x)).
\end{align*}
\end{thm}
\begin{figure}
\caption{Related to the proof of Theorem~\ref{th:pattern 19}}
\label{pic-thm-pat-19}
\end{figure}
\begin{proof}
We have the following functional equation:
\begin{equation}\label{av-pattern-19}
A_p(x)+x(F(x)-1)(F(x)-A_{p_1}(x))=F(x),
\end{equation}
where the right hand side counts all permutations. Indeed, each permutation either avoids $p$, which is given by the $A_p(x)$ term in \eqref{av-pattern-19}, or it contains at least one occurrence of $p$. The latter case is given by the second term on the left hand side of \eqref{av-pattern-19}, which shall be proved in the following.
To pick the occurrence $ab$ in Figure~\ref{pic-thm-pat-19}, we choose $b$ the \emph{highest} possible and $a$ is then uniquely determined. Referring to this figure, we note that the East box must contain at least one occurrence of $p_1$, which is counted by $F(x)-A_{p_1}(x)$. Furthermore, the boxes $A$ and $B$ together with $a$, which is the maximum element in the permutation $AaB$, contribute the factor of $F(x)-1$. Note that $a$ must exist, so $AaB$ is not empty and there are no other restrictions for $AaB$.
Finally, $b$ contributes the factor of $x$. Thus, we complete the proof of \eqref{av-pattern-19} and hence give the formula of $A_p(x)$.
For the distribution, we have the following functional equation:
\begin{equation}\label{dis-pattern-19}
A_p(x)+x(F(x)-1)(F_{p_1}(x,q)-A_{p_1}(x))=F_p(x,q).
\end{equation}
The proof of \eqref{dis-pattern-19} is essentially the same as that in the avoidance case.
Since $a$ is uniquely determined, the boxes $A$ and $B$ together with $a$ contribute the factor of $F(x)-1$. The East box contributes the factor of $F_{p_1}(x,q)-A_{p_1}(x)$, since each occurrence of $p_1$ in that box induces one occurrence of $p$. Finally, the factor of $x$ corresponds to the element $b$, which completes the proof of \eqref{dis-pattern-19}.
Substituting the formula of $A_p(x)$ into \eqref{dis-pattern-19} gives the formula of $F_p(x,q)$ as desired.
\end{proof}
\subsection{The pattern Nr. 20}
Now, we generalize the pattern $\pattern{scale = 0.6}{2}{1/1,2/2}{0/0,0/1,0/2,1/1,1/2,2/1,2/0}$ by adding $p_1$ and $p_2$ in it.
\begin{thm}\label{th:pattern 20}
Suppose that $p$ is the pattern shown in Figure~\ref{pic-thm-pat-20}, where $p_1$ and $p_2$ are any mesh patterns, and the labels $a$ and $b$ are to be ignored. Then,
\begin{align*}
A_p(x)=&\, F(x)-x^2(F(x)-A_{p_1}(x))(F(x)-A_{p_2}(x)),\\[6pt]
F_p(x,q)=&\, F(x)+x^2 \big( (F_{p_1}(x,q)-A_{p_1}(x))(F_{p_2}(x,q)-A_{p_2}(x))-\\
&(F(x)-A_{p_1}(x))(F(x)-A_{p_2}(x)) \big).
\end{align*}
\end{thm}
\begin{figure}
\caption{Related to the proof of Theorem~\ref{th:pattern 20}}
\label{pic-thm-pat-20}
\end{figure}
\begin{proof}
We have the following functional equation:
\begin{equation}\label{av-pattern-20}
A_p(x)+x^2(F(x)-A_{p_1}(x))(F(x)-A_{p_2}(x))=F(x).
\end{equation}
Indeed, the right hand side of \eqref{av-pattern-20} counts all permutations. On the left hand side, $A_p(x)$ gives avoidance of $p$, and the other term, to be justified next, counts permutations with at least one occurrence of $p$. Indeed, the elements $a$ and $b$ contributing the factor of $x^2$ are uniquely determined, which is evident from Figure~\ref{pic-thm-pat-20}. Referring to this figure, we note that the upper (resp., lower) non-shaded box $A$ (resp., $B$) must contain at least one occurrence of $p_1$ (resp., $p_2$) counted by $F(x)-A_{p_1}(x)$ (resp., $F(x)-A_{p_2}(x)$). Thus, we complete the proof of \eqref{av-pattern-20} and hence give the formula of $A_p(x)$.
For the distribution, we have the following functional equation:
\begin{equation}\label{dis-pattern-20}
A_p(x)+x^2(F_{p_1}(x,q)-A_{p_1}(x))(F_{p_2}(x,q)-A_{p_2}(x))=F_p(x,q).
\end{equation}
The proof of~\eqref{dis-pattern-20} is essentially the same as that in the avoidance case.
The box $A$ (resp., $B$) contributes the factor of $F_{p_1}(x,q)-A_{p_1}(x)$ (resp., $F_{p_2}(x,q)-A_{p_2}(x)$), since each occurrence of $p_1$ in $A$, together with any occurrence of $p_2$ in $B$, form an occurrence of $p$. Together with the factor of $x^2$ corresponding to $ab$, we complete the proof of \eqref{dis-pattern-20}.
Substituting the formula of $A_p(x)$ into \eqref{dis-pattern-20} gives the desired result.
\end{proof}
\subsection{The pattern Nr. 22}
Next we generalize the pattern $\pattern{scale=0.6}{2}{1/1,2/2}{0/0,0/1,1/1,1/2,2/0,2/2}$ by adding $p_1$, $p_2$, and $p_3$ in it.
\begin{thm}\label{th:pattern 22}
Suppose that $p$ is the pattern shown in Figure~\ref{pic-thm-pat-22}, where $p_1$, $p_2$, and $p_3$ are any mesh patterns, and the labels $a$ and $b$ are to be ignored. Then,
\begin{align*}
A_p(x) = & \, F(x)-x^2 (F(x)-A_{p_1}(x))(F(x)-A_{p_2}(x))(F(x)-A_{p_3}(x)),\\[6pt]
F_p(x,q) = & \, F(x)+x^2\Big( (F_{p_1}(x,q)-A_{p_1}(x))(F_{p_2}(x,q)-A_{p_2}(x))(F_{p_3}(x,q)-A_{p_3}(x))\\
& \quad -(F(x)-A_{p_1}(x))(F(x)-A_{p_2}(x)) (F(x)-A_{p_3}(x))\Big).
\end{align*}
\end{thm}
\begin{figure}
\caption{Related to the proof of Theorem~\ref{th:pattern 22}}
\label{pic-thm-pat-22}
\end{figure}
\begin{proof}
Any permutation, counted by $F(x)$, either avoids $p$, which is counted by $A_p(x)$, or contains at least one occurrence of $p$. In the latter situation, the elements $ab$ in an occurrence of $p$ are uniquely defined. Indeed, looking at Figure~\ref{pic-thm-pat-22} we see that another occurrence of $p$ cannot be inside of any of the unshaded boxes:
\begin{itemize}
\item if $p$ would occur in the box labeled with $p_1$, it would contradict the South East box being shaded;
\item if $p$ would occur in the box labeled with $p_2$, it would contradict the element $b$ being in the North East shaded box;
\item if $p$ would occur in the box labeled with $p_3$, it would contradict the element $a$ being in the South West shaded box.
\end{itemize}
Moreover, another occurrence of $p$ cannot begin at the box
\begin{itemize}
\item labeled $p_1$ because of the two shaded top boxes;
\item labeled $p_2$ because of the element $a$;
\item labeled $p_3$ because of the North East box being shaded.
\end{itemize}
So, $a$ and $b$ will contribute the factor of $x^2$. Referring to this figure, we note that the non-shaded box labeled by $p_1$ (resp., $p_2$ and $p_3$) must contain at least one occurrence of $p_1$ (resp., $p_2$ and $p_3$) and thus is counted by $F(x)-A_{p_1}(x)$ (resp., $F(x)-A_{p_2}(x)$ and $F(x)-A_{p_3}(x)$). Thus, we obtain
\begin{equation*}
A_p(x)+x^2(F(x)-A_{p_1}(x))(F(x)-A_{p_2}(x))(F(x)-A_{p_3}(x))=F(x),
\end{equation*}
which leads to the formula of $A_p(x)$.
For the distribution, we have the following functional equation:
\begin{align}
A_p(x) + x^2 \big( F_{p_1}(x,q) - A_{p_1}(x) \big) \big( F_{p_2}(x,q)-A_{p_2}(x)\big) & \big(F_{p_3}(x,q)-A_{p_3}(x)\big) \notag \\[5pt]
= &\, F_p(x,q). \label{dis-pattern-22}
\end{align}
The proof of \eqref{dis-pattern-22} is essentially the same as that in the avoidance case. The only remark is that any occurrence of the triple $(p_1, p_2, p_3)$ in their respective non-shaded boxes induces an occurrence of $p$.
Substituting the formula of $A_p(x)$ into \eqref{dis-pattern-22} gives the formula of $F_p(x,q)$.
This completes the proof.
\end{proof}
\subsection{The pattern Nr. 28}
We now generalize the pattern $\pattern{scale=0.6}{2}{1/1,2/2}{0/0,0/1,1/0,1/2,2/1,2/2}$ by adding $p_1$ in it.
\begin{thm}\label{th:pattern 28}
Suppose that $p$ is the pattern shown in Figure~\ref{pic-thm-pat-28}, where $p_1$ is any mesh pattern, and the labels $a$, $b$, $A$, and $B$ are to be ignored. Then,
\begin{align*}
A_p(x)& =\frac{F(x)}{1+x^2 \big(F(x)-A_{p_1}(x) \big)F(x)},\\[6pt]
F_p(x,q)& =\frac{F(x)}{1+x^2\big( F(x)-F_{p_1}(x,q) \big) F(x) }.
\end{align*}
\end{thm}
\begin{figure}
\caption{Related to the proof of Theorem~\ref{th:pattern 28}}
\label{pic-thm-pat-28}
\end{figure}
\begin{proof}
Similar to the avoidance case of Theorem~\ref{th:pattern 19}, we have
\begin{equation}\label{av-pattern-28}
A_p(x)+x^2 A_p(x) \big(F(x)-A_{p_1}(x) \big) F(x)=F(x).
\end{equation}
The only term requiring an explanation here is $x^2 A_p(x) \big(F(x)-A_{p_1}(x) \big) F(x)$ corresponding to permutations with at least one occurrence of $p$. Among all such occurrences consider $ab$ with \emph{leftmost} possible $a$ as shown in Figure~\ref{pic-thm-pat-28}. Note that the element $b$ is then uniquely determined. Further, the middle box must contain at least one occurrence of $p_1$, which is counted by $F(x)-A_{p_1}(x)$, and the permutation in box $A$ must be $p$-avoiding since $a$ is the leftmost. Moreover, box $B$ can contain any permutation, and thus its contribution is $F(x)$.
Finally, $a$ and $b$ contribute the factor of $x^2$, which completes the proof of \eqref{av-pattern-28} and hence gives the formula of $A_p(x)$.
For the distribution, we have the following functional equation:
\begin{equation}\label{dis-pattern-28}
A_p(x)+x^2 A_p(x) \big( F_{p_1}(x,q)-A_{p_1}(x) \big) F_p(x,q)=F_p(x,q).
\end{equation}
The proof of \eqref{dis-pattern-28} is essentially the same as that in the avoidance case. We just remark that any occurrence of $p_1$ in the middle box gives an occurrence of $p$, which explains the term $F_{p_1}(x,q)-A_{p_1}(x)$, and $F_p(x,q)$ on the left hand side of \eqref{dis-pattern-28} records occurrences of $p$ in box $B$.
\end{proof}
\section{Patterns derived from the pattern $X$}\label{sec4}
This section gives the distribution of an infinite family of mesh patterns obtained from the pattern $X=\pattern{scale = 0.8}{1}{1/1}{0/1,1/0}$. Namely, we generalize $X$ by considering \raisebox{0.6ex}{\begin{tikzpicture}[scale = 0.3, baseline=(current bounding box.center)]
\foreach \x/\y in {0/1,1/0}
\fill[gray!20] (\x,\y) rectangle +(1,1);
\draw (0.01,0.01) grid (1+0.99,1+0.99);
\node at (1.6,1.6) {\tiny $p_1$};
\filldraw (1,1) circle (4pt) node[above left] {};
\end{tikzpicture}},
where $p_1$ is an {\em irreducible} mesh pattern. Moreover, we use the same approach to study the distribution of a family of patterns that can be derived from the pattern Nr.\ 33 = $\pattern{scale = 0.6}{2}{1/1,2/2}{0/1,0/2,1/0,2/0,1/2,2/1}$. Such distributions cannot be directly obtained from our results for \raisebox{0.6ex}{\begin{tikzpicture}[scale = 0.3, baseline=(current bounding box.center)]
\foreach \x/\y in {0/1,1/0}
\fill[gray!20] (\x,\y) rectangle +(1,1);
\draw (0.01,0.01) grid (1+0.99,1+0.99);
\node at (1.6,1.6) {\tiny $p_1$};
\filldraw (1,1) circle (4pt) node[above left] {};
\end{tikzpicture}},
since $p_1$ must be irreducible there.
\subsection{The pattern $X$}
In our next theorem we consider a generalization of $X$.
\begin{thm}\label{th:pattern X}
Suppose that $p$ is the pattern shown in Figure~\ref{pic-thm-pat-X}, where $p_1$ is an irreducible mesh pattern of length $\geq 2$, and the label $a$ is to be ignored. Then,
\begin{align*}
A_p(x) & =\frac{ \big(1+xA_{p_1}(x) \big) F(x)}{1+xF(x)},\\
F_p(x,q)& =\left( 1+\sum_{i\ge 1} x^i \prod_{j=1}^{i}\frac{F_{p_1}(x,q^j)}{1+xF_{p_1}(x,q^j)} \right) \frac{F(x)}{1+xF(x)}.
\end{align*}
\end{thm}
\begin{figure}
\caption{Related to the proof of Theorem~\ref{th:pattern X}}
\label{pic-thm-pat-X}
\end{figure}
\begin{proof}
Let $B(x)$ be the generating function for $X$-avoiding permutations. Then, it follows from Theorem~\ref{thm-length-1} that
\begin{equation}\label{th:pattern X-B(x)}
B(x)=\frac{F(x)}{1+xF(x)}.
\end{equation}
We next justify the following functional equation: \begin{equation}\label{av-pattern-X}
A_p(x)+xB(x) \big(F(x)-A_{p_1}(x)\big)=F(x).
\end{equation}
The right hand side counts all permutations. On the left hand side, $A_p(x)$ counts $p$-avoiding permutations. If a permutation contains at least one occurrence of $p$, we can consider the occurrence with the {\em leftmost} possible $a$ as shown in Figure~\ref{pic-thm-pat-X}. Referring to this figure, we note that the North East box must contain at least one occurrence of $p_1$, which is counted by $F(x)-A_{p_1}(x)$, and a permutation in box $A$ must be
$\pattern{scale = 0.8}{1}{1/1}{0/1,1/0}$\ -avoiding since $a$ is the leftmost. Finally, $a$ contributes the factor of $x$. This completes the proof of \eqref{av-pattern-X}. Substituting \eqref{th:pattern X-B(x)} into \eqref{av-pattern-X}, we obtain the formula of $A_p(x)$.
In order to study the distribution of $p$, we also need the distribution of $p_1$. Let $B_{p_1}(x,q)$ be the distribution of $p_1$ on $X$-avoiding permutations. Then,
\begin{equation}\label{dis-pattern-X-p_1}
B_{p_1}(x,q)+xB_{p_1}(x,q)F_{p_1}(x,q)=F_{p_1}(x,q).
\end{equation}
This equation is obtained by considering $X$-avoiding permutations separately from the other permutations, and only the second term on the left hand side, corresponding to permutations with at least one occurrence of $X$, requires a justification.
Consider the occurrence of $X$ with the leftmost $a$ as in Figure~\ref{pic-thm-pat-X}. Any permutation in box $A$ is $X$-avoiding and thus contributes the factor of $B_{p_1}(x,q)$. Since $p_1$ is irreducible, the two non-shaded boxes in Figure~\ref{pic-thm-pat-X} are independent from each other. In other words, an occurrence of $p_1$ cannot start in $A$ and end in the other non-shaded box. Thus, the contribution of the North East box is the factor of $F_{p_1}(x,q)$ in \eqref{dis-pattern-X-p_1}. This, along with the factor of $x$ corresponding to $a$, completes the proof of \eqref{dis-pattern-X-p_1}. Therefore, it follows from \eqref{dis-pattern-X-p_1} that
\begin{equation}\label{dis-pattern-X-p_1-2}
B_{p_1}(x,q)=\frac{F_{p_1}(x,q)}{1+xF_{p_1}(x,q)}.
\end{equation}
\begin{figure}
\caption{Related to the proof of Theorem~\ref{th:pattern X}}
\label{pic-thm-pat-X-2}
\end{figure}
Now we consider the distribution of $p$. We claim that
\begin{equation}\label{dis-pattern-X}
B(x)+B(x)\sum_{i\ge 1} x^i \prod_{j=1}^{i}B_{p_1}(x,q^j)=F_{p}(x,q).
\end{equation}
Indeed, each permutation, counted by $F_{p}(x,q)$ on the right hand side, either avoids $X$, which is counted by $B(x)$ on the left hand side, or contains at least one occurrence of $X$. In the latter case, suppose $a_1$, $a_2,\ldots,a_i$ are all occurrences of $X$ in a permutation as shown in Figure~\ref{pic-thm-pat-X-2}, so all the boxes $A_j$, where $0\leq j\leq i$, are $X$-avoiding.
Our key observation is that any occurrence of $p_1$ in a box $A_j$, where $1\leq j\leq i$, together with each of the $a_k$, where $1\leq k\leq j$, contributes an occurrence of $p$. Thus, the contribution of $A_j$ is $F_{p_1}(x,q^j)$ for $1\leq j\leq i$, and the contribution of $A_0$ is $B(x)$. Finally, $x^i$ is given by the elements $a_1,\ldots,a_i$, and we can sum over all $i\geq 1$. This completes the proof of \eqref{dis-pattern-X}.
Substituting the formulas of $B(x)$ and $B_{p_1}(x,q)$ into \eqref{dis-pattern-X} gives the formula of $F_p(x,q)$. This completes the proof.
\end{proof}
Note that $p_1$ in Theorem~\ref{th:pattern X} must be of length $\geq 2$ because the result does not work for $p_1=$$\pattern{scale=0.8}{1}{1/1}{}$. In the latter case though we deal with the pattern Nr.\ 16 $= \pattern{scale = 0.6}{2}{1/1,2/2}{0/1,0/2,1/0,2/0}$, whose avoidance and distribution are solved in \cite{SZ}.
\subsection{The pattern Nr. 33}
In our next theorem we obtain a generalization of the pattern Nr. 33 = $\pattern{scale=0.6}{2}{1/1,2/2}{0/1,0/2,1/0,2/0,1/2,2/1}$.
\begin{thm}\label{th:pattern 33}
Suppose that $p$ is the pattern shown in Figure~\ref{pic-thm-pat-33}, where $p_1$ is an irreducible mesh pattern of length $\geq 2$, and the labels $a$, $b$, $A$, and $B$ are to be ignored.
Then,
\begin{align*}
A_p(x)&=\frac{\big( 1+x^2A_{p_1}(x) \big)F(x)}{1+x^2F(x)},\\[6pt]
F_p(x,q)&=\frac{F(x)}{1+xF(x)}+\left(\frac{F(x)}{1+xF(x)}\right)^2 \left(x+\sum_{i\ge 2} x^i \prod_{j=2}^{i}\frac{F_{p_1}(x,q^{\binom{j}{2}})}{1+xF_{p_1}(x,q^{\binom{j}{2}})} \right).
\end{align*}
\end{thm}
\begin{figure}
\caption{Related to the proof of Theorem~\ref{th:pattern 33}}
\label{pic-thm-pat-33}
\end{figure}
\begin{proof}
The case of avoidance is similar to our considerations of the pattern $X=\pattern{scale = 0.8}{1}{1/1}{0/1,1/0}$. We assume that the elements $a$ and $b$ in an occurrence of $p$ in Figure~\ref{pic-thm-pat-33} are {\em leftmost} possible. But then, the boxes $A$ and $B$ will be $X$-avoiding. Thus, we have the following functional equation:
\begin{equation}\label{av-pattern-33}
A_p(x)+x^2B^2(x) \big(F(x)-A_{p_1}(x) \big)=F(x).
\end{equation}
Substituting \eqref{th:pattern X-B(x)} for $B(x)$ into \eqref{av-pattern-33}, we obtain the formula of $A_p(x)$.
For the distribution, we have the following functional equation:
\begin{equation}\label{dis-pattern-33}
B(x)+x B^2(x) + B^2(x)\sum_{i\ge 2} x^i \prod_{j=2}^{i}B_{p_1}(x,q^{\binom{j}{2}})=F_{p}(x,q),
\end{equation}
where $B_{p_1}(x,q)$ is given in \eqref{dis-pattern-X-p_1-2}. Our proof of \eqref{dis-pattern-33} is essentially the same as that of \eqref{dis-pattern-X} and we omit it. The only difference is that here we have two $X$-avoiding boxes $A$ and $B$ resulting in the factor of $B^2(x)$ instead of $B(x)$.
Substituting \eqref{dis-pattern-X-p_1-2} for $B_{p_1}(x,q)$ into \eqref{dis-pattern-33}, we obtain the desired formula of $F_p(x,q)$, and thus complete the proof.
\end{proof}
\begin{figure}
\caption{Related to Theorem~\ref{th:pattern 33-2}}
\label{pic-thm-pat-33-2}
\end{figure}
The proof of the following theorem follows similar steps to those in Theorem~\ref{th:pattern 33}, and thus is omitted. We note that Theorem~\ref{th:pattern 33-2} is a far-reaching generalization of Theorem~\ref{th:pattern 33}.
\begin{thm}\label{th:pattern 33-2}
Suppose that $p$ is the pattern shown in Figure~\ref{pic-thm-pat-33-2}, where $p_1$ is an irreducible mesh pattern, and let $X=\pattern{scale = 0.8}{1}{1/1}{0/1,1/0}$.
Then the distribution of $p$ is
\begin{align*}
F_p(x,q)=\sum_{i=1}^{k}x^{i-1}\left(\frac{F(x)}{1+xF(x)}\right)^i+\left(\frac{F(x)}{1+xF(x)}\right)^k \sum_{i\ge k} x^i \prod_{j=k}^{i}\frac{F_{p_1}(x,q^{\binom{j}{k}})}{1+xF_{p_1}(x,q^{\binom{j}{k}})}.
\end{align*}
\end{thm}
\section{Remaining avoidance cases}\label{sec5}
In this section, we study avoidance of generalizations of the patterns:\\
Nr. 27 = $\pattern{scale=0.6}{2}{1/1,2/2}{0/1,0/2,1/0,1/1,2/0,2/2}$, \quad
Nr. 28 = $\pattern{scale=0.6}{2}{1/1,2/2}{0/0,0/1,1/0,1/2,2/1,2/2}$, \quad
Nr. 30 = $\pattern{scale=0.6}{2}{1/1,2/2}{0/1,0/2,1/0,1/1,1/2,2/0,2/1}$, \quad
Nr. 34 = $\pattern{scale=0.6}{2}{1/1,2/2}{0/0,0/1,1/0,1/1,1/2,2/1,2/2}$.
\subsection{Pattern Nr. 28}
The following theorem allows us to generalize the known avoidance result~\cite{SZ} for the pattern Nr. 28 = $\pattern{scale=0.6}{2}{1/1,2/2}{0/0,0/1,1/0,1/2,2/1,2/2}$ by inserting two mesh patterns, $p_1$ and $p_2$, into it. We were unable to find the distribution of the pattern in the next theorem because of the difficulty of controlling occurrences of the patterns $p$ and $p_2$ at the same time in the rightmost bottom box.
\begin{thm}\label{th:pattern 28-2}
Let $p$ be the pattern shown in Figure~\ref{pic-thm-pat-28-2}, where the labels $a$, $b$, and $A$ are to be ignored, and $p_1$ and $p_2$ are any mesh patterns. Then, the avoidance of $p$ is given by
\begin{align*}
A_p(x)=\frac{F(x)+x^2F(x)A_{p_2}(x) \big(F(x)-A_{p_1}(x) \big)}{1+x^2 \big(F(x)-A_{p_1}(x) \big) F(x)}.
\end{align*}
\end{thm}
\begin{figure}
\caption{Related to the proof of Theorem~\ref{th:pattern 28-2}}
\label{pic-thm-pat-28-2}
\end{figure}
\begin{proof}
Let $A(x)$ be the generating function for the number of permutations avoiding the pattern
\raisebox{0.6ex}{\begin{tikzpicture}[scale = 0.3, baseline=(current bounding box.center)]
\foreach \x/\y in {0/0,0/1,1/0,1/2,2/1,2/2}
\fill[gray!20] (\x,\y) rectangle +(1,1);
\draw (0.01,0.01) grid (2+0.99,2+0.99);
\node at (1.6,1.6) {\tiny $p_1$};
\filldraw (1,1) circle (4pt) node[above left] {};
\filldraw (2,2) circle (4pt) node[above left] {};
\end{tikzpicture}}.
Then, it follows from Theorem~\ref{th:pattern 28} that
$$A(x)=\frac{F(x)}{1+x^2(F(x)-A_{p_1}(x))F(x)}.$$
We have the following functional equation:
\begin{equation}\label{av-pattern-28-2}
A_p(x)+x^2 A(x)\big(F(x)-A_{p_1}(x) \big) \big(F(x)-A_{p_2}(x) \big)=F(x).
\end{equation}
Indeed, each permutation, counted by $F(x)$ on the right hand side, either avoids $p$, counted by $A_p(x)$, or contains at least one occurrence of $p$. In the latter case, among all such occurrences, we can pick the occurrence $ab$ with the \emph{leftmost} possible $a$ as shown in Figure~\ref{pic-thm-pat-28-2}. Referring to this figure, we note that the central box must contain at least one occurrence of $p_1$, which is counted by $F(x)-A_{p_1}(x)$, and the rightmost bottom box must contain at least one occurrence of $p_2$, counted by $F(x)-A_{p_2}(x)$. Moreover, the permutation in box $A$ must avoid the pattern
\raisebox{0.6ex}{\begin{tikzpicture}[scale = 0.3, baseline=(current bounding box.center)]
\foreach \x/\y in {0/0,0/1,1/0,1/2,2/1,2/2}
\fill[gray!20] (\x,\y) rectangle +(1,1);
\draw (0.01,0.01) grid (2+0.99,2+0.99);
\node at (1.6,1.6) {\tiny $p_1$};
\filldraw (1,1) circle (4pt) node[above left] {};
\filldraw (2,2) circle (4pt) node[above left] {};
\end{tikzpicture}},
which is counted by $A(x)$, since $a$ is the leftmost possible element in an occurrence of $p$. Finally, $a$ and $b$ contribute the factor of $x^2$. This completes our proof of \eqref{av-pattern-28-2} and hence gives the formula of $A_p(x)$ as desired.
\end{proof}
\begin{remark} Note that exactly the same enumeration result as that in Theorem~\ref{th:pattern 28-2} holds for the pattern \raisebox{0.6ex}{\begin{tikzpicture}[scale=0.3, baseline=(current bounding box.center)]
\foreach \x/\y in {0/0,0/1,1/0,1/2,2/1,2/2}
\fill[gray!20] (\x,\y) rectangle +(1,1);
\draw (0.01,0.01) grid (2+0.99,2+0.99);
\filldraw (1,1) circle (4pt) node[above left] {};
\filldraw (2,2) circle (4pt) node[above right] {};
\node at (1.5,1.5) {\tiny $p_1$};
\node at (0.5,2.5) {\tiny $p_2$};
\end{tikzpicture}}.
Indeed, in the proof of Theorem~\ref{th:pattern 28-2} one essentially just needs to substitute the ``leftmost $a$'' by the ``rightmost $b$''.\end{remark}
\subsection{Pattern Nr. 30}
We next consider generalizations of the pattern
Nr. 30 = $\pattern{scale=0.6}{2}{1/1,2/2}{0/1,0/2,1/0,1/1,1/2,2/0,2/1}$.
\begin{lem}[{\cite[Theorem 3.5]{SZ}}]\label{lem-pattern-30}
Let $p=\pattern{scale=0.6}{2}{1/1,2/2}{0/1,1/2,2/0,1/0,1/1,2/1,0/2}$,
$F(x,q) =\sum_{n\geq 0}x^n\sum_{\pi\in S_n}q^{p(\pi)}$, and $A(x)$ be the generating function for $S(p)$. Then,
\begin{align*}
A(x)=\frac{(1+x)F(x)}{1+x+x^2F(x)},
\ \ \ \ \
F(x,q)=\frac{(1+x-qx)F(x)}{1+(1-q)x+(1-q)x^2F(x)}.
\end{align*}
\end{lem}
The avoidance of the patterns in the next two theorems is based on Lemma~\ref{lem-pattern-30}, but we were not able to derive the distribution of these patterns extending the respective formula in Lemma~\ref{lem-pattern-30}.
\begin{thm}\label{th:pattern 30}
Suppose that $p$ is the pattern shown in Figure~\ref{pic-thm-pat-30}, where $p_1$ is a mesh pattern with the leftmost bottom box non-shaded, and the labels $a$, $b$, and $A$ are to be ignored. Then, the avoidance of $p$ is given by
\begin{align*}
A_p(x)= \frac{(1+x) F(x) + x^2 A_{p_1}(x)}{1+x+x^2F(x)}.
\end{align*}
\end{thm}
\begin{figure}
\caption{Related to the proof of Theorem~\ref{th:pattern 30}}
\label{pic-thm-pat-30}
\end{figure}
\begin{proof}
We have the following functional equation
\begin{equation}\label{av-pattern-30}
A_p(x)+ \frac{x^2 F(x)}{1+x+x^2F(x)}\big(F(x)-A_{p_1}(x) \big)=F(x).
\end{equation}
Indeed, the right hand side counts all permutations, and the left hand side considers separately permutations avoiding $p$, counted by the $A_p(x)$ term in \eqref{av-pattern-30}, and those containing at least one occurrence of $p$. In the latter case, among all such occurrences, we can pick the occurrence $ab$ with the \emph{leftmost} possible $a$ as shown in Figure~\ref{pic-thm-pat-30}. Referring to this figure, we note that the North East box must contain at least one occurrence of $p_1$, counted by $F(x)-A_{p_1}(x)$. Moreover, a permutation in box $A$ must avoid both patterns
$\pattern{scale=0.6}{2}{1/1,2/2}{0/1,1/2,2/0,1/0,1/1,2/1,0/2}$ and
$\pattern{scale = 0.8}{1}{1/1}{0/1,1/0,1/1}$, since $a$ is the leftmost possible.
We denote by $B(x)$ the generating function of such permutations. Then, we have that
\begin{align*}
B(x) + x B(x) = \frac{(1+x)F(x)}{1+x+x^2F(x)}
\end{align*}
by dividing the $\pattern{scale=0.6}{2}{1/1,2/2}{0/1,1/2,2/0,1/0,1/1,2/1,0/2}$-avoiding permutations, whose enumeration is given by Lemma~\ref{lem-pattern-30}, into two parts depending on whether they avoid $\pattern{scale = 0.8}{1}{1/1}{0/1,1/0,1/1}$.
Note that when a permutation contains the pattern $\pattern{scale = 0.8}{1}{1/1}{0/1,1/0,1/1}$, the sub-permutation consisting of the first $n-1$ positions avoids both patterns
$\pattern{scale=0.6}{2}{1/1,2/2}{0/1,1/2,2/0,1/0,1/1,2/1,0/2}$ and
$\pattern{scale = 0.8}{1}{1/1}{0/1,1/0,1/1}$.
Therefore, we get that
\begin{align*}
B(x) = \frac{F(x)}{1+x+x^2F(x)}.
\end{align*}
Finally, $a$ and $b$ contribute the factor of $x^2$.
Substituting the formula of $B(x)$ into \eqref{av-pattern-30}, we obtain the desired formula of $A_p(x)$.
This completes the proof.
\end{proof}
\subsection{Pattern Nr. 27}
We next consider avoidance of a generalization of the pattern Nr. 27 = $\pattern{scale=0.6}{2}{1/1,2/2}{0/1,0/2,1/0,1/1,2/0,2/2}$.
\begin{thm}\label{th:pattern 27}
Suppose that $p$ is the pattern shown in Figure~\ref{pic-thm-pat-27}, where $p_1$ and $p_2$ are any mesh patterns, and the labels $a$, $b$, and $A$ are to be ignored. Then, the avoidance of $p$ is given by
\begin{align*}
A_p(x)=F(x)-x^2 \frac{F(x)}{1+xF(x)} \big(F(x)-A_{p_1}(x)\big) \big(F(x)-A_{p_2}(x) \big).
\end{align*}
\end{thm}
\begin{figure}
\caption{Related to the proof of Theorem~\ref{th:pattern 27}}
\label{pic-thm-pat-27}
\end{figure}
\begin{proof}
We have the following functional equation:
\begin{equation}\label{av-pattern-27}
A_p(x)+x^2 B(x) \big(F(x)-A_{p_1}(x)\big) \big(F(x)-A_{p_2}(x) \big)=F(x),
\end{equation}
where $B(x)$ is the generating function for the number of $X$-avoiding permutations given in Theorem~\ref{thm-length-1}, which satisfies
$$B(x)=\frac{F(x)}{1+xF(x)}.$$
Indeed, $F(x)$ on the right hand side of \eqref{av-pattern-27} counts all permutations. On the left hand side of \eqref{av-pattern-27}, we count separately permutations avoiding $p$, counted by the $A_p(x)$ term, and those containing at least one occurrence of $p$. In the latter case, among all occurrences of $p$, we pick the occurrence $ab$ with the {\em leftmost} possible $b$ as shown in Figure~\ref{pic-thm-pat-27} and $a$ is then uniquely determined. Referring to this figure, we note that the South West box must contain at least one occurrence of $p_1$, counted by $F(x)-A_{p_1}(x)$, and the East box must contain at least one occurrence of $p_2$, counted by $F(x)-A_{p_2}(x)$. Moreover, the permutation in the box $A$ must avoid the pattern $X$, counted by $B(x)$, since $b$ is the leftmost possible. Finally, $a$ and $b$ contribute the factor of $x^2$. Thus, this completes the proof of~\eqref{av-pattern-27}.
Substituting the formula of $B(x)$ into~\eqref{av-pattern-27}, we obtain the desired formula of $A_p(x)$.
\end{proof}
Theorem~\ref{th:pattern 27} generalizes the avoidance of the pattern Nr.\ 27. However, generalizing its distribution is hard, because we need to control at the same time occurrences of the patterns $p$ and $Z = \pattern{scale = 0.8}{1}{1/1}{0/0,1/1}$ in the box $A$.
Moreover, we cannot further generalize Theorem~\ref{th:pattern 27} by placing a mesh pattern $p_3$ in the box $A$, because $A$ must avoid $Z$ when we require $b$ to be the leftmost, so we would be forced to control the two patterns $p_3$ and $Z$ at the same time. Of course, we can require $b$ to be the rightmost, but then we would be forced to control two patterns in the East box. Finally, swapping $A$ and $p_2$ in Theorem~\ref{th:pattern 27} leads to the same enumeration result, which is not hard to see.
\subsection{Pattern Nr. 34}
We next consider generalizations of the pattern
Nr. 34 = $\pattern{scale=0.6}{2}{1/1,2/2}{0/0,0/1,1/0,1/1,1/2,2/1,2/2}$.
\begin{lem}[{\cite[Theorem 3.7]{SZ}}]\label{lem-pat-34}
Let $p=\pattern{scale=0.6}{2}{1/1,2/2}{0/1,1/2,0/0,2/2,1/0,1/1,2/1}$.
Then, the avoidance and distribution of $p$ are
\begin{align*}
A_p(x)= \frac{F(x)}{1+x^2F(x)},
\ \ \ \ \
F_p(x,q)=\frac{F(x)}{1+(1-q)x^2F(x)}.
\end{align*}
\end{lem}
Replacing the two elements in the pattern Nr.\ 34 by the pattern $1p_1$, where $p_1$ is any permutation of $\{2,3,\ldots, k\}$, $k\geq 2$, with all boxes shaded as in Figure~\ref{pic-thm-pat-34}, we can apply essentially the same arguments as in the proof of Lemma~\ref{lem-pat-34} in \cite{SZ} to obtain the following theorem.
\begin{figure}
\caption{Related to Theorem~\ref{thm-pat-34}}
\label{pic-thm-pat-34}
\end{figure}
\begin{thm}\label{thm-pat-34}
Suppose that $p$ is the pattern shown in Figure~\ref{pic-thm-pat-34}, where $k\geq 1$ elements are in increasing order in the middle box. Then, the avoidance and distribution of $p$ are given by
\begin{align*}
A_p(x)= \frac{F(x)}{1+x^kF(x)},
\ \ \ \ \
F_p(x,q)=\frac{F(x)}{1+(1-q)x^kF(x)}.
\end{align*}
\end{thm}
The avoidance of the pattern Nr.\ 34 can be generalized, which is done in the next theorem, but the distribution is hard because we need to control $p_2$ and $p$ in the same box in that theorem.
\begin{thm}\label{th:pattern 34-2}
Suppose that $p$ is the pattern shown in Figure~\ref{pic-thm-pat-34-2}, where $p_1$ is any permutation of $\{2,3,\ldots, k\}$, $k\geq 1$, with all boxes shaded, $p_2$ is any mesh pattern, and the labels $a$ and $A$ are to be ignored. Then, the avoidance of $p$ is given by
\begin{align*}
A_p(x)=F(x)- \frac{x^k F(x)}{1+x^k F(x)} \big(F(x)-A_{p_2}(x)\big).
\end{align*}
\end{thm}
\begin{figure}
\caption{Related to the proof of Theorem~\ref{th:pattern 34-2}}
\label{pic-thm-pat-34-2}
\end{figure}
\begin{proof}
Let $D(x)$ be the generating function for the number of permutations avoiding the pattern in Figure~\ref{pic-thm-pat-34}.
Then, it follows from Theorem~\ref{thm-pat-34} that
$$D(x)=\frac{F(x)}{1+x^kF(x)}.$$
We have the following functional equation:
\begin{equation}\label{av-pattern-34}
A_p(x)+x^k D(x) \big(F(x)-A_{p_2}(x) \big)=F(x).
\end{equation}
Indeed, the right hand side counts all permutations. On the left hand side, we count separately permutations avoiding $p$, counted by $A_p(x)$, and those containing at least one occurrence of $p$. In the latter case, among all occurrences of $p$, we can pick the occurrence with the \emph{leftmost} possible $a$ as shown in Figure~\ref{pic-thm-pat-34-2}. Referring to this figure, we note that the South East box must contain at least one occurrence of $p_2$, counted by $F(x)-A_{p_2}(x)$. Moreover, the permutation in box $A$ must avoid the pattern in Figure~\ref{pic-thm-pat-34}
that is counted by $D(x)$, since $a$ is the leftmost possible. There are no other restrictions on $A$ because the pattern in Figure~\ref{pic-thm-pat-34} cannot begin in $A$ and end somewhere else. Finally, the $k$ elements in the middle box contribute the factor of $x^k$. Thus, by combining with the formula of $D(x)$, we complete the proof of \eqref{av-pattern-34}, and hence give the formula of $A_p(x)$.
\end{proof}
\section{Concluding remarks}\label{final-sec}
We have a number of general results related to the distribution or avoidance of several infinite families of mesh patterns. How can one describe the class of mesh patterns to which our distribution or avoidance results apply? Namely, in which situations can one break the problem of enumerating mesh patterns into smaller problems using our theorems? What is the complexity of recognizing this class?
\section*{\bf Acknowledgments}
The first author is grateful to the administration of the Center for Combinatorics at Nankai University for their hospitality during the author's stay in April 2018.
The second author was partially supported by the National Science Foundation of China (No. 11701424).
\end{document}
\begin{document}
\title{Optimality in multiple comparison procedures}
\author{Djalel Eddine Meskaldji\footnote{Signal Processing Laboratory (LTS5), Ecole Polytechnique F\'{e}d\'{e}rale de Lausanne (EPFL), Lausanne, Switzerland. Email: [email protected].} \footnote{This work was supported in part by the FNS grant N$^0$144467.}
\and Jean-Philippe Thiran* \and Stephan Morgenthaler\footnote{FSB/MATHA, Ecole Polytechnique F\'{e}d\'{e}rale de Lausanne (EPFL), Lausanne, Switzerland.}}
\maketitle
\doublespacing
\begin{abstract}
When many (m) null hypotheses are tested with a single dataset, the
control of the number of false rejections is often the principal
consideration. Two popular controlling rates are the probability of making at least one false discovery (FWER) and the expected fraction of false discoveries among all rejections (FDR).
Scaled multiple comparison error rates form a new family that bridges the gap between these two extremes. For example, the Scaled Expected Value (SEV) limits the number of false positives relative to an arbitrary increasing function of the number of rejections, that is, ${\mathbb{E}}({\mathsf{FP}}/s(R\vee 1))$. We discuss the problem of how to choose in practice which procedure to use, with elements of
an optimality theory, by considering the number of false rejections ${\mathsf{FP}}$ separately from the number of correct rejections ${\mathsf{TP}}$. Using this framework we will show how to choose an element in the new family mentioned above.
\end{abstract}
\noindent Keywords: {Multiple comparisons, Family-Wise Error Rate, False Discovery
Rate, ordered p-values.}
\section{Introduction}
The theory of multiple testing is dominated by discussions of error rates and
the procedures that control those rates. The outcome of $m$ tests can be
summarized by the number of true rejections ${\mathsf{TP}}$ (the rejections among the
$m_1$ true alternatives) and the false rejections ${\mathsf{FP}}$ (the rejections among
the $m_0$ true null hypotheses). The total number of rejections is
$R={\mathsf{TP}}+{\mathsf{FP}}$.\\
With this paper, we want to broaden the discussion to include the optimal
choice of error rate. This choice depends on the number of tests $m$, the likely size of the alternative effects and the number of true nulls $m_0$ among the $m$ null hypotheses. To illustrate why this is so,
consider the following example. If the true alternatives are sparse (small
$m_1$), then the FDR will almost always be better than the FWER, because it
has a better chance of detecting the true alternatives, and yet will not make
many false discoveries. Another situation is when the effect sizes that define
the alternatives are huge, then the FWER is slightly better, because it will
also detect the true alternatives, but will make even fewer mistaken rejections.
As $m_1$ increases, the choice of the FDR becomes problematic due to the definition of the control.
Even a small percentage of a large number of rejections can be sizable.\\
With the aim of bridging the gap between the two extremes, \cite{MeskCER2011} introduced the scaled error rates. The number of false positives is weighed against the number of rejections via a scaling function, that is, the ratio ${\mathsf{FP}}/s(R\vee 1)$ is considered and is called the Scaled False Discovery Proportion (SFDP). \cite{MeskCER2011} also derived procedures that control either the quantiles
or the expectation of the SFDP.
The expectation of the SFDP is called the Scaled Expected Value (SEV), defined by $$\mbox{SEV}_s={\mathbb{E}}\left[
\frac{{\mathsf{FP}}}{s(R \vee 1)}\right],$$ where $s(\cdot )$ is a non-decreasing positive function called the scaling function. The Per Family Error Rate ${\mathbb{E}}({\mathsf{FP}})$ and the FDR are recovered by setting $s(R)\equiv1$ and $s(R)=R$, respectively.\\
The procedure that controls the SEV under dependence and positive dependence is a step-up (SU) procedure that uses the sequence of thresholds $\mathcal {T}_s =(t_i =\frac{s(i)}{m}\alpha)_{1\leq i \leq m}$. This is a scaled version of the LSU procedure proposed by \cite{BenjaminiFDR1995} to control the FDR. This procedure generalizes many multiple comparison procedures. The Bonferroni procedure and the LSU procedure are recovered by setting $s(i)\equiv1$ and $s(i)= i,\, \forall i \in I,$ respectively. Note that the Bonferroni procedure controls the PFER, which implies the control of the FWER by Markov's inequality.\\
The choices offered by the scaled error rates open the question of how to
proceed in practice. We will investigate some aspects of this question in this paper.
Among the Multiple Comparison Procedures (MCPs), the ones that reject a
maximal number of hypotheses are preferred. This is the extent to which
optimality is investigated. First, we have to find a common optimality
criterion to compare the different error metrics and control procedures. We
propose to measure the worth of each true discovery by the value 1 and the
loss due to a false discovery by $-\lambda$.\\
We present the optimality criterion and discuss the choice of the parameter $\lambda$ in Section~\ref{PartI: section4.2}. In Section~\ref{PartI: section4.3}, we derive asymptotic results for the SEV and we investigate in more detail a particular case of scaling functions, namely $s(i)=i^{\gamma}$ with $\gamma \in [0,1]$. We present different simulations for this particular case in Section~\ref{PartI: section4.4}. Finally, we derive exact calculations for the SFDP under the unconditional mixture effect model using the SU procedure described above. The results are based on Theorem 3.1 of \cite{Roquain2011exact} and obtained immediately when inserting the scaling function at the right places.
\section{Optimality of MCPs}\label{PartI: section4.2}
The general goal of any multiple testing procedure consists in making ${\mathsf{TP}}$
large while keeping ${\mathsf{FP}}$ small. The two types of rejections are opposites of
each other, but asymmetrical opposites. The prevailing approach consists in deciding on a level and type of control against false rejections (errors of type I) and, subject to this constraint, maximizing the number of rejections. This is analogous to the Neyman-Pearson approach of bounding the probability of a false rejection and then, given this constraint, maximizing the power. But since there is no agreement on the choice of control in multiple testing, the analogy is not convincing. This approach does not allow one to compare across a spectrum of type I error metrics. Controlling the false discovery rate, for example, can potentially lead to many rejections and is in this sense powerful, but how should this be compared to a method that controls the probability of making at least one erroneous rejection? \\
\subsection{Common optimality criterion}
One may think of the underlying problem in terms of costs. Each true rejection is worth one unit, while each
false rejection leads to a loss of $\lambda \geq 1$. The cost $\lambda$
of a false discovery is a tuning constant to be set by the user. It acts as a
penalty against false discoveries. If $\lambda=1$, the true and the false
discoveries are of equal value, in which case maximizing the gain $R-2{\mathsf{FP}}$ is
equivalent to minimizing $m_1-R+2{\mathsf{FP}}$, the sum of false negatives and false
discoveries. The cost $\lambda$ can also be seen as a shadow price, that is, the value of a
Lagrange multiplier at the optimum. This interpretation appears if we
optimize the number of true rejections under constraints involving the false
discoveries.\\
Based on this loss, the best choice of error rate minimizes the loss function
\begin{equation}
\mtc{L}_{\lambda}=\lambda {\mathbb{E}}[{\mathsf{FP}}]-{\mathbb{E}}[{\mathsf{TP}}]=(\lambda+1){\mathbb{E}}[{\mathsf{FP}}]-{\mathbb{E}}[R],
\end{equation}
with $m_0\geq 1$ and $m_1 \geq 1$.\\
This approach will be unfamiliar to statisticians, who are used to maximizing power under control of the false
rejections. Our criterion allows a mixture of different error rates and
will pick the one best adapted to $\lambda$.
\subsection{Choice of the cost $\lambda$}
Before starting the main question of the paper we give some thoughts about the choice of the price $\lambda$. In the philosophy of multiple testing, $\lambda\geq1$, because the subsequent investigation of any discovery is expensive and being on the wrong track is a grave mistake. In a more refined theory, the cost $\lambda$ should probably rather be seen as a marginal price, which increases with the number of false discoveries, but we will stay with the simpler model of a fixed price per false rejection.\\
To gain further insight, consider a model case, where $m=2$ with $m_0=m_1=1$
and we observe independent test statistics $X_0\sim{\cal
N}(0,1)$, a unit Gaussian, and $X_1\sim {\cal N}(\Delta,1)$. We are
testing a zero mean vs. a positive mean and the two tests reject if the
observed value exceeds a critical value $\text{cv}>0$. If we reject based on
$X_0$ we have a false rejection and if we reject based on $X_1$ we have a
true rejection. In this case, ${\mathsf{TP}}$ and ${\mathsf{FP}}$ are independent Bernoulli
variables with success probabilities $p_0=1-\Phi(\text{cv})=\Phi(-\text{cv})$
and $p_1=1-\Phi(\text{cv}-\Delta)=\Phi(\Delta-\text{cv})$. The criterion thus
has value $$\mtc{L}_{\lambda}=\lambda {\mathbb{E}}[{\mathsf{FP}}] - {\mathbb{E}}[{\mathsf{TP}}]=\lambda p_0 - p_1\,.$$
For a fixed price $\lambda$, the smallest value of the criterion, that is, the largest gain $p_1-\lambda p_0$, is achieved
for the critical value that satisfies $$-\varphi(\Delta-\text{cv}_\text{opt})
+ \lambda \varphi(-\text{cv}_\text{opt})=0,$$ which leads to
$$\text{cv}_\text{opt}=\log(\lambda)/\Delta + \Delta/2.$$
The optimal gain is always positive, increases with $\Delta$ and decreases with $\lambda$. In
this simple model, the two tests are determined by the critical value.\\
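The closed form for $\text{cv}_\text{opt}$ is easy to confirm numerically. The short Python sketch below is an illustration only (the values of $\Delta$ and $\lambda$ are arbitrary choices made here): it minimizes $\lambda\Phi(-\text{cv})-\Phi(\Delta-\text{cv})$ on a fine grid and compares the minimizer with $\log(\lambda)/\Delta+\Delta/2$.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def loss(cv, delta, lam):
    # lambda * P(false rejection) - P(true rejection)
    return lam * norm.sf(cv) - norm.sf(cv - delta)

delta, lam = 2.0, 5.0
grid = np.linspace(-5.0, 10.0, 200001)
cv_numerical = grid[np.argmin(loss(grid, delta, lam))]
cv_closed_form = np.log(lam) / delta + delta / 2
print(cv_numerical, cv_closed_form)  # agree up to the grid resolution
\end{verbatim}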
For a fixed price $\lambda$, the optimal critical value $\log(\lambda)/\Delta +
\Delta/2$ as a function of the effect $\Delta$ is convex and has a minimum at
$\Delta = \text{cv}_\text{opt}=\sqrt{2\log(\lambda)}$. This is the optimal
test with the minimal level.\\
When $p_0$ is fixed ($p_0=\alpha$), the price paid for a false positive is
\begin{equation}
\lambda(\Delta)=\exp\left\{\Delta\left( \Phi^{-1}(1-\alpha)-\frac{\Delta}{2}\right)\right\}.\label{LambdaDeltaEq}
\end{equation}
Equation (\ref{LambdaDeltaEq}) shows that the maximum price that has to be paid corresponds to a situation where the mean of the alternative distribution $\Delta$ is equal
to the critical value of the rejection region. When $\Delta$ becomes small, the
mixture of the observations will more closely resemble the null distribution and the
probability of rejection decreases towards $\alpha$. On the other hand, when
$\Delta$ increases, the probability of detection increases to the point where we
can increase the critical value. When the value of $\Delta$ reaches
$2\Phi^{-1}(1-\alpha)$ the probability of a false negative and false positive
become equal. In this case, $\lambda=1$, which corresponds to the
classification criterion and the critical threshold becomes $\Delta/2$. Figure \ref{PartI: PlotLambdaDelta} shows the behavior of the price $\lambda$ as a function of the effect $\Delta$ for two common values of
$\alpha$ namely $\alpha=0.05$ and $\alpha=0.01$. \\
To link this with the classical testing theory, consider the Bonferroni procedure for two one-sided tests with overall FWER of $\alpha$. For example, if $\alpha=0.05$ then $\lambda=3.868132$ and if $\alpha=0.01$ then $\lambda=14.96849$. This gives an idea of the price used in this case. At the very least, this model suggests that the price of a false discovery has to be substantially higher than 1. There has to be a real penalty associated with a
false discovery.
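The quoted prices can be reproduced as follows. Assuming they correspond to the maximum of \eqref{LambdaDeltaEq} over $\Delta$, which by the discussion above is attained at $\Delta=\Phi^{-1}(1-\alpha)$ and equals $\exp\{(\Phi^{-1}(1-\alpha))^2/2\}$, the Python sketch below (an illustration only, not part of the original derivation) recovers $3.868$ for $\alpha=0.05$ and $14.968$ for $\alpha=0.01$.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def price(delta, alpha):
    # lambda(Delta) from equation (LambdaDeltaEq) with p0 fixed at alpha
    c = norm.ppf(1 - alpha)
    return np.exp(delta * (c - delta / 2))

for alpha in (0.05, 0.01):
    c = norm.ppf(1 - alpha)
    deltas = np.linspace(0.01, 6.0, 100000)
    print(alpha,
          deltas[np.argmax(price(deltas, alpha))],  # maximizer, close to c
          price(c, alpha))                          # maximal price exp(c^2/2)
\end{verbatim}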
\section{Asymptotically optimal procedure}\label{PartI: section4.3}
\subsection{General results}
For independent tests we can think of the p-values as a mixture of $m_0$
random draws from the uniform distribution and $m_1$ random draws from the
alternative distribution, which might itself be a mixture
distribution. Suppose that $\mtc{F}$ is the common distribution of the p-values under
the alternative hypothesis. \cite{Genovese2002operating} showed that
asymptotically (i.e. for large $m$), the LSU procedure corresponds to rejecting the null
hypothesis when the corresponding p-value is less than a threshold $u^*$ where
$u^*$ is the solution of the equation $F(u)=\eta u$ with
\[
\eta=\frac{1/\alpha -\pi_0}{1-\pi_0},
\]
where $F$ is the cumulative distribution function of $\mtc{F}$, and $\pi_0 = m_0/m.$ They also showed that the LSU procedure is intermediate between the Bonferroni procedure (corresponding to $\alpha/m$) and no multiplicity correction (corresponding
to $\alpha$). Clearly, this shows that the gain in power of the LSU
procedure is due to an increase of the expected number of false positives
from $\pi_0 \alpha/m$ to $\pi_0 u^*$. We give in this section similar results for the SEV.\\
Suppose that the scaling function $s$ is such that
$${\mathbb{E}}\paren{\frac{{\mathsf{FP}}}{s(R)}}=\frac{{\mathbb{E}}({\mathsf{FP}})}{{\mathbb{E}}\paren{s(R)}}+\xi(m),$$
where $\xi(m)\rightarrow 0$ when $m \rightarrow \infty.$ In this case, $u^*$ satisfies
$$
\frac{m_0 u}{s\left(m_{0}u+\left( m-m_{0}\right) F\left(u\right)\right)}
=\alpha.
$$
Under certain assumptions on $s$, $u^{\ast }$ is the unique solution of
\begin{equation}
s^{-1}\left( \frac{u m_0}{\alpha }\right)=m_{0}u+\left( m-m_{0}\right)
F\left( u\right),
\end{equation}
which leads to
\begin{equation}
\left( m-m_{0}\right) F\left( u^{\ast }\right) =s^{-1}\left( \frac{
u^{\ast }m_0}{\alpha }\right) -m_{0}u^{\ast}.\label{PartI: Equ. findUstar}
\end{equation}
The optimization criterion $\mathcal{L}_{\lambda}$ becomes
\begin{eqnarray}
\nonumber \mathcal{L}_{\lambda} &\simeq &\lambda m_{0}u^{\ast }- \left( m-m_{0}\right) F\left( u^{\ast
}\right) \\
\nonumber &=&\lambda
m_{0}u^{\ast }-s^{-1}\left( \frac{u^{\ast }m_0}{\alpha }\right) +m_{0}u^{\ast } \\
\nonumber &=&\left( \lambda +1\right)\alpha
\left( \frac{u^{\ast }m_{0}}{\alpha }\right)-s^{-1}\left( \frac{u^{\ast }m_0}{\alpha }\right) \\
&=&\left( \lambda +1\right) \alpha v-s^{-1}\left( v\right),\label{PartI: Equ. LossFunctionAssymp}
\end{eqnarray}
where $v=\frac{u^{\ast }m_0}{\alpha }.$ \\%\in \elleft[ \frac{1}{m},1\right]$
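As an illustration of how $u^{\ast}$ in \eqref{PartI: Equ. findUstar} can be located in practice, the following Python sketch (an illustration only) assumes a one-sided Gaussian alternative with effect $\Delta$, so that $F(u)=\Phi(\Delta+\Phi^{-1}(u))$, takes $s(x)=x^{\gamma}$, and scans a grid for the largest $u$ with $m_0 u \leq \alpha\, s\big(m_0 u + (m-m_0)F(u)\big)$; at that point the defining equation holds approximately. All numerical values are arbitrary choices made for the example.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

alpha, gamma, delta = 0.05, 0.5, 3.0
m, m0 = 1000, 800
m1 = m - m0
F = lambda u: norm.cdf(delta + norm.ppf(u))  # p-value cdf under the alternative
s = lambda r: r ** gamma

u = np.linspace(1e-6, 1 - 1e-6, 200000)
ok = m0 * u <= alpha * s(m0 * u + m1 * F(u))
u_star = u[ok].max() if ok.any() else 0.0
print("asymptotic threshold u* ~", u_star)
print("E[FP] ~", m0 * u_star, "  E[R] ~", m0 * u_star + m1 * F(u_star))
\end{verbatim}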
\subsection{A particular case}
Consider now the particular case of $s(R)=R^{\gamma}$, with $\gamma\in[0,1]$. Then, the SEV becomes
${\mathbb{E}}\left( \frac{{\mathsf{FP}}}{R^{\gamma}}\right).$ This family of error rates
includes the PFER and the FDR for $\gamma=0$ and $1$ respectively.
\cite{MeskCER2011} showed that the family of thresholds
$t_i=\alpha s_{\gamma}(i)/m=\alpha i^{\gamma}/m$ provides weak control of the
FWER at a common level $\alpha$. This defines the family of MCPs we will
consider. They are indexed by the parameter $0\leq\gamma\leq 1$ and will be
denoted by SU$_{\gamma}$. When $\gamma=0$, the Bonferroni procedure results,
while $\gamma=1$ corresponds to the LSU procedure.\\
The $\mbox{SEV}$ for this family, can be approximated by
\[
{\mathbb{E}}\elleft( \frac{{\mathsf{FP}}}{R^\Gamma=(i, r)ma}\right) =\frac{m_{0}p_{0}}{\elleft( m_{0}p_{0}+m_{1}p_{1}\right) ^{\Gamma=(i, r)ma }}+\mathcal{O}(m^{-\Gamma=(i, r)ma/2}).
\]
\begin{proof}
Set $$g({\mathsf{FP}},{\mathsf{TP}}) =\frac{{\mathsf{FP}}}{s({\mathsf{FP}}+{\mathsf{TP}})}\,.$$
Then, we have
$$\frac{\partial g({\mathsf{FP}},{\mathsf{TP}})}{\partial {\mathsf{FP}}} =\frac{(1-\gamma){\mathsf{FP}}+{\mathsf{TP}}}{({\mathsf{FP}}+{\mathsf{TP}})^{\gamma+1}},$$
and
$$\frac{\partial g({\mathsf{FP}},{\mathsf{TP}})}{\partial {\mathsf{TP}}} =-\frac{\gamma {\mathsf{FP}}}{\left({\mathsf{FP}}+{\mathsf{TP}}\right)^{\gamma + 1}}\,.$$
Let $p_0$ be the probability that a given true null hypothesis is rejected (a false positive) and $p_1$ the probability that a given false null hypothesis is rejected (a true positive). Let also $\mu_{{\mathsf{FP}}}$ and $\mu_{{\mathsf{TP}}}$ be the expectations of ${\mathsf{FP}}$ and ${\mathsf{TP}}$ respectively.\\
Under the independence assumption we have $\mu_{{\mathsf{FP}}}=m_0 p_0$ and $\mu_{{\mathsf{TP}}}=m_1 p_1$. We use the delta method to provide an approximation for ${\mathbb{E}}\left( \frac{{\mathsf{FP}}}{R^\gamma}\right)$.
\[
{\mathbb{E}}\left( \frac{{\mathsf{FP}}}{s(R)}\right) \approx \frac{\mu _{{\mathsf{FP}}}}{(\mu _{{\mathsf{FP}}}+\mu _{{\mathsf{TP}}})^{\gamma}}=\frac{
m_{0}p_{0}}{\left( m_{0}p_{0}+m_{1}p_{1}\right) ^{\gamma }}
\]
and
\[
Var\left( \frac{{\mathsf{FP}}}{s({\mathsf{FP}}+{\mathsf{TP}})}\right) \approx \left(\partial_{{\mathsf{FP}}} g(\mu_{{\mathsf{FP}}},\mu_{{\mathsf{TP}}})\right)^2 Var({\mathsf{FP}})+ \left(\partial_{{\mathsf{TP}}} g(\mu_{{\mathsf{FP}}},\mu_{{\mathsf{TP}}})\right)^2 Var({\mathsf{TP}})
\]
since $Cov({\mathsf{FP}},{\mathsf{TP}})=0$ by independence.\\
For $s(R)=R^\gamma\,,$ the variance becomes
\begin{eqnarray*}
Var\left( \frac{{\mathsf{FP}}}{R^\gamma}\right) &\approx& \left(\frac{(1-\gamma)\mu_{{\mathsf{FP}}}+\mu_{{\mathsf{TP}}}}{(\mu_{{\mathsf{FP}}}+\mu_{{\mathsf{TP}}})^{\gamma+1}}\right)^2 m_{0}p_{0}(1-p_{0})+
\left(\frac{\gamma \mu_{{\mathsf{FP}}}}{\left(\mu_{{\mathsf{FP}}}+\mu_{{\mathsf{TP}}}\right)^{\gamma + 1}}\right)^2 m_{1}p_{1}(1-p_{1})\\
&=& \frac{m_0 p_0}{{(\mu_{{\mathsf{FP}}}+\mu_{{\mathsf{TP}}})^{2\gamma+2}}}\left[\left((1-\gamma)\mu_{{\mathsf{FP}}}+\mu_{{\mathsf{TP}}}\right)^2 (1-p_{0})+
\left(\gamma^2 \mu_{{\mathsf{FP}}}\right) m_{1}p_{1}(1-p_{1})\right].
\end{eqnarray*}
We have
\[\mu_{\mathsf{FP}}=m_0 p_0\leq m_0\, t_m \leq \alpha m^{\gamma}\leq m^\gamma,
\]
\[
\left((1-\gamma)\mu_{{\mathsf{FP}}}+\mu_{{\mathsf{TP}}}\right)^2 (1-p_{0})\leq((1-\gamma)m^\gamma +m)^2=C_1 m^2,
\]
\[
\left(\gamma^2 \mu_{{\mathsf{FP}}}\right) m_{1}p_{1}(1-p_{1})\leq \gamma^2 m^\gamma m=C_2 m^{\gamma +1},
\]
and
\[
(\mu_{{\mathsf{FP}}}+\mu_{{\mathsf{TP}}})^{2\gamma+2}\geq C m^{2\gamma+2},
\]
where $C_1$, $C_2$ and $C$ are constants; the last bound uses the fact that $\mu_{{\mathsf{FP}}}+\mu_{{\mathsf{TP}}}$ is of order $m$ in the regime considered here. This leads to
\[
Var\left( \frac{{\mathsf{FP}}}{R^\gamma}\right)\leq m^\gamma \frac{C_1 m^2+C_2 m^{\gamma +1}}{C m^{2\gamma+2}}=\mathcal{O}(m^{-\gamma}).
\]
Hence,
\[
{\mathbb{E}}\left( \frac{{\mathsf{FP}}}{R^\gamma}\right) =\frac{m_{0}p_{0}}{\left( m_{0}p_{0}+m_{1}p_{1}\right) ^{\gamma }}+\mathcal{O}(m^{-\gamma/2}).
\]
\end{proof}
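The quality of this approximation is easy to probe by simulation. A minimal sketch, drawing ${\mathsf{FP}}$ and ${\mathsf{TP}}$ as independent binomials as in the model above (the parameter values are illustrative):
\begin{verbatim}
# Monte Carlo check of  E[FP / R^gamma]  against  m0*p0 / (m0*p0 + m1*p1)^gamma.
import numpy as np

rng = np.random.default_rng(0)
m0, m1, p0, p1, gamma = 900, 100, 0.01, 0.40, 0.5   # illustrative values

FP = rng.binomial(m0, p0, size=200_000)   # false positives
TP = rng.binomial(m1, p1, size=200_000)   # true positives
R = FP + TP
# FP = 0 whenever R = 0, so dividing by max(R, 1) implements the convention FP/s(R) = 0 at R = 0
ratio = FP / np.maximum(R, 1) ** gamma

print("Monte Carlo        :", ratio.mean())
print("delta-method approx:", m0 * p0 / (m0 * p0 + m1 * p1) ** gamma)
\end{verbatim}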
When applying SU$_\gamma$, Equation (\ref{PartI: Equ. findUstar}) becomes
\[
\left( m-m_{0}\right) F\left( u^{\ast }\right) =\left( \frac{
u^{\ast }m_0}{\alpha }\right)^{\frac{1}{\gamma}} -m_{0}u^{\ast},
\]
and the expected loss of (\ref{PartI: Equ. LossFunctionAssymp}) becomes
$$\mtc{L}_{\lambda}=\lambda {\mathbb{E}}[{\mathsf{FP}}_\gamma] -{\mathbb{E}}[{\mathsf{TP}}_\gamma] = \left( \lambda -1\right) \alpha v - v^{\frac{1}{\gamma }} \,.$$
The loss $\mtc{L}_{\lambda}$ is minimized when
\[
\frac{\partial \mathcal{L}_{\lambda}}{\partial \gamma }=\frac{\partial v}{
\partial \gamma }\cdot \left[ -\frac{\log v}{\gamma ^{2}}
\cdot v^{\frac{1}{\gamma }}-\left( \lambda -1\right) \alpha
\right]=0,
\]
which implies that
\[
-\frac{\log v}{\gamma ^{2}}\cdot v^{\frac{1}{\gamma }
}=\left( \lambda -1\right) \alpha.
\]
Finally, the asymptotically optimal value of $\gamma$ for a given unit price $\lambda$ is obtained by solving the system:
\begin{equation}\label{gamma1}
\begin{array}{l}
\left( m-m_{0}\right) F\left( u^{\ast }\right) =\left( \frac{
u^{\ast }m_0}{\alpha }\right)^{\frac{1}{\gamma}} -m_{0}u^{\ast},\\[4pt]
-\frac{\log \left( u^{\ast }m_0/\alpha\right)}{\gamma ^{2}}\cdot \left( u^{\ast }m_0/\alpha\right)^{\frac{1}{\gamma}
}=\left( \lambda -1\right)\alpha.
\end{array}
\end{equation}
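Numerically, rather than working with the stationarity condition directly, one can solve the first equation of (\ref{gamma1}) for $u^*(\gamma)$ on a grid of values of $\gamma$ and minimize the asymptotic loss (\ref{PartI: Equ. LossFunctionAssymp}). A minimal sketch, again under the Gaussian-shift alternative of Section \ref{PartI: section4.4} and with illustrative parameter values:
\begin{verbatim}
# Approximate optimal gamma: solve the first equation of the system for u*(gamma)
# on a grid and minimise the asymptotic loss (lambda - 1)*alpha*v - v^(1/gamma).
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

m, m0, alpha, Delta, lam = 1000, 900, 0.05, 2.0, 10.0     # illustrative values
F = lambda u: 1.0 - norm.cdf(norm.ppf(1.0 - u) - Delta)   # alternative p-value c.d.f.

def u_star(gamma):
    g = lambda u: (m - m0) * F(u) - (u * m0 / alpha) ** (1.0 / gamma) + m0 * u
    return brentq(g, 1e-12, 1.0 - 1e-12)

def loss(gamma):
    v = u_star(gamma) * m0 / alpha
    return (lam - 1.0) * alpha * v - v ** (1.0 / gamma)

grid = np.linspace(0.1, 1.0, 91)
print("approximately optimal gamma:", min(grid, key=loss))
\end{verbatim}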
\section{Simulations}\label{PartI: section4.4}
A simple choice for $F$ is the distribution of the p-value one
obtains from a standardized Gaussian test statistic which under the
alternatives is shifted to the right by a common value $\Delta >0$. The
distribution of the p-values for one-sided tests is then $F_1(u) =
1-\Phi(z_{1-u}-\Delta)$ where $z_u =\Phi^{-1}(u)$. The three parameters
$m_0$, $m_1$ and $\Delta$ characterize a multiple testing problem of the kind
we are going to simulate.\\
We consider multiple comparison situations with either $m=1000$ or $m=10000$ tests. We take $m_1=10, 50$ and $100$ when $m=1000$, and $m_1=100, 500$ and $1000$ when $m=10000$. The distribution of the test statistics is the same as in the above model, with the alternative
effect equal to $\Delta=2$ or $4$. The protection level is $\alpha=0.05$. Figures \ref{PartI: FigOpt1000} and \ref{PartI: FigOpt10000} show the value of $\gamma$ to be used in the case of $s(i)=i^\gamma$ in order to
minimize the expected loss. In each panel, three curves are plotted: first, the optimal value of $\gamma$ obtained by Monte Carlo simulation; second, the value obtained by numerically solving the system of equations (\ref{gamma1}) when the parameters $m_0$ and $\Delta$ are assumed known; the third is identical to the second except that the two parameters $m_0$ and $\Delta$ are estimated using the library ``mixtools'' in the ``R'' software. The optimal value of $\gamma$ decreases as the penalty $\lambda$ for each false discovery increases. The value $\gamma=1$,
which corresponds to the LSU procedure, is only optimal for relatively
small penalties; for larger and more reasonable values it quickly drops
towards $\gamma=0.5$ if there are few true alternatives and towards
$\gamma=0.7$ otherwise. For $m=1000$, the effect $\Delta=2$ is relatively
small and hard to detect. For a larger and more easily detectable effect, the
values of $\gamma$ drop even more quickly. The value $\gamma=0.5$ is a good
default choice if little is known about the number of alternatives and the
effect size. The optimal value of $\gamma$ obtained asymptotically seems to underestimate the true optimal value, especially when $\Delta=2$. This underestimation leads to a stricter control of the false positives.
\section{Exact calculations of the SFDP in the SU case under the unconditional independent model}\label{PartI: section4.5}
The aim of this section is to provide exact expressions for the $\kappa$-th moment of the SFDP, the SEV and the power, for any scaling function $s$, when using the SU procedure with threshold collection $\mathcal{T}_s =(t_r =\frac{s(r)}{m}\alpha)_{1\leq r \leq m}$. The results of this section are based on the work of \cite{Roquain2011exact}, who provided new techniques to derive exact calculations for the FDP and the FDR.\\
Consider the so-called ``two-groups mixture model'' introduced by \cite{Efron2001}, in which $H_i=0$ with probability
$\pi_0$. Let $G(u)=\pi_0 F_0(u)+(1-\pi_0)F_1(u)$ be the common c.d.f. of the p-values, where $F_0$ is the null c.d.f. and $F_1$ is the alternative c.d.f. This model is called the \emph{unconditional model}. In addition, when the p-values $p_1,\ldots,p_m$ are independent, the model is called the \emph{unconditional independent model}.\\
For any $r\geq 0$ and a threshold sequence $\mathcal{T}= (t_1,\ldots,t_r)$, we denote \citep[see][]{Roquain2011exact}
\begin{equation}
\Psi_r(\mathcal{T}) = \Psi_r(t_{1},\ldots,t_{r}) ={\mathbb{P}}\big(U_{(1)}\leq t_{1}, \ldots, U_{(r)}\leq t_r\big),\label{equ_psi}
\end{equation}
where $U_{(1)}\leq \cdots \leq U_{(r)}$ are the order statistics of $r$ i.i.d. random variables uniform on $[0,1]$, with the convention $\Psi_0(\cdot)=1$.
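The quantity \eqref{equ_psi} can be evaluated exactly; one classical way, obtained by conditioning on the first index at which the constraint on the order statistics fails, is the recursion $\Psi_r(t_1,\ldots,t_r)=1-\sum_{k=1}^{r}\binom{r}{k-1}\Psi_{k-1}(t_1,\ldots,t_{k-1})(1-t_k)^{r-k+1}$, with $\Psi_0=1$. A minimal sketch:
\begin{verbatim}
# Exact evaluation of Psi_r(t_1,...,t_r) = P(U_(1) <= t_1, ..., U_(r) <= t_r) via
#   Psi_n = 1 - sum_{k=1}^{n} C(n, k-1) * Psi_{k-1}(t_1,...,t_{k-1}) * (1 - t_k)^(n-k+1).
from math import comb

def psi(t):
    """t is the non-decreasing list [t_1, ..., t_r] of bounds in [0,1]; returns Psi_r(t)."""
    P = [1.0]                                 # P[k] = Psi_k(t_1, ..., t_k)
    for n in range(1, len(t) + 1):
        P.append(1.0 - sum(comb(n, k - 1) * P[k - 1] * (1.0 - t[k - 1]) ** (n - k + 1)
                           for k in range(1, n + 1)))
    return P[-1]

# sanity check against the closed form P(U_(1) <= a, U_(2) <= b) = b^2 - (b - a)^2
print(psi([0.2, 0.5]), 0.5**2 - 0.3**2)
\end{verbatim}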
We also introduce the following quantity. For a threshold sequence $\mathcal{T}=(t_r)_{1\leq r \leq m}$ and $0\leq r\leq m$, we define
\begin{align}
\mathcal{D}_m(\mathcal{T},r) &= {m \choose r} (t_r)^r \Psi_{m-r}\big( 1-t_m,\ldots,1-t_{r+1}\big).\label{equ_for_distsu}
\end{align}
We have that $$\sum_{r=0}^m\mathcal{D}_m(\mathcal{T},r)=1$$ for any threshold sequence $\mathcal{T}$ \citep[see][]{Roquain2011exact}.\\
Recall that the $\kappa$-th moment ($\kappa \geq 1$) of a random variable $X$ following a binomial distribution, $X\sim \mathcal{B}(n,p),$ is given by ${\mathbb{E}}[ X^\kappa]=\sum_{\ell=1}^{\kappa \wedge n} \frac{n!}{(n-\ell)!} \Sti{\kappa}{\ell} p^\ell$,
where $\Sti{\kappa}{\ell}$ are the Stirling numbers of the second kind, defined by $\Sti{\kappa}{0}=0$, $\Sti{\kappa}{\ell}=0$ for $\ell>\kappa$, $\Sti{1}{1}=1$ and the recurrence relation, for all $1\leq \ell \leq \kappa+1$,
$$
\Sti{\kappa+1}{\ell} = \ell \Sti{\kappa}{\ell} + \Sti{\kappa}{\ell-1}.
$$
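As a small illustration, the following sketch evaluates the $\kappa$-th moment of a binomial variable through this recurrence and checks it against the direct sum over the probability mass function (the values of $n$, $p$ and $\kappa$ are arbitrary).
\begin{verbatim}
# kappa-th moment of X ~ B(n, p) via Stirling numbers of the second kind.
from math import comb, factorial

def stirling2(k, l):
    if l <= 0 or l > k:
        return 1.0 if k == l == 0 else 0.0
    return l * stirling2(k - 1, l) + stirling2(k - 1, l - 1)

def binom_moment(n, p, kappa):
    return sum(factorial(n) // factorial(n - l) * stirling2(kappa, l) * p ** l
               for l in range(1, min(kappa, n) + 1))

n, p, kappa = 5, 0.3, 2
direct = sum(k ** kappa * comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1))
print(binom_moment(n, p, kappa), direct)   # both equal 3.3
\end{verbatim}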
The following theorem is stated and demonstrated by \cite{Roquain2011exact}.
\begin{theorem}\label{main_indep}
When testing $m\geq 2$ hypotheses, consider a SU procedure with threshold sequence $\mathcal{T}$ and rejection set ${\mathcal{R}}(\mathcal{T})$. Then for all $\pi_0\in [0,1]$, we have under the unconditional independent model, for any $r\geq 1$,
\begin{align}
{\mathsf{FP}}=|{\mathcal{R}}(\mathcal{T}) \cap I_0 | \ \mbox{ given }\ R\equiv| {\mathcal{R}}(\mathcal{T}) |=r \:\:\:\:\sim\:\: \mathcal{B}\bigg(r, \frac{\pi_0 F_0(t_r)}{G(t_r)}\bigg).\label{equ_distrFDP}
\end{align}
\end{theorem}
From this theorem, we derive the following formulas.
For any $x\in(0,1)$
\begin{equation}
{\mathbb{P}}[\mbox{SFDP} \leq x ]= \sum_{r=0}^{m} \sum_{j=0}^{ \lfloor x s(r) \rfloor} {{r} \choose {j}} \bigg(\frac{\pi_0F_0(t_r)}{G(t_r)}\bigg)^{j} \bigg(\frac{\pi_1F_1 (t_r)}{G(t_r)}\bigg)^{r-j} \mathcal{D}_m\big( [G(t_{j})]_{1\leq j\leq m},r\big),
\label{equ_FDP_indep}
\end{equation}
where we used the fact that ${\mathbb{P}} (R=r)=\mathcal{D}_m\big( [G(t_{j})]_{1\leq j\leq m},r\big)$ \citep[see][]{Roquain2011exact}.
\begin{equation}
{\mathbb{E}}[\mbox{SFDP}^{\kappa} ]= \sum_{\ell=1}^{\kappa \wedge m} \frac{m!}{(m-\ell)!}\Sti{\kappa}{\ell} \pi_0^\ell \sum_{r=\ell}^{m} \frac{F_0(t_r)^\ell}{s(r)^\kappa} \: \mathcal{D}_{m-\ell}\big( [G(t_{j+\ell})]_{1\leq j\leq m-\ell} ,r-\ell\big).
\label{equ_mom_FDP_indep}
\end{equation}
\begin{equation}
\mbox{SEV}=\pi_0 m \sum_{r=1}^{m} \frac{F_0(t_r)}{s(r)} \: \mathcal{D}_{m-1}\big( [G(t_{j+1})]_{1\leq j\leq m-1} ,r-1\big).\label{equ_FDR_indep}
\end{equation}
We can apply \eqref{equ_FDR_indep} in the case where $t_r=\alpha s(r) /m$ to deduce that $\mbox{SEV}=\pi_0\alpha$ in the unconditional independent model. Furthermore, \cite{Roquain2011exact} derived a formula for the power of any SU procedure with threshold sequence $\mathcal{T}$.
\begin{equation}
\mbox{Pow}(\mbox{SU}(\mathcal{T})) = \sum_{r=1}^{m} F_1( t_r) \: \mathcal{D}_{m-1}\big([G(t_{j+1})]_{1\leq j\leq m-1},r-1\big).
\label{equ_Pow_indep}
\end{equation}
When using the threshold sequence $\mathcal{T}_s$ with $t_r=\alpha s(r) /m$, the power becomes
$$\mbox{Pow}(\mathcal{T}_s) = \sum_{r=1}^{m} F_1( \alpha s(r)/m) {{m-1} \choose {r-1}} \big(G(\alpha s(r)/m)\big)^{r-1} \Psi_{m-r}\big( 1-G(\alpha s(m)/m),\ldots,1-G(\alpha s({r+1})/m)\big).$$
These formulas can help to determine the choice of scaling function that maximizes a given criterion of optimality.
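As an illustration, the following sketch evaluates \eqref{equ_FDR_indep} and the power formula above for the family SU$_\gamma$ (i.e. $s(r)=r^\gamma$), with uniform null p-values and the Gaussian-shift alternative of Section \ref{PartI: section4.4}; the parameter values are illustrative. It reuses the recursion for $\Psi$ given after \eqref{equ_psi} and confirms numerically that $\mbox{SEV}=\pi_0\alpha$.
\begin{verbatim}
# Exact SEV and power of the SU procedure with thresholds t_r = alpha*s(r)/m, s(r) = r^gamma,
# in the unconditional independent model, using
#   D_M(T, r) = C(M, r) * T_r^r * Psi_{M-r}(1 - T_M, ..., 1 - T_{r+1}).
from math import comb
from scipy.stats import norm

def psi(t):                      # same recursion as in the previous sketch
    P = [1.0]
    for n in range(1, len(t) + 1):
        P.append(1.0 - sum(comb(n, k - 1) * P[k - 1] * (1.0 - t[k - 1]) ** (n - k + 1)
                           for k in range(1, n + 1)))
    return P[-1]

def D(T, r):                     # D_M(T, r) for a threshold sequence T of length M
    M = len(T)
    tail = [1.0 - T[j] for j in range(M - 1, r - 1, -1)]      # (1 - T_M, ..., 1 - T_{r+1})
    head = comb(M, r) * T[r - 1] ** r if r >= 1 else 1.0
    return head * psi(tail)

m, m0, alpha, Delta, gamma = 50, 40, 0.05, 2.0, 0.5           # illustrative values
pi0 = m0 / m
F1 = lambda u: 1.0 - norm.cdf(norm.ppf(1.0 - u) - Delta)      # alternative p-value c.d.f.
t = [alpha * (r + 1) ** gamma / m for r in range(m)]          # thresholds t_1, ..., t_m
G = [pi0 * u + (1 - pi0) * F1(u) for u in t]                  # G(t_1), ..., G(t_m)

SEV = pi0 * m * sum(t[r - 1] / r ** gamma * D(G[1:], r - 1) for r in range(1, m + 1))
Pow = sum(F1(t[r - 1]) * D(G[1:], r - 1) for r in range(1, m + 1))
print("SEV   =", SEV, "  (pi0*alpha =", pi0 * alpha, ")")
print("Power =", Pow)
\end{verbatim}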
\section{Conclusion}
We discussed in this paper ideas on how to choose a scaling function in multiple comparisons. The framework in which we studied this choice used a new point of view, different from the classical view of level and power. The classical approach needs to be rethought and adapted to the multiple comparisons context with large numbers of hypotheses. Under the proposed framework, we derived asymptotic results, especially for a particular family of scaling functions. In a simulation study we showed that an intermediate choice is usually preferable. We also provided exact formulas for the SFDP and the SEV. These formulas can be used in future investigations of the optimal choice of scaling functions.
\end{document}
\begin{document}
\begin{abstract}
We show that a monotone Lagrangian $L$ in $\mathbb{C}\mathbb{P}^n$ of minimal Maslov number $n + 1$ is homeomorphic to a double quotient of a sphere, and thus homotopy equivalent to $\mathbb{R}\mathbb{P}^n$. To prove this we use Zapolsky's canonical pearl complex for $L$ with coefficients in $\mathbb{Z}$, and various twisted versions thereof, where the twisting is determined by connected covers of $L$. The main tool is the action of the quantum cohomology of $\mathbb{C}\mathbb{P}^n$ on the resulting Floer homologies.
\end{abstract}
\title{Monotone Lagrangians in $\mathbb{C}\mathbb{P}^n$ of minimal Maslov number $n+1$}
\section{Introduction}
\subsection{Background and statement of results}
Fix a positive integer $n$ and suppose $L \subset \mathbb{C}\mathbb{P}^n$ is a closed, connected, monotone Lagrangian submanifold of minimal Maslov number $N_L = n + 1$ (see \cref{section soft concepts} below for a review of the Maslov index and monotonicity). It is well-known, following Seidel \cite{SeidelGraded}, that this is the maximal possible value of $N_L$ for a monotone Lagrangian in projective space. It is attained by the standard $\mathbb{R}\mathbb{P}^n \subset \mathbb{C}\mathbb{P}^n$, but up to Hamiltonian isotopy there are no other known examples. In this note we show the following:
\begin{thm}
\label{Theorem1}
Let $L \subset \mathbb{C}\mathbb{P}^n$ be a closed, connected, monotone Lagrangian submanifold satisfying $N_L = n + 1$. Then
$L$ has fundamental group $\mathbb{Z}/2$ and its universal cover is homeomorphic to $S^n$.
\end{thm}
\noindent
Combined with \cite[Lemma 3]{HirschMilnorInvolutionsOfSpheres}, \cref{Theorem1} immediately implies:
\begin{cor}\label{corollary homotopy equivalence}
$L$ is homotopy equivalent to $\mathbb{R}\mathbb{P}^n$.
\end{cor}
This constitutes a step towards answering a question posed by Biran and Cornea in \cite[Section 6.2.5]{BiranCorneaQS}, informally asking whether a Lagrangian $L$ in $\mathbb{C}\mathbb{P}^n$ which ``looks like'' $\mathbb{R}\mathbb{P}^n$ must be (diffeomorphic to, Hamiltonian isotopic to) $\mathbb{R}\mathbb{P}^n$; some history of this problem is discussed in \cref{subsection previous results}. For us ``looks like'' will always mean that it is monotone of minimal Maslov number $n + 1$. When $n = 1$ or $2$, the answer is as strong as possible: any such Lagrangian is Hamiltonian isotopic to $\mathbb{R}\mathbb{P}^n$. This is trivial for $n=1$, whilst the $n = 2$ case follows from recent work of Borman--Li--Wu \cite[Theorem 1.3]{BormanLiWuSphericalLags}. Our \cref{Theorem1} allows us to prove diffeomorphism for $n = 3$:
\begin{cor}
Let $L\subset \mathbb{C}\mathbb{P}^3$ be a monotone Lagrangian of minimal Maslov number $4$. Then $L$ is diffeomorphic to $\mathbb{R}\mathbb{P}^3$.
\end{cor}
\begin{proof}
By \cite[Theorem 3]{LivesayInvolutionsOfS3} any fixed-point-free involution on $S^3$ is conjugate to the antipodal map by a homeomorphism, and in dimension $3$ the topological and smooth categories are equivalent.
\end{proof}
However, we cannot easily upgrade homotopy equivalence to diffeomorphism for $n \geq 4$ as the papers \cite{CappellShaneson} ($n=4$) and \cite{HirschMilnorInvolutionsOfSpheres} ($n \geq 5$) show.
We end this discussion by noting that \cref{corollary homotopy equivalence} implies a version of the nearby Lagrangian conjecture for $\mathbb{R}\mathbb{P}^n$:
\begin{cor}
\label{corollary nearby Lagrangian}
Any closed, connected, exact Lagrangian in $T^*\mathbb{R}\mathbb{P}^n$ with vanishing Maslov class is homotopy equivalent to the zero section.
\end{cor}
\begin{proof}
Let $L$ be such a Lagrangian. Note that $\mathbb{C}\mathbb{P}^n$ decomposes as the union of a quadric $Q$ and a disjoint Weinstein neighbourhood $U$ of the standard $\mathbb{R}\mathbb{P}^n$, so by rescaling $L$ towards the zero section if necessary we may assume it embeds in $U$ and hence in $\mathbb{C}\mathbb{P}^n$. By considering the long exact sequence in cohomology for the triple $(\mathbb{C}\mathbb{P}^n, U, L)$, with real coefficients, the exactness and vanishing of the Maslov class of $L$ in $U$ imply that $L$ is monotone in $\mathbb{C}\mathbb{P}^n$. (See \cref{section soft concepts} for a summary of the Maslov class and monotonicity.)
More specifically, $H^2(\mathbb{C}\mathbb{P}^n, U; \mathbb{Z})$ is freely generated by the class $\alpha$ given by ``intersection with $Q$'', and there exist positive constants $A$ and $M$ such that for every disc class $\beta$ in $\pi_2(\mathbb{C}\mathbb{P}^n, L)$ the area and Maslov index of $\beta$ are given by $A\alpha(\beta)$ and $M\alpha(\beta)$ respectively. Since the Maslov index of a line is $2(n+1)$ (twice its Chern number), we see that $M$ is in fact $n+1$. Hence $L$ has minimal Maslov number $n+1$ and \cref{corollary homotopy equivalence} gives the result.
\end{proof}
This result was already known from the work of Abouzaid \cite{AbouzaidNearbyLagrangians}, building on Fukaya--Seidel--Smith \cite{FukayaSeidelSmith} and Nadler \cite{NadlerMicrolocalBranes}, but our approach is much more elementary. Kragh later removed the Maslov-zero hypothesis with Abouzaid \cite{KraghRingSpectra}, and subsequently gave a simpler proof \cite{KraghHomotopyEquivalenceSerre} of a weaker statement which also implies \cref{corollary nearby Lagrangian}, but using completely different methods.
\subsection{Relation to previous works}
\label{subsection previous results}
Monotone Lagrangians $L$ in $\mathbb{C}\mathbb{P}^n$, and especially those which resemble $\mathbb{R}\mathbb{P}^n$, have been intensively studied. Back in \cite[Theorem 3.1]{SeidelGraded} Seidel showed that for any monotone $L \subset \mathbb{C}\mathbb{P}^n$ the group $H^1(L; \mathbb{Z}/(2n+2))$ is non-zero (this is roughly equivalent to the fact that the minimal Maslov number of $L$ is at most $n+1$, since the mod-$(2n+2)$ reduction of the Maslov class in $H^2(X, L; \mathbb{Z}/(2n+2))$ lifts to $H^1(L; \mathbb{Z}/(2n+2))$ and it is this lift which is shown to be non-zero), and that if it's $2$-torsion then there is an isomorphism of graded $\mathbb{Z}/2$-vector spaces $H^*(L; \mathbb{Z}/2) \cong H^*(\mathbb{R}\mathbb{P}^n; \mathbb{Z}/2)$. He did this by showing that the Floer cohomology of $L$ is $2$-periodic in its grading, and comparing it with the classical cohomology of $L$ via the Oh spectral sequence \cite{OhSpectralSequence} (also constructed by Biran--Cornea \cite{BiranCorneaQS}, and recapped in \cref{CanonicalComplex} below). In particular, if $L \subset \mathbb{C}\mathbb{P}^n$ is a Lagrangian satisfying $2H_1(L; \mathbb{Z})=0$ (which automatically implies that it's monotone and that its minimal Maslov number is $n+1$) then $L$ is additively a $\mathbb{Z}/2$-homology $\mathbb{R}\mathbb{P}^n$.
Later, Biran--Cieliebak \cite[Theorem B]{BiranCieliebak} reproved the first part of Seidel's result by introducing the important \emph{Biran circle bundle} construction, which associates to a monotone Lagrangian in $\mathbb{C}\mathbb{P}^n$ a displaceable one in $\mathbb{C}^{n + 1}$ and then uses the vanishing of the Floer cohomology of the latter to constrain the topology of the former via the Gysin sequence. Combining this construction with the Oh spectral sequence, Biran \cite[Theorem A]{BiranNonIntersections} then reproved the second part of Seidel's result---the $\mathbb{Z}/2$-homology isomorphism---but under the hypothesis that $L \subset \mathbb{C}\mathbb{P}^n$ is monotone and of minimal Maslov number $n+1$ (he states the assumption that $H_1(L; \mathbb{Z})$ is $2$-torsion but only uses the monotonicity and minimal Maslov consequences). Note that, in conjunction with the classification of surfaces, this result already shows that for $n = 2$ the Lagrangian must be diffeomorphic to $\mathbb{R}\mathbb{P}^2$.
The next major development was the introduction of the pearl complex model for Floer cohomology by Biran--Cornea \cite{BiranCorneaQS}, using which they gave another proof of the additive isomorphism $H^*(L;\mathbb{Z}/2) \cong H^*(\mathbb{R}\mathbb{P}^n; \mathbb{Z}/2)$ and showed that it is in fact an \emph{algebra} isomorphism if $H_1(L;\mathbb{Z})$ is $2$-torsion \cite[Section 6.1]{BiranCorneaRigidityUniruling} (this was partially proved in \cite{BiranNonIntersections}; the $2$-torsion assumption is only used for odd $n$). The key ingredient is the quantum module action of the hyperplane class $h$ in $QH^*(\mathbb{C}\mathbb{P}^n;\mathbb{Z}/2)$ on $HF^*(L,L;\mathbb{Z}/2)$: since $h$ is invertible, this gives an isomorphism
\[
h \mathbin{*} - : HF^*(L, L; \mathbb{Z}/2) \xrightarrow{\ \sim \ } HF^{*+2}(L, L; \mathbb{Z}/2)
\]
which subsumes both Seidel's periodicity observation and the circle bundle Gysin sequence map given by cupping with the Euler class.
The strongest results to date were then obtained by Damian \cite[Theorem 1.8 c)]{Damian}, who applied his lifted Floer theory to the circle bundle construction to show that when $n$ is odd and $2H_1(L;\mathbb{Z}) = 0$, $L$ must be homeomorphic to a double quotient of $S^n$. In fact, under our weaker hypothesis---monotonicity and minimal Maslov number $n+1$---Damian's methods can be pushed to give:
\begin{align}
&\text{For odd $n$, the universal cover $\widetilde{L}$ is homeomorphic to $S^n$ and $\pi_1(L)$ is finite.}\label{eqDamianOdd}
\\ &\text{For even $n$, $\widetilde{L}$ is a $\mathbb{Z}/2$-homology sphere and $\pi_1(L) \cong \mathbb{Z}/2$.}\label{eqDamianEven}
\end{align}
We sketch these arguments in \cref{subsection Damian}, and thank the referee for pointing them out to us.
Our \cref{Theorem1} strengthens these results by showing that, regardless of the parity of $n$, $\widetilde{L}$ is homeomorphic to $S^n$ and $\pi_1(L) \cong \mathbb{Z}/2$. One notable feature of Damian's approach is its reliance on the ingenious auxiliary construction of the circle bundle, which replaces our Lagrangian $L$ in $\mathbb{C}\mathbb{P}^n$ with the related Lagrangian $\Gamma_L$ in $\mathbb{C}^{n+1}$ that is necessarily displaceable. Part of our motivation was to see whether one could prove the same results by directly studying the Floer theory of $L$, and this paper answers that question in the affirmative.
\subsection{Idea of proof}
The proof of \cref{Theorem1} is a combination of the quantum module action and lifted Floer theory, which we discuss within the more general framework of higher rank local systems. For many of the arguments we need to use Floer theory with $\mathbb{Z}$ coefficients, rather than $\mathbb{Z}/2$, which is problematic when $n$ is even because then $L$ is necessarily non-orientable, and hence so are the moduli spaces of holomorphic discs that we wish to count. However, we are able to work around this using the recently-introduced canonical orientations package of Zapolsky \cite{Zapolsky}, of which the present paper represents one of the first concrete applications.
Ignoring many technicalities, the idea (for $n \geq 3$) is roughly as follows. Since $N_L \geq 4 > 2$, the Floer cohomology of $L$ can be defined with arbitrary higher rank local systems; in particular, lifted Floer theory is defined for all covers $L'$ of $L$. For such covers, the Oh spectral sequence computing $HF^*(L, L'; R)$ contains the compactly-supported singular cohomology $H^*_c(L'; R)$ in the zeroth column of its first page, as shown in \cref{figLprime}.
\begin{figure}
\caption{The zeroth column of the first page of the Oh spectral sequence computing $HF^*(L, L'; R)$.}
\label{figLprime}
\end{figure}
As $N_L= n+1$ all of the cohomology groups $H^*_c(L'; R)$ with $0 < * < n$ survive to the limit. We are being deliberately vague about the choice of ring $R$ and what the terms labelled $\bullet$ are in the spectral sequence.
There is an algebra homomorphism $\mathbb{C}O : QH^*(\mathbb{C}\mathbb{P}^n; R) \rightarrow HF^*(L, L;R)$ called the length-zero closed--open string map \cite[Definition 2.3]{SheridanFano}, which is described in Zapolsky's framework \cite[Section 3.9.3]{Zapolsky} as the quantum module action of $QH^*$ on the unit $1_L$ in $HF^0$. When $R=\mathbb{Z}/2$ we show that it is an isomorphism in degree $2$. Since $N_L \geq 4$, this map in degree $2$ coincides with the classical restriction $i^* : H^2(\mathbb{C}\mathbb{P}^n; R) \rightarrow H^2(L; R)$, so we deduce that the latter is also an isomorphism and hence that $L$ is relatively pin. Using Zapolsky's machinery, this allows us to take $R = \mathbb{Z}$.
The Auroux--Kontsevich--Seidel criterion (\cite[Proposition 6.8]{AurouxMSandTduality}, \cite[Lemma 2.7]{SheridanFano}) now tells us that $\mathbb{C}O(2(n+1)h) = 0$, so $H^1(L; \mathbb{Z})$ (which coincides with $HF^1(L, L; \mathbb{Z})$) is $2(n+1)$-torsion and therefore vanishes ($H^1$ is always torsion-free). A topological argument then shows that $i^*h$ has order $2$ in $H^2(L; \mathbb{Z})$, so $\mathbb{C}O(2h)=0$. Hence, by the quantum module action of $h$, all intermediate compactly-supported cohomology groups of $L'$ are $2$-torsion and $2$-periodic. Letting $L'$ range through the covers of $L$ corresponding to cyclic subgroups of $\pi_1(L)$ yields the result.
\subsection{Structure of the paper}
In \cref{section soft concepts} we review the Maslov index and monotonicity. \Cref{section Floer review} then gives a summary of Zapolsky's canonical pearl complex, including: its algebraic structures (\cref{subsection algebraic structures}); their relation to classical operations (\cref{classical comparison}); orientations and the relevance of relative pin structures (\cref{RelPin}); local systems (\cref{TwistedCoeffs}); the worked example of $\mathbb{R}\mathbb{P}^n$ (\cref{RPnExample}), which we compute in a different way from Zapolsky; and an outline of Damian's methods (\cref{subsection Damian}). Finally, \cref{section proof} contains the full proof of \cref{Theorem1}.
\section{Soft concepts}\label{section soft concepts}
We begin by recalling some general facts about the Maslov class, the minimal Maslov number and monotonicity. For completeness we state the definitions and observations in their most general form, but for the purposes of the rest of this paper we will only use the special case \cref{lemma: the restriction of the hyperplane class to a lag in cpn}, so the reader familiar with the concepts is invited to skip the interlude. In this section all homology and cohomology groups are considered with $\mathbb{Z}$ coefficients, unless explicitly specified otherwise.
Let $(X, J)$ be an almost complex manifold of real dimension $2n$ and $L \subset X$ a properly embedded totally real submanifold of dimension $n$. The bundle $\Lambda^n_{\mathbb{R}}TL$ is naturally a rank $1$ real subbundle of $\at{\Lambda^n_{\mathbb{C}}TX}{L}$, so the bundle pair $(\Lambda^n_\mathbb{C} TX, \Lambda^n_\mathbb{R} TL)$ over $(X, L)$ is classified by a map
\[
\varphi : (X, L) \rightarrow (B\mathrm{U}(1), B(\mathbb{Z}/2)).
\]
One can view the pair $(B\mathrm{U}(1), B(\mathbb{Z}/2))$
as
\[
B(\mathbb{Z}/2) \cong \mathbb{R}\mathbb{P}^\infty = \mathrm{Gr}_\mathbb{R}(1, \mathbb{R}^\infty) \xrightarrow{\ \otimes \mathbb{C} \ } \mathrm{Gr}_\mathbb{C}(1, \mathbb{C}^\infty) = \mathbb{C}\mathbb{P}^\infty \cong B\mathrm{U}(1).
\]
The long exact sequence for the pair shows that $H^2(B\mathrm{U}(1), B(\mathbb{Z}/2))$ is isomorphic to $\mathbb{Z}$, generated by a relative characteristic class which maps to $2c_1$ in $H^2(B\mathrm{U}(1))$. This generator is called the \emph{Maslov class}, denoted by $\mu$, and its pullback via $\varphi$ is the Maslov class of $L$, denoted by $\mu_L \in H^2(X, L)$. If $j^* \colon H^2(X, L) \to H^2(X)$ is the natural restriction map, then it is clear from the above description that one has
\begin{equation}\label{eq Viterbo}
j^*(\mu_L) = 2c_1(X).
\end{equation}
We will write $I_{\mu_L} \colon H_2(X, L) \to \mathbb{Z}$ and $I_{c_1}\colon H_2(X) \to \mathbb{Z}$ for the group homomorphisms given by pairing with $\mu_L$ and $c_1$ respectively.
\begin{rmk}\label{remarkMuvsW1}
Note that the long exact sequence of the pair $(B\mathrm{U}(1), B(\mathbb{Z}/2))$ with $\mathbb{Z}/2$-coefficients shows that the mod $2$ reduction of $\mu_L$ equals the image of the first Stiefel--Whitney class of $TL$ under the co-boundary map $H^1(L;\mathbb{Z}/2) \to H^2(X,L;\mathbb{Z}/2)$. In particular, for any class $A \in H_2(X, L)$, the parity of $I_{\mu_L}(A)$ is determined by whether the pairing of $w_1(TL)$ with $\partial A$ vanishes. Thus, if $L$ is orientable then $I_{\mu_L}$ has image in $2\mathbb{Z}$ and, conversely, if $I_{\mu_L}(H_2(X, L)) \le 2\mathbb{Z}$ and the boundary map $H_2(X, L) \to H_1(L)$ is surjective (e.g.~if $H_1(X) = 0$), then $L$ is orientable.
\end{rmk}
Now let $H_2^D(X, L)$ and $H_2^S(X)$ denote the images of the Hurewicz homomorphisms
\[
\pi_2(X, L) \to H_2(X, L) \text{\quad and \quad} \pi_2(X) \to H_2(X)
\]
and let $j \colon H_2(X) \to H_2(X, L)$ be the natural map. Define the integers $N_L^{\pi}$, $N_L^H$, $N_X^{\pi}$ and $N_X^H$ to be the non-negative generators of the $\mathbb{Z}$-subgroups $I_{\mu_L}(H_2^D(X, L))$, $I_{\mu_L}(H_2(X, L))$, $I_{c_1}(H_2^S(X))$, $I_{c_1}(H_2(X))$, respectively. Using \eqref{eq Viterbo} and the fact that $j(H_2^S(X)) \le H_2^D(X, L)$, it is easy to see that there exist non-negative integers $k_L, k_X, m_{\pi}, m_H$ such that:
\begin{equation}
\label{equation minimal numbers}
N_L^{\pi} = k_L N_L^H, \quad N_X^{\pi} = k_X N_X^H, \quad 2N_X^{\pi} = m_{\pi}N_L^{\pi}, \quad
2N_X^H = m_H N_L^H.
\end{equation}
Observe that if $N_L^H \neq 0$ (e.g. if $N_X^H \neq 0$), then one has the identity
\begin{equation}\label{eq relation between kl, km, mpi and mh}
k_L m_{\pi} = k_X m_H.
\end{equation}
We note the following result for later:
\begin{lem}\label{lemma: the main soft observation lemma}
Suppose that $N_L^H \neq 0$, $H^1(L) = 0$, and $H^2(X)$ is isomorphic to $\mathbb{Z}$, generated by some class $h$. Then the restriction of $h$ to $H^2(L)$ has order $m_H$.
\end{lem}
\begin{proof}
The long exact sequence in cohomology for the pair $(X, L)$ yields the exact sequence
\[
\xymatrix{
H^1(L) \ar[r]\ar@{=}[d]& H^2(X, L) \ar[r]\ar@{=}[d]& H^2(X) \ar[r]^{i^*}\ar@{=}[d]& H^2(L)\ar@{=}[d]\\
0 \ar[r]& H^2(X, L) \ar[r]^{j^*}& \mathbb{Z}\langle h \rangle \ar[r]^{i^*}& H^2(L),}
\]
where $i : L \rightarrow X$ is the inclusion. This tells us that $H^2(X, L)$ injects into $\mathbb{Z}\langle h \rangle$ and so is freely generated by some class $g \in H^2(X, L)$, which is non-zero since $N_L^H \neq 0$. By the universal coefficients theorem there exists a class $u \in H_2(X, L)$ with which $g$ pairs to $1$, and hence $\mu_L = N_L^H g$. The same argument shows that $c_1 = N_X^H h$. Applying $j^*$ to the identity $\mu_L = N_L^H g$ and using \eqref{eq Viterbo} we then get $2 N_X^H h = N_L^H j^*(g)$ and hence $j^*(g) = m_H \,h$. By exactness of the above diagram it follows that $i^*(h)$ has order $m_H$ in $H^2(L)$.
\end{proof}
Consider now the case when $(X, \omega)$ is symplectic and $L$ is a Lagrangian submanifold. Then $L$ is totally real with respect to any almost complex structure compatible with the symplectic form.
In this setting we also have homomorphisms $I_{\omega} \colon H_2(X) \to \mathbb{R}$, $I_{\omega, L} \colon H_2(X, L) \to \mathbb{R}$ given by integration of the symplectic form. The manifold $(X, \omega)$ is called \emph{monotone} if there exists a positive constant $\lambda$ such that
\[\at{I_{\omega}}{H_2^S(X)} = 2 \lambda \at{I_{c_1}}{H_2^S(X)}.\]
For example, $(\mathbb{C}\mathbb{P}^n, \omega_{FS})$ is monotone with $\lambda = \pi/(2(n + 1))$ when the Fubini--Study form is normalised so that a line has area $\pi$.
In turn, the Lagrangian submanifold $L$ is called \emph{monotone} if
\[\at{I_{\omega, L}}{H_2^D(X, L)} = \lambda' \at{I_{\mu_L}}{H_2^D(X, L)}\]
for some positive constant $\lambda'$. Note that if $\at{I_{c_1}}{H_2^S(X)} \neq 0$ then \eqref{eq Viterbo} implies that a monotone Lagrangian can only exist if $X$ itself is monotone and $\lambda'$ coincides with $\lambda$.
In the literature on holomorphic curves, the numbers $N_X^{\pi}$ and $N_L^{\pi}$ are usually the ones referred to as the \emph{minimal Chern number} of $X$ and the \emph{minimal Maslov number} of $L$, respectively. This can potentially cause confusion since these numbers are not the same as $N_X^H$ and $N_L^H$ in general. However, if $X$ is simply connected (for example, if it is a projective Fano variety---see \cite{KollarMiyaokaMori} and \cite[Theorem 3.5]{Campana}), then these numbers coincide. Indeed, we have the commutative diagram
\[
\xymatrix{
\pi_2(X) \ar[d] \ar[r]& \pi_2(X, L) \ar[d]\ar[r]& \pi_1(L) \ar[d]\ar[r]& 1 \ar[d]\\
H_2(X) \ar[r] & H_2(X, L) \ar[r]& H_1(L) \ar[r]& 0
}
\]
in which the third vertical arrow is a surjection by Hurewicz. If $X$ is simply connected then the first vertical arrow is also a surjection, again by Hurewicz, so $N_X^\pi = N_X^H$. A diagram chase in the spirit of the $5$-lemma (or alternatively, noticing that $\pi_1(X,L) = 0$ and applying the relative Hurewicz theorem) then shows that the second vertical arrow must also be a surjection, from which we deduce that $N_L^{\pi} = N_L^H$. In this case there is therefore no ambiguity, and we denote the common values simply by $N_X$ and $N_L$ respectively.
Consider again the example of $X = \mathbb{C}\mathbb{P}^n$, with $L \subset \mathbb{C}\mathbb{P}^n$ a totally real submanifold. We then have $N_{\mathbb{C}\mathbb{P}^n} = n + 1$ and by \eqref{equation minimal numbers} $N_L$ is non-zero and divides $2(n + 1)$. As a corollary of \cref{lemma: the main soft observation lemma} we immediately obtain:
\begin{lem}\label{lemma: the restriction of the hyperplane class to a lag in cpn}
If $L \subset \mathbb{C}\mathbb{P}^n$ is a totally real submanifold with $H^1(L) = 0$ and minimal Maslov number $N_L$
then the restriction of the hyperplane class $h \in H^2(\mathbb{C}\mathbb{P}^n)$ to $L$ has order $2(n + 1)/N_L$ in $H^2(L)$.
\end{lem}
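For instance, for the standard $\mathbb{R}\mathbb{P}^n \subset \mathbb{C}\mathbb{P}^n$ with $n \geq 2$ we have $H^1(\mathbb{R}\mathbb{P}^n; \mathbb{Z}) = 0$ and $N_L = n+1$, so the lemma predicts that the restriction of $h$ has order $2$; this is consistent with the fact that $H^2(\mathbb{R}\mathbb{P}^n; \mathbb{Z}) \cong \mathbb{Z}/2$ and that $h$ restricts to its non-zero element.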
\section{Floer theory review}
\label{section Floer review}
\subsection{The canonical pearl complex}
\label{CanonicalComplex}
Our argument for \cref{Theorem1} is based on consideration of the self-Floer theory of $L$. In particular, we employ Zapolsky's \emph{canonical pearl complex} which we now review briefly; see \cite{Zapolsky} for further details. Let $(X, \omega)$ be a symplectic manifold which is either closed or convex at infinity, and let $L \subset X$ be a closed, connected, monotone Lagrangian submanifold of minimal Maslov number $N_L^{\pi}$ at least $2$. We consider the following condition \cite[Section 1.2]{Zapolsky}:
\begin{center}
\begin{tabular}{m{0.95\textwidth}}
\emph{Assumption (O):} For some point (or, equivalently, all points) $q$ in $L$, the second Stiefel--Whitney class $w_2(TL)$ vanishes on the image of $\pi_3(X, L, q)$ in $\pi_2(L, q)$ under the boundary map.
\end{tabular}
\end{center}
As explained in \cref{RelPin} below, this is implied by $L$ being relatively pin. Fix a ground ring $R$. This must have characteristic $2$ unless $L$ satisfies assumption (O), in which case it is arbitrary.
Fix a generic choice of $\omega$-compatible almost complex structure $J$ on $X$. For each point $q$ in $L$ and each class $A$ in $\pi_2(X, L, q)$, Zapolsky considers the family $D_A$ of linear Cauchy--Riemann operators over the space of based discs in class $A$. We restrict these operators to vector fields vanishing at the base point and denote the resulting family by $D_A \# 0$. By assumption (O), the index bundle of $D_A \# 0$ is orientable (see \cref{RelPin} below) and we define $\mathscr{C}(q, A)$ to be the free $R$-module of rank $1$ generated by its two orientations, modulo the relation that they sum to zero. Taking the direct sum of these modules over $A$ we obtain a module $\mathscr{C}^*_q$, graded by the Maslov index:
\[\mathscr{C}^r_q \coloneqq \bigoplus_{\substack{A \in \pi_2(X, L, q)\\ I_{\mu_L}(A) = r}} \mathscr{C}(q, A).\] As $q$ varies these modules fit together to form a local system over $L$ which we denote by $\mathscr{C}^*$. Note that $\mathscr{C}^0$ contains a copy of the trivial local system given by $\mathscr{C}(q, 0)$ inside each fibre $\mathscr{C}^0_q$; we denote this by $\mathscr{C}^\mathrm{triv}$.
Now choose a Morse function $f$ on $L$ and a metric $g$ such that the pair $(f, g)$ is Morse--Smale. For each critical point $q \in \mathrm{Crit}(f)$ let $C(q)$ denote the rank $1$ free $R$-module generated by the orientations of the descending manifold of $q$ (modulo summing to zero, as usual; we won't keep repeating this). Zapolsky's complex $CF^{*, *}_\mathrm{Zap} (L, L; R)$ is given by \cite[Section 4.2.1]{Zapolsky}
\[
CF^{r, s}_\mathrm{Zap} (L, L; R) = \bigoplus_{\substack{q \in \mathrm{Crit}(f) \\ |q| = s}}\mathscr{C}^r_q \otimes_R C(q),
\]
where $|q|$ denotes the Morse index of the critical point $q$. The differential, of total degree $1$, counts rigid \emph{pearly trajectories}, meaning upwards Morse flow lines which may be interrupted by the boundaries of $J$-holomorphic discs in $X$ bounded by $L$. More precisely, for each critical point $q \in \mathrm{Crit}(f)$ and each class $A \in \pi_2(X, L, q)$ one considers the module $C(q, A) \coloneqq \mathscr{C}(q, A) \otimes_R C(q)$. For each pearly trajectory $u$ from $q$ to $q'$ Zapolsky defines a class $A \# u$ in $\pi_2(X, L, q')$ and an isomorphism
\[
C(u, A) : C(q, A) \rightarrow C(q', A \# u).
\]
The differential $\partial$ is then the sum of all these maps $C(u, A)$. See \cite[Section 4.2.2]{Zapolsky} for full details. We denote the resulting cohomology by $HF^*_{\mathrm{Zap}}(L, L; R)$.
Crucially, $\partial$ has non-negative degree with respect to the $r$-grading: in fact it decomposes as $\partial_0 + \partial_1 + \dots$ where $\partial_j$ has bigrading $(jN_L^{\pi}, 1-jN_L^{\pi})$. Filtering by this grading we therefore obtain
\begin{prop}[{\cite[Theorem 4.17]{Zapolsky}}]
There is a spectral sequence---the Oh (or Biran) spectral sequence---which starts at the (Morse) cohomology of $L$ with coefficients in the local system $\mathscr{C}^*$, and which converges to $HF^*_{\mathrm{Zap}}(L, L; R)$.
\end{prop}
\begin{rmk}
The monodromy of $\mathscr{C}^*$ has two contributions: one from the local system $q \mapsto \pi_2(X, L, q)$ over $L$, and one from the index bundles of the operators $D_A \# 0$. In \cite[Theorem 4.17]{Zapolsky} the former is considered, but in general the latter is necessary too, as in the computation in \cref{RPnExample}.
\end{rmk}
Since the $r$-grading is concentrated in $N_L^{\pi} \mathbb{Z}$, if we laid out the spectral sequence in the standard way then only one in every $N_L^{\pi}$ columns and pages would be interesting. We therefore squash it up so that the $E_1$-page is given by
\[
E_1^{a, b} = H^{a + b - N_L^{\pi}a}(L; \mathscr{C}^{a})
\]
and the differential on the $E_j$-page is $\partial_j$, acting from the $(a, b)$-entry to the $(a+j, b-j+1)$-entry. Note that the zeroth column of $E_1$ contains a copy of the usual cohomology of $L$ over $R$, corresponding to $\mathscr{C}^\mathrm{triv} \subset \mathscr{C}^0$.
\subsection{Algebraic structures}
\label{subsection algebraic structures}
The Floer product can be defined on the canonical pearl complex by counting $Y$-shaped pearly trajectories \cite[Section 4.2.3]{Zapolsky}. As with the differential, one has to keep track of homotopy classes attached to generators of the complex, and define appropriate maps between the modules $C(q, A)$. The product has a unit $1_L$ coming from the summand $C(\text{Morse min}, 0)$ of $CF^{0,0}_\mathrm{Zap}(L, L; R)$ \cite[Section 4.2.4]{Zapolsky}.
Zapolsky similarly defines a canonical pearl-type complex $QC^{*,*}_\mathrm{Zap}(X; R)$ for the quantum cohomology of $X$ \cite[Section 4.5.1]{Zapolsky}. This time, he constructs a module $\mathscr{C}(x, B)$ for each point $x$ in $X$ and for each class $B$ in $\pi_2(X, x)$, and since the relevant Cauchy--Riemann operators are all canonically oriented there is no need for an analogue of assumption (O). These modules assemble into a local system on $X$ (isomorphic to $R[\pi_2(X)]$), and the boundary operator and product are defined in an analogous way to the Lagrangian case above \cite[Sections 4.5.1--4.5.2]{Zapolsky}. Again there is a canonical trivial subsystem of the local system, given by $\mathscr{C}(x, 0)$ in each fibre.
The closed--open string map (or quantum module structure) carries over to this setting to give
\begin{prop}[{\cite[Section 4.5.4]{Zapolsky}}]
There is a unital $R$-algebra homomorphism
\[
\mathbb{C}O : QH^*_\mathrm{Zap}(X; R) \rightarrow HF^*_\mathrm{Zap}(L, L; R).
\]
\end{prop}
\subsection{Comparison with classical operations}
\label{classical comparison}
There are obvious inclusions of the Morse complexes of $X$ and $L$ into $QC^{0,*}_\mathrm{Zap}(X; R)$ and $CF^{0,*}_\mathrm{Zap}(L, L; R)$, sending a critical point $q$ (plus an $R$-orientation of its descending manifold) to the corresponding generator of $C(q)$ tensored with the canonical generator of $\mathscr{C}(q, 0)$. For quantum cohomology this gives the usual inclusion
\[
H^*(X; R) \rightarrow QH^*_\mathrm{Zap}(X; R)
\]
of $R$-modules, whilst for Lagrangian Floer cohomology we only obtain a map
\[
H^{<N_L^{\pi}-1}(L; R) \rightarrow HF^*_\mathrm{Zap}(L, L; R),
\]
which can be viewed as the inclusion of $H^{<N_L^{\pi}-1}(L; R)$ in the zeroth column of the first page of the Oh spectral sequence. We refer to both of these as ``PSS maps'' because, when one uses a Hamiltonian model for the right-hand sides, they coincide with the usual morphisms constructed by Piunikhin--Salamon--Schwarz \cite{PSS} and Albers \cite{Albers} respectively.
In the Lagrangian case the map actually extends to the kernel of the spectral sequence differential $\partial_1 : H^{\leq N_L^{\pi}-1}(L; R) \rightarrow H^{\leq 0}(L; R)$ and we denote this extended domain by $H^\mathrm{PSS}(L; R)$. The reason is that this kernel maps to the $E_2$ page, from which point onwards it lies in the kernel of each page differential for degree reasons. Hence it maps to $E_\infty \cong \operatorname{gr} HF^*_\mathrm{Zap}(L, L; R)$, where it sits in the top piece of the associated grading (which is actually the bottom right-hand end of each descending diagonal on $E_\infty$) and therefore maps from $\operatorname{gr} HF^*_\mathrm{Zap}(L, L; R)$ to $HF^*_\mathrm{Zap}(L, L; R)$ itself. In simpler terms, a Morse cocycle on $L$ of degree $\leq N_L^\pi -1$ is Floer-closed if and only if it lies in the kernel of $\partial_1$, and is Floer-exact if it is Morse-exact.
These PSS maps preserve the units $1_X$ and $1_L$, but do not in general respect the product structures unless the total degree of the classes being multiplied is less than the minimal Chern number $N_X^{\pi}$ or the minimal Maslov number $N_L^{\pi}$, for $QH^*$ and $HF^*$ respectively. Similarly, $\mathbb{C}O$ is related to the classical restriction map $i^* : H^*(X; R) \rightarrow H^*(L; R)$ by the following commutative diagram:
\begin{equation}\label{CODiagram}
\begin{tikzcd}
H^{<N^\pi_L}(X; R) \arrow{r}{i^*} \arrow{d}[swap]{\mathrm{PSS}} & H^\mathrm{PSS}(L; R) \arrow{d}{\mathrm{PSS}}
\\ QH^*_\mathrm{Zap}(X; R) \arrow{r}{\mathbb{C}O} & HF^*_\mathrm{Zap}(L, L; R)
\end{tikzcd}
\end{equation}
Note that the image of $H^{<N^\pi_L}(X; R)$ under $i^*$ is contained in $H^\mathrm{PSS}(L; R)$ since at chain level $i^*$ coincides with $\mathbb{C}O$ on $C^{< N^\pi_L}(X)$, and $\mathbb{C}O$ is a chain map with respect to the ordinary differential on $C^*(X)$ and the pearl differential on $C^*(L)$.
At this point we can already deduce the main workhorse of the current paper.
\begin{lem}\label{lemma: torsion of Floer cohomology}
Let $L \subset \mathbb{C}\mathbb{P}^n$ be a closed, monotone Lagrangian with $N_L \ge 3$ and satisfying assumption (O). Suppose that $H^1(L;\mathbb{Z}) = 0$. Then $HF^*_\mathrm{Zap}(L, L;R)$ is $(2(n + 1)/N_L)$-torsion.
\end{lem}
\begin{rmk}
If $R$ has characteristic $2$ we do not need to assume that $L$ satisfies assumption (O). In this case the result is only interesting if $2 (n+1)/N_L$ is odd, in which case it tells us that $HF^*_\mathrm{Zap}(L,L;\mathbb{Z}/2)$ vanishes.
\end{rmk}
\begin{proof}
The manifold $\mathbb{C}\mathbb{P}^n$ is simply connected, has minimal Chern number $2(n+1)$, and has $\pi_2(\mathbb{C}\mathbb{P}^n, q)$ isomorphic to $\mathbb{Z}$ for any base point $q$. This means that the local system $\mathbb{Z}[\pi_2(\mathbb{C}\mathbb{P}^n)]$ on which Zapolsky's quantum cohomology pearl complex lives is simply the constant sheaf with fibre the Novikov ring $\mathbb{Z}[T^{\pm 2}]$, where $T$ has degree $n+1$ (it might seem more natural to work with the Novikov ring $\mathbb{Z}[T^{\pm 1}]$, with $T$ assigned degree $2(n+1)$, but then we would have to introduce a square root of $T$ when we came to discuss the Floer cohomology of $L$). Zapolsky's construction therefore yields the standard quantum cohomology ring of $\mathbb{C}\mathbb{P}^n$ \cite[Example 8.1.6]{SmallMcDuffSalamon}, namely
\[
QH^*_\mathrm{Zap} (\mathbb{C}\mathbb{P}^n; \mathbb{Z}) \cong \mathbb{Z}[h, T^{\pm 2}]/(h^{n+1}-T^2),
\]
where $h$ is (the PSS image of) the hyperplane class.
Our assumption $N_L \ge 3$ implies that $i^*h$ lies in $H^\mathrm{PSS}(L;R)$, and by the commutativity of \eqref{CODiagram} we have
\[\mathbb{C}O(h) = \mathrm{PSS}(i^* h) \in HF^*_\mathrm{Zap}(L, L;R).\]
Since $H^1(L;\mathbb{Z}) = 0$, \cref{lemma: the restriction of the hyperplane class to a lag in cpn} implies that $(2(n+1)/N_L)i^*h = 0$ and so $\mathbb{C}O((2(n+1)/N_L)h) = 0$. Since $h$ is invertible, it follows that $HF^*_\mathrm{Zap}(L, L;R)$ is $(2(n + 1)/N_L)$-torsion.
\end{proof}
\subsection{Orientation and relative pin structures}
\label{RelPin}
In this subsection we explain how:
\begin{enumerate}[(i)]
\item\label{itm1} assumption (O) allows the definition of the local system $\mathscr{C}^*$ \cite[Lemma 4.1]{Zapolsky}
\item\label{itm2} the monodromy of the local system $\mathscr{C}^*$ can be computed (\cref{lem:explicit monodromy} below)
\item\label{itm3} the existence of a relative pin structure implies assumption (O) \cite[Remark 7.1]{Zapolsky}
\item\label{itm4} the choice of such a structure allows one to recover a more standard version of Floer theory \cite[Sections 7.2--7.4]{Zapolsky}.
\end{enumerate}
For our applications the reader can happily skip to \cref{prop Quotient Properties} if they are willing to take (\ref{itm1}), (\ref{itm3}) and (\ref{itm4}) as a black box from \cite{Zapolsky}. The only place we explicitly use (\ref{itm2}) is in the example computation in \cref{RPnExample}. To fix notation, let $D$ denote the closed unit disc in $\mathbb{C}$, and $S^1 = \partial D$ its boundary. Let $\langle \cdot, \cdot \rangle_Z$ be the mod $2$ pairing between homology and cohomology on a space $Z$.
We begin by having a closer look at how the local system $\mathscr{C}^*$ is constructed.
Let $\pi \colon \overline{L} \to L$ denote the cover of $L$ with fibres $\pi^{-1}(q) = \pi_2(X,L,q)$. Consider the space $C^{\infty}_{X,L} \coloneqq C^{\infty}((D,\partial D), (X,L))$. It fibres over $\overline{L}$ via a map $\psi \colon C^{\infty}_{X,L} \to \overline{L}$, which associates to every disc $u$ its relative homotopy class based at $u(1)$. It is important that the fibres of $\psi$ are connected and the evaluation map $\mathrm{ev}_1 \colon C^{\infty}_{X,L} \to L$ factors as $\mathrm{ev}_1 = \pi \circ \psi$.
Now consider the family of Fredholm operators $D_\bullet$ over $C^{\infty}_{X,L}$, where for each disc $u$ the operator
\[
D_u \colon W^{1, p}((D, \partial D), (u^*TX, u^*TL)) \to L^p(D,\overline{\operatorname{Hom}}_{\mathbb{C}}((TD, i), (u^*TX, u^*J)))
\]
is the linearisation of $\bar{\partial}_J$ (for some fixed and irrelevant choice of connection on $X$). The determinant lines of these operators give rise to a line bundle $\det(D_\bullet)$ on $C^{\infty}_{X,L}$, whose first Stiefel--Whitney class has been computed by Seidel in \cite[Lemma 11.7]{SeidelBook} (there is a slight subtlety here: the bundle is not canonically topologised, but rather there exists an uncountable family of suitable choices, as described in \cite{ZingerDeterminantLine}; this is taken care of by Zapolsky and we will not dwell on it). Namely, if $v \colon S^1 \times (D, \partial D) \to (X,L)$ is a loop in $C^{\infty}_{X,L}$ with
$v_*[\{1\}\times D] = A \in \pi_2(X,L, v(1, 1))$, then
\begin{equation}\label{eqSeidelFormula}
\langle w_1(\det(D_\bullet)), v \rangle_{C^{\infty}_{X, L}} =
\langle w_2(TL), v_*[S^1 \times \partial D] \rangle_L
+ (I_{\mu_L}(A) - 1)\langle w_1(TL), v_*[S^1 \times \{1\}]\rangle_L.
\end{equation}
\begin{rmk}
Seidel's setup and notation are slightly different. To translate into his language we first need to trivialise $v^*TX$, identifying all fibres with a standard symplectic vector space $V$. At each time $t$, we obtain a loop $v|_{\{t\} \times \partial D}$ in the Lagrangian Grassmannian $\mathrm{Gr}(V)$ of $V$, and hence a point in its free loop space $\mathscr{L} \mathrm{Gr}(V)$. As $t$ varies in $S^1$, these points sweep a $1$-chain in $\mathscr{L} \mathrm{Gr}(V)$, and we denote this by $\sigma$. Seidel introduces operators
\[
T \colon H^{k+1}(\mathrm{Gr}(V); \mathbb{Z}) \to H^k(\mathscr{L} \mathrm{Gr}(V);\mathbb{Z}) \text{\quad and \quad} U\colon H^k(\mathrm{Gr}(V); \mathbb{Z}) \to H^k(\mathscr{L} \mathrm{Gr}(V);\mathbb{Z})
\]
on cohomology, whose duals take a $k$-chain on $\mathscr{L} \mathrm{Gr}(V)$ and output, respectively, the $(k+1)$- and $k$-chains on $\mathrm{Gr}(V)$ swept by the $k$-chain of whole loops and by the $k$-chain of initial points of the loops. His formula is given in terms of these operators as
\[
\langle w_1(\det(D_\bullet)), \sigma\rangle_{\mathscr{L} \mathrm{Gr}(V)} =
\langle T(w_2),\sigma \rangle_{\mathscr{L} \mathrm{Gr}(V)} +
\langle (T(\mu) - 1)\smile U(\mu), \sigma \rangle_{\mathscr{L} \mathrm{Gr}(V)}.
\]
The right-hand side can be shown to be independent of the initial choice of trivialisation (changing trivialisation does not affect $T(w_2)$ and $T(\mu)$ and preserves the parity of $U(\mu)$). To derive \eqref{eqSeidelFormula}, simply use the fact that $w_1$ of the tautological bundle over $\mathrm{Gr}(V)$ equals the mod $2$ reduction of $\mu \in H^1(\mathrm{Gr}(V);\mathbb{Z})$.
\end{rmk}
Recall however that for the construction of the local system $\mathscr{C}^*$ we are actually interested in the family of operators $D_\bullet\# 0$, whose determinant line bundle $\det(D_\bullet \# 0)$ over $C^{\infty}_{X,L}$ is canonically (up to a positive real multiple) isomorphic to $\det(D_\bullet) \otimes \mathrm{ev}_1^*(\det(TL))$.
So we have:
\begin{equation}\label{MonodromySign}
\langle w_1(\det(D_\bullet\# 0)), v \rangle_{C^{\infty}_{X, L}} =
\langle w_2(TL), v_*[S^1 \times \partial D] \rangle_L
+ I_{\mu_L}(A)\langle w_1(TL), v_*[S^1 \times \{1\}]\rangle_L.
\end{equation}
Now suppose that the loop $v$ is contained entirely in a fibre $\psi^{-1}(q, A)$. Then $v(S^1 \times \{1\}) = q$ so the second term in \eqref{MonodromySign} vanishes, and assumption (O) implies the vanishing of the first term (indeed, assumption (O) is equivalent to the vanishing of the first term for all loops whose image under $\operatorname{ev}_1$ is constant). Thus the restriction $\at{\det(D_\bullet \# 0)}{\psi^{-1}(q, A)}$ is orientable for all $q$ and $A$ if and only if assumption (O) holds. Assume that this is the case and define $\mathscr{C}(q, A)$ to be the free rank $1$ $R$-module generated by its orientations (this makes sense, meaning that there are only two possible orientations, which differ by sign, because $\psi^{-1}(q, A)$ is connected). These modules now define a local system $\overline{\mathscr{C}}$ over $\overline{L}$ and \eqref{MonodromySign} in principle allows us to also compute the monodromy of $\overline{\mathscr{C}}$. By taking the direct sums of the fibres of $\overline{\mathscr{C}}$ over points in the fibres of $\pi \colon \overline{L} \to L$---i.e.~by pushing forward $\overline{\mathscr{C}}$ by $\pi$---we obtain the local system $\mathscr{C}^*$ on $L$.
Suppose now that $L$ is relatively pin, meaning that $w_2(TL)$ or $w_2(TL) + w_1(TL)^2$ is in the image of the restriction map $i^* : H^2(X; \mathbb{Z}/2) \rightarrow H^2(L; \mathbb{Z}/2)$. The positive consequences of this assumption are twofold: first, it implies that $L$ satisfies assumption (O) and so the local system $\overline{\mathscr{C}}$ is well defined; second, it significantly simplifies the computation of the monodromy of $\overline{\mathscr{C}}$. These follow because the first term in \eqref{MonodromySign} is zero for \emph{all} loops of discs, not just the ones contained in a single fibre of $\psi$. To see this, suppose first that $w_2(TL) = i^* b$ for some background class $b$ in $H^2(X; \mathbb{Z}/2)$. We then have
\[
\big\langle w_2(TL), v_*[S^1 \times \partial D] \big\rangle_L = \big\langle b, \partial(v_*[S^1 \times D]) \big\rangle_X = 0.
\]
If, on the other hand, we have $w_2(TL) + w_1(TL)^2 = i^*b$ then the same argument shows that
\[
\big\langle w_2(TL), v_*[S^1 \times \partial D] \big\rangle_L = \big\langle (v^*w_1(TL))^2, [S^1 \times \partial D] \big\rangle_{S^1 \times \partial D},
\]
and the right-hand side vanishes since the squaring map $H^1(T^2; \mathbb{Z}/2) \rightarrow H^2(T^2; \mathbb{Z}/2)$ on the torus $T^2 = S^1 \times \partial D$ is zero. We deduce:
\begin{lem}
\label{lem:explicit monodromy}
If $L$ is relatively pin and $\gamma \in \pi_1(L, q)$ fixes a class $A \in \pi_2(X,L,q)$ (i.e. $\gamma$ lifts to a loop in $\overline{L}$), then the monodromy action of $\gamma$ on $\mathscr{C}(q, A)$ is trivial if the Maslov index of $A$ is even or if $\gamma$ is an orientation-preserving loop in $L$, and is multiplication by $-1$ otherwise.
\end{lem}
As described in \cite[Proposition 7.4, Section 7.3]{Zapolsky}, a choice of relative pin structure defines an isomorphism between $\mathscr{C}(q, A)$ and $\mathscr{C}(q, A')$ whenever $I_{\mu_L}(A) = I_{\mu_L}(A')$, and quotienting by these identifications we obtain a local system $\mathscr{C}_\mu^*$ which has rank $1$ in degrees $\dots, -N_L^{\pi}, 0, N_L^{\pi}, 2N_L^{\pi}, \dots$ (and zero in every other degree). Moreover, the choice defines a canonical trivialisation of the even degree part of $\mathscr{C}_\mu^*$, which then forms a constant sheaf of rings on $L$. In more traditional versions of Floer theory the fibre of this sheaf of rings is viewed as the Novikov ring $\Lambda = R[T^{\pm 1}]$, were the variable $T$ has degree $N_L^{\pi}$ if $N_L^{\pi}$ is even (for instance if $L$ is orientable) and degree $2N_L^{\pi}$ otherwise.
If $N_L^{\pi}$ is odd then the odd degree part of $\mathscr{C}_\mu^*$ looks like $\Lambda \otimes \mathscr{C}_\mu^{N_L^{\pi}}$ and this local system may be twisted (contrast this with the even degree part, which is not just untwisted but canonically trivialised by the relative pin structure). When the $\pi_1(L, q)$-action on $\pi_2(X, L, q)$ is trivial the twisting can be computed from \cref{lem:explicit monodromy} to be exactly $\det (TL)$, but when it is non-trivial the twisting may in general depend on the choice of relative pin structure.
The quotient procedure is compatible with the differential \cite[Section 7.3]{Zapolsky} so one obtains a cohomology group $HF^*_{\mathrm{Zap}, \mu}(L, L;R)$, and since it also respects the bigrading on the complex we still get the spectral sequence. In addition, quotienting is compatible with the multiplication \cite[Section 7.3]{Zapolsky} and with the quantum module structure \cite[Section 7.4]{Zapolsky} after applying a similar procedure to $QC^*_\mathrm{Zap}(X; R)$ \cite[Section 7.2]{Zapolsky}. Although the latter does not depend on a choice of spin structure or similar on $X$, it does depend on the background class $b$ of the relative pin structure on $L$. We denote this quotiented Zapolsky version of quantum cohomology by $QH^*(X, b; R)$ since it coincides with the standard $b$-twisted version of quantum cohomology over the Novikov ring $R[S^{\pm 1}]$, where $S$ has degree $2N^\pi_X$. We think of $S$ as $T^d$, where $d=2N^\pi_X/|T|$, so that $R[S^{\pm1}]$ sits inside $\Lambda$.
In the setting relevant to us, the upshot of all of this is the following.
\begin{prop}
\label{prop Quotient Properties}
Suppose $L$ is relatively pin, that we fix a choice of relative pin structure on $L$ with background class $b \in H^2(X; \mathbb{Z}/2)$, and that $\pi_1(L, q)$ acts trivially on $\pi_2(X, L, q)$. Then there are a quantum cohomology ring $QH^*(X, b; R)$ and a Floer cohomology ring $HF^*_{\mathrm{Zap}, \mu}(L, L; R)$ such that:
\begin{enumerate}[(a)]
\item\label{itma} If $N^\pi_L$ is even then $HF^*_{\mathrm{Zap}, \mu}(L, L; R)$ is naturally a module over the Novikov ring $R[T^{\pm 1}]$, where $T$ has degree $N^\pi_L$, and there is a spectral sequence
\[
E_1 = H^*(L; R[T^{\pm 1}]) \implies HF^*_{\mathrm{Zap}, \mu}(L, L; R).
\]
\item\label{itmb} If $N^\pi_L$ is odd then $HF^*_{\mathrm{Zap}, \mu}(L, L; R)$ is naturally a module over the Novikov ring $R[T^{\pm 1}]$, where $T$ has degree $2N^\pi_L$, and there is a spectral sequence
\[
E_1 = H^*(L; R[T^{\pm 1}]) \oplus H^*(L; \mathscr{R})[N^\pi_L] \implies HF^*_{\mathrm{Zap}, \mu}(L, L; R),
\]
where $\mathscr{R}$ is the rank $1$ local system of $R[T^{\pm 1}]$-modules on $L$ whose monodromy around a loop $\gamma$ is $(-1)^{\langle w_1(TL), \gamma \rangle_L}$ and where $[N^\pi_L]$ denotes grading shift.
\item\label{itmc} $QH^*(X, b; R)$ is a module over $R[T^{\pm d}]$, where $d = 2N^\pi_X / |T|$, and is additively isomorphic to $H^*(X; R[T^{\pm d}])$. The product is deformed by counts of holomorphic spheres with signs twisted by $(-1)^{\langle b, A \rangle_X}$, where $A$ is the class of the sphere.
\item\label{itmd} There are PSS maps $H^*(X; R) \to QH^*(X, b; R)$ and $H^{\mathrm{PSS}}(L; R) \to HF^*_{\mathrm{Zap}, \mu}(L, L; R)$, where the latter comes from mapping part of the $E_1$ page of the above spectral sequences to the $E_\infty$ page (as described in \cref{classical comparison}), and a map $\mathbb{C}O : QH^*(X, b; R) \to HF^*_{\mathrm{Zap}, \mu}(L, L; R)$ of unital $R[T^{\pm d}]$-algebras which fits into a commutative diagram analogous to \eqref{CODiagram}.
\end{enumerate}
\end{prop}
\begin{rmk}
\label{rmk Char 2}
If $R$ has characteristic $2$ then we need not assume that $L$ is relatively pin, and choices of relative pin structure and background class are irrelevant. Moreover, the local system $\mathscr{R}$ is trivial.
\end{rmk}
\subsection{Twisted coefficients}
\label{TwistedCoeffs}
As in traditional Lagrangian Floer theory, one can twist Zapolsky's Floer complex by local systems $\mathscr{E}^1$ and $\mathscr{E}^2$ of $R$-modules on $L$. Now the complex is
\begin{equation}\label{eq def of HFzap with local coeffs}
CF^{r, s}_\mathrm{Zap} ((L, \mathscr{E}^1), (L, \mathscr{E}^2)) = \bigoplus_{\substack{q \in \mathrm{Crit}(f) \\ |q| = s}} \mathscr{C}^r_q \otimes C(q) \otimes \operatorname{Hom}_R (\mathscr{E}^1_q, \mathscr{E}^2_q).
\end{equation}
If a certain obstruction class vanishes (see \cite[Theorem 2.2.12]{KonstantinovThesis}, adapted to Zapolsky's setup in \cite[Appendix A.4]{SmithThesis}) then the differential squares to zero and we can take cohomology. This is always the case when the minimal Maslov number of $L$ is at least $3$, which it will be in our applications. One can also extend the Floer product to
\begin{equation}\label{eq product with local systems}
HF^*_\mathrm{Zap} ((L, \mathscr{E}^1), (L, \mathscr{E}^2)) \otimes_R HF^*_\mathrm{Zap} ((L, \mathscr{E}^0), (L, \mathscr{E}^1)) \rightarrow HF^*_\mathrm{Zap} ((L, \mathscr{E}^0), (L, \mathscr{E}^2)).
\end{equation}
The quotient procedure which defines $HF^*_{\mathrm{Zap}, \mu}$ only concerns the $\mathscr{C}^r_q$ factor of each summand of \eqref{eq def of HFzap with local coeffs} and so descends to this setting to give the complex
\[
CF^{r, s}_{\mathrm{Zap}, \mu} ((L, \mathscr{E}^1), (L, \mathscr{E}^2)) = \bigoplus_{\substack{q \in \mathrm{Crit}(f) \\ |q| = s}} \mathscr{C}^r_{\mu, q} \otimes C(q) \otimes \operatorname{Hom}_R (\mathscr{E}^1_q, \mathscr{E}^2_q).
\]
We denote its homology by $HF^*_{\mathrm{Zap}, \mu} ((L, \mathscr{E}^1), (L, \mathscr{E}^2))$.
We will only be interested in the situation where $\mathscr{E}^1$ is trivial and $\mathscr{E}^2$ is the local system corresponding to a cover $L'$ of $L$, in which case we denote the twisted Floer cohomology by $HF^*_\mathrm{Zap, \mu}(L, L'; R)$. Floer theory for pairs of local systems of this form is essentially equivalent to Damian's lifted Floer homology \cite{Damian} on the cover $L'$. However we choose to phrase it in the above way in order to fit it into the wider context of Floer theory with local systems, closed--open string maps, and Zapolsky's orientation schemes.
After incorporating local systems of this type, the analogue of \cref{prop Quotient Properties} is as follows.
\begin{prop}
\label{prop Twisted Properties}
Setup as in \cref{prop Quotient Properties}, with the additional assumption that $N^\pi_L \geq 3$. For any cover $L'$ of $L$ we have a lifted Floer cohomology ring $HF^*_{\mathrm{Zap}, \mu}(L, L'; R)$ which satisfies the following modified versions of properties (\ref{itma})--(\ref{itmd}) from \cref{prop Quotient Properties}:
\begin{enumerate}[(A)]
\item\label{itmA} If $N^\pi_L$ is even then there is a spectral sequence
\[
E_1 = H^*_c(L'; R[T^{\pm1}]) \implies HF^*_{\mathrm{Zap}, \mu}(L, L'; R),
\]
where ${}_c$ denotes compact support, and $T$ has degree $N^\pi_L$.
\item\label{itmB} If $N^\pi_L$ is odd then there is a spectral sequence
\[
E_1 = H^*_c(L'; R[T^{\pm 1}]) \oplus H^*_c(L'; \mathscr{R}')[N^\pi_L] \implies HF^*_{\mathrm{Zap}, \mu}(L, L'; R),
\]
where $\mathscr{R}'$ is the pullback of $\mathscr{R}$ (from \cref{prop Quotient Properties}) to $L'$, and $T$ has degree $2N^\pi_L$.
\item\label{itmC} Identical to (\ref{itmc}).
\item\label{itmNewD} Identical to (\ref{itmd}).
\item\label{itmD} $HF^*_{\mathrm{Zap}, \mu}(L, L'; R)$ is a right module over $HF^*_{\mathrm{Zap}, \mu}(L, L; R)$, and thus over $QH^*(X, b; R)$ via $\mathbb{C}O$.
\end{enumerate}
\end{prop}
\begin{proof}[Sketch proof]
In the untwisted case (i.e.~in \cref{prop Quotient Properties}) the $E_1$ page is the (Morse) cohomology of $L$ with coefficients in the constant sheaf $R[T^{\pm1}]$, plus---when $N^\pi_L$ is odd---another copy of the cohomology of $L$ but with coefficients in $\mathscr{R}$. Twisting by $\mathscr{E}^1$ (trivial) and $\mathscr{E}^2$ (corresponding to $L'$) amounts to tensoring the local systems with $\operatorname{\mathscr{H}\text{\kern -3pt {\calligra\large om}}\,} (\mathscr{E}^1, \mathscr{E}^2)$, where $\operatorname{\mathscr{H}\text{\kern -3pt {\calligra\large om}}\,}$ is sheaf hom (the $\operatorname{\mathscr{H}\text{\kern -3pt {\calligra\large om}}\,}$ and tensor product are both taken over the constant sheaf $R$), so the $E_1$ page is given by
\[
H^*(L; \operatorname{\mathscr{H}\text{\kern -3pt {\calligra\large om}}\,} (\mathscr{E}^1, \mathscr{E}^2) \otimes_R R[T^{\pm1}]) \; \text{or} \; H^*(L; \operatorname{\mathscr{H}\text{\kern -3pt {\calligra\large om}}\,} (\mathscr{E}^1, \mathscr{E}^2) \otimes_R R[T^{\pm1}]) \oplus H^*(L; \operatorname{\mathscr{H}\text{\kern -3pt {\calligra\large om}}\,} (\mathscr{E}^1, \mathscr{E}^2) \otimes_R \mathscr{R})[N_L^\pi]
\]
depending on the parity of $N^\pi_L$. Strictly we are working with \emph{Morse} cohomology with local systems here, but it is proved in \cite{BanyagaHurtubiseSpaeth} that this coincides with other standard constructions of cohomology with local systems.
To identify this twisted cohomology, note that the singular chain complex of $L$ with coefficients in $\operatorname{\mathscr{H}\text{\kern -3pt {\calligra\large om}}\,} (\mathscr{E}^1, \mathscr{E}^2) \otimes_R R[T^{\pm 1}]$ is equivalent to the ordinary singular chain complex of $L'$ over $R[T^{\pm 1}]$: both are freely generated over $R[T^{\pm 1}]$ by singular simplices in
$L$ with lifts to $L'$. By Poincar\'e duality we thus have
\[
H^*(L; \operatorname{\mathscr{H}\text{\kern -3pt {\calligra\large om}}\,} (\mathscr{E}^1, \mathscr{E}^2) \otimes_R R[T^{\pm1}]) \cong H^*_c(L'; R[T^{\pm 1}]).
\]
Twisting the left-hand side by $\mathscr{R}$ corresponds to twisting the right by $\mathscr{R}'$. This proves (\ref{itmA}) and (\ref{itmB}).
To get (\ref{itmD}) simply take $\mathscr{E}^0 = \mathscr{E}^1$ to be trivial in \eqref{eq product with local systems} to see that $HF^*_\mathrm{Zap}(L, L'; R)$ is a right (unital) module over $HF^*_{\mathrm{Zap}}(L,L;R)$ and hence also (via $\mathbb{C}O$) over quantum cohomology. Everything descends to the quotient determined by the choice of relative pin structure.
\end{proof}
\subsection{The example of $\mathbb{R}\mathbb{P}^n$}
\label{RPnExample}
To make all this more concrete we now describe how everything works for $(X, L) = (\mathbb{C}\mathbb{P}^n, \mathbb{R}\mathbb{P}^n)$ over the ground ring $R=\mathbb{Z}$, assuming $n \geq 2$. The Floer cohomology in this case was calculated by Zapolsky in \cite[Section 8.1]{Zapolsky} but by different means. Our approach relies heavily on the spectral sequence and can serve as a guide to the proof of \cref{Theorem1} in \cref{section proof}, where our aim will be to show that any monotone Lagrangian $L$ in $\mathbb{C}\mathbb{P}^n$ of minimal Maslov number $n + 1$ behaves like $\mathbb{R}\mathbb{P}^n$.
First observe that the map $H^2(\mathbb{C}\mathbb{P}^n; \mathbb{Z}/2) \rightarrow H^2(\mathbb{R}\mathbb{P}^n; \mathbb{Z}/2)$ is surjective (in fact, an isomorphism), since the restriction of the tautological complex line bundle on $\mathbb{C}\mathbb{P}^n$, whose second Stiefel--Whitney class generates $H^2(\mathbb{C}\mathbb{P}^n; \mathbb{Z}/2)$, is the direct sum of two copies of the tautological real line bundle on $\mathbb{R}\mathbb{P}^n$, whose squared first Stiefel--Whitney class generates $H^2(\mathbb{R}\mathbb{P}^n; \mathbb{Z}/2)$. This means that $\mathbb{R}\mathbb{P}^n$ is relatively pin and thus satisfies assumption (O).
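Concretely, writing $a \in H^1(\mathbb{R}\mathbb{P}^n; \mathbb{Z}/2)$ for the generator and $\gamma_{\mathbb{R}}$ for the tautological real line bundle (notation introduced only for this remark), the Whitney sum formula gives
\[
w(\gamma_{\mathbb{R}} \oplus \gamma_{\mathbb{R}}) = (1 + a)^2 = 1 + a^2,
\]
so the restriction to $\mathbb{R}\mathbb{P}^n$ of the generator of $H^2(\mathbb{C}\mathbb{P}^n; \mathbb{Z}/2)$ is $a^2$, which indeed generates $H^2(\mathbb{R}\mathbb{P}^n; \mathbb{Z}/2)$ for $n \geq 2$.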
The group $\pi_2(\mathbb{C}\mathbb{P}^n, \mathbb{R}\mathbb{P}^n, q)$ is isomorphic to $\mathbb{Z}$ (for any base point $q$) and is generated by the class of a disc of Maslov index $n+1$. In particular, there is only one homotopy class of discs of each possible Maslov index, so we have that $\mathscr{C}^*_\mu$ coincides with $\mathscr{C}^*$ and the only purpose of the quotienting procedure from \cref{RelPin} is to identify the even degree part of $\mathscr{C}_\mu^*$ with the constant sheaf $R[T^{\pm 1}]$. The action of $\pi_1(\mathbb{R}\mathbb{P}^n, q)$ on $\pi_2(\mathbb{C}\mathbb{P}^n, \mathbb{R}\mathbb{P}^n, q)$ is trivial, because it preserves Maslov index, so by \cref{lem:explicit monodromy} the odd degree part of $\mathscr{C}^*_\mu$ (which is only non-zero if $n$ is even) has monodromy $-1$ around the generator of $\pi_1(\mathbb{R}\mathbb{P}^n, q)$.
From this discussion, or directly from \cref{prop Quotient Properties}(\ref{itma}), if $n$ is odd then the first page of the Oh spectral sequence is
\[
\dots \rightarrow H^*(\mathbb{R}\mathbb{P}^n; \mathbb{Z})[n] \rightarrow H^*(\mathbb{R}\mathbb{P}^n; \mathbb{Z}) \rightarrow H^*(\mathbb{R}\mathbb{P}^n; \mathbb{Z})[-n] \rightarrow H^*(\mathbb{R}\mathbb{P}^n; \mathbb{Z})[-2n] \rightarrow \dots
\]
(each term represents a column, the square brackets denote grading shift as usual, and the arrows represent the differential which maps horizontally from one column to the next), whilst if $n$ is even then by \cref{prop Quotient Properties}(\ref{itmb}) it is
\[
\dots \rightarrow H^*(\mathbb{R}\mathbb{P}^n; \mathscr{L})[n] \rightarrow H^*(\mathbb{R}\mathbb{P}^n; \mathbb{Z}) \rightarrow H^*(\mathbb{R}\mathbb{P}^n; \mathscr{L})[-n] \rightarrow H^*(\mathbb{R}\mathbb{P}^n; \mathbb{Z})[-2n] \rightarrow \dots,
\]
where $\mathscr{L}$ denotes the unique non-trivial rank $1$ local system (over $\mathbb{Z}$) on $\mathbb{R}\mathbb{P}^n$. Note that the cellular cochain complex computing $H^*(\mathbb{R}\mathbb{P}^n; \mathscr{L})$ is the same as that computing $H^*(\mathbb{R}\mathbb{P}^n; \mathbb{Z})$ but the differentials which were $\pm 2$ become $0$ and vice versa.
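Explicitly, using the standard CW structure on $\mathbb{R}\mathbb{P}^n$ with one cell in each dimension from $0$ to $n$ (a choice not fixed anywhere above), the two cellular cochain complexes are, up to sign,
\[
\mathbb{Z} \xrightarrow{\,0\,} \mathbb{Z} \xrightarrow{\,2\,} \mathbb{Z} \xrightarrow{\,0\,} \mathbb{Z} \xrightarrow{\,2\,} \cdots \quad \text{for } \mathbb{Z}, \qquad
\mathbb{Z} \xrightarrow{\,2\,} \mathbb{Z} \xrightarrow{\,0\,} \mathbb{Z} \xrightarrow{\,2\,} \mathbb{Z} \xrightarrow{\,0\,} \cdots \quad \text{for } \mathscr{L},
\]
so that, for instance, $H^0(\mathbb{R}\mathbb{P}^n; \mathscr{L}) = 0$ while $H^n(\mathbb{R}\mathbb{P}^n; \mathscr{L}) \cong \mathbb{Z}$ when $n$ is even.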
The only potentially non-zero differentials in the whole spectral sequence are on this first page and go from $H^n$ in one column to $H^0$ in the adjacent column on the right. Just focusing on these entries, the page is as shown in \cref{figOhSS}.
\begin{figure}
\caption{The interesting part of the Oh spectral sequence for $\mathbb{R}\mathbb{P}^n$.}
\label{figOhSS}
\end{figure}
The limit $HF^*_{\mathrm{Zap}, \mu}(\mathbb{R}\mathbb{P}^n, \mathbb{R}\mathbb{P}^n; \mathbb{Z})$ must be $2$-periodic, by the quantum module action of $h \in QH^*(\mathbb{C}\mathbb{P}^n, b; \mathbb{Z})$ via \cref{prop Quotient Properties}(\ref{itmd}), and $HF^2_{\mathrm{Zap}, \mu}(\mathbb{R}\mathbb{P}^n, \mathbb{R}\mathbb{P}^n; \mathbb{Z}) \cong H^2(\mathbb{R}\mathbb{P}^n;\mathbb{Z})$ is $\mathbb{Z}/2$, so all of the $\mathbb{Z} \rightarrow \mathbb{Z}$ differentials have to be multiplication by $\pm 2$, and we obtain
\[
HF^p_{\mathrm{Zap}, \mu}(\mathbb{R}\mathbb{P}^n, \mathbb{R}\mathbb{P}^n; \mathbb{Z}) \cong \begin{cases} \mathbb{Z}/2 & \text{if } p \text{ is even} \\ 0 & \text{if } p \text{ is odd,}\end{cases}
\]
as computed by Zapolsky. Note that, as expected, this is a module over the even degree part of $\mathbb{Z}[T^{\pm 1}]$, but it is \emph{not} a module over the whole ring if $n$ is even. Note also that the Floer cohomology is $2$-torsion, as predicted by \cref{lemma: torsion of Floer cohomology}.
If we twist by the local system corresponding to the cover $S^n \rightarrow \mathbb{R}\mathbb{P}^n$ then by \cref{prop Twisted Properties}(\ref{itmA}) and (\ref{itmB}) the first page of the spectral sequence becomes
\[
\dots \rightarrow H^*(S^n; \mathbb{Z})[n] \rightarrow H^*(S^n; \mathbb{Z}) \rightarrow H^*(S^n; \mathbb{Z})[-n] \rightarrow H^*(S^n; \mathbb{Z})[-2n] \rightarrow \cdots.
\]
The differentials must all be isomorphisms
since the resulting cohomology is still $2$-periodic (it is still a module over quantum cohomology by \cref{prop Twisted Properties}(\ref{itmD})) but the first page is zero in degrees $n-1$ and $n + 2$.
\subsection{Damian's argument}
\label{subsection Damian}
We conclude this review by outlining the proofs of \eqref{eqDamianOdd} and \eqref{eqDamianEven}, namely that if $L \subset \mathbb{C}\mathbb{P}^n$ is a monotone Lagrangian with $N_L = n+1$ then: for odd $n$, the universal cover $\widetilde{L}$ is homeomorphic to $S^n$ and $\pi_1(L)$ is finite; for even $n$, $\widetilde{L}$ is a $\mathbb{Z}/2$-homology sphere and $\pi_1(L) \cong \mathbb{Z}/2$. This uses only standard (pre-Zapolsky) Floer theory---in the above language we work only with $\mathscr{C}_\mu^*$, which is trivialised either by fixing an orientation and spin structure on our Lagrangian or by using $\mathbb{Z}/2$ coefficients---and is intended to demonstrate the state of the art before the present work. It should be contrasted with our new approach, detailed in \cref{section proof}, which completely circumvents the auxiliary construction of the circle bundle.
Let $\Gamma_L$ be the Biran circle bundle associated to $L$: view $\mathbb{C}\mathbb{P}^n$ as a hyperplane in $\mathbb{C}\mathbb{P}^{n+1}$ and take $\Gamma_L$ to be the lift of $L$ to the unit normal bundle of this hyperplane, embedded in $\mathbb{C}^{n+1} \cong \mathbb{C}\mathbb{P}^{n+1} \setminus \mathbb{C}\mathbb{P}^n$. Consider its mod-$2$ Floer cohomology, lifted to its universal cover $\widetilde{\Gamma}_L$. Since $\Gamma_L$ is displaceable this Floer cohomology must vanish. On the other hand, it can be computed by a spectral sequence whose first page is $H^*_c(\widetilde{\Gamma}_L; \mathbb{Z}/2) \cong H_{n+1-*}(\widetilde{\Gamma}_L; \mathbb{Z}/2)$ (compare \cref{prop Twisted Properties}(\ref{itmA}) and (\ref{itmB}) and \cref{rmk Char 2}). Combining these, we must have
\[
H^*_c(\widetilde{\Gamma}_L; \mathbb{Z}/2) \cong \begin{cases} \mathbb{Z}/2 & \text{if } * = 1 \text{ or } n+1 \\ 0 & \text{otherwise.} \end{cases}
\]
As observed by Schatz \cite[Lemmes 5.3--5.4]{SchatzThesis}, $\widetilde{\Gamma}_L$ is homotopy equivalent to $\widetilde{L}$, so we deduce that $\widetilde{L}$ is a $\mathbb{Z}/2$-homology sphere. In particular, it is compact so $\pi_1(L)$ is finite. Moreover, we have
\[
\chi_{\mathbb{Z}/2}(\widetilde{L}) = |\pi_1(L)| \cdot \chi_{\mathbb{Z}/2}(L),
\]
so if $n$ is even then $\chi_{\mathbb{Z}/2}(\widetilde{L}) = 1 + (-1)^n = 2$ and hence $|\pi_1(L)| = 1$ or $2$. It cannot be $1$, because if $L$ were simply connected then it would have minimal Maslov number $2(n+1)$, so we must have $\pi_1(L) \cong \mathbb{Z}/2$.
When $n$ is odd, the fact that the minimal Maslov number of $\Gamma_L$ is even means that it is orientable (see \cref{remarkMuvsW1}), so its Floer theory can be defined over $\mathbb{Z}$ if it is also spin. To prove that it is indeed spin we can apply the previous spectral sequence argument to the unlifted Floer cohomology of $\Gamma_L$ over $\mathbb{Z}/2$, and see that
\[
H^*(\Gamma_L; \mathbb{Z}/2) = H^*_c(\Gamma_L; \mathbb{Z}/2) \cong \begin{cases} \mathbb{Z}/2 & \text{if } * = 0 \text{, } 1 \text{, } n \text{, or } n+1 \\ 0 & \text{otherwise,} \end{cases}
\]
so the second Stiefel--Whitney class in $H^2(\Gamma_L; \mathbb{Z}/2)$ must vanish. We can now go back to studying the Floer theory lifted to $\widetilde{\Gamma}_L$, but this time over $\mathbb{Z}$, and deduce that $\widetilde{L} \cong \widetilde{\Gamma}_L$ is a $\mathbb{Z}$-homology sphere. Since $\widetilde{L}$ is also simply connected, the homology Whitehead theorem (see e.g. \cite{MayWhitehead}) and the Poincar\'e conjecture imply that $\widetilde{L}$ is homeomorphic to $S^n$.
\section{Proof of \cref{Theorem1}}
\label{section proof}
\subsection{Preliminaries}
We now detail the proof of \cref{Theorem1}, which uses the full force of Zapolsky's machinery. Throughout this section we fix the following setup: $L \subset \mathbb{C}\mathbb{P}^n$ is a closed, connected, monotone Lagrangian, of minimal Maslov number $n+1$. The $n=1$ case of \cref{Theorem1} is trivial so we assume that $n$ is at least $2$.
Before embarking on any Floer theory, we make the following basic topological observation:
\begin{lem}
\label{H1mod2nonzero}
We have $H^1(L; \mathbb{Z}/2) \neq 0$.
\end{lem}
\begin{proof}
The long exact sequence in homology for the pair $(\mathbb{C}\mathbb{P}^n, L)$ gives the exact sequence
\[
H_2(\mathbb{C}\mathbb{P}^n; \mathbb{Z}) \rightarrow H_2(\mathbb{C}\mathbb{P}^n, L; \mathbb{Z}) \rightarrow H_1(L; \mathbb{Z}) \rightarrow 0.
\]
Applying the left-exact functor $\operatorname{Hom}_\mathbb{Z}(-, \mathbb{Z}/2)$ we obtain the exact sequence
\[
0 \rightarrow H^1(L; \mathbb{Z}/2) \xrightarrow{f} \operatorname{Hom}_\mathbb{Z}(H_2(\mathbb{C}\mathbb{P}^n, L; \mathbb{Z}), \mathbb{Z}/2) \xrightarrow{g} \operatorname{Hom}_\mathbb{Z}(H_2(\mathbb{C}\mathbb{P}^n; \mathbb{Z}), \mathbb{Z}/2),
\]
and the penultimate term contains the mod $2$ reduction $I'_{\mu_L}$ of $I_{\mu_L}/(n+1)$. Since $I_{\mu_L}/(n+1)$ restricts to $2I_{c_1}/(n+1)$ on $H_2(\mathbb{C}\mathbb{P}^n; \mathbb{Z})$, and this is always even, we deduce that $g(I'_{\mu_L})$ is zero. This means that $I'_{\mu_L}$ is in the image of $f$, and as $I'_{\mu_L}$ itself is non-zero we must have $H^1(L; \mathbb{Z}/2) \neq 0$.
\end{proof}
Now consider the Floer cohomology $HF^*_{\mathrm{Zap}, \mu}(L, L; \mathbb{Z}/2)$. By \cref{prop Quotient Properties}(\ref{itma}) and (\ref{itmb}), and \cref{rmk Char 2}, it is computed by a spectral sequence whose $E_1$ page has $p$th column $H^*(L; \mathbb{Z}/2)[-pn]$. For degree reasons the only potentially non-zero differentials in the whole spectral sequence map from $H^n(L; \mathbb{Z}/2)[-(p-1)n] \cong \mathbb{Z}/2$ to $H^0(L; \mathbb{Z}/2)[-pn] \cong \mathbb{Z}/2$ on this page, and these maps are independent of $p$ (the $p$-dependence is all contained in the local system $\mathscr{C}^*$, which we are quotienting down to $\mathscr{C}^*_\mu$, and in the signs which are irrelevant mod $2$) so it suffices to understand the $p=0$ case.
From this spectral sequence and the action of $\mathbb{C}O(h)$ (by \cref{prop Quotient Properties}(\ref{itmd})) we obtain the following two lemmas, which are analogous to \cite[Lemmas 6.1.3, 6.1.4]{BiranCorneaRigidityUniruling}.
\begin{lem}
\label{HFmod2nonzero}
We have an isomorphism of graded $\mathbb{Z}/2$-vector spaces
\[
HF^*_{\mathrm{Zap}, \mu}(L, L; \mathbb{Z}/2) \cong \bigoplus_{p=-\infty}^\infty H^*(L; \mathbb{Z}/2)[-p(n + 1)].
\]
That is, $HF^k_{\mathrm{Zap}, \mu}(L, L;\mathbb{Z}/2) \cong \oplus_{p = -\infty}^\infty H^{k + (n + 1)p}(L;\mathbb{Z}/2) \cong H^{\ell_k}(L;\mathbb{Z}/2)$, where $\ell_k$ is the unique element of $\{0, \dots, n\}$ congruent to $k$ modulo $n+1$.
Further, $H^k(L;\mathbb{Z}/2) \cong \mathbb{Z}/2$ for all $0 \le k \le n$.
\end{lem}
\begin{proof}
By the preceding discussion, to prove the first part it is enough to show that the differential
\[
\partial_1 : H^n(L; \mathbb{Z}/2)[n] \rightarrow H^0(L; \mathbb{Z}/2)
\]
vanishes. Since the codomain comprises just $0$ and the classical unit, we are done if the latter survives the spectral sequence. But (the PSS image of) the classical unit is also the unit $1_L$ for the Floer product so we simply need to check that $HF^*_{\mathrm{Zap}, \mu}(L, L; \mathbb{Z}/2)$ is non-zero. To see that this is indeed the case, observe that $H^1(L; \mathbb{Z}/2)$ survives and is non-zero by \cref{H1mod2nonzero}.
We thus have that $HF^*_{\mathrm{Zap}, \mu}(L, L; \mathbb{Z}/2) \cong \bigoplus_{p=-\infty}^\infty H^*(L; \mathbb{Z}/2)[-p(n + 1)]$.
In particular, we see that $HF^0_{\mathrm{Zap}, \mu}(L, L;\mathbb{Z}/2) \cong H^0(L;\mathbb{Z}/2) \cong \mathbb{Z}/2$ and $HF^{-1}_{\mathrm{Zap}, \mu}(L, L;\mathbb{Z}/2) \cong H^n(L;\mathbb{Z}/2) \cong \mathbb{Z}/2$. But by invertibility of the hyperplane class $h$ in quantum cohomology, Floer multiplication by $\mathbb{C}O(h)$ gives an isomorphism $HF^k_{\mathrm{Zap}, \mu}(L, L;\mathbb{Z}/2) \cong HF^{k + 2}_{\mathrm{Zap}, \mu}(L, L;\mathbb{Z}/2)$ for every $k \in \mathbb{Z}$ and so we must have $HF^k_{\mathrm{Zap}, \mu}(L, L;\mathbb{Z}/2) \cong \mathbb{Z}/2$ for all $k \in \mathbb{Z}$. This finishes the proof.
\end{proof}
\begin{lem}
\label{H2}
The group $H^2(L; \mathbb{Z}/2)$ is isomorphic to $\mathbb{Z}/2$ and is generated by $i^*h$, where $i : L \rightarrow \mathbb{C}\mathbb{P}^n$ is the inclusion. In particular, $L$ is relatively pin and hence satisfies assumption (O).
\end{lem}
\begin{proof}
By \cref{HFmod2nonzero} we already know that $H^2(L; \mathbb{Z}/2)$ is isomorphic to $\mathbb{Z}/2$. Moreover, its proof simultaneously shows the following:
\begin{itemize}
\item the map $\mathrm{PSS}\colon H^2(L;\mathbb{Z}/2) \to HF^2(L, L;\mathbb{Z}/2)$ is an isomorphism (it also shows that this map is well-defined when $n = 2$, i.e. that $H^2(L;\mathbb{Z}/2) \le H^\mathrm{PSS}(L;\mathbb{Z}/2)$);
\item Floer multiplication by $\mathbb{C}O(h)$ gives an isomorphism
\[
HF^0_{\mathrm{Zap}, \mu}(L, L; \mathbb{Z}/2) \xrightarrow{\ \sim \ } HF^2_{\mathrm{Zap}, \mu}(L, L; \mathbb{Z}/2);
\]
\item $HF^0_{\mathrm{Zap}, \mu}(L, L; \mathbb{Z}/2)$ is $\mathbb{Z}/2$, generated by the unit $1_L$.
\end{itemize}
From the latter two items we get that $HF^2_{\mathrm{Zap}, \mu}(L, L;\mathbb{Z}/2)$ is $\mathbb{Z}/2$, generated by $\mathbb{C}O(h) * 1_L = \mathbb{C}O(h)$.
The diagram in \cref{prop Quotient Properties}(\ref{itmd}), relating $i^*$ to $\mathbb{C}O$, then yields the commuting diagram
\begin{equation}
\begin{tikzcd}
H^2(\mathbb{C}\mathbb{P}^n; \mathbb{Z}/2) \arrow{r}{i^*} \arrow{d}[swap]{\mathrm{PSS}} & H^2(L; \mathbb{Z}/2) \arrow{d}{\mathrm{PSS}}
\\ QH^2(\mathbb{C}\mathbb{P}^n; \mathbb{Z}/2) \arrow{r}{\mathbb{C}O} & HF^2_{\mathrm{Zap}, \mu}(L, L; \mathbb{Z}/2)
\end{tikzcd}
\end{equation}
The left-hand vertical map is an isomorphism between $\mathbb{Z}/2$'s, and the above discussion shows that the same is true for the right-hand vertical map and the bottom horizontal map. Hence the top horizontal map is also an isomorphism, which is what we wanted.
\end{proof}
Observe that \cref{HFmod2nonzero} allows us to immediately complete the $n = 2$ case of \cref{Theorem1} since, by the classification of surfaces, $\mathbb{R}\mathbb{P}^2$ is the only closed surface whose first cohomology group with $\mathbb{Z}/2$ coefficients is isomorphic to $\mathbb{Z}/2$.
We are now ready to unleash Floer theory over $\mathbb{Z}$ in order to deal with the general case.
\subsection{The main argument}
From now on we assume that $n \geq 3$, and fix an arbitrary choice of relative pin structure on $L$.
Since assumption (O) is satisfied we can work over $\mathbb{Z}$, and since $N_L \geq 3$ we can twist by any cover $L'$ as in \cref{TwistedCoeffs}, and consider the cohomology $HF^*_{\mathrm{Zap}, \mu}(L, L';\mathbb{Z})$. In principle this depends on the choice of relative pin structure but we do not explicitly notate this. By \cref{prop Twisted Properties}(\ref{itmA}) and (\ref{itmB}) the zeroth column of the first page of the Oh spectral sequence which computes $HF^*_{\mathrm{Zap}, \mu}(L, L';\mathbb{Z})$ is isomorphic to $H^*_c(L'; \mathbb{Z})$, and for degree reasons all of the intermediate cohomology (meaning $0 < * < n$) survives. The key result is the following:
\begin{prop}
\label{MainProp}
For any cover $L'$ of $L$ the compactly-supported cohomology groups $H^k_c(L';\mathbb{Z})$ for $0 < k < n$ are $2$-torsion and $2$-periodic.
\end{prop}
\begin{proof}
Since these intermediate cohomology groups survive to $HF^*_{\mathrm{Zap}, \mu}(L, L'; \mathbb{Z})$, they are acted upon by the invertible element $\mathbb{C}O(h)$ of degree $2$. This gives us $2$-periodicity.
To prove $2$-torsion, first note that since the minimal Maslov number is greater than $2$, the well-known argument of Auroux, Kontsevich and Seidel (\cite[Proposition 6.8]{AurouxMSandTduality}, \cite[Lemma 2.7]{SheridanFano}) implies that $\mathbb{C}O(2c_1(\mathbb{C}\mathbb{P}^n)) = 0$, that is $2(n+1)\mathbb{C}O(h) = 0$.
Another way to see this is to note that $\mathbb{C}O(2c_1(\mathbb{C}\mathbb{P}^n)) = \mathrm{PSS}(i^*(2c_1(\mathbb{C}\mathbb{P}^n)))$ which vanishes since $2c_1(\mathbb{C}\mathbb{P}^n) = j^*(\mu_L)$ by \eqref{eq Viterbo}.
Taking $L' = L$ and applying invertibility of $h$ again, we see that $HF^*_{\mathrm{Zap}, \mu}(L, L;\mathbb{Z})$, and hence the intermediate cohomology of $L$, is $2(n+1)$-torsion. But by the universal coefficients theorem, $H^1(L; \mathbb{Z})$ is torsion-free, so it must vanish.
We can now apply \cref{lemma: torsion of Floer cohomology} to see that $HF^*_{\mathrm{Zap}, \mu}(L, L; \mathbb{Z})$ is $2$-torsion. By \cref{prop Twisted Properties}(\ref{itmD}), for each cover $L'$ the cohomology $HF^*_{\mathrm{Zap}, \mu}(L, L'; \mathbb{Z})$ is a (unital) module over the ring $HF^*_{\mathrm{Zap}, \mu}(L, L; \mathbb{Z})$, so the former must also be $2$-torsion. This in turn means that the intermediate compactly-supported cohomology groups of each $L'$ are $2$-torsion.
\end{proof}
Finally we complete the proof of \cref{Theorem1}, by showing that $L$ has fundamental group $\mathbb{Z}/2$ and universal cover homeomorphic to $S^n$:
\begin{proof}[Proof of \cref{Theorem1}]
Let $L^+$ denote the minimal orientable cover of $L$, meaning $L$ itself, if it is orientable, or the orientable double cover otherwise.
Apply \cref{MainProp} to every connected cover $L'$ of $L^+$ to see that for every such cover the group $H^{n - 1}_c(L';\mathbb{Z})$ is $2$-torsion. Since $L'$ is orientable, Poincar\'{e} duality tells us that $H^{n - 1}_c(L';\mathbb{Z})$ is isomorphic to $H_1(L';\mathbb{Z})$ and so the latter is $2$-torsion.
By the Hurewicz theorem, this means that every subgroup of $\pi_1(L^+)$ has $2$-torsion abelianisation. In particular, by considering the cyclic subgroups, we see that every element of $\pi_1(L^+)$ has order $2$, so the group is abelian (every commutator $aba^{-1}b^{-1}$ equals the square $(ab)^2$ and hence the identity). We deduce that $\pi_1(L^+)$ is isomorphic to $H_1(L^+; \mathbb{Z})$ and is $2$-torsion. It is also finitely-generated (since $L^+$ is compact) and therefore finite. Hence $\pi_1(L)$ is finite as well.
Consider now the universal cover $\widetilde{L}$ of $L$, which is compact by the above discussion. By Hurewicz and universal coefficients, $H^1(\widetilde{L};\mathbb{Z})$ vanishes and $H^2(\widetilde{L};\mathbb{Z})$ is torsion-free. Applying \cref{MainProp} to $L' = \widetilde{L}$ (whose compactly-supported cohomology is its ordinary cohomology, by compactness), we see that $H^2(\widetilde{L};\mathbb{Z})$ is also $2$-torsion, hence zero, and $2$-periodicity then forces $H^k(\widetilde{L};\mathbb{Z})$ to vanish for all $0 < k < n$. So $\widetilde{L}$ is an integral homology sphere, and as in \cref{subsection Damian} it is homeomorphic to $S^n$.
We are now in a position to finish the proof by showing that $\pi_1(L)$ is $\mathbb{Z}/2$. Suppose first that $L$ is orientable, in which case we replace $L^+$ by $L$ in the above to see that $\pi_1(L)$ is finite, abelian and $2$-torsion and hence $\pi_1(L) \cong (\mathbb{Z}/2)^k$ for some $k \in \mathbb{N}$. On the other hand, by \cref{HFmod2nonzero}, we know that $H^1(L;\mathbb{Z}/2) \cong \mathbb{Z}/2$ and so $k = 1$.
Suppose now that $L$ is non-orientable. Then by \cref{remarkMuvsW1} we see that $n$ must be even. But then $\pi_1(L)$ is a non-trivial group which acts freely on the even-dimensional sphere $\widetilde{L}$ and so we must have $\pi_1(L) \cong \mathbb{Z}/2$.
\end{proof}
\end{document}
\begin{document}
\lhead{}
\rhead{}
\begin{flushleft}
\Large
\noindent{\bf \Large Analysis and computation of the transmission eigenvalues with a conductive boundary condition}
\end{flushleft}
\vspace{0.2in}
{\bf \large Isaac Harris}\\
\indent {\small Department of Mathematics, Purdue University, West Lafayette, IN 47907, USA}\\
\indent {\small Email: \texttt{[email protected]}}\\
{\bf \large Andreas Kleefeld}\\
\indent {\small Forschungszentrum J\"{u}lich GmbH, J\"{u}lich Supercomputing Centre, Wilhelm-Johnen-} \\
\indent {\small Stra{\ss}e, 52425 J\"{u}lich, Germany. Email: \texttt{[email protected]}}
\vspace{0.2in}
\begin{abstract}
\noindent We provide a new analytical and computational study of the transmission eigenvalues with a conductive boundary condition. These eigenvalues are derived from the scalar inverse scattering problem for an inhomogeneous material with a conductive boundary condition. The goal is to study how these eigenvalues depend on the material parameters in order to estimate the refractive index. The analytical questions we study are: deriving Faber-Krahn type lower bounds, and establishing the discreteness and the limiting behavior (as the conductivity tends to infinity) of the transmission eigenvalues for a sign changing contrast. We also provide a numerical study of a new boundary integral equation for computing the eigenvalues. Lastly, using the limiting behavior we will numerically estimate the refractive index from the eigenvalues provided the conductivity is sufficiently large but unknown.
\end{abstract}
\vspace{0.1in}
\noindent {\bf Keywords}: Transmission Eigenvalues $\cdot$ Conductive Boundary Condition $\cdot$ Inverse Spectral Problem $\cdot$ Boundary Integral Equations \\
\section{Introduction}
In this paper, we consider analytical and computational questions involving the interior transmission eigenvalues for a scalar scattering problem with a conductive boundary condition. Transmission eigenvalue problems are derived by considering the inverse scattering problem of attempting to recover the shape of the scatterer from the far-field data (see e.g. \cite{TE-book}). The scatterer is assumed to be illuminated by an incident plane wave, and the corresponding direct scattering problem associated with the transmission eigenvalues is then given by: find the total field $u \in H^1(D)$ and scattered field $u^s \in H^1_{loc}(\mathbb{R}^d \setminus \overline{D})$ (where $d=2,3$) such that
\begin{align}
\Delta u^s +k^2 u^s=0 \quad \textrm{ in } \mathbb{R}^d \setminus \overline{D} \quad \text{and} \quad \Delta u +k^2 nu=0 \quad &\textrm{ in } \, {D} \label{direct1}\\
(u^s+u^i )^+ - u^-=0 \quad \text{and} \quad \partial_\nu (u^s+u^i )^+ + \eta (u^s+u^i )^+= {\partial_{\nu} u^-} \quad &\textrm{ on } \, \partial D \label{direct3}\\
\lim\limits_{r \rightarrow \infty} r^{{(d-1)}/{2}} \left( {\partial_r u^s} -\text{i} k u^s \right)=0 \label{SRC}&\,.
\end{align}
The superscripts $+$ and $-$ indicate the trace on the boundary taken from the exterior or interior of the domain $D$, respectively. The parameter $\eta$ denotes the conductivity. The total field is given by $u=u^s+u^i$ where the incident field is given by $u^i=\text{e}^{\text{i} k x \cdot \hat{y} }$. Here the incident direction $\hat{y}$ is given by a point on the unit sphere/circle denoted $\mathbb{S}^{d-1}$. In this case one has that the corresponding far-field operator used in the inversion algorithm is injective with a dense range provided that the corresponding wave number is not a transmission eigenvalue \cite{fmconductbc}. One can consider these wave numbers as corresponding to frequencies where there is a Herglotz wave function that does not produce a scattered wave if taken as the illuminating incident wave, where the Herglotz wave function can be seen as a superposition of incident plane waves. Transmission eigenvalue problems are non-linear and non-self-adjoint eigenvalue problems. This makes their investigation interesting mathematically but also challenging. This has led to new and interesting methods for the theoretical as well as computational study of these problems.
In general, these eigenvalues can be determined from the far-field data via the linear sampling method (LSM) and the inside-out duality method (IODM). See \cite{TE-book, cchlsm, mypaper1, armin, isp-n2} for the determination of the classical transmission eigenvalues and \cite{te-cbc,te-cbc2} for numerical examples for the transmission eigenvalues with a conductive boundary condition. This implies that one does not need {\it a priori} knowledge of the coefficients to determine the eigenvalues. Since in \cite{te-cbc} the transmission eigenvalues have been shown to depend on the material parameters monotonically, one can study the {\it inverse spectral problem} of determining/estimating the material parameters from the eigenvalues. The most well known inverse spectral problem is the ``Can you hear the shape of a drum'' problem proposed in the 1966 paper \cite{drumshape}. This problem amounts to the question: Do the Dirichlet eigenvalues of the Laplacian uniquely determine the shape of the domain? It is well known that one cannot hear the shape but some geometric properties can be uniquely determined. Similarly, the question considered here can be seen as ``Can you hear the material parameters?'' since we wish to estimate the material properties from the far-field data (see e.g. \cite{mypaper1,isp-n2}). In many medical and engineering applications one wants to detect changes in the material parameters of a known object. These problems can fall under the category of non-destructive testing where one wants to infer the integrity of the interior structure using acoustic or electromagnetic waves.
We are motivated by the previous work in \cite{te-cbc}, where this eigenvalue problem was first analyzed, as well as by \cite{te-cbc2}, where the investigation was continued with the study of the eigenvalues as the conductivity tends to zero. We also refer to \cite{electro-cbc,two-eig-cbc} for the study of the electromagnetic transmission eigenvalues with a conductive boundary condition. There are some important theoretical and computational questions concerning the transmission eigenvalues with a conductive boundary condition that are studied in this paper. Here we will consider three theoretical questions for this problem, as is done in \cite{DMS-te} for the classical transmission eigenvalues. First we derive Faber-Krahn type inequalities for the transmission eigenvalues with a conductive boundary (see e.g. \cite{te-fk,te-void}). Then, we prove the discreteness as well as establish the limiting behavior as the conductivity tends to infinity for a sign changing contrast. The limiting behavior as the conductivity tends to infinity for the electromagnetic transmission eigenvalues with a conductive boundary condition was studied in \cite{two-eig-cbc}. Using the limiting behavior we will consider the inverse spectral problem of estimating the refractive index provided that the conductivity is large but unknown. Recently, the method of fundamental solutions for computing the classical transmission eigenvalues was studied and implemented in \cite{mfs-te}, and a system of boundary integral equations was established in \cite{kleefeldITP}. We will also provide a new boundary integral equation for computing the transmission eigenvalues using the idea given in \cite{cakonikress}. Precisely, the new boundary integral equation is derived by making the ansatz that the eigenfunction can be written in terms of a single layer potential.
The rest of the paper is organized as follows. In Section \ref{te-section}, we begin by defining the transmission eigenvalue problem with a conductive boundary condition and then study the associated analytical questions. Then in Section \ref{bie}, we will derive a new boundary integral equation for computing the eigenvalues and give details on its implementation. In Section \ref{numerics-section}, we provide verification of the implementation. Additionally, we will numerically validate the theoretical results for the limiting behavior as the conductivity tends to infinity. Lastly, we will numerically investigate the inverse spectral problem of estimating the refractive index from the eigenvalues provided the conductivity is sufficiently large but unknown.
\section{Analysis of the Transmission Eigenvalues }\label{te-section}
In this section, we analytically study the transmission eigenvalues with a conductive boundary condition. To do so, we will begin by rigorously defining the eigenvalue problem in the appropriate function spaces. In our analysis we will use a variational method to prove the theoretical results. Our analysis will also use the concept of $T$ coercivity for a sesquilinear form. This has been used in \cite{te-coersive1,te-coersive2} for studying other transmission eigenvalue problems. In particular, our analysis will use the $T$ coercivity developed in \cite{T-coercive}. Therefore, we let $D \subset \mathbb{R}^d$ be a bounded simply connected open set with $\nu$ being the unit outward normal to the boundary. Here we will assume that the boundary $\partial D$ is either of class $\mathscr{C}^2$ or is polygonal with no reentrant corners. This assumption on the boundary will allow us to appeal to elliptic regularity estimates in our analysis. Furthermore, we assume that the refractive index $n(x)$ is a scalar bounded real-valued function defined in $D$ and the conductivity parameter $\eta (x)$ is a scalar bounded real-valued function defined on the boundary $\partial D$.
The transmission eigenvalue problem with conductive boundary condition is given by: find $k \in \mathbb{C}$ and non-trivial $(w,v) \in L^2(D) \times L^2(D)$ such that
\begin{align}
\Delta w +k^2 n w=0 \quad \text{and} \quad \Delta v + k^2 v=0 \quad &\textrm{ in } \, D \label{teprob1} \\
w-v=0 \quad \text{and} \quad {\partial_{\nu} w}-{\partial_\nu v}= \eta v \quad &\textrm{ on } \partial D \label{teprob2}
\end{align}
where $w-v \in X(D)$. The variational space for the difference of the eigenfunctions is
$$X(D)=H^2(D) \cap H^1_0(D) \quad \text{ equipped with the norm } \quad \| \cdot \|_{X(D)} = \|\Delta \cdot \|_{L^2(D)}\,.$$
By our assumptions on the boundary $\partial D$ we have that the well-posedness estimate for the Poisson problem and the $H^2$ elliptic regularity estimate (see e.g. \cite{evans}) imply that the $L^2$ norm of the Laplacian is equivalent to the $H^2$ norm on the associated Hilbert space $X(D)$.
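Spelled out, this means that there are constants $c_1, c_2>0$, depending only on $D$, such that
$$ c_1 \| \varphi \|_{H^2(D)} \leq \|\Delta \varphi \|_{L^2(D)} \leq c_2 \| \varphi \|_{H^2(D)} \quad \text{ for all } \quad \varphi \in X(D)\,, $$
where the first inequality is the elliptic regularity estimate for the Dirichlet problem for the Poisson equation and the second follows directly from the definition of the $H^2$ norm.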
Here the Sobolev spaces are given by
$$H^2(D) =\big\{ \varphi \in L^2(D) \, : \, \partial_{x_i} \varphi \quad \text{and} \quad \partial_{x_i x_j} \varphi \in L^2(D) \, \text{ for } \, i,j =1, \ldots, d \big\} $$
and
$$H^1_0(D) =\big\{ \varphi \in L^2(D) \, : \, \partial_{x_i} \varphi \in L^2(D) \,\, \text{ for } \,\, i =1, \ldots , d \,\, \text{ with } \,\, \varphi|_{\partial D}=0 \big\}\,.$$
Notice that by subtracting the equations in \eqref{teprob1} and the boundary conditions in \eqref{teprob2} we have that the difference $u=w-v$ satisfies
$$ \Delta u +k^2 n u=-k^2(n-1)v \quad \textrm{ in } \, D\,, \quad u=0 \, \, \textrm{ and } \, \, {\partial_\nu}u = \eta v \, \textrm{ on } \, \partial D\,.$$
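Indeed, writing out the subtraction explicitly, since $\Delta w +k^2 n w=0$ and $\Delta v +k^2 v=0$ in $D$ we have
$$ \Delta u +k^2 n u = (\Delta w +k^2 n w)-(\Delta v +k^2 n v) = -(\Delta v +k^2 v)-k^2(n-1)v = -k^2(n-1)v\,, $$
while the boundary conditions in \eqref{teprob2} give $u=0$ and ${\partial_\nu}u = {\partial_\nu w}-{\partial_\nu v} = \eta v$ on $\partial D$.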
{\color{black} For analytical considerations we will assume that $|n-1|^{-1} \in L^{\infty}( D)$.} Therefore, we have that the transmission eigenvalue problem with conductive boundary condition can be written as:
find the values $k \in \mathbb{C}$ such that there is a nontrivial solution $u \in X(D)$ satisfying
\begin{align}
(\Delta+k^2)\frac{1}{n-1}(\Delta u +k^2 n u)=0 \quad &\textrm{ in } \, D\,, \label{teprobu3} \\
-\frac{k^2}{\eta} \frac{\partial u}{\partial \nu}=\frac{1}{n-1}(\Delta u +k^2 n u) \quad &\textrm{ on } \partial D\,. \label{teprobu4}
\end{align}
The boundary condition \eqref{teprobu4} is understood in the sense of the trace theorem. In \cite{te-cbc} it has been shown that \eqref{teprob1}--\eqref{teprob2} and \eqref{teprobu3}--\eqref{teprobu4} are equivalent. We also have that there are infinitely many real transmission eigenvalues provided $\eta$ is strictly positive on $\partial D$ and the contrast $n-1$ is either uniformly positive or negative in $D$.
Notice that $v$ and $w$ are related to the eigenfunction $u$ by
$$v = -\frac{1}{k^2(n-1)}(\Delta u + k^2 n u) \quad \text{ and } \quad w = -\frac{1}{k^2(n-1)}(\Delta u + k^2 u)\,. $$
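As a quick consistency check, these formulas return the original difference:
$$ w-v = -\frac{1}{k^2(n-1)}\Big[(\Delta u + k^2 u)-(\Delta u + k^2 n u)\Big] = -\frac{1}{k^2(n-1)}\big(-k^2(n-1)u\big) = u\,. $$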
In order to study this eigenvalue problem \eqref{teprobu3}--\eqref{teprobu4} we derive an equivalent variational formulation. Taking a test function $\varphi \in X(D)$, multiplying \eqref{teprobu3} by the conjugate of $\varphi$, and using Green's Second Theorem we have that
\begin{align}
0 = \int\limits_D \frac{1}{ n-1 }(\Delta u +k^2 nu) (\Delta \overline{\varphi} +k^2 \overline{\varphi}) \, \text{d}x + \int\limits_{\partial D} \frac{k^2}{\eta} \frac{\partial u}{\partial \nu} \frac{\partial \overline{\varphi} }{\partial \nu} \, \text{d}s \label{TE-varform}
\end{align}
(see \cite{te-cbc} for details) where the boundary integral is obtained by the conductive boundary condition \eqref{teprobu4}. In order for the variational form to be well-defined in $X(D)$ we will assume that
$$|n-1|^{-1} \in L^{\infty}( D) \quad \text{ and } \quad 0<\eta_{\text{min}} \leq \eta \leq \eta_{\text{max}} \quad \text{ for a.e.} \,\,\,\,x \in \partial D$$
where $\eta_{\text{min}}$ and $\eta_{\text{max}}$ are constants.
\subsection{Faber-Krahn type inequalities}
We now show that there is a lower bound on the real transmission eigenvalues provided $\eta$ is strictly positive on $\partial D$ and the contrast $n-1$ is either uniformly positive or negative in $D$. We will derive these lower bounds by studying the variational formulation \eqref{TE-varform}. The lower bounds will depend on whether the contrast is positive or negative in $D$. Here we use techniques similar to those in \cite{te-void}, where Faber-Krahn inequalities for the classical transmission eigenvalues with a cavity have been derived.
To begin, we first need a Poincar\'e type estimate in $X(D)$ in terms of an auxiliary eigenvalue problem. Therefore, we let $\lambda \in \mathbb{R}_{+}$ and the non-trivial $\psi \in X(D)$ be the simply supported plate buckling eigenpair with
\begin{align}
\Delta^2 \psi = -\lambda \Delta \psi \,\, \,\, \text{in} \,\, \,\,D \quad \text{ and } \quad \Delta \psi = 0 \,\,\,\, \text{{\color{black} on}} \,\,\,\, \partial D\,. \label{aux-eig}
\end{align}
It is clear that there are infinitely many eigenvalues $\lambda_j$ and eigenfunctions $\psi_j$ satisfying the above problem. The eigenvalues also satisfy a Courant-Fischer min-max principle and in particular the smallest eigenvalue satisfies
$$\lambda_1 = \min\limits_{\varphi \in X(D) \setminus \{ 0\} } \frac{ \| \Delta \varphi \|^2_{L^2(D)} }{ \| \nabla \varphi \|^2_{L^2(D)}}\,.$$
Therefore, we can conclude that for all $\varphi \in X(D)$ we have the Poincar\'e type estimate
$$ \lambda_1 \| \nabla \varphi \|^2_{L^2(D)} \leq \| \Delta \varphi \|^2_{L^2(D)}\,.$$
From this we can now derive the Faber-Krahn type inequalities for the transmission eigenvalues with a conductive boundary condition. For this we will now assume that there are two constants such that
$$ n_{\text{min}} \leq n \leq n_{\text{max}} \quad \text{ for a.e.} \,\,\,\,x \in D\,.$$
\begin{theorem}\label{fk-inequ}
Let $k$ be a real transmission eigenvalue satisfying \eqref{teprobu3}--\eqref{teprobu4} then we have the inequalities
$$ k^2 \geq \frac{\lambda_1}{n_{\text{max}}} \,\, \textrm{ when } \,\, {\color{black}n_{\text{min}}>1} \quad \text{ and} \quad k^2 \geq \frac{\lambda_1 {\color{black} \eta_{\text{min}}} }{{\color{black} \eta_{\text{min}}} +C_T \lambda_1} \,\, \textrm{ when } \,\, {\color{black}n_{\text{max}}<1}\,. $$
Here $\lambda_1$ is the first simply supported plate buckling eigenvalue for $D$ and $C_T$ is the trace theorem constant such that $\| \partial_{\nu} \varphi \|^2_{L^2(\partial D)} \leq C_T \| \Delta \varphi \|^2_{L^2(D)}$ for all $\varphi \in X(D)$.
\end{theorem}
\begin{proof}
To prove the claim we first start with the case when ${\color{black}n_{\text{min}}>1}$. Therefore, it can be shown (see also \cite{te-cbc}) by taking $\varphi =u$ (the corresponding eigenfunction) that \eqref{TE-varform} can be written as
\begin{align*}
0&=\int\limits_D \frac{1}{ n-1} | \Delta u +k^2 u |^2 +k^4 |u|^2 -k^2|\nabla u |^2 \, \mathrm{d} x + k^2 \int\limits_{\partial D} \frac{1}{\eta} |\partial_{\nu} u|^2 \, \mathrm{d} s \\
&\geq \int\limits_D \alpha | \Delta u +k^2 u |^2 \, \mathrm{d} x+ \int\limits_Dk^4 |u|^2 -k^2|\nabla u |^2 \, \mathrm{d} x + k^2 \int\limits_{\partial D} \frac{1}{\eta} |\partial_{\nu} u|^2 \, \mathrm{d} s
\end{align*}
where we let $\alpha=(n_{\text{max}}-1)^{-1}>0$. We now expand to obtain
\begin{align*}
0 &\geq \alpha \| \Delta u \|^2_{L^2(D)} -2\alpha k^2 \| \Delta u \|_{L^2(D)}\| u \|_{L^2(D)} +k^4(\alpha+1) \| u \|^2_{L^2(D)} -k^2 \| \nabla u \|^2_{L^2(D)}\\
&\geq \left( \alpha -\frac{\alpha^2}{\varepsilon} \right)\| \Delta u \|^2_{L^2(D)} +k^4(\alpha+1-\varepsilon)\| u \|^2_{L^2(D)}-k^2 \| \nabla u \|^2_{L^2(D)}
\end{align*}
by appealing to the Cauchy-Schwarz inequality and Young's inequality for any positive $\varepsilon$. Provided that $\varepsilon \in (\alpha , \alpha+1)$ we can conclude that
\begin{align*}
0 &\geq \left( \alpha -\frac{\alpha^2}{\varepsilon} \right)\| \Delta u \|^2_{L^2(D)} -k^2 \| \nabla u \|^2_{L^2(D)}\\
&\geq \left( \alpha -\frac{\alpha^2}{\varepsilon} - \frac{k^2}{\lambda_1}\right)\| \Delta u \|^2_{L^2(D)}
\end{align*}
where we have used the Poincar\'e estimate in $X(D)$. This implies that
$$ \alpha \lambda_1 \left(1- \frac{\alpha}{\varepsilon} \right) \leq k^2 \quad \text{ for all } \quad \varepsilon \in (\alpha , \alpha+1)$$
provided that $k$ is a transmission eigenvalue. Letting $\varepsilon \to \alpha+1$ and the fact that $\alpha=(n_{\text{max}}-1)^{-1}>0$ we obtain $k^2 \geq \frac{\lambda_1}{n_{\text{max}}}$ which proves the result for this case.
Now, for the case when ${\color{black} n_{\text{max}}<1}$ we take $\varphi =u$ and, rewriting \eqref{TE-varform}, we can obtain
\begin{align*}
0&= \int\limits_D \frac{n}{ 1-n}| \Delta u +k^2 u |^2 +|\Delta u|^2 \, \mathrm{d} x -k^2 \int\limits_D |\nabla u|^2 \, \mathrm{d} x -k^2 \int\limits_{\partial D} \frac{1}{\eta} |\partial_{\nu} u|^2 \, \mathrm{d} s\\
&\geq \| \Delta u \|^2_{L^2(D)} -k^2 \| \nabla u \|^2_{L^2(D)} - \frac{k^2}{ {\color{black} \eta_{\text{min}}} } \| \partial_{\nu} u \|^2_{L^2(D)}\,.
\end{align*}
We now apply the trace theorem as well as the Poincar\'e type estimate in $X(D)$ to obtain
$$ 0\geq \left(1 - \frac{k^2}{\lambda_1} - \frac{C_T k^2}{{\color{black} \eta_{\text{min}}}} \right) \| \Delta u \|^2_{L^2(D)}\,.$$
Therefore, since $k$ is a transmission eigenvalue we again have $\|\Delta u\|_{L^2(D)} \neq 0$, and a simple calculation gives that $k^2 \geq \frac{\lambda_1 {\color{black} \eta_{\text{min}}} }{{\color{black} \eta_{\text{min}}} +C_T \lambda_1}$ which proves the claim.
\end{proof}
Notice that Theorem \ref{fk-inequ} holds for any domain $D$ with piecewise smooth boundary $\partial D$. Here we have assumed that $\partial D$ is either of class $\mathscr{C}^2$ or is polygonal with no reentrant corners, so that the elliptic $H^2$ regularity estimate holds. Using this one can show that the simply supported plate buckling eigenvalues coincide with the Dirichlet eigenvalues for the Laplacian. Theorem \ref{fk-inequ} then gives that, when ${\color{black} n_{\text{min}} }>1$, the transmission eigenvalues with a conductive boundary condition satisfy the same Faber-Krahn inequality as the classical transmission eigenvalues given in \cite{te-fk}. For the case when ${\color{black} n_{\text{max}}<1}$ we note from \cite{te-cbc2} that as ${\color{black}\eta \to 0}$ we have $k_\eta \to k_0$, where $k_0$ is the classical transmission eigenvalue with $\eta = 0$. Therefore, in the limit as ${\color{black} \eta \to 0}$ the Faber-Krahn inequality in Theorem \ref{fk-inequ} becomes the known inequality in \cite{te-fk} for the classical transmission eigenvalues.
\subsection{Discreteness for sign changing contrast}
For this section, we will study the discreteness of the transmission eigenvalues when the contrast satisfies $|n-1|^{-1} \in L^{\infty}( D)$. Recall that this is all that is needed for the variational formulation \eqref{TE-varform} to make sense. In \cite{te-cbc} the discreteness of the set of transmission eigenvalues was established provided that the contrast is either uniformly positive or negative in $D$. This is a strong requirement on the contrast, especially if one is interested in the problem of estimating the refractive index from the eigenvalues for non-destructive testing. We will employ the concept of $T$ coercivity that was studied in \cite{T-coercive} for the biharmonic operator with sign changing coefficients.
To begin, we will define what it means for a sesquilinear form to be $T$ coercive. Let $a(\cdot , \cdot )$ be a given bounded sesquilinear form acting on a Hilbert space $X$; then we say that $a(\cdot , \cdot )$ is $T$ coercive provided that there is an isomorphism $T : X \mapsto X$ such that $a(\cdot , T \cdot )$ is a coercive sesquilinear form. Notice that by the inf-sup condition we have that if $a(\cdot , \cdot )$ is $T$ coercive, then $a(\cdot , \cdot )$ can be represented by a continuously invertible operator $A: X \mapsto X$ such that
$$ a(u,\varphi)=(Au,\varphi)_{X} \quad \text{ for all } \quad u,\varphi \in X\,.$$
This will be used to split the variational formulation \eqref{TE-varform} into an invertible part and a compact part. Then we can appeal to the Analytic Fredholm Theorem (see e.g. \cite{TE-book}) to conclude discreteness.
Now, we have that the variational formulation \eqref{TE-varform} can be written as
\begin{align}
a(u,\varphi)+b_k (u,\varphi) = 0 \quad \text{ for all } \quad \varphi \in X(D)\,. \label{varform}
\end{align}
The bounded sesquilinear forms $a(\cdot \, , \cdot)$ and $b_k(\cdot \, , \cdot)$ on $X(D) \times X(D)$ are given by
\begin{align}
a(u,\varphi) &= \int\limits_D \frac{1}{ n-1} \Delta u \Delta \overline{\varphi} \, \mathrm{d} x \label{a-form}
\end{align}
and
\begin{align}
b_k(u,\varphi) &= k^2\int\limits_D \frac{1}{n-1} ( \overline{\varphi} \, \Delta u + u \, \Delta \overline{\varphi}) \, \mathrm{d} x - k^2 \int\limits_D \nabla u \cdot \nabla \overline{\varphi} \, \mathrm{d} x \nonumber \\
&\hspace{2in} + k^2 \int\limits_{\partial D} \frac{1}{\eta} {\partial_{\nu} u}{\partial_{\nu} \overline{\varphi} } \, \mathrm{d} s + k^4 \int\limits_D u \overline{\varphi} \, \mathrm{d} x\,. \label{b-form}
\end{align}
The boundedness is clear from the fact that it is assumed that $|n-1|^{-1} \in L^{\infty}( D)$ and the conductivity satisfies $0<\eta_{\text{min}} \leq \eta$. Following the arguments in \cite[Section 3]{te-cbc}, we can conclude that $b_k(\cdot \, , \cdot)$ in \eqref{b-form} can be represented by a compact operator $B_k :X(D) \mapsto X(D)$. We also obtain since
$$ b_k (u,\varphi)=(B_k u,\varphi)_{X(D)} \quad \text{ for all } \quad u,\varphi \in X(D)$$
that $B_k$ depends analytically on $k \in \mathbb{C}$. Similarly, we have that there is a bounded operator $A:X(D) \mapsto X(D)$ that represents the sesquilinear form $a(\cdot \, , \cdot)$ in \eqref{a-form}. Now, motivated by \cite{T-coercive} we define the operator $T:X(D) \mapsto X(D)$ such that
\begin{align}
\Delta T\varphi = (n-1) \Delta \varphi \quad \text{ for all} \quad \varphi \in X(D)\,. \label{T-def}
\end{align}
We now wish to show that the operator $T$ defined by \eqref{T-def} is an isomorphism on $X(D)$ whenever $\partial D$ is either of class $\mathscr{C}^2$ or is polygonal with no reentrant corners. To do so, we first notice that by the well-posedness of the Poisson problem we have that for every $\varphi \in X(D)$ there is a $T\varphi \in H_0^1(D)$ satisfying \eqref{T-def}. Now, by elliptic regularity \cite{evans} we conclude that $T\varphi \in X(D)$. Moreover, by \eqref{T-def} along with the fact that $|n-1|^{-1} \in L^{\infty}( D)$ we easily obtain that
$$ C \|\Delta \varphi\|_{L^2(D)}^2 \leq \big| ( \Delta T\varphi , \Delta \varphi )_{L^2(D)} \big|$$
where the constant $C$ is independent of $\varphi$ but depends only on the contrast. Since $\| \cdot \|_{X(D)} = \|\Delta \cdot \|_{L^2(D)}$ we can conclude that $T$ is coercive on $X(D)$. This implies that $T$ is an isomorphism on $X(D)$ by the Lax-Milgram Lemma. With this we are ready to prove the discreteness of the transmission eigenvalues.
\begin{theorem}\label{discrete}
Assume that $|n-1|^{-1} \in L^{\infty}( D)$ and $0<\eta_{\text{min}} \leq \eta$. Then the set of transmission eigenvalues is at most discrete.
\end{theorem}
\begin{proof}
To prove the claim we will appeal to the Analytic Fredholm Theorem. Therefore, by the definition of the operator $T$ given in \eqref{T-def} we notice that
$$a(u,Tu) = \int\limits_D \frac{1}{ n-1} \Delta u \Delta \overline{T u} \, \mathrm{d} x = \|\Delta u\|_{L^2(D)}^2 \quad \text{ for all } \quad u \in X(D)$$
which implies that $a(\cdot , \cdot )$ is $T$ coercive on $X(D)$ and therefore $a(\cdot \, , \cdot)$ can be represented by an invertible operator. From \eqref{varform} we have that $k \in \mathbb{C}$ is a transmission eigenvalue if and only if $A+B_k$ is not injective. Since $A$ is invertible and $B_k$ is compact we have that $A+B_k$ is Fredholm
with index zero. By definition of $B_k$ from the sesquilinear form given in \eqref{b-form} we have that $B_0$ is the zero operator. We then conclude that $k=0$ is not a transmission eigenvalue since $A+B_0$ is injective and by the Analytic Fredholm Theorem there is at most a discrete set of values in $\mathbb{C}$ where $A+B_k$ fails to be injective, proving the claim.
\end{proof}
Notice that Theorem \ref{discrete} only requires that $|n-1|^{-1} \in L^{\infty}( D)$. This implies that the contrast can take both positive and negative values. The discreteness result given in \cite{te-cbc} as well as the analysis (for real-valued contrast) in \cite{te-cbc2} depends on the contrast being of one sign. The existence of these transmission eigenvalues is still an open question for the case of a sign changing coefficient. We also note that Theorem \ref{discrete} holds for the case when the refractive index is complex-valued as long as $|n-1|^{-1} \in L^{\infty}( D)$.
\subsection{Convergence as the conductivity goes to infinity}
In this section, we study the convergence as $\eta$ tends to infinity. Here, we will assume that $\eta \in L^{\infty}(\partial D)$ and that $\eta_{\text{min}} \to \infty$. This has been studied for the analogous Maxwell's transmission eigenvalues with a conductive boundary condition in \cite{two-eig-cbc}. The analysis in \cite{two-eig-cbc} for the Maxwell's system is only established when the contrast is either positive or negative definite. By appealing to the $T$ coercivity discussed in the previous section we will be able to study the convergence for a sign changing contrast. We also remark that the analysis as $\eta$ tends to zero in \cite{te-cbc2} can be augmented to work for a sign changing contrast as well.
Now, we assume that the transmission eigenvalues $k_\eta \in \mathbb{R}_{+}$ form a bounded set as $\eta_{\text{min}} \to \infty$. From Theorem 3.2 in \cite{te-cbc2} we have that this is the case provided that the contrast is positive (or negative) in the domain $D$. Also, since the eigenfunctions $u_\eta$ are non-trivial we may assume that they are normalized in $X(D)$, i.e. $\|\Delta u_\eta \|_{L^2(D)}^2 =1$ for any $\eta$. Since the sequences $(k_\eta , u_\eta) \in \mathbb{R}_{+} \times X(D)$ are bounded as $\eta_{\text{min}} \to \infty$ we then conclude that (up to a subsequence) there exist $k_\infty $ and $u_\infty $ such that
$$k_\eta \to k_\infty \quad \text{ and } \quad u_\eta \rightharpoonup u_\infty \,\, \textrm{ in } \,\, X(D) \quad \textrm{ as } \,\, \eta_{\text{min}} \to \infty\,.$$
Recall, that the eigenpair satisfies
$$ a(u_\eta ,\varphi)+b_{k_\eta ,\eta } (u_\eta ,\varphi) = 0 \quad \text{ for all } \quad \varphi \in X(D)$$
where the sesquilinear forms $a(\cdot \, , \cdot)$ and $b_{k ,\eta }(\cdot \, , \cdot)$ are defined as in \eqref{a-form}--\eqref{b-form} where the dependence on $\eta$ is made explicit. Here, we will let $b_{k ,0 }(\cdot \, , \cdot)$ denote the sesquilinear form without the boundary integral on $\partial D$. Therefore, since $k_\eta$ and $u_\eta$ are bounded we have that
$$ | a(u_\eta ,\varphi)+b_{k_\eta , 0 } (u_\eta ,\varphi) |= \left| k_\eta ^2 \int\limits_{\partial D} \frac{1}{\eta} {\partial_{\nu} u_\eta}{\partial_{\nu} \overline{\varphi} } \, \mathrm{d} s \right| \leq \frac{C}{\eta_{\text{min}}} \| \partial_{\nu} \varphi \|_{L^2(\partial D)}$$
which tends to zero as $\eta_{\text{min}} \to \infty$ for any $\varphi \in X(D).$ By appealing to the convergence of the eigenvalues and weak convergence of the eigenfunctions we have that
$$ a(u_\infty ,\varphi)+b_{k_\infty ,0 } (u_\infty ,\varphi) =0 \quad \text{ for all } \quad \varphi \in X(D)\,.$$
In order to prove the main result of this section we first must show the convergence of $u_\eta$ to $u_\infty$ in the $X(D)$ norm.
\begin{lemma}\label{conv-lemma}
Assume that $|n-1|^{-1} \in L^{\infty}(D)$ and $k_\eta \in \mathbb{R}_{+}$ forms a bounded set as $\eta_{\text{min}} \to \infty$. Then $u_\eta \to u_\infty $ in $X(D)$ as $\eta_{\text{min}} \to \infty$.
\end{lemma}
\begin{proof}
In order to prove the claim we first notice that for any $\varphi \in X(D)$
\begin{eqnarray*}
&&a\big( u_\eta - u_\infty , \varphi \big) \\
&& \hspace{0.4in}= b_{k_\infty , 0 }(u_\infty , \varphi ) - b_{k_\eta , 0 }(u_\infty , \varphi ) - b_{k_\eta , 0 }\big( u_\eta - u_\infty , \varphi \big) - k_\eta ^2 \int\limits_{\partial D} \frac{1}{\eta} {\partial_{\nu} u_\eta}{\partial_{\nu} \overline{\varphi} } \, \mathrm{d} s.
\end{eqnarray*}
We now need to estimate the right-hand side of the equality and prove that it tends to zero as $\eta_{\text{min}} \to \infty$ for $\varphi = T(u_\eta - u_\infty)$. Then, by the $T$ coercivity we have that
$$ a\big( u_\eta - u_\infty , T(u_\eta - u_\infty ) \big) = \|\Delta (u_\eta -u_\infty ) \|_{L^2(D)}^2$$
where $T$ is defined via \eqref{T-def} and showing that $ a\big( u_\eta - u_\infty , T(u_\eta - u_\infty ) \big)$ tends to zero as $\eta_{\text{min}} \to \infty$ will give the result. We begin with
\begin{eqnarray*}
&&b_{k_\infty , 0 }(u_\infty , \varphi ) - b_{k_\eta , 0 }(u_\infty , \varphi )\\
&& \hspace{0.4in} = (k_\infty ^2 -k_\eta^2)\int\limits_D \frac{1}{n-1} ( \overline{\varphi} \, \Delta u_\infty + u_\infty \, \Delta \overline{\varphi}) \, \mathrm{d} x \\
&& \hspace{0.4in}- (k_\infty ^2 -k_\eta^2) \int\limits_D \nabla u_\infty \cdot \nabla \overline{\varphi} \, \mathrm{d} x + (k_\infty ^4 -k_\eta^4) \int\limits_D u_\infty \overline{\varphi} \, \mathrm{d} x
\end{eqnarray*}
for any $\varphi \in X(D)$. Notice that since $k_\eta \to k_\infty $ as $\eta_{\text{min}} \to \infty$
$$ \big| b_{k_\infty , 0 }\big( u_\infty , T(u_\eta - u_\infty ) \big) - b_{k_\eta , 0 }\big( u_\infty , T(u_\eta - u_\infty ) \big)\big| \longrightarrow 0 \quad \textrm{ as } \,\, \eta_{\text{min}} \to \infty$$
where we have used the fact that $T$ is a bounded operator and $u_\eta - u_\infty $ is bounded in $X(D)$.
Just as before we have that
$$ \left| k_\eta ^2 \int\limits_{\partial D} \frac{1}{\eta} {\partial_{\nu} u_\eta}{\partial_{\nu} \overline{T(u_\eta - u_\infty)} } \, \mathrm{d} s \right| \leq \frac{C}{\eta_{\text{min}}} \longrightarrow 0 \quad \textrm{ as } \,\, \eta_{\text{min}} \to \infty$$
where we have used the fact that $T$ is a bounded operator and $u_\eta - u_\infty$ is bounded in $X(D)$. Now, notice that
\begin{eqnarray*}
&&b_{k_\eta , 0 }\big( u_\eta - u_\infty , T(u_\eta - u_\infty) \big) \\
&& \hspace{0.4in} = k_\eta^2 \int\limits_D \frac{1}{n-1} \Big( \overline{T(u_\eta - u_\infty) } \, \Delta (u_\eta - u_\infty) + (u_\eta - u_\infty) \, \Delta \overline{T(u_\eta - u_\infty) } \Big) \, \mathrm{d} x \\
&& \hspace{0.4in} -k_\eta^2 \int\limits_D \nabla (u_\eta - u_\infty) \cdot \nabla \overline{T(u_\eta - u_\infty) } \, \mathrm{d} x + k_\eta^4 \int\limits_D (u_\eta - u_\infty) \overline{T(u_\eta - u_\infty) } \, \mathrm{d} x.
\end{eqnarray*}
By compact embedding of $H^2(D)$ into $H^1(D)$ we have that
$$\| u_\eta - u_\infty \|_{H^1(D)} \quad \text{ and } \quad \| T(u_\eta - u_\infty) \|_{H^1(D)}$$
tend to zero as $\eta_{\text{min}} \to \infty$. Then, by the Cauchy-Schwarz inequality and the fact that $k_\eta \in \mathbb{R}_{+}$ forms a bounded set, we conclude that
$$ b_{k_\eta , 0 }\big( u_\eta - u_\infty , T(u_\eta - u_\infty) \big) \longrightarrow 0 \quad \textrm{ as } \,\, \eta_{\text{min}} \to \infty\,.$$
From this we have that
$$ \|\Delta (u_\eta -u_\infty) \|_{L^2(D)}^2 = a\big( u_\eta - u_\infty , T(u_\eta - u_\infty) \big) \longrightarrow 0 \quad \textrm{ as } \,\, \eta_{\text{min}} \to \infty$$
which proves the claim.
\end{proof}
Lemma \ref{conv-lemma} gives that the corresponding eigenpair will converge as $\eta_{\text{min}} \to \infty.$ The natural question is to determine what specific limiting values one obtains for the eigenpair. One would expect the limiting values to be an eigenpair for some associated limiting eigenvalue problem. Therefore, we recall that for a given eigenpair $(k_\eta , u_\eta) \in \mathbb{R}_{+} \times X(D)$ there is a corresponding $(v_\eta ,w_\eta ) \in L^2(D) \times L^2(D)$ such that
$$v_\eta = -\frac{1}{k_\eta^2(n-1)}(\Delta u_\eta + k_\eta^2 n u_\eta) \quad \text{ and } \quad w_\eta = -\frac{1}{k_\eta^2(n-1)}(\Delta u_\eta + k_\eta^2 u_\eta)\,.$$
Here, $(v_\eta ,w_\eta )$ satisfy the Helmholtz equation and `modified' Helmholtz equation in the distributional sense with
$$\Delta v_\eta +k_\eta^2 v_\eta=0 \quad \text{and} \quad \Delta w_\eta + k_\eta^2 n w_\eta=0 \quad \textrm{ in } \,\,\, D\,.$$
Due to the convergence
$$k_\eta \to k_\infty \quad \text{ and } \quad u_\eta \to u_\infty \,\, \textrm{ in } \,\, X(D) \quad \textrm{ as } \,\, \eta_{\text{min}} \to \infty$$
we can conclude that there is a $(v_\infty ,w_\infty ) \in L^2(D) \times L^2(D)$ such that
$$v_\eta \to v_\infty \quad \text{ and } \quad w_\eta \to w_\infty \,\, \textrm{ in } \,\, L^2(D) \quad \textrm{ as } \,\, \eta_{\text{min}} \to \infty$$
that are solutions to the Helmholtz equation and `modified' Helmholtz equation in the distributional sense with
$$\Delta v_\infty +k_\infty^2 v_\infty=0 \quad \text{and} \quad \Delta w_\infty + k_\infty^2 n w_\infty=0 \quad \textrm{ in } \,\,\, D\,.$$
From this it seems that the limiting value $k_\infty^2$ may be either a Dirichlet eigenvalue or a `modified' Dirichlet eigenvalue for the domain $D$. The following result proves this assertion.
\begin{theorem}\label{conv-thm}
Assume that $|n-1|^{-1} \in L^{\infty}(D)$ and $k_\eta \in \mathbb{R}_{+}$ forms a bounded set as $\eta_{\text{min}} \to \infty$. Then $k_\eta \to k_\infty $ as $\eta_{\text{min}} \to \infty$ where $k_\infty^2$ is either a Dirichlet eigenvalue or a `modified' Dirichlet eigenvalue for the domain $D$.
\end{theorem}
\begin{proof}
To prove the claim we must show that either $v_\infty$ or $w_\infty$ is non-trivial with zero trace on $\partial D$. Now, recall that
$$v_\eta = \frac{1}{\eta} \partial_{\nu} u_\eta \quad \text{ and } \quad w_\eta = v_\eta \quad \text{ on } \,\, \partial D\,.$$
Since we have assumed that $\|\Delta u_\eta \|_{L^2(D)}^2 =1$ the trace theorem implies that
$$ \| v_\eta\|_{H^{1/2}(\partial D)} \quad \text{ and } \quad \| w_\eta\|_{H^{1/2}(\partial D)} $$
tend to zero as $\eta_{\text{min}} \to \infty$. This implies that $v_\infty$ and $w_\infty$ have zero trace on $\partial D$. By appealing to Green's first theorem and the strong $L^2(D)$ convergence, we have that $(v_\infty ,w_\infty ) \in H^1_0(D) \times H^1_0(D)$ such that
$$\Delta v_\infty +k_\infty^2 v_\infty=0 \quad \text{and} \quad \Delta w_\infty + k_\infty^2 n w_\infty=0 \quad \textrm{ in } \,\,\, D\,.$$
It is left to prove that either $v_\infty$ or $w_\infty$ is non-trivial. Assume on the contrary that $v_\infty=w_\infty=0$ in $D$. Then we would have that $u_\infty = w_\infty - v_\infty=0$ in $D$ which contradicts the fact that $\|\Delta u_\infty \|_{L^2(D)}^2 =1$ given by Lemma \ref{conv-lemma}. This proves the claim.
\end{proof}
\section{Boundary Integral Equations}\label{bie}
In this section, we will derive new boundary integral equations for various interior transmission eigenvalue problems, based on the idea of \cite{cakonikress}. Recall that, in general, the problem under consideration is the following: find a non-trivial solution $(w,v)\in L^2(D) \times L^2(D)$ and $k\in \mathbb{C}$ such that
\begin{align*}
\Delta w +n k^2 w=0 \quad &\textrm{ in } \, D \\
\Delta v + \tilde{n} k^2 v=0 \quad &\textrm{ in } \, D \\
w=v \quad &\textrm{ on } \partial D\\
{\partial_{\nu} w}-{\partial_\nu v}=\eta w \quad &\textrm{ on } \partial D
\end{align*}
is satisfied, where $n$, $\tilde{n}$ as well as $\eta$ are given. Given different parameters for $\tilde{n}$ and $\eta$ we obtain substantially different eigenvalue problems. Here, we will now assume that the parameters $n$, $\tilde{n}$ and $\eta$ are constant.
If $\tilde{n}=1$ and $\eta=0$, then we are dealing with the `classical' interior transmission eigenvalue problem.
If $\tilde{n}=1$ and $\eta>0$, then we have the interior transmission eigenvalue problem with conductive boundary condition.
If $\tilde{n}=0$ and $\eta>0$, then we have the zero-index interior transmission eigenvalue problem with conductive boundary condition.
To solve this problem, we use boundary integral equations. We make the single-layer ansatz as in \cite{cakonikress}. Precisely, we use
\[w(x)=\mathrm{SL}_{k\sqrt{n}}\varphi(x)\qquad\text{and}\qquad v(x)=\mathrm{SL}_{k\sqrt{\tilde{n}}}\psi(x)\,,\qquad x\in D\,,\]
where
\[\mathrm{SL}_{k}\phi(x)=\int_{\partial D}\Phi_k(x,y)\phi(y)\,\mathrm{d}s\,,\qquad x\in D\]
with $\Phi_k(x,y)$ the fundamental solution of the Helmholtz equation in two dimensions. Here, $\varphi$ and $\psi$ are yet unknown functions on $\partial D$. In order to obtain an integral equation for the eigenvalue problem we will need the Dirichlet-to-Neumann mapping for the Helmholtz equation as in \cite{cakonikress}. To this end, we will assume that the boundary of the domain is of class $\mathscr{C}^{2,1}$. This gives that on the boundary $\partial D$, we have
\[w(x)=\mathrm{S}_{k\sqrt{n}}\varphi(x)\qquad\text{and}\qquad v(x)=\mathrm{S}_{k\sqrt{\tilde{n}}}\psi(x)\,,\]
where
\[\mathrm{S}_{k}\phi(x)=\int_{\partial D}\Phi_k(x,y)\phi(y)\,\mathrm{d}s\,,\qquad x\in \partial D\,.\]
Taking the normal derivative and using the jump conditions yields
\[\partial_\nu w(x)=\left(\frac{1}{2}\mathrm{I}+\mathrm{K}'_{k\sqrt{n}}\right)\varphi(x)\qquad\text{and}\qquad \partial_\nu v(x)=\left(\frac{1}{2}\mathrm{I}+\mathrm{K}'_{k\sqrt{\tilde{n}}}\right)\psi(x)\,,\]
where
\[\mathrm{K}'_{k}\phi(x)=\int_{\partial D}\partial_{\nu(x)}\Phi_k(x,y)\phi(y)\,\mathrm{d}s\,,\qquad x\in \partial D\,.\]
Hence, using the first boundary condition $w=v$, we obtain the following Dirichlet-to-Neumann mappings (assuming $k\sqrt{n}$ and $k\sqrt{\tilde{n}}$ are not eigenvalues of $-\Delta$ in $D$)
\[\partial_\nu w=\left(\frac{1}{2}\mathrm{I}+\mathrm{K}'_{k\sqrt{n}}\right)\mathrm{S}_{k\sqrt{n}}^{-1}w \qquad\text{and}\qquad \partial_\nu v=\left(\frac{1}{2}\mathrm{I}+\mathrm{K}'_{k\sqrt{\tilde{n}}}\right)\mathrm{S}_{k\sqrt{\tilde{n}}}^{-1}w\,\]
since the boundary of the domain is of class $\mathscr{C}^{2,1}$ (see, e.g., \cite{cakonikress}).
Using the boundary condition ${\partial_{\nu} w}-{\partial_\nu v}=\eta w$ yields
\[\left[\left(\frac{1}{2}\mathrm{I}+\mathrm{K}'_{k\sqrt{n}}\right)\mathrm{S}_{k\sqrt{n}}^{-1}-\left(\frac{1}{2}\mathrm{I}+\mathrm{K}'_{k\sqrt{\tilde{n}}}\right)\mathrm{S}_{k\sqrt{\tilde{n}}}^{-1}-\eta \mathrm{I}\right]w=0\,,\]
which can be written abstractly as
\begin{eqnarray}
M(k;n,\tilde{n},\eta)w=0\,.
\label{sys}
\end{eqnarray}
That means that one has to solve a non-linear eigenvalue problem in $k$ for given constants $n$, $\tilde{n}$, and $\eta$. The boundary integral operators arising in (\ref{sys}) are approximated
by a boundary element collocation method using quadratic interpolation (see \cite{kleefeldITP} for details). The resulting non-linear eigenvalue problem is solved by Beyn's algorithm \cite{beyn} using a circle centered at $(\mu,0)$ with radius 0.5 as the contour in the complex plane.
The contour integrals are approximated by 24 equidistant points on this circle. Supportive theoretical results for the approximation of the new boundary integral equation are subject to future research. They can be expected to follow along the same lines given in \cite{ETNA} for the approximation of the boundary integral equation for the solution of the exterior Neumann problem in combination with Beyn's method \cite{beyn} also used in \cite{HabilKleefeld}.
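For readers who wish to reproduce this step, the following Python sketch outlines the contour-integral (Beyn) method in the form used above: the two contour moments are approximated by the trapezoidal rule on the circle of radius $0.5$ centered at $\mu$, and the eigenvalues inside the contour are extracted from a small linearized problem. The routine \texttt{M} is a placeholder for a user-supplied collocation matrix of the operator in (\ref{sys}); the number of probing columns and the rank tolerance are assumptions, not values taken from our implementation.
\begin{verbatim}
import numpy as np

def beyn(M, mu, radius=0.5, N=24, l=8, tol=1e-8, seed=0):
    # Eigenvalues of the nonlinear problem M(k) w = 0 inside the circle
    # |k - mu| = radius, via Beyn's contour-integral method.
    # M : callable returning the (n x n) collocation matrix at wavenumber k.
    # l : number of probing columns (must exceed the number of eigenvalues).
    rng = np.random.default_rng(seed)
    n = M(mu).shape[0]
    V = rng.standard_normal((n, l))
    A0 = np.zeros((n, l), dtype=complex)      # zeroth contour moment
    A1 = np.zeros((n, l), dtype=complex)      # first contour moment
    for j in range(N):                        # trapezoidal rule on the circle
        phi = 2.0 * np.pi * j / N
        z = mu + radius * np.exp(1j * phi)
        Y = np.linalg.solve(M(z), V)          # M(z)^{-1} V
        w = radius * np.exp(1j * phi) / N     # quadrature weight for dz/(2 pi i)
        A0 += w * Y
        A1 += w * z * Y
    U, s, Wh = np.linalg.svd(A0, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))           # numerical rank
    U, s, Wh = U[:, :k], s[:k], Wh[:k, :]
    B = U.conj().T @ A1 @ Wh.conj().T / s     # small linearized problem
    return np.linalg.eigvals(B)               # eigenvalues inside the contour
\end{verbatim}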
\section{Computational Experiments }\label{numerics-section}
In this section, we provide extensive numerical experiments to test our implementation, to show the limiting behavior for small and large $\eta$, and to estimate the index of refraction from known interior transmission eigenvalues. In our experiments we implement Beyn's algorithm \cite{beyn} to find the eigenvalues in the circle centered at $(\mu,0)$ with radius 0.5 in the complex plane.
\subsection{Testing the boundary element collocation method}\label{part1}
First, we use $n=4$, $\tilde{n}=1$, and $\eta=0$ within (\ref{sys}) to compute the `classical' interior transmission eigenvalues for a domain $D$ being the unit circle. With $\mu=3.1$ and $40$ collocation points, we obtain the
interior transmission eigenvalues
$2.9026$, $2.9026$, $3.3842$, $3.4121$, and $3.4121$ which are in agreement with the ones given in \cite[Figure 6]{mfs-te}. All presented digits are correct. This can also be compared by computing the zeros of the determinant of
\[\left(
\begin{array}{cc}
J_m(k\sqrt{n}) & -J_m(k)\\
\sqrt{n}J_m'(k\sqrt{n}) & -J_m'(k)
\end{array}
\right) \quad \textrm{ for } \,\, m\in \mathbb{N} \cup \{0\}.\]
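As an illustration of this check, the determinant above can be evaluated with standard Bessel routines and its roots located by bracketing sign changes; the following Python sketch (the scan interval and number of azimuthal orders are assumptions) should recover the values near $2.90$, $3.38$, and $3.41$ reported above, up to multiplicities.
\begin{verbatim}
import numpy as np
from scipy.special import jv, jvp
from scipy.optimize import brentq

n = 4.0                                        # contrast on the unit circle

def det_m(k, m):
    # 2x2 determinant whose zeros are the classical interior
    # transmission eigenvalues for the unit circle with index n
    return (jv(m, k * np.sqrt(n)) * (-jvp(m, k, 1))
            + jv(m, k) * np.sqrt(n) * jvp(m, k * np.sqrt(n), 1))

ks = np.linspace(2.5, 3.6, 2000)               # scan interval (assumed)
for m in range(6):                             # azimuthal orders m = 0,...,5
    for a, b in zip(ks[:-1], ks[1:]):
        if det_m(a, m) * det_m(b, m) < 0:      # bracketed sign change
            print(m, round(brentq(det_m, a, b, args=(m,)), 4))
\end{verbatim}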
For the domain $D$ being an ellipse with semi-axis $1$ and $0.8$, we obtain interior transmission eigenvalues $3.1353$, $3.4852$, $3.5473$, and $3.8843$ using $\mu=3.4$ and $40$ collocation points. All digits are correct as one can compare with \cite[Figure 6]{mfs-te}.
Second, we now investigate the computation of the conductive interior transmission eigenvalues via our new boundary integral equation. To begin, we use $n=4$, $\tilde{n}=1$, and $\eta=1$ within (\ref{sys}) to compute the eigenvalues for the unit circle. We use $\mu=3.1$ and $160$ collocation points to obtain $2.7741$, $2.7741$, $3.2908$,
$3.3122$, and $3.3122$ with five digits of accuracy.
The results are in agreement with \cite[Table 8]{te-cbc2}. All reported digits are correct as one can check also by computing the zeros of the determinant of
\[\left(
\begin{array}{cc}
J_m(k\sqrt{n}) & -J_m(k)\\
k\sqrt{n}J_m'(k\sqrt{n})-\eta J_m(k\sqrt{n}) & -k J_m'(k)
\end{array}
\right) \quad \textrm{ for } \,\, m\in \mathbb{N} \cup \{0\}.\]
We obtain $3.0034$, $3.3565$, $3.7819$, and $3.4485$ for an ellipse with semi-axis $1$ and $0.8$ when using $\mu=3.3$ and $160$ collocation points.
Third, we compute zero-index conductive interior transmission eigenvalues; that is, we choose the parameters $n=4$, $\tilde{n}=0$, and $\eta=1$ in (\ref{sys}). We obtain $1.7840$, $2.4735$, $2.4735$, $3.1151$, $3.1151$, and $3.4363$ when using
$\mu=2.0$ and $\mu=3.0$ with
$160$ collocation points. The results are in agreement with the zeros of the determinant
\[\left(
\begin{array}{cc}
J_m(k\sqrt{n}) & -1\\
k\sqrt{n}J_m'(k\sqrt{n})-\eta J_m(k\sqrt{n}) & -m
\end{array}
\right) \quad \textrm{ for } \,\, m\in \mathbb{N} \cup \{0\}\]
(see \cite[p. 22]{two-eig-cbc} for details).
\subsection{Limiting behavior for small and large conductivity}
First, we see what happens if we let $\eta$ approach zero for the conductive interior transmission problem using a unit circle with $n=4$. From \cite{te-cbc2} we have that as $\eta$ approaches zero the conductive interior transmission eigenvalues converge to the `classical' interior transmission eigenvalues. As expected the convergence is linear as one can also see in Table \ref{table1} by checking the estimated order of convergence (EOC), which is given by
$$\mathrm{EOC}=\log\left(\varepsilon_\eta/\varepsilon_{\eta/2}\right)/\log(2) \quad \text{where} \quad \varepsilon_\eta = |k_1(\eta)-k_1(0)|.$$
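For completeness, the EOC values can be reproduced from the computed eigenvalues with a few lines of Python; the sketch below uses the four-digit entries of Table \ref{table1} together with the `classical' reference value $k_1(0)\approx 2.9026$, so the resulting EOCs are only approximate.
\begin{verbatim}
import numpy as np

k1_ref = 2.9026                                   # classical eigenvalue k_1(0)
eta = np.array([1/2, 1/4, 1/8, 1/16, 1/32])       # halving conductivities
k1  = np.array([2.8416, 2.8730, 2.8880, 2.8954, 2.8990])

eps = np.abs(k1 - k1_ref)                         # errors eps_eta
eoc = np.log(eps[:-1] / eps[1:]) / np.log(2)      # EOC between eta and eta/2
for e, r in zip(eta[1:], eoc):
    print(e, round(r, 2))                         # values close to 1: linear rate
\end{verbatim}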
\begin{table}[!ht]
\centering
\begin{tabular}{l|cc}
$\eta$ & $k_1(\eta)$ & EOC\\
\hline
1/2 & 2.8416 & \\
1/4 & 2.8730 & \\
1/8 & 2.8880 & 1.0621 \\
1/16 & 2.8954 & 1.0331 \\
1/32 & 2.8990 & 1.0170 \\
1/64 & 2.9008 & 1.0086 \\
1/128 & 2.9017 & 1.0043 \\
1/256 & 2.9022 & 1.0022 \\
1/512 & 2.9024 & 1.0011 \\
1/1024 & 2.9025 & 1.0005
\end{tabular}
\caption{\label{table1}Linear convergence of the first conductive interior transmission eigenvalue for $\eta\rightarrow 0$ to the first `classical' interior transmission eigenvalue for the unit circle with $n=4$.}
\end{table}
Next, we let $\eta$ approach $\infty$ in the conductive interior transmission problem. We again use a unit circle with $n=4$.
\begin{table}[!ht]
\centering
\begin{tabular}{r|cc|cc}
$\eta$ & $k_1(\eta)$ & EOC & $k_2(\eta)$ & EOC\\
\hline
80 & 2.5998 & &3.7756& \\
160 & 2.5838 & &3.8061& \\
320 & 2.5758 &0.9959&3.8194&1.2048\\
640 & 2.5718 &0.9983&3.8256&1.0776\\
1280 & 2.5698 &0.9992&3.8287&1.0345\\
2560 & 2.5688 &0.9996&3.8302&1.0163\\
5120 & 2.5683 &0.9998&3.8310&1.0079\\
10240 & 2.5681 &0.9999&3.8313&1.0039\\
20480 & 2.5679 &1.0000&3.8315&1.0019\\
40960 & 2.5679 &1.0014&3.8316&1.0024
\end{tabular}
\caption{\label{table2}Linear convergence of the first two conductive interior transmission eigenvalues for $\eta\rightarrow \infty$ towards interior Dirichlet eigenvalues for the unit circle with $n=4$.}
\end{table}
As we can see in Table \ref{table2}, we obtain a linear convergence towards $2.567\,811\,150\,920\,341$ which is the `modified' Dirichlet eigenvalue of $\Delta u+\tau u=0$ with $\tau=2\lambda$
(first zero of the Bessel function of the first kind of order two divided by two)
and towards $3.831\,705\,970\,207\,512$ which is the Dirichlet eigenvalue of $\Delta u+\lambda u=0$ (first zero of the Bessel function of the first kind of order one). Recall, that the convergence presented in Table \ref{table2} is predicted by Theorem \ref{conv-thm} but the convergence rate has yet to be justified.
Likewise, we can check the convergence as $\eta$ tends to infinity when considering the zero-index conductive interior transmission problem. We use the unit circle with $n=4$ and obtain a linear convergence towards the `modified' Dirichlet eigenvalues as shown in Table \ref{table3}, which is predicted by Theorem 3.6 of \cite{two-eig-cbc} but the convergence rate has yet to be justified.
\begin{table}[!ht]
\centering
\begin{tabular}{r|cc|cc|cc}
$\eta$ & $k_1(\eta)$ & EOC & $k_2(\eta)$ & EOC & $k_3(\eta)$ & EOC\\
\hline
80 & 1.9396 & &2.5993& &3.2287& \\
160 & 1.9278 & &2.5837& &3.2097& \\
320 & 1.9218 &0.9919&2.5758&0.9778&3.2000&0.9638\\
640 & 1.9188 &0.9963&2.5718&0.9894&3.1950&0.9825\\
1280 & 1.9173 &0.9982&2.5698&0.9948&3.1926&0.9914\\
2560 & 1.9166 &0.9991&2.5688&0.9974&3.1913&0.9957\\
5120 & 1.9162 &0.9996&2.5683&0.9987&3.1907&0.9979\\
10240 & 1.9160 &0.9998&2.5681&0.9994&3.1904&0.9989\\
20480 & 1.9159 &0.9999&2.5679&0.9997&3.1902&0.9995\\
40960 & 1.9159 &1.0014&2.5679&1.0013&3.1902&1.0011
\end{tabular}
\caption{\label{table3}Linear convergence of the first three zero-index conductive interior transmission eigenvalues for $\eta\rightarrow \infty$ towards the `modified' Dirichlet eigenvalues for the unit circle with $n=4$.}
\end{table}
The three `modified' Dirichlet eigenvalues are given as $1.915\,852\,985\,103\,756$, $2.567\,811\,150\,920\,341$, and $3.190\,080\,947\,961\,992$ which are the zeros of the Bessel function of the first kind of order 1, 2, and 3
divided by two, respectively. Note that we also tried different $n$, for example $n=3$. For $\eta=40960$ we obtain $2.2123$, $2.9651$, $3.6837$, $4.3812$, and $4.9964$ which are in agreement with the `modified' Dirichlet eigenvalues $2.212\,236\,473\,354\,803$, $2.965\,052\,918\,423\,964$, $3.683\,588\,188\,085\,106$, $4.381\,131\,547\,263\,831$, and $4.996\,232\,140\,012\,951$.
If we use an ellipse with semi-axis $1$ and $0.8$, we obtain a linear convergence as well. The values for $\eta=40960$ are $1.3602$ and $1.5706$ for $n=4$ and $n=3$, respectively. They agree with the `modified' Dirichlet eigenvalues which are given as $1.3601$ and $1.5705$.
Now, we focus on a scattering object with a piecewise constant index of refraction. We consider a circle with radius $R$ that contains an inner circle with radius $r$.
The inner circle has contrast $n_1$ and the remaining part of the circle has contrast $n_2$. In the sequel, we call this a double layer circle.
Following the approach of \cite[p. 198]{cosso} one needs to compute the zeros of the determinant
\[\left(
\begin{array}{cccc}
\scriptscriptstyle -J_m(kR) & \scriptscriptstyle H_m^{(1)}(kR\sqrt{n_2}) & \scriptscriptstyle H_m^{(2)}(kR\sqrt{n_2}) & \scriptscriptstyle 0\\
\scriptscriptstyle -kJ_m'(kR) & \scriptscriptstyle k\sqrt{n_2}H_m^{(1)'}(kR\sqrt{n_2})-\eta H_m^{(1)}(kR\sqrt{n_2})& \scriptscriptstyle k\sqrt{n_2}H_m^{(2)'}(kR\sqrt{n_2}) -\eta H_m^{(2)}(kR\sqrt{n_2})& \scriptscriptstyle 0\\
\scriptscriptstyle 0 & \scriptscriptstyle H_m^{(1)}(kr\sqrt{n_2}) & \scriptscriptstyle H_m^{(2)}(kr\sqrt{n_2}) & \scriptscriptstyle -J_m(kr\sqrt{n_1})\\
\scriptscriptstyle 0 & \scriptscriptstyle k\sqrt{n_2}H_m^{(1)'}(kr\sqrt{n_2}) & \scriptscriptstyle k\sqrt{n_2}H_m^{(2)'}(kr\sqrt{n_2}) & \scriptscriptstyle -k\sqrt{n_1}J_m'(kr\sqrt{n_1})
\end{array}
\right)\]
for $m\in \mathbb{N} \cup \{0\}$ to obtain the conductive interior transmission eigenvalues.
We use $R=1$, $r=1/2$, $n_1=1/2$, $n_2=4$ to model a sign changing contrast. With $m\in \mathbb{N} \cup \{0\}$ we obtain the results in Table \ref{table4}.
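A direct way to cross-check such reference values is to evaluate the $4\times 4$ determinant above on a grid in $k$ and flag the near-zero minima of its modulus; the following Python sketch does this for the stated parameters (the scan interval, the chosen conductivity, and the tolerance are assumptions).
\begin{verbatim}
import numpy as np
from scipy.special import jv, jvp, hankel1, hankel2, h1vp, h2vp

R, r, n1, n2, eta = 1.0, 0.5, 0.5, 4.0, 80.0       # double layer circle

def det_m(k, m):
    s1, s2 = np.sqrt(n1), np.sqrt(n2)
    A = np.array([
        [-jv(m, k*R), hankel1(m, k*R*s2), hankel2(m, k*R*s2), 0.0],
        [-k*jvp(m, k*R, 1),
         k*s2*h1vp(m, k*R*s2, 1) - eta*hankel1(m, k*R*s2),
         k*s2*h2vp(m, k*R*s2, 1) - eta*hankel2(m, k*R*s2), 0.0],
        [0.0, hankel1(m, k*r*s2), hankel2(m, k*r*s2), -jv(m, k*r*s1)],
        [0.0, k*s2*h1vp(m, k*r*s2, 1), k*s2*h2vp(m, k*r*s2, 1),
         -k*s1*jvp(m, k*r*s1, 1)]], dtype=complex)
    return np.linalg.det(A)

ks = np.linspace(2.0, 3.6, 4000)                   # scan interval (assumed)
for m in range(6):
    d = np.abs([det_m(k, m) for k in ks])
    # interior local minima of |det| flag candidate eigenvalues
    idx = np.where((d[1:-1] < d[:-2]) & (d[1:-1] < d[2:]))[0] + 1
    for i in idx:
        if d[i] < 1e-2 * np.median(d):             # crude tolerance (assumed)
            print(m, round(ks[i], 4))
\end{verbatim}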
\begin{table}[!ht]
\centering
\begin{tabular}{r|cc|cc|cc}
$\eta$ & $k_1(\eta)$ & EOC & $k_2(\eta)$ & EOC & $k_3(\eta)$ & EOC\\
\hline
80 & 2.3772 & &3.2053& &3.4197& \\
160 & 2.3904 & &3.1992& &3.4124& \\
320 & 2.3975 &0.9076&3.1975&1.8156&3.4094&1.3075\\
640 & 2.4011 &0.9547&3.1969&1.6568&3.4081&1.1942\\
1280 & 2.4030 &0.9776&3.1967&1.4819&3.4075&1.1111\\
2560 & 2.4039 &0.9888&3.1966&1.3165&3.4072&1.0598\\
5120 & 2.4044 &0.9944&3.1966&1.1881&3.4071&1.0311\\
10240 & 2.4046 &0.9972&3.1966&1.1040&3.4070&1.0159\\
20480 & 2.4047 &0.9986&3.1966&1.0549&3.4070&1.0080\\
40960 & 2.4048 &1.0007&3.1966&1.0296&3.4069&1.0054
\end{tabular}
\caption{\label{table4} Linear convergence of the first three conductive interior transmission eigenvalues for $\eta\rightarrow \infty$ towards either the Dirichlet eigenvalues or the `modified' Dirichlet eigenvalues for the double layer circle with $n_1=1/2$ and $n_2=4$.}
\end{table}
As one can see in Table \ref{table4}, the values converge for $\eta\rightarrow\infty$ towards either a Dirichlet eigenvalue or a `modified' Dirichlet eigenvalue. Similarly, we can calculate the zero-index conductive interior transmission eigenvalues for a double layer circle by computing the zeros of the determinant
\[\left(
\scriptscriptstyle
\begin{array}{cccc}
\scriptscriptstyle -R^{m} &\scriptscriptstyle H_m^{(1)}(kR\sqrt{n_2}) &\scriptscriptstyle H_m^{(2)}(kR\sqrt{n_2}) &\scriptscriptstyle 0\\
\scriptscriptstyle -mR^{m-1} &\scriptscriptstyle k\sqrt{n_2}H_m^{(1)'}(kR\sqrt{n_2})-\eta H_m^{(1)}(kR\sqrt{n_2})&\scriptscriptstyle k\sqrt{n_2}H_m^{(2)'}(kR\sqrt{n_2}) -\eta H_m^{(2)}(kR\sqrt{n_2})&\scriptscriptstyle 0\\
\scriptscriptstyle 0 &\scriptscriptstyle H_m^{(1)}(kr\sqrt{n_2}) &\scriptscriptstyle H_m^{(2)}(kr\sqrt{n_2}) &\scriptscriptstyle -J_m(kr\sqrt{n_1})\\
\scriptscriptstyle 0 &\scriptscriptstyle k\sqrt{n_2}H_m^{(1)'}(kr\sqrt{n_2}) &\scriptscriptstyle k\sqrt{n_2}H_m^{(2)'}(kr\sqrt{n_2}) &\scriptscriptstyle -k\sqrt{n_1}J_m'(kr\sqrt{n_1})
\end{array}
\right)\]
for $m\in \mathbb{N} \cup \{0\}$. Using the same parameters as before yields the results in Table \ref{table5} showing that the zero-index conductive interior transmission eigenvalues converge to the `modified' Dirichlet eigenvalues as $\eta$ tends to infinity.
\begin{table}[!ht]
\centering
\begin{tabular}{r|cc|cc|cc}
$\eta$ & $k_1(\eta)$ & EOC & $k_2(\eta)$ & EOC & $k_3(\eta)$ & EOC\\
\hline
80 & 3.1214 & &3.1301& &3.3478& \\
160 & 3.1225 & &3.1648& &3.3774& \\
320 & 3.1228 &1.5965&3.1825&0.9740&3.3930&0.9192\\
640 & 3.1229 &1.4352&3.1909&1.0675&3.4009&0.9925\\
1280 & 3.1230 &1.2811&3.1946&1.2060&3.4046&1.0850\\
2560 & 3.1230 &1.1645&3.1960&1.3974&3.4062&1.2368\\
5120 & 3.1230 &1.0899&3.1964&1.6099&3.4067&1.4643\\
10240 & 3.1230 &1.0472&3.1965&1.8080&3.4069&1.7639\\
20480 & 3.1230 &1.0242&3.1966&1.9958&3.4069&2.2117\\
40960 & 3.1230 &1.0137&3.1966&2.2537&3.4069&3.9809
\end{tabular}
\caption{\label{table5}Linear convergence of the first three zero-index conductive interior transmission eigenvalues for $\eta\rightarrow \infty$ towards the `modified' Dirichlet eigenvalues for the double layer circle with
$n_1=1/2$ and $n_2=4$.}
\end{table}
\subsection{Estimation of the index of refraction}
As we have seen in Table \ref{table1} the `classical' transmission eigenvalues are obtained when $\eta$ approaches zero in the conductive interior transmission problem. Here, we let $k(n ; \eta)$ denote the transmission eigenvalue for refractive index $n$ and conductivity $\eta$. From the convergence results as $\eta$ tends to zero, we have that the conductive transmission eigenvalues tend to the classical transmission eigenvalues. When $\eta$ tends to infinity, the zero-index conductive transmission eigenvalues tend to the `modified' Dirichlet eigenvalues. From this, we now show that the refractive index can be estimated provided $\eta$ is sufficiently small and unknown.
One can estimate the index of refraction by solving for $n_\mathrm{approx}$ in the non-linear equation $k_1(n;\eta)=k_1(n_\mathrm{approx})$ for given small $\eta$ where $k_1(n)$ denotes the first classical transmission eigenvalue.
For the unit circle with $n=4$ and $\eta=1/2$ and $\eta=1/10$, we obtain $n_\mathrm{approx}=3.897\,441\,361\,498\,941$
and $n_\mathrm{approx}=3.979\,992\,664\,293\,090$, respectively.
We have seen in Table \ref{table3} that the zero-index conductive transmission eigenvalues converge linearly towards the `modified' Dirichlet eigenvalues denoted $\tau(n)$ as $\eta$ tends to infinity. That means, we can estimate the index of refraction by solving the non-linear equation $k_1(n;\eta)=\tau_1(n_\mathrm{approx})$ for large $\eta$.
In our examples we take for example $\eta=100$, $\eta=200$, and $\eta=1000$.
Using $n=4$ and these $\eta$'s for the unit circle gives the approximated index of refraction $3.921\,606\,411\,761\,363$, $3.960\,400\,848\,027\,610$, and $3.992\,016\,007\,078\,786$, respectively.
Likewise using $n=3$ with the same $\eta$'s for the unit circle yields the approximated index of refraction $2.941\,204\,808\,821\,021$, $2.970\,300\,636\,020\,707$, and $2.994\,012\,005\,309\,089$.
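Since the values listed after Table \ref{table3} identify the first `modified' Dirichlet eigenvalue of the unit circle with the first positive zero of $J_1$ divided by $\sqrt{n}$, the non-linear equation $k_1(n;\eta)=\tau_1(n_\mathrm{approx})$ can be inverted explicitly in this case. The Python sketch below illustrates the inversion; the eigenvalue plugged in at the end is a placeholder, not one of the computed values above.
\begin{verbatim}
import numpy as np
from scipy.special import jn_zeros

def n_from_modified_dirichlet(k1_measured):
    # Invert k1 = j_{1,1} / sqrt(n_approx), with j_{1,1} the first
    # positive zero of the Bessel function J_1.
    j11 = jn_zeros(1, 1)[0]                    # 3.8317...
    return (j11 / k1_measured) ** 2

print(n_from_modified_dirichlet(1.94))         # placeholder eigenvalue
\end{verbatim}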
\section{Summary and Conclusions }\label{end-section}
In this paper, we have further studied the interior conductive transmission eigenvalue problem both analytically and numerically. The numerical experiments show that the new boundary integral equation developed here can be used to compute the eigenvalues of several interior transmission eigenvalue problems with a constant refractive index. Notice that theoretically the conductivity parameter can be non-constant in the proposed boundary integral equation. We have also investigated the inverse spectral problem numerically for recovering constant refractive indices from the given interior conductive transmission eigenvalue provided $\eta$ is small and unknown. Analytically we have established discreteness as well as the limiting behavior as $\eta \to \infty$ for the interior conductive transmission eigenvalue problem for a sign changing contrast. This analysis and numerical approach are both new and provide a deeper understanding of these eigenvalue problems. One unanswered question is whether the convergence rate as $\eta \to \infty$ for the eigenvalues and eigenfunctions is linear. Another is whether complex eigenvalues exist provided the refractive index and conductivity parameters are real-valued.
\end{document}
\begin{document}
\title{Tight, robust, and feasible quantum speed limits for open dynamics}
\author{Francesco Campaioli}
\email{[email protected]}
\affiliation{School of Physics and Astronomy, Monash University, Clayton, Victoria 3800, Australia}
\author{Felix A. Pollock}
\affiliation{School of Physics and Astronomy, Monash University, Clayton, Victoria 3800, Australia}
\author{Kavan Modi}
\affiliation{School of Physics and Astronomy, Monash University, Clayton, Victoria 3800, Australia}
\date{26 July 2019}
\begin{abstract}
Starting from a geometric perspective, we derive a quantum speed limit for arbitrary open quantum evolution, which could be Markovian or non-Markovian, providing a fundamental bound on the time taken for the most general quantum dynamics. Our methods rely on measuring angles and distances between (mixed) states represented as generalized Bloch vectors. We study the properties of our bound and present its form for closed and open evolution, with the latter in both Lindblad form and in terms of a memory kernel. Our speed limit is provably robust under composition and mixing, features that largely improve the effectiveness of quantum speed limits for open evolution of mixed states. We also demonstrate that our bound is easier to compute and measure than other quantum speed limits for open evolution, and that it is tighter than the previous bounds for almost all open processes. Finally, we discuss the usefulness of quantum speed limits and their impact in current research.
\end{abstract}
\maketitle
\makeatletter
\section{Introduction}
\emph{Quantum speed limits} (QSLs) set a lower bound on the time required for a quantum system to evolve between two given states~\cite{Mandelstam1945,Margolus1998,Deffner2013,Deffner2017}. Such bounds are typically applied to estimate the speed of computational gates~\cite{Lloyd2000,Giovannetti2003a}, the precision in quantum metrology~\cite{Alipour2014,Giovannetti2011,Chin2012,Demkowicz-Dobrzanski2012,Chenu2017}, the performance in quantum optimal control~\cite{Reich2012,Caneva2009,DelCampo2012,Hegerfeldt2013,Murphy2010,An2016,Deffner2017,Campbell2017,Funo2017a}, and the charging power
in quantum thermodynamics~\cite{Campaioli2017, Campaioli2018a}. Besides their practical relevance, speed limit bounds stand as a fundamental result for both classical and quantum systems~\cite{Okuyama2018, Shanahan2018}, providing an operational interpretation of the largely discussed time-energy uncertainty relations~\cite{Deffner2017}. For these reasons, they have received particular attention from the quantum information community in recent years~\cite{Kupferman2008, Uzdin2012,Santos2015,Santos2016, Goold2016, Uzdin2016,Mondal2016, Mondal2016b, Mirkin2016, Pires2016, Marvian2016, Friis2016, Epstein2017, Ektesabi2017, Russell2017, Garcia-Pintos2018, Berrada2018, Santos2018, Hu2018, Volkoff2018}.
The typical blueprint for constructing QSL bounds involves estimating the minimal evolution time $\tau$ as the ratio between some distance between states and the average \emph{speed} induced by the generator of the evolution~\cite{Pires2016, Deffner2017}. For example, when unitary evolution of pure states is considered, an achievable QSL is given by $\tau\geq d_{FS}/\overline{\Delta E}$, where we have set $\hbar \equiv 1$, $d_{FS}$ is the Fubini-Study distance between the initial and final state~\cite{Fubini1904, Study1905, Bengtsson2008}, and $\overline{\Delta E}$ is the time-averaged standard deviation of the Hamiltonian $H$, which plays the role of the average speed~\cite{Mandelstam1945, Levitin2009}. In more general cases, such as for unitary evolution of mixed states or open (Markovian or non-Markovian) dynamics, such as those in Refs.~\cite{Deffner2013,Deffner2013b, DelCampo2013, Sun2015, Pires2016, Mondal2016, Mondal2016b}, QSLs are generally loose, and an attainable bound is not known~\footnote{Except for the case of qubits~\cite{Campaioli2018}.}. Moreover, the tightest of these bounds are difficult to compute or measure, requiring the diagonalization of initial and final states.
In this Article we propose a geometrically motivated QSL for open quantum evolutions and demonstrate, analytically and numerically, its superiority over the tightest of known QSLs for almost all open processes. Namely, we show our bound's performance by discussing two aspects of QSLs: feasibility and tightness.
The feasibility of a bound is quantified in terms of the computational or experimental resources required to evaluate or measure the bound~\cite{Deffner2017, Campaioli2018}. Bounds that require the evaluation of complicated functions of the states or the generators of the evolution are less feasible, and thus less useful, than otherwise equally performing bounds that are easier to compute or experimentally measure. The distance term in many QSLs requires the square root of the initial and final states, thus the solution of the eigenvalue problem for the initial state $\rho$ and final state $\sigma$~\cite{Deffner2013, Deffner2013b, Pires2016, Mondal2016}. In contrast, the bounds that only involve the evaluation of the overlap $\mathrm{tr}[\rho\sigma]$~\cite{DelCampo2013, Sun2015, Campaioli2018}, including the one that we introduce here, are much easier to compute and measure~\cite{Keyl2001, Ekert2002, Mondal2016b}.
Aside from the distance, the other important feature of QSL bounds is the average speed term, discussed above, that depends on the orbit of the evolution~\cite{Bengtsson2008, Russell2014a, Russell2017, Mondal2016,Mondal2016b, Pires2016, Campaioli2018}. A common criticism of QSLs is that calculating these bounds becomes as hard as solving the dynamical problem, reducing their relevance to a mere curiosity. We overcome this limitation by providing an operational procedure to experimentally evaluate the speed term for any type of process, and go on to discuss the purpose of QSLs in this context.
The tightness of QSLs, which represents how precisely they bound the actual minimal time of evolution, becomes a problem as soon as we move away from the case of unitary evolution of pure states~\cite{Levitin2009}, which is, in practice, always an idealized description. We will show below that the available bounds for the general case of open evolution of mixed states are rather loose. Their performance gets worse as increasingly mixed states are considered -- which constitutes a major issue for the effectiveness of QSLs.
This looseness is often a consequence of the choice of the distance used to derive the bounds: Different distances result in different speed limits, and a suitable choice that reflects the features of the considered evolution is the key to performance~\cite{Bengtsson2008, Pires2016, Campaioli2018}. In this Article, we directly address this issue, deriving a bound that is provably robust under mixing, vastly improving the effectiveness of QSLs.
The present Article strongly complements the findings of Ref.~\cite{Pires2016}, where the authors used geometric arguments to obtain an infinite family of distances and their corresponding QSL bounds. While their result firmly and rigorously establishes the mathematical framework for a certain class of QSLs, it leaves open the problem of finding a distance that leads to a QSL that is tight and feasible. We do exactly this by uncovering a distance measure on quantum states, which is based on the geometry of the space of density operators, leading to a QSL that is both tight and feasible for almost all states and processes.
\section{QSL for geometric distance}
Usually, to generalize the QSL to the case when the initial and final states are not pure, the Fubini-Study distance, mentioned in the introduction, is replaced by the Bures distance~\cite{Wootters1981, Deffner2013}:
\begin{gather}
\label{eq:Bures}
B(\rho,\sigma):=\arccos\big(F(\rho,\sigma)\big),
\end{gather}
where $F(\rho,\sigma) := \mathrm{tr}\big[\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}\big]$ is the quantum fidelity. In Ref.~\cite{Campaioli2018} we showed that for unitary evolution of mixed quantum states, the corresponding QSL can be extremely loose. This looseness can be attributed to certain features of the Bures metric, which led us to conclude that a more apt metric for the space of mixed states is desirable.
To remedy this problem, we exploited the generalized Bloch representation~\cite{Byrd2003}, for which every mixed state $\rho$ is associated to a real vector $\bm{r}$, known as a generalized Bloch vector (GBV). By noticing that unitary dynamics of the system preserves the radius of such vectors $\bm{r}$ (which is directly related to the purity of $\rho$), we introduced a geometric distance between states, given by the angle $\Theta$ between their GBVs (discussed in detail below). The QSL derived from this distance is provably attainable for the case of qubits, and tighter than the traditional QSL for almost all states in the case of higher dimensions~\cite{Campaioli2018}. In the present Article, driven by this geometric consideration, we generalize the distance $\Theta$ to derive a QSL for arbitrary open quantum processes that outperforms the bounds given in Refs.~\cite{Deffner2013b, DelCampo2013, Sun2015, Mondal2016}.
We consider a $d$-dimensional system $S$, where $d=\dim\mathcal{H}_S$, coupled to its environment $E$ (with total Hilbert space $\mathcal{H} = \mathcal{H}_S\otimes\mathcal{H}_E$) and denote its physical state space of positive, unit trace density operators by $\mathcal{S}(\mathcal{H}_S)$.
A state $\rho\in\mathcal{S}(\mathcal{H}_S)$ of the system can be expressed as
\begin{gather}
\label{eq:generalized_bloch_vetor}
\rho=\frac{\mathbb{1}+c\;\bm{r}\cdot\bm{\Lambda}}{d},
\end{gather}
where $c=\sqrt{d(d-1)/2}$, given an operator basis $\{\Lambda_a\}$ satisfying $\mathrm{tr}[\Lambda_a\Lambda_b]=2\delta_{ab}$ and $\mathrm{tr}[\Lambda_a]=0$.
The generalized Bloch vector $\bm{r}$ is a vector in a $(d^2-1)$-dimensional real vector space, equipped with the standard Euclidean norm $\lVert \bm{r} \rVert_2=\sqrt{\sum_i r_i^2}$~\cite{Byrd2003}.
We would like to measure the distance between two states $\rho \leftrightarrow \bm{r}$ and $\sigma \leftrightarrow \bm{s}$ using the length of the shortest path through $\mathcal{S}(\mathcal{H}_S)$ that connects $\rho$ and $\sigma$. However, solving this geodesic problem is, in general, a hard task, since the state space for $d>2$ is a \emph{complicated} subset of the $(d^2-1)$-ball containing all (sub-)unit vectors $\bm{r}$~\cite{Byrd2003,Bengtsson2008}. Our approach will be to simplify this problem by lower bounding this distance by the length of the well-known geodesics of this ball. While this leads to a QSL that intrinsically underestimates the actual optimal evolution time, we will see that it provides a significant improvement over other bounds in the literature.
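To make the representation concrete, the following Python sketch (purely illustrative) constructs a generalized Gell-Mann basis $\{\Lambda_a\}$, extracts the GBV of a density matrix via $r_a = d\,\mathrm{tr}[\rho\Lambda_a]/(2c)$, which follows from Eq.~\eqref{eq:generalized_bloch_vetor}, and verifies numerically that the Euclidean distance between GBVs is proportional to the Hilbert-Schmidt distance, $\lVert\rho-\sigma\rVert=\sqrt{(d-1)/d}\,\lVert\bm{r}-\bm{s}\rVert_2$.
\begin{verbatim}
import numpy as np

def gell_mann_basis(d):
    # Traceless Hermitian basis with tr[L_a L_b] = 2 delta_ab
    basis = []
    for j in range(d):
        for k in range(j + 1, d):
            S = np.zeros((d, d), dtype=complex); S[j, k] = S[k, j] = 1
            A = np.zeros((d, d), dtype=complex); A[j, k] = -1j; A[k, j] = 1j
            basis += [S, A]
    for l in range(1, d):
        D = np.zeros((d, d), dtype=complex)
        D[np.diag_indices(l)] = 1; D[l, l] = -l
        basis.append(np.sqrt(2.0 / (l * (l + 1))) * D)
    return basis

def bloch_vector(rho):
    d = rho.shape[0]
    c = np.sqrt(d * (d - 1) / 2)
    return np.array([d * np.real(np.trace(rho @ L)) / (2 * c)
                     for L in gell_mann_basis(d)])

def random_state(d, rng):
    A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

rng = np.random.default_rng(1)
d = 3
rho, sigma = random_state(d, rng), random_state(d, rng)
r, s = bloch_vector(rho), bloch_vector(sigma)
lhs = np.linalg.norm(rho - sigma)                    # Hilbert-Schmidt norm
rhs = np.sqrt((d - 1) / d) * np.linalg.norm(r - s)   # Euclidean GBV distance
print(np.isclose(lhs, rhs))                          # True
\end{verbatim}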
\begin{figure}
\caption{\textbf{Distance on the space of states.}}
\label{fig:distances}
\end{figure}
For general evolutions, including \emph{non-unitary} open evolution, the length of the GBV is allowed to change. Here, the natural choice for the geodesic is with respect to the Euclidean distance, which is just the straight line between $\bm{r}$ and $\bm{s}$ (see Fig.~\ref{fig:distances} {\fontfamily{phv}\selectfont \textbf{b}}), whose length is given by
\begin{gather}
\label{eq:distance}
D(\rho,\sigma)=\lVert \bm{r}-\bm{s}\rVert_2.
\end{gather}
From this distance we derive our speed limit, following the same geometric argument used in Refs.~\cite{Pires2016,Campaioli2018}, and other QSL derivations.
By definition, the distance $D(\rho,\sigma)$ is smaller than or equal to the length $L[\gamma_{\rho}^{\sigma}]=\int_0^\tau D(\rho_{t+dt},\rho_t)$ of any path $\gamma_{\rho}^{\sigma}= [\rho_t]_{t=0}^\tau$, generated by some dynamical process, that connects $\rho=\rho_0$ and $\sigma=\rho_\tau$. We evaluate the infinitesimal distance $D(\rho_{t+dt},\rho_t)$ and rearrange to obtain $\tau \geq D(\rho,\sigma)/\overline{\lVert \dot{\bm{r}}_t\rVert}_2$,
where $\overline{f(t)}=\int_0^\tau dt\: f(t)/\tau$ is the average of $f(t)$ along the orbit parameterized by $t\in[0,\tau]$. Expressing $\lVert \bm{r}-\bm{s}\rVert_2$ and $\lVert \dot{\bm{r}}_t\rVert_2$ in terms of $\rho$ and $\sigma$,
we obtain the bound $T_{D}$,
\begin{align}
\label{eq:speed_limit_arbitrary}
& \tau \geq T_{D} = \frac{\lVert \rho - \sigma \rVert}{\overline{ \lVert \dot{\rho}_t\rVert}},
\end{align}
where the Hilbert-Schmidt norm $\lVert X \rVert = \sqrt{\mathrm{tr}[X^\dagger X]}$ of an operator $X$ arises as a consequence of equipping the space of GBVs with the standard Euclidean norm~\cite{Bengtsson2008}.
Despite its surprisingly simple form, reminiscent of kinematic equations, the bound in Eq.~\eqref{eq:speed_limit_arbitrary} originates from a precise geometric approach and encompasses the fundamental features of previous QSL bounds, including the orbit dependent term $\overline{ \lVert \dot{\rho}_t\rVert}$, which will be referred to as \emph{speed}, or \emph{strength of the generator}, that appears, under various guises, in the bounds of Refs.~\cite{Deffner2013, Deffner2013b, Taddei2013, DelCampo2013, Sun2015, Pires2016, Mondal2016, Mondal2016b} (see Appendix~\ref{a:derivation} for details about distance $D$ and the derivation of bound $T_D$).
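In practice, the bound $T_D$ can be evaluated directly from any discretized trajectory of states, whether obtained from a master-equation solver or from tomography. A minimal Python sketch, with the speed approximated by finite differences (the sampling step is an assumption):
\begin{verbatim}
import numpy as np

def qsl_TD(states, dt):
    # states: list of density matrices sampled every dt along the orbit
    num = np.linalg.norm(states[0] - states[-1])           # ||rho - sigma||
    speeds = [np.linalg.norm((b - a) / dt)                  # ||d rho / dt||
              for a, b in zip(states[:-1], states[1:])]
    return num / np.mean(speeds)                            # T_D <= tau
\end{verbatim}
The returned ratio lower bounds the elapsed time $\tau$ associated with the trajectory.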
Below we elaborate on several key properties of the geometric QSL given in the last equation. First, we show that this QSL is robust. Next, we give the exact form of the denominator for several important classes of dynamics. Finally, we discuss the superiority of this QSL by showing its feasibility and tightness.
\section{Robustness of geometric QSL}
\subsection{Robustness under composition}
\label{s:robustness_composition}
Our bound is robust in two important ways. First, it is well-known that the Hilbert-Schmidt norm is generally not contractive for CPTP dynamics~\cite{Perez-Garcia2006}. This means that the distance between $\rho$ and $\sigma$ changes drastically when an ancillary system $\alpha$ is introduced trivially, i.e., $\rho \to \rho \otimes \alpha$ and $\sigma \to \sigma \otimes \alpha$. Then we have $ \| \rho\otimes \alpha-\sigma \otimes\alpha \| = \| \rho-\sigma \| \cdot \|\alpha\|$, where the last term is the purity of the ancillary system. Simply introducing an ancilla that is not pure thus decreases the original distance by this factor~\cite{Piani2012}.
However, if the ancilla does not participate in the dynamics then we have $\| \partial_t (\rho_t \otimes \alpha) \| = \| \partial_t \rho_t \| \cdot \| \alpha \|$ and the denominator is affected by the same factor. Thus, we have $T_D (\rho \otimes \alpha ,\sigma \otimes \alpha) =T_D (\rho,\sigma)$. However, if the ancillary system were to be correlated with the system or be part of the dynamics, then the actual time and the QSL will indeed be affected.
\subsection{Robustness under mixing}
The usual QSL is tight for unitary dynamics of pure states~\cite{Deffner2013}. However, it becomes rather loose for mixed states. The reason for this, as we show in detail in Ref.~\cite{Campaioli2018}, is that the Bures distance, given in Eq.~\eqref{eq:Bures}, monotonically decreases under mixing. That is, $B (\rho,\sigma) \geq B (\rho',\sigma')$, where
\begin{gather}
\label{eq:depolarization}
\rho'=\mathcal{D}_\epsilon[\rho]:=\epsilon \rho + \frac{1-\epsilon}{d}\mathbb{1},
\end{gather}
with $\mathbb{1}$ being the identity operator and $\epsilonsilon\in[0,1]$. When $\epsilonsilon$ tends to 0, the Bures distance vanishes faster than the speed term, and so does the corresponding QSL. Now, note that the GBVs of $\rho$ ($\sigma$) and $\rho'$ ($\sigma'$) are $\bm{r}$ ($\bm{s}$) and $\epsilonsilon\bm{r}$ ($\epsilonsilon\bm{s}$) respectively. A unitary transformation that maps $\bm{r}$ to $\bm{s}$ will also map $\epsilonsilon\bm{r}$ to $\epsilonsilon\bm{s}$ in exactly the same time. That is, the value of $\epsilonsilon$ is inconsequential. Based on this observation, we proposed the angle between the GBVs as distance because it is independent of $\epsilonsilon$ and therefore robust under mixing, (see Fig.~\ref{fig:invariance} {\fontfamily{phv}\selectfont \textbf{a}}). This robustness is precisely the reason for the supremacy of the QSL provided in Ref.~\cite{Campaioli2018} over the usual QSL.
Even when open evolution is considered, it is of primary importance for a QSL to remain effective and tight for increasingly mixed initial and final states. We now show that the bound $T_D$, introduced in Eq.~\eqref{eq:speed_limit_arbitrary}, is robust under mixing not only for arbitrary unitary dynamics, but also for any open evolution with a well defined fixed point.
Our present bound is invariant when the initial state is mixed with the fixed state of the dynamics. Let the dynamics from $\rho$ to $\sigma$ be due to a completely positive linear map $\mathcal{C}$ with a fixed point $\phi$, i.e., $\mathcal{C}(\rho)=\sigma$ and $\mathcal{C}(\phi)=\phi$. If we change the initial state $\rho$ to $\rho'=\epsilon \rho + (1-\epsilon) \phi$, then the final state will be $\sigma'=\epsilon \sigma + (1-\epsilon) \phi$. This shrinks the numerator of Eq.~\eqref{eq:speed_limit_arbitrary} by $\epsilon \in [0,1]$, i.e., $\|\rho'-\sigma'\| = \epsilon \|\rho-\sigma\|$. However, the denominator also shrinks by the same amount. If the dynamics is time independent then we also have $\dot{\phi}=0$ and $\dot{\rho}$ will be mapped to $\dot{\rho'}=\epsilon \dot{\rho}$.
Hence $T_D(\rho',\sigma')=T_D(\rho,\sigma)$.
A similar, but more elaborate, argument can also be carried out for time dependent dynamics, but will be omitted from the present manuscript. The above result contains the previous case of unitary dynamics, and all unital dynamics, as they preserve the maximally mixed state. In this case the condition for robustness under mixing simply becomes a condition on the contraction factor for the length of the GBV $\bm{r}'_t=\epsilon \bm{r}_t$, as expressed in Fig.~\ref{fig:invariance} {\fontfamily{phv}\selectfont \textbf{b}}.
In the next section we study the form of the bound, with particular attention to the speed, for the fundamental types of quantum evolution.
\begin{figure}
\caption{\textbf{Robustness of $T_D$ under pure depolarization.}}
\label{fig:invariance}
\end{figure}
\section{Form of the speed}
\label{s:forms}
\subsection{Unitary evolution}
When \emph{unitary evolution} is considered, the denominator of Eq.~\eqref{eq:speed_limit_arbitrary} is a simple function of the time-dependent Hamiltonian $H_t$
\begin{gather}
\label{eq:unitary_denominator}
\overline{\lVert \dot{\rho}_t \rVert} = \overline{\sqrt{2\;\mathrm{tr}[H_t^2\rho_t^2-(H_t\rho_t)^2]}}.
\end{gather}
This term is proportional (up to a constant of motion) to the term in the denominator of the QSL derived in Ref.~\cite{Campaioli2018}~\footnote{For pure states, this quantity reduces to the time-averaged standard deviation of $H_t$, $\Delta E = \overline {\sqrt{\mathrm{tr}[H_t^2 \rho_t] - |\mathrm{tr}[H_t\rho_t]|^2}}$.}. However, the numerator of the QSL in Eq.~\eqref{eq:speed_limit_arbitrary} and the QSL in Ref.~\cite{Campaioli2018} are different and the latter is always tighter. This is because, for this special case, the length of the GBV $\bm{r}$ must be preserved along the evolution~\cite{Campaioli2018}, and the geodesics are arcs of great circles that connect $\bm{r}$ and $\bm{s}$ (see Fig.~\ref{fig:distances}{ \fontfamily{phv} \selectfont \textbf{a}}). Accordingly, as we showed in Ref.~\cite{Campaioli2018}, the distance becomes $D_U(\rho,\sigma)=\Theta \lVert \bm{r}\rVert_2 = \Theta \lVert \bm{s}\rVert_2$, where
\begin{gather}
\label{eq:generalized_bloch_angle}
\Theta(\rho,\sigma) = \arccos(\hat{\bm{r}}\cdot\hat{\bm{s}}),
\end{gather}
is the generalized Bloch angle, with $\hat{\bm{r}}$, $\hat{\bm{s}}$ being the unit vectors associated to $\bm{r}$ and $\bm{s}$, respectively~\footnote{The distance defined in Ref.~\cite{Campaioli2018} differs by a constant factor $\lVert \bm{r}\rVert_2=\lVert \bm{s}\rVert_2$, which does not affect the derived QSL in the unitary case.}.
The above observation should not be surprising. The distance in the last equation corresponds to the arc-length, which is always greater than the distance in Eq.~\eqref{eq:speed_limit_arbitrary} measuring the length along the straight line. If we are promised that the evolution is unitary, then we are free to work with the tighter QSL from Ref.~\cite{Campaioli2018}. However, if that information is not available, we must be conservative and work with the QSL in Eq.~\eqref{eq:speed_limit_arbitrary}.
We now consider the open evolution case, starting with Lindblad dynamics, before proceeding with more general non-Markovian evolutions.
\subsection{Lindblad dynamics}
In the case of semigroup dynamics, $ \overline{\lVert \dot{\rho}_t \rVert} $ becomes a function of the Lindblad operators~\cite{DelCampo2013}. As this function is generally complicated, we present its form for some particular types of Lindblad dynamics, for which it substantially simplifies. Let us consider a general form of the Lindblad master equation given by $\dot{\rho}_t=-i[H,\rho_t]+\sum_k\gamma_k\big(L_k\rho L_k^\dagger -\frac{1}{2}\{L_k^\dagger L_k,\rho_t\}\big)$, where typically the Lindblad operators are chosen to be orthonormal and traceless, i.e., $\mathrm{tr}[L_k L_l]=\delta_{kl}$, and $\mathrm{tr}[L_k]=0$. If the unitary part of the dynamics is irrelevant with respect to the dissipator, i.e., when $[H,\rho_t]$ is negligible when compared to the other terms, we obtain
\begin{align}
\label{eq:lindblad_depolarization_denominator}
\overline{\lVert \dot{\rho}_t \rVert} \leq 2 \sum_k \gamma_k^2 \overline{\lVert L_k \rVert^2},
\end{align}
where the inequality holds since $\lVert\rho_t\rVert \lVert\dot{\rho}_t\rVert \leq 2\sum_k \gamma_k^2 \lVert L_k\rVert^2 \lVert\rho_t\rVert ^2$~\cite{Uzdin2016}, and $\lVert \rho_t\rVert \leq 1$. We can readily apply this result to three important cases: pure dephasing dynamics, pure depolarization dynamics, and speed of purity change.
\textbf{Pure dephasing dynamics. ---} This type of dynamics models the idealized evolution of an open quantum system whose coherence decays over time due to the interaction with the environment. Under this kind of dynamics, a quantum system that evolves for a sufficiently long time is expected to lose its quantum mechanical features and exhibit a classical behavior.
Here, for the sake of clarity, we consider the case of \emph{pure dephasing of a single qubit}, described by the Lindblad equation $\dot{\rho}_t = \gamma(\sigma_z\rho_t\sigma_z -\rho_t)$, as done in Ref.~\cite{DelCampo2013}. The instantaneous speed reads $\lVert \dot{\rho}_t\rVert =\sqrt{2} \;\gamma \sqrt{r_1^2(t)+r_2^2(t)}$, where $\bm{r}_t = (r_1(t),r_2(t),r_3(t))$ is the Bloch vector associated to $\rho_t$. In this case the time-averaged speed can be bounded as
\begin{gather}
\label{eq:dephasing_denominator}
\overline{\lVert \dot{\rho}_t \rVert} \leq \sqrt{2}\gamma.
\end{gather}
Although considering the case of a single qubit might sound simplistic, the same description can be used to cover relevant high-dimensional systems that effectively behave like qubits~\cite{Ilichev2003,Zueco2009}.
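As a consistency check of this example, the Bloch-vector solution $r_{1,2}(t)=r_{1,2}(0)\,e^{-2\gamma t}$, $r_3(t)=r_3(0)$ allows $T_D$ to be evaluated in closed form. The Python sketch below (the parameters are hypothetical) shows that $T_D$ coincides with the actual evolution time $\tau$ for this orbit, since the Bloch vector moves monotonically along a straight line, whereas replacing the speed by the cruder estimate $\sqrt{2}\gamma$ of Eq.~\eqref{eq:dephasing_denominator} gives a looser value.
\begin{verbatim}
import numpy as np

gamma, tau = 1.0, 2.0                          # rate and evolution time (assumed)
r0 = np.array([0.6, 0.5, 0.3])                 # initial Bloch vector (assumed)
rperp = np.hypot(r0[0], r0[1])                 # transverse component
E = np.exp(-2 * gamma * tau)

# Hilbert-Schmidt distance: ||rho_0 - rho_tau|| = ||r_0 - r_tau||_2 / sqrt(2)
num = rperp * (1 - E) / np.sqrt(2)

# time average of ||d rho_t/dt|| = sqrt(2) * gamma * rperp * exp(-2 gamma t)
avg_speed = np.sqrt(2) * gamma * rperp * (1 - E) / (2 * gamma * tau)

print(num / avg_speed)                         # equals tau: bound is tight here
print(num / (np.sqrt(2) * gamma))              # looser value from sqrt(2)*gamma
\end{verbatim}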
\textbf{Pure depolarizing dynamics. ---} Another interesting observation is that our bound $T_D$ is geometrically tight when purely depolarizing dynamics is considered, i.e., when $\rho_t= \mathcal{D}_{\epsilon(t)}[\rho_0] = \epsilon(t)\rho_0+\frac{1-\epsilon(t)}{d}\mathbb{1}$, which serves as an idealized model of noise for the evolution of an open quantum system that monotonically deteriorates towards the state of maximal entropy, i.e., the maximally mixed state. Geometrically, it corresponds to the re-scaling of the generalized Bloch vector $\bm{r}_t = \epsilon(t)\; \bm{r}_0$, where $\epsilon(0)=1$. Tightness is guaranteed by the fact that each vector $\bm{r}_t$ obtained in this way represents a state, along with the fact that the orbit of such evolution is given by the straight line that connects $\bm{r}_0$ to $\bm{r}_\tau$, whose length is exactly given by $D(\rho_0,\rho_\tau)$. In this case our bound reads
\begin{gather}
T_D(\rho_0,\rho_\tau)=\frac{1-\epsilon(\tau)}{\overline{|\dot{\epsilon}(t)|}}.
\end{gather}
If we restrict ourselves to the case of strictly monotonic contraction (expansion) of the GBV, the denominator becomes $(1-\epsilon(\tau))/\tau$, which further supports our argument for the tightness of our bound. In this case, it simply returns the condition for optimal evolution $T_D(\rho_0,\rho_\tau)=\tau$, i.e., the evolution time $\tau$ coincides with the bound, and thus with the minimal time.
\textbf{Speed of purity change. ---} Since a contraction of the GBV corresponds to a decrease of the purity of the initial state $\rho_0$, Eq.~\eqref{eq:speed_limit_arbitrary} provides a QSL for the variation in the purity $\Delta \mathcal{P}$, which is saturated when obtained by means of purely depolarizing dynamics with strictly monotonic contraction. In particular, $\bm{r}\to\epsilon\bm{r}$ implies a variation of the purity $\mathrm{tr}[\rho_0^2]\to \epsilon^2\mathrm{tr}[\rho_0^2]+(1-\epsilon^2)/d$, which thus depends also on the dimension $d$ of the system. Similar QSLs have been derived by the authors of Ref.~\cite{Rodriguez-Rosario2011}, who express a bound on the instantaneous variation of the purity in terms of the strength of the interaction Hamiltonian and the properties of the total system-environment density operator, as well as by the authors of Ref.~\cite{Uzdin2016}, who provide a bound on the variation of the purity $\mathcal{P}[\rho_0]/\mathcal{P}[\rho_\tau]$ as a function of the non-unitary part of the evolution, both in Hilbert and Liouville space.
\subsection{Memory kernel master equation}
For the most general non-Markovian dynamics, the denominator of bound~\eqref{eq:speed_limit_arbitrary} can be written in terms of a convolution with a memory kernel~\cite{Breuer2002}, e.g., in the form of the Nakajima-Zwanzig equation, $\dot{\rho}_t = \mathcal{L}_t\rho_t + \int_{t_0}^t ds\mathcal{K}_{t,s}\rho_s +\mathcal{J}_{t,t_0}$, where $\mathcal{L}_t$ is a time-local generator like that of the Lindblad master equation, the memory kernel $\mathcal{K}_{t,s}$ accounts for the effect of memory, and $\mathcal{J}_{t, t_0}$ accounts for initial correlations between system and environment~\cite{Pollock2017}. The denominator of bound~\eqref{eq:speed_limit_arbitrary} can be simplified using the triangle inequality $\lVert A + B + C\rVert \leq \lVert A\rVert +\lVert B\rVert +\lVert C\rVert $, at the cost of its tightness. Similarly, the memory kernel can be divided up into a finite sum of terms whenever it is possible to resort to a temporal discretization, $\lVert\int_{t_0}^t ds\mathcal{K}_{t,s} \rho_s \rVert \sim\delta t\sum_k \lVert \mathcal{K}_{t_{k},t_{k-1}} \rho_{t_{k-1}} \rVert $, again, at the cost of reducing the tightness of the bound.
\textbf{Underlying evolution. ---} Alternatively, the orbit dependent term can always be related to an underlying unitary evolution with a wider environment: $\dot{\rho}_t = -i \ \mathrm{tr}_E [H,\Pi_t]$, where $H$ and $\Pi_t$ are the Hamiltonian and the state of the joint system-environment, respectively. We can further break down the total Hamiltonian into $H = H_S+H_{\textrm{int}} +H_E $, where $H_S$ ($H_E$) is the Hamiltonian of the system (environment) and $H_{\textrm{int}}$ describes the interactions between the two. In this case the denominator of bound~\eqref{eq:speed_limit_arbitrary} reads
\begin{align}
\label{eq:arbitrary_denominator}
\overline{\lVert \dot{\rho}_t \rVert} &= \overline{\lVert -i[H_S,\rho_t]-i\mathrm{tr}_E\{[H_{\textrm{int}}, \Pi_t]\}\rVert},
\end{align}
since $\mathrm{tr}_E\{[H_E, \Pi_t]\} = 0$. A less tight speed limit can be obtained by splitting the right hand side of Eq.~\eqref{eq:arbitrary_denominator}, using the triangle inequality and the linearity of the time average, to obtain $\overline{\lVert \dot{\rho}_t \rVert} \leq \overline{\lVert -i[H_S,\rho_t]\rVert} + \overline{\lVert-i\mathrm{tr}_E\{[H_{\textrm{int}}, \Pi_t]\}\rVert}$, in order to isolate the contribution of $H_{\textrm{int}}$ from that of $H_S$, when convenient.
Additionally, by considering the larger Hilbert space of system and environment combined, it is possible to appreciate the difference between the traditional QSL, $T_B(\rho,\sigma) = B(\rho,\sigma) / \overline{\Delta E}$, expressed in terms of the Bures angle $B(\rho,\sigma)$ (see Eq.~\eqref{eq:Bures}), and the bound $T_D$ of Eq.~\eqref{eq:speed_limit_arbitrary}. The Bures distance $B(\rho,\sigma)$ corresponds to the minimal Fubini-Study distance between purifications of $\rho$ and $\sigma$ in a larger Hilbert space \cite{Bengtsson2008}, here denoted by $\ketbra{\psi}{\psi}$ and $\ketbra{\varphi}{\varphi}$, respectively. Such purified states must be entangled states of system and environment when $\rho$ and $\sigma$ are mixed. Moreover, unlike in Eq.~\eqref{eq:arbitrary_denominator}, these states may have nothing to do with the actual system-environment evolution. In general, in order to saturate the traditional QSL, one must have access to those (possibly fictional) entangled states, and be able to perform highly non-trivial operations over both system and environment, such as $\ketbra{\psi}{\varphi}+\ketbra{\varphi}{\psi}$, which can contain terms with high order of interaction \cite{Campaioli2017, Campaioli2018}. Since in practice one has little, if any, control over the environmental degrees of freedom, and nearly no access to the entangled state of the system and environment combined, the traditional QSL rapidly loses its efficacy.
In contrast, bound $T_D$, introduced in Eq.~\eqref{eq:speed_limit_arbitrary}, provides a conservative estimate of the minimal evolution time between two states $\rho$ and $\sigma$, under the assumption of no access to their purification. The speed of the evolution is assessed by observing only the local part (the system) of a global evolution (the underlying unitary evolution of system and environment), as expressed by Eq.~\eqref{eq:arbitrary_denominator}, while still allowing for optimal driving of the purifications of $\rho$ and $\sigma$.
In addition to the ability of QSLs to represent an achievable bound for the minimal evolution time, their usefulness also depends on how easily they can be calculated and measured. We discuss this aspect in the next section, comparing the features of our bound $T_D$ to those of other QSLs.
\section{Feasibility}
As discussed in the introduction, the usefulness of a QSL bound is directly linked to the feasibility of its evaluation, whether it be computational or experimental. There are two types of difficulties that one might encounter in the evaluation of a QSL. First, computing the distance, which in our case is given by the orbit-independent term in the numerator of Eq.~\eqref{eq:speed_limit_arbitrary}, and second, evaluating the speed, which in our case is given by the orbit-dependent term in the denominator of Eq.~\eqref{eq:speed_limit_arbitrary}. While the latter is usually related to some norm of $\dot{\rho}_t$, the former changes remarkably from bound to bound. We address the distance first, before proceeding to a discussion of the speed.
\textbf{The distance. ---} Among all the QSL bounds known so far one can make a clear-cut distinction between the types of distances that have been used: either they require evaluating the overlap $\mathrm{tr}[\rho\sigma]$ between the initial and the final states~\cite{Sun2015, DelCampo2013, Campaioli2018}, or they require the calculation of $\sqrt{\rho}$ and $\sqrt{\sigma}$ (or similar functions)~\cite{Deffner2013b, Mondal2016, Pires2016}. The latter is much more complicated than the former, as it requires computing the eigenvalues and eigenvectors of $\rho$ and $\sigma$, whereas the former does not require diagonalizing the density matrices. Moreover, the overlap between two density operators ($\mathrm{tr}[\rho \sigma]$) is easily measured experimentally using a controlled-\textsc{SWAP} gate and a measurement on an ancillary system~\cite{Ekert2002}, independently of the dimension of the system.
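As a minimal numerical illustration of this point (a sketch, assuming NumPy; the random states below are generated only for the example), the overlap can be evaluated without any diagonalization, and it also fixes the swap-test statistics of Ref.~\cite{Ekert2002} through $p_0=(1+\mathrm{tr}[\rho\sigma])/2$ for the ancilla outcome:
\begin{verbatim}
import numpy as np

def random_density_matrix(d, rng):
    # Ginibre-like construction: rho = G G^dagger / tr[G G^dagger]
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(0)
d = 4
rho, sigma = random_density_matrix(d, rng), random_density_matrix(d, rng)

overlap = np.trace(rho @ sigma).real      # tr[rho sigma], no diagonalization needed
p0 = 0.5 * (1.0 + overlap)                # swap-test probability of the '0' ancilla outcome
hs_dist = np.sqrt(np.trace((rho - sigma) @ (rho - sigma)).real)  # ||rho - sigma||
print(overlap, p0, hs_dist)
\end{verbatim}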
The same approach can be used to estimate the fidelity between $\rho$ and $\sigma$, i.e., $\mathrm{tr}[\sqrt{\sqrt{\rho}\:\sigma\sqrt{\rho}}]$, but the number of interference experiments required grows with the dimension of the system in order to reach a good approximation. Conveniently, our bound $T_D$ simply depends on the overlap $\mathrm{tr}[\rho\sigma]$: This feature makes $T_D$ the most favourable choice, even in the cases where its tightness is comparable to that of other QSLs.
\textbf{The speed. ---}
We now move on to the orbit-dependent term $\overline{\lVert \dot{\rho}_t\rVert}$, i.e., the denominator of Eq.~\eqref{eq:speed_limit_arbitrary}, which appears in different forms in virtually every QSL bound. This term can be interpreted as the \textit{speed} of the evolution~\footnote{Note that $\dot{\rho}_t$ is proportional to the tangent vector $\dot{\bm{r}}_t$, which can be regarded as the velocity. Accordingly, the norm of the latter is the speed, and it is proportional, up to a constant factor, to the HS norm of $\dot{\rho}_t$.}, and it can be hard to compute, as it might require the knowledge of the solution $\rho_t$ to the dynamical problem $\dot{\rho}_t=L[\rho_t]$. For this reason, one might criticize QSLs as being impractical, or ineffective, if too hard to compute. Surely, when QSLs are easy to compute, they can be used to quickly estimate the evolution time $\tau$, required by some specific dynamics $\dot{\rho}_t=L[\rho_t]$ to evolve between $\rho$ and $\sigma$; however, their main purpose is rather to answer the question, \emph{can we evolve faster?} The evaluation of a QSL bound for the initial and final states $\rho$ and $\sigma$, along the orbit described by $\rho_t$, immediately tells us if we could evolve faster along another orbit that has the same speed, or confirms that we are already doing the best we can.
Besides, it is not always necessary to solve the dynamics of the system in order to evaluate the speed, which can be constant along the orbit. For example, the standard deviation of any time-independent Hamiltonian $H$ is a constant of motion, and can be directly obtained from the initial state of the system and the Hamiltonian $H$, making the speed extremely easy to compute.
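To make this concrete, a short check using the unitary invariance of the Hilbert-Schmidt norm: for unitary dynamics generated by a time-independent $H$, $\rho_t = U_t\rho_0U_t^\dagger$ with $U_t=e^{-iHt}$, so that
\begin{align}
\lVert \dot{\rho}_t \rVert = \lVert -i[H,\rho_t]\rVert = \lVert U_t\left(-i[H,\rho_0]\right)U_t^\dagger\rVert = \lVert [H,\rho_0]\rVert,
\end{align}
which is manifestly constant along the orbit and, for pure initial states, equals $\sqrt{2}\,\Delta E$.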
In the more general case of an actually orbit-dependent speed, it is often possible to numerically and experimentally estimate $\overline{\lVert \dot{\rho}_t\rVert}$ using the following approach: First, we can approximate $\dot{\rho}_t$ with the finite-time increment, $\dot{\rho}_t \sim (\rho_{t_2}-\rho_{t_1})/|t_2-t_1|$, where $t_{1,2} = t \pm \epsilon/2$, for small $\epsilon$.
We then proceed with the approximation
\begin{gather}
\label{eq:finite_time}
\mathrm{tr}[\dot{\rho}_t^2] \sim \frac{\mathrm{tr}[\rho_{t_2}^2]+\mathrm{tr}[\rho_{t_1}^2] - 2\mathrm{tr}[\rho_{t_2}\rho_{t_1}]}{|t_2-t_1|^2}.
\end{gather}
Each term in the numerator of the right-hand side of Eq.~\eqref{eq:finite_time} can be evaluated with a controlled-\textsc{SWAP} circuit, as one would do for $\mathrm{tr}[\rho\sigma]$, as described above and in Ref.~\cite{Ekert2002}.
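A minimal numerical sketch of this estimator (assuming NumPy and SciPy; the toy unitary evolution and parameter values below are illustrative assumptions, used only to compare the finite-time estimate with the exact speed):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
d = 3
h = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (h + h.conj().T) / 2                       # toy Hamiltonian (illustration only)
g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho0 = g @ g.conj().T
rho0 /= np.trace(rho0).real

def evolve(rho, t):
    U = expm(-1j * H * t)
    return U @ rho @ U.conj().T

t, eps = 0.7, 1e-3
r1, r2 = evolve(rho0, t - eps / 2), evolve(rho0, t + eps / 2)

# finite-time estimate of tr[rho_dot^2] from three overlaps, Eq. (finite_time)
speed2_est = (np.trace(r2 @ r2) + np.trace(r1 @ r1)
              - 2 * np.trace(r2 @ r1)).real / eps**2

# exact value for this toy unitary evolution, tr[(-i[H, rho_t])^2]
rho_t = evolve(rho0, t)
drho = -1j * (H @ rho_t - rho_t @ H)
speed2_exact = np.trace(drho @ drho).real
print(np.sqrt(speed2_est), np.sqrt(speed2_exact))
\end{verbatim}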
In this sense, the Euclidean metric considered here has an advantage over those featuring $\sqrt{\rho_t}$, such as those based on quantum fidelity and affinity, since in general it requires fewer measurements for each instantaneous sample of the speed of the evolution.
Nevertheless, obtaining a precise estimate of the average speed of the evolution is generally hard, requiring a number of samples that strongly depends on the distribution of the velocities of the considered process, regardless of the chosen metric. When such an estimate is needed, it is thus essential to reduce the number of measurements required to obtain each instantaneous sample of the speed of the evolution.
In the next section, we will show that, in addition to being more feasible, our bound also outperforms existing speed limits for the majority of processes.
\section{Tightness}
\label{s:tightness}
As stated in the introduction, one of our main interests is the performance of our bound $T_D$, in particular its tightness relative to other proposed QSLs. To this end, we must ensure that different bounds are fairly compared: Since QSL bounds depend on the orbit, they can only be compared with each other when evaluated along a chosen evolution, given fixed initial and final states. If their orbit-dependent terms are identical for any given evolution, such a comparison reduces to the evaluation of their orbit-independent terms. We compare our bound to the most significant bounds appearing in the literature~\cite{Sun2015, DelCampo2013, Deffner2013b, Pires2016,Mondal2016} which either depend on the overlap $\mathrm{tr}[\rho\sigma]$ or require the evaluation of quantum fidelity $F(\rho,\sigma) = \mathrm{tr}[\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}]$~\cite{Uhlmann1992b}, affinity $A(\rho,\sigma)=\mathrm{tr}[\sqrt{\rho}\sqrt{\sigma}]$~\cite{Luo2004}, or Fisher information~\cite{Facchi2010}.
We begin by considering the bound of Pires et al.~\cite{Pires2016}. In fact, Ref.~\cite{Pires2016} gives an infinite family of bounds that can be adjusted to fit the particular type of evolution that one might consider. However, as we pointed out in Ref.~\cite{Campaioli2018}, this freedom of choice has a drawback: The task of finding the right distance that induces a tight bound for the desired evolution is a difficult one. In addition, these bounds require the calculation of quantum fidelity or quantum Fisher information, which are almost always harder to evaluate and measure in comparison with the Hilbert-Schmidt norm (as discussed above). For these reasons, we exclude the family of bounds of Ref.~\cite{Pires2016} from our subsequent discussion and proceed with the more practically feasible ones.
As different bounds can be meaningfully compared only when evaluated along the same orbit, one might be led to assume that the hierarchy between the bounds depends on the process in question. However, the orbit-dependent term that appears in the denominator of bounds from Refs.~\cite{DelCampo2013,Deffner2013b,Sun2015} is always given by $\overline{\lVert \dot{\rho}_t\rVert}$~\footnote{In particular, we selected the Hilbert-Schmidt norm for analytical comparison, while we have evaluated the operator and trace norms numerically, if required by the considered QSL.} (i.e., the \emph{strength} of the generator), or can be directly related to it, up to some orbit-independent factors. This fact allows us to reduce the hierarchy of the bounds to that of the distance terms that depend only on the initial and final states, regardless of the chosen process and orbit. When this direct comparison is not possible, such as for the case of the bound in Ref.~\cite{Mondal2016}, we need to resort to numerical comparison.
The orbit-independent term of our bound can be directly compared with those of Sun et al.~\cite{Sun2015} and Del Campo et al.~\cite{DelCampo2013}, which depend on the overlap $\mathrm{tr}[\rho\sigma]$. In order to analytically compare our bound to that of Deffner et al.~\cite{Deffner2013b}, we over-estimate the orbit-independent term of the latter by replacing the fidelity with its lower-bound sub-fidelity, introduced in~\cite{Miszczak2008}, which depends on the overlap $\mathrm{tr}[\rho\sigma]$, and on the additional quantity $\mathrm{tr}[(\rho\sigma)^2]$. For brevity, we will henceforth refer to previously introduced bounds by the corresponding first author's name.
As a result we find that, independently of the chosen process (i.e., for every choice of the generator $\dot{\rho}_t$), the bound $T_D$ expressed in Eq.~\eqref{eq:speed_limit_arbitrary} is tighter (i.e., greater) than Sun's, Del Campo's, and Deffner's for every (allowed) choice of $\rho$ and $\sigma$
\begin{align}
\label{eq:sun_delcampo}
& T_D \geq \max\big\{T_{\textrm{Sun}},T_{\textrm{Del Campo}}\big\}, \;\; \forall \rho,\sigma \; \in \mathcal{S}(\mathcal{H}_S), \\
\label{eq:deffner_beat}
& T_D \geq T_{\textrm{Deffner}} \;\; \forall \rho,\sigma, \; \textrm{s.t.} \; \rho^2=\rho \; \textrm{or} \; \sigma^2=\sigma,
\end{align}
(see Fig.~\ref{fig:hierarchy} { \fontfamily{phv}\selectfont \textbf{a}}). While Sun's and Del Campo's QSLs are as easy to compute as our QSL given in Eq.~\eqref{eq:speed_limit_arbitrary}, they are also the loosest bounds. In contrast, Deffner's bound can be as tight as ours, but, since it requires the evaluation of $\sqrt{\rho}$ and $\sqrt{\sigma}$, it is less feasible.
In particular, Deffner's bound has been proven to be valid only when one of the two states is pure, i.e., for $\rho=\rho^2$ (or $\sigma=\sigma^2$)~\cite{Sun2015}. Under this condition our bound is always tighter than Deffner's. Additionally, we can analytically extend the validity of Deffner's bound to a larger class of cases by comparing it with our bound, and studying the region of the space of states for which $T_D$ is larger. All the details about the relative tightness of the considered bounds can be found in Appendix~\ref{a:tightness}.
\begin{figure}
\caption{\textbf{Relative tightness of QSL bounds.}}
\label{fig:hierarchy}
\end{figure}
Finally, we compare our bound to that of Ref.~\cite{Mondal2016} by Mondal et al., derived for the case of any general evolution, starting from the assumption that the initial state of the system $\rho_0$ is uncorrelated with that of the environment $\gamma_E$, i.e., $\Pi_0=\rho_0\otimes\gamma_E$. The orbit-independent term of their bound is a function of the affinity $A(\rho_0,\rho_\tau)=\mathrm{tr}[\sqrt{\rho_0}\sqrt{\rho_\tau}]$ between initial and final states of the system, $\rho_0$ and $\rho_\tau$, respectively~\cite{Mondal2016}, which, as mentioned earlier, is hard to calculate and to measure as it requires the diagonalization of both density operators. The orbit-dependent term of their bound is a function of the square root of $\rho_0$ and of an \emph{effective} Hamiltonian $\widetilde{H_S} = \mathrm{tr}_E[H\; \mathbb{1}\otimes\gamma_E]$, where $H$ is the total system-environment Hamiltonian. This function is not equivalent to $\overline{\lVert \dot{\rho}_t \rVert} $ (not even up to an orbit-independent factor), so we must calculate the two bounds for any given choice of dynamics, i.e., for any choice of total system-environment Hamiltonian $H$ and of initial state of the environment $\gamma_E$.
As such, we proceed with a numerical comparison of the two bounds. We randomly generate total Hamiltonians $H$, initial states of the environment $\gamma_E$, and initial states of the system $\rho_0$, and obtain the final state of the system $\rho_\tau = \mathrm{tr}_E [U_\tau \rho_0\otimes\gamma_E U_\tau^\dagger]$, where $U_\tau = \exp[-i H \tau]$, fixing $\tau=1$ for reference. We then compute both bounds for each instance of $H$, $\gamma_E$, and $\rho_0$ and compare their performance by measuring the difference $T_D - T_{\textrm{Mondal}}$. Remembering that $\tau=1$, and that both bounds must be smaller than $\tau$, the difference $T_D - T_{\textrm{Mondal}}$ must be bounded by $-1$ and $1$. Our numerical results provide convincing evidence of the performance of $T_D$ over $T_{\textrm{Mondal}}$, with the former being larger than the latter in $94\%$ of the cases for the considered sample, with an average difference of $0.57\pm0.28$ (see Fig.~\ref{fig:hierarchy} {\fontfamily{phv}\selectfont \textbf{b}} for the details about the numerical study). While Mondal's bound performs better than Deffner's, Sun's and Del Campo's, it is arguably less feasible than all of them, as it involves the evaluation of $\sqrt{\rho}$ and $\sqrt{\sigma}$ for both distance and speed terms.
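A schematic version of this sampling procedure is sketched below (assuming NumPy and SciPy; the dimensions, time resolution, and Ginibre-like state generation are illustrative assumptions, and only $T_D$ is evaluated here, with $T_{\textrm{Mondal}}$ to be computed from its expression in Ref.~\cite{Mondal2016}):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
dS, dE, tau, steps = 2, 3, 1.0, 200            # illustrative dimensions and resolution

def rand_herm(d):
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

def rand_state(d):
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def ptrace_env(Pi):
    # reduced state of the system: trace over the environment indices
    return Pi.reshape(dS, dE, dS, dE).trace(axis1=1, axis2=3)

H = rand_herm(dS * dE)
rho0, gammaE = rand_state(dS), rand_state(dE)
Pi0 = np.kron(rho0, gammaE)

ts = np.linspace(0.0, tau, steps + 1)
states = [ptrace_env(expm(-1j * H * t) @ Pi0 @ expm(1j * H * t)) for t in ts]

hs = lambda A: np.sqrt(np.trace(A.conj().T @ A).real)   # Hilbert-Schmidt norm
speeds = [hs(states[k + 1] - states[k]) / (ts[1] - ts[0]) for k in range(steps)]
T_D = hs(states[-1] - states[0]) / np.mean(speeds)
print(T_D)                                      # a valid bound satisfies T_D <= tau
\end{verbatim}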
We have now shown that bound $T_D$ of Eq.~\eqref{eq:speed_limit_arbitrary} is tighter than the QSLs by Del Campo et al.~\cite{DelCampo2013}, by Sun et al.~\cite{Sun2015}, and by Deffner et al.~\cite{Deffner2013b}, for all processes, while being just as easy to compute (if not easier). We have also provided numerical evidence of the superiority of our bound $T_D$ over the QSL by Mondal et al.~\cite{Mondal2016} for almost all processes (for over 90\% of instances), while being more feasible.
\section{Conclusions}
In this Article, we have presented a geometric quantum speed limit for arbitrary open quantum evolutions, based on the natural embedding of the space of quantum states in a high-dimensional ball, where states are represented by generalized Bloch vectors. Our speed limit $T_D$ is induced by the Euclidean norm of the displacement vector $\bm{r}-\bm{s}$ between the two generalized Bloch vectors $\bm{r}$ and $\bm{s}$, associated with the initial and final states of the evolution. The measure of distinguishability that arises from this choice of distance corresponds to the Hilbert-Schmidt norm of the difference between initial and final states, $\rho$ and $\sigma$. The use of this norm has several benefits: It allows for the effective use of optimization techniques, such as convex optimization and semidefinite programming \cite{Abernethy2009}, it is easy to manipulate analytically and numerically, it has a straightforward geometric interpretation, and it is independent of the choice of the Lie algebra $\bm{\Lambda}$ of $SU(d)$ used to represent states as GBVs. The Hilbert-Schmidt norm is also widely used in experimental contexts, for instance in quantum optimal control tasks, in order to impose finite energy bandwidth constraints on the control Hamiltonian \cite{Wang2015,Geng2016}.
We have considered the case of general open dynamics, in terms of a system-environment Hamiltonian and convolution with a memory kernel, as well as the special cases of unitary evolution and Lindblad dynamics. While the performance of many QSLs diminishes when increasingly mixed states are considered, our bound remains robust under mixing, as well as under composition. We have discussed the form of our bound, with particular attention given to the speed of the evolution. We highlighted the differences between our bound and the traditional QSL, induced by the Bures distance, and shed light on the reasons for the poor performance of the latter. Comparatively speaking, our bound outperforms several bounds derived so far in the literature~\cite{Sun2015, DelCampo2013, Deffner2013b, Mondal2016} for the majority (if not all) of processes. We have also addressed the physical interpretation of our bound, as well as that of similar QSLs, by providing a feasible experimental procedure that aims at the estimation of both the distance $D$ and the speed of the evolution $\overline{\lVert \dot{\rho}_t \rVert}$, while showing that our bound is easier to compute, as well as to measure experimentally, than the other comparably tight bounds~\cite{Deffner2013b, Pires2016, Mondal2016}. These features indicate $T_D$ as the preferred choice of QSL. In particular, the versatility of this bound, as compared to that of Ref.~\cite{Campaioli2018}, allows it to be used for a much larger class of dynamics, which we have only just approached with our examples in Section~\ref{s:forms}; a reflection that will hopefully be inspiring for further studies.
The efficacy of the QSL derived from this distance suggests that the use of a real vector space equipped with the Euclidean metric to represent the space of operators could also find application in the search for constructive approaches to time-optimal state preparation and gate design. This geometric picture might also offer solutions to some urgent outstanding problems, such as quantum optimal control in the presence of uncontrollable drift terms and constraints on tangent space, local quantum speed limits for multipartite evolution with restricted order of the interaction, and time-optimal unitary design for high-dimensional systems. The restrictions imposed by the constraints on the generators of evolution are known to dramatically change the geodesic that connects two states, and thus the bound on the minimal time of evolution, as discussed in Refs.~\cite{Arenz_2014,Lee2018}. There, the authors introduced methods to bound the speed of evolution depending on the form of uncontrollable drift terms, control complexity and size of the system, obtaining accurate results for the case of single qubits in~Ref.~\cite{Arenz_2017}. Combining such considerations with the geometric approach used here could simplify the task of improving quantum speed limits and optimal driving of controlled quantum systems, by exploiting constants of motion that are easier to represent in the generalized Bloch sphere picture.
Adapting this approach could find applications in other areas of quantum information, such as quantum metrology and quantum thermodynamics, where geodesic equations and geometric methods are routinely employed for the solution of optimization problems. While an attainable speed limit for arbitrary processes is yet to be found, its comprehension goes hand in hand with the understanding of the geometry of quantum states, as well as with the development of constructive techniques for time-optimal control.
\begin{acknowledgments}
\noindent
We kindly acknowledge R. Uzdin and V. Giovannetti for fruitful discussions. KM is supported through Australian Research Council Future Fellowship FT160100073.
\end{acknowledgments}
\appendix
\section{Derivation of bound~\eqref{eq:speed_limit_arbitrary} from distance~\eqref{eq:distance}}
\label{a:derivation}
\noindent
Given two states $\rho$, $\sigma\in\mathcal{S}(\mathcal{H}_S)$ of the system, with associated generalized Bloch vectors $\bm{r}$, $\bm{s}$, respectively, the function $D(\rho,\sigma)=\lVert \bm{r}-\bm{s}\rVert_2$ expressed in Eq.~\eqref{eq:distance} is clearly a distance, as it is the Euclidean norm of the displacement vector $\bm{r}-\bm{s}$~\cite{Bengtsson2008}. $D$ can be expressed as a function of the dimension $d$ of the system and of the density matrices $\rho$ and $\sigma$, remembering that
\begin{gather}
\label{eq:euclidean_hilbertschmidt}
\begin{split}
\mathrm{tr}[(\rho-\sigma)^2] & = \mathrm{tr}[(\frac{c}{d} \sum_a (r_a-s_a)\Lambda_a)^2] \\
& = \frac{d(d-1)}{2d^2}\sum_{a,b}(r_a -s_a)(r_b-s_b)\mathrm{tr}[\Lambda_a\Lambda_b], \\
& = \frac{d-1}{2d}\sum_{a,b}(r_a -s_a)(r_b-s_b) 2\delta_{ab}, \\
& = \frac{d-1}{d}\sum_{a}(r_a -s_a)^2, \\
& = \frac{d-1}{d}\lVert \bm{r} - \bm{s} \rVert_2^2; \\
\end{split}
\end{gather}
thus, recalling that $\lVert \rho \rVert = \sqrt{\mathrm{tr}[\rho^\dagger\rho]} = \sqrt{\mathrm{tr}[\rho^2]}$,
\begin{gather}
D(\rho,\sigma) = \sqrt{\frac{d}{d-1}}\lVert \rho -\sigma \rVert .
\end{gather}
The proof for the QSL bound of Eq.~\eqref{eq:speed_limit_arbitrary} is carried out as follows: Consider a parametric curve $\gamma(s):[0,S]\subset\mathbb{R}\to\mathbb{R}^N$ for some $N\geq1$, that connects two different points $A=\gamma(0)$ and $B=\gamma(S)$. Let $\lVert \cdot \rVert_\eta$ be some norm, specified by $\eta$, on $\mathbb{R}^N$, which induces the distance $D(A,B)=\lVert A-B\rVert_\eta$. The length of the path $\gamma$ is given by $L[\gamma_A^B]=\int_0^S ds \lVert \dot{\gamma}(s)\rVert_\eta$, where $\dot{\gamma}(s) = d\gamma(s)/ds$. Since $D(A,B)$ is the geodesic distance between $A$ and $B$, any other path between the two points can only be longer or of equal length, with respect to the chosen distance (induced by the chosen norm), thus $D(A,B)\leq L[\gamma_A^B]$. In particular, we choose $D(\rho,\sigma)=\lVert \bm{r}-\bm{s}\rVert_2$ and $\bm{r}(t)$ as the parametric curve generated by some arbitrary process, with $\bm{r}(0)=\bm{r}$, and $\bm{r}(\tau)=\bm{s}$. Accordingly, the length of the curve is given by $L[\gamma_{\rho}^{\sigma}] = \int_0^\tau dt \lVert \dot{\bm{r}}(t)\rVert_2$. Recalling that $\lVert \dot{\bm{r}}(t)\rVert_2 = \sqrt{d/(d-1)}\lVert \dot{\rho}_t\rVert $, we obtain the bound.
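Explicitly, combining the relations above,
\begin{gather}
\sqrt{\frac{d}{d-1}}\,\lVert \rho-\sigma\rVert = D(\rho,\sigma) \leq L[\gamma_{\rho}^{\sigma}] = \sqrt{\frac{d}{d-1}}\int_0^\tau dt\, \lVert \dot{\rho}_t\rVert = \sqrt{\frac{d}{d-1}}\,\tau\,\overline{\lVert \dot{\rho}_t\rVert},
\end{gather}
so that $\tau \geq \lVert \rho-\sigma\rVert\,/\,\overline{\lVert \dot{\rho}_t\rVert}$, which reproduces the bound of Eq.~\eqref{eq:speed_limit_arbitrary}.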
$\square$
\noindent
\section{Comparison of significant QSL bounds}
\label{a:tightness}
\noindent
As mentioned in the main text, we have considered some significant bounds~\cite{Sun2015,DelCampo2013,Deffner2013b,Mondal2016,Pires2016} to test the performance of our bound $T_D$.
First, we note that Pires's bound is in fact an infinite family of bounds, which depend on the choice of the distance/metric chosen to fit the specific type of evolution. Optimal choices of the distance are well known for some notable cases, such as the unitary evolution of pure states, as mentioned above. However, for the general case of arbitrary processes a \emph{preferred} distance has not been specified by the authors of~\cite{Pires2016}. For this reason we cannot perform a direct comparison between bound $T_D$ and Pires's, which will be disregarded henceforth.
We then analytically compare our bound to Sun's, Del Campo's, and Deffner's bounds. The last three bounds are given by
\begin{align}
\label{eq:sun}
&T_{\textrm{Sun}} = \frac{\bigg|1-\frac{\mathrm{tr}[\rho\sigma]}{\sqrt{\mathrm{tr}[\rho^2]\mathrm{tr}[\sigma^2]}}\bigg|}{2 \overline{\left({\lVert \dot{\rho}_t\rVert }/{\lVert \rho_t \rVert }\right)}},\\
\label{eq:delcampo}
&T_{\textrm{Del Campo}} = \frac{\bigg|1-\frac{\mathrm{tr}[\rho\sigma]}{\mathrm{tr}[\rho^2]}\bigg|\lVert \rho \rVert ^2}{\overline{\lVert \dot{\rho}_t\rVert} },\\
\label{eq:deffner}
&T_{\textrm{Deffner}} = \frac{\sin^2\bigg[\arccos \bigg( F(\rho,\sigma)\bigg) \bigg]}{\overline{\lVert \dot{\rho}_t\rVert} },
\end{align}
where $F(\rho,\sigma) = \mathrm{tr}[\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}]$ is the quantum fidelity between $\rho$ and $\sigma$.
The orbit-dependent term of all of these bounds only depends on the strength of the generator $\lVert \dot{\rho}_t \rVert $, or can be bounded by some quantity that only depends on this term. This observation allows us to evaluate the relative tightness of these bounds and of $T_D$ just by comparing their orbit-independent terms. Let us assume that $\mathrm{tr}[\rho^2]\geq\mathrm{tr}[\sigma^2]$, without loss of generality, and introduce the enhanced bounds
\begin{align}
\label{eq:over_sun}
&T_{\textrm{Sun}}^{\;\star} = \frac{\bigg|1-\frac{\mathrm{tr}[\rho\sigma]}{\sqrt{\mathrm{tr}[\rho^2]\mathrm{tr}[\sigma^2]}}\bigg|\lVert \rho \rVert }{2 \overline{\lVert \dot{\rho}_t\rVert} }, \\
\label{eq:over_deffner}
&T_{\textrm{Deffner}}^{\;\star} = \frac{\sin^2\bigg[\arccos \bigg(E(\rho,\sigma)\bigg) \bigg]}{\overline{\lVert \dot{\rho}_t\rVert} },
\end{align}
where $E$ is the sub-fidelity
\begin{gather}
\label{eq:sub_fidelity}
E(\rho,\sigma)=\sqrt{\mathrm{tr}[\rho\sigma]+\sqrt{2(\mathrm{tr}[\rho\sigma]^2 -\mathrm{tr}[\rho\sigma\rho\sigma]) }},
\end{gather}
which is a lower bound to $F$~\cite{Miszczak2008}. Both enhanced bounds $T^{\;\star}_{\textrm{Sun}}$ and $T^{\;\star}_{\textrm{Deffner}}$ are larger than the respective bounds of Eqs.~\eqref{eq:sun}~and~\eqref{eq:deffner}. Therefore, whenever $T_D$ is larger than the enhanced bounds it is also surely larger than the actual ones. Moreover, the enhanced bounds have orbit-independent terms that only depend on the following four parameters
\begin{align}
\label{eq:x}
&x: =\mathrm{tr}[\rho^2], \\
\label{eq:y}
&y:=\mathrm{tr}[\sigma^2], \\
\label{eq:z}
&z:=\mathrm{tr}[\rho\sigma], \\
\label{eq:beta}
&\beta:=\mathrm{tr}[\rho\sigma\rho\sigma],
\end{align}
where $x,y$ are bounded by $1/d$ from below and by $1$ from above, $z$ is bounded by $\sqrt{x y}$ from above, and $\beta$ is bounded by $z^2$ from above.
We proceed with the evaluation of the relative tightness of these bounds and of $T_D$ just by comparing their orbit-independent terms, obtaining
\begin{align}
\label{eq:analytical_sun}
&\bigg | 1 - \frac{z}{\sqrt{xy}} \bigg|\frac{\sqrt{x}}{2} \leq \sqrt{x+y-2z} \Rightarrow T^{\;\star}_{\textrm{Sun}} \leq T_D,
\end{align}
and
\begin{gather}
\bigg | 1 - \frac{z}{x} \bigg|x \leq \sqrt{x+y-2z}\Rightarrow T_{\textrm{Del Campo}} \leq T_D,
\end{gather}
for all $\rho,\sigma \in \mathcal{S}(\mathcal{H}_S)$ and all processes.
As mentioned in the main text, Deffner's bound is proven to be valid only when one of the two states is pure, i.e., for $\rho=\rho^2$ (or $\sigma=\sigma^2$)~\cite{Sun2015}, i.e., when $x=1$. Under this condition, sub-fidelity, fidelity and super-fidelity all coincide~\cite{Miszczak2008} and equal $\sqrt{\mathrm{tr}[\rho\sigma]}$, and we obtain
\begin{gather}
\begin{split}
&\sin^2\big[\arccos\big(\sqrt{z}\big)\big]\leq\sqrt{1+y-2z} \\
&\Rightarrow T^\star_{\textrm{Deffner}}\leq T_D,
\end{split}
\end{gather}
which proves our statement.
\section{Extending the validity of bound by Deffner et al.}
\label{a:deffner}
We will now show that our bound can be used to extend the validity of Deffner's bound~\cite{Deffner2013b} to the case of mixed initial states $\rho$, with $\mathrm{tr}[\rho^2]<1$.
As mentioned earlier, we can directly compare our bound $T_D$ to the enhanced bound $T^\star_{\textrm{Deffner}}$, given in Eq.~\eqref{eq:over_deffner}, which is always larger than the actual bound $T_{\textrm{Deffner}}$. Since our bound is valid for any initial state $\rho$, whenever $T_D$ is larger than $T^\star_{\textrm{Deffner}}$, the bound $T_{\textrm{Deffner}}$ is guaranteed to be valid for those values of $x$, $y$, $z$, and $\beta$.
Even though there is not a universal hierarchy between these two bounds, we can express a ranking between $T_D$ and $T^{\;\star}_{\textrm{Deffner}}$ using the following strategy:
We calculate the probability $p(T_D \geq T^*_{\textrm{Deffner}})$ of $T_D$ being larger than the upper bound on $T_{\textrm{Deffner}}$ in the space spanned by $z\in[0,\sqrt{x y}]$ and $\beta \in [0,z^2]$, as the ratio between the area where $T_D\geq T^*_{\textrm{Deffner}}$ and the area of the full space spanned by $z$ and $\beta$,
\begin{gather}
\label{eq:probability_test}
p(T_D\geq T^\star_{\textrm{Deffner}}) = \frac{\int_0^{\sqrt{xy}} \int_0^{z^2} \frac{\textrm{sgn}(\Gamma (x,y,z,\beta))+1}{2} \;d\beta\, dz}{\int_0^{\sqrt{xy}} \int_0^{z^2} d\beta\, dz},
\end{gather}where $\textrm{sgn}$ is the sign function, and
\begin{gather}
\label{eq:performance_function}
\begin{split}
&\Gamma (x,y,z,\beta) = \sqrt{x+y-2z}\; + \\
& \;\;- \sin^2\bigg[\arccos\bigg(\sqrt{z+\sqrt{2(z^2-\beta)}}\bigg)\bigg].
\end{split}
\end{gather}
The probability $p(T_D\geq T^\star_{\textrm{Deffner}})$ is a function of $x$ and $y$ that measures how often $T_D$ is larger than $T^\star_{\textrm{Deffner}}$ in the space spanned by $z\in[0,\sqrt{xy}]$ and $\beta \in [0,z^2]$, given $x$ and $y$. As a result, we obtain a general \emph{rule of thumb} to decide which bound to use given the purity of initial and final states: For $y\geq 1- x$, bound $T_D$ outperforms Deffner's (and vice versa for $y\leq 1-x$), as shown in Fig.~\ref{fig:hierarchy_deffner} {\fontfamily{phv}\selectfont \textbf{a}}.
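A minimal numerical sketch of this test (NumPy only; the slice-weighted grid, the clipping of the sub-fidelity argument to $[0,1]$, and the resolution are implementation choices made for this illustration) evaluates Eq.~\eqref{eq:probability_test} for given purities $x$ and $y$:
\begin{verbatim}
import numpy as np

def gamma_fn(x, y, z, beta):
    # performance function of Eq. (performance_function)
    sub_fid = np.sqrt(z + np.sqrt(2.0 * (z**2 - beta)))
    return (np.sqrt(x + y - 2.0 * z)
            - np.sin(np.arccos(np.clip(sub_fid, 0.0, 1.0)))**2)

def p_TD_beats_deffner(x, y, n=400):
    zs = np.linspace(0.0, np.sqrt(x * y), n)
    num = den = 0.0
    for z in zs[1:]:
        betas = np.linspace(0.0, z**2, n)
        frac = np.mean(gamma_fn(x, y, z, betas) >= 0.0)
        num += frac * z**2        # winning area within this z-slice
        den += z**2               # total area of this z-slice
    return num / den

print(p_TD_beats_deffner(x=0.8, y=0.6))
\end{verbatim}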
Additionally, we have directly compared our bound $T_D$ to $T_{\textrm{Deffner}}$ numerically in Fig.~\ref{fig:hierarchy} { \fontfamily{phv}\selectfont \textbf{c}}, sampling $3\cdot10^6$ initial and final states from the Bures and the Ginibre ensembles. As can be seen from the figure, our bound outperforms Deffner's for the vast majority of the cases, as shown in Fig.~\ref{fig:hierarchy_deffner} {\fontfamily{phv}\selectfont \textbf{b}}.
\begin{figure}
\caption{Relative tightness of $T_D$ and the bound by Deffner et al.}
\label{fig:hierarchy_deffner}
\end{figure}
\end{document}
\begin{document}
\title{Quantum-Classical Access Networks with Embedded Optical Wireless Links}
\author{Osama~Elmabrok,~\IEEEmembership{Student~Member,~IEEE,}
Masoud Ghalaii,~\IEEEmembership{Student~Member,~IEEE,}
and Mohsen~Razavi
\thanks{This work was presented in part at the IEEE Globecom Conf. 2016 in Washington, DC. This research has partly been funded by the UK EPSRC Grants EP/M506953/1 and EP/M013472/1, the ministry of higher education and scientific research (MHESR) in Libya, and White Rose Research Studentship. The authors are with the School of Electronic and Electrical Engineering, University of Leeds, Leeds, LS2 9JT, UK (e-mail: [email protected], [email protected], and [email protected]).}}
\maketitle
\begin{abstract}
We examine the applicability of wireless indoor quantum key distribution (QKD) in hybrid quantum-classical networks. We propose practical configurations that would enable wireless access to such networks. The proposed setups would allow an indoor wireless user, equipped with a QKD-enabled mobile device, to communicate securely with a remote party on the other end of the access network. QKD signals, sent through wireless indoor channels, are combined with classical ones and sent over shared fiber links to the remote user. Dense wavelength-division multiplexing would enable the simultaneous transmission of quantum and classical signals over the same fiber. We consider the adverse effects of the background noise induced by Raman scattered light on the QKD receivers due to such an integration. In addition, we consider the loss and the background noise that arise from indoor environments. We consider a number of discrete and continuous-variable QKD protocols and study their performance in different scenarios.
\end{abstract}
\begin{IEEEkeywords}
Quantum key distribution, quantum networks, BB84, decoy states, continuous-variable QKD (CV-QKD), measurement-device-independent QKD (MDI-QKD), optical wireless communications (OWC).
\end{IEEEkeywords}
\section{Introduction}
\IEEEPARstart{F}{uture} communications networks must offer improved security features against possible attacks enabled by quantum computing technologies. One possible solution is to develop quantum-classical networks that allow any two users to, not only exchange data, but also share secret key bits using quantum key distribution (QKD) techniques. Such a key can then be used to enable secure transmission of data between the two users. QKD technology is commercially available today~\cite{idquantique,QuantumCTek} and it has been used to exchange secret keys between pairs of users connected via fiber \cite{QKD_10Gbps_DWDM} or free space \cite{Zeilinger_Decoy_07} channels. QKD has also been implemented in several network settings~\cite{SECOQC2009, TokyoQKDNetwork2011, Chinanetworks2009}. Despite this progress, more work needs to be done to make QKD conveniently available to the end users of communications networks. In this paper, we address {\em wireless} access to a hybrid quantum-classical network. We consider hybrid links, with or without a trusted/untrusted relay point, between a wireless end user and the corresponding central office in an access network. This is done by adopting wireless indoor QKD links and embedding them into fiber-based passive optical networks (PONs).
QKD enables two remote users, Alice and Bob, to generate and exchange provably secure keys guaranteed by the laws of quantum physics~\cite{Gisin2002quantum, scarani2009security}. The obtained secret keys can then be used for data encryption and decryption between the two intended users. In conventional QKD protocols, an eavesdropper, Eve, cannot intercept the key without disturbing the system, and accordingly having her presence discovered. Furthermore, because of the no-cloning theorem~\cite{no-cloning}, Eve cannot exactly copy unknown quantum states. Based on these two principles, Bennett and Brassard in 1984 came up with their BB84 protocol in which {\em single photons} were carrying the key-bit information from Alice to Bob \cite{Bennett_BB84}. Over time, more practical protocols have been developed that allow us to use weak laser pulses instead of ideal single-photon sources \cite{MXF:Practical:2005}. Nevertheless, most QKD protocols will still rely on the few-photon regime of operation, which makes them vulnerable to loss and background noise. This will make the implementation of QKD especially challenging in wireless mobile environments in which background noise is strong and alignment options are limited \cite{Wireless_indoor_QKD, Globecom15}.
However challenging, embedding QKD capability into mobile/handheld devices is an attractive solution for exchanging sensitive data in a safe and convenient manner. For instance, customers in a bank can exchange secret keys wirelessly with access points in the branch without waiting for a teller or a cash machine. Initial prototypes have already been made, which enable a handheld device to exchange secret keys with an ATM without being affected by skimming frauds \cite{HP_HandheldQKD,chun2017handheld}. As another application, it would be desirable to enable a user working in a public space, such as an airport or a cafe, to exchange secret keys with its service provider via possibly untrusted nodes. Similarly, once fiber-to-the-home infrastructure is widely available, home users should benefit from such wireless links that connect them, via a PON, to other service provider nodes. In this case, the connection to the PON can be via an internal QKD node trusted by the user. Note that, in all cases above, we are dealing with a wireless link in an {\em indoor} environment, which may offer certain advantages, as compared to a general outdoor setup, in terms of ease of implementation. It is then a proper starting point for offering wireless QKD services as we study in this paper.
The above scenarios require hybrid links on which both data and quantum signals can travel in both wireless and wired modes. In this case, wireless QKD signals must somehow be collected and sent to the nearest service provider node over an optical fiber. In order to have a cost effective solution, the collected wireless QKD signals should be transmitted along with classical data signals over the same fiber links. A QKD system run on such a hybrid quantum-classical link would then face certain challenges. First, the background light in the wireless environment can sneak into the fiber system and increase error rates of the QKD setup. Furthermore, due to nonlinear effects in optical fibers such as four-wave mixing and Raman scattering~\cite{Eraerds2010_1Gbps}, the data channels that travel alongside QKD channels on the same fiber can induce additional background noise on QKD systems. In particular, the impact of the Raman scattered light can be severe \cite{Eraerds2010_1Gbps}, because its spectrum can overlap with the frequency band of QKD channels. By using extensive filtering in time and frequency domains, the impact of this noise can be mitigated~\cite{Patel2012coexistence, QKD_10Gbps_DWDM, Bahrani2016Crosstalk} and even maximally reduced~\cite{Bahrani2016orthogonal}, but it cannot be fully suppressed.
In this paper, by considering the effect of various sources of noise mentioned above, four setups for embedding wireless indoor QKD links into quantum-classical access networks are investigated. In each case, we find the corresponding key generation rate for relevant QKD protocols. We use the decoy-state BB84 (DS-BB84)~\cite{MXF:Practical:2005}, which relies on weak laser pulses, and measurement-device-independent QKD (MDI-QKD)~\cite{Lo2012MDI-QKD} protocols in our setups. The latter protocol can provide a trust-free link, as required in the case of a user in a public space, between the wireless user and the central office in an access network. The price to pay, however, is possible reduction in the rate. We also consider the GG02 protocol~\cite{GG02}, as a continuous-variable (CV) QKD scheme, and compare it with our discrete-variable (DV) protocols in terms of resilience to background noise and loss~\cite{kumar2014experimental,lasota2017robustness}. CV QKD receivers require standard telecommunications technology for coherent detection, and in that sense they do not rely on single-photon detectors as their DV counterparts do.
The remainder of this paper is organized as follows. In Sec.~\ref{Sec:SystemDescription}, the system is described and in Sec.~\ref{Sec:KeyRateAnalysis} the key rate analysis is presented. The numerical results are discussed in Sec.~\ref{Sec:NumericalResults}, and Sec.~\ref{Sec:Conclusions} concludes the paper.
\section{System Description}
\label{Sec:SystemDescription}
In this section, we describe our proposed setups for hybrid quantum-classical access networks comprised of optical wireless and fiber-optic links. Such setups can wirelessly connect a mobile user, in indoor environments, to the central office in access networks; see Fig.~\ref{fig_schematic_view}. We assume a total of $N$ end users, which are connected to the central office via a dense wavelength-division multiplexing (DWDM) PON. The corresponding wavelengths assigned to quantum and classical data channels are, respectively, denoted by $Q = \lbrace\lambda_{q_1}, \lambda_{q_2}, \ldots, \lambda_{q_N} \rbrace$ and $D = \lbrace\lambda_{d_1}, \lambda_{d_2}, \ldots, \lambda_{d_N}\rbrace$. The $k$th user, $k = 1,\ldots, N$, employs wavelength $\lambda_{q_k}$ ($\lambda_{d_k}$) to communicate his/her quantum (classical) signals to the central office, as shown in Fig.~\ref{fig_schematic_view}. The same wavelengths are also used for the downlink. In order to heuristically reduce the Raman noise effect, we assume that the lower wavelength grid is allocated to the QKD channels, while the upper grid is assigned to data channels~\cite{Bahrani2016Crosstalk}. In principle, one can optimize the wavelength allocation such that the Raman noise on the quantum channels is minimized \cite{Bahrani_optimal}.
\begin{figure}
\caption{Schematic view of exchanging secret keys between an indoor wireless user with a central office at the end of an access network. The transmitter is mobile, while the QKD receiver or the collection point is fixed on the ceiling.}
\label{fig_schematic_view}
\end{figure}
For our wireless user, we consider a particular indoor environment, in which it has been shown that wireless QKD is feasible \cite{Wireless_indoor_QKD,Globecom15}. In this setting, a window-less room, of $X\times Y \times Z$ dimensions, is lit by an artificial light source. The possibly mobile QKD transmitter is placed on the floor and it transmits light toward the ceiling. The transmitter module may or may not be equipped with beam steering tools. In the former case, we assume that a minimal manual alignment is in place, by which the QKD source is facing the ceiling. This can be achieved by providing some instructions for the end user during the QKD protocol. The QKD receiver or the signal collector is fixed at the center of the room's ceiling; see Fig.~\ref{fig_schematic_view}. We assume that, by using some dynamic beam steering, maximum possible power is collected from the QKD source. This may be achieved by using additional beacon pulses. The collected light may go through a non-imaging optical concentrator, such as a compound parabolic collector, and then be filtered by a bandpass filter before being detected or sent out toward its final destination.
In each setup, we particularly study three different cases regarding the position of the mobile QKD device. Case 1 refers to the scenario when the QKD transmitter is placed at the center of the room's floor and emits light upward with semi-angle at half power of $\Phi_{1/2}$. In case 2, the same QKD transmitter as in case 1 is moved to a corner of the room in order to assess the mobility features. These cases will represent the best and the worst case scenarios in terms of channel loss, when minimal beam alignment is used at the transmitter end. In case 3, the light beam at the QKD source is narrowed and is directed toward the QKD receiver or the coupling element. This would correspond to the worst case scenario when beam alignment is available at both the source and the receiver. In all cases, we assume a static channel in our analysis, that is we assume that the channel does not change during the key exchange procedure. The real mobile user is then expected to experience a quality of service bounded by the worst and best-case scenarios above. In the following, we first describe our proposed setups and the QKD protocols used in each case, followed by a description of the channel model.
\subsection{The proposed setups}
We consider four setups in which an indoor wireless user, Alice, equipped with a QKD-enabled mobile device, would exchange secret keys with a remote party, Bob, located at the central office. In order to keep the mobile user's device simple, we assume that Alice is only equipped with the QKD encoder. That would imply that certain QKD schemes, such as entanglement-based QKD \cite{BBM_92}, are not suitable for our purpose if they require measuring single photons at the mobile user's end. Bob, however, represents the service provider node and could be equipped with the encoder and/or the decoder module as needed. Based on these assumptions, here, we consider several settings depending on the existence or non-existence of a trusted/untrusted relay point between the wireless user and Bob at the central office. In all setups, a data channel will be wavelength multiplexed with the quantum one to be sent to the central office. We assume that classical data is being modulated at a constant rate throughout the QKD operation.
\subsubsection{Setup 1 with a trusted relay point}
Setup 1 is applicable whenever a trusted node between the sender and the recipient exists. For instance, in an office, we can physically secure a QKD relay node inside the building with which the wireless QKD users in the room can exchange secret keys. In Fig.~\ref{fig_scenario_1}, such a node is located on the ceiling and it is comprised of Rx and Tx boxes. In this setup, the secret key exchange between Alice and Bob is accomplished in two steps: a secret key, $K_1$, is generated between Alice and the Rx box in Fig.~\ref{fig_scenario_1}; also, independently but in parallel, another secure key, $K_2$, is exchanged between Tx and the relevant Bob in the central office. The final secret key is then obtained by applying an exclusive-OR (XOR) operation to $K_1$ and $K_2$. Note that in this setup both links are completely run separately; therefore, the wavelength used in the wireless link does not need to be the same as the wavelength used in the fiber link. In fact, for the wireless link, we use 880~nm range of wavelength, for which efficient and inexpensive single-photon detectors are available. For the fiber link, conventional telecom wavelengths are used. DS-BB84 and GG02 protocols will be used for this setup.
\begin{figure}
\caption{Setup 1, where secret key exchange between Alice and Bob is achieved in two steps. $K_1$ is generated between Alice and Rx, while $K_2$ is generated between Tx and Bob. The resultant key is computed by taking the XOR of $K_1$ and $K_2$. Three cases are examined according to the position and alignment of the QKD transmitter. The DS-BB84 and GG02 protocols will be examined in this setup. Dynamic beam steering is used at the Rx node.}
\label{fig_scenario_1}
\end{figure}
\subsubsection{Setup 2 without a relay point}
In this setup, we remove the need for having a relay point altogether. As shown in Fig.~\ref{fig_scenario_2}, the signals transmitted by Alice are collected by a telescope and coupled to a single-mode fiber to be sent to the central office. QKD measurements will then be performed at the central office. Because of this coupling requirement, the wireless signals undergo an additional coupling loss in setup 2. To reduce the coupling loss, in this setup, and, for fairness, in all others, we assume that the telescope at the collection point can focus on the QKD source. This can be achieved by additional beacon beams and micro-electro-mechanical based steering mirrors \cite{chun2017handheld}. In order to efficiently couple the collected photons to the fiber, the effective field of view (FOV) at the collection point should match the numerical aperture of a single-mode fiber. That requires us to use FOVs roughly below $6^\circ$, although, in practice, much lower values may be needed. In this setup, DS-BB84 and GG02 can be suitable protocols and will be examined in the following sections.
\begin{figure}
\caption{Setup 2, where secret keys are exchanged between Alice and Bob using the DS-BB84 and GG02 protocols. The latter is only used in case 3. The QKD signals are collected and coupled to the fiber and sent to Bob, where the measurement is performed. Dynamic beam steering is used at the collection node.}
\label{fig_scenario_2}
\end{figure}
\subsubsection{Setups 3 and 4 with untrusted relay points}
The setups in Figs.~\ref{fig_scenario_3} and \ref{fig_scenario_4} are of interest whenever the indoor environment that the wireless user is working at is not trustworthy. For instance, if the user is working at a public place, such as a coffee shop or an airport, s/he may not necessarily trust the owners of the local system. In such setups, we can use the MDI-QKD technique~\cite{Lo2012MDI-QKD} to directly interfere the quantum signal sent by the users with that of the central office. This can be accomplished by, if necessary, coupling the wireless signal into the fiber and performing a Bell-state measurement (BSM) on the photons sent by Alice and Bob at either the user's end (setup 3), or at a certain place located between the sender and the recipient at the central office (setup 4). In setup 4, we use the splitting terminal of a PON to implement such BSMs. Note that in setups 3 and 4 we need to interfere a single-mode signal traveling in fiber with a photon that has traveled through the indoor channel. In order to satisfy the BSM indistinguishability criterion, we then need to collect only one spatial mode from the wireless channel. The flexible beam steering used at the collection node should then satisfy this requirement.
\begin{figure}
\caption{Setup 3, where secret keys are exchanged between Alice and Bob using the MDI-QKD protocol. The BSM is performed at the user's end in this setup.}
\label{fig_scenario_3}
\end{figure}
Here, we use a probabilistic setup for the BSM operation, as shown in Fig.~\ref{fig_BSM_2}. In this setup, the light coming from the two users is coupled at a 50:50 (fiber-based) beam splitter, and the outgoing signals are then detected using single-photon detectors. This simple setup is suitable for time-bin encoding techniques in QKD, which offer certain advantages in both fiber and free-space QKD systems. In particular, they may suffer less from alignment issues as compared to polarization-based encoding in wireless environments. Note that two successive clicks, one corresponding to each time bin, are required to have a successful BSM. That would require fast single-photon detectors with sub-nanosecond deadtimes. This is achievable using self-differencing techniques developed recently \cite{Yuan:Selfdif:2007}. If such detectors are not available, one can rely on one click on each detector, which roughly corresponds to declaring half of the success cases.
\begin{figure}
\caption{Setup 4, where secret keys are exchanged between Alice and the central office using the MDI-QKD protocol. The BSM is performed at the splitting point of the DWDM PON. }
\label{fig_scenario_4}
\end{figure}
\begin{figure}
\caption{The Bell-state measurement (BSM) module used in setups 3 and 4. This module works for time-bin encoded QKD signals. If fast detectors are available, as assumed here, we can do a separate measurement on each time bin. If not, we can still measure one out of four Bell states by relying on a single click in total on each detector.}
\label{fig_BSM_2}
\end{figure}
\subsection{The employed QKD protocols}
We use a number of discrete and continuous-variable QKD protocols to investigate the performance of the proposed configurations. In the case of DV protocols, we use the time-bin encoding, in which the information is encoded onto the phase difference between two successive pulses \cite{time-bin_encoding}. We assume that the gap between the two pulses is sufficiently short that similar phase distortions would be applied to both time bins while traversing the channel. Possible discrepancies are modeled by a relative-phase error term $e_d$. In the following, we provide a brief description of QKD protocols considered in this paper.
\subsubsection{DS-BB84}
In the ideal BB84 protocol~\cite{Bennett_BB84}, it is assumed that Alice, the sender, uses a single-photon source. However, this is not necessarily the case in practice. The practical alternative source is a highly attenuated laser that produces weak coherent states. The problem with using such sources is the possibility of experiencing the photon-number-splitting (PNS) attack~\cite{Brassard2000limitations} as each pulse might contain more than one photon. That is, Eve can siphon a photon and forward the rest to Bob. Later, after public announcement of the bases by Alice and Bob, Eve can measure exactly the state of the photon without revealing her presence. The decoy-state technique was proposed to beat this kind of attack \cite{hwang2003quantum}. The idea is to use several different light intensities, instead of one, so that any attempt by Eve to intrude on the link is more likely to be detected~\cite{MXF:Practical:2005}. In our key-rate analysis, we use the efficient version of DS-BB84~\cite{Lo2005efficient}, where the $Z$ basis is chosen more frequently than the $X$ basis. In the time-bin encoding, the $Z$ basis is spanned by the single-photon states corresponding to each time bin, whereas the $X$-basis states are superpositions of such states. We also assume that a passive Mach-Zehnder interferometer is used for decoding purposes.
\subsubsection{MDI-QKD}
The MDI-QKD protocol provides an efficient method of removing all detector side-channel attacks~\cite{Lo2012MDI-QKD}. This is done by performing the measurement by a third party, Charlie, who is not necessarily trusted. In this protocol, Charlie performs a BSM on Alice and Bob's signals, where each have a DS-BB84 time-bin encoder \cite{Ma2012alternative, MDIQKD_finite_PhysRevA2012}. Here, we again assume that the efficient version of DS-BB84 is in use. After Charlie announces the measurement outcomes of the successful events over a public channel, Alice and Bob follow the typical sifting and post processing procedures to come up with a shared secret key.
\subsubsection{GG02}
While DV-QKD requires single-photon detectors, CV-QKD protocols are compatible with standard telecommunication technologies for coherent optical communications, namely, that of homodyne and heterodyne receivers~\cite{diamanti2015distributing}. CV-QKD has also, in certain regimes, the possible advantage of being more resilient to the background noise induced in WDM networks than DV-QKD \cite{CV_ResilienceBQi}. This is due to the intrinsic filtration of photons that do not match the spatio-temporal and polarization mode of the local oscillator (LO) in homodyne receivers \cite{kumar2014experimental}. However, for secure communication, CV-QKD may only be practical for short distances in comparison with DV-QKD~\cite{lodewyck2007quantum, jouguet2013experimental}.
This is because of the excess noise and loss in the optical channels, as well as the limited efficiency of the classical reconciliation~\cite{Madsen2012entangledstates}.
The GG02 protocol was introduced by Grosshans and Grangier \cite{GG02}. It is the counterpart of the BB84 protocol among CV prepare-and-measure schemes. In contrast to BB84, which relies on discrete variables, such as the polarization of single photons, GG02 exploits the quadratures of coherent states for encoding the information. In GG02, two random numbers, $X_A$ and $P_A$, are drawn by Alice according to two independent zero-mean Gaussian distributions with variance $V_A$, in shot-noise units. The coherent state $|X_A+\textrm{i} P_A\rangle$ is then prepared, using amplitude and phase modulators, by Alice and sent to Bob, who randomly measures one of the two quadratures. After this stage both users acquire correlated random data. The error reconciliation and the privacy amplification are then performed in order to obtain the final secure key~\cite{GG02}. Reverse reconciliation \cite{grosshans2003quantum} is used in our study.
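As a toy illustration of this prepare-and-measure structure (a sketch using NumPy; the pure-loss channel with transmittance $T$ and excess noise $\xi$, the noise convention, and all parameter values are illustrative assumptions, not the parameters used in our key-rate analysis):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n, V_A, T, xi = 100_000, 4.0, 0.2, 0.01   # pulses, modulation variance, transmittance, excess noise

# Alice draws the quadrature displacements of her coherent states (shot-noise units)
x_A = rng.normal(0.0, np.sqrt(V_A), n)
p_A = rng.normal(0.0, np.sqrt(V_A), n)

# Bob randomly homodynes one quadrature per pulse; outcome = attenuated signal + noise
basis = rng.integers(0, 2, n)                      # 0 -> x quadrature, 1 -> p quadrature
signal = np.where(basis == 0, x_A, p_A)
noise = rng.normal(0.0, np.sqrt(1.0 + T * xi), n)  # shot noise plus channel excess noise
y_B = np.sqrt(T) * signal + noise

# correlated Gaussian data from which a key is distilled via reverse reconciliation
print(np.corrcoef(signal, y_B)[0, 1])
\end{verbatim}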
\subsection{Channel Characterization}
In this section, we model the two parts of our communication link, i.e., the wireless and fiber-based components and find out how much loss or background noise they may introduce.
\subsubsection{Indoor optical wireless channel}
A wireless QKD system may suffer from two issues. The first is the existence of background noise caused by the artificial, as well as natural, sources of light in the room. The second important issue is the path loss, which can also have a severe impact on the QKD performance in indoor environments. The latter is modeled by the channel DC gain, $H_{\rm DC}$ \cite{kahn1997wireless,gfeller1979wireless}, which determines the portion of the transmitted power that will be detected at the receiver.
In this paper, we follow the same methodology and assumptions, as presented in our recent work~\cite{Wireless_indoor_QKD,Globecom15}, to calculate the indoor channel transmittance, $H_{\rm DC}$, and the corresponding background noise. In our assumed window-less room, the background noise induced by the artificial lamp is calculated. That would depend on the power spectral density (PSD) of the employed light source. The receiver's FOV is also important since it limits the amount of background noise that may sneak into the QKD receiver. Here, we account for the reflected light from the walls and the floor that would be collected at the ceiling. We use optical wireless communication (OWC) models in~\cite{kahn1997wireless,gfeller1979wireless} for loss and background noise calculations. For the sake of brevity, we do not repeat that analysis here, but give some of the key relationships below.
The DC-gain for a line-of-sight (LOS) link, which here is used to estimate the channel transmittance, is given by~\cite{kahn1997wireless}:
\begin{align}
\label{eta_DC}
H_{\rm DC}=
\begin{cases}
\frac{A(m+1)}{2\pi d^2} \cos(\phi)^{m} T_s(\psi) \\
\times g(\psi) \cos(\psi) ~~~~~~~~~~~~~~~~ 0 \leq \psi \leq \Psi_c \\
0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \text{elsewhere}, \end{cases}
\end{align}
where $d$ is the distance between the QKD sender and the QKD node on the ceiling; $\psi$ symbolizes the incidence angle with respect to the receiver axis, whereas $\phi$ represents the irradiance angle. Such parameters describe the relative position and orientation between the transmitter and receiver modules. For instance, the orientation in case 3 is modeled by assuming that the transmitter and receiver axes are identical and the beam angle is narrow. $T_s(\psi)$ is the filter signal transmission; $m$ and $g(\psi)$ are, respectively, the Lambert's mode number used to define the directivity of the source beam and the concentrator gain, which are given by
\begin{align}
m=\frac{-\ln 2}{\ln(\cos (\Phi_{1/2}))}
\end{align}
and
\begin{align}
\label{g_psi}
g(\psi)= \begin{cases}
\frac{n^2}{\sin^2(\Psi_{c})} ~~~~~~ 0 \leq \psi \leq \Psi_c
\\
0 ~~~~~~~~~~~~~~~~\psi > \Psi_c.
\end{cases}
\end{align}
In \eqref{eta_DC}-\eqref{g_psi}, $\Psi_c$, $\Phi_{1/2}$ and $n$ are, respectively, the receiver's FOV, the semi-angle at half power of the light source, and the refractive index of the concentrator. Note that the narrower the FOV, the higher the concentrator gain is. This, of course, is subject to certain practical constraints at very low FOVs, which we try to avoid.
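For concreteness, the following Python snippet (an illustrative sketch, not the simulation code used for the results below) evaluates \eqref{eta_DC}-\eqref{g_psi}; the function names and the numbers in the example call are assumptions made here for illustration only.
\begin{verbatim}
import math

def lambertian_order(phi_half_deg):
    # Lambert's mode number m from the semi-angle at half power.
    return -math.log(2) / math.log(math.cos(math.radians(phi_half_deg)))

def concentrator_gain(psi_deg, fov_deg, n=1.5):
    # g(psi): nonzero only within the receiver's FOV.
    if psi_deg <= fov_deg:
        return n**2 / math.sin(math.radians(fov_deg))**2
    return 0.0

def h_dc(area, d, phi_deg, psi_deg, phi_half_deg, fov_deg,
         Ts=1.0, n=1.5):
    # LOS channel DC gain H_DC; returns 0 outside the FOV.
    if psi_deg > fov_deg:
        return 0.0
    m = lambertian_order(phi_half_deg)
    return (area * (m + 1) / (2 * math.pi * d**2)
            * math.cos(math.radians(phi_deg))**m * Ts
            * concentrator_gain(psi_deg, fov_deg, n)
            * math.cos(math.radians(psi_deg)))

# Hypothetical example: 1 cm^2 collection area, 3 m above the
# source, perfect alignment, 20-degree source, 6-degree FOV.
print(h_dc(1e-4, 3.0, 0.0, 0.0, 20.0, 6.0))
\end{verbatim}
For these illustrative numbers the gain is on the order of $10^{-3}$, i.e., roughly 23~dB of path loss.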
\subsubsection{Optical fiber link}
As for the optical link, we make the following assumptions. We consider a loss coefficient $\alpha$ in dB/km in the single-mode fiber. We also assume that the loss contributed by each multi-port DWDM multiplexer, labeled as AWG (arrayed waveguide grating) in Figs.~\ref{fig_scenario_1}--\ref{fig_scenario_4} is $\Lambda$ in dB. We neglect the loss associated with two-to-one multiplexers.
As we mentioned earlier, the main source of background noise in QKD channels in a fiber link is Raman scattering. The Raman noise generated by a strong classical signal spans a wide range of frequencies, and hence can populate the QKD receivers with unwanted signals~\cite{Eraerds2010_1Gbps}. The receivers can be affected by forward and backward scattered light depending on their locations and the direction of light propagation \cite{Bahrani2016orthogonal}. For a classical signal with intensity $I$ at wavelength $\lambda_d$, the power of Raman noise at a QKD receiver with bandwidth $\Delta \lambda$ centered at wavelength $\lambda_q$ is given by~\cite{Eraerds2010_1Gbps,Patel2012coexistence}
\begin{align}
{I^{f}_{R}}(I,L,\lambda_d,\lambda_q)=Ie^{-\alpha L}L \Gamma (\lambda_d,\lambda_q) \Delta \lambda
\end{align}
for forward scattering, and
\begin{align}
{I^{b}_{R}}(I,L,\lambda_d,\lambda_q)=I\frac{(1-e^{-2 \alpha L })}{2 \alpha} \Gamma (\lambda_d,\lambda_q) \Delta \lambda
\end{align}
for backward scattering, where $L$ is the fiber length and $\Gamma (\lambda_d,\lambda_q)$ is the Raman cross section (per unit of fiber length and bandwidth), which can be measured experimentally. In our work, we have used the results reported in~\cite{Eraerds2010_1Gbps} for $\lambda_d = 1550$~nm and have used the prescription in~\cite{Bahrani2016orthogonal} to adapt it to any other wavelengths in the C band. The transmitted power $I$ is also set to secure a bit error rate (BER) of no more than 10$^{-9}$ for all data channels.
A photodetector would then collect a total average number of photons, due to forward and backward scattering, respectively, given by
\begin{align}
\label{murf}
{\mu^{f}_{R}}=\frac{\eta_d {I^{f}_{R}} \lambda_q T_d}{hc}
\end{align}
and
\begin{align}
\label{murb}
{\mu^{b}_{R}}=\frac{\eta_d {I^{b}_{R}} \lambda_q T_d}{hc},
\end{align}
where $T_d$, $\eta_d$ and $h$, respectively, represent the detectors' gate duration, their quantum efficiency and Planck's constant with $c$ being the speed of light in the vacuum.
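As an illustration of how these expressions are combined in our numerical code, the short Python sketch below implements the forward and backward Raman terms and the photon-count conversion of \eqref{murf} and \eqref{murb}. It is a simplified sketch: the Raman cross section $\Gamma$ is treated as a constant input, and all numbers in the example call (launch power, cross section, filter bandwidth) are hypothetical placeholders rather than the values used in Sec.~\ref{Sec:NumericalResults}.
\begin{verbatim}
import math

H_PLANCK = 6.626e-34   # J s
C_LIGHT = 3.0e8        # m/s

def i_raman_fwd(I, L, gamma, d_lambda, alpha):
    # Forward Raman power; alpha is the linear attenuation (1/km),
    # L in km, gamma in 1/(km nm), d_lambda in nm.
    return I * math.exp(-alpha * L) * L * gamma * d_lambda

def i_raman_bwd(I, L, gamma, d_lambda, alpha):
    # Backward Raman power over a fiber of length L.
    return (I * (1.0 - math.exp(-2.0 * alpha * L)) / (2.0 * alpha)
            * gamma * d_lambda)

def photons_per_gate(power, lambda_q, T_d, eta_d):
    # Average detected photon number per gate (cf. the expressions
    # for mu_R^f and mu_R^b above).
    return eta_d * power * lambda_q * T_d / (H_PLANCK * C_LIGHT)

# Hypothetical example: 0.2 dB/km fiber, 20 km span, 3.5 uW launch
# power, an assumed cross section of 2e-9 /(km nm), 0.8 nm filter.
alpha_lin = 0.2 * math.log(10) / 10.0   # dB/km -> 1/km
p_fwd = i_raman_fwd(3.5e-6, 20.0, 2e-9, 0.8, alpha_lin)
print(photons_per_gate(p_fwd, 1550e-9, 100e-12, 0.3))
\end{verbatim}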
\section{Key Rate Analysis}
\label{Sec:KeyRateAnalysis}
In this section, the secret key rate analysis for our proposed setups is presented considering non-idealities in the system. The secret key rate is defined as the asymptotic ratio between the number of secure bits and sifted bits. Without loss of generality, we only calculate the rate for user 1 assuming that there is no eavesdropper present. The DS-BB84~\cite{MXF:Practical:2005} and GG02 protocols are used for setups 1 and 2, while the MDI-QKD protocol~\cite{Lo2012MDI-QKD, Ma2012alternative} is employed for setups 3 and 4.
\subsection{Setups 1 and 2}
\subsubsection{DS-BB84 protocol}
The lower bound for the key generation rate in the limit of an infinitely long key is given by~\cite{MXF:Practical:2005}
\begin{align}
\label{KeyRate_limit}
R\geq q\lbrace -Q_\mu f h(E_\mu)+Q_1[1-h(e_1)]\rbrace,
\end{align}
where all new parameters are defined in Appendix~\ref{App:DS-BB84_KeyRate}. There, we show that the expected value for these parameters in our loss and background induced model for the channel mainly depends on two parameters: the overall efficiency of each link $\eta$, and the total background noise per detector, denoted by $n_N$. Here, $n_N$ accounts for both dark counts and background noise in the link. In the following, we specify how these parameters can be calculated in each setup.
In setup 1, we have two links, a wireless link and a wired link. Below, the parameter values for each link will be calculated separately.
\noindent{\bf Setup 1, wireless link:} For the wireless channel, we denote the background noise due to the artificial lighting source by $n_{B_1}$, which can be calculated using the methodology proposed in \cite{Wireless_indoor_QKD}. In our calculations, we upper bound $n_{B_1}$ by considering the case where the QKD receiver is focused on the center of the room. The total noise per detector, $n_N$, is then given by
\begin{align}
{n_N=n_{B_1} \eta_{d_1}/2 + n_{dc}},
\end{align}
where $\eta_{d_1}$ is the efficiency of each detector in the Rx box, and $n_{dc}$ is the dark count rate per pulse for each detector in the Rx box in Fig.~\ref{fig_scenario_1}. We neglect the impact of the ambient noise in our windowless room~\cite{Wireless_indoor_QKD}. The total transmissivity is also given by $\eta = H_{\rm DC} \eta_{d_1}/2$. The factor 1/2 represents the loss in the passive time-bin decoder consisting of a Mach-Zehnder interferometer.
\noindent{\bf Setup 1, fiber link:} As for the fiber-based link, the background noise is mainly induced by the Raman scattered light. In this setup, where Bob's receiver is at the central office, forward scattered light is generated because of the classical signals sent by the users and backward scattered light is due to the signals sent by the central office. The total power of Raman noise, at wavelength $\lambda_{q_1}$, for forward and backward scattering are, respectively, given by
\begin{align}
I^{f}_{T1} =& [I_R^f(I,L_0 + L_1,\lambda_{d_1},\lambda_{q_1}) \nonumber \\
&+ \sum_{k=2}^N{I_R^f(Ie^{-\alpha L_k},L_0,\lambda_{d_k},\lambda_{q_1})}] 10^{-2\Lambda/10}
\end{align}
and
\begin{align}
I^{b}_{T1} = & [I_R^b(I,L_0 + L_1,\lambda_{d_1},\lambda_{q_1})
\nonumber \\
&+ \sum_{k=2}^N{I_R^b(I,L_0,\lambda_{d_k},\lambda_{q_1})}] 10^{-2\Lambda/10},
\end{align}
where $L_0$ is the total distance between the central office and the AWG box at the users' splitting point and $L_k$ is the distance of the $k$th user to the same AWG in the access network. In the above equations, we have neglected the out-of-band Raman noise that will be filtered by relevant multiplexers in our setup. For instance, in calculating $I_{T1}^f$, we account for the effect of the forward Raman noise by the data signal generated by User 1 over a total distance of $L_0+L_1$, but, a similar effect by other users is only accounted for over a distance $L_0$. That is because the AWG box filters most of the Raman noise at $\lambda_{q_1}$ generated over distances $L_k$ and their effect can be neglected. By substituting the above equations in \eqref{murf} and \eqref{murb}, the total background noise per detector, at Bob's end in Fig.~\ref{fig_scenario_1}, is given by
\begin{align}
n_{N}=\frac{\eta_{d_2} \lambda_{q_1}T_d}{2hc}(I^{f}_{T1}+I^{b}_{T1})+n_{dc},
\end{align}
where $\eta_{d_2}$ is the detector efficiency at Bob's receiver. Note that in setup 1 we consider two different values for $\eta_{d_1}$ and $\eta_{d_2}$. The reason is that the former corresponds to the avalanche photodiode (APD) single-photon detectors at 880~nm, while the latter could be for InGaAs APD single-photon detectors within the 1550~nm band.
The total transmissivity $\eta$ for the fiber link is given by $\eta_{\rm fib}\eta_{d_2}/2$, where $\eta_{\rm fib}$ is the optical fiber channel transmittance including the loss associated with AWGs given by
\begin{align}
\label{fiber_channel_transmittanc}
{\eta_{\rm fib}=10^{-[\alpha(L_1+L_0)+2\Lambda]/10}.}
\end{align}
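The following sketch (again, illustrative Python rather than our actual simulation code) collects the fiber-link bookkeeping above: the transmittance of \eqref{fiber_channel_transmittanc} and the conversion of the total Raman powers $I^{f}_{T1}$ and $I^{b}_{T1}$ into the noise count $n_N$. The Raman power values in the example call are hypothetical placeholders; in practice they would be obtained from the sums defined above.
\begin{verbatim}
H_PLANCK = 6.626e-34   # J s
C_LIGHT = 3.0e8        # m/s

def eta_fib(L0, L1, alpha_dB=0.2, awg_loss_dB=2.0):
    # Fiber-link transmittance with two AWG passes; lengths in km.
    return 10 ** (-(alpha_dB * (L0 + L1) + 2 * awg_loss_dB) / 10)

def noise_per_detector(I_fwd, I_bwd, eta_d2=0.3,
                       lambda_q=1555.62e-9, T_d=100e-12, n_dc=1e-7):
    # Total background noise per detector at Bob's side (setup 1).
    return (eta_d2 * lambda_q * T_d * (I_fwd + I_bwd)
            / (2 * H_PLANCK * C_LIGHT) + n_dc)

# Hypothetical example: L0 = 20 km, L1 = 0.5 km, and placeholder
# Raman powers of 1e-13 W each for I^f_T1 and I^b_T1.
print(eta_fib(20.0, 0.5), noise_per_detector(1e-13, 1e-13))
\end{verbatim}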
\noindent{\bf Setup 2:} In setup 2, the total Raman noise power for forward and backward scattering, denoted by $I^{f}_{T2}$ and $I^{b}_{T2}$ are given by $I^{f}_{T1}$ and $I^{b}_{T1}$, respectively. The total background noise per detector at Bob's end in Fig.~\ref{fig_scenario_2} is then given by
\begin{align}
\label{tot_backgroundNoise}
n_N = \frac{\eta_{d_2}}{2}\left[\frac{ \lambda_{q_1} T_d}{hc} \left( {I^{f}_{T2}} + {I^{b}_{T2}}\right) + n_{B_1}\eta_{\rm fib} \eta_{\rm coup}\right]
+ n_{dc},
\end{align}
where $\eta_{\rm coup}$ is the additional air-to-fiber coupling loss that the indoor background photons, generated by the bulb, will experience before reaching the QKD receiver. The total channel transmittance between the sender and the recipient in this setup is given by $\eta=H_{\rm DC}\eta_{\rm coup}\eta_{\rm fib}\eta_{d_2}/2$.
\subsubsection{GG02 protocol}
The secure key rate for GG02 with reverse reconciliation under collective attacks is given by~\cite{fossier2009field}
\begin{align}
K= \beta I_{AB}-\chi_{BE},
\end{align}
where $\beta$ is the reconciliation efficiency. $I_{AB}$ and $\chi_{BE}$ are, respectively, the mutual information between Alice and Bob, and the amount of information obtained by the adversary in reverse reconciliation. More details can be found in Appendix \ref{App:GG02_KeyRate}.
GG02 is characterized by the channel transmissivity $\eta_{\rm ch}$ and the excess noise $\varepsilon$. For estimating the latter, we need to consider the contribution of the bulb, $\varepsilon_{b}$, as well as the Raman scattering, $\varepsilon_{r}$. The total excess noise, $\varepsilon$, is then given by $\varepsilon_{b} + \varepsilon_{r} + \varepsilon_{q}$, where $\varepsilon_{q}$ is any other additional noise observed in the experiment. In the Appendix~\ref{App:GG02_KeyRate} formulation, the excess noise terms must be calculated at the input. For chaotic sources of light, if the average noise count at the end of a channel with transmissivity $\eta_t$ is given by $n$, the corresponding excess noise at the input would be given by $2n/\eta_t$ \cite{Feasibility2010dense,kumar2015coexistence}. Below, we use this expression to calculate $\varepsilon_{b}$ and $\varepsilon_{r}$ assuming that both the Raman noise and the bulb-induced background noise are of chaotic-light nature.
\noindent{\bf Setup 1, wireless link:} In setup 1, the background noise due to the bulb is denoted by $n_{B_1}$. This is the total background noise at the Rx box input. Given that the LO would pick a single spatio-temporal mode with matching polarization, the corresponding count that sneaks into the homodyne receiver would be $n_{B_1}/2$. The corresponding excess noise would then be given by $\varepsilon_{b} = n_{B_1}/H_{\rm DC}$ and $\varepsilon = \varepsilon_{b} + \varepsilon_{q} $. In this case, $\eta_{\rm ch} = H_{\rm DC}$. In an experiment, $\varepsilon_{q}$ is often calculated by measuring the corresponding parameter, $\varepsilon_{q}^{\rm rec}$, at the receiver. In this case, $\varepsilon_{q} = \varepsilon_{q}^{\rm rec}/(\eta_{\rm ch} \eta_B)$, where $\eta_B$ is Bob's receiver overall efficiency.
\noindent{\bf Setup 1, fiber link:} In this case, $\eta_{\rm ch} = \eta_{\rm fib}$, $\varepsilon_{b} = 0$, and $\varepsilon_{r} = n_r/\eta_{\rm ch}$, where
\begin{align}
n_{r}=\frac{\lambda_{q_1}T_d}{hc}(I^{f}_{T1}+I^{b}_{T1}).
\end{align}
\noindent {\bf Setup 2:} In setup 2, $\eta_{\rm ch} = H_{\rm DC} \eta_{\rm coup} \eta_{\rm fib}$, $\varepsilon_{b} = n_{B_1}/H_{\rm DC}$, and $\varepsilon_{r} = n_r/\eta_{\rm ch}$, where
\begin{equation}
n_r = \frac{\lambda_{q_1} T_d}{hc} \left( {I^{f}_{T2}} + {I^{b}_{T2}}\right).
\end{equation}
In all CV-QKD setups, we assume that a phase reference for the LO is available at the receiver.
\subsection{Setups 3 and 4 with MDI-QKD protocol}
The secret key rate for the MDI-QKD setup is given in Appendix \ref{App:MDI-QKD}. The key parameters to find for this scheme are $\eta_a$ and $\eta_b$, which, respectively, correspond to the total transmissivities of the Alice and Bob channels, as well as $n_N$, which is the total background noise per detector. Here we find these parameters for setups 3 and 4.
\noindent{\bf Setup 3:}
The total forward and backward Raman noise power for setup 3 at wavelength $\lambda_{q_1}$ are, respectively, given by
\begin{align}
I^{f}_{T3} =& [I_R^f(I,L_0 + L_1,\lambda_{d_1},\lambda_{q_1}) \nonumber \\
&+ e^{-\alpha L_1}\sum_{k=2}^N{I_R^f(I,L_0,\lambda_{d_k},\lambda_{q_1})}] 10^{-2\Lambda/10}, \nonumber \\
I^{b}_{T3} =& [I_R^b(I,L_0 + L_1,\lambda_{d_1},\lambda_{q_1})
\nonumber \\
&+ e^{-\alpha L_1} \sum_{k=2}^N{I_R^b(I e^{-\alpha L_k},L_0,\lambda_{d_k},\lambda_{q_1})}] 10^{-2\Lambda/10}.
\end{align}
The total noise per detector, $n_N$, for setup 3 is then given by
\begin{align}
n_N = \frac{\eta_{d_2}}{4}\left[\frac{ \lambda_{q_1} T_d}{hc} \left( {I^{f}_{T3}} + {I^{b}_{T3}}\right) + n_{B_1} \eta_{\rm coup}\right]
+ n_{dc},
\end{align}
where we account for one particular polarization entering the BSM module.
In setup 3, $\eta_{a} = H_{\rm DC} \eta_{d2}\eta_{\rm coup}/2$ and $\eta_{b} = \eta_{d2}\eta_{\rm fib}/2$, assuming an average loss factor of 1/2 for polarization mismatch. Note that the two modes interfering at the BSM must have matching polarizations. This can be achieved passively by using polarization filters before the 50:50 beam splitter in the BSM, in which case an average loss of 1/2 is expected, or, alternatively, we need to use an active polarization stabilizer, for which the corresponding loss factor approaches one.
\noindent{\bf Setup 4:}
The total forward and backward Raman noise power for setup 4 at wavelength $\lambda_{q_1}$ are, respectively, given by
\begin{align}
I^{f}_{T4}=& [I_R^f(I,L_0,\lambda_{d_1},\lambda_{q_1})+ \sum_{k=2}^N{I_R^f(I,L_0,\lambda_{d_k},\lambda_{q_1})}] \nonumber \\
&\times 10^{-2\Lambda/10}+I_R^f(I,L_1,\lambda_{d_1},\lambda_{q_1}), \nonumber \\
I^{b}_{T4} = & [I_R^b(Ie^{-\alpha L_1},L_0,\lambda_{d_1},\lambda_{q_1}) \nonumber \\ & + \sum_{k=2}^N{I_R^b(Ie^{-\alpha L_k},L_0,\lambda_{d_k},\lambda_{q_1})}\nonumber \\
& +I_R^b(Ie^{-\alpha L_0},L_1,\lambda_{d_1},\lambda_{q_1})]
10^{-2\Lambda/10}.
\end{align}
The total noise per detector, $n_N$, for setup 4 is as follows
\begin{align}
n_N = \frac{\eta_{d_2}}{4}\left[\frac{\lambda_{q_1} T_d}{hc} \left( {I^{f}_{T4}} + {I^{b}_{T4}}\right) + n_{B_1} \eta_{\rm coup}10^{-\alpha L_1/10}\right]
+ n_{dc}.
\end{align}
In setup 4, $\eta_{a} = H_{\rm DC} \eta_{d_2} \eta_{\rm coup} 10^{-\alpha L_1/10}/2$ and $\eta_{b} = \eta_{d_2} 10^{-[\alpha L_0+2\Lambda]/10}/2$.
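As a quick sanity check of the loss bookkeeping for the untrusted-relay setups, the Python sketch below (illustrative only) computes $\eta_a$ and $\eta_b$ for setup 4; the numerical inputs in the example call ($H_{\rm DC}$, coupling loss, fiber lengths) are hypothetical.
\begin{verbatim}
def setup4_transmissivities(H_DC, eta_coup, L0, L1, eta_d2=0.3,
                            alpha_dB=0.2, awg_loss_dB=2.0):
    # eta_a, eta_b for setup 4; the factor 1/2 accounts for the
    # passive polarization filtering at the BSM.
    eta_a = H_DC * eta_d2 * eta_coup * 10 ** (-alpha_dB * L1 / 10) / 2
    eta_b = eta_d2 * 10 ** (-(alpha_dB * L0 + 2 * awg_loss_dB) / 10) / 2
    return eta_a, eta_b

# Hypothetical example: H_DC = 5e-3, 10 dB coupling loss,
# L0 = 20 km, L1 = 0.5 km.
print(setup4_transmissivities(5e-3, 0.1, 20.0, 0.5))
\end{verbatim}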
\section{Numerical Results}
\label{Sec:NumericalResults}
\begin{table}[t]
\caption{Nominal values used for our system parameters.}
\begin{tabular}{|c|c|} \hline
{\bf System Parameters} & {\bf Nominal value}\\ \hline
Number of users, $N$ & 32 \\
Fiber attenuation coefficient, $\alpha$ & 0.2 dB/km \\
AWG insertion loss, $\Lambda$ & 2 dB \\
Room size, $X$,$Y$,$Z$ & ($4 \times 4 \times 3$) m$^3$ \\
Semi-angle at half power of the bulb & $70^{\circ}$ \\
Reflection coefficients of the walls and floor & $0.7$ \\
Detector area & $1$ cm$^2$ \\
Refractive index of the concentrator& 1.5 \\
Semi-angle at half power of QKD source, $\Phi_{1/2}$ & $20^{\circ}$, $1^{\circ}$ \\
\hline
{\bf DV-QKD Parameters} & {\bf Nominal value}\\
\hline
Average number of photons per signal pulse, $\mu=\nu$ & 0.5\\
Error correction inefficiency, $f$ & 1.16 \\
Dark count per pulse, $n_{dc}$ & $10^{-7}$ \\
Detector gate width, $T_{d}$ & 100 ps \\
Relative-phase error probability, $e_{d}$ & 0.033 \\
Quantum efficiency of detector, $\eta_{d1}$,
{at 880~nm} & 0.6 \\
Quantum efficiency of detector, $\eta_{d2}$,
{at 1550~nm}& 0.3 \\
\hline
{\bf CV-QKD Parameters} & {\bf Nominal value}\\
\hline
Reconciliation efficiency, $\beta$ & 0.95 \\
Receiver overall efficiency, $\eta_{B}$ & 0.6 \\
Electronic noise (shot noise units), $v_{elec}$ & 0.015 \\
Excess noise (shot noise units), $\varepsilon_q^{\rm rec}$ & 0.002 \\\hline
\end{tabular}
\label{Table}
\end{table}
In this section, we provide some numerical results for secret key rates in the four proposed setups. We use a DWDM scheme with 100~GHz channel spacing in the C-band with 32 users. We define $Q$ = $\lbrace$1530.8 nm, 1531.6 nm,...,1555.62 nm$\rbrace$ and $D$ = $\lbrace$1560.4 nm, 1561.2 nm,...,1585.2 nm$\rbrace$ for quantum and classical channels, respectively. We assume that $\lambda_{q_1}$ is 1555.62 nm and the corresponding $\lambda_{d_1}$ is 1585.2 nm. The classical data is transmitted with launch power $I = 10^{(-3.85+\alpha L/10+2\Lambda/10)}$ mW, which corresponds to a receiver sensitivity of $-38.5$~dBm, guaranteeing a BER of $\textless$ 10$^{-9}$~\cite{Bahrani2016orthogonal}. In all setups, we assume that $L_1= L_2= \cdots=L_N$ are all equal to 500~m.
Other nominal parameter values used in our simulation are summarized in Table~\ref{Table}. These are based on values that are technologically available today. {In particular, for DV-QKD systems, we assume silicon-based single-photon detectors are used in the 800~nm regime (setup 1, indoor channel), whereas InGaAs detectors may need to be used in the 1550~nm regime (all other setups). The former often have higher quantum efficiencies than the latter. That is why in our numerical parameters, $\eta_{d1}$ is twice as big as $\eta_{d2}$. The dark count rate in such detectors varies from (100--1000)/s for an APD, to (1--100)/s for superconducting detectors~\cite{DUSEK2006381}. The average dark count rate considered here is 1000/s, which, over a period of $100$~ps, will result in $n_{dc} = 10^{-7}$. In the CV-QKD system, $\eta_{B}$ is Bob's receiver overall efficiency, which includes detector efficiencies and any insertion loss in the homodyne receiver. The parameter $\beta$ is the efficiency of our post-processing, which nowadays exceeds 95\% \cite{PhysRevA.84.062317}. The parameter values chosen for the receiver electronic noise and excess noise correspond to the observed values in recent CV-QKD experiments \cite{jouguet2013experimental}. Based on the values chosen for our system parameters, relevant parameters in Sec.~\ref{Sec:KeyRateAnalysis}, such as $\eta_{\rm fib}$ and $\eta_{\rm ch}$, can be calculated, from which the parameter $\eta$ for each setup is obtained. The noise parameter $n_N$, for each setup, can similarly be found. The Raman noise terms, in particular, have been calculated by extracting the Raman cross section from the experimental measurements reported in \cite{Eraerds2010_1Gbps}. Note that, in our numerical calculations, we often vary the coupling loss to study system performance.}
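Two of these nominal values follow from simple arithmetic, which we illustrate with the short (hypothetical) Python lines below: the launch power needed to meet the receiver sensitivity at a given distance, and the dark count per gate obtained from the assumed 1000/s dark count rate. The distance $L$ used here is a placeholder.
\begin{verbatim}
# Launch power meeting the -38.5 dBm receiver sensitivity after
# alpha*L of fiber loss and two AWG passes (L is hypothetical).
alpha_dB, awg_loss_dB, L = 0.2, 2.0, 50.0
launch_power_mW = 10 ** (-3.85 + alpha_dB * L / 10 + 2 * awg_loss_dB / 10)

# Dark count per gate: 1000 counts/s over a 100 ps gate.
n_dc = 1000.0 * 100e-12   # = 1e-7

print(launch_power_mW, n_dc)
\end{verbatim}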
In each setup, three cases are considered for the light beam orientation of the QKD source. In the first case, the semi-angle at half power of the QKD source is $\Phi_{1/2}$ = 20$^{\circ}$ while the QKD source is placed at the center of the room's floor. With the same $\Phi_{1/2}$, the QKD source is moved to the corner of the room in the second case. We use $\Phi_{1/2}$ = 1$^{\circ}$ in the third case where the QKD source is located at the corner of the room, as in the second case, but the beam is directed and focused toward the QKD receiver or the collection element. A full alignment is assumed in the third case, while in the other two cases the QKD source is sending light upward to the ceiling with a wider beam angle. As for the receiver, we assume that its telescope is dynamically rotating to collect the maximum power from the user in the three cases. We assume that the effective receiver's FOV would correspond to the numerical aperture (NA) of a single-mode fiber. For single-mode fibers, NA is about 0.1, which means that the corresponding FOV that can be coupled to the fiber is around 6$^{\circ}$. Here, the QKD receiver's FOV is assumed to be 6$^{\circ}$ in order to maximize the collected power.
\begin{figure}
\caption{The secret key rate per pulse versus the coupling loss, $\eta_{\rm coup}$.}
\label{fig_rate_vs_couping_loss_case1_2}
\end{figure}
The first thing we study here is whether the loose alignment in cases 1 and 2 would be sufficient for the proper operation of a networked wireless link. The short answer turns out to be negative for setups 2--4. We already know the result for setup 1 from the previous work in \cite{Wireless_indoor_QKD}, in which the authors show that, if the only source of lighting in the room is an LED bulb with a PSD on the order of $10^{-5}$--$10^{-6}$~W/nm, then there will be regions over which even in cases 1 and 2 the wireless user can exchange secret keys with the Rx box. This seems to no longer necessarily hold if we remove the trusted relay node in the room. In Fig.~\ref{fig_rate_vs_couping_loss_case1_2}, we have plotted the secret key rate versus the coupling loss for setups 2 to 4. While for a user in the center of the room, it may be marginally possible to exchange keys at PSD $=10^{-7}$~W/nm, once the user moves to the corner, the required PSD drops to $10^{-8}$~W/nm. This is not strange as in setups 2--4, we have more loss and additional sources of noise as compared to setup 1. The required parameter values may not, however, be achievable in practical settings, and that implies that dynamic beam steering may be needed at both the transmitter and the receiver side of a wireless QKD link.
There are several other observations that can be made from Fig.~\ref{fig_rate_vs_couping_loss_case1_2}. We have verified that the MDI-QKD with DS has a rather poor performance, and in order to tolerate substantial coupling loss, we need to use nearly ideal single-photon sources. It can also be seen that the performance of setups 3 and 4 is more or less the same. As expected, moving the BSM module around does not make a big difference in the key rate. Setup 3 has slightly better performance for the parameter values chosen here, partly because setup 4 might have slightly more Raman noise, as will be shown later. But, overall, if one needs to go with a trust-free relay node, its position can be decided based on the operational convenience without sacrificing much of the performance. In forthcoming graphs, we then only present the results for setup 3.
\begin{figure}
\caption{The secret key rate for setups 1--3 in case 3, in which the full alignment between the QKD node on the ceiling and wireless transmitter is obtained. The QKD source is placed at a corner of the room's floor, with semi-angle at half power $\Phi_{1/2} = 1^{\circ}$.}
\label{fig_rate_vs_couping_loss_case3}
\end{figure}
The situation is much more optimistic if full alignment, with $\Phi_{1/2}$ = 1$^{\circ}$, between the wireless QKD receiver and transmitter is attained (case 3). In this case, the QKD source is located at a corner of the room and transmits directly to the QKD receiver or the collector. The full alignment for this narrow beam would highly improve the channel transmissivity. Figure \ref{fig_rate_vs_couping_loss_case3}(a) shows key rate versus coupling loss at a PSD of $10^{-5}$~W/nm. It can be seen that coupling loss as high as 40~dB can be tolerated in certain setups. That leaves a large budget for loss in different elements of the system. As compared to Fig.~\ref{fig_rate_vs_couping_loss_case1_2}, the rate has also improved by around three orders of magnitude. For a fixed coupling loss of 10~dB, Fig.~\ref{fig_rate_vs_couping_loss_case3}(b) shows how the remaining loss budget can be used to reach farther central offices. It seems that tens of kilometers are reachable with practical decoy-state signals in all setups. In this figure, we have also shown the total key rate for setup 1, which can serve as a benchmark for other setups. For a repetition rate of 1~GHz, keys can be exchanged at a total rate ranging from kbps to Mbps at moderate distances.
\begin{figure}
\caption{Noise counts per detector due to (a) forward Raman scattering, (b) backward Raman scattering, (c) the artificial lighting source, and (d) the total background noise $n_N$, all in count per pulse (c/p), versus $L_0$. The bulb's PSD is $10^{-5}$~W/nm.}
\label{fig_four_figures}
\end{figure}
{There are additional interesting, but somewhat puzzling, points in Fig.~\ref{fig_rate_vs_couping_loss_case3}. For instance, in Fig.~\ref{fig_rate_vs_couping_loss_case3}(a), the MDI-QKD curve with DS implies that no secret keys can be exchanged at low coupling losses. This is counter-intuitive. But, we have verified that the same behavior is seen in asymmetric MDI-QKD systems, when one user's signal, say Alice's, is accompanied by background noise. Such background noise would therefore undergo the same amount of loss as Alice's signal. In a particular regime, where the background noise is comparable to Bob's rate of photon arrival at the BSM module, such background photons could masquerade as Bob's photons and cause errors. In setup 3, the background noise that accompanies Alice's signal is the bulb noise. If we make the coupling loss very low, such noise would easily get into our BSM module and cause errors. This explains the strange behavior of the MDI-QKD curve in Fig.~\ref{fig_rate_vs_couping_loss_case3}(a). Another detailed point is in Fig.~\ref{fig_rate_vs_couping_loss_case3}(b), in which the maximum security distance for setup 2, with 10~dB of coupling loss, is 60~km. In that case, one may expect that the security distance for setup 1, with no coupling loss, should be 50~km (corresponding to 10~dB of fiber loss) longer, i.e., 110~km. The difference is, however, around 30~km. This turns out to be because of the additional Raman noise at longer distances. In order to understand this and the previous observation better, we need to explore the noise characteristics of the system, as we do next.}
{In Fig.~\ref{fig_four_figures}, we have plotted the noise counts per detector due to (a) forward Raman scattering (FRS), (b) backward Raman scattering (BRS), (c) the lighting source bulb, and (d) the total background noise $n_N$ for each setup. In each setup, the (a)--(c) noise components have been obtained from the corresponding expression for $n_N$ by breaking it into its individual terms. There are several observations to be made. In terms of order of magnitude, all three sources of noise in Figs.~\ref{fig_four_figures}(a)-(c) are larger than, or comparable to, the dark count noise per pulse, where the latter in our setup is $10^{-7}$/pulse. This proves the relevance of our analysis that accounts for Raman and background noise. In Fig.~\ref{fig_four_figures}(a), the FRS in setup 4 has a surprising rise at long distances. This is because of the launch power control scheme in use, which requires the data transmitters to send a larger amount of power proportional to the channel loss. At a short fixed $L_1$, this additional power creates additional FRS in setup 4. The effect of FRS is, however, negligible when compared to BRS, which is roughly two orders of magnitude higher than FRS. BRS increases with fiber length because of the power control scheme, and will be the major source of noise at long distances. This increase in BRS justifies the shorter-than-expected security distances in Fig.~\ref{fig_rate_vs_couping_loss_case3}(b). Finally, it can be seen why MDI-QKD setups are more vulnerable to bulb noise than the DS system of setup 2. The bulb noise would enter the BSM module in setups 3 and 4 attenuated mainly by the coupling loss, whereas in setup 2, it will be further attenuated by the channel loss. That is partly why the rate in setup 2 can be higher than that of setups 3 and 4. Based on these results, one can conclude that, if the MDI property is not a crucial design factor, setup 2 could offer a reasonable practical solution to the scenarios where a trusted relay is not available. In the rest of this section, we will then compare the performance of different protocols that can be run in setup 2.}
Figure \ref{fig_three_figures_case3} compares the GG02 performance in setups 1 and 2 with DS-BB84. In Fig.~\ref{fig_three_figures_case3}(a) we study the resilience of either scheme against background noise at low values of coupling loss. As has been shown for fiber-based systems \cite{CV_ResilienceBQi}, CV-QKD can tolerate a higher amount of background noise in this regime due to the intrinsic filtering properties of its local oscillator. That benefit would however go away if the coupling loss roughly exceeds 10~dB in our case; see Fig.~\ref{fig_three_figures_case3}(b). {This implies that full beam steering is definitely a must when it comes to CV-QKD.} Depending on the setting of the system, the operator can decide whether a DV or a CV scheme is the better option.
\begin{figure}
\caption{Comparison of the GG02 and DS-BB84 protocols for setup 2 and case 3 (except for the curve labeled GG02 (setup 1)). (a) Secret key rate per pulse versus total background noise. The latter is assumed to be per detector for DV-QKD, while it is per spatio-temporal mode for CV-QKD. (b) Secret key rate per pulse versus coupling loss, $\eta_{\rm coup}$.}
\label{fig_three_figures_case3}
\end{figure}
Figure~\ref{fig_tradeoff} shows the relevant regimes of operation for DV and CV-QKD schemes in a different way. In Fig.~\ref{fig_tradeoff}(a), we have looked at the maximum coupling loss tolerated by each of the two schemes for a given background noise. It is clear that while for low values of coupling loss, CV-QKD can tolerate more noise, at high values of coupling loss DV-QKD is the only option, although it can tolerate less noise. There is therefore a trade-off between the amount of coupling loss versus background noise the system can tolerate. In Fig.~\ref{fig_tradeoff}(b), we have compared the two systems from the clock rate point of view. CV-QKD is often practically constrained by its low repetition rate. In Fig.~\ref{fig_tradeoff}(b), we have fixed the CV repetition rate to 25 MHz~\cite{wang201525} and have found out at what clock rate the DV system offers a higher total key rate than the CV one. For numerical values used in our simulation this cross-over rate is around {200}~MHz, which is achievable for today's DV-QKD systems. The ultimate choice between DV and CV would then depend on the characteristics of the system, such as loss and noise levels, as well as the clock rate available to the QKD system.
\begin{figure}
\caption{(a) Regions of secure operation for DV-QKD (DS-BB84) and CV-QKD (GG02) protocols for setup 2 (case 3). The curves show the maximum tolerable background noise at different values of coupling loss, $\eta_{\rm coup}$.}
\label{fig_tradeoff}
\end{figure}
\section{Conclusions}
\label{Sec:Conclusions}
We proposed and studied four configurations that enabled wireless access to hybrid quantum-classical networks. All these setups included an initial wireless indoor link that connected a quantum user to the network. Each user, in the access network, could also communicate classically with the central office via another wavelength in the same band. We considered setups in which a local relay point could be trusted, as well as setups where such a trust was not required. We showed that with proper beam alignment it was possible, in both DV- and CV-QKD, to achieve positive key rates for both trusted and untrusted relay points in certain indoor environments.
{The choice of the optimum setup would depend on various system parameters, which we studied in our analysis. For instance, we found that our MDI-QKD setups, which offered trust-free QKD immune to measurement attacks, were mostly insensitive to the position of their measurement module, but could suffer harshly from the background noise generated in the indoor environment. If the immunity to measurement attacks was not required, we could simply collect QKD signals at the ceiling and couple them into optical fibers along with other data channels. With decoy-state techniques, we showed that we could tolerate up to 30~dB of coupling loss in such a setting, provided that full alignment is achieved. At long distances, the Raman noise induced by the data channels would also take its toll on the maximum secure distance limiting it to tens of kilometers. Both Raman noise and the background noise due to the artificial light source in the indoor environment could be orders of magnitude larger than the static dark count of single-photon detectors. We also showed that in the low coupling loss regime, CV-QKD could offer higher rates and more resilience to background noise than DV-QKD systems. But, overall, DV-QKD schemes could offer a more stable and flexible operation adaptable to a wider range of scenarios. In short, using our analytical results, we can identify the winner in realistic setups that enable high-rate wireless access to future quantum networks.}
\appendices
\section{DS-BB84 key rate analysis}
\label{App:DS-BB84_KeyRate}
In this appendix, the secret key generation rate of the DS-BB84 protocol is calculated. The lower bound for the key rate, in the limit of an infinitely long key, is given by~\cite{MXF:Practical:2005}
\begin{align}
\label{App:KeyRateLB}
R\geq q\lbrace -Q_\mu f h(E_\mu)+Q_1[1-h(e_1)]\rbrace,
\end{align}
where $q$ is the basis-sift factor, which is assumed to approach 1 in the efficient BB84 protocol \cite{Lo2005efficient} as employed in this work. The error correction inefficiency is denoted by $f>1$ and $\mu$ is the average number of photons per signal pulse. Moreover, in \eqref{App:KeyRateLB}, $Q_\mu$, $E_\mu$, $Q_1$, $e_1$ and $h(x)$ are, respectively, the overall gain, the quantum bit error rate (QBER), the single-photon gain, the error rate in single-photon states and the Shannon binary entropy function. In the case of a lossy channel with a total transmissivity of $\eta$ and a total background noise per detector of $n_N$, the above parameters are given by~\cite{Panayi2014memory}:
\begin{align}
Q_\mu=& 1-e^{-\eta\mu}(1-n_N){^2}, \nonumber \\
E_{\mu}=& \frac{e_{0}{Q_\mu}-({e_0}-{e_d})(1-e^{-\eta\mu})(1-n_N)}{Q_\mu}, \nonumber \\
Q_1=& Y_{1}\mu{e^{-\mu}}, e_1= \frac{e_0{Y_1}-({e_0}-{e_d})\eta(1-n_N)}{Y_1},
\end{align}
where $e_0=1/2$ and
\begin{align}
Y_1=& 1-(1-\eta)(1-n_N)^2, \nonumber \\
h(x)=& -x\log{_2}x-(1-x)\log{_2}(1-x),
\end{align}
where we assume that there has been no eavesdropping activity in the channel. This is considered to be the normal operating mode of the system, and the key rate calculated under the above conditions would give us a sense of what we may expect from our QKD system in practice. The same assumptions have been used to calculate the key rate of the other protocols, as we see next.
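For reference, the following Python sketch implements the above rate bound; it is an illustrative transcription of the equations (not our production code), and the parameter defaults simply mirror Table~\ref{Table}. The values in the example call are hypothetical.
\begin{verbatim}
import math

def h2(x):
    # Binary Shannon entropy.
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def bb84_decoy_rate(eta, n_N, mu=0.5, e_d=0.033, f=1.16,
                    q=1.0, e0=0.5):
    # Lower bound on the DS-BB84 key rate per pulse.
    Q_mu = 1 - math.exp(-eta * mu) * (1 - n_N)**2
    E_mu = (e0 * Q_mu
            - (e0 - e_d) * (1 - math.exp(-eta * mu)) * (1 - n_N)) / Q_mu
    Y_1 = 1 - (1 - eta) * (1 - n_N)**2
    Q_1 = Y_1 * mu * math.exp(-mu)
    e_1 = (e0 * Y_1 - (e0 - e_d) * eta * (1 - n_N)) / Y_1
    return q * (-Q_mu * f * h2(E_mu) + Q_1 * (1 - h2(e_1)))

# Hypothetical example: 10% total transmissivity, 1e-6 noise count.
print(bb84_decoy_rate(0.1, 1e-6))
\end{verbatim}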
\section{GG02 key rate analysis}
\label{App:GG02_KeyRate}
The secret key rate for GG02 with reverse reconciliation, under collective attacks, is given by~\cite{fossier2009field}
\begin{align}
\label{App:GG02_KeyRateFormula}
K= \beta I_{AB}-\chi_{BE},
\end{align}
where $\beta$ is the reconciliation efficiency, $I_{AB}$ is the mutual information between Alice and Bob, which, for a Gaussian channel, is given by
\begin{align}
I_{AB}= \frac{1}{2} \log_{2}\frac{V+\chi_{tot}}{1+\chi_{tot}},
\end{align}
where $V$ and $\chi_{tot}$ are, respectively, the total variance and the total noise given by
\begin{align}
V= V_{A}+1,
\end{align}
with $V_A$ being the variance of Alice's quadrature modulation and
\begin{align}
\chi_{tot}=\chi_{line}+\chi_{hom}/\eta_{\rm ch},
\end{align}
in which
\begin{align}
\label{channel_noise}
\chi_{line}=& \frac{1-\eta_{\rm ch}}{\eta_{\rm ch}} +\varepsilon, \nonumber \\
\chi_{hom}=& \frac{1-\eta_{B}}{\eta_{B}} +\frac{v_{elec}}{\eta_{B}},
\end{align}
are, respectively, the noise due to the channel and the noise stemming from homodyne detection. Also, the parameters $\eta_{B}$, $v_{elec}$, $\varepsilon$ and $\eta_{\rm ch}$, are, respectively, Bob's overall efficiency, electronic noise variance induced by homodyne electronic board, excess noise, and the channel transmittance.
In \eqref{App:GG02_KeyRateFormula}, $\chi_{BE}$ is the Holevo information between Eve and Bob, and it is given by
\begin{align}
\chi_{BE}= g(\Lambda_1)+g(\Lambda_2)-g(\Lambda_3)-g(\Lambda_4),
\end{align}
where
\begin{align}
g(x)=(\frac{x+1}{2})\log_{2}(\frac{x+1}{2})-(\frac{x-1}{2})\log_{2}(\frac{x-1}{2}),
\end{align}
with
\begin{eqnarray}
&\Lambda_{1/2}= {\sqrt{(A\pm \sqrt{A^2-4B})/2}}, &\nonumber \\
&\Lambda_{3/4}= {\sqrt{(C\pm \sqrt{C^2-4D})/2}}.&
\end{eqnarray}
In the above equations:
\begin{align}
A=& V^2(1-2\eta_{\rm ch})+2\eta_{\rm ch}+\eta_{\rm ch}^2(V+\chi_{line})^2, \nonumber \\
B=& \eta_{\rm ch}^2(V\chi_{line}+1)^2, \nonumber \\
C=& \frac{V\sqrt{B}+\eta_{\rm ch}(V+\chi_{line})+A\chi_{hom}}{\eta_{\rm ch}(V+\chi_{tot})}, \nonumber \\
D=& \sqrt{B}\frac{V+\sqrt{B}\chi_{hom}}{\eta_{\rm ch}(V+\chi_{tot})}.
\end{align}
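The Python sketch below transcribes the above GG02 expressions into a single rate function. It is illustrative only; in particular, the modulation variance $V_A$ used as a default is a hypothetical choice (in practice $V_A$ is optimized), and the remaining defaults mirror Table~\ref{Table}.
\begin{verbatim}
import math

def g_holevo(x):
    # g(x) for a symplectic eigenvalue x >= 1.
    if x <= 1.0:
        return 0.0
    return ((x + 1) / 2) * math.log2((x + 1) / 2) \
         - ((x - 1) / 2) * math.log2((x - 1) / 2)

def gg02_rate(eta_ch, eps, V_A=4.0, beta=0.95,
              eta_B=0.6, v_elec=0.015):
    # Reverse-reconciliation GG02 rate under collective attacks.
    V = V_A + 1
    chi_line = (1 - eta_ch) / eta_ch + eps
    chi_hom = (1 - eta_B) / eta_B + v_elec / eta_B
    chi_tot = chi_line + chi_hom / eta_ch
    I_AB = 0.5 * math.log2((V + chi_tot) / (1 + chi_tot))
    A = V**2 * (1 - 2 * eta_ch) + 2 * eta_ch \
        + eta_ch**2 * (V + chi_line)**2
    B = eta_ch**2 * (V * chi_line + 1)**2
    C = (V * math.sqrt(B) + eta_ch * (V + chi_line)
         + A * chi_hom) / (eta_ch * (V + chi_tot))
    D = math.sqrt(B) * (V + math.sqrt(B) * chi_hom) \
        / (eta_ch * (V + chi_tot))
    l1 = math.sqrt((A + math.sqrt(A**2 - 4 * B)) / 2)
    l2 = math.sqrt((A - math.sqrt(A**2 - 4 * B)) / 2)
    l3 = math.sqrt((C + math.sqrt(C**2 - 4 * D)) / 2)
    l4 = math.sqrt((C - math.sqrt(C**2 - 4 * D)) / 2)
    chi_BE = g_holevo(l1) + g_holevo(l2) - g_holevo(l3) - g_holevo(l4)
    return beta * I_AB - chi_BE

# Hypothetical example: 50% channel transmittance, 0.01 excess noise.
print(gg02_rate(0.5, 0.01))
\end{verbatim}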
\section{MDI-QKD key rate analysis}
\label{App:MDI-QKD}
In this appendix, we summarize the secret key rate of the MDI-QKD protocol. The rates for the ideal single-photon source and the decoy-state protocols, respectively, are
\begin{align}
\label{SPP-MDIQKDrate:App}
R_{\rm MDI-QKD}^{\rm{SPP}}= Y_{11}[1-h(e_{11;X})-fh(e_{11;Z})]
\end{align}
and
\begin{align}
\label{DS-MDIQKDrate:App}
R_{\rm MDI-QKD}^{\rm{DS}}= Q_{11}(1-h(e_{11;X}))-fQ_{\mu\nu;Z}h(E_{\mu\nu;Z}).
\end{align}
In the above, $Q_{11}$ is the gain of the single-photon states given by
\begin{align}
Q_{11}=\mu\nu e^{-\mu-\nu} Y_{11},
\end{align}
where $\mu$ ($\nu$) is the mean number of photons in the signal state sent by Alice (Bob) and $Y_{11}$ is the yield of the single-photon states given by
\begin{align}
Y_{11}= & (1-n_N)^2[\eta_a \eta_b/2+(2\eta_a+2\eta_b-3\eta_a \eta_b)n_N \nonumber \\
& +4(1-\eta_a)(1-\eta_b)n_N^2] ,
\end{align}
where $n_N$ represents the total noise per detector and $\eta_a$ and $\eta_b$ are the total transmittances between, respectively, Alice's and Bob's sides and Charlie's measurement module~\cite{Panayi2014memory}. In \eqref{SPP-MDIQKDrate:App} and \eqref{DS-MDIQKDrate:App}, $e_{11;Z}$, $e_{11;X}$, $Q_{\mu\nu;Z}$ and $E_{\mu\nu;Z}$, respectively, represent the QBER in the $Z$ basis for single-photon states, the phase error for single-photon states, the overall gain and the QBER in the $Z$-basis, which are given by~\cite{Panayi2014memory}:
\begin{align}
e_{11;X}Y_{11}=& Y_{11}/2-(1/2-e_d)(1-n_N)^2 \eta_a \eta_b/2, \nonumber\\
e_{11;Z}Y_{11}=& Y_{11}/2-(1/2-e_d)(1-n_N)^2 (1-2n_N) \eta_a \eta_b/2, \nonumber\\
Q_{\mu \nu;Z}=& Q_C+Q_E, E_{\mu \nu;Z}Q_{\mu \nu;Z}= e_dQ_C+(1-e_d)Q_E,
\end{align}
where
\begin{align}
Q_C= & 2(1-n_N)^2e^{-\mu^{'}/2}[1-(1-n_N)e^{-\eta_a \mu/2}] \nonumber \\
& \times [1-(1-n_N)e^{-\eta_b \nu/2}] \nonumber \\
Q_E=& 2n_N(1-n_N)^2e^{-\mu^{'}/2}[I_0(2x)-(1-n_N)e^{-\mu^{'}/2}],
\end{align}
with $x= \sqrt{\eta_a \mu \eta_b \nu}/2$, $\mu^{'}=\eta_a \mu+\eta_b \nu$ and $I_0$ being the zeroth-order modified Bessel function of the first kind.
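For completeness, the sketch below transcribes the decoy-state MDI-QKD rate of \eqref{DS-MDIQKDrate:App} together with the yield and gain expressions above. It is an illustrative transcription rather than our simulation code; NumPy's \texttt{i0} is used for the modified Bessel function, and the inputs in the example call are hypothetical.
\begin{verbatim}
import math
import numpy as np

def h2(x):
    # Binary Shannon entropy.
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def mdi_qkd_ds_rate(eta_a, eta_b, n_N, mu=0.5, nu=0.5,
                    e_d=0.033, f=1.16):
    # Decoy-state MDI-QKD rate with the yields/gains given above.
    Y11 = (1 - n_N)**2 * (eta_a * eta_b / 2
           + (2 * eta_a + 2 * eta_b - 3 * eta_a * eta_b) * n_N
           + 4 * (1 - eta_a) * (1 - eta_b) * n_N**2)
    Q11 = mu * nu * math.exp(-mu - nu) * Y11
    e11_X = (Y11 / 2
             - (0.5 - e_d) * (1 - n_N)**2 * eta_a * eta_b / 2) / Y11
    mu_p = eta_a * mu + eta_b * nu
    x = math.sqrt(eta_a * mu * eta_b * nu) / 2
    Q_C = (2 * (1 - n_N)**2 * math.exp(-mu_p / 2)
           * (1 - (1 - n_N) * math.exp(-eta_a * mu / 2))
           * (1 - (1 - n_N) * math.exp(-eta_b * nu / 2)))
    Q_E = (2 * n_N * (1 - n_N)**2 * math.exp(-mu_p / 2)
           * (float(np.i0(2 * x)) - (1 - n_N) * math.exp(-mu_p / 2)))
    Q_Z = Q_C + Q_E
    E_Z = (e_d * Q_C + (1 - e_d) * Q_E) / Q_Z
    return Q11 * (1 - h2(e11_X)) - f * Q_Z * h2(E_Z)

# Hypothetical example: 10% transmissivity on each arm, 1e-6 noise.
print(mdi_qkd_ds_rate(0.1, 0.1, 1e-6))
\end{verbatim}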
\begin{IEEEbiographynophoto}{Osama Elmabrok}
received his B.Sc. degree in electrical and electronic engineering from the University of Benghazi (formerly known as Garyounis University) in 2002, and his M.Eng. in communication and computer engineering from the National University of Malaysia in 2007. He worked as an assistant lecturer at the University of Benghazi before joining the University of Leeds, where he is currently working toward his Ph.D. degree. Prior to joining the academic field, he worked as a communications engineer for Almadar Telecom Company, Azzaawiya Refining Company, and RascomStar-QAF Company. His current research interest is wireless quantum key distribution (QKD) in indoor environments.
\end{IEEEbiographynophoto}
\begin{IEEEbiographynophoto}{Masoud Ghalaii}
was born in April 1987. He received the B.Sc. degree in Chemical Engineering from Isfahan University of Technology, Isfahan, Iran, in September 2011. He then pursued his studies and received the M.Sc. degree with honors in Physics with the group Quantum Information Science at Sharif University of Technology, Tehran, Iran, in January 2014. Since October 2015, he has been working toward the Ph.D. degree at the School of Electronic and Electrical Engineering at the University of Leeds, Leeds, United Kingdom. His research interests include quantum optical communications, and application of quantum amplifiers and repeaters in continuous-variable quantum key distribution, as well as quantum information science.
\end{IEEEbiographynophoto}
\begin{IEEEbiographynophoto}{Mohsen Razavi}
received his B.Sc. and M.Sc. degrees (with honors) in Electrical Engineering from Sharif University of Technology, Tehran, Iran, in 1998 and 2000, respectively. From August 1999 to June 2001, he was a member of research staff at Iran Telecommunications Research Center, Tehran, Iran, working on all-optical CDMA networks and the possible employment of optical amplifiers in such systems. He joined the Research Laboratory of Electronics, at the Massachusetts Institute of Technology (MIT), in 2001 to pursue his Ph.D. degree in Electrical Engineering and Computer Science, which he completed in 2006. He continued his work at MIT as a Post-doctoral Associate during Fall 2006, before joining the Institute for Quantum Computing at the University of Waterloo as a Post-doctoral Fellow in January 2007. Since September 2009, he has been a Faculty Member at the School of Electronic and Electrical Engineering at the University of Leeds. His research interests include a variety of problems in classical optical communications. In 2014, he chaired and organized the first international workshop on Quantum Communication Networks. He is the Coordinator of the European project QCALL, which endeavors to make quantum communications technologies available to end users.
\end{IEEEbiographynophoto}
\end{document}
\begin{document}
\title{Delta lenses as coalgebras for a comonad}
\author{Bryce Clarke}
\address{Centre of Australian Category Theory\\
Macquarie University, NSW 2109, Australia}
\email{[email protected]}
\thanks{The author is supported by the
Australian Government Research Training Program Scholarship.}
\subjclass[2020]{18C15}
\keywords{delta lens, cofunctor, coalgebra, bidirectional transformation}
\begin{abstract}
Delta lenses are a kind of morphism between categories which are
used to model bidirectional transformations between systems.
Classical state-based lenses, also known as very well-behaved lenses,
are both algebras for a monad and coalgebras for a comonad.
Delta lenses generalise state-based lenses, and while delta lenses
have been characterised as certain algebras for a semi-monad,
it is natural to ask if they also arise as coalgebras.
This short paper establishes that delta lenses are coalgebras
for a comonad, through showing that the forgetful functor
from the category of delta lenses over a base, to the category
of cofunctors over a base, is comonadic.
The proof utilises a diagrammatic approach to delta lenses,
and clarifies several results in the literature concerning the
relationship between delta lenses and cofunctors.
Interestingly, while this work does not generalise the
corresponding result for state-based lenses, it does provide new
avenues for exploring lenses as coalgebras.
\end{abstract}
\maketitle
\section{Introduction}
\label{sec:introduction}
The goal of understanding various kinds of lenses as mathematical
structures has been an ongoing program in the study of bidirectional
transformations.
For example, \emph{very well-behaved lenses} \cite{FGMPS07},
also known as \emph{state-based lenses} \cite{AU17},
have been understood as both algebras for a monad \cite{JRW10} and
coalgebras for a comonad \cite{Con11, GJ12}.
A generalisation of state-based lenses called \emph{category lenses}
\cite{JRW12} were also introduced as algebras for a monad,
based on classical work in $2$-category theory on split
opfibrations \cite{Str74}.
Another kind of lens between categories called a \emph{delta lens}
\cite{DXC11} was shown to be a certain algebra for a semi-monad \cite{JR13},
however it remained open as to whether delta lenses could also be
characterised as (co)algebras for a (co)monad.
The purpose of this short paper is to characterise delta lenses as
coalgebras for a comonad (Theorem~\ref{thm:main}).
The proof of this simple result builds upon and clarifies several recent
advances in the theory of delta lenses.
In 2017, Ahman and Uustalu introduced \emph{update-update lenses}
\cite{AU17} as morphisms of \emph{directed containers} \cite{ACU14},
which are equivalent to certain morphisms called
\emph{cofunctors} between categories \cite{Agu97}.
In the same paper, they show explicitly how, using the notation of
directed containers, delta lenses may be understood as cofunctors with
additional structure.
In earlier work \cite{AU16} from 2016, Ahman and Uustalu also provide a
construction on morphisms of directed containers which yields a
\emph{split pre-opcleavage} for a functor; in other words, they
show how cofunctors may be turned into delta lenses.
We show that this construction is actually a right adjoint to the
forgetful functor from delta lenses to cofunctors (Lemma~\ref{lemma:right-adjoint}),
and that the coalgebras for the comonad generated from this adjunction are
delta lenses (Theorem~\ref{thm:main}).
In 2020, a diagrammatic characterisation of delta lenses was
introduced by the current author \cite{Cla20}, building upon
an earlier characterisation of cofunctors as spans \cite{HM93}.
This diagrammatic approach is utilised throughout this paper,
and leads to another simple characterisation of delta lenses
(Proposition~\ref{prop:delta-lens}).
\subsection*{Overview of the paper and related work}
This section provides an informal overview of the paper, together
with further commentary on the background, and references to related
work.
The goal is to provide a conceptual understanding of the results;
later sections will be dedicated to the formal mathematics.
Section~\ref{sec:background} contains the mathematical background
required for the main results, which are presented in
Section~\ref{sec:main-result}.
Consequences of the main result and concluding remarks are in
Section~\ref{sec:conclusion}.
Throughout the paper we make the assumption that a \emph{system},
whatever that may be, can be understood as a category.
The objects of this category are the \emph{states} of the system,
while the morphisms are the \emph{transitions} (or \emph{deltas})
between system states.
Delta lenses were introduced in \cite[Definition~4]{DXC11} to
model bidirectional transformations between systems when they are
understood as categories.
The \textsc{Get} of a delta lens is a functor
$f \colon A \rightarrow B$ from the \emph{source category} $A$ to
the \emph{view category} $B$, while the \textsc{Put} is a certain kind of
function (that this paper calls a \emph{lifting operation})
satisfying axioms analogous to the classical lens laws.
A slightly modified definition of delta lens appeared in
\cite[Definition~1]{JR13}, however this definition
still seemed to be ad hoc, and made it difficult to prove deep results
without checking many details.
The definition of delta lens (Definition~\ref{defn:delta-lens})
given in this paper is based on a diagrammatic characterisation
which first appeared in \cite[Corollary~20]{Cla20}, by
representing the \textsc{Put} in terms of
bijective-on-objects functors (Definition~\ref{defn:bijective-on-objects})
and discrete opfibrations (Definition~\ref{defn:discrete-opfibration}).
This diagrammatic approach provides a natural framework for
studying delta lenses using category theory, and has the benefit of
allowing for very simple (albeit more abstract) proofs.
This approach will be utilised throughout this paper,
although in many places we will also include explicit
descriptions of constructions using the traditional definition of a
delta lens.
A key idea presented in \cite{AU17, Cla20} is that the
\textsc{Get} and \textsc{Put} of a delta lens can be separated into
functors and \emph{cofunctors} (Definition~\ref{defn:cofunctor}),
respectively.
Intuitively, a cofunctor can be understood as a delta lens without
any information on how the \textsc{Get} acts on morphisms; it is
the minimum amount of structure needed to specify a \textsc{Put}
operation between categories.
It was shown in the paper \cite{AU17} that delta lenses are cofunctors
with additional structure.
In this paper, we aim to show that said
structure arises coalgebraically via a comonad.
Both delta lenses and cofunctors are predominantly understood and
studied as \emph{morphisms} between categories, however to prove that
delta lenses are cofunctors equipped with coalgebraic structure,
it is necessary for them to be understood as \emph{objects}.
Therefore this paper introduces a new category $\mathsf{Cof}(B)$, whose
objects are cofunctors into a fixed category $B$
(Definition~\ref{defn:category-cofunctors}).
The category $\mathsf{Lens}(B)$, whose objects are delta lenses into a
fixed category $B$, was previously studied in \cite{JR17, Cla20b}.
Surprisingly, we show that the category $\mathsf{Lens}(B)$ can be defined
(Definition~\ref{defn:delta-lens-category}) as the slice category
$\mathsf{Cof}(B) / 1_{B}$.
Not only does this provide a new characterisation of delta lenses
in term of cofunctors (Proposition~\ref{prop:delta-lens}),
but also provides the insight that the canonical forgetful functor
$L \colon \mathsf{Lens}(B) \rightarrow \mathsf{Cof}(B)$, which takes a delta lens to its
underlying \textsc{Put} cofunctor, is a projection from a
slice category.
Finally, proving that delta lenses are coalgebras for a comonad on
$\mathsf{Cof}(B)$ amounts to showing that the forgetful functor
$L \colon \mathsf{Lens}(B) \rightarrow \mathsf{Cof}(B)$ is \emph{comonadic}
(Theorem~\ref{thm:main}).
A necessary condition is that $L$ has a right adjoint $R$
(Lemma~\ref{lemma:right-adjoint}), which
constructs the \emph{cofree delta lens} from each cofunctor in $\mathsf{Cof}(B)$.
This construction first appeared explicitly in
\cite[Section~3.2]{AU16}, however it was not obviously a right adjoint
--- or even a functor --- and it was disconnected from
the context of cofunctors and delta lenses.
Both Lemma~\ref{lemma:right-adjoint} and
Theorem~\ref{thm:main} admit straightforward proofs, with the
benefit of the diagrammatic approach to cofunctors and delta lenses.
\subsection*{Notation and conventions}
This section outlines some of the notation and conventions used in the
paper.
Given a category~$A$, its underlying set (or discrete category) of
objects is denoted~$A_{0}$.
Given a functor $f \colon A \rightarrow B$, its underlying object
assignment is denoted $f_{0} \colon A_{0} \rightarrow B_{0}$.
Similarly, a cofunctor $\varphi \colon A \nrightarrow B$ will have
an underlying object assignment $\varphi_{0} \colon A_{0} \rightarrow B_{0}$.
Thus the orientation of a cofunctor agrees with the orientation of its
underlying object assignment (this convention is chosen to agree
with the orientation of delta lenses, however this choice is not uniform
in the literature on cofunctors).
The operation $\cod$ sends each morphism to its \emph{codomain} or
\emph{target} object.
\section{Prerequisites for the main result}
\label{sec:background}
We first recall two special classes of functors, which
we will use as the building blocks for defining cofunctors and
delta lenses.
New contributions in this section include the category $\mathsf{Cof}(B)$
whose objects are cofunctors (Definition~\ref{defn:category-cofunctors}),
and the characterisation of delta lenses as certain morphisms therein
(Proposition~\ref{prop:delta-lens}).
\begin{definition}\label{defn:bijective-on-objects}
A functor $f \colon A \rightarrow B$ is \emph{bijective-on-objects} if its
underlying object assignment $f_{0} \colon A_{0} \rightarrow B_{0}$ is a bijection.
\end{definition}
\begin{definition}\label{defn:discrete-opfibration}
A functor $f \colon A \rightarrow B$ is a \emph{discrete opfibration} if for all pairs,
\[
(a \in A, \, u \colon fa \rightarrow b \in B)
\]
there exists a unique morphism
$w \colon a \rightarrow a'$ in $A$ such that $fw = u$.
\end{definition}
\begin{definition}\label{defn:cofunctor}
A \emph{cofunctor} $\varphi \colon A \nrightarrow B$ between categories is a
span of functors,
\begin{equation}
\begin{tikzcd}[column sep = small]
& X
\arrow[ld, "\varphi"']
\arrow[rd, "\overline{\varphi}"]
&
\\
A & & B
\end{tikzcd}
\end{equation}
where $\varphi$ is a bijective-on-objects functor and $\overline{\varphi}$ is a discrete opfibration.
\end{definition}
Alternatively, a cofunctor $\varphi \colon A \nrightarrow B$ consists of a
function $\varphi_{0} \colon A_{0} \rightarrow B_{0}$,
together with a \emph{lifting operation} $\varphi$, which assigns each pair
$(a \in A, \, u \colon \varphi_{0}a \rightarrow b \in B)$ to a morphism
$\varphi(a, u) \colon a \rightarrow~a'$ in $A$, such that the following axioms are satisfied:
\begin{enumerate}[(1)]
\item $\varphi_{0}\cod \big( \varphi(a, u) \big) = \cod(u)$;
\item $\varphi(a, 1_{\varphi_{0}a}) = 1_{a}$;
\item $\varphi(a, v \circ u) = \varphi(a', v) \circ \varphi(a, u)$,
where $a' = \cod\big( \varphi(a, u) \big)$.
\end{enumerate}
\begin{definition}\label{defn:delta-lens}
A \emph{delta lens} $(f, \varphi) \colon A \rightleftharpoons B$ between categories is
a commutative diagram of functors,
\begin{equation}\label{eqn:delta-lens}
\begin{tikzcd}[column sep = small]
& X
\arrow[ld, "\varphi"']
\arrow[rd, "\overline{\varphi}"]
&
\\
A
\arrow[rr, "f"']
& & B
\end{tikzcd}
\end{equation}
where $\varphi$ is a bijective-on-objects functor and $\overline{\varphi}$ is a discrete opfibration.
\end{definition}
We can also describe a delta lens $(f, \varphi) \colon A \rightleftharpoons B$ as
consisting of a functor $f \colon A \rightarrow B$ together with a \emph{lifting operation}
$\varphi$, which assigns each pair
$(a \in A, \, u \colon fa \rightarrow b \in B)$ to a morphism
$\varphi(a, u) \colon a \rightarrow a'$ in $A$, such that the following axioms are satisfied:
\begin{enumerate}[(1)]
\item $f \varphi(a, u) = u$;
\item $\varphi(a, 1_{fa}) = 1_{a}$;
\item $\varphi(a, v \circ u) = \varphi(a', v) \circ \varphi(a, u)$,
where $a' = \cod\big( \varphi(a, u) \big)$.
\end{enumerate}
Every delta lens $(f, \varphi) \colon A \rightleftharpoons B$ has an underlying
functor $f \colon A \rightarrow B$ and an underlying cofunctor $\varphi \colon A \nrightarrow B$,
and their corresponding underlying object assignments are equal; that is, $f_{0} = \varphi_{0}$.
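As a simple illustration of the definition (included here only as an example), consider any categories $B$ and $C$ and the projection functor $\pi \colon B \times C \rightarrow B$. It underlies a delta lens whose lifting operation is given by
\begin{equation*}
\varphi\big((b, c), \, u \colon b \rightarrow b'\big) = (u, 1_{c}) \colon (b, c) \longrightarrow (b', c),
\end{equation*}
and the three axioms above are immediate to check. The underlying cofunctor of this delta lens retains only the object assignment $(b, c) \mapsto b$ together with these chosen lifts, forgetting how $\pi$ acts on the remaining morphisms of $B \times C$.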
\begin{definition}\label{defn:category-cofunctors}
For each category $B$, there is a category $\mathsf{Cof}(B)$ of \emph{cofunctors over the base $B$}
whose objects are cofunctors with codomain $B$, and whose morphisms are given by commutative
diagrams of functors of the form:
\begin{equation}\label{diagram:morphism-cofunctors}
\begin{tikzcd}[column sep = small]
A
\arrow[rr, "h"]
& & C
\\
X
\arrow[u, "\varphi"]
\arrow[rr, "\overline{h}"]
\arrow[rd, "\overline{\varphi}"']
& & Y
\arrow[u, "\gamma"']
\arrow[ld, "\overline{\gamma}"]
\\
& B &
\end{tikzcd}
\end{equation}
\end{definition}
Equivalently, a morphism in $\mathsf{Cof}(B)$ from a cofunctor $\varphi \colon A \nrightarrow B$
to a cofunctor $\gamma \colon C \nrightarrow B$ consists of a functor
$h \colon A \rightarrow C$ such that $\gamma_{0}ha = \varphi_{0}a$ for all $a \in A$,
and $h\varphi(a, u) = \gamma(ha, u)$ for all pairs
$(a \in A, \, u \colon \varphi_{0}a \rightarrow b \in B)$.
The functor $\overline{h} \colon X \rightarrow Y$ is then uniquely induced
from this data.
Intuitively, if $A$ and $C$ are understood as \emph{source categories} with a fixed
\emph{view category} $B$, then the morphisms in $\mathsf{Cof}(B)$ are functors between the source
categories which preserve the chosen lifts, given by the
corresponding cofunctors, from the view category.
\begin{proposition}\label{prop:delta-lens}
Every delta lens $(f, \varphi) \colon A \rightleftharpoons B$ is equivalent to a morphism
in $\mathsf{Cof}(B)$ whose codomain is the trivial cofunctor on $B$.
\end{proposition}
\begin{proof}
Consider the morphism in $\mathsf{Cof}(B)$ given by the commutative diagram of functors:
\begin{equation}\label{diagram:morphism-trivial-cofunctor}
\begin{tikzcd}[column sep = small]
A
\arrow[rr, "f"]
& & B
\\
X
\arrow[u, "\varphi"]
\arrow[rr, "\overline{\varphi}"]
\arrow[rd, "\overline{\varphi}"']
& & B
\arrow[u, "1_{B}"']
\arrow[ld, "1_{B}"]
\\
& B &
\end{tikzcd}
\end{equation}
The upper commutative square describes a delta lens as given in Definition~\ref{defn:delta-lens}.
Conversely, every delta lens may be depicted as a morphism in $\mathsf{Cof}(B)$ in this way.
\end{proof}
We can unpack \eqref{diagram:morphism-trivial-cofunctor} using the explicit
characterisation of morphisms in $\mathsf{Cof}(B)$ to obtain the precise difference
between cofunctors and delta lenses, in terms of objects and morphisms.
Namely, the diagram \eqref{diagram:morphism-trivial-cofunctor} states that a delta
lens corresponds to a cofunctor $\varphi \colon A \nrightarrow B$ together with
a functor $f \colon A \rightarrow B$ such that $fa = \varphi_{0}a$ for all $a \in A$,
and $f\varphi(a, u) = u$ for all pairs $(a \in A, \, u \colon fa \rightarrow b \in B)$.
\begin{definition}
\label{defn:delta-lens-category}
For each category $B$, we define the category of \emph{delta lenses over the base $B$} to be the
slice category $\mathsf{Lens}(B) \coloneqq \mathsf{Cof}(B) \, / \, 1_{B}$,
where $1_{B}$ is the trivial cofunctor on $B$.
\end{definition}
By Proposition~\ref{prop:delta-lens}, the objects of $\mathsf{Lens}(B)$ are delta lenses with codomain
$B$, represented as a morphism into the trivial cofunctor as shown in
\eqref{diagram:morphism-trivial-cofunctor}.
The morphisms in $\mathsf{Lens}(B)$ are given by morphisms \eqref{diagram:morphism-cofunctors}
in $\mathsf{Cof}(B)$ such that the following pasting condition holds:
\begin{equation}\label{diagram:morphism-delta-lens}
\begin{tikzcd}[column sep = 2em]
A
\arrow[rr, "h"]
& & C
\arrow[rr, "g"]
& & B
\\
X
\arrow[u, "\varphi"]
\arrow[rr, "\overline{h}"]
\arrow[rrd, "\overline{\varphi}"']
& & Y
\arrow[u, "\gamma"']
\arrow[d, "\overline{\gamma}"]
\arrow[rr, "\overline{\gamma}"]
& & B
\arrow[u, "1_{B}"']
\arrow[lld, "1_{B}"]
\\
& & B & &
\end{tikzcd}
\qquad = \qquad
\begin{tikzcd}[column sep = small]
A
\arrow[rr, "f"]
& & B
\\
X
\arrow[u, "\varphi"]
\arrow[rr, "\overline{\varphi}"]
\arrow[rd, "\overline{\varphi}"']
& & B
\arrow[u, "1_{B}"']
\arrow[ld, "1_{B}"]
\\
& B &
\end{tikzcd}
\end{equation}
In other words, the only additional requirement on a morphism $h \colon A \rightarrow C$
between delta lenses over $B$, compared to a morphism between cofunctors over $B$,
is that $g \circ h = f$.
This is in contrast to requiring only $\gamma_{0}ha = \varphi_{0}a$
on objects (recall that for delta lenses the underlying object assignments of the
functor and cofunctor are equal, that is,
$g_{0} = \gamma_{0}$ and $f_{0} = \varphi_{0}$).
There is a canonical forgetful functor,
\begin{equation*}
L \colon \mathsf{Lens}(B) \longrightarrow \mathsf{Cof}(B)
\end{equation*}
which assigns every delta lens to its underlying cofunctor.
This forgetful functor is the focus of the main result in the following section.
\section{Main result}
\label{sec:main-result}
While not every cofunctor may be given the structure of a delta lens,
Ahman and Uustalu \cite{AU16} developed a method which constructs a
delta lens from any cofunctor.
To understand their construction, first recall that the underlying objects functor
$(-)_{0} \colon \mathsf{Cat} \rightarrow \mathsf{Set}$ has a right adjoint
$(\widehat{-}) \colon \mathsf{Set} \rightarrow \mathsf{Cat}$ which takes each set $X$
to the \emph{codiscrete category} $\widehat{X}$.
Given a cofunctor $\varphi \colon A \nrightarrow B$ with underlying object assignment
$\varphi_{0} \colon A_{0} \rightarrow B_{0}$, we may construct the following pullback
in $\mathsf{Cat}$:
\begin{equation}
\begin{tikzcd}[row sep = small]
& P
\arrow[ld, "\pi_{A}"']
\arrow[rd, "\pi_{B}"]
\arrow[dd, phantom, "\lrcorner" rotate = -45, very near start]
& \\
A
\arrow[rd, "\widehat{\varphi}_{0} \, \circ \, \eta_{A}"']
& & B
\arrow[ld, "\eta_{B}"]
\\
& \widehat{B}_{0}
&
\end{tikzcd}
\end{equation}
Here $\eta_{B} \colon B \rightarrow \widehat{B}_{0}$ is the component of the unit
for the adjunction at $B$, and $\widehat{\varphi}_{0} \circ \eta_{A}$ is the component of
the unit at $A$ followed by the image of $\varphi_{0}$ under the right adjoint.
Using the universal property of the pullback, we have the following:
\begin{equation}\label{diagram:right-adjoint}
\begin{tikzcd}[row sep = small]
& X
\arrow[ldd, bend right, "\varphi"']
\arrow[rdd, bend left, "\overline{\varphi}"]
\arrow[d, dashed, "{\langle \varphi, \overline{\varphi} \rangle}"]
&
\\[+1em]
& P
\arrow[ld, "\pi_{A}"']
\arrow[rd, "\pi_{B}"]
\arrow[dd, phantom, "\lrcorner" rotate = -45, very near start]
& \\
A
\arrow[rd, "\widehat{\varphi}_{0} \, \circ \, \eta_{A}"']
& & B
\arrow[ld, "\eta_{B}"]
\\
& \widehat{B}_{0}
&
\end{tikzcd}
\end{equation}
Since $\eta_{B}$ is bijective-on-objects, so is the pullback projection $\pi_{A}$.
Moreover, $\pi_{A} \circ \langle \varphi, \overline{\varphi} \rangle = \varphi$ is
bijective-on-objects, and therefore
$\langle \varphi, \overline{\varphi} \rangle \colon X \rightarrow P$ is bijective-on-objects as well.
Thus, the upper right triangle in \eqref{diagram:right-adjoint} defines a delta lens
$P \rightleftharpoons B$.
The category $P$ has the same objects as $A$, but morphisms $a \rightarrow a'$ in $P$
are given by pairs of
the form $(w \colon a \rightarrow a' \in A, u \colon \varphi_{0}a \rightarrow \varphi_{0}a' \in B)$.
The functor $\pi_{B} \colon P \rightarrow B$ projects to the second arrow in this
pair.
The lifting operation which makes this functor into a delta lens is induced by the
lifting operation of the original cofunctor; it takes an object $a \in P$ and
a morphism $u \colon \varphi_{0}a \rightarrow b \in B$ to the morphism
$\big( \varphi(a, u) \colon a \rightarrow a', u \colon \varphi_{0}a \rightarrow b \big)$ in
$P$.
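The following Python sketch (again with illustrative toy data, and independent of any particular library) carries out this construction explicitly: it forms the morphisms of $P$ as pairs and equips the projection to $B$ with the induced lifting.
\begin{verbatim}
# Sketch of the Ahman-Uustalu construction: from a cofunctor phi : A -/-> B,
# build the category P whose morphisms a -> a' are pairs
# (w : a -> a' in A, u : phi0(a) -> phi0(a') in B).
A_hom = {"id_a0": ("a0", "a0"), "id_a1": ("a1", "a1"), "w": ("a0", "a1")}
B_hom = {"id_0": ("0", "0"), "id_1": ("1", "1"), "u": ("0", "1")}
phi0 = {"a0": "0", "a1": "1"}              # object assignment of the cofunctor
lift = {("a0", "u"): "w",                   # chosen lifts phi(a, u)
        ("a0", "id_0"): "id_a0", ("a1", "id_1"): "id_a1"}

# Morphisms of P: pairs (w, u) whose (co)domains agree under phi0.
P_hom = {(w, u): (dw[0], dw[1])
         for w, dw in A_hom.items()
         for u, du in B_hom.items()
         if (phi0[dw[0]], phi0[dw[1]]) == du}

# The functor pi_B : P -> B projects a pair onto its second component ...
def pi_B(pair):
    return pair[1]

# ... and the delta-lens lifting of (a, u) pairs phi(a, u) with u itself.
def P_lift(a, u):
    return (lift[(a, u)], u)

for (a, u), w in lift.items():
    pw = P_lift(a, u)
    assert pi_B(pw) == u and P_hom[pw][0] == a   # lens axiom (1), well-typed lift
print(sorted(P_hom))   # the three morphisms of P in this toy example
\end{verbatim}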
We now show that this construction due to Ahman and Uustalu is universal, in the
sense that it provides a right adjoint to the functor taking a delta lens to its underlying
cofunctor.
\begin{lemma}\label{lemma:right-adjoint}
The forgetful functor $L \colon \mathsf{Lens}(B) \rightarrow \mathsf{Cof}(B)$ has a right adjoint.
\end{lemma}
\begin{proof}
Using the construction in \eqref{diagram:right-adjoint}, define the functor
$R \colon \mathsf{Cof}(B) \rightarrow \mathsf{Lens}(B)$ by the assignment:
\begin{equation}
\begin{tikzcd}[column sep = small]
& X
\arrow[ld, "\varphi"']
\arrow[rd, "\overline{\varphi}"]
&
\\
A & & B
\end{tikzcd}
\qquad \qquad \longmapsto \qquad \qquad
\begin{tikzcd}[column sep = small]
& X
\arrow[ld, "{\langle \varphi, \overline{\varphi} \rangle}"']
\arrow[rd, "\overline{\varphi}"]
&
\\
P
\arrow[rr, "\pi_{B}"']
& & B
\end{tikzcd}
\end{equation}
We describe the components of the unit and counit for the adjunction $L \dashv R$
and omit the detailed checks that the triangle identities hold.
Given a cofunctor $\varphi \colon A \nrightarrow B$ the component of the counit is
given by:
\begin{equation}
\begin{tikzcd}[column sep = small]
P
\arrow[rr, "\pi_{A}"]
& & A
\\
X
\arrow[u, "{\langle \varphi, \overline{\varphi} \rangle}"]
\arrow[rr, equal]
\arrow[rd, "\overline{\varphi}"']
& & X
\arrow[u, "\varphi"']
\arrow[ld, "\overline{\varphi}"]
\\
& B &
\end{tikzcd}
\end{equation}
Given a delta lens $(f, \varphi) \colon A \rightleftharpoons B$ the component of
the unit is given by:
\begin{equation}
\label{eqn:unit}
\begin{tikzcd}[column sep = 2em]
A
\arrow[rr, "{\langle 1_{A}, f \rangle}"]
& & P
\arrow[rr, "\pi_{B}"]
& & B
\\
X
\arrow[u, "\varphi"]
\arrow[rr, equal]
\arrow[rrd, "\overline{\varphi}"']
& & X
\arrow[u, "{\langle \varphi, \overline{\varphi} \rangle}"']
\arrow[d, "\overline{\varphi}"]
\arrow[rr, "\overline{\varphi}"]
& & B
\arrow[u, "1_{B}"']
\arrow[lld, "1_{B}"]
\\
& & B & &
\end{tikzcd}
\qquad = \qquad
\begin{tikzcd}[column sep = small]
A
\arrow[rr, "f"]
& & B
\\
X
\arrow[u, "\varphi"]
\arrow[rr, "\overline{\varphi}"]
\arrow[rd, "\overline{\varphi}"']
& & B
\arrow[u, "1_{B}"']
\arrow[ld, "1_{B}"]
\\
& B &
\end{tikzcd}
\end{equation}
The above diagrams show that the pasting condition required in
\eqref{diagram:morphism-delta-lens} is satisfied.
\end{proof}
\begin{theorem}\label{thm:main}
The forgetful functor $L \colon \mathsf{Lens}(B) \rightarrow \mathsf{Cof}(B)$ is comonadic.
\end{theorem}
\begin{proof}
By Lemma~\ref{lemma:right-adjoint}, the functor $L$ has a right adjoint $R$.
To prove that $L$ is comonadic, it remains to show that the category of coalgebras
for the induced comonad $LR$ on $\mathsf{Cof}(B)$ is equivalent to $\mathsf{Lens}(B)$.
Given a cofunctor $\varphi \colon A \nrightarrow B$, a coalgebra structure map is given
by a morphism in $\mathsf{Cof}(B)$ of the form:
\begin{equation}\label{eqn:coalgebra}
\begin{tikzcd}[column sep = small]
A
\arrow[rr, "h"]
& & P
\\
X
\arrow[u, "\varphi"]
\arrow[rr, "\overline{h}"]
\arrow[rd, "\overline{\varphi}"']
& & X
\arrow[u, "{\langle \varphi, \overline{\varphi} \rangle}"']
\arrow[ld, "\overline{\varphi}"]
\\
& B &
\end{tikzcd}
\end{equation}
However, compatibility with the counit forces $\overline{h} = 1_{X}$
and $h = \langle 1_{A}, f \rangle$, where $f \colon A \rightarrow B$ is a functor
such that $f \circ \varphi = \overline{\varphi}$.
Compatibility with the comultiplication doesn't add any further conditions.
Therefore, a coalgebra for the comonad $LR$ on $\mathsf{Cof}(B)$ is equivalent to a delta lens
$(f, \varphi) \colon A \rightleftharpoons B$.
\end{proof}
This theorem establishes the result stated in the title of the paper,
that delta lenses \eqref{eqn:delta-lens}
are coalgebras \eqref{eqn:coalgebra} for a comonad.
\section{Concluding remarks}
\label{sec:conclusion}
In this paper, the category $\mathsf{Lens}(B)$ of delta lenses over the base $B$
was characterised as the category of coalgebras for a comonad on
the category $\mathsf{Cof}(B)$ of cofunctors over the base~$B$.
This brings together recent results in the study of delta lenses and
cofunctors.
In particular, we have shown that the extra structure on cofunctors
given in Ahman and Uustalu's \cite{AU17} characterisation of delta lenses
is coalgebraic, and that their construction of a delta lens from a cofunctor
in \cite{AU16} is precisely the cofree delta lens on a cofunctor.
Throughout we have also shown how the abstract diagrammatic approach to
delta lenses, first introduced in \cite{Cla20}, has led to concise
proofs of these results, and offers a clear perspective on the relationship
between these ideas.
Aside from clarification and development of theory, the results presented
in this paper have several other mathematical consequences.
For example, the functor $L \colon \mathsf{Lens}(B) \rightarrow \mathsf{Cof}(B)$
creates all colimits which exist in $\mathsf{Cof}(B)$.
Thus we can take the coproduct of a pair of cofunctors in $\mathsf{Cof}(B)$,
and automatically know how to construct the coproduct of the
corresponding delta lenses in $\mathsf{Lens}(B)$.
Another consequence from the unit \eqref{eqn:unit} of the adjunction
between $\mathsf{Cof}(B)$ and $\mathsf{Lens}(B)$ is that every delta lens factorises
into a bijective-on-objects functor followed by a cofree lens.
Intuitively, this allows us to first pair every transition in the
source category $A$ with a transition in the view category $B$
via the functor part $f \colon A \rightarrow B$ of the delta lens,
\[
w \colon a \rightarrow a' \in A
\qquad \longmapsto \qquad
(w \colon a \rightarrow a' \in A, fw \colon fa \rightarrow fa' \in B)
\]
then consider the update propagation determined by the cofunctor
part $\varphi \colon A \nrightarrow B$ of the delta lens.
The cofree delta lens on a cofunctor behaves much like an analogue
of \emph{constant complement} state-based lenses, except that the
complement is with respect to morphisms rather than objects.
While the main contributions of this paper are mathematical, it is
hoped that these results also prompt new ways of understanding delta
lenses.
For example, previously state-based lenses have been considered
from a ``\textsc{Put}-based'' perspective \cite{PHF14, FHP15},
however this approach could also be adapted to the setting of delta lenses.
Rather than starting with a \textsc{Get} functor between systems
and then asking how we might construct a delta lens,
we might instead start with a \textsc{Put} cofunctor and then
ask for ways in which this can be given the structure of a delta lens.
This shift of focus is subtle but important,
especially in the context of the ideas in \cite{AU17},
as it is arguably the \textsc{Put} structure (rather than the \textsc{Get}
structure) which is central to the study of bidirectional
transformations and lenses.
On a separate note, it is worth remarking on the
similarity between the main result of this paper
and the classical result stating that very well-behaved lenses are
coalgebras for a comonad \cite{Con11, GJ12}.
Despite the clear analogy between them, and the inspiration that this paper
derives from the classical result, it seems that they are unrelated
at a mathematical level.
The classical result relies on $\mathsf{Set}$ being a cartesian closed
category, and arises from the adjunction $(-) \times B \dashv [B, - ]$,
whereas the results in this paper arise from a different adjunction,
and don't require any aspect of cartesian closure.
There are many questions to be explored in future work.
For instance, it is natural to ask if $\mathsf{Lens}(B)$ is comonadic over other
categories (such as $\mathsf{Cat}$ as was suggested by an anonymous reviewer),
or if split opfibrations (also known as c-lenses \cite{JRW12}) are also comonadic
over $\mathsf{Cof}(B)$.
In recent work by the current author, it has been demonstrated that delta lenses
arise as algebras for a monad on $\mathsf{Cat} / B$,
providing a dual to the main result of this paper and strengthening
the previous work of Johnson and Rosebrugh \cite{JR13}.
Finally, given the importance of the category $\mathsf{Lens}(B)$ in the study
of \emph{symmetric lenses} \cite{JR17, Cla20b}, it is also hoped that
the coalgebraic perspective provides new insights into this area,
and this will be the subject of further investigation.
\end{document}
|
\begin{document}
\title{Abstraction based Output Range Analysis for Neural Networks}
\begin{abstract}
In this paper, we consider the problem of output range analysis for
feed-forward neural networks with \ensuremath{\textit{ReLU}}{} activation functions.
The existing approaches reduce the output range analysis problem to
satisfiability and optimization solving, which are NP-hard problems,
and whose computational complexity increases with the number
of neurons in the network. To tackle the computational complexity, we
present a novel abstraction technique that constructs a simpler neural
network with fewer neurons, albeit with interval weights called
interval neural network (\ensuremath{\textit{INN}}{}), which over-approximates the output
range of the given neural network.
We reduce the output range analysis on the \ensuremath{\textit{INN}}{}s to solving a mixed
integer linear programming problem.
Our experimental results highlight the trade-off between the
computation time and the precision of the computed output range.
\end{abstract}
\section{Introduction}
Neural networks are extensively used today in safety critical
control systems such as autonomous vehicles and airborne collision
avoidance systems~\citep{autocar1, autocar2, reluplex, aircraft1}.
Hence, rigorous methods to ensure correct functioning of neural
network controlled systems are imperative.
Formal verification refers to a broad class of techniques that provide
strong guarantees of correctness by exhibiting a proof.
Formal verification of neural networks has attracted a lot of attention
in the recent years~\citep{reluplex,bunel,range, taylor-nn1,
taylor-nn2, taylor-nn3}.
However, verifying neural networks is extremely challenging due to the
large state-space and the presence of nonlinear activation functions;
indeed, the verification problem is known to be NP-hard even for simple
properties~\citep{reluplex}.
Our broad objective is to investigate techniques to verify neural
network controlled physical systems such as autonomous vehicles.
These systems consist of a physical system and a neural network
controller connected in a feedback loop, that is, the output of
the neural network is the control input (actuator values) to the
physical system and the output of the physical system (sensor values)
is input to the neural network controller.
An important verification problem is that of safety, wherein, one
seeks to ensure that the state of the neural network controlled system
never reaches an unsafe set of states.
This is established by computing the reachable set, the set of states
reached by the system, and ensuring that the reach set does not
intersect the unsafe states.
An important primitive towards computing the reachable set is to
compute the output range of a neural network controller given a set of
input valuations.
In this paper, we focus on neural networks with rectified linear unit
(\ensuremath{\textit{ReLU}}{}) function as an activation function, and we investigate the
output range computation problem for feed-forward neural
networks~\citep{range}.
Recently, there have been several efforts to address this problem
that rely on satisfiability checking and optimization.
Reluplex~\citep{reluplex} is a tool that develops a satisfiability
modulo theory for verifying neural networks, in particular, it
encodes the input/output relations of a neural network as a
satisfiability checking problem.
A mixed integer linear programming ($\ensuremath{\textit{MILP}}$) based approach is
proposed in ~\citep{range, sriram2} to compute the output
range.
These approaches construct constraints that encode the neural network
behavior, and check satisfiability or compute optimal values over the
constraints.
The complexity of verification depends on the size of the constraints
which in turn depends on the number of neurons in the neural network.
To increase the verification efficiency, we present an orthogonal
approach that consists of a novel abstraction procedure to reduce the
state-space (number of neurons) of the neural network.
Abstraction is a formal verification technique that refers to methods for reducing the state-space while providing formal guarantees of properties that are preserved by the reduction. One of the well-studied abstraction procedures is predicate abstraction~\cite{clark11,graph} that consists of partitioning the state-space of a given system into a finite number of regions, and constructing an abstract system that consists of these regions as the states. Predicate abstraction has been employed extensively for safety verification, since, the safety of the abstract system is \emph{sound}, that is, it implies the safety of the given system.
Our main result consists of a \emph{sound} abstraction that in
particular over-approximates the output range of a given neural
network.
Note that an over-approximation can still provide useful safety
analysis verdicts, since, if a superset of the reachable set does not
intersect with the unsafe set, then the actual reachable set will also
not intersect the unsafe set.
The abstraction procedure essentially merges sets of neurons within a
particular layer, and annotates the edges and biases with interval
weights to account for the merging.
Hence, we obtain a neural network with interval weights, which we call
\emph{interval neural networks} (\ensuremath{\textit{INN}}{}s).
While interval neural networks are more general than neural networks,
we show as a proof of concept that satisfiability and optimization
based verification approaches can be extended to \ensuremath{\textit{INN}}{}s: we extend
the $\ensuremath{\textit{MILP}}$ based encoding in~\citep{range} from neural networks to
interval neural networks and use it to compute the output range of the
abstract \ensuremath{\textit{INN}}{}.
We believe that other methods such as Reluplex can be extended to
handle interval neural networks, and hence, the abstraction procedure
presented here can be used to reduce the state-space before applying
existing or new verification algorithms for neural networks.
An abstract interpretation based method has been explored in~\citep{rice}, wherein an abstract reachable set is propagated.
However, our approach has the flavor of predicate
abstraction~\citep{graph} and computes an over-approximate system
which can then be used to compute an over-approximation of the output
range using any of the above methods including the one based on
abstract interpretation~\citep{rice}.
The crucial part of the abstraction construction consists of
appropriately instantiating the weights of the abstract edges.
In particular, a convex hull of the weights associated with the
concrete edges corresponding to an abstract edge does not guarantee
soundness, which is shown using a counterexample in Section \ref{sec:abs}.
We need to multiply the convex hull by a factor equivalent to the
number of merged nodes in the source abstract node.
The proof of soundness is rather involved, since, there is no
straightforward relation between the concrete and the abstract
states.
We establish such a connection, by associating a set of abstract
valuations with a concrete valuation for a particular layer, wherein,
the abstract valuation for an abstract node takes values in the range
given by the concrete valuations for the related concrete nodes.
The crux of the proof lies in the observation (Proposition
\ref{prop:avg}) that the behavior of a concrete valuation is mimicked
in the abstract valuation by an average of the concrete valuations at
the nodes corresponding to an abstract node.
We conclude that the input/output relation associated with a certain
layer of the concrete system is over-approximated by input/output
valuations of the corresponding layer in the abstract system.
We have implemented our algorithm in a Python toolbox.
We perform experimental analysis on the ACAS~\citep{aircraft2} case
study, and observe that the verification time increases with the
increase in the number of abstract nodes, however, the
over-approximation in the output range decreases.
Further, we notice that the output range can vary non-trivially even
for a fixed number of abstract nodes, but different partitioning of
the concrete nodes for merging.
This suggests that further research needs to be done to understand
the best strategies for partitioning the state-space of neurons for
merging, which we intend to explore in the future.
\paragraph{Related work.}
Recent studies~\citep{taylor-survey,bunel,saftey1,taylor-nn3,saftey3,saftey4}
compare several neural network verification algorithms.
Formal verification of feedforward neural networks with different
activation functions have been considered. For instance,
\citep{reluplex,rice} consider \ensuremath{\textit{ReLU}}{}, whereas \citep{marta1,abstract1} consider a large class of activation
functions that can be represented as Lipschitz-continuous functions.
We focus on \ensuremath{\textit{ReLU}}{} functions, but our method can be extended to more
general functions.
Different verification problems have been considered including output
range analysis~\citep{shiram3,interval,marta1,saftey1,shiram4,taylor1}, and robustness analysis ~\citep{rice,Binarized}.
Verification methods include those based on reduction to
satisfiability solving~\citep{reluplex,saftey1,saftey3}, optimization solving~\citep{anytime}, abstract interpretation~\citep{yasser,abstract1}, and linearization~\citep{saftey3,saftey4}.
There is some recent work on the verification of AI controlled cyber-physical systems~\cite{new1,new2}.
\section{Interval Neural Network}
\label{section.inn}
A neural network (\ensuremath{\textit{NN}}) is a computational model that
consists of nodes (neurons) that are organized in layers and edges
which are the connections between the nodes labeled by weights.
An $\ensuremath{\textit{NN}}$ contains an input layer, some hidden layers, and an output
layer each composed of neurons.
Given values to the nodes in the input layer, the values at the nodes
in the next layer are computed through a weighted sum dictated by the
edge weights and the addition of the bias associated with the output
node followed by an activation operation which we will assume is the $\ensuremath{\textit{ReLU}}$
(rectifier linear unit) function.
In this section, we introduce interval neural networks $\ensuremath{\textit{INN}}$ that
generalize neural networks with interval weights on edges and biases
and will represent our abstract systems.
\paragraph{Preliminaries.}
Let $\ensuremath{\mathbb{R}}$ denote the set of real numbers. Given a non-negative integer $k$, let $[k]$
denote the set $\{0,1, \cdots, k\}$. Given a set $A$, $\norm{A}$ represents the number of elements of $A$.
For any two functions $f, g: A \to \ensuremath{\mathbb{R}}$, we say $f\leq g$ if $\forall s\in A$, $f(s) \leq g(s)$.
We denote the \ensuremath{\textit{ReLU}}{} function by $\sigma$, which is defined as ${\displaystyle \sigma(x) = \max(0,x)}$. Given two binary relations $R_1 \subseteq A \times B$ and $R_2 \subseteq B \times C$, we define their composition, denoted by
$R_1 \circ R_2$, to be $\{ (u, v) \ | \ \exists w, \ (u, w) \in R_1$ and $(w, v) \in R_2\}$. For any set $S$, a valuation over $S$ is a function $f : S \to \ensuremath{\mathbb{R}}$. We define $\ensuremath{\textit{Val}}(S)$ to be the set of all valuations over $S$.
A partition $R$ of the set $A$ is a set $R = \{R_1, \ldots, R_k\}$ such that
$\bigcup_{i=1}^{k} R_i = A$
and $R_i \cap R_j = \emptyset\quad \forall i, j \in \{1, \ldots, k\}$ and $i \not= j$.
\begin{definition}[Interval Neural Network]\label{def:inn}
An interval neural network ($\ensuremath{\textit{INN}}$) is a tuple $(k, \{ S_i\}_{i \in
[k]},$ $\{\lwr{W_i}, \uppr{W_i}\}_{i \in [k-1]}, \{\lwr{b_i},
\uppr{b_i}\}_{i \in [k]/\{0\}})$, where
\begin{enumerate}[-]
\item $k$ is a natural number which we refer to as the number of layers;
\item $\forall i\in [k], \ S_i$ is a set of nodes of $i$-th layer in the
interval neural network such that $\forall i \neq j, \ S_i \cap S_j
= \emptyset$. $S_0$ is the input layer, $S_k$ is the output layer
and $S_i, \ \forall i \in [k]/\{0,k\}$ is a hidden layer;
\item $\forall i\in [k-1], \ \lwr{W_i}, \uppr{W_i} : S_i\times S_{i+1}
\to \ensuremath{\mathbb{R}}$ represent the weights of the edges between the $i$-th and
$i+1$-th layer.
We assume that $\forall i \in [k-1], s_i \in S_i, \ s_{i+1} \in S_{i+1}, \
\lwr{W_i}(s_i,s_{i+1}) \leq \uppr{W_i}(s_i,s_{i+1})$;
\item $\forall i\in [k]/\{0\}, \ \lwr{b_i}, \uppr{b_i}: S_{i} \to
\ensuremath{\mathbb{R}}$ are the biases associated with the nodes in the $i$-th layer,
that is, $\forall i\in [k]/\{0\},
s_{i} \in S_{i} , \ \lwr{b_i}(s_{i}) \leq \uppr{b_i}(s_{i})$.
\end{enumerate}
\end{definition}
A neural network can be defined as a special kind of \ensuremath{\textit{INN}}{} where the
weights and biases are singular intervals.
\begin{definition}[Neural Network]\label{def:nn}
An $\ensuremath{\textit{INN}}$ $\ensuremath{{\cal T}}$ is a neural network ($\ensuremath{\textit{NN}}$) if $\ \forall i \in [k-1],
s\in S_i, s\textprime \in S_{i+1}, \ \lwr{W_i}(s,s\textprime) =
\uppr{W_i}(s,s\textprime)$, and $\forall i\in [k]/\{0\}, s \in S_{i},
\ \lwr{b_i}(s) = \uppr{b_i}(s)$.
\end{definition}
\begin{figure}
\caption{A neural network}
\label{fig:nn}
\caption{An interval neural network}
\label{fig:inn}
\end{figure}
Figure \ref{fig:nn} shows a neural network with $3$ layers. The input layer has $2$ nodes, the output layer has $1$ node, and each of the hidden layers has $3$ nodes.
The weight on each edge is a single number (a singular interval), hence, it is a neural network.
Figure \ref{fig:inn} shows an interval neural network again with $3$ layers.
The input and output layers have the same number of nodes as before, but the hidden layers have $2$ nodes each.
The weights on the edges are intervals (and non-singular), so this is an interval neural network (rather than just a neural network).
An execution of an interval neural network starts with valuations to the input
nodes, and the valuations to the nodes of a certain layer are computed
based on the valuations for the nodes in the previous layer.
More precisely, to compute the value at a node $s_{i, j}$ corresponding to the $j$-th node in the layer $i$, we choose a weight from the interval for each of the incoming nodes and compute a weighted sum of the valuations of the nodes in the previous layer.
Then a bias is chosen from the bias interval associated with $s_{i, j}$ and added to the weighted sum. Finally, the $\ensuremath{\textit{ReLU}}$ function is applied on this sum.
The execution then proceeds to the next layer.
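To make the execution model concrete, the following Python sketch (purely illustrative, with made-up numbers; it is not the toolbox used in our experiments) samples one admissible weight and bias from each interval and performs the resulting \ensuremath{\textit{ReLU}}{} forward pass.
\begin{verbatim}
# Illustrative sketch: one execution of an interval neural network,
# obtained by sampling admissible weights/biases and applying ReLU.
import random

def relu(x):
    return max(0.0, x)

def run_inn(layers, input_vals, rng=random):
    """layers: list of (W_lo, W_hi, b_lo, b_hi) per transition.
    W_* are dicts keyed by (source node, target node); b_* by target node."""
    vals = dict(input_vals)
    for W_lo, W_hi, b_lo, b_hi in layers:
        nxt = {}
        for t in b_lo:                       # nodes of the next layer
            pre = rng.uniform(b_lo[t], b_hi[t])
            for (s, t2), lo in W_lo.items():
                if t2 == t:
                    pre += rng.uniform(lo, W_hi[(s, t2)]) * vals[s]
            nxt[t] = relu(pre)               # ReLU activation
        vals = nxt
    return vals

# Toy 1-layer INN with two inputs and one output (illustrative numbers).
layer = ({("x1", "y"): 1.0, ("x2", "y"): -2.0},   # lower weights
         {("x1", "y"): 2.0, ("x2", "y"): -1.0},   # upper weights
         {"y": 0.0}, {"y": 0.5})                  # lower / upper bias
print(run_inn([layer], {"x1": 1.0, "x2": 0.5}))
\end{verbatim}
Repeated runs of this kind only ever produce points that lie inside the output range; the encoding of Section \ref{sec:encode} is what yields guaranteed bounds on that range.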
The semantics of the neural network is captured using a set of pairs
of input-output valuations wherein the output valuation is a possible result starting from the input valuation and executing the neural network.
Next, we define the semantics of an $\ensuremath{\textit{INN}}$ as a set of valuations
for the input and output layers.
\begin{definition}[Semantics of an \ensuremath{\textit{INN}}{}]\label{def:sem_inn}
Given an $\ensuremath{\textit{INN}}$ $\ensuremath{{\cal T}} = (k, \{ S_i\}_{i \in [k]},$ $\{\lwr{W_i},
\uppr{W_i}\}_{i \in [k-1]}, \{\lwr{b_i},\uppr{b_i}\}_{i \in [k]/\{0\}})$ and
$i \in [k-1]$, $\ \dfu{T}{i} = \{ (v_1, v_2)\in \ensuremath{\textit{Val}}(S_i) \times
\ensuremath{\textit{Val}}(S_{i+1}) \, | \, $ $\forall s' \in S_{i+1}, \ v_2(s') = \sigma( \
\sum_{s \in S_i} w_{s,s'} \ v_1(s) + b_{s'}),$ where $\lwr{W_i}(s,
s') \leq w_{s,s'} \leq \uppr{W_i}(s, s'), \ \lwr{b_{i+1}}(s') \leq b_{s'}
\leq \uppr{b_{i+1}}(s')\}$.
We define $\df{T} = \dfu{T}{0} \circ \dfu{T}{1} \circ \cdots \circ \dfu{T}{k-1}$.
\end{definition}
The semantics can be captured alternately using a post operator, that
given a valuation of layer $i$, returns the set of all valuations of
layer $i+1$ that are consistent with the semantics.
\begin{definition}
Given an $\ensuremath{\textit{INN}}$ $\ensuremath{{\cal T}}$ with $k$ layers, $i \in [k-1]$ and $V \subseteq
\ensuremath{\textit{Val}}(S_i)$, we define $\ensuremath{\textit{Post}}_{\ensuremath{{\cal T}}, i}(V) = \{v' \ |\ \exists v \in V,\
(v,v') \in \dfu{T}{i}\}$.
Given $V \subseteq \ensuremath{\textit{Val}}(S_0)$, we define $\ensuremath{\textit{Post}}_\ensuremath{{\cal T}}(V) = \{v' \ |\
\exists v \in V,\ (v,v') \in \df{T}\}$.
\end{definition}
For notational convenience, we will write $\ensuremath{\textit{Post}}_{\ensuremath{{\cal T}}, i}(\{v\})$ and
$\ensuremath{\textit{Post}}_\ensuremath{{\cal T}} (\{v\})$ as just $\ensuremath{\textit{Post}}_{\ensuremath{{\cal T}}, i}(v)$ and
$\ensuremath{\textit{Post}}_\ensuremath{{\cal T}} (v)$, respectively.
Our objective is to find an over-approximation of the
values the output neurons can take in an interval neural network, given
a set of valuations for the input layer.
\begin{problem}[Output range analysis]
Given an $\ensuremath{\textit{INN}}$ $\ensuremath{{\cal T}}$ with $k$ layers and a set of input valuations $I
\subseteq \ensuremath{\textit{Val}}(S_0)$, compute valuations $l, u \in \ensuremath{\textit{Val}}(S_k)$ such
that $\forall (v_1, v_2) \in \df{T}$ if $v_1 \in I$ then $l(s) \leq
v_2(s) \leq u(s)$ for every $s \in S_k$.
\end{problem}
\section{Our Approach}
\label{section.approach}
In this section, we present an abstraction based approach for
over-approximating the output range of an interval neural network.
First, in Section \ref{sec:abs}, we describe the construction of an abstract system whose semantics over-approximates the semantics of a given
$\ensuremath{\textit{INN}}$ and argue the correctness of the construction.
In Section \ref{sec:encode}, we present an encoding of the interval
neural network to mixed integer linear programming that enables the
computation of the output range.
\subsection{Abstraction of an $\ensuremath{\textit{INN}}$}
\label{sec:abs}
The motivation for the abstraction of an $\ensuremath{\textit{INN}}$ is to reduce the
``state-space'', the number of neurons in the network, so that
computation of the output range can scale to larger $\ensuremath{\textit{INN}}$s.
Our broad idea consists of merging the nodes of a given concrete
$\ensuremath{\textit{INN}}$ so as to construct a smaller abstract $\ensuremath{\textit{INN}}$.
However, it is crucial that we instantiate the weights on the edges
and the biases appropriately to ensure that the semantics of the
abstracted system is an over-approximation of the concrete $\ensuremath{\textit{INN}}$.
For instance, consider the neural network in Figure \ref{fig:ex} and
consider an input value $1$. It results in an output value of $2$.
Figure \ref{fig:wrong} abstracts the neural network in Figure
\ref{fig:ex} by taking the convex hull of the weights on the concrete
edges corresponding to the abstract edge. However, given input $1$,
the output of the abstract neural network is $1$ and does not contain
$2$.
Hence, we need to be careful in the construction of the abstract
system.
\begin{figure}
\caption{A concrete neural network}
\label{fig:ex}
\caption{An incorrect abstraction}
\label{fig:wrong}
\end{figure}
Given two sets of concrete nodes from consecutive layers of the $\ensuremath{\textit{INN}}$,
$\hat{s}_1$ and $\hat{s}_2$, which are each merged into one abstract
node, we associate an interval with the edge between $\hat{s}_1$ and
$\hat{s}_2$ to be the interval $ [|\hat{s}_1| w_1, |\hat{s}_1| w_2]$, where $w_1$ and $w_2$ are
the minimum and maximum weights associated with the edges in the
concrete system between nodes in $\hat{s}_1$ and $\hat{s}_2$,
respectively, and $|\hat{s}_1|$ is the number of concrete nodes
corresponding to the abstract node $\hat{s}_1$.
In other words, $[w_1, w_2]$ is the convex hull of the intervals
associated with the edges between nodes in $\hat{s}_1$ and
$\hat{s}_2$ multiplied by a factor corresponding to the number of
concrete nodes corresponding to the source abstract node.
Note that the above abstraction will lead to a weight of $2$ on the
second edge in Figure \ref{fig:wrong}, thus leading to an output of
$2$ as in the concrete system.
Next, we formally define the abstraction. We say that $P = \{P_i\}_{i
\in [k]}$ is a partition of $\ensuremath{{\cal T}}$, if for every $i$, $P_i$ is a partition of the
$S_i$, the nodes in the $i$-th layer of $\ensuremath{{\cal T}}$.
\begin{definition}[Abstract Neural Network]\label{def:abs}
Given an \ensuremath{\textit{INN}}{} $T = (k, \{ S_i\}_{i \in [k]},$ $\{\lwr{W_i}, \uppr{W_i}\}_{i \in [k-1]}, \{\lwr{b_i}, \uppr{b_i}\}_{i \in [k]/\{0\}})$ and a partition $P = \{P_i\}_{i \in [k]}$ of $T$, we define an \ensuremath{\textit{INN}}{} $\ensuremath{T/P} = (k, \{ \ensuremath{\hat{S}_i}\}_{i \in [k]}, \{\ensuremath{\lwr{\widehat{W}_i}}, \ensuremath{\uppr{\widehat{W}_i}}\}_{i \in [k-1]}, \{\ensuremath{\lwr{\hat{b}_i}}, \ensuremath{\uppr{\hat{b}_i}}\}_{i \in [k]/\{0\}})$, where
\begin{enumerate}[-]
\item $\forall i\in [k], \ \ensuremath{\hat{S}_i} = P_i$;
\item $\forall i\in [k-1]$, $\ensuremath{\hat{s}_i} \in \ensuremath{\hat{S}_i}, \hat{s}_{i+1} \in \hat{S}_{i+1}$,
$\ensuremath{\lwr{\widehat{W}_i}}(\ensuremath{\hat{s}_i}, \hat{s}_{i+1}) = |\h{s}_i| \min \{\lwr{W_i}(s_i, s_{i+1})\ | \
s_i\in \h{s}_i, \ s_{i+1}\in \h{s}_{i+1}\}$ and
$\ensuremath{\uppr{\widehat{W}_i}}(\ensuremath{\hat{s}_i}, \hat{s}_{i+1}) = |\h{s}_i| \max \{\uppr{W_i}(s_i, s_{i+1}) \ | \ s_i\in \h{s}_i, \ s_{i+1}\in \h{s}_{i+1}\}$;
\item $\forall i\in [k]/\{0\}, \ensuremath{\hat{s}_i} \in \ensuremath{\hat{S}_i}$,
$\ensuremath{\lwr{\hat{b}_i}}(\ensuremath{\hat{s}_i}) =$ min $\{\lwr{b_i}(s_{i})\ | \ s_{i}\in \h{s}_{i}\}$ and
$\ensuremath{\uppr{\hat{b}_i}}(\ensuremath{\hat{s}_i}) =$ max $\{\uppr{b_i}(s_{i}) \ | \ s_{i}\in \h{s}_{i}\}$.
\end{enumerate}
\end{definition}
Figure \ref{fig:inn} shows the abstraction of the neural network in Figure \ref{fig:nn}, where the nodes $s_{1,1}$ and $s_{1,2}$ are merged and the nodes $s_{2,1}$ and
$s_{2,2}$ are merged.
Note that the edge from $\{s_{1, 1}, s_{1,2}\}$ to $\{s_{2,1}, s_{2,2}\}$ has weight interval $[14, 22]$, which is obtained by taking the convex hull of the four weights $7, 10, 8$ and $11$, and multiplying by $2$, the size of the source abstract node.
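The following Python sketch (illustrative only; the assignment of the concrete weights $7, 10, 8, 11$ to individual edges is arbitrary and does not affect the hull) computes the abstract weight intervals prescribed by Definition \ref{def:abs} and reproduces the interval $[14, 22]$ above.
\begin{verbatim}
# Illustrative sketch of the abstract weight construction: scale the convex
# hull of the concrete edge weights by the size of the source abstract node.
def abstract_weights(W_lo, W_hi, src_part, tgt_part):
    """W_lo/W_hi: dicts (source node, target node) -> weight bound.
    src_part/tgt_part: partitions as lists of frozensets of node names."""
    hat_lo, hat_hi = {}, {}
    for S in src_part:
        for T in tgt_part:
            los = [W_lo[(s, t)] for s in S for t in T]
            his = [W_hi[(s, t)] for s in S for t in T]
            hat_lo[(S, T)] = len(S) * min(los)   # |S| * lower end of the hull
            hat_hi[(S, T)] = len(S) * max(his)   # |S| * upper end of the hull
    return hat_lo, hat_hi

W = {("s11", "s21"): 7.0, ("s11", "s22"): 10.0,
     ("s12", "s21"): 8.0, ("s12", "s22"): 11.0}
src = [frozenset({"s11", "s12"})]
tgt = [frozenset({"s21", "s22"})]
lo, hi = abstract_weights(W, W, src, tgt)        # an NN has W_lo == W_hi
print(lo[(src[0], tgt[0])], hi[(src[0], tgt[0])])   # 14.0 22.0
\end{verbatim}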
The following theorem states the correctness of the construction of
$\ensuremath{T/P}$. It states that every input/output valuation that is admitted by
$T$ is also admitted by $\ensuremath{T/P}$, thus establishing the soundness of the abstraction.
\begin{theorem}
\label{thm:main}
Given an \ensuremath{\textit{INN}}{} $T = (k, \{ S_i\}_{i \in [k]},$ $\{\lwr{W_i}, \uppr{W_i}\}_{i \in [k-1]}, \{\lwr{b_i}, \uppr{b_i}\}_{i \in [k]/\{0\}})$ and a partition $P = \{P_i\}_{i \in [k]}$ of $T$ such that $P_0$ and $P_k$ are the partitions of $S_0$ and $S_k$ into singletons (that is, the input and output nodes are not merged), $\df{T} \subseteq\df{\ensuremath{T/P}}$.
\end{theorem}
We devote the rest of the section to sketching a proof of Theorem
\ref{thm:main}.
Broadly, the proof consists of relating the valuations in the $i$-th
layer of the concrete $\ensuremath{\textit{INN}}$ with the $i$-th layer of the abstract
$\ensuremath{\textit{INN}}$.
Note that the nodes in a particular layer of the abstract and the
concrete system might not be the same.
The following definition relates states in the concrete system to
those in the abstract system.
\begin{definition}
Given a valuation $v \in \ensuremath{\textit{Val}}(S_i)$, $\ab{v} = \{\hat{v} \in
\ensuremath{\textit{Val}}(\hat{S}_i) \,|\, \forall \hat{s} \in \hat{S}_i, \min_{s \in
\hat{s}} v(s) \leq \hat{v}(\hat{s}) \leq \max_{s \in
\hat{s}} v(s)\}$.
\end{definition}
Given a valuation $v$ of the $i$-th layer of the concrete system,
$\ab{v}$ consists of the set of all abstract valuations of the $i$-th
layer in the abstract system, where each abstract node gets a value
which is within the range of values of the corresponding concrete
nodes.
Proof of Theorem \ref{thm:main} relies on the following connection
between corresponding layers of the concrete and abstract $\ensuremath{\textit{INN}}$s.
\begin{lemma}
\label{lem:main}
If $(v,v')\in \dfu{T}{i}$, then $\ab{v'} \subseteq \ensuremath{\textit{Post}}_{\ensuremath{T/P}, i}(\ab{v})$.
\end{lemma}
The proof of Lemma \ref{lem:main} proceeds broadly as follows.
We first observe that the abstraction procedure corresponding to edges
between layer $i$ and layer $i+1$ can be decomposed into
two steps, wherein we first merge the nodes of the $i$-th layer
and then we merge the nodes of the $i+1$-st layer.
Note that
\[\ensuremath{\lwr{\widehat{W}_i}}(\ensuremath{\hat{s}_i}, \hat{s}_{i+1}) = |\h{s}_i| \min_{ s_i\in \h{s}_i, \ s_{i+1}\in
\h{s}_{i+1}} \lwr{W_i}(s_i, s_{i+1}) = \min_{\ s_{i+1}\in
\h{s}_{i+1}} |\h{s}_i| \min_{ s_i\in \h{s}_i} \lwr{W_i}(s_i,
s_{i+1}) \]
A similar observation can be made about the $\max$.
Hence, our first step consists of a function $\ensuremath {\textit{labs}}$ which merges the
nodes in the ``left'' layer and associates an interval with the edges
which corresponds to computing the convex hull followed by multiplying
with an appropriate factor. Next, the function $\ensuremath {\textit{rabs}}$ merges the
nodes in the ``right'' layer and associates an interval which
corresponds to only computing the convex hull.
Next, we define these abstraction functions, and state their relation
with the concrete systems.
\begin{definition}\label{def:labs}
Given an \ensuremath{\textit{INN}}{} $T= (k, \{ S_i\}_{i \in [k]}, \{\lwr{W_i},
\uppr{W_i}\}_{i \in [k-1]}, \{\lwr{b_i}, \uppr{b_i}\}_{i \in [k]/\{0\}})$,
$j \in [k-1]$ and $P$ which is a partition of the $j$-th layer of $\ensuremath{{\cal T}}$, we define an
\ensuremath{\textit{INN}}{} $\ensuremath {\textit{labs}}(\ensuremath{{\cal T}}, j, P) = (1, \{ \h{S_i} \}_{i \in [1]} , \{\hat{W}_i^l, \hat{W}_i^u\}_{i \in [0]}, \{\hat{b}_i^l, \hat{b}_i^u\}_{i \in \{1\}})$, where
\begin{enumerate}[-]
\item $\h{S_0} = P, \ \h{S_1} = S_{j+1}$;
\item $\forall \h{s}_0 \in \h{S}_0, \h{s}_1 \in \h{S}_1$,
$\hat{W}_0^l(\h{s}_0, \h{s}_1) = |\h{s}_0| \min \{\lwr{W_j}(s_0, \h{s}_{1})\ | \
s_0\in \h{s}_0\}$, and $\hat{W}_0^u(\h{s}_0, \h{s}_1) = |\h{s}_0| \max
\{\uppr{W_j}(s_0, \h{s}_{1}) \ | \ s_0\in \h{s}_0\}$;
\item $\forall \h{s}_1 \in \h{S}_1,$ $\hat{b}_1^l(\h{s}_1)
=\lwr{b_{j+1}}(\h{s}_{1})$, and $\hat{b}_1^u(\h{s}_1)
=\uppr{b_{j+1}}(\h{s}_{1})$.
\end{enumerate}
\end{definition}
\begin{figure}
\caption{A left abstraction illustration}
\label{fig:labs}
\caption{A right abstraction illustration}
\label{fig:rabs}
\end{figure}
Figure \ref{fig:labs} shows the left abstraction of the neural network in Figure \ref{fig:nn} with respect to layer $1$, where the nodes $s_{1,1}$ and $s_{1, 2}$ are merged. The edge from $\{s_{1,1}, s_{1,2}\}$ to $s_{2, 1}$ has weight $[14, 20]$, which is obtained by taking the convex hull of the values $7$ and $10$ and multiplying by $2$.
\begin{definition}{}
\label{def:rabs}
Given an \ensuremath{\textit{INN}}{} $\ensuremath{{\cal T}}=(1, \{ S_i\}_{i \in [1]}, \{\lwr{W_i},
\uppr{W_i}\}_{i \in [0]},
\{\lwr{b_i}, \uppr{b_i}\}_{i \in \{1\}})$ and $P$ which is a partition of the
layer $1$ of $\ensuremath{{\cal T}}$, we define an \ensuremath{\textit{INN}}{} $\ensuremath {\textit{rabs}}(\ensuremath{{\cal T}},P) = (1, \{ \h{S_i} \}_{i \in [1]} , \{\hat{W}_i^l, \hat{W}_i^u\}_{i \in [0]}, \{\hat{b}_i^l, \hat{b}_i^u\}_{i \in \{1\}})$, where
\begin{enumerate}[-]
\item $\h{S_0} = S_{0}, \ \h{S_1} = P$;
\item $\forall \h{s}_0 \in \h{S}_0, \h{s}_1 \in \h{S}_1$,
$\hat{W}_0^l(\h{s}_0, \h{s}_1) = \min \{\lwr{W_0}(\h{s}_0, s_{1})\ | \
s_1\in \h{s}_1\}$, and $\hat{W}_0^u(\h{s}_0, \h{s}_1) = \max
\{\uppr{W_0}(\h{s}_0, s_{1}) \ | \ s_1\in \h{s}_1\}$;
\item $\forall \h{s}_1 \in \h{S}_1, \ \hat{b}_1^l(\h{s}_1) = \min \{
\lwr{b_1}(s_{1})\ | \ s_{1}\in \h{s}_1 \}$, and $\hat{b}_1^u(\h{s}_1) =
\max \{ \uppr{b_1}(s_{1})\ | \ s_{1}\in \h{s}_1 \}$.
\end{enumerate}
\end{definition}
Figure \ref{fig:rabs} shows the right abstraction of the interval neural network in Figure \ref{fig:labs}, where the nodes $s_{2,1}$ and $s_{2, 2}$ are merged. The edge from $\{s_{1,1}, s_{1,2}\}$ to $\{s_{2, 1}, s_{2, 2}\}$ has weight $[14, 22]$, which is obtained by taking the convex hull of the intervals $[14, 20]$ and $[16, 22]$. Note that Figure \ref{fig:rabs} is the same as Figure \ref{fig:inn} restricted to layers $1$ and $2$.
Note that applying the left abstraction followed by right abstraction
to the $j$-th layer of $\ensuremath{{\cal T}}$ gives us the $j$-th layer of $\ensuremath{T/P}$. This is
stated in the following lemma.
\begin{lemma}{}
\label{lem:rl}
$\ensuremath {\textit{rabs}}(\ensuremath {\textit{labs}}(\ensuremath{{\cal T}}, j, P_j), P_{j+1}) = (1, \{ \hat{S}_{i+j}\}_{i \in [1]},
\{\hat{W}_{i+j}^l, \hat{W}_{i+j}^u\}_{i \in [0]}, \{\hat{b}_{i+j}^l,$ $
\hat{b}_{i+j}^u\}_{i \in \{1\}})$.
\end{lemma}
Proof of Lemma \ref{lem:main} relies on some crucial properties which
we state below.
The crux of the proof of the correctness of left abstraction lies in
the following proposition. It states that the contribution of the
values of a set of left nodes $\{s_1, \cdots, s_n\}$ on a right node $s$ can be
simulated in a left abstraction that merges $\{s_1, \cdots, s_n\}$ by
the average of the values.
\begin{proposition}
\label{prop:avg}
Let $v_1, v_2, \ldots, v_n$ be non-negative real numbers and let
$w_1, w_2, \ldots, w_n$ be real numbers.
Let $\bar{v} = \sum_i v_i / n$.
There exists a $w$ such that $n \min_i w_i \leq w \leq n \max_i w_i$ and
$\sum_i w_i v_i = \bar{v} w$.
\end{proposition}
\begin{proposition}
\label{prop:labs}
If $(v_1,v_2)\in\dfu{T}{j}$ and $P$ is a partition of the $j$-th layer of $\ensuremath{{\cal T}}$, then $v_2\in \ensuremath{\textit{Post}}_{\ensuremath {\textit{labs}}(\ensuremath{{\cal T}}, j, P)}(\ab{v_1})$.
\end{proposition}
Next, we state the correctness of $\ensuremath {\textit{rabs}}$. Here we show that
given any valuation $v$ of the right layer in the concrete system, any
valuation $\hat{v} \in \ab{v}$ can be obtained in the abstraction.
It relies on the observation that $\hat{v}(\hat{s})$ is a convex
combination of $\min_{s \in \hat{s}} v(s)$ and $\max_{s \in
\hat{s}} v(s)$, and that the weight interval of an abstract edge is the convex hull
of the intervals of the corresponding concrete edges.
\begin{proposition}
\label{prop:rabs}
Given an $\ensuremath{\textit{INN}}$ $\ensuremath{{\cal T}}$ with one layer, and a partition $P$ of layer $1$, if $(v_1,v_2)\in\df{T}$, then
$\ab{v_2} \subseteq \ensuremath{\textit{Post}}_{\ensuremath {\textit{rabs}}(\ensuremath{{\cal T}}, P)}(v_1)$.
\end{proposition}
Proofs are omitted due to space constraints and are provided in the
supplementary material.
\subsection{Encoding the interval neural network and \ensuremath{\textit{MILP}}{} solver}
\label{sec:encode}
In this section, we present a reduction of the range computation
problem to solving a mixed integer linear program.
The ideas are similar to those in~\citep{saftey5, cheng} using the big-M method.
However, since our edge weights are not unique but come from
an interval, a direct application of the previous encodings, in which each
constant weight is replaced by a variable constrained to lie in the
corresponding interval, results in non-linear constraints.
We observe, instead, that the weight variable can be eliminated by
replacing it appropriately with the minimum and maximum values of the
interval corresponding to it.
We encode the semantics of an $\ensuremath{\textit{INN}}$ $\ensuremath{{\cal T}}$ as a constraint $\ensuremath{\textit{Enc}}(\ensuremath{{\cal T}})$
over the following variables. For every node $s$ of the $\ensuremath{\textit{INN}}$ $\ensuremath{{\cal T}}$, we have a real valued variable $x_s$, and we have a binary variable $q_s$ that takes values in $\{0, 1\}$.
Let $X_i$ denote the set of variables $\{x_s \,|\, s \in S_i\}$, and
$Q_i = \{q_s \ | \ s \in S_i\}$.
Given a valuation $v \in \ensuremath{\textit{Val}}(S_i)$, we will abuse notation and use
$v$ to also denote a valuation of $X_i$, wherein $v$ assigns to $x_s
\in X_i$, the valuation $v(s)$, and vice versa.
Let $X = \cup_i X_i$ and $Q = \cup_i Q_i$.
$\ensuremath{\textit{Enc}}(\ensuremath{{\cal T}})$ is the union of the encodings of the different layers of
$\ensuremath{{\cal T}}$, that is, $\ensuremath{\textit{Enc}}(\ensuremath{{\cal T}}) = \cup_i \ensuremath{\textit{Enc}}(\ensuremath{{\cal T}}, i)$, where $\ensuremath{\textit{Enc}}(\ensuremath{{\cal T}}, i)$
denotes the constraints corresponding to layer $i$ of $\ensuremath{{\cal T}}$.
$\ensuremath{\textit{Enc}}(\ensuremath{{\cal T}}, i)$ in turn is the union of constraints corresponding to the
different nodes in layer $i+1$, that is, $\ensuremath{\textit{Enc}}(\ensuremath{{\cal T}}, i) = \cup_{s' \in
S_{i+1}} C_{s'}^{i+1}$, where the constraints in $C_{s'}^{i+1}$ are
as below:
\begin{align}
C_{s'}^{i+1}:
\begin{cases}
\sum_{s \in S_i} \lwr{W_i}(s, s')x_s +\lwr{b_{i+1}}(s') \leq x_{s'}, \ 0\leq x_{s'}\\
\sum_{s \in S_i} \uppr{W_i}(s, s')x_s +\uppr{b_{i+1}}(s')+ Mq_{s'} \geq x_{s'}, \ M(1-q_{s'}) \geq x_{s'}
\end{cases}
\end{align}
Here, $M$ is an upper bound on the absolute value that any neuron can take (before applying the $\ensuremath{\textit{ReLU}}$ operation) for a given input set.
It can be estimated using the norms
of the weights interpreted as matrices, that is, $\lVert \lwr{W_i}
\rVert$ and $\lVert \uppr{W_i}
\rVert$, an interval box around the input polyhedron, and the norms of
biases.
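As a sketch of how this encoding could be assembled in practice (this is not the implementation used in our experiments), the following Python fragment builds the constraints $C_{s'}^{i+1}$ as plain coefficient dictionaries, in a form that could then be handed to any \ensuremath{\textit{MILP}}{} solver with a Python interface.
\begin{verbatim}
# Illustrative sketch: assemble the big-M constraints C^{i+1}_{s'} as
# (coefficients, sense, right-hand side) triples over the real variables
# ("x", node) and the binary variables ("q", node).
def encode_layer(W_lo, W_hi, b_lo, b_hi, prev_nodes, next_nodes, M):
    cons = []
    for t in next_nodes:
        # sum_s W_lo(s,t) x_s + b_lo(t) <= x_t        and        x_t >= 0
        lo = {("x", s): W_lo[(s, t)] for s in prev_nodes}
        lo[("x", t)] = lo.get(("x", t), 0.0) - 1.0
        cons.append((lo, "<=", -b_lo[t]))
        cons.append(({("x", t): -1.0}, "<=", 0.0))
        # sum_s W_hi(s,t) x_s + b_hi(t) + M q_t >= x_t
        hi = {("x", s): -W_hi[(s, t)] for s in prev_nodes}
        hi[("x", t)] = hi.get(("x", t), 0.0) + 1.0
        hi[("q", t)] = -M
        cons.append((hi, "<=", b_hi[t]))
        # x_t <= M (1 - q_t)
        cons.append(({("x", t): 1.0, ("q", t): M}, "<=", M))
    return cons

# Tiny example: one edge a -> b with weight interval [1, 2] and bias [0, 0].
for c in encode_layer({("a", "b"): 1.0}, {("a", "b"): 2.0},
                      {"b": 0.0}, {"b": 0.0}, ["a"], ["b"], M=100.0):
    print(c)
\end{verbatim}
Bounds for the input variables (the set $\ensuremath{\textit{I}}$) and the objective of maximising or minimising an output variable $x_s$ would be added in the same style before invoking a solver.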
Next, we state and prove the correctness of the encoding. More
precisely, we show that $(v, v') \in \dfu{T}{i}$ if and only if there are
valuations for the variables in $Q_i$ such that the constraints
$\ensuremath{\textit{Enc}}(T, i)$ are satisfied when the values for $X_i$ and $X_{i+1}$ are
provided by $v$ and $v'$.
\begin{theorem}\label{theorm:encoding}
Let $v \in \ensuremath{\textit{Val}}(S_i)$ and $v' \in \ensuremath{\textit{Val}}(S_{i+1})$.
Then $(v, v') \in \dfu{T}{i}$ if and only if there is a valuation $z
\in \ensuremath{\textit{Val}}(Q_i)$, such that $\ensuremath{\textit{Enc}}(T, i)$ is satisfied with values $v,
v'$ and $z$.
\end{theorem}
We can now compute the output range analysis by solving a maximization
and a minimization problem for each output variable.
More precisely, for each $s \in S_k$, the output layer, we solve:
$\max x_s$ such that $\ensuremath{\textit{Enc}}(\ensuremath{{\cal T}})$ and $\ensuremath{\textit{I}}$ hold, where $\ensuremath{\textit{I}}$ is a
constraint on the input variables encoding the set of input
valuations.
Similarly, we solve a minimization problem, and thus obtain an output
range for the variable $x_s$ given the input set of valuations $\ensuremath{\textit{I}}$.
The maximization and minimization problems can be solved using mixed
integer linear programming ($\ensuremath{\textit{MILP}}$) if $\ensuremath{\textit{I}}$ is specified using linear
constraints.
Even checking satisfiability of a set of mixed integer linear
constraints is an NP-hard problem; however, there are commercial software tools
that solve $\ensuremath{\textit{MILP}}$ problems, such as Gurobi and CPLEX.
\section{Implementation}
In this section, we present our experimental analysis using a Python
toolbox that implements the abstraction procedure and the reduction of
the $\ensuremath{\textit{INN}}$ output range computation to $\ensuremath{\textit{MILP}}$ solving.
We consider as a case study the ACAS Xu benchmarks, which are neural
networks with $6$ hidden layers, each consisting of $50$
neurons~\citep{bunel}.
We report here the results for one of the benchmarks; we observed
similar behavior with several other benchmarks.
We consider abstractions of the benchmark with different numbers of
abstract nodes, namely, $2, 4, 8, 16, 32$, where the partitions are generated
randomly.
For a fixed number of abstract nodes, we perform $30$ different random
runs, and measure the average, maximum and minimum time for different
parts of the analysis.
Similarly, we compute the output range for a fixed number of abstract
nodes, and obtain the average, maximum and minimum of the lower and
upper bounds of the output ranges. The lower bound was uniformly
$0$; hence, we do not report it here.
The results are summarized in Figures
~\ref{fig:abstract},~\ref{fig:encoding},~\ref{fig:gurobi}, and~\ref{fig:range}.
As shown in Figure~\ref{fig:abstract}, the abstraction construction
time increases gradually with the number of abstract neurons.
We observe a similar trend with encoding time.
However, the time taken by Gurobi to solve the $\ensuremath{\textit{MILP}}$ problems
increases drastically after a certain number of abstract nodes. Also, as
shown in Figure~\ref{fig:gurobi}, the $\ensuremath{\textit{MILP}}$ solving time by Gurobi
is the most expensive part of the overall computation.
Since this time grows rapidly with the number of abstract nodes,
the abstraction procedure proposed in this paper has the potential to
reduce the range computation time drastically.
In fact, Gurobi did not return a result when the ACAS Xu benchmark was encoded
without any abstraction, thus demonstrating the usefulness of the
abstraction.
\begin{figure}
\caption{Abstraction Time}
\label{fig:abstract}
\caption{Encoding Time}
\label{fig:encoding}
\end{figure}
\begin{figure}
\caption{\ensuremath{\textit{MILP}}{} Solving Time}
\label{fig:gurobi}
\caption{Output Range}
\label{fig:range}
\end{figure}
We compare output ranges (upper bounds) based on different
abstractions. The upper bound of the output range decreases as we consider
more abstract nodes, since the abstraction becomes more precise.
In fact, it decreases very drastically over the first few abstractions.
We compute the average, minimum and maximum of the upper bound on the
output range.
Even for a fixed number of abstract nodes, the maximum and minimum of
the upper bound on the output range among the random runs differ
widely, and depend on the specific partitioning.
For instance, as seen in Figure~\ref{fig:range}, even for the abstractions with
only $2$ abstract nodes, the upper bound on the output range varies by a
factor of $2$ across the random runs.
This suggests that the partitioning strategy can play a crucial role in
the precision of the output range.
Hence, we plan to explore partitioning strategies in the future.
To conclude, our method provides a trade-off between
verification time and the precision of the output range depending on
the size of the abstraction.
\section{Conclusions}
In this paper, we investigated a novel abstraction technique for
reducing the state-space of neural networks by introducing the concept
of interval neural networks. Our abstraction technique is orthogonal
to existing techniques for analyzing neural networks.
Our experimental results demonstrate the usefulness of the abstraction
procedure in computing the output range of the neural network, and the
trade-off between the precision of the output range and the computation
time.
However, the precision of the output range is affected by the specific
choice of the partition of the concrete nodes even for a fixed number
of abstract nodes. Our future direction will consist of exploring
different partition strategies for the abstraction with the aim of obtaining
precise output ranges. In addition, we will consider more complex
activation functions. We expect our abstraction technique to extend in a
straightforward manner; however, we will need to investigate methods
for analyzing the ``interval'' version of the neural network for these
new activation functions.
\section*{Acknowledgments}
Pavithra Prabhakar was partially supported by NSF CAREER Award No. 1552668 and ONR YIP Award No. N000141712577.
\section{Supplementary material}
\paragraph{Proof of Proposition \ref{prop:avg}.}
If $\bar{v} = 0$, then every $v_i$ is $0$ (the $v_i$ being non-negative), and any $w$ with
$n \min_i w_i \leq w \leq n \max_i w_i$ satisfies $\sum_i w_i v_i = 0 = \bar{v} w$.
So assume $\bar{v} > 0$ and let $w = \sum_i w_i v_i / \bar{v}$. Then it trivially satisfies
$\sum_i w_i v_i = \bar{v} w$.
Let $w_\min = \min_i w_i$ and $w_\max = \max_i w_i$.
We need to show that $n w_\min \leq w \leq n w_\max$.
Since the $v_i$ are non-negative and $\bar{v} > 0$, we have
$w = \sum_i w_i v_i / \bar{v} \geq \sum_i w_\min v_i /
\bar{v} = w_\min \sum_i v_i /
\bar{v} = w_\min n \bar{v}/\bar{v} = n w_\min$.
Similarly, we can show that $w = \sum_i w_i v_i / \bar{v} \leq n w_\max$.
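As a quick sanity check (illustrative only, using randomly generated data), the following Python fragment verifies the statement of Proposition \ref{prop:avg} numerically for non-negative $v_i$.
\begin{verbatim}
# Illustrative numerical check of the averaging argument: for non-negative
# v_i, the weight w = (sum_i w_i v_i) / vbar lies in [n*min w_i, n*max w_i].
import random

for _ in range(1000):
    n = random.randint(1, 6)
    v = [random.uniform(0.0, 5.0) for _ in range(n)]
    w = [random.uniform(-3.0, 3.0) for _ in range(n)]
    vbar = sum(v) / n
    target = sum(wi * vi for wi, vi in zip(w, v))
    if vbar == 0.0:
        w_hat = n * min(w)   # degenerate case: all v_i are zero
    else:
        w_hat = target / vbar
    assert n * min(w) - 1e-9 <= w_hat <= n * max(w) + 1e-9
    assert abs(vbar * w_hat - target) <= 1e-9
print("averaging property verified on random non-negative inputs")
\end{verbatim}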
The next proof establishes the relation, stated in Proposition \ref{prop:labs},
between a layer of the concrete system and its left abstraction.
\paragraph{Proof of Proposition \ref{prop:labs}.}
Let $(v_1,v_2)\in\dfu{T}{j}$.
From the definition of the semantics of an \ensuremath{\textit{INN}}{} given by Definition
\ref{def:sem_inn}, we know that for any $s' \in S_{j+1}$, $v_2(s') =
\sigma(\sum_{s \in S_j} w_{s, s'} v_1(s) + b_{s'})$, where for every
$s$, $\lwr{W_j}(s, s') \leq w_{s,s'} \leq \uppr{W_j}(s, s'), \
\lwr{b_{j+1}}(s') \leq b_{s'} \leq \uppr{b_{j+1}}(s')$.
We can group together all neurons that are merged together in $P$, and
rewrite the above as
$v_2(s') = \sigma(\sum_{\hat{s} \in P} \sum_{s \in \hat{s}} w_{s, s'}
v_1(s) + b_{s'})$.
From Proposition \ref{prop:avg}, we can replace $\sum_{s \in \hat{s}} w_{s, s'}
v_1(s)$ by $v_{\hat{s}} w_{\hat{s}}$, where $v_{\hat{s}} = \sum_{s \in
\hat{s}} v_1(s) / \norm{\hat{s}}$ and $w_{\hat{s}}$ is such that
$\norm{\hat{s}} \min_{s \in \hat{s}} w_{s, s'} \leq w_{\hat{s}} \leq
\norm{\hat{s}} \max_{s \in \hat{s}} w_{s, s'}$.
Consider a valuation $\hat{v}_1$, where $\hat{v}_1(\hat{s}) =
v_{\hat{s}} = \sum_{s \in
\hat{s}} v_1(s) / \norm{\hat{s}}$. Since, the average is in between the minimum and maximum
values, $\hat{v}_1 \in \ab{v_1}$.
Now $v_2$ can be rewritten using $\hat{v}_1$ as
$v_2(s') = \sigma(\sum_{\hat{s} \in P} v_{\hat{s}} w_{\hat{s}} +
b_{s'}) = \sigma(\sum_{\hat{s} \in P} \hat{v}_1(\hat{s}) w_{\hat{s}} +
b_{s'})$, where $\norm{\hat{s}} \min_{s \in \hat{s}} w_{s, s'} \leq w_{\hat{s}} \leq
\norm{\hat{s}} \max_{s \in \hat{s}} w_{s, s'}$.
Since $w_{s, s'}$ also satisfies $\norm{\hat{s}} \min_{s \in \hat{s}}
W_0^l(s, s') \leq w_{\hat{s}} \leq
\norm{\hat{s}} \max_{s \in \hat{s}}$ $ W_0^u(s, s')$, we see that
$v_2\in \ensuremath{\textit{Post}}_{\ensuremath {\textit{labs}}(\ensuremath{{\cal T}}, j, P)}(\ab{v_1})$ (since, $v_2$ and
$\hat{v}_1$ satisfy the semantics of $\ensuremath {\textit{labs}}(\ensuremath{{\cal T}}, j, P)$).
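To make the construction used in this proof concrete, here is a minimal Python sketch (our own illustration with hypothetical names, not the implementation accompanying the paper) of the two ingredients: for a fixed output node $s'$, each partition class $\hat{s}$ gets the weight interval $[\,\norm{\hat{s}}\min_{s\in\hat{s}} \lwr{W_0}(s,s'),\ \norm{\hat{s}}\max_{s\in\hat{s}} \uppr{W_0}(s,s')\,]$, and the abstract valuation assigns to $\hat{s}$ the average of the merged concrete values.
\begin{verbatim}
def abstract_weights(W_low, W_up, partition, s_prime):
    """Interval weight of each partition class towards node s_prime:
    [ |class| * min lower weight, |class| * max upper weight ]."""
    intervals = []
    for cls in partition:
        lo = len(cls) * min(W_low[(s, s_prime)] for s in cls)
        hi = len(cls) * max(W_up[(s, s_prime)] for s in cls)
        intervals.append((lo, hi))
    return intervals

def abstract_valuation(v1, partition):
    """Abstract value of each class = average of the merged concrete values."""
    return [sum(v1[s] for s in cls) / len(cls) for cls in partition]

# tiny example: two concrete nodes "a" and "b" merged into one abstract node
W_low = {("a", "y"): 0.5, ("b", "y"): 1.0}
W_up  = {("a", "y"): 0.6, ("b", "y"): 1.2}
print(abstract_weights(W_low, W_up, [["a", "b"]], "y"))        # [(1.0, 2.4)]
print(abstract_valuation({"a": 2.0, "b": 4.0}, [["a", "b"]]))  # [3.0]
\end{verbatim}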
\paragraph{Proof of Proposition \ref{prop:rabs}.}
Consider $\hat{v}_2 \in \ab{v_2}$. Then $\hat{v}_2({\hat{s}}') = \alpha
v_2(s'_1) + (1 - \alpha) v_2(s'_2)$, where $s'_1$ is the node in
${\hat{s}}'$ for which $v_2$ at the node is the minimum and $s'_2$ is the node in
${\hat{s}}'$ for which $v_2$ at the node is the maximum.
Let $S_0$ and $S_1$ be the nodes in the left and right layers of $\ensuremath{{\cal T}}$.
$v_2(s'_i) = \sigma(\sum_{s \in S_0} w_{s, s'_i} v_1(s) + b_{s'_i})$, where for every
$s$, $\lwr{W_0}(s, s'_i) \leq w_{s,s'_i} \leq \uppr{W_0}(s, s'_i), \
\lwr{b_1}(s'_i) \leq b_{s'_i} \leq \uppr{b_1}(s'_i)$.
$\hat{v}_2({\hat{s}}') = \alpha
v_2(s'_1) + (1 - \alpha) v_2(s'_2) = \alpha \sigma(\sum_{s \in S_0}
w_{s, s'_1} v_1(s) + b_{s'_1}) + (1 - \alpha) \sigma(\sum_{s \in S_0}
w_{s, s'_2} v_1(s) + b_{s'_2})$.
Let us first consider the case where the expressions within $\sigma$
are non-negative.
Then $\hat{v}_2({\hat{s}}') = \alpha (\sum_{s \in S_0}
w_{s, s'_1} v_1(s) + b_{s'_1}) + (1 - \alpha) (\sum_{s \in S_0}
w_{s, s'_2} v_1(s) + b_{s'_2}) = \sum_{s \in S_0}
(\alpha w_{s, s'_1} + (1- \alpha) w_{s, s'_2}) v_1(s) + (\alpha
b_{s'_1} + (1 - \alpha) b_{s'_2}) = \sum_{s \in S_0}
w_{s, s'_1, s'_2} v_1(s) + b_{s'_1, s'_2}$, where
$w_{s, s'_1, s'_2} = \alpha w_{s, s'_1} + (1- \alpha) w_{s, s'_2}$
and $b_{s'_1, s'_2}= \alpha
b_{s'_1} + (1 - \alpha) b_{s'_2}$.
Note that $w_{s, s'_1, s'_2}$ and $b_{s'_1, s'_2}$ lie within the edge-weight
and bias intervals of the abstract system.
If $\sum_{s \in S_0}
w_{s, s'_2} v_1(s) + b_{s'_2}$ is negative, then $v_2(s'_2) = 0$, and since
$v_2(s'_1) \leq v_2(s'_2)$ and $v_2$ is non-negative, also $v_2(s'_1) = 0$;
hence, $\hat{v}_2({\hat{s}}') = 0$ can be simulated
using either the values used to obtain $v_2(s'_1)$ or $v_2(s'_2)$.
If $v_2(s'_1) = 0$, but $v_2(s'_2) > 0$, then we note that $\sum_{s \in S_0}
w_{s, s'_1} v_1(s) + b_{s'_1} \leq 0$ and $\sum_{s \in S_0}
w_{s, s'_2} v_1(s) + b_{s'_2} > 0$, so any convex combination of the two
can still be obtained using $v_1$; in particular, $(1- \alpha) (\sum_{s \in S_0}
w_{s, s'_2} v_1(s) + b_{s'_2})$ lies between the two values and can be
obtained from $v_1$, and applying $\sigma$ to it gives us $\hat{v}_2({\hat{s}}')$.
\paragraph{Proof of Lemma \ref{lem:main}.} Suppose $(v,v')\in
\dfu{T}{i}$. Then from Proposition \ref{prop:labs}, we have $v'\in
\ensuremath{\textit{Post}}_{\ensuremath {\textit{labs}}(\ensuremath{{\cal T}}, i, P_{i+1})}(\ab{v})$, and further, from Proposition
\ref{prop:rabs}, we have $\ab{v'} \subseteq \ensuremath{\textit{Post}}_{\ensuremath {\textit{rabs}}(\ensuremath {\textit{labs}}(\ensuremath{{\cal T}}, i,
P_{i+1}),P_i)}(\ab{v})$.
Finally, from Lemma \ref{lem:rl}, we obtain that $\ab{v'} \subseteq
\ensuremath{\textit{Post}}_{\ensuremath{T/P}, i}(\ab{v})$.
\paragraph{Proof of Theorem \ref{thm:main}.}
Suppose $(v, v') \in \df{T}$, then there exists a sequence of
valuations $v_0, v_1, \cdots, v_k$, where $v_0 = v$, $v_k = v'$ and $v_i \in
\ensuremath{\textit{Post}}_{\ensuremath{{\cal T}}, i}(v_{i-1})$ for $i > 0$.
From Lemma \ref{lem:main}, we know that since $(v_i, v_{i+1}) \in
\dfu{T}{i}$, $\ab{v_{i+1}} \subseteq
\ensuremath{\textit{Post}}_{\ensuremath{T/P}, i}(\ab{v_i})$.
Since $\df{T}$ is the composition of the $\dfu{T}{i}$, we obtain that
$\ab{v_k} \subseteq \ensuremath{\textit{Post}}_{\ensuremath{T/P}}(\ab{v_0})$.
If the nodes in the input and output layer are not merged, then
$\ab{v_0} = \{v_0\} = \{v\}$ and $\ab{v_k} = \{v_k\} = \{v'\}$.
Therefore, $\{v'\} \subseteq \ensuremath{\textit{Post}}_{\ensuremath{T/P}}(\{v\})$, that is, $(v, v')$ is in the
input-output relation of the abstract network $\ensuremath{T/P}$.
\paragraph{Proof of Theorem \ref{theorm:encoding}.}
First, let us prove that if $(v, v') \in \dfu{T}{i}$, then there is a
valuation $z \in \ensuremath{\textit{Val}}(Q_i)$, such that $\ensuremath{\textit{Enc}}(T, i)$ is satisfied with
values $v, v'$ and $z$.
In fact, it suffices to fix an $s'$ and show that $C_{s'}^{i+1}$ is
satisfied by $v, v'(s')$ and $z(q_{s'})$.
First, note that $v'(s') \geq 0$ since it is obtained by applying the
$\ensuremath{\textit{ReLU}}$ function, so the second constraint in $C_{s'}^{i+1}$ is
satisfied.
From the semantics, we know that $v'(s') = \sigma(\sum_{s \in S_i}
w_{s, s'} v(s) + b_{s'})$, where $W_i^l(s, s') \leq w_{s, s'} \leq
W_i^u(s, s')$ and $b_i^l(s') \leq b_{s'} \leq b_i^u(s')$.
Hence, $\sigma(\sum_{s \in S_i} W_i^l(s, s') v(s) +b_i^l(s') ) \leq v'(s') \leq \sigma(\sum_{s \in S_i}
W_i^u(s, s') v(s) + b_i^u(s'))$.
Let $v''(s') = \sum_{s \in S_i}
w_{s, s'} v(s) + b_{s'}$, that is, $v'(s') = \sigma(v''(s'))$.
Case $v''(s') \geq 0$: $v'(s') = v''(s')$ and we have $\sum_{s \in
S_i} W_i^l(s, s') v(s) +b_i^l(s') \leq v'(s') \leq \sum_{s \in
S_i} W_i^u(s, s') v(s) + b_i^u(s')$. Hence, for $z(q_{s'}) = 0$, the first,
third and fourth constraints in $C_{s'}^{i+1}$ are satisfied.
Case $v''(s') < 0$: In this case, $v'(s') = 0$ and we set $z(q_{s'}) = 1$.
$\sum_{s \in S_i} W_i^l(s, s') v(s) +b_i^l(s') \leq \sum_{s \in S_i}
w_{s, s'} v(s) + b_{s'} = v''(s') < 0 = v'(s')$, so the first
constraint is satisfied.
$\sum_{s \in S_i} W_i^u(s, s') v(s) + b_i^u(s') + Mq_{s'} = \sum_{s
\in S_i} W_i^u(s, s') v(s) + b_i^u(s') + M$.
Since $M$ is an upper bound on the absolute value of $x_{s'}$ before
applying the $\ensuremath{\textit{ReLU}}$ operation, $\sum_{s
\in S_i} W_i^u(s, s') v(s) + b_i^u(s') + M$ is non-negative, and hence,
the third constraint is satisfied.
The fourth constraint is satisfied by the choice of $M$, that is, $M
\geq x_{s'}$.
Next, we prove the other direction: if $C_{s'}^{i+1}$ is
satisfied for every $s'$ by some $v$, $v'$ and $z$, then we show that $(v,
v') \in \dfu{T}{i}$.
Case $z(q_{s'}) = 0$: In this case, we have
$\sum_{s \in S_i} \lwr{W_i}(s, s')x_s +\lwr{b_{i}}(s') \leq x_{s'}
\leq \sum_{s \in S_i} \uppr{W_i}(s, s')x_s +\uppr{b_{i}}(s')$.
Since $\ensuremath{\textit{ReLU}}$ is a monotonic function and $x_{s'} \geq 0$ by the
second constraint, we have $\sigma(x_{s'}) = x_{s'}$ and hence,
$\sigma(\sum_{s \in S_i} \lwr{W_i}(s, s')x_s +\lwr{b_{i}}(s')) \leq
x_{s'} \leq \sigma(\sum_{s \in S_i} \uppr{W_i}(s, s')x_s
+\uppr{b_{i}}(s'))$.
Hence, $x_{s'} = \sigma(\sum_{s \in S_i} w_{s, s'} x_s
+ b_{s'})$ for some $\lwr{W_i}(s, s') \leq w_{s, s'} \leq
\uppr{W_i}(s, s')$ and $\lwr{b_{i}}(s') \leq b_{s'} \leq
\uppr{b_{i}}(s')$.
Hence, $v'(s')$ is obtained from $v$ using the definition of $\dfu{T}{i}$.
Case $z(q_{s'}) = 1$: In this case, $x_{s'} = 0$ and $\sum_{s \in S_i}
\lwr{W_i}(s, s')x_s +\lwr{b_{i}}(s') \leq x_{s'} = 0$.
Therefore $\sigma(\sum_{s \in S_i}
\lwr{W_i}(s, s')x_s +\lwr{b_{i}}(s')) = 0 = x_{s'}$, and hence
$v'(s')$ is obtained from $v$ using the definition of $\dfu{T}{i}$.
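For readers who wish to experiment numerically with the interval semantics used throughout these proofs, the following Python sketch (our own illustration; the function and variable names are hypothetical) computes, for a fixed non-negative input valuation $v$ and entry-wise weight and bias intervals, the interval $[\sigma(\sum_{s} W_i^l(s,s') v(s) + b_i^l(s')),\ \sigma(\sum_{s} W_i^u(s,s') v(s) + b_i^u(s'))]$ of attainable post-$\ensuremath{\textit{ReLU}}$ values for a single node $s'$; this is exactly the bound exploited in both directions of the proof above.
\begin{verbatim}
def relu(x):
    return max(0.0, x)

def attainable_interval(v, w_low, w_up, b_low, b_up):
    """For a non-negative input valuation v and entry-wise weight/bias
    intervals, return the interval of values ReLU(sum_s w_s * v[s] + b)
    can take when each w_s and b is picked within its interval."""
    lo = sum(w_low[s] * v[s] for s in v) + b_low
    hi = sum(w_up[s] * v[s] for s in v) + b_up
    return relu(lo), relu(hi)  # pre-activation is monotone in w, b since v >= 0

v = {"s1": 1.0, "s2": 2.0}
print(attainable_interval(v,
                          w_low={"s1": -1.0, "s2": 0.5},
                          w_up={"s1": -0.5, "s2": 1.0},
                          b_low=-0.2, b_up=0.3))  # (0.0, 1.8)
\end{verbatim}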
\end{document}
\begin{document}
\title{The Complexity of Corporate Culture as a Potential Source of Firm Profit Differentials}
\begin{abstract}
\justifying \sloppy
This paper proposes an addition to the firm-based perspective on intra-industry profitability differentials by modelling a business organisation as a complex adaptive system.
The presented agent-based model introduces an endogenous similarity-based social network and employees' reactions to dynamic management strategies informed by key company benchmarks.
The value-based decision-making of employees shapes the behaviour of others through their perception of social norms from which a corporate culture emerges.
These elements induce intertwined feedback mechanisms which lead to unforeseen profitability outcomes.
The simulations reveal that variants of extreme adaptation of management style yield higher profitability in the long run than the more moderate alternatives.
Furthermore, we observe convergence towards a dominant management strategy with low intensity in monitoring efforts as well as high monetary incentivisation of cooperative behaviour.
The results suggest that measures increasing the connectedness of the workforce across all four value groups might be advisable to escape potential lock-in situations and thus raise profitability.
A further positive impact on profitability can be achieved through knowledge about the distribution of personal values among a firm's employees.
Choosing appropriate and enabling management strategies, and sticking to them in the long run, can support the realisation of the inherent self-organisational capacities of the workforce, ultimately leading to higher profitability through cultural stability.
\end{abstract}
\textbf{Keywords:} Complex Adaptive Systems, Self-Organization, Corporate Culture, Social Networks, Agent-Based Modeling\\
\textbf{JEL Classification:} C63, D21, L25, M14, Z13\\
\section{Introduction}
\justifying \sloppy
Firms operating within the same industry have long experienced persistent profit margin differentials \citep[see][among others]{Mueller.1986, Geroski.1988, Leiponen.2000}.
Intra-industry profitability differentials sparked a long-standing debate about which factors drive firms' performance.
The dichotomy between industry- and firm-based factors has led to two competing theories \citep{Bowman.1990}.
The industrial organisation (IO) literature attributes profit differentials to structural characteristics of the firms' industry.
Assuming homogeneous firms \citep{Mauri.1998}, IO-based empirical research has tied higher profits to (i) market power and concentration ratios \citep{Bain.1954}, (ii) entry barriers \citep{Mann.1966}, (iii) mobility barriers \citep{Semmler.1984}\footnote{
Mobility barriers comprise factors hindering a firm's entry and exit from an industry.
}, and (iv) market shares \citep{Ravenscraft.1983}.
The resource-based view (RBV) of the firm challenges the implicit assumption of firms' homogeneity behind the industry-driven empirical analyses.
The RBV emphasises firm-specific factors as the drivers of performance variance between firms within the same industry \citep{Hansen.1989, Cefis.2005, Blease.2010}.
In this vein, each organisation has a \textit{unique} set of resources and capabilities, which include management skills \citep{Bowman.2001}, routines \citep{Barney.2001} and corporate culture \citep{Barney.1986}.
Existing literature has provided stronger empirical support for firm effects as drivers of performance and profit differentials \citep[see][among others]{Hawawini.2004, Arend.2009, Bamiatzi.2009, Hambrick.2014, Bamiatzi.2016}.
Firms' idiosyncratic resources thus have greater explanatory power for competitive advantage than industry-specific effects \citep{Galbreath.2008}.
Moreover, the prevalence of industry effects on firm factors \citep{Schmalensee.1985} has been observed less frequently and is valid for medium-size firms only \citep{Fernandez.2019}.
The economic literature often treats firms as a "black box". As such, organisations are conceived as indivisible units with homogeneous resources and capabilities where firm-specific factors have no relevance.
In order to better understand the sources of competitive advantage and profit differentials, opening up the black box is necessary to avoid neglecting the complexity of firms' internal structure \citep{Foss.1994}.
Moreover, existing literature has mainly focused on the response of firms' resources to \enquote{shifts in the business environment} \citep[][page 515]{Teece.1997}.
However, it is still unclear how company-specific factors, i.e. corporate culture and management strategies, co-evolve and impact corporate performance when firms' inner conditions change.
This paper contributes to the understanding of the inner workings of a firm (i) by combining concepts and methods from economics, (social) psychology, and complexity science, and (ii) by adopting an agent-based bottom-up approach.
We regard formal and informal institutions within a firm as key factors shaping its independent and heterogeneous human resources, which are embedded in a social context, interact actively and reactively with other employees, and respond to changes in corporate strategy.
To capture these mechanisms, we see a company as an example of a complex adaptive system (CAS) \citep{Fuller.2001}, inherently difficult to control and manage by nature \citep{Holland.1992, Fredin.2020}.
This paper extends an earlier agent-based model \citep{Roos.2022}.
It builds on top of its framework within which agents have heterogeneous value hierarchies \citep{Schwartz.2012b}, shaping corporate culture – captured via descriptive social norms \citep{Deutsch.1955}\footnote{
Descriptive social norms refer to what is \textit{seen as normal} within an organisation, formalised in \cite{Roos.2022} as the average behaviour of all employees.
}
– and mediating the impact of management instruments on corporate performance.\footnote{
\cite{Foster.2000} analyses the relation between the transaction-cost framework and CAS-based views of business organisations.
Specifically, the usefulness of the Coasian tradition for complex adaptive approaches is seen through the lens of the \textit{psychological complexity} embedded in the formalisation of the opportunism-oriented "contractual man", which goes beyond the usual rational homo oeconomicus on which the neoclassical tradition is grounded.
While we deem this interpretation worth mentioning, we still feel the urge to complement a CAS-firm approach with multiple and heterogeneous employees' motivational dispositions – i.e. only accounting for opportunism is not sufficient – by modelling individual decision-making based on social norms and personal values in a \textit{self-contained} corporate environment.
This CAS-based extension thus does not aim at a theoretical reconciliation of previous traditions with complexity science.
On the contrary, we want this paper to provide a new, modular, and extensible framework to serve as an openly accessible foundation for future research on modelling firms as complex adaptive systems.
}
As a first step towards modelling an organisation as a CAS, we propose three main extensions.
First, agents take part in an endogenous and dynamic social network, whose peering mechanism determines how social norms and corporate culture spread within the organisation.
Second, employees heterogeneously adapt their behaviour to corporate strategies, depending on how management instruments – i.e., monitoring and monetary incentives – affect their value-based satisfaction.
Third, the management can more or less frequently update the degree of its monitoring activities and the implementation of pay-for-performance (PFP) schemes, which are influenced by, and feed back into, the development of corporate culture.
By combining a dynamic corporate culture, employees' adaptive behaviour, and endogenous management strategies, we aim to answer the following research questions:
What are potential effects of corporate culture on the profit differentials of otherwise similar firms?
In which ways are corporate outcome and profitability affected by the frequency of changes in management strategy?
Under which conditions – if any – will the management's attempts to steer the organisation boost profitability?
Integration and management of firms' resources are essential for productivity \citep{Russo.1997}.
Organisations depend on the management's efforts to reach – potentially incompatible – corporate goals like maximising profits, output, employee cooperation and satisfaction. In such a context, the instruments available to organisational management can yield unintended or unexpected dynamics that are not easily foreseeable and might derive from
(i) the interaction between different management instruments and
(ii) agents' heterogeneous response to the implemented corporate strategies, leading to adaptive behavioural patterns.
Firms are also organisations that form a "minisociety" in themselves \citep{Macneil.1977}, whose members struggle for an outcome that considers all the values, attitudes, ideas, history, resources, and skills of relevant individuals, in-groups and out-groups, and society as a whole.\footnote{
Business scholars have long identified this phenomenon under the concept of \textit{stakeholder management}, which has led to a growing interest in stakeholder theory as a rival paradigm to the contractual perspective \citep{Key.1999}.
The management literature has been involved in a long-lasting debate about similarities and divergences between the competence- and stakeholder-based theories of the firm \citep[see][among others]{Hodgson.1998, Freeman.2021}.
Since this controversy lies beyond the scope of this paper, we conceive the two approaches as non-rivals in light of (i) their common origination in the strategic management field, (ii) their mutual influence, and (iii) their rejection by the dominant paradigm.
}
Neglecting the intertwined nature of managerial boundaries and the action space of the affected agents might lead to ineffective management and, thus, to undesirable outcomes for the entire system.
Theories of the firm that ignore the social environment of organisations might also encourage a \enquote{practically unsustainable neglect of society by actual firms} \citep[page 1062]{Thompson.2017} and might thus produce bad public policies \citep{Teece.2017}.
This paper fills a gap in the economic literature by adopting a multidisciplinary approach to the theory of the firm, combined with an unambiguous departure from purely maximising modes of agents' behaviour.
In an attempt to ”’open-up’ the black box” \citep{Casson.2005}, contract-based theories of the firm in the Coase tradition focused on employers’ and employees’ decision-making as driven by internal and external transaction costs.
However, notions of profit maximisation and market equilibrium – at the core of this transaction-cost framework – make behavioural changes merely dependent on extrinsic causes.
By identifying agents’ competencies as the cornerstone of the theory of the firm, the evolutionary (competence) approach emphasises the crucial role of the social component within organisations \citep{Foss.1993}.
While acting as a foundation for the monolith of literature in economics and business that deals with firms as such ultra-optimised institutions, the underlying assumptions of the competence-based approach – highly rational and fully informed agents who maximise their expected utility – still advance an inadequate representation of firms’ behaviour as a “series of rational and dispassionate activities” \citep[][page 1501]{Hodgkinson.2011}.
As a consequence, standard microfoundations – also in the strategic management tradition – appear considerably incompatible with the findings of experimental economists, psychologists, sociologists, neuroscientists, and others (e.g. \citealp{Kahnemann.1979, Shafir.2002, Sarnecki.2007}).
Moreover, the two predominant theories of the firm – contract-based and competence-based – neglect social behaviour.
Contrary to empirical findings, they thus overlook (i) the \textit{glue role} of corporate culture \citep{Freiling.2010, Heine.2013} in binding tangible, intangible, and personnel-based resources – proper of the RBV \citep{Grant.1991} – and (ii) the impact of firm-specific factors on profitability.
Even though building models of complex systems is necessarily a reductionist approach towards real-world practices and processes, it allows us to study firms' dynamics within predefined limits - inherent to the CAS approach - and to assume more flexible behavioural rules suitable for several modelling scenarios.
Moreover, conceptualising a firm as a CAS within confined boundaries and spheres of influence facilitates the study of its internal workings and allows us to use it as a laboratory for systems with higher complexity \citep{Guiso.2015}.
There is a relatively young tradition of scholars working on firms as CAS.
The focuses of these previous studies have been widespread and range from innovation \citep{Chiva.2004, Akgun.2014, Inigo.2016}, to entrepreneurial ecosystems \citep{Roundy.2018, Fredin.2020}, knowledge diffusion \citep{Magnanini.2021}, and learning \citep{Marsick.2017, Lizier.2022}.
This paper contributes to this line of thought both in terms of object of study – i.e. business organisations – and employed method – CAS-based approach – but focuses, distinct from already existing literature, on corporate culture and its influence on firm performance and profitability.
The paper is structured as follows. Section \ref{sec: model} describes the three extensions of the model, i.e. network formation and the emergence of corporate culture, employees' adaptive behavioural rules, and endogenous management strategies. Section \ref{sec: results} explains the simulations and the main results of the model, and section \ref{sec: discussion}
discusses the relevance of our findings. The last section concludes.
\section{Model}\label{sec: model}
There are $n$ employees in the company.
Every employee $i \in N$, where $N = \{1,\dots,n\}$, has the same daily time budget $\tau$, which has to be allocated among three activities: cooperation ($c_{i,t}$), shirking ($s_{i,t}$) and, residually, individual tasks ($p_{i,t}$).
Employees' behaviour depends on personal values \citep{Schwartz.2012b}.
Each agent belongs to one of the four higher-order value types:
Self-transcendent (ST-type) employees are motivated by benevolence and universalism, and self-enhancing (SE-type) agents by power and achievement.
Conservative (C-type) employees value security and conformity above all else, whereas open-to-change individuals (O-type) especially value self-direction and stimulation. Agents' decisions depend on social norms, from which they can deviate positively or negatively.
Time allocations among the three activities are assumed to be triangularly distributed and are modelled in terms of stochastic deviations from the cooperative ($c^{*}_{i,t}$) and shirking ($s^{*}_{i,t}$) norms, defined in section \ref{subsec: network}.
The main behavioural equations which constitute the backbone of this model follow \cite{Roos.2022}.
For the sake of clarity, Table \ref{tab: list_eq} in Appendix \ref{sec: app-1} provides a brief comparison between the original equations and the changes the three additional extensions presented in this paper entail.
The management can implement monitoring strategies and/or financial rewards.
The adoption of these instruments can lead to a certain degree of trusting (or controlling) management style and/or to a competitive or cooperative rewards setting (PFP schemes).\footnote{
A full list of starting values for the model parameters can be found in Table \ref{tab: parameters} of Appendix \ref{sec: app-1}.
}
In the following subsections, we explain in detail (i) the network formation and its influence on social norms and corporate culture (\ref{subsec: network}), (ii) employees' adaptive behaviour based on job satisfaction concerns (\ref{subsec: adaptive-behaviour}), and (iii) how management strategies endogenously react to key company benchmarks (\ref{subsec: end_management}).
For the sake of clarity, Figure \ref{fig: model-overview} presents the model overview, highlights the main feedback mechanisms our CAS-based firm is based on (bold lines), and indicates for each extension its corresponding section.
\begin{figure}
\caption{Model overview. Source: Authors' own illustration.}
\label{fig: model-overview}
\end{figure}
\subsection{Spread of norms in a social network}\label{subsec: network}
In \citet{Roos.2022}, employees' perceived social norms for each task are assumed to be equal to the actual descriptive norm inside the firm, modelled as the mean behaviour among all agents.
In this paper, we relax this artificial assumption of static environments by modelling the spread of information about social norms via the construction of an endogenous network where informal connections in the firm (e.g. based on value homophily or cooperation and shirking intensity) continuously evolve over time.
These connections form a personal network that captures an employee's relevant peers and, thus, also their perceived social norms.
Since all the intricate details of personal interactions within a firm – no matter if directly related to work or non-work processes – are intractable, we need to provide a formalised simplification able to capture the inherent dynamics of the endogenous formation and evolution of such a social network. For this purpose, we exploit the tenets of the Social Referent Literature (SRL).
The SRL distinguishes two types of social referents at the workplace \citep{Brass.1984}. On the one hand, \textit{cohesive} referents are the ones with whom employees engage more frequently, potentially allowing for the formation of close interpersonal ties and friendships \citep{Galaskiewicz.1989}. On the other hand, \textit{structurally equivalent} relationships are formed among employees who perform the same role, occupy the same position in a network, or share a similar pattern of relationships \citep{Burt.1987}.
There is a long-standing debate about what kind of information the two kinds of actors (referents) are likely to share with co-workers which has revealed interesting findings.
Cohesive referents share more general organisational information related to employees' social integration within the firm's corporate culture, while employees turn to structural referents for information strictly related to their work – such as tasks, roles and responsibilities – for performance improvements \citep{Shah.2000}.
While not neglecting the influence of structural relationships on employees' social networks, we currently leave aside considerations about formal connections.
Indeed, in the network we aim to construct, co-workers' interactions inform employees about the prevailing social norm within the firm, information less likely shared by structural referents \citep{Shah.1998}.
Therefore, behavioural norms – i.e. normative information\footnote{
\cite{Shah.1998} classifies normative information under the category of "general organizational information", relevant for social adaptation to a company's culture and social integration within a group.
While the author specifically refers to "norms of expected behaviour" – i.e. injunctive norms, which are outside the scope of this paper – we deem descriptive norms as also falling in the above-mentioned category as these might encourage behavioural conformity \citep{Cialdini.1998} for the sake of belonging to a social group and being integrated within an organisation's social system, even independently of self-identification concerns \citep{Pryor.2019}.
} – are best acquired via cohesive social relationships, whose formation depends on the \textit{frequency} and the \textit{intensity} of interactions within organisations.
By exploiting the concept of cohesive relationships as dependent on the frequency of contacts, we propose a \textit{similarity-based} approach that relies on the agents' decisions regarding their time allocation during each workday.
In other words, employees' chances of connection are determined by how similarly they spend their time on the three activities: cooperation ($c_{i,t}$), shirking ($s_{i,t}$) and individual tasks ($p_{i,t}$).
The rationale behind this is that employees have a higher chance of meeting others who spend their time in similar ways, resulting in a higher probability of connecting with one another while performing these activities:
The higher the activities similarity, the greater the probability of contact between two agents and the higher the amount of transmitted cues about socially normative information.
Assume that the initial connection strength (i.e. edge weight) – how well agents know each other – between two employees $\{i,j\} \in N$ is zero, such that $e_{i,j,t-1} = e_{j,i,t-1} = 0$.
This means that the simulation starts with a "blank slate" social network without any connections and fully isolated agents, i.e. with $n$ vertices and $0$ edges.
The set of an agent's peers $P_{i,t} = \{ j \; | \; e_{i,j,t-1} \neq 0 \}$ is therefore empty.
As a consequence, there is no influence of individual behaviour on other agents' perceived social norms during the first time period.
During each subsequent workday, agents make their regular decisions on how to spend their time.
Based on employees' decisions, we calculate the activity differences ($AD_{i,j,t}$) between each pair of agents regarding the time they spent on the three activity types $c_{i,t}$, $s_{i,t}$ and $p_{i,t}$.
This calculation is performed from the perspective of each agent $i$ regarding all other agents and vice versa.
To model this, we exploit and adapt the weighted absolute-deviation index (WADI) proposed by \cite{Stewart.2006}, which facilitates interpretation and guarantees a higher degree of robustness compared to other dissimilarity indices.
Translated into a case-independent equation that is able to deal with any number $A$ of generic activities $a$, this leads us to formalise $AD_{i,j,t}$ as in Equation \ref{eq: WAD-index}.\footnote{
In this model, performing individual activities ($p_{i,t}$) also enters the calculation of the activity similarity measure.
Doing so allows modelling of the fact that employees may gain behavioural information from co-workers sharing the same office or working space, even while devoting time to individual tasks.
}
\begin{equation}\label{eq: WAD-index}
AD_{i,j,t} = \frac{\sum_{k=1}^{A} \lvert a_{k,i,t} - a_{k,j,t} \rvert}{\tau}
\end{equation}
Equation \ref{eq: WAD-index} describes the weighted absolute difference in activities between two agents with equal daily working time $\tau_{i,t} = \tau_{j,t} = \tau$.
The weights are thus equal to the fraction of the total time spent in each activity\footnote{
The weights in Equation \ref{eq: WAD-index} are implicit as they are equal to $\frac{a_{k,i,t} + a_{k,j,t}}{\sum_{l=1}^{A}(a_{l,i,t} + a_{l,j,t})}$ and therefore sum to one, i.e. $\sum_{k=1}^{A} \frac{a_{k,i,t} + a_{k,j,t}}{\sum_{l=1}^{A}(a_{l,i,t} + a_{l,j,t})} = 1$, if employees $i$ and $j$ are endowed with the same amount of available maximum working time.
}
and it follows that the activity differences between two employees would be symmetrical, that is $AD_{i,j,t} = AD_{j,i,t}$.\footnote{
However, interacting pairs of employees may also experience heterogeneous degrees of relational \textit{intensity}.
When agents have different time budgets ($\tau_{i,t} \; \neq \; \tau_{j,t}$), the spread of social norms might be asymmetrical – i.e. $e_{i,j,t} \; \neq \; e_{j,i,t}$ – reflecting the potential asymmetrical reciprocity of relational ties which exerts a great impact on firms' social dynamics and corporate performance \citep{Lopez.2018}.
Therefore, each employee might value the three activities differently. In the context of this model, the importance each agent assigns to an activity can be deduced by the relative time spent on that task with respect to its individual time budget.
To account for differences in \textit{relative importance} of a given activity for any agent, we could extend the calculation of the WADI by introducing a simple weight $w_{i,t} = \frac{\tau_{i,t}}{\tau_{j,t}}$. However, we leave heterogeneous time budgets to future works dealing with flexible working time arrangements.
}
To calculate employees' activity similarity $AS_{i,j,t}$, we subtract the previously computed activity difference from 1, to reflect the positive impact of perceived activity similarity on agents' connection strength.
The higher the activity similarity, the stronger the employees' connection during this working day.
\begin{equation}
AS_{i,j,t} = 1 - AD_{i,j,t}
\end{equation}
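As a concrete illustration of the two quantities just defined, the following Python snippet (a minimal sketch with hypothetical variable names; it is not the authors' Julia implementation) computes the activity difference and the resulting similarity between two employees from their daily time allocations.
\begin{verbatim}
def activity_difference(alloc_i, alloc_j, tau):
    """Weighted absolute-deviation index: sum of absolute differences in
    time spent per activity, scaled by the daily time budget tau."""
    return sum(abs(a_i - a_j) for a_i, a_j in zip(alloc_i, alloc_j)) / tau

def activity_similarity(alloc_i, alloc_j, tau):
    """AS = 1 - AD; higher similarity -> higher chance of interacting."""
    return 1.0 - activity_difference(alloc_i, alloc_j, tau)

# daily allocations (cooperation, shirking, individual tasks), tau = 8 hours
alice = (2.0, 1.0, 5.0)
bob = (1.0, 0.5, 6.5)
print(activity_similarity(alice, bob, tau=8.0))  # 0.625
\end{verbatim}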
This similarity measure $AS_{i,j,t}$ is used to represent (i) the chance that the two agents have met during a workday $t$, and (ii) agent $i$'s assigned importance of occurred interactions with employee $j$.
To model the chance of agents' interactions, we make a random draw $d_{i,j,t}$ from a uniform distribution between $0$ and $1$ for each agent pairing.
Let $d_{t}$ denote the set of these draws.\footnote{
Note that the fixed set $d_{t}$ necessarily means that interactions are always symmetrical between two agents, implicitly assuming that $i$ cannot interact with $j$ without $j$ also interacting with $i$.
}
\begin{equation}\tag{b}
d_{t} = \{ d_{i,j,t} \sim U(0, 1), \; \forall \; (i,j) \in N \}
\end{equation}
If the value of $d_{i,j,t}$ is less than their activity similarity $AS_{i,j,t}$ ($= AS_{j,i,t}$), they interact during the current workday.
If and how well employees know each other ($e_{i,j,t}$) determines the order in which agent $i$ checks for potential interactions ($I_{i,t}^{pot}$).
Agents will always first check for potential interactions with their existing peers $P_{i,t-1}$, starting from those with whom they have the strongest connection (i.e. $\max (e_{i,j,t-1}), \; \forall \; j \in P_{i,t-1}$) and going through this sequence in descending order.
Only after that do agents also check for potential interactions with other randomly chosen employees who are yet unknown to them.
\begin{equation}\label{eq: set_potential-interactions}
I_{i,t}^{pot} =
\{j \;|\; e_{i,j,t-1} > e_{i,k,t-1}, \forall \; j,k \in P_{i,t-1},\; j\neq k \} \cup
\{j \; | \; j \in_{R} N \setminus P_{i,t-1} \}
\end{equation}
Therefore, the set of agents with whom employee $i$ interacts ($I_{i,t}$) can be defined following Equation \ref{eq: set-agents-interactions}.
\begin{equation}\label{eq: set-agents-interactions}
I_{i,t} = \{ j \; | \; d_{i,j,t} \; < \; AS_{i,j,t}, \; \forall j \in I_{i,t}^{pot} \}
\end{equation}
Equation \ref{eq: set-agents-interactions} stochastically determines whether or not two agents interact, and Equation \ref{eq: set_potential-interactions} captures the order in which potential interactions are checked.
Naturally, this leads to relatively dense networks over time, which is especially evident in the long run if agents' behaviours converge.
Such high amounts of daily interactions stand in stark contrast to empirical findings from epidemiology, which show that on an average daily basis, people have 8 \citep{Leung.2017} or 13.4 contacts \citep{Mossong.2008}.
To avoid the peculiarity of extremely high interactivity in our theoretical model of a firm, a new agent variable is added, which limits the amount of interactions agents can have over the course of one day.
At each step, agents pick their maximum amount of interactions ($\iota_{i,t}$) from a theoretical distribution ($ID$) such that $\iota_{i,t} \sim ID$.\footnote{
To account for the fact that there are no fractional interactions in our model, $\iota_{i,t}$ is rounded to its nearest integer value.
}
This distribution can either be informed by empirical literature or created freely to explore its effects on the modelling results.
For the analysis conducted throughout this paper, we have chosen a uniform distribution $ID = U(0, 7.14)$ loosely based on the contact numbers of the above-mentioned studies.\footnote{
Attributing the same weight to their results, we assumed that people have $(8 + 13.4) / 2 = 10.7$ contacts per day on average.
Further assuming an equal distribution of contacts across the day, we estimate the average amount of work contacts on a normal work day with $\tau = 8$ to be $\frac{8}{24} * 10.7 \approx 3.57$.
}
Therefore, $I_{i,t}$ can never contain more than $\iota_{i,t}$ elements, and, as such, the set is truncated after $\iota_{i,t}$ elements.
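A compact sketch of this interaction-selection step could look as follows (Python, with our own helper names; for brevity the draw $d_{i,j,t}$ is made on the fly rather than taken from the shared symmetric set $d_t$): candidates are ordered by existing connection strength, unknown colleagues are appended in random order, and an interaction occurs whenever the draw falls below the pairwise activity similarity, up to the daily cap $\iota_{i,t}$.
\begin{verbatim}
import random

def daily_interactions(i, agents, edge_weight, similarity, max_interactions):
    """Return I_{i,t}: the partners agent i interacts with today.
    edge_weight[(i, j)] plays the role of e_{i,j,t-1} and
    similarity[(i, j)] the role of AS_{i,j,t}."""
    peers = [j for j in agents if j != i and edge_weight.get((i, j), 0.0) > 0.0]
    peers.sort(key=lambda j: edge_weight[(i, j)], reverse=True)  # strongest ties first
    unknown = [j for j in agents if j != i and j not in peers]
    random.shuffle(unknown)                                      # strangers in random order
    partners = []
    for j in peers + unknown:
        if len(partners) >= max_interactions:                    # daily cap iota_{i,t}
            break
        if random.random() < similarity[(i, j)]:                 # d_{i,j,t} < AS_{i,j,t}
            partners.append(j)
    return partners
\end{verbatim}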
Whenever agents $i$ and $j$ interact, their connection intensifies by $AS_{i,j,t}$.
Otherwise, their connection strength does not change during this workday.
We introduce $\Delta e_{i,j,t}$ to reflect the weight change for each agent $i$'s directed edge toward agent $j$.
\begin{equation}
\Delta e_{i,j,t} =
\left\{ \begin{array}{ll}
AS_{i,j,t} & \text{if} \; j \in I_{i,t}\\
0 & \text{otherwise}
\end{array} \right. \\
\end{equation}
It describes how strong the interaction between agents $i$ and $j$ is during that day, and by that also how important agent $j$'s behaviour is for agent $i$'s updating of descriptive social norms.
At the end of the workday $t$, the new connection between agents $i$ and $j$ can be formulated as in Equation \ref{eq: edge_weights}.
\begin{equation}\label{eq: edge_weights}
e_{i,j,t} =
\left\{ \begin{array}{ll}
\frac{(t-1) \cdot e_{i,j,t-1} + \Delta e_{i,j,t}}{t} & \text{if}\; t \geq 1 \\
0 & \text{if}\; t = 0
\end{array} \right. \\
\end{equation}
The edge weights $e_{i,j,t}$ reflect the long-term interaction history between two agents while also accounting for the fact that the connection between them deteriorates over time if no interaction takes place.
They are then used to update the \textit{descriptive} social norms perceived by agent $i$, describing the relative influence of peers with whom $i$ has interacted during this workday.
This leads to the following adaptations to equations (3) and (4) from \cite{Roos.2022}:
\begin{align}
\label{eq: shirking-norm}
s_{i,t}^{*} & =
\left\{ \begin{array}{ll}
(1 - h) \; s_{i,t-1}^{*} + h \; \frac{\sum_{j \in I_{i,t}} \Delta e_{i,j,t-1} \; s_{j,t-1}}{\sum_{j \in I_{i,t}} \Delta e_{i,j,t-1}} & \text{if} \; I_{i,t} \neq \emptyset\\
s_{i,t-1}^{*} & \text{otherwise}
\end{array} \right. \\
\label{eq: cooperation-norm}
c_{i,t}^{*} & =
\left\{ \begin{array}{ll}
(1 - h) \; c_{i,t-1}^{*} + h \; \frac{\sum_{j \in I_{i,t}} \Delta e_{i,j,t-1} \; c_{j,t-1}}{\sum_{j \in I_{i,t}} \Delta e_{i,j,t-1}} & \text{if} \; I_{i,t} \neq \emptyset\\
c_{i,t-1}^{*} & \text{otherwise}
\end{array} \right.
\end{align}
Rather than modelling employees' motivation to maintain specific ties at the workplace \citep[see e.g.][]{Randel.2007}, we assume that agents remember all past interactions with others, no matter how weak the ties between them.
Because the strength of each connection can only grow by a value between $[0,1]$ per simulation step, we can observe the \textit{relative} strength (weakness) of emergent connections between agents who have historically interacted more (less) frequently.
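Putting Equations \ref{eq: edge_weights}, \ref{eq: shirking-norm} and \ref{eq: cooperation-norm} together, a stylised Python sketch of the end-of-day network and norm update reads as follows (illustrative only; the function names are ours and the generic interaction weights stand in for the $\Delta e$ terms).
\begin{verbatim}
def update_edge(e_prev, delta_e, t):
    """Eq. (edge weights): running average of interaction strengths over t days."""
    return 0.0 if t == 0 else ((t - 1) * e_prev + delta_e) / t

def update_norm(norm_prev, behaviours, weights, h):
    """Shirking/cooperation norm: blend the old norm with the weighted mean
    behaviour of today's interaction partners (weights = interaction strengths)."""
    if not behaviours:              # no interactions today: norm is unchanged
        return norm_prev
    weighted_mean = sum(w * b for w, b in zip(weights, behaviours)) / sum(weights)
    return (1 - h) * norm_prev + h * weighted_mean

# example: agent interacted with two peers on day t = 10, with h = 0.5
print(update_edge(e_prev=0.4, delta_e=0.8, t=10))                # 0.44
print(update_norm(norm_prev=1.0, behaviours=[0.5, 1.5],
                  weights=[0.8, 0.4], h=0.5))                    # ~0.917
\end{verbatim}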
\subsection{Adaptive behaviour and job satisfaction}\label{subsec: adaptive-behaviour}
A moderate but robust correlation regarding the effects of job satisfaction on job performance has been found by meta-studies \citep{Judge.2001,Fisher.2003}.\footnote{
It is noteworthy that while we modelled a direct connection between productivity effects and job satisfaction, there are other plausible relationships dealing with the broad spectrum of employee happiness \citep{Thompson.2021}, organisational citizenship behaviour \citep{Spector.2022}, and counterproductive work behaviour \citep{Nemteanu.2021}.
}
To incorporate this in the model, each employee has a level of job satisfaction $S_{i,t} \in [0,1]$ that directly influences job performance through short-run productivity effects $\pi_{i,t}$.
We assume that $\pi_{i,t} = (1 - S^{eff}) + 2 \cdot S^{eff} \cdot S_{i,t}$ where $S^{eff} \in [0,1]$ is an exogenous model parameter mediating the effect of satisfaction on productivity.\footnote{
This simplified approach is chosen on a forward-looking basis to facilitate model calibration and later integration of other productivity factors.
}
For the simulations conducted in this paper, we have chosen $S^{eff} = 0.5$, which results in $\pi_{i,t} \in [0.5,1.5]$.
Under these conditions, dissatisfaction (low $S_{i,t}$) directly leads to a reduction in productivity by impacting the intensity with which working time is used and thus individual output ($O_{i,t}$).
\begin{equation}\label{eq: output_eq}
O_{i,t} = \pi_{i,t} (p_{i,t}^{\phantom{i}(1-\kappa)}\cdot \bar{c}_{i,t}^{\phantom{i}\kappa})
\end{equation}
\noindent Individual output thus depends on (i) the time devoted to individual tasks ($p_{i,t}$), (ii) the average cooperative time ($\bar{c}_{i,t} = \sfrac{1}{(n-1)} \sum_{j \neq i} c_{j,t}$), and (iii) the extent to which employee $i$'s performance depends on the support of co-workers ($\kappa$), as well as on the short-run productivity factor $\pi_{i,t}$.
The firm's management employs a certain degree of monitoring $\Sigma$ which can range between a fully trusting ($\Sigma = 0$) and a fully controlling ($\Sigma = 1$) management style.\footnote{
In a controlling environment, C-type employees are happiest and shirk much less than the social norm, and the opposite occurs with O-type employees.
Vice versa under a trusting management attitude.
SE and ST employees are assumed to be indifferent to monitoring but responsive to financial rewards.
}
The bonus each employee $i$ receives is defined in Equation \ref{bonus}
and depends on the type of PFP scheme ($\lambda$) implemented and individual output $O_{i,t}$.
Pure bonus systems can incentivise only one type of behaviour, i.e. by linking bonus payments to individual ($\lambda = 0$) or joint ($\lambda = 1$) output.
Mixed PFP schemes ($\lambda \in (0,1)$) cover the intermediate cases where a proportional combination of individual and joint output assessment is used.
\begin{equation}\label{bonus}
B_{i,t} = (1-\lambda)O_{i,t} + \lambda (\frac{1}{n}) \sum_{j=1}^{n} O_{j,t}
\end{equation}
The firm pays all employees a homogeneous base wage $\omega_{b}$, plus individual bonuses ($B_{i,t}$), which are weighted by a parameter $\mu \in \{0,1\}$ that reflects the intensity of rewards the management is willing to offer.\footnote{
The intensity of PFP schemes $\mu$ is such that $\mu = 0$ if no rewards are implemented, and $\mu = 1$ if bonuses are granted, whatever the type.
}
\begin{equation}\label{rewards}
R_{i,t} = \omega_{b} + \mu B_{i,t}
\end{equation}
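To illustrate Equations \ref{eq: output_eq}, \ref{bonus} and \ref{rewards}, the following sketch (Python; illustrative only, the model itself is implemented in Julia) computes individual output, bonuses and rewards for a small workforce.
\begin{verbatim}
def output(pi, p, c_bar, kappa):
    """Eq. (output): O = pi * p^(1-kappa) * c_bar^kappa."""
    return pi * (p ** (1 - kappa)) * (c_bar ** kappa)

def bonus(outputs, i, lam):
    """Eq. (bonus): mix of individual and average (joint) output."""
    return (1 - lam) * outputs[i] + lam * sum(outputs) / len(outputs)

def reward(base_wage, mu, bonus_i):
    """Eq. (rewards): base wage plus (possibly switched-off) bonuses."""
    return base_wage + mu * bonus_i

# three employees, kappa = 0.3
coop = [1.0, 2.0, 1.5]      # cooperative time
indiv = [5.0, 4.0, 4.5]     # time on individual tasks
pis = [1.0, 1.2, 0.8]       # short-run productivity factors
outs = []
for i in range(3):
    c_bar = sum(c for j, c in enumerate(coop) if j != i) / 2  # others' mean cooperation
    outs.append(output(pis[i], indiv[i], c_bar, kappa=0.3))
rewards = [reward(base_wage=10.0, mu=1.0, bonus_i=bonus(outs, i, lam=0.5))
           for i in range(3)]
print([round(o, 2) for o in outs], [round(r, 2) for r in rewards])
\end{verbatim}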
An employee's base satisfaction level ($S_{i}^{0}$) can take a value between $[0,1]$ where $0$ is completely dissatisfied and $1$ means completely satisfied, therefore defining a neutral level of job satisfaction to be at $0.5$.
Equation \ref{eq: base-satisfaction} shows that it is dependent on the employees' value types, management's monitoring efforts ($\Sigma$), the type of implemented PFP schemes ($\lambda$), and their intensity ($\mu$).
The initial satisfaction of each agent at the beginning of the simulation is equal to $S_{i,t=0} = S_{i}^{0}$.
\begin{equation}\label{eq: base-satisfaction}
S_{i}^{0} =
\left\{ \begin{array}{ll}
\Sigma & \text{if}\; i\;\in\; \text{C-type} \\
1 - \Sigma & \text{if}\; i\;\in\; \text{O-type} \\
0.5 + \mu (0.5 - \lambda) & \text{if}\; i\;\in\; \text{SE-type} \\
0.5 + \mu (\lambda - 0.5) & \text{if}\; i\;\in\; \text{ST-type}
\end{array} \right.
\end{equation}
Since satisfaction carries over from one day to another, we can state that $S_{i,t} = S_{i,t-1}$ at the beginning of a new time step in the simulation.
Should $S_{i,t-1}$ deviate positively (negatively) from $S_{i}^{0}$, it is reduced (increased) by 1\% of its value, as formulated in Equation \ref{eq: satisfaction-offset}.\footnote{
It is also conceivable to model satisfaction recovery in a non-linear fashion such that only the \textit{offset} from base satisfaction is reduced by 1\%, i.e. $S_{i,t} = S_{i,t-1} - \frac{S_{i,t-1} - S_{i}^{0}}{100}$.
In this formalisation, greater deviations from base satisfaction are reduced faster in absolute terms and a total recovery back to $S_{i}^{0}$ is only approached asymptotically.
Regardless of the chosen implementation, the restriction of $S_{i,t} \in [0,1]$ shall always hold.
}
\begin{equation}\label{eq: satisfaction-offset}
S_{i,t} =
\left\{ \begin{array}{ll}
0.99 \; S_{i,t-1} & \text{if}\; S_{i,t-1} > S_{i}^{0} \\
1.01 \; S_{i,t-1} & \text{if}\; S_{i,t-1} < S_{i}^{0}
\end{array} \right.
\end{equation}
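The value-type-dependent base satisfaction (Equation \ref{eq: base-satisfaction}) and the daily reversion towards it (Equation \ref{eq: satisfaction-offset}) can be sketched as follows (Python; illustrative only, names are ours).
\begin{verbatim}
def base_satisfaction(value_type, sigma, mu, lam):
    """Eq. (base satisfaction): S_i^0 by higher-order value type."""
    if value_type == "C":
        return sigma
    if value_type == "O":
        return 1 - sigma
    if value_type == "SE":
        return 0.5 + mu * (0.5 - lam)
    if value_type == "ST":
        return 0.5 + mu * (lam - 0.5)
    raise ValueError("unknown value type")

def revert_satisfaction(s_prev, s_base):
    """Eq. (satisfaction offset): move 1% of the current level back towards
    S_i^0 each day, keeping satisfaction within [0, 1]."""
    if s_prev > s_base:
        s_new = 0.99 * s_prev
    elif s_prev < s_base:
        s_new = 1.01 * s_prev
    else:
        s_new = s_prev
    return min(1.0, max(0.0, s_new))

print(base_satisfaction("ST", sigma=0.2, mu=1.0, lam=0.8))  # 0.8
print(revert_satisfaction(0.9, 0.8))                        # 0.891
\end{verbatim}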
During each period, the management observes a random subset of workers and controls for excessive shirking levels.
The management checks a subset of randomly drawn employees $ETC \subset N$ with cardinality $|ETC| = \Sigma \cdot n$.
We assume that the management willingly accepts a certain amount of shirking activity ($s^{max}$) because it is inevitable to some degree and might even be beneficial \citep{Vermeulen.2000, Campbell.2019}.
This threshold might be subject to various considerations such as the firm's desired revenue or profit margin, management values, or just the observed behaviour of employees.
For the time being, the simulations conducted with this second model extension will use a constant value of one-tenth of the available working time of all agents, i.e. $s^{max} = \tau / 10$.
Thus, when checking on employees, management deems their shirking levels to be reasonable as long as $s_{i,t} \leq s^{max}$.
Since receiving a warning from superiors is generally a negative experience, employees become more dissatisfied after getting caught shirking too much, thus lowering their productivity.
The extent to which getting caught impacts an agent's degree of satisfaction $S_{i,t}$ might depend on the agent's value type \citep{Chatman.1991}; however, we model the impact on satisfaction after receiving a verbal warning in the same manner for all agent types.
Thus, a verbal warning will reduce employee satisfaction by an arbitrary \textit{shock of being caught} ($\eta \in [0,1]$) such that $S_{i,t} = S_{i,t}(1 - \eta)$.
The simulation results discussed in this paper have used a constant $\eta = 0.05$.
If workers get caught \textit{for the third time} shirking more than accepted, the management will issue a written warning ($ww_{i,t}$) signalling that repeating such behaviour might result in some form of punishment.\footnote{
That being said, there is no form of consequence or punishment implemented in the presented model.
Therefore, this provides an intriguing venue for future research, as for example in a model dealing with hiring-firing mechanisms and their impact on the labour market.
}
The warnings have two effects:
(i) the worker might shirk less in the future for fear of bad consequences; hence \textit{individual} deviations from the shirking norm are modelled along with the type-specific ones;
(ii) workers get \textit{dissatisfied} with their work.
This implies the existence of an optimum degree of monitoring for which the positive deviation from the shirking norm is minimised while keeping a high employee satisfaction (and thus productivity).
Letters of reprimand are written in reaction to management's actual observations of shirking behaviour, hence, receiving one reduces workers' future positive deviations from the shirking norm.
Formally we model this with an individual-specific scaling factor $\beta_{i,t}$, with $\beta_{i,t} = 1$ as its default state, which alters the upper bound of the triangular distribution used for individual decision making\footnote{
The original upper bound was $b_{i,t} = s_{i,t}^{*} (1 + \delta_{i})$, as can be seen on row number 10 of Table \ref{tab: list_eq} in Appendix \ref{sec: app-1}.
}, see Equation \ref{eq: shirking-upper-bound}.
Figure \ref{fig: beta} provides an example of what would happen for a $\beta_{i,t}$ of $\sfrac{2}{3}$ (red line) versus the baseline case (black line) assumed in \cite{Roos.2022}.
\begin{equation}\label{eq: shirking-upper-bound}
b_{i,t} = s^{*}_{i,t} (1 + \beta_{i,t\phantom{i}}\delta_{i})
\end{equation}
\begin{figure}
\caption{Density function of the triangular distribution for shirking behaviour with $\beta_{i,t} = \sfrac{2}{3}$ (red line) versus the baseline case with $\beta_{i,t} = 1$ (black line). Source: Authors' own illustration.}
\label{fig: beta}
\end{figure}
Changes to $\beta_{i,t}$ are assumed to have a persistent effect which gradually decreases over time.
Thus, the more time has passed since the last written warning was received, the less an employee's value-based behaviour is modified.
To capture this, written warnings are modelled as a finite set that takes record of the steps at which the agent has been caught shirking more than acceptable for the third time: $WW_{i,t} = \{ww_{1}, ww_{2}, \dots, ww_{n} \}$.
If the set is non-empty, employees will permanently alter their shirking behaviour according to \textit{how long ago} the last warning ($ww_{n} : ww_{n}\; \in\; WW_{i,t} \neq \emptyset$) was received and \textit{how many} warnings ($|WW_{i,t}|$) were recorded overall, see Equation \ref{eq: beta}.
\begin{equation}\label{eq: beta}
\beta_{i,t}=
\left\{ \begin{array}{lll}
1 - \frac{|WW_{i,t}|}{3} + \frac{|WW_{i,t}|}{3} \; \frac{t - ww_{n}}{t} & \text{if}\; 0 \leq |WW_{i,t}| < 3\\
0.0 + 1.0 \; \frac{t - ww_{n}}{t} & \text{otherwise}\\
\end{array} \right.
\end{equation}
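A small sketch of the written-warning mechanism (Equations \ref{eq: shirking-upper-bound} and \ref{eq: beta}; Python, with our own function names and example numbers) is given below.
\begin{verbatim}
def beta(ww_times, t):
    """Eq. (beta): scaling factor for the shirking upper bound, given the
    time steps ww_times at which written warnings were received."""
    if not ww_times:
        return 1.0                              # default state: no warnings
    n_ww, last = len(ww_times), ww_times[-1]
    decay = (t - last) / t                      # effect fades since the last warning
    if n_ww < 3:
        return 1 - n_ww / 3 + (n_ww / 3) * decay
    return decay                                # three or more warnings

def shirking_upper_bound(shirking_norm, delta_i, beta_it):
    """Eq. (shirking upper bound): b_{i,t} = s*_{i,t} * (1 + beta * delta_i)."""
    return shirking_norm * (1 + beta_it * delta_i)

b = beta(ww_times=[100], t=300)                 # one warning, received 200 steps ago
print(round(b, 3))                              # 0.889
print(round(shirking_upper_bound(0.8, delta_i=0.5, beta_it=b), 3))  # 1.156
\end{verbatim}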
Instead of just being verbally admonished to shirk less, receiving a written warning is an important formal signal of management control, which results in a bigger impact on employee satisfaction.
We chose a factor of three (relative to the verbal-warning shock $\eta$) for the simulations in the work at hand.
Hence, after receiving a written warning, the affected agent's satisfaction is reduced to a fraction of its former value, such that $S_{i,t} = S_{i,t} (1 - 3 \eta)$.
\subsection{Endogenous management strategies}\label{subsec: end_management}
Differently from the static and exogenous management assumptions in \cite{Roos.2022}, both monitoring and incentives are dynamic and endogenous here.
The management tracks key company benchmarks, namely average company output ($\bar O_{t}$) as well as the average shirking ($\bar s_{t}$) and cooperative times ($\bar c_{t}$) of observed workers over the past $x$ periods.
The average observed shirking and cooperative times are defined respectively as
\begin{equation}\label{eq: mean-obs-shirk}
\bar s_{t} = \frac{1}{|ETC_{t}|} \sum_{i \in ETC_{t}} s_{i,t}
\end{equation}
and
\begin{equation}\label{eq: mean-obs-coop}
\bar c_{t} = \frac{1}{|ETC_{t}|} \sum_{i \in ETC_{t}} c_{i,t}
\end{equation}
where $ETC_{t}$ is the now endogenous subset of observed workers with cardinality $|ETC_{t}| = \Sigma_{t-1} \cdot n$.\footnote{
Please note that in the previous extension (Section \ref{subsec: adaptive-behaviour}) the cardinality of the subset of observed workers was exogenous, as $\Sigma$ was not responding to any company benchmarks.
}
Management judges recent developments based on the preset goals of expected group output (Equation \ref{eq: EGO}), the exogenous degree of task interdependence $\kappa$ (Equation \ref{eq: output_eq}), and maximum acceptable shirking time (Equation \ref{eq: max-shirking}).
In contrast to the fixed value chosen in the previous extension, the maximum acceptable shirking time is now endogenised as $s_{t}^{max}$ which reflects the management's adaptive expectations regarding the usual and necessary work efforts of the firm employees.
Considering the deliberate absence of any external (e.g. market-related) factors in our model that might influence management behaviour, we propose that the maximum accepted shirking level is modelled in a similar fashion to how it has been done for other social norms.
Thus, $s_{t}^{max}$ can be understood as the shirking norm perceived by the management and depends on both its previous value ($s_{t-1}^{max}$) and the mean shirking behaviour of the observed agents on the previous day ($\bar s_{t-1}$).
How quickly the management adapts $s_{t}^{max}$ again depends on the exogenous parameter $h$ previously used in Equations \ref{eq: shirking-norm} and \ref{eq: cooperation-norm}.\footnote{
In a model including value-based management, these changes in maximum acceptable shirking time could be further endogenised with theory-driven behavioural rules.
}
Instead of peer influence as in the case of agents, changes to $s_{t}^{max}$ depend on those agents that the management has controlled on the previous day.
Note that the reference point is shifted from an individual to a top-down aggregate view (see the second column of Table \ref{tab: list_eq} in Appendix \ref{sec: app-1}).
\begin{equation}\label{eq: max-shirking}
s_{t}^{max} = (1 - h) \; s_{t-1}^{max} + h \; \bar s_{t-1}
\end{equation}
The management can now infer an expected group output $EGO_{t}$, which is the maximum of the Cobb-Douglas type production function under the constraint $s_{t} = s_{t}^{max}$.
Let $\alpha_{t} = \tau - s_{t}^{max}$ denote the working time that remains for employees after allowing for the maximum acceptable shirking threshold; $EGO_{t}$ is then defined as follows.
\begin{equation}\label{eq: EGO}
EGO_{t} = \left[\alpha_{t}(1-\kappa)\right]^{(1-\kappa)} \cdot (\alpha_{t}\kappa)^{\kappa}
\end{equation}
Monitoring and incentive strategies are updated in a pre-determined interval according to a \textit{strategy update frequency} parameter ($suf \in \mathbb{N}$).
The future degree of corporate monitoring ($\Sigma_{t} \in [0,1]$) is determined as in Equation \ref{eq: sigma_endog} where $sui \in \mathbb{R}^{+}$ is an exogenous \textit{strategy update intensity} parameter.
We have chosen $suf = \{1,30,180,365\}$, $sui = \{\sfrac{1}{600}, \sfrac{1}{20}, \sfrac{3}{10}, \sfrac{73}{120}\}$, and $x = suf$ for the results discussed in Section \ref{sec: discussion}.
\begin{equation}\label{eq: sigma_endog}
\Sigma_{t} =
\left\{ \begin{array}{ll}
(1 + sui) \Sigma_{t-1} & \text{if}\; \frac{1}{x}\sum\limits_{t-x}^{t-1} \bar s_{t} > s_{t}^{max}\\
(1 - sui) \Sigma_{t-1} & \text{if}\; \frac{1}{x}\sum\limits_{t-x}^{t-1} \bar s_{t} \leq s_{t}^{max}\\
(1 - \frac{\bar O_{t}}{EGO_{t}}) sui & \text{if}\; \Sigma_{t-1} = 0 \; \wedge \; \frac{1}{x}\sum\limits_{t-x}^{t-1} \bar O_{t} < EGO_{t}
\end{array}
\right.
\end{equation}
Therefore, the management becomes more (less) controlling when the average observed shirking ($\bar{s}_{t}$) of the current set of monitored employees ($i \in ETC_{t}$) exceeds (stays within) the maximum predefined threshold $s_{t}^{max}$.
As can be noted from Equation \ref{eq: sigma_endog}, we also account for a special case which takes place when the management has adopted a fully trusting strategy in the previous period, i.e. when $\Sigma_{t-1} = 0$.
When this event occurs, it is reasonable to conceive the management as indifferent to employees' shirking attitudes.
In this case, the firm would instead anchor any monitoring decisions to the average company output ($\bar O_{t}$) such that $\Sigma_{t}$ would increase proportionally to how far away $\bar O_{t}$ was from expected group output ($EGO_{t}$).
While monetary incentives are assumed to have positive steering effects on employee motivation \citep{Gerhart.2017}, the management should keep the amount of financial rewards as low as feasible as it contributes to the overall costs of the firm.
Wages are assumed to be sticky to some extent, as reflected in Equation \ref{eq: mu_endog}.
The management increases (decreases) the amount of financial rewards when the average company output ($\bar O_{t}$) is below (above) the management's expected group output.
If the company benchmark – namely $EGO_{t}$ – is reached, we assume that the management has no incentive to alter the amount of rewards.
\begin{equation}\label{eq: mu_endog}
\mu_{t} =
\left\{ \begin{array}{ll}
(1 + sui) \mu_{t-1} & \text{if}\; \frac{1}{x}\sum\limits_{t-x}^{t-1} \bar O_{t} < EGO_{t} \\
(1 - sui) \mu_{t-1} & \text{if}\; \frac{1}{x}\sum\limits_{t-x}^{t-1} \bar O_{t} > EGO_{t} \\
\mu_{t-1} & \text{otherwise} \\
\end{array}\right.
\end{equation}
When the management observes a subset of employees ($ETC_{t}$), it also gathers information about the amount of time they have devoted to cooperative activities ($\bar c_{t}$).
The management shifts to a higher (lower) degree of competitive rewards when the desired amount of time allocation to cooperation ($\kappa \cdot \alpha_{t}$) is (not) achieved.
By doing so, we also account for mixed PFP schemes ($\lambda_{t} \in [0,1]$), i.e. schemes that combine collective and individual rewards, which represent the most common type of rewards used in real-world scenarios \citep{Nyberg.2018}.
\begin{equation}\label{eq: lambda_endog}
\lambda_{t} =
\left \{ \begin{array}{ll}
(1 + sui) \lambda_{t-1} &\text{if}\; \frac{1}{x}\sum\limits_{t-x}^{t-1} \bar c_{t} < \kappa \cdot \alpha_{t} \\
(1 - sui) \lambda_{t-1} &\text{if}\; \frac{1}{x}\sum\limits_{t-x}^{t-1} \bar c_{t} > \kappa \cdot \alpha_{t} \\
\lambda_{t-1} & \text{otherwise} \\
\end{array} \right.
\end{equation}
Equations \ref{eq: sigma_endog}, \ref{eq: mu_endog}, and \ref{eq: lambda_endog} could result in values below or above the parameter boundaries of $[0,1]$.
In these cases, $\Sigma_{t}$, $\mu_{t}$ or $\lambda_{t}$ will be rounded to the nearest possible value inside the interval.
Further, any changes to the management style ($\Sigma_{t}$, $\mu_{t}$ or $\lambda_{t}$) also induce an update of the base satisfaction of agents $S_{i}^{0}$ in the same way as described in Equation \ref{eq: base-satisfaction}.
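Combining Equations \ref{eq: max-shirking} to \ref{eq: lambda_endog}, a stylised management-update step could be sketched as follows (Python; illustrative only, the ordering of the special case for a fully trusting management and the clipping to $[0,1]$ follow the text above).
\begin{verbatim}
def clip01(x):
    return min(1.0, max(0.0, x))

def expected_group_output(alpha, kappa):
    """Eq. (EGO): maximum Cobb-Douglas output given alpha = tau - s_max."""
    return (alpha * (1 - kappa)) ** (1 - kappa) * (alpha * kappa) ** kappa

def management_update(sigma, mu, lam, s_max, mean_s, mean_c, mean_o,
                      tau, kappa, h, sui):
    """One strategy update: adapt s_max, then monitoring, reward intensity
    and PFP type; mean_s, mean_c, mean_o are averages over the past window."""
    s_max = (1 - h) * s_max + h * mean_s        # Eq. (max shirking)
    alpha = tau - s_max
    ego = expected_group_output(alpha, kappa)
    # monitoring: special case when fully trusting, otherwise react to shirking
    if sigma == 0.0 and mean_o < ego:
        sigma = (1 - mean_o / ego) * sui
    elif mean_s > s_max:
        sigma = (1 + sui) * sigma
    else:
        sigma = (1 - sui) * sigma
    # reward intensity: react to the gap between observed output and EGO
    if mean_o < ego:
        mu = (1 + sui) * mu
    elif mean_o > ego:
        mu = (1 - sui) * mu
    # PFP type: react to observed cooperation vs. desired level kappa * alpha
    if mean_c < kappa * alpha:
        lam = (1 + sui) * lam
    elif mean_c > kappa * alpha:
        lam = (1 - sui) * lam
    return clip01(sigma), clip01(mu), clip01(lam), s_max

print(management_update(0.3, 1.0, 0.5, 0.8, 0.9, 2.0, 4.0,
                        tau=8.0, kappa=0.3, h=0.5, sui=0.05))
\end{verbatim}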
\section{Simulations and results}\label{sec: results}
Our main research question is about the potential effect of corporate culture on the profit differentials of otherwise similar firms.
To shed light on this, the current section presents our findings from the conducted agent-based simulations by focusing on three aspects identified to impact profitability: (i) the frequency of changes in management decisions, (ii) the influence of employees' homophily in interactions, and (iii) the role of job satisfaction.
A detailed description of the agent-based simulation algorithm can be found in Figure \ref{fig: flowchart} in Appendix \ref{sec: app-2}.
The simulations have been run for $3650$ time steps (i.e. $10$ years) with a stable workforce.
The presented results are mean aggregates over $100$ uniquely seeded replicate runs for each parameter constellation (= scenario).
All initial model parameters are summarised in Table \ref{tab: parameters} in Appendix \ref{sec: app-1}.\footnote{
The Julia code to recreate all of the simulation results and visualisations is available online (see \href{https://git.noc.ruhr-uni-bochum.de/vepabm/firm-as-cas}{here}) which comes with a thorough explanation of how to run it.
}
The nine scenarios previously used by \cite{Roos.2022} are no longer applicable here, since the management strategy is now endogenised.
As such, all further simulations start from the same neutral management strategy, which corresponds to the previously used Base scenario.
As outlined in the model description (see Section \ref{sec: model}), the management strategy regarding monitoring efforts and implemented pay for performance scheme now changes over time and depends on the chosen strategy update intensity ($sui$) and frequency ($suf$).
Hence, we introduce four new scenarios in Table \ref{tab: EM-scenarios} that modulate these parameters with which the management deterministically reacts to changes in the observed firm variables.\footnote{
While both $suf$ and $sui$ are modulated, the ratio $\sfrac{sui}{suf}$ is held constant to keep the number of scenarios low.
This assumption can be relaxed for more in-depth analysis of these model parameters.
}
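Since the ratio $\sfrac{sui}{suf}$ is fixed at $\sfrac{1}{600}$, the four parameter pairs in Table \ref{tab: EM-scenarios} are fully determined by the chosen update frequency; the following short Julia sketch (with illustrative names, not the published implementation) generates them.
\begin{verbatim}
# Illustrative sketch: with the ratio sui/suf fixed at 1/600, each scenario
# is determined by its update frequency alone.
scenarios = Dict(name => (suf = suf, sui = suf // 600)
                 for (name, suf) in [("Daily", 1), ("Monthly", 30),
                                     ("Biannually", 180), ("Yearly", 365)])
# e.g. scenarios["Yearly"] == (suf = 365, sui = 73//120)
\end{verbatim}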
\begin{table}[H]
\centering
\begin{tabular}{lcc}
\toprule
Name & $suf$ & $sui$ \\
\hline \hline
Daily & 1 & 1/600 \\
Monthly & 30 & 1/20 \\
Biannually & 180 & 3/10 \\
Yearly & 365 & 73/120 \\
\bottomrule
\end{tabular}
\caption{Varying strategy update intensity ($sui$) and frequency ($suf$) combinations in four scenarios for use with the endogenous management extension.}
\label{tab: EM-scenarios}
\end{table}
Figure \ref{fig: profitability} displays the firm's profitability over time across four scenarios with varying degrees of strategy update frequency and intensity (see Table \ref{tab: EM-scenarios} for details).
Profitability has been formalised as the ratio of the sum of output to the sum of rewards.\footnote{
Note that this ratio is of a purely theoretical nature as the model at hand does not have a market to convert output into money.
As such, the relative differences in profitability allow for a discussion of corporate culture as a potential source of profit differentials between otherwise equal firms.
Therefore, this abstraction to relate units of output to unspecified monetary units of reward payments seems sufficient for the aims of this paper.
}
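In code, this aggregate measure reduces to a one-liner (illustrative sketch, not the published implementation):
\begin{verbatim}
# Firm profitability: total output divided by total rewards paid over the
# same period (illustrative sketch).
profitability(output, rewards) = sum(output) / sum(rewards)
\end{verbatim}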
The main plot in the top left subfigure shows this from an aggregate perspective, taking into account the output and rewards of all firm employees.
The top right contains four subfigures providing a more fine-grained view on the profitability of each of the four value groups.
The evolution of the management strategy according to the three parameters monitoring ($\Sigma$), intensity ($\mu$), and type ($\lambda$) of implemented PFP scheme can be tracked in the bottom row of Figure \ref{fig: profitability}.
\begin{figure}
\caption{Firm profitability (top row) and management strategy (bottom row) over time across four scenarios. Top left plot shows aggregate profitability across the whole work force. Top right plots show profitability broken down into the four higher-order value groups. Three plots in bottom row show the development of the management strategy parameters. Source: Authors' own illustration.}
\label{fig: profitability}
\end{figure}
The top left plot shows that changes in profitability get more erratic with decreasing strategy update frequency, whereas more frequent and incremental updates lead to a smoother transition over time.
Such frequent incremental updates allow the firm's management to react faster to new business insights and to closely adapt its currently employed strategy in accordance with the underlying heuristics.
After one year the Yearly scenario brings the best results in terms of profitability, which points towards the positive impact of a stable environment that allows social norms to manifest and spread among the employees.
However, this changes rapidly in the following year where drastic modifications of the management style (increased monitoring due to higher than acceptable observed shirking) lead to severe drops in profitability.
Although Conservative agents react positively to this change, even leading to a short-lived rise in profitability for these employees, the decline in aggregate profitability can be observed in Figure \ref{fig: profitability} across all value-types from the second year onward, where only Open-to-change agents' profitability shows a U-shaped recovery.
One noteworthy outlier is the profitability of Conservative and Open-to-change agents in the Daily scenario, implying that small and frequent updates lead to (un)favourable outcomes for employees in these higher-order value groups.
Here, the management can constantly observe the shirking of a subset of employees and adapt its expectations (i.e. $s_{t}^{max}$ and therefore also $EGO_{t}$) to what has actually happened over the past day.
The resulting early drops in monitoring efforts $\Sigma$ significantly increase the satisfaction levels (i.e. productivity) of Open-to-change agents.\footnote{
The inverse effect of more monitoring can also be observed in the evolution over time of Monthly, Biannually, and Yearly scenarios.
}
Even though the model has been built under the assumption of behavioural symmetry between employees of opposing higher-order values, Conservative agents do not completely mirror the reactions of Open-to-change agents because the latter's increasingly productive behaviour also manifests itself in the social norms perceived by others.
Furthermore, any reduction in monitoring also lowers the amount of observed employees each day, thus leading to less verbal and written warnings, and ultimately resulting in higher satisfaction across the whole population.
In the Daily scenario, this effect counters the negative impact on satisfaction of Conservative employees throughout years one to three and even leads to small increases in their profitability.
Yet the long-run trend towards very low degrees of monitoring throughout all four scenarios eventually overshadows these gains and leads to a convergence of Conservative and Open-to-change employees' profitability around $0.19$ and $0.56$ respectively.
Although also positively affected by the reduction in warnings issued by the management, the evolution of Self-enhancing and Self-transcendent agents is driven by different influential factors.
With respect to the intensity of implemented incentive schemes, the four scenarios paint a very similar, albeit slightly time-lagged, picture.
After two years at most, they all lead to maximum $\mu$, leaving this parameter around this level until the end with only a short-lived dip in PFP intensity in years four to six of the Yearly scenario.
As laid out in Equation \ref{eq: mu_endog}, changes to the amount of incentives paid depend on the management's expectations regarding group output ($EGO_{t}$), which are strongly influenced by what the management observes over time as normal shirking behaviour.
Indeed, the results hint at those expectations being practically unachievable under most simulation settings.
The only exception is a period of two years in the Yearly scenario where the sustained productivity of Self-enhancing employees and the rapidly increasing productivity of Open-to-change employees contribute to levels of output that are high enough to warrant a reduction in monetary incentives.
Still, it is important to note that the amount of paid incentives has a strong impact on the calculation of profitability, ultimately adding to the explanation of the ongoing sideways-to-downward trend in aggregate profitability after year six (and earlier when looking at the separate higher-order value groups).
Changes to the type of implemented PFP scheme occur in the first half of the simulation period and cause positive/negative reactions from Self-enhancing/Self-transcendent agents.
However, the positive impact on the former group's behaviour is diminished by their peers from other value groups, thus countering the development of potentially more profitable social norms.
This kind of mitigation cannot be observed for Self-transcendent employees which is caused by their social networks exhibiting high degrees of homophily\footnote{
Homophily is defined as the weighted share of agent $i$'s peers who belong to the same value group as $i$.
The weights are determined by the current connection strength at time $t$ between agents $i$ and $j \; \forall \; j \in P_{i,t}$.
} at approximately twice the levels of all other value groups in the long run.
As such, this management decision is overshadowed by the declining influence of Self-transcendent employees on the social norms across the whole firm.
Subsequently, the decision to lower $\lambda$ is reverted from year 2 (Daily) to year 4 (Yearly) onward, with $\lambda$ eventually remaining at high levels above $0.9$ again.
\begin{figure}
\caption{Interactions homophily in the endogenous social network across four scenarios. Left plot shows aggregate interactions homophily across the whole work force. Right plots show interactions homophily broken down into the four higher-order value groups. Source: Authors' own illustration.}
\label{fig: homophily-interactions}
\end{figure}
Figure \ref{fig: homophily-interactions} provides insights on how homophily in interactions between the employees changes over time.
The main plot on the left side depicts the average homophily of all agents' interactions in the endogenous social network across the four scenarios and also provides a dashed horizontal line as a reference case with an unweighted complete graph.
The four subplots on the right side are again divided by the higher-order value types and display their mean interactions homophily for inter-group comparisons.
While it is evident that differences exist between all four employee types, Self-transcendent agents reach markedly higher degrees of homophily ($0.80 - 0.85$) than the three other groups.
Since the probability for two agents to interact depends on their activity similarity, the stronger deviations from social norms allow Open-to-change agents to consistently achieve the most inter-group interactions at the end of the four simulated scenarios ($0.37 - 0.38$).
Conservative ($0.39 - 0.41$) and Self-enhancing ($0.41 - 0.46$) employees find themselves in the middle.
The explanation for the wider spread of the latter group can be found in the different intensity of competitive incentives across the four scenarios (cf. bottom right plot in Figure \ref{fig: profitability}) which leads to temporary boosts to activity similarity in this value group.
These findings suggest that even short-lived changes in management strategy can have a lasting effect on the firm's network and by that consequently also affect profitability.
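A minimal Julia sketch of the weighted homophily measure defined in the footnote above reads as follows; here \texttt{weights} stands for the connection strengths $e_{i,j,t}$ between agent $i$ and its peers and \texttt{same\_group} for a Boolean vector flagging peers of the same value group (both names are illustrative, not the published implementation).
\begin{verbatim}
# Weighted share of agent i's peers that belong to the same value group,
# with the current connection strengths as weights (illustrative sketch).
function interaction_homophily(weights, same_group)
    total = sum(weights)
    total == 0 && return 0.0          # no connections yet
    return sum(weights .* same_group) / total
end
\end{verbatim}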
\begin{figure}
\caption{Job satisfaction across four scenarios. Left plot shows aggregate satisfaction across the whole work force. Right plots show satisfaction broken down into the four higher-order value groups. Source: Authors' own illustration.}
\label{fig: satisfaction}
\end{figure}
Comparing the satisfaction (see Figure \ref{fig: satisfaction}) and profitability curves of the four agent groups reveals high similarities of their evolution for almost all employees except those belonging to the Self-transcendent group.
The correlation values reported in Table \ref{tab: correlations} show that bivariate correlations between satisfaction and profitability are strong for Conservative, Open-to-change, and Self-enhancing agents ($SP \geq 0.879$), which suggests that job satisfaction is indeed a positive influential factor for these employees.
The low correlation of Self-transcendent agents is an indicator that there are cases in which job satisfaction is indeed high with no opportunity to translate this into high levels of output and/or profitability.
This is due to their generally unproductive allocation of time with too much emphasis on cooperative activities and the accompanying social separation from the rest of the firm's employees which in combination lead to low profitability of Self-transcendent agents.
However, for all higher order value groups a lower strategy update frequency generally implies a more positive correlation between satisfaction and profitability.
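The coefficients reported in Table \ref{tab: correlations} are plain bivariate Pearson correlations between the group-level time series; a minimal sketch of how such coefficients can be obtained with Julia's standard \texttt{Statistics} module is given below (names are illustrative).
\begin{verbatim}
using Statistics  # standard library

# Bivariate Pearson coefficients for one value group (illustrative sketch);
# s, p, h are the group-level satisfaction, profitability and homophily series.
correlations(s, p, h) = (SP = cor(s, p), HP = cor(h, p), SH = cor(s, h))
\end{verbatim}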
\begin{table}[H]
\centering
\begin{tabular}{llccc}
\toprule
Value group & Scenario & $SP$ & $HP$ & $SH$ \\
\hline \hline
\multirow{4}*{Conservative}
& Daily & \phantom{-}0.879 & -0.703 & -0.817 \\
& Monthly & \phantom{-}0.977 & -0.236 & -0.291 \\
& Biannually & \phantom{-}0.978 & -0.256 & -0.238 \\
& Yearly & \phantom{-}0.970 & -0.274 & -0.225 \\
\cmidrule{2-5}
\multirow{4}*{Open-to-change}
& Daily & \phantom{-}0.943 & \phantom{-}0.948 & \phantom{-}0.899 \\
& Monthly & \phantom{-}0.998 & \phantom{-}0.831 & \phantom{-}0.818 \\
& Biannually & \phantom{-}0.998 & \phantom{-}0.843 & \phantom{-}0.842 \\
& Yearly & \phantom{-}0.994 & \phantom{-}0.860 & \phantom{-}0.879 \\
\cmidrule{2-5}
\multirow{4}*{Self-enhancing}
& Daily & \phantom{-}0.982 & -0.126 & -0.156 \\
& Monthly & \phantom{-}0.927 & -0.350 & -0.494 \\
& Biannually & \phantom{-}0.973 & \phantom{-}0.013 & \phantom{-}0.053 \\
& Yearly & \phantom{-}0.954 & \phantom{-}0.071 & \phantom{-}0.158 \\
\cmidrule{2-5}
\multirow{4}*{Self-transcendent}
& Daily & -0.855 & -0.977 & \phantom{-}0.883 \\
& Monthly & -0.838 & -0.966 & \phantom{-}0.902 \\
& Biannually & -0.651 & -0.962 & \phantom{-}0.761 \\
& Yearly & -0.268 & -0.931 & \phantom{-}0.519 \\
\cmidrule{2-5}
\multirow{4}*{Average}
& Daily & -0.141 & -0.220 & \phantom{-}0.937 \\
& Monthly & \phantom{-}0.602 & \phantom{-}0.138 & \phantom{-}0.760 \\
& Biannually & \phantom{-}0.006 & -0.090 & \phantom{-}0.843 \\
& Yearly & \phantom{-}0.023 & -0.110 & \phantom{-}0.793 \\
\bottomrule
\end{tabular}
\caption{Bivariate Pearson correlations between satisfaction and profitability ($SP$), interactions homophily and profitability ($HP$), and satisfaction and interactions homophily ($SH$). Results have been truncated to three digits.}
\label{tab: correlations}
\end{table}
Homophily has weak explanatory value for the profitability of Self-enhancing agents and relatively low, although firmly negative, correlations for Conservative agents.
One exception is the Daily scenario where the early reduction of monitoring leads to a vastly divergent result and thereby pushes the correlation coefficient of homophily and profitability for Conservative agents further into negative territory.
The interaction homophily of Open-to-change (Self-transcendent) agents follows their profitability in more pronounced ways, exhibiting strong positive (negative) correlations that decrease in intensity with lower strategy update frequency.
Satisfaction and interaction homophily evolve in similar ways for both Open-to-change and Self-transcendent agents and show only low signs of correlation for Self-enhancing agents.
However, there is a negative correlation for Conservative agents implying that higher satisfaction levels are accompanied by lower homophily in their interactions.
These observations suggest that a management might want to implement measures that increase the embeddedness of Self-transcendent agents in the broader firm population (thus lowering their homophily levels) while at the same time such measures are likely to be relatively ineffective for Self-enhancing agents.
For Conservative, Open-to-change, and Self-enhancing employees the effects of high job satisfaction on profitability are likely stronger than the impact of their personal social network, suggesting that a sensible management would instead cater to this by implementing measures that raise their general job satisfaction.
The last four rows of Table \ref{tab: correlations} show how the variables correlate with each other when using average values computed from all firm employees.
In the Biannually and Yearly scenarios, both satisfaction and interactions homophily show only very slight correlations to firm profitability which suggests that these aspects of corporate culture play a relatively minor role in scenarios with slow-moving management strategies.
This effect is more pronounced in the Monthly scenario ($SP = 0.602$) which is also the only case where interactions homophily has a positive correlation coefficient with profitability ($HP = 0.138$).
Quite contrarily, daily management strategy updates lead to a situation in which satisfaction and interactions homophily are strongly positively correlated ($SH = 0.937$) and higher values are accompanied by lower profitability.\footnote{
While these results are somewhat unexpected and might even warrant a more in-depth examination and discussion, it has to be duly noted that the Daily scenario is an extreme edge case.
It implies perfect willingness and ability of both employees and management to adapt to an ever-changing work environment and does not incorporate any cost of changing strategies (which arguably becomes more important with higher strategy update frequencies).
}
To summarise, three out of four scenarios show a more or less moderate positive correlation between satisfaction and profitability which qualitatively fits the findings in the empirical literature \citep{Judge.2001, Fisher.2003}.
Going back to Figure \ref{fig: profitability}, after some time the four scenarios have reached states of slowly declining profitability past $6$ simulated years which persist until the end of the simulation.
Yearly strategy updates yield the highest profitability at approximately $0.2751$ after ten years.
The other three scenarios reach profitability levels closer to each other, with Biannually at $\approx 0.2675$, Monthly at $\approx 0.2653$, and Daily at $\approx 0.2647$.
Thus, it becomes apparent that (i) fluctuations in profitability increase with less frequent changes in management style, (ii) achieved levels of profitability are higher under a less adaptive management, and (iii) management expectations regarding output and normal degrees of shirking play a crucial role in the long-term profitability of firms.
\begin{figure}
\caption{Cumulated firm profitability over time for each of the four scenarios divided by the cumulated firm profitability of a baseline scenario with a neutral and constant management style (black dashed horizontal line). Source: Authors' own illustration.}
\label{fig: relative-profitability}
\end{figure}
Although the differences between the scenarios are clearly visible, their endpoints are relatively close to each other while also only providing a snapshot of the current profitability at any given time.
Cumulated profitability over time paints another picture, however, as it takes into account the accumulated profits in each scenario's pathway which can then be compared to a neutral baseline scenario\footnote{
The model parameters shaping the management style of this scenario are $\Sigma = 0.5$, $\mu = 0.0$, and $\lambda = 1.0$ which is identical to the Base scenario used in \cite{Roos.2022}.
} without any changes in management style.
We can see in Figure \ref{fig: relative-profitability} that the four lines at first follow similar trajectories but indeed deviate from each other's paths over the long run.
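The relative measure shown in Figure \ref{fig: relative-profitability} can be sketched as follows; \texttt{profit\_scenario} and \texttt{profit\_base} denote the per-step profitability series of a scenario and of the neutral baseline (illustrative names, not the published implementation).
\begin{verbatim}
# Cumulated profitability of a scenario relative to the neutral baseline,
# expressed in percent at every time step (illustrative sketch; here taken as
# the running sum of per-step profitability, the published code may instead
# aggregate output and rewards before taking the ratio).
relative_profitability(profit_scenario, profit_base) =
    100 .* cumsum(profit_scenario) ./ cumsum(profit_base)
\end{verbatim}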
\begin{table}[H]
\centering
\begin{tabular}{r|ccccc}
Scenario & Base & Yearly & Daily & Biannually & Monthly \\
Relative profitability & 100.00 & 62.85 & 61.75 & 59.81 & 57.69 \\
\end{tabular}
\caption{Relative profitability of the four scenarios at the end of the simulation in percent, sorted in descending order from left to right. This is measured as the relation of their cumulated firm profitability values to that of the baseline scenario without endogenised management decision making.}
\label{tab: relative-profitability}
\end{table}
Considering only the end of the simulations, Table \ref{tab: relative-profitability} provides some insight on the fact that the differences in cumulated profits are deeply ingrained in the emerging corporate culture that is shaped by social norms and the frequency of strategic management decisions.
The Yearly scenario yields the highest cumulated profitability in the long run even though slow changing strategies may lead to managerial overreaction due to lower ability to adapt to new insights in the short term.
Daily and biannual changes in management strategies lead to medium levels of cumulated profitability whereas pursuing monthly strategy changes consistently leads to the worst performance among the four scenarios.
However, it has to be noted that managerial interventions only produce higher levels of relative profitability in the first half year and thereafter perform below the reference level for the rest of the simulation.
The neutral baseline scenario which neither changes monitoring nor type or intensity of implemented incentive schemes continues to outperform the adaptive scenarios by more than $59\%$.
The equal distribution of higher order values in the simulated workforce is reflected in the employed neutral scenario which does not favour or adversely affect the behaviour of any particular group.
This finding suggests that it might be better to not change the implemented management strategy at all and instead rely on the realisation of self-organisational capacity in the social network of the firm's employees.
Given the knowledge of the distribution of higher order values among the employees, the firm's management could anticipate which strategy would provide the best long-term fit to the emerging corporate culture that manifests itself in the employees' behaviour guided by personal values and social norms.
\section{Discussion}\label{sec: discussion}
We view the firm as a complex adaptive system, i.e. a system of a large number of agents that interact and adapt \citep{Holland.2006}.
In our model of the firm, several kinds of interactions and adaptations occur.
Employees interact by forming a network and by cooperating.
Through their actions, social norms of behaviour emerge, to which each worker adapts in line with his or her own values.
The evolving social norms anchor employees’ behaviours, but do not determine them completely.
The management also interacts with the employees.
The management tries to keep shirking under control, to achieve high output and to promote cooperation among employees.
It uses direct control instruments and the parameters of a monetary reward scheme to influence employees’ actions.
Employees indeed adapt to the management’s policies.
They change their shirking and cooperation behaviour, and the satisfaction, which influences their productivity, adapts, too.
Finally, the management adapts its management strategy to the observed outcomes of the use of the management instruments.
Hence, corporate culture in the form of employees’ social norms, that guide their behaviour, and management strategies (or the firm’s formal institutions) co-evolve.
Due to the adaptation of the firm’s corporate culture, it is difficult for the management to influence the behaviour of employees in the desired way.
Hence, there are strong constraints on the ability of the management to control the system with the given management tools.
Our analysis focused on what we call management scenarios.
The four scenarios considered how often and how strongly the management updates the intensity and the frequency of its instrument use (or strategy).
Our first result is that in the long run (i.e. after about six years), all scenarios converge to a similar level of profitability and almost identical strategies.
In the long run, the management does not monitor employees’ shirking anymore and uses group performance rewards as an incentive scheme.
Over time there is an implicit learning effect that stems from the management's observations of actual employee behaviour, gradually leading to adaptation of expectations regarding shirking behaviour and produced output.
The finding that the management completely abandons monitoring efforts in the long run can therefore be interpreted as the endogenous emergence of trust.
Despite this convergence, there are enormous differences in profitability across the scenarios during the adaptation process.
Especially the two extreme adaptation styles – daily updating and yearly updating – lead to stark temporary differences in management strategies.
While daily updating leads to a gradual reduction of monitoring and a gradual and relatively soon increase in group rewards, yearly updating for some years generates almost the opposite strategy, i.e. high monitoring and low group rewards.
Nevertheless, the cumulated profitability levels of these extreme adaptation styles are close together, indicating that both management strategies are relatively successful.
The cumulative profitability of biannual adaptation, but especially of monthly adaptation, is clearly lower.
The scenarios differ with regard to the impact on the different types of employees.
Daily adaptation\footnote{
Daily updating is more a theoretical limiting case than a realistic description of management behaviour.
In the presence of decision-making costs, the management cannot be expected to change its strategy on a daily basis.
} leads to an early reduction of monitoring and is strongly appreciated by O-agents who in turn experience high levels of job satisfaction.
As shown in \cite{Roos.2022}, O-agents are important for the evolution of corporate culture because their motivation or demotivation can impact others through their wide-reaching influence on social norms.
In contrast to that, the yearly updating scenario produces temporarily high individual monetary rewards which have a motivating effect on SE-agents.
In the long run, both C- and SE-agents end up with very low levels of job satisfaction because the instruments they value (i.e. monitoring for C-agents and individual rewards for SE-agents) are not used.
The mirror image of this result is that O-agents and ST-agents converge to very high levels of job satisfaction.
Because of the link between job satisfaction and labour productivity, the long-run demotivation of some employees is a crucial issue.
In our model calibration, a long-run job satisfaction close to zero for C-agents and SE-agents implies that both groups only work at the minimum productivity of 0.5, resulting in a substantial output loss for the firm.
The findings regarding job satisfaction require some qualifications.
It seems plausible that employees with very low job satisfaction will quit their job at some point in time whereas there is no turnover of the firm’s workforce in our current model.
Both the dynamics and the long-run outcomes might be quite different if employees were allowed to leave the firm when their job satisfaction falls below a threshold for a certain time.
The different adaptation styles of the management strategy might then lead to a selection of particular types of employees in a firm.
A firm with daily updating might quickly lose all C-agents and over time also most SE-agents.
Conversely, any more long-term updating style would probably drive away O-agents rather soon.
With a change of the workforce composition, we might expect that the four adaptation scenarios will not converge to the same management strategies in the long run.
We leave a detailed analysis of workforce turnover to future work.
Another interesting finding concerns the dynamics of the interaction homophily in the endogenous social network.
There are practically no differences across the adaptation scenarios, suggesting that the network dynamics are unaffected by the management style.
This finding can be interpreted as a form of self-organisation that constrains the management’s ability to control the system.
While the interaction homophily of C-, SE- and O-agents converges to the same level, the long-run value of ST-agents' interaction homophily is twice as high.
This means that this group has a much stronger in-group connectivity than the others.
ST-agents hence strongly interact among themselves, implying that they develop their own subculture within the firm over time.
This result might be relevant if the management tried to influence corporate culture directly, e.g. by communication, which is not represented in our model.
An analysis of direct efforts of the management to affect corporate culture is another interesting topic for future research.
A final remarkable result is that all adaptation scenarios with changing management styles lead to significantly lower cumulated profitability than a baseline scenario in which the management initially chooses a neutral management style and sticks with it forever.
Neutral management style means that the management chooses an intermediate monitoring strategy and abstains from using pay-for-performance schemes.
The key point is that while it is possible to influence employees’ behaviour with monetary incentives, using rewards is also costly.
Furthermore, a constant strategy makes it easier for social norms to converge quickly on a dominant path \citep[see][]{Roos.2022}.
Hence, letting the forces of self-organisation within a firm do their work might be preferable to active management efforts to induce a certain behaviour.
However, we consider this conclusion as tentative and preliminary and stress that more research is necessary to check its robustness to variations of the model.
In future work, our model could be modified in several dimensions.
First, employees only differ in terms of their values, but not in terms of other characteristics such as skills or knowledge.
As a consequence, the task interdependence and hence the necessary cooperation is rather abstract.
In the present model, it is not necessary that certain employees cooperate due to skill or knowledge complementarities, which is a limitation.
Relatedly, there is no hierarchy and no formal working structure in the firm whereas both might have an impact on the formation of norms and the evolution of intra-firm subcultures.
Second, there is no labour turnover.
Employees do not quit if they are dissatisfied and the management does not fire underperformers or employees who received several written warnings as a result of being caught shirking more than deemed acceptable.
When employees are allowed to leave the firm, a hiring process of new employees must also be modelled.
By selecting employees according to their value types, the management would have another management instrument that might be crucial for performance.
Third, the management is modelled as an abstract entity, but in reality it consists of individuals with values and behaviours as well.
Modelling managers as individuals could also have a direct impact on corporate culture, either by being a role model or through direct communication efforts.
Finally, in the current model, both monitoring and the updating of the management strategy are costless.
However, monitoring efforts would cause a direct resource cost impacting profitability while changing the management strategy would require efforts from the managers and entail learning and implementation costs.
These might also depend on personal characteristics of the managers and on power relations within the firm.
\section{Conclusion}\label{sec: conclusion}
Our paper shows that a firm can be viewed and modelled as a complex adaptive system.
Due to the adaptation of employees to the management strategy, the emergence of social norms and self-organisation within the workforce, the management’s ability to control the firm is limited.
We define corporate culture as the endogenous social norms that regulate employees’ shirking and cooperation behaviour, which in turn has a direct impact on the firm’s output.
Employees’ responses to social norms are driven by their values.
The management tries to influence employees’ behaviour directly through monitoring and monetary incentives, but does not consider the indirect effects on corporate culture.
This implies that management policies can have unintended side effects which counteract the direct ones.
The presented model provides plenty of opportunities for future extensions, e.g. by adding personal skills and knowledge, a formal hierarchy and dependencies in the organisation, costs to monitoring and strategy changes, or fluctuations in the workforce.
We show that firms with a management that adopts extreme adaptation styles of their management strategy have higher profitability than firms that choose intermediate or moderate adaptation styles.
Firms in which adaptation occurs either very frequently (i.e. daily) or very infrequently (i.e. yearly) have higher cumulated profitability at the end of the simulations.
The different adaptation styles have diverse effects on employees with different values and hence on endogenous corporate culture.
We find that adaptation of the management style leads to a long-run decrease in monitoring and the increased use of group performance rewards.
The decrease in monitoring can be interpreted as the endogenous emergence of trust of the management in employees.
Frequent adaptation with a fast decrease of monitoring has a strong positive effect on the satisfaction and the performance of employees who are self-directed and open to change.
Due to their connectedness inside the firm's social network, they are drivers of corporate culture.
However, we also find that active adaptation of the management's strategy is always inferior, with regard to a firm's profitability, to a non-adapting management style that is already tailored to fit the value composition in the workforce.
In firms with non-adapting management the self-organisation of corporate culture and its effect on employee behaviour is not disturbed.
\begin{appendices}
\setcounter{table}{0}
\renewcommand{\thetable}{\Alph{section}\arabic{table}}
\renewcommand{\thefigure}{\Alph{section}\arabic{figure}}
\setcounter{equation}{0}
\section{Additional tables} \label{sec: app-1}
\setcounter{table}{0}
\begin{table}[H]
\centering
\resizebox{!}{.2\textwidth}{\begin{tabular}{llccc}
\toprule
Parameter & Description & \multicolumn{3}{c}{Value}\\ \cmidrule{3-5}
& & Social Network & Adaptive Behaviour & Endogenous Management\\
& & (Section \ref{subsec: network}) & (Section \ref{subsec: adaptive-behaviour}) & (Section \ref{subsec: end_management})\\
\hline \hline
$n$ & number of agents in the model & $100$ & — &— \\
$dist$ & distribution of agent types & $(0.25, 0.25, 0.25, 0.25)$& —&— \\
$\kappa$ & degree of task interdependence & $0.5$& —&—\\
$w_{b}$ & hourly wage & $1$& —&— \\
$h$ & rate of adjustment to social norms & $0.1$&— &— \\
$\tau$ & maximum available time & $8$& —&— \\
$\Sigma$ & monitoring intensity & $\{0, 0.5, 1\}$& — & $\Sigma \rightarrow \Sigma_{t} \in [0,1]$ \\
$\mu$& incentive intensity & $\{0, 1\}$& — & $\mu \rightarrow \mu_{t} \in [0,1]$ \\
$\lambda$ & type of PFP scheme & $\{0, 0.5, 1\}$& — & $\lambda \rightarrow \lambda_{t} \in [0,1] $\\
$S^{eff}$ & mediator satisfaction &/ & $0.5$& —\\
$ETC$& subset of observed employees &/& $ETC \subset N$& $ETC \rightarrow ETC_{t}$\\
$\eta$ & shock of being caught &/& $0.05$& — \\
$suf$ & strategy update frequency &/&/& $\{1, 30, 180, 365\}$\\
$sui$ & strategy update intensity &/&/& $\{\frac{1}{600}, \frac{1}{20}, \frac{3}{10}, \frac{73}{120}\}$\\
\bottomrule
\end{tabular}}
\caption{\enspace Model parameters. We use "/" to indicate a missing element, "—" no changes from the previous column, and "$\rightarrow$" for endogenised parameters.}
\label{tab: parameters}
\end{table}
\begin{landscape}
\begin{table}[p!]
\begin{adjustbox}{width=1.5\textwidth}
\begin{tabular}{lcccc}
\toprule
Description & Original Equations & \multicolumn{3}{c}{Model Equations} \\ \cmidrule{3-5}
&\citep{Roos.2022} & Social Network& Adaptive Behaviour & Endogenous Management \\
& & (Section \ref{subsec: network}) & (Section \ref{subsec: adaptive-behaviour}) & (Section \ref{subsec: end_management}) \\\hline \hline
1. \hyperlink{page.12}{Individual output} & $O_{i,t} = p_{i,t}^{(1-\kappa)}*\bar{c}_{i,t}^{\kappa}$ & — & $O_{i,t} = \pi_{i,t} (p_{i,t}^{\phantom{i}(1-\kappa)} * \bar{c}_{i,t}^{\phantom{i}\kappa})$& —\\
2. \hyperlink{page.12}{Productivity effects} &/ & / & $\pi_{i,t} = (1 - S^{eff}) + 2 \cdot S^{eff} \cdot S_{i,t}$& — \\
3. Average cooperation & $ \bar{c}_{i,t}= \sfrac{1}{(n-1)} \sum_{j \neq i} c_{j,t}$ & — & — & — \\
4. \hyperlink{page.11}{Shirking time norm} & $ s^{*}_{t} = (1-h)\; s^{*}_{t-1} + h\; \frac{\sum_{j \epsilon N} s_{j,t-1}}{n}$ & $ s_{i,t}^{*} =
\left\{ \begin{array}{ll}
(1 - h) \; s_{i,t-1}^{*} + h \; \frac{\sum_{j \in I_{i,t}} \Delta e_{i,j,t-1} \; s_{j,t-1}}{\sum_{j \in I_{i,t}} \Delta e_{i,j,t-1}} & \text{if} \; I_{i,t} \neq \emptyset\\
s_{i,t-1}^{*} & \text{otherwise}
\end{array} \right.$& — & —\\
5. \hyperlink{page.11}{Cooperative time norm} & $ c^{*}_{t} = (1-h)\; c^{*}_{t-1} + h\; \frac{\sum_{j \epsilon N} c_{j,t-1}}{n} $ & $c_{i,t}^{*} =
\left\{ \begin{array}{ll}
(1 - h) \; c_{i,t-1}^{*} + h \; \frac{\sum_{j \in I_{i,t}} \Delta e_{i,j,t-1} \; c_{j,t-1}}{\sum_{j \in I_{i,t}} \Delta e_{i,j,t-1}} & \text{if} \; I_{i,t} \neq \emptyset\\
c_{i,t-1}^{*} & \text{otherwise}
\end{array} \right.$ & — & — \\
6. Individual time norm & $p_{t}^{*} = \tau - s_{t}^{*} - c_{t}^{*}$ & $p_{i,t}^{*} = \tau - s_{i,t}^{*} - c_{i, t}^{*}$ & — & — \\
7. Shirking time & $s_{i,t} \sim T(a_{i,t}, b_{i,t}, m_{i,t})$ & — & — & —\\
8. Cooperative time & $c_{i,t} \sim T(a_{i,t}, b_{i,t}, m_{i,t})$ & — & — & —\\
9. Individual time & $p_{i,t} = \tau - s_{i,t} - c_{i,t}$ & — & — & —\\
10. Lower bound triangular distribution & $a_{i,t} = x_{t}^{*}(1-\delta_{i})$, where $x_{t}^{*} = \{s_{t}^{*}, c_{t}^{*}\}$ & — & — & —\\
11. \hyperlink{page.14}{Upper bound triangular distribution} & $b_{i,t} = x_{t}^{*}(1+\delta_{i})$ , where $x_{t}^{*} = \{s_{t}^{*}, c_{t}^{*}\}$& — & $b_{i,t} = x_{i,t}^{*}(1+\beta_{i,t\phantom{i}}\delta_{i})$, where $x_{i,t}^{*} = \{s_{i,t}^{*}, c_{i,t}^{*}\}$ & —\\
12. Mode triangular distribution & $ m_{i,t} = \left \{\begin{array}{ll} x^{*}_{t} + \phi_{i,t} & \text{if}\; x_{t}^{*} = s_{i,t}^{*}\\
x^{*}_{t} + \gamma_{i,t} + \rho_{i,t} & \text{if}\; x_{t}^{*} = c_{i,t}^{*} \\
\end{array}\right.$, where $x_{t}^{*} = \{s_{t}^{*}, c_{t}^{*}\}$ &
$x_{t}^{*} \rightarrow x_{i,t}^{*}$
& — & —\\
13. Deviation from norms & $ \delta_{i} = \left \{\begin{array}{ll}
\sfrac{1}{3} & \text{if}\; i = C \\
1 & \text{if} \; i = O \\
\sfrac{2}{3} & \text{if}\; i = SE \\
\sfrac{2}{3} & \text{if}\; i = ST \\
\end{array}\right. $ & — & — & —\\
14. Need for autonomy & $ \phi_{i,t} = \left \{\begin{array}{ll}
\phantom{-}0.5s_{t}^{*}\delta_{i} & \text{if}\; i = C \; \text{and}\; \Sigma_{t} = 0\; \text{or}\; i = O \; \text{and}\; \Sigma_{t}= 1\\
-0.5s_{t}^{*}\delta_{i} & \text{if}\; i = C \; \text{and}\; \Sigma_{t} = 1\; \text{or}\; i = O \;\text{and} \; \Sigma_{t} = 0\\
\phantom{-}0 & \text{otherwise} \\
\end{array}\right.$
&
$s_{t}^{*} \rightarrow s_{i,t}^{*}$
& —
& $\Sigma = 0 \rightarrow \Sigma_{t} \in \left[0,0.5\right)$ and $\Sigma = 1 \rightarrow \Sigma_{t} \in \left(0.5,1\right]$\\
15. Degree of cooperativeness & $ \gamma_{i,t} = \left \{\begin{array}{ll}
-0.5c_{t}^{*}\delta_{i} & \text{if}\; i = SE \\
\phantom{-}0.5c_{t}^{*}\delta_{i} & \text{if} \; i = ST \\
\phantom{-}0 & \text{otherwise} \\
\end{array}\right.$ &
$ c_{t}^{*} \rightarrow c_{i,t}^{*}$ & —& —\\
16. Responsiveness to rewards & $ \rho_{i,t} = \left \{\begin{array}{ll}
-0.5c_{t}^{*}\delta_{i} & \text{if}\; i = SE \; \text{and}\; \lambda = 0\\
-0.1c_{t}^{*}\delta_{i} & \text{if}\; i = ST \; \text{and}\; \lambda = 0\\
\phantom{-}0.1c_{t}^{*}\delta_{i} & \text{if}\; i = SE \; \text{and}\; \lambda = 1\\
\phantom{-}0.5c_{t}^{*}\delta_{i} & \text{if}\; i = ST \; \text{and}\; \lambda = 1\\
\phantom{-}0 & \text{otherwise} \\
\end{array}\right.$
&
$c_{t}^{*} \rightarrow c_{i,t}^{*}$ & — &
$\lambda = 0 \rightarrow \lambda_{t} \in \left[0,0.5\right)$ and $\lambda = 1 \rightarrow \lambda_{t} \in \left(0.5,1\right]$\\
17. \hyperlink{page.12}{Employees' bonus} & $B_{i,t} = (1-\lambda)O_{i,t} + \lambda (\frac{1}{n}) \sum_{j=1}^{n} O_{j,t}$ & — & — & $\lambda \rightarrow \lambda_{t}$\\
18. \hyperlink{page.12}{Financial rewards} & $ R_{i,t} = \omega_{b} + \mu B_{i,t}$ & — & — & $ \mu \rightarrow \mu_{t}$\\
\bottomrule
\end{tabular}
\end{adjustbox}
\caption{Model equations overview. Comparison between the original equations in \cite{Roos.2022} and the three additional extensions. We use "/" to indicate a missing element, "—" no changes from the equation in the column to the left, and "$\rightarrow$" for changes of single variables. \textit{Continues on the next page.}}
\label{tab: list_eq}
\end{table}
\end{landscape}
\begin{landscape}
\begin{table}[p!]
\begin{adjustbox}{width=1.5\textwidth}
\begin{tabular}{lcccc}
\toprule
Description & Original Equations & \multicolumn{3}{c}{Model Equations} \\ \cmidrule{3-5}
&\citep{Roos.2022} & Social Network& Adaptive Behaviour & Endogenous Management \\
& & (Section \ref{subsec: network}) & (Section \ref{subsec: adaptive-behaviour}) & (Section \ref{subsec: end_management}) \\\hline \hline
19. \hyperlink{page.16}{ Management style} & $\Sigma = \{0,0.5,1\}$& — & — & $ \Sigma_{t} =
\left\{ \begin{array}{ll}
(1 + sui) \Sigma_{t-1} & \text{if}\; \frac{1}{x}\sum\limits_{t-x}^{t-1} \bar s_{t} > s_{t}^{max}\\
(1 - sui) \Sigma_{t-1} & \text{if}\; \frac{1}{x}\sum\limits_{t-x}^{t-1} \bar s_{t} \leq s_{t}^{max}\\
(1 - \frac{\bar O_{t}}{EGO_{t}}) sui & \text{if}\; \Sigma_{t-1} = 0 \; \wedge \; \frac{1}{x}\sum\limits_{t-x}^{t-1} \bar O_{t} < EGO_{t}
\end{array}
\right.$\\
20. \hyperlink{page.17}{PFP schemes} & $\lambda = \{0,0.5,1\}$ &— & — & $ \lambda_{t} =
\left \{ \begin{array}{ll}
(1 + sui) \lambda_{t-1} &\text{if}\; \frac{1}{x}\sum\limits_{t-x}^{t-1} \bar c_{t} < \kappa \cdot \alpha_{t} \\
(1 - sui) \lambda_{t-1} &\text{if}\; \frac{1}{x}\sum\limits_{t-x}^{t-1} \bar c_{t} > \kappa \cdot \alpha_{t} \\
\lambda_{t-1} & \text{otherwise} \\
\end{array} \right.$\\
21. \hyperlink{page.17}{Intensity of rewards} & $\mu = \{0,1\}$ & — & — & $\mu_{t} =
\left\{ \begin{array}{ll}
(1 + sui) \mu_{t-1} & \text{if}\; \frac{1}{x}\sum\limits_{t-x}^{t-1} \bar O_{t} < EGO_{t} \\
(1 - sui) \mu_{t-1} & \text{if}\; \frac{1}{x}\sum\limits_{t-x}^{t-1} \bar O_{t} > EGO_{t} \\
\mu_{t-1} & \text{otherwise} \\
\end{array}\right.$\\
22. \hyperlink{page.8}{Weighted absolute-deviation index (WADI)} & / & $AD_{i,j,t} = \frac{\sum_{k=1}^{A} \lvert a_{k,i,t} - a_{k,j,t} \rvert}{\tau}$ & — & —\\
23. \hyperlink{page.9}{Activity similarity} & / & $ AS_{i,j,t} = 1 - AD_{i,j,t}$ & — & —\\
24. \hyperlink{page.9}{Chance of interaction} & / & $d_{t} = \{ d_{i,j,t} \sim U(0, 1), \; \forall (i,j) \in N \}$ & — & — \\
25. \hyperlink{page.10}{Potential interactions check} & / & $
I_{i,t}^{pot} =
\{j \;|\; e_{i,j,t-1} > e_{i,k,t-1}, \forall \; j,k \in P_{i,t-1},\; j\neq k \} \cup
\{j \; | \; j \in_{R} N \setminus P_{i,t-1} \}$ & — & —\\
26. \hyperlink{page.10}{Set interacting agents} & / & $ I_{i,t} = \{ j \; | \; d_{i,j,t} \; < \; AS_{i,j,t}, j \neq i \}$ & — & —\\
27. \hyperlink{page.10}{Weight change agents' directed edge} & / & $\Delta e_{i,j,t} = \left\{ \begin{array}{ll}
AS_{i,j,t} & \text{if} \; j \in I_{i,t}\\
0 & \text{otherwise}
\end{array} \right.$ & — & — \\
28. \hyperlink{page.11}{Network connections} & / & $ e_{i,j,t} =
\left\{ \begin{array}{ll}
\frac{(t-1) \cdot e_{i,j,t-1} + \Delta e_{i,j,t}}{t} & \text{if}\; t \geq 1 \\
0 & \text{if}\; t = 0
\end{array} \right.$ & — & — \\
29. \hyperlink{page.13}{Base satisfaction levels} & / & / & $ S_{i}^{0} =
\left\{ \begin{array}{ll}
\Sigma & \text{if}\; i\;\in\; \text{C-type} \\
1 - \Sigma & \text{if}\; i\;\in\; \text{O-type} \\
0.5 + \mu (0.5 - \lambda) & \text{if}\; i\;\in\; \text{SE-type} \\
0.5 + \mu (\lambda - 0.5) & \text{if}\; i\;\in\; \text{ST-type}
\end{array} \right.$ & $\{\Sigma, \lambda, \mu \} \rightarrow \{\Sigma_{t}, \lambda_{t}, \mu_{t} \}$\\
30. \hyperlink{page.13}{Satisfaction adjustments} & / & / & $ S_{i,t} =
\left\{ \begin{array}{ll}
0.99 \; S_{i,t-1} & \text{if}\; S_{i,t-1} > S_{i}^{0} \\
1.01 \; S_{i,t-1} & \text{if}\; S_{i,t-1} < S_{i}^{0}
\end{array} \right.$ & —\\
31. \hyperlink{page.15}{Written warning scaling factor} & / & / & $ \beta_{i,t}=
\left\{ \begin{array}{lll}
1 - \frac{|WW_{i,t}|}{3} + \frac{|WW_{i,t}|}{3} \; \frac{t - x_{n}}{t} & \text{if}\; 0 \leq |WW_{i,t}| < 3\\
0.0 + 1.0 \; \frac{t - x_{n}}{t} & \text{otherwise}\\
\end{array} \right.$ & —\\
32. \hyperlink{page.16}{Maximum acceptable shirking} & / & / & $s^{max} = \sfrac{\tau}{10}$ & $ s_{t}^{max} = (1 - h) \; s_{t-1}^{max} + h \; \frac{1}{n} \sum_{i}^{ETC_{t}} s_{i,t}$\\
33. \hyperlink{page.15}{Observed shirking time} & / & / & / & $ \bar s_{t} = \frac{1}{n} \sum_{i}^{ETC_{t}} s_{i,t}$\\
34. \hyperlink{page.15}{Observed cooperative time} & / & / &/ & $ \bar c_{t} = \frac{1}{n} \sum_{i}^{ETC_{t}} c_{i,t}$\\
35. \hyperlink{page.16}{Expected group output} & / & / & / & $EGO_{t} = \left[\alpha_{t}(1-\kappa)\right]^{(1-\kappa)} \cdot (\alpha_{t}\kappa)^{\kappa}$\\
\bottomrule
\end{tabular}
\end{adjustbox}
\caption*{Table \ref{tab: list_eq} Continued: We use "/" to indicate a missing element, "—" no changes from the equation in the column to the left, and "$\rightarrow$" for changes of single variables.}
\end{table}
\end{landscape}
\setcounter{figure}{0}
\section{Additional figures} \label{sec: app-2}
\begin{figure}
\caption{Model stepping flowchart. Source: Authors' own illustration.}
\label{fig: flowchart}
\end{figure}
\end{appendices}
\end{document}
\begin{document}
\title{Haar type and Carleson Constants}
\author{Stefan Geiss \and
Paul F.X. M\"uller
\thanks{Research of both authors supported in part by FWF Pr. Nr. P150907-N01.}}
\maketitle
\begin{abstract}
For a collection ${\cal E} $ of dyadic intervals,
a Banach space $X$, and
$p\in (1,2]$ we assume the
upper $\ell^p$ estimates
\[ \left\| \sum _{I \in {\cal E}} x_I h_I/|I|^{1/p} \right\|^p_{L^p_X}
\le c^p \sum _{I \in {\cal E}} \| x_I\|_X^p ,\]
where $ x_I \in X $ and $h_I$ denotes the $L^\infty$ normalized Haar
function supported on $I$. We determine the minimal requirement
on the size of ${\cal E} $ so that these estimates imply that
$ X$ is of Haar type $p. $ The characterization is given in terms
of the Carleson constant of ${\cal E} . $
\end{abstract}
2000 Mathematics Subject Classification:
46B07,
46B20
\section{Introduction}
Let $X$ be a Banach space.
We fix a non-empty collection of dyadic intervals ${\cal E}$
and assume the upper $\ell^p$ estimates
\begin{equation}
\label{upper}
\left\| \sum _{I \in {\cal E}} x_I \frac{h_I}{|I|^{1/p}} \right\|_{L^p_X}
\le c \left ( \sum _{I \in {\cal E}} \| x_I\|_X^p \right )^\frac{1}{p}
\end{equation}
for finitely supported $(x_I)_{I\in{\cal E}}\subset X$ and some $p\in (1,2]$, where
$h_I$ is the $L^\infty$ normalized Haar function supported on $I$.
The consequences for $X$ that one may draw from \eqref{upper} depend
on the size and structure of the collection ${\cal E} . $
For instance, if ${\cal E} $ is a collection of pairwise disjoint dyadic
intervals, then any Banach space satisfies \eqref{upper}, hence it
does not impose any restriction on $X.$
If, on the other hand, we choose ${\cal E}$ to be the collection of {\it all}
dyadic intervals, then the upper $\ell ^p $ estimates \eqref{upper}
simply state that $X$ is of Haar type $p;$ due to important work of G. Pisier \cite{pis}
the latter condition is equivalent to certain renorming properties of
the Banach space $X$.
In this paper we ask how massive a collection
${\cal E}$ has to be so that \eqref{upper} implies that
$X$ is of Haar type $p$. We give the answer
to this question in terms of the Carleson
constant
defined by
\begin{equation}\label{eqn:Carleson_constant}
\lbrack\!\lbrack {\cal E} \rbrack\!\rbrack := \sup_{I\in {\cal E}} \frac{1}{|I|} \sum_
{ J \in {\cal E} ,\, J \subseteq I}
|J|.
\end{equation}
The proof is based on the following well-known dichotomy:
either ${\cal E} $ can be decomposed into finitely many collections
consisting of ``almost disjoint'' dyadic intervals
or ${\cal E} $ contains large and densely packed blocks of dyadic intervals,
with arbitrary high degree of condensation.
Initially we encountered the problem treated here in connection with
our efforts to obtain a vector valued version of E. M. Semenov's
characterization of bounded operators rearranging the
Haar system. See \cite{sem} and \cite{gm}.
\section{Preliminaries}
In the following we equip the unit interval $[0,1)$ with the Lebesgue measure
denoted by $|\cdot|$.
Let ${\cal D} $ denote the collection of dyadic intervals in $[0,1)$, i.e.
$I \in {\cal D}$ provided that there exist $m\ge 0$ and $1\le k\le 2^m$ such that
\[ I = [(k-1)/2^m , k/2^m), \]
and let
\[ {\cal D} _n := \{ I \in {\cal D} : |I| \ge 2^{-n} \}
\sptext{1}{where}{1}
n\ge 0. \]
The $L^\infty$ normalized Haar function supported on $I\in{\cal D}$
is denoted by $h_I$, i.e. $h_I = -1$ on the right half of $I$ and
$h_I = 1$ on the left half of $I .$
By $L_X^p$, $p\in [1,\infty)$, we denote the space of Radon random
variables $f:[0,1)\to X$ such that
\[ \|f\|_{L_X^p } = \left(\int_0^1\|f(t)\|_X ^p dt \right)^{1/p} < \infty. \]
\paragraph{Haar type.}
Given $p\in (1,2]$, a Banach space $X$ is of Haar type $p$
provided that there exists a constant
$c > 0 $ such that
\[ \left\| \sum _{I \in {\cal D}} x_I \frac{h_I}{|I|^{1/p}} \right\|_{L^p_X}
\le c \left ( \sum _{I \in {\cal D}} \| x_I\|_X^p\right )^\frac{1}{p} \]
for all finitely supported families
$(x_I)_{I \in {\cal D}}\subset X$. We let $HT_p(X)$ be the infimum
of all possible $c>0$ as above.
The central result concerning Haar type is due to G. Pisier \cite{pis} and
asserts that Haar type $p$ is equivalent to the fact that $X$ can be equivalently
renormed such that the new norm has a modulus of smoothness of power type
$p.$ For additional information see \cite{Deville_Godefroy_Zizler,piwe} and the
references therein.
\paragraph{Carleson Constants.}
Let ${\cal E} \subseteq {\cal D}$ be a non-empty collection of dyadic
intervals. The Carleson constant of ${\cal E}$ is given by
equation (\ref{eqn:Carleson_constant}).
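For orientation, two elementary examples may be kept in mind.
If ${\cal E}$ consists of pairwise disjoint dyadic intervals, then for each $I \in {\cal E}$
the only $J \in {\cal E}$ with $J \subseteq I$ is $I$ itself, so that
$\lbrack\!\lbrack {\cal E} \rbrack\!\rbrack = 1 .$
For ${\cal E} = {\cal D}_n$ the supremum in (\ref{eqn:Carleson_constant}) is attained at $I = [0,1)$ and
\[ \lbrack\!\lbrack {\cal D}_n \rbrack\!\rbrack
= \sum_{m=0}^{n} \; \sum_{J \in {\cal D},\, |J| = 2^{-m}} |J| = n+1 ,\]
hence $\lbrack\!\lbrack {\cal D} \rbrack\!\rbrack = \infty .$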
Next we define consecutive generations of ${\cal E} $ and, using
$ \lbrack\!\lbrack {\cal E} \rbrack\!\rbrack$, describe a dichotomy for ${\cal E} $ known as
the almost disjointification and condensation properties.
We define $G_0({\cal E})$ to be the maximal dyadic intervals
of ${\cal E}$ where {\em maximal} refers to inclusion.
Suppose we have already defined
$G_0({\cal E} ),\dots, G_k({\cal E} )$; then we form
\[ G_{k+1} ({\cal E}) := G_0( {\cal E}\setminus (G_0({\cal E} )\cup\dots \cup G_k({\cal E} )) ). \]
Given $I\in {\cal D}$, let $ I\cap {\cal E} =\{ J \in {\cal E} ,\, J \subseteq I\}$
and put
\[ G_k(I,{\cal E}) := G_k(I\cap {\cal E})
\sptext{1}{for}{1}
k\ge 1. \]
Assume that $\lbrack\!\lbrack {\cal E} \rbrack\!\rbrack < \infty$ and that $M$ is the largest integer
smaller than $4\lbrack\!\lbrack {\cal E} \rbrack\!\rbrack+1$. Then
\begin{equation}\label{eqn:decomposition_E}
{\cal E}_i := \bigcup_{k=0}^\infty G_{Mk + i} ({\cal E} ) ,\quad 0 \le i \le M-1,
\end{equation}
consists of almost disjoint dyadic intervals, in the sense that
for $ I \in {\cal E}_i ,$
\[ \sum_{J \in G_1(I,{\cal E}_i)} |J| \le \frac {|I|}{2}
\quad\text{and}\quad
\sum _{ K \in I \cap {\cal E}_i} |K| \le 2 |I| .
\]
Conversely, if $\lbrack\!\lbrack {\cal E} \rbrack\!\rbrack = \infty$, then for all $n \ge 1$ and $\varepsilon\in(0,1)$
there exists a $K_0 \in {\cal E} $ that is densely packed in the sense that
\[ \sum_{J \in G_n ( K_0 , {\cal E} )} |J| \ge (1- \varepsilon) |K_0| .\]
Based on this we show in
Lemma \ref{lemma:sufficient_condition_type}
that for any $n $ the linear span of $(h_I)_{I\in{\cal E}}$
contains a system of functions with the same joint distribution
as the first $2^n$ elements of the Haar basis.
The proof of this basic dichotomy and some of its applications can be found
in \cite[Chapter 3]{pfxm1}.
\section{Haar Type and Carleson Constants}
The main results of this note are Theorems \ref{main} and \ref{main_inverse}.
Combined they give an answer to the question as to which families
of dyadic intervals ${\cal E} $ will detect whether a Banach space $X$ has
Haar type $p .$ The answer is a dichotomy: either $\lbrack\!\lbrack{\cal E} \rbrack\!\rbrack <\infty ,$
then $ {\cal E} $ does not detect any Haar type of any Banach space,
or $\lbrack\!\lbrack{\cal E} \rbrack\!\rbrack =\infty ,$ then $ {\cal E} $ determines the Haar type
$p$ constant exactly, for any Banach space $X$ and each $ 1 < p \le 2 . $
\smallskip
\begin{theorem}\label{main}
Let $p\in (1,2]$ and ${\cal E}\subseteq {\cal D}$ be a non-empty collection of dyadic
intervals. Then the following statements are equivalent:
\begin{enumerate}[$(1)$]
\item
A Banach space $X$ is Haar type $p$ if there exists
$c>0$ such that for all finitely supported families
$(x_I)_{I\in{\cal D}} \subset X$ one has
\[ \left\| \sum _{I \in {\cal E}} x_I \frac{h_I}{|I|^{1/p}} \right\|_{L^p_X}
\le c \left (\sum _{I \in {\cal E}} \| x_I\|_X^p\right )^\frac{1}{p} ;\]
the infimum over all such $c> 0 $ coincides with $HT_p(X).$
\item $ \lbrack\!\lbrack {\cal E} \rbrack\!\rbrack = \infty. $
\end{enumerate}
\end{theorem}
\begin{theorem}\label{main_inverse}
Let $p\in (1,2]$, ${\cal E}\subseteq {\cal D}$ be a non-empty collection of dyadic
intervals, and $X$ be a Banach space which is not of Haar type $p$.
Then the following statements are equivalent:
\begin{enumerate}[$(1)$]
\item There exists a constant $c>0$ such that for all finitely supported
families $(x_I)_{I\in{\cal D}} \subset X$ one has
\[ \left\| \sum _{I \in {\cal E}} x_I \frac{h_I}{|I|^{1/p}} \right\|_{L^p_X}
\le c \left (\sum _{I \in {\cal E}} \| x_I\|_X^p\right )^\frac{1}{p}. \]
\item $ \lbrack\!\lbrack {\cal E} \rbrack\!\rbrack < \infty. $
\end{enumerate}
\end{theorem}
Theorem~\ref{main} and Theorem~\ref{main_inverse} follow immediately
from the following two lemmas (and the obvious fact that there are
Banach spaces without Haar type $p$ if $p\in (1,2]$).
\begin{lemma}
Let $p\in (1,\infty)$, $\lbrack\!\lbrack {\cal E} \rbrack\!\rbrack < \infty$, and $X$ be a Banach space.
Then there is a constant $c_p>0$, depending at most on $p$, such that
one has
\[ \left\| \sum _{I \in {\cal E}} x_I \frac{h_I}{|I|^{1/p}} \right\|_{L^p_X}
\le c_p \lbrack\!\lbrack {\cal E} \rbrack\!\rbrack^{1-\frac{1}{p}}
\left (\sum _{I \in {\cal E}} || x_I||_X^p\right )^{\frac{1}{p}} \]
for all finitely supported $(x_I)_{I\in{\cal E}} \subset X$.
\end{lemma}
\proof
Using (\ref{eqn:decomposition_E}), we write
$ {\cal E} = {\cal E}_0 \cup \dots \cup {\cal E}_{M-1} $ with
$ M < 4 \lbrack\!\lbrack {\cal E} \rbrack\!\rbrack+1$ such that the collections $\{ A_I : I \in {\cal E}_i \} $
of pairwise disjoint and measurable sets defined by
\[ A_I := I \setminus \bigcup_{ J \in G_1(I, {\cal E}_i) } J , \quad I \in {\cal E}_i, \]
satisfy
\[ \frac{1}{2} |I| \le |A_I| \le |I|. \]
Since
\begin{eqnarray*}
\left\| \sum_{ I \in {\cal E} } x_I \frac{h_I}{|I|^{1/p}}\right\|_{L_X^p}
&\le& \sum _{i = 0}^{M-1} \left \| \sum_{ I \in {\cal E}_i } x_I \frac{h_I}{|I|^{1/p}}
\right \|_{L_X^p} \\
&\le& M^{1-\frac{1}{p}} \left( \sum _{i = 0}^{M-1} \left \|
\sum_{ I \in {\cal E}_i } x_I \frac{h_I}{|I|^{1/p}}
\right \|_{L_X^p}^p \right)^{1/p}
\end{eqnarray*}
it is sufficient to prove
\[ \left\| \sum_{ I \in {\cal E}_i } x_I \frac{h_I}{|I|^{1/p}}\right\|_{L_X^p}
\le c_p \left ( \sum_{ I \in {\cal E}_i } \|x_I\|_X^p \right )^\frac{1}{p} \quad \text{for} \quad 0\le i\le M-1 .\]
But here we get that
\begin{eqnarray*}
& & \left \| \sum_{I\in{\cal E}_i} x_I \frac{h_I}{|I|^\frac{1}{p}} \right\|_{L_X^p}\\
& = & \left ( \sum_{K\in{\cal E}_i} \int_{A_K}
\left \| \sum_{I\in{\cal E}_i} x_I \frac{h_I(t)}{|I|^\frac{1}{p}}
\right \|_X^p dt \right )^\frac{1}{p} \\
& = & \left ( \sum_{K\in{\cal E}_i} \frac{|A_K|}{|K|}\int_{A_K}
\left \| \sum_{I\in{\cal E}_i} x_I \left (\frac{|K|}{|I|}\right )^\frac{1}{p} h_I(t)
\right \|_X^p \frac{dt}{|A_K|} \right )^\frac{1}{p} \\
&\le& \left ( \sum_{K\in{\cal E}_i} \int_{A_K}
\left \| \sum_{I\in{\cal E}_i} x_I \left (\frac{|K|}{|I|}\right )^\frac{1}{p} h_I(t)
\right \|_X^p \frac{dt}{|A_K|} \right )^\frac{1}{p} \\
& = & \left ( \sum_{K\in{\cal E}_i} \int_{A_K}
\left \| \sum_{K\subseteq I\in{\cal E}_i} x_I \left (\frac{|K|}{|I|}\right )^\frac{1}{p} h_I(t)
\right \|_X^p \frac{dt}{|A_K|} \right )^\frac{1}{p} \\
& = & \left ( \sum_{K\in{\cal E}_i} \int_{A_K}
\left \| \sum_{l=0}^{n(K)} x_{G_{-l}(K,{\cal E}_i)}
\left (\frac{|K|}{|G_{-l}(K,{\cal E}_i)|}\right )^\frac{1}{p} h_{G_{-l}(K,{\cal E}_i)}(t)
\right \|_X^p \frac{dt}{|A_K|} \right )^\frac{1}{p}\\
& = & \left ( \sum_{K\in{\cal E}_i} \int_{A_K}
\left \| \sum_{l=0}^\infty x_{G_{-l}(K,{\cal E}_i)} \chi_{\{l\le n(K)\}}
\left (\frac{|K|}{|G_{-l}(K,{\cal E}_i)|}\right )^\frac{1}{p} h_{G_{-l}(K,{\cal E}_i)}(t)
\right \|_X^p
\right . \\
& & \left . \hspace*{26em}
\frac{dt}{|A_K|} \right )^\frac{1}{p}
\end{eqnarray*}
where $G_{-l}(K,{\cal E}_i)$ form the maximal sequence of dyadic
intervals from ${\cal E}_i$ such that
\[ K=G_0 (K,{\cal E}_i)
\subset G_{-1}(K,{\cal E}_i)
\cdots
\subset G_{-n(K)}(K,{\cal E}_i) \]
and $G_{-n(K)}(K,{\cal E}_i)$ is the unique maximal interval in ${\cal E}_i$ containing $K$. Next we obtain an upper estimate for the last expression as follows:
\begin{eqnarray*}
& & \hspace*{-2.5em}
\sum_{l=0}^\infty \left ( \sum_{K\in{\cal E}_i} \int_{A_K}
\left \| x_{G_{-l}(K,{\cal E}_i)} \chi_{\{l\le n(K)\}}
\left (\frac{|K|}{|G_{-l}(K,{\cal E}_i)|}\right )^\frac{1}{p} h_{G_{-l}(K,{\cal E}_i)}(t)
\right \|_X^p \frac{dt}{|A_K|} \right )^\frac{1}{p} \\
& = & \sum_{l=0}^\infty \left ( \sum_{K\in{\cal E}_i}
\left \| x_{G_{-l}(K,{\cal E}_i)} \chi_{\{l\le n(K)\}} \right \|_X^p
\frac{|K|}{|G_{-l}(K,{\cal E}_i)|}
\right )^\frac{1}{p}\\
& = & \sum_{l=0}^\infty \left ( \sum_{I,K\in{\cal E}_i \atop G_{-l}(K,{\cal E}_i)=I}
\left \| x_I \right \|_X^p
\frac{|K|}{|I|}
\right )^\frac{1}{p}\\
& = & \sum_{l=0}^\infty \left ( \sum_{I\in{\cal E}_i}
\left \| x_I \right \|_X^p
\frac{\sum_{K\in{\cal E}_i \atop G_{-l}(K,{\cal E}_i)=I} |K|}{|I|}
\right )^\frac{1}{p}\\
&\le& \sum_{l=0}^\infty \left ( \sum_{I\in{\cal E}_i}
\left \| x_I \right \|_X^p 2^{-l}
\right )^\frac{1}{p}\\
& = & \left ( \sum_{l=0}^\infty 2^{-\frac{l}{p}} \right )
\left ( \sum_{I\in{\cal E}_i} \left \| x_I \right \|_X^p
\right )^\frac{1}{p}.
\end{eqnarray*}
\par\nobreak\hbox to \hsize{\hfil\vrule width 5pt height 5pt}\goodbreak\vskip 3pt
Next we turn to the case $ \lbrack\!\lbrack {\cal E} \rbrack\!\rbrack = \infty$ for
which it is known that the Gamlen-Gaudet construction yields an approximation
of the Haar system by appropriate `blocks' of $(h_I)_{I\in {\cal E}}$.
The next lemma demonstrates that this construction fits perfectly with our
Haar type inequalities. We carefully avoid using
the unconditionality of the Haar system (and therefore the UMD property
of Banach spaces) and exhibit a system of functions
with exactly the same joint distribution as the usual Haar basis,
rather than allowing the measures of the supports of a Haar function and its
approximation to be related only by uniformly bounded multiplicative constants.
\begin{lemma}\label{lemma:sufficient_condition_type}
Let $X$ be a Banach space, $p\in (1,2]$,
and ${\cal E} $ be a collection of dyadic intervals such that
\[ \lbrack\!\lbrack {\cal E} \rbrack\!\rbrack = \infty. \]
If there is a constant $c>0$ such that
\begin{equation}\label{eqn:E_type}
\left\| \sum _{I \in {\cal E}} x_I \frac{h_I}{|I|^{1/p}} \right\|_{L^p_X}
\le c \left ( \sum _{I \in {\cal E}} \| x_I\|_X^p \right )^\frac{1}{p}
\end{equation}
for all finitely supported families $(x_I)_{I\in{\cal E}} \subset X$,
then $X$ is of Haar type $p$ with $HT_p(X)\le c$.
\end{lemma}
\begin{remark}
In Lemma \ref{lemma:sufficient_condition_type} the range $p\in (2,\infty)$
does not make sense since already $X={\bbb R}$ does not have
Rademacher type $p\in (2,\infty)$ and hence does not have Haar type $p\in (2,\infty)$.
In other words, for $\lbrack\!\lbrack {\cal E} \rbrack\!\rbrack = \infty$ and $p\in (2,\infty)$
the inequality {\rm (\ref{eqn:E_type})} fails to be true.
\end{remark}
\goodbreak\noindent{\sc Proof of }\nobreak{Lemma \ref{lemma:sufficient_condition_type}.}
Let $n \ge 1,$ $\delta\in (0,1)$, and
$\varepsilon= 2^{-n-1}\delta.$
Since $ \lbrack\!\lbrack {\cal E} \rbrack\!\rbrack = \infty$ the condensation property
(cf. \cite[Lemma 3.1.4]{pfxm1}) yields
the existence of a $K_0 \in {\cal E} $ such that
\[ \sum_{J \in G_n ( K_0 , {\cal E} )} |J| \ge (1-\varepsilon) |K_0| . \]
Examining the Gamlen-Gaudet construction \cite{gg} as (for example)
presented in \cite[Proposition 3.1.6]{pfxm1}, we obtain a family
$({\cal B}_I)_{I\in{\cal D}_n}$ of collections of dyadic intervals such that
\begin{enumerate}[(i)]
\item ${\cal B}_I \subseteq K_0 \cap {\cal E}$,
\item the elements of ${\cal B}_I$ are pairwise disjoint,
\item for $B_I := \bigcup_{K \in {\cal B}_I} K$ one has that $B_I\cap B_J=\emptyset$
if and only if $I\cap J=\emptyset$, and
$B_I\subseteq B_J$ if and only if $I\subseteq J$,
\item for
\[ k_I := \sum_{K\in {\cal B}_I} h_K \]
and $I,I^-,I^+\in{\cal D}_n$ such that $I^-$ is the right half of $I$ and
$I^+$ the left half of $I$, one has
$B_{I^-}\subseteq \{ k_I = -1 \}$ and
$B_{I^+}\subseteq \{ k_I = 1 \}$,
\item for $0\le k \le n$ and $|I|=2^{-k}$ one has
\[ \frac{|K_0|}{2^k} - 2 \varepsilon |K_0| \le |B_I| \le \frac{|K_0|}{2^k} .\]
\end{enumerate}
As a consequence
\[ \frac{1-\delta}{2^n} |K_0| \le |B_I| \le \frac{|K_0|}{2^n}
\sptext{1}{for}{1}
|I|=2^{-n} \]
and
\[ (1-\delta) |K_0| \le \sum_{|I|=2^{-n}} |B_I| \le |K_0|.\]
Choose measurable subsets $A_I \subseteq B_I$ for $|I|=2^{-n}$ such that
\begin{enumerate}[(a)]
\item $|A_I|= (1-\delta) 2^{-n} |K_0|$,
\item the $k_I$ restricted to $A_I$ are symmetric.
\end{enumerate}
Let $S:= \bigcup_{|I|=2^{-n}} A_I$, so that $|S|=(1-\delta)|K_0|$, and
\[ A_I := B_I \cap S
\sptext{1}{for all (remaining)}{1}
I\in{\cal D}_n. \]
When restricted to the probability space $ (S,\frac{dt}{|S|}) $
the system $(k_I)_{I\in{\cal D}_n}$ has the same joint
distribution as the usual Haar system $(h_I)_{I\in{\cal D}_n}$ on the unit interval.
Hence, as a consequence of the Gamlen-Gaudet construction,
we obtain that
\begin{eqnarray*}
\left \| \sum_{I\in{\cal D}_n} \frac{h_I}{|I|^{1/p}} x_I \right \|_{L_X^p}
& = & \left \| \sum_{I\in{\cal D}_n} \frac{k_I}{|I|^{1/p}} x_I
\right \|_{L_X^p\left (S,\frac{dt}{|S|}\right ) } \\
& = & \left ( \frac{1}{|S|} \right )^\frac{1}{p}
\left \| \sum_{I\in{\cal D}_n} \frac{k_I}{|I|^{1/p}} x_I \right \|_{L_X^p(S,dt)} \\
&\le& \left ( \frac{1}{|S|} \right )^\frac{1}{p}
\left \| \sum _{ I \in {\cal D} _n}
\sum_{K\in {\cal B}_I } \frac{h_K}{|K|^{1/p} }
\left( \frac{|K|^{1/p}}{|I|^{1/p} } x_I \right) \right \|_{L_X^p}.
\end{eqnarray*}
Recall that we selected the collection ${\cal B}_I$ as a sub-collection of ${\cal E}$.
Using our hypothesis concerning ${\cal E} $ and $X$, we obtain an upper estimate
for the last term as follows,
\begin{eqnarray*}
c \left( \frac{1}{|S|}\right )^\frac{1}{p}
\left ( \sum _{ I \in {\cal D} _n} \sum_{K\in {\cal B}_I }
\frac{|K|}{|I|} \|x_I\|^p \right )^\frac{1}{p}
& = & c \left ( \sum _{ I \in {\cal D} _n}
\frac{|B_I|}{|I| |S|} \|x_I\|^p \right )^\frac{1}{p} \\
& = & c \left ( \sum _{ I \in {\cal D} _n}
\frac{|B_I|}{|I| (1-\delta)|K_0|} \|x_I\|^p \right )^\frac{1}{p} \\
&\le& \frac{c}{(1-\delta)^\frac{1}{p}}
\left ( \sum _{ I \in {\cal D} _n} \|x_I\|^p \right )^\frac{1}{p}.
\end{eqnarray*}
Letting $\delta \downarrow 0$ yields our statement.
\par\nobreak\hbox to \hsize{\hfil\vrule width 5pt height 5pt}\goodbreak\vskip 3pt
\textbf{Addresses}
\parindent0em
Department of Mathematics and Statistics \\
P.O. Box 35 (MaD) \\
FIN-40014 University of Jyv\"askyl\"a \\
Finland
Department of Analysis\\
J. Kepler University\\
A-4040 Linz\\
Austria
\end{document}
|
\begin{document}
\title{A note on the stability of trinomials over finite fields}
\author[O. Ahmadi]{Omran Ahmadi}
\address{Institute for Research in Fundamental Sciences, Iran}
\email{[email protected]}
\author[K. Monsef-Shokri]{Khosro Monsef-Shokri}
\address{School of Mathematical Science, Shahid Beheshti University,
Iran}
\email{k\[email protected]}
\date{August 16, 2018.}
\begin{abstract}
A polynomial $f(x)$ over a field $K$ is called stable if all of its iterates are irreducible over $K$. In this paper we study the stability of trinomials over finite fields. In particular, we show that if $f(x)$ is a trinomial of even degree over the binary field ${\FF_2}$, then $f(x)$ is not stable. We prove a similar result for some families of monic trinomials over finite fields of odd characteristic. These results are obtained towards the resolution of a conjecture on the instability of polynomials over finite fields whose degrees are divisible by the characteristic of the underlying field.
\end{abstract}
\maketitle
\let\thefootnote\relax\footnotetext{ Mathematics Subject Classification: 11T06, 12E20\\
Keywords: Polynomials, Iterations, Stability, Finite fields\nonumber}
Let $K$ be a field, and let $K[x]$ denote the polynomial ring over $K$. For a polynomial $f(x)\in K[x]$, its $n$-th iterate for $n\ge 0$ is defined inductively by the following equations
\[
f^{(0)}(x)=x, \qquad f^{(n)}(x)=f(f^{(n-1)}(x)).
\]
A polynomial $f(x)\in K[x]$ is called stable if $f^{(n)}(x)$ is irreducible over $K$ for every $n\ge 1$. The stability of polynomials has attracted the attention of many researchers (see for example~\cite{AOLS,Nidal,Ayad,DNOS,Goksel,Jones-Boston,Odoni}). In this paper we are interested in the stability of polynomials over finite fields. Let $\FF_p$ denote the finite field with $p$ elements, where $p$ is a prime number. This paper has been inspired by a question raised by Domingo Gomez-Perez~\cite{Domingo}, who, based on some computations, asked whether it is true that if the degree of the polynomial $f(x)\in\FF_p[x]$ is divisible by the prime number $p$, then the $(p+1)$-th iterate of $f(x)$, i.e., $f^{(p+1)}(x)$, is not an irreducible polynomial over $\FF_p$. Our computations show that the answer to his question is negative: if $f(x)=x^{10} + x^9 + x^6 + x^5 + x^4 + x^3 + 1$, then $f(x)$, $ff(x)$ and $fff(x)$ are irreducible while $f^{(4)}(x)$ is not irreducible over ${\FF_2}$. Though the answer to Gomez-Perez's question turned out to be negative, based on our computations we make the following conjecture.
\begin{conj}\label{main-conj}
If the degree of the polynomial $f(x)\in\FF_p[x]$ is divisible by the prime number $p$, then $f(x)$ is not stable over $\FF_p$.
\end{conj}
The conjecture is trivial when $f(x)$ is a binomial since if $f(x)=x^{pl}+a$, then $f(x)=(x^l+a)^p$.
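For instance, over $\FF_3$ one has $x^6+2=(x^2+2)^3$.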
In his pioneering work on the stability of polynomials~\cite{Odoni}, Odoni showed that the additive polynomial $f(x)=x^p-x-1$ is not stable over the finite field $\FF_p$. In~\cite{AOLS}, it was shown that there is no stable quadratic polynomial over fields of characteristic two. In~\cite{DNOS}, it was shown that certain cubic polynomials which are trinomials, i.e., polynomials with three nonzero terms, are not stable. All these cases can be considered as special cases for which Conjecture~\ref{main-conj} is true. Considering these cases, it seems that the next natural step towards the proof of the conjecture would be to confirm it for trinomials. In this paper, we study the stability of monic trinomials over finite fields and confirm Conjecture~\ref{main-conj} for all trinomials over the finite field ${\FF_2}$ and some families of trinomials over finite fields of odd characteristic. We also present some results dealing with polynomials of higher weight.
This paper is organized as follows. In Section~\ref{Prem}, we gather some preliminary results. In Section 2, we prove our main results about trinomials over finite fields. In Section 3, we present some results on the stability of polynomials of higher weight. Finally, Section 4 contains our concluding remarks.
\section{Preliminaries}\label{Prem}
In this section we gather and prove some results which will be used in the rest
of the paper to establish our main results.
\subsection{Newton Identities} Let $L$ be a field, and let $x_1,x_2,\ldots,x_m\in L$.
We denote by $e_k(x_1,x_2,\ldots,x_m)$ and $p_k(x_1,x_2,\ldots,x_m)$ the $k$-th elementary symmetric polynomial and the $k$-th power sum in $x_1,x_2,\ldots,x_m$, respectively, i.e.,
\[
e_k(x_1,x_2,\ldots,x_m)=\sum_{1\le i_1<i_2<\ldots<i_k\le m}x_{i_1}x_{i_2}\ldots x_{i_k},
\]
\[
p_k(x_1,x_2,\ldots,x_m)=\sum_{i=1}^{m}x_i^k.
\]
If $L$ is of characteristic zero and we let $p_0(x_1,x_2,\ldots,x_m)=m$, then for $k\ge 1$ and $l=\min(k,m)$ we have
\[
p_k-p_{k-1}e_1+p_{k-2}e_2+\cdots+(-1)^{l-1}p_{k-l+1}e_{l-1}+(-1)^l\frac{l}{m}p_{k-l}e_l=0.
\]
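For instance, taking $k=2\le m$ gives $p_2-p_1e_1+2e_2=0$; since $e_1=p_1$, this amounts to $e_2=\frac{1}{2}\left(p_1^2-p_2\right)$, which is the form of the identity used repeatedly below (applied to the powers $\beta_{ij}^s$ of the roots).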
\subsection{Polynomial transformation of irreducible polynomials}
The following lemma is known as Capelli's lemma and can also be found
in~\cite{Cohen-1}.
\begin{lemma}\label{cohen}
Let $f(x)$ be a degree $n$ irreducible polynomial over $\mathbb{F}_q$, and
let $g(x), h(x)\in \mathbb{F}_q[x]$. Then $p(x)=h(x)^nf(g(x)/h(x))$ is irreducible over
$\mathbb{F}_q$ if and only if for some root $\alpha$ of $f(x)$ in
$\mathbb{F}_{q^n}$, $g(x)-\alpha h(x)$ is an irreducible polynomial over
$\mathbb{F}_{q^n}$.
\end{lemma}
\subsection{Parity of the number of the irreducible factors of a polynomial over
finite fields}
We begin this section by recalling the {\it Discriminant} and the
{\it Resultant} of polynomials over a field. For a more detailed treatment
see \cite[Ch. 1, pp. 35-37]{LN}.
Let $K$ be a field, and let $F(x) \in K[x]$ be a polynomial of
degree $s\geq 2$ with leading coefficient $a$. Then the {\it
Discriminant}, $\operatorname{Disc}(F)$, of $F(x)$ is defined by
\[
\operatorname{Disc}(F) = a^{2s-2} \prod_{i<j} (x_i - x_j)^2 \; ,
\]
where $x_0,x_1,\ldots,x_{s-1}$ are the roots of $F(x)$ in some
extension of $K$. Although $\operatorname{Disc}(F)$ is defined in terms of the
elements of an extension of $K$, it is actually an element of $K$
itself. There is an alternative formulation of $\operatorname{Disc}(F)$, given
below, which is very helpful for the computation of the
discriminant of a polynomial.
Let $G(x) \in K[x]$ and suppose $F(x)= a \prod_{i=0}^{s-1}(x-x_i)$
and $G(x) = b \prod_{j=0}^{t-1}(x-y_j)$, where $a,b \in K$ and
$x_0,x_1,\ldots,x_{s-1}$, $y_0,y_1,\ldots,y_{t-1}$ are in some
extension of $K$. Then the {\it Resultant}, $\operatorname{Res}(F,G)$, of $F(x)$
and $G(x)$ is
\begin{equation}\label{roots}
\operatorname{Res}(F,G) = (-1)^{st} b^s \prod_{j=0}^{t-1} F(y_j)
= a^t \prod_{i=0}^{s-1} G(x_i) \; .
\end{equation}
The following statements are immediate from the definition of the
resultant of two polynomials.
\begin{cor}\label{cor1}
If $F$ is as above, and $G_1,G_2,R\in K[x]$, then
\begin{itemize}
\item[(i)] $\operatorname{Res}(F,-x)=F(0)$.
\item[(ii)] $\operatorname{Res}(F,G_1G_2)=\operatorname{Res}(F,G_1)\operatorname{Res}(F,G_2)$.
\end{itemize}
\end{cor}
\begin{cor} If $F$ is as above, and $F'\in K[x]$ is the
derivative of $F$, then
\begin{equation}\label{ff'}
{\operatorname{Disc}}(F)=(-1)^{\frac{s(s-1)}{2}}a^{s-l-2} \operatorname{Res}(F,F'),
\end{equation}
where $l$ is the degree of $F'$. Notice that if $K$ is of positive characteristic, then we may have $l<s-1$.
\end{cor}
\begin{lemma}\label{Res-Disc}
Let $u(x)$ and $v(x)\in K[x]$ be of degrees $m$ and $n$ and leading coefficients $a$ and $b$, respectively. Furthermore, suppose that the derivative of $u(x)$, i.e. $u'(x)$, is of degree $l$. Then
\[
\operatorname{Res}(u(v(x)),u'(v(x)))=[(-1)^{\frac{m(m-1)}{2}}a^{-m+l+2}b^{ml}]^n\operatorname{Disc}(u(x))^n.
\]
\end{lemma}
\begin{proof}
Let $\alpha_1,\alpha_2,\ldots,\alpha_{m}$ be the roots of $u(x)=0$ in some extension of $K$. Then the roots of $uv(x)$ are the collection of the roots of the equations $v(x)=\alpha_i$ for $i=1,2,\ldots,m$. We denote by $\beta_{ij}$, $j=1,2,\ldots,n$, the $n$ roots of $v(x)=\alpha_i$. So the roots of $uv(x)=0$ are $\beta_{ij}$ for $1\le i\le m$ and $1\le j\le n$. Since $u'(x)$ is of degree $l$, $u'(v(x))$ is of degree $nl$. Thus using~\eqref{roots} and the fact that $v(\beta_{ij})=\alpha_i$ for $1\le j\le n$ we have
\begin{eqnarray}
\operatorname{Res}(u(v(x)),u'(v(x)))&=&(ab^{m})^{nl}\prod_{i=1}^{m}\prod_{j=1}^{n}u'(v(\beta_{ij}))=(ab^{m})^{nl}\prod_{i=1}^{m}u'(\alpha_i)^n\nonumber\\&=&(b^{m})^{nl}(a^l\prod_{i=1}^{m}u'(\alpha_i))^n=(b^{m})^{nl}[\operatorname{Res}(u(x),u'(x))]^n\nonumber\\&=&[(-1)^{\frac{m(m-1)}{2}}a^{-m+l+2}b^{ml}]^n\operatorname{Disc}(u(x))^n.
\end{eqnarray}
\end{proof}
The following results are our main tools for
determining the parity of the number of irreducible factors of a
polynomial over a finite field.
\begin{theorem}
\cite{Pellet,Stickelberger} \label{thm-Stickel} Suppose that the
$s$-degree polynomial $f(x)\in {\FF_q}[x]$, where $q$ is an odd
prime power, is the product of $r$ pairwise distinct irreducible
polynomials over ${\FF_q}[x]$. Then $r\equiv s\pmod{2}$ if and only
if $\operatorname{Disc}(f)$ is a square in ${\FF_q}$.
\end{theorem}
\begin{theorem}
\cite{Dalen,Swan} \label{thm-Stickelberger} Suppose that
the $s$-degree polynomial $f(x)\in {\FF_2}[x]$ is the product of
$r$ pairwise distinct irreducible polynomials over ${\FF_2}[x]$ and,
let $F(x)\in {\mathbb Z}[x]$ be any monic lift of $f(x)$ to the
integers. Then $\operatorname{Disc}(F)\equiv 1$ or $5\pmod 8$, and more
importantly, $r\equiv s\pmod{2}$ if and only if $\operatorname{Disc}(F) \equiv 1 \pmod{8}$.
\end{theorem}
If $s$ is even and $\operatorname{Disc}(F)\equiv 1 \pmod{8}$, then
Theorem~\ref{thm-Stickelberger} asserts that $f(x)$ has an even
number of irreducible factors and therefore is reducible over
${\FF_2}[x]$. Thus one can find necessary conditions for the
irreducibility of $f(x)$ by computing $\operatorname{Disc}(F)$ modulo 8.
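The following short SymPy sketch (our own illustration, not part of the computations reported in this paper) shows how this criterion can be checked in practice; it compares the discriminant of a monic integer lift modulo $8$ with the number of distinct irreducible factors modulo $2$, for the polynomial $x^2+x+1$ and its second iterate, which appear again in the proof of Theorem~\ref{trim-even} below.
\begin{verbatim}
# Illustrative check (not the authors' code) of the parity criterion with SymPy.
from sympy import symbols, Poly, discriminant

x = symbols('x')
F = Poly(x**2 + x + 1, x)                         # monic lift of f = x^2+x+1
FF = Poly(F.as_expr().subs(x, F.as_expr()), x)    # monic lift of f(f(x))

for G in (F, FF):
    d = discriminant(G.as_expr(), x) % 8
    # number of distinct irreducible factors of G modulo 2
    r = len(Poly(G.as_expr(), x, modulus=2).factor_list()[1])
    s = G.degree()
    print(G.as_expr(), '| Disc mod 8:', d, '| r:', r, '| s:', s,
          '| r = s (mod 2):', (r - s) % 2 == 0, '| Disc = 1 (mod 8):', d == 1)
\end{verbatim}
Since both reductions are irreducible over ${\FF_2}$, the theorem predicts a discriminant congruent to $5$ modulo $8$ in each case, and the two printed equivalences should agree line by line.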
\section{Stability of trinomials over finite fields}
\subsection{Trinomials over the binary fields}
In this section we prove our main result about the instability of trinomials over the binary field ${\FF_2}$. First we prove some results about the composition of arbitrary polynomials with trinomials.
\subsubsection{Composition of polynomials with trinomials}
\begin{lemma}\label{comp-trinomial}
Let $f(x)=x^{2n}+x^{2n-s}+1,g(x)=x^{2m}+\sum_{i=0}^{2m-1}a_ix^i$ be two polynomials with integer coefficients such that both $s$ and $g(1)$ are odd numbers. Furthermore suppose that $a_{2m-1}$ is an even number whenever $n=s$. Then
\[
\operatorname{Res}(gf(x), f'(x))\equiv 1 \pmod 8.
\]
\end{lemma}
\begin{proof}
Let $R=\operatorname{Res}(gf(x), f'(x))$. We have
\[
f'(x)=2nx^{2n-1}+(2n-s)x^{2n-s-1}=x^{2n-s-1}(2nx^s+2n-s).
\]
Thus using Corollary~\ref{cor1}, we have
\begin{eqnarray}
R&=&\operatorname{Res}(gf(x),x^{2n-s-1})\operatorname{Res}(gf(x),2nx^s+2n-s)\\\nonumber&=&(gf(0))^{2n-s-1}\operatorname{Res}(gf(x),2nx^s+2n-s)\\\nonumber&=&(g(1))^{2n-s-1}\operatorname{Res}(gf(x),2nx^s+2n-s)\nonumber,
\end{eqnarray}
and hence since both $s$ and $g(1)$ are odd numbers we get
\begin{equation}\label{modular}
R\equiv\operatorname{Res}(gf(x),2nx^s+2n-s) \pmod 8.
\end{equation}
Now let $\alpha_1,\alpha_2,\ldots,\alpha_{2m}$ be the roots of $g(x)=0$ in some extension of the rational numbers. Then the roots of $gf(x)$ are the union of the roots of the equations $f(x)=\alpha_i$ for $i=1,2,\ldots,2m$. We denote by $\beta_{ij}$, $j=1,2,\ldots,2n$, the $2n$ roots of $f(x)=\alpha_i$. So the roots of $gf(x)=0$ are $\beta_{ij}$ for $1\le i\le 2m$ and $1\le j\le 2n$. Using Newton's identities, for each $i$, $1\le i\le 2m$, we have:
\begin{equation}\label{mono-s-power}
p_s(\beta_{i1},\beta_{i2},\ldots,\beta_{i(2n)})=\sum_{j=1}^{2n}\beta_{ij}^{s}=-s,
\end{equation}
\begin{equation}\label{2s-power}
p_{2s}(\beta_{i1},\beta_{i2},\cdots,\beta_{i(2n)})=\sum_{j=1}^{2n}\beta_{ij}^{2s}=\left\{
\begin{array}{cl}
s, & \;\;\;\mbox{if } s<n, \\
2n\alpha_i-n, & \;\;\;\mbox{if } s=n, \\
s, & \;\;\;\mbox{if } n<s<2n, \\
\end{array}
\right.
\end{equation}
and hence
\begin{equation}\label{s-power}
p_s(\beta_{11},\beta_{12},\ldots,\beta_{(2m)(2n)})=\sum_{i=1}^{2m}p_s(\beta_{i1},\beta_{i2},\ldots,\beta_{i(2n)})=-2ms.
\end{equation}
From~\eqref{roots} and \eqref{modular} it follows that
\begin{eqnarray}\nonumber
R\equiv\prod_{i=1}^{2m}\prod_{j=1}^{2n}(2n\beta_{ij}^s+2n-s)=(2n-s)^{4mn}+(2n-s)^{4mn-1}(2n)p_s(\beta_{11},\beta_{12},\ldots,\beta_{(2m)(2n)})\\+(2n-s)^{4mn-2}(2n)^2e_2(\beta_{11}^s,\beta_{12}^s,\ldots,\beta_{(2m)(2n)}^s)\nonumber\\+(2n)^3S(\beta_{11},\beta_{12},\ldots,\beta_{(2m)(2n)})\pmod 8,\nonumber
\end{eqnarray}
where $S(\beta_{11},\beta_{12},\ldots,\beta_{(2m)(2n)})$ is a symmetric polynomial with integer coefficients in the roots of $gf(x)=0$ and hence an integer; it collects all terms of the expansion containing $(2n)^j$ with $j\ge 3$, each of which is divisible by $8$. Thus using~\eqref{s-power} and the fact that $s$ is an odd number we get
\[
R\equiv 1-(2n-s)(4mn)s+(2n)^2e_2(\beta_{11}^s,\beta_{12}^s,\ldots,\beta_{(2m)(2n)}^s)\pmod 8.
\]
Now let $q_{si}=p_s(\beta_{i1},\beta_{i2},\cdots,\beta_{i(2n)})$ and $r_{is}=e_2(\beta_{i1}^s,\beta_{i2}^s,\cdots,\beta_{i(2n)}^s)$. Then
\[
e_2(\beta_{11}^s,\beta_{12}^s,\ldots,\beta_{(2m)(2n)}^s)=e_2(q_{s1},q_{s2},\cdots,q_{s(2m)})+\sum_{i=1}^{2m}e_2(\beta_{i1}^s,\beta_{i2}^s,\cdots,\beta_{i(2n)}^s).
\]
By~\eqref{mono-s-power} for each $i$ we have $q_{si}=-s$ which yields
\[
e_2(q_{s1},q_{s2},\cdots,q_{s(2m)})=\binom{2m}{2}s^2.
\]
Also we have
\[
e_2(\beta_{i1}^s,\beta_{i2}^s,\cdots,\beta_{i(2n)}^s)=\frac{1}{2}[p_s(\beta_{i1},\beta_{i2},\cdots,\beta_{i(2n)})^2-p_{2s}(\beta_{i1},\beta_{i2},\cdots,\beta_{i(2n)})],
\]
and hence using~\eqref{2s-power}
\begin{equation}\label{e-sum}
e_2(\beta_{i1}^s,\beta_{i2}^s,\cdots,\beta_{i(2n)}^s)=\left\{
\begin{array}{cl}
\frac{1}{2}(s^2-s), & \;\;\;\mbox{if } s<n, \\
\frac{1}{2}(n^2-2n\alpha_i+n), & \;\;\;\mbox{if } s=n, \\
\frac{1}{2}(s^2-s), & \;\;\;\mbox{if } n<s<2n. \\
\end{array}
\right.
\end{equation}
From the above equations we get that if $s\neq n$, then
\[
e_2(\beta_{11}^s,\beta_{12}^s,\ldots,\beta_{(2m)(2n)}^s)=\binom{2m}{2}s^2+(2m)\frac{1}{2}(s^2-s)=m(2m-1)s^2+m(s^2-s)
\]
from which we get that
\[
R\equiv 1-(2n-s)(4mn)s+(2n)^2(m(2m-1)s^2+m(s^2-s))\pmod 8
\]
if $n\neq s$. It is easy to see that in this case $R\equiv 1\pmod 8$. Now let $n=s$. Then
\[
e_2(\beta_{11}^s,\beta_{12}^s,\ldots,\beta_{(2m)(2n)}^s)=\binom{2m}{2}n^2+\sum_{i=1}^{2m}\frac{1}{2}(n^2-2n\alpha_i+n)=\binom{2m}{2}n^2+m(n^2+n)-n\sum_{i=1}^{2m}\alpha_i.
\]
Thus
\[
R\equiv 1-4mn^3+4n^2[m(2m-1)n^2+m(n^2+n)+na_{2m-1}]\pmod 8.
\]
Since $s=n$ is an odd number and, by assumption, $a_{2m-1}$ is even in this case, we deduce that
\[
R\equiv 1+4a_{2m-1}\equiv 1 \pmod 8.
\]
\end{proof}
\begin{cor}\label{Disc-2n}
Let $f(x)=x^{2n}+x^{2n-s}+1,g(x)=x^{2m}+\sum_{i=0}^{2m-1}a_ix^i$ be two polynomials with integer coefficients such that both $s$ and $g(1)$ are odd numbers. Furthermore suppose that $a_{2m-1}$ is an even number whenever $n=s$. Then
\[
\operatorname{Disc}(g(f(x)))\equiv \operatorname{Disc}(g(x))^{2n} \pmod 8.
\]
\end{cor}
\begin{proof}
Since the degree of $gf(x)$ is divisible by 4, using Corollary~\ref{cor1} and \eqref{ff'} we have
\[
\operatorname{Disc}(gf(x))=\operatorname{Res}(gf(x),f'(x)g'(f(x)))=\operatorname{Res}(gf(x),f'(x))\operatorname{Res}(gf(x),g'(f(x))),
\]
and hence using Lemmata~\ref{comp-trinomial} and~\ref{Res-Disc} we get
\[
\operatorname{Disc}(gf(x))\equiv\operatorname{Res}(gf(x),g'(f(x)))=\operatorname{Disc}(g(x))^{2n}\pmod 8.
\]
\end{proof}
\subsubsection{Instability of trinomials over ${\FF_2}$ }
The following is our main result about the trinomials over ${\FF_2}$.
\begin{theorem}\label{trim-even}
Let $f(x)=x^{2n}+x^{2n-s}+1$ be a trinomial over ${\FF_2}$. Then $f^{(3)}(x)$ is not an irreducible polynomial over ${\FF_2}$ and hence $f(x)$ is not stable.
\end{theorem}
\begin{proof}
If $s$ is an even number, then $f(x)=(x^n+x^{n-\frac{s}{2}}+1)^2$, which means that $f(x)$ is not irreducible and hence $f^{(3)}(x)$ is not an irreducible polynomial over ${\FF_2}$. Now let $s$ be an odd number. If $n=1$, then $s=1$, $f(x)=x^2+x+1$, $ff(x)=x^4+x+1$, and $$f^{(3)}(x)=x^8+x^4+x^2+x+1= (x^4 + x^3 + 1)(x^4 + x^3 + x^2 + x + 1).$$ So let $n>1$. If $f(x)$ is not irreducible, then the claim is trivial. So assume that $f(x)$ is an irreducible polynomial and let $F(x)=x^{2n}+x^{2n-s}+1$ be a lift of $f(x)$ to the integers. From Theorem~\ref{thm-Stickelberger} we deduce that $\operatorname{Disc}(F(x))\equiv 5 \pmod 8$. Now notice that $F(1)=3$ and that, since $n>1$, the coefficient of $x^{2n-1}$ is zero in $F(x)$ whenever $n=s$. Thus using Corollary~\ref{Disc-2n} we get $\operatorname{Disc}(FF(x))\equiv 1\pmod 8$, which in turn, using Theorem~\ref{thm-Stickelberger}, implies that $ff(x)$ is not an irreducible polynomial over ${\FF_2}$. Since a factorization of $ff(x)$ yields one of $f^{(3)}(x)=ff(f(x))$, the polynomial $f^{(3)}(x)$ is not irreducible either, and $f(x)$ is not stable.
\end{proof}
\begin{remark}
The conclusion of Theorem~\ref{trim-even} does not hold for trinomials of odd degree. For example, if $f(x)=x^3+x^2+1$, our limited computations with the MAGMA computer algebra package show that $f^{(n)}(x)$ is irreducible for $n\le 10$ and, moreover, primitive for $n\le 6$. Our guess is that $f(x)$ is stable over ${\FF_2}$.
\end{remark}
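A minimal sketch of this kind of experiment, written here in SymPy for illustration (the computations reported in this paper were carried out with MAGMA, and the helper below is ours), tests which iterates of a polynomial are irreducible over $\FF_p$; since $\deg f^{(n)}=(\deg f)^n$, such a pure-Python check is only practical for the first few iterates.
\begin{verbatim}
# Sketch (ours, for illustration): irreducibility of the first iterates over F_p.
from sympy import symbols, Poly

x = symbols('x')

def iterate_irreducibility(f_expr, n_max, p=2):
    """Return [f^(1) irreducible?, ..., f^(n_max) irreducible?] over F_p."""
    results, g = [], f_expr
    for _ in range(n_max):
        results.append(Poly(g, x, modulus=p).is_irreducible)
        g = g.subs(x, f_expr)        # f^(k+1)(x) = f^(k)(f(x))
    return results

# Degree-2 example from the proof of Theorem trim-even:
# f and f^(2) are irreducible over F_2, while f^(3) factors.
print(iterate_irreducibility(x**2 + x + 1, 3))   # expected: [True, True, False]
\end{verbatim}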
\begin{remark}
It is possible to prove results similar to the results of this section for monic polynomials of even degree over finite fields of characteristic two.
\end{remark}
\subsection{Polynomials over fields of odd characteristic}
In this section we consider trinomials over the finite fields of odd characteristic and show that some families of them are not stable.
\begin{theorem}
Let $p>2$ be a prime number such that $p\mid n$,
and let $f(x)=x^{2n}+ax^{2s+1}+b$ be a trinomial over ${\mathbb F}_{p^t}$ and $g(x)$ be a monic polynomial of degree $\deg(g)=2m$ in ${\mathbb F}_{p^t}[x]$.
Then $gf(x)$ is reducible. In particular $f(x)$ is not stable.
\end{theorem}
\begin{proof}
It suffices to show that the discriminant of $gf(x)$ is a quadratic residue in ${\mathbb F}_{p^t}$: in that case, by Theorem~\ref{thm-Stickel}, $gf(x)$ has an even number of irreducible factors in ${\mathbb F}_{p^t}[x]$ and hence is reducible (if $gf(x)$ is not squarefree, it is reducible in any case). We have
$f'(x)=a(2s+1)x^{2s}$. Since the degree of $gf(x)$ is divisible by $4$, from~\eqref{ff'} we get
$$
\operatorname{Disc}(gf(x))=\operatorname{Res}(gf(x),f'(x)g'(f(x)))=\operatorname{Res}( gf(x),f'(x)) \operatorname{Res}(gf(x),g'(f(x))).
$$
Now on the one hand using Corollary~\ref{cor1} we have
$$
\operatorname{Res}( gf(x),f'(x))= (a(2s+1))^{4nm}g(f(0))^{2s}=(a(2s+1))^{4mn}g(b)^{2s}
$$
which shows that $\operatorname{Res}(gf(x),f'(x))$ is a quadratic residue, and on the other hand from Lemma~\ref{Res-Disc} we have
$$
\operatorname{Res}(gf(x),g'(f(x)))=\operatorname{Disc}(g(x))^{2n}.
$$
Thus from the equations above we conclude that $\operatorname{Disc}(gf(x))$ is a quadratic residue in ${\mathbb F}_{p^t}$ which finishes the proof.
\end{proof}
In the theorem above, we dealt with polynomials of even degree. It is possible to apply Theorem~\ref{thm-Stickel} to prove the instability of some families of trinomials of odd degree. For example we have the following theorem.
\begin{theorem}
Let $p\not\equiv 1\pmod 8$, and let $f(x)=x^{p}+ax^2+b$ be a polynomial over ${\mathbb F}_p$. Furthermore suppose that $ab=-3$.
Then $f(x)$ is not stable over ${\mathbb F}_p$.
\end{theorem}
\begin{proof}
From~\eqref{ff'}, we have $\operatorname{Disc}(f(x))=-b(2a)^{p}=-b(2a)=6$ and thus using Lemma~\ref{Res-Disc} we get
\begin{eqnarray}
\operatorname{Disc}(f(f(x)))&=&\operatorname{Disc}(f)^{p}\operatorname{Res}(ff,f')=6\operatorname{Res}(ff,f')\nonumber\\&=&-12aff(0)=-12a(b^{p}+ab^2+b)=-12a(ab^2+2b)=-36. \nonumber
\end{eqnarray}
Now if $p\equiv 3,7\pmod 8$, then $\operatorname{Disc}(ff(x))=-36$ is a quadratic non-residue and hence from Theorem~\ref{thm-Stickel} it follows that $ff(x)$ is not irreducible. So in order to finish the proof we need to prove the claim for $p\equiv 5\pmod 8$. Now we have
\[
\operatorname{Disc}(fff(x))=\operatorname{Disc}(ff(x))^p \operatorname{Res}(fff,f')
\]
and hence
\begin{align*}
\operatorname{Res}(fff,f')=-2afff(0)&=-2a((ab^2+2b)+a(ab^2+2b)^2+b)\\
&=-2ab(ab+3+ab(ab+2)^2)=-18.
\end{align*}
But $-2$ is a quadratic non-residue when $p\equiv 5\pmod 8$, while $\operatorname{Disc}(ff(x))=-36$ is a quadratic residue in this case; hence $\operatorname{Res}(fff,f')=-18$ and therefore $\operatorname{Disc}(fff(x))$ are quadratic non-residues, and thus $fff(x)$ is not irreducible by Theorem \ref{thm-Stickel}.
\end{proof}
\section{Polynomials of higher weights}
It is possible to prove results similar to those of the previous section for some families of polynomials of higher weight, i.e., polynomials which have more than three nonzero terms. Examples are the following theorems about some families of polynomials over the binary field. The proofs of these theorems are very similar to the proofs of the theorems of the previous section and hence we omit them.
\begin{theorem}\label{higher-weight}
Let $f(x)=x^{2n}+x^s+g(x^8)+1$ be a polynomial over ${\FF_2}$ such that $8\deg(g)<s$. Then $f(x)$ is not stable over ${\FF_2}$.
\end{theorem}
\begin{theorem}
Let $f(x)=g(x^4)+x^s+1$ be a polynomial over ${\FF_2}$ such that $s<4\deg(g)$. Then $f(x)$ is not stable over ${\FF_2}$.
\end{theorem}
\section{Concluding remarks}
As we noted in the introduction, the stability of quadratic polynomials over binary fields was studied in~\cite{AOLS}, and the stability of polynomials of degree three over fields of characteristic three was studied in~\cite{DNOS}. It seems that the methods of~\cite{AOLS, DNOS} are very hard to apply to polynomials of higher degree or higher weight. In this paper, we used Theorems~\ref{thm-Stickel} and~\ref{thm-Stickelberger} to study the stability of trinomials and some families of polynomials of higher weight over finite fields. This method also does not seem to be of much help in attacking our conjecture for polynomials of higher weight. For example, if $f(x)=x^{20}+x^{18}+x^5+x^2+1$, then
computations with the MAGMA computer algebra package show that if $F(x)$ is a monic lift of $f(x)$ over the integers, then $\operatorname{Disc}(F(x))\equiv\operatorname{Disc}(FF(x))\equiv\operatorname{Disc}(FFF(x))\equiv 5 \pmod 8$, while $f(x)$ and $ff(x)$ are irreducible over ${\FF_2}$ and $fff(x)$ is not irreducible over ${\FF_2}$. Thus one cannot hope to use Theorems~\ref{thm-Stickel} and~\ref{thm-Stickelberger} solely to resolve our conjecture.
We also noted in the introduction that Odoni~\cite{Odoni} studied the stability of the additive polynomial $f(x)=x^p-x-1$ over $\FF_p$. His method is different from the methods of the current paper and those of~\cite{AOLS, DNOS}. Using Capelli's lemma he showed that the Galois group of $ff(x)$ over $\FF_p$ is the cyclic group of order $p$, and hence $ff(x)$ is reducible, so that $f(x)$ is not stable. It is not clear how to generalize Odoni's method to the case of polynomials which are not additive over $\FF_p$. In conclusion, it seems that new ideas and methods are needed to be able to confirm Conjecture~\ref{main-conj}, if it is correct.
\iffalse
We can show that $f(x)$ is not stable for the additive polynomials. For example let $f(x)=x^p+ax+b$. We note that if $a\neq -1$, then $f(x)$ has a root in ${\mathbb F}_p$, so $f(x)$ is reducible. For $a=-1$ we assume that $\zeta\in {\mathbb F}_{p^p}$ be a root of $f$. Then one can construct an explicit root for $f(x)-\zeta=0$. Indeed if we choose $\delta\in {\mathbb F}_{p^p}$, such that $tr(\delta)=1$, then
$$
\sum_{i=0}^{p-2}(\sum_{j=i+1}^{p-1}\delta^{p^j})(b-\zeta)^{p^i}
$$
is a root of $f(x)-\zeta=0$. Hence by Capelli lemma $ff(x)$ reducible over ${\mathbb F}_p[x]$.
In~\cite{Odoni}, Odoni also studied the Galois theory of iterates of polynomials over fields of characteristic zero. He showed that if $G(x)\in {\mathbb Q}[x]$ is of degree $p$, then
the Galois group of $GG(x)$ is a subgroup of the wreath product of $S_p$ with itself, i.e., $[S_p]^2$, and hence it is a proper subgroup of $S_{p^2}$.
Now one may hope that for a polynomial $f(x)$ over $\FF_p$, it is possible to lift $f(x)$ over the integers to obtain $F(x)$ and then use the information about the Galois theory of iterates of $F(x)$ to resolve our conjecture. Here we argue that this less likely.
Now if $g(x)=G(x)\pmod p$ and $gg(x)$ is stable over ${\mathbb F}_p[x]$, since $p$ is unramified, so according to Dedekind's theorem~\cite{Marcus}, a cycle of length $p^2$ appears in the Galois group of $GG(x)$. Keeping in mind that a transitive subgroup of $S_n$ generated by the cycle $(1\, 2\cdots n)$ and a transposition of $(a\, b)$ with $gcd (n, a-b)=1$ is $S_n$ itself, so one might wish to find a transposition and then deduce that the Galois group of $ff(x)$ is $S_{p^2}$ which is a contradiction. But this methods fails. Indeed in our case all transpositions like $( a\, b)$ satisfy the condition
$p\mid a-b$. There are $p\binom{p}{2}$ transpositions with this condition living in the permutation group $S_{p^2}$. A group theory question would be that what is this group look like. An interesting point is that this number is exactly the number of transpositions of the wreath product $[S_p]^2$.
Indeed if $G$ is a finite group which acts on a finite set $A$. For every $g\in G$
we denote $tr(g)= \{ a\in A\mid g.a=a \},$ then for $\Phi(x)= \sum_{g\in G} x^{tr(g)}$ We have
\begin{equation}
\label{eqx8}
\Phi_{G[H]}(x)=\Phi_{G}(\Phi_{H}(x)),
\end{equation}
where $G[H]$ is the wreath product of $G$ and $H$. If we set $G=H=S_p$, thn the coefficient of $x^{p-2}$ in
$\Phi_{G}(x)$, denote the number of transpositions of $S_p$ which is $\binom{p}{2}$. From \eqref{eqx8} the number of transpositions of $[S_p]^2$ is exactly $p\binom{p}{2}$. Hence probably a group generated by $(1 \,2\cdots p^2)$ and a transposition $(1\; p+1)$ is the wreath product $[S_p]^2$ and there is no any contradiction.\\
I
* A polynomial which is stable for three iterations $x^7 + x^6 + x^5 + x^4 + 1$
*Case of the self-reciprocal polynomials
*Field of characteristic 2
*$f=x^9-x^8-x-1$ over ${\FF_3}$ which is irreducible for three iterations and splits in the fourth iteration.
*$f=x^7+x^6+x^5+3$ over ${\mathbb F}_{7}$ is irreducible for four iterations and splits in the fifth iteration ;
*$x^8 + x^4 + x^3 + x^2 + 1$ and $x^8 + x^7 + x^5 + x + 1$ over ${\FF_2}$ are irreducible for two iterations and split at the third iteration.
\fi
\begin{bibdiv}
\begin{biblist}
\bib{AOLS}{article}{
author={Ahmadi, Omran},
author={Luca, Florian},
author={Ostafe, Alina},
author={Shparlinski, Igor E.},
title={On stable quadratic polynomials},
journal={Glasg. Math. J.},
volume={54},
date={2012},
number={2},
pages={359--369},
issn={0017-0895},
review={\MR{2911375}},
doi={10.1017/S001708951200002X},
}
\bib{Nidal}{article}{
author={Ali, Nidal},
title={Stabilit\'e des polyn\^omes},
language={French},
journal={Acta Arith.},
volume={119},
date={2005},
number={1},
pages={53--63},
issn={0065-1036},
review={\MR{2163517}},
doi={10.4064/aa119-1-4},
}
\bib{Ayad}{article}{
author={Ayad, Mohamed},
author={McQuillan, Donald L.},
title={Corrections to: ``Irreducibility of the iterates of a quadratic
polynomial over a field'' [Acta Arith. {\bf 93} (2000), no. 1, 87--97;
MR1760091 (2001c:11031)]},
journal={Acta Arith.},
volume={99},
date={2001},
number={1},
pages={97},
issn={0065-1036},
review={\MR{1845367}},
doi={10.4064/aa99-1-9},
}
\bib{Cohen-1}{article}{
author={Cohen, Stephen D.},
title={On irreducible polynomials of certain types in finite fields},
journal={Proc. Cambridge Philos. Soc.},
volume={66},
date={1969},
pages={335--344},
review={\MR{0244202}},
}
\bib{Dalen}{article}{
author={Dalen, K\aa re},
title={On a theorem of Stickelberger},
journal={Math. Scand.},
volume={3},
date={1955},
pages={124--126},
issn={0025-5521},
review={\MR{0071460}},
doi={10.7146/math.scand.a-10433},
}
\bib{Domingo}{article}{
author={G\'omez-P\'erez, Domingo},
title={Personal communication},
}
\bib{DNOS}{article}{
author={G\'omez-P\'erez, Domingo},
author={Nicol\'as, Alejandro P.},
author={Ostafe, Alina},
author={Sadornil, Daniel},
title={Stable polynomials over finite fields},
journal={Rev. Mat. Iberoam.},
volume={30},
date={2014},
number={2},
pages={523--535},
issn={0213-2230},
review={\MR{3231208}},
doi={10.4171/RMI/791},
}
\bib{Goksel}{article}{
author={Goksel, Vefa},
author={Xia, Shixiang},
author={Boston, Nigel},
title={A refined conjecture for factorizations of iterates of quadratic
polynomials over finite fields},
journal={Exp. Math.},
volume={24},
date={2015},
number={3},
pages={304--311},
issn={1058-6458},
review={\MR{3359218}},
doi={10.1080/10586458.2014.992079},
}
\bib{Jones-Boston}{article}{
author={Jones, Rafe},
author={Boston, Nigel},
title={Settled polynomials over finite fields},
journal={Proc. Amer. Math. Soc.},
volume={140},
date={2012},
number={6},
pages={1849--1863},
issn={0002-9939},
review={\MR{2888174}},
doi={10.1090/S0002-9939-2011-11054-2},
}
\bib{LN}{book}{
author={Lidl, Rudolf},
author={Niederreiter, Harald},
title={Finite Fields},
publisher={Cambridge University Press},
date={1984},
}
\bib{Odoni}{article}{
author={Odoni, R. W. K.},
title={The Galois theory of iterates and composites of polynomials},
journal={Proc. London Math. Soc. (3)},
volume={51},
date={1985},
number={3},
pages={385--414},
issn={0024-6115},
review={\MR{805714}},
doi={10.1112/plms/s3-51.3.385},
}
\bib{Pellet}{article}{
author={Pellet, A.-E.},
title={Sur la décomposition d'une fonction entière en facteurs irréductibles suivant un module premier p.},
journal={Comptes Rendus de l'Académie des Sciences Paris},
volume={86},
date={1878},
pages={1071--1072},
}
\bib{Stickelberger}{article}{
author={Stickelberger, Ludwig},
title={ Über eine neue Eigenschaft der Diskriminanten algebraischer Zahlkörper.},
journal={Verhandlungen des ersten Internationalen Mathematiker-Kongresses, Zürich},
volume={},
date={1897},
pages={182--193},
}
\bib{Swan}{article}{
author={Swan, Richard G.},
title={Factorization of polynomials over finite fields},
journal={Pacific J. Math.},
volume={12},
date={1962},
pages={1099--1106},
issn={0030-8730},
review={\MR{0144891}},
}
\end{biblist}
\end{bibdiv}
\end{document}
|
\begin{document}
\title{Existence of energy-variational solutions to hyperbolic conservation laws}
\begin{abstract}
We introduce the concept of energy-variational solutions for hyperbolic conservation laws.
Intrinsically,
these energy-variational solutions fulfill the weak-strong uniqueness
principle and the semi-flow property,
and the set of solutions is convex and weakly-star closed.
The existence of energy-variational solutions is proven
via a suitable time-discretization scheme
under certain assumptions.
This general result
yields
existence of energy-variational solutions
to the magnetohydrodynamical equations for ideal incompressible fluids
and to the Euler equations in both the incompressible
and the compressible case.
Moreover, we show that energy-variational solutions to the Euler equations
coincide with dissipative weak solutions.
\end{abstract}
\noindent
\textbf{MSC2020:}
35L45,
35L65,
35A01,
35A15,
35D99,
35Q31,
76B03,
76N10.
\\
\noindent
\textbf{Keywords:}
Generalized solutions,
conservation laws,
time discretization,
weak-strong uniqueness,
Euler equations.
\tableofcontents
\section{Introduction}
Hyperbolic conservation laws form a class of nonlinear evolution equations that is omnipresent in mathematical physics and its applications.
These range from traffic models~\cite{traffic} and thermomechanics~\cite[Sec.~2.3]{dafermos2} to fluid dynamics and weather forecasting~\cite{applfluid}. Even though this class of equations is fundamental and plays such a prominent role in the research on partial differential equations, up to now there is no
suitable concept
of generalized solutions
such that existence can be established for a large class of general multi-dimensional
hyperbolic conservation laws.
To contribute to filling this gap, in this article we
propose the concept of energy-variational solutions.
We consider general conservation laws
\begin{subequations}\label{eq}
\begin{alignat}{2}
\label{eq.pde}
\partial_t \f U + \di \f F(\f U) ={}& \f 0 && \quad\text{in }\mathbb{T}^d\times (0,T)\,,\\
\label{eq.iv}
\f U(\cdot,0) ={}& \f U_0 && \quad\text{in }\mathbb{T}^d\,
\end{alignat}
\end{subequations}
on the $d$-dimensional (flat) torus ${\mathbb{T}^d}$, $d\in\N$,
and for a finite time $T\in(0,\infty)$.
Here $\f U\colon{\mathbb{T}^d}\times(0,T)\to\R^m$, $m\in\N$, denotes the
unknown state variable,
$\f F : \R^m \ra \R^{m\times d }$ is a given flux matrix
depending on the state,
and $\f U_0\in\R^m$ denotes prescribed initial data.
As usual (\textit{cf.}~\cite[Sec.~11.4.2]{evans}), we assume that there exists a strictly convex entropy $ \eta : \R^m \ra [0,\infty ] $
such that the total entropy
$ \mathcal{E} (\f U(t )) := \int_{{\mathbb{T}^d}} \eta( \f U(t)) \,\mathrm{d} \f x $
is conserved along smooth solutions,
but which may decrease along non-smooth solutions.
To ensure this, we assume that
$$ \F{ \tilde{\f U} }{D \eta (\tilde{\f U})} = 0 $$
for all suitable $\tilde{\f U} $.
This condition differs from the usual entropy-pair assumption,
where the existence of a corresponding entropy flux is required,
but it allows for more general entropy functions
and therefore a larger class of conservation laws;
see Remark \ref{rem:integralcondition} below for further explanation.
Observe that we use the letter $\mathcal{E}$ to denote the
total entropy since in the considered examples
the role of the mathematical entropy is always played by the physical energy of the
respective system.
Hyperbolic conservation laws are well understood in the one-dimensional and
in the scalar case, that is, for $d=1$ or $m=1$.
Going back to the fundamental works of Hopf~\cite{hopf} and Lax~\cite{lax}, the theory is nowadays fairly standard; see~\cite{evans} and~\cite{dafermos2} for example.
In contrast,
the one-dimensional theory cannot be transferred to the multi-dimensional case
$m,d\geq 2$ immediately,
where a general solution concept that ensures solvability is missing.
Instead, solution concepts are usually constructed
such that they fit to one specific conservation law,
and often there are several different concepts for the same equation.
A prominent example is the Euler system for inviscid fluid flow,
for which
DiPerna and Majda established the existence of measure-valued solutions
in the incompressible case~\cite{DiPernaMajda},
and a weak-strong uniqueness principle
was proven later in~\cite{weakstrongeuler}.
Weak-strong uniqueness is another favorable property for any solution concept
and means that a generalized solution
coincides with a strong solution with the same initial data if the latter exists.
In the same article~\cite{weakstrongeuler}, the weak-strong uniqueness of measure-valued solutions to hyperbolic conservation laws was shown,
but the existence of these solutions is not known and not expected to hold in general.
The weak-strong uniqueness principle for dissipative measure-valued solutions,
where the measure-valued formulation is enriched with a defect measure,
was shown for more general conservation laws in~\cite{Gwiazda},
but still their existence remains unclear.
In the case of the compressible Euler equations,
the existence of dissipative weak solutions,
defined by enriching the weak formulation with a defect measure,
was shown
in~\cite{BreitComp}, and a weak-strong uniqueness principle was proved in~\cite{weakstrongCompEul}.
We shall see that both the incompressible and the compressible Euler equations
can be treated in the abstract framework of hyperbolic conservation laws presented here.
In particular, we establish existence of energy-variational solutions to both systems,
and we show that they coincide with the corresponding dissipative weak solutions.
In this respect, we present a new way
to construct
dissipative weak solutions for these equations.
As another example, we consider the equations of magnetohydrodynamics
for an incompressible ideal fluid, which means that the effects of viscosity
and electrical resistivity are neglected.
While there are results on
the local existence of strong solutions~\cite{Schmidt1988,Secchi1993,DiazLerena2002},
and a weak-strong uniqueness principle for measure-valued solutions was shown in~\cite{Gwiazda},
the global existence of suitably generalized solutions seems to be unknown.
By providing existence of energy-variational solutions to this system,
the present work gives the first result in this direction.
We believe that the class of equations considered here is quite general,
and that the presented theory yields existence results for many other conservation laws.
To explain the main idea of our solution concept,
let us begin with
the classical approach towards a generalized solution concept for problem \eqref{eq},
namely the notion of weak solutions,
defined via the weak formulation of \eqref{eq.pde},
that is, the identity
\begin{equation}\label{eq:weak.intro}
- \langle \f U, \Phi \rangle \Big|_{s}^t + \int_s^t \int_{{\mathbb{T}^d}} \f U \cdot\partial_t \Phi + \f F (\f U ) : \nabla \Phi\,\mathrm{d} \f x \,\mathrm{d} \tau = 0 \,
\end{equation}
for $s,t\in[0,T]$ and all test functions $\Phi$
in a suitable class $\Y$ of test functions.
As mentioned above,
a natural assumption is that the total entropy is non-increasing along solutions,
which means that $\mathcal{E}(\f U)\big|_s^t\leq 0$ if $s<t$.
Combining this condition with \eqref{eq:weak.intro},
we obtain the variational inequality
\begin{equation}\label{eq:weak.var}
\bb{\mathcal{E}(\f U)- \langle \f U ,\Phi \rangle }\Big|_{s}^t + \int_s^t \int_{{\mathbb{T}^d}} \f U \cdot \partial_t \Phi + \f F (\f U ) : \nabla \Phi\,\mathrm{d} \f x \,\mathrm{d} \tau \leq 0\,
\end{equation}
for $s<t$ and $\Phi\in\Y$.
Since \eqref{eq:weak.intro} can be recovered from
\eqref{eq:weak.var}
(see also Lemma \ref{lem:var.affine} below),
we may also take \eqref{eq:weak.var} to define
weak solutions with non-increasing total entropy.
As explained above,
existence of such weak solutions cannot be guaranteed for general
hyperbolic conservation laws,
which is why we introduce the concept of
energy-variational solutions.
The main idea is
to replace the total mechanical entropy $\mathcal{E}(\f U) \in L^\infty(0,T)$ with an auxiliary entropy variable $E\in\BV$,
which may be seen as a turbulent entropy and
may exceed the mechanical entropy of the system.
Additionally, we introduce the difference $\mathcal{E}(\f U)-E\leq 0$,
weighted by a suitable factor $\mathcal K(\Phi)\geq 0$
depending on the test function,
into the equation \partial_text{\partial_textit{e}}qref{eq:weak.var}.
This leads to the inequality
\begin{equation}
\left[ E - \langle \f U , \Phi \rangle\right ] \Big|_{s}^t + \int_s^t \Bb{\int_{{\mathbb{T}^d}}\f U \cdot \partial_t \Phi + \f F (\f U ) : \nabla \Phi\,\mathrm{d} \f x + \mathcal{K}(\Phi) \left [\mathcal{E} (\f U) - E \right ]} \,\mathrm{d} \tau \leq 0 \, \label{eq:envar.intro}
\end{equation}
for $s<t$ and $\Phi\in\Y$,
which will serve as the basic inequality defining
energy-variational solutions.
In particular,
if we have $E=\mathcal{E}(\f U)$, then \eqref{eq:weak.var}
is equivalent to \eqref{eq:envar.intro},
and energy-variational solutions
coincide with weak solutions.
The crucial assumption for our approach
is that the function $\mathcal K$ is chosen in such
a way that the mapping
\[
\f U \mapsto \F{\f U}{\Phi} + \mathcal{K}(\Phi) \mathcal{E}(\f U)
\]
is convex for any $\Phi\in\Y$.
Under this assumption, $(\f U,E)$ appears in \eqref{eq:envar.intro}
in a convex way,
so that inequality~\eqref{eq:envar.intro} is preserved
under weak$^*$ convergence.
Note that the idea of relaxing the formulation of an evolution equation to a variational inequality
and providing convexity by introducing an additional term
goes back to Pierre-Louis Lions
in the context of the incompressible Euler equations~\cite[Sec.~4.4]{lionsfluid}.
Similar solution concepts have recently been used in the context of
fluids with viscosity
as the incompressible Navier--Stokes equations~\cite{maxidss}
and viscoelastic fluid models~\cite{EiHoLa22}.
Besides showing existence
of energy-variational solutions via a semi-discretization in time,
which may justify their usefulness for
numerical implementations,
we further show certain properties
that are directly included in the solution concept,
for example, a weak-strong uniqueness principle.
Furthermore, we introduce the concept of energy-variational solutions
in such a way that the semi-flow property is satisfied.
This is a desirable property of a solvability concept,
in particular, when uniqueness of solutions cannot be guaranteed;
see~\cite{basaric,BreitComp} for example.
As is the case for many generalized solution concepts,
energy-variational solutions may not be unique but
instead
capture all limits of suitable approximations.
Hence, additional selection criteria would have to be applied in order to choose the physically relevant solution. This definitely requires further research, but we shall see that the class of energy-variational solutions has desirable properties for such a selection process.
In particular, we prove that the set of energy-variational solutions is convex and weakly$^\ast$ closed, which might make it possible to define an appropriate minimization problem on this set (\textit{cf.}~\cite{envar}),
and to identify the (unique) minimizer
with the physically relevant solution.
For scalar conservation laws, Dafermos~\cite{dafermosscalar} proposed the entropy-rate admissibility criterion to select the physically relevant solution. He was able to prove that in a certain class this selection procedure coincides with a selection according to the well established Lax-admissibility criterion~\cite{dafermosscalar}.
It is worth noticing that for the auxiliary variable $E\in \BV$ the entropy rate $\partial_t E$ is well defined in the space of Radon measures, and the proposed minimization of this value may be defined at least for finitely many points in time.
Therefore, it might be possible to follow Dafermos's proposed criterion in the present case.
This is in accordance with the semi-discrete time-stepping scheme proposed in~\eqref{eq:timedis} below, where the energy is minimized in every step,
which might provide additional regularity for the minimizer as well as for the solution in the limit.
This question will be further investigated in the future, together with the performance of the proposed semi-discretization in numerical experiments.
The article is organized as follows: In Section~\ref{sec:pre}, we explain the relevant notation and
introduce the notion of energy-variational solutions for hyperbolic conservation laws.
We formulate the main result on their existence
and collect several auxiliary lemmas.
Section~\ref{sec:hyper} is concerned with the study of energy-variational solutions to these hyperbolic conservation laws.
We derive a number of general properties of energy-variational solutions,
and we prove the existence of energy-variational solutions via the convergence of a suitable time-discretization based on an iterative minimization procedure.
After considering the incompressible magnetohydrodynamical equations and the incompressible Euler equations in Section~\ref{sec:incomp}, we deal with the compressible Euler equations in Section~\ref{sec:comp}.
\section{Preliminaries and main result\label{sec:pre}}
\subsection{Notation}
For $d\in\N$,
we denote the scalar product of
two vectors $\f a, \f b\in\R^d$
by $\f a \cdot \f b\coloneqq\f a_j \f b_j$,
and the Frobenius product of two matrices
$\f A,\f B\in \R^{m\times d}$
by
$\f A : \f B \coloneqq \f A_{ij}\f B_{ij}$.
Here and in the following, we tacitly use Einstein summation convention
and implicitly sum over repeated indices from $1$ to $d$ or $m$ depending on the context.
By
$\mathbb{R}^{d\times d}_{\text{sym}}$,
$\R^{d\times d}_{\text{skw}}$ and
$\mathbb{R}^{d\times d}_{\text{sym},+}$
we denote the sets of symmetric, skew-symmetric and
symmetric positive semi-definite $d$-dimensional matrices, respectively.
The symbols $(\f A)_{\text{sym}}=\frac{1}{2}(\f A+\f A^T)$
and $(\f A)_{\text{skw}}=\frac{1}{2}(\f A-\f A^T)$ denote the symmetric and
the skew-symmetric part
of a matrix $\f A\in\R^{d\times d}$,
and by $(\f A)_{\text{sym},+}$ and $(\f A)_{\text{sym},-}$, we denote the
positive semi-definite and the negative semi-definite part of the symmetric
matrix $(\f A)_{\text{sym}}$, respectively.
We usually equip matrix spaces with the spectral norm $\snorm{\cdot}_2$
defined by
\begin{equation}\label{eq:spectralnorm}
\snorm{\f A}_2= \sup _{|\f a| = 1} \f a ^T \cdot \f A \f a\,,
\end{equation}
that is, $\snorm{\f A}_2$ is the square root of the largest eigenvalue
of $\f A^T \f A$.
The dual norm of the spectral norm with respect to the Frobenius product
is the trace norm and denoted by $| \cdot |'_2$.
For symmetric matrices
$\f S\in \mathbb{R}^{d\times d}_{\text{sym}}$
we thus have
$
\snorm{\f S}_2
= \max_{j\in\{ 1,\ldots,d\}} \snorm{\lambda_j}
$
and
$| \f S |'_2 = \sum_{j=1}^d \snorm{\lambda _j} $,
where $\lambda _j$, $j=1,\dots,d$, are the (real) eigenvalues of the matrix $\f S$.
For symmetric positive semi-definite matrices $\f S \in \mathbb{R}^{d\times d}_{\text{sym},+}$ we may write
$| \f S |'_2 = \sum_{j=1}^d {\lambda _j} = \f S:I = \tr(\f S)$,
where $I$ denotes the identity matrix in $\R^{d\partial_times d}$.
By ${\mathbb{T}^d}\coloneqq\R^d/\Z^d$ we denote the $d$-dimensional (flat) torus
equipped with the Lebesgue measure.
The Radon measures on ${\mathbb{T}^d}$ taking values in $\mathbb{R}^{d\times d}_{\text{sym}}$ are denoted by $\mathcal{M}({\mathbb{T}^d} ; \mathbb{R}^{d\times d}_{\text{sym}} ) $, which may be interpreted as the dual space of the corresponding continuous functions, \textit{i.e.,} $\mathcal{M}({\mathbb{T}^d}; \mathbb{R}^{d\times d}_{\text{sym}} ) =(\C({\mathbb{T}^d}; \mathbb{R}^{d\times d}_{\text{sym}} ) )^*$.
Moreover, $\mathcal{M}({\mathbb{T}^d}; \mathbb{R}^{d\times d}_{\text{sym},+} ) $
is the class of symmetric positive semi-definite Radon measures,
which consists of Radon measures $\mu \in \mathcal{M}({\mathbb{T}^d}; \mathbb{R}^{d\times d}_{\text{sym}} ) $
such that for any $\f \xi \in\R^d$ the measure $ \f \xi \otimes \f \xi : \mu $ is nonnegative.
For a Banach space $\mathbb{X}$, we denote its dual space by $\mathbb{X}^\ast$,
and we use $\langle\cdot,\cdot\rangle$ to denote the associated dual pairing.
The space $\C_w([0,T];\mathbb X )$ denotes the class of functions on $[0,T]$ taking values in $\mathbb X$ that are continuous with respect to the weak topology of $\mathbb X$.
Analogously, the space $\C_{w^*}([0,T];\mathbb X^* )$ denotes the class of functions on $[0,T]$ taking values in $\mathbb X^*$ that are continuous with respect to the weak$^*$ topology of $\mathbb X^*$.
The space $L^\infty_{w^*} ([0,T];\mathbb X^*)$ is the space of all functions on $[0,T]$ taking values in $\mathbb X^*$ that are Bochner measurable and essentially bounded with respect to $\mathbb X^*$ equipped with the weak$^\ast$ topology.
We write $x_n\rightharpoonup x$
if a sequence $(x_n)\subset \mathbb X$
converges weakly to some $x\in \mathbb X$,
and $\varphi_n\xrightharpoonup{\ast}\varphi$
if a sequence $(\varphi_n)\subset \mathbb X^\ast$
converges weakly$^\ast$ to some $\varphi\in \mathbb X^\ast$.
In spaces of the form
$L^\infty(0,T;\X)$
we usually consider a mixture of the weak convergence in $\X$
and weak$^\ast$ convergence in $L^{\infty}$,
which we call weak$(^\ast)$ convergence,
and we write $u_n\xrightharpoonup{(\ast)} u$
if a sequence $(u_n)\subset L^\infty(0,T;\X)$
converges weakly$(^\ast)$ to some $u\in L^\infty(0,T;\X)$,
that is,
if
\begin{equation}
\label{eq:weakconv.LinfL1}
\forall f\in\LR{1}(0,T;\mathbb X^\ast):
\quad
\lim_{n\to\infty}\int_0^T \langle u_n(t), f(t)\rangle \,\mathrm{d}t
=\int_0^T \langle u(t), f(t)\rangle \,\mathrm{d}t.
\end{equation}
The total variation of a function $E:[0,T]\ra \R$ is given by
$$ | E |_{\text{TV}([0,T])}= \sup_{0=t_0<\ldots <t_n=T} \sum_{k=1}^n \lvert E(t_{k-1})-E(t_k) \rvert\,, $$
where the supremum is taken over all finite partitions of the interval $[0,T]$.
We denote the space of all integrable functions on $[0,T]$
with bounded variation by~$\BV$, and we equip this space with the norm
$\| E \|_{\BV} := \| E\|_{L^1(0,T)} + | E |_{\text{TV}([0,T])}$~(cf.~\cite{BV}).
Recall that an integrable function $E$ has bounded variation if and only if its
distributional derivative $E'$ is an element of
$\mathcal M([0,T])$, the space of finite Radon measures on $[0,T]$.
Moreover, $\BV$ coincides with the dual space of a Banach space,
see \cite[Remark~3.12]{AmbrosioFusoPallara_BVFunctions_2000} for example,
and we usually work with the corresponding weak$^\ast$ convergence,
which can be characterized by
\[
E_n \xrightharpoonup{*} E \text{ in } \BV \quad \iff \quad
E_n \to E \text{ in } L^1(0,T) \ \text{ and } \
E_n'\xrightharpoonup{*} E' \text{ in } \mathcal M([0,T]).
\]
Note that the total variation of a decreasing non-negative function $E$
can be estimated by the initial value since
\[
| E|_{\text{TV}([0,T])} = \sup_{0=t_0<\ldots <t_n=T}\sum_{k=1}^n \bp{ E(t_{k-1})-E(t_k) }
= E(0) - E(T) \leq E(0) \,.
\]
Let $\eta :\R^m \to [0,\infty]$ be a convex, lower semi-continuous function with $\eta (\f 0 )= 0$.
The domain of $\eta$ is defined by $\dom\eta=\setc{\f x\in\R^m}{\eta(\f x)<\infty}$.
We denote the convex conjugate of $\eta$ by $\eta^\ast$,
which is defined by
\begin{align*}
\eta^*(\f z) =\sup_{\f y \in \R^m} \left [
\f z\cdot\f y
- \eta(\f y)\right ] \qquad \text{for all }\f z \in \R^m \,.
\end{align*}
Then $\eta^*$ is also convex, lower semi-continuous, non-negative and satisfies $\eta^*(\f 0) = 0$.
We introduce the subdifferential $\partial \eta$ of $\eta$ by
\begin{align*}
\partial \eta (\f y) := \left \{ \f z \in \R^m \mid
\forall \tilde{\f y} \in\R^m:\ \eta( \tilde{\f y}) \geq \eta (\f y) +
\f z \cdot \np{\tilde{\f y} - \f y}
\right \} \,
\end{align*}
for $\f y\in\R^m$.
The subdifferential $\partial\eta^\ast$ of $\eta^\ast$ is defined analogously.
Then the Fenchel equivalences hold:
For $\f y,\f z\in\R^m$ we have
\begin{equation}\label{eq:fenchel}
\f z \in \partial \eta(\f y)
\quad\iff\quad
\f y \in \partial \eta^*(\f z )
\quad\iff\quad
\eta(\f y ) + \eta^*(\f z) =
\f z \cdot \f y\,.
\end{equation}
A proof of this well-known result can be found in~\cite[Prop.~2.33]{barbu} for example.
If $\partial\eta(\f y)$ is a singleton for some $\f y\in \R^m$,
then $\eta$ is Fr\'echet differentiable in $\f y$ and
$\partial\eta(\f y)=\set{D\eta(\f y)}$.
In this case, we identify $\partial\eta(\f y)$ with $D\eta(\f y)$.
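As a simple illustration of these notions, consider the quadratic entropy $\eta(\f y)=\frac12\snorm{\f y}^2$ on $\R^m$. In this case the supremum defining the conjugate is attained at $\f y=\f z$, and one readily computes
\[
\eta^\ast(\f z)=\tfrac12\snorm{\f z}^2,
\qquad
\partial\eta(\f y)=\set{\f y},
\qquad
D\eta^\ast(\f z)=\f z,
\]
and the Fenchel identity $\eta(\f y)+\eta^\ast(\f z)=\f z\cdot\f y$ holds precisely for $\f z=\f y$, in accordance with~\eqref{eq:fenchel}.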
\subsection{Main result}
We introduce the notion of energy-variational solutions to
the hyperbolic conservation law \eqref{eq}.
Consider an entropy functional $\eta : \R^m \to [0,\infty]$, $m\in\N$.
We define the total entropy functional
\begin{equation}
\label{eq:E}
\mathcal E\colon
L^1({\mathbb{T}^d};\R^m)
\to[0,\infty], \qquad
\mathcal{E}(\f U) = \int_{{\mathbb{T}^d}}\eta(\f U) \,\mathrm{d} \f x\,
\end{equation}
with domain
$\dom\mathcal{E}\coloneqq \{ \f U \in L^1({\mathbb{T}^d};\R^m) \mid \mathcal{E}(\f U) < \infty \}$.
As the set of test functions,
we consider a closed subspace $\Y$ of $\C^1({\mathbb{T}^d};\R^m )$.
We next collect further
assumptions on $\eta$, $\f F$, and $\Y$.
\begin{hypothesis}\label{hypo}
Assume that
$\eta : \R^m \to [0,\infty]$
is a strictly convex and lower semi-continuous function that satisfies $\eta(\f 0)=0$
and has superlinear growth, that is,
\begin{equation}\label{eq:superlin.growth}
\lim_{ |\f y|\ra \infty} \frac{\eta (\f y )}{|\f y|} = \infty \,.
\end{equation}
We assume that the set
\begin{equation}
\label{setD}
\mathbb{D}:=
\{ \f U \in \dom \mathcal{E} \mid \exists \{ \Phi_n\}_{n\in\N} \subset \Y : D\eta^* \circ\Phi_n \rightharpoonup \f U \text{ in } L^1({\mathbb{T}^d};\R^m)
\}
\end{equation}
is convex.
Furthermore, let $\f F : \R^m \ra \R^{m\times d }$
be a measurable function
such that
there exists a constant $C>0$ with
\begin{equation}
\forall \f y \in \R^m :\quad | \f F( \f y) | \leq C(\eta(\f y)+1) \,, \label{BoundF}
\end{equation}
and such that
\begin{equation}\label{eq:integralFentropy}
\forall \Phi\in\Y:\quad
\int_{\mathbb{T}^d} \f F( D\eta^\ast(\Phi(\f x))):\nabla\Phi(\f x)\,\mathrm{d}\f x=0.
\end{equation}
We further assume that there exists a convex and continuous function $\mathcal{K}: \Y \ra [0,\infty) $
such that for any $\Phi\in\Y$ the mapping
\begin{equation}
\mathbb{D} \ra \R, \quad
\f U \mapsto \F{\f U}{\Phi} + \mathcal{K}(\Phi) \mathcal{E}(\f U) \label{ass:convex}
\end{equation}
is convex, lower semi-continuous and non-negative.
\end{hypothesis}
Before we further explain the assumptions made in Hypothesis \ref{hypo},
let us introduce the notion of energy-variational solutions and formulate
the main result on their existence.
\begin{definition}[Energy-variational solutions]\label{def:envar}
We call a pair $(\f U, E) \in L^\infty (0,T;\mathbb{D})\times \BV$
an energy-variational solution to \eqref{eq}
if $\mathcal{E} (\f U) \leq E $ a.e.~on $[0,T]$
and if
\begin{equation}
\left[ E - \langle \f U , \Phi \rangle\right ] \Big|_{s}^t + \int_s^t \Bb{\int_{{\mathbb{T}^d}}\f U \cdot \partial_t \Phi + \f F (\f U ) : \nabla \Phi\,\mathrm{d} \f x + \mathcal{K}(\Phi) \left [\mathcal{E} (\f U) - E \right ] } \,\mathrm{d} \tau \leq 0 \, \label{envarform}
\end{equation}
holds for a.a.~$s,t\in (0,T)$, $s<t$, including $s=0$ with $\f U(0) = \f U_0$, and for all $\Phi \in \C^1( [0,T]; \Y )$.
\end{definition}
While energy-variational solutions may not have much regularity at the outset,
we shall see that
the initial value $\f U_0$ is attained in the weak$^\ast$ sense in $\Y^\ast$,
and that
$\f U$ and $E$ can be redefined
such that
$E$ is non-increasing
and $ \f U \in \C_{w^*}([0,T];\Y^*)$,
see Proposition~\ref{prop:reg} below.
As the main result of this article, we show existence of energy-variational solutions
under the previously specified assumptions.
\begin{theorem}[Existence of energy-variational solutions]\label{thm:main}
Let Hypothesis~\ref{hypo} be satisfied, and let $\f U_0\in\mathbb{D}$. Then there exists an energy-variational solution in the sense of Definition~\ref{def:envar} with $E(0+)=\mathcal{E}(\f U_0)$.
\end{theorem}
The proof of this theorem relies on a suitable time discretization and is provided
in
Subsection \ref{subsec:existence}.
Next we further comment on the assumptions stated in Hypothesis~\ref{hypo}
and on the solution concept of energy-variational solutions.
\begin{remark}\label{rem:integralwelldefined}
Hypothesis~\ref{hypo} ensures
that the integrals in \eqref{eq:integralFentropy} and \eqref{ass:convex}
are well defined.
For the integral in \eqref{ass:convex} note that
the estimate \eqref{BoundF} implies $\snorm{\f F\circ\f U}\in L^1({\mathbb{T}^d})$
for all $\f U\in\mathbb{D}$.
For the left-hand side of \eqref{eq:integralFentropy},
we first observe that $\partial\eta^\ast$ is single valued
by Lemma \ref{lem:convex} below,
since $\eta$ has superlinear growth.
The Fenchel equivalences \eqref{eq:fenchel} yield the identity
\[
\eta(D\eta^\ast(\Phi(\f x)))
=D\eta^\ast(\Phi(\f x)) \cdot \Phi(\f x)-\eta^\ast(\Phi(\f x)),
\]
which shows that $\f x\mapsto\eta(D\eta^\ast(\Phi(\f x)))$
is a continuous function on the compact set ${\mathbb{T}^d}$
and thus bounded for any $\Phi\in\Y$. Hence $ D\eta^* \circ \Phi \in \dom \mathcal{E}$.
Therefore,
inequality \eqref{BoundF}
yields a bound for the integrand in \eqref{eq:integralFentropy}.
\end{remark}
\begin{remark}
\label{rem:domE}
The convexity assumption on $\mathbb{D}$ can be seen as a compatibility condition on the space $\Y$ and the entropy $\eta$.
We note that $D\eta^*\circ\Phi \in \dom\mathcal{E}$ for $\Phi \in \Y$ as shown in Remark~\ref{rem:integralwelldefined}.
Moreover, for any sequence $ \{ \f U_n\}_{n\in\N} \subset \mathbb{D}$ with bounded entropies, $ \mathcal{E}(\f U_n) \leq C $, there is a weakly convergent subsequence with limit $\f U \in \mathbb{D}$. Indeed, \eqref{eq:superlin.growth} yields the existence of a subsequence weakly converging to $\f U$ in $L^1({\mathbb{T}^d};\R^m)$ with $ \mathcal{E}(\f U) \leq C$, see Lemma~\ref{lem:delavalle} below. A diagonalization argument gives a sequence $\{ \Phi_n\}_{n\in\N}\subset\Y$ with $ D\eta^*\circ\Phi_n \rightharpoonup \f U$ in $L^1({\mathbb{T}^d};\R^m)$, which shows $\f U \in \mathbb{D}$.
In the case of a quadratic functional $\eta(\f y)=a\snorm{\f y}^2$, $a>0$,
the set $\mathbb{D}$ is the weak closure of $\Y$ in $L^1({\mathbb{T}^d};\R^m)$.
Since $\Y$ is a linear subspace and $\eta$ is quadratic,
this is nothing else than the strong closure of $\Y$ in
$L^2({\mathbb{T}^d};\R^m)$.
In particular, the convexity of $\mathbb{D}$ is satisfied trivially.
In the case $\Y=\C^1({\mathbb{T}^d};\R^m)$,
we have $\mathbb{D}=\dom\mathcal{E}$;
in particular, $\mathbb{D}$ is convex.
Since $\dom(\partial\mathcal{E})$ is dense in $\dom\mathcal{E}$
(see \cite[Corollary 2.44]{barbu}),
this identity follows from the approximation property stated above
and the inclusion $\dom(\partial\mathcal{E})\subset\mathbb{D}$.
To see the latter, let
$\f U\in\dom(\partial\mathcal{E})$.
From~\cite[Prop.~2.53]{barbu}, we infer
the existence of $\Phi \in L^\infty( {\mathbb{T}^d};\R^m)$ such that
$\Phi ( \f x ) \in \partial \eta(\f U(\f x ))$ for a.a.~$\f x\in{\mathbb{T}^d}$,
that is,
$D\eta^*(\Phi ( \f x )) = \f U(\f x )$
by the Fenchel equivalences~\eqref{eq:fenchel}.
The density of $\C^1({\mathbb{T}^d};\R^m) $ in $L^\infty( {\mathbb{T}^d};\R^m)$ with respect to the weak$^*$ topology guarantees the existence of a sequence $\{ \Phi _n\}_{n\in\N} \subset \C^1({\mathbb{T}^d};\R^m) $ with $ \| \Phi_n \|_{L^\infty({\mathbb{T}^d};\R^m)} \leq \| \Phi \|_{L^\infty({\mathbb{T}^d};\R^m)} $ and $ \Phi_n \ra \Phi$ a.e.~in ${\mathbb{T}^d}$, see~\cite[Ex.~4.25]{brezis}. By the continuity of $D\eta^*$, Lebesgue's convergence theorem allows us to conclude that $D\eta^*(\Phi_n) \ra \f U $ in $L^1({\mathbb{T}^d};\R^m)$,
which shows $\f U\in\mathbb{D}$.
\end{remark}
\begin{remark}
Instead of assuming that $\eta(\f 0)=0$ and
$ \eta\geq 0$,
we may consider a function $\eta : \R^m \to (-\infty,\infty]$
that attains its minimum at $\f 0$.
Indeed, the original assumptions
can then be recovered by simply adding a suitable constant to $\eta$.
\end{remark}
\begin{remark}\label{rem:integralcondition}
Equation \eqref{eq:integralFentropy} ensures that the
total entropy is conserved along smooth solutions.
Indeed, if $\f U$ is a solution
and all functions are sufficiently smooth,
then we formally have
\[
\ddt\mathcal{E}(\f U)
=\!\!\int_{\mathbb{T}^d} \partial_t \f U\cdot D\eta(\f U) \,\mathrm{d}\f x
=-\!\!\int_{\mathbb{T}^d} [\dv \f F(\f U)]\cdot D\eta(\f U) \,\mathrm{d}\f x
= \!\!\int_{\mathbb{T}^d} \f F( \f U):\nabla D\eta(\f U)\,\mathrm{d}\f x
= 0,
\]
where the last identity follows from
\eqref{eq:integralFentropy} with
$\Phi=D\eta(\f U)$.
Classically,
this conservation property is ensured by
requiring the existence of an
entropy flux $\f q: \R^ m \ra \R^d$ such that
\begin{equation}
\label{eq:entropypair}
D \partial_text{\partial_textit{e}}ta (\f y) ^T D\f F (\f y )= D \f q (\f y)^T
\partial_text{\partial_textit{e}}nd{equation}
for all $\f y\in\R^m$,
which is a shorthand for the relation
\[
D \partial_text{\partial_textit{e}}ta (\f y) ^T D \f F_j (\f y )= D \f q_j(\f y)^T \qquad (j=1,\ldots,d).
\]
Clearly, this identity only makes sense if $\partial_text{\partial_textit{e}}ta$ and, in particular,
$\f F$ are smooth enough.
This smoothness cannot be guaranteed for general conservation laws
as we shall see in Section \ref{sec:comprEuler}
in the context of the compressible Euler equations.
However, if this is the case,
then \partial_text{\partial_textit{e}}qref{eq:integralFentropy} follows from \partial_text{\partial_textit{e}}qref{eq:entropypair}.
Indeed, setting $\f U=D\partial_text{\partial_textit{e}}ta^\ast(\Phi)$, that is, $\Phi=D\partial_text{\partial_textit{e}}ta(\f U)$,
and integrating by parts,
we deduce
\[
\begin{aligned}
\int_{\mathbb{T}^d} \f F( D\partial_text{\partial_textit{e}}ta^\ast(\Phi)):\nabla\Phi\,\mathrm{d}\f x
&=\int_{\mathbb{T}^d} \f F( \f U):\nabla D\partial_text{\partial_textit{e}}ta(\f U)\,\mathrm{d}\f x
=-\int_{\mathbb{T}^d} \bb{D\partial_text{\partial_textit{e}}ta(\f U)^T D\f F( \f U)}:\nabla \f U\,\mathrm{d}\f x
\\
&=-\int_{\mathbb{T}^d} D \f q(\f U):\nabla \f U\,\mathrm{d}\f x
=-\int_{\mathbb{T}^d} \dv \f q(\f U)\,\mathrm{d}\f x
= 0 \,.
\partial_text{\partial_textit{e}}nd{aligned}
\]
Instead of verifying \eqref{eq:integralFentropy} directly,
one can also show existence of a vector field $\tilde{\f q}\colon\R^m\to\R^d$
such that $\tilde{\f q}\circ D\eta^\ast\in\C^1(\R^m;\R^d)$ and
\begin{equation}\label{eq:entropyflux.new}
\forall \f z\in\R^m:\quad
\f F(D\eta^\ast(\f z))= D\bb{\tilde{\f q}\circ D\eta^\ast}(\f z).
\end{equation}
This implies
\[
\f F(D\eta^\ast(\Phi)):\nabla\Phi= \dv \bb{\tilde{\f q}(D\eta^\ast(\Phi))}
\]
for all $\Phi\in\C^1({\mathbb{T}^d};\R^m)$, so that
\eqref{eq:integralFentropy} follows from the divergence theorem.
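For instance, in the simple scalar case $m=1$ with a linear flux $\f F(y)=y\,\f b^T$ for some fixed vector $\f b\in\R^d$ (a mere illustration, not one of the applications studied below), condition \eqref{eq:integralFentropy} can be verified directly: for any entropy $\eta$ as in Hypothesis~\ref{hypo} and any $\Phi\in\Y$, the chain rule yields
\[
\int_{\mathbb{T}^d} \f F(D\eta^\ast(\Phi)):\nabla\Phi\,\mathrm{d}\f x
=\int_{\mathbb{T}^d} D\eta^\ast(\Phi)\,\f b\cdot\nabla\Phi\,\mathrm{d}\f x
=\int_{\mathbb{T}^d} \f b\cdot\nabla\bp{\eta^\ast\circ\Phi}\,\mathrm{d}\f x
=0
\]
by the periodicity of the torus, where we used that $\eta^\ast\in\C^1(\R)$ by Lemma~\ref{lem:convex} below.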
Observe that, in contrast to \partial_text{\partial_textit{e}}qref{eq:entropypair},
condition \partial_text{\partial_textit{e}}qref{eq:entropyflux.new} does not require
$\f F$ to be differentiable.
Moreover, we do not require differentiability of
$\partial_tilde{\f q}$ and $D\partial_text{\partial_textit{e}}ta^\ast$
but merely of their composition.
This distinction can be helpful since
there are standard cases where
$\partial_text{\partial_textit{e}}ta^\ast$ is not twice differentiable,
for example, the compressible Euler equations,
which we study in Section~\ref{sec:comprEuler}.
Formally, the relations~\partial_text{\partial_textit{e}}qref{eq:entropyflux.new} and~\partial_text{\partial_textit{e}}qref{eq:entropypair}
are equivalent in the case that $\partial_text{\partial_textit{e}}ta^* \in \C^2(\R^m)$
and $D^2\partial_text{\partial_textit{e}}ta^\ast(\f z)$ is invertible at each $\f z\in \R^m$.
Indeed, choosing $\f y = D \partial_text{\partial_textit{e}}ta^*(\f z)$, we find by~\partial_text{\partial_textit{e}}qref{eq:fenchel} and the chain rule that
\[
\begin{aligned}
\bb{D\partial_text{\partial_textit{e}}ta(D\partial_text{\partial_textit{e}}ta^*(\f z))^T & D \f F(D\partial_text{\partial_textit{e}}ta^*(\f z)) - D \f q(D \partial_text{\partial_textit{e}}ta^*(\f z)) }
D^2\partial_text{\partial_textit{e}}ta^*(\f z )
\\
&=\f z ^T D\nb{ \f F(D\partial_text{\partial_textit{e}}ta^*(\f z)) } - D\nb{ \f q(D \partial_text{\partial_textit{e}}ta^*(\f z)) }
\\
&= D \bb{\f z ^T \f F(D\partial_text{\partial_textit{e}}ta^*(\f z)) - \f q(D \partial_text{\partial_textit{e}}ta^*(\f z))}
-\f F( D \partial_text{\partial_textit{e}}ta^*(\f z)) \,.
\partial_text{\partial_textit{e}}nd{aligned}
\]
Hence,
\partial_text{\partial_textit{e}}qref{eq:entropypair} is satisfied
if and only if \partial_text{\partial_textit{e}}qref{eq:entropyflux.new} holds for
$\partial_tilde{\f q}(\f U) = D\partial_text{\partial_textit{e}}ta(\f U)^T \f F (\f U) - \f q(\f U)$.
\partial_text{\partial_textit{e}}nd{remark}
\begin{remark}
In case that $\f F$ is entropy-convex, \textit{i.e.}, there exists a constant $\lambda > 0$ such that
$|\f F| + \lambda \eta$ is a convex, lower semi-continuous function on $\R^m$,
we may choose $\mathcal{K}(\Phi)= \lambda\| \nabla \Phi\|_{L^\infty({\mathbb{T}^d})}$.
We shall use a similar functional $\mathcal K$ in Subsection \ref{sec:magneto},
but finer choices may be possible, as we shall see
in Subsection~\ref{sec:incompEuler} and Section~\ref{sec:comprEuler}.
\end{remark}
\begin{remark}[Boundary conditions]
In order to simplify the analysis, we restrict ourselves to the case of periodic boundary conditions.
However, the method can also be adapted to more general boundary conditions, which can usually be incorporated into our framework by modifying the space of test functions $\mathbb Y$;
see also Remark~\ref{rem:boundary} below.
\end{remark}
\subsection{Auxiliary results}
Before we start with the analysis of energy-variational solutions,
we prepare several auxiliary lemmas.
We start with the following basic result
on an affine linear variational inequality.
\begin{lemma}\label{lem:var.affine}
Let $\mathbb X$ be a Banach space,
and let $a_1, a_2 \in \R$ and $y_1,y_2\in \mathbb X^\ast$ such that
\[
a_1 + \langle y_1, x \rangle \leq a_2 + \langle y_2, x \rangle
\]
for all $x\in \mathbb X$.
Then $a_1\leq a_2$ and $y_1=y_2$.
\end{lemma}
\begin{proof}
The choice $x=0$ directly yields $a_1\leq a_2$.
To infer $y_1=y_2$, let $\bar x\in\X$ and $\lambda >0$.
Choosing $x=\lambda \bar x$ and dividing by $\lambda$, we deduce
\[
\lambda^{-1}a_1 + \langle y_1, \bar x \rangle
\leq \lambda^{-1}a_2 + \langle y_2,\bar x\rangle.
\]
Passing to the limit $\lambda\to\infty$ yields
$\langle y_1,\bar x\rangle \leq \langle y_2,\bar x\rangle$.
Choosing $x=-\lambda \bar x$ and proceeding in the same way
results in the converse inequality, and we obtain
$\langle y_1,\bar x\rangle = \langle y_2,\bar x\rangle$.
Since $\bar x\in \mathbb X$ was arbitrary,
this yields $y_1=y_2$ and completes the proof.
\end{proof}
The next result yields the equivalence of a
pointwise inequality and its variational formulation.
\begin{lemma}\label{lem:invar}
Let $f\in L^1(0,T)$, $g\in L^\infty(0,T)$ and $g_0\in\R$.
Then the following two statements are equivalent:
\begin{enumerate}[label=\roman*.]
\item
The inequality
\begin{equation}
-\int_0^T \phi'(\tau) g(\tau) \,\mathrm{d} \tau + \int_0^T \phi(\tau) f(\tau) \,\mathrm{d} \tau - \phi(0)g_0 \leq 0
\label{ineq1}
\end{equation}
holds for all $\phi \in {\C}^1_c ([0,T))$ with $\phi \geq 0$.
\item
The inequality
\begin{equation}
g(t) -g(s) + \int_s^t f(\tau) \,\mathrm{d} \tau \leq 0
\label{ineq2}
\end{equation}
holds for a.e.~$s,\, t\in[0,T)$ with $s<t$,
including $s=0$ if we replace $g(0)$ with $g_0$.
\end{enumerate}
If one of these conditions is satisfied,
then $g$ can be identified with a function in $\BV$
such that
\begin{equation}
g(t+) -g(s-) + \int_s^t f(\tau) \,\mathrm{d} \tau \leq 0 \,
\label{ineq.pw}
\end{equation}
for all $s,t\in[0,T)$ with $s\leq t$,
where we set $g(0-)\coloneqq g_0$.
In particular, it holds that $g(0+)\leq g_0$ and $g(t+)\leq g(t-)$ for all $t\in(0,T)$.
\end{lemma}
\begin{proof}
To see that \eqref{ineq1} implies \eqref{ineq2},
one can use a standard procedure and
approximate the indicator function of the interval $(s,t)$ by
elements of $\C^1_c([0,T))$.
For the converse implication,
first note that \eqref{ineq2} implies that $g$ coincides a.e.~with an element of $\BV$.
Hence, one-sided limits of $g$ exist at each point,
and we deduce \eqref{ineq.pw} from \eqref{ineq2}.
The choice $s=t$ in \eqref{ineq.pw} implies $g(t+)\leq g(t-)$
and $g(0+)\leq g_0$.
Now let $0\leq\phi \in {\C}^1_c ([0,T))$
and consider a partition $0=s_0\leq t_0< s_1<t_1<\dots< s_N<t_N<T$ of $[0,T]$
such that
\[
\phi' \geq 0 \quad\text{in } [t_{j-1},s_j]\,,
\qquad
\phi' \leq 0 \quad\text{in } [s_j,t_j]\,,
\qquad
\phi=\phi'=0 \quad \text{in } [t_N,T]\,.
\]
To show \eqref{ineq1},
we subdivide the left-hand side of this inequality accordingly.
Since $\phi'\leq0$ in $[s_j,t_j]$,
we can use \eqref{ineq.pw} with $s\searrow s_j$, which gives
$g(\tau)\leq g(s_j+)-\int_{s_j}^{\tau}f(r)\,\mathrm{d} r$ for a.a.~$\tau\in(s_j,t_j)$,
and integration by parts
to estimate
\[
\begin{aligned}
&-\int_{s_j}^{t_j}\phi'(\tau) g(\tau)\,\mathrm{d}\tau
\leq
-\int_{s_j}^{t_j}\phi'(\tau) \Bp{g(s_j+)-\int_{s_j}^{\tau}f(r)\,\mathrm{d} r}\,\mathrm{d}\tau
\\
&\qquad
= -\phi(t_j)\Bp{g(s_j+)-\int_{s_j}^{t_j}f(r)\,\mathrm{d} r}+\phi(s_j) g(s_j+)
-\int_{s_j}^{t_j}\phi(\tau) f(\tau)\,\mathrm{d}\tau,
\end{aligned}
\]
where for $j=0$ we have to replace $g(s_0+)$ with $g_0$.
Since $\phi'\geq0$ in $[t_{j-1},s_j]$,
we can use \eqref{ineq.pw} with $t=s_j$ in a similar way
to conclude
\[
\begin{aligned}
&-\int_{t_{j-1}}^{s_j}\phi'(\tau) g(\tau)\,\mathrm{d}\tau
\leq
-\int_{t_{j-1}}^{s_j}\phi'(\tau) \Bp{g(s_j+)+\int_{\tau}^{s_j}f(r)\,\mathrm{d} r}\,\mathrm{d}\tau
\\
&\qquad
= -\phi(s_j) g(s_j+)
+\phi(t_{j-1})\Bp{g(s_j+)+\int_{t_{j-1}}^{s_j}f(r)\,\mathrm{d} r}
-\int_{t_{j-1}}^{s_j}\phi(\tau) f(\tau)\,\mathrm{d}\tau.
\end{aligned}
\]
Summing up and using $\phi=\phi'=0$ in $[t_N,T]$, we obtain
\[
\begin{aligned}
-\int_0^T & \phi'(\tau) g(\tau)\,\mathrm{d}\tau+\int_0^T \phi(\tau) f(\tau) \,\mathrm{d} \tau
-\phi(0)g_0
\\
&=-\sum_{j=0}^N \int_{s_j}^{t_j}\phi'(\tau) g(\tau)\,\mathrm{d}\tau
-\sum_{j=1}^N\int_{t_{j-1}}^{s_j}\phi'(\tau) g(\tau)\,\mathrm{d}\tau
+\int_0^T \phi(\tau) f(\tau) \,\mathrm{d} \tau -\phi(0)g_0
\\
&\leq
\sum_{j=0}^{N-1} \phi(t_j) \Bp{g(s_{j+1}+)-g(s_j+)+\int_{s_j}^{s_{j+1}}f(r)\,\mathrm{d} r},
\end{aligned}
\]
where the terms $\pm\phi(s_j)g(s_j+)$ for $j\geq 1$, the terms $\pm\phi(0)g_0$ and the integrals involving $\phi f$ cancel in the summation, and where $g(s_0+)$ is again replaced with $g_0$.
Since $\phi\geq 0$ and each summand in the last line is non-positive,
which follows from inequality~\eqref{ineq.pw} by letting $s\searrow s_j$
(and by taking $s=0$ for $j=0$),
we can estimate the right-hand side by $0$
and finally conclude~\eqref{ineq1}.
\end{proof}
Next we show an adaptation of a well-known
theorem by de la Vall\'ee Poussin, see
\cite[Sect.~1.2, Theorem 2]{RaoRen_OrliczSpaces_1991} for example.
For the sake of completeness, we give a proof here.
Observe that the statement remains valid if
${\mathbb{T}^d}$ is replaced with any other finite measure space.
\begin{lemma}\label{lem:delavalle}
Let $\psi : \R^m \ra [0,\infty]$ have superlinear growth, \textit{i.e.},
$\lim_{| \f y| \ra \infty} \psi( \f y) / | \f y| = \infty $,
and let $\mathcal{F}\subset L^1({\mathbb{T}^d};\R^m) $ and
$C>0$ such that
\[
\forall\, \f U \in \mathcal{F}: \quad
\int_{\mathbb{T}^d} \psi(\f U ) \,\mathrm{d} \f x \leq C \,.
\]
Then the set $\mathcal{F}$ is equi-integrable and therewith
relatively weakly compact in $L^1({\mathbb{T}^d}; \R^m)$.
\end{lemma}
\begin{proof}
Let $\varepsilon>0$ and set $M=2C/\varepsilon$.
By assumption, we can choose $R>0$ so large that $\snorm{\f y}>R$
implies $\psi(\f y)> M\snorm{\f y}$.
Let $A\subset{\mathbb{T}^d}$ be a measurable set
with $\snorm{A}<\frac{\varepsilon}{2R}$.
Then
\[
\int_A\! |\f U| \,\mathrm{d}\f x
= \int_{\setc{\f x\in A}{\,\snorm{\f U(\f x)}\leq R}}\!|\f U| \,\mathrm{d}\f x
+ \int_{\setc{\f x\in A}{\,\snorm{\f U(\f x)}> R}}\!|\f U| \,\mathrm{d}\f x
\leq R \snorm{A}
+ \frac{1}{M}\int_{{\mathbb{T}^d}}\!\psi(\f U(\f x)) \,\mathrm{d}\f x
\leq \varepsilon.
\]
This shows
\[
\lim_{\snorm{A}\to 0} \sup_{\f U\in\mathcal F} \int_A |\f U| \,\mathrm{d}\f x =0,
\]
that is, the equi-integrability of $\mathcal F$.
The relative weak compactness of $\mathcal F$
now follows from the Dunford--Pettis theorem~\cite[Thm.~3.2.1]{dunford}.
\end{proof}
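For instance, applying Lemma~\ref{lem:delavalle} with $\psi(\f y)=\snorm{\f y}^2$ shows that every family that is bounded in $L^2({\mathbb{T}^d};\R^m)$ is equi-integrable and therefore relatively weakly compact in $L^1({\mathbb{T}^d};\R^m)$.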
The next lemma collects
useful properties
of convex functionals with superlinear growth.
\begin{lemma}\label{lem:convex}
Let
$\eta :\R^m \ra [0,\infty]$ be a strictly convex, lower semi-continuous function with $\eta (\f 0 )= 0$ and
\eqref{eq:superlin.growth}.
Then the set-valued operator
$\partial \eta : \R^m \ra \R^m$ is maximal monotone and surjective.
Moreover, the convex conjugate $\eta^\ast$ is globally defined and continuously differentiable.
In particular,
\[
\forall \f z\in\R^m: \quad
(\partial\eta)^{-1}(\set{\f z})=\partial\eta^\ast(\f z)=\set{D\eta^\ast(\f z)}.
\]
\end{lemma}
\begin{proof}
The subdifferential $\partial\eta$ induces a maximal monotone operator according to~\cite[Thm.~2.43]{barbu}, and from~\cite[Prop.~2.47]{barbu} we infer that this operator is surjective.
The Fenchel equivalences \eqref{eq:fenchel} allow us to identify its inverse with the subdifferential of the conjugate $\eta^*$.
Note that $\eta^*$ is even Gateaux differentiable~\cite[Rem.~2.41 and Prop.~2.40]{barbu}
and continuous with $\dom\eta^\ast=\R^m$~\cite[Prop.~2.25 and Thm.~2.14]{barbu}.
The assertion that $\partial \eta^*$ is single-valued and continuous can be found in~\cite[Thm.~5.20]{roubicek}.
\end{proof}
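As a simple illustration of Lemma~\ref{lem:convex}, consider the power-law entropy $\eta(\f y)=\frac1p\snorm{\f y}^p$ with $p\in(1,\infty)$, which is strictly convex, vanishes at $\f 0$ and has superlinear growth. A direct computation gives
\[
\eta^\ast(\f z)=\frac{1}{p'}\snorm{\f z}^{p'},
\qquad
D\eta^\ast(\f z)=\snorm{\f z}^{p'-2}\f z
\quad\text{(understood as $\f 0$ for $\f z=\f 0$)},
\qquad
\tfrac1p+\tfrac1{p'}=1,
\]
so that $\eta^\ast$ is indeed finite and continuously differentiable on all of $\R^m$.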
We use some of these properties
to prove the following lemma, which shows how to
continuously interpolate between $0$ and a given value in the range of $\mathcal{E}$
defined in~\eqref{eq:E}.
\begin{lemma}\label{lem:surjective}
In the situation of Lemma \ref{lem:convex},
let $\Phi \in\C({\mathbb{T}^d};\R^m)$ and $\tilde{\f U}= D\eta^\ast\circ\Phi$.
Then the mapping
\[
\mathcal G\colon [0,1]\to [0,\mathcal{E}(\tilde{\f U})], \quad
\alpha \mapsto
\mathcal{E} ( D \eta^* (\alpha \Phi))
\]
is well defined, continuous and surjective.
\end{lemma}
\begin{proof}
Fix $\f x\in{\mathbb{T}^d}$, and let
$\f y =\partial_tilde{\f U}(\f x)\in \dom \partial\partial_text{\partial_textit{e}}ta$
and $\f z=\Phi(\f x)\in\partial\partial_text{\partial_textit{e}}ta(\f y)$.
Consider
\[
f\colon [0,1]\partial_to [0,\infty], \quad
\alpha \mapsto
\partial_text{\partial_textit{e}}ta ( D \partial_text{\partial_textit{e}}ta^* (\alpha \f z))
\]
Since $\partial \partial_text{\partial_textit{e}}ta^\ast$ has full domain and is
single valued according to Lemma~\ref{lem:convex},
the mapping is well defined.
Via the Fenchel equivalences \partial_text{\partial_textit{e}}qref{eq:fenchel}, we may further express $f$ as
\[
f(\alpha) = \partial_text{\partial_textit{e}}ta( D\partial_text{\partial_textit{e}}ta^* (\alpha \f z)) = \langle D \partial_text{\partial_textit{e}}ta^* (\alpha \f z ) , \alpha \f z \rangle - \partial_text{\partial_textit{e}}ta^* ( \alpha \f z) .
\]
This shows that $f(\alpha)$ is finite
and that $f$ is continuous
since $\partial_text{\partial_textit{e}}ta^*$ and $D\partial_text{\partial_textit{e}}ta^* $ are continuous by Lemma~\ref{lem:convex}.
Moreover, $f(0)=0$ and $f(1)=\partial_text{\partial_textit{e}}ta(\f y)$,
and via Fenchel's identity and the monotonicity of $D \partial_text{\partial_textit{e}}ta^*$,
we further observe for $0 \leq \beta < \alpha \leq 1 $ that
\[
\begin{aligned}
f(\alpha )-f(\beta) &={}
\langle D\partial_text{\partial_textit{e}}ta^*(\alpha \f z) , \alpha \f z\rangle - \langle D\partial_text{\partial_textit{e}}ta^*(\beta \f z) \,\beta \f z\rangle -\bp{\partial_text{\partial_textit{e}}ta^*(\alpha \f z)- \partial_text{\partial_textit{e}}ta^*(\beta \f z)} \\
&\geq \langle D\partial_text{\partial_textit{e}}ta^*(\alpha \f z) ,\alpha \f z\rangle - \langle D\partial_text{\partial_textit{e}}ta^*(\beta \f z) ,\beta \f z\rangle + \langle D\partial_text{\partial_textit{e}}ta^* (\alpha \f z),\beta \f z - \alpha \f z\rangle
\\
&= \frac{\beta}{\alpha - \beta } \left \langle D\partial_text{\partial_textit{e}}ta^*(\alpha \f z)- D\partial_text{\partial_textit{e}}ta^*(\beta \f z) ,\alpha \f z - \beta \f z \right \rangle \geq 0 \,.
\partial_text{\partial_textit{e}}nd{aligned}
\]
Hence, $f$ is a continuous and non-decreasing mapping with range $[0,\partial_text{\partial_textit{e}}ta(\f y)]$.
This implies that the mapping $\mathcal G$ is well defined with
$0=\mathcal G(0)\leq\mathcal G(\alpha)\leq \mathcal G(1)=\mathcal E(\partial_tilde{\f U})$
for all $\alpha\in[0,1]$.
Using Lebesgue's theorem on dominated convergence,
we further conclude that $\mathcal G$ is continuous,
which also implies that $\mathcal G$ is surjective.
\partial_text{\partial_textit{e}}nd{proof}
We shall also make use of the following result on the extension
of certain linear functionals.
\begin{lemma}\label{lem:hahn}
Let $ \f l : \mathcal{V} \ra \R$ be a linear continuous functional,
where $ \mathcal{V} $ is a closed subspace of
\[
\mathcal U:= \setcl{\f \varphi\in \C_0^1({\T}\times[0,T);\R^d)}{ \int_{\T}\f\varphi\,\mathrm{d} x =0}.
\]
Set
\[
\mathcal I \colon \mathcal U \to L^1(0,T; \C({\T}; \R_{\mathrm{sym}}^{d\times d })),\qquad
\mathcal I(\f \psi)=(\nabla\f\psi)_{\mathrm{sym}},
\]
and let
$\mathfrak p : L^1(0,T;\C({\T} ; \R^{d\times d}_{\text{sym}})) \ra \R$ be a sublinear mapping such that
\begin{equation}
\forall \f \psi\in \mathcal V \colon \quad \langle \f l , \f \psi \rangle \leq \mathfrak p ( \mathcal{I}(\f\psi ) ) \,.\label{est:l}
\end{equation}
Then there exists
an element
$$\mathfrak R \in (L^1(0,T; \C({\T}; \R_{\text{sym}}^{d\times d }
)))^*=L^\infty_{w^*}(0,T;\mathcal{M}({\T} ; \R_{\text{sym}}^{d\times d}
)) $$
satisfying
\[
\forall\Phi\in L^1(0,T; \C({\T}; \R_{\text{sym}}^{d\times d })):\ \langle -\mathfrak R, \Phi\rangle \leq \mathfrak p(\Phi),
\qquad
\forall\f \psi\in \mathcal{V}: \
\langle -\mathfrak R, \mathcal I(\f\psi)\rangle = \langle \f l, \f \psi\rangle.
\]
\end{lemma}
\begin{proof}
First consider $\f \psi\in\mathcal V$ with $\mathcal I(\f \psi)=0$.
This implies that $\f \psi(\cdot,t)$ is affine linear,
and since $\f \psi\in\mathcal V$ is spatially periodic and has vanishing mean value,
this is only possible for $\f \psi=0$.
Therefore, $\mathcal I$ is injective,
and on its image $\mathcal W=\mathcal I(\mathcal V)$
we can define the functional $L$ by $\langle L,\Psi\rangle=\langle \f l,\f \psi\rangle$
for $\Psi=\mathcal I(\f \psi)\in\mathcal W$.
Then estimate \eqref{est:l} implies
\begin{equation}\label{linearformest}
\langle L, \Psi \rangle \leq \mathfrak p(\Psi)
\end{equation}
for all $\Psi\in\mathcal W\subset L^1(0,T;\C({\T} ; \R^{d\times d} _{\text{sym}}))$.
By the Hahn--Banach theorem (see e.g.~\cite[Thm~1.1]{brezis}), we may extend $L$
from $\mathcal W$ to a linear functional on $L^1(0,T;\C({\T} ; \R^{d\times d} _{\text{sym}}))$ that is dominated by the sublinear mapping $\mathfrak p$.
Using the Riesz representation theorem,
we may identify this extension with an object $-\mathfrak R$
such that the asserted properties are satisfied.
\end{proof}
\section{Properties and existence of energy-variational solutions}
\label{sec:hyper}
In this section we collect several general properties
of energy-variational solutions that follow directly from Definition~\ref{def:envar}.
Moreover, under additional regularity assumptions, we can show
a relative entropy inequality, which yields a weak-strong uniqueness principle.
Finally, in Subsection~\ref{subsec:existence},
we introduce a time-discrete scheme that leads to
the existence of energy-variational solutions as claimed in
Theorem \ref{thm:main}.
\subsection{General properties}
Let us begin with some continuity properties of
energy-variational solutions,
which follow directly from Definition~\ref{def:envar}.
\begin{proposition}\label{prop:reg}
Let $(\f U, E)$ be an energy-variational solution in the sense of Definition~\ref{def:envar}.
Then $\f U$ and $E$ can be redefined on a subset of $[0,T]$ of measure zero
such that $E$ is a non-increasing function
and such that $ \f U \in \C_{w^*}([0,T];\Y^*)$
with $\f U(0)=\f U_0$ in $\Y^\ast$.
Then inequality~\eqref{envarform} is fulfilled everywhere in $[0,T]$ in the sense that
for all $\Phi \in \C^1( [0,T]; \Y )$ it holds that
\begin{equation}
\left[ E - \langle \f U , \Phi \rangle\right ] \Big|_{s-}^{t+} + \int_s^t \int_{{\mathbb{T}^d}}\f U \cdot \partial_t \Phi + \f F (\f U ) : \nabla \Phi+ \mathcal{K}(\Phi) \left [\mathcal{E} (\f U) - E \right ] \,\mathrm{d} \f x \,\mathrm{d} \tau \leq 0 \, \label{envarform2}
\end{equation}
for all $s\leq t \in [0,T)$,
where $E(0-)-\langle \f U(0-), \Phi(0-) \rangle
\coloneqq E(0+)-\langle \f U_0, \Phi(0) \rangle$.
\end{proposition}
\begin{proof}
Setting $\Phi\equiv 0$ in inequality~\eqref{envarform}, we infer
that $ E\big|^t_s \leq 0$ for a.e.~$t>s\in(0,T)$.
Since $E \in \BV$, all left-sided and right-sided limits exist and $E$ is continuous
except for countably many points,
so that we can redefine $E$ such that it is non-increasing.
For any fixed $\Phi \in \C^1( [0,T]; \Y )$ we further observe that
\[
\begin{aligned}
\left[E- \langle \f U , \Phi \rangle\right ] \Big|_{s}^t \leq{}& - \int_s^t \int_{{\mathbb{T}^d}}\f U \cdot \partial_t \Phi + \f F (\f U ) : \nabla \Phi+ \mathcal{K}(\Phi) \left [\mathcal{E} (\f U)-E \right ] \,\mathrm{d} \f x \,\mathrm{d} \tau
\\
\leq {}& \int_s^t \Bb{\int_{{\mathbb{T}^d}} \eta(\f U)+\eta^*(\partial_t \Phi) \,\mathrm{d} \f x + \mathcal{K}(\Phi) E }\,\mathrm{d} \tau
\,
\end{aligned}
\]
for a.e.~$t>s \in (0,T)$, where we used
the Fenchel--Young inequality
and the non-negativity of the function in~\eqref{ass:convex}.
This implies that $t \mapsto E(t)-\langle \f U(t),\Phi(t) \rangle \in \BV$.
In particular, left-sided and right-sided limits of this function exist,
and passing to those limits in \eqref{envarform} yields \eqref{envarform2}.
Choosing now $s=t$ and $\Phi\in\Y$ independent of time, we infer that
\[
\left[ E - \langle \f U , \Phi \rangle\right ] \Big|_{t-}^{t+} \leq 0 \quad \text{for all }t\in(0,T) \text{ and } \Phi \in \Y\,.
\]
Lemma \ref{lem:var.affine} now yields $\f U(t+) = \f U(t-)$ in $\mathbb{Y}^\ast$
for all $t\in (0,T)$, \textit{i.e.,}
we can redefine $\f U$ on a set of measure $0$ such that
$ \f U \in \C_{w^*}([0,T];\Y^*)$.
\end{proof}
\begin{proposition}
\label{prop:betterreg}
Let $(\f U, E)$ be an energy-variational solution in the sense of Definition~\ref{def:envar}, redefined as in Proposition~\ref{prop:reg},
and assume that any two elements $ \f V $, $\f W \in \mathbb{D} $ with $\langle \f V - \f W , \Phi \rangle
= 0 $ for all $ \Phi \in \Y$ satisfy $ \f V = \f W$.
Then we have $\f U \in \C_{w}([0,T]; L^1({\mathbb{T}^d};\R^m))$. Furthermore, if $\mathcal{E}(\f U_0)=E(0)$, then the initial value is attained in the strong sense in $L^1({\mathbb{T}^d};\R^m)$.
\end{proposition}
\begin{proof}
Let $t\in [0,T]$ and consider a sequence $ \{ t_n\}_{n\in\N} \subset [0,T]$ with $t_n \ra t$.
Then
$ \mathcal{E}(\f U(t_n)) \leq E(t_n)\leq E(0)$ for $n\in\N$,
and from \eqref{eq:superlin.growth}
and Lemma~\ref{lem:delavalle} we infer that the set $ \{ \f U(t_n)\}_{n\in\N} $ is relatively weakly compact in $L^1({\mathbb{T}^d};\R^m)$.
Hence, we may extract a subsequence such that
\begin{equation*}
\f U(t_{n_k})\rightharpoonup \f A _t \quad\text{in }L^1({\mathbb{T}^d};\R^m)\,
\end{equation*}
for some $\f A_t\in \mathbb{D} $.
As shown above, we also have
$$ \f U(t_{n_k}) \stackrel{*}{\rightharpoonup} \f U(t) \quad \text{in }\Y^*\,.$$
We infer that $ \langle \f U (t) , \Phi \rangle = \langle \f A_t , \Phi \rangle $ for all $ \Phi \in \Y$.
The assumption implies
$\f U(t)=\f A_t$.
Due to the uniqueness of the weak limit, all subsequences converge to this limit,
so that $ \f U \in \C_w([0,T];L^1({\mathbb{T}^d};\R^m))$.
Moreover, if $\mathcal{E}(\f U_0)=E(0)$, we infer
\[
E(0) \geq \lim _{t\searrow 0 } E( t ) \geq \lim_{t \searrow 0} \mathcal{E}(\f U(t)) \geq \mathcal{E}(\f U_0) = E(0) \,
\]
due to the monotonicity of the function $E$ and the weak lower semi-continuity of $\mathcal{E}$.
We conclude that $ \mathcal{E}(\f U(t)) \ra \mathcal{E}(\f U_0) $ as $t \ra 0$.
Since we also have $\f U (t) \rightharpoonup \f U_0$,
from the strict convexity of $\mathcal{E}$, we infer that $\f U (t) \to \f U_0$ strongly in $L^1({\mathbb{T}^d} ; \R^m) $ by~\cite[Thm.~10.20]{singular}.
\end{proof}
\begin{remark}[Semi-flow property]
We note that energy-variational solutions fulfill the semi-flow property.
This means that
the restriction of a solution to a smaller time interval
as well as the concatenation
of two solutions $(\f U_1, E_1)$ and $(\f U_2, E_2)$
on subsequent time intervals $(t_0,t_1)$ and $(t_1,t_2)$
with $(\f U_1(t_1-), E_1(t_1-))=(\f U_2(t_1+), E_2(t_1+))$
is again a solution.
This follows from Proposition~\ref{prop:reg} due to inequality~\eqref{envarform2}
for all $t\geq s\in[0,T]$
and the weak$^\ast$ continuity of the solution.
\partial_text{\partial_textit{e}}nd{remark}
\begin{proposition}[Solution set]
The set of all energy-variational solutions with common initial value
$\f U_0\in\mathbb{D}$ is convex.
Moreover, let $\mathcal{E}( \f U_0)\leq B$ for some $B>0$, and
let $\mathcal S$ be the set of all energy-variational solutions $\np{\f U, E}$
with initial value $\f U_0\in\mathbb{D}$ and $E(0)\leq B$.
Then $\mathcal S$ is compact in $L^\infty(0,T;L^1({\mathbb{T}^d})) \times \BV$
with respect to the weak$^*$ topology in $\BV$ and
the weak$(^\ast)$ topology in $L^\infty(0,T;L^1({\mathbb{T}^d}))$
defined in \eqref{eq:weakconv.LinfL1}.
\end{proposition}
\begin{proof}
Using the convexity of $\mathcal{E}$ and of the mapping from \eqref{ass:convex},
one readily sees that
all terms involving $(\f U, E)$ appear in a convex way
in \eqref{envarform}.
Therefore, the convex combination of two energy-variational solutions
with coincident initial value
is again an energy-variational solution with the same initial value.
Now consider the set $\mathcal S$.
By Proposition \ref{prop:reg},
we may assume that for all $(\f U,E)\in\mathcal S$ the function $E$ is non-increasing,
which implies that $ | E | _{\text{TV}([0,T])} \leq B$.
Due to the inequality $\mathcal{E}(\f U(t)) \leq E(t)$ for a.a.~$t\in [0,T]$
and
the superlinear growth of $\eta$,
we infer from Lemma~\ref{lem:delavalle}
and Helly's selection theorem (cf.~\cite[Thm.~1.126]{barbu}) that
any sequence in $\mathcal S$ contains a subsequence
$\{ (\f U^n , E^n)\}_{n\in \N}$
such that
\begin{equation}
\begin{aligned}
{\f U}^n &\xrightharpoonup{(\ast)} \f U &&\text{in } L^\infty (0,T; L^1 ({\mathbb{T}^d}))\,, \\
{E}^n &\xrightharpoonup{\phantom{(}\ast\phantom{)}} {E} && \text{in }\BV\,,\\
{E}^n(t) &\xrightarrow{\phantom{(\ast)}} {E}(t) && \text{for all }t\in[0,T]\,.
\label{weakconv}
\end{aligned}
\end{equation}
For the initial values, we may further extract a subsequence such that $E^n(0+) \ra E_0$
for some $E_0\leq B$, and we have
$\f U^n(0) = \f U_0$ for all $n\in \N$.
Using Lemma~\ref{lem:invar},
we may rewrite the energy-variational inequality \eqref{envarform}
in its weak form
\begin{align*}
- \int_0^T \phi' \left [ E ^n - \langle \f U^n, \Phi \rangle \right ]\,\mathrm{d} t &- \phi(0) \left [ E ^n (0+)- \langle \f U_0 ,\Phi(0)\rangle \right ]\\& + \int_0^T \phi \left [\F{\f U^n}{\Phi} + \mathcal{K}(\Phi )\left [\mathcal{E}(\f U^n) - E^n \right ] \right ] \,\mathrm{d} t\leq 0
\end{align*}
for all $\phi \in \C_c^1( [0,T)) $ with $\phi \geq 0$ and for all $\Phi \in \C^1( [0,T]; \Y )$.
Via the convergences~\eqref{weakconv}, we may pass to the limit in this formulation
and obtain, again by Lemma~\ref{lem:invar}, the formulation~\eqref{envarform}.
Moreover, the weak lower semi-continuity of $\mathcal{E}$ allows us to deduce that $ E (t) \geq \mathcal{E}(\f U(t))$ for a.e.~$t\in(0,T)$.
Consequently, $\np{\f U, E}$ is an energy-variational solution in $\mathcal S$.
\end{proof}
\begin{proposition}\label{prop:equality}
Let $(\f U, E) \in L^\infty(0,T;\mathbb{D}) \times \BV$ be an energy-variational solution in the sense of Definition~\ref{def:envar}, and let the regularity weight $\mathcal{K}$ be homogeneous of degree one, \textit{i.e.,} $ \mathcal{K}(\alpha \Phi ) = \alpha \mathcal{K}(\Phi)$ for all $\alpha \in [0,\infty)$ and $\Phi \in \Y$.
Then the inequality~\eqref{envarform} is equivalent to the two inequalities
\begin{equation}\label{twoineq}
E\Big|_s^t \leq 0, \qquad
- \langle \f U , \Phi \rangle \Big|_{s}^t + \int_s^t \!\!\int_{{\mathbb{T}^d}}\f U \cdot \partial_t \Phi + \f F (\f U ) : \nabla \Phi\,\mathrm{d} \f x + \mathcal{K}(\Phi) \left [\mathcal{E} (\f U) - E \right ] \,\mathrm{d} \tau \leq 0
\end{equation}
for a.a.~$s,t\in (0,T)$, $s<t$, and for all $\Phi \in \C^1( [0,T]; \Y )$.
\end{proposition}
\begin{proof}
Summation of the two inequalities in~\eqref{twoineq} directly gives the inequality~\eqref{envarform}.
For the converse direction, the first inequality in~\eqref{twoineq} can be deduced from~\eqref{envarform} by choosing $\Phi \equiv 0$.
In order to infer the second inequality in~\eqref{twoineq}, we choose $\Phi = \alpha \Psi$ in~\eqref{envarform} for $\alpha >0$ and $\Psi \in \C^1([0,T];\Y)$. Multiplying the resulting inequality by $\frac{1}{\alpha}$ implies
\begin{align*}
\left[ \frac{1}{\alpha} E - \langle \f U ,\Psi \rangle\right ] \Big|_{s}^t + \int_s^t \int_{{\mathbb{T}^d}}\f U \cdot \partial_t \Psi + \f F (\f U ) : \nabla \Psi\,\mathrm{d} \f x + \mathcal{K}(\Psi) \left [\mathcal{E} (\f U) - E \right ] \,\mathrm{d} \tau \leq 0\,.
\end{align*}
Passing to the limit $\alpha \ra \infty$, we infer the second inequality in~\eqref{twoineq}.
\end{proof}
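For instance, the regularity weight $\mathcal{K}(\Phi)=\lambda\|\nabla\Phi\|_{L^\infty({\mathbb{T}^d})}$ discussed after Hypothesis~\ref{hypo} satisfies $\mathcal{K}(\alpha\Phi)=\alpha\,\mathcal{K}(\Phi)$ for all $\alpha\in[0,\infty)$, so that Proposition~\ref{prop:equality} applies in particular to this choice.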
\subsection{Relative entropy and weak-strong uniqueness}
In order to derive a relative entropy inequality
for energy-variational solutions, we make the following assumptions
on higher regularity of $\eta$ and $\f F$ in the interior of
the domain of $\eta$.
\begin{hypothesis}\label{hypo:smooth}
Let the assumptions of Hypothesis~\ref{hypo} be fulfilled.
Set $M := \interi \dom \eta $ and assume that $ \eta\big|_{M} \in \C^2(M; \R)$ such that $ D^2 \eta (\f z ) $ is positive definite for all $\f z \in M$,
and that $\f F\big|_{M} \in \C^1(M ; \R^{m \times d} )$ such that there exists a $\tilde{\f q} \in \C^1(M;\R^d)$ fulfilling~\eqref{eq:entropyflux.new}.
\end{hypothesis}
Under these regularity assumptions,
we can introduce the relative total entropy functional $\mathcal{R}: \mathbb{D}\times
\C^1({\mathbb{T}^d}; M)\ra \R$, which is given by
\begin{subequations}\label{RW}
\begin{equation}
\mathcal{R}(\f U| \tilde{\f U}) := \mathcal{E}(\f U) - \mathcal{E}(\tilde{\f U}) -\langle D\mathcal{E}(\tilde{\f U}),\f U- \tilde{\f U}\rangle \,.\label{R}
\end{equation}
Additionally, we define the relative form
$\mathcal{W}:\mathbb{D}\times
\C^1({\mathbb{T}^d}; M)\ra \R $ via
\begin{equation}
\mathcal{W}(\f U | \tilde{\f U}) \!=\! \int_{{\mathbb{T}^d}}\! \nabla D \eta (\tilde{\f U}) :\! \left ( \f F( \f U) - \f F(\tilde{\f U}) - D \f F(\tilde{\f U}) ( \f U {-} \tilde{\f U} ) \right ) \!\,\mathrm{d} \f x + \mathcal{K}(\tilde{\f U} ) \mathcal R(\f U | \tilde{\f U}) \,. \label{W}
\end{equation}
\end{subequations}
We note that the assumption $\tilde{\f U} \in \C^1({\mathbb{T}^d} ; M)$ implies
$\tilde{\f U}\in\mathbb{D}$,
so that $\mathcal R(\f U|\tilde{\f U})$ is finite.
Indeed, since $\eta$ is continuous in the interior of its domain,
the composition $\eta\circ\tilde{\f U}$ is a continuous function on the compact set ${\mathbb{T}^d}$
and thus bounded,
which yields $\mathcal{E}(\tilde{\f U})<\infty$.
Similarly, all compositions of functions in \eqref{W} are bounded,
and $\mathcal W$ is well defined.
Moreover,
both terms $\mathcal R$ and $\mathcal W$ are non-negative
due to the convexity of $\eta$ and of the function from \eqref{ass:convex},
respectively.
\begin{proposition}[Relative entropy inequality]\label{prop:weakstrong}
Let $(\f U, E)$ be an energy-variational solution in the sense of
Definition~\ref{def:envar}, and let Hypothesis~\ref{hypo:smooth} be satisfied.
Then the relative entropy inequality
\begin{equation}
\begin{aligned}
&\left [\mathcal{R}(\f U| \tilde{\f U} ) + E - \mathcal E(\f U)\right ]\Big|_s^t- \int_s^t\mathcal{K}(\tilde{\f U}) \left [\mathcal{R}(\f U| \tilde{\f U} ) + E - \mathcal E(\f U)\right ] \,\mathrm{d} \tau
\\
&\qquad
+ \int_s^t \left [\mathcal{W}(\f U | \tilde{\f U}) +\int_{{\mathbb{T}^d}}\left ( \partial_t \tilde{\f U} + \di \f F(\tilde{\f U}) \right ) \cdot D^ 2\eta(\tilde{\f U}) ( \f U - \tilde{\f U} ) \,\mathrm{d} \f x
\right ]\,\mathrm{d} \tau\leq 0 \,\label{inuniqu}
\end{aligned}
\end{equation}
holds for a.e.~$s,t\in(0,T)$ and all $\tilde{\f U} \in \C^1({\mathbb{T}^d} \times [0,T]; M) $.
\end{proposition}
An immediate consequence of inequality \eqref{inuniqu}
is the following weak-strong uniqueness property.
\begin{corollary}[Weak-strong uniqueness]\label{cor:uni}
Let Hypothesis~\ref{hypo:smooth} be satisfied.
If there exists a strong solution
$\tilde{\f U} \in \C^1(s,t;\mathbb{Y})\cap\C([s,t);\mathbb{Y})$
to \eqref{eq.pde} in some interval $(s,t)\subset[0,T]$,
then $(\tilde{\f U},\mathcal E(\tilde{\f U}))$ coincides with any energy-variational solution $( \f U ,E) \in \C_{w^*}(0,T;\Y^*) \times \BV $
in the sense of Definition~\ref{def:envar}
with $(\f U(s), E(s-))= (\tilde{\f U}(s), \mathcal{E}(\tilde{\f U}(s)))$.
\end{corollary}
\begin{proof}
Since~$\tilde{\f U}$ is a strong solution on $[s,t]$,
it holds that $ \partial_t \tilde{\f U} + \di \f F(\tilde{\f U})=0$ in $(s,t)$.
For any energy-variational solution $(\f U, E)$ such that $\f U(s) = \tilde{\f U}(s) $ and $E(s-) = \mathcal{E}(\tilde{\f U}(s))$, we further observe
\[
\mathcal{R}(\f U(s)| \tilde{\f U}(s))+ E(s-)- \mathcal{E}(\f U(s)) = 0 \,.
\]
From the inequality~\eqref{inuniqu}, we thus infer that
\begin{align*}
\mathcal{R}(\f U(r)| \tilde{\f U}(r) ) + E (r+)- \mathcal{E}(\f U(r))
&+ \int_s^r \mathcal{W}(\f U|\tilde{\f U}) \,\mathrm{d} \tau \\
&\leq \int_s^r\mathcal{K}(\tilde{\f U}) \left [\mathcal{R}(\f U| \tilde{\f U} ) + E - \mathcal{E}(\f U)\right ] \,\mathrm{d} \tau
\end{align*}
for all $r \in [s,t]$.
The convexity of the function from~\eqref{ass:convex} implies $\mathcal W\geq 0$.
From Gronwall's inequality, we infer
that $\mathcal{R}(\f U| \tilde{\f U}) + E - \mathcal{E}(\f U)\leq 0$ in $(s,t)$.
Since $E\geq \mathcal{E}(\f U)$, this implies $\mathcal{R}(\f U| \tilde{\f U})\leq0$, so that
$\f U= \tilde{\f U}$
due to the strict convexity of $\eta$.
\end{proof}
\begin{remark}
The above weak-strong uniqueness result is stronger than the usual weak-strong uniqueness results (cf.~\cite{weakstrongeuler}), which are typically stated as follows: if there exists a strong solution emanating from the same initial data as the generalized solution, then both solutions coincide as long as the strong one exists.
The above result also holds in case that the energy-variational solution coincides with a strong solution at some later point $s$ in the evolution.
However, the solution has to satisfy $E(s-)=\mathcal{E}(\f U(s))$ at such a point in time.
Note that here we do not claim existence of
such regular solutions.
There are many different results on the existence of classical solutions on short time intervals for conservation laws. We refer to~\cite[Ch.~V]{dafermos2} and the references therein.
\end{remark}
It remains to show the relative entropy inequality \eqref{inuniqu}.
\begin{proof}[Proof of Proposition~\ref{prop:weakstrong}]
For any smooth function $\partial_tilde{\f U} \in \C^1({\mathbb{T}^d}\partial_times [0,T]; M) $, we observe by the fundamental theorem of
calculus and the product rule that
\begin{equation}
\left [ \mathcal{E}(\partial_tilde{\f U}) - \langle D \mathcal{E}(\partial_tilde{\f U}), \partial_tilde{\f U}\rangle \right ]\Big|_s^t + \int_s^t \int_{{\mathbb{T}^d}} D^2\partial_text{\partial_textit{e}}ta (\partial_tilde{\f U}) (\f U-\partial_tilde{\f U} ) \cdot \partial_t \partial_tilde{\f U} - \partial_t D\partial_text{\partial_textit{e}}ta(\partial_tilde{\f U}) \cdot \f U \,\mathrm{d} \f x \,\mathrm{d} \partial_tau = 0 \,.\label{weakstrong1}
\partial_text{\partial_textit{e}}nd{equation}
Note that $\partial_tilde{\f U}$ only takes values in $M$ such that the following calculations are rigorous.
Taking the derivative of the assumed
relation~\partial_text{\partial_textit{e}}qref{eq:entropyflux.new} with respect to $\f z$, we infer
\begin{align*}
D_l \f F_{ij}( D \partial_text{\partial_textit{e}}ta^*(\f z)) D^2_{lk}\partial_text{\partial_textit{e}}ta^*(\f z) =\frac{\partial}{\partial \f z_k} \f F_{ij}(D \partial_text{\partial_textit{e}}ta^*(\f z)) = D^2_{ki} [ \f q_j \circ D\partial_text{\partial_textit{e}}ta^* ] (\f z ) \,.
\partial_text{\partial_textit{e}}nd{align*}
Note that since $ \f z = D \partial_text{\partial_textit{e}}ta (D \partial_text{\partial_textit{e}}ta^*( \f z))$, we infer from the implicit function theorem that $\partial_text{\partial_textit{e}}ta^*$ is twice continuously differentiable with $ D^2 \partial_text{\partial_textit{e}}ta^*(\f z) = \left [ D^2 \partial_text{\partial_textit{e}}ta (D \partial_text{\partial_textit{e}}ta^*( \f z))\right ]^{-1}$.
We may express the derivative of $\f F$ via
\[
D_l \f F_{ij}( D \partial_text{\partial_textit{e}}ta^*(\f z)) = D^2_{ki} [ \f q_j \circ D\partial_text{\partial_textit{e}}ta^* ] (\f z ) D^2_{kl}\partial_text{\partial_textit{e}}ta(D\partial_text{\partial_textit{e}}ta^*(\f z))\,.
\]
Multiplying the above relation by $ D^2\partial_text{\partial_textit{e}}ta(D \partial_text{\partial_textit{e}}ta^*( \f z))$ from the left, we infer by the symmetry of the second derivatives of $\f q$ and $\partial_text{\partial_textit{e}}ta$ that
\begin{align*}
D^2_{im } \partial_text{\partial_textit{e}}ta(D \partial_text{\partial_textit{e}}ta^*( \f z)) D_l \f F_{ij}( D \partial_text{\partial_textit{e}}ta^*(\f z)) &= D^2_{im } \partial_text{\partial_textit{e}}ta(D \partial_text{\partial_textit{e}}ta^*( \f z))D^2_{ki} [ \f q_j \circ D\partial_text{\partial_textit{e}}ta^* ] (\f z ) D^2_{lk } \partial_text{\partial_textit{e}}ta(D \partial_text{\partial_textit{e}}ta^*( \f z))\\& =D^2_{lk } \partial_text{\partial_textit{e}}ta(D \partial_text{\partial_textit{e}}ta^*( \f z)) D_m \f F_{kj}( D \partial_text{\partial_textit{e}}ta^*(\f z))\,.
\partial_text{\partial_textit{e}}nd{align*}
This symmetry can be used to calculate
\begin{align*}
D^2_{im } \eta( \tilde{\f U} )\frac{\partial }{\partial{\f x_j}} \f F_{ij} ( \tilde{\f U}) &= D^2_{im } \eta( \tilde{\f U} ) D_l \f F_{ij}(\tilde{\f U}) \frac{\partial \tilde{\f U}_l}{\partial \f x_j}\\& = D^2_{il } \eta( \tilde{\f U} ) D_m \f F_{ij}(\tilde{\f U}) \frac{\partial \tilde{\f U}_l}{\partial \f x_j} = \frac{\partial}{\partial \f x_j } D_i\eta(\tilde{\f U}) D_m \f F_{ij}(\tilde{\f U}) \,,
\end{align*}
which implies
\[
\begin{aligned}
\di \f F (\tilde{\f U}) \cdot D ^2\eta(\tilde{\f U} ) ( \f U-\tilde{\f U}) &= (D \f F (\tilde{\f U}): \nabla \tilde{\f U}) \cdot D ^2\eta(\tilde{\f U}) ( \f U-\tilde{\f U}) \\
&= D \f F (\tilde{\f U})\dreidots \nabla D \eta(\tilde{\f U})\otimes ( \f U-\tilde{\f U}) \\& =\nabla D \eta(\tilde{\f U}): \left ( D \f F (\tilde{\f U}) ( \f U-\tilde{\f U})\right )\,.
\end{aligned}
\]
Additionally, we may set $\Phi :=D \eta (\tilde{\f U}) $ in~\eqref{eq:integralFentropy} in order to conclude from
the Fenchel equivalences~\eqref{eq:fenchel} that
\[
\int_{{\mathbb{T}^d}} \nabla D \eta (\tilde{\f U}): \f F(\tilde{\f U})\,\mathrm{d} \f x =\int_{{\mathbb{T}^d}} \nabla \Phi : \f F(D \eta^* ( \Phi))\,\mathrm{d} \f x =0\,.
\]
Combining the last two equations, we find
\begin{align}
0 = \int_{{\mathbb{T}^d}} \di \f F (\tilde{\f U}) \cdot D ^2\eta(\tilde{\f U} ) ( \f U-\tilde{\f U}) \,\mathrm{d} \f x - \int_{{\mathbb{T}^d}} \nabla D \eta (\tilde{\f U}): \left [ \f F(\tilde{\f U})+ D \f F (\tilde{\f U}) ( \f U-\tilde{\f U})\right ] \,\mathrm{d} \f x \,.\label{weakstrong2}
\end{align}
Adding the above identities~\eqref{weakstrong1} and~\eqref{weakstrong2} to the inequality~\eqref{envarform} with $\Phi = D \eta(\tilde{\f U})$ implies
\begin{align*}
&\left [ E - \mathcal{E}(\tilde{\f U}) - \langle D \mathcal{E} (\tilde{\f U}) , \f U - \tilde{\f U} \rangle \right ]\Big|_s^t \\
&\qquad+ \int_s^t \int_{{\mathbb{T}^d}} \nabla D \eta(\tilde{\f U} ): \left ( \f F(\f U) -\f F (\tilde{\f U}) - D \f F (\tilde{\f U}) ( \f U-\tilde{\f U})\right ) \,\mathrm{d} \f x \,\mathrm{d} \tau \\
&\qquad+\int_s^t\int_{{\mathbb{T}^d}}\left ( \partial_t \tilde{\f U} + \di \f F(\tilde{\f U}) \right ) \cdot D^ 2\eta(\tilde{\f U}) \left ( \f U - \tilde{\f U}\right ) \,\mathrm{d} \f x+ \mathcal{K}(\tilde{\f U}) \left [\mathcal{E} (\f U) - E \right ] \,\mathrm{d} \tau \leq 0\,,
\end{align*}
which is \eqref{inuniqu}.
\end{proof}
\subsection{Existence of energy-variational solutions}
\label{subsec:existence}
In this subsection
we prove Theorem \ref{thm:main}, that is, we show existence
of energy-variational solutions to the hyperbolic conservation law \eqref{eq}.
To do so, we introduce a semi-discretization scheme in time.
For $N\in\N$, we define $\tau :=T/N$, and we set $t^n:= \tau n$ for $n\in \{ 0,\ldots , N\}$
to obtain
an equidistant partition of $[0,T]$.
We set $\f U^0\coloneqq\f U_0\in\mathbb{D}$,
and in the $n$-th time step, $n\geq 1$,
we compute $\f U^n$ from $ \f U^{n-1}\in \mathbb{D}$
by solving the minimization problem
\begin{equation}\label{eq:timedis}
\begin{aligned}
\f U^n= \argmin _{\f U\in \mathbb{D}; \mathcal{E}(\f U)\leq \mathcal{E}(\f U^{n-1}) } \, &\sup _{ \Phi \in \Y
} \!
\Bigg [
\left (\mathcal{E}(\f U) - \mathcal{E}(\f U^{n-1})\right ) - \left ( \f U - \f U^{n-1}, \Phi \right )
\\
&\quad
+ \tau \left [ \F{\f U}{\Phi} + \mathcal{K}(\Phi ) \left( \mathcal{E}(\f U ) - \mathcal{E}(\f U^{n-1}) \right ) \right ] \Bigg] \,.
\end{aligned}
\end{equation}
\begin{remark}[Comparison to time discretization for gradient flows]
In the theory of gradient flows it is nowadays standard to consider a time-discretization scheme based on a sequential minimization~\cite[Chap.~6]{gradient}.
This is certainly a different setting than in the problem considered here
since the energy is not formally conserved along a gradient flow
but dissipated by some dissipation functional. Nevertheless,
a similarity is that a saddle-point problem has to be solved in every time step. The current algorithm can thus be seen as a first generalization of this technique from gradient flows to more general systems, also including Hamiltonian dynamics. A goal for the future is to combine both approaches in order to find a suitable discretization scheme for general GENERIC systems~\cite{generic}, which combine dissipative and Hamiltonian effects.
\partial_text{\partial_textit{e}}nd{remark}
\begin{remark}[Solving the min-max problem numerically]
It is worth observing that the discrete optimization problem
from~\eqref{eq:timedis} is given in the form of a saddle-point problem.
This is a standard problem in optimization theory and machine learning, and there are different tools to solve such a problem numerically~\cite{numeric}.
\end{remark}
\begin{theorem}[Solution of the time-discrete problem]\label{thm:disex}
For each $\f U^{n-1} \in \mathbb{D}$
there exists a unique solution $ \f U^n$ to the minimization problem~\partial_text{\partial_textit{e}}qref{eq:timedis},
and it holds
\begin{align}
\left (1 +\tau \mathcal{K}(\Phi)\right ) \left ( \mathcal{E}(\f U^n) - \mathcal{E}(\f U^{n-1})\right ) - \langle \f U^n - \f U^{n-1}, \Phi\rangle + \tau \F{\f U^n}{\Phi}
\leq 0 \label{disineq}
\end{align}
for all $\Phi \in\Y$.
\partial_text{\partial_textit{e}}nd{theorem}
\begin{proof}
The proof is divided into different steps:
\partial_textit{Step 1: Functional framework.}
We define the set
\[
\mathbb{D} ^n :={} \left \{ \f U \in \mathbb{D} \mid \mathcal{E}(\f U ) \leq \mathcal{E}(\f U^{n-1})\right \}
\]
and the function
\begin{align*}
\mathcal{F}_n^\tau(\f U | \Phi) :={}& \left ( 1 + \tau \mathcal{K}(\Phi) \right ) \left ( \mathcal{E} (\f U ) - \mathcal{E}(\f U^{n-1}) \right )
- \left \langle \f U - \f U^{n-1} , \Phi \right \rangle
+ \tau \F{\f U}{\Phi} \,.
\end{align*}
Then solving the time-discrete minimization problem \eqref{eq:timedis}
amounts to finding a unique minimizer $\f U^n\in\mathbb{D}^n$ of the function
\[
\mathcal H \colon \mathbb D^n\to\R,
\quad
\mathcal H(\f U)
=\sup_{\Phi \in \mathbb{Y} } \mathcal{F}_n^\tau(\f U | \Phi )\,.
\]
\textit{Step 2: Min-max theorem.}
In order to show that
\begin{equation}
\inf_{\f U \in \mathbb{D}^n
} \sup_{\Phi \in\mathbb{Y}} \mathcal{F}_n^\tau(\f U| \Phi ) = \sup _{\Phi\in\mathbb{Y}} \inf_{\f U \in \mathbb{D}^n
}\mathcal{F}_n^\tau(\f U| \Phi ) \,,\label{minmax}
\end{equation}
we apply a min-max theorem.
Since $\mathcal{E}$ is superlinear,
the set $\mathbb{D}^n $ is weakly compact in $L^1({\mathbb{T}^d};\R^m)$ by Lemma~\ref{lem:delavalle}
and the function
$ \f U \mapsto \mathcal{F}^\tau_n ( \f U | \Phi)$ is convex and weakly lower semi-continuous for every $ \Phi \in\Y $.
Moreover, the function $ \Phi \mapsto \mathcal{F}^\tau_n(\f U| \Phi)$ is concave for all $ \f U \in \mathbb D^n$
since $\mathcal K$ is convex and $\mathcal{E}(\f U)\leq\mathcal{E}(\f U^{n-1})$.
Therefore, \eqref{minmax} follows from Fan's
min-max theorem~\cite[Theorem 2]{Fan1953}.
\textit{Step 3: Inequality~\eqref{disineq}.}
We show $ \inf_{\f U \in \mathbb{D}^n}\mathcal H(\f U) \leq 0$.
To do so,
let $\Phi \in \mathbb{Y} $ be arbitrary and define $ \tilde{\f U} =D\eta^\ast\circ\Phi$
and $\hat{\f U} = D \eta ^* \circ( \alpha \Phi ) $,
where $\alpha>0$ is chosen as follows:
If $\mathcal{E}(\tilde{\f U}) \leq \mathcal{E}(\f U^{n-1})$, we set $\alpha =1$,
so that $\hat{\f U} = \tilde{\f U}$.
If $\mathcal{E}(\tilde{\f U}) > \mathcal{E}(\f U^{n-1})$, we let $\alpha\in(0,1)$
such that $\mathcal{E} ( \hat{\f U}) = \mathcal{E}(\f U^{n-1})$,
which is possible by Lemma \ref{lem:surjective}.
Then
the assumed identity \eqref{eq:integralFentropy}
implies
\[
\int_{\mathbb{T}^d} \f F(\hat{\f U }
) :
\nabla \Phi
\,\mathrm{d} \f x
=\frac{1}{\alpha}\int_{\mathbb{T}^d} \f F( D\eta^\ast(\alpha\Phi(x))):\alpha\nabla\Phi(x)\,\mathrm{d}\f x
=0.
\]
Since
$\alpha\Phi \in\partial \mathcal{E}(\hat{\f U})$,
from the definition of the subdifferential
of $\mathcal{E}$ we obtain
\[
\begin{aligned}
\inf_{\f U\in\mathbb{D}^n} &\mathcal{F}^\tau_n ( \f U | \Phi)
\leq \mathcal{F}^\tau_n ( \hat{\f U } | \Phi )
\\
&= \left ( 1 + \tau \mathcal{K}(\Phi) \right ) \left ( \mathcal{E}(\hat{\f U }) - \mathcal{E}(\f U^{n-1})\right )
-\frac{1}{\alpha} \left \langle\hat{\f U } - \f U^{n-1} ,\alpha\Phi
\right \rangle + \tau \int_{{\mathbb{T}^d}} \f F(\hat{\f U }) : \nabla \Phi \,\mathrm{d} \f x
\\
&\leq \left (1 + \tau \mathcal{K}(\Phi) -\frac{1}{\alpha} \right ) \left ( \mathcal{E}(\hat{\f U }) - \mathcal{E}(\f U^{n-1})\right ) \leq 0\,.
\end{aligned}
\]
The last inequality
follows since either $\alpha=1$ and $\mathcal{E}(\hat{\f U}) \leq \mathcal{E}(\f U^{n-1})$,
or $\alpha\in(0,1)$ and $\mathcal{E}(\hat{\f U}) = \mathcal{E}(\f U^{n-1})$ by construction.
Because $\Phi\in\Y$ was arbitrary,
identity \eqref{minmax} implies $\inf_{\f U \in \mathbb{D}^n}\mathcal H(\f U)\leq 0$.
\textit{Step 4: Solvability of the optimization problem.}
From the identity
\[
\begin{aligned}
\mathcal H(\f U)
&= \left ( \mathcal{E} (\f U ) - \mathcal{E}(\f U^{n-1}) \right ) \\
&\quad + \sup_{\Phi\in \mathbb{Y}}
\left ( \tau \mathcal{K}(\Phi ) (\mathcal{E}(\f U)- \mathcal{E}(\f U^{n-1}) )-
\left \langle \f U - \f U^{n-1} , \Phi \right \rangle
+\tau \F{\f U}{\Phi } \right ),
\end{aligned}
\]
we conclude the strict convexity
of the mapping
$\mathcal H $
from the strict convexity of $\mathcal{E}$ and the convexity of the function in the second line,
which is the supremum of convex functions.
Additionally, $\mathcal H $
is not identically equal to $+\infty $ due to \textit{Step 3}.
Furthermore, we observe the coercivity of $\mathcal H$
via
\[
\mathcal H(\f U)
\geq \mathcal{F}^\tau_n(\f U | \f 0) = \mathcal{E} (\f U ) - \mathcal{E}(\f U^{n-1})\,
\]
since $\mathcal{E}$ is superlinear,
which also implies that
$\mathbb D^n $ is weakly compact in $L^1({\mathbb{T}^d})$ by Lemma~\ref{lem:delavalle}.
In total, $\mathcal H$ is a
strictly convex, weakly lower semicontinuous and coercive function on the weakly compact convex set $\mathbb{D}^n$
and thus has a unique minimizer $\f U^n$.
By \textit{Step 3}, this minimizer satisfies $\sup_{\Phi\in\Y}\mathcal{F}^\tau_n(\f U^n|\Phi)=\mathcal H(\f U^n)=\inf_{\f U\in\mathbb{D}^n}\mathcal H(\f U)\leq 0$, which yields the asserted inequality~\eqref{disineq} for all $\Phi\in\Y$.
\partial_text{\partial_textit{e}}nd{proof}
\begin{proof}[Proof of Theorem~\ref{thm:main}]
We prove the existence of energy-variational solutions
via the convergence of a time-discretization scheme.
We divide the proof into three steps.
\partial_textit{Step 1: Discretized formulation.}
For $N\in\N$, let $\partial_tau = T/N$ and $t^n= n \partial_tau $ as above.
Set $\f U^0=\f U_0$,
define $\f U^n$ iteratively
by~\partial_text{\partial_textit{e}}qref{eq:timedis},
and set $E^n \coloneqq \mathcal{E}(\f U^n)$ for $n\in\set{0,\dots,N}$.
Theorem~\ref{thm:disex} guarantees that $\f U^n\in\mathbb{D}$ exists and satisfies
\begin{equation}\label{disrelen}
\begin{aligned}
E^n - E^{n-1} & - \left \langle \f U^n - \f U^{n-1} , \Phi \right \rangle
\\
&+ \partial_tau \left [\F{\f U^n}{\Phi } +\mathcal{K}(\Phi ) [ \mathcal{E}(\f U^n) - E^{n-1}]
\right ] \leq 0 \,
\partial_text{\partial_textit{e}}nd{aligned}
\partial_text{\partial_textit{e}}nd{equation}
for all $\Phi \in \mathbb{Y} $.
For functions $\phi \in \C^\infty_c([0,T); [0,\infty))$ and $ \Phi \in \C^1( [0,T];\mathbb Y )$,
we define $ \phi^n \coloneqq \phi(t^n)$ and $ \Phi^n \coloneqq \Phi (t^n)$ for $n \in \{ 0, \ldots , N\}$.
Using $\Phi = \Phi ^{n-1}$ in~\partial_text{\partial_textit{e}}qref{disrelen},
multiplying the resulting inequality by $\phi^{n-1}$ and summing this relation
over $n\in \{ 1, \ldots , N\}$ implies
\[
\begin{aligned}
\sum_{n=1}^N &\left [ \phi^{n-1} ( E^n-E^{n-1} ) - \phi^{n-1} \langle \f U^n-\f U^{n-1} , \Phi^{n-1} \rangle \right ]
\\
&+ \partial_tau \sum_{n=1}^N \phi^{n-1} \left [ \F{\f U^n}{\Phi ^{n-1}} + \mathcal{K}(\Phi ^{n-1} ) (\mathcal{E}(\f U^n) -E^{n-1}) \right ] \leq 0 \,.
\partial_text{\partial_textit{e}}nd{aligned}
\]
Since $\phi^N=0$,
using a discrete integration-by-parts formula
and dividing by $\partial_tau>0$,
we obtain
\begin{equation}
\begin{aligned}
-\sum_{n=1}^N &\left [\frac{\phi^n- \phi^{n-1}}{\tau} ( E^n- \left \langle \f U^n,\Phi^{n-1}\right \rangle) - \phi^{n} \left \langle \f U^n ,\frac{ \Phi^n- \Phi^{n-1}}{\tau}\right \rangle \right ]- \phi^0 \bp{ \mathcal{E}(\f U_0)-\langle \Phi^0, \f U_0\rangle }
\\
&+ \sum_{n=1}^N \phi^{n-1} \left [ \F{\f U^n}{\Phi ^{n-1}} + \mathcal{K}(\Phi ^{n-1} ) (\mathcal{E}(\f U^n) -E^{n-1}) \right ] \leq 0 \,.
\end{aligned}
\label{disrel}
\end{equation}
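For the reader's convenience, we record the summation-by-parts formula underlying this step: for sequences $(a^n)_{n=0}^{N}$ and $(\phi^n)_{n=0}^{N}$ with $\phi^N=0$, one has
\[
\sum_{n=1}^N \phi^{n-1}\left( a^n - a^{n-1}\right)
= -\sum_{n=1}^N \left( \phi^{n} - \phi^{n-1}\right) a^n - \phi^0 a^0\,.
\]
Applied to the energy terms, and combined with an analogous rearrangement of the terms involving $\f U^n-\f U^{n-1}$ and $\Phi^{n-1}$, this produces the boundary term at $n=0$ and the discrete time derivatives appearing in~\eqref{disrel}.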
\partial_textit{Step 2: Prolongations.}
We define the piece-wise constant prolongations
\[
\begin{aligned}
\ov{\f U}^N(t) &:= \begin{cases}
\f U^n & \partial_text{for } t \in (t^{n-1},t^n], \\
\f U_0 & \partial_text{for } t = 0,
\partial_text{\partial_textit{e}}nd{cases} \,\quad
\\
\ov {E}^N(t) &:= \begin{cases}
\mathcal{E}(\f U^{n})
& \partial_text{for } t \in (t^{n-1},t^n],
\\
\mathcal{E}(\f U_0) &\partial_text{for } t = 0,
\partial_text{\partial_textit{e}}nd{cases}\,
\qquad
\un{E}^N(t) := \begin{cases}\mathcal{E}(\f U^N) & \partial_text{for } t = T, \\
\mathcal{E}(\f U^{n-1} ) & \partial_text{for } t \in [t^{n-1},t^n).
\partial_text{\partial_textit{e}}nd{cases}\,
\partial_text{\partial_textit{e}}nd{aligned}
\]
Analogously, for test functions $ \psi \in \C^1([0,T]; \mathbb{X})$,
where $\mathbb{X}$ is $\R$ or $\mathbb{Y}$,
we define the piece-wise constant and piece-wise linear prolongations by
\begin{align*}
\overline{\psi}^N(t)& \coloneqq \begin{cases}
\psi(t^n) & \partial_text{for } t \in (t^{n-1},t^n], \\
\psi(0) & \partial_text{for } t = 0,
\partial_text{\partial_textit{e}}nd{cases} \,\qquad
\underline{\psi}^N(t) \coloneqq \begin{cases} \psi(T) & \partial_text{for } t = T, \\
\psi(t^{n-1}) & \partial_text{for } t \in [t^{n-1},t^n),
\partial_text{\partial_textit{e}}nd{cases}\\
\widehat{\psi}^N(t)& \coloneqq \frac{\psi(t^n)-\psi(t^{n-1})}{\partial_tau}(t-t^{n-1}) + \psi(t^{n-1})
\quad \partial_text{for } t\in[t^{n-1},t^n]\,.
\partial_text{\partial_textit{e}}nd{align*}
With this notation, the discrete energy-variational inequality \partial_text{\partial_textit{e}}qref{disrel}
becomes
\begin{multline}\label{disenin}
- \int_0^T \Bp{ \partial_t \widehat{\phi} ^N\left [ \ov E ^N - \langle \ov {\f U}^N, \un{\Phi}^N \rangle \right ] - \ov \phi^N \langle \ov{\f U}^N , \partial_t \widehat{\Phi}^N\rangle + \un{\phi}^N \mathcal{K}(\un{\Phi}^N )\un{E}^N} \,\mathrm{d} t
\\
+ \int_0^T \un\phi^N \left [\F{\ov{\f U}^N}{\un{\Phi}^N}+ \mathcal{K}(\un{\Phi}^N ) \mathcal{E}(\ov{\f U}^N) \right ]\,\mathrm{d} t - \phi(0)\left[\mathcal{E}(\f U_0) - \langle \Phi(0) , \f U_0 \rangle \right] \leq 0
\partial_text{\partial_textit{e}}nd{multline}
for all $ \Phi \in \C^1( [0,T];\mathbb Y )$
and all $\phi \in \C^1_c([0,T))$ with $\phi \geq 0$.
\partial_textit{Step 3: Convergence.}
Since we have
$0\leq\mathcal{E}(\f U^{n})\leq \mathcal{E}(\f U^{n-1})$,
we obtain that $t\mapsto \ov{E}^N(t) $ and $t\mapsto \un{E}^N(t) $
are non-negative and non-increasing functions and as such bounded in $\BV$ by the initial value $E^0=\mathcal{E}(\f U_0)$.
Moreover, by the superlinear growth of $\partial_text{\partial_textit{e}}ta $, we infer
from $ \mathcal{E} ( \ov{\f U}^N(t)) \leq \mathcal{E}(\f U_0)$
that the sequence $\{ \ov{\f U}^N\}_{N\in\N}$ is bounded in $L^\infty(0,T;\mathbb{D})$.
Thus, we may extract (not-relabeled) subsequences
such that there exist $\ov{E},\un{E}\in \BV$ and $\f U \in L^\infty(0,T;\mathbb{D})$ such that
\[
\begin{aligned}
\ov{\f U}^N &\xrightharpoonup{(\ast)} \f U
&&\quad\text{in } L^\infty(0,T;L^1({\mathbb{T}^d};\R^m))\,,\\
( \ov{E}^N,\,\un{E}^N) &\xrightharpoonup{\ \ast\ } (\ov{E},\, \un{E})
&&\quad \text{in }\BV\,,\\
( \ov{E}^N(t),\,\un{E}^N(t)) &\xrightarrow{\ \phantom{\ast}\ } (\ov{E}(t),\, \un{E}(t))
&&\quad \text{for all }t\in[0,T]\,,
\end{aligned}
\]
where the weak$(^\ast)$ convergence in
$L^\infty(0,T;L^1({\mathbb{T}^d};\R^m))$ was defined in \partial_text{\partial_textit{e}}qref{eq:weakconv.LinfL1},
and where we used Helly's selection theorem (see~\cite[Thm.~1.126]{barbu} for example).
We next show that $\ov{E}^N $ and $\un{E}^N$ converge to the same limit,
that is, $\ov{E}=\un{E}$ a.e.~in $(0,T)$.
Due to the monotonicity~$E^n \leq E^{n-1}$, we find
\[
\int_0^T | \ov{E}^N-\un{E}^N|\,\mathrm{d} t
= \sum_{n=1}^N \tau (E^{n-1}-E^{n} )
= \tau( E^0 - E^N) \leq\tau E^0 \longrightarrow 0 \quad
\text{ as } N \to\infty\,.
\]
Since $\BV$ continuously embeds into $L^1(0,T)$,
this allows us to identify $\un{E}=\ov{E}=:E$.
Due to the pointwise convergence~in $[0,T]$ of $\ov{E}^N$,
we infer from
the weak lower semi-continuity of $\mathcal{E}$ that $ E \geq \mathcal{E}(\f U)$ a.e. in $(0,T)$.
We
clearly have
\begin{align*}
\partial_t \widehat{\phi}^N &\ra \partial_t \phi,
& \ov{\phi}^N &\ra \phi,
& \un{\phi}^N &\ra \phi
&&\partial_text{ pointwise in } [0,T] \partial_text{ as }N \ra \infty \,,
\\
\partial_t \widehat{\Phi}^N &\ra \partial_t \Phi,
& \un{\Phi}^N &\ra \Phi,
& \nabla\un{\Phi}^N &\ra \nabla\Phi \quad &\partial_text{in } \C({\mathbb{T}^d};\R^m)
&\partial_text{ pointwise in } [0,T] \partial_text{ as }N \ra \infty
\,.
\partial_text{\partial_textit{e}}nd{align*}
With these observations, we may pass to the limit in the weak form~\partial_text{\partial_textit{e}}qref{disenin}.
We note that $\ov{\f U}^N$ occurs linearly in the first line of~\partial_text{\partial_textit{e}}qref{disenin}. All other terms are bounded and converge almost everywhere in $(0,T)$.
This implies that
\begin{multline*}
\lim_{N\ra\infty} \int_0^T \Bb{ \partial_t \widehat{\phi} ^N\left [ \ov E ^N - \langle \ov {\f U}^N , \un{\Phi}^N \rangle \right ] - \ov \phi^N \langle \ov{\f U}^N , \partial_t \widehat{\Phi}^N\rangle + \un{\phi}^N \mathcal{K}(\un{\Phi}^N )\un{E}^N }\,\mathrm{d} t \\
= \int_0^T\Bb{ \partial_t {\phi}\left [ E - \langle {\f U} , {\Phi} \rangle \right ] - \phi \langle {\f U} , \partial_t {\Phi}\rangle + {\phi} \mathcal{K}({\Phi} ){E}} \,\mathrm{d} t\,.
\end{multline*}
Observing that the second line in~\eqref{disenin} is bounded from below due to Hypothesis~\ref{hypo} and that $\phi\geq 0$ in $[0,T]$, we may apply Fatou's lemma and the weak lower semi-continuity of the function from~\eqref{ass:convex} as well as the continuity of $\mathcal{K}$ in order to pass to the limit in the second line of~\eqref{disenin}, which yields
\begin{multline*}
\liminf_{N\ra\infty} \left [ \int_0^T \un\phi^N \left [\F{\ov{\f U}^N}{\un{\Phi}^N}+ \mathcal{K}(\un{\Phi}^N ) \mathcal{E}(\ov{\f U}^N) \right ]\,\mathrm{d} t \right ] \\
\geq \int_0^T\liminf_{N\ra\infty} \Bb{ \un\phi^N \left [\F{\ov{\f U}^N}{\un{\Phi}^N}+ \mathcal{K}(\un{\Phi}^N ) \mathcal{E}(\ov{\f U}^N) \right ]}\,\mathrm{d} t \\
\geq \int_0^T \phi \int_{{\mathbb{T}^d}} \f F (\f U) : \nabla \Phi \,\mathrm{d} \f x + \mathcal{K}(\Phi ) \mathcal{E}(\f U) \,\mathrm{d} t\,.
\partial_text{\partial_textit{e}}nd{multline*}
In total, we infer from \partial_text{\partial_textit{e}}qref{disenin} that
\[
\begin{aligned}
-\int_0^T &\partial_t \phi \left [ {E} - \left \langle {\f U } , \Phi \right \rangle\right ] \,\mathrm{d} t - \phi(0)\left[\mathcal{E}(\f U_0) - \langle \f U_0,\Phi(0) \rangle \right]\\
&+\int_0^T \phi \left [\left \langle {\f U } , \partial_t \Phi \right \rangle+ \F{\f U}{\Phi } + \mathcal{K}(\Phi ) [\mathcal{E}({\f U}) - E] \right ] \,\mathrm{d} t
\leq 0\,.
\partial_text{\partial_textit{e}}nd{aligned}
\]
Via Lemma~\ref{lem:invar}, we now end up with the energy-variational inequality~\eqref{envarform} and with
\begin{equation*}
\lim_{t \searrow 0}\left [ E (t) - \langle \f U(t),\Phi(t) \rangle \right] \leq \mathcal{E}(\f U_0) - \langle\f U_0, \Phi(0) \rangle\,,
\end{equation*}
after possibly redefining these functions on a set of measure zero.
By Lemma~\ref{lem:var.affine},
this inequality implies $\f U(t+)=\f U_0$ in $\mathbb Y^\ast$, that is,
the initial value is attained in the asserted sense.
\partial_text{\partial_textit{e}}nd{proof}
\section{Two incompressible fluid models\label{sec:incomp}}
Our first two examples are models
for incompressible inviscid fluids, the incompressible magnetohydrodynamical equations and the incompressible Euler system.
While the latter can be seen as a special case of the first system,
it allows us to derive more properties
and to draw a comparison
with dissipative weak solutions of the Euler equations.
\subsection{Incompressible magnetohydrodynamics}
\label{sec:magneto}
As the first example, we consider the equations modeling
an incompressible, inviscid and electrically conducting fluid.
The corresponding equations of motion are the
magnetohydrodynamical equations given by
\begin{subequations}\label{eq:magneto}
\begin{align}
\partial_t \f v + ( \f v \cdot \nabla ) \f v - \mu (\f H \cdot \nabla ) \f H + \nabla p + \nabla \frac{\mu}{2}| \f H|^2 ={}& \f 0 , \qquad && \text{in }{\mathbb{T}^d} \times (0,T)\,,\label{eq:incompNav}
\\
\partial_t \f H - \nabla \times ( \f v \times \f H ) ={}& \f 0 \qquad && \text{in }{\mathbb{T}^d} \times (0,T)\,,\label{eq:mag}
\\
\di \f v = 0 , \quad \di \f H ={}& 0 \qquad && \text{in }{\mathbb{T}^d} \times (0,T)\,,
\label{eq:magneto.div}
\\
\f v (0) = \f v_0, \quad \f H(0) = {}&\f H_0 \qquad && \text{in } {\mathbb{T}^d} \,.
\end{align}
\end{subequations}
Here $ \f v : {\mathbb{T}^d}\partial_times (0,T) \ra \R^d $ denotes the velocity of the fluid, $ \f H : {\mathbb{T}^d} \partial_times (0,T) \ra \R^d $ is the magnetic field, $p : {\mathbb{T}^d}\partial_times (0,T) \ra \R$ denotes the pressure, and $\mu \in (0,\infty)$ is the quotient of the magnetic permeability and the constant density of the fluid.
\begin{remark}
We note that the above equation is not formally of the form~\partial_text{\partial_textit{e}}qref{eq.pde}. The pressure is not a function of $\f H $ and $\f v$ but should rather be seen as a Lagrange multiplier to fulfill the divergence-free condition in the evolution.
The first equation~\partial_text{\partial_textit{e}}qref{eq:incompNav} can be interpreted as $\partial_t \f v+ P\di( \f v\otimes \f v - \mu \f H \otimes \f H
) =0$, where $P$ denotes the Helmholtz projection on divergence-free functions,
and condition \partial_text{\partial_textit{e}}qref{eq:magneto.div} is incorporated in the functional framework
by working in the space of divergence-free functions.
Another viewpoint is that one can
derive a weak formulation of \partial_text{\partial_textit{e}}qref{eq:magneto}
by testing with divergence-free test functions.
Then the pressure term can be omitted,
and the weak formulation is of the form~\partial_text{\partial_textit{e}}qref{eq:weak.intro}.
\partial_text{\partial_textit{e}}nd{remark}
To introduce the notion of energy-varational solutions
to the magnetohydrodynamical equations \partial_text{\partial_textit{e}}qref{eq:magneto},
we define the corresponding mathematical entropy
as the physical energy
\begin{equation}\label{eq:magneto.energy}
\mathcal{E} (\f v, \f H)
= \frac{1}{2}\norml{\f v }_{L^2({\mathbb{T}^d})}^2+ \frac{\mu}{2}\norml{\f H}_{L^2({\mathbb{T}^d})}^2.
\partial_text{\partial_textit{e}}nd{equation}
Moreover,
we introduce the class of divergence-free vector fields
\[
\LRsigma{q}({\mathbb{T}^d})
\coloneqq\setcL{\f v\in\LR{q}({\mathbb{T}^d};\R^d)}{\forall \f \varphi\in \C^1({\mathbb{T}^d})\colon \int_{\mathbb{T}^d} \f v \cdot \nabla\f \varphi\,\mathrm{d}\f x =0}
\]
for $q\in[1,\infty)$.
The mathematically precise sense of energy-variational solutions
is given in the following definition.
\begin{definition}\label{def:magneto}
A tuple $(\f v , \f H , E )\in L^\infty(0,T; L^2_\sigma ( {\mathbb{T}^d} ) )^2 \partial_times \BV $ is called an energy-variational solution to the incompressible magnetohydrodynamical equations
\partial_text{\partial_textit{e}}qref{eq:magneto}
if it satisfies $\mathcal{E} (\f v(t), \f H(t))\leq E(t)$ for a.a.~$t\in (0,T)$,
and if the energy-variational inequality
\begin{equation}
\begin{aligned}
&\left [ E - \int_{{\mathbb{T}^d}}
\f v \cdot \f \varphi - \f H \cdot \f \psi
\,\mathrm{d} \f x \right ] \Big|_s^t
+ \int_s^t \int_{{\mathbb{T}^d}}\bb{
\f v \cdot \partial_t \f \varphi
+ \left (\f v \otimes \f v - \mu \f H \otimes \f H \right ) : \nabla \f \varphi
\,\mathrm{d} \f x
\\
&\quad
+ \int_s^t \int_{{\mathbb{T}^d}}
\f H \cdot \partial_t \f \psi + \left (\f H \otimes \f v - \f v \otimes \f H \right ) :\nabla \f \psi
\,\mathrm{d} \f x
+ \mathcal{K}(\f \varphi, \f \psi ) \left [ \mathcal{E}(\f v, \f H ) - E\right ]
}\,\mathrm{d} s
\leq 0
\,
\partial_text{\partial_textit{e}}nd{aligned}
\label{relenMagneto}
\partial_text{\partial_textit{e}}nd{equation}
holds for a.e.~$s<t\in(0,T)$ including $s=0$ with $( \f v (0), \f H (0)) = ( \f v_0 ,\f H_0)$ and all test functions $(\f \varphi, \f \psi ) \in \C^1({\mathbb{T}^d} \times [0,T]; \R^{d})^2$ with $ \di \f \varphi = \di \f \psi = 0$.
Here,
\begin{equation}
\mathcal{K}(\f \varphi, \f \psi ) =
2\|(\nabla \f \varphi )_{\text{sym}}\| _{L^\infty({\T};\R^{d\times d})} + \frac{2}{\sqrt\mu} \|(\nabla \f \psi )_{\text{skw}}\| _{L^\infty({\T};\R^{d\times d})}\label{K:magneto}
\end{equation}
with
\[
\| \Phi\| _{L^\infty({\T};\R^{d\times d})}
=
\esssup_{x\in{\T}} | \Phi(x) |_2,
\]
where $\snorm{\cdot}_2$ denotes the spectral norm defined in \eqref{eq:spectralnorm}.
\partial_text{\partial_textit{e}}nd{definition}
\begin{theorem}\label{thm:magneto}
For every initial datum $(\f v _0, \f H_0)\in L^2_{\sigma}({\mathbb{T}^d})\times L^2_{\sigma}({\mathbb{T}^d})$, there exists an energy-variational solution in the sense of Definition~\ref{def:magneto} with $E(0)=\mathcal{E}(\f v _0, \f H_0)$ and $ (\f v , \f H) \in \C_w([0,T];L^2_{\sigma}({\mathbb{T}^d})\times L^2_{\sigma}({\mathbb{T}^d}))$, and the initial values are attained in the strong sense.
\partial_text{\partial_textit{e}}nd{theorem}
\begin{proof}
We have to show that Hypothesis~\ref{hypo} is fulfilled.
To realize the system~\partial_text{\partial_textit{e}}qref{eq:magneto}
in the abstract framework introduced above,
we introduce the quadratic entropy functional $ \partial_text{\partial_textit{e}}ta : \R^{2d} \ra \R$ via
$\partial_text{\partial_textit{e}}ta(\f y _1 , \f y_2 )=\frac{1}{2}\snorm{\f y_1}^2+\frac{\mu}{2}\snorm{\f y_2}^2$,
which is obviously strictly convex, lower semi-continuous and
has superlinear growth.
The space of test functions is given by
$\Y=\setcl{(\f\varphi,\f\psi)\in\C^1({\mathbb{T}^d};\R^d)^2}{\di\f\varphi=\di\f\psi=0}$,
and we have
$\mathbb{D}=
\LRsigma{2}({\mathbb{T}^d})\partial_times \LRsigma{2}({\mathbb{T}^d})$
(see Remark~\ref{rem:domE}),
which is obviously convex.
Note that $\eta^\ast ( \f z_1 , \f z _2) =\frac{1}{2}\snorm{\f z_1}^2+\frac{1}{2\mu }\snorm{\f z_2}^2 $,
and the corresponding total entropy
is given by the physical energy $\mathcal{E}$ from \eqref{eq:magneto.energy}.
The function $\f F: \R^ {2d} \ra \R^{2d\times d } $ is given by
$$ \f F(\f v, \f H ) = \begin{pmatrix}
\f v \otimes \f v - \mu \f H \otimes \f H \\ \f H \otimes \f v - \f v \otimes \f H \end{pmatrix}
\,.$$
Observing that $D\eta^*(\f z_1, \f z _2 ) = ( \f z_1, \frac{\f z_2}{\mu})^T$, we find that the condition~\eqref{eq:integralFentropy} is fulfilled due to
\begin{align*}
\int_{{\mathbb{T}^d}} &\f F( D \eta^*( \f \varphi , \f \psi) ) : \nabla \begin{pmatrix}
\f \varphi\\\f \psi
\end{pmatrix}\,\mathrm{d} \f x \\
&= \int_{{\mathbb{T}^d}} \left ( \f \varphi \otimes \f \varphi - \frac{1}{\mu} \f \psi \otimes \f \psi \right ) : \nabla \f\varphi + \frac{1}{\mu} \left ( \f \psi \otimes \f \varphi - \f \varphi \otimes \f \psi \right ) : \nabla\f \psi \,\mathrm{d} \f x
\\
&=\int_{{\mathbb{T}^d}} ( \f \varphi \cdot \nabla )\frac{|\f \varphi|^2}{2} - \frac{1}{\mu}\left [\left ( \f \psi \otimes \f \psi \right ) : \nabla \f \varphi
- ( \f \varphi \cdot \nabla )\frac{|\f \psi|^2}{2} - \nabla \f\varphi : \left ( \f \psi \otimes \f \psi \right ) - \f \varphi \cdot \f \psi \di \f\psi
\right ]\,\mathrm{d} \f x
\\&=0\,,
\end{align*}
where we integrated by parts in the last term.
The last equality follows
by another integration by parts
since $\f \varphi$ and $\f \psi$ are solenoidal vector fields.
Moreover, inequality~\eqref{BoundF} is fulfilled for $C= 2 + \frac{2}{\sqrt\mu}$
since, by Young's inequality, it follows that
\begin{align*}
| \f F(\f v , \f H )| \leq \snorm{\f v}^2 + {\mu} \snorm{\f H}^2 + 2 \snorm{\f v}\snorm{\f H} \leq 2 \eta(\f v, \f H) + \frac{2}{\sqrt \mu} \eta(\f v, \f H) \,.
\end{align*}
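For completeness, the last estimate is obtained by applying Young's inequality with weight $\sqrt{\mu}$ to the mixed term,
\[
2 \snorm{\f v}\snorm{\f H} = 2\, \frac{\snorm{\f v}}{\mu^{1/4}}\,\mu^{1/4}\snorm{\f H} \leq \frac{\snorm{\f v}^2}{\sqrt{\mu}} + \sqrt{\mu}\,\snorm{\f H}^2 = \frac{2}{\sqrt \mu}\, \eta(\f v, \f H)\,,
\]
while $\snorm{\f v}^2 + \mu \snorm{\f H}^2 = 2\,\eta(\f v, \f H)$ holds by the definition of $\eta$.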
Finally, we have to show that the choice~\eqref{K:magneto}
of the regularity weight $\mathcal{K}$ yields the convexity of
the function from~\eqref{ass:convex}.
We infer similarly to the previous estimate that
\begin{align*}
\snormL{\int_{{\mathbb{T}^d}} \f F (\f v , \f H ) : \nabla \begin{pmatrix}
\f \varphi \\\f \psi
\end{pmatrix} \,\mathrm{d}\f x }
\leq {}& \left ( \| \f v \otimes \f v- \mu \f H \otimes \f H \|_{L^1({\mathbb{T}^d};\R^{d\times d})} \right ) \| (\nabla \f \varphi )_{\text{sym}}\|_{L^\infty({\mathbb{T}^d};\R^{d\times d})}
\\& + 2\| \f v \otimes \f H \|_{L^1({\mathbb{T}^d};\R^{d\times d})} \| (\nabla \f \psi )_{\text{skw}}\|_{L^\infty({\mathbb{T}^d};\R^{d\times d})}
\\ \leq {}&
\mathcal{K}(\f \varphi , \f \psi ) \mathcal{E}(\f v, \f H)\,.
\end{align*}
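For the reader's convenience, the last step uses the pointwise bounds from the previous estimate, integrated over ${\mathbb{T}^d}$:
\[
\| \f v \otimes \f v- \mu \f H \otimes \f H \|_{L^1({\mathbb{T}^d};\R^{d\times d})} \leq \int_{{\mathbb{T}^d}} \snorm{\f v}^2 + \mu \snorm{\f H}^2 \,\mathrm{d} \f x = 2\, \mathcal{E}(\f v, \f H)\,,
\qquad
2\| \f v \otimes \f H \|_{L^1({\mathbb{T}^d};\R^{d\times d})} \leq \frac{2}{\sqrt\mu}\, \mathcal{E}(\f v, \f H)\,,
\]
so that the right-hand side is indeed bounded by $\mathcal{K}(\f \varphi , \f \psi ) \mathcal{E}(\f v, \f H)$ with $\mathcal{K}$ from~\eqref{K:magneto}.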
This implies that the mapping
\[ (\f v, \f H ) \mapsto \int_{{\mathbb{T}^d}} \f F( \f v, \f H) : \nabla \begin{pmatrix}
\f \varphi \\\f \psi
\end{pmatrix} \,\mathrm{d}\f x +\mathcal{K}(\f \varphi , \f \psi ) \mathcal{E}(\f v, \f H) \]
is quadratic and non-negative, and thus convex and weakly lower semi-continuous.
In total, Hypothesis \ref{hypo} is satisfied,
and from Theorem~\ref{thm:main}
we infer the existence of a solution in the sense of
Definition~\ref{def:magneto}
with the regularity from Proposition~\ref{prop:reg}.
Finally,
Proposition~\ref{prop:betterreg} implies the additional regularity.
\partial_text{\partial_textit{e}}nd{proof}
\begin{remark}[Alternative choice of $\f F$]
We could also define the function $\f F$ by
$$ \f F(\f v, \f H ) = \begin{pmatrix}
\f v \otimes \f v - \mu \f H \otimes \f H + I \left ( \frac{| \f v |^2}{2}+ \frac{\mu|\f H|^2}{2}\right ) \\ \f H \otimes \f v - \f v \otimes \f H - ( \f v \cdot \f H) I \partial_text{\partial_textit{e}}nd{pmatrix}
\,.$$
With this definition, we can derive the relation~\partial_text{\partial_textit{e}}qref{eq:entropyflux.new} for the function $\partial_tilde{\f q}$ given by
$$
\partial_tilde{\f q} (\f v , \f H) = \f v \left ( \frac{| \f v |^2}{2}+ \frac{\mu|\f H|^2}{2}\right ) - \mu \f H ( \f H \cdot \f v) \,,
$$
and the function $\f F$ fits better into our abstract framework with Hypothesis~\ref{hypo:smooth}. But since both choices yield the same when tested with solenoidal functions, we rather use the simpler version in the above proof. Note that both choices fulfill the condition~\partial_text{\partial_textit{e}}qref{eq:integralFentropy}.
\partial_text{\partial_textit{e}}nd{remark}
\begin{remark}[Boundary conditions]\label{rem:boundary}
The concept can be transferred to the usual impermeability boundary conditions.
Indeed, on a bounded Lipschitz domain $\Omega \subset \R^d$, we may equip the system~\eqref{eq:magneto} with the boundary conditions $ \f n \cdot \f v = 0 = \f n \cdot \f H$ on $\partial \Omega$,
where $\f n$ denotes the outer unit normal vector at $\partial\Omega$.
The associated space for the test functions $\mathbb Y$ has to be restricted to $ (\f\varphi, \f \psi)\in \Y := \C^1(\Omega \times [0,T]; \R^{2d}) $ with $ \f n \cdot \f \varphi = 0 = \f n \cdot \f \psi $ on $\partial \Omega$ and $\di \f \varphi = 0 = \di \f \psi $ in $\Omega$.
Similarly to the above calculation, one may verify that condition~\eqref{eq:integralFentropy} is still fulfilled, where the integral is taken over $\Omega$ instead of ${\mathbb{T}^d}$.
\end{remark}
\subsection{Incompressible Euler equations}
\label{sec:incompEuler}
For the sake of completeness,
we apply the abstract result to the incompressible Euler equations, even though
the existence of energy-variational solutions to this system
was already proven in~\cite{envar}.
Actually, this can be seen as a special case of the
magnetohydrodynamical equations \eqref{eq:magneto}
by setting $\f H\equiv 0$.
However, here we can give a finer choice of the regularity weight $\mathcal K$
that allows us to show that energy-variational solutions
are also dissipative weak solutions.
The incompressible Euler equations are given by
\begin{subequations}\label{eq:Incomp}
\begin{alignat}{2}
\partial_t \f v + ( \f v \cdot \nabla ) \f v + \nabla p = \f 0 , \quad \di \f v ={}& 0 \qquad && \text{in }{\mathbb{T}^d} \times (0,T)\,,
\\
\f v (0) ={}& \f v_0 \qquad && \text{in } {\mathbb{T}^d} \,.
\end{alignat}
\end{subequations}
Again, $ \f v : {\mathbb{T}^d}\partial_times (0,T) \ra \R^d $ denotes the velocity of the fluid and $p : {\mathbb{T}^d}\partial_times (0,T) \ra \R$ denotes the pressure.
We introduce the energy $\mathcal{E} : \LRsigma{2}({\mathbb{T}^d}) \ra \R$
with $\mathcal{E}(\f v) := \frac{1}{2} \norml{\f v }_{L^2({\mathbb{T}^d})}^2$.
\begin{definition}\label{def:envarIncommp}
A pair $(\f v , E )\in L^\infty(0,T; L^2_\sigma ( {\mathbb{T}^d} )) \times \BV $ is called an energy-variational solution to the incompressible Euler system
\eqref{eq:Incomp}
if $E(t) \geq \mathcal{E} (\f v(t) )$ for a.e.~$t\in (0,T)$
and if the inequality
\begin{equation}
\left [ E - \int_{{\mathbb{T}^d}}
\f v \cdot \f \varphi
\,\mathrm{d} \f x \right ] \Big|_s^t
+ \int_s^t \int_{{\mathbb{T}^d}}
\f v \cdot \partial_t \f \varphi
+ { \f v \otimes \f v} : \nabla \f \varphi
\,\mathrm{d} \f x
+ \mathcal{K}(\f \varphi ) \left [ \mathcal{E}(\f v) - E\right ]
\,\mathrm{d} s
\leq 0
\,\label{relenIncomp}
\partial_text{\partial_textit{e}}nd{equation}
holds for a.e.~$s<t\in(0,T)$, including $s=0$ with $\f v (0) = \f v_0 $,
and for all test functions $\f \varphi\in \C^1({\mathbb{T}^d} \times [0,T]; \R^{d})$ with $ \di \f \varphi = 0$,
where
\begin{equation}\label{eq:regweight.incomp}
\mathcal{K}(\f \varphi) =2
\|(\nabla \f \varphi )_{\text{sym},-}\| _{L^\infty({\T};\R^{d\times d})}\,.
\end{equation}
\partial_text{\partial_textit{e}}nd{definition}
Besides existence of energy-variational solutions,
we shall show that they can be identified with so-called dissipative weak solutions
to the incompressible Euler equations \eqref{eq:Incomp}.
The following definition is an adaptation of the compressible case,
see Definition~\ref{def:weakEul} below.
\begin{definition}[Dissipative weak solution]\label{def:disssol.incomp}
We call a pair $(\f v , E )\in L^\infty(0,T; L^2_\sigma ( {\mathbb{T}^d} )) \times \BV $
a dissipative weak solution to the Euler equations, if there exists a \textit{Reynolds defect} $ \mathfrak{R} \in L^\infty _{w^*} (0,T;\mathcal{M}( {\T} ; \mathbb{R}^{d\times d}_{\text{sym},+}))$ such that the equation
\begin{align}
\int_{{\mathbb{T}^d}} \f v \cdot \f \varphi\,\mathrm{d} \f x \Big|_s^t
= \int_s^t\!\! \int_{{\mathbb{T}^d}} \f v \cdot \partial_t \f \varphi + \f v \otimes \f v: \nabla \f \varphi\,\mathrm{d} \f x\,\mathrm{d} s + \int_s^t\!\!\int_{{\mathbb{T}^d}} \nabla \f \varphi:\,\mathrm{d} \mathfrak{R}(s) \,\mathrm{d} s \label{measeq}
\end{align}
is fulfilled for all $ \f \varphi \in \C^1({\mathbb{T}^d}\times[0,T];\R^d )$ with $\dv\f\varphi=0$,
and for a.a.~$s,t\in(0,T) $, including $s=0$ with $\f v(0)=\f v_0$,
and if $E$ is a non-increasing function with $E(0+) = \mathcal{E}(\f v_0)$ such that
\begin{equation}
\mathcal{E}(\f v (t))+ \frac{1}{2}\int_{{\mathbb{T}^d}} I : \,\mathrm{d} \mathfrak{R}(t) \leq E(t) \,\label{measeneq}
\partial_text{\partial_textit{e}}nd{equation}
for a.a.~$t\in(0,T)$.
\partial_text{\partial_textit{e}}nd{definition}
\begin{theorem}\label{thm:incompEuler}
For every initial datum $\f v _0\in L^2_{\sigma}({\mathbb{T}^d})$, there is an energy-variational solution in the sense of Definition~\ref{def:envarIncommp}
with $E(0)=\mathcal{E}(\f v _0)$ and $\f v \in \C_{w}([0,T]; L^2_{\sigma}({\mathbb{T}^d}) )$ such that the initial condition is attained in the strong sense.
Moreover, a pair $(\f v , E )\in L^\infty(0,T; L^2_\sigma ( {\mathbb{T}^d} )) \times \BV $ is an energy-variational solution in the sense of Definition~\ref{def:envarIncommp} if and only if it is a dissipative weak solution in the sense of Definition~\ref{def:disssol.incomp}.
\end{theorem}
\begin{proof}
At first, we show that Hypothesis~\ref{hypo} is fulfilled,
which is very similar to the proof of Theorem~\ref{thm:magneto}.
For the most part, we can copy the above proof with $\f H\equiv 0$ or vanishing second component in all functionals.
But since we assert that
the regularity weight $\mathcal K$ can be chosen in the finer manner
stated in \eqref{eq:regweight.incomp},
it remains to verify the convexity of the function from~\eqref{ass:convex}
with this choice.
Indeed, we have
\[
\begin{aligned}
&\int_{{\mathbb{T}^d}} \f F( \f v ) : \nabla \f \varphi \,\mathrm{d} \f x
+ \mathcal K(\f \varphi)\mathcal{E}(\f v)\\
&= \int_{{\mathbb{T}^d}} \f v \otimes \f v : ( \nabla \f \varphi)_{\text{sym},+} \,\mathrm{d} \f x
+ \int_{{\mathbb{T}^d}} ( \f v \otimes \f v ) : \bb{( \nabla \f \varphi)_{\text{sym},-} + \| (\nabla \f \varphi )_{\text{sym},-}\| _{L^\infty({\T};\R^{d\times d})} I} \,\mathrm{d} \f x \,,
\end{aligned}
\]
where we infer the convexity and weak lower semi-continuity of both terms in the second line since they are non-negative and quadratic.
Hence, Hypothesis~\ref{hypo} is satisfied,
and from Theorem~\ref{thm:main}, we infer the existence of an energy-variational solution.
Now let~$(\f v ,E)$ be an energy-variational solution in the sense of Definition~\ref{def:envarIncommp}.
The choice $ \f \varphi = \f 0$ implies that $ E $ is non-increasing.
Since the regularity weight $\mathcal K$ is homogeneous of degree one, we infer from Proposition~\ref{prop:equality} that
\begin{align}
- \int_{{\mathbb{T}^d}} \f v \cdot \f \psi \,\mathrm{d} \f x \Big|_{0}^T + \int_0^T \int_{{\mathbb{T}^d}} \f v \cdot \partial_t \f\psi + ( \f v \otimes \f v ) : \nabla \f \psi \,\mathrm{d} \f x \,\mathrm{d} t \leq \int_0^T \mathcal{K}(\f \psi ) [ E - \mathcal{E}(\f v) ] \,\mathrm{d} t \label{estincomp}
\partial_text{\partial_textit{e}}nd{align}
for all $ \f \psi \in \C^1({\mathbb{T}^d}\partial_times[0,T];\R^d)$ with $\dv\f\psi=0$.
We define
\[
\begin{aligned}
&\mathcal V:= \{\f \varphi\in \C_0^1({\T}\partial_times[0,T);\R^d) \mid \di \f \varphi = 0 \partial_text{ a.e.~in }{\mathbb{T}^d} \partial_times (0,T)\,,\ \int_{\T}\f\varphi\,\mathrm{d} x =0\}\,,
\\
&\f l \colon\mathcal V\partial_to\R,
\quad
\langle \f l, \f \psi \rangle := - \int_{{\mathbb{T}^d}} \f v \cdot \f \psi \,\mathrm{d} \f x \Big|_{0}^T + \int_0^T \int_{{\mathbb{T}^d}} \f v \cdot \partial_t \f\psi + ( \f v \otimes \f v ) : \nabla \f \psi \,\mathrm{d} \f x \,\mathrm{d} t\,,
\\
&\mathfrak p\colon L^1(0,T;\C({\T} ; \R^{d\partial_times d}_{\partial_text{sym}})) \ra \R,
\quad
\mathfrak p(\Phi) := \int_0^T 2\| (\Phi)_{-}\|_{\C({\mathbb{T}^d};\R^{d\partial_times d})}(E - \mathcal{E}(\f v))\,\mathrm{d} t\,.
\partial_text{\partial_textit{e}}nd{aligned}
\]
Due to \partial_text{\partial_textit{e}}qref{estincomp}, Lemma~\ref{lem:hahn} implies
that there exists
$\mathfrak R \in L^\infty_{w^*}(0,T;\mathcal{M}({\T} ; \R_{\partial_text{sym}}^{d\partial_times d}
))$
with
\[
\forall\Phi\in L^1(0,T; \C({\T}; \R_{\partial_text{sym}}^{d\partial_times d })):\ \langle -\mathfrak R, \Phi\rangle \leq \mathfrak p(\Phi),
\qquad
\forall\f \psi \in \mathcal{V} : \ \langle -\mathfrak R, \nabla \f \psi \rangle = \langle \f l , \f \psi \rangle.
\]
The first property implies $\langle \mathfrak R, \Phi\rangle \geq 0$
if $\Phi$ is positive semi-definite in ${\mathbb{T}^d}\times (0,T)$,
so that we have $\mathfrak R \in L^\infty_{w^*}(0,T;\mathcal{M}({\T} ; \R_{\text{sym},+}^{d\times d}))$.
The second property yields \eqref{measeq}
for $\f \psi\in \mathcal V$.
Using $\f \psi=\pm\f e_j$ in~\eqref{estincomp},
where $\f e_j$ is the $j$-th unit vector in $\R^d$,
we see that $\int_{{\T}}\f v\,\mathrm{d} x$ is constant in time.
Therefore, we can drop the mean-value condition on $\f \psi$
and infer \eqref{measeq} for all $\f \psi \in \C^1({\T}\times[0,T]; \R ^d )$
with $\dv\f\psi=0$.
Considering
$\Phi(x,t)= - \phi(t)I$ for some $\phi\in\C_0^{1}([0,T))$ with $\phi \geq 0$,
we further have
\[
\int_0^T \phi(t) \int_{{\mathbb{T}^d}} I : \,\mathrm{d} \mathfrak{R}(t) \,\mathrm{d} t
=\langle -\mathfrak R, \Phi\rangle
\leq \mathfrak p(\Phi)
=2\int_0^T \phi(t) (E - \mathcal{E}(\f v))\,\mathrm{d} t.
\]
Since $\phi\geq 0$ is arbitrary, this directly implies \partial_text{\partial_textit{e}}qref{measeneq}
for a.a.~$t\in(0,T)$.
In total, we see that $(\f v, E)$ is a dissipative weak solution.
In order to prove the converse implication,
let $(\f v , E )\in L^\infty(0,T; L^2_\sigma ( {\mathbb{T}^d} )) \times \BV $
be a dissipative weak solution to \eqref{eq:Incomp}.
Due to $\mathfrak{R}(t) \in \mathcal{M}({\mathbb{T}^d};\R^{d\times d}_{\text{sym},+})$, the duality of the spectral norm and the trace norm for matrices,
H\"older's inequality and inequality~\eqref{measeneq} allow us to infer
\begin{align*}
\int_{{\mathbb{T}^d}} \nabla \f \psi : \,\mathrm{d} \mathfrak{R} \geq \int_{{\mathbb{T}^d}} ( \nabla \f \psi )_{\text{sym},-} : \,\mathrm{d} \mathfrak{R}
& \geq - \| ( \nabla \f \psi )_{\text{sym},-} \|_{L^\infty({\mathbb{T}^d};\R^{d\times d })}
\int_{{\mathbb{T}^d}}I : \,\mathrm{d} \mathfrak{R}
\\
& \geq 2 \| ( \nabla \f \psi )_{\text{sym},-} \|_{L^\infty({\mathbb{T}^d};\R^{d\times d })} \left [ \mathcal{E}(\f v ) -E \right ] \,
\end{align*}
a.e.~in $(0,T)$.
Estimating the last term of \partial_text{\partial_textit{e}}qref{measeq} with $\f\varphi=-\f \psi$ in this way,
we obtain
\[
-\left [ \int_{{\mathbb{T}^d}}
\f v \cdot \f \psi
\,\mathrm{d} \f x \right ] \Big|_s^t
+ \int_s^t \int_{{\mathbb{T}^d}}
\f v \cdot \partial_t \f \psi
+ { \f v \otimes \f v} : \nabla \f \psi
\,\mathrm{d} \f x
+ \mathcal{K}(\f \psi) \left [ \mathcal{E}(\f v) - E\right ]
\,\mathrm{d} s
\leq 0
\,.
\]
Since $E$ is non-increasing,
we may add the term $E\big|_s^t$ to the left-hand side
to infer the formulation~\partial_text{\partial_textit{e}}qref{relenIncomp}.
\partial_text{\partial_textit{e}}nd{proof}
\begin{remark}[Trace-free measures]
Due to the fact that the equation~\eqref{measeq} holds for solenoidal test functions, one may change the measure $\mathfrak{R}$ in this formulation by adding a multiple of the identity.
This can be done in such a way that the resulting measure $\bar{\mathfrak{R}}$ is trace-free
by setting $ \bar{\mathfrak{R}}= \mathfrak{R}-\frac{1}{d} \tr(\mathfrak{R})I$.
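For completeness, we note why this modification is admissible in~\eqref{measeq}: since the test functions there are solenoidal, the identity part of the measure is not seen, as
\[
\int_{{\mathbb{T}^d}} \nabla\f\varphi : \,\mathrm{d}\Big(\tfrac{1}{d}\tr(\mathfrak{R})I\Big)
= \frac{1}{d}\int_{{\mathbb{T}^d}} \di \f\varphi \,\mathrm{d} \tr(\mathfrak{R}) = 0
\qquad\text{whenever } \di\f\varphi = 0\,.
\]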
Consequently, we could adapt Definition~\ref{def:disssol.incomp} by requiring $ \mathfrak R\in L^\infty_{w^*}(0,T;\mathcal{M}({\mathbb{T}^d} ; \R^{d\times d}_{\text{sym},0}))$, where $\R^{d\times d}_{\text{sym},0} $ denotes the set of symmetric trace-free matrices,
and by demanding the simpler inequality $\mathcal{E}(\f v) \leq E $
instead of inequality~\eqref{measeneq}.
We could infer this formulation with the same arguments as above, but by choosing $\mathcal{I}(\f \psi) = (\nabla \f \psi)_{\text{sym}} - \frac{1}{d} \tr (\nabla \f \psi ) I$ in Lemma~\ref{lem:hahn} and $ \mathfrak p(\Phi) = \int_0^T 2\| \Phi\|_{\C({\mathbb{T}^d};\R^{d\times d})}(E - \mathcal{E}(\f v))\,\mathrm{d} t$.
However, we prefer the choice made in Definition~\ref{def:disssol.incomp} since in inequality~\eqref{measeneq} the dissipative nature of the Reynolds defect $\mathfrak{R}$ becomes visible.
\partial_text{\partial_textit{e}}nd{remark}
\section{Compressible Euler equations\label{sec:comp}}
\label{sec:comprEuler}
Now, we turn to the compressible Euler system.
Here, instead of formulating the equations
in terms of the density $h$ and the fluid velocity $\f v$,
we use the density and the momentum $\f m=h\f v$.
This is often done in the literature, see for instance~\cite{Fereisl21_NoteLongTimeBehaviorDissSolEuler}.
The main reason for this choice is
that the associated energy functional is convex in the variables $(h,\f m)$,
as we will see below.
The Euler equations then read
\begin{subequations}
\label{eq:comprEuler}
\begin{alignat}{2}
\partial_t h + \di \f m ={}& 0 &&\quad\text{in }{\mathbb{T}^d}\times (0,T),
\label{eq:comprEuler.mass}
\\
\partial_t \f m + \di \left ( \frac{\f m \otimes \f m}{h} \right ) + \nabla p(h) ={}& 0
&&\quad \text{in } {\mathbb{T}^d} \times (0,T),
\label{eq:comprEuler.momentum}
\\
(h,\f m)(\cdot,0)={}&(h_0,\f m_0)
&&\quad\text{in }{\mathbb{T}^d}.
\end{alignat}
\end{subequations}
Here $h\colon{\mathbb{T}^d}\times(0,T)\to[0,\infty)$
and $\f m\colon{\mathbb{T}^d}\times(0,T)\to\R^d$
denote the mass density and the momentum field of an inviscid fluid flow,
and the pressure $p$ is related to the density $h$ by
a barotropic pressure law $p=p(h)$.
Note that we follow~\cite{weakstrongCompEul}
and use $h$ for the density variable instead of $\rho $,
which fits to our notation
to use Latin letters for the state variables
and Greek letters for the test functions.
To see that \eqref{eq:comprEuler} belongs to the class of hyperbolic conservation laws
introduced above,
we set
\[
\f F ( h,\f m ) =
\begin{pmatrix}
\f m ^T \\
\bp{\frac{\f m \otimes \f m }{h} + p(h) I}\chi_{(0,\infty)}(h)
\end{pmatrix}.
\]
Then \eqref{eq:comprEuler} is equivalent to \eqref{eq} with
$ \f U = (h , \f m) $.
The mathematical entropy $\eta$ for the system
is defined as
\[
\eta (h,\f m)
=\begin{cases} \frac{1}{2}\frac{|\f m |^2}{h}
+ P(h)
& \text{if }h> 0,\\
0
& \text{if }(h, \f m) = (0,\f 0) ,
\\
\infty & \text{else},
\end{cases}
\]
and
$\mathcal{E}(h,\f m)=\int_{\mathbb{T}^d}\eta(h(x),\f m(x))\,\mathrm{d}\f x$
is the total physical energy.
Here $P$ denotes the potential energy, which is associated to the
pressure $p$ via
\begin{equation}\label{eq:pot.from.pres}
P(h)=h\int_0^h \frac{p(z)}{z^2}\,\mathrm{d} z.
\end{equation}
Vice versa, the pressure $p$ can be derived from the potential energy $P$ via
\begin{equation}\label{eq:pres.from.pot}
p(h)=hP'(h)-P(h).
\end{equation}
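As an illustration, for the isentropic pressure law $p(h)=ah^\gamma$ with $a>0$ and $\gamma>1$, which is the setting of Theorem~\ref{thm:exCompEul} below, these formulas give
\[
P(h) = h\int_0^h a z^{\gamma-2}\,\mathrm{d} z = \frac{a}{\gamma-1}\, h^\gamma\,,
\qquad
hP'(h)-P(h) = \frac{a\gamma}{\gamma-1}\,h^\gamma - \frac{a}{\gamma-1}\,h^\gamma = a h^\gamma = p(h)\,.
\]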
For conditions ensuring that all expressions in \eqref{eq:pot.from.pres} and
\eqref{eq:pres.from.pot} are well defined,
we refer to \eqref{eq:pres.properties} and
\eqref{eq:pot.properties} below, respectively.
\subsection{Energy-variational solutions to the compressible Euler equations}
For the sake of convenience, we now transfer Definition~\ref{def:envar}
to the compressible Euler system,
and express all quantities in the way considered here.
\begin{definition}\label{def:envarEUL*}
A triple
$(h,\f m, E)\in
L^1_{\mathrm{loc}}({\mathbb{T}^d}\times(0,T);[0,\infty))\times
L^1_{\mathrm{loc}}({\mathbb{T}^d}\times(0,T);\R^d)
\times \BV$
is called an energy-variational solution to the compressible Euler system
\eqref{eq:comprEuler}
if
$\mathcal{E} (h(t) , \f m(t))\leq E(t)$ for a.e.~$t\in (0,T)$
and
if
the energy-variational inequality
\begin{multline}
\left [ E - \int_{{\mathbb{T}^d}} h \rho
+ \f m \cdot \f \varphi
\,\mathrm{d} \f x \right ] \Big|_s^t + \int_s^t
\int_{{\mathbb{T}^d}} h \partial_t \rho
+\f m \cdot \nabla \rho
\,\mathrm{d} \f x
\,\mathrm{d} \tau
\\
+ \int_s^t \int_{{\mathbb{T}^d}}
\f m \cdot \partial_t \f \varphi
+ \left (\frac{ \f m \otimes \f m}{h} + p(h)I \right ): \nabla \f \varphi
\,\mathrm{d} \f x
+ \mathcal{K}_\alpha (\rho,\f \varphi ) \left [ \mathcal{E}(h ,\f m) - E\right ]
\,\mathrm{d} \tau
\leq 0
\,\label{relenEul*}
\end{multline}
holds for a.e.~$s<t\in(0,T)$, including $s=0$ with $( h (0), \f m (0)) = ( h_0 ,\f m_0)$,
and for all test functions
$(\rho,\f \varphi )\in \C^1({\mathbb{T}^d}\times [0,T])\times\C^1({\mathbb{T}^d}\times [0,T];\R^d)$,
where $\mathcal K_\alpha $ is given by
\begin{align}
\mathcal{K}_\alpha (\rho,\f \varphi )=\mathcal{K}_\alpha(\f \varphi) =
\max\setl{ 2,\alpha d}
\| (\nabla \f \varphi )_{\text{sym},-}\| _{L^\infty({\T};\R^{d\times d})}
\label{K:CompEul}\,
\end{align}
for a suitable choice of $\alpha>0 $.
\end{definition}
\begin{remark}[Choice of regularity weight]
There are different choices possible for the regularity weight~$\mathcal{K}_\alpha$. A finer choice would be given by
\begin{multline*}
\tilde{\mathcal{K}}_{\alpha}(\f\varphi) = \max\setl{ 2\| (\nabla \f \varphi )_{\text{sym},-}\| _{L^\infty({\T};\R^{d\times d})},\alpha \| (\di \f\varphi )_{-}\| _{L^\infty({\T};\R)}}
\\
\leq \max\setl{ 2,\alpha d}
\| (\nabla \f \varphi )_{\text{sym},-}\| _{L^\infty({\T};\R^{d\times d})} =\mathcal{K}_\alpha(\f\varphi)\,. \end{multline*}
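For the reader's convenience, the inequality between the two weights follows from a pointwise eigenvalue estimate: denoting by $\lambda_1,\ldots,\lambda_d$ the eigenvalues of $(\nabla\f\varphi)_{\text{sym}}$ and using $\di\f\varphi = \tr\left((\nabla\f\varphi)_{\text{sym}}\right)$, we find
\[
(\di\f\varphi)_- = \Big(\sum_{i=1}^d \lambda_i\Big)_- \leq \sum_{i=1}^d (\lambda_i)_- \leq d\, | (\nabla\f\varphi)_{\text{sym},-} |_2\,,
\]
since the spectral norm of the negative part equals the largest value $(\lambda_i)_-$.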
Note that the solution concept is finer for a smaller regularity weight, since the energy-variational inequality~\eqref{envarform} remains valid if the regularity weight increases (\textit{cf.}~\cite[Prop.~4.4]{EiHoLa22}).
Nevertheless, we use the above choice
since it yields the equivalence to dissipative weak solutions; see Theorem~\ref{thm:measval.CompEul} below.
\partial_text{\partial_textit{e}}nd{remark}
To show existence of
energy-variational solutions to \eqref{eq:comprEuler},
we restrict the class of
admissible pressure laws and assume that $p$ is of the form
$p(h)=ah^\gamma$.
Then we show the following result.
\begin{theorem}\label{thm:exCompEul}
Let $p(h)=ah^\gamma$ for some $a>0$, $\gamma>1$,
and set $q=2\gamma/(1+\gamma)$ and $\alpha =\gamma-1$.
For every initial datum $ ( h_0, \f m_0)\in L^1_{\mathrm{loc}}({\mathbb{T}^d};\R^{d+1})$
with $\mathcal{E}( h_0, \f m_0)<\infty$,
there exists an energy-variational solution
\[
(h,\f m,E)\in \C_w([0,T]; L^\gamma ( {\mathbb{T}^d} )) \times \C_w([0,T]; L^q ({\mathbb{T}^d} ;\R^d )) \times \BV
\]
to the compressible Euler equations~\eqref{eq:comprEuler} in the sense of Definition~\ref{def:envarEUL*} with
$E(0)=\mathcal{E}(h_0, \f m_0)$
such that the initial conditions are attained in the strong sense.
\end{theorem}
\subsection{Existence of energy-variational solutions}
To prove existence of an energy-variational solution,
we show that all assertions
of Theorem \ref{thm:main} are satisfied.
Actually, most of them
can be shown for more general pressure laws
than those in the statement of Theorem \ref{thm:exCompEul}.
For the moment, we shall merely assume that the potential energy $P$
satisfies
\begin{subequations}\label{eq:pot.properties}
\begin{align}
P\in\C^1[0,\infty)\cap\C^2(0,\infty),
\qquad
&P''(z)>0 \text{ for all } z>0,
\label{eq:pot.regularity}
\\
\lim_{z\to\infty} \frac{P(z)}{z}=\infty,
\qquad
&P(0)=P'(0)=0.
\label{eq:pot.limits}
\end{align}
\end{subequations}
In particular, $P$ is a strictly convex function with superlinear growth,
and $p$ is well defined via \partial_text{\partial_textit{e}}qref{eq:pres.from.pot}.
\begin{remark}
\label{rem:pressurelaw}
In \partial_text{\partial_textit{e}}qref{eq:pot.properties}
we introduced assumptions on the potential energy $P$,
while in the literature it is much more common to
state assumptions on the pressure $p$ directly.
To guarantee \eqref{eq:pot.regularity} and \eqref{eq:pot.limits},
one may assume that $p$ satisfies
\begin{subequations}\label{eq:pres.properties}
\begin{align}
p\in\C^0[0,\infty)\cap\C^1(0,\infty),
\qquad
&p'(z)>0 \text{ for all } z>0,
\label{eq:pres.regularity}
\\
\lim_{z\to\infty} \frac{p(z)}{z}=\infty,
\qquad
&\int_0^1 \frac{p(z)}{z^2}\,\,\mathrm{d} z <\infty.
\label{eq:pres.limits}
\end{align}
\end{subequations}
One readily verifies that then the right-hand side of
\eqref{eq:pot.from.pres} is well defined,
and that \eqref{eq:pot.regularity} and \eqref{eq:pres.regularity}
are equivalent since $P''(z)=p'(z)/z$.
Moreover, the second condition in \eqref{eq:pres.limits}
implies $\lim_{z\to 0} p(z)/z =0$,
and it is equivalent to the second condition in \eqref{eq:pot.limits},
which follows with the identity
\[
\int_0^1 \frac{p(z)}{z^2}\,\mathrm{d} z
=\int_0^1\ddz\bb{\frac{P(z)}{z}}\,\mathrm{d} z
=P(1)-\lim_{z\to0} \frac{P(z)}{z}
=P(1)-P'(0).
\]
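Here, the integrand identity is a direct consequence of relation~\eqref{eq:pres.from.pot}, since
\[
\ddz\bb{\frac{P(z)}{z}} = \frac{zP'(z)-P(z)}{z^2} = \frac{p(z)}{z^2}\,.
\]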
Additionally, superlinear growth of $p$ implies superlinear growth of $P$.
Indeed, the first condition in \partial_text{\partial_textit{e}}qref{eq:pot.limits} yields the existence of $\,\mathrm{d}ns_0>0$
such that $p(z)\geq z$ for all $z\geq\,\mathrm{d}ns_0$,
whence we have
\[
\frac{P(\dns)}{\dns}
\geq \int_0^{\dns_0} \frac{p(z)}{z^2}\,\mathrm{d} z
+\int_{\dns_0}^{\dns} \frac{1}{z}\,\mathrm{d} z
=\int_0^{\dns_0} \frac{p(z)}{z^2}\,\mathrm{d} z
+\log(\dns)-\log(\dns_0)
\to\infty
\]
as $h\to\infty$.
However, the converse is not true.
For example, the function $P(\dns)=(1+\dns)\log(1+\dns)-\dns$
satisfies \eqref{eq:pot.regularity} and \eqref{eq:pot.limits},
but the associated pressure $p(\dns)=\dns-\log(1+\dns)$ does not
have superlinear growth.
Therefore, the assumptions
on the potential $P$ in \eqref{eq:pot.properties}
are less restrictive
than the assumptions
on the pressure $p$ in \eqref{eq:pres.properties},
which explains why we work with the former in what follows.
\end{remark}
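As a quick illustration, the isentropic law $p(z)=az^\gamma$ with $a>0$ and $\gamma>1$ from Theorem~\ref{thm:exCompEul} satisfies all conditions in \eqref{eq:pres.properties}:
\[
p'(z)=a\gamma z^{\gamma-1}>0 \text{ for } z>0,
\qquad
\frac{p(z)}{z}=az^{\gamma-1}\to\infty \text{ as } z\to\infty,
\qquad
\int_0^1 \frac{p(z)}{z^2}\,\mathrm{d} z=\frac{a}{\gamma-1}<\infty,
\]
so that \eqref{eq:pot.regularity} and \eqref{eq:pot.limits} follow by the equivalences recorded in the remark above.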
We separate the proof into several lemmas,
the first of which concerns properties of $\eta$.
\begin{lemma}\label{lem:eta.comprEuler}
If $P$ satisfies \eqref{eq:pot.properties},
then $\eta$ is strictly convex and has superlinear growth.
\end{lemma}
\begin{proof}
By \eqref{eq:pot.regularity}, we have $P''>0$,
and the strict convexity of $\eta$ directly follows by computing the second derivatives.
To show the superlinear growth, let $(\dns_n,\f m_n)\subset\R\times\R^d$ be a sequence
with $\snorm{(\dns_n,\f m_n)}\to\infty$.
If $\dns_n\to-\infty$ as $n\to\infty$,
then we clearly have $\eta(\dns_n,\f m_n)/\snorm{(\dns_n,\f m_n)}\to\infty$.
So we may assume $\dns_n>0$ in the following.
If $\lim_{n\to\infty}\snorm{\f m_n}/\dns_n= \infty$,
then
\[
\frac{\eta(\dns_n,\f m_n)}{\snorm{\np{\dns_n,\f m_n}}}
\geq \frac{\snorm{\f m_n}^2}{2\dns_n\,\snorm{\np{\dns_n,\f m_n}}}
=\frac{\snorm{\f m_n}}{\sqrt{\dns_n^2+\snorm{\f m_n}^2}}\frac{\snorm{\f m_n}}{2\dns_n}
\to \infty
\]
as $n\to\infty$;
if
$\liminf_{n\to\infty}\snorm{\f m_n}/\dns_n= c$ for some finite $c \geq 0$,
then we have $\dns_n\to\infty$ and
\[
\frac{\eta(\dns_n,\f m_n)}{\snorm{\np{\dns_n,\f m_n}}}
\geq \frac{P(\dns_n)}{\snorm{\np{\dns_n,\f m_n}}}
=\frac{\dns_n}{\sqrt{\dns_n^2+\snorm{\f m_n}^2}}
\frac{P(\dns_n)}{\dns_n} \to \infty
\]
as $n\to\infty$ due to \eqref{eq:pot.limits}.
In total, this completes the proof.
\end{proof}
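To make the convexity argument explicit, assume (consistently with the estimates used above) that $\eta(\dns,\f m)=\frac{\snorm{\f m}^2}{2\dns}+P(\dns)$ for $\dns>0$. Then, for $(s,\f v)\in\R\times\R^d$,
\[
D^2\eta(\dns,\f m)
=\begin{pmatrix}
P''(\dns)+\frac{\snorm{\f m}^2}{\dns^3} & -\frac{\f m^T}{\dns^2}\\
-\frac{\f m}{\dns^2} & \frac{1}{\dns}I
\end{pmatrix},
\qquad
\begin{pmatrix} s\\ \f v\end{pmatrix}\cdot D^2\eta(\dns,\f m)\begin{pmatrix} s\\ \f v\end{pmatrix}
=P''(\dns)\,s^2+\frac{1}{\dns}\left|\f v-\frac{s}{\dns}\f m\right|^2,
\]
so that $D^2\eta(\dns,\f m)$ is positive definite by \eqref{eq:pot.regularity}; the same matrix reappears in the proof of the relative entropy inequality in the last subsection.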
Next we calculate the convex conjugate $\eta^*$ of $\eta$.
To this end, we use that $P'$ is an invertible mapping.
\begin{lemma}\label{lem:etastar.comprEuler}
If $P$ satisfies \eqref{eq:pot.properties},
then the mapping $Q\coloneqq P'$ is strictly increasing
and a bijective self-mapping on $[0,\infty)$.
The convex conjugate $\eta^*$ of $\eta$ is given by
\begin{equation}\label{eq:etastar.comprEuler}
\eta^\ast ( \dnst , \f mt )
=\int_0^{\bp{\dnst+\frac{\snorm{\f mt}^2}{2}}_+} Q^{-1}(z)\,\mathrm{d} z
=p\circ Q^{-1}\Bp{\Bp{\dnst+\frac{\snorm{\f mt}^2}{2}}_+},
\end{equation}
and it holds
\begin{equation}\label{eq:Detastar.comprEuler}
D\eta^\ast ( \dnst , \f mt )
= Q^{-1}\Bp{\Bp{\dnst+\frac{\snorm{\f mt}^2}{2}}_+}\begin{pmatrix}
1\\
\f mt
\end{pmatrix}
\end{equation}
where $z_+\coloneqq\max\set{z,0}$ for $z\in\R$.
\end{lemma}
\begin{proof}
By \eqref{eq:pot.properties},
the function $Q=P'$ is strictly increasing and continuous on $[0,\infty)$
with $Q(0)=0$,
and the convexity of $P$ yields
\[
Q(z)=P'(z)\geq\frac{P(z)}{z}\to \infty
\]
as $z\to\infty$.
Therefore, $Q$ is a bijective self-mapping on $[0,\infty)$
with inverse $Q^{-1}$.
The second equality in \eqref{eq:etastar.comprEuler} is now
a direct consequence of the identity
\[
\ddz p(Q^{-1}(z))
= p'(Q^{-1}(z)) (Q^{-1})'(z)
= p'(Q^{-1}(z)) \frac{1}{Q'(Q^{-1}(z))}
= Q^{-1}(z),
\]
where we used $Q'(z)=P''(z)=p'(z)/z$.
To verify the first equality in \eqref{eq:etastar.comprEuler},
consider the case $\dnst+\frac{\snorm{\f mt}^2}{2}\leq0$ at first.
We employ Young's inequality to estimate
\[
\dns\,\dnst + \f m\cdot\f mt - \eta(\dns,\f m)
\leq -\dns\frac{\snorm{\f mt}^2}{2} + \frac{\snorm{\f m}^2}{2\dns}
+ \frac{\dns\snorm{\f mt}^2}{2} - \eta(\dns,\f m)
\leq 0,
\]
which shows $\eta^\ast ( \dnst , \f mt ) \leq 0$.
Since we also have $\eta^\ast\geq 0$,
we infer $\eta^\ast ( \dnst , \f mt )=0$,
which is \eqref{eq:etastar.comprEuler}
if $\dnst+\frac{\snorm{\f mt}^2}{2}\leq0$.
This also implies \eqref{eq:Detastar.comprEuler} in this case.
If $\dnst+\frac{\snorm{\f mt}^2}{2}>0$, then $(\dnst,\f mt)$
belongs to the range of $D\eta$, which is given by
\[
D \eta (\dns,\f m)
= \left( -\frac{\snorm{\f m}^2}{2\dns^2}+Q(\dns), \frac{\f m}{\dns} \right)
\]
for $\dns>0$, $\f m\in\R^d$.
Computing the inverse, we arrive at \eqref{eq:Detastar.comprEuler}
in this case.
This further
yields \eqref{eq:etastar.comprEuler} for $\dnst+\frac{\snorm{\f mt}^2}{2}>0$
since $Q(0)=0$.
In total, we have thus verified \eqref{eq:etastar.comprEuler}
and \eqref{eq:Detastar.comprEuler}.
\end{proof}
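For the isentropic law $p(\dns)=a\dns^\gamma$ of Theorem~\ref{thm:exCompEul}, formula \eqref{eq:etastar.comprEuler} becomes fully explicit: with $P(\dns)=\frac{a}{\gamma-1}\dns^\gamma$ we have $Q(\dns)=\frac{a\gamma}{\gamma-1}\dns^{\gamma-1}$, hence $Q^{-1}(z)=\bp{\frac{(\gamma-1)z}{a\gamma}}^{1/(\gamma-1)}$ and
\[
\eta^\ast(\dnst,\f mt)
= a\,\Bp{\frac{\gamma-1}{a\gamma}\Bp{\dnst+\frac{\snorm{\f mt}^2}{2}}_+}^{\frac{\gamma}{\gamma-1}} .
\]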
In the next lemma, we verify the compatibility condition \eqref{eq:integralFentropy}
between $F$ and the entropy functional $\eta$.
\begin{lemma}\label{lem:integralcond.comprEuler}
If $P$ satisfies \eqref{eq:pot.properties},
then the condition \eqref{eq:integralFentropy} is satisfied.
\end{lemma}
\begin{proof}
We first consider the integrand of \eqref{eq:integralFentropy}.
Using \eqref{eq:Detastar.comprEuler},
for all $(\dnst,\f mt)\in\C^1({\mathbb{T}^d};\R^{d+1})$ we have
\[
\begin{aligned}
\f F(&D\eta^\ast(\dnst,\f mt)):\nabla
\begin{pmatrix}
\dnst \\
\f mt
\end{pmatrix}
= \begin{pmatrix}
\f mt^T \ Q^{-1}\bp{\bp{\dnst+\frac{\snorm{\f mt}^2}{2}}_+} \\
\f mt\otimes\f mt \ Q^{-1}\bp{\bp{\dnst+\frac{\snorm{\f mt}^2}{2}}_+}
+ p\circ Q^{-1}\bp{\bp{\dnst+\frac{\snorm{\f mt}^2}{2}}_+}\, I
\end{pmatrix}
:\nabla
\begin{pmatrix}
\dnst \\
\f mt
\end{pmatrix}
\\
&= \f mt\cdot\nabla\Bp{\dnst+\frac{\snorm{\f mt}^2}{2}} \
Q^{-1}\Bp{\Bp{\dnst+\frac{\snorm{\f mt}^2}{2}}_+}
+ p\circ Q^{-1}\Bp{\Bp{\dnst+\frac{\snorm{\f mt}^2}{2}}_+} \dv\f mt
\\
&=\dv \bb{\f mt \,\eta^\ast(\dnst,\f mt)},
\end{aligned}
\]
where the last equality follows from \eqref{eq:etastar.comprEuler}.
Integrating this identity over ${\mathbb{T}^d}$, which has no boundary,
yields \eqref{eq:integralFentropy}.
\end{proof}
\begin{remark}\label{rem:entropyflux}
We can also show \eqref{eq:integralFentropy}
by verifying the alternative condition \eqref{eq:entropyflux.new}.
Indeed, we can use \eqref{eq:etastar.comprEuler} and
\eqref{eq:Detastar.comprEuler} to derive
\[
\begin{aligned}
\f F(D\eta^\ast(\dnst,\f mt))
&= \begin{pmatrix}
\f mt^T \ Q^{-1}\bp{\bp{\dnst+\frac{\snorm{\f mt}^2}{2}}_+} \\
\f mt\otimes\f mt \ Q^{-1}\bp{\bp{\dnst+\frac{\snorm{\f mt}^2}{2}}_+}
+ p\circ Q^{-1}\bp{\bp{\dnst+\frac{\snorm{\f mt}^2}{2}}_+}\, I
\end{pmatrix}
\\
&= \begin{pmatrix}
\f mt^T \ D_{\dnst}\eta^\ast(\dnst,\f mt) \\
\f mt\otimes D_{\f mt}\eta^\ast(\dnst,\f mt)
+ \eta^\ast(\dnst,\f mt)\, I
\end{pmatrix}\\
&=D \bb{
\f mt \,\eta^\ast(\dnst,\f mt) }
= D \bb{\f mt\, p (D_{\dnst}\eta^\ast(\dnst,\f mt))}
= D \bb{\tilde{\f q}\circ D\eta^\ast}(\dnst,\f mt)
\end{aligned}
\]
for $\tilde{\f q}(\dns,\f m)=\frac{\f m}{\dns} p(\dns)$.
This shows \eqref{eq:entropyflux.new},
which implies \eqref{eq:integralFentropy} by Remark \ref{rem:integralcondition}.
However,
we cannot use the classical entropy-flux condition \eqref{eq:entropypair}
in the present situation,
since $\f F$ is not differentiable.
\end{remark}
It remains to show that
the function $\mathcal K$ defined in \eqref{K:CompEul} is a suitable choice.
To show this, we have to impose more restrictive conditions
on $p$ and $P$.
\begin{lemma}\label{lem:nonlin.convex.compEuler}
Assume that additionally to \eqref{eq:pot.properties}
there exist constants $c,\alpha>0$ such that
\begin{equation}\label{eq:est.pres.by.pot}
p(\dns)\leq c(1+P(\dns)),
\end{equation}
for all $h\geq 0$,
and such that the functions $p$ and $\alpha P-p$
are convex and non-negative.
Then \eqref{BoundF} holds for some $C>0$,
and
for any $\f mt\in\C^1({\mathbb{T}^d};\R^d)$
the mapping
\begin{equation}
\dom\mathcal{E}\to\R,\quad
(\dns,\f m)\mapsto \int_{{\mathbb{T}^d}}
\left (\frac{ \f m \otimes \f m}{\dns} + p(\dns)I \right ): \nabla \f mt
\,\mathrm{d} \f x
+ \mathcal{K}_\alpha(\f mt )\mathcal{E}(\dns ,\f m) \label{mapping}
\end{equation}
is convex, lower semi-continuous and non-negative
for $\mathcal{K}_{\alpha}$ as in \eqref{K:CompEul}.
\end{lemma}
\begin{proof}
For $h\leq 0$, estimate \eqref{BoundF} is trivial,
and for $h>0$ we use Young's inequality and \eqref{eq:est.pres.by.pot} to conclude
\[
\snorm{\f F(\dns,\f m)}
\leq C\Bp{\frac{\snorm{\f m}}{\sqrt{\dns}}\sqrt{\dns}+\frac{\snorm{\f m}^2}{h}+p(\dns)}
\leq C\Bp{h+\frac{\snorm{\f m}^2}{h}+1+P(\dns)}
\leq C\bp{1+\eta(\dns,\f m)},
\]
where we used the superlinear growth of $P$ in the last estimate.
In total, this shows \eqref{BoundF}.
To deduce that \eqref{mapping} is convex and non-negative,
firstly note that
the mapping
\[
(h,\f m)\mapsto
\int_{{\mathbb{T}^d}}\frac{ \f m \otimes \f m}{h} {:} \nabla \f mt
\,\mathrm{d} \f x
+ \mathcal{K}_{\alpha}(\f mt )\int_{\mathbb{T}^d}\frac{| \f m|^2}{2h}\,\mathrm{d}\f x
=\int_{{\mathbb{T}^d}}\frac{ \f m \otimes \f m}{h} {:} ((\nabla \f mt)_{\text{sym}}
+ \frac{1}{2} \mathcal{K}_{\alpha}(\f mt )I )\,\mathrm{d} \f x
\]
is convex and non-negative because the matrix
\[
(\nabla \f mt)_{\text{sym}}
+ \frac{1}{2} \mathcal{K}_{\alpha}(\f mt )I
= (\nabla \f mt)_{\text{sym},+} + (\nabla \f mt )_{\text{sym},-}
+ \frac{1}{2} \mathcal{K}_{\alpha}(\f mt )I
\]
is symmetric and positive semi-definite.
For the term $(\nabla \f mt)_{\text{sym},+}$ this is clear,
and for the remaining term this follows from
$\frac{1}{2}\mathcal{K}_\alpha(\f mt )
\geq \| (\nabla \f mt )_{\text{sym},-}\| _{L^\infty({\T};\R^{d\times d})}$.
Secondly, the mapping
\[
\begin{aligned}
\dns\mapsto
\int_{{\mathbb{T}^d}}
&{}p(\dns) I : \nabla \f mt
\,\mathrm{d} \f x
+ \mathcal{K}_{\alpha}(\f mt )\int_{{\mathbb{T}^d}} P(\dns)\,\mathrm{d} \f x
\\
&=\int_{{\mathbb{T}^d}}
p(\dns) I:(\nabla\f mt)_{\text{sym},+}\,\mathrm{d} \f x
+ \frac{1}{d}\int_{{\mathbb{T}^d}} P(\dns)I:
\bp{\mathcal{K}_{\alpha}(\f mt)I+\alpha d(\nabla\f mt)_{\text{sym},-}}\,\mathrm{d} \f x
\\
&\qquad+ \int_{{\mathbb{T}^d}} \bp{p(\dns)-\alpha P(\dns)}I:(\nabla\f mt)_{\text{sym},-}\,\mathrm{d} \f x
\end{aligned}
\]
is also convex and non-negative.
Indeed, this follows from the convexity of $p$, $P$
and $\alpha P-p$ and from
the fact that $(\nabla\f mt)_{\text{sym},+}$,
$-(\nabla\f mt)_{\text{sym},-}$
and $\mathcal{K}_{\alpha}(\f mt)I+\alpha d(\nabla\f mt)_{\text{sym},-}$
are positive semi-definite in ${\mathbb{T}^d}$.
Note that for the last term,
this follows from
$\mathcal{K}_\alpha(\f mt )
\geq \alpha d\| (\nabla \f mt )_{\text{sym},-}\| _{L^\infty({\T};\R^{d\times d})}$.
In total, the asserted convexity and non-negativity of \eqref{mapping} follows.
Finally, since strong convergence
implies point-wise convergence almost everywhere of a subsequence,
the non-negativity of the mapping~\eqref{mapping} and Fatou's lemma
imply the lower semi-continuity of~\eqref{mapping}.
\end{proof}
Finally, we prove Theorem \ref{thm:exCompEul} on
existence of energy-variational solutions to the compressible Euler system.
\begin{proof}[Proof of Theorem \ref{thm:exCompEul}]
If $p(\dns)=a\dns^\gamma$,
then $P(\dns)=(\gamma-1)^{-1}a\dns^\gamma$,
and one directly sees that all properties from \eqref{eq:pot.properties}
(or even \eqref{eq:pres.properties})
are satisfied, and Lemma \ref{lem:eta.comprEuler} and
Lemma \ref{lem:integralcond.comprEuler} are applicable.
Moreover, we have $p(\dns)=(\gamma-1)P(\dns)$,
so that the assumptions of Lemma \ref{lem:nonlin.convex.compEuler}
are satisfied with $c=\alpha=\gamma-1$.
Furthermore, we may identify $\mathbb{D} = \dom \mathcal{E}$, which is convex, \textit{cf.}~Remark~\ref{rem:domE}.
From Theorem \ref{thm:main} we thus conclude the existence of
an energy-variational solution $(\dns,\f m,E)$
in the sense of Definition~\ref{def:envarEUL*}.
In addition,
Young's inequality implies
\[
|\f m| ^q= \left (\frac{|\f m|}{\sqrt{h}}\right )^q {h}^{\frac{q}{2}} \leq \frac{q}{2} \frac{|\f m| ^2 }{h} + \frac{2-q}{2} h^{\frac{q}{2-q}} = \frac{\gamma}{1+\gamma} \frac{|\f m| ^2 }{h} + \frac{1}{1+\gamma} h ^\gamma \,,
\]
whence
\[
\int_{\T} \dns(t)^\gamma+\snorm{\f m(t)}^q \,\mathrm{d}\f x
\leq
C\,\mathcal{E}(\dns(t),\f m(t))
\leq C\,
\mathcal{E}(\dns_0,\f m_0)
\]
for a.a.~$t\in(0,T)$ and some $C=C(\gamma)>0$.
Finally, the assumptions of Proposition~\ref{prop:betterreg}
are clearly satisfied
so that
$(\dns,\f m)$ belongs to the asserted function class.
\end{proof}
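For orientation, we record the elementary exponent identities behind the Young estimate above: for $\gamma>1$,
\[
q=\frac{2\gamma}{1+\gamma}\in(1,2),
\qquad
\frac{q}{2}=\frac{\gamma}{1+\gamma},
\qquad
2-q=\frac{2}{1+\gamma},
\qquad
\frac{q}{2-q}=\gamma .
\]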
\begin{remark}
It is readily seen that the previous proof also works for more general
pressure laws than the above choice $p(\dns)=a\dns^\gamma$
since it suffices to satisfy condition \eqref{eq:pot.properties} and
the assumptions from Lemma \ref{lem:nonlin.convex.compEuler}
to obtain existence.
For example,
one may consider
pressure laws of the form $p(\dns)=a_1\dns^{\gamma_1}+a_2\dns^{\gamma_2}$
with $a_1,a_2>0$ and $\gamma_1,\gamma_2>1$.
One easily checks that then \eqref{eq:pot.properties}
is satisfied,
and the assumptions of Lemma \ref{lem:nonlin.convex.compEuler}
hold with $c=\alpha=\max\set{\gamma_1,\gamma_2}-1$.
Another example would be the pressure law $p(\dns)=\dns-\log(1+\dns)$
with associated
potential energy $P(\dns)=(1+\dns)\log(1+\dns)-\dns$
from Remark \ref{rem:pressurelaw},
where one can choose $c=\alpha=1$.
\end{remark}
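For the first example in the preceding remark this can be verified directly: writing $P_j(\dns)=\frac{a_j}{\gamma_j-1}\dns^{\gamma_j}$, we have $P=P_1+P_2$ and $p=(\gamma_1-1)P_1+(\gamma_2-1)P_2$, so that for $\alpha=\max\set{\gamma_1,\gamma_2}-1$,
\[
\alpha P-p=\bp{\alpha-(\gamma_1-1)}P_1+\bp{\alpha-(\gamma_2-1)}P_2
\geq 0,
\qquad
p\leq \alpha P\leq \alpha(1+P),
\]
that is, $\alpha P-p$ is a non-negative convex function and \eqref{eq:est.pres.by.pot} holds with $c=\alpha$.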
\subsection{Comparison with dissipative weak solutions}
To compare the notion of energy-variational solutions with
existing solution concepts for the compressible Euler system \eqref{eq:comprEuler},
we recall the notion of dissipative weak solutions
for pressure laws $p(\dns)=a\dns^\gamma$
(cf.~\cite[Def.~2.1]{Fereisl21_NoteLongTimeBehaviorDissSolEuler}).
\begin{definition}\label{def:weakEul}
We call a tuple $(h,\f m, E )\in L^\infty(0,T; L^\gamma ( {\mathbb{T}^d} )) \times L^\infty(0,T; L^q ({\mathbb{T}^d} ;\R^d )) \times \BV$ with $q=2\gamma/(1+\gamma)$ a dissipative weak solution to the compressible Euler system
\eqref{eq:comprEuler} if
there exists a so-called \emph{Reynolds defect}
$\mathfrak R\in L^\infty_{w^*}(0,T;\mathcal{M}({\mathbb{T}^d} ; \R^{d\times d}_{\text{sym},+}))$
such that the equations
\begin{subequations}
\label{weakforn*}
\begin{align}
\int_{\T} h \rho \,\mathrm{d}\f x\Big|_s^t&=\int_s^t \int_{\T} h \partial_t \rho + \f m \cdot \nabla \rho \,\mathrm{d} \f x \,\mathrm{d} \tau, \label{mass*}
\\
\begin{split}
\int_{\T} \f m\cdot \f \varphi \,\mathrm{d} \f x \Big|_s^t
&= \int_s^t \int_{\T} \f m \cdot \partial_t \f \varphi + \left (\frac{\f m \otimes \f m}{h} \right ) : (\nabla \f \varphi )_{\text{sym}}+ a h^\gamma (\di \f \varphi) \,\mathrm{d} \f x \,\mathrm{d} \tau \\
&\qquad + \int_s^t \int_{{\mathbb{T}^d}} \nabla \f \varphi : \,\mathrm{d} \mathfrak R (\tau) \,\mathrm{d} \tau
\end{split}
\label{momentum*}
\end{align}
are fulfilled for all $\rho \in \C^1 ({\mathbb{T}^d}\times[0,T])$ and
$\f\varphi \in \C^1 ({\mathbb{T}^d}\times[0,T];\R^d )$,
and for a.a.~$s,t\in(0,T)$, including $s=0$ with $(\dns (0), \f m (0))= ( \dns_0,\f m_0)$.
The function $E$ is non-increasing and satisfies $E(0+)=\mathcal{E}(h_0,\f m_0)$ and
\begin{equation}
\mathcal{E}(h (t), \f m(t) )
+ c_{\mathfrak R}\int_{{\mathbb{T}^d}}\,\mathrm{d} \tr[\mathfrak R(t)] \leq E(t)
\label{energy*}
\partial_text{\partial_textit{e}}nd{equation}
\end{subequations}
for a.a.~$t\in(0,T)$ and
a constant $ c_{\mathfrak R}\geq 0$.
\end{definition}
Now we show that energy-variational solutions to \eqref{eq:comprEuler}
coincide with dissipative weak solutions in the above sense.
\begin{theorem}\label{thm:measval.CompEul}
Let $p(\dns)=a\dns^\gamma$ with $a>0$ and $\gamma>1$,
and let $ ( h_0, \f m_0)\in L^1({\mathbb{T}^d})\times L^1({\mathbb{T}^d};\R^d)$ satisfy $ \mathcal{E}(h _0,\f m_0 ) < \infty$.
Consider a tuple
$(h,\f m, E )\in L^\infty(0,T; L^\gamma ( {\mathbb{T}^d} )) \times L^\infty(0,T; L^q ({\mathbb{T}^d} ;\R^d )) \times \BV$
with $q=2\gamma/(1+\gamma)$.
Then $(h,\f m, E )$ is an
energy-variational solution in the sense of Definition~\ref{def:envarEUL*}
with $\alpha=\gamma-1$ if and only if it
is a dissipative weak solution in the sense of Definition~\ref{def:weakEul} with $c_{\mathfrak{R}}= \min\left \{ \frac{1}{2}, \frac{1}{d(\gamma-1)} \right \} $.
\end{theorem}
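Before turning to the proof, note that the constant in Theorem~\ref{thm:measval.CompEul} can be rewritten as
\[
c_{\mathfrak R}
=\min\left\{ \frac{1}{2}, \frac{1}{d(\gamma-1)} \right\}
=\frac{1}{\max\set{2,d(\gamma-1)}},
\]
which is exactly the reciprocal of the factor appearing in the sublinear functional $\mathfrak p$ used in the proof below.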
\begin{proof}
Let $(h, \f m, E )$ be an energy-variational solution in the sense of Definition~\ref{def:envarEUL*}.
Since the regularity weight $\mathcal{K}=\mathcal K_\alpha$ given in~\eqref{K:CompEul} is homogeneous of degree one, we may apply Proposition~\ref{prop:equality} in order to infer $E \big|_{s}^t \leq 0$ and
\begin{multline}
- \left [ \int_{{\mathbb{T}^d}} h
\rho
+ \f m \cdot \f \varphi
\,\mathrm{d} \f x \right ] \Big|_{s}^t
+ \int_s^t
\int_{{\mathbb{T}^d}} h \partial_t \rho
+\f m \cdot \nabla \rho
\,\mathrm{d} \f x
\,\mathrm{d} \tau
\\
+ \int_s^t \int_{{\mathbb{T}^d}}
\f m \cdot \partial_t \f \varphi
+ \left (\frac{ \f m \otimes \f m}{h} + a h ^ \gamma I \right ): \nabla \f \varphi
\,\mathrm{d} \f x
+ \mathcal{K}\left (\f \varphi \right )\left [\mathcal{E}(\dns,\f m )- E\right ]
\,\mathrm{d} \tau
\leq 0
\,\label{relenEulmini}
\end{multline}
for every $(\rho , \f \varphi ) \in \C^1([0,T];\C^1({\mathbb{T}^d};\R^m))$ and a.e. $s<t\in[0,T]$, where $E(0+) = \mathcal{E}(\dns_0,\f m_0)$.
For the choice
$\f \varphi \equiv 0$ we infer~\eqref{mass*},
but first merely with an inequality sign.
However, since $\rho$ varies in a linear space,
the equality~\eqref{mass*} follows immediately.
Choosing $\rho=0$ in~\eqref{relenEulmini} instead implies
\begin{align}
\begin{aligned}
- \int_{{\mathbb{T}^d}} \f m \cdot \f \varphi \,\mathrm{d} \f x \Big|_s^t + \int_s^t \int_{{\mathbb{T}^d}}
\f m \cdot \partial_t \f \varphi
&+ \left (\frac{ \f m \otimes \f m}{h} + a h ^ \gamma I \right ): \nabla \f \varphi
\,\mathrm{d} \f x \,\mathrm{d} \tau \\
&\qquad\qquad
\leq \int_s^t\mathcal{K}( \f \varphi ) \left [ E- \mathcal{E}(h ,\f m) \right ] \,\mathrm{d} \tau \,.
\end{aligned}
\label{esttimederiEul}
\partial_text{\partial_textit{e}}nd{align}
The left-hand side of \eqref{esttimederiEul} defines
a linear functional $\f l$ by
\begin{align}
\langle\f l , \f \varphi \rangle ={}&
- \int_{{\mathbb{T}^d}} \f m \cdot \f \varphi \,\mathrm{d} \f x \Big|_0^T + \int_0^T \int_{{\mathbb{T}^d}}
\f m \cdot \partial_t \f \varphi
+ \left (\frac{ \f m \otimes \f m}{h} + a h ^ \gamma I \right ): \nabla \f \varphi
\,\mathrm{d} \f x \,\mathrm{d} \tau
\end{align}
for $\f \varphi\in \mathcal V$, where
\[
\mathcal V:=\{\f \varphi\in \C^1({\T}\times[0,T];\R^d) \mid\int_{\T}\f\varphi\,\mathrm{d} \f x =0\}.
\]
We define the sublinear mapping $\mathfrak p$ by
\[
\begin{aligned}
&\mathfrak p : L^1(0,T;\C({\T} ; \R^{d\times d}_{\text{sym}})) \ra \R, \\
&\mathfrak p(\Phi) := \max \set{ 2,d(\gamma-1)} \int_0^T \| (\Phi)_{-}\|_{\C({\mathbb{T}^d};\R^{d\times d})}(E - \mathcal{E}(h,\f m))\,\mathrm{d} t.
\end{aligned}
\]
From~\eqref{esttimederiEul}, we infer the estimate $\langle \f l, \f \varphi \rangle \leq \mathfrak p(\mathcal{I}(\f \varphi))$ for all $\f \varphi \in \mathcal{V}$.
Lemma~\ref{lem:hahn} shows the existence of an element
$\mathfrak R \in L^\infty_{w^*}(0,T;\mathcal{M}({\T} ; \R_{\text{sym}}^{d\times d}
)) $
satisfying
\[
\forall\Phi\in L^1(0,T; \C({\T}; \R_{\text{sym}}^{d\times d })):\ \langle -\mathfrak R, \Phi\rangle \leq \mathfrak p(\Phi),
\qquad
\forall\f \varphi \in \mathcal V: \ \langle -\mathfrak R, \nabla \f \varphi\rangle = \langle \f l , \f \varphi \rangle.
\]
As for the incompressible Euler equations
(see the proof of Theorem \ref{thm:incompEuler}),
we show that
$\mathfrak R \in L^\infty_{w^*}(0,T;\mathcal{M}({\T} ; \R_{\text{sym},+}^{d\times d}))$
and that \eqref{momentum*} holds for all $\f \varphi\in \C^1({\T}\times[0,T]; \R ^d )$.
Considering
$\Phi(x,t)= -\psi(t)I$ for some $\psi\in\C_0^{1}([0,T))$ with $\psi\geq 0$,
we further have
\[
\int_0^T \psi(t) \int_{{\mathbb{T}^d}} \,\mathrm{d} \tr[\mathfrak R] (t) \,\mathrm{d} t
=\langle -\mathfrak R, \Phi\rangle
\leq \mathfrak p(\Phi)
=\max\big \{2, d(\gamma-1) \big\}\int_0^T \psi(t) (E - \mathcal{E}(h,\f m))\,\mathrm{d} t.
\]
Since $\psi\geq 0$ is arbitrary, this directly implies \eqref{energy*}
for a.a.~$t\in(0,T)$
with $c_{\mathfrak R}= \min\left \{ \frac{1}{2}, \frac{1}{d(\gamma-1)} \right \}$ as in the assertion.

In order to infer the converse implication,
let $(h,\f m, E )$ be a dissipative weak solution in the sense of Definition~\ref{def:weakEul}.
Adding $ E |_{s}^{t}\leq 0$ for $s<t$ and
the identities~\eqref{mass*} and~\eqref{momentum*} with
$\rho = - \phi$ and $\f \varphi = - \f \psi$, we infer
\begin{equation}
\label{eq:distoev.compr}
\begin{aligned}
&\left [ E - \int_{{\mathbb{T}^d}} h \phi
+ \f m \cdot \f \psi
\,\mathrm{d} \f x \right ] \Big|_s^t + \int_s^t
\int_{{\mathbb{T}^d}} h \partial_t \phi
+\f m \cdot \nabla \phi
\,\mathrm{d} \f x
\,\mathrm{d} \tau
\\& \quad
+ \int_s^t \int_{{\mathbb{T}^d}}
\f m \cdot \partial_t \f \psi
+ \left (\frac{ \f m \otimes \f m}{h} + a h^\gamma I \right ): \nabla \f \psi
\,\mathrm{d} \f x \,\mathrm{d} \tau
+ \int_s^t \int_{{\mathbb{T}^d}} \nabla \f \psi : \,\mathrm{d} \mathfrak R (\tau) \,\mathrm{d} \tau
\leq 0\,
\end{aligned}
\end{equation}
for a.e.~$s<t\in(0,T)$ and all test functions
$(\phi,\f \psi )\in \C^1({\mathbb{T}^d}\times [0,T])\times\C^1({\mathbb{T}^d}\times [0,T];\R^d)$.
From $ \mathfrak R \in L^\infty_{w^*}(0,T;\mathcal M({\mathbb{T}^d};\R^{d\times d}_{\text{sym},+}))$,
the duality between spectral norm and trace norm,
H\"older's inequality, and inequality~\eqref{energy*}, we infer
\begin{align*}
\int_{{\mathbb{T}^d}} \nabla \f \psi : \,\mathrm{d} \mathfrak R
&\geq \int_{{\mathbb{T}^d}} (\nabla \f \psi)_{\text{sym},-} : \,\mathrm{d} \mathfrak R
\geq -\|(\nabla \f \psi)_{\text{sym},-} \|_{L^\infty({\mathbb{T}^d};\R^{d\times d})} \int_{{\mathbb{T}^d}} I : \,\mathrm{d} \mathfrak R
\\
& \geq \|(\nabla \f \psi)_{\text{sym},-} \|_{L^\infty({\mathbb{T}^d};\R^{d\times d})} c_{\mathfrak{R}}^{-1} \bb{ \mathcal{E}(\dns , \f m)-E} \,
= \mathcal K_\alpha(\f \psi) \bb{ \mathcal{E}(\dns , \f m)-E}
\end{align*}
a.e.~in $(0,T)$,
where $\alpha=\gamma-1$.
Using these estimates in \eqref{eq:distoev.compr} yields
\eqref{relenEul*}.
\end{proof}
\subsection{Relative entropy inequality and weak-strong uniqueness}
It is readily shown that the Hypothesis~\ref{hypo:smooth}
is fulfilled for the compressible Euler equations~\eqref{eq:comprEuler},
and that the weak-strong uniqueness principle of Corollary~\ref{cor:uni} holds.
In particular, relation~\eqref{eq:entropyflux.new} was already observed in Remark~\ref{rem:entropyflux}. Nevertheless, the calculation of the relative entropy inequality~\eqref{inuniqu} for this non-quadratic energy remains a nonstandard task,
and we exemplify it here for the reader's convenience.
All calculations are done along the lines of Proposition~\ref{prop:weakstrong}.
Note that during the calculations only~\eqref{eq:pres.from.pot} is used, but in order to
derive weak-strong uniqueness, we explicitly need~\eqref{eq:pot.properties} and
the assumptions of Lemma~\ref{lem:nonlin.convex.compEuler}.
The relative total entropy $\mathcal{R}$ is given by
\begin{align*}
\mathcal{R}(h,\f m | \tilde{h},\tilde{\f m} ) ={}& \int_{\T} \frac{|\f m|^2}{2h} - \frac{|\tilde{\f m}|^2}{2\tilde{h}}- \frac{\tilde{\f m}}{\tilde{h}}\cdot\left ( \f m-\tilde{\f m}\right ) +\frac{|\tilde{\f m}|^2}{2\tilde{h}^2}( h -\tilde{h}) \,\mathrm{d} \f x\\& + \int_{\T} P(\dns)-P(\tilde{h})-P'(\tilde{h})(\dns-\tilde{h})
\,\mathrm{d} \f x
\\
={}& \int_{\T} \frac{h}{2} \left |\frac{\f m}{h} - \frac{\tilde{\f m}}{\tilde{h}}\right |^2+P(\dns)-P(\tilde{h})-P'(\tilde{h})(\dns-\tilde{h})
\,\mathrm{d} \f x
\,,
\end{align*}
the system operator $\mathcal{A}$ by
\[
\mathcal{A}(\tilde{h}, \tilde{\f m}) = \begin{pmatrix}
\partial_t \tilde{h} + \di \tilde{\f m} \\
\partial_t \tilde{\f m} + \di \left (\frac{\tilde{\f m}\otimes \tilde{\f m}}{\tilde{h}} + p(\tilde{h}) I \right )
\end{pmatrix},
\]
and the relative Hamiltonian is defined via
\[
\begin{aligned}
&\mathcal{W}(h,\f m|\tilde{h},\tilde{\f m}) \\
&= \int_{\T} \left [h \left (\frac{\f m}{h} -\frac{\tilde{\f m}}{\tilde{h}}\right ) \otimes \left ( \frac{\f m}{h} -\frac{\tilde{\f m}}{\tilde{h}}\right )+
\left (
p(\dns)-p(\tilde{h})-p'(\tilde{h})(\dns-\tilde{h})
\right ) I \right ]: \left (\nabla \left (\frac{\tilde{\f m}}{\tilde{h}}\right )\right )_{\text{sym}} \!\!\,\mathrm{d} \f x \\
&\qquad+ \mathcal{K}\left(\frac{\tilde{\f m}}{\tilde{h}}\right ) \mathcal{R}(h,\f m| \tilde{h},\tilde{\f m}) \,,
\end{aligned}
\]
where the regularity measure $\mathcal{K}$ is given as above.
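For the reader's convenience, we note that the second expression for $\mathcal{R}$ above follows from expanding the square: for $h,\tilde{h}>0$,
\[
\frac{h}{2} \left|\frac{\f m}{h} - \frac{\tilde{\f m}}{\tilde{h}}\right|^2
=\frac{|\f m|^2}{2h}-\frac{\f m\cdot\tilde{\f m}}{\tilde{h}}+\frac{h\,|\tilde{\f m}|^2}{2\tilde{h}^2}
=\frac{|\f m|^2}{2h} - \frac{|\tilde{\f m}|^2}{2\tilde{h}}- \frac{\tilde{\f m}}{\tilde{h}}\cdot\left( \f m-\tilde{\f m}\right) +\frac{|\tilde{\f m}|^2}{2\tilde{h}^2}( h -\tilde{h}),
\]
since $-\frac{|\tilde{\f m}|^2}{2\tilde{h}}+\frac{|\tilde{\f m}|^2}{\tilde{h}}-\frac{|\tilde{\f m}|^2}{2\tilde{h}}=0$.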
\begin{proposition}
Let $(\dns,\f m,E)$ be an energy-variational solution in the sense of Definition~\ref{def:envarEUL*}
with initial value $(\dns_0,\f m_0)$.
Then $(\dns,\f m)$
fulfills the relative entropy inequality
\begin{multline}
\left [\mathcal{R}(h,\f m | \tilde{h},\tilde{\f m} ) + E - \mathcal{E}(h,\f m ) \right ]\Big|_s^t - \int_s^t \mathcal{K}\left (\frac{\tilde{\f m}}{\tilde{h}}\right ) \left [\mathcal{R}(h,\f m | \tilde{h},\tilde{\f m} ) + E - \mathcal{E}(h,\f m ) \right ] \,\mathrm{d} \tau
\\
+\int_s^t \mathcal{W}(h,\f m| \tilde{h},\tilde{\f m}) + \left \langle \mathcal{A}(\tilde{h},\tilde{\f m}) ,
\begin{pmatrix}
P''(\tilde{h})(h-\tilde{h}) - \frac{h\tilde{\f m}}{\tilde{h}^2}\left (\frac{\f m }{h}-\frac{\tilde{\f m}}{\tilde{h}}\right ) \\
\frac{h}{\tilde{h}}\left ( \frac{\f m}{h}-\frac{\tilde{\f m}}{\tilde{h}}\right )
\end{pmatrix}
\right \rangle \,\mathrm{d} \tau \leq 0 \,
\label{relenEuler}
\end{multline}
for all $\tilde{h} \in \C^1({\mathbb{T}^d} \times [0,T]; (0,\infty))$ and $\tilde{\f m} \in \C^1({\mathbb{T}^d}\times [0,T]; \R^d)$.
Moreover, if $p(h)=a h^\gamma$,
and $(\tilde{h},\tilde{\f m})$ is a (classical) solution to \eqref{eq:comprEuler} with
$(\tilde{h},\tilde{\f m})(0)=(\tilde{h}_0,\tilde{\f m}_0)$,
then $(h,\f m)=(\tilde{h},\tilde{\f m})$.
\end{proposition}
\begin{proof}
First we calculate the second derivative of the entropy function $\eta$ and multiply it with the difference $(h - \tilde{h} , \f m -\tilde{\f m})$, which implies
\begin{align*}
D ^2 \eta ( \tilde{h},\tilde{\f m}) \begin{pmatrix}
h-\tilde{h} \\
\f m - \tilde{\f m}
\end{pmatrix} ={}& \begin{pmatrix}
P''(\tilde{h} )
+ \frac{|\tilde{\f m}|^2}{\tilde{h}^3}& - \frac{\tilde{\f m}^T}{\tilde{h}^2}
\\-\frac{\tilde{\f m} }{\tilde{h}^2}& \frac{1}{\tilde{h}}I
\end{pmatrix}
\begin{pmatrix}
h-\tilde{h} \\
\f m - \tilde{\f m}
\end{pmatrix}\\={}& \left (
P''(\tilde{h})
(h-\tilde{h}) - \frac{h\tilde{\f m}}{\tilde{h}^2}\left (\frac{\f m }{h}-\frac{\tilde{\f m}}{\tilde{h}}\right ) , \frac{h}{\tilde{h}}\left ( \frac{\f m}{h}-\frac{\tilde{\f m}}{\tilde{h}}\right ) \right )^T \,.
\end{align*}
This gives the term the system operator~$\mathcal{A}(\tilde{h},\tilde{\f m})$ is tested with in~\eqref{relenEuler}.
For this, we use the identity $\frac{p'(\tilde{h}) }{\tilde{h}}= P''(\tilde{h})$ and observe by a direct calculation that
\begin{equation}
\begin{aligned}
\Big \langle \mathcal{A}(\tilde{h},\tilde{\f m}) &,
\begin{pmatrix}
P''(\tilde{h})
(h-\tilde{h}) - \frac{h\tilde{\f m}}{\tilde{h}^2}\left (\frac{\f m }{h}-\frac{\tilde{\f m}}{\tilde{h}}\right ) \\
\frac{h}{\tilde{h}}\left ( \frac{\f m}{h}-\frac{\tilde{\f m}}{\tilde{h}}\right )
\end{pmatrix}
\Big \rangle
\\
&=
\int_{\T} \partial_t \tilde{h} P''(\tilde{h}) (h-\tilde{h}) - \partial_t \tilde{h} \frac{h\tilde{\f m}}{\tilde{h}^2}\cdot\left (\frac{\f m }{h}-\frac{\tilde{\f m}}{\tilde{h}}\right ) \,\mathrm{d} \f x
\\
&\qquad
+ \int_{\T}
\di \tilde{\f m}
\left (
P''(\tilde{h}) (h-\tilde{h}) - \frac{h\tilde{\f m}}{\tilde{h}^2}\cdot\left (\frac{\f m }{h}-\frac{\tilde{\f m}}{\tilde{h}}\right ) \right ) \,\mathrm{d} \f x
\\
&\qquad
+ \int_{\T} \left (\partial_t \tilde{h} \frac{\tilde{\f m}}{\tilde{h}} + \tilde{h} \partial_t \frac{\tilde{\f m}}{\tilde{h}}\right )\cdot \frac{h}{\tilde{h}}\left ( \frac{\f m}{h}-\frac{\tilde{\f m}}{\tilde{h}}\right ) \,\mathrm{d} \f x \\
&\qquad
+ \int_{\T} \left (
\di \tilde{\f m} \frac{\tilde{\f m}}{\tilde{h}}+ \tilde{\f m}\cdot \nabla \left ( \frac{\tilde{\f m}}{\tilde{h}}\right )
+ \nabla \tilde{h}\, p'(\tilde{h}) \right )\cdot \frac{h}{\tilde{h}}\left ( \frac{\f m}{h}-\frac{\tilde{\f m}}{\tilde{h}}\right ) \,\mathrm{d} \f x
\\
&=
\int_{\T} \left [\partial_t\tilde{h} + \di \tilde{\f m} \right ]
P''(\tilde{h}) (h-\tilde{h}) \,\mathrm{d} \f x
\\
&\qquad
+ \int_{\T} \left [ \partial_t \left (\frac{\tilde{\f m}}{\tilde{h}} \right )+ \frac{\tilde{\f m}}{\tilde{h}} \cdot \nabla \left ( \frac{\tilde{\f m}}{\tilde{h}}\right )
+ \nabla \tilde{h}\, P''(\tilde{h}) \right ]\cdot h\left ( \frac{\f m}{h}-\frac{\tilde{\f m}}{\tilde{h}}\right ) \,\mathrm{d} \f x \,.\label{soloperator}
\end{aligned}
\end{equation}
Adding and subtracting the energy $\mathcal{E}(h,\f m)$ in the first term
of the energy-variational formulation~\eqref{relenEul*}
and choosing $\rho =
P'(\tilde{h})- \frac{|\tilde{\f m}|^2}{2\tilde{h}^2} $ and $\f \varphi = \frac{\tilde{\f m}}{\tilde{h}}$
as test functions, we further observe that
\[
\begin{aligned}
&\left [ E- \mathcal{E}(h,\f m) + \int_{{\mathbb{T}^d}}
P(\dns)-P'(\tilde{h})h + \frac{|\f m|^2}{2h} - \f m \cdot \frac{\tilde{\f m}}{\tilde{h}} + \frac{| \tilde{\f m}|^2}{2\tilde{h}^2} h \,\mathrm{d} \f x \right ] \Big|_s^t
\\
&\quad
+ \int_s^t \!\!
\int_{{\mathbb{T}^d}} h \partial_t \tilde{h}
P''(\tilde{h})- h \partial_t \frac{\tilde{\f m}}{\tilde{h}}\cdot \frac{\tilde{\f m}}{\tilde{h}}
+
\f m \cdot \nabla\tilde{h}
P''(\tilde{h}) - \f m \cdot \nabla \frac{ \tilde{\f m}}{\tilde{h}} \cdot \frac{ \tilde{\f m}}{\tilde{h}}
\,\mathrm{d} \f x
\,\mathrm{d} \tau
\\
&\quad
+ \int_s^t \!\!\int_{{\mathbb{T}^d}}
\f m \cdot \partial_t \frac{\tilde{\f m}}{\tilde{h}} + \left (\frac{ \f m \otimes \f m}{h} +
p(h) I\right ): \nabla \frac{\tilde{\f m}}{\tilde{h}}
\,\mathrm{d} \f x
+ \mathcal{K}\left ( \frac{\tilde{\f m}}{\tilde{h}}\right ) \left [ \mathcal{E}(h ,\f m) - E\right ]
\!\,\mathrm{d} \tau
\leq 0
\,.
\end{aligned}
\]
We now invoke the identity
\[
h \partial_t \tilde{h} P''(\tilde{h}) = \partial_t \tilde{h} P''(\tilde{h})\tilde{h} + \partial_t \tilde{h} P''(\tilde{h}) (h-\tilde{h})=
- \partial_t \bb{ P(\tilde{h}) - P'(\tilde{h}) \tilde{h} } + \partial_t \tilde{h} P''(\tilde{h}) (h-\tilde{h})
\]
to introduce the relative energy in the first line.
Subsequently, we use equation~\eqref{soloperator}
to deduce
\[
\begin{aligned}
&\left [ E- \mathcal{E}(h,\f m) +\mathcal{R}( h,\f m | \tilde{h},\tilde{\f m}) \right ] \Big|_s^t+ \int_s^t
\mathcal K \left ( \frac{\tilde{\f m}}{\tilde{h}}\right ) \left [ \mathcal{E}(h ,\f m) - E\right ]
\,\mathrm{d} \tau
\\
&\quad
+ \int_s^t\!\! \int_{{\mathbb{T}^d}}
\left (\frac{ \f m \otimes \f m}{h} +
p(\dns ) I \right ): \nabla \frac{\tilde{\f m}}{\tilde{h}}- \f m \cdot \nabla \frac{ \tilde{\f m}}{\tilde{h}} \cdot\frac{ \tilde{\f m}}{\tilde{h}} - \frac{\tilde{\f m}}{\tilde{h}} \cdot\nabla \frac{\tilde{\f m}}{\tilde{h}}\cdot \f m
\,\mathrm{d} \f x \,\mathrm{d} \tau
\\
&\quad
+ \int_s^t\!\!\int_{\T}
\left [ \frac{\tilde{\f m}}{\tilde{h}} \cdot \nabla \left ( \frac{\tilde{\f m}}{\tilde{h}}\right )
+
\nabla \tilde{h}
P''(\tilde{h})\right ] \cdot h\frac{\tilde{\f m}}{\tilde{h}}
-\left [ \tilde{h} \di \frac{\tilde{\f m}}{\tilde{h}} +\nabla \tilde{h} \cdot \frac{\tilde{\f m}}{\tilde{h}} \right ]
P''(\tilde{h}) (h-\tilde{h})
\,\mathrm{d} \f x \,\mathrm{d} \tau
\\
&\quad
+\int_s^t \left \langle \mathcal{A}(\tilde{h},\tilde{\f m}) ,
\begin{pmatrix}
P''(\tilde{h}) (h-\tilde{h}) - \frac{h\tilde{\f m}}{\tilde{h}^2}\left (\frac{\f m }{h}-\frac{\tilde{\f m}}{\tilde{h}}\right ) \\
\frac{h}{\tilde{h}}\left ( \frac{\f m}{h}-\frac{\tilde{\f m}}{\tilde{h}}\right )
\end{pmatrix}
\right \rangle \,\mathrm{d} \tau
\leq 0\,.\label{lastnumber}
\end{aligned}
\]
With the identity $\tilde{h} P''(\tilde{h})=p'(\tilde{h})$ and
integration by parts,
the second and the third line can be transformed to
\[
\begin{aligned}
\int_s^t\!\!\int_{\T} \di \frac{\tilde{\f m}}{\tilde{h}} \left (
p(\dns){-}p(\tilde{h}) {-}p'(\tilde{h})(\dns{-}\tilde{h})
\right )
+ h \left ( \frac{\f m}{h}- \frac{\tilde{\f m}}{\tilde{h}}\right ) \otimes \left ( \frac{\f m}{h}- \frac{\tilde{\f m}}{\tilde{h}}\right ) : \left (\nabla \frac{\tilde{\f m}}{\tilde{h}} \right )_{\text{sym}}\!\!\,\mathrm{d} \f x \,\mathrm{d}\tau \,,
\end{aligned}
\]
which implies the relative entropy inequality~\eqref{relenEuler}.
The weak-strong uniqueness principle now follows as in the proof of
Corollary~\ref{cor:uni},
where we required that
the relative entropy $\mathcal{R}$ and the relative Hamiltonian $\mathcal{W}$ are non-negative.
For this purpose, we assume $p(h)=a h^\gamma$ again,
which satisfies~\eqref{eq:pot.properties} and
the assumptions of Lemma~\ref{lem:nonlin.convex.compEuler}.
\end{proof}
\small
\begin{thebibliography}{10}
\bibitem{AmbrosioFusoPallara_BVFunctions_2000}
L.~Ambrosio, N.~Fusco, and D.~Pallara.
\newblock {\em Functions of bounded variation and free discontinuity problems}.
\newblock Oxford Mathematical Monographs. The Clarendon Press, Oxford
University Press, New York, 2000.
\bibitem{barbu}
V.~Barbu and T.~Precupanu.
\newblock {\em Convexity and optimization in {Banach} spaces.}
\newblock Springer Monogr. Math. Dordrecht: Springer, 4th updated and revised
edition, 2012.
\bibitem{basaric}
D.~Basari{\'c}.
\newblock Semiflow selection to models of general compressible viscous fluids.
\newblock {\em J. Math. Fluid Mech.}, 23(1):22, 2021.
\bibitem{BreitComp}
D.~Breit, E.~Feireisl, and M.~Hofmanov{\'a}.
\newblock {Solution Semiflow to the Isentropic Euler System}.
\newblock {\em Arch. Ration. Mech. Anal.}, 235(1):167--194, 2020.
\bibitem{weakstrongeuler}
Y.~Brenier, C.~De~Lellis, and L.~Sz{\'e}kelyhidi, Jr.
\newblock Weak-strong uniqueness for measure-valued solutions.
\newblock {\em Comm. Math. Phys.}, 305(2):351--361, 2011.
\bibitem{brezis}
H.~Brezis.
\newblock {\em Functional analysis, {S}obolev spaces and partial differential
equations}.
\newblock Springer, New York, 2011.
\bibitem{numeric}
S.~Bubeck.
\newblock Convex optimization: algorithms and complexity.
\newblock {\em Found. Trends Mach. Learn.}, 8(3-4):231--357, 2015.
\bibitem{dafermosscalar}
C.~M. Dafermos.
\newblock The entropy rate admissibility criterion for solutions of hyperbolic
conservation laws.
\newblock {\em J. Differ. Equ.}, 14(2):202--212, 1973.
\bibitem{dafermos2}
C.~M. Dafermos.
\newblock {\em Hyperbolic Conservation Laws in Continuum Physics}.
\newblock Springer, Berlin, 2016.
\bibitem{DiazLerena2002}
J.~I. D\'{\i}az and M.~B. Lerena.
\newblock On the inviscid and non-resistive limit for the equations of
incompressible magnetohydrodynamics.
\newblock {\em Math. Models Methods Appl. Sci.}, 12(10):1401--1419, 2002.
\bibitem{DiPernaMajda}
R.~J. DiPerna and A.~J. Majda.
\newblock Oscillations and concentrations in weak solutions of the
incompressible fluid equations.
\newblock {\em Comm. Math. Phys.}, 108(4):667--689, 1987.
\bibitem{dunford}
N.~Dunford and B.~J. Pettis.
\newblock Linear operations on summable functions.
\newblock {\em Trans. Am. Math. Soc.}, 47:323--392, 1940.
\bibitem{EiHoLa22}
T.~Eiter, K.~Hopf, and R.~Lasarzik.
\newblock Weak-strong uniqueness and energy-variational solutions for a class
of viscoelastoplastic fluid models.
\newblock {\em Adv. Nonlinear Anal.}, 12(1):20220274, 2023.
\bibitem{evans}
L.~C. Evans.
\newblock {\em Partial differential equations}, volume~19 of {\em Graduate
Studies in Mathematics}.
\newblock American Mathematical Society, Providence, RI, second edition, 2010.
\bibitem{Fan1953}
K.~Fan.
\newblock Minimax theorems.
\newblock {\em Proc. Nat. Acad. Sci. U.S.A.}, 39:42--47, 1953.
\bibitem{Fereisl21_NoteLongTimeBehaviorDissSolEuler}
E.~Feireisl.
\newblock A note on the long-time behavior of dissipative solutions to the
{Euler} system.
\newblock {\em J. Evol. Equ.}, 21(3):2807--2814, 2021.
\bibitem{singular}
E.~Feireisl and A.~Novotn{\'y}.
\newblock {\em Singular limits in thermodynamics of viscous fluids}.
\newblock Advances in mathematical fluid mechanics. Birkh{\"a}user, Basel,
2009.
\bibitem{generic}
M.~Grmela and H.~C. \"Ottinger.
\newblock Dynamics and thermodynamics of complex fluids. {I.} {D}evelopment of
a general formalism.
\newblock {\em Phys. Rev. E}, 56:6620--6632, Dec 1997.
\bibitem{Gwiazda}
P.~Gwiazda, O.~Kreml, and A.~{\'S}wierczewska-Gwiazda.
\newblock Dissipative measure-valued solutions for general conservation laws.
\newblock {\em Ann. Inst. Henri Poincar{\'e}, Anal. Non Lin{\'e}aire},
37(3):683--707, 2020.
\bibitem{weakstrongCompEul}
P.~Gwiazda, A.~\'{S}wierczewska Gwiazda, and E.~Wiedemann.
\newblock Weak-strong uniqueness for measure-valued solutions of some
compressible fluid models.
\newblock {\em Nonlinearity}, 28(11):3873--3890, 2015.
\bibitem{BV}
M.~Heida, R.~I.~A. Patterson, and D.~R.~M. Renger.
\newblock Topologies and measures on the space of functions of bounded
variation taking values in a Banach or metric space.
\newblock {\em J. Evol. Equ.}, 19(1):111--152, Mar 2019.
\bibitem{hopf}
E.~Hopf.
\newblock The partial differential equation {{\(u_t+uu_x=\mu u_{xx}\)}}.
\newblock {\em Commun. Pure Appl. Math.}, 3:201--230, 1950.
\bibitem{envar}
R.~Lasarzik.
\newblock {On the existence of weak solutions in multidimensional
incompressible fluid dynamics}.
\newblock {\em \textsc{WIAS} Preprint, No. 2834, Berlin}, 2021.
\bibitem{maxidss}
R.~Lasarzik.
\newblock Maximally dissipative solutions for incompressible fluid dynamics.
\newblock {\em Z. Angew. Math. Phys.}, 73(1):21, 2022.
\bibitem{lax}
P.~D. Lax.
\newblock Hyperbolic systems of conservation laws. {II}.
\newblock {\em Commun. Pure Appl. Math.}, 10:537--566, 1957.
\bibitem{traffic}
M.~J. Lighthill and G.~B. Whitham.
\newblock On kinematic waves. {II}. {A} theory of traffic flow on long crowded
roads.
\newblock {\em Proc. R. Soc. Lond., Ser. A}, 229:317--345, 1955.
\bibitem{lionsfluid}
P.-L. Lions.
\newblock {\em Mathematical topics in fluid mechanics. {V}ol. 1}.
\newblock The Clarendon Press, New York, 1996.
\bibitem{RaoRen_OrliczSpaces_1991}
M.~M. Rao and Z.~D. Ren.
\newblock {\em Theory of {O}rlicz spaces}, volume 146 of {\em Monographs and
Textbooks in Pure and Applied Mathematics}.
\newblock Marcel Dekker, Inc., New York, 1991.
\bibitem{roubicek}
T.~Roub{\'{\i}}{\v{c}}ek.
\newblock {\em Nonlinear partial differential equations with applications}.
\newblock Birkh{\"a}user, Basel, 2005.
\bibitem{gradient}
F.~Santambrogio.
\newblock {\em Optimal transport for applied mathematicians. {Calculus} of
variations, {PDEs}, and modeling}, volume~87 of {\em Prog. Nonlinear Differ.
Equ. Appl.}
\newblock Cham: Birkh{\"a}user/Springer, 2015.
\bibitem{Schmidt1988}
P.~G. Schmidt.
\newblock On a magnetohydrodynamic problem of {E}uler type.
\newblock {\em J. Differ. Equ.}, 74(2):318--335, 1988.
\bibitem{Secchi1993}
P.~Secchi.
\newblock On the equations of ideal incompressible magneto-hydrodynamic.
\newblock {\em Rend. Sem. Mat. Univ. Padova}, 90(4):103--119, 1993.
\bibitem{applfluid}
L.~D.~G. Sigalotti, E.~Sira, J.~Klapp, and L.~Trujillo.
\newblock Environmental fluid mechanics: Applications to weather forecast and
climate change.
\newblock In L.~D.~G. Sigalotti, J.~Klapp, and E.~Sira, editors, {\em
Computational and Experimental Fluid Mechanics with Applications to Physics,
Engineering and the Environment}, pages 3--36. Springer International
Publishing, Cham, 2014.
\end{thebibliography}
\end{document}
\begin{document}
\begin{frontmatter}
\title{Branching Brownian motion under soft killing}
\author{Mehmet \"{O}z}
\ead{[email protected]}
\ead[url]{https://faculty.ozyegin.edu.tr/mehmetoz/}
\address{Department of Natural and Mathematical Sciences, Faculty of Engineering, \"{O}zye\u{g}in University, Istanbul, T\"urkiye}
\begin{abstract}
We study a $d$-dimensional branching Brownian motion (BBM) among Poissonian obstacles, where a random \emph{trap field} in $\mathbb{R}^d$ is created via a Poisson point process. In the soft obstacle model, the trap field consists of a positive potential which is formed as a sum of a compactly supported bounded function translated at the atoms of the Poisson point process. Particles branch at the normal rate outside the trap field; and when inside the trap field, on top of complete suppression of branching, particles are killed at a rate given by the value of the potential. Under soft killing, the probability that the entire BBM goes extinct due to killing is positive in almost every environment. Conditional on ultimate survival of the process, we prove a law of large numbers for the total mass of BBM among soft Poissonian obstacles. Our result is quenched, that is, it holds in almost every environment with respect to the Poisson point process.
\end{abstract}
\begin{keyword}
Branching Brownian motion \sep law of large numbers \sep Poissonian traps \sep random environment \sep soft killing
\MSC[2020] 60J80 \sep 60K37 \sep 60F05 \sep 60J65
\end{keyword}
\end{frontmatter}
\pagestyle{myheadings}
\markright{BBM under soft killing
}
\section{Introduction}\label{intro}
In this work, we consider a model of a spatial random process in a random environment in $\mathbb{R}^d$, where the random process is a $d$-dimensional branching Brownian motion (BBM), and the random environment is created via a Poisson point process (PPP). We will call an environment in $\mathbb{R}^d$ \emph{Poissonian} if its randomness is created via a PPP. A random \emph{trap field} is formed as a positive potential which is given by the sum of a compactly supported, positive, bounded function translated at the atoms of the PPP. We specify the interaction between the BBM and the random trap field via the \emph{soft killing} rule: the particles are killed at a rate given by the value of the potential inside the trap field. Furthermore, the branching of particles is assumed to be completely suppressed inside the trap field whereas particles branch at a fixed rate outside the trap field. We call the model described here, the model of \emph{BBM among soft obstacles}. We study the growth of mass, that is, the population size, of a BBM evolving in a typical such random environment in $\mathbb{R}^d$. Clearly, the presence of traps tends to decrease the mass compared to a `free' BBM, that is, a BBM in $\mathbb{R}^d$ without any traps. The goal of this paper is to prove a law of large numbers on the reduced mass of the BBM among soft obstacles.
The mass of BBM among random obstacles in $\mathbb{R}^d$ was first studied by Engl\"ander \cite{E2008}, where the random environment was composed of spherical traps of fixed radius with centers given by a PPP, and the interaction between the BBM and the trap field was given by the \emph{mild} obstacle rule: when a particle is inside the traps, it branches at a positive rate lower than usual and there is no killing of particles. Engl\"ander showed that on a set of full measure with respect to the PPP, a kind of law of large numbers holds (see [6, Theorem 1]) for the mass of the process. His result was later improved by \"Oz \cite{O2021} to a strong law of large numbers, including the case of zero branching in the trap field. In both aforementioned works, the challenging part of the proof was the lower bound of the law of large numbers. Under soft killing considered here, the proof of the lower bound is even more delicate as the system tends to produce fewer particles due to possible killing compared to the case of mild obstacles, and there is positive probability for the entire process to be killed by the trap field in finite time. Therefore, one has to condition the process on survival for meaningful results. It is more challenging to show that sufficiently many particles are produced with high probability under soft killing, because at each step of the proof one has to overcome the effect of possible killing of particles, which makes the analysis significantly more elaborate compared to the case of mild obstacles with zero branching inside the trap field.
\subsection{The model}
We now present the model in more detail. Firstly, we introduce the two sources of randomness, the BBM and the random trap field, and then we develop a model of a random process in random environment by specifying an interaction between the random components.
\textbf{1. Branching Brownian motion:} Let $Z=(Z_t)_{t\geq 0}$ be a strictly dyadic $d$-dimensional BBM with branching rate $\beta>0$, where $t$ represents time. The process can be described as follows. It starts with a single particle, which performs a Brownian motion in $\mathbb{R}^d$ for a random lifetime, at the end of which it dies and simultaneously gives birth to two offspring. Starting from the position where their parent dies, each offspring particle repeats the same procedure as their parent independently of others and the parent, and the process evolves through time in this way. All particle lifetimes are exponentially distributed with parameter $\beta>0$. For each $t\geq 0$, $Z_t$ can be viewed as a finite discrete measure on $\mathbb{R}^d$, which is supported at the positions of the particles at time $t$. We use $P_x$ and $E_x$, respectively, to denote the law and corresponding expectation of a BBM starting with a single particle at $x\in\mathbb{R}^d$. For $t\geq 0$ and a Borel set $A\subseteq\mathbb{R}^d$, we write $Z_t(A)$ to denote the mass of $Z$ falling inside $A$ at time $t$, and set $N_t=|Z_t|=Z_t(\mathbb{R}^d)$ to be the (total) mass of BBM at time $t$.
\textbf{2. Trap field:} The random environment in $\mathbb{R}^d$ is created as follows. Let $\Pi$ be a Poisson point process in $\mathbb{R}^d$ with constant intensity $\nu>0$, and $(\Omega,\mathbb{P})$ be the corresponding probability space with expectation $\mathbb{E}$. We now describe a way to obtain a random trap field out of $\Pi$, along with the corresponding trapping rule, which serves as the interaction between the BBM and the Poissonian trap field.
\textbf{Soft obstacles:} Consider a positive, bounded, measurable, and compactly supported \emph{killing function} $W:\mathbb{R}^d\to (0,\infty)$, and for $\omega=\sum_i \delta_{x_i}\in\Omega$ and $x\in\mathbb{R}^d$, define the potential
\begin{equation}
V(x,\omega)=\sum_i W(x-x_i). \label{eqpotential}
\end{equation}
In this case, the Poissonian trap field $K=K(\omega)$ in $\mathbb{R}^d$ is formed as follows:
\begin{equation} \label{eqtraprule}
x\in K(\omega) \:\:\Leftrightarrow\:\: V(x,\omega)>0 .
\end{equation}
The \emph{soft killing} rule is that particles branch at the normal rate $\beta$ when outside $K$, whereas inside $K$ they are killed at rate $V=V(x,\omega)$ and their branching is completely suppressed. Note that the special case of constant killing rate inside spherical traps defined in \eqref{eqtrapfield} below corresponds to taking $W=\alpha\mathbbm{1}_{\bar{B}(0,a)}$ with $\alpha>0$ except that $W$ is not summed on overlapping balls. A formal treatment of BBM killed at rate $V=V(x,\omega)$ in $\mathbb{R}^d$ is given in \cite{LV2012}.
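For instance, if $W=\alpha\mathbbm{1}_{\bar{B}(0,a)}$, then \eqref{eqpotential} simply counts the traps covering a point: for $\omega=\sum_i\delta_{x_i}$,
\[
V(x,\omega)=\alpha\sum_i \mathbbm{1}_{\bar{B}(x_i,a)}(x)=\alpha\,\omega\big(\bar{B}(x,a)\big),
\]
so the killing rate at $x$ is proportional to the number of Poisson atoms within distance $a$ of $x$; as noted above, this differs from the constant-rate rule associated with \eqref{eqtrapfield} only on overlapping balls.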
For $\omega\in\Omega$ we refer to $\mathbb{R}^d$ with $K(\omega)$ attached simply as the random environment $\omega$, and use $P^\omega_x$ to denote the conditional law of the BBM in the random environment $\omega$. For simplicity, set $P^\omega=P^\omega_0$. Observe that under the law $P^\omega$ the BBM has a spatially dependent branching rate
$$ \beta(x,\omega):=\beta\,\mathbbm{1}_{K^c(\omega)}(x). $$
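To make the construction of the potential $V$ and the trap set $K$ concrete, here is a short illustrative Python sketch; the intensity, the indicator killing function and all names (\texttt{nu}, \texttt{alpha}, \texttt{a}) are our own example choices and are not prescribed by the model.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def sample_ppp(nu, L, d=2):
    """Poisson point process of intensity nu on the box [-L, L]^d."""
    n_points = rng.poisson(nu * (2 * L) ** d)
    return rng.uniform(-L, L, size=(n_points, d))

def potential(x, points, alpha=1.0, a=0.5):
    """V(x, omega) = sum_i W(x - x_i) with W = alpha * 1_{|.| <= a}."""
    dists = np.linalg.norm(points - x, axis=1)
    return alpha * np.sum(dists <= a)

points = sample_ppp(nu=0.5, L=10.0)      # one realization omega of the trap centers
x = np.zeros(2)
V_x = potential(x, points)
print("V(0, omega) =", V_x, "; 0 lies in K(omega):", V_x > 0)
# Under the soft killing rule, a particle at x is killed at rate V(x, omega)
# and branches at rate beta only when V(x, omega) = 0.
\end{verbatim}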
The main objective of this paper is to prove a quenched law of large numbers (LLN) for the mass of BBM among the Poissonian trap field introduced above.
We now briefly describe the mild obstacle problem for BBM, which was studied in \cite{E2008} and \cite{O2021}, and serves as motivation to study the current problem. Let the trap field be given by the random set
\begin{equation}
K=K(\omega):=\bigcup_{x_i\in\,\text{supp}(\Pi)}\bar{B}(x_i,a), \label{eqtrapfield}
\end{equation}
where $\bar{B}(x,a)$ denotes the closed ball of radius $a$ centered at $x\in\mathbb{R}^d$. The \emph{mild obstacle} rule is as follows: when a particle of BBM is outside $K$, it branches at rate $\beta_2>0$, whereas when inside $K$, it branches at a lower rate $\beta_1$ with $0\leq\beta_1<\beta_2$. That is, under the law $P^\omega$, the BBM has a spatially dependent branching rate
$$ \beta(x,\omega):=\beta_2\,\mathbbm{1}_{K^c(\omega)}(x)+\beta_1\,\mathbbm{1}_{K(\omega)}(x).$$
We note that $\beta_1$ was taken to be strictly positive in \cite{E2008}, whereas in \cite{O2021} the case of $\beta_1=0$ was allowed. There is no killing of particles in the mild obstacle model.
Unlike the mild obstacle setting, under soft killing one can show that on a set of full $\mathbb{P}$-measure there is positive probability for the entire process to be killed in finite time (see Proposition~\ref{prop1}). Therefore, to obtain meaningful results, the process is conditioned on the event of ultimate survival. Recall that $N_t=|Z_t|$ denotes the mass of BBM at time $t$. Let
\begin{equation} \label{survival}
S_t=\{N_t\geq 1\}, \quad \quad S = \bigcap_{t\geq 0} S_t
\end{equation}
be, respectively, the event of survival up to time $t$, and the event of ultimate survival. We may also write $S_t=\{\tau>t\}$, where $\tau=\inf\{s\geq 0:N_s=0\}$. By continuity of measure from above, one deduces that $\lim_{t\to\infty} P^\omega(S_t) = P^\omega(S)$. Define the law $\widehat{P}^\omega$ as
$$ \widehat{P}^\omega(\:\cdot\:):=P^\omega(\:\cdot\:\mid S). $$
Compared to the mild obstacle problem, the main extra challenge is to show that even under soft killing, in almost every environment conditional on $S$ exponentially many particles are produced for large times with overwhelming probability. This is carried out in Part 1 of the proof of the lower bound of Theorem~\ref{thm1} by making critical use of Lemma~\ref{lemma1} and Lemma~\ref{lemma2}.
For quick reference, the following list collects the different probabilities that we use in this paper. The corresponding expectations will be denoted by similar fonts. In what follows, \emph{free} refers to the model where there is no trap field in $\mathbb{R}^d$.
(i) $\mathbb{P}$ is the law of a homogeneous Poisson point process,
(ii) $P_x$ are the laws of free BBM started by a single particle at $x\in\mathbb{R}^d$,
(iii) $P_x^\omega$ are the conditional laws of BBM started by a single particle at $x$ in the environment $\omega$,
(iv) $\widehat{P}_x^\omega(\:\cdot\:):=P_x^\omega(\:\cdot\: \mid S)$, where $S$ is the event of ultimate survival of the BBM from killing,
(v) $\mathbf{P}_x$ are the laws of free Brownian motion started at $x\in\mathbb{R}^d$,
(vi) $\mathbf{P}_x^\omega$ are the conditional laws of Brownian motion started at $x$ in the environment $\omega$.
\subsection{History}
The aim of this section is to lay the background literature for the current work and to put it in perspective. The study of spatial branching processes among random obstacles in $\mathbb{R}^d$ has originated from Engl\"ander \cite{E2000}, in which a BBM among hard Poissonian obstacles was investigated with the killing rule of \emph{trapping of the first particle}. That is, the entire process is killed at the first hitting of the BBM to the trap field $K$ as opposed to killing only the particle that hits $K$. Equivalently, the event of survival up to time $t$ is defined as
\begin{equation}
\{T>t\} \quad \text{where} \quad T=\inf\{s\geq 0: Z_s(K)\geq 1 \} . \label{firstparticle}
\end{equation}
In \cite{E2000}, Engl\"ander considered a uniform field of traps, and obtained the large-time asymptotic behavior of the annealed probability of survival in $d\geq 2$. Then, Engl\"ander and den Hollander studied in \cite{EH2003} the more interesting case where the trap intensity was radially decaying as
\begin{equation} \label{radialdecay}
\frac{\text{d}\nu}{\text{d}x} \sim \frac{\ell}{|x|^{d-1}}, \quad |x|\rightarrow\infty, \quad \ell>0 ,
\end{equation}
where $\text{d}\nu/\text{d}x$ denotes the density of the mean measure of the PPP with respect to the Lebesgue measure. It was shown in \cite{EH2003} that the decay rate in \eqref{radialdecay} is interesting, because it gives rise to a phase transition at a critical intensity $\ell=\ell_{cr}>0$, at which the behavior of the system changes both in terms of the large-time asymptotics of the annealed survival probability and in terms of the optimal survival strategy. In both \cite{E2000} and \cite{EH2003}, the branching rule was taken as strictly dyadic. Then, in \cite{OCE2017}, the asymptotic results for the survival probability of the system studied in \cite{EH2003} were extended to the case of a BBM with a generic branching law, including the case $p_0>0$, where $p_0$ is the probability that a particle gives no offspring at the end of its lifetime, so that a second mechanism of extinction for the BBM is intrinsically present other than that of the traps. Recently in \cite{OE2019}, conditioning the BBM on the event of survival from hard Poissonian obstacles, \"Oz and Engl\"ander proved several optimal survival strategies in the annealed setting, with particular emphasis on the population size. All works mentioned thus far assumed the hard killing rule in \eqref{firstparticle}.
In \cite{E2008}, Engl\"ander proposed the mild obstacle problem for BBM, in which there is no killing of particles but the branching rate is decreased to a nonzero constant inside $K$, and showed that on a set of full measure with respect to the PPP a kind of LLN holds for the mass of the process. This quenched result was recently improved in \cite{O2021} to the strong law of large numbers, allowing the possibility of no branching inside $K$. The current work is mainly motivated by \cite{E2008} and \cite{O2021}, and aims at proving an LLN for the mass of BBM under soft killing in $\mathbb{R}^d$. We also note that a related problem, in which a critical BBM is killed at a small rate $\varepsilon>0$ inside soft obstacles, was studied in \cite{LV2012}, where the main question was to find the asymptotics of the probability that the BBM ever exits the ball of radius $R$ centered at the origin when $R$ is large.
We refer the reader to \cite{E2007} for a survey, and to \cite{E2014} for a detailed treatment of the topic of BBM among random obstacles. Also, we note that the problem of LLN for spatial branching processes in a free environment in $\mathbb{R}^d$, that is, without obstacles, dates back to \cite{W1967}, where an almost sure result on the asymptotic behavior of certain branching Markov processes was established, covering the SLLN for the local mass of BBM in fixed Borel sets in $\mathbb{R}^d$ as a special case. For more on the LLN in a free environment, one can see \cite{B1992} and \cite{EHK2010}, where the former work proves the SLLN for spatial branching processes in linearly moving Borel sets in both the discrete setting of branching random walk in discrete time and the continuous setting of BBM, and the latter studies the local growth of mass for a large class of branching diffusions. Also, Chapters $2$--$4$ of \cite{E2014} contain a thorough exposition of the SLLN for branching diffusions in various settings.
\subsection{Outline}
The rest of the paper is organized as follows. In Section~\ref{section2}, we present our main result. Section~\ref{section3} contains several preparatory results for the proof of Theorem~\ref{thm1}. In Section~\ref{section4}, we construct the almost sure (a.s.) environment that will be used in the soft obstacle problem. In Section~\ref{section5}, we state and prove a key lemma that will serve as a first step for the proof of our main result. In Section~\ref{section6}, we present the proof of Theorem~\ref{thm1}. Section~\ref{section8} discusses some further related problems.
\section{Main Result} \label{section2}
In this section, we present our main result. To this end, we introduce further notation, and define two relevant constants. Let $\omega_d$ denote the volume of the $d$-dimensional unit ball, and $\lambda_{d,r}$ denote the principal Dirichlet eigenvalue of $-\frac{1}{2}\Delta$ on $B(0,r)$ in $d$ dimensions. Set $\lambda_d=\lambda_{d,1}$. Recall that $\nu>0$ is the constant intensity of the PPP, and define the positive constants
\begin{equation} \label{eqro}
R_0=R_0(d,\nu):=\left(\frac{d}{\nu \omega_d}\right)^{1/d}
\end{equation}
and
\begin{equation} \nonumber
c(d,\nu):=\lambda_d \left(\frac{d}{\nu \omega_d}\right)^{-2/d}.
\end{equation}
With these definitions, observe that $R_0=R_0(d,\nu)=\sqrt{\lambda_d/c(d,\nu)}$. Also, recall the law $\widehat{P}^\omega(\:\cdot\:)=P^\omega(\:\cdot\:\mid S)$ with $S$ as in \eqref{survival}.
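For orientation, these constants are easy to evaluate numerically. The short Python sketch below is illustrative only; it assumes the standard identity $\lambda_d=j_{d/2-1,1}^{2}/2$ for the principal Dirichlet eigenvalue of $-\frac{1}{2}\Delta$ on the unit ball, where $j_{\nu,1}$ denotes the first positive zero of the Bessel function $J_\nu$ (this identity is not used elsewhere in the paper), and is written for $d=2$.
\begin{verbatim}
from math import gamma, pi, sqrt
from scipy.special import jn_zeros

d, nu = 2, 1.0                                # illustrative dimension and PPP intensity
omega_d = pi ** (d / 2) / gamma(d / 2 + 1)    # volume of the d-dimensional unit ball
lam_d = jn_zeros(d // 2 - 1, 1)[0] ** 2 / 2   # principal eigenvalue of -(1/2)Delta on B(0,1)

R0 = (d / (nu * omega_d)) ** (1 / d)
c_dnu = lam_d * (d / (nu * omega_d)) ** (-2 / d)
print("R_0 =", R0, "  c(d,nu) =", c_dnu, "  sqrt(lam_d/c) =", sqrt(lam_d / c_dnu))
\end{verbatim}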
\begin{theorem}[Quenched LLN for BBM among soft Poissonian obstacles, $d\geq 2$]\label{thm1}
Let the random environment in $\mathbb{R}^d$ be given by \eqref{eqpotential} and \eqref{eqtraprule}. Then, under the soft killing rule, in $d\geq 2$, on a set of full $\mathbb{P}$-measure,
\begin{equation} \label{eqthm1}
\underset{t\rightarrow\infty}{\lim} (\log t)^{2/d}\left(\frac{\log N_t}{t}-\beta\right)=-c(d,\nu) \quad \text{in}\:\: \widehat{P}^\omega\text{-probability} .
\end{equation}
\end{theorem}
\begin{remark}[Quenched LLN]
Theorem~\ref{thm1} is called a law of large numbers, because it says that the mass of BBM among Poissonian obstacles grows as its expectation (see Proposition~\ref{prop2}) as $t\rightarrow\infty$ in the sense of convergence in probability. The reason why it is called quenched is that it holds on a set of full $\mathbb{P}$-measure, that is, in almost every environment.
\end{remark}
\begin{remark}[Robustness]
Observe that the result \eqref{eqthm1} does not depend on the details of the killing function $W$ (and hence of the potential $V$), such as $\sup_{x}W(x)$ or $\sup\{|x|:W(x)>0\}$, the latter being the radius about the origin of the compact set $K_0$ on which $W$ is supported. Moreover, in \cite[Theorem 1]{O2021}, the same formula as in \eqref{eqthm1} (except that there was no conditioning on ultimate survival) was obtained as the SLLN for BBM among mild obstacles, where the branching was totally or partially suppressed inside the traps but there was no killing of particles. This means that the result is not only unaffected by the fine details of the trapping mechanism, but is also unaffected by the nature of the traps, be they soft or mild. Therefore, Theorem~\ref{thm1} above and \cite[Theorem 1]{O2021} suggest that the LLN for the mass of BBM among Poissonian obstacles is quite robust to the nature and details of the trapping mechanism.
This can be explained as follows. In almost every environment, the mass of BBM to the leading order is entirely determined by what happens inside large trap-free regions rather than what happens inside the traps. In more detail, `large' clearings (see Definition~\ref{def1}) are present in almost every environment, not only in the case of mild traps but also under soft killing via a potential of the form in \eqref{eqpotential} (to make a connection between the two models, set $a=\sup\{|x|:W(x)>0\}$, where $a$ is the trap radius in \eqref{eqtrapfield} in the case of mild traps), and the BBM is able to hit these clearings soon enough with overwhelming probability regardless of the details of the trapping mechanism. Once the BBM hits such a large clearing, the sub-BBM emanating from the particle that hits the clearing is able to produce sufficiently many particles within this clearing in the remaining time. It is the growth inside the large clearing that determines the mass, to the leading order, of the BBM for large times. Obviously, the sub-BBM evolving inside the large clearing does not feel the effect of the traps. This is why the result is insensitive to the nature and the parameters of the trapping mechanism. The details of this discussion are presented in the proof of the lower bound of Theorem~\ref{thm1}.
\end{remark}
\section{Preparations}\label{section3}
In this section, we collect some preparatory results that will later be used in the proof of Theorem~\ref{thm1}.
Let us introduce some further notation that will be used throughout the paper. We use $\mathbb{N}$ as the set of positive integers and $\mathbb{R}_+$ as the set of positive real numbers. For $x\in\mathbb{R}^d$, we denote by $|x|$ the Euclidean distance of $x$ to the origin. For a set $A\subseteq\mathbb{R}^d$ and $x\in\mathbb{R}^d$, we define their sum in the sense of sum of sets as $x+A:=\{x+y:y\in A\}$. For a set $A\subseteq\mathbb{R}^d$, we denote by $\partial A$ its boundary in $\mathbb{R}^d$. For two functions $f,g:\mathbb{R}_+\to\mathbb{R}_+$, we write $g(t)=o(f(t))$ and $f(t)\sim g(t)$ if $g(t)/f(t)\rightarrow 0$ and $g(t)/f(t)\rightarrow c$ for some positive constant $c>0$ as $t\rightarrow\infty$, respectively. For an event $A$, we use $A^c$ to denote its complement, and $\mathbbm{1}_A$ its indicator function. We will use $c$, $c_1$, $c_2$, etc. to denote generic constants, whose values may change from line to line. The notation $c(p)$ or $c_p$ will be used to mean that the constant $c$ depends on the parameter $p$.
\subsection{Tubular estimate}
Let $X=(X_t)_{t\geq 0}$ denote a standard Brownian motion in $d$ dimensions, and $(\mathbf{P}_x:x\in\mathbb{R}^d)$ be the laws of Brownian motion started at $x$ with corresponding expectations $(\mathbf{E}_x:x\in\mathbb{R}^d)$. We now state a previous result, which is taken from \cite{S1993}, and will be used in the proof of Lemma~\ref{lemma2}. It concerns a Brownian motion in a free environment, and gives a lower bound on the probability that a Brownian motion stays within a fixed distance from the central axis of a `tube' connecting its starting point to a given point. For $x,y\in\mathbb{R}^d$ and $t>0$, consider the line segment $\{x+(y-x)s/t:0\leq s\leq t\}$. One may refer to the set $\{z\in\mathbb{R}^d:\inf_{0\leq s\leq t}|z-(x+(y-x)s/t)|<a\}$ as a tube (or cylinder) of radius $a$ connecting $x$ and $y$ in $\mathbb{R}^d$.
\begin{propa}[Tubular estimate for Brownian motion; \cite{S1993}]
Let $x,y\in\mathbb{R}^d$ be fixed. There exists a constant $c_d>0$ that depends only on dimension $d$ such that for all $t>0$ and $b>0$,
\begin{equation}
\mathbf{P}_x\left(\sup_{0\leq s\leq t}\bigg|X_s-\left(x+\frac{s}{t}(y-x)\right)\bigg|<b \right) \geq c_d\exp\left[-\frac{\lambda_d t}{b^2}-\frac{|y-x|^2}{2t} \right]. \nonumber
\end{equation}
\end{propa}
\subsection{Survival probability}
We first give a definition concerning special random subsets of $\mathbb{R}^d$ in the environment $\omega$, followed by a previous result, which gives an a.s.-environment in the soft obstacle setting. Then, we prove a preliminary result on the survival probability of the BBM among soft obstacles.
\begin{definition} \label{def1}
We call $A\subseteq\mathbb{R}^d$ a \emph{clearing} in the random environment $\omega$ if $A\subseteq K^c$. By a \emph{clearing of radius $r$}, we mean a ball of radius $r$ which is a clearing.
\end{definition}
\begin{propb}[Large almost-sure clearings, soft obstacles; \cite{S1998}]
Let the random environment in $\mathbb{R}^d$ be given by \eqref{eqpotential} and \eqref{eqtraprule}. Then, on a set of full $\mathbb{P}$-measure, there exists $\ell_0>0$ such that for all $\ell>\ell_0$ the cube $[-\ell,\ell]^d$ contains a clearing of radius
\begin{equation} \label{eq0}
R_\ell:=R_0(\log \ell)^{1/d}-(\log \log \ell)^2 ,\:\:\ell>1.
\end{equation}
\end{propb}
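The growth rate $R_0(\log\ell)^{1/d}$ of the largest clearing can also be observed in a simple numerical experiment. The Python sketch below is illustrative only and uses ad hoc parameters; it samples a PPP in $[-\ell,\ell]^2$ and approximates, by a grid search, the radius of the largest ball containing no Poisson point (for a killing function supported in $\bar{B}(0,a)$, a clearing of radius $r$ corresponds to such a point-free ball of radius $r+a$).
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)

def largest_empty_ball_radius(points, L, grid_step=0.25):
    """Approximate radius of the largest point-free ball centered in [-L, L]^2.

    The largest empty ball centered at y has radius equal to the distance
    from y to the nearest Poisson point; we maximize this over a fine grid.
    """
    grid_1d = np.arange(-L, L + grid_step, grid_step)
    xx, yy = np.meshgrid(grid_1d, grid_1d)
    centers = np.column_stack([xx.ravel(), yy.ravel()])
    dists, _ = cKDTree(points).query(centers)
    return dists.max()

nu, L = 1.0, 100.0
points = rng.uniform(-L, L, size=(rng.poisson(nu * (2 * L) ** 2), 2))
R0 = (2 / (nu * np.pi)) ** 0.5                     # R_0(d, nu) for d = 2
print("largest empty-ball radius  ~", largest_empty_ball_radius(points, L))
print("R_0 (log ell)^{1/d}, ell =", L, ":", R0 * np.log(L) ** 0.5)
\end{verbatim}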
\begin{proposition}[Survival probability for BBM among soft obstacles]\label{prop1}
Let the random environment in $\mathbb{R}^d$ be given by \eqref{eqpotential} and \eqref{eqtraprule}. Then, under the soft killing rule, on a set of full $\mathbb{P}$-measure,
$$ 0<P^\omega(S)<1 . $$
\end{proposition}
\begin{proof}
Recall the definitions of $S_t$ and $S$ from \eqref{survival}. It is clear that $P^\omega(S_t)$ is nonincreasing in $t$, and bounded below by zero. Hence, $\lim_{t\to\infty} P^\omega(S_t) = P^\omega(S)$ exists. We will show that on a set of full $\mathbb{P}$-measure there exist constants $c_1=c_1(\omega)$ and $c_2=c_2(\omega)$ such that
\begin{equation}
0<c_1\leq P^\omega(S_t)\leq c_2<1 \label{eqsurvival}
\end{equation}
for all large $t$. To prove the upper bound in \eqref{eqsurvival}, use the single-particle Brownian survival asymptotics among soft obstacles from \cite[Theorem 2.5]{S1993}, which implies that on a set of full $\mathbb{P}$-measure the survival probability goes to zero as $t\to\infty$. This means, there exists $t_0=t_0(\omega)$ such that for all $t\geq t_0$ the probability of survival up to time $t$ is at most $1/2$. This implies, since the branching and motion mechanisms in a BBM are independent, that on a set of full $\mathbb{P}$-measure, $P^\omega(S_t)\leq 1-\exp(-\beta t_0)/2$ for all $t\geq t_0$.
To prove the lower bound in \eqref{eqsurvival}, define
\begin{equation}
\Omega_s=\{\omega\in\Omega:\exists\:\ell_1=\ell_1(\omega), \:\forall\:\ell\geq \ell_1, \:[-\ell,\ell]^d \:\text{contains a clearing of radius}\: R_\ell\} . \label{eqomegas}
\end{equation}
From Proposition B, we know that $\mathbb{P}(\Omega_s)=1$. On the other hand, by the proof of \cite[Thm.\ 5.5.4, p.193]{E2014}, there exists a critical radius, say $R_{cr}$, given by $\lambda_{d,R_{cr}}=\beta$, such that for any $R>R_{cr}$ the probability $p_R$ that at least one particle of the BBM never leaves $B(0,R)$ is positive. Now let $\omega\in\Omega_s$, and choose $R=R(\omega)$ so that
$$ R>R_{cr}+1 \quad \text{and} \quad e^{(2R/R_0)^d}>\ell_1(\omega) , $$
where $\ell_1$ is as introduced in \eqref{eqomegas}. Then, in the environment $\omega$, by definition of $R_\ell$ and $\Omega_s$, the box
$$ C(\omega,d):=\left[-e^{(2R/R_0)^d},e^{(2R/R_0)^d}\right]^d $$
contains a clearing of radius $R$. Let $B(x_0,R)$ be this clearing, where $x_0=x_0(\omega)$ and $B(x_0,R)\subseteq C(\omega,d)$. Consider the following survival strategy for the BBM. Over $[0,1]$, avoid being killed by the trap field and send the initial particle to $B(x_0,1)$; we may additionally require that no branching occurs over $[0,1]$, so that the initial particle is still alive at time $1$. Let this joint strategy have probability $p_s$. Observe that $p_s>0$ since the box $C(\omega,d)$ is fixed and the killing function $W$ is bounded. Then, over $[1,\infty)$, we know from the proof of \cite[Thm.\ 5.5.4, p.193]{E2014} that since $R>R_{cr}+1$, at least one particle of the sub-BBM initiated at time $1$ from within $B(x_0,1)$ never leaves the clearing $B(x_0,R)$ with probability $p_R>0$. Hence, by the Markov property applied at time $1$, for all $t\geq 1$,
\begin{equation} \nonumber
P^\omega(S_t)\geq P^\omega(S) \geq p_s p_R>0 .
\end{equation}
This completes the proof of the lower bound in \eqref{eqsurvival}.
\end{proof}
\subsection{Expected mass}
A first consideration for the mass of BBM among soft obstacles is to calculate its expectation and obtain a formula to the leading order which holds in almost every environment. The expected mass formula below will also be explicitly used in the proof of the upper bound of Theorem~\ref{thm1}.
\begin{proposition}[Expected mass for BBM among soft obstacles] \label{prop2}
On a set of full $\mathbb{P}$-measure,
\begin{equation*}
E^\omega[N_t]=\exp\left[\beta t-c(d,\nu)\frac{t}{(\log t)^{2/d}}(1+o(1))\right].
\end{equation*}
\end{proposition}
\begin{proof}
Consider a general branching mechanism in $K$ such that when inside $K$ particles branch according to the offspring law $(p_k)_{k\geq 0}$ as opposed to binary branching. Let $\mu_1=\sum_{k=1}^\infty k p_k$ be the associated mean number of offspring. Observe that soft killing under the potential $V$ together with complete suppression of branching inside the obstacles is tantamount to the offspring law $(p_k)_{k\geq 0}$ with $p_0=1$ and branching rate $V(x,\omega)$ inside $K$. In this way, both the branching rate and the offspring mean depend on position as
\begin{align}
\beta(x,\omega) &=\beta\,\mathbbm{1}_{K^c(\omega)}(x)+V(x,\omega)\,\mathbbm{1}_{K(\omega)}(x), \label{branchinglaw} \\
\mu(x,\omega) &=2\,\mathbbm{1}_{K^c(\omega)}(x) \label{branchinglaw2} .
\end{align}
Note that $p_0=1$ implies $\mu_1=0$. By the construction in \eqref{eqtraprule}, $V=V\mathbbm{1}_K$. Define $m(x,\omega)=\mu(x,\omega)-1$ and $m_1=\mu_1-1$. Then, $\beta(x,\omega)m(x,\omega)=\beta-(\beta+V)\mathbbm{1}_K$. Applying the classical first moment formula for spatial branching processes $\omega$-wise (see for instance \cite[Lemma 1]{GHK2022} for a more general version), and using \eqref{branchinglaw} and \eqref{branchinglaw2}, we obtain
\begin{align}
E^\omega[N_t] &= \mathbf{E}_0 \left[\exp\left(\int_0^t \beta(X_s,\omega)m(X_s,\omega) ds \right)\right] \nonumber \\
&= e^{\beta t }\mathbf{E}_0 \left[\exp\left(-\int_0^t (\beta\mathbbm{1}_{K(\omega)}(X_s)+V(X_s,\omega))ds\right) \right]. \nonumber
\end{align}
The expectation on the right-hand side is the survival probability up to $t$ of a single Brownian motion among soft obstacles with killing function
\begin{equation}
\widetilde{W}(x)=\beta\mathbbm{1}_{K_0}(x)+W(x), \label{eqw}
\end{equation}
where $K_0$ denotes the compact set on which $W$ is supported, except that the first term in \eqref{eqw} is not summed on the overlapping compacts. This, nonetheless, does not affect the asymptotic behavior of the survival probability (see \cite[Remark 4.2.2]{S1998}). Note that the function $\widetilde{W}$ is also positive, bounded, measurable, and compactly supported. Hence, the result follows from \cite[Theorem 4.5.1]{S1998}.
\end{proof}
\subsection{Large-deviations for BBM in an expanding ball}
For a generic standard Brownian motion $X=(X_t)_{t\geq 0}$ and a Borel set $A\subseteq\mathbb{R}^d$, define $\sigma_A=\inf\{s\geq 0:X_s\notin A\}$ to be the first exit time of $X$ out of $A$. We now describe the model of \emph{BBM with deactivation at a boundary}, which was introduced in \cite{O2021}. For a Borel set $A\subseteq\mathbb{R}^d$, denote by $\partial A$ the boundary of $A$. Consider a family of Borel sets $B=(B_t)_{t\geq 0}$. Let $Z^B=(Z_t^{B_t})_{t\geq 0}$ be the BBM deactivated at $\partial B$, which can be obtained from $Z$ as follows: for each $t\geq 0$, start with $Z_t$, and delete from it any particle whose ancestral line up to $t$ has exited $B_t$ to obtain $Z^{B_t}_t$. This means, $Z^{B_t}_t$ consists of particles of $Z_t$ whose ancestral lines up to $t$ have been confined to $B_t$ over the time period $[0,t]$ (but may have left $B_s$ at an earlier time $s$).
The following result is the first part (the low $\kappa$ regime) of \cite[Theorem 2]{O2021}, and will be used in the proof of the main result. It gives the large-time asymptotic behavior of the probability that the mass of BBM deactivated at the boundary of a subdiffusively expanding ball $B=(B_t)_{t\geq 0}$ is atypically small.
\begin{thmc}[Lower large-deviations for mass of BBM in an expanding ball; Theorem 2, \cite{O2021}] \label{thma}
Let $r:\mathbb{R}_+ \to \mathbb{R}_+$ be increasing such that $r(t)\to\infty$ as $t\to\infty$ and $r(t)=o(\sqrt{t})$. Also, let $\gamma:\mathbb{R}_+ \to \mathbb{R}_+$ be defined by $\gamma(t)=e^{-\kappa r(t)}$, where $\kappa>0$ is a constant. For $t>0$, set $B_t=B(0,r(t))$, $p_t=\mathbf{P}_0(\sigma_{B_t}\geq t)$, and $n_t=|Z_t^{B_t}|$. Then, for any $0<\kappa\leq \sqrt{\beta/2}$,
\begin{equation}
\underset{t\rightarrow\infty}{\lim}\,\frac{1}{r(t)}\log P\left(n_t < \gamma(t)\, p_t e^{\beta t}\right)= -\kappa. \nonumber
\end{equation}
\end{thmc}
\section{The quenched environment}\label{section4}
In this section, we construct the a.s., that is, the quenched environment for the problem of BBM among soft obstacles. The following lemma is a stronger version of \cite[Lemma 1]{O2021} and \cite[Lemma 4.5.2]{S1998}, and will be used to prepare the a.s.-environment for the soft obstacle problem.
\begin{lemma}[A.s.\ clearings, soft obstacles, $d\geq 2$]\label{lemma1}
Let $a\in\mathbb{R}_+$, $b\geq 0$, and $c\geq 1$ be fixed, and define the function $f:\mathbb{R}_+\to\mathbb{N}$ by
$$ f(\ell)=\left\lceil e^{a\ell^{3/2}} \right\rceil .$$
For $\ell>0$, let $x_1,\ldots,x_{f(\ell)}$ be any set of $f(\ell)$ points in $\mathbb{R}^d$, and define the cubes $C_{j,\ell}=x_j+[-\ell,\ell]^d$, $1\leq j\leq f(\ell)$. Then, in $d\geq 2$, on a set of full $\mathbb{P}$-measure, there exists $\ell_0>0$ such that for each $\ell\geq \ell_0$, each of $C_{1,\ell},C_{2,\ell},\ldots,C_{f(\ell),\ell}$ contains a clearing of radius $R_\ell+b$, where $R_\ell$ is given by
\begin{equation}
R_\ell:=\frac{R_0}{5^{1/d}}(\log c\ell)^{1/d} ,\:\:\: \ell>1. \label{eqrell}
\end{equation}
\end{lemma}
\begin{proof}
Let $x_1,x_2,\ldots$ be a sequence of points in $\mathbb{R}^d$, and $C_{j,\ell}:=x_j+[-\ell,\ell]^d$ for $j=1,2,\ldots$ For $k\geq 0$, let $A_{\ell,k}$ be the event that there is a clearing of radius $R_\ell+k$ in each $C_{1,\ell},C_{2,\ell},\ldots,C_{f(\ell+1),\ell}$. Also, for $k\geq 0$, define
$$E_{\ell,k}=\{[-\ell,\ell]^d\:\:\text{contains a clearing of radius $R_\ell+k$}\}.$$
Then, by the homogeneity of the PPP and the union bound,
\begin{equation}
\mathbb{P}(A_{\ell,k}^c)\leq f(\ell+1) \mathbb{P}(E_{\ell,k}^c). \label{eq1lemma3}
\end{equation}
We now estimate $\mathbb{P}(E_{\ell,k}^c)$. Partition $[-\ell,\ell]^d$ into smaller cubes of side length $2(R_\ell+k)$. Inscribe a ball of radius $R_\ell+k$ in each smaller cube, and bound $\mathbb{P}(E_{\ell,k}^c)$ from above as
\begin{equation}
\mathbb{P}(E_{\ell,k}^c)\leq \left[1-e^{-\nu\omega_d (R_\ell+k)^d}\right]^{\lfloor\ell/(R_\ell+k)\rfloor^d}
\leq \exp\left[-\left\lfloor\frac{\ell}{R_\ell+k}\right\rfloor^d e^{-\nu\omega_d (R_\ell+k)^d}\right], \label{eq2lemma3}
\end{equation}
where the estimate $1+x\leq e^x$ is used.
Let
$$\alpha_\ell:=\left\lfloor \ell/(R_\ell+k) \right\rfloor^d e^{-\nu\omega_d (R_\ell+k)^d}.$$
Then, using \eqref{eqro} and \eqref{eqrell}, and that $\log\lfloor\ell/(R_\ell+k) \rfloor\geq \log\frac{\ell}{2 R_\ell}$ for large $\ell$, it follows that
\begin{align}
\log \alpha_\ell&\geq d\log \ell-d\log(2 R_\ell)-\nu\omega_d(R_\ell+k)^d \nonumber \\
&\geq d\log \ell-d\log(2 R_\ell)-\frac{d}{R_0^d}\left[(19/18)^{1/d}R_\ell\right]^d \nonumber \\
&\geq \left(d-\frac{2d}{9}\right)\log \ell \geq \frac{14}{9}\log\ell, \label{eq3lemma3}
\end{align}
for all large $\ell$, where the last line follows due to \eqref{eqrell} and since $d\geq 2$ by assumption. It follows from \eqref{eq2lemma3} and \eqref{eq3lemma3} that for a given $k>0$, for all large $\ell$,
\begin{equation*}
\mathbb{P}(E_{\ell,k}^c)\leq e^{-\alpha_\ell}\leq e^{-\ell^{14/9}}.
\end{equation*}
Then, \eqref{eq1lemma3} yields that for any $n_0\in\mathbb{N}$ beyond which the preceding estimate is valid,
\begin{equation} \label{borelcantelli}
\sum_{n=1}^\infty \mathbb{P}\left(A_{n,k}^c\right) \leq c(n_0)+\sum_{n=n_0}^\infty \left\lceil e^{a(n+1)^{3/2}} \right\rceil e^{-n^{14/9}}<\infty,
\end{equation}
where $c(n_0)$ is a constant that depends on $n_0$, and the series on the right-hand side converges since $3/2<14/9$. Applying the Borel--Cantelli lemma, we conclude that with $\mathbb{P}$-probability one, only finitely many of the events $A_{n,k}^c$ occur. That is, $\mathbb{P}(\Omega_s)=1$, where
\begin{equation} \label{eqlemma10}
\Omega_s=\{\omega:\exists n_1=n_1(\omega)\:\:\forall n\geq n_1,\:\:\text{each}\:\:C_{1,n},\ldots,C_{f(n+1),n}\:\:\text{has a clearing of radius $R_n+k$} \} .
\end{equation}
Let $\omega_0\in\Omega_s$, and $n_1=n_1(\omega_0)$ be as in \eqref{eqlemma10}. Observe that
\begin{equation}
R_{n+1}-R_n\leq \frac{R_0}{5^{1/d}}\left[(\log c(n+1))^{1/d}-(\log c n)^{1/d}\right] \rightarrow 0, \quad n\to\infty . \nonumber
\end{equation}
In particular, there exists $n_2\in\mathbb{N}$ such that for all $n\geq n_2$, $R_{n+1}-R_n\leq 1$. Choose $k=b+1$. (So far the choice of $k>0$ was arbitrary.) For two numbers $x$ and $y$, denote by $x\vee y$ their maximum. To complete the proof, it suffices to show that in the environment $\omega_0$, for each $\ell\geq n_3:=n_1\vee n_2$, each of $C_{1,\ell},\ldots,C_{f(\ell),\ell}$ contains a clearing of radius $R_\ell+b$. Take $\ell\geq n_3$, and let $n=\lfloor\ell\rfloor$, so that $n\geq n_3$ and $n\leq \ell\leq n+1$. Then, since $R_\ell$ is increasing in $\ell$, we have
\begin{equation} \label{tavsancik2}
R_\ell+b \leq R_{n+1}+b \leq R_n+1+b = R_n+k .
\end{equation}
Furthermore,
\begin{equation} \label{tavsancik30}
f(\ell)=\left\lceil e^{a\ell^{3/2}} \right\rceil \leq \left\lceil e^{a(n+1)^{3/2}} \right\rceil = f(n+1) .
\end{equation}
Then, \eqref{eqlemma10}, \eqref{tavsancik2} and \eqref{tavsancik30} imply that for $\ell\geq n_3$, each of $C_{1,\ell},\ldots,C_{f(\ell),\ell}$ contains a clearing of radius $R_\ell+b$. This completes the proof since the choice of $\omega_0\in\Omega_s$ was arbitrary and $\mathbb{P}(\Omega_s)=1$.
\end{proof}
Next, we use Lemma~\ref{lemma1} with a suitably chosen collection of points $\left(x_j:1\leq j\leq f(\ell)\right)$ and a set of parameters $\ell$, $a$, $c$ in order to prepare an a.s.-environment with `high' concentration of `large' clearings, that is, in which the covering radius of the `large' clearings is sufficiently small. We will use this a.s.-environment as the quenched setting for the problem of BBM among soft obstacles.
\begin{proposition}[An a.s.-environment, soft obstacles, $d\geq 2$] \label{prop3}
Let $k>0$ be fixed, and $C(0,kt)=[-kt,kt]^d$ be the cube centered at the origin with side length $2kt$. Let $\rho:\mathbb{R}_+\to\mathbb{R}_+$ be such that
\begin{equation} \label{eqrho}
\rho(t)=(\log t)^{2/3}, \quad t>1 .
\end{equation}
For $b>0$, define the set of environments $\Omega_s=\Omega_s(k,b)$ as
\begin{equation} \label{eqenviron}
\Omega_s=\{\omega\in\Omega:\exists\:t_0\:\:\forall\:t\geq t_0,\:\:\forall\:x\in C(0,kt)\:\: \exists\:y\in B(x,\rho(t))\:\:\text{such that}\: B\left(y,R_{\rho(t)}+b\right)\subseteq K^c\},
\end{equation}
where $R_{\rho(t)}$ is as in \eqref{eqrell} with $c=1$. Then, in $d\geq 2$, $\mathbb{P}(\Omega_s)=1$.
\end{proposition}
\begin{proof}
Consider the simple cubic packing of $C(0,kt)$ with balls of radius $\rho(t)/(2\sqrt{d})$. Then, at most
\begin{equation} \label{eq12}
n_t:=\left\lceil \frac{kt}{\rho(t)/(2\sqrt{d})} \right\rceil^d
\end{equation}
balls are needed so that the cubes circumscribing the balls of the packing cover $C(0,kt)$, say with centers $\left(z_j:1\leq j\leq n_t\right)$. For each $j$, let $B_t^j=B(z_j,\rho(t)/(2\sqrt{d}))$. Now consider generically a simple cubic packing of $\mathbb{R}^d$ by balls $(\mathcal{B}_j:j\in\mathbb{N})$ of radius $R>0$, and let $x\in\mathbb{R}^d$ be any point. It is easy to deduce from elementary geometry that $\min_j \max_{z\in \mathcal{B}_j}|x-z|<(\sqrt{d}/2)4R$, where $\sqrt{d}/2$ is the distance between the center and any vertex of the $d$-dimensional unit cube $C(0,1/2)$: indeed, the center of the nearest packing ball is within distance $\sqrt{d}R$ of $x$ (half the diagonal of a cube of side length $2R$), so that every $z$ in that ball satisfies $|x-z|\leq \sqrt{d}R+R<2\sqrt{d}R$ for $d\geq 2$. Then, since the radius of the packing balls is $\rho(t)/(2\sqrt{d})$ in our case, it follows that
\begin{equation} \label{eq1100}
\forall\,x\,\in C(0,kt), \quad
\underset{1\leq j\leq n_t}{\min}\:\underset{z\in B_t^j}{\max}\:|x-z|<\rho(t).
\end{equation}
We now combine the simple cubic packing of $C(0,kt)$ and Lemma~\ref{lemma1}. Set $\ell=\rho(t)/(2\sqrt{d})=(\log t)^{2/3}/(2\sqrt{d})$ in Lemma~\ref{lemma1}. Then, $t=e^{(2\ell\sqrt{d})^{3/2}}$, and it follows from \eqref{eq12} that for all large $\ell$,
\begin{equation}
n_t\leq \frac{(2k)^d}{\ell^d} e^{d(2\ell\sqrt{d})^{3/2}} \leq e^{d(2\ell\sqrt{d})^{3/2}}. \nonumber
\end{equation}
Now, with the choices $\ell=\rho(t)/(2\sqrt{d})$, $a=d(2\sqrt{d})^{3/2}$, $c=2\sqrt{d}$ and $x_j=z_j$ for $j\leq n_t$, where $(z_j:1\leq j\leq n_t)$ are as above, in view of \eqref{eq1100} and since $\ell\to\infty$ as $t\to\infty$, Lemma~\ref{lemma1} implies the following. For fixed $k>0$ and $b>0$, on a set of full $\mathbb{P}$-measure, there exists $t_0>0$ such that for all $t\geq t_0$, $B(x,\rho(t))$ contains a clearing of radius $\frac{R_0}{5^{1/d}}(\log \rho(t))^{1/d}+b$ for each $x\in C(0,kt)$.
\end{proof}
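The covering property \eqref{eq1100} can be checked numerically. The following Python sketch is illustrative only (the box size, $\rho$ and the test points are arbitrary choices): it builds the centers of a simple cubic packing by balls of radius $\rho/(2\sqrt{d})$ and verifies that every sampled point $x$ of the box satisfies $\min_j\max_{z\in \mathcal{B}_j}|x-z|<\rho$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)

d, L, rho = 2, 10.0, 2.0
R = rho / (2 * np.sqrt(d))                 # radius of the packing balls
n = int(np.ceil(L / R))                    # balls per coordinate direction
grid_1d = -L + R + 2 * R * np.arange(n)    # ball centers on a grid of spacing 2R
centers = np.array(np.meshgrid(*([grid_1d] * d))).reshape(d, -1).T

xs = rng.uniform(-L, L, size=(10_000, d))  # random test points in [-L, L]^d
dist_to_centers = np.linalg.norm(xs[:, None, :] - centers[None, :, :], axis=2)
# max over z in ball j of |x - z| equals |x - center_j| + R
worst = (dist_to_centers + R).min(axis=1).max()
print("max_x min_j max_z |x - z| =", worst, " < rho =", rho)
\end{verbatim}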
In the subsequent proofs, $\Omega_s=\Omega_s(k,b)$ given in \eqref{eqenviron} with a suitable pair $(k,b)$, will be our quenched environment for the problem of BBM among soft obstacles.
The term `overwhelming probability' is henceforth used with a precise meaning, which is given as follows.
\begin{definition}[Overwhelming probability]
Let $(A_t)_{t>0}$ be a family of events indexed by time $t$, and $\mathcal{P}$ be the relevant probability. We say that $A_t$ occurs \emph{with overwhelming probability} if
$$\underset{t\to\infty}{\lim}\mathcal{P}(A_t^c)=0 . $$
\end{definition}
\section{Hitting the moderate clearings}\label{section5}
In this section we show that on the set of full $\mathbb{P}$-measure developed in the previous section, that is, on $\Omega_s$ given in \eqref{eqenviron}, the BBM hits clearings of a certain size over $[0,t]$ for large $t$ with overwhelming $\widehat{P}^\omega$-probability. In the rest of the paper, two types of clearings will be considered according to size: \emph{moderate} clearings have $r(t)\sim (\log\log t)^{1/d}$ and \emph{large} clearings have $r(t)\sim (\log t)^{1/d}$, where $r=r(t)$ is used as the radius of a clearing. The following lemma will play a central role in the proof of the lower bound of Theorem~\ref{thm1}. It concerns the probability and position of hitting of moderate clearings over $[0,t]$ by a BBM among soft obstacles conditioned on survival up to time $t$, and says that with overwhelming probability the BBM hits such a clearing within the horizon $[-kt,kt]^d$ over $[0,t]$. As before we use $Z=(Z_t)_{t\geq 0}$ to denote a BBM in $d$ dimensions, $P_x$ as the law of a free BBM started with a single particle at position $x\in\mathbb{R}^d$, and $P_x^\omega$ as the conditional law of a BBM started with a single particle at position $x\in\mathbb{R}^d$ in the environment $\omega$. Set $P^\omega=P^\omega_0$. Also, recall the definition of $R_0$ from \eqref{eqro}. The \emph{range} (accumulated support) of $Z$ is the process defined by
\begin{equation}
\mathcal{R}(t)=\bigcup_{0\leq s\leq t} \text{supp}(Z_s). \nonumber
\end{equation}
\begin{lemma}[Hitting probability of BBM to moderate clearings]\label{lemma2}
Let $r:\mathbb{R}_+\to\mathbb{R}_+$ be such that
\begin{equation} \label{part1eqradius}
r(t)=\frac{1}{3}\frac{R_0}{5^{1/d}}\left(\frac{2}{3}\right)^{1/d}(\log \log t)^{1/d} , \quad t>e .
\end{equation}
Let $k>\sqrt{2\beta}$ be fixed. For $\omega\in\Omega$ and $t>0$, define
\begin{equation}
\Phi_t^\omega=\{x\in\mathbb{R}^d:B(x,r(t))\subseteq K^c(\omega)\}, \quad\:\: \widehat{\Phi}_t^\omega=\Phi_t^\omega \cap [-kt,kt]^d . \label{eqphi}
\end{equation}
Then, in $d\geq 2$, there exists $\Omega_1\subseteq\Omega$ with $\mathbb{P}(\Omega_1)=1$ such that for every $\omega\in\Omega_1$,
\begin{equation}
\underset{t\to \infty}{\lim}\: P^\omega\left(\mathcal{R}(t)\cap \widehat{\Phi}_t^\omega=\emptyset \:\big\vert\: S_t\right)=0 . \nonumber
\end{equation}
\end{lemma}
\begin{proof}
Call $x\in\mathbb{R}^d$ a \emph{good point} for $\omega\in\Omega$ at time $t$ if $B(x,r(t))$ is a clearing (see Definition~\ref{def1}) in the random environment $\omega$. That is,
$$ \Phi_t^\omega:=\{x\in\mathbb{R}^d:B(x,r(t))\subseteq K^c(\omega)\} $$
is the set of good points associated to the pair $(\omega,t)$. Given $\omega\in\Omega$, for $t>0$ define the events
$$ E_t=E_t(\omega)=\{\mathcal{R}(t)\cap \widehat{\Phi}_t^\omega=\emptyset\} . $$
In words, $E_t$ is the event that the BBM does not hit a good point inside $[-kt,kt]^d$ associated to the pair $(\omega,t)$ over $[0,t]$. By Proposition~\ref{prop1}, on a set of full $\mathbb{P}$-measure, there exists $c_1=c_1(\omega)>0$ such that for all large $t$,
\begin{equation} \label{part1eq11}
P^\omega(E_t \mid S_t) = \frac{P^\omega(E_t \cap S_t)}{P^\omega(S_t)}\leq c_1 P^\omega(E_t\cap S_t) .
\end{equation}
In the rest of the proof, we bound $P^\omega(E_t\cap S_t)$ from above in a typical environment $\omega$.
For $t>1$, introduce the time scale
$$ h(t):=(\log t)^{2/3} . $$
For notational convenience\footnote{We would like to avoid the floor function in notation.}, suppose that $t/h(t)$ is an integer. Split the interval $[0,t]$ into $t/h(t)$ pieces as
$$ [0,h(t)],\: [h(t),2h(t)],\:\ldots\:,[t-h(t),t], $$
and for $j=1,2,\ldots,t/h(t)$, define the intervals $I_{j,t}$ as
$$ I_{j,t}=[(j-1)h(t),j h(t)] . $$
Next, for $t>e$, introduce two space scales as follows: use $\rho(t)$, previously defined in \eqref{eqrho}, as the larger space scale, and $r(t)$ given in \eqref{part1eqradius} as the smaller space scale. That is, we have
$$ \rho(t)=h(t)=(\log t)^{2/3}, \quad r(t)=\frac{1}{3}\frac{R_0}{5^{1/d}}\left(\frac{2}{3}\right)^{1/d}(\log \log t)^{1/d} .$$
Observe that $2r(t)\leq R_{\rho(t)}$ for all large $t$, where $R_\ell$ is as in \eqref{eqrell} with $c=1$ therein; indeed, since $\log\rho(t)=\frac{2}{3}\log\log t$, we have $2r(t)=\frac{2}{3}R_{\rho(t)}$ by \eqref{part1eqradius}. Hence, for each $\omega\in\Omega_s(k,0)$ (see \eqref{eqenviron} for the definition), for all large $t$ any ball of radius $\rho(t)$ centered within $C(0,kt)$ contains a clearing of radius $2r(t)$. In the rest of the proof, we set $\Omega_s=\Omega_s(k,0)$ with $k>\sqrt{2\beta}$ fixed. Recall that $\mathbb{P}(\Omega_s)=1$ by Proposition~\ref{prop3}.
For an interval $I\subseteq [0,\infty)$, define the range of $Z$ over $I$ as
\begin{equation}
\mathcal{R}(I)=\bigcup_{s\in I} \text{supp}(Z_s). \nonumber
\end{equation}
Next, for $t>1$ and $j=1,2,\ldots,t/h(t)$, define the events
$$ E_{j,t}=\{\mathcal{R}(I_{j,t})\cap \widehat{\Phi}_t^\omega=\emptyset\}, \:\: \quad S_{j,t}:=\{N_{jh(t)}\geq 1\}. $$
Observe that
\begin{equation} \label{eq1part3}
E_t \cap S_t = \bigcap_{j=1}^{t/h(t)} (E_{j,t}\cap S_{j,t}).
\end{equation}
Also, for $t>0$, let
\begin{equation} \label{eqmt}
M_t:=\inf\{r\geq 0:\mathcal{R}(t)\subseteq B(0,r)\}
\end{equation}
be the radius of the minimal ball containing the range of BBM at time $t$, and for $t>1$ and $j=1,2,\ldots,t/h(t)$ define the events
\begin{equation} \label{eqfj}
F_{j,t}=\{M_{j h(t)}\leq k j h(t)\}, \:\: \quad F_t=F_{1,t} \cap \ldots \cap F_{t/h(t),t} .
\end{equation}
We now apply repeated conditioning at times $h(t), 2h(t), \ldots, t-h(t)$, and at each intermediate time $j h(t)$, we will throw away the rare event $F_{j,t}^c$. Note that $F_{j,t}^c$ is indeed a rare event since $k>\sqrt{2\beta}$ by assumption and it is well-known that the \emph{speed} of a strictly dyadic BBM is $\sqrt{2\beta}$. Let $\omega\in\Omega_s$. Then, using \eqref{eq1part3}, \eqref{eqfj}, and the union bound,
\begin{align}
P^\omega(E_t, S_t)&\leq P^\omega(E_t, S_t, F_t) + P^\omega(F_{1,t}^c) + \ldots + P^\omega(F_{t/h(t),t}^c) \nonumber \\
& = P^\omega\left(\bigcap_{j=1}^{t/h(t)} (E_{j,t}, S_{j,t}, F_{j,t}) \right)+\sum_{j=1}^{t/h(t)}P^\omega(F_{j,t}^c) \nonumber \\
&= P^\omega\left(\bigcap_{j=2}^{t/h(t)} (E_{j,t}, S_{j,t}, F_{j,t}) \:\bigg\vert\: E_{1,t}, S_{1,t}, F_{1,t}\right)P^\omega(E_{1,t}, S_{1,t}, F_{1,t})+ \sum_{j=1}^{t/h(t)}P^\omega(F_{j,t}^c) \nonumber .
\end{align}
Iterating the argument above at times $2h(t), \ldots, t-h(t)$, and noting that $S_{j,t}=\cap_{k=1}^j S_{k,t}$, we obtain
\begin{equation} \nonumber
P^\omega(E_t, S_t)\leq P^\omega\left(E_{1,t}, S_{1,t}, F_{1,t}\right) \prod_{j=2}^{t/h(t)} P^\omega\left(E_{j,t}, S_{j,t}, F_{j,t} \:\bigg\vert\: S_{j-1,t}\, ,\:\bigcap_{k=1}^{j-1} (E_{k,t}, F_{k,t})\right) + \sum_{j=1}^{t/h(t)}P^\omega(F_{j,t}^c)
\end{equation}
from which it follows that
\begin{equation} \label{eqmain2}
P^\omega(E_t, S_t)\leq P^\omega(E_{1,t})\prod_{j=2}^{t/h(t)}P^\omega\left(E_{j,t} \:\bigg\vert\: S_{j-1,t}\, ,\:\bigcap_{k=1}^{j-1} (E_{k,t}, F_{k,t})\right) + \sum_{j=1}^{t/h(t)}P^\omega(F_{j,t}^c).
\end{equation}
In the rest of the proof, we find an upper bound that is valid for large $t$ on the right-hand side of \eqref{eqmain2} in an environment $\omega\in\Omega_s$.
\textbf{(i) Upper bound on $P^\omega(F_{j,t}^c)$ in any environment}
To estimate $P^\omega(F_{j,t}^c)=P^\omega(M_{jh(t)}>k j h(t))$, we need some control on the spatial spread of the BBM at time $j h(t)$. We start by noting an $\omega$-wise comparison between a BBM among soft obstacles and a free BBM. (Recall that \emph{free} refers to the model where $V\equiv 0$, that is, there is no killing and the BBM branches at rate $\beta$ everywhere in $\mathbb{R}^d$.) The following stochastic domination is clear since the potential $V>0$ can only kill particles and suppress their branching, and otherwise has no effect on the motion of particles. As before, we use $(P_y:y\in\mathbb{R}^d)$ for the laws of a free BBM starting with a single particle at $y\in\mathbb{R}^d$, and set $P=P_0$.
\begin{remark}[Comparison $1$, free environment versus soft killing]
For $t>0$ and $B\subseteq\mathbb{R}^d$, let $Z_t(B)$ denote the mass of $Z$ that falls inside $B$ at time $t$. Then, for all $y\in\mathbb{R}^d$, $B\subseteq \mathbb{R}^d$ Borel, $k\in\mathbb{N}$, and $t>0$,
\begin{equation} \label{eqcomp1}
P_{y}(Z_t(B)<k) \leq P_{y}^\omega(Z_t(B)<k) \quad \text{for each $\omega\in\Omega$} .
\end{equation}
\end{remark}
Then, for any $r>0$, it follows by taking $B=(B(0,r))^c$ and $k=1$ in \eqref{eqcomp1} that
\begin{equation} \label{eqcomp2}
P(M_t\leq r) \leq P^\omega(M_t\leq r) ,
\end{equation}
where $M_t$ is as defined in \eqref{eqmt}. Observe that $M_t/t$ is a kind of speed for the BBM, and measures the spread of $Z$ from the origin over the time interval $[0,t]$.
Let $\mathcal{N}_t$ denote the set of particles of $Z$ that are alive at time $t$, and for $u\in\mathcal{N}_t$, let $(Y_u(s))_{0\leq s\leq t}$ denote the ancestral line up to $t$ of particle $u$. By the \emph{ancestral line up to $t$} of a particle present at time $t$, we mean the continuous trajectory traversed up to $t$ by the particle, concatenated with the trajectories of all its ancestors. Note that when $V\equiv 0$, $(Y_u(s))_{0\leq s\leq t}$ is identically distributed as a Brownian trajectory $(X_s)_{0\leq s\leq t}$ for each $u\in\mathcal{N}_t$. Also note that $N_t=|\mathcal{N}_t|$. Then, using the union bound, for $\gamma>0$,
\begin{equation} \label{eq1part1}
P\left(M_t>\gamma t\right)= P\left(\exists\, u\:\in \mathcal{N}_t:\sup_{0\le s\le t}|Y_u(s)|>\gamma t\right) \le E[N_t]\:\mathbf{P}_0\left(\sup_{0\le s\le t}|X_s|>\gamma t\right).
\end{equation}
It is a standard result that $E[N_t]=\exp(\beta t)$ (one can deduce this, for example, from \cite[Sect.\ 8.11]{KT1975}), and from \cite[Lemma 5]{OCE2017} we have that $\mathbf{P}_0\left(\sup_{0\le s\le t}|X_s|>\gamma t\right)=\exp[-\gamma^2 t/2(1+o(1))]$. Set $\gamma=k$ and replace $t$ by $j h(t)$ in \eqref{eq1part1}. Then, combining \eqref{eqcomp2} and \eqref{eq1part1}, and recalling that $k>\sqrt{2\beta}$,
\begin{align} \label{eq2part1}
P^\omega(F_{j,t}^c)=P^\omega(M_{j h(t)}>k j h(t))&\leq P(M_{j h(t)}>k j h(t)) \nonumber \\
&\leq E[N_{jh(t)}]\:\mathbf{P}_0\bigg(\sup_{0\le s\le j h(t)}|X_s|>kj h(t)\bigg) \nonumber \\
&= \exp[jh(t)(\beta-k^2/2)(1+o(1))] .
\end{align}
It follows from \eqref{eq2part1} that when $k>\sqrt{2\beta}$,
\begin{equation} \label{maineqpiece2}
\sum_{j=1}^{t/h(t)}P^\omega(F_{j,t}^c) \leq t \exp[-h(t)(k^2/2-\beta)(1+o(1))] .
\end{equation}
\textbf{(ii) Upper bound on $P^\omega\left(E_{1,t}\right)$ in a typical environment}
Next, for $\omega\in\Omega_s$, we find an upper bound on $P^\omega\left(E_{1,t}\right)$ that is valid for large $t$. We will estimate $P^\omega(E_{1,t}^c)$ from below, and in order to do that, since $E_{1,t}^c=\{\mathcal{R}(I_{1,t})\cap \widehat{\Phi}^\omega_t\neq \emptyset\}$, we look for a hitting strategy to $\widehat{\Phi}^\omega_t$. We start by noting an $\omega$-wise comparison between a Brownian motion (BM) among soft obstacles and a BBM among soft obstacles. The following comparison is obvious since each particle of BBM follows a Brownian trajectory while alive under the laws $P^\omega_y$.
\begin{remark}[Comparison $2$, BM versus BBM]
Let $(\mathbf{P}^\omega_y:y\in\mathbb{R}^d)$ be the laws under which $X$ is a BM starting at position $y$ and is killed at rate $V=V(x,\omega)$ in the environment $\omega$. Then, for all $y\in\mathbb{R}^d$, $t>0$ and $B\subseteq \mathbb{R}^d$ Borel,
\begin{equation} \nonumber
\mathbf{P}_y^\omega\left((\cup_{0\leq s\leq t}\{X_s\}) \cap B\neq\emptyset\right) \leq P_{y}^\omega\left(\mathcal{R}(t)\cap B\neq\emptyset\right) \quad \text{for each $\omega\in\Omega$} .
\end{equation}
That is, even under the killing potential $V$, it is easier for a BBM to hit any set $B$ than it is for a single BM. We note that a similar comparison between a free BBM and a free BM also holds.
\end{remark}
The remark above implies that
\begin{equation} \label{eqprelim}
\mathbf{P}_0^\omega\left((\cup_{0\leq s\leq h(t)}\{X_s\}) \cap \widehat{\Phi}^\omega_t \neq\emptyset\right) \leq P^\omega(E_{1,t}^c) ,
\end{equation}
and hence, it suffices to estimate $\mathbf{P}_0^\omega\left((\cup_{0\leq s\leq h(t)}\{X_s\}) \cap \widehat{\Phi}^\omega_t \neq\emptyset\right)$ from below. Let $\omega\in\Omega_s$ and choose $t$ large enough. Then, in the environment $\omega$, $B_{1,t}:=B(0,\rho(t))$ contains a clearing of radius $2r(t)$, hence a ball of radius $r(t)$, say $\mathcal{B}_{1,t}$, that is entirely contained in $\Phi^\omega_t$. Since $\rho(t)\leq kt$ for large $t$, we have $\mathcal{B}_{1,t}\subseteq \widehat{\Phi}^\omega_t\cap B_{1,t}$. Let $\mathbf{e}$ be the unit vector in the direction of the center of $\mathcal{B}_{1,t}$ in $\mathbb{R}^d$. Consider the following strategy for a standard BM: over $[0,h(t)]$, avoid being killed by the potential $V$, stay in the tube
$$ T_t:=\left\{z\in\mathbb{R}^d:\inf_{0\leq s\leq h(t)}\bigg|z-\rho(t)\mathbf{e}\,\frac{s}{h(t)}\bigg|<r(t) \right\} , $$
and hit $\mathcal{B}_{1,t}$. The probability of this joint event is at least
\begin{equation} \label{eqjoint}
\exp\left[-h(t)\sup_{x\,\in\,T_t} V(x,\omega)\right] \mathbf{P}_0\left(\sup_{0\leq s\leq h(t)} \bigg|X_s-\rho(t)\mathbf{e}\,\frac{s}{h(t)}\bigg|<r(t)\right) ,
\end{equation}
where the second factor is a lower bound for the probability that the particle stays inside $T_t$ and hits $\mathcal{B}_{1,t}$. Indeed, if the event $\left\{\sup_{0\leq s\leq h(t)} \big|X_s-\rho(t)\mathbf{e}\,\frac{s}{h(t)}\big|<r(t)\right\}$ is realized, this means the particle is in $B(\rho(t)\mathbf{e},r(t))$ at time $h(t)$, which, by continuity of Brownian paths, implies that it must have hit $\mathcal{B}_{1,t}$ over the interval $[0,h(t)]$. By \cite[Lemma 4.5.2]{S1998},
\begin{equation} \label{eqjoint1}
\exp\left[-h(t)\sup_{x\,\in\,T_t} V(x,\omega)\right] \geq \exp[-h(t)\log \rho(t)].
\end{equation}
By the tubular estimate in Proposition A, and since $r(t)\geq 1$ for all large $t$,
\begin{equation} \label{eqjoint2}
\mathbf{P}_0\left(\sup_{0\leq s\leq h(t)} \bigg|X_s-\rho(t)\mathbf{e}\,\frac{s}{h(t)}\bigg|<r(t)\right)\geq c_d \exp\left[-\lambda_d h(t)-\frac{\rho^2(t)}{2h(t)} \right]
\end{equation}
for all large $t$, where $c_d>0$ is a constant that only depends on the dimension. Then, it follows from \eqref{eqprelim}-\eqref{eqjoint2} that for all large $t$,
\begin{equation} \label{eqkey}
P^\omega(E_{1,t}^c) \geq c_d \exp\left[-\left(h(t)\log \rho(t)+\lambda_d h(t)+\frac{\rho^2(t)}{2h(t)}\right) \right] .
\end{equation}
Using $\rho(t)=h(t)$, we see that $\exp[-2h(t)\log h(t)]$ is smaller than the right-hand side of \eqref{eqkey} for large $t$, and hence conclude that for all large $t$,
\begin{equation} \label{eqkey2}
P^\omega(E_{1,t}) \leq 1-e^{-2h(t)\log h(t)} .
\end{equation}
This completes the estimate for $P^\omega\left(E_{1,t}\right)$ when $\omega\in\Omega_s$.
\textbf{(iii) Applying the Markov property at times $h(t), 2h(t),\ldots,t-h(t)$.}
For $t>1$ and $j=2,\ldots,t/h(t)$, abbreviate
$$ A_{j-1,t}:= S_{j-1,t}\cap (\cap_{i=1}^{j-1}(E_{i,t}, F_{i,t})) .$$
We now estimate $P^\omega(E_{j,t} \mid A_{j-1,t})$ in \eqref{eqmain2}. Recall the definition of $F_{j,t}$ from \eqref{eqfj}. Observe that conditional on $A_{j-1,t}$, at time $(j-1)h(t)$ the BBM has at least one particle alive and all particles are within a distance of $k(j-1)h(t)$ from the origin. Pick any particle that is alive\footnote{For concreteness, we may for instance pick the one that is closest to the origin at time $(j-1)h(t)$.} at time $(j-1)h(t)$, call it $u^*$, and let $y^{(j)}_t:=Y_{u^*}((j-1)h(t))$ denote its position at that time. Since $A_{j-1,t}\subseteq F_{j-1,t}$ and $k(j-1)h(t)\leq k(t-h(t))$ for all $j=1,\ldots,t/h(t)$, conditional on $A_{j-1,t}$ we have that $|y^{(j)}_t|\leq k(t-h(t))$. Now let $\omega\in\Omega_s$ and choose $t$ large enough. Then, in the environment $\omega$, $B_{j,t}:=B\big(y^{(j)}_t,\rho(t)\big)$ contains a clearing of radius $2r(t)$, hence a ball of radius $r(t)$, say $\mathcal{B}_{j,t}$, that is entirely contained in $\Phi^\omega_t$ and also in $[-kt,kt]^d$ since $|y^{(j)}_t|\leq k(t-h(t))$ and $\rho(t)=h(t)$. That is, $\mathcal{B}_{j,t}\subseteq \widehat{\Phi}^\omega_t\cap B_{j,t}$. Then, applying the Markov property at time $(j-1)h(t)$, a tubular estimate argument similar to the one used in step (ii) for the case $j=1$ yields that for all large $t$,
\begin{equation} \label{eqmain3}
P^\omega\left(E_{j,t}\mid A_{j-1,t}\right)\leq 1-e^{-2h(t)\log h(t)} ,\quad j=2,\ldots,t/h(t) .
\end{equation}
Then, combining \eqref{eqmain2}, \eqref{maineqpiece2}, \eqref{eqkey2} and \eqref{eqmain3} yields the following conclusion. Provided $k>\sqrt{2\beta}$, in any environment $\omega\in\Omega_s$ for all large $t$,
\begin{align}
P^\omega(E_t\cap S_t)&\leq \left[1-e^{-2h(t)\log h(t)}\right]^{t/h(t)}+ t \exp[-h(t)(k^2/2-\beta)(1+o(1))] \nonumber \\
& \leq \exp\left[-e^{-2h(t)\log h(t)}\frac{t}{h(t)}\right]+ t \exp[-h(t)(k^2/2-\beta)(1+o(1))] \nonumber
\end{align}
where we have used the estimate $1+x\leq e^x$. Since $h(t)=(\log t)^{2/3}$, it follows that
\begin{equation} \nonumber
\underset{t\to\infty}{\lim}\: P^\omega(E_t\cap S_t) = 0 .
\end{equation}
This completes the proof of Lemma~\ref{lemma2} in view of \eqref{part1eq11}.
\end{proof}
\begin{remark}
Note that any conditioning on the events $S_t$ (or on $S$) changes the law of the BBM. In particular, the ancestral lines are no longer Brownian. In the proof of Lemma~\ref{lemma2}, the conditioning on $S_t$ was carried out in stages over successive subintervals $[(j-1)h(t),jh(t)]$ in order to work with Brownian paths.
Also, observe that the trivial bound $P^\omega(E_t\cap S_t)\leq P^\omega(E_t)$ is not useful for proving Lemma~\ref{lemma2}, because the event $E_t$ is realized if the entire process is killed before hitting $\widehat{\Phi}_t^\omega$, which has a probability bounded below by a positive number uniformly for all large $t$.
In the case of mild obstacles, where there is no killing but only a suppression of branching inside the traps, each ancestral line is Brownian under $P^\omega$, and therefore it is sufficient to prove the counterpart of Lemma~\ref{lemma2} for a single Brownian motion (see Lemma 2 in \cite{O2021}). In contrast, in the case of soft obstacles, one has to estimate $P^\omega(E_t\cap S_t)$ for the entire BBM. On the event $E_t\cap S_t$ there is at least one particle at time $t$ whose ancestral line is Brownian under $P^\omega$ and which has survived up to time $t$, but we do not know which particle, and a standard union bound argument over all possible particles existing at time $t$ would not be successful in showing that $P^\omega(E_t\cap S_t)\to 0$ as $t\to\infty$.
We emphasize that all of the aforementioned difficulties arise due to the soft killing mechanism in the model, which was not present in the mild obstacle problem.
\end{remark}
\section{Proof of Theorem~\ref{thm1}}\label{section6}
\subsection{Proof of the upper bound}
For the proof of the upper bound, let $\Omega_1\subset\Omega$ be the intersection of the sets of full $\mathbb{P}$-measure in Proposition~\ref{prop1} and Proposition~\ref{prop2}, and let $\omega\in\Omega_1$. Recall that the law $\widehat{P}^\omega$ is defined by $\widehat{P}^\omega(\:\cdot\:)=P^\omega(\:\cdot\: \mid S)$, and denote by $\widehat{E}^\omega$ the corresponding expectation. Write
$$ \widehat{E}^\omega[N_t] P^\omega(S) = E^\omega[N_t \mathbbm{1}_{S}] \leq E^\omega[N_t] . $$
We know from Proposition~\ref{prop1} that $0<P^\omega(S)<1$, and therefore,
\begin{equation} \label{eqsqueeze}
\widehat{E}^\omega[N_t] \leq \frac{E^\omega[N_t]}{P^\omega(S)}.
\end{equation}
By the Markov inequality, we then have
\begin{equation}
\widehat{P}^\omega\left(N_t>\exp\left[\beta t+\frac{(-c(d,\nu)+\varepsilon)t}{(\log t)^{2/d}}\right]\right)\leq \widehat{E}^\omega[N_t]\exp\left[-\beta t+\frac{(c(d,\nu)-\varepsilon)t}{(\log t)^{2/d}}\right], \nonumber
\end{equation}
which, along with \eqref{eqsqueeze} and Proposition~\ref{prop2} implies that
$$ \widehat{P}^\omega\left((\log t)^{2/d}\left(\frac{\log N_t}{t}-\beta\right)+c(d,\nu)>\varepsilon\right)\leq \exp\left[-\varepsilon t(\log t)^{-2/d}+o\left(t(\log t)^{-2/d}\right)\right] .$$
This proves the upper bound of the LLN in \eqref{eqthm1}.
\subsection{Proof of the lower bound}
The proof of the lower bound is split into three parts for better readability. Let $\varepsilon>0$. In what follows, in a typical environment $\omega$, we find an upper bound that is valid for large $t$ on
\begin{equation}
\widehat{P}^\omega\left((\log t)^{2/d}\left(\frac{\log N_t}{t}-\beta\right)+c(d,\nu)<-\varepsilon\right)=\widehat{P}^\omega\left(N_t<\exp\left[t\left(\beta-\frac{c(d,\nu)+\varepsilon}{(\log t)^{2/d}}\right)\right]\right) . \nonumber
\end{equation}
Throughout the proof, we assume that $d\geq 2$ so that Lemma~\ref{lemma2} is applicable.
\textbf{\underline{Part 1}: Upper bound on the probability of exponentially small total mass}
For a Borel set $B\subseteq \mathbb{R}^d$ and $t\geq 0$, as before $Z_t(B)$ denotes the mass of $Z$ that falls inside $B$ at time $t$. In this part of the proof, we will show that on a set of full $\mathbb{P}$-measure, for any $0<\delta<\beta$, the event
\begin{equation} \label{part1event}
A_t=\{\exists\:z_0=z_0(\omega)\in[-kt,kt]^d \:\:\text{such that}\:\: Z_t(B(z_0,r(t)))\geq e^{\delta t} \} ,
\end{equation}
with $r(t)$ as in \eqref{part1eqradius} and $k>\sqrt{2\beta}$, occurs with overwhelming $\widehat{P}^\omega$-probability. Observe that $A_t$ corresponds to producing exponentially many particles and keeping them close to each other and also to the origin at time $t$. The main ingredient in this part of the proof will be Lemma~\ref{lemma2}.
Let $0<\delta<\beta$, and choose $\alpha$ such that $0<\alpha<1-\delta/\beta$. Also, for concreteness, set $k=\sqrt{3\beta}$. Recall the definitions of $\Phi_t^\omega$ (the set of good points associated to the pair $(\omega,t)$) and $\widehat{\Phi}_t^\omega=\Phi_t^\omega \cap [-kt,kt]^d$ from \eqref{eqphi}. Split the interval $[0,t]$ into two pieces as $[0,\alpha t]$ and $[\alpha t,t]$. We will show that with overwhelming probability, a particle of the BBM hits a point in $\widehat{\Phi}_{\alpha t}^\omega$, say $z_0\in [-k\alpha t,k\alpha t]^d$, over $[0,\alpha t]$, and then the sub-BBM emanating from this particle starting at $z_0$ produces at least $e^{\delta t}$ particles over $[\alpha t,t]$ inside $B(z_0,r(\alpha t))$.
For $t>0$, define the events
$$ E_t:=\{ \mathcal{R}(\alpha t)\cap \widehat{\Phi}_{\alpha t}^\omega \neq \emptyset \} .$$
Let $\tau=\tau(\omega)=\inf\{s>0:\mathcal{R}(s)\cap \widehat{\Phi}_{\alpha t}^\omega\neq\emptyset\}$ be the first time that $Z$ hits a good point within the cube $[-k\alpha t,k\alpha t]^d$ associated to the pair $(\omega,\alpha t)$. Observe that $E_t=\{\tau\leq \alpha t \}$. Estimate
\begin{align}
P^\omega\left(A_t^c \cap S_{\alpha t}\right) &= P^\omega\left(A_t^c \cap S_{\alpha t} \cap E_t^c\right) + P^\omega\left(A_t^c \cap S_{\alpha t} \cap E_t\right) \nonumber \\
&\leq P^\omega\left(E_t^c \mid S_{\alpha t} \right) + P^\omega\left(A_t^c \mid E_t\right) . \label{eqnewnew}
\end{align}
By Lemma~\ref{lemma2}, on a set of full $\mathbb{P}$-measure,
\begin{equation} \label{part1eq6}
\underset{t\to\infty}{\lim}P^\omega(\mathcal{R}(\alpha t)\cap \widehat{\Phi}_{\alpha t}^\omega=\emptyset \mid S_{\alpha t})=\underset{t\to\infty}{\lim}P^\omega(E_t^c \mid S_{\alpha t})=0.
\end{equation}
Conditional on $E_t=\{\tau\leq\alpha t\}$, let $z_0$ be the (random) point where $Z$ first hits $\widehat{\Phi}_{\alpha t}^\omega$. Now apply the strong Markov property of BBM at time $\tau$, and then apply Theorem C to the sub-BBM initiated at time $\tau$ from position $z_0$ by the particle that first hits $\widehat{\Phi}_{\alpha t}^\omega$. Note that $t-\tau\geq (1-\alpha)t$, and by definition of $\widehat{\Phi}_{\alpha t}^\omega$, $B(z_0,r(\alpha t))$ is a clearing with $z_0\in[-k\alpha t,k\alpha t]^d$. In detail, for $t>1$ let
$$ s:=(1-\alpha)t,\quad \hat{r}(s)=\frac{1}{3}\frac{R_0}{5^{1/d}}\left(\frac{2}{3}\right)^{1/d}\left[\log\log\left(\frac{\alpha s}{1-\alpha}\right)\right]^{1/d}, \quad B_s:=B(0,\hat{r}(s)).$$
Observe that $\hat{r}(s)=r(\alpha t)$. Also, $\delta t<(1-\alpha)\beta t=\beta s$ due to the choice $\alpha<1-\delta/\beta$. Next, let
$$ p_s:=\mathbf{P}_{z_0}(\sigma_{B(z_0,\hat{r}(s))}\geq s) = \mathbf{P}_0\left(\sigma_{B_s}\geq s\right) , $$
where, as before, $\mathbf{P}_x$ denotes the law of a standard BM started at $x$, and $\sigma_A=\inf\{s\geq 0:X_s\notin A\}$ denotes the first exit time of the BM out of $A$. By a standard result on Brownian confinement in balls (see for instance Proposition B in \cite{O2021}),
\begin{equation} \nonumber
p_s=\exp\left[-\frac{\lambda_d s}{(\hat{r}(s))^2}(1+o(1))\right].
\end{equation}
Then, on a set of full $\mathbb{P}$-measure, Theorem C upon setting $\gamma_s=\exp[-\sqrt{\beta/2}\,\hat{r}(s)]$ for instance implies that for all large $t$,
\begin{equation} \label{part1eq7}
P^\omega\left(A_t^c \mid E_t \right) \leq P\left(\big | Z_s^{B_s} \big |< e^{\delta t}\right) \leq
P\left(\big | Z_s^{B_s} \big |< e^{-\sqrt{\beta/2}\,\hat{r}(s)} p_s e^{\beta s}\right) = e^{-\sqrt{\beta/2}\,\hat{r}(s)(1+o(1))},
\end{equation}
where $A_t$ is as in \eqref{part1event}, $Z^B=(Z_u^{B_u})_{u\geq 0}$ is a BBM with deactivation at $\partial B$ as in Theorem C, and we have used in the first inequality that $B(z_0,r(\alpha t))$ is a clearing. Finally, in view of $s=(1-\alpha)t$, we reach the following conclusion via \eqref{eqnewnew}, \eqref{part1eq6} and \eqref{part1eq7}. On a set of full $\mathbb{P}$-measure, for all large $t$,
\begin{align} \label{part1eq8}
P^\omega\left(A_t^c \mid S \right)&=\frac{P^\omega(A_t^c\cap S)}{P^\omega(S)}\leq c(\omega) P^\omega(A_t^c\cap S) \nonumber \\
&\leq c(\omega) P^\omega(A_t^c\cap S_{\alpha t}) \to 0,\:\:\: t\to\infty ,
\end{align}
where we have used Proposition~\ref{prop1} in the first inequality, and that $S\subseteq S_{r}$ for any $r>0$ in the second inequality. This completes the first part of the proof of the lower bound of Theorem~\ref{thm1}.
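We also record, for completeness, the comparison of thresholds that is implicit in the first inequality of \eqref{part1eq7}:
$$ e^{-\sqrt{\beta/2}\,\hat{r}(s)}\, p_s\, e^{\beta s}=\exp\left[\beta s-\sqrt{\beta/2}\,\hat{r}(s)-\frac{\lambda_d s}{(\hat{r}(s))^2}(1+o(1))\right]\geq e^{\delta t} $$
for all large $t$, since $\beta s-\delta t=(\beta(1-\alpha)-\delta)t>0$ grows linearly in $t$ by the choice of $\alpha$, whereas $\hat{r}(s)=O\big((\log\log t)^{1/d}\big)$ and $\lambda_d s/(\hat{r}(s))^2=o(t)$.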
\textbf{\underline{Part 2}: Time scales within $[0,t]$ and moving a particle into a large clearing}
Introduce two time scales, $m(t)$ and $\ell(t)$, where $m(t)=o(t)$ and $\ell(t)\log \ell(t)=o(m(t))$. We will split the time interval $[0,t]$ into three pieces: $[0,m(t)]$, $[m(t),m(t)+\ell(t)]$ and $[m(t)+\ell(t),t]$. (This way of splitting $[0,t]$ is different from the corresponding splitting of $[0,t]$ used in \cite{E2008} and \cite{O2021} for the proofs of the mild obstacle problem.) More precisely, let $\ell,m:\mathbb{R}_+\to\mathbb{R}_+$ be such that
\begin{enumerate}
\item[(i)] $\lim_{t\rightarrow\infty}\ell(t)=\infty$,
\item[(ii)] $\lim_{t\rightarrow\infty} \frac{\log t}{\log \ell(t)}=1$,
\item[(iii)] $\ell(t)\log \ell(t)=o(m(t))$,
\item[(iv)] $m(t)=o(\ell^2(t))$,
\item[(v)] $m(t)=o(t(\log t)^{-2/d})$.
\end{enumerate}
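For concreteness, and recalling that $d\geq 2$ throughout this proof, one admissible (but by no means unique) choice of the scales is
$$ \ell(t)=\frac{t}{(\log t)^{4}}, \qquad m(t)=\frac{t}{(\log t)^{2}} . $$
Indeed, (i) and (ii) are immediate, $\ell(t)\log\ell(t)\asymp t(\log t)^{-3}=o(m(t))$ gives (iii), $m(t)=o\big(t^2(\log t)^{-8}\big)=o(\ell^2(t))$ gives (iv), and (v) holds since $2>2/d$.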
In this part of the proof, our goal is to show that on a set of full $\mathbb{P}$-measure with overwhelming $\widehat{P}^\omega$-probability, at time $m(t)+\ell(t)$ there is a particle within distance $1$ of the center, say $x_0$, of a large clearing. This can be achieved by combining two partial strategies as follows. Firstly, sufficiently many particles are produced over $[0,m(t)]$ and kept close to each other at time $m(t)$, and then at least one of the sub-BBMs initiated by these particles at time $m(t)$ contributes a particle to $B(x_0,1)$ at time $m(t)+\ell(t)$.
\noindent \textbf{Partial strategy 1.} Let $0<\delta<\beta$ and $I(t)=\lfloor e^{\delta m(t)} \rfloor$. Then, since $\lim_{t\to\infty}m(t)=\infty$, it follows from \eqref{part1event} and \eqref{part1eq8} that on a set of full $\mathbb{P}$-measure,
\begin{equation} \label{part2eq1}
\underset{t\to\infty}{\lim}P^\omega(A_{m(t)}^c \mid S) = 0,
\end{equation}
where
\begin{equation}
A_{m(t)}=\left\{\exists\,z_0=z_0(\omega)\in[-k m(t),k m(t)]^d \:\:\text{with}\:\: Z_{m(t)}\left(B(z_0,c[\log\log m(t)]^{1/d})\right)\geq I(t)\right\} \label{eqamt}
\end{equation}
with $0<c<R_0$ (see \eqref{part1eqradius}). We now choose our `new origin' as the point $z_0$ in \eqref{eqamt} for the rest of the proof, that is, for the evolution of the system over $[m(t),t]$.
\noindent \textbf{Partial strategy 2.} Let $z_0=z_0(\omega)\in[-k m(t),k m(t)]^d$ be as in \eqref{eqamt}. By a similar argument as in the proof of Proposition~\ref{prop3}, it follows from a close-packing of $[-k m(t),k m(t)]^d$ by balls of radius $\ell(t)/(2\sqrt{d})$ together with \cite[Lemma 1]{O2021} (which is an extension of Proposition B) upon choosing $n=d+1$, $a=1$, and $\ell=\ell(t)/(2\sqrt{d})$ therein that on a set of full $\mathbb{P}$-measure, for all large $t$ any ball of radius $\ell(t)$ centered within $[-k m(t),k m(t)]^d$ contains a clearing of radius
\begin{equation}
R(t)+1=R_{\ell(t)}+1 \asymp R_0[\log(\ell(t))]^{1/d} \asymp R_0[\log t]^{1/d}, \label{eqbiggyradius}
\end{equation}
where $R_{\ell(t)}$ is as in \eqref{eq0}, and we have used assumption (ii). In \eqref{eqbiggyradius} and hereafter, we use $f(t)\asymp g(t)$ to mean $f(t)/g(t)\to 1$ as $t\to\infty$. To see why the choice $n=d+1$ in \cite[Lemma 1]{O2021} works, observe that the number of balls of radius $\ell(t)/(2\sqrt{d})$ needed to completely pack $[-k m(t),k m(t)]^d$ is at most
$$ \left\lceil \frac{k m(t)}{\ell(t)/(2\sqrt{d})} \right\rceil^d \leq \left(\ell(t)+1\right)^d \leq \ell(t)^{d+1} $$
for all large $t$, where we have used assumption (iv) in the first inequality. In particular, on a set of full $\mathbb{P}$-measure $B(z_0(\omega),\ell(t))$ contains a clearing of radius $R(t)+1$ for all large $t$. Let $x_0=x_0(\omega)$ denote the center of this clearing. We will next show that on a set of full $\mathbb{P}$-measure the event
$$ C_t:=\{\exists\,x_0=x_0(\omega)\in\mathbb{R}^d \:\:\text{with}\:\: Z_{m(t)+\ell(t)}(B(x_0,1))>0 \:\:\text{and}\:\: B(x_0,R(t)+1)\subseteq K^c\} $$
occurs with overwhelming $P^\omega$-probability conditional on $A_{m(t)}$.
Consider a particle inside $B(z_0,c[\log\log m(t)]^{1/d})$ at time $m(t)$, and call it generically particle $u$. Let $q_u(t)$ be the probability that the sub-BBM initiated by $u$ at time $m(t)$ contributes a particle to $B(x_0,1)$ at time $m(t)+\ell(t)$. For an upper bound on $q_u(t)$, we consider the worst case scenario:
\begin{enumerate}
\item[(a)] assume that $u$ is located at the boundary of $B(z_0,c[\log\log m(t)]^{1/d})$ at time $m(t)$ in the opposite direction of $x_0$ with respect to the `origin' $z_0$,
\item[(b)] neglect possible branching of $u$ over $[m(t),m(t)+\ell(t)]$,
\item[(c)] assume that the Brownian path initiated by $u$ travels through the trap field $K$ over the entire interval $[m(t),m(t)+\ell(t)]$.
\end{enumerate}
By \cite[Lemma 4.5.2]{S1998}, on a set of full $\mathbb{P}$-measure, we have
\begin{equation}
\sup_{[z_0-\ell(t),z_0+\ell(t)]^d} V(\:\cdot\:,\omega)= o(\log\ell(t)), \quad t\to\infty. \nonumber
\end{equation}
Then, for each particle $u$ that is inside $B(z_0,c[\log\log m(t)]^{1/d})$ at time $m(t)$, in view of the inequalities $c[\log\log m(t)]^{1/d}\leq \ell(t)$ and $|x_0-z_0|\leq \ell(t)$, we have for all large $t$,
\begin{equation} \label{eq7}
q_u(t)\geq \exp\left[-\frac{(2\ell(t))^2}{2\ell(t)}(1+o(1))-\ell(t)\log\ell(t)\right] \geq \exp\left[-2\ell(t)\log\ell(t)\right] =: p(t),
\end{equation}
where the first term in the exponent on the right-hand side comes from a linear Brownian displacement and the second term from surviving the killing over $[m(t),m(t)+\ell(t)]$. Then, by the Markov property and the independence of particles present at time $m(t)$, we have
\begin{equation} \label{eq8}
P^\omega(C_t^c \mid A_{m(t)}) \leq (1-p(t))^{I(t)} \leq e^{-p(t) I(t)},
\end{equation}
where we have used that $1+x\leq e^x$. Note that in order to keep the probability of the unwanted event $C_t^c$ small, we aimed at a high enough $p(t) I(t)$ throughout the argument. Finally, by \eqref{eq7} and \eqref{eq8}, we reach the following conclusion. On a set of full $\mathbb{P}$-measure, for all large $t$,
\begin{equation} \label{eq9}
P^\omega(C_t^c \mid A_{m(t)}) \leq \exp\left[-e^{-2\ell(t)\log\ell(t)+\delta m(t)}\right] .
\end{equation}
Since $\delta>0$ and due to the assumptions (ii) and (iii), the right-hand side of \eqref{eq9} is superexponentially small in $t$.
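To spell this out: by (iii), the exponent $-2\ell(t)\log\ell(t)+\delta m(t)$ equals $\delta m(t)(1+o(1))$; moreover, since $\ell(t)\to\infty$ and $\log\ell(t)\sim\log t$ by (i) and (ii), we have $m(t)\gg\ell(t)\log\ell(t)\gg\log t$, so that for every $c>0$ the right-hand side of \eqref{eq9} is at most $e^{-ct}$ for all large $t$.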
\textbf{\underline{Part 3}: BBM in the large clearing}
This part of the proof is similar to the corresponding part of the proof in the mild obstacle case, because over the remaining time interval $[m(t)+\ell(t),t]$ the BBM grows freely inside the clearing $B(x_0,R(t)+1)$, and hence is insensitive to the nature of the traps. For $t>0$, define the events
$$ D_t:=\left\{\exists\,x_0=x_0(\omega)\in\mathbb{R}^d \:\:\text{with}\:\: Z_t\left(B(x_0,R(t)+1)\right)\geq \exp\left[t\left(\beta-\frac{c(d,\nu)+\varepsilon}{(\log t)^{2/d}}\right)\right]\right\}. $$
We argue as follows. Let $\Omega_0$ be the set of environments for which $0<P^\omega(S)<1$. We know from Proposition~\ref{prop1} that $\mathbb{P}(\Omega_0)=1$. For each $\omega\in\Omega_0$, set $c(\omega)=1/P^\omega(S)$, and estimate
\begin{align}
P^\omega(D_t^c \mid S) &= \frac{1}{P^\omega(S)}\left[P^\omega(D_t^c\cap S\cap C_t)+P^\omega(D_t^c\cap S\cap C_t^c)\right] \nonumber\\
&\leq c(\omega) \left[P^\omega(D_t^c\cap S_t\cap C_t)+P^\omega(D_t^c\cap S\cap C_t^c)\right] \nonumber \\
&\leq c(\omega) \left[P^\omega(D_t^c \mid C_t)+P^\omega(C_t^c \mid S)\right], \label{eq4}
\end{align}
where we have used that $S\subseteq S_t$ in the first inequality. The second term on the right-hand side of \eqref{eq4} can be estimated as follows for $\omega\in\Omega_0$:
\begin{align}
P^\omega(C_t^c \mid S) &= \frac{1}{P^\omega(S)}\left[P^\omega\left(C_t^c\cap S\cap A_{m(t)}\right)+P^\omega\left(C_t^c\cap S\cap A_{m(t)}^c\right)\right] \nonumber \\
&\leq c(\omega)\left[P^\omega\left(C_t^c \mid A_{m(t)}\right)+P^\omega\left(A_{m(t)}^c \mid S\right)\right] \nonumber.
\end{align}
It then follows from \eqref{part2eq1} and \eqref{eq9} that on a set of full $\mathbb{P}$-measure,
\begin{equation}
\lim_{t\to\infty}P^\omega(C_t^c \mid S)=0 \label{eqfok}.
\end{equation}
Next, we turn our attention to $P^\omega(D_t^c\mid C_t)$ in \eqref{eq4} and show that on a set of full $\mathbb{P}$-measure,
\begin{equation}
\underset{t\to\infty}{\lim} P^\omega(D_t^c \mid C_t) = 0 . \label{eq11}
\end{equation}
Conditional on the event $C_t$, let $v$ be the name of the particle that is closest to $x_0$ at time $m(t)+\ell(t)$ and $y_0$ denote its position at time $m(t)+\ell(t)$. Note that $|y_0-x_0|\leq 1$ on the event $C_t$. We will show via Proposition C that sufficiently many particles are produced inside $B(y_0,R(t))$ over the remaining interval $[m(t)+\ell(t),t]$. Let $\widehat{Z}$ be the sub-BBM initiated by particle $v$ at time $m(t)+\ell(t)$ starting from position $y_0$. Define $\widehat{R}:\mathbb{R}_+\to\mathbb{R}_+$ such that $\widehat{R}(t-(m(t)+\ell(t)))=R(t)$ for all large $t$. (Respecting the conditions (i)-(v), we may and do choose $m(t)$ and $\ell(t)$ such that $t-(m(t)+\ell(t))$ is increasing on $t\geq t_0$ for some $t_0>0$. Therefore, $t_1-(m(t_1)+\ell(t_1))=t_2-(m(t_2)+\ell(t_2))$ implies that $t_1=t_2$ for $t_1\wedge t_2\geq t_0$.) Next, let $s:=t-(m(t)+\ell(t))$, $\widehat{B}_s:=B(y_0,R(t))$ and
$$ p_s:=\mathbf{P}_{y_0}(\sigma_{\widehat{B}_s}\geq s) = \mathbf{P}_0\left(\sigma_{B(0,\widehat{R}(s))}\geq s\right) . $$
By the Markov property of $Z$ applied at time $m(t)+\ell(t)$, $\widehat{Z}$ is a BBM started with a single particle at $y_0$. Observe that $\widehat{B}_s$ is a clearing since $\widehat{B}_s\subseteq B(x_0,R(t)+1)$. Then, Theorem C upon setting $\gamma_s=\exp[-\sqrt{\beta/2}\widehat{R}(s)]$ implies that
\begin{align}
P^\omega\left(|\widehat{Z}_s|< e^{-\sqrt{\beta/2}\,\widehat{R}(s)} p_s e^{\beta s} \:\big|\: C_t \right)&\leq P_{y_0}\left(\big| Z_s^{\widehat{B}_s}\big|< e^{-\sqrt{\beta/2}\,\widehat{R}(s)} p_s e^{\beta s}\right) \nonumber \\
&=\exp\left[-\sqrt{\beta/2}\,\widehat{R}(s)(1+o(1))\right]. \label{eq309}
\end{align}
By \eqref{eqbiggyradius} and a standard result on Brownian confinement in balls (see for instance Proposition B in \cite{O2021}), and since $\widehat{R}(s)=R(t)$ and $c(d,\nu)=\lambda_d/R_0^2$,
\begin{equation} \label{eq310}
p_s=\exp\left[-\frac{\lambda_d s}{\widehat{R}^2(s)}(1+o(1))\right]=\exp\left[-\frac{c(d,\nu)(t-(m(t)+\ell(t)))}{(\log \ell(t))^{2/d}}(1+o(1))\right].
\end{equation}
It follows from the assumptions (ii), (iii) and (v) that
$$ \frac{t-(m(t)+\ell(t))}{(\log \ell(t))^{2/d}} \asymp \frac{t}{(\log t)^{2/d}} , $$
by which we can continue \eqref{eq310} with
\begin{equation}
p_s=\exp\left[-\frac{c(d,\nu)t}{(\log t)^{2/d}}(1+o(1))\right]. \label{eq311}
\end{equation}
On the other hand, using that $s=t-(m(t)+\ell(t))$, we have for any $\varepsilon>0$,
$$ \exp\left[t\left(\beta-\frac{c(d,\nu)+\varepsilon}{(\log t)^{2/d}}\right)\right]=\exp\left[\beta s+\beta(m(t)+\ell(t))-\frac{(c(d,\nu)+\varepsilon)t}{(\log t)^{2/d}}\right]\leq e^{-\sqrt{\beta/2}\,\widehat{R}(s)} p_s e^{\beta s} $$
for all large $t$, where we have used \eqref{eq311}, assumption (v), and that $\widehat{R}(s)=R(t)=o(t(\log t)^{-2/d})$ in passing to the inequality. Then, it follows from \eqref{eq309} and the definitions of $D_t$ and $\widehat{Z}$ that for all large $t$,
$$ P^\omega(D_t^c \mid C_t) \leq P^\omega\left(|\widehat{Z}_s|< e^{-\sqrt{\beta/2}\,\widehat{R}(s)} p_s e^{\beta s} \:\big|\: C_t \right) \leq \exp\left[-\sqrt{\beta/2}\,\widehat{R}(s)(1+o(1))\right] . $$
This proves that \eqref{eq11} holds on a set of full $\mathbb{P}$-measure, which, together with \eqref{eq4} and \eqref{eqfok}, implies that
\begin{equation} \nonumber
P^\omega\left(N_t<\exp\left[t\left(\beta-\frac{c(d,\nu)+\varepsilon}{(\log t)^{2/d}}\right)\right] \:\bigg\vert\: S \right)\leq P^\omega(D_t^c \mid S)\to 0, \quad t\to\infty .
\end{equation}
This completes the proof of the lower bound of Theorem~\ref{thm1}. We emphasize that over $[m(t)+\ell(t),t]$, the sub-BBM starting from $y_0$ at time $m(t)+\ell(t)$ and deactivated at $\partial B(y_0,R(t))$ does not feel the effect of the traps; so for this part of the proof it does not matter whether or not the traps have a killing mechanism.
\section{Further problems}\label{section8}
We conclude by discussing several further problems related to our model.
\noindent\textbf{Problem 1: The case $d=1$}.
We emphasize that the LLN for the case $d=1$ remains open in the soft obstacle setting. Here, we briefly explain why the current method fails when $d=1$.
We start by considering Lemma~\ref{lemma2}. Observe that the key estimate in the proof of Lemma~\ref{lemma2} is \eqref{eqkey}, where the right-hand side gives the cost of a hitting strategy to a moderate clearing, and involves a tubular estimate and an estimate on survival from killing over the interval $[0,h(t)]$. Note that in $d=1$, we do not need a full tubular estimate, but we still need a single Brownian motion to travel a distance of $\sim\rho(t)$ over a time interval of length $h(t)$. Even if we ignore the `tubular estimate' contribution in \eqref{eqkey}, we still have the factor $\exp[-h(t)\log\rho(t)]$, which is solely due to the survival from soft killing. Then, to show that $[P^\omega(E_{1,t})]^{t/h(t)}$ tends to zero as $t\to\infty$, at the very least we need
$$ \left[1-c_d e^{-h(t)\log\rho(t)}\right]^{t/h(t)}\leq \exp\left[-c_d e^{-h(t)\log\rho(t)}\frac{t}{h(t)}\right] \to 0 ,$$
which is true only if
\begin{equation} \label{eqneed}
t e^{-h(t)\log\rho(t)} \to \infty.
\end{equation}
An elementary inspection shows that the choices $h(t)=k_1\log t$ and $\rho(t)=k_2\log t$ will not satisfy \eqref{eqneed} no matter how small $k_1$ and $k_2$ are, and therefore for \eqref{eqneed} to hold one needs to choose both $h(t)$ and $\rho(t)$ to be $o(\log t)$ as $t\to\infty$.
We now turn our attention to the preparation of the a.s.-environment, which is based on securing a clearing of radius $\sim r(t)$ within each ball of radius $\sim\rho(t)$, where the union of the balls covers the box $[-kt,kt]^d$ (see Proposition~\ref{prop3}). For this argument to hold, we need such an $r(t)$-clearing inside each of
\begin{equation}
\sim t/\rho(t) \nonumber
\end{equation}
many balls. As we set $\ell=\rho(t)/2$, let us take a close look at Lemma~\ref{lemma1}. Observe that Lemma~\ref{lemma1} becomes stronger and more difficult to prove as each of $R_\ell$ and $f(\ell)$ are chosen larger. We now argue that the current proof of Lemma~\ref{lemma1} breaks down in $d=1$ in view of $\ell=\rho(t)/2$ under the requirement that $\rho(t)=o(\log t)$. Let us simply set $R_\ell=R$ for some constant $R>0$ to make the proof easier. Even in this case, the estimate \eqref{eq3lemma3} when $d=1$ yields
$$ \log \alpha_\ell \geq \log\ell-\log(2R)-R/R_0 .$$
Even if we ignore the middle term, this leads to
$$ e^{-\alpha_\ell}\leq \exp\left[-e^{(\log\ell-R/R_0)}\right]\leq \exp\left[-\ell e^{-R/R_0}\right]=\left(e^{-\ell}\right)^{e^{-R/R_0}}. $$
Then, for the Borel-Cantelli argument based on \eqref{borelcantelli} to hold, we can have $f(\ell)$ growing at most exponentially in $\ell$ (see \eqref{eq1lemma3} and \eqref{borelcantelli}). On the other hand, it is easy to see that when $\rho(t)=o(\log t)$, setting $\ell=\rho(t)/2$ followed by $f(\ell)\sim t^{d}/\rho(t)^d$ requires $f(\ell)$ to be superexponentially large in $\ell$.
We may summarize our findings as follows. The current method fails when $d=1$, because when $d=1$ the estimate $[P^\omega(E_{1,t})]^{t/h(t)}$ in Lemma~\ref{lemma2} is incompatible with the Borel-Cantelli argument in the proof of Lemma~\ref{lemma1}, which is used to prepare the a.s.-environment for the soft obstacle problem. When $d=1$, there is no pair of choices for the time scale $h(t)$ and the space scale $\rho(t)$ under which both Lemma~\ref{lemma1} and Lemma~\ref{lemma2} hold upon setting $\ell=\rho(t)/2$.
\noindent\textbf{Problem 2: SLLN.}
It would be desirable to improve the LLN in Theorem~\ref{thm1} to the corresponding SLLN if possible. To achieve this, one must control the probabilities of the `unwanted' events in the proof of the lower bound so that a Borel-Cantelli argument could be carried out to obtain the lower bound of the desired SLLN. Recall that Lemma~\ref{lemma2} was the key component in the proof of the lower bound of Theorem~\ref{thm1}. Therefore, a first and important step would be to improve Lemma~\ref{lemma2} to give a lower bound on the rate of decay to zero of $P^\omega\left(\mathcal{R}(t)\cap \widehat{\Phi}_t^\omega=\emptyset \:\big\vert\: S_t\right)$ as $t\to\infty$.
\noindent\textbf{Problem 3: Dominant region.}
The current work, as well as \cite{E2008} and \cite{O2021}, studies the total mass of the BBM among random obstacles, but does not make any claims on the geometric distribution of particles for large times, although the proofs suggest that for large times `most' of the particles are to be found in `large' clearings which exist in a.e.-environment.
Hence, a further problem concerns the geometric distribution of particles at large times. One natural question is: conditional on ultimate survival of the BBM, is there a \emph{dominant region} $B=B(\omega,t)$ in $\mathbb{R}^d$ such that an overwhelming proportion of particles is found inside $B$ for all large $t$? Recall that we write $Z_t(B)$ to denote the mass of $Z$ that falls inside $B$ at time $t$. More precisely, given an environment $\omega$, does there exist a region $B(\omega,t)\subset \mathbb{R}^d$ such that
$$ \frac{Z_t(B(\omega,t)^c)}{N_t} \to 0, \quad t\to\infty $$
in some sense of convergence with respect to the law $\widehat{P}^\omega(\:\cdot\:)=P^\omega(\:\cdot\:\mid S)$ for a.e.\ $\omega$? If yes, this would mean that on a set of full $\mathbb{P}$-measure the mass accumulates in some $\omega$-dependent special subset of $\mathbb{R}^d$ for large times. This special subset, for instance, could be of a similar form as $\Phi_t^\omega$ in \eqref{eqphi} with a suitable radius function $r(t)$. We note that a similar problem in the setting of mild obstacles was listed as a further problem in \cite{E2008}.
\noindent\textbf{Problem 4: Lower large-deviations.}
In the course of the proof of the lower bound of Theorem~\ref{thm1}, we show that the rare event of atypically small mass for the BBM has probability decaying to zero as $t\to\infty$, but we do not find the rate of decay to zero of this probability. It is natural to look for this rate of decay and hence to obtain a precise lower-tail asymptotics for the mass of the BBM. That is, can we obtain an asymptotic result as $t\to\infty$ in the form
$$ P^\omega\left(N_t<\exp{\left[t\left(\beta-\frac{c(d,\nu)+\varepsilon}{(\log t)^{2/d}}\right)\right]}\right) = e^{-g(t)(1+o(1))}$$
that is valid in a.e.-environment, where the function $g=g_\varepsilon:\mathbb{R}_+\to\mathbb{R}_+$ with $\lim_{t\to\infty}g(t)=\infty$ is precisely identified?
\noindent\textbf{Problem 5: Hard obstacles.}
Let $\Pi$ be a Poisson point process in $\mathbb{R}^d$ with constant intensity $\nu>0$ as before, and consider the Poissonian trap field
\begin{equation}
K=K(\omega)=\bigcup_{x_i\in\:\text{supp}(\Pi)} \bar{B}(x_i,a) \nonumber
\end{equation}
as in \eqref{eqtrapfield}, where the trap radius $a>0$ is fixed. The \emph{hard killing} rule for BBM is that each particle branches at the normal rate $\beta$ when outside $K$, and is immediately killed upon hitting $K$. This model may also be viewed as a BBM with individual killing at the boundary of the random set $K$. One can show, similar to the case of soft obstacles, that on a set of full $\mathbb{P}$-measure the entire BBM is killed with positive $P^\omega$-probability, and for meaningful results concerning the total mass one works under the law $\widehat{P}^\omega(\:\cdot\:)=P^\omega(\:\cdot\:\mid S)$.
It is observed from \cite[Theorem 1]{E2008} and \cite[Theorem 1]{O2021}, and then from the current work that the LLN for the total mass of BBM among random obstacles is quite robust to the details of the mass-reducing mechanism coming from the trap field, whether traps simply reduce the branching rate, or completely suppress the branching, or even apply soft killing to the particles. Therefore, it is reasonable to expect a similar LLN to hold even in the hard obstacle setting.
It is known from the theory of site percolation that there exists $a_0>0$ such that when the trap radius $a$ satisfies $a\leq a_0$, a unique infinite trap-free component exists in a.e.-environment (see \cite{AKN1987} and \cite{S1993}). Here, a trap-free component refers to a connected region in $\mathbb{R}^d$ in which there is no atom of $\omega$. Let us denote by $\mathcal{C}$ this unique $\omega$-dependent infinite trap-free component, and call $A\subseteq\mathbb{R}^d$ \emph{accessible} if $A\subseteq\mathcal{C}$. Then, denoting the origin by $\mathbf{0}$, the conditions under which we may expect a LLN to hold (see \cite{S1993}) are as follows:
$$ \mathbb{P}-\text{a.s. on the set} \:\: \{\mathbf{0}\in\mathcal{C}\} \:\: \text{and when} \:\: a\leq a_0 .$$
Note that in all cases of random Poissonian traps, the upper bound of the LLN is obtained by a first moment argument, and is unaffected by the nature of the traps since the first moment formula for the mass of BBM remains the same to the leading order due to the robustness of the single-particle Brownian survival asymptotics among the traps (see \cite{S1993}). In contrast, it is clear that the proof of the lower bound of the LLN becomes more difficult as the severity of the trapping mechanism increases in the following order: mild traps with a lower but positive rate of branching, mild traps with zero branching, traps with soft killing, and finally hard traps.
In case of hard traps, the main extra challenge is due to the fact that although the concentration of moderate clearings close enough to the origin (within $[-kt,kt]^d$ for suitable $k>0$) is still high enough and there is at least one large clearing close enough to the origin just as in the case of soft obstacles, there is no guarantee that these clearings will be accessible to the BBM as they could be outside the unique infinite trap-free component. Therefore, all a.s.-clearings established in the proofs should further be qualified as accessible clearings.
\end{document}
|
\begin{document}
\selectlanguage{english}
\title{On the Origin of the $\log d$ Variation of the Electrostatic Force Minimizing Voltage in Casimir Experiments}
\author{\firstname{S.~K.}~\surname{Lamoreaux}}
\email{[email protected]}
\affiliation{Yale University, Department of Physics, P.O. Box 208120, New Haven, CT 06520-8120}
\author{\firstname{A.~O.}~\surname{Sushkov}}
\email{[email protected]}
\affiliation{Yale University, Department of Physics, P.O. Box 208120, New Haven, CT 06520-8120}
\begin{abstract}
A number of experimental measurements of the Casimir force have observed a logarithmic distance variation of the voltage that minimizes electrostatic force between the plates in a sphere-plane geometry. We show that this variation can be simply understood from a geometric averaging of surface potential patches together with the Proximity Force Approximation.
\end{abstract}
\maketitle
A number of experimental measurements of the Casimir force have observed a distance variation of the voltage applied between the plates that minimizes the electrostatic force.\cite{kim1,kim2,deman,deman2} This distance variation is of the approximate form
\begin{equation}
V_m(d)=a+b\log d
\end{equation}
over a range of distances $d$ spanning up to nearly two orders of magnitude.
We have shown numerically that a variation in the minimizing voltage can result from the geometrical averaging of patch potentials on the plate surfaces, and have developed a heuristic explanation of the effect, as described in \cite{kim}. However the $\log d$ form of the variation was not explained or derived in \cite{kim}. Here we provide added details showing that in the case of a spherical surface/plane surface geometry, the $\log d$ variation arises quite naturally.
The electrostatic force between the plates is minimized when the free energy is minimized. For a plate with a spherical surface curvature $R$ together with a flat plate, both of diameter $2R_m$, the Proximity Force Approximation (PFA) works well for distances $d< R_m^2/2R$; in addition, the characteristic radius of the patches $r_0$ must satisfy $r_0>\sqrt{2Rd}$, in which case we can consider each patch as only interacting with its own image in the opposite plate. If this latter criterion is not met, the effects calculated here will be reduced in magnitude, however the general conclusions are otherwise unaltered.
For random patches on the surfaces, the electrostatic free energy is
\begin{equation}
U=\sum {1\over 2} CV^2={\epsilon_0 R\over 2} \int_0^{R_m}\int_0^{2\pi} (V_a-V_p(r,\phi))^2{(r/R)\ d\phi\ dr\over d+r^2/2R}
\end{equation}
where $V_a$ is a voltage applied between the plates, and $V_p(r,\phi)$ describes the random voltage patches on the plates' surfaces. The value of $V_a$ that minimizes the force (or electrostatic free energy) is referred to as the minimizing potential, and is often called the contact potential. $V_m$ is found by taking the derivative with respect to $V_a$:
\begin{equation}\label{vm}
{\partial U\over \partial V_a}=0={\epsilon_0 R}\int_0^{R_m}\int_0^{2\pi} (V_a-V_p(r,\phi)){(r/R)\ d\phi\ dr\over d+r^2/2R}\ \ {\rm for\ } V_a=V_m.
\end{equation}
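Before specializing, it is worth noting an elementary consequence of Eq. (3): the minimizing voltage is simply the weighted surface average of the patch potential,
$$ V_m(d)={\int_0^{R_m}\int_0^{2\pi} V_p(r,\phi)\,{r\, d\phi\ dr\over d+r^2/2R}\over \int_0^{R_m}\int_0^{2\pi} {r\, d\phi\ dr\over d+r^2/2R}} , $$
with the PFA weight concentrated near the point of closest approach; the limiting cases discussed below can also be read off from this expression.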
Because the only $\phi$ dependence is in $V_p(r,\phi)$, the angular integral of this term can be replaced with $V_p(r)$, with the understanding that this replacement represents a sum of the individual patches, of typical radius $r_0$, intersected by a circle of radius $r$. For homogeneous random patches, this number is roughly $(2\pi r)/(2 r_0)$, for $r>r_0$, or 1 for $r<r_0$. Then, if the RMS magnitude of the patches is $V_0$ with zero average, this sum will have magnitude proportional to the square root of the number of patches in the sum times $\pm V_0$, or
\begin{equation}
\int_0^{2\pi} (V_m-V_p(r,\phi))d\phi =2\pi \left[V_m+V_p(r)\right]
\end{equation}
\begin{equation}
\approx 2\pi V_m\pm \left[\Delta\phi\left[\theta(r-r_0)-\theta(r-R_m)\right]\sqrt{2\pi r\over 2 r_0}+2\pi\left[\theta(r)-\theta(r-r_0)\right]\right] V_0
\end{equation}
where $\theta(r-r_0)$ is the Heaviside step function, $\Delta\phi=2\pi r_0/r$ results from $d\phi$ in the integral, and a single patch near the center of the plates is included. (The relative sign of these two terms on the r.h.s. is random.) Thus, $V_p(r)\propto 1/\sqrt{r/r_0}$ for $r\gg r_0$. Although we will not directly use this result in the subsequent discussion, it is important to note that $|V_p(r)|\leq |V_p(0)|$ in the case of homogeneous random patches.
Equation (\ref{vm}) can be integrated by parts, yielding
\begin{equation}
(V_m-V_p(r))\log(d+r^2/2R)|_0^{R_m} +\int_0^{R_m}\log(d+r^2/2R){dV_p(r)\over d r} dr=
\end{equation}
\begin{equation}
=(V_m-V_p(R_m))\log(d+R_m^2/2R)-(V_m-V_p(0))\log d -Q(d)=0
\end{equation}
where we have introduced a new function $Q(d)$. Then
\begin{equation}
V_m(d)={V_p(R_m)\log(d+R_m^2/2R)-V_p(0)\log d +Q(d)\over \log(d+R_m^2/2R)-\log d}.
\end{equation}
There are three cases to consider.
\noindent
{\bf Case 1: Close range:} $d\ll R_m^2/2R$, $|\log d|\gg |\log R_m^2/2R|$ and $d\ll r_0^2/2R$
In this limit, one single patch at or near the center dominates in the determination of $V_m$, and $dV_p(r)/dr\approx 0$ when $r^2/2R<d$ in the integral defining $Q(d)$. The $\log d$ terms have the largest magnitude, and thus we can neglect all terms not multiplied by $\log d$. In this case,
\begin{equation}
V_m(d)=V_p(0).
\end{equation}
\noindent
{\bf Case 2: Intermediate range:} $d\ll R_m^2/2R$, $|\log d|< |\log R_m^2/2R|$ and $d<r_0^2/2R$
In this case,
\begin{equation}
\log(d+R_m^2/2R)\approx \log(R_m^2/2R)
\end{equation}
so
\begin{equation}
Q(d)=Q_0\log(R_m^2/2R)
\end{equation}
because in this limit $Q(d)$ is nearly independent of $d$. This can be seen in Eq. (5) where the $\log(d+r^2/2R)$ factor contributes a $d$ dependence only when $r$ is small, but in that limit, $dV_p(r)/dr\approx 0$.
Therefore,
\begin{equation}
V_m(d)\approx {V_p(R_m)\log(R_m^2/2R)-V_p(0)\log d +Q_0 \log(R_m^2/2R) \over \log(R_m^2/2R)-\log d}.
\end{equation}
The denominator can be Taylor expanded and we therefore arrive at
\begin{equation}
V_m(d)\approx \left[V_p(R_m)-{V_p(0)\log d\over \log(R_m^2/2R)} + Q_0\right]\left[1+{\log d\over \log(R_m^2/2R)}\right]\approx a+b\log d
\end{equation}
where $a$ and $b$ do not depend on $d$, and where terms only first order in $\log d/\log(R_m^2/2R)$ are retained. In addition, there can be a contribution to $a$ from external circuit contact potentials.
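Reading off the coefficients to first order in $\log d/\log(R_m^2/2R)$,
$$ a\approx V_p(R_m)+Q_0, \qquad b\approx {V_p(R_m)+Q_0-V_p(0)\over \log(R_m^2/2R)} , $$
so the slope of the $\log d$ variation is controlled by the mismatch between $V_p(R_m)+Q_0$ and the patch potential $V_p(0)$ at the point of closest approach.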
It should be noted that a single patch at large $r$ will generate a non-zero $Q_0$. Depending on the size and magnitude of the single patch, it might dominate the $\log d$ distance dependence of $V_m$.
\noindent
{\bf Case 3:} $d> R_m^2/2R$ and $d\ {\buildrel > \over \sim}\ r_0^2/2R$ (so that the PFA remains valid)
For this case, it is easiest to go back to Eq. (3) and expand the denominator. We have
\begin{equation}
0=\int_0^{R_m}\int_0^{2\pi}(V_m-V_p(r,\phi))r(1-r^2/2Rd)drd\phi\approx \int_0^{R_m}\int_0^{2\pi}(V_m-V_p(r,\phi)) r dr d\phi
\end{equation}
so that
\begin{equation}
V_m={\int_0^{R_m}\int_0^{2\pi} V_p(r,\phi) r dr d\phi\over \pi R_m^2}=\langle V_p(r,\phi)\rangle
\end{equation}
which is the surface average of $V_p(r,\phi)$.
\noindent
{\bf Discussion}
It is easy to understand the first and third cases where $d\rightarrow 0$ and $d\rightarrow \infty$. In the first case, a single patch dominates the electrostatic force, and its potential determines $V_m$. In the latter, $V_m$ is simply the average surface potential, as the variation in the distance between the surfaces due to the curvature is very small compared to $d$.
The intermediate case is slightly more difficult to understand; it arises because, with increasing distance, $\log d$ loses its dominance (in magnitude) over $\log (d+R_m^2/2R)$.
Numerical calculations based on Eq. (3), using random patches specified on a surface, are straightforward and fully support the essential conclusions presented above. As an aside, there is a remaining question as to whether it is the energy or the force that must be minimized. Numerical calculations based on the minimization of the force produce no statistically significant differences compared to the energy minimization, as might be expected.
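For the interested reader, a minimal illustrative sketch of such a numerical check is given below (this is not the code used for the calculations reported above; it assumes Python with NumPy, and the parameter values and the square-patch lattice are placeholders rather than experimental values).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) parameters, all lengths in the same units:
# sphere radius R, plate radius R_m, patch radius r0, RMS patch voltage V0.
R, R_m, r0, V0 = 0.1, 1.0e-2, 1.0e-3, 1.0

# Random patch field: a square lattice of cells of side 2*r0, each cell
# carrying an independent potential +V0 or -V0 with equal probability.
n_cells = int(np.ceil(R_m / r0)) + 2
patch = np.where(rng.random((2 * n_cells + 1, 2 * n_cells + 1)) < 0.5, V0, -V0)

# Polar grid on the flat plate; each grid point inherits the potential of
# the patch cell it falls in.
nr, nphi = 400, 400
r = np.linspace(1.0e-6, R_m, nr)
phi = np.linspace(0.0, 2.0 * np.pi, nphi, endpoint=False)
rr, pp = np.meshgrid(r, phi, indexing="ij")
x, y = rr * np.cos(pp), rr * np.sin(pp)
ix = np.floor(x / (2 * r0)).astype(int) + n_cells
iy = np.floor(y / (2 * r0)).astype(int) + n_cells
Vp = patch[ix, iy]

def minimizing_voltage(d):
    # V_m is the weighted average of V_p with PFA weight r/(d + r^2/2R),
    # i.e. the solution of Eq. (3) for V_a.
    w = rr / (d + rr ** 2 / (2.0 * R))
    return np.sum(w * Vp) / np.sum(w)

for d in np.logspace(-7, -4, 7):
    print(f"d = {d:.1e}   V_m = {minimizing_voltage(d):+.4f}")
\end{verbatim}
Plotting the resulting $V_m(d)$ against $\log d$ over the intermediate range of $d$ should display the approximately linear trend of Eq. (1).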
Of course, this is a very simplistic model, particularly in the assumptions that the patch potentials are randomly $\pm V_0$, that all patches have the same radius and are circular, and that the radius of the positive patches is equal to that of the negative patches. These assumptions are not essential to the derivation of Eq. (8), which is the principal result reported here. Further refinements will likely not significantly change the subsequent conclusions presented here. For example, as discussed already, a single patch at large $r$ will generate a non-zero $Q_0$. If this patch has a large area and/or large potential ({\it e.g.}, a charged speck of dust or a scratch), it could easily be the dominant contribution to the $\log d$ distance dependence. For such a patch, $V_m(d)$ will only vary slowly with a relative translational repositioning of the plates.
This work was supported by the DARPA/MTOs Casimir Effect Enhancement project under SPAWAR Contract No. N66001-09-1-2071.
\begin{thebibliography}{99}
\bibitem{kim1} W.-J. Kim, M. Brown-Hayes, D.A.R. Dalvit, J.H. Brownell, and R. Onofrio, Phys. Rev. A {\bf 78}, 020101(R) (2008).
\bibitem{kim2} W.-J. Kim, A.O. Sushkov, D.A.R. Dalvit, and S.K. Lamoreaux, Phys. Rev. Lett. {\bf 103}, 060401 (2009).
\bibitem{deman} S. de Man, K. Heeck, and D. Iannuzzi, Phys. Rev. A {\bf 79}, 024102 (2009).
\bibitem{deman2} S. de Man, K. Heeck, R.J. Wijngaarden, and D. Iannuzzi, J. Vac. Sci. Tech. B {\bf 28}, C4A25 (2010).
\bibitem{kim} W.J. Kim, A.O. Sushkov, D.A.R. Dalvit, and S.K. Lamoreaux, Phys. Rev. A {\bf 81}, 022505 (2010).
\end{thebibliography}
\end{document}
|
\begin{document}
\label{'ubf'}
\setcounter{page}{1}
\markboth {\hspace*{-9mm} \centerline{\footnotesize \sc
Decomposition of hypercubes into sunlet graphs}
}
{ \centerline {\footnotesize \sc
A.V. Sonawane } \hspace*{-9mm}
}
\begin{center}
{
{\Large \textbf { \sc Decomposition of hypercubes\\ into sunlet graphs}
}
\\
{\sc A.V. Sonawane}\\
{\footnotesize Government of Maharashtra's Ismail Yusuf College of Arts, Science and Commerce,\\ Mumbai 400 060, INDIA.}\\
{\footnotesize e-mail: {\it [email protected]}}
}
\end{center}
\thispagestyle{empty}
\begin{abstract}
{\footnotesize For any positive integer $k \geq 3,$ the sunlet graph of order $2k$, denoted by $L_{2k},$ is the graph obtained by adding a pendant edge to each vertex of a cycle of length $k.$ In this paper, we prove that the necessary and sufficient condition for the existence of an $L_{16}$-decomposition of the $n$-dimensional hypercube $Q_n$ is $n = 4$ or $n \geq 6.$ Also, we prove that for any integer $m \geq 2,$ $Q_{mn}$ has an $L_{2k}$-decomposition if $Q_{n}$ has a $C_k$-decomposition.
{\small \textbf{Keywords:} decomposition, hypercube, sunlet graph}
\indent {\small {\bf 2020 Mathematics Subject Classification:} 05C51}
}
\end{abstract}
\section{Introduction}
All graphs under consideration are simple and finite. For any positive integer $n,$ the {\it hypercube} of dimension $n,$ denoted by $Q_n,$ is a graph with vertex set $\{x_1 x_2 \cdots x_n : x_i =$ $0$ or $1$ for $i = 1, 2, \cdots, n \}$ and any two vertices are adjacent in $Q_n$ if and only if they differ at exactly one position. The {\it Cartesian product} of graphs $G$ and $H,$ denoted by $G \Box H,$ is a graph with vertex set $V(G) \times V(H),$ and two vertices $(x,y)$ and $(u,v)$ are adjacent in $G \Box H$ if and only if either $x=u$ and $y$ is adjacent to $v$ in $H,$ or $x$ is adjacent to $u$ in $G$ and $y=v.$ It is well-known that $Q_n$ is the Cartesian product of $n$ copies of the complete graph $K_2.$ Note that $Q_n$ is an $n$-regular and $n$-connected graph with $2^n$ vertices and $n 2^{n-1}$ edges.
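These counts follow directly from the definition and are also easy to verify computationally; the following minimal sketch (in Python, using only the standard library, and purely illustrative) builds $Q_n$ from the definition and checks the vertex and edge counts for small $n.$
\begin{verbatim}
from itertools import product

def hypercube_edges(n):
    """Edge set of Q_n on the vertex set {0,1}^n (vertices as bit tuples)."""
    edges = set()
    for v in product((0, 1), repeat=n):
        for i in range(n):
            w = v[:i] + (1 - v[i],) + v[i + 1:]  # flip coordinate i
            edges.add(frozenset((v, w)))
    return edges

for n in range(1, 8):
    assert len(hypercube_edges(n)) == n * 2 ** (n - 1)      # |E(Q_n)| = n 2^{n-1}
    assert len(set().union(*hypercube_edges(n))) == 2 ** n  # |V(Q_n)| = 2^n
\end{verbatim}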
Let $k \geq 3$ be an integer. A cycle of length $k$ is denoted by $C_k.$ The {\it sunlet graph} of order $2k,$ denoted by $L_{2k},$ is obtained by adding a pendant edge to each vertex of the cycle $C_k$ \cite{a}. Note that $L_{2k}$ has $2k$ vertices and $2k$ edges. The sunlet graph of order sixteen $L_{16}$ is shown in Figure 1.
\begin{center}
\scalebox{0.5}{
\begin{tikzpicture}
\draw [fill=black] (0,0) circle (0.1);
\draw [fill=black] (1,0) circle (0.1);
\draw [fill=black] (2,1) circle (0.1);
\draw [fill=black] (2,2) circle (0.1);
\draw [fill=black] (1,3) circle (0.1);
\draw [fill=black] (0,3) circle (0.1);
\draw [fill=black] (-1,2) circle (0.1);
\draw [fill=black] (-1,1) circle (0.1);
\draw [fill=black] (-0.5,-1) circle (0.1);
\draw [fill=black] (1.5,-1) circle (0.1);
\draw [fill=black] (3,0.5) circle (0.1);
\draw [fill=black] (3,2.5) circle (0.1);
\draw [fill=black] (1.5,4) circle (0.1);
\draw [fill=black] (-0.5,4) circle (0.1);
\draw [fill=black] (-2,2.5) circle (0.1);
\draw [fill=black] (-2,0.5) circle (0.1);
\draw (0,0)--(1,0)--(2,1)--(2,2)--(1,3)--(0,3)--(-1,2)--(-1,1)--(0,0) (0,0)--(-0.5,-1) (1,0)--(1.5,-1) (2,1)--(3,0.5) (2,2)--(3,2.5) (1,3)--(1.5,4) (0,3)--(-0.5,4) (-1,2)-- (-2,2.5) (-1,1)--(-2,0.5);
\node at (0.5,-2) {\Large Figure 1. The sunlet graph $L_{16}$};
\end{tikzpicture}}
\end{center}
A {\it decomposition} of a graph $G$ is a collection of edge-disjoint subgraphs of $G$ such that the edge set of the subgraphs partitions the edge set of $G.$ For a given graph $H,$ an {\it $H$-decomposition} of $G$ is a decomposition into subgraphs each isomorphic to $H.$
The problem of decomposing a given graph into sunlet graphs has been studied for various classes of regular graphs in the literature \cite{a, ani, c, f, s, m}. Fu et al. \cite{f} proved that if $k = 6,10,14$ or $2^m ~(m \geq 2),$ then there exists an $L_{2k}$-decomposition of $K_n$ if and only if $n \geq 2k$ and $n(n-1) \equiv 0 (\text{mod}~ 4k).$ The existence of an $L_{10}$-decomposition of the complete graph $K_n$ for $n \equiv 0, 1, 5, 16 ({\text{mod}}~20)$ is guaranteed by Fu, Huang and Lin \cite{c}. Anitha and Lekshmi \cite{ani} established that the complete graph $K_{2n},$ the complete bipartite graph $K_{2n,2n}$ and the Harary graph $H_{4, 2n}$ have $L_{2n}$-decompositions for all $n \geq 3.$ Akwu and Ajayi \cite{a} proved that for even $m \geq 2,$ odd $n \geq 3$ and odd prime $p,$ the lexicographic product of $K_n$ and the graph $\Bar{K}_m$ consisting of only $m$ isolated vertices has an $L_{2p}$-decomposition if and only if $\frac{1}{2} n (n-1) m^2 \equiv 0 (\text{mod}~ 2p).$ Sowndhariya and Muthusamy \cite{sm} gave necessary and sufficient conditions for the existence of an $L_8$-decomposition
of the tensor product and the wreath product of complete graphs. Sowndhariya and Muthusamy \cite{m} studied an $L_8$-decomposition of the graph $K_n \Box K_m$ and proved that such a decomposition exists if and only if $n$ and $m$ satisfy one of eight specific conditions. Sonawane and Borse \cite{s} proved that the $n$-dimensional hypercube $Q_n$ has an $L_8$-decomposition if and only if $n$ is 4 or $n \geq 6.$
In this paper, we consider the problem of decomposing the hypercube $Q_n$ into the sunlet graphs. In Section 2, we prove that the necessary and sufficient condition for the existence of an $L_{16}$-decomposition of $Q_n$ is $n = 4$ or $n \geq 6.$ In Section 3, we prove that if $Q_{n}$ has a $C_k$-decomposition, then $Q_{mn}$ has an $L_{2k}$-decomposition for $m \geq 2.$
\section{An $L_{16}$-decomposition of hypercubes}
In this section, we prove that the necessary and sufficient condition for the existence of an $L_{16}$-decomposition of $Q_n$ is $n = 4$ or $n \geq 6.$
We need a corollary of the following result due to El-Zanati and Eynden \cite{z}. They considered the cycle decomposition of the Cartesian product of cycles each of length power of $2$ and obtained the result, which is stated below.
\begin{theorem}
Let $n, k_1, k_2, \cdots, k_n \geq 2$ be integers and let $G$ be the Cartesian product of the cycles $C_{2^{k_1}}, C_{2^{k_2}}, \cdots, C_{2^{k_n}}.$ Then there exists a $C_s$-decomposition of $G$ if and only if $s = 2^t$ with $2 \leq t \leq k_1 + k_2 + \cdots + k_n.$
\end{theorem}
The following result is a corollary of the above theorem as $Q_n$ is the Cartesian product of $\frac{n}{2}$ cycles of length $4$ for any even integer $n \geq 2.$
\begin{corollary} \label{C}
For any even integer $n \geq 2,$ there exists a $C_s$-decomposition of $Q_n$ if and only if $s = 2^t$ with $2 \leq t \leq n.$
\end{corollary}
In the next lemma, we prove that the necessary condition for the existence of an $L_{16}$-decomposition of $Q_n$ is $n = 4$ or $n \geq 6.$
\begin{lemma} \label{5}
There does not exist an $L_{16}$-decomposition of $Q_n$ if $n \in \{1,2,3,5\}.$
\end{lemma}
\begin{proof}
Suppose, to the contrary, that $Q_n$ has an $L_{16}$-decomposition for some $n \in \{1,2,3,5\}.$ Then the number of edges of $L_{16}$ must divide the number of edges of $Q_n.$ Hence $16$ divides $n 2^{n-1},$ which fails for $n \leq 3;$ therefore $n=5.$ Since $Q_5$ has $80$ edges, there are five copies of the graph $L_{16}$ in the $L_{16}$-decomposition of $Q_5.$ Every vertex of $Q_5$ has degree $5$ whereas $L_{16}$ has eight vertices of degree 3 and eight of degree 1. Therefore, since $3+3>5,$ a degree 3 vertex of any copy of $L_{16}$ in the decomposition cannot be a degree 3 vertex of another copy of $L_{16}.$ This implies that $Q_5$ has at least $5 \cdot 8 = 40$ vertices, whereas $Q_5$ has only $32$ vertices, a contradiction.
\end{proof}
In the next lemma, we give decomposition of $C_k \Box C_k$ into spanning sunlet subgraphs for any even integer $k \geq 4.$
\begin{lemma} \label{2}
For any even integer $k \geq 4,$ the graph $C_k \Box C_k$ has an $L_{k^2}$-decomposition.
\end{lemma}
\begin{proof}
Let $V(C_k) = \mathbb{Z}_k$ such that a vertex $i$ is adjacent to a vertex $i+1\pmod{k}.$ Then $V(C_k \Box C_k) = \{(i,j) : i,j = 1,2,\cdots,k\}.$ We construct two vertex-disjoint cycles $Z_1$ and $Z_2$ of length $\frac{k^2}{2}$ in $C_k \Box C_k$ as $Z_1=\langle (1,1),(1,2),\cdots,(1,\frac{k}{2}),(2,\frac{k}{2}),(2,\frac{k}{2}+1),\cdots,(2,k-1),(3,k-1),(3,k),(3,1),\cdots,(3,\frac{k}{2}-2),\cdots,(k,1) \rangle$ and $Z_2= \langle (1,\frac{k}{2}+1),(1,\frac{k}{2}+2),\cdots,(1,k),\\(2,k),(2,1),\cdots,(2,\frac{k}{2}-1),(3,\frac{k}{2}-1),(3,\frac{k}{2}), \cdots,(3,k-1),\cdots,(k,\frac{k}{2}+1) \rangle.$ Now we adjoin a pendant edge to each vertex of $Z_1$ and $Z_2$ in the lexicographic order as per the availability of the vertex, so that we get two edge-disjoint spanning subgraphs of $C_k \Box C_k$ which are isomorphic to $L_{k^2}.$ This completes the proof.
\end{proof}
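As a quick consistency check, $C_k \Box C_k$ is $4$-regular on $k^2$ vertices and so has $2k^2$ edges, while each copy of $L_{k^2}$ has $k^2$ edges; hence the two spanning copies constructed above account for all edges, each copy using the $\frac{k^2}{2}$ vertices of $Z_i$ as cycle vertices and the remaining $\frac{k^2}{2}$ vertices as pendant vertices.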
For an illustration, an $L_{64}$-decomposition of $C_8 \Box C_8$ is shown in Figure 2. For convenience, edges of the cycles $C_{32}$ are shown by lines and edges with the pendant vertices by dotted lines in both the copies of $L_{64}.$
\begin{center}
\scalebox{0.7}{
\begin{tikzpicture}
\draw [fill=black] (0,0) circle (0.1);
\draw [fill=black] (1,0) circle (0.1);
\draw [fill=black] (2,0) circle (0.1);
\draw [fill=black] (3,0) circle (0.1);
\draw [fill=black] (4,0) circle (0.1);
\draw [fill=black] (5,0) circle (0.1);
\draw [fill=black] (6,0) circle (0.1);
\draw [fill=black] (7,0) circle (0.1);
\draw [fill=black] (0,1) circle (0.1);
\draw [fill=black] (1,1) circle (0.1);
\draw [fill=black] (2,1) circle (0.1);
\draw [fill=black] (3,1) circle (0.1);
\draw [fill=black] (4,1) circle (0.1);
\draw [fill=black] (5,1) circle (0.1);
\draw [fill=black] (6,1) circle (0.1);
\draw [fill=black] (7,1) circle (0.1);
\draw [fill=black] (0,2) circle (0.1);
\draw [fill=black] (1,2) circle (0.1);
\draw [fill=black] (2,2) circle (0.1);
\draw [fill=black] (3,2) circle (0.1);
\draw [fill=black] (4,2) circle (0.1);
\draw [fill=black] (5,2) circle (0.1);
\draw [fill=black] (6,2) circle (0.1);
\draw [fill=black] (7,2) circle (0.1);
\draw [fill=black] (0,3) circle (0.1);
\draw [fill=black] (1,3) circle (0.1);
\draw [fill=black] (2,3) circle (0.1);
\draw [fill=black] (3,3) circle (0.1);
\draw [fill=black] (4,3) circle (0.1);
\draw [fill=black] (5,3) circle (0.1);
\draw [fill=black] (6,3) circle (0.1);
\draw [fill=black] (7,3) circle (0.1);
\draw [fill=black] (0,4) circle (0.1);
\draw [fill=black] (1,4) circle (0.1);
\draw [fill=black] (2,4) circle (0.1);
\draw [fill=black] (3,4) circle (0.1);
\draw [fill=black] (4,4) circle (0.1);
\draw [fill=black] (5,4) circle (0.1);
\draw [fill=black] (6,4) circle (0.1);
\draw [fill=black] (7,4) circle (0.1);
\draw [fill=black] (0,5) circle (0.1);
\draw [fill=black] (1,5) circle (0.1);
\draw [fill=black] (2,5) circle (0.1);
\draw [fill=black] (3,5) circle (0.1);
\draw [fill=black] (4,5) circle (0.1);
\draw [fill=black] (5,5) circle (0.1);
\draw [fill=black] (6,5) circle (0.1);
\draw [fill=black] (7,5) circle (0.1);
\draw [fill=black] (0,6) circle (0.1);
\draw [fill=black] (1,6) circle (0.1);
\draw [fill=black] (2,6) circle (0.1);
\draw [fill=black] (3,6) circle (0.1);
\draw [fill=black] (4,6) circle (0.1);
\draw [fill=black] (5,6) circle (0.1);
\draw [fill=black] (6,6) circle (0.1);
\draw [fill=black] (7,6) circle (0.1);
\draw [fill=black] (0,7) circle (0.1);
\draw [fill=black] (1,7) circle (0.1);
\draw [fill=black] (2,7) circle (0.1);
\draw [fill=black] (3,7) circle (0.1);
\draw [fill=black] (4,7) circle (0.1);
\draw [fill=black] (5,7) circle (0.1);
\draw [fill=black] (6,7) circle (0.1);
\draw [fill=black] (7,7) circle (0.1);
\draw[line width=0.3mm] (0,0)--(0,3)--(1,3)--(1,6)--(2,6)--(2,7)..controls(2.8,3.5)..(2,0)--(2,1)--(3,1)--(3,4)--(4,4)--(4,7)--(5,7)..controls(5.8,3.5)..(5,0)--(5,2)--(6,2)--(6,5)--(7,5)--(7,7)..controls(7.8,3.5)..(7,0)..controls(3.5,-0.8)..(0,0);
\draw[dotted] [line width=0.4mm] (0,0) -- (1,0) (0,1) -- (1,1) (0,2) -- (1,2) (0,3) -- (0,4) (1,3) -- (2,3) (1,4) -- (2,4) (1,5) -- (2,5) (1,6) -- (1,7) (2,0) -- (3,0) (2,1) -- (2,2) (2,6) -- (3,6) (2,7) -- (3,7) (3,1)--(4,1) (3,2)--(4,2) (3,3)--(4,3) (3,4)--(3,5) (4,4)--(5,4) (4,5)--(5,5) (4,6)--(5,6) (4,7)..controls(4.8,3.5)..(4,0) (5,0)--(6,0) (5,1)--(6,1) (5,2)--(5,3) (5,7)--(6,7) (6,2)--(7,2) (6,3)--(7,3) (6,4)--(7,4) (6,5)--(6,6) (7,0)--(7,1) (7,5)..controls(3.5,5.8)..(0,5) (7,6)..controls(3.5,6.8)..(0,6) (7,7)..controls(3.5,7.8)..(0,7);
\draw [fill=black] (10,0) circle (0.1);
\draw [fill=black] (11,0) circle (0.1);
\draw [fill=black] (12,0) circle (0.1);
\draw [fill=black] (13,0) circle (0.1);
\draw [fill=black] (14,0) circle (0.1);
\draw [fill=black] (15,0) circle (0.1);
\draw [fill=black] (16,0) circle (0.1);
\draw [fill=black] (17,0) circle (0.1);
\draw [fill=black] (10,1) circle (0.1);
\draw [fill=black] (11,1) circle (0.1);
\draw [fill=black] (12,1) circle (0.1);
\draw [fill=black] (13,1) circle (0.1);
\draw [fill=black] (14,1) circle (0.1);
\draw [fill=black] (15,1) circle (0.1);
\draw [fill=black] (16,1) circle (0.1);
\draw [fill=black] (17,1) circle (0.1);
\draw [fill=black] (10,2) circle (0.1);
\draw [fill=black] (11,2) circle (0.1);
\draw [fill=black] (12,2) circle (0.1);
\draw [fill=black] (13,2) circle (0.1);
\draw [fill=black] (14,2) circle (0.1);
\draw [fill=black] (15,2) circle (0.1);
\draw [fill=black] (16,2) circle (0.1);
\draw [fill=black] (17,2) circle (0.1);
\draw [fill=black] (10,3) circle (0.1);
\draw [fill=black] (11,3) circle (0.1);
\draw [fill=black] (12,3) circle (0.1);
\draw [fill=black] (13,3) circle (0.1);
\draw [fill=black] (14,3) circle (0.1);
\draw [fill=black] (15,3) circle (0.1);
\draw [fill=black] (16,3) circle (0.1);
\draw [fill=black] (17,3) circle (0.1);
\draw [fill=black] (10,4) circle (0.1);
\draw [fill=black] (11,4) circle (0.1);
\draw [fill=black] (12,4) circle (0.1);
\draw [fill=black] (13,4) circle (0.1);
\draw [fill=black] (14,4) circle (0.1);
\draw [fill=black] (15,4) circle (0.1);
\draw [fill=black] (16,4) circle (0.1);
\draw [fill=black] (17,4) circle (0.1);
\draw [fill=black] (10,5) circle (0.1);
\draw [fill=black] (11,5) circle (0.1);
\draw [fill=black] (12,5) circle (0.1);
\draw [fill=black] (13,5) circle (0.1);
\draw [fill=black] (14,5) circle (0.1);
\draw [fill=black] (15,5) circle (0.1);
\draw [fill=black] (16,5) circle (0.1);
\draw [fill=black] (17,5) circle (0.1);
\draw [fill=black] (10,6) circle (0.1);
\draw [fill=black] (11,6) circle (0.1);
\draw [fill=black] (12,6) circle (0.1);
\draw [fill=black] (13,6) circle (0.1);
\draw [fill=black] (14,6) circle (0.1);
\draw [fill=black] (15,6) circle (0.1);
\draw [fill=black] (16,6) circle (0.1);
\draw [fill=black] (17,6) circle (0.1);
\draw [fill=black] (10,7) circle (0.1);
\draw [fill=black] (11,7) circle (0.1);
\draw [fill=black] (12,7) circle (0.1);
\draw [fill=black] (13,7) circle (0.1);
\draw [fill=black] (14,7) circle (0.1);
\draw [fill=black] (15,7) circle (0.1);
\draw [fill=black] (16,7) circle (0.1);
\draw [fill=black] (17,7) circle (0.1);
\draw[line width=0.3mm] (10,4)--(10,7)--(11,7)..controls(11.8,3.5)..(11,0)--(11,2)--(12,2)--(12,5)--(13,5)--(13,7)..controls(13.8,3.5)..(13,0)--(14,0)--(14,3)--(15,3)--(15,6)--(16,6)--(16,7)..controls(16.8,3.5)..(16,0)--(16,1)--(17,1)--(17,4)..controls(13.5,3.2)..(10,4);
\draw[dotted] [line width=0.4mm] (10,4)--(11,4) (10,5)--(11,5) (10,6)--(11,6) (10,7)..controls(9.2,3.5)..(10,0) (11,0)--(12,0) (11,1)--(12,1) (11,2)--(11,3) (11,7)--(12,7) (12,2)--(13,2) (12,3)--(13,3) (12,4)--(13,4) (12,5)--(12,6) (13,0)--(13,1) (13,5)--(14,5) (13,6)--(14,6) (13,7)--(14,7) (14,0)--(15,0) (14,1)--(15,1) (14,2)--(15,2) (14,3)--(14,4) (15,3)--(16,3) (15,4)--(16,4) (15,5)--(16,5) (15,6)--(15,7) (16,0)--(17,0) (16,1)--(16,2) (16,6)--(17,6) (16,7)--(17,7) (17,4)--(17,5) (17,1)..controls(13.5,0.2)..(10,1) (17,2)..controls(13.5,1.2)..(10,2) (17,3)..controls(13.5,2.2)..(10,3);
\node at (8.5,-1.5) {\large Figure 2. An $L_{64}$-decomposition of $C_8 \Box C_8$};
\end{tikzpicture}}
\end{center}
The following result is a corollary of the above lemma.
\begin{corollary} \label{4}
For any integer $n \geq 1,$ there exists an $L_{2^{4n}}$-decomposition of $Q_{4n}.$ In other words, $Q_{4n}$ has a decomposition into the spanning sunlet graphs for any integer $n \geq 1.$
\end{corollary}
\begin{proof}
We can write $Q_{4n}=Q_{2n} \Box Q_{2n}.$ By Corollary \ref{C}, $Q_{2n}$ has a decomposition into Hamiltonian cycles. Let $Z_1,Z_2, \cdots, Z_n$ be Hamiltonian cycles in $Q_{2n}$ such that the collection $\{Z_1,Z_2, \cdots, Z_n\}$ decomposes $Q_{2n}.$ Then $Z_1 \Box Z_1,Z_2 \Box Z_2, \cdots, Z_n \Box Z_n$ are edge-disjoint spanning subgraphs of $Q_{4n}$ and their collection decomposes $Q_{4n}.$ By Lemma \ref{2}, each $Z_i \Box Z_i$ has an $L_{2^{4n}}$-decomposition. Hence $Q_{4n}$ has an $L_{2^{4n}}$-decomposition.
\end{proof}
Now we prove that the necessary condition for the existence of an $L_{16}$-decomposition of $Q_n$ is also sufficient.
We need the following four lemmas to prove the sufficient condition.
\begin{lemma} \label{6}
There exists an $L_{16}$-decomposition of $Q_6.$
\end{lemma}
\begin{proof}
Write $Q_6 = Q_4 \Box C_4,$ which is possible since $C_4 = Q_2.$ Thus $Q_6$ is obtained by replacing each vertex of $C_4$ by a copy of $Q_4$ and replacing each edge of $C_4$ by a matching between the two copies of $Q_4$ corresponding to the end vertices of that edge. Let $C_4 = \langle 0,1,2,3,0 \rangle$ and let $Q_4^0, Q_4^1, Q_4^2, Q_4^3$ be the copies of $Q_4$ in $Q_6$ corresponding to the vertices $0,1,2,3$ of $C_4,$ respectively. For $i \in \{0,2\},$ $Q_4^i$ has an $L_{16}$-decomposition by Lemma \ref{2}, as each $Q_4^i$ can be written as the Cartesian product of two cycles of length $4.$ For $i \in \{1,3\},$ exactly two cycles of length eight pass through each vertex of $Q_4^i,$ as $Q_4^i$ has a $C_8$-decomposition by Corollary \ref{C}. Adjoin each vertex of one of these two cycles to the corresponding vertex in $Q_4^0,$ and adjoin each vertex of the other cycle to the corresponding vertex in $Q_4^2.$ In this way, each copy of the cycle of length eight yields a copy of $L_{16}.$ This completes the proof.
\end{proof}
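As a quick count, $|E(Q_6)| = 6 \cdot 2^5 = 192 = 12 \cdot 16$: the copies $Q_4^0$ and $Q_4^2$ contribute $2+2 = 4$ copies of $L_{16},$ the $C_8$-decompositions of $Q_4^1$ and $Q_4^3$ contribute $4+4 = 8$ further copies, and the pendant edges of these eight copies use all $4 \cdot 16 = 64$ edges of the four matchings.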
\begin{lemma} \label{7}
There exists an $L_{16}$-decomposition of $Q_7.$
\end{lemma}
\begin{proof}
Write $Q_7$ as $Q_7 = Q_4 \Box Q_3.$ Let $D$ be a directed graph obtained from $Q_3$ by giving directions to the edges, as shown in Figure 3.
\begin{center}
\scalebox{1}{
\begin{tikzpicture}
\draw [fill=black] (0,0) circle (0.1);
\draw [fill=black] (3,0) circle (0.1);
\draw [fill=black] (0,3) circle (0.1);
\draw [fill=black] (3,3) circle (0.1);
\draw [fill=black] (1,1) circle (0.1);
\draw [fill=black] (2,1) circle (0.1);
\draw [fill=black] (2,2) circle (0.1);
\draw [fill=black] (1,2) circle (0.1);
\draw (0,0) -- (3,0) -- (3,3) -- (0,3) -- (0,0) (1,1) -- (2,1) -- (2,2) -- (1,2) -- (1,1) -- (0,0) (2,1) -- (3,0) (2,2) -- (3,3) (1,2) -- (0,3);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(0.6,0) -- (0.5,0);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(0.4,0.4) -- (0.3,0.3);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(0,0.6) -- (0,0.5);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(3,2.5) -- (3,2.6);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(2.6,2.6) -- (2.7,2.7);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(2.5,3) -- (2.6,3);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(1.4,1) -- (1.3,1);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(1,1.6) -- (1,1.7);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(1.6,2) -- (1.7,2);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(2,1.4) -- (2,1.3);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(0.4,2.6) -- (0.3,2.7);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(2.6,0.4) -- (2.7,0.3);
\node at (1.5,-0.6) {Figure 3.};
\end{tikzpicture}}
\end{center}
In $D,$ there are two vertices with in-degree 3 and out-degree 0, and all remaining vertices have in-degree 1 and out-degree 2. The graph $Q_7$ is obtained by replacing each vertex of $Q_3$ with a copy of $Q_4$ and replacing each edge of $Q_3$ by a matching between the two copies of $Q_4$ corresponding to the end vertices of that edge. Consider an $L_{16}$-decomposition of the copy of $Q_4$ corresponding to each vertex of $D$ with out-degree 0, and a $C_8$-decomposition of the copy of $Q_4$ corresponding to each vertex of $D$ with out-degree 2. In such a $C_8$-decomposition, exactly two cycles pass through each vertex. To each vertex of a copy of $Q_4$ corresponding to a vertex of out-degree 2, adjoin a pendant edge joining it to the corresponding vertex in one of the two adjacent copies of $Q_4,$ chosen according to the direction of the corresponding edge in $D.$ Then each $C_8$ in the $C_8$-decomposition of such a copy of $Q_4$ yields a copy of $L_{16}.$ Hence we get an $L_{16}$-decomposition of $Q_7.$
\end{proof}
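Again as a count check, $|E(Q_7)| = 7 \cdot 2^6 = 448 = 28 \cdot 16$: the two copies of $Q_4$ corresponding to the vertices of out-degree 0 contribute $2 \cdot 2 = 4$ copies of $L_{16},$ the six $C_8$-decomposed copies contribute $6 \cdot 4 = 24$ further copies, and the pendant edges of the latter use all $12 \cdot 16 = 192$ matching edges.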
\begin{lemma} \label{9}
There exists an $L_{16}$-decomposition of $Q_9.$
\end{lemma}
\begin{proof}
Write $Q_9$ as $Q_9 = Q_6 \Box Q_3.$ Let $D$ be a directed graph obtained from $Q_3$ by giving directions to the edges, as shown in Figure 4.
\begin{center}
\scalebox{1}{
\begin{tikzpicture}
\draw [fill=black] (0,0) circle (0.1);
\draw [fill=black] (3,0) circle (0.1);
\draw [fill=black] (0,3) circle (0.1);
\draw [fill=black] (3,3) circle (0.1);
\draw [fill=black] (1,1) circle (0.1);
\draw [fill=black] (2,1) circle (0.1);
\draw [fill=black] (2,2) circle (0.1);
\draw [fill=black] (1,2) circle (0.1);
\draw (0,0) -- (3,0) -- (3,3) -- (0,3) -- (0,0) (1,1) -- (2,1) -- (2,2) -- (1,2) -- (1,1) -- (0,0) (2,1) -- (3,0) (2,2) -- (3,3) (1,2) -- (0,3);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(0.6,0) -- (0.5,0);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(0.4,0.4) -- (0.3,0.3);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(0,0.6) -- (0,0.5);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(3,2.5) -- (3,2.6);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(2.6,2.6) -- (2.7,2.7);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(2.5,3) -- (2.6,3);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(1.6,1) -- (1.7,1);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(1,1.6) -- (1,1.7);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(1.4,2) -- (1.3,2);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(2,1.4) -- (2,1.3);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(0.7,2.3) -- (0.8,2.2);
\draw[decoration={markings,mark=at position 1 with
{\arrow[scale=3,>=stealth]{>}}},postaction={decorate}]
(2.3,0.7) -- (2.2,0.8);
\node at (1.5,-0.6) {Figure 4.};
\end{tikzpicture}}
\end{center}
In $D,$ there are four vertices with out-degree 0, and the remaining four vertices have out-degree 3. The graph $Q_9$ is obtained by replacing each vertex of $Q_3$ with a copy of $Q_6$ and replacing each edge of $Q_3$ by a matching between the two copies of $Q_6$ corresponding to the end vertices of that edge. Consider an $L_{16}$-decomposition of the copies of $Q_6$ corresponding to the vertices with out-degree 0, and a $C_8$-decomposition of the copies of $Q_6$ corresponding to the vertices with out-degree 3. In a $C_8$-decomposition of $Q_6,$ exactly three cycles pass through each vertex. To each vertex of a copy of $Q_6$ corresponding to a vertex with out-degree 3, adjoin a pendant edge ending at a vertex of its nearest copy of $Q_6,$ according to the direction of the corresponding edge in $D.$ Then each $C_8$ in the $C_8$-decomposition of such a copy of $Q_6$ yields a copy of $L_{16}.$ Hence we get an $L_{16}$-decomposition of $Q_9.$
\end{proof}
The following lemma follows from the definition of the Cartesian product of graphs.
\begin{lemma} \label{def}
If the graphs $G_1$ and $G_2$ each have an $H$-decomposition, then the graph $G_1 \Box G_2$ has an $H$-decomposition.
\end{lemma}
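For hypercubes, the edge partition behind Lemma \ref{def} can be made completely explicit: the edge set of $Q_{a+b}=Q_a \Box Q_b$ splits into $2^b$ copies of $Q_a$ and $2^a$ copies of $Q_b.$ The following Python sketch (an illustration only, with ad hoc names; it is not part of the proof) constructs this partition and checks it for $Q_5=Q_2\Box Q_3.$
\begin{verbatim}
from itertools import product

def hypercube_edges(n):
    """Edges of Q_n on vertex set {0,1}^n, one edge per flipped coordinate."""
    edges = []
    for v in product((0, 1), repeat=n):
        for i in range(n):
            if v[i] == 0:                      # count each edge exactly once
                w = v[:i] + (1,) + v[i + 1:]
                edges.append((v, w))
    return edges

def product_edge_partition(a, b):
    """Split E(Q_{a+b}) into 2^b copies of Q_a and 2^a copies of Q_b."""
    copies_of_Qa = [[(u + y, v + y) for (u, v) in hypercube_edges(a)]
                    for y in product((0, 1), repeat=b)]
    copies_of_Qb = [[(x + u, x + v) for (u, v) in hypercube_edges(b)]
                    for x in product((0, 1), repeat=a)]
    return copies_of_Qa, copies_of_Qb

qa, qb = product_edge_partition(2, 3)
all_edges = [e for part in qa + qb for e in part]
assert sorted(all_edges) == sorted(hypercube_edges(5))   # 5 * 2^4 = 80 edges
\end{verbatim}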
In the following lemma, we prove that $n=4$ or $n \geq 6$ is sufficient for the existence of an $L_{16}$-decomposition of $Q_n.$
\begin{lemma} \label{n}
There exists an $L_{16}$-decomposition of $Q_n$ if $n=4$ or $n \geq 6.$
\end{lemma}
\begin{proof}
We prove the result by induction on $n.$ For $n=4,$ the result holds as $Q_4$ has an $L_{16}$-decomposition by Lemma \ref{2}. For $n=8,$ we write $Q_8 = Q_4 \Box Q_4$ and the result holds by Lemma \ref{def}. For $n \in \{6,7,9\},$ the result follows by Lemmas \ref{6}, \ref{7} and \ref{9}. Suppose that $n \geq 10.$ Assume that the result holds for the $k$-dimensional hypercube for any integer $k$ with $6 \leq k \leq n-1.$ Write $Q_n = Q_{n-4} \Box Q_4.$ By induction hypothesis, $Q_{n-4}$ has an $L_{16}$-decomposition as $n-4 \geq 6.$ Hence $Q_n$ has an $L_{16}$-decomposition by Lemma \ref{def}. This completes the proof.
\end{proof}
The following result follows from Lemmas \ref{5} and \ref{n}.
\begin{theorem}
There exists an $L_{16}$-decomposition of $Q_n$ if and only if $n = 4$ or $n \geq 6.$
\end{theorem}
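As a quick numerical check, note that $Q_n$ has $n2^{n-1}$ edges, so an $L_{16}$-decomposition of $Q_n$ consists of exactly $n2^{n-1}/16=n2^{n-5}$ copies of $L_{16}$: two copies for $Q_4,$ twelve for $Q_6,$ twenty-eight for $Q_7,$ and so on.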
\section{An $L_{2k}$-decomposition of hypercubes}
In this section, we prove that, for every $m \geq 2,$ $Q_{mn}$ has an $L_{2k}$-decomposition whenever $Q_{n}$ has a $C_k$-decomposition. In the next two lemmas, we prove the result for $m=2$ and $m=3.$ Note that a $C_k$-decomposition of $Q_n$ is possible only for an even integer $n \geq 2;$ for $n=2,$ $Q_n = C_4.$
\begin{lemma} \label{2n}
If $Q_{n}$ has a $C_k$-decomposition, then $Q_{2n}$ has an $L_{2k}$-decomposition.
\end{lemma}
\begin{proof}
Suppose $Q_{n}$ has a $C_k$-decomposition. Note that in the $C_k$-decomposition of $Q_{n},$ exactly $\frac{n}{2}$ cycles pass through each vertex of $Q_{n}.$ We can write $Q_{2n} = Q_{n} \Box Q_{n}.$ Let $W_0, W_1, \cdots, W_{2^{n}-1}$ be the copies of $Q_{n}$ in $Q_{2n}$ corresponding to the vertices of $Q_{n}.$ Then each $W_i$ has a $C_k$-decomposition. Also, for each $i,$ there are $n$ copies $W_j$ adjacent to $W_i.$
Since $Q_{n}$ is a connected regular graph of even degree $n,$ there is a directed Eulerian circuit in $Q_{n}$ in which the in-degree and the out-degree of each vertex are both $\frac{n}{2}.$ In a $C_k$-decomposition of each $W_i,$ adjoin a pendant edge from each vertex of each cycle to exactly one vertex of the nearest copy $W_j$ of $W_i$ in $Q_{2n}$ whenever there is a directed edge in the Eulerian circuit from the vertex $i$ to the vertex $j.$ From the $C_k$-decomposition of each $W_i,$ we thus get edge-disjoint copies of $L_{2k}.$ This completes the proof.
\end{proof}
We need the concepts of even and odd parity of vertices in the proof of the following lemma. A vertex $v = x_1 x_2 \cdots x_n$ of $Q_n$ is said to have \textit{even (odd) parity} if an even (odd) number of the $x_i$ are equal to $1.$ Let $X$ and $Y$ be the sets of vertices of $Q_n$ with even parity and odd parity, respectively, so that $X \cup Y = V(Q_n).$ Then $(X,Y)$ is a bipartition of the bipartite graph $Q_n.$
\begin{lemma} \label{3n}
If $Q_{n}$ has a $C_k$-decomposition, then $Q_{3n}$ has an $L_{2k}$-decomposition.
\end{lemma}
\begin{proof}
We can write $Q_{3n} = Q_{2n} \Box Q_{n}.$ Let $W_0, W_1, \cdots, W_{2^{n}-1}$ be the copies of $Q_{2n}$ in $Q_{3n}$ corresponding to the vertices of $Q_{n}.$ Let $D$ be the digraph obtained from $Q_{n}$ by directing every edge from its even-parity end vertex to its odd-parity end vertex, so that each vertex with even parity has out-degree $n$ and each vertex with odd parity has out-degree $0.$ By Lemma \ref{2n}, each $W_j$ corresponding to a vertex of $Q_{n}$ with odd parity has an $L_{2k}$-decomposition. Consider a $C_k$-decomposition of each $W_j$ corresponding to a vertex of $Q_{n}$ with even parity. Note that in this $C_k$-decomposition of $W_j,$ exactly $n$ edge-disjoint cycles pass through each vertex. By adjoining a pendant edge at each vertex of each cycle in such a $W_j,$ following the directions of the edges in $D,$ we get a copy of $L_{2k}$ from each $C_k$ in the $C_k$-decomposition of $W_j.$ This completes the proof.
\end{proof}
Now, we have the following result.
\begin{theorem} \label{m}
If $Q_{n}$ has a $C_k$-decomposition, then $Q_{mn}$ has an $L_{2k}$-decomposition for $m \geq 2.$
\end{theorem}
\begin{proof}
If $m$ is a multiple of $2,$ the result holds by Lemmas \ref{def} and \ref{2n}, as $Q_{mn}$ is the Cartesian product of $\frac{m}{2}$ copies of $Q_{2n}.$ Similarly, if $m$ is a multiple of $3,$ the result holds by Lemmas \ref{def} and \ref{3n}, as $Q_{mn}$ is the Cartesian product of $\frac{m}{3}$ copies of $Q_{3n}.$ For $m=5$ and $7,$ we can write $Q_{mn}$ as $Q_{5n}= Q_{2n} \Box Q_{3n}$ and $Q_{7n}= Q_{4n} \Box Q_{3n},$ respectively. Thus the result holds by Lemmas \ref{def}, \ref{2n} and \ref{3n} for $m=5,7.$ It follows that the result holds for every $m$ with $2 \leq m \leq 10.$ Suppose now that $m \geq 11$ and $m$ is neither a multiple of $2$ nor a multiple of $3.$ Then either $m = 6q+5$ for some $q \geq 1$ or $m=6q+1$ for some $q \geq 2.$ If $m = 6q+5$ with $q \geq 1,$ we can write $Q_{mn}= Q_{6qn} \Box Q_{5n}.$ If $m = 6q+1$ with $q \geq 2,$ we can write $Q_{mn}= Q_{6(q-1)n} \Box Q_{7n}.$ Note that for any $r \geq 1,$ $Q_{6rn}$ has an $L_{2k}$-decomposition by Lemmas \ref{2n} and \ref{3n}. Thus by Lemma \ref{def}, $Q_{mn}$ has an $L_{2k}$-decomposition.
\end{proof}
As a consequence of Theorem \ref{m}, we have the following result.
\begin{corollary}
Let $m \geq 2$ be an integer and $n \geq 4$ be an even integer.
\begin{enumerate}
\item $Q_{mn}$ has an $L_{2^{t+1}}$-decomposition for $2\leq t \leq n-1.$
\item $Q_{mn}$ has an $L_{2n}$-decomposition.
\item $Q_{mn}$ has an $L_{4n}$-decomposition.
\item $Q_{mn}$ has an $L_{8n}$-decomposition.
\item $Q_{mn}$ has an $L_{n 2^{k+1}}$-decomposition for $2n \leq n 2^k \leq \frac{2^n}{n}.$
\end{enumerate}
\end{corollary}
\begin{proof}
We have the following $C_k$-decompositions of $Q_n$ for an even integer $n \geq 4.$
\begin{enumerate}
\item Zanati and Eynden \cite{z} proved that $Q_n$ has a $C_{2^t}$-decomposition for $2\leq t \leq n-1.$
\item Ramras \cite{r} proved that $Q_n$ has a $C_n$-decomposition.
\item Mollard and Ramras \cite{mr} proved that $Q_n$ has a $C_{2n}$-decomposition.
\item Tapadia, Borse and Waphare \cite{t} obtained that $Q_n$ has a $C_{4n}$-decomposition.
\item Axenovich, Offner and Tompkins \cite{ax} established that $Q_n$ has a $C_{n 2^k}$-decomposition for $2n \leq n 2^k \leq \frac{2^n}{n}.$
\end{enumerate}
By applying Theorem \ref{m} to each of the above $C_k$-decompositions of $Q_n,$ we get the desired $L_{2k}$-decomposition of $Q_{mn}.$
\end{proof}
\end{document}
\begin{document}
\title{Linear chord diagrams on two intervals}
{\renewcommand{\thefootnote}{}
\footnote{\emph{E-mail addresses}: andersen{\char'100}imf.au.dk (J.\ E.\ Andersen), rpenner{\char'100}imf.au.dk (R.\ C.\ Penner), duck{\char'100}santafe.edu (C.\ M.\ Reidys), wangrui{\char'100}cfc.nankai.edu.cn (R.\ R.\ Wang)}
\footnotetext[1]{JEA and RCP
supported by QGM (Center for Quantum Geometry of Moduli
Spaces, funded by the Danish National Research Foundation).}
\footnotetext[2]{CMR and RRW supported by the 973 Project, the PCSIRT Project of the Ministry of Education, the Ministry of Science and Technology, and the National Science Foundation of China.}
\begin{abstract}
Consider all possible
ways of attaching disjoint chords to two ordered and oriented disjoint intervals so as to produce a connected
graph. Taking the intervals to lie in the real axis with the induced orientation and the chords to lie in the
upper half plane canonically determines a corresponding fatgraph which
has some associated genus $g\geq 0$, and we consider the natural
generating function ${\bf C}_g^{[2]}(z)=\sum_{n\geq 0} {\bf c}^{[2]}_g(n)z^n$
for the number ${\bf c}^{[2]}_g(n)$ of distinct such chord diagrams of
fixed genus $g\geq 0$ with a given number $n\geq 0$ of chords.
We prove here the surprising fact that ${\bf C}^{[2]}_g(z)=z^{2g+1} R_g^{[2]}(z)/(1-4z)^{3g+2} $
is a rational function, for $g\geq 0$, where the
polynomial $R^{[2]}_g(z)$ with degree at most $g$ has integer coefficients and satisfies $R_g^{[2]}({1\over 4})\neq 0$.
Earlier work had already determined that the analogous generating function
${\bf C}_g(z)=z^{2g}R_g(z)/(1-4z)^{3g-{1\over 2}}$
for chords attached to a single interval is
algebraic, for $g\geq 1$, where the polynomial $R_g(z)$ with degree at most $g-1$ has integer coefficients and satisfies $R_g(1/4)\neq 0$ in analogy to the generating function ${\bf C}_0(z)$ for the Catalan numbers. The new results here on ${\bf C}_g^{[2]}(z)$ rely
on this earlier work, and indeed, we find
that $R_g^{[2]}(z)=R_{g+1}(z) -z\sum_{g_1=1}^g R_{g_1}(z) R_{g+1-g_1}(z)$, for $g\geq 1$.
\end{abstract}
\begin{keywords}
linear chord diagrams, fatgraph, group character, generating function
\end{keywords}
\begin{AMS}
05A15, 05E10
\end{AMS}
\pagestyle{myheadings}
\thispagestyle{plain}
\markboth{J. E. ANDERSEN, R. C. PENNER, C. M. REIDYS, AND R. R. WANG}{Linear chord diagrams on two intervals}
\section{Introduction}\label{S:intro}
Fix a collection of disjoint oriented intervals called ``backbones'' in a specified
linear ordering and consider all possible ways of attaching a family of
unoriented and unordered intervals called ``chords'' to the backbones by gluing
their endpoints to pairwise distinct points in the backbone. These combinatorial structures
occur in a number of instances in pure mathematics including finite type invariants
of links \cite{Barnatan95,Kontsevich93}, the geometry of moduli spaces of flat connections
on surfaces \cite{AMR1,AMR2}, the representation theory of Lie algebras \cite{CSM}, and the mapping class groups \cite{BAMP}. They also
arise in applied mathematics for codifying the
pairings among nucleotides in a collection of RNA molecules \cite{Reidys},
or more generally in any macromolecular complex \cite{Nagai-Mattaj},
and
in bioinformatics where they apparently
represent the building blocks of grammars of folding algorithms
of RNA complexes \cite{Alkan,backofen,RIP}.
This paper is dedicated to enumerative problems associated with
connected chord diagrams on two backbones as follows.
Drawing a chord diagram $G$ in the plane with its backbone segments
lying in the real axis with the natural orientation and its chords in the upper half-plane determines
cyclic orderings on the half-edges of the underlying graph
incident on each vertex. This defines a corresponding ``fatgraph''
$\mathbb G$, to which is canonically associated a topological surface
$F(\mathbb G)$ (cf.\ $\S$\ref{S:fatgraphs}) of some genus, cf.\ Figure~\ref{F:two}.
\begin{figure}
\caption{\small From a chord diagram to a fatgraph and to a surface.}
\label{F:two}
\end{figure}
Let ${\bf c}_g^{[b]}(n)$ denote the number of distinct
chord diagrams on $b\geq 1$ backbones with $n\geq 0$ chords
of genus $g\geq 0$ with its natural generating function
${\bf C}_g^{[b]}(z)=\sum_{n\geq 0} {\bf c}^{[b]}_g(n)z^n$,
setting in particular ${\bf c}_g(n)={\bf c}_g^{[1]}(n)$ and
likewise ${\bf C}_g(z)={\bf C}_g^{[1]}(z)$ for convenience.
In particular, the Catalan numbers ${\bf c}_0(n)$, i.e., the number of
triangulations of a polygon with $n+2$ sides, enumerate linear chord
diagrams of genus zero, and we have ${\bf C}_0(z)={{1-\sqrt{1-4z}}\over{2z}}$
with ${\bf c}_0(n)={1\over{n+1}}{{2n}\choose{n}}$.

The one-backbone generating functions
\begin{eqnarray*}
\mathbf{C}_g(z) = \, \frac{P_g(z)}{(1-4z)^{3g-{1\over 2}}}, ~{\rm for~any}~g\geq 1,
\end{eqnarray*}
were computed in \cite{APRW},
where $P_g(z)$ is a polynomial defined over the integers of degree at most
$(3g-1)$ that is divisible by $z^{2g}$ with $P_g(1/4)\neq 0$.
In particular, $\mathbf{C}_g(z)$ is algebraic over $\mathbb C(z)$ for all
$g\geq 1$, just as is the Catalan generating function $\mathbf{C}_0(z)$,
and there are explicit expressions such as
\begin{eqnarray*}
\mathbf{c}_1(n) & = & \frac{2^{n-2}(2n-1)!!}{3(n-2)!}, \\
\mathbf{c}_2(n) & = & \frac{2^{n-4}(5n-2)(2n-1)!!}{90 (n-4)!}, \\
\mathbf{c}_3(n) & = &\frac{2^{n-6} (35n^2-77n+12) (2n-1)!!
}{5670 (n-6)! },
\end{eqnarray*}
for these ``higher'' Catalan numbers,
where $\mathbf{c}_g(n)=0$, for $n<2g$.
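For instance, $\mathbf{c}_1(2)=1$ and $\mathbf{c}_1(3)=10$: of the $5!!=15$ ways of attaching three chords to a single backbone, the $\mathbf{c}_0(3)=5$ planar ones have genus zero and the remaining ten have genus one.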
In Theorem \ref{T:main} of this paper, we derive the two-backbone generating functions
\begin{eqnarray*}
\mathbf{C}_g^{[2]}(z) & = & \frac{P_g^{[2]}(z)}{(1-4z)^{3g+2}},~{\rm for~any}~g\geq 0,
\end{eqnarray*}
where $P_g^{[2]}(z)$ is an integral polynomial of degree at most
$(3g+1)$ that is divisible by $z^{2g+1}$ with $P_g^{[2]}(1/4)> 0$.
In particular, these generating functions are rational functions defined over the integers.
In fact, these polynomials
$$P_g^{[2]}(z) = z^{-1}P_{g+1}(z)-\sum_{g_1=1}^{g}P_{g_1}(z)P_{g+1-g_1}(z)$$
are computable in terms of the previous ones, for example:
\begin{eqnarray*}
P_0^{[2]}(z)&=&z,\\
P_1^{[2]}(z)&=&z^3(20z+21),\\
P_2^{[2]}(z)&=&z^5\, \left(1696z^2+6096z+1485 \right),\\
P_3^{[2]}(z)&=&z^7\, \left(330560z^3+2614896z^2+1954116z+225225 \right),\\
P_4^{[2]}(z)&=&z^9\left(118652416z^4 +1661701632z^3+2532145536z^2+851296320z+
59520825\right),\\
P_5^{[2]}(z)&=&z^{11} \left(68602726400z^5+
1495077259776z^4\right.\\
&&\left.+3850801696512z^3+2561320295136z^2 +505213089300z+24325703325\right).
\end{eqnarray*}
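As a consistency check of the first of these relations (a small symbolic sketch, not part of the derivation; it uses the values $P_1(z)=z^2$ and $P_2(z)=21z^4+21z^5$ quoted in the proof of Corollary~\ref{C:formel}):
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
P1, P2 = z**2, 21*z**4 + 21*z**5          # one-backbone polynomials quoted below
P1_two_backbones = sp.expand(P2 / z - P1**2)   # the relation above with g = 1
assert P1_two_backbones == sp.expand(z**3 * (20*z + 21))
\end{verbatim}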
The experimental fact that the coefficients of $P_g^{[2]}$ are positive, just as are those of
$P_g(z)$, leads us to speculate that they themselves solve an as yet unknown
enumerative problem.
Furthermore, explicit expressions for the number of two-backbone
chord diagrams of fixed genus such as
\begin{eqnarray*}
\mathbf{c}^{[2]}_0(n) & =& n\, 4^{n-1},\\
\mathbf{c}^{[2]}_1(n) & =& \frac{1}{12}\left( 13\,n+3 \right)
n\left( n-1 \right)\left( n-2 \right)\,{4}^{n-3},\\
\mathbf{c}^{[2]}_2(n) & =& {\frac {1}{180}}\,\left(
445\,{n}^{2}-401\,n-210 \right) n
\left(n-1 \right)\left(n-2 \right)\left( n-3 \right)
\left(n-4 \right)\, {4}^{n-6} ,
\end{eqnarray*}
can also be derived, cf.\ Corollary \ref{C:formel}.
\section{Background and Notation}\label{Back}
We formulate the basic terminology for graphs and chord diagrams,
establish notation and recall facts about symmetric groups, recall the
fundamental concepts about fatgraphs, and finally recall and extend
results from \cite{APRW} for application in subsequent sections.
\subsection{Graphs and chord diagrams on several backbones}
A {\it graph} $G$ is a finite one-dimensional CW complex comprised of
one-dimensional open edges $E(G)$ and zero-dimensional vertices $V(G)$.
An edge of the first barycentric subdivision of $G$ is called a {\it half-edge}. A
half-edge of $G$ is {\it incident} on $v\in V(G)$ if $v$ lies in its closure, and the {\it
valence} of $v$ is the number of incident half-edges.
Consider a collection $\beta_1,\ldots ,\beta_b$ of pairwise disjoint closed intervals with integer endpoints in the real axis $\mathbb R$, where $\beta_i$ lies to the left of $\beta_{i+1}$, for $i=1,\ldots , b-1$; thus,
these {\it backbone intervals} $\{ \beta _i\}_1^b$ are oriented and ordered by the orientation
on $\mathbb R$. The {\it backbone} $B=\sqcup\{ \beta _i\}_1^b$ itself is regarded as a graph
with vertices given by the integer points $\mathbb Z\cap B$ and edges determined by the unit intervals
with integer endpoints that it contains. A {\it chord diagram} on the backbone $B$ is a graph
$C$ containing $B$ so that $V(C)=V(B)$ where each vertex in $B-\partial B$ has valence
three and each vertex in $\partial B$ has valence two; in particular, $\# V(C)$ is even,
where $\#X$ denotes the cardinality of the set $X$.
Edges in $E(B)\subseteq E(C)$ are called {\it backbone edges}, and edges
in the complement $E(C)-E(B)$ are called {\it chords}.
\subsection{Permutations}
The symmetric group $S_{2n}$ of all permutations on $2n$ objects plays
a central role in our calculations, and we establish here standard
notation and recall fundamental tools. Let $(i_1,i_2,\ldots ,i_k)$ denote
the cyclic permutation $i_1\mapsto
i_2\mapsto\cdots\mapsto i_k\mapsto i_1$ on distinct objects $i_1,\ldots,
i_k$. We shall compose permutations $\pi,\tau$ from right to left, so that
$\pi\circ\tau(i)=\pi(\tau(i))$. Two cycles are {\it disjoint} if they have disjoint supports.
Let $[\pi ]$ denote the conjugacy class of $\pi\in S_{2n}$. Conjugacy
classes in $S_{2n}$ are
identified with classes of partitions of $\{1,\ldots
,2n\}$, where in the standard slightly abusive notation,
$\pi\in[\pi ]=[1^{\pi_1} \cdots {2n}^{\pi_{2n}}]$ signifies that
$\pi$ is comprised of $\pi_k\geq 0$ many $k$ cycles with pairwise disjoint
supports, for $k=1,\ldots ,2n$.
In particular, $\sum_{k=1}^{2n} k\pi_k=2n$, and the number of elements in the class
$[\pi]$ is given by
$$
|\pi|= |[\pi]|= {{(2n)!}\over{\prod_{k=1}^{2n}\left( k^{\pi_k}\pi_k!\right)}}.
$$
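In particular, $|[2^n]|={(2n)!}/({2^nn!})=(2n-1)!!$ is the number of fixed-point-free involutions in $S_{2n}$, a count that recurs throughout the computations below.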
We shall make use of the fact that the irreducible characters
$\chi^Y$ of $S_{2n}$ are labeled by Young tableaux $Y$ and shall
slightly abuse notation writing $\chi^Y([\pi])=\chi^Y(\pi)$ for the value
taken on either a permutation $\pi$ or its conjugacy class $[\pi]$.
In order to evaluate characters, we shall rely heavily on the
Murnaghan-Nakayama rule
\begin{equation}\label{E:MN}
\chi^Y( (i_1,\dots,i_m ) \circ \sigma ) =
\sum_{Y_\mu~{\rm so~that}~ Y- Y_\mu\, \text{\rm is a} \atop
\text{\rm rim hook of length $m$}} (-1)^{w(Y_\mu)}
\chi^{Y_\mu}(\sigma),
\end{equation}
where $w(Y_\mu)$ is one less than the number of rows in the rim hook $Y-Y_\mu$.
See a standard text such as \cite{Sagan} for further details.
\subsection{Fatgraphs}\label{S:fatgraphs}
If $G$ is a graph, then a {\it fattening} of $G$ is the specification of a collection of
cyclic orderings, one such ordering on the half-edges incident on each
vertex of $G$. A graph $G$ together with a fattening is called a {\it fatgraph}
${\mathbb G}$.
The key point is that a fatgraph $\mathbb G$ determines an oriented surface
$F({\mathbb G})$ with boundary as follows.
For each $v\in V(G)$ of valence $k$, take an oriented polygon
$P_v$ with $2k$ sides, and choose a tree in $P_v$ with a univalent
vertex in every other side of $P_v$ joined to a single $k$-valent
vertex in the interior. Identify the half-edges incident on $v$ with the edges
of the tree in the natural way
so that the induced counter-clockwise cyclic ordering on the boundary of
$P_v$ agrees with the fattening of $\mathbb G$ about $v$.
We construct the surface
$F(\mathbb G)$ as the quotient of $\sqcup_{v\in V(G)}
P_v$ by identifying pairs of frontier edges of the polygons with an orientation-reversing homeomorphism if
the corresponding half-edges lie in a common edge of $G$. The oriented surface $F(\mathbb G)$
is connected if $G$ is and has some associated genus $g(\mathbb G)\geq 0$ and number $r(\mathbb G)\geq 1$ of boundary components.
Furthermore, the trees in the various polygons combine to give
a graph identified with $G$ embedded in $F(\mathbb G)$, and we may thus regard
$G\subseteq F(\mathbb G)$. By construction, $G$ is a deformation retraction of
$F(\mathbb G)$, hence their Euler characteristics
agree, namely,
$$
\chi(G)=\#V(G)-\#E(G)=\chi (F(\mathbb G))=2-2g(\mathbb G)-r(\mathbb G),
$$
where the last equality holds provided $G$ is connected.
A fatgraph $\mathbb G$ is uniquely determined by a pair of permutations on
the half-edges of its underlying graph $G$ as follows. Let
$v_k\geq 0$ denote the number of $k$-valent vertices, for each
$k=1,\ldots ,K$, where $K$ is the maximum valence of vertices of $G$,
so that $\sum_{k=1}^K kv_k=2n$, and
identify the set of half-edges
of $G$ with the set $\{ 1,2,\ldots ,2n\}$ once and for all. The orbits of the cyclic orderings in
the fattening on $G$ naturally give rise to disjoint cycles comprising a permutation $\tau\in S_{2n}$ with $\tau\in [1^{v_1}2^{v_2}\cdots K^{v_K}]$. The second permutation $\iota\in [2^n]$ is the product
over all edges of $G$ of the two-cycle permuting the two half-edges it contains.
One important point is that the boundary
components of $F(\mathbb G)$ are in bijective correspondence with the
cycles of $\tau\circ\iota$, i.e.,
$r(\mathbb G)$ is the~number~of~disjoint~cycles~comprising $\tau\circ\iota$.
Furthermore, isomorphism classes of fatgraphs with vertex valencies
$(v_k)_{k=1}^K$ are in one-to-one correspondence with conjugacy classes of
pairs $\tau,\iota\in S_{2n}$, where $\tau\in [1^{v_1}2^{v_2}\cdots K^{v_K}]$
and $\iota\in [2^n]$, cf.\ Proposition \ref{P:isos}. Thus,
fatgraphs are easily stored and manipulated on the computer, and
various enumerative problems can be formulated in terms of Young tableaux.
See \cite{Penner88,PKWA} for more details on fatgraphs and
\cite{APRW, BIZ,Harer-Zagier,Itzykson-Zuber,Milgram-Penner,Penner92}
for examples of fatgraph enumerative problems in terms of character
theory for the symmetric groups.
\subsection{Fatgraphs and chord diagrams}
A convenient graphical method to uniquely determine a fatgraph is
the specification of a
regular planar projection of a graph in 3-space, where the counter-clockwise cyclic
orderings in the plane of projection determine the fattening and the crossings of edges
in the plane of projection can be arbitrarily resolved into
under/over crossings without affecting the resulting isomorphism class.
A band about each edge can be added to a neighborhood of
the vertex set in the plane of projection respecting orientations in
order to give an explicit representation of the associated surface embedded
in 3-space as on the top-right of Figure \ref{F:two}.
In particular, the standard planar representation of a chord diagram $C$ on two
backbones positions the two backbone segments in the real axis respecting their ordering
and orientation and places the chords as semi-circles in the upper half-plane as on the
bottom right in Figure \ref{F:two}. This determines the {\it canonical fattening} $\mathbb C$ on $C$ as illustrated on
the top right in the same figure. We may furthermore collapse each backbone segment to a
distinct point and induce a fattening on the quotient in the natural way to produce a fatgraph $\mathbb G_C$
with two vertices and corresponding surface $F(\mathbb G_C)$ as depicted on the left in Figure \ref{F:two}.
Insofar as $C$ is a deformation retraction of $F(\mathbb C)$ and hence of the homeomorphic surface
$F(\mathbb G_C)$,
these three spaces share the same Euler characteristic.
A chord diagram $C$ on two backbones with $n\geq 0$ chords
therefore gives rise to a
fatgraph $\mathbb G_C$ with two ordered vertices of respective valencies $c$ and
$(2n-c)$ and two distinguished half-edges, namely, the
ones coming just after the locations of the collapsed backbones.
Upon labeling the half-edges, $C$ determines
a pair of permutations $\tau_c\in[c,(2n-c)]$ with its two disjoint cycles corresponding
to the fattening of the two vertices of $\mathbb G_C$ and $\iota\in[2^n]$
corresponding to the edges of $\mathbb G_C$. Provided $C$ and hence
$\mathbb G_C$ is connected, we may define
$$\aligned
r(C)&=r(\mathbb G_C)=
{\rm the~number~of~disjoint~cycles~comprising}~\tau_c\circ\iota,\\
g(C)&=g(\mathbb G_C)= {1\over 2}\bigl(n-r(C)\bigr).\\
\endaligned$$ Conversely, the specification of $\tau_c\in[c,(2n-c)]$ and
$\iota\in[2^n]$ uniquely determines a chord diagram on two backbones with $2n$
edges. In fact, the isomorphism class of the fatgraph clearly corresponds to the
conjugacy class of the pair $\tau_c,\iota$.
For example, labeling the half-edges contained in chords from left to right on the chord diagram $C$ on the
bottom-right in Figure ~\ref{F:two} produces the fatgraph on the left of the figure
with permutations given by $\tau_3=(1,3,2)(4,5,6,7,8)$ and $\iota = (1,3)(2,5)(4,8)(6,7)$
with $r(C)=4$ and $g(C)=0$.
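A few lines of Python (a sketch included only to confirm this worked example; it is not part of the argument) reproduce the count of boundary components:
\begin{verbatim}
def perm_from_cycles(cycles, n):
    """Permutation of {1,...,n}, as a dict i -> image, from a list of cycles."""
    p = {i: i for i in range(1, n + 1)}
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            p[a] = b
    return p

def num_cycles(p):
    seen, count = set(), 0
    for s in p:
        if s not in seen:
            count += 1
            while s not in seen:
                seen.add(s)
                s = p[s]
    return count

tau = perm_from_cycles([(1, 3, 2), (4, 5, 6, 7, 8)], 8)
iota = perm_from_cycles([(1, 3), (2, 5), (4, 8), (6, 7)], 8)
boundary = {i: tau[iota[i]] for i in range(1, 9)}   # tau o iota, right to left
r = num_cycles(boundary)                            # number of boundary components
g = (4 - r) // 2                                    # n = 4 chords
assert (r, g) == (4, 0)
\end{verbatim}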
Summarizing, we have
\begin{proposition}\label{P:isos}
The following sets are in bijective correspondence:
$$\aligned
&\{ {\rm isomorphism~classes~of~chord~diagrams~on~two~backbones~with}~n~{\rm chords}\}\\
\approx &\{{\rm isomorphism~classes~of~fatgraphs~with~two~vertices~and}~n~{\rm edges}:\\
&\hskip 1.5ex{\rm each~vertex~has~a~distinguished~incident~half-edge}\}\\
\approx&\{{\rm conjugacy~classes~of~pairs}~\tau,\iota\in S_{2n}: \iota\in [2^n]~{\rm and}~ \tau\in[c,(2n-c)]\\
&\hskip 1.5ex{\rm for~some}~1\leq c\leq 2n-1, ~{\rm with~an~ordered~pair~of~elements,~one~from~each~of~the~two~cycles~of}~\tau\}
\endaligned$$
\end{proposition}
Our first counting results will rely upon the bijection established here
between chord diagrams and pairs of permutations.
\subsection{The one backbone case}
We collect here results from \cite{APRW} which will be required in the sequel
as well as extend from \cite{APRW} an explicit computation we shall also need.
As a general notational point for any power series $T(z)=\sum a_iz^i$, we
shall write $[z^i]T(z)=a_i$ for the extraction of the coefficient $a_i$
of $z^i$.
As mentioned in the introduction, the generating function ${\bf C}_g(z)=\sum_{n\geq 0} {\mathbf c}_g(n)z^n$ for $g\geq 1$ satisfies
$${\bf C}_g(z)=P_g(z)(1-4z)^{{1\over 2}-3g},$$
where the polynomial $P_g(z)$ has integer coefficients, is divisible by $z^{2g}$ and has degree at most
$3g-1$. Indeed,
$\mathbf{C}_g(z)$ is recursively computed using the ODE
\begin{eqnarray}\label{E:ODE}
z(1-4z)\frac{d}{dz}\mathbf{C}_g(z) +(1-2z)\mathbf{C}_g(z) & = &
\Phi_{g-1}(z),
\end{eqnarray}
where
\begin{eqnarray*}
\Phi_{g-1}(z) = z^2\left(4z^3\frac{d^3}{dz^3}\mathbf{C}_{g-1}(z) +
24z^2 \frac{d^2}{dz^2}\mathbf{C}_{g-1}(z) + 27z\frac{d}{dz}
\mathbf{C}_{g-1}(z)+3\mathbf{C}_{g-1}(z)\right),
\end{eqnarray*}
with initial condition $\mathbf{C}_g(0)=0$.
This ODE in turn follows from the recursion
\begin{eqnarray}\label{E:recursion}
(n+1)\, \mathbf{c}_g(n) & = & 2(2n-1)\,\mathbf{c}_g(n-1)+
(2n-1)(n-1)(2n-3)\,\mathbf{c}_{g-1}(n-2),
\end{eqnarray}
where $\mathbf{c}_g(n)=0$ for $2g>n$,
which is derived from a fundamental identity
first proved in \cite{Harer-Zagier}. Namely, the polynomials
\begin{equation}
P(n,N)=\sum_{\{g:2g\leq n\}}\mathbf{c}_g(n) N^{n+1-2g}
\end{equation}
combine into the generating function
\begin{equation}\label{E:polynN}
1+2\sum_{n\geq 0} \frac{P(n,N)}{(2n-1)!!} z^{n+1}=\biggl ( \frac{1+z}{1-z}\biggr )^N.
\end{equation}
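For instance, comparing the coefficients of $z$ and $z^2$ on both sides of eq.~(\ref{E:polynN}) gives $P(0,N)=N$ and $P(1,N)=N^2$, corresponding to the unique diagram with no chords ($r=1$) and the unique diagram with one chord ($r=2$), both of genus zero.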
In addition to the explicit expressions for the ``higher'' Catalan numbers,
one also computes
\begin{equation}\label{E:values}
[z^{2g}]P_g(z)={\bf c}_g(2g)=\frac{(4g)!}{4^g(2g+1)!}.
\end{equation}
In fact, it is also shown in \cite{APRW} that $P_g(1/4)$ is non-zero, but here,
we shall require its exact numerical value for subsequent estimates:
\begin{lemma}\label{L:gamma}We have for $g\geq 1$,
\begin{equation}\label{E:q_g4}
P_g(1/4) =\bigl (\frac{9}{4}\bigr )^g~ {\frac {\Gamma \left( g-1/6 \right)
\Gamma \left( g+1/2\right) \Gamma
\left( g+1/6 \right) }{6{\pi }^{3/2}
\Gamma \left( g+1 \right) }}.
\end{equation}
\end{lemma}
\begin{proof}
We compute
$P_g(1/4)$ by induction on $g$, and for the basis step
$g=1$,
\begin{equation}
{\bf C}_1(z)= {\frac {{z}^{2}}{\left( 1-4\,z \right)^{3}}}
\sqrt {1-4\,z}= {\frac {{z}^{2}}{\left( 1-4\,z \right)^{3}}} \bigl (1-2z{\bf C}_0(z)\bigr )
\end{equation}
according to \cite{APRW}, whence $P_1(1/4)=(1/4)^2=1/16$.
The solution to the ODE (\ref{E:ODE}) is given by
\begin{equation}\label{E:intsoln}
{\bf C}_{g+1}(z)=\, \left(\int_0^z
\frac{\Phi_{g}(y)}{(1-4y)^{\frac{3}{2}}} dy+C \right)\,\frac{\sqrt{1-4z}}{z}=\,\left(\int_0^z
\frac{Q_g(y)}{(1-4y)^{3g+4}} dy+C \right)\,\frac{\sqrt{1-4z}}{z},
\end{equation}
where $Q_g(z)$ is shown to be a polynomial of degree at most $(3g+2)$.
Insofar as ${\bf C}_{g}(z)=P_{g}(z)/(1-4z)^{3g-1/2}$,
we find
$$
\aligned
\frac{d{\bf C}_{g}(z)}{dz}&=
\frac{P_{1g}(z)}{(1-4z)^{3g+1/2}},~{\rm where}~
P_{1g}(z)=(1-4z)P_{g}'(z)+(12g-2)P_{g}(z),\\
\quad \frac{d^2 {\bf
C}_{g}(z)}{dz^2}&=
\frac{P_{2g}(z)}{(1-4z)^{3g+3/2}},~{\rm where}~
P_{2g}(z)=(1-4z)P_{1g}'(z)+(12g+2)P_{1g}(z), \\
\quad
\frac{d^3 {\bf C}_{g}(z)}{dz^3}&=
\frac{P_{3g}(z)}{(1-4z)^{3g+5/2}},~{\rm where}~
P_{3g}(z)=(1-4z)P_{2g}'(z)+(12g+6)P_{2g}(z).\\
\endaligned
$$
Thus,
\begin{equation}\label{E:Q}
Q_g(z)=4z^5P_{3g}(z)+24z^4(1-4z)P_{2g}(z)
+27z^3(1-4z)^2P_{1g}(z)+3z^2(1-4z)^3P_{g}(z),
\end{equation}
whence
\begin{equation}
Q_g(1/4)=4^{-4}P_{3g}(1/4)=4^{-4}(12g+6)(12g+2)(12g-2)P_g(1/4)\neq 0.
\end{equation}
Since $Q_g(z)$ has degree at most $(3g+2)$, its partial fraction expansion
reads
\begin{equation}\label{E:pfrac}
\frac{Q_g(z)}{\left( 1-4\,z \right) ^{(3g+4)}} =
\sum_{j= 2}^{3g+4}\frac{A_j}{\left( 1-4\,z \right) ^{j}},
\end{equation}
where the $A_j\in \mathbb{Q}$ and $A_{3g+4}=Q_g(1/4)$.
According to \cite{APRW}, $P_{g+1}(z)$ is given by
\begin{equation}\label{E:laurent}
P_{g+1}(z)= {{-1}\over 4z}
\left(\sum_{j= 2}^{3g+4}\frac{-A_j}{j-1}\left( 1-4\,z \right)^{3g+4-j}+
\sum_{j= 2}^{3g+4}\frac{A_j}{j-1}\left( 1-4\,z \right)^{3g+3}\right),
\end{equation}
and hence
\begin{eqnarray}\label{E:recgamma}
P_{g+1}(1/4)&=&4^{-4}(12g+6)(12g+2)(12g-2)P_g(1/4)/(3g+3)\nonumber\\
&=&\frac{9(g+1/2)(g+1/6)(g-1/6)}{4(g+1)}P_g(1/4),
\end{eqnarray}
where $P_1(1/4)=1/16$. The lemma follows by checking
that the formula in eq.~(\ref{E:q_g4}) is the unique solution of
eq.~(\ref{E:recgamma}) with $P_1(\frac14) = \frac{1}{16}$.
\end{proof}
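As a quick consistency check of the closed form, for $g=1$ the right-hand side of eq.~(\ref{E:q_g4}) equals
$$
\frac{9}{4}\cdot\frac{\Gamma(5/6)\,\Gamma(3/2)\,\Gamma(7/6)}{6\pi^{3/2}\,\Gamma(2)}
=\frac{9}{4}\cdot\frac{\frac{\sqrt{\pi}}{2}\cdot\frac{1}{6}\,\Gamma(5/6)\Gamma(1/6)}{6\pi^{3/2}}
=\frac{9}{4}\cdot\frac{\frac{\sqrt{\pi}}{2}\cdot\frac{2\pi}{6}}{6\pi^{3/2}}
=\frac{1}{16},
$$
using $\Gamma(7/6)=\frac{1}{6}\Gamma(1/6)$ and the reflection formula $\Gamma(5/6)\Gamma(1/6)=\pi/\sin(\pi/6)=2\pi$, in agreement with $P_1(1/4)=1/16$.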
\section{Lemmas on characters}
\begin{lemma}\label{L:kro} For any two permutations $\tau,\pi\in S_{2n}$, we have
\begin{equation}
\sum_{\sigma\in S_{2n}}
\delta_{[\sigma],[2^n]}\cdot\delta_{[\tau\sigma],[\pi]}=
\frac{(2n-1)!!}{\prod_jj^{\pi_j}\cdot \pi_j!}
\sum_{Y} \frac{\chi^Y([2^n])\chi^Y(\pi)\chi^Y(\tau)}
{\chi^Y([1^{2n}])}.
\end{equation}
\end{lemma}
\begin{proof}
In order to prove the lemma, we shall apply orthogonality and completeness
of the irreducible characters of $S_{2n}$, that irreducible
representations are indexed by Young diagrams $Y$ containing $2n$ squares,
and the fact that $\chi^Y(\sigma)=\chi^Y(\sigma^{-1})$, for any $\sigma\in S_{2n}$.
Fix some $\pi\in S_{2n}$ with
$\pi\in[1^{\pi_1} \cdots {2n}^{\pi_{2n}}]$.
According to the orthogonality relation
\begin{eqnarray*}
\sum_Y \chi^Y(\sigma_1)\chi^Y(\sigma_2) & = &
\frac{(2n)!}{\vert [\sigma_1]\vert}\cdot
\delta_{[\sigma_1],[\sigma_2]}
\end{eqnarray*}
of the second kind, we have
\begin{eqnarray*}
\sum_{\sigma\in S_{2n}}
\delta_{[\sigma],[2^n]}\cdot\delta_{[\tau\sigma],[\pi]} & = &
\sum_{\sigma\in S_{2n}}\left[\frac{\vert [2^n]\vert}{(2n)!}
\sum_Y \chi^Y(\sigma)\chi^Y([2^n])\right]
\cdot \left[ \frac{\vert [\pi]\vert}{(2n)!}
\sum_{Y'} \chi^{Y'}(\tau\sigma)\chi^{Y'}(\pi)\right] \\
&=& \frac{(2n-1)!!}{\prod_j j^{\pi_j}\cdot \pi_j!}
\sum_{Y,Y'} \chi^Y([2^n])\chi^{Y'}(\pi)
\left[\frac{1}{(2n)!}\sum_{\sigma\in S_{2n}}\chi^Y(\sigma)\chi^{Y'}
(\tau\sigma)\right].
\end{eqnarray*}
The variant
\begin{eqnarray*}
\frac{1}{(2n)!}\sum_{\sigma\in S_{2n}}\chi^Y(\sigma)\chi^{Y'}(\tau\sigma)=
\frac{\chi^Y(\tau)}{\chi^Y([1^{2n}])}\cdot \delta_{Y,Y'}
\end{eqnarray*}
of the orthogonality relation of the first kind gives
\begin{eqnarray*}
\sum_{\sigma\in S_{2n}}\delta_{[\sigma],[2^n]}\cdot\delta_{[\tau\sigma],[\pi]}
&=& \frac{(2n-1)!!}{\prod_j j^{\pi_j}\cdot \pi_j!}
\sum_{Y,Y'} \chi^Y([2^n])\chi^{Y'}(\pi)\left[
\frac{\chi^{Y'}(\tau)}{\chi^{Y'}([1^{2n}])}\cdot\delta_{Y,Y'}\right]\\
&=& \frac{(2n-1)!!}{\prod_j j^{\pi_j}\cdot \pi_j!}
\sum_{Y} \frac{\chi^Y([2^n])\chi^Y(\pi)\chi^Y(\tau)}{
\chi^Y([1^{2n}])}
\end{eqnarray*}
as required.
\end{proof}
\begin{figure}
\caption{\small The Young tableaux $Y_{p,q}$ and $Y_{p_1,p_2}^{q_1,q_2}$.}
\label{F:Young}
\end{figure}
In the course of our analysis, the two shapes of Young diagrams illustrated in Figure ~\ref{F:Young} are
of importance, where
$\bullet$ $p,q\ge 0$ with $p+q+1=2n$ determine the shape
$Y_{p,q}$ which is a $(p,q)$-hook, having a single row
of length $q+1\ge 1$ and $p$ rows of length one
with corresponding character $\chi^{p,q}=\chi^{Y_{p,q}}$,
$\bullet$ $p_1\ge p_2+1\ge 1$, $q_1\ge q_2+1\ge 1$ with
$p_1+q_1+p_2+q_2+2=2n$ determine the shape
$Y_{p_1,p_2}^{q_1,q_2}$ with
one row of length $q_1+1$, one row of length $q_2+2$,
$p_2$ rows of length two and $p_1-p_2-1$ rows of length
one.
\begin{lemma}\label{L:char}
Suppose $\tau_c=(1,\dots,c)\circ (c+1,\dots,2n)$.
Then we have
\begin{equation}\label{E:1L}
\sum_{c=1}^{2n-1} \chi^{Y}(\tau_c)= (-1)^p(q-p)\,\delta_{Y,Y_{p,q}}
\end{equation}
for any Young diagram $Y$.
\end{lemma}
\begin{proof}
\begin{figure}
\caption{\small Rim-hooks in a Young diagram $Y_{p,q}$.}
\label{F:Youngrim1}
\end{figure}
In order to prove eq.~(\ref{E:1L}), let us first assume $Y=Y_{p,q}$.
Since the only rim-hooks in $Y$ lie entirely in the first row or column as in Figure
\ref{F:Youngrim1}
the Murnaghan-Nakayama rule gives
\begin{eqnarray*}
\sum_{c=1}^{2n-1}\chi^{p,q}(\tau_c)
& = & \sum_{c=1}^{2n-1}\left(\chi^{p,q-c}((c+1,\dots,2n)) +
(-1)^{c-1}\, \chi^{p-c,q}((c+1,\dots,2n)) \right) \\
& = & \sum_{c=1}^{q}(-1)^p + \sum_{c=1}^{p} (-1)^{c-1}(-1)^{p-c} \\
& = & q(-1)^p + p(-1)^{p-1}\\
&=& (-1)^{p}(q-p).
\end{eqnarray*}
\begin{figure}
\caption{\small The case of $Y_{p_1,p_2}^{q_1,q_2}$.}
\label{F:Young4}
\end{figure}
Next, let us assume $Y\neq Y_{p,q}$. Since $\tau_c$ is a product of
two disjoint cycles, $\chi^Y(\tau_c)\neq 0$ implies that at most two
removals of rim-hooks exhaust $Y$ by the Murnaghan-Nakayama rule, so
$Y$ is necessarily of the form $Y_{p_1,p_2}^{q_1,q_2}$. Since the
cycles of $\tau_c$ are disjoint, we may remove them in any desired
order, and we choose to first remove a rim-hook of size $c$ and
second a rim-hook of size $2n-c$. There are exactly four scenarios
for such removals as illustrated in Figure \ref{F:Young4}, it is
worth mentioning that these scenarios can be distinguished by
whether the first rim hook contains no squares in the leftmost
column or the top row (a), contains squares in one but not both of
them (b or c), or contains squares in both (d). We accordingly
derive
\begin{eqnarray*}
\sum_{c=1}^{2n-1}\chi^{Y_{p_1,p_2}^{q_1,q_2}}(\tau_c) & = &
\sum_{c=1}^{2n-1} \left[(-1)^{p_2}\delta_{c,p_2+q_2+1}\, (-1)^{p_1}
\delta_{2n-c,p_1+q_1+1} +\right. \\
& & \qquad (-1)^{p_1-1} \delta_{c,(p_1+1)+q_2}\,
(-1)^{p_2} \delta_{2n-c,p_2+1+q_1} + \\
& & \qquad (-1)^{p_2+1} \delta_{c,p_2+(q_1+1)}\, (-1)^{p_1}
\delta_{2n-c,p_1+1+q_2}+\\
& & \qquad \left. (-1)^{p_1} \delta_{c,(p_1+1)+q_1}\, (-1)^{p_2}
\delta_{2n-c,p_2+1+q_2}\right] \\
& = & 0
\end{eqnarray*}
since the first/third and second/fourth terms pairwise cancel.
Thus, for any Young diagram $Y$ other than a $(p,q)$-hook
the term $\sum_{c}\chi^{Y_{p_1,p_2}^{q_1,q_2}}(\tau_c)$ is trivial,
and Lemma~\ref{L:char} follows.
\end{proof}
\begin{lemma}\label{L:Schur}
For any Young diagram $Y$ that contains $2n$ squares, we have
\begin{equation}\label{E:schur}
\frac{1}{(2n)!}\sum_{\pi\in S_{2n}}N^{\sum_i\pi_i}\chi^Y(\pi)=
s_{Y}(1,\dots,1),
\end{equation}
where $s_Y(x_1,\dots,x_N)$ denotes the Schur-polynomial over $N\ge 2n$
indeterminates.
\end{lemma}
\begin{proof}
Rewriting
\begin{equation*}
N^{\sum_i\pi_i}=\prod_{i}\left(\sum_{h=1}^N 1^{i}\right)^{\pi_i},
\end{equation*}
as a product of power sums $p_i(x_1,\dots,x_N)=\sum_{h=1}^N
x_h^{i}$, we identify eq.~(\ref{E:schur}) via the Frobenius Theorem
as a particular value of the Schur-polynomial $s_{Y}(x_1,\dots,x_N)$
of $Y$ over $N\ge 2n$ indeterminates:
\begin{equation}\label{E:Schur}
\frac{1}{(2n)!}\sum_{\pi\in S_{2n}} \prod_{i}
\left(\sum_{h=1}^N 1^{i}\right)^{\pi_i}
\, \chi^{Y}(\pi)=
\frac{1}{(2n)!}\sum_{\pi\in S_{2n}} \chi^{Y}(\pi)\,
\prod_{i} p_i(1,\dots,1)^{\pi_i}=
s_{Y}(1,\dots,1).
\end{equation}
\end{proof}
\section{The generating function}
In analogy to the polynomials $P(n,x)$ in eq.~(\ref{E:polynN}), we define
\begin{equation}
Q(n,x)=\sum_{\{g\mid 2g\le n\}} \mathbf{c}^{[2]}_g(n) \cdot x^{n-2g}.
\end{equation}
\begin{lemma}\label{L:Q-P}
The polynomial $Q(n,N)$ can be written as
\begin{equation}
Q(n,N)=U(n,N)-V(n,N)
\end{equation}
where
\begin{equation}\label{E:w}
-3-4Nz-2N^2z^2+2\sum_{n\geq1}\frac{U(n,N)}{(2n-1)!!}z^{n+2}
= (z+z^3)\frac{d}{dz}\left(\left(\frac{1+z}{1-z}\right)^N\right)-
3\left(\frac{1+z}{1-z}\right)^N
\end{equation}
and
\begin{equation}
V(n,N)=\sum_{1\le c\le n-1}P(c,N)P(n-c,N).
\end{equation}
\end{lemma}
\begin{proof}
A connected chord diagram of genus $g$ on two backbones with $n$ chords
is described, via Proposition \ref{P:isos}, by permutations
$\tau_c=(1,\dots,c)\circ (c+1,\dots,2n)$ and $\iota\in[2^n]$, and it
satisfies $2-2g-r=2-n$, where the number $r=n-2g$
of boundary components is given by
the number of cycles of $\tau_c\circ\iota$.
On the other hand, if the chord diagram corresponding to
$\tau_c$ and $\iota$ is disconnected, then not only
does $\tau_c$ decompose into disjoint cycles $\tau_1=(1,\ldots, c)$
and $\tau_2=(c+1,\ldots ,2n)$ but also
$\iota=\iota_1\circ\iota _2$ similarly
decomposes into disjoint permutations, and
the number of boundary components is given by
\begin{equation}
\sum_i(\tau_c\circ\iota)_i=\sum_s(\tau_1\circ \iota_1)_s+\sum_t(\tau_2\circ\iota_2)_t,
\end{equation}
where $(\pi)_i$ denotes the number of cycles of length $i$ in $\pi$,
$\pi\in S_{2n}$.
We proceed by writing $Q(n,N)=U(n,N)-V(n,N)$ as the difference of two terms,
the first being the contribution of all chord diagrams, irrespective of
being connected, and the second being the contribution of all
disconnected chord diagrams. Thus,
\begin{eqnarray*}
Q(n,N) & = &
\sum_{\{g \, \mid\, 2g\le n\}} \mathbf{c}^{[2]}_g(n) \, N^{n-2g} \\
& = & \underbrace{\sum_{1\le c\le 2n-1}
\sum_{\iota\in [2^n]\atop \tau_c=(1,\dots,c)(c+1,\dots,2n)}
N^{\sum_i(\tau_c\iota)_i}}_{U(n,N)} \\
& - & \underbrace{\sum_{1\le d\le n-1}
\left(\sum_{\iota_1\in [2^d]\atop \tau_1=(1,\dots,2d)}
N^{\sum_i(\tau_1\iota_1)_i}\right)
\left(\sum_{\iota_2\in [2^{n-d}]\atop \tau_2=(2d+1,\dots,2n)}
N^{\sum_i(\tau_2\iota_2)_i}\right)}_{V(n,N)}
\end{eqnarray*}
since the number of vertices in a chord diagram is necessarily even.
In view of
\begin{equation*}
P(d,N) = \sum_{\iota_1\in [2^{d}]\atop \tau_1=(1,\dots,2d)}
N^{\sum_i(\tau_1\iota_1)_i}\quad\text{\rm and}\quad
P(n-d,N) = \sum_{\iota_2\in [2^{n-d}]\atop \tau_2=(2d+1,\dots,2n)}
N^{\sum_i(\tau_2\iota_2)_i},
\end{equation*}
we conclude
\begin{eqnarray*}
V(n,N) & = & \sum_{1\le d\le n-1}P(d,N)P(n-d,N).
\end{eqnarray*}
Turning our attention now to $U(n,N)$, we have
\begin{eqnarray}\label{E:wq}
U(n,N)=
\sum_{c=1}^{2n-1}\sum_{\iota\in [2^n]\atop \tau_c=(1,\dots,c)(c+1,\dots,2n)}
N^{\sum_i(\tau_c\iota)_i} & = &
\sum_{[\pi]} N^{\sum_i\pi_i}\sum_{c=1}^{2n-1}
\sum_{\iota\in [2^n]\atop \tau_c\iota\in [\pi]} 1.
\end{eqnarray}
Expressing the right-hand side of eq.~(\ref{E:wq}) via Kronecker
delta-functions, we obtain a sum taken over all permutations
\begin{eqnarray*}
\sum_{[\pi]} N^{\sum_i\pi_i}\sum_{c=1}^{2n-1}
\sum_{\iota\in [2^n]\atop \tau_c\iota\in [\pi]} 1
&=& \sum_{[\pi]} N^{\sum_i\pi_i} \sum_{c=1}^{2n-1}\sum_{\sigma\in S_{2n}}
\delta_{[\sigma],[2^n]}\cdot\delta_{[\tau_c\sigma],[\pi]},
\end{eqnarray*}
and application of Lemma~\ref{L:kro} gives
\begin{eqnarray}\label{E:erni}
U(n,N) & = & (2n-1)!!\cdot
\sum_{c=1}^{2n-1}\sum_{Y} \frac{\chi^Y([2^n])\chi^Y(\tau_c)}{\chi^Y([1^{2n}])}
\frac{1}{(2n)!}\sum_{\pi\in S_{2n}}N^{\sum_i\pi_i}
\chi^Y(\pi).
\end{eqnarray}
Interchanging the order of summations and using Lemma~\ref{L:Schur}, we
may rewrite this as
\begin{eqnarray*}\label{E:erni-pq}
U(n,N) & = & (2n-1)!!
\sum_{Y} \frac{\chi^Y([2^n])}{\chi^Y([1^{2n}])}
\left(\sum_{c=1}^{2n-1}\chi^Y(\tau_c)\right)
s_Y(1,\ldots,1).
\end{eqnarray*}
Now, according to Lemma~\ref{L:char}, we have $\sum_{c}\chi^Y(\tau_c)=
(-1)^p(q-p)\,\delta_{Y,Y_{p,q}}$. This reduces the character sum
to the consideration of characters $\chi^{p,q}$ and Schur-polynomials
$s_{p,q}=s_{Y_{p,q}}$ associated to the irreducible representations $Y_{p,q}$.
The corresponding expressions computable
using the Murnaghan-Nakayama rule, for instance,
are given by
\begin{eqnarray}
\chi^{p,q}([1^{2n}]) & = & \binom{2n-1}{q},\\
\chi^{p,q}([2^n]) &= &
\begin{cases}
(-1)^{\frac{p}{2}}\binom{n-1}{\frac{p}{2}}; & \text{\rm for $p\equiv 0
\mod 2$,}\\
(-1)^{\frac{p+1}{2}}\binom{n-1}{\frac{p-1}{2}}; & \text{\rm for $p\equiv 1
\mod 2$,}
\end{cases}\\
s_{p,q}(1,\dots,1)&=&\binom{N+q}{2n}\,\binom{2n-1}{q}.
\end{eqnarray}
Consequently, we compute
\begin{eqnarray*}
\frac{U(n,N)}{(2n-1)!!} & =& \sum_{j=0}^{n-1}(-1)^j\binom{n-1}{j}\left[
(2n-4j-1)\binom{N+2n-2j-1}{2n}\right.\\
&&\qquad\qquad\qquad\qquad\left.+(2n-4j-3)\binom{N+2n-2j-2}{2n}\right] \\
& =& \sum_{j=0}^{n-1}(-1)^j \binom{n-1}{j}\frac{1}{2\pi i}
\oint \left[(2n-4j-1)\frac{(1+x)^{N+2n-2j-1}}{x^{N-2j}}\right.\\
&&\qquad\qquad\qquad\qquad\left.+
(2n-4j-3)\frac{(1+x)^{N+2n-2j-2}}{x^{N-2j-1}}\right]dx.
\end{eqnarray*}
Taking the summation inside the integral, we obtain
\begin{eqnarray*}
& =& \frac{1}{2\pi i}
\oint \frac{(1+x)^N}{x^N}
\left(\sum_{j=0}^{n-1}(-1)^j\binom{n-1}{j}(2n-4j-1)
x^{2j}(1+x)^{2n-2j-1}\right.\\
&&\qquad\qquad\qquad\quad+
\left.\sum_{j=0}^{n-1}(-1)^j\binom{n-1}{j}(2n-4j-3)
x^{2j+1}(1+x)^{2n-2j-2}\right)dx \\
& = & \frac{1}{2\pi i} \oint \frac{(1+x)^N}{x^N}\left( 1+2x\right)^{n-1}
\left((n-1)(1+2x)^2+n\right)dx
\end{eqnarray*}
using $1+2x=(1+x)^2-x^2$.
It remains to substitute $z=1/(1+2x)$ and compute
\begin{eqnarray*}
\frac{2U(n,N)}{(2n-1)!!}
& =& \frac{1}{2\pi i}
\oint\left(\frac{1+z}{1-z}\right)^N\left(\frac{n-1}{z^{n+3}}
+\frac{n}{z^{n+1}}\right)dz\\
& =& (n-1)[z^{n+2}]\left(\frac{1+z}{1-z}\right)^N
+n[z^n]\left(\frac{1+z}{1-z}\right)^N\\
& =&[z^{n+1}]\frac{d}{dz}\left(\left(\frac{1+z}{1-z}\right)^N\right)
+[z^{n-1}]\frac{d}{dz}\left(\left(\frac{1+z}{1-z}\right)^N\right)
-3[z^{n+2}]\left(\frac{1+z}{1-z}\right)^N
\end{eqnarray*}
so that
\begin{equation*}
-3-4Nz-2N^2z^2+2\sum_{n\geq1}\frac{U(n,N)}{(2n-1)!!}z^{n+2}
= (z+z^3)\frac{d}{dz}\left(\left(\frac{1+z}{1-z}\right)^N\right)-
3\left(\frac{1+z}{1-z}\right)^N
\end{equation*}
completing the proof.
\end{proof}
\begin{theorem}\label{T:main}
For any $g\ge 0$, the generating function $\mathbf{C}_g^{[2]}(z)$ is a rational
function with integer coefficients given by
\begin{eqnarray*}
\mathbf{C}_g^{[2]}(z) & = & \frac{P_g^{[2]}(z)}{(1-4z)^{3g+2}},
\end{eqnarray*}
where $P_g^{[2]}(z)$ is an integral polynomial of degree at most
$(3g+1)$, $P_g^{[2]}(1/4)> 0$ and $[z^h]P_g^{[2]}(z)=0$, for $0\leq h\leq 2g$.
Furthermore, we have
\begin{eqnarray}
P_g^{[2]}(z) & = & z^{-1}P_{g+1}(z)-\sum_{g_1=1}^{g}P_{g_1}(z)P_{g+1-g_1}(z), \\
\label{E:22}
\left[{z^{2g+1}}\right]P^{[2]}_g(z) & = & \mathbf{c}^{[2]}_g(2g+1) =
\mathbf{c}_{g+1}(2g+2) = \frac{(4g+4)!}{4^{g+1}(2g+3)!}.
\end{eqnarray}
and the coefficients of ${\bf C}_g^{[2]}(z)$ have the asymptotics
\begin{equation}
[z^n]{\bf C}_g^{[2]}(z)\sim \frac{P_g^{[2]}(\frac{1}{4})}{\Gamma(3g+2)}
n^{3g+1} 4^n.
\end{equation}
\end{theorem}
\begin{proof}
Taking the coefficient of $z^{n+2}$ in eq.~(\ref{E:w}), we find
\begin{eqnarray*}
\frac{U(n,N)}{(2n-1)!!} & = &
[z^{n+2}]\biggl ((z+z^3)\sum_{n\ge 0}(n+1)\frac{P(n,N)}{(2n-1)!!}z^n- 3
\sum_{n\ge 0}\frac{P(n,N)}{(2n-1)!!} z^{n+1} \biggr )\\
& = & (n+2)\frac{P(n+1,N)}{(2(n+1)-1)!!} + n \frac{P(n-1,N)}{(2(n-1)-1)!!} -
3\frac{P(n+1,N)}{(2(n+1)-1)!!},
\end{eqnarray*}
for any
$n\ge 1$, whence
\begin{eqnarray*}
(2n+1) ~ U(n,N) & = & (n-1)~ P(n+1,N) + n (2n+1)(2n-1)~P(n-1,N).
\end{eqnarray*}
Substituting $U(n,N)=\sum_{2g\le n}\mathbf{u}_g(n) N^{n-2g}$ and
$P(n,N)= \sum_{2g\le n}\mathbf{c}_g(n) N^{n+1-2g}$, we obtain
\begin{eqnarray*}
(2n+1) \sum_{2g\le n}\mathbf{u}_g(n) N^{n-2g} & = &
(n-1) \sum_{2g\le n+1}\mathbf{c}_g(n+1) N^{n+2-2g} \\
& + & n(2n+1)(2n-1) \sum_{2g\le n-1}\mathbf{c}_g(n-1) N^{n-2g},
\end{eqnarray*}
so
\begin{eqnarray*}
(2n+1) ~ \mathbf{u}_g(n) & = & (n-1) ~\mathbf{c}_{g+1}(n+1) + n
(2n+1)(2n-1)~ \mathbf{c}_g(n-1).
\end{eqnarray*}
Using $(n+2)\, \mathbf{c}_{g+1}(n+1)=2(2n+1)\,\mathbf{c}_{g+1}(n)+
(2n+1)n(2n-1)\,\mathbf{c}_{g}(n-1)$
from the recursion eq.~(\ref{E:recursion}), we have
\begin{eqnarray*}
\mathbf{u}_g(n) & = & \mathbf{c}_{g+1}(n+1)-2\,\mathbf{c}_{g+1}(n),
\end{eqnarray*}
or equivalently, setting $\mathbf{U}_g(z)=\sum_n\mathbf{u}_g(n)z^n$,
it follows that
\begin{eqnarray}\label{E:ww}
z \mathbf{U}_g(z) & = & (1-2z) \mathbf{C}_{g+1}(z).
\end{eqnarray}
We next consider the term $V(n,N)$ as a polynomial in $N$:
\begin{eqnarray}
V(n,N) & = & \sum_{1\le d\le n-1}
\left(\sum_{g_1}\mathbf{c}_{g_1}(d) N^{d+1-2g_1}\right)
\left(\sum_{g_2}\mathbf{c}_{g_2}(n-d)N^{(n-d)+1-2g_2}\right)\nonumber\\
\label{E:contri}
& = & \sum_{g\ge 0}\sum_{g_1=0}^g\sum_{1\le d\le n-1}
\mathbf{c}_{g_1}(d) \mathbf{c}_{g-g_1}(n-d)N^{n+2-2g},
\end{eqnarray}
where for $i=1,2$, $\mathbf{c}_{g_i}(a)=0$ if $2g_i>a$.
According to eq.~(\ref{E:contri}) for fixed genus $g$,
the contribution of pairs of chord diagrams, each having one backbone, of
genus $g_1$ and $g_2$ to $N^{n-2g}$ is given by
\begin{eqnarray}\label{E:www}
\sum_{g_1=0}^{g+1}
\sum_{1\le d\le n-1}\mathbf{c}_{g_1}(d)\, \mathbf{c}_{g+1-g_1}(n-d)
& = & [z^n]\sum_{g_1=0}^{g+1} \mathbf{C}^*_{g_1}(z)\mathbf{C}^*_{g+1-g_1}(z),
\end{eqnarray}
where
\begin{equation}
\mathbf{C}^*_{g_1}(z)=
\begin{cases}
{\bf C}_0(z)-1;& \text{\rm for $g_1=0$}, \\
\mathbf{C}_{g_1}(z); & \text{\rm otherwise.}
\end{cases}
\end{equation}
This necessary modification stems from the fact that $1\le d\le n-1$ implies that
the coefficient $\mathbf{c}_0(0)=1$ does not appear.
Using eqs.~(\ref{E:ww}) and~(\ref{E:www}), we obtain
\begin{eqnarray*}
\mathbf{C}^{[2]}_g(z) & = & \mathbf{U}_g(z)-
\sum_{g_1} \mathbf{C}^*_{g_1}(z)\mathbf{C}^*_{g+1-g_1}(z) \\
& = & \frac{1-2z}{z}\mathbf{C}_{g+1}(z)-
\sum_{g_1} \mathbf{C}^*_{g_1}(z)\mathbf{C}^*_{g+1-g_1}(z).
\end{eqnarray*}
Now, for $g\ge 1$, we have $\mathbf{C}^*_g(z)=\mathbf{C}_g(z)=
\frac{P_{g}(z)\sqrt{1-4z}}{(1-4z)^{3g}}$ so
\begin{eqnarray*}
\frac{1-2z}{z}\mathbf{C}_{g+1}(z)-2\mathbf{C}^*_{0}(z)\mathbf{C}_{g+1}(z)
&=& \left[\frac{1-2z}{z}-2\left(\frac{1-\sqrt{1-4z}}{2z}-1\right)\right]
\mathbf{C}_{g+1}(z)\\
&=& \frac{P_{g+1}(z)}{z(1-4z)^{3g+2}},
\end{eqnarray*}
and hence the two non-rational terms conveniently cancel.
We continue by computing
\begin{eqnarray*}
\mathbf{C}_{g_1}(z)\mathbf{C}_{g+1-g_1}(z) =
\frac{P_{g_1}(z)\sqrt{1-4z}}{(1-4z)^{3g_1}}
\frac{P_{g+1-g_1}(z)\sqrt{1-4z}}{(1-4z)^{3(g+1-g_1)}} =
\frac{P_{g_1}(z)P_{g+1-g_1}(z)}{(1-4z)^{3g+2}},~{\rm for}~g_1\ge 1,
\end{eqnarray*}
so all other terms in the sum are also rational expressions.
Thus,
\begin{eqnarray}\label{E:here}
\mathbf{C}_g^{[2]}(z) & = & \frac{z^{-1}P_{g+1}(z)}{(1-4z)^{3g+2}}-
\sum_{g_1=1}^{g}\frac{P_{g_1}(z)P_{g+1-g_1}(z)}{(1-4z)^{3g+2}},
\end{eqnarray}
and hence
\begin{equation}\label{E:pg2z}
P^{[2]}_g(z)={\bf C}_g^{[2]}(z)(1-4z)^{3g+2}=z^{-1}P_{g+1}(z)-\sum_{g_1=1}^{g}P_{g_1}(z)P_{g+1-g_1}(z)\end{equation}
as was asserted. Since the $P_{g}(z)$ are polynomials of degree at most $(3g-1)$,
it follows from eq.~(\ref{E:pg2z}) that the degree of $P_g^{[2]}(z)$ is at most
$3g+1$.
Moreover, it follows immediately from $P_g^{[2]}(z)={\bf C}_g^{[2]}(z) (1-4z)^{3g+2}$
that the polynomial $P_g^{[2]}(z)$ has integer coefficients.
We next show that $P^{[2]}_g(1/4)>0$.
According to Lemma~\ref{L:gamma},
$P_{g_1}(1/4)P_{g+1-g_1}(1/4)$ is given by
\begin{eqnarray*}
& &\frac {\Gamma \left( g_1-\frac{1}{6} \right)
\Gamma \left( g_1+\frac{1}{2}\right)
\Gamma\left( g_1+\frac{1}{6} \right)
\Gamma \left( g-g_1+\frac{5}{6} \right)
\Gamma \left( g-g_1+\frac{3}{2}\right)
\Gamma \left( g-g_1+\frac{7}{6} \right)}
{36\,{\pi }^{3}\,{4}^{g+1}\,{9}^{-(g+1)}\,
\Gamma \left( g_1+1 \right) \Gamma \left(g+1- g_1+1 \right) }\\
&\leq&\frac{{9}^{(g+1)}}{36\,{\pi }^{3}\,{4}^{g+1}}
\left(\Gamma \left( g_1-\frac{1}{6} \right)
\Gamma\left( g_1+\frac{1}{6}\right) \Gamma\left( g-g_1+\frac{5}{6}\right)
\Gamma\left( g-g_1+\frac{7}{6} \right)\right),
\end{eqnarray*}
where we use the inequality $\Gamma \left(g+1/2\right)<\Gamma \left(g+1\right)$, which
holds for $g\geq 1$ since $\Gamma$ is increasing on $[3/2,\infty)$. Furthermore
\begin{eqnarray*}
Z_{g_1}&=&\Gamma \left( g_1-\frac{1}{6} \right)
\Gamma\left( g_1+\frac{1}{6} \right)\Gamma\left(g-g_1+\frac{5}{6}\right)
\Gamma\left( g-g_1+\frac{7}{6} \right)\\
&=&\left(g_1-\frac{7}{6}\right)_{g_1-1}
\left(g_1-\frac{5}{6}\right)_{g_1}
\left(g-g_1-\frac{1}{6}\right)_{g-g_1}
\left(g-g_1+\frac{1}{6}\right)_{g+1-g_1}
\left(\Gamma\left(\frac{5}{6}\right)
\Gamma\left(\frac{1}{6}\right)\right)^2,
\end{eqnarray*}
where
$(x)_n=x(x-1)\cdots(x-n+1)$ denotes the Pochhammer symbol.
It follows that
$Z_{g_1}=Z_{g+1-g_1}$, for $1\leq g_1\leq g$,
and furthermore,
\begin{equation}
Z_{g_1}\leq Z_1=(g-7/6)_{g-1}(g-5/6)_{g}
\left(\Gamma(5/6)\Gamma(1/6)\right)^2/6.
\end{equation}
Thus,
\begin{equation}
\sum_{g_1=1}^gP_{g_1}(1/4)P_{g+1-g_1}(1/4)\leq
\frac{{9}^{(g+1)}}{36\,{\pi }^{3}\,{4}^{g+1}}\cdot\frac{g}{6}\cdot
(g-7/6)_{g-1}(g-5/6)_{g}\left(\Gamma(5/6)\Gamma(1/6)\right)^2.
\end{equation}
We proceed by estimating
\begin{eqnarray*}
4P_{g+1}(1/4) &=&
\frac {{9}^{g+1}}{6\,{\pi }^{3/2}\,{4}^{g}}\cdot\frac{\Gamma
\left( g+5/6 \right) \Gamma \left( g+3/2\right) \Gamma
\left( g+7/6 \right) }{\Gamma \left( g+2 \right) }\\
&\geq &\frac {{9}^{g+1}}{6\,{\pi }^{3/2}\,{4}^{g}}\cdot
\frac{\Gamma \left( g+5/6 \right) \Gamma \left( g+7/6 \right) }{ g+1 },\\
&= &\frac {{9}^{g+1}}{6\,{\pi }^{3/2}\,{4}^{g}}\cdot
\frac{\left(g-1/6\right)_{g}\left(g+1/6\right)_{g+1}
\Gamma(5/6)\Gamma(1/6)}{g+1}
\end{eqnarray*}
since $\Gamma\left(g+3/2\right)\geq\Gamma \left( g+1\right)$. Thus,
\begin{eqnarray*}
P^{[2]}_g(1/4)&=&4P_{g+1}(1/4)-
\sum_{g_1=1}^gP_{g_1}(1/4)P_{g+1-g_1}(1/4)\\
&\geq& \frac {{9}^{g+1}}{6\,{\pi }^{3/2}\,{4}^{g+1}}
\Gamma(5/6)\Gamma(1/6)(g-7/6)_{g-1}(g-5/6)_{g}\\
&&\quad\times
\left(\frac{4\left(g-1/6\right)
\left(g+1/6\right)}{ g+1 }-
\frac{g\,\Gamma(5/6)\Gamma(1/6)}{36\,{\pi }^{3/2}}\right).
\end{eqnarray*}
Finally,
\begin{equation}
\frac{4\left(g-1/6\right)
\left(g+1/6\right)}{ g+1 }-\frac{g\,\Gamma(5/6)\Gamma(1/6)}{36\,{\pi }^{3/2}}
=\frac{4g^2-1/9-g^2/(18\sqrt{\pi})-g/(18\sqrt{\pi})}{g+1}>0,
\end{equation}
for $g\geq 1$, so we indeed have $P^{[2]}_g(1/4)>0$ as claimed.
To see that $[z^h]P_g^{[2]}(z)=0$, for any $0\leq h\leq 2g$, it follows from eq.~(\ref{E:here}) that
\begin{equation}
[z^h]P_g^{[2]}(z)= [z^{h+1}]P_{g+1}(z)-
[z^h]\sum_{g_1=1}^gP_{g_1}(z)P_{g+1-g_1}(z).
\end{equation}
Since $[z^h]P_{g_1}=0$, for $h<2g_1$ or $h>3g_1-1$,
we conclude from
$[z^h]\sum_{g_1=1}^gP_{g_1}(z)P_{g+1-g_1}(z)=\sum_{g_1=1}^g
\sum_{i=0}^h[z^i]P_{g_1}(z)[z^{h-i}]P_{g+1-g_1}(z)$ that
$[z^i]P_{g_1}(z)[z^{h-i}]P_{g+1-g_1}(z)\neq 0$ implies
$i\geq 2g_1$ and $h-i\geq 2(g+1-g_1)$.
Thus,
\begin{equation}\label{E:-1}
[z^h]\sum_{g_1=1}^gP_{g_1}(z)P_{g+1-g_1}(z)=0,
\end{equation}
for $0\leq h\leq 2g+1$, and consequently,
\begin{equation}\label{E:0}
[z^h]P_g^{[2]}(z)=[z^{h+1}]P_{g+1}(z)=0,
\end{equation}
for $0\leq h\leq 2g$, as required.
It remains only to compute the coefficient of $z^{2g+1}$ in $P_g^{[2]}(z)$.
To this end, we have
\begin{equation}
[z^{2g+1}]P_g^{[2]}(z)=[z^{2g+2}]P_{g+1}(z)-[z^{2g+1}]
\sum_{g_1=1}^gP_{g_1}(z)P_{g+1-g_1}(z)=[z^{2g+2}]P_{g+1}(z)
\end{equation}
in light of eq.~(\ref{E:-1}). Since $[z^h]P_g^{[2]}(z)=0$, for any
$0\leq h\leq 2g$, we conclude from $P_g^{[2]}(z)={\bf C}_g^{[2]}(z)
(1-4z)^{3g+2}$ that
$[z^{2g+1}]P_g^{[2]}(z)=\mathbf{c}^{[2]}_g(2g+1)$, and hence using
eq.~(\ref{E:values})
\begin{equation}
\mathbf{c}^{[2]}_g(2g+1) = [z^{2g+1}]P_g^{[2]}(z)=
[z^{2g+2}]P_{g+1}(z)=\mathbf{c}_{g+1}(2g+2)=\frac{(4g+4)!}{4^{g+1}(2g+3)!}.
\end{equation}
\end{proof}
\begin{corollary}\label{C:formel}
We have the explicit expressions
\begin{eqnarray*}
\mathbf{c}^{[2]}_0(n) & =& n\, 4^{n-1},\\
\mathbf{c}^{[2]}_1(n) & =& \frac{1}{12}\left( 13\,n+3 \right)
n\left( n-1 \right)\left( n-2 \right)\,{4}^{n-3},\\
\mathbf{c}^{[2]}_2(n) & =& {\frac {1}{180}}\,\left(
445\,{n}^{2}-401\,n-210 \right) n
\left(n-1 \right)\left(n-2 \right)\left( n-3 \right)
\left(n-4 \right)\, {4}^{n-6} .
\end{eqnarray*}
\end{corollary}
\begin{proof}
Using the expression for $P_1(z)$ in the Introduction, Theorem~\ref{T:main}
gives
\begin{equation}
\mathbf{C}^{[2]}_0(z)=\frac{z^{-1}P_1(z)}{(1-4z)^2}=\frac{z}{(1-4z)^2},
\end{equation}
which immediately implies
\begin{equation}
\mathbf{c}^{[2]}_0(n)=[z^n]\mathbf{C}^{[2]}_0(z)
=[z^{n-1}]\left(\frac{1}{1-4z}\right)^2
=\sum_{i=0}^{n-1}4^{i}4^{n-1-i}=n4^{n-1}.
\end{equation}
In order to derive the expression for $\mathbf{c}^{[2]}_1(n)$, we
use both of the expressions $P_1(z)=z^2$ and $P_2(z)=21z^5+21z^4$.
Theorem~\ref{T:main}
implies
\begin{equation}
\mathbf{C}^{[2]}_1(z)=\frac{z^{-1}P_2(z)-P_1(z)^2}{(1-4z)^5}
=\frac{(20z+21)z^3}{(1-4z)^5},
\end{equation}
and differentiation gives
\begin{equation}\label{E:ode1}
(52z^3-13z^2)\frac{{d}^2\mathbf{C}^{[2]}_1(z)}{{d}z^2}+
(168z^2+36z)\frac{{d}\mathbf{C}^{[2]}_1(z)}{{d}z}+(64z-30)\mathbf{C}^{[2]}_1(z)=0,
\end{equation}
where $\frac{{d}^3\mathbf{C}^{[2]}_1(z)}{{d} z^3}|_{z=0}=126$.
Straightforward calculation shows that eq.~(\ref{E:ode1}) implies
\begin{equation}
\mathbf{c}^{[2]}_1(n+1)={52n^2+116n+64\over 13n^2-23n-6}
\mathbf{c}^{[2]}_1(n),
\end{equation}
where $\mathbf{c}^{[2]}_1(i)=0$, for $0\leq i\leq 2$ and
$\mathbf{c}^{[2]}_1(3)=21$. It remains to observe that
$\mathbf{c}^{[2]}_1(n)=\frac{1}{12} \left( 13\,n+3 \right) n\left(
n-1 \right)\left( n-2 \right)\,{4}^{n-3}$ satisfies this recursion
together with its initial condition.
To compute $\mathbf{c}^{[2]}_2(n)$, we likewise employ
$P_3(z)=11z^6
\left( 158\,{z}^{2}+558\,z+135 \right)$ and Theorem~\ref{T:main} to conclude
\begin{equation}
\mathbf{C}^{[2]}_2(z)=\frac{z^{-1}P_3(z)-2P_1(z)P_2(z)}{(1-4z)^8}
=\frac{(1696z^2+6096z+1485)z^5}{(1-4z)^8}.
\end{equation}
This yields the ODE
\begin{eqnarray*}
&&(1780z^4-445z^3)\frac{{d}^3\mathbf{C}^{[2]}_2(z)}{{d}z^3}+
(9076z^3+2181z^2)\frac{{d}^2\mathbf{C}^{[2]}_2(z)}{{d} z^2}\\
&&+(6808z^2-4020z)\frac{{d}\mathbf{C}^{[2]}_2(z)}{{d} z}-(664z-3180)
\mathbf{C}^{[2]}_2(z)=0,
\end{eqnarray*}
where $\frac{{d}^5\mathbf{C}^{[2]}_2(z)}{{d} z^5}|_{z=0}=178200$.
Thus,
\begin{equation}
\mathbf{c}^{[2]}_2(n+1)={(1780n^3+3736n^2+1292n-664)
\over(445n^3-2181n^2+1394n+840)}\mathbf{c}^{[2]}_2(n),
\end{equation}
where $\mathbf{c}^{[2]}_2(i)=0$, for $0\leq i\leq 4$, with
$\mathbf{c}^{[2]}_2(5)=1485$, and we conclude as before
that the asserted formula holds.
\end{proof}
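The formula $\mathbf{c}^{[2]}_0(n)=n\,4^{n-1}$ can also be verified directly for small $n$ using the permutation model of Proposition~\ref{P:isos}. The following Python sketch (an independent numerical check, not part of the argument above) enumerates all pairs $(c,\iota)$ with $\iota$ a fixed-point-free involution, keeps the connected pairs with $r=n$ boundary components, and compares the count with $n\,4^{n-1}$ for $n\le 3$:
\begin{verbatim}
def pairings(elems):
    """All perfect matchings of the tuple elems."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i in range(len(rest)):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + tail

def num_cycles(p):
    seen, count = set(), 0
    for s in p:
        if s not in seen:
            count += 1
            while s not in seen:
                seen.add(s)
                s = p[s]
    return count

def connected(tau, iota):
    """Transitivity of the group generated by tau and iota on the half-edges."""
    start = next(iter(tau))
    comp, stack = {start}, [start]
    while stack:
        x = stack.pop()
        for y in (tau[x], iota[x]):
            if y not in comp:
                comp.add(y)
                stack.append(y)
    return len(comp) == len(tau)

def count_genus_zero(n):
    """Connected two-backbone diagrams with n chords and genus 0 (i.e. r = n)."""
    H = tuple(range(1, 2 * n + 1))
    total = 0
    for c in range(1, 2 * n):       # tau_c = (1,...,c)(c+1,...,2n)
        tau = {i: (i % c) + 1 if i <= c else c + ((i - c) % (2 * n - c)) + 1
               for i in H}
        for match in pairings(H):
            iota = {a: b for x, y in match for a, b in ((x, y), (y, x))}
            if connected(tau, iota) and num_cycles({i: tau[iota[i]] for i in H}) == n:
                total += 1
    return total

for n in (1, 2, 3):
    assert count_genus_zero(n) == n * 4 ** (n - 1)
\end{verbatim}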
\end{document}
\begin{document}
\title{Beyond Adiabatic Elimination: Systematic Expansions}
\author{I.~L. Egusquiza}
\affiliation{Department of Theoretical Physics and History of Science, UPV-EHU, 48080 Bilbao, Spain}
\begin{abstract}
We restate the adiabatic elimination approximation as the first term in a singular perturbation expansion. We use the invariant manifold formalism for singular perturbations in dynamical systems to identify systematic improvements on adiabatic elimination, connecting with well established quantum mechanical perturbation methods. We prove convergence of the expansions when energy scales are well separated. We state and solve the problem of hermiticity of improved effective hamiltonians.
\end{abstract}
\pacs{03.65.-w, 31.15.-p, 32.80.Rm}
\maketitle
\section{Introduction}
Adiabatic elimination is a standard tool for the quantum optician \cite{walls2008quantum,Shore:2010uq,Yoo1985239,Steck:2013kx}. It is applied both to rate equations and amplitude equations. In its textbook presentation, the argument is that, because of a wide separation of scales, the population or amplitude of a ``fast'' state will not change appreciably from the point of view of a ``slow'' state, if the initial state is mostly slow. The approximation then consists of setting the corresponding derivative to zero, solving for the fast variable, and introducing this solution into the rest of the equations, thus obtaining an effective dynamics.
Given its widespread use, it is natural that other presentations of the approximation appear. Of particular relevance is the presentation achieved as the \emph{static} approximation in Laplace space, whereby an unknown energy term is discarded in a resolvent or Green function corresponding only to the fast sector. This also provides us with a suggestion for improving on the adiabatic approximation, as has been pointed out by Brion et al. \cite{1751-8121-40-5-011}, for instance.
In fact, the idea of separation of scales leading to approximations and effective dynamics is to be found in all areas of physics. In atomic and molecular physics one finds the Born--Oppenheimer approximation. In nuclear physics improvements on the shell model have led to effective interactions and operators to account for phenomena such as electric quadrupole moments.
Yet again, effective lagrangians are a staple in nonperturbative analyses of field theories (see for instance chapter 12 of \cite{weinberg1996quantum}). One approach to understanding these effective lagrangians consists of carrying out the functional integral for the irrelevant degrees of freedom, thus producing a (generally non-local) effective lagrangian for the relevant degrees of freedom. Naturally enough, quite some skill is required to identify the proper degrees of freedom that need integrating out, and a number of techniques have been developed to this end. Additionally, one normally needs to perform some approximations. One such combination of integrating out and approximation is again given by the \emph{static approximation}, which is the first term of an expansion of a non-local propagator around a particular pole.
This integration of irrelevant degrees of freedom and further approximation pertains to the use of field theory both for high energy and for condensed matter physics, differing in the symmetries and scales to be stressed.
The generality of this concept of separation of scales means that essentially the same technique has often been developed independently in different contexts. Although we most definitely do not attempt a survey of each method in all areas, we do attempt to provide some pointers to the multiple sources of the particular perturbation procedure with which adiabatic elimination is affiliated.
Indeed, adiabatic elimination in its textbook presentation can be understood as the first term in a systematic perturbative expansion, as we will show. Even more, we will show that it is a specific presentation of a perturbation theory that is widespread in physics. This specificity, due to wide energy separations, will furthermore allow us to make a statement on convergence.
In this manner we will provide a response to the questions posed recently in, for example, \cite{1751-8121-40-5-011} and \cite{Paulisch:2012fk}: here the somewhat handwaving argument of standard adiabatic elimination is made precise, by identifying a small parameter that controls the precision of the approximation; adiabatic elimination forms part of a set of well-defined, consistent and convergent systematic approximations; the issues with the normalisation of the wave function and, in particular, with the computation of the population of the eliminated states are settled for each approximation; and, in an appendix, we will detail the dependence on the interaction picture used to set up adiabatic elimination.
To do this we will first consider directly the well-known example of the \(\Lambda\) system, in which we will identify the relevant small parameter. We will proceed by recognising the system as a singular perturbation problem, which will be tackled with the invariant manifold formalism, the first term of the corresponding expansion being adiabatic elimination itself. We will recognise that in quantum mechanical systems this corresponds to linear embedding, and rewrite the method accordingly.
In the next section we extend the findings for the \(\Lambda\) system to a general setting, and connect the results with the similarity formalism for perturbation theory that has been frequently rediscovered in different areas of physics. The third section proposes two systematic approximation schemes, and proves the existence of a solution. These schemes are illustrated in the next section by applying them again to the \(\Lambda\) system.
Section \ref{sec:hermapprox} is devoted to the issue of normalisation and hermiticity. It also connects again with the similarity formalism and effective diagonalisation. We end by proposing some conclusions and ideas for future work.
\section{The \(\Lambda\) System}
In order to compare results, we shall use the same notation as \cite{1751-8121-40-5-011}. Thus, the dynamical system of interest is
\begin{eqnarray}\label{eq:brionlambda}
i\dot\alpha&=& - \frac{ \delta}{2} \alpha+ \frac{ \Omega^*_a}{2} \gamma\nonumber\,,\\
i\dot \beta&=& \frac{ \delta}{2} \beta+ \frac{ \Omega^*_b}{2} \gamma\,,\\
i\dot \gamma&=& \frac{ \Omega_a}{2} \alpha +\frac{ \Omega_b}{2} \beta+ \Delta \gamma\,.\nonumber
\end{eqnarray}
We are interested in the regime in which \(\Delta\gg \delta, \Omega_i\). The adiabatic elimination approximation consists of the argument that the \(\gamma\) component of the wave function will be approximately constant, and computed by solving for \(\gamma\) in the last equation after substituting \( \dot \gamma\to0 \).
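For concreteness, the following sketch (Python with NumPy/SciPy; the numerical values are merely illustrative and are not those used later in the text) integrates the full three-level system and its adiabatically eliminated two-level counterpart side by side:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameters with Delta much larger than delta, Omega_a, Omega_b
delta, Oa, Ob, Dval = 0.1, 0.4, 0.3, 10.0

H3 = np.array([[-delta/2, 0, np.conj(Oa)/2],
               [0, delta/2, np.conj(Ob)/2],
               [Oa/2, Ob/2, Dval]], dtype=complex)

# adiabatic elimination: gamma ~ -(Oa*alpha + Ob*beta)/(2*Dval)
Omega = 0.5*np.array([[Oa, Ob]], dtype=complex)       # 1x2 coupling block
omega = np.diag([-delta/2, delta/2]).astype(complex)  # slow block
H2 = omega - Omega.conj().T @ Omega / Dval            # effective 2x2 hamiltonian

def schrodinger(H):
    return lambda t, psi: -1j*(H @ psi)

t = np.linspace(0, 50, 500)
s3 = solve_ivp(schrodinger(H3), (0, 50), np.array([1, 0, 0], dtype=complex),
               t_eval=t, rtol=1e-9, atol=1e-12)
s2 = solve_ivp(schrodinger(H2), (0, 50), np.array([1, 0], dtype=complex),
               t_eval=t, rtol=1e-9, atol=1e-12)

# largest deviation of the ground-state population |alpha|^2
print(np.max(np.abs(np.abs(s3.y[0])**2 - np.abs(s2.y[0])**2)))
\end{verbatim}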
We shall first phrase the problem as a singular perturbation one, which will allow us to better understand the origin of adiabatic elimination. In order to do this, let us introduce the variable \( \tau= \delta t \), and define
\(
\epsilon= \delta/ \Delta
\). The system is thus written as
\begin{eqnarray}
i\partial_\tau\alpha&=&- \frac{1}{2} \alpha+ \frac{ \Omega^*_a}{2 \delta} \gamma\nonumber\,,\\
i\partial_\tau\beta&=&\frac{1}{2} \beta+ \frac{ \Omega^*_b}{2 \delta} \gamma\,,\\
i \epsilon \partial_\tau\gamma&=& \gamma+ \frac{ \epsilon}{2}\left( \frac{ \Omega_a}{ \delta} \alpha + \frac{\Omega_b}{ \delta} \beta\right)\,.\nonumber
\end{eqnarray}
This rewriting gives a direct justification for adiabatic elimination, by means of the \(\epsilon\) factor in front of \(\dot\gamma\). We can, of course, go further.
For small \(\epsilon\) (and finite \( \Omega_i/ \delta \) for \( i= a,b \)), this is a singular perturbation problem: the problem, as stated, is a system with three differential equations, while for \(\epsilon=0\) it is a system with just two differential equations. Many methods have been developed for this kind of system \cite{kevorkian2011multiple,bender1999advanced}: averaging, multiple scales, dynamical renormalization group \cite{PhysRevLett.73.1311,PhysRevE.54.376} (for some applications of the dynamical renormalization group to quantum mechanics and quantum optics, see \cite{PhysRevA.57.1586} and \cite{PhysRevA.56.1548}). For our purposes we shall select the invariant manifold approach, which will be seen as a direct generalization of the standard adiabatic elimination process.
In the invariant manifold approach to singular perturbations, one determines among all possible manifolds invariant under the dynamics those that are perturbative. In our case, this is achieved by writing a submanifold of the three dimensional complex manifold in its explicit form \( \gamma=h( \alpha, \beta) \), and demanding that it be invariant under the dynamics. Denoting derivatives with respect to the variables \(\alpha\) and \(\beta\) by subindices, the invariance condition reads
\begin{multline}\label{eq:invariantlambda}
h+ \frac{ \epsilon}{2 \delta}\left( \Omega_a \alpha + \Omega_b \beta\right) =\\
\frac{ \epsilon}{2}\left[ \beta h_\beta- \alpha h_\alpha + \frac{h}{ \delta}\left( \Omega_a^* h_\alpha + \Omega_b^* h_\beta\right)\right]\,.
\end{multline}
Once the perturbation solution \( h \) has been determined, the effective evolution is
\begin{eqnarray}
i\dot\alpha&=& - \frac{ \delta}{2} \alpha+ \frac{ \Omega^*_a}{2} h( \alpha, \beta)\nonumber\,,\\
i\dot \beta&=& \frac{ \delta}{2} \beta+ \frac{ \Omega^*_b}{2} h( \alpha, \beta)\,.
\end{eqnarray}
The advantage of this method is that once a perturbative approximation has been determined for \( h \), the effective evolution will not present secular terms. The disadvantage is that the effective evolution has to be recomputed for different approximations.
In the case at hand, the perturbative solution of eqn. (\ref{eq:invariantlambda}) has the form \( h=\sum_{k=1}^\infty \epsilon^k h_k \). Thus
\begin{eqnarray}
h_1( \alpha, \beta)&=& - \frac{1}{2 \delta}\left( \Omega_a \alpha+ \Omega_b \beta\right)\nonumber\,,\\
h_2( \alpha, \beta)&=& \frac{1}{4 \delta}\left( \Omega_a \alpha- \Omega_b \beta\right)\nonumber\,,
\end{eqnarray}
and, for \( k\geq2 \),
\[
h_{k+1}= \frac{1}{2}\left( \beta\partial_\beta- \alpha\partial_\alpha\right)h_k + \frac{1}{2 \delta}\sum_{l=1}^{k-1} h_{k-l}\left( \Omega_a^*\partial_\alpha+ \Omega_b^*\partial_\beta\right) h_l\,.
\]
The first term in the expansion, \( h_1 \), corresponds exactly to what is termed adiabatic elimination, thus making clear in which way the adiabatic elimination approximation is the first term in a (singular) perturbative expansion, as desired from the outset.
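The recurrence is straightforward to iterate symbolically. A minimal sketch (SymPy; the conjugated couplings are represented by the independent stand-in symbols \texttt{Omega\_a\_c} and \texttt{Omega\_b\_c}) reproduces \( h_1 \) and \( h_2 \) and generates higher orders:
\begin{verbatim}
import sympy as sp

alpha, beta, delta = sp.symbols('alpha beta delta')
Oa, Ob = sp.symbols('Omega_a Omega_b')
Oac, Obc = sp.symbols('Omega_a_c Omega_b_c')   # stand-ins for complex conjugates

h = [None, -(Oa*alpha + Ob*beta)/(2*delta)]    # h[1]: adiabatic elimination

def next_order(h):
    k = len(h) - 1
    term = sp.Rational(1, 2)*(beta*sp.diff(h[k], beta) - alpha*sp.diff(h[k], alpha))
    for l in range(1, k):
        term += h[k-l]*(Oac*sp.diff(h[l], alpha) + Obc*sp.diff(h[l], beta))/(2*delta)
    return sp.expand(term)

for _ in range(3):                             # generate h[2], h[3], h[4]
    h.append(next_order(h))

print(h[2])   # (Omega_a*alpha - Omega_b*beta)/(4*delta), i.e. h_2 of the text
print(h[3])
\end{verbatim}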
The invariant manifold approach is especially well suited to nonlinear dynamical systems, and it can be simplified in the case at hand:
even though the embedding equation (\ref{eq:invariantlambda}) is nonlinear, the perturbative solution is linear (one way of seeing this is that the differential system is the characteristic system for the embedding partial differential equation). Therefore, the effective evolution on the invariant manifold is also linear; it is, however, not unitary. This comes about because the conserved quantity is actually the full norm \( | \alpha|^2+ | \beta|^2+ | \gamma|^2 \), and not just \( | \alpha|^2+ | \beta|^2\).
Since the embedding is linear, we can rephrase it as
\[ \gamma= \left(b_1,b_2\right) \begin{pmatrix}
\alpha\\ \beta
\end{pmatrix}= B \begin{pmatrix}
\alpha\\ \beta
\end{pmatrix}\,.\]
\( B \) is an operator from the two dimensional space spanned by \( \left(1,0,0\right)^{\dag} \) and \( \left(0,1,0\right)^{\dag} \) to the one dimensional space spanned by \( \left(0,0,1\right)^{\dag} \). The invariance condition reads now
\begin{equation}\label{eq:invariblambda}
-\frac{ \delta}{2 \Delta} B \sigma_3+ \frac{1}{ \Delta} B \Omega^{\dag} B= B + \frac{1}{ \Delta} \Omega\,,
\end{equation}
where
\[ \Omega = \frac{1}{2}\left( \Omega_a, \Omega_b\right)\,.\]
Given any solution \( B \) of the invariance condition (\ref{eq:invariblambda}), a non hermitian effective hamiltonian is computed as
\[h_{ \mathrm{eff}}= - \frac{ \delta}{2} \sigma_3+ \Omega^{\dag} B\,.\]
\section{General Formal Theory}
\subsection{Direct approach}
The presentation above inspires a more general, admittedly formal, theory. Consider a Hilbert space \( \mathcal{H} \) on which a hamiltonian acts, with clearly separated scales present (in a sense to be made clear later). Assume therefore that there is a subspace \( \mathcal{H}_P \) corresponding to the \emph{slow} variables (also called the model or valence space in nuclear physics, or the active space in atomic and molecular physics). The complementary space, \( \mathcal{H}_Q= \mathcal{H}_P^\perp \), with \( Q=1-P \), is used to describe the \emph{fast} variables. The hamiltonian can be written in block form,
\begin{equation}
H= \begin{pmatrix}
PHP&PHQ\\ QHP& QHQ
\end{pmatrix}= \hbar \begin{pmatrix}
\omega &\Omega^{\dag}\\
\Omega& \Delta
\end{pmatrix}\,,
\end{equation}
which corresponds to the direct sum structure \( \mathcal{H}= \mathcal{H}_P \oplus \mathcal{H}_Q \) as
\[ \mathcal{H} \ni\psi= \begin{pmatrix}
P\psi\\ Q\psi
\end{pmatrix}= \begin{pmatrix}
\alpha \\ \gamma
\end{pmatrix}\,.\]
The embedding condition means the identification of manifolds invariant under the flow determined by the Hamiltonian,
\[i \partial_t \begin{pmatrix}
\alpha\\ \gamma
\end{pmatrix}= \begin{pmatrix}
\omega &\Omega^{\dag}\\
\Omega& \Delta
\end{pmatrix}\begin{pmatrix}
\alpha\\ \gamma
\end{pmatrix}\,,\]
with the specification that the invariant manifold is isomorphic (unitarily equivalent) to \( \mathcal{H}_P \) (for finite dimensionality, of the same dimensionality). This can be achieved in an explicit form by means of operators \( B: \mathcal{H}_P\to \mathcal{H}_Q \), which obey the equation
\begin{equation}\label{eq:bembedding}
\Omega+ \Delta B= B \omega + B \Omega^{\dag} B\,.
\end{equation}
To prove the need for this condition, substitute \( \gamma = B \alpha \) in the evolution equation, and ask for consistency.
The effective evolution for \(\alpha\) is readily seen to be \( i\dot\alpha=\left( \omega+ \Omega^{\dag}B\right) \alpha \), thus suggesting the effective hamiltonian
\( h_{ \mathrm{eff}}= \omega+ \Omega^{\dag}B \).
This is in fact the equation for what has been called in the literature the reduced Bloch wave operator or the \emph{model} or \emph{wave} operator from nuclear physics \cite{PhysRev.97.1366,Ellis:1977kx} (see also section 2 of \cite{0305-4470-36-20-201} and appendix \ref{app:blochwave}). We have obtained the equation directly from reduction by partitioning of Schr\"odinger's time dependent equation, while the reduced wave operator is normally obtained from the eigenvalue equation by applying the same partitioning, following the projector operator formalism of Feshbach \cite{Feshbach1958357,Feshbach1962287} (notice the reprinting of the latter as \cite{Feshbach2000519}). The advantage of our approach is that it makes the singular perturbation aspect of the equations visible in the adiabatic elimination case.
An important aspect here is that in principle the embedding algebraic equation (\ref{eq:bembedding}) has solutions that give the \emph{full} spectrum of the total hamiltonian. For instance, if the slow and fast variables were decoupled, with no degeneracy, the trivial solution \( B=0 \) would be unique. As a consequence of this equation involving all eigenspaces of the full hamiltonian, it can lead in some circumstances to problems in its perturbative solution, related to the existence of so-called intruder states, for instance.
\subsection{Similarity formalism}\label{ssec:similarity}
Alternatively, it is also related to the similarity transformation formalism, as follows. As is known, similarity transformations are isospectral. We thus look for invertible operators \( X \) such that
\begin{equation}\label{eq:decoupling}
Q X^{-1} H X P=0\,,
\end{equation}
and some additional conditions are also fulfilled.
Given such an operator \( X \), the effective hamiltonian in the slow or model space would be given by
\[H_X^P= P X^{-1} H X P\,.\]
Notice that \( H_X^P \) need not be hermitian; it is indeed hermitian if \( X \) is unitary. A bonus of this approach is that effective operators \( A_{ \mathrm{eff}} \) acting on \( \mathcal{H}_P \) are constructed for each \( A \) acting on the full Hilbert space \( \mathcal{H} \) as
\[ A_{ \mathrm{eff}}= P X^{-1} A X P\,.\]
This similarity transformation construction, without further restriction, would not be very informative (for instance, it need not reflect an existing hierarchy of scales), and thus a number of additional conditions have been developed in the literature \cite{Suzuki01071982,Suzuki01081983} to best reflect the underlying physics. When Van Vleck first introduced this formalism in 1929 (section 4 of \cite{Vleck:1929fk}) he demanded unitarity of \( X \). This approach was reintroduced in relativistic quantum mechanics by Foldy and Wouthuysen
\cite{PhysRev.78.29}, and in condensed matter physics some time later by Schrieffer and Wolff \cite{PhysRev.149.491} (for a review, see \cite{Bravyi20112793}).
If unitarity is not demanded, an immediate proposal is
\[X_B= \exp\left(B\right)\,.\]
Here we have a slight overload of notation: we have used \( B \) as an operator acting on \( \mathcal{H} \), taking into account that \( B=QBP=QB=BP \), which in turn implies
\[X_B= 1+B\]
(since \( B^2=0 \))
and
\[X_B^{-1}= 1-B\,.\]
Inserting these in the decoupling condition (\ref{eq:decoupling}), we have
\begin{multline}
Q\left(1-B\right)H\left(1+B\right)P=
\left(Q-B\right)H\left(P+B\right)=\\
QHP-BPHP+QHQB-BPHQB=\\
\hbar\left[ \Omega - B \omega + \Delta B - B \Omega^{\dag} B\right]\,,
\end{multline}
thus proving that with this choice of \( X \) the decoupling condition is precisely the embedding equation.
With this proposal, for a solution of the embedding equation we have an effective hamiltonian \( h_{ \mathrm{eff}}= \omega+ \Omega^{\dag}B \), which is generically not hermitian. This effective hamiltonian has been called the Bloch hamiltonian in the literature \cite{0305-4470-36-20-201}.
\section{Formal Adiabatic Elimination Expansion and Convergence}
As stated above, the embedding equation is by itself very general, and can have multiple solutions. We want to study the specific case in which, physically, there is a wide separation of scales. For the \(\Lambda\) system this pertained to the condition \( \Delta\gg \delta, \Omega_a, \Omega_b \). In general, we will achieve separation of scales if our choice of subspaces (equivalently of the \( P \) projector) is such that a) \(\Delta\) is invertible as an operator on \( \mathcal{H}_Q \), b)
\[ \epsilon=\| \Delta^{-1}\| \| \omega\|\ll1\,,\]
and c)
\[ \epsilon'= \| \Delta^{-1}\| \| \Omega\|\ll1\,,\]
for operator norms.
Define a new time variable \( \tau= \| \omega\| t \). The equations of motion are now
\begin{eqnarray}
i \partial_\tau \alpha&=& \frac{ \omega}{\| \omega\|} \alpha+ \frac{ \Omega^{\dag}}{\| \omega\|} \gamma\,,\nonumber\\
i \epsilon \partial_\tau \gamma&=& \epsilon \frac{ \Omega}{\| \omega\|} \alpha+ \| \Delta^{-1}\| \Delta \gamma\,,
\end{eqnarray}
in which the singular perturbation character is clearly visible. This suggests treating the embedding equation as determining either a perturbative expansion or a recurrence relation.
If indeed \(\Delta\) is invertible as an operator on \( \mathcal{H}_Q \), the embedding condition (\ref{eq:bembedding}) can be rewritten as
\[B=T(B)\,,\]
where \( T \) maps \( \mathcal{B}\left( \mathcal{H}_P,\mathcal{H}_Q \right)\) (bounded operators from \( \mathcal{H}_P \) to \( \mathcal{H}_Q \)) to itself nonlinearly, as follows
\begin{equation}
T(A)= - \Delta^{-1} \Omega + \Delta^{-1}A \omega + \Delta^{-1} A \Omega^{\dag} A\,.
\end{equation}
For any norm on \( \mathcal{B}\left( \mathcal{H}_P,\mathcal{H}_Q \right)\) we have the bound
\[
\|T(A)\|\leq \|\Delta^{-1} \Omega\| + \|\Delta^{-1}A \omega\| + \|\Delta^{-1} A \Omega^{\dag} A\|\,.
\]
For the operator norm, we also have
\begin{multline}
\|T(A)\|\leq \|\Delta^{-1}\|\,\| \Omega\| + \|\Delta^{-1}\|\,\| \omega\|\,\| A\| + \|\Delta^{-1} \|\,\| \Omega\|\,\| A\|^2\\
\leq \epsilon'\left(1+ \|A\|^2\right) + \epsilon \|A\|\,.
\end{multline}
Now, by plotting the functions \( x \) and \( g(x)= \epsilon'(1+x^2)+ \epsilon x \) for \( \epsilon, \epsilon'\geq0 \) and \( \epsilon'\leq (1- \epsilon)/2 \), one readily sees that if \( A \) is in the ball centered at the null operator with radius \( r( \epsilon, \epsilon')= (1- \epsilon)/(2 \epsilon')+ \sqrt{ (1- \epsilon)^2/(2 \epsilon')^2-1} \) then \( T(A) \) also belongs to that ball. Thus, by Schauder's fixed point theorem, \( T \) has a fixed point in the ball. Let us now construct the recurrence
\begin{eqnarray}\label{eq:brecurrence}
B^{(0)}&=& - \Delta^{-1} \Omega\,,\\
B^{(k+1)}&=& T\left[B^{(k)}\right]\nonumber\,.
\end{eqnarray}
Since \( \|B^{(0)}\|\leq \epsilon' \), it will belong to the ball if \( \epsilon'\leq r( \epsilon, \epsilon') \) (which is the generic situation), and the recurrence will converge (we have actually not excluded the possibility that there exists a fixed cycle --- see however Appendix \ref{app:nocycles}).
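In a finite-dimensional setting the recurrence is immediate to implement. The following sketch (Python/NumPy; the blocks are those of the \(\Lambda\) system with illustrative numerical values, not values from the text) iterates it and builds the corresponding Bloch hamiltonian:
\begin{verbatim}
import numpy as np

delta, Oa, Ob, Dval = 0.1, 0.4, 0.3, 10.0
omega = np.diag([-delta/2, delta/2]).astype(complex)   # slow block
Omega = 0.5*np.array([[Oa, Ob]], dtype=complex)        # coupling, H_P -> H_Q
Delta = np.array([[Dval]], dtype=complex)              # fast block
Dinv = np.linalg.inv(Delta)

def T(A):
    # T(A) = -Delta^{-1} Omega + Delta^{-1} A omega + Delta^{-1} A Omega^dag A
    return -Dinv @ Omega + Dinv @ A @ omega + Dinv @ A @ Omega.conj().T @ A

B = -Dinv @ Omega            # B^(0)
for _ in range(6):           # B^(1), ..., B^(6)
    B = T(B)

h_eff = omega + Omega.conj().T @ B
print(B)
print(h_eff)
\end{verbatim}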
Furthermore, writing \( \epsilon'= a \epsilon \), where \( a= \| \Omega\|/\| \omega\| \) is a fixed constant, we can write the embedding equation as perturbative, where \( \Delta^{-1} \) has perturbative weight \(\epsilon\): expand
\begin{equation}\label{eq:adiaelimexp}
B= \sum_{k=1}^\infty B_{(k)}
\end{equation}
and insert in the embedding equation to obtain
\begin{eqnarray}\label{eq:adiaelimexprecurr}
B_{(1)}&=&- \Delta^{-1} \Omega\,,\nonumber\\
B_{(2)}&=& - \Delta^{-2} \Omega \omega\,,\\
B_{(k+1)}&=& \Delta^{-1} B_{(k)} \omega + \Delta^{-1} \sum_{l=1}^{k-1}B_{(k-l)} \Omega^{\dag} B_{(l)}\,,\nonumber
\end{eqnarray}
where in the last line \( k\geq2 \). Notice that, formally,
\[B^{(k)}-\sum_{l=1}^{k+1}B_{(l)}=O\left( \Delta^{-(k+2)}\right)\,.\]
We shall give the name \emph{perturbative adiabatic elimination expansion} to the expansion (\ref{eq:adiaelimexp}) with recurrence (\ref{eq:adiaelimexprecurr}).
Given this expansion to some order \( k \), we shall have the effective hamiltonian to the same order by substituting the approximation for \( B \) in \( h_{ \mathrm{eff}}= \omega+ \Omega^{\dag}B \).
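The perturbative terms can be generated in the same way. A sketch (NumPy, with the same illustrative blocks as in the previous sketch) accumulating the truncated sum and the corresponding effective hamiltonian:
\begin{verbatim}
import numpy as np

delta, Oa, Ob, Dval = 0.1, 0.4, 0.3, 10.0
omega = np.diag([-delta/2, delta/2]).astype(complex)
Omega = 0.5*np.array([[Oa, Ob]], dtype=complex)
Dinv = np.array([[1.0/Dval]], dtype=complex)

Bk = [None, -Dinv @ Omega]                  # B_(1)
for k in range(1, 6):                       # B_(2), ..., B_(6)
    nxt = Dinv @ Bk[k] @ omega
    for l in range(1, k):
        nxt = nxt + Dinv @ Bk[k-l] @ Omega.conj().T @ Bk[l]
    Bk.append(nxt)

B_trunc = sum(Bk[1:])                       # truncated expansion for B
h_eff = omega + Omega.conj().T @ B_trunc
print(B_trunc)
print(h_eff)
\end{verbatim}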
Notice that we have the expansion in a purely algebraic manner; there is no need to impose consistency through the single pole approximation for the resolvent, as in \cite{1751-8121-40-5-011}, or, equivalently, in the Taylor expansion proposed in \cite{Paulisch:2012fk}.
One must bear in mind that this is a particular perturbation expansion, particularly adapted to the problem at hand, that differs from other expansions present in the literature. See appendix \ref{app:seconperturb} for another perturbative expansion, which is adequate even if our expansion parameter is not small.
\section{Example: the \(\Lambda\) system}
We can now identify the constructions of the previous section in the \(\Lambda\) system previously introduced; as one can see
\[ \omega= - \frac{ \delta}{2} \sigma_3\,,\quad \Omega= \frac{1}{2}\left( \Omega_a, \Omega_b\right)\,,\]
and \(\Delta\) is simply a number.
To fourth order we have
\begin{eqnarray}
B_{(1)}&=&- \frac{1}{ \Delta} \Omega\,,\nonumber\\
B_{(2)}&=& \frac{ \delta}{2 \Delta^2} \Omega \sigma_3\,,\\
B_{(3)}&=& \frac{1}{ \Delta^3}\left( \frac{ \delta}{2}+ \Omega \Omega^{\dag}\right) \Omega\,,\nonumber\\
B_{(4)}&=& - \frac{ \delta}{2 \Delta^4}\left( \Omega \sigma_3 \Omega^{\dag}\right) \Omega - \frac{ \delta}{2 \Delta^4}\left( 2 \Omega \Omega^{\dag}+ \frac{ \delta}{2}\right) \Omega \sigma_3\,.\nonumber
\end{eqnarray}
Notice that at any order \( B_{(k)} \) will generically have a term proportional to the operator \(\Omega\) and another proportional to \(\Omega \sigma_3\), with no other possibility.
To the same order we obtain
\begin{eqnarray}
h_{ \mathrm{eff}} &=& - \frac{ \delta}{2} \sigma_3 \nonumber\\
&&- \frac{1}{ \Delta} \left( 1- \frac{ \delta}{2 \Delta^2}- \frac{ \Omega \Omega^{\dag}}{ \Delta^2}+ \frac{ \delta}{2 \Delta^3} \Omega \sigma_3 \Omega^{\dag}\right) \Omega^{\dag} \Omega\nonumber\\
&& + \frac{ \delta}{2 \Delta^2} \left(1- \frac{ \delta}{2 \Delta^2}+ \frac{ \Omega \Omega^{\dag}}{ \Delta^2}\right) \Omega^{\dag} \Omega \sigma_3+ \cdots\nonumber
\end{eqnarray}
This expression has been ordered in terms of the operators \( \sigma_3 \), \( \Omega^{\dag} \Omega \) and \( \Omega^{\dag} \Omega \sigma_3 \), with numerical coefficients in front; again, these are the only possibilities. The non hermitian terms are due to \( \Omega^{\dag} \Omega \sigma_3 \).
Notice that some terms with physical significance are spread over more than one coefficient; for instance, the Stark shift has contributions from both \( \mathrm{Tr}\left[ \Omega^{\dag} \Omega\right] \) and \( \mathrm{Tr}\left[ \Omega^{\dag} \Omega \sigma_3\right] \).
\begin{figure}
\caption{Evolution of the population of the ground and excited states, with initial state \( (1,0,0)^T \), under a) the exact hamiltonian (continuous black line), b) zeroth order effective hamiltonian (dashed blue line) and c) fourth iteration of \( T \) (dotted red line). The parameters are \( \delta= -0.0175 \Delta\), \( \Omega_a=0.4 \Delta \), \( \Omega_b=0.3 \Delta \), for direct comparison with \cite{Paulisch:2012fk}.}\label{fig:lambdaex}
\end{figure}
Numerically it will be faster to use the recurrence relation. In fig. \ref{fig:lambdaex} we depict the evolution of the populations in the slow manifold for exact evolution, effective hamiltonian with \( B^{(0)} \), and effective hamiltonian constructed with \( B^{(4)} \). At the scale of the figure, the graphs constructed with \( B^{(2)} \) and \( B^{(3)} \) would be indistinguishable from the fourth iteration for those values of the parameters. Notice that the hamiltonian \( h_{ \mathrm{eff}}^{(0)}= \omega+ \Omega^{\dag} B^{(0)} \) is the usual adiabatically eliminated hamiltonian, and that \( h_{ \mathrm{eff}}^{(4)}\) is much more successful in tracking the envelope of the evolution. For the numerical values used, the non conservation of probability due to the nonhermiticity of \( h_{ \mathrm{eff}}^{(4)}\) is of order \( 5\% \).
\section{Hermitian approximations: a general theory}\label{sec:hermapprox}
\subsection{Exact effective hermitian hamiltonian}
As stated above, the nonhermiticity of Bloch's hamiltonian \( h_{ \mathrm{eff}}= \omega+ \Omega^{\dag} B \) is due to the part of the norm carried by \( \gamma=B \alpha\in \mathcal{H}_Q \). The norm conserved under the evolution with the full hamiltonian in \( \mathcal{H} \) is simply
\begin{multline}
\left\langle \alpha, \alpha\right\rangle_{ \mathcal{H}_P}+\left\langle \gamma, \gamma\right\rangle_{ \mathcal{H}_Q}=\\
\left\langle \alpha, \alpha\right\rangle_{ \mathcal{H}_P}+\left\langle B \alpha, B \alpha\right\rangle_{ \mathcal{H}_Q}=\\
\left\langle \alpha, \alpha\right\rangle_{ \mathcal{H}_P}+\left\langle \alpha, B^{\dag}B \alpha\right\rangle_{ \mathcal{H}_P}=\\
\left\langle \alpha,\left(1+ B^{\dag}B\right) \alpha\right\rangle_{ \mathcal{H}_P}\,.
\end{multline}
By construction, \( 1+ B^{\dag}B \) is positive, and one can define its square root
\[S_B= \sqrt{1+ B^{\dag}B }\,.\]
Then for any constant unitary \( V \) from \( \mathcal{H}_P \) to itself we have that the norm of
\[ \tilde{\alpha}_V= V S_B \alpha\]
is conserved if \(\alpha\) evolves under the Bloch hamiltonian \( h_{ \mathrm{eff}} \). I.e.
\[i\partial_t\tilde{ \alpha}_V=V S_B i\partial_t \alpha= V S_B h_{ \mathrm{eff}} S_B^{-1}V^{-1} \tilde{ \alpha}_V\,,\]
from which one defines the family of hamiltonians
\[
h_V= V S_B h_{ \mathrm{eff}} S_B^{-1}V^{-1} \,,
\]
which are indeed hermitian. This last statement can be proven in several ways. Firstly, by construction: the flow under \( h_V \) preserves the norm and the inner products. It is a continuous flow, so by Wigner's theorem it is a unitary flow. The hamiltonians \( h_V \) are the constant generators of the unitary flow, and thus hermitian. In finite dimensions we can also assert self-adjointness.
A second proof is explicit. Observe that
\[ S_B h_{ \mathrm{eff}} S_B^{-1}= S_B^{-1}\left(1+B^{\dag}B\right) h_{ \mathrm{eff}} S_B^{-1}\,.\]
Since \( S_B \) is hermitian, we will have proven hermiticity of \( h_V \) if we prove hermiticity of \( \left(1+B^{\dag}B\right) h_{ \mathrm{eff}} \), that is, of
\[\left(1+B^{\dag}B\right)\left( \omega+ \Omega^{\dag} B\right)\,.\]
And if \( B \) is a solution of eqn. (\ref{eq:bembedding}) we indeed have that
\begin{eqnarray}\label{eq:computehermit}
\left(1+B^{\dag}B\right)\left( \omega+ \Omega^{\dag} B\right)&=& \omega + \Omega^{\dag} B+ B^{\dag}\left(B \omega+ B \Omega^{\dag}B\right)\nonumber\\
&=& \omega + \Omega^{\dag} B+ B^{\dag}\left( \Delta B+ \Omega\right)\\
&=& \omega + \Omega^{\dag} B+ B^{\dag} \Omega + B^{\dag} \Delta B\nonumber\,,
\end{eqnarray}
which, given the hermiticity of \(\omega\) and \(\Delta\), is explicitly hermitian. Notice that we have used eqn. (\ref{eq:bembedding}) in the second step.
\subsection{Approximate effective hermitian hamiltonians}
This proof of hermiticity relies on \( B \) being an exact solution of (\ref{eq:bembedding}); if \( B_{\mathrm{a}} \) were an approximation to the solution to a given order, then
\begin{equation}
\sqrt{1+ B_{\mathrm{a}}^{\dag}B_{\mathrm{a}} } \left( \omega + \Omega^{\dag} B_{\mathrm{a}}\right) \frac{1}{\sqrt{1+ B_{\mathrm{a}}^{\dag}B_{\mathrm{a}} } }\nonumber
\end{equation}
will be approximately hermitian, to the same order. However, the proof itself suggests that
\begin{equation}\label{eq:approxhermitian}
h_V\left[B_{ \mathrm{a}}\right]= \frac{1}{\sqrt{1+ B_{\mathrm{a}}^{\dag}B_{\mathrm{a}}}} \left( \omega + \Omega^{\dag} B_{\mathrm{a}}+ B_{\mathrm{a}}^{\dag} \Omega + B_{\mathrm{a}}^{\dag} \Delta B_{\mathrm{a}} \right)\frac{1}{\sqrt{1+ B_{\mathrm{a}}^{\dag}B_{\mathrm{a}}}}
\end{equation}
is always a hermitian approximation.
In particular, by using the successive approximations \( B^{(k)} \) introduced in (\ref{eq:brecurrence}) one can define successive hermitian approximations of the full hermitian effective hamiltonian as
\[h_V^{(k)}=h_V\left[B^{(k)}\right]\,.\]
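A sketch of this hermitisation step (Python/NumPy/SciPy; here the zeroth order \( B^{(0)}=-\Delta^{-1}\Omega \) of the illustrative \(\Lambda\)-system blocks plays the role of \( B_{\mathrm{a}} \), but any approximate solution could be fed in):
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm, inv

def hermitian_effective(omega, Omega, Delta, B_a):
    # h_V[B_a] with V = 1:
    # S^{-1} (omega + Omega^dag B_a + B_a^dag Omega + B_a^dag Delta B_a) S^{-1}
    S = sqrtm(np.eye(omega.shape[0]) + B_a.conj().T @ B_a)
    Sinv = inv(S)
    core = (omega + Omega.conj().T @ B_a + B_a.conj().T @ Omega
            + B_a.conj().T @ Delta @ B_a)
    return Sinv @ core @ Sinv

delta, Oa, Ob, Dval = 0.1, 0.4, 0.3, 10.0
omega = np.diag([-delta/2, delta/2]).astype(complex)
Omega = 0.5*np.array([[Oa, Ob]], dtype=complex)
Delta = np.array([[Dval]], dtype=complex)
B0 = -inv(Delta) @ Omega
hV = hermitian_effective(omega, Omega, Delta, B0)
print(np.max(np.abs(hV - hV.conj().T)))   # hermitian to machine precision
\end{verbatim}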
\begin{figure}
\caption{Evolution of the population of the ground and excited states, with initial state \( (1,0,0)^T \), under a) the exact hamiltonian (continuous black line) and b) the hermitian effective hamiltonian \( h_V \).}\label{fig:lambdaherm}
\end{figure}
As an example, in figure \ref{fig:lambdaherm} we also depict the evolution with the hermitian hamiltonian, again for the \(\Lambda\) system. Observe that the central maximum is at one, and that the norm is conserved.
\subsection{Effective triangularization/diagonalisation}
In the previous subsections we have emphasised the effective hamiltonian, be it exact or approximate, in the slow sector. However, the construction provides us also with information about the fast sector.
Look back to subsection \ref{ssec:similarity}. There we defined
\[H_X^P= P X^{-1} H X P\]
as the effective hamiltonian in the slow space. We could similarly regard
\[H_X^Q= Q X^{-1} H X Q\]
as the effective hamiltonian in the fast space. This is correct in that it provides us with the eigenvalues corresponding to the fast space: since we impose the condition
\(Q X^{-1} H X P=0\), the similarity transformed \( X^{-1} H X \) is block upper triangular, and, since the eigenvalues of a block upper triangular matrix are the eigenvalues of its diagonal blocks, the fast eigenvalues are indeed those of \( H_X^Q \).
Using \( X_B \), with \( B \) an exact solution of the embedding condition (\ref{eq:bembedding}), we have
\[X_B^{-1} \begin{pmatrix}
\omega& \Omega^{\dag}\\ \Omega & \Delta
\end{pmatrix} X_B= \begin{pmatrix}
\omega+ \Omega^{\dag}B & \Omega^{\dag}\\0& \Delta- B \Omega^{\dag}
\end{pmatrix}\,,\]
so that \( \Delta- B \Omega^{\dag} \) is the effective hamiltonian in the fast space.
We now desire to have a hermitian effective hamiltonian in the fast sector. In order to do that, let us first notice that,
similarly to the computation in (\ref{eq:computehermit}), one can prove that
\[\left(1+B B^{\dag}\right)\left( \Delta- \Omega B^{\dag}\right)= \Delta- \Omega B^{\dag}- B \Omega^{\dag}+ B \omega B^{\dag}\,,\]
which is explicitly hermitian, as long as \( B \) is a solution of eqn. (\ref{eq:bembedding}). Define
\[\tilde{S}_B= \sqrt{1+ B B^{\dag}}\]
acting on \( \mathcal{H}_Q \) (for later computations it will prove useful to bear in mind that \( \tilde{S}_B^2 B= B S_B^2 \), and thus \( \tilde{S}_B B S_B^{-1}= \tilde{S}_B^{-1} B S_B\)). We have proven that \( H \) is similar to a block upper triangular matrix with hermitian diagonal blocks and bounded off-diagonal block. Explicitly: the computations hitherto signify
\begin{multline}\label{eq:vssimilarity}
\begin{pmatrix}
V_P S_B&0\\-V_Q \tilde{S}_B B& V_Q \tilde{S}_B
\end{pmatrix} \begin{pmatrix}
\omega& \Omega^{\dag}\\ \Omega& \Delta
\end{pmatrix}
\begin{pmatrix}
S_B^{-1}V_P^{\dag}&0\\B S_B^{-1}V_P^{\dag}& \tilde{S}_B^{-1}V_Q^{\dag}
\end{pmatrix}
=\\
\begin{pmatrix}
V_P h_c^\alpha V_P^{\dag} & V_P S_B \Omega^{\dag}\tilde{S}_B^{-1}V_Q^{\dag}\\
0& V_Q h_c^\gamma V_Q^{\dag}
\end{pmatrix}\,,
\end{multline}
where
\begin{eqnarray}
h_c^\alpha&=& \frac{1}{\sqrt{1+ B^{\dag}B}}\left[\omega + \Omega^{\dag} B+ B^{\dag} \Omega + B^{\dag} \Delta B\right] \frac{1}{\sqrt{1+ B^{\dag}B}}\,,\nonumber\\
h_c^\gamma&=& \frac{1}{\sqrt{1+ BB^{\dag}}}\left[\Delta- \Omega B^{\dag}- B \Omega^{\dag}+ B \omega B^{\dag}\right] \frac{1}{\sqrt{1+ BB^{\dag}}}\,.\nonumber
\end{eqnarray}
Notice that
\[\begin{pmatrix}
S_B^{-1}V_P^{\dag}&0\\B S_B^{-1}V_P^{\dag}& \tilde{S}_B^{-1}V_Q^{\dag}
\end{pmatrix}\]
is the inverse of
\[
\begin{pmatrix}
V_P S_B&0\\-V_Q \tilde{S}_B B& V_Q \tilde{S}_B
\end{pmatrix}
\]
with \( V_P \) and \( V_Q \) unitaries acting on \( \mathcal{H}_P \) and \( \mathcal{H}_Q \) respectively, so that the left hand side of (\ref{eq:vssimilarity}) is indeed a similarity transformation.
One of the advantages of this approach to the problem is that we have approximations for \( h_c^\alpha \) and \( h_c^\gamma \) which are explicitly hermitian for each approximation of \( B \). So we have a block upper triangular matrix that is an approximation to the exact one for each approximation of \( B \) solution of the embedding equation (\ref{eq:bembedding}), with hermitian diagonal blocks and bounded off-diagonal block.
In finite dimensions, Roth's theorem \cite{MR0047598} gives us a condition for this block upper triangular matrix to be in turn similar to the block diagonal matrix with the same diagonal blocks. For infinite dimensions, if we can guarantee self-adjointness and not just hermiticity of the diagonal blocks, we can make use of the analogous result by Rosenblum \cite{MR0233214}.
The simple result, from our point of view, is that if for a given \( B \) the spectra of \( h_c^\alpha(B) \) and \( h_c^\gamma(B) \) are disjoint, then a similarity transformation exists that transforms the block upper triangular matrix into the corresponding block diagonal one. This similarity transform is given by \( Y: \mathcal{H}_Q \to \mathcal{H}_P\), solution of the Sylvester equation
\[h_c^\alpha Y-Y h_c^\gamma= S_B \Omega^{\dag} \tilde{S}_B^{-1}\]
through
\[ \begin{pmatrix}
1&Y\\0&1
\end{pmatrix}
\begin{pmatrix}
h_c^\alpha& S_B \Omega^{\dag} \tilde{S}_B^{-1}\\0& h_c^\gamma
\end{pmatrix}
\begin{pmatrix}
1&-Y\\0&1
\end{pmatrix}=\begin{pmatrix}
h_c^\alpha& 0\\0& h_c^\gamma
\end{pmatrix}\,.\]
The crucial point for us at this juncture is to note that the existence of the diagonalising transformation (which must in fact be unitary, and not just a similarity) is guaranteed if the spectra of \( h_c^\alpha(B) \) and \( h_c^\gamma(B) \) are disjoint; this is intuitively the case if indeed \(\omega\) and \(\Omega\) are much smaller than \(\Delta\) in the sense above.
Actually, there is a much faster explicit route: notice that
\begin{multline}
\begin{pmatrix}
1& B^{\dag}\\ -B&1
\end{pmatrix}
\begin{pmatrix}
\omega& \Omega^{\dag}\\ \Omega& \Delta
\end{pmatrix}
\begin{pmatrix}
1& -B^{\dag}\\ B&1
\end{pmatrix}=\\
\begin{pmatrix}
\omega+ B^{\dag} \Omega+ \Omega^{\dag}B+ B^{\dag} \Delta B&0\\0& \Delta- \Omega B^{\dag}-B \Omega^{\dag}+ B \omega B^{\dag}
\end{pmatrix}\,,\end{multline}
if \( B \) is a solution of eqn. (\ref{eq:bembedding}). Define
\begin{eqnarray}\label{eq:finalx}
X= \begin{pmatrix}
1& -B^{\dag}\\ B&1
\end{pmatrix} \begin{pmatrix}
S_B^{-1}&0\\0& \tilde{S}_B^{-1}
\end{pmatrix}\,.
\end{eqnarray}
Then
\[X^{-1}= \begin{pmatrix}
S_B^{-1}&0\\0& \tilde{S}_B^{-1}
\end{pmatrix}\begin{pmatrix}
1& B^{\dag}\\ -B&1
\end{pmatrix}\,,\]
and we have proved that, for \( B \) solution of eqn. (\ref{eq:bembedding}),
\[X^{-1} \begin{pmatrix}
\omega& \Omega^{\dag}\\ \Omega& \Delta
\end{pmatrix} X= \begin{pmatrix}
h_c^\alpha& 0\\0& h_c^\gamma
\end{pmatrix}\,.\]
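This block diagonalisation is easy to check numerically. A sketch (Python/NumPy/SciPy, with the illustrative \(\Lambda\)-system blocks; \( B \) is obtained by iterating the map \( T \) to near convergence, so the off-diagonal blocks of \( X^{-1}HX \) vanish only up to that accuracy):
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm, inv, block_diag

delta, Oa, Ob, Dval = 0.1, 0.4, 0.3, 10.0
omega = np.diag([-delta/2, delta/2]).astype(complex)
Omega = 0.5*np.array([[Oa, Ob]], dtype=complex)
Delta = np.array([[Dval]], dtype=complex)
H = np.block([[omega, Omega.conj().T], [Omega, Delta]])

Dinv = inv(Delta)
B = -Dinv @ Omega
for _ in range(50):   # iterate B <- T(B)
    B = -Dinv @ Omega + Dinv @ B @ omega + Dinv @ B @ Omega.conj().T @ B

S = sqrtm(np.eye(2) + B.conj().T @ B)    # acts on H_P
St = sqrtm(np.eye(1) + B @ B.conj().T)   # acts on H_Q
X = np.block([[np.eye(2), -B.conj().T], [B, np.eye(1)]]) @ block_diag(inv(S), inv(St))

Hd = inv(X) @ H @ X
print(np.max(np.abs(X.conj().T @ X - np.eye(3))))           # X is unitary
print(np.abs(Hd[:2, 2:]).max(), np.abs(Hd[2:, :2]).max())   # off-diagonal blocks ~ 0
\end{verbatim}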
There is a subtlety to be pointed out for the sake of the unwary: the fact that \( X \) constructed as in definition (\ref{eq:finalx}) gives the same diagonalization as
\[X_Y=
\begin{pmatrix}
1&0\\B&1
\end{pmatrix}
\begin{pmatrix}
S_B^{-1}&0\\0& \tilde{S}_B^{-1}
\end{pmatrix}
\begin{pmatrix}
1&-Y\\0&1
\end{pmatrix}\]
does not mean that they are the same. Rather, it means that \( X X_Y^{-1} \) commutes with \( H \).
The explicit expression for \( X \) given here was first presented in \cite{Suzuki01071982}. The alternative expression
\[X=\exp\left[ \mathrm{arctanh}\left(B-B^{\dag}\right)\right]\] has been computed independently by several authors (in addition to \cite{Suzuki01071982,Suzuki01081983}, see also \cite{Bravyi20112793}, which is more directly adapted to the presentation of Schrieffer and Wolff \cite{PhysRev.149.491}).
In the context of quantum optics, the matrix \( X \) provides us with the dressing of the subspaces:
\[XPX^{-1}=P_X\]
is the projector onto the low lying sector. In other words, \( X \) intertwines the projector onto the unperturbed low lying states and the projector onto the fully non-perturbative low lying states, \(XP=P_X X \).
Notice that given any \( B \) (solution of eqn. (\ref{eq:bembedding}) or not), \( X \) is unitary, and it fully determines a projector \( P_X \). This means that if we have an approximate solution of eqn. (\ref{eq:bembedding}) we have an approximation to the projector onto the space of low lying states. Similarly, we will also have an approximation for the effective hamiltonian, \( h_c^\alpha \), hermitian by construction.
In the case of interest, where two subspaces are tentatively identified as corresponding to widely separated time scales, this approximation can be given by truncating at a given order the perturbative adiabatic elimination expansion, or, alternatively, by the recurrence relation.
\section{Conclusions}
We have identified the adiabatic elimination approximation as the first member of sets of systematic approximations, built algebraically in a well defined way. In fact, we have made explicit the connection between adiabatic elimination and well established perturbation methods. As a new result we have the proof of convergence of the procedure, and of existence of the diagonalisation matrix. We have also made explicit the issues with normalisation and hermiticity, and provided explicit solutions. We have shown by example the simplicity and effectiveness of the expansions in the \(\Lambda\) system.
In the \(\Lambda\) system itself we have produced explicit formulae for the effective hamiltonian; in the text itself we have mentioned that only two types of terms ever appear (\( \Omega^{\dag} \Omega \) and \( \Omega^{\dag} \Omega \sigma_3\) for the non-hermitian effective hamiltonian). We expect that this kind of result will generally be available in other systems, and we suggest that one of the advantages of the method put forward here is that it will facilitate the identification of such general structures.
We have not made explicit the connection between our algebraic procedures and the resolvent techniques; we leave this for future work. We also postpone the application of these expansions to improve on the Rotating Wave Approximation, which is work in progress.
We have also not addressed the corresponding analysis for open quantum systems (for density matrices in closed open systems, see appendix \ref{app:mixed}); we hope our results will also be helpful there.
\acknowledgments
It is a pleasure to acknowledge support by the Basque Government (IT-559-10), the Spanish Ministry of Science and Technology under Grant No. FPA2009-10612, the Spanish Consolider-Ingenio Program CPAN (Grant No. CSD2007-00042), and the UPV/EHU UFI 11/55. Thanks are also extended for fruitful conversations with G\'eza T\'oth, Manuel A. Valle Basagoiti, and Enrique Solano and his QUTIS group.
\appendix
\section{Bloch wave operator and Bloch's equation}\label{app:blochwave}
For completeness, we shall here make explicit the relation between our operator \( B \) and the Bloch wave operator in the literature. The Bloch wave operator is frequently denoted as \( \Omega \) or \( \Omega(E) \) (depending on whether one means the reduced or the energy dependent wave operator), as, for example, do Viennot \cite{Viennot:2013fk} or Brandow \cite{RevModPhys.39.771}; Killingbeck and Jolicard \cite{0305-4470-36-20-201} denote it by \( W \), Bravyi et al. \cite{Bravyi20112793} by \( \mathcal{U} \). To avoid confusion with our use of the symbol, we shall make reference to the Bloch wave operator as \( \mathcal{B} \) and \( \mathcal{B}(E) \) respectively.
The Bloch wave operator fulfils the condition \( \mathcal{B}^2= \mathcal{B} \), and is defined such that \( P H \mathcal{B} P\psi=EP\psi\) for \( E \in \sigma(H)\) and \( P \) a projector. In fact, for \( \psi_0=P\psi_0\in \mathcal{H}_P \) in the eigenspace of the projector, it maps it to the full space, \( \mathcal{B}\psi_0\in \mathcal{H} \), such that the relevant eigenspace of \( H \) is invariant. That is,
\[ H \mathcal{B}= \mathcal{B} H \mathcal{B}\,,\]
which is called the Bloch equation. To see the equivalence of this equation to eqn. (\ref{eq:bembedding}), observe that \( \mathcal{B}^2= \mathcal{B} \) implies
\[H \mathcal{B}= \mathcal{B} H \mathcal{B}\Leftrightarrow \left[ H, \mathcal{B}\right] \mathcal{B}=0\,.\]
Let \( P= \begin{pmatrix}
1&0\\0&0
\end{pmatrix} \). Notice that the assignment
\[ \mathcal{B}= \begin{pmatrix}
1&0\\ B&0
\end{pmatrix}\]
satisfies 1) \( \mathcal{B}^2= \mathcal{B} \), 2) \( \mathcal{B}P= \mathcal{B} \), and inserting it into the Bloch equation in the right hand side form we obtain
\begin{multline}
\left[ H, \mathcal{B}\right]\mathcal{B}= \left[ \begin{pmatrix}
\omega& \Omega^{\dag}\\ \Omega& \Delta
\end{pmatrix},\begin{pmatrix}
1&0\\ B&0
\end{pmatrix}\right]\begin{pmatrix}
1&0\\ B&0
\end{pmatrix}\\
=\begin{pmatrix}
\Omega^{\dag}B&- \Omega^{\dag}\\ \Delta B- B \omega + \Omega&- B \Omega^{\dag}
\end{pmatrix}\begin{pmatrix}
1&0\\ B&0
\end{pmatrix}\\
= \begin{pmatrix}
0&0\\ \Delta B - B \omega + \Omega - B \Omega^{\dag} B&0
\end{pmatrix}\,.
\end{multline}
\section{Gauge invariance}
A recurrent topic in the literature about extensions of the adiabatic elimination approximation is the dependence of the effective hamiltonian on the initial scheme. We shall now consider the change in the Bloch reduced wave operator \( B \) (also called the decoupling operator or correlation operator) due to a class of changes of picture. Namely, consider the family of pictures given by unitary transformations that do not mix the slow and fast variables, i.e.\ unitary transformations \( \tilde{\psi}=V\psi \) of the form
\[V(t)\psi= \begin{pmatrix}
v_{ \alpha}(t)&0\\0& v_{ \gamma}(t)
\end{pmatrix} \begin{pmatrix}
\alpha\\ \gamma
\end{pmatrix}\,,\]
with unitary blocks \( v_{ \alpha}(t) \) and \( v_{ \gamma}(t) \),
\[ v_{ \alpha}(t)^{\dag}v_{ \alpha}(t)=1\,,\qquad v_{ \gamma}(t)^{\dag}v_{ \gamma}(t)=1\,.\]
The total hamiltonian in these pictures is
\[
\frac{1}{\hbar}\tilde{H}= \begin{pmatrix}
h_{ \alpha}+ v_{ \alpha} \omega v_{ \alpha} ^{\dag} &v_{ \alpha} \Omega^{\dag} v_{ \gamma} ^{\dag} \\
v_{ \gamma} \Omega v_{ \alpha} ^{\dag}& h_{ \gamma}+ v_{ \gamma} \Delta v_{ \gamma}^{\dag}
\end{pmatrix}\,,\]
where obviously
\[h_{ \alpha}(t)= \left[i\partial_ t v_{ \alpha}(t)\right] v_{ \alpha}^{\dag}(t)\,,\quad h_{ \gamma}(t)= \left[i\partial_ t v_{ \gamma}(t)\right] v_{ \gamma}^{\dag}(t)\,.\]
Let \( \tilde{B} \) be the corresponding Bloch reduced wave operator. After some algebra, the equation which determines it can be written as
\begin{multline}\label{eq:gaugetranseq}
\Delta \left( v_{ \gamma}^{\dag}\tilde{B}v_{ \alpha}\right) - \left( v_{ \gamma}^{\dag}\tilde{B}v_{ \alpha}\right) \omega + \Omega - \left( v_{ \gamma}^{\dag}\tilde{B}v_{ \alpha}\right) \Omega^{\dag} \left( v_{ \gamma}^{\dag}\tilde{B}v_{ \alpha}\right) =\\
\left( v_{ \gamma}^{\dag}\tilde{B}v_{ \alpha}\right) \left( v_{ \alpha}^{\dag} h_{ \alpha} v_{ \alpha}\right)- \left( v_{ \gamma}^{\dag} h_{ \gamma} v_{ \gamma}\right) \left( v_{ \gamma}^{\dag}\tilde{B}v_{ \alpha}\right) \,.
\end{multline}
If the right hand side were zero, then \( \left( v_{ \gamma}^{\dag}\tilde{B}v_{ \alpha}\right) \) would be a solution of the original equation, so it would equal the original \( B \). The new effective hamiltonian, under the same hypothesis, is simply
\[\tilde{h}_{ \mathrm{eff}}= h_{ \alpha}+ v_{ \alpha} h_{ \mathrm{eff}} v_{ \alpha}^{\dag}\,,\]
while the normalisation operator reads
\[\tilde{S}_B= \sqrt{1+ v_{ \alpha} B B^{\dag} v_{ \alpha}^{\dag}}= v_{ \alpha} S_B v_{ \alpha}^{\dag}\,.\]
We do not have a full characterisation of the block diagonal unitaries for which the right hand side of eqn. (\ref{eq:gaugetranseq}) vanishes; we do know, however, of a case that has puzzled some researchers in the past (see the discussion in section 3 of \cite{1751-8121-40-5-011}). Namely, when
\[ V(t)= \exp[-i \varphi(t)] \times \begin{pmatrix}
1_{ \alpha}&0\\0&1_{ \gamma}
\end{pmatrix}\,,\]
since it follows that
\[h_{ \alpha} = \dot\varphi(t) P\quad \mathrm{and}\quad h_{ \gamma} = \dot\varphi(t) Q\,,\]
from which \( \tilde{B}=B \) and
\[\tilde{h}_{ \mathrm{eff}}= \dot\varphi(t) 1_{ \alpha}+ h_{ \mathrm{eff}}\,,\]
while \( \tilde{S}_B=S_B \).
Summarizing, under a gauge transformation (time dependent phase transformation) the Bloch reduced wave operator is invariant, while the effective hamiltonian is covariant as expected (shifted by \( \dot\varphi(t) \)).
Similarly, for a constant change of basis that respects the structure \( \mathcal{H}= \mathcal{H}_{ P}\oplus \mathcal{H}_{ Q} \), we have that \( B \), \( h_{ \mathrm{eff}} \) and \( S_B \) change rigidly.
\section{Another perturbative expansion}\label{app:seconperturb}
Consider now the situation in which the spectrum of \(\omega\) and the spectrum of \(\Delta\) are disjoint and such that the \(\Omega\) terms are perturbative; to be explicit, let us introduce an expansion parameter \(\epsilon\) by the substitution \(\Omega\to \epsilon \Omega\), and consider the reduced Bloch equation
\[ \Delta B- B \omega= - \epsilon \Omega + \epsilon B \Omega^{\dag} B\,.\]
\( B \) will be represented by a perturbation expansion of the form
\[B=\sum_{k=0}^\infty \epsilon^{2k+1}b_k\,.\]
Thus we rewrite Bloch's equation as the infinite set
\begin{eqnarray}
\Delta b_0- b_0 \omega&=& - \Omega\,,\nonumber\\
\Delta b_{k+1}- b_{k+1} \omega&=& \sum_{l=0}^k b_{k-l} \Omega^{\dag}b_l\,,\nonumber
\end{eqnarray}
for \( k\geq0 \). Each of the equations of this recurrence is a Sylvester equation, and (with some caveats in the infinite dimensional case) the solution is unique if \( \sigma( \Delta)\cap \sigma( \omega)=\emptyset \) \cite{Bhatia01011997}.
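Each step of this recurrence is a standard Sylvester solve. A sketch (SciPy's \texttt{solve\_sylvester}, with the illustrative \(\Lambda\)-system blocks, for which \( \sigma(\Delta)\cap\sigma(\omega)=\emptyset \); \(\epsilon\) is set to one at the end):
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_sylvester

delta, Oa, Ob, Dval = 0.1, 0.4, 0.3, 10.0
omega = np.diag([-delta/2, delta/2])
Omega = 0.5*np.array([[Oa, Ob]])
Delta = np.array([[Dval]])

# Delta b - b omega = rhs   is   Delta b + b (-omega) = rhs
b = [solve_sylvester(Delta, -omega, -Omega)]              # b_0
for k in range(3):                                        # b_1, b_2, b_3
    rhs = sum(b[k-l] @ Omega.conj().T @ b[l] for l in range(k+1))
    b.append(solve_sylvester(Delta, -omega, rhs))

B = sum(b)    # truncation of B = sum_k eps^(2k+1) b_k at eps = 1
print(B)
\end{verbatim}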
\begin{figure}
\caption{Evolution of the population of the ground and excited states, with initial state \( (1,0,0)^T \), under a) the exact hamiltonian (continuous black line) and b) the effective hamiltonian obtained from the expansion of this appendix.}\label{fig:sylvexpansion}
\end{figure}
In figure \ref{fig:sylvexpansion} we show that this expansion is also applicable to the numerical example used in the main text, with similar results.
This method would be preferable if \(\Delta\) were not so different from \( \pm \delta/2 \); if there is the wide separation between \(\omega\) and \(\Delta\) asked for in the text, this method provides a resummation of the perturbative adiabatic elimination expansion.
\section{No two cycle}\label{app:nocycles}
There is an appealing argument to discard the possibility of a two cycle for the \( T \) transformation. Assume that \( A_1 \) and \( A_2 \) are related by a two-cycle, that is
\[T(A_1)=A_2\,,\quad T(A_2)=A_1\,.\]
Then let
\[A_+=A_1+A_2\,,\quad A_-=A_1-A_2\,,\]
which must obey
\begin{eqnarray}
\Delta A_+- A_+ \omega&=& - 2 \Omega + \frac{1}{2} A_+ \Omega^{\dag} A_++ \frac{1}{2} A_- \Omega^{\dag} A_-\,,\nonumber\\
\left( \Delta+ \frac{1}{2}A_+ \Omega^{\dag}\right) A_-&=&- A_- \left( \omega + \frac{1}{2} \Omega^{\dag}A_+\right)\,.
\end{eqnarray}
The second expression has the form of a Sylvester equation. Given the conditions we have stated that guarantee the existence of a fixed point, the only solution for \( A_- \) in the second expression, taking \( A_+ \) as fixed, would be \( A_-=0 \), whence \( A_1=A_2 \), so there is no two cycle.
\section{Mixed states}\label{app:mixed}
The similarity transformation approach is very useful in that it has direct application to mixed states. As pointed out in the main text, given an operator \( A \) acting on the full Hilbert space we compute an effective operator acting on the slow, active, or model space as \( A_{ \mathrm{eff}} =PX^{-1}AXP\). Applying this to the density matrix we would obtain
\[
\tilde{\rho}_{ \mathrm{eff}}= P X^{-1} \rho X P\,.
\]
This is not necessarily properly normalised, however. It makes sense to define, therefore,
\[ \rho_{ \mathrm{eff}}= \frac{1}{ \mathrm{Tr}\left[X P X^{-1} \rho(0) \right]}P X^{-1} \rho X P\,.\]
Notice that the normalization factor is the initial population of the low lying space in the unitary case.
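As an illustration of this normalisation (a sketch only; a randomly generated unitary stands in for the dressing transformation \( X \), which in practice would be constructed as in the main text):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def rho_eff(rho, rho0, X, P):
    # rho_eff = P X^{-1} rho X P / Tr[X P X^{-1} rho(0)]
    Xinv = np.linalg.inv(X)
    return (P @ Xinv @ rho @ X @ P) / np.trace(X @ P @ Xinv @ rho0)

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j*rng.normal(size=(3, 3))
X = expm(1j*(A + A.conj().T))                     # generic unitary stand-in
P = np.diag([1.0, 1.0, 0.0]).astype(complex)      # projector onto the slow block
psi0 = np.array([1, 0, 0], dtype=complex)
rho0 = np.outer(psi0, psi0.conj())

print(np.trace(rho_eff(rho0, rho0, X, P)))        # equals 1 at t = 0 by construction
\end{verbatim}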
If the evolution of the total system is hamiltonian and the decoupling equation (\ref{eq:decoupling}) holds, then the effective density matrix obeys the equation
\[
i\hbar \dot \rho_{ \mathrm{eff}}= \left[ H_{ \mathrm{eff}},\rho_{ \mathrm{eff}}\right]+ \frac{1}{ \mathrm{Tr}\left[X P X^{-1} \rho(0) \right]}P X^{-1}HX Q X^{-1} \rho X P\,.\]
As is only to be expected, the second term is zero if the decoupling equation holds and \( X \) is unitary.
An open question, which we hope to address in future work, is how this formalism ties in with the calculation of the effective evolution for reduced open quantum systems performed by Reiter and S\o rensen \cite{Reiter:2012fk}. The proposal of extending the Schrieffer--Wolff approach to superoperators \cite{PhysRevA.86.012126} will undoubtedly prove relevant in this respect.
\input{AdElimExp.bbl}
\end{document}
|